The new Guidance in AQUATOX Setup and Application provides a quick-start guide introducing the major model features, as well as a cookbook-style reference for basic model setup, calibration, and validation.
ASD FieldSpec Calibration Setup and Techniques
NASA Technical Reports Server (NTRS)
Olive, Dan
2001-01-01
This paper describes the Analytical Spectral Devices (ASD) FieldSpec calibration setup and techniques. The topics include: 1) ASD FieldSpec FR Spectroradiometer; 2) Components of Calibration; 3) Equipment List; 4) Spectral Setup; 5) Spectral Calibration; 6) Radiometric and Linearity Setup; 7) Radiometric Setup; 8) Datasets Required; 9) Data Files; and 10) Field of View Measurement. This paper is in viewgraph form.
SWAT Model Configuration, Calibration and Validation for Lake Champlain Basin
The Soil and Water Assessment Tool (SWAT) model was used to develop phosphorus loading estimates for sources in the Lake Champlain Basin. This document describes the model setup and parameterization, and presents calibration results.
Calibrating the orientation between a microlens array and a sensor based on projective geometry
NASA Astrophysics Data System (ADS)
Su, Lijuan; Yan, Qiangqiang; Cao, Jun; Yuan, Yan
2016-07-01
We demonstrate a method for calibrating a microlens array (MLA) with a sensor component by building a plenoptic camera with a conventional prime lens. This calibration method includes a geometric model, a setup to adjust the distance (L) between the prime lens and the MLA, a calibration procedure for determining the subimage centers, and an optimization algorithm. The geometric model introduces nine unknown parameters regarding the centers of the microlenses and their images, whereas the distance adjustment setup provides an initial guess for the distance L. The simulation results verify the effectiveness and accuracy of the proposed method. The experimental results demonstrate that the calibration process can be performed with a commercial prime lens and that the proposed method can be used to quantitatively evaluate whether an MLA and a sensor are assembled properly for plenoptic systems.
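As a rough illustration of the optimization step mentioned above, the sketch below fits a regular square microlens lattice (pitch, rotation, offset) to detected subimage centers by nonlinear least squares. It is a hypothetical, simplified stand-in for the paper's nine-parameter geometric model; the lattice parameterization and initial guesses are assumptions.

```python
# Hypothetical sketch: fit a regular square MLA lattice (pitch, rotation, offset)
# to detected subimage centers.  NOT the authors' nine-parameter model.
import numpy as np
from scipy.optimize import least_squares

def lattice_model(params, ij):
    """Map integer lattice indices (i, j) to sensor coordinates (pixels)."""
    pitch, theta, x0, y0 = params
    c, s = np.cos(theta), np.sin(theta)
    x = x0 + pitch * (c * ij[:, 0] - s * ij[:, 1])
    y = y0 + pitch * (s * ij[:, 0] + c * ij[:, 1])
    return np.column_stack([x, y])

def residuals(params, ij, centers):
    return (lattice_model(params, ij) - centers).ravel()

def fit_lattice(ij, centers, pitch_guess=20.0):
    """ij: (N, 2) integer lattice indices; centers: (N, 2) detected subimage centers."""
    p0 = np.array([pitch_guess, 0.0, centers[:, 0].mean(), centers[:, 1].mean()])
    fit = least_squares(residuals, p0, args=(ij, centers))
    return fit.x        # pitch, rotation angle, offset_x, offset_y
```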
Evaluation of calibration efficacy under different levels of uncertainty
Heo, Yeonsook; Graziano, Diane J.; Guzowski, Leah; ...
2014-06-10
This study examines how calibration performs under different levels of uncertainty in model input data. It specifically assesses the efficacy of Bayesian calibration to enhance the reliability of EnergyPlus model predictions. A Bayesian approach can be used to update uncertain values of parameters, given measured energy-use data, and to quantify the associated uncertainty. We assess the efficacy of Bayesian calibration under a controlled virtual-reality setup, which enables rigorous validation of the accuracy of calibration results in terms of both calibrated parameter values and model predictions. Case studies demonstrate the performance of Bayesian calibration of base models developed from audit data with differing levels of detail in building design, usage, and operation.
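For readers unfamiliar with the Bayesian updating step mentioned above, the following minimal Metropolis sampler sketches how uncertain parameters can be updated against measured energy-use data under a Gaussian error model. It is a generic illustration, not the EnergyPlus workflow of the study; simulate(), the uniform prior bounds and the noise level sigma are placeholders.

```python
# Minimal Metropolis sketch of Bayesian calibration: update uncertain parameters
# theta given measured data y_obs.  Generic illustration, not the study's workflow.
import numpy as np

def log_posterior(theta, simulate, y_obs, sigma, lo, hi):
    if np.any(theta < lo) or np.any(theta > hi):
        return -np.inf                          # uniform prior bounds
    resid = y_obs - simulate(theta)
    return -0.5 * np.sum((resid / sigma) ** 2)  # Gaussian likelihood

def metropolis(simulate, y_obs, sigma, lo, hi, theta0, step, n_iter=5000):
    rng = np.random.default_rng(0)
    theta = np.asarray(theta0, float)
    lp = log_posterior(theta, simulate, y_obs, sigma, lo, hi)
    chain = []
    for _ in range(n_iter):
        prop = theta + step * rng.standard_normal(theta.size)
        lp_prop = log_posterior(prop, simulate, y_obs, sigma, lo, hi)
        if np.log(rng.random()) < lp_prop - lp:
            theta, lp = prop, lp_prop           # accept proposal
        chain.append(theta.copy())
    return np.array(chain)                      # posterior samples quantify uncertainty
```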
Feedforward operation of a lens setup for large defocus and astigmatism correction
NASA Astrophysics Data System (ADS)
Verstraete, Hans R. G. W.; Almasian, Mitra; Pozzi, Paolo; Bilderbeek, Rolf; Kalkman, Jeroen; Faber, Dirk J.; Verhaegen, Michel
2016-04-01
In this manuscript, we present a lens setup for large defocus and astigmatism correction. A deformable defocus lens and two rotational cylindrical lenses are used to control the defocus and astigmatism. The setup is calibrated using a simple model that allows the calculation of the lens inputs so that a desired defocus and astigmatism are actuated on the eye. The setup is tested by determining the feedforward prediction error, imaging a resolution target, and removing introduced aberrations.
Delivery of calibration workshops covering herbicide application equipment : final report.
DOT National Transportation Integrated Search
2014-03-31
Proper herbicide sprayer set-up and calibration are critical to the success of the Oklahoma Department of Transportation (ODOT) herbicide program. Sprayer system set-up and calibration training is provided in annual continuing education herbicide wor...
NASA Astrophysics Data System (ADS)
Stavroulakis, Petros I.; Chen, Shuxiao; Sims-Waterhouse, Danny; Piano, Samanta; Southon, Nicholas; Bointon, Patrick; Leach, Richard
2017-06-01
In non-rigid fringe projection 3D measurement systems, where the camera or projector setup can change significantly between measurements or the object needs to be tracked, self-calibration has to be carried out frequently to keep the measurements accurate. In fringe projection systems, it is common to calibrate the camera(s) in terms of extrinsic and intrinsic parameters using methods originally developed for photogrammetry. To calibrate the projector(s), an additional correspondence between a pre-calibrated camera and an image created by the projector is established. These recalibration steps are usually time consuming and involve measuring calibrated planar patterns before the actual object can be measured again after a camera or projector has been moved, and hence they do not facilitate fast 3D measurement of objects when frequent changes to the experimental setup are necessary. By employing and combining a priori information via inverse rendering, on-board sensors, deep learning and leveraging a graphics processing unit (GPU), we assess a fine camera pose estimation method which is based on optimising the rendering of a model of the scene and the object to match the view from the camera. We find that the success of this calibration pipeline can be greatly improved by using adequate a priori information from the aforementioned sources.
Linear positioning laser calibration setup of CNC machine tools
NASA Astrophysics Data System (ADS)
Sui, Xiulin; Yang, Congjing
2002-10-01
The linear positioning laser calibration setup of CNC machine tools is capable of executing machine tool laser calibration and backlash compensation. Using this setup, hole locations on CNC machine tools will be correct and machine tool geometry can be evaluated and adjusted. Machine tool laser calibration and backlash compensation is a simple and straightforward process. First, the stroke limits of the axis are found and the laser head is brought into correct alignment. Second, the machine axis is moved to the other extreme and the laser head alignment is refined using rotation and elevation adjustments. Finally, the machine is moved to the start position and the final alignment is verified. The stroke of the machine and the machine compensation interval dictate the amount of data required for each axis. These factors determine the amount of time required for a thorough compensation of the linear positioning accuracy. The Laser Calibrator System monitors the material temperature and the air density, taking machine thermal growth and laser beam frequency into consideration. This linear positioning laser calibration setup can be used on CNC machine tools, CNC lathes, horizontal machining centers and vertical machining centers.
Research on vacuum ultraviolet calibration technology
NASA Astrophysics Data System (ADS)
Wang, Jiapeng; Gao, Shumin; Sun, Hongsheng; Chen, Yinghang; Wei, Jianqiang
2014-11-01
The importance of extreme ultraviolet (EUV) and far ultraviolet (FUV) calibration is growing fast as vacuum ultraviolet payloads are widely used in national space programs. A calibration device was established especially for the requirements of EUV and FUV metrology and measurement. Spectral radiation and detector relative spectral response at EUV and FUV wavelengths can be calibrated with accuracies of 26% and 20%, respectively. The setup of the device, the theoretical model and the value traceability method are introduced, and measurements of detector relative spectral response from 30 nm to 200 nm are presented in this paper. The calibration device plays an important role in national space research.
Calibrating page sized Gafchromic EBT3 films
DOE Office of Scientific and Technical Information (OSTI.GOV)
Crijns, W.; Maes, F.; Heide, U. A. van der
2013-01-15
Purpose: The purpose is the development of a novel calibration method for dosimetry with Gafchromic EBT3 films. The method should be applicable for pretreatment verification of volumetric modulated arc and intensity modulated radiotherapy. Because the exposed area on film can be large for such treatments, lateral scan errors must be taken into account. The correction for the lateral scan effect is obtained from the calibration data itself. Methods: In this work, the film measurements were modeled using their relative scan values (transmittance, T). Inside the transmittance domain, a linear combination and a parabolic lateral scan correction described the observed transmittance values. The linear combination model combined a monomer transmittance state (T_0) and a polymer transmittance state (T_∞) of the film. The dose domain was associated with the observed effects in the transmittance domain through a rational calibration function. On the calibration film only simple static fields were applied, and page sized films were used for calibration and measurements (treatment verification). Four different calibration setups were considered and compared with respect to dose estimation accuracy. The first (I) used a calibration table from 32 regions of interest (ROIs) spread over 4 calibration films, the second (II) used 16 ROIs spread over 2 calibration films, and the third (III) and fourth (IV) used 8 ROIs spread over a single calibration film. The calibration tables of setups I, II, and IV contained eight dose levels delivered to different positions on the films, while for setup III only four dose levels were applied. Validation was performed by irradiating film strips with known doses at two different time points over the course of a week. Accuracy of the dose response and the lateral effect correction was estimated using the dose difference and the root mean squared error (RMSE), respectively. Results: A calibration based on two films was the optimal balance between cost effectiveness and dosimetric accuracy. The validation resulted in dose errors of 1%-2% for the two different time points, with a maximal absolute dose error around 0.05 Gy. The lateral correction reduced the RMSE values on the sides of the film to the RMSE values at the center of the film. Conclusions: EBT3 Gafchromic films were calibrated for large field dosimetry with a limited number of page sized films and simple static calibration fields. The transmittance was modeled as a linear combination of two transmittance states and associated with dose using a rational calibration function. Additionally, the lateral scan effect was resolved in the calibration function itself. This allows the use of page sized films. Only two calibration films were required to estimate both the dose and the lateral response. The calibration films were used over the course of a week, with residual dose errors ≤2% or ≤0.05 Gy.
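As an illustration of the calibration described in the Methods section, the sketch below fits a parabolic lateral correction together with a generic rational transmittance-to-dose function to ROI data. The functional forms, parameter names and starting values are assumptions chosen for illustration only, not the exact parameterization of the paper.

```python
# Illustrative sketch: generic rational dose-response plus parabolic lateral
# correction fitted to ROI data; functional forms are assumptions, not the paper's.
import numpy as np
from scipy.optimize import curve_fit

def lateral_correction(T_meas, x, k, x_c):
    """Undo a parabolic lateral scanner effect, strongest towards the film edges."""
    return T_meas / (1.0 + k * (x - x_c) ** 2)

def dose_from_transmittance(T, a, b, c):
    """Generic rational calibration function mapping transmittance to dose (Gy)."""
    return (a + b * T) / (T + c)

def calibrate(T_rois, x_rois, d_rois):
    """T_rois: measured transmittance per ROI, x_rois: lateral position (mm),
    d_rois: delivered dose (Gy).  Fit correction and calibration jointly."""
    def model(X, k, x_c, a, b, c):
        T, x = X
        return dose_from_transmittance(lateral_correction(T, x, k, x_c), a, b, c)
    p0 = [1e-5, np.mean(x_rois), 1.0, -1.0, 0.1]          # rough starting values
    popt, _ = curve_fit(model, (np.asarray(T_rois), np.asarray(x_rois)),
                        np.asarray(d_rois), p0=p0, maxfev=20000)
    return popt
```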
Monte-Carlo-based uncertainty propagation with hierarchical models—a case study in dynamic torque
NASA Astrophysics Data System (ADS)
Klaus, Leonard; Eichstädt, Sascha
2018-04-01
For a dynamic calibration, a torque transducer is described by a mechanical model, and the corresponding model parameters are to be identified from measurement data. A measuring device for the primary calibration of dynamic torque, and a corresponding model-based calibration approach, have recently been developed at PTB. The complete mechanical model of the calibration set-up is very complex, and involves several calibration steps—making a straightforward implementation of a Monte Carlo uncertainty evaluation tedious. With this in mind, we here propose to separate the complete model into sub-models, with each sub-model being treated with individual experiments and analysis. The uncertainty evaluation for the overall model then has to combine the information from the sub-models in line with Supplement 2 of the Guide to the Expression of Uncertainty in Measurement. In this contribution, we demonstrate how to carry this out using the Monte Carlo method. The uncertainty evaluation involves various input quantities of different origin and the solution of a numerical optimisation problem.
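The sub-model idea described above can be sketched as follows: each sub-model contributes Monte Carlo samples of its identified parameters, and the samples are propagated jointly through the overall measurement model, in the spirit of GUM Supplement 2. The sub-models, parameter values and the resonance-frequency example below are placeholders, not PTB's dynamic-torque model.

```python
# Sketch of Monte Carlo propagation across separately analysed sub-models.
import numpy as np

rng = np.random.default_rng(42)
N = 100_000                                    # number of Monte Carlo trials

# Sub-model A: e.g. torsional stiffness identified in one experiment (placeholder values)
stiffness = rng.normal(2.45e3, 0.03e3, N)      # N m / rad

# Sub-model B: e.g. mass moment of inertia from a separate experiment (placeholder values)
inertia = rng.normal(1.20e-3, 0.02e-3, N)      # kg m^2

# Combined model: propagate the samples through the overall measurement model,
# here a resonance frequency f = sqrt(k / J) / (2 pi) as an illustration.
freq = np.sqrt(stiffness / inertia) / (2.0 * np.pi)

print(f"f = {freq.mean():.1f} Hz, standard uncertainty {freq.std(ddof=1):.1f} Hz")
print("95 % coverage interval:", np.percentile(freq, [2.5, 97.5]))
```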
NASA Technical Reports Server (NTRS)
Nguyen, Quang-Viet; Kojima, Jun
2005-01-01
Researchers from NASA Glenn Research Center's Combustion Branch and the Ohio Aerospace Institute (OAI) have developed a transferable calibration standard for an optical technique called spontaneous Raman scattering (SRS) in high-pressure flames. SRS is perhaps the only technique that provides spatially and temporally resolved, simultaneous multiscalar measurements in turbulent flames. Such measurements are critical for the validation of numerical models of combustion. This study has been a combined experimental and theoretical effort to develop a spectral calibration database for multiscalar diagnostics using SRS in high-pressure flames. However, in the past such measurements have used a one-of-a-kind experimental setup and a setup-dependent calibration procedure to empirically account for spectral interferences, or crosstalk, among the major species of interest. Such calibration procedures, being non-transferable, are prohibitively expensive to duplicate. A goal of this effort is to provide an SRS calibration database using transferable standards that can be implemented widely by other researchers for both atmospheric-pressure and high-pressure (less than 30 atm) SRS studies. A secondary goal of this effort is to provide quantitative multiscalar diagnostics in high-pressure environments to validate computational combustion codes.
NASA Astrophysics Data System (ADS)
Sperling, A.; Meyer, M.; Pendsa, S.; Jordan, W.; Revtova, E.; Poikonen, T.; Renoux, D.; Blattner, P.
2018-04-01
Proper characterization of test setups used in industry for testing and traceable measurement of lighting devices by the substitution method is an important task. According to new standards for testing LED lamps, luminaires and modules, uncertainty budgets are requested because in many cases the properties of the device under test differ from those of the transfer standard used, which may cause significant errors, for example if an LED-based lamp is tested or calibrated in an integrating sphere which was calibrated with a tungsten lamp. This paper introduces a multiple transfer standard, which was designed not only to transfer a single calibration value (e.g. luminous flux) but also to characterize test setups used for LED measurements by means of additional, calibrated output features, enabling the application of the new standards.
Cryogenic actuator testing for the SAFARI ground calibration setup
NASA Astrophysics Data System (ADS)
de Jonge, C.; Eggens, M.; Nieuwenhuizen, A. C. T.; Detrain, A.; Smit, H.; Dieleman, P.
2012-09-01
For the on-ground calibration setup of the SAFARI instrument, cryogenic mechanisms are being developed at SRON Netherlands Institute for Space Research, including a filter wheel, an XYZ-scanner and a flipmirror mechanism. Due to the extremely low background radiation requirement of the SAFARI instrument, all of these mechanisms will have to perform their work at 4.5 K, and low-dissipation cryogenic actuators are required to drive them. In this paper, the performance of stepper motors, piezoelectric actuators and brushless DC motors as cryogenic actuators is compared. We tested stepper motor mechanical performance and electrical dissipation at 4 K. The actuator requirements, test setup and test results are presented. Furthermore, design considerations and early performance tests of the flipmirror mechanism are discussed. This flipmirror features a 102 x 72 mm aluminum mirror that can be rotated 45°. A Phytron stepper motor with reduction gearbox has been chosen to drive the flipmirror. Testing showed that this motor has a dissipation of 49 mW at 4 K with a torque of 60 N mm at 100 rpm. Thermal modeling of the flipmirror mechanism predicts that, with proper thermal strapping, the peak temperature of the flipmirror after a single action will be within the background level requirements of the SAFARI instrument. Early tests confirm this result. For low-duty-cycle operations, commercial stepper motors appear suitable as actuators for test equipment in the SAFARI on-ground calibration setup.
NASA Technical Reports Server (NTRS)
Parker, Peter A. (Inventor)
2003-01-01
A single vector calibration system is provided which facilitates the calibration of multi-axis load cells, including wind tunnel force balances. The single vector system provides the capability to calibrate a multi-axis load cell using a single directional load, for example loading solely in the gravitational direction. The system manipulates the load cell in three-dimensional space, while keeping the uni-directional calibration load aligned. The use of a single vector calibration load reduces the set-up time for the multi-axis load combinations needed to generate a complete calibration mathematical model. The system also reduces load application inaccuracies caused by the conventional requirement to generate multiple force vectors. The simplicity of the system reduces calibration time and cost, while simultaneously increasing calibration accuracy.
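The geometry behind the single-vector idea can be sketched as follows: the calibration load stays purely gravitational while the load cell is re-oriented, so different force combinations appear in the balance axis system. The roll/pitch angle convention below is an assumption for illustration, and only the force (not moment) components are shown.

```python
# Sketch of the single-vector idea: a purely gravitational load re-expressed in
# the balance frame for each orientation of the positioning system (assumed
# roll-then-pitch convention; illustration only).
import numpy as np

def gravity_load_in_balance_frame(mass_kg, roll_rad, pitch_rad, g=9.80665):
    """Force vector (N) seen by the balance when it is rolled and pitched
    while the calibration weight hangs along the gravity vector."""
    w = np.array([0.0, 0.0, -mass_kg * g])              # load in the lab frame
    cr, sr = np.cos(roll_rad), np.sin(roll_rad)
    cp, sp = np.cos(pitch_rad), np.sin(pitch_rad)
    R_roll = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    R_pitch = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    # lab -> balance frame: apply the transpose of the balance attitude matrix
    return (R_pitch @ R_roll).T @ w

# Example: 50 kg weight, balance rolled 30 deg and pitched 10 deg
print(gravity_load_in_balance_frame(50.0, np.radians(30), np.radians(10)))
```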
Cryocooler based test setup for high current applications
NASA Astrophysics Data System (ADS)
Pradhan, Jedidiah; Das, Nisith Kr.; Roy, Anindya; Duttagupta, Anjan
2018-04-01
A cryocooler-based cryogenic test setup has been designed, fabricated, and tested. The setup incorporates two cryocoolers, one for sample cooling and the other for cooling the large magnet coil. The performance and versatility of the setup have been tested using large samples of high-temperature superconductor magnet coil as well as short samples with high current. Several uncalibrated temperature sensors have been calibrated using this system. This paper presents the details of the system along with the results of different performance tests.
Delft Dashboard: a quick setup tool for coastal and estuarine models
NASA Astrophysics Data System (ADS)
Nederhoff, C., III; Van Dongeren, A.; Van Ormondt, M.; Veeramony, J.
2016-02-01
We developed the easy-to-use Delft DashBoard (DDB) software for the rapid set-up of coastal and estuarine hydrodynamic and basic morphological numerical models. In the "Model Maker" toolbox, users have the capability to set up Delft3D models, in a minimal amount of time (on the order of an hour), for any location in the world. DDB draws upon public internet data sources of bathymetry and tides to construct the model. With additional toolboxes, these models can be forced with parameterized hurricane wind fields or with uplift of the sea surface due to tsunamis, nested in publicly available ocean models, and forced with meteorological data (wind speed, pressure, temperature). In this presentation we will show the skill of a model which is set up with Delft DashBoard and compare it to well-calibrated benchmark models. These latter models have been set up using detailed input data and boundary conditions. We have tested the functionality of Delft DashBoard and evaluated the performance and robustness of the DDB model system on a variety of cases, ranging from coastal to basin-scale models. Furthermore, we have performed a sensitivity study to investigate the most critical physical and numerical processes. The software can benefit operational modellers, as well as scientists and consultants.
Broadband interferometric characterisation of nano-positioning stages with sub-10 pm resolution
NASA Astrophysics Data System (ADS)
Li, Zhi; Brand, Uwe; Wolff, Helmut; Koenders, Ludger; Yacoot, Andrew; Puranto, Prabowo
2017-06-01
A traceable calibration setup for investigation of the quasi-static and dynamic performance of nano-positioning stages is detailed, which utilizes a differential plane-mirror interferometer in double-pass configuration from the National Physical Laboratory (NPL). An NPL-developed FPGA-based interferometric data acquisition and decoding system has been used to enable traceable quasi-static calibration of nano-positioning stages with high resolution. A lock-in-based modulation technique is further introduced to quantitatively calibrate the dynamic response of moving stages with a bandwidth up to 100 kHz and picometer resolution. First experimental results show that the calibration setup can achieve a noise floor lower than 10 pm/sqrt(Hz) under nearly open-air conditions. A pico-positioning stage, which is used for nanoindentation with indentation depths down to a few picometers, has been characterized with this calibration setup.
Towards a global network of gamma-ray detector calibration facilities
NASA Astrophysics Data System (ADS)
Tijs, Marco; Koomans, Ronald; Limburg, Han
2016-09-01
Gamma-ray logging tools are applied worldwide. At various locations, calibration facilities are used to calibrate these gamma-ray logging systems. Several attempts have been made to cross-correlate well-known calibration pits, but this cross-correlation does not include calibration facilities in Europe or private company calibration facilities. Our aim is to set up a framework that makes it possible to interlink all calibration facilities worldwide by using `tools of opportunity' - tools that have been calibrated in different calibration facilities, whether this usage was on a coordinated basis or by coincidence. To compare the measurements of different tools, it is important to understand the behaviour of the tools in the different calibration pits. Borehole properties, such as diameter, fluid, casing and probe diameter, strongly influence the outcome of gamma-ray borehole logging. Logs need to be properly calibrated and compensated for these borehole properties in order to obtain in-situ grades or to do cross-hole correlation. Some tool providers provide tool-specific correction curves for this purpose. Others rely on reference measurements against sources of known radionuclide concentration and geometry. In this article, we present an attempt to set up a framework for transferring `local' calibrations to be applied `globally'. This framework includes corrections for any geometry and detector size to give absolute concentrations of radionuclides from borehole measurements. This model is used to compare measurements in the calibration pits of Grand Junction, located in the USA; Adelaide (previously known as AMDEL), located in Adelaide, Australia; and Stonehenge, located at Medusa Explorations BV in the Netherlands.
Safta, C.; Ricciuto, Daniel M.; Sargsyan, Khachik; ...
2015-07-01
In this paper we propose a probabilistic framework for an uncertainty quantification (UQ) study of a carbon cycle model and focus on the comparison between steady-state and transient simulation setups. A global sensitivity analysis (GSA) study indicates the parameters and parameter couplings that are important at different times of the year for quantities of interest (QoIs) obtained with the data assimilation linked ecosystem carbon (DALEC) model. We then employ a Bayesian approach and a statistical model error term to calibrate the parameters of DALEC using net ecosystem exchange (NEE) observations at the Harvard Forest site. The calibration results are employedmore » in the second part of the paper to assess the predictive skill of the model via posterior predictive checks.« less
A polychromatic adaption of the Beer-Lambert model for spectral decomposition
NASA Astrophysics Data System (ADS)
Sellerer, Thorsten; Ehn, Sebastian; Mechlem, Korbinian; Pfeiffer, Franz; Herzen, Julia; Noël, Peter B.
2017-03-01
We present a semi-empirical forward model for spectral photon-counting CT which is fully compatible with state-of-the-art maximum-likelihood estimators (MLE) for basis material line integrals. The model relies on a minimum calibration effort to make the method applicable in routine clinical set-ups, where periodic re-calibration is needed. In this work we present an experimental verification of our proposed method. The proposed method uses an adapted Beer-Lambert model, describing the energy-dependent attenuation of a polychromatic x-ray spectrum using additional exponential terms. In an experimental dual-energy photon-counting CT setup based on a CdTe detector, the model accurately predicts the registered counts for an attenuated polychromatic spectrum. Deviations between model and measurement data lie within the Poisson statistical limit of the performed acquisitions, indicating an effectively unbiased forward model. The experimental data also show that the model is capable of handling possible spectral distortions introduced by the photon-counting detector and CdTe sensor. The simplicity and high accuracy of the proposed model provide a viable forward model for MLE-based spectral decomposition methods without the need for costly and time-consuming characterization of the system response.
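A hedged reading of such an adapted Beer-Lambert model is sketched below: the expected counts in each detector bin are written as a sum of exponential terms in the basis-material line integrals, with per-bin weights and effective attenuation coefficients that would come from a calibration. The array shapes and numerical values are illustrative assumptions, not the calibrated coefficients of the paper.

```python
# Hedged sketch of a polychromatic Beer-Lambert forward model for photon counting.
import numpy as np

def forward_counts(A, c, mu):
    """
    A  : (M,)    basis-material line integrals, e.g. [A_water, A_bone] in cm
    c  : (B, K)  per-bin weights of the K exponential terms (from calibration)
    mu : (K, M)  effective attenuation coefficients of each term (1/cm)
    returns (B,) expected counts per energy bin
    """
    atten = np.exp(-mu @ A)        # (K,) attenuation factor of each exponential term
    return c @ atten               # (B,) expected counts

# Example with two bins, two basis materials, three exponential terms (made-up numbers):
c = np.array([[4.0e4, 1.5e4, 5.0e3],
              [2.5e4, 2.0e4, 8.0e3]])
mu = np.array([[0.30, 0.60],
               [0.22, 0.45],
               [0.18, 0.35]])
print(forward_counts(np.array([10.0, 1.0]), c, mu))
```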
Model of an optical system's influence on sensitivity of microbolometric focal plane array
NASA Astrophysics Data System (ADS)
Gogler, Sławomir; Bieszczad, Grzegorz; Zarzycka, Alicja; Szymańska, Magdalena; Sosnowski, Tomasz
2012-10-01
Thermal imagers and the infrared array sensors used therein are subject to a calibration procedure and an evaluation of their voltage sensitivity to incident radiation during the manufacturing process. The calibration procedure is especially important in so-called radiometric cameras, where accurate radiometric quantities, given in physical units, are of concern. Even though non-radiometric cameras are not expected to meet such elevated standards, it is still important that the image faithfully represents temperature variations across the scene. The detectors used in a thermal camera are illuminated by infrared radiation transmitted through a specialized optical system. Each optical system used influences the irradiation distribution across the sensor array. In this article, a model is proposed that describes the irradiation distribution across an array sensor working with the optical system used in the calibration set-up. In this method, optical and geometrical properties of the array set-up are taken into account. By means of Monte Carlo simulation, a large number of rays was traced to the sensor plane, which allowed the irradiation distribution across the image plane to be determined for different aperture-limiting configurations. Simulated results were compared with the proposed analytical expression. The presented radiometric model allows fast and accurate non-uniformity correction to be carried out.
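The ray-tracing step can be illustrated with a toy Monte Carlo sketch: rays from a uniform extended source are traced through a circular aperture stop onto the detector plane, and the hits are histogrammed into a relative irradiation map. The geometry, dimensions and sample counts below are illustrative, not the configuration analysed in the article.

```python
# Toy Monte Carlo ray-trace: relative irradiation across a detector array behind
# a circular aperture stop (illustrative geometry only).
import numpy as np

rng = np.random.default_rng(0)
n_rays = 2_000_000
src_half, stop_r, stop_z, det_z, det_half, n_pix = 20.0, 5.0, 40.0, 80.0, 8.0, 64  # mm

# launch points on the source plane (z = 0) and passage points on the aperture stop
xs = rng.uniform(-src_half, src_half, (n_rays, 2))
phi = rng.uniform(0.0, 2.0 * np.pi, n_rays)
r = stop_r * np.sqrt(rng.uniform(0.0, 1.0, n_rays))        # uniform over the stop area
xa = np.column_stack([r * np.cos(phi), r * np.sin(phi)])

# propagate the straight ray source -> stop onward to the detector plane
xd = xs + (xa - xs) * (det_z / stop_z)

hist, _, _ = np.histogram2d(xd[:, 0], xd[:, 1], bins=n_pix,
                            range=[[-det_half, det_half], [-det_half, det_half]])
irradiance_map = hist / hist.max()      # relative irradiation across the array
```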
2012-08-07
[Report excerpt; extraction fragments] ... sealed quartz ampoule under a mercury overpressure in a conventional clam-shell furnace. The reduction in the dislocation density has been studied as ... Table-of-contents fragments: 2.6.4 Etch Pit Characterization; 3 Furnace Setup and Calibration (3.1.1 Furnace Setup; 3.1.2 Furnace Calibration); 4 In Situ ...
Brouckaert, D; Uyttersprot, J-S; Broeckx, W; De Beer, T
2018-03-01
Calibration transfer or standardisation aims at creating a uniform spectral response on different spectroscopic instruments or under varying conditions, without requiring a full recalibration for each situation. In the current study, this strategy is applied to construct at-line multivariate calibration models and subsequently employ them in-line in a continuous industrial production line, using the same spectrometer. Firstly, quantitative multivariate models are constructed at-line at laboratory scale for predicting the concentration of two main ingredients in hard surface cleaners. By regressing the Raman spectra of a set of small-scale calibration samples against their reference concentration values, partial least squares (PLS) models are developed to quantify the surfactant levels in the liquid detergent compositions under investigation. After evaluating the models' performance with a set of independent validation samples, a univariate slope/bias correction is applied in view of transporting these at-line calibration models to an in-line manufacturing set-up. This standardisation technique allows a fast and easy transfer of the PLS regression models, by simply correcting the model predictions on the in-line set-up, without adjusting anything in the original multivariate calibration models. An extensive statistical analysis is performed in order to assess the predictive quality of the transferred regression models. Before and after transfer, the R2 and RMSEP of both models are compared to evaluate whether their magnitudes are similar. T-tests are then performed to investigate whether the slope and intercept of the transferred regression line are not statistically different from 1 and 0, respectively. Furthermore, it is inspected whether no significant bias can be noted. F-tests are executed as well, for assessing the linearity of the transfer regression line and for investigating the statistical coincidence of the transfer and validation regression lines. Finally, a paired t-test is performed to compare the original at-line model to the slope/bias corrected in-line model, using interval hypotheses. It is shown that the calibration models of Surfactant 1 and Surfactant 2 yield satisfactory in-line predictions after slope/bias correction. While Surfactant 1 passes seven out of eight statistical tests, the recommended validation parameters are 100% successful for Surfactant 2. It is hence concluded that the proposed strategy for transferring at-line calibration models to an in-line industrial environment via a univariate slope/bias correction of the predicted values offers a successful standardisation approach. Copyright © 2017 Elsevier B.V. All rights reserved.
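The univariate slope/bias correction itself is simple enough to sketch: regress the reference concentrations on the in-line predictions of the unchanged at-line PLS model for a set of transfer samples, then apply the resulting slope and bias to every new prediction. Variable names below are illustrative.

```python
# Minimal sketch of a univariate slope/bias correction for calibration transfer:
# the PLS model itself is untouched, only its predictions are corrected.
import numpy as np

def fit_slope_bias(y_pred_inline, y_ref):
    """Regress reference values on in-line predictions for the transfer samples."""
    slope, bias = np.polyfit(y_pred_inline, y_ref, 1)
    return slope, bias

def apply_slope_bias(y_pred, slope, bias):
    return slope * y_pred + bias

# y_pred_inline: at-line model applied to in-line spectra of transfer samples
# y_ref:         their reference surfactant concentrations
# Afterwards every new in-line prediction is corrected the same way:
#   y_corrected = apply_slope_bias(pls_model.predict(new_spectrum), slope, bias)
```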
Metric Calibration of a Focused Plenoptic Camera Based on a 3d Calibration Target
NASA Astrophysics Data System (ADS)
Zeller, N.; Noury, C. A.; Quint, F.; Teulière, C.; Stilla, U.; Dhome, M.
2016-06-01
In this paper we present a new calibration approach for focused plenoptic cameras. We derive a new mathematical projection model of a focused plenoptic camera which considers lateral as well as depth distortion. To this end, we derive a new depth distortion model directly from the theory of depth estimation in a focused plenoptic camera. In total the model consists of five intrinsic parameters, the parameters for radial and tangential distortion in the image plane, and two new depth distortion parameters. In the proposed calibration we perform a complete bundle adjustment based on a 3D calibration target. The residual of our optimization approach is three-dimensional, where the depth residual is defined by a scaled version of the inverse virtual depth difference and thus conforms well to the measured data. Our method is evaluated on different camera setups and shows good accuracy. For a better characterization of our approach we evaluate the accuracy of virtual image points projected back to 3D space.
Evanoff, M G; Roehrig, H; Giffords, R S; Capp, M P; Rovinelli, R J; Hartmann, W H; Merritt, C
2001-06-01
This report discusses calibration and set-up procedures for medium-resolution monochrome cathode ray tubes (CRTs) undertaken in preparation for the oral portion of the board examination of the American Board of Radiology (ABR). The board examinations took place in more than 100 rooms of a hotel. There was one display station (a computer and the associated CRT display) in each of the hotel rooms used for the examinations. The examinations covered the radiologic specialties cardiopulmonary, musculoskeletal, gastrointestinal, vascular, pediatric, and genitourinary. The software used for set-up and calibration was the VeriLUM 4.0 package from Image Smiths in Germantown, MD. The set-up included setting minimum luminance and maximum luminance, as well as positioning of the CRT in each examination room with respect to reflections of room lights. The calibration of the grey scale rendition was done to meet the Digital Imaging and Communications in Medicine (DICOM) Part 14 Standard Display Function. We describe these procedures and present the calibration data in tables and graphs, listing initial values of minimum luminance, maximum luminance, and grey scale rendition (DICOM Part 14 standard display function). Changes of these parameters over the duration of the examination were observed and recorded on 11 monitors in a particular room. These changes strongly suggest that all calibrated CRTs be monitored over the duration of the examination. In addition, other CRT performance data affecting image quality, such as spatial resolution, should be included in set-up and image quality-control procedures.
Why Bother to Calibrate? Model Consistency and the Value of Prior Information
NASA Astrophysics Data System (ADS)
Hrachowitz, Markus; Fovet, Ophelie; Ruiz, Laurent; Euser, Tanja; Gharari, Shervan; Nijzink, Remko; Savenije, Hubert; Gascuel-Odoux, Chantal
2015-04-01
Hydrological models frequently suffer from limited predictive power despite adequate calibration performances. This can indicate insufficient representations of the underlying processes. Thus ways are sought to increase model consistency while satisfying the contrasting priorities of increased model complexity and limited equifinality. In this study the value of a systematic use of hydrological signatures and expert knowledge for increasing model consistency was tested. It was found that a simple conceptual model, constrained by 4 calibration objective functions, was able to adequately reproduce the hydrograph in the calibration period. The model, however, could not reproduce 20 hydrological signatures, indicating a lack of model consistency. Subsequently, testing 11 models, model complexity was increased in a stepwise way and counter-balanced by using prior information about the system to impose "prior constraints", inferred from expert knowledge and to ensure a model which behaves well with respect to the modeller's perception of the system. We showed that, in spite of unchanged calibration performance, the most complex model set-up exhibited increased performance in the independent test period and skill to reproduce all 20 signatures, indicating a better system representation. The results suggest that a model may be inadequate despite good performance with respect to multiple calibration objectives and that increasing model complexity, if efficiently counter-balanced by available prior constraints, can increase predictive performance of a model and its skill to reproduce hydrological signatures. The results strongly illustrate the need to balance automated model calibration with a more expert-knowledge driven strategy of constraining models.
Why Bother and Calibrate? Model Consistency and the Value of Prior Information.
NASA Astrophysics Data System (ADS)
Hrachowitz, M.; Fovet, O.; Ruiz, L.; Euser, T.; Gharari, S.; Nijzink, R.; Freer, J. E.; Savenije, H.; Gascuel-Odoux, C.
2014-12-01
Hydrological models frequently suffer from limited predictive power despite adequate calibration performances. This can indicate insufficient representations of the underlying processes. Thus ways are sought to increase model consistency while satisfying the contrasting priorities of increased model complexity and limited equifinality. In this study the value of a systematic use of hydrological signatures and expert knowledge for increasing model consistency was tested. It was found that a simple conceptual model, constrained by 4 calibration objective functions, was able to adequately reproduce the hydrograph in the calibration period. The model, however, could not reproduce 20 hydrological signatures, indicating a lack of model consistency. Subsequently, testing 11 models, model complexity was increased in a stepwise way and counter-balanced by using prior information about the system to impose "prior constraints", inferred from expert knowledge and to ensure a model which behaves well with respect to the modeller's perception of the system. We showed that, in spite of unchanged calibration performance, the most complex model set-up exhibited increased performance in the independent test period and skill to reproduce all 20 signatures, indicating a better system representation. The results suggest that a model may be inadequate despite good performance with respect to multiple calibration objectives and that increasing model complexity, if efficiently counter-balanced by available prior constraints, can increase predictive performance of a model and its skill to reproduce hydrological signatures. The results strongly illustrate the need to balance automated model calibration with a more expert-knowledge driven strategy of constraining models.
Calibration of an arbitrarily arranged projection moiré system for 3D shape measurement
NASA Astrophysics Data System (ADS)
Tang, Ying; Yao, Jun; Zhou, Yihao; Sun, Chen; Yang, Peng; Miao, Hong; Chen, Jubing
2018-05-01
An arbitrarily arranged projection moiré system is presented for three-dimensional shape measurement. We develop a model for the projection moiré system and derive a universal formula expressing the relation between height and the phase variation before and after the object is placed on the reference plane. With so many system parameters involved, a system calibration technique is needed. In this work, we provide a robust and accurate calibration method for an arbitrarily arranged projection moiré system. The system no longer puts restrictions on the configuration of the optical setup. Real experiments have been conducted to verify the validity of this method.
Maximum likelihood estimation in calibrating a stereo camera setup.
Muijtjens, A M; Roos, J M; Arts, T; Hasman, A
1999-02-01
Motion and deformation of the cardiac wall may be measured by following the positions of implanted radiopaque markers in three dimensions, using two x-ray cameras simultaneously. Commonly, calibration of the position measurement system is obtained by registration of the images of a calibration object containing 10-20 radiopaque markers at known positions. Unfortunately, an accidental change of the position of a camera after calibration requires complete recalibration. Alternatively, redundant information in the measured image positions of stereo pairs can be used for calibration. Thus, a separate calibration procedure can be avoided. In the current study a model is developed that describes the geometry of the camera setup by five dimensionless parameters. Maximum likelihood (ML) estimates of these parameters were obtained in an error analysis. It is shown that the ML estimates can be found by application of a nonlinear least squares procedure. Compared to the standard unweighted least squares procedure, the ML method resulted in more accurate estimates without noticeable bias. The accuracy of the ML method was investigated in relation to the object aperture. The reconstruction problem appeared well conditioned as long as the object aperture is larger than 0.1 rad. The angle between the two viewing directions appeared to be the parameter most likely to cause major inaccuracies in the reconstruction of the 3-D positions of the markers. Hence, attempts to improve the robustness of the method should primarily focus on reduction of the error in this parameter.
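The key point, that ML estimation under Gaussian image noise reduces to (weighted) nonlinear least squares on the reprojection residuals, can be sketched as below. The toy two-pinhole projection model and its two parameters are stand-ins for illustration, not the paper's five dimensionless parameters.

```python
# Sketch: ML calibration as weighted nonlinear least squares on reprojection residuals.
import numpy as np
from scipy.optimize import least_squares

def project(geom, pts):
    """Toy projection: two pinhole cameras whose viewing directions differ by an
    angle alpha; geom = (alpha, focal length).  Stand-in model, illustration only."""
    alpha, f = geom
    views = []
    for ang in (0.0, alpha):
        c, s = np.cos(ang), np.sin(ang)
        x = c * pts[:, 0] + s * pts[:, 2]
        z = -s * pts[:, 0] + c * pts[:, 2] + 5.0     # offset so the scene sits in front
        views.append(np.column_stack([f * x / z, f * pts[:, 1] / z]))
    return np.stack(views, axis=1)                   # shape (n_markers, 2 cameras, 2)

def residuals(p, n, observed, sigma):
    geom, pts = p[:2], p[2:].reshape(n, 3)
    return ((project(geom, pts) - observed) / sigma).ravel()

def ml_calibrate(observed, geom0, pts0, sigma=1.0):
    """observed: (n, 2, 2) image positions of the markers in both cameras."""
    n = pts0.shape[0]
    fit = least_squares(residuals, np.concatenate([geom0, pts0.ravel()]),
                        args=(n, observed, sigma))
    return fit.x[:2], fit.x[2:].reshape(n, 3)        # geometry and 3-D marker positions
```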
NASA Astrophysics Data System (ADS)
Hrachowitz, M.; Fovet, O.; Ruiz, L.; Euser, T.; Gharari, S.; Nijzink, R.; Freer, J.; Savenije, H. H. G.; Gascuel-Odoux, C.
2014-09-01
Hydrological models frequently suffer from limited predictive power despite adequate calibration performances. This can indicate insufficient representations of the underlying processes. Thus, ways are sought to increase model consistency while satisfying the contrasting priorities of increased model complexity and limited equifinality. In this study, the value of a systematic use of hydrological signatures and expert knowledge for increasing model consistency was tested. It was found that a simple conceptual model, constrained by four calibration objective functions, was able to adequately reproduce the hydrograph in the calibration period. The model, however, could not reproduce a suite of hydrological signatures, indicating a lack of model consistency. Subsequently, testing 11 models, model complexity was increased in a stepwise way and counter-balanced by "prior constraints," inferred from expert knowledge to ensure a model which behaves well with respect to the modeler's perception of the system. We showed that, in spite of unchanged calibration performance, the most complex model setup exhibited increased performance in the independent test period and skill to better reproduce all tested signatures, indicating a better system representation. The results suggest that a model may be inadequate despite good performance with respect to multiple calibration objectives and that increasing model complexity, if counter-balanced by prior constraints, can significantly increase predictive performance of a model and its skill to reproduce hydrological signatures. The results strongly illustrate the need to balance automated model calibration with a more expert-knowledge-driven strategy of constraining models.
NASA Astrophysics Data System (ADS)
Hrachowitz, Markus; Fovet, Ophelie; Ruiz, Laurent; Gascuel-Odoux, Chantal; Savenije, Hubert
2014-05-01
Hydrological models are frequently characterized by what is often considered to be adequate calibration performances. In many cases, however, these models experience a substantial uncertainty and performance decrease in validation periods, thus resulting in poor predictive power. Besides the likely presence of data errors, this observation can point towards wrong or insufficient representations of the underlying processes and their heterogeneity. In other words, right results are generated for the wrong reasons. Thus ways are sought to increase model consistency and to thereby satisfy the contrasting priorities of the need a) to increase model complexity and b) to limit model equifinality. In this study a stepwise model development approach is chosen to test the value of an exhaustive and systematic combined use of hydrological signatures, expert knowledge and readily available, yet anecdotal and rarely exploited, hydrological information for increasing model consistency towards generating the right answer for the right reasons. A simple 3-box, 7 parameter, conceptual HBV-type model, constrained by 4 calibration objective functions was able to adequately reproduce the hydrograph with comparatively high values for the 4 objective functions in the 5-year calibration period. However, closer inspection of the results showed a dramatic decrease of model performance in the 5-year validation period. In addition, assessing the model's skill to reproduce a range of 20 hydrological signatures including, amongst others, the flow duration curve, the autocorrelation function and the rising limb density, showed that it could not adequately reproduce the vast majority of these signatures, indicating a lack of model consistency. Subsequently model complexity was increased in a stepwise way to allow for more process heterogeneity. To limit model equifinality, increase in complexity was counter-balanced by a stepwise application of "realism constraints", inferred from expert knowledge (e.g. unsaturated storage capacity of hillslopes should exceed the one of wetlands) and anecdotal hydrological information (e.g. long-term estimates of actual evaporation obtained from the Budyko framework and long-term estimates of baseflow contribution) to ensure that the model is well behaved with respect to the modeller's perception of the system. A total of 11 model set-ups with increased complexity and an increased number of realism constraints was tested. It could be shown that in spite of largely unchanged calibration performance, compared to the simplest set-up, the most complex model set-up (12 parameters, 8 constraints) exhibited significantly increased performance in the validation period while uncertainty did not increase. In addition, the most complex model was characterized by a substantially increased skill to reproduce all 20 signatures, indicating a more suitable representation of the system. The results suggest that a model, "well" constrained by 4 calibration objective functions may still be an inadequate representation of the system and that increasing model complexity, if counter-balanced by realism constraints, can indeed increase predictive performance of a model and its skill to reproduce a range of hydrological signatures, but that it does not necessarily result in increased uncertainty. 
The results also strongly illustrate the need to move away from automated model calibration towards a more general expert-knowledge driven strategy of constraining models if a certain level of model consistency is to be achieved.
ANN-based calibration model of FTIR used in transformer online monitoring
NASA Astrophysics Data System (ADS)
Li, Honglei; Liu, Xian-yong; Zhou, Fangjie; Tan, Kexiong
2005-02-01
Recently, chromatography columns and gas sensors have been used in online monitoring devices for dissolved gases in transformer oil. However, some disadvantages still exist in these devices: consumption of carrier gas, requirement of calibration, etc. Since FTIR has high accuracy, consumes no carrier gas and requires no calibration, the researchers studied the application of FTIR in such a monitoring device. Experiments using the "flow gas method" were designed, and spectra of mixtures composed of different gases were collected with a BOMEM MB104 FTIR spectrometer. A key problem in the application of FTIR is that the absorbance spectra of three fault key gases, namely C2H4, CH4 and C2H6, overlap severely at 2700-3400 cm-1. Because the absorbance law is no longer appropriate, a nonlinear calibration model based on a BP ANN was set up for the quantitative analysis. The peak-height absorbances of C2H4, CH4 and C2H6 were adopted as quantitative features, and all the data were normalized before training the ANN. Computing results show that the calibration model can effectively eliminate the cross-interference in the measurement.
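A minimal sketch of such a BP-ANN calibration model, assuming peak-height absorbances as inputs and reference gas concentrations as targets, is given below using scikit-learn's MLP as the backpropagation network. Network size, scaling choices and variable names are illustrative, not those of the paper.

```python
# Hedged sketch of a backpropagation-ANN calibration model for overlapping FTIR bands.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import MinMaxScaler

# X: (n_samples, 3) peak absorbances in the overlapped 2700-3400 cm-1 region
# Y: (n_samples, 3) reference concentrations (e.g. ppm) of C2H4, CH4, C2H6
def train_calibration(X, Y):
    x_scaler, y_scaler = MinMaxScaler(), MinMaxScaler()    # normalise before training
    Xs, Ys = x_scaler.fit_transform(X), y_scaler.fit_transform(Y)
    net = MLPRegressor(hidden_layer_sizes=(10,), activation="tanh",
                       solver="lbfgs", max_iter=5000, random_state=0)
    net.fit(Xs, Ys)
    return net, x_scaler, y_scaler

def predict_concentrations(net, x_scaler, y_scaler, X_new):
    """Map new absorbance features back to gas concentrations."""
    return y_scaler.inverse_transform(net.predict(x_scaler.transform(X_new)))
```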
Broadband microwave spectroscopy in Corbino geometry at 3He temperatures
NASA Astrophysics Data System (ADS)
Steinberg, Katrin; Scheffler, Marc; Dressel, Martin
2012-02-01
A broadband microwave spectrometer has been constructed to determine the complex conductivity of thin metal films at frequencies from 45 MHz to 20 GHz working in the temperature range from 0.45 K to 2 K (in a 3He cryostat). The setup follows the Corbino approach: a vector network analyzer measures the complex reflection coefficient of a microwave signal hitting the sample as termination of a coaxial transmission line. As the calibration of the setup limits the achievable resolution, we discuss the sources of error hampering different types of calibration. Test measurements of the complex conductivity of a heavy-fermion material demonstrate the applicability of the calibration procedures.
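The data reduction behind a Corbino measurement can be sketched as follows, under the usual assumptions (50 Ω line, thin film, load impedance given by the textbook reflection relation). Conventions for the geometric factor vary between groups, so treat the expression as an illustration rather than the authors' exact analysis.

```python
# Sketch: convert a calibrated reflection coefficient into the complex conductivity
# of a thin film terminating the coaxial line in Corbino geometry (conventions vary).
import numpy as np

def corbino_conductivity(gamma, r_inner, r_outer, thickness, z0=50.0):
    """
    gamma     : complex reflection coefficient after error correction
    r_inner   : inner radius of the Corbino disk (m)
    r_outer   : outer radius of the Corbino disk (m)
    thickness : film thickness (m)
    z0        : characteristic line impedance (ohm)
    returns complex conductivity sigma (S/m)
    """
    z_sample = z0 * (1.0 + gamma) / (1.0 - gamma)       # load impedance of the film
    return np.log(r_outer / r_inner) / (2.0 * np.pi * thickness * z_sample)

# Example: reflection coefficient measured by the VNA at one frequency (made-up value)
print(corbino_conductivity(0.82 - 0.05j, 0.4e-3, 1.15e-3, 50e-9))
```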
Absolute charge calibration of scintillating screens for relativistic electron detection
NASA Astrophysics Data System (ADS)
Buck, A.; Zeil, K.; Popp, A.; Schmid, K.; Jochmann, A.; Kraft, S. D.; Hidding, B.; Kudyakov, T.; Sears, C. M. S.; Veisz, L.; Karsch, S.; Pawelke, J.; Sauerbrey, R.; Cowan, T.; Krausz, F.; Schramm, U.
2010-03-01
We report on new charge calibrations and linearity tests with high dynamic range for eight different scintillating screens typically used for the detection of relativistic electrons from laser-plasma-based acceleration schemes. The absolute charge calibration was done with picosecond electron bunches at the ELBE linear accelerator in Dresden. The lower detection limit in our setup for the most sensitive scintillating screen (KODAK Biomax MS) was 10 fC/mm2. The screens showed a linear photon-to-charge dependency over several orders of magnitude. An onset of saturation effects starting around 10-100 pC/mm2 was found for some of the screens. Additionally, a constant light source was employed as a luminosity reference to simplify the transfer of a one-time absolute calibration to different experimental setups.
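To make the linearity-plus-saturation behaviour concrete, the sketch below fits a simple saturating response curve to reference charge densities and the measured scintillation signal. The functional form and initial guesses are assumptions chosen for illustration, not the analysis used in the paper.

```python
# Illustrative fit of a screen response: linear at low charge density, saturating above.
import numpy as np
from scipy.optimize import curve_fit

def response(q, sensitivity, q_sat):
    """Scintillation signal vs. deposited charge density q (pC/mm^2); assumed form."""
    return sensitivity * q / (1.0 + q / q_sat)

def calibrate_screen(q_ref, signal):
    q_ref, signal = np.asarray(q_ref, float), np.asarray(signal, float)
    p0 = [signal[-1] / q_ref[-1], 50.0]           # rough initial guess
    popt, pcov = curve_fit(response, q_ref, signal, p0=p0)
    return popt, np.sqrt(np.diag(pcov))           # parameters and their uncertainties

# q_ref: charge densities delivered by the linac; signal: integrated camera counts
```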
Contributions to the problem of piezoelectric accelerometer calibration. [using lock-in voltmeter
NASA Technical Reports Server (NTRS)
Jakab, I.; Bordas, A.
1974-01-01
After discussing the principal calibration methods for piezoelectric accelerometers, an experimental setup for accelerometer calibration by the reciprocity method is described. It is shown how the use of a lock-in voltmeter eliminates errors due to viscous damping and electrical loading.
Toward Active Control of Noise from Hot Supersonic Jets
2012-05-14
was developed that would allow for easy data sharing among the research teams. This format includes the acoustic data along with all calibration ...SUPERSONIC | QUARTERLY RPT. 3 ■ 1 i; ’XZ. "• Tff . w w i — r i (a) Far-Field Array Calibration (b) MHz Rate PIV Camera Setup Figure... Plenoptic camera is a similar setup to determine 3-D motion of the flow using a thick light sheet. 2.3 Update on CFD Progress In the previous interim
NASA Astrophysics Data System (ADS)
Chaibou Begou, Jamilatou; Jomaa, Seifeddine; Benabdallah, Sihem; Rode, Michael
2015-04-01
Due to climate change, drier conditions have prevailed in West Africa since the seventies, with important consequences for water resources. In order to identify and implement management strategies for adaptation to climate change in the water sector, it is crucial to improve our physical understanding of the evolution of water resources in the region. To this end, hydrologic modelling is an appropriate tool for flow predictions under changing climate and land use conditions. In this study, the applicability and performance of the recent version of the Soil and Water Assessment Tool (SWAT2012) model were tested on the Bani catchment in West Africa under limited data conditions. Model parameter identification was also tested using single-site and multi-site calibration approaches. The Bani is located in the upper part of the Niger River basin and drains an area of about 101,000 km2 at the outlet of Douna. The climate is tropical, humid to semi-arid from south to north, with an average annual rainfall of 1050 mm (period 1981-2000). Global datasets were used for the model setup, such as the USGS HydroSHEDS DEM, USGS LCI GlobCov2009 land cover and the FAO Digital Soil Map of the World. Daily measured rainfall from nine rain gauges and maximum and minimum temperature from five weather stations covering the period 1981-1997 were used for model setup. Sensitivity analysis, calibration and validation were performed within SWAT-CUP using the GLUE procedure, first at the Douna station (single-site calibration), then at three additional internal stations, Bougouni, Pankourou and Kouoro1 (multi-site calibration). Model parameters were calibrated at daily time step for the period 1983-1992, then validated for the period 1993-1997. A period of two years (1981-1982) was used for model warm-up. Results of the single-site calibration showed Nash-Sutcliffe efficiency (NS) and correlation coefficient (R2) values of 0.76 and 0.79, respectively. For the validation period the performance improved considerably, with NS and R2 equal to 0.84 and 0.87, respectively. The degree of total uncertainty is quantified by a minimum P-factor of 0.61 and a maximum R-factor of 0.59. These statistics suggest that the model performance can be judged as very good, especially considering the limited data conditions and the high climate, land use and soil variability in the studied basin. The most sensitive parameters (CN2, OVN and SLSUBBSN) are related to surface runoff, reflecting the dominance of this process in streamflow generation. In the next step, the multi-site calibration approach will be applied to the Bani basin to assess how much additional observations improve model parameter identification.
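For reference, the two skill scores quoted above, the Nash-Sutcliffe efficiency and the coefficient of determination (computed here as the squared Pearson correlation), can be obtained from observed and simulated daily discharge series as follows.

```python
# Skill scores for daily discharge series (NumPy arrays of equal length).
import numpy as np

def nash_sutcliffe(obs, sim):
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def r_squared(obs, sim):
    r = np.corrcoef(obs, sim)[0, 1]      # Pearson correlation coefficient
    return r ** 2

# e.g. nash_sutcliffe(q_obs, q_sim) and r_squared(q_obs, q_sim) give values such as
# the 0.76 and 0.79 reported for the Douna calibration period.
```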
A user-friendly software package to ease the use of VIC hydrologic model for practitioners
NASA Astrophysics Data System (ADS)
Wi, S.; Ray, P.; Brown, C.
2016-12-01
The VIC (Variable Infiltration Capacity) hydrologic and river routing model simulates the water and energy fluxes that occur near the land surface and provides users with useful information regarding the quantity and timing of available water at points of interest within the basin. However, despite its popularity (proved by numerous applications in the literature), its wider adoption is hampered by the considerable effort required to prepare model inputs; e.g., input files storing spatial information related to watershed topography, soil properties, and land cover. This study presents a user-friendly software package (named VIC Setup Toolkit) developed within the MATLAB (matrix laboratory) framework and accessible through an intuitive graphical user interface. The VIC Setup Toolkit enables users to navigate the model building process confidently through prompts and automation, with an intention to promote the use of the model for both practical and academic purposes. The automated processes include watershed delineation, climate and geographical input set-up, model parameter calibration, graph generation and output evaluation. We demonstrate the package's usefulness in various case studies with the American River, Oklahoma River, Feather River and Zambezi River basins.
NASA Astrophysics Data System (ADS)
Schumann, Andreas; Oppel, Henning
2017-04-01
To represent the hydrological behaviour of catchments, a model should reproduce the hydrologically most relevant catchment characteristics. These are heterogeneously distributed within a watershed but often interrelated and subject to a certain spatial organisation. Since common models are mostly based on fundamental assumptions about hydrological processes, reducing the variance of catchment properties as well as incorporating the spatial organisation of the catchment is desirable. We have developed a method that combines the idea of the width function, used for the determination of the geomorphologic unit hydrograph, with information about soil or topography. With this method we are able to assess the spatial organisation of selected catchment characteristics. An algorithm was developed that structures a watershed into sub-basins and other spatial units so as to minimise its heterogeneity. The outcomes of this algorithm are used for the spatial setup of a semi-distributed model. Since the spatial organisation of a catchment is not bound to a single characteristic, we have to embed information on multiple catchment properties. For this purpose we applied a fuzzy-based method to combine the spatial setups for multiple single characteristics into a combined, optimal spatial differentiation. Utilizing this method, we are able to propose a spatial structure for a semi-distributed hydrological model, comprising the definition of sub-basins and a zonal classification within each sub-basin. Besides the improved spatial structuring, the performed analysis improves modelling in another way: the spatial variability of catchment characteristics, which is kept to a minimum of heterogeneity within the zones, can be considered in a parameter-constrained calibration scheme. In a case study, both options were used to explore the benefits of incorporating the spatial organisation and the derived parameter constraints for the parametrisation of an HBV-96 model. We use two benchmark model setups (lumped, and semi-distributed by common approaches) to address the benefits at different temporal and spatial scales. Moreover, the benefits for calibration effort, model performance in validation periods and process extrapolation are shown.
Large scale modelling of catastrophic floods in Italy
NASA Astrophysics Data System (ADS)
Azemar, Frédéric; Nicótina, Ludovico; Sassi, Maximiliano; Savina, Maurizio; Hilberts, Arno
2017-04-01
The RMS European Flood HD model® is a suite of country-scale flood catastrophe models covering 13 countries throughout continental Europe and the UK. The models are developed with the goal of supporting risk assessment analyses for the insurance industry. Within this framework RMS is developing a hydrologic and inundation model for Italy. The model aims at reproducing the hydrologic and hydraulic properties across the domain through a modelling chain. A semi-distributed hydrologic model that allows capturing the spatial variability of the runoff formation processes is coupled with a one-dimensional river routing algorithm and a two-dimensional (depth-averaged) inundation model. This model setup allows capturing the flood risk from both pluvial (overland flow) and fluvial flooding. Here we describe the calibration and validation methodologies for this modelling suite applied to the Italian river basins. The variability that characterizes the domain (in terms of meteorology, topography and hydrologic regimes) requires a modelling approach able to represent a broad range of meteo-hydrologic regimes. The calibration of the rainfall-runoff and river routing models is performed by means of a genetic algorithm that identifies the set of best performing parameters within the search space over the last 50 years. We first establish the quality of the calibration parameters on the full hydrologic balance and on individual discharge peaks by comparing extreme statistics to observations over the calibration period at several stations. The model is then used to analyze the major floods in the country; we discuss the different meteorological setups leading to the historical events and the physical mechanisms that induced these floods. We can thus assess the performance of RMS' hydrological model in view of the physical mechanisms leading to floods and highlight the main controls on flood risk modelling throughout the country. The model's ability to accurately simulate antecedent conditions and discharge hydrographs over the affected area is also assessed, showing that spatio-temporal correlation is retained through the modelling chain. Results show that our modelling approach can capture a wide range of conditions leading to major floods in the Italian peninsula. Under the umbrella of the RMS European Flood HD models this constitutes, to our knowledge, the only operational flood risk model to be applied at continental scale with a coherent model methodology and a domain-wide Monte Carlo stochastic set.
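A calibration loop of this kind can be sketched generically: an evolutionary optimiser searches the parameter space for the set that maximises a skill score against observed discharge. SciPy's differential evolution is used here as a stand-in for the genetic algorithm mentioned above, and run_model() is a placeholder for the rainfall-runoff/routing chain.

```python
# Sketch of evolutionary calibration of a rainfall-runoff model against observations.
import numpy as np
from scipy.optimize import differential_evolution

def nse(obs, sim):
    """Nash-Sutcliffe efficiency of simulated vs. observed discharge."""
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def calibrate(run_model, q_obs, bounds):
    """run_model(params) -> simulated discharge; bounds: list of (low, high) per parameter."""
    def cost(params):
        return -nse(q_obs, run_model(params))     # minimise the negative NSE
    result = differential_evolution(cost, bounds, seed=1, maxiter=200, polish=False)
    return result.x, -result.fun                  # best parameter set, best NSE
```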
Tripathi, T S; Bala, M; Asokan, K
2014-08-01
We report on an experimental setup for the simultaneous measurement of the thermoelectric power (TEP) of two samples in the temperature range from 77 K to 500 K using optimum electronic instruments. The setup consists of two rectangular copper bars in a bridge arrangement for sample mounting, two surface-mount (SM) chip resistors for creating an alternating temperature gradient, and a type E thermocouple in differential geometry for measuring the temperature gradient (ΔT) across the samples. In addition, a diode arrangement has been made for the alternating heating of the SM resistors using only one DC current source. The measurement accuracy of ΔT increases with the differential thermocouple arrangement. For the calibration of the setup, measurements of TEP on a high-purity (99.99%) platinum wire and on the type K thermocouple wires Chromel and Alumel have been performed from 77 K to 500 K with respect to copper lead wires. Additionally, this setup can be utilized to calibrate an unknown sample against a sample of known absolute TEP.
An Open Source modular platform for hydrological model implementation
NASA Astrophysics Data System (ADS)
Kolberg, Sjur; Bruland, Oddbjørn
2010-05-01
An implementation framework for setup and evaluation of spatio-temporal models is developed, forming a highly modularized distributed model system. The ENKI framework allows building space-time models for hydrological or other environmental purposes from a suite of separately compiled subroutine modules. The approach makes it easy for students, researchers and other model developers to implement, exchange, and test single routines in a fixed framework. The open-source license and modular design of ENKI will also facilitate rapid dissemination of new methods to institutions engaged in operational hydropower forecasting or other water resource management. Written in C++, ENKI uses a plug-in structure to build a complete model from separately compiled subroutine implementations. These modules contain very little code apart from the core process simulation, and are compiled as dynamic-link libraries (dlls). A narrow interface allows the main executable to recognise the number and type of the different variables in each routine. The framework then exposes these variables to the user within the proper context, ensuring that time series exist for input variables, initialisation for states, GIS data sets for static map data, manually or automatically calibrated values for parameters, etc. ENKI is designed to meet three different levels of involvement in model construction:
• Model application: running and evaluating a given model; regional calibration against arbitrary data using a rich suite of objective functions, including likelihood and Bayesian estimation; uncertainty analysis directed towards input or parameter uncertainty.
o Need not: know the model's composition of subroutines, the internal variables in the model, or how method modules are created.
• Model analysis: linking together different process methods, including parallel setup of alternative methods for solving the same task; investigating the effect of different spatial discretization schemes.
o Need not: write or compile computer code, or handle file I/O for each module.
• Routine implementation and testing: implementing new process-simulating methods/equations, specialised objective functions or quality-control routines, and testing them in an existing framework.
o Need not: implement a user or model interface for the new routine, I/O handling, administration of model setup and runs, calibration and validation routines, etc.
Originally developed for Norway's largest hydropower producer, Statkraft, ENKI is now being turned into an Open Source project. At the time of writing, the licence and the project administration are not yet established. It also remains to port the application to other compilers and computer platforms. However, we hope that ENKI will prove useful for both academic and operational users.
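The plug-in idea — routines declaring their variables through a narrow interface so the framework can supply forcing, initialise states and drive the time loop — can be sketched briefly. The sketch below is a hypothetical Python analogue, not ENKI's C++/dll interface; the Routine class, the degree-day snow routine and the driver loop are assumptions.

```python
from dataclasses import dataclass

@dataclass
class Routine:
    """A process routine declaring its inputs, states and outputs by name,
    so the framework can allocate time series, initial states, etc."""
    name: str
    inputs: list
    states: list
    outputs: list
    step: callable = None

def snow_step(data):
    # Degree-day snowmelt: melt what the temperature allows, keep the rest.
    melt = min(data["swe"], max(0.0, 3.0 * data["temperature"]))
    data["swe"] += data["precipitation"] - melt
    data["melt"] = melt

snow = Routine("snow", inputs=["temperature", "precipitation"],
               states=["swe"], outputs=["melt"], step=snow_step)

# The "framework": checks that forcing exists for every declared input,
# initialises the state, then runs the routine over the time steps.
forcing = {"temperature": [-2.0, 1.0, 4.0], "precipitation": [5.0, 0.0, 2.0]}
assert all(v in forcing for v in snow.inputs)
data = {"swe": 0.0}
for t in range(len(forcing["temperature"])):
    data.update({k: forcing[k][t] for k in snow.inputs})
    snow.step(data)
    print(f"t={t} swe={data['swe']:.1f} melt={data['melt']:.1f}")
```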
Out of lab calibration of a rotating 2D scanner for 3D mapping
NASA Astrophysics Data System (ADS)
Koch, Rainer; Böttcher, Lena; Jahrsdörfer, Maximilian; Maier, Johannes; Trommer, Malte; May, Stefan; Nüchter, Andreas
2017-06-01
Mapping is an essential task in mobile robotics. To fulfil advanced navigation and manipulation tasks, a 3D representation of the environment is required. Applying stereo cameras or time-of-flight (TOF) cameras is one way to achieve this, but they suffer from drawbacks that make it difficult to map properly. Therefore, costly 3D laser scanners are applied. An inexpensive alternative for building a 3D representation is to use a 2D laser scanner and rotate the scan plane around an additional axis. A 3D point cloud acquired with such a custom device consists of multiple 2D line scans, so the scanner pose of each line scan needs to be determined, along with parameters resulting from a calibration, to generate a 3D point cloud. Using external sensor systems is a common method to determine these calibration parameters, but this is costly and difficult when the robot needs to be calibrated outside the lab. Thus, this work presents a calibration method applied to a rotating 2D laser scanner. It uses a hardware setup to identify the required calibration parameters. This hardware setup is light, small, and easy to transport, so an out-of-lab calibration is possible. Additionally, a theoretical model was created to test the algorithm and analyse the impact of the scanner accuracy. The hardware components of the 3D scanner system are a HOKUYO UTM-30LX-EW 2D laser scanner, a Dynamixel servo-motor, and a control unit. The calibration system consists of a hemisphere with a circular plate mounted in its interior. The algorithm needs to be provided with a dataset of a single rotation from the laser scanner. To achieve a proper calibration result, the scanner needs to be located in the middle of the hemisphere. By means of geometric formulas the algorithm determines the individual deviations of the placed laser scanner; to minimize errors, it solves the formulas in an iterative process. First, the calibration algorithm was tested with an ideal hemisphere model created in Matlab. Second, the laser scanner was mounted differently, and the scanner position and the rotation axis were modified; every introduced deviation was compared with the algorithm results. Several measurement settings were tested repeatedly with the 3D scanner system and the calibration system. The results show that the length accuracy of the laser scanner is most critical: it influences the required size of the hemisphere and the calibration accuracy.
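To make the geometry concrete, the sketch below assembles a 3D point cloud from 2D line scans taken at successive rotation angles, including a small translational offset between scanner and rotation axis of the kind such a calibration has to recover. The axis convention and offset values are assumptions for illustration, not the calibrated parameters of the device described above.

```python
import numpy as np

def line_scan_to_3d(ranges, bearings, rotation_angle, offset=(0.0, 0.0, 0.0)):
    """Convert one 2D line scan (range/bearing in the scan plane) into 3D
    points, given the rotation angle of the scan plane about the z-axis and
    a translational offset of the scanner w.r.t. the rotation axis."""
    # Points in the scanner's own x-z plane.
    x = ranges * np.cos(bearings)
    z = ranges * np.sin(bearings)
    pts = np.column_stack([x, np.zeros_like(x), z]) + np.asarray(offset)
    # Rotate the whole scan plane about the vertical (z) axis.
    c, s = np.cos(rotation_angle), np.sin(rotation_angle)
    R = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
    return pts @ R.T

bearings = np.linspace(-np.pi / 2, np.pi / 2, 181)      # 1-degree steps
cloud = np.vstack([
    line_scan_to_3d(np.full_like(bearings, 2.0), bearings, ang,
                    offset=(0.02, 0.0, 0.01))           # 2 cm / 1 cm misalignment
    for ang in np.linspace(0.0, np.pi, 90)
])
print(cloud.shape)   # (90 * 181, 3) points from a single half rotation
```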
An electro-optic modulator-assisted wavevector-resolving Brillouin light scattering setup.
Neumann, T; Schneider, T; Serga, A A; Hillebrands, B
2009-05-01
Brillouin light scattering spectroscopy is a powerful technique which incorporates several extensions such as space-, time-, phase-, and wavevector-resolution. Here, we report on the improvement of the wavevector-resolving setup by including an electro-optic modulator. This provides a reference to calibrate the position of the diaphragm hole which is used for wavevector selection. The accuracy of this calibration is only limited by the accuracy of the wavevector measurement itself. To demonstrate the validity of the approach the wavevectors of dipole-dominated spin waves excited by a microstrip antenna were measured.
SU-E-T-641: Proton Range Measurements Using a Geometrically Calibrated Liquid Scintillator Detector
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hui, C; Robertson, D; Alsanea, F
2015-06-15
Purpose: The purpose of this work is to develop a geometric calibration method to accurately calculate physical distances within a liquid scintillator detector and to assess the accuracy, consistency, and robustness of proton beam range measurements when using a liquid scintillator detector system with the proposed geometric calibration process. Methods: We developed a geometric calibration procedure to accurately convert pixel locations in the camera frame into physical locations in the scintillator frame. To ensure accuracy, the geometric calibration was performed before each experiment. The liquid scintillator was irradiated with spot scanning proton beams of 94 energies in two deliveries. A CCD camera was used to capture the two-dimensional scintillation light profile of each of the proton energies. An algorithm was developed to automatically calculate the proton range from the acquired images. The measured range was compared to the nominal range to assess the accuracy of the detector. To evaluate the robustness of the detector between each setup, the experiments were repeated on three different days. To evaluate the consistency of the measurements between deliveries, three sets of measurements were acquired for each experiment. Results: Using this geometric calibration procedure, the proton beam ranges measured using the liquid scintillator system were all within 0.3mm of the nominal range. The average difference between the measured and nominal ranges was −0.20mm. The delivery-to-delivery standard deviation of the proton range measurement was 0.04mm, and the setup-to-setup standard deviation of the measurement was 0.10mm. Conclusion: The liquid scintillator system can measure the range of all 94 beams in just two deliveries. With the proposed geometric calibration, it can measure proton range with sub-millimeter accuracy, and the measurements were shown to be consistent between deliveries and setups. Therefore, we conclude that the liquid scintillator system provides a reliable and convenient tool for proton range measurement. This project was supported in part by award number CA182450 from the National Cancer Institute.
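A common way to extract a range from a depth-light profile is to locate the distal position where the signal falls to a fixed fraction of its peak; the sketch below does this on a synthetic profile with linear interpolation. The 50% threshold and the synthetic curve are assumptions for illustration, not the algorithm of the cited work.

```python
import numpy as np

def distal_range(depth_mm, signal, fraction=0.5):
    """Depth at which the signal drops to `fraction` of its maximum,
    searching distally from the peak and interpolating linearly."""
    peak = np.argmax(signal)
    threshold = fraction * signal[peak]
    distal = signal[peak:]
    idx = np.argmax(distal < threshold)          # first distal sample below threshold
    x0, x1 = depth_mm[peak + idx - 1], depth_mm[peak + idx]
    y0, y1 = signal[peak + idx - 1], signal[peak + idx]
    return x0 + (threshold - y0) * (x1 - x0) / (y1 - y0)

depth = np.linspace(0.0, 200.0, 2001)            # 0.1 mm sampling
# Synthetic peaked light profile on a gentle background ramp.
profile = np.exp(-((depth - 150.0) / 4.0) ** 2) + 0.2 * depth / 200.0
print(f"measured range: {distal_range(depth, profile):.2f} mm")
```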
Integrated calibration of multiview phase-measuring profilometry
NASA Astrophysics Data System (ADS)
Lee, Yeong Beum; Kim, Min H.
2017-11-01
Phase-measuring profilometry (PMP) measures per-pixel height information of a surface with high accuracy. Height information captured by a camera in PMP relies on its screen coordinates. Therefore, a PMP measurement from one view cannot be integrated directly with measurements from different views due to the intrinsic difference of the screen coordinates. In order to integrate multiple PMP scans, an auxiliary calibration of each camera's intrinsic and extrinsic properties is required, in addition to the principal PMP calibration. This is cumbersome and often requires physical constraints in the system setup, and multiview PMP is consequently rarely practiced. In this work, we present a novel multiview PMP method that yields three-dimensional global coordinates directly so that three-dimensional measurements can be integrated easily. Our PMP calibration parameterizes intrinsic and extrinsic properties of the configuration of both a camera and a projector simultaneously. It also does not require any geometric constraints on the setup. In addition, we propose a novel calibration target that can remain static without requiring any mechanical operation while conducting multiview calibrations, whereas existing calibration methods require manually changing the target's position and orientation. Our results validate the accuracy of the measurements and demonstrate the advantages of our multiview PMP.
Oceanic Whitecaps and Associated, Bubble-Mediated, Air-Sea Exchange Processes
1992-10-01
experiments performed in laboratory conditions using the Air-Sea Exchange Monitoring System (A-SEMS). EXPERIMENTAL SET-UP: In a first look, the Air-Sea Exchange... Model 225, equipped with a Model 519 plug-in module. Other complementary information on A-SEMS along with results from first tests and calibration... between 9.5°C and 22.4°C within the first 24 hours after transferring the water sample into laboratory conditions. The results show an enhancement of
ENKI - An Open Source environmental modelling platform
NASA Astrophysics Data System (ADS)
Kolberg, S.; Bruland, O.
2012-04-01
The ENKI software framework for implementing spatio-temporal models is now released under the LGPL license. Originally developed for the evaluation and comparison of distributed hydrological model compositions, ENKI can be used for simulating any time-evolving process over a spatial domain. The core approach is to connect a set of user-specified subroutines into a complete simulation model and provide all administrative services needed to calibrate and run that model. This includes functionality for geographical region setup, all file I/O, calibration and uncertainty estimation, etc. The approach makes it easy for students, researchers and other model developers to implement, exchange, and test single routines and various model compositions in a fixed framework. The open-source license and modular design of ENKI will also facilitate rapid dissemination of new methods to institutions engaged in operational water resource management. ENKI uses a plug-in structure to invoke separately compiled subroutines built as dynamic-link libraries (dlls). The source code of an ENKI routine is highly compact, with a narrow framework-routine interface allowing the main program to recognise the number, types, and names of the routine's variables. The framework then exposes these variables to the user within the proper context, ensuring that distributed maps coincide spatially, time series exist for input variables, states are initialised, GIS data sets exist for static map data, and manually or automatically calibrated values exist for parameters. By using function calls and in-memory data structures to invoke routines and facilitate information flow, ENKI provides good performance: for a typical distributed hydrological model setup in a spatial domain of 25000 grid cells, 3-4 simulated time steps per second should be expected. Future adaptation to parallel processing may further increase this speed. Recent modifications to ENKI include a full separation of API and user interface, making it possible to run ENKI from GIS programs and other software environments. ENKI currently compiles under Windows and Visual Studio only, but ambitions exist to remove the platform and compiler dependencies.
Hijazi, Bilal; Cool, Simon; Vangeyte, Jürgen; Mertens, Koen C; Cointault, Frédéric; Paindavoine, Michel; Pieters, Jan G
2014-11-13
A 3D imaging technique using a high-speed binocular stereovision system was developed, in combination with corresponding image processing algorithms, for accurate determination of the parameters of particles leaving the spinning disks of centrifugal fertilizer spreaders. Validation of the stereo-matching algorithm using a virtual 3D stereovision simulator indicated an error of less than 2 pixels for 90% of the particles. The setup was validated using the cylindrical spread pattern of an experimental spreader. A 2D correlation coefficient of 90% and a relative error of 27% were found between the experimental results and the (simulated) spread pattern obtained with the developed setup. In combination with a ballistic flight model, the developed image acquisition and processing algorithms can enable fast determination and evaluation of the spread pattern, which can be used as a tool for spreader design and precise machine calibration.
Pella, A; Riboldi, M; Tagaste, B; Bianculli, D; Desplanques, M; Fontana, G; Cerveri, P; Seregni, M; Fattori, G; Orecchia, R; Baroni, G
2014-08-01
In an increasing number of clinical indications, radiotherapy with accelerated particles shows relevant advantages when compared with high-energy X-ray irradiation. However, due to the finite range of ions, particle therapy can be severely compromised by setup errors and geometric uncertainties. The purpose of this work is to describe the commissioning and the design of the quality assurance procedures for patient positioning and setup verification systems at the Italian National Center for Oncological Hadrontherapy (CNAO). The accuracy of the systems installed at CNAO for patient positioning and setup verification has been assessed using a laser tracking device. The accuracy of calibration and of image-based setup verification relying on the in-room X-ray imaging system was also quantified. Quality assurance tests to check the integration among all patient setup systems were designed, and records of daily QA tests since the start of clinical operation (2011) are presented. The overall accuracy of the patient positioning system and of the patient verification system motion was shown to be below 0.5 mm under all the examined conditions, with median values below the 0.3 mm threshold. Image-based registration in phantom studies exhibited sub-millimetric accuracy in setup verification at both cranial and extra-cranial sites. The calibration residuals of the OTS were found consistent with expectations, with peak values below 0.3 mm. Quality assurance tests, performed daily before clinical operation, confirm adequate integration and sub-millimetric setup accuracy. Robotic patient positioning was successfully integrated with optical tracking and stereoscopic X-ray verification for patient setup in particle therapy. Sub-millimetric setup accuracy was achieved and consistently verified in daily clinical operation.
A novel 360-degree shape measurement using a simple setup with two mirrors and a laser MEMS scanner
NASA Astrophysics Data System (ADS)
Jin, Rui; Zhou, Xiang; Yang, Tao; Li, Dong; Wang, Chao
2017-09-01
There is no denying that 360-degree shape measurement technology plays an important role in the field of three-dimensional optical metrology. Traditional optical 360-degree shape measurement methods fall into two main kinds: the first places multiple scanners to achieve 360-degree measurements; the second uses a high-precision rotating device to obtain the 360-degree shape model. The former increases the number of scanners and is costly, while the rotating devices of the latter are time consuming. This paper presents an optical 360-degree shape measurement method that is fully static, fast, and low cost. The measuring system consists of two mirrors at a certain angle, a laser projection system, a stereoscopic calibration block, and two cameras. Most importantly, the laser MEMS scanner achieves precise movement of the laser stripes without any movement mechanism, improving measurement accuracy and efficiency. Furthermore, a novel stereo calibration technique presented in this paper achieves point cloud data registration and thereby yields the 360-degree model of objects. A stereoscopic calibration block with special coded patterns on six sides is used in this stereo calibration method, through which we can quickly obtain 360-degree models of objects.
A data assimilation system combining CryoSat-2 data and hydrodynamic river models
NASA Astrophysics Data System (ADS)
Schneider, Raphael; Ridler, Marc-Etienne; Godiksen, Peter Nygaard; Madsen, Henrik; Bauer-Gottwein, Peter
2018-02-01
There are numerous hydrologic studies using satellite altimetry data from repeat-orbit missions such as Envisat or Jason over rivers. This study is one of the first examples of combining altimetry from a drifting ground-track satellite mission, namely CryoSat-2, with a river model. CryoSat-2 SARIn Level 2 data are used to improve a 1D hydrodynamic model of the Brahmaputra River in South Asia, which is based on the Saint-Venant equations for unsteady flow and set up in the MIKE HYDRO River software. After calibration against discharge and water level, the hydrodynamic model represents the spatio-temporal variations of water levels accurately and without bias. A data assimilation framework has been developed and linked with the model. It is a flexible framework that can assimilate water level data that are arbitrarily distributed in time and space. The setup has been used to assimilate CryoSat-2 water level observations over the Assam valley for the years 2010-2015, using an Ensemble Transform Kalman Filter (ETKF). Performance improvement in terms of discharge forecasting skill was then evaluated. For experiments with synthetic CryoSat-2 data the continuous ranked probability score (CRPS) was improved by up to 32%, whilst for experiments assimilating real data it could be improved by up to 10%. The developed methods are expected to be transferable to other rivers and altimeter missions. The model setup and calibration are based almost entirely on globally available remote sensing data.
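The assimilation step can be illustrated with a basic stochastic ensemble Kalman update, which stands in here for the ETKF used in the study; the state dimension, observation operator and error statistics below are toy assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)

def enkf_update(ensemble, obs, obs_err_std, H):
    """Stochastic EnKF analysis step.
    ensemble: (n_members, n_state) forecast ensemble of water levels
    obs:      (n_obs,) observed water levels (e.g. altimetry)
    H:        (n_obs, n_state) linear observation operator"""
    n_members = ensemble.shape[0]
    X = ensemble - ensemble.mean(axis=0)                 # state anomalies
    Y = X @ H.T                                          # observation-space anomalies
    P_yy = Y.T @ Y / (n_members - 1) + obs_err_std**2 * np.eye(len(obs))
    P_xy = X.T @ Y / (n_members - 1)
    K = P_xy @ np.linalg.inv(P_yy)                       # Kalman gain
    perturbed_obs = obs + rng.normal(0, obs_err_std, size=(n_members, len(obs)))
    innovations = perturbed_obs - ensemble @ H.T
    return ensemble + innovations @ K.T

# Toy example: 5 river reaches, water level observed at reaches 1 and 3.
truth = np.array([10.2, 10.0, 9.7, 9.5, 9.1])
forecast = truth + rng.normal(0, 0.3, size=(30, 5))      # 30-member ensemble
H = np.zeros((2, 5)); H[0, 1] = H[1, 3] = 1.0
obs = truth[[1, 3]] + rng.normal(0, 0.05, size=2)
analysis = enkf_update(forecast, obs, 0.05, H)
print("forecast mean:", forecast.mean(axis=0).round(2))
print("analysis mean:", analysis.mean(axis=0).round(2))
```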
NASA Astrophysics Data System (ADS)
Reynders, Edwin; Maes, Kristof; Lombaert, Geert; De Roeck, Guido
2016-01-01
Identified modal characteristics are often used as a basis for the calibration and validation of dynamic structural models, for structural control, for structural health monitoring, etc. It is therefore important to know their accuracy. In this article, a method for estimating the (co)variance of modal characteristics that are identified with the stochastic subspace identification method is validated for two civil engineering structures. The first structure is a damaged prestressed concrete bridge for which acceleration and dynamic strain data were measured in 36 different setups. The second structure is a mid-rise building for which acceleration data were measured in 10 different setups. There is a good quantitative agreement between the predicted levels of uncertainty and the observed variability of the eigenfrequencies and damping ratios between the different setups. The method can therefore be used with confidence for quantifying the uncertainty of the identified modal characteristics, also when some or all of them are estimated from a single batch of vibration data. Furthermore, the method is seen to yield valuable insight in the variability of the estimation accuracy from mode to mode and from setup to setup: the more informative a setup is regarding an estimated modal characteristic, the smaller is the estimated variance.
A novel explicit approach to model bromide and pesticide transport in connected soil structures
NASA Astrophysics Data System (ADS)
Klaus, J.; Zehe, E.
2011-07-01
The present study tests whether an explicit treatment of worm burrows and tile drains as connected structures is feasible for simulating water flow, bromide and pesticide transport in structured heterogeneous soils at hillslope scale. The essence is to represent worm burrows as morphologically connected paths of low flow resistance in a hillslope model. A recent Monte Carlo study (Klaus and Zehe, 2010, Hydrological Processes, 24, p. 1595-1609) revealed that this approach allowed successful reproduction of tile drain event discharge recorded during an irrigation experiment at a tile drained field site. However, several "hillslope architectures" that were all consistent with the available extensive data base allowed a good reproduction of tile drain flow response. Our second objective was thus to find out whether this "equifinality" in spatial model setups may be reduced when including bromide tracer data in the model falsification process. We thus simulated transport of bromide for the 13 spatial model setups that performed best with respect to reproduce tile drain event discharge, without any further calibration. All model setups allowed a very good prediction of the temporal dynamics of cumulated bromide leaching into the tile drain, while only four of them matched the accumulated water balance and accumulated bromide loss into the tile drain. The number of behavioural model architectures could thus be reduced to four. One of those setups was used for simulating transport of Isoproturon, using different parameter combinations to characterise adsorption according to the Footprint data base. Simulations could, however, only reproduce the observed leaching behaviour, when we allowed for retardation coefficients that were very close to one.
A novel explicit approach to model bromide and pesticide transport in soils containing macropores
NASA Astrophysics Data System (ADS)
Klaus, J.; Zehe, E.
2011-01-01
The present study tests whether an explicit treatment of worm burrows is feasible for simulating water flow, bromide and pesticide transport in structured heterogeneous soils. The essence is to represent worm burrows as morphologically connected paths of low flow resistance in the spatially highly resolved model domain. A recent Monte Carlo study (Klaus and Zehe, 2010) revealed that this approach allowed successful reproduction of tile drain event discharge recorded during an irrigation experiment at a tile drained field site. However, several "hillslope architectures" that were all consistent with the available extensive data base allowed a good reproduction of tile drain flow response. Our second objective was thus to find out whether this "equifinality" in spatial model setups may be reduced when including bromide tracer data in the model falsification process. We thus simulated transport of bromide and Isoproturon (IPU) for the 13 spatial model setups, which performed best with respect to reproduce tile drain event discharge, without any further calibration. All model setups allowed a very good prediction of the temporal dynamics of cumulated bromide leaching into the tile drain, while only four of them matched the accumulated water balance and accumulated bromide loss into the tile drain. The number of behavioural model architectures could thus be reduced to four. One of those setups was used for simulating transport of IPU, using different parameter combinations to characterise adsorption according to the Footprint data base. Simulations could, however, only reproduce the observed leaching behaviour, when we allowed for retardation coefficients that were very close to one.
Extrinsic Calibration of a Laser Galvanometric Setup and a Range Camera.
Sels, Seppe; Bogaerts, Boris; Vanlanduit, Steve; Penne, Rudi
2018-05-08
Currently, galvanometric scanning systems (like the one used in a scanning laser Doppler vibrometer) rely on a planar calibration procedure between a two-dimensional (2D) camera and the laser galvanometric scanning system to automatically aim a laser beam at a particular point on an object. In the case of nonplanar or moving objects, this calibration is no longer sufficiently accurate. In this work, a three-dimensional (3D) calibration procedure that uses a 3D range sensor is proposed. The 3D calibration is valid for all types of objects and retains its accuracy when objects are moved between subsequent measurement campaigns. The proposed 3D calibration uses a Non-Perspective-n-Point (NPnP) problem solution. The 3D range sensor is used to calculate the position of the object under test relative to the laser galvanometric system. With this extrinsic calibration, the laser galvanometric scanning system can automatically aim a laser beam at this object. In experiments, the mean accuracy of aiming the laser beam at an object is below 10 mm for 95% of the measurements. This achieved accuracy is mainly determined by the accuracy and resolution of the 3D range sensor. The new calibration method is significantly better than the original 2D calibration method, which in our setup achieves errors below 68 mm for 95% of the measurements.
A multi-layered active target for the study of neutron-unbound nuclides at NSCL
NASA Astrophysics Data System (ADS)
Freeman, Jessica; Gueye, Paul; Redpath, Thomas; MoNA Collaboration
2017-01-01
The characteristics of neutron-unbound nuclides were investigated using a multi-layered Si/Be active target designed for use with the MoNA/LISA setup at the National Superconducting Cyclotron Laboratory (NSCL). The setup consists of the MoNA/LISA arrays (for neutron detection) and a superconducting sweeper magnet (for charged-particle separation) to identify products following the decay of neutron-unbound states. The segmented target consisted of three 700 mg/cm2 beryllium targets and four 0.14 mm thick, 62x62 mm2 silicon detectors. As a commissioning experiment for the target, the decay of two-neutron-unbound 26O, populated in a one-proton removal reaction from a radioactive 27F beam, was measured. The 27F secondary radioactive beam from the NSCL's Coupled Cyclotron Facility was produced from the fragmentation of a 140 MeV/u 48Ca beam incident on a thick beryllium target and then cleanly selected by the A1900 fragment separator. The energy loss and position spectra of the incoming beam and reaction products were used to calibrate the silicon detectors to within 1.5% in both energy and position. A dedicated Geant4 model of the target was developed to simulate the energy loss within the target. A description of the experimental setup, simulation work, and energy and position calibration will be presented. DoE/NNSA - DE-NA0000979.
Using wind setdown and storm surge on Lake Erie to calibrate the air-sea drag coefficient.
Drews, Carl
2013-01-01
The air-sea drag coefficient controls the transfer of momentum from wind to water. In modeling storm surge, this coefficient is a crucial parameter for estimating the surge height. This study uses two strong wind events on Lake Erie to calibrate the drag coefficient using the Coupled Ocean Atmosphere Wave Sediment Transport (COAWST) modeling system and the Regional Ocean Modeling System (ROMS). Simulated waves are generated on the lake with Simulating WAves Nearshore (SWAN). Wind setdown provides the opportunity to eliminate wave setup as a contributing factor, since waves are minimal at the upwind shore. The study finds that model results significantly underestimate wind setdown and storm surge when a typical open-ocean formulation without waves is used for the drag coefficient. The contribution of waves to wind setdown and storm surge is 34.7%. Scattered lake ice also increases the effective drag coefficient by a factor of 1.1.
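The role of the drag coefficient is easiest to see in the bulk wind-stress formula used by surge models, tau = rho_air * C_d * U10^2; the sketch below compares the stress for an assumed no-wave value, an assumed wave-enhanced value, and the ice enhancement factor reported above. The coefficient values themselves are illustrative assumptions.

```python
RHO_AIR = 1.225            # kg/m^3

def wind_stress(u10, drag_coefficient):
    """Bulk formula: surface stress tau = rho_air * C_d * U10^2 (N/m^2)."""
    return RHO_AIR * drag_coefficient * u10**2

u10 = 20.0                          # m/s, a strong wind event (assumed)
cd_no_waves = 1.5e-3                # assumed open-ocean-style value without waves
cd_with_waves = 2.2e-3              # assumed wave-enhanced value
cd_with_ice = cd_with_waves * 1.1   # scattered-ice factor reported in the study

for label, cd in [("no waves", cd_no_waves),
                  ("with waves", cd_with_waves),
                  ("waves + ice", cd_with_ice)]:
    print(f"{label:12s} C_d={cd:.2e}  tau={wind_stress(u10, cd):.2f} N/m^2")
```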
Legato: Personal Computer Software for Analyzing Pressure-Sensitive Paint Data
NASA Technical Reports Server (NTRS)
Schairer, Edward T.
2001-01-01
'Legato' is personal computer software for analyzing radiometric pressure-sensitive paint (PSP) data. The software is written in the C programming language and executes under Windows 95/98/NT operating systems. It includes all operations normally required to convert pressure-paint image intensities to normalized pressure distributions mapped to physical coordinates of the test article. The program can analyze data from both single- and bi-luminophore paints and provides for both in situ and a priori paint calibration. In addition, there are functions for determining paint calibration coefficients from calibration-chamber data. The software is designed as a self-contained, interactive research tool that requires as input only the bare minimum of information needed to accomplish each function, e.g., images, model geometry, and paint calibration coefficients (for a priori calibration) or pressure-tap data (for in situ calibration). The program includes functions that can be used to generate needed model geometry files for simple model geometries (e.g., airfoils, trapezoidal wings, rotor blades) based on the model planform and airfoil section. All data files except images are in ASCII format and thus are easily created, read, and edited. The program does not use database files. This simplifies setup but makes the program inappropriate for analyzing massive amounts of data from production wind tunnels. Program output consists of Cartesian plots, false-colored real and virtual images, pressure distributions mapped to the surface of the model, assorted ASCII data files, and a text file of tabulated results. Graphical output is displayed on the computer screen and can be saved as publication-quality (PostScript) files.
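In-situ calibration of radiometric PSP data is commonly expressed through a Stern-Volmer-type relation between intensity ratio and pressure ratio, with coefficients fitted to pressure-tap readings; the sketch below fits such a relation by least squares. The two-coefficient form and the synthetic tap data are assumptions, not Legato's internal model.

```python
import numpy as np

# Synthetic pressure-tap data: intensity ratio I_ref/I versus pressure ratio P/P_ref.
rng = np.random.default_rng(3)
p_ratio_taps = np.linspace(0.6, 1.4, 12)
A_true, B_true = 0.18, 0.82                      # "true" paint coefficients (assumed)
i_ratio_taps = (A_true + B_true * p_ratio_taps) + rng.normal(0, 0.005, 12)

# Fit the Stern-Volmer-type relation  I_ref/I = A + B * (P/P_ref)  by least squares.
B_fit, A_fit = np.polyfit(p_ratio_taps, i_ratio_taps, 1)
print(f"fitted A={A_fit:.3f}, B={B_fit:.3f}")

# Convert a whole image of intensity ratios to pressure, using the fit.
i_ratio_image = rng.uniform(0.7, 1.3, size=(4, 4))
pressure_image = (i_ratio_image - A_fit) / B_fit          # in units of P/P_ref
print(pressure_image.round(2))
```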
Zyvoloski, G.; Kwicklis, E.; Eddebbarh, A.-A.; Arnold, B.; Faunt, C.; Robinson, B.A.
2003-01-01
This paper presents several different conceptual models of the Large Hydraulic Gradient (LHG) region north of Yucca Mountain and describes the impact of those models on groundwater flow near the potential high-level repository site. The results are based on a numerical model of the site-scale saturated zone beneath Yucca Mountain. This model is used for performance assessment predictions of radionuclide transport and to guide future data collection and modeling activities. The numerical model is calibrated by matching available water level measurements using parameter estimation techniques, along with more informal comparisons of the model to hydrologic and geochemical information. The model software (the hydrologic simulation code FEHM and the parameter estimation software PEST) and the model setup allow for efficient calibration of multiple conceptual models. Until now, the Large Hydraulic Gradient has been simulated using a low-permeability, east-west oriented feature, even though direct evidence for this feature is lacking. In addition to this model, we investigate and calibrate three additional conceptual models of the Large Hydraulic Gradient, all of which are based on a presumed zone of hydrothermal chemical alteration north of Yucca Mountain. After examining the heads and permeabilities obtained from the calibrated models, we present particle pathways from the potential repository that record differences in the predicted groundwater flow regime. The results show that the Large Hydraulic Gradient can be represented with the alternative conceptual models that include the hydrothermally altered zone. The predicted pathways are mildly sensitive to the choice of conceptual model and more sensitive to the quality of calibration in the vicinity of the repository. These differences are most likely due to different degrees of fit of model to data, and do not represent important differences in hydrologic conditions for the different conceptual models. © 2002 Elsevier Science B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Santos, C. Almeida; Costa, C. Oliveira; Batista, J.
2016-05-01
The paper describes a kinematic model-based solution to estimate simultaneously the calibration parameters of the vision system and the full motion (6-DOF) of large civil engineering structures, namely long-deck suspension bridges, from a sequence of stereo images captured by digital cameras. Using an arbitrary number of images and assuming a smooth structure motion, an Iterated Extended Kalman Filter is used to recursively estimate the projection matrices of the cameras and the structure's full motion (displacement and rotation) over time, supporting structural health monitoring requirements. Results related to the performance evaluation, obtained by numerical simulation and with real experiments, are reported. The real experiments were carried out in indoor and outdoor environments using a reduced-scale structure model to impose controlled motions. In both cases, the results obtained with a minimal setup comprising only two cameras and four non-coplanar tracking points showed high accuracy for on-line camera calibration and full structure motion estimation.
Jaquez, Javier; Farrell, Mike; Huang, Haibo; ...
2016-08-01
In 2014/2015 at the Omega laser facility, several experiments took place to calibrate the National Ignition Facility (NIF) X-ray spectrometer (NXS), which is used for high-resolution time-resolved spectroscopic experiments at NIF. The spectrometer allows experimentalists to measure the X-ray energy emitted from high-energy targets, which is used to understand key data such as mixing of materials in highly compressed fuel. The purpose of the experiments at Omega was to obtain information on the instrument performance and to deliver an absolute photometric calibration of the NXS before it was deployed at NIF. The X-ray emission sources fabricated for instrument calibration were 1-mm fused silica spheres with precisely known alloy composition coatings of Si/Ag/Mo, Ti/Cr/Ag, Cr/Ni/Zn, and Zn/Zr, which have emission in the 2- to 18-keV range. Critical to the spectrometer calibration is a known atomic composition of elements with low uncertainty for each calibration sphere. This study discusses the setup, fabrication, and precision metrology of these spheres as well as some interesting findings on the ternary magnetron-sputtered alloy structure.
Calibration of cathode strip gains in multiwire drift chambers of the GlueX experiment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Berdnikov, V. V.; Somov, S. V.; Pentchev, L.
A technique for calibrating cathode strip gains in multiwire drift chambers of the GlueX experiment is described. The accuracy of the technique is estimated based on Monte Carlo generated data with known gain coefficients in the strip signal channels. One of the four detector sections has been calibrated using cosmic rays. Results of drift chamber calibration on the accelerator beam upon inclusion in the GlueX experimental setup are presented.
Brouckaert, D; Uyttersprot, J-S; Broeckx, W; De Beer, T
2017-06-08
The industrial production of liquid detergent compositions entails a delicate balance of ingredients and process steps. In order to assure high quality and productivity in the manufacturing line, process analytical technology tools such as Raman spectroscopy are to be implemented. Marked chemical specificity, negligible water interference and high robustness are ascribed to this process analytical technique. Previously, at-line calibration models have been developed for determining, from Raman spectra, the concentration levels of the main ingredients of the liquid detergent being studied. A strategy is now proposed to transfer such at-line developed regression models to an in-line set-up, allowing real-time dosing control of the liquid detergent composition under production. To mimic in-line manufacturing conditions, liquid detergent compositions are created in a five-liter vessel with an overhead mixer. Raman spectra are continuously acquired by pumping the detergent under production via plastic tubing towards a Raman superhead probe, which is incorporated into a metal frame with a sapphire window facing the detergent fluid. Two at-line developed partial least squares (PLS) models, predicting the concentrations of surfactant 1 and polymer 2 in the examined liquid detergent composition, are targeted for transfer. A univariate slope/bias correction (SBC) is investigated, next to three well-acknowledged multivariate transformation methods: direct, piecewise and double-window piecewise direct standardization. Transfer is considered successful when the magnitude of the validation set's root mean square error of prediction (RMSEP) is similar to or smaller than the corresponding at-line prediction error. The transferred model offering the most promising outcome is further subjected to an exhaustive statistical evaluation, in order to appraise the applicability of the suggested calibration transfer method. Interval hypothesis tests are thereby performed for method comparison. It is illustrated that the investigated transfer approach yields satisfactory results, provided that the original at-line calibration model is thoroughly validated. Both SBC transfer models return lower RMSEP values than their corresponding original models. The surfactant 1 assay met all relevant evaluation criteria, demonstrating successful transfer to the in-line set-up. The in-line quantification of polymer 2 levels in the liquid detergent composition could not be statistically validated, due to the poorer performance of the at-line model. Copyright © 2017 Elsevier B.V. All rights reserved.
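Slope/bias correction is the simplest of the transfer methods listed above: the at-line model's predictions on transfer samples are regressed against reference values, and the fitted slope and bias are applied to subsequent in-line predictions. The sketch below shows the idea on synthetic numbers; the data and error figures are assumptions.

```python
import numpy as np

rng = np.random.default_rng(11)

# Reference concentrations of a transfer-sample set and the predictions the
# at-line PLS model produces when fed in-line spectra (systematically off).
reference = np.array([4.0, 5.5, 7.0, 8.5, 10.0, 11.5])           # % w/w (assumed)
atline_pred_on_inline = 0.92 * reference + 0.6 + rng.normal(0, 0.05, 6)

# Fit slope and bias so that corrected = slope * prediction + bias ~ reference.
slope, bias = np.polyfit(atline_pred_on_inline, reference, 1)

def sbc(predictions):
    return slope * predictions + bias

rmsep_before = np.sqrt(np.mean((atline_pred_on_inline - reference) ** 2))
rmsep_after = np.sqrt(np.mean((sbc(atline_pred_on_inline) - reference) ** 2))
print(f"slope={slope:.3f} bias={bias:.3f}")
print(f"RMSEP before SBC: {rmsep_before:.3f}, after SBC: {rmsep_after:.3f}")
```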
Precision and Accuracy Parameters in Structured Light 3-D Scanning
NASA Astrophysics Data System (ADS)
Eiríksson, E. R.; Wilm, J.; Pedersen, D. B.; Aanæs, H.
2016-04-01
Structured light systems are popular in part because they can be constructed from off-the-shelf low cost components. In this paper we quantitatively show how common design parameters affect precision and accuracy in such systems, supplying a much needed guide for practitioners. Our quantitative measure is the established VDI/VDE 2634 (Part 2) guideline using precision made calibration artifacts. Experiments are performed on our own structured light setup, consisting of two cameras and a projector. We place our focus on the influence of calibration design parameters, the calibration procedure and encoding strategy and present our findings. Finally, we compare our setup to a state of the art metrology grade commercial scanner. Our results show that comparable, and in some cases better, results can be obtained using the parameter settings determined in this study.
A versatile calibration procedure for portable coded aperture gamma cameras and RGB-D sensors
NASA Astrophysics Data System (ADS)
Paradiso, V.; Crivellaro, A.; Amgarou, K.; de Lanaute, N. Blanc; Fua, P.; Liénard, E.
2018-04-01
The present paper proposes a versatile procedure for the geometrical calibration of coded aperture gamma cameras and RGB-D depth sensors, using only one radioactive point source and a simple experimental set-up. Calibration data is then used for accurately aligning radiation images retrieved by means of the γ-camera with the respective depth images computed with the RGB-D sensor. The system resulting from such a combination is thus able to retrieve, automatically, the distance of radioactive hotspots by means of pixel-wise mapping between gamma and depth images. This procedure is of great interest for a wide number of applications, ranging from precise automatic estimation of the shape and distance of radioactive objects to Augmented Reality systems. Incidentally, the corresponding results validated the choice of a perspective design model for a coded aperture γ-camera.
Digital dental photography. Part 6: camera settings.
Ahmad, I
2009-07-25
Once the appropriate camera and equipment have been purchased, the next considerations involve setting up and calibrating the equipment. This article provides details regarding depth of field, exposure, colour spaces and white balance calibration, concluding with a synopsis of camera settings for a standard dental set-up.
On-ground re-calibration of the GOME-2 satellite spectrometer series
NASA Astrophysics Data System (ADS)
Otter, Gerard; Dijkhuizen, Niels; Vosteen, Amir; Brinkers, Sanneke; Gür, Bilgehan; Kenter, Pepijn; Sallusti, Marcello; Tomuta, Dana; Veratti, Rubes; Cappani, Annalisa
2017-11-01
The Global Ozone Monitoring Experiment-2[1] (GOME-2) represents one of the European instruments carried on board the MetOp satellites within ESA's "Living Planet Program". Consisting of three flight models (FMs), it is intended to provide long-term monitoring of atmospheric ozone and other trace gases over a time frame of 15-20 years, thus contributing valuable input to climate and atmospheric research and providing near-real-time data for use in air quality forecasting. The ambition to achieve highly accurate scientific results requires a thorough calibration and characterization of the instrument prior to launch. These calibration campaigns were performed by TNO in Delft, the Netherlands, in the institute's Thermal Vacuum Calibration Facility. Due to refurbishment and/or storage of the instruments over a period of a few years, several re-calibration campaigns were necessary. These re-calibrations provided the unique opportunity to study the effects of long-term storage and to build up statistics on the instrument as well as on the calibration methods used. During the re-calibration of the second flight model a difference was found in the radiometric calibration output, which was not understood initially. In order to understand the anomalies in the radiometry, a thorough investigation was performed using numerous variations of the setup and different sources. The major contributor was identified to be a systematic error in the alignment, for which a correction was applied. Apart from this, it was found that the geometry of the sources influenced the results. From the calibration results, combined with a theoretical geometrical hypothesis, it was inferred that the on-ground calibration should mimic the in-orbit geometry as closely as possible.
On-line carbon balance of yeast fermentations using miniaturized optical sensors.
Beuermann, Thomas; Egly, Dominik; Geoerg, Daniel; Klug, Kerris Isolde; Storhas, Winfried; Methner, Frank-Juergen
2012-03-01
Monitoring of microbiological processes using optical sensors and spectrometers has gained in importance over the past few years due to its advantage of enabling non-invasive on-line analysis. Near-infrared (NIR) and mid-infrared (MIR) spectrometer set-ups in combination with multivariate calibrations have already been successfully employed for the simultaneous determination of different metabolites in microbiological processes. Photometric sensors, in addition to their low price compared to spectrometer set-ups, have the advantage of being compact and easy to calibrate and operate. In this work, the detection of ethanol and CO2 in the exhaust gas during aerobic yeast fermentation was performed by two photometric gas analyzers, and dry yeast biomass was monitored using a fiber-optic backscatter set-up. The optical sensors could be easily fitted to the bioreactor and exhibited high robustness during measurement. The ethanol content of the fermentation broth was monitored on-line by measuring the ethanol concentration in the fermentation exhaust and applying a conversion factor. The vapor/liquid equilibrium and the associated conversion factor strongly depend on the process temperature but not on the aeration and stirring rates. Dry yeast biomass was determined in-line from the backscattering signal using a linear calibration. An on-line carbon balance with a recovery rate of 95-97% was achieved with the use of three optical sensors (two infrared gas analyzers and one fiber-optic backscatter set-up). Copyright © 2011 The Society for Biotechnology, Japan. Published by Elsevier B.V. All rights reserved.
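A carbon recovery of the kind reported above can be checked with simple stoichiometry: carbon leaving as CO2, ethanol and biomass is compared with the carbon in the consumed sugar. The sketch below does this for assumed quantities and an assumed biomass carbon fraction; none of the numbers are taken from the study.

```python
# Molar masses (g/mol) and carbon counts
M_GLUCOSE, M_ETHANOL, M_CO2, M_C = 180.16, 46.07, 44.01, 12.011

def carbon_mass(mass_g, molar_mass, n_carbon):
    """Mass of carbon (g) contained in mass_g of a compound."""
    return mass_g / molar_mass * n_carbon * M_C

# Assumed totals over a fermentation run
glucose_consumed_g = 100.0
co2_produced_g = 44.0          # from the exhaust-gas analyzer, integrated
ethanol_produced_g = 41.0      # from the exhaust ethanol signal + conversion factor
biomass_produced_g = 10.5      # from the backscatter signal
BIOMASS_C_FRACTION = 0.47      # assumed carbon content of dry yeast biomass

carbon_in = carbon_mass(glucose_consumed_g, M_GLUCOSE, 6)
carbon_out = (carbon_mass(co2_produced_g, M_CO2, 1)
              + carbon_mass(ethanol_produced_g, M_ETHANOL, 2)
              + biomass_produced_g * BIOMASS_C_FRACTION)
print(f"carbon recovery: {100 * carbon_out / carbon_in:.1f}%")
```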
NASA Astrophysics Data System (ADS)
Gibbs, Matthew S.; McInerney, David; Humphrey, Greer; Thyer, Mark A.; Maier, Holger R.; Dandy, Graeme C.; Kavetski, Dmitri
2018-02-01
Monthly to seasonal streamflow forecasts provide useful information for a range of water resource management and planning applications. This work focuses on improving such forecasts by considering the following two aspects: (1) state updating to force the models to match observations from the start of the forecast period, and (2) selection of a shorter calibration period that is more representative of the forecast period, compared to a longer calibration period traditionally used. The analysis is undertaken in the context of using streamflow forecasts for environmental flow water management of an open channel drainage network in southern Australia. Forecasts of monthly streamflow are obtained using a conceptual rainfall-runoff model combined with a post-processor error model for uncertainty analysis. This model set-up is applied to two catchments, one with stronger evidence of non-stationarity than the other. A range of metrics are used to assess different aspects of predictive performance, including reliability, sharpness, bias and accuracy. The results indicate that, for most scenarios and metrics, state updating improves predictive performance for both observed rainfall and forecast rainfall sources. Using the shorter calibration period also improves predictive performance, particularly for the catchment with stronger evidence of non-stationarity. The results highlight that a traditional approach of using a long calibration period can degrade predictive performance when there is evidence of non-stationarity. The techniques presented can form the basis for operational monthly streamflow forecasting systems and provide support for environmental decision-making.
BRDF Calibration of Sintered PTFE in the SWIR
NASA Technical Reports Server (NTRS)
Georgiev, Georgi T.; Butler, James J.
2009-01-01
Satellite instruments operating in the reflective solar wavelength region often require accurate and precise determination of the Bidirectional Reflectance Distribution Function (BRDF) of laboratory-based diffusers used in their pre-flight calibrations and in ground-based support of on-orbit remote sensing instruments. The Diffuser Calibration Facility at NASA's Goddard Space Flight Center has served for over two decades as a secondary diffuser calibration standard, traceable to NIST, providing numerous NASA projects with BRDF data in the UV, visible and NIR spectral regions. The Diffuser Calibration Facility has now extended the covered spectral range from 900 nm up to 1.7 microns. The measurements were made using the existing scatterometer by replacing the Si-photodiode-based receiver with an InGaAs-based one. The BRDF data were recorded at normal incidence and scatter zenith angles from 10 to 60 deg. A tunable coherent light source was set up; a broadband light source application is under development. Gray-scale sintered PTFE samples were used in these first trials, illuminated with P- and S-polarized incident light. The results are discussed and compared to empirically generated BRDF data from a simple model based on 8 deg directional/hemispherical measurements.
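For a single scatter geometry, BRDF is commonly evaluated as the scattered power per unit solid angle, normalised by the incident power and the cosine of the scatter zenith angle; a short sketch is below. The detector geometry and power values are assumptions for illustration, not facility data.

```python
import numpy as np

def brdf(scattered_power, incident_power, detector_solid_angle_sr, scatter_zenith_deg):
    """BRDF (1/sr) = (P_s / Omega) / (P_i * cos(theta_s))."""
    cos_theta = np.cos(np.radians(scatter_zenith_deg))
    return scattered_power / (incident_power * detector_solid_angle_sr * cos_theta)

# Assumed measurement: normal incidence, detector at several scatter zenith angles.
incident_power_w = 1.0e-3
detector_solid_angle_sr = 1.2e-4
for theta, p_s in [(10, 3.70e-8), (30, 3.35e-8), (60, 1.95e-8)]:
    value = brdf(p_s, incident_power_w, detector_solid_angle_sr, theta)
    print(f"theta_s={theta:2d} deg  BRDF={value:.3f} 1/sr")
```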
Broadband Outdoor Radiometer Calibration Process for the Atmospheric Radiation Measurement Program
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dooraghi, Michael
2015-09-01
The Atmospheric Radiation Measurement program (ARM) maintains a fleet of monitoring stations to aid in the improved scientific understanding of the basic physics related to radiative feedback processes in the atmosphere, particularly the interactions among clouds and aerosols. ARM obtains continuous measurements and conducts field campaigns to provide data products that aid in the improvement and further development of climate models. All of the measurement campaigns include a suite of solar measurements. The Solar Radiation Research Laboratory at the National Renewable Energy Laboratory supports ARM's full suite of stations in a number of ways, including troubleshooting issues that arise as part of the data-quality reviews; managing engineering changes to the standard setup; and providing calibration services and assistance to the full fleet of solar-related instruments, including pyranometers, pyrgeometers, and pyrheliometers, as well as the temperature/relative humidity probes, multimeters, and data acquisition systems that are used in the calibrations performed at the Southern Great Plains Radiometer Calibration Facility. This paper discusses all aspects related to the support provided to the calibration of the instruments in the solar monitoring fleet.
Low-cost precision rotary index calibration
NASA Astrophysics Data System (ADS)
Ng, T. W.; Lim, T. S.
2005-08-01
The traditional method for calibrating angular indexing repeatability of rotary axes on machine tools and measuring equipment is with a precision polygon (usually 12 sided) and an autocollimator or angular interferometer. Such a setup is typically expensive. Here, we propose a far more cost-effective approach that uses just a laser, diffractive optical element, and CCD camera. We show that significantly high accuracies can be achieved for angular index calibration.
NASA Astrophysics Data System (ADS)
Liu, Yonghuai; Rodrigues, Marcos A.
2000-03-01
This paper describes research on the application of machine vision techniques to a real time automatic inspection task of air filter components in a manufacturing line. A novel calibration algorithm is proposed based on a special camera setup where defective items would show a large calibration error. The algorithm makes full use of rigid constraints derived from the analysis of geometrical properties of reflected correspondence vectors which have been synthesized into a single coordinate frame and provides a closed form solution to the estimation of all parameters. For a comparative study of performance, we also developed another algorithm based on this special camera setup using epipolar geometry. A number of experiments using synthetic data have shown that the proposed algorithm is generally more accurate and robust than the epipolar geometry based algorithm and that the geometric properties of reflected correspondence vectors provide effective constraints to the calibration of rigid body transformations.
A Vision-Based Self-Calibration Method for Robotic Visual Inspection Systems
Yin, Shibin; Ren, Yongjie; Zhu, Jigui; Yang, Shourui; Ye, Shenghua
2013-01-01
A vision-based robot self-calibration method is proposed in this paper to evaluate the kinematic parameter errors of a robot using a visual sensor mounted on its end-effector. This approach can be performed in the industrial field without external, expensive apparatus or an elaborate setup. A robot Tool Center Point (TCP) is defined in the structural model of a line-structured laser sensor and aligned to a reference point fixed in the robot workspace. A mathematical model is established to formulate the misalignment errors with kinematic parameter errors and TCP position errors. Based on the fixed-point constraints, the kinematic parameter errors and TCP position errors are identified with an iterative algorithm. Compared to conventional methods, the proposed method eliminates the need for robot base-frame and hand-eye calibrations, shortens the error propagation chain, and makes the calibration process more accurate and convenient. A validation experiment is performed on an ABB IRB2400 robot. An optimal configuration for the number and distribution of fixed points in the robot workspace is obtained based on the experimental results. Comparative experiments reveal a significant improvement in the measuring accuracy of the robotic visual inspection system. PMID:24300597
Calibration of the Nikon 200 for Close Range Photogrammetry
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sheriff, Lassana; /City Coll., N.Y. /SLAC
2010-08-25
The overall objective of this project is to study the stability and reproducibility of the calibration parameters of the Nikon D200 camera with a Nikkor 20 mm lens for close-range photogrammetric surveys. The well-known 'central perspective projection' model is used to determine the camera parameters for interior orientation. The Brown model extends it with the introduction of radial distortion and other less critical variables. The calibration process requires a dense network of targets to be photographed at different angles. For faster processing, reflective coded targets are chosen. Two scenarios have been used to check the reproducibility of the parameters. The first one uses a flat 2D wall with 141 coded targets and 12 custom targets that were previously measured with a laser tracker. The second one is a 3D Unistrut structure with a combination of coded targets and 3D reflective spheres. The study has shown that this setup is only stable over a short period of time. In conclusion, this camera is acceptable when calibrated before each use. Future work should include actual field tests and possible mechanical improvements, such as securing the lens to the camera body.
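The Brown model mentioned above augments the central perspective projection with radial (and, in its fuller form, tangential and decentering) distortion terms; a minimal sketch applying the radial part to normalized image coordinates follows. The coefficient values are arbitrary assumptions, not the Nikon D200 calibration results.

```python
import numpy as np

def apply_radial_distortion(points_norm, k1, k2, k3=0.0):
    """Apply Brown-model radial distortion to normalized image coordinates.
    points_norm: (N, 2) array of undistorted, normalized coordinates."""
    r2 = np.sum(points_norm**2, axis=1, keepdims=True)
    factor = 1.0 + k1 * r2 + k2 * r2**2 + k3 * r2**3
    return points_norm * factor

# Assumed coefficients for a wide-angle lens (barrel distortion: k1 < 0).
k1, k2 = -0.12, 0.015
grid = np.array([[0.0, 0.0], [0.2, 0.0], [0.4, 0.3], [0.6, 0.6]])
distorted = apply_radial_distortion(grid, k1, k2)
print(np.hstack([grid, distorted]).round(4))   # undistorted vs. distorted coords
```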
Marker Configuration Model-Based Roentgen Fluoroscopic Analysis.
Garling, Eric H; Kaptein, Bart L; Geleijns, Koos; Nelissen, Rob G H H; Valstar, Edward R
2005-04-01
It remains unknown if and how the polyethylene bearing in mobile bearing knees moves during dynamic activities with respect to the tibial base plate. Marker Configuration Model-Based Roentgen Fluoroscopic Analysis (MCM-based RFA) uses a marker configuration model of inserted tantalum markers in order to accurately estimate the pose of an implant or bone using single-plane Roentgen or fluoroscopic images. The goal of this study is to assess the accuracy of MCM-based RFA in a standard fluoroscopic set-up using phantom experiments and to determine the error propagation with computer simulations. The experimental set-up of the phantom study was calibrated using a calibration box equipped with 600 tantalum markers, which corrected for image distortion and determined the focus position. In the computer simulation study, the influence of image distortion, MC-model accuracy, focus position, the relative distance between MC-models and MC-model configuration on the accuracy of MCM-based RFA was assessed. The phantom study established that the in-plane accuracy of MCM-based RFA is 0.1 mm and the out-of-plane accuracy is 0.9 mm. The rotational accuracy is 0.1 degrees. A ninth-order polynomial model was used to correct for image distortion. Marker-Based RFA was estimated to have, in a worst-case scenario, an in vivo translational accuracy of 0.14 mm (x-axis), 0.17 mm (y-axis) and 1.9 mm (z-axis), and a rotational accuracy of 0.3 degrees. When using fluoroscopy to study kinematics, image distortion and the accuracy of models are important factors that influence the accuracy of the measurements. MCM-based RFA has the potential to be an accurate, clinically useful tool for studying kinematics after total joint replacement using standard equipment.
Action Sport Cameras as an Instrument to Perform a 3D Underwater Motion Analysis.
Bernardina, Gustavo R D; Cerveri, Pietro; Barros, Ricardo M L; Marins, João C B; Silvatti, Amanda P
2016-01-01
Action sport cameras (ASC) are currently adopted mainly for entertainment purposes, but their continuous technical improvements, together with decreasing costs, are opening them up to quantitative three-dimensional (3D) motion analysis for the study of sport gestures and the evaluation of athletic performance. Extending this technology to sport analysis, however, still requires a methodological step forward to make ASC a metric system, encompassing ad hoc camera setup, image processing, feature tracking, calibration and 3D reconstruction. In contrast to traditional laboratory analysis, such requirements become an issue when coping with both indoor and outdoor motion acquisitions of athletes. In swimming analysis, for example, the camera setup and the calibration protocol are particularly demanding since land and underwater cameras are mandatory. In particular, the underwater camera calibration can be an issue affecting the reconstruction accuracy. In this paper, the aim is to evaluate the feasibility of ASC for 3D underwater analysis by focusing on camera setup and data acquisition protocols. Two GoPro Hero3+ Black cameras (frequency: 60 Hz; image resolutions: 1280×720/1920×1080 pixels) were located underwater in a swimming pool, surveying a working volume of about 6 m3. A two-step custom calibration procedure, consisting of the acquisition of one static triad and one moving wand, carrying nine and one spherical passive markers, respectively, was implemented. After assessing the camera parameters, a rigid bar carrying two markers at known distance was acquired in several positions within the working volume. The average error of the reconstructed inter-marker distances was less than 2.5 mm (1280×720) and 1.5 mm (1920×1080). The results of this study demonstrate that the calibration of underwater ASC is feasible, enabling quantitative kinematic measurements with accuracy comparable to traditional motion capture systems. PMID:27513846
Optimized linear motor and digital PID controller setup used in Mössbauer spectrometer
NASA Astrophysics Data System (ADS)
Kohout, Pavel; Kouřil, Lukáš; Navařík, Jakub; Novák, Petr; Pechoušek, Jiří
2014-10-01
Optimization of a linear motor and digital PID controller setup used in a Mössbauer spectrometer is presented. A velocity driving system with a digital PID feedback subsystem was developed in the LabVIEW graphical environment and deployed on an sbRIO real-time hardware device (National Instruments). The most important data acquisition processes are performed as real-time deterministic tasks on an FPGA chip. The system drives a velocity transducer of the double-loudspeaker type with a power amplifier circuit. A series of calibration measurements was performed to find the optimal setting of the P, I, D parameters, together with an analysis of the velocity error signal. The shape and characteristics of the velocity error signal are analyzed in detail. Remote applications for controlling and monitoring the PID system from a computer or a smartphone were also developed. With the best setup and P, I, D parameters, a calibration spectrum of an α-Fe sample with an average nonlinearity of the velocity scale below 0.08% was collected. Furthermore, a spectral line width below 0.30 mm/s was observed. A powerful and complex velocity driving system was thus designed.
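A velocity error loop of this kind typically reduces to a discrete PID update running at the FPGA loop rate. The following minimal sketch illustrates the textbook form of such an update; the gains and sample time are placeholders, not the values tuned for this spectrometer.

```python
class DiscretePID:
    """Textbook discrete PID controller: u = Kp*e + Ki*sum(e)*dt + Kd*de/dt."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Placeholder gains and loop rate (hypothetical, not the tuned values)
pid = DiscretePID(kp=0.8, ki=120.0, kd=1e-4, dt=1e-4)
drive = pid.update(setpoint=1.0, measurement=0.95)
```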
Using Wind Setdown and Storm Surge on Lake Erie to Calibrate the Air-Sea Drag Coefficient
Drews, Carl
2013-01-01
The air-sea drag coefficient controls the transfer of momentum from wind to water. In modeling storm surge, this coefficient is a crucial parameter for estimating the surge height. This study uses two strong wind events on Lake Erie to calibrate the drag coefficient using the Coupled Ocean Atmosphere Wave Sediment Transport (COAWST) modeling system and the Regional Ocean Modeling System (ROMS). Simulated waves are generated on the lake with Simulating WAves Nearshore (SWAN). Wind setdown provides the opportunity to eliminate wave setup as a contributing factor, since waves are minimal at the upwind shore. The study finds that model results significantly underestimate wind setdown and storm surge when a typical open-ocean formulation without waves is used for the drag coefficient. The contribution of waves to wind setdown and storm surge is 34.7%. Scattered lake ice also increases the effective drag coefficient by a factor of 1.1. PMID:23977309
Jordt, Anne; Zelenka, Claudius; von Deimling, Jens Schneider; Koch, Reinhard; Köser, Kevin
2015-12-05
Several acoustic and optical techniques have been used for characterizing natural and anthropogenic gas leaks (carbon dioxide, methane) from the ocean floor. Here, single-camera based methods for bubble stream observation have become an important tool, as they help estimate flux and bubble sizes under certain assumptions. However, they record only a projection of a bubble into the camera and therefore cannot capture the full 3D shape, which is particularly important for larger, non-spherical bubbles. The unknown distance of the bubble to the camera (making it appear larger or smaller than expected) as well as refraction at the camera interface introduce extra uncertainties. In this article, we introduce our wide-baseline stereo-camera deep-sea sensor bubble box that overcomes these limitations, as it observes bubbles from two orthogonal directions using calibrated cameras. Besides the setup and the hardware of the system, we discuss appropriate calibration and the automated processing steps (deblurring, detection, tracking, and 3D fitting) that are crucial to arrive at a 3D ellipsoidal shape and rise speed for each bubble. The obtained values for single bubbles can be aggregated into statistical bubble size distributions or fluxes for extrapolation based on diffusion and dissolution models and large-scale acoustic surveys. We demonstrate and evaluate the wide-baseline stereo measurement model using a controlled test setup with ground truth information. PMID:26690168
Electromagnetic calibration system for sub-micronewton torsional thrust stand
NASA Astrophysics Data System (ADS)
Lam, J. K.; Koay, S. C.; Cheah, K. H.
2017-12-01
Evaluation is critical for any micropropulsion system, and thrust stands are widely recognised as the instrument for this task. This paper presents the development of an alternative electromagnetic calibration technique for thrust stands. Utilising commercially available voice coils and permanent magnets, the proposed system is able to generate repeatable and consistent steady-state calibration forces across four orders of magnitude (30-23 000 μN). The system is then used to calibrate a custom-designed torsional thrust stand, where its inherent ease of setup is well demonstrated.
NASA Astrophysics Data System (ADS)
Koppa, A.; Gebremichael, M.; Yeh, W. W. G.
2017-12-01
Calibrating hydrologic models in large catchments using a sparse network of streamflow gauges adversely affects the spatial and temporal accuracy of other water balance components, which are important for climate-change, land-use and drought studies. This study combines remote sensing data and the concept of Pareto optimality to address the following questions: 1) What is the impact of streamflow (SF) calibration on the spatio-temporal accuracy of evapotranspiration (ET), near-surface soil moisture (SM) and total water storage (TWS)? 2) What is the best combination of fluxes that can be used to calibrate complex hydrological models such that both the accuracy of streamflow and the spatio-temporal accuracy of ET, SM and TWS are preserved? The study area is the Mississippi Basin in the United States (encompassing HUC-2 regions 5, 6, 7, 9, 10 and 11). 2003 and 2004, two climatologically average years, are chosen for calibration and validation of the Noah-MP hydrologic model. Remotely sensed ET data are sourced from GLEAM, SM from ESA-CCI and TWS from GRACE. Single-objective calibration is carried out using the DDS algorithm; for multi-objective calibration, PA-DDS is used. First, the Noah-MP model is calibrated using a single objective function (minimize mean square error) for the outflow from the 6 HUC-2 sub-basins for 2003. Spatial correlograms are used to compare the spatial structure of ET, SM and TWS between the model and the remote sensing data. Spatial maps of RMSE and mean error are used to quantify the impact of calibrating streamflow on the accuracy of ET, SM and TWS estimates. Next, a multi-objective calibration experiment is set up to determine the Pareto-optimal parameter sets (Pareto front) for the following cases: 1) SF and ET, 2) SF and SM, 3) SF and TWS, 4) SF, ET and SM, 5) SF, ET and TWS, 6) SF, SM and TWS, 7) SF, ET, SM and TWS. The best combination of fluxes that provides the optimal trade-off between accurate streamflow and preserving the spatio-temporal structure of ET, SM and TWS is then determined by validating the model outputs for the Pareto-optimal parameter sets. Results from the single-objective calibration experiment with streamflow show that it does indeed negatively impact the accuracy of ET, SM and TWS estimates.
Measurement of electromagnetic tracking error in a navigated breast surgery setup
NASA Astrophysics Data System (ADS)
Harish, Vinyas; Baksh, Aidan; Ungi, Tamas; Lasso, Andras; Baum, Zachary; Gauvin, Gabrielle; Engel, Jay; Rudan, John; Fichtinger, Gabor
2016-03-01
PURPOSE: The measurement of tracking error is crucial to ensure the safety and feasibility of electromagnetically tracked, image-guided procedures. Measurement should occur in a clinical environment because electromagnetic field distortion depends on positioning relative to the field generator and metal objects. However, we could not find an accessible and open-source system for calibration, error measurement, and visualization. We developed such a system and tested it in a navigated breast surgery setup. METHODS: A pointer tool was designed for concurrent electromagnetic and optical tracking. Software modules were developed for automatic calibration of the measurement system, real-time error visualization, and analysis. The system was taken to an operating room to test for field distortion in a navigated breast surgery setup. Positional and rotational electromagnetic tracking errors were then calculated using optical tracking as a ground truth. RESULTS: Our system is quick to set up and can be rapidly deployed. The process from calibration to visualization takes only a few minutes. Field distortion was measured in the presence of various surgical equipment. Positional and rotational error in a clean field was approximately 0.90 mm and 0.31°. The presence of a surgical table, an electrosurgical cautery, and an anesthesia machine increased the error by up to a few tenths of a millimeter and a tenth of a degree. CONCLUSION: In a navigated breast surgery setup, measurement and visualization of tracking error define a safe working area in the presence of surgical equipment. Our system is available as an extension for the open-source 3D Slicer platform.
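Comparing electromagnetic and optical poses reduces to differencing two rigid transforms. The sketch below illustrates one common way to compute positional and rotational error from 4x4 homogeneous matrices; it is a generic illustration, not code from the 3D Slicer extension.

```python
import numpy as np

def tracking_errors(T_em, T_opt):
    """Positional (mm) and rotational (deg) error between an electromagnetic
    pose T_em and an optical ground-truth pose T_opt (both 4x4 homogeneous)."""
    pos_err = np.linalg.norm(T_em[:3, 3] - T_opt[:3, 3])
    # relative rotation between the two poses
    R_rel = T_em[:3, :3] @ T_opt[:3, :3].T
    cos_angle = np.clip((np.trace(R_rel) - 1.0) / 2.0, -1.0, 1.0)
    rot_err = np.degrees(np.arccos(cos_angle))
    return pos_err, rot_err
```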
DOE Office of Scientific and Technical Information (OSTI.GOV)
Albert, J; Labarbe, R; Sterpin, E
2016-06-15
Purpose: To understand the extent to which prompt gamma camera measurements can be used to predict the residual proton range due to setup errors and errors in the calibration curve. Methods: We generated ten variations on a default calibration curve (CC) and ten corresponding range maps (RM). Starting with the default RM, we chose a square array of N beamlets, which were then rotated by a random angle θ and shifted by a random vector s. We added a 5% distal Gaussian noise to each beamlet in order to introduce the discrepancies that exist between the ranges predicted from the prompt gamma measurements and those simulated with Monte Carlo algorithms. For each RM, s and θ, along with an offset u in the CC, were optimized using a simple Euclidean distance between the default ranges and the ranges produced by the given RM. Results: The application of our method led to a maximal overrange of 2.0 mm and an underrange of 0.6 mm on average. Compared to the situations where s, θ, and u were ignored, these values were larger: 2.1 mm and 4.3 mm. In order to quantify the need for setup error corrections, we also performed computations in which u was corrected for, but s and θ were not. This yielded 3.2 mm and 3.2 mm. The average computation time for 170 beamlets was 65 seconds. Conclusion: These results emphasize the necessity of correcting for setup errors and errors in the calibration curve. The simplicity and speed of our method make it a good candidate for implementation as a tool for in-room adaptive therapy. This work also demonstrates that prompt gamma range measurements can indeed be useful in the effort to reduce range errors. Given these results, and barring further refinements, this approach is a promising step towards adaptive proton radiotherapy.
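The fitting step described above amounts to minimizing a Euclidean distance over a shift, a rotation angle, and a calibration-curve offset. A minimal sketch of that kind of optimization is given below; measured_range is a hypothetical callable returning the measured range at given beamlet positions, and the parametrization is an assumption, not the authors' exact formulation.

```python
import numpy as np
from scipy.optimize import minimize

def fit_setup_and_offset(beamlet_xy, default_ranges, measured_range):
    """Estimate shift s=(sx, sy), rotation angle theta and calibration offset u
    by minimizing the Euclidean distance between default ranges and the
    measured ranges of the transformed beamlet array."""
    def cost(params):
        sx, sy, theta, u = params
        c, s = np.cos(theta), np.sin(theta)
        R = np.array([[c, -s], [s, c]])
        moved = beamlet_xy @ R.T + np.array([sx, sy])
        residual = measured_range(moved) + u - default_ranges
        return np.sqrt(np.sum(residual ** 2))

    result = minimize(cost, x0=np.zeros(4), method="Nelder-Mead")
    return result.x  # [sx, sy, theta, u]
```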
NASA Astrophysics Data System (ADS)
Siemons, M.; Hulleman, C. N.; Thorsen, R. Ø.; Smith, C. S.; Stallinga, S.
2018-04-01
Point Spread Function (PSF) engineering is used in single emitter localization to measure the emitter position in 3D and possibly other parameters, such as the emission color or dipole orientation, as well. Advanced PSF models such as spline fits to experimental PSFs or the vectorial PSF model can be used in the corresponding localization algorithms in order to model the intricate spot shape and deformations correctly. The complexity of the optical architecture and fit model makes PSF engineering approaches particularly sensitive to optical aberrations. Here, we present a calibration and alignment protocol for fluorescence microscopes equipped with a spatial light modulator (SLM) with the goal of establishing a wavefront error well below the diffraction limit for optimum application of complex engineered PSFs. We achieve high-precision wavefront control, to a level below 20 mλ wavefront aberration over a 30 minute time window after the calibration procedure, using a separate light path for calibrating the pixel-to-pixel variations of the SLM, and alignment of the SLM with respect to the optical axis and Fourier plane within 3 μm (x/y) and 100 μm (z) error. Aberrations are retrieved from a fit of the vectorial PSF model to a bead z-stack and compensated with a residual wavefront error comparable to the error of the SLM calibration step. This well-calibrated and corrected setup makes it possible to create complex '3D+λ' PSFs that fit very well to the vectorial PSF model. Proof-of-principle bead experiments show precisions below 10 nm in x, y, and λ, and below 20 nm in z over an axial range of 1 μm with 2000 signal photons and 12 background photons.
NASA Technical Reports Server (NTRS)
Groot, J. S.
1990-01-01
In August 1989 the NASA/JPL airborne P/L/C-band DC-8 SAR participated in several remote sensing campaigns in Europe. Amongst other test sites, data were obtained over the Flevopolder test site in the Netherlands on August 16th. The Dutch X-band SLAR was flown on the same date and imaged parts of the same area as the SAR. To calibrate the two imaging radars, a set of 33 calibration devices was deployed; 16 trihedrals were used to calibrate a part of the SLAR data. This short paper outlines the X-band SLAR characteristics, the experimental set-up and the calibration method used to calibrate the SLAR data. Finally some preliminary results are given.
Proportional Counter Calibration and Analysis for 12C + p Resonance Scattering
NASA Astrophysics Data System (ADS)
Nelson, Austin; Rogachev, Grigory; Uberseder, Ethan; Hooker, Josh; Koshchiy, Yevgen
2014-09-01
Light exotic nuclei provide a unique opportunity to test the predictions of modern ab initio theoretical calculations near the drip line. In ab initio approaches, nuclear structure is described starting from bare nucleon-nucleon and three-nucleon interactions. Calculations are very heavy and can only be performed for the lightest nuclei (A < 16). Experimental information on the structure of light exotic nuclei is crucial to determine the validity of these calculations and to fix the parameters for the three-nucleon forces. Resonance scattering with rare isotope beams is a very effective tool to study spectroscopy of nuclei near the drip line. A new setup was developed at the Cyclotron Institute for effective resonance scattering measurements. The setup includes an ionization chamber, a silicon array, and an array of proportional counters. The proportional counter array, consisting of 8 anode wires arranged in a parallel cellular grid, is used for particle identification and to track the positioning of light recoils. The main objective of this project was to test the performance and perform position calibration of this proportional counter array. The test was done using a 12C beam. The excitation function for 12C + p elastic scattering was measured and calibration of the proportional counter was performed using known resonances in 13N. The method of calibration, including solid angle calculations, normalization corrections, and position calibration will be presented. Funded by DOE and NSF-REU Program; Grant No. PHY-1263281.
Numerical simulations of flow fields through conventionally controlled wind turbines & wind farms
NASA Astrophysics Data System (ADS)
Emre Yilmaz, Ali; Meyers, Johan
2014-06-01
In the current study, an Actuator-Line Model (ALM) is implemented in our in-house pseudo-spectral LES solver SP-WIND, including a turbine controller. Below rated wind speed, turbines are controlled by a standard torque controller aiming at maximum power extraction from the wind. Above rated wind speed, the extracted power is limited by a blade pitch controller based on a proportional-integral type control algorithm. This model is used to perform a series of single-turbine and wind-farm simulations using the NREL 5MW turbine. First of all, we focus on below-rated wind speed and investigate the effect of the farm layout on the controller calibration curves. These calibration curves are expressed in terms of nondimensional torque and rotational speed, using the mean turbine-disk velocity as reference. We show that this normalization leads to calibration curves that are independent of wind speed, but the calibration curves do depend on the farm layout, in particular for tightly spaced farms. Compared to turbines in a stand-alone set-up, turbines in a farm experience a different wind distribution over the rotor due to the farm boundary-layer interaction. We demonstrate this for fully developed wind-farm boundary layers with aligned turbine arrangements at different spacings (5D, 7D, 9D). Further, we also compare calibration curves obtained from full farm simulations with calibration curves that can be obtained at a much lower cost using a minimal flow unit.
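For orientation, the below-rated control law and one plausible nondimensionalization are sketched here. The K-omega-squared law is the standard torque controller the abstract refers to; the specific normalization by the mean turbine-disk velocity is written as an assumption consistent with the description, not as the authors' exact definition, and the NREL 5MW constants are nominal values.

```python
import numpy as np

RHO = 1.225        # air density [kg/m^3]
R = 63.0           # NREL 5MW rotor radius [m]
CP_MAX = 0.48      # assumed optimal power coefficient
LAMBDA_OPT = 7.55  # assumed optimal tip-speed ratio

# Standard below-rated torque law: T = K * omega^2
K = 0.5 * RHO * np.pi * R**5 * CP_MAX / LAMBDA_OPT**3

def generator_torque(omega):
    """Aerodynamic torque demand [N m] for rotor speed omega [rad/s]."""
    return K * omega**2

def nondimensionalize(torque, omega, v_disk):
    """Assumed normalization using the mean turbine-disk velocity v_disk [m/s]:
    torque by 0.5*rho*pi*R^3*v_disk^2 and rotor speed by v_disk/R."""
    t_star = torque / (0.5 * RHO * np.pi * R**3 * v_disk**2)
    omega_star = omega * R / v_disk
    return t_star, omega_star
```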
Calibration of a Background Oriented Schlieren (BOS) Set-up
NASA Astrophysics Data System (ADS)
Porta, David; Echeverría, Carlos; Cardoso, Hiroki; Aguayo, Alejandro; Stern, Catalina
2014-11-01
We use two materials with different known indices of refraction to calibrate a Background Oriented Schlieren (BOS) experimental set-up and to validate the Lorenz-Lorentz equation. BOS is used in our experiments to determine local changes of density in the shock pattern of an axisymmetric supersonic air jet. It is important to validate, in particular, the Gladstone-Dale approximation (index of refraction close to one) under our experimental conditions and to determine the uncertainty of our density measurements. In some cases the index of refraction of the material is well known, but in others the density is measured and related to the displacement field. We acknowledge support from UNAM through DGAPA PAPIIT IN117712 and the Graduate Program in Mechanical Engineering.
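The Gladstone-Dale approximation mentioned above relates refractive index and density linearly, n - 1 = K ρ, which is what lets BOS displacement fields be converted into density changes. The snippet below shows this relation with an approximate Gladstone-Dale constant for air in the visible; the exact constant depends on wavelength and gas composition.

```python
# Gladstone-Dale relation: n - 1 = K * rho  =>  rho = (n - 1) / K
K_AIR = 2.26e-4  # approximate Gladstone-Dale constant for air [m^3/kg], visible light

def density_from_index(n, k_gd=K_AIR):
    """Gas density [kg/m^3] from refractive index n via the Gladstone-Dale relation."""
    return (n - 1.0) / k_gd

def index_from_density(rho, k_gd=K_AIR):
    """Refractive index from gas density [kg/m^3]."""
    return 1.0 + k_gd * rho

# Example: sea-level air (rho ~ 1.225 kg/m^3) gives n ~ 1.000277
print(index_from_density(1.225))
```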
Design and calibration of zero-additional-phase SPIDER
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baum, Peter; Riedle, Eberhard
2005-09-01
Zero-additional-phase spectral phase interferometry for direct electric field reconstruction (ZAP-SPIDER) is a novel technique for measuring the temporal shape and phase of ultrashort optical pulses directly at the interaction point of a spectroscopic experiment. The scheme is suitable for an extremely wide wavelength region from the ultraviolet to the near infrared. We present a comprehensive description of the experimental setup and design guidelines to effectively apply the technique to various wavelengths and pulse durations. The calibration of the setup and procedures to check the consistency of the measurement are discussed in detail. We show experimental data for various center wavelengths and pulse durations down to 7 fs to verify the applicability to a wide range of pulse parameters.
Automatic energy calibration algorithm for an RBS setup
DOE Office of Scientific and Technical Information (OSTI.GOV)
Silva, Tiago F.; Moro, Marcos V.; Added, Nemitala
2013-05-06
This work describes a computer algorithm for automatic extraction of the energy calibration parameters from a Rutherford Back-Scattering Spectroscopy (RBS) spectrum. Parameters like the electronic gain, electronic offset and detection resolution (FWHM) of an RBS setup are usually determined using a standard sample. In our case, the standard sample comprises a multi-elemental thin film made of a mixture of Ti-Al-Ta that is analyzed at the beginning of each run at a defined beam energy. A computer program has been developed to extract the calibration parameters automatically from the spectrum of the standard sample. The code evaluates the first derivative of the energy spectrum, locates the trailing edges of the Al, Ti and Ta peaks and fits a first-order polynomial for the energy-channel relation. The detection resolution is determined by fitting the convolution of a pre-calculated theoretical spectrum. To test the code, data from two years have been analyzed and the results compared with the manual calculations done previously, obtaining good agreement.
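The edge-finding and linear-fit steps described above translate naturally into a short script. The sketch below is a generic illustration under assumed inputs: spectrum is the recorded counts per channel, and the surface edge energies of Al, Ti and Ta (which in practice follow from the beam energy and kinematic factors) are supplied here as hypothetical placeholder values.

```python
import numpy as np

def auto_energy_calibration(spectrum, edge_energies_keV, search_windows):
    """Locate the high-energy (trailing) edges of the element signals via the
    first derivative of the spectrum and fit a first-order polynomial for the
    energy-channel relation E = gain * channel + offset.

    spectrum: 1D array of counts per channel
    edge_energies_keV: expected surface edge energies per element
    search_windows: channel ranges (lo, hi) in which to look for each edge
    """
    derivative = np.gradient(spectrum.astype(float))
    channels, energies = [], []
    for element, energy in edge_energies_keV.items():
        lo, hi = search_windows[element]
        # trailing edge = steepest negative slope inside the search window
        edge_channel = lo + int(np.argmin(derivative[lo:hi]))
        channels.append(edge_channel)
        energies.append(energy)
    gain, offset = np.polyfit(channels, energies, 1)
    return gain, offset

# Hypothetical placeholder edge energies and windows for a Ti-Al-Ta standard
edges = {"Al": 1150.0, "Ti": 1450.0, "Ta": 1900.0}           # keV, illustrative only
windows = {"Al": (300, 400), "Ti": (420, 520), "Ta": (600, 700)}
```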
Unsteady aerodynamic characterization of a military aircraft in vertical gusts
NASA Technical Reports Server (NTRS)
Lebozec, A.; Cocquerez, J. L.
1985-01-01
The effects of 2.5-m/sec vertical gusts on the flight characteristics of a 1:8.6 scale model of a Mirage 2000 aircraft in free flight at 35 m/sec over a distance of 30 m are investigated. The wind-tunnel setup and instrumentation are described; the impulse-response and local-coefficient-identification analysis methods applied are discussed in detail; and the modification and calibration of the gust-detection probes are reviewed. The results are presented in graphs, and good general agreement is obtained between model calculations using the two analysis methods and the experimental measurements.
Novel Principle of Contactless Gauge Block Calibration
Buchta, Zdeněk; Řeřucha, Šimon; Mikel, Břetislav; Čížek, Martin; Lazar, Josef; Číp, Ondřej
2012-01-01
In this paper, a novel principle of contactless gauge block calibration is presented. The principle of contactless gauge block calibration combines low-coherence interferometry and laser interferometry. An experimental setup combines a Dowell interferometer and a Michelson interferometer to ensure a gauge block length determination with direct traceability to the primary length standard. By monitoring both gauge block sides with a digital camera, gauge block 3D surface measurements are possible too. The principle presented is protected by the Czech national patent No. 302948. PMID:22737012
Advanced GPS-Based Time Link Calibration with PTB’s New GPS Calibration Setup
2010-11-01
Links employing Two-Way Satellite Time and Frequency Transfer (TWSTFT) were in operation, with signals exchanged in X-band and in Ku-band, respectively, and were repeatedly calibrated. Relative to an adjustment made earlier by the BIPM to the Ku-band TWSTFT link, a deviation of 3.7 ns compared to the 2008 result was found.
NASA Astrophysics Data System (ADS)
Tuca, Silviu-Sorin; Badino, Giorgio; Gramse, Georg; Brinciotti, Enrico; Kasper, Manuel; Oh, Yoo Jin; Zhu, Rong; Rankl, Christian; Hinterdorfer, Peter; Kienberger, Ferry
2016-04-01
The application of scanning microwave microscopy (SMM) to extract calibrated electrical properties of cells and bacteria in air is presented. From the S11 images, after calibration, complex impedance and admittance images of Chinese hamster ovary cells and E. coli bacteria deposited on a silicon substrate have been obtained. The broadband capabilities of SMM have been used to characterize the bio-samples between 2 GHz and 20 GHz. The resulting calibrated cell and bacteria admittances at 19 GHz were Ycell = 185 μS + j285 μS and Ybacteria = 3 μS + j20 μS, respectively. A combined circuitry-3D finite element method EMPro model has been developed and used to investigate the frequency response of the complex impedance and admittance of the SMM setup. Based on a proposed parallel resistance-capacitance model, the equivalent conductance and parallel capacitance of the cells and bacteria were obtained from the SMM images. The influence of humidity and frequency on the cell conductance was experimentally studied. To compare the cell conductance with bulk water properties, we measured the imaginary part of the bulk water loss with a dielectric probe kit in the same frequency range, resulting in a high level of agreement.
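In the parallel resistance-capacitance picture mentioned above, the admittance is Y = G + jωC, so conductance and capacitance follow directly from the calibrated values. The short sketch below applies this to the quoted 19 GHz admittances; it is an illustration of the circuit model, not the authors' extraction code.

```python
import numpy as np

def parallel_rc_from_admittance(Y, freq_hz):
    """Equivalent parallel conductance G [S] and capacitance C [F] from a
    complex admittance Y = G + j*omega*C measured at frequency freq_hz."""
    omega = 2 * np.pi * freq_hz
    G = Y.real
    C = Y.imag / omega
    return G, C

f = 19e9                                  # 19 GHz
Y_cell = 185e-6 + 1j * 285e-6             # cell admittance from the abstract
Y_bact = 3e-6 + 1j * 20e-6                # bacteria admittance from the abstract
G_cell, C_cell = parallel_rc_from_admittance(Y_cell, f)   # ~185 uS, ~2.4 fF
G_bact, C_bact = parallel_rc_from_admittance(Y_bact, f)   # ~3 uS, ~0.17 fF
```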
Microprocessor-based single particle calibration of scintillation counter
NASA Technical Reports Server (NTRS)
Mazumdar, G. K. D.; Pathak, K. M.
1985-01-01
A microprocessor-based set-up is fabricated and tested for the single particle calibration of the plastic scintillator. The single particle response of the scintillator is digitized by an A/D converter, and an 8085A-based microprocessor stores the pulse heights. The digitized information is printed. Facilities for CRT display and cassette storing and recalling are also made available.
Development of the High-Temperature Dew-Point Generator Over the Past 15 Years
NASA Astrophysics Data System (ADS)
Bosma, R.; Nielsen, J.; Peruzzi, A.
2017-10-01
At VSL a humidity generator was designed and constructed in the early 1990s. This generator was of the re-circulating single-pressure type. Over the years, the generator has been thoroughly revised and several critical components have been replaced; among other changes, the pre-saturator was replaced and the operation changed from re-circulation to single-pass mode. Validating experiments showed that the range of the new setup could be extended from 70 °C to 95 °C dew-point temperature, and the last modification allows an uncertainty of 0.048 °C (k = 2) at the maximum temperature. In 2009 the setup was used in the Euramet-T-K8 humidity intercomparison at temperatures up to 95 °C. In the period from 2003 to 2015, four state-of-the-art chilled mirror hygrometers were regularly calibrated with the generator. One of these was also calibrated with the primary dew-point standards of several other European National Metrology Institutes, which made it possible to link the VSL generator to the generators used in these institutes. An analysis of the results of these calibrations shows an agreement in calibration capabilities within 0.01 °C with PTB and NPL.
Improving the Traceability of Meteorological Measurements at Automatic Weather Stations in Thailand
NASA Astrophysics Data System (ADS)
Keawprasert, T.; Sinhaneti, T.; Phuuntharo, P.; Phanakulwijit, S.; Nimsamer, A.
2017-08-01
A joint project between the National Institute of Metrology Thailand (NIMT) and the Thai Meteorological Department (TMD) was established to improve the traceability of meteorological measurements at automatic weather stations (AWSs) in Thailand. The project aimed to improve traceability of air temperature, relative humidity and atmospheric pressure by implementing on-site calibration facilities and developing new calibration procedures. First, new portable calibration facilities for air temperature, humidity and pressure were set up as working standards of the TMD. A portable humidity calibrator was applied as a uniform and stable source for calibration of thermo-hygrometers. A dew-point hygrometer was employed as reference hygrometer and a platinum resistance thermometer (PRT) traceable to NIMT was used as reference thermometer. The uniformity and stability in both temperature and relative humidity were characterized at NIMT. A transportable pressure calibrator was used for calibration of the air pressure sensor. The estimated overall uncertainty of the calibration setup is 0.2 K for air temperature, 1.0 % for relative humidity and 0.2 hPa for atmospheric pressure. Second, on-site calibration procedures were developed and four AWSs in the central and northern parts of Thailand were chosen as pilot stations for on-site calibration using the new calibration setups and the developed calibration procedures. At each station, the calibration was done at the minimum, average and maximum temperatures of the year for air temperature; at 20 %, 55 % and 90 % relative humidity at the average air temperature of that station; and, for atmospheric pressure, over a pressure range based on one-year statistics at ambient temperature. Additional in-field uncertainty contributions, such as the temperature dependence of the relative humidity measurement, were evaluated and included in the overall uncertainty budget. Preliminary calibration results showed that using a separate PRT probe at these AWSs would be recommended for improving the accuracy of air temperature measurement. In the case of relative humidity measurement, the data logger software needs to be upgraded to achieve an accuracy better than 3 %. For atmospheric pressure measurement, a higher accuracy barometer traceable to NIMT could be used to reduce the calibration uncertainty to below 0.2 hPa.
Calibration of space instruments at the Metrology Light Source
DOE Office of Scientific and Technical Information (OSTI.GOV)
Klein, R., E-mail: roman.klein@ptb.de; Fliegauf, R.; Gottwald, A.
2016-07-27
PTB has more than 20 years of experience in the calibration of space-based instruments using synchrotron radiation to cover the UV, VUV and X-ray spectral range. New instrumentation at the electron storage ring Metrology Light Source (MLS) opens up extended calibration possibilities within this framework. In particular, the set-up of a large vacuum vessel that can accommodate entire space instruments opens up new prospects. Moreover, a new facility for the calibration of radiation transfer source standards with a considerably extended spectral range has been put into operation. In addition, the characterization and calibration of single components such as mirrors, filters, gratings, and detectors is continued.
NASA Astrophysics Data System (ADS)
Liu, Hai-Zheng; Shi, Ze-Lin; Feng, Bin; Hui, Bin; Zhao, Yao-Hong
2016-03-01
Integrating microgrid polarimeters on the focal plane array (FPA) of an infrared detector causes non-uniformity of the polarization response. In order to reduce the effect of polarization non-uniformity, this paper constructs an experimental setup for capturing raw flat-field images and proposes a procedure for acquiring a non-uniformity calibration (NUC) matrix and calibrating raw polarization images. The proposed procedure treats the incident radiation as a polarization vector and provides a calibration matrix for each pixel. Both our matrix calibration and a two-point calibration are applied to our mid-wavelength infrared (MWIR) polarization imaging system with integrated microgrid polarimeters. Compared with two-point calibration, our matrix calibration reduces non-uniformity by 30-40% under flat-field test conditions with polarization. An outdoor scene observation experiment indicates that our calibration can effectively reduce polarization non-uniformity and improve the image quality of our MWIR polarization imaging system.
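The idea of a per-pixel calibration matrix can be sketched as follows: for each superpixel of the microgrid, a small matrix maps the raw intensities of the four polarizer orientations to corrected values. The matrix form, names and values below are assumptions for illustration, not the paper's actual calibration data.

```python
import numpy as np

def apply_nuc_matrices(raw, nuc):
    """Apply a per-superpixel calibration matrix to raw microgrid data.

    raw: array (H, W, 4) of raw intensities at 0/45/90/135 degree pixels
    nuc: array (H, W, 4, 4) of per-superpixel calibration matrices
    returns corrected intensities of the same shape as raw
    """
    # einsum: for every superpixel (i, j) compute nuc[i, j] @ raw[i, j]
    return np.einsum('ijkl,ijl->ijk', nuc, raw)

# Illustrative flat-field-derived matrices (here: identity, i.e. no correction)
H, W = 256, 320
nuc = np.tile(np.eye(4), (H, W, 1, 1))
raw = np.random.rand(H, W, 4)
corrected = apply_nuc_matrices(raw, nuc)
```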
Comparison of magnetic probe calibration at nano and millitesla magnitudes
NASA Astrophysics Data System (ADS)
Pahl, Ryan A.; Rovey, Joshua L.; Pommerenke, David J.
2014-01-01
Magnetic field probes are invaluable diagnostics for pulsed inductive plasma devices, where field magnitudes on the order of tenths of a tesla or larger are common. Typical methods of providing a broadband calibration of B-dot probes involve either a Helmholtz coil driven by a function generator or a network analyzer. Both calibration methods typically produce field magnitudes of tens of microtesla or less, at least three and as many as six orders of magnitude lower than their intended use. This calibration factor is then assumed constant regardless of magnetic field magnitude, and the effects of the experimental setup are ignored. This work quantifies the variation in calibration factor observed when calibrating magnetic field probes at low field magnitudes. Calibration of two B-dot probe designs as functions of frequency and field magnitude is presented. The first B-dot probe design is the most commonly used design and is constructed from two hand-wound inductors in a differential configuration. The second probe uses surface-mounted inductors in a differential configuration with balanced shielding to further reduce common-mode noise. Calibration factors are determined experimentally using an 80.4 mm radius Helmholtz coil in two separate configurations over a frequency range of 100-1000 kHz. A conventional low-magnitude calibration using a vector network analyzer produced a field magnitude of 158 nT and yielded calibration factors of 15 663 ± 1.7% and 4920 ± 0.6% T/(V s) at 457 kHz for the surface-mounted and hand-wound probes, respectively. A relevant-magnitude calibration using a pulsed-power setup with field magnitudes of 8.7-354 mT yielded calibration factors of 14 615 ± 0.3% and 4507 ± 0.4% T/(V s) at 457 kHz for the surface-mounted and hand-wound probes, respectively. Low-magnitude calibration resulted in a larger calibration factor, with an average difference of 9.7% for the surface-mounted probe and 12.0% for the hand-wound probe. The maximum difference between relevant- and low-magnitude tests was 21.5%.
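For reference, the low-magnitude calibration described above can be summarized in a few lines: the Helmholtz coil provides a known field amplitude from the drive current, and the calibration factor in T/(V s) is that amplitude divided by the time-integrated probe voltage. The sketch assumes a sinusoidal drive; the coil turns and current are placeholder values, not the paper's settings.

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability [T m/A]

def helmholtz_field(n_turns, current, radius):
    """On-axis center field of a Helmholtz pair [T]: B = (4/5)^(3/2) * mu0 * n * I / R."""
    return (4.0 / 5.0) ** 1.5 * MU0 * n_turns * current / radius

def bdot_calibration_factor(b_amplitude, v_amplitude, freq_hz):
    """Calibration factor [T/(V s)] relating the time-integrated probe voltage
    to the field, assuming sinusoidal excitation: the integral of V has
    amplitude V_amplitude / (2*pi*f)."""
    return b_amplitude / (v_amplitude / (2 * np.pi * freq_hz))

# Placeholder coil parameters (illustrative, not the paper's coil)
B0 = helmholtz_field(n_turns=10, current=0.05, radius=0.0804)
cal = bdot_calibration_factor(B0, v_amplitude=2e-4, freq_hz=457e3)
```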
Calibration of the SphinX experiment at the XACT facility in Palermo
NASA Astrophysics Data System (ADS)
Collura, A.; Barbera, M.; Varisco, S.; Calderone, G.; Reale, F.; Gburek, S.; Kowalinski, M.; Sylwester, J.; Siarkowski, M.; Bakala, J.; Podgorski, P.; Trzebinski, W.; Plocieniak, S.; Kordylewski, Z.
2008-07-01
Three of the four detectors of the SphinX experiment to be flown on the Russian mission Coronas-Photon have been measured at the XACT Facility of the Palermo Observatory at several wavelengths in the soft X-ray band. We describe the instrumental set-up and report some measurements. The analysis work to obtain the final calibration is still in progress.
Breuer, L.; Huisman, J.A.; Willems, P.; Bormann, H.; Bronstert, A.; Croke, B.F.W.; Frede, H.-G.; Graff, T.; Hubrechts, L.; Jakeman, A.J.; Kite, G.; Lanini, J.; Leavesley, G.; Lettenmaier, D.P.; Lindstrom, G.; Seibert, J.; Sivapalan, M.; Viney, N.R.
2009-01-01
This paper introduces the project on 'Assessing the impact of land use change on hydrology by ensemble modeling (LUCHEM)' that aims at investigating the envelope of predictions on changes in hydrological fluxes due to land use change. As part of a series of four papers, this paper outlines the motivation and setup of LUCHEM, and presents a model intercomparison for the present-day simulation results. Such an intercomparison provides a valuable basis to investigate the effects of different model structures on model predictions and paves the way for the analysis of the performance of multi-model ensembles and the reliability of the scenario predictions in companion papers. In this study, we applied a set of 10 lumped, semi-lumped and fully distributed hydrological models that have been previously used in land use change studies to the low mountainous Dill catchment, Germany. Substantial differences in model performance were observed, with Nash-Sutcliffe efficiencies ranging from 0.53 to 0.92. Differences in model performance were attributed to (1) model input data, (2) model calibration and (3) the physical basis of the models. The models were applied with two sets of input data: an original and a homogenized data set. This homogenization of precipitation, temperature and leaf area index was performed to reduce the variation between the models. Homogenization improved the comparability of model simulations and resulted in a reduced average bias, although some variation in model data input remained. The effect of the physical differences between models on the long-term water balance was mainly attributed to differences in how models represent evapotranspiration. Semi-lumped and lumped conceptual models slightly outperformed the fully distributed and physically based models. This was attributed to the automatic model calibration typically used for this type of model. Overall, however, we conclude that there was no superior model if several measures of model performance are considered and that all models are suitable to participate in further multi-model ensemble set-ups and land use change scenario investigations.
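The Nash-Sutcliffe efficiency quoted above is the standard skill score for these comparisons: one minus the ratio of the residual variance to the variance of the observations. A minimal implementation follows.

```python
import numpy as np

def nash_sutcliffe(observed, simulated):
    """Nash-Sutcliffe efficiency: 1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2).
    NSE = 1 is a perfect fit; NSE <= 0 means the model is no better than the
    mean of the observations."""
    observed = np.asarray(observed, dtype=float)
    simulated = np.asarray(simulated, dtype=float)
    return 1.0 - np.sum((observed - simulated) ** 2) / np.sum((observed - observed.mean()) ** 2)
```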
NASA Astrophysics Data System (ADS)
Zehe, E.; Klaus, J.
2011-12-01
Rapid flow in connected preferential flow paths is crucial for fast transport of water and solutes through soils, especially at tile-drained field sites. The present study tests whether an explicit treatment of worm burrows is feasible for modeling water flow, bromide and pesticide transport in structured heterogeneous soils with a 2-dimensional Richards-based model. The essence is to represent worm burrows as morphologically connected paths of low flow resistance and low retention capacity in the spatially highly resolved model domain. The underlying extensive database to test this approach was collected during an irrigation experiment, which investigated transport of bromide and the herbicide Isoproturon at a 900 sqm tile-drained field site. In a first step we investigated whether the inherent uncertainty in key data causes equifinality, i.e. whether there are several spatial model setups that reproduce tile drain event discharge in an acceptable manner. We found a considerable equifinality in the spatial setup of the model when key parameters such as the area density of worm burrows and the maximum volumetric water flows inside these macropores were varied within the ranges of either our measurement errors or measurements reported in the literature. Thirteen model runs yielded a Nash-Sutcliffe coefficient of more than 0.9. Also, the flow volumes were in good accordance and peak timing errors were less than or equal to 20 min. In the second step we thus investigated whether this "equifinality" in spatial model setups may be reduced when including the bromide tracer data in the model falsification process. We simulated transport of bromide for the 13 spatial model setups that performed best with respect to reproducing tile drain event discharge, without any further calibration. Four of these 13 model setups allowed bromide transport to be modeled within fixed limits of acceptability. Parameter uncertainty and equifinality could thus be reduced. Thirdly, we selected one of those four setups for simulating transport of Isoproturon, which was applied the day before the irrigation experiment, and tested different parameter combinations to characterise adsorption according to the footprint database. Simulations could, however, only reproduce the observed event-based leaching behaviour when we allowed for retardation coefficients that were very close to one. This finding is consistent with various field observations. We conclude: a) A realistic representation of dominating structures and their topology is of key importance for predicting preferential water and mass flows at tile-drained hillslopes. b) Parameter uncertainty and equifinality could be reduced, but a system-inherent equifinality in a 2-dimensional Richards-based model has to be accepted.
NASA Astrophysics Data System (ADS)
Islam, Siraj Ul; Déry, Stephen J.
2017-03-01
This study evaluates predictive uncertainties in the snow hydrology of the Fraser River Basin (FRB) of British Columbia (BC), Canada, using the Variable Infiltration Capacity (VIC) model forced with several high-resolution gridded climate datasets. These datasets include the Canadian Precipitation Analysis and the thin-plate smoothing splines (ANUSPLIN), North American Regional Reanalysis (NARR), University of Washington (UW) and Pacific Climate Impacts Consortium (PCIC) gridded products. Uncertainties are evaluated at different stages of the VIC implementation, starting with the driving datasets, optimization of model parameters, and model calibration during cool and warm phases of the Pacific Decadal Oscillation (PDO). The inter-comparison of the forcing datasets (precipitation and air temperature) and their VIC simulations (snow water equivalent - SWE - and runoff) reveals widespread differences over the FRB, especially in mountainous regions. The ANUSPLIN precipitation shows a considerable dry bias in the Rocky Mountains, whereas the NARR winter air temperature is 2 °C warmer than the other datasets over most of the FRB. In the VIC simulations, the elevation-dependent changes in the maximum SWE (maxSWE) are more prominent at higher elevations of the Rocky Mountains, where the PCIC-VIC simulation accumulates too much SWE and ANUSPLIN-VIC yields an underestimation. Additionally, at each elevation range, the day of maxSWE varies from 10 to 20 days between the VIC simulations. The snow melting season begins early in the NARR-VIC simulation, whereas the PCIC-VIC simulation delays the melting, indicating seasonal uncertainty in SWE simulations. When compared with the observed runoff for the Fraser River main stem at Hope, BC, the ANUSPLIN-VIC simulation shows considerable underestimation of runoff throughout the water year owing to reduced precipitation in the ANUSPLIN forcing dataset. The NARR-VIC simulation yields more winter and spring runoff and earlier decline of flows in summer due to a nearly 15-day earlier onset of the FRB springtime snowmelt. Analysis of the parametric uncertainty in the VIC calibration process shows that the choice of the initial parameter range plays a crucial role in defining the model hydrological response for the FRB. Furthermore, the VIC calibration process is biased toward cool and warm phases of the PDO and the choice of proper calibration and validation time periods is important for the experimental setup. Overall the VIC hydrological response is prominently influenced by the uncertainties involved in the forcing datasets rather than those in its parameter optimization and experimental setups.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ribezzi-Crivellari, M.; Huguet, J. M.; Ritort, F.
We present a dual-trap optical tweezers setup which directly measures forces using linear momentum conservation. The setup uses a counter-propagating geometry, which allows momentum measurement on each beam separately. The experimental advantages of this setup include low drift due to all-optical manipulation, and a robust calibration (independent of the features of the trapped object or buffer medium) due to the force measurement method. Although this design does not attain the high resolution of some co-propagating setups, we show that it can be used to perform different single molecule measurements: fluctuation-based molecular stiffness characterization at different forces and hopping experiments on molecular hairpins. Remarkably, in our setup it is possible to manipulate very short tethers (such as molecular hairpins with short handles) down to the limit where beads are almost in contact. The setup is used to illustrate a novel method for measuring the stiffness of optical traps and tethers on the basis of equilibrium force fluctuations, i.e., without the need of measuring the force vs molecular extension curve. This method is of general interest for dual trap optical tweezers setups and can be extended to setups which do not directly measure forces.
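The fluctuation-based idea can be illustrated with the equipartition theorem: for a harmonic degree of freedom, var(F) = k kB T, so an effective stiffness can be read off from the variance of the measured force alone. The snippet below is that simplified single-spring estimate, not the authors' full dual-trap analysis, which has to separate trap and tether contributions.

```python
import numpy as np

KB = 1.380649e-23  # Boltzmann constant [J/K]

def stiffness_from_force_fluctuations(force_trace_N, temperature_K=298.0):
    """Effective stiffness [N/m] of a harmonic element from equilibrium force
    fluctuations: equipartition gives var(F) = k * kB * T for F = k * x."""
    return np.var(force_trace_N) / (KB * temperature_K)

# Example with a synthetic force trace (placeholder data, pN-scale noise)
rng = np.random.default_rng(0)
force = 5e-12 + np.sqrt(5e-26) * rng.standard_normal(100_000)  # N
k_eff = stiffness_from_force_fluctuations(force)  # ~1.2e-5 N/m at 298 K
```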
High Efficiency Variable Speed Versatile Power Air Conditioning System for Military Vehicles
2013-08-01
Presented at the Power & Mobility (P&M) Mini-Symposium, August 21-22, 2013, Troy, Michigan: high efficiency variable speed versatile power air conditioning system for military vehicles. The system's power draw was measured using a calibrated watt meter; the schematic of the measurement setup is shown in Figure 5 and the setup itself in Figure 6, with testing in a Rocky Research environmental chamber. Cooling capacity was directly measured in Btu/hr or watts by measuring the air flow velocity.
Elongation measurement using 1-dimensional image correlation method
NASA Astrophysics Data System (ADS)
Phongwisit, Phachara; Kamoldilok, Surachart; Buranasiri, Prathan
2016-11-01
The aim of this paper was to study, set up, and calibrate an elongation measurement using the 1-Dimensional Image Correlation (1-DIC) method. To confirm the correctness of our method and setup, we calibrated it against another method. In this paper, we used a small spring as a sample and expressed the result in terms of the spring constant. Following the fundamentals of the image correlation method, images of the undeformed and deformed sample were compared to quantify the deformation process. By comparing the pixel locations of a reference point in both images, the spring's elongation was calculated. The results were then compared with the spring constant found from Hooke's law, and an error of about 5 percent was found. This DIC method will then be applied to measure the elongation of different kinds of small fiber samples.
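The core of a 1-D image correlation measurement is finding the shift that maximizes the cross-correlation between a reference intensity profile and a deformed one. A minimal sketch of that step follows; subpixel refinement and the conversion from pixels to millimetres via the imaging scale are left out.

```python
import numpy as np

def displacement_1d(reference, deformed):
    """Integer-pixel displacement between two 1-D intensity profiles,
    estimated from the peak of their mean-removed cross-correlation."""
    a = reference - reference.mean()
    b = deformed - deformed.mean()
    corr = np.correlate(b, a, mode="full")
    lag = np.argmax(corr) - (len(a) - 1)
    return lag  # positive lag: the pattern moved toward higher indices

# Synthetic check: a pattern shifted by 7 pixels is recovered as 7
x = np.random.rand(500)
shifted = np.roll(x, 7)
print(displacement_1d(x, shifted))  # 7
```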
NASA Astrophysics Data System (ADS)
Huber, C.; Abert, C.; Bruckner, F.; Groenefeld, M.; Muthsam, O.; Schuschnigg, S.; Sirak, K.; Thanhoffer, R.; Teliban, I.; Vogler, C.; Windl, R.; Suess, D.
2016-10-01
3D printing is a recently developed technique for single-unit production and for structures that were previously impossible to build. The current work presents a method to 3D print polymer-bonded isotropic hard magnets with a low-cost, end-user 3D printer. Commercially available isotropic NdFeB powder inside a PA11 matrix is characterized and prepared for the printing process. An example of a printed magnet with a complex shape that was designed to generate a specific stray field is presented and compared with a finite element simulation solving the macroscopic Maxwell equations. For magnetic characterization, and for comparing 3D printed structures with injection molded parts, hysteresis measurements are performed. To measure the stray field outside the magnet, the printer is upgraded to a 3D magnetic flux density measurement system. To avoid an elaborate adjustment of the sensor, a simulation is used to calibrate the angles, sensitivity, and offset of the sensor. With this setup, a measurement resolution of 0.05 mm along the z-axis is achievable. The effectiveness of our calibration method is shown. With our setup, we are able to print polymer-bonded magnetic systems with the freedom of having a specific complex shape with locally tailored magnetic properties. The 3D scanning setup is easy to mount, and with our calibration method we are able to obtain accurate measurements of the stray field.
Calorimetric method of ac loss measurement in a rotating magnetic field.
Ghoshal, P K; Coombs, T A; Campbell, A M
2010-07-01
A method is described for calorimetric ac-loss measurements of high-Tc superconductors (HTS) at 80 K. It is based on a technique used at 4.2 K for conventional superconducting wires that allows an easy loss measurement in parallel or perpendicular external field orientation. This paper focuses on the ac loss measurement setup and its calibration in a rotating magnetic field. The experimental setup demonstrates loss measurement using a temperature-rise method under the influence of a rotating magnetic field; the slight temperature increase of the sample in an ac field is used as a measure of the losses. The aim is to simulate the loss in rotating machines using HTS. This is a unique technique to measure total ac loss in HTS at power frequencies. The sample is mounted on a cold finger extended from a liquid nitrogen heat exchanger (HEX). Thermal insulation between the HEX and the sample is provided by a sample holder made of a material with low thermal conductivity and low eddy-current heating, housed in a vacuum vessel. A temperature sensor and a noninductive heater have been incorporated in the sample holder, allowing a rapid sample change. The main part of the data obtained in the calorimetric measurement is used for calibration. The focus is on the accuracy and calibrations required to predict the actual ac losses in HTS. This setup has the advantage of being able to measure the total ac loss under the influence of a continuously moving field as experienced by any rotating machine.
Spectrophotometers for plutonium monitoring in HB-line
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lascola, R. J.; O'Rourke, P. E.; Kyser, E. A.
2016-02-12
This report describes the equipment, control software, calibrations for total plutonium and plutonium oxidation state, and qualification studies for the instrument. It also provides a detailed description of the uncertainty analysis, which includes source terms associated with plutonium calibration standards, instrument drift, and inter-instrument variability. Also included are work instructions for instrument, flow cell, and optical fiber setup, work instructions for routine maintenance, and drawings and schematic diagrams.
Evaluation of the Quality of Action Cameras with Wide-Angle Lenses in Uav Photogrammetry
NASA Astrophysics Data System (ADS)
Hastedt, H.; Ekkel, T.; Luhmann, T.
2016-06-01
The application of light-weight cameras in UAV photogrammetry is required due to restrictions in payload. In general, consumer cameras with a normal lens type are applied to a UAV system. The availability of action cameras, like the GoPro Hero4 Black, including a wide-angle (fish-eye) lens, offers new perspectives in UAV projects. In these investigations, different calibration procedures for fish-eye lenses are evaluated in order to quantify their accuracy potential in UAV photogrammetry. The GoPro Hero4 is evaluated using different acquisition modes, and we investigate to what extent the standard calibration approaches in OpenCV or Agisoft PhotoScan/Lens can be applied to the evaluation processes in UAV photogrammetry. Different calibration setups and processing procedures are therefore assessed and discussed. Additionally, a pre-correction of the initial distortion by GoPro Studio and its application for photogrammetric purposes is evaluated. An experimental setup with a set of control points and a prospective flight scenario is chosen to evaluate the processing results using Agisoft PhotoScan. It is thereby analysed to what extent a pre-calibration and pre-correction of a GoPro Hero4 reinforces the reliability and accuracy of a flight scenario.
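For the OpenCV route mentioned above, a fisheye camera model can be calibrated from checkerboard detections roughly as sketched below; the board size, image folder, and flags are placeholders, and this is a generic OpenCV recipe rather than the workflow used in the paper.

```python
import cv2
import glob
import numpy as np

# Checkerboard geometry (inner corners) and square size -- placeholders.
nx, ny, square = 9, 6, 0.025                       # 25 mm squares, assumed
objp = np.zeros((1, nx * ny, 3), np.float64)
objp[0, :, :2] = np.mgrid[0:nx, 0:ny].T.reshape(-1, 2) * square

obj_points, img_points, image_size = [], [], None
for fname in glob.glob("gopro_calib/*.jpg"):       # hypothetical image folder
    gray = cv2.imread(fname, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, (nx, ny))
    if found:
        obj_points.append(objp)
        img_points.append(corners.reshape(1, -1, 2).astype(np.float64))
        image_size = gray.shape[::-1]

K = np.zeros((3, 3))
D = np.zeros((4, 1))                               # fisheye model coefficients k1..k4
flags = cv2.fisheye.CALIB_RECOMPUTE_EXTRINSIC | cv2.fisheye.CALIB_FIX_SKEW
rms, K, D, rvecs, tvecs = cv2.fisheye.calibrate(
    obj_points, img_points, image_size, K, D, flags=flags)
print("RMS reprojection error [px]:", rms)
```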
Filacchione, Gianrico; Capaccioni, Fabrizio; Altieri, Francesca; Carli, Cristian; Ficai Veltroni, Iacopo; Dami, Michele; Tommasi, Leonardo; Aroldi, Gianluca; Borrelli, Donato; Barbis, Alessandra; Baroni, Marco; Pastorini, Guia; Mugnuolo, Raffaele
2017-09-01
Before integration aboard the European Space Agency BepiColombo mission to Mercury, the visible and near infrared hyperspectral imager underwent an intensive calibration campaign. In this Paper I we report on the radiometric and linearity responses of the instrument, including the optical setups used to measure them. Paper II [F. Altieri et al., Rev. Sci. Instrum. 88, 094503 (2017)] will describe the complementary spectral response calibration. The responsivity is used to calculate the expected instrumental signal-to-noise ratio for typical observation scenarios of the BepiColombo mission around Mercury. A description is provided of the internal calibration unit that will be used to verify the relative response during the instrument's lifetime. The instrumental spatial response functions as measured along and across the spectrometer's slit direction were determined by means of spatial scans performed with illuminated test slits placed at the focus of a collimator. The dedicated optical setup used for these measurements is described together with the methods used to derive the instrumental spatial responses at different positions within the 3.5° field of view and at different wavelengths in the 0.4-2.0 μm spectral range. Finally, the instrument imaging capabilities and modulation transfer function are tested by using a standard mask as a target.
An Omnidirectional Vision Sensor Based on a Spherical Mirror Catadioptric System.
Barone, Sandro; Carulli, Marina; Neri, Paolo; Paoli, Alessandro; Razionale, Armando Viviano
2018-01-31
The combination of mirrors and lenses, which defines a catadioptric sensor, is widely used in the computer vision field. The definition of a catadioptric sensors is based on three main features: hardware setup, projection modelling and calibration process. In this paper, a complete description of these aspects is given for an omnidirectional sensor based on a spherical mirror. The projection model of a catadioptric system can be described by the forward projection task (FP, from 3D scene point to 2D pixel coordinates) and backward projection task (BP, from 2D coordinates to 3D direction of the incident light). The forward projection of non-central catadioptric vision systems, typically obtained by using curved mirrors, is usually modelled by using a central approximation and/or by adopting iterative approaches. In this paper, an analytical closed-form solution to compute both forward and backward projection for a non-central catadioptric system with a spherical mirror is presented. In particular, the forward projection is reduced to a 4th order polynomial by determining the reflection point on the mirror surface through the intersection between a sphere and an ellipse. A matrix format of the implemented models, suitable for fast point clouds handling, is also described. A robust calibration procedure is also proposed and applied to calibrate a catadioptric sensor by determining the mirror radius and center with respect to the camera.
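A sketch of the backward projection (BP) step for a spherical-mirror catadioptric system: a pixel is mapped to a camera ray, intersected with the mirror sphere, and reflected about the local normal. The intrinsics, mirror radius, and mirror center below are placeholder values, and the paper's closed-form forward projection is not reproduced here.

```python
import numpy as np

K = np.array([[800.0, 0.0, 320.0],          # assumed pinhole intrinsics
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
sphere_center = np.array([0.0, 0.0, 0.5])   # mirror center in camera frame (m), assumed
sphere_radius = 0.1                          # mirror radius (m), assumed

def backward_projection(u, v):
    """Pixel (u, v) -> point on the mirror and reflected ray direction."""
    d = np.linalg.inv(K) @ np.array([u, v, 1.0])
    d /= np.linalg.norm(d)                  # viewing ray from the camera center
    # Ray/sphere intersection: |t*d - c|^2 = r^2  ->  quadratic in t.
    b = -2.0 * d @ sphere_center
    c = sphere_center @ sphere_center - sphere_radius**2
    disc = b * b - 4.0 * c
    if disc < 0.0:
        return None                         # pixel does not see the mirror
    t = (-b - np.sqrt(disc)) / 2.0          # nearest intersection
    p = t * d                               # reflection point on the mirror
    n = (p - sphere_center) / sphere_radius # outward surface normal
    r = d - 2.0 * (d @ n) * n               # reflected ray direction
    return p, r / np.linalg.norm(r)

print(backward_projection(400.0, 260.0))
```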
An Omnidirectional Vision Sensor Based on a Spherical Mirror Catadioptric System
Barone, Sandro; Carulli, Marina; Razionale, Armando Viviano
2018-01-01
The combination of mirrors and lenses, which defines a catadioptric sensor, is widely used in the computer vision field. The definition of a catadioptric sensors is based on three main features: hardware setup, projection modelling and calibration process. In this paper, a complete description of these aspects is given for an omnidirectional sensor based on a spherical mirror. The projection model of a catadioptric system can be described by the forward projection task (FP, from 3D scene point to 2D pixel coordinates) and backward projection task (BP, from 2D coordinates to 3D direction of the incident light). The forward projection of non-central catadioptric vision systems, typically obtained by using curved mirrors, is usually modelled by using a central approximation and/or by adopting iterative approaches. In this paper, an analytical closed-form solution to compute both forward and backward projection for a non-central catadioptric system with a spherical mirror is presented. In particular, the forward projection is reduced to a 4th order polynomial by determining the reflection point on the mirror surface through the intersection between a sphere and an ellipse. A matrix format of the implemented models, suitable for fast point clouds handling, is also described. A robust calibration procedure is also proposed and applied to calibrate a catadioptric sensor by determining the mirror radius and center with respect to the camera. PMID:29385051
NASA Astrophysics Data System (ADS)
Michalik-Onichimowska, Aleksandra; Kern, Simon; Riedel, Jens; Panne, Ulrich; King, Rudibert; Maiwald, Michael
2017-04-01
Driven mostly by the search for chemical syntheses under biocompatible conditions, so called "click" chemistry rapidly became a growing field of research. The resulting simple one-pot reactions are so far only scarcely accompanied by an adequate optimization via comparably straightforward and robust analysis techniques possessing short set-up times. Here, we report on a fast and reliable calibration-free online NMR monitoring approach for technical mixtures. It combines a versatile fluidic system, continuous-flow measurement of 1H spectra with a time interval of 20 s per spectrum, and a robust, fully automated algorithm to interpret the obtained data. As a proof-of-concept, the thiol-ene coupling between N-boc cysteine methyl ester and allyl alcohol was conducted in a variety of non-deuterated solvents while its time-resolved behaviour was characterized with step tracer experiments. Overlapping signals in online spectra during thiol-ene coupling could be deconvoluted with a spectral model using indirect hard modeling and were subsequently converted to either molar ratios (using a calibration-free approach) or absolute concentrations (using 1-point calibration). For various solvents the kinetic constant k for pseudo-first order reaction was estimated to be 3.9 h-1 at 25 °C. The obtained results were compared with direct integration of non-overlapping signals and showed good agreement with the implemented mass balance.
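The pseudo-first-order rate constant reported above can be recovered from concentration-time profiles with a simple log-linear fit; the data below are synthetic, generated for k = 3.9 1/h, and are not taken from the paper.

```python
import numpy as np

# Synthetic thiol concentration profile c(t) = c0 * exp(-k t) for a pseudo-first-order
# reaction, with a little noise to mimic online NMR integrals.
rng = np.random.default_rng(0)
t_h = np.linspace(0.0, 1.0, 20)                     # reaction time in hours
c0, k_true = 0.5, 3.9                               # mol/L, 1/h
c = c0 * np.exp(-k_true * t_h) * (1 + 0.02 * rng.standard_normal(t_h.size))

# ln(c) = ln(c0) - k t  ->  k is minus the slope of a straight-line fit.
slope, intercept = np.polyfit(t_h, np.log(c), 1)
print(f"estimated k = {-slope:.2f} 1/h (true value {k_true})")
```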
Comparison of 2c- and 3cLIF droplet temperature imaging
NASA Astrophysics Data System (ADS)
Palmer, Johannes; Reddemann, Manuel A.; Kirsch, Valeri; Kneer, Reinhold
2018-06-01
This work presents "pulsed 2D-3cLIF-EET" as a measurement setup for micro-droplet internal temperature imaging. The setup relies on a third color channel that allows correcting spatially changing energy transfer rates between the two applied fluorescent dyes. First measurement results are compared with results of two slightly different versions of the recent "pulsed 2D-2cLIF-EET" method. Results reveal a higher temperature measurement accuracy for the recent 2cLIF setup: the average droplet temperature is determined by the 2cLIF setup with an uncertainty of less than 1 K and a spatial deviation of about 3.7 K. The new 3cLIF approach would become competitive if the existing droplet size dependency were accounted for by an additional calibration and if the processing algorithm included spatial measurement errors more appropriately.
A high sensitivity ultralow temperature RF conductance and noise measurement setup.
Parmentier, F D; Mahé, A; Denis, A; Berroir, J-M; Glattli, D C; Plaçais, B; Fève, G
2011-01-01
We report on the realization of a high sensitivity RF noise measurement scheme to study small current fluctuations of mesoscopic systems at milli-Kelvin temperatures. The setup relies on the combination of an interferometric amplification scheme and a quarter-wave impedance transformer, allowing the measurement of noise power spectral densities with gigahertz bandwidth up to five orders of magnitude below the amplifier noise floor. We simultaneously measure the high frequency conductance of the sample by diverting a portion of the signal to a microwave homodyne detection. We describe the principle of the setup, as well as its implementation and calibration. Finally, we show that our setup allows us to fully characterize a subnanosecond on-demand single electron source. More generally, its sensitivity and bandwidth make it suitable for applications manipulating single charges at GHz frequencies.
NASA Technical Reports Server (NTRS)
Moore, Alvah S., Jr.; Mauldin, L. ED, III; Stump, Charles W.; Reagan, John A.; Fabert, Milton G.
1989-01-01
The calibration of the Halogen Occultation Experiment (HALOE) sun sensor is described. This system consists of two energy-balancing silicon detectors which provide coarse azimuth and elevation control signals and a silicon photodiode array which provides top and bottom solar edge data for fine elevation control. All three detectors were calibrated on a mountaintop near Tucson, Ariz., using the Langley plot technique. The conventional Langley plot technique was modified to allow calibration of the two coarse detectors, which operate wideband. A brief description of the test setup is given. The HALOE instrument is a gas correlation radiometer that is now being developed for the Upper Atmospheric Research Satellite.
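A minimal sketch of the Langley-plot regression underlying the calibration described above: the logarithm of the detector signal is fitted against airmass, and extrapolation to zero airmass gives the exo-atmospheric signal V0. The zenith angles and voltages are illustrative values, not HALOE data.

```python
import numpy as np

# Measured detector voltages at different solar zenith angles (illustrative data).
zenith_deg = np.array([45.0, 55.0, 62.0, 68.0, 72.0, 75.0])
signal_V   = np.array([1.88, 1.76, 1.63, 1.47, 1.31, 1.15])

# Plane-parallel airmass approximation m = 1/cos(zenith); adequate away from the horizon.
airmass = 1.0 / np.cos(np.radians(zenith_deg))

# Beer-Lambert: ln(V) = ln(V0) - tau * m  ->  linear fit in airmass.
slope, intercept = np.polyfit(airmass, np.log(signal_V), 1)
tau = -slope                      # effective optical depth during the calibration
V0  = np.exp(intercept)           # extrapolated top-of-atmosphere signal
print(f"tau = {tau:.3f}, V0 = {V0:.2f} V")
```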
Network operability of ground-based microwave radiometers: Calibration and standardization efforts
NASA Astrophysics Data System (ADS)
Pospichal, Bernhard; Löhnert, Ulrich; Küchler, Nils; Czekala, Harald
2017-04-01
Ground-based microwave radiometers (MWR) are already widely used by national weather services and research institutions all around the world. Most of the instruments operate continuously and are beginning to be implemented into data assimilation for atmospheric models. Especially their potential for continuously observing boundary-layer temperature profiles as well as integrated water vapor and cloud liquid water path makes them valuable for improving short-term weather forecasts. However until now, most MWR have been operated as stand-alone instruments. In order to benefit from a network of these instruments, standardization of calibration, operation and data format is necessary. In the frame of TOPROF (COST Action ES1303) several efforts have been undertaken, such as uncertainty and bias assessment, or calibration intercomparison campaigns. The goal was to establish protocols for providing quality controlled (QC) MWR data and their uncertainties. To this end, standardized calibration procedures for MWR have been developed and recommendations for radiometer users compiled. Based on the results of the TOPROF campaigns, a new, high-accuracy liquid-nitrogen calibration load has been introduced for MWR manufactured by Radiometer Physics GmbH (RPG). The new load improves the accuracy of the measurements considerably and will lead to even more reliable atmospheric observations. Next to the recommendations for set-up, calibration and operation of ground-based MWR within a future network, we will present homogenized methods to determine the accuracy of a running calibration as well as means for automatic data quality control. This sets the stage for the planned microwave calibration center at JOYCE (Jülich Observatory for Cloud Evolution), which will be shortly introduced.
NASA Astrophysics Data System (ADS)
Reicher, Naama; Segev, Lior; Rudich, Yinon
2018-01-01
The WeIzmann Supercooled Droplets Observation on Microarray (WISDOM) is a new setup for studying ice nucleation in an array of monodisperse droplets for atmospheric implications. WISDOM combines microfluidics techniques for droplets production and a cryo-optic stage for observation and characterization of freezing events of individual droplets. This setup is designed to explore heterogeneous ice nucleation in the immersion freezing mode, down to the homogeneous freezing of water (235 K) in various cooling rates (typically 0.1-10 K min-1). It can also be used for studying homogeneous freezing of aqueous solutions in colder temperatures. Frozen fraction, ice nucleation active surface site densities and freezing kinetics can be obtained from WISDOM measurements for hundreds of individual droplets in a single freezing experiment. Calibration experiments using eutectic solutions and previously studied materials are described. WISDOM also allows repeatable cycles of cooling and heating for the same array of droplets. This paper describes the WISDOM setup, its temperature calibration, validation experiments and measurement uncertainties. Finally, application of WISDOM to study the ice nucleating particle (INP) properties of size-selected ambient Saharan dust particles is presented.
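A sketch of how frozen fraction and ice-nucleation active surface-site (INAS) density are typically derived from such droplet-array freezing experiments; the droplet count, surface area per droplet, and freezing temperatures below are invented for illustration and do not correspond to WISDOM results.

```python
import numpy as np

# Hypothetical freezing temperatures (K) of a monodisperse droplet array; NaN = not frozen.
T_freeze = np.array([248.1, 246.5, 245.0, 244.2, 243.8, 243.1, 242.0,
                     np.nan, np.nan, np.nan])
n_droplets = T_freeze.size
area_per_droplet_cm2 = 1.0e-6        # particle surface area per droplet, assumed

# Frozen fraction as a function of temperature (cooling direction).
T_grid = np.arange(250.0, 240.0, -0.5)
frozen_fraction = np.array([(np.nan_to_num(T_freeze, nan=-np.inf) >= T).sum()
                            for T in T_grid]) / n_droplets

# INAS density n_s(T) = -ln(1 - f(T)) / A  (singular, time-independent description).
with np.errstate(divide="ignore"):
    n_s = -np.log(1.0 - frozen_fraction) / area_per_droplet_cm2

for T, f, ns in zip(T_grid, frozen_fraction, n_s):
    print(f"T = {T:5.1f} K  f = {f:4.2f}  n_s = {ns:9.3e} cm^-2")
```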
An automated pressure data acquisition system for evaluation of pressure sensitive paint chemistries
NASA Technical Reports Server (NTRS)
Sealey, Bradley S.; Mitchell, Michael; Burkett, Cecil G.; Oglesby, Donald M.
1993-01-01
An automated pressure data acquisition system for testing of pressure sensitive phosphorescent paints was designed, assembled, and tested. The purpose of the calibration system is the evaluation and selection of pressure sensitive paint chemistries that could be used to obtain global aerodynamic pressure distribution measurements. The test apparatus and setup used for pressure sensitive paint characterizations is described. The pressure calibrations, thermal sensitivity effects, and photodegradation properties are discussed.
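Pressure-sensitive paints are commonly reduced to a Stern-Volmer-type calibration, I_ref/I = A + B*(P/P_ref); the sketch below fits such a curve from calibration-chamber data with invented numbers, as an illustration of the characterization step rather than the specific NASA system.

```python
import numpy as np

# Calibration-chamber data at constant temperature (illustrative values).
P_ratio = np.array([0.2, 0.4, 0.6, 0.8, 1.0, 1.2, 1.4])        # P / P_ref
I_ratio = np.array([0.32, 0.49, 0.66, 0.83, 1.00, 1.17, 1.34])  # I_ref / I

# Linear least-squares fit for the Stern-Volmer coefficients A and B.
B, A = np.polyfit(P_ratio, I_ratio, 1)

# Invert the calibration: recover pressure from an intensity-ratio value.
I_ratio_pixel = 0.90
P_pixel = (I_ratio_pixel - A) / B
print(f"A = {A:.2f}, B = {B:.2f}, P/P_ref at test pixel = {P_pixel:.2f}")
```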
Biaxial Anisotropic Material Development and Characterization using Rectangular to Square Waveguide
2015-03-26
Figure 29: Measurement setup with test-port cables and network analyzer. ... The VNA and the waveguide adapters are torqued to specification with calibrated torque wrenches, and waveguide flanges are aligned using precision alignment pins. A TRL calibration is performed prior to measuring the sample. ... The error tolerance is set to 0.0001, which enables the frequency-domain solver to refine the mesh until the tolerance is achieved.
NASA Astrophysics Data System (ADS)
Hottin, Jérôme; Moreau, Julien; Spadavecchia, Jolanda; Bellemain, Alain; Lecerf, Laure; Goossens, Michel; Canva, Michael
2008-04-01
The present paper summarizes some of our work in the field of genetic diagnosis using Surface Plasmon Resonance Imaging. The optical setup and its capabilities are presented, as well as the gold surface functionalization used. Results obtained with oligonucleotide targets specific to Cystic Fibrosis, at high and low concentrations, are shown. The self-calibration method we have developed to reduce data dispersion in genetic diagnosis applications is described.
Modeling the X-Ray Process, and X-Ray Flaw Size Parameter for POD Studies
NASA Technical Reports Server (NTRS)
Koshti, Ajay M.
2014-01-01
Nondestructive evaluation (NDE) method reliability can be determined by a statistical flaw detection study called probability of detection (POD) study. In many instances the NDE flaw detectability is given as a flaw size such as crack length. The flaw is either a crack or behaving like a crack in terms of affecting the structural integrity of the material. An alternate approach is to use a more complex flaw size parameter. The X-ray flaw size parameter, given here, takes into account many setup and geometric factors. The flaw size parameter relates to X-ray image contrast and is intended to have a monotonic correlation with the POD. Some factors such as set-up parameters including X-ray energy, exposure, detector sensitivity, and material type that are not accounted for in the flaw size parameter may be accounted for in the technique calibration and controlled to meet certain quality requirements. The proposed flaw size parameter and the computer application described here give an alternate approach to conduct the POD studies. Results of the POD study can be applied to reliably detect small flaws through better assessment of effect of interaction between various geometric parameters on the flaw detectability. Moreover, a contrast simulation algorithm for a simple part-source-detector geometry using calibration data is also provided for the POD estimation.
Modeling the X-ray Process, and X-ray Flaw Size Parameter for POD Studies
NASA Technical Reports Server (NTRS)
Koshti, Ajay M.
2014-01-01
Nondestructive evaluation (NDE) method reliability can be determined by a statistical flaw detection study called probability of detection (POD) study. In many instances, the NDE flaw detectability is given as a flaw size such as crack length. The flaw is either a crack or behaving like a crack in terms of affecting the structural integrity of the material. An alternate approach is to use a more complex flaw size parameter. The X-ray flaw size parameter, given here, takes into account many setup and geometric factors. The flaw size parameter relates to X-ray image contrast and is intended to have a monotonic correlation with the POD. Some factors such as set-up parameters, including X-ray energy, exposure, detector sensitivity, and material type that are not accounted for in the flaw size parameter may be accounted for in the technique calibration and controlled to meet certain quality requirements. The proposed flaw size parameter and the computer application described here give an alternate approach to conduct the POD studies. Results of the POD study can be applied to reliably detect small flaws through better assessment of effect of interaction between various geometric parameters on the flaw detectability. Moreover, a contrast simulation algorithm for a simple part-source-detector geometry using calibration data is also provided for the POD estimation.
Setup calibration and optimization for comparative digital holography
NASA Astrophysics Data System (ADS)
Baumbach, Torsten; Osten, Wolfgang; Kebbel, Volker; von Kopylow, Christoph; Jueptner, Werner
2004-08-01
With increasing globalization many enterprises decide to produce the components of their products at different locations all over the world. Consequently, new technologies and strategies for quality control are required. In this context the remote comparison of objects with regard to their shape or response on certain loads is getting more and more important for a variety of applications. For such a task the novel method of comparative digital holography is a suitable tool with interferometric sensitivity. With this technique the comparison in shape or deformation of two objects does not require the presence of both objects at the same place. In contrast to the well known incoherent techniques based on inverse fringe projection this new approach uses a coherent mask for the illumination of the sample object. The coherent mask is created by digital holography to enable the instant access to the complete optical information of the master object at any wanted place. The reconstruction of the mask is done by a spatial light modulator (SLM). The transmission of the digital master hologram to the place of comparison can be done via digital telecommunication networks. Contrary to other interferometric techniques this method enables the comparison of objects with different microstructure. In continuation of earlier reports our investigations are focused here on the analysis of the constraints of the setup with respect to the quality of the hologram reconstruction with a spatial light modulator. For successful measurements the selection of the appropriate reconstruction method and the adequate optical set-up is mandatory. In addition, the use of a SLM for the reconstruction requires the knowledge of its properties for the accomplishment of this method. The investigation results for the display properties such as display curvature, phase shift and the consequences for the technique will be presented. The optimization and the calibration of the set-up and its components lead to improved results in comparative digital holography with respect to the resolution. Examples of measurements before and after the optimization and calibration will be presented.
A PRACTICAL THEOREM ON USING INTERFEROMETRY TO MEASURE THE GLOBAL 21 cm SIGNAL
DOE Office of Scientific and Technical Information (OSTI.GOV)
Venumadhav, Tejaswi; Chang, Tzu-Ching; Doré, Olivier
2016-08-01
The sky-averaged, or global, background of redshifted 21 cm radiation is expected to be a rich source of information on cosmological reheating and reionization. However, measuring the signal is technically challenging: one must extract a small, frequency-dependent signal from under much brighter spectrally smooth foregrounds. Traditional approaches to study the global signal have used single antennas, which require one to calibrate out the frequency-dependent structure in the overall system gain (due to internal reflections, for example) as well as remove the noise bias from auto-correlating a single amplifier output. This has motivated proposals to measure the signal using cross-correlations in interferometric setups, where additional calibration techniques are available. In this paper we focus on the general principles driving the sensitivity of the interferometric setups to the global signal. We prove that this sensitivity is directly related to two characteristics of the setup: the cross-talk between readout channels (i.e., the signal picked up at one antenna when the other one is driven) and the correlated noise due to thermal fluctuations of lossy elements (e.g., absorbers or the ground) radiating into both channels. Thus in an interferometric setup, one cannot suppress cross-talk and correlated thermal noise without reducing sensitivity to the global signal by the same factor; instead, the challenge is to characterize these effects and their frequency dependence. We illustrate our general theorem by explicit calculations within toy setups consisting of two short-dipole antennas in free space and above a perfectly reflecting ground surface, as well as two well-separated identical lossless antennas arranged to achieve zero cross-talk.
A new settling velocity model to describe secondary sedimentation.
Ramin, Elham; Wágner, Dorottya S; Yde, Lars; Binning, Philip J; Rasmussen, Michael R; Mikkelsen, Peter Steen; Plósz, Benedek Gy
2014-12-01
Secondary settling tanks (SSTs) are the most hydraulically sensitive unit operations in biological wastewater treatment plants. The maximum permissible inflow to the plant depends on the efficiency of SSTs in separating and thickening the activated sludge. The flow conditions and solids distribution in SSTs can be predicted using computational fluid dynamics (CFD) tools. Despite extensive studies on the compression settling behaviour of activated sludge and the development of advanced settling velocity models for use in SST simulations, these models are not often used, due to the challenges associated with their calibration. In this study, we developed a new settling velocity model, including hindered, transient and compression settling, and showed that it can be calibrated to data from a simple, novel settling column experimental set-up using the Bayesian optimization method DREAM(ZS). In addition, correlations between the Herschel-Bulkley rheological model parameters and sludge concentration were identified with data from batch rheological experiments. A 2-D axisymmetric CFD model of a circular SST containing the new settling velocity and rheological model was validated with full-scale measurements. Finally, it was shown that the representation of compression settling in the CFD model can significantly influence the prediction of sludge distribution in the SSTs under dry- and wet-weather flow conditions. Copyright © 2014 Elsevier Ltd. All rights reserved.
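The settling-velocity and rheology functions discussed above can be illustrated with a generic hindered-settling (Vesilind-type) term, an exponential compression correction, and a Herschel-Bulkley stress law; the functional forms and parameter values below are generic textbook choices, not the calibrated model of the paper.

```python
import numpy as np

def hindered_settling(X, v0=6.0, r_h=0.45):
    """Vesilind-type hindered settling velocity (m/h) vs. sludge concentration X (kg/m3)."""
    return v0 * np.exp(-r_h * X)

def compression_factor(X, X_crit=6.0, r_c=0.3):
    """Simple reduction of the settling velocity above a critical (compression) concentration."""
    return np.where(X > X_crit, np.exp(-r_c * (X - X_crit)), 1.0)

def settling_velocity(X):
    return hindered_settling(X) * compression_factor(X)

def herschel_bulkley(gamma_dot, tau_y=1.5, K=0.2, n=0.6):
    """Herschel-Bulkley shear stress (Pa): tau = tau_y + K * gamma_dot**n."""
    return tau_y + K * gamma_dot**n

X = np.array([1.0, 3.0, 6.0, 9.0, 12.0])          # kg/m3
print("v_s(X)  [m/h]:", np.round(settling_velocity(X), 3))
print("tau(10 1/s) [Pa]:", round(herschel_bulkley(10.0), 3))
```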
The Site-Scale Saturated Zone Flow Model for Yucca Mountain
NASA Astrophysics Data System (ADS)
Al-Aziz, E.; James, S. C.; Arnold, B. W.; Zyvoloski, G. A.
2006-12-01
This presentation provides a reinterpreted conceptual model of the Yucca Mountain site-scale flow system subject to all quality assurance procedures. The results are based on a numerical model of the site-scale saturated zone beneath Yucca Mountain, which is used for performance assessment predictions of radionuclide transport and to guide future data collection and modeling activities. This effort started from the ground up with a revised and updated hydrogeologic framework model, which incorporates the latest lithology data, and increased grid resolution that better resolves the hydrogeologic framework, which was updated throughout the model domain. In addition, faults are much better represented using the 250 m x 250 m spacing (compared to the previous model's 500 m x 500 m spacing). Data collected since the previous model calibration effort have been included and they comprise all Nye County water-level data through Phase IV of their Early Warning Drilling Program. Target boundary fluxes are derived from the newest (2004) Death Valley Regional Flow System model from the US Geological Survey. A consistent weighting scheme assigns importance to each measured water-level datum and boundary flux extracted from the regional model. The numerical model is calibrated by matching these weighted water level measurements and boundary fluxes using parameter estimation techniques, along with more informal comparisons of the model to hydrologic and geochemical information. The model software (hydrologic simulation code FEHM v2.24 and parameter estimation software PEST v5.5) and model setup facilitate efficient calibration of multiple conceptual models. Analyses evaluate the impact of these updates and additional data on the modeled potentiometric surface and the flowpaths emanating from below the repository. After examining the heads and permeabilities obtained from the calibrated models, we present particle pathways from the proposed repository and compare them to those from the previous model calibration. Specific discharge at a point 5 km from the repository is also examined and found to be within acceptable uncertainty. The results show that the updated model yields a calibration with smaller residuals than the previous model revision while ensuring that flowpaths follow measured gradients and paths derived from hydrochemical analyses. This work was supported by the Yucca Mountain Site Characterization Office as part of the Civilian Radioactive Waste Management Program, which is managed by the U.S. Department of Energy, Yucca Mountain Site Characterization Project. Sandia National Laboratories is a multiprogram laboratory operated by Sandia Corporation, a Lockheed Martin Company, for the United States Department of Energy under Contract DE-AC04-94AL85000.
Model of dissolution in the framework of tissue engineering and drug delivery.
Sanz-Herrera, J A; Soria, L; Reina-Romo, E; Torres, Y; Boccaccini, A R
2018-05-22
Dissolution phenomena are ubiquitously present in biomaterials in many different fields. Despite the advantages of simulation-based design of biomaterials in medical applications, additional efforts are needed to derive reliable models which describe the process of dissolution. A phenomenologically based model, available for simulation of dissolution in biomaterials, is introduced in this paper. The model turns into a set of reaction-diffusion equations implemented in a finite element numerical framework. First, a parametric analysis is conducted in order to explore the role of model parameters on the overall dissolution process. Then, the model is calibrated and validated versus a straightforward but rigorous experimental setup. Results show that the mathematical model macroscopically reproduces the main physicochemical phenomena that take place in the tests, corroborating its usefulness for design of biomaterials in the tissue engineering and drug delivery research areas.
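As a toy counterpart of the reaction-diffusion description above, the sketch below integrates a single 1-D species with diffusion and a first-order dissolution source using explicit finite differences; the geometry, coefficients, and source law are placeholders and do not represent the paper's finite-element model.

```python
import numpy as np

# Domain and numerical parameters (illustrative).
L, nx = 1.0e-3, 101                   # 1 mm domain, grid points
dx = L / (nx - 1)
D = 1.0e-9                            # diffusion coefficient (m^2/s)
k_dis = 5.0e-3                        # first-order dissolution rate at the surface (1/s)
c_sat = 1.0                           # saturation concentration (normalised)
dt = 0.4 * dx**2 / D                  # below the explicit stability limit
c = np.zeros(nx)                      # dissolved species concentration

for step in range(20000):
    lap = np.zeros_like(c)
    lap[1:-1] = (c[2:] - 2.0 * c[1:-1] + c[:-2]) / dx**2
    c += dt * D * lap
    # Dissolving surface at x = 0: concentration relaxes toward saturation
    # at a first-order dissolution rate.
    c[0] += dt * k_dis * (c_sat - c[0])
    c[-1] = 0.0                       # perfect sink at the far boundary

print("concentration profile (every 10th node):", np.round(c[::10], 3))
```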
Ambient measurement of ammonia and formaldehyde: Open path vs. extractive approach.
NASA Astrophysics Data System (ADS)
Rajamäki, Timo
2017-04-01
Ammonia NH3 and formaldehyde CH2O are among the most critical chemicals for air quality. Reliable online measurement of these gases is one of the key operations for air quality and safety monitoring, in indoor, outdoor and process applications alike. Ammonia and formaldehyde are reactive compounds and they are harmful even in very low ppb-level concentrations. This poses challenges for the measurement system in all of its critical aspects: sampling, calibration and sensitivity. We are applying techniques so far successfully used to measure reactive inorganic compounds such as ammonia NH3 and hydrogen fluoride HF to tackle these challenges. A novel setup based on direct laser absorption with cavity enhancement, employing fundamental vibrational excitations of the ammonia and formaldehyde molecules, has now been constructed in connection with new mechanics and algorithms optimized for gas exchange and sampling of these reactive molecules, which easily stick to surfaces. An aberration-corrected multipass sample cell at vacuum pressure is used in parallel with an open path multipass setup. The CH2O and NH3 calibration gases necessary for system calibration are dynamically generated using traceable standards and components. We compare these two approaches with special emphasis on the system's response time, robustness, sensitivity, usability in field conditions, maintenance need and long term stability. A further goal is to enable the use of the same setups for simultaneous measurement of other reactive compounds often encountered in air quality monitoring. This would make possible a more comprehensive and also more economic monitoring of these compounds with a single device.
NASA Technical Reports Server (NTRS)
Solis, Eduardo; Meyn, Larry
2016-01-01
Calibrating the internal, multi-component balance mounted in the Tiltrotor Test Rig (TTR) required photogrammetric measurements to determine the location and orientation of forces applied to the balance. The TTR, with the balance and calibration hardware attached, was mounted in a custom calibration stand. Calibration loads were applied using eleven hydraulic actuators, operating in tension only, that were attached to the forward frame of the calibration stand and the TTR calibration hardware via linkages with in-line load cells. Before the linkages were installed, photogrammetry was used to determine the location of the linkage attachment points on the forward frame and on the TTR calibration hardware. Photogrammetric measurements were used to determine the displacement of the linkage attachment points on the TTR due to deflection of the hardware under applied loads. These measurements represent the first photogrammetric deflection measurements to be made to support 6-component rotor balance calibration. This paper describes the design of the TTR and the calibration hardware, and presents the development, set-up and use of the photogrammetry system, along with some selected measurement results.
Kern, Simon; Meyer, Klas; Guhl, Svetlana; Gräßer, Patrick; Paul, Andrea; King, Rudibert; Maiwald, Michael
2018-05-01
Monitoring specific chemical properties is the key to chemical process control. Today, mainly optical online methods are applied, which require time- and cost-intensive calibration effort. NMR spectroscopy, with its advantage being a direct comparison method without need for calibration, has a high potential for enabling closed-loop process control while exhibiting short set-up times. Compact NMR instruments make NMR spectroscopy accessible in industrial and rough environments for process monitoring and advanced process control strategies. We present a fully automated data analysis approach which is completely based on physically motivated spectral models as first principles information (indirect hard modeling-IHM) and applied it to a given pharmaceutical lithiation reaction in the framework of the European Union's Horizon 2020 project CONSENS. Online low-field NMR (LF NMR) data was analyzed by IHM with low calibration effort, compared to a multivariate PLS-R (partial least squares regression) approach, and both validated using online high-field NMR (HF NMR) spectroscopy. Graphical abstract NMR sensor module for monitoring of the aromatic coupling of 1-fluoro-2-nitrobenzene (FNB) with aniline to 2-nitrodiphenylamine (NDPA) using lithium-bis(trimethylsilyl) amide (Li-HMDS) in continuous operation. Online 43.5 MHz low-field NMR (LF) was compared to 500 MHz high-field NMR spectroscopy (HF) as reference method.
Sensitivity Study of the Wall Interference Correction System (WICS) for Rectangular Tunnels
NASA Technical Reports Server (NTRS)
Walker, Eric L.; Everhart, Joel L.; Iyer, Venkit
2001-01-01
An off-line version of the Wall Interference Correction System (WICS) has been implemented for the NASA Langley National Transonic Facility. The correction capability is currently restricted to corrections for solid wall interference in the model pitch plane for Mach numbers less than 0.45 due to a limitation in tunnel calibration data. A study to assess output sensitivity to measurement uncertainty was conducted to determine standard operational procedures and guidelines to ensure data quality during the testing process. Changes to the current facility setup and design recommendations for installing the WICS code into a new facility are reported.
NASA Astrophysics Data System (ADS)
Javernick, Luke; Redolfi, Marco; Bertoldi, Walter
2018-05-01
New data collection techniques offer numerical modelers the ability to gather and utilize high quality data sets with high spatial and temporal resolution. Such data sets are currently needed for calibration, verification, and to fuel future model development, particularly morphological simulations. This study explores the use of high quality spatial and temporal data sets of observed bed load transport in braided river flume experiments to evaluate the ability of a two-dimensional model, Delft3D, to predict bed load transport. This study uses a fixed bed model configuration and examines the model's shear stress calculations, which are the foundation to predict the sediment fluxes necessary for morphological simulations. The evaluation is conducted for three flow rates, and model setup used highly accurate Structure-from-Motion (SfM) topography and discharge boundary conditions. The model was hydraulically calibrated using bed roughness, and performance was evaluated based on depth and inundation agreement. Model bed load performance was evaluated in terms of critical shear stress exceedance area compared to maps of observed bed mobility in a flume. Following the standard hydraulic calibration, bed load performance was tested for sensitivity to horizontal eddy viscosity parameterization and bed morphology updating. Simulations produced depth errors equal to the SfM inherent errors, inundation agreement of 77-85%, and critical shear stress exceedance in agreement with 49-68% of the observed active area. This study provides insight into the ability of physically based, two-dimensional simulations to accurately predict bed load as well as the effects of horizontal eddy viscosity and bed updating. Further, this study highlights how using high spatial and temporal data to capture the physical processes at work during flume experiments can help to improve morphological modeling.
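A sketch of the exceedance metric used above: bed shear stress from depth-averaged hydraulics is compared cell by cell against a Shields-type critical shear stress, and the exceedance area fraction is reported. The grain size, roughness, and flow field are invented placeholders, not the flume or Delft3D values.

```python
import numpy as np

rho, rho_s, g = 1000.0, 2650.0, 9.81
d50 = 1.0e-3                       # median grain size (m), assumed
theta_cr = 0.047                   # Shields parameter, assumed constant
C = 45.0                           # Chezy roughness (m^0.5/s), assumed

# Critical shear stress (Shields).
tau_cr = theta_cr * (rho_s - rho) * g * d50

# Synthetic depth-averaged velocity field on a small grid (m/s).
rng = np.random.default_rng(1)
u = np.abs(rng.normal(0.45, 0.15, size=(50, 80)))

# Bed shear stress from the Chezy relation: tau_b = rho * g * u^2 / C^2.
tau_b = rho * g * u**2 / C**2

exceedance_fraction = np.mean(tau_b > tau_cr)
print(f"tau_cr = {tau_cr:.2f} Pa, exceedance area = {100 * exceedance_fraction:.1f} % of cells")
```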
Search for hybrid baryons with CLAS12 experimental setup
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lanza, Lucille
It is crucial to study the meson electroproduction in the kinematic region dominated by the formation of resonances. CLAS12 setup in Hall B at Jefferson Lab is particularly suitable for this task, since it is able to detect scattered electrons at low polar angles thanks to the Forward Tagger (FT) component. The process that we propose to study is ep → e'K+Λ, where the electron beam will be provided by the CEBAF accelerator with energies of 6.6, 8.8, and 11 GeV. This thesis work describes the setup and calibration of the FT calorimeter and the studies related to the search of hybrid baryons through the measurement of the K+Λ electroproduction cross section.
Calibration for single multi-mode fiber digital scanning microscopy imaging system
NASA Astrophysics Data System (ADS)
Yin, Zhe; Liu, Guodong; Liu, Bingguo; Gan, Yu; Zhuang, Zhitao; Chen, Fengdong
2015-11-01
Single multimode fiber (MMF) digital scanning imaging systems are a development trend in modern endoscopy. We concentrate on the calibration method for the imaging system. The calibration method comprises two processes: forming scanning focused spots and calibrating the couple factors, which vary with position. An adaptive parallel coordinate (APC) algorithm is adopted to form the focused spots at the multimode fiber (MMF) output. Compared with other algorithms, APC has several merits, i.e. rapid speed, a small amount of calculation, and no iterations. The ratio of the optical power captured by the MMF to the intensity of the focused spot is called the couple factor. We set up a calibration experimental system to form the scanning focused spots and calculate the couple factors for different object positions. The experimental results show that the couple factor is higher in the center than at the edge.
eSIP: A Novel Solution-Based Sectioned Image Property Approach for Microscope Calibration
Butzlaff, Malte; Weigel, Arwed; Ponimaskin, Evgeni; Zeug, Andre
2015-01-01
Fluorescence confocal microscopy represents one of the central tools in modern sciences. Correspondingly, a growing amount of research relies on the development of novel microscopic methods. During the last decade numerous microscopic approaches were developed for the investigation of various scientific questions. Thereby, the former qualitative imaging methods became replaced by advanced quantitative methods to gain more and more information from a given sample. However, modern microscope systems being as complex as they are, require very precise and appropriate calibration routines, in particular when quantitative measurements should be compared over longer time scales or between different setups. Multispectral beads with sub-resolution size are often used to describe the point spread function and thus the optical properties of the microscope. More recently, a fluorescent layer was utilized to describe the axial profile for each pixel, which allows a spatially resolved characterization. However, fabrication of a thin fluorescent layer with matching refractive index is technically not solved yet. Therefore, we propose a novel type of calibration concept for sectioned image property (SIP) measurements which is based on fluorescent solution and makes the calibration concept available for a broader number of users. Compared to the previous approach, additional information can be obtained by application of this extended SIP chart approach, including penetration depth, detected number of photons, and illumination profile shape. Furthermore, due to the fit of the complete profile, our method is less susceptible to noise. Generally, the extended SIP approach represents a simple and highly reproducible method, allowing setup independent calibration and alignment procedures, which is mandatory for advanced quantitative microscopy. PMID:26244982
Calibration OGSEs for multichannel radiometers for Mars atmosphere studies
NASA Astrophysics Data System (ADS)
Jiménez, J. J.; J Álvarez, F.; Gonzalez-Guerrero, M.; Apéstigue, V.; Martín, I.; Fernández, J. M.; Fernán, A. A.; Arruego, I.
2018-02-01
This work describes several Optical Ground Support Equipment (OGSEs) developed by INTA (Spanish Institute of Aerospace Technology—Instituto Nacional de Técnica Aeroespacial) for the calibration and characterization of their self-manufactured multichannel radiometers (solar irradiance sensors—SIS) developed for working on the surface of Mars and studying the atmosphere of that planet. Nowadays, INTA is developing two SIS for the ESA ExoMars 2020 and for the JPL/NASA Mars 2020 missions. These calibration OGSEs have been improved since the first model in 2011 developed for Mars MetNet Precursor mission. This work describes the currently used OGSE. Calibration tests provide an objective evidence of the SIS performance, allowing the conversion of the electrical sensor output into accurate physical measurements (irradiance) with uncertainty bounds. Calibration results of the SIS on board of the Dust characterisation, Risk assessment, and Environment Analyzer on the Martian Surface (DREAMS) on board the ExoMars 2016 Schiaparelli module (EDM—entry and descent module) are also presented, as well as their error propagation. Theoretical precision and accuracy of the instrument are determined by these results. Two types of OGSE are used as a function of the pursued aim: calibration OGSEs and Optical Fast Verification (OFV) GSE. Calibration OGSEs consist of three setups which characterize with the highest possible accuracy, the responsivity, the angular response and the thermal behavior; OFV OGSE verify that the performance of the sensor is close to nominal after every environmental and qualification test. Results show that the accuracy of the calibrated sensors is a function of the accuracy of the optical detectors and of the light conditions. For normal direct incidence and diffuse light, the accuracy is in the same order of uncertainty as that of the reference cell used for fixing the irradiance, which is about 1%.
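A minimal sketch of the conversion step the calibration enables: sensor output to irradiance through a responsivity, with a first-order propagation of the calibration uncertainty. The responsivity, offset, and uncertainty values are placeholders, not DREAMS calibration results.

```python
import numpy as np

# Assumed calibration results for one SIS channel.
responsivity = 2.5e-4        # V per (W/m^2), placeholder
u_resp_rel = 0.01            # 1 % relative uncertainty from the reference cell, placeholder
offset_V, u_offset_V = 1.2e-3, 0.2e-3

def irradiance(v_out):
    """Convert sensor output voltage to irradiance with a 1-sigma uncertainty estimate."""
    E = (v_out - offset_V) / responsivity
    # First-order propagation: offset and responsivity contributions added in quadrature.
    u_E = np.hypot(u_offset_V / responsivity, E * u_resp_rel)
    return E, u_E

E, u_E = irradiance(0.126)
print(f"E = {E:.1f} +/- {u_E:.1f} W/m^2")
```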
Relative spectral response calibration using Ti plasma lines
NASA Astrophysics Data System (ADS)
Teng, FEI; Congyuan, PAN; Qiang, ZENG; Qiuping, WANG; Xuewei, DU
2018-04-01
This work introduces the branching ratio (BR) method for determining relative spectral responses, which are needed routinely in laser induced breakdown spectroscopy (LIBS). Neutral and singly ionized Ti lines in the 250–498 nm spectral range are investigated by measuring laser-induced micro plasma near a Ti plate and used to calculate the relative spectral response of an entire LIBS detection system. The results are compared with those of the conventional relative spectral response calibration method using a tungsten halogen lamp, and certain lines available for the BR method are selected. The study supports the common manner of using BRs to calibrate the detection system in LIBS setups.
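A sketch of the branching-ratio bookkeeping described above: for a pair of lines sharing an upper level, the theoretical intensity ratio follows from the transition probabilities and wavelengths, and the measured-to-theoretical ratio gives the relative spectral response between the two wavelengths. The atomic data and measured intensities below are placeholders, not the Ti lines used in the paper.

```python
import numpy as np

# Hypothetical pair of Ti lines sharing the same upper level.
lam1, lam2 = 323.4, 375.9          # wavelengths in nm (placeholders)
A1, A2 = 1.1e8, 6.0e7              # Einstein A coefficients in 1/s (placeholders)

# Theoretical intensity ratio (energy units) for lines from a common upper level:
# I1/I2 = (A1 * lambda2) / (A2 * lambda1)
ratio_theory = (A1 * lam2) / (A2 * lam1)

# Measured (uncorrected) line intensity ratio from the LIBS spectrum.
I1_meas, I2_meas = 5400.0, 3100.0
ratio_meas = I1_meas / I2_meas

# Relative spectral response of the detection system between the two wavelengths.
rel_response = ratio_meas / ratio_theory
print(f"R({lam1} nm) / R({lam2} nm) = {rel_response:.3f}")
```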
Towards Fast Tracking of the Keyhole Geometry
NASA Astrophysics Data System (ADS)
Brock, C.; Hohenstein, R.; Schmidt, M.
We describe a sensor principle permitting the fast online measurement of the position of the optical process emissions in deep penetration laser welding. Experiments show a strong correlation between the position of the vapour plume and the keyhole geometry, demonstrated here by varying the penetration depth of the weld. In order to achieve an absolute position measurement, the sensor was calibrated using a light source with well defined characteristics. The setup for the calibration measurements and the corresponding data evaluation methods are discussed. The precision of the calibration with a green LED is 6 μm in lateral and 55 μm in axial direction, for a working distance of 200 mm.
Primary Radiometry for the mise-en-pratique: The Laser-Based Radiance Method Applied to a Pyrometer
NASA Astrophysics Data System (ADS)
Briaudeau, S.; Sadli, M.; Bourson, F.; Rougi, B.; Rihan, A.; Zondy, J.-J.
2011-12-01
A new setup has been implemented at LCM-LNE-CNAM for the determination of the spectral responsivity of radiation thermometers used for the determination of the thermodynamic temperature of high-temperature blackbodies at the temperature of a metal-carbon eutectic phase transition. In this new setup, an innovative acousto-optic modulator feedback loop is used to stabilize the radiance of a wavelength-tunable laser. The effect of residual optical interferences on the calibration of a test pyrometer is analyzed. The full uncertainty budget is presented.
Hardware in the Loop Performance Assessment of LIDAR-Based Spacecraft Pose Determination
Fasano, Giancarmine; Grassi, Michele
2017-01-01
In this paper an original, easy to reproduce, semi-analytic calibration approach is developed for hardware-in-the-loop performance assessment of pose determination algorithms processing point cloud data, collected by imaging a non-cooperative target with LIDARs. The laboratory setup includes a scanning LIDAR, a monocular camera, a scaled-replica of a satellite-like target, and a set of calibration tools. The point clouds are processed by uncooperative model-based algorithms to estimate the target relative position and attitude with respect to the LIDAR. Target images, acquired by a monocular camera operated simultaneously with the LIDAR, are processed applying standard solutions to the Perspective-n-Points problem to get high-accuracy pose estimates which can be used as a benchmark to evaluate the accuracy attained by the LIDAR-based techniques. To this aim, a precise knowledge of the extrinsic relative calibration between the camera and the LIDAR is essential, and it is obtained by implementing an original calibration approach which does not need ad-hoc homologous targets (e.g., retro-reflectors) easily recognizable by the two sensors. The pose determination techniques investigated by this work are of interest to space applications involving close-proximity maneuvers between non-cooperative platforms, e.g., on-orbit servicing and active debris removal. PMID:28946651
Hardware in the Loop Performance Assessment of LIDAR-Based Spacecraft Pose Determination.
Opromolla, Roberto; Fasano, Giancarmine; Rufino, Giancarlo; Grassi, Michele
2017-09-24
In this paper an original, easy to reproduce, semi-analytic calibration approach is developed for hardware-in-the-loop performance assessment of pose determination algorithms processing point cloud data, collected by imaging a non-cooperative target with LIDARs. The laboratory setup includes a scanning LIDAR, a monocular camera, a scaled-replica of a satellite-like target, and a set of calibration tools. The point clouds are processed by uncooperative model-based algorithms to estimate the target relative position and attitude with respect to the LIDAR. Target images, acquired by a monocular camera operated simultaneously with the LIDAR, are processed applying standard solutions to the Perspective-n-Points problem to get high-accuracy pose estimates which can be used as a benchmark to evaluate the accuracy attained by the LIDAR-based techniques. To this aim, a precise knowledge of the extrinsic relative calibration between the camera and the LIDAR is essential, and it is obtained by implementing an original calibration approach which does not need ad-hoc homologous targets (e.g., retro-reflectors) easily recognizable by the two sensors. The pose determination techniques investigated by this work are of interest to space applications involving close-proximity maneuvers between non-cooperative platforms, e.g., on-orbit servicing and active debris removal.
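A sketch of the monocular benchmark step described above, using OpenCV's standard PnP solver on known fiducial points of a target replica; the 3-D coordinates, pixel detections, and intrinsics are placeholders rather than values from the laboratory setup.

```python
import cv2
import numpy as np

# 3-D coordinates of fiducial points on the target replica, in its own frame (m).
object_points = np.array([[0.00, 0.00, 0.00],
                          [0.20, 0.00, 0.00],
                          [0.20, 0.15, 0.00],
                          [0.00, 0.15, 0.00],
                          [0.10, 0.075, 0.05]], dtype=np.float64)

# Corresponding pixel detections in the monocular image (placeholders).
image_points = np.array([[310.2, 240.5],
                         [452.8, 238.9],
                         [455.1, 352.4],
                         [312.7, 355.0],
                         [383.5, 295.1]], dtype=np.float64)

K = np.array([[900.0, 0.0, 320.0],
              [0.0, 900.0, 240.0],
              [0.0, 0.0, 1.0]])
dist = np.zeros(5)                      # assume distortion already removed

ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, dist,
                              flags=cv2.SOLVEPNP_ITERATIVE)
R, _ = cv2.Rodrigues(rvec)              # benchmark pose: rotation matrix and translation
print("camera-frame target position (m):", tvec.ravel())
```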
3D artifact for calibrating kinematic parameters of articulated arm coordinate measuring machines
NASA Astrophysics Data System (ADS)
Zhao, Huining; Yu, Liandong; Xia, Haojie; Li, Weishi; Jiang, Yizhou; Jia, Huakun
2018-06-01
In this paper, a 3D artifact is proposed to calibrate the kinematic parameters of articulated arm coordinate measuring machines (AACMMs). The artifact is composed of 14 reference points with three different heights, which provides 91 different reference lengths, and a method is proposed to calibrate the artifact with laser tracker multi-stations. Therefore, the kinematic parameters of an AACMM can be calibrated in one setup of the proposed artifact, instead of having to adjust the 1D or 2D artifacts to different positions and orientations in the existing methods. As a result, it saves time to calibrate the AACMM with the proposed artifact in comparison with the traditional 1D or 2D artifacts. The performance of the AACMM calibrated with the proposed artifact is verified with a 600.003 mm gauge block. The result shows that the measurement accuracy of the AACMM is improved effectively through calibration with the proposed artifact.
NASA Astrophysics Data System (ADS)
Tauro, Flavia; Olivieri, Giorgio; Porfiri, Maurizio; Grimaldi, Salvatore
2014-05-01
Large Scale Particle Image Velocimetry (LSPIV) is a powerful methodology to nonintrusively monitor surface flows. Its use has been beneficial to the development of rating curves in riverine environments and to map geomorphic features in natural waterways. Typical LSPIV experimental setups rely on the use of mast-mounted cameras for the acquisition of natural stream reaches. Such cameras are installed on stream banks and are angled with respect to the water surface to capture large scale fields of view. Despite its promise and the simplicity of the setup, the practical implementation of LSPIV is affected by several challenges, including the acquisition of ground reference points for image calibration and time-consuming and highly user-assisted procedures to orthorectify images. In this work, we perform LSPIV studies on stream sections in the Aniene and Tiber basins, Italy. To alleviate the limitations of traditional LSPIV implementations, we propose an improved video acquisition setup comprising a telescopic mast, an inexpensive GoPro Hero 3 video camera, and a system of two lasers. The setup allows for maintaining the camera axis perpendicular to the water surface, thus mitigating uncertainties related to image orthorectification. Further, the mast encases a laser system for remote image calibration, thus allowing for nonintrusively calibrating videos without acquiring ground reference points. We conduct measurements on two different water bodies to outline the performance of the methodology in case of varying flow regimes, illumination conditions, and distribution of surface tracers. Specifically, the Aniene river is characterized by high surface flow velocity, the presence of abundant, homogeneously distributed ripples and water reflections, and a meagre number of buoyant tracers. On the other hand, the Tiber river presents lower surface flows, isolated reflections, and several floating objects. Videos are processed through image-based analyses to correct for lens distortions and analyzed with a commercially available PIV software. Surface flow velocity estimates are compared to supervised measurements performed by visually tracking objects floating on the stream surface and to rating curves developed by the Ufficio Idrografico e Mareografico (UIM) at Regione Lazio, Italy. Experimental findings demonstrate that the presence of tracers is crucial for surface flow velocity estimates. Further, considering surface ripples and patterns may lead to underestimations in LSPIV analyses.
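With the camera axis held perpendicular to the water surface, the image-to-metric conversion reduces to a simple ground sampling distance obtained from the camera height and field of view, so PIV pixel displacements map directly to surface velocities. The sketch below assumes the height comes from the laser system and uses placeholder numbers rather than values from the Aniene or Tiber surveys.

```python
import numpy as np

# Acquisition geometry (placeholders).
camera_height_m = 6.0          # distance to the water surface, e.g. from the laser system
fov_deg = 94.4                 # horizontal field of view after lens-distortion removal
image_width_px = 1920
frame_rate_hz = 25.0

# Ground sampling distance for a nadir-looking camera.
footprint_m = 2.0 * camera_height_m * np.tan(np.radians(fov_deg / 2.0))
gsd_m_per_px = footprint_m / image_width_px

# PIV output: horizontal displacement (in pixels) between consecutive frames.
displacement_px = np.array([[4.1, 3.8, 4.4],
                            [3.9, 4.2, 4.0]])
surface_velocity_m_s = displacement_px * gsd_m_per_px * frame_rate_hz
print(np.round(surface_velocity_m_s, 2))
```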
Application of the Unity Rockfall Model to Variable Surface Material Conditions
NASA Astrophysics Data System (ADS)
Sala, Zac; Hutchinson, D. Jean; Ondercin, Matthew
2017-04-01
Rockfall is a geological process that poses risks to the safe operation of transportation infrastructure in mountainous environments world wide. The Unity rockfall model was created as a tool for 3D rockfall simulation as part of the Railway Ground Hazards Research Program, studying the impact of geotechnical hazards affecting Canadian railways [1]. The Unity rockfall model demonstrates the applicability of 3D video game engines for the development of realistic simulations, leveraging high-resolution site data collected using remote sensing techniques. Currently work is being done to further calibrate the model as an engineering tool for decision support. Calibration datasets include high-resolution terrestrial LiDAR and helicopter photogrammetry data collected as part of an ongoing rockfall monitoring program along the Thompson River Valley in south-central British Columbia, Canada. Change detection techniques developed as part of the program have been used to construct a database of rockfall event history and to develop magnitude-frequency relationships for rockfalls in the area [2][3]. Data collected as part of a controlled rock-rolling field program in Christchurch, New Zealand [4] is also being utilized for model calibration. Data on block dynamics for the artificially triggered rockfalls were collected through the use of embedded motion sensors and a sixteen camera setup. These experiments provide detailed information on block kinematics, and capture each impact point of the rockfall with the slope, thus offering a valuable dataset for comparison with modelling results. The research reported here explores the ability of the game engine based modelling technique to simulate rockfall under the variable slope conditions present at each of the sites where calibration data was collected. This includes steep natural rock slopes, with debris-talus cover, as well as shallower slopes with soil cover and vegetation. The varying slope conditions in each environment affect the dominant processes controlling rockfall movement downslope. In comparison to rock on rock collisions, impacts with soil and talus exhibit lower restitution values, with more energy loss occurring, but less overall fragmentation expected. The current modelling efforts present example workflows for each case, showing the steps taken to run realistic simulations using the Unity rockfall model. A comparison of the setup, model inputs and methods implemented in the model for each case study demonstrates the adaptability of the tool to different rockfall environments. References: [1] Ondercin, M.: An Exploration of Rockfall Modelling Through Game Engines, M.A.Sc Thesis, Queen's University, Kingston, 2016 [2] Kromer, R., Hutchinson, D.J., Lato, M., Gauthier, D., and Edwards, T. 2015. Identifying rock slope failure precursors using LiDAR for transportation corridor hazard management. Engineering Geology, 195, 93-103. doi:10.1016/j.enggeo.2015.05.012 [3] van Veen, M., Hutchinson, D.J., Kromer, R., Lato, M., and Edwards, T. (Submitted September 2016) Effects of Sampling Interval on the Frequency-Magnitude Relationship of Rockfalls Detected from Terrestrial Laser Scanning using Semi-Automated Methods. Landslides, MS number: LASL-D-16-00258. [4] Vick, L.M.: Evaluation of Field Data and 3D Modelling for Rockfall Hazard Assessment, Ph.D Thesis, University of Canterbury, Christchurch, 2015
NASA Astrophysics Data System (ADS)
Geisinger, Armin; Behrendt, Andreas; Wulfmeyer, Volker; Strohbach, Jens; Förstner, Jochen; Potthast, Roland
2017-12-01
A new backscatter lidar forward operator was developed which is based on the distinct calculation of the aerosols' backscatter and extinction properties. The forward operator was adapted to the COSMO-ART ash dispersion simulation of the Eyjafjallajökull eruption in 2010. While the particle number concentration was provided as a model output variable, the scattering properties of each individual particle type were determined by dedicated scattering calculations. Sensitivity studies were performed to estimate the uncertainties related to the assumed particle properties. Scattering calculations for several types of non-spherical particles required the use of T-matrix routines. Due to the distinct calculation of the backscatter and extinction properties of the model's volcanic ash size classes, the sensitivity studies could be made for each size class individually, which is not the case for forward models based on a fixed lidar ratio. Finally, the forward-modeled lidar profiles were compared to automated ceilometer lidar (ACL) measurements both qualitatively and quantitatively, with the attenuated backscatter coefficient chosen as a suitable physical quantity for the comparison. As the ACL measurements were not calibrated automatically, their calibration had to be performed using satellite lidar and ground-based Raman lidar measurements. A slight overestimation of the model-predicted volcanic ash number density was observed. Major requirements for future assimilation of ACL data have been identified, namely, the availability of calibrated lidar measurement data, a scattering database for atmospheric aerosols, a better representation and coverage of aerosols by the ash dispersion model, and further investigation into backscatter lidar forward operators that calculate the backscatter coefficient directly for each individual aerosol type. The introduced forward operator offers the flexibility to be adapted to a multitude of model systems and measurement setups.
Wágner, Dorottya S; Ramin, Elham; Szabo, Peter; Dechesne, Arnaud; Plósz, Benedek Gy
2015-07-01
The objective of this work is to identify relevant settling velocity and rheology model parameters and to assess the underlying filamentous microbial community characteristics that can influence the solids mixing and transport in secondary settling tanks. Parameter values for hindered, transient and compression settling velocity functions were estimated by carrying out biweekly batch settling tests using a novel column setup over a four-month measurement campaign. To estimate viscosity model parameters, rheological experiments were carried out on the same sludge sample using a rotational viscometer. Quantitative fluorescence in-situ hybridisation (qFISH) analysis, targeting Microthrix parvicella and phylum Chloroflexi, was used. This study finds that M. parvicella - predominantly residing inside the microbial flocs in our samples - can significantly influence secondary settling through altering the hindered settling velocity and yield stress parameter. Strikingly, this is not the case for Chloroflexi, which occurs at more than double the abundance of M. parvicella and forms filaments primarily protruding from the flocs. The transient and compression settling parameters show a comparably high variability, and no significant association with filamentous abundance. A two-dimensional, axisymmetric computational fluid dynamics (CFD) model was used to assess calibration scenarios to model filamentous bulking. Our results suggest that model predictions can significantly benefit from explicitly accounting for filamentous bulking by calibrating the hindered settling velocity function. Furthermore, accounting for the transient and compression settling velocity in the computational domain is crucial to improve model accuracy when modelling filamentous bulking. However, the case-specific calibration of transient and compression settling parameters as well as yield stress is not necessary, and an average parameter set - obtained under bulking and good settling conditions - can be used.
Tamburini, Elena; Tagliati, Chiara; Bonato, Tiziano; Costa, Stefania; Scapoli, Chiara; Pedrini, Paola
2016-01-01
Near-infrared spectroscopy (NIRS) has been widely used for quantitative and/or qualitative determination of a wide range of matrices. The objective of this study was to develop a NIRS method for the quantitative determination of fluorine content in polylactide (PLA)-talc blends. A blending profile was obtained by mixing different amounts of PLA granules and talc powder. The calibration model was built by correlating wet chemical data (alkali digestion method) and NIR spectra. Using the FT (Fourier Transform)-NIR technique, a Partial Least Squares (PLS) regression model was set up over a concentration interval from 0 ppm (pure PLA) to 800 ppm (pure talc). Fluorine content prediction (R²cal = 0.9498; standard error of calibration, SEC = 34.77; standard error of cross-validation, SECV = 46.94) was then externally validated by means of a further 15 independent samples (R²EX.V = 0.8955; root mean square error of prediction, RMSEP = 61.08). A positive relationship between an inorganic component such as fluorine and the NIR signal has been demonstrated, and used to obtain quantitative analytical information from the spectra.
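As an illustration of the kind of chemometric workflow described above (not the authors' code), the sketch below builds a PLS calibration between pretreated spectra and reference concentrations and reports SEC, SECV and RMSEP as plain root-mean-square errors; the array names, component count and random data are placeholders.

```python
# Illustrative sketch only: a PLS calibration between pretreated NIR spectra and
# reference concentrations, validated on independent samples. SEC, SECV and RMSEP are
# computed as plain RMS errors, ignoring degrees-of-freedom corrections; the data are
# random placeholders, not measurements from the study.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)
X_cal, y_cal = rng.normal(size=(40, 200)), rng.uniform(0, 800, size=40)   # calibration set
X_ext, y_ext = rng.normal(size=(15, 200)), rng.uniform(0, 800, size=15)   # external set

pls = PLSRegression(n_components=5).fit(X_cal, y_cal)
y_fit = pls.predict(X_cal).ravel()
y_cv = cross_val_predict(pls, X_cal, y_cal, cv=10).ravel()
y_prd = pls.predict(X_ext).ravel()

sec = np.sqrt(np.mean((y_cal - y_fit) ** 2))     # standard error of calibration
secv = np.sqrt(np.mean((y_cal - y_cv) ** 2))     # standard error of cross-validation
rmsep = np.sqrt(np.mean((y_ext - y_prd) ** 2))   # root mean square error of prediction
r2_cal = np.corrcoef(y_cal, y_fit)[0, 1] ** 2
print(r2_cal, sec, secv, rmsep)
```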
Digital TAcy: proof of concept
NASA Astrophysics Data System (ADS)
Bubel, Annie; Sylvain, Jean-François; Martin, François
2009-06-01
Anthocyanins are water soluble pigments in plants that are recognized for their antioxidant property. These pigments are found in high concentration in cranberries, which gives them their characteristic dark red color. The Total Anthocyanin concentration (TAcy) measurement process is time-consuming, consumes chemical products and needs to be repeated continuously during the harvesting period. The idea of the digital TAcy system is to explore the possibility of estimating the TAcy by analysing the color of the fruit. A calibrated color image capture set-up was developed and characterized, allowing calibrated color data capture from hundreds of samples over two harvesting years (fall of 2007 and 2008). The acquisition system was designed in such a way as to avoid specular reflections and provide good resolution images with an extended range of color values representative of the different stages of fruit ripeness. Since the chemical TAcy value was known for every sample, a mathematical model was developed to predict the TAcy based on color information. This model, which also takes into account bruised and rotten fruits, shows an RMS error of less than 6% over the TAcy interest range [0-50].
A framework for multi-criteria assessment of model enhancements
NASA Astrophysics Data System (ADS)
Francke, Till; Foerster, Saskia; Brosinsky, Arlena; Delgado, José; Güntner, Andreas; López-Tarazón, José A.; Bronstert, Axel
2016-04-01
Modellers are often faced with unsatisfactory model performance for a specific setup of a hydrological model. In these cases, the modeller may try to improve the setup by addressing selected causes of the model errors (i.e. data errors, structural errors). This leads to adding certain "model enhancements" (MEs), e.g. climate data based on more monitoring stations, improved calibration data, or modifications in process formulations. However, deciding on which MEs to implement remains a matter of expert knowledge, guided by some sensitivity analysis at best. When multiple MEs have been implemented, a resulting improvement in model performance is not easily attributed, especially when considering different aspects of this improvement (e.g. better performance dynamics vs. reduced bias). In this study we present an approach for comparing the effect of multiple MEs in the face of multiple improvement aspects. A stepwise selection approach and structured plots help in addressing the multidimensionality of the problem. The approach is applied to a case study, which employs the meso-scale hydrosedimentological model WASA-SED for a sub-humid catchment. The results suggest that the effect of the MEs is quite diverse, with some MEs (e.g. augmented rainfall data) causing improvements for almost all aspects, while the effect of other MEs is restricted to a few aspects or even deteriorates some. These specific results may not be generalizable. However, we suggest that studies like this can facilitate identifying the most promising MEs to implement.
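The following is a generic, hedged sketch of the kind of stepwise (greedy forward) selection such a framework could use to rank MEs against several improvement aspects; the evaluate function, ME names and weights are invented placeholders, not the WASA-SED workflow of the study.

```python
# Generic sketch of a stepwise (greedy forward) selection of model enhancements (MEs)
# scored on several improvement aspects. The "evaluate" stub stands in for real model
# runs; ME names, aspect names and numbers are invented for illustration only.

def evaluate(mes_applied):
    """Placeholder for a model run with the given MEs applied; returns performance
    aspects where higher is better (e.g. dynamics fit, bias, low-flow behaviour)."""
    base = {"dynamics": 0.55, "bias_score": 0.60, "low_flow": 0.40}
    bonus = {"rainfall_data": 0.15, "calibration_data": 0.05, "process_formulation": 0.02}
    gain = sum(bonus.get(m, 0.0) for m in mes_applied)
    weights = {"dynamics": 1.0, "bias_score": 0.5, "low_flow": 0.3}
    return {k: v + gain * weights[k] for k, v in base.items()}

def aggregate(aspects):
    # Equal-weight aggregate for the sketch; the framework in the paper deliberately
    # keeps aspects separate and uses structured plots instead of a single score.
    return sum(aspects.values()) / len(aspects)

def stepwise_select(candidate_mes):
    selected, best = [], aggregate(evaluate([]))
    remaining = set(candidate_mes)
    while remaining:
        gains = {me: aggregate(evaluate(selected + [me])) - best for me in remaining}
        me, gain = max(gains.items(), key=lambda kv: kv[1])
        if gain <= 1e-9:                 # stop when no ME still improves the aggregate
            break
        selected.append(me)
        remaining.remove(me)
        best += gain
    return selected

print(stepwise_select(["rainfall_data", "calibration_data", "process_formulation"]))
```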
DOE Office of Scientific and Technical Information (OSTI.GOV)
Loehle, Stefan; Lein, Sebastian
A revised scientific instrument to measure simultaneously the kinetic temperatures of different atoms from their optical emission profiles is reported. Emission lines are detected using a single scanning Fabry-Perot interferometer (FPI) in a combined spectroscopic setup that acquires different emission lines simultaneously. The setup consists of a commercial Czerny-Turner spectrometer combined with a scanning Fabry-Perot interferometer. The fast image acquisition mode of an intensified charge coupled device camera allows continuous detection of a wavelength interval of interest while acquiring the highly resolved line during the scan of the FPI ramp. Results using this new setup are presented for the simultaneous detection of atomic nitrogen and oxygen in a high enthalpy air plasma flow as used for atmospheric re-entry research and their respective kinetic temperatures derived from the measured line profiles. The paper presents the experimental setup, the calibration procedure, and an exemplary result. The determined temperatures are different, a finding that has so far been attributed to a drawback of experimental setups based on sequential measurements, and which now has to be investigated in more detail.
Climate change impact assessment on hydrology of a small watershed using semi-distributed model
NASA Astrophysics Data System (ADS)
Pandey, Brij Kishor; Gosain, A. K.; Paul, George; Khare, Deepak
2017-07-01
This study is an attempt to quantify the impact of climate change on the hydrology of the Armur watershed in the Godavari river basin, India. A GIS-based semi-distributed hydrological model, the soil and water assessment tool (SWAT), has been employed to estimate the water balance components on the basis of unique combinations of slope, soil and land cover classes for the baseline (1961-1990) and future climate scenarios (2071-2100). Sensitivity analysis of the model has been performed to identify the most critical parameters of the watershed. Average monthly calibration (1987-1994) and validation (1995-2000) have been performed using the observed discharge data. The coefficient of determination (R²), Nash-Sutcliffe efficiency (ENS) and root mean square error (RMSE) were used to evaluate the model performance. The calibrated SWAT setup has been used to evaluate the changes in water balance components of the future projection over the study area. HadRM3 regional climate model data have been used as input to the hydrological model for the climate change impact studies. The results show that average annual temperature (+3.25 °C), average annual rainfall (+28%), evapotranspiration (+28%) and water yield (+49%) increase under the GHG scenario with respect to the baseline scenario.
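For reference, a minimal sketch of the three performance measures named above (R², Nash-Sutcliffe efficiency and RMSE) applied to observed and simulated discharge series; the example values are made up and are not data from the study.

```python
# Minimal sketch of the performance measures named above (coefficient of determination,
# Nash-Sutcliffe efficiency, RMSE) for observed vs. simulated discharge. The example
# values are invented and are not data from the study.
import numpy as np

def r_squared(obs, sim):
    return float(np.corrcoef(obs, sim)[0, 1] ** 2)

def nash_sutcliffe(obs, sim):
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return float(1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2))

def rmse(obs, sim):
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return float(np.sqrt(np.mean((obs - sim) ** 2)))

obs = [12.0, 30.5, 55.2, 80.1, 41.0, 20.3]   # made-up monthly discharge (m^3/s)
sim = [10.5, 28.0, 60.0, 75.4, 44.2, 18.9]
print(r_squared(obs, sim), nash_sutcliffe(obs, sim), rmse(obs, sim))
```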
Two laboratory methods for the calibration of GPS speed meters
NASA Astrophysics Data System (ADS)
Bai, Yin; Sun, Qiao; Du, Lei; Yu, Mei; Bai, Jie
2015-01-01
The set-ups of two calibration systems are presented to investigate calibration methods for GPS speed meters. The GPS speed meter to be calibrated is a special type of high-accuracy speed meter for vehicles, which uses Doppler demodulation of GPS signals to calculate the speed of a moving target. Three experiments are performed: simulated calibration, field-test signal replay calibration, and an in-field comparison with an optical speed meter. The experiments are conducted at specific speeds in the range of 40-180 km/h with the same GPS speed meter as the device under calibration. The evaluation of measurement results validates both methods for calibrating GPS speed meters. The relative deviations between the measurement results of the GPS-based high-accuracy speed meter and those of the optical speed meter are analyzed, and the equivalent uncertainty of the comparison is evaluated. The comparison results justify the utilization of GPS speed meters as reference equipment if no fewer than seven satellites are available. This study contributes to the widespread use of GPS-based high-accuracy speed meters as legal reference equipment in traffic speed metrology.
NASA Astrophysics Data System (ADS)
Doerr, H.-P.; Kentischer, T. J.; Steinmetz, T.; Probst, R. A.; Franz, M.; Holzwarth, R.; Udem, Th.; Hänsch, T. W.; Schmidt, W.
2012-09-01
Laser frequency combs (LFC) provide a direct link between the radio frequency (RF) and the optical frequency regime. The comb-like spectrum of an LFC is formed by exact equidistant laser modes, whose absolute optical frequencies are controlled by RF-references such as atomic clocks or GPS receivers. While nowadays LFCs are routinely used in metrological and spectroscopic fields, their application in astronomy was delayed until recently when systems became available with a mode spacing and wavelength coverage suitable for calibration of astronomical spectrographs. We developed a LFC based calibration system for the high-resolution echelle spectrograph at the German Vacuum Tower Telescope (VTT), located at the Teide observatory, Tenerife, Canary Islands. To characterize the calibration performance of the instrument, we use an all-fiber setup where sunlight and calibration light are fed to the spectrograph by the same single-mode fiber, eliminating systematic effects related to variable grating illumination.
Y-piece temperature and humidification during mechanical ventilation.
Solomita, Mario; Daroowalla, Feroza; Leblanc, Deniese S; Smaldone, Gerald C
2009-04-01
Practitioners often presume there is adequate humidification in the ventilator circuit if the Y-piece is at a specified temperature, but control of Y-piece temperature may be inadequate to ensure adequate humidification. In an in vitro bench model we measured water-vapor delivery with several heated humidification setups and a wide range of minute volume (VE) values. The setup included a condenser, hygrometry, and thermometer. First, we calibrated the system with a point-source humidifier and water pump. Then we tested the water-vapor delivery during non-heated-wire humidification and during heated-wire humidification with a temperature gradient of +3 °C, 0 °C, and -3 °C between the humidifier and the Y-piece. We compared the results to 2 recommended humidification values: 100% saturated (absolute humidity 44 mg H2O/L) gas at 37 °C (saturated/37 °C); and 75% saturated (absolute humidity 33 mg H2O/L), which is the humidity recommended by the International Organization for Standardization (the ISO standard). In all the experiments the setup was set to provide 35 °C at the Y-piece. Our method for measuring water-vapor delivery closely approximated the amount delivered by a calibrated pump, but slightly underestimated the water-vapor delivery in all the experiments and the whole VE range. At all VE values, water-vapor delivery during non-heated-wire humidification matched or exceeded saturated/37 °C and was significantly greater than that during heated-wire humidification. During heated-wire humidification, water-vapor delivery varied with the temperature gradient and did not reach saturated/37 °C at VE > 6 L/min. Water-vapor delivery with the negative temperature gradient was below the ISO standard. Maintaining temperature at one point in the inspiratory circuit (e.g., Y-piece) does not ensure adequate water-vapor delivery. Other factors (humidification system, VE, gradient setting) are critical. At a given temperature, humidification may be significantly higher or lower than expected.
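As a side note on the two humidity targets quoted above, the short sketch below reproduces them from first principles, using the Magnus approximation for saturation vapour pressure (an assumption of this sketch, not a formula taken from the paper).

```python
# Side calculation reproducing the two humidity targets quoted above: absolute humidity
# (mg of water per litre of gas) at a given temperature and saturation fraction. The
# Magnus approximation for saturation vapour pressure is an assumption of this sketch,
# not a formula taken from the paper.
import math

def saturation_vapour_pressure_pa(t_celsius):
    # Magnus approximation over liquid water
    return 610.94 * math.exp(17.625 * t_celsius / (t_celsius + 243.04))

def absolute_humidity_mg_per_l(t_celsius, saturation_fraction=1.0):
    e = saturation_fraction * saturation_vapour_pressure_pa(t_celsius)   # Pa
    m_w, r = 0.018015, 8.314                       # kg/mol, J/(mol K)
    kg_per_m3 = e * m_w / (r * (t_celsius + 273.15))
    return kg_per_m3 * 1000.0                      # 1 kg/m^3 = 1000 mg/L

print(absolute_humidity_mg_per_l(37.0, 1.00))      # ~44 mg H2O/L (saturated at 37 °C)
print(absolute_humidity_mg_per_l(37.0, 0.75))      # ~33 mg H2O/L (the ISO-standard level)
```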
Predicting ESI/MS Signal Change for Anions in Different Solvents.
Kruve, Anneli; Kaupmees, Karl
2017-05-02
LC/ESI/MS is a technique widely used for qualitative and quantitative analysis in various fields. However, quantification is currently possible only for compounds for which standard substances are available, as the ionization efficiency of different compounds in the ESI source differs by orders of magnitude. In this paper we present an approach for quantitative LC/ESI/MS analysis without standard substances. This approach relies on accurately predicting the ionization efficiencies in the ESI source based on a model that uses physicochemical parameters of the analytes. Furthermore, the model has been made transferable between different mobile phases and instrument setups by using a suitable set of calibration compounds. This approach has been validated both in flow injection and chromatographic mode with gradient elution.
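To make the idea concrete, here is a conceptual sketch (not the authors' published model): a predicted log ionization efficiency is anchored to a given instrument and mobile phase through a handful of calibration compounds, and concentrations are then estimated from peak areas. Descriptor sets, coefficients and numbers are hypothetical.

```python
# Conceptual sketch only: a predicted log ionization efficiency (logIE) from
# physicochemical descriptors is anchored to a particular instrument/mobile phase via a
# few calibration compounds, and an unknown's concentration is then estimated from its
# peak area. All descriptors, coefficients and numbers are hypothetical placeholders.
import numpy as np

def predict_log_ie(descriptors, coeffs, intercept):
    """Linear model in analyte descriptors (hypothetical coefficients)."""
    return intercept + float(np.dot(coeffs, descriptors))

def fit_instrument_transfer(log_ie_predicted, log_response_factors):
    """Map predicted logIE to measured log response factors of calibration compounds
    on the target setup (slope/offset transfer between setups)."""
    slope, offset = np.polyfit(log_ie_predicted, log_response_factors, 1)
    return slope, offset

def concentration(peak_area, log_ie_pred, slope, offset):
    response_factor = 10.0 ** (slope * log_ie_pred + offset)   # signal per unit concentration
    return peak_area / response_factor

slope, offset = fit_instrument_transfer([1.0, 2.0, 3.0], [0.8, 1.9, 2.9])
log_ie_unknown = predict_log_ie([2.1, 0.4], coeffs=[0.9, 0.5], intercept=0.3)
print(concentration(peak_area=5.0e5, log_ie_pred=log_ie_unknown, slope=slope, offset=offset))
```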
NASA Astrophysics Data System (ADS)
Hafner, D.
2015-09-01
The application of ground-based boresight sources for calibration and testing of tracking antennas usually entails various difficulties, mostly due to unwanted ground effects. To avoid this problem, DLR MORABA developed a small, lightweight, frequency-adjustable S-band boresight source, mounted on a small remote-controlled multirotor aircraft. Highly accurate GPS-supported position and altitude control functions allow both very steady positioning of the aircraft in mid-air and precise waypoint-based, semi-autonomous flights. In contrast to fixed near-ground boresight sources, this flying setup makes it possible to avoid obstructions in the Fresnel zone between source and antenna. Further, it minimizes ground reflections and other multipath effects which can affect antenna calibration. In addition, the large operating range of a flying boresight simplifies measurements in the far field of the antenna and permits undisturbed antenna pattern tests. A unique application is the realistic simulation of sophisticated flight paths, including overhead tracking and demanding trajectories of fast objects such as sounding rockets. Likewise, dynamic tracking tests are feasible which provide crucial information about the antenna pedestal performance — particularly at high elevations — and reveal weaknesses in the autotrack control loop of tracking antenna systems. During acceptance tests of MORABA's new tracking antennas, a manned aircraft was never used, since the Flying Boresight surpassed all expectations regarding usability, efficiency, and precision. Hence, it became an integral part of MORABA's standard antenna setup and calibration procedures.
Method for traceable measurement of LTE signals
NASA Astrophysics Data System (ADS)
Sunder Dash, Soumya; Pythoud, Frederic; Leuchtmann, Pascal; Leuthold, Juerg
2018-04-01
This contribution presents a reference setup to measure the power of the cell-specific resource elements present in downlink long term evolution (LTE) signals in a way that is traceable to the International System of Units. This setup can be used to calibrate the LTE code-selective field probes that are used to measure the radiation of base stations for mobile telephony. It can also be used to calibrate LTE signal generators and receivers. The method is based on traceable scope measurements performed directly at the output of a measuring antenna. It implements offline digital signal processing demodulation algorithms that perform digital down-conversion, timing synchronization, frequency synchronization, phase synchronization and robust LTE cell identification to produce the downlink time-frequency LTE grid. Experiments on conducted test scenarios, with both single-input-single-output and multiple-input-multiple-output antenna configurations, show promising results, confirming measurement uncertainties of the order of 0.05 dB with a coverage factor of 2.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Papaconstadopoulos, P; Archambault, L; Seuntjens, J
Purpose: To investigate the accuracy of the Exradin W1 (SI) and of an “in-house” plastic scintillation dosimeter (CHUQ PSD) in small radiation fields. Methods: Output factor (OF) measurements with the W1 and CHUQ PSD were performed for field sizes of 0.5 x 0.5, 1 x 1 and 2 x 2 cm². Both detectors were placed parallel to the central axis (CAX) in water. The spectrum discrimination calibration method was performed in each set-up to account for the Cerenkov (CRV) signal created in the fiber. The OFs were compared to the expected field factors in water derived using i) Monte Carlo (MC) simulations of an accurate accelerator model and ii) microLion (PTW) and D1V diode (SI) OFs. MC-derived correction factors were applied to both the microLion and D1V OFs. For the CHUQ PSD the calibration was repeated in water (// CAX), solid water (perpendicular to CAX) and under a shielded configuration. The signal was collected using a spectrometer (wavelength range = 185–1100 nm). Spectral analysis was performed to evaluate potential changes of the spectral distributions under the various calibration set-up configurations. Results: The W1 OFs presented an over-response for the 0.5 x 0.5 cm² field in the range of 3 – 4.1% relative to the expected field factor. The CHUQ PSD presented an under-response in the range of 1.5 – 2.7%, without accounting for volume averaging. The CRV spectra under the various calibration procedures appeared similar to each other and only minor changes were observed in the respective OFs. Conclusion: The W1 and CHUQ PSD can be used in small fields down to a 1 x 1 cm² field size. Discrepancies were encountered between the two detectors for the smallest field size of 0.5 x 0.5 cm², with the CHUQ PSD exhibiting a closer agreement with the expected field factor. Funding sources: 1) Alexander S. Onassis Public Benefit Foundation in Greece and 2) CREATE Medical Physics Research Training Network grant of the Natural Sciences and Engineering Research Council, Grant number: 432290, RGPIN-2014-06475.
Calibrating the SNfactory Integral Field Spectrograph (SNIFS) with SCALA
NASA Astrophysics Data System (ADS)
Küsters, Daniel; Lombardo, Simona; Kowalski, Marek; Aldering, Greg; Nordin, Jakob; Rigault, Mickael
2016-08-01
The SNIFS CALibration Apparatus (SCALA), a device to calibrate the Supernova Integral Field Spectrograph on the University of Hawaii 2.2 m telescope, was developed and installed in Spring 2014. SCALA produces an artificial planet with a diameter of 1° and a constant surface brightness. The wavelength of the beam can be tuned between 3200 Å and 10000 Å and has a bandwidth of 35 Å. The amount of light injected into the telescope is monitored with NIST-calibrated photodiodes. SCALA was upgraded in 2015 with a mask installed at the entrance pupil of the UH88 telescope, ensuring that the illumination of the telescope by stars is similar to that of SCALA. With this setup, a first calibration run was performed in conjunction with spectrophotometric observations of standard stars. We present first estimates of the expected systematic uncertainties of the in-situ calibration and discuss the results of tests that examine the influence of stray light produced in the optics.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ojanen, M.; Hahtela, O. M.; Heinonen, M.
MIKES is developing a measurement set-up for calibrating thermocouples in the temperature range 960 °C - 1500 °C. The calibration method is based on direct comparison of thermocouples and radiation thermometers. We have designed a graphite blackbody comparator cell, which is operated in a horizontal single-zone tube furnace. The cell includes two blackbody cavities for radiation temperature measurements. The cavities have openings on opposite sides of the cell, allowing simultaneous measurement with two radiation thermometers. The design of the comparator allows three thermocouples to be calibrated simultaneously. The thermocouples to be calibrated are inserted in thermometer wells around one of the measurement cavities. We characterize the blackbody comparator in terms of repeatability, temperature distribution and emissivity. Finally, we validate the uncertainty analysis by comparing calibration results obtained for type B and S thermocouples to the calibration results reported by the Technical Research Institute of Sweden (SP) and MIKES. The agreement in the temperature range 1000 °C - 1500 °C is within 0.90 °C, the average deviation being 0.17 °C.
Characterization of X-ray fields at the center for devices and radiological health
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cerra, F.
This talk summarizes the process undertaken by the Center for Devices and Radiological Health (CDRH) for establishing reference x-ray fields in its accredited calibration laboratory. The main considerations and their effects on the calibration parameters are discussed. The characterization of fields may be broken down into two parts: (1) the initial setup of the calibration beam spectra and (2) the ongoing measurements and controls which ensure consistency of the reference fields. The methods employed by CDRH for both these stages and the underlying considerations are presented. Uncertainties associated with the various parameters are discussed. Finally, the laboratory's performance, as evidenced by ongoing measurement quality assurance results, is reported.
NASA Astrophysics Data System (ADS)
Tomaskovicova, Sonia; Paamand, Eskild; Ingeman-Nielsen, Thomas; Bauer-Gottwein, Peter
2013-04-01
The sedimentary settings of West Greenlandic towns, with their fine-grained, often ice-rich marine deposits, are of great concern in building and construction projects in Greenland, as they lose volume, strength and bearing capacity upon thaw. Since extensive permafrost thawing over large areas of the inhabited Greenlandic coast has been predicted as a result of climate change, it is of great technical and economic interest to assess the extent and thermal properties of such formations. The availability of methods able to determine the thermal parameters of permafrost and forecast its reaction to climate evolution is therefore crucial for sustainable infrastructure planning and development in the Arctic. We are developing a model of heat transport for permafrost able to assess the thermal properties of the ground based on calibration by surface geoelectrical measurements and ground surface temperature measurements. The advantages of the modeling approach and the use of exclusively surface measurements (in comparison with direct measurements on core samples) are smaller environmental impact, cheaper logistics, assessment of permafrost conditions over larger areas and the possibility of forecasting the fate of permafrost by applying climate forcing. In our approach, the heat model simulates the temperature distribution in the ground based on ground surface temperature, specified proportions of the ground constituents and their estimated thermal parameters. The calculated temperatures in the specified model layers govern the phase distribution between unfrozen water and ice. The changing proportion of unfrozen water content as a function of temperature is the main parameter driving the evolution of the electrical properties of the ground. We use a forward modeling scheme to calculate the apparent resistivity distribution of such ground as if collected from a surface geoelectrical array. The calculated resistivity profile is compared to actual field measurements, and the difference between the synthetic and the measured apparent resistivities is minimized in a least-squares inversion procedure by adjusting the thermal parameters of the heat model. A site-specific calibration is required since the relation between unfrozen water content and temperature is strongly dependent on the grain size of the soil. We present details of an automated permanent field measurement setup that has been established to collect the calibration data in Ilulissat, West Greenland. Considering the station's high-latitude location, this setup is unique of its kind, since the installation of automated geophysical stations in Arctic conditions is a challenging task. The main issues are related to the availability of adapted equipment, the high demands on robustness of the equipment and method due to the harsh environment, the remoteness of the field sites and the related powering issues of such systems. By showing results from the newly established geoelectrical station over the freezing period in autumn 2012, we prove 2D time-lapse resistivity tomography to be an effective method for permafrost monitoring in high latitudes. We demonstrate the effectiveness of the time-lapse geoelectrical signal for petrophysical relationship calibration, which is enhanced compared with sparse measurements.
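The calibration loop described above (heat model to unfrozen water content to resistivity to misfit minimization) can be sketched structurally as follows; both forward models here are crude placeholders standing in for the actual heat-transport, petrophysical and geoelectrical forward codes, and only the least-squares wrapper reflects the stated inversion idea.

```python
# Structural sketch of the calibration loop described above: adjust the thermal parameters
# of a ground heat-transport model so that synthetic apparent resistivities match measured
# time-lapse geoelectrical data. Both forward models below are crude placeholders (toy
# freezing curve and array response), not the authors' heat-transport, petrophysical or
# geoelectrical codes; only the least-squares wrapper reflects the stated inversion idea.
import numpy as np
from scipy.optimize import least_squares

def heat_forward(thermal_params, surface_temperature_series):
    """Placeholder: layer temperatures over time for given thermal parameters."""
    return np.outer(surface_temperature_series,
                    np.exp(-np.arange(1, 4) / thermal_params.mean()))

def resistivity_forward(layer_temperatures):
    """Placeholder petrophysics + geoelectrics: temperature -> unfrozen water content ->
    resistivity -> apparent resistivity seen by the surface array."""
    unfrozen = 1.0 / (1.0 + np.exp(-layer_temperatures))   # toy unfrozen-water curve
    return 100.0 / (0.05 + unfrozen.mean(axis=1))          # toy apparent resistivity

def residuals(thermal_params, surface_t, rho_app_measured):
    return resistivity_forward(heat_forward(thermal_params, surface_t)) - rho_app_measured

surface_t = np.linspace(5.0, -10.0, 60)   # toy ground-surface cooling series (autumn)
rho_app_measured = resistivity_forward(heat_forward(np.array([2.0, 1.5]), surface_t))
fit = least_squares(residuals, x0=np.array([1.0, 1.0]),
                    args=(surface_t, rho_app_measured), bounds=(0.1, 10.0))
print(fit.x)   # recovered "thermal parameters" (only their mean is identifiable here)
```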
The current status of the MASHA setup
NASA Astrophysics Data System (ADS)
Vedeneev, V. Yu.; Rodin, A. M.; Krupa, L.; Belozerov, A. V.; Chernysheva, E. V.; Dmitriev, S. N.; Gulyaev, A. V.; Gulyaeva, A. V.; Kamas, D.; Kliman, J.; Komarov, A. B.; Motycak, S.; Novoselov, A. S.; Salamatin, V. S.; Stepantsov, S. V.; Podshibyakin, A. V.; Yukhimchuk, S. A.; Granja, C.; Pospisil, S.
2017-11-01
The MASHA setup, designed as a mass separator with a resolving power of about 1700 that allows mass identification of superheavy nuclides, is described. The setup uses the solid ISOL (Isotope Separation On-Line) method. In the present article the upgrades of some parts of MASHA are described: the target box (rotating target + hot catcher), the ion source based on electron cyclotron resonance, and the data acquisition, beam diagnostics and control systems. The upgrade was undertaken in order to increase the total separation efficiency, reduce the separation time, improve the working stability of the installation and make continuous measurements at high beam currents possible. The ion source efficiency was measured in autonomous mode using calibrated gas leaks of Kr and Xe injected directly into the ion source. Some results of the first experiments on the production of radon isotopes using the multi-nucleon transfer reaction 48Ca+242Pu are described in the present article. The use of a TIMEPIX detector with the MASHA setup for identification of neutron-rich Rn isotopes is also described.
Yao, Mingyin; Yang, Hui; Huang, Lin; Chen, Tianbing; Rao, Gangfu; Liu, Muhua
2017-05-10
In seeking a novel method capable of green analysis for monitoring toxic heavy metal residues in fresh leafy vegetables, laser-induced breakdown spectroscopy (LIBS) was applied to prove its capability for this task. The spectra of fresh vegetable samples polluted in the lab were collected with an optimized LIBS experimental setup, and the reference concentrations of cadmium (Cd) in the samples were obtained by conventional atomic absorption spectroscopy after wet digestion. Direct calibration relating the intensity of a single Cd line to the Cd concentration exposed the weakness of this calibration method. The accuracy of the linear calibration could be slightly improved by using three Cd lines as characteristic variables, especially after the spectra were pretreated; however, this is still not sufficient for predicting Cd in the samples. Therefore, partial least-squares regression (PLSR) was utilized to enhance the robustness of the quantitative analysis. The results of the PLSR model showed that the prediction accuracy for the Cd target can meet the requirements of determination in food safety. This investigation showed that LIBS is a promising and emerging method for analyzing toxic components in agricultural products, especially when combined with suitable chemometrics.
NASA Astrophysics Data System (ADS)
Bozhenkov, S. A.; Beurskens, M.; Dal Molin, A.; Fuchert, G.; Pasch, E.; Stoneking, M. R.; Hirsch, M.; Höfel, U.; Knauer, J.; Svensson, J.; Trimino Mora, H.; Wolf, R. C.
2017-10-01
The optimized stellarator Wendelstein 7-X started operation in December 2015 with a 10 week limiter campaign. Divertor experiments will begin in the second half of 2017. The W7-X Thomson scattering system is an essential diagnostic for electron density and temperature profiles. In this paper the Thomson scattering diagnostic is described in detail, including its design, calibration, data evaluation and first experimental results. Plans for further development are also presented. The W7-X Thomson system is a Nd:YAG setup with up to five lasers, two sets of light collection lenses viewing the entire plasma cross-section, fiber bundles and filter based polychromators. To reduce hardware costs, two or three scattering volumes are measured with a single polychromator. The relative spectral calibration is carried out with the aid of a broadband supercontinuum light source. The absolute calibration is performed by observing Raman scattering in nitrogen. The electron temperatures and densities are recovered by Bayesian modelling. In the first campaign, the diagnostic was equipped for 10 scattering volumes. It provided temperature profiles comparable to those measured using an electron cyclotron emission diagnostic and line integrated densities within 10% of those from a dispersion interferometer.
NASA Astrophysics Data System (ADS)
Colin, Angel
2014-03-01
This paper describes an experimental setup for the spectral calibration of bolometric detectors used in radioastronomy. The system is composed of a Martin-Puplett interferometer with two identical artificial blackbody sources operating in the vacuum mode at 77 K and 300 K simultaneously. One source is integrated into a liquid nitrogen cryostat, and the other into a vacuum chamber at room temperature. The sources were designed with a combination of conical and cylindrical geometries, forming an orthogonal configuration to match the internal optics of the interferometer. With a simple mathematical model we estimated emissivities of about ε = 0.995 for each source.
NASA Astrophysics Data System (ADS)
Hueni, A.
2015-12-01
ESA's Airborne Imaging Spectrometer APEX (Airborne Prism Experiment) was developed under the PRODEX (PROgramme de Développement d'EXpériences scientifiques) program by a Swiss-Belgian consortium and entered its operational phase at the end of 2010 (Schaepman et al., 2015). Work on the sensor model has been carried out extensively within the framework of European Metrology Research Program as part of the Metrology for Earth Observation and Climate (MetEOC and MetEOC2). The focus has been to improve laboratory calibration procedures in order to reduce uncertainties, to establish a laboratory uncertainty budget and to upgrade the sensor model to compensate for sensor specific biases. The updated sensor model relies largely on data collected during dedicated characterisation experiments in the APEX calibration home base but includes airborne data as well where the simulation of environmental conditions in the given laboratory setup was not feasible. The additions to the model deal with artefacts caused by environmental changes and electronic features, namely the impact of ambient air pressure changes on the radiometry in combination with dichroic coatings, influences of external air temperatures and consequently instrument baffle temperatures on the radiometry, and electronic anomalies causing radiometric errors in the four shortwave infrared detector readout blocks. Many of these resolved issues might be expected to be present in other imaging spectrometers to some degree or in some variation. Consequently, the work clearly shows the difficulties of extending a laboratory-based uncertainty to data collected under in-flight conditions. The results are hence not only of interest to the calibration scientist but also to the spectroscopy end user, in particular when commercial sensor systems are used for data collection and relevant sensor characteristic information tends to be sparse. Schaepman, et al, 2015. Advanced radiometry measurements and Earth science applications with the Airborne Prism Experiment (APEX). RSE, 158, 207-219.
Indoor calibration for stereoscopic camera STC: a new method
NASA Astrophysics Data System (ADS)
Simioni, E.; Re, C.; Da Deppo, V.; Naletto, G.; Borrelli, D.; Dami, M.; Ficai Veltroni, I.; Cremonese, G.
2017-11-01
In the framework of the ESA-JAXA BepiColombo mission to Mercury, the global mapping of the planet will be performed by the on-board Stereo Camera (STC), part of the SIMBIO-SYS suite [1]. In this paper we propose a new technique for the validation of the 3D reconstruction of the planetary surface from images acquired with a stereo camera. STC will provide a three-dimensional reconstruction of the surface of Mercury. The generation of a DTM of the observed features is based on the processing of the acquired images and on the knowledge of the intrinsic and extrinsic parameters of the optical system. The new stereo concept developed for STC needs a pre-flight verification of its actual capability to obtain elevation information from stereo couples: for this, a stereo validation setup providing an indoor reproduction of the flight observing conditions of the instrument would give much greater confidence in the developed instrument design. STC is the first stereo satellite camera with two optical channels converging on a unique sensor. Its optical model is based on a brand new concept to minimize mass and volume and to allow push-frame imaging. This model imposed the definition of a new calibration pipeline to test the reconstruction method in a controlled environment. An ad-hoc indoor set-up has been realized for validating an instrument designed to operate in deep space, i.e. in flight STC will have to deal with a source/target essentially placed at infinity. This auxiliary indoor setup permits, on the one hand, rescaling the stereo reconstruction problem from the in-flight operative distance of 400 km to about 1 meter in the lab; on the other hand, it allows replicating different viewing angles for the considered targets. Neglecting the curvature of Mercury for the sake of simplicity, the STC observing geometry of the same portion of the planet surface at periherm corresponds to a rotation of the spacecraft (SC) around the observed target by twice the 20° separation of each channel with respect to nadir. The indoor simulation of the SC trajectory can therefore be provided by two rotation stages, generating a dual of the real system with the same stereo parameters but a different scale. The set of acquired images will be used to obtain a 3D reconstruction of the target: depth information retrieved from the stereo reconstruction and the known features of the target will allow an evaluation of the stereo system performance both in terms of horizontal resolution and vertical accuracy. To verify the 3D reconstruction capabilities of STC by means of this stereo validation set-up, the lab target surface should provide a reference, i.e. it should be known with an accuracy better than that required of the 3D reconstruction itself. For this reason, the rock samples carefully selected to be used as lab targets have been measured with a suitably accurate 3D laser scanner. The paper describes this method in detail, analyzing all the choices adopted to bring such a complex system to an indoor calibration solution.
NASA Astrophysics Data System (ADS)
Tan, Xihe; Mester, Achim; von Hebel, Christian; van der Kruk, Jan; Zimmermann, Egon; Vereecken, Harry; van Waasen, Stefan
2017-04-01
Electromagnetic induction (EMI) systems offer great potential to obtain highly resolved layered electrical conductivity models of the shallow subsurface. State-of-the-art inversion procedures require quantitative calibration of EMI data, especially for short-offset EMI systems where significant data shifts are often observed. These shifts are caused by external influences such as the presence of the operator, zero-leveling procedures, the field setup used to move the EMI system and/or cables close by. Calibrations can be performed by using collocated electrical resistivity measurements or taking soil samples; however, both methods take a lot of time in the field. To improve the calibration in a fast and concise way, we introduce a novel on-site calibration method using a series of apparent electrical conductivity (ECa) values acquired at multiple elevations with a multi-configuration EMI system. No additional instrument or pre-knowledge of the subsurface is needed to acquire quantitative ECa data. By using this calibration method, we correct each coil configuration, i.e., each transmitter-receiver coil separation and horizontal or vertical coplanar (HCP or VCP) coil orientation, with a unique set of calibration parameters. A multi-layer soil structure at the corresponding measurement location is inverted together with the calibration parameters, using full-solution Maxwell equations for the forward modelling within the shuffled complex evolution (SCE) algorithm to find the optimum solution within a user-defined parameter space. Synthetic data verified the feasibility of calibrating HCP and VCP measurements of a custom-made six-coil EMI system with coil offsets between 0.35 m and 1.8 m for quantitative data inversion. As a next step, we applied the calibration approach to experimental data acquired with the considered EMI system on a bare-soil test field (Selhausen, Germany). The obtained calibration parameters were applied to measurements over a 30 m transect line that covers a range of conductivities between 5 and 40 mS/m. Inverted calibrated EMI data of the transect line showed very similar electrical conductivity distributions and layer interfaces of the subsurface compared to reference data obtained from vertical electrical sounding (VES) measurements. These results show that a combined calibration and inversion of multi-configuration EMI data is possible when including measurements at different elevations, which speeds up the process of obtaining quantitative EMI data since labor-intensive electrical resistivity measurements or soil coring are no longer necessary.
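As a deliberately simplified illustration of the elevation-based calibration idea (not the study's method, which jointly inverts a layered model with a full Maxwell forward solution and the SCE algorithm), the sketch below fits a per-configuration linear correction by assuming a homogeneous half-space of known reference conductivity and McNeill's low-induction-number cumulative response functions; all numbers are made up.

```python
# Deliberately simplified illustration of the elevation-based calibration idea: a
# per-configuration linear correction (gain a, offset b) is fitted by assuming a
# homogeneous half-space of *known* reference conductivity and McNeill's low-induction-
# number cumulative responses. The study itself instead jointly inverts a layered model
# with a full Maxwell forward solution and the SCE algorithm. All numbers are invented.
import numpy as np

def cumulative_response(z, orientation):
    """McNeill (1980) cumulative response vs. normalized height z = elevation / coil offset."""
    if orientation == "HCP":    # horizontal coplanar loops (vertical dipoles)
        return 1.0 / np.sqrt(4.0 * z ** 2 + 1.0)
    if orientation == "VCP":    # vertical coplanar loops (horizontal dipoles)
        return np.sqrt(4.0 * z ** 2 + 1.0) - 2.0 * z
    raise ValueError(orientation)

def fit_linear_calibration(eca_raw, heights, coil_offset, orientation, sigma_ref):
    """Solve a * eca_raw + b = sigma_ref * R(h/s) for (a, b) by linear least squares."""
    target = sigma_ref * cumulative_response(np.asarray(heights) / coil_offset, orientation)
    design = np.column_stack([np.asarray(eca_raw), np.ones(len(eca_raw))])
    (a, b), *_ = np.linalg.lstsq(design, target, rcond=None)
    return a, b

heights = [0.1, 0.4, 0.8, 1.2, 1.6]              # instrument elevations above ground (m)
eca_raw = [31.0, 22.5, 15.0, 10.8, 8.3]          # shifted/scaled raw ECa readings (mS/m)
a, b = fit_linear_calibration(eca_raw, heights, coil_offset=1.8,
                              orientation="HCP", sigma_ref=20.0)
print(a, b)   # afterwards apply sigma_cal = a * eca_raw + b to survey data
```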
Calibration of Photon Sources for Brachytherapy
NASA Astrophysics Data System (ADS)
Rijnders, Alex
Source calibration has to be considered an essential part of the quality assurance program in a brachytherapy department. Not only will it ensure that the source strength value used for dose calculation agrees within predetermined limits with the value stated on the source certificate, but it will also ensure traceability to international standards. At present, calibration is most often still given in terms of reference air kerma rate, although calibration in terms of absorbed dose to water would be closer to the user's interest. It can be expected that in the near future several standard laboratories will be able to offer this latter service, and dosimetry protocols will have to be adapted accordingly. In-air measurement using ionization chambers (e.g. a Baldwin-Farmer ionization chamber for 192Ir high dose rate (HDR) or pulsed dose rate (PDR) sources) is still considered the method of choice for high energy source calibration, but because of their ease of use and reliability, well-type chambers are becoming more popular and are nowadays often recommended as the standard equipment. For low energy sources, well-type chambers are in practice the only equipment available for calibration. Care should be taken that the chamber is calibrated at the standard laboratory for the same source type and model as used in the clinic, and using the same measurement conditions and setup. Several standard laboratories have difficulties providing these calibration facilities, especially for the low energy seed sources (125I and 103Pd). Should a user not be able to obtain properly calibrated equipment to verify the brachytherapy sources used in his department, then, at least for sources that are replaced on a regular basis, a consistency check program should be set up to ensure a minimal level of quality control before these sources are used for patient treatment.
Calibrating the interaction matrix for the LINC-NIRVANA high layer wavefront sensor.
Zhang, Xianyu; Arcidiacono, Carmelo; Conrad, Albert R; Herbst, Thomas M; Gaessler, Wolfgang; Bertram, Thomas; Ragazzoni, Roberto; Schreiber, Laura; Diolaiti, Emiliano; Kuerster, Martin; Bizenberger, Peter; Meschke, Daniel; Rix, Hans-Walter; Rao, Changhui; Mohr, Lars; Briegel, Florian; Kittmann, Frank; Berwein, Juergen; Trowitzsch, Jan
2012-03-26
LINC-NIRVANA is a near-infrared Fizeau interferometric imager that will operate at the Large Binocular Telescope. In preparation for the commissioning of this instrument, we conducted experiments for calibrating the high-layer wavefront sensor of the layer-oriented multi-conjugate adaptive optics system. For calibrating the multi-pyramid wavefront sensor, four light sources were used to simulate guide stars. Using this setup, we developed the push-pull method for calibrating the interaction matrix. The benefits of this method over the traditional push-only method are quantified, and the effect of varying the number of push-pull frames over which aberrations are averaged is also reported. Finally, we discuss a method for measuring mis-conjugation between the deformable mirror and the wavefront sensor, and the proper positioning of the wavefront sensor detector with respect to the four pupil positions.
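In essence, push-pull calibration pokes each deformable-mirror mode with equal and opposite amplitudes and differences the averaged sensor signals, which cancels static aberrations; a minimal, generic sketch of that bookkeeping is given below (hardware access is mocked and all names and numbers are placeholders, not LINC-NIRVANA code).

```python
# Minimal, generic sketch of push-pull interaction-matrix calibration: each deformable-
# mirror mode is poked by +amp and -amp, the wavefront-sensor signals (averaged over
# several frames) are differenced, and the command matrix is the pseudo-inverse. The
# hardware access is mocked; names and numbers are placeholders.
import numpy as np

def measure_slopes(dm_command, n_frames, sensor):
    """Placeholder for applying a DM command and averaging WFS slope frames."""
    return np.mean([sensor(dm_command) for _ in range(n_frames)], axis=0)

def push_pull_interaction_matrix(n_modes, amp, n_frames, sensor, n_slopes):
    im = np.zeros((n_slopes, n_modes))
    for k in range(n_modes):
        cmd = np.zeros(n_modes)
        cmd[k] = amp
        s_push = measure_slopes(+cmd, n_frames, sensor)
        s_pull = measure_slopes(-cmd, n_frames, sensor)
        im[:, k] = (s_push - s_pull) / (2.0 * amp)   # differencing removes static terms
    return im

# Toy "sensor": a fixed linear response plus noise, standing in for the real WFS.
rng = np.random.default_rng(1)
true_response = rng.normal(size=(100, 20))
sensor = lambda cmd: true_response @ cmd + 0.01 * rng.normal(size=100)

im = push_pull_interaction_matrix(n_modes=20, amp=0.5, n_frames=4,
                                  sensor=sensor, n_slopes=100)
command_matrix = np.linalg.pinv(im)                  # reconstructor
print(np.allclose(im, true_response, atol=0.05))
```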
The LED and fiber based calibration system for the photomultiplier array of SNO+
NASA Astrophysics Data System (ADS)
Seabra, L.; Alves, R.; Andringa, S.; Bradbury, S.; Carvalho, J.; Clark, K.; Coulter, I.; Descamps, F.; Falk, L.; Gurriana, L.; Kraus, C.; Lefeuvre, G.; Maio, A.; Maneira, J.; Mottram, M.; Peeters, S.; Rose, J.; Sinclair, J.; Skensved, P.; Waterfield, J.; White, R.; Wilson, J.; SNO+ Collaboration
2015-02-01
A new external LED/fiber light injection calibration system was designed for the calibration and monitoring of the photomultiplier array of the SNO+ experiment at SNOLAB. The goal of the calibration system is to allow an accurate and regular measurement of the photomultiplier array's performance, while minimizing the risk of radioactivity ingress. The choice in SNO+ was to use a set of optical fiber cables to convey into the detector the light pulses produced by external LEDs. The quality control was carried out using a modified test bench that was used in QC of optical fibers for TileCal/ATLAS. The optical fibers were characterized for transmission, timing and angular dispersions. This article describes the setups used for the characterization and quality control of the system based on LEDs and optical fibers and their results.
Experimental and numerical study of a 10MW TLP wind turbine in waves and wind
NASA Astrophysics Data System (ADS)
Pegalajar-Jurado, Antonio; Hansen, Anders M.; Laugesen, Robert; Mikkelsen, Robert F.; Borg, Michael; Kim, Taeseong; Heilskov, Nicolai F.; Bredmose, Henrik
2016-09-01
This paper presents tests on a 1:60 version of the DTU 10MW wind turbine mounted on a tension leg platform and their numerical reproduction. Both the experimental setup and the numerical model are Froude-scaled, and the dynamic response of the floating wind turbine to wind and waves is compared in terms of motion in the six degrees of freedom, nacelle acceleration and mooring line tension. The numerical model is implemented in the aero-elastic code Flex5, featuring the unsteady BEM method and the Morison equation for the modelling of aerodynamics and hydrodynamics, respectively. It was calibrated with the tests by matching key system features, namely the steady thrust curve and the decay tests in water. The calibrated model is used to reproduce the wind-wave climates in the laboratory, including regular and irregular waves, with and without wind. The model predictions are compared to the measured data, and a good agreement is found for surge and heave, while some discrepancies are observed for pitch, nacelle acceleration and line tension. The addition of wind generally improves the agreement with test results. The aerodynamic damping is identified in both tests and simulations. Finally, the sources of the discrepancies are discussed and some improvements in the numerical model are suggested in order to obtain a better agreement with the experiments.
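Since the hydrodynamic loading in the Flex5 setup is stated to follow the Morison equation, a minimal sketch of the per-unit-length inline force on a slender cylindrical member is given below; the density, coefficients and kinematics are illustrative values, not parameters of the tested TLP.

```python
# Minimal sketch of the Morison equation used for the hydrodynamic loading in the model:
# inline force per unit length on a slender cylinder from a drag and an inertia term.
# The density, coefficients and kinematics are illustrative values, not parameters of
# the tested TLP floater.
import math

def morison_force_per_length(u, du_dt, diameter, rho=1025.0, cd=1.0, cm=2.0):
    """f = 0.5*rho*Cd*D*u*|u| + rho*Cm*(pi*D^2/4)*du/dt   [N/m]"""
    drag = 0.5 * rho * cd * diameter * u * abs(u)
    inertia = rho * cm * math.pi * diameter ** 2 / 4.0 * du_dt
    return drag + inertia

# Example: wave kinematics at one elevation (made-up numbers).
print(morison_force_per_length(u=1.2, du_dt=0.8, diameter=0.6))
```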
NASA Astrophysics Data System (ADS)
Chatterjee, Abhijit; Verma, Anurag
2016-05-01
The Advanced Wide Field Sensor (AWiFS) camera caters to the high temporal resolution requirement of the Resourcesat-2A mission, with a repeat cycle of 5 days. The AWiFS camera consists of four spectral bands, three in the visible and near IR and one in the short wave infrared. The imaging concept in the VNIR bands is based on push-broom scanning that uses a linear array silicon charge coupled device (CCD) based Focal Plane Array (FPA). The On-Board Calibration unit for these CCD-based FPAs is used to monitor any degradation of the FPA during the entire mission life. Four LEDs are operated in constant current mode and 16 different light intensity levels are generated by electronically changing the exposure of the CCD throughout the calibration cycle. This paper describes the experimental setup and characterization results of various flight-model visible LEDs (λP = 650 nm) for the development of the On-Board Calibration unit of the Advanced Wide Field Sensor (AWiFS) camera of RESOURCESAT-2A. Various LED configurations have been studied to cover the dynamic range of the 6000-pixel silicon CCD based focal plane array from 20% to 60% of saturation during the night pass of the satellite, in order to identify degradation of detector elements. The paper also compares simulated and experimental CCD output profiles at different LED combinations in constant current mode.
NASA Astrophysics Data System (ADS)
Guillevic, Myriam; Pascale, Céline; Mutter, Daniel; Wettstein, Sascha; Niederhauser, Bernhard
2017-04-01
In the framework of METAS' AtmoChem-ECV project, new facilities are currently being developed to generate reference gas mixtures for water vapour at concentrations measured in the high troposphere and polar regions, in the range 1-20 µmol/mol (ppm). The generation method is dynamic (the mixture is produced continuously over time) and SI-traceable (i.e. the amount of substance fraction in mole per mole is traceable to the definition of SI-units). The generation process is composed of three successive steps. The first step is to purify the matrix gas, nitrogen or synthetic air. Second, this matrix gas is spiked with the pure substance using a permeation technique: a permeation device contains a few grams of pure water in liquid form and loses it linearly over time by permeation through a membrane. In a third step, to reach the desired concentration, the first, high concentration mixture exiting the permeation chamber is then diluted with a chosen flow of matrix gas with one or two subsequent dilution steps. All flows are piloted by mass flow controllers. All parts in contact with the gas mixture are passivated using coated surfaces, to reduce adsorption/desorption processes as much as possible. The mixture can eventually be directly used to calibrate an analyser. The standard mixture produced by METAS' dynamic setup was injected into a chilled mirror from MBW Calibration AG, the designated institute for absolute humidity calibration in Switzerland. The used chilled mirror, model 373LX, is able to measure frost point and sample pressure and therefore calculate the water vapour concentration. This intercomparison of the two systems was performed in the range 4-18 ppm water vapour in synthetic air, at two different pressure levels, 1013.25 hPa and 2000 hPa. We present here METAS' dynamic setup, its uncertainty budget and the first results of the intercomparison with MBW's chilled mirror.
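For orientation, the amount fraction produced by such a permeation-plus-dilution scheme follows from the permeation rate and the molar flows involved; the sketch below shows that arithmetic with invented flow values (they are not METAS operating parameters), assuming flows are given at 0 °C and 1013.25 hPa.

```python
# Sketch of the amount-of-substance fraction obtained from a permeation device followed
# by dilution: the permeation rate (mass loss per unit time) gives a molar flow of water,
# which is referred to the total molar flow after each dilution step. The flow values
# below are invented, not METAS operating parameters; "norm" litre flows are assumed to
# refer to 0 degC and 1013.25 hPa.
M_WATER = 18.015e-3        # kg/mol
MOLAR_VOLUME = 22.414e-3   # m^3/mol at 0 degC, 1013.25 hPa

def amount_fraction(permeation_rate_ng_per_min, chamber_flow_ln_per_min,
                    dilution_flows_ln_per_min):
    n_water = permeation_rate_ng_per_min * 1e-12 / M_WATER      # mol/min
    n_carrier = chamber_flow_ln_per_min * 1e-3 / MOLAR_VOLUME   # mol/min
    x = n_water / (n_water + n_carrier)                         # fraction after the chamber
    flow = chamber_flow_ln_per_min
    for dilution in dilution_flows_ln_per_min:                  # each dilution stage
        x *= flow / (flow + dilution)
        flow += dilution
    return x

# Example: 2000 ng/min permeation into 0.1 Ln/min, then one 0.4 Ln/min dilution stage.
x = amount_fraction(2000.0, 0.1, [0.4])
print(f"{x * 1e6:.2f} umol/mol")   # roughly 5 umol/mol with these invented numbers
```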
Design of system calibration for effective imaging
NASA Astrophysics Data System (ADS)
Varaprasad Babu, G.; Rao, K. M. M.
2006-12-01
A CCD-based characterization setup comprising a light source, a CCD linear array, electronics for signal conditioning/amplification and a PC interface has been developed to generate images at varying densities and at multiple view angles. This arrangement is used to simulate and evaluate images by the Super Resolution technique with multiple overlaps and yaw-rotated images at different view angles. This setup also generates images at different densities to analyze the response of the detector port-wise. The light intensity produced by the source needs to be calibrated for proper imaging by the highly sensitive CCD detector over the FOV. One approach is to design a complex integrating sphere arrangement, which is expensive for such applications. Another approach is to provide a suitable intensity feedback correction wherein the current through the lamp is controlled in a closed-loop arrangement. This method is generally used in applications where the light source is a point source. The third method is to control the exposure time inversely to the lamp variations when the lamp intensity cannot be controlled. In this method, the light intensity at the start of each line is sampled and the correction factor is applied to the full line. The fourth method is to provide correction through a look-up table, where the responses of all the detectors are normalized through a digital transfer function. The fifth method is to have a light-line arrangement, where light from a single source is distributed through multiple fiber-optic cables arranged in a line. This is generally applicable and economical for low-width cases. In our application, a new method is used wherein an inverse multi-density filter is designed, which provides effective calibration for the full swath even at low light intensities. The light intensity along the length is measured, an inverse density is computed, and a correction filter is generated and implemented in the CCD-based characterization setup. This paper describes novel techniques for the design and implementation of system calibration for effective imaging, to produce better quality data products, especially while handling high-resolution data.
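As a concrete illustration of the look-up-table approach mentioned above (the fourth method), the sketch below derives one digital gain per detector element from a flat-illumination measurement and applies it to subsequent lines; the array size, dark level and random non-uniformity are invented for the example.

```python
# Concrete illustration of the look-up-table correction mentioned above (the fourth
# method): one digital gain per detector element is derived from a flat-illumination
# measurement and applied to subsequent lines. The array size, dark level and random
# non-uniformity are invented for the example.
import numpy as np

def build_gain_lut(flat_response, target=None):
    """flat_response: mean dark-subtracted counts per detector under flat illumination."""
    flat = np.asarray(flat_response, dtype=float)
    target = flat.mean() if target is None else target
    return target / flat                      # one gain per detector element

def apply_lut(raw_line, gains, dark):
    return (np.asarray(raw_line, dtype=float) - dark) * gains

rng = np.random.default_rng(2)
dark = 20.0
flat = 1000.0 * rng.uniform(0.9, 1.1, size=6000)   # non-uniform port/detector response
gains = build_gain_lut(flat)
line = apply_lut(flat + dark, gains, dark)          # corrected line becomes ~uniform
print(line.std() / line.mean())                     # close to zero after correction
```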
Evaluation and Analysis of F-16XL Wind Tunnel Data From Static and Dynamic Tests
NASA Technical Reports Server (NTRS)
Kim, Sungwan; Murphy, Patrick C.; Klein, Vladislav
2004-01-01
A series of wind tunnel tests were conducted in the NASA Langley Research Center as part of an ongoing effort to develop and test mathematical models for aircraft rigid-body aerodynamics in nonlinear unsteady flight regimes. Analysis of measurement accuracy, especially for nonlinear dynamic systems that may exhibit complicated behaviors, is an essential component of this ongoing effort. In this report, tools for harmonic analysis of dynamic data and assessing measurement accuracy are presented. A linear aerodynamic model is assumed that is appropriate for conventional forced-oscillation experiments, although more general models can be used with these tools. Application of the tools to experimental data is demonstrated and results indicate the levels of uncertainty in output measurements that can arise from experimental setup, calibration procedures, mechanical limitations, and input errors.
A Peltier-based variable temperature source
NASA Astrophysics Data System (ADS)
Molki, Arman; Roof Baba, Abdul
2014-11-01
In this paper we propose a simple and cost-effective variable temperature source based on the Peltier effect using a commercially purchased thermoelectric cooler. The proposed setup can be used to quickly establish relatively accurate dry temperature reference points, which are necessary for many temperature applications such as thermocouple calibration.
Calibration of a mosfet detection system for 6-MV in vivo dosimetry.
Scalchi, P; Francescon, P
1998-03-01
Metal oxide semiconductor field-effect transistor (MOSFET) detectors were calibrated to perform in vivo dosimetry during 6-MV treatments, both in normal setup and total body irradiation (TBI) conditions. MOSFET water-equivalent depth, dependence of the calibration factors (CFs) on the field sizes, MOSFET orientation, bias supply, accumulated dose, incidence angle, temperature, and spoiler-skin distance in TBI setup were investigated. MOSFET reproducibility was verified. The correlation between the water-equivalent midplane depth and the ratio of the exit MOSFET readout to the entrance MOSFET readout was studied. MOSFET midplane dosimetry in TBI setup was compared with thermoluminescent dosimetry in an anthropomorphic phantom. By using ionization chamber measurements, the TBI midplane dosimetry was also verified in the presence of cork as a lung substitute. The water-equivalent depth of the MOSFET is about 0.8 mm or 1.8 mm, depending on which sensor side faces the beam. The field size also affects this quantity; Monte Carlo simulations attribute this behavior to changes in the mean energy of the contaminating electrons. The CFs vary linearly with the side of the square field, for fields ranging from 5 x 5 to 30 x 30 cm2. In TBI setup, varying the spoiler-skin distance between 5 mm and 10 cm affects the CFs within 5%. The MOSFET reproducibility is about 3% (2 SD) for the doses normally delivered to the patients. The effect of the accumulated dose on the sensor response is negligible. For beam incidence ranging from 0 degrees to 90 degrees, the MOSFET response varies within 7%. No monotonic correlation between the sensor response and the temperature is apparent. Good correlation between the water-equivalent midplane depth and the ratio of the exit MOSFET readout to the entrance MOSFET readout was found (the correlation coefficient is about 1). The MOSFET midplane dosimetry relevant to the anthropomorphic phantom irradiation is in agreement with TLD dosimetry within 5%. Ionization chamber and MOSFET midplane dosimetry in inhomogeneous phantoms are in agreement within 2%. MOSFET characteristics are suitable for the in vivo dosimetry relevant to 6-MV treatments, both in normal and TBI setups. The TBI midplane dosimetry using MOSFETs is valid also in the presence of the lung, which is the most critical organ, and shows that calculating the lung attenuator thicknesses from density alone is not correct. Our MOSFET dosimetry system can also be used to determine the surface dose by using the water-equivalent depth and extrapolation methods. This procedure depends on the field size used.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Corona, Edmundo; Song, Bo
This memo concerns the transmission of mechanical signals through silicone foam pads in a compression Kolsky bar set-up. The results of numerical simulations for four levels of pad pre-compression and two striker velocities were compared directly to test measurements to assess the fidelity of the simulations. The finite element model simulated the Kolsky tests in their entirety and used the hyperelastic `hyperfoam' model for the silicone foam pads. Calibration of the hyperfoam model was deduced from quasi-static compression data. It was necessary, however, to augment the material model by adding stiffness-proportional damping in order to generate results that resembled the experimental measurements. Based on the results presented here, it is important to account for the dynamic behavior of polymeric foams in numerical simulations that involve high loading rates.
Detailed Calibration of SphinX instrument at the Palermo XACT facility of INAF-OAPA
NASA Astrophysics Data System (ADS)
Szymon, Gburek; Collura, Alfonso; Barbera, Marco; Reale, Fabio; Sylwester, Janusz; Kowalinski, Miroslaw; Bakala, Jaroslaw; Kordylewski, Zbigniew; Plocieniak, Stefan; Podgorski, Piotr; Trzebinski, Witold; Varisco, Salvatore
The Solar photometer in X-rays (SphinX) experiment is scheduled for launch in late summer 2008 on board the Russian CORONAS-Photon satellite. SphinX will use three silicon PIN diode detectors with selected effective areas in order to record solar spectra in the X-ray energy range 0.3-15 keV with unprecedented temporal resolution and medium energy resolution. The high sensitivity and large dynamic range of the SphinX instrument will, for the first time, make it possible to observe solar soft X-ray variability from the weakest levels, ten times below present thresholds, to the largest X20+ flares. We present the results of the ground X-ray calibrations of the SphinX instrument performed at the X-ray Astronomy Calibration and Testing (XACT) facility of INAF-OAPA. The calibrations were essential for determining the SphinX detector energy resolution and efficiency. We describe the ground-test instrumental set-up and the adopted measurement techniques, and present the results of the calibration data analysis.
Case-based Reasoning for Automotive Engine Performance Tune-up
NASA Astrophysics Data System (ADS)
Vong, C. M.; Huang, H.; Wong, P. K.
2010-05-01
The automotive engine performance tune-up is greatly affected by the calibration of its electronic control unit (ECU). The ECU calibration is traditionally done by a trial-and-error method. This traditional method consumes a large amount of time and money because it requires a large number of dynamometer tests. To resolve this problem, case-based reasoning (CBR) is employed, so that an existing and effective ECU setup can be adapted to fit another similar class of engines. The adaptation procedure is done through a more sophisticated step called case-based adaptation (CBA) [1, 2]. CBA is an effective knowledge management tool, which can interactively learn expert adaptation knowledge. The paper briefly reviews the methodologies of CBR and CBA. Then the application to ECU calibration is described via a case study. With CBR and CBA, the efficiency of calibrating an ECU can be enhanced. A prototype system has also been developed to verify the usefulness of CBR in ECU calibration.
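The core of the CBR step described above is retrieving the stored ECU setup of the most similar previously calibrated engine, which is then adapted by CBA. A minimal sketch of such a retrieval step is shown below; the case base, feature names and weights are hypothetical, not the authors' data.

```python
# Illustrative sketch of the CBR "retrieve" step under assumed engine features.
import math

case_base = [
    # (engine features, stored ECU setup)
    ({"displacement_l": 1.6, "cylinders": 4, "redline_rpm": 6500},
     {"ignition_advance_deg": 12.0, "afr_target": 13.2}),
    ({"displacement_l": 2.0, "cylinders": 4, "redline_rpm": 7000},
     {"ignition_advance_deg": 10.5, "afr_target": 12.9}),
]

def similarity(query, case, weights):
    """Weighted inverse-distance similarity between engine feature vectors."""
    d = sum(weights[k] * (query[k] - case[k]) ** 2 for k in weights)
    return 1.0 / (1.0 + math.sqrt(d))

def retrieve(query, weights):
    """Return the stored ECU setup of the most similar past engine."""
    return max(case_base, key=lambda c: similarity(query, c[0], weights))[1]

weights = {"displacement_l": 1.0, "cylinders": 0.5, "redline_rpm": 1e-6}
print(retrieve({"displacement_l": 1.8, "cylinders": 4, "redline_rpm": 6800}, weights))
```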
NASA Astrophysics Data System (ADS)
Saaid, Hicham; Segers, Patrick; Novara, Matteo; Claessens, Tom; Verdonck, Pascal
2018-03-01
The characterization of flow patterns in the left ventricle may help the development and interpretation of flow-based parameters of cardiac function and (patho-)physiology. Yet, in vivo visualization of highly dynamic three-dimensional flow patterns in an opaque and moving chamber is a challenging task. This has been shown in several recent multidisciplinary studies where in vivo imaging methods are often complemented by in silico solutions, or by in vitro methods. Because of its distinctive features, particle image velocimetry (PIV) has been extensively used to investigate flow dynamics in the cardiovascular field. However, full volumetric PIV data in a dynamically changing geometry such as the left ventricle remain extremely scarce, which justifies the present study. An investigation of the left ventricle flow making use of a customized cardiovascular simulator is presented; a multiplane scanning-stereoscopic PIV setup is used, which allows for the measurement of independent planes across the measurement volume. Due to the accuracy in traversing the illumination and imaging systems, the present setup allows the flow in a 3D volume to be reconstructed with only a single calibration. The effects of the orientation of a prosthetic mitral valve in anatomical and anti-anatomical configurations have been investigated during the diastolic filling time. The measurement is performed in a phase-locked manner; the mean velocity components are presented together with the vorticity and turbulent kinetic energy maps. The reconstructed 3D flow structures downstream of the bileaflet mitral valve are shown, providing additional insight into the highly three-dimensional flow.
NASA Astrophysics Data System (ADS)
Versini, Pierre-Antoine; Tchiguirinskaia, Ioulia; Schertzer, Daniel
2017-04-01
Green roofs are commonly considered efficient tools to mitigate urban runoff, as they can store precipitation and consequently provide retention and detention. Designed as a compromise between water holding capacity, weight and hydraulic conductivity, their substrate is usually an artificial medium differing significantly from a traditional soil. In order to assess green roof hydrological performance, many models have been developed. Classified into two categories (conceptual and physically based), they are usually applied to reproduce the discharge of a particular monitored green roof considered as homogeneous. Although the resulting simulations can be satisfactory, the question of the robustness and consistency of the calibrated parameters is often not addressed. Here, a modeling framework has been developed to assess the efficiency and the robustness of both modelling approaches (conceptual and physically based) in reproducing green roof hydrological behaviour. The SWMM and VS2DT models have been used for this purpose. This work also benefits from an experimental setup where several green roofs differentiated by their substrate thickness and vegetation cover are monitored. Based on the data collected for several rainfall events, we studied how the calibrated parameters are linked to the physical properties of the roofs and how they vary from one green roof configuration to another. Although both models correctly reproduce the observed discharges in most cases, their calibrated parameters are highly inconsistent. For the same green roof configuration, these parameters can vary significantly from one rainfall event to another, even though they are supposed to be linked to the green roof characteristics (roughness or residual moisture content, for instance). They can also differ from one green roof configuration to another even though the implemented substrate is the same. Finally, it appears very difficult to find any relationship between the calibrated parameters supposed to represent similar characteristics in both models (porosity, hydraulic conductivity). These results illustrate the difficulty of reproducing the hydrological behaviour of the artificial media constituting green roof substrates. They justify the development of new methods able to take into account, for instance, the spatial heterogeneity of the substrate.
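For readers unfamiliar with the conceptual model class discussed above, the sketch below shows a generic single-reservoir green roof runoff model with a storage capacity and a linear drainage coefficient. It is a stand-in illustration under assumed parameter values, not the SWMM or VS2DT formulations used in the study.

```python
# Minimal sketch of a conceptual (reservoir-type) green roof model, assuming a single
# substrate storage with a maximum capacity and a linear drainage coefficient.
import numpy as np

def green_roof_runoff(rain_mm, pet_mm, s_max_mm=30.0, k_drain=0.15, s0_mm=0.0):
    """Return simulated runoff (mm per step) for rainfall and potential ET series."""
    storage, runoff = s0_mm, []
    for p, e in zip(rain_mm, pet_mm):
        storage = max(storage + p - e, 0.0)       # fill with rain, deplete by ET
        overflow = max(storage - s_max_mm, 0.0)   # saturation excess
        storage -= overflow
        drainage = k_drain * storage              # linear drainage from substrate
        storage -= drainage
        runoff.append(overflow + drainage)
    return np.array(runoff)

rain = np.array([0, 5, 12, 20, 3, 0, 0], dtype=float)   # mm per time step
pet = np.full_like(rain, 0.2)
print(green_roof_runoff(rain, pet).round(2))
```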
Barbui, T.; Krychowiak, M.; König, R.; ...
2016-09-27
A beam emission spectroscopy system on thermal helium (He) and neon (Ne) has been set up at Wendelstein 7-X to measure edge electron temperature and density profiles utilizing the line-ratio technique or its extension by the analysis of absolutely calibrated line emissions. The setup for a first systematic test of these techniques of quantitative atomic spectroscopy in the limiter startup phase (OP1.1) is reported together with first measured profiles. Lastly, this setup and the first results are an important test for developing the technique for the upcoming high density, low temperature island divertor regime.
Investigating a compact phantom and setup for testing body sound transducers
Mansy, Hansen A; Grahe, Joshua; Royston, Thomas J; Sandler, Richard H
2011-01-01
Contact transducers are a key element in experiments involving body sounds. The characteristics of these devices are often not known with accuracy. There are no standardized calibration setups or procedures for testing these sensors. This study investigated the characteristics of a new computer-controlled sound source phantom for testing sensors. Results suggested that sensors of different sizes impose particular requirements on the phantom. The effectiveness of certain approaches in increasing the spatial and spectral uniformity of the phantom surface signal was studied. Non-uniformities >20 dB were removable, which can be particularly helpful in comparing the characteristics of different-sized sensors more accurately. PMID:21496795
NASA Technical Reports Server (NTRS)
Pagnutti, Mary; Ryan, Robert E.; Holekamp, Kara; Harrington, Gary; Frisbie, Troy
2006-01-01
A simple and cost-effective hyperspectral sun photometer for radiometric vicarious remote sensing system calibration, air quality monitoring, and potentially in-situ planetary climatological studies was developed. The device was constructed solely from off-the-shelf components and was designed to be easily deployable in support of short-term verification and validation data collects. This sun photometer not only provides the same data products as existing multi-band sun photometers, but also requires a simpler setup and less data acquisition time, and allows for a more direct calibration approach. Fielding this instrument has also enabled Stennis Space Center (SSC) Applied Sciences Directorate personnel to cross-calibrate existing sun photometers. This innovative research will position SSC personnel to perform air quality assessments in support of the NASA Applied Sciences Program's National Applications program element as well as to develop techniques to evaluate aerosols in a Martian or other planetary atmosphere.
A spectrally tunable LED sphere source enables accurate calibration of tristimulus colorimeters
NASA Astrophysics Data System (ADS)
Fryc, I.; Brown, S. W.; Ohno, Y.
2006-02-01
The Four-Color Matrix method (FCM) was developed to improve the accuracy of chromaticity measurements of various display colors. The method is valid for each type of display having similar spectra. To develop the Four-Color correction matrix, spectral measurements of primary red, green, blue, and white colors of a display are needed. Consequently, a calibration facility should be equipped with a number of different displays. This is very inconvenient and expensive. A spectrally tunable light source (STS) that can mimic different display spectral distributions would eliminate the need for maintaining a wide variety of displays and would enable a colorimeter to be calibrated for a number of different displays using the same setup. Simulations show that an STS that can create red, green, blue and white distributions that are close to the real spectral power distribution (SPD) of a display works well with the FCM for the calibration of colorimeters.
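The correction described above boils down to a matrix that maps the colorimeter's readings of a display's red, green, blue and white primaries onto reference tristimulus values. The sketch below shows one way such a matrix can be derived by least squares; the numerical values are placeholders and this is not the exact FCM formulation.

```python
# Illustrative sketch of deriving a colorimeter correction matrix from measurements of
# a display's red, green, blue and white primaries (four-colour style correction).
import numpy as np

# Columns: R, G, B, W tristimulus values (X, Y, Z); placeholder data
ref = np.array([[41.2, 35.8, 18.0, 95.0],    # reference instrument (spectroradiometer)
                [21.3, 71.5,  7.2, 100.0],
                [ 1.9, 11.9, 95.0, 108.8]])
meas = np.array([[40.1, 36.9, 18.8, 95.8],   # tristimulus colorimeter under test
                 [20.8, 72.8,  7.6, 101.2],
                 [ 2.3, 12.4, 93.1, 107.9]])

# Least-squares 3x3 matrix M such that ref ≈ M @ meas
M, *_ = np.linalg.lstsq(meas.T, ref.T, rcond=None)
M = M.T

corrected = M @ meas
print(np.round(M, 3))
print(np.round(corrected - ref, 3))   # residuals after correction
```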
Torralba, Marta; Díaz-Pérez, Lucía C.
2017-01-01
This article presents a self-calibration procedure and the experimental results for the geometrical characterisation of a 2D laser system operating along a large working range (50 mm × 50 mm) with submicrometre uncertainty. Its purpose is to correct the geometric errors of the 2D laser system setup generated when positioning the two laser heads and the plane mirrors used as reflectors. The non-calibrated artefact used in this procedure is a commercial grid encoder that is also a measuring instrument. Therefore, the self-calibration procedure also allows the determination of the geometrical errors of the grid encoder, including its squareness error. The precision of the proposed algorithm is tested using virtual data. Actual measurements are subsequently registered, and the algorithm is applied. Once the laser system is characterised, the error of the grid encoder is calculated along the working range, resulting in an expanded submicrometre calibration uncertainty (k = 2) for the X and Y axes. The results of the grid encoder calibration are comparable to the errors provided by the calibration certificate for its main central axes. It is, therefore, possible to confirm the suitability of the self-calibration methodology proposed in this article. PMID:28858239
Changes in deviation of absorbed dose to water among users by chamber calibration shift.
Katayose, Tetsurou; Saitoh, Hidetoshi; Igari, Mitsunobu; Chang, Weishan; Hashimoto, Shimpei; Morioka, Mie
2017-07-01
The JSMP01 dosimetry protocol had adopted the provisional 60Co absorbed-dose-to-water calibration coefficient N_D,w, namely, the product of the exposure calibration coefficient N_C and the conversion coefficient k_D,X. After that, the absorbed dose to water (D_w) standard was established, and the JSMP12 protocol adopted the N_D,w calibration based on that standard. In this study, the influence of the calibration shift on the measurement of D_w among users was analyzed. The intercomparison of D_w using an ionization chamber was performed annually by visiting related hospitals. Intercomparison results before and after the calibration shift were analyzed, the deviation of D_w among users was re-evaluated, and the cause of the deviation was estimated. As a result, the stability of the LINAC, the calibration of the thermometer and barometer, and the ion recombination correction method were confirmed. No statistically significant change in the standard deviation of D_w was observed, but a statistically significant difference in D_w among users was observed between the N_C-based and N_D,w-based calibrations. Uncertainty due to chamber-to-chamber variation was reduced by the calibration shift, consequently reducing the uncertainty among users regarding D_w. The results also indicate that uncertainty might be further reduced by accurate and detailed instructions on the setup of the ionization chamber.
Imaging of particles with 3D full parallax mode with two-color digital off-axis holography
NASA Astrophysics Data System (ADS)
Kara-Mohammed, Soumaya; Bouamama, Larbi; Picart, Pascal
2018-05-01
This paper proposes an approach based on two orthogonal views and two wavelengths for recording off-axis two-color holograms. The approach permits discrimination of particles aligned along the sight-view axis. The experimental set-up is based on a double Mach-Zehnder architecture in which two different wavelengths provide the reference and object beams. The digital processing used to obtain images of the particles is based on convolution, so that the reconstructed images have no wavelength dependence. The spatial bandwidth of the angular spectrum transfer function is adapted in order to increase the maximum reconstruction distance, which is generally limited to a few tens of millimeters. In order to obtain the images of particles in the 3D volume, a calibration process based on the modulation theorem is proposed to perfectly superimpose the two views in a common XYZ frame. The experimental set-up is applied to two-color hologram recording of moving non-calibrated opaque particles with an average diameter of about 150 μm. After processing the two-color holograms with image reconstruction and view calibration, the locations of the particles in the 3D volume can be obtained. In particular, the ambiguity about close particles, which generates hidden particles in a single-view scheme, can be removed to determine the exact number of particles in the region of interest.
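The angular spectrum transfer function mentioned above is the basis of the numerical reconstruction; a minimal, generic sketch of angular-spectrum propagation is given below, with illustrative wavelength, pixel pitch and distance values rather than the parameters of this set-up.

```python
# Minimal sketch of numerical reconstruction by angular-spectrum propagation,
# a standard approach in digital holography; parameters are illustrative.
import numpy as np

def angular_spectrum_propagate(field, wavelength, dx, z):
    """Propagate a complex field by distance z using the angular spectrum method."""
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=dx)                 # spatial frequencies (1/m)
    fy = np.fft.fftfreq(ny, d=dx)
    FX, FY = np.meshgrid(fx, fy)
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    arg = np.clip(arg, 0.0, None)                 # suppress evanescent components
    H = np.exp(1j * 2 * np.pi / wavelength * z * np.sqrt(arg))
    return np.fft.ifft2(np.fft.fft2(field) * H)

# Example: propagate a synthetic hologram field by 50 mm at 532 nm, 3.45 µm pixels
hologram = np.random.rand(512, 512).astype(complex)
image_plane = angular_spectrum_propagate(hologram, 532e-9, 3.45e-6, 0.05)
print(np.abs(image_plane).mean())
```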
Automation of a high-speed imaging setup for differential viscosity measurements
NASA Astrophysics Data System (ADS)
Hurth, C.; Duane, B.; Whitfield, D.; Smith, S.; Nordquist, A.; Zenhausern, F.
2013-12-01
We present the automation of a setup previously used to assess the viscosity of pleural effusion samples and discriminate between transudates and exudates, an important first step in clinical diagnostics. The presented automation includes the design, testing, and characterization of a vacuum-actuated loading station that handles the 2 mm glass spheres used as sensors, as well as the engineering of an electronic printed circuit board (PCB) incorporating a microcontroller and its synchronization with a commercial high-speed camera operating at 10 000 fps. The present work therefore focuses on the instrumentation-related automation efforts, as the general method and clinical application have been reported earlier [Hurth et al., J. Appl. Phys. 110, 034701 (2011)]. In addition, we validate the performance of the automated setup with the calibration for viscosity measurements using water/glycerol standard solutions and the determination of the viscosity of an "unknown" solution of hydroxyethyl cellulose.
A Quantitative Microbial Risk Assessment (QMRA) infrastructure that automates the manual process of characterizing transport of pathogens and microorganisms, from the source of release to a point of exposure, has been developed by loosely configuring a set of modules and process-...
Effect of Prior Aging on Fatigue Behavior of IM7/BMI 5250-4 Composite at 191 C
2007-06-01
[Extraction residue from the report front matter: a list-of-figures fragment citing "Figure 4. Three stages of fatigue life cycle for general material", a thermocouple calibration figure, and "Figure 17. Omega thermocouple reader setup", followed by a sentence fragment noting that assuring the long-term durability and structural integrity of HTPMC components requires reliable, experimentally based life prediction, with implications for cost and fleet readiness.]
Setup and evaluation of a sensor tilting system for dimensional micro- and nanometrology
NASA Astrophysics Data System (ADS)
Schuler, Alexander; Weckenmann, Albert; Hausotte, Tino
2014-06-01
Sensors in micro- and nanometrology reach their limits when the measurement objects and surfaces feature high aspect ratios, high curvature and steep surface angles. Their measurable surface angle is limited, and exceeding it leads to measurement deviations and undetectable surface points. We demonstrate a principle for adapting the sensor's working angle during the measurement, keeping the sensor at its optimal working angle. After simulation of the principle, a hardware prototype was realized. It is based on a rotary kinematic chain with two rotary degrees of freedom, which extends the measurable surface angle to ±90° and is combined with a nanopositioning and nanomeasuring machine. By applying a calibration procedure with a quasi-tactile 3D sensor based on electrical near-field interaction, the systematic position deviation of the kinematic chain is reduced. The paper shows for the first time the completed setup and integration of the prototype, the performance results of the calibration, the measurements with the prototype and the tilting principle, and finishes with the interpretation and feedback of the practical results.
2016 Research Outreach Program report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, Hye Young; Kim, Yangkyu
2016-10-13
This paper is the research activity report for a 4-week stay at LANL. Under the guidance of Dr. Lee, who performs nuclear physics research at LANSCE, LANL, I studied the Low Energy NZ (LENZ) setup and how to use it. First, I studied the LENZ chamber and Si detectors, and worked on detector calibrations using the software packages ROOT (the CERN-developed data analysis tool) and Excel (Microsoft Office software). I also performed calibration experiments that measure alpha particles emitted from a Th-229 source using an S1-type Si detector. Finally, with Dr. Lee, we checked the results.
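A typical detector energy calibration of the kind described, whether done in ROOT or a spreadsheet, is a linear fit of reference alpha energies against fitted peak channels; the sketch below illustrates this with placeholder values, not the actual Th-229 data.

```python
# Illustrative sketch of a linear energy calibration for a silicon detector;
# the peak channels and reference alpha energies below are placeholder values.
import numpy as np

peak_channels = np.array([1520.0, 1710.0, 1905.0, 2100.0])       # fitted peak centroids
peak_energies_kev = np.array([4845.0, 5423.0, 6038.0, 6680.0])   # assumed reference alphas

# Least-squares linear calibration: E [keV] = gain * channel + offset
gain, offset = np.polyfit(peak_channels, peak_energies_kev, 1)

def channel_to_energy(ch):
    return gain * ch + offset

print(f"gain = {gain:.3f} keV/ch, offset = {offset:.1f} keV")
print(f"channel 1800 -> {channel_to_energy(1800):.0f} keV")
```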
Medical-grade Sterilizable Target for Fluid-immersed Fetoscope Optical Distortion Calibration.
Nikitichev, Daniil I; Shakir, Dzhoshkun I; Chadebecq, François; Tella, Marcel; Deprest, Jan; Stoyanov, Danail; Ourselin, Sébastien; Vercauteren, Tom
2017-02-23
We have developed a calibration target for use with fluid-immersed endoscopes within the context of the GIFT-Surg (Guided Instrumentation for Fetal Therapy and Surgery) project. One of the aims of this project is to engineer novel, real-time image processing methods for intra-operative use in the treatment of congenital birth defects, such as spina bifida and the twin-to-twin transfusion syndrome. The developed target allows for the sterility-preserving optical distortion calibration of endoscopes within a few minutes. Good optical distortion calibration and compensation are important for mitigating undesirable effects like radial distortions, which not only hamper accurate imaging using existing endoscopic technology during fetal surgery, but also make acquired images less suitable for potentially very useful image computing applications, like real-time mosaicing. This paper proposes a novel fabrication method to create an affordable, sterilizable calibration target suitable for use in a clinical setup. This method involves etching a calibration pattern by laser cutting a sandblasted stainless steel sheet. This target was validated using the camera calibration module provided by OpenCV, a state-of-the-art software library popular in the computer vision community.
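Since the target was validated with OpenCV's camera calibration module, a minimal sketch of that workflow is given below; the chessboard pattern geometry, square size and image paths are assumptions for illustration and do not describe the actual laser-etched target layout.

```python
# Minimal sketch of optical distortion calibration with OpenCV's camera calibration module.
import cv2
import numpy as np
import glob

pattern_size = (9, 6)          # inner corners of the assumed chessboard pattern
square_size = 2.0              # mm, assumed square size

# 3D coordinates of the pattern corners in the target's own plane (z = 0)
objp = np.zeros((pattern_size[0] * pattern_size[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern_size[0], 0:pattern_size[1]].T.reshape(-1, 2) * square_size

obj_points, img_points, image_shape = [], [], None
for path in glob.glob("calib_images/*.png"):
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    image_shape = gray.shape[::-1]
    found, corners = cv2.findChessboardCorners(gray, pattern_size)
    if found:
        corners = cv2.cornerSubPix(
            gray, corners, (11, 11), (-1, -1),
            (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
        obj_points.append(objp)
        img_points.append(corners)

# Intrinsics and radial/tangential distortion coefficients
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(obj_points, img_points, image_shape, None, None)
print("reprojection RMS:", rms)
print("distortion coefficients:", dist.ravel())
```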
Simulating Visual Attention Allocation of Pilots in an Advanced Cockpit Environment
NASA Technical Reports Server (NTRS)
Frische, F.; Osterloh, J.-P.; Luedtke, A.
2011-01-01
This paper describes the results of experiments conducted with human line pilots and a cognitive pilot model during interaction with a new 4D Flight Management System (FMS). The aim of these experiments was to gather human pilot behavior data in order to calibrate the behavior of the model. Human behavior is mainly triggered by visual perception. Thus, the main aspect was to set up a profile of human pilots' visual attention allocation in a cockpit environment containing the new FMS. We first performed statistical analyses of eye-tracker data and then compared our results to common results of similar analyses in standard cockpit environments. The comparison showed a significant influence of the new system on the visual performance of human pilots. Furthermore, analyses of the pilot model's visual performance were performed. A comparison to human pilots' visual performance revealed important improvement potentials.
Henrion, Sebastian; Spoor, Cees W; Pieters, Remco P M; Müller, Ulrike K; van Leeuwen, Johan L
2015-07-07
Images of underwater objects are distorted by refraction at the water-glass-air interfaces and these distortions can lead to substantial errors when reconstructing the objects' position and shape. So far, aquatic locomotion studies have minimized refraction in their experimental setups and used the direct linear transform algorithm (DLT) to reconstruct position information, which does not model refraction explicitly. Here we present a refraction corrected ray-tracing algorithm (RCRT) that reconstructs position information using Snell's law. We validated this reconstruction by calculating 3D reconstruction error-the difference between actual and reconstructed position of a marker. We found that reconstruction error is small (typically less than 1%). Compared with the DLT algorithm, the RCRT has overall lower reconstruction errors, especially outside the calibration volume, and errors are essentially insensitive to camera position and orientation and the number and position of the calibration points. To demonstrate the effectiveness of the RCRT, we tracked an anatomical marker on a seahorse recorded with four cameras to reconstruct the swimming trajectory for six different camera configurations. The RCRT algorithm is accurate and robust and it allows cameras to be oriented at large angles of incidence and facilitates the development of accurate tracking algorithms to quantify aquatic manoeuvers.
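The refraction correction rests on applying Snell's law at each interface; a generic vector form of that refraction step is sketched below. It is an illustration of the underlying physics, not the authors' RCRT implementation.

```python
# Minimal sketch of refraction at a flat interface via Snell's law, the physical basis
# of refraction-corrected ray tracing.
import numpy as np

def refract(direction, normal, n1, n2):
    """Refract a unit ray direction at an interface with unit normal (medium n1 -> n2)."""
    d = direction / np.linalg.norm(direction)
    n = normal / np.linalg.norm(normal)
    cos_i = -np.dot(n, d)
    eta = n1 / n2
    k = 1.0 - eta ** 2 * (1.0 - cos_i ** 2)
    if k < 0:
        return None                      # total internal reflection
    return eta * d + (eta * cos_i - np.sqrt(k)) * n

# Ray travelling from air (n=1.0) into water (n=1.333) through a horizontal interface
ray = np.array([0.3, 0.0, -1.0])
print(refract(ray, np.array([0.0, 0.0, 1.0]), 1.0, 1.333))
```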
ALICE HLT Run 2 performance overview.
NASA Astrophysics Data System (ADS)
Krzewicki, Mikolaj; Lindenstruth, Volker;
2017-10-01
For the LHC Run 2 the ALICE HLT architecture was consolidated to comply with the upgraded ALICE detector readout technology. The software framework was optimized and extended to cope with the increased data load. Online calibration of the TPC using online tracking capabilities of the ALICE HLT was deployed. Offline calibration code was adapted to run both online and offline and the HLT framework was extended to support that. The performance of this schema is important for Run 3 related developments. An additional data transport approach was developed using the ZeroMQ library, forming at the same time a test bed for the new data flow model of the O2 system, where further development of this concept is ongoing. This messaging technology was used to implement the calibration feedback loop augmenting the existing, graph oriented HLT transport framework. Utilising the online reconstruction of many detectors, a new asynchronous monitoring scheme was developed to allow real-time monitoring of the physics performance of the ALICE detector, on top of the new messaging scheme for both internal and external communication. Spare computing resources comprising the production and development clusters are run as a tier-2 GRID site using an OpenStack-based setup. The development cluster is running continuously, the production cluster contributes resources opportunistically during periods of LHC inactivity.
NASA Astrophysics Data System (ADS)
Toulemonde, Pierre; Goujon, Céline; Laversenne, Laetitia; Bordet, Pierre; Bruyère, Rémy; Legendre, Murielle; Leynaud, Olivier; Prat, Alain; Mezouar, Mohamed
2014-04-01
We have developed a new laboratory experimental set-up to study in situ the pressure-temperature phase diagram of a given pure element or compound, its associated phase transitions, or the chemical reactions involved at high pressure and high temperature (HP-HT) between different solids and liquids. This new tool allows laboratory studies before conducting further detailed experiments using more brilliant synchrotron X-ray sources or before kinetic studies. This device uses the diffraction of X-rays produced by a quasi-monochromatic micro-beam source operating at the silver radiation (λ(Ag Kα1,2) ≈ 0.56 Å). The experimental set-up is based on a VX Paris-Edinburgh cell equipped with tungsten carbide or sintered diamond anvils and uses standard B-epoxy 5 or 7 mm gaskets. The diffracted signal coming from the compressed (and heated) sample is collected on an image plate. The pressure and temperature calibrations were performed by diffraction, using conventional calibrants (BN, NaCl and MgO) for determination of the pressure, and by crossing isochores of BN, NaCl, Cu or Au for the determination of the temperature. The first examples of studies performed with this new laboratory set-up are presented in the article: determination of the melting point of germanium and magnesium under HP-HT, synthesis of MgB2 or C-diamond and partial study of the P, T phase diagram of MgH2.
Setup and Calibration of SLAC's Peripheral Monitoring Stations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cooper, C.
2004-09-03
The goals of this project were to troubleshoot, repair, calibrate, and establish documentation regarding SLAC's (Stanford Linear Accelerator Center's) PMS (Peripheral Monitoring Station) system. The PMS system consists of seven PMSs that continuously monitor skyshine (neutron and photon) radiation levels in SLAC's environment. Each PMS consists of a boron trifluoride (BF3) neutron detector (model RS-P1-0802-104 or NW-G-20-12) and a Geiger Moeller (GM) gamma ray detector (model TGM N107 or LND 719) together with their respective electronics. Electronics for each detector are housed in Nuclear Instrument Modules (NIMs) and are plugged into a NIM bin in the station. All communication lines from the stations to the Main Control Center (MCC) were tested prior to troubleshooting. To test communication with MCC, a pulse generator (Systron Donner model 100C) was connected to each channel in the PMS and data at MCC was checked for consistency. If MCC displayed no data, the communication cables to MCC or the CAMAC (Computer Automated Measurement and Control) crates were in need of repair. If MCC did display data, then it was known that the communication lines were intact. All electronics from each station were brought into the lab for troubleshooting. Troubleshooting usually consisted of connecting an oscilloscope or scaler (Ortec model 871 or 775) at different points in the circuit of each detector to record simulated pulses produced by a pulse generator; the input and output pulses were compared to establish the location of any problems in the circuit. Once any problems were isolated, repairs were done accordingly. The detectors and electronics were then calibrated in the field using radioactive sources. Calibration is a process that determines the response of the detector. Detector response is defined as the ratio of the number of counts per minute interpreted by the detector to the amount of dose equivalent rate (in mrem per hour, either calculated or measured). Detector response for both detectors is dependent upon the energy of the incident radiation; this trend had to be accounted for in the calibration of the BF3 detector. Energy dependence did not have to be taken into consideration when calibrating the GM detectors since GM detector response is only dependent on radiation energy below 100 keV; SLAC only produces a spectrum of gamma radiation above 100 keV. For the GM detector, calibration consisted of bringing a 137Cs source and a NIST-calibrated RADCAL Radiation Monitor Controller (model 9010) out to the field; the absolute dose rate was determined by the RADCAL device while simultaneously irradiating the GM detector to obtain a scaler reading corresponding to counts per minute. Detector response was then calculated. Calibration of the BF3 detector was done using NIST certified neutron sources of known emission rates and energies. Five neutron sources (238PuBe, 238PuB, 238PuF4, 238PuLi and 252Cf) with different energies were used to account for the energy dependence of the response. The actual neutron dose rate was calculated by date-correcting NIST source data and considering the direct dose rate and scattered dose rate. Once the total dose rate (sum of the direct and scattered dose rates) was known, the response vs. energy curve was plotted. The first station calibrated (PMS6) was calibrated with these five neutron sources; all subsequent stations were calibrated with one neutron source and the energy dependence was assumed to be the same.
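Two of the calibration steps described above, date-correcting a NIST source for decay and forming the detector response as counts per minute per unit dose-equivalent rate, can be summarized as follows; the numerical values are placeholders, and the half-life shown is the approximate 252Cf value.

```python
# Illustrative sketch (not the SLAC procedure itself) of date-correcting a neutron
# source emission rate and computing a detector response.
import math

def decay_corrected_emission(emission_at_cal, years_elapsed, half_life_years):
    """Neutron emission rate (n/s) corrected from the NIST calibration date to today."""
    return emission_at_cal * math.exp(-math.log(2) * years_elapsed / half_life_years)

def detector_response(counts_per_minute, dose_rate_mrem_per_hr):
    """Response in (counts/min) per (mrem/h), as defined in the calibration write-up."""
    return counts_per_minute / dose_rate_mrem_per_hr

# Example with placeholder numbers: a 252Cf source (half-life ~2.65 y) calibrated 3 y ago
emission_now = decay_corrected_emission(1.0e7, 3.0, 2.65)
print(f"decay-corrected emission: {emission_now:.3e} n/s")
print(f"response: {detector_response(1200.0, 2.5):.0f} cpm per mrem/h")
```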
NASA Astrophysics Data System (ADS)
Gobrecht, Alexia; Bendoula, Ryad; Roger, Jean-Michel; Bellon-Maurel, Véronique
2014-05-01
Visible-near-infrared spectroscopy (Vis-NIRS) is now commonly used to measure different physical and chemical parameters of soils, including carbon content. However, prediction model accuracy is insufficient for Vis-NIRS to replace routine laboratory analysis. One of the biggest issues this technique faces is light scattering by soil particles. It causes departures from the assumed linear relationship between the absorbance spectrum and the concentration of the chemicals of interest stated by the Beer-Lambert law, which underpins the calibration models. It therefore becomes essential to improve the metrological quality of the measured signal in order to optimize calibration, as light-matter interactions are at the basis of the resulting linear modeling. Optics can help to mitigate scattering effects on the signal. We put forward a new optical setup coupling linearly polarized light with a Vis-NIR spectrometer to free the measured spectra from multiple-scattering effects. The corrected measured spectrum was then used to compute an absorbance spectrum of the sample, using Dahm's equation in the framework of the Representative Layer Theory. This method had previously been tested and validated on liquid (milk + dye) and powdered (sand + dye) samples showing scattering (and absorbing) properties. The obtained absorbance was a very good approximation of the Beer-Lambert absorbance. Here, we tested the method on a set of 54 soil samples to predict soil organic carbon content. In order to assess the signal quality improvement provided by this method, we built and compared calibration models using the partial least squares (PLS) algorithm. The prediction model built from the new absorbance spectrum outperformed the model built with the classical absorbance traditionally obtained with Vis-NIR diffuse reflectance. This study is a good illustration of the strong influence of signal quality on prediction model performance.
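The calibration models mentioned above are PLS regressions of a property of interest on the spectra; the sketch below shows such a model with cross-validated statistics, using synthetic spectra in place of the 54 soil samples.

```python
# Minimal sketch of a PLS calibration model with synthetic stand-in data.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import r2_score, mean_squared_error

rng = np.random.default_rng(0)
n_samples, n_wavelengths = 54, 200
X = rng.normal(size=(n_samples, n_wavelengths))           # stand-in absorbance spectra
y = X[:, 40] * 0.8 + X[:, 120] * 0.5 + rng.normal(scale=0.1, size=n_samples)  # stand-in SOC

pls = PLSRegression(n_components=5)
y_cv = cross_val_predict(pls, X, y, cv=10).ravel()

print(f"R2 (cross-validated): {r2_score(y, y_cv):.2f}")
print(f"RMSE (cross-validated): {mean_squared_error(y, y_cv) ** 0.5:.3f}")
```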
Development and implementation of an EPID-based method for localizing isocenter.
Hyer, Daniel E; Mart, Christopher J; Nixon, Earl
2012-11-08
The aim of this study was to develop a phantom and analysis software that could be used to quickly and accurately determine the location of radiation isocenter to an accuracy of less than 1 mm using the EPID (electronic portal imaging device). The proposed solution uses a collimator setting of 10 × 10 cm2 to acquire EPID images of a new phantom constructed from LEGO blocks. Images from a number of gantry and collimator angles are analyzed by automated analysis software to determine the position of the jaws and the center of the phantom in each image. The distance between a chosen jaw and the phantom center is then compared to the same distance measured after a 180° collimator rotation to determine if the phantom is centered in the dimension being investigated. Repeated tests show that the results are reproducible and independent of the imaging session, and that the calculated offsets of the phantom from radiation isocenter are a function of phantom setup only. The accuracy of the algorithm's calculated offsets was verified by imaging the LEGO phantom before and after applying the calculated offset. These measurements show that the offsets are predicted with an accuracy of approximately 0.3 mm, which is on the order of the detector's pitch. Comparison with a star-shot analysis yielded agreement of isocenter location within 0.5 mm. Additionally, the phantom and software are completely independent of linac vendor, and this study presents results from two linac manufacturers. A Varian Optical Guidance Platform (OGP) calibration array was also integrated into the phantom to allow calibration of the OGP while the phantom is positioned at radiation isocenter, reducing setup uncertainty in the calibration. This solution offers a quick, objective method to perform isocenter localization as well as laser alignment and OGP calibration on a monthly basis.
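The centering test described above reduces to a simple difference: if the jaw-to-centre distance is measured at collimator 0° and 180°, the phantom offset along that direction is half the difference. A sketch with placeholder values:

```python
# Illustrative sketch of the collimator-rotation trick described above; values are placeholders.

def phantom_offset_mm(d_collimator_0_mm, d_collimator_180_mm):
    """Offset of the phantom centre from the rotation axis in the investigated direction."""
    return 0.5 * (d_collimator_0_mm - d_collimator_180_mm)

# Example: jaw-to-centre distance of 50.6 mm at 0 degrees and 49.8 mm at 180 degrees
print(f"offset = {phantom_offset_mm(50.6, 49.8):.2f} mm")   # 0.40 mm shift to correct
```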
Performance of Different Light Sources for the Absolute Calibration of Radiation Thermometers
NASA Astrophysics Data System (ADS)
Martín, M. J.; Mantilla, J. M.; del Campo, D.; Hernanz, M. L.; Pons, A.; Campos, J.
2017-09-01
The evolving mise en pratique for the definition of the kelvin (MeP-K) [1, 2] will, in its forthcoming edition, encourage the realization and dissemination of the thermodynamic temperature either directly (primary thermometry) or indirectly (relative primary thermometry) via fixed points with assigned reference thermodynamic temperatures. In recent years, the Centro Español de Metrología (CEM), in collaboration with the Instituto de Óptica of Consejo Superior de Investigaciones Científicas (IO-CSIC), has developed several setups for absolute calibration of standard radiation thermometers using the radiance method, to allow CEM to directly disseminate the thermodynamic temperature and to assign thermodynamic temperatures to several fixed points. Different calibration facilities based on a monochromator and/or a laser and an integrating sphere have been developed to calibrate CEM's standard radiation thermometers (KE-LP2 and KE-LP4) and filter radiometer (FIRA2). The system is based on the one described in [3], installed at IO-CSIC. Different light sources have been tried and tested for measuring absolute spectral radiance responsivity: a Xe-Hg 500 W lamp, a supercontinuum laser NKT SuperK-EXR20 and a diode laser emitting at 647.3 nm with a typical maximum power of 120 mW. Their advantages and disadvantages have been studied, such as sensitivity to interference generated by the laser inside the filter, the stability of the flux generated by the radiant sources, and so forth. This paper describes the setups used, the uncertainty budgets and the results obtained for the absolute temperatures of the Cu, Co-C, Pt-C and Re-C fixed points, measured with the three thermometers with central wavelengths around 650 nm.
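The final step of the radiance method is inverting Planck's law at the thermometer's effective wavelength to obtain a thermodynamic temperature from a measured spectral radiance. The single-wavelength sketch below is a simplification for illustration; a real calibration integrates over the instrument's spectral responsivity.

```python
# Minimal sketch: invert Planck's law for temperature at a single effective wavelength.
import math

H = 6.62607015e-34      # Planck constant, J s
C = 299792458.0         # speed of light, m/s
KB = 1.380649e-23       # Boltzmann constant, J/K
C1L = 2 * H * C ** 2    # first radiation constant for spectral radiance
C2 = H * C / KB         # second radiation constant, m K

def planck_radiance(wavelength_m, temperature_k):
    """Spectral radiance (W m^-3 sr^-1) of a blackbody at the given wavelength."""
    return C1L / wavelength_m ** 5 / math.expm1(C2 / (wavelength_m * temperature_k))

def temperature_from_radiance(wavelength_m, radiance):
    """Invert Planck's law for temperature at a single effective wavelength."""
    return C2 / (wavelength_m * math.log(1.0 + C1L / (wavelength_m ** 5 * radiance)))

# Round trip at 650 nm near the copper freezing point (~1357.77 K)
L = planck_radiance(650e-9, 1357.77)
print(f"{temperature_from_radiance(650e-9, L):.2f} K")
```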
A methodology to develop computational phantoms with adjustable posture for WBC calibration
NASA Astrophysics Data System (ADS)
Ferreira Fonseca, T. C.; Bogaerts, R.; Hunt, John; Vanhavere, F.
2014-11-01
A Whole Body Counter (WBC) is a facility to routinely assess the internal contamination of exposed workers, especially in the case of radiation release accidents. The calibration of the counting device is usually done by using anthropomorphic physical phantoms representing the human body. Due to such a challenge of constructing representative physical phantoms a virtual calibration has been introduced. The use of computational phantoms and the Monte Carlo method to simulate radiation transport have been demonstrated to be a worthy alternative. In this study we introduce a methodology developed for the creation of realistic computational voxel phantoms with adjustable posture for WBC calibration. The methodology makes use of different software packages to enable the creation and modification of computational voxel phantoms. This allows voxel phantoms to be developed on demand for the calibration of different WBC configurations. This in turn helps to study the major source of uncertainty associated with the in vivo measurement routine which is the difference between the calibration phantoms and the real persons being counted. The use of realistic computational phantoms also helps the optimization of the counting measurement. Open source codes such as MakeHuman and Blender software packages have been used for the creation and modelling of 3D humanoid characters based on polygonal mesh surfaces. Also, a home-made software was developed whose goal is to convert the binary 3D voxel grid into a MCNPX input file. This paper summarizes the development of a library of phantoms of the human body that uses two basic phantoms called MaMP and FeMP (Male and Female Mesh Phantoms) to create a set of male and female phantoms that vary both in height and in weight. Two sets of MaMP and FeMP phantoms were developed and used for efficiency calibration of two different WBC set-ups: the Doel NPP WBC laboratory and AGM laboratory of SCK-CEN in Mol, Belgium.
Detailed field test of yaw-based wake steering
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fleming, Paul; Churchfield, Matt; Scholbrock, Andrew
This study describes a detailed field-test campaign to investigate yaw-based wake steering. In yaw-based wake steering, an upstream turbine intentionally misaligns its yaw with respect to the inflow to deflect its wake away from a downstream turbine, with the goal of increasing total power production. In the first phase, a nacelle-mounted scanning lidar was used to verify wake deflection of a misaligned turbine and calibrate wake deflection models. In the second phase, these models were used within a yaw controller to achieve a desired wake deflection. This paper details the experimental design and setup. Lastly, all data collected as part of this field experiment will be archived and made available to the public via the U.S. Department of Energy's Atmosphere to Electrons Data Archive and Portal.
NASA Astrophysics Data System (ADS)
Bouaziz, Laurène; Hegnauer, Mark; Schellekens, Jaap; Sperna Weiland, Frederiek; ten Velden, Corine
2017-04-01
In many countries, data is scarce, incomplete and often not easily shared. In these cases, global satellite and reanalysis data provide an alternative to assess water resources. To assess water resources in Azerbaijan, a completely distributed and physically based hydrological wflow-sbm model was set-up for the entire Kura basin. We used SRTM elevation data, a locally available river map and one from OpenStreetMap to derive the drainage direction network at the model resolution of approximately 1x1 km. OpenStreetMap data was also used to derive the fraction of paved area per cell to account for the reduced infiltration capacity (c.f. Schellekens et al. 2014). We used the results of a global study to derive root zone capacity based on climate data (Wang-Erlandsson et al., 2016). To account for the variation in vegetation cover over the year, monthly averages of Leaf Area Index, based on MODIS data, were used. For the soil-related parameters, we used global estimates as provided by Dai et al. (2013). This enabled the rapid derivation of a first estimate of parameter values for our hydrological model. Digitized local meteorological observations were scarce and available only for limited time period. Therefore several sources of global meteorological data were evaluated: (1) EU-WATCH global precipitation, temperature and derived potential evaporation for the period 1958-2001 (Harding et al., 2011), (2) WFDEI precipitation, temperature and derived potential evaporation for the period 1979-2014 (by Weedon et al., 2014), (3) MSWEP precipitation (Beck et al., 2016) and (4) local precipitation data from more than 200 stations in the Kura basin were available from the NOAA website for a period up to 1991. The latter, together with data archives from Azerbaijan, were used as a benchmark to evaluate the global precipitation datasets for the overlapping period 1958-1991. By comparing the datasets, we found that monthly mean precipitation of EU-WATCH and WFDEI coincided well with NOAA stations and that MSWEP slightly overestimated precipitation amounts. On a daily basis, there were discrepancies in the peak timing and magnitude between measured precipitation and the global products. A bias between EU-WATCH and WFDEI temperature and potential evaporation was observed and to model the water balance correctly, it was needed to correct EU-WATCH to WFDEI mean monthly values. Overall, the available sources enabled rapid set-up of a hydrological model including the forcing of the model with a relatively good performance to assess water resources in Azerbaijan with a limited calibration effort and allow for a similar set-up anywhere in the world. Timing and quantification of peak volume remains a weakness in global data, making it difficult to be used for some applications (flooding) and for detailed calibration. Selecting and comparing different sources of global meteorological data is important to have a reliable set which improves model performance. - Beck et al., 2016. MSWEP: 3-hourly 0.25° global gridded precipitation (1979-2014) by merging gauge, satellite, and reanalysis data. Hydrol. Earth Syst. Sci. Discuss. - Dai Y. et al. ,2013. Development of a China Dataset of Soil Hydraulic Parameters Using Pedotransfer Functions for Land Surface Modeling. Journal of Hydrometeorology - Harding, R. et al., 2011., WATCH: Current knowledge of the Terrestrial global water cycle, J. Hydrometeorol. - Schellekens, J. et al., 2014. 
Rapid setup of hydrological and hydraulic models using OpenStreetMap and the SRTM derived digital elevation model. Environmental Modelling & Software - Wang-Erlandsson L. et al., 2016. Global Root Zone Storage Capacity from Satellite-Based Evaporation. Hydrology and Earth System Sciences - Weedon, G. et al., 2014. The WFDEI meteorological forcing data set: WATCH Forcing Data methodology applied to ERA-Interim reanalysis data, Water Resources Research.
Formation of algae growth constitutive relations for improved algae modeling.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gharagozloo, Patricia E.; Drewry, Jessica Louise.
This SAND report summarizes research conducted as a part of a two year Laboratory Directed Research and Development (LDRD) project to improve our abilities to model algal cultivation. Algae-based biofuels have generated much excitement due to their potentially large oil yield from relatively small land use and without interfering with the food or water supply. Algae mitigate atmospheric CO2 through metabolism. Efficient production of algal biofuels could reduce dependence on foreign oil by providing a domestic renewable energy source. Important factors controlling algal productivity include temperature, nutrient concentrations, salinity, pH, and the light-to-biomass conversion rate. Computational models allow for inexpensive predictions of algae growth kinetics in these non-ideal conditions for various bioreactor sizes and geometries without the need for multiple expensive measurement setups. However, these models need to be calibrated for each algal strain. In this work, we conduct a parametric study of key marine algae strains and apply the findings to a computational model.
Roussy, Georges; Dichtel, Bernard; Chaabane, Haykel
2003-01-01
By using a new integrated circuit, which is marketed for Bluetooth applications, it is possible to simplify the method of measuring the complex impedance, complex reflection coefficient and complex transmission coefficient in an industrial microwave setup. The Analog Devices circuit AD 8302, which measures gain and phase up to 2.7 GHz, operates with variable-level input signals and is less sensitive to both amplitude and frequency fluctuations of industrial magnetrons than are mixers and AM crystal detectors. Therefore, accurate gain and phase measurements can be performed with low-stability generators. A mechanical setup with an AD 8302 is described; the calibration procedure and its performance are presented.
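A gain-and-phase reading can be turned into a complex reflection coefficient (and, if desired, an impedance) as sketched below; the coupler calibration is ignored and the example readings are assumptions, not values from the paper.

```python
# Illustrative sketch of turning gain/phase readings (as delivered by a gain-and-phase
# detector such as the AD8302) into a complex reflection coefficient.
import cmath

def reflection_coefficient(gain_db, phase_deg):
    """Complex ratio of reflected to incident wave from magnitude (dB) and phase (deg)."""
    magnitude = 10 ** (gain_db / 20.0)
    return cmath.rect(magnitude, cmath.pi * phase_deg / 180.0)

def impedance_from_gamma(gamma, z0=50.0):
    """Load impedance implied by a reflection coefficient referenced to Z0."""
    return z0 * (1 + gamma) / (1 - gamma)

gamma = reflection_coefficient(-9.5, 62.0)    # example detector readings
print(f"|Gamma| = {abs(gamma):.3f}, Z = {impedance_from_gamma(gamma):.1f} ohms")
```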
A novel method to calibrate DOI function of a PET detector with a dual-ended-scintillator readout.
Shao, Yiping; Yao, Rutao; Ma, Tianyu
2008-12-01
The detection of depth-of-interaction (DOI) is a critical detector capability to improve the PET spatial resolution uniformity across the field-of-view and will significantly enhance, in particular, small bore system performance for brain, breast, and small animal imaging. One promising technique of DOI detection is to use dual-ended-scintillator readout that uses two photon sensors to detect scintillation light from both ends of a scintillator array and estimate DOI based on the ratio of signals (similar to Anger logic). This approach needs a careful DOI function calibration to establish accurate relationship between DOI and signal ratios, and to recalibrate if the detection condition is shifted due to the drift of sensor gain, bias variations, or degraded optical coupling, etc. However, the current calibration method that uses coincident events to locate interaction positions inside a single scintillator crystal has severe drawbacks, such as complicated setup, long and repetitive measurements, and being prone to errors from various possible misalignments among the source and detector components. This method is also not practically suitable to calibrate multiple DOI functions of a crystal array. To solve these problems, a new method has been developed that requires only a uniform flood source to irradiate a crystal array without the need to locate the interaction positions, and calculates DOI functions based solely on the uniform probability distribution of interactions over DOI positions without knowledge or assumption of detector responses. Simulation and experiment have been studied to validate the new method, and the results show that the new method, with a simple setup and one single measurement, can provide consistent and accurate DOI functions for the entire array of multiple scintillator crystals. This will enable an accurate, simple, and practical DOI function calibration for the PET detectors based on the design of dual-ended-scintillator readout. In addition, the new method can be generally applied to calibrating other types of detectors that use the similar dual-ended readout to acquire the radiation interaction position.
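The flood-source calibration exploits the fact that, for a uniform depth distribution, the empirical cumulative distribution of the two-end signal ratio itself maps ratio to depth. The sketch below illustrates that statistical idea with synthetic data; it omits the per-crystal handling and any weighting used in the actual method.

```python
# Simplified sketch of the statistical idea behind flood-source DOI calibration.
import numpy as np

def build_doi_lut(ratios, crystal_length_mm, n_bins=64):
    """Return ratio bin edges and the DOI assigned to each bin via the empirical CDF."""
    ratios = np.sort(np.asarray(ratios))
    edges = np.quantile(ratios, np.linspace(0.0, 1.0, n_bins + 1))
    doi = np.linspace(0.0, crystal_length_mm, n_bins + 1)
    return edges, doi

def ratio_to_doi(r, edges, doi):
    """Interpolate a measured ratio onto the calibrated depth axis."""
    return np.interp(r, edges, doi)

# Example with synthetic data: ratio R = (S1 - S2) / (S1 + S2) from a flood irradiation
rng = np.random.default_rng(1)
true_depth = rng.uniform(0.0, 20.0, 100000)               # mm, uniform by assumption
ratio = np.tanh((true_depth - 10.0) / 8.0) + rng.normal(0, 0.02, true_depth.size)
edges, doi = build_doi_lut(ratio, 20.0)
print(ratio_to_doi([-0.5, 0.0, 0.5], edges, doi).round(1))
```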
Feaster, Toby D.; Benedict, Stephen T.; Clark, Jimmy M.; Bradley, Paul M.; Conrads, Paul
2014-01-01
As part of an ongoing effort by the U.S. Geological Survey to expand the understanding of relations among hydrologic, geochemical, and ecological processes that affect fish-tissue mercury concentrations within the Edisto River Basin, analyses and simulations of the hydrology of the Edisto River Basin were made using the topography-based hydrological model (TOPMODEL). A primary focus of the investigation was to assess the potential for scaling up a previous application of TOPMODEL for the McTier Creek watershed, which is a small headwater catchment within the Edisto River Basin. Scaling up was done in a step-wise manner, beginning with applying the calibration parameters, meteorological data, and topographic-wetness-index data from the McTier Creek TOPMODEL to the Edisto River TOPMODEL. Additional changes were made for subsequent simulations, culminating in the best simulation, which included meteorological and topographic-wetness-index data from the Edisto River Basin and updated values for some of the TOPMODEL calibration parameters. The scaling-up process resulted in nine simulations being made. Simulation 7 best matched the streamflows at station 02175000, Edisto River near Givhans, SC, which was the downstream limit for the TOPMODEL setup, and was obtained by adjusting the scaling factor, including streamflow routing, and using NEXRAD precipitation data for the Edisto River Basin. The Nash-Sutcliffe coefficient of model-fit efficiency and Pearson's correlation coefficient for simulation 7 were 0.78 and 0.89, respectively. Comparison of goodness-of-fit statistics between measured and simulated daily mean streamflow for the McTier Creek and Edisto River models showed that with calibration, the Edisto River TOPMODEL produced slightly better results than the McTier Creek model, despite the substantial difference in the drainage-area size at the outlet locations for the two models (30.7 and 2,725 square miles, respectively). Along with the TOPMODEL hydrologic simulations, a visualization tool (the Edisto River Data Viewer) was developed to help assess trends and influencing variables in the stream ecosystem. Incorporated into the visualization tool were the water-quality load models TOPLOAD, TOPLOAD–H, and LOADEST. Because the focus of this investigation was on scaling up the models from McTier Creek, water-quality concentrations that were previously collected in the McTier Creek Basin were used in the water-quality load models.
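For reference, the two goodness-of-fit statistics quoted above can be computed from paired daily streamflow series as in the generic sketch below; the arrays are hypothetical placeholders and this is not the USGS model code.

```python
import numpy as np

def nash_sutcliffe(observed, simulated):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit, 0 means no better than the observed mean."""
    observed = np.asarray(observed, dtype=float)
    simulated = np.asarray(simulated, dtype=float)
    return 1.0 - np.sum((observed - simulated) ** 2) / np.sum((observed - observed.mean()) ** 2)

def pearson_r(observed, simulated):
    """Pearson's correlation coefficient between observed and simulated streamflow."""
    return np.corrcoef(observed, simulated)[0, 1]

# Hypothetical daily mean streamflows (cubic feet per second), for illustration only
q_obs = np.array([120.0, 135.0, 150.0, 110.0, 98.0, 104.0])
q_sim = np.array([115.0, 140.0, 143.0, 118.0, 102.0, 99.0])
print(nash_sutcliffe(q_obs, q_sim), pearson_r(q_obs, q_sim))
```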
NASA Astrophysics Data System (ADS)
Emanuelsson, B. D.; Baisden, W. T.; Bertler, N. A. N.; Keller, E. D.; Gkinis, V.
2014-12-01
Here we present an experimental setup for water stable isotopes (δ18O and δD) continuous flow measurements. It is the first continuous flow laser spectroscopy system that uses Off-Axis Integrated Cavity Output Spectroscopy (OA-ICOS; analyzer manufactured by Los Gatos Research - LGR) in combination with an evaporation unit to continuously analyze sample from an ice core. A Water Vapor Isotopic Standard Source (WVISS) calibration unit, manufactured by LGR, was modified to: (1) increase the temporal resolution by reducing the response time, (2) enable measurements on several water standards, and (3) reduce the influence from memory effects. While this setup was designed for the Continuous Flow Analysis (CFA) of ice cores, it can also continuously analyze other liquid or vapor sources. The modified setup provides a shorter response time (~54 and 18 s for the 2013 and 2014 setups, respectively) compared to the original WVISS unit (~62 s), which is an improvement in measurement resolution. Another improvement compared to the original WVISS is that the modified setup has a reduced memory effect. Stability tests comparing the modified WVISS and WVISS setups were performed and Allan deviations (σAllan) were calculated to determine precision at different averaging times. For the 2013 modified setup the precision after integration times of 10³ s is 0.060 and 0.070‰ for δ18O and δD, respectively. For the 2014 modified setup the corresponding σAllan values are 0.030, 0.060 and 0.043‰ for δ18O, δD and δ17O, respectively. For the WVISS setup the precision is 0.035, 0.070 and 0.042‰ after 10³ s for δ18O, δD and δ17O, respectively. Both the modified setups and the WVISS setup are influenced by instrumental drift, with δ18O being more drift sensitive than δD. The σAllan values for δ18O are 0.30 and 0.18‰ for the modified (2013) and WVISS setups, respectively, after averaging times of 10⁴ s (2.78 h). The Isotopic Water Analyzer (IWA)-modified WVISS setup used during the 2013 Roosevelt Island Climate Evolution (RICE) ice core processing campaign achieved high precision measurements, in particular for δD, with high temporal resolution for the upper part of the core, where a seasonally resolved isotopic signal is preserved.
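The Allan deviations quoted above can be reproduced from a measured isotope time series with a short routine such as the one below; this is a generic non-overlapping Allan deviation sketch, and the sampling interval, series length, and noise level are assumed values.

```python
import numpy as np

def allan_deviation(y, dt, tau):
    """Non-overlapping Allan deviation of series y (sampled every dt seconds) at averaging time tau."""
    m = int(round(tau / dt))                      # samples per averaging block
    n_blocks = len(y) // m
    block_means = np.asarray(y[:n_blocks * m], dtype=float).reshape(n_blocks, m).mean(axis=1)
    return np.sqrt(0.5 * np.mean(np.diff(block_means) ** 2))

# Illustration: synthetic delta-18O anomalies (per mil) sampled at 1 Hz, evaluated at tau = 10^3 s
rng = np.random.default_rng(1)
d18o = rng.normal(0.0, 0.5, 50_000)
print(allan_deviation(d18o, dt=1.0, tau=1e3))
```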
NASA Astrophysics Data System (ADS)
Keawprasert, T.; Anhalt, K.; Taubert, D. R.; Sperling, A.; Schuster, M.; Nevas, S.
2013-09-01
An LP3 radiation thermometer was absolutely calibrated at a newly developed monochromator-based set-up and the TUneable Lasers in Photometry (TULIP) facility of PTB in the wavelength range from 400 nm to 1100 nm. At both facilities, the spectral radiation of the respective sources irradiates an integrating sphere, thus generating uniform radiance across its precision aperture. The spectral irradiance of the integrating sphere is determined via the effective area of a precision aperture and a Si trap detector, traceable to the primary cryogenic radiometer of PTB. Due to the limited output power from the monochromator, the absolute calibration was performed with a measurement uncertainty of 0.17 % (k = 1), while the respective uncertainty at the TULIP facility is 0.14 %. Calibration results obtained at the two facilities were compared in terms of spectral radiance responsivity, effective wavelength and integral responsivity. It was found that the measurement results in integral responsivity at both facilities agree within the expanded uncertainty (k = 2). To verify the calibration accuracy, the absolutely calibrated radiation thermometer was used to measure the thermodynamic freezing temperature of the PTB gold fixed-point blackbody.
Scanning microwave microscopy applied to semiconducting GaAs structures
NASA Astrophysics Data System (ADS)
Buchter, Arne; Hoffmann, Johannes; Delvallée, Alexandra; Brinciotti, Enrico; Hapiuk, Dimitri; Licitra, Christophe; Louarn, Kevin; Arnoult, Alexandre; Almuneau, Guilhem; Piquemal, François; Zeier, Markus; Kienberger, Ferry
2018-02-01
A calibration algorithm based on one-port vector network analyzer (VNA) calibration for scanning microwave microscopes (SMMs) is presented and used to extract quantitative carrier densities from a semiconducting n-doped GaAs multilayer sample. This robust and versatile algorithm is instrument and frequency independent, as we demonstrate by analyzing experimental data from two different, cantilever- and tuning fork-based, microscope setups operating in a wide frequency range up to 27.5 GHz. To benchmark the SMM results, comparison with secondary ion mass spectrometry is undertaken. Furthermore, we show SMM data on a GaAs p-n junction distinguishing p- and n-doped layers.
NASA Astrophysics Data System (ADS)
Wong, Pak-kin; Vong, Chi-man; Wong, Hang-cheong; Li, Ke
2010-05-01
Modern automotive spark-ignition (SI) engine power performance usually refers to output power and torque, and these are significantly affected by the setup of control parameters in the engine management system (EMS). EMS calibration is done empirically through tests on the dynamometer (dyno) because no exact mathematical engine model is yet available. With the emerging nonlinear function estimation technique of least squares support vector machines (LS-SVM), an approximate power performance model of an SI engine can be determined by training on sample data acquired from the dyno. A novel incremental algorithm based on the typical LS-SVM is also proposed in this paper, so that the power performance models built with the incremental LS-SVM can be updated whenever new training data arrive. By updating the models, the model accuracy can be continuously increased. The predictions of the models estimated with the incremental LS-SVM are in good agreement with the actual test results and have almost the same average accuracy as models retrained from scratch, but the incremental algorithm can significantly shorten the model construction time when new training data arrive.
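For readers unfamiliar with LS-SVM function estimation, a batch solution reduces to a single linear system in the dual variables, as in the minimal NumPy sketch below (RBF kernel, standardized inputs). The engine data are hypothetical, and the paper's incremental update of this system as new dyno samples arrive is not reproduced here.

```python
import numpy as np

def rbf_kernel(X1, X2, sigma=1.0):
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def lssvm_train(X, y, gamma=10.0, sigma=1.0):
    """Solve the LS-SVM dual system [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; y]."""
    n = len(y)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = rbf_kernel(X, X, sigma) + np.eye(n) / gamma
    sol = np.linalg.solve(A, np.concatenate(([0.0], y)))
    return sol[0], sol[1:]                         # bias b, dual weights alpha

def lssvm_predict(X_train, alpha, b, X_new, sigma=1.0):
    return rbf_kernel(X_new, X_train, sigma) @ alpha + b

# Hypothetical dyno samples: (engine speed rpm, ignition advance deg) -> brake torque (N m)
X = np.array([[2000, 10.0], [3000, 15.0], [4000, 20.0], [5000, 25.0], [6000, 28.0]])
y = np.array([150.0, 180.0, 192.0, 185.0, 170.0])
Xs = (X - X.mean(axis=0)) / X.std(axis=0)          # standardize features before the RBF kernel
b, alpha = lssvm_train(Xs, y)
x_query = (np.array([[3500, 18.0]]) - X.mean(axis=0)) / X.std(axis=0)
print(lssvm_predict(Xs, alpha, b, x_query))
```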
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pedersen, K; Irwin, J; Sansourekidou, P
Purpose: To investigate the impact of the treatment table on skin dose for prone breast patients for whom the breast contacts the table, and to develop a method to decrease skin dose. Methods: We used a 12 cm stack of 15 cm × 15 cm solid water slabs to imitate the breast. Calibrated EBT3 radiochromic film was affixed to the bottom of the phantom. Treatments for 32 patients were analyzed to determine typical prone breast beam parameters. Based on the analysis, a field size and a range of gantry angles were chosen for the test beams. Three experimental setups were used. The first represented the patient setup currently used in our clinics, with the phantom directly on the table. The second was the skin sparing setup, with a 1.5 cm Styrofoam slab between the phantom and the table. The third used a 7.5 cm Styrofoam slab to examine the extent of skin sparing potential. The calibration curve was applied to each film to determine dose. Percent difference in dose between the current and skin sparing setups was calculated for each gantry angle and gantry angle pair. Results: Data showed that beams entering through the table showed a skin dose decrease ranging from 13%–30% with the addition of 7.5 cm Styrofoam, while beams exiting through the table showed no significant difference. The addition of 1.5 cm Styrofoam resulted in differences ranging from 0.5%–13% with the skin sparing setup. Conclusion: The results demonstrate that skin in contact with the table receives increased dose from beams entering through the table. By creating separation between the breast and the table with Styrofoam the skin dose can be lowered, but 1.5 cm did not fully mitigate the effect. Further investigation will be performed to identify a clinically practical thickness that maximizes this mitigation.
NASA Technical Reports Server (NTRS)
Pagnutti, Mary; Ryan, Robert E.; Holekamp, Kara; Harrington, Gary; Frisbie, Troy
2006-01-01
A simple, cost-effective hyperspectral sun photometer for radiometric vicarious remote sensing system calibration, air quality monitoring, and potentially in-situ planetary climatological studies was developed. The device was constructed solely from off-the-shelf components and was designed to be easily deployable in support of short-term verification and validation data collections. This sun photometer not only provides the same data products as existing multi-band sun photometers but also offers the potential for hyperspectral optical depth and diffuse-to-global products. Compared to traditional sun photometers, this device requires a simpler setup and less data acquisition time, and allows for a more direct calibration approach. Fielding this instrument has also enabled Stennis Space Center (SSC) Applied Sciences Directorate personnel to cross-calibrate existing sun photometers. This innovative research will position SSC personnel to perform air quality assessments in support of the NASA Applied Sciences Program's National Applications program element as well as to develop techniques to evaluate aerosols in a Martian or other planetary atmosphere.
NASA Astrophysics Data System (ADS)
Ströhl, Florian; Wong, Hovy H. W.; Holt, Christine E.; Kaminski, Clemens F.
2018-01-01
Fluorescence anisotropy imaging microscopy (FAIM) measures the depolarization properties of fluorophores to deduce molecular changes in their environment. For successful FAIM, several design principles have to be considered and a thorough system-specific calibration protocol is paramount. One important calibration parameter is the G factor, which describes the system-induced errors for different polarization states of light. The determination and calibration of the G factor is discussed in detail in this article. We present a novel measurement strategy, which is particularly suitable for FAIM with high numerical aperture objectives operating in TIRF illumination mode. The method makes use of evanescent fields that excite the sample with a polarization direction perpendicular to the image plane. Furthermore, we have developed an ImageJ/Fiji plugin, AniCalc, for FAIM data processing. We demonstrate the capabilities of our TIRF-FAIM system by measuring β-actin polymerization in human embryonic kidney cells and in retinal neurons.
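As background on where the G factor enters, steady-state anisotropy is computed from the parallel and perpendicular emission intensities with G correcting the detection-path bias. The sketch below applies the standard textbook formula with assumed intensity and G values; it is not the authors' TIRF-specific calibration.

```python
def anisotropy(i_parallel, i_perpendicular, g_factor=1.0):
    """Steady-state fluorescence anisotropy r, with the G factor correcting the perpendicular channel."""
    return (i_parallel - g_factor * i_perpendicular) / (i_parallel + 2.0 * g_factor * i_perpendicular)

# Hypothetical detector intensities and a system-specific G factor (both assumed values)
i_par, i_perp = 1200.0, 800.0
g = 1.08
print(anisotropy(i_par, i_perp, g))   # ~0.11 for these inputs
```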
NASA Astrophysics Data System (ADS)
Arenz, M.; Baek, W.-J.; Beck, M.; Beglarian, A.; Behrens, J.; Bergmann, T.; Berlev, A.; Besserer, U.; Blaum, K.; Bode, T.; Bornschein, B.; Bornschein, L.; Brunst, T.; Buzinsky, N.; Chilingaryan, S.; Choi, W. Q.; Deffert, M.; Doe, P. J.; Dragoun, O.; Drexlin, G.; Dyba, S.; Edzards, F.; Eitel, K.; Ellinger, E.; Engel, R.; Enomoto, S.; Erhard, M.; Eversheim, D.; Fedkevych, M.; Fischer, S.; Formaggio, J. A.; Fränkle, F. M.; Franklin, G. B.; Friedel, F.; Fulst, A.; Gil, W.; Glück, F.; Ureña, A. Gonzalez; Grohmann, S.; Grössle, R.; Gumbsheimer, R.; Hackenjos, M.; Hannen, V.; Harms, F.; Haußmann, N.; Heizmann, F.; Helbing, K.; Herz, W.; Hickford, S.; Hilk, D.; Hillesheimer, D.; Howe, M. A.; Huber, A.; Jansen, A.; Kellerer, J.; Kernert, N.; Kippenbrock, L.; Kleesiek, M.; Klein, M.; Kopmann, A.; Korzeczek, M.; Kovalík, A.; Krasch, B.; Kraus, M.; Kuckert, L.; Lasserre, T.; Lebeda, O.; Letnev, J.; Lokhov, A.; Machatschek, M.; Marsteller, A.; Martin, E. L.; Mertens, S.; Mirz, S.; Monreal, B.; Neumann, H.; Niemes, S.; Off, A.; Osipowicz, A.; Otten, E.; Parno, D. S.; Pollithy, A.; Poon, A. W. P.; Priester, F.; Ranitzsch, P. C.-O.; Rest, O.; Robertson, R. G. H.; Roccati, F.; Rodenbeck, C.; Röllig, M.; Röttele, C.; Ryšavý, M.; Sack, R.; Saenz, A.; Schimpf, L.; Schlösser, K.; Schlösser, M.; Schönung, K.; Schrank, M.; Seitz-Moskaliuk, H.; Sentkerestiová, J.; Sibille, V.; Slezák, M.; Steidl, M.; Steinbrink, N.; Sturm, M.; Suchopar, M.; Suesser, M.; Telle, H. H.; Thorne, L. A.; Thümmler, T.; Titov, N.; Tkachev, I.; Trost, N.; Valerius, K.; Vénos, D.; Vianden, R.; Hernández, A. P. Vizcaya; Weber, M.; Weinheimer, C.; Weiss, C.; Welte, S.; Wendel, J.; Wilkerson, J. F.; Wolf, J.; Wüstling, S.; Zadoroghny, S.
2018-05-01
The neutrino mass experiment KATRIN requires a stability of 3 ppm for the retarding potential at -18.6 kV of the main spectrometer. To monitor the stability, two custom-made ultra-precise high-voltage dividers were developed and built in cooperation with the German national metrology institute Physikalisch-Technische Bundesanstalt (PTB). Until now, regular absolute calibration of the voltage dividers required bringing the equipment to the specialised metrology laboratory. Here we present a new method based on measuring the energy difference of two 83mKr conversion electron lines with the KATRIN setup, which was demonstrated during KATRIN's commissioning measurements in July 2017. The measured scale factor M=1972.449(10) of the high-voltage divider K35 is in agreement with the last PTB calibration 4 years ago. This result demonstrates the utility of the calibration method, as well as the long-term stability of the voltage divider.
Nonhydrostatic and surfbeat model predictions of extreme wave run-up in fringing reef environments
Lashley, Christopher H.; Roelvink, Dano; van Dongeren, Ap R.; Buckley, Mark L.; Lowe, Ryan J.
2018-01-01
The accurate prediction of extreme wave run-up is important for effective coastal engineering design and coastal hazard management. While run-up processes on open sandy coasts have been reasonably well-studied, very few studies have focused on understanding and predicting wave run-up at coral reef-fronted coastlines. This paper applies the short-wave resolving, Nonhydrostatic (XB-NH) and short-wave averaged, Surfbeat (XB-SB) modes of the XBeach numerical model to validate run-up using data from two 1D (alongshore uniform) fringing-reef profiles without roughness elements, with two objectives: i) to provide insight into the physical processes governing run-up in such environments; and ii) to evaluate the performance of both modes in accurately predicting run-up over a wide range of conditions. XBeach was calibrated by optimizing the maximum wave steepness parameter (maxbrsteep) in XB-NH and the dissipation coefficient (alpha) in XB-SB using the first dataset, and then applied to the second dataset for validation. XB-NH and XB-SB predictions of extreme wave run-up (Rmax and R2%) and its components, infragravity- and sea-swell band swash (SIG and SSS) and shoreline setup (<η>), were compared to observations. XB-NH more accurately simulated wave transformation but under-predicted shoreline setup due to its exclusion of parameterized wave-roller dynamics. XB-SB under-predicted sea-swell band swash but overestimated shoreline setup due to an over-prediction of wave heights on the reef flat. Run-up (swash) spectra were dominated by infragravity motions, allowing the short-wave (but not wave group) averaged model (XB-SB) to perform comparably well to its more complete, short-wave resolving (XB-NH) counterpart. Despite their respective limitations, both modes were able to accurately predict Rmax and R2%.
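The run-up statistics compared here (Rmax and R2%) can be extracted from a modeled or measured vertical shoreline water-level series by identifying the individual run-up maxima and taking the 2% exceedance level, as in the generic sketch below (synthetic signal, SciPy peak finder; this is not XBeach post-processing code).

```python
import numpy as np
from scipy.signal import find_peaks

def runup_stats(eta_shoreline):
    """Rmax and R2% from a vertical shoreline water-level time series (run-up maxima = local peaks)."""
    peaks, _ = find_peaks(eta_shoreline)
    maxima = eta_shoreline[peaks]                  # individual run-up maxima
    return maxima.max(), np.percentile(maxima, 98) # Rmax and the 2% exceedance level R2%

# Synthetic example: setup plus infragravity and sea-swell oscillations (illustration only)
t = np.arange(0.0, 3600.0, 1.0)
eta = 0.4 + 0.5 * np.sin(2 * np.pi * t / 120.0) + 0.2 * np.sin(2 * np.pi * t / 12.0)
print(runup_stats(eta))
```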
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cárdenas-García, D.; Méndez-Lango, E.
Flat Calibrators (FC) are an option for calibration of infrared thermometers (IT) with a fixed large target. FCs are neither blackbodies nor gray-bodies; their spectral emissivity is lower than one and depends on wavelength. Nevertheless they are used as gray-bodies with a nominal emissivity value. FCs can be calibrated radiometrically using as reference a calibrated IR thermometer (RT). If an FC will be used to calibrate ITs that work in the same spectral range as the RT, then its calibration is straightforward: the actual FC spectral emissivity is not required. This result is valid for any given fixed emissivity assigned to the FC. On the other hand, when the RT working spectral range does not match that of the ITs to be calibrated with the FC, then the FC spectral emissivity must be known as part of the calibration process. For this purpose, at CENAM, we developed an experimental setup to measure spectral emissivity in the infrared spectral range, based on a Fourier transform infrared spectrometer. Not all laboratories have emissivity measurement capability in the appropriate wavelength and temperature ranges to obtain the spectral emissivity. Thus, we present an estimation of the error introduced when the spectral range of the RT used to calibrate an FC and the spectral ranges of the ITs to be calibrated with the FC do not match. Some examples are developed for the cases when the RT and IT spectral ranges are [8,13] μm and [8,14] μm, respectively.
X-ray dual energy spectral parameter optimization for bone Calcium/Phosphorus mass ratio estimation
NASA Astrophysics Data System (ADS)
Sotiropoulou, P. I.; Fountos, G. P.; Martini, N. D.; Koukou, V. N.; Michail, C. M.; Valais, I. G.; Kandarakis, I. S.; Nikiforidis, G. C.
2015-09-01
Calcium (Ca) and Phosphorus (P) bone mass ratio has been identified as an important, yet underutilized, risk factor in osteoporosis diagnosis. The purpose of this simulation study is to investigate the use of the effective or mean mass attenuation coefficient in Ca/P mass ratio estimation with a dual-energy method. The investigation was based on assessing the accuracy of the Ca/P ratio with respect to the coefficient of variation of the ratio. Different set-ups were examined, based on the K-edge filtering technique and a single X-ray exposure. The modified X-ray output was attenuated by various Ca/P mass ratios resulting in nine calibration points, while keeping the total bone thickness constant. The simulated data were obtained considering a photon counting energy discriminating detector. The standard deviation of the residuals was used to compare and evaluate the accuracy between the different dual energy set-ups. The optimum mass attenuation coefficient for the Ca/P mass ratio estimation was the effective coefficient in all the examined set-ups. The variation of the residuals between the different set-ups was not significant.
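The dual-energy step underlying such set-ups solves two log-attenuation equations for the Ca and P area densities, from which the mass ratio follows. The sketch below is a simplified two-energy illustration with assumed effective mass attenuation coefficients and transmissions; the paper's set-ups use measured spectra and a photon-counting detector, which this sketch does not model.

```python
import numpy as np

def ca_p_mass_ratio(trans_low, trans_high, mu):
    """Solve mu @ [t_Ca, t_P] = -ln(I/I0) at two energies for the area densities t (g/cm^2).

    mu is a 2x2 matrix of effective mass attenuation coefficients (cm^2/g):
    rows = [low-energy band, high-energy band], columns = [Ca, P].
    """
    attenuations = -np.log([trans_low, trans_high])
    t_ca, t_p = np.linalg.solve(mu, attenuations)
    return t_ca / t_p

# Assumed, purely illustrative coefficients and transmissions
mu = np.array([[1.00, 0.80],    # low-energy band:  mu_Ca, mu_P
               [0.30, 0.28]])   # high-energy band: mu_Ca, mu_P
print(ca_p_mass_ratio(trans_low=0.127, trans_high=0.524, mu=mu))   # ~2.2 for these inputs
```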
NASA Technical Reports Server (NTRS)
Akyuzlu, K. M.; Jones, S.; Meredith, T.
1993-01-01
Self-pressurization by propellant boiloff is experimentally studied as an alternate pressurization concept for the Space Shuttle external tank (ET). The experimental setup used in the study is an open flow system composed of a variable area test tank and a recovery tank. The vacuum-jacketed test tank is geometrically similar to the external LOx tank for the Space Shuttle. It is equipped with instrumentation to measure the temperature and pressure histories within the liquid and vapor, and viewports to accommodate visual observations and Laser-Doppler Anemometry measurements of fluid velocities. A set of experiments was conducted using liquid nitrogen to determine the temperature stratification in the liquid and vapor, and the pressure histories of the vapor during sudden and continuous depressurization for various boundary and initial conditions. The study also includes the development and calibration of a computer model to simulate the experiments. This model is a one-dimensional, multi-node type which assumes the liquid and the vapor to be under non-equilibrium conditions during the depressurization. It has been tested for a limited number of cases. The preliminary results indicate that the accuracy of the simulations is determined by the accuracy of the heat transfer coefficients for the vapor and the liquid at the interface, which are taken to be the calibration parameters in the present model.
Review of atrazine sampling by polar organic chemical integrative samplers and Chemcatcher.
Booij, Kees; Chen, Sunmao
2018-04-24
A key success factor for the performance of passive samplers is the proper calibration of sampling rates. Sampling rates for a wide range of polar organic compounds are available for Chemcatchers and polar organic chemical integrative samplers (POCIS), but the mechanistic models that are needed to understand the effects of exposure conditions on sampling rates need improvement. Literature data on atrazine sampling rates by these samplers were reviewed with the aim of assessing what can be learned from literature reports of this well-studied compound and identifying knowledge gaps related to the effects of flow and temperature. The flow dependency of sampling rates could be described by a mass transfer resistance model with 1 (POCIS) or 2 (Chemcatcher) adjustable parameters. Literature data were insufficient to evaluate the temperature effect on the sampling rates. An evaluation of reported sampler configurations showed that standardization of sampler design can be improved: for POCIS with respect to surface area and sorbent mass, and for Chemcatcher with respect to housing design. Several reports on atrazine sampling could not be used because the experimental setups were insufficiently described with respect to flow conditions. Recommendations are made for standardization of sampler layout and documentation of flow conditions in calibration studies. Environ Toxicol Chem 2018;9999:1-13. © 2018 SETAC.
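The flow-dependent sampling-rate model mentioned above treats the water boundary layer and the membrane/sorbent side as mass-transfer resistances in series; the sketch below shows that structure with placeholder parameter values (not fitted literature values), and the split between the two terms is an assumption for illustration.

```python
import numpy as np

def sampling_rate_l_per_day(v, area_m2=0.0045, k_m=2.0e-6, b=1.0e-5, n=0.5):
    """Passive-sampler sampling rate from two mass-transfer resistances in series.

    1/k_overall = 1/k_w + 1/k_m, with a water-side coefficient k_w = b * v**n that
    increases with flow velocity v (m/s). The review found that 1 (POCIS) or
    2 (Chemcatcher) adjustable parameters sufficed; all values here are placeholders.
    """
    k_w = b * np.power(v, n)
    k_overall = 1.0 / (1.0 / k_w + 1.0 / k_m)      # m/s
    return area_m2 * k_overall * 86400.0 * 1000.0  # m^3/day converted to L/day

velocities = np.array([0.01, 0.05, 0.1, 0.3])      # hypothetical flow velocities in m/s
print(sampling_rate_l_per_day(velocities))
```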
van Leeuwen, Martin; Kremens, Robert L.; van Aardt, Jan
2015-01-01
Photosynthetic light-use efficiency (LUE) has gained wide interest as an input to modeling forest gross primary productivity (GPP). The photochemical reflectance index (PRI) has been identified as a principle means to inform LUE-based models, using airborne and satellite-based observations of canopy reflectance. More recently, low-cost electronics have become available with the potential to provide for dense in situ time-series measurements of PRI. A recent design makes use of interference filters to record light transmission within narrow wavebands. Uncertainty remains as to the dynamic range of these sensors and performance under low light conditions, the placement of the reference band, and methodology for reflectance calibration. This paper presents a low-cost sensor design and is tested in a laboratory set-up, as well in the field. The results demonstrate an excellent performance against a calibration standard (R2 = 0.9999) and at low light conditions. Radiance measurements over vegetation demonstrate a reversible reduction in green reflectance that was, however, seen in both the reference and signal wavebands. Time-series field measurements of PRI in a Douglas-fir canopy showed a weak correlation with eddy-covariance-derived LUE and a significant decline in PRI over the season. Effects of light quality, bidirectional scattering effects, and possible sensor artifacts on PRI are discussed. PMID:25951342
CALIFA, the Calar Alto Legacy Integral Field Area survey. III. Second public data release
NASA Astrophysics Data System (ADS)
García-Benito, R.; Zibetti, S.; Sánchez, S. F.; Husemann, B.; de Amorim, A. L.; Castillo-Morales, A.; Cid Fernandes, R.; Ellis, S. C.; Falcón-Barroso, J.; Galbany, L.; Gil de Paz, A.; González Delgado, R. M.; Lacerda, E. A. D.; López-Fernandez, R.; de Lorenzo-Cáceres, A.; Lyubenova, M.; Marino, R. A.; Mast, D.; Mendoza, M. A.; Pérez, E.; Vale Asari, N.; Aguerri, J. A. L.; Ascasibar, Y.; Bekeraitė, S.; Bland-Hawthorn, J.; Barrera-Ballesteros, J. K.; Bomans, D. J.; Cano-Díaz, M.; Catalán-Torrecilla, C.; Cortijo, C.; Delgado-Inglada, G.; Demleitner, M.; Dettmar, R.-J.; Díaz, A. I.; Florido, E.; Gallazzi, A.; García-Lorenzo, B.; Gomes, J. M.; Holmes, L.; Iglesias-Páramo, J.; Jahnke, K.; Kalinova, V.; Kehrig, C.; Kennicutt, R. C.; López-Sánchez, Á. R.; Márquez, I.; Masegosa, J.; Meidt, S. E.; Mendez-Abreu, J.; Mollá, M.; Monreal-Ibero, A.; Morisset, C.; del Olmo, A.; Papaderos, P.; Pérez, I.; Quirrenbach, A.; Rosales-Ortega, F. F.; Roth, M. M.; Ruiz-Lara, T.; Sánchez-Blázquez, P.; Sánchez-Menguiano, L.; Singh, R.; Spekkens, K.; Stanishev, V.; Torres-Papaqui, J. P.; van de Ven, G.; Vilchez, J. M.; Walcher, C. J.; Wild, V.; Wisotzki, L.; Ziegler, B.; Alves, J.; Barrado, D.; Quintana, J. M.; Aceituno, J.
2015-04-01
This paper describes the Second Public Data Release (DR2) of the Calar Alto Legacy Integral Field Area (CALIFA) survey. The data for 200 objects are made public, including the 100 galaxies of the First Public Data Release (DR1). Data were obtained with the integral-field spectrograph PMAS/PPak mounted on the 3.5 m telescope at the Calar Alto observatory. Two different spectral setups are available for each galaxy, (i) a low-resolution V500 setup covering the wavelength range 3745-7500 Å with a spectral resolution of 6.0 Å (FWHM); and (ii) a medium-resolution V1200 setup covering the wavelength range 3650-4840 Å with a spectral resolution of 2.3 Å (FWHM). The sample covers a redshift range between 0.005 and 0.03, with a wide range of properties in the color-magnitude diagram, stellar mass, ionization conditions, and morphological types. All the cubes in the data release were reduced with the latest pipeline, which includes improved spectrophotometric calibration, spatial registration, and spatial resolution. The spectrophotometric calibration is better than 6% and the median spatial resolution is 2.4 arcsec. In total, the second data release contains over 1.5 million spectra. Based on observations collected at the Centro Astronómico Hispano Alemán (CAHA) at Calar Alto, operated jointly by the Max-Planck-Institut für Astronomie (MPIA) and the Instituto de Astrofísica de Andalucía (CSIC). The second data release is available at http://califa.caha.es/DR2
NASA Astrophysics Data System (ADS)
Vaz, R.; May, P. W.; Fox, N. A.; Harwood, C. J.; Chatterjee, V.; Smith, J. A.; Horsfield, C. J.; Lapington, J. S.; Osbourne, S.
2015-03-01
Diamond-based photomultipliers have the potential to provide a significant improvement over existing devices due to diamond's high secondary electron yield and the narrow energy distribution of its secondary electrons, which improve energy resolution and enable extremely fast response times. In this paper we describe an experimental apparatus designed to study secondary electron emission from diamond membranes only 400 nm thick, observed in reflection and transmission configurations. The setup consists of a system of calibrated P22 green phosphor screens acting as radiation converters, which are used in combination with photomultiplier tubes to acquire secondary emission yield data from the diamond samples. The superior signal voltage sampling of the phosphor screen setup compared with traditional Faraday Cup detection allows the variation in the secondary electron yield across the sample to be visualised, so that spatial distributions can be obtained. Preliminary reflection and transmission yield data are presented as a function of primary electron energy for selected CVD diamond films and membranes. Reflection data were also obtained from the same sample set using a Faraday Cup detector setup. In general, the curves for secondary electron yield versus primary energy for both measurement setups were comparable. On average a 15-20% lower signal was recorded on our setup compared to the Faraday Cup, which was attributed to the lower photoluminescent efficiency of the P22 phosphor screens when operated at sub-kilovolt bias voltages.
FY17 Status Report on the Initial Development of a Constitutive Model for Grade 91 Steel
DOE Office of Scientific and Technical Information (OSTI.GOV)
Messner, M. C.; Phan, V. -T.; Sham, T. -L.
Grade 91 is a candidate structural material for high temperature advanced reactor applications. Existing ASME Section III, Subsection HB, Subpart B simplified design rules based on elastic analysis are set up as conservative screening tools, with the intent to supplement these screening rules with full inelastic analysis when required. The Code provides general guidelines for suitable inelastic models but does not provide constitutive model implementations. This report describes the development of an inelastic constitutive model for Gr. 91 steel aimed at fulfilling the ASME Code requirements and intended for inclusion in a new Section III Code appendix, HBB-Z. A large database of over 300 experiments on Gr. 91 was collected and converted to a standard XML form. Five families of Gr. 91 material models were identified in the literature. Of these five, two are potentially suitable for use in the ASME code. These two models were implemented and evaluated against the experimental database. Both models have deficiencies, so the report develops a framework for developing and calibrating an improved model. This required creating a new modeling method for representing changes in material rate sensitivity across the full ASME allowable temperature range for Gr. 91 structural components: room temperature to 650 °C. On top of this framework for rate sensitivity, the report describes calibrating a model for work hardening and softening in the material using genetic algorithm optimization. Future work will focus on improving this trial model by including the tension/compression asymmetry observed in experiments and necessary to capture material ratcheting under zero mean stress, and by improving the optimization and analysis framework.
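To illustrate the flavor of such a calibration step, the sketch below fits a simple Voce-type work-hardening curve to synthetic tensile data using SciPy's differential evolution, an evolutionary optimizer in the same family as the genetic algorithm mentioned above. The hardening form, parameter bounds, and data are assumptions for illustration, not the report's Gr. 91 model.

```python
import numpy as np
from scipy.optimize import differential_evolution

def voce_stress(eps_p, sigma_0, sigma_sat, delta):
    """Voce hardening law: flow stress evolves from sigma_0 toward sigma_sat at rate delta."""
    return sigma_sat + (sigma_0 - sigma_sat) * np.exp(-delta * eps_p)

# Synthetic "experimental" flow curve (MPa vs. plastic strain), placeholder data only
eps_p = np.linspace(0.0, 0.05, 30)
rng = np.random.default_rng(2)
sigma_exp = voce_stress(eps_p, 280.0, 420.0, 60.0) + rng.normal(0.0, 3.0, eps_p.size)

def misfit(params):
    return np.sum((voce_stress(eps_p, *params) - sigma_exp) ** 2)

bounds = [(100.0, 400.0), (300.0, 600.0), (1.0, 200.0)]   # sigma_0, sigma_sat, delta
result = differential_evolution(misfit, bounds, seed=0)
print(result.x)    # recovered hardening parameters
```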
NASA Astrophysics Data System (ADS)
Emanuelsson, B. D.; Baisden, W. T.; Bertler, N. A. N.; Keller, E. D.; Gkinis, V.
2015-07-01
Here we present an experimental setup for water stable isotope (δ18O and δD) continuous-flow measurements and provide metrics defining the performance of the setup during a major ice core measurement campaign (Roosevelt Island Climate Evolution; RICE). We also use the metrics to compare alternate systems. Our setup is the first continuous-flow laser spectroscopy system that uses off-axis integrated cavity output spectroscopy (OA-ICOS; analyzer manufactured by Los Gatos Research, LGR) in combination with an evaporation unit to continuously analyze water samples from an ice core. A Water Vapor Isotopic Standard Source (WVISS) calibration unit, manufactured by LGR, was modified to (1) enable measurements on several water standards, (2) increase the temporal resolution by reducing the response time and (3) reduce the influence from memory effects. While this setup was designed for the continuous-flow analysis (CFA) of ice cores, it can also continuously analyze other liquid or vapor sources. The custom setups provide a shorter response time (~ 54 and 18 s for the 2013 and 2014 setups, respectively) compared to the original WVISS unit (~ 62 s), which is an improvement in measurement resolution. Another improvement compared to the original WVISS is that the custom setups have a reduced memory effect. Stability tests comparing the custom and WVISS setups were performed and Allan deviations (σAllan) were calculated to determine precision at different averaging times. For the custom 2013 setup the precision after integration times of 10³ s is 0.060 and 0.070 ‰ for δ18O and δD, respectively. The corresponding σAllan values for the custom 2014 setup are 0.030, 0.060 and 0.043 ‰ for δ18O, δD and δ17O, respectively. For the WVISS setup the precision is 0.035, 0.070 and 0.042 ‰ after 10³ s for δ18O, δD and δ17O, respectively. Both the custom setups and the WVISS setup are influenced by instrumental drift, with δ18O being more drift sensitive than δD. The σAllan values for δ18O are 0.30 and 0.18 ‰ for the custom 2013 and WVISS setups, respectively, after averaging times of 10⁴ s (2.78 h). Using response time tests and stability tests, we show that the custom setups are more responsive (shorter response time), whereas the University of Copenhagen (UC) setup is more stable. More broadly, comparisons of different setups address the challenge of integrating vaporizer/spectrometer isotope measurement systems into a CFA campaign with many other analytical instruments.
De Leersnyder, Fien; Peeters, Elisabeth; Djalabi, Hasna; Vanhoorne, Valérie; Van Snick, Bernd; Hong, Ke; Hammond, Stephen; Liu, Angela Yang; Ziemons, Eric; Vervaet, Chris; De Beer, Thomas
2018-03-20
A calibration model for in-line API quantification based on near infrared (NIR) spectra collected during tableting in the tablet press feed frame was developed and validated. First, the measurement set-up was optimised and the effect of the filling degree of the feed frame on the NIR spectra was investigated. Secondly, a predictive API quantification model was developed and validated by calculating the accuracy profile based on the analysis results of validation experiments. Furthermore, based on the data of the accuracy profile, the measurement uncertainty was determined. Finally, the robustness of the API quantification model was evaluated. An NIR probe (SentroPAT FO) was implemented into the feed frame of a rotary tablet press (Modul™ P) to monitor physical mixtures of a model API (sodium saccharine) and excipients with two different API target concentrations: 5 and 20% (w/w). Cutting notches into the paddle wheel fingers avoided disturbances of the NIR signal caused by the rotating paddle wheel fingers and hence allowed better and more complete feed frame monitoring. The design of the notched paddle wheel fingers was also investigated: straight paddle wheel fingers caused less variation in the NIR signal than curved paddle wheel fingers. The filling degree of the feed frame was reflected in the raw NIR spectra. Several different calibration models for the prediction of the API content were developed, based on the use of single spectra or averaged spectra, and using partial least squares (PLS) regression or ratio models. These predictive models were then evaluated and validated by processing physical mixtures with API concentrations not used in the calibration models (validation set). The β-expectation tolerance intervals were calculated for each model and for each of the validated API concentration levels (β was set at 95%). PLS models showed the best predictive performance. For each examined saccharine concentration range (i.e., between 4.5 and 6.5% and between 15 and 25%), at least 95% of future measurements will not deviate more than 15% from the true value. Copyright © 2018 Elsevier B.V. All rights reserved.
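A skeleton of the PLS calibration step is shown below using scikit-learn; the spectra and concentrations are synthetic placeholders, and the real models additionally used spectral pre-processing, averaged spectra and β-expectation tolerance intervals that are not reproduced here.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(3)
n_samples, n_wavelengths = 60, 200

# Synthetic NIR spectra: a concentration-dependent band on a sloping baseline plus noise
api = rng.uniform(4.5, 25.0, n_samples)                       # % w/w, spans both target ranges
wl = np.linspace(0.0, 1.0, n_wavelengths)
band = np.exp(-((wl - 0.6) ** 2) / 0.002)
X = api[:, None] * band[None, :] + 5.0 * wl[None, :] + rng.normal(0, 0.2, (n_samples, n_wavelengths))

train, test = slice(0, 45), slice(45, None)
pls = PLSRegression(n_components=3)
pls.fit(X[train], api[train])
pred = pls.predict(X[test]).ravel()
print(np.sqrt(mean_squared_error(api[test], pred)))           # RMSEP for the held-out spectra
```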
Actuator-Assisted Calibration of Freehand 3D Ultrasound System.
Koo, Terry K; Silvia, Nathaniel
2018-01-01
Freehand three-dimensional (3D) ultrasound has been used independently of other technologies to analyze complex geometries or registered with other imaging modalities to aid surgical and radiotherapy planning. A fundamental requirement for all freehand 3D ultrasound systems is probe calibration. The purpose of this study was to develop an actuator-assisted approach to facilitate freehand 3D ultrasound calibration using point-based phantoms. We modified the mathematical formulation of the calibration problem to eliminate the need of imaging the point targets at different viewing angles and developed an actuator-assisted approach/setup to facilitate quick and consistent collection of point targets spanning the entire image field of view. The actuator-assisted approach was applied to a commonly used cross wire phantom as well as two custom-made point-based phantoms (original and modified), each containing 7 collinear point targets, and compared the results with the traditional freehand cross wire phantom calibration in terms of calibration reproducibility, point reconstruction precision, point reconstruction accuracy, distance reconstruction accuracy, and data acquisition time. Results demonstrated that the actuator-assisted single cross wire phantom calibration significantly improved the calibration reproducibility and offered similar point reconstruction precision, point reconstruction accuracy, distance reconstruction accuracy, and data acquisition time with respect to the freehand cross wire phantom calibration. On the other hand, the actuator-assisted modified "collinear point target" phantom calibration offered similar precision and accuracy when compared to the freehand cross wire phantom calibration, but it reduced the data acquisition time by 57%. It appears that both actuator-assisted cross wire phantom and modified collinear point target phantom calibration approaches are viable options for freehand 3D ultrasound calibration.
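One building block of point-based probe calibration is estimating the transform that best aligns reconstructed target positions with their known phantom coordinates; the Kabsch/Procrustes least-squares solution below is a simplified, generic illustration of that registration step. The full freehand formulation also estimates image scale factors and chains the tracking transforms, which are omitted, and the point sets here are hypothetical and deliberately non-collinear.

```python
import numpy as np

def rigid_fit(P, Q):
    """Least-squares rotation R and translation t such that R @ P_i + t ≈ Q_i (Kabsch algorithm)."""
    Pc, Qc = P - P.mean(axis=0), Q - Q.mean(axis=0)
    U, _, Vt = np.linalg.svd(Pc.T @ Qc)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])   # guard against reflections
    R = Vt.T @ D @ U.T
    t = Q.mean(axis=0) - R @ P.mean(axis=0)
    return R, t

# Hypothetical phantom target coordinates (mm) and their "measured" counterparts
phantom = np.array([[0, 0, 0], [10, 0, 0], [0, 10, 0], [0, 0, 10], [10, 10, 5]], dtype=float)
R_true = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1]], dtype=float)
t_true = np.array([5.0, -2.0, 1.0])
measured = phantom @ R_true.T + t_true
R_est, t_est = rigid_fit(phantom, measured)
print(np.allclose(R_est, R_true), np.allclose(t_est, t_true))
```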
NASA Astrophysics Data System (ADS)
Nolz, R.; Kammerer, G.
2017-06-01
Monitoring water status near the soil surface is a prerequisite for studying hydrological processes at the soil-atmosphere boundary and an option for calibrating remotely sensed water content data, for instance. As the water status of the uppermost soil layer is highly variable in space and time, adequate sensors are required to enable accurate measurements. Therefore, a sensor setup was tested and evaluated in the laboratory and in the field for such a purpose. The arrangement included Hydra Probe and MPS-2 sensors to measure water content and matric potential, respectively. Performance of the MPS-2 was validated in the laboratory by comparing sensor readings with the water potential of a soil, drained to equilibrium for certain pressure steps inside a pressure plate apparatus. Afterwards, six Hydra Probes and twelve MPS-2 sensors were installed in bare soil at a small field plot of about 9 m2. The measurements represented soil water status to a depth of 6 cm from surface. Core samples were repeatedly excavated around the measurement spots. Their water content was determined and the samples were further utilized to analyze water retention characteristics. The tested setup properly reflected changes of near-surface soil water status due to rainfall and evaporation. However, some shortcomings weakened the potential of the chosen arrangement. Site-specific calibration of the Hydra Probes - implemented by relating sensor readings to the water content values of the core samples - confirmed the applicability of the recommended standard calibration parameters for the respective soil texture. The derived user calibration enabled a measurement accuracy of 0.02 cm3·cm-3. Further improvement was restrained by the spatial variability of soil moisture. In this context, spots that were permanently drier or wetter than the others were discovered by means of a temporal stability approach. Performance of MPS-2 sensors was more critical with respect to the objectives. Sensor-to-sensor variation was small at the applied pressure steps of -20, -50, and -100 kPa, but the respective averaged readings were -18, -37, and -57 kPa. At matric potentials of -200 and -300 kPa, the MPS-2 revealed substantial sensor-to-sensor variation. The large deviation of the sensor readings in the field confirmed that the calibration of the MPS-2 should be improved. However, in spite of this inaccuracy, the wide measuring range of the MPS-2 offers suitability to a wide range of potential applications. As an example, water retention functions were calculated from the in-situ data and compared to retention data from the core samples.
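The site-specific calibration described above reduces to regressing sensor readings against the gravimetric (core-sample) water contents and checking the residual error; the sketch below uses made-up paired values for illustration.

```python
import numpy as np

# Hypothetical paired data: Hydra Probe readings vs. core-sample water contents (cm^3/cm^3)
theta_sensor = np.array([0.12, 0.18, 0.22, 0.27, 0.31, 0.35])
theta_core   = np.array([0.13, 0.17, 0.23, 0.26, 0.33, 0.34])

slope, intercept = np.polyfit(theta_sensor, theta_core, 1)   # user calibration: theta = a*reading + b
theta_cal = slope * theta_sensor + intercept
rmse = np.sqrt(np.mean((theta_cal - theta_core) ** 2))
print(slope, intercept, rmse)   # compare the RMSE with the ~0.02 cm^3/cm^3 accuracy reported above
```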
APEX calibration facility: status and first commissioning results
NASA Astrophysics Data System (ADS)
Suhr, Birgit; Fries, Jochen; Gege, Peter; Schwarzer, Horst
2006-09-01
The paper presents the current status of the operational calibration facility that can be used for radiometric, spectral and geometric on-ground characterisation and calibration of imaging spectrometers. The European Space Agency (ESA) co-funded this establishment at DLR Oberpfaffenhofen within the framework of the hyper-spectral imaging spectrometer Airborne Prism Experiment (APEX). It was designed to fulfil the requirements for calibration of APEX, but can also be used for other imaging spectrometers. A description of the hardware set-up of the optical bench will be given. Signals from two sides can alternatively be sent to the hyper-spectral sensor under investigation. From one side the spatial calibration will be done by using an off-axis collimator and six slits of different width and orientation to measure the line spread function (LSF) in the flight direction as well as across the flight direction. From the other side the spectral calibration will be performed. A monochromator provides radiation in a range from 380 nm to 13 μm with a bandwidth between 0.1 nm in the visible and 5 nm in the thermal infrared. For the relative radiometric calibration a large integrating sphere of 1.65 m diameter and exit port size of 55 cm × 40 cm is used. The absolute radiometric calibration will be done using a small integrating sphere with 50 cm diameter that is regularly calibrated according to national standards. This paper describes the hardware components and their accuracy, and it presents the software interface for automation of the measurements.
Cortesi, Marilisa; Bandiera, Lucia; Pasini, Alice; Bevilacqua, Alessandro; Gherardi, Alessandro; Furini, Simone; Giordano, Emanuele
2017-01-01
Quantifying gene expression at the single cell level is fundamental for the complete characterization of synthetic gene circuits, due to the significant impact of noise and inter-cellular variability on the system's functionality. Commercial set-ups that allow the acquisition of fluorescent signal at the single cell level (flow cytometers or quantitative microscopes) are expensive apparatuses that are hardly affordable by small laboratories. A protocol that makes a standard optical microscope able to acquire quantitative, single cell, fluorescent data from a bacterial population transformed with synthetic gene circuitry is presented. Single cell fluorescence values, acquired with a microscope set-up and processed with custom-made software, are compared with results that were obtained with a flow cytometer in a bacterial population transformed with the same gene circuitry. The high correlation between data from the two experimental set-ups, with a correlation coefficient computed over the tested dynamic range > 0.99, proves that a standard optical microscope, when coupled with appropriate software for image processing, might be used for quantitative single-cell fluorescence measurements. The calibration of the set-up, together with its validation, is described. The experimental protocol described in this paper makes quantitative measurement of single cell fluorescence accessible to laboratories equipped with standard optical microscope set-ups. Our method allows for an affordable measurement and quantification of intercellular variability; a better understanding of this phenomenon will improve our comprehension of cellular behaviors and the design of synthetic gene circuits. All the required software is freely available to the synthetic biology community (MUSIQ, Microscope flUorescence SIngle cell Quantification).
Experimental light scattering by small particles: system design and calibration
NASA Astrophysics Data System (ADS)
Maconi, Göran; Kassamakov, Ivan; Penttilä, Antti; Gritsevich, Maria; Hæggström, Edward; Muinonen, Karri
2017-06-01
We describe a setup for precise multi-angular measurements of light scattered by mm- to μm-sized samples. We present a calibration procedure that ensures accurate measurements. Calibration is done using a spherical sample (d = 5 mm, n = 1.517) fixed on a static holder. The ultimate goal of the project is to allow accurate multi-wavelength measurements (the full Mueller matrix) of single-particle samples which are levitated ultrasonically. The system comprises a tunable multimode Argon-krypton laser, with 12 wavelengths ranging from 465 to 676 nm, a linear polarizer, a reference photomultiplier tube (PMT) monitoring beam intensity, and several PMTs mounted radially towards the sample at an adjustable radius. The current 150 mm radius allows measuring all azimuthal angles except for ±4° around the backward scattering direction. The measurement angle is controlled by a motor-driven rotational stage with an accuracy of 15'.
Performance evaluation and clinical applications of 3D plenoptic cameras
NASA Astrophysics Data System (ADS)
Decker, Ryan; Shademan, Azad; Opfermann, Justin; Leonard, Simon; Kim, Peter C. W.; Krieger, Axel
2015-06-01
The observation and 3D quantification of arbitrary scenes using optical imaging systems is challenging, but increasingly necessary in many fields. This paper provides a technical basis for the application of plenoptic cameras in medical and medical robotics applications, and rigorously evaluates camera integration and performance in the clinical setting. It discusses plenoptic camera calibration and setup, assesses plenoptic imaging in a clinically relevant context, and in the context of other quantitative imaging technologies. We report the methods used for camera calibration, precision and accuracy results in an ideal and simulated surgical setting. Afterwards, we report performance during a surgical task. Test results showed the average precision of the plenoptic camera to be 0.90 mm, increasing to 1.37 mm for tissue across the calibrated FOV. The ideal accuracy was 1.14 mm. The camera showed submillimeter error during a simulated surgical task.
NASA Technical Reports Server (NTRS)
Anderson, R. C.; Summers, R. L.
1981-01-01
An integrated gas analysis system designed to operate in automatic, semiautomatic, and manual modes from a remote control panel is described. The system measures carbon monoxide, oxygen, water vapor, total hydrocarbons, carbon dioxide, and oxides of nitrogen. A pull-through design provides increased reliability and eliminates the need for manual flow rate adjustment and pressure correction. The system contains two microprocessors to range the analyzers, calibrate the system, process the raw data to units of concentration, and provide information to the facility research computer and to the operator through a terminal and the control panels. After initial setup, the system operates for several hours without significant operator attention.
Kim, Ki-Hyun; Anthwal, A; Pandey, Sudhir Kumar; Kabir, Ehsanul; Sohn, Jong Ryeul
2010-11-01
In this study, a series of GC calibration experiments were conducted to examine the feasibility of the thermal desorption approach for the quantification of five carbonyl compounds (acetaldehyde, propionaldehyde, butyraldehyde, isovaleraldehyde, and valeraldehyde) in conjunction with two internal standard compounds. The gaseous working standards of carbonyls were calibrated with the aid of thermal desorption as a function of standard concentration and of loading volume. The detection properties were then compared against two types of external calibration data sets derived by the fixed standard volume and fixed standard concentration approaches. According to this comparison, the fixed standard volume-based calibration of carbonyls should be more sensitive and reliable than its fixed standard concentration counterpart. Moreover, the use of internal standards can improve the analytical reliability of aromatics and some carbonyls to a considerable extent. Our preliminary test on real samples, however, indicates that the performance of internal calibration, when tested using samples of varying dilution ranges, can be moderately different from that derivable from standard gases. It thus suggests that the reliability of calibration approaches should be examined carefully with consideration of the interactive relationships between the compound-specific properties and the operating conditions of the instrumental setups.
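The internal-standard approach referred to above normalizes each analyte response by the response of a co-analyzed standard through a relative response factor; the minimal sketch below uses placeholder peak areas and concentrations.

```python
def relative_response_factor(area_analyte, conc_analyte, area_istd, conc_istd):
    """RRF from a calibration standard: analyte response normalized to the internal standard."""
    return (area_analyte / conc_analyte) * (conc_istd / area_istd)

def quantify(area_analyte, area_istd, conc_istd, rrf):
    """Analyte concentration in an unknown sample using the internal standard and the RRF."""
    return (area_analyte / area_istd) * conc_istd / rrf

# Hypothetical GC peak areas and concentrations (ppb) for one carbonyl and one internal standard
rrf = relative_response_factor(area_analyte=15200, conc_analyte=50.0, area_istd=9800, conc_istd=40.0)
print(quantify(area_analyte=8300, area_istd=10100, conc_istd=40.0, rrf=rrf))
```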
Fluorescence calibration method for single-particle aerosol fluorescence instruments
NASA Astrophysics Data System (ADS)
Shipley Robinson, Ellis; Gao, Ru-Shan; Schwarz, Joshua P.; Fahey, David W.; Perring, Anne E.
2017-05-01
Real-time, single-particle fluorescence instruments used to detect atmospheric bioaerosol particles are increasingly common, yet no standard fluorescence calibration method exists for this technique. This gap limits the utility of these instruments as quantitative tools and complicates comparisons between different measurement campaigns. To address this need, we have developed a method to produce size-selected particles with a known mass of fluorophore, which we use to calibrate the fluorescence detection of a Wideband Integrated Bioaerosol Sensor (WIBS-4A). We use mixed tryptophan-ammonium sulfate particles to calibrate one detector (FL1; excitation = 280 nm, emission = 310-400 nm) and pure quinine particles to calibrate the other (FL2; excitation = 280 nm, emission = 420-650 nm). The relationship between fluorescence and mass for the mixed tryptophan-ammonium sulfate particles is linear, while that for the pure quinine particles is nonlinear, likely indicating that not all of the quinine mass contributes to the observed fluorescence. Nonetheless, both materials produce a repeatable response between observed fluorescence and particle mass. This procedure allows users to set the detector gains to achieve a known absolute response, calculate the limits of detection for a given instrument, improve the repeatability of the instrumental setup, and facilitate intercomparisons between different instruments. We recommend calibration of single-particle fluorescence instruments using these methods.
NASA Astrophysics Data System (ADS)
Pommé, S.
2009-06-01
An analytical model is presented to calculate the total detection efficiency of a well-type radiation detector for photons, electrons and positrons emitted from a radioactive source at an arbitrary position inside the well. The model is well suited to treat a typical set-up with a point source or cylindrical source and vial inside a NaI well detector, with or without a lead shield surrounding it. It allows for fast absolute or relative total efficiency calibrations for a wide variety of geometrical configurations and also provides accurate input for the calculation of coincidence summing effects. Depending on its accuracy, it may even be applied in 4π-γ counting, a primary standardisation method for activity. Besides an accurate account of photon interactions, precautions are taken to simulate the special case of 511 keV annihilation quanta and to include realistic approximations for the range of (conversion) electrons and β- and β+ particles.
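The geometric part of such a calculation can be approximated as the fraction of the full solid angle not escaping through the well opening, combined with a photon interaction probability in the surrounding crystal; the sketch below uses this crude on-axis approximation with assumed dimensions and an assumed attenuation coefficient, and is far simpler than the analytical model described in the paper.

```python
import numpy as np

def well_total_efficiency(depth_mm, well_radius_mm, crystal_thickness_mm, mu_per_mm):
    """Crude total-efficiency estimate for a point source on the axis of a well detector.

    Geometric term: 1 minus the solid-angle fraction escaping through the circular
    well opening (on-axis formula). Interaction term: 1 - exp(-mu * d) for a single
    effective crystal thickness d. Both are simplifications for illustration only.
    """
    omega_opening = 2.0 * np.pi * (1.0 - depth_mm / np.hypot(depth_mm, well_radius_mm))
    geometric = 1.0 - omega_opening / (4.0 * np.pi)
    interaction = 1.0 - np.exp(-mu_per_mm * crystal_thickness_mm)
    return geometric * interaction

# Assumed geometry: source 30 mm below the well opening, 10 mm well radius,
# 40 mm effective NaI thickness, linear attenuation ~0.03 /mm (illustrative value)
print(well_total_efficiency(30.0, 10.0, 40.0, 0.03))
```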
Establishing BRDF calibration capabilities through shortwave infrared
NASA Astrophysics Data System (ADS)
Georgiev, Georgi T.; Butler, James J.; Thome, Kurt; Cooksey, Catherine; Ding, Leibo
2017-09-01
Satellite instruments operating in the reflective solar wavelength region require accurate and precise determination of the Bidirectional Reflectance Distribution Functions (BRDFs) of the laboratory and flight diffusers used in their pre-flight and on-orbit calibrations. This paper advances that initial work and presents a comparison of the spectral Bidirectional Reflectance Distribution Function (BRDF) and Directional Hemispherical Reflectance (DHR) of Spectralon, a common material for laboratory and on-orbit flight diffusers. A new measurement setup for BRDF measurements from 900 nm to 2500 nm located at NASA Goddard Space Flight Center (GSFC) is described. The GSFC setup employs an extended indium gallium arsenide detector, bandpass filters, and a supercontinuum light source. Comparisons of the GSFC BRDF measurements in the shortwave infrared (SWIR) with those made by the National Institute of Standards and Technology (NIST) Spectral Tri-function Automated Reference Reflectometer (STARR) are presented. The Spectralon sample used in this study was a 2 inch diameter, 99% white pressed and sintered polytetrafluoroethylene (PTFE) target. The NASA/NIST BRDF comparison measurements were made at an incident angle of 0° and a viewing angle of 45°. Additional BRDF data not compared to NIST were measured at additional incident and viewing angle geometries and are not presented here. The total combined uncertainty for the measurement of BRDF in the SWIR range made by the GSFC scatterometer is less than 1% (k = 1). This study is in support of the calibration of the Radiation Budget Instrument (RBI) and Visible Infrared Imaging Radiometer Suite (VIIRS) instruments of the Joint Polar Satellite System (JPSS) and other current and future NASA remote sensing missions operating across the reflected solar wavelength region.
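For readers unfamiliar with the quantity being compared, the sketch below shows the usual scatterometer estimate of an absolute BRDF, f_r ≈ P_s / (P_i · Ω · cos θ_v), for the 0°/45° geometry quoted above. The powers and detector solid angle are invented numbers for illustration; they are not GSFC or NIST measurement values.

    import math

    def brdf(p_scattered, p_incident, solid_angle_sr, view_zenith_deg):
        """Absolute BRDF estimate f_r = P_s / (P_i * Omega * cos(theta_v)) for a
        detector subtending a small solid angle Omega at viewing zenith theta_v."""
        return p_scattered / (p_incident * solid_angle_sr *
                              math.cos(math.radians(view_zenith_deg)))

    # Hypothetical powers for the 0 deg incidence / 45 deg viewing geometry.
    print(brdf(p_scattered=1.45e-9, p_incident=1.0e-6,
               solid_angle_sr=6.5e-3, view_zenith_deg=45.0))
    # A 99% near-Lambertian diffuser would give roughly 0.99/pi ~= 0.315 sr^-1.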
Small format digital photogrammetry for applications in the earth sciences
NASA Astrophysics Data System (ADS)
Rieke-Zapp, Dirk
2010-05-01
Photogrammetry is often considered one of the most precise and versatile surveying techniques. The same camera and analysis software can be used for measurements from sub-millimetre to kilometre scale. Such a measurement device is well suited for application by earth scientists working in the field. In this case a small toolset and a straightforward setup best fit the needs of the operator. While a digital camera is typically already part of the field equipment of an earth scientist, the main focus of the field work is often not surveying. At the same time, a lack of photogrammetric training requires an easy to learn, straightforward surveying technique. A photogrammetric method was therefore developed, aimed primarily at earth scientists, for taking accurate measurements in the field while minimizing the extra bulk and weight of the required equipment. The work included several challenges: A) definition of an upright coordinate system without heavy and bulky tools like a total station or GNSS sensor; B) optimization of image acquisition and geometric stability of the image block; C) identification of a small camera suitable for precise measurements in the field; D) optimization of the workflow from image acquisition to preparation of images for stereo measurements; E) introduction of students and non-photogrammetrists to the workflow. Wooden spheres were used as target points in the field. They were more rugged than the ping-pong balls used in a previous setup and available in different sizes. Distances between three spheres were introduced as scale information in a photogrammetric adjustment. The distances were measured with a laser distance meter accurate to 1 mm (1 sigma). The vertical angle between the spheres was measured with the same laser distance meter. The precision of this measurement was 0.3° (1 sigma), which is sufficient, i.e. better than inclination measurements with a geological compass. The upright coordinate system is important for measuring the dip angle of geologic features in outcrop. The planimetric coordinate system would be arbitrary, but may easily be oriented to compass north by introducing a direction measurement from a compass. Wooden spheres and a Leica Disto D3 laser distance meter added less than 0.150 kg to the field equipment, considering that a suitable digital camera was already part of it. Identification of a small digital camera suitable for precise measurements was a major part of this work. A group of cameras was calibrated several times over different periods of time on a testfield. Further evaluation involved an accuracy assessment in the field, comparing distances between signalized points calculated from the photogrammetric setup with those derived from a total station survey. The smallest camera in the test required calibration on the job, as the interior orientation changed significantly between testfield calibration and use in the field. We attribute this to the fact that the lens was retracted when the camera was switched off. Fairly stable camera geometry in a compact size camera with a lens retracting system was accomplished for the Sigma DP1 and DP2 cameras. While the pixel count of these cameras was lower than that of the Ricoh camera in the test, the pixel pitch of the Sigma cameras was much larger. Hence, the same mechanical movement would have a smaller per-pixel effect for the Sigma cameras than for the Ricoh camera.
A large pixel pitch may therefore compensate for some camera instability, explaining why cameras with large sensors and larger pixel pitch typically yield better accuracy in object space. Both Sigma cameras weigh approximately 0.250 kg and may even be suitable for use with ultralight aerial vehicles (UAVs), which have payload restrictions of 0.200 to 0.300 kg. A set of other available cameras was also tested on a calibration field and on location, showing once again that it is difficult to infer geometric stability from camera specifications. With geometrically stable cameras, image acquisition to cover the area of interest with stereo pairs for analysis was fairly straightforward. We limited our tests to setups with three to five images to minimize the amount of post-processing. The laser dot of the laser distance meter was not visible to the naked eye at distances beyond 5-7 m, which also limited the maximum stereo area that may be covered with this technique. Extrapolating the setup to fairly large areas showed no significant decrease in the accuracy accomplished in object space. Working with a Sigma SD14 SLR camera on a 6 x 18 x 20 m3 volume, the maximum length measurement error ranged between 20 and 30 mm depending on image setup and analysis. For smaller outcrops even the compact cameras yielded maximum length measurement errors in the mm range, which was considered sufficient for measurements in the earth sciences. In many cases the resolution per pixel, rather than accuracy, was the limiting factor of image analysis. A field manual was developed to guide novice users and students through this technique. The technique does not trade precision for ease of use; therefore, successful users of the presented method easily grow into more advanced photogrammetric methods for high-precision applications. Originally, camera calibration was not part of the methodology for novice operators. The recent introduction of Camera Calibrator, a low-cost, well-automated camera calibration software package, allows beginners to calibrate their camera within a couple of minutes. The complete set of calibration parameters can be applied in ERDAS LPS software, easing the workflow. Image orientation was performed in LPS 9.2 software, which was also used for further image analysis.
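The scale and levelling step described above can be illustrated with a short sketch: the model-to-world scale follows from a laser-measured sphere-to-sphere distance, and the block can be levelled by rotating it about a horizontal axis until the elevation angle of that sphere pair matches the value measured with the distance meter. This is a minimal sketch under simplifying assumptions (two spheres, not vertically aligned, a single levelling rotation); the coordinates and measured values are invented, and this is not the authors' adjustment workflow.

    import numpy as np

    def scale_factor(p_a, p_b, measured_dist_m):
        """Scale from a laser-measured distance between two target spheres."""
        return measured_dist_m / np.linalg.norm(p_b - p_a)

    def level_rotation(p_a, p_b, measured_elev_deg):
        """Rodrigues rotation about a horizontal axis that brings the elevation
        angle of the vector A->B to the field-measured value (assumes A and B
        are not vertically aligned)."""
        v = p_b - p_a
        elev_model = np.arcsin(v[2] / np.linalg.norm(v))
        delta = np.radians(measured_elev_deg) - elev_model
        axis = np.cross([v[0], v[1], 0.0], [0.0, 0.0, 1.0])
        axis /= np.linalg.norm(axis)
        K = np.array([[0.0, -axis[2], axis[1]],
                      [axis[2], 0.0, -axis[0]],
                      [-axis[1], axis[0], 0.0]])
        return np.eye(3) + np.sin(delta) * K + (1.0 - np.cos(delta)) * (K @ K)

    # Hypothetical model coordinates of two spheres and their field measurements.
    A, B = np.array([0.0, 0.0, 0.0]), np.array([1.2, 0.4, 0.1])
    s = scale_factor(A, B, measured_dist_m=3.815)       # laser distance in metres
    R = level_rotation(A, B, measured_elev_deg=7.0)     # laser vertical angle
    print(s, R @ (B - A) * s)                           # levelled, scaled vector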
NASA Astrophysics Data System (ADS)
Radkowski, Rafael; Holland, Stephen; Grandin, Robert
2018-04-01
This research addresses inspection location tracking in the field of nondestructive evaluation (NDE) using a computer vision technique to determine the position and orientation of typical NDE equipment in a test setup. The objective is to determine the tracking accuracy for typical NDE equipment in order to facilitate automatic NDE data integration. Since the employed tracking technique relies on the surface curvatures of an object of interest, the accuracy can only be determined experimentally. We worked with flash thermography and conducted an experiment in which we tracked a specimen and a thermography flash hood, measured the spatial relation between the two, and used this relation as input to map thermography data onto a 3D model of the specimen. The results indicate adequate accuracy but also reveal calibration challenges.
NASA Astrophysics Data System (ADS)
Gugg, Christoph; Harker, Matthew; O'Leary, Paul
2013-03-01
This paper describes the physical setup and mathematical modelling of a device for the measurement of structural deformations over large scales, e.g., a mining shaft. Image processing techniques are used to determine the deformation by measuring the position of a target relative to a reference laser beam. A particular novelty is the incorporation of electro-active glass; the polymer dispersed liquid crystal shutters enable the simultaneous calibration of any number of consecutive measurement units without manual intervention, i.e., the process is fully automatic. It is necessary to compensate for optical distortion if high accuracy is to be achieved in a compact hardware design where lenses with short focal lengths are used. Wide-angle lenses exhibit significant distortion, which is typically characterized using Zernike polynomials. Radial distortion models assume that the lens is rotationally symmetric; such models are insufficient in the application at hand. This paper presents a new coordinate mapping procedure based on a tensor product of discrete orthogonal polynomials. Both lens distortion and the projection are compensated by a single linear transformation. Once the system is calibrated, acquiring the measurement data requires localizing a single laser spot in the image. For this purpose, complete interpolation and rectification of the image is not required; hence, we have developed a new hierarchical approach based on a quad-tree subdivision. Cross-validation tests verify the validity, demonstrating that the proposed method accurately models both the optical distortion and the projection. The achievable accuracy is e <= +/-0.01 [mm] in a field of view of 150 [mm] x 150 [mm] at a distance of 120 [m] from the laser source. Finally, a Kolmogorov-Smirnov test shows that the error distribution in localizing a laser spot is Gaussian. Consequently, due to the linearity of the proposed method, this also applies to the algorithm's output. Therefore, first-order covariance propagation provides an accurate estimate of the measurement uncertainty, which is essential for any measurement device.
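To make the idea of a single linear mapping concrete, the sketch below fits a tensor-product polynomial transformation from distorted image coordinates to target coordinates by linear least squares. It uses a generic power basis via numpy's polyvander2d; the paper's method uses discrete orthogonal polynomials, which are better conditioned, so treat this as an illustrative stand-in with synthetic data rather than the authors' algorithm.

    import numpy as np

    def fit_tensor_poly_map(u, v, x, y, deg=3):
        """Least-squares fit of a tensor-product polynomial mapping from image
        coordinates (u, v) to target coordinates (x, y)."""
        B = np.polynomial.polynomial.polyvander2d(u, v, [deg, deg])
        cx, *_ = np.linalg.lstsq(B, x, rcond=None)
        cy, *_ = np.linalg.lstsq(B, y, rcond=None)
        return cx, cy

    def apply_map(u, v, cx, cy, deg=3):
        B = np.polynomial.polynomial.polyvander2d(u, v, [deg, deg])
        return B @ cx, B @ cy

    # Synthetic calibration grid: known target points (x, y) observed at
    # distorted image positions (u, v).
    x, y = np.meshgrid(np.linspace(-75, 75, 10), np.linspace(-75, 75, 10))
    x, y = x.ravel(), y.ravel()
    r2 = (x**2 + y**2) / 75.0**2
    u, v = x * (1 + 0.05 * r2), y * (1 + 0.05 * r2)      # synthetic distortion
    cx, cy = fit_tensor_poly_map(u, v, x, y)
    xm, ym = apply_map(u, v, cx, cy)
    print(np.max(np.hypot(xm - x, ym - y)))               # mapping residual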
Calibration of GPS based high accuracy speed meter for vehicles
NASA Astrophysics Data System (ADS)
Bai, Yin; Sun, Qiao; Du, Lei; Yu, Mei; Bai, Jie
2015-02-01
GPS-based high accuracy speed meters for vehicles are a special type of GPS speed meter that uses Doppler demodulation of GPS signals to calculate the speed of a moving target. They are increasingly used as reference equipment in the field of traffic speed measurement, but acknowledged standard calibration methods are still lacking. To address this problem, this paper presents the set-ups of simulated calibration, field test signal replay calibration, and an in-field comparison with an optical sensor based non-contact speed meter. All the experiments were carried out at selected speed values in the range of 40-180 km/h with the same GPS speed meter. The speed measurement errors of the simulated calibration fall within +/-0.1 km/h or +/-0.1%, with uncertainties smaller than 0.02% (k=2). The errors of the replay calibration fall within +/-0.1%, with uncertainties smaller than 0.10% (k=2). The calibration results demonstrate the effectiveness of the two methods. The relative deviations of the GPS speed meter from the optical sensor based non-contact speed meter fall within +/-0.3%, which validates the use of GPS speed meters as reference instruments. The results of this research can provide a technical basis for the establishment of internationally standard calibration methods for GPS speed meters, and thus help secure the legal status of GPS speed meters as reference equipment in the field of traffic speed metrology.
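For context, the Doppler-based speed estimate that such meters rely on can be sketched, in much simplified form, as a least-squares velocity solution from line-of-sight Doppler measurements. The geometry, the Doppler values, and the neglect of atmospheric, relativistic and detailed clock effects below are all simplifying assumptions for illustration; real receivers implement considerably more.

    import numpy as np

    C = 299792458.0      # m/s
    F_L1 = 1575.42e6     # Hz, GPS L1 carrier

    def speed_from_doppler(sat_pos, sat_vel, rx_pos, doppler_hz):
        """Simplified Doppler velocity solution: for satellite i with line-of-sight
        unit vector u_i, u_i . v_rx + drift = u_i . v_sat_i + c * f_d_i / f_L1,
        solved for the receiver velocity and clock drift by least squares."""
        u = sat_pos - rx_pos
        u = u / np.linalg.norm(u, axis=1, keepdims=True)
        rhs = np.einsum('ij,ij->i', u, sat_vel) + C * doppler_hz / F_L1
        A = np.hstack([u, np.ones((len(u), 1))])   # unknowns: vx, vy, vz, drift
        sol, *_ = np.linalg.lstsq(A, rhs, rcond=None)
        return np.linalg.norm(sol[:3])             # speed in m/s

    # Hypothetical geometry; satellite velocities set to zero for simplicity.
    sat_pos = np.array([[15e6, 10e6, 18e6], [-12e6, 14e6, 17e6],
                        [20e6, -5e6, 16e6], [-8e6, -18e6, 15e6]])
    sat_vel = np.zeros((4, 3))
    rx_pos = np.array([0.0, 0.0, 6.37e6])
    v_true = np.array([30.0, 10.0, 0.0])                    # about 114 km/h
    u = sat_pos - rx_pos
    u = u / np.linalg.norm(u, axis=1, keepdims=True)
    doppler = (u @ v_true) * F_L1 / C                       # synthesized shifts
    print(speed_from_doppler(sat_pos, sat_vel, rx_pos, doppler) * 3.6, 'km/h')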
Calvo, Roque; D’Amato, Roberto; Gómez, Emilio; Domingo, Rosario
2016-01-01
Coordinate measuring machines (CMMs) are primary measurement instruments in laboratories and in industrial quality control. A compensation error model has been formulated (Part I). It integrates error and uncertainty in the feature measurement model. Experimental implementation for the verification of this model is carried out by direct testing on a moving bridge CMM. The regression results by axis are quantified and compared to the CMM indication with respect to the assigned values of the measurand. Next, testing of selected measurements of length, flatness, dihedral angle, and roundness features is accomplished. The measurement of calibrated gauge blocks for length or angle, flatness verification of the CMM granite table, and roundness of a precision glass hemisphere are presented under repeatability conditions. The results are analysed and compared with alternative methods of estimation. The overall performance of the model is endorsed through experimental verification, as are its practical use and its capability to contribute to the improvement of current standard CMM measuring capabilities. PMID:27754441
The spectral imaging facility: Setup characterization
DOE Office of Scientific and Technical Information (OSTI.GOV)
De Angelis, Simone, E-mail: simone.deangelis@iaps.inaf.it; De Sanctis, Maria Cristina; Manzari, Paola Olga
2015-09-15
The SPectral IMager (SPIM) facility is a laboratory visible-infrared spectrometer developed to support spaceborne observations of rocky bodies of the solar system. Currently, this laboratory setup is used to support the DAWN mission, which is on its journey towards the asteroid 1 Ceres, and to support the 2018 ExoMars mission in the spectral investigation of the Martian subsurface. The main part of this setup is an imaging spectrometer that is a spare of the DAWN visible infrared spectrometer. The spectrometer has been assembled and calibrated at Selex ES and then installed in the facility developed at the INAF-IAPS laboratory in Rome. The goal of SPIM is to collect data to build spectral libraries for the interpretation of spaceborne and in situ hyperspectral measurements of planetary materials. Given its very high spatial resolution combined with the imaging capability, this instrument can also help in the detailed study of minerals and rocks. In this paper, the instrument setup is first described, and then a series of test measurements, aimed at the characterization of the main subsystems, are reported. In particular, laboratory tests have been performed concerning (i) the radiation sources, (ii) the reference targets, and (iii) the linearity of the detector response; the instrumental imaging artifacts have also been investigated.
Empirical High-Temperature Calibration for the Carbonate Clumped Isotopes Paleothermometer
NASA Astrophysics Data System (ADS)
Kluge, T.; John, C. M.; Jourdan, A.; Davis, S.; Crawshaw, J.
2013-12-01
The clumped isotope paleothermometer is being used in a wide range of applications related to carbonate mineral formation, focusing on temperature and fluid δ18O reconstruction. Whereas the range of typical Earth surface temperatures has been the focus of several studies based on laboratory experiments and biogenic carbonates of known growth temperatures, the clumped isotope-temperature relationship above 70 °C has not been assessed by direct precipitation of carbonates. We investigated the clumped isotope-temperature relationship by precipitating carbonates between 20 and 200°C in the laboratory. The setup consists of a pressurized vessel in which carbonate minerals are precipitated from the mixture of two solutions (CaCl2, NaHCO3). Both solutions are thermally and isotopically equilibrated before injection in the pressure vessel. Minerals precipitated in this setup generally consist of calcite. Samples were reacted with 105% orthophosphoric acid for 10 min at 90°C. The evolved CO2 was continuously collected and subsequently purified with a Porapak trap held at -35°C. Measurements were performed on a MAT 253 using the protocol of Huntington et al. (2009) and Dennis et al. (2011). Clumped isotope values from 20-90°C are consistent with carbonates that were precipitated from a CaCO3 super-saturated solution using the method of McCrea (1950). This demonstrates that the experimental setup does not induce any kinetic fractionation, and can be used for high-temperature carbonate precipitation. The new clumped isotope calibration at high temperature follows the theoretical calculations of Schauble et al. (2006) adjusted for phosphoric acid digestion at 90°C. We gratefully acknowledge funding from Qatar Petroleum, Shell and the Qatar Science and Technology Park.
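Clumped-isotope calibrations of the kind discussed here are conventionally expressed in the form Δ47 = a/T² + b, with T in kelvin. The sketch below uses placeholder coefficients of roughly the right magnitude purely to illustrate the functional form and its inversion to temperature; they are not the coefficients derived in this study.

    import numpy as np

    def delta47_from_t(t_kelvin, a=0.04e6, b=0.26):
        """Generic clumped-isotope calibration Delta47 = a / T^2 + b (permil).
        The coefficients are placeholders, not the values from this study."""
        return a / t_kelvin**2 + b

    def t_from_delta47(d47, a=0.04e6, b=0.26):
        """Invert the calibration to recover temperature in kelvin."""
        return np.sqrt(a / (d47 - b))

    for t_c in (25.0, 90.0, 200.0):
        print(t_c, round(delta47_from_t(t_c + 273.15), 3))
    print(t_from_delta47(0.60) - 273.15)   # temperature implied by Delta47 = 0.60 permil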
Development and implementation of an EPID‐based method for localizing isocenter
Hyer, Daniel E.; Nixon, Earl
2012-01-01
The aim of this study was to develop a phantom and analysis software that could be used to quickly and accurately determine the location of radiation isocenter to an accuracy of less than 1 mm using the EPID (Electronic Portal Imaging Device). The proposed solution uses a collimator setting of 10 × 10 cm2 to acquire EPID images of a new phantom constructed from LEGO blocks. Images from a number of gantry and collimator angles are analyzed by automated analysis software to determine the position of the jaws and the center of the phantom in each image. The distance between a chosen jaw and the phantom center is then compared to the same distance measured after a 180° collimator rotation to determine whether the phantom is centered in the dimension being investigated. Repeated tests show that the results are reproducible and independent of the imaging session, and that the calculated offsets of the phantom from radiation isocenter are a function of phantom setup only. The accuracy of the algorithm's calculated offsets was verified by imaging the LEGO phantom before and after applying the calculated offset. These measurements show that the offsets are predicted with an accuracy of approximately 0.3 mm, which is on the order of the detector's pitch. Comparison with a star-shot analysis yielded agreement of the isocenter location within 0.5 mm. Additionally, the phantom and software are completely independent of linac vendor, and this study presents results from two linac manufacturers. A Varian Optical Guidance Platform (OGP) calibration array was also integrated into the phantom to allow calibration of the OGP while the phantom is positioned at radiation isocenter, reducing setup uncertainty in the calibration. This solution offers a quick, objective method to perform isocenter localization as well as laser alignment and OGP calibration on a monthly basis. PACS number: 87.55.Qr PMID:23149787
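The core of the analysis, as described above, is the classic "rotate the collimator by 180° and split the difference" comparison: if the phantom centre sat exactly on the collimator rotation axis, the jaw-to-centre distance would be unchanged by the rotation, so half the difference between the two measurements gives the offset along that axis. A minimal sketch with invented distances:

    def phantom_offset_mm(d_col0_mm, d_col180_mm):
        """Offset of the phantom centre from the collimator rotation axis along one
        image axis, from jaw-to-centre distances at collimator 0 and 180 degrees."""
        return 0.5 * (d_col0_mm - d_col180_mm)

    # Hypothetical distances extracted from two EPID images (mm).
    print(phantom_offset_mm(50.8, 49.6))   # 0.6 mm shift along this axis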
DOE Office of Scientific and Technical Information (OSTI.GOV)
Theys, M.
1994-05-06
Beamlet is a high power laser currently being built at Lawrence Livermore National Laboratory as a proof of concept for the National Ignition Facility (NIF). Beamlet is testing several areas of laser advancement, such as a 37 cm Pockels cell, a square amplifier, and propagation of a square beam. The diagnostics on Beamlet tell the operators how much energy the beam has in different locations, the pulse shape, the energy distribution, and other important information regarding the beam. This information is being used to evaluate new amplifier designs and extrapolate performance to the NIF laser. In my term at Lawrence Livermore National Laboratory I designed and built a diagnostic, calibrated instruments used on diagnostics, set up instruments, hooked up communication lines to the instruments, and set up computers to control specific diagnostics.
Comparison of normal and phase stepping shearographic NDE
NASA Astrophysics Data System (ADS)
Andhee, A.; Gryzagoridis, J.; Findeis, D.
2005-05-01
The paper presents results of non-destructive testing of composite main rotor helicopter blade calibration specimens using the laser-based optical NDE technique known as shearography. The tests were performed initially using the already well-established near real-time non-destructive technique of shearography, with the specimens perturbed during testing for a few seconds using hot air from a domestic hair dryer. Following modification of the shearing device used in the shearographic setup, phase stepping of one of the sheared images captured by the CCD camera was enabled and identical tests were performed on the composite main rotor helicopter blade specimens. Considerable enhancement of the images depicting the defects on the specimens is noted, suggesting that phase stepping is a desirable addition to the traditional shearographic setup.
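For readers unfamiliar with the technique, a standard four-step phase-stepping evaluation (one common implementation, not necessarily the exact scheme used here) recovers the wrapped speckle phase from four frames shifted by 0, π/2, π and 3π/2, and the deformation phase map is the wrapped difference of the phases before and after loading. The frames below are random stand-ins for real camera images.

    import numpy as np

    def wrapped_phase(i1, i2, i3, i4):
        """Four-step phase stepping: with frames at phase shifts 0, pi/2, pi, 3*pi/2,
        the wrapped phase is phi = atan2(I4 - I2, I1 - I3)."""
        return np.arctan2(i4 - i2, i1 - i3)

    rng = np.random.default_rng(1)
    frames_before = [rng.random((480, 640)) for _ in range(4)]
    frames_after = [rng.random((480, 640)) for _ in range(4)]

    # Deformation-induced phase map: wrapped difference of the two phase maps.
    dphi = np.angle(np.exp(1j * (wrapped_phase(*frames_after) -
                                 wrapped_phase(*frames_before))))
    print(dphi.shape, dphi.min(), dphi.max())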
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ren, Huiying; Hou, Zhangshuan; Huang, Maoyi
The Community Land Model (CLM) represents physical, chemical, and biological processes of the terrestrial ecosystems that interact with climate across a range of spatial and temporal scales. As CLM includes numerous sub-models and associated parameters, the high-dimensional parameter space presents a formidable challenge for quantifying uncertainty and improving Earth system predictions needed to assess environmental changes and risks. This study aims to evaluate the potential of transferring hydrologic model parameters in CLM through sensitivity analyses and classification across watersheds from the Model Parameter Estimation Experiment (MOPEX) in the United States. The sensitivity of CLM-simulated water and energy fluxes to hydrological parameters across 431 MOPEX basins is first examined using an efficient stochastic sampling-based sensitivity analysis approach. Linear, interaction, and high-order nonlinear impacts are all identified via statistical tests and stepwise backward removal parameter screening. The basins are then classified according to their parameter sensitivity patterns (internal attributes) and, separately, their hydrologic indices/attributes (external hydrologic factors), using a principal component analysis (PCA) and expectation-maximization (EM) based clustering approach. Similarities and differences among the parameter sensitivity-based classification system (S-Class), the hydrologic indices-based classification (H-Class), and the Köppen climate classification system (K-Class) are discussed. Within each S-Class with similar parameter sensitivity characteristics, similar inversion modeling setups can be used for parameter calibration, and the parameters and their contribution or significance to water and energy cycling may also be more transferable. This classification study provides guidance on identifiable parameters and on parameterization and inverse model design for CLM, but the methodology is applicable to other models. Inverting parameters at representative sites belonging to the same class can significantly reduce parameter calibration efforts.
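A compact sketch of the classification step described above: each basin is represented by a vector of parameter-sensitivity indices, PCA reduces the dimensionality, and an expectation-maximization (Gaussian mixture) clustering assigns the classes. The random matrix stands in for the real sensitivity indices, and the numbers of components and clusters are arbitrary choices, not those of the study.

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(42)
    sensitivity = rng.random((431, 20))   # 431 basins x 20 sensitivity indices (stand-in)

    scores = PCA(n_components=5).fit_transform(sensitivity)        # dimension reduction
    labels = GaussianMixture(n_components=4,
                             random_state=0).fit_predict(scores)   # EM clustering
    print(np.bincount(labels))            # number of basins assigned to each S-Class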
Stojković, Ivana; Todorović, Nataša; Nikolov, Jovana; Tenjović, Branislava
2016-06-01
A procedure for (222)Rn determination in aqueous samples using liquid scintillation counting (LSC) was evaluated and optimized. Measurements were performed with the ultra-low background spectrometer Quantulus 1220™ equipped with a PSA (Pulse Shape Analysis) circuit that discriminates alpha from beta spectra. Since the calibration procedure is carried out with a (226)Ra standard, which has both alpha- and beta-emitting progeny, the PSA discriminator is of vital importance for precise spectrum separation. The calibration procedure was improved by investigating the influence of the PSA discriminator level and, consequently, of the activity of the (226)Ra calibration standard on the (222)Rn detection efficiency. Quench effects on the generated spectra, i.e., on the determination of the radon detection efficiency, were also investigated and a quench calibration curve was obtained. Radon determination in waters based on the modified procedure, in which the activity of the (226)Ra standard used depends on the PSA setup, was evaluated with prepared (226)Ra solution samples and drinking water samples, including an assessment of the variation in measurement uncertainty. Copyright © 2016 Elsevier Ltd. All rights reserved.
Geometric Calibration and Validation of Ultracam Aerial Sensors
NASA Astrophysics Data System (ADS)
Gruber, Michael; Schachinger, Bernhard; Muick, Marc; Neuner, Christian; Tschemmernegg, Helfried
2016-03-01
We present details of the calibration and validation procedure for UltraCam aerial camera systems. Results from the laboratory calibration and from validation flights are presented for both the large-format nadir cameras and the oblique cameras. Thus, in this contribution we show results from the UltraCam Eagle and the UltraCam Falcon, both nadir mapping cameras, and the UltraCam Osprey, our oblique camera system. This sensor offers a mapping-grade nadir component together with four oblique camera heads. The geometric processing after the flight mission is covered by the UltraMap software product, and we therefore present details of the workflow as well. The first part consists of the initial post-processing, which combines image information with camera parameters derived from the laboratory calibration. The second part, the traditional automated aerial triangulation (AAT), is the step from single images to blocks and enables an additional optimization process. We also present some special features of our software, which are designed to better support the operator in analyzing large blocks of aerial images and judging the quality of the photogrammetric set-up.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nabeel A. Riza
The goals of the first six months of this project were to begin laying the foundations for the SiC front-end optical chip fabrication techniques for high pressure gas species sensing, as well as for the design, assembly, and test of a portable high pressure, high temperature calibration test cell chamber for introducing gas species. This calibration cell will be used in the remaining months for the proposed first-stage high pressure, high temperature gas species sensor experimentation and data processing. All these goals have been achieved and are described in detail in the report. Both the design process and diagrams for the mechanical elements as well as the optical systems are provided. Photographs of the fabricated calibration test chamber cell, the optical sensor setup with the calibration cell, and the SiC sample chip holder, together with the relevant signal processing mathematics, are provided. Initial experimental data from both the optical sensor and the fabricated test gas species SiC chips are provided. The design and experimentation results are summarized to give positive conclusions on the proposed novel high temperature, high pressure gas species detection optical sensor technology.
Spectral calibration of the fluorescence telescopes of the Pierre Auger Observatory
NASA Astrophysics Data System (ADS)
Aab, A.; Abreu, P.; Aglietta, M.; Al Samarai, I.; Albuquerque, I. F. M.; Allekotte, I.; Almela, A.; Alvarez Castillo, J.; Alvarez-Muñiz, J.; Anastasi, G. A.; Anchordoqui, L.; Andrada, B.; Andringa, S.; Aramo, C.; Arqueros, F.; Arsene, N.; Asorey, H.; Assis, P.; Aublin, J.; Avila, G.; Badescu, A. M.; Balaceanu, A.; Barbato, F.; Barreira Luz, R. J.; Beatty, J. J.; Becker, K. H.; Bellido, J. A.; Berat, C.; Bertaina, M. E.; Biermann, P. L.; Biteau, J.; Blaess, S. G.; Blanco, A.; Blazek, J.; Bleve, C.; Boháčová, M.; Boncioli, D.; Bonifazi, C.; Borodai, N.; Botti, A. M.; Brack, J.; Brancus, I.; Bretz, T.; Bridgeman, A.; Briechle, F. L.; Buchholz, P.; Bueno, A.; Buitink, S.; Buscemi, M.; Caballero-Mora, K. S.; Caccianiga, L.; Cancio, A.; Canfora, F.; Caramete, L.; Caruso, R.; Castellina, A.; Catalani, F.; Cataldi, G.; Cazon, L.; Chavez, A. G.; Chinellato, J. A.; Chudoba, J.; Clay, R. W.; Cobos, A.; Colalillo, R.; Coleman, A.; Collica, L.; Coluccia, M. R.; Conceição, R.; Consolati, G.; Contreras, F.; Cooper, M. J.; Coutu, S.; Covault, C. E.; Cronin, J.; D'Amico, S.; Daniel, B.; Dasso, S.; Daumiller, K.; Dawson, B. R.; de Almeida, R. M.; de Jong, S. J.; De Mauro, G.; de Mello Neto, J. R. T.; De Mitri, I.; de Oliveira, J.; de Souza, V.; Debatin, J.; Deligny, O.; Díaz Castro, M. L.; Diogo, F.; Dobrigkeit, C.; D'Olivo, J. C.; Dorosti, Q.; dos Anjos, R. C.; Dova, M. T.; Dundovic, A.; Ebr, J.; Engel, R.; Erdmann, M.; Erfani, M.; Escobar, C. O.; Espadanal, J.; Etchegoyen, A.; Falcke, H.; Farmer, J.; Farrar, G.; Fauth, A. C.; Fazzini, N.; Fenu, F.; Fick, B.; Figueira, J. M.; Filipčič, A.; Fratu, O.; Freire, M. M.; Fujii, T.; Fuster, A.; Gaior, R.; García, B.; Garcia-Pinto, D.; Gaté, F.; Gemmeke, H.; Gherghel-Lascu, A.; Giaccari, U.; Giammarchi, M.; Giller, M.; Głas, D.; Glaser, C.; Golup, G.; Gómez Berisso, M.; Gómez Vitale, P. F.; González, N.; Gookin, B.; Gorgi, A.; Gorham, P.; Grillo, A. F.; Grubb, T. D.; Guarino, F.; Guedes, G. P.; Halliday, R.; Hampel, M. R.; Hansen, P.; Harari, D.; Harrison, T. A.; Harton, J. L.; Haungs, A.; Hebbeker, T.; Heck, D.; Heimann, P.; Herve, A. E.; Hill, G. C.; Hojvat, C.; Holt, E.; Homola, P.; Hörandel, J. R.; Horvath, P.; Hrabovský, M.; Huege, T.; Hulsman, J.; Insolia, A.; Isar, P. G.; Jandt, I.; Johnsen, J. A.; Josebachuili, M.; Jurysek, J.; Kääpä, A.; Kambeitz, O.; Kampert, K. H.; Keilhauer, B.; Kemmerich, N.; Kemp, E.; Kemp, J.; Kieckhafer, R. M.; Klages, H. O.; Kleifges, M.; Kleinfeller, J.; Krause, R.; Krohm, N.; Kuempel, D.; Kukec Mezek, G.; Kunka, N.; Kuotb Awad, A.; Lago, B. L.; LaHurd, D.; Lang, R. G.; Lauscher, M.; Legumina, R.; Leigui de Oliveira, M. A.; Letessier-Selvon, A.; Lhenry-Yvon, I.; Link, K.; Lo Presti, D.; Lopes, L.; López, R.; López Casado, A.; Lorek, R.; Luce, Q.; Lucero, A.; Malacari, M.; Mallamaci, M.; Mandat, D.; Mantsch, P.; Mariazzi, A. G.; Mariş, I. C.; Marsella, G.; Martello, D.; Martinez, H.; Martínez Bravo, O.; Masías Meza, J. J.; Mathes, H. J.; Mathys, S.; Matthews, J.; Matthews, J. A. J.; Matthiae, G.; Mayotte, E.; Mazur, P. O.; Medina, C.; Medina-Tanco, G.; Melo, D.; Menshikov, A.; Merenda, K.-D.; Michal, S.; Micheletti, M. I.; Middendorf, L.; Miramonti, L.; Mitrica, B.; Mockler, D.; Mollerach, S.; Montanet, F.; Morello, C.; Mostafá, M.; Müller, A. L.; Müller, G.; Muller, M. A.; Müller, S.; Mussa, R.; Naranjo, I.; Nellen, L.; Nguyen, P. H.; Niculescu-Oglinzanu, M.; Niechciol, M.; Niemietz, L.; Niggemann, T.; Nitz, D.; Nosek, D.; Novotny, V.; Nožka, L.; Núñez, L. 
A.; Ochilo, L.; Oikonomou, F.; Olinto, A.; Palatka, M.; Pallotta, J.; Papenbreer, P.; Parente, G.; Parra, A.; Paul, T.; Pech, M.; Pedreira, F.; Pȩkala, J.; Pelayo, R.; Peña-Rodriguez, J.; Pereira, L. A. S.; Perlin, M.; Perrone, L.; Peters, C.; Petrera, S.; Phuntsok, J.; Piegaia, R.; Pierog, T.; Pimenta, M.; Pirronello, V.; Platino, M.; Plum, M.; Porowski, C.; Prado, R. R.; Privitera, P.; Prouza, M.; Quel, E. J.; Querchfeld, S.; Quinn, S.; Ramos-Pollan, R.; Rautenberg, J.; Ravignani, D.; Ridky, J.; Riehn, F.; Risse, M.; Ristori, P.; Rizi, V.; Rodrigues de Carvalho, W.; Rodriguez Fernandez, G.; Rodriguez Rojo, J.; Rogozin, D.; Roncoroni, M. J.; Roth, M.; Roulet, E.; Rovero, A. C.; Ruehl, P.; Saffi, S. J.; Saftoiu, A.; Salamida, F.; Salazar, H.; Saleh, A.; Salesa Greus, F.; Salina, G.; Sánchez, F.; Sanchez-Lucas, P.; Santos, E. M.; Santos, E.; Sarazin, F.; Sarmento, R.; Sarmiento-Cano, C.; Sato, R.; Schauer, M.; Scherini, V.; Schieler, H.; Schimp, M.; Schmidt, D.; Scholten, O.; Schovánek, P.; Schröder, F. G.; Schröder, S.; Schulz, A.; Schumacher, J.; Sciutto, S. J.; Segreto, A.; Shadkam, A.; Shellard, R. C.; Sigl, G.; Silli, G.; Sima, O.; Śmiałkowski, A.; Šmída, R.; Snow, G. R.; Sommers, P.; Sonntag, S.; Squartini, R.; Stanca, D.; Stanič, S.; Stasielak, J.; Stassi, P.; Stolpovskiy, M.; Strafella, F.; Streich, A.; Suarez, F.; Suarez Durán, M.; Sudholz, T.; Suomijärvi, T.; Supanitsky, A. D.; Šupík, J.; Swain, J.; Szadkowski, Z.; Taboada, A.; Taborda, O. A.; Theodoro, V. M.; Timmermans, C.; Todero Peixoto, C. J.; Tomankova, L.; Tomé, B.; Torralba Elipe, G.; Travnicek, P.; Trini, M.; Ulrich, R.; Unger, M.; Urban, M.; Valdés Galicia, J. F.; Valiño, I.; Valore, L.; van Aar, G.; van Bodegom, P.; van den Berg, A. M.; van Vliet, A.; Varela, E.; Vargas Cárdenas, B.; Varner, G.; Vázquez, R. A.; Veberič, D.; Ventura, C.; Vergara Quispe, I. D.; Verzi, V.; Vicha, J.; Villaseñor, L.; Vorobiov, S.; Wahlberg, H.; Wainberg, O.; Walz, D.; Watson, A. A.; Weber, M.; Weindl, A.; Wiencke, L.; Wilczyński, H.; Wirtz, M.; Wittkowski, D.; Wundheiler, B.; Yang, L.; Yushkov, A.; Zas, E.; Zavrtanik, D.; Zavrtanik, M.; Zepeda, A.; Zimmermann, B.; Ziolkowski, M.; Zong, Z.; Zuccarello, F.; Pierre Auger Collaboration
2017-10-01
We present a novel method to measure precisely the relative spectral response of the fluorescence telescopes of the Pierre Auger Observatory. We used a portable light source based on a xenon flasher and a monochromator to measure the relative spectral efficiencies of eight telescopes in steps of 5 nm from 280 nm to 440 nm. Each point in a scan had approximately 2 nm FWHM out of the monochromator. Different sets of telescopes in the observatory have different optical components, and the eight telescopes measured represent two each of the four combinations of components represented in the observatory. We made an end-to-end measurement of the response from different combinations of optical components, and the monochromator setup allowed for more precise and complete measurements than our previous multi-wavelength calibrations. We find an overall uncertainty in the calibration of the spectral response of most of the telescopes of 1.5% for all wavelengths; the six oldest telescopes have larger overall uncertainties of about 2.2%. We also report changes in physics measurables due to the change in calibration, which are generally small.
Qualification and calibration tests of detector modules for the CMS Pixel Phase 1 upgrade
NASA Astrophysics Data System (ADS)
Zhu, D.; Backhaus, M.; Berger, P.; Meinhard, M.; Starodumov, A.; Tavolaro, V.
2018-01-01
In high energy particle physics, accelerator and detector upgrades always go hand in hand. The instantaneous luminosity of the Large Hadron Collider will increase to up to L = 2 × 10³⁴ cm⁻² s⁻¹ during Run 2 until 2023. In order to cope with such luminosities, the pixel detector of the CMS experiment was replaced in early 2017. The so-called CMS Pixel Phase 1 upgrade detector consists of 1184 modules of a new design. An important production step is the module qualification and calibration, ensuring their proper functionality within the detector. This paper summarizes the qualification and calibration tests and the results for modules used in the innermost two detector layers, with a focus on methods using module-internal calibration signals. Extended characterizations on the pixel level, such as electronic noise and bump bond connectivity, optimization of operational parameters, sensor quality, and thermal stress resistance, were performed using a customized setup with a controlled environment. It could be shown that the selected modules have on average 0.55‰ ± 0.01‰ defective pixels and that all performance parameters stay within their specifications.
NASA Astrophysics Data System (ADS)
Gassmann, Matthias; Farlin, Julien; Gallé, Tom
2017-04-01
Agricultural application of herbicides often leads to significant herbicide losses to receiving rivers. The impact of agricultural practices on water pollution can be assessed by process-based reactive transport modelling using catchment-scale models. Prior to investigations of management practices, these models have to be calibrated using sampling data. However, most previous studies only used concentrations at the catchment outlet for model calibration and validation. Thus, even if the applied model is spatially distributed, predicted spatial differences in pesticide loss cannot be directly compared to observations. In this study, we applied the spatially distributed reactive transport model Zin-AgriTra in the mesoscale (78 km2) catchment of the Wark River in Luxembourg in order to simulate concentrations of terbuthylazine in river water. In contrast to former studies, we used six sampling points, equipped with passive samplers, for pesticide model validation. Three samplers were located in the main channel of the river and three in smaller tributaries. At each sampling point, event mean concentrations of six events from May to July 2011 were calculated by subtracting the baseflow mass from the total collected mass, assuming time-proportional uptake by the passive samplers. Continuous discharge measurements and high-resolution autosampling during events allowed for accurate load calculations at the outlet. Detailed information about maize cultivation in the catchment and nation-wide terbuthylazine application statistics (341 g/ha in the 3rd week of May) were used to define the pesticide input function of the model. The hydrological model was manually calibrated to fit baseflow and spring/summer events. Substance fluxes were calibrated using a Latin hypercube of physico-chemical substance characteristics as provided by the literature: surface soil half-lives of 10-35 d, Freundlich KOC of 150-330 ml/g, Freundlich n of 0.9-1 and adsorption/desorption kinetics of 20-80 1/d. Daily discharge simulations resulted in high Kling-Gupta efficiencies (KGE) for the calibration and the validation period (KGE > 0.70). Overall, terbuthylazine concentrations could be successfully reproduced, with maximum KGE > 0.90 for all concentrations in the catchment and for loads at the outlet. The generally lower concentrations in the tributaries measured by the passive samplers and the declining concentrations towards the outlet in the main channel could both be reproduced by the model. The model simulated overland flow to be the major source of terbuthylazine in the main channel and soil water fluxes to be the most important pathway in the tributaries. Simulation results suggest that less than 0.01% of the applied terbuthylazine mass was exported to the river in the Wark catchment and that less than 5% of the exported mass originated from the sampled tributaries. In addition to the calibration of substance characteristics, passive sampler data were helpful in setting up the connectivity of application fields in the model. Since the spatial resolution of the model was 50 m, input maps sometimes showed a field to be directly connected to a river, whereas in reality it was separated from it by a 30 m wide field or forest strip. Such misconfigurations, leading to high concentrations in tributaries, could easily be identified by comparing model results to passive sampler data.
In conclusion, attributing the different transport pathways of terbuthylazine to the rivers in the model simulations was aided by the additional spatial information on pesticide concentrations gained from the passive samplers.
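The Kling-Gupta efficiency used above as the goodness-of-fit measure is defined from the correlation, variability ratio and bias ratio between simulated and observed series; the sketch below implements this standard definition on invented concentration series (it is not the study's data or code).

    import numpy as np

    def kge(sim, obs):
        """Kling-Gupta efficiency: 1 - sqrt((r-1)^2 + (alpha-1)^2 + (beta-1)^2),
        with r the correlation, alpha = std(sim)/std(obs), beta = mean(sim)/mean(obs)."""
        r = np.corrcoef(sim, obs)[0, 1]
        alpha = np.std(sim) / np.std(obs)
        beta = np.mean(sim) / np.mean(obs)
        return 1.0 - np.sqrt((r - 1.0)**2 + (alpha - 1.0)**2 + (beta - 1.0)**2)

    # Hypothetical event concentrations (ug/L) at the catchment outlet.
    obs = np.array([0.02, 0.05, 0.40, 0.25, 0.10, 0.04, 0.03])
    sim = np.array([0.03, 0.04, 0.35, 0.28, 0.12, 0.05, 0.03])
    print(kge(sim, obs))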
NASA Astrophysics Data System (ADS)
Pal, Sandip; Kar, Ranjan; Mandal, Anupam; Das, Ananda; Saha, Subrata
2017-05-01
A prototype of a variable temperature insert has been developed in-house as a cryogenic thermometer calibration facility. It was commissioned to fulfil the very stringent temperature control requirements of the cryogenic system. The calibration facility is designed for calibrating industrial cryogenic thermometers, including the temperature sensor and the heat intercepts of its wires, in the 2.2 K-325 K temperature range. The isothermal section of the calibration block onto which the thermometers are mounted is weakly linked with the temperature control zone, which carries a cooling capillary coil and a cryogenic heater. The connecting wires of the thermometer are thermally anchored to the support of the temperature insert. The calibration procedure begins once the temperature of the support is stabilized. Homogeneity of the calibration block's temperature is established both by simulation and by cross-comparison of two calibrated sensors. The absolute uncertainty in the temperature measurement is calculated and found to be comparable with the measured uncertainty at different temperature points. Measured data are presented in comparison to standard thermometers at fixed points, and the absolute accuracy achieved is better than ±0.5% of the reading relative to the fixed-point temperature. The design and development of this simpler, low-cost equipment and the approach to analysing the calibration results are discussed further in this paper, so that it can easily be reproduced by other researchers.
NASA Astrophysics Data System (ADS)
Hawdon, Aaron; McJannet, David; Wallace, Jim
2014-06-01
The cosmic-ray probe (CRP) provides continuous estimates of soil moisture over an area of ~30 ha by counting fast neutrons produced from cosmic rays which are predominantly moderated by water molecules in the soil. This paper describes the setup, measurement correction procedures, and field calibration of CRPs at nine locations across Australia with contrasting soil type, climate, and land cover. These probes form the inaugural Australian CRP network, which is known as CosmOz. CRP measurements require neutron count rates to be corrected for effects of atmospheric pressure, water vapor pressure changes, and variations in incoming neutron intensity. We assess the magnitude and importance of these corrections and present standardized approaches for network-wide analysis. In particular, we present a new approach to correct for incoming neutron intensity variations and test its performance against existing procedures used in other studies. Our field calibration results indicate that a generalized calibration function for relating neutron counts to soil moisture is suitable for all soil types, with the possible exception of very sandy soils with low water content. Using multiple calibration data sets, we demonstrate that the generalized calibration function only applies after accounting for persistent sources of hydrogen in the soil profile. Finally, we demonstrate that by following standardized correction procedures and scaling neutron counting rates of all CRPs to a single reference location, differences in calibrations between sites are related to site biomass. This observation provides a means for estimating biomass at a given location or for deriving coefficients for the calibration function in the absence of field calibration data.
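The correction and calibration chain described above can be sketched as follows. The correction coefficients (barometric attenuation, a ~0.54% per g/m³ water-vapour term, and division by the incoming-intensity ratio) are indicative textbook-style values, and the soil moisture function is the generalized form θ = 0.0808/(N/N0 − 0.372) − 0.115 referred to in the literature; none of the numbers below are CosmOz site values.

    import numpy as np

    def correct_counts(n_raw, pressure_hpa, abs_humidity_gm3, intensity_ratio,
                       p_ref=1013.25, h_ref=0.0, beta_per_hpa=0.0077):
        """Typical CRP corrections (indicative coefficients): exponential barometric
        correction, linear water-vapour correction (~0.54% per g/m3 of absolute
        humidity), and scaling by the incoming neutron intensity ratio I/I_ref."""
        f_p = np.exp(beta_per_hpa * (pressure_hpa - p_ref))
        f_h = 1.0 + 0.0054 * (abs_humidity_gm3 - h_ref)
        return n_raw * f_p * f_h / intensity_ratio

    def gravimetric_soil_moisture(n_corr, n0):
        """Generalized calibration function theta = 0.0808/(N/N0 - 0.372) - 0.115 (g/g),
        with N0 the counting rate over dry soil from field calibration."""
        return 0.0808 / (n_corr / n0 - 0.372) - 0.115

    n = correct_counts(n_raw=1450.0, pressure_hpa=995.0,
                       abs_humidity_gm3=8.0, intensity_ratio=1.02)
    print(gravimetric_soil_moisture(n, n0=2400.0))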
NASA Astrophysics Data System (ADS)
Rider, N. D.; Taha, Y. M.; Odame-Ankrah, C. A.; Huo, J. A.; Tokarek, T. W.; Cairns, E.; Moussa, S. G.; Liggio, J.; Osthoff, H. D.
2015-07-01
Photochemical sources of peroxycarboxylic nitric anhydrides (PANs) are utilized in many atmospheric measurement techniques for calibration or to deliver an internal standard. Conventionally, such sources rely on phosphor-coated low-pressure mercury (Hg) lamps to generate the UV light necessary to photo-dissociate a dialkyl ketone (usually acetone) in the presence of a calibrated amount of nitric oxide (NO) and oxygen (O2). In this manuscript, a photochemical PAN source in which the Hg lamp has been replaced by arrays of ultraviolet light-emitting diodes (UV-LEDs) is described. The output of the UV-LED source was analyzed by gas chromatography (PAN-GC) and thermal dissociation cavity ring-down spectroscopy (TD-CRDS). Using acetone, diethyl ketone (DIEK), diisopropyl ketone (DIPK), or di-n-propyl ketone (DNPK), respectively, the source produces peroxyacetic (PAN), peroxypropionic (PPN), peroxyisobutanoic (PiBN), or peroxy-n-butanoic nitric anhydride (PnBN) from NO in high yield (> 90 %). Box model simulations with a subset of the Master Chemical Mechanism (MCM) were carried out to rationalize product yields and to identify side products. The present work demonstrates that UV-LED arrays are a viable alternative to current Hg lamp setups.
NASA Technical Reports Server (NTRS)
Sketoe, J. G.; Clark, Anthony
2000-01-01
This paper presents a DOD E3 program overview on integrated circuit immunity. The topics include: 1) EMI Immunity Testing; 2) Threshold Definition; 3) Bias Tee Function; 4) Bias Tee Calibration Set-Up; 5) EDM Test Figure; 6) EMI Immunity Levels; 7) NAND vs. and Gate Immunity; 8) TTL vs. LS Immunity Levels; 9) TP vs. OC Immunity Levels; 10) 7805 Volt Reg Immunity; and 11) Seventies Chip Set. This paper is presented in viewgraph form.
LV software support for supersonic flow analysis
NASA Technical Reports Server (NTRS)
Bell, William A.
1991-01-01
During 1991, the software developed allowed an operator to configure and checkout the TSI, Inc. laser velocimeter (LV) system prior to a run. This setup procedure established the operating conditions for the TSI MI-990 multichannel interface and the RMR-1989 rotating machinery resolver. In addition to initializing the instruments, the software package provides a means of specifying LV calibration constants, controlling the sampling process, and identifying the test parameters.
Automated Iodine Monitoring System Development (AIMS). [shuttle prototype
NASA Technical Reports Server (NTRS)
1975-01-01
The operating principle of the automated iodine monitoring/controller system (AIMS) is described along with several design modifications. The iodine addition system is also discussed along with test setups and calibration; a facsimile of the optical/mechanical portion of the iodine monitor was fabricated and tested. The appendices include information on shuttle prototype AIMS, preliminary prime item development specifications, preliminary failure modes and effects analysis, and preliminary operating and maintenance instructions.
A system to simulate and reproduce audio-visual environments for spatial hearing research.
Seeber, Bernhard U; Kerber, Stefan; Hafter, Ervin R
2010-02-01
The article reports the experience gained from two implementations of the "Simulated Open-Field Environment" (SOFE), a setup that allows sounds to be played at calibrated levels over a wide frequency range from multiple loudspeakers in an anechoic chamber. Playing sounds from loudspeakers in the free-field has the advantage that each participant listens with their own ears, and individual characteristics of the ears are captured in the sound they hear. This makes an easy and accurate comparison between various listeners with and without hearing devices possible. The SOFE uses custom calibration software to assure individual equalization of each loudspeaker. Room simulation software creates the spatio-temporal reflection pattern of sound sources in rooms which is played via the SOFE loudspeakers. The sound playback system is complemented by a video projection facility which can be used to collect or give feedback or to study auditory-visual interaction. The article discusses acoustical and technical requirements for accurate sound playback against the specific needs in hearing research. An introduction to software concepts is given which allow easy, high-level control of the setup and thus fast experimental development, turning the SOFE into a "Swiss army knife" tool for auditory, spatial hearing and audio-visual research. Crown Copyright 2009. Published by Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Haller, J.; Wilkens, V.
2017-03-01
The objective of this work was to create highly stable therapeutic ultrasound fields with well-known exposimetry and dosimetry parameters that are reproducible and hence predictable with well-known uncertainties. Such well-known and reproducible fields would allow validation and secondary calibrations of different measuring capabilities, which is already a widely accepted strategy for diagnostic fields. For this purpose, a reference setup was established that comprises two therapeutic ultrasound sources (one High-Intensity Therapeutic Ultrasound (HITU) source and one physiotherapy-like source), standard rf electronics for signal creation, and computer-controlled feedback to stabilize the input voltage. The short- and long-term stability of the acoustic output were evaluated: for the former, the stability of the input voltage with and without feedback control was measured over typical laboratory measurement periods (i.e. some seconds or minutes); for the latter, typical acoustic exposimetry parameters were measured bimonthly over one year. The results show that both the short- and long-term stability of the reference setup are very good and, in particular, significantly improved compared with a setup without any feedback control.
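The computer-controlled feedback mentioned above is conceptually simple: measure the monitored voltage, apply a fraction of the error to the drive setting, and repeat. The sketch below shows such an integral-style loop against a simulated, slowly drifting amplifier; the drift model, gain and all values are assumptions, not the reference setup's actual control law.

    import random

    class SimulatedAmplifier:
        """Stand-in for the rf chain: gain drifts slowly and the reading is noisy."""
        def __init__(self):
            self.gain = 1.00
        def output(self, drive_v):
            self.gain *= 1.0 - 2e-4               # slow drift, e.g. amplifier heating
            return self.gain * drive_v + random.gauss(0.0, 0.002)

    def stabilized_drive(plant, target_v=10.0, k=0.5, steps=200):
        """Integral-style feedback: measure, correct by a fraction of the error, repeat."""
        drive = target_v
        for _ in range(steps):
            error = target_v - plant.output(drive)
            drive += k * error
        return drive

    print(stabilized_drive(SimulatedAmplifier()))   # drive level needed to hold the target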
An Imaging System for Satellite Hypervelocity Impact Debris Characterization
NASA Technical Reports Server (NTRS)
Moraguez, Matthew; Patankar, Kunal; Fitz-Coy, Norman; Liou, J.-C.; Cowardin, Heather
2015-01-01
This paper discusses the design of an automated imaging system for size characterization of debris produced by the DebriSat hypervelocity impact test. The goal of the DebriSat project is to update satellite breakup models. A representative LEO satellite, DebriSat, was constructed and subjected to a hypervelocity impact test. The impact produced an estimated 85,000 debris fragments. The size distribution of these fragments is required to update the current satellite breakup models. An automated imaging system was developed for the size characterization of the debris fragments. The system uses images taken from various azimuth and elevation angles around the object to produce a 3D representation of the fragment via a space carving algorithm. The system consists of N point-and-shoot cameras attached to a rigid support structure that defines the elevation angle for each camera. The debris fragment is placed on a turntable that is incrementally rotated to desired azimuth angles. The number of images acquired can be varied based on the desired resolution. Appropriate background and lighting is used for ease of object detection. The system calibration and image acquisition process are automated to result in push-button operations. However, for quality assurance reasons, the system is semi-autonomous by design to ensure operator involvement. This paper describes the imaging system setup, calibration procedure, repeatability analysis, and the results of the debris characterization.
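To illustrate the space carving step, the sketch below keeps only those voxels of a candidate grid that project inside the object silhouette in every view, which is the essence of a visual-hull reconstruction. The orthographic "cameras", the spherical silhouette and the grid are all synthetic stand-ins; the real system uses the calibrated camera models for each azimuth/elevation combination.

    import numpy as np

    def carve(grid_pts, views):
        """Minimal space carving: keep a voxel only if it projects inside the object
        silhouette in every view. `views` is a list of (project, mask) pairs where
        project maps Nx3 points to Nx2 integer pixel coordinates."""
        keep = np.ones(len(grid_pts), dtype=bool)
        for project, mask in views:
            uv = project(grid_pts)
            ok = (uv[:, 0] >= 0) & (uv[:, 0] < mask.shape[1]) & \
                 (uv[:, 1] >= 0) & (uv[:, 1] < mask.shape[0])
            keep &= ok
            keep[keep] &= mask[uv[keep, 1], uv[keep, 0]]
        return grid_pts[keep]

    # Synthetic test: carve a sphere of radius 8 from three orthographic views.
    ax = np.arange(-10, 11)
    grid = np.array(np.meshgrid(ax, ax, ax)).reshape(3, -1).T.astype(float)
    disc = np.array(np.meshgrid(ax, ax)).reshape(2, -1).T
    mask = (np.hypot(disc[:, 0], disc[:, 1]) <= 8).reshape(21, 21)
    views = [(lambda p, i=i, j=j: (p[:, [i, j]] + 10).astype(int), mask)
             for i, j in [(0, 1), (0, 2), (1, 2)]]
    print(len(carve(grid, views)))    # number of voxels in the visual hull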
NASA Astrophysics Data System (ADS)
Prakash, A.; Cristobal, J.; Fochesatto, G. J.; Starkenburg, D. P.; Kane, D. L.; Gens, R.; Alfieri, J. G.; Irving, K.; Anderson, M. C.; Kustas, W.
2012-12-01
Evapotranspiration (ET) is a critical component of the hydrologic cycle in interior Alaska, accounting for about 74% of summer precipitation or 50% of annual precipitation, and it is a process that will become more important with increasing trends of climate warming, permafrost degradation, forest fire occurrence, and significant land cover change. In preparation for NASA's planned Hyperspectral Infrared Imager (HyspIRI) satellite mission, we have established two experimental sites in interior Alaska to measure representative ET values for typical boreal forest in this region as a basis to estimate and upscale ET from remote sensing solar and thermal data. The first site (University of Alaska Fairbanks, UAF, north campus) is located in a needleleaf forest mainly composed of black spruce (Picea mariana) and the second site (Caribou-Poker Creek Research Watershed) is in a deciduous forest mainly composed of paper birch (Betula papyrifera). Both field sites are equipped with sonic anemometers and gas analyzers at 24 m height operating at a 20 Hz sampling rate; additionally, the UAF north campus site includes sonic anemometers at 3 and 12 m. At 24 m, the tower is also equipped with a four-component net radiometer, and air temperature and pressure sensors are installed at different heights. To monitor ground heat, temperature and soil moisture sensors as well as heat flux plates have also been installed in the organic and subsurface soil layers. Additionally, Large Aperture Scintillometer (LAS) transmitter and receiver units with a separation of 1.2 km have been installed across the tower, ensuring a beam height of 24 m. Data are recorded on data loggers and downloaded for quality checking and processing on a weekly basis. Further details of the tower set-up are available at www.et.alaska.edu. Data from the field instruments are presented and their use for Alaska-specific ET model calibration is discussed. The field set-up provides all input data for ET modeling and for boundary layer micrometeorological research, and the field sites also have the potential to benefit cal/val activities for other planned missions such as SMAP, the Sentinel series, and EnMAP.
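The sonic anemometer and gas analyzer data described above are typically turned into fluxes by eddy covariance, i.e. covariances of vertical wind with temperature and humidity over an averaging period. The sketch below shows only that core calculation, on synthetic 20 Hz series; the coordinate rotation, despiking and density (WPL-type) corrections applied in a real processing chain are omitted, and the constants are nominal values.

    import numpy as np

    RHO_AIR = 1.2       # kg m-3, nominal near-surface air density
    CP = 1004.0         # J kg-1 K-1, specific heat of air
    LAMBDA_V = 2.45e6   # J kg-1, latent heat of vaporization

    def turbulent_fluxes(w, t_air, q):
        """Eddy-covariance sensible heat H = rho*cp*cov(w,T) and latent heat
        LE = lambda*rho*cov(w,q), with q the specific humidity (kg/kg)."""
        wp = w - w.mean()
        h = RHO_AIR * CP * np.mean(wp * (t_air - t_air.mean()))
        le = LAMBDA_V * RHO_AIR * np.mean(wp * (q - q.mean()))
        et = le / LAMBDA_V          # kg m-2 s-1, numerically equal to mm s-1 of water
        return h, le, et

    # Synthetic 30 min of 20 Hz data with some w-T and w-q correlation.
    rng = np.random.default_rng(3)
    w = rng.normal(0.0, 0.3, 20 * 1800)
    t = 15.0 + 0.5 * w + rng.normal(0.0, 0.1, w.size)
    q = 0.008 + 2e-4 * w + rng.normal(0.0, 5e-5, w.size)
    print(turbulent_fluxes(w, t, q))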
Vadose Zone Monitoring as a Key to Groundwater Protection from Pollution Hazard
NASA Astrophysics Data System (ADS)
Dahan, Ofer
2016-04-01
Minimizing subsurface pollution depends largely on the capability to provide real-time information on the chemical and hydrological properties of the percolating water. Today, most monitoring programs are based on observation wells that enable data acquisition from the saturated part of the subsurface. Unfortunately, identification of pollutants in well water is clear evidence that the contaminants have already crossed the entire vadose zone and accumulated in the aquifer water to detectable concentrations. Therefore, effective monitoring programs that aim at protecting groundwater from pollution hazard should include vadose zone monitoring technologies that are capable of providing real-time information on the chemical composition of the percolating water. Obviously, identification of pollution processes in the vadose zone may provide an early warning of potential risks to groundwater quality, long before contaminants reach the water table and accumulate in the aquifers. Since productive agriculture must inherently include down-leaching of excess lower quality water, understanding the mechanisms controlling transport and degradation of pollutants in the unsaturated zone is crucial for water resources management. A vadose-zone monitoring system (VMS), which was specially developed to enable continuous measurements of the hydrological and chemical properties of percolating water, was used to assess the impact of various agricultural setups on groundwater quality, including: (a) intensive organic and conventional greenhouses, (b) citrus orchards and open field crops, and (c) dairy farms. In these applications, frequent sampling of vadose zone water for chemical and isotopic analysis along with continuous measurement of water content was used to assess the link between agricultural setups and groundwater pollution potential. Transient data on variation in water content along with solute breakthrough at multiple depths were used to calibrate flow and transport models. These models were then used to assess the long-term impact of various agricultural setups on the quantity and quality of groundwater recharge. Relevant publications: Turkeltaub et al., WRR, 2016; Turkeltaub et al., J. Hydrol., 2015; Dahan et al., HESS, 2014; Baram et al., J. Hydrol., 2012.
NASA Astrophysics Data System (ADS)
Hailegeorgis, Teklu T.; Alfredsen, Knut; Abdella, Yisak S.; Kolberg, Sjur
2015-03-01
Identification of proper parameterizations of spatial heterogeneity is required for precipitation-runoff models. However, relevant studies with a specific aim at hourly runoff simulation in boreal mountainous catchments are not common. We conducted calibration and evaluation of hourly runoff simulation in a boreal mountainous watershed based on six different parameterizations of the spatial heterogeneity of subsurface storage capacity for a semi-distributed (subcatchments, hereafter called elements) and distributed (1 × 1 km2 grid) setup. We evaluated representation of element-to-element, grid-to-grid, and probabilistic subcatchment/subbasin, subelement and subgrid heterogeneities. The parameterization cases satisfactorily reproduced the streamflow hydrographs, with Nash-Sutcliffe efficiency values for the calibration and validation periods up to 0.84 and 0.86, respectively, and similarly for the log-transformed streamflow up to 0.85 and 0.90. The parameterizations reproduced the flow duration curves, but predictive reliability in terms of quantile-quantile (Q-Q) plots indicated marked over- and under-predictions. The simple and parsimonious parameterizations with no subelement or no subgrid heterogeneities provided equivalent simulation performance compared to the more complex cases. The results indicated that (i) identification of parameterizations requires measurements from denser precipitation stations than what is required for acceptable calibration of the precipitation-streamflow relationships, (ii) there are challenges in the identification of parameterizations based only on calibration to catchment-integrated streamflow observations, and (iii) there is a potential preference for the simple and parsimonious parameterizations for operational forecasting, contingent on their equivalent simulation performance for the available input data. In addition, the effects of non-identifiability of parameters (interactions and equifinality) can contribute to the non-identifiability of the parameterizations.
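The Nash-Sutcliffe efficiency (NSE) values quoted above follow the standard definition, which can be computed for both raw and log-transformed streamflow as in the sketch below; the numbers used are illustrative, not from the study.

```python
# Sketch of the Nash-Sutcliffe efficiency (NSE) used to score the hourly simulations;
# variable names and values are illustrative, not taken from the study.
import numpy as np

def nse(observed, simulated):
    observed, simulated = np.asarray(observed, float), np.asarray(simulated, float)
    return 1.0 - np.sum((observed - simulated) ** 2) / np.sum((observed - observed.mean()) ** 2)

q_obs = np.array([1.2, 3.4, 2.8, 5.1, 4.0])   # hourly streamflow, m3/s (made-up numbers)
q_sim = np.array([1.0, 3.1, 3.0, 4.8, 4.3])
print(nse(q_obs, q_sim))                       # raw flows emphasize peak performance
print(nse(np.log(q_obs), np.log(q_sim)))       # log-transformed flows emphasize low flows
```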
Multi-parameter fibre Bragg grating sensor-array for thermal vacuum cycling test
NASA Astrophysics Data System (ADS)
Cheng, L.; Ahlers, B.; Toet, P.; Casarosa, G.; Appolloni, M.
2017-11-01
Fibre Bragg Grating (FBG) sensor systems based on optical fibres are gaining interest in space applications. Studies on Structural Health Monitoring (SHM) of reusable launchers using FBG sensors have been carried out in the Future European Space Transportation Investigations Programme (FESTIP). Increasing investment in the development of FBG sensor applications is foreseen for the Future Launchers Preparatory Programme (FLPP). TNO has performed different SHM measurements with FBGs, including on the VEGA interstage [1, 2] in 2006. Within the current project, a multi-parameter FBG sensor array demonstrator system for temperature and strain measurements is designed, fabricated and tested under ambient as well as Thermal Vacuum (TV) conditions in a TV chamber at the European Space Agency (ESA) ESTEC site. The aim is the development of a multi-parameter measuring system based on FBG technology for space applications. During the TV tests of a Space Craft (S/C) or its subsystems, thermal measurements as well as strain measurements are needed by the engineers in order to verify their predictions and to validate their models. Because of the dimensions of the test specimen and the accuracy required of the measurement, a large number of observation/measuring points are needed. Conventional sensor systems require a complex routing of the cables connecting the sensors to their acquisition unit. This adds extra weight to the construction under test. FBG sensors are potentially light-weight and can easily be multiplexed in an array configuration. The different tasks comprise a demonstrator system design; its component selection, procurement and manufacturing; and finally its assembly. The temperature FBG sensor is calibrated in a dedicated laboratory setup down to liquid nitrogen (LN2) temperature at TNO. A temperature-wavelength calibration curve is generated. After definition of a test programme, a thermal vacuum setup is realised at ESA premises, including a mechanical strain transducer to generate strain via a dedicated feedthrough in the chamber. Thermocouples are used to log the temperature for comparison to the temperature FBG sensor. An extreme temperature range from -150°C to +70°C at a pressure down to 10-4 Pa (10-6 mbar) is covered, as well as testing under ambient conditions. In total, five thermal cycles are performed during a week-long test. The FBG temperature sensor test results from the ESA/ESTEC TV chamber reveal high reproducibility (within 1 °C) within the test temperature range without any evidence of hysteresis. Differences are detected with respect to the previous calibration curve. An investigation is performed to find the cause of the discrepancy. Differences between the test set-ups are identified. Equipment of the TNO test is checked and excluded as the cause. Additional experiments are performed. The discrepancy is most likely caused by a 'thermal shock' due to rapid cooling down to LN2 temperature, which results in a wavelength shift. Test data of the FBG strain sensor are analysed. The read-out of the FBG strain sensor varies with the temperature during the test. This can be caused by temperature-induced changes in the mechanical setup (fastening of the mechanical parts) or by the impact of temperature on the mechanical strain transfer to the FBG. Improvements are identified and recommendations given for future activities.
CCD image sensor induced error in PIV applications
NASA Astrophysics Data System (ADS)
Legrand, M.; Nogueira, J.; Vargas, A. A.; Ventas, R.; Rodríguez-Hidalgo, M. C.
2014-06-01
The readout procedure of charge-coupled device (CCD) cameras is known to generate some image degradation in different scientific imaging fields, especially in astrophysics. In the particular field of particle image velocimetry (PIV), widely extended in the scientific community, the readout procedure of the interline CCD sensor induces a bias in the registered position of particle images. This work proposes simple procedures to predict the magnitude of the associated measurement error. Generally, there are differences in the position bias for the different images of a certain particle at each PIV frame. This leads to a substantial bias error in the PIV velocity measurement (˜0.1 pixels). This is the order of magnitude that other typical PIV errors, such as peak-locking, may reach. Based on modern CCD technology and architecture, this work offers a description of the readout phenomenon and proposes a model for the magnitude of the CCD readout bias error. This bias, in turn, generates a velocity measurement bias error when there is an illumination difference between two successive PIV exposures. The model predictions match the experiments performed with two 12-bit-depth interline CCD cameras (MegaPlus ES 4.0/E incorporating the Kodak KAI-4000M CCD sensor with 4 megapixels). For different cameras, only two constant values are needed to fit the proposed calibration model and predict the error from the readout procedure. Tests by different researchers using different cameras would allow verification of the model, which can then be used to optimize acquisition setups. Simple procedures to obtain these two calibration values are also described.
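Since the abstract only states that two constants per camera suffice to fit the calibration model, without giving its functional form, the sketch below merely illustrates fitting a generic two-parameter bias model to measured bias versus the illumination ratio between exposures; the model form and the data are placeholders, not the authors' model.

```python
# Hypothetical sketch of fitting a two-constant readout-bias model; the actual model
# form is given in the paper and is NOT reproduced here -- f() below is a placeholder.
import numpy as np
from scipy.optimize import curve_fit

def f(illum_ratio, c1, c2):
    # placeholder two-parameter dependence of the position bias (pixels)
    # on the illumination ratio between the two PIV exposures
    return c1 * (1.0 - illum_ratio) + c2 * (1.0 - illum_ratio) ** 2

ratio = np.array([0.5, 0.7, 0.9, 1.0, 1.2])         # made-up calibration data
bias  = np.array([0.09, 0.055, 0.02, 0.0, -0.035])  # measured bias, pixels (made-up)
(c1, c2), _ = curve_fit(f, ratio, bias)
print(c1, c2)   # the two calibration constants for this particular camera
```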
Panetta, D; Belcari, N; Del Guerra, A; Bartolomei, A; Salvadori, P A
2012-04-01
This study investigates the reproducibility of the reconstructed image sharpness, after modifications of the geometry setup, for a variable magnification micro-CT (μCT) scanner. All the measurements were performed on a novel engineered μCT scanner for in vivo imaging of small animals (Xalt), which has been recently built at the Institute of Clinical Physiology of the National Research Council (IFC-CNR, Pisa, Italy), in partnership with the University of Pisa. The Xalt scanner is equipped with integrated software for on-line geometric recalibration, which was used throughout the experiments. In order to evaluate the losses of image quality due to modifications of the geometry setup, we made 22 consecutive acquisitions by alternately changing the system geometry between two different setups (Large FoV - LF, and High Resolution - HR). For each acquisition, the tomographic images were reconstructed before and after the on-line geometric recalibration. For each reconstruction, the image sharpness was evaluated using two different figures of merit: (i) the percentage contrast on a small bar pattern of fixed frequency (f = 5.5 lp/mm for the LF setup and f = 10 lp/mm for the HR setup) and (ii) the image entropy. We found that, due to the small-scale mechanical uncertainty (on the order of the voxel size), a recalibration is necessary for each geometric setup after repositioning of the system's components; the resolution losses due to the lack of recalibration are worse for the HR setup (voxel size = 18.4 μm). The integrated on-line recalibration algorithm of the Xalt scanner allowed the recalibration to be performed quickly, restoring the spatial resolution of the system to the reference resolution obtained after the initial (off-line) calibration.
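Of the two figures of merit, the image entropy is straightforward to reproduce generically; the sketch below computes the Shannon entropy of the gray-level histogram of a reconstructed slice and is an illustration only, not the Xalt recalibration software.

```python
# Generic Shannon entropy of the gray-level histogram, one of the two figures of merit
# named in the abstract; illustration only, not the Xalt on-line recalibration code.
import numpy as np

def image_entropy(img, n_bins=256):
    hist, _ = np.histogram(img.ravel(), bins=n_bins)
    p = hist / hist.sum()
    p = p[p > 0]                      # ignore empty bins
    return -np.sum(p * np.log2(p))    # entropy in bits

recon = np.random.rand(512, 512)      # stand-in for a reconstructed CT slice
print(image_entropy(recon))
```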
Calibration Procedures on Oblique Camera Setups
NASA Astrophysics Data System (ADS)
Kemper, G.; Melykuti, B.; Yu, C.
2016-06-01
Beside the creation of virtual animated 3D city models and analysis for homeland security and city planning, the accurate determination of geometric features from oblique imagery is an important task today. Due to the huge number of single images, the reduction of control points forces the use of direct referencing devices. This requires a precise camera calibration and additional adjustment procedures. This paper aims to show the workflow of the various calibration steps and presents examples of the calibration flight together with the final 3D city model. In contrast to most other software, the oblique cameras are not used as co-registered sensors in relation to the nadir one; all camera images enter the AT process as single pre-oriented data. This enables a better post calibration in order to detect variations in the single camera calibration and other mechanical effects. The sensor shown (Oblique Imager) is based on 5 Phase One cameras, where the nadir one has 80 MPix and is equipped with a 50 mm lens, while the oblique ones capture images with 50 MPix using 80 mm lenses. The cameras are mounted robustly inside a housing to protect them against physical and thermal deformations. The sensor head also hosts an IMU, which is connected to a POS AV GNSS receiver. The sensor is stabilized by a gyro-mount, which creates floating antenna-IMU lever arms. These had to be registered together with the raw GNSS-IMU data. The camera calibration procedure was performed based on a special calibration flight with 351 shots of all 5 cameras and registered GPS/IMU data. This specific mission was designed at two different altitudes, with additional cross lines at each flying height. The five images from each exposure position have no overlaps, but in the block there are many overlaps, resulting in up to 200 measurements per point. On each photo there were on average 110 well-distributed measured points, which is a sufficient number for the camera calibration. In a first step, with the help of the nadir camera and the GPS/IMU data, an initial orientation correction and radial correction were calculated. With this approach, the whole project was calculated and calibrated in one step. During the iteration process the radial and tangential parameters were switched on individually for the camera heads, and after that the camera constants and principal point positions were checked and finally calibrated. Besides that, the boresight calibration can be performed either on the basis of the nadir camera and its offsets, or independently for each camera without correlation to the others. This must be performed in a complete mission anyway to obtain stability between the single camera heads. Determining the lever arms from the nodal points to the IMU centre needs more caution than for a single camera, especially due to the strong tilt angle. With all these steps prepared, one obtains a highly accurate sensor that enables fully automated data extraction with a rapid update of existing data. Frequent monitoring of urban dynamics is then possible in a fully 3D environment.
NASA Astrophysics Data System (ADS)
Panagopoulos, Yiannis; Gassman, Philip W.; Jha, Manoj K.; Kling, Catherine L.; Campbell, Todd; Srinivasan, Raghavan; White, Michael; Arnold, Jeffrey G.
2015-05-01
Nonpoint source pollution from agriculture is the main source of nitrogen and phosphorus in the stream systems of the Corn Belt region in the Midwestern US. This region comprises two large river basins, the intensely row-cropped Upper Mississippi River Basin (UMRB) and Ohio-Tennessee River Basin (OTRB), which are considered the key contributing areas for the Northern Gulf of Mexico hypoxic zone according to the US Environmental Protection Agency. Thus, in this area it is of utmost importance to ensure that intensive agriculture for food, feed and biofuel production can coexist with a healthy water environment. To address these objectives within a river basin management context, an integrated modeling system has been constructed with the hydrologic Soil and Water Assessment Tool (SWAT) model, capable of estimating river basin responses to alternative cropping and/or management strategies. To improve modeling performance compared to previous studies and provide a spatially detailed basis for scenario development, this SWAT Corn Belt application incorporates a greatly refined subwatershed structure based on 12-digit hydrologic units or 'subwatersheds' as defined by the US Geological Survey. The model setup, calibration and validation are time-demanding and challenging tasks for these large systems, given the scale-intensive data requirements and the need to ensure the reliability of flow and pollutant load predictions at multiple locations. Thus, the objectives of this study are both to comprehensively describe this large-scale modeling approach, providing estimates of pollution and crop production in the region, and to present strengths and weaknesses of integrated modeling at such a large scale along with how it can be improved on the basis of the current modeling structure and results. The predictions were based on a semi-automatic hydrologic calibration approach for large-scale and spatially detailed modeling studies, with the use of the Sequential Uncertainty Fitting algorithm (SUFI-2) and the SWAT-CUP interface, followed by a manual water quality calibration on a monthly basis. The refined modeling approach developed in this study led to successful predictions across most parts of the Corn Belt region and can be used for testing pollution mitigation measures and agricultural economic scenarios, providing useful information to policy makers and recommendations on similar efforts at the regional scale.
ESTABLISHING BRDF CALIBRATION CAPABILITIES THROUGH SHORTWAVE INFRARED
Georgiev, Georgi T.; Butler, James J.; Thome, Kurt; Cooksey, Catherine; Ding, Leibo
2017-01-01
Satellite instruments operating in the reflective solar wavelength region require accurate and precise determination of the Bidirectional Reflectance Distribution Functions (BRDFs) of the laboratory and flight diffusers used in their pre-flight and on-orbit calibrations. This paper advances that initial work and presents a comparison of the spectral Bidirectional Reflectance Distribution Function (BRDF) and Directional Hemispherical Reflectance (DHR) of Spectralon, a common material for laboratory and on-orbit flight diffusers. A new measurement setup for BRDF measurements from 900 nm to 2500 nm located at NASA Goddard Space Flight Center (GSFC) is described. The GSFC setup employs an extended indium gallium arsenide detector, bandpass filters, and a supercontinuum light source. Comparisons of the GSFC BRDF measurements in the shortwave infrared (SWIR) with those made by the National Institute of Standards and Technology (NIST) Spectral Tri-function Automated Reference Reflectometer (STARR) are presented. The Spectralon sample used in this study was a 2-inch-diameter, 99% white pressed and sintered polytetrafluoroethylene (PTFE) target. The NASA/NIST BRDF comparison measurements were made at an incident angle of 0° and a viewing angle of 45°. Additional BRDF data not compared to NIST were measured at other incident and viewing angle geometries and are not presented here. The total combined uncertainty for the measurement of BRDF in the SWIR range made by the GSFC scatterometer is less than 1% (k = 1). This study is in support of the calibration of the Radiation Budget Instrument (RBI) and Visible Infrared Imaging Radiometer Suite (VIIRS) instruments of the Joint Polar Satellite System (JPSS) and other current and future NASA remote sensing missions operating across the reflected solar wavelength region.
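For reference, the standard definitions assumed here (not specific to the GSFC scatterometer) relate the BRDF to the reflected radiance per unit incident irradiance, with the DHR obtained by integrating the BRDF over the reflected hemisphere:

```latex
% Standard radiometric definitions (assumed, not instrument-specific): BRDF as the
% ratio of reflected radiance to incident irradiance, and DHR as its projected-solid-
% angle integral over the reflected hemisphere.
\begin{align}
  f_r(\theta_i,\phi_i;\theta_r,\phi_r;\lambda) &=
      \frac{dL_r(\theta_r,\phi_r;\lambda)}{dE_i(\theta_i,\phi_i;\lambda)}
      \quad [\mathrm{sr^{-1}}] \\
  \rho_{\mathrm{DHR}}(\theta_i;\lambda) &=
      \int_{0}^{2\pi}\!\!\int_{0}^{\pi/2}
      f_r(\theta_i,\phi_i;\theta_r,\phi_r;\lambda)\,
      \cos\theta_r \sin\theta_r\, d\theta_r\, d\phi_r
\end{align}
```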
Development of a Machine-Vision System for Recording of Force Calibration Data
NASA Astrophysics Data System (ADS)
Heamawatanachai, Sumet; Chaemthet, Kittipong; Changpan, Tawat
This paper presents the development of a new system for recording force calibration data using machine vision technology. A real-time camera and a computer system were used to capture images of the readings from the instruments during calibration. The measurement images were then transformed and translated to numerical data using an optical character recognition (OCR) technique. These numerical data, along with the raw images, were automatically saved to memory as the calibration database files. With this new system, the human error of manual recording is eliminated. Verification experiments were done by using this system to record the measurement results from an amplifier (DMP 40) with a load cell (HBM-Z30-10kN). The NIMT's 100-kN deadweight force standard machine (DWM-100kN) was used to generate test forces. The experiments were set up in three categories: 1) dynamic condition (recording during load changes), 2) static condition (recording during a fixed load), and 3) full calibration experiments in accordance with ISO 376:2011. The captured images from the dynamic-condition experiment gave >94% of images without overlapping digits. The results from the static-condition experiment were >98% of images without overlapping digits. All measurement images without overlapping digits were translated to numbers by the developed program with 100% accuracy. The full calibration experiments also gave 100% accurate results. Moreover, in case of an incorrect translation of any result, it is also possible to trace back to the raw calibration image to check and correct it. Therefore, this machine-vision-based system and program should be appropriate for recording force calibration data.
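A minimal sketch of the capture-then-OCR step is given below, assuming OpenCV and Tesseract as stand-ins; the actual NIMT program, camera interface and display format are not reproduced here.

```python
# Hypothetical sketch of the capture-then-OCR step (OpenCV + Tesseract assumed);
# not the actual NIMT recording program.
import cv2
import pytesseract

def read_display(frame):
    """Return the numeric reading from an image of the instrument display."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    text = pytesseract.image_to_string(
        binary, config="--psm 7 -c tessedit_char_whitelist=0123456789.-")
    return text.strip()

cap = cv2.VideoCapture(0)                          # real-time camera
ok, frame = cap.read()
if ok:
    cv2.imwrite("calibration_frame.png", frame)    # keep the raw image for traceability
    print(read_display(frame))
cap.release()
```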
Biodiesel/Cummins CRADA Report
2014-07-01
sediment on the Racor turbine , coalescing centrifuge, check ball, and rubber seal pieces. The fuel flow sensor was opened, and a small obstruction was...is a wireless router that connects the computer (via BlueTooth) with a remote site (via the cellular network). This setup allowed the test team to...2 ¾” shaft on the BUSL. The data appeared to be accurate for short periods, and then appeared to lose calibration due to sensor misalignment
WHOI Hawaii Ocean Timeseries Station (WHOTS): WHOTS-6 2009 Mooring Turnaround Cruise Report
2010-01-01
pyranometers . This report describes the set-up on the ship, the procedures adopted, and some preliminary, and necessarily incomplete, results from...discrepancy in the net energy budget. The collection of recently calibrated pyranometers on this cruise, from two manufacturers and different...the bow tower. To complete the PSD air-sea flux system, pyranometers and pyrgeometers (Eppley and Kipp&Zonen) were mounted on top of pole on the 03
2016-04-04
Final 3. DATES COVERED (From - To) 4. TITLE AND SUBTITLE Test Operations Procedure (TOP) 03-2-827 Test Procedures for Video Target Scoring Using...ABSTRACT This Test Operations Procedure (TOP) describes typical equipment and procedures to setup and operate a Video Target Scoring System (VTSS) to...lights. 15. SUBJECT TERMS Video Target Scoring System, VTSS, witness screens, camera, target screen, light pole 16. SECURITY
Taylor, Alexander J; Granwehr, Josef; Lesbats, Clémentine; Krupa, James L; Six, Joseph S; Pavlovskaya, Galina E; Thomas, Neil R; Auer, Dorothee P; Meersmann, Thomas; Faas, Henryk M
2016-01-01
Due to the low fluorine background signal in vivo, 19F is a good marker to study the fate of exogenous molecules by magnetic resonance imaging (MRI) using equilibrium nuclear spin polarization schemes. Since 19F MRI applications require high sensitivity, it can be important to assess experimental feasibility already at the design stage by estimating the minimum detectable fluorine concentration. Here we propose a simple method for the calibration of MRI hardware, providing sensitivity estimates for a given scanner and coil configuration. An experimental "calibration factor" to account for variations in coil configuration and hardware set-up is specified. Once it has been determined in a calibration experiment, the sensitivity of an experiment or, alternatively, the minimum number of required spins or the minimum marker concentration can be estimated without the need for a pilot experiment. The definition of this calibration factor is derived from standard equations for the sensitivity in magnetic resonance, yet the method is not restricted by the limited validity of these equations, since additional instrument-dependent factors are implicitly included during calibration. The method is demonstrated using MR spectroscopy and imaging experiments with different 19F samples, both paramagnetically and susceptibility broadened, to approximate a range of realistic environments.
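As a heavily simplified illustration of how such a sensitivity estimate can be used (not the calibration-factor definition from the paper), the sketch below scales a calibration measurement to a planned experiment assuming SNR proportional to concentration and to the square root of acquisition time.

```python
# First-order scaling sketch only -- NOT the calibration-factor definition from the
# paper. Assumes SNR scales linearly with spin concentration and with the square root
# of the acquisition time for a fixed coil/hardware configuration.
def min_detectable_concentration(c_cal, snr_cal, t_cal, t_exp, snr_min=3.0):
    return c_cal * (snr_min / snr_cal) * (t_cal / t_exp) ** 0.5

# e.g. a 50 mM phantom giving SNR 200 in 10 min, planned 30 min experiment (made-up numbers):
print(min_detectable_concentration(c_cal=50e-3, snr_cal=200.0, t_cal=600.0, t_exp=1800.0))
```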
Techniques for Measuring Solubility and Electrical Conductivity in Molten Salts
NASA Astrophysics Data System (ADS)
Su, Shizhao; Villalon, Thomas; Pal, Uday; Powell, Adam
A eutectic MgF2-CaF2 based salt containing YF3, CaO and Al2O3 additions was used in this study. The electrical conductivity was measured as a function of temperature by a calibration-free coaxial electrode setup. The materials selection and setup design were optimized to accurately measure the electrical conductivity of the highly conductive molten salts (>1 S/cm). The solubility and diffusion behavior of alumina and zirconia in the molten salts were investigated by drawing and holding the molten salt for different lengths of time within capillary tubes made of alumina and zirconia, respectively. After the time-dependent high-temperature holds, the samples were cooled and the solubility of the solute within the molten salt was determined using scanning electron microscopy, energy-dispersive X-ray spectroscopy analysis and wavelength-dispersive X-ray spectroscopy analysis.
Optical tweezers with 2.5 kHz bandwidth video detection for single-colloid electrophoresis
NASA Astrophysics Data System (ADS)
Otto, Oliver; Gutsche, Christof; Kremer, Friedrich; Keyser, Ulrich F.
2008-02-01
We developed an optical tweezers setup to study the electrophoretic motion of colloids in an external electric field. The setup is based on standard components for illumination and video detection. Our video-based optical tracking of the colloid motion has a time resolution of 0.2 ms, resulting in a bandwidth of 2.5 kHz. This enables calibration of the optical tweezers by Brownian motion without the use of a quadrant photodetector. We demonstrate that our system has a spatial resolution of 0.5 nm and a force sensitivity of 20 fN, using a Fourier algorithm to detect periodic oscillations of the trapped colloid caused by an external AC field. The electrophoretic mobility and zeta potential of a single colloid can be extracted in aqueous solution, avoiding the screening effects common for usual bulk measurements.
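Calibration by Brownian motion is typically done by fitting a Lorentzian to the power spectrum of the bead position; the sketch below shows that standard procedure under assumed bead and buffer parameters and is not taken from the paper.

```python
# Standard power-spectrum (Lorentzian) calibration of trap stiffness, made possible here
# by the 2.5 kHz video bandwidth; symbols and default values are illustrative assumptions.
import numpy as np
from scipy.optimize import curve_fit

def lorentzian(f, D, fc):
    return D / (2 * np.pi**2 * (fc**2 + f**2))

def trap_stiffness(x, fs, radius, eta=1e-3, T=295.0):
    """x: position trace (m), fs: sampling rate (Hz), radius: bead radius (m). Returns k in N/m."""
    psd = np.abs(np.fft.rfft(x - x.mean()))**2 / (fs * len(x))   # one-sided PSD estimate
    f = np.fft.rfftfreq(len(x), 1.0 / fs)
    mask = (f > 5) & (f < 0.8 * fs / 2)                          # avoid DC and the aliasing edge
    (D, fc), _ = curve_fit(lorentzian, f[mask], psd[mask], p0=[1e-12, 100.0])
    gamma = 6 * np.pi * eta * radius                             # Stokes drag coefficient
    return 2 * np.pi * gamma * fc                                # stiffness k = 2*pi*gamma*fc
```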
High-force magnetic tweezers with force feedback for biological applications
NASA Astrophysics Data System (ADS)
Kollmannsberger, Philip; Fabry, Ben
2007-11-01
Magnetic micromanipulation using magnetic tweezers is a versatile biophysical technique and has been used for single-molecule unfolding, rheology measurements, and studies of force-regulated processes in living cells. This article describes an inexpensive magnetic tweezer setup for the application of precisely controlled forces up to 100nN onto 5μm magnetic beads. High precision of the force is achieved by a parametric force calibration method together with a real-time control of the magnetic tweezer position and current. High forces are achieved by bead-magnet distances of only a few micrometers. Applying such high forces can be used to characterize the local viscoelasticity of soft materials in the nonlinear regime, or to study force-regulated processes and mechanochemical signal transduction in living cells. The setup can be easily adapted to any inverted microscope.
NASA Astrophysics Data System (ADS)
Aerts, W.; Baire, Q.; Bruyninx, C.; Legrand, J.; Pottiaux, E.
2012-12-01
A new multi-GNSS IGS reference station, BRUX, has been installed in Brussels. It replaces the former IGS reference station BRUS, which had to be dismantled because of construction works. The antenna of BRUX is sited on top of a telescope dome. Although this might be an unfortunate choice from an electromagnetic point of view, the siting is very convenient for other reasons. Being close to the time lab hosting the atomic clocks, the cable length is within acceptable and affordable limits, both for cost and signal loss reasons. Moreover, the site offers an open sky view, which can indeed be expected from a former telescope siting. The dome is entirely metal, hence shielding of the dome was required in order to mitigate multipath propagation. This was achieved using a metal shield topped with RF absorbing material and respecting a certain antenna-to-absorber spacing in order not to alter the antenna phase center offset (PCO) and variations (PCVs) too much. This would otherwise render the individual calibration of the antenna, in an anechoic chamber in the case of BRUX, invalid. But even taking all precautions, the PCO and PCVs of the calibration do not exactly equal those after installation. Moreover, different calibrations of the same antenna, in an anechoic chamber and by an outdoor robot, have been shown to result in PCO and PCVs that differ by up to several mm at certain azimuths and elevations. A test set-up with 6 such redundantly calibrated GNSS antennas revealed that the calibration differences can reach 8 mm on the ionosphere-free frequency, which amplifies the calibration differences by a factor of three compared to L1 and L2 only. The use of different receiver antenna calibration models can impact position at almost the centimeter level. In an attempt to align the historical time series for BRUS with the (future) data for BRUX, the tie between the new station BRUX and the old IGS station BRUS was determined using terrestrial measurements as well as GPS. In the case of GPS, several L1, L2 and ionosphere-free baseline measurements were performed using state-of-the-art type-mean receiver antenna calibrations as well as individual calibrations. Differences between the different GPS measurements are several mm, while the differences between the terrestrial tie and the GPS ties reach almost the cm level. One contribution to the error budget is the absence of an individual calibration for the BRUS antenna; another is the difference in PCO and PCVs on site, as opposed to at calibration, as already mentioned.
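The factor-of-three amplification follows from the standard GPS ionosphere-free carrier-phase combination, whose coefficients for L1/L2 are roughly 2.55 and -1.55:

```latex
% Standard GPS ionosphere-free (L3) carrier-phase combination, included to show why
% antenna-calibration differences are amplified by roughly a factor of three.
\begin{align}
  \Phi_{\mathrm{IF}} &= \frac{f_1^2\,\Phi_1 - f_2^2\,\Phi_2}{f_1^2 - f_2^2}
   \approx 2.546\,\Phi_1 - 1.546\,\Phi_2
   \quad (f_1 = 1575.42\,\mathrm{MHz},\; f_2 = 1227.60\,\mathrm{MHz}) \\
  \delta_{\mathrm{IF}} &\approx 2.546\,\delta_1 - 1.546\,\delta_2
   \;\Rightarrow\; |\delta_{\mathrm{IF}}| \lesssim 2.546\,|\delta_1| + 1.546\,|\delta_2|
\end{align}
```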
GIADA: extended calibration activities before the comet encounter
NASA Astrophysics Data System (ADS)
Accolla, Mario; Sordini, Roberto; Della Corte, Vincenzo; Ferrari, Marco; Rotundi, Alessandra
2014-05-01
The Grain Impact Analyzer and Dust Accumulator - GIADA - is one of the payloads on board the Rosetta Orbiter. Its three detection sub-systems are able to measure the speed, momentum, mass and optical cross section of single cometary grains and the dust flux ejected by the periodic comet 67P/Churyumov-Gerasimenko. During the hibernation phase of the Rosetta mission, we performed a dedicated extended calibration activity on the GIADA Proto Flight Model (accommodated in a clean room in our laboratory) involving two of the three sub-systems constituting GIADA, i.e. the Grain Detection System (GDS) and the Impact Sensor (IS). Our aim is to derive a new set of response curves for these two sub-systems and to correlate them with the calibration curves obtained in 2002 for the GIADA payload onboard the Rosetta spacecraft, in order to improve the interpretation of the forthcoming scientific data. For the extended calibration we dropped or shot into the GIADA PFM a statistically relevant number of grains (about one hundred) acting as cometary dust analogues. We studied the response of the GDS and IS as a function of grain composition, size and velocity. Different terrestrial materials were selected as cometary analogues according to the more recent knowledge gained through the analyses of Interplanetary Dust Particles and cometary samples returned from comet 81P/Wild 2 (Stardust mission). For each material, we produced grains with sizes ranging from 20 to 500 μm in diameter, which were characterized by FESEM and micro-IR spectroscopy. The grains were then shot into the GIADA PFM with speeds ranging between 1 and 100 m s-1. According to the estimation reported in Fink & Rubin (2012), this range is representative of the dust particle velocities expected at the comet and lies within the GIADA velocity sensitivity range (i.e. 1-100 m s-1 for GDS and 1-300 m s-1 for GDS+IS). The response curves obtained using the data collected during the GIADA PFM extended calibration will be linked to the on-ground calibration data collected during the instrument qualification campaign (performed on both the Flight and Spare Models in 2002). The final aim is to rescale the extended calibration data obtained with the GIADA PFM to the GIADA presently onboard the Rosetta spacecraft. In this work we present the experimental procedures and the setup used for the calibration activities, particularly focusing on the new response curves of the GDS and IS sub-systems obtained for the different cometary dust analogues. These curves will be critical for the future interpretation of the scientific data. Fink, U. & Rubin, M. (2012), The calculation of Afρ and mass loss rate for comets, Icarus, 221(2), 721-734.
Technology for detecting spectral radiance by a snapshot multi-imaging spectroradiometer
NASA Astrophysics Data System (ADS)
Zuber, Ralf; Stührmann, Ansgar; Gugg-Helminger, Anton; Seckmeyer, Gunther
2017-12-01
Technologies to determine spectral sky radiance distributions have evolved in recent years and have enabled new applications in remote sensing, sky radiance measurements, biological/diagnostic applications and luminance measurements. Most classical spectral imaging radiance technologies are based on mechanical and/or spectral scans. However, these methods require scanning time during which the spectral radiance distribution might change. To overcome this limitation, different so-called snapshot spectral imaging technologies have been developed that enable spectral and spatial non-scanning measurements. We present a new setup based on a facet mirror of the kind already used in image-slicing spectrometers. By duplicating the input image instead of slicing it and using a specially designed entrance slit, we are able to select nearly 200 (14 × 14) channels within the field of view (FOV) for detecting spectral radiance in different directions. In addition, a megapixel image of the FOV is captured by an additional RGB camera. This image can be mapped onto the snapshot spectral image. In this paper, the mechanical setup, technical design considerations and first measurement results of a prototype are presented. For a proof of concept, the device was radiometrically calibrated and a 10 mm × 10 mm test pattern was measured within a spectral range of 380-800 nm with an optical bandwidth of 10 nm (full width at half maximum, FWHM). To show its potential in the UV spectral region, zenith sky radiance measurements of a clear sky were performed in the UV. For this, the prototype was equipped with an entrance optic with a FOV of 0.5° and modified to obtain a radiometrically calibrated spectral range of 280-470 nm with a FWHM of 3 nm. The measurement results were compared to modeled data processed by UVSPEC and showed deviations of less than 30%. This is far from ideal, but it is an acceptable result with respect to available state-of-the-art intercomparisons.
NASA Technical Reports Server (NTRS)
Benner, D. Chris
1998-01-01
This cooperative agreement has investigated a number of spectroscopic problems of interest to the Halogen Occultation Experiment (HALOE). The types of studies performed are in two parts, namely, those that involve the testing and characterization of correlation spectrometers and those that provide basic molecular spectroscopic information. In addition, some solar studies were performed with the calibration data returned by HALOE from orbit. In order to accomplish this a software package was written as part of this cooperative agreement. The HALOE spectroscopic instrument package was used in various tests of the HALOE flight instrument. These included the spectral response test, the early stages of the gas response test and various spectral response tests of the detectors and optical elements of the instruments. Considerable effort was also expended upon the proper laboratory setup for many of the prelaunch tests of the HALOE flight instrument, including the spectral response test and the gas response test. These tests provided the calibration and the assurance that the calibration was performed correctly.
A fully automated temperature-dependent resistance measurement setup using van der Pauw method
NASA Astrophysics Data System (ADS)
Pandey, Shivendra Kumar; Manivannan, Anbarasu
2018-03-01
The van der Pauw (VDP) method is widely used to determine the resistance of planar homogeneous samples with four contacts placed on their periphery. We have developed a fully automated thin film resistance measurement setup using the VDP method with the capability of precisely measuring a wide range of thin film resistances, from a few mΩ up to 10 GΩ, under controlled temperatures from room temperature up to 600 °C. The setup utilizes a robust, custom-designed switching network board (SNB) for measuring current-voltage characteristics automatically in four different source-measure configurations based on the VDP method. Moreover, the SNB is connected with low-noise shielded coaxial cables that reduce the effect of leakage current as well as the capacitance in the circuit, thereby enhancing the accuracy of the measurement. In order to enable precise and accurate resistance measurement of the sample, a wide range of sourcing currents/voltages is pre-determined, with the capability of auto-tuning over ~12 orders of magnitude of variation in resistance. Furthermore, the setup has been calibrated with standard samples and employed to investigate temperature-dependent resistance (a few Ω to 10 GΩ) for various chalcogenide-based phase-change thin films (Ge2Sb2Te5, Ag5In5Sb60Te30, and In3SbTe2). This setup would be highly helpful for measuring the temperature-dependent resistance of a wide range of materials, i.e., metals, semiconductors, and insulators, revealing information about structural changes with temperature as reflected by changes in resistance, which is useful for numerous applications.
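The VDP evaluation itself reduces to solving the van der Pauw equation for the sheet resistance; a minimal numerical sketch (with made-up resistances) is shown below.

```python
# Sketch of the van der Pauw sheet-resistance evaluation such a setup relies on:
# solve exp(-pi*R_A/R_s) + exp(-pi*R_B/R_s) = 1 numerically for R_s.
# Generic illustration; the resistance values are made up.
import numpy as np
from scipy.optimize import brentq

def sheet_resistance(R_A, R_B):
    f = lambda Rs: np.exp(-np.pi * R_A / Rs) + np.exp(-np.pi * R_B / Rs) - 1.0
    lo, hi = 1e-6 * max(R_A, R_B), 1e6 * max(R_A, R_B)   # bracket the root
    return brentq(f, lo, hi)

R_A, R_B = 120.0, 95.0     # averaged "horizontal"/"vertical" four-point resistances, ohms
Rs = sheet_resistance(R_A, R_B)
print(Rs)                   # sheet resistance in ohm/sq; resistivity = Rs * film thickness
```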
NASA Astrophysics Data System (ADS)
Jung, H.; Carruthers, T.; Allison, M. A.; Weathers, D.; Moss, L.; Timmermans, H.
2017-12-01
Pacific Island communities are highly vulnerable to the effects of climate change, specifically accelerating rates of sea level rise and changes to storm intensity and associated rainfall patterns resulting in flooding and shoreline erosion. Nature-based adaptation is being planned not only to reduce the risk from shoreline erosion, but also to support the benefits of a healthy ecosystem (e.g., supporting fisheries or coral reefs). In order to assess the potential effectiveness of the nature-based actions to dissipate wave energy, two-dimensional X-Beach models were developed to predict the wave attenuation effect of coastal adaptation actions at the pilot sites, the villages of Naselesele and Somosomo on Taveuni Island, Fiji. Both sites are experiencing serious shoreline erosion due to sea level rise and storm waves. The water depth (single-beam bathymetry), land elevation (truck-based LiDAR), and vegetation data including stem density and height were collected at both locations in a June 2017 field experiment. Wave height and water velocity were also measured for the model setup and calibration using a series of bottom-mounted instruments deployed in the 0-15 m water depth portions of the study grid. The calibrated model will be used to evaluate a range of possible adaptation actions identified by the community members of Naselesele and Somosomo. In particular, multiple storm scenario runs with management-relevant shoreline restoration/adaptation options will be implemented to evaluate the efficiency of each adaptation action (e.g., no action, with additional planted trees, with sand mining, with seawalls constructed with natural materials, etc.). These model results will help to better understand how proposed adaptation actions may influence future shoreline change and maximize benefits to communities in island nations across the SW Pacific.
Liu, S.; Anderson, P.; Zhou, G.; Kauffman, B.; Hughes, F.; Schimel, D.; Watson, Vicente; Tosi, Joseph
2008-01-01
Objectively assessing the performance of a model and deriving model parameter values from observations are critical and challenging tasks in landscape to regional modeling. In this paper, we applied a nonlinear inversion technique to calibrate the ecosystem model CENTURY against carbon (C) and nitrogen (N) stock measurements collected from 39 mature tropical forest sites in seven life zones in Costa Rica. Net primary productivity from the Moderate-Resolution Imaging Spectroradiometer (MODIS), and C and N stocks in aboveground live biomass, litter, coarse woody debris (CWD), and soils were used to calibrate the model. To investigate how well the available observations could resolve different numbers of adjustable parameters, inversion was performed using nine setups of adjustable parameters. Statistics including observation sensitivity, parameter correlation coefficient, parameter sensitivity, and parameter confidence limits were used to evaluate the information content of the observations, the resolution of model parameters, and overall model performance. Results indicated that soil organic carbon content, soil nitrogen content, and total aboveground biomass carbon had the highest information contents, while measurements of carbon in litter and nitrogen in CWD contributed little to the parameter estimation process. The available information could resolve the values of 2-4 parameters. Adjusting just one parameter resulted in under-fitting and unacceptable model performance, while adjusting five parameters simultaneously led to over-fitting. Results further indicated that the MODIS NPP values were compressed compared with the spatial variability of net primary production (NPP) values inferred from inverse modeling. Using inverse modeling to infer NPP and other sensitive model parameters from C and N stock observations provides an opportunity to utilize data collected by national to regional forest inventory systems to reduce uncertainties in the carbon cycle and generate valuable databases to validate and improve MODIS NPP algorithms.
A Study of Vicon System Positioning Performance.
Merriaux, Pierre; Dupuis, Yohan; Boutteau, Rémi; Vasseur, Pascal; Savatier, Xavier
2017-07-07
Motion capture setups are used in numerous fields. Studies based on motion capture data can be found in biomechanics, sport and animal science. Clinical science studies include gait analysis as well as balance, posture and motor control. Robotic applications encompass object tracking, and everyday applications include entertainment and augmented reality. Still, few studies investigate the positioning performance of motion capture setups. In this paper, we study the positioning performance of one player in marker-based optoelectronic motion capture: the Vicon system. Our protocol includes evaluations of static and dynamic performance. Mean error as well as positioning variability are studied with calibrated ground truth setups that are not based on other motion capture modalities. We introduce a new setup that enables direct estimation of the absolute positioning accuracy for dynamic experiments, contrary to state-of-the-art works that rely on inter-marker distances. The system performs well in static experiments, with a mean absolute error of 0.15 mm and a variability lower than 0.025 mm. Our dynamic experiments were carried out at speeds found in real applications. Our work suggests that the system error is less than 2 mm. We also found that marker size and Vicon sampling rate must be carefully chosen with respect to the speeds encountered in the application in order to reach optimal positioning performance, which reached 0.3 mm in our dynamic study.
Covariance Matrix Adaptation Evolutionary Strategy for Drift Correction of Electronic Nose Data
NASA Astrophysics Data System (ADS)
Di Carlo, S.; Falasconi, M.; Sanchez, E.; Sberveglieri, G.; Scionti, A.; Squillero, G.; Tonda, A.
2011-09-01
Electronic Noses (ENs) might represent a simple, fast, high-sample-throughput and economic alternative to conventional analytical instruments [1]. However, gas sensor drift still limits EN adoption in real industrial setups due to the high recalibration effort and cost [2]. In fact, pattern recognition (PaRC) models built in the training phase become useless after a period of time, in some cases a few weeks. Although algorithms to mitigate drift date back to the early 1990s, this is still a challenging issue for the chemical sensor community [3]. Among other approaches, adaptive drift correction methods adjust the PaRC model in parallel with data acquisition without the need for periodic calibration. Self-Organizing Maps (SOMs) [4] and Adaptive Resonance Theory (ART) networks [5] have already been tested in the past with fair success. This paper presents and discusses an original methodology based on the Covariance Matrix Adaptation Evolution Strategy (CMA-ES) [6], suited for stochastic optimization of complex problems.
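As an illustration of how CMA-ES can be plugged into such an adaptive correction (assuming the pycma package; the objective and data are toy stand-ins, not the PaRC model of the paper):

```python
# Illustrative use of CMA-ES (pycma package assumed) to search for a drift-correction
# vector subtracted from incoming EN feature vectors; the objective is a toy criterion,
# not the pattern-recognition model of the paper.
import numpy as np
import cma

rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 6))                              # archived (calibrated) features
X_drift = X_train + np.array([0.4, -0.2, 0.1, 0.0, 0.3, -0.1])   # drifted measurements

def objective(shift):
    # distance between the corrected drifted data and the training distribution
    corrected = X_drift - shift
    return np.linalg.norm(corrected.mean(axis=0) - X_train.mean(axis=0))

es = cma.CMAEvolutionStrategy(x0=np.zeros(6), sigma0=0.5)
es.optimize(objective)
print(es.result.xbest)   # estimated drift direction to subtract before classification
```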
Macias-Melo, E V; Aguilar-Castro, K M; Alvarez-Lemus, M A; Flores-Prieto, J J
2015-09-01
In this work, we describe a methodology for developing a mathematical model based on infrared (IR) detection to determine the moisture content (M) in solid samples. For this purpose, an experimental setup was designed, developed and calibrated against the gravimetric method. The experimental arrangement allowed for the simultaneous measurement of M and the electromotive force (EMF), matching the experimental variables as closely as possible. These variables were correlated by a mathematical model, and the obtained correlation was M=1.12×exp(3.47×EMF), ±2.54%. This finding suggests that it is feasible to measure the moisture content when it exceeds 2.54%. The proposed methodology could be used under different conditions of temperature, relative humidity and drying rate to evaluate the influence of these variables on the amount of energy received by the IR detector.
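Applying the reported correlation is direct; the sketch below evaluates it for an example EMF value and treats the ±2.54% as an absolute band in moisture-content units, which is an interpretation, not a statement from the paper.

```python
# Direct evaluation of the reported correlation M = 1.12 * exp(3.47 * EMF).
# The +/-2.54% band is treated here as an absolute band in moisture-content units,
# consistent with the statement that M is measurable only above 2.54% (an assumption).
import math

def moisture_content(emf):
    return 1.12 * math.exp(3.47 * emf)

emf = 0.35                      # example EMF value in the instrument's units (illustrative)
m = moisture_content(emf)
print(f"M = {m:.2f} % (+/- 2.54 %)")
```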
Calibration of imaging plate detectors to mono-energetic protons in the range 1-200 MeV
NASA Astrophysics Data System (ADS)
Rabhi, N.; Batani, D.; Boutoux, G.; Ducret, J.-E.; Jakubowska, K.; Lantuejoul-Thfoin, I.; Nauraye, C.; Patriarca, A.; Saïd, A.; Semsoum, A.; Serani, L.; Thomas, B.; Vauzour, B.
2017-11-01
The responses of Fuji Imaging Plates (IPs) to protons have been measured in the range 1-200 MeV. Mono-energetic protons were produced with the 15 MV ALTO-Tandem accelerator of the Institute of Nuclear Physics (Orsay, France) and, at higher energies, with the 200 MeV isochronous cyclotron of the Institut Curie - Centre de Protonthérapie d'Orsay (Orsay, France). The experimental setups are described, and the measured photo-stimulated luminescence responses for MS, SR, and TR IPs are presented and compared to existing data. For the interpretation of the results, a sensitivity model based on the Monte Carlo GEANT4 code has been developed. It enables the calculation of the response functions over a large energy range, from 0.1 to 200 MeV. Finally, we show that our model accurately reproduces the response of more complex detectors, i.e., stacks of high-Z filters and IPs, which could be of great interest for diagnostics of petawatt laser accelerated particles.
NASA Astrophysics Data System (ADS)
Sharma, Pankaj; Jain, Ajai
2014-12-01
Stochastic dynamic job shop scheduling problems with sequence-dependent setup times are among the most difficult classes of scheduling problems. This paper assesses the performance of nine dispatching rules in such a shop from the viewpoint of the makespan, mean flow time, maximum flow time, mean tardiness, maximum tardiness, number of tardy jobs, total setups and mean setup time performance measures. A discrete event simulation model of a stochastic dynamic job shop manufacturing system is developed for the investigation. Nine dispatching rules identified from the literature are incorporated in the simulation model. The simulation experiments are conducted under a due date tightness factor of 3, a shop utilization of 90% and setup times less than processing times. Results indicate that the shortest setup time (SIMSET) rule provides the best performance for the mean flow time and number of tardy jobs measures. The job with similar setup and modified earliest due date (JMEDD) rule provides the best performance for the makespan, maximum flow time, mean tardiness, maximum tardiness, total setups and mean setup time measures.
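For clarity, the SIMSET decision itself is simply a minimum over sequence-dependent setup times from the job just completed; a toy sketch (data structures are illustrative, not from the simulation model of the paper):

```python
# Minimal sketch of the SIMSET (shortest setup time) dispatching decision: from the
# queue at a machine, pick the job whose sequence-dependent setup time from the job
# just completed is smallest. Illustrative data only.
def simset_pick(queue, last_job, setup_time):
    """queue: list of job ids; setup_time[(prev, next)] -> setup duration."""
    return min(queue, key=lambda job: setup_time[(last_job, job)])

setup_time = {("J1", "J2"): 5, ("J1", "J3"): 2, ("J1", "J4"): 7}
print(simset_pick(["J2", "J3", "J4"], "J1", setup_time))   # -> "J3"
```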
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jankowiak, A.; Wille, K.; /Dortmund U. /SLAC
2011-08-25
For a synchrotron radiation source it is necessary to operate a monitoring system to determine the beam position with high resolution and accuracy with respect to the axis of the quadrupole magnets. In this paper the present closed orbit measurement system of the DELTA SR facility is presented, covering the hardware setup, data processing and the calibration methods. The results of the calibration measurements and the recent operating experience are discussed. These results show that the system is close to the design resolution. However, the BPM offsets with respect to the magnetic center of the quadrupole magnets turn out not to be acceptable. For some BPMs they are on the order of several 100 μm. Therefore it was decided to install a beam-based BPM calibration system in the near future. This system should allow the BPM offsets relative to the center of the quadrupole magnets to be determined for all 40 BPMs. It is planned to install a system to change the focusing strength of each quadrupole individually, either in a static or dynamic way.
Spectral calibration of the fluorescence telescopes of the Pierre Auger Observatory
Aab, A.; Abreu, P.; Aglietta, M.; ...
2017-09-08
We present a novel method to measure precisely the relative spectral response of the fluorescence telescopes of the Pierre Auger Observatory. Here, we used a portable light source based on a xenon flasher and a monochromator to measure the relative spectral efficiencies of eight telescopes in steps of 5 nm from 280 nm to 440 nm. Each point in a scan had approximately 2 nm FWHM out of the monochromator. Different sets of telescopes in the observatory have different optical components, and the eight telescopes measured represent two each of the four combinations of components represented in the observatory. We made an end-to-end measurement of the response from different combinations of optical components, and the monochromator setup allowed for more precise and complete measurements than our previous multi-wavelength calibrations. We find an overall uncertainty in the calibration of the spectral response of most of the telescopes of 1.5% for all wavelengths; the six oldest telescopes have larger overall uncertainties of about 2.2%. We also report changes in physics measurables due to the change in calibration, which are generally small.
Calibration and optimization of an x-ray bendable mirror using displacement-measuring sensors.
Vannoni, Maurizio; Martín, Idoia Freijo; Music, Valerija; Sinn, Harald
2016-07-25
We propose a method to control and adjust a bendable x-ray mirror in a closed loop using displacement-measuring devices. For this purpose, the use of capacitive and interferometric sensors is investigated and compared. We installed the sensors in a bender setup and used them to continuously measure the position and shape of the mirror in the lab. The sensors are vacuum-compatible, such that the same concept can also be applied under final operating conditions. The measurement is used to maintain the calibration of the system and to create a closed-loop control compensating for external influences: in a demonstration measurement, using a 950 mm long bendable mirror, the mirror sagitta is kept stable within a range of 10 nm peak-to-valley (P-V).
Eddy Covariance measurements of stable isotopes (δD and δ18O) in water vapor
NASA Astrophysics Data System (ADS)
Braden-Behrens, J.; Knohl, A.
2016-12-01
Stable isotopes are a promising tool to enhance our understanding of ecosystem gas exchange. Studying 18O and 2H (D) in water vapour (H2Ov) can, for example, help partition evapotranspiration into its components. With recent developments in laser spectroscopy, direct eddy covariance (EC) measurements to investigate fluxes of stable isotopologues have become feasible. However, so far only very few case studies applying the EC method to stable isotopes in water vapor have been carried out worldwide. At our micrometeorological EC tower in a managed beech forest in Thuringia, Germany, we have been continuously measuring fluxes of water vapor isotopologues using EC since autumn 2015. The set-up is based on an off-axis cavity output water vapor isotope analyzer (WVIA, Los Gatos Research Inc., USA) that measures the water vapour concentration and its isotopic composition (δD and δ18O). The instrument is optimized for high flow rates (approx. 4 slpm) to generate high-frequency (2 Hz) measurements. The HF-optimized WVIA showed sufficient precision, with a minimal Allan deviation of 0.023 ‰ for δD and 0.02 ‰ for δ18O for averaging periods of approx. 700 s and 400 s, respectively. The instrument is calibrated hourly using a high-flow optimized version of the water vapor isotope standard source (WVISS, Los Gatos Research Inc., USA) that provides water vapor of known isotopic composition over a large range of concentrations. Our calibration scheme includes a near-continuous concentration-range calibration instead of a simple 2- or 3-point calibration, to address the analyzer's large concentration dependency within a range of approx. 6,000 to 16,000 ppm in winter and approx. 8,000 to 23,000 ppm in summer. We evaluate the calibration approach, present specific aspects of the set-up such as the HF optimization, and compare the measured and averaged spectra and cospectra of the isotopologue analyzer with those of the long-term EC installation (using a LI-6262 as well as a LI-7200 infrared gas analyzer at 10 Hz). Furthermore, we show results for the isotopologue fluxes before and after leaf unfolding in spring/summer 2016. This novel instrument for EC measurements of water vapor isotopologues provides an exciting new opportunity for studying the hydrological cycle in long-term observation networks like AmeriFlux and ICOS.
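The underlying flux computation is the usual eddy covariance of vertical wind and the calibrated mixing ratio over an averaging block; the sketch below is a generic illustration with synthetic data, not the site's processing chain.

```python
# Sketch of the basic eddy-covariance flux computation (covariance of vertical wind
# and the calibrated isotopologue mixing ratio over an averaging interval); generic
# illustration with synthetic data.
import numpy as np

def ec_flux(w, c):
    """w: vertical wind speed (m/s), c: scalar mixing ratio, same 2 Hz time base."""
    w_prime = w - w.mean()             # fluctuations about the block average
    c_prime = c - c.mean()
    return np.mean(w_prime * c_prime)  # kinematic flux, units of (m/s) * [c]

rng = np.random.default_rng(1)
w = rng.normal(0.0, 0.3, 2 * 1800)     # 30 min of 2 Hz data (synthetic)
c = 10_000 + 50 * w + rng.normal(0, 20, w.size)
print(ec_flux(w, c))
```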
Calibration Procedures in Mid Format Camera Setups
NASA Astrophysics Data System (ADS)
Pivnicka, F.; Kemper, G.; Geissler, S.
2012-07-01
A growing number of mid-format cameras are used for aerial surveying projects. To achieve a reliable and geometrically precise result in the photogrammetric workflow, awareness of the sensitive parts is important. The use of direct referencing systems (GPS/IMU), the mounting on a stabilizing camera platform and the specific characteristics of mid-format cameras make a professional setup with various calibration and misalignment operations necessary. An important part is a proper camera calibration. Aerial images over a well-designed test field with 3D structures and/or different flight altitudes enable the determination of calibration values in the Bingo software; it will be demonstrated how such a calibration can be performed. The direct referencing device must be mounted to the camera in a solid and reliable way. Besides the mechanical work, especially in mounting the camera next to the IMU, two lever arms have to be measured with millimetre accuracy: the lever arm from the GPS antenna to the IMU's calibrated centre, and the lever arm from the IMU centre to the camera projection centre. The measurement with a total station is not a difficult task, but the definition of the correct centres and the need to use rotation matrices can cause serious accuracy problems. The benefit of small and medium format cameras is that smaller aircraft can be used; in that case a gyro-based stabilized platform is recommended. As a consequence, the IMU must be mounted next to the camera on the stabilizer. The advantage is that the IMU can be used to control the platform; the drawback is that the lever arm from the IMU to the GPS antenna is floating. An additional data stream, the recorded movement of the stabilizer, must therefore be used to correct the floating lever-arm distances. If the post-processing of the GPS/IMU data, taking the floating lever arms into account, delivers the expected result, the lever arms between IMU and camera can be applied. A remaining misalignment (boresight angle) must then be evaluated in the photogrammetric process using advanced tools, e.g. in Bingo. Once all these parameters have been determined, the system is suitable for projects without, or with only a few, ground control points. But what effect does directly applying the achieved direct orientation values have on the photogrammetric process, compared with an aerial triangulation based on proper tie-point matching? The paper aims to show the steps to be taken by potential users and gives a quality estimation of the importance and influence of the various calibration and adjustment steps.
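The lever-arm bookkeeping can be illustrated with a short Python sketch that rotates body-frame lever arms into the navigation frame with the platform attitude before adding them to the GPS antenna position. The rotation convention, lever-arm values and coordinates are hypothetical and serve only to show why rotation matrices enter the workflow.

```python
import numpy as np

def rotation_matrix(roll, pitch, yaw):
    """Body-to-navigation rotation from roll/pitch/yaw in radians (ZYX convention assumed)."""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    return Rz @ Ry @ Rx

# Lever arms measured in the body frame (metres); illustrative values only.
antenna_to_imu = np.array([0.120, -0.035, 1.480])   # GPS antenna -> IMU calibrated centre
imu_to_camera  = np.array([0.250,  0.010, -0.180])  # IMU centre -> camera projection centre

# Platform attitude at exposure time (radians) and antenna position in the mapping frame.
R = rotation_matrix(np.radians(1.2), np.radians(-0.8), np.radians(45.0))
antenna_position = np.array([500000.0, 4200000.0, 1200.0])   # e.g. easting, northing, height

# The projection centre follows from rotating the summed lever arms with the current attitude.
camera_position = antenna_position + R @ (antenna_to_imu + imu_to_camera)
print(camera_position)
```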
Textile Fingerprinting for Dismount Analysis in the Visible, Near, and Shortwave Infrared Domain
2014-03-01
[List-of-figures fragments from the report: a laboratory setup for reflectance data collection showing the green 100% cotton shirt sample, contact probe, and black calibration panel; 100 instances of cotton reflectance measured with an ASD FieldSpec 3 Hi-Res spectroradiometer using a contact probe with a black reflectance panel; and spectra of eight a-class colors in which solid vertical lines mark the wavelengths selected as features (430 nm, 481 nm, 530 nm, 588 nm).]
NASA Astrophysics Data System (ADS)
Leidinger, Martin; Schultealbert, Caroline; Neu, Julian; Schütze, Andreas; Sauerwald, Tilman
2018-01-01
This article presents a test gas generation system designed to generate ppb-level gas concentrations from gas cylinders. The focus is on permanent gases and volatile organic compounds (VOCs) for applications like indoor and outdoor air quality monitoring or breath analysis. In the design and the setup of the system, several issues regarding the handling of trace gas concentrations have been considered, addressed and tested. This concerns not only the active fluidic components (flow controllers, valves), which have been chosen specifically for the task, but also the design of the fluidic tubing regarding dead volumes and delay times, which have been simulated for the chosen setup. Different tubing materials have been tested for their adsorption/desorption characteristics regarding naphthalene, a highly relevant gas for indoor air quality monitoring, which had caused long gas exchange times in a previous gas mixing system due to long-lasting adsorption/desorption effects. Residual gas contaminations of the system and the selected carrier air supply have been detected and quantified using both an analytical method (GC-MS analysis according to ISO 16000-6) and a metal oxide semiconductor gas sensor, which detected a maximum contamination equivalent to 28 ppb of carbon monoxide. A measurement strategy for suppressing even this contamination has been devised, which allows the system to be used for gas sensor and gas sensor system characterization and calibration in the low ppb concentration range.
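The arithmetic behind generating ppb-level concentrations from a cylinder is flow-ratio dilution; the sketch below assumes ideal mixing in two mass-flow-controlled stages, with made-up flows and cylinder concentration rather than the actual fluidic layout of the system described above.

```python
def diluted_concentration(c_in_ppm, flow_gas_sccm, flow_carrier_sccm):
    """Concentration after mixing a test-gas flow into a carrier-air flow (ideal mixing assumed)."""
    return c_in_ppm * flow_gas_sccm / (flow_gas_sccm + flow_carrier_sccm)

# Illustrative two-stage dilution: a 10 ppm VOC cylinder down to the low-ppb range.
stage1_ppm = diluted_concentration(10.0, flow_gas_sccm=5.0, flow_carrier_sccm=995.0)
stage2_ppm = diluted_concentration(stage1_ppm, flow_gas_sccm=50.0, flow_carrier_sccm=950.0)
print(f"output concentration: {stage2_ppm * 1000:.1f} ppb")  # ppm -> ppb, ~2.5 ppb here
```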
2D Measurements of the Balmer Series in Proto-MPEX using a Fast Visible Camera Setup
NASA Astrophysics Data System (ADS)
Lindquist, Elizabeth G.; Biewer, Theodore M.; Ray, Holly B.
2017-10-01
The Prototype Material Plasma Exposure eXperiment (Proto-MPEX) is a linear plasma device with densities up to 10²⁰ m⁻³ and temperatures up to 20 eV. Broadband spectral measurements show the visible emission spectra are solely due to the Balmer lines of deuterium. Monochromatic and RGB color Sanstreak SC1 Edgertronic fast visible cameras capture high-speed video of plasmas in Proto-MPEX. The color camera is equipped with a long-pass 450 nm filter and an internal Bayer filter to view the Dα line at 656 nm on the red channel and the Dβ line at 486 nm on the blue channel. The monochromatic camera has a 434 nm narrow bandpass filter to view the Dγ intensity. In the setup, a 50/50 beam splitter is used so both cameras image the same region of the plasma discharge. Camera images were aligned to each other by viewing a grid, ensuring 1-pixel registration between the two cameras. A uniform-intensity calibrated white light source was used to perform a pixel-to-pixel relative and an absolute intensity calibration for both cameras. Python scripts combined the dual-camera data, rendering the Dα, Dβ, and Dγ intensity ratios. Observations from Proto-MPEX discharges will be presented. This work was supported by the U.S. D.O.E. contract DE-AC05-00OR22725.
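Since the abstract mentions Python scripts that combine the dual-camera data, a hedged sketch of that kind of processing is shown below: dark subtraction and a pixel-to-pixel flat field from the white-light frames, followed by the line ratios. The synthetic frames only stand in for real camera data, and the absolute radiance scaling is deliberately left out.

```python
import numpy as np

def flat_field(raw, dark, white, white_dark):
    """Pixel-to-pixel relative calibration using dark frames and a uniform white-light frame."""
    gain = white - white_dark
    gain = np.where(gain > 0, gain, np.nan)   # mask dead pixels
    return (raw - dark) / gain

rng = np.random.default_rng(0)
shape = (64, 64)
# Hypothetical frames: red channel (D-alpha), blue channel (D-beta), monochrome camera (D-gamma).
frames = {k: rng.uniform(100, 4000, shape) for k in ("d_alpha", "d_beta", "d_gamma")}
dark = rng.normal(50, 2, shape)
white = rng.uniform(2000, 2200, shape)

cal = {k: flat_field(v, dark, white, dark) for k, v in frames.items()}

# Line-ratio maps for Balmer-series analysis; absolute scaling would come from the
# calibrated white-light source radiance, which this sketch omits.
ratio_ab = cal["d_alpha"] / cal["d_beta"]
ratio_ag = cal["d_alpha"] / cal["d_gamma"]
print(np.nanmedian(ratio_ab), np.nanmedian(ratio_ag))
```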
NASA Astrophysics Data System (ADS)
Zegers, R. P. C.; Yu, M.; Bekdemir, C.; Dam, N. J.; Luijten, C. C. M.; de Goey, L. P. H.
2013-08-01
Planar laser-induced fluorescence (LIF) of toluene has been applied in an optical engine and a high-pressure cell, to determine temperatures of fuel sprays and in-cylinder vapors. The method relies on a redshift of the toluene LIF emission spectrum with increasing temperature. Toluene fluorescence is recorded simultaneously in two disjunct wavelength bands by a two-camera setup. After calibration, the pixel-by-pixel LIF signal ratio is a proxy for the local temperature. A detailed measurement procedure is presented to minimize measurement inaccuracies and to improve precision. n-Heptane is used as the base fuel and 10 % of toluene is added as a tracer. The toluene LIF method is capable of measuring temperatures up to 700 K; above that the signal becomes too weak. The precision of the spray temperature measurements is 4 % and the spatial resolution 1.3 mm. We pay particular attention to the construction of the calibration curve that is required to translate LIF signal ratios into temperature, and to possible limitations in the portability of this curve between different setups. The engine results are compared to those obtained in a constant-volume high-pressure cell, and the fuel spray results obtained in the high-pressure cell are also compared to LES simulations. We find that the hot ambient gas entrained by the head vortex gives rise to a hot zone on the spray axis.
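Translating a two-band LIF ratio into temperature via a calibration curve amounts to an interpolation; the calibration points below are invented for illustration and do not reproduce the curve constructed in the paper.

```python
import numpy as np

# Hypothetical calibration curve: LIF ratio (red band / blue band) at known temperatures.
cal_T_K   = np.array([350, 400, 450, 500, 550, 600, 650, 700])
cal_ratio = np.array([0.45, 0.62, 0.84, 1.10, 1.42, 1.80, 2.25, 2.78])

def ratio_to_temperature(ratio_map):
    """Pixel-by-pixel translation of the LIF signal ratio into temperature (K)."""
    return np.interp(ratio_map, cal_ratio, cal_T_K)

ratio_map = np.array([[0.9, 1.3], [1.7, 2.4]])   # illustrative two-camera ratio image
print(ratio_to_temperature(ratio_map))
```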
Integrated watershed-scale response to climate change for selected basins across the United States
Markstrom, Steven L.; Hay, Lauren E.; Ward-Garrison, D. Christian; Risley, John C.; Battaglin, William A.; Bjerklie, David M.; Chase, Katherine J.; Christiansen, Daniel E.; Dudley, Robert W.; Hunt, Randall J.; Koczot, Kathryn M.; Mastin, Mark C.; Regan, R. Steven; Viger, Roland J.; Vining, Kevin C.; Walker, John F.
2012-01-01
A study by the U.S. Geological Survey (USGS) evaluated the hydrologic response to different projected carbon emission scenarios of the 21st century using a hydrologic simulation model. This study involved five major steps: (1) set up, calibrate, and evaluate the Precipitation Runoff Modeling System (PRMS) model in 14 basins across the United States, by local USGS personnel; (2) acquire selected simulated carbon emission scenarios from the World Climate Research Programme's Coupled Model Intercomparison Project; (3) statistically downscale these scenarios to create PRMS input files that reflect the future climatic conditions of these scenarios; (4) generate PRMS projections for the carbon emission scenarios for the 14 basins; and (5) analyze the modeled hydrologic response. This report presents an overview of this study, details of the methodology, results from the 14 basin simulations, and interpretation of these results. A key finding is that the hydrological response of the different geographical regions of the United States to potential climate change may differ, depending on the dominant physical processes of that particular region. Also considered is the tremendous amount of uncertainty present in the carbon emission scenarios and how this uncertainty propagates through the hydrologic simulations.
NASA Astrophysics Data System (ADS)
Kincaid, T. R.; Meyer, B. A.
2009-12-01
In groundwater flow modeling, aquifer permeability is typically defined through model calibration. Since the pattern and size of conduits are part of a karstic permeability framework, those parameters should be constrainable through the same process given a sufficient density of measured conditions. H2H Associates has completed a dual-permeability steady-state model of groundwater flow through the western Santa Fe River Basin, Florida, from which a 380.9 km network of saturated conduits was delineated through model calibration to heads and spring discharges. Two calibration datasets were compiled describing average high-water and average low-water conditions, based on heads at 145 wells and discharge from 18 springs for the high-water scenario and heads at 188 wells and discharge from 9 springs for the low-water scenario. An initial conduit network was defined by assigning paths along mapped conduits and inferring paths along potentiometric troughs between springs and swallets that had been connected by groundwater tracing. These initial conduit assignments accounted for only 13.75 and 34.1 km of the final conduit network, respectively. The model was set up using FEFLOW™, where conduits were described as discrete features embedded in a porous matrix. Flow in the conduits was described by the Manning-Strickler equation, where variables for conduit area and roughness were used to adjust the volume and velocity of spring flows. Matrix flow was described by Darcy's law, where hydraulic conductivity variations were limited to three geologically defined, internally homogeneous zones that ranged from ~2E-6 m/s to ~4E-3 m/s. Recharge for both the high-water and low-water periods was determined through a water budget analysis where variations were restricted to nine zones defined by land use. All remaining variations in observed head were then assumed to be due to conduits. The model was iteratively calibrated to the high-water and low-water datasets, wherein the location, size and roughness of the conduits were assigned as needed to accurately simulate observed heads and spring discharges while bounding simulated velocities by the tracer test results. Conduit diameters were adjusted to support high-water spring discharges, but the locations were best determined by calibration to the low-water head field. The final model calibrated to within 5% of the total head change across the model region at 143 of the 145 wells in the high-water scenario and at 176 of the 188 wells in the low-water scenario. Simulated spring discharges fell within 13% of the observed range under high-water conditions and within 100% of the observed range under low-water conditions. Simulated velocities ranged from as low as 10⁻⁴ m/day in the matrix to as high as 10³ m/day in the largest conduits. The significance of these results that we emphasize here is two-fold. First, plausible karstic groundwater flow conditions can be reasonably simulated if adequate efforts are made to include springs, swallets, caves, and traced flow paths. And second, detailed saturated conduit networks can be delineated from careful evaluation of hydraulic head data, particularly when dense datasets can be constructed by correlating values obtained from different wells under similar hydraulic periods.
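The two flow laws used in the model translate directly into one-line formulas; the Python sketch below evaluates a Manning-Strickler conduit velocity and a Darcy matrix flux with invented roughness, radius and gradient values, purely as an order-of-magnitude illustration.

```python
import math

def manning_strickler_velocity(k_strickler, hydraulic_radius_m, gradient):
    """Mean conduit velocity from Manning-Strickler: v = k * R**(2/3) * sqrt(S)."""
    return k_strickler * hydraulic_radius_m ** (2.0 / 3.0) * math.sqrt(gradient)

def darcy_flux(hydraulic_conductivity_ms, gradient):
    """Darcy specific discharge in the porous matrix: q = K * i."""
    return hydraulic_conductivity_ms * gradient

# Illustrative numbers only: a 2 m diameter conduit flowing full has R = d/4 = 0.5 m.
conduit_v = manning_strickler_velocity(k_strickler=5.0, hydraulic_radius_m=0.5, gradient=1e-4)
matrix_q  = darcy_flux(hydraulic_conductivity_ms=4e-3, gradient=1e-4)

print(f"conduit velocity ~ {conduit_v * 86400:.0f} m/day")
print(f"matrix flux      ~ {matrix_q * 86400:.3f} m/day")
```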
NASA Astrophysics Data System (ADS)
Rieke-Zapp, D.; Tecklenburg, W.; Peipe, J.; Hastedt, H.; Haig, Claudia
Recent tests on the geometric stability of several digital cameras that were not designed for photogrammetric applications have shown that the accomplished accuracies in object space are either limited or that the accuracy potential is not exploited to the fullest extent. A total of 72 calibrations were calculated with four different software products for eleven digital camera models with different hardware setups, some with mechanical fixation of one or more parts. The calibration procedure was chosen in accord to a German guideline for evaluation of optical 3D measuring systems [VDI/VDE, VDI/VDE 2634 Part 1, 2002. Optical 3D Measuring Systems-Imaging Systems with Point-by-point Probing. Beuth Verlag, Berlin]. All images were taken with ringflashes which was considered a standard method for close-range photogrammetry. In cases where the flash was mounted to the lens, the force exerted on the lens tube and the camera mount greatly reduced the accomplished accuracy. Mounting the ringflash to the camera instead resulted in a large improvement of accuracy in object space. For standard calibration best accuracies in object space were accomplished with a Canon EOS 5D and a 35 mm Canon lens where the focusing tube was fixed with epoxy (47 μm maximum absolute length measurement error in object space). The fixation of the Canon lens was fairly easy and inexpensive resulting in a sevenfold increase in accuracy compared with the same lens type without modification. A similar accuracy was accomplished with a Nikon D3 when mounting the ringflash to the camera instead of the lens (52 μm maximum absolute length measurement error in object space). Parameterisation of geometric instabilities by introduction of an image variant interior orientation in the calibration process improved results for most cameras. In this case, a modified Alpa 12 WA yielded the best results (29 μm maximum absolute length measurement error in object space). Extending the parameter model with FiBun software to model not only an image variant interior orientation, but also deformations in the sensor domain of the cameras, showed significant improvements only for a small group of cameras. The Nikon D3 camera yielded the best overall accuracy (25 μm maximum absolute length measurement error in object space) with this calibration procedure indicating at the same time the presence of image invariant error in the sensor domain. Overall, calibration results showed that digital cameras can be applied for an accurate photogrammetric survey and that only a little effort was sufficient to greatly improve the accuracy potential of digital cameras.
Efficient gradient calibration based on diffusion MRI.
Teh, Irvin; Maguire, Mahon L; Schneider, Jürgen E
2017-01-01
To propose a method for calibrating gradient systems and correcting gradient nonlinearities based on diffusion MRI measurements. The gradient scaling in x, y, and z were first offset by up to 5% from precalibrated values to simulate a poorly calibrated system. Diffusion MRI data were acquired in a phantom filled with cyclooctane, and corrections for gradient scaling errors and nonlinearity were determined. The calibration was assessed with diffusion tensor imaging and independently validated with high-resolution anatomical MRI of a second structured phantom. The errors in apparent diffusion coefficients along orthogonal axes ranged from -9.2% ± 0.4% to +8.8% ± 0.7% before calibration and -0.5% ± 0.4% to +0.8% ± 0.3% after calibration. Concurrently, fractional anisotropy decreased from 0.14 ± 0.03 to 0.03 ± 0.01. Errors in geometric measurements in x, y and z ranged from -5.5% to +4.5% precalibration and were likewise reduced to -0.97% to +0.23% postcalibration. Image distortions from gradient nonlinearity were markedly reduced. Periodic gradient calibration is an integral part of quality assurance in MRI. The proposed approach is both accurate and efficient, can be set up with readily available materials, and improves accuracy in both anatomical and diffusion MRI to within ±1%. Magn Reson Med 77:170-179, 2017. © 2016 The Authors Magnetic Resonance in Medicine published by Wiley Periodicals, Inc. on behalf of International Society for Magnetic Resonance in Medicine. © 2016 Wiley Periodicals, Inc.
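One way to picture the scaling correction is that the apparent diffusion coefficient scales with the square of any gradient mis-scaling, because the b-value is proportional to the gradient amplitude squared. The sketch below converts per-axis ADC measurements in a liquid of assumed known diffusivity into multiplicative gradient corrections; the reference diffusivity and ADC values are illustrative, and this is a simplified reading of the published procedure, not a reproduction of it.

```python
import numpy as np

# Assumed diffusivity of the calibration liquid at scan temperature (m^2/s); illustrative only.
D_REF = 8.0e-10

def gradient_scale_correction(adc_measured):
    """Multiplicative correction for the gradient amplitude on one axis.

    If the true gradient is s times the nominal one, the nominal b-value underestimates
    the real b by s**2, so the apparent ADC equals s**2 * D_true; the gradient should
    therefore be rescaled by 1/s = sqrt(D_REF / ADC_measured)."""
    return np.sqrt(D_REF / adc_measured)

adc_xyz = np.array([7.3e-10, 8.7e-10, 8.1e-10])   # apparent ADC along x, y, z (m^2/s)
print(gradient_scale_correction(adc_xyz))          # factors to apply to Gx, Gy, Gz
```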
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hrdlicka, Ales; Prokes, Lubomir; Stankova, Alice
2010-05-01
The development of a remote laser-induced breakdown spectroscopy (LIBS) setup with an off-axis Newtonian collection optics, Galilean-based focusing telescope, and a 532 nm flattop laser beam source is presented. The device was tested at a 6 m distance on a slice of bone to simulate its possible use in the field, e.g., during archaeological excavations. It is shown that this setup is sufficiently sensitive to both major (P, Mg) and minor elements (Na, Zn, Sr). The measured quantities of Mg, Zn, and Sr correspond to the values obtained by reference laser ablation-inductively coupled plasma-mass spectrometry (LA-ICP-MS) measurements within an approximately 20% range of uncertainty. A single-point calibration was performed using a bone meal standard. The radial element distribution obtained by LA-ICP-MS is almost invariable, whereas the LIBS measurement showed a strong dependence on the sample porosity. Based on these results, this remote LIBS setup with a relatively large (350 mm) collecting mirror is capable of semiquantitative analysis at the level of units of mg kg⁻¹.
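Single-point calibration reduces to a proportionality; the sketch below assumes a linear response through the origin, and the line intensities and standard concentration are invented values rather than data from the study.

```python
def single_point_calibration(line_intensity, standard_intensity, standard_concentration):
    """Single-point calibration assuming a linear response through the origin:
    C_sample = I_sample * (C_standard / I_standard)."""
    return line_intensity * standard_concentration / standard_intensity

# Illustrative: an Sr line intensity in the bone spectrum versus the bone-meal standard.
c_sr = single_point_calibration(line_intensity=1250.0,
                                standard_intensity=980.0,
                                standard_concentration=110.0)   # mg/kg in the standard
print(f"Sr ~ {c_sr:.0f} mg/kg")
```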
NASA Astrophysics Data System (ADS)
Rider, N. D.; Taha, Y. M.; Odame-Ankrah, C. A.; Huo, J. A.; Tokarek, T. W.; Cairns, E.; Moussa, S. G.; Liggio, J.; Osthoff, H. D.
2015-01-01
Photochemical sources of peroxycarboxylic nitric anhydrides (PANs) are utilized in many atmospheric measurement techniques for calibration or to deliver an internal standard. Conventionally, such sources rely on phosphor-coated low-pressure mercury (Hg) lamps to generate the UV light necessary to photo-dissociate a dialkyl ketone (usually acetone) in the presence of a calibrated amount of nitric oxide (NO) and oxygen (O2). In this manuscript, a photochemical PAN source in which the Hg lamp has been replaced by arrays of ultraviolet light-emitting diodes (UV-LEDs) is described. The output of the UV-LED source was analyzed by gas chromatography (PAN-GC) and thermal dissociation cavity ring-down spectroscopy (TD-CRDS). Using acetone, diethyl ketone (DIEK), diisopropyl ketone (DIPK), or di-n-propyl ketone (DNPK), respectively, the source produces peroxyacetic (PAN), peroxypropionic (PPN), peroxyisobutanoic (PiBN), or peroxy-n-butanoic nitric anhydride (PnBN) from NO in high yield (> 90%). Box model simulations with a subset of the Master Chemical Mechanism (MCM) were carried out to rationalize product yields and to identify side products. The use of UV-LED arrays offers many advantages over conventional Hg lamp setups, including greater light output over a narrower wavelength range, lower power consumption, and minimal generation of heat.
Distance measurement using frequency scanning interferometry with mode-hopped laser
NASA Astrophysics Data System (ADS)
Medhat, M.; Sobee, M.; Hussein, H. M.; Terra, O.
2016-06-01
In this paper, frequency scanning interferometry is implemented to measure distances up to 5 m absolutely. The setup consists of a Michelson interferometer, an external-cavity tunable diode laser, and an ultra-low expansion (ULE) Fabry-Pérot (FP) cavity to measure the frequency scanning range. The distance is measured by simultaneously acquiring the interference fringes from the Michelson and the FP interferometers while scanning the laser frequency. An online fringe processing technique is developed to calculate the distance from the fringe ratio while removing the parts resulting from the laser mode-hops without significantly affecting the measurement accuracy. This fringe processing method enables accurate distance measurements up to 5 m with a measurement repeatability of ±3.9×10⁻⁶ L. An accurate translation stage is used to find the FP cavity free spectral range and therefore allow accurate measurements. Finally, the setup is applied to the short-distance calibration of a laser distance meter (LDM).
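The fringe-ratio relation can be written out explicitly: counting the scanned frequency range in units of the reference cavity's free spectral range, the Michelson fringe count gives the distance as L = N_M c / (2 N_FP FSR). The function below is a generic sketch of that relation with made-up fringe counts and FSR, not the authors' online processing algorithm.

```python
C = 299_792_458.0   # speed of light, m/s

def fsi_distance(n_michelson, n_fp, fsr_hz):
    """Distance from the ratio of Michelson to Fabry-Perot fringe counts.

    The FP cavity marks a scanned range dnu = n_fp * FSR, and the Michelson produces
    n_michelson = 2 * L * dnu / c fringes, hence L = n_michelson * c / (2 * n_fp * FSR)."""
    return n_michelson * C / (2.0 * n_fp * fsr_hz)

# Illustrative: a 1.5 GHz cavity FSR and 200 FP fringes counted during the scan.
print(f"L = {fsi_distance(n_michelson=10_012, n_fp=200, fsr_hz=1.5e9):.4f} m")
```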
NASA Astrophysics Data System (ADS)
Świątkowski, Michał; Wojtuś, Arkadiusz; Wielgoszewski, Grzegorz; Rudek, Maciej; Piasecki, Tomasz; Jóźwiak, Grzegorz; Gotszalk, Teodor
2018-04-01
Atomic force microscopy (AFM) is a widely used technology for the investigation and characterization of nanomaterials. Its functionality can be easily expanded by applying dedicated extension modules, which can measure the electrical conductivity or temperature of a sample. In this paper, we introduce a transformer ratio-arm bridge setup dedicated to AFM-based thermal imaging. One of the key features of the thermal module is the use of a low-power driving signal that prevents undesirable tip heating during resistance measurement, while the other is the sensor location in a ratio-arm transformer bridge working in the audio frequency range and ensuring galvanic isolation of the tip, enabling contact-mode scanning of electronic circuits. The proposed expansion module is compact and it can be integrated onto the AFM head close to the cantilever. The calibration process and the resolution of 11 mK of the proposed setup are shown.
Laboratory Measurements of Single-Particle Polarimetric Spectrum
NASA Astrophysics Data System (ADS)
Gritsevich, M.; Penttila, A.; Maconi, G.; Kassamakov, I.; Helander, P.; Puranen, T.; Salmi, A.; Hæggström, E.; Muinonen, K.
2017-12-01
Measuring scattering properties of different targets is important for material characterization, remote sensing applications, and for verifying theoretical results. Furthermore, simplifications are usually made when we model targets and compute their scattering properties, e.g., an ideal shape or constant optical parameters throughout the target material. Experimental studies help in understanding the link between the observed properties and computed results. Experimentally derived Mueller matrices of studied particles can be used as input for larger-scale scattering simulations, e.g., radiative transfer computations. This approach makes it possible to bypass the problem of using an idealized model for single-particle optical properties. While existing approaches offer ensemble- and orientation-averaged particle properties, our aim is to measure individual particles with controlled or known orientation. With the newly developed scatterometer, we aim to offer a novel possibility to measure single, small (down to μm-scale) targets and their polarimetric spectra. This work presents an experimental setup that measures light scattered by a fixed small particle with dimensions ranging between micrometer and millimeter sizes. The goal of our setup is nondestructive characterization of such particles by measuring light of multiple wavelengths scattered over 360° in a horizontal plane by an ultrasonically levitating sample, whilst simultaneously controlling its 3D position and orientation. We describe the principles and design of our instrument and its calibration. We also present example measurements of real samples. This study was conducted under the support from the European Research Council, in the frame of the Advanced Grant project No. 320773 `Scattering and Absorption of Electromagnetic Waves in Particulate Media' (SAEMPL).
Analytical robustness of quantitative NIR chemical imaging for Islamic paper characterization
NASA Astrophysics Data System (ADS)
Mahgoub, Hend; Gilchrist, John R.; Fearn, Thomas; Strlič, Matija
2017-07-01
Recently, spectral imaging techniques such as Multispectral Imaging (MSI) and Hyperspectral Imaging (HSI) have gained importance in the field of heritage conservation. This paper explores the analytical robustness of quantitative chemical imaging for Islamic paper characterization by focusing on the effect of different measurement and processing parameters, i.e. acquisition conditions and calibration, on the accuracy of the collected spectral data. This will provide a better understanding of a technique that can provide a measure of change in collections through imaging. For the quantitative model, a special calibration target was devised using 105 samples from a well-characterized reference Islamic paper collection. Two material properties were of interest: starch sizing and cellulose degree of polymerization (DP). Multivariate data analysis methods were used to develop discrimination and regression models, which were used as an evaluation methodology for the metrology of quantitative NIR chemical imaging. Spectral data were collected using a pushbroom HSI scanner (Gilden Photonics Ltd) in the 1000-2500 nm range with a spectral resolution of 6.3 nm, using a mirror scanning setup and halogen illumination. Data were acquired at different measurement conditions and acquisition parameters. Preliminary results showed the potential of the evaluation methodology to show that measurement parameters such as the use of different lenses and different scanning backgrounds may not have a great influence on the quantitative results. Moreover, the evaluation methodology allowed for the selection of the best pre-treatment method to be applied to the data.
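Regression models of the kind described, e.g. partial least squares between NIR spectra and the degree of polymerization, can be prototyped with scikit-learn; the synthetic spectra below merely stand in for the 105-sample calibration set, and the number of latent variables and the cross-validation scheme are arbitrary illustrative choices.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(1)

# Hypothetical stand-in for the calibration set: 105 reference papers x 240 NIR bands,
# with degree of polymerisation (DP) as the property to regress.
n_samples, n_bands = 105, 240
X = rng.normal(size=(n_samples, n_bands)).cumsum(axis=1)   # smooth, spectrum-like curves
dp = rng.uniform(500, 2500, n_samples)

pls = PLSRegression(n_components=8)
dp_cv = cross_val_predict(pls, X, dp, cv=10).ravel()

rmsecv = np.sqrt(np.mean((dp_cv - dp) ** 2))
print(f"RMSECV for DP: {rmsecv:.0f}")
```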
Eddy Covariance measurements of stable isotopes (δD and δ18O) in water vapor
NASA Astrophysics Data System (ADS)
Braden-Behrens, Jelka; Knohl, Alexander
2017-04-01
Stable isotopes are a promising tool to enhance our understanding of ecosystem gas exchanges. Studying 18O and 2H in water vapour (H2Ov) can, for example, help partition evapotranspiration into its components. With recent developments in laser spectroscopy, direct Eddy Covariance (EC) measurements for investigating fluxes of stable isotopologues became feasible. So far, very few case studies have applied the EC method to measure stable isotopes in water vapor. We have continuously measured fluxes of water vapor isotopologues with the EC method in a managed beech forest in Thuringia, Germany, since autumn 2015 using the following setup: an off-axis integrated cavity output water vapor isotope analyzer (WVIA, Los Gatos Research, Inc., USA) measures the water vapour concentration and its isotopic composition (δD and δ18O). The instrument, which was optimized for high flow rates (approximately 4 slpm) to generate high-frequency (2 Hz) measurements, showed sufficient precision, with Allan deviations of approximately 0.12 ‰ for δD and 0.06 ‰ for δ18O for averaging periods of 100 s. The instrument was calibrated hourly using a high-flow optimized version of the water vapor isotope standard source (WVISS, Los Gatos Research, Inc., USA) that provides water vapor with known isotopic composition over a large range of concentrations. Our calibration scheme includes a near-continuous concentration-range calibration instead of a simple 2- or 3-point calibration to account for the analyzer's strong concentration dependency within a range of approximately 6,000 to 16,000 ppm in winter and approximately 8,000 to 23,000 ppm in summer. In this setup, the high-flow and high-frequency optimized water vapor isotope analyzer (WVIA) showed suitable characteristics (Allan deviation and spectral energy distribution) for performing Eddy Covariance measurements of stable isotopes in H2Ov. Thus, this novel instrument for EC measurements of water vapor isotopologues provides a new opportunity for studying the hydrological cycle in long-term observation networks like Fluxnet and ICOS.
A simultaneous multimodal imaging system for tissue functional parameters
NASA Astrophysics Data System (ADS)
Ren, Wenqi; Zhang, Zhiwu; Wu, Qiang; Zhang, Shiwu; Xu, Ronald
2014-02-01
Simultaneous and quantitative assessment of skin functional characteristics in different modalities will facilitate diagnosis and therapy in many clinical applications such as wound healing. However, many existing clinical practices and multimodal imaging systems are subjective, qualitative, sequential in multimodal data collection, and need co-registration between different modalities. To overcome these limitations, we developed a multimodal imaging system for quantitative, non-invasive, and simultaneous imaging of cutaneous tissue oxygenation and blood perfusion parameters. The imaging system integrated multispectral and laser speckle imaging technologies into one experimental setup. A LabVIEW interface was developed for equipment control, synchronization, and image acquisition. Advanced algorithms based on wide-gap second derivative reflectometry and laser speckle contrast analysis (LASCA) were developed for accurate reconstruction of tissue oxygenation and blood perfusion, respectively. Quantitative calibration experiments and a new style of skin-simulating phantom were designed to verify the accuracy and reliability of the imaging system. The experimental results were compared with a Moor tissue oxygenation and perfusion monitor. For in vivo testing, a post-occlusion reactive hyperemia (PORH) procedure in a human subject and an ongoing wound healing monitoring experiment using dorsal skinfold chamber models were conducted to validate the usability of our system for dynamic detection of oxygenation and perfusion parameters. In this study, we have not only set up an advanced multimodal imaging system for cutaneous tissue oxygenation and perfusion parameters but also elucidated its potential for wound healing assessment in clinical practice.
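Laser speckle contrast analysis is built on a simple local statistic, K = σ/⟨I⟩ over a sliding window; the sketch below computes it on a synthetic speckle frame. The window size and the synthetic data are illustrative; the system's actual perfusion reconstruction is more involved.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def speckle_contrast(image, window=7):
    """Local speckle contrast K = sigma / mean over a sliding window (LASCA)."""
    img = image.astype(float)
    mean = uniform_filter(img, window)
    mean_sq = uniform_filter(img ** 2, window)
    std = np.sqrt(np.maximum(mean_sq - mean ** 2, 0.0))
    return std / np.maximum(mean, 1e-12)

rng = np.random.default_rng(3)
raw_speckle = rng.gamma(shape=4.0, scale=50.0, size=(256, 256))   # illustrative raw frame
K = speckle_contrast(raw_speckle)
# Lower K corresponds to faster-moving scatterers, i.e. higher perfusion.
print(K.mean())
```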
A beam hardening and dispersion correction for x-ray dark-field radiography.
Pelzer, Georg; Anton, Gisela; Horn, Florian; Rieger, Jens; Ritter, André; Wandner, Johannes; Weber, Thomas; Michel, Thilo
2016-06-01
X-ray dark-field imaging promises information on the small-angle scattering properties even of large samples. However, the dark-field image is correlated with the object's attenuation and phase shift if a polychromatic x-ray spectrum is used. A method to remove part of these correlations is proposed. The experimental setup for image acquisition was modeled in a wave-field simulation to quantify the dark-field signals originating solely from a material's attenuation and phase shift. A calibration matrix was simulated for ICRU46 breast tissue. Using the simulated data, a dark-field image of a human mastectomy sample was corrected for the fingerprint of the attenuation and phase images. Comparing the simulated, attenuation-based dark-field values to a phantom measurement, a good agreement was found. Applying the proposed method to mammographic dark-field data, a reduction of the dark-field background and anatomical noise was achieved. The contrast between microcalcifications and their surrounding background was increased. The authors show that the influence of beam hardening and dispersion can be quantified by simulation and, thus, measured image data can be corrected. The simulation allows one to determine the corresponding dark-field artifacts for a wide range of setup parameters, like tube voltage and filtration. The application of the proposed method to mammographic dark-field data shows an increase in contrast compared to the original image, which might simplify a further image-based diagnosis.
Stray light calibration of the Dawn Framing Camera
NASA Astrophysics Data System (ADS)
Kovacs, Gabor; Sierks, Holger; Nathues, Andreas; Richards, Michael; Gutierrez-Marques, Pablo
2013-10-01
Sensitive imaging systems with high dynamic range onboard spacecraft are susceptible to ghost and stray-light effects. During the design phase, the Dawn Framing Camera was laid out and optimized to minimize those unwanted, parasitic effects. However, the requirement of low distortion on the optical design and the use of a front-lit focal plane array introduced an additional stray-light component. This paper presents the ground-based and in-flight procedures characterizing the stray-light artifacts. The in-flight test used the Sun as the stray-light source, at different angles of incidence. The spacecraft was commanded to point to predefined solar elongation positions, and long-exposure images were recorded. The PSNIT function was calculated from the known illumination and the ground-based calibration information. In the ground-based calibration, several extended and point sources were used with long exposure times in dedicated imaging setups. The tests revealed that the major contribution to the stray light comes from the ghost reflections between the focal plane array and the band-pass interference filters. Various laboratory experiments and computer modeling simulations were carried out to quantify the amount of this effect, including the analysis of the diffractive reflection pattern generated by the imaging sensor. The accurate characterization of the detector reflection pattern is the key to successfully predicting the intensity distribution of the ghost image. Based on the results and the properties of the optical system, a novel correction method is applied in the image processing pipeline. The effect of this correction procedure is also demonstrated with the first images of asteroid Vesta.
Hydrological Modeling of Rainfall-Watershed-Bioretention System with EPA SWMM
NASA Astrophysics Data System (ADS)
gülbaz, sezar; melek kazezyılmaz-alhan, cevza
2016-04-01
Water resources should be protected for the sustainability of water supply and water quality. Human activities such as rapid urbanization with inadequate infrastructure and uncontrolled agricultural practices adversely affect water resources. Therefore, recent techniques should be investigated in detail to avoid present and future problems such as floods, droughts and water pollution. Low Impact Development-Best Management Practice (LID-BMP) is such a technique to manage storm water runoff and quality. There are several LID storm water BMPs, such as bioretention facilities, rain gardens, storm water wetlands, vegetated rooftops, rain barrels, vegetative swales and permeable pavements. Bioretention is a type of Low Impact Development (LID) implemented to diminish the adverse effects of urbanization by reducing peak flows over the surface and improving surface water quality simultaneously. Different soil types in different ratios are considered in bioretention design, which affects the performance of bioretention systems. Therefore, in this study, a hydrologic model for bioretention is developed by using the Environmental Protection Agency Storm Water Management Model (EPA SWMM). Part of the input data is supplied to the hydrologic model by an experimental setup called the Rainfall-Watershed-Bioretention (RWB) system. The RWB system was developed to investigate the relationship among rainfall, watershed and bioretention. This setup consists of three main parts: an artificial rainfall system, a drainage area and four bioretention columns with different soil mixtures. EPA SWMM is a dynamic simulation model for the surface runoff that develops on a watershed during a rainfall event. The model is commonly used to plan, analyze, and control storm water runoff, to design drainage system components and to evaluate watershed management of both urban and rural areas. Furthermore, EPA SWMM is a well-established program for modeling LID bioretention in the literature. Therefore, EPA SWMM is employed in the drainage and bioretention modeling. Calibration of the hydrologic model is performed using part of the measured data from the RWB system, for the drainage area and for each bioretention column separately. Finally, the performance of the model is evaluated by comparing the model results with the experimental data collected in the RWB system.
The first laboratory measurements of sulfur ions sputtering water ice
NASA Astrophysics Data System (ADS)
Galli, André; Pommerol, Antoine; Vorburger, Audrey; Wurz, Peter; Tulej, Marek; Scheer, Jürgen; Thomas, Nicolas; Wieser, Martin; Barabash, Stas
2015-04-01
The upcoming JUpiter ICy moons Explorer mission to Europa, Ganymede, and Callisto has renewed the interest in the interaction of plasma with an icy surface. In particular, the surface release processes on which exosphere models of icy moons rely should be tested with realistic laboratory experiments. We therefore use an existing laboratory facility for space hardware calibration in vacuum to measure the sputtering of water ice due to hydrogen, oxygen, and sulfur ions at energies from 1 keV to 100 keV. Pressure and temperature are comparable to surface conditions encountered on Jupiter's icy moons. The sputter target is a 1cm deep layer of porous, salty water ice. Our results confirm theoretical predictions that the sputter yield from oxygen and sulfur ions should be similar. Thanks to the modular set-up of our experiment we can add further surface processes relevant for icy moons, such as electron sputtering, sublimation, and photodesorption due to UV light.
Compact energy dispersive X-ray microdiffractometer for diagnosis of neoplastic tissues
NASA Astrophysics Data System (ADS)
Sosa, C.; Malezan, A.; Poletti, M. E.; Perez, R. D.
2017-08-01
An energy dispersive X-ray microdiffractometer with capillary optics has been developed for characterizing breast cancer. The employment of low divergence capillary optics helps to reduce the setup size to a few centimeters, while providing a lateral spatial resolution of 100 μm. The system angular calibration and momentum transfer resolution were assessed by a detailed study of a polycrystalline reference material. The performance of the system was tested by means of the analysis of tissue-equivalent samples previously characterized by conventional X-ray diffraction. In addition, a simplified correction model for an appropriate comparison of the diffraction spectra was developed and validated. Finally, the system was employed to evaluate normal and neoplastic human breast samples, in order to determine their X-ray scatter signatures. The initial results indicate that the use of this compact energy dispersive X-ray microdiffractometer combined with a simplified correction procedure is able to provide additional information to breast cancer diagnosis.
Terahertz Measurement of the Water Content Distribution in Wood Materials
NASA Astrophysics Data System (ADS)
Bensalem, M.; Sommier, A.; Mindeguia, J. C.; Batsale, J. C.; Pradere, C.
2018-02-01
Recently, THz waves have been shown to be an effective technique for investigating water diffusion within porous media, such as biomaterials or insulation materials. This applicability is due to the sufficient resolution for such applications and the safe levels of radiation. This study aims to achieve contactless absolute water content measurements in a steady-state case in semi-transparent solids (wood) using a THz transmittance setup. First, a calibration method is developed to validate an analytical model based on the Beer-Lambert law, linking the absorption coefficient, the density of the solid, and its water content. Then, an estimation of the water content on a local scale in a transient-state case (drying) is performed. This study shows that THz waves are an effective contactless, safe, and low-cost technique for the measurement of water content in a porous medium such as wood.
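The Beer-Lambert reasoning can be written as two small functions: one extracts an effective absorption coefficient from the transmitted intensity, and one inverts a calibration linking that coefficient to moisture content. The linear calibration form and all numbers below are assumptions for illustration, not coefficients fitted in the study.

```python
import math

def absorption_coefficient(i_transmitted, i_reference, thickness_m):
    """Effective absorption coefficient from Beer-Lambert: I = I0 * exp(-alpha * d)."""
    return -math.log(i_transmitted / i_reference) / thickness_m

def moisture_content(alpha, alpha_dry, k_water):
    """Invert an assumed linear calibration alpha = alpha_dry + k_water * MC for MC."""
    return (alpha - alpha_dry) / k_water

# Illustrative: a 2 cm thick wood sample transmitting 42% of the reference THz intensity.
alpha = absorption_coefficient(i_transmitted=0.42, i_reference=1.0, thickness_m=0.02)
print(f"MC ~ {100 * moisture_content(alpha, alpha_dry=25.0, k_water=85.0):.1f} %")
```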
The Effect of Vegetation on Sea-Swell Waves, Infragravity Waves and Wave-Induced Setup
NASA Astrophysics Data System (ADS)
Roelvink, J. A.; van Rooijen, A.; McCall, R. T.; Van Dongeren, A.; Reniers, A.; van Thiel de Vries, J.
2016-02-01
Aquatic vegetation in the coastal zone (e.g. mangrove trees) attenuates wave energy and thereby reduces flood risk along many shorelines worldwide. However, in addition to the attenuation of incident-band (sea-swell) waves, vegetation may also affect infragravity-band (IG) waves and the wave-induced water level setup (in short: wave setup). Currently, knowledge on the effect of vegetation on IG waves and wave setup is lacking, while they are key parameters for coastal risk assessment. In this study, the process-based storm impact model XBeach was extended with formulations for attenuation of sea-swell and IG waves as well as the effect on the wave setup, in two modes: the sea-swell wave phase-resolving (non-hydrostatic) mode and the phase-averaged (surfbeat) mode. In surfbeat mode, a wave shape model was implemented to estimate the wave phase and to capture the intra-wave-scale effect of emergent vegetation and nonlinear waves on the wave setup. Both modeling modes were validated using data from two flume experiments and show good skill in computing the attenuation of both sea-swell and IG waves as well as the effect on the wave-induced water level setup. In surfbeat mode, the prediction of nearshore mean water levels greatly improved when using the wave shape model, while in non-hydrostatic mode this effect is directly accounted for. Subsequently, the model was used to study the influence of the bottom profile slope and the location of the vegetation field on the computed wave setup with and without vegetation. It was found that the reduction in wave setup is strongly related to the location of vegetation relative to the wave breaking point, and that the wave setup is lower for milder slopes. The extended version of XBeach developed within this study can be used to study the nearshore hydrodynamics on coasts fronted by vegetation such as mangroves. It can also serve as a tool for storm impact studies on coasts with aquatic vegetation, and can help to quantify the coastal protection function of vegetation.
Aeolus End-To-End Simulator and Wind Retrieval Algorithms up to Level 1B
NASA Astrophysics Data System (ADS)
Reitebuch, Oliver; Marksteiner, Uwe; Rompel, Marc; Meringer, Markus; Schmidt, Karsten; Huber, Dorit; Nikolaus, Ines; Dabas, Alain; Marshall, Jonathan; de Bruin, Frank; Kanitz, Thomas; Straume, Anne-Grete
2018-04-01
The first wind lidar in space, ALADIN, will be deployed on ESA's Aeolus mission. In order to assess the performance of ALADIN and to optimize the wind retrieval and calibration algorithms, an end-to-end simulator was developed. This allows realistic simulations of the data downlinked by Aeolus. Together with the operational processors, this setup is used to assess random and systematic error sources and to perform sensitivity studies on the influence of atmospheric and instrument parameters.
Holographic imaging with a Shack-Hartmann wavefront sensor.
Gong, Hai; Soloviev, Oleg; Wilding, Dean; Pozzi, Paolo; Verhaegen, Michel; Vdovin, Gleb
2016-06-27
A high-resolution Shack-Hartmann wavefront sensor has been used for coherent holographic imaging, by computer reconstruction and propagation of the complex field in a lensless imaging setup. The resolution of the images obtained with the experimental data is in good agreement with diffraction theory. Although a proper calibration with a reference beam improves the image quality, the method has potential for reference-less holographic imaging with spatially coherent monochromatic and narrowband polychromatic sources in microscopy and imaging through turbulence.
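Numerical propagation of a reconstructed complex field is commonly done with the angular spectrum method; the sketch below propagates a synthetic field over a short distance. The field, wavelength, pixel pitch and distance are placeholders, and the paper's actual reconstruction pipeline from Shack-Hartmann data is not reproduced here.

```python
import numpy as np

def angular_spectrum_propagate(field, wavelength, pixel_pitch, distance):
    """Propagate a sampled complex field by `distance` using the angular spectrum method."""
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=pixel_pitch)
    fy = np.fft.fftfreq(ny, d=pixel_pitch)
    FX, FY = np.meshgrid(fx, fy)
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    kz = 2 * np.pi / wavelength * np.sqrt(np.maximum(arg, 0.0))   # evanescent components dropped
    return np.fft.ifft2(np.fft.fft2(field) * np.exp(1j * kz * distance))

# Hypothetical complex field standing in for one reconstructed from Shack-Hartmann data.
rng = np.random.default_rng(7)
field = rng.uniform(0.5, 1.0, (256, 256)) * np.exp(1j * rng.normal(0.0, 0.2, (256, 256)))
image_plane = angular_spectrum_propagate(field, wavelength=633e-9, pixel_pitch=5e-6, distance=0.05)
print(np.abs(image_plane).max())
```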
FY92 Progress Report for the Gyrotron Backward-Wave-Oscillator Experiment
1993-07-01
Appendices listed include a sample cable calibration, ASYST channel setups, a sample magnet input data deck for the gyro-BWO, and a sample EGUN input data deck. Fragments of the report note that zero corresponds to the first coil of the Helmholtz pair and to the diode end of the experiment, that the EGUN code was used, that a short computer program was written to superimpose the two magnetic fields (DC and Helmholtz), and that an example EGUN input data file is included in the report.
Recent improvements of the JET lithium beam diagnostic
NASA Astrophysics Data System (ADS)
Brix, M.; Dodt, D.; Dunai, D.; Lupelli, I.; Marsen, S.; Melson, T. F.; Meszaros, B.; Morgan, P.; Petravich, G.; Refy, D. I.; Silva, C.; Stamp, M.; Szabolics, T.; Zastrow, K.-D.; Zoletnik, S.; JET-EFDA Contributors
2012-10-01
A 60 kV neutral lithium diagnostic beam probes the edge plasma of JET for the measurement of electron density profiles. This paper describes recent enhancements of the diagnostic setup, new procedures for calibration, and protection measures for the lithium ion gun during massive gas puffs for disruption mitigation. New light-splitting optics allow beam emission measurements in parallel with a new double-entrance-slit CCD spectrometer (spectrally resolved) and a new interference-filter avalanche photodiode camera (fast density and fluctuation studies).
NeutronSTARS: A segmented neutron and charged particle detector for low-energy reaction studies
Akindele, O. A.; Casperson, R. J.; Wang, B. S.; ...
2017-08-10
NeutronSTARS (Neutron-Silicon Telescope Array for Reaction Studies) consists of 2.2 tons of gadolinium-doped liquid scintillator for neutron detection and large-area silicon detectors for charged-particle identification. This detector array is intended for low-energy nuclear-reaction measurements that result in the emission of neutrons, such as fission. This paper describes the NeutronSTARS experimental setup, calibration, and the array's response to neutral and charged particles.
NASA Technical Reports Server (NTRS)
Kumar, Parikshith K.; Desai, Uri; Chatzigeorgiou, George; Lagoudas, Dimitris C.; Monroe, James; Karaman, Ibrahim; Noebe, Ron; Bigelow, Glen
2010-01-01
The present work is focused on studying the cyclic actuation behavior of HTSMAs undergoing simultaneous creep and transformation. For the thermomechanical testing, a high-temperature test setup was assembled on an MTS frame with the capability to test up to temperatures of 600 °C. Constant-stress thermal cycling tests were conducted to establish the actuation characteristics and the phase diagram for the chosen HTSMA. Additionally, creep tests were conducted at constant stress levels at different test temperatures to characterize the creep behavior of the alloy over the operational range. A thermodynamic constitutive model is developed and extended to take into account (a) the effect of multiple thermal cycling on the generation of plastic strains due to transformation (TRIP strains) and (b) both primary and secondary creep effects. The model calibration is based on the test results. The creep tests and the uniaxial tests are used to identify the viscoplastic behavior of the material. The parameters for the SMA properties, regarding the transformation and transformation-induced plastic strain evolutions, are obtained from the material phase diagram and the thermomechanical tests. The model is validated by predicting the material behavior at different thermomechanical test conditions.
Self-calibration for lensless color microscopy.
Flasseur, Olivier; Fournier, Corinne; Verrier, Nicolas; Denis, Loïc; Jolivet, Frédéric; Cazier, Anthony; Lépine, Thierry
2017-05-01
Lensless color microscopy (also called in-line digital color holography) is a recent quantitative 3D imaging method used in several areas including biomedical imaging and microfluidics. By targeting cost-effective and compact designs, the wavelength of the low-end sources used is known only imprecisely, in particular because of their dependence on temperature and power supply voltage. This imprecision is the source of biases during the reconstruction step. An additional source of error is the crosstalk phenomenon, i.e., the mixture in color sensors of signals originating from different color channels. We propose to use a parametric inverse problem approach to achieve self-calibration of a digital color holographic setup. This process provides an estimation of the central wavelengths and crosstalk. We show that taking the crosstalk phenomenon into account in the reconstruction step improves its accuracy.
Estimating Setup of Driven Piles into Louisiana Clayey Soils
DOT National Transportation Integrated Search
2009-11-15
Two types of mathematical models for pile setup prediction, the Skov-Denver model and the newly developed rate-based model, have been established from all the dynamic and static testing data, including restrikes of the production piles, restrikes, st...
Estimating setup of driven piles into Louisiana clayey soils.
DOT National Transportation Integrated Search
2010-11-15
Two types of mathematical models for pile setup prediction, the Skov-Denver model and the newly developed rate-based model, have been established from all the dynamic and static testing data, including restrikes of the production piles, restrikes, st...
A Fully Sensorized Cooperative Robotic System for Surgical Interventions
Tovar-Arriaga, Saúl; Vargas, José Emilio; Ramos, Juan M.; Aceves, Marco A.; Gorrostieta, Efren; Kalender, Willi A.
2012-01-01
In this research, a fully sensorized cooperative robot system for the manipulation of needles is presented. The setup consists of a DLR/KUKA Light Weight Robot III especially designed for safe human/robot interaction, an FD-CT robot-driven angiographic C-arm system, and a navigation camera. Also, new control strategies for robot manipulation in the clinical environment are introduced. A method for fast calibration of the involved components and preliminary accuracy tests of the whole possible error chain are presented. Calibration of the robot with the navigation system has a residual error of 0.81 mm (rms) with a standard deviation of ±0.41 mm. The accuracy of the robotic system while targeting fixed points at different positions within the workspace is 1.2 mm (rms) with a standard deviation of ±0.4 mm. After calibration, and due to closed-loop control, the absolute positioning accuracy was reduced to the navigation camera accuracy, which is 0.35 mm (rms). The implemented control allows the robot to compensate for small patient movements. PMID:23012551
Calibration Of An Omnidirectional Vision Navigation System Using An Industrial Robot
NASA Astrophysics Data System (ADS)
Oh, Sung J.; Hall, Ernest L.
1989-09-01
The characteristics of an omnidirectional vision navigation system were studied to determine position accuracy for the navigation and path control of a mobile robot. Experiments for calibration and other parameters were performed using an industrial robot to conduct repetitive motions. The accuracy and repeatability of the experimental setup and the alignment between the robot and the sensor provided errors of less than 1 pixel on each axis. Linearity between zenith angle and image location was tested at four different locations. Angular error of less than 1° and radial error of less than 1 pixel were observed at moderate speed variations. The experimental information and the test of coordinated operation of the equipment provide understanding of characteristics as well as insight into the evaluation and improvement of the prototype dynamic omnivision system. The calibration of the sensor is important since the accuracy of navigation influences the accuracy of robot motion. This sensor system is currently being developed for a robot lawn mower; however, wider applications are obvious. The significance of this work is that it adds to the knowledge of the omnivision sensor.
The upgrade of the Thomson scattering system for measurement on the C-2/C-2U devices.
Zhai, K; Schindler, T; Kinley, J; Deng, B; Thompson, M C
2016-11-01
The C-2/C-2U Thomson scattering system has been substantially upgraded during the latter phase of the C-2/C-2U program. A Rayleigh channel has been added to each of the three polychromators of the C-2/C-2U Thomson scattering system. Onsite spectral calibration has been applied to avoid the issue of different channel responses at different spots on the photomultiplier tube surface. With the added Rayleigh channel, the absolute intensity response of the system is calibrated with Rayleigh scattering in argon gas from 0.1 to 4 Torr, where the Rayleigh scattering signal is comparable to the Thomson scattering signal at electron densities from 1 × 10¹³ to 4 × 10¹⁴ cm⁻³. A new signal processing algorithm, using a maximum likelihood method and including detailed analysis of different noise contributions within the system, has been developed to obtain electron temperature and density profiles. The system setup, the spectral and intensity calibration procedure and its outcome, the data analysis, and the results of electron temperature/density profile measurements will be presented.
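An absolute intensity calibration against Rayleigh scattering is, at its core, a straight-line fit of detected signal versus gas pressure (density at fixed temperature); the sketch below fits such a line to invented data points. Converting the resulting response into electron density requires the Rayleigh/Thomson cross-section ratio, which is only noted in a comment.

```python
import numpy as np

# Illustrative calibration data: integrated Rayleigh-channel signal vs. argon pressure.
pressure_torr = np.array([0.1, 0.5, 1.0, 2.0, 3.0, 4.0])
signal_counts = np.array([210.0, 1020.0, 2050.0, 4120.0, 6110.0, 8230.0])

# The Rayleigh signal is proportional to gas density (pressure at fixed temperature),
# so a straight-line fit gives the absolute intensity response of the channel.
slope, offset = np.polyfit(pressure_torr, signal_counts, 1)

# The same optical constant, combined with the known Rayleigh/Thomson cross-section
# ratio (not shown), would link Thomson counts to electron density.
print(f"response: {slope:.1f} counts/Torr, background: {offset:.1f} counts")
```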
Quantum efficiency measurement of the Transiting Exoplanet Survey Satellite (TESS) CCD detectors
NASA Astrophysics Data System (ADS)
Krishnamurthy, A.; Villasenor, J.; Thayer, C.; Kissel, S.; Ricker, G.; Seager, S.; Lyle, R.; Deline, A.; Morgan, E.; Sauerwein, T.; Vanderspek, R.
2016-07-01
Very precise on-ground characterization and calibration of TESS CCD detectors will significantly assist in the analysis of the science data from the mission. An accurate optical test bench with very high photometric stability has been developed to perform precise measurements of the absolute quantum efficiency. The setup consists of a vacuum dewar with a single MIT Lincoln Lab CCID-80 device mounted on a cold plate with the calibrated reference photodiode mounted next to the CCD. A very stable laser-driven light source is integrated with a closed-loop intensity stabilization unit to control variations of the light source down to a few parts-per-million when averaged over 60 s. Light from the stabilization unit enters a 20 inch integrating sphere. The output light from the sphere produces near-uniform illumination on the cold CCD and on the calibrated reference photodiode inside the dewar. The ratio of the CCD and photodiode signals provides the absolute quantum efficiency measurement. The design, key features, error analysis, and results from the test campaign are presented.
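The ratio measurement behind an absolute quantum efficiency can be sketched as follows: the calibrated photodiode current gives the incident power and hence the photon flux, which is compared with the electron rate detected by the CCD. The responsivity, currents and geometric scaling factor below are invented placeholders, not values from the test campaign.

```python
H = 6.62607015e-34     # Planck constant, J s
C_LIGHT = 2.99792458e8 # speed of light, m/s

def quantum_efficiency(ccd_electrons_per_s, pd_current_a, pd_responsivity_a_per_w,
                       wavelength_m, area_ratio):
    """QE = detected electrons per photon incident on the CCD region being compared.

    The photodiode current and its calibrated responsivity give the incident power and
    hence the photon rate; area_ratio scales that flux to the CCD area (all illustrative)."""
    power_w = pd_current_a / pd_responsivity_a_per_w
    photon_rate = power_w * wavelength_m / (H * C_LIGHT)
    return ccd_electrons_per_s / (photon_rate * area_ratio)

qe = quantum_efficiency(ccd_electrons_per_s=3.1e9, pd_current_a=2.0e-9,
                        pd_responsivity_a_per_w=0.45, wavelength_m=800e-9, area_ratio=0.18)
print(f"QE ~ {qe:.2f}")
```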
NASA Astrophysics Data System (ADS)
Meyer, Swen; Ludwig, Ralf
2013-04-01
According to current climate projections, Mediterranean countries are at high risk for an even more pronounced susceptibility to changes in the hydrological budget and extremes. While there is scientific consensus that climate-induced changes on the hydrology of Mediterranean regions are presently occurring and are projected to amplify in the future, very little knowledge is available about the quantification of these changes, which is hampered by a lack of suitable and cost-effective hydrological monitoring and modeling systems. The European FP7 project CLIMB aims to analyze climate-induced changes on the hydrology of Mediterranean basins by investigating 7 test sites located in Italy, France, Turkey, Tunisia, Gaza and Egypt. CLIMB employs a combination of novel geophysical field monitoring concepts, remote sensing techniques and integrated hydrologic modeling to improve process descriptions and understanding and to quantify existing uncertainties in climate change impact analysis. The Rio Mannu Basin, located in Sardinia, Italy, is one test site of the CLIMB project. The catchment has a size of 472.5 km2 and ranges from 62 to 946 meters in elevation; at mean annual temperatures of 16°C and precipitation of about 700 mm, the annual runoff volume is about 200 mm. The physically based Water Simulation Model WaSiM Vers. 2 (Schulla & Jasper, 1999) was set up to model current and projected future hydrological conditions. The availability of measured meteorological and hydrological data is poor, as is common to many Mediterranean catchments. The lack of available measured input data hampers the calibration of the model setup and the validation of model outputs. State-of-the-art remote sensing and field measuring techniques were applied to improve the quality of hydrological input parameters. In a field campaign, about 250 soil samples were collected and lab-analyzed. Different geostatistical regionalization methods were tested to improve the model setup. The soil parameterization of the model was tested against publicly available soil data. Results show a significant improvement of modeled soil moisture outputs. To validate WaSiM's evapotranspiration (ETact) outputs, Landsat TM images were used to calculate the actual monthly mean ETact rates using the triangle method (Jiang and Islam, 1999). Simulated spatial ETact patterns and those derived from remote sensing show a good fit, especially for the growing season. WaSiM was driven with the meteorological forcing taken from 4 different ENSEMBLES climate projections for a reference (1971-2000) and a future (2041-2070) time series. Output results were analyzed for climate-induced changes on selected hydrological variables. While the climate projections reveal increased precipitation rates in the spring season, first simulation results show an earlier onset and an increased duration of the dry season, imposing an increased irrigation demand and a higher vulnerability of agricultural productivity.
Calibration Issues and Operating System Requirements for Electron-Probe Microanalysis
NASA Technical Reports Server (NTRS)
Carpenter, P.
2006-01-01
Instrument purchase requirements and dialogue with manufacturers have established hardware parameters for alignment, stability, and reproducibility, which have helped improve the precision and accuracy of electron microprobe analysis (EPMA). The development of correction algorithms and the accurate solution of quantitative analysis problems require the minimization of systematic errors and rely on internally consistent data sets. Improved hardware and computer systems have resulted in better automation of vacuum systems, stage and wavelength-dispersive spectrometer (WDS) mechanisms, and x-ray detector systems, which have improved instrument stability and precision. Improved software now allows extended automated runs involving diverse setups and better integrates digital imaging and quantitative analysis. However, instrumental performance is not regularly maintained: WDS are aligned and calibrated during installation, but few laboratories appear to check and maintain this calibration. In particular, detector deadtime (DT) data are typically assumed rather than measured, due primarily to the difficulty and inconvenience of the measurement process. This is a source of fundamental systematic error in many microprobe laboratories and is unknown to the analyst, as the magnitude of the DT correction is not listed in output by microprobe operating systems. The analyst must remain vigilant to deviations in instrumental alignment and calibration, and microprobe system software must conveniently verify the necessary parameters. Microanalysis of mission-critical materials requires an ongoing demonstration of instrumental calibration. Possible approaches to improvements in instrument calibration, quality control, and accuracy will be discussed. Development of a set of core requirements based on discussions with users, researchers, and manufacturers can yield documents that improve and unify the methods by which instruments can be calibrated. These results can be used to continue improvements of EPMA.
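For orientation, the magnitude of the dead-time effect discussed above can be judged from the standard non-paralyzable correction; this is a generic sketch, not the expression used by any particular microprobe operating system:

```python
def deadtime_correct(measured_cps, tau_s):
    """Non-paralyzable dead-time correction: N_true = N_meas / (1 - N_meas * tau).
    measured_cps: observed count rate (counts/s); tau_s: detector dead time (s).
    Only meaningful while measured_cps * tau_s < 1."""
    loss = measured_cps * tau_s
    if loss >= 1.0:
        raise ValueError("count rate too high for this correction model")
    return measured_cps / (1.0 - loss)
```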
Hand-eye calibration for rigid laparoscopes using an invariant point.
Thompson, Stephen; Stoyanov, Danail; Schneider, Crispin; Gurusamy, Kurinchi; Ourselin, Sébastien; Davidson, Brian; Hawkes, David; Clarkson, Matthew J
2016-06-01
Laparoscopic liver resection has significant advantages over open surgery due to less patient trauma and faster recovery times, yet it can be difficult due to the restricted field of view and lack of haptic feedback. Image guidance provides a potential solution but one current challenge is in accurate "hand-eye" calibration, which determines the position and orientation of the laparoscope camera relative to the tracking markers. In this paper, we propose a simple and clinically feasible calibration method based on a single invariant point. The method requires no additional hardware, can be constructed by theatre staff during surgical setup, requires minimal image processing and can be visualised in real time. Real-time visualisation allows the surgical team to assess the calibration accuracy before use in surgery. In addition, in the laboratory, we have developed a laparoscope with an electromagnetic tracking sensor attached to the camera end and an optical tracking marker attached to the distal end. This enables a comparison of tracking performance. We have evaluated our method in the laboratory and compared it to two widely used methods, "Tsai's method" and "direct" calibration. The new method is of comparable accuracy to existing methods, and we show RMS projected error due to calibration of 1.95 mm for optical tracking and 0.85 mm for EM tracking, versus 4.13 and 1.00 mm respectively, using existing methods. The new method has also been shown to be workable under sterile conditions in the operating room. We have proposed a new method of hand-eye calibration, based on a single invariant point. Initial experience has shown that the method provides visual feedback, satisfactory accuracy and can be performed during surgery. We also show that an EM sensor placed near the camera would provide significantly improved image overlay accuracy.
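As an illustration of the invariant-point idea (a sketch, not the authors' implementation), the calibration can be posed as a least-squares problem: assuming the laparoscope provides 3D coordinates of the fixed point in the camera frame and the tracker provides marker-to-world poses, solve for the hand-eye transform and the unknown world point that make all mapped observations coincide:

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation as R

def handeye_from_invariant_point(marker_R, marker_t, cam_pts):
    """marker_R: list of 3x3 marker-to-world rotations from the tracker;
    marker_t: matching translations; cam_pts: the invariant point expressed in
    camera coordinates at each frame. Returns (R_handeye, t_handeye, world_pt)."""
    def residuals(p):
        Rx = R.from_rotvec(p[0:3]).as_matrix()   # camera-to-marker rotation
        tx, w = p[3:6], p[6:9]                   # translation and fixed world point
        return np.concatenate([Ri @ (Rx @ ci + tx) + ti - w
                               for Ri, ti, ci in zip(marker_R, marker_t, cam_pts)])
    sol = least_squares(residuals, x0=np.zeros(9))
    return R.from_rotvec(sol.x[0:3]).as_matrix(), sol.x[3:6], sol.x[6:9]
```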
NASA Astrophysics Data System (ADS)
Liu, Xuejin; Chen, Han; Bornefalk, Hans; Danielsson, Mats; Karlsson, Staffan; Persson, Mats; Xu, Cheng; Huber, Ben
2015-02-01
The variation among energy thresholds in a multibin detector for photon-counting spectral CT can lead to ring artefacts in the reconstructed images. Calibration of the energy thresholds can be used to achieve homogeneous threshold settings or to develop compensation methods to reduce the artefacts. We have developed an energy-calibration method for the different comparator thresholds employed in a photon-counting silicon-strip detector. In our case, this corresponds to specifying the linear relation between the threshold positions in units of mV and the actual deposited photon energies in units of keV. This relation is determined by gain and offset values that differ for different detector channels due to variations in the manufacturing process. Typically, the calibration is accomplished by correlating the peak positions of obtained pulse-height spectra to known photon energies, e.g. with the aid of mono-energetic x rays from synchrotron radiation, radioactive isotopes or fluorescence materials. Instead of mono-energetic x rays, the calibration method presented in this paper makes use of a broad x-ray spectrum provided by commercial x-ray tubes. Gain and offset as the calibration parameters are obtained by a regression analysis that adjusts a simulated spectrum of deposited energies to a measured pulse-height spectrum. Besides the basic photon interactions such as Rayleigh scattering, Compton scattering and photo-electric absorption, the simulation takes into account the effect of pulse pileup, charge sharing and the electronic noise of the detector channels. We verify the method for different detector channels with the aid of a table-top setup, where we find the uncertainty of the keV-value of a calibrated threshold to be between 0.1 and 0.2 keV.
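One simplified way to realize such a regression, assuming the simulated deposited-energy spectrum is available as bin centres with expected counts and that the comparator scan records counts above each threshold (a sketch, not the authors' fitting code):

```python
import numpy as np
from scipy.optimize import least_squares

def fit_gain_offset(thr_mV, counts_above_thr, e_sim_keV, w_sim):
    """Estimate the per-channel gain (mV/keV) and offset (mV) by matching the
    integral of a simulated deposited-energy spectrum above each threshold to
    the measured threshold-scan counts."""
    order = np.argsort(e_sim_keV)
    e = np.asarray(e_sim_keV, float)[order]
    ccum = np.cumsum(np.asarray(w_sim, float)[order][::-1])[::-1]  # counts above energy e

    def counts_above_keV(e_thr):
        return np.interp(e_thr, e, ccum, left=ccum[0], right=0.0)

    def residuals(p):
        gain, offset, scale = p
        return scale * counts_above_keV((np.asarray(thr_mV) - offset) / gain) - counts_above_thr

    gain, offset, _ = least_squares(residuals, x0=[5.0, 0.0, 1.0]).x  # illustrative start values
    return gain, offset   # threshold energy in keV = (thr_mV - offset) / gain
```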
NASA Technical Reports Server (NTRS)
Littell, Justin D.; Annett, Martin S.
2013-01-01
A series of 16 vertical tests were conducted on a Test Device for Human Occupant Restraint (THOR) - NT 50th percentile Anthropomorphic Test Device (ATD) at NASA Langley Research Center (LaRC). The purpose of the tests conducted at NASA LaRC was threefold. The first was to add vertical response data to the growing test database for THOR-NT development and validation. The second was to validate the THOR-NT analytical computational models currently in development for the vertical loading environment; these models have been calibrated for frontal crash environments with concentration on accurately replicating head/neck, thoracic, and lower extremity responses. The third was to gain familiarity with the THOR ATD, as NASA is interested in evaluating advanced ATDs for use in future flight and research projects. The THOR was subjected to vertical loading conditions ranging between 5 and 16 g in magnitude and 40 to 120 milliseconds (msec) in duration. It was also tested under conditions identical to previous tests conducted on the Hybrid II and III ATDs to allow comparisons to be made. Variations in the test setup were also introduced, such as the addition of a footrest in an attempt to offload some of the impact load into the legs. A full data set from the THOR-NT ATD tests is presented and discussed. Results from the tests show that the THOR was largely insensitive to differences in the loading conditions, perhaps due in part to their small magnitudes. THOR responses, when compared to the Hybrid II and III in the lumbar region, demonstrated that the THOR more closely resembled the straight-spine Hybrid setup. In the neck region, the THOR behaved more like the Hybrid III. However, in both cases the responses were not identical, indicating that the THOR would respond differently from the Hybrid II and III ATDs when subjected to identical impact conditions. The addition of a footrest did not significantly affect the THOR response due to the nature of how the loading conditions were applied.
NASA Astrophysics Data System (ADS)
Muir, B. R.; McEwen, M. R.; Rogers, D. W. O.
2014-10-01
A method is presented to obtain ion chamber calibration coefficients relative to secondary standard reference chambers in electron beams using depth-ionization measurements. Results are obtained as a function of depth and average electron energy at depth in 4, 8, 12 and 18 MeV electron beams from the NRC Elekta Precise linac. The PTW Roos, Scanditronix NACP-02, PTW Advanced Markus and NE 2571 ion chambers are investigated. The challenges and limitations of the method are discussed. The proposed method produces useful data at shallow depths. At depths past the reference depth, small shifts in positioning or drifts in the incident beam energy affect the results, thereby providing a built-in test of incident electron energy drifts and/or chamber set-up. Polarity corrections for ion chambers as a function of average electron energy at depth agree with literature data. The proposed method produces results consistent with those obtained using the conventional calibration procedure while gaining much more information about the behavior of the ion chamber with similar data acquisition time. Measurement uncertainties in calibration coefficients obtained with this method are estimated to be less than 0.5%. These results open up the possibility of using depth-ionization measurements to yield chamber ratios which may be suitable for primary standards-level dissemination.
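The essence of such a cross-calibration is a depth-by-depth ratio of readings against the secondary standard; a bare-bones sketch (deliberately ignoring beam-quality, polarity and recombination corrections) might look like:

```python
def calibration_coefficient_vs_depth(m_ref, n_ref, m_test):
    """Cross-calibration coefficient of the test chamber at each depth:
    N_test(z) = M_ref(z) * N_ref / M_test(z), where m_ref and m_test are the
    corrected depth-ionization readings of reference and test chambers and
    n_ref is the reference chamber's calibration coefficient."""
    return [r * n_ref / t for r, t in zip(m_ref, m_test)]
```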
On-ground calibration of AGILE-GRID with a photon beam: results and lessons for the future
NASA Astrophysics Data System (ADS)
Cattaneo, P. W.; Rappoldi, A.
2013-06-01
The AGILE satellite carries the Gamma Ray Imaging Detector (GRID), consisting of a Silicon Tracker (ST), a Cesium Iodide Mini-Calorimeter and an Anti-Coincidence system of plastic scintillator bars. The ST needs a calibration with a γ-ray beam to validate the simulation used to calculate the detector response versus the energy and direction of the γ rays. A tagged γ-ray beam line was designed at the Beam Test Facility of the Laboratori Nazionali di Frascati, generated by an electron beam through bremsstrahlung in a position-sensitive target. The γ-ray energy is deduced from the difference between the beam energy and the post-bremsstrahlung electron energy [P. W. Cattaneo, et al., Characterization of a tagged γ-ray beam line at the DAΦNE beam test facility, Nucl. Instr. and Meth. A 674 (2012) 55-66; P. W. Cattaneo, et al., First results about on-ground calibration of the silicon tracker for the AGILE satellite, Nucl. Instr. and Meth. A 630(1) (2011) 251-257]. The electron energy is measured by a spectrometer consisting of a dipole magnet and an array of position-sensitive silicon strip detectors, the Photon Tagging System (PTS). In this paper the setup and the calibration of AGILE performed in 2005 are described.
Water level effects on breaking wave setup for Pacific Island fringing reefs
NASA Astrophysics Data System (ADS)
Becker, J. M.; Merrifield, M. A.; Ford, M.
2014-02-01
The effects of water level variations on breaking wave setup over fringing reefs are assessed using field measurements obtained at three study sites in the Republic of the Marshall Islands and the Mariana Islands in the western tropical Pacific Ocean. At each site, reef flat setup varies over the tidal range with weaker setup at high tide and stronger setup at low tide for a given incident wave height. The observed water level dependence is interpreted in the context of radiation stress gradients specified by an idealized point break model generalized for nonnormally incident waves. The tidally varying setup is due in part to depth-limited wave heights on the reef flat, as anticipated from previous reef studies, but also to tidally dependent breaking on the reef face. The tidal dependence of the breaking is interpreted in the context of the point break model in terms of a tidally varying wave height to water depth ratio at breaking. Implications for predictions of wave-driven setup at reef-fringed island shorelines are discussed.
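The balance behind the point-break interpretation can be illustrated with a one-dimensional, depth-limited setup integration across a reef profile; this is an idealized sketch (normal incidence, shallow-water radiation stress, no shoaling outside the surf zone), not the generalized model used in the paper:

```python
import numpy as np

def wave_setup(x, h, H0, gamma=0.8, rho=1025.0, g=9.81):
    """Integrate d(eta)/dx = -dSxx/dx / (rho*g*(h+eta)) shoreward along x,
    with Sxx = (3/16)*rho*g*H^2 and a depth-limited height H = min(H0, gamma*(h+eta)).
    x: cross-shore positions (m); h: still-water depths (m); H0: incident wave height."""
    eta = np.zeros(len(x))
    H = min(H0, gamma * h[0])
    sxx_prev = 3.0 / 16.0 * rho * g * H ** 2
    for i in range(1, len(x)):
        d = h[i] + eta[i - 1]                      # local total water depth
        H = min(H0, gamma * d)                     # depth-limited wave height
        sxx = 3.0 / 16.0 * rho * g * H ** 2
        eta[i] = eta[i - 1] - (sxx - sxx_prev) / (rho * g * d)
        sxx_prev = sxx
    return eta
```

Run over the same profile with a higher still-water level (high tide), this balance yields a weaker reef-flat setup, consistent with the tidal dependence described above.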
Cama-Moncunill, Raquel; Markiewicz-Keszycka, Maria; Dixit, Yash; Cama-Moncunill, Xavier; Casado-Gavalda, Maria P; Cullen, Patrick J; Sullivan, Carl
2016-07-01
Powdered infant formula (PIF) is a worldwide, industrially produced, human milk substitute. Manufacture of PIF faces strict quality controls in order to ensure that the product meets all compositional requirements. Near-infrared (NIR) spectroscopy is a rapid, non-destructive and well-qualified technique for food quality assessments. The use of fibre-optic NIR sensors allows measuring in-line and in real time, and can record spectra from different stages of the process. The non-contact character of fibre-optic sensors can be enhanced by fitting collimators, which allow operation at various distances. The system, based on a Fabry-Perot interferometer, records four spectra concurrently, rather than consecutively as in "quasi-simultaneous" multipoint NIR systems. In the present study, this novel multipoint NIR spectroscopy system, equipped with four fibre-optic probes with collimators, was assessed to determine carbohydrate and protein contents of PIF samples under static and motion conditions (0.02, 0.15 and 0.30 m/s) to simulate possible industrial scenarios. Best results were obtained under static conditions, providing an R(2) of calibration of 0.95 and RMSEP values of 1.89%. Yet the in-motion predictions also yielded reasonably low RMSEP values, for instance 2.70% at 0.15 m/s, demonstrating the system's potential for in/on-line applications at various speeds. The current work also evaluated the viability of using general off-line calibrations developed under static conditions for on/in-line applications subject to motion. To this end, calibrations in both modes were developed and compared. Best results were obtained with specific calibrations; however, reasonably accurate models were also obtained with the general calibration. Furthermore, this work illustrated the independence of the collimator-probe setup by characterizing simultaneously recorded PIF samples according to their carbohydrate content, even when measured under different conditions. Therefore, the improved multipoint NIR approach constitutes a potential in/on-line tool for quality evaluation of PIF over the manufacturing process.
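A multivariate calibration of this kind is commonly built with partial least squares regression; the following generic sketch (not the chemometric pipeline of the study) fits a model on calibration spectra and reports an R² of calibration and an RMSEP on an independent validation set:

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.metrics import r2_score

def nir_calibration(X_cal, y_cal, X_val, y_val, n_components=8):
    """X_*: NIR spectra (samples x wavelengths); y_*: reference values,
    e.g. carbohydrate or protein content in percent."""
    model = PLSRegression(n_components=n_components).fit(X_cal, y_cal)
    r2_cal = r2_score(np.ravel(y_cal), model.predict(X_cal).ravel())
    rmsep = np.sqrt(np.mean((np.ravel(y_val) - model.predict(X_val).ravel()) ** 2))
    return model, r2_cal, rmsep
```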
NASA Astrophysics Data System (ADS)
Dziomba, Thorsten; Koenders, Ludger; Wilkening, Günter
2005-10-01
The continuing miniaturization in many technologies - among them optical systems - demands high-resolution measurements with uncertainties in the nanometre range or even well below. A brief introduction to measurement methods used at the micro- and nanometre scale is therefore given. While a wide range of these methods are well established for the determination of various physical properties down to the nanometric scale, it is Scanning Probe Microscopy (SPM) that provides unique direct access to topographic surface features in the size range from atomic diameters to some ten or hundred micrometres. With the increasing use of SPMs as quantitative measurement instruments, the demand for standardized calibration routines for this type of instrument also rises. However, except for a few specially designed set-ups mainly at National Metrology Institutes (e.g. PTB in Germany), measurements made with SPMs usually lack traceability to the metre definition. A number of physical transfer standards have therefore been developed and are already available commercially. While detailed knowledge of the standards' properties is a prerequisite for their practical applicability, the calibration procedure itself deserves careful consideration as well. As there is, up to now, no generally accepted concept of how to perform SPM calibrations, guidelines are being developed at various national and international levels, e.g. VDI/VDE-GMA in Germany and ISO. This paper discusses the draft of an SPM calibration guideline by focusing on several critical practical aspects of SPM calibration, and intends to invite readers to take an active part in the guideline discussions.
Water Resources Implications of Cellulosic Biofuel Production at a Regional Scale
NASA Astrophysics Data System (ADS)
Christopher, S. F.; Schoenholtz, S. H.; Nettles, J. E.
2011-12-01
Recent increases in oil prices, a strong national interest in greater energy independence, and a concern for the role of fossil fuels in global climate change have led to a dramatic expansion in use of alternative renewable energy sources in the U.S. The U.S. government has mandated production of 36 billion gallons of renewable fuels by 2022, of which 16 billion gallons are required to be cellulosic biofuels. Production of cellulosic biomass offers a promising alternative to corn-based systems because large-scale production of corn-based ethanol often requires irrigation and is associated with increased erosion, excess sediment export, and enhanced leaching of nitrogen and phosphorus. Although cultivation of switchgrass using standard agricultural practices is one option being considered for production of cellulosic biomass, intercropping cellulosic biofuel crops within managed forests could provide feedstock without primary land use change or the water quality impacts associated with annual crops. Catchlight Energy LLC is examining the feasibility and sustainability of intercropping switchgrass in loblolly pine plantations in the southeastern U.S. Ongoing research is determining efficient operational techniques and information needed to evaluate effects of these practices on water resources in small watershed-scale (~25 ha) studies. Three sets of four to five sub-watersheds are fully instrumented and currently collecting calibration data in North Carolina, Alabama, and Mississippi. These watershed studies will provide detailed information to understand processes and guide management decisions. However, environmental implications of cellulosic systems need to be examined at a regional scale. We used the Soil Water Assessment Tool (SWAT), a physically-based hydrologic model, to examine water quantity effects of various land use change scenarios ranging from switchgrass intercropping of a small percentage of managed pine forest land to conversion of all managed forested land to switchgrass. The regional-scale SWAT model was successfully run and calibrated on the ~5 million ha Tombigbee Watershed located in Mississippi and Alabama. Publicly available datasets were used as input to the model and for calibration. To improve calibration statistics, five tree age classes (0-4 yr, 4-10 yr, 10-17 yr, 17-24 yr, 24-30 yr) were added to the model to more appropriately represent existing forested systems in the region, which are not included within the standard SWAT set-up. Our results will be essential to public policy makers as they influence and plan for large-scale production of cellulosic biofuels, while sustaining water quality and quantity.
NASA Astrophysics Data System (ADS)
Barber, Corinne; DIRC at EIC Collaboration
2015-10-01
The High-B test facility at Thomas Jefferson National Accelerator Facility allows researchers to evaluate the gain of compact photon sensors, such as Micro-Channel-Plate Photomultipliers (MCP-PMTs), in magnetic fields up to 5 T. These ongoing studies support the development of a Detector of Internally Reflected Cherenkov light (DIRC) to be used in an Electron Ion Collider (EIC). Here, we present our summer 2015 activities to upgrade and improve the facility, and we show results for MCP-PMT gain changes in high B-fields. To monitor the stability of the light delivered to the MCP-PMTs being tested, we implemented a Silicon Photomultiplier (SiPM) in the setup and calibrated the ADC channel that reads out this sensor. A 405-nm Light-Emitting Diode (LED) housed in an optical tube compatible with neutral density filters was also installed. The filters provide an alternative way of reducing the light output of the LED to operate the MCP-PMTs in single-photon mode. We calibrated a set of filters by means of a photodiode and measured the photon flux at multiple positions relative to the LED. This information helped us to design 3D-printed holders unique to each MCP-PMT so that the photocathode receives the greatest amount of light. The improvements to the setup allow for more precise PMT gain evaluation. This team includes 7 collaborators/co-authors besides myself: Yordanka Ilieva, Kijun Park, Greg Kalicy, Carl Zorn, Pawel Nadel-Turonski, Tongtong Cao, and Lee.
Test Setup For Model Landing Investigation of a Winged Space Vehicle
1960-07-20
Test Setup For Model Landing Investigation of a Winged Space Vehicle. Image used in NASA Document TN-D-1496; image 1960-L-04633.01 is Figure 9a in NASA Document L-2064. Photograph of the model on the launcher and landing on the runway.
Stauffer, Paul R; Snow, Brent W; Rodrigues, Dario B; Salahi, Sara; Oliveira, Tiago R; Reudink, Doug; Maccarini, Paolo F
2014-02-01
This study characterizes the sensitivity and accuracy of a non-invasive microwave radiometric thermometer intended for monitoring body core temperature directly in the brain, to assist rapid recovery from hypothermia such as occurs during surgical procedures. To study this approach, a human head model was constructed with separate brain and scalp regions consisting of tissue-equivalent liquids circulating at independent temperatures on either side of intact skull. This test setup provided differential surface/deep tissue temperatures for quantifying sensitivity to changes in brain temperature independent of scalp and surrounding environment. A single-band radiometer was calibrated and tested in a multilayer model of the human head with differential scalp and brain temperatures. Following calibration of a 500 MHz bandwidth microwave radiometer in the head model, feasibility of clinical monitoring was assessed in a pediatric patient during a 2-hour surgery. The results of phantom testing showed that the calculated radiometric equivalent brain temperature agreed within 0.4°C of the measured temperature when the brain phantom was lowered by 10°C and returned to its original temperature (37°C), while the scalp was maintained constant over a 4.6-hour experiment. The intended clinical use of this system was demonstrated by monitoring brain temperature during surgery of a pediatric patient. Over the 2-hour surgery, the radiometrically measured brain temperature tracked within 1-2°C of rectal and nasopharynx temperatures, except during rapid cooldown and heatup periods when brain temperature deviated 2-4°C from the slower-responding core temperature surrogates. In summary, the radiometer demonstrated long-term stability, accuracy and sensitivity sufficient for clinical monitoring of deep brain temperature during surgery.
Precision assessment of model-based RSA for a total knee prosthesis in a biplanar set-up.
Trozzi, C; Kaptein, B L; Garling, E H; Shelyakova, T; Russo, A; Bragonzoni, L; Martelli, S
2008-10-01
Model-based Roentgen Stereophotogrammetric Analysis (RSA) was recently developed for the measurement of prosthesis micromotion. Its main advantage is that markers do not need to be attached to the implants as traditional marker-based RSA requires. Model-based RSA has only been tested in uniplanar radiographic set-ups. A biplanar set-up would theoretically facilitate the pose estimation algorithm, since radiographic projections would show more distinct shape features of the implants than uniplanar images. We tested the precision of model-based RSA and compared it with that of the traditional marker-based method in a biplanar set-up. Micromotions of both tibial and femoral components were measured with both techniques from double examinations of patients participating in a clinical study. The results showed that in the biplanar set-up model-based RSA presents a homogeneous distribution of precision for all translation directions, but an inhomogeneous error for rotations; in particular, internal-external rotation presented higher errors than rotations about the transverse and sagittal axes. Model-based RSA was less precise than the marker-based method, although the differences were not significant for the translations and rotations of the tibial component, with the exception of internal-external rotation. For both prosthesis components the precision of model-based RSA was below 0.2 mm for all translations, and below 0.3 degrees for rotations about the transverse and sagittal axes. These values are still acceptable for clinical studies aimed at evaluating total knee prosthesis micromotion. In a biplanar set-up, model-based RSA is a valid alternative to traditional marker-based RSA, for which marking of the prosthesis is an enormous disadvantage.
NASA Astrophysics Data System (ADS)
Kit, Eliezer; Liberzon, Dan
2016-09-01
High-resolution measurements of turbulence in the atmospheric boundary layer (ABL) are critical to the understanding of physical processes and the parameterization of important quantities, such as the turbulent kinetic energy dissipation. The low spatio-temporal resolution of standard atmospheric instruments, sonic anemometers and LIDARs, limits their suitability for fine-scale measurements of the ABL. The use of miniature hot-films is an alternative technique, although such probes require frequent calibration, which is logistically untenable in field setups. Accurate and truthful calibration is crucial for multi-hot-film applications in atmospheric studies, because the ability to conduct calibration in situ ultimately determines the quality of the turbulence measurements. Kit et al (2010 J. Atmos. Ocean. Technol. 27 23-41) described a novel methodology for calibration of hot-film probes using a collocated sonic anemometer combined with a neural network (NN) approach. An important step in the algorithm is the generation of a calibration set for NN training by an appropriate low-pass filtering of the high-resolution voltages measured by the hot-film sensors and the low-resolution velocities acquired by the sonic. In Kit et al (2010 J. Atmos. Ocean. Technol. 27 23-41), Kit and Grits (2011 J. Atmos. Ocean. Technol. 28 104-10) and Vitkin et al (2014 Meas. Sci. Technol. 25 75801), the authors reported on successful use of this approach for in situ calibration, but also on the method's limitations and restricted range of applicability. In their earlier work, a jet facility and a probe comprising two orthogonal x-hot-films were used for calibration and for full dataset generation. In the current work, a comprehensive laboratory study of 3D calibration of two multi-hot-film probes (triple- and four-sensor) using a grid flow was conducted. The probes were embedded in a collocated sonic, and their relative pitch and yaw orientation to the mean flow was changed by means of motorized traverses. The study demonstrated that NN calibration is a powerful tool for calibration of multi-sensor 3D hot-film probes embedded in a collocated sonic, and can be employed in long-lasting field campaigns.
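In the spirit of the procedure described above (a schematic sketch only, using an off-the-shelf regressor rather than the authors' NN implementation), the calibration step can be expressed as: low-pass the high-rate hot-film voltages to the sonic rate, train a network mapping filtered voltages to sonic velocity components, then apply it to the raw voltages:

```python
import numpy as np
from scipy.signal import decimate
from sklearn.neural_network import MLPRegressor

def train_hotfilm_calibration(hf_volts, sonic_vel, hf_rate_hz, sonic_rate_hz):
    """hf_volts: (n_hf_samples, n_films) hot-film voltages; sonic_vel:
    (n_sonic_samples, 3) collocated sonic velocities. Returns a trained net;
    net.predict(hf_volts) then yields high-resolution velocity estimates."""
    q = int(round(hf_rate_hz / sonic_rate_hz))
    # Anti-alias filter and downsample each hot-film channel to the sonic rate
    v_lp = np.column_stack([decimate(hf_volts[:, j], q) for j in range(hf_volts.shape[1])])
    n = min(len(v_lp), len(sonic_vel))
    net = MLPRegressor(hidden_layer_sizes=(10, 10), max_iter=5000, random_state=0)
    net.fit(v_lp[:n], sonic_vel[:n])
    return net
```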
Fine water spray for fire extinguishing. Phase 2: Turbine hood
NASA Astrophysics Data System (ADS)
Aune, P.; Wighus, R.; Drangsholt, G.; Stensaas, J. P.
1994-12-01
SINTEF has carried out tests of a Fine Water Spray fire suppression system intended to be used as a replacement for Halon systems in turbine hoods on offshore platforms operated by British Petroleum Norway. The tests were carried out in a 70 cu m full scale model representing a turbine hood of the Ula platform in the North Sea. A mock-up of a gas turbine was installed in the model. The scope of work in Phase 2 was to verify the efficiency of fire suppression in realistic fire scenarios using a Fine Water Spray system, and to find an optimum procedure for water application in a fire situation. Two reports have been made from the experiments in Phase 2, one Main Report, STF25 A94036, and the present Technical Report, STF25 A94037. The discussion and conclusions are given in the Main Report while this Technical Report gives a more thorough presentation of the experimental setup and methods used for calibration and calculation of measured values. In addition, a complete set of curves for each experiment is included.
Numerical and Experimental Studies on Impact Loaded Concrete Structures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Saarenheimo, Arja; Hakola, Ilkka; Karna, Tuomo
2006-07-01
An experimental set-up has been constructed for medium-scale impact tests. The main objective of this effort is to provide data for the calibration and verification of numerical models of a loading scenario in which an aircraft impacts a nuclear power plant. One goal is to develop and put into use numerical methods for predicting the response of reinforced concrete structures to impacts of deformable projectiles that may contain combustible liquid ('fuel'). Loading and structural behaviour, such as the collapse mechanism and the damage grade, will be predicted by simple analytical methods and using the non-linear FE method. In the so-called Riera method the behavior of the missile material is assumed to be rigid-plastic or rigid visco-plastic. Using elastic-plastic and elastic visco-plastic material models, calculations are carried out with the ABAQUS/Explicit finite element code, assuming an axisymmetric deformation mode for the missile. With both methods, typically, the impact force time history, the velocity of the missile rear end and the missile shortening during the impact were recorded for comparisons. (authors)
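For orientation, the Riera approach mentioned above reduces, for a uniform rigid-plastic missile striking a rigid target, to the force history F(t) = Pc + mu*v(t)^2, with the uncrushed part decelerated by the crushing force alone; a minimal sketch, not the project's analysis code:

```python
import numpy as np

def riera_force(m0, length, crush_force, v0, dt=1e-4):
    """Impact force history of a uniform, rigid-plastic missile (total mass m0,
    length, constant crushing force) hitting a rigid target at velocity v0.
    Returns (time array, force array)."""
    mu = m0 / length                      # mass per unit length
    x, v, t = 0.0, v0, 0.0                # crushed length, velocity, time
    times, forces = [], []
    while v > 0.0 and x < length:
        times.append(t)
        forces.append(crush_force + mu * v * v)
        m_uncrushed = m0 - mu * x
        v -= crush_force / m_uncrushed * dt
        x += v * dt
        t += dt
    return np.array(times), np.array(forces)
```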
Application of High Speed Digital Image Correlation in Rocket Engine Hot Fire Testing
NASA Technical Reports Server (NTRS)
Gradl, Paul R.; Schmidt, Tim
2016-01-01
Hot fire testing of rocket engine components and rocket engine systems is a critical aspect of the development process to understand performance, reliability and system interactions. Ground testing provides the opportunity for highly instrumented development testing to validate analytical model predictions and determine necessary design changes and process improvements. To properly obtain discrete measurements for model validation, instrumentation must survive the highly dynamic and extreme-temperature environment of hot fire testing. Digital Image Correlation has been investigated and is being evaluated as a technique to augment traditional instrumentation during component and engine testing, providing further data for additional performance improvements and cost savings. The feasibility of digital image correlation techniques was demonstrated in subscale and full-scale hot fire testing. This incorporated a pair of high-speed cameras, installed and operated under the extreme environments present on the test stand, to measure three-dimensional, real-time displacements and strains. The development process, setup and calibrations, hot fire test data collection, and post-test analysis and results are presented in this paper.
Banos, Oresti; Damas, Miguel; Pomares, Hector; Rojas, Ignacio
2012-01-01
The main objective of fusion mechanisms is to increase the individual reliability of the systems through the use of collective knowledge. Moreover, fusion models are also intended to guarantee a certain level of robustness. This is particularly required for problems such as human activity recognition, where runtime changes in the sensor setup seriously disturb the reliability of the initially deployed systems. For commonly used recognition systems based on inertial sensors, these changes are primarily characterized as sensor rotations, displacements or faults related to the batteries or calibration. In this work we show the robustness capabilities of a sensor-weighted fusion model when dealing with such disturbances under different circumstances. Using the proposed method, an improvement of up to 60% is obtained when a minority of the sensors are artificially rotated or degraded, independent of the level of disturbance (noise) imposed. These robustness capabilities also apply for any number of sensors affected by a low to moderate noise level. The presented fusion mechanism compensates for the poor performance that would otherwise be obtained if just a single sensor were considered. PMID:22969386
Technical Note: Introduction of variance component analysis to setup error analysis in radiotherapy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Matsuo, Yukinori, E-mail: ymatsuo@kuhp.kyoto-u.ac.
Purpose: The purpose of this technical note is to introduce variance component analysis to the estimation of systematic and random components in setup error of radiotherapy. Methods: Balanced data according to the one-factor random effect model were assumed. Results: Analysis-of-variance (ANOVA)-based computation was applied to estimate the values and their confidence intervals (CIs) for systematic and random errors and the population mean of setup errors. The conventional method overestimates systematic error, especially in hypofractionated settings. The CI for systematic error becomes much wider than that for random error. The ANOVA-based estimation can be extended to a multifactor model considering multiple causes of setup errors (e.g., interpatient, interfraction, and intrafraction). Conclusions: Variance component analysis may lead to novel applications to setup error analysis in radiotherapy.
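For balanced data (the case assumed in the note), the ANOVA-based estimators take a compact form; the following sketch computes the population mean and the systematic and random components from a patients-by-fractions error matrix (illustrative code, not the authors' implementation):

```python
import numpy as np

def setup_error_components(errors):
    """errors: array of shape (n_patients, n_fractions) of setup errors along
    one axis. Returns (population mean, systematic SD, random SD) from a
    one-factor random-effects ANOVA."""
    m, n = errors.shape
    patient_means = errors.mean(axis=1)
    ms_between = n * np.sum((patient_means - errors.mean()) ** 2) / (m - 1)
    ms_within = np.sum((errors - patient_means[:, None]) ** 2) / (m * (n - 1))
    sigma_random = np.sqrt(ms_within)
    sigma_systematic = np.sqrt(max((ms_between - ms_within) / n, 0.0))
    return errors.mean(), sigma_systematic, sigma_random
```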
Optimization-based manufacturing scheduling with multiple resources and setup requirements
NASA Astrophysics Data System (ADS)
Chen, Dong; Luh, Peter B.; Thakur, Lakshman S.; Moreno, Jack, Jr.
1998-10-01
The increasing demand for on-time delivery and low prices forces manufacturers to seek effective schedules to improve coordination of multiple resources and to reduce product internal costs associated with labor, setup and inventory. This study describes the design and implementation of a scheduling system for J. M. Product Inc., whose manufacturing is characterized by the need to simultaneously consider machines and operators, where an operator may attend several operations at the same time, and by the presence of machines requiring significant setup times. Scheduling problems with these characteristics are typical for many manufacturers, very difficult to handle, and have not been adequately addressed in the literature. In this study, both machines and operators are modeled as resources with finite capacities to obtain efficient coordination between them, and an operator's time can be shared by several operations at the same time to make full use of the operator. Setups are explicitly modeled following our previous work, with additional penalties on excessive setups to reduce setup costs and avoid possible scrap. An integer formulation with a separable structure is developed to maximize on-time delivery of products, low inventory and a small number of setups. Within the Lagrangian relaxation framework, the problem is decomposed into individual subproblems that are effectively solved by using dynamic programming with the additional penalties embedded in state transitions. A heuristic is then developed to obtain a feasible schedule, following our previous work, with a new mechanism to satisfy operator capacity constraints. The method has been implemented using the object-oriented programming language C++ with a user-friendly interface, and numerical testing shows that the method generates high-quality schedules in a timely fashion. Through this simultaneous consideration, machines and operators are well coordinated to facilitate the smooth flow of parts through the system. The explicit modeling of setups and the associated penalties lets parts with the same setup requirements be clustered together to avoid excessive setups.
PRIMAS: a real-time 3D motion-analysis system
NASA Astrophysics Data System (ADS)
Sabel, Jan C.; van Veenendaal, Hans L. J.; Furnee, E. Hans
1994-03-01
The paper describes a CCD TV-camera-based system for real-time multicamera 2D detection of retro-reflective targets and software for accurate and fast 3D reconstruction. Applications of this system can be found in the fields of sports, biomechanics, rehabilitation research, and various other areas of science and industry. The new feature of real-time 3D opens an even broader perspective of application areas; animations in virtual reality are an interesting example. After presenting an overview of the hardware and the camera calibration method, the paper focuses on the real-time algorithms used for matching of the images and subsequent 3D reconstruction of marker positions. When using a calibrated setup of two cameras, it is now possible to track at least ten markers at 100 Hz. Limitations in the performance are determined by the visibility of the markers, which could be improved by adding a third camera.
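The 3D reconstruction step for a matched marker pair can be summarized with a standard linear (DLT) triangulation from two calibrated cameras; this is a generic sketch, not the PRIMAS implementation:

```python
import numpy as np

def triangulate(P1, P2, uv1, uv2):
    """P1, P2: 3x4 camera projection matrices from calibration; uv1, uv2:
    matched 2D marker centroids (pixels). Returns the 3D marker position."""
    A = np.vstack([
        uv1[0] * P1[2] - P1[0],
        uv1[1] * P1[2] - P1[1],
        uv2[0] * P2[2] - P2[0],
        uv2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)     # least-squares solution of A X = 0
    X = Vt[-1]
    return X[:3] / X[3]             # dehomogenize
```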
Semi-Automatic Determination of Rockfall Trajectories
Volkwein, Axel; Klette, Johannes
2014-01-01
Determining rockfall trajectories in the field is essential for calibrating and validating rockfall simulation software. This contribution presents an in situ device and a complementary Local Positioning System (LPS) that allow the determination of parts of the trajectory. An assembly of sensors (herein called the rockfall sensor) is installed in the falling block, recording the 3D accelerations and rotational velocities. The LPS automatically calculates the position of the block along the slope over time based on Wi-Fi signals emitted from the rockfall sensor. The velocity of the block over time is determined through post-processing. The setup of the rockfall sensor is presented, followed by proposed calibration and validation procedures. The performance of the LPS is evaluated by means of different experiments. The results allow for a quality analysis of both the obtained field data and the usability of the rockfall sensor for further applications in the field. PMID:25268916
Self-addressed diffractive lens schemes for the characterization of LCoS displays
NASA Astrophysics Data System (ADS)
Zhang, Haolin; Lizana, Angel; Iemmi, Claudio; Monroy-Ramírez, Freddy A.; Marquez, Andrés.; Moreno, Ignacio; Campos, Juan
2018-02-01
We propose a self-calibration method to determine both the phase-voltage look-up table and the screen phase distribution of Liquid Crystal on Silicon (LCoS) displays by implementing different lens configurations on the studied device within the same optical scheme. On the one hand, the phase-voltage relation is determined from interferometric measurements, which are obtained by addressing split-lens phase distributions to the LCoS display. On the other hand, the surface profile is retrieved by self-addressing a diffractive micro-lens array to the LCoS display, in such a way that we configure a Shack-Hartmann wavefront sensor that self-determines the screen's spatial variations. Moreover, both the phase-voltage response and the surface phase inhomogeneity of the LCoS are measured within the same experimental set-up, without the need for further adjustments. Experimental results prove the usefulness of the above-mentioned technique for the characterization of LCoS displays.
Ion generation and CPC detection efficiency studies in sub 3-nm size range
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kangasluoma, J.; Junninen, H.; Sipilae, M.
2013-05-24
We studied the chemical composition of commonly used condensation particle counter calibration ions with a mass spectrometer and found that in our calibration setup the negatively charged ammonium sulphate, sodium chloride and tungsten oxide are the least contaminated, whereas silver in both positive and negative mode, and the three materials mentioned earlier in positive mode, are contaminated with organics. We report cut-off diameters for the Airmodus Particle Size Magnifier (PSM) of 1.1, 1.3, 1.4, 1.6 and 1.6-1.8 nm for negative sodium chloride, ammonium sulphate, tungsten oxide, silver and positive organics, respectively. To study the effect of sample relative humidity on the detection efficiency of the PSM, we used different humidities in the differential mobility analyzer sheath flow and found that the detection efficiency of the PSM increases with increasing relative humidity.
Low cost photonic comb for sub-m/s wavelength calibration
NASA Astrophysics Data System (ADS)
Betters, Christopher H.; Hermouet, Maxime; Blanc, Thomas; Colless, James I.; Bland-Hawthorn, Joss; Kos, Janez; Leon-Saval, Sergio
2016-07-01
A fundamental limitation of precision radial velocity measurements is the accuracy and stability of the calibration source. Here we present a low-cost alternative to more complex laser-metrology-based systems that utilises a single-mode fibre Fabry-Perot etalon. There are three key elements to this photonic comb: i) an optical fibre etalon with thermo-electric coolers; ii) a Rubidium Saturation Absorption Spectroscopy (SAS) setup; and iii) an optical fibre switch system for simultaneous laser locking of the etalon. We simultaneously measure the Rubidium D2 transitions around 780.2 nm and the closest etalon line. A PID loop controls the etalon temperature to maintain the position of its peak with an RMS error of <10 cm/s for 10-minute integration intervals in continuous operation. The optical fibre switch system allows for time-multiplexed coupling of the etalon to a spectrograph and the SAS system.
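The temperature lock can be pictured as an ordinary discrete PID acting on the measured offset between the etalon peak and the Rb D2 reference; the gains, units and class below are purely illustrative, not the instrument's values:

```python
class EtalonTemperatureLock:
    """Minimal discrete PID controller: update() takes the measured peak offset
    (e.g. in pm, or in equivalent cm/s) and returns a correction to apply to
    the thermo-electric cooler set-point."""
    def __init__(self, kp=0.5, ki=0.05, kd=0.0, dt_s=1.0):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt_s
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, offset):
        self.integral += offset * self.dt
        derivative = (offset - self.prev_error) / self.dt
        self.prev_error = offset
        return -(self.kp * offset + self.ki * self.integral + self.kd * derivative)
```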
3D endoscopic imaging using structured illumination technique (Conference Presentation)
NASA Astrophysics Data System (ADS)
Le, Hanh N. D.; Nguyen, Hieu; Wang, Zhaoyang; Kang, Jin U.
2017-02-01
Surgeons have been increasingly relying on minimally invasive surgical guidance techniques not only to reduce surgical trauma but also to achieve accurate and objective surgical risk evaluations. A typical minimally invasive surgical guidance system provides visual assistance in two-dimensional anatomy and pathology of internal organs within a limited field of view. In this work, we propose and implement a structured illumination endoscope to provide simple, inexpensive 3D endoscopic imaging and to conduct high-resolution 3D imagery for use in surgical guidance systems. The system is calibrated and validated for quantitative depth measurement in both a calibrated target and a human subject. The system exhibits a depth of field of 20 mm, a depth resolution of 0.2 mm and a relative accuracy of 0.1%. The demonstrated setup affirms the feasibility of using the structured illumination endoscope for depth quantification and for assisting medical diagnostic assessments.
Experimental validation of 2D uncertainty quantification for DIC.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Reu, Phillip L.
Because digital image correlation (DIC) has become such an important and standard tool in the toolbox of experimental mechanicists, a complete uncertainty quantification of the method is needed. It should be remembered that each DIC setup and series of images will have a unique uncertainty based on the calibration quality and the image and speckle quality of the analyzed images. Any pretest work done with a calibrated DIC stereo-rig to quantify the errors using known shapes and translations, while useful, does not necessarily reveal the uncertainty of a later test. This is particularly true with high-speed applications where actual test images are often less than ideal. Work has previously been completed on the mathematical underpinnings of DIC uncertainty quantification and is already published; this paper presents the corresponding experimental work used to check the validity of the uncertainty equations.
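As context for where DIC uncertainty enters, the core matching operation is a subset correlation search; a bare integer-pixel sketch using normalized cross-correlation (real DIC codes add subpixel interpolation and subset shape functions):

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation of two equal-sized image patches."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def match_subset(ref, cur, row, col, half=15, search=10):
    """Find the integer-pixel displacement of the (2*half+1)^2 subset centred at
    (row, col) in `ref` within +/- `search` pixels in `cur`. The caller must
    keep the search window inside the image bounds."""
    sub = ref[row - half:row + half + 1, col - half:col + half + 1]
    best_score, best_uv = -2.0, (0, 0)
    for dv in range(-search, search + 1):
        for du in range(-search, search + 1):
            cand = cur[row + dv - half:row + dv + half + 1,
                       col + du - half:col + du + half + 1]
            score = ncc(sub, cand)
            if score > best_score:
                best_score, best_uv = score, (du, dv)
    return best_uv, best_score
```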
Calibrating excitation light fluxes for quantitative light microscopy in cell biology
Grünwald, David; Shenoy, Shailesh M; Burke, Sean; Singer, Robert H
2011-01-01
Power output of light bulbs changes over time and the total energy delivered will depend on the optical beam path of the microscope, filter sets and objectives used, thus making comparison between experiments performed on different microscopes complicated. Using a thermocoupled power meter, it is possible to measure the exact amount of light applied to a specimen in fluorescence microscopy, regardless of the light source, as the light power measured can be translated into a power density at the sample. This widely used and simple tool forms the basis of a new degree of calibration precision and comparability of results among experiments and setups. Here we describe an easy-to-follow protocol that allows researchers to precisely estimate excitation intensities in the object plane, using commercially available opto-mechanical components. The total duration of this protocol for one objective and six filter cubes is 75 min including start-up time for the lamp. PMID:18974739
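The conversion the protocol relies on is straightforward: a power reading in the object plane divided by the illuminated area gives the excitation power density, and Planck's relation gives the photon flux. A small sketch, assuming a circular, uniformly filled illumination field (names and the uniform-field assumption are illustrative, not from the protocol):

```python
import math

PLANCK = 6.626e-34      # J s
LIGHT_SPEED = 2.998e8   # m/s

def excitation_density(power_w, field_diameter_m, wavelength_m):
    """Return (irradiance in W/cm^2, photon flux in photons/s) for a measured
    power delivered to a uniformly illuminated circular field in the object plane."""
    area_cm2 = math.pi * (field_diameter_m / 2.0) ** 2 * 1e4
    irradiance = power_w / area_cm2
    photons_per_s = power_w * wavelength_m / (PLANCK * LIGHT_SPEED)
    return irradiance, photons_per_s
```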
Analysis of Xrage and Flag High Explosive Burn Models with PBX 9404 Cylinder Tests
NASA Astrophysics Data System (ADS)
Harrier, Danielle; Fessenden, Julianna; Ramsey, Scott
2016-11-01
High explosives are energetic materials that release their chemical energy in a short interval of time. They are able to generate extreme heat and pressure by a shock-driven chemical decomposition reaction, which makes them valuable tools that must be understood. This study investigated the accuracy and performance of two Los Alamos National Laboratory hydrodynamic codes, which are used to determine the behavior of explosives within a variety of systems: xRAGE, which utilizes an Eulerian mesh, and FLAG, which utilizes a Lagrangian mesh. Various programmed and reactive burn models within both codes were tested, using a copper cylinder expansion test. The test was based on a recent experimental setup which contained the plastic bonded explosive PBX 9404. Detonation velocity versus time curves for this explosive were obtained from the experimental velocity data collected using Photon Doppler Velocimetry (PDV). The modeled results from each of the burn models tested were then compared to one another and to the experimental results using the Jones-Wilkins-Lee (JWL) equation of state parameters that were determined and adjusted from the experimental tests. This study is important for validating the accuracy of our high explosive burn models and the calibrated EOS parameters, which underpin many research topics in the physical sciences.
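For reference, the JWL equation of state referred to above gives the product pressure as a function of relative volume V and internal energy per unit initial volume E. The coefficients below are commonly tabulated PBX 9404-class values quoted only for illustration, not the calibrated set determined from these tests:

```python
import math

def jwl_pressure(v_rel, e_per_v0, a=8.524e11, b=1.802e10, r1=4.6, r2=1.3, omega=0.38):
    """P(V, E) = A(1 - w/(R1 V))exp(-R1 V) + B(1 - w/(R2 V))exp(-R2 V) + w E / V,
    with pressures in Pa and E given per unit initial volume (Pa)."""
    return (a * (1.0 - omega / (r1 * v_rel)) * math.exp(-r1 * v_rel)
            + b * (1.0 - omega / (r2 * v_rel)) * math.exp(-r2 * v_rel)
            + omega * e_per_v0 / v_rel)
```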
Pleil, Joachim D; Angrish, Michelle M; Madden, Michael C
2015-12-11
Immunochemistry is an important clinical tool for indicating biological pathways leading towards disease. Standard enzyme-linked immunosorbent assays (ELISA) are labor intensive and lack sensitivity at low-level concentrations. Here we report on emerging technology implementing fully-automated ELISA capable of molecular level detection and describe application to exhaled breath condensate (EBC) samples. The Quanterix SIMOA HD-1 analyzer was evaluated for analytical performance for inflammatory cytokines (IL-6, TNF-α, IL-1β and IL-8). The system was challenged with human EBC representing the most dilute and analytically difficult of the biological media. Calibrations from synthetic samples and spiked EBC showed excellent linearity at trace levels (r(2) > 0.99). Sensitivities varied by analyte, but were robust from ~0.006 (IL-6) to ~0.01 (TNF-α) pg ml(-1). All analytes demonstrated response suppression when diluted with deionized water and so assay buffer diluent was found to be a better choice. Analytical runs required ~45 min setup time for loading samples, reagents, calibrants, etc., after which the instrument performs without further intervention for up to 288 separate samples. Currently, available kits are limited to single-plex analyses and so sample volumes require adjustments. Sample dilutions should be made with assay diluent to avoid response suppression. Automation performs seamlessly and data are automatically analyzed and reported in spreadsheet format. The internal 5-parameter logistic (pl) calibration model should be supplemented with a linear regression spline at the very lowest analyte levels, (<1.3 pg ml(-1)). The implementation of the automated Quanterix platform was successfully demonstrated using EBC, which poses the greatest challenge to ELISA due to limited sample volumes and low protein levels.
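The internal calibration model mentioned above is a five-parameter logistic; the following generic fitting sketch uses synthetic data (not Quanterix software output) and shows how it could be supplemented with a simple linear fit at the lowest calibrant levels:

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic_5pl(x, a, d, c, b, g):
    """5-parameter logistic: a = zero-dose response, d = infinite-dose response,
    c = mid-point, b = slope, g = asymmetry."""
    return d + (a - d) / (1.0 + (x / c) ** b) ** g

conc = np.logspace(-2, 2, 9)                      # pg/ml calibrants (synthetic)
rng = np.random.default_rng(0)
signal = logistic_5pl(conc, 0.5, 300.0, 2.0, 1.2, 1.0) * (1 + 0.03 * rng.normal(size=conc.size))
params, _ = curve_fit(logistic_5pl, conc, signal,
                      p0=[1.0, 250.0, 1.0, 1.0, 1.0], bounds=(1e-6, 1e6))

# Low-end refinement: a linear regression over calibrants below ~1.3 pg/ml
low = conc < 1.3
slope, intercept = np.polyfit(conc[low], signal[low], 1)
```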
NASA Astrophysics Data System (ADS)
Takahashi, Tomoko; Thornton, Blair
2017-12-01
This paper reviews methods to compensate for matrix effects and self-absorption during quantitative analysis of the composition of solids measured using Laser Induced Breakdown Spectroscopy (LIBS) and their applications to in-situ analysis. Methods to reduce matrix and self-absorption effects on calibration curves are first introduced. The conditions under which calibration curves are applicable to quantification of the composition of solid samples, and their limitations, are discussed. While calibration-free LIBS (CF-LIBS), which corrects matrix effects theoretically based on the Boltzmann distribution law and the Saha equation, has been applied in a number of studies, several requirements need to be satisfied for the calculation of chemical compositions to be valid. Also, peaks of all elements contained in the target need to be detected, which is a bottleneck for in-situ analysis of unknown materials. Multivariate analysis techniques are gaining momentum in LIBS analysis. Among the available techniques, principal component regression (PCR) analysis and partial least squares (PLS) regression analysis, which can extract composition-related information from all spectral data, are widely established methods and have been applied to various fields, including in-situ applications in air and for planetary exploration. Artificial neural networks (ANNs), with which non-linear effects can be modelled, have also been investigated as a quantitative method and their applications are introduced. The ability to make quantitative estimates based on LIBS signals is seen as a key element for the technique to gain wider acceptance as an analytical method, especially in in-situ applications. In order to accelerate this process, it is recommended that accuracy be described using common figures of merit which express the overall normalised accuracy, such as the normalised root mean square error (NRMSE), when comparing the accuracy obtained from different setups and analytical methods.
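One common definition of the NRMSE figure of merit recommended above normalizes the root-mean-square prediction error by the range of the reference values; a brief sketch (other normalizations, e.g. by the mean, are also in use):

```python
import numpy as np

def nrmse(reference, predicted):
    """Root-mean-square error of predicted vs. reference compositions,
    normalized by the range of the reference values."""
    reference = np.asarray(reference, float)
    predicted = np.asarray(predicted, float)
    rmse = np.sqrt(np.mean((predicted - reference) ** 2))
    return rmse / (reference.max() - reference.min())
```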
Lu, Chen; Zhao, Xiaodan; Kawamura, Ryo
2017-01-01
The frictional drag force on an object in Stokes flow is linearly related to the translation velocity through a translational drag coefficient. This drag coefficient is related to the size, shape, and orientation of the object. For rod-like objects, analytical solutions for the drag coefficients have been proposed based on three rough approximations of the rod geometry, namely the bead model, the ellipsoid model, and the cylinder model. These theories all agree that translational drag coefficients of rod-like objects are functions of the rod length and aspect ratio, but differ from one another in the correction-factor terms in the equations. By tracking the displacement of the particles through stationary fluids of calibrated viscosity in a magnetic tweezers setup, we experimentally measured the drag coefficients of micron-sized beads and their bead-chain formations with chain lengths of 2 to 27. We verified our methodology against analytical solutions for dimers of two touching beads, and compared our measured drag coefficient values of rod-like objects with theoretical calculations. Our comparison reveals which analytical solutions used more appropriate approximations and derived formulae that agree better with our measurements. PMID:29145447
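On the measurement side, the drag coefficient follows directly from the Stokes relation F = gamma * v once the terminal velocity is extracted from the tracked trajectory; a minimal sketch under the assumption of a constant, calibrated magnetic force:

```python
import numpy as np

def drag_coefficient(time_s, position_m, force_n):
    """Translational drag coefficient gamma = F / v for low-Reynolds-number
    motion at constant applied force, with the velocity taken as the slope of
    a linear fit to the tracked position versus time."""
    velocity = np.polyfit(time_s, position_m, 1)[0]
    return force_n / velocity
```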
NASA Astrophysics Data System (ADS)
Köhler, Mandy; Haendel, Falk; Epting, Jannis; Binder, Martin; Müller, Matthias; Huggenberger, Peter; Liedl, Rudolf
2015-04-01
Increasing groundwater temperatures have been observed in many urban areas such as London (UK), Tokyo (Japan) and also Basel (Switzerland). Elevated groundwater temperatures are a result of different direct and indirect thermal impacts. Groundwater heat pumps, building structures located within the groundwater and district heating pipes, among others, can be attributed to direct impacts, whereas indirect impacts result from the change in climate in urban regions (i.e. reduced wind, diffuse heat sources). A better understanding of the thermal processes within the subsurface is urgently needed by decision makers as a basis for the selection of appropriate measures to reduce the ongoing increase of groundwater temperatures. However, often only limited temperature data is available, derived from measurements in conventional boreholes, which differ in construction and instrumental setup, resulting in measurements that are often biased and not comparable. For three locations in the City of Basel, models were implemented to study selected thermal processes and to investigate whether heat-transport models can reproduce thermal measurements. Therefore, and to overcome the limitations of conventional borehole measurements, high-resolution depth-oriented temperature measurement systems have been introduced in the urban area of Basel. In total seven devices were installed, with up to 16 sensors located in the unsaturated and saturated zones (0.5 to 1 m separation distance). Measurements were performed over a period of 4 years (ongoing) and provide sufficient data to set up and calibrate high-resolution local numerical heat transport models, which allow selected local thermal processes to be studied. In a first setup, two- and three-dimensional models were created to evaluate the impact of the atmosphere boundary on groundwater temperatures (see EGU Poster EGU2013-9230: Modelling Strategies for the Thermal Management of Shallow Rural and Urban Groundwater bodies). For Basel, where the mean thickness of the unsaturated zone amounts to 19 m, it could be observed that atmospheric seasonal temperature variations are small compared to advective groundwater heat transport. At chosen locations - i) near the river Rhine, to study river-groundwater interaction processes, ii) downstream of a thermal groundwater user who uses water for cooling and infiltrates water with elevated temperatures, and iii) downstream of a building structure reaching into the saturated zone - the models were further extended to study selected thermal processes in detail and to investigate whether these models can reproduce thermal impacts in the vicinity of the temperature measurement devices. Calibration, based on the depth-oriented temperature measurements, was performed for the saturated and unsaturated zones, respectively. Model results show that, although depth-oriented measurements provide valuable insights into local thermal processes, the identification of the governing impacts is strongly dependent on an appropriate positioning of the measurement device. Numerical simulations based on existing flow and heat transport models, considering the site-specific local hydraulic and thermal boundary conditions, allow the location of such systems to be optimized before installation. Furthermore, the results of the local heat transport models can be transferred to regional-scale models, which are an important tool for thermal management in urban areas.
Real time in situ ellipsometric and gravimetric monitoring for electrochemistry experiments.
Broch, Laurent; Johann, Luc; Stein, Nicolas; Zimmer, Alexandre; Beck, Raphaël
2007-06-01
This work describes a new system combining a real-time spectroscopic ellipsometer with simultaneous electrochemical and electrochemical quartz crystal microbalance (EQCM) measurements. This method is particularly adapted to characterizing electrolyte/electrode interfaces during electrochemical and chemical processes in a liquid medium. The ellipsometer, based on a rotating-compensator Horiba Jobin-Yvon ellipsometer, has been adapted to acquire Psi-Delta spectra every 25 ms over a spectral range fixed from 400 to 800 nm. Measurements with short sampling times are only achievable with a fixed analyzer position (A = 45 degrees). Therefore the ellipsometer calibration is extremely important for high-precision measurements, and we propose a spectroscopic calibration (i.e., determination of the azimuths of the optical elements as a function of wavelength) over the whole spectral range. A homemade EQCM was developed to detect mass variations at the electrode. This additional instrument provides further information useful for ellipsometric data modeling of complex electrochemical systems. The EQCM measures frequency variations of a piezoelectric quartz crystal oscillator working at 5 MHz. These frequency variations are linked to mass variations of the electrode surface with a precision of 20 ng cm(-2) every 160 ms. Data acquisition has been developed in order to simultaneously record spectroscopic ellipsometry, EQCM, and electrochemical measurements by a single computer. Finally, the electrodeposition of a bismuth telluride film was monitored by this new in situ experimental setup and the density of the electroplated layers was extracted from the optical thickness and EQCM mass.
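As an illustrative aside (not code from the paper), the sketch below shows how EQCM frequency shifts are commonly converted to areal mass with the Sauerbrey relation; the mass-sensitivity constant assumed for a 5 MHz AT-cut crystal and the example frequency trace are placeholders.

# Hedged sketch: convert EQCM frequency shifts to areal mass via the
# Sauerbrey relation, delta_m = -C * delta_f, assuming a rigid film on a
# 5 MHz AT-cut quartz crystal (C ~ 17.7 ng cm^-2 Hz^-1). The constant and
# the example frequency trace are illustrative, not values from the paper.

SAUERBREY_C_NG_PER_CM2_HZ = 17.7  # mass sensitivity assumed for a 5 MHz AT-cut crystal

def areal_mass_ng_per_cm2(delta_f_hz):
    """Areal mass gain (ng/cm^2) implied by a measured frequency shift (Hz)."""
    return -SAUERBREY_C_NG_PER_CM2_HZ * delta_f_hz

if __name__ == "__main__":
    # hypothetical frequency shifts sampled every 160 ms during deposition
    shifts_hz = [0.0, -1.2, -2.5, -3.9]
    for f in shifts_hz:
        print(f"{f:6.1f} Hz -> {areal_mass_ng_per_cm2(f):7.1f} ng/cm^2")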
NASA Astrophysics Data System (ADS)
Murtiyoso, A.; Grussenmeyer, P.; Freville, T.
2017-02-01
Close-range photogrammetry is an image-based technique which has often been used for the 3D documentation of heritage objects. Recently, advances in the field of image processing and UAVs (Unmanned Aerial Vehicles) have resulted in a renewed interest in this technique. However, commercially ready-to-use UAVs are often equipped with smaller sensors in order to minimize payload, and the quality of the documentation is still an issue. In this research, two commercial UAVs (the Sensefly Albris and DJI Phantom 3 Professional) were set up to record the 19th century St-Pierre-le-Jeune church in Strasbourg, France. Several software solutions (commercial and open source) were used to compare both UAVs' images in terms of calibration, accuracy of external orientation, as well as dense matching. Results show some instability in regard to the calibration of the Phantom 3, while the Albris had issues regarding its aerotriangulation results. Despite these shortcomings, both UAVs succeeded in producing dense point clouds of up to a few centimeters in accuracy, which is largely sufficient for the purposes of a city 3D GIS (Geographical Information System). The acquisition of close-range images using UAVs also provides greater LoD flexibility in processing. These advantages over other methods such as TLS (Terrestrial Laser Scanning) or terrestrial close-range photogrammetry can be exploited in order for these techniques to complement each other.
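For readers unfamiliar with the interior-orientation step being compared here, the following minimal Python/OpenCV sketch estimates a camera matrix and distortion coefficients from checkerboard images; it is not the authors' processing chain, and the folder name and board size are hypothetical.

# Hedged sketch (not the authors' pipeline): estimating interior orientation
# (camera matrix and distortion) from checkerboard images with OpenCV, the
# kind of calibration whose stability is compared between the two UAV cameras.
import glob
import cv2
import numpy as np

board = (9, 6)  # inner corners of a hypothetical checkerboard target
objp = np.zeros((board[0] * board[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:board[0], 0:board[1]].T.reshape(-1, 2)

obj_pts, img_pts, size = [], [], None
for path in glob.glob("calib_images/*.jpg"):  # hypothetical image folder
    gray = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2GRAY)
    found, corners = cv2.findChessboardCorners(gray, board)
    if found:
        obj_pts.append(objp)
        img_pts.append(corners)
        size = gray.shape[::-1]

if obj_pts:
    rms, K, dist, _, _ = cv2.calibrateCamera(obj_pts, img_pts, size, None, None)
    print("RMS reprojection error (px):", rms)
    print("Camera matrix:\n", K)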
Testing Instrument for Flight-Simulator Displays
NASA Technical Reports Server (NTRS)
Haines, Richard F.
1987-01-01
Displays for flight-training simulators rapidly aligned with aid of integrated optical instrument. Calibrations and tests such as aligning boresight of display with respect to user's eyes, checking and adjusting display horizon, checking image sharpness, measuring illuminance of displayed scenes, and measuring distance of optical focus of scene performed with single unit. New instrument combines all measurement devices in single, compact, integrated unit. Requires just one initial setup. Employs laser and produces narrow, collimated beam for greater measurement accuracy. Uses only one moving part, double right prism, to position laser beam.
1999-09-28
part of the talk will be devoted to the high resolution absorption spectroscopy of the vi = 2-6 acetylenic overtone bands of propyne (CH3-C≡C-H ... period CATGAS (Calibration Apparatus for Trace GAs Spectra), a transportable laboratory set-up for absorption spectroscopy, was connected to the ... the NIR around 1.95-2.04 nm and 2.26-2.39 nm, where accurate line parameters of ozone absorption are available by high-resolution Fourier transform
Methodical aspects of text testing in a driving simulator.
Sundin, A; Patten, C J D; Bergmark, M; Hedberg, A; Iraeus, I-M; Pettersson, I
2012-01-01
A test with 30 test persons was conducted in a driving simulator. The test was a concept exploration and comparison of existing user interaction technologies for text message handling, with focus on traffic safety and experience (technology familiarity and learning effects). Focus was put on methodological aspects of how to measure and how to analyze the data. Results show difficulties with the eye tracking system itself (calibration etc.) as well as with the subsequent raw data preparation. The physical setup in the car was found to be important for completing the test.
Solar simulators vs outdoor module performance in the Negev Desert
DOE Office of Scientific and Technical Information (OSTI.GOV)
Faiman, D
The power output of photovoltaic cells depends on the intensity of the incoming light, its spectral content and the cell temperature. In order to be able to predict the performance of a PV system, therefore, it is of paramount importance to be able to quantify cell performance in a reproducible manner. The standard laboratory technique for this purpose is to employ a solar simulator and a calibrated reference cell. Such a setup enables module performance to be assessed under constant, standard illumination and temperature conditions. However, this technique has three inherent weaknesses.
Apollo Contour Rocket Nozzle in the Propulsion Systems Laboratory
1964-07-21
Bill Harrison and Bud Meilander check the setup of an Apollo Contour rocket nozzle in the Propulsion Systems Laboratory at the National Aeronautics and Space Administration (NASA) Lewis Research Center. The Propulsion Systems Laboratory contained two 14-foot diameter test chambers that could simulate conditions found at very high altitudes. The facility was used in the 1960s to study complex rocket engines such as the Pratt and Whitney RL-10 and rocket components such as the Apollo Contour nozzle, seen here. Meilander oversaw the facility’s mechanics and the installation of test articles into the chambers. Harrison was head of the Supersonic Tunnels Branch in the Test Installations Division. Researchers sought to determine the impulse value of the storable propellant mix, classify and improve the internal engine performance, and compare the results with analytical tools. A special setup was installed in the chamber that included a device to measure the thrust load and a calibration stand. Both cylindrical and conical combustion chambers were examined with the conical large area ratio nozzles. In addition, two contour nozzles were tested, one based on the Apollo Service Propulsion System and the other on the Air Force’s Titan transtage engine. Three types of injectors were investigated, including a Lewis-designed model that produced 98-percent efficiency. It was determined that combustion instability did not affect the nozzle performance. Although much valuable information was obtained during the tests, attempts to improve the engine performance were not successful.
NASA Astrophysics Data System (ADS)
Firoz, A. B. M.; Nauditt, Alexandra; Fink, Manfred; Ribbe, Lars
2018-01-01
Hydrological droughts are one of the most damaging disasters in terms of economic loss in central Vietnam and other regions of South-east Asia, severely affecting agricultural production and drinking water supply. Their increasing frequency and severity can be attributed to extended dry spells and increasing water abstractions for e.g. irrigation and hydropower development to meet the demand of dynamic socioeconomic development. Based on hydro-climatic data for the period from 1980 to 2013 and reservoir operation data, the impacts of recent hydropower development and other alterations of the hydrological network on downstream streamflow and drought risk were assessed for a mesoscale basin of steep topography in central Vietnam, the Vu Gia Thu Bon (VGTB) River basin. The Just Another Modelling System (JAMS)/J2000 was calibrated for the VGTB River basin to simulate reservoir inflow and the naturalized discharge time series for the downstream gauging stations. The HEC-ResSim reservoir operation model simulated reservoir outflow from eight major hydropower stations as well as the reconstructed streamflow for the main river branches Vu Gia and Thu Bon. Drought duration, severity, and frequency were analysed for different timescales for the naturalized and reconstructed streamflow by applying the daily varying threshold method. Efficiency statistics for both models show good results. A strong impact of reservoir operation on downstream discharge at the daily, monthly, seasonal, and annual scales was detected for four discharge stations relevant for downstream water allocation. We found a stronger hydrological drought risk for the Vu Gia river supplying water to the city of Da Nang and large irrigation systems especially in the dry season. We conclude that the calibrated model set-up provides a valuable tool to quantify the different origins of drought to support cross-sectorial water management and planning in a suitable way to be transferred to similar river basins.
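As a hedged illustration of the daily varying threshold method mentioned above (not the study's code), the short Python sketch below derives a day-of-year low-flow threshold from a naturalized series and counts drought days in a reconstructed series; the 20th-percentile threshold and the synthetic flow series are assumptions.

# Hedged sketch of the daily varying threshold idea: for each calendar day, a
# low-flow threshold (here the 20th percentile of the naturalized series, an
# assumed choice) is derived, and days on which the reconstructed flow falls
# below it count as drought days.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
idx = pd.date_range("1980-01-01", "2013-12-31", freq="D")
naturalized = pd.Series(50 + 30 * np.sin(2 * np.pi * idx.dayofyear / 365)
                        + rng.gamma(2.0, 5.0, len(idx)), index=idx)
reconstructed = naturalized * rng.uniform(0.6, 1.1, len(idx))  # synthetic example

threshold = (naturalized.groupby(naturalized.index.dayofyear)
             .quantile(0.2)
             .reindex(reconstructed.index.dayofyear).to_numpy())
drought = reconstructed.to_numpy() < threshold
print("drought days (reconstructed vs. daily varying threshold):", int(drought.sum()))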
NASA Astrophysics Data System (ADS)
Kanagaraj, S.; Pattanayak, S.
2004-06-01
The applications of fibre reinforced plastic (FRP) materials in cryogenic engineering have stimulated keen interest in the investigation of their properties. Reliable design data generated by a precisely controlled setup in an environment identical to that of the application are extremely important. This paper describes an apparatus based on a GM refrigerator for the simultaneous measurement of thermal conductivity, thermal expansion and thermal diffusivity using a double-specimen guarded hot plate, a 3-terminal capacitance technique and the Angstrom method, respectively, in the temperature range from 30 K to 300 K. An integrated and perfectly insulated sample holder is designed and fabricated in such a way that the simultaneous measurements of the above properties are conveniently and accurately carried out at different temperatures. A set of stability criteria has been followed during the measurements to ensure the accuracy of the experimental data. The setup is calibrated with stainless steel and copper, and the experimental results are within 10% of the published results in the literature.
Stereo Cameras for Clouds (STEREOCAM) Instrument Handbook
DOE Office of Scientific and Technical Information (OSTI.GOV)
Romps, David; Oktem, Rusen
2017-10-31
The three pairs of stereo camera setups aim to provide synchronized and stereo calibrated time series of images that can be used for 3D cloud mask reconstruction. Each camera pair is positioned at approximately 120 degrees from the other pair, with a 17°-19° pitch angle from the ground, and at 5-6 km distance from the U.S. Department of Energy (DOE) Central Facility at the Atmospheric Radiation Measurement (ARM) Climate Research Facility Southern Great Plains (SGP) observatory to cover the region from northeast, northwest, and southern views. Images from both cameras of the same stereo setup can be paired together to obtain 3D reconstruction by triangulation. 3D reconstructions from the ring of three stereo pairs can be combined together to generate a 3D mask from surrounding views. This handbook delivers all stereo reconstruction parameters of the cameras necessary to make 3D reconstructions from the stereo camera images.
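A minimal sketch of the triangulation step referred to above is given below; the projection matrices, baseline and pixel coordinates are synthetic stand-ins rather than STEREOCAM calibration values.

# Hedged sketch of stereo triangulation: given projection matrices P1, P2 from
# a stereo calibration and a matched pixel pair, recover the 3D point by the
# linear (DLT) method. All numbers below are illustrative placeholders.
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear triangulation of one point seen at pixels x1, x2 (u, v)."""
    A = np.vstack([x1[0] * P1[2] - P1[0],
                   x1[1] * P1[2] - P1[1],
                   x2[0] * P2[2] - P2[0],
                   x2[1] * P2[2] - P2[1]])
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]

K = np.array([[1000.0, 0, 640], [0, 1000.0, 480], [0, 0, 1]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-5000.0], [0], [0]])])  # ~5 km baseline
point = np.array([2000.0, -1500.0, 6000.0, 1.0])                  # a "cloud" point
x1 = (P1 @ point)[:2] / (P1 @ point)[2]
x2 = (P2 @ point)[:2] / (P2 @ point)[2]
print(triangulate(P1, P2, x1, x2))  # recovers approximately (2000, -1500, 6000)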
Microchannel plate cross-talk mitigation for spatial autocorrelation measurements
NASA Astrophysics Data System (ADS)
Lipka, Michał; Parniak, Michał; Wasilewski, Wojciech
2018-05-01
Microchannel plates (MCP) are the basis for many spatially resolved single-particle detectors such as ICCD or I-sCMOS cameras employing image intensifiers (II), MCPs with delay-line anodes for the detection of cold gas particles or Cherenkov radiation detectors. However, the spatial characterization provided by an MCP is severely limited by cross-talk between its microchannels, rendering MCP and II ill-suited for autocorrelation measurements. Here, we present a cross-talk subtraction method experimentally exemplified for an I-sCMOS based measurement of pseudo-thermal light second-order intensity autocorrelation function at the single-photon level. The method merely requires a dark counts measurement for calibration. A reference cross-correlation measurement certifies the cross-talk subtraction. While remaining universal for MCP applications, the presented cross-talk subtraction, in particular, simplifies quantum optical setups. With the possibility of autocorrelation measurements, the signal needs no longer to be divided into two camera regions for a cross-correlation measurement, reducing the experimental setup complexity and increasing at least twofold the simultaneously employable camera sensor region.
NASA Astrophysics Data System (ADS)
Xiao, X.; Le Berre, S.; Fobar, D. G.; Burger, M.; Skrodzki, P. J.; Hartig, K. C.; Motta, A. T.; Jovanovic, I.
2018-03-01
The corrosive environment provided by chlorine ions on the welds of stainless steel dry cask storage canisters for used nuclear fuel may contribute to the occurrence of stress corrosion cracking. We demonstrate the use of fiber-optic laser-induced breakdown spectroscopy (FOLIBS) in the double-pulse (DP) configuration for high-sensitivity, remote measurement of surface concentrations of chlorine, compatible with the constrained space and challenging environment characteristic of dry cask storage systems. Chlorine surface concentrations as low as 5 mg/m2 have been detected and quantified by use of a laboratory-based and a fieldable DP FOLIBS setup with the calibration curve approach. The compact final optics assembly in the fieldable setup is interfaced via two 25-m long optical fibers for high-power laser pulse delivery and plasma emission collection and can be readily integrated into a multi-sensor robotic delivery system for in-situ inspection of dry cask storage systems.
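The calibration curve approach mentioned above can be illustrated with a short, hedged Python sketch: fit a line of background-corrected line intensity against known surface concentration and invert it for an unknown sample; all numbers are made up, not FOLIBS data.

# Hedged sketch of a calibration-curve quantification: fit peak intensity
# versus known surface concentration, then invert the fitted line. The
# concentrations and intensities are illustrative placeholders.
import numpy as np

conc = np.array([0.0, 5.0, 10.0, 20.0, 40.0])      # mg/m^2 reference deposits (assumed)
signal = np.array([0.02, 0.11, 0.21, 0.40, 0.81])  # background-corrected Cl line (assumed)

slope, intercept = np.polyfit(conc, signal, 1)

def quantify(measured_signal):
    """Surface concentration (mg/m^2) implied by the fitted calibration line."""
    return (measured_signal - intercept) / slope

print("unknown sample:", round(quantify(0.15), 1), "mg/m^2")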
Shimamoto, Yuta; Kapoor, Tarun M.
2014-01-01
To explain how micron-sized cellular structures generate and respond to forces, we need to characterize their micromechanical properties. Here we provide a protocol to build and use a dual force-calibrated microneedle-based set-up to quantitatively analyze the micromechanics of a metaphase spindle assembled in Xenopus laevis egg extracts. This cell-free extract system allows for controlled biochemical perturbations of spindle components. We describe how the microneedles are prepared and how they can be used to apply and measure forces. A multi-mode imaging system allows tracking of microtubules, chromosomes and needle tips. This set-up can be used to analyze the viscoelastic properties of the spindle on time-scales ranging from minutes to sub-seconds. A typical experiment, along with data analysis, is also detailed. We anticipate that our protocol can be readily extended to analyze the micromechanics of other cellular structures assembled in cell-free extracts. The entire procedure can take 3-4 days. PMID:22538847
NASA Astrophysics Data System (ADS)
Dhakal, B.; Nicholson, D. E.; Saleeb, A. F.; Padula, S. A., II; Vaidyanathan, R.
2016-09-01
Shape memory alloy (SMA) actuators often operate under a complex state of stress for an extended number of thermomechanical cycles in many aerospace and engineering applications. Hence, it becomes important to account for multi-axial stress states and deformation characteristics (which evolve with thermomechanical cycling) when calibrating any SMA model for implementation in large-scale simulation of actuators. To this end, the present work is focused on the experimental validation of an SMA model calibrated for the transient and cyclic evolutionary behavior of shape memory Ni49.9Ti50.1, for the actuation of axially loaded helical-coil springs. The approach requires both experimental and computational aspects to appropriately assess the thermomechanical response of these multi-dimensional structures. As such, an instrumented and controlled experimental setup was assembled to obtain temperature, torque, degree of twist and extension, while controlling end constraints during heating and cooling of an SMA spring under a constant externally applied axial load. The computational component assesses the capabilities of a general, multi-axial, SMA material-modeling framework, calibrated for Ni49.9Ti50.1 with regard to its usefulness in the simulation of SMA helical-coil spring actuators. Axial extension, being the primary response, was examined on an axially-loaded spring with multiple active coils. Two different conditions of end boundary constraint were investigated in both the numerical simulations and the validation experiments: Case (1) where the loading end is restrained against twist (and the resulting torque measured as the secondary response) and Case (2) where the loading end is free to twist (and the degree of twist measured as the secondary response). The present study focuses on the transient and evolutionary response associated with the initial isothermal loading and the subsequent thermal cycles under applied constant axial load. The experimental results for the helical-coil actuator under the two different boundary conditions are found to be within experimental error of their counterparts in the numerical simulations. The numerical simulation and the experimental validation demonstrate similar transient and evolutionary behavior in the deformation response under the complex, inhomogeneous, multi-axial stress state and large deformations of the helical-coil actuator. This response, although substantially different in magnitude, exhibited evolutionary characteristics similar to those of the simple, uniaxial, homogeneous stress state of the isobaric tensile test results used for the model calibration. There was no significant difference in the axial displacement (primary response) magnitudes observed between Cases (1) and (2) for the number of cycles investigated here. The simulated secondary responses of the two cases evolved in a similar manner when compared to the experimental validation of the respective cases.
Fabrication of ф 160 mm convex hyperbolic mirror for remote sensing instrument
NASA Astrophysics Data System (ADS)
Kuo, Ching-Hsiang; Yu, Zong-Ru; Ho, Cheng-Fang; Hsu, Wei-Yao; Chen, Fong-Zhi
2012-10-01
In this study, efficient polishing processes with inspection procedures for a large convex hyperbolic mirror of a Cassegrain optical system are presented. The polishing process combines the techniques of conventional lapping and CNC polishing. We apply the conventional spherical lapping process to quickly remove the sub-surface damage (SSD) layer caused by the grinding process and, simultaneously, to obtain the accurate radius of the best-fit sphere (BFS) of the aspheric surface with fine surface texture. Thus the material removed for the aspherization process can be minimized and the polishing time for SSD removal can also be reduced substantially. The inspection procedure was carried out using a phase-shifting interferometer with a CGH and a stitching technique. To acquire the real surface form error of each sub-aperture, the wavefront errors of the reference flat and CGH flat due to the gravity effect of the vertical setup are calibrated in advance. Subsequently, we stitch 10 calibrated sub-aperture surface form errors to establish the whole irregularity of the mirror over its 160 mm diameter for correction polishing. The final result for the ф160 mm convex hyperbolic mirror is 0.15 μm PV and 17.9 nm RMS.
[Comparison of four identical electronic noses and three measurement set-ups].
Koczulla, R; Hattesohl, A; Biller, H; Hofbauer, J; Hohlfeld, J; Oeser, C; Wirtz, H; Jörres, R A
2011-08-01
Volatile organic compounds (VOCs) can be used as biomarkers in exhaled air. VOC profiles can be detected by an array of nanosensors of an electronic nose. These profiles can be analysed using bioinformatics. It is, however, not known whether different devices of the same model measure identically and to which extent different set-ups and the humidity of the inhaled air influence the VOC profile. Three different measuring set-ups were designed and three healthy control subjects were measured with each of them, using four devices of the same model (Cyranose 320™, Smiths Detection). The exhaled air was collected in a plastic bag. Either ambient air was used as reference (set-up Leipzig), or the reference air was humidified (100% relative humidity) (set-up Marburg and set-up Munich). In the set-up Marburg the subjects inhaled standardised medical air (Aer medicinalis Linde, AGA AB) out of a compressed air bottle through a demand valve; this air (after humidification) was also used as reference. In the set-up Leipzig the subjects inhaled VOC-filtered ambient air, in the set-up Munich unfiltered room air. The data were evaluated using either the real-time data or the changes in resistance as calculated by the device. The results were clearly dependent on the set-up. Apparently, humidification of the reference air could reduce the variance between devices, but this result was also dependent on the evaluation method used. When comparing the three subjects, the set-ups Munich and Marburg mapped these in a similar way, whereas not only the signals but also the variance of the set-up Leipzig were larger. Measuring VOCs with an electronic nose has not yet been standardised and the set-up significantly affects the results. As other researchers use further methods, it is currently not possible to draw generally accepted conclusions. More systematic tests are required to find the most sensitive and reliable but still feasible set-up so that comparability is improved. © Georg Thieme Verlag KG Stuttgart · New York.
A calibration hierarchy for risk models was defined: from utopia to empirical data.
Van Calster, Ben; Nieboer, Daan; Vergouwe, Yvonne; De Cock, Bavo; Pencina, Michael J; Steyerberg, Ewout W
2016-06-01
Calibrated risk models are vital for valid decision support. We define four levels of calibration and describe implications for model development and external validation of predictions. We present results based on simulated data sets. A common definition of calibration is "having an event rate of R% among patients with a predicted risk of R%," which we refer to as "moderate calibration." Weaker forms of calibration only require the average predicted risk (mean calibration) or the average prediction effects (weak calibration) to be correct. "Strong calibration" requires that the event rate equals the predicted risk for every covariate pattern. This implies that the model is fully correct for the validation setting. We argue that this is unrealistic: the model type may be incorrect, the linear predictor is only asymptotically unbiased, and all nonlinear and interaction effects should be correctly modeled. In addition, we prove that moderate calibration guarantees nonharmful decision making. Finally, results indicate that a flexible assessment of calibration in small validation data sets is problematic. Strong calibration is desirable for individualized decision support but unrealistic and counterproductive by stimulating the development of overly complex models. Model development and external validation should focus on moderate calibration. Copyright © 2016 Elsevier Inc. All rights reserved.
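As a hedged, illustrative companion to the definition of moderate calibration (not material from the paper), the Python sketch below groups simulated predictions into risk deciles and compares observed event rates with mean predicted risks.

# Hedged sketch of checking "moderate calibration" empirically: group subjects
# by predicted risk and compare the observed event rate in each group with the
# mean predicted risk. The simulated risks and outcomes are illustrative only.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
pred = rng.uniform(0.02, 0.6, 5000)   # predicted risks from some hypothetical model
events = rng.binomial(1, pred)        # outcomes generated from those risks

df = pd.DataFrame({"pred": pred, "event": events})
df["bin"] = pd.qcut(df["pred"], 10)   # deciles of predicted risk
table = df.groupby("bin", observed=True).agg(mean_pred=("pred", "mean"),
                                             obs_rate=("event", "mean"))
print(table.round(3))                 # moderate calibration: rows should match
print("mean predicted:", round(df["pred"].mean(), 3),
      "observed rate:", round(df["event"].mean(), 3))   # mean calibration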
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, C; Yan, G; Helmig, R
2014-06-01
Purpose: To develop a system that can define the radiation isocenter and correlate this information with couch coordinates, laser alignment, optical distance indicator (ODI) settings, optical tracking system (OTS) calibrations, and mechanical isocenter walkout. Methods: Our team developed a multi-adapter, multi-purpose quality assurance (QA) and calibration device that uses an electronic portal imaging device (EPID) and in-house image-processing software to define the radiation isocenter, thereby allowing linear accelerator (Linac) components to be verified and calibrated. Motivated by the concept that each Linac component related to patient setup for image-guided radiotherapy based on cone-beam CT should be calibrated with respect to the radiation isocenter, we designed multiple concentric adapters of various materials and shapes to meet the needs of MV and kV radiation isocenter definition, laser alignment, and OTS calibration. The phantom's ability to accurately define the radiation isocenter was validated on 4 Elekta Linacs using a commercial ball bearing (BB) phantom as a reference. Radiation isocenter walkout and the accuracy of couch coordinates, ODI, and OTS were then quantified with the device. Results: The device was able to define the radiation isocenter within 0.3 mm. Radiation isocenter walkout was within ±1 mm at 4 cardinal angles. By switching adapters, we identified that the accuracy of the couch position digital readout, ODI, OTS, and mechanical isocenter walkout was within sub-mm. Conclusion: This multi-adapter, multi-purpose isocenter phantom can be used to accurately define the radiation isocenter and represents a potential paradigm shift in Linac QA. Moreover, multiple concentric adapters allowed for sub-mm accuracy for the other relevant components. This intuitive and user-friendly design is currently patent pending.
Validating a spatially distributed hydrological model with soil morphology data
NASA Astrophysics Data System (ADS)
Doppler, T.; Honti, M.; Zihlmann, U.; Weisskopf, P.; Stamm, C.
2013-10-01
Spatially distributed hydrological models are popular tools in hydrology and they are claimed to be useful to support management decisions. Despite the high spatial resolution of the computed variables, calibration and validation is often carried out only on discharge time-series at specific locations due to the lack of spatially distributed reference data. Because of this restriction, the predictive power of these models, with regard to predicted spatial patterns, can usually not be judged. An example of spatial predictions in hydrology is the prediction of saturated areas in agricultural catchments. These areas can be important source areas for the transport of agrochemicals to the stream. We set up a spatially distributed model to predict saturated areas in a 1.2 km2 catchment in Switzerland with moderate topography. Around 40% of the catchment area are artificially drained. We measured weather data, discharge and groundwater levels in 11 piezometers for 1.5 yr. For broadening the spatially distributed data sets that can be used for model calibration and validation, we translated soil morphological data available from soil maps into an estimate of the duration of soil saturation in the soil horizons. We used redox-morphology signs for these estimates. This resulted in a data set with high spatial coverage on which the model predictions were validated. In general, these saturation estimates corresponded well to the measured groundwater levels. We worked with a model that would be applicable for management decisions because of its fast calculation speed and rather low data requirements. We simultaneously calibrated the model to the groundwater levels in the piezometers and discharge. The model was able to reproduce the general hydrological behavior of the catchment in terms of discharge and absolute groundwater levels. However, the accuracy of the groundwater level predictions was not high enough to be used for the prediction of saturated areas. The groundwater level dynamics were not adequately reproduced and the predicted spatial patterns of soil saturation did not correspond to the patterns estimated from the soil map. Our results indicate that an accurate prediction of the groundwater level dynamics of the shallow groundwater in our catchment that is subject to artificial drainage would require a more complex model. Especially high spatial resolution and very detailed process representations at the boundary between the unsaturated and the saturated zone are expected to be crucial. The data needed for such a detailed model are not generally available. The high computational demand and the complex model setup would require more resources than the direct identification of saturated areas in the field. This severely hampers the practical use of such models despite their usefulness for scientific purposes.
DOT National Transportation Integrated Search
2016-07-01
This research study aims to investigate the pile set-up phenomenon for clayey soils and develop empirical models to predict pile set-up resistance at a certain time after end of driving (EOD). To fulfill the objective, a total number of twelve prestr...
Parallel computing for automated model calibration
DOE Office of Scientific and Technical Information (OSTI.GOV)
Burke, John S.; Danielson, Gary R.; Schulz, Douglas A.
2002-07-29
Natural resources model calibration is a significant burden on computing and staff resources in modeling efforts. Most assessments must consider multiple calibration objectives (for example, the magnitude and timing of the stream flow peak). An automated calibration process that allows real-time updating of data/models, allowing scientists to focus effort on improving models, is needed. We are in the process of building a fully featured multi-objective calibration tool capable of processing multiple models cheaply and efficiently using null cycle computing. Our parallel processing and calibration software routines have been written generically, but our focus has been on natural resources model calibration. So far, the natural resources models have been friendly to parallel calibration efforts in that they require no inter-process communication, only need a small amount of input data and only output a small amount of statistical information for each calibration run. A typical auto calibration run might involve running a model 10,000 times with a variety of input parameters and summary statistical output. In the past, model calibration has been done against individual models for each data set. The individual model runs are relatively fast, ranging from seconds to minutes. The process was run on a single computer using a simple iterative process. We have completed two Auto Calibration prototypes and are currently designing a more feature-rich tool. Our prototypes have focused on running the calibration in a distributed computing cross-platform environment. They allow incorporation of "smart" calibration parameter generation (using artificial intelligence processing techniques). Null cycle computing similar to SETI@Home has also been a focus of our efforts. This paper details the design of the latest prototype and discusses our plans for the next revision of the software.
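A minimal sketch of the embarrassingly parallel pattern described above is shown below; run_model() is a hypothetical stand-in for a natural-resources model, and the parameter ranges and RMSE objective are illustrative assumptions.

# Hedged sketch of the parallel part of automated calibration: each worker runs
# the model for one parameter set and returns a summary statistic, with no
# inter-process communication between runs.
import numpy as np
from multiprocessing import Pool

OBSERVED = np.array([1.0, 3.0, 7.0, 4.0, 2.0])  # e.g. an observed flow peak (toy data)

def run_model(params):
    scale, shift = params  # hypothetical model parameters
    simulated = scale * np.array([0.5, 1.5, 3.5, 2.0, 1.0]) + shift
    rmse = float(np.sqrt(np.mean((simulated - OBSERVED) ** 2)))
    return params, rmse

if __name__ == "__main__":
    rng = np.random.default_rng(42)
    trial_sets = list(zip(rng.uniform(1, 3, 1000), rng.uniform(-1, 1, 1000)))
    with Pool() as pool:
        results = pool.map(run_model, trial_sets)   # thousands of independent runs
    best_params, best_rmse = min(results, key=lambda r: r[1])
    print("best parameters:", best_params, "RMSE:", round(best_rmse, 3))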
NASA Astrophysics Data System (ADS)
Moeys, J.; Larsbo, M.; Bergström, L.; Brown, C. D.; Coquet, Y.; Jarvis, N. J.
2012-07-01
Estimating pesticide leaching risks at the regional scale requires the ability to completely parameterise a pesticide fate model using only survey data, such as soil and land-use maps. Such parameterisations usually rely on a set of lookup tables and (pedo)transfer functions, relating elementary soil and site properties to model parameters. The aim of this paper is to describe and test a complete set of parameter estimation algorithms developed for the pesticide fate model MACRO, which accounts for preferential flow in soil macropores. We used tracer monitoring data from 16 lysimeter studies, carried out in three European countries, to evaluate the ability of MACRO and this "blind parameterisation" scheme to reproduce measured solute leaching at the base of each lysimeter. We focused on the prediction of early tracer breakthrough due to preferential flow, because this is critical for pesticide leaching. We then calibrated a selected number of parameters in order to assess to what extent the prediction of water and solute leaching could be improved. Our results show that water flow was generally reasonably well predicted (median model efficiency, ME, of 0.42). Although the general pattern of solute leaching was reproduced well by the model, the overall model efficiency was low (median ME = -0.26) due to errors in the timing and magnitude of some peaks. Preferential solute leaching at early pore volumes was also systematically underestimated. Nonetheless, the ranking of soils according to solute loads at early pore volumes was reasonably well estimated (concordance correlation coefficient, CCC, between 0.54 and 0.72). Moreover, we also found that ignoring macropore flow leads to a significant deterioration in the ability of the model to reproduce the observed leaching pattern, and especially the early breakthrough in some soils. Finally, the calibration procedure showed that improving the estimation of solute transport parameters is probably more important than the estimation of water flow parameters. Overall, the results are encouraging for the use of this modelling set-up to estimate pesticide leaching risks at the regional-scale, especially where the objective is to identify vulnerable soils and "source" areas of contamination.
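For reference, hedged Python implementations of the two skill scores quoted above (the Nash-Sutcliffe model efficiency and the concordance correlation coefficient) are sketched below with toy observed/simulated values, not lysimeter data.

# Hedged sketch of the two skill scores: model efficiency (ME, Nash-Sutcliffe)
# and Lin's concordance correlation coefficient (CCC). Arrays are toy values.
import numpy as np

def nash_sutcliffe(obs, sim):
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def concordance_cc(x, y):
    x, y = np.asarray(x, float), np.asarray(y, float)
    sxy = np.mean((x - x.mean()) * (y - y.mean()))
    return 2 * sxy / (x.var() + y.var() + (x.mean() - y.mean()) ** 2)

obs = [0.1, 0.4, 0.9, 1.5, 1.1]
sim = [0.2, 0.3, 1.0, 1.2, 1.0]
print("ME :", round(nash_sutcliffe(obs, sim), 2))
print("CCC:", round(concordance_cc(obs, sim), 2))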
NASA Astrophysics Data System (ADS)
Camera, Corrado; Bruggeman, Adriana; Zittis, Georgios; Hadjinicolaou, Panos
2017-04-01
Due to limited rainfall concentrated in the winter months and long dry summers, storage and management of water resources is of paramount importance in Cyprus. For water storage purposes, the Cyprus Water Development Department is responsible for the operation of 56 large dams (total volume of 310 Mm3) and 51 smaller reservoirs (total volume of 17 Mm3) over the island. Climate change is also expected to heavily affect Cyprus water resources with a 1.5%-12% decrease in mean annual rainfall (Camera et al., 2016) projected for the period 2020-2050, relative to 1980-2010. This will make reliable seasonal water inflow forecasts even more important for water managers. The overall aim of this study is to set up the widely used Weather Research and Forecasting (WRF) model with its hydrologic extension (WRF-Hydro) for seasonal forecasts of water inflow in dams located in the Troodos Mountains of Cyprus. The specific objectives of this study are: i) the calibration and evaluation of WRF-Hydro for the simulation of stream flows in the Troodos Mountains for past rainfall seasons; ii) a sensitivity analysis of the model parameters; iii) a comparison of the application of the atmospheric-hydrologic modelling chain versus the use of climate observations as forcing. The hydrologic model is run in its off-line version with daily forcing over a 1-km grid, while the overland and channel routing is performed on a 100-m grid with a time-step of 6 seconds. Model outputs are exported on a daily basis. First, WRF-Hydro is calibrated and validated over two 1-year periods (October-September), using a 1-km gridded observational precipitation dataset (Camera et al., 2014) as input. For the calibration and validation periods, years with annual rainfall close to the long-term average and with the presence of extreme rainfall and flow events were selected. A sensitivity analysis is performed for the following parameters: the partitioning of rainfall into runoff and infiltration (REFKDT), the partitioning of deep percolation between losses and baseflow contribution (LOSS_BASE), water retention depth (RETDEPRTFAC), overland roughness (OVROUGHRTFAC), and channel Manning coefficients (MANN). The calibrated WRF-Hydro shows a good ability to reproduce annual total streamflow (-19% error) and total peak discharge volumes (+3% error), although very high values of MANN were used to match the timing of the peak and get positive values of the Nash-Sutcliffe efficiency coefficient (0.13). The two most sensitive parameters for the modeled seasonal flow were REFKDT and LOSS_BASE. Simulations of the calibrated WRF-Hydro with WRF-modelled atmospheric forcing showed high errors in comparison with those forced with observations, which can be corrected only by modifying the most sensitive parameters by at least one order of magnitude. This study has received funding from the EU H2020 BINGO Project (GA 641739). Camera C., Bruggeman A., Hadjinicolaou P., Pashiardis S., Lange M.A., 2014. Evaluation of interpolation techniques for the creation of gridded daily precipitation (1 × 1 km2); Cyprus, 1980-2010. J Geophys Res Atmos 119, 693-712, DOI:10.1002/2013JD020611. Camera C., Bruggeman A., Hadjinicolaou P., Michaelides S., Lange M.A., 2016. Evaluation of a spatial rainfall generator for generating high resolution precipitation projections over orographically complex terrain. Stoch Environ Res Risk Assess, DOI:10.1007/s00477-016-1239-1.
Controlling the emission profile of an H2 discharge lamp to simulate interstellar radiation fields
NASA Astrophysics Data System (ADS)
Ligterink, N. F. W.; Paardekooper, D. M.; Chuang, K.-J.; Both, M. L.; Cruz-Diaz, G. A.; van Helden, J. H.; Linnartz, H.
2015-12-01
Context. Microwave discharge hydrogen-flow lamps have been used for more than half a century to simulate interstellar ultraviolet radiation fields in the laboratory. Recent discrepancies between identical measurements in different laboratories, as well as clear wavelength-dependent results obtained in monochromatic (synchrotron) experiments, hint at a more elaborate dependence on the exact discharge settings than assumed so far. Aims: We have systematically investigated two lamp geometries over a large number of different running conditions, and the spectral emission patterns are characterized for the first time with fully calibrated absolute flux numbers. Methods: A sophisticated plasma lamp calibration set-up has been used to record the vacuum-ultraviolet emission spectra with a spectral resolution of 0.5 nm and a bandwidth of 1.6 nm in the 116-220 nm region. Spectra are compared with the output of a calibrated D2 lamp, which allows a derivation of absolute radiance values. Results: The general findings of over 200 individual measurements are presented, illustrating how the lamp emission pattern depends on i) microwave power; ii) gas and gas mixing ratios; iii) discharge lamp geometry; iv) cavity positioning; and v) gas pressure.
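As an illustrative aside, the transfer-calibration idea of ratioing the H2-lamp signal to a calibrated D2-lamp spectrum recorded with the same set-up can be sketched as follows; the wavelengths, signals and radiances below are placeholders, not the measured data.

# Hedged sketch of a transfer calibration: the absolute radiance of the H2 lamp
# is obtained by ratioing its detector signal to that of a calibrated D2 lamp
# recorded with the same set-up, assuming a linear detector response.
import numpy as np

wavelength_nm = np.linspace(116, 220, 6)
signal_h2 = np.array([120.0, 860.0, 310.0, 150.0, 90.0, 60.0])  # counts/s (assumed)
signal_d2 = np.array([40.0, 55.0, 70.0, 95.0, 120.0, 150.0])    # counts/s (assumed)
radiance_d2 = np.array([0.8, 1.1, 1.4, 1.9, 2.3, 2.8])          # calibrated values (assumed)

radiance_h2 = signal_h2 / signal_d2 * radiance_d2
for wl, L in zip(wavelength_nm, radiance_h2):
    print(f"{wl:6.1f} nm  ->  {L:6.2f} (absolute units)")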
Using an Automated 3D-tracking System to Record Individual and Shoals of Adult Zebrafish
Maaswinkel, Hans; Zhu, Liqun; Weng, Wei
2013-01-01
Like many aquatic animals, zebrafish (Danio rerio) moves in a 3D space. It is thus preferable to use a 3D recording system to study its behavior. The presented automatic video tracking system accomplishes this by using a mirror system and a calibration procedure that corrects for the considerable error introduced by the transition of light from water to air. With this system it is possible to record both single and groups of adult zebrafish. Before use, the system has to be calibrated. The system consists of three modules: Recording, Path Reconstruction, and Data Processing. The step-by-step protocols for calibration and using the three modules are presented. Depending on the experimental setup, the system can be used for testing neophobia, white aversion, social cohesion, motor impairments, novel object exploration etc. It is especially promising as a first-step tool to study the effects of drugs or mutations on basic behavioral patterns. The system provides information about vertical and horizontal distribution of the zebrafish, about the xyz-components of kinematic parameters (such as locomotion, velocity, acceleration, and turning angle) and it provides the data necessary to calculate parameters for social cohesions when testing shoals. PMID:24336189
NASA Astrophysics Data System (ADS)
Lawhead, Carlos; Cooper, Nathan; Anderson, Josiah; Shiver, Tegan; Ujj, Laszlo
2014-03-01
Electronic and vibrational spectroscopies are extremely important tools in material characterization; therefore, a table-top laser spectrometer system was built in the spectroscopy lab at the UWF physics department. The system is based upon an injection-seeded nanosecond Nd:YAG laser. The second and third harmonics of the fundamental 1064 nm radiation are used to generate Raman and fluorescence spectra, measured with an MS260i imaging spectrograph equipped with a CCD detector cooled to -85 °C in order to minimize the dark background noise. The wavelength calibration was performed with the emission spectra of standard gas-discharge lamps. Spectral sensitivity calibration is needed before any spectra are recorded, because of the table-top nature of the instrument. A variety of intensity standards were investigated to find standards suitable for our table-top setup that do not change the geometry of the system. High quality measurements of Raman standards were analyzed to test the spectral corrections. Background fluorescence removal methods were used to improve Raman signal intensity readings from highly fluorescent molecules. This instrument will be used to measure vibrational and electronic spectra of biological molecules.
The upgrade of the Thomson scattering system for measurement on the C-2/C-2U devices
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhai, K.; Schindler, T.; Kinley, J.
The C-2/C-2U Thomson scattering system has been substantially upgraded during the latter phase of the C-2/C-2U program. A Rayleigh channel has been added to each of the three polychromators of the C-2/C-2U Thomson scattering system. Onsite spectral calibration has been applied to avoid the issue of different channel responses at different spots on the photomultiplier tube surface. With the added Rayleigh channel, the absolute intensity response of the system is calibrated with Rayleigh scattering in argon gas from 0.1 to 4 Torr, where the Rayleigh scattering signal is comparable to the Thomson scattering signal at electron densities from 1 × 10^13 to 4 × 10^14 cm^-3. A new signal processing algorithm, using a maximum likelihood method and including detailed analysis of different noise contributions within the system, has been developed to obtain electron temperature and density profiles. The system setup, spectral and intensity calibration procedure and its outcome, data analysis, and the results of electron temperature/density profile measurements will be presented.
Aerodynamic and hydrodynamic model tests of the Enserch Garden Banks floating production facility
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huang, E.W.; Bauer, T.C.; Kelly, P.J.
1995-12-01
This paper presents the results of aerodynamic and hydrodynamic model tests of the Enserch Garden Banks, a semisubmersible Floating Production Facility (FPF) moored in 2,190-ft waters. During the wind tunnel tests, the steady component of wind and current forces/moments at various skew and heel axes was measured. The results were compared and calibrated against analytical calculations using techniques recommended by ABS and API. During the wave basin tests, the mooring line tensions and vessel motions, including the effects of dynamic wind and current, were measured. Analytical calculations of the airgap, vessel motions, and mooring line loads were compared with wave basin model test results. This paper discusses the test objectives, test setups and agendas for wind and wave basin testing of a deepwater permanently moored floating production system. The experience from these tests and the comparison of measured test results with analytical calculations will be of value to designers and operators contemplating the use of a semisubmersible based floating production system. The analysis procedures are aimed at estimating (1) vessel motions, (2) airgap, and (3) mooring line tensions with reasonable accuracy. Finally, this paper demonstrates how the model test results were interpolated and adapted in the design loop.
Towards reliable ET estimates in the semi-arid Júcar region in Spain.
NASA Astrophysics Data System (ADS)
Brenner, Johannes; Zink, Matthias; Schrön, Martin; Thober, Stephan; Rakovec, Oldrich; Cuntz, Matthias; Merz, Ralf; Samaniego, Luis
2017-04-01
Current research has indicated the potential for improving evapotranspiration (ET) estimates in state-of-the-art hydrologic models such as the mesoscale Hydrological Model (mHM, www.ufz.de/mhm). Most models exhibit deficiencies in estimating the ET flux in semi-arid regions. Possible reasons for poor performance may be related to the low resolution of the forcings, the estimation of the PET, which is in most cases based on temperature only, the joint estimation of transpiration and evaporation through the Feddes equation, poor process parameterizations, among others. In this study, we aim at sequential hypothesis-based experiments to uncover the main reasons for these deficiencies in the Júcar basin in Spain. We plan the following experiments: 1) Use the high-resolution meteorological forcing (P and T) provided by local authorities to estimate its effects on ET and streamflow. 2) Use local ET measurements at seven eddy covariance stations to estimate evaporation-related parameters. 3) Test the influence of the PET formulations (Hargreaves-Samani, Priestley-Taylor, Penman-Monteith). 4) Estimate evaporation and transpiration separately based on equations proposed by Bohn and Vivoni (2016). 5) Incorporate local soil moisture measurements to re-estimate ET and soil moisture related parameters. We set up mHM for seven eddy-covariance sites at the local scale (100 × 100 m2). This resolution was chosen because it is representative of the footprint of the latent heat estimation at the eddy-covariance station. In the second experiment, for example, a parameter set is to be found as a compromise solution between ET measured at local stations and the streamflow observations at eight sub-basins of the Júcar river. Preliminary results indicate that higher model performance regarding streamflow can be achieved using local high-resolution meteorology. ET performance is, however, still deficient. On the contrary, using ET site calibrations alone increases performance in ET but yields poor performance in streamflow. Results suggest the need for multi-variable, simultaneous calibration schemes to reliably estimate ET and streamflow in the Júcar basin. Penman-Monteith appears to be the best performing PET formulation. Experiments 4 and 5 should reveal the benefits of separating evaporation from bare soil and transpiration in semi-arid regions using mHM. Further research in this direction is foreseen by incorporating neutron counts from Cosmic Ray Neutron Sensing technology in the calibration/validation procedure of mHM.
NASA Technical Reports Server (NTRS)
Georgiev, Georgi T.; Butler, James J.; Thome, Kurt; Cooksey, Catherine; Ding, Leibo
2016-01-01
Satellite instruments operating in the reflective solar wavelength region require accurate and precise determination of the Bidirectional Reflectance Distribution Functions (BRDFs) of the laboratory and flight diffusers used in their pre-flight and on-orbit calibrations. This paper advances that initial work and presents a comparison of the spectral Bidirectional Reflectance Distribution Function (BRDF) and Directional Hemispherical Reflectance (DHR) of Spectralon*, a common material for laboratory and on-orbit flight diffusers. A new measurement setup for BRDF measurements from 900 nm to 2500 nm located at NASA Goddard Space Flight Center (GSFC) is described. The GSFC setup employs an extended indium gallium arsenide detector, bandpass filters, and a supercontinuum light source. Comparisons of the GSFC BRDF measurements in the ShortWave InfraRed (SWIR) with those made by the NIST Spectral Trifunction Automated Reference Reflectometer (STARR) are presented. The Spectralon sample used in this study was a 2-inch diameter, 99% white, pressed and sintered polytetrafluoroethylene (PTFE) target. The NASA/NIST BRDF comparison measurements were made at an incident angle of 0 deg and a viewing angle of 45 deg. Additional BRDF data not compared to NIST were measured at additional incident and viewing angle geometries and are not presented here. The total combined uncertainty for the measurement of BRDF in the SWIR range made by the GSFC scatterometer is less than 1% (k=1). This study is in support of the calibration of the Joint Polar Satellite System (JPSS) Radiation Budget Instrument (RBI) and Visible Infrared Imaging Radiometer Suite (VIIRS), and of other current and future NASA remote sensing missions operating across the reflected solar wavelength region.
2004-03-05
Reflectors set up in the La Selva region of the Costa Rican rain forest by scientist Paul Siqueira from NASA’s Jet Propulsion Lab. These reflectors are used by JPL scientists onboard Dryden's DC-8 aircraft to calibrate the Airborne Synthetic Aperture Radar (AirSAR) system. Scientists place these reflectors at known points on the ground, allowing researchers onboard the aircraft to verify their data. AirSAR 2004 Mesoamerica is a three-week expedition by an international team of scientists that uses an all-weather imaging tool, called the Airborne Synthetic Aperture Radar (AirSAR), which is located onboard NASA's DC-8 airborne laboratory. Scientists from many parts of the world, including NASA's Jet Propulsion Laboratory, are combining ground research done in several areas in Central America with NASA's AirSAR technology to improve and expand on the quality of research they are able to conduct. The radar, developed by NASA's Jet Propulsion Laboratory, can penetrate clouds and also collect data at night. Its high-resolution sensors operate at multiple wavelengths and modes, allowing AirSAR to see beneath treetops, through thin sand, and dry snow pack. AirSAR's 2004 campaign is a collaboration of many U.S. and Central American institutions and scientists, including NASA; the National Science Foundation; the Smithsonian Institution; National Geographic; Conservation International; the Organization of Tropical Studies; the Central American Commission for Environment and Development; and the Inter-American Development Bank.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Santhanam, A; Min, Y; Beron, P
Purpose: Patient safety hazards such as a wrong patient/site getting treated can lead to catastrophic results. The purpose of this project is to automatically detect potential patient safety hazards during the radiotherapy setup and alert the therapist before the treatment is initiated. Methods: We employed a set of co-located and co-registered 3D cameras placed inside the treatment room. Each camera provided a point-cloud of fraxels (fragment pixels with 3D depth information). Each of the cameras was calibrated using a custom-built calibration target to provide 3D information with less than 2 mm error in the 500 mm neighborhood around the isocenter. To identify potential patient safety hazards, the treatment room components and the patient's body needed to be identified and tracked in real-time. For feature recognition purposes, we used a graph-cut based feature recognition with principal component analysis (PCA) based feature-to-object correlation to segment the objects in real-time. Changes in the object's position were tracked using the CamShift algorithm. The 3D object information was then stored for each classified object (e.g. gantry, couch). A deep learning framework was then used to analyze all the classified objects in both 2D and 3D and to fine-tune a convolutional network for object recognition. The number of network layers was optimized to identify the tracked objects with >95% accuracy. Results: Our systematic analyses showed that the system was effectively able to recognize wrong patient setups and wrong patient accessories. The combined usage of 2D camera information (color + depth) enabled a topology-preserving approach to verify patient safety hazards in an automatic manner, even in scenarios where the depth information is only partially available. Conclusion: By utilizing the 3D cameras inside the treatment room and a deep learning based image classification, potential patient safety hazards can be effectively avoided.
Poole, Sandra; Vis, Marc; Knight, Rodney; Seibert, Jan
2017-01-01
Ecologically relevant streamflow characteristics (SFCs) of ungauged catchments are often estimated from simulated runoff of hydrologic models that were originally calibrated on gauged catchments. However, SFC estimates of the gauged donor catchments and subsequently the ungauged catchments can be substantially uncertain when models are calibrated using traditional approaches based on optimization of statistical performance metrics (e.g., Nash–Sutcliffe model efficiency). An improved calibration strategy for gauged catchments is therefore crucial to help reduce the uncertainties of estimated SFCs for ungauged catchments. The aim of this study was to improve SFC estimates from modeled runoff time series in gauged catchments by explicitly including one or several SFCs in the calibration process. Different types of objective functions were defined consisting of the Nash–Sutcliffe model efficiency, single SFCs, or combinations thereof. We calibrated a bucket-type runoff model (HBV – Hydrologiska Byråns Vattenavdelning – model) for 25 catchments in the Tennessee River basin and evaluated the proposed calibration approach on 13 ecologically relevant SFCs representing major flow regime components and different flow conditions. While the model generally tended to underestimate the tested SFCs related to mean and high-flow conditions, SFCs related to low flow were generally overestimated. The highest estimation accuracies were achieved by a SFC-specific model calibration. Estimates of SFCs not included in the calibration process were of similar quality when comparing a multi-SFC calibration approach to a traditional model efficiency calibration. For practical applications, this implies that SFCs should preferably be estimated from targeted runoff model calibration, and modeled estimates need to be carefully interpreted.
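A hedged sketch of what an SFC-aware objective function could look like is given below; the choice of the 7-day minimum flow as the SFC, the 50/50 weighting and the synthetic series are assumptions for illustration, not the study's actual setup.

# Hedged sketch of an SFC-aware objective: blend the Nash-Sutcliffe efficiency
# of the full hydrograph with the relative error of one streamflow
# characteristic (here the 7-day minimum flow, an assumed choice).
import numpy as np

def nse(obs, sim):
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def seven_day_min(flow):
    flow = np.asarray(flow, float)
    return min(flow[i:i + 7].mean() for i in range(len(flow) - 6))

def objective(obs, sim, w=0.5):
    sfc_err = abs(seven_day_min(sim) - seven_day_min(obs)) / seven_day_min(obs)
    return w * nse(obs, sim) + (1.0 - w) * (1.0 - sfc_err)   # maximise this

rng = np.random.default_rng(3)
obs = 5 + 4 * np.sin(np.linspace(0, 6 * np.pi, 365)) ** 2 + rng.gamma(1.0, 0.5, 365)
sim = obs * rng.uniform(0.8, 1.2, 365)   # synthetic "modeled" runoff
print("combined objective:", round(objective(obs, sim), 3))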
André, L; Lamy, E; Lutz, P; Pernier, M; Lespinard, O; Pauss, A; Ribeiro, T
2016-02-01
The electrical resistivity tomography (ERT) method is a non-intrusive method widely used in landfills to detect and locate liquid content. An experimental set-up was implemented on a dry batch anaerobic digestion reactor to investigate liquid distribution during the process and to map the spatial distribution of the inoculum. Two electrode arrays were used: pole-dipole and gradient. A technical adaptation of the ERT method was necessary. Measured resistivity data were inverted and modeled with the RES2DINV software to obtain resistivity sections. Continuous calibration along the resistivity sections, involving sampling and physicochemical analysis, was necessary to interpret the data. Samples were analyzed for both biochemical methane potential and fiber content. Correlations were established between the reactor preparation protocol, resistivity values, liquid content, methane potential and fiber content, representing liquid distribution, high methane potential zones and degradation zones. The ERT method proved highly relevant for monitoring and optimizing the dry batch anaerobic digestion process. Copyright © 2015 Elsevier Ltd. All rights reserved.
Radiation Requirements and Testing of Cryogenic Thermometers for the ILC
NASA Astrophysics Data System (ADS)
Barnett, T.; Filippov, Yu. P.; Filippova, E. Yu.; Mokhov, N. V.; Nakao, N.; Klebaner, A. L.; Korenev, S. A.; Theilacker, J. C.; Trenikhina, J.; Vaziri, K.
2008-03-01
A large quantity of cryogenic temperature sensors will be used for operation of the International Linear Collider (ILC). Most of them will be subject to high radiation doses during the accelerator lifetime. Understanding the particle energy spectra, the accumulated radiation dose in the thermometers and its impact on performance is vital in establishing the technical specification of cryogenic thermometry for the ILC. Realistic MARS15 computer simulations were performed to understand the ILC radiation environment. Simulation results were used to establish radiation dose requirements for commercially available cryogenic thermometers. Two types of thermometers, Cernox® and TVO, were calibrated prior to irradiation using different techniques. The sensors were then subjected to up to 200 kGy of electron beam irradiation with a kinetic energy of 5 MeV, representative of the situation during ILC operation. The post-irradiation behavior of the sensors was studied. The paper describes the MARS15 model, simulation results, cryogenic test set-up, irradiation tests, and cryogenic test results.
A Comparison of Two Balance Calibration Model Building Methods
NASA Technical Reports Server (NTRS)
DeLoach, Richard; Ulbrich, Norbert
2007-01-01
Simulated strain-gage balance calibration data is used to compare the accuracy of two balance calibration model building methods for different noise environments and calibration experiment designs. The first building method obtains a math model for the analysis of balance calibration data after applying a candidate math model search algorithm to the calibration data set. The second building method uses stepwise regression analysis in order to construct a model for the analysis. Four balance calibration data sets were simulated in order to compare the accuracy of the two math model building methods. The simulated data sets were prepared using the traditional One Factor At a Time (OFAT) technique and the Modern Design of Experiments (MDOE) approach. Random and systematic errors were introduced in the simulated calibration data sets in order to study their influence on the math model building methods. Residuals of the fitted calibration responses and other statistical metrics were compared in order to evaluate the calibration models developed with different combinations of noise environment, experiment design, and model building method. Overall, predicted math models and residuals of both math model building methods show very good agreement. Significant differences in model quality were attributable to noise environment, experiment design, and their interaction. Generally, the addition of systematic error significantly degraded the quality of calibration models developed from OFAT data by either method, but MDOE experiment designs were more robust with respect to the introduction of a systematic component of the unexplained variance.
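For readers unfamiliar with the second approach, the following is a generic forward stepwise-regression sketch (selection by adjusted R²), not the regression analysis actually applied to the balance data; the candidate-term dictionary is a placeholder.

```python
# Hedged sketch of forward stepwise regression model building.
import numpy as np

def adjusted_r2(y, X):
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    ss_res = resid @ resid
    ss_tot = ((y - y.mean()) ** 2).sum()
    n, p = X.shape
    return 1.0 - (ss_res / (n - p)) / (ss_tot / (n - 1))

def forward_stepwise(y, candidates):
    """candidates: dict mapping term name -> column vector of that regressor."""
    n = len(y)
    selected, X = [], np.ones((n, 1))          # start with an intercept only
    best = adjusted_r2(y, X)
    improved = True
    while improved and candidates:
        improved = False
        for name, col in list(candidates.items()):
            score = adjusted_r2(y, np.column_stack([X, col]))
            if score > best:
                best, best_name = score, name
                improved = True
        if improved:
            X = np.column_stack([X, candidates.pop(best_name)])
            selected.append(best_name)
    return selected, best
```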
Input variable selection and calibration data selection for storm water quality regression models.
Sun, Siao; Bertrand-Krajewski, Jean-Luc
2013-01-01
Storm water quality models are useful tools in storm water management. Interest has been growing in analyzing existing data for developing models for urban storm water quality evaluations. It is important to select appropriate model inputs when many candidate explanatory variables are available. Model calibration and verification are essential steps in any storm water quality modeling. This study investigates input variable selection and calibration data selection in storm water quality regression models. The two selection problems are mutually dependent. A procedure is developed to fulfil the two selection tasks in sequence. The procedure first selects model input variables using a cross-validation method. An appropriate number of variables are identified as model inputs to ensure that a model is neither overfitted nor underfitted. Based on the model input selection results, calibration data selection is studied. The uncertainty of model performance due to calibration data selection is investigated with a random selection method. A cluster-based approach is applied to enhance model calibration practice, based on the principle of selecting representative data for calibration. The comparison between results from the cluster selection method and random selection shows that the former can significantly improve the performance of calibrated models. It is found that the information content in the calibration data is important in addition to the size of the calibration data set.
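A hedged sketch of the two selection tasks in sequence, assuming scikit-learn, a linear regression stand-in for the storm water quality model, and an arbitrary cluster count; it is not the authors' procedure.

```python
# Cross-validated input selection followed by cluster-based selection of
# representative calibration events (illustrative only).
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score
from sklearn.cluster import KMeans

def select_inputs(X, y, max_vars=5):
    """Greedy forward selection of explanatory variables by mean CV R^2."""
    remaining, chosen = list(range(X.shape[1])), []
    while remaining and len(chosen) < max_vars:
        scores = {j: cross_val_score(LinearRegression(),
                                     X[:, chosen + [j]], y, cv=5).mean()
                  for j in remaining}
        best = max(scores, key=scores.get)
        if chosen and scores[best] <= cross_val_score(
                LinearRegression(), X[:, chosen], y, cv=5).mean():
            break                       # no further improvement: stop adding
        chosen.append(best)
        remaining.remove(best)
    return chosen

def select_calibration_events(X, n_cal):
    """Pick one representative event per cluster for calibration."""
    km = KMeans(n_clusters=n_cal, n_init=10, random_state=0).fit(X)
    return [int(np.argmin(((X - c) ** 2).sum(axis=1)))
            for c in km.cluster_centers_]
```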
Improving piezo actuators for nanopositioning tasks
NASA Astrophysics Data System (ADS)
Seeliger, Martin; Gramov, Vassil; Götz, Bernt
2018-02-01
In recent years, numerous applications have emerged on the market with seemingly contradictory demands. On one side, the structure size has decreased while, on the other side, the overall sample size and speed of operation have increased. Although the use of piezoelectric positioning solutions has become a standard in the field of micro- and nanopositioning, surface inspection and manipulation, piezosystem jena has now enhanced the performance beyond simple control loop tuning and actuator design. In automated manufacturing machines, a given signal has to be tracked fast and precisely. However, control systems naturally decrease the ability to follow this signal in real time. piezosystem jena developed a new signal feed forward system bypassing the PID control. This way, we could reduce signal tracking errors by a factor of three compared to a conventionally optimized PID control. Of course, PID values still have to be adjusted to specific conditions, e.g. changing additional mass, to optimize the performance. This can now be done with a new automatic tuning tool designed to analyze the current setup, find the best fitting configuration, and also gather and display theoretical as well as experimental performance data. Thus, the control quality of a mechanical setup can be improved within a few minutes without the need for external calibration equipment. Furthermore, new mechanical optimization techniques that focus not only on the positioning device, but also take the whole setup into account, prevent parasitic motion down to a few nanometers.
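The following toy simulation illustrates the general idea of adding a feedforward term that bypasses the PID loop; the first-order actuator model and all gains are hypothetical and unrelated to piezosystem jena's products.

```python
# Discrete-time comparison of PID-only tracking vs PID plus a feedforward
# term derived from the inverse static gain of a first-order actuator model.
import numpy as np

def simulate(kp=2.0, ki=40.0, kd=0.0, k_ff=0.0, dt=1e-4, tau=2e-3, gain=1.0):
    t = np.arange(0, 0.05, dt)
    target = np.sin(2 * np.pi * 50 * t)          # 50 Hz reference signal
    y, integ, prev_err = 0.0, 0.0, 0.0
    err_trace = []
    for r in target:
        err = r - y
        integ += err * dt
        u_pid = kp * err + ki * integ + kd * (err - prev_err) / dt
        u_ff = k_ff * r / gain                    # feedforward bypasses the PID
        u = u_pid + u_ff
        y += dt / tau * (gain * u - y)            # first-order actuator response
        prev_err = err
        err_trace.append(err)
    return np.sqrt(np.mean(np.square(err_trace)))

print("RMS tracking error, PID only     :", simulate(k_ff=0.0))
print("RMS tracking error, PID + feedfwd:", simulate(k_ff=1.0))
```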
NASA Astrophysics Data System (ADS)
Ferraro, F.; Takács, M. P.; Piatti, D.; Mossa, V.; Aliotta, M.; Bemmerer, D.; Best, A.; Boeltzig, A.; Broggini, C.; Bruno, C. G.; Caciolli, A.; Cavanna, F.; Chillery, T.; Ciani, G. F.; Corvisiero, P.; Csedreki, L.; Davinson, T.; Depalo, R.; D'Erasmo, G.; Di Leva, A.; Elekes, Z.; Fiore, E. M.; Formicola, A.; Fülöp, Zs.; Gervino, G.; Guglielmetti, A.; Gustavino, C.; Gyürky, Gy.; Imbriani, G.; Junker, M.; Kochanek, I.; Lugaro, M.; Marcucci, L. E.; Marigo, P.; Menegazzo, R.; Pantaleo, F. R.; Paticchio, V.; Perrino, R.; Prati, P.; Schiavulli, L.; Stöckel, K.; Straniero, O.; Szücs, T.; Trezzi, D.; Zavatarelli, S.
2018-03-01
The experimental study of nuclear reactions of astrophysical interest is greatly facilitated by a low-background, high-luminosity setup. The Laboratory for Underground Nuclear Astrophysics (LUNA) 400 kV accelerator offers ultra-low cosmic-ray induced background due to its location deep underground in the Gran Sasso National Laboratory (INFN-LNGS), Italy, and high intensity, 250-500 μA, proton and α ion beams. In order to fully exploit these features, a high-purity, recirculating gas target system for isotopically enriched gases is coupled to a high-efficiency, six-fold optically segmented bismuth germanate (BGO) γ-ray detector. The beam intensity is measured with a beam calorimeter with constant temperature gradient. Pressure and temperature measurements have been carried out at several positions along the beam path, and the resultant gas density profile has been determined. Calibrated γ-intensity standards and the well-known Ep = 278 keV 14N(p,γ)15O resonance were used to determine the γ-ray detection efficiency and to validate the simulation of the target and detector setup. As an example, the recently measured resonance at Ep = 189.5 keV in the 22Ne(p,γ)23Na reaction has been investigated with high statistics, and the γ-decay branching ratios of the resonance have been determined.
NASA Astrophysics Data System (ADS)
Burkert, A.; Müller, D.; Rieger, S.; Schmidl, G.; Triebel, W.; Paa, W.
2015-12-01
Formaldehyde is an excellent tracer for the early phase of ignition of hydrocarbon fuels and can be used, e.g., for characterization of single droplet ignition. However, due to its fast thermal decomposition at elevated temperatures and pressures, the determination of concentration fields from laser-induced fluorescence (LIF) measurements is difficult. In this paper, we address LIF measurements of this important combustion intermediate using a calibration cell. Here, formaldehyde is created from evaporation of paraformaldehyde. We discuss three setups for preparation of formaldehyde/air mixtures with respect to their usability for well-defined heating of formaldehyde/air mixtures. The "basic setup" uses a resistive heater around the measurement cell for investigation of formaldehyde near vacuum conditions or formaldehyde/air samples after sequential admixing of air. The second setup, described for the first time in detail here, takes advantage of a constant-flow formaldehyde/air regime which uses preheated air to reduce the necessary time for gas heating. We used the constant flow system to measure new pressure-dependent LIF excitation spectra in the 343 nm spectral region (414 absorption band of formaldehyde). The third setup, based on a novel concept for fast gas heating via excitation of SF6 (a chemically inert gas) using a TEA (transverse excitation at atmospheric pressure) CO2 laser, allows both gas heating time and thermal decomposition to be further minimized. Here, an admixture of CO2 is used for real-time temperature measurement based on Raman scattering. The applicability of the fast laser heating system has been demonstrated with gas mixtures of SF6 + air, SF6 + N2, as well as SF6 + N2 + CO2 at 1 bar total pressure.
Amiri, Shahram; Wilson, David R; Masri, Bassam A; Sharma, Gulshan; Anglin, Carolyn
2011-06-03
Determining the 3D pose of the patella after total knee arthroplasty is challenging. The commonly used single-plane fluoroscopy is prone to large errors in the clinically relevant mediolateral direction. A conventional fixed bi-planar setup is limited in the minimum angular distance between the imaging planes necessary for visualizing the patellar component, and requires a highly flexible setup to adjust for the subject-specific geometries. As an alternative solution, this study investigated the use of a novel multi-planar imaging setup that consists of a C-arm tracked by an external optoelectric tracking system, to acquire calibrated radiographs from multiple orientations. To determine the accuracies, a knee prosthesis was implanted on artificial bones and imaged in simulated 'Supine' and 'Weightbearing' configurations. The results were compared with measures from a coordinate measuring machine as the ground-truth reference. The weightbearing configuration was the preferred imaging direction with RMS errors of 0.48 mm and 1.32° for mediolateral shift and tilt of the patella, respectively, the two most clinically relevant measures. The 'imaging accuracies' of the system, defined as the accuracies in 3D reconstruction of a cylindrical ball bearing phantom (so as to avoid the influence of the shape and orientation of the imaging object), showed an order of magnitude (11.5 times) reduction in the out-of-plane RMS errors in comparison to single-plane fluoroscopy. With this new method, the complete 3D pose of the patellofemoral and tibiofemoral joints during quasi-static activities can be determined with a many-fold (up to 8 times; 3.4 mm) improvement in the out-of-plane accuracies compared to a conventional single-plane fluoroscopy setup. Copyright © 2011 Elsevier Ltd. All rights reserved.
Sin, Gürkan; Van Hulle, Stijn W H; De Pauw, Dirk J W; van Griensven, Ann; Vanrolleghem, Peter A
2005-07-01
Modelling activated sludge systems has gained an increasing momentum after the introduction of activated sludge models (ASMs) in 1987. Application of dynamic models for full-scale systems requires essentially a calibration of the chosen ASM to the case under study. Numerous full-scale model applications have been performed so far which were mostly based on ad hoc approaches and expert knowledge. Further, each modelling study has followed a different calibration approach: e.g. different influent wastewater characterization methods, different kinetic parameter estimation methods, different selection of parameters to be calibrated, different priorities within the calibration steps, etc. In short, there was no standard approach in performing the calibration study, which makes it difficult, if not impossible, to (1) compare different calibrations of ASMs with each other and (2) perform internal quality checks for each calibration study. To address these concerns, systematic calibration protocols have recently been proposed to bring guidance to the modeling of activated sludge systems and in particular to the calibration of full-scale models. In this contribution four existing calibration approaches (BIOMATH, HSG, STOWA and WERF) will be critically discussed using a SWOT (Strengths, Weaknesses, Opportunities, Threats) analysis. It will also be assessed in what way these approaches can be further developed in view of further improving the quality of ASM calibration. In this respect, the potential of automating some steps of the calibration procedure by use of mathematical algorithms is highlighted.
Flux analysis of the human proximal colon using anaerobic digestion model 1.
Motelica-Wagenaar, Anne Marieke; Nauta, Arjen; van den Heuvel, Ellen G H M; Kleerebezem, Robbert
2014-08-01
The colon can be regarded as an anaerobic digestive compartment within the gastrointestinal tract (GIT). An in silico model simulating the fluxes in the human proximal colon was developed on the basis of the anaerobic digestion model 1 (ADM1), which is traditionally used to model waste conversion to biogas. Model calibration was conducted using data from in vitro fermentation of the proximal colon (TIM-2), supplemented with, amongst others, the biokinetics of prebiotic galactooligosaccharide (GOS) fermentation. The impact of water and solute absorption by the host was also included. Hydrolysis constants of carbohydrates and proteins were estimated based on total short chain fatty acid (SCFA) and ammonia production in vitro. Model validation was established using an independent dataset from a different in vitro model: an in vitro three-stage continuous culture system. The in silico model was shown to provide quantitative insight into the microbial community structure in terms of functional groups, and the substrate and product fluxes between these groups as well as the host, as a function of the substrate composition, pH and the solids residence time (SRT). The model confirms the experimental observation that methanogens are washed out at low pH or low SRT values. The in silico model is proposed as a useful tool in the design of experimental setups for in vitro experiments by giving insight into fermentation processes in the proximal human colon. Copyright © 2014. Published by Elsevier Ltd.
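As a simplified illustration of how a hydrolysis constant can be estimated from cumulative SCFA production (ADM1 treats hydrolysis as a first-order process), the sketch below fits mock data; the yield, substrate level and data points are invented, not the study's values.

```python
# Estimating a first-order hydrolysis constant k_hyd from cumulative SCFA
# production, in the spirit of ADM1's hydrolysis step (mock data).
import numpy as np
from scipy.optimize import curve_fit

def cumulative_scfa(t, k_hyd, yield_scfa, substrate0=10.0):
    """SCFA produced from substrate hydrolysed with dX/dt = -k_hyd * X."""
    return yield_scfa * substrate0 * (1.0 - np.exp(-k_hyd * t))

t_obs = np.array([0.0, 4.0, 8.0, 16.0, 24.0, 48.0])       # hours (mock)
scfa_obs = np.array([0.0, 1.1, 2.0, 3.3, 4.1, 5.2])       # gCOD/L (mock)

(k_hyd, y_scfa), _ = curve_fit(cumulative_scfa, t_obs, scfa_obs, p0=[0.05, 0.5])
print(f"k_hyd ~ {k_hyd:.3f} 1/h, SCFA yield ~ {y_scfa:.2f}")
```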
NASA Astrophysics Data System (ADS)
He, X.; Stisen, S.; Henriksen, H. J.
2015-12-01
Hydrological models have been important tools for supporting decision making in water resource management over the past few decades. Nowadays, the frequent occurrence of extreme hydrological events has put the focus on the development of real-time hydrological modeling and forecasting systems. Among the various types of hydrological models, only rainfall-runoff models for surface water are commonly used in an online, real-time fashion; there has been no tradition of using integrated hydrological models for both surface water and groundwater from a large-scale perspective. At the Geological Survey of Denmark and Greenland (GEUS), we have set up and calibrated an integrated hydrological model that covers the entire nation, namely the DK-model. So far, the DK-model has only been used in offline mode for historical and future scenario simulations. Therefore, challenges arise when operating the DK-model in real-time mode due to a lack of technical experience and stakeholder awareness. In the present study, we demonstrate the process of bringing the DK-model online while actively involving the stakeholders. Although the system is not yet fully operational, a prototype has been finished and presented to the stakeholders; it can simulate groundwater levels, streamflow and water content in the root zone with a lead time of 48 hours, refreshed every 6 hours. The active involvement of stakeholders has provided very valuable insights and feedback for future improvements.
Multiple-Objective Stepwise Calibration Using Luca
Hay, Lauren E.; Umemoto, Makiko
2007-01-01
This report documents Luca (Let us calibrate), a multiple-objective, stepwise, automated procedure for hydrologic model calibration and the associated graphical user interface (GUI). Luca is a wizard-style user-friendly GUI that provides an easy systematic way of building and executing a calibration procedure. The calibration procedure uses the Shuffled Complex Evolution global search algorithm to calibrate any model compiled with the U.S. Geological Survey's Modular Modeling System. This process assures that intermediate and final states of the model are simulated consistently with measured values.
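A minimal sketch of a Luca-style stepwise calibration loop, with scipy's differential evolution standing in for the Shuffled Complex Evolution search; run_model, the step definitions and the bounds are placeholders, not Luca's actual interface.

```python
# Stepwise calibration: each step calibrates one subset of parameters
# against its own objective while earlier steps' parameters stay fixed.
import numpy as np
from scipy.optimize import differential_evolution

def stepwise_calibrate(run_model, obs, steps, bounds, params0):
    """steps: list of (param_indices, objective(sim, obs)) executed in order."""
    params = np.array(params0, dtype=float)
    for idx, objective in steps:
        def cost(x, idx=idx, objective=objective):
            trial = params.copy()
            trial[idx] = x
            return objective(run_model(trial), obs)
        result = differential_evolution(cost, [bounds[i] for i in idx], seed=0)
        params[idx] = result.x       # freeze this step's calibrated values
    return params
```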
NASA Astrophysics Data System (ADS)
Günther, Uwe; Kuzhel, Sergii
2010-10-01
Gauged PT quantum mechanics (PTQM) and corresponding Krein space setups are studied. For models with constant non-Abelian gauge potentials and extended parity inversions, compact and noncompact Lie group components are analyzed via Cartan decompositions. A Lie-triple structure is found, and an interpretation as a PT-symmetrically generalized Jaynes-Cummings model is possible, with close relation to recently studied cavity QED setups with transmon states in multilevel artificial atoms. For models with Abelian gauge potentials a hidden Clifford algebra structure is found and used to obtain the fundamental symmetry of Krein space-related J-self-adjoint extensions for PTQM setups with ultra-localized potentials.
Feedforward Coordinate Control of a Robotic Cell Injection Catheter.
Cheng, Weyland; Law, Peter K
2017-08-01
Remote and robotically actuated catheters are the stepping-stones toward autonomous catheters, where complex intravascular procedures may be performed with minimal intervention from a physician. This article proposes a concept for the positional, feedforward control of a robotically actuated cell injection catheter used for the injection of myogenic or undifferentiated stem cells into the myocardial infarct boundary zones of the left ventricle. The prototype for the catheter system was built upon a needle-based catheter with a single degree of deflection, a 3-D printed handle combined with actuators, and the Arduino microcontroller platform. A bench setup was used to mimic a left ventricle catheter procedure starting from the femoral artery. Using Matlab and the open-source video modeling tool Tracker, the planar coordinates (y, z) of the catheter position were analyzed, and a feedforward control system was developed based on empirical models. Using Student's t test with a sample size of 26, it was determined that for both the y- and z-axes, the mean discrepancy between the calibrated and theoretical coordinate values showed no significant difference from the hypothesized value of µ = 0. The root mean square error of the calibrated coordinates also showed an 88% improvement in the z-axis and a 31% improvement in the y-axis compared to the unmodified trial run. This proof-of-concept investigation leads to the possibility of further developing a feedforward control system in vivo using catheters with omnidirectional deflection. Feedforward positional control allows for more flexibility in the design of an automated catheter system, where problems such as systemic time delay may be a hindrance in instances requiring an immediate reaction.
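The statistical check reported above can be reproduced in outline as follows; the discrepancy array is mock data, and scipy's one-sample t test stands in for whatever software the authors used.

```python
# One-sample t test of calibrated-minus-theoretical coordinate discrepancies
# against mu = 0, plus an RMSE helper for before/after comparisons.
import numpy as np
from scipy import stats

discrepancy_z = np.random.default_rng(0).normal(0.0, 0.4, size=26)  # mm, mock
t_stat, p_value = stats.ttest_1samp(discrepancy_z, popmean=0.0)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}  (fail to reject mu = 0 if p > 0.05)")

def rmse(measured, reference):
    return np.sqrt(np.mean((np.asarray(measured) - np.asarray(reference)) ** 2))
```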
Numerical modelling of distributed vibration sensor based on phase-sensitive OTDR
NASA Astrophysics Data System (ADS)
Masoudi, A.; Newson, T. P.
2017-04-01
A distributed vibration sensor based on phase-sensitive OTDR is numerically modeled. The advantage of modeling the building blocks of the sensor individually and combining the blocks to analyse the behavior of the sensing system is discussed. It is shown that the numerical model can accurately imitate the response of the experimental setup to dynamic perturbations, using a signal processing procedure similar to that used to extract the phase information from the sensing setup.
Reynolds-Averaged Navier-Stokes Simulations of Two Partial-Span Flap Wing Experiments
NASA Technical Reports Server (NTRS)
Takalluk, M. A.; Laflin, Kelly R.
1998-01-01
Structured Reynolds-Averaged Navier-Stokes simulations of two partial-span flap wing experiments were performed. The high-lift aerodynamic and aeroacoustic wind-tunnel experiments were conducted at both the NASA Ames 7- by 10-Foot Wind Tunnel and the NASA Langley Quiet Flow Facility. The purpose of these tests was to accurately document the acoustic and aerodynamic characteristics associated with the principal airframe noise sources, including flap side-edge noise. Specific measurements were taken that can be used to validate analytic and computational models of the noise sources and the associated aerodynamics for configurations and conditions approximating flight for transport aircraft. The numerical results are used both to calibrate a widely used CFD code, CFL3D, and to obtain details of flap side-edge flow features not discernible from experimental observations. Both experimental set-ups were numerically modeled by using multiple-block structured grids. Various turbulence models, grid block-interface interaction methods and grid topologies were implemented. Numerical results of both simulations are in excellent agreement with experimental measurements and flow visualization observations. The flow field in the flap-edge region was adequately resolved to discern some crucial information about the flow physics and to substantiate the merger of the two vortical structures. As a result of these investigations, airframe noise modelers have proposed various simplified models which use the results obtained from the steady-state computations as input.
Computational fluid dynamics modeling of laboratory flames and an industrial flare.
Singh, Kanwar Devesh; Gangadharan, Preeti; Chen, Daniel H; Lou, Helen H; Li, Xianchang; Richmond, Peyton
2014-11-01
A computational fluid dynamics (CFD) methodology for simulating the combustion process has been validated with experimental results. Three different types of experimental setups were used to validate the CFD model. These setups include an industrial-scale flare setup and two lab-scale flames. The CFD study also involved three different fuels: C3H6/CH/Air/N2, C2H4/O2/Ar and CH4/Air. In the first setup, flare efficiency data from the Texas Commission on Environmental Quality (TCEQ) 2010 field tests were used to validate the CFD model. In the second setup, a McKenna burner with flat flames was simulated. Temperature and mass fractions of important species were compared with the experimental data. Finally, results of an experimental study done at Sandia National Laboratories to generate a lifted jet flame were used for the purpose of validation. The reduced 50-species mechanism LU 1.1, the realizable k-epsilon turbulence model, and the EDC turbulence-chemistry interaction model were used for this work. Flare efficiency, axial profiles of temperature, and mass fractions of various intermediate species obtained in the simulation were compared with experimental data, and a good agreement between the profiles was clearly observed. In particular, the simulation match with the TCEQ 2010 flare tests has been significantly improved (within 5% of the data) compared to the results reported by Singh et al. in 2012. Validation of the speciated flat flame data supports the view that flares can be a primary source of formaldehyde emission.
NASA Technical Reports Server (NTRS)
Malroy, Eric T.
2007-01-01
The programs, arrays and logic structure were developed to enable the dynamic update of conductors in Thermal Desktop. The MatLab program FMHTPRE.m processes the Thermal Desktop conductors and sets up the arrays. The user needs to manually copy portions of the output to different input regions in Thermal Desktop. Also, Fortran subroutines are provided that perform the actual updates to the conductors. The subroutines are set up for helium gas, but the equations can be modified for other gases. The maximum number of free molecular conductors allowed is 10,000 for a given radiation task. Additional radiation tasks for FMHT can be generated to account for more conductors. Modifications to the Fortran subroutines may be warranted when the heat transfer is in the mixed or continuum regime. The FMHT Thermal Desktop model should be activated by using the "Case Set Manager" once the model is set up. Careful setup of the model is needed to avoid excessive solve times.
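Purely as a hedged illustration of the kind of relation such a conductor-update subroutine might evaluate for helium, the sketch below uses one commonly quoted free-molecular conduction formula; the constants, accommodation coefficient and formula choice are assumptions to be checked against the delivered Fortran, not a statement of what it contains.

```python
# Approximate free-molecular conductance for helium between two surfaces,
# G such that Q = G * (T2 - T1); formula and alpha are assumptions.
import math

R_UNIVERSAL = 8.314      # J/(mol K)
M_HELIUM = 4.003e-3      # kg/mol
GAMMA_HELIUM = 5.0 / 3.0

def fmht_conductance(area_m2, pressure_pa, t_gauge_k, alpha=0.5):
    """Free-molecular conductance in W/K for a gap of the given area."""
    lam = ((GAMMA_HELIUM + 1.0) / (GAMMA_HELIUM - 1.0)) * math.sqrt(
        R_UNIVERSAL / (8.0 * math.pi * M_HELIUM * t_gauge_k))
    return alpha * lam * pressure_pa * area_m2

# Example: 1 cm^2 gap at 1e-2 Pa, pressure gauge at 300 K
print(fmht_conductance(1e-4, 1e-2, 300.0))
```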
NASA Astrophysics Data System (ADS)
Karssenberg, D.; Wanders, N.; de Roo, A.; de Jong, S.; Bierkens, M. F.
2013-12-01
Large-scale hydrological models are nowadays mostly calibrated using observed discharge. As a result, a large part of the hydrological system that is not directly linked to discharge, in particular the unsaturated zone, remains uncalibrated, or might be modified unrealistically. Soil moisture observations from satellites have the potential to fill this gap, as these provide the closest thing to a direct measurement of the state of the unsaturated zone, and thus are potentially useful in calibrating unsaturated zone model parameters. This is expected to result in a better identification of the complete hydrological system, potentially leading to improved forecasts of the hydrograph as well. Here we evaluate this added value of remotely sensed soil moisture in calibration of large-scale hydrological models by addressing two research questions: 1) Which parameters of hydrological models can be identified by calibration with remotely sensed soil moisture? 2) Does calibration with remotely sensed soil moisture lead to an improved calibration of hydrological models compared to approaches that calibrate only with discharge, such that this leads to improved forecasts of soil moisture content and discharge as well? To answer these questions we use a dual state and parameter ensemble Kalman filter to calibrate the hydrological model LISFLOOD for the Upper Danube area. Calibration is done with discharge and remotely sensed soil moisture acquired by AMSR-E, SMOS and ASCAT. Four scenarios are studied: no calibration (expert knowledge), calibration on discharge, calibration on remote sensing data (three satellites) and calibration on both discharge and remote sensing data. Using a split-sample approach, the model is calibrated for a period of 2 years and validated for the calibrated model parameters on a validation period of 10 years. Results show that calibration with discharge data improves the estimation of groundwater parameters (e.g., groundwater reservoir constant) and routing parameters. Calibration with only remotely sensed soil moisture results in an accurate calibration of parameters related to land surface processes (e.g., the saturated conductivity of the soil), which is not possible when calibrating on discharge alone. For the upstream area up to 40000 km2, calibration on both discharge and soil moisture results in a reduction by 10-30 % in the RMSE for discharge simulations, compared to calibration on discharge alone. For discharge in the downstream area, the model performance due to assimilation of remotely sensed soil moisture is not increased or slightly decreased, most probably due to the larger relative importance of the routing and contribution of groundwater in downstream areas. When microwave soil moisture is used for calibration the RMSE of soil moisture simulations decreases from 0.072 m3m-3 to 0.062 m3m-3. The conclusion is that remotely sensed soil moisture holds potential for calibration of hydrological models leading to a better simulation of soil moisture content throughout and a better simulation of discharge in upstream areas, particularly if discharge observations are sparse.
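A bare-bones sketch of the ensemble Kalman filter parameter update underlying such a dual state-parameter approach, for a single observed soil-moisture value; the ensemble handling, noise levels and observation operator are assumptions, and this is not the LISFLOOD implementation.

```python
# Ensemble Kalman filter update of model parameters from one observation,
# using perturbed observations and an ensemble cross-covariance.
import numpy as np

def enkf_parameter_update(param_ens, sim_obs_ens, obs, obs_err_std, rng):
    """param_ens: (n_ens, n_par); sim_obs_ens: (n_ens,) simulated observation."""
    n_ens = param_ens.shape[0]
    perturbed_obs = obs + rng.normal(0.0, obs_err_std, size=n_ens)
    p_anom = param_ens - param_ens.mean(axis=0)
    h_anom = sim_obs_ens - sim_obs_ens.mean()
    cov_ph = p_anom.T @ h_anom / (n_ens - 1)          # cross-covariance (n_par,)
    var_hh = h_anom @ h_anom / (n_ens - 1) + obs_err_std ** 2
    gain = cov_ph / var_hh                             # Kalman gain (n_par,)
    return param_ens + np.outer(perturbed_obs - sim_obs_ens, gain)
```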
SWAT: Model use, calibration, and validation
USDA-ARS?s Scientific Manuscript database
SWAT (Soil and Water Assessment Tool) is a comprehensive, semi-distributed river basin model that requires a large number of input parameters which complicates model parameterization and calibration. Several calibration techniques have been developed for SWAT including manual calibration procedures...
NASA Astrophysics Data System (ADS)
Vadrucci, M.; Bazzano, G.; Borgognoni, F.; Chiari, M.; Mazzinghi, A.; Picardi, L.; Ronsivalle, C.; Ruberto, C.; Taccetti, F.
2017-09-01
In the framework of the COBRA project, elemental analyses of cultural heritage objects based on the particle induced X-ray emission (PIXE) are planned in a collaboration between the APAM laboratory of ENEA-Frascati and the LABEC laboratory of INFN in Florence. With this aim a 3-7 MeV pulsed proton beam, driven by the injector of the protontherapy accelerator under construction for the TOP-IMPLART project, will be used to demonstrate the feasibility of the technique with a small-footprint pulsed accelerator to Italian small and medium enterprises interested in the composition analysis of ancient artifacts. The experimental set-up for PIXE analysis on the TOP-IMPLART machine consists of a modified assembly of the vertical beam line usually dedicated to radiobiology experiments: the beam produced by the injector (RFQ + DTL, a PL7 ACCSYSHITACHI model) is bent to 90° by a magnet, is collimated by a 300 μm aperture inserted in the end nozzle and extracted into ambient pressure by an exit window consisting of a Upilex foil 7.5 μm thick. The beam is pulsed with a variable pulse duration of 20-100 μs and a repetition rate variable from 10 to 100 Hz. The X-ray detection system is based on a Ketek Silicon Drift Detector (SDD) with 7 mm2 active area and 450 μm thickness, with a thin Beryllium entrance window (8 μm). The results of the calibration of this new PIXE set-up using thick target standards and of the analysis of the preliminary measurements on pigments are presented.
NASA Astrophysics Data System (ADS)
Brans, Toon; Strubbe, Filip; Schreuer, Caspar; Neyts, Kristiaan; Beunis, Filip
2015-06-01
We present a novel approach for label-free concentration measurement of a specific protein in a solution. The technique combines optical tweezers and microelectrophoresis to establish the electrophoretic mobility of a single microparticle suspended in the solution. From this mobility measurement, the amount of adsorbed protein on the particle is derived. Using this method, we determine the concentration of avidin in a buffer solution. After calibration of the setup, which accounts for electro-osmotic flow in the measurement device, the mobilities of both bare and biotinylated microspheres are measured as a function of the avidin concentration in the mixture. Two types of surface adsorption are identified: the biotinylated particles show specific adsorption, resulting from the binding of avidin molecules with biotin, at low avidin concentrations (below 0.04 μg/ml), while at concentrations of several μg/ml non-specific adsorption on both types of particles is observed. These two adsorption mechanisms are incorporated in a theoretical model describing the relation between the measured mobility and the avidin concentration in the mixture. This model describes the electrophoretic mobility of these particles accurately over four orders of magnitude of the avidin concentration.
Overview of the JET results in support to ITER
NASA Astrophysics Data System (ADS)
Litaudon, X.; Abduallev, S.; Abhangi, M.; Abreu, P.; Afzal, M.; Aggarwal, K. M.; Ahlgren, T.; Ahn, J. H.; Aho-Mantila, L.; Aiba, N.; Airila, M.; Albanese, R.; Aldred, V.; Alegre, D.; Alessi, E.; Aleynikov, P.; Alfier, A.; Alkseev, A.; Allinson, M.; Alper, B.; Alves, E.; Ambrosino, G.; Ambrosino, R.; Amicucci, L.; Amosov, V.; Andersson Sundén, E.; Angelone, M.; Anghel, M.; Angioni, C.; Appel, L.; Appelbee, C.; Arena, P.; Ariola, M.; Arnichand, H.; Arshad, S.; Ash, A.; Ashikawa, N.; Aslanyan, V.; Asunta, O.; Auriemma, F.; Austin, Y.; Avotina, L.; Axton, M. D.; Ayres, C.; Bacharis, M.; Baciero, A.; Baião, D.; Bailey, S.; Baker, A.; Balboa, I.; Balden, M.; Balshaw, N.; Bament, R.; Banks, J. W.; Baranov, Y. F.; Barnard, M. A.; Barnes, D.; Barnes, M.; Barnsley, R.; Baron Wiechec, A.; Barrera Orte, L.; Baruzzo, M.; Basiuk, V.; Bassan, M.; Bastow, R.; Batista, A.; Batistoni, P.; Baughan, R.; Bauvir, B.; Baylor, L.; Bazylev, B.; Beal, J.; Beaumont, P. S.; Beckers, M.; Beckett, B.; Becoulet, A.; Bekris, N.; Beldishevski, M.; Bell, K.; Belli, F.; Bellinger, M.; Belonohy, É.; Ben Ayed, N.; Benterman, N. A.; Bergsåker, H.; Bernardo, J.; Bernert, M.; Berry, M.; Bertalot, L.; Besliu, C.; Beurskens, M.; Bieg, B.; Bielecki, J.; Biewer, T.; Bigi, M.; Bílková, P.; Binda, F.; Bisoffi, A.; Bizarro, J. P. S.; Björkas, C.; Blackburn, J.; Blackman, K.; Blackman, T. R.; Blanchard, P.; Blatchford, P.; Bobkov, V.; Boboc, A.; Bodnár, G.; Bogar, O.; Bolshakova, I.; Bolzonella, T.; Bonanomi, N.; Bonelli, F.; Boom, J.; Booth, J.; Borba, D.; Borodin, D.; Borodkina, I.; Botrugno, A.; Bottereau, C.; Boulting, P.; Bourdelle, C.; Bowden, M.; Bower, C.; Bowman, C.; Boyce, T.; Boyd, C.; Boyer, H. J.; Bradshaw, J. M. A.; Braic, V.; Bravanec, R.; Breizman, B.; Bremond, S.; Brennan, P. D.; Breton, S.; Brett, A.; Brezinsek, S.; Bright, M. D. J.; Brix, M.; Broeckx, W.; Brombin, M.; Brosławski, A.; Brown, D. P. D.; Brown, M.; Bruno, E.; Bucalossi, J.; Buch, J.; Buchanan, J.; Buckley, M. A.; Budny, R.; Bufferand, H.; Bulman, M.; Bulmer, N.; Bunting, P.; Buratti, P.; Burckhart, A.; Buscarino, A.; Busse, A.; Butler, N. K.; Bykov, I.; Byrne, J.; Cahyna, P.; Calabrò, G.; Calvo, I.; Camenen, Y.; Camp, P.; Campling, D. C.; Cane, J.; Cannas, B.; Capel, A. J.; Card, P. J.; Cardinali, A.; Carman, P.; Carr, M.; Carralero, D.; Carraro, L.; Carvalho, B. B.; Carvalho, I.; Carvalho, P.; Casson, F. J.; Castaldo, C.; Catarino, N.; Caumont, J.; Causa, F.; Cavazzana, R.; Cave-Ayland, K.; Cavinato, M.; Cecconello, M.; Ceccuzzi, S.; Cecil, E.; Cenedese, A.; Cesario, R.; Challis, C. D.; Chandler, M.; Chandra, D.; Chang, C. S.; Chankin, A.; Chapman, I. T.; Chapman, S. C.; Chernyshova, M.; Chitarin, G.; Ciraolo, G.; Ciric, D.; Citrin, J.; Clairet, F.; Clark, E.; Clark, M.; Clarkson, R.; Clatworthy, D.; Clements, C.; Cleverly, M.; Coad, J. P.; Coates, P. A.; Cobalt, A.; Coccorese, V.; Cocilovo, V.; Coda, S.; Coelho, R.; Coenen, J. W.; Coffey, I.; Colas, L.; Collins, S.; Conka, D.; Conroy, S.; Conway, N.; Coombs, D.; Cooper, D.; Cooper, S. R.; Corradino, C.; Corre, Y.; Corrigan, G.; Cortes, S.; Coster, D.; Couchman, A. S.; Cox, M. P.; Craciunescu, T.; Cramp, S.; Craven, R.; Crisanti, F.; Croci, G.; Croft, D.; Crombé, K.; Crowe, R.; Cruz, N.; Cseh, G.; Cufar, A.; Cullen, A.; Curuia, M.; Czarnecka, A.; Dabirikhah, H.; Dalgliesh, P.; Dalley, S.; Dankowski, J.; Darrow, D.; Davies, O.; Davis, W.; Day, C.; Day, I. E.; De Bock, M.; de Castro, A.; de la Cal, E.; de la Luna, E.; De Masi, G.; de Pablos, J. 
L.; De Temmerman, G.; De Tommasi, G.; de Vries, P.; Deakin, K.; Deane, J.; Degli Agostini, F.; Dejarnac, R.; Delabie, E.; den Harder, N.; Dendy, R. O.; Denis, J.; Denner, P.; Devaux, S.; Devynck, P.; Di Maio, F.; Di Siena, A.; Di Troia, C.; Dinca, P.; D'Inca, R.; Ding, B.; Dittmar, T.; Doerk, H.; Doerner, R. P.; Donné, T.; Dorling, S. E.; Dormido-Canto, S.; Doswon, S.; Douai, D.; Doyle, P. T.; Drenik, A.; Drewelow, P.; Drews, P.; Duckworth, Ph.; Dumont, R.; Dumortier, P.; Dunai, D.; Dunne, M.; Ďuran, I.; Durodié, F.; Dutta, P.; Duval, B. P.; Dux, R.; Dylst, K.; Dzysiuk, N.; Edappala, P. V.; Edmond, J.; Edwards, A. M.; Edwards, J.; Eich, Th.; Ekedahl, A.; El-Jorf, R.; Elsmore, C. G.; Enachescu, M.; Ericsson, G.; Eriksson, F.; Eriksson, J.; Eriksson, L. G.; Esposito, B.; Esquembri, S.; Esser, H. G.; Esteve, D.; Evans, B.; Evans, G. E.; Evison, G.; Ewart, G. D.; Fagan, D.; Faitsch, M.; Falie, D.; Fanni, A.; Fasoli, A.; Faustin, J. M.; Fawlk, N.; Fazendeiro, L.; Fedorczak, N.; Felton, R. C.; Fenton, K.; Fernades, A.; Fernandes, H.; Ferreira, J.; Fessey, J. A.; Février, O.; Ficker, O.; Field, A.; Fietz, S.; Figueiredo, A.; Figueiredo, J.; Fil, A.; Finburg, P.; Firdaouss, M.; Fischer, U.; Fittill, L.; Fitzgerald, M.; Flammini, D.; Flanagan, J.; Fleming, C.; Flinders, K.; Fonnesu, N.; Fontdecaba, J. M.; Formisano, A.; Forsythe, L.; Fortuna, L.; Fortuna-Zalesna, E.; Fortune, M.; Foster, S.; Franke, T.; Franklin, T.; Frasca, M.; Frassinetti, L.; Freisinger, M.; Fresa, R.; Frigione, D.; Fuchs, V.; Fuller, D.; Futatani, S.; Fyvie, J.; Gál, K.; Galassi, D.; Gałązka, K.; Galdon-Quiroga, J.; Gallagher, J.; Gallart, D.; Galvão, R.; Gao, X.; Gao, Y.; Garcia, J.; Garcia-Carrasco, A.; García-Muñoz, M.; Gardarein, J.-L.; Garzotti, L.; Gaudio, P.; Gauthier, E.; Gear, D. F.; Gee, S. J.; Geiger, B.; Gelfusa, M.; Gerasimov, S.; Gervasini, G.; Gethins, M.; Ghani, Z.; Ghate, M.; Gherendi, M.; Giacalone, J. C.; Giacomelli, L.; Gibson, C. S.; Giegerich, T.; Gil, C.; Gil, L.; Gilligan, S.; Gin, D.; Giovannozzi, E.; Girardo, J. B.; Giroud, C.; Giruzzi, G.; Glöggler, S.; Godwin, J.; Goff, J.; Gohil, P.; Goloborod'ko, V.; Gomes, R.; Gonçalves, B.; Goniche, M.; Goodliffe, M.; Goodyear, A.; Gorini, G.; Gosk, M.; Goulding, R.; Goussarov, A.; Gowland, R.; Graham, B.; Graham, M. E.; Graves, J. P.; Grazier, N.; Grazier, P.; Green, N. R.; Greuner, H.; Grierson, B.; Griph, F. S.; Grisolia, C.; Grist, D.; Groth, M.; Grove, R.; Grundy, C. N.; Grzonka, J.; Guard, D.; Guérard, C.; Guillemaut, C.; Guirlet, R.; Gurl, C.; Utoh, H. H.; Hackett, L. J.; Hacquin, S.; Hagar, A.; Hager, R.; Hakola, A.; Halitovs, M.; Hall, S. J.; Hallworth Cook, S. P.; Hamlyn-Harris, C.; Hammond, K.; Harrington, C.; Harrison, J.; Harting, D.; Hasenbeck, F.; Hatano, Y.; Hatch, D. R.; Haupt, T. D. V.; Hawes, J.; Hawkes, N. C.; Hawkins, J.; Hawkins, P.; Haydon, P. W.; Hayter, N.; Hazel, S.; Heesterman, P. J. L.; Heinola, K.; Hellesen, C.; Hellsten, T.; Helou, W.; Hemming, O. N.; Hender, T. C.; Henderson, M.; Henderson, S. S.; Henriques, R.; Hepple, D.; Hermon, G.; Hertout, P.; Hidalgo, C.; Highcock, E. G.; Hill, M.; Hillairet, J.; Hillesheim, J.; Hillis, D.; Hizanidis, K.; Hjalmarsson, A.; Hobirk, J.; Hodille, E.; Hogben, C. H. A.; Hogeweij, G. M. D.; Hollingsworth, A.; Hollis, S.; Homfray, D. A.; Horáček, J.; Hornung, G.; Horton, A. R.; Horton, L. D.; Horvath, L.; Hotchin, S. P.; Hough, M. R.; Howarth, P. J.; Hubbard, A.; Huber, A.; Huber, V.; Huddleston, T. M.; Hughes, M.; Huijsmans, G. T. A.; Hunter, C. L.; Huynh, P.; Hynes, A. 
M.; Iglesias, D.; Imazawa, N.; Imbeaux, F.; Imríšek, M.; Incelli, M.; Innocente, P.; Irishkin, M.; Ivanova-Stanik, I.; Jachmich, S.; Jacobsen, A. S.; Jacquet, P.; Jansons, J.; Jardin, A.; Järvinen, A.; Jaulmes, F.; Jednoróg, S.; Jenkins, I.; Jeong, C.; Jepu, I.; Joffrin, E.; Johnson, R.; Johnson, T.; Johnston, Jane; Joita, L.; Jones, G.; Jones, T. T. C.; Hoshino, K. K.; Kallenbach, A.; Kamiya, K.; Kaniewski, J.; Kantor, A.; Kappatou, A.; Karhunen, J.; Karkinsky, D.; Karnowska, I.; Kaufman, M.; Kaveney, G.; Kazakov, Y.; Kazantzidis, V.; Keeling, D. L.; Keenan, T.; Keep, J.; Kempenaars, M.; Kennedy, C.; Kenny, D.; Kent, J.; Kent, O. N.; Khilkevich, E.; Kim, H. T.; Kim, H. S.; Kinch, A.; king, C.; King, D.; King, R. F.; Kinna, D. J.; Kiptily, V.; Kirk, A.; Kirov, K.; Kirschner, A.; Kizane, G.; Klepper, C.; Klix, A.; Knight, P.; Knipe, S. J.; Knott, S.; Kobuchi, T.; Köchl, F.; Kocsis, G.; Kodeli, I.; Kogan, L.; Kogut, D.; Koivuranta, S.; Kominis, Y.; Köppen, M.; Kos, B.; Koskela, T.; Koslowski, H. R.; Koubiti, M.; Kovari, M.; Kowalska-Strzęciwilk, E.; Krasilnikov, A.; Krasilnikov, V.; Krawczyk, N.; Kresina, M.; Krieger, K.; Krivska, A.; Kruezi, U.; Książek, I.; Kukushkin, A.; Kundu, A.; Kurki-Suonio, T.; Kwak, S.; Kwiatkowski, R.; Kwon, O. J.; Laguardia, L.; Lahtinen, A.; Laing, A.; Lam, N.; Lambertz, H. T.; Lane, C.; Lang, P. T.; Lanthaler, S.; Lapins, J.; Lasa, A.; Last, J. R.; Łaszyńska, E.; Lawless, R.; Lawson, A.; Lawson, K. D.; Lazaros, A.; Lazzaro, E.; Leddy, J.; Lee, S.; Lefebvre, X.; Leggate, H. J.; Lehmann, J.; Lehnen, M.; Leichtle, D.; Leichuer, P.; Leipold, F.; Lengar, I.; Lennholm, M.; Lerche, E.; Lescinskis, A.; Lesnoj, S.; Letellier, E.; Leyland, M.; Leysen, W.; Li, L.; Liang, Y.; Likonen, J.; Linke, J.; Linsmeier, Ch.; Lipschultz, B.; Liu, G.; Liu, Y.; Lo Schiavo, V. P.; Loarer, T.; Loarte, A.; Lobel, R. C.; Lomanowski, B.; Lomas, P. J.; Lönnroth, J.; López, J. M.; López-Razola, J.; Lorenzini, R.; Losada, U.; Lovell, J. J.; Loving, A. B.; Lowry, C.; Luce, T.; Lucock, R. M. A.; Lukin, A.; Luna, C.; Lungaroni, M.; Lungu, C. P.; Lungu, M.; Lunniss, A.; Lupelli, I.; Lyssoivan, A.; Macdonald, N.; Macheta, P.; Maczewa, K.; Magesh, B.; Maget, P.; Maggi, C.; Maier, H.; Mailloux, J.; Makkonen, T.; Makwana, R.; Malaquias, A.; Malizia, A.; Manas, P.; Manning, A.; Manso, M. E.; Mantica, P.; Mantsinen, M.; Manzanares, A.; Maquet, Ph.; Marandet, Y.; Marcenko, N.; Marchetto, C.; Marchuk, O.; Marinelli, M.; Marinucci, M.; Markovič, T.; Marocco, D.; Marot, L.; Marren, C. A.; Marshal, R.; Martin, A.; Martin, Y.; Martín de Aguilera, A.; Martínez, F. J.; Martín-Solís, J. R.; Martynova, Y.; Maruyama, S.; Masiello, A.; Maslov, M.; Matejcik, S.; Mattei, M.; Matthews, G. F.; Maviglia, F.; Mayer, M.; Mayoral, M. L.; May-Smith, T.; Mazon, D.; Mazzotta, C.; McAdams, R.; McCarthy, P. J.; McClements, K. G.; McCormack, O.; McCullen, P. A.; McDonald, D.; McIntosh, S.; McKean, R.; McKehon, J.; Meadows, R. C.; Meakins, A.; Medina, F.; Medland, M.; Medley, S.; Meigh, S.; Meigs, A. G.; Meisl, G.; Meitner, S.; Meneses, L.; Menmuir, S.; Mergia, K.; Merrigan, I. R.; Mertens, Ph.; Meshchaninov, S.; Messiaen, A.; Meyer, H.; Mianowski, S.; Michling, R.; Middleton-Gear, D.; Miettunen, J.; Militello, F.; Militello-Asp, E.; Miloshevsky, G.; Mink, F.; Minucci, S.; Miyoshi, Y.; Mlynář, J.; Molina, D.; Monakhov, I.; Moneti, M.; Mooney, R.; Moradi, S.; Mordijck, S.; Moreira, L.; Moreno, R.; Moro, F.; Morris, A. W.; Morris, J.; Moser, L.; Mosher, S.; Moulton, D.; Murari, A.; Muraro, A.; Murphy, S.; Asakura, N. N.; Na, Y. 
S.; Nabais, F.; Naish, R.; Nakano, T.; Nardon, E.; Naulin, V.; Nave, M. F. F.; Nedzelski, I.; Nemtsev, G.; Nespoli, F.; Neto, A.; Neu, R.; Neverov, V. S.; Newman, M.; Nicholls, K. J.; Nicolas, T.; Nielsen, A. H.; Nielsen, P.; Nilsson, E.; Nishijima, D.; Noble, C.; Nocente, M.; Nodwell, D.; Nordlund, K.; Nordman, H.; Nouailletas, R.; Nunes, I.; Oberkofler, M.; Odupitan, T.; Ogawa, M. T.; O'Gorman, T.; Okabayashi, M.; Olney, R.; Omolayo, O.; O'Mullane, M.; Ongena, J.; Orsitto, F.; Orszagh, J.; Oswuigwe, B. I.; Otin, R.; Owen, A.; Paccagnella, R.; Pace, N.; Pacella, D.; Packer, L. W.; Page, A.; Pajuste, E.; Palazzo, S.; Pamela, S.; Panja, S.; Papp, P.; Paprok, R.; Parail, V.; Park, M.; Parra Diaz, F.; Parsons, M.; Pasqualotto, R.; Patel, A.; Pathak, S.; Paton, D.; Patten, H.; Pau, A.; Pawelec, E.; Soldan, C. Paz; Peackoc, A.; Pearson, I. J.; Pehkonen, S.-P.; Peluso, E.; Penot, C.; Pereira, A.; Pereira, R.; Pereira Puglia, P. P.; Perez von Thun, C.; Peruzzo, S.; Peschanyi, S.; Peterka, M.; Petersson, P.; Petravich, G.; Petre, A.; Petrella, N.; Petržilka, V.; Peysson, Y.; Pfefferlé, D.; Philipps, V.; Pillon, M.; Pintsuk, G.; Piovesan, P.; Pires dos Reis, A.; Piron, L.; Pironti, A.; Pisano, F.; Pitts, R.; Pizzo, F.; Plyusnin, V.; Pomaro, N.; Pompilian, O. G.; Pool, P. J.; Popovichev, S.; Porfiri, M. T.; Porosnicu, C.; Porton, M.; Possnert, G.; Potzel, S.; Powell, T.; Pozzi, J.; Prajapati, V.; Prakash, R.; Prestopino, G.; Price, D.; Price, M.; Price, R.; Prior, P.; Proudfoot, R.; Pucella, G.; Puglia, P.; Puiatti, M. E.; Pulley, D.; Purahoo, K.; Pütterich, Th.; Rachlew, E.; Rack, M.; Ragona, R.; Rainford, M. S. J.; Rakha, A.; Ramogida, G.; Ranjan, S.; Rapson, C. J.; Rasmussen, J. J.; Rathod, K.; Rattá, G.; Ratynskaia, S.; Ravera, G.; Rayner, C.; Rebai, M.; Reece, D.; Reed, A.; Réfy, D.; Regan, B.; Regaña, J.; Reich, M.; Reid, N.; Reimold, F.; Reinhart, M.; Reinke, M.; Reiser, D.; Rendell, D.; Reux, C.; Reyes Cortes, S. D. A.; Reynolds, S.; Riccardo, V.; Richardson, N.; Riddle, K.; Rigamonti, D.; Rimini, F. G.; Risner, J.; Riva, M.; Roach, C.; Robins, R. J.; Robinson, S. A.; Robinson, T.; Robson, D. W.; Roccella, R.; Rodionov, R.; Rodrigues, P.; Rodriguez, J.; Rohde, V.; Romanelli, F.; Romanelli, M.; Romanelli, S.; Romazanov, J.; Rowe, S.; Rubel, M.; Rubinacci, G.; Rubino, G.; Ruchko, L.; Ruiz, M.; Ruset, C.; Rzadkiewicz, J.; Saarelma, S.; Sabot, R.; Safi, E.; Sagar, P.; Saibene, G.; Saint-Laurent, F.; Salewski, M.; Salmi, A.; Salmon, R.; Salzedas, F.; Samaddar, D.; Samm, U.; Sandiford, D.; Santa, P.; Santala, M. I. K.; Santos, B.; Santucci, A.; Sartori, F.; Sartori, R.; Sauter, O.; Scannell, R.; Schlummer, T.; Schmid, K.; Schmidt, V.; Schmuck, S.; Schneider, M.; Schöpf, K.; Schwörer, D.; Scott, S. D.; Sergienko, G.; Sertoli, M.; Shabbir, A.; Sharapov, S. E.; Shaw, A.; Shaw, R.; Sheikh, H.; Shepherd, A.; Shevelev, A.; Shumack, A.; Sias, G.; Sibbald, M.; Sieglin, B.; Silburn, S.; Silva, A.; Silva, C.; Simmons, P. A.; Simpson, J.; Simpson-Hutchinson, J.; Sinha, A.; Sipilä, S. K.; Sips, A. C. C.; Sirén, P.; Sirinelli, A.; Sjöstrand, H.; Skiba, M.; Skilton, R.; Slabkowska, K.; Slade, B.; Smith, N.; Smith, P. G.; Smith, R.; Smith, T. J.; Smithies, M.; Snoj, L.; Soare, S.; Solano, E. R.; Somers, A.; Sommariva, C.; Sonato, P.; Sopplesa, A.; Sousa, J.; Sozzi, C.; Spagnolo, S.; Spelzini, T.; Spineanu, F.; Stables, G.; Stamatelatos, I.; Stamp, M. F.; Staniec, P.; Stankūnas, G.; Stan-Sion, C.; Stead, M. J.; Stefanikova, E.; Stepanov, I.; Stephen, A. V.; Stephen, M.; Stevens, A.; Stevens, B. 
D.; Strachan, J.; Strand, P.; Strauss, H. R.; Ström, P.; Stubbs, G.; Studholme, W.; Subba, F.; Summers, H. P.; Svensson, J.; Świderski, Ł.; Szabolics, T.; Szawlowski, M.; Szepesi, G.; Suzuki, T. T.; Tál, B.; Tala, T.; Talbot, A. R.; Talebzadeh, S.; Taliercio, C.; Tamain, P.; Tame, C.; Tang, W.; Tardocchi, M.; Taroni, L.; Taylor, D.; Taylor, K. A.; Tegnered, D.; Telesca, G.; Teplova, N.; Terranova, D.; Testa, D.; Tholerus, E.; Thomas, J.; Thomas, J. D.; Thomas, P.; Thompson, A.; Thompson, C.-A.; Thompson, V. K.; Thorne, L.; Thornton, A.; Thrysøe, A. S.; Tigwell, P. A.; Tipton, N.; Tiseanu, I.; Tojo, H.; Tokitani, M.; Tolias, P.; Tomeš, M.; Tonner, P.; Towndrow, M.; Trimble, P.; Tripsky, M.; Tsalas, M.; Tsavalas, P.; Tskhakaya jun, D.; Turner, I.; Turner, M. M.; Turnyanskiy, M.; Tvalashvili, G.; Tyrrell, S. G. J.; Uccello, A.; Ul-Abidin, Z.; Uljanovs, J.; Ulyatt, D.; Urano, H.; Uytdenhouwen, I.; Vadgama, A. P.; Valcarcel, D.; Valentinuzzi, M.; Valisa, M.; Vallejos Olivares, P.; Valovic, M.; Van De Mortel, M.; Van Eester, D.; Van Renterghem, W.; van Rooij, G. J.; Varje, J.; Varoutis, S.; Vartanian, S.; Vasava, K.; Vasilopoulou, T.; Vega, J.; Verdoolaege, G.; Verhoeven, R.; Verona, C.; Verona Rinati, G.; Veshchev, E.; Vianello, N.; Vicente, J.; Viezzer, E.; Villari, S.; Villone, F.; Vincenzi, P.; Vinyar, I.; Viola, B.; Vitins, A.; Vizvary, Z.; Vlad, M.; Voitsekhovitch, I.; Vondráček, P.; Vora, N.; Vu, T.; Pires de Sa, W. W.; Wakeling, B.; Waldon, C. W. F.; Walkden, N.; Walker, M.; Walker, R.; Walsh, M.; Wang, E.; Wang, N.; Warder, S.; Warren, R. J.; Waterhouse, J.; Watkins, N. W.; Watts, C.; Wauters, T.; Weckmann, A.; Weiland, J.; Weisen, H.; Weiszflog, M.; Wellstood, C.; West, A. T.; Wheatley, M. R.; Whetham, S.; Whitehead, A. M.; Whitehead, B. D.; Widdowson, A. M.; Wiesen, S.; Wilkinson, J.; Williams, J.; Williams, M.; Wilson, A. R.; Wilson, D. J.; Wilson, H. R.; Wilson, J.; Wischmeier, M.; Withenshaw, G.; Withycombe, A.; Witts, D. M.; Wood, D.; Wood, R.; Woodley, C.; Wray, S.; Wright, J.; Wright, J. C.; Wu, J.; Wukitch, S.; Wynn, A.; Xu, T.; Yadikin, D.; Yanling, W.; Yao, L.; Yavorskij, V.; Yoo, M. G.; Young, C.; Young, D.; Young, I. D.; Young, R.; Zacks, J.; Zagorski, R.; Zaitsev, F. S.; Zanino, R.; Zarins, A.; Zastrow, K. D.; Zerbini, M.; Zhang, W.; Zhou, Y.; Zilli, E.; Zoita, V.; Zoletnik, S.; Zychor, I.; JET Contributors
2017-10-01
The 2014-2016 JET results are reviewed in the light of their significance for optimising the ITER research plan for the active and non-active operation. More than 60 h of plasma operation with ITER first wall materials has successfully taken place since their installation in 2011. New multi-machine scaling of the type I-ELM divertor energy flux density to ITER is supported by first-principles modelling. ITER-relevant disruption experiments and first-principles modelling are reported with a set of three disruption mitigation valves mimicking the ITER setup. Insights into the L-H power threshold in deuterium and hydrogen are given, stressing the importance of the magnetic configuration and the recent measurements of fine-scale structures in the edge radial electric field. Dimensionless scans of the core and pedestal confinement provide new information to elucidate the importance of the first wall material on the fusion performance. H-mode plasmas at ITER triangularity (H = 1 at βN ~ 1.8 and n/nGW ~ 0.6) have been sustained at 2 MA for 5 s. The ITER neutronics codes have been validated on high performance experiments. Prospects for the coming D-T campaign and 14 MeV neutron calibration strategy are reviewed.
NASA Technical Reports Server (NTRS)
Kim, Won S.; Bejczy, Antal K.
1993-01-01
A highly effective predictive/preview display technique for telerobotic servicing in space under several seconds communication time delay has been demonstrated on a large laboratory scale in May 1993, involving the Jet Propulsion Laboratory as the simulated ground control station and, 2500 miles away, the Goddard Space Flight Center as the simulated satellite servicing set-up. The technique is based on a high-fidelity calibration procedure that enables a high-fidelity overlay of 3-D graphics robot arm and object models over given 2-D TV camera images of robot arm and objects. To generate robot arm motions, the operator can confidently interact in real time with the graphics models of the robot arm and objects overlaid on an actual camera view of the remote work site. The technique also enables the operator to generate high-fidelity synthetic TV camera views showing motion events that are hidden in a given TV camera view or for which no TV camera views are available. The positioning accuracy achieved by this technique for a zoomed-in camera setting was about +/-5 mm, well within the allowable +/-12 mm error margin at the insertion of a 45 cm long tool in the servicing task.
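The graphics-over-video overlay relies on a calibrated camera projection; the sketch below shows a generic pinhole projection with placeholder intrinsics and extrinsics, not the calibrated values of the JPL/GSFC setup.

```python
# Projecting a 3-D model point into a 2-D camera image with a calibrated
# pinhole model (intrinsics K, rotation R, translation t are placeholders).
import numpy as np

def project_point(p_world, K, R, t):
    """Pinhole projection of a 3-D point (metres) to pixel coordinates."""
    p_cam = R @ p_world + t                 # world -> camera frame
    uvw = K @ p_cam                         # camera frame -> image plane
    return uvw[:2] / uvw[2]                 # perspective divide

K = np.array([[800.0, 0.0, 320.0],          # fx, skew, cx  (placeholder)
              [0.0, 800.0, 240.0],          # fy, cy
              [0.0, 0.0, 1.0]])
R = np.eye(3)                               # placeholder camera orientation
t = np.array([0.0, 0.0, 2.0])               # camera 2 m from the origin

print(project_point(np.array([0.1, 0.05, 0.0]), K, R, t))
```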
DOE Office of Scientific and Technical Information (OSTI.GOV)
Herraiz, Joaquin Lopez
Experimental coincidence cross sections and the transverse-longitudinal asymmetry ATL have been obtained for the quasielastic (e,e'p) reaction in 16O, 12C, and 208Pb in constant q-ω kinematics in the missing momentum range -350 < pmiss < 350 MeV/c. In these experiments, performed in experimental Hall A of the Thomas Jefferson National Accelerator Facility (JLAB), the beam energy and the momentum and angle of the scattered electrons were kept fixed, while the angle between the proton momentum and the momentum transfer q was varied in order to map out the missing momentum distribution. The experimental cross section and ATL asymmetry have been compared with Monte Carlo simulations based on Distorted Wave Impulse Approximation (DWIA) calculations with both relativistic and non-relativistic spinor structure. The spectroscopic factors obtained for both models are in agreement with previous experimental values, while the ATL measurements favor the relativistic DWIA calculation. This thesis describes the details of the experimental setup, the calibration of the spectrometers, the techniques used in the data analysis to derive the final cross sections and the ATL, the ingredients of the theoretical calculations employed and the comparison of the results with the simulations based on these theoretical models.
How well do we know the incoming solar infrared radiation?
NASA Astrophysics Data System (ADS)
Elsey, Jonathan; Coleman, Marc; Gardiner, Tom; Shine, Keith
2017-04-01
The solar spectral irradiance (SSI) has been identified as a key climate variable by the Global Climate Observing System (Bojinski et al. 2014, Bull. Amer. Meteor. Soc.). It is of importance in the modelling of atmospheric radiative transfer, and the quantification of the global energy budget. However, in the near-infrared spectral region (between 2000-10000 cm-1) there exists a discrepancy of 7% between spectra measured from the space-based SOLSPEC instrument (Thuillier et al. 2015, Solar Physics) and those from a ground-based Langley technique (Bolseé et al. 2014, Solar Physics). This same difference is also present between different analyses of the SOLSPEC data. This work aims to reconcile some of these differences by presenting an estimate of the near-infrared SSI obtained from ground-based measurements taken using an absolutely calibrated Fourier transform spectrometer. Spectra are obtained both using the Langley technique and by direct comparison with a radiative transfer model, with appropriate handling of both aerosol scattering and molecular continuum absorption. Particular focus is dedicated to the quantification of uncertainty in these spectra, from both the inherent uncertainty in the measurement setup and that from the use of the radiative transfer code and its inputs.
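A worked sketch of the Langley regression mentioned above, on mock data: the logarithm of the measured signal is regressed against airmass, the intercept gives the extrapolated top-of-atmosphere signal, and the slope gives an effective optical depth. The numbers are placeholders, not the authors' measurements.

```python
# Langley technique: ln(V) = ln(V0) - tau * m, a straight-line fit in airmass m.
import numpy as np

airmass = np.array([1.2, 1.5, 2.0, 2.5, 3.0, 4.0])        # airmass m (mock)
signal = 5.0e-3 * np.exp(-0.12 * airmass)                  # detector units (mock)

slope, intercept = np.polyfit(airmass, np.log(signal), 1)
v0 = np.exp(intercept)          # extrapolated zero-airmass (TOA) signal
tau = -slope                    # effective optical depth at this wavenumber
print(f"V0 = {v0:.3e}, tau = {tau:.3f}")
```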
On Trial: the Compatibility of Measurement in the Physical and Social Sciences
NASA Astrophysics Data System (ADS)
Cano, S. J.; Vosk, T.; Pendrill, L. R.; Stenner, A. J.
2016-11-01
In this paper, we put social measurement on trial: providing two perspectives arguing why measurement in the social and in the physical sciences are incompatible and counter with two perspectives supporting compatibility. For the case ‘against’, we first argue that there is a lack of definition in the social sciences. Thus, while measurement in the physical sciences is supported by empirical evidence, calibrated instruments, and predictive theory that work together to test the quantitative nature of properties, measurement in the social sciences, in the main, rests on a vague, discretionary definition of measurement that places hardly any restrictions on empirical data, does not require calibrated instruments, and rarely articulates predictive theories. The second argument for the case ‘against’ introduces the problem associated with psychometrics, including different approaches, methodologies, criteria for success and failure, and considerations as to what counts as measurement. Making the first case ‘for’, we highlight practical principles for improved social measurement including units, laws, theory, and metrology. The second argument ‘for’ introduces the exemplar of the Lexile Framework for reading that exploits metrological principles and parallels the paths taken by, for example, thermometry. We conclude by proposing a way forward potentially applicable to both physical and social measurement, in which inferences are modelled in terms of a measurement system, where specifically the output of the instrument in response to probing the object (‘entity’) is a performance metric, i.e. how well the set-up performs the assessment.
NASA Astrophysics Data System (ADS)
Hayley, Kevin; Schumacher, J.; MacMillan, G. J.; Boutin, L. C.
2014-05-01
Expanding groundwater datasets collected by automated sensors, and improved groundwater databases, have caused a rapid increase in calibration data available for groundwater modeling projects. Improved methods of subsurface characterization have increased the need for model complexity to represent geological and hydrogeological interpretations. The larger calibration datasets and the need for meaningful predictive uncertainty analysis have both increased the degree of parameterization necessary during model calibration. Due to these competing demands, modern groundwater modeling efforts require a massive degree of parallelization in order to remain computationally tractable. A methodology for the calibration of highly parameterized, computationally expensive models using the Amazon EC2 cloud computing service is presented. The calibration of a regional-scale model of groundwater flow in Alberta, Canada, is provided as an example. The model covers a 30,865-km2 domain and includes 28 hydrostratigraphic units. Aquifer properties were calibrated to more than 1,500 static hydraulic head measurements and 10 years of measurements during industrial groundwater use. Three regionally extensive aquifers were parameterized (with spatially variable hydraulic conductivity fields), as was the areal recharge boundary condition, leading to 450 adjustable parameters in total. The PEST-based model calibration was parallelized on up to 250 computing nodes located on Amazon's EC2 servers.
Using the cloud to speed-up calibration of watershed-scale hydrologic models (Invited)
NASA Astrophysics Data System (ADS)
Goodall, J. L.; Ercan, M. B.; Castronova, A. M.; Humphrey, M.; Beekwilder, N.; Steele, J.; Kim, I.
2013-12-01
This research focuses on using the cloud to address computational challenges associated with hydrologic modeling. One example is calibration of a watershed-scale hydrologic model, which can take days of execution time on typical computers. While parallel algorithms for model calibration exist and some researchers have used multi-core computers or clusters to run these algorithms, these solutions do not fully address the challenge because (i) calibration can still be too time consuming even on multi-core personal computers and (ii) few in the community have the time and expertise needed to manage a compute cluster. Given this, another option for addressing this challenge that we are exploring through this work is the use of the cloud for speeding up calibration of watershed-scale hydrologic models. The cloud used in this capacity provides a means for renting a specific number and type of machines for only the time needed to perform a calibration run. The cloud allows one to precisely balance the duration of the calibration against the financial cost, so that, if the budget allows, the calibration can be performed more quickly by renting more machines. Focusing specifically on the SWAT hydrologic model and a parallel version of the DDS calibration algorithm, we show significant speed-up across a range of watershed sizes using up to 256 cores to perform a model calibration. The tool provides a simple web-based user interface and the ability to monitor job submission and progress during the calibration. Finally, the talk concludes with initial work to leverage the cloud for other tasks associated with hydrologic modeling, including preparing inputs for constructing place-based hydrologic models.
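The dynamically dimensioned search (DDS) algorithm named above is a simple greedy heuristic whose independent candidate evaluations are what make parallelization on rented cloud cores attractive. The sketch below is a generic serial version under assumed settings; it is not the parallel implementation or the SWAT coupling described in the talk.

```python
import numpy as np

def dds(objective, lo, hi, max_evals=500, r=0.2, seed=0):
    """Dynamically Dimensioned Search: greedy, one-solution-at-a-time global heuristic."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(lo, float), np.asarray(hi, float)
    x_best = lo + rng.random(lo.size) * (hi - lo)     # random initial parameter set
    f_best = objective(x_best)
    for i in range(1, max_evals):
        # Probability of perturbing each parameter decreases as the search matures.
        p = 1.0 - np.log(i) / np.log(max_evals)
        perturb = rng.random(lo.size) < p
        if not perturb.any():
            perturb[rng.integers(lo.size)] = True
        x_new = x_best.copy()
        step = rng.normal(0.0, r * (hi - lo))
        x_new[perturb] += step[perturb]
        # Reflect candidate values back inside the parameter bounds.
        x_new = np.where(x_new < lo, 2 * lo - x_new, x_new)
        x_new = np.where(x_new > hi, 2 * hi - x_new, x_new)
        x_new = np.clip(x_new, lo, hi)
        f_new = objective(x_new)                       # one (expensive) model run
        if f_new < f_best:                             # greedy acceptance
            x_best, f_best = x_new, f_new
    return x_best, f_best

# Toy objective standing in for a SWAT run scored against observations.
print(dds(lambda x: np.sum((x - 3.0) ** 2), lo=[0, 0, 0], hi=[10, 10, 10]))
```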
Eye gaze tracking using correlation filters
NASA Astrophysics Data System (ADS)
Karakaya, Mahmut; Bolme, David; Boehnen, Chris
2014-03-01
In this paper, we studied a method for eye gaze tracking that provides gaze estimation from a standard webcam with a zoom lens and reduces the setup and calibration requirements for new users. Specifically, we have developed a gaze estimation method based on the relative locations of points on the top of the eyelid and the eye corners. The gaze estimation method in this paper is based on the distances between the top point of the eyelid and the eye corners, as detected by the correlation filters. Advanced correlation filters were found to provide facial landmark detections that are accurate enough to determine the subject's gaze direction to within approximately 4-5 degrees, although calibration errors often produce a larger overall shift in the estimates. This is approximately a circle of diameter 2 inches for a screen that is at arm's length from the subject. At this accuracy it is possible to determine which regions of text or images the subject is looking at, but it falls short of being able to determine which word the subject has looked at.
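One common way to turn such eyelid/eye-corner distance features into a gaze estimate is a short per-user calibration in which the subject fixates known targets and a linear map is fitted by least squares. The sketch below illustrates that idea with assumed feature and variable names; it is not the authors' estimator.

```python
import numpy as np

def fit_gaze_map(features, targets):
    """Fit a linear map from landmark-distance features to gaze angles (degrees).

    features: (n_samples, n_features) distances, e.g. eyelid-top to each eye corner.
    targets:  (n_samples, 2) known (azimuth, elevation) of the calibration points.
    """
    X = np.hstack([features, np.ones((features.shape[0], 1))])  # add bias column
    W, *_ = np.linalg.lstsq(X, targets, rcond=None)
    return W

def predict_gaze(W, features):
    X = np.hstack([features, np.ones((features.shape[0], 1))])
    return X @ W

# Toy calibration: 9 fixation targets, 3 distance features per frame.
rng = np.random.default_rng(1)
feats = rng.random((9, 3))
true_W = rng.normal(size=(4, 2))
angles = np.hstack([feats, np.ones((9, 1))]) @ true_W
W = fit_gaze_map(feats, angles)
print(np.allclose(predict_gaze(W, feats), angles))   # True
```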
DOE Office of Scientific and Technical Information (OSTI.GOV)
Marsland, M. G.; Dehnel, M. P.; Theroux, J.
2013-04-19
D-Pace has developed a compact cost-effective gamma detector system based on technology licensed from TRIUMF. These photodiode detectors are convenient for detecting the presence of positron emitting radioisotopes, particularly for the case of transport of radioisotopes from a PET cyclotron to hotlab, or from one location to another in an automated radiochemistry processing unit. This paper describes recent calibration experiments undertaken at the Turku PET Centre for stationary and moving sources of F18 and C11 in standard setups. The practical diagnostic utility of using several of these devices to track the transport of radioisotopes from the cyclotron to hotlab is illustrated. For example, such a detector system provides: a semi-quantitative indication of total activity, speed of transport, location of any activity lost en route and effectiveness of follow-up system flushes, a means of identifying bolus break-up, and feedback useful for deciding when to change out tubing.
APEX/SPIN: a free test platform to measure speech intelligibility.
Francart, Tom; Hofmann, Michael; Vanthornhout, Jonas; Van Deun, Lieselot; van Wieringen, Astrid; Wouters, Jan
2017-02-01
Measuring speech intelligibility in quiet and noise is important in clinical practice and research. An easy-to-use free software platform for conducting speech tests is presented, called APEX/SPIN. The APEX/SPIN platform allows the use of any speech material in combination with any noise. A graphical user interface provides control over a large range of parameters, such as the number of loudspeakers, the signal-to-noise ratio and the parameters of the procedure. An easy-to-use graphical interface is provided for calibration and storage of calibration values. To validate the platform, perception of words in quiet and of sentences in noise was measured both with APEX/SPIN and with an audiometer and CD player, which is a conventional setup in current clinical practice. Five normal-hearing listeners participated in the experimental evaluation. Speech perception results were similar for the APEX/SPIN platform and the conventional procedures. APEX/SPIN is a freely available and open source platform that allows the administration of all kinds of custom speech perception tests and procedures.
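Speech-in-noise tests of this kind typically track the speech reception threshold with an adaptive up-down rule on the signal-to-noise ratio. The sketch below shows a generic 1-up/1-down staircase; the step size, scoring rule and threshold definition are illustrative assumptions rather than APEX/SPIN defaults.

```python
import random

def adaptive_srt(present_sentence, n_trials=20, start_snr=0.0, step_db=2.0):
    """Generic 1-up/1-down adaptive procedure on SNR.

    present_sentence(snr_db) must return True if the listener repeated the
    sentence correctly at that SNR, False otherwise.
    """
    snr = start_snr
    track = []
    for _ in range(n_trials):
        correct = present_sentence(snr)
        track.append(snr)
        snr += -step_db if correct else step_db   # harder after a hit, easier after a miss
    # Estimate the SRT as the mean SNR over the second half of the track.
    return sum(track[n_trials // 2:]) / (n_trials - n_trials // 2)

# Simulated listener with a true threshold of -6 dB SNR.
random.seed(0)
listener = lambda snr: random.random() < 1 / (1 + 10 ** (-(snr + 6.0) / 2.0))
print(round(adaptive_srt(listener, n_trials=40), 1))
```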
Wavelength-encoded optical psychrometer for relative humidity measurement
DOE Office of Scientific and Technical Information (OSTI.GOV)
Montanini, Roberto
2007-02-15
In this article an optical psychrometer, in which temperature measurements are performed by means of two fiber Bragg grating sensors used as dry-bulb and wet-bulb thermometers, is introduced. The adopted design combines the high accuracy of psychrometric-based relative humidity measurements with the acknowledged advantages of wavelength-encoded fiber optic sensing. Important metrological issues that have been addressed in the experimental work include calibration of the fiber Bragg grating temperature sensors and evaluation of response time, sensitivity, hysteresis, linearity, and accuracy. The calibration results give confidence that, with the current experimental setup, measurement of temperature can be done with an uncertainty of ±0.2 °C and a resolution of 0.1 °C. A detailed uncertainty analysis is also presented in the article to investigate the effects produced by different sources of error on the combined standard uncertainty u_c(U) of the relative humidity measurement, which has been estimated to be roughly within ±2% in the range close to saturation.
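For reference, relative humidity follows from the dry- and wet-bulb temperatures through a saturation vapour pressure curve and the psychrometric equation. The sketch below uses the Magnus approximation and a nominal ventilated-psychrometer coefficient; the coefficient value and function names are assumptions for illustration, not the calibration reported in the article.

```python
import math

def saturation_vapour_pressure_hpa(t_c):
    """Magnus approximation for saturation vapour pressure over water (hPa)."""
    return 6.112 * math.exp(17.62 * t_c / (243.12 + t_c))

def relative_humidity(t_dry_c, t_wet_c, pressure_hpa=1013.25, a=6.6e-4):
    """Psychrometric RH estimate: e = es(Twet) - A * P * (Tdry - Twet)."""
    e = saturation_vapour_pressure_hpa(t_wet_c) - a * pressure_hpa * (t_dry_c - t_wet_c)
    return 100.0 * e / saturation_vapour_pressure_hpa(t_dry_c)

print(round(relative_humidity(25.0, 20.0), 1))   # roughly 63 % RH
```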
NASA Astrophysics Data System (ADS)
Ganzha, V.; Ivshin, K.; Kammel, P.; Kravchenko, P.; Kravtsov, P.; Petitjean, C.; Trofimov, V.; Vasilyev, A.; Vorobyov, A.; Vznuzdaev, M.; Wauters, F.
2018-02-01
A series of muon experiments at the Paul Scherrer Institute in Switzerland deploy ultra-pure hydrogen active targets. A new gas impurity analysis technique was developed, based on conventional gas chromatography, with the capability to measure part-per-billion (ppb) traces of nitrogen and oxygen in hydrogen and deuterium. Key ingredients are a cryogenic admixture accumulation, a directly connected sampling system and a dedicated calibration setup. The dependence of the measured concentration on the sample volume was investigated, confirming that all impurities from the sample gas are collected in the accumulation column and measured with the gas chromatograph. The system was calibrated utilizing dynamic dilution of admixtures into the gas flow down to sub-ppb level concentrations. The total amount of impurities accumulated in the purification system during a three month long experimental run was measured and agreed well with the calculated amount based on the measured concentrations in the flow.
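Dynamic dilution of a calibrated admixture into a carrier flow gives the injected impurity concentration directly from flow ratios, and chaining two stages reaches sub-ppb levels. The helper below sketches that bookkeeping with assumed example flows; it does not reflect the actual calibration hardware settings.

```python
def diluted_concentration(c_standard_ppm, q_standard, q_diluent):
    """Concentration after mixing a standard flow into a pure diluent flow.

    Flows may be in any common unit (e.g. sccm); only their ratio matters.
    """
    return c_standard_ppm * q_standard / (q_standard + q_diluent)

# Two-stage dilution of a 10 ppm N2-in-H2 standard down to the sub-ppb range.
stage1 = diluted_concentration(10.0, q_standard=5.0, q_diluent=495.0)    # ~0.1 ppm
stage2 = diluted_concentration(stage1, q_standard=2.0, q_diluent=998.0)  # ~0.2 ppb
print(f"{stage1 * 1e3:.1f} ppb after stage 1, {stage2 * 1e3:.3f} ppb after stage 2")
```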
Bortolussi, Silva; Ciani, Laura; Postuma, Ian; Protti, Nicoletta; Luca Reversi; Bruschi, Piero; Ferrari, Cinzia; Cansolino, Laura; Panza, Luigi; Ristori, Sandra; Altieri, Saverio
2014-06-01
The possibility to measure boron concentration with high precision in tissues that will be irradiated represents a fundamental step for a safe and effective BNCT treatment. In Pavia, two techniques have been used for this purpose: a quantitative method based on charged particle spectrometry and boron biodistribution imaging based on neutron autoradiography. A quantitative method to determine boron concentration by neutron autoradiography has recently been set up and calibrated for the measurement of biological samples, both solid and liquid, in the framework of a BNCT feasibility study. This technique was calibrated and the obtained results were cross-checked against those of α spectrometry in order to validate them. The comparisons were performed using tissues taken from animals treated with different boron administration protocols. Subsequently the quantitative neutron autoradiography was employed to measure osteosarcoma cell samples treated with BPA and with new boronated formulations. © 2013 Published by Elsevier Ltd.
NASA Technical Reports Server (NTRS)
Giveona, Amir; Shaklan, Stuart; Kern, Brian; Noecker, Charley; Kendrick, Steve; Wallace, Kent
2012-01-01
In a setup similar to the self-coherent camera, we have added a set of pinholes in the diffraction ring of the Lyot plane in a high-contrast stellar Lyot coronagraph. We describe a novel complex electric field reconstruction from image plane intensity measurements consisting of light in the coronagraph's dark hole interfering with light from the pinholes. The image plane field is modified by letting light through one pinhole at a time. In addition to estimation of the field at the science camera, this method allows for self-calibration of the probes by letting light through the pinholes in various permutations while blocking the main Lyot opening. We present results of estimation and calibration from the High Contrast Imaging Testbed along with a comparison to the pair-wise deformable mirror diversity based estimation technique. Tests are carried out in narrow-band light and over a composite 10% bandpass.
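The reconstruction described above amounts to solving, pixel by pixel, for an unknown complex field from intensities recorded with different known reference (pinhole) fields added. A least-squares sketch of that inversion for a single pixel is shown below; treating the probe fields as perfectly known is an idealization of the self-calibration step.

```python
import numpy as np

def estimate_field(intensities, probes):
    """Estimate the complex field E at one pixel from |E + p_j|^2 measurements.

    I_j = |E|^2 + |p_j|^2 + 2*Re(E * conj(p_j)) is linear in (|E|^2, Re E, Im E),
    so three or more distinct probes p_j allow a least-squares solution.
    """
    probes = np.asarray(probes, complex)
    A = np.column_stack([np.ones(len(probes)), 2 * probes.real, 2 * probes.imag])
    b = np.asarray(intensities, float) - np.abs(probes) ** 2
    (_, e_re, e_im), *_ = np.linalg.lstsq(A, b, rcond=None)
    return e_re + 1j * e_im

# Synthetic test: one pixel, four pinhole probe fields of known amplitude and phase.
true_E = 0.3 - 0.7j
probes = [1.0, 1j, -1.0, 0.5 + 0.5j]
intensities = [abs(true_E + p) ** 2 for p in probes]
print(estimate_field(intensities, probes))   # ~ (0.3 - 0.7j)
```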
Investigations of the radio signal of inclined showers with LOPES
NASA Astrophysics Data System (ADS)
Saftoiu, A.; Apel, W. D.; Arteaga, J. C.; Asch, T.; Bähren, L.; Bekk, K.; Bertaina, M.; Biermann, P. L.; Blümer, J.; Bozdog, H.; Brancus, I. M.; Buchholz, P.; Buitink, S.; Cantoni, E.; Chiavassa, A.; Daumiller, K.; de Souza, V.; Doll, P.; Engel, R.; Falcke, H.; Finger, M.; Fuhrmann, D.; Gemmeke, H.; Grupen, C.; Haungs, A.; Heck, D.; Hörandel, J. R.; Horneffer, A.; Huber, D.; Huege, T.; Isar, P. G.; Kampert, K.-H.; Kang, D.; Krömer, O.; Kuijpers, J.; Lafebre, S.; Link, K.; Łuczak, P.; Ludwig, M.; Mathes, H. J.; Melissas, M.; Morello, C.; Nehls, S.; Oehlschläger, J.; Palmieri, N.; Pierog, T.; Rautenberg, J.; Rebel, H.; Roth, M.; Rühle, C.; Schieler, H.; Schmidt, A.; Schröder, F. G.; Sima, O.; Toma, G.; Trinchero, G. C.; Weindl, A.; Wochele, J.; Wommer, M.; Zabierowski, J.; Zensus, J. A.
2012-01-01
We report in this paper on an analysis of 20 months of data taken with LOPES. LOPES is a radio antenna array set up in coincidence with the Grande array, both located at the Karlsruhe Institute of Technology, Germany. The data used in this analysis were taken with an antenna configuration composed of 30 inverted-V-shape dipole antennas. We have restricted the analysis to a special selection of inclined showers, with zenith angle θ>40∘. These inclined showers are of particular interest because they are the events with the largest geomagnetic angles and are therefore suitable for testing emission models based on geomagnetic effects. The reconstruction procedure of the radio signal emitted in EAS uses as one ingredient the frequency-dependent antenna gain pattern, which is obtained from simulations. Effects of the applied antenna model on the calibration procedure of LOPES are studied. In particular, we have focused on one component of the antenna, a metal pedestal, which generates a resonance effect: a peak in the amplification pattern that most strongly affects high zenith angles, i.e. inclined showers. In addition, polarization characteristics of inclined showers were studied in detail and compared with the features of more vertical showers for the two antenna models, with and without the pedestal.
NASA Technical Reports Server (NTRS)
Humphreys, Brad; Bellisario, Brian; Gallo, Christopher; Thompson, William K.; Lewandowski, Beth
2016-01-01
Long-duration space travel to Mars or to an asteroid will expose astronauts to extended periods of reduced gravity. Since gravity is not present to aid loading, astronauts will use resistive and aerobic exercise regimes for the duration of the space flight to minimize the loss of bone density, muscle mass and aerobic capacity that occurs during exposure to a reduced gravity environment. Unlike the International Space Station (ISS), the area available for an exercise device in the next generation of spacecraft is limited. Therefore, compact resistance exercise device prototypes are being developed. The NASA Digital Astronaut Project (DAP) is supporting the Advanced Exercise Concepts (AEC) Project, the Exercise Physiology and Countermeasures (ExPC) project and National Space Biomedical Research Institute (NSBRI) funded researchers by developing computational models of exercising with these new advanced exercise device concepts. To perform validation of these models and to support the Advanced Exercise Concepts Project, several candidate devices have been flown onboard NASA's Reduced Gravity Aircraft. In terrestrial laboratories, researchers typically have motion capture systems available for the measurement of subject kinematics. Onboard the parabolic flight aircraft it is not practical to utilize traditional motion capture systems due to the large working volume they require and their relatively high replacement cost if damaged. To support measuring kinematics on board parabolic aircraft, a motion capture system is being developed utilizing open source computer vision code with commercial off-the-shelf (COTS) video camera hardware. While the system's accuracy is lower than that of lab setups, it provides a means to produce quantitative, comparable motion capture kinematic data. Additionally, data such as the exercise volume required for small spaces such as the Orion capsule can be determined. METHODS: OpenCV is an open source computer vision library that provides the ability to perform multi-camera 3-dimensional reconstruction. Utilizing OpenCV, via the Python programming language, a set of tools has been developed to perform motion capture in confined spaces using commercial cameras. Four Sony video cameras were intrinsically calibrated prior to flight. Intrinsic calibration provides a set of camera-specific parameters to remove geometric distortion of the lens and sensor (specific to each individual camera). A set of high-contrast markers was placed on the exercising subject (safety also necessitated that they be soft in case they became detached during parabolic flight); small yarn balls were used. Extrinsic calibration, the determination of camera location and orientation parameters, is performed using fixed landmark markers shared by the camera scenes. Additionally, a wand calibration, in which a wand is swept through the camera scenes simultaneously, was also performed. Techniques have been developed to perform intrinsic calibration, extrinsic calibration, isolation of the markers in the scene, calculation of marker 2D centroids, and 3D reconstruction from multiple cameras. These methods have been tested in the laboratory in a side-by-side comparison with a traditional motion capture system and also on a parabolic flight.
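As a minimal illustration of the OpenCV workflow outlined above (intrinsic calibration followed by multi-camera 3D reconstruction), the sketch below calibrates one camera from synthetic checkerboard detections and triangulates a marker seen by two cameras. The board size, poses and camera parameters are assumptions for illustration; this is not the flight software.

```python
import numpy as np
import cv2

# --- Intrinsic calibration from synthetic checkerboard detections ------------
# A 9x6 checkerboard with 25 mm squares, "seen" from a few known poses.
board = np.zeros((9 * 6, 3), np.float32)
board[:, :2] = 25.0 * np.mgrid[0:9, 0:6].T.reshape(-1, 2)
K_true = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
dist_true = np.zeros(5)

obj_pts, img_pts = [], []
for rx, ry, tz in [(-0.2, 0.1, 600), (0.0, -0.2, 500), (0.25, 0.15, 550), (0.1, 0.3, 700)]:
    rvec = np.array([rx, ry, 0.0])
    tvec = np.array([-100.0, -60.0, float(tz)])
    proj, _ = cv2.projectPoints(board, rvec, tvec, K_true, dist_true)
    obj_pts.append(board)
    img_pts.append(proj.astype(np.float32))

rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(obj_pts, img_pts, (640, 480), None, None)
print("reprojection RMS:", round(rms, 4))

# --- Two-view triangulation of a marker centroid ------------------------------
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])            # camera 1 at the origin
R2, _ = cv2.Rodrigues(np.array([0.0, 0.3, 0.0]))             # camera 2 rotated and translated
P2 = K @ np.hstack([R2, np.array([[-200.0], [0.0], [0.0]])])
X_true = np.array([[50.0], [30.0], [900.0], [1.0]])
x1 = P1 @ X_true
x2 = P2 @ X_true
X_h = cv2.triangulatePoints(P1, P2, x1[:2] / x1[2], x2[:2] / x2[2])
print((X_h[:3] / X_h[3]).ravel())                            # ~ [50, 30, 900]
```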
Bhandari, Ammar B; Nelson, Nathan O; Sweeney, Daniel W; Baffaut, Claire; Lory, John A; Senaviratne, Anomaa; Pierzynski, Gary M; Janssen, Keith A; Barnes, Philip L
2017-11-01
Process-based computer models have been proposed as a tool to generate data for phosphorus (P) Index assessment and development. Although models are commonly used to simulate P loss from agriculture using managements that are different from the calibration data, this use of models has not been fully tested. The objective of this study is to determine whether the Agricultural Policy Environmental eXtender (APEX) model can accurately simulate runoff, sediment, total P, and dissolved P loss from 0.4- to 1.5-ha agricultural fields with managements that are different from the calibration data. The APEX model was calibrated with field-scale data from eight different managements at two locations (management-specific models). The calibrated models were then validated, either with the same management used for calibration or with different managements. Location models were also developed by calibrating APEX with data from all managements. The management-specific models resulted in satisfactory performance when used to simulate runoff, total P, and dissolved P within their respective systems, with R2 > 0.50, Nash-Sutcliffe efficiency > 0.30, and percent bias within ±35% for runoff and ±70% for total and dissolved P. When applied outside the calibration management, the management-specific models only met the minimum performance criteria in one-third of the tests. The location models had better model performance when applied across all managements compared with the management-specific models. Our results suggest that models should only be applied within the managements used for calibration and that data from multiple management systems be included in calibration when using models to assess management effects on P loss or evaluate P Indices. Copyright © by the American Society of Agronomy, Crop Science Society of America, and Soil Science Society of America, Inc.
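The performance criteria quoted above are standard goodness-of-fit statistics. The sketch below computes the Nash-Sutcliffe efficiency and percent bias for a simulated-versus-observed series; it is a generic illustration with made-up example numbers, not the APEX evaluation code.

```python
import numpy as np

def nash_sutcliffe(obs, sim):
    """NSE = 1 - SSE / variance of observations (1 is perfect, <0 is worse than the mean)."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def percent_bias(obs, sim):
    """PBIAS (%); positive values indicate underestimation with this sign convention."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 100.0 * np.sum(obs - sim) / np.sum(obs)

obs = [12.0, 3.5, 0.8, 22.1, 9.4, 1.2]     # e.g. event runoff depths (mm)
sim = [10.5, 4.0, 1.1, 19.8, 8.7, 2.0]
print(round(nash_sutcliffe(obs, sim), 2), round(percent_bias(obs, sim), 1))
```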
Quantifying Particle Numbers and Mass Flux in Drifting Snow
NASA Astrophysics Data System (ADS)
Crivelli, Philip; Paterna, Enrico; Horender, Stefan; Lehning, Michael
2016-12-01
We compare two of the most common methods of quantifying mass flux, particle numbers and particle-size distribution for drifting snow events: the snow-particle counter (SPC), a laser-diode-based particle detector, and particle tracking velocimetry based on digital shadowgraphic imaging. The two methods were correlated for mass flux and particle number flux. For the SPC measurements, the device was calibrated by the manufacturer beforehand. The shadowgraphic imaging method measures particle size and velocity directly from consecutive images, and before each new test the image pixel length is newly calibrated. A calibration study with artificially scattered sand particles and glass beads provides suitable settings for the shadowgraphic imaging as well as a first correlation of the two methods in a controlled environment. In addition, using snow collected in trays during snowfall, several experiments were performed to observe drifting snow events in a cold wind tunnel. The results demonstrate a high correlation between the mass fluxes obtained in the calibration studies (r ≥ 0.93) and good correlation for the drifting snow experiments (r ≥ 0.81). The impact of measurement settings is discussed in order to reliably quantify particle numbers and mass flux in drifting snow. The study was designed and performed to optimize the settings of the digital shadowgraphic imaging system for both the acquisition and the processing of particles in a drifting snow event. Our results suggest that these optimal settings can be transferred to different imaging set-ups to investigate sediment transport processes.
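From shadowgraphic imaging, number and mass fluxes follow directly from the per-particle diameters and streamwise velocities detected in the imaged volume. The sketch below shows that bookkeeping for spherical ice particles; the ice density, measurement-volume dimensions and particle values are assumed examples, not settings from the study.

```python
import numpy as np

RHO_ICE = 917.0          # kg m^-3, assumed density of solid ice spheres

def fluxes(diam_m, vel_ms, volume_m3):
    """Number flux (m^-2 s^-1) and mass flux (kg m^-2 s^-1) from detected particles.

    Each particle in the imaged volume V contributes u_i / V to the number flux
    and m_i * u_i / V to the mass flux through a plane normal to the wind.
    """
    diam_m, vel_ms = np.asarray(diam_m, float), np.asarray(vel_ms, float)
    mass = RHO_ICE * np.pi / 6.0 * diam_m ** 3
    return np.sum(vel_ms) / volume_m3, np.sum(mass * vel_ms) / volume_m3

# Example: a 10 mm x 10 mm image with 2 mm depth of field, three detected grains.
volume = 0.010 * 0.010 * 0.002
n_flux, m_flux = fluxes([150e-6, 220e-6, 90e-6], [3.1, 2.4, 4.0], volume)
print(f"{n_flux:.2e} particles m-2 s-1, {m_flux:.3e} kg m-2 s-1")
```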
Mwashote, B.M.; Burnett, W.C.; Chanton, J.; Santos, I.R.; Dimova, N.; Swarzenski, P.W.
2010-01-01
Submarine groundwater discharge (SGD) assessments were conducted both in the laboratory and at a field site in the northeastern Gulf of Mexico, using a continuous heat-type automated seepage meter (seepmeter). The functioning of the seepmeter is based on measurements of a temperature gradient in the water between downstream and upstream positions in its flow pipe. The device has the potential of providing long-term, high-resolution measurements of SGD. Using a simple inexpensive laboratory set-up, we have shown that connecting an extension cable to the seepmeter has a negligible effect on its measuring capability. Similarly, the observed influence of very low temperature (~3 °C) on seepmeter measurements can be accounted for by conducting calibrations at such temperatures prior to field deployments. Compared to manual volumetric measurements, calibration experiments showed that at higher water flow rates (>28 cm day-1 or cm3 cm-2 day-1) an analog flowmeter overestimated flow rates by ~7%. This was apparently due to flow resistance, turbulence and the formation of air bubbles in the seepmeter water flow tubes. Salinity had no significant effect on the performance of the seepmeter. Calibration results from fresh water and sea water showed close agreement, at the 95% confidence level, between the data sets from the two media (R2 = 0.98). The seepmeter SGD measurements provided data that are comparable to those from manually operated seepage meters, the radon geochemical tracer approach, and an electromagnetic (EM) seepage meter. © 2009 Elsevier Ltd.
Modeling and Model Identification of Autonomous Underwater Vehicles
2015-06-01
setup, based on a quadrifilar pendulum, is developed to measure the moments of inertia of the vehicle. System identification techniques, based on ... parametric models of the platforms: an individual channel excitation approach and a free decay pendulum test. The former is applied to THAUS, which can ... excite the system in individual channels in four degrees of freedom. These results are verified in the free decay pendulum setup, which has the
NASA Technical Reports Server (NTRS)
Cramer, Christopher J.; Wright, James D.; Simmons, Scott A.; Bobbitt, Lynn E.; DeMoss, Joshua A.
2015-01-01
The paper will present a brief background of the previous data acquisition system at the National Transonic Facility (NTF) and the reasoning and goals behind the upgrade to the current Test SLATE (Test Software Laboratory and Automated Testing Environments) data acquisition system. The components, performance characteristics, and layout of the Test SLATE system within the NTF control room will be discussed. The development, testing, and integration of Test SLATE within NTF operations will be detailed. The operational capabilities of the system will be outlined including: test setup, instrumentation calibration, automatic test sequencer setup, data recording, communication between data and facility control systems, real time display monitoring, and data reduction. The current operational status of the Test SLATE system and its performance during recent NTF testing will be highlighted including high-speed, frame-by-frame data acquisition with conditional sampling post-processing applied. The paper concludes with current development work on the system including the capability for real-time conditional sampling during data acquisition and further efficiency enhancements to the wind tunnel testing process.
Readiness of the ATLAS detector: Performance with the first beam and cosmic data
NASA Astrophysics Data System (ADS)
Pastore, F.
2010-05-01
During 2008 the ATLAS experiment went through an intense period of preparation to have the detector fully commissioned for the first beam period. In about 30 h of beam time available to ATLAS in 2008 the systems went through a rapid setup sequence, from successfully recording the first bunch ever reaching ATLAS, to setting up the timing of the trigger system synchronous to the incoming single beams. The so-called splash events were recorded, where the beam was stopped on a collimator 140 m upstream of ATLAS, showering the experiment with millions of particles per beam shot. These events were found to be extremely useful for timing setup. After the stop of the beam operation, the experiment went through an extensive cosmic ray data taking campaign, recording more than 500 million cosmic ray events. These events have been used to make significant progress on the calibration and alignment of the detector. This paper describes the commissioning programme and the results obtained from both the single beam data and the cosmic data recorded in 2008.
W-Band Free Space Permittivity Measurement Setup for Candidate Radome Materials
NASA Technical Reports Server (NTRS)
Fralick, Dion T.
1997-01-01
This paper presents a measurement system used for W-band complex permittivity measurements performed in NASA Langley Research Center's Electromagnetics Research Branch. The system was used to characterize candidate radome materials for the passive millimeter wave (PMMW) camera experiment. The PMMW camera is a new technology sensor, with goals of all-weather landings of civilian and military aircraft. The sensor is being developed under a NASA Technology Reinvestment program with TRW, McDonnell-Douglas, Honeywell, and Composite Optics, Inc. as participants. The experiment is scheduled to be flight tested on the Air Force's 'Speckled Trout' aircraft in late 1997. The camera operates at W-band, in a radiometric capacity, and generates an image of the viewable field. Because the camera is a radiometer, the system is very sensitive to losses. Minimal transmission loss through the radome at the operating frequency, 89 GHz, was critical to the success of the experiment. This paper details the design, set-up, calibration and operation of a free space measurement system developed and used to characterize the candidate radome materials for this program.
On-field mounting position estimation of a lidar sensor
NASA Astrophysics Data System (ADS)
Khan, Owes; Bergelt, René; Hardt, Wolfram
2017-10-01
In order to retrieve a highly accurate view of their environment, autonomous cars are often equipped with LiDAR sensors. These sensors deliver a three-dimensional point cloud in their own coordinate frame, where the origin is the sensor itself. However, the common coordinate system required by HAD (Highly Autonomous Driving) software systems has its origin at the center of the vehicle's rear axle. Thus, a transformation of the acquired point clouds to car coordinates is necessary, which in turn requires the determination of the exact mounting position of the LiDAR system in car coordinates. Unfortunately, directly measuring this position is a time-consuming and error-prone task. Therefore, different approaches have been suggested for its estimation, which mostly require an exhaustive test setup and are again time-consuming to prepare. When preparing a high number of LiDAR-mounted test vehicles for data acquisition, most approaches fall short due to time or money constraints. In this paper we propose an approach for mounting position estimation which features an easy execution and setup, thus making it feasible for on-field calibration.
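Once the mounting position and orientation have been estimated, transforming a LiDAR point cloud into the vehicle frame is a single rigid-body transform. The sketch below applies such a transform with numpy; the yaw-pitch-roll convention and the example mounting values are assumptions for illustration.

```python
import numpy as np

def rotation_zyx(yaw, pitch, roll):
    """Rotation matrix from intrinsic Z-Y-X (yaw-pitch-roll) angles in radians."""
    cy, sy = np.cos(yaw), np.sin(yaw)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cr, sr = np.cos(roll), np.sin(roll)
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    return Rz @ Ry @ Rx

def sensor_to_vehicle(points_sensor, mount_xyz, mount_ypr):
    """Transform an (N, 3) point cloud from LiDAR coordinates to car coordinates
    (origin at the centre of the rear axle)."""
    R = rotation_zyx(*mount_ypr)
    return points_sensor @ R.T + np.asarray(mount_xyz)

# Assumed mounting estimate: 1.5 m ahead of the rear axle, 1.8 m up, 2 deg yaw.
cloud = np.array([[10.0, 0.5, -1.6], [5.0, -2.0, -1.7]])
print(sensor_to_vehicle(cloud, mount_xyz=[1.5, 0.0, 1.8],
                        mount_ypr=np.radians([2.0, 0.0, 0.0])))
```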
NASA Astrophysics Data System (ADS)
Kredzinski, Lukasz; Connelly, Michael J.
2011-06-01
Optical Coherence Tomography (OCT) is a promising non-invasive imaging technology capable of producing 3D high-resolution cross-sectional images of the internal microstructure of an examined material. However, almost all of these systems are expensive, requiring the use of complex optical setups, expensive light sources and complicated scanning of the sample under test. In addition, most of these systems have not taken advantage of the competitively priced optical components available at wavelengths within the main optical communications band located in the 1550 nm region. A comparatively simple and inexpensive full-field OCT system (FF-OCT), based on a superluminescent diode (SLD) light source and an anti-Stokes imaging device, was constructed to perform 3D cross-sectional imaging. This kind of inexpensive setup with moderate resolution could be easily applicable in low-level biomedical and industrial diagnostics. This paper covers calibration of the system and determines its suitability for imaging structures of biological tissues such as teeth, which have low absorption at 1550 nm.
Accounting for optical errors in microtensiometry.
Hinton, Zachary R; Alvarez, Nicolas J
2018-09-15
Drop shape analysis (DSA) techniques measure interfacial tension subject to error in image analysis and the optical system. While considerable efforts have been made to minimize image analysis errors, very little work has treated optical errors. There are two main sources of error when considering the optical system: the angle of misalignment and the choice of focal plane. Due to the convoluted nature of these sources, small angles of misalignment can lead to large errors in measured curvature. We demonstrate using microtensiometry the contributions of these sources to measured errors in radius and, more importantly, deconvolute the effects of misalignment and focal plane. Our findings are expected to have broad implications on all optical techniques measuring interfacial curvature. A geometric model is developed to analytically determine the contributions of misalignment angle and choice of focal plane to measurement error for spherical cap interfaces. This work utilizes a microtensiometer to validate the geometric model and to quantify the effect of both sources of error. For the case of a microtensiometer, an empirical calibration is demonstrated that corrects for optical errors and drastically simplifies implementation. The combination of geometric modeling and experimental results reveals a convoluted relationship between the true and measured interfacial radius as a function of the misalignment angle and choice of focal plane. The validated geometric model produces a full operating window that is strongly dependent on the capillary radius and spherical cap height. In all cases, the contribution of optical errors is minimized when the height of the spherical cap is equivalent to the capillary radius, i.e. a hemispherical interface. The understanding of these errors allows for correct measurement of interfacial curvature and interfacial tension regardless of experimental setup. For the case of microtensiometry, this greatly decreases the time for experimental setup and increases experimental accuracy. In a broad sense, this work outlines the importance of optical errors in all DSA techniques. More specifically, these results have important implications for all microscale and microfluidic measurements of interface curvature. Copyright © 2018 Elsevier Inc. All rights reserved.
The Calibration System of the E989 Experiment at Fermilab
DOE Office of Scientific and Technical Information (OSTI.GOV)
Anastasi, Antonio
The muon anomaly aµ is one of the most precisely known quantities in physics, both experimentally and theoretically. This high level of accuracy permits the measurement of aµ to be used as a test of the Standard Model by comparison with the theoretical calculation. After the impressive result obtained at Brookhaven National Laboratory in 2001 with a total accuracy of 0.54 ppm, a new experiment, E989, is under construction at Fermilab, motivated by the difference aµ(exp) - aµ(SM) of about 3σ. The purpose of the E989 experiment is a fourfold reduction of the error, with a goal of 0.14 ppm, improving both the systematic and statistical uncertainty. With the use of the Fermilab beam complex, a statistics 21 times that of BNL will be reached in almost 2 years of data taking, improving the statistical uncertainty to 0.1 ppm. Improvement of the systematic error involves the measurement techniques for ωa and ωp, the anomalous precession frequency of the muon and the Larmor precession frequency of the proton, respectively. The measurement of ωp involves the magnetic field measurement, and improvements in this sector, related to the uniformity of the field, should reduce the systematic uncertainty with respect to BNL from 170 ppb to 70 ppb. A reduction from 180 ppb to 70 ppb is also required for the measurement of ωa; a new DAQ, faster electronics, and new detectors and calibration system will be implemented with respect to E821 to reach this goal. In particular, the laser calibration system will reduce the systematic error due to gain fluctuations of the photodetectors from 0.12 to 0.02 ppm. The 0.02 ppm limit on the systematic error requires a system with a stability of 10^-4 on a short time scale (700 µs), while on longer time scales the required stability is at the percent level. The required 10^-4 stability level is almost an order of magnitude better than that of existing laser calibration systems in particle physics, making the calibration system a very challenging item. In addition to the high level of stability, the particular environment (the presence of a 14 m diameter storage ring, a highly uniform magnetic field, and the detector distribution around the storage ring) sets specific guidelines and constraints. This thesis focuses on the final design of the Laser Calibration System developed for the E989 experiment. Chapter 1 introduces the subject of the anomalous magnetic moment of the muon; chapter 2 presents previous measurements of g-2, while chapter 3 discusses the Standard Model prediction and possible new physics scenarios. Chapter 4 describes the E989 experiment: the experimental technique and the experimental apparatus, focusing on the improvements necessary to reduce the statistical and systematic errors. The main item of the thesis is discussed in the last two chapters: chapter 5 is focused on the Laser Calibration System, while chapter 6 describes the test beam performed at the Beam Test Facility of Laboratori Nazionali di Frascati from the 29th of February to the 7th of March as a final test for the full calibration system. An introduction explains the physics motivation of the system and the different devices implemented. In the final chapter the setup used is described and some of the results obtained are presented.
Agogo, George O.; van der Voet, Hilko; Veer, Pieter van’t; Ferrari, Pietro; Leenders, Max; Muller, David C.; Sánchez-Cantalejo, Emilio; Bamia, Christina; Braaten, Tonje; Knüppel, Sven; Johansson, Ingegerd; van Eeuwijk, Fred A.; Boshuizen, Hendriek
2014-01-01
In epidemiologic studies, measurement error in dietary variables often attenuates the association between dietary intake and disease occurrence. To adjust for the attenuation caused by error in dietary intake, regression calibration is commonly used. To apply regression calibration, unbiased reference measurements are required. Short-term reference measurements for foods that are not consumed daily contain excess zeroes that pose challenges in the calibration model. We adapted the two-part regression calibration model, initially developed for multiple replicates of reference measurements per individual, to a single-replicate setting. We showed how to handle excess-zero reference measurements with a two-step modeling approach, how to explore heteroscedasticity in the consumed amount with a variance-mean graph, how to explore nonlinearity with generalized additive modeling (GAM) and the empirical logit approach, and how to select covariates in the calibration model. The performance of the two-part calibration model was compared with its one-part counterpart. We used vegetable intake and mortality data from the European Prospective Investigation into Cancer and Nutrition (EPIC) study. In EPIC, reference measurements were taken with 24-hour recalls. For each of the three vegetable subgroups assessed separately, correcting for error with an appropriately specified two-part calibration model resulted in about a threefold increase in the strength of association with all-cause mortality, as measured by the log hazard ratio. We further found that the standard way of including covariates in the calibration model can lead to overfitting of the two-part calibration model. Moreover, the extent of adjusting for error is influenced by the number and forms of covariates in the calibration model. For episodically consumed foods, we advise researchers to pay special attention to the response distribution, nonlinearity, and covariate inclusion in specifying the calibration model. PMID:25402487
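As a schematic of the two-part idea, a calibrated intake for an episodically consumed food can be formed as the product of a predicted probability of consumption and a predicted consumption-day amount, both conditioned on the self-reported intake and covariates. The scikit-learn sketch below is only an illustration of that structure; the model forms, variable names and log transform are assumptions, not the EPIC specification.

```python
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

def two_part_calibration(ffq, covariates, recall):
    """Predict calibrated intake E[intake | FFQ, covariates] from 24-h recall references.

    Part 1 models the probability of any consumption on a recall day;
    part 2 models the (log) consumed amount on days with non-zero consumption.
    """
    X = np.column_stack([ffq, covariates])
    consumed = recall > 0
    p_model = LogisticRegression(max_iter=1000).fit(X, consumed)
    a_model = LinearRegression().fit(X[consumed], np.log(recall[consumed]))
    prob = p_model.predict_proba(X)[:, 1]
    amount = np.exp(a_model.predict(X))          # crude back-transform (ignores smearing)
    return prob * amount

# Synthetic example: FFQ-reported vegetable intake, age as covariate, recall with excess zeros.
rng = np.random.default_rng(0)
n = 500
ffq = rng.gamma(2.0, 50.0, n)
age = rng.uniform(35, 70, n)
true = 0.6 * ffq + 0.3 * age + rng.normal(0, 10, n)
recall = np.where(rng.random(n) < 0.35, 0.0, np.clip(true + rng.normal(0, 20, n), 1, None))
print(two_part_calibration(ffq, age, recall)[:5].round(1))
```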
NASA Astrophysics Data System (ADS)
Jackson-Blake, Leah; Helliwell, Rachel
2015-04-01
Process-based catchment water quality models are increasingly used as tools to inform land management. However, for such models to be reliable they need to be well calibrated and shown to reproduce key catchment processes. Calibration can be challenging for process-based models, which tend to be complex and highly parameterised. Calibrating a large number of parameters generally requires a large amount of monitoring data, spanning all hydrochemical conditions. However, regulatory agencies and research organisations generally only sample at a fortnightly or monthly frequency, even in well-studied catchments, often missing peak flow events. The primary aim of this study was therefore to investigate how the quality and uncertainty of model simulations produced by a process-based, semi-distributed catchment model, INCA-P (the INtegrated CAtchment model of Phosphorus dynamics), were improved by calibration to higher frequency water chemistry data. Two model calibrations were carried out for a small rural Scottish catchment: one using 18 months of daily total dissolved phosphorus (TDP) concentration data, another using a fortnightly dataset derived from the daily data. To aid comparability, calibrations were carried out automatically using the Markov Chain Monte Carlo - DiffeRential Evolution Adaptive Metropolis (MCMC-DREAM) algorithm. Calibration to daily data resulted in improved simulation of peak TDP concentrations and improved model performance statistics. Parameter-related uncertainty in simulated TDP was large when fortnightly data was used for calibration, with a 95% credible interval of 26 μg/l. This uncertainty is comparable in size to the difference between Water Framework Directive (WFD) chemical status classes, and would therefore make it difficult to use this calibration to predict shifts in WFD status. The 95% credible interval reduced markedly with the higher frequency monitoring data, to 6 μg/l. The number of parameters that could be reliably auto-calibrated was lower for the fortnightly data, with a physically unrealistic TDP simulation being produced when too many parameters were allowed to vary during model calibration. Parameters should not therefore be varied spatially for models such as INCA-P unless there is solid evidence that this is appropriate, or there is a real need to do so for the model to fulfil its purpose. This study highlights the potential pitfalls of using low frequency timeseries of observed water quality to calibrate complex process-based models. For reliable model calibrations to be produced, monitoring programmes need to be designed which capture system variability, in particular nutrient dynamics during high flow events. In addition, there is a need for simpler models, so that all model parameters can be included in auto-calibration and uncertainty analysis, and to reduce the data needs during calibration.
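The auto-calibration described above samples the parameter posterior with an MCMC scheme and reports credible intervals on the simulated concentrations. The sketch below is a plain random-walk Metropolis sampler applied to a toy linear model, intended only to show the mechanics; it is not the DREAM algorithm and is not coupled to INCA-P.

```python
import numpy as np

def metropolis(log_post, x0, n_steps=20000, step=0.1, seed=0):
    """Random-walk Metropolis sampler over the model parameters."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, float)
    lp = log_post(x)
    chain = np.empty((n_steps, x.size))
    for i in range(n_steps):
        prop = x + rng.normal(0.0, step, x.size)
        lp_prop = log_post(prop)
        if np.log(rng.random()) < lp_prop - lp:     # accept with probability min(1, ratio)
            x, lp = prop, lp_prop
        chain[i] = x
    return chain

# Toy "model": TDP concentration = a + b * flow, calibrated to noisy observations.
rng = np.random.default_rng(1)
flow = rng.random(50)
obs = 20.0 + 15.0 * flow + rng.normal(0, 3.0, 50)

def log_post(theta):
    a, b = theta
    resid = obs - (a + b * flow)
    return -0.5 * np.sum(resid ** 2) / 3.0 ** 2     # flat priors, Gaussian likelihood

chain = metropolis(log_post, x0=[10.0, 5.0], step=0.5)
print(np.percentile(chain[5000:], [2.5, 50, 97.5], axis=0).round(2))  # 95% credible intervals
```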
NASA Astrophysics Data System (ADS)
Jentschel, M.; Blanc, A.; de France, G.; Köster, U.; Leoni, S.; Mutti, P.; Simpson, G.; Soldner, T.; Ur, C.; Urban, W.; Ahmed, S.; Astier, A.; Augey, L.; Back, T.; Baczyk, P.; Bajoga, A.; Balabanski, D.; Belgya, T.; Benzoni, G.; Bernards, C.; Biswas, D. C.; Bocchi, G.; Bottoni, S.; Britton, R.; Bruyneel, B.; Burnett, J.; Cakirli, R. B.; Carroll, R.; Catford, W.; Cederwall, B.; Celikovic, I.; Cieplicka-Oryńczak, N.; Clement, E.; Cooper, N.; Crespi, F.; Csatlos, M.; Curien, D.; Czerwiński, M.; Danu, L. S.; Davies, A.; Didierjean, F.; Drouet, F.; Duchêne, G.; Ducoin, C.; Eberhardt, K.; Erturk, S.; Fraile, L. M.; Gottardo, A.; Grente, L.; Grocutt, L.; Guerrero, C.; Guinet, D.; Hartig, A.-L.; Henrich, C.; Ignatov, A.; Ilieva, S.; Ivanova, D.; John, B. V.; John, R.; Jolie, J.; Kisyov, S.; Krticka, M.; Konstantinopoulos, T.; Korgul, A.; Krasznahorkay, A.; Kröll, T.; Kurpeta, J.; Kuti, I.; Lalkovski, S.; Larijani, C.; Leguillon, R.; Lica, R.; Litaize, O.; Lozeva, R.; Magron, C.; Mancuso, C.; Ruiz Martinez, E.; Massarczyk, R.; Mazzocchi, C.; Melon, B.; Mengoni, D.; Michelagnoli, C.; Million, B.; Mokry, C.; Mukhopadhyay, S.; Mulholland, K.; Nannini, A.; Napoli, D. R.; Olaizola, B.; Orlandi, R.; Patel, Z.; Paziy, V.; Petrache, C.; Pfeiffer, M.; Pietralla, N.; Podolyak, Z.; Ramdhane, M.; Redon, N.; Regan, P.; Regis, J. M.; Regnier, D.; Oliver, R. J.; Rudigier, M.; Runke, J.; Rzaca-Urban, T.; Saed-Samii, N.; Salsac, M. D.; Scheck, M.; Schwengner, R.; Sengele, L.; Singh, P.; Smith, J.; Stezowski, O.; Szpak, B.; Thomas, T.; Thürauf, M.; Timar, J.; Tom, A.; Tomandl, I.; Tornyi, T.; Townsley, C.; Tuerler, A.; Valenta, S.; Vancraeyenest, A.; Vandone, V.; Vanhoy, J.; Vedia, V.; Warr, N.; Werner, V.; Wilmsen, D.; Wilson, E.; Zerrouki, T.; Zielinska, M.
2017-11-01
In the EXILL campaign a highly efficient array of high purity germanium (HPGe) detectors was operated at the cold neutron beam facility PF1B of the Institut Laue-Langevin (ILL) to carry out nuclear structure studies via measurements of γ-rays following neutron-induced capture and fission reactions. The setup consisted of a collimation system producing a pencil beam with a thermal capture equivalent flux of about 10^8 n s-1 cm-2 at the target position and negligible neutron halo. The target was surrounded by an array of eight to ten anti-Compton shielded EXOGAM Clover detectors, four to six anti-Compton shielded large coaxial GASP detectors and two standard Clover detectors. For a part of the campaign the array was combined with 16 LaBr3:(Ce) detectors from the FATIMA collaboration. The detectors were arranged in an array of rhombicuboctahedron geometry, providing the possibility to carry out very precise angular correlation and directional-polarization correlation measurements. The triggerless acquisition system allowed a signal collection rate of up to 6 × 10^5 Hz. The data allowed multi-fold coincidences to be set to obtain decay schemes and, in combination with the FATIMA array of LaBr3:(Ce) detectors, to analyze half-lives of excited levels in the pico- to microsecond range. Precise energy and efficiency calibrations of EXILL were performed using standard calibration sources of 133Ba, 60Co and 152Eu as well as data from the reactions 27Al(n,γ)28Al and 35Cl(n,γ)36Cl in the energy range from 30 keV up to 10 MeV.
Hydrogen Epoch of Reionization Array (HERA) Calibrated FFT Correlator Simulation
NASA Astrophysics Data System (ADS)
Salazar, Jeffrey David; Parsons, Aaron
2018-01-01
The Hydrogen Epoch of Reionization Array (HERA) project is an astronomical radio interferometer array with a redundant baseline configuration. Interferometer arrays are widely used in radio astronomy because they have a variety of advantages over single-antenna systems. For example, they produce images (visibilities) closely matching those of a large antenna (such as the Arecibo observatory), while both the hardware and maintenance costs are significantly lower. However, this method has some complications, one being the computational cost of correlating data from all of the antennas. A correlator is an electronic device that cross-correlates the data between the individual antennas; the results are what radio astronomers call visibilities. HERA, being in its early stages, utilizes a traditional correlator system. The correlator cost scales as N^2, where N is the number of antennas in the array. The purpose of a redundant-baseline array configuration is to enable the use of a more efficient Fast Fourier Transform (FFT) correlator. FFT correlators scale as N log2 N. The data acquired from this sort of setup, however, inherit geometric delays and uncalibrated antenna gains. This particular project simulates the process of calibrating signals from astronomical sources. Each signal “received” by an antenna in the simulation is given a random antenna gain and geometric delay. The “linsolve” Python module was used to solve for the unknown variables in the simulation (complex gains and delays), which then gave a value for the true visibilities. This first version of the simulation only mimics a one-dimensional redundant telescope array detecting a small number of sources located in the volume above the antenna plane. Future versions, using GPUs, will handle a two-dimensional redundant array of telescopes detecting a large number of sources in the volume above the array.
Analysis of Photogrammetry Data from ISIM Mockup
NASA Technical Reports Server (NTRS)
Nowak, Maria; Hill, Mike
2007-01-01
During ground testing of the Integrated Science Instrument Module (ISIM) for the James Webb Space Telescope (JWST), the ISIM Optics group plans to use a photogrammetry measurement system for cryogenic calibration of specific target points on the ISIM composite structure, the Science Instrument optical benches and other GSE equipment. This testing will occur in the Space Environmental Systems (SES) chamber at Goddard Space Flight Center. Close-range photogrammetry is a 3-dimensional metrology technique that uses triangulation to locate custom targets in 3 coordinates via a collection of digital photographs taken from various locations and orientations. These photos are connected using coded targets (special targets that are recognized by the software and thus allow the images to be correlated into a 3-dimensional map of the targets) and scaled via well-calibrated scale bars. Photogrammetry solves for the camera locations and the coordinates of the targets simultaneously through the bundling procedure contained in the V-STARS software, proprietary software owned by Geodetic Systems Inc. The primary objectives of the metrology performed on the ISIM mock-up were (1) to quantify the accuracy of the INCA3 photogrammetry camera on a representative full-scale version of the ISIM structure at ambient temperature by comparing the measurements obtained with this camera to measurements using the Leica laser tracker system and (2) to empirically determine the smallest increment of target position movement that can be resolved by the PG camera in the test setup, i.e., its precision, or resolution. In addition, the geometrical details of the test setup defined during the mockup testing, such as target locations and camera positions, will contribute to the final design of the photogrammetry system to be used on the ISIM Flight Structure.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bevins, N; Vanderhoek, M; Lang, S
2014-06-15
Purpose: Medical display monitor calibration and quality control present challenges to medical physicists. The purpose of this work is to demonstrate and share experiences with an open source package that allows for both initial monitor setup and routine performance evaluation. Methods: A software package, pacsDisplay, has been developed over the last decade to aid in the calibration of all monitors within the radiology group in our health system. The software is used to calibrate monitors to follow the DICOM Grayscale Standard Display Function (GSDF) via lookup tables installed on the workstation. Additional functionality facilitates periodic evaluations of both primary and secondary medical monitors to ensure satisfactory performance. This software is installed on all radiology workstations, and can also be run as a stand-alone tool from a USB disk. Recently, a database has been developed to store and centralize the monitor performance data and to provide long-term trends for compliance with internal standards and various accrediting organizations. Results: Implementation and utilization of pacsDisplay has resulted in improved monitor performance across the health system. Monitor testing is now performed at regular intervals and the software is being used across multiple imaging modalities. Monitor performance characteristics such as maximum and minimum luminance, ambient luminance and illuminance, color tracking, and GSDF conformity are loaded into a centralized database for system performance comparisons. Compliance reports for organizations such as MQSA, ACR, and TJC are generated automatically and stored in the same database. Conclusion: An open source software solution has simplified and improved the standardization of displays within our health system. This work serves as an example method for calibrating and testing monitors within an enterprise health system.
NASA Astrophysics Data System (ADS)
Shahbazi, M.; Sattari, M.; Homayouni, S.; Saadatseresht, M.
2012-07-01
Recent advances in positioning techniques have made it possible to develop Mobile Mapping Systems (MMS) for the detection and 3D localization of various objects from a moving platform. On the other hand, automatic traffic sign recognition from an equipped mobile platform has recently been a challenging issue for both intelligent transportation and municipal database collection. However, there are several inevitable problems common to all recognition methods that rely completely on passive chromatic or grayscale images. This paper presents the implementation and evaluation of an operational MMS. Distinct from others, the developed MMS comprises one range camera based on Photonic Mixer Device (PMD) technology and one standard 2D digital camera. The system benefits from certain algorithms to detect, recognize and localize the traffic signs by fusing the shape, color and object information from both range and intensity images. As the calibration stage, a self-calibration method based on integrated bundle adjustment via a joint setup with the digital camera is applied in this study for PMD camera calibration. As a result, an improvement of 83% in the RMS of range error and 72% in the RMS of coordinate residuals for the PMD camera, over that achieved with basic calibration, is realized in independent accuracy assessments. Furthermore, conventional photogrammetric techniques based on controlled network adjustment are utilized for platform calibration. Likewise, the well-known Extended Kalman Filter (EKF) is applied to integrate the navigation sensors, namely GPS and INS. The overall acquisition system along with the proposed techniques leads to 90% true positive recognition and an average 3D positioning accuracy of 12 centimetres.
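GPS/INS integration of the kind mentioned here is usually framed as a Kalman filter whose prediction step propagates the inertial solution and whose update step corrects it with GPS fixes. The sketch below is a linear, constant-velocity toy version of that predict/update cycle with assumed noise values, not the system's actual EKF formulation.

```python
import numpy as np

def kalman_step(x, P, z, F, Q, H, R):
    """One predict/update cycle of a (linear) Kalman filter."""
    # Predict with the motion model (stands in for INS propagation).
    x = F @ x
    P = F @ P @ F.T + Q
    # Update with the measurement (stands in for a GPS position fix).
    y = z - H @ x
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ y
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

dt = 0.1
F = np.array([[1, dt], [0, 1]])            # constant-velocity state: [position, velocity]
H = np.array([[1.0, 0.0]])                 # GPS observes position only
Q = 0.01 * np.eye(2)
R = np.array([[4.0]])                      # GPS noise variance (m^2)

x, P = np.zeros(2), 10.0 * np.eye(2)
for k in range(50):
    truth = 2.0 * k * dt                   # platform moving at 2 m/s
    z = np.array([truth + np.random.default_rng(k).normal(0, 2.0)])
    x, P = kalman_step(x, P, z, F, Q, H, R)
print(x.round(2))                          # position/velocity estimate roughly [10, 2]
```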
NASA Astrophysics Data System (ADS)
Minunno, Francesco; Peltoniemi, Mikko; Launiainen, Samuli; Mäkelä, Annikki
2014-05-01
Biogeochemical models quantify the material and energy flux exchanges between biosphere, atmosphere and soil; however, there is still considerable uncertainty underpinning model structure and parametrization. The increasing availability of data from multiple sources provides useful information for model calibration and validation at different space and time scales. We calibrated the simplified ecosystem process model PRELES to data from multiple sites. In this work we had the following objective: to compare a multi-site calibration and site-specific calibrations, in order to test if PRELES is a model of general applicability, and to test how well one parameterization can predict ecosystem fluxes. Model calibration and evaluation were carried out by means of the Bayesian method; Bayesian calibration (BC) and Bayesian model comparison (BMC) were used to quantify the uncertainty in model parameters and model structure. Evapotranspiration (ET) and gross primary production (GPP) measurements collected at 9 sites in Finland and Sweden were used in the study; half of the dataset was used for model calibration and half for the comparative analyses. 10 BCs were performed; the model was independently calibrated for each of the nine sites (site-specific calibrations) and a multi-site calibration was achieved using the data from all the sites in one BC. Then 9 BMCs were carried out, one for each site, using output from the multi-site and the site-specific versions of PRELES. Similar estimates were obtained for the parameters to which model outputs are most sensitive. Not surprisingly, the joint posterior distribution achieved through the multi-site calibration was characterized by lower uncertainty, because more data were involved in the calibration process. No significant differences were encountered in the predictions of the multi-site and site-specific versions of PRELES, and after BMC we concluded that the model can be reliably used at the regional scale to simulate the carbon and water fluxes of boreal forests. Despite being a simple model, PRELES provided good estimates of GPP and ET; only for one site did the multi-site version of PRELES underestimate water fluxes. Our study implies a convergence of GPP and water processes in the boreal zone to the extent that their plausible prediction is possible with a simple model using a global parameterization.
Calibration of a COTS Integration Cost Model Using Local Project Data
NASA Technical Reports Server (NTRS)
Boland, Dillard; Coon, Richard; Byers, Kathryn; Levitt, David
1997-01-01
The software measures and estimation techniques appropriate to a Commercial Off the Shelf (COTS) integration project differ from those commonly used for custom software development. Labor and schedule estimation tools that model COTS integration are available. Like all estimation tools, they must be calibrated with the organization's local project data. This paper describes the calibration of a commercial model using data collected by the Flight Dynamics Division (FDD) of the NASA Goddard Spaceflight Center (GSFC). The model calibrated is SLIM Release 4.0 from Quantitative Software Management (QSM). By adopting the SLIM reuse model and by treating configuration parameters as lines of code, we were able to establish a consistent calibration for COTS integration projects. The paper summarizes the metrics, the calibration process and results, and the validation of the calibration.
Gradient-based model calibration with proxy-model assistance
NASA Astrophysics Data System (ADS)
Burrows, Wesley; Doherty, John
2016-02-01
Use of a proxy model in the gradient-based calibration and uncertainty analysis of a complex groundwater model with large run times and problematic numerical behaviour is described. The methodology is general and can be used with models of all types. The proxy model is based on a series of analytical functions that link all model outputs used in the calibration process to all parameters requiring estimation. In enforcing history-matching constraints during the calibration and post-calibration uncertainty analysis processes, the proxy model is run for the purpose of populating the Jacobian matrix, while the original model is run when testing parameter upgrades; the latter process is readily parallelized. Use of a proxy model in this fashion dramatically reduces the computational burden of complex model calibration and uncertainty analysis. At the same time, the effect of model numerical misbehaviour on the calculation of local gradients is mitigated, allowing access to the benefits of gradient-based analysis where a lack of integrity in finite-difference derivative calculations would otherwise have impeded it. Construction of a proxy model, its subsequent use in calibrating a complex model, and the analysis of the uncertainties of predictions made by that model are implemented in the PEST suite.
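The following sketch illustrates the division of labour described above: finite-difference derivatives come from a cheap proxy, while candidate parameter upgrades are tested with the expensive model. Both models here are trivial stand-ins, and the step-halving line search is an assumed simplification rather than the PEST implementation.

```python
import numpy as np

# Proxy-assisted Gauss-Newton sketch: the Jacobian is filled from a cheap
# analytical proxy; parameter upgrades are accepted only if the "original"
# (expensive) model confirms an improvement in the objective function.

def original_model(p):          # expensive simulator (stand-in)
    return np.array([np.exp(-p[0]) + p[1], p[0] * p[1], p[1] ** 2])

def proxy_model(p):             # cheap analytical surrogate (stand-in)
    return np.array([1.0 - p[0] + p[1], p[0] * p[1], p[1] ** 2])

obs = original_model(np.array([0.4, 1.3]))   # synthetic observations

def jacobian(f, p, eps=1e-4):
    """Finite-difference Jacobian, computed by running the proxy only."""
    J = np.zeros((len(f(p)), len(p)))
    for j in range(len(p)):
        dp = np.zeros_like(p)
        dp[j] = eps
        J[:, j] = (f(p + dp) - f(p - dp)) / (2 * eps)
    return J

p = np.array([1.0, 0.5])
for _ in range(20):
    r = obs - original_model(p)              # residuals from the real model
    J = jacobian(proxy_model, p)             # derivatives from the proxy
    dp = np.linalg.lstsq(J, r, rcond=None)[0]
    # Test the upgrade with the original model; simple step-halving search.
    for lam in (1.0, 0.5, 0.25):
        trial = p + lam * dp
        if np.sum((obs - original_model(trial)) ** 2) < np.sum(r ** 2):
            p = trial
            break
print(p)
```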
NASA Astrophysics Data System (ADS)
Gupta, A.; Singh, P. J.; Gaikwad, D. Y.; Udupa, D. V.; Topkar, A.; Sahoo, N. K.
2018-02-01
An experimental setup is developed for the trace-level detection of heavy water (HDO) using the off-axis integrated cavity output spectroscopy technique. The absorption spectrum of water samples is recorded in the spectral range 7190.7-7191.5 cm-1 with a diode laser as the light source. From the recorded water vapor absorption spectrum, the heavy water concentration is determined from the HDO and water absorption lines. The effect of cavity-gain nonlinearity with per-pass absorption is studied. A signal processing and data fitting procedure is devised to obtain linear calibration curves by including nonlinear cavity-gain effects in the calculation. Initial calibration of the mirror reflectivity is performed by measurements on a natural water sample. The signal processing and data fitting method has been validated by measuring the HDO concentration in water samples over a wide range, from 20 ppm to 2280 ppm, yielding a linear calibration curve. The average measurement time is about 30 s. The experimental technique presented in this paper could be applied to the development of a portable instrument for fast measurement of the water isotopic composition in heavy water plants and for the detection of heavy water leaks in pressurized heavy water reactors.
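To illustrate how including the cavity-gain nonlinearity can linearize such a calibration curve, the sketch below uses one commonly quoted cavity-enhancement relation, dI/I = G·A / (1 + G·A), with per-pass absorption A and cavity gain G ≈ 1/(1 − R). The relation, reflectivity and absorption-per-ppm values are assumptions for illustration and may differ from the authors' exact treatment.

```python
import numpy as np

R = 0.9995                      # assumed mirror reflectivity from H2O calibration
G = 1.0 / (1.0 - R)             # cavity gain

def per_pass_absorption(fractional_dip):
    """Invert the cavity-gain relation so A is linear in concentration."""
    return fractional_dip / (G * (1.0 - fractional_dip))

# Synthetic calibration samples: known HDO concentrations (ppm) and the
# fractional absorption dips they would produce through the cavity.
conc_ppm = np.array([20, 100, 500, 1000, 2280], dtype=float)
alpha_per_ppm = 2.0e-9          # assumed per-pass absorption per ppm
dips = G * alpha_per_ppm * conc_ppm / (1.0 + G * alpha_per_ppm * conc_ppm)

A = per_pass_absorption(dips)               # linearized quantity
slope, intercept = np.polyfit(conc_ppm, A, 1)
print(slope, intercept)                     # linear calibration curve
```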
NASA Astrophysics Data System (ADS)
Guoxin, Cheng
2015-01-01
In recent years, several calibration-independent transmission/reflection methods have been developed to determine the complex permittivity of liquid materials. However, these methods suffer from their own drawbacks, such as requiring multiple measurement cells or being affected by air gaps. To eliminate these drawbacks, a fast calibration-independent method is proposed in this paper. The present method has two main advantages over those in the literature. First, only one measurement cell is required; the cell is measured when it is empty and when it is filled with liquid. This avoids the air-gap effect of approaches in which a structure with its two reference ports connected to each other must also be measured. Second, it eliminates the effects of uncalibrated coaxial cables, adaptors and plug sections; systematic errors caused by the experimental setup are removed through wave cascading matrix manipulations. Using this method, three dielectric reference liquids (ethanol, ethanediol and pure water) and a low-loss transformer oil are measured over a wide frequency range to validate the proposed approach. The accuracy is assessed by comparing the results with those obtained from other well-known techniques. It is demonstrated that the proposed method can serve as a robust approach for the fast determination of the complex permittivity of liquid materials.
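The sketch below illustrates the general wave-cascading-matrix trick behind such calibration-independent methods: each raw measurement is the cell's T-matrix sandwiched between unknown error boxes, M = X_a·T·X_b, so the product M_filled·inv(M_empty) is similar to T_filled·inv(T_empty) and its trace does not depend on the error boxes. The S-to-T conversion convention, the example S-parameters, and the use of the trace as the extracted invariant are assumptions for illustration, not the paper's exact algorithm.

```python
import numpy as np

def s_to_t(S):
    """Convert a 2-port S-parameter matrix to a wave cascading (T) matrix."""
    S11, S12, S21, S22 = S[0, 0], S[0, 1], S[1, 0], S[1, 1]
    return (1.0 / S21) * np.array([[S12 * S21 - S11 * S22, S11],
                                   [-S22, 1.0]], dtype=complex)

def invariant(S_empty, S_filled):
    """Trace of T_filled @ inv(T_empty); the input/output error boxes cancel
    by the similarity transform, so no cable/adaptor calibration is needed."""
    T_e = s_to_t(S_empty)
    T_f = s_to_t(S_filled)
    return np.trace(T_f @ np.linalg.inv(T_e))

# Made-up raw S-parameters of the empty and liquid-filled cell (uncalibrated).
S_empty = np.array([[0.05 + 0.02j, 0.90 - 0.10j],
                    [0.90 - 0.10j, 0.05 + 0.02j]])
S_filled = np.array([[0.20 + 0.05j, 0.55 - 0.35j],
                     [0.55 - 0.35j, 0.20 + 0.05j]])
print(invariant(S_empty, S_filled))   # would feed a root search for permittivity
```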
Whispering Gallery Mode Thermometry
Corbellini, Simone; Ramella, Chiara; Yu, Lili; Pirola, Marco; Fernicola, Vito
2016-01-01
This paper presents a state-of-the-art whispering gallery mode (WGM) thermometer system that could replace the platinum resistance thermometers currently used in many industrial applications, overcoming some of their well-known limitations while offering the potential for lower measurement uncertainty. The temperature-sensing element is a sapphire-crystal whispering gallery mode resonator with its main resonant modes between 10 GHz and 20 GHz. In particular, the WGM around 13.6 GHz was found to maximize measurement performance, affording sub-millikelvin resolution and temperature stability better than 1 mK at 0 °C. The thermometer system was made portable and low-cost by developing an ad hoc interrogation system (hardware and software) able to determine resonance frequencies with an accuracy of the order of a few parts in 10⁹. Herein we report the experimental assessment of the measurement stability, repeatability and resolution, and the calibration of the thermometer over the temperature range from −74 °C to 85 °C. The combined standard uncertainty for a single temperature calibration point is found to be within 5 mK (i.e., comparable with the state of the art for industrial thermometry) and is mainly due to the calibration setup employed. The uncertainty contribution of the WGM thermometer alone is within a millikelvin. PMID:27801868
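As a rough sketch of the two steps behind such a reading, the example below fits a Lorentzian to a simulated resonance near 13.6 GHz to locate its centre, then maps the frequency shift to temperature through an assumed linear calibration. The resonance parameters, noise level and sensitivity are invented; the authors' interrogation hardware and calibration function are not reproduced.

```python
import numpy as np
from scipy.optimize import curve_fit

def lorentzian(df, df0, fwhm, amp, offset):
    """Resonance dip versus frequency offset (MHz) from a 13.6 GHz carrier."""
    return offset + amp / (1.0 + ((df - df0) / (fwhm / 2.0)) ** 2)

# Synthetic sweep of +/- 0.5 MHz around the mode, with a little noise.
rng = np.random.default_rng(1)
df = np.linspace(-0.5, 0.5, 401)
s21 = lorentzian(df, 0.12, 0.20, -0.8, 1.0) + 0.005 * rng.normal(size=df.size)

# Step 1: locate the resonance centre by least-squares fitting.
popt, _ = curve_fit(lorentzian, df, s21, p0=(0.0, 0.3, -0.5, 1.0))
df0_mhz = popt[0]

# Step 2: assumed linear calibration around 0 degC (placeholder sensitivity,
# determined beforehand against a reference thermometer).
dT_dMHz = -1.4                                   # K per MHz of frequency shift
temperature_c = dT_dMHz * df0_mhz
print(df0_mhz, temperature_c)
```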
Application of AXUV diode detectors at ASDEX Upgrade
NASA Astrophysics Data System (ADS)
Bernert, M.; Eich, T.; Burckhart, A.; Fuchs, J. C.; Giannone, L.; Kallenbach, A.; McDermott, R. M.; Sieglin, B.
2014-03-01
In the ASDEX Upgrade tokamak, a radiation measurement covering a wide spectral range, based on semiconductor detectors with 256 lines of sight and a time resolution of 5 μs, was recently installed. In combination with the foil-based bolometry, it is now possible to estimate the absolutely calibrated radiated power of the plasma on fast timescales. This work introduces this diagnostic, which is based on AXUV (Absolute eXtended UltraViolet) n-on-p diodes made by International Radiation Detectors, Inc. The measurement and the degradation of the diodes in a tokamak environment are shown. Even though AXUV diodes are designed to have a constant sensitivity for all photon energies (1 eV-8 keV), degradation leads to a photon-energy dependence of the sensitivity. The foil bolometry, which is restricted to a time resolution of less than 1 kHz, provides the basis for a time-dependent calibration of the diodes. The measurements of the quasi-calibrated diodes are compared with the foil bolometry and found to be accurate on the kHz timescale. It is therefore assumed that the corrected values are also valid at the highest time resolution (200 kHz). With this improved diagnostic setup, the radiation induced by edge localized modes is analyzed on fast timescales.
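The sketch below shows one plausible form of such a time-dependent cross-calibration: the fast diode signal is smoothed down to the bolometer bandwidth, a slowly varying calibration factor is formed from the ratio, and that factor is applied to the full-bandwidth diode data. The synthetic signals, sampling rate and smoothing windows are assumptions; this is not the ASDEX Upgrade analysis code.

```python
import numpy as np

def moving_average(x, n):
    """Crude low-pass filter used to mimic the slow bolometer bandwidth."""
    return np.convolve(x, np.ones(n) / n, mode="same")

fs_diode = 200_000                          # diode sampling rate, Hz (assumed)
t = np.arange(0, 0.1, 1.0 / fs_diode)

# Synthetic signals: slowly varying radiated power plus fast ELM-like spikes
# seen only by the diodes, and a slowly drifting diode sensitivity.
p_true = 1.0 + 0.2 * np.sin(2 * np.pi * 50 * t)
spikes = 0.5 * (np.sin(2 * np.pi * 3000 * t) > 0.999)
sensitivity = 1.0 - 2.0 * t                 # slow degradation of the diodes
diode_raw = sensitivity * (p_true + spikes)
bolometer = moving_average(p_true, 401)     # slow but absolutely calibrated

# Calibration factor on the slow timescale, applied to the fast signal.
diode_slow = moving_average(diode_raw, 401)
k = bolometer / diode_slow
diode_calibrated = k * diode_raw
print(np.max(np.abs(moving_average(diode_calibrated, 401) - bolometer)))
```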
NASA Astrophysics Data System (ADS)
Ammerlaan, B. A. J.; Holzinger, R.; Jedynska, A. D.; Henzing, J. S.
2017-09-01
Equivalent Black Carbon (EBC) and Elemental Carbon (EC) are different mass metrics used to quantify the amount of combustion aerosol, and each metric has its own measurement technique. In state-of-the-art carbon analysers, optical measurements are used to correct for organic carbon that does not evolve because of pyrolysis. These optical measurements can also be exploited in the same way as those of absorption photometers. Here, we use the transmission measurements of our carbon analyser to determine the elemental carbon concentration and the absorption coefficient simultaneously. We use MAAP data from the CESAR observatory, the Netherlands, to correct for aerosol-filter interactions by linking the attenuation coefficient from the carbon analyser to the absorption coefficient measured by the MAAP. Applying the calibration to an independent dataset of MAAP and OC/EC observations for the same location shows that the calibration is transferable to other observation periods. Because the light-absorption properties of the aerosol and the elemental carbon are measured simultaneously, variation in the mass absorption efficiency (MAE) can be studied. We further show that the absorption coefficients and the MAE in this set-up are determined with precisions of 10% and 12%, respectively; these improve to 4% and 8% when the light transmission signal in the carbon analyser is very stable.
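A minimal sketch of the calibration step described above follows: the analyser's attenuation coefficient is regressed against the MAAP absorption coefficient, and the MAE is then derived from co-located EC mass concentrations. The enhancement factor, noise levels and all data values are synthetic assumptions, not the CESAR observations.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic reference absorption coefficients from the MAAP (Mm^-1).
b_abs_maap = rng.uniform(2, 30, 120)
# Attenuation from the analyser's transmission signal, enhanced by
# aerosol-filter interactions (assumed multiplicative factor plus noise).
b_atn = 2.6 * b_abs_maap + rng.normal(0, 1.0, 120)

# Calibration: regress MAAP absorption on analyser attenuation.
slope, intercept = np.polyfit(b_atn, b_abs_maap, 1)
b_abs_from_atn = slope * b_atn + intercept

# MAE from co-located EC mass concentrations (ug m^-3, synthetic);
# Mm^-1 divided by ug m^-3 gives m^2 g^-1 directly.
ec_mass = b_abs_maap / 8.0 + rng.normal(0, 0.1, 120)
mae = b_abs_from_atn / ec_mass
print(slope, intercept, np.mean(mae))
```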