Sample records for cost calibration method

  1. Calibration method for a large-scale structured light measurement system.

    PubMed

    Wang, Peng; Wang, Jianmei; Xu, Jing; Guan, Yong; Zhang, Guanglie; Chen, Ken

    2017-05-10

    The structured light method is an effective non-contact measurement approach. Calibration greatly affects the measurement precision of structured light systems. To construct a large-scale structured light system with high accuracy, a large-scale, precise calibration gauge is usually required, which increases cost. To address this, a calibration method using a planar mirror is proposed in this paper to reduce the calibration gauge size and cost. An out-of-focus camera calibration method is also proposed to overcome the defocusing problem caused by the shortened distance during the calibration procedure. The experimental results verify the accuracy of the proposed calibration method.

  2. Beyond discrimination: A comparison of calibration methods and clinical usefulness of predictive models of readmission risk.

    PubMed

    Walsh, Colin G; Sharman, Kavya; Hripcsak, George

    2017-12-01

    Prior to implementing predictive models in novel settings, analyses of calibration and clinical usefulness remain as important as discrimination, but they are less frequently discussed. Calibration is a model's reflection of actual outcome prevalence in its predictions. Clinical usefulness refers to the utilities, costs, and harms of using a predictive model in practice. A decision-analytic approach to calibrating and selecting an optimal intervention threshold may help maximize the impact of readmission risk and other preventive interventions. The aims were to select a pragmatic means of calibrating predictive models that requires a minimum amount of validation data and performs well in practice, and to evaluate the impact of miscalibration on utility and cost via clinical usefulness analyses. Observational, retrospective cohort study with electronic health record data from 120,000 inpatient admissions at an urban, academic center in Manhattan. The primary outcome was thirty-day readmission for three causes: all-cause, congestive heart failure, and chronic coronary atherosclerotic disease. Predictive modeling was performed via L1-regularized logistic regression. Calibration methods were compared, including Platt Scaling, Logistic Calibration, and Prevalence Adjustment. Performance of predictive modeling and calibration was assessed via discrimination (c-statistic), calibration (Spiegelhalter Z-statistic, Root Mean Square Error [RMSE] of binned predictions, Sanders and Murphy Resolutions of the Brier Score, Calibration Slope and Intercept), and clinical usefulness (utility terms represented as costs). The amount of validation data necessary to apply each calibration algorithm was also assessed. C-statistics by diagnosis ranged from 0.7 for all-cause readmission to 0.86 (0.78-0.93) for congestive heart failure. Logistic Calibration and Platt Scaling performed best; detecting this difference required analyzing multiple metrics of calibration simultaneously, in particular Calibration Slopes and Intercepts. Clinical usefulness analyses provided optimal risk thresholds, which varied by reason for readmission, outcome prevalence, and calibration algorithm. Utility analyses also suggested maximum tolerable intervention costs, e.g., $1720 for all-cause readmissions based on a published cost of readmission of $11,862. Choice of calibration method depends on availability of validation data and on performance. Improperly calibrated models may contribute to higher costs of intervention as measured via clinical usefulness. Decision-makers must understand the underlying utilities or costs inherent in the use case at hand to assess usefulness, and will obtain as a result the optimal risk threshold to trigger intervention, along with intervention cost limits.
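
    As an illustration of the calibration step above, the following minimal sketch fits Platt scaling on a validation set; the data are synthetic placeholders, not the study's cohort, and sklearn's logistic regression stands in for whatever fitting routine the authors used.

      import numpy as np
      from sklearn.linear_model import LogisticRegression

      rng = np.random.default_rng(0)
      raw = rng.uniform(0.01, 0.99, 1000)          # uncalibrated risk scores
      y = rng.binomial(1, raw ** 2)                # outcomes; scores are deliberately miscalibrated

      # Platt scaling: p_cal = sigmoid(a * score + b), fit on held-out validation data
      platt = LogisticRegression()
      platt.fit(raw.reshape(-1, 1), y)
      p_cal = platt.predict_proba(raw.reshape(-1, 1))[:, 1]

      # calibrated predictions should reflect the actual outcome prevalence
      print(y.mean(), raw.mean(), p_cal.mean())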

  3. A Flexile and High Precision Calibration Method for Binocular Structured Light Scanning System

    PubMed Central

    Yuan, Jianying; Wang, Qiong; Li, Bailin

    2014-01-01

    3D (three-dimensional) structured light scanning systems are widely used in the fields of reverse engineering, quality inspection, and so forth. Camera calibration is the key to scanning precision. Currently, a finely machined 2D (two-dimensional) or 3D calibration reference object is usually required for high calibration precision, which is difficult to operate and costly. In this paper, a novel calibration method is proposed with a scale bar and some artificial coded targets placed randomly in the measuring volume. The principle of the proposed method is based on hierarchical self-calibration and bundle adjustment. Initial intrinsic parameters are obtained from images. Initial extrinsic parameters in projective space are estimated with the factorization method and then upgraded to Euclidean space using the orthogonality of the rotation matrix and the rank-3 constraint on the absolute quadric. Finally, all camera parameters are refined through bundle adjustment. Real experiments show that the proposed method is robust and has the same precision level as results obtained with a delicate artificial reference object, while the hardware cost is very low compared with current calibration methods used in 3D structured light scanning systems. PMID:25202736

  4. Radiometric calibration method for large aperture infrared system with broad dynamic range.

    PubMed

    Sun, Zhiyuan; Chang, Songtao; Zhu, Wei

    2015-05-20

    Infrared radiometric measurements can acquire important data for missile defense systems. When observation is carried out by ground-based infrared systems, a missile is characterized by long distance, small size, and large variation of radiance. Therefore, the infrared systems should be manufactured with a larger aperture to enhance detection ability and calibrated over a broader dynamic range to extend the measurable radiance. Nevertheless, the frequently used calibration methods demand an extended-area blackbody with broad dynamic range or a huge collimator for filling the system's field stop, which would greatly increase manufacturing costs and difficulties. To overcome this restriction, a calibration method based on the amendment of inner and outer calibration is proposed. First, the principles and procedures of this method are introduced. Then, a shifting strategy of infrared systems for measuring targets with large fluctuations of infrared radiance is put forward. Finally, several experiments are performed on a shortwave infrared system with a Φ400 mm aperture. The results indicate that the proposed method can not only ensure calibration accuracy but also offers low cost, low power consumption, and high mobility. Hence, it is an effective radiometric calibration method for field use.

  5. Low-cost precision rotary index calibration

    NASA Astrophysics Data System (ADS)

    Ng, T. W.; Lim, T. S.

    2005-08-01

    The traditional method for calibrating the angular indexing repeatability of rotary axes on machine tools and measuring equipment is with a precision polygon (usually 12-sided) and an autocollimator or angular interferometer. Such a setup is typically expensive. Here, we propose a far more cost-effective approach that uses just a laser, a diffractive optical element, and a CCD camera. We show that high accuracies can be achieved for angular index calibration.

  6. Convert a low-cost sensor to a colorimeter using an improved regression method

    NASA Astrophysics Data System (ADS)

    Wu, Yifeng

    2008-01-01

    Closed-loop color calibration is a process to maintain consistent color reproduction for color printers. To perform closed-loop color calibration, a pre-designed color target is printed and automatically measured by a color measuring instrument. A low-cost sensor has been embedded in the printer to perform the color measurement, and a series of sensor calibration and color conversion methods have been developed. The purpose is to obtain accurate colorimetric measurements from the data measured by the low-cost sensor. To achieve high accuracy, we need to carefully calibrate the sensor and minimize all possible errors during the color conversion. After comparing several classical color conversion methods, a regression-based color conversion method was selected. Regression is a powerful method for estimating color conversion functions, but the main difficulty in using it is finding an appropriate function to describe the relationship between the input and output data. In this paper, we propose using 1D pre-linearization tables to improve the linearity between the input sensor measurement data and the output colorimetric data. Using this method, we can increase the accuracy of the regression, and thus improve the accuracy of the color conversion.
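
    A minimal sketch of the two-stage idea described above: per-channel 1D pre-linearization tables followed by a linear least-squares regression from the linearized sensor values to colorimetric values. The gamma-like sensor response, LUT size, matrix, and patch data are illustrative assumptions, not the paper's parameters.

      import numpy as np

      rng = np.random.default_rng(0)
      M_true = np.array([[0.41, 0.36, 0.18],       # assumed linear RGB -> XYZ matrix
                         [0.21, 0.72, 0.07],
                         [0.02, 0.12, 0.95]])
      stim = rng.uniform(0, 1, (50, 3))            # training patches (linear stimulus)
      sensor = stim ** 2.2                         # assumed nonlinear sensor response
      xyz_ref = stim @ M_true.T                    # reference colorimeter readings

      # 1D pre-linearization table: inverts the per-channel nonlinearity
      lut_in = np.linspace(0, 1, 256)
      lut_out = lut_in ** (1 / 2.2)
      lin = np.interp(sensor, lut_in, lut_out)     # applied element-wise per channel

      # regression: least-squares matrix from linearized sensor data to XYZ
      M_fit, *_ = np.linalg.lstsq(lin, xyz_ref, rcond=None)
      print(np.abs(lin @ M_fit - xyz_ref).max())   # small residual once linearized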

  7. Design of a tracked ultrasound calibration phantom made of LEGO bricks

    NASA Astrophysics Data System (ADS)

    Walsh, Ryan; Soehl, Marie; Rankin, Adam; Lasso, Andras; Fichtinger, Gabor

    2014-03-01

    PURPOSE: Spatial calibration of tracked ultrasound systems is commonly performed using precisely fabricated phantoms. Machining and 3D printing have relatively high costs and are not readily available, and the possibilities for modifying such phantoms are very limited. Our goal was to find a method to construct a calibration phantom from affordable, widely available components that can be built in a short time, can be easily modified, and provides accuracy comparable to existing solutions. METHODS: We designed an N-wire calibration phantom made of LEGO® bricks. To affirm the phantom's reproducibility and build time, ten builds were done by first-time users. The phantoms were used for tracked ultrasound calibration by an experienced user. The success of each user's build was determined by the lowest root mean square (RMS) wire reprojection error of three calibrations. The accuracy and variance of the calibrations were evaluated for various tracked ultrasound probes, and the proposed model was compared to two currently available phantom models for both electromagnetic and optical tracking. RESULTS: The phantom was successfully built by all ten first-time users in an average time of 18.8 minutes. It cost approximately $10 CAD for the required LEGO® bricks and averaged 0.69 mm of error in calibration reproducibility for ultrasound calibrations. It is one third the cost of similar 3D printed phantoms and takes much less time to build. The proposed phantom's image reprojections were 0.13 mm more erroneous than those of the highest-performing current phantom model, and the average standard deviation of multiple 3D image reprojections differed by 0.05 mm between the phantoms. CONCLUSION: The phantom could be built in less time and at one third the cost compared to similar 3D printed models, and was found to be capable of producing calibrations equivalent to those of 3D printed phantoms.

  8. In Search of Easy-to-Use Methods for Calibrating ADCP's for Velocity and Discharge Measurements

    USGS Publications Warehouse

    Oberg, K.

    2002-01-01

    A cost-effective procedure for calibrating acoustic Doppler current profilers (ADCPs) in the field is presented, along with the advantages and disadvantages of various methods used for calibrating ADCPs. The proposed method requires the use of a differential global positioning system (DGPS) with sub-meter accuracy and standard software for collecting ADCP data. The method involves traversing a long (400-800 meter) course at a constant compass heading and speed while collecting simultaneous DGPS and ADCP data.

  9. A novel 360-degree shape measurement using a simple setup with two mirrors and a laser MEMS scanner

    NASA Astrophysics Data System (ADS)

    Jin, Rui; Zhou, Xiang; Yang, Tao; Li, Dong; Wang, Chao

    2017-09-01

    There is no denying that 360-degree shape measurement technology plays an important role in the field of three-dimensional optical metrology. Traditional optical 360-degree shape measurement methods are mainly of two kinds: the first places multiple scanners to achieve 360-degree measurements; the second uses a high-precision rotating device to obtain the 360-degree shape model. The former increases the number of scanners and is costly, while the latter is time consuming because of the rotating device. This paper presents a low-cost, fast optical 360-degree shape measurement method that is fully static. The measuring system consists of two mirrors set at a certain angle, a laser projection system, a stereoscopic calibration block, and two cameras. Most importantly, the laser MEMS scanner can achieve precise movement of the laser stripes without any movement mechanism, improving measurement accuracy and efficiency. Moreover, a novel stereo calibration technology presented in this paper can achieve point cloud data registration to obtain the 360-degree model of objects. A stereoscopic calibration block with special coded patterns on six sides is used in this novel stereo calibration method, through which the 360-degree models of objects can be obtained quickly.

  10. Generator Dynamic Model Validation and Parameter Calibration Using Phasor Measurements at the Point of Connection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, Zhenyu; Du, Pengwei; Kosterev, Dmitry

    2013-05-01

    Disturbance data recorded by phasor measurement units (PMUs) offer opportunities to improve the integrity of dynamic models. However, manually tuning parameters through play-back events demands significant effort and engineering experience. In this paper, a calibration method using the extended Kalman filter (EKF) technique is proposed. The formulation of the EKF with parameter calibration is discussed, and case studies are presented to demonstrate its validity. The proposed calibration method is cost-effective and complementary to traditional equipment testing for improving dynamic model quality.
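
    A minimal sketch of the parameter-calibration idea: the unknown model parameter is appended to the state vector and estimated jointly by an EKF from the measurements. The first-order toy model below stands in for the generator dynamics, which the paper does not reduce to this form.

      import numpy as np

      rng = np.random.default_rng(0)
      a_true, q, r = 0.95, 1e-2, 1e-2              # true parameter, process/measurement noise
      x, zs = 0.5, []
      for _ in range(500):                         # simulate x_{k+1} = a x_k + w, z = x + v
          x = a_true * x + rng.normal(0, np.sqrt(q))
          zs.append(x + rng.normal(0, np.sqrt(r)))

      s = np.array([0.0, 0.5])                     # augmented state [x, a], poor initial guess
      P = np.eye(2)
      H = np.array([[1.0, 0.0]])                   # only x is measured
      for z in zs:
          F = np.array([[s[1], s[0]],              # Jacobian of f([x, a]) = [a*x, a]
                        [0.0, 1.0]])
          s = np.array([s[1] * s[0], s[1]])        # predict
          P = F @ P @ F.T + np.diag([q, 1e-8])
          K = P @ H.T @ np.linalg.inv(H @ P @ H.T + r)   # Kalman gain
          s = s + K.ravel() * (z - s[0])           # update with the innovation
          P = (np.eye(2) - K @ H) @ P
      print(s[1])                                  # calibrated parameter, close to a_true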

  11. Laser Calibration of an Impact Disdrometer

    NASA Technical Reports Server (NTRS)

    Lane, John E.; Kasparis, Takis; Metzger, Philip T.; Jones, W. Linwood

    2014-01-01

    A practical approach to developing an operational low-cost disdrometer hinges on implementing an effective in situ adaptive calibration strategy. This calibration strategy lowers the cost of the device and provides a method to guarantee continued automatic calibration. In previous work, a collocated tipping-bucket rain gauge was utilized to provide a calibration signal to the disdrometer's digital signal processing software. Rainfall rate is proportional to the 11/3 moment of the drop size distribution (DSD); a 7/2 moment can also be assumed, depending on the choice of terminal velocity relationship. In that previous case, the disdrometer calibration was characterized and weighted to the 11/3 moment of the DSD. Optical extinction by rainfall is proportional to the 2nd moment of the DSD. Using visible laser light as a means to focus and generate an auxiliary calibration signal, the adaptive calibration processing is significantly improved.
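
    The moment relationships quoted above are directly computable from a binned drop size distribution; this sketch uses a synthetic exponential DSD in place of measured disdrometer data.

      import numpy as np

      D = np.linspace(0.2, 6.0, 30)                # drop diameters, mm (bin centres)
      dD = D[1] - D[0]
      N = 8000.0 * np.exp(-2.3 * D)                # synthetic exponential DSD, m^-3 mm^-1

      def dsd_moment(n):
          # n-th moment of the DSD: sum of N(D) * D^n over the bins
          return np.sum(N * D ** n) * dD

      rain_proxy = dsd_moment(11 / 3)              # rainfall rate ~ 11/3 moment
      extinction_proxy = dsd_moment(2)             # optical extinction ~ 2nd moment
      print(rain_proxy, extinction_proxy)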

  12. Dimensional accuracy of aluminium extrusions in mechanical calibration

    NASA Astrophysics Data System (ADS)

    Raknes, Christian Arne; Welo, Torgeir; Paulsen, Frode

    2018-05-01

    Reducing dimensional variations in the extrusion process without increasing cost is challenging due to the nature of the process itself. An alternative approach, also from a cost perspective, is to use extruded profiles with standard tolerances and utilize downstream processes to calibrate the part within tolerance limits that are not achievable directly from the extrusion process. In this paper, two mechanical calibration strategies for the extruded product are investigated, utilizing the forming lines of the manufacturer. The first calibration strategy is based on global, longitudinal stretching in combination with local bending, while the second utilizes the principle of transversal stretching and local bending of the cross-section. An extruded U-profile is used to compare the two methods using numerical analyses. To provide response surfaces, the FEA program ABAQUS is used in combination with Design of Experiments (DOE); DOE is conducted with a two-level fractional factorial design to collect the appropriate data. The aim is to find the main factors affecting the dimensional accuracy of the final part obtained by the two calibration methods. The results show that both calibration strategies effectively reduce cross-sectional variations from standard extrusion tolerances. It is concluded that mechanical calibration is a viable, low-cost alternative for aluminium parts that demand high dimensional accuracy, e.g. due to fit-up or welding requirements.

  13. Simultaneous Calibration: A Joint Optimization Approach for Multiple Kinect and External Cameras.

    PubMed

    Liao, Yajie; Sun, Ying; Li, Gongfa; Kong, Jianyi; Jiang, Guozhang; Jiang, Du; Cai, Haibin; Ju, Zhaojie; Yu, Hui; Liu, Honghai

    2017-06-24

    Camera calibration is a crucial problem in many applications, such as 3D reconstruction, structure from motion, object tracking, and face alignment. Numerous methods have been proposed over the last few decades to solve this problem with good performance. However, few methods are targeted at the joint calibration of multiple sensors (more than four devices), which is normally a practical issue in real-time systems. In this paper, we propose a novel method and a corresponding workflow framework to simultaneously calibrate the relative poses of a Kinect and three external cameras. By optimizing the final cost function and adding corresponding weights to the external cameras in different locations, an effective joint calibration of multiple devices is achieved. Furthermore, the method is tested on a practical platform, and experimental results show that the proposed joint calibration method achieves satisfactory performance in a real-time system, with accuracy higher than the manufacturer's calibration.
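
    A minimal 2D sketch of the weighted joint cost idea: each external camera contributes residuals scaled by a location-dependent weight, and all relative poses are refined in one optimization. The planar landmarks, poses, and weights are synthetic assumptions, not the paper's setup.

      import numpy as np
      from scipy.optimize import least_squares

      rng = np.random.default_rng(1)
      pts = rng.uniform(-1, 1, (20, 2))            # landmarks seen by every device

      def transform(pose, p):                      # 2D rigid transform (theta, tx, ty)
          th, tx, ty = pose
          R = np.array([[np.cos(th), -np.sin(th)],
                        [np.sin(th),  np.cos(th)]])
          return p @ R.T + np.array([tx, ty])

      true = [np.array([0.1 * k, 0.5 * k, -0.2 * k]) for k in (1, 2, 3)]
      obs = [transform(p, pts) + 0.01 * rng.standard_normal(pts.shape) for p in true]
      w = [1.0, 0.5, 0.5]                          # down-weight cameras in poorer locations

      def residuals(x):
          poses = x.reshape(3, 3)                  # one pose per external camera
          return np.concatenate([wi * (transform(pi, pts) - oi).ravel()
                                 for wi, pi, oi in zip(w, poses, obs)])

      sol = least_squares(residuals, np.zeros(9))  # joint refinement of all poses
      print(sol.x.reshape(3, 3))                   # close to the true poses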

  14. Studies in Software Cost Model Behavior: Do We Really Understand Cost Model Performance?

    NASA Technical Reports Server (NTRS)

    Lum, Karen; Hihn, Jairus; Menzies, Tim

    2006-01-01

    While there exists extensive literature on software cost estimation techniques, industry practice continues to rely upon standard regression-based algorithms. These software effort models are typically calibrated or tuned to local conditions using local data. This paper cautions that current approaches to model calibration often produce sub-optimal models, both because of the large variance problem inherent in cost data and because far more effort multipliers are included than the data support. Building optimal models requires that a wider range of models be considered, while correctly calibrating these models requires rejection rules that prune variables and records and that use multiple criteria for evaluating model performance. The main contribution of this paper is to document a standard method that integrates formal model identification, estimation, and validation. It also documents what we call the large variance problem, a leading cause of cost model brittleness or instability.
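
    A generic sketch of a rejection rule of the kind the paper advocates: backward elimination drops any effort multiplier whose removal does not worsen cross-validated error. The data, model, and stopping rule are illustrative assumptions, not the authors' exact procedure.

      import numpy as np
      from sklearn.linear_model import LinearRegression
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(0)
      X = rng.normal(size=(60, 8))                 # 8 candidate effort multipliers (log scale)
      y = 1.2 * X[:, 0] + 0.7 * X[:, 1] + rng.normal(0, 0.3, 60)   # only two carry signal

      def cv_error(cols):
          s = cross_val_score(LinearRegression(), X[:, cols], y,
                              scoring="neg_mean_absolute_error", cv=5)
          return -s.mean()

      keep = list(range(X.shape[1]))
      best = cv_error(keep)
      pruned = True
      while pruned and len(keep) > 1:
          pruned = False
          for c in list(keep):
              trial = [k for k in keep if k != c]
              err = cv_error(trial)
              if err <= best:                      # dropping c does not hurt: prune it
                  keep, best, pruned = trial, err, True
                  break
      print(keep)                                  # typically collapses to the informative columns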

  15. Improvement of Gaofen-3 Absolute Positioning Accuracy Based on Cross-Calibration

    PubMed Central

    Deng, Mingjun; Li, Jiansong

    2017-01-01

    The Chinese Gaofen-3 (GF-3) mission was launched in August 2016, equipped with a full polarimetric synthetic aperture radar (SAR) sensor in the C-band, with a resolution of up to 1 m. The absolute positioning accuracy of GF-3 is of great importance, and in-orbit geometric calibration is a key technology for improving it. Conventional geometric calibration accurately calibrates the geometric calibration parameters of the image (internal delay and azimuth shifts) using high-precision ground control data; it is highly dependent on the control data of the calibration field, and it remains costly and labor-intensive to monitor changes in GF-3's geometric calibration parameters. Based on the positioning consistency constraint of conjugate points, this study presents a geometric cross-calibration method for the rapid and accurate calibration of GF-3. The proposed method can accurately calibrate geometric calibration parameters without using corner reflectors or high-precision digital elevation models, thus improving the absolute positioning accuracy of the GF-3 image. GF-3 images from multiple regions were collected to verify the absolute positioning accuracy after cross-calibration. The results show that this method can achieve a calibration accuracy as high as that achieved by the conventional field calibration method. PMID:29240675

  16. Stepwise Regression Analysis of MDOE Balance Calibration Data Acquired at DNW

    NASA Technical Reports Server (NTRS)

    DeLoach, Richard; Philipsen, Iwan

    2007-01-01

    This paper reports a comparison of two experiment design methods applied in the calibration of a strain-gage balance. One features a 734-point test matrix in which loads are varied systematically according to a method commonly applied in aerospace research and known in the experiment design literature as One Factor At a Time (OFAT) testing. Two variations of an alternative experiment design were also executed on the same balance, each with different features of an MDOE experiment design. The Modern Design of Experiments (MDOE) is an integrated process of experiment design, execution, and analysis applied at NASA's Langley Research Center to achieve significant reductions in cycle time, direct operating cost, and experimental uncertainty in aerospace research generally and in balance calibration experiments specifically. Personnel in the Instrumentation and Controls Department of the German-Dutch Wind Tunnels (DNW) applied MDOE methods in the calibration of a balance using an automated calibration machine in order to evaluate them. The data were sent to Langley Research Center for analysis and comparison, and this paper reports key findings from that analysis. The chief result is that a 100-point calibration exploiting MDOE principles delivered quality comparable to a 700+ point OFAT calibration, with significantly reduced cycle time and attendant savings in direct and indirect costs. While the DNW test matrices implemented key MDOE principles and produced excellent results, additional MDOE concepts implemented in balance calibrations at Langley Research Center are also identified and described.

  17. Temporal Analysis and Automatic Calibration of the Velodyne HDL-32E LiDAR System

    NASA Astrophysics Data System (ADS)

    Chan, T. O.; Lichti, D. D.; Belton, D.

    2013-10-01

    At the end of the first quarter of 2012, more than 600 Velodyne LiDAR systems had been sold worldwide for various robotic and high-accuracy survey applications. The ultra-compact Velodyne HDL-32E LiDAR has become a predominant sensor for many applications that require lower sensor size, weight, and cost. For high-accuracy applications, cost-effective calibration methods with minimal manual intervention are always desired by users. However, calibration is complicated by the Velodyne LiDAR's narrow vertical field of view and the highly time-variant nature of its measurements. In this paper, the temporal stability of the HDL-32E is first analysed as the motivation for developing a new, automated calibration method. This is followed by a detailed description of the calibration method, which is driven by a novel segmentation method for extracting vertical cylindrical features from the Velodyne point clouds. The proposed segmentation method exploits the Velodyne point cloud's slice-like nature and first decomposes the point clouds into 2D layers. The layers are then treated as 2D images and processed with the Generalized Hough Transform, which extracts points distributed in circular patterns from the point cloud layers. Subsequently, the vertical cylindrical features can be readily extracted from the whole point cloud based on the previously extracted points. The points are passed to the calibration, which estimates the cylinder parameters and the LiDAR's additional parameters simultaneously by constraining the segmented points to fit the cylindrical geometric model such that the weighted sum of the adjustment residuals is minimized. The proposed calibration is highly automated, which allows end users to obtain the time-variant additional parameters instantly and frequently whenever vertical cylindrical features are present in the scene. The methods were verified with two different real datasets, and the results suggest that up to 78.43% accuracy improvement for the HDL-32E can be achieved using the proposed calibration method.
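
    A minimal sketch of the circle-voting step on one 2D layer: each point votes for all candidate centres at the known cylinder radius, and the strongest accumulator cell marks a cylinder axis. The grid resolution, radius, and layer data are assumptions.

      import numpy as np

      def hough_circle_center(points, radius, cell=0.05):
          # points: (N, 2) planar coordinates of one point-cloud layer
          acc = {}
          for x, y in points:
              for th in np.linspace(0.0, 2.0 * np.pi, 72, endpoint=False):
                  key = (int(round((x - radius * np.cos(th)) / cell)),
                         int(round((y - radius * np.sin(th)) / cell)))
                  acc[key] = acc.get(key, 0) + 1   # vote for a candidate centre cell
          (cx, cy), _ = max(acc.items(), key=lambda kv: kv[1])
          return cx * cell, cy * cell

      rng = np.random.default_rng(0)               # synthetic layer: circle of radius 0.5 at (2, 3)
      ang = rng.uniform(0, 2 * np.pi, 200)
      layer = np.column_stack([2 + 0.5 * np.cos(ang), 3 + 0.5 * np.sin(ang)])
      layer += 0.005 * rng.standard_normal(layer.shape)
      print(hough_circle_center(layer, 0.5))       # close to (2, 3)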

  18. Evaluation of Automated Model Calibration Techniques for Residential Building Energy Simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Robertson, J.; Polly, B.; Collis, J.

    2013-09-01

    This simulation study adapts and applies the general framework described in BESTEST-EX (Judkoff et al 2010) for self-testing residential building energy model calibration methods. BEopt/DOE-2.2 is used to evaluate four mathematical calibration methods in the context of monthly, daily, and hourly synthetic utility data for a 1960's-era existing home in a cooling-dominated climate. The home's model inputs are assigned probability distributions representing uncertainty ranges, random selections are made from the uncertainty ranges to define 'explicit' input values, and synthetic utility billing data are generated using the explicit input values. The four calibration methods evaluated in this study are: an ASHRAE 1051-RP-based approach (Reddy and Maor 2006), a simplified simulated annealing optimization approach, a regression metamodeling optimization approach, and a simple output ratio calibration approach. The calibration methods are evaluated for monthly, daily, and hourly cases; various retrofit measures are applied to the calibrated models and the methods are evaluated based on the accuracy of predicted savings, computational cost, repeatability, automation, and ease of implementation.
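
    Of the four methods compared, the simple output ratio calibration can be stated in a few lines: the simulated monthly energy use is scaled so that its annual total matches the utility bills. The monthly figures below are placeholders.

      import numpy as np

      simulated = np.array([900, 820, 700, 560, 620, 880,      # kWh per month, model output
                            1010, 1050, 840, 600, 640, 860], float)
      measured = np.array([880, 860, 640, 520, 700, 960,       # kWh per month, utility bills
                           1100, 1150, 800, 560, 600, 830], float)

      ratio = measured.sum() / simulated.sum()     # single scaling factor
      calibrated = simulated * ratio               # annual totals now agree exactly
      print(ratio, calibrated.sum(), measured.sum())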

  1. A low-cost and portable realization on fringe projection three-dimensional measurement

    NASA Astrophysics Data System (ADS)

    Xiao, Suzhi; Tao, Wei; Zhao, Hui

    2015-12-01

    Fringe projection three-dimensional measurement is applied in a wide range of industrial applications. Traditional fringe projection systems have the disadvantages of high expense, large size, and complicated calibration requirements. In this paper we introduce a low-cost, portable realization of three-dimensional measurement with a pico projector. It has the advantages of low cost, compact physical size, and flexible configuration. For the proposed fringe projection system, there is no restriction on the parallelism or perpendicularity of the relative alignment of the camera and projector during installation. Moreover, a plane-based calibration method is adopted that avoids critical requirements on the calibration system, such as an additional gauge block or a precise linear z-stage. Error sources in the proposed system are also discussed. The experimental results demonstrate the feasibility of the proposed low-cost, portable fringe projection system.

  2. A holistic calibration method with iterative distortion compensation for stereo deflectometry

    NASA Astrophysics Data System (ADS)

    Xu, Yongjia; Gao, Feng; Zhang, Zonghua; Jiang, Xiangqian

    2018-07-01

    This paper presents a novel holistic calibration method for a stereo deflectometry system to improve the system's measurement accuracy. The reconstruction result of stereo deflectometry is integrated with the calculated normal data of the measured surface, and the calculation accuracy of the normal data is strongly influenced by the calibration accuracy of the geometrical relationship of the stereo deflectometry system. Conventional calibration approaches introduce form error to the system due to an inaccurate imaging model and incomplete distortion elimination. The proposed calibration method compensates system distortion with an iterative algorithm instead of the conventional mathematical distortion model. The initial values of the system parameters are calculated from the fringe patterns displayed on the system's LCD screen through reflection off a markless flat mirror. An iterative algorithm is proposed to compensate system distortion and to optimize the camera imaging parameters and the system's geometrical relation parameters based on a cost function. Both simulation work and experimental results show the proposed calibration method can significantly improve the calibration and measurement accuracy of a stereo deflectometry system. The PV (peak value) of the measurement error of a flat mirror can be reduced from 282 nm, obtained with the conventional calibration approach, to 69.7 nm by applying the proposed method.

  3. Comparison of "E-Rater"[R] Automated Essay Scoring Model Calibration Methods Based on Distributional Targets

    ERIC Educational Resources Information Center

    Zhang, Mo; Williamson, David M.; Breyer, F. Jay; Trapani, Catherine

    2012-01-01

    This article describes two separate, related studies that provide insight into the effectiveness of "e-rater" score calibration methods based on different distributional targets. In the first study, we developed and evaluated a new type of "e-rater" scoring model that was cost-effective and applicable under conditions of absent human rating and…

  4. Reducing heliostat field costs by direct measurement and control of the mirror orientation

    NASA Astrophysics Data System (ADS)

    van den Donker, P.; Rosinga, G.; van Voorthuysen, E. du Marchie

    2016-05-01

    The first commercial CSP central receiver system has been in operation since 2007, and the technology required for such a system is quite new. The determining factor in the price of electricity is the capital investment in the heliostat field, and the cost level per square meter of the heliostat field is rather high. Sun2Point questions the market trend of trying to bring costs down with large heliostats; instead, it aims at mass manufacturing small heliostats to achieve low prices, since off-site mass manufacturing and transport over long distances are possible for small heliostats only. On-the-spot calibration is a labour-intensive activity; autonomous, factory-calibrated, wirelessly controlled heliostats are the solution to lower installation cost. A new measurement method that directly reports the orientation of the heliostat in relation to the earth and the sun can solve the calibration problem when the heliostats are installed, making the application of small heliostats much cheaper. Several methods for such a measurement are described briefly. The new Sun2Point method has been tested successfully, and Sun2Point challenges the CSP community to investigate this approach. A brief survey is presented of the many aspects that lead to a low price.

  5. A Low Cost Weather Balloon Borne Solar Cell Calibration Payload

    NASA Technical Reports Server (NTRS)

    Snyder, David B.; Wolford, David S.

    2012-01-01

    Calibration of standard sets of solar cell sub-cells is an important step toward laboratory verification of the on-orbit performance of new solar cell technologies. This paper looks at the potential capabilities of a lightweight weather balloon payload for solar cell calibration. A 1500 g latex weather balloon can lift a 2.7 kg payload to over 100,000 ft altitude, above 99% of the atmosphere. Data taken at atmospheric pressures of about 30 to 15 mbar may be extrapolated via the Langley plot method to 0 mbar, i.e., AM0. This extrapolation, in principle, can have better than 0.1% error. The launch costs of such a payload are significantly less than those of the much larger, higher-altitude balloons or the manned flight facility. The low cost enables a risk-tolerant approach to payload development. Demonstration of 1% standard deviation in flight-to-flight variation is the goal of this project. This paper describes the initial concept of the solar cell calibration payload and reports initial test flight results.
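
    A minimal sketch of the Langley extrapolation mentioned above: the log of the cell's short-circuit current is fit linearly against pressure over the roughly 30 to 15 mbar span, and the intercept at 0 mbar estimates the AM0 value. The attenuation model and the numbers are assumptions.

      import numpy as np

      rng = np.random.default_rng(0)
      pressure = np.linspace(30.0, 15.0, 12)       # mbar, sampled along the ascent
      i_am0_true = 0.505                           # A, assumed AM0 short-circuit current
      isc = i_am0_true * np.exp(-0.004 * pressure) # attenuation ~ air remaining above payload
      isc *= 1 + rng.normal(0, 0.001, isc.size)    # measurement noise

      slope, intercept = np.polyfit(pressure, np.log(isc), 1)
      print(np.exp(intercept))                     # extrapolated to 0 mbar, close to i_am0_true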

  6. Single Vector Calibration System for Multi-Axis Load Cells and Method for Calibrating a Multi-Axis Load Cell

    NASA Technical Reports Server (NTRS)

    Parker, Peter A. (Inventor)

    2003-01-01

    A single vector calibration system is provided which facilitates the calibration of multi-axis load cells, including wind tunnel force balances. The single vector system provides the capability to calibrate a multi-axis load cell using a single directional load, for example loading solely in the gravitational direction. The system manipulates the load cell in three-dimensional space, while keeping the uni-directional calibration load aligned. The use of a single vector calibration load reduces the set-up time for the multi-axis load combinations needed to generate a complete calibration mathematical model. The system also reduces load application inaccuracies caused by the conventional requirement to generate multiple force vectors. The simplicity of the system reduces calibration time and cost, while simultaneously increasing calibration accuracy.

  7. Development of Rapid, Continuous Calibration Techniques and Implementation as a Prototype System for Civil Engineering Materials Evaluation

    NASA Astrophysics Data System (ADS)

    Scott, M. L.; Gagarin, N.; Mekemson, J. R.; Chintakunta, S. R.

    2011-06-01

    Until recently, civil engineering material calibration data could only be obtained from material sample cores or via time-consuming, stationary calibration measurements in a limited number of locations. Calibration data are used to determine material propagation velocities of electromagnetic waves in test materials for use in layer thickness measurements and subsurface imaging. The limitations these calibration methods impose have been a significant impediment to broader use of nondestructive evaluation methods such as ground-penetrating radar (GPR). In 2006, a new rapid, continuous calibration approach was designed using simulation software to address these measurement limitations during a Federal Highway Administration (FHWA) research and development effort. This continuous calibration method combines a digitally synthesized step-frequency (SF) GPR array and a data collection protocol sequence for the common midpoint (CMP) method. Modeling and laboratory test results for various data collection protocols and materials are presented in this paper. The continuous-CMP concept was implemented for FHWA in a prototype demonstration system called the Advanced Pavement Evaluation (APE) system in 2009. Data from the continuous-CMP protocol are processed using a semblance/coherency analysis to determine material propagation velocities. Continuously calibrated pavement thicknesses measured with the APE system in 2009 are presented. This method is efficient, accurate, and cost-effective.
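
    A minimal sketch of the semblance/coherency measure used to pick propagation velocities from CMP data: for a gather already moveout-corrected at a candidate velocity, semblance approaches 1 where the traces align. The window length and the synthetic gather are assumptions.

      import numpy as np

      def semblance(gather, window=11):
          # gather: (n_traces, n_samples), moveout-corrected for one candidate velocity
          num = gather.sum(axis=0) ** 2
          den = gather.shape[0] * (gather ** 2).sum(axis=0)
          k = np.ones(window) / window             # smooth over a short time window
          return np.convolve(num, k, "same") / (np.convolve(den, k, "same") + 1e-12)

      t = np.linspace(0, 1, 200)                   # an aligned gather scores ~1, a misaligned one less
      pulse = np.exp(-((t - 0.5) ** 2) / 2e-4)
      aligned = np.tile(pulse, (8, 1))
      shifted = np.array([np.roll(pulse, 4 * i) for i in range(8)])
      print(semblance(aligned).max(), semblance(shifted).max())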

  8. The research on calibration methods of dual-CCD laser three-dimensional human face scanning system

    NASA Astrophysics Data System (ADS)

    Wang, Jinjiang; Chang, Tianyu; Ge, Baozhen; Tian, Qingguo; Yang, Fengting; Shi, Shendong

    2013-09-01

    In this paper, considering the performance advantages of the two-step method, we combine the stereo matching of binocular stereo vision with active laser scanning to calibrate the system. First, we select a reference camera coordinate system as the world coordinate system and unify the coordinates of the two CCD cameras. We then obtain the new perspective projection matrix (PPM) of each camera after epipolar rectification, from which the corresponding epipolar equation of the two cameras can be defined. By utilizing the trigonometric parallax method, we can measure the spatial point position after distortion correction and achieve stereo matching calibration between the two image points. Experiments verify that this method improves accuracy and guarantees system stability. The stereo matching calibration is a simple, low-cost process and simplifies regular maintenance work. It can acquire 3D coordinates with only a planar checkerboard calibration, without the need to design a specific standard target or use an electronic theodolite. It was found during the experiments that two-step calibration error and lens distortion lead to stratification of the point cloud data. The proposed calibration method, which combines active line laser scanning and binocular stereo vision, has the advantages of both and offers more flexible applicability. Theoretical analysis and experiments show that the method is sound.
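
    After epipolar rectification, the trigonometric parallax step reduces to the standard rectified-stereo relation; this tiny sketch shows it with assumed camera numbers.

      def depth_from_disparity(f_px, baseline_m, disparity_px):
          # rectified stereo: corresponding points share a scanline, and depth
          # Z = f * B / d follows from similar triangles (the parallax)
          return f_px * baseline_m / disparity_px

      print(depth_from_disparity(f_px=1400.0, baseline_m=0.12, disparity_px=35.0))  # 4.8 m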

  9. A variable acceleration calibration system

    NASA Astrophysics Data System (ADS)

    Johnson, Thomas H.

    2011-12-01

    A variable acceleration calibration system that applies loads using gravitational and centripetal acceleration serves as an alternative, efficient, and cost-effective method for calibrating internal wind tunnel force balances. Two proof-of-concept variable acceleration calibration systems were designed, fabricated, and tested. The NASA UT-36 force balance served as the test balance for the calibration experiments. The variable acceleration calibration systems are shown to be capable of performing three-component calibration experiments with an approximate applied load error on the order of 1% of the full-scale calibration loads. Sources of error are identified using experimental design methods and a propagation-of-uncertainty analysis. Three types of uncertainty are identified for the systems and are attributed to prediction error, calibration error, and pure error. Angular velocity uncertainty is shown to be the largest identified source of prediction error. The calibration uncertainties using a production variable-acceleration-based system are shown to be potentially equivalent to current methods; the production-quality system can be realized using lighter materials and more precise instrumentation. Further research is needed to account for balance deflection, forcing effects due to vibration, and large tare loads. A gyroscope measurement technique is shown to be capable of resolving the balance deflection angle calculation. Long-term research objectives include a demonstration of a six-degree-of-freedom calibration and a large-capacity balance calibration.
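
    A short sketch of the centripetal loading relation behind the system, with the first-order propagation that makes angular velocity the dominant term (its sensitivity enters twice). The mass, radius, and uncertainties are assumed values.

      import numpy as np

      m, r, omega = 2.0, 0.5, 12.0                 # kg, m, rad/s (assumed)
      u_m, u_r, u_omega = 0.001, 0.0005, 0.02      # standard uncertainties

      F = m * omega ** 2 * r                       # centripetal calibration load, N

      # first-order propagation: dF/F = sqrt((um/m)^2 + (2*uw/w)^2 + (ur/r)^2)
      rel_u = np.sqrt((u_m / m) ** 2 + (2 * u_omega / omega) ** 2 + (u_r / r) ** 2)
      print(F, rel_u * F)                          # the omega term dominates the budget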

  10. A pipette-based calibration system for fast-scan cyclic voltammetry with fast response times.

    PubMed

    Ramsson, Eric S

    2016-01-01

    Fast-scan cyclic voltammetry (FSCV) is an electrochemical technique that utilizes the oxidation and/or reduction of an analyte of interest to infer rapid changes in concentrations. In order to calibrate the resulting oxidative or reductive current, known concentrations of an analyte must be introduced under controlled settings. Here, I describe a simple and cost-effective method, using a Petri dish and pipettes, for the calibration of carbon fiber microelectrodes (CFMs) using FSCV.

  11. Design and simulation of a sensor for heliostat field closed loop control

    NASA Astrophysics Data System (ADS)

    Collins, Mike; Potter, Daniel; Burton, Alex

    2017-06-01

    Significant research has been completed in pursuit of capital cost reductions for heliostats [1],[2]. The camera array closed loop control concept has the potential to radically alter the way heliostats are controlled and installed by replacing high-quality open loop targeting systems with low-quality targeting devices that rely on measurement of image position to remove tracking errors during operation. Although the system could be used for any heliostat size, it significantly benefits small heliostats by reducing actuation costs, enabling large numbers of heliostats to be calibrated simultaneously, and enabling calibration of heliostats that produce low irradiance (similar to or less than ambient light) on Lambertian calibration targets, such as small heliostats that are far from the tower. A simulation method for the camera array has been designed and verified experimentally. The simulation tool demonstrates that closed loop calibration or control is possible using this device.

  12. Dry calibration of electromagnetic flowmeters based on numerical models combining multiple physical phenomena (multiphysics)

    NASA Astrophysics Data System (ADS)

    Fu, X.; Hu, L.; Lee, K. M.; Zou, J.; Ruan, X. D.; Yang, H. Y.

    2010-10-01

    This paper presents a method for the dry calibration of an electromagnetic flowmeter (EMF). The method, which determines the voltage induced in the EMF as conductive liquid flows through a magnetic field, numerically solves a coupled set of multiphysical equations with measured boundary conditions for the magnetic, electric, and flow fields in the measuring pipe of the flowmeter. Specifically, this paper details the formulation of dry calibration and an efficient algorithm for computing the sensitivity of the EMF that adaptively minimizes the number of measurements and requires only the normal component of the magnetic flux density as a boundary condition on the pipe surface to reconstruct the magnetic field involved. Along with an in-depth discussion of factors that could significantly affect the final precision of a dry-calibrated EMF, the effects of flow disturbance on measuring errors were studied experimentally by installing a baffle at the inflow port of the EMF. Results of the dry calibration on an actual EMF were compared against flow-rig calibration; excellent agreement (within 0.3%) between dry calibration and flow-rig tests verifies the multiphysical computation of the fields and the robustness of the method. Since it requires no actual flow, dry calibration is particularly useful for calibrating large-diameter EMFs, for which conventional flow-rig methods are often costly and difficult to implement.

  13. The Value of Hydrograph Partitioning Curves for Calibrating Hydrological Models in Glacierized Basins

    NASA Astrophysics Data System (ADS)

    He, Zhihua; Vorogushyn, Sergiy; Unger-Shayesteh, Katy; Gafurov, Abror; Kalashnikova, Olga; Omorova, Elvira; Merz, Bruno

    2018-03-01

    This study refines the method for calibrating a glacio-hydrological model based on Hydrograph Partitioning Curves (HPCs), and evaluates its value in comparison to multi-data set optimization approaches that use glacier mass balance, satellite snow cover images, and discharge. The HPCs are extracted from the observed flow hydrograph using catchment precipitation and temperature gradients; they indicate the periods when the various runoff processes, such as glacier melt or snow melt, dominate the basin hydrograph. The annual cumulative curve of the difference between average daily temperature and the melt threshold temperature over the basin, as well as the annual cumulative curve of average daily snowfall on the glacierized areas, are used to identify the start and end dates of the snow and glacier ablation periods. Model parameters characterizing the different runoff processes are calibrated on different HPCs in a stepwise and iterative way. Results show that the HPC-based method (1) delivers model-internal consistency comparable to the tri-data set calibration method; (2) improves the stability of calibrated parameter values across various calibration periods; and (3) estimates the contributions of runoff components similarly to the tri-data set calibration method. Our findings indicate the potential of the HPC-based approach as an alternative for hydrological model calibration in glacierized basins, where calibration data sets other than discharge are often unavailable or very costly to obtain.
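
    A minimal sketch of the two cumulative curves described above, with one simple way to bracket the ablation season: on the cumulative sum of (T - T_melt), the minimum marks the onset of sustained melt and the maximum its end. The daily series and threshold are synthetic assumptions.

      import numpy as np

      doy = np.arange(365)                         # day of year
      t_mean = -8.0 + 18.0 * np.sin((doy - 80) * 2 * np.pi / 365)   # synthetic daily temperature, deg C
      snow_glacier = np.where(t_mean < 1.0, 4.0, 0.0)               # synthetic glacier snowfall, mm

      t_melt = 0.0                                 # melt threshold temperature
      cum_dd = np.cumsum(t_mean - t_melt)          # cumulative temperature excess
      cum_snow = np.cumsum(snow_glacier)           # cumulative snowfall on glacierized areas

      melt_start = int(np.argmin(cum_dd))          # curve turns upward: sustained melt begins
      melt_end = int(np.argmax(cum_dd))            # curve turns downward: melt season ends
      print(melt_start, melt_end, cum_snow[-1])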

  14. Innovative self-calibration method for accelerometer scale factor of the missile-borne RINS with fiber optic gyro.

    PubMed

    Zhang, Qian; Wang, Lei; Liu, Zengjun; Zhang, Yiming

    2016-09-19

    The calibration of an inertial measurement unit (IMU) is a key technique for improving the precision of the inertial navigation system (INS) of a missile, especially the calibration of the accelerometer scale factor. Traditional calibration methods are generally based on a high-accuracy turntable; however, this leads to high costs, and the calibration results are not suited to the actual operating environment. With the development of multi-axis rotational INS (RINS) with optical inertial sensors, self-calibration has become an effective way to calibrate the IMU on a missile, and the calibration results are more accurate in practical application. However, the introduction of a multi-axis RINS causes additional calibration errors, including non-orthogonality errors from mechanical processing and non-horizontal errors of the operating environment, which means that the multi-axis gimbals cannot be regarded as a high-accuracy turntable. For application on missiles, this paper analyzes the relationship between the calibration error of the accelerometer scale factor and the non-orthogonality and non-horizontal angles, and then proposes an innovative calibration procedure using the signals of the fiber optic gyro and a photoelectric encoder. Laboratory and vehicle experiment results validate the theory and prove that the proposed method relaxes the orthogonality requirement on the rotation axes and eliminates the strict application conditions of the system.

  15. Determining geometric error model parameters of a terrestrial laser scanner through Two-face, Length-consistency, and Network methods

    PubMed Central

    Wang, Ling; Muralikrishnan, Bala; Rachakonda, Prem; Sawyer, Daniel

    2017-01-01

    Terrestrial laser scanners (TLSs) are increasingly used in large-scale manufacturing and assembly, where the required measurement uncertainties are on the order of a few tenths of a millimeter or smaller. In order to meet these stringent requirements, systematic errors within a TLS are compensated in situ through self-calibration. In the Network method of self-calibration, numerous targets distributed in the work volume are measured from multiple locations with the TLS to determine the parameters of the TLS error model. In this paper, we propose two new self-calibration methods, the Two-face method and the Length-consistency method. The Length-consistency method is proposed as a more efficient way of realizing the Network method: the lengths between pairs of targets measured from multiple TLS positions are compared to determine the TLS model parameters. The Two-face method is a two-step process. In the first step, many model parameters are determined directly from the difference between front-face and back-face measurements of targets distributed in the work volume. In the second step, all remaining model parameters are determined through the Length-consistency method. We compare the Two-face method, the Length-consistency method, and the Network method in terms of the uncertainties in the model parameters, and demonstrate the validity of our techniques using a calibrated scale bar and front-face/back-face target measurements. The clear advantage of these self-calibration methods is that a reference instrument or calibrated artifacts are not required, thus significantly lowering the cost involved in the calibration process. PMID:28890607

  16. Comparison between laser interferometric and calibrated artifacts for the geometric test of machine tools

    NASA Astrophysics Data System (ADS)

    Sousa, Andre R.; Schneider, Carlos A.

    2001-09-01

    A touch probe is used on a 3-axis vertical machining center to check against a hole plate calibrated on a coordinate measuring machine (CMM). By comparing the results obtained from the machine tool and the CMM, the main machine tool error components are measured, attesting to the machine's accuracy. The error values can also be used to update the error compensation table in the CNC, enhancing the machine's accuracy. The method is easy to use, has a lower cost than classical test techniques, and preliminary results have shown that its uncertainty is comparable to well-established techniques. In this paper the method is compared with the laser interferometric system with regard to reliability, cost, and time efficiency.

  17. Mobile micro-colorimeter and micro-spectrometer sensor modules as enablers for the replacement of subjective inspections by objective measurements for optically clear colored liquids in-field

    NASA Astrophysics Data System (ADS)

    Dittrich, Paul-Gerald; Grunert, Fred; Ehehalt, Jörg; Hofmann, Dietrich

    2015-03-01

    The aim of this paper is to show that the colorimetric characterization of optically clear colored liquids can be performed with different measurement methods and their application-specific multichannel spectral sensors. The possible measurement methods are differentiated by the type of multichannel spectral sensor applied, and therefore by their spectral resolution, measurement speed, measurement accuracy, and measurement cost. The paper describes how different types of multichannel spectral sensors are calibrated with different types of calibration methods and how the measured values can be used for further colorimetric calculations. The different measurement methods and the different application-specific calibration methods are explained methodically and theoretically. The paper shows that, and how, different multichannel spectral sensor modules with different calibration methods can be applied with smartpads for the calculation of measurement results, both in the laboratory and in the field. A practical example is the application of different multichannel spectral sensors for the colorimetric characterization of petroleum oils and fuels using the Saybolt color scale.

  18. Geometric artifacts reduction for cone-beam CT via L0-norm minimization without dedicated phantoms.

    PubMed

    Gong, Changcheng; Cai, Yufang; Zeng, Li

    2018-01-01

    For cone-beam computed tomography (CBCT), transversal shifts of the rotation center inevitably exist, resulting in geometric artifacts in the CT images. In this work, we propose a novel geometric calibration method for CBCT, which can also be used in micro-CT. The symmetry property of the sinogram is used for the first calibration, and then the L0-norm of the gradient image of the reconstructed image is used as the cost function to be minimized in the second calibration. An iterative search method is adopted to pursue the local minimum of the L0-norm minimization problem. The transversal shift value is updated with a predetermined step size within a search range determined by the first calibration. In addition, a graphics processing unit (GPU)-based FDK algorithm and acceleration techniques are designed to speed up the calibration process of the presented method. In simulation experiments, the mean absolute difference (MAD) and the standard deviation (SD) of the transversal shift value were less than 0.2 pixels between the noise-free and noisy projection images, indicating highly accurate calibration with the new method. In real-data experiments, the smaller entropies of the corrected images also indicated that higher-resolution images were acquired using the corrected projection data, with textures well preserved. The results also support the feasibility of applying the proposed method to other imaging modalities.
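
    A minimal sketch of the second calibration step: the L0-norm of the gradient image (here a count of gradient magnitudes above a small threshold) is minimized over candidate transversal shifts. The stand-in reconstruct() fakes the FDK reconstruction by degrading a phantom in proportion to the shift error; everything below is a toy assumption.

      import numpy as np

      def l0_gradient(img, eps=0.02):
          # "L0-norm" of the gradient image: the number of non-negligible gradients
          gx, gy = np.diff(img, axis=1), np.diff(img, axis=0)
          return np.count_nonzero(np.abs(gx) > eps) + np.count_nonzero(np.abs(gy) > eps)

      def reconstruct(shift, true_shift=1.7):
          # stand-in for the GPU FDK reconstruction at a candidate rotation-centre
          # shift: geometric error smears edges, emulated as error-proportional noise
          x = np.linspace(-1, 1, 128)
          X, Y = np.meshgrid(x, x)
          img = (X ** 2 + Y ** 2 < 0.5).astype(float)
          rng = np.random.default_rng(0)
          return img + 0.08 * abs(shift - true_shift) * rng.standard_normal(img.shape)

      shifts = np.arange(0.0, 3.0, 0.1)            # search range from the first calibration
      best = min(shifts, key=lambda s: l0_gradient(reconstruct(s)))
      print(best)                                  # near the true shift of 1.7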

  20. Analysis of various quality attributes of sunflower and soybean plants by near infra-red reflectance spectroscopy: Development and validation calibration models

    USDA-ARS's Scientific Manuscript database

    Sunflower and soybean are summer annuals that can be grown as an alternative to corn and may be particularly useful in organic production systems. Rapid and low-cost methods of analyzing plant quality would be helpful for crop management. We developed and validated calibration models for Near-infrar...

  1. Brightness checkerboard lattice method for the calibration of the coaxial reverse Hartmann test

    NASA Astrophysics Data System (ADS)

    Li, Xinji; Hui, Mei; Li, Ning; Hu, Shinan; Liu, Ming; Kong, Lingqin; Dong, Liquan; Zhao, Yuejin

    2018-01-01

    The coaxial reverse Hartmann test (RHT) is widely used in the measurement of large aspheric surfaces as an auxiliary method for interference measurement because of its large dynamic range, highly flexible testing of low-frequency surface errors, and low cost. The accuracy of the coaxial RHT depends on the calibration. However, the calibration process remains inefficient, and the signal-to-noise ratio limits the accuracy of the calibration. In this paper, brightness checkerboard lattices were used to replace the traditional dot matrix. The brightness checkerboard method can reduce the number of dot-matrix projections in the calibration process, thus improving efficiency. An LCD screen displayed a brightness checkerboard lattice in which brighter and darker checkerboards were alternately arranged. From the image on the detector, the relationship between rays at certain angles and the photosensitive positions in the detector coordinates can be obtained, and a differential de-noising method can effectively reduce the impact of noise on the measurement results. Simulation and experiment proved the feasibility of the method. Theoretical analysis and experimental results show that the efficiency of the brightness checkerboard lattice is about four times that of the traditional dot matrix, and that the signal-to-noise ratio of the calibration is significantly improved.

  2. Evaluation of assigned-value uncertainty for complex calibrator value assignment processes: a prealbumin example.

    PubMed

    Middleton, John; Vaks, Jeffrey E

    2007-04-01

    Errors of calibrator-assigned values lead to errors in the testing of patient samples. The ability to estimate the uncertainties of calibrator-assigned values and other variables minimizes errors in testing processes. International Organization for Standardization guidelines provide simple equations for the estimation of calibrator uncertainty with simple value-assignment processes, but other methods are needed to estimate uncertainty in complex processes. We estimated the assigned-value uncertainty with a Monte Carlo computer simulation of a complex value-assignment process, based on a formalized description of the process, with measurement parameters estimated experimentally. This method was applied to study the uncertainty of a multilevel calibrator value assignment for a prealbumin immunoassay. The simulation results showed that the component of the uncertainty added by the process of value transfer from the reference material CRM470 to the calibrator is smaller than that of the reference material itself (<0.8% vs 3.7%). Varying the process parameters in the simulation model allowed for optimizing the process, while keeping the added uncertainty small. The patient result uncertainty caused by the calibrator uncertainty was also found to be small. This method of estimating uncertainty is a powerful tool that allows for estimation of calibrator uncertainty for optimization of various value-assignment processes, with a reduced number of measurements and reduced reagent costs, while satisfying the uncertainty requirements. The new method expands and augments existing methods to allow estimation of uncertainty in complex processes.
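
    To make the simulation idea concrete, here is a minimal Monte Carlo sketch in Python; the two-step transfer chain and all numerical standard deviations are illustrative assumptions, not the paper's process parameters. Varying the per-step standard deviations in such a model is the simulation analogue of optimizing the value-assignment process.

        import numpy as np

        rng = np.random.default_rng(0)
        n = 100_000

        # Illustrative two-step value transfer: reference material -> master
        # calibrator -> product calibrator. Relative SDs are placeholders.
        ref_value = 1.0
        ref = rng.normal(ref_value, 0.037 * ref_value, n)  # 3.7% reference uncertainty
        master = ref * rng.normal(1.0, 0.005, n)           # transfer step 1
        product = master * rng.normal(1.0, 0.005, n)       # transfer step 2

        added = np.sqrt(np.var(product) - np.var(ref)) / ref_value
        print(f"uncertainty added by the transfer process: {100 * added:.2f}%")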

  3. Calibration and Forward Uncertainty Propagation for Large-eddy Simulations of Engineering Flows

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Templeton, Jeremy Alan; Blaylock, Myra L.; Domino, Stefan P.

    2015-09-01

    The objective of this work is to investigate the efficacy of using calibration strategies from Uncertainty Quantification (UQ) to determine model coefficients for LES. As the target methods are for engineering LES, uncertainty from numerical aspects of the model must also be quantified. The ultimate goal of this research thread is to generate a cost-versus-accuracy curve for LES such that the cost can be minimized given an accuracy prescribed by an engineering need. Realization of this goal would enable LES to serve as a predictive simulation tool within the engineering design process.

  4. Chromatic aberration correction: an enhancement to the calibration of low-cost digital dermoscopes.

    PubMed

    Wighton, Paul; Lee, Tim K; Lui, Harvey; McLean, David; Atkins, M Stella

    2011-08-01

    We present a method for calibrating low-cost digital dermoscopes that corrects for color and inconsistent lighting and also corrects for chromatic aberration. Chromatic aberration is a form of radial distortion that often occurs in inexpensive digital dermoscopes and creates red and blue halo-like effects on edges. Being radial in nature, distortions due to chromatic aberration are not constant across the image, but rather vary in both magnitude and direction. As a result, the distortions are not only visually distracting but could also mislead automated characterization techniques. Two low-cost dermoscopes, based on different consumer-grade cameras, were tested. Color is corrected by imaging a reference and applying singular value decomposition to determine the transformation required to ensure accurate color reproduction. Lighting is corrected by imaging a uniform surface and creating lighting correction maps. Chromatic aberration is corrected using a second-order radial distortion model. Our results for color and lighting calibration are consistent with previously published results, while distortions due to chromatic aberration can be reduced by 42-47% in the two systems considered. The disadvantages of inexpensive dermoscopy can be quickly and substantially mitigated with a suitable calibration procedure. © 2011 John Wiley & Sons A/S.
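
    A second-order radial model of this kind is straightforward to apply per color channel. The sketch below is our assumption of one plausible formulation, not the authors' implementation: it resamples a channel with SciPy so that its radial distortion is pulled back toward the image center, with hypothetical coefficient values that would come from calibration.

        import numpy as np
        from scipy.ndimage import map_coordinates

        def correct_channel(channel, k, center=None):
            # Second-order radial model: sample each output pixel from
            # r_src = r * (1 + k * r**2), shifting the halo toward the center.
            h, w = channel.shape
            cy, cx = center if center else ((h - 1) / 2.0, (w - 1) / 2.0)
            yy, xx = np.mgrid[0:h, 0:w].astype(float)
            dy, dx = yy - cy, xx - cx
            scale = 1.0 + k * (dx**2 + dy**2)
            coords = np.array([cy + dy * scale, cx + dx * scale])
            return map_coordinates(channel, coords, order=1, mode='nearest')

        # Per-channel coefficients come from calibration; values are placeholders.
        # rgb[..., 0] = correct_channel(rgb[..., 0], k=+2e-8)  # red channel
        # rgb[..., 2] = correct_channel(rgb[..., 2], k=-2e-8)  # blue channel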

  5. Surrogate Based Uni/Multi-Objective Optimization and Distribution Estimation Methods

    NASA Astrophysics Data System (ADS)

    Gong, W.; Duan, Q.; Huo, X.

    2017-12-01

    Parameter calibration has been demonstrated as an effective way to improve the performance of dynamic models, such as hydrological models, land surface models, and weather and climate models. Traditional optimization algorithms usually require a huge number of model evaluations, making dynamic model calibration very difficult, or even computationally prohibitive. With the help of a series of recently developed adaptive surrogate-modelling based optimization methods (the uni-objective optimization method ASMO, the multi-objective optimization method MO-ASMO, and the probability distribution estimation method ASMO-PODE), the number of model evaluations can be significantly reduced to several hundred, making it possible to calibrate very expensive dynamic models, such as regional high-resolution land surface models, weather forecast models such as WRF, and intermediate-complexity earth system models such as LOVECLIM. This presentation provides a brief introduction to the common framework of the adaptive surrogate-based optimization algorithms ASMO, MO-ASMO and ASMO-PODE, a case study of Common Land Model (CoLM) calibration in the Heihe river basin in Northwest China, and an outlook on potential applications of surrogate-based optimization methods.

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Saunders, P.

    The majority of general-purpose low-temperature handheld radiation thermometers are severely affected by the size-of-source effect (SSE). Calibration of these instruments is pointless unless the SSE is accounted for in the calibration process. Traditional SSE measurement techniques, however, are costly and time consuming, and because the instruments are direct-reading in temperature, traditional SSE results are not easily interpretable, particularly by the general user. This paper describes a simplified method for measuring the SSE, suitable for second-tier calibration laboratories and requiring no additional equipment, and proposes a means of reporting SSE results on a calibration certificate that should be easily understood by the non-specialist user.

  7. An accurate cost effective DFT approach to study the sensing behaviour of polypyrrole towards nitrate ions in gas and aqueous phases.

    PubMed

    Wasim, Fatima; Mahmood, Tariq; Ayub, Khurshid

    2016-07-28

    Density functional theory (DFT) calculations have been performed to study the response of polypyrrole towards nitrate ions in gas and aqueous phases. First, an accurate estimate of interaction energies is obtained by methods calibrated against the gold standard CCSD(T) method. Then, a number of low cost DFT methods are also evaluated for their ability to accurately estimate the binding energies of polymer-nitrate complexes. The low cost methods evaluated here include dispersion corrected potential (DCP), Grimme's D3 correction, counterpoise correction of the B3LYP method, and Minnesota functionals (M05-2X). The interaction energies calculated using the counterpoise (CP) correction and DCP methods at the B3LYP level are in better agreement with the interaction energies calculated using the calibrated methods. The interaction energies of an infinite polymer (polypyrrole) with nitrate ions are calculated by a variety of low cost methods in order to find the associated errors. The electronic and spectroscopic properties of polypyrrole oligomers nPy (where n = 1-9) and nPy-NO3(-) complexes are calculated, and then extrapolated for an infinite polymer through a second degree polynomial fit. Charge analysis, frontier molecular orbital (FMO) analysis and density of state studies also reveal the sensing ability of polypyrrole towards nitrate ions. Interaction energies, charge analysis and density of states analyses illustrate that the response of polypyrrole towards nitrate ions is considerably reduced in the aqueous medium (compared to the gas phase).

  8. Global calibration of multi-cameras with non-overlapping fields of view based on photogrammetry and reconfigurable target

    NASA Astrophysics Data System (ADS)

    Xia, Renbo; Hu, Maobang; Zhao, Jibin; Chen, Songlin; Chen, Yueling

    2018-06-01

    Multi-camera vision systems are often needed to achieve large-scale and high-precision measurement because these systems have larger fields of view (FOV) than a single camera. Multiple cameras may have no or only narrow overlapping FOVs in many applications, which poses a huge challenge for global calibration. This paper presents a global calibration method for multi-cameras without overlapping FOVs based on photogrammetry technology and a reconfigurable target. Firstly, two planar targets are fixed together and made into a long target according to the distance between the two cameras to be calibrated. The relative positions of the two planar targets can be obtained by photogrammetric methods and used as invariant constraints in the global calibration. Then, the reprojection errors of the target feature points in the two cameras' coordinate systems are calculated simultaneously and optimized by the Levenberg-Marquardt algorithm to find the optimal solution of the transformation matrix between the two cameras. Finally, all the camera coordinate systems are converted to the reference coordinate system in order to achieve global calibration. Experiments show that the proposed method has the advantages of high accuracy (the RMS error is 0.04 mm) and low cost and is especially suitable for on-site calibration.
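
    The reprojection-error minimization at the heart of the method can be sketched as follows; this is a hedged Python illustration, not the authors' code. A rotation vector and translation taking camera-2 coordinates into the camera-1 frame are refined with SciPy's Levenberg-Marquardt solver, and pts_cam2, uv_cam1, and K1 are hypothetical inputs from the reconfigurable target and the camera intrinsics.

        import numpy as np
        from scipy.optimize import least_squares
        from scipy.spatial.transform import Rotation

        def reprojection_residuals(params, pts_cam2, uv_cam1, K1):
            # params: rotation vector (3) + translation (3) mapping camera-2
            # coordinates into the camera-1 frame.
            R = Rotation.from_rotvec(params[:3]).as_matrix()
            t = params[3:]
            pts = pts_cam2 @ R.T + t          # target points in camera-1 frame
            proj = pts @ K1.T                 # pinhole projection
            uv = proj[:, :2] / proj[:, 2:3]
            return (uv - uv_cam1).ravel()

        # pts_cam2: Nx3 target points expressed in camera 2's frame (via the
        # photogrammetric inter-target constraint); uv_cam1: Nx2 image
        # observations in camera 1; K1: 3x3 intrinsic matrix.
        # sol = least_squares(reprojection_residuals, np.zeros(6),
        #                     args=(pts_cam2, uv_cam1, K1), method='lm')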

  9. Assessment of annual pollutant loads in combined sewers from continuous turbidity measurements: sensitivity to calibration data.

    PubMed

    Lacour, C; Joannis, C; Chebbo, G

    2009-05-01

    This article presents a methodology for assessing annual wet weather Suspended Solids (SS) and Chemical Oxygen Demand (COD) loads in combined sewers, along with the associated uncertainties from continuous turbidity measurements. The proposed method is applied to data from various urban catchments in the cities of Paris and Nantes. The focus here concerns the impact of the number of rain events sampled for calibration (i.e. through establishing linear SS/turbidity or COD/turbidity relationships) on the uncertainty of annual pollutant load assessments. Two calculation methods are investigated, both of which rely on Monte Carlo simulations: random assignment of event-specific calibration relationships to each individual rain event, and the use of an overall relationship built from the entire available data set. Since results indicate a fairly low inter-event variability for calibration relationship parameters, an accurate assessment of pollutant loads can be derived, even when fewer than 10 events are sampled for calibration purposes. For operational applications, these results suggest that turbidity could provide a more precise evaluation of pollutant loads at lower cost than typical sampling methods.
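
    The first calculation method (random assignment of event-specific calibration relationships) can be summarized in a few lines of Python; this is a hedged sketch under our own variable names, not the authors' code.

        import numpy as np

        rng = np.random.default_rng(1)

        def annual_load_mc(event_turbidity, event_volumes, calibrations, n=10_000):
            # calibrations: (slope, intercept) pairs fitted on individually
            # sampled rain events. Each Monte Carlo draw assigns a randomly
            # chosen event-specific relationship to every event and sums loads.
            slopes, intercepts = np.array(calibrations).T
            loads = np.empty(n)
            for i in range(n):
                idx = rng.integers(0, len(slopes), size=len(event_volumes))
                conc = slopes[idx] * event_turbidity + intercepts[idx]  # mg/L
                loads[i] = np.sum(conc * event_volumes)                 # volumes in L
            return loads.mean(), loads.std()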

  10. An innovative method for coordinate measuring machine one-dimensional self-calibration with simplified experimental process.

    PubMed

    Fang, Cheng; Butler, David Lee

    2013-05-01

    In this paper, an innovative method for CMM (coordinate measuring machine) self-calibration is proposed. In contrast to conventional CMM calibration, which relies heavily on a high-precision reference standard such as a laser interferometer, the proposed calibration method is based on a low-cost artefact fabricated with commercially available precision ball bearings. By optimizing the mathematical model and rearranging the data sampling positions, the experimental process and data analysis can be simplified. In mathematical terms, the number of samples can be minimized by eliminating the redundant equations among those configured by the experimental data array. The section lengths of the artefact are measured at arranged positions, with which an equation set can be configured to determine the measurement errors at the corresponding positions. With the proposed method, the equation set is short of one equation, which can be supplemented either by measuring the total length of the artefact with a higher-precision CMM or by calibrating the single-point error at the extreme position with a laser interferometer; in this paper, the latter is selected. With spline interpolation, the error compensation curve can be determined. To verify the proposed method, a simple calibration system was set up on a commercial CMM. Experimental results showed that, with the error compensation curve, the measurement uncertainty can be reduced to 50% of its original value.

  11. Comparison of Calibration Techniques for Low-Cost Air Quality Monitoring

    NASA Astrophysics Data System (ADS)

    Malings, C.; Ramachandran, S.; Tanzer, R.; Kumar, S. P. N.; Hauryliuk, A.; Zimmerman, N.; Presto, A. A.

    2017-12-01

    Assessing the intra-city spatial distribution and temporal variability of air quality can be facilitated by a dense network of monitoring stations. However, the cost of implementing such a network can be prohibitive if high-quality but high-cost monitoring systems are used. To this end, the Real-time Affordable Multi-Pollutant (RAMP) sensor package has been developed at the Center for Atmospheric Particle Studies of Carnegie Mellon University, in collaboration with SenSevere LLC. This self-contained unit can measure up to five gases out of CO, SO2, NO, NO2, O3, VOCs, and CO2, along with temperature and relative humidity. Responses of individual gas sensors can vary greatly even when exposed to the same ambient conditions. Those of VOC sensors in particular were observed to vary by a factor of 8, which suggests that each sensor requires its own calibration model. Accordingly, we apply and compare two different calibration methods on data collected by RAMP sensors collocated with a reference monitor station. The first method, random forest (RF) modeling, is a rule-based method which maps sensor responses to pollutant concentrations by implementing a trained sequence of decision rules. RF modeling has previously been used for other RAMP gas sensors by the group, and has produced precise calibrated measurements. However, RF models can only predict pollutant concentrations within the range observed in the training data collected during the collocation period. The second method, Gaussian process (GP) modeling, is a probabilistic Bayesian technique whereby broad prior estimates of pollutant concentrations are updated using sensor responses to generate more refined posterior predictions, as well as allowing predictions beyond the range of the training data. The accuracy and precision of these techniques are assessed and compared on VOC data collected during the summer of 2017 in Pittsburgh, PA. By combining pollutant data gathered by each RAMP sensor and applying appropriate calibration techniques, the potentially noisy or biased responses of individual sensors can be mapped to pollutant concentration values which are comparable to those of reference instruments.
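
    As a concrete illustration of the second method, the following Python sketch fits a Gaussian process calibration with scikit-learn; the feature layout (sensor signal plus temperature and relative humidity) and the synthetic data are our assumptions, not the RAMP pipeline itself.

        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF, WhiteKernel

        # X: sensor responses plus temperature and RH from the collocation
        # period; y: reference-monitor concentrations (random stand-ins here).
        X, y = np.random.rand(500, 4), np.random.rand(500)

        kernel = 1.0 * RBF(length_scale=np.ones(X.shape[1])) + WhiteKernel()
        gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, y)

        # Posterior mean and uncertainty for new readings; unlike a purely
        # rule-based model, predictions extend (with widening uncertainty)
        # beyond the range seen in training.
        y_pred, y_std = gp.predict(np.random.rand(10, 4), return_std=True)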

  12. Toward an alternative hardness kernel matrix structure in the Electronegativity Equalization Method (EEM).

    PubMed

    Chaves, J; Barroso, J M; Bultinck, P; Carbó-Dorca, R

    2006-01-01

    This study presents an alternative formulation of the Electronegativity Equalization Method (EEM) in which the usual Coulomb kernel is transformed into a smooth function. The new framework, like the classical EEM, permits fast calculation of the atomic charges in a given molecule at a small computational cost. The original EEM procedure requires prior calibration of the implied atomic hardnesses and electronegativities, using a chosen set of molecules. In the new EEM algorithm, half the number of parameters needs to be calibrated, since a relationship between electronegativities and hardnesses has been found.
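
    For reference, classical EEM reduces to one linear solve per molecule. The sketch below implements the textbook formulation with the usual 1/R Coulomb kernel (the paper's contribution replaces that kernel with a smooth function); parameter values and atomic units are assumptions of this illustration.

        import numpy as np

        def eem_charges(chi, eta, coords, total_charge=0.0):
            # Classical EEM: for each atom i,
            #   2*eta_i*q_i + sum_{j != i} q_j / R_ij - chi_bar = -chi_i,
            # plus charge conservation sum_i q_i = Q (atomic units assumed).
            n = len(chi)
            A = np.zeros((n + 1, n + 1))
            b = np.zeros(n + 1)
            for i in range(n):
                A[i, i] = 2.0 * eta[i]
                for j in range(n):
                    if i != j:
                        A[i, j] = 1.0 / np.linalg.norm(coords[i] - coords[j])
                A[i, n] = -1.0      # unknown equalized electronegativity chi_bar
                b[i] = -chi[i]
            A[n, :n] = 1.0          # charge conservation constraint
            b[n] = total_charge
            return np.linalg.solve(A, b)[:n]   # atomic charges q_1..q_N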

  13. Research on the attitude of small UAV based on MEMS devices

    NASA Astrophysics Data System (ADS)

    Shi, Xiaojie; Lu, Libin; Jin, Guodong; Tan, Lining

    2017-05-01

    This paper introduces the research principles and implementation of a small-UAV attitude and heading system based on MEMS devices. A Gauss-Newton method based on least squares is used to calibrate the MEMS accelerometer and gyroscope. The accuracy of the attitude is improved by using modified complementary filtering to correct the attitude angle error. Experimental data show that the attitude system designed in this paper meets the attitude accuracy requirements of a small UAV at small size and low cost.
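
    The complementary-filter idea referenced above can be illustrated in a few lines. This is the standard first-order filter (the paper uses a modified variant), with the blending coefficient alpha as an assumed tuning parameter.

        import numpy as np

        def complementary_filter(gyro_rate, accel_angle, dt, alpha=0.98):
            # Blend the integrated gyro rate (accurate short-term, drifts
            # long-term) with the accelerometer-derived angle (noisy
            # short-term, drift-free long-term).
            angle = accel_angle[0]
            out = np.empty(len(gyro_rate))
            for k in range(len(gyro_rate)):
                angle = (alpha * (angle + gyro_rate[k] * dt)
                         + (1.0 - alpha) * accel_angle[k])
                out[k] = angle
            return out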

  14. Highly precise acoustic calibration method of ring-shaped ultrasound transducer array for plane-wave-based ultrasound tomography

    NASA Astrophysics Data System (ADS)

    Terada, Takahide; Yamanaka, Kazuhiro; Suzuki, Atsuro; Tsubota, Yushi; Wu, Wenjing; Kawabata, Ken-ichi

    2017-07-01

    Ultrasound computed tomography (USCT) is promising as a non-invasive, painless, operator-independent and quantitative system for breast-cancer screening. Assembly error, production tolerance, and aging-degradation variations of the hardware components, particularly of plane-wave-based USCT systems, may hamper cost effectiveness, precise imaging, and robust operation. The plane wave is transmitted from a ring-shaped transducer array to receive the signal at a high signal-to-noise ratio and enable fast aperture synthesis. There are four signal-delay components: response delays in the transmitters and receivers, and propagation delays depending on the positions of the transducer elements and on their directivity. We developed a highly precise method for calibrating these delay components and evaluated it with our prototype plane-wave-based USCT system. Our calibration method was found to be effective in reducing delay errors: gaps and curves were eliminated from the plane wave, and echo images of wires were sharpened over the entire imaging area.

  15. Proxy-to-proxy calibration: Increasing the temporal resolution of quantitative climate reconstructions

    PubMed Central

    von Gunten, Lucien; D'Andrea, William J.; Bradley, Raymond S.; Huang, Yongsong

    2012-01-01

    High-resolution paleoclimate reconstructions are often restricted by the difficulties of sampling geologic archives in great detail and the analytical costs of processing large numbers of samples. Using sediments from Lake Braya Sø, Greenland, we introduce a new method that provides a quantitative high-resolution paleoclimate record by combining measurements of the alkenone unsaturation index (U37K) with non-destructive scanning reflectance spectroscopic measurements in the visible range (VIS-RS). The proxy-to-proxy (PTP) method exploits two distinct calibrations: the in situ calibration of U37K to lake water temperature and the calibration of scanning VIS-RS data to down-core U37K data. Using this approach, we produced a quantitative temperature record that is longer and has 5 times higher sampling resolution than the original U37K time series, thereby allowing detection of temperature variability in frequency bands characteristic of the AMO over the past 7,000 years. PMID:22934132

  16. Flight Test Results of a GPS-Based Pitot-Static Calibration Method Using Output-Error Optimization for a Light Twin-Engine Airplane

    NASA Technical Reports Server (NTRS)

    Martos, Borja; Kiszely, Paul; Foster, John V.

    2011-01-01

    As part of the NASA Aviation Safety Program (AvSP), a novel pitot-static calibration method was developed to allow rapid in-flight calibration for subscale aircraft while flying within confined test areas. This approach uses Global Positioning System (GPS) technology coupled with modern system identification methods that rapidly compute optimal pressure error models over a range of airspeeds with defined confidence bounds. This method has been demonstrated in subscale flight tests and has shown small 2-σ error bounds with a significant reduction in test time compared to other methods. The current research was motivated by the desire to further evaluate and develop this method for full-scale aircraft. A goal of this research was to develop an accurate calibration method that enables reductions in test equipment and flight time, thus reducing costs. The approach involved analysis of data acquisition requirements, development of efficient flight patterns, and analysis of pressure error models based on system identification methods. Flight tests were conducted at The University of Tennessee Space Institute (UTSI) utilizing an instrumented Piper Navajo research aircraft. In addition, the UTSI engineering flight simulator was used to investigate test maneuver requirements and handling qualities issues associated with this technique. This paper provides a summary of piloted simulation and flight test results that illustrate the performance and capabilities of the NASA calibration method. Discussion of maneuver requirements and data analysis methods is included, as well as recommendations for piloting technique.

  17. Photogrammetry in 3d Modelling of Human Bone Structures from Radiographs

    NASA Astrophysics Data System (ADS)

    Hosseinian, S.; Arefi, H.

    2017-05-01

    Photogrammetry can have a great impact on the success of medical processes for diagnosis, treatment and surgery. Precise 3D models, which can be achieved by photogrammetry, considerably improve the results of orthopedic surgeries and processes. The usual 3D imaging techniques, computed tomography (CT) and magnetic resonance imaging (MRI), have some limitations, such as being usable only in non-weight-bearing positions, cost, high radiation dose (for CT), and the limitations of MRI for patients with ferromagnetic implants or objects in their bodies. 3D reconstruction of bony structures from biplanar X-ray images is a reliable and accepted alternative for achieving accurate 3D information with a low radiation dose in weight-bearing positions. The information can be obtained from multi-view radiographs by using photogrammetry. The primary step for 3D reconstruction of human bone structures from medical X-ray images is calibration, which is done by applying the principles of photogrammetry. After the calibration step, 3D reconstruction can be done using efficient methods with different levels of automation. Because of the different nature of X-ray images compared with optical images, there are distinct challenges in medical applications for the calibration step of stereoradiography. In this paper, after demonstrating the general steps and principles of 3D reconstruction from X-ray images, calibration methods for 3D reconstruction from radiographs are compared and assessed from a photogrammetric point of view by considering various metrics such as their camera models, calibration objects, accuracy, availability, patient-friendliness and cost.

  18. A novel multivariate approach using science-based calibration for direct coating thickness determination in real-time NIR process monitoring.

    PubMed

    Möltgen, C-V; Herdling, T; Reich, G

    2013-11-01

    This study demonstrates an approach, using science-based calibration (SBC), for direct coating thickness determination on heart-shaped tablets in real-time. Near-Infrared (NIR) spectra were collected during four full industrial pan coating operations. The tablets were coated with a thin hydroxypropyl methylcellulose (HPMC) film up to a film thickness of 28 μm. The application of SBC permits the calibration of the NIR spectral data without using costly determined reference values. This is due to the fact that SBC combines classical methods to estimate the coating signal and statistical methods for the noise estimation. The approach enabled the use of NIR for the measurement of the film thickness increase from around 8 to 28 μm of four independent batches in real-time. The developed model provided a spectroscopic limit of detection for the coating thickness of 0.64 ± 0.03 μm root-mean square (RMS). In the commonly used statistical methods for calibration, such as Partial Least Squares (PLS), sufficiently varying reference values are needed for calibration. For thin non-functional coatings this is a challenge because the quality of the model depends on the accuracy of the selected calibration standards. The obvious and simple approach of SBC eliminates many of the problems associated with the conventional statistical methods and offers an alternative for multivariate calibration. Copyright © 2013 Elsevier B.V. All rights reserved.

  19. Spectral multivariate calibration without laboratory prepared or determined reference analyte values.

    PubMed

    Ottaway, Josh; Farrell, Jeremy A; Kalivas, John H

    2013-02-05

    An essential part of calibration is establishing the analyte calibration reference samples. These samples must characterize the sample matrix and measurement conditions (chemical, physical, instrumental, and environmental) of any sample to be predicted. Calibration usually requires measuring spectra for numerous reference samples in addition to determining the corresponding analyte reference values. Both tasks are typically time-consuming and costly. This paper reports on a method named pure component Tikhonov regularization (PCTR) that does not require laboratory prepared or determined reference values. Instead, an analyte pure component spectrum is used in conjunction with nonanalyte spectra for calibration. Nonanalyte spectra can come from different sources including pure component interference samples, blanks, and constant analyte samples. The approach is also applicable to calibration maintenance when the analyte pure component spectrum is measured in one set of conditions and nonanalyte spectra are measured in new conditions. The PCTR method balances the trade-offs between calibration model shrinkage and the degree of orthogonality to the nonanalyte content (model direction) in order to obtain accurate predictions. Using visible and near-infrared (NIR) spectral data sets, the PCTR results are comparable to those obtained using ridge regression (RR) with reference calibration sets. The flexibility of PCTR also allows including reference samples if such samples are available.
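
    One way to read the PCTR idea is as a Tikhonov-regularized least-squares problem whose "calibration set" is the pure component spectrum (target response 1) stacked with nonanalyte spectra (target response 0). The Python sketch below is our hedged interpretation, not the authors' published algorithm; the regularization parameter lam controls the shrinkage/orthogonality trade-off mentioned above.

        import numpy as np

        def pctr_like_model(pure_spectrum, nonanalyte_spectra, lam):
            # Solve min_b ||X b - y||^2 + lam^2 ||b||^2 with
            # X = [pure spectrum; nonanalyte spectra], y = [1; 0; ...; 0].
            X = np.vstack([pure_spectrum, nonanalyte_spectra])
            y = np.zeros(X.shape[0]); y[0] = 1.0
            A = X.T @ X + lam**2 * np.eye(X.shape[1])
            return np.linalg.solve(A, X.T @ y)   # regression vector b

        # Prediction for a new spectrum s is then s @ b (up to scaling set
        # by the pure-component normalization).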

  20. Assuring Software Cost Estimates: Is it an Oxymoron?

    NASA Technical Reports Server (NTRS)

    Hihn, Jarius; Tregre, Grant

    2013-01-01

    The software industry repeatedly observes cost growth of well over 100% even after decades of cost estimation research and well-known best practices, so "What's the problem?" In this paper we provide an overview of the current state of software cost estimation best practice. We then explore whether applying some of the methods used in software assurance might improve the quality of software cost estimates. This paper focuses especially on issues associated with model calibration, estimate review, and the development and documentation of estimates as part of an integrated plan.

  1. Comparison between different direct search optimization algorithms in the calibration of a distributed hydrological model

    NASA Astrophysics Data System (ADS)

    Campo, Lorenzo; Castelli, Fabio; Caparrini, Francesca

    2010-05-01

    Modern distributed hydrological models allow the representation of different surface and subsurface phenomena with great accuracy and high spatial and temporal resolution. Such complexity requires, in general, an equally accurate parametrization. A number of approaches have been followed in this respect, from simple local search methods (like the Nelder-Mead algorithm), which minimize a cost function representing some distance between the model's output and the available measurements, to more complex approaches like dynamic filters (such as the Ensemble Kalman Filter) that carry out assimilation of the observations. In this work the first approach was followed in order to compare the performance of three different direct search algorithms in the calibration of a distributed hydrological balance model. The direct search family can be defined as the category of algorithms that make no use of derivatives of the cost function (which is, in general, a black box) and comprises a large number of possible approaches. The main benefit of this class of methods is that they don't require changes in the implementation of the numerical codes to be calibrated. The first algorithm is the classical Nelder-Mead, often used in many applications and taken here as a reference. The second algorithm is a GSS (Generating Set Search) algorithm, built to guarantee the conditions of global convergence and suitable for the parallel, multi-start implementation presented here. The third is the EGO algorithm (Efficient Global Optimization), which is particularly suitable for calibrating black-box cost functions that are computationally expensive to evaluate (like a hydrological simulation). EGO minimizes the number of evaluations of the cost function by balancing the need to minimize a response surface that approximates the problem against the need to improve the approximation by sampling where the prediction error may be high. The hydrological model to be calibrated was MOBIDIC, a complete distributed balance model developed at the Department of Civil and Environmental Engineering of the University of Florence. A discussion comparing the effectiveness of the different algorithms on several case studies of central Italy basins is provided.
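
    For the reference algorithm, the derivative-free calibration loop is easy to picture; the following Python sketch (our illustration, with run_model and observed as hypothetical placeholders for the black-box simulation and the measurements) minimizes an RMSE cost with SciPy's Nelder-Mead implementation.

        import numpy as np
        from scipy.optimize import minimize

        def cost(params, run_model, observed):
            # run_model is the black-box hydrological simulation returning,
            # e.g., a discharge series for the candidate parameter set.
            simulated = run_model(params)
            return np.sqrt(np.mean((simulated - observed) ** 2))  # RMSE

        # x0: initial parameter guess supplied by the modeller.
        # result = minimize(cost, x0, args=(run_model, observed),
        #                   method='Nelder-Mead', options={'maxiter': 2000})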

  2. Self-Calibration Approach for Mixed Signal Circuits in Systems-on-Chip

    NASA Astrophysics Data System (ADS)

    Jung, In-Seok

    MOSFET scaling has served industry very well for a few decades by providing improvements in transistor performance, power, and cost. However, scaled systems-on-chip entail high test complexity and cost due to several issues, such as limited pin count and the integration of mixed analog and digital circuits. Self-calibration is therefore an excellent and promising method to improve yield and reduce manufacturing cost by simplifying test complexity, because process variation effects can be addressed by means of self-calibration techniques. Since previously published calibration techniques were developed for specific targeted applications, they are not easily utilized elsewhere. To solve the aforementioned issues, in this dissertation several novel self-calibration design techniques for mixed-signal circuits are proposed for an analog-to-digital converter (ADC) to reduce mismatch error and improve performance. ADCs are essential components in SoCs, and the proposed self-calibration approach also compensates for process variations. The proposed approach targets the successive approximation (SA) ADC. First, the offset error of the comparator in the SA-ADC is reduced by enabling the capacitor array at the input nodes for better matching. In addition, auxiliary capacitors for each capacitor of the DAC in the SA-ADC are controlled by a synthesized digital controller to minimize the mismatch error of the DAC. Since the proposed technique is applied during foreground operation, the power overhead in the SA-ADC case is minimal because the calibration circuit is deactivated during normal operation. Another benefit is that the offset voltage of the comparator is continuously adjusted at every one-bit decision step, because both the inherent offset voltage of the comparator and the mismatch of the DAC are compensated simultaneously. The synthesized digital calibration control circuit operates in foreground mode, and the controller has been highly optimized for low power and better performance with a simplified structure. In addition, to increase the sampling clock frequency of the proposed self-calibration approach, a novel variable clock period method is proposed. To achieve high-speed SAR operation, the variable clock time technique reduces not only peak current but also die area; it removes wasted conversion time and readily extends the SAR operation speed. To verify and demonstrate the proposed techniques, a prototype charge-redistribution SA-ADC with the proposed self-calibration is implemented in a 130 nm standard CMOS process. The prototype circuit's silicon area is 0.0715 mm^2, and it consumes 4.62 mW from a 1.2 V power supply.

  3. A novel method of calibrating a MEMS inertial reference unit on a turntable under limited working conditions

    NASA Astrophysics Data System (ADS)

    Lu, Jiazhen; Liang, Shufang; Yang, Yanqiang

    2017-10-01

    Micro-electro-mechanical systems (MEMS) inertial measurement devices tend to be widely used in inertial navigation systems and have quickly emerged on the market due to their low cost, high reliability and small size. Calibration is the most effective way to remove the deterministic error of an inertial reference unit (IRU), which in this paper consists of three orthogonally mounted MEMS gyros. However, common laboratory testing methods cannot predict the corresponding errors precisely when the turntable's working conditions are restricted; in this paper, the turntable can only provide a relatively small rotation angle. Moreover, the errors must be compensated exactly because of the great effect caused by the high angular velocity of the craft. To address this problem, a new method is proposed to evaluate the MEMS IRU's performance. In the calibration procedure, a one-axis table that rotates through a limited angle in the form of a sine function is utilized to provide the MEMS IRU's angular velocity. A new algorithm based on Fourier series is designed to calculate the misalignment and scale factor errors. The proposed method is tested in a set of experiments, and the calibration results are compared to a traditional calibration method performed under normal working conditions to verify their correctness. In addition, a verification test at a given rotation speed is implemented for further demonstration.

  4. Solution to the Problem of Calibration of Low-Cost Air Quality Measurement Sensors in Networks.

    PubMed

    Miskell, Georgia; Salmond, Jennifer A; Williams, David E

    2018-04-27

    We provide a simple, remote, continuous calibration technique suitable for application in a hierarchical network featuring a few well-maintained, high-quality instruments ("proxies") and a larger number of low-cost devices. The ideas are grounded in a clear definition of the purpose of a low-cost network, defined here as providing reliable information on air quality at small spatiotemporal scales. The technique assumes linearity of the sensor signal. It derives running slope and offset estimates by matching the means and standard deviations of the sensor data to values derived from proxies over the same period. The idea is extremely simple: choose an appropriate proxy and an averaging time that is sufficiently long to remove the influence of short-term fluctuations but sufficiently short that it preserves the regular diurnal variations. The use of running statistical measures rather than cross-correlation of sites means that the method is robust against periods of missing data. The ideas are first developed using simulated data and then demonstrated using field data, at hourly and 1 min time-scales, from a real network of low-cost semiconductor-based sensors. Despite the almost naïve simplicity of the method, it was robust for both drift detection and calibration correction applications. We discuss the use of generally available geographic and environmental data, as well as microscale land-use regression, as means to enhance the proxy estimates and to generalize the ideas to other pollutants with high spatial variability, such as nitrogen dioxide and particulates. These improvements can also be used to minimize the required number of proxy sites.
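
    The running-statistics correction described above translates almost directly into code. The sketch below is a minimal pandas rendering under our assumptions (time-indexed series; a 72 h window chosen to smooth short-term fluctuations while preserving diurnal cycles):

        import pandas as pd

        def remote_calibration(sensor, proxy, window='72h'):
            # Match running mean and standard deviation of the sensor signal
            # to those of the proxy over the same averaging window:
            #   gain   = sd(proxy) / sd(sensor)
            #   offset = mean(proxy) - gain * mean(sensor)
            gain = proxy.rolling(window).std() / sensor.rolling(window).std()
            offset = proxy.rolling(window).mean() - gain * sensor.rolling(window).mean()
            return gain * sensor + offset

        # sensor and proxy are pandas Series sharing a DatetimeIndex; running
        # statistics keep the method robust to gaps of missing data.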

  5. An Improved Calibration Method for a Rotating 2D LIDAR System.

    PubMed

    Zeng, Yadan; Yu, Heng; Dai, Houde; Song, Shuang; Lin, Mingqiang; Sun, Bo; Jiang, Wei; Meng, Max Q-H

    2018-02-07

    This paper presents an improved calibration method for a rotating two-dimensional light detection and ranging (R2D-LIDAR) system, which can obtain a 3D scanning map of the surroundings. The R2D-LIDAR system, composed of a 2D LIDAR and a rotating unit, is pervasively used in the field of robotics owing to its low cost and dense scanning data. Nevertheless, the R2D-LIDAR system must be calibrated before building the geometric model because there are assembly deviations and wear between the 2D LIDAR and the rotating unit. Hence, the calibration procedure should address both the alignment between the two devices and the bias of the 2D LIDAR itself. The main purpose of this work is to resolve the 2D LIDAR bias issue using a flat plane and the Levenberg-Marquardt (LM) algorithm. Experimental results for the calibration of the R2D-LIDAR system prove the reliability of this strategy to accurately estimate sensor offsets, with errors in the captured scans ranging from -15 mm to 15 mm.
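
    As an illustration of the plane-based step, the residuals fed to a Levenberg-Marquardt solver can be written as signed point-to-plane distances; the sketch below is a hedged Python outline in which scans_to_points, the offset parameterization, and x0 are hypothetical placeholders for the sensor model.

        import numpy as np
        from scipy.optimize import least_squares

        def plane_residuals(params, scans_to_points, plane_n, plane_d):
            # params: candidate sensor offsets (e.g., range bias and mounting
            # angles); scans_to_points maps raw scans plus params to 3D points.
            pts = scans_to_points(params)
            return pts @ plane_n - plane_d   # signed distances to the flat plane

        # plane_n (unit normal) and plane_d define the calibration plane.
        # sol = least_squares(plane_residuals, x0,
        #                     args=(scans_to_points, plane_n, plane_d),
        #                     method='lm')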

  6. An Improved Calibration Method for a Rotating 2D LIDAR System

    PubMed Central

    Zeng, Yadan; Yu, Heng; Song, Shuang; Lin, Mingqiang; Sun, Bo; Jiang, Wei; Meng, Max Q.-H.

    2018-01-01

    This paper presents an improved calibration method for a rotating two-dimensional light detection and ranging (R2D-LIDAR) system, which can obtain a 3D scanning map of the surroundings. The R2D-LIDAR system, composed of a 2D LIDAR and a rotating unit, is pervasively used in the field of robotics owing to its low cost and dense scanning data. Nevertheless, the R2D-LIDAR system must be calibrated before building the geometric model because there are assembly deviations and wear between the 2D LIDAR and the rotating unit. Hence, the calibration procedure should address both the alignment between the two devices and the bias of the 2D LIDAR itself. The main purpose of this work is to resolve the 2D LIDAR bias issue using a flat plane and the Levenberg-Marquardt (LM) algorithm. Experimental results for the calibration of the R2D-LIDAR system prove the reliability of this strategy to accurately estimate sensor offsets, with errors in the captured scans ranging from -15 mm to 15 mm. PMID:29414885

  7. Heliostat kinematic system calibration using uncalibrated cameras

    NASA Astrophysics Data System (ADS)

    Burisch, Michael; Gomez, Luis; Olasolo, David; Villasante, Cristobal

    2017-06-01

    The efficiency of the solar field greatly depends on the ability of the heliostats to precisely reflect solar radiation onto a central receiver. To control the heliostats with such precision, accurate knowledge of the motion of each of them, modeled as a kinematic system, is required. Determining the parameters of this system for each heliostat through a calibration system is crucial for the efficient operation of the solar field. For small-sized heliostats, being able to perform such a calibration in a fast and automatic manner is imperative, as the solar field may contain tens or even hundreds of thousands of them. A calibration system that can rapidly recalibrate a whole solar field would also allow reducing costs: heliostats are generally designed to provide stability over a long period of time, and if this requirement can be relaxed, with any occurring error compensated by adapting parameters in a model, the cost of the heliostat can be reduced. The presented method describes such an automatic calibration system using uncalibrated cameras rigidly attached to each heliostat. The cameras are used to observe targets spread throughout the solar field; based on this, the kinematic system of the heliostat can be estimated with high precision. A comparison of this approach to similar solutions shows the viability of the proposed solution.

  8. Food adulteration analysis without laboratory prepared or determined reference food adulterant values.

    PubMed

    Kalivas, John H; Georgiou, Constantinos A; Moira, Marianna; Tsafaras, Ilias; Petrakis, Eleftherios A; Mousdis, George A

    2014-04-01

    Quantitative analysis of food adulterants is an important health and economic issue that needs to be fast and simple. Spectroscopy has significantly reduced analysis time. However, still needed are preparations of analyte calibration samples matrix-matched to prediction samples, which can be laborious and costly. Reported in this paper is the application of a newly developed pure component Tikhonov regularization (PCTR) process that does not require laboratory prepared or determined reference values and, hence, is a greener calibration method. The PCTR method requires an analyte pure component spectrum and non-analyte spectra. As a food analysis example, synchronous fluorescence spectra of extra virgin olive oil samples adulterated with sunflower oil are used. Results are shown to be better than those obtained using ridge regression with reference calibration samples. The flexibility of PCTR allows including reference samples and is generic for use with other instrumental methods and food products. Copyright © 2013 Elsevier Ltd. All rights reserved.

  9. Using random forest for reliable classification and cost-sensitive learning for medical diagnosis.

    PubMed

    Yang, Fan; Wang, Hua-zhen; Mi, Hong; Lin, Cheng-de; Cai, Wei-wen

    2009-01-30

    Most machine-learning classifiers output label predictions for new instances without indicating how reliable the predictions are. The applicability of these classifiers is limited in critical domains where incorrect predictions have serious consequences, like medical diagnosis. Further, the default assumption of equal misclassification costs is most likely violated in medical diagnosis. In this paper, we present a modified random forest classifier which is incorporated into the conformal predictor scheme. A conformal predictor is a transductive learning scheme that uses Kolmogorov complexity to test the randomness of a particular sample with respect to the training sets. Our method shows the well-calibrated property that performance can be set prior to classification and the accuracy rate is exactly equal to the predefined confidence level. Further, to address the cost-sensitive problem, we extend our method to a label-conditional predictor which takes into account different costs for misclassifications in different classes and allows a different confidence level to be specified for each class. Intensive experiments on benchmark datasets and real-world applications show the resultant classifier is well-calibrated and able to control the specific risk of different classes. The use of the RF outlier measure to design a nonconformity measure benefits the resultant predictor. Further, a label-conditional classifier is developed as an alternative approach to the cost-sensitive learning problem that relies on label-wise predefined confidence levels. The target of minimizing the risk of misclassification is achieved by specifying a different confidence level for each class.

  10. A machine learning calibration model using random forests to improve sensor performance for lower-cost air quality monitoring

    NASA Astrophysics Data System (ADS)

    Zimmerman, Naomi; Presto, Albert A.; Kumar, Sriniwasa P. N.; Gu, Jason; Hauryliuk, Aliaksei; Robinson, Ellis S.; Robinson, Allen L.; Subramanian, R.

    2018-01-01

    Low-cost sensing strategies hold the promise of denser air quality monitoring networks, which could significantly improve our understanding of personal air pollution exposure. Additionally, low-cost air quality sensors could be deployed to areas where limited monitoring exists. However, low-cost sensors are frequently sensitive to environmental conditions and pollutant cross-sensitivities, which have historically been poorly addressed by laboratory calibrations, limiting their utility for monitoring. In this study, we investigated different calibration models for the Real-time Affordable Multi-Pollutant (RAMP) sensor package, which measures CO, NO2, O3, and CO2. We explored three methods: (1) laboratory univariate linear regression, (2) empirical multiple linear regression, and (3) machine-learning-based calibration models using random forests (RF). Calibration models were developed for 16-19 RAMP monitors (varied by pollutant) using training and testing windows spanning August 2016 through February 2017 in Pittsburgh, PA, US. The random forest models matched (CO) or significantly outperformed (NO2, CO2, O3) the other calibration models, and their accuracy and precision were robust over time for testing windows of up to 16 weeks. Following calibration, average mean absolute error on the testing data set from the random forest models was 38 ppb for CO (14 % relative error), 10 ppm for CO2 (2 % relative error), 3.5 ppb for NO2 (29 % relative error), and 3.4 ppb for O3 (15 % relative error), and Pearson r versus the reference monitors exceeded 0.8 for most units. Model performance is explored in detail, including a quantification of model variable importance, accuracy across different concentration ranges, and performance in a range of monitoring contexts including the National Ambient Air Quality Standards (NAAQS) and the US EPA Air Sensors Guidebook recommendations of minimum data quality for personal exposure measurement. A key strength of the RF approach is that it accounts for pollutant cross-sensitivities. This highlights the importance of developing multipollutant sensor packages (as opposed to single-pollutant monitors); we determined this is especially critical for NO2 and CO2. The evaluation reveals that only the RF-calibrated sensors meet the US EPA Air Sensors Guidebook recommendations of minimum data quality for personal exposure measurement. We also demonstrate that the RF-model-calibrated sensors could detect differences in NO2 concentrations between a near-road site and a suburban site less than 1.5 km away. From this study, we conclude that combining RF models with carefully controlled state-of-the-art multipollutant sensor packages as in the RAMP monitors appears to be a very promising approach to address the poor performance that has plagued low-cost air quality sensors.
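
    A minimal scikit-learn version of the RF calibration approach looks as follows; the feature layout (raw sensor signals plus co-pollutant signals, temperature, and RH) matches the idea of exploiting cross-sensitivities, but the synthetic data and hyperparameters are assumptions of this sketch, not the RAMP models.

        import numpy as np
        from sklearn.ensemble import RandomForestRegressor
        from sklearn.metrics import mean_absolute_error

        # X: raw sensor signals plus temperature/RH from a collocation period;
        # y: reference-monitor concentrations (random stand-ins here).
        X, y = np.random.rand(2000, 6), np.random.rand(2000)
        split = 1500  # train on the first collocation window, test on the rest

        rf = RandomForestRegressor(n_estimators=300, random_state=0)
        rf.fit(X[:split], y[:split])

        print("MAE:", mean_absolute_error(y[split:], rf.predict(X[split:])))
        print("variable importance:", rf.feature_importances_)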

  11. Optical Mass Displacement Tracking: A simplified field calibration method for the electro-mechanical seismometer.

    NASA Astrophysics Data System (ADS)

    Burk, D. R.; Mackey, K. G.; Hartse, H. E.

    2016-12-01

    We have developed a simplified field calibration method for use in seismic networks that still employ the classical electro-mechanical seismometer. Smaller networks may not always have the financial capability to purchase and operate modern, state of the art equipment. Therefore these networks generally operate a modern, low-cost digitizer that is paired to an existing electro-mechanical seismometer. These systems are typically poorly calibrated. Calibration of the station is difficult to estimate because coil loading, digitizer input impedance, and amplifier gain differences vary by station and digitizer model. Therefore, it is necessary to calibrate the station channel as a complete system to take into account all components from instrument, to amplifier, to even the digitizer. Routine calibrations at the smaller networks are not always consistent, because existing calibration techniques require either specialized equipment or significant technical expertise. To improve station data quality at the small network, we developed a calibration method that utilizes open source software and a commonly available laser position sensor. Using a signal generator and a small excitation coil, we force the mass of the instrument to oscillate at various frequencies across its operating range. We then compare the channel voltage output to the laser-measured mass displacement to determine the instrument voltage sensitivity at each frequency point. Using the standard equations of forced motion, a representation of the calibration curve as a function of voltage per unit of ground velocity is calculated. A computer algorithm optimizes the curve and then translates the instrument response into a Seismic Analysis Code (SAC) poles & zeros format. Results have been demonstrated to fall within a few percent of a standard laboratory calibration. This method is an effective and affordable option for networks that employ electro-mechanical seismometers, and it is currently being deployed in regional networks throughout Russia and in Central Asia.
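
    The frequency-by-frequency sensitivity estimate at the core of the method reduces to simple arithmetic: for a sinusoidal mass displacement d*sin(2*pi*f*t), the peak velocity is 2*pi*f*d, so the channel sensitivity in V/(m/s) is the voltage amplitude divided by that peak velocity. The sketch below is our simplified rendering; the full method fits the standard forced-motion equations and exports the response as SAC poles and zeros.

        import numpy as np

        def velocity_sensitivity(freqs_hz, volt_amp, disp_amp):
            # Channel sensitivity in V/(m/s) at each excitation frequency:
            # voltage amplitude / (2*pi*f * displacement amplitude).
            f = np.asarray(freqs_hz, dtype=float)
            return np.asarray(volt_amp) / (2.0 * np.pi * f * np.asarray(disp_amp))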

  12. Spectral Transfer Learning Using Information Geometry for a User-Independent Brain-Computer Interface

    PubMed Central

    Waytowich, Nicholas R.; Lawhern, Vernon J.; Bohannon, Addison W.; Ball, Kenneth R.; Lance, Brent J.

    2016-01-01

    Recent advances in signal processing and machine learning techniques have enabled the application of Brain-Computer Interface (BCI) technologies to fields such as medicine, industry, and recreation; however, BCIs still suffer from the requirement of frequent calibration sessions due to the intra- and inter-individual variability of brain-signals, which makes calibration suppression through transfer learning an area of increasing interest for the development of practical BCI systems. In this paper, we present an unsupervised transfer method (spectral transfer using information geometry, STIG), which ranks and combines unlabeled predictions from an ensemble of information geometry classifiers built on data from individual training subjects. The STIG method is validated in both off-line and real-time feedback analysis during a rapid serial visual presentation task (RSVP). For detection of single-trial, event-related potentials (ERPs), the proposed method can significantly outperform existing calibration-free techniques as well as outperform traditional within-subject calibration techniques when limited data is available. This method demonstrates that unsupervised transfer learning for single-trial detection in ERP-based BCIs can be achieved without the requirement of costly training data, representing a step-forward in the overall goal of achieving a practical user-independent BCI system. PMID:27713685

  13. Spectral Transfer Learning Using Information Geometry for a User-Independent Brain-Computer Interface.

    PubMed

    Waytowich, Nicholas R; Lawhern, Vernon J; Bohannon, Addison W; Ball, Kenneth R; Lance, Brent J

    2016-01-01

    Recent advances in signal processing and machine learning techniques have enabled the application of Brain-Computer Interface (BCI) technologies to fields such as medicine, industry, and recreation; however, BCIs still suffer from the requirement of frequent calibration sessions due to the intra- and inter-individual variability of brain-signals, which makes calibration suppression through transfer learning an area of increasing interest for the development of practical BCI systems. In this paper, we present an unsupervised transfer method (spectral transfer using information geometry, STIG), which ranks and combines unlabeled predictions from an ensemble of information geometry classifiers built on data from individual training subjects. The STIG method is validated in both off-line and real-time feedback analysis during a rapid serial visual presentation task (RSVP). For detection of single-trial, event-related potentials (ERPs), the proposed method can significantly outperform existing calibration-free techniques as well as outperform traditional within-subject calibration techniques when limited data is available. This method demonstrates that unsupervised transfer learning for single-trial detection in ERP-based BCIs can be achieved without the requirement of costly training data, representing a step-forward in the overall goal of achieving a practical user-independent BCI system.

  14. Precise frequency calibration using television video carriers

    NASA Technical Reports Server (NTRS)

    Burkhardt, Edward E.

    1990-01-01

    The availability of inexpensive and quick precise frequency calibration methods is limited. VLF and GPS do offer precise calibration; however, antenna placement, equipment cost, and calibration time place many restrictions on the user. The USNO-maintained line-10 television Time of Coincidence (TOC) of station WTTG, channel 5, Washington, DC requires a frequency-stable video carrier. This video carrier, at 77.24 MHz, is controlled by the same cesium beam standard controlling the TOC of line-10. Excellent frequency comparisons against this video carrier have been accomplished at 95 miles (153 km). With stable propagation and a three-foot wire antenna, a part in 10^9 can be determined in a few minutes. Inexpensive field equipment with a synthesized 1 kHz offset from the video carrier offers parts-in-10^11 calibrations in a few minutes using an oscilloscope as a phase comparator.

  15. Precise frequency calibration using television video carriers

    NASA Astrophysics Data System (ADS)

    Burkhardt, Edward E.

    1990-05-01

    The availability of inexpensive and quick precise frequency calibration methods is limited. VLF and GPS do offer precise calibration; however, antenna placement, equipment cost, and calibration time place many restrictions on the user. The USNO-maintained line-10 television Time of Coincidence (TOC) of station WTTG, channel 5, Washington, DC requires a frequency-stable video carrier. This video carrier, at 77.24 MHz, is controlled by the same cesium beam standard controlling the TOC of line-10. Excellent frequency comparisons against this video carrier have been accomplished at 95 miles (153 km). With stable propagation and a three-foot wire antenna, a part in 10^9 can be determined in a few minutes. Inexpensive field equipment with a synthesized 1 kHz offset from the video carrier offers parts-in-10^11 calibrations in a few minutes using an oscilloscope as a phase comparator.

  16. High-Altitude Air Mass Zero Calibration of Solar Cells

    NASA Technical Reports Server (NTRS)

    Woodyard, James R.; Snyder, David B.

    2005-01-01

    Air mass zero (AM0) calibration of solar cells has been carried out for several years by NASA Glenn Research Center using a Lear-25 aircraft and Langley plots. The calibration flights are carried out during early fall and late winter, when the tropopause is at its lowest altitude. Measurements are made starting at about 50,000 feet and continue down to the tropopause. A joint NASA/Wayne State University program called Suntracker is underway to explore the use of weather balloon and communication technologies to characterize solar cells at altitudes up to about 100 kft. The balloon flights are low-cost and can be carried out at any time of the year. AM0 solar cell characterization employing the mountaintop, aircraft, and balloon methods is reviewed. Results of cell characterization with the Suntracker are reported and compared with the NASA Glenn Research Center aircraft method.

  17. A high-throughput screening approach for the optoelectronic properties of conjugated polymers.

    PubMed

    Wilbraham, Liam; Berardo, Enrico; Turcani, Lukas; Jelfs, Kim E; Zwijnenburg, Martijn A

    2018-06-25

    We propose a general high-throughput virtual screening approach for the optical and electronic properties of conjugated polymers. This approach makes use of the recently developed xTB family of low-computational-cost density functional tight-binding methods from Grimme and co-workers, calibrated here to (TD-)DFT data computed for a representative diverse set of (co-)polymers. Parameters drawn from the resulting calibration using a linear model can then be applied to the xTB derived results for new polymers, thus generating near DFT-quality data with orders of magnitude reduction in computational cost. As a result, after an initial computational investment for calibration, this approach can be used to quickly and accurately screen on the order of thousands of polymers for target applications. We also demonstrate that the (opto)electronic properties of the conjugated polymers show only a very minor variation when considering different conformers and that the results of high-throughput screening are therefore expected to be relatively insensitive with respect to the conformer search methodology applied.

  18. A LiDAR data-based camera self-calibration method

    NASA Astrophysics Data System (ADS)

    Xu, Lijun; Feng, Jing; Li, Xiaolu; Chen, Jianjun

    2018-07-01

    To find the intrinsic parameters of a camera, a LiDAR data-based camera self-calibration method is presented here. Parameters are estimated using particle swarm optimization (PSO) to find the optimal solution of a multivariate cost function. The main procedure of camera intrinsic parameter estimation has three parts: extraction and fine matching of interest points in the images; establishment of a cost function based on the Kruppa equations; and PSO optimization using LiDAR data as the initialization input. To improve the precision of matching pairs, a new method based on the maximal information coefficient (MIC) and maximum asymmetry score (MAS) was used on top of the RANSAC algorithm to remove false matching pairs. Highly precise matching pairs were used to calculate the fundamental matrix, so that the new cost function (deduced from the Kruppa equations in terms of the fundamental matrix) was more accurate. The cost function involving four intrinsic parameters was minimized by PSO to obtain the optimal solution. To avoid the optimization being pushed to a local optimum, LiDAR data were used to determine the scope of initialization, based on the solution to the P4P problem for the camera focal length. To verify the accuracy and robustness of the proposed method, simulations and experiments were implemented and compared with two typical methods. Simulation results indicated that the intrinsic parameters estimated by the proposed method had absolute errors less than 1.0 pixel and relative errors smaller than 0.01%. Based on ground truth obtained from a meter ruler, the distance inversion accuracy in the experiments was smaller than 1.0 cm. Experimental and simulated results demonstrated that the proposed method was highly accurate and robust.
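
    The paper's cost function is not reproduced in the abstract; the sketch below is only a minimal, generic particle swarm optimizer of the kind described, applied to a stand-in cost over the four intrinsic parameters (fx, fy, u0, v0). The `kruppa_cost` placeholder and the LiDAR-derived search box are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def pso_minimize(cost, lo, hi, n_particles=40, n_iters=200,
                 w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimizer over a box [lo, hi]."""
    rng = np.random.default_rng(seed)
    dim = len(lo)
    x = rng.uniform(lo, hi, size=(n_particles, dim))   # positions
    v = np.zeros_like(x)                               # velocities
    pbest, pbest_f = x.copy(), np.array([cost(p) for p in x])
    g = pbest[np.argmin(pbest_f)]                      # global best
    for _ in range(n_iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        f = np.array([cost(p) for p in x])
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        g = pbest[np.argmin(pbest_f)]
    return g, pbest_f.min()

# Stand-in cost over (fx, fy, u0, v0); a real implementation would
# evaluate Kruppa-equation residuals from the fundamental matrix.
kruppa_cost = lambda p: np.sum((p - np.array([1200, 1200, 640, 480])) ** 2)

# Hypothetical LiDAR-derived estimate narrowing the initialization box.
lo = np.array([1000.0, 1000.0, 500.0, 350.0])
hi = np.array([1400.0, 1400.0, 800.0, 600.0])
best, best_f = pso_minimize(kruppa_cost, lo, hi)
print(best, best_f)
```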

  19. A simple method for HPLC retention time prediction: linear calibration using two reference substances.

    PubMed

    Sun, Lei; Jin, Hong-Yu; Tian, Run-Tao; Wang, Ming-Juan; Liu, Li-Na; Ye, Liu-Ping; Zuo, Tian-Tian; Ma, Shuang-Cheng

    2017-01-01

    Analysis of related substances in pharmaceutical chemicals and of multiple components in traditional Chinese medicines requires a large number of reference substances to identify chromatographic peaks accurately, but reference substances are costly. Thus, the relative retention (RR) method has been widely adopted in pharmacopoeias and the literature for characterizing the HPLC behavior of reference substances that are unavailable. The problem is that it is difficult to reproduce the RR on different columns, owing to the error between measured retention time (tR) and predicted tR in some cases. It is therefore useful to develop an alternative, simple method for predicting tR accurately. In the present study, based on the thermodynamic theory of HPLC, a method named linear calibration using two reference substances (LCTRS) was proposed. The method includes three steps: two-point prediction, validation by multiple-point regression, and sequential matching. The tR of compounds on an HPLC column can be calculated from standard retention times and a linear relationship. The method was validated on two medicines on 30 columns. It was demonstrated that the LCTRS method is simple, yet more accurate and more robust on different HPLC columns than the RR method. Hence quality standards using the LCTRS method are easy to reproduce in different laboratories, with a lower cost of reference substances.
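
    As a rough sketch of the two-point prediction step (all retention times invented for illustration, not from the paper): measuring the two reference substances on the local column fixes a linear map from published standard retention times to local ones.

```python
# Two-point linear calibration in the LCTRS spirit (illustrative values).
t_std = (5.20, 18.40)    # standard t_R of the two references (min)
t_loc = (5.65, 19.90)    # t_R of the same references on the local column (min)

slope = (t_loc[1] - t_loc[0]) / (t_std[1] - t_std[0])
intercept = t_loc[0] - slope * t_std[0]

def predict_tr(t_standard):
    """Predict local retention time from a published standard t_R."""
    return slope * t_standard + intercept

print(predict_tr(11.3))  # predicted local t_R for a third compound
```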

  20. Ground-based automated radiometric calibration system in Baotou site, China

    NASA Astrophysics Data System (ADS)

    Wang, Ning; Li, Chuanrong; Ma, Lingling; Liu, Yaokai; Meng, Fanrong; Zhao, Yongguang; Pang, Bo; Qian, Yonggang; Li, Wei; Tang, Lingli; Wang, Dongjin

    2017-10-01

    Post-launch vicarious calibration is an important method that not only can be used to evaluate onboard calibrators but also allows traceable knowledge of the absolute accuracy, although it has the drawback of a low frequency of data collection owing to the expense of personnel and equipment. To overcome these problems, the CEOS Working Group on Calibration and Validation (WGCV) Infrared Visible Optical Sensors (IVOS) subgroup has proposed the Automated Radiative Calibration Network (RadCalNet) project. The Baotou site is one of the four demonstration sites of RadCalNet. The distinctive characteristic of the Baotou site is its combination of various natural scenes and artificial targets. On each artificial target and desert area, an automated spectrum measurement instrument obtains the surface-reflected radiance spectrum every 2 minutes with a spectral resolution of 2 nm. The aerosol optical thickness and column water vapour content are measured by an automatic sun photometer. To meet the requirement of RadCalNet, a surface reflectance spectrum retrieval method is used to generate the standard input files, with the support of surface and atmospheric measurements. The top-of-atmosphere reflectance spectra are then derived from the input files. Results for the demonstration satellites, including Landsat 8 and Sentinel-2A, show good agreement between observed and calculated values.

  1. Boresight Calibration of Construction Misalignments for 3D Scanners Built with a 2D Laser Rangefinder Rotating on Its Optical Center

    PubMed Central

    Morales, Jesús; Martínez, Jorge L.; Mandow, Anthony; Reina, Antonio J.; Pequeño-Boter, Alejandro; García-Cerezo, Alfonso

    2014-01-01

    Many applications, like mobile robotics, can profit from acquiring dense, wide-ranging and accurate 3D laser data. Off-the-shelf 2D scanners are commonly customized with an extra rotation as a low-cost, lightweight and low-power-demanding solution. Moreover, aligning the extra rotation axis with the optical center allows the 3D device to maintain the same minimum range as the 2D scanner and avoids offsets in computing Cartesian coordinates. The paper proposes a practical procedure to estimate construction misalignments based on a single scan taken from an arbitrary position in an unprepared environment that contains planar surfaces of unknown dimensions. Inherited measurement limitations from low-cost 2D devices prevent the estimation of very small translation misalignments, so the calibration problem reduces to obtaining boresight parameters. The distinctive approach with respect to previous plane-based intrinsic calibration techniques is the iterative maximization of both the flatness and the area of visible planes. Calibration results are presented for a case study. The method is currently being applied as the final stage in the production of a commercial 3D rangefinder. PMID:25347585

  2. Calibration of a dual-PTZ camera system for stereo vision

    NASA Astrophysics Data System (ADS)

    Chang, Yau-Zen; Hou, Jung-Fu; Tsao, Yi Hsiang; Lee, Shih-Tseng

    2010-08-01

    In this paper, we propose a calibration process for the intrinsic and extrinsic parameters of dual-PTZ camera systems. The calibration is based on a complete definition of six coordinate systems fixed at the image planes and the pan and tilt rotation axes of the cameras. Misalignments between estimated and ideal coordinates of image corners are formed into cost values that are minimized by the Nelder-Mead simplex optimization method. Experimental results show that the system is able to obtain 3D coordinates of objects with a consistent accuracy of 1 mm when the distance between the dual-PTZ camera set and the objects is from 0.9 to 1.1 meters.
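
    As a minimal illustration of the optimization step only, the sketch below minimizes a stand-in corner-misalignment cost with SciPy's Nelder-Mead simplex routine; the toy projection model and parameters are hypothetical placeholders, not the paper's camera model.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical cost: sum of squared misalignments between estimated and
# ideal image-corner coordinates, as a function of the parameter vector p.
ideal = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])

def corner_cost(p):
    # Stand-in model: scale the ideal corners by p[0], shift by p[1:3].
    est = ideal * p[0] + p[1:3]
    return np.sum((est - ideal) ** 2)

res = minimize(corner_cost, x0=[1.1, 0.05, -0.02], method="Nelder-Mead")
print(res.x, res.fun)   # should converge near scale 1, shift (0, 0)
```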

  3. Out of lab calibration of a rotating 2D scanner for 3D mapping

    NASA Astrophysics Data System (ADS)

    Koch, Rainer; Böttcher, Lena; Jahrsdörfer, Maximilian; Maier, Johannes; Trommer, Malte; May, Stefan; Nüchter, Andreas

    2017-06-01

    Mapping is an essential task in mobile robotics. To fulfil advanced navigation and manipulation tasks, a 3D representation of the environment is required. Applying stereo cameras or time-of-flight (TOF) cameras is one way to meet this requirement; unfortunately, they suffer from drawbacks which make it difficult to map properly, so costly 3D laser scanners are applied instead. An inexpensive way to build a 3D representation is to use a 2D laser scanner and rotate the scan plane around an additional axis. A 3D point cloud acquired with such a custom device consists of multiple 2D line scans, so the scanner pose of each line scan needs to be determined, along with parameters resulting from a calibration, to generate a 3D point cloud. Using external sensor systems is a common method to determine these calibration parameters, but this is costly and difficult when the robot needs to be calibrated outside the lab. Thus, this work presents a calibration method for a rotating 2D laser scanner. It uses a hardware setup to identify the parameters required for calibration. This hardware setup is light, small, and easy to transport; hence, an out-of-lab calibration is possible. Additionally, a theoretical model was created to test the algorithm and analyse the impact of scanner accuracy. The hardware components of the 3D scanner system are a HOKUYO UTM-30LX-EW 2D laser scanner, a Dynamixel servo-motor, and a control unit. The calibration system consists of a hemisphere with a circular plate mounted in its interior. The algorithm needs to be provided with a dataset from a single rotation of the laser scanner. To achieve a proper calibration result, the scanner needs to be located in the middle of the hemisphere. By means of geometric formulas, the algorithm determines the individual deviations of the placed laser scanner. In order to minimize errors, the algorithm solves the formulas in an iterative process. First, the calibration algorithm was tested with an ideal hemisphere model created in Matlab. Second, the laser scanner was mounted differently: the scanner position and the rotation axis were modified. In doing so, every deviation was compared with the algorithm results. Several measurement settings were tested repeatedly with the 3D scanner system and the calibration system. The results show that the length accuracy of the laser scanner is most critical; it influences the required size of the hemisphere and the calibration accuracy.

  4. Calibration Matters: Advances in Strapdown Airborne Gravimetry

    NASA Astrophysics Data System (ADS)

    Becker, D.

    2015-12-01

    Using a commercial navigation-grade strapdown inertial measurement unit (IMU) for airborne gravimetry can be advantageous in terms of cost, handling, and space consumption compared to the classical stable-platform spring gravimeters. Up to now, however, large sensor errors made it impossible to reach the mGal level using such IMUs, as they are not designed or optimized for this kind of application. Apart from proper error modeling in the filtering process, specific calibration methods tailored to aerogravity may help to bridge this gap and improve their performance. Based on simulations, a quantitative analysis is presented of how much IMU sensor errors such as biases, scale factors, cross-couplings, and thermal drifts distort the determination of gravity and the deflection of the vertical (DOV). Several lab and in-field calibration methods are briefly discussed, and calibration results are shown for an iMAR RQH unit. In particular, a thermal lab calibration of its QA2000 accelerometers greatly improved the long-term drift behavior. The latest results from four recent airborne gravimetry campaigns confirm the effectiveness of the calibrations applied, with cross-over accuracies reaching 1.0 mGal (0.6 mGal after cross-over adjustment) and DOV accuracies reaching 1.1 arc seconds after cross-over adjustment.

  5. Quantification of transformation products of rocket fuel unsymmetrical dimethylhydrazine in soils using SPME and GC-MS.

    PubMed

    Bakaikina, Nadezhda V; Kenessov, Bulat; Ul'yanovskii, Nikolay V; Kosyakov, Dmitry S

    2018-07-01

    Determination of transformation products (TPs) of the rocket fuel unsymmetrical dimethylhydrazine (UDMH) in soil is highly important for environmental impact assessment of the launches of heavy space rockets from Kazakhstan, Russia, China and India. The method based on headspace solid-phase microextraction (HS SPME) and gas chromatography-mass spectrometry is advantageous over other known methods due to its greater simplicity and cost efficiency. However, accurate quantification of these analytes using HS SPME is limited by the matrix effect. In this research, we proposed using internal standard and standard addition calibrations to achieve a proper balance between accuracy of quantification of the key TPs of UDMH and cost efficiency. 1-Trideuteromethyl-1H-1,2,4-triazole (MTA-d3) was used as the internal standard. Internal standard calibration allowed matrix effects to be controlled during quantification of 1-methyl-1H-1,2,4-triazole (MTA), N,N-dimethylformamide (DMF), and N-nitrosodimethylamine (NDMA) in soils with humus content < 1%. Using SPME at 60 °C for 15 min with a 65 µm Carboxen/polydimethylsiloxane fiber, recoveries of MTA, DMF and NDMA for sandy and loamy soil samples were 91-117, 85-123 and 64-132%, respectively. To improve the method accuracy and widen the range of analytes, standard addition and its combination with internal standard calibration were tested and compared on real soil samples. The combined calibration approach provided the greatest accuracy for NDMA, DMF, N-methylformamide, formamide, 1H-pyrazole, 3-methyl-1H-pyrazole and 1H-pyrazole. For determination of 1-formyl-2,2-dimethylhydrazine, 3,5-dimethylpyrazole, 2-ethyl-1H-imidazole, 1H-imidazole, 1H-1,2,4-triazole, pyrazines and pyridines, standard addition calibration is more suitable. However, the proposed approach and the collected data allow both calibrations to be used simultaneously. Copyright © 2018 Elsevier B.V. All rights reserved.
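
    For readers unfamiliar with standard addition calibration, here is a minimal sketch with invented numbers: the sample is spiked with known analyte amounts, the instrument response is regressed on the added concentration, and the unspiked concentration is recovered from the extrapolated x-intercept.

```python
import numpy as np

# Added analyte concentrations (mg/kg) and instrument responses for a
# soil extract (illustrative values, not from the paper).
added = np.array([0.0, 0.5, 1.0, 2.0])
signal = np.array([1.05e4, 1.58e4, 2.11e4, 3.19e4])

slope, intercept = np.polyfit(added, signal, 1)
c_sample = intercept / slope   # magnitude of the extrapolated x-intercept
print(f"estimated concentration: {c_sample:.2f} mg/kg")  # ~0.98
```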

  6. A Novel Calibration-Minimum Method for Prediction of Mole Fraction in Non-Ideal Mixture.

    PubMed

    Shibayama, Shojiro; Kaneko, Hiromasa; Funatsu, Kimito

    2017-04-01

    This article proposes a novel concentration prediction model that requires little training data and is useful for rapid process understanding. Process analytical technology is currently popular, especially in the pharmaceutical industry, for enhancement of process understanding and process control. A calibration-free method, iterative optimization technology (IOT), was previously proposed to predict pure-component concentrations, because calibration methods such as partial least squares require a large number of training samples, leading to high costs. However, IOT cannot be applied to concentration prediction in non-ideal mixtures because its basic equation is derived from the Beer-Lambert law, which does not hold for non-ideal mixtures. We propose a novel method that realizes prediction of pure-component concentrations in mixtures from a small number of training samples, assuming that spectral changes arising from molecular interactions can be expressed as a function of concentration. The proposed method is named IOT with virtual molecular interaction spectra (IOT-VIS) because it takes spectral change into account as a virtual spectrum x_nonlin,i. It was confirmed through two case studies that the predictive accuracy of IOT-VIS was the highest among the existing IOT methods.
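
    The virtual interaction spectrum itself is not specified in the abstract; the sketch below shows only the Beer-Lambert baseline that plain IOT rests on, estimating mole fractions by non-negative least squares against pure-component spectra (all data synthetic; IOT-VIS additionally models the concentration-dependent interaction term, which is omitted here).

```python
import numpy as np
from scipy.optimize import nnls

# Pure-component spectra as columns of S (wavelengths x components) and
# a measured mixture spectrum y; values are synthetic for illustration.
rng = np.random.default_rng(1)
S = np.abs(rng.normal(size=(200, 3)))            # pure spectra
true_c = np.array([0.2, 0.5, 0.3])
y = S @ true_c + rng.normal(scale=1e-3, size=200)

c, _ = nnls(S, y)           # non-negative least-squares coefficients
x = c / c.sum()             # normalize to mole fractions
print(x)                    # ~ [0.2, 0.5, 0.3]
```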

  7. High-precision method of binocular camera calibration with a distortion model.

    PubMed

    Li, Weimin; Shan, Siyu; Liu, Hui

    2017-03-10

    A high-precision camera calibration method for a binocular stereo vision system, based on a multi-view template and alternative bundle adjustment, is presented in this paper. The proposed method is carried out by taking several photos of a specially designed calibration template that has diverse encoded points in different orientations. The method utilizes an existing monocular camera calibration algorithm to obtain the initialization, which involves a camera model including radial and tangential lens distortion. We created a reference coordinate system based on the left camera coordinates to optimize the intrinsic parameters of the left camera through alternative bundle adjustment. Optimal intrinsic parameters of the right camera are then obtained through alternative bundle adjustment with a reference coordinate system based on the right camera coordinates. All acquired intrinsic parameters are then used to optimize the extrinsic parameters. Thus, the optimal lens distortion parameters and intrinsic and extrinsic parameters are obtained. Synthetic and real data were used to test the method. The simulation results demonstrate that the maximum mean absolute relative calibration errors are about 3.5e-6 and 1.2e-6 for the focal length and the principal point, respectively, under zero-mean Gaussian noise with 0.05 pixels standard deviation. The real result shows that the reprojection error of our model is about 0.045 pixels, with a relative standard deviation of 1.0e-6 over the intrinsic parameters. The proposed method is convenient, cost-efficient, highly precise, and simple to carry out.

  8. Low Cost and Efficient 3d Indoor Mapping Using Multiple Consumer Rgb-D Cameras

    NASA Astrophysics Data System (ADS)

    Chen, C.; Yang, B. S.; Song, S.

    2016-06-01

    Driven by the miniaturization and lightweight design of positioning and remote sensing sensors, as well as the urgent need to fuse indoor and outdoor maps for next-generation navigation, 3D indoor mapping from mobile scanning is a hot research and application topic. The point clouds with auxiliary data such as colour and infrared images derived from a 3D indoor mobile mapping suite can be used in a variety of novel applications, including indoor scene visualization, automated floorplan generation, gaming, reverse engineering, navigation, and simulation. State-of-the-art 3D indoor mapping systems equipped with multiple laser scanners produce accurate point clouds of building interiors containing billions of points. However, these laser-scanner-based systems are mostly expensive and not portable. Low-cost consumer RGB-D cameras provide an alternative way to solve the core challenge of indoor mapping: capturing the detailed underlying geometry of building interiors. Nevertheless, RGB-D cameras have a very limited field of view, resulting in low efficiency in the data collection stage and incomplete datasets missing major building structures (e.g. ceilings, walls). Endeavouring to collect a complete scene without data blanks using a single RGB-D camera is not technically sound because of the large amount of human labour and the number of position parameters that need to be solved. To find an efficient and low-cost way to solve 3D indoor mapping, in this paper we present an indoor mapping suite prototype built upon a novel calibration method that calibrates the internal and external parameters of multiple RGB-D cameras. Three Kinect sensors are mounted on a rig with different view directions to form a large field of view. The calibration procedure is threefold: (1) the internal parameters of the colour and infrared cameras inside each Kinect are calibrated using a chessboard pattern; (2) the external parameters between the colour and infrared cameras inside each Kinect are calibrated using a chessboard pattern; (3) the external parameters between the Kinects are first calculated using a pre-set calibration field and further refined by an iterative closest point algorithm. Experiments are carried out to validate the proposed method on RGB-D datasets collected by the indoor mapping suite prototype. The effectiveness and accuracy of the proposed method are evaluated by comparing the point clouds derived from the prototype with ground-truth data collected by a commercial terrestrial laser scanner at ultra-high density. The overall analysis of the results shows that the proposed method achieves seamless integration of multiple point clouds from different RGB-D cameras collected at 30 frames per second.

  9. Effective Data-Driven Calibration for a Galvanometric Laser Scanning System Using Binocular Stereo Vision.

    PubMed

    Tu, Junchao; Zhang, Liyan

    2018-01-12

    A new solution to the problem of galvanometric laser scanning (GLS) system calibration is presented. Under a machine learning framework, we build a single-hidden-layer feedforward neural network (SLFN) to represent the GLS system, which takes the digital control signal at the drives of the GLS system as input and the space vector of the corresponding outgoing laser beam as output. The training data set is obtained with the aid of a moving mechanism and a binocular stereo system. The parameters of the SLFN are efficiently solved in closed form by using an extreme learning machine (ELM). By quantitatively analyzing the regression precision with respect to the number of hidden neurons in the SLFN, we demonstrate that the proper number of hidden neurons can be safely chosen from a broad interval to guarantee good generalization performance. Compared to traditional model-driven calibration, the proposed calibration method does not need a complex modeling process and is more accurate and stable. As the output of the network is the space vectors of the outgoing laser beams, it requires much less training time and can provide a uniform solution to both laser projection and 3D reconstruction, in contrast with the existing data-driven calibration method, which only works for the laser triangulation problem. Calibration, projection, and 3D-reconstruction experiments were conducted to test the proposed method, and good results were obtained.
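
    A minimal extreme learning machine sketch, assuming synthetic stand-in data (2-D control signals mapped to 3-D beam vectors); the network size and toy target function are illustrative, not the paper's setup. The defining feature of ELM is that the hidden weights are random and fixed, and only the output weights are solved, in closed form, by least squares.

```python
import numpy as np

def elm_fit(X, Y, n_hidden=200, seed=0):
    """Train a single-hidden-layer network the ELM way: random hidden
    weights, output weights solved in closed form by least squares."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)                        # hidden activations
    beta, *_ = np.linalg.lstsq(H, Y, rcond=None)  # output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Toy regression: 2-D control signals -> 3-D outgoing beam vectors.
rng = np.random.default_rng(2)
X = rng.uniform(-1, 1, size=(500, 2))
Y = np.stack([np.sin(X[:, 0]), np.cos(X[:, 1]), X[:, 0] * X[:, 1]], axis=1)
W, b, beta = elm_fit(X, Y)
print(np.abs(elm_predict(X, W, b, beta) - Y).mean())  # small residual
```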

  10. Spectral Transfer Learning Using Information Geometry for a User-Independent Brain-Computer Interface

    DOE PAGES

    Waytowich, Nicholas R.; Lawhern, Vernon J.; Bohannon, Addison W.; ...

    2016-09-22

    Recent advances in signal processing and machine learning techniques have enabled the application of Brain-Computer Interface (BCI) technologies to fields such as medicine, industry, and recreation; however, BCIs still suffer from the requirement of frequent calibration sessions due to the intra- and inter-individual variability of brain signals, which makes calibration suppression through transfer learning an area of increasing interest for the development of practical BCI systems. In this paper, we present an unsupervised transfer method (spectral transfer using information geometry, STIG), which ranks and combines unlabeled predictions from an ensemble of information geometry classifiers built on data from individual training subjects. The STIG method is validated in both off-line and real-time feedback analysis during a rapid serial visual presentation task (RSVP). For detection of single-trial, event-related potentials (ERPs), the proposed method can significantly outperform existing calibration-free techniques as well as traditional within-subject calibration techniques when limited data are available. This method demonstrates that unsupervised transfer learning for single-trial detection in ERP-based BCIs can be achieved without the requirement of costly training data, representing a step forward in the overall goal of achieving a practical user-independent BCI system.
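
    The abstract does not detail the ranking-and-combination rule; as a loosely related, clearly hypothetical illustration of unsupervised ensemble combination, the sketch below weights binary classifiers by the leading eigenvector of their inter-prediction covariance (a spectral meta-learning heuristic) and takes a weighted vote. Nothing here should be read as the authors' STIG algorithm.

```python
import numpy as np

def spectral_ensemble_combine(preds):
    """Unsupervised combination of binary (+/-1) predictions: the leading
    eigenvector of the inter-classifier covariance serves as a proxy for
    per-classifier reliability, used as weights in a weighted vote."""
    Q = np.cov(preds)                    # classifiers x classifiers
    np.fill_diagonal(Q, 0.0)             # keep only off-diagonal structure
    _, eigvecs = np.linalg.eigh(Q)
    v = eigvecs[:, -1]                   # leading eigenvector
    v = v * np.sign(v.sum())             # fix the sign ambiguity
    weights = np.clip(v, 0.0, None)      # ignore anti-reliable members
    return np.sign(weights @ preds)

# Toy ensemble: 5 classifiers of varying accuracy labeling 200 trials.
rng = np.random.default_rng(0)
truth = rng.choice([-1, 1], size=200)
accs = [0.9, 0.8, 0.7, 0.6, 0.55]
preds = np.array([np.where(rng.random(200) < a, truth, -truth) for a in accs])
combined = spectral_ensemble_combine(preds)
print((combined == truth).mean())        # beats the weaker members
```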

  11. A New Calibration Method Using Low Cost MEM IMUs to Verify the Performance of UAV-Borne MMS Payloads

    PubMed Central

    Chiang, Kai-Wei; Tsai, Meng-Lun; Naser, El-Sheimy; Habib, Ayman; Chu, Chien-Hsun

    2015-01-01

    Spatial information plays a critical role in remote sensing and mapping applications such as environment surveying and disaster monitoring. An Unmanned Aerial Vehicle (UAV)-borne mobile mapping system (MMS) can accomplish rapid spatial information acquisition under limited sky conditions with better mobility and flexibility than other means. This study proposes a long endurance Direct Geo-referencing (DG)-based fixed-wing UAV photogrammetric platform and two DG modules that each use different commercial Micro-Electro Mechanical Systems’ (MEMS) tactical grade Inertial Measurement Units (IMUs). Furthermore, this study develops a novel kinematic calibration method which includes lever arms, boresight angles and camera shutter delay to improve positioning accuracy. The new calibration method is then compared with the traditional calibration approach. The results show that the accuracy of the DG can be significantly improved by flying at a lower altitude using the new higher specification hardware. The new proposed method improves the accuracy of DG by about 20%. The preliminary results show that two-dimensional (2D) horizontal DG positioning accuracy is around 5.8 m at a flight height of 300 m using the newly designed tactical grade integrated Positioning and Orientation System (POS). The positioning accuracy in three-dimensions (3D) is less than 8 m. PMID:25808764

  12. New calibration method using low cost MEM IMUs to verify the performance of UAV-borne MMS payloads.

    PubMed

    Chiang, Kai-Wei; Tsai, Meng-Lun; Naser, El-Sheimy; Habib, Ayman; Chu, Chien-Hsun

    2015-03-19

    Spatial information plays a critical role in remote sensing and mapping applications such as environment surveying and disaster monitoring. An Unmanned Aerial Vehicle (UAV)-borne mobile mapping system (MMS) can accomplish rapid spatial information acquisition under limited sky conditions with better mobility and flexibility than other means. This study proposes a long endurance Direct Geo-referencing (DG)-based fixed-wing UAV photogrammetric platform and two DG modules that each use different commercial Micro-Electro Mechanical Systems' (MEMS) tactical grade Inertial Measurement Units (IMUs). Furthermore, this study develops a novel kinematic calibration method which includes lever arms, boresight angles and camera shutter delay to improve positioning accuracy. The new calibration method is then compared with the traditional calibration approach. The results show that the accuracy of the DG can be significantly improved by flying at a lower altitude using the new higher specification hardware. The new proposed method improves the accuracy of DG by about 20%. The preliminary results show that two-dimensional (2D) horizontal DG positioning accuracy is around 5.8 m at a flight height of 300 m using the newly designed tactical grade integrated Positioning and Orientation System (POS). The positioning accuracy in three-dimensions (3D) is less than 8 m.

  13. Spectral Transfer Learning Using Information Geometry for a User-Independent Brain-Computer Interface

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Waytowich, Nicholas R.; Lawhern, Vernon J.; Bohannon, Addison W.

    Recent advances in signal processing and machine learning techniques have enabled the application of Brain-Computer Interface (BCI) technologies to fields such as medicine, industry, and recreation; however, BCIs still suffer from the requirement of frequent calibration sessions due to the intra- and inter-individual variability of brain signals, which makes calibration suppression through transfer learning an area of increasing interest for the development of practical BCI systems. In this paper, we present an unsupervised transfer method (spectral transfer using information geometry, STIG), which ranks and combines unlabeled predictions from an ensemble of information geometry classifiers built on data from individual training subjects. The STIG method is validated in both off-line and real-time feedback analysis during a rapid serial visual presentation task (RSVP). For detection of single-trial, event-related potentials (ERPs), the proposed method can significantly outperform existing calibration-free techniques as well as traditional within-subject calibration techniques when limited data are available. This method demonstrates that unsupervised transfer learning for single-trial detection in ERP-based BCIs can be achieved without the requirement of costly training data, representing a step forward in the overall goal of achieving a practical user-independent BCI system.

  14. Identifying best-fitting inputs in health-economic model calibration: a Pareto frontier approach

    PubMed Central

    Enns, Eva A.; Cipriano, Lauren E.; Simons, Cyrena T.; Kong, Chung Yin

    2014-01-01

    Background: To identify best-fitting input sets using model calibration, individual calibration target fits are often combined into a single "goodness-of-fit" (GOF) measure using a set of weights. Decisions in the calibration process, such as which weights to use, influence which sets of model inputs are identified as best-fitting, potentially leading to different health economic conclusions. We present an alternative approach to identifying best-fitting input sets based on the concept of Pareto-optimality. A set of model inputs is on the Pareto frontier if no other input set simultaneously fits all calibration targets as well or better. Methods: We demonstrate the Pareto frontier approach in the calibration of two models: a simple, illustrative Markov model and a previously published cost-effectiveness model of transcatheter aortic valve replacement (TAVR). For each model, we compare the input sets on the Pareto frontier to an equal number of best-fitting input sets according to two possible weighted-sum GOF scoring systems, and compare the health economic conclusions arising from these different definitions of best-fitting. Results: For the simple model, outcomes evaluated over the best-fitting input sets according to the two weighted-sum GOF schemes were virtually non-overlapping on the cost-effectiveness plane and resulted in very different incremental cost-effectiveness ratios ($79,300 [95%CI: 72,500 – 87,600] vs. $139,700 [95%CI: 79,900 - 182,800] per QALY gained). Input sets on the Pareto frontier spanned both regions ($79,000 [95%CI: 64,900 – 156,200] per QALY gained). The TAVR model yielded similar results. Conclusions: Choices in generating a summary GOF score may result in different health economic conclusions. The Pareto frontier approach eliminates the need to make these choices by using an intuitive and transparent notion of optimality as the basis for identifying best-fitting input sets. PMID:24799456
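
    The Pareto-frontier notion used here is easy to state in code: an input set is on the frontier if no other set fits every calibration target at least as well and at least one target strictly better. A minimal sketch with invented error values:

```python
import numpy as np

def pareto_frontier(errors):
    """Rows are candidate input sets, columns are per-target fit errors
    (lower is better). Returns a boolean mask of non-dominated rows."""
    n = errors.shape[0]
    on_front = np.ones(n, dtype=bool)
    for i in range(n):
        others = np.delete(errors, i, axis=0)
        dominated = np.any(np.all(others <= errors[i], axis=1) &
                           np.any(others < errors[i], axis=1))
        on_front[i] = not dominated
    return on_front

errs = np.array([[1.0, 3.0], [2.0, 2.0], [3.0, 1.0], [3.0, 3.0]])
print(pareto_frontier(errs))   # [ True  True  True False]
```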

  15. Geometrical calibration television measuring systems with solid state photodetectors

    NASA Astrophysics Data System (ADS)

    Matiouchenko, V. G.; Strakhov, V. V.; Zhirkov, A. O.

    2000-11-01

    Various optical measuring methods for deriving information about the size and form of objects are now used in different branches: mechanical engineering, medicine, art, and criminalistics. Measurement by means of digital television systems is one of these methods. The development of this direction is promoted by the appearance on the market of small-sized television cameras and frame grabbers of various types and costs. There are many television measuring systems using expensive cameras, but the accuracy performance of low-cost cameras is also of interest to system developers. For this reason, an inexpensive mountingless camera, the SK1004CP (format 1/3", cost up to $40), and an Aver2000 frame grabber were used in the experiments.

  16. Guidelines for Calibration and Application of Storm.

    DTIC Science & Technology

    1977-12-01

    The combination method uses the SCS method on pervious areas and the coefficient method on impervious areas of the watershed. Storm water quality is computed... stations, it should be accomplished according to procedures outlined in Reference 7. Adequate storm water quality data are the most difficult and costly... mass discharge of pollutants is negligible. The state-of-the-art in urban storm water quality modeling precludes highly accurate simulation of

  17. Calibration of limited-area ensemble precipitation forecasts for hydrological predictions

    NASA Astrophysics Data System (ADS)

    Diomede, Tommaso; Marsigli, Chiara; Montani, Andrea; Nerozzi, Fabrizio; Paccagnella, Tiziana

    2015-04-01

    The main objective of this study is to investigate the impact of calibration on limited-area ensemble precipitation forecasts used for driving discharge predictions up to 5 days in advance. A reforecast dataset spanning 30 years, based on the Consortium for Small Scale Modeling Limited-Area Ensemble Prediction System (COSMO-LEPS), was used for testing the calibration strategy. Three calibration techniques were applied: quantile-to-quantile mapping, linear regression, and analogs. The performance of these methodologies was evaluated in terms of statistical scores for the precipitation forecasts operationally provided by COSMO-LEPS in the years 2003-2007 over Germany, Switzerland, and the Emilia-Romagna region (northern Italy). The analog-based method seemed preferable because of its ability to correct position errors and spread deficiencies. A suitable spatial domain for the analog search can help to handle spatial model errors as systematic errors. However, the performance of the analog-based method may degrade when only a limited training dataset is available; a sensitivity test on the length of the training dataset over which to perform the analog search was therefore carried out. The quantile-to-quantile mapping and linear regression methods were less effective, mainly because the forecast-analysis relation was not strong for the available training dataset. A comparison between calibration based on the deterministic reforecast and calibration based on the full operational ensemble used as the training dataset was also performed, with the aim of evaluating whether reforecasts are really worthwhile for calibration, given their remarkable computational cost. The verification of the calibration process was then performed by coupling ensemble precipitation forecasts with a distributed rainfall-runoff model. This test was carried out for a medium-sized catchment located in Emilia-Romagna, showing a beneficial impact of the analog-based method on the reduction of missed events for discharge predictions.
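
    As a minimal sketch of the quantile-to-quantile mapping technique (with synthetic data standing in for the reforecast and observations), each new forecast value is replaced by the observed value at the same empirical quantile:

```python
import numpy as np

def quantile_map(fcst, train_fcst, train_obs):
    """Map each forecast value to the observed value at the same
    empirical quantile (quantile-to-quantile calibration)."""
    q = np.searchsorted(np.sort(train_fcst), fcst) / len(train_fcst)
    return np.quantile(train_obs, np.clip(q, 0.0, 1.0))

# Synthetic training precipitation (mm): the model is biased ~20% high.
rng = np.random.default_rng(3)
train_obs = rng.gamma(2.0, 5.0, size=10_000)
train_fcst = 1.2 * train_obs + rng.normal(scale=1.0, size=10_000)

print(quantile_map(np.array([10.0, 30.0]), train_fcst, train_obs))
```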

  18. Gay-Lussac Experiment

    ERIC Educational Resources Information Center

    Ladino, L. A.; Rondón, S. H.

    2015-01-01

    In this paper, we present a low-cost method to study Gay-Lussac's law. We use a heating wire wrapped around a test tube to heat the air inside, and make use of a solid-state pressure sensor, which requires prior calibration, to measure the pressure in the test tube.
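
    The underlying relation being verified is Gay-Lussac's law at constant volume, P1/T1 = P2/T2 with temperatures in kelvin; a worked example with illustrative numbers:

```python
# Gay-Lussac's law at constant volume: P1/T1 = P2/T2 (T in kelvin).
# Illustrative numbers: air sealed in a test tube heated from 20 C to 60 C.
P1, T1 = 101.3, 293.15          # kPa, K
T2 = 333.15                     # K
P2 = P1 * T2 / T1
print(f"P2 = {P2:.1f} kPa")     # ~115.1 kPa
```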

  19. Evaluation of a stepwise, multi-objective, multi-variable parameter optimization method for the APEX model

    USDA-ARS?s Scientific Manuscript database

    Hydrologic models are essential tools for environmental assessment of agricultural non-point source pollution. The automatic calibration of hydrologic models, though efficient, demands significant computational power, which can limit its application. The study objective was to investigate a cost e...

  20. A candidate reference method for serum potassium measurement by inductively coupled plasma mass spectrometry.

    PubMed

    Yan, Ying; Han, Bingqing; Zeng, Jie; Zhou, Weiyan; Zhang, Tianjiao; Zhang, Jiangtao; Chen, Wenxiang; Zhang, Chuanbao

    2017-08-28

    Potassium is an important serum ion that is frequently assayed in clinical laboratories. Quality assurance requires reference methods; thus, the establishment of a candidate reference method for serum potassium measurement is important. An inductively coupled plasma mass spectrometry (ICP-MS) method was developed. Serum samples were gravimetrically spiked with an aluminum internal standard, digested with 69% ultrapure nitric acid, and diluted to the required concentration. The 39K/27Al ratios were measured by ICP-MS in hydrogen mode. The method was calibrated using 5% nitric acid matrix calibrators, and the calibration function was established using the bracketing method. The correlation coefficients between the measured 39K/27Al ratios and the analyte concentration ratios were >0.9999. The coefficients of variation were 0.40%, 0.68%, and 0.22% for the three serum samples, and the analytical recovery was 99.8%. The accuracy of the measurement was also verified by measuring the certified reference materials SRM909b and SRM956b. Comparison with the routine ion-selective electrode method and international inter-laboratory comparisons gave satisfactory results. The new ICP-MS method is specific, precise, simple, and low-cost, and it may be used as a candidate reference method for standardizing serum potassium measurements.
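
    As a sketch of the bracketing calibration step (all numbers invented for illustration): the sample's measured isotope ratio is interpolated linearly between the two calibrators whose ratios bracket it.

```python
# Bracketing calibration sketch (illustrative values): the sample's
# measured 39K/27Al ratio is interpolated linearly between the two
# calibrators that bracket it.
cal_conc = (3.80, 4.20)     # calibrator K concentrations (mmol/L)
cal_ratio = (0.912, 1.008)  # measured 39K/27Al intensity ratios
r_sample = 0.953            # sample ratio (lies between the two)

frac = (r_sample - cal_ratio[0]) / (cal_ratio[1] - cal_ratio[0])
c_sample = cal_conc[0] + frac * (cal_conc[1] - cal_conc[0])
print(f"{c_sample:.3f} mmol/L")   # ~3.971
```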

  1. Design of transonic airfoil sections using a similarity theory

    NASA Technical Reports Server (NTRS)

    Nixon, D.

    1978-01-01

    A study of the available methods for transonic airfoil and wing design indicates that the most powerful technique is the numerical optimization procedure. However, the computer time for this method is relatively large because of the amount of computation required in the searches during optimization. The optimization method requires that base and calibration solutions be computed to determine a minimum drag direction. The design space is then computationally searched in this direction; it is these searches that dominate the computation time. A recent similarity theory allows certain transonic flows to be calculated rapidly from the base and calibration solutions. In this paper the application of the similarity theory to design problems is examined with the object of at least partially eliminating the costly searches of the design optimization method. An example of an airfoil design is presented.

  2. Calibration of asynchronous smart phone cameras from moving objects

    NASA Astrophysics Data System (ADS)

    Hagen, Oksana; Istenič, Klemen; Bharti, Vibhav; Dhali, Maruf Ahmed; Barmaimon, Daniel; Houssineau, Jérémie; Clark, Daniel

    2015-04-01

    Calibrating multiple cameras is a fundamental prerequisite for many Computer Vision applications. Typically this involves using a pair of identical synchronized industrial or high-end consumer cameras. This paper considers an application on a pair of low-cost portable cameras with different parameters that are found in smart phones. This paper addresses the issues of acquisition, detection of moving objects, dynamic camera registration and tracking of arbitrary number of targets. The acquisition of data is performed using two standard smart phone cameras and later processed using detections of moving objects in the scene. The registration of cameras onto the same world reference frame is performed using a recently developed method for camera calibration using a disparity space parameterisation and the single-cluster PHD filter.

  3. Development of IR Contrast Data Analysis Application for Characterizing Delaminations in Graphite-Epoxy Structures

    NASA Technical Reports Server (NTRS)

    Havican, Marie

    2012-01-01

    Objective: Develop an infrared (IR) flash thermography application based on the use of a calibration standard for inspecting graphite-epoxy laminated/honeycomb structures. Background: Graphite/epoxy composites (laminated and honeycomb) are widely used on NASA programs. Composite materials are susceptible to impact damage that is not readily detected by visual inspection. IR inspection can provide the required sensitivity to detect surface damage in composites during manufacturing and during service. IR contrast analysis can characterize the depth, size and gap thickness of impact damage. Benefits/Payoffs: The research provides an empirical method of calibrating the flash thermography response in nondestructive evaluation. A physical calibration standard with artificial flaws, such as flat-bottom holes of the desired diameter and depth in a desired material, is used in calibration. The research devises several probability of detection (POD) analysis approaches to enable cost-effective POD studies that meet program requirements.

  4. A calibration service for biomedical instrumentation maintenance laboratories.

    PubMed

    Barnes, A; Evans, A L; Job, H M; Laing, R; Smith, D C

    1999-01-01

    An in-house calibration laboratory for the Biomedical Instrumentation Maintenance Services of the hospitals in the West of Scotland was established in 1993. This paper describes the development of this calibration service in the context of an overall quality system and also estimates its costs. Not only does the in-house service have many advantages but it is shown to be cost effective for workloads exceeding 260 items per annum.

  5. Anatomical calibration for wearable motion capture systems: Video calibrated anatomical system technique.

    PubMed

    Bisi, Maria Cristina; Stagni, Rita; Caroselli, Alessio; Cappello, Angelo

    2015-08-01

    Inertial sensors are becoming widely used for the assessment of human movement in both clinical and research applications, thanks to their usability outside the laboratory. This work proposes a method for calibrating anatomical landmark positions in the wearable sensor reference frame with an easy-to-use, portable and low-cost device. An off-the-shelf camera, a stick and a pattern attached to the inertial sensor compose the device. The proposed technique is referred to as the video Calibrated Anatomical System Technique (vCAST). The absolute orientation of a synthetic femur was tracked both using vCAST together with an inertial sensor and using stereo-photogrammetry as reference. Anatomical landmark calibration showed a mean absolute error of 0.6±0.5 mm: these errors are smaller than those affecting the in-vivo identification of anatomical landmarks. The roll, pitch and yaw anatomical frame orientations showed root mean square errors close to the accuracy limit of the wearable sensor used (1°), highlighting the reliability of the proposed technique. In conclusion, the present paper proposes and preliminarily verifies the performance of a method (vCAST) for calibrating anatomical landmark positions in the wearable sensor reference frame: the technique requires little time, is highly portable, easy to implement and usable outside the laboratory. Copyright © 2015 IPEM. Published by Elsevier Ltd. All rights reserved.

  6. Using Calibrated RGB Imagery from Low-Cost Uavs for Grassland Monitoring: Case Study at the Rengen Grassland Experiment (rge), Germany

    NASA Astrophysics Data System (ADS)

    Lussem, U.; Hollberg, J.; Menne, J.; Schellberg, J.; Bareth, G.

    2017-08-01

    Monitoring the spectral response of intensively managed grassland throughout the growing season allows fertilizer inputs to be optimized by tracking plant growth. For example, site-specific fertilizer application as part of precision agriculture (PA) management requires information within a short time, but this requires field-based measurements with hyper- or multispectral sensors, which may not be feasible in day-to-day farming practice. Exploiting the information in RGB images from consumer-grade cameras mounted on unmanned aerial vehicles (UAVs) can offer cost-efficient, near-real-time analysis of grasslands with high temporal and spatial resolution. The potential of RGB imagery-based vegetation indices (VIs) from consumer-grade cameras mounted on UAVs has been explored recently in several studies. However, for multitemporal analyses it is desirable to calibrate the digital numbers (DN) of RGB images to physical units. In this study, we explored the comparability of the RGBVI from a consumer-grade camera mounted on a low-cost UAV with well-established vegetation indices from hyperspectral field measurements for applications in grassland. The study was conducted in 2014 on the Rengen Grassland Experiment (RGE) in Germany. Image DN values were calibrated into reflectance using the Empirical Line Method (Smith & Milton 1999). Depending on sampling date and VI, the correlation between the UAV-based RGBVI and VIs such as the NDVI yielded R² values ranging from no correlation up to 0.9. These results indicate that calibrated RGB-based VIs have the potential to support or substitute hyperspectral field measurements to facilitate management decisions on grasslands.
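
    A minimal sketch of the Empirical Line Method idea, assuming invented DN and reflectance values for dark, grey and bright ground targets; a per-band linear fit converts image digital numbers to surface reflectance.

```python
import numpy as np

# Illustrative per-band calibration targets (values are invented):
dn = np.array([35, 120, 210])        # image DN of dark, grey, bright targets
refl = np.array([0.04, 0.22, 0.45])  # field-measured target reflectance

gain, offset = np.polyfit(dn, refl, 1)   # empirical line for this band

def dn_to_reflectance(image_band):
    """Apply the empirical line to an array of image DN values."""
    return gain * np.asarray(image_band) + offset

print(dn_to_reflectance([35, 90, 180]))
```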

  7. A novel approach to calibrate the hemodynamic model using functional Magnetic Resonance Imaging (fMRI) measurements.

    PubMed

    Khoram, Nafiseh; Zayane, Chadia; Djellouli, Rabia; Laleg-Kirati, Taous-Meriem

    2016-03-15

    The calibration of the hemodynamic model that describes changes in blood flow and blood oxygenation during brain activation is a crucial step for successfully monitoring and possibly predicting brain activity. This in turn has the potential to provide diagnosis and treatment of brain diseases in early stages. We propose an efficient numerical procedure for calibrating the hemodynamic model using some fMRI measurements. The proposed solution methodology is a regularized iterative method equipped with a Kalman filtering-type procedure. The Newton component of the proposed method addresses the nonlinear aspect of the problem. The regularization feature is used to ensure the stability of the algorithm. The Kalman filter procedure is incorporated here to address the noise in the data. Numerical results obtained with synthetic data as well as with real fMRI measurements are presented to illustrate the accuracy, robustness to the noise, and the cost-effectiveness of the proposed method. We present numerical results that clearly demonstrate that the proposed method outperforms the Cubature Kalman Filter (CKF), one of the most prominent existing numerical methods. We have designed an iterative numerical technique, called the TNM-CKF algorithm, for calibrating the mathematical model that describes the single-event related brain response when fMRI measurements are given. The method appears to be highly accurate and effective in reconstructing the BOLD signal even when the measurements are tainted with high noise level (as high as 30%). Published by Elsevier B.V.

  8. Self-Calibration and Optimal Response in Intelligent Sensors Design Based on Artificial Neural Networks

    PubMed Central

    Rivera, José; Carrillo, Mariano; Chacón, Mario; Herrera, Gilberto; Bojorquez, Gilberto

    2007-01-01

    The development of smart sensors involves the design of reconfigurable systems capable of working with different input sensors. Reconfigurable systems should ideally spend the least possible amount of time on their calibration. An autocalibration algorithm for intelligent sensors should be able to fix major problems, such as offset, variation of gain and lack of linearity, as accurately as possible. This paper describes a new autocalibration methodology for nonlinear intelligent sensors based on artificial neural networks (ANN). The methodology involves analysis of several network topologies and training algorithms. The proposed method was compared against the piecewise and polynomial linearization methods. Method comparison was carried out using different numbers of calibration points and several nonlinearity levels of the input signal. The paper shows that the proposed method turned out to have better overall accuracy than the other two methods. Besides the experimental results and analysis of the complete study, the paper describes the implementation of the ANN in a microcontroller unit (MCU). In order to illustrate the method's capability to build autocalibrated and reconfigurable systems, a temperature measurement system was designed and tested. The proposed method is an improvement over classic autocalibration methodologies because it impacts the design process of intelligent sensors, autocalibration methodologies and their associated factors, like time and cost.

  9. End-user perspective of low-cost sensors for outdoor air pollution monitoring.

    PubMed

    Rai, Aakash C; Kumar, Prashant; Pilla, Francesco; Skouloudis, Andreas N; Di Sabatino, Silvana; Ratti, Carlo; Yasar, Ansar; Rickerby, David

    2017-12-31

    Low-cost sensor technology can potentially revolutionise the area of air pollution monitoring by providing high-density spatiotemporal pollution data. Such data can be utilised for supplementing traditional pollution monitoring, improving exposure estimates, and raising community awareness about air pollution. However, data quality remains a major concern that hinders the widespread adoption of low-cost sensor technology. Unreliable data may mislead unsuspecting users and potentially lead to alarming consequences such as reporting acceptable air pollutant levels when they are above the limits deemed safe for human health. This article provides scientific guidance to the end-users for effectively deploying low-cost sensors for monitoring air pollution and people's exposure, while ensuring reasonable data quality. We review the performance characteristics of several low-cost particle and gas monitoring sensors and provide recommendations to end-users for making proper sensor selection by summarizing the capabilities and limitations of such sensors. The challenges, best practices, and future outlook for effectively deploying low-cost sensors, and maintaining data quality are also discussed. For data quality assurance, a two-stage sensor calibration process is recommended, which includes laboratory calibration under controlled conditions by the manufacturer supplemented with routine calibration checks performed by the end-user under final deployment conditions. For large sensor networks where routine calibration checks are impractical, statistical techniques for data quality assurance should be utilised. Further advancements and adoption of sophisticated mathematical and statistical techniques for sensor calibration, fault detection, and data quality assurance can indeed help to realise the promised benefits of a low-cost air pollution sensor network. Copyright © 2017 Elsevier B.V. All rights reserved.

  10. A single camera photogrammetry system for multi-angle fast localization of EEG electrodes.

    PubMed

    Qian, Shuo; Sheng, Yang

    2011-11-01

    Photogrammetry has become an effective method for the determination of electroencephalography (EEG) electrode positions in three dimensions (3D). Capturing multi-angle images of the electrodes on the head is a fundamental objective in the design of photogrammetry systems for EEG localization. Methods in previous studies are all based on the use of either a rotating camera or multiple cameras, which are time-consuming or not cost-effective. This study presents a novel photogrammetry system that can realize simultaneous acquisition of multi-angle head images from a single camera position. By aligning two planar mirrors at an angle of 51.4°, seven views of the head with 25 electrodes are captured simultaneously by a digital camera placed in front of them. A complete set of algorithms for electrode recognition, matching, and 3D reconstruction is developed. It is found that the elapsed time of the whole localization procedure is about 3 min, and the camera calibration computation takes about 1 min after measurement of the calibration points. The positioning accuracy, with a maximum error of 1.19 mm, is acceptable. Experimental results demonstrate that the proposed system provides a fast and cost-effective method for EEG positioning.

  11. 3D kinematic measurement of human movement using low cost fish-eye cameras

    NASA Astrophysics Data System (ADS)

    Islam, Atiqul; Asikuzzaman, Md.; Garratt, Matthew A.; Pickering, Mark R.

    2017-02-01

    3D motion capture is difficult when the capturing is performed in an outdoor environment without controlled surroundings. In this paper, we propose a new approach using two ordinary cameras arranged in a special stereoscopic configuration and passive markers on a subject's body to reconstruct the motion of the subject. Firstly, for each frame of the video, an adaptive thresholding algorithm is applied to extract the markers on the subject's body. Once the markers are extracted, an algorithm for matching corresponding markers across frames is applied. Zhang's planar calibration method is used to calibrate the two cameras. As the cameras use fisheye lenses, they cannot be well approximated by a pinhole camera model, which makes it difficult to estimate depth information. In this work, to restore the 3D coordinates we use a calibration method specific to fisheye lenses. The accuracy of the 3D coordinate reconstruction is evaluated by comparing with results from a commercially available Vicon motion capture system.

  12. Evaluating the use of in-situ turbidity measurements to quantify fluvial sediment and phosphorus concentrations and fluxes in agricultural streams.

    PubMed

    Stutter, Marc; Dawson, Julian J C; Glendell, Miriam; Napier, Fiona; Potts, Jacqueline M; Sample, James; Vinten, Andrew; Watson, Helen

    2017-12-31

    Accurate quantification of suspended sediment (SS) and particulate phosphorus (PP) concentrations and loads is complex, owing to episodic delivery associated with storms and management activities that is often missed by infrequent sampling. Surrogate measurements such as turbidity can improve understanding of pollutant behaviour, provided calibrations can be made cost-effectively and with quantified uncertainties. Here, we compared fortnightly and storm-intensive water quality sampling with semi-continuous turbidity monitoring calibrated against spot samples as three potential methods for determining SS and PP concentrations and loads in an agricultural catchment over two years. In the second year of sampling we evaluated the transferability of turbidity calibration relationships to an adjacent catchment with similar soils and land cover. When data from nine storm events were pooled, both SS and PP concentrations (all in log space) were better related to turbidity than to discharge. Developing separate calibration relationships for the rising and falling limbs of the hydrograph provided further improvement. However, the ability to transfer calibrations between adjacent catchments was not evident, as the relationships of both SS and PP with turbidity differed in both gradient and intercept on the rising limb of the hydrograph between the two catchments. We conclude that the reduced uncertainty in load estimation derived from using turbidity as a proxy for specific water quality parameters in long-term regulatory monitoring programmes must be weighed against the increased capital and maintenance costs of turbidity equipment, potentially noisy turbidity data and the need for site-specific, prolonged storm calibration periods. Copyright © 2017 Elsevier B.V. All rights reserved.
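
    As a minimal sketch of a turbidity rating curve of the kind discussed (a log-log regression fitted separately for the rising and falling limbs of the hydrograph), with invented sample values:

```python
import numpy as np

def fit_rating(turb, conc):
    """Fit log(conc) = a + b*log(turb); return a prediction function."""
    b, a = np.polyfit(np.log(turb), np.log(conc), 1)
    return lambda t: np.exp(a + b * np.log(np.asarray(t)))

# Illustrative storm samples (NTU, mg/L), split by hydrograph limb.
turb_rise, ss_rise = [12, 40, 150, 300], [20, 80, 340, 700]
turb_fall, ss_fall = [10, 35, 120, 250], [12, 50, 190, 420]

predict_rise = fit_rating(turb_rise, ss_rise)
predict_fall = fit_rating(turb_fall, ss_fall)
print(predict_rise(100.0), predict_fall(100.0))  # limb-specific estimates
```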

  13. Assessing the Accuracy of Ortho-image using Photogrammetric Unmanned Aerial System

    NASA Astrophysics Data System (ADS)

    Jeong, H. H.; Park, J. W.; Kim, J. S.; Choi, C. U.

    2016-06-01

    A smart camera can be operated in a networked environment at any time and place, and it costs less than existing photogrammetric UAV payloads because it provides high-resolution imagery together with real-time 3D location and attitude data from a variety of built-in sensors. In this study, a UAV photogrammetric method using a low-cost UAV and a smart camera was proposed and evaluated. The elements of interior orientation were acquired through camera calibration. Image triangulation was conducted both with and without the interior orientation (IO) parameters determined by camera calibration. A Digital Elevation Model (DEM) was constructed using the image data photographed at the target area and the results of the ground control point survey. This study also analyzes the applicability of the proposed method by comparing an ortho-image with the results of the ground control point survey. The findings suggest that a smartphone is a feasible payload for a UAV system, and that smartphones may be mounted on existing UAVs to play a significant direct or indirect role.

  14. Rapid prediction of total petroleum hydrocarbons concentration in contaminated soil using vis-NIR spectroscopy and regression techniques.

    PubMed

    Douglas, R K; Nawar, S; Alamar, M C; Mouazen, A M; Coulon, F

    2018-03-01

    Visible and near-infrared spectroscopy (vis-NIRS) coupled with data mining techniques can offer fast and cost-effective quantitative measurement of total petroleum hydrocarbons (TPH) in contaminated soils. The literature, however, shows significant differences in vis-NIRS performance between linear and non-linear calibration methods. This study compared the performance of linear partial least squares regression (PLSR) with non-linear random forest (RF) regression for the calibration of vis-NIRS when analysing TPH in soils. Eighty-eight soil samples (3 uncontaminated and 85 contaminated) collected from three sites located in the Niger Delta were scanned using an analytical spectral device (ASD) spectrophotometer (350-2500 nm) in diffuse reflectance mode. Sequential ultrasonic solvent extraction-gas chromatography (SUSE-GC) was used as the reference quantification method for TPH, taken as the sum of the aliphatic and aromatic fractions ranging between C10 and C35. Prior to model development, spectra were subjected to pre-processing including noise cut, maximum normalization, first derivative and smoothing. Then 65 samples were selected as the calibration set and the remaining 20 samples as the validation set. Both the vis-NIR spectra and the gas chromatography profiles of the 85 contaminated samples were subjected to RF and PLSR with leave-one-out cross-validation (LOOCV) for the calibration models. Results showed that the RF calibration model, with a coefficient of determination (R2) of 0.85, a root mean square error of prediction (RMSEP) of 68.43 mg kg-1, and a residual prediction deviation (RPD) of 2.61, outperformed PLSR (R2 = 0.63, RMSEP = 107.54 mg kg-1 and RPD = 2.55) in cross-validation. These results indicate that the RF modelling approach accounts for the nonlinearity of the soil spectral responses and hence provides significantly higher prediction accuracy than the linear PLSR. It is recommended to adopt vis-NIRS coupled with RF modelling as a portable and cost-effective method for the rapid quantification of TPH in soils. Copyright © 2017 Elsevier B.V. All rights reserved.
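    A sketch of this model comparison using scikit-learn is shown below; `X` (pre-processed spectra) and `y` (SUSE-GC TPH values) are hypothetical stand-ins for the study's data, and the hyperparameters are illustrative rather than those of the paper:

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import LeaveOneOut, cross_val_predict

def compare_models(X, y, n_components=10):
    models = {"PLSR": PLSRegression(n_components=n_components),
              "RF": RandomForestRegressor(n_estimators=500, random_state=0)}
    results = {}
    for name, model in models.items():
        # leave-one-out cross-validated predictions, as in the study
        pred = cross_val_predict(model, X, y, cv=LeaveOneOut()).ravel()
        rmse = float(np.sqrt(np.mean((pred - y) ** 2)))
        r2 = 1.0 - np.sum((pred - y) ** 2) / np.sum((y - y.mean()) ** 2)
        results[name] = {"R2": r2, "RMSE": rmse}
    return results
```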

  15. Calibration of Magnetometers with GNSS Receivers and Magnetometer-Aided GNSS Ambiguity Fixing

    PubMed Central

    Henkel, Patrick

    2017-01-01

    Magnetometers provide compass information and are widely used for navigation, orientation and alignment of objects. As magnetometers are affected by sensor biases, and potentially by systematic distortions of the Earth's magnetic field, a calibration is needed. In this paper, a method for calibration of magnetometers with three Global Navigation Satellite System (GNSS) receivers is presented. We perform a least-squares estimation of the magnetic flux and sensor biases using GNSS-based attitude information. The attitude is obtained from the relative positions between the GNSS receivers in the North-East-Down coordinate frame and prior knowledge of these relative positions in the platform’s coordinate frame. The relative positions and integer ambiguities of the periodic carrier phase measurements are determined with an integer least-squares estimation using an integer decorrelation and a sequential tree search. Prior knowledge of the relative positions is used to increase the success rate of ambiguity fixing. We have validated the proposed method with low-cost magnetometers and GNSS receivers on a vehicle in a test drive. The calibration enabled consistent heading determination with an accuracy of five degrees. This precise magnetometer-based attitude information allows instantaneous GNSS integer ambiguity fixing. PMID:28594369

  16. Calibration of Magnetometers with GNSS Receivers and Magnetometer-Aided GNSS Ambiguity Fixing.

    PubMed

    Henkel, Patrick

    2017-06-08

    Magnetometers provide compass information and are widely used for navigation, orientation and alignment of objects. As magnetometers are affected by sensor biases, and potentially by systematic distortions of the Earth's magnetic field, a calibration is needed. In this paper, a method for calibration of magnetometers with three Global Navigation Satellite System (GNSS) receivers is presented. We perform a least-squares estimation of the magnetic flux and sensor biases using GNSS-based attitude information. The attitude is obtained from the relative positions between the GNSS receivers in the North-East-Down coordinate frame and prior knowledge of these relative positions in the platform's coordinate frame. The relative positions and integer ambiguities of the periodic carrier phase measurements are determined with an integer least-squares estimation using an integer decorrelation and a sequential tree search. Prior knowledge of the relative positions is used to increase the success rate of ambiguity fixing. We have validated the proposed method with low-cost magnetometers and GNSS receivers on a vehicle in a test drive. The calibration enabled consistent heading determination with an accuracy of five degrees. This precise magnetometer-based attitude information allows instantaneous GNSS integer ambiguity fixing.
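    The core least-squares step lends itself to a compact sketch: with attitude known from GNSS, each magnetometer sample is linear in the unknown Earth flux f (NED frame) and sensor bias b, i.e. m_i = R_i f + b. The sketch below (a hypothetical interface, not the paper's code) stacks these equations and solves for both unknowns at once:

```python
import numpy as np

def calibrate_magnetometer(rotations, measurements):
    """rotations: list of 3x3 matrices mapping NED into the body frame;
    measurements: (N, 3) array of magnetometer samples in the body frame."""
    n = len(rotations)
    A = np.zeros((3 * n, 6))
    y = np.asarray(measurements).reshape(-1)
    for i, R in enumerate(rotations):
        A[3*i:3*i+3, 0:3] = R          # coefficients of the Earth flux f
        A[3*i:3*i+3, 3:6] = np.eye(3)  # coefficients of the bias b
    sol, *_ = np.linalg.lstsq(A, y, rcond=None)
    return sol[:3], sol[3:]            # (f_ned, bias)
```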

  17. Calibration free beam hardening correction for cardiac CT perfusion imaging

    NASA Astrophysics Data System (ADS)

    Levi, Jacob; Fahmi, Rachid; Eck, Brendan L.; Fares, Anas; Wu, Hao; Vembar, Mani; Dhanantwari, Amar; Bezerra, Hiram G.; Wilson, David L.

    2016-03-01

    Myocardial perfusion imaging using CT (MPI-CT) and coronary CTA have the potential to make CT an ideal noninvasive gatekeeper for invasive coronary angiography. However, beam hardening artifacts (BHA) prevent accurate blood flow calculation in MPI-CT. Beam hardening correction (BHC) methods require either energy-sensitive CT, which is not widely available, or, more typically, a calibration-based method. We developed a calibration-free, automatic BHC (ABHC) method suitable for MPI-CT. The algorithm works with any BHC method and iteratively determines its model parameters using a proposed BHA-specific cost function. In this work, we use the polynomial BHC extended to three materials. The image is segmented into soft tissue, bone, and iodine images based on mean HU and temporal enhancement. Forward projections of the bone and iodine images are obtained, and in each iteration a polynomial correction is applied. Corrections are then back-projected and combined to obtain the current iteration's BHC image. This process is iterated until the cost is minimized. We evaluate the algorithm on simulated and physical phantom images and on preclinical MPI-CT data. The scans were obtained on a prototype spectral detector CT (SDCT) scanner (Philips Healthcare). Mono-energetic reconstructed images were used as the reference. In the simulated phantom, BH streak artifacts were reduced from 12 ± 2 HU to 1 ± 1 HU and cupping was reduced by 81%. Similarly, in the physical phantom, BH streak artifacts were reduced from 48 ± 6 HU to 1 ± 5 HU and cupping was reduced by 86%. In preclinical MPI-CT images, BHA was reduced from 28 ± 6 HU to less than 4 ± 4 HU at peak enhancement. Results suggest that the algorithm can be used to reduce BHA in conventional CT and improve MPI-CT accuracy.
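    The iterative structure of the ABHC algorithm can be sketched as follows. Every function here (segmentation, projectors, cost) is a placeholder standing in for the paper's components; the point is only the outer loop that tunes polynomial correction coefficients until a BHA-specific cost is minimized:

```python
import numpy as np
from scipy.optimize import minimize

def abhc(image, segment, forward_project, back_project, bha_cost):
    """segment(image) -> (bone, iodine) images; bha_cost scores streaks/cupping."""
    def corrected(coeffs):
        bone, iodine = segment(image)
        p = forward_project(bone) + forward_project(iodine)
        correction = coeffs[0] * p**2 + coeffs[1] * p**3  # polynomial BHC terms
        return image - back_project(correction)
    res = minimize(lambda c: bha_cost(corrected(c)),
                   x0=np.zeros(2), method="Nelder-Mead")
    return corrected(res.x)
```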

  18. The next generation of low-cost personal air quality sensors for quantitative exposure monitoring

    NASA Astrophysics Data System (ADS)

    Piedrahita, R.; Xiang, Y.; Masson, N.; Ortega, J.; Collier, A.; Jiang, Y.; Li, K.; Dick, R.; Lv, Q.; Hannigan, M.; Shang, L.

    2014-03-01

    Advances in embedded systems and low-cost gas sensors are enabling a new wave of low-cost air quality monitoring tools. Our team has been engaged in the development of low-cost wearable air quality monitors (M-Pods) using the Arduino platform. The M-Pods use commercially available metal oxide semiconductor (MOx) sensors to measure CO, O3, NO2, and total VOCs, and NDIR sensors to measure CO2. MOx sensors are low in cost and show high sensitivity near ambient levels; however, they display non-linear output signals and have cross-sensitivity effects. Thus, a quantification system was developed to convert the MOx sensor signals into concentrations. Two deployments were conducted at a regulatory monitoring station in Denver, Colorado. M-Pod concentrations were determined using laboratory calibration techniques and co-location calibrations, in which we placed the M-Pods near regulatory monitors and then derived calibration function coefficients using the regulatory monitors as the standard. The form of the calibration function was derived from laboratory experiments. We discuss various techniques used to estimate measurement uncertainties. A separate user study was also conducted to assess personal exposure and M-Pod reliability. In this study, 10 M-Pods were calibrated via co-location multiple times over 4 weeks, and sensor drift was analyzed, with the result being a calibration function that included drift. We found that co-location calibrations perform better than laboratory calibrations, which suffer from bias and difficulty in covering the necessary parameter space. During co-location calibrations, median standard errors ranged between 4.0-6.1 ppb for O3, 6.4-8.4 ppb for NO2, 0.28-0.44 ppm for CO, and 16.8 ppm for CO2. Median signal-to-noise (S/N) ratios were lower for the M-Pod sensors than for the regulatory instruments: for NO2, 3.6 compared to 23.4; for O3, 1.4 compared to 1.6; for CO, 1.1 compared to 10.0; and for CO2, 42.2 compared to 300-500. The user study provided trends and location-specific information on pollutants, and effected change in user behavior. The study demonstrated the utility of the M-Pod as a tool to assess personal exposure.
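    A minimal sketch of a co-location calibration of the kind described, with hypothetical variable names: the reference concentration is fit as a linear function of the raw sensor signal plus temperature and humidity terms (the actual M-Pod calibration functions are non-linear and sensor-specific):

```python
import numpy as np

def fit_colocation(signal, temp, rh, reference):
    """Least-squares fit of reference = b0 + b1*signal + b2*temp + b3*rh."""
    A = np.column_stack([np.ones_like(signal), signal, temp, rh])
    coef, *_ = np.linalg.lstsq(A, reference, rcond=None)
    return coef

def apply_calibration(coef, signal, temp, rh):
    A = np.column_stack([np.ones_like(signal), signal, temp, rh])
    return A @ coef
```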

  19. Advances in Digital Calibration Techniques Enabling Real-Time Beamforming SweepSAR Architectures

    NASA Technical Reports Server (NTRS)

    Hoffman, James P.; Perkovic, Dragana; Ghaemi, Hirad; Horst, Stephen; Shaffer, Scott; Veilleux, Louise

    2013-01-01

    Real-time digital beamforming, combined with lightweight, large-aperture reflectors, enables SweepSAR architectures, which promise significant increases in instrument capability for solid-earth and biomass remote sensing. These new instrument concepts require new methods for calibrating the multiple channels that are combined on board in real time. The benefit of this effort is that it enables a new class of lightweight radar architecture, digital beamforming with SweepSAR, providing significantly larger swath coverage than conventional SAR architectures at reduced mass and cost. This paper reviews the ongoing development of the digital calibration architecture for digital beamforming radar instruments, such as the proposed Earth Radar Mission's DESDynI (Deformation, Ecosystem Structure, and Dynamics of Ice) instrument. This proposed instrument's baseline design employs SweepSAR digital beamforming and requires digital calibration. We review the overall concepts and status of the system architecture, algorithm development, and the digital calibration testbed currently being developed, present results from a preliminary hardware demonstration, and discuss the challenges and opportunities specific to this novel architecture.

  20. Data multiplexing in radio interferometric calibration

    NASA Astrophysics Data System (ADS)

    Yatawatta, Sarod; Diblen, Faruk; Spreeuw, Hanno; Koopmans, L. V. E.

    2018-03-01

    New and upcoming radio interferometers will produce unprecedented amounts of data that demand extremely powerful computers for processing. This is a limiting factor due to the large computational power and energy costs involved. Such limitations restrict several key data processing steps in radio interferometry. One such step is calibration, where systematic errors in the data are determined and corrected. Accurate calibration is an essential component in reaching many scientific goals in radio astronomy, and the use of consensus optimization that exploits the continuity of systematic errors across frequency significantly improves calibration accuracy. In order to reach full consensus, data at all frequencies need to be calibrated simultaneously. In the SKA regime, this can become intractable if the available compute agents do not have the resources to process data from all frequency channels simultaneously. In this paper, we propose a multiplexing scheme based on the alternating direction method of multipliers (ADMM) with cyclic updates. With this scheme, it is possible to calibrate the full data set simultaneously using far fewer compute agents than the number of frequencies at which data are available. We give simulation results to show the feasibility of the proposed multiplexing scheme in simultaneously calibrating a full data set when a limited number of compute agents is available.
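    A toy consensus-ADMM sketch of the multiplexing idea (not the authors' calibration code): per-frequency solutions x_f are pulled toward a shared consensus z, and the cyclic schedule touches only a subset of frequencies per sweep, so fewer compute agents than frequencies are needed:

```python
import numpy as np

def consensus_admm(A, d, rho=1.0, n_iter=200, block=4):
    """A: list of per-frequency design matrices; d: list of data vectors."""
    F, n = len(A), A[0].shape[1]
    x = [np.zeros(n) for _ in range(F)]
    u = [np.zeros(n) for _ in range(F)]
    z = np.zeros(n)
    for it in range(n_iter):
        active = [(it * block + k) % F for k in range(block)]  # cyclic subset
        for f in active:   # local solves for the active frequencies only
            M = A[f].T @ A[f] + rho * np.eye(n)
            x[f] = np.linalg.solve(M, A[f].T @ d[f] + rho * (z - u[f]))
        z = np.mean([x[f] + u[f] for f in range(F)], axis=0)   # consensus
        for f in active:   # dual updates for the frequencies just solved
            u[f] += x[f] - z
    return z
```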

  1. e-Calibrations: using the Internet to deliver calibration services in real time at lower cost

    NASA Astrophysics Data System (ADS)

    Desrosiers, Marc; Nagy, Vitaly; Puhl, James; Glenn, Robert; Densock, Robert; Stieren, David; Lang, Brian; Kamlowski, Andreas; Maier, Diether; Heiss, Arthur

    2002-03-01

    The National Institute of Standards and Technology (NIST) is expanding into a new frontier in the delivery of measurement services. The Internet will be employed to provide industry with electronic traceability to national standards. This is a radical departure from the traditional modes of traceability and presents many new challenges. The traditional mail-based calibration service relies on sending artifacts to the user, who then mails them back to NIST for evaluation. The new service will deliver calibration results to the industry customer on demand, in real time, and at a lower cost. The calibration results can be incorporated rapidly into the production process to ensure the highest-quality manufacturing. The service would provide the US radiation processing industry with a direct link to the NIST calibration facilities and expertise, and provide an interactive feedback process between industrial processing and the national measurement standard. Moreover, an Internet calibration system should contribute to the removal of measurement-related trade barriers.

  2. Color calibration and fusion of lens-free and mobile-phone microscopy images for high-resolution and accurate color reproduction

    NASA Astrophysics Data System (ADS)

    Zhang, Yibo; Wu, Yichen; Zhang, Yun; Ozcan, Aydogan

    2016-06-01

    Lens-free holographic microscopy can achieve wide-field imaging in a cost-effective and field-portable setup, making it a promising technique for point-of-care and telepathology applications. However, due to relatively narrow-band sources used in holographic microscopy, conventional colorization methods that use images reconstructed at discrete wavelengths, corresponding to e.g., red (R), green (G) and blue (B) channels, are subject to color artifacts. Furthermore, these existing RGB colorization methods do not match the chromatic perception of human vision. Here we present a high-color-fidelity and high-resolution imaging method, termed “digital color fusion microscopy” (DCFM), which fuses a holographic image acquired at a single wavelength with a color-calibrated image taken by a low-magnification lens-based microscope using a wavelet transform-based colorization method. We demonstrate accurate color reproduction of DCFM by imaging stained tissue sections. In particular we show that a lens-free holographic microscope in combination with a cost-effective mobile-phone-based microscope can generate color images of specimens, performing very close to a high numerical-aperture (NA) benchtop microscope that is corrected for color distortions and chromatic aberrations, also matching the chromatic response of human vision. This method can be useful for wide-field imaging needs in telepathology applications and in resource-limited settings, where whole-slide scanning microscopy systems are not available.

  3. Color calibration and fusion of lens-free and mobile-phone microscopy images for high-resolution and accurate color reproduction

    PubMed Central

    Zhang, Yibo; Wu, Yichen; Zhang, Yun; Ozcan, Aydogan

    2016-01-01

    Lens-free holographic microscopy can achieve wide-field imaging in a cost-effective and field-portable setup, making it a promising technique for point-of-care and telepathology applications. However, due to relatively narrow-band sources used in holographic microscopy, conventional colorization methods that use images reconstructed at discrete wavelengths, corresponding to e.g., red (R), green (G) and blue (B) channels, are subject to color artifacts. Furthermore, these existing RGB colorization methods do not match the chromatic perception of human vision. Here we present a high-color-fidelity and high-resolution imaging method, termed “digital color fusion microscopy” (DCFM), which fuses a holographic image acquired at a single wavelength with a color-calibrated image taken by a low-magnification lens-based microscope using a wavelet transform-based colorization method. We demonstrate accurate color reproduction of DCFM by imaging stained tissue sections. In particular we show that a lens-free holographic microscope in combination with a cost-effective mobile-phone-based microscope can generate color images of specimens, performing very close to a high numerical-aperture (NA) benchtop microscope that is corrected for color distortions and chromatic aberrations, also matching the chromatic response of human vision. This method can be useful for wide-field imaging needs in telepathology applications and in resource-limited settings, where whole-slide scanning microscopy systems are not available. PMID:27283459

  4. Color calibration and fusion of lens-free and mobile-phone microscopy images for high-resolution and accurate color reproduction.

    PubMed

    Zhang, Yibo; Wu, Yichen; Zhang, Yun; Ozcan, Aydogan

    2016-06-10

    Lens-free holographic microscopy can achieve wide-field imaging in a cost-effective and field-portable setup, making it a promising technique for point-of-care and telepathology applications. However, due to relatively narrow-band sources used in holographic microscopy, conventional colorization methods that use images reconstructed at discrete wavelengths, corresponding to e.g., red (R), green (G) and blue (B) channels, are subject to color artifacts. Furthermore, these existing RGB colorization methods do not match the chromatic perception of human vision. Here we present a high-color-fidelity and high-resolution imaging method, termed "digital color fusion microscopy" (DCFM), which fuses a holographic image acquired at a single wavelength with a color-calibrated image taken by a low-magnification lens-based microscope using a wavelet transform-based colorization method. We demonstrate accurate color reproduction of DCFM by imaging stained tissue sections. In particular we show that a lens-free holographic microscope in combination with a cost-effective mobile-phone-based microscope can generate color images of specimens, performing very close to a high numerical-aperture (NA) benchtop microscope that is corrected for color distortions and chromatic aberrations, also matching the chromatic response of human vision. This method can be useful for wide-field imaging needs in telepathology applications and in resource-limited settings, where whole-slide scanning microscopy systems are not available.
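    The wavelet fusion at the heart of DCFM can be sketched roughly as follows (an illustrative reimplementation, not the authors' pipeline): the coarse wavelet approximation of the luminance comes from the color-calibrated lens-based image, while the detail coefficients come from the high-resolution lens-free reconstruction:

```python
import numpy as np
import pywt
from skimage.color import rgb2ycbcr, ycbcr2rgb
from skimage.transform import resize

def fuse(lensfree_gray, color_img, wavelet="db4", level=3):
    """lensfree_gray: high-res reconstruction in [0, 1]; color_img: RGB image."""
    ycbcr = rgb2ycbcr(resize(color_img, (*lensfree_gray.shape, 3)))
    y = ycbcr[..., 0]
    lf = lensfree_gray * 219.0 + 16.0        # scale into the Y channel range
    cA_color, *_ = pywt.wavedec2(y, wavelet, level=level)
    _, *details_lf = pywt.wavedec2(lf, wavelet, level=level)
    # coarse brightness/color from the lens-based image, fine detail lens-free
    fused_y = pywt.waverec2([cA_color, *details_lf], wavelet)
    ycbcr[..., 0] = fused_y[:y.shape[0], :y.shape[1]]
    return ycbcr2rgb(ycbcr)
```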

  5. Experimental determination of the oral bioavailability and bioaccessibility of lead particles

    PubMed Central

    2012-01-01

    In vivo estimations of Pb particle bioavailability are costly and variable because of the nature of animal assays. The most feasible alternative for increasing the number of investigations carried out on Pb particle bioavailability is in vitro testing. This testing method requires calibration against in vivo data from an adapted animal model so that the results will be valid for childhood exposure assessment. The test results must also be reproducible within and between laboratories. The Relative Bioaccessibility Leaching Procedure, which is calibrated with in vivo data on soils, presents the highest degree of validation and simplicity. This method could be applied to Pb particles, including those in paint and dust and those in drinking water systems, which, although relevant, have so far been poorly investigated for childhood exposure assessment. PMID:23173867

  6. A calibration procedure for load cells to improve accuracy of mini-lysimeters in monitoring evapotranspiration

    NASA Astrophysics Data System (ADS)

    Misra, R. K.; Padhi, J.; Payero, J. O.

    2011-08-01

    We used twelve load cells (20 kg capacity) in a mini-lysimeter system to measure evapotranspiration simultaneously from twelve plants growing in separate pots in a glasshouse. A data logger combined with a multiplexer was used to connect all load cells in full-bridge excitation mode to acquire the load-cell signals. Each load cell was calibrated using fixed loads within the range of 0-0.8 times its full load capacity. The performance of all load cells was assessed on the basis of signal settling time, excitation compensation, hysteresis and temperature. The final calibration of the load cells included statistical consideration of these effects to allow prediction of lysimeter weights and evapotranspiration over short time intervals with improved accuracy and sustained performance. Analysis of the costs of the mini-lysimeter system indicates that, with a robust method of load-cell calibration, evapotranspiration can be measured economically at reasonable accuracy and sufficient resolution.
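    A minimal sketch of the kind of statistical calibration described, with hypothetical data arrays: mass is predicted from the bridge signal with a temperature term to compensate the temperature sensitivity noted above:

```python
import numpy as np

def calibrate_load_cell(signal_mV, temp_C, known_mass_kg):
    """Fit mass = b0 + b1*signal + b2*temperature by least squares."""
    A = np.column_stack([np.ones_like(signal_mV), signal_mV, temp_C])
    coef, *_ = np.linalg.lstsq(A, known_mass_kg, rcond=None)
    return coef

def to_mass(coef, signal_mV, temp_C):
    return coef[0] + coef[1] * signal_mV + coef[2] * temp_C
```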

  7. Multi-scale soil moisture model calibration and validation: An ARS Watershed on the South Fork of the Iowa River

    USDA-ARS?s Scientific Manuscript database

    Soil moisture monitoring with in situ technology is a time-consuming and costly endeavor, for which a method of increasing the resolution of spatial estimates across in situ networks is necessary. Using a simple hydrologic model, the resolution of an in situ watershed network can be increased beyond...

  8. An alternative method for calibration of flow field flow fractionation channels for hydrodynamic radius determination: The nanoemulsion method (featuring multi angle light scattering).

    PubMed

    Bolinsson, Hans; Lu, Yi; Hall, Stephen; Nilsson, Lars; Håkansson, Andreas

    2018-01-19

    This study suggests a novel method for determining the channel height in asymmetrical flow field-flow fractionation (AF4), which can be used to calibrate the channel for hydrodynamic radius determinations. The novel method uses an oil-in-water nanoemulsion together with multi-angle light scattering (MALS) and elution theory to determine the channel height from an AF4 experiment. The method is validated using two orthogonal methods: first, standard particle elution experiments and, second, imaging of an assembled, carrier-liquid-filled channel by X-ray computed tomography (XCT). It is concluded that the channel height can be determined with approximately the same accuracy as with the traditional channel height determination technique. However, the nanoemulsion method can be used under more challenging conditions than standard particles, as the nanoemulsion remains stable over a wider pH range than the previously used standard particles. Moreover, the novel method is also more cost-effective. Copyright © 2017 Elsevier B.V. All rights reserved.

  9. Construction and calibration of a low cost and fully automated vibrating sample magnetometer

    NASA Astrophysics Data System (ADS)

    El-Alaily, T. M.; El-Nimr, M. K.; Saafan, S. A.; Kamel, M. M.; Meaz, T. M.; Assar, S. T.

    2015-07-01

    A low-cost vibrating sample magnetometer (VSM) has been constructed using an electromagnet and an audio loudspeaker, both controlled by a data acquisition device. The constructed VSM records the magnetic hysteresis loop up to 8.3 kG at room temperature. The apparatus has been calibrated and tested using magnetic hysteresis data of ferrite samples measured on two professionally calibrated magnetometers, a Lake Shore model 7410 and an LDJ Electronics Inc. (Troy, MI) model. Our lab-built VSM design proved successful and reliable.

  10. Germanium resistance thermometer calibration at superfluid helium temperatures

    NASA Technical Reports Server (NTRS)

    Mason, F. C.

    1985-01-01

    The rapid increase in the resistance of high-purity semiconducting germanium with decreasing temperature in the superfluid helium range makes this material highly suitable as a very sensitive thermometer. A germanium thermometer also exhibits a highly reproducible resistance-versus-temperature characteristic upon cycling between liquid helium temperatures and room temperature. These two factors combine to make germanium thermometers ideally suited for measuring temperatures in many cryogenic studies at superfluid helium temperatures. One disadvantage, however, is the relatively high cost of calibrated germanium thermometers; space helium cryogenic systems often require many such thermometers, leading to a high total cost. The construction of a thermometer calibration cryostat and probe that allows six germanium thermometers to be calibrated at one time, effecting substantial savings in the purchase of thermometers, is considered.
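    Calibration data for such thermometers are commonly reduced to a smooth fit; below is a sketch of one conventional choice (a polynomial in log-log space, an assumed form rather than anything specified in this report) with numerical inversion:

```python
import numpy as np

def fit_rt(T_K, R_ohm, degree=6):
    """Fit log10(R) as a polynomial in log10(T) over the calibration points."""
    return np.polyfit(np.log10(T_K), np.log10(R_ohm), degree)

def temperature_from_resistance(coeffs, R_ohm):
    T_grid = np.linspace(1.4, 4.2, 2000)            # superfluid-helium range
    R_grid = 10.0 ** np.polyval(coeffs, np.log10(T_grid))
    # resistance falls monotonically with T, so reverse for interpolation
    return np.interp(R_ohm, R_grid[::-1], T_grid[::-1])
```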

  11. Classification of high-resolution multi-swath hyperspectral data using Landsat 8 surface reflectance data as a calibration target and a novel histogram based unsupervised classification technique to determine natural classes from biophysically relevant fit parameters

    NASA Astrophysics Data System (ADS)

    McCann, C.; Repasky, K. S.; Morin, M.; Lawrence, R. L.; Powell, S. L.

    2016-12-01

    Compact, cost-effective, flight-based hyperspectral imaging systems can provide scientifically relevant data over large areas for a variety of applications such as ecosystem studies, precision agriculture, and land management. To fully realize this capability, unsupervised classification techniques are needed that operate on radiometrically calibrated data and cluster on biophysical similarity rather than simply spectral similarity. An automated technique to produce high-resolution, large-area, radiometrically calibrated hyperspectral data sets, using the Landsat surface reflectance data product as a calibration target, was developed and applied to three subsequent years of data covering approximately 1850 hectares. The radiometrically calibrated data allow inter-comparison of the temporal series. Advantages of the radiometric calibration technique include minimal site access, no ancillary instrumentation, and automated processing. Fitting the reflectance spectrum of each pixel with a set of biophysically relevant basis functions reduces the data from 80 spectral bands to 9 parameters, providing noise reduction and data compression. Examination of histograms of these parameters allows natural splitting into biophysically similar clusters. This method creates clusters that are similar in terms of biophysical parameters, not simply spectral proximity, and it can be applied to other data sets, such as urban scenes, by developing other physically meaningful basis functions. The ability to use hyperspectral imaging for a variety of important applications requires data processing techniques that can be automated. The radiometric calibration combined with the histogram-based unsupervised classification technique presented here provides one potential avenue for managing the big data associated with hyperspectral imaging.

  12. Calibrating page sized Gafchromic EBT3 films

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Crijns, W.; Maes, F.; Heide, U. A. van der

    2013-01-15

    Purpose: The purpose is the development of a novel calibration method for dosimetry with Gafchromic EBT3 films. The method should be applicable to pretreatment verification of volumetric modulated arc therapy and intensity modulated radiotherapy. Because the exposed area on film can be large for such treatments, lateral scan errors must be taken into account. The correction for the lateral scan effect is obtained from the calibration data itself. Methods: In this work, the film measurements were modeled using their relative scan values (transmittance, T). Inside the transmittance domain, a linear combination model and a parabolic lateral scan correction described the observed transmittance values. The linear combination model combined a monomer transmittance state (T0) and a polymer transmittance state (T∞) of the film. The dose domain was associated with the observed effects in the transmittance domain through a rational calibration function. Only simple static fields were applied to the calibration film, and page-sized films were used for calibration and measurements (treatment verification). Four different calibration setups were considered and compared with respect to dose estimation accuracy. The first (I) used a calibration table from 32 regions of interest (ROIs) spread over 4 calibration films, the second (II) used 16 ROIs spread over 2 calibration films, and the third (III) and fourth (IV) used 8 ROIs on a single calibration film. The calibration tables of setups I, II, and IV contained eight dose levels delivered to different positions on the films, while for setup III only four dose levels were applied. Validation was performed by irradiating film strips with known doses at two different time points over the course of a week. Accuracy of the dose response and the lateral effect correction was estimated using the dose difference and the root mean squared error (RMSE), respectively. Results: A calibration based on two films was the optimal balance between cost-effectiveness and dosimetric accuracy. The validation resulted in dose errors of 1%-2% at the two different time points, with a maximal absolute dose error of around 0.05 Gy. The lateral correction reduced the RMSE values at the sides of the film to the RMSE values at the center of the film. Conclusions: EBT3 Gafchromic films were calibrated for large-field dosimetry with a limited number of page-sized films and simple static calibration fields. The transmittance was modeled as a linear combination of two transmittance states and associated with dose using a rational calibration function. Additionally, the lateral scan effect was resolved in the calibration function itself. This allows the use of page-sized films. Only two calibration films were required to estimate both the dose and the lateral response. The calibration films were used over the course of a week, with residual dose errors ≤2% or ≤0.05 Gy.
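    The two-state transmittance idea can be sketched with an assumed functional form (illustrative only; the paper's parabolic lateral correction is omitted): transmittance is a dose-weighted mix of the monomer and polymer states, which inverts to a rational dose calibration:

```python
import numpy as np
from scipy.optimize import curve_fit

def transmittance_model(D, T0, Tinf, k):
    w = k * D / (1.0 + k * D)      # fraction converted to the polymer state
    return (1.0 - w) * T0 + w * Tinf

def fit_calibration(doses, T_measured):
    p0 = (T_measured.max(), T_measured.min(), 0.5)
    popt, _ = curve_fit(transmittance_model, doses, T_measured, p0=p0)
    return popt                     # (T0, Tinf, k)

def dose_from_transmittance(T, T0, Tinf, k):
    w = (T0 - T) / (T0 - Tinf)      # invert the linear combination
    return w / (k * (1.0 - w))      # rational dose calibration function
```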

  13. An operational epidemiological model for calibrating agent-based simulations of pandemic influenza outbreaks.

    PubMed

    Prieto, D; Das, T K

    2016-03-01

    The uncertainty of pandemic influenza viruses continues to cause major preparedness challenges for public health policymakers. Decisions to mitigate influenza outbreaks often involve a tradeoff between the social costs of interventions (e.g., school closure) and the cost of uncontrolled spread of the virus. To achieve a balance, policymakers must assess the impact of mitigation strategies once an outbreak begins and the virus characteristics are known. Agent-based (AB) simulation is a useful tool for building highly granular disease spread models incorporating the epidemiological features of the virus as well as the demographic and social behavioral attributes of tens of millions of affected people. Such disease spread models provide an excellent basis on which various mitigation strategies can be tested before they are adopted and implemented by policymakers. However, to serve as a testbed for mitigation strategies, AB simulation models must be operational, and a critical requirement for operational AB models is that they be amenable to quick and simple calibration. The calibration process works as follows: the AB model accepts information available from the field and uses it to update its parameters such that some of its outputs in turn replicate the field data. In this paper, we present our epidemiological-model-based calibration methodology, which has low computational complexity and is easy to interpret. Our model accepts a field estimate of the basic reproduction number and then uses it to update (calibrate) the infection probabilities such that their effect, combined with the effects of the given virus epidemiology, demographics, and social behavior, results in an infection pattern yielding a similar value of the basic reproduction number. We evaluate the accuracy of the calibration methodology by applying it to an AB simulation model mimicking a regional outbreak in the US. The calibrated model is shown to yield infection patterns closely replicating the input estimates of the basic reproduction number. The calibration method is also tested for its ability to replicate the initial infection incidence trend of an H1N1 outbreak like that of 2009.
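    The calibration loop itself is simple enough to sketch (hypothetical interface; the paper's update rule is derived from its epidemiological model rather than from bisection): the infection probability is adjusted until the simulated basic reproduction number matches the field estimate:

```python
def calibrate_infection_probability(run_simulation, target_r0,
                                    lo=0.0, hi=1.0, tol=0.01):
    """run_simulation(p) -> simulated R0, assumed monotone increasing in p."""
    while hi - lo > 1e-4:
        p = 0.5 * (lo + hi)
        r0 = run_simulation(p)
        if abs(r0 - target_r0) < tol:
            return p
        if r0 < target_r0:
            lo = p      # too little spread: raise infection probability
        else:
            hi = p      # too much spread: lower infection probability
    return 0.5 * (lo + hi)
```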

  14. Inter-printer color calibration using constrained printer gamut

    NASA Astrophysics Data System (ADS)

    Zeng, Huanzhao; Humet, Jacint

    2005-01-01

    Due to the drop size variation of the print heads in inkjet printers, consistent color reproduction becomes a challenge for high-quality color printing. To improve color consistency, we developed a method and system to characterize a pair of printers using a colorimeter or a color scanner. Unlike prior approaches that simply try to match the colors of one printer to the other without considering gamut differences, we first constructed an overlapped gamut in which colors can be produced by both printers, and then characterized both printers using a pair of 3-D or 4-D lookup tables (LUTs) so that they produce the same colors, limited to the overlapped gamut. Each LUT converts nominal device color values into engine-dependent device color values limited to the overlapped gamut. Compared to traditional approaches, the color calibration accuracy is significantly improved. This method extends simply to more than two engines. In a color imaging system that includes a scanner and more than one print engine, this method improves color consistency very effectively without increasing hardware costs. A few examples of applying this method are: 1) one-pass bi-directional inkjet printing; 2) a printer with two or more sets of pens; and 3) a system embedded with a pair of printers (the number of printers could easily be increased).

  15. Spectro-photometric determinations of Mn, Fe and Cu in aluminum master alloys

    NASA Astrophysics Data System (ADS)

    Rehan; Naveed, A.; Shan, A.; Afzal, M.; Saleem, J.; Noshad, M. A.

    2016-08-01

    Highly reliable, fast and cost-effective spectrophotometric methods have been developed for the determination of Mn, Fe and Cu in aluminum master alloys, based on calibration curves prepared from laboratory standards. The calibration curves are designed to give maximum sensitivity and minimum instrumental error (Mn 1 mg/100 ml-2 mg/100 ml, Fe 0.01 mg/100 ml-0.2 mg/100 ml and Cu 2 mg/100 ml-10 mg/100 ml). The developed spectrophotometric methods produce accurate results when analyzing Mn, Fe and Cu in certified reference materials. In particular, these methods are suitable for all types of Al-Mn, Al-Fe and Al-Cu master alloys (5%, 10%, 50% etc.). Moreover, the sampling practices suggested herein use a reasonable amount of analytical sample that truly represents the whole lot of a particular master alloy. A successive dilution technique was used to bring samples within the calibration curve range. Furthermore, the worked-out methods were also found suitable for the analysis of the said elements in ordinary aluminum alloys. However, it was observed that Cu showed considerable interference with Fe; the latter may not be accurately measured in the presence of Cu greater than 0.01%.
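    The underlying workflow is an ordinary calibration-curve fit; below is a sketch with invented absorbance values (not data from the paper) over the stated Mn working range:

```python
import numpy as np

# hypothetical standards across the Mn range of 1-2 mg/100 ml
conc = np.array([1.0, 1.25, 1.5, 1.75, 2.0])            # mg/100 ml
absorbance = np.array([0.21, 0.26, 0.32, 0.37, 0.42])   # invented readings

slope, intercept = np.polyfit(conc, absorbance, 1)      # Beer-Lambert line

def concentration(A):
    """Invert the calibration line for an unknown sample's absorbance."""
    return (A - intercept) / slope
```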

  16. The next generation of low-cost personal air quality sensors for quantitative exposure monitoring

    NASA Astrophysics Data System (ADS)

    Piedrahita, R.; Xiang, Y.; Masson, N.; Ortega, J.; Collier, A.; Jiang, Y.; Li, K.; Dick, R. P.; Lv, Q.; Hannigan, M.; Shang, L.

    2014-10-01

    Advances in embedded systems and low-cost gas sensors are enabling a new wave of low-cost air quality monitoring tools. Our team has been engaged in the development of low-cost, wearable air quality monitors (M-Pods) using the Arduino platform. These M-Pods house two types of sensors: commercially available metal oxide semiconductor (MOx) sensors used to measure CO, O3, NO2, and total VOCs, and NDIR sensors used to measure CO2. The MOx sensors are low in cost and show high sensitivity near ambient levels; however, they display non-linear output signals and have cross-sensitivity effects. Thus, a quantification system was developed to convert the MOx sensor signals into concentrations. We conducted two types of validation studies: first, deployments at a regulatory monitoring station in Denver, Colorado, and second, a user study. In the two deployments at the regulatory monitoring station, M-Pod concentrations were determined using collocation calibrations and laboratory calibration techniques. M-Pods were placed near regulatory monitors to derive calibration function coefficients, using the regulatory monitors as the standard. The form of the calibration function was derived from laboratory experiments. We discuss various techniques used to estimate measurement uncertainties. The deployments revealed that collocation calibrations provide more accurate concentration estimates than laboratory calibrations. During collocation calibrations, median standard errors ranged between 4.0-6.1 ppb for O3, 6.4-8.4 ppb for NO2, 0.28-0.44 ppm for CO, and 16.8 ppm for CO2. Median signal-to-noise (S/N) ratios were lower for the M-Pod sensors than for the regulatory instruments: for NO2, 3.6 compared to 23.4; for O3, 1.4 compared to 1.6; for CO, 1.1 compared to 10.0; and for CO2, 42.2 compared to 300-500. By contrast, lab calibrations added bias and made it difficult to cover the necessary range of environmental conditions to obtain a good calibration. A separate user study was also conducted to assess uncertainty estimates and sensor variability. In this study, 9 M-Pods were calibrated via collocation multiple times over 4 weeks, and sensor drift was analyzed, with the result being a calibration function that included baseline drift. Three pairs of M-Pods were deployed, while users individually carried the other three. The user study suggested that inter-M-Pod variability between paired units was of the same order as the calibration uncertainty; however, it is difficult to draw conclusions about actual personal exposure levels given the level of user engagement. The user study provided real-world sensor drift data, showing limited drift for CO (under -0.05 ppm day-1) and higher drift for O3 (-2.6 to 2.0 ppb day-1), NO2 (-1.56 to 0.51 ppb day-1), and CO2 (-4.2 to 3.1 ppm day-1). Overall, the user study confirmed the utility of the M-Pod as a low-cost tool to assess personal exposure.

  17. Learning an Eddy Viscosity Model Using Shrinkage and Bayesian Calibration: A Jet-in-Crossflow Case Study

    DOE PAGES

    Ray, Jaideep; Lefantzi, Sophia; Arunajatesan, Srinivasan; ...

    2017-09-07

    In this paper, we demonstrate a statistical procedure for learning a high-order eddy viscosity model (EVM) from experimental data and using it to improve the predictive skill of a Reynolds-averaged Navier–Stokes (RANS) simulator. The method is tested in a three-dimensional (3D), transonic jet-in-crossflow (JIC) configuration. The process starts with a cubic eddy viscosity model (CEVM) developed for incompressible flows. It is fitted to limited experimental JIC data using shrinkage regression. The shrinkage process removes all the terms from the model, except an intercept, a linear term, and a quadratic one involving the square of the vorticity. The shrunk eddy viscosity model is implemented in an RANS simulator and calibrated, using vorticity measurements, to infer three parameters. The calibration is Bayesian and is solved using a Markov chain Monte Carlo (MCMC) method. A 3D probability density distribution for the inferred parameters is constructed, thus quantifying the uncertainty in the estimate. The phenomenal cost of using a 3D flow simulator inside an MCMC loop is mitigated by using surrogate models (“curve-fits”). A support vector machine classifier (SVMC) is used to impose our prior belief regarding parameter values, specifically to exclude nonphysical parameter combinations. The calibrated model is compared, in terms of its predictive skill, to simulations using uncalibrated linear and CEVMs. Finally, we find that the calibrated model, with one quadratic term, is more accurate than the uncalibrated simulator. The model is also checked at a flow condition at which the model was not calibrated.

  18. Learning an Eddy Viscosity Model Using Shrinkage and Bayesian Calibration: A Jet-in-Crossflow Case Study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ray, Jaideep; Lefantzi, Sophia; Arunajatesan, Srinivasan

    In this paper, we demonstrate a statistical procedure for learning a high-order eddy viscosity model (EVM) from experimental data and using it to improve the predictive skill of a Reynolds-averaged Navier–Stokes (RANS) simulator. The method is tested in a three-dimensional (3D), transonic jet-in-crossflow (JIC) configuration. The process starts with a cubic eddy viscosity model (CEVM) developed for incompressible flows. It is fitted to limited experimental JIC data using shrinkage regression. The shrinkage process removes all the terms from the model, except an intercept, a linear term, and a quadratic one involving the square of the vorticity. The shrunk eddy viscosity model is implemented in an RANS simulator and calibrated, using vorticity measurements, to infer three parameters. The calibration is Bayesian and is solved using a Markov chain Monte Carlo (MCMC) method. A 3D probability density distribution for the inferred parameters is constructed, thus quantifying the uncertainty in the estimate. The phenomenal cost of using a 3D flow simulator inside an MCMC loop is mitigated by using surrogate models (“curve-fits”). A support vector machine classifier (SVMC) is used to impose our prior belief regarding parameter values, specifically to exclude nonphysical parameter combinations. The calibrated model is compared, in terms of its predictive skill, to simulations using uncalibrated linear and CEVMs. Finally, we find that the calibrated model, with one quadratic term, is more accurate than the uncalibrated simulator. The model is also checked at a flow condition at which the model was not calibrated.
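    A toy Metropolis sketch of the Bayesian calibration step (not the authors' code): parameters are sampled against vorticity data using an inexpensive surrogate in place of the 3D RANS solver, with a prior check standing in for the SVM classifier that excludes nonphysical combinations:

```python
import numpy as np

def metropolis(surrogate, data, sigma, prior_ok, n_samples=5000, step=0.05):
    """surrogate(theta) -> predicted vorticity; prior_ok(theta) -> bool."""
    rng = np.random.default_rng(0)
    theta = np.array([0.1, 0.1, 0.1])      # three calibrated parameters

    def log_post(t):
        if not prior_ok(t):                # nonphysical combinations rejected
            return -np.inf
        r = data - surrogate(t)
        return -0.5 * np.sum(r ** 2) / sigma ** 2

    samples, lp = [], log_post(theta)
    for _ in range(n_samples):
        prop = theta + step * rng.standard_normal(theta.size)
        lp_prop = log_post(prop)
        if np.log(rng.random()) < lp_prop - lp:   # accept/reject
            theta, lp = prop, lp_prop
        samples.append(theta.copy())
    return np.array(samples)               # posterior parameter samples
```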

  19. Using a gradient boosting model to improve the performance of low-cost aerosol monitors in a dense, heterogeneous urban environment

    NASA Astrophysics Data System (ADS)

    Johnson, Nicholas E.; Bonczak, Bartosz; Kontokosta, Constantine E.

    2018-07-01

    The increased availability and improved quality of new sensing technologies have catalyzed a growing body of research to evaluate and leverage these tools in order to quantify and describe urban environments. Air quality, in particular, has received greater attention because of the well-established links to serious respiratory illnesses and the unprecedented levels of air pollution in developed and developing countries and cities around the world. Though numerous laboratory and field evaluation studies have begun to explore the use and potential of low-cost air quality monitoring devices, the performance and stability of these tools has not been adequately evaluated in complex urban environments, and further research is needed. In this study, we present the design of a low-cost air quality monitoring platform based on the Shinyei PPD42 aerosol monitor and examine the suitability of the sensor for deployment in a dense heterogeneous urban environment. We assess the sensor's performance during a field calibration campaign from February 7th to March 25th 2017 with a reference instrument in New York City, and present a novel calibration approach using a machine learning method that incorporates publicly available meteorological data in order to improve overall sensor performance. We find that while the PPD42 performs well in relation to the reference instrument using linear regression (R2 = 0.36-0.51), a gradient boosting regression tree model can significantly improve device calibration (R2 = 0.68-0.76). We discuss the sensor's performance and reliability when deployed in a dense, heterogeneous urban environment during a period of significant variation in weather conditions, and important considerations when using machine learning techniques to improve the performance of low-cost air quality monitors.
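    A sketch of this calibration approach with hypothetical feature names: a gradient boosting model maps the raw PPD42 signal plus public meteorological variables to the reference PM concentration, with held-out data used to score the calibrated output:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

def calibrate(raw_signal, temp, rh, wind, reference_pm):
    X = np.column_stack([raw_signal, temp, rh, wind])
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, reference_pm, test_size=0.25, random_state=0)
    model = GradientBoostingRegressor(n_estimators=500, max_depth=3,
                                      learning_rate=0.05)
    model.fit(X_tr, y_tr)
    return model, model.score(X_te, y_te)   # R^2 on held-out data
```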

  20. Automatic inference of geometric camera parameters and inter-camera topology in uncalibrated disjoint surveillance cameras

    NASA Astrophysics Data System (ADS)

    den Hollander, Richard J. M.; Bouma, Henri; Baan, Jan; Eendebak, Pieter T.; van Rest, Jeroen H. C.

    2015-10-01

    Person tracking across non-overlapping cameras and other types of video analytics benefit from spatial calibration information that allows an estimation of the distance between cameras and a relation between pixel coordinates and world coordinates within a camera. In a large environment with many cameras, or for frequent ad-hoc deployments of cameras, the cost of this calibration is high. This creates a barrier to the use of video analytics. Automating the calibration allows for a short configuration time and the use of video analytics in a wider range of scenarios, including ad-hoc crisis situations and large-scale surveillance systems. We present an autocalibration method based entirely on pedestrian detections in surveillance video from multiple non-overlapping cameras. In this paper, we show the two main components of automatic calibration. The first is the intra-camera geometry estimation, which leads to an estimate of the tilt angle, focal length and camera height and is important for the conversion from pixels to meters and vice versa. The second is the inter-camera topology inference, which leads to an estimate of the distance between cameras and is important for spatio-temporal analysis of multi-camera tracking. This paper describes each of these methods and provides results on realistic video data.

  1. Strategic development of a multivariate calibration model for the uniformity testing of tablets by transmission NIR analysis.

    PubMed

    Sasakura, D; Nakayama, K; Sakamoto, T; Chikuma, T

    2015-05-01

    The use of transmission near-infrared spectroscopy (TNIRS) is of particular interest in the pharmaceutical industry because TNIRS does not require sample preparation and can analyze several tens of tablet samples in an hour, measuring all relevant information from a tablet while it is still on the production line. However, TNIRS has a narrow spectral range, and overtone vibrations often overlap. To perform content uniformity testing of tablets by TNIRS, various properties of the tableting process need to be captured in a multivariate prediction model, such as partial least squares regression. One issue is that typical approaches require several hundred reference samples as the basis of the method, rather than a strategically designed calibration set; many batches are needed to prepare the reference samples, which takes time and is not cost-effective. Our group investigated the concentration dependence of the calibration model using a strategic design. Consequently, we developed a more effective approach to building the TNIRS calibration model than the existing methodology.

  2. Calibration schemes of a field-compatible optical spectroscopic system to quantify neovascular changes in the dysplastic cervix

    NASA Astrophysics Data System (ADS)

    Chang, Vivide Tuan-Chyan; Merisier, Delson; Yu, Bing; Walmer, David K.; Ramanujam, Nirmala

    2011-03-01

    A significant challenge in detecting cervical pre-cancer in low-resource settings is the lack of effective screening facilities and trained personnel to detect the disease before it is advanced. Light-based technologies, particularly quantitative optical spectroscopy, have the potential to provide an effective, low-cost, and portable solution for cervical pre-cancer screening in these communities. We have developed and characterized a portable, USB-powered optical spectroscopic system to quantify total hemoglobin content, hemoglobin saturation, and the reduced scattering coefficient of cervical tissue in vivo. The system consists of a high-power LED as the light source, a bifurcated fiber optic assembly, and two USB spectrometers for sample and calibration spectra acquisition. The system was subsequently tested in Leogane, Haiti, where diffuse reflectance spectra from 33 colposcopically normal sites in 21 patients were acquired. Two different calibration methods, i.e., a post-study diffuse reflectance standard measurement and a real-time self-calibration channel, were studied. Our results suggest that a self-calibration channel enabled more accurate extraction of scattering contrast through simultaneous real-time correction of intensity drifts in the system. A self-calibration system also minimizes operator bias and the training required. Hence, future contact spectroscopy or imaging systems should incorporate a self-calibration channel to reliably extract scattering contrast.

  3. Differential Evolution algorithm applied to FSW model calibration

    NASA Astrophysics Data System (ADS)

    Idagawa, H. S.; Santos, T. F. A.; Ramirez, A. J.

    2014-03-01

    Friction Stir Welding (FSW) is a solid-state welding process that can be modelled using a Computational Fluid Dynamics (CFD) approach. These models use adjustable parameters to control the heat transfer and the heat input to the weld. The parameters are used to calibrate the model, and they are generally determined by a conventional trial-and-error approach. Since this method is not very efficient, we used the Differential Evolution (DE) algorithm to determine these parameters successfully. In order to improve the success rate and reduce the computational cost of the method, this work studied different characteristics of the DE algorithm, such as the evolution strategy, the objective function, the mutation scaling factor and the crossover rate. The DE algorithm was tested using a friction stir weld performed on UNS S32205 duplex stainless steel.
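    SciPy's built-in differential evolution gives a compact stand-in for the search described (the objective below is a dummy placeholder, not the FSW CFD model, and the parameter names and bounds are assumptions):

```python
from scipy.optimize import differential_evolution

def run_cfd_model(heat_input_eff, contact_h):
    """Placeholder misfit; a real run would execute the FSW CFD simulation
    and return the error against measured thermal histories."""
    return (heat_input_eff - 0.7) ** 2 + ((contact_h - 1500.0) / 1000.0) ** 2

bounds = [(0.3, 1.0),       # heat input efficiency (assumed range)
          (100.0, 5000.0)]  # contact heat-transfer coefficient, W/m^2/K

result = differential_evolution(lambda p: run_cfd_model(*p), bounds,
                                strategy="best1bin", mutation=(0.5, 1.0),
                                recombination=0.7, seed=0)
print(result.x, result.fun)
```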

  4. Boresight alignment method for mobile laser scanning systems

    NASA Astrophysics Data System (ADS)

    Rieger, P.; Studnicka, N.; Pfennigbauer, M.; Zach, G.

    2010-06-01

    Mobile laser scanning (MLS) is the latest approach to fast and cost-efficient acquisition of 3-dimensional spatial data. Accurately evaluating the boresight alignment in MLS systems is an obvious necessity. However, recent systems available on the market may lack suitable and efficient practical workflows for performing this calibration. This paper discusses an innovative method for accurately determining the boresight alignment of MLS systems employing 3D laser scanners. Scanning objects with a 3D laser scanner operating in 2D line-scan mode over several different runs and scan directions provides valuable scan data for determining the angular alignment between the inertial measurement unit and the laser scanner. Field data are presented demonstrating the final accuracy of the calibration and the high quality of the point cloud acquired during an MLS campaign.

  5. Purity assessment of organic calibration standards using a combination of quantitative NMR and mass balance.

    PubMed

    Davies, Stephen R; Jones, Kai; Goldys, Anna; Alamgir, Mahuiddin; Chan, Benjamin K H; Elgindy, Cecile; Mitchell, Peter S R; Tarrant, Gregory J; Krishnaswami, Maya R; Luo, Yawen; Moawad, Michael; Lawes, Douglas; Hook, James M

    2015-04-01

    Quantitative NMR spectroscopy (qNMR) has been examined for purity assessment using a range of organic calibration standards of varying structural complexities, certified using the traditional mass balance approach. Demonstrated equivalence between the two independent purity values confirmed the accuracy of qNMR and highlighted the benefit of using both methods in tandem to minimise the potential for hidden bias, thereby conferring greater confidence in the overall purity assessment. A comprehensive approach to purity assessment is detailed, utilising, where appropriate, multiple peaks in the qNMR spectrum, chosen on the basis of scientific reason and statistical analysis. Two examples are presented in which differences between the purity assignment by qNMR and mass balance are addressed in different ways depending on the requirement of the end user, affording fit-for-purpose calibration standards in a cost-effective manner.

  6. Ratio manipulating spectrophotometry versus chemometry as stability indicating methods for cefquinome sulfate determination

    NASA Astrophysics Data System (ADS)

    Yehia, Ali M.; Arafa, Reham M.; Abbas, Samah S.; Amer, Sawsan M.

    2016-01-01

    The spectral resolution of cefquinome sulfate (CFQ) in the presence of its degradation products was studied. Three selective, accurate and rapid spectrophotometric methods were developed for the determination of CFQ in the presence of either its hydrolytic, oxidative or photo-degradation products. The proposed ratio difference, derivative ratio and mean centering methods are ratio-manipulating spectrophotometric methods that were satisfactorily applied for the selective determination of CFQ within a linear range of 5.0-40.0 μg mL-1. Concentration Residuals Augmented Classical Least Squares was applied and evaluated for the determination of the cited drug in the presence of all its degradation products. Traditional Partial Least Squares regression was also applied and benchmarked against the proposed advanced multivariate calibration. Twenty-five experimentally designed synthetic mixtures of three factors at five levels were used to calibrate and validate the multivariate models. The advanced chemometric approach succeeded in the quantitative and qualitative analysis of CFQ along with its hydrolytic, oxidative and photo-degradation products. The proposed methods were applied successfully to the analysis of different pharmaceutical formulations. These developed methods were simple and cost-effective compared with the manufacturer's RP-HPLC method.

  7. Fusion of lens-free microscopy and mobile-phone microscopy images for high-color-accuracy and high-resolution pathology imaging

    NASA Astrophysics Data System (ADS)

    Zhang, Yibo; Wu, Yichen; Zhang, Yun; Ozcan, Aydogan

    2017-03-01

    Digital pathology and telepathology require imaging tools with high throughput, high resolution and accurate color reproduction. Lens-free on-chip microscopy based on digital in-line holography is a promising technique towards these needs, as it offers a wide field of view (FOV >20 mm2) and high resolution with a compact, low-cost and portable setup. Color imaging has been previously demonstrated by combining reconstructed images at three discrete wavelengths in the red, green and blue parts of the visible spectrum, i.e., the RGB combination method. However, this RGB combination method is subject to color distortions. To improve the color performance of lens-free microscopy for pathology imaging, here we present a wavelet-based color fusion imaging framework, termed "digital color fusion microscopy" (DCFM), which digitally fuses together a grayscale lens-free microscope image taken at a single wavelength and a low-resolution and low-magnification color-calibrated image taken by a lens-based microscope, which can simply be a cost-effective mobile-phone-based microscope. We show that the imaging results for an H&E stained breast cancer tissue slide obtained with the DCFM technique come very close to those of a color-calibrated microscope using a 40x objective lens with 0.75 NA. Quantitative comparison showed a 2-fold reduction in the mean color distance using the DCFM method compared to the RGB combination method, while also preserving the high-resolution features of the lens-free microscope. Due to the cost-effective and field-portable nature of both lens-free and mobile-phone microscopy techniques, their combination through the DCFM framework could be useful for digital pathology and telepathology applications in low-resource and point-of-care settings.
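
    The paper's exact pipeline is more involved, but the core wavelet-fusion idea, keeping the low-frequency (color-faithful) band from the lens-based image and the high-frequency detail bands from the lens-free reconstruction, can be sketched with PyWavelets. This is an illustrative sketch only; it assumes both images are already registered and resampled to the same grid.

```python
import pywt

def fuse_luminance(lensfree_gray, mobile_luma, wavelet="db4", levels=3):
    """Fuse a high-resolution grayscale image with a color-calibrated
    luminance channel: take the approximation (low-pass) band from the
    color image and the detail bands from the lens-free image, then
    reconstruct. Both inputs must be 2-D arrays of identical shape."""
    c_hi = pywt.wavedec2(lensfree_gray, wavelet, level=levels)
    c_lo = pywt.wavedec2(mobile_luma, wavelet, level=levels)
    fused = [c_lo[0]] + list(c_hi[1:])  # low-pass from color, details from lens-free
    return pywt.waverec2(fused, wavelet)
```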

  8. Taguchi Approach to Design Optimization for Quality and Cost: An Overview

    NASA Technical Reports Server (NTRS)

    Unal, Resit; Dean, Edwin B.

    1990-01-01

    Calibrations to the existing cost of doing business in space indicate that establishing a human presence on the Moon and Mars under the Space Exploration Initiative (SEI) will require resources that many feel exceed what the national budget can afford. For SEI to succeed, we must actually design and build space systems at lower cost this time, even with tremendous increases in quality and performance requirements, such as extremely high reliability. This implies that both government and industry must change the way they do business. Therefore, new philosophy and technology must be employed to design and produce reliable, high quality space systems at low cost. Recognizing the need to reduce cost and improve quality and productivity, the Department of Defense (DoD) and the National Aeronautics and Space Administration (NASA) have initiated Total Quality Management (TQM). TQM is a revolutionary management strategy in quality assurance and cost reduction. TQM requires complete management commitment, employee involvement, and use of statistical tools. The quality engineering methods of Dr. Taguchi, employing design of experiments (DOE), are among the most important statistical tools of TQM for designing high quality systems at reduced cost. Taguchi methods provide an efficient and systematic way to optimize designs for performance, quality, and cost. They have been used successfully in Japan and the United States to design reliable, high quality products at low cost in such areas as automobiles and consumer electronics. However, these methods are just beginning to see application in the aerospace industry. The purpose of this paper is to present an overview of the Taguchi methods for improving quality and reducing cost, describe the current state of applications, and discuss their role in identifying cost-sensitive design parameters.
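
    As a concrete illustration of the statistical machinery involved, Taguchi analyses summarize replicate measurements at each orthogonal-array design point with a signal-to-noise ratio; two standard textbook forms are sketched below (generic formulas, not taken from the paper):

```python
import numpy as np

def sn_smaller_is_better(y):
    """Taguchi S/N ratio (dB) for a 'smaller-the-better' characteristic,
    e.g. cost or defect rate."""
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(y ** 2))

def sn_larger_is_better(y):
    """S/N ratio (dB) for a 'larger-the-better' characteristic,
    e.g. strength or reliability."""
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(1.0 / y ** 2))

# replicate measurements at one orthogonal-array design point
print(sn_smaller_is_better([0.21, 0.25, 0.19]))
```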

  9. Absolute calibration of the Jenoptik CHM15k-x ceilometer and its applicability for quantitative aerosol monitoring

    NASA Astrophysics Data System (ADS)

    Geiß, Alexander; Wiegner, Matthias

    2014-05-01

    Knowledge of the spatiotemporal distribution of atmospheric aerosols and their optical characterization is essential for understanding the radiation budget, air quality, and climate. For this purpose, lidar is an excellent tool, as it is an active remote sensing technique. As multi-wavelength research lidars with depolarization channels are quite complex and costly, increasing attention is paid to so-called ceilometers. They are simple one-wavelength backscatter lidars with low pulse energy for eye-safe operation. Because maintenance costs are low and measurements can be performed continuously and unattended, they are suitable for long-term aerosol monitoring in a network. However, the signal-to-noise ratio is low, and the signals are not calibrated. The only optical property that can be derived from a ceilometer is the particle backscatter coefficient, and even this quantity requires a calibration of the signals. Using four years of measurements from a Jenoptik CHM15k-x ceilometer, we developed two methods for an absolute calibration of this system. The advantage of our approach is that only a few days with favorable meteorological conditions are required, during which Rayleigh calibration and comparison with our research lidar are possible to estimate the lidar constant. This enables us to derive the particle backscatter coefficient at 1064 nm, and we retrieved, for the first time, profiles in near real-time within an accuracy of 10%. If an appropriate lidar ratio is assumed, the aerosol optical depth of, e.g., the mixing layer can be determined with an accuracy depending on the accuracy of the lidar ratio estimate. Even for 'simple' applications, e.g., assessment of the mixing layer height, cloud detection, and detection of elevated aerosol layers, the particle backscatter coefficient has significant advantages over the measured (uncalibrated) attenuated backscatter. The possibility of continuous operation under nearly any meteorological condition with a temporal resolution on the order of 30 s also makes it possible to apply time-height tracking methods for detecting mixing layer heights. The combination of edge detection methods (e.g., wavelet covariance transform, gradient method, variance method) and edge tracking techniques is used to increase the reliability of layer detection and attribution. Thus, a feature mask of aerosols and clouds can be derived. Four years of measurements constitute an excellent basis for a climatology, including a homogeneous time series of mixing layer heights, aerosol layers, and cloud base heights in the troposphere. With the low overlap region of 180 m of the Jenoptik CHM15k-x, even very narrow mixing layers, typical of winter conditions, can be resolved.
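
    The abstract does not give the retrieval equations, but Rayleigh calibration generally rests on the single-scattering lidar equation: in an aerosol-free height interval the range-corrected signal should equal C · β_mol(z) · T_mol²(z), so the lidar constant C follows from a molecular model. A minimal sketch under those assumptions (profile names and the calibration interval are hypothetical):

```python
import numpy as np

def lidar_constant(signal, z, beta_mol, z_cal=(6000.0, 8000.0)):
    """Lidar constant C from an aerosol-free ('Rayleigh') height range:
    P(z) * z^2 = C * beta_mol(z) * T_mol(z)^2.

    signal: raw signal profile; z: range gates (m); beta_mol: molecular
    backscatter profile (m^-1 sr^-1), e.g. from a standard-atmosphere
    density model; z_cal: assumed aerosol-free interval."""
    s_mol = 8.0 * np.pi / 3.0                           # molecular lidar ratio (sr)
    tau = np.cumsum(s_mol * beta_mol * np.gradient(z))  # molecular optical depth
    expected = beta_mol * np.exp(-2.0 * tau)            # attenuated molecular signal
    mask = (z >= z_cal[0]) & (z <= z_cal[1])
    return np.median(signal[mask] * z[mask] ** 2 / expected[mask])
```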

  10. Artificial Vector Calibration Method for Differencing Magnetic Gradient Tensor Systems

    PubMed Central

    Li, Zhining; Zhang, Yingtang; Yin, Gang

    2018-01-01

    The measurement error of a differencing (i.e., using two homogeneous field sensors at a known baseline distance) magnetic gradient tensor system includes the biases, scale factors, and nonorthogonality of the single magnetic sensors, as well as the misalignment error between the sensor arrays, all of which can severely affect the measurement accuracy. In this paper, we propose a low-cost artificial vector calibration method for the tensor system. Firstly, the error parameter linear equations are constructed based on the single sensor's system error model to obtain the artificial ideal vector output of the platform, with the total magnetic intensity (TMI) scalar as a reference, by two nonlinear conversions, without any mathematical simplification. Secondly, the Levenberg-Marquardt algorithm is used to compute the integrated model of the 12 error parameters by a nonlinear least-squares fitting method with the artificial vector output as a reference, and a total of 48 parameters of the system are estimated simultaneously. The calibrated system then outputs along the reference platform-orthogonal coordinate system. The analysis results show that the artificial vector calibrated output can track the orientation fluctuations of the TMI accurately, effectively avoiding the "overcalibration" problem. The accuracy of the error parameters' estimation in the simulation is close to 100%. The experimental root-mean-square errors (RMSE) of the TMI and tensor components are less than 3 nT and 20 nT/m, respectively, and the estimation of the parameters is highly robust. PMID:29373544
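
    A common way to realize the scalar-referenced fit described here is to model each sensor's output as B_meas = A⁻¹ B_true + b (with A absorbing scale factors and nonorthogonality) and adjust the 12 parameters so the magnitude of the corrected vector matches the TMI reference at every attitude. A hedged sketch with SciPy's Levenberg-Marquardt solver (not the authors' implementation):

```python
import numpy as np
from scipy.optimize import least_squares

def calibrate_vector_sensor(b_meas, tmi_ref):
    """Fit a 12-parameter error model (3x3 combined scale/nonorthogonality
    matrix A and bias vector b) so that |A (B_meas - b)| matches the
    scalar TMI reference at every attitude.

    b_meas: (N, 3) raw vector readings; tmi_ref: (N,) reference TMI."""
    def residual(p):
        a = p[:9].reshape(3, 3)
        bias = p[9:]
        corrected = (b_meas - bias) @ a.T   # B_true ~ A (B_meas - b)
        return np.linalg.norm(corrected, axis=1) - tmi_ref
    p0 = np.concatenate([np.eye(3).ravel(), np.zeros(3)])
    sol = least_squares(residual, p0, method="lm")  # Levenberg-Marquardt
    return sol.x[:9].reshape(3, 3), sol.x[9:]
```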

  11. Node-to-node field calibration of wireless distributed air pollution sensor network.

    PubMed

    Kizel, Fadi; Etzion, Yael; Shafran-Nathan, Rakefet; Levy, Ilan; Fishbain, Barak; Bartonova, Alena; Broday, David M

    2018-02-01

    Low-cost air quality sensors offer high-resolution spatiotemporal measurements that can be used for air resources management and exposure estimation. Yet, such sensors require frequent calibration to provide reliable data, since even after a laboratory calibration they might not report correct values when deployed in the field, owing to interference from other pollutants, sensitivity to environmental conditions, and sensor aging and drift. Field calibration has been suggested as a means of overcoming these limitations, with the common strategy involving periodic collocations of the sensors at an air quality monitoring station. However, the cost and complexity involved in relocating numerous sensor nodes back and forth, and the loss of data during the repeated calibration periods, make this strategy inefficient. This work examines an alternative approach, node-to-node (N2N) calibration, where only one sensor in each chain is directly calibrated against the reference measurements and the rest are calibrated sequentially, one against the other, while they are deployed and collocated in pairs. The calibration can be performed multiple times as a routine procedure. This procedure minimizes the total number of sensor relocations and enables calibration while simultaneously collecting data at the deployment sites. We studied N2N chain calibration and the propagation of the calibration error analytically, computationally, and experimentally. The in-situ N2N calibration is shown to be generic and applicable to different pollutants, sensing technologies, sensor platforms, chain lengths, and sensor orders within the chain. In particular, we show that chain calibration of three nodes, each calibrated for a week, propagates calibration errors that are similar to those found in direct field calibration. Hence, N2N calibration is shown to be suitable for the calibration of distributed sensor networks. Copyright © 2017 Elsevier Ltd. All rights reserved.
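
    In its simplest form, the N2N chain reduces to a sequence of pairwise linear regressions, with each freshly calibrated node serving as the reference for the next. The sketch below illustrates that structure under the simplifying assumption that all collocation records are aligned arrays of equal length (real deployments would use separate collocation periods per pair):

```python
import numpy as np

def n2n_chain(readings, reference):
    """Sequential node-to-node calibration of a sensor chain.

    readings: list of 1-D arrays; readings[0] is collocated with the
    reference instrument, and readings[i] with readings[i-1].
    Returns per-node (slope, offset) mapping raw values to the
    reference scale."""
    params = []
    target = reference
    for raw in readings:
        slope, offset = np.polyfit(raw, target, 1)  # OLS line fit
        params.append((slope, offset))
        target = slope * raw + offset  # calibrated node becomes next reference
    return params
```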

  12. The BErkeley Atmospheric CO2 Observation Network: field calibration and evaluation of low-cost air quality sensors

    NASA Astrophysics Data System (ADS)

    Kim, Jinsol; Shusterman, Alexis A.; Lieschke, Kaitlyn J.; Newman, Catherine; Cohen, Ronald C.

    2018-04-01

    The newest generation of air quality sensors is small, low cost, and easy to deploy. These sensors are an attractive option for developing dense observation networks in support of regulatory activities and scientific research. They are also of interest for use by individuals to characterize their home environment and for citizen science. However, these sensors are difficult to interpret. Although some have an approximately linear response to the target analyte, that response may vary with time, temperature, and/or humidity, and the cross-sensitivity to non-target analytes can be large enough to be confounding. Standard approaches to calibration that are sufficient to account for these variations require a quantity of equipment and labor that negates the attractiveness of the sensors' low cost. Here we describe a novel calibration strategy for a set of sensors, including CO, NO, NO2, and O3, that makes use of (1) multiple co-located sensors, (2) a priori knowledge about the chemistry of NO, NO2, and O3, (3) an estimate of mean emission factors for CO, and (4) the global background of CO. The strategy requires one or more well calibrated anchor points within the network domain, but it does not require direct calibration of any of the individual low-cost sensors. The procedure nonetheless accounts for temperature and drift, in both the sensitivity and zero offset. We demonstrate this calibration on a subset of the sensors comprising BEACO2N, a distributed network of approximately 50 sensor nodes, each measuring CO2, CO, NO, NO2, O3 and particulate matter at 10 s time resolution and approximately 2 km spacing within the San Francisco Bay Area.

  13. Challenges in the Development of a Self-Calibrating Network of Ceilometers.

    NASA Astrophysics Data System (ADS)

    Hervo, Maxime; Wagner, Frank; Mattis, Ina; Baars, Holger; Haefele, Alexander

    2015-04-01

    There are more than 700 Automatic Lidars and Ceilometers (ALCs) currently operating in Europe. Modern ceilometers can do more than simply measure the cloud base height: they can also detect aerosol layers such as volcanic ash, Saharan dust, or aerosols within the planetary boundary layer. In the frame of E-PROFILE, which is part of EUMETNET, a European network of automatic lidars and ceilometers will be set up to exploit this new capability. To be able to monitor the evolution of aerosol layers over a large spatial scale, the measurements need to be consistent from one site to another. Currently, most of the instruments provide only relative, uncalibrated measurements. It is therefore necessary to calibrate the instruments in order to develop a consistent product across instruments from various networks and to combine them in a European network like E-PROFILE. As it is not possible to use an external reference (such as a sun photometer or a Raman lidar) to calibrate all the ALCs in the E-PROFILE network, a self-calibration algorithm is required. Two calibration methods suited for automated use in a network have been identified: the Rayleigh and the liquid cloud calibration methods. In the Rayleigh method, backscatter signals from molecules (the Rayleigh signal) are measured and used to calculate the lidar constant (Wiegner et al. 2012). At the wavelengths used by most ceilometers, this signal is weak and can be measured easily only during cloud-free nights. However, with the new algorithm implemented in the frame of the TOPROF COST Action, Rayleigh calibration was successfully performed on a CHM15k for more than 50% of the nights from October 2013 to September 2014. This method was validated against two reference instruments, the collocated EARLINET PollyXT lidar and the CALIPSO space-borne lidar. The lidar constant was on average within 5.5% of the lidar constant determined by the EARLINET lidar, confirming the validity of the self-calibration method. For 3 CALIPSO overpasses the agreement was on average 20.0%; it is less accurate due to the large uncertainties of CALIPSO data close to the surface. In contrast to the Rayleigh method, the cloud calibration method uses the complete attenuation of the transmitted beam by a liquid water cloud to calculate the lidar constant (O'Connor 2004). The main challenge is the selection of accurately measured water clouds: these clouds should not contain any ice crystals, and the detector should not saturate. The first problem is especially important during winter, and the second for low clouds. Furthermore, the overlap function should be known accurately, especially when the water cloud is located at a distance where the overlap between the laser beam and the telescope field of view is still incomplete. In the E-PROFILE pilot network, the Rayleigh calibration is already performed automatically. This demonstration network makes available, in real time, calibrated ALC measurements from 8 instruments of 4 different types in 6 countries. In collaboration with TOPROF and 20 national weather services, E-PROFILE will provide near-real-time ALC measurements in most of Europe in 2017.
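
    For the liquid cloud method, O'Connor (2004) showed that the integrated attenuated backscatter through a fully attenuating liquid water cloud tends to 1/(2ηS), with η the multiple-scattering factor and S ≈ 18.8 sr the lidar ratio of liquid droplets. A minimal sketch of the resulting calibration factor (the variable names and the η value are illustrative assumptions):

```python
import numpy as np

def cloud_calibration_factor(signal, z, cloud_mask, eta=0.7, s_liquid=18.8):
    """Lidar calibration factor from an opaque liquid water cloud.

    The integrated *attenuated* backscatter of such a cloud should equal
    1 / (2 * eta * S); the ratio of the measured, uncalibrated integral
    to that theoretical value is the factor to divide raw signals by."""
    rcs = signal * z ** 2                          # range-corrected signal
    b_int = np.trapz(rcs[cloud_mask], z[cloud_mask])
    return b_int * 2.0 * eta * s_liquid
```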

  14. Analysis of various quality attributes of sunflower and soybean plants by near infra-red reflectance spectroscopy: Development and validation of calibration models

    USDA-ARS?s Scientific Manuscript database

    Soybean and sunflower are summer annuals that can be grown as an alternative to corn and may be particularly useful in organic production systems for forage in addition to their traditional use as protein and/or oil yielding crops. Rapid and low cost methods of analyzing plant quality would be helpf...

  15. Calibration of low-cost gas sensors for an urban air quality monitoring network

    NASA Astrophysics Data System (ADS)

    Scott, A.; Kelley, C.; He, C.; Ghugare, P.; Lehman, A.; Benish, S.; Stratton, P.; Dickerson, R. R.; Zuidema, C.; Azdoud, Y.; Ren, X.

    2017-12-01

    In a warming world, environmental pollution may be exacerbated by anthropogenic influences, such as climate change and the urban heat island effect, as well as by natural phenomena such as heat waves. However, monitoring air pollution to federal reference standards (approximately 1 part per billion, or ppb, for ambient ozone) is cost-prohibitive in heterogeneous urban areas, as many expensive devices are required to fully capture a region's geo-spatial variability. Innovation in low-cost sensors provides a potential solution, yet technical challenges remain in overcoming possible imprecision in the data. We present the calibrations of ozone and nitrogen dioxide sensors from a low-cost air quality monitoring device designed for the Baltimore Open Air Project. The sensors used in this study are commercially available thin-film electrochemical sensors from SPEC Sensors, which are amperometric, meaning they generate a current proportional to the volumetric fraction of the gas. The results of sensor calibrations in the laboratory and field are presented.

  16. Human wound photogrammetry with low-cost hardware based on automatic calibration of geometry and color

    NASA Astrophysics Data System (ADS)

    Jose, Abin; Haak, Daniel; Jonas, Stephan; Brandenburg, Vincent; Deserno, Thomas M.

    2015-03-01

    Photographic documentation and image-based wound assessment are frequently performed in medical diagnostics, patient care, and clinical research. To support quantitative assessment, photographic imaging is based on expensive, high-quality hardware and still needs appropriate registration and calibration. Using inexpensive consumer hardware such as smartphone-integrated cameras, calibration of geometry, color, and contrast is challenging. Some methods involve color calibration using a reference pattern such as a standard color card, which is located manually in the photographs. In this paper, we adapt the lattice detection algorithm by Park et al. from real-world scenes to medicine. At first, the algorithm extracts and clusters feature points according to their local intensity patterns. Groups of similar points are fed into a selection process, which tests for suitability as a lattice grid. The group that most probably describes the meshes of a lattice is selected, and from it a template for an initial lattice cell is extracted. Then, a Markov random field is modeled. Using mean-shift belief propagation, the detection of the 2D lattice is solved iteratively as a spatial tracking problem. Least-squares geometric calibration of projective distortions and non-linear color calibration in RGB space are supported by 35 corner points and 24 color patches, respectively. The method is tested on 37 photographs taken from the German Calciphylaxis registry, where non-standardized photographic documentation is collected nationwide from all contributing trial sites. In all images, the reference card location is correctly identified. At least 28 of the 35 lattice points were detected, outperforming the SIFT-based approach previously applied. Based on these coordinates, robust geometry and color registration is performed, making the photographs comparable for quantitative analysis.

  17. Online low-field NMR spectroscopy for process control of an industrial lithiation reaction-automated data analysis.

    PubMed

    Kern, Simon; Meyer, Klas; Guhl, Svetlana; Gräßer, Patrick; Paul, Andrea; King, Rudibert; Maiwald, Michael

    2018-05-01

    Monitoring specific chemical properties is the key to chemical process control. Today, mainly optical online methods are applied, which require time- and cost-intensive calibration effort. NMR spectroscopy, with the advantage of being a direct comparison method without the need for calibration, has high potential for enabling closed-loop process control while exhibiting short set-up times. Compact NMR instruments make NMR spectroscopy accessible in industrial and rough environments for process monitoring and advanced process control strategies. We present a fully automated data analysis approach based entirely on physically motivated spectral models as first-principles information (indirect hard modeling, IHM) and apply it to a given pharmaceutical lithiation reaction in the framework of the European Union's Horizon 2020 project CONSENS. Online low-field NMR (LF NMR) data were analyzed by IHM with low calibration effort, compared to a multivariate PLS-R (partial least squares regression) approach, and both were validated using online high-field NMR (HF NMR) spectroscopy. Graphical abstract: NMR sensor module for monitoring the aromatic coupling of 1-fluoro-2-nitrobenzene (FNB) with aniline to 2-nitrodiphenylamine (NDPA) using lithium bis(trimethylsilyl)amide (Li-HMDS) in continuous operation. Online 43.5 MHz low-field NMR (LF) was compared to 500 MHz high-field NMR spectroscopy (HF) as the reference method.

  18. Simulation and optimization of an experimental membrane wastewater treatment plant using computational intelligence methods.

    PubMed

    Ludwig, T; Kern, P; Bongards, M; Wolf, C

    2011-01-01

    The optimization of relaxation and filtration times of submerged microfiltration flat modules in membrane bioreactors used for municipal wastewater treatment is essential for efficient plant operation. However, the optimization and control of such plants and their filtration processes is a challenging problem due to the underlying highly nonlinear and complex processes. This paper presents the use of genetic algorithms for this optimization problem in conjunction with a fully calibrated simulation model, as computational intelligence methods are perfectly suited to the nonconvex multi-objective nature of the optimization problems posed by these complex systems. The simulation model is developed and calibrated using membrane modules from the wastewater simulation software GPS-X based on the Activated Sludge Model No.1 (ASM1). Simulation results have been validated at a technical reference plant. They clearly show that filtration process costs for cleaning and energy can be reduced significantly by intelligent process optimization.

  19. Magnetic nanoparticle temperature estimation.

    PubMed

    Weaver, John B; Rauwerdink, Adam M; Hansen, Eric W

    2009-05-01

    The authors present a method of measuring the temperature of magnetic nanoparticles that can be adapted to provide in vivo temperature maps. Many of the minimally invasive therapies that promise to reduce health care costs and improve patient outcomes heat tissue to very specific temperatures to be effective. Measurements are required because physiological cooling, primarily blood flow, makes the temperature difficult to predict a priori. The ratio of the fifth and third harmonics of the magnetization generated by magnetic nanoparticles in a sinusoidal field is used to generate a calibration curve and to subsequently estimate the temperature. The calibration curve is obtained by varying the amplitude of the sinusoidal field. The temperature can then be estimated from any subsequent measurement of the ratio. The accuracy was 0.3 K between 20 and 50 °C using the current apparatus and half-second measurements. The method is independent of nanoparticle concentration and nanoparticle size distribution.
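
    The measurement itself reduces to extracting two odd harmonics of the magnetization signal and inverting a calibration curve. The sketch below illustrates this with an FFT and linear interpolation; it compresses the paper's amplitude-sweep calibration into a pre-built ratio-to-temperature table, which is an assumption on our part:

```python
import numpy as np

def harmonic_ratio(m_t, f_drive, fs):
    """Ratio of the 5th to 3rd harmonic of the magnetization signal m_t,
    sampled at fs (Hz) under a sinusoidal drive of frequency f_drive."""
    spec = np.abs(np.fft.rfft(m_t))
    freqs = np.fft.rfftfreq(len(m_t), 1.0 / fs)
    h3 = spec[np.argmin(np.abs(freqs - 3 * f_drive))]
    h5 = spec[np.argmin(np.abs(freqs - 5 * f_drive))]
    return h5 / h3

def temperature_from_ratio(ratio, cal_ratios, cal_temps):
    """Invert a monotonic calibration table (ratio measured at known
    temperatures for a fixed drive amplitude) by linear interpolation."""
    order = np.argsort(cal_ratios)
    return np.interp(ratio, np.asarray(cal_ratios)[order],
                     np.asarray(cal_temps)[order])
```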

  20. Technical note: A simple approach for efficient collection of field reference data for calibrating remote sensing mapping of northern wetlands

    NASA Astrophysics Data System (ADS)

    Gålfalk, Magnus; Karlson, Martin; Crill, Patrick; Bousquet, Philippe; Bastviken, David

    2018-03-01

    The calibration and validation of remote sensing land cover products are highly dependent on accurate field reference data, which are costly and practically challenging to collect. We describe an optical method for the collection of field reference data that is a fast, cost-efficient, and robust alternative to field surveys and UAV imaging. A lightweight, waterproof, remote-controlled RGB camera (GoPro HERO4 Silver, GoPro Inc.) was used to take wide-angle images from heights of 3.1 to 4.5 m using an extendable monopod, as well as representative near-ground (< 1 m) images to identify spectral and structural features that correspond to various land covers under the prevailing lighting conditions. A semi-automatic classification was made based on six surface types (graminoids, water, shrubs, dry moss, wet moss, and rock). The method enables collection of detailed field reference data, which is critical in many remote sensing applications, such as satellite-based wetland mapping. The method uses common, inexpensive equipment, does not require special skills or training, and is facilitated by a step-by-step manual that is included in the Supplement. Over time, a global ground cover database can be built that can be used as reference data for studies of non-forested wetlands from satellites such as Sentinel 1 and 2 (10 m pixel size).

  1. Data Verification Tools for Minimizing Management Costs of Dense Air-Quality Monitoring Networks.

    PubMed

    Miskell, Georgia; Salmond, Jennifer; Alavi-Shoshtari, Maryam; Bart, Mark; Ainslie, Bruce; Grange, Stuart; McKendry, Ian G; Henshaw, Geoff S; Williams, David E

    2016-01-19

    Aiming to minimize both the capital and maintenance costs of an extensive air-quality measurement network, we present simple statistical methods, requiring no extensive training data sets, for automated real-time verification of the reliability of data delivered by a spatially dense hybrid network of both low-cost and reference ozone measurement instruments. Ozone is a pollutant with a relatively smooth spatial spread over large scales, although there can be significant small-scale variations. We take advantage of these characteristics and demonstrate detection of instrument calibration drift within a few days using a rolling 72 h comparison of hourly averaged data from the test instrument with that from suitably defined proxies. We define the required characteristics of the proxy measurements by working from a definition of the network purpose and specification, in this case reliable determination of the proportion of hourly averaged ozone measurements that are above a threshold in any given day, and detection of calibration drift of greater than ±30% in slope or ±5 parts-per-billion in offset. By analyzing results of a study of an extensive deployment of low-cost instruments in the Lower Fraser Valley, we demonstrate that proxies can be established using land-use criteria and that simple statistical comparisons can identify low-cost instruments that are not stable and therefore need replacing. We propose that a minimal set of compliant reference instruments can be used to verify the reliability of data from a much more extensive network of low-cost devices.
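
    The stated drift criteria translate directly into a rolling regression check. A sketch of such a check with pandas (the thresholds come from the abstract; the window bookkeeping and minimum-sample rule are ours):

```python
import numpy as np
import pandas as pd

def drift_flags(test, proxy, window_h=72, min_n=48,
                slope_tol=0.30, offset_tol=5.0):
    """Rolling comparison of hourly-averaged test-instrument data against
    a proxy series (both pandas Series on the same DatetimeIndex).

    A window is flagged when its OLS slope departs from 1 by more than
    slope_tol (i.e. +/-30%) or its offset exceeds offset_tol (ppb)."""
    both = pd.concat([test, proxy], axis=1, keys=["test", "proxy"])
    step = pd.Timedelta(hours=window_h)
    flags = {}
    for end in both.index:
        sub = both.loc[end - step:end].dropna()
        if len(sub) < min_n:
            continue  # not enough valid hours in this window
        slope, offset = np.polyfit(sub["proxy"], sub["test"], 1)
        flags[end] = abs(slope - 1.0) > slope_tol or abs(offset) > offset_tol
    return pd.Series(flags)
```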

  2. Network operability of ground-based microwave radiometers: Calibration and standardization efforts

    NASA Astrophysics Data System (ADS)

    Pospichal, Bernhard; Löhnert, Ulrich; Küchler, Nils; Czekala, Harald

    2017-04-01

    Ground-based microwave radiometers (MWRs) are already widely used by national weather services and research institutions around the world. Most of the instruments operate continuously and are beginning to be incorporated into data assimilation for atmospheric models. In particular, their potential for continuously observing boundary-layer temperature profiles as well as integrated water vapor and cloud liquid water path makes them valuable for improving short-term weather forecasts. Until now, however, most MWRs have been operated as stand-alone instruments. In order to benefit from a network of these instruments, standardization of calibration, operation, and data format is necessary. In the frame of TOPROF (COST Action ES1303), several efforts have been undertaken, such as uncertainty and bias assessment and calibration intercomparison campaigns. The goal was to establish protocols for providing quality-controlled (QC) MWR data and their uncertainties. To this end, standardized calibration procedures for MWRs have been developed and recommendations for radiometer users compiled. Based on the results of the TOPROF campaigns, a new, high-accuracy liquid-nitrogen calibration load has been introduced for MWRs manufactured by Radiometer Physics GmbH (RPG). The new load improves the accuracy of the measurements considerably and will lead to even more reliable atmospheric observations. In addition to the recommendations for set-up, calibration, and operation of ground-based MWRs within a future network, we will present homogenized methods to determine the accuracy of a running calibration as well as means for automatic data quality control. This sets the stage for the planned microwave calibration center at JOYCE (Jülich Observatory for Cloud Evolution), which will be briefly introduced.

  3. Direct injection GC method for measuring light hydrocarbon emissions from cooling-tower water.

    PubMed

    Lee, Max M; Logan, Tim D; Sun, Kefu; Hurley, N Spencer; Swatloski, Robert A; Gluck, Steve J

    2003-12-15

    A Direct Injection GC method for quantifying low levels of light hydrocarbons (C6 and below) in cooling water has been developed. It is intended to overcome the limitations of the currently available technology. The principle of this method is to use a stripper column in a GC to strip water from the hydrocarbons prior to entering the separation column. No sample preparation is required, since the water sample is introduced directly into the GC. Method validation indicates that the Direct Injection GC method offers approximately 15 min analysis time with excellent precision and recovery. The calibration studies with ethylene and propylene show that both liquid and gas standards are suitable for routine calibration and calibration verification. The sampling method, using zero-headspace traditional VOA (Volatile Organic Analysis) vials and a sample chiller, has also been validated. It is apparent that the sampling method is sufficient to minimize the potential for losses of light hydrocarbons, and samples can be held at 4 °C for up to 7 days with more than 93% recovery. The Direct Injection GC method also offers <1 ppb (w/v) level method detection limits for ethylene, propylene, and benzene. It is superior to the existing El Paso stripper method. In addition to lower detection limits for ethylene and propylene, the Direct Injection GC method quantifies individual light hydrocarbons in cooling water, provides better recoveries, and requires less maintenance and lower setup costs. Since the instrumentation and supplies are readily available, this technique could easily be established as a standard or alternative method for routine emission monitoring and leak detection of light hydrocarbons in cooling-tower water.

  4. Laboratory Performance of Five Selected Soil Moisture Sensors Applying Factory and Own Calibration Equations for Two Soil Media of Different Bulk Density and Salinity Levels.

    PubMed

    Matula, Svatopluk; Báťková, Kamila; Legese, Wossenu Lemma

    2016-11-15

    Non-destructive soil water content determination is a fundamental component for many agricultural and environmental applications. The accuracy and costs of the sensors define the measurement scheme and the ability to fit the natural heterogeneous conditions. The aim of this study was to evaluate five commercially available and relatively cheap sensors usually grouped with impedance and FDR sensors. ThetaProbe ML2x (impedance) and ECH₂O EC-10, ECH₂O EC-20, ECH₂O EC-5, and ECH₂O TE (all FDR) were tested on silica sand and loess of defined characteristics under controlled laboratory conditions. The calibrations were carried out in nine consecutive soil water contents from dry to saturated conditions (pure water and saline water). The gravimetric method was used as a reference method for the statistical evaluation (ANOVA with significance level 0.05). Generally, the results showed that our own calibrations led to more accurate soil moisture estimates. Variance component analysis arranged the factors contributing to the total variation as follows: calibration (contributed 42%), sensor type (contributed 29%), material (contributed 18%), and dry bulk density (contributed 11%). All the tested sensors performed very well within the whole range of water content, especially the sensors ECH₂O EC-5 and ECH₂O TE, which also performed surprisingly well in saline conditions.

  5. Laboratory Performance of Five Selected Soil Moisture Sensors Applying Factory and Own Calibration Equations for Two Soil Media of Different Bulk Density and Salinity Levels

    PubMed Central

    Matula, Svatopluk; Báťková, Kamila; Legese, Wossenu Lemma

    2016-01-01

    Non-destructive soil water content determination is a fundamental component for many agricultural and environmental applications. The accuracy and costs of the sensors define the measurement scheme and the ability to fit the natural heterogeneous conditions. The aim of this study was to evaluate five commercially available and relatively cheap sensors usually grouped with impedance and FDR sensors. ThetaProbe ML2x (impedance) and ECH2O EC-10, ECH2O EC-20, ECH2O EC-5, and ECH2O TE (all FDR) were tested on silica sand and loess of defined characteristics under controlled laboratory conditions. The calibrations were carried out in nine consecutive soil water contents from dry to saturated conditions (pure water and saline water). The gravimetric method was used as a reference method for the statistical evaluation (ANOVA with significance level 0.05). Generally, the results showed that our own calibrations led to more accurate soil moisture estimates. Variance component analysis arranged the factors contributing to the total variation as follows: calibration (contributed 42%), sensor type (contributed 29%), material (contributed 18%), and dry bulk density (contributed 11%). All the tested sensors performed very well within the whole range of water content, especially the sensors ECH2O EC-5 and ECH2O TE, which also performed surprisingly well in saline conditions. PMID:27854263

  6. Validating accelerometry estimates of energy expenditure across behaviours using heart rate data in a free-living seabird.

    PubMed

    Hicks, Olivia; Burthe, Sarah; Daunt, Francis; Butler, Adam; Bishop, Charles; Green, Jonathan A

    2017-05-15

    Two main techniques have dominated the field of ecological energetics: the heart rate and doubly labelled water methods. Although well established, they are not without their weaknesses, namely expense, intrusiveness and lack of temporal resolution. A new technique has been developed using accelerometers; it uses the overall dynamic body acceleration (ODBA) of an animal as a calibrated proxy for energy expenditure. This method provides high-resolution data without the need for surgery. Significant relationships exist between the rate of oxygen consumption (V̇O2) and ODBA in controlled conditions across a number of taxa; however, it is not known whether ODBA represents a robust proxy for energy expenditure consistently in all natural behaviours, and there have been specific questions over its validity during diving in diving endotherms. Here, we simultaneously deployed accelerometers and heart rate loggers in a wild population of European shags (Phalacrocorax aristotelis). Existing calibration relationships were then used to make behaviour-specific estimates of energy expenditure for each of these two techniques. Compared with heart rate-derived estimates, the ODBA method predicts energy expenditure well during flight and diving behaviour, but overestimates the cost of resting behaviour. We then combined these two datasets to generate a new calibration relationship between ODBA and V̇O2 that accounts for this by being informed by heart rate-derived estimates. Across behaviours we found a good relationship between ODBA and V̇O2. Within individual behaviours, we found useable relationships between ODBA and V̇O2 for flight and resting, and a poor relationship during diving. The error associated with these new calibration relationships mostly originates from the previous heart rate calibration rather than from the ODBA method. The equations provide tools for understanding how energy constrains ecology across the complex behaviour of free-living diving birds. © 2017. Published by The Company of Biologists Ltd.
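
    ODBA itself is a simple quantity: the gravitational (static) component is estimated with a running mean and subtracted, and the absolute dynamic components of the three axes are summed. A generic sketch (the 2 s smoothing window is a typical choice, not taken from this paper):

```python
import numpy as np

def odba(acc, fs, window_s=2.0):
    """Overall dynamic body acceleration from a tri-axial record.

    acc: (N, 3) acceleration in g; fs: sampling rate (Hz). The static
    (gravitational) component per axis is estimated with a running mean
    and subtracted; ODBA is the sum of absolute dynamic components."""
    n = max(1, int(window_s * fs))
    kernel = np.ones(n) / n
    static = np.column_stack(
        [np.convolve(acc[:, i], kernel, mode="same") for i in range(3)]
    )
    return np.sum(np.abs(acc - static), axis=1)
```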

  7. Temperature and Humidity Calibration of a Low-Cost Wireless Dust Sensor for Real-Time Monitoring.

    PubMed

    Hojaiji, Hannaneh; Kalantarian, Haik; Bui, Alex A T; King, Christine E; Sarrafzadeh, Majid

    2017-03-01

    This paper introduces the design, calibration, and validation of a low-cost portable sensor for the real-time measurement of dust particles in the environment. The proposed design combines low hardware cost with calibration based on temperature and humidity sensing to achieve accurate processing of airborne dust density. Using commercial particulate matter sensors, a highly accurate air quality monitoring sensor was designed and calibrated against real-world variations in humidity and temperature for indoor and outdoor applications. Furthermore, to provide a low-cost, secure solution for real-time data transfer and monitoring, an onboard Bluetooth module with the AES data encryption protocol was implemented. The wireless sensor was tested for accuracy against a Dylos DC1100 Pro Air Quality Monitor, as well as an Alphasense OPC-N2 optical air quality monitoring sensor. The sensor was also tested for reliability by comparing it to an exact copy of itself under indoor and outdoor conditions. Accurate measurements under real-world, dynamically varying humidity and temperature conditions were achievable with the proposed sensor when compared to the commercially available sensors. In addition to accurate and reliable sensing, the sensor was designed to be wearable and to perform real-time data collection and transmission, making it easy to collect and analyze data for air quality monitoring and real-time feedback in remote health monitoring applications. Thus, the proposed device achieves high quality measurements at lower cost than commercially available wireless air quality sensors.

  8. A Novel Sensor System for Measuring Wheel Loads of Vehicles on Highways

    PubMed Central

    Zhang, Wenbin; Suo, Chunguang; Wang, Qi

    2008-01-01

    With the development of highway transportation and trade, vehicle Weigh-In-Motion (WIM) technology has become a key technology for measuring traffic loads. In this paper, a novel WIM system based on monitoring pavement strain responses in rigid pavement was investigated. In this WIM system, multiple low-cost, lightweight, small-volume, and high-accuracy embedded concrete strain sensors were used as WIM sensors to measure rigid pavement strain responses. In order to verify the feasibility of the method, a system prototype based on multiple sensors was designed and deployed on a relatively busy freeway. Field calibration and tests were performed with known two-axle truck wheel loads, and the measurement errors were calculated based on the static weights measured with a static weighbridge. This enables the weights of other vehicles to be calculated from the calibration constant. Calibration and test results for individual sensors and three-sensor fusions are both provided. Repeatability, sources of error, and weight accuracy are discussed. The results showed that the proposed method is feasible and has high accuracy. Furthermore, a sample-mean approach fusing multiple individual sensors provides better performance than individual sensors. PMID:27873952

  9. Stereoscopic 3D reconstruction using motorized zoom lenses within an embedded system

    NASA Astrophysics Data System (ADS)

    Liu, Pengcheng; Willis, Andrew; Sui, Yunfeng

    2009-02-01

    This paper describes a novel embedded system capable of estimating the 3D positions of surfaces viewed by a stereoscopic rig consisting of a pair of calibrated cameras. Novel theoretical and technical aspects of the system are tied to two aspects of the design that deviate from typical stereoscopic reconstruction systems: (1) incorporation of a 10x zoom lens (Rainbow H10x8.5) and (2) implementation on an embedded system. The system components include a DSP running μClinux, an embedded version of the Linux operating system, and an FPGA. The DSP orchestrates data flow within the system and performs complex computational tasks, while the FPGA provides an interface to the system devices, which consist of a CMOS camera pair and a pair of servo motors that rotate (pan) each camera. Calibration of the camera pair is accomplished using a collection of stereo images that view a common chessboard calibration pattern at a set of pre-defined zoom positions. Calibration settings for an arbitrary zoom setting are estimated by interpolation of the camera parameters. A low-computational-cost method for dense stereo matching is used to compute depth disparities for the stereo image pairs. Surface reconstruction is accomplished by classical triangulation of the matched points from the depth disparities. This article includes our methods and results for the following problems: (1) automatic computation of the focus and exposure settings for the lens and camera sensor, (2) calibration of the system for various zoom settings, and (3) stereo reconstruction results for several free-form objects.
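
    The zoom-interpolation step can be illustrated in a few lines: each intrinsic parameter calibrated at the pre-defined zoom stops is interpolated for intermediate settings. The numeric values below are invented for illustration:

```python
import numpy as np

# calibrated focal lengths (pixels) at a few pre-defined zoom stops
# (hypothetical values, one entry per calibration image set)
zoom_stops = np.array([0, 2, 4, 6, 8, 10])
fx_calib = np.array([850.0, 1100.0, 1500.0, 2100.0, 3000.0, 4300.0])

def focal_at_zoom(z):
    """Piecewise-linear interpolation of a calibrated intrinsic for an
    arbitrary zoom setting z; the same idea applies to fy, the principal
    point, and distortion terms."""
    return np.interp(z, zoom_stops, fx_calib)

print(focal_at_zoom(3.5))
```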

  10. A novel instrumentation circuit for electrochemical measurements.

    PubMed

    Yin, Li-Te; Wang, Hung-Yu; Lin, Yang-Chiuan; Huang, Wen-Chung

    2012-01-01

    In this paper, a novel signal processing circuit that can be used for the measurement of H(+) ion and urea concentrations is presented. A potentiometric method is used to detect the concentrations of H(+) ions and urea by using H(+) ion-selective electrodes and urea electrodes, respectively. The experimental data show that this measuring structure has a linear pH response over the range of pH 2 to 12, and the dynamic range for urea concentration measurement is 0.25 to 64 mg/dL. The designed instrumentation circuit possesses a calibration function and can be applied to different sensing electrodes for electrochemical analysis. It is multi-purpose, easy to calibrate, and low cost.

  11. A calibration method for fringe reflection technique based on the analytical phase-slope description

    NASA Astrophysics Data System (ADS)

    Wu, Yuxiang; Yue, Huimin; Pan, Zhipeng; Liu, Yong

    2018-05-01

    The fringe reflection technique (FRT) has been one of the most popular methods for measuring the shape of specular surfaces in recent years. Existing FRT system calibration methods usually contain two parts: camera calibration and geometric calibration. In geometric calibration, the liquid crystal display (LCD) screen position calibration is one of the most difficult steps among all the calibration procedures, and its accuracy is affected by factors such as imaging aberration, plane mirror flatness, and LCD screen pixel size accuracy. In this paper, based on the derivation of an analytical phase-slope description of the FRT, we present a novel calibration method that does not require calibrating the position of the LCD screen. Moreover, the system can be arbitrarily arranged, and the imaging system can be either telecentric or non-telecentric. In our experiment measuring a spherical mirror with a 5000 mm radius, the proposed calibration method achieves a measurement error 2.5 times smaller than the geometric calibration method. In the wafer surface measurement experiment, the result obtained with the proposed calibration method is closer to the interferometer result than that of the geometric calibration method.

  12. Fiber optic medical pressure-sensing system employing intelligent self-calibration

    NASA Astrophysics Data System (ADS)

    He, Gang

    1996-01-01

    In this article, we describe a fiber-optic catheter-type pressure-sensing system that has been successfully introduced for medical diagnostic applications. We present the overall sensor and optoelectronics designs, and highlight the product development efforts that led to a reliable and accurate disposable pressure-sensing system. In particular, the incorporation of an intelligent on-site self-calibration approach allows limited sensor reuse, reducing end-user costs and enabling system adaptation to the wide sensor variabilities associated with low-cost manufacturing processes. We demonstrate that fiber-optic sensors can be cost-effectively produced to satisfy the needs of certain medical market segments.

  13. A porphyrin-based fluorescence method for zinc determination in commercial propolis extracts without sample pretreatment.

    PubMed

    Pierini, Gastón Darío; Pinto, Victor Hugo A; Maia, Clarissa G C; Fragoso, Wallace D; Reboucas, Julio S; Centurión, María Eugenia; Pistonesi, Marcelo Fabián; Di Nezio, María Susana

    2017-11-01

    The quantification of zinc in over-the-counter drugs such as commercial propolis extracts by a molecular fluorescence technique using meso-tetrakis(4-carboxyphenyl)porphyrin (H2TCPP4-) was developed for the first time. The calibration curve is linear from 6.60 to 100 nmol L-1 of Zn2+. The detection and quantification limits were 6.22 nmol L-1 and 19.0 nmol L-1, respectively. The reproducibility and repeatability, calculated as the percentage variation of the slopes of seven calibration curves, were 6.75% and 4.61%, respectively. Commercial propolis extract samples from four Brazilian states were analyzed, and the results (0.329-0.797 mg/100 mL) obtained with this method are in good agreement with those obtained with the Atomic Absorption Spectroscopy (AAS) technique. The method is simple, fast, and of low cost, and allows the analysis of the samples without pretreatment. Moreover, its major advantage is that the Zn-porphyrin complex is fluorescent, which promotes the selectivity and sensitivity of the method. Copyright © 2017 John Wiley & Sons, Ltd.

  14. Predicting herbivore faecal nitrogen using a multispecies near-infrared reflectance spectroscopy calibration.

    PubMed

    Villamuelas, Miriam; Serrano, Emmanuel; Espunyes, Johan; Fernández, Néstor; López-Olvera, Jorge R; Garel, Mathieu; Santos, João; Parra-Aguado, María Ángeles; Ramanzin, Maurizio; Fernández-Aguilar, Xavier; Colom-Cadena, Andreu; Marco, Ignasi; Lavín, Santiago; Bartolomé, Jordi; Albanell, Elena

    2017-01-01

    Optimal management of free-ranging herbivores requires accurate assessment of an animal's nutritional status. For this purpose, near-infrared reflectance spectroscopy (NIRS) is very useful, especially when nutritional assessment is done through faecal indicators such as faecal nitrogen (FN). In order to perform an NIRS calibration, the default protocol recommends starting by generating an initial equation based on at least 50-75 samples from the given species. Although this protocol optimises prediction accuracy, it limits the use of NIRS with rare or endangered species, where sample sizes are often small. To overcome this limitation, we tested a single NIRS equation (i.e., a multispecies calibration) to predict FN in herbivores. Firstly, we used five herbivore species with highly contrasting digestive physiologies to build monospecies and multispecies calibrations, namely horse, sheep, Pyrenean chamois, red deer, and European rabbit. Secondly, the equation accuracy was evaluated by two procedures, using: (1) an external validation with samples from the same species that were not used in the calibration process; and (2) samples from different ungulate species, specifically Alpine ibex, domestic goat, European mouflon, roe deer, and cattle. The multispecies equation was highly accurate (coefficient of determination for calibration R2 = 0.98, standard error of cross-validation SECV = 0.10, standard error of external validation SEP = 0.12, ratio of performance to deviation RPD = 5.3, and range error of prediction RER = 28.4). The accuracy of the multispecies equation in predicting other herbivore species was also satisfactory (R2 > 0.86, SEP < 0.27, RPD > 2.6, and RER > 8.1). Lastly, the agreement between multi- and monospecies calibrations was confirmed by the Bland-Altman method. In conclusion, our single multispecies equation can be used as a reliable, cost-effective, easy, and powerful analytical method to assess FN in a wide range of herbivore species.
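
    The abstract does not name the regression algorithm; partial least squares (PLS) is the usual choice for NIRS calibrations, and statistics of the kind reported can be computed from cross-validated predictions as sketched below (synthetic data and a hypothetical component count, for illustration only):

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

# X: (n_samples, n_wavelengths) NIR spectra pooled across species,
# y: reference faecal nitrogen values; both synthetic placeholders here.
rng = np.random.default_rng(0)
X = rng.random((120, 700))
y = rng.random(120) * 3.0

pls = PLSRegression(n_components=8)          # would be chosen by cross-validation
y_cv = cross_val_predict(pls, X, y, cv=10).ravel()
sep = np.sqrt(np.mean((y - y_cv) ** 2))      # standard error of prediction
rpd = y.std() / sep                          # ratio of performance to deviation
print(f"SEP={sep:.2f}, RPD={rpd:.1f}")
```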

  15. The Scottish way - getting results in soil spectroscopy without spending money

    NASA Astrophysics Data System (ADS)

    Aitkenhead, Matt; Cameron, Clare; Gaskin, Graham; Choisy, Bastien; Coull, Malcolm; Black, Helaina

    2016-04-01

    Achieving soil characterisation using spectroscopy requires several things. These include soil data to develop or train a calibration model, a method of capturing spectra, the ability to actually develop a calibration model and also additional data to reinforce the model by introducing some form of stratification or site-specific information. Each of these steps requires investment in both time and money. Here we present an approach developed at the James Hutton Institute that achieves the end goal with minimal cost, by making as much use as possible of existing soil and environmental datasets for Scotland. The spectroscopy device that has been developed is PHYLIS (Prototype HYperspectral Low-cost Imaging System) that was constructed using inexpensive optical components, and uses a basic digital camera to produce visible-range spectra. The results show that for a large number of soil parameters, it is possible to estimate values either very well (RSQ > 0.9) (LOI, C, exchangeable H), well (RSQ > 0.75) (N, pH) or moderately (RSQ > 0.5) (Mg, Na, K, Fe, Al, sand, silt, clay). The methods used to achieve these results are described. A number of additional parameters were not well estimated (elemental concentrations), and we describe how work is ongoing to improve our ability to estimate these using similar technology and data.

  16. Automated Attitude Sensor Calibration: Progress and Plans

    NASA Technical Reports Server (NTRS)

    Sedlak, Joseph; Hashmall, Joseph

    2004-01-01

    This paper describes ongoing work at NASA/Goddard Space Flight Center to improve the quality of spacecraft attitude sensor calibration and reduce costs by automating parts of the calibration process. The new calibration software can autonomously preview data quality over a given time span, select a subset of the data for processing, perform the requested calibration, and output a report. This level of automation is currently being implemented for two specific applications: inertial reference unit (IRU) calibration and sensor alignment calibration. The IRU calibration utility makes use of a sequential version of the Davenport algorithm. This utility has been successfully tested with simulated and actual flight data. The alignment calibration is still in the early testing stage. Both utilities will be incorporated into the institutional attitude ground support system.

  17. Application of partial inversion pulse to ultrasonic time-domain correlation method to measure the flow rate in a pipe

    NASA Astrophysics Data System (ADS)

    Wada, Sanehiro; Furuichi, Noriyuki; Shimada, Takashi

    2017-11-01

    This paper proposes the application of a novel ultrasonic pulse, called a partial inversion pulse (PIP), to the measurement of the velocity profile and flow rate in a pipe using the ultrasound time-domain correlation (UTDC) method. In general, the measured flow rate depends on the velocity profile in the pipe; thus, on-site calibration is the only way of checking the accuracy of on-site flow rate measurements. Flow rate calculation using UTDC is based on the integration of the measured velocity profile. The advantages of this method over the ultrasonic pulse Doppler method include an essentially unlimited velocity range and its applicability to flow fields without a sufficient number of reflectors. However, it has previously been reported that the measurable velocity range for UTDC is limited by false detections. Considering the application of this method to on-site flow fields, the issue of velocity range is important. To reduce the effect of false detections, a PIP signal, an ultrasound signal that contains a partially inverted region, was developed in this study. The advantages of the PIP signal are that it requires little additional hardware cost and no additional software cost in comparison with conventional methods. The effects of the inversion on the characteristics of the ultrasound transmission were estimated through numerical calculation. Then, experimental measurements were performed at the national standard calibration facility for water flow rate in Japan. The experimental results demonstrate that measurements made using a PIP signal are more accurate and yield a higher detection ratio than measurements using a normal pulse signal.
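
    The core of UTDC is the lag of the cross-correlation peak between two successive echoes, converted to a displacement via the sound speed and projected onto the flow direction. A generic single-gate sketch (the beam angle and sound speed are illustrative assumptions, not values from the paper):

```python
import numpy as np

def velocity_utdc(echo1, echo2, fs, dt_pulse, c=1480.0, angle_deg=45.0):
    """Velocity estimate by time-domain correlation of two successive
    ultrasonic echoes received dt_pulse seconds apart.

    The lag maximizing the cross-correlation gives the echo time shift
    tau; the along-beam displacement is c*tau/2, projected onto the flow
    direction by the beam angle."""
    xcorr = np.correlate(echo2, echo1, mode="full")
    lag = np.argmax(xcorr) - (len(echo1) - 1)   # samples of shift
    tau = lag / fs
    displacement = c * tau / 2.0                # along the ultrasound beam
    return displacement / dt_pulse / np.cos(np.radians(angle_deg))
```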

  18. Development of new analytical methods for the determination of caffeine content in aqueous solution of green coffee beans.

    PubMed

    Weldegebreal, Blen; Redi-Abshiro, Mesfin; Chandravanshi, Bhagwan Singh

    2017-12-05

    This study was conducted to develop fast and cost-effective methods for the determination of caffeine in green coffee beans. In the present work, direct determination of caffeine in an aqueous solution of green coffee beans was performed using FT-IR-ATR and fluorescence spectrophotometry. Caffeine was also directly determined in dimethylformamide solution using NIR spectroscopy with a univariate calibration technique. The percentage of caffeine in the same sample of green coffee beans was determined using the three newly developed methods. The caffeine content of the green coffee beans was found to be 1.52 ± 0.09 (% w/w) using FT-IR-ATR, 1.50 ± 0.14 (% w/w) using NIR, and 1.50 ± 0.05 (% w/w) using fluorescence spectroscopy. The means of the three methods were compared by applying one-way analysis of variance, and at the p = 0.05 significance level the means were not significantly different. The percentage of caffeine in the same sample of green coffee beans was also determined using the literature-reported UV/Vis spectrophotometric method for comparison and found to be 1.40 ± 0.02 (% w/w). New simple, rapid, and inexpensive methods were thus developed for the direct determination of caffeine content in aqueous solutions of green coffee beans using FT-IR-ATR and fluorescence spectrophotometry. NIR spectrophotometry can also be used as an alternative for caffeine determination, using a reduced amount of organic solvent (dimethylformamide) and a univariate calibration technique. These analytical methods may therefore be recommended for the rapid, simple, safe, and cost-effective determination of caffeine in green coffee beans.

  19. Risk Costs for New Dams: Economic Analysis and Effects of Monitoring

    NASA Astrophysics Data System (ADS)

    Paté-Cornell, M. Elisabeth; Tagaras, George

    1986-01-01

    This paper presents new developments and illustrations of the introduction of risk and costs in cost-benefit analysis for new dams. The emphasis is on a method of evaluation of the risk costs based on the structure of the local economy. Costs to agricultural property as well as residential, commercial, industrial, and public property are studied in detail. Of particular interest is the case of sequential dam failure and the evaluation of the risk costs attributable to a new dam upstream from an existing one. Three real cases are presented as illustrations of the method: the Auburn Dam, the Dickey-Lincoln School Project, and the Teton Dam, which failed in 1976. This last case provides a calibration tool for the estimation of loss ratios. For these three projects, the risk-modified benefit-cost ratios are computed to assess the effect of the risk on the economic performance of the project. The role of a warning system provided by systematic monitoring of the dam is analyzed: by reducing the risk costs, the warning system attenuates their effect on the benefit-cost ratio. The precursors, however, can be missed or misinterpreted: monitoring does not guarantee that the risks to human life can be reduced to zero. This study shows, in particular, that it is critical to consider the risk costs in the decision to build a new dam when the flood area is large and densely populated.

  20. Ratio manipulating spectrophotometry versus chemometry as stability indicating methods for cefquinome sulfate determination.

    PubMed

    Yehia, Ali M; Arafa, Reham M; Abbas, Samah S; Amer, Sawsan M

    2016-01-15

    Spectral resolution of cefquinome sulfate (CFQ) in the presence of its degradation products was studied. Three selective, accurate and rapid spectrophotometric methods were applied to the determination of CFQ in the presence of either its hydrolytic, oxidative or photo-degradation products. The proposed ratio difference, derivative ratio and mean centering methods are ratio-manipulating spectrophotometric methods that were satisfactorily applied for the selective determination of CFQ within a linear range of 5.0-40.0 μg mL⁻¹. Concentration Residuals Augmented Classical Least Squares was applied and evaluated for the determination of the cited drug in the presence of all its degradation products. Traditional Partial Least Squares regression was also applied and benchmarked against the proposed advanced multivariate calibration. An experimental design of 25 synthetic mixtures of three factors at five levels was used to calibrate and validate the multivariate models. The advanced chemometrics succeeded in quantitative and qualitative analyses of CFQ along with its hydrolytic, oxidative and photo-degradation products. The proposed methods were applied successfully to the analysis of different pharmaceutical formulations. These methods are simple and cost-effective compared with the manufacturer's RP-HPLC method. Copyright © 2015 Elsevier B.V. All rights reserved.

  1. TECHNICAL JUSTIFICATION FOR CHOOSING PROPANE AS A CALIBRATION AGENT FOR TOTAL FLAMMABLE VOLATILE ORGANIC COMPOUND (VOC) DETERMINATIONS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    DOUGLAS, J.G.

    2006-07-06

    This document presents the technical justification for choosing and using propane as a calibration standard for estimating total flammable volatile organic compounds (VOCs) in an air matrix. A propane-in-nitrogen standard was selected based on a number of criteria: (1) it has an analytical response similar to the VOCs of interest, (2) it can be made with known accuracy and traceability, (3) it is available with good purity, (4) it has a matrix similar to the sample matrix, (5) it is stable during storage and use, (6) it is relatively non-hazardous, and (7) it is a recognized standard for similar analytical applications. The Waste Retrieval Project (WRP) desires a fast, reliable, and inexpensive method for screening the flammable VOC content in the vapor-phase headspace of waste containers. Table 1 lists the flammable VOCs of interest to the WRP. The current method used to determine the VOC content of a container is to sample the container's headspace and submit the sample for gas chromatography-mass spectrometry (GC-MS) analysis. The driver for the VOC measurement requirement is safety: potentially flammable atmospheres in the waste containers must be allowed to diffuse prior to processing the container. The proposed flammable VOC screening method is to inject an aliquot of the headspace sample into an argon-doped pulsed-discharge helium ionization detector (Ar-PDHID) contained within a gas chromatograph. No actual chromatography is performed; the sample is transferred directly from a sample loop to the detector through a short, inert transfer line. The peak area resulting from the injected sample is proportional to the flammable VOC content of the sample. However, because the Ar-PDHID has different response factors for different flammable VOCs, a fundamental assumption must be made that the agent used to calibrate the detector is representative of the flammable VOCs of interest that may be in the headspace samples. At worst, calibration with the selected agent should overestimate the VOC content of a sample. By overestimating the VOC content, we aim to minimize false negatives, where a false negative is defined as incorrectly estimating the VOC content of the sample to be below programmatic action limits when, in fact, the sample exceeds the action limits. The disadvantage of overestimating the flammable VOC content of a sample is that additional cost may be incurred because additional sampling and GC-MS analysis may be required to confirm results over programmatic action limits. Therefore, choosing an appropriate calibration standard for the Ar-PDHID is critical to avoid false negatives and to minimize additional analytical costs.
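
    The screening arithmetic implied by this approach is straightforward: calibrate the detector response with the propane standard, then convert sample peak areas to a propane-equivalent VOC concentration. A sketch with invented response values and an invented action limit:

    ```python
    # Propane-equivalent VOC screening, illustrative numbers only.
    propane_std_ppm = 500.0      # certified propane-in-nitrogen standard
    propane_peak_area = 1.25e6   # detector response to the standard (arb. units)
    response_factor = propane_peak_area / propane_std_ppm  # area per ppm propane

    sample_peak_area = 4.0e5
    voc_estimate_ppm = sample_peak_area / response_factor  # propane-equivalent ppm

    action_limit_ppm = 250.0     # hypothetical programmatic action limit
    flagged = voc_estimate_ppm > action_limit_ppm
    print(f"propane-equivalent VOC: {voc_estimate_ppm:.0f} ppm, flagged: {flagged}")
    ```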

  2. Measurement of Antenna Bore-Sight Gain

    NASA Technical Reports Server (NTRS)

    Fortinberry, Jarrod; Shumpert, Thomas

    2016-01-01

    The absolute or free-field gain of a simple antenna can be approximated using standard antenna theory formulae or, for a more accurate prediction, computed with numerical methods. Both approaches yield reasonable estimates, but in practice antenna gain is usually verified and documented via measurement and calibration. In this paper, a relatively simple, low-cost, yet effective means of determining the bore-sight free-field gain of a VHF/UHF antenna is proposed using the Brewster angle relationship.
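
    The paper's Brewster-angle procedure is not reproduced here; as background, the sketch below shows the standard two-identical-antenna gain measurement based on the Friis transmission equation, with illustrative numbers.

    ```python
    import math

    f = 300e6                 # frequency [Hz]
    c = 3.0e8
    wavelength = c / f
    d = 10.0                  # antenna separation [m], far field assumed
    pt_dbm = 0.0              # transmitted power
    pr_dbm = -35.0            # received power

    path_loss_db = 20 * math.log10(4 * math.pi * d / wavelength)
    # Friis: Pr = Pt + Gt + Gr - path_loss; with identical antennas Gt = Gr = G.
    gain_dbi = 0.5 * ((pr_dbm - pt_dbm) + path_loss_db)
    print(f"estimated bore-sight gain: {gain_dbi:.2f} dBi")
    ```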

  3. 3D printing in X-ray and Gamma-Ray Imaging: A novel method for fabricating high-density imaging apertures

    PubMed Central

    Miller, Brian W.; Moore, Jared W.; Barrett, Harrison H.; Fryé, Teresa; Adler, Steven; Sery, Joe; Furenlid, Lars R.

    2011-01-01

    Advances in 3D rapid-prototyping printers, 3D modeling software, and casting techniques allow for cost-effective fabrication of custom components in gamma-ray and X-ray imaging systems. Applications extend to new fabrication methods for custom collimators, pinholes, calibration and resolution phantoms, mounting and shielding components, and imaging apertures. Details of the fabrication process for these components, specifically the 3D printing process, cold casting with a tungsten epoxy, and lost-wax casting in platinum are presented. PMID:22199414

  4. Texas flexible pavements and overlays : year 5 report - complete data documentation.

    DOT National Transportation Integrated Search

    2017-05-01

    Proper calibration and validation of pavement design and performance models to Texas conditions is essential for cost-effective flexible pavement design, performance predictions, and maintenance/rehab strategies. The veracity of the calibration o...

  5. The Comparison Of In-Flight Pitot Static Calibration Method By Using Radio Altimeter As Reference with GPS and Tower Fly By Methods On CN235-100 MPA

    NASA Astrophysics Data System (ADS)

    Derajat; Hariowibowo, Hindawan

    2018-04-01

    The newly proposed in-flight pitot-static calibration method was carried out during the development and qualification of the CN235-100 MPA (Military Patrol Aircraft). The method is expected to reduce flight hours, require fewer personnel and no additional special equipment, and simplify the analysis calculations, thereby minimizing operational cost. At the Indonesian Aerospace (IAe) Flight Test Center Division, new flight test techniques and data analysis methods, especially for flight physics test subjects, continue to be developed as long as they are safe for flight and add value for the industry. For more than 30 years, flight test data engineers at the Flight Test Center Division have worked together with air crews (test pilots, co-pilots, and flight test engineers) to execute flight test activities using standard procedures for both existing and newly developed test techniques and data analysis methods. In this paper, the approximate mathematical model, data reduction and flight test technique of the in-flight pitot-static calibration using the radio altimeter as reference are described, and the test results are compared with the other methods, i.e. the Global Positioning System (GPS) method and the traditional tower fly-by method, which were used previously during this flight test program (Ref. [10]). The case study uses CN235-100 MPA flight test data acquired during the development and qualification flight test program at Cazaux Airport, France, in June-November 2009 (Ref. [2]).

  6. Sediment measurement and transport modeling: impact of riparian and filter strip buffers.

    PubMed

    Moriasi, Daniel N; Steiner, Jean L; Arnold, Jeffrey G

    2011-01-01

    Well-calibrated models are cost-effective tools to quantify environmental benefits of conservation practices, but lack of data for parameterization and evaluation remains a weakness of modeling. Research was conducted in southwestern Oklahoma within the Cobb Creek subwatershed (CCSW) to develop cost-effective methods to collect stream channel parameterization and evaluation data for modeling in watersheds with sparse data. Specifically, (i) simple stream channel observations obtained by rapid geomorphic assessment (RGA) were used to parameterize the Soil and Water Assessment Tool (SWAT) model stream channel variables before calibrating SWAT for streamflow and sediment, and (ii) the average annual reservoir sedimentation rate, measured at Crowder Lake using the acoustic profiling system (APS), was used to cross-check the Crowder Lake sediment accumulation rate simulated by SWAT. Additionally, the calibrated and cross-checked SWAT model was used to simulate impacts of riparian forest buffer (RF) and bermudagrass [Cynodon dactylon (L.) Pers.] filter strip buffer (BFS) on sediment yield and concentration in the CCSW. The measured average annual sedimentation rate was between 1.7 and 3.5 t ha⁻¹ yr⁻¹, compared with a simulated rate of 2.4 t ha⁻¹ yr⁻¹. Application of BFS across cropped fields resulted in a 72% reduction of sediment delivery to the stream, while the RF and the combined RF and BFS reduced the suspended sediment concentration at the CCSW outlet by 68 and 73%, respectively. Effective riparian practices have the potential to increase reservoir life. These results indicate promise for using the RGA and APS methods to obtain data to improve water quality simulations in ungauged watersheds. American Society of Agronomy, Crop Science Society of America, and Soil Science Society of America.

  7. Calibration of the DRASTIC ground water vulnerability mapping method

    USGS Publications Warehouse

    Rupert, M.G.

    2001-01-01

    Ground water vulnerability maps developed using the DRASTIC method have been produced in many parts of the world. Comparisons of those maps with actual ground water quality data have shown that the DRASTIC method is typically a poor predictor of ground water contamination. This study significantly improved the effectiveness of a modified DRASTIC ground water vulnerability map by calibrating the point rating schemes to actual ground water quality data using nonparametric statistical techniques and a geographic information system. Calibration was performed by comparing data on nitrite plus nitrate as nitrogen (NO2 + NO3-N) concentrations in ground water with land-use, soils, and depth to first-encountered ground water data. These comparisons showed clear statistical differences in NO2 + NO3-N concentrations among the various categories. Ground water probability point ratings for NO2 + NO3-N contamination were developed from the results of these comparisons, and a probability map was produced. This ground water probability map was then correlated with an independent set of NO2 + NO3-N data to demonstrate its effectiveness in predicting elevated NO2 + NO3-N concentrations in ground water. This correlation demonstrated that the probability map was effective, whereas a vulnerability map produced with the uncalibrated DRASTIC method in the same area and using the same data layers was not. Considerable time and expense have been expended to develop ground water vulnerability maps with the DRASTIC method. This study demonstrates a cost-effective method to improve and verify the effectiveness of ground water vulnerability maps.

  8. Calibrating Parameters of Power System Stability Models using Advanced Ensemble Kalman Filter

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, Renke; Diao, Ruisheng; Li, Yuanyuan

    With the ever-increasing penetration of renewable energy, smart loads, energy storage, and new market behavior, today's power grid is becoming more dynamic and stochastic, which may invalidate traditional study assumptions and pose great operational challenges. Thus, it is of critical importance to maintain good-quality models for secure and economic planning and real-time operation. Following the 1996 Western Systems Coordinating Council (WSCC) system blackout, the North American Electric Reliability Corporation (NERC) and the Western Electricity Coordinating Council (WECC) enforced a number of policies and standards to guide the power industry to periodically validate power grid models and calibrate poor parameters, with the goal of building sufficient confidence in model quality. The PMU-based approach, which uses online measurements without interfering with the operation of generators, provides a low-cost alternative to meet NERC standards. This paper presents an innovative procedure and tool suite to validate and calibrate models based on a trajectory sensitivity analysis method and an advanced ensemble Kalman filter algorithm. The developed prototype demonstrates excellent performance in identifying and calibrating bad parameters of a realistic hydro power plant against multiple system events.
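
    A minimal sketch of the ensemble-Kalman-filter parameter update underlying such calibration tools: an ensemble of candidate parameter sets is repeatedly nudged toward values whose simulated response matches the measurements. The toy model below merely stands in for a dynamic power-plant simulation; all values are invented.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def simulate(theta):
        # Hypothetical stand-in for running the plant model with parameters
        # theta and extracting the measured quantities.
        return np.array([theta[0] + 0.5 * theta[1], theta[0] * theta[1]])

    true_theta = np.array([2.0, 1.0])
    y_obs = simulate(true_theta) + rng.normal(0, 0.01, size=2)
    obs_cov = 0.01 ** 2 * np.eye(2)

    n_ens = 100
    ensemble = rng.normal([1.5, 1.5], 0.5, size=(n_ens, 2))  # prior draws

    for _ in range(5):  # a few assimilation iterations
        preds = np.array([simulate(th) for th in ensemble])
        th_mean, y_mean = ensemble.mean(0), preds.mean(0)
        A, Y = ensemble - th_mean, preds - y_mean
        cov_ty = A.T @ Y / (n_ens - 1)            # parameter-output cross-covariance
        cov_yy = Y.T @ Y / (n_ens - 1) + obs_cov  # output covariance + noise
        gain = cov_ty @ np.linalg.inv(cov_yy)     # Kalman gain
        perturbed = y_obs + rng.multivariate_normal(np.zeros(2), obs_cov, n_ens)
        ensemble = ensemble + (perturbed - preds) @ gain.T

    print("calibrated parameters:", ensemble.mean(0))
    ```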

  9. Diagnosis of growth hormone deficiency is affected by calibrators used in GH immunoassays.

    PubMed

    Meazza, C; Albertini, R; Pagani, S; Sessa, N; Laarej, K; Falcone, R; Bozzola, E; Calcaterra, V; Bozzola, M

    2012-11-01

    Growth hormone (GH) values vary among immunoassays depending on different factors, such as the assay method used, specificity of antibodies, matrix difference between standards and samples, and interference with endogenous GH binding proteins (GHBPs). We evaluated whether the use of different calibrators for GH measurement may affect GH values and, consequently, the formulation of GH deficiency (GHD) diagnosis in children. Twenty-three short children (5 F, 18 M; age 11.4±3.1 years), with the clinical characteristics of GHD (height: -2.3±0.5 SDS; height velocity: -2.3±1.5 SDS; IGF-I: -1.2±0.9 SDS), underwent GH stimulation tests to confirm the clinical diagnosis of GHD. Serum GH values were measured with Immulite 2000, using 2 different calibrators: IS 98/574, a recombinant 22 kDa molecule of more than 95% purity, and IS 80/505, of pituitary origin and resembling a variety of GH isoforms. We found blunted GH secretion in 20 subjects with the Immulite assay using IS 98/574 GH as a calibrator, confirming the diagnosis of GHD. Subsequently, using IS 80/505 GH as a calibrator, in the same samples only 14 children showed reduced GH levels. The total cost for the first year of GH therapy of patients diagnosed with IS 98/574 as a calibrator was higher than that for patients diagnosed with IS 80/505 as a calibrator. These data confirm that GH values may depend on the calibrator used in the GH assay, affecting the formulation of GHD diagnosis and the consequent decision to start GH treatment. © Georg Thieme Verlag KG Stuttgart · New York.

  10. Calibration of ground-based microwave radiometers - Accuracy assessment and recommendations for network users

    NASA Astrophysics Data System (ADS)

    Pospichal, Bernhard; Küchler, Nils; Löhnert, Ulrich; Crewell, Susanne; Czekala, Harald; Güldner, Jürgen

    2016-04-01

    Ground-based microwave radiometers (MWR) are becoming widely used in atmospheric remote sensing and are starting to be routinely operated by national weather services and other institutions. However, common standards for the calibration of these radiometers and detailed knowledge of their error characteristics are needed in order to assimilate the data into models. Intercomparisons of calibrations by different MWRs have rarely been done. Therefore, two calibration experiments, in Lindenberg (2014) and Meckenheim (2015), were performed in the frame of TOPROF (COST Action ES1303) in order to assess uncertainties and differences between various instruments. In addition, a series of experiments was performed in Oklahoma in autumn 2014. The focus lay on the performance of the two main instrument types currently used operationally: the MP-Profiler series by Radiometrics Corporation and the HATPRO series by Radiometer Physics GmbH (RPG). Both instrument types operate in two frequency bands, one along the 22 GHz water vapour line, the other at the lower wing of the 60 GHz oxygen absorption complex. The goal was to establish protocols for providing quality-controlled (QC) MWR data and their uncertainties. To this end, standardized calibration procedures for MWR were developed and recommendations for radiometer users were compiled. We focus here mainly on data types, integration times and optimal settings for calibration intervals, both for absolute (liquid nitrogen, tipping curve) and relative (hot load, noise diode) calibrations. Besides the recommendations for ground-based MWR operators, we present methods to determine the accuracy of the calibration as well as means for automatic data quality control. In addition, some results from the intercomparison of different radiometers are discussed.

  11. Calibration of Elasto-Magnetic Sensors on In-Service Cable-Stayed Bridges for Stress Monitoring.

    PubMed

    Cappello, Carlo; Zonta, Daniele; Laasri, Hassan Ait; Glisic, Branko; Wang, Ming

    2018-02-05

    The recent developments in measurement technology have led to the installation of efficient monitoring systems on many bridges and other structures all over the world. Nowadays, more and more structures have been built and instrumented with sensors. However, calibration and installation of sensors remain challenging tasks. In this paper, we use a case study, Adige Bridge, in order to present a low-cost method for the calibration and installation of elasto-magnetic sensors on cable-stayed bridges. Elasto-magnetic sensors enable monitoring of cable stress. The sensor installation took place two years after the bridge construction. The calibration was conducted in two phases: one in the laboratory and the other one on site. In the laboratory, a sensor was built around a segment of cable that was identical to those of the cable-stayed bridge. Then, the sample was subjected to a defined tension force. The sensor response was compared with the applied load. Experimental results showed that the relationship between load and magnetic permeability does not depend on the sensor fabrication process except for an offset. The determination of this offset required in situ calibration after installation. In order to perform the in situ calibration without removing the cables from the bridge, vibration tests were carried out for the estimation of the cables' tensions. At the end of the paper, we show and discuss one year of data from the elasto-magnetic sensors. Calibration results demonstrate the simplicity of the installation of these sensors on existing bridges and new structures.

  12. Calibration of Elasto-Magnetic Sensors on In-Service Cable-Stayed Bridges for Stress Monitoring

    PubMed Central

    Ait Laasri, Hassan; Glisic, Branko; Wang, Ming

    2018-01-01

    The recent developments in measurement technology have led to the installation of efficient monitoring systems on many bridges and other structures all over the world. Nowadays, more and more structures have been built and instrumented with sensors. However, calibration and installation of sensors remain challenging tasks. In this paper, we use a case study, Adige Bridge, in order to present a low-cost method for the calibration and installation of elasto-magnetic sensors on cable-stayed bridges. Elasto-magnetic sensors enable monitoring of cable stress. The sensor installation took place two years after the bridge construction. The calibration was conducted in two phases: one in the laboratory and the other one on site. In the laboratory, a sensor was built around a segment of cable that was identical to those of the cable-stayed bridge. Then, the sample was subjected to a defined tension force. The sensor response was compared with the applied load. Experimental results showed that the relationship between load and magnetic permeability does not depend on the sensor fabrication process except for an offset. The determination of this offset required in situ calibration after installation. In order to perform the in situ calibration without removing the cables from the bridge, vibration tests were carried out for the estimation of the cables’ tensions. At the end of the paper, we show and discuss one year of data from the elasto-magnetic sensors. Calibration results demonstrate the simplicity of the installation of these sensors on existing bridges and new structures. PMID:29401751

  13. Texas flexible pavements overlays : review and analysis of existing databases.

    DOT National Transportation Integrated Search

    2011-12-01

    Proper calibration of pavement design and rehabilitation performance models to conditions in Texas is essential for cost-effective flexible pavement design. The degree of excellence with which TxDOT's pavement design models are calibrated will d...

  14. Low cost 3D-printing used in an undergraduate project: an integrating sphere for measurement of photoluminescence quantum yield

    NASA Astrophysics Data System (ADS)

    Tomes, John J.; Finlayson, Chris E.

    2016-09-01

    We report upon the exploitation of the latest 3D printing technologies to provide low-cost instrumentation solutions for use in an undergraduate final-year project. The project addresses prescient research issues in optoelectronics which would otherwise be inaccessible to such undergraduate student projects. The experimental use of an integrating sphere in conjunction with a desktop spectrometer presents opportunities to use easily handled, low-cost materials as a means to illustrate many areas of physics, such as spectroscopy, lasers, optics, simple circuits, black-body radiation and data gathering. Presented here is a third-year undergraduate physics project which developed a low-cost (£25) method to manufacture an experimentally accurate integrating sphere by 3D printing. Details are given both of a homemade internal reflectance coating formulated from readily available materials and of a robust instrument calibration method using a tungsten bulb. The instrument is demonstrated to give accurate and reproducible experimental measurements of the luminescence quantum yield of various semiconducting fluorophores, in excellent agreement with literature values.

  15. A projector calibration method for monocular structured light system based on digital image correlation

    NASA Astrophysics Data System (ADS)

    Feng, Zhixin

    2018-02-01

    Projector calibration is crucial for a camera-projector three-dimensional (3-D) structured light measurement system, which has one camera and one projector. In this paper, a novel projector calibration method is proposed based on digital image correlation. In the method, the projector is viewed as an inverse camera, and a planar calibration board with feature points is used to calibrate the projector. During calibration, a random speckle pattern is projected onto the calibration board at different orientations to establish the correspondences between projector images and camera images. Thereby, a dataset for projector calibration is generated, and the projector can be calibrated using a well-established camera calibration algorithm. The experimental results confirm that the proposed method is accurate and reliable for projector calibration.
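
    Once the correspondence step has mapped each board feature to its coordinates in the projector image, the projector can indeed be calibrated exactly like a camera. The sketch below assumes OpenCV and synthesizes the correspondences from a known projection; in the actual method they would come from the speckle matching described above.

    ```python
    import numpy as np
    import cv2

    K_true = np.array([[1100.0, 0, 512], [0, 1100.0, 384], [0, 0, 1.0]])
    object_points, projector_points = [], []
    rng = np.random.default_rng(2)

    for pose in range(8):  # several board orientations; real calibrations
        # need well-tilted poses for a well-conditioned solution.
        grid = np.zeros((6 * 9, 3), np.float32)
        grid[:, :2] = np.mgrid[0:9, 0:6].T.reshape(-1, 2) * 25.0  # 25 mm pitch
        rvec = rng.normal(0, 0.1, 3)                # small random board rotation
        tvec = np.array([-100.0, -60.0, 900.0 + 50 * pose])
        img, _ = cv2.projectPoints(grid, rvec, tvec, K_true, None)
        object_points.append(grid)
        projector_points.append(img.astype(np.float32))

    rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
        object_points, projector_points, (1024, 768), None, None)
    print("recovered projector intrinsics:\n", K, "\nreprojection RMS:", rms)
    ```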

  16. New Cost-Effective Method for Long-Term Groundwater Monitoring Programs

    DTIC Science & Technology

    2013-05-01

    with a small-volume, gas-tight syringe (< 1 mL) and injected directly into the field-portable GC. Alternatively, the well headspace sample can be...according to manufacturers' protocols. Isobutylene was used as the calibration standard for the PID. The standard gas mixtures were used for 3-point...monitoring wells are being evaluated: 1) direct headspace sampling, 2) sampling tube with gas-permeable membrane, and 3) gas-filled passive vapor

  17. Estimating economic value of agricultural water under changing conditions and the effects of spatial aggregation.

    PubMed

    Medellín-Azuara, Josué; Harou, Julien J; Howitt, Richard E

    2010-11-01

    Given the high proportion of water used for agriculture in certain regions, the economic value of agricultural water can be an important tool for water management and policy development. This value is quantified using economic demand curves for irrigation water. Such demand functions show the incremental contribution of water to agricultural production. Water demand curves are estimated using econometric or optimisation techniques. Calibrated agricultural optimisation models allow the derivation of demand curves using smaller datasets than econometric models. This paper introduces these subject areas and then explores the effect of spatial aggregation (upscaling) on the valuation of water for irrigated agriculture. A case study from the Rio Grande-Rio Bravo Basin in northern Mexico investigates differences in valuation at the farm and regionally aggregated levels under four scenarios: technological change, warm-dry climate change, changes in agricultural commodity prices, and water costs for agriculture. The scenarios consider changes due to external shocks or new policies. Positive mathematical programming (PMP), a calibrated optimisation method, is the deductive valuation method used. An exponential cost function is compared with the quadratic cost functions typically used in PMP. Results indicate that the economic values of water at the farm level and the regionally aggregated level are similar, but that the variability and distributional effects of each scenario are affected by aggregation. Moderately aggregated agricultural production models are effective at capturing average-farm adaptation to policy changes and external shocks. Farm-level models best reveal the distribution of scenario impacts. Copyright © 2009 Elsevier B.V. All rights reserved.

  18. Hybrid x-space: a new approach for MPI reconstruction.

    PubMed

    Tateo, A; Iurino, A; Settanni, G; Andrisani, A; Stifanelli, P F; Larizza, P; Mazzia, F; Mininni, R M; Tangaro, S; Bellotti, R

    2016-06-07

    Magnetic particle imaging (MPI) is a new medical imaging technique capable of recovering the distribution of superparamagnetic particles from their measured induced signals. In the literature there are two main MPI reconstruction techniques: measurement-based (MB) and x-space (XS). The MB method is expensive because it requires a long calibration procedure as well as a reconstruction phase that can be numerically costly. On the other hand, the XS method is simpler than MB, but exact knowledge of the field free point (FFP) motion is essential for its implementation. Our simulation work focuses on the implementation of a new approach for MPI reconstruction, called hybrid x-space (HXS), which combines the previous methods. Specifically, our approach is based on XS reconstruction in that it requires the knowledge of the FFP position and velocity at each time instant. The difference with respect to the original XS formulation is how the FFP velocity is computed: we estimate it from the experimental measurements of the calibration scans, typical of the MB approach. Moreover, a compressive sensing technique is applied in order to reduce the calibration time by using a smaller number of sampling positions. Simulations highlight that the HXS and XS methods give similar results. Furthermore, appropriate use of compressive sensing is crucial for obtaining a good balance between time reduction and reconstructed image quality. Our proposal is suitable for open geometry configurations of human-size devices, where incidental factors could make the currents, the fields and the FFP trajectory irregular.

  19. Development and Evaluation of an Automated Machine Learning Algorithm for In-Hospital Mortality Risk Adjustment Among Critical Care Patients.

    PubMed

    Delahanty, Ryan J; Kaufman, David; Jones, Spencer S

    2018-06-01

    Risk adjustment algorithms for ICU mortality are necessary for measuring and improving ICU performance. Existing risk adjustment algorithms are not widely adopted; key barriers to adoption include licensing and implementation costs as well as labor costs associated with human-intensive data collection. Widespread adoption of electronic health records makes automated risk adjustment feasible. Using modern machine learning methods and open source tools, we developed and evaluated a retrospective risk adjustment algorithm for in-hospital mortality among ICU patients. The Risk of Inpatient Death score can be fully automated and is reliant upon data elements that are generated in the course of usual hospital processes. The study covered 131 ICUs in 53 hospitals operated by Tenet Healthcare, with a cohort of 237,173 ICU patients discharged between January 2014 and December 2016. The data were randomly split into training (36 hospitals) and validation (17 hospitals) data sets. Feature selection and model training were carried out using the training set, while the discrimination, calibration, and accuracy of the model were assessed in the validation data set. Model discrimination was evaluated based on the area under the receiver operating characteristic curve; accuracy and calibration were assessed via adjusted Brier scores and visual analysis of calibration curves. Seventeen features, including a mix of clinical and administrative data elements, were retained in the final model. The Risk of Inpatient Death score demonstrated excellent discrimination (area under the receiver operating characteristic curve = 0.94) and calibration (adjusted Brier score = 52.8%) in the validation dataset; these results compare favorably to the published performance statistics for the most commonly used mortality risk adjustment algorithms. Low adoption of ICU mortality risk adjustment algorithms impedes progress toward increasing the value of the healthcare delivered in ICUs. The Risk of Inpatient Death score has many attractive attributes that address the key barriers to adoption of ICU risk adjustment algorithms and performs comparably to existing human-intensive algorithms. Automated risk adjustment algorithms have the potential to obviate known barriers to adoption such as cost-prohibitive licensing fees and significant direct labor costs. Further evaluation is needed to ensure that the level of performance observed in this study could be achieved at independent sites.
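
    The two headline metrics reported here, discrimination via the area under the ROC curve and calibration via the Brier score, can be computed as in the sketch below. The predictions are simulated stand-ins, not output of the Risk of Inpatient Death score.

    ```python
    import numpy as np
    from sklearn.metrics import roc_auc_score, brier_score_loss

    rng = np.random.default_rng(3)
    y_true = rng.binomial(1, 0.1, size=5000)     # in-hospital deaths (10% rate)
    # Toy risk scores: informative but imperfect, clipped to valid probabilities.
    y_prob = np.clip(0.1 + 0.3 * (y_true - 0.1) + rng.normal(0, 0.08, 5000),
                     0.001, 0.999)

    print("AUC:  ", roc_auc_score(y_true, y_prob))
    print("Brier:", brier_score_loss(y_true, y_prob))
    ```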

  20. Collection of materials and performance data for Texas flexible pavements and overlays : project summary.

    DOT National Transportation Integrated Search

    2015-08-31

    Proper calibration of mechanistic-empirical (M-E) design and rehabilitation performance models to meet Texas conditions is essential for cost-effective flexible pavement designs. Such a calibration effort would require a reliable source of ...

  1. Development of an experimental variable temperature set-up for a temperature range from 2.2 K to 325 K for cost-effective temperature sensor calibration

    NASA Astrophysics Data System (ADS)

    Pal, Sandip; Kar, Ranjan; Mandal, Anupam; Das, Ananda; Saha, Subrata

    2017-05-01

    A prototype variable temperature insert has been developed in-house as a cryogenic thermometer calibration facility. It was commissioned to fulfil the very stringent temperature-control requirements of the cryogenic system. The facility is designed for calibrating industrial cryogenic thermometers, including the temperature sensor and the heat intercepts of its wires, over the temperature range 2.2 K-325 K. The isothermal section of the calibration block onto which the thermometers are mounted is weakly linked to the temperature control zone, which is fitted with a cooling capillary coil and a cryogenic heater. The connecting wires of the thermometer are thermally anchored to the support of the temperature insert, and the calibration procedure begins once the temperature of the support has stabilized. Homogeneity of the calibration block's temperature is established both by simulation and by cross-comparison of two calibrated sensors. The absolute uncertainty of the temperature measurement is calculated and found to be comparable with the measured uncertainty at different temperature points. Measured data are presented in comparison with standard thermometers at fixed points, from which it is possible to infer that the absolute accuracy achieved is better than ±0.5% of the reading relative to the fixed-point temperature. The design and development of this simple, low-cost equipment and the approach to analysing the calibration results are discussed in detail so that they can be easily reproduced by other researchers.

  2. A game-theoretic approach for calibration of low-cost magnetometers under noise uncertainty

    NASA Astrophysics Data System (ADS)

    Siddharth, S.; Ali, A. S.; El-Sheimy, N.; Goodall, C. L.; Syed, Z. F.

    2012-02-01

    Pedestrian heading estimation is a fundamental challenge in Global Navigation Satellite System (GNSS)-denied environments. Additionally, heading observability degrades considerably in low-speed modes of operation (e.g. walking), making this problem even more challenging. The goal of this work is to improve the heading solution when hand-held personal/portable devices, such as cell phones, are used for positioning in GNSS-denied signal environments. Most smart phones are now equipped with self-contained, low-cost, small-size and power-efficient sensors, such as magnetometers, gyroscopes and accelerometers. A magnetometer needs calibration before it can be properly employed for navigation purposes. Magnetometers play an important role in absolute heading estimation and are embedded in many smart phones. Before the user navigates with the phone, a calibration is invoked to ensure improved signal quality; this signal is used later in the heading estimation. Most magnetometer-calibration approaches seldom describe the motion modes needed to achieve a robust calibration, and they also fail to discuss stopping criteria for the calibration. In this paper, the following three topics, important for achieving proper magnetometer-calibration results and in turn the most robust heading solution while accounting for device misalignment with respect to the user, are discussed in detail: (a) game-theoretic concepts to attain better filter parameter tuning and robustness under noise uncertainty, (b) the best maneuvers, with focus on 3D and 2D motion modes and related challenges, and (c) investigation of calibration termination criteria that leverage calibration robustness and efficiency.
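
    A common building block of low-cost magnetometer calibration, independent of the paper's game-theoretic tuning, is a least-squares sphere fit that removes the hard-iron offset. The sketch below assumes negligible soft-iron distortion and uses simulated readings.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    true_offset = np.array([12.0, -7.0, 3.0])   # hard-iron bias [uT] (assumed)
    field = 50.0                                # local field magnitude [uT]

    # Synthetic raw readings: points on a sphere shifted by the hard-iron bias.
    dirs = rng.normal(size=(500, 3))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    raw = field * dirs + true_offset + rng.normal(0, 0.3, (500, 3))

    # Sphere fit: |m - c|^2 = r^2  =>  2 m.c + (r^2 - |c|^2) = |m|^2,
    # which is linear in the unknowns (c, r^2 - |c|^2); solve by least squares.
    A = np.hstack([2 * raw, np.ones((500, 1))])
    b = np.sum(raw ** 2, axis=1)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    print("estimated hard-iron offset:", sol[:3])
    ```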

  3. A novel method to fast fix the post OPC weak-points through Calibre eqDRC application

    NASA Astrophysics Data System (ADS)

    Jin, YaDong; Lyu, Shizhi; Deng, ZeXi; Lu, Cong

    2018-03-01

    With shrinking nodes, as layout patterns become more and more complicated, OPC accuracy and performance are becoming increasingly challenging. While we try to perfect our OPC script to produce clean output without weak points, in a real, urgent tape-out scenario there will often be weak points, and we cannot afford the cost of running the OPC again with an updated OPC recipe. The post-OPC repair then becomes the only cost-effective choice. This paper studies and compares several methods for post-OPC weak-point repair: the manual OPC repair flow and the traditional repair flow based on DRC commands. Here, we introduce a novel method based on the eqDRC commands, which are widely used in design houses but have never been used in the post-OPC flow. We discuss how to apply eqDRC to post-OPC repairs and demonstrate its advantages over the traditional methods.

  4. Use of eddy-covariance methods to "calibrate" simple estimators of evapotranspiration

    USGS Publications Warehouse

    Sumner, David M.; Geurink, Jeffrey S.; Swancar, Amy

    2017-01-01

    Direct measurement of actual evapotranspiration (ET) provides quantification of this large component of the hydrologic budget, but typically requires long periods of record and large instrumentation and labor costs. Simple surrogate methods of estimating ET, if "calibrated" to direct measurements of ET, provide a reliable means to quantify ET. Eddy-covariance measurements of ET were made for 12 years (2004-2015) at an unimproved bahiagrass (Paspalum notatum) pasture in Florida. These measurements were compared to annual rainfall derived from rain gage data and to monthly potential ET (PET) obtained from a long-term (since 1995) U.S. Geological Survey (USGS) statewide, 2-kilometer, daily PET product. The annual ratio of ET to rainfall correlates strongly (r² = 0.86) with annual rainfall, increasing linearly as rainfall decreases. Monthly ET rates correlated closely (r² = 0.84) with the USGS PET product. The results indicate that simple surrogate methods of estimating actual ET show positive potential in the humid Florida climate, given the ready availability of historical rainfall and PET.

  5. Optical power of VCSELs stabilized to 35 ppm/°C without a TEC

    NASA Astrophysics Data System (ADS)

    Downing, John

    2015-03-01

    This paper reports a method and system, comprising a light source, an electronic method, and a calibration procedure, for stabilizing the optical power of vertical-cavity surface-emitting lasers (VCSELs) and laser diodes (LDs) without the use of thermoelectric coolers (TECs). The system eliminates the need for custom interference coatings, polarization adjustments, and the exact alignment required by the optical method reported in 2013 [1]. It can precisely compensate for the effects of temperature and wavelength drift on photodiode responsivity, as well as changes in VCSEL beam quality and polarization angle, over a 50°C temperature range. Data obtained from light sources built with single-mode polarization-locked VCSELs demonstrate that 30 ppm/°C stability can be readily obtained. The system has advantages over TEC-stabilized laser modules that include: 1) 90% lower relative RMS optical power and temperature sensitivity, 2) a five-fold enhancement of wall-plug efficiency, 3) less component testing and sorting, 4) lower manufacturing costs, and 5) practical automated calibration in batches at time of manufacture. The system is ideally suited for battery-powered environmental and in-home medical monitoring applications.

  6. Calibration transfer of a Raman spectroscopic quantification method for the assessment of liquid detergent compositions between two at-line instruments installed at two liquid detergent production plants.

    PubMed

    Brouckaert, D; Uyttersprot, J-S; Broeckx, W; De Beer, T

    2017-09-01

    Calibration transfer of partial least squares (PLS) quantification models is established between two Raman spectrometers located at two liquid detergent production plants. As full recalibration of existing calibration models is time-consuming, labour-intensive and costly, it is investigated whether mathematical correction methods requiring only a handful of standardization samples can overcome the dissimilarities in spectral response observed between the two measurement systems. Univariate and multivariate standardization approaches are investigated, ranging from simple slope/bias correction (SBC), local centring (LC) and single wavelength standardization (SWS) to more complex direct standardization (DS) and piecewise direct standardization (PDS). The results of these five calibration transfer methods are compared with one another as well as with a full recalibration. Four PLS quantification models, each predicting the concentration of one of the four main ingredients in the studied liquid detergent composition, are targeted for transfer. Accuracy profiles are established from the original and transferred quantification models for validation purposes. A reliable representation of the calibration models' performance before and after transfer is thus obtained, based on β-expectation tolerance intervals. For each transferred model, it is investigated whether every future measurement performed in routine use will be close enough to the unknown true value of the sample. From this validation, it is concluded that instrument standardization is successful for three of the four investigated calibration models using multivariate (DS and PDS) transfer approaches. The fourth transferred PLS model could not be validated over the investigated concentration range, owing to a lack of precision of the slave instrument. Comparing these transfer results to a full recalibration on the slave instrument allows comparison of the predictive power of both Raman systems and leads to guidelines for further standardization projects. It is concluded that it is essential to evaluate the performance of the slave instrument prior to transfer, even when it is theoretically identical to the master apparatus. Copyright © 2017 Elsevier B.V. All rights reserved.
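
    The simplest of the transfer methods named above, slope/bias correction, maps predictions obtained from slave-instrument spectra onto the master scale with a line fitted on the standardization samples. A sketch with illustrative numbers:

    ```python
    import numpy as np

    # Concentrations predicted by the master-calibrated PLS model for the same
    # standardization samples, measured on each instrument (invented values).
    pred_master = np.array([10.1, 15.2, 19.8, 25.3, 30.0])
    pred_slave = np.array([9.0, 13.9, 18.1, 23.5, 27.8])

    slope, bias = np.polyfit(pred_slave, pred_master, 1)   # SBC line

    new_slave_prediction = 21.0
    corrected = slope * new_slave_prediction + bias
    print(f"SBC-corrected prediction: {corrected:.2f}")
    ```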

  7. Simultaneous determination of umbelliferone and scopoletin in Tibetan medicine Saussurea laniceps and traditional Chinese medicine Radix angelicae pubescentis using excitation-emission matrix fluorescence coupled with second-order calibration method

    NASA Astrophysics Data System (ADS)

    Wang, Li; Wu, Hai-Long; Yin, Xiao-Li; Hu, Yong; Gu, Hui-Wen; Yu, Ru-Qin

    2017-01-01

    A chemometrics-assisted excitation-emission matrix (EEM) fluorescence method is presented for the simultaneous determination of umbelliferone and scopoletin in the Tibetan medicine Saussurea laniceps (SL) and the traditional Chinese medicine Radix angelicae pubescentis (RAP). Using the strategy of combining EEM fluorescence data with a second-order calibration method based on the alternating trilinear decomposition (ATLD) algorithm, the simultaneous quantification of umbelliferone and scopoletin in the two different complex systems was achieved successfully, even in the presence of potential interferents. The pretreatment is simple owing to the "second-order advantage" and the use of "mathematical separation" instead of awkward "physical or chemical separation". Satisfactory results were achieved, with limits of detection (LODs) for umbelliferone and scopoletin of 0.06 ng mL⁻¹ and 0.16 ng mL⁻¹, respectively. The average spike recoveries of umbelliferone and scopoletin are 98.8 ± 4.3% and 102.5 ± 3.3%, respectively. In addition, an HPLC-DAD method was used to further validate the presented strategy, and a t-test indicates that the prediction results of the two methods show no significant differences. The satisfactory experimental results imply that our method is fast, low-cost and sensitive compared with the HPLC-DAD method.

  8. Estimation of River Discharge at Ungauged Catchment using GIS Map Correlation Method as Applied in Sta. Lucia River in Mauban, Quezon, Philippines

    NASA Astrophysics Data System (ADS)

    Monjardin, Cris Edward F.; Uy, Francis Aldrine A.; Tan, Fibor J.

    2017-06-01

    This paper presents the use of the GIS Map Correlation (GMC) Method, a novel prediction-in-ungauged-basins (PUB) technique, to estimate the river flow at an ungauged catchment. The method is intended to reduce the time and cost of data gathering, since it relies on a reference calibrated watershed with nearly the same characteristics in terms of slope, curve number, land cover, climatic condition, and average basin elevation. The study used a set of modelling software together with digital elevation models (DEMs), rainfall and discharge data. The researchers estimated the river flow of the Sta. Lucia River in Quezon province, the ungauged catchment, by assessing 11 gauged catchments and determining which basin could be correlated with Sta. Lucia. After finding the most correlated basin, the researchers used its data with adjusted parameters. To evaluate the accuracy of the method, a rainfall event in the catchment was simulated and the actual discharge was compared with the discharge generated by HEC-HMS. The compared results showed a good fit, indicating that the GMC Method is effective for the calibration of ungauged catchments.

  9. Statistical emulation of landslide-induced tsunamis at the Rockall Bank, NE Atlantic

    PubMed Central

    Guillas, S.; Georgiopoulou, A.; Dias, F.

    2017-01-01

    Statistical methods constitute a useful approach to understand and quantify the uncertainty that governs complex tsunami mechanisms. Numerical experiments may often have a high computational cost. This forms a limiting factor for performing uncertainty and sensitivity analyses, where numerous simulations are required. Statistical emulators, as surrogates of these simulators, can provide predictions of the physical process in a much faster and computationally inexpensive way. They can form a prominent solution to explore thousands of scenarios that would be otherwise numerically expensive and difficult to achieve. In this work, we build a statistical emulator of the deterministic codes used to simulate submarine sliding and tsunami generation at the Rockall Bank, NE Atlantic Ocean, in two stages. First we calibrate, against observations of the landslide deposits, the parameters used in the landslide simulations. This calibration is performed under a Bayesian framework using Gaussian Process (GP) emulators to approximate the landslide model, and the discrepancy function between model and observations. Distributions of the calibrated input parameters are obtained as a result of the calibration. In a second step, a GP emulator is built to mimic the coupled landslide-tsunami numerical process. The emulator propagates the uncertainties in the distributions of the calibrated input parameters inferred from the first step to the outputs. As a result, a quantification of the uncertainty of the maximum free surface elevation at specified locations is obtained. PMID:28484339

  10. Statistical emulation of landslide-induced tsunamis at the Rockall Bank, NE Atlantic.

    PubMed

    Salmanidou, D M; Guillas, S; Georgiopoulou, A; Dias, F

    2017-04-01

    Statistical methods constitute a useful approach to understand and quantify the uncertainty that governs complex tsunami mechanisms. Numerical experiments may often have a high computational cost. This forms a limiting factor for performing uncertainty and sensitivity analyses, where numerous simulations are required. Statistical emulators, as surrogates of these simulators, can provide predictions of the physical process in a much faster and computationally inexpensive way. They can form a prominent solution to explore thousands of scenarios that would be otherwise numerically expensive and difficult to achieve. In this work, we build a statistical emulator of the deterministic codes used to simulate submarine sliding and tsunami generation at the Rockall Bank, NE Atlantic Ocean, in two stages. First we calibrate, against observations of the landslide deposits, the parameters used in the landslide simulations. This calibration is performed under a Bayesian framework using Gaussian Process (GP) emulators to approximate the landslide model, and the discrepancy function between model and observations. Distributions of the calibrated input parameters are obtained as a result of the calibration. In a second step, a GP emulator is built to mimic the coupled landslide-tsunami numerical process. The emulator propagates the uncertainties in the distributions of the calibrated input parameters inferred from the first step to the outputs. As a result, a quantification of the uncertainty of the maximum free surface elevation at specified locations is obtained.
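
    The emulation idea can be sketched with an off-the-shelf Gaussian-process regressor: fit the surrogate on a handful of expensive simulator runs, then query it cheaply with predictive uncertainty. The one-dimensional "simulator" below is a stand-in for the landslide-tsunami codes, not the actual model.

    ```python
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, ConstantKernel

    def expensive_simulator(x):
        # Placeholder for a run of the coupled landslide-tsunami code.
        return np.sin(3 * x) + 0.5 * x

    X_train = np.linspace(0, 2, 8).reshape(-1, 1)   # 8 affordable design points
    y_train = expensive_simulator(X_train).ravel()

    gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(),
                                  normalize_y=True)
    gp.fit(X_train, y_train)

    X_query = np.linspace(0, 2, 200).reshape(-1, 1)
    mean, std = gp.predict(X_query, return_std=True)  # cheap surrogate + uncertainty
    print("max predicted output proxy:", mean.max())
    ```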

  11. Automatic multi-camera calibration for deployable positioning systems

    NASA Astrophysics Data System (ADS)

    Axelsson, Maria; Karlsson, Mikael; Rudner, Staffan

    2012-06-01

    Surveillance with automated positioning and tracking of subjects and vehicles in 3D is desired in many defence and security applications. Camera systems with stereo or multiple cameras are often used for 3D positioning. In such systems, accurate camera calibration is needed to obtain a reliable 3D position estimate. There is also a need for automated camera calibration to facilitate fast deployment of semi-mobile multi-camera 3D positioning systems. In this paper we investigate a method for automatic calibration of the extrinsic camera parameters (relative camera pose and orientation) of a multi-camera positioning system. It is based on estimation of the essential matrix between each camera pair using the 5-point method for intrinsically calibrated cameras. The method is compared to a manual calibration method using real HD video data from a field trial with a multi-camera positioning system, and is also evaluated on simulated data from a stereo camera model. The results show that the reprojection error of the automated camera calibration method is close to or smaller than that of the manual method, and that the automated method can replace manual calibration.
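
    The core step, estimating the essential matrix for an intrinsically calibrated pair with the 5-point method and recovering relative pose, is sketched below with OpenCV on synthetic correspondences; in a deployed system the matched points would come from the video data.

    ```python
    import numpy as np
    import cv2

    K = np.array([[1400.0, 0, 960], [0, 1400.0, 540], [0, 0, 1.0]])  # intrinsics
    rng = np.random.default_rng(5)

    # Synthetic scene: 3-D points seen by camera 1 (at the origin) and camera 2
    # (shifted 0.5 m along x). x_cam2 = X - t1, hence tvec = -t1 below.
    pts3d = rng.uniform([-2, -1, 4], [2, 1, 8], size=(60, 3))
    rvec0, t0 = np.zeros(3), np.zeros(3)
    t1 = np.array([0.5, 0.0, 0.0])
    pts1, _ = cv2.projectPoints(pts3d, rvec0, t0, K, None)
    pts2, _ = cv2.projectPoints(pts3d, rvec0, -t1, K, None)

    E, inliers = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC,
                                      prob=0.999, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=inliers)
    print("recovered rotation:\n", R)
    print("recovered translation direction:", t.ravel())  # unit norm, up to scale
    ```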

  12. Quantifying and reducing statistical uncertainty in sample-based health program costing studies in low- and middle-income countries

    PubMed Central

    Resch, Stephen

    2018-01-01

    Objectives: In many low- and middle-income countries, the costs of delivering public health programs such as for HIV/AIDS, nutrition, and immunization are not routinely tracked. A number of recent studies have sought to estimate program costs on the basis of detailed information collected on a subsample of facilities. While unbiased estimates can be obtained via accurate measurement and appropriate analyses, they are subject to statistical uncertainty. Quantification of this uncertainty, for example, via standard errors and/or 95% confidence intervals, provides important contextual information for decision-makers and for the design of future costing studies. While other forms of uncertainty, such as that due to model misspecification, are considered and can be investigated through sensitivity analyses, statistical uncertainty is often not reported in studies estimating the total program costs. This may be due to a lack of awareness/understanding of (1) the technical details regarding uncertainty estimation and (2) the availability of software with which to calculate uncertainty for estimators resulting from complex surveys. We provide an overview of statistical uncertainty in the context of complex costing surveys, emphasizing the various potential specific sources that contribute to overall uncertainty. Methods: We describe how analysts can compute measures of uncertainty, either via appropriately derived formulae or through resampling techniques such as the bootstrap. We also provide an overview of calibration as a means of using additional auxiliary information that is readily available for the entire program, such as the total number of doses administered, to decrease uncertainty and thereby improve decision-making and the planning of future studies. Results: A recent study of the national program for routine immunization in Honduras shows that uncertainty can be reduced by using information available prior to the study. This method can not only be used when estimating the total cost of delivering established health programs but also to decrease uncertainty when the interest lies in assessing the incremental effect of an intervention. Conclusion: Measures of statistical uncertainty associated with survey-based estimates of program costs, such as standard errors and 95% confidence intervals, provide important contextual information for health policy decision-making and key inputs for the design of future costing studies. Such measures are often not reported, possibly because of technical challenges associated with their calculation and a lack of awareness of appropriate software. Modern statistical analysis methods for survey data, such as calibration, provide a means to exploit additional information that is readily available but was not used in the design of the study to significantly improve the estimation of total cost through the reduction of statistical uncertainty. PMID:29636964
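
    As an illustration of the resampling approach described above, the sketch below bootstraps sampled facility costs to attach a standard error and 95% confidence interval to an estimated total program cost; the cost data and expansion factor are invented.

    ```python
    import numpy as np

    rng = np.random.default_rng(6)
    facility_costs = rng.lognormal(mean=9.0, sigma=0.6, size=40)  # sampled sites [$]
    n_total_facilities = 400          # facilities in the whole program (assumed)
    expansion = n_total_facilities / len(facility_costs)

    total_hat = expansion * facility_costs.sum()

    # Resample facilities with replacement to approximate the sampling distribution.
    boot = np.array([
        expansion * rng.choice(facility_costs, size=len(facility_costs)).sum()
        for _ in range(2000)
    ])
    se = boot.std(ddof=1)
    lo, hi = np.percentile(boot, [2.5, 97.5])
    print(f"total cost estimate: ${total_hat:,.0f}  SE: ${se:,.0f}")
    print(f"95% CI: (${lo:,.0f}, ${hi:,.0f})")
    ```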

  13. Impact of automatic calibration techniques on HMD life cycle costs and sustainable performance

    NASA Astrophysics Data System (ADS)

    Speck, Richard P.; Herz, Norman E., Jr.

    2000-06-01

    Automatic test and calibration has become a valuable feature in many consumer products, ranging from antilock braking systems to auto-tune TVs. This paper discusses HMDs (Helmet Mounted Displays) and how similar techniques can reduce life cycle costs and increase sustainable performance if they are integrated into a program early enough. Optical ATE (Automatic Test Equipment) is already zeroing distortion in HMDs, thereby making binocular displays a practical reality. A suitcase-sized, field-portable optical ATE unit could re-zero these errors in the ready room to cancel the effects of aging, minor damage and component replacement. Planning for this would yield large savings through relaxed component specifications and reduced logistic costs, yet the sustained performance would far exceed that attained with fixed calibration strategies. Major tactical benefits can come from reducing display errors, particularly in information fusion modules and virtual 'beyond visual range' operations. Some versions of the ATE described are in production, and examples of high-resolution optical test data will be discussed.

  14. Fast and low-cost structured light pattern sequence projection.

    PubMed

    Wissmann, Patrick; Forster, Frank; Schmitt, Robert

    2011-11-21

    We present a high-speed and low-cost approach for structured light pattern sequence projection. Using a fast rotating binary spatial light modulator, our method is potentially capable of projection frequencies in the kHz domain, while enabling pattern rasterization as low as 2 μm pixel size and inherently linear grayscale reproduction quantized at 12 bits/pixel or better. Due to the circular arrangement of the projected fringe patterns, we extend the widely used ray-plane triangulation method to ray-cone triangulation and provide a detailed description of the optical calibration procedure. Using the proposed projection concept in conjunction with the recently published coded phase shift (CPS) pattern sequence, we demonstrate high accuracy 3-D measurement at 200 Hz projection frequency and 20 Hz 3-D reconstruction rate. © 2011 Optical Society of America

  15. A Novel Multi-Camera Calibration Method based on Flat Refractive Geometry

    NASA Astrophysics Data System (ADS)

    Huang, S.; Feng, M. C.; Zheng, T. X.; Li, F.; Wang, J. Q.; Xiao, L. F.

    2018-03-01

    Multi-camera calibration plays an important role in many fields. In this paper, we present a novel multi-camera calibration method based on flat refractive geometry. All cameras can acquire calibration images of a transparent glass calibration board (TGCB) at the same time. The use of a TGCB leads to refraction, which generates calibration error. The theory of flat refractive geometry is employed to eliminate this error; the new method thus resolves the refraction introduced by the TGCB. Moreover, the bundle adjustment method is used to minimize the reprojection error and obtain optimized calibration results. Four-camera calibration results on real data show that the mean value and standard deviation of the reprojection error of our method are 4.3411e-05 and 0.4553 pixel, respectively. The experimental results show that the proposed method is accurate and reliable.

  16. A comparison of entropy balance and probability weighting methods to generalize observational cohorts to a population: a simulation and empirical example.

    PubMed

    Harvey, Raymond A; Hayden, Jennifer D; Kamble, Pravin S; Bouchard, Jonathan R; Huang, Joanna C

    2017-04-01

    We compared methods to control bias and confounding in observational studies including inverse probability weighting (IPW) and stabilized IPW (sIPW). These methods often require iteration and post-calibration to achieve covariate balance. In comparison, entropy balance (EB) optimizes covariate balance a priori by calibrating weights using the target's moments as constraints. We measured covariate balance empirically and by simulation by using absolute standardized mean difference (ASMD), absolute bias (AB), and root mean square error (RMSE), investigating two scenarios: the size of the observed (exposed) cohort exceeds the target (unexposed) cohort and vice versa. The empirical application weighted a commercial health plan cohort to a nationally representative National Health and Nutrition Examination Survey target on the same covariates and compared average total health care cost estimates across methods. Entropy balance alone achieved balance (ASMD ≤ 0.10) on all covariates in simulation and empirically. In simulation scenario I, EB achieved the lowest AB and RMSE (13.64, 31.19) compared with IPW (263.05, 263.99) and sIPW (319.91, 320.71). In scenario II, EB outperformed IPW and sIPW with smaller AB and RMSE. In scenarios I and II, EB achieved the lowest mean estimate difference from the simulated population outcome ($490.05, $487.62) compared with IPW and sIPW, respectively. Empirically, only EB differed from the unweighted mean cost, indicating that IPW and sIPW weighting was ineffective. Entropy balance demonstrated the bias-variance tradeoff, achieving higher estimate accuracy yet lower estimate precision compared with IPW methods. EB weighting required no post-processing and effectively mitigated observed bias and confounding. Copyright © 2016 John Wiley & Sons, Ltd.
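    As a sketch of the entropy balance step described above, the snippet below solves the dual of the exponential-tilting problem: it finds weights proportional to exp(lambda'(x - m)) whose weighted covariate means exactly match the target moments m. The function and variable names, and the use of SciPy's BFGS optimizer, are illustrative choices rather than the authors' implementation.

      import numpy as np
      from scipy.optimize import minimize

      def entropy_balance(X, target_means, base_weights=None):
          """Weights that match the target moments while staying as close as
          possible (in the entropy sense) to the base weights."""
          n, p = X.shape
          q = np.full(n, 1.0 / n) if base_weights is None else base_weights / base_weights.sum()
          D = X - target_means                      # deviations from the target moments

          def dual(lam):                            # log-partition of the tilted weights
              return np.log(q @ np.exp(D @ lam))

          def grad(lam):                            # weighted mean deviation; zero at optimum
              w = q * np.exp(D @ lam)
              w /= w.sum()
              return w @ D

          res = minimize(dual, np.zeros(p), jac=grad, method="BFGS")
          w = q * np.exp(D @ res.x)
          return w / w.sum()

      # toy check: reweight a cohort so its weighted covariate means hit the target
      X = np.random.default_rng(0).normal(size=(500, 3))
      w = entropy_balance(X, np.array([0.2, -0.1, 0.5]))
      print(w @ X)   # ~ [0.2, -0.1, 0.5]

    The vanishing gradient at the optimum is precisely the balance constraint, which is why entropy balance achieves covariate balance in one shot rather than by iteration and post-calibration.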

  17. Calibration of a COTS Integration Cost Model Using Local Project Data

    NASA Technical Reports Server (NTRS)

    Boland, Dillard; Coon, Richard; Byers, Kathryn; Levitt, David

    1997-01-01

    The software measures and estimation techniques appropriate to a Commercial Off the Shelf (COTS) integration project differ from those commonly used for custom software development. Labor and schedule estimation tools that model COTS integration are available. Like all estimation tools, they must be calibrated with the organization's local project data. This paper describes the calibration of a commercial model using data collected by the Flight Dynamics Division (FDD) of the NASA Goddard Space Flight Center (GSFC). The model calibrated is SLIM Release 4.0 from Quantitative Software Management (QSM). By adopting the SLIM reuse model and by treating configuration parameters as lines of code, we were able to establish a consistent calibration for COTS integration projects. The paper summarizes the metrics, the calibration process and results, and the validation of the calibration.

  18. Prediction of Groundwater Level at Slope Areas using Electrical Resistivity Method

    NASA Astrophysics Data System (ADS)

    Baharuddin, M. F. T.; Hazreek, Z. A. M.; Azman, M. A. A.; Madun, A.

    2018-04-01

    Groundwater level plays an important role as an agent that triggers landslides. Conventionally, the groundwater level is monitored using a standpipe piezometer, a method with several disadvantages related to cost, time and data coverage. The aim of this study is to determine the groundwater level at slope areas using the electrical resistivity method and to verify the groundwater level of the study area against standpipe piezometer data. The data acquisition was performed using an ABEM Terrameter SAS4000. For data analysis and processing, RES2DINV and SURFER were used. The groundwater level was calibrated with reference to the standpipe piezometer based on the electrical resistivity value (ERV).

  19. Autotune Calibrates Models to Building Use Data

    ScienceCinema

    None

    2018-01-16

    Models of existing buildings are currently unreliable unless calibrated manually by a skilled professional. Autotune, as the name implies, automates this process by calibrating the model of an existing building to measured data, and is now available as open source software. This enables private businesses to incorporate Autotune into their products so that their customers can more effectively estimate cost savings of reduced energy consumption measures in existing buildings.

  20. Cost-Benefit Analysis of Computer Resources for Machine Learning

    USGS Publications Warehouse

    Champion, Richard A.

    2007-01-01

    Machine learning describes pattern-recognition algorithms - in this case, probabilistic neural networks (PNNs). These can be computationally intensive, in part because of the nonlinear optimizer, a numerical process that calibrates the PNN by minimizing a sum of squared errors. This report suggests efficiencies that are expressed as cost and benefit. The cost is computer time needed to calibrate the PNN, and the benefit is goodness-of-fit, how well the PNN learns the pattern in the data. There may be a point of diminishing returns where a further expenditure of computer resources does not produce additional benefits. Sampling is suggested as a cost-reduction strategy. One consideration is how many points to select for calibration and another is the geometric distribution of the points. The data points may be nonuniformly distributed across space, so that sampling at some locations provides additional benefit while sampling at other locations does not. A stratified sampling strategy can be designed to select more points in regions where they reduce the calibration error and fewer points in regions where they do not. Goodness-of-fit tests ensure that the sampling does not introduce bias. This approach is illustrated by statistical experiments for computing correlations between measures of roadless area and population density for the San Francisco Bay Area. The alternative to training efficiencies is to rely on high-performance computer systems. These may require specialized programming and algorithms that are optimized for parallel performance.

  1. Direct Sensor Orientation of a Land-Based Mobile Mapping System

    PubMed Central

    Rau, Jiann-Yeou; Habib, Ayman F.; Kersting, Ana P.; Chiang, Kai-Wei; Bang, Ki-In; Tseng, Yi-Hsing; Li, Yu-Hua

    2011-01-01

    A land-based mobile mapping system (MMS) is flexible and useful for the acquisition of road environment geospatial information. It integrates a set of imaging sensors and a position and orientation system (POS). The positioning quality of such systems is highly dependent on the accuracy of the utilized POS. This dependence is the major drawback, owing to the elevated cost associated with high-end GPS/INS units, particularly the inertial system. The potential accuracy of the direct sensor orientation depends on the architecture and quality of the GPS/INS integration process as well as the validity of the system calibration (i.e., calibration of the individual sensors as well as the system mounting parameters). In this paper, a novel single-step procedure using integrated sensor orientation with relative orientation constraint for the estimation of the mounting parameters is introduced. A comparative analysis between the proposed single-step and the traditional two-step procedure is carried out. Moreover, the estimated mounting parameters using the different methods are used in a direct geo-referencing procedure to evaluate their performance and the feasibility of the implemented system. Experimental results show that the proposed system using the single-step system calibration method can achieve high 3D positioning accuracy. PMID:22164015

  2. A calibration method of infrared LVF based spectroradiometer

    NASA Astrophysics Data System (ADS)

    Liu, Jiaqing; Han, Shunli; Liu, Lei; Hu, Dexin

    2017-10-01

    In this paper, a calibration method for an LVF-based spectroradiometer is summarized, covering both spectral and radiometric calibration. The spectral calibration proceeds as follows: first, the relationship between the stepping motor's step number and the transmission wavelength is derived by theoretical calculation, including a non-linearity correction of the LVF; second, a line-to-line method is used to correct the theoretical wavelength; finally, 3.39 μm and 10.69 μm lasers are used to validate the spectral calibration, showing that the sought accuracy of 0.1% or better is achieved. A new sub-region, multi-point calibration method is used for the radiometric calibration to improve accuracy; results show that the sought accuracy of 1% or better is achieved.

  3. Simple and cost-effective liquid chromatography-mass spectrometry method to measure dabrafenib quantitatively and six metabolites semi-quantitatively in human plasma.

    PubMed

    Vikingsson, Svante; Dahlberg, Jan-Olof; Hansson, Johan; Höiom, Veronica; Gréen, Henrik

    2017-06-01

    Dabrafenib is an inhibitor of BRAF V600E used for treating metastatic melanoma, but a majority of patients experience adverse effects. Methods to measure the levels of dabrafenib and major metabolites during treatment are needed to allow development of individualized dosing strategies to reduce the burden of such adverse events. In this study, an LC-MS/MS method capable of measuring dabrafenib quantitatively and six metabolites semi-quantitatively is presented. The method is fully validated with regard to dabrafenib in human plasma in the range 5-5000 ng/mL. The analytes were separated on a C18 column after protein precipitation and detected in positive electrospray ionization mode using a Xevo TQ triple quadrupole mass spectrometer. As no commercial reference standards are available, the calibration curve of dabrafenib was used for semi-quantification of dabrafenib metabolites. Compared to earlier methods, the presented method represents a simpler and more cost-effective approach suitable for clinical studies. Graphical abstract: combined multiple reaction monitoring transitions of dabrafenib and metabolites in a typical case sample.

  4. Multiple-frequency continuous wave ultrasonic system for accurate distance measurement

    NASA Astrophysics Data System (ADS)

    Huang, C. F.; Young, M. S.; Li, Y. C.

    1999-02-01

    A highly accurate multiple-frequency continuous wave ultrasonic range-measuring system for use in air is described. The proposed system uses a method heretofore applied to radio frequency distance measurement but not to air-based ultrasonic systems. The method presented here is based upon the comparative phase shifts generated by three continuous ultrasonic waves of different but closely spaced frequencies. In the test embodiment to confirm concept feasibility, two low cost 40 kHz ultrasonic transducers are set face to face and used to transmit and receive ultrasound. Individual frequencies are transmitted serially, each generating its own phase shift. For any given frequency, the transmitter/receiver distance modulates the phase shift between the transmitted and received signals. Comparison of the phase shifts allows a highly accurate evaluation of target distance. A single-chip microcomputer-based multiple-frequency continuous wave generator and phase detector was designed to record and compute the phase shift information and the resulting distance, which is then sent to either an LCD or a PC. The PC is necessary only for calibration of the system, which can be run independently after calibration. Experiments were conducted to test the performance of the whole system. Experimentally, ranging accuracy was found to be within ±0.05 mm, with a range of over 1.5 m. The main advantages of this ultrasonic range measurement system are high resolution, low cost, narrow bandwidth requirements, and ease of implementation.
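    The coarse-to-fine phase logic described above can be sketched in a few lines: the phase difference between two closely spaced tones behaves like one long synthetic wavelength c/(f2 - f1) that removes the range ambiguity, and each later stage resolves the integer cycle count of a shorter synthetic or real wavelength. The frequencies, spacings, and noise-free phases below are illustrative assumptions, not the published design.

      import numpy as np

      C_AIR = 343.0  # assumed speed of sound in air (m/s)

      def phase_of(d, f):
          """Phase shift accumulated over distance d at frequency f, wrapped to [0, 2*pi)."""
          return (2 * np.pi * d * f / C_AIR) % (2 * np.pi)

      def refine(d_est, phi, f_eff):
          """Refine d_est using a (possibly synthetic) frequency f_eff whose measured
          fractional phase is phi, by resolving the integer number of wavelengths."""
          lam = C_AIR / f_eff
          n = np.round(d_est / lam - phi / (2 * np.pi))
          return (n + phi / (2 * np.pi)) * lam

      # Hypothetical tones near 40 kHz; each stage's ambiguity interval must be
      # covered by the accuracy of the previous stage.
      f1, f2, f3 = 40_000.0, 40_200.0, 42_000.0
      d_true = 1.2345
      p1, p2, p3 = (phase_of(d_true, f) for f in (f1, f2, f3))

      # Stage 1: synthetic wavelength c/(f2 - f1) ~ 1.7 m -> unambiguous below 1.7 m
      d = C_AIR * ((p2 - p1) % (2 * np.pi)) / (2 * np.pi * (f2 - f1))
      # Stage 2: synthetic frequency f3 - f1 = 2 kHz (wavelength ~ 17 cm)
      d = refine(d, (p3 - p1) % (2 * np.pi), f3 - f1)
      # Stage 3: a single carrier (wavelength ~ 8.6 mm) sets the final resolution
      d = refine(d, p1, f1)
      print(f"estimated {d:.5f} m vs true {d_true:.5f} m")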

  5. The role of observational reference data for climate downscaling: Insights from the VALUE COST Action

    NASA Astrophysics Data System (ADS)

    Kotlarski, Sven; Gutiérrez, José M.; Boberg, Fredrik; Bosshard, Thomas; Cardoso, Rita M.; Herrera, Sixto; Maraun, Douglas; Mezghani, Abdelkader; Pagé, Christian; Räty, Olle; Stepanek, Petr; Soares, Pedro M. M.; Szabo, Peter

    2016-04-01

    VALUE is an open European network to validate and compare downscaling methods for climate change research (http://www.value-cost.eu). A key deliverable of VALUE is the development of a systematic validation framework to enable the assessment and comparison of downscaling methods. Such assessments can be expected to crucially depend on the existence of accurate and reliable observational reference data. In dynamical downscaling, observational data can influence model development itself and, later on, model evaluation, parameter calibration and added value assessment. In empirical-statistical downscaling, observations serve as predictand data and directly influence model calibration with corresponding effects on downscaled climate change projections. We here present a comprehensive assessment of the influence of uncertainties in observational reference data and of scale-related issues on several of the above-mentioned aspects. First, temperature and precipitation characteristics as simulated by a set of reanalysis-driven EURO-CORDEX RCM experiments are validated against three different gridded reference data products, namely (1) the EOBS dataset, (2) the recently developed EURO4M-MESAN regional re-analysis, and (3) several national high-resolution and quality-controlled gridded datasets that recently became available. The analysis reveals a considerable influence of the choice of the reference data on the evaluation results, especially for precipitation. It is also illustrated how differences between the reference data sets influence the ranking of RCMs according to a comprehensive set of performance measures.

  6. An Agent-Based Model of Farmer Decision Making in Jordan

    NASA Astrophysics Data System (ADS)

    Selby, Philip; Medellin-Azuara, Josue; Harou, Julien; Klassert, Christian; Yoon, Jim

    2016-04-01

    We describe an agent-based hydro-economic model of groundwater-irrigated agriculture in the Jordan Highlands. The model employs a Multi-Agent-Simulation (MAS) framework and is designed to evaluate direct and indirect outcomes of climate change scenarios and policy interventions on farmer decision making, including annual land use, groundwater use for irrigation, and water sales to a water tanker market. Land use and water use decisions are simulated for groups of farms, grouped by location and by behavioural and economic similarity. Decreasing groundwater levels, and the associated increase in pumping costs, are important drivers for change within Jordan's agricultural sector. We describe how this is considered by coupling the agricultural and groundwater models. The agricultural production model employs Positive Mathematical Programming (PMP), a method for calibrating agricultural production functions to observed planted areas. PMP has successfully been used with disaggregate models for policy analysis. We adapt the PMP approach to allow explicit evaluation of the impact of pumping costs, groundwater purchase fees and a water tanker market. The work demonstrates the applicability of agent-based agricultural decision making assessment in the Jordan Highlands and its integration with agricultural model calibration methods. The proposed approach is designed and implemented with software such that it could be used to evaluate a variety of physical and human influences on decision making in agricultural water management.

  7. Optics-Only Calibration of a Neural-Net Based Optical NDE Method for Structural Health Monitoring

    NASA Technical Reports Server (NTRS)

    Decker, Arthur J.

    2004-01-01

    A calibration process is presented that uses optical measurements alone to calibrate a neural-net based NDE method. The method itself detects small changes in the vibration mode shapes of structures. The optics-only calibration process confirms previous work showing that the sensitivity to vibration-amplitude changes can be as small as 10 nanometers. A more practical value in an NDE service laboratory is shown to be 50 nanometers. Both model-generated and experimental calibrations are demonstrated using two implementations of the calibration technique. The implementations are based on previously published demonstrations of the NDE method and an alternative calibration procedure that depends on comparing neural-net and point sensor measurements. The optics-only calibration method, unlike the alternative method, does not require modifications of the structure being tested or the creation of calibration objects. The calibration process can be used to test improvements in the NDE process and to develop a vibration-mode-independence of damage-detection sensitivity. The calibration effort was intended to support NASA's objective to promote safety in the operation of ground test facilities, and in aviation generally, by allowing the detection of the gradual onset of structural changes and damage.

  8. Potential and Limitations of an Improved Method to Produce Dynamometric Wheels

    PubMed Central

    García de Jalón, Javier

    2018-01-01

    A new methodology for the estimation of tyre-contact forces is presented. The new procedure is an evolution of a previous method based on harmonic elimination techniques developed with the aim of producing low cost dynamometric wheels. While the original method required stress measurement in many rim radial lines and the fulfillment of some rigid conditions of symmetry, the new methodology described in this article significantly reduces the number of required measurement points and greatly relaxes symmetry constraints. This can be done without compromising the estimation error level. The reduction of the number of measuring radial lines increases the ripple of demodulated signals due to non-eliminated higher order harmonics. Therefore, it is necessary to adapt the calibration procedure to this new scenario. A new calibration procedure that takes into account the angular position of the wheel is completely described. This new methodology is tested on a standard commercial five-spoke car wheel. The obtained results are qualitatively compared to those derived from the application of the former methodology, leading to the conclusion that the new method is both simpler and more robust due to the reduction in the number of measuring points, while the contact force estimation error remains at an acceptable level. PMID:29439427

  9. A polychromatic adaption of the Beer-Lambert model for spectral decomposition

    NASA Astrophysics Data System (ADS)

    Sellerer, Thorsten; Ehn, Sebastian; Mechlem, Korbinian; Pfeiffer, Franz; Herzen, Julia; Noël, Peter B.

    2017-03-01

    We present a semi-empirical forward-model for spectral photon-counting CT which is fully compatible with state-of-the-art maximum-likelihood estimators (MLE) for basis material line integrals. The model relies on a minimum calibration effort to make the method applicable in routine clinical set-ups with the need for periodic re-calibration. In this work we present an experimental verification of our proposed method. The proposed method uses an adapted Beer-Lambert model, describing the energy-dependent attenuation of a polychromatic x-ray spectrum using additional exponential terms. In an experimental dual-energy photon-counting CT setup based on a CdTe detector, the model demonstrates an accurate prediction of the registered counts for an attenuated polychromatic spectrum. Deviations between model and measurement data thereby lie within the Poisson statistical limit of the performed acquisitions, providing an effectively unbiased forward-model. The experimental data also show that the model is capable of handling possible spectral distortions introduced by the photon-counting detector and CdTe sensor. The simplicity and high accuracy of the proposed model provide a viable forward-model for MLE-based spectral decomposition methods without the need for costly and time-consuming characterization of the system response.

  10. Wavelength calibration of dispersive near-infrared spectrometer using relative k-space distribution with low coherence interferometer

    NASA Astrophysics Data System (ADS)

    Kim, Ji-hyun; Han, Jae-Ho; Jeong, Jichai

    2016-05-01

    The commonly employed calibration methods for laboratory-made spectrometers have several disadvantages, including poor calibration when the number of characteristic spectral peaks is low. Therefore, we present a wavelength calibration method using relative k-space distribution with low coherence interferometer. The proposed method utilizes an interferogram with a perfect sinusoidal pattern in k-space for calibration. Zero-crossing detection extracts the k-space distribution of a spectrometer from the interferogram in the wavelength domain, and a calibration lamp provides information about absolute wavenumbers. To assign wavenumbers, wavelength-to-k-space conversion is required for the characteristic spectrum of the calibration lamp with the extracted k-space distribution. Then, the wavelength calibration is completed by inverse conversion of the k-space into wavelength domain. The calibration performance of the proposed method was demonstrated with two experimental conditions of four and eight characteristic spectral peaks. The proposed method elicited reliable calibration results in both cases, whereas the conventional method of third-order polynomial curve fitting failed to determine wavelengths in the case of four characteristic peaks. Moreover, for optical coherence tomography imaging, the proposed method could improve axial resolution due to higher suppression of sidelobes in point spread function than the conventional method. We believe that our findings can improve not only wavelength calibration accuracy but also resolution for optical coherence tomography.
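    A rough sketch of the zero-crossing step: the zero crossings of an ideal low-coherence interferogram are spaced pi apart in phase, hence equally spaced in k, so interpolating a phase index across pixels yields the relative k value of every detector pixel. Absolute wavenumbers would then be assigned using the characteristic lines of the calibration lamp. Names are illustrative; the real procedure in the paper also handles noise and the wavelength-to-k conversion of the lamp spectrum.

      import numpy as np

      def relative_k_axis(interferogram):
          """Relative k value for each pixel of a spectrometer, from one
          low-coherence interferogram recorded in the pixel domain."""
          s = interferogram - interferogram.mean()
          zc = np.where(np.diff(np.sign(s)) != 0)[0]   # pixel indices of zero crossings
          phase = np.pi * np.arange(len(zc))           # crossings are pi apart in phase
          pixels = np.arange(len(interferogram))
          # interpolate the (monotonic) phase onto all pixels: a relative k axis
          return np.interp(pixels, zc, phase)

    Rescaling this relative axis with two or more known lamp wavenumbers completes the calibration, and the resulting k-linear sampling is what suppresses the point-spread-function sidelobes mentioned in the abstract.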

  11. Verification of the ISO calibration method for field pyranometers under tropical sky conditions

    NASA Astrophysics Data System (ADS)

    Janjai, Serm; Tohsing, Korntip; Pattarapanitchai, Somjet; Detkhon, Pasakorn

    2017-02-01

    Field pyranomters need to be annually calibrated and the International Organization for Standardization (ISO) has defined a standard method (ISO 9847) for calibrating these pyranometers. According to this standard method for outdoor calibration, the field pyranometers have to be compared to a reference pyranometer for the period of 2 to 14 days, depending on sky conditions. In this work, the ISO 9847 standard method was verified under tropical sky conditions. To verify the standard method, calibration of field pyranometers was conducted at a tropical site located in Nakhon Pathom (13.82o N, 100.04o E), Thailand under various sky conditions. The conditions of the sky were monitored by using a sky camera. The calibration results for different time periods used for the calibration under various sky conditions were analyzed. It was found that the calibration periods given by this standard method could be reduced without significant change in the final calibration result. In addition, recommendation and discussion on the use of this standard method in the tropics were also presented.

  12. Research on camera on orbit radial calibration based on black body and infrared calibration stars

    NASA Astrophysics Data System (ADS)

    Wang, YuDu; Su, XiaoFeng; Zhang, WanYing; Chen, FanSheng

    2018-05-01

    Affected by the launching process and the space environment, the response capability of a space camera inevitably degrades, so it is necessary for a space camera to undergo spaceborne radiant calibration. In this paper, we propose a calibration method based on accurate infrared standard stars to increase infrared radiation measurement precision. As stars can be considered point targets, we use them as the radiometric calibration source and establish a Taylor expansion method and an energy extrapolation model based on the WISE and 2MASS catalogs. We then update the calibration results obtained from the black body. Finally, the calibration mechanism is designed and the design is verified by an on-orbit test. The experimental calibration result shows that the irradiance extrapolation error is about 3% and the accuracy of the calibration methods is about 10%; the results show that the methods can satisfy the requirements of on-orbit calibration.

  13. Self-Calibrating Pressure Transducer

    NASA Technical Reports Server (NTRS)

    Lueck, Dale E. (Inventor)

    2006-01-01

    A self-calibrating pressure transducer is disclosed. The device uses an embedded zirconia membrane which pumps a determined quantity of oxygen into the device. The associated pressure can be determined, and thus, the transducer pressure readings can be calibrated. The zirconia membrane obtains oxygen from the surrounding environment when possible. Otherwise, an oxygen reservoir or other source is utilized. In another embodiment, a reversible fuel cell assembly is used to pump oxygen and hydrogen into the system. Since a known amount of gas is pumped across the cell, the pressure produced can be determined, and thus, the device can be calibrated. An isolation valve system is used to allow the device to be calibrated in situ. Calibration is optionally automated so that calibration can be continuously monitored. The device is preferably a fully integrated MEMS device. Since the device can be calibrated without removing it from the process, reductions in costs and down time are realized.
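    The self-calibration rests on simple electrochemical and gas-law bookkeeping: the charge driven through the zirconia cell fixes the number of O2 molecules pumped (four electrons per molecule), and the ideal gas law converts that amount into a pressure rise in a known volume. A hedged sketch with illustrative names:

      # Pressure added by pumping O2 electrochemically across a zirconia membrane.
      # Assumes ideal-gas behaviour and a known, fixed reference volume.
      FARADAY = 96485.0   # C/mol
      R_GAS = 8.314       # J/(mol*K)

      def pressure_rise(charge_C, volume_m3, temp_K):
          n_o2 = charge_C / (4 * FARADAY)            # 4 electrons per O2 molecule
          return n_o2 * R_GAS * temp_K / volume_m3   # Pa, from PV = nRT

      print(pressure_rise(1.0, 1e-6, 300.0))         # ~6.5 kPa for 1 C into 1 cm^3 at 300 K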

  14. A Comparison of Two Balance Calibration Model Building Methods

    NASA Technical Reports Server (NTRS)

    DeLoach, Richard; Ulbrich, Norbert

    2007-01-01

    Simulated strain-gage balance calibration data is used to compare the accuracy of two balance calibration model building methods for different noise environments and calibration experiment designs. The first building method obtains a math model for the analysis of balance calibration data after applying a candidate math model search algorithm to the calibration data set. The second building method uses stepwise regression analysis in order to construct a model for the analysis. Four balance calibration data sets were simulated in order to compare the accuracy of the two math model building methods. The simulated data sets were prepared using the traditional One Factor At a Time (OFAT) technique and the Modern Design of Experiments (MDOE) approach. Random and systematic errors were introduced in the simulated calibration data sets in order to study their influence on the math model building methods. Residuals of the fitted calibration responses and other statistical metrics were compared in order to evaluate the calibration models developed with different combinations of noise environment, experiment design, and model building method. Overall, predicted math models and residuals of both math model building methods show very good agreement. Significant differences in model quality were attributable to noise environment, experiment design, and their interaction. Generally, the addition of systematic error significantly degraded the quality of calibration models developed from OFAT data by either method, but MDOE experiment designs were more robust with respect to the introduction of a systematic component of the unexplained variance.

  15. Multi-sensor calibration of low-cost magnetic, angular rate and gravity systems.

    PubMed

    Lüken, Markus; Misgeld, Berno J E; Rüschen, Daniel; Leonhardt, Steffen

    2015-10-13

    We present a new calibration procedure for low-cost nine degrees-of-freedom (9DOF) magnetic, angular rate and gravity (MARG) sensor systems, which relies on a calibration cube, a reference table and a body sensor network (BSN). The 9DOF MARG sensor is part of our recently-developed "Integrated Posture and Activity Network by Medit Aachen" (IPANEMA) BSN. The advantage of this new approach is the use of the calibration cube, which allows for easy integration of two sensor nodes of the IPANEMA BSN. One 9DOF MARG sensor node is thereby used for calibration; the second 9DOF MARG sensor node is used for reference measurements. A novel algorithm uses these measurements to further improve the performance of the calibration procedure by processing arbitrarily-executed motions. In addition, the calibration routine can be used in an alignment procedure to minimize errors in the orientation between the 9DOF MARG sensor system and a motion capture inertial reference system. A two-stage experimental study is conducted to underline the performance of our calibration procedure. In both stages of the proposed calibration procedure, the BSN data, as well as reference tracking data are recorded. In the first stage, the mean values of all sensor outputs are determined as the absolute measurement offset to minimize integration errors in the derived movement model of the corresponding body segment. The second stage deals with the dynamic characteristics of the measurement system where the dynamic deviation of the sensor output compared to a reference system is corrected. In practical validation experiments, this procedure showed promising results with a maximum RMS error of 3.89°.

  16. Multi-Sensor Calibration of Low-Cost Magnetic, Angular Rate and Gravity Systems

    PubMed Central

    Lüken, Markus; Misgeld, Berno J.E.; Rüschen, Daniel; Leonhardt, Steffen

    2015-01-01

    We present a new calibration procedure for low-cost nine degrees-of-freedom (9DOF) magnetic, angular rate and gravity (MARG) sensor systems, which relies on a calibration cube, a reference table and a body sensor network (BSN). The 9DOF MARG sensor is part of our recently-developed “Integrated Posture and Activity Network by Medit Aachen” (IPANEMA) BSN. The advantage of this new approach is the use of the calibration cube, which allows for easy integration of two sensor nodes of the IPANEMA BSN. One 9DOF MARG sensor node is thereby used for calibration; the second 9DOF MARG sensor node is used for reference measurements. A novel algorithm uses these measurements to further improve the performance of the calibration procedure by processing arbitrarily-executed motions. In addition, the calibration routine can be used in an alignment procedure to minimize errors in the orientation between the 9DOF MARG sensor system and a motion capture inertial reference system. A two-stage experimental study is conducted to underline the performance of our calibration procedure. In both stages of the proposed calibration procedure, the BSN data, as well as reference tracking data are recorded. In the first stage, the mean values of all sensor outputs are determined as the absolute measurement offset to minimize integration errors in the derived movement model of the corresponding body segment. The second stage deals with the dynamic characteristics of the measurement system where the dynamic deviation of the sensor output compared to a reference system is corrected. In practical validation experiments, this procedure showed promising results with a maximum RMS error of 3.89°. PMID:26473873

  17. Self-Calibrated In-Process Photogrammetry for Large Raw Part Measurement and Alignment before Machining

    PubMed Central

    Mendikute, Alberto; Zatarain, Mikel; Bertelsen, Álvaro; Leizea, Ibai

    2017-01-01

    Photogrammetry methods are being used more and more as a 3D technique for large scale metrology applications in industry. Optical targets are placed on an object and images are taken around it, where measuring traceability is provided by precise off-process pre-calibrated digital cameras and scale bars. According to the 2D target image coordinates, target 3D coordinates and camera views are jointly computed. One of the applications of photogrammetry is the measurement of raw part surfaces prior to its machining. For this application, post-process bundle adjustment has usually been adopted for computing the 3D scene. With that approach, a high computation time is observed, leading in practice to time consuming and user dependent iterative review and re-processing procedures until an adequate set of images is taken, limiting its potential for fast, easy-to-use, and precise measurements. In this paper, a new efficient procedure is presented for solving the bundle adjustment problem in portable photogrammetry. In-process bundle computing capability is demonstrated on a consumer grade desktop PC, enabling quasi real time 2D image and 3D scene computing. Additionally, a method for the self-calibration of camera and lens distortion has been integrated into the in-process approach due to its potential for highest precision when using low cost non-specialized digital cameras. Measurement traceability is set only by scale bars available in the measuring scene, avoiding the uncertainty contribution of off-process camera calibration procedures or the use of special purpose calibration artifacts. The developed self-calibrated in-process photogrammetry has been evaluated both in a pilot case scenario and in industrial scenarios for raw part measurement, showing a total in-process computing time typically below 1 s per image up to a maximum of 2 s during the last stages of the computed industrial scenes, along with a relative precision of 1/10,000 (e.g., 0.1 mm error in 1 m) with an error RMS below 0.2 pixels at image plane, ranging at the same performance reported for portable photogrammetry with precise off-process pre-calibrated cameras. PMID:28891946

  18. Self-Calibrated In-Process Photogrammetry for Large Raw Part Measurement and Alignment before Machining.

    PubMed

    Mendikute, Alberto; Yagüe-Fabra, José A; Zatarain, Mikel; Bertelsen, Álvaro; Leizea, Ibai

    2017-09-09

    Photogrammetry methods are being used more and more as a 3D technique for large scale metrology applications in industry. Optical targets are placed on an object and images are taken around it, where measuring traceability is provided by precise off-process pre-calibrated digital cameras and scale bars. According to the 2D target image coordinates, target 3D coordinates and camera views are jointly computed. One of the applications of photogrammetry is the measurement of raw part surfaces prior to its machining. For this application, post-process bundle adjustment has usually been adopted for computing the 3D scene. With that approach, a high computation time is observed, leading in practice to time consuming and user dependent iterative review and re-processing procedures until an adequate set of images is taken, limiting its potential for fast, easy-to-use, and precise measurements. In this paper, a new efficient procedure is presented for solving the bundle adjustment problem in portable photogrammetry. In-process bundle computing capability is demonstrated on a consumer grade desktop PC, enabling quasi real time 2D image and 3D scene computing. Additionally, a method for the self-calibration of camera and lens distortion has been integrated into the in-process approach due to its potential for highest precision when using low cost non-specialized digital cameras. Measurement traceability is set only by scale bars available in the measuring scene, avoiding the uncertainty contribution of off-process camera calibration procedures or the use of special purpose calibration artifacts. The developed self-calibrated in-process photogrammetry has been evaluated both in a pilot case scenario and in industrial scenarios for raw part measurement, showing a total in-process computing time typically below 1 s per image up to a maximum of 2 s during the last stages of the computed industrial scenes, along with a relative precision of 1/10,000 (e.g. 0.1 mm error in 1 m) with an error RMS below 0.2 pixels at image plane, ranging at the same performance reported for portable photogrammetry with precise off-process pre-calibrated cameras.
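    The bundle adjustment step at the core of both records above jointly refines all camera poses and target coordinates by minimizing the reprojection error. A bare-bones sketch of that objective, with a simple pinhole model, no lens distortion, and an illustrative parameterization (not the authors' in-process solver):

      import numpy as np
      from scipy.optimize import least_squares
      from scipy.spatial.transform import Rotation

      def project(points3d, rvec, tvec, focal):
          """Pinhole projection of world points into one camera (no distortion)."""
          pc = Rotation.from_rotvec(rvec).apply(points3d) + tvec
          return focal * pc[:, :2] / pc[:, 2:3]

      def residuals(params, n_cams, n_pts, cam_idx, pt_idx, obs, focal):
          """Stacked reprojection errors over all observations."""
          rvecs = params[:3 * n_cams].reshape(n_cams, 3)
          tvecs = params[3 * n_cams:6 * n_cams].reshape(n_cams, 3)
          pts = params[6 * n_cams:].reshape(n_pts, 3)
          res = []
          for c in range(n_cams):
              sel = cam_idx == c
              res.append((project(pts[pt_idx[sel]], rvecs[c], tvecs[c], focal) - obs[sel]).ravel())
          return np.concatenate(res)

      # Tiny synthetic check: two cameras observing the same 20 targets
      rng = np.random.default_rng(0)
      pts_true = rng.uniform([-1, -1, 4], [1, 1, 6], (20, 3))
      rv = np.zeros((2, 3)); tv = np.array([[0.0, 0.0, 0.0], [0.5, 0.0, 0.0]])
      cam_idx = np.repeat([0, 1], 20); pt_idx = np.tile(np.arange(20), 2)
      obs = np.vstack([project(pts_true, rv[c], tv[c], 800.0) for c in (0, 1)])
      x0 = np.concatenate([rv.ravel(), tv.ravel(),
                           (pts_true + rng.normal(0, 0.05, pts_true.shape)).ravel()])
      sol = least_squares(residuals, x0, args=(2, 20, cam_idx, pt_idx, obs, 800.0))
      # A production solver would exploit sparsity and fix the gauge (e.g. hold one
      # camera and the scale, which the scale bars provide in the paper).
      print("final reprojection RMS:", np.sqrt(np.mean(sol.fun ** 2)))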

  19. Calibration of Hydrophone Stations: Lessons Learned from the Ascension Island Experiment

    DTIC Science & Technology

    2000-09-01

    source based on the implosion of a glass sphere for future long-range calibrations. RESEARCH ACCOMPLISHED The J.C. Ross, an icebreaker class...waters around Ascension Island. The blow-ups show the track in the immediate vicinity of the three hydrophones and plots their nominal location. The...used has practical and cost-driven limitations. Small implosive sources such as lightbulbs have been used from ships as hydrophone calibration sources

  20. Geometric calibration of Colour and Stereo Surface Imaging System of ESA's Trace Gas Orbiter

    NASA Astrophysics Data System (ADS)

    Tulyakov, Stepan; Ivanov, Anton; Thomas, Nicolas; Roloff, Victoria; Pommerol, Antoine; Cremonese, Gabriele; Weigel, Thomas; Fleuret, Francois

    2018-01-01

    There are many geometric calibration methods for "standard" cameras. These methods, however, cannot be used for the calibration of telescopes with large focal lengths and complex off-axis optics. Moreover, specialized calibration methods for telescopes are scarce in the literature. We describe the calibration method that we developed for the Colour and Stereo Surface Imaging System (CaSSIS) telescope, on board the ExoMars Trace Gas Orbiter (TGO). Although our method is described in the context of CaSSIS, with camera-specific experiments, it is general and can be applied to other telescopes. We further encourage re-use of the proposed method by making our calibration code and data available on-line.

  1. Calibration of mass spectrometric peptide mass fingerprint data without specific external or internal calibrants

    PubMed Central

    Wolski, Witold E; Lalowski, Maciej; Jungblut, Peter; Reinert, Knut

    2005-01-01

    Background Peptide Mass Fingerprinting (PMF) is a widely used mass spectrometry (MS) method of analysis of proteins and peptides. It relies on the comparison between experimentally determined and theoretical mass spectra. The PMF process requires calibration, usually performed with external or internal calibrants of known molecular masses. Results We have introduced two novel MS calibration methods. The first method utilises the local similarity of peptide maps generated after separation of complex protein samples by two-dimensional gel electrophoresis. It computes a multiple peak-list alignment of the data set using a modified Minimum Spanning Tree (MST) algorithm. The second method exploits the idea that hundreds of MS samples are measured in parallel on one sample support. It improves the calibration coefficients by applying a two-dimensional Thin Plate Splines (TPS) smoothing algorithm. We studied the novel calibration methods utilising data generated by three different MALDI-TOF-MS instruments. We demonstrate that a PMF data set can be calibrated without resorting to external or relying on widely occurring internal calibrants. The methods developed here were implemented in R and are part of the BioConductor package mscalib. Conclusion The MST calibration algorithm is well suited to calibrate MS spectra of protein samples resulting from two-dimensional gel electrophoretic separation. The TPS based calibration algorithm might be used to correct systematic mass measurement errors observed for large MS sample supports. As compared to other methods, our combined MS spectra calibration strategy increases the peptide/protein identification rate by an additional 5-15%. PMID:16102175

  2. An investigation into force-moment calibration techniques applicable to a magnetic suspension and balance system. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Eskins, Jonathan

    1988-01-01

    The problem of determining the forces and moments acting on a wind tunnel model suspended in a Magnetic Suspension and Balance System is addressed. Two calibration methods were investigated for three types of model cores, i.e., Alnico, Samarium-Cobalt, and a superconducting solenoid. Both methods involve calibrating the currents in the electromagnetic array against known forces and moments. The first is a static calibration method using calibration weights and a system of pulleys. The other method, dynamic calibration, involves oscillating the model and using its inertia to provide calibration forces and moments. Static calibration data, found to produce the most reliable results, is presented for three degrees of freedom at 0, 15, and -10 deg angle of attack. Theoretical calculations are hampered by the inability to represent iron-cored electromagnets. Dynamic calibrations, despite being quicker and easier to perform, are not as accurate as static calibrations. Data for dynamic calibrations at 0 and 15 deg is compared with the relevant static data acquired. Distortion of oscillation traces is cited as a major source of error in dynamic calibrations.

  3. Artificial Neural Network and application in calibration transfer of AOTF-based NIR spectrometer

    NASA Astrophysics Data System (ADS)

    Wang, Wenbo; Jiang, Chengzhi; Xu, Kexin; Wang, Bin

    2002-09-01

    Chemometrics is widely applied to develop models for quantitative prediction of unknown samples in near-infrared (NIR) spectroscopy. However, calibrated models generally fail when new instruments are introduced or replacement of instrument parts occurs. Therefore, calibration transfer becomes necessary to avoid the costly, time-consuming recalibration of models. Piecewise Direct Standardization (PDS) has been proven to be a reference method for standardization. In this paper, an Artificial Neural Network (ANN) is employed as an alternative to transfer spectra between instruments. Two acousto-optic tunable filter NIR spectrometers are employed in the experiment. Spectra of glucose solution are collected on the spectrometers in transflectance mode. A backpropagation network with two layers is employed to simulate the function between instruments piecewise. The standardization subset is selected by the Kennard and Stone (K-S) algorithm in the space of the first two scores of a Principal Component Analysis (PCA) of the spectra matrix. In the current experiment, obvious nonlinearity is noted between instruments, and attempts are made to correct this nonlinear effect. Prediction results before and after successful calibration transfer are compared. Successful transfer can be achieved by adapting the window size and training parameters. The final results reveal that the ANN is effective in correcting the nonlinear instrumental difference, and only a 1.5-2 times larger prediction error is expected after successful transfer.

  4. The calibration methods for Multi-Filter Rotating Shadowband Radiometer: a review

    NASA Astrophysics Data System (ADS)

    Chen, Maosi; Davis, John; Tang, Hongzhao; Ownby, Carolyn; Gao, Wei

    2013-09-01

    The continuous, over two-decade data record from the Multi-Filter Rotating Shadowband Radiometer (MFRSR) is ideal for climate research, which requires timely and accurate information on important atmospheric components such as gases, aerosols, and clouds. Except for parameters derived from MFRSR measurement ratios, which are not impacted by calibration error, most applications require accurate calibration factor(s), angular correction, and spectral response function(s) from calibration. Although a laboratory lamp (or reference) calibration can provide all the information needed to convert the instrument readings to actual radiation, in situ calibration methods are implemented routinely (daily) to fill the gaps between lamp calibrations. In this paper, the basic structure and the data collection and pretreatment of the MFRSR are described. The laboratory lamp calibration and its limitations are summarized. The cloud screening algorithms for MFRSR data are presented. The in situ calibration methods, the standard Langley method and its variants, the ratio-Langley method, the general method, Alexandrov's comprehensive method, and Chen's multi-channel method, are outlined. The reason that none of these methods suits all situations is that each assumes that some properties, such as aerosol optical depth (AOD), total optical depth (TOD), precipitable water vapor (PWV), effective size of aerosol particles, or the Angstrom coefficient, are invariant over time. These assumptions are not universally valid, and some of the required conditions rarely occur. In practice, daily calibration factors derived from these methods should be smoothed to restrain error.
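    For reference, the standard Langley method named above fits the Beer-Lambert relation ln V = ln V0 - tau*m over a clear, stable morning or afternoon and extrapolates to zero airmass m to recover the top-of-atmosphere calibration constant V0. A minimal sketch with synthetic data and illustrative numbers:

      import numpy as np

      # Synthetic stable period: constant optical depth tau over the Langley run
      airmass = np.linspace(2.0, 6.0, 40)
      V0_true, tau_true = 1.85, 0.12
      signal = V0_true * np.exp(-tau_true * airmass)
      signal *= np.exp(np.random.default_rng(1).normal(0.0, 0.004, airmass.size))  # noise

      # Langley regression: ln V = ln V0 - tau*m, extrapolated to m = 0
      slope, intercept = np.polyfit(airmass, np.log(signal), 1)
      print(f"V0 = {np.exp(intercept):.4f} (true {V0_true}), tau = {-slope:.4f} (true {tau_true})")

    The TOD-invariance assumption the abstract mentions is visible here: if tau drifted during the run, the fitted intercept, and hence the calibration factor, would be biased.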

  5. Self-calibration for lensless color microscopy.

    PubMed

    Flasseur, Olivier; Fournier, Corinne; Verrier, Nicolas; Denis, Loïc; Jolivet, Frédéric; Cazier, Anthony; Lépine, Thierry

    2017-05-01

    Lensless color microscopy (also called in-line digital color holography) is a recent quantitative 3D imaging method used in several areas including biomedical imaging and microfluidics. By targeting cost-effective and compact designs, the wavelength of the low-end sources used is known only imprecisely, in particular because of their dependence on temperature and power supply voltage. This imprecision is the source of biases during the reconstruction step. An additional source of error is the crosstalk phenomenon, i.e., the mixture in color sensors of signals originating from different color channels. We propose to use a parametric inverse problem approach to achieve self-calibration of a digital color holographic setup. This process provides an estimation of the central wavelengths and crosstalk. We show that taking the crosstalk phenomenon into account in the reconstruction step improves its accuracy.

  6. Simple transfer calibration method for a Cimel Sun-Moon photometer: calculating lunar calibration coefficients from Sun calibration constants.

    PubMed

    Li, Zhengqiang; Li, Kaitao; Li, Donghui; Yang, Jiuchun; Xu, Hua; Goloub, Philippe; Victori, Stephane

    2016-09-20

    The Cimel new technologies allow both daytime and nighttime aerosol optical depth (AOD) measurements. Although the daytime AOD calibration protocols are well established, accurate and simple nighttime calibration is still a challenging task. Standard lunar-Langley and intercomparison calibration methods both require specific conditions in terms of atmospheric stability and site condition. Additionally, the lunar irradiance model also has some known limits to its accuracy. This paper presents a simple calibration method that transfers the direct-Sun calibration constant, V0,Sun, to the lunar irradiance calibration coefficient, CMoon. Our approach is a pure calculation method, independent of site limits, e.g., Moon phase. The method is also not affected by the lunar irradiance model limitations, which are the largest error source of traditional calibration methods. Besides, this new transfer calibration approach is easy to use in the field, since CMoon can be obtained directly once V0,Sun is known. Error analysis suggests that the average uncertainty of CMoon over the 440-1640 nm bands obtained with the transfer method is 2.4%-2.8%, depending on the V0,Sun approach (Langley or intercomparison), which is theoretically comparable with that of the lunar-Langley approach. In this paper, the Sun-Moon transfer and the Langley methods are compared based on site measurements in Beijing, and the day-night measurement continuity and performance are analyzed.

  7. Optimized star sensors laboratory calibration method using a regularization neural network.

    PubMed

    Zhang, Chengfen; Niu, Yanxiong; Zhang, Hao; Lu, Jiazhen

    2018-02-10

    High-precision ground calibration is essential to ensure the performance of star sensors. However, complex distortion and multi-error coupling have brought great difficulties to traditional calibration methods, especially for large field of view (FOV) star sensors. Although increasing the complexity of models is an effective way to improve the calibration accuracy, it significantly increases the demand for calibration data. In order to achieve high-precision calibration of star sensors with large FOV, a novel laboratory calibration method based on a regularization neural network is proposed. A multi-layer neural network is designed to represent the mapping from the star vector to the corresponding star point coordinate directly. To ensure the generalization performance of the network, regularization strategies are incorporated into the network structure and the training algorithm. Simulation and experiment results demonstrate that the proposed method can achieve high precision with less calibration data and without any other a priori information. Compared with traditional methods, the calibration error of the star sensor decreased by about 30%. The proposed method can satisfy the precision requirement for large FOV star sensors.

  8. Features of the calibration of dynamic force transducers

    NASA Astrophysics Data System (ADS)

    Prilepko, M. Yu.; Lysenko, V. G.

    2018-04-01

    The article discusses calibration methods for dynamic force measuring instruments. The relevance of the work is dictated by the need for a valid determination of the metrological characteristics of dynamic force transducers, taking into account their intended application. The aim of this work is to justify the choice of a calibration method that allows the metrological characteristics of dynamic force transducers to be determined under simulated operating conditions, so that suitability for the intended use can be assessed. The following tasks are solved: the mathematical model and the main measurement equation for calibrating dynamic force transducers by load weight are constructed, and the main uncertainty budget components of the calibration are defined. A new method for calibrating dynamic force transducers is offered, using a reference "force-deformation" converter based on a calibrated elastic element whose deformation is measured by a laser interferometer. The mathematical model and the main measurement equation of the offered method are constructed. It is shown that a calibration method based on laser-interferometer measurements of the deformations of a calibrated elastic element allows one to exclude, or considerably reduce, the uncertainty budget components inherent in the load-weight method.
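    The reference "force-deformation" converter reduces to Hooke's law with an interferometric displacement readout: each interference fringe corresponds to half a wavelength of deformation of the calibrated elastic element. A toy sketch (assumed HeNe wavelength and an illustrative stiffness; the real method also propagates the uncertainty budget):

      # Dynamic force reconstructed from the deformation of a calibrated elastic
      # element, read out as an interferometric fringe count (lambda/2 per fringe).
      LAMBDA_HENE = 632.8e-9      # m, assumed laser wavelength
      STIFFNESS = 2.0e6           # N/m, illustrative calibrated elastic element

      def force_from_fringes(fringe_count):
          deformation = fringe_count * LAMBDA_HENE / 2.0
          return STIFFNESS * deformation   # Hooke's law: F = k * x

      print(force_from_fringes(1000))      # ~633 N for 1000 fringes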

  9. Matrix Factorisation-based Calibration For Air Quality Crowd-sensing

    NASA Astrophysics Data System (ADS)

    Dorffer, Clement; Puigt, Matthieu; Delmaire, Gilles; Roussel, Gilles; Rouvoy, Romain; Sagnier, Isabelle

    2017-04-01

    Internet of Things (IoT) is extending internet to physical objects and places. The internet-enabled objects are thus able to communicate with each other and with their users. One main interest of IoT is the ease of production of huge masses of data (Big Data) using distributed networks of connected objects, thus making possible a fine-grained yet accurate analysis of physical phenomena. Mobile crowdsensing is a way to collect data using IoT. It basically consists of acquiring geolocalized data from the sensors (from or connected to the mobile devices, e.g., smartphones) of a crowd of volunteers. The sensed data are then collectively shared using wireless connection—such as GSM or WiFi—and stored on a dedicated server to be processed. One major application of mobile crowdsensing is environment monitoring. Indeed, with the proliferation of miniaturized yet sensitive sensors on one hand and, on the other hand, of low-cost microcontrollers/single-card PCs, it is easy to extend the sensing abilities of smartphones. Alongside the conventional, regulated, bulky and expensive instruments used in authoritative air quality stations, it is then possible to create a large-scale mobile sensor network providing insightful information about air quality. In particular, the finer spatial sampling rate due to such a dense network should allow air quality models to take into account local effects such as street canyons. However, one key issue with low-cost air quality sensors is the lack of trust in the sensed data. In most crowdsensing scenarios, the sensors (i) cannot be calibrated in a laboratory before or during their deployment and (ii) might be sparsely or continuously faulty (thus providing outliers in the data). Such issues should be automatically handled from the sensor readings. Indeed, due to the masses of generated data, solving the above issues cannot be performed by experts but requires specific data processing techniques. In this work, we assume that some mobile sensors share some information using the APISENSE® crowdsensing platform and we aim to calibrate the sensor responses from the data directly. For that purpose, we express the sensor readings as a low-rank matrix with missing entries and we revisit self-calibration as a Matrix Factorization (MF) problem. In our proposed framework, one factor matrix contains the calibration parameters while the other is structured by the calibration model and contains some values of the sensed phenomenon. The MF calibration approach also uses the precise measurements from ATMO—the French public institution—to drive the calibration of the mobile sensors. MF calibration can be improved using, e.g., the mean calibration parameters provided by the sensor manufacturers, or using sparse priors or a model of the physical phenomenon. All our approaches are shown to provide a better calibration accuracy than matrix-completion-based and robust-regression-based methods, even in difficult scenarios involving a lot of missing data and/or very few accurate references. When combined with a dictionary of air quality patterns, our experiments suggest that MF is not only able to perform sensor network calibration but also to provide detailed maps of air quality.
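    A toy version of the matrix-factorization view described above: readings follow y_ij = g_i * x_j + o_i, i.e. a rank-two factorization in which one factor holds per-sensor affine calibration parameters (gain g_i, offset o_i) and the other holds the sampled phenomenon x_j; masked alternating least squares handles the missing entries, and one accurate reference row (standing in for the authoritative ATMO measurements) anchors the otherwise unidentifiable scale. All names, dimensions, and the affine sensor model are illustrative assumptions.

      import numpy as np

      rng = np.random.default_rng(2)
      n_sensors, n_times = 8, 200
      x_true = 10 + np.abs(np.cumsum(rng.normal(0, 1, n_times)))    # phenomenon over time
      gain = rng.uniform(0.5, 1.5, n_sensors); gain[0] = 1.0        # sensor 0 = reference
      offs = rng.uniform(-3.0, 3.0, n_sensors); offs[0] = 0.0

      M = rng.random((n_sensors, n_times)) < 0.6                    # observation mask
      Y = (gain[:, None] * x_true + offs[:, None]
           + rng.normal(0, 0.1, (n_sensors, n_times))) * M          # masked noisy readings

      g, o = np.ones(n_sensors), np.zeros(n_sensors)
      x = Y.sum(0) / np.maximum(M.sum(0), 1)
      for _ in range(200):
          # phenomenon update: weighted least squares over the sensors seeing each instant
          x = (M * g[:, None] * (Y - o[:, None] * M)).sum(0) \
              / np.maximum((M * g[:, None] ** 2).sum(0), 1e-9)
          # calibration update: per-sensor regression of its readings on x
          for i in range(1, n_sensors):                             # sensor 0 stays fixed
              m = M[i]
              A = np.stack([x[m], np.ones(m.sum())], axis=1)
              g[i], o[i] = np.linalg.lstsq(A, Y[i, m], rcond=None)[0]

      print("max gain error:", np.abs(g - gain).max(),
            "max offset error:", np.abs(o - offs).max())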

  10. Radiometer calibration methods and resulting irradiance differences

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Habte, Aron; Sengupta, Manajit; Andreas, Afshin

    Accurate solar radiation measured by radiometers depends on instrument performance specifications, installation method, calibration procedure, measurement conditions, maintenance practices, location, and environmental conditions. This study addresses the effect of different calibration methodologies and resulting differences provided by radiometric calibration service providers such as the National Renewable Energy Laboratory (NREL) and manufacturers of radiometers. Some of these methods calibrate radiometers indoors and some outdoors. To establish or understand the differences in calibration methodologies, we processed and analyzed field-measured data from radiometers deployed for 10 months at NREL's Solar Radiation Research Laboratory. These different methods of calibration resulted in a difference of +/-1% to +/-2% in solar irradiance measurements. Analyzing these differences will ultimately assist in determining the uncertainties of the field radiometer data and will help develop a consensus on a standard for calibration. Further advancing procedures for precisely calibrating radiometers to world reference standards that reduce measurement uncertainties will help the accurate prediction of the output of planned solar conversion projects and improve the bankability of financing solar projects.

  11. Linking big models to big data: efficient ecosystem model calibration through Bayesian model emulation

    NASA Astrophysics Data System (ADS)

    Fer, I.; Kelly, R.; Andrews, T.; Dietze, M.; Richardson, A. D.

    2016-12-01

    Our ability to forecast ecosystems is limited by how well we parameterize ecosystem models. Direct measurements for all model parameters are not always possible, and inverse estimation of these parameters through Bayesian methods is computationally costly. A solution to the computational challenges of Bayesian calibration is to approximate the posterior probability surface using a Gaussian Process that emulates the complex process-based model. Here we report the integration of this method within an ecoinformatics toolbox, Predictive Ecosystem Analyzer (PEcAn), and its application with two ecosystem models: SIPNET and ED2.1. SIPNET is a simple model, allowing application of MCMC methods both to the model itself and to its emulator. We used both approaches to assimilate flux (CO2 and latent heat), soil respiration, and soil carbon data from Bartlett Experimental Forest. This comparison showed that the emulator is reliable in terms of convergence to the posterior distribution. A 10000-iteration MCMC analysis with SIPNET itself required more than two orders of magnitude greater computation time than an MCMC run of the same length with its emulator. This difference would be greater for a more computationally demanding model. Validation of the emulator-calibrated SIPNET against both the assimilated data and out-of-sample data showed improved fit and reduced uncertainty around model predictions. We next applied the validated emulator method to ED2, whose complexity precludes standard Bayesian data assimilation. We used the ED2 emulator to assimilate demographic data from a network of inventory plots. For validation of the calibrated ED2, we compared the model to results from Empirical Succession Mapping (ESM), a novel synthesis of successional patterns in Forest Inventory and Analysis data. Our results revealed that while the pre-assimilation ED2 formulation cannot capture the emergent demographic patterns from the ESM analysis, constraining the model parameters controlling demographic processes increased their agreement considerably.
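    A compressed sketch of the emulator idea (one parameter, a stand-in log-posterior, a plain RBF interpolator and a Metropolis sampler; the PEcAn implementation is far more elaborate): the expensive model is evaluated only at a small design, and MCMC then runs on the cheap emulated surface.

      import numpy as np

      rng = np.random.default_rng(3)

      def log_post(theta):                 # stand-in for an expensive model run
          return -0.5 * ((theta - 2.0) / 0.3) ** 2

      # 1) evaluate the expensive posterior on a small design
      design = np.linspace(0.0, 4.0, 12)
      y = np.array([log_post(t) for t in design])

      # 2) fit a simple Gaussian-process (RBF) interpolant of the log-posterior
      def K(a, b, ell=0.5):
          return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell ** 2)

      alpha = np.linalg.solve(K(design, design) + 1e-8 * np.eye(design.size), y)
      emulated = lambda t: (K(np.atleast_1d(t), design) @ alpha)[0]

      # 3) Metropolis sampling on the cheap emulated surface
      theta, chain = 1.0, []
      for _ in range(20_000):
          prop = theta + rng.normal(0.0, 0.3)
          if np.log(rng.random()) < emulated(prop) - emulated(theta):
              theta = prop
          chain.append(theta)
      print("posterior mean ~", np.mean(chain[2_000:]), "(true 2.0)")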

  12. A low-cost tracked C-arm (TC-arm) upgrade system for versatile quantitative intraoperative imaging.

    PubMed

    Amiri, Shahram; Wilson, David R; Masri, Bassam A; Anglin, Carolyn

    2014-07-01

    C-arm fluoroscopy is frequently used in clinical applications as a low-cost and mobile real-time qualitative assessment tool. C-arms, however, are not widely accepted for applications involving quantitative assessments, mainly due to the lack of reliable and low-cost position tracking methods, as well as adequate calibration and registration techniques. The solution suggested in this work is a tracked C-arm (TC-arm) which employs a low-cost sensor tracking module that can be retrofitted to any conventional C-arm for tracking the individual joints of the device. Registration and offline calibration methods were developed that allow accurate tracking of the gantry and determination of the exact intrinsic and extrinsic parameters of the imaging system for any acquired fluoroscopic image. The performance of the system was evaluated in comparison to an Optotrak[Formula: see text] motion tracking system and by a series of experiments on accurately built ball-bearing phantoms. Accuracies of the system were determined for 2D-3D registration, three-dimensional landmark localization, and for generating panoramic stitched views in simulated intraoperative applications. The system was able to track the center point of the gantry with an accuracy of [Formula: see text] mm or better. Accuracies of 2D-3D registrations were [Formula: see text] mm and [Formula: see text]. Three-dimensional landmark localization had an accuracy of [Formula: see text] of the length (or [Formula: see text] mm) on average, depending on whether the landmarks were located along, above, or across the table. The overall accuracies of the two-dimensional measurements conducted on stitched panoramic images of the femur and lumbar spine were 2.5 [Formula: see text] 2.0 % [Formula: see text] and [Formula: see text], respectively. The TC-arm system has the potential to achieve sophisticated quantitative fluoroscopy assessment capabilities using an existing C-arm imaging system. This technology may be useful to improve the quality of orthopedic surgery and interventional radiology.

  13. 3D print of polymer bonded rare-earth magnets, and 3D magnetic field scanning with an end-user 3D printer

    NASA Astrophysics Data System (ADS)

    Huber, C.; Abert, C.; Bruckner, F.; Groenefeld, M.; Muthsam, O.; Schuschnigg, S.; Sirak, K.; Thanhoffer, R.; Teliban, I.; Vogler, C.; Windl, R.; Suess, D.

    2016-10-01

    3D printing is a recently developed technique for single-unit production and for structures that were previously impossible to build. The current work presents a method to 3D print polymer-bonded isotropic hard magnets with a low-cost, end-user 3D printer. Commercially available isotropic NdFeB powder inside a PA11 matrix is characterized and prepared for the printing process. An example of a printed magnet with a complex shape designed to generate a specific stray field is presented and compared with a finite element simulation solving the macroscopic Maxwell equations. For magnetic characterization, and for comparing 3D-printed structures with injection-molded parts, hysteresis measurements are performed. To measure the stray field outside the magnet, the printer is upgraded to a 3D magnetic flux density measurement system. To avoid an elaborate manual adjustment of the sensor, a simulation is used to calibrate the angles, sensitivity, and offset of the sensor. With this setup, a measurement resolution of 0.05 mm along the z-axis is achievable. The effectiveness of our calibration method is shown. With our setup, we are able to print polymer-bonded magnetic systems with the freedom of a specific complex shape and locally tailored magnetic properties. The 3D scanning setup is easy to mount, and with our calibration method we obtain accurate measurements of the stray field.

  14. Retrodirective Radar Calibration Nanosatellite

    DTIC Science & Technology

    2013-07-01

    Larry K. Martin (Student Program Manager); Nicholas G. Fisher (Student Systems Engineer); Toy Lim; John ...; University of Hawaii. Final report, July 2013: Cost-Effective, Rapid Design of a Student-Built Radar Calibration Nanosatellite. AIAA Reinventing Space Conference 2012, AIAA-RS-2012-3001.

  15. Research on the calibration methods of the luminance parameter of radiation luminance meters

    NASA Astrophysics Data System (ADS)

    Cheng, Weihai; Huang, Biyong; Lin, Fangsheng; Li, Tiecheng; Yin, Dejin; Lai, Lei

    2017-10-01

    This paper introduces the standard diffuse-reflection white plate method and the integrating-sphere standard luminance source method for calibrating the luminance parameter of radiation luminance meters. The paper compares the calibration results of the two methods through principle analysis and experimental verification. After both methods were used to calibrate the same radiation luminance meter, the data obtained verify that the test results of the two methods are both reliable. The results show that the standard white plate method yields display values with fewer errors and better reproducibility, whereas the standard luminance source method is more convenient and better suited for on-site calibration; it also covers a wider range and can test the linearity of the instruments.

  16. Novel Applications of Rapid Prototyping in Gamma-ray and X-ray Imaging

    PubMed Central

    Miller, Brian W.; Moore, Jared W.; Gehm, Michael E.; Furenlid, Lars R.; Barrett, Harrison H.

    2010-01-01

    Advances in 3D rapid-prototyping printers, 3D modeling software, and casting techniques allow for the fabrication of cost-effective, custom components in gamma-ray and x-ray imaging systems. Applications extend to new fabrication methods for custom collimators, pinholes, calibration and resolution phantoms, mounting and shielding components, and imaging apertures. Details of the fabrication process for these components are presented, specifically the 3D printing process, cold casting with a tungsten epoxy, and lost-wax casting in platinum. PMID:22984341

  17. Towards improved characterization of northern wetlands (or other landscapes) by remote sensing - a rapid approach to collect ground truth data

    NASA Astrophysics Data System (ADS)

    Gålfalk, Magnus; Karlson, Martin; Crill, Patrick; Bastviken, David

    2017-04-01

    The calibration and validation of remote sensing land cover products are highly dependent on accurate ground truth data, which are costly and practically challenging to collect. This study evaluates a novel and efficient alternative to the field surveys and UAV imaging commonly applied for this task. The method consists of i) a lightweight, waterproof, remote-controlled RGB camera mounted on an extendable monopod used for acquiring wide-field images of the ground from a height of 4.5 meters, and ii) a script for semi-automatic image classification. In the post-processing, the wide-field images are corrected for optical distortion and geometrically rectified so that the spatial resolution is the same over the surface area used for classification. The script distinguishes land surface components by color, brightness, and spatial variability. The method was evaluated in wetland areas located around Abisko, northern Sweden. Proportional estimates of the six main surface components in the wetlands (wet and dry Sphagnum, shrub, grass, water, rock) were derived for 200 images, equivalent to 10 × 10 m field plots. These photo plots were then used as calibration data for a regional-scale satellite-based classification that separates the six wetland surface components using a Sentinel-1 time series. The method presented in this study is accurate, rapid, robust, and cost-efficient in comparison to field surveys (time consuming) and drone mapping (which requires low wind speeds and no rain, suffers from battery-limited flight times, has potential GPS/compass errors far north, and in some areas is prohibited by law).
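
    The semi-automatic classification step can be sketched compactly: convert each rectified photo plot to HSV, label pixels with simple color/brightness rules, and report proportional cover. The class rules and thresholds below are invented for illustration; the authors' script also uses spatial variability, which is omitted here.

      import numpy as np
      from matplotlib.colors import rgb_to_hsv

      def classify_plot(rgb):
          """rgb: (H, W, 3) float array in [0, 1] for one rectified plot."""
          hsv = rgb_to_hsv(rgb)
          hue, sat, val = hsv[..., 0], hsv[..., 1], hsv[..., 2]
          labels = np.full(rgb.shape[:2], "other", dtype=object)
          labels[val < 0.15] = "water"                       # dark pixels
          labels[(sat < 0.10) & (val > 0.5)] = "rock"        # bright, grey
          labels[(hue > 0.17) & (hue < 0.45) & (sat > 0.25)] = "grass"
          classes, counts = np.unique(labels, return_counts=True)
          return dict(zip(classes, counts / labels.size))    # proportional cover

      cover = classify_plot(np.random.rand(400, 400, 3))     # mock photo plot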

  18. Selective and sensitive fluorimetric determination of carbendazim in apple and orange after preconcentration with magnetite-molecularly imprinted polymer

    NASA Astrophysics Data System (ADS)

    İlktaç, Raif; Aksuner, Nur; Henden, Emur

    2017-03-01

    In this study, a magnetite-molecularly imprinted polymer has been used for the first time as a selective adsorbent prior to the fluorimetric determination of carbendazim. The adsorption capacity of the magnetite-molecularly imprinted polymer was found to be 2.31 ± 0.63 mg g-1 (n = 3). The limit of detection (LOD) and limit of quantification (LOQ) of the method were found to be 2.3 and 7.8 μg L-1, respectively. The calibration graph was linear in the range of 10-1000 μg L-1. Rapidity is an important advantage of the method: the re-binding and recovery of carbendazim can be completed within an hour. The same imprinted polymer can be used for the determination of carbendazim at least ten times without any loss of capacity. The proposed method has been successfully applied to determine carbendazim residues in apple and orange, where the recoveries of the spiked samples were in the range of 95.7-103%. Characterization of the adsorbent and the effects of some potential interferences were also evaluated. With the reasonably high capacity and reusability of the adsorbent, its dynamic calibration range, rapidity, simplicity, cost-effectiveness, and suitable LOD and LOQ, the proposed method is well suited for the determination of carbendazim.
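
    The quoted figures of merit follow standard calibration-graph arithmetic, sketched below with made-up concentrations and signals; LOD and LOQ are taken as 3.3 s/m and 10 s/m for blank standard deviation s and slope m.

      import numpy as np

      conc = np.array([10., 50., 100., 250., 500., 1000.])   # standards, ug/L
      signal = np.array([0.83, 4.1, 8.2, 20.6, 41.3, 82.0])  # fluorescence, a.u.
      m, b = np.polyfit(conc, signal, 1)                     # linear calibration

      s_blank = 0.06                     # SD of replicate blank measurements
      lod = 3.3 * s_blank / m            # limit of detection
      loq = 10.0 * s_blank / m           # limit of quantification
      unknown = (12.4 - b) / m           # concentration for an unknown signal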

  19. A novel dual-camera calibration method for 3D optical measurement

    NASA Astrophysics Data System (ADS)

    Gai, Shaoyan; Da, Feipeng; Dai, Xianqiang

    2018-05-01

    A novel dual-camera calibration method is presented. In classic methods, the camera parameters are usually calculated and optimized via the reprojection error. However, for a system designed for 3D optical measurement, this error does not directly reflect the quality of 3D reconstruction. In the presented method, a planar calibration plate is used. First, images of the calibration plate are captured from several orientations within the measurement range, and the initial parameters of the two cameras are obtained from these images. Then, the rotation and translation matrices that link the frames of the two cameras are calculated using the Centroid Distance Increment Matrix method, which reduces the degree of coupling between the parameters. Next, the 3D coordinates of the calibration points are reconstructed by the space intersection method. Finally, the reconstruction error is calculated and minimized to optimize the calibration parameters. This error directly indicates the quality of 3D reconstruction, so it is more suitable for assessing dual-camera calibration. The experiments show that the proposed method is convenient and accurate: there is no strict requirement on the calibration plate position during the calibration process, and the accuracy is improved significantly by the proposed method.

  20. Simultaneous planning of the project scheduling and material procurement problem under the presence of multiple suppliers

    NASA Astrophysics Data System (ADS)

    Tabrizi, Babak H.; Ghaderi, Seyed Farid

    2016-09-01

    Simultaneous planning of project scheduling and material procurement can improve project execution costs. Hence, the issue is addressed here by a mixed-integer programming model. The proposed model facilitates the procurement decisions by accounting for a number of suppliers, each offering a distinctive discount formula from which to purchase the required materials. It aims to develop schedules with the best net present value given the benefits and costs of project execution. A genetic algorithm is applied to deal with the problem, in addition to a modified version equipped with a variable neighbourhood search. The underlying factors of the solution methods are calibrated by the Taguchi method to obtain robust solutions. The performance of the aforementioned methods is compared for different problem sizes, in which the local search proved efficient. Finally, a sensitivity analysis is carried out to check the effect of inflation on the objective function value.

  1. Finding trap stiffness of optical tweezers using digital filters.

    PubMed

    Almendarez-Rangel, Pedro; Morales-Cruzado, Beatriz; Sarmiento-Gómez, Erick; Pérez-Gutiérrez, Francisco G

    2018-02-01

    Obtaining trap stiffness and calibration of the position detection system is the basis of a force measurement using optical tweezers. Both calibration quantities can be calculated using several experimental methods available in the literature. In most cases, stiffness determination and detection-system calibration are performed separately, often requiring procedures under very different conditions, so the consistency of the calibration is not assured when the environment changes between procedures. In this work, a new method to simultaneously obtain both the detection-system calibration and the trap stiffness is presented. The method is based on the calculation of the power spectral density of positions through digital filters to obtain the harmonic contributions of the position signal. This method has the advantage of calculating both the trap stiffness and the photodetector calibration factor from the same dataset in situ. It also provides a direct means of rejecting unwanted frequencies that could greatly affect the calibration procedure, such as electrical noise.
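
    For orientation, the conventional power-spectrum route that this method streamlines can be sketched as follows: estimate the position PSD, fit a Lorentzian, and convert the corner frequency to stiffness via k = 2πγf_c. The synthetic AR(1) trace and the bead and fluid parameters are illustrative assumptions, not the paper's data.

      import numpy as np
      from scipy.signal import welch, lfilter
      from scipy.optimize import curve_fit

      fs = 20000.0                                 # sampling rate, Hz
      rng = np.random.default_rng(3)
      a = np.exp(-2 * np.pi * 300.0 / fs)          # synthetic ~300 Hz corner
      x = lfilter([1.0], [1.0, -a], rng.standard_normal(200000))

      f, psd = welch(x, fs=fs, nperseg=4096)       # position power spectrum

      def lorentzian(f, D, fc):                    # trapped-bead spectrum shape
          return D / (2 * np.pi**2 * (fc**2 + f**2))

      p0 = [psd[1] * 2 * np.pi**2 * 300.0**2, 300.0]
      (D, fc), _ = curve_fit(lorentzian, f[1:], psd[1:], p0=p0)

      eta, r = 1.0e-3, 0.5e-6                      # water viscosity, bead radius
      gamma = 6 * np.pi * eta * r                  # Stokes drag coefficient
      stiffness = 2 * np.pi * gamma * fc           # trap stiffness, N/m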

  2. Radiometric calibration of the Earth observing system's imaging sensors

    NASA Technical Reports Server (NTRS)

    Slater, P. N.

    1987-01-01

    Philosophy, requirements, and methods of calibration of multispectral space sensor systems as applicable to the Earth Observing System (EOS) are discussed. Vicarious methods for calibration of low spatial resolution systems, with respect to the Advanced Very High Resolution Radiometer (AVHRR), are then summarized. Finally, a theoretical introduction is given to a new vicarious method of calibration using the ratio of diffuse-to-global irradiance at the Earth's surface as the key input. This may provide an additional independent method for in-flight calibration.

  3. Configurations and calibration methods for passive sampling techniques.

    PubMed

    Ouyang, Gangfeng; Pawliszyn, Janusz

    2007-10-19

    Passive sampling technology has developed very quickly in the past 15 years, and is widely used for the monitoring of pollutants in different environments. The design and quantification of passive sampling devices require an appropriate calibration method. Current calibration methods that exist for passive sampling, including equilibrium extraction, linear uptake, and kinetic calibration, are presented in this review. A number of state-of-the-art passive sampling devices that can be used for aqueous and air monitoring are introduced according to their calibration methods.
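
    As a worked example of the kinetic calibration mentioned above: in the pre-equilibrium regime the mass taken up follows first-order kinetics, n(t) = n_eq (1 − exp(−a t)), so a rate constant measured once (for instance via desorption of a preloaded calibrant) converts a field uptake measurement into a concentration. All numbers below are illustrative assumptions.

      import numpy as np

      a = 0.12          # 1/h, uptake rate constant from a calibration run
      K_V = 4.0         # partition coefficient x sampler phase volume, mL
      t_exposure = 8.0  # h, field deployment time
      n_sampled = 2.6   # ng, analyte mass extracted from the sampler

      n_eq = n_sampled / (1.0 - np.exp(-a * t_exposure))  # equilibrium amount
      concentration = n_eq / K_V                          # ng/mL in the medium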

  4. Design of system calibration for effective imaging

    NASA Astrophysics Data System (ADS)

    Varaprasad Babu, G.; Rao, K. M. M.

    2006-12-01

    A CCD-based characterization setup comprising a light source, a CCD linear array, electronics for signal conditioning/amplification, and a PC interface has been developed to generate images at varying densities and at multiple view angles. This arrangement is used to simulate and evaluate images by the super-resolution technique with multiple overlaps and yaw-rotated images at different view angles. The setup also generates images at different densities to analyze the response of the detector port-wise separately. The light intensity produced by the source needs to be calibrated for proper imaging by the highly sensitive CCD detector over the FOV. One approach is to design a complex integrating-sphere arrangement, which is costly for such applications. Another approach is to provide a suitable intensity feedback correction wherein the current through the lamp is controlled in a closed loop; this method is generally used in applications where the light source is a point source. The third method, applicable when the lamp intensity cannot be controlled, is to vary the exposure time inversely with the lamp variations: the light intensity at the start of each line is sampled and a correction factor is applied to the full line. The fourth method is to provide correction through a look-up table, where the responses of all the detectors are normalized through a digital transfer function. The fifth method is a light-line arrangement in which light from a single source is delivered through multiple fiber-optic cables arranged in a line; this is generally applicable and economical for low-width cases. In our application, a new method is used wherein an inverse multi-density filter is designed, which provides an effective calibration for the full swath even at low light intensities: the light intensity along the length is measured, an inverse density is computed, and a correction filter is generated and implemented in the CCD-based characterization setup. This paper describes these novel techniques for the design and implementation of system calibration for effective imaging, to produce better-quality data products, especially when handling high-resolution data.
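
    The look-up-table approach (the fourth method above) reduces to a per-pixel gain table built from a flat-field measurement; a minimal sketch with made-up detector counts:

      import numpy as np

      flat_field = np.array([980., 1010., 1042., 995., 961.])  # flat-field counts
      gain_lut = flat_field.mean() / flat_field                # normalizing gains

      raw_line = np.array([512., 531., 546., 519., 498.])      # one CCD line
      corrected = raw_line * gain_lut                          # LUT-corrected output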

  5. Application of composite small calibration objects in traffic accident scene photogrammetry.

    PubMed

    Chen, Qiang; Xu, Hongguo; Tan, Lidong

    2015-01-01

    In order to address the difficulty of arranging large calibration objects and the low measurement accuracy of small calibration objects in traffic accident scene photogrammetry, a photogrammetric method based on a composite of small calibration objects is proposed. Several small calibration objects are placed around the traffic accident scene, and the coordinate system of the composite calibration object is given based on one of them. By maintaining the relative position and coplanar relationship of the small calibration objects, the local coordinate system of each small calibration object is transformed into the coordinate system of the composite calibration object. The two-dimensional direct linear transformation method is improved based on minimizing the reprojection error of the calibration points of all objects. A rectified image is obtained using the nonlinear optimization method. The increased accuracy of traffic accident scene photogrammetry using a composite small calibration object is demonstrated through the analysis of field experiments and case studies.
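
    The two-dimensional direct linear transformation at the core of this method maps plane points to image points through a homography estimated from correspondences, and the improvement minimizes the reprojection error over the calibration points of all objects. A generic sketch of the DLT step and of the error being minimized (mock points, not the paper's data):

      import numpy as np

      def fit_homography(world, image):
          """DLT for a plane: world, image are (N, 2) arrays, N >= 4."""
          rows = []
          for (X, Y), (u, v) in zip(world, image):
              rows.append([-X, -Y, -1, 0, 0, 0, u * X, u * Y, u])
              rows.append([0, 0, 0, -X, -Y, -1, v * X, v * Y, v])
          _, _, Vt = np.linalg.svd(np.asarray(rows))
          return Vt[-1].reshape(3, 3)       # null-space solution of A h = 0

      def reprojection_rmse(H, world, image):
          homog = np.column_stack([world, np.ones(len(world))]) @ H.T
          projected = homog[:, :2] / homog[:, 2:3]
          return np.sqrt(np.mean(np.sum((projected - image) ** 2, axis=1)))

      rng = np.random.default_rng(5)
      world = np.array([[0, 0], [1, 0], [1, 1], [0, 1], [0.5, 0.5]], float)
      image = world * 100 + 20 + rng.normal(0, 0.3, world.shape)  # mock pixels
      H = fit_homography(world, image)
      err = reprojection_rmse(H, world, image)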

  6. Signal inference with unknown response: calibration-uncertainty renormalized estimator.

    PubMed

    Dorn, Sebastian; Enßlin, Torsten A; Greiner, Maksim; Selig, Marco; Boehm, Vanessa

    2015-01-01

    The calibration of a measurement device is crucial for every scientific experiment where a signal has to be inferred from data. We present CURE, the calibration-uncertainty renormalized estimator, to reconstruct a signal and simultaneously the instrument's calibration from the same data without knowing the exact calibration, but only its covariance structure. The idea of the CURE method, developed in the framework of information field theory, is to start with an assumed calibration and successively include more and more portions of calibration uncertainty into the signal inference equations, absorbing the resulting corrections into renormalized signal (and calibration) solutions. Thereby, the signal inference and calibration problem turns into a problem of solving a single system of ordinary differential equations and can be identified with common resummation techniques used in field theories. We verify the CURE method by applying it to a simplistic toy example and compare it against existing self-calibration schemes, Wiener filter solutions, and Markov chain Monte Carlo sampling. We conclude that the method is able to keep up in accuracy with the best self-calibration methods and serves as a noniterative alternative to them.

  7. Dose calibrator linearity test: 99mTc versus 18F radioisotopes*

    PubMed Central

    Willegaignon, José; Sapienza, Marcelo Tatit; Coura-Filho, George Barberio; Garcez, Alexandre Teles; Alves, Carlos Eduardo Gonzalez Ribeiro; Cardona, Marissa Anabel Rivera; Gutterres, Ricardo Fraga; Buchpiguel, Carlos Alberto

    2015-01-01

    Objective The present study was aimed at evaluating the viability of replacing 18F with 99mTc in dose calibrator linearity testing. Materials and Methods The test was performed with sources of 99mTc (62 GBq) and 18F (12 GBq) whose activities were measured down to values lower than 1 MBq. Ratios and deviations between experimental and theoretical 99mTc and 18F source activities were calculated and subsequently compared. Results Mean deviations between experimental and theoretical source activities were 0.56 (± 1.79)% for 99mTc and 0.92 (± 1.19)% for 18F. The mean ratio between the activities indicated by the device for the 99mTc source when measured with the equipment calibrated for 99mTc versus 18F was 3.42 (± 0.06), and for the 18F source this ratio was 3.39 (± 0.05); both values remained constant over the measurement time. Conclusion The results of the linearity test using 99mTc were compatible with those obtained with the 18F source, indicating the viability of using either radioisotope in dose calibrator linearity testing. This information, together with the high potential for radiation exposure and the costs involved in 18F acquisition, suggests 99mTc as the radioisotope of choice for dose calibrator linearity tests in centers that use 18F, without any detriment to the procedure or to the quality of the nuclear medicine service. PMID:25798005
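
    The arithmetic behind such a decay-based linearity test is simple enough to show directly: measured activities are compared against A(t) = A0 exp(−ln 2 · t / T_half) and reported as percent deviations (half-lives: 99mTc 6.01 h, 18F 109.8 min). The measured values below are illustrative, not the study's data.

      import numpy as np

      t_half = 6.01                                    # h, Tc-99m half-life
      t = np.array([0., 6., 12., 24., 48., 72.])       # elapsed time, h
      measured = np.array([62.0, 31.2, 15.4, 3.95, 0.246, 0.0155])  # GBq

      theoretical = measured[0] * np.exp(-np.log(2) * t / t_half)
      deviation_pct = 100.0 * (measured - theoretical) / theoretical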

  8. Flume and field-based calibration of surrogate sensors for monitoring bedload transport

    NASA Astrophysics Data System (ADS)

    Mao, L.; Carrillo, R.; Escauriaza, C.; Iroume, A.

    2016-01-01

    Bedload transport assessment is important for geomorphological, engineering, and ecological studies of gravel-bed rivers. Bedload can be monitored at experimental stations, which require expensive maintenance, or by using portable traps, which allow measuring instantaneous transport rates but only at a single point and at high cost and operational risk. The need for continuously measuring bedload intensity and dynamics has therefore increased the use and enhancement of surrogate methods. This paper reports on a set of flume experiments in which a Japanese acoustic pipe and an impact plate have been tested using four well-sorted and three poorly sorted sediment mixtures. Additional data were collected in a glacierized high-gradient Andean stream (Estero Morales) using a portable Bunte-type bedload sampler. Results show that the data provided by the acoustic pipe (which is amplified on 6 channels having different gains) can be calibrated for the grain size and the intensity of transported sediments coarser than 9 mm (R2 = 0.93 and 0.88, respectively). Even though the flume-based calibration is very robust, upscaling it to field applications is more challenging, and bedload intensity could be predicted better than the grain size of transported sediments (R2 = 0.61 and 0.43, respectively). The inexpensive impact plate equipped with an accelerometer could be calibrated for bedload intensity quite well in the flume but only poorly in the field (R2 = 0.16), and it could not provide information on the size of transported sediments.

  9. High-precision and low-cost vibration generator for low-frequency calibration system

    NASA Astrophysics Data System (ADS)

    Li, Rui-Jun; Lei, Ying-Jun; Zhang, Lian-Sheng; Chang, Zhen-Xin; Fan, Kuang-Chao; Cheng, Zhen-Ying; Hu, Peng-Hao

    2018-03-01

    Low-frequency vibration is one of the harmful factors that affect the accuracy of micro-/nano-measuring machines because its amplitude is very small and it is difficult to avoid. In this paper, a low-cost and high-precision vibration generator was developed to calibrate a self-designed optical accelerometer for detecting low-frequency vibration. A piezoelectric actuator is used as the vibration exciter, a leaf spring made of beryllium copper is used as the elastic component, and a high-resolution, low-thermal-drift eddy current sensor is applied to investigate the vibrator's performance. Experimental results demonstrate that the vibration generator can achieve steady output displacement over a frequency range from 0.6 Hz to 50 Hz, with an analytical displacement resolution of 3.1 nm and an acceleration range from 3.72 mm s-2 to 1935.41 mm s-2 with a relative standard deviation of less than 1.79%. The effectiveness of the high-precision and low-cost vibration generator was verified by calibrating our optical accelerometer.

  10. Structured light system calibration method with optimal fringe angle.

    PubMed

    Li, Beiwen; Zhang, Song

    2014-11-20

    For structured light system calibration, one popular approach is to treat the projector as an inverse camera. This is usually performed by projecting horizontal and vertical sequences of patterns to establish a one-to-one mapping between camera points and projector points. However, for a well-designed system, either the horizontal or the vertical fringe images are not sensitive to depth variation and thus yield inaccurate mapping. As a result, the calibration accuracy is jeopardized if a conventional calibration method is used. To address this limitation, this paper proposes a novel calibration method based on optimal fringe angle determination. Experiments demonstrate that our calibration approach can increase the measurement accuracy by up to 38% compared to the conventional calibration method, for a calibration volume of 300(H)  mm×250(W)  mm×500(D)  mm.

  11. Simultaneous calibration phantom commission and geometry calibration in cone beam CT

    NASA Astrophysics Data System (ADS)

    Xu, Yuan; Yang, Shuai; Ma, Jianhui; Li, Bin; Wu, Shuyu; Qi, Hongliang; Zhou, Linghong

    2017-09-01

    Geometry calibration is a vital step for describing the geometry of a cone beam computed tomography (CBCT) system and is a prerequisite for CBCT reconstruction. In current methods, calibration-phantom commissioning and geometry calibration are treated as two independent tasks, and small errors in ball-bearing (BB) positioning in the phantom-making step severely degrade the quality of the calibration. To solve this problem, we propose an integrated method that simultaneously realizes geometry-phantom commissioning and geometry calibration. Instead of assuming the accuracy of the geometry phantom, the integrated method treats the BB centers in the phantom as parameters to be optimized. Specifically, an evaluation phantom and a corresponding evaluation contrast index are used to assess geometry artifacts while optimizing the BB coordinates in the geometry phantom. Using particle swarm optimization, the CBCT geometry and the BB coordinates in the geometry phantom are calibrated accurately and can then be used directly for subsequent geometry calibration tasks in other CBCT systems. To evaluate the proposed method, both qualitative and quantitative studies were performed on simulated and real CBCT data. The spatial resolution of images reconstructed with dental CBCT can reach up to 15 line pairs cm-1. The proposed method also outperformed the Wiesent method in experiments. This paper shows that the proposed method is attractive for simultaneous and accurate geometry-phantom commissioning and geometry calibration.
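
    The optimizer used here is generic; a minimal particle swarm loop of the kind that could refine the BB coordinates is sketched below, with a stand-in objective in place of the paper's evaluation contrast index.

      import numpy as np

      def artifact_index(x):             # stand-in for the contrast index
          return np.sum((x - 0.2) ** 2)

      rng = np.random.default_rng(1)
      n_particles, dim = 30, 6           # e.g. 3D offsets of two BB centers
      w, c1, c2 = 0.7, 1.5, 1.5          # inertia and acceleration weights
      pos = rng.uniform(-1.0, 1.0, (n_particles, dim))
      vel = np.zeros_like(pos)
      pbest = pos.copy()
      pbest_val = np.array([artifact_index(p) for p in pos])
      gbest = pbest[np.argmin(pbest_val)]

      for _ in range(200):
          r1, r2 = rng.random((n_particles, dim)), rng.random((n_particles, dim))
          vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
          pos = pos + vel
          val = np.array([artifact_index(p) for p in pos])
          better = val < pbest_val
          pbest[better], pbest_val[better] = pos[better], val[better]
          gbest = pbest[np.argmin(pbest_val)]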

  12. Simplified method for creating a density-absorbed dose calibration curve for the low dose range from Gafchromic EBT3 film

    PubMed Central

    Gotanda, Tatsuhiro; Katsuda, Toshizo; Gotanda, Rumi; Kuwano, Tadao; Akagawa, Takuya; Tanki, Nobuyoshi; Tabuchi, Akihiko; Shimono, Tetsunori; Kawaji, Yasuyuki

    2016-01-01

    Radiochromic film dosimeters have a disadvantage in comparison with an ionization chamber in that the dosimetry process is time-consuming for creating a density-absorbed dose calibration curve. The purpose of this study was the development of a simplified method of creating a density-absorbed dose calibration curve from radiochromic film within a short time. This simplified method was performed using Gafchromic EBT3 film with a low energy dependence and step-shaped Al filter. The simplified method was compared with the standard method. The density-absorbed dose calibration curves created using the simplified and standard methods exhibited approximately similar straight lines, and the gradients of the density-absorbed dose calibration curves were −32.336 and −33.746, respectively. The simplified method can obtain calibration curves within a much shorter time compared to the standard method. It is considered that the simplified method for EBT3 film offers a more time-efficient means of determining the density-absorbed dose calibration curve within a low absorbed dose range such as the diagnostic range. PMID:28144120
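
    To make the workflow concrete: each step of the Al filter delivers a known dose in a single exposure, a net optical density is computed from the scanned film, and the calibration curve is a straight-line fit. The pixel values and doses below are invented for illustration.

      import numpy as np

      dose_mGy = np.array([0., 2., 5., 10., 20.])              # dose per filter step
      pv = np.array([41200., 40150., 38630., 36240., 31950.])  # scanned pixel values
      pv_blank = 41800.0                                       # unexposed film

      net_od = np.log10(pv_blank / pv)                    # net optical density
      slope, intercept = np.polyfit(net_od, dose_mGy, 1)  # calibration curve
      dose_sample = slope * np.log10(pv_blank / 37500.0) + intercept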

  13. A Novel Miniature Wide-band Radiometer for Space Applications

    NASA Astrophysics Data System (ADS)

    Sykulska-Lawrence, H. M.

    2016-12-01

    Design, development, and testing of a novel miniaturised infrared radiometer are described. The instrument opens up new possibilities in planetary science of deployment on smaller platforms - such as unmanned aerial vehicles and microprobes - to enable study of a planet's radiation balance, as well as terrestrial volcano plumes and trace gases in planetary atmospheres, using low-cost long-term observations. Thus a key enabling development is that of miniaturised, low-power and well-calibrated instrumentation. The talk reports advances in miniature technology to perform high-accuracy visible/IR remote sensing measurements. The infrared radiometer is akin to those widely used for remote sensing for earth and space applications, which are currently either large instruments on orbiting platforms or medium-sized payloads on balloons. We use MEMS microfabrication techniques to shrink a conventional design, while combining the calibration benefits of large (>1 kg) type radiometers with the flexibility and portability of a <10 g device. The instrument measures broadband (0.2 to 100 µm) upward and downward radiation fluxes, showing improvements in calibration stability and accuracy, with built-in calibration capability, incorporating traceability to temperature standards such as ITS-90. The miniature instrument described here was derived from a concept developed for a European Space Agency study, Dalomis (Proc. of 'i-SAIRAS 2005', Munich, 2005), which involved dropping multiple probes into the atmosphere of Venus from a balloon to sample numerous parts of the complex weather systems on the planet. Data from such an in-situ instrument would complement information from a satellite remote sensing instrument or balloon radiosonde. Moreover, the addition of an internal calibration standard facilitates comparisons between datasets. One of the main challenges for a reduced-size device is calibration. We use an in-situ method whereby a blackbody source is integrated within the device and a micromirror switches the input to the detector between the measured signal and the calibration target. Achieving two well-calibrated radiometer channels within a small (<10 g) payload is made possible by using modern micromachining techniques.

  14. Modeling technical change in climate analysis: evidence from agricultural crop damages.

    PubMed

    Ahmed, Adeel; Devadason, Evelyn S; Al-Amin, Abul Quasem

    2017-05-01

    This study accounts for Hicks-neutral technical change in a calibrated model of climate analysis, to identify the optimum level of technical change for addressing climate change. It demonstrates the reduction in crop damages, the costs of technical change, and the net gains from the adoption of technical change for the climate-sensitive Pakistan economy. The calibrated model assesses the net gains of technical change for the overall economy and at the agriculture-specific level. The study finds that the gains of technical change are overwhelmingly higher than the costs across the agriculture subsectors. The gains and costs following technical change differ substantially for different crops. More importantly, the study finds a cost-effective optimal level of technical change that potentially reduces crop damages to the minimum possible level. The study therefore contends that climate policy for Pakistan should consider the role of technical change in addressing climate impacts on the agriculture sector.

  15. Low-Cost Air Quality Monitoring Tools: From Research to Practice (A Workshop Summary)

    PubMed Central

    Griswold, William G.; RS, Abhijit; Johnston, Jill E.; Herting, Megan M.; Thorson, Jacob; Collier-Oxandale, Ashley; Hannigan, Michael

    2017-01-01

    In May 2017, a two-day workshop was held in Los Angeles (California, U.S.A.) to gather practitioners who work with low-cost sensors used to make air quality measurements. The community of practice included individuals from academia, industry, non-profit groups, community-based organizations, and regulatory agencies. The group gathered to share knowledge developed from a variety of pilot projects in hopes of advancing the collective knowledge about how best to use low-cost air quality sensors. Panel discussion topics included: (1) best practices for deployment and calibration of low-cost sensor systems, (2) data standardization efforts and database design, (3) advances in sensor calibration, data management, and data analysis and visualization, and (4) lessons learned from research/community partnerships to encourage purposeful use of sensors and create change/action. Panel discussions summarized knowledge advances and project successes while also highlighting the questions, unresolved issues, and technological limitations that still remain within the low-cost air quality sensor arena. PMID:29143775

  16. A new systematic calibration method of ring laser gyroscope inertial navigation system

    NASA Astrophysics Data System (ADS)

    Wei, Guo; Gao, Chunfeng; Wang, Qi; Wang, Qun; Xiong, Zhenyu; Long, Xingwu

    2016-10-01

    The inertial navigation system (INS) has been the core component of both military and civil navigation systems. Before an INS is put into application, it must be calibrated in the laboratory to compensate for repeatability errors caused by manufacturing. The discrete calibration method cannot fulfill the requirements of high-accuracy calibration of a mechanically dithered ring laser gyroscope navigation system with shock absorbers. This paper analyzes the theory of error excitation and separation in detail and presents a new systematic calibration method for the ring laser gyroscope inertial navigation system. Error models and equations of the calibrated inertial measurement unit are given. Proper rotation arrangement orders are then designed to establish linear relationships between the changes in velocity errors and the calibrated parameter errors. Experiments were set up to compare the systematic errors calculated from the filtering calibration results with those obtained from the discrete calibration results. The largest position error and velocity error of the filtering calibration are only 0.18 miles and 0.26 m/s, compared with 2 miles and 1.46 m/s for the discrete calibration. These results validate the new systematic calibration method and prove its importance for the optimal design and accuracy improvement of the calibration of mechanically dithered ring laser gyroscope inertial navigation systems.

  17. Calibrating the stress-time curve of a combined finite-discrete element method to a Split Hopkinson Pressure Bar experiment

    DOE PAGES

    Osthus, Dave; Godinez, Humberto C.; Rougier, Esteban; ...

    2018-05-01

    We present a generic method for automatically calibrating a computer code to an experiment, with uncertainty, for a given “training” set of computer code runs. The calibration technique is general and probabilistic, meaning the calibration uncertainty is represented in the form of a probability distribution. We demonstrate the calibration method by calibrating a combined Finite-Discrete Element Method (FDEM) to a Split Hopkinson Pressure Bar (SHPB) experiment with a granite sample. The probabilistic calibration method combines runs of an FDEM computer simulation for a range of “training” settings and experimental uncertainty to develop a statistical emulator. The process allows for calibration of input parameters and produces output quantities with uncertainty estimates for settings where simulation results are desired. Input calibration and FDEM fitted results are presented. We find that the maximum shear strength σ_t^max and, to a lesser extent, the maximum tensile strength σ_n^max govern the behavior of the stress-time curve before and around the peak, while the specific energy in Mode II (shear) E_t largely governs the post-peak behavior of the stress-time curve. Good agreement is found between the calibrated FDEM and the SHPB experiment. Interestingly, we find the SHPB experiment to be rather uninformative for calibrating the softening-curve shape parameters (a, b, and c). This work stands as a successful demonstration of how a general probabilistic calibration framework can automatically calibrate FDEM parameters to an experiment.

  18. Robust, low-cost data loggers for stream temperature, flow intermittency, and relative conductivity monitoring

    USGS Publications Warehouse

    Chapin, Thomas; Todd, Andrew S.; Zeigler, Matthew P.

    2014-01-01

    Water temperature and streamflow intermittency are critical parameters influencing aquatic ecosystem health. Low-cost temperature loggers have made continuous water temperature monitoring relatively simple, but determining streamflow timing and intermittency using temperature data alone requires significant and subjective data interpretation. Electrical resistance (ER) sensors have recently been developed to overcome the major limitations of temperature-based methods for the assessment of streamflow intermittency. This technical note introduces the STIC (Stream Temperature, Intermittency, and Conductivity logger), a robust, low-cost, simple-to-build instrument that provides long-duration, high-resolution monitoring of both relative conductivity (RC) and temperature. Simultaneously collected temperature and RC data provide unambiguous water temperature and streamflow intermittency information that is crucial for monitoring aquatic ecosystem health and assessing regulatory compliance. With proper calibration, the STIC relative conductivity data can be used to monitor specific conductivity.

  1. Shortwave Radiometer Calibration Methods Comparison and Resulting Solar Irradiance Measurement Differences: A User Perspective

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Habte, Aron; Sengupta, Manajit; Andreas, Afshin

    Banks financing solar energy projects require assurance that these systems will produce the energy predicted. Furthermore, utility planners and grid system operators need to understand the impact of the variable solar resource on solar energy conversion system performance. Accurate solar radiation data sets reduce the expense associated with mitigating performance risk and assist in understanding the impacts of solar resource variability. The accuracy of solar radiation measured by radiometers depends on the instrument performance specification, installation method, calibration procedure, measurement conditions, maintenance practices, location, and environmental conditions. This study addresses the effect of different calibration methods provided by radiometric calibration service providers, such as NREL and manufacturers of radiometers, on the resulting calibration responsivity. Some of these radiometers are calibrated indoors and some outdoors. To establish or understand the differences in calibration methodology, we processed and analyzed field-measured data from these radiometers. This study investigates calibration responsivities provided by NREL's broadband outdoor radiometer calibration (BORCAL) and a few prominent manufacturers. The BORCAL method provides the outdoor calibration responsivity of pyranometers and pyrheliometers at 45 degree solar zenith angle, and as a function of solar zenith angle determined by clear-sky comparisons with reference irradiance. The BORCAL method also employs a thermal offset correction to the calibration responsivity of single-black thermopile detectors used in pyranometers. Indoor calibrations of radiometers by their manufacturers are performed using a stable artificial light source in a side-by-side comparison between the test radiometer under calibration and a reference radiometer of the same type. In both methods, the reference radiometer calibrations are traceable to the World Radiometric Reference (WRR). These different methods of calibration demonstrated +1% to +2% differences in solar irradiance measurement. Analyzing these differences will ultimately help determine the uncertainty of the field radiometer data and guide the development of a consensus standard for calibration. Further advancing procedures for precisely calibrating radiometers to world reference standards that reduce measurement uncertainty will allow more accurate prediction of solar output and improve the bankability of solar projects.
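
    A calibration responsivity is the conversion factor this comparison is about, and the arithmetic is a one-liner, shown here with illustrative numbers; a 1% to 2% difference in responsivity propagates one-to-one into the reported irradiance.

      # Calibration: compare the radiometer signal with a reference irradiance.
      reference_irradiance = 850.0           # W/m^2, from reference instruments
      signal_uV = 7225.0                     # test radiometer output, microvolts
      responsivity = signal_uV / reference_irradiance   # uV per W/m^2

      # Field use: convert measured microvolts back to irradiance.
      field_signal_uV = 5100.0
      irradiance = field_signal_uV / responsivity       # W/m^2

      # A 1% higher responsivity yields a 1% lower reported irradiance.
      irradiance_biased = field_signal_uV / (1.01 * responsivity)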

  2. Monitoring high-intensity focused ultrasound (HIFU) therapy using radio frequency ultrasound backscatter to quantify heating

    NASA Astrophysics Data System (ADS)

    Kaczkowski, Peter J.; Anand, Ajay

    2005-09-01

    The spatial distribution and temporal history of tissue temperature are essential indicators of thermal therapy progress, and of treatment safety and efficacy. Magnetic resonance methods provide the gold-standard noninvasive measurement of temperature but are costly and cumbersome compared to the therapy itself. We have been developing the use of ultrasound backscattering for real-time temperature estimation; ultrasonic methods have been limited to relatively low temperature rises, primarily due to lack of sensitivity at protein denaturation temperatures (50-70°C). Through validation experiments on gel phantoms and ex vivo tissue we show that temperature rise can be accurately mapped throughout the therapeutic temperature range using a new BioHeat Transfer Equation (BHTE) model-constrained inverse approach. Speckle-free temperature and thermal dose maps are generated using the ultrasound-calibrated model over the imaged region throughout therapy delivery and post-treatment cooling periods. Results of turkey breast tissue experiments are presented for static HIFU exposures, in which the ultrasound-calibrated BHTE temperature maps are shown by independent thermocouple measurements to be very accurate (within a degree). This new temperature monitoring method may speed clinical adoption of ultrasound-guided HIFU therapy. [Work supported by Army MRMC.]

  3. Calibration Method to Eliminate Zeroth Order Effect in Lateral Shearing Interferometry

    NASA Astrophysics Data System (ADS)

    Fang, Chao; Xiang, Yang; Qi, Keqi; Chen, Dawei

    2018-04-01

    In this paper, a calibration method is proposed that eliminates the zeroth order effect in lateral shearing interferometry. An analytical expression of the calibration error function is deduced, and the relationship between the phase-restoration error and the calibration error is established. The analytical results show that the phase-restoration error introduced by the calibration error is proportional to the phase-shifting error and the zeroth order effect. The calibration method is verified using simulations and experiments. The simulation results show that the phase-restoration error is approximately proportional to the phase-shifting error and the zeroth order effect when the phase-shifting error is less than 2° and the zeroth order effect is less than 0.2. The experimental results show that, compared with the conventional method with 9-frame interferograms, the calibration method with 5-frame interferograms achieves nearly the same restoration accuracy.
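
    For context on what a '5-frame interferogram' method operates on, the widely used five-frame (Hariharan) phase-shifting formula is sketched below on synthetic fringes; this is the generic algorithm, not necessarily the authors' exact scheme.

      import numpy as np

      x = np.linspace(0, 4 * np.pi, 256)
      true_phase = 2.0 * np.outer(np.sin(x / 4), np.cos(x / 4))  # synthetic wavefront

      # Five frames at nominal phase shifts of -pi, -pi/2, 0, +pi/2, +pi.
      I1, I2, I3, I4, I5 = (1.0 + 0.8 * np.cos(true_phase + k * np.pi / 2)
                            for k in range(-2, 3))

      # Hariharan five-frame formula; errors in the actual phase steps are
      # what a calibration of this kind must correct for.
      wrapped_phase = np.arctan2(2.0 * (I2 - I4), 2.0 * I3 - I1 - I5)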

  4. Potential and Limitations of Low-Cost Unmanned Aerial Systems for Monitoring Altitudinal Vegetation Phenology in the Tropics

    NASA Astrophysics Data System (ADS)

    Silva, T. S. F.; Torres, R. S.; Morellato, P.

    2017-12-01

    Vegetation phenology is a key component of ecosystem function and biogeochemical cycling, and is highly susceptible to climatic change. Phenological knowledge in the tropics is limited by lack of monitoring, traditionally done by laborious direct observation. Ground-based digital cameras can automate daily observations, but offer limited spatial coverage. Imaging by low-cost Unmanned Aerial Systems (UAS) combines the fine resolution of ground-based methods with an unprecedented capability for spatial coverage, but challenges remain in producing color-consistent multitemporal images. We evaluated the applicability of multitemporal UAS imaging to monitor phenology in tropical altitudinal grasslands and forests, answering: 1) Can very-high-resolution aerial photography from conventional digital cameras be used to reliably monitor vegetative and reproductive phenology? 2) How is UAS monitoring affected by changes in illumination and by sensor physical limitations? We flew imaging missions monthly from Feb-16 to Feb-17, using a UAS equipped with an RGB Canon SX260 camera. Flights were carried out between 10am and 4pm, at 120-150 m a.g.l., yielding 5-10 cm spatial resolution. To compensate for illumination changes caused by time of day, season, and cloud cover, calibration was attempted using reference targets and empirical models, as well as color space transformations. For vegetative phenological monitoring, the multitemporal response was severely affected by changes in illumination conditions, strongly confounding the phenological signal. These variations could not be adequately corrected through calibration due to sensor limitations. For reproductive phenology, the very high resolution of the acquired imagery allowed discrimination of individual reproductive structures for some species, and their stark colorimetric differences from vegetative structures allowed detection of reproductive timing in the HSV color space, despite illumination effects. We conclude that reliable vegetative phenology monitoring may exceed the capabilities of consumer cameras, but reproductive phenology can be successfully monitored for species with conspicuous reproductive structures. Further research is being conducted to improve calibration methods and information extraction through machine learning.

  5. A New Method for Non-destructive Measurement of Biomass, Growth Rates, Vertical Biomass Distribution and Dry Matter Content Based on Digital Image Analysis

    PubMed Central

    Tackenberg, Oliver

    2007-01-01

    Background and Aims Biomass is an important trait in functional ecology and growth analysis. The typical methods for measuring biomass are destructive. Thus, they do not allow the development of individual plants to be followed and they require many individuals to be cultivated for repeated measurements. Non-destructive methods do not have these limitations. Here, a non-destructive method based on digital image analysis is presented, addressing not only above-ground fresh biomass (FBM) and oven-dried biomass (DBM), but also vertical biomass distribution as well as dry matter content (DMC) and growth rates. Methods Scaled digital images of the plants silhouettes were taken for 582 individuals of 27 grass species (Poaceae). Above-ground biomass and DMC were measured using destructive methods. With image analysis software Zeiss KS 300, the projected area and the proportion of greenish pixels were calculated, and generalized linear models (GLMs) were developed with destructively measured parameters as dependent variables and parameters derived from image analysis as independent variables. A bootstrap analysis was performed to assess the number of individuals required for re-calibration of the models. Key Results The results of the developed models showed no systematic errors compared with traditionally measured values and explained most of their variance (R2 ≥ 0·85 for all models). The presented models can be directly applied to herbaceous grasses without further calibration. Applying the models to other growth forms might require a re-calibration which can be based on only 10–20 individuals for FBM or DMC and on 40–50 individuals for DBM. Conclusions The methods presented are time and cost effective compared with traditional methods, especially if development or growth rates are to be measured repeatedly. Hence, they offer an alternative way of determining biomass, especially as they are non-destructive and address not only FBM and DBM, but also vertical biomass distribution and DMC. PMID:17353204

  6. 4D ERT-based calibration and prediction of biostimulant induced changes in fluid conductivity

    NASA Astrophysics Data System (ADS)

    Johnson, T. C.; Versteeg, R. J.; Day-Lewis, F. D.; Major, W. R.; Wright, K. E.

    2008-12-01

    In-situ bioremediation is an emerging and cost-effective method of removing organic contaminants from groundwater. The performance of bioremedial systems depends on the adequate delivery and distribution of biostimulants to contaminated zones. Monitoring the distribution of biostimulants using monitoring wells is expensive, time consuming, and provides inadequate information between sampling wells. We discuss a Hydrogeophysical Performance Monitoring System (HPMS) deployed to monitor bioremediation efforts at a TCE-contaminated Superfund site in Brandywine MD. The HPMS enables autonomous electrical geophysical data acquisition, processing, quality-assurance/quality-control, and inversion. Our objective is to demonstrate the feasibility and cost effectiveness of the HPMS to provide near real-time information on the spatiotemporal behavior of injected biostimulants. As a first step, we use time-lapse electrical resistivity tomography (ERT) to estimate changes in bulk conductivity caused by the injectate. We demonstrate how ERT-based bulk conductivity estimates can be calibrated with a small number of fluid conductivity measurements to produce ERT-based estimates of fluid conductivity. The calibration procedure addresses the spatially variable resolution of the ERT tomograms. To test the validity of these estimates, we used the ERT results to predict the fluid conductivity at tens of points prior to field sampling of fluid conductivity at the same points. The comparison of ERT-predicted vs. observed fluid conductivity displays a high degree of correlation (correlation coefficient over 0.8), and demonstrates the ability of the HPMS to estimate the four-dimensional (4D) distribution of fluid conductivity caused by the biostimulant injection.

  7. CNV-ROC: A cost effective, computer-aided analytical performance evaluator of chromosomal microarrays

    PubMed Central

    Goodman, Corey W.; Major, Heather J.; Walls, William D.; Sheffield, Val C.; Casavant, Thomas L.; Darbro, Benjamin W.

    2016-01-01

    Chromosomal microarrays (CMAs) are routinely used in both research and clinical laboratories; yet, little attention has been given to the estimation of genome-wide true and false negatives during the assessment of these assays and how such information could be used to calibrate various algorithmic metrics to improve performance. Low-throughput, locus-specific methods such as fluorescence in situ hybridization (FISH), quantitative PCR (qPCR), or multiplex ligation-dependent probe amplification (MLPA) preclude rigorous calibration of various metrics used by copy number variant (CNV) detection algorithms. To aid this task, we have established a comparative methodology, CNV-ROC, which is capable of performing a high throughput, low cost, analysis of CMAs that takes into consideration genome-wide true and false negatives. CNV-ROC uses a higher resolution microarray to confirm calls from a lower resolution microarray and provides for a true measure of genome-wide performance metrics at the resolution offered by microarray testing. CNV-ROC also provides for a very precise comparison of CNV calls between two microarray platforms without the need to establish an arbitrary degree of overlap. Comparison of CNVs across microarrays is done on a per-probe basis and receiver operator characteristic (ROC) analysis is used to calibrate algorithmic metrics, such as log2 ratio threshold, to enhance CNV calling performance. CNV-ROC addresses a critical and consistently overlooked aspect of analytical assessments of genome-wide techniques like CMAs which is the measurement and use of genome-wide true and false negative data for the calculation of performance metrics and comparison of CNV profiles between different microarray experiments. PMID:25595567
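
    The threshold-calibration step the abstract describes can be sketched with simulated per-probe data: sweep the ROC curve of a score such as the absolute log2 ratio against truth labels derived from the higher-resolution array, and pick the cutoff that maximizes Youden's J.

      import numpy as np
      from sklearn.metrics import roc_curve

      rng = np.random.default_rng(7)
      truth = rng.integers(0, 2, 5000)               # per-probe CNV truth labels
      score = np.abs(rng.normal(0.10 + 0.40 * truth, 0.25))  # |log2 ratio| per probe

      fpr, tpr, thresholds = roc_curve(truth, score)
      j = tpr - fpr                                  # Youden's J at each cutoff
      calibrated_threshold = thresholds[np.argmax(j)]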

  8. Theoretical foundation, methods, and criteria for calibrating human vibration models using frequency response functions

    PubMed Central

    Dong, Ren G.; Welcome, Daniel E.; McDowell, Thomas W.; Wu, John Z.

    2015-01-01

    While simulations of the measured biodynamic responses of the whole human body or body segments to vibration are conventionally interpreted as summaries of biodynamic measurements, and the resulting models are considered quantitative, this study looked at these simulations from a different angle: model calibration. The specific aims of this study are to review and clarify the theoretical basis for model calibration, to help formulate the criteria for calibration validation, and to help appropriately select and apply calibration methods. In addition to established vibration theory, a novel theorem of mechanical vibration is also used to enhance the understanding of the mathematical and physical principles of the calibration. Based on this enhanced understanding, a set of criteria was proposed and used to systematically examine the calibration methods. Besides theoretical analyses, a numerical testing method is also used in the examination. This study identified the basic requirements for each calibration method to obtain a unique calibration solution. This study also confirmed that the solution becomes more robust if more than sufficient calibration references are provided. Practically, however, as more references are used, more inconsistencies can arise among the measured data for representing the biodynamic properties. To help account for the relative reliabilities of the references, a baseline weighting scheme is proposed. The analyses suggest that the best choice of calibration method depends on the modeling purpose, the model structure, and the availability and reliability of representative reference data. PMID:26740726

  9. Predicting complications of percutaneous coronary intervention using a novel support vector method

    PubMed Central

    Lee, Gyemin; Gurm, Hitinder S; Syed, Zeeshan

    2013-01-01

    Objective To explore the feasibility of a novel approach using an augmented one-class learning algorithm to model in-laboratory complications of percutaneous coronary intervention (PCI). Materials and methods Data from the Blue Cross Blue Shield of Michigan Cardiovascular Consortium (BMC2) multicenter registry for the years 2007 and 2008 (n=41 016) were used to train models to predict 13 different in-laboratory PCI complications using a novel one-plus-class support vector machine (OP-SVM) algorithm. The performance of these models in terms of discrimination and calibration was compared to the performance of models trained using the following classification algorithms on BMC2 data from 2009 (n=20 289): logistic regression (LR), one-class support vector machine classification (OC-SVM), and two-class support vector machine classification (TC-SVM). For the OP-SVM and TC-SVM approaches, variants of the algorithms with cost-sensitive weighting were also considered. Results The OP-SVM algorithm and its cost-sensitive variant achieved the highest area under the receiver operating characteristic curve for the majority of the PCI complications studied (eight cases). Similar improvements were observed for the Hosmer–Lemeshow χ2 value (seven cases) and the mean cross-entropy error (eight cases). Conclusions The OP-SVM algorithm based on an augmented one-class learning problem improved discrimination and calibration across different PCI complications relative to LR and traditional support vector machine classification. Such an approach may have value in a broader range of clinical domains. PMID:23599229

  10. Application of Composite Small Calibration Objects in Traffic Accident Scene Photogrammetry

    PubMed Central

    Chen, Qiang; Xu, Hongguo; Tan, Lidong

    2015-01-01

    In order to address the difficulty of arranging large calibration objects and the low measurement accuracy of small calibration objects in traffic accident scene photogrammetry, a photogrammetric method based on a composite of small calibration objects is proposed. Several small calibration objects are placed around the traffic accident scene, and the coordinate system of the composite calibration object is given based on one of them. By maintaining the relative position and coplanar relationship of the small calibration objects, the local coordinate system of each small calibration object is transformed into the coordinate system of the composite calibration object. The two-dimensional direct linear transformation method is improved based on minimizing the reprojection error of the calibration points of all objects. A rectified image is obtained using the nonlinear optimization method. The increased accuracy of traffic accident scene photogrammetry using a composite small calibration object is demonstrated through the analysis of field experiments and case studies. PMID:26011052

  11. Identifying best-fitting inputs in health-economic model calibration: a Pareto frontier approach.

    PubMed

    Enns, Eva A; Cipriano, Lauren E; Simons, Cyrena T; Kong, Chung Yin

    2015-02-01

    To identify best-fitting input sets using model calibration, individual calibration target fits are often combined into a single goodness-of-fit (GOF) measure using a set of weights. Decisions in the calibration process, such as which weights to use, influence which sets of model inputs are identified as best-fitting, potentially leading to different health economic conclusions. We present an alternative approach to identifying best-fitting input sets based on the concept of Pareto-optimality. A set of model inputs is on the Pareto frontier if no other input set simultaneously fits all calibration targets as well or better. We demonstrate the Pareto frontier approach in the calibration of 2 models: a simple, illustrative Markov model and a previously published cost-effectiveness model of transcatheter aortic valve replacement (TAVR). For each model, we compare the input sets on the Pareto frontier to an equal number of best-fitting input sets according to 2 possible weighted-sum GOF scoring systems, and we compare the health economic conclusions arising from these different definitions of best-fitting. For the simple model, outcomes evaluated over the best-fitting input sets according to the 2 weighted-sum GOF schemes were virtually nonoverlapping on the cost-effectiveness plane and resulted in very different incremental cost-effectiveness ratios ($79,300 [95% CI 72,500-87,600] v. $139,700 [95% CI 79,900-182,800] per quality-adjusted life-year [QALY] gained). Input sets on the Pareto frontier spanned both regions ($79,000 [95% CI 64,900-156,200] per QALY gained). The TAVR model yielded similar results. Choices in generating a summary GOF score may result in different health economic conclusions. The Pareto frontier approach eliminates the need to make these choices by using an intuitive and transparent notion of optimality as the basis for identifying best-fitting input sets. © The Author(s) 2014.
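
    To make the notion of Pareto-optimality concrete, the sketch below (with hypothetical goodness-of-fit errors) filters candidate input sets down to the nondominated frontier: a set survives only if no other set fits every calibration target at least as well and at least one target strictly better.

```python
import numpy as np

def pareto_frontier(errors):
    """Indices of nondominated input sets.

    errors: (n_sets, n_targets) array of goodness-of-fit errors,
    lower is better for every calibration target.
    """
    n = errors.shape[0]
    keep = np.ones(n, dtype=bool)
    for i in range(n):
        # Set i is dominated if some other set is no worse on every
        # target and strictly better on at least one.
        dominated = (np.all(errors <= errors[i], axis=1)
                     & np.any(errors < errors[i], axis=1))
        keep[i] = not dominated.any()
    return np.where(keep)[0]

# Five hypothetical input sets scored against two calibration targets.
errs = np.array([[1.0, 3.0], [2.0, 2.0], [3.0, 1.0],
                 [2.5, 2.5], [1.5, 4.0]])
print(pareto_frontier(errs))  # -> [0 1 2]: the Pareto frontier
```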

  12. Calibrating abundance indices with population size estimators of red back salamanders (Plethodon cinereus) in a New England forest

    PubMed Central

    Ellison, Aaron M.; Jackson, Scott

    2015-01-01

    Herpetologists and conservation biologists frequently use convenient and cost-effective, but less accurate, abundance indices (e.g., number of individuals collected under artificial cover boards or during natural objects surveys) in lieu of more accurate, but costly and destructive, population size estimators to detect and monitor size, state, and trends of amphibian populations. Although there are advantages and disadvantages to each approach, reliable use of abundance indices requires that they be calibrated with accurate population estimators. Such calibrations, however, are rare. The red back salamander, Plethodon cinereus, is an ecologically useful indicator species of forest dynamics, and accurate calibration of indices of salamander abundance could increase the reliability of abundance indices used in monitoring programs. We calibrated abundance indices derived from surveys of P. cinereus under artificial cover boards or natural objects with a more accurate estimator of their population size in a New England forest. Average densities/m2 and capture probabilities of P. cinereus under natural objects or cover boards in independent, replicate sites at the Harvard Forest (Petersham, Massachusetts, USA) were similar in stands dominated by Tsuga canadensis (eastern hemlock) and deciduous hardwood species (predominantly Quercus rubra [red oak] and Acer rubrum [red maple]). The abundance index based on salamanders surveyed under natural objects was significantly associated with density estimates of P. cinereus derived from depletion (removal) surveys, but underestimated true density by 50%. In contrast, the abundance index based on cover-board surveys overestimated true density by a factor of 8 and the association between the cover-board index and the density estimates was not statistically significant. We conclude that when calibrated and used appropriately, some abundance indices may provide cost-effective and reliable measures of P. cinereus abundance that could be used in conservation assessments and long-term monitoring at Harvard Forest and other northeastern USA forests. PMID:26020008

  13. Application of near-infrared spectroscopy for estimation of non-structural carbohydrates in foliar samples of Eucalyptus globulus Labilladière.

    PubMed

    Quentin, A G; Rodemann, T; Doutreleau, M-F; Moreau, M; Davies, N W; Millard, Peter

    2017-01-31

    Near-infrared reflectance spectroscopy (NIRS) is frequently used for the assessment of key nutrients of forage or crops but remains underused in ecological and physiological studies, especially to quantify non-structural carbohydrates. The aim of this study was to develop calibration models to assess the content of soluble sugars (fructose, glucose, sucrose) and starch in foliar material of Eucalyptus globulus. A partial least squares (PLS) regression was used on the sample spectral data and was compared to the contents measured using standard wet chemistry methods. The calibration models were validated using a completely independent set of samples. We used key indicators such as the ratio of prediction to deviation (RPD) and the range error ratio to give an assessment of the performance of the calibration models. Accurate calibration models were obtained for fructose and glucose content (R2 > 0.85, root mean square error of prediction (RMSEP) of 0.95%–1.26% in the validation models), followed by sucrose and total soluble sugar content (R2 ~ 0.70 and RMSEP > 2.3%). In comparison to the others, the calibration of the starch model performed very poorly, with RPD = 1.70. This study establishes the ability of the NIRS calibration model to infer soluble sugar content in foliar samples of E. globulus in a rapid and cost-effective way. We suggest a complete redevelopment of the starch analysis using more specific quantification, such as an HPLC-based technique, to reach higher performance in the starch model. Overall, NIRS could serve as a high-throughput phenotyping tool to study plant response to stress factors.
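
    A minimal sketch of the calibration/validation workflow described above, using scikit-learn's PLS regression; the spectra and reference values are random placeholders, so the printed RMSEP and RPD are meaningful only as a demonstration of the computation.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

# Placeholder NIR spectra (n_samples x n_wavelengths) and reference
# sugar contents (%); real work uses lab wet-chemistry references.
rng = np.random.default_rng(1)
X_cal, y_cal = rng.normal(size=(60, 200)), rng.uniform(2, 12, 60)
X_val, y_val = rng.normal(size=(20, 200)), rng.uniform(2, 12, 20)

# Fit the PLS calibration model, then predict the independent set.
pls = PLSRegression(n_components=8).fit(X_cal, y_cal)
y_pred = pls.predict(X_val).ravel()

rmsep = np.sqrt(np.mean((y_val - y_pred) ** 2))  # prediction error
rpd = np.std(y_val, ddof=1) / rmsep              # ratio of perf. to deviation
print(f"RMSEP={rmsep:.2f}%, RPD={rpd:.2f}")
```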

  14. Calibration and accuracy analysis of a focused plenoptic camera

    NASA Astrophysics Data System (ADS)

    Zeller, N.; Quint, F.; Stilla, U.

    2014-08-01

    In this article we introduce new methods for the calibration of depth images from focused plenoptic cameras and validate the results. We start with a brief description of the concept of a focused plenoptic camera and of how a depth map can be estimated from the recorded raw image. For this camera, an analytical expression of the depth accuracy is derived for the first time. In the main part of the paper, methods to calibrate a focused plenoptic camera are developed and evaluated. The optical imaging process is calibrated using a method already known from the calibration of traditional cameras. For the calibration of the depth map, two new model-based methods, which make use of the projection concept of the camera, are developed. These new methods are compared to a common curve-fitting approach based on a Taylor series approximation. Both model-based methods show significant advantages compared to the curve-fitting method: they need fewer reference points for calibration and, moreover, supply a function that remains valid beyond the calibration range. In addition, the depth map accuracy of the plenoptic camera was experimentally investigated for different focal lengths of the main lens and compared to the analytical evaluation.

  15. A Measuring System for Well Logging Attitude and a Method of Sensor Calibration

    PubMed Central

    Ren, Yong; Wang, Yangdong; Wang, Mijian; Wu, Sheng; Wei, Biao

    2014-01-01

    This paper proposes an approach for measuring the azimuth angle and tilt angle of underground drilling tools with a MEMS three-axis accelerometer and a three-axis fluxgate sensor. A mathematical model of the well logging attitude angle is deduced by combining space coordinate transformations and algebraic equations. In addition, a system implementation plan for the inclinometer is given, which features low cost, small volume, and integration. To address sensor and assembly errors, this paper analyses the sources of error, establishes two mathematical error models, and calculates the related parameters to achieve sensor calibration. The results show that this scheme can obtain stable and high-precision azimuth and tilt angles of drilling tools, with the deviation of the former less than ±1.4° and the deviation of the latter less than ±0.1°. PMID:24859028

  16. A measuring system for well logging attitude and a method of sensor calibration.

    PubMed

    Ren, Yong; Wang, Yangdong; Wang, Mijian; Wu, Sheng; Wei, Biao

    2014-05-23

    This paper proposes an approach for measuring the azimuth angle and tilt angle of underground drilling tools with a MEMS three-axis accelerometer and a three-axis fluxgate sensor. A mathematical model of the well logging attitude angle is deduced by combining space coordinate transformations and algebraic equations. In addition, a system implementation plan for the inclinometer is given, which features low cost, small volume, and integration. To address sensor and assembly errors, this paper analyses the sources of error, establishes two mathematical error models, and calculates the related parameters to achieve sensor calibration. The results show that this scheme can obtain stable and high-precision azimuth and tilt angles of drilling tools, with the deviation of the former less than ±1.4° and the deviation of the latter less than ±0.1°.
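
    As a rough sketch of the attitude computation these records describe, the snippet below derives tilt and a tilt-compensated azimuth from idealized three-axis accelerometer and fluxgate readings. Axis conventions are assumed (right-handed body frame, z along the tool), and the sensor and assembly error models that the paper calibrates are omitted.

```python
import numpy as np

def attitude(acc, mag):
    """Tilt and tilt-compensated azimuth from 3-axis accelerometer and
    fluxgate readings in the tool body frame (assumed conventions)."""
    ax, ay, az = acc / np.linalg.norm(acc)
    roll = np.arctan2(ay, az)
    pitch = np.arctan2(-ax, np.hypot(ay, az))
    tilt = np.degrees(np.arccos(np.clip(az, -1.0, 1.0)))

    # Rotate the magnetic vector into the horizontal plane, then take
    # the heading from its horizontal components.
    mx, my, mz = mag / np.linalg.norm(mag)
    xh = (mx * np.cos(pitch) + my * np.sin(roll) * np.sin(pitch)
          + mz * np.cos(roll) * np.sin(pitch))
    yh = my * np.cos(roll) - mz * np.sin(roll)
    azimuth = (np.degrees(np.arctan2(-yh, xh)) + 360.0) % 360.0
    return tilt, azimuth

tilt, azi = attitude(np.array([0.05, 0.02, 0.99]),
                     np.array([0.3, 0.1, 0.5]))
print(f"tilt = {tilt:.2f} deg, azimuth = {azi:.2f} deg")
```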

  17. Improved quantification of important beer quality parameters based on nonlinear calibration methods applied to FT-MIR spectra.

    PubMed

    Cernuda, Carlos; Lughofer, Edwin; Klein, Helmut; Forster, Clemens; Pawliczek, Marcin; Brandstetter, Markus

    2017-01-01

    During the production process of beer, it is of utmost importance to guarantee a high consistency of the beer quality. For instance, the bitterness is an essential quality parameter which has to be controlled within the specifications at the beginning of the production process in the unfermented beer (wort) as well as in final products such as beer and beer mix beverages. Nowadays, analytical techniques for quality control in beer production are mainly based on manual supervision, i.e., samples are taken from the process and analyzed in the laboratory. This typically requires significant effort from lab technicians for only a small fraction of the samples to be analyzed, which leads to significant costs for beer breweries and companies. Fourier transform mid-infrared (FT-MIR) spectroscopy was used in combination with nonlinear multivariate calibration techniques to overcome (i) the time-consuming off-line analyses in beer production and (ii) the known limitations of standard linear chemometric methods, like partial least squares (PLS), for important quality parameters (Speers et al., J I Brewing. 2003;109(3):229-235; Zhang et al., J I Brewing. 2012;118(4):361-367) such as bitterness, citric acid, total acids, free amino nitrogen, final attenuation, or foam stability. The calibration models are established with enhanced nonlinear techniques based (i) on a new piece-wise linear version of PLS that employs fuzzy rules for locally partitioning the latent variable space and (ii) on extensions of support vector regression variants (ε-PLSSVR and ν-PLSSVR), for overcoming high computation times in high-dimensional problems and time-intensive, inappropriate settings of the kernel parameters. Furthermore, we introduce a new model selection scheme based on bagged ensembles in order to improve robustness and thus the predictive quality of the final models. The approaches are tested on real-world calibration data sets for wort and beer mix beverages and successfully compared to linear methods, showing a clear out-performance in most cases and meeting the model quality requirements defined by the experts at the beer company. [Figure: workflow for calibration of nonlinear model ensembles from FT-MIR spectra in beer production.]

  18. Building Energy Simulation Test for Existing Homes (BESTEST-EX) (Presentation)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Judkoff, R.; Neymark, J.; Polly, B.

    2011-12-01

    This presentation discusses the goals of NREL Analysis Accuracy R&D; BESTEST-EX goals; what BESTEST-EX is; how it works; 'Building Physics' cases; 'Building Physics' reference results; 'utility bill calibration' cases; and limitations and potential future work. The goals of NREL Analysis Accuracy R&D are: (1) Provide industry with the tools and technical information needed to improve the accuracy and consistency of analysis methods; (2) Reduce the risks associated with purchasing, financing, and selling energy efficiency upgrades; and (3) Enhance software and input collection methods considering impacts on accuracy, cost, and time of energy assessments. The BESTEST-EX goals are: (1) Test software predictions of retrofit energy savings in existing homes; (2) Ensure building physics calculations and utility bill calibration procedures perform up to a minimum standard; and (3) Quantify the impact of uncertainties in input audit data and occupant behavior. BESTEST-EX is a repeatable procedure that tests how well audit software predictions compare to the current state of the art in building energy simulation. There is no direct truth standard; however, the reference software have been subjected to validation testing, including comparisons with empirical data.

  19. Information theoretic methods for image processing algorithm optimization

    NASA Astrophysics Data System (ADS)

    Prokushkin, Sergey F.; Galil, Erez

    2015-01-01

    Modern image processing pipelines (e.g., those used in digital cameras) are full of advanced, highly adaptive filters that often have a large number of tunable parameters (sometimes > 100). This makes the calibration procedure for these filters very complex, and optimal results are barely achievable through manual calibration; thus, an automated approach is a must. We discuss an information-theory-based metric for evaluating an algorithm's adaptive characteristics (an "adaptivity criterion"), using noise reduction algorithms as an example. The method allows finding an "orthogonal decomposition" of the filter parameter space into "filter adaptivity" and "filter strength" directions. This metric can be used as a cost function in automatic filter optimization. Since it is a measure of physical "information restoration" rather than perceived image quality, it helps to reduce the set of filter parameters to a smaller subset that is easier for a human operator to tune to achieve a better subjective image quality. With appropriate adjustments, the criterion can be used for assessment of the whole imaging system (sensor plus post-processing).

  20. CALIBRATION OF SEMI-ANALYTIC MODELS OF GALAXY FORMATION USING PARTICLE SWARM OPTIMIZATION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ruiz, Andrés N.; Domínguez, Mariano J.; Yaryura, Yamila

    2015-03-10

    We present a fast and accurate method to select an optimal set of parameters in semi-analytic models of galaxy formation and evolution (SAMs). Our approach compares the results of a model against a set of observables applying a stochastic technique called Particle Swarm Optimization (PSO), a self-learning algorithm for localizing regions of maximum likelihood in multidimensional spaces that outperforms traditional sampling methods in terms of computational cost. We apply the PSO technique to the SAG semi-analytic model combined with merger trees extracted from a standard Lambda Cold Dark Matter N-body simulation. The calibration is performed using a combination of observed galaxy properties as constraints, including the local stellar mass function and the black hole to bulge mass relation. We test the ability of the PSO algorithm to find the best set of free parameters of the model by comparing the results with those obtained using a MCMC exploration. Both methods find the same maximum likelihood region; however, the PSO method requires one order of magnitude fewer evaluations. This new approach allows a fast estimation of the best-fitting parameter set in multidimensional spaces, providing a practical tool to test the consequences of including other astrophysical processes in SAMs.
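
    For readers unfamiliar with PSO, the sketch below is a generic, minimal implementation of the algorithm, not the code used in the paper: each particle is pulled toward its personal best and the swarm's global best while a cost function, here a trivial quadratic standing in for a SAM-versus-observations misfit, is minimized.

```python
import numpy as np

def pso(cost, bounds, n_particles=30, iters=100, w=0.7, c1=1.5, c2=1.5):
    """Minimal Particle Swarm Optimization (minimization) sketch.

    cost: callable mapping a parameter vector to a scalar misfit.
    bounds: (dim, 2) array of [low, high] limits per parameter.
    """
    rng = np.random.default_rng(42)
    low, high = bounds[:, 0], bounds[:, 1]
    x = rng.uniform(low, high, (n_particles, len(low)))
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.array([cost(p) for p in x])
    g = pbest[np.argmin(pbest_f)].copy()

    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        # Inertia plus pulls toward the personal and global bests.
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, low, high)
        f = np.array([cost(p) for p in x])
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        g = pbest[np.argmin(pbest_f)].copy()
    return g, pbest_f.min()

best, fbest = pso(lambda p: np.sum((p - 1.0) ** 2),
                  np.array([[-5.0, 5.0]] * 3))
print(best, fbest)  # converges near [1, 1, 1]
```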

  1. Comparison of TLD calibration methods for  192Ir dosimetry

    PubMed Central

    Butler, Duncan J.; Wilfert, Lisa; Ebert, Martin A.; Todd, Stephen P.; Hayton, Anna J.M.; Kron, Tomas

    2013-01-01

    For the purpose of dose measurement using a high-dose rate 192Ir source, four methods of thermoluminescent dosimeter (TLD) calibration were investigated. Three of the four calibration methods used the 192Ir source. Dwell times were calculated to deliver 1 Gy to the TLDs irradiated either in air or water. Dwell time calculations were confirmed by direct measurement using an ionization chamber. The fourth method of calibration used 6 MV photons from a medical linear accelerator, and an energy correction factor was applied to account for the difference in sensitivity of the TLDs in 192Ir and 6 MV. The results of the four TLD calibration methods are presented in terms of the results of a brachytherapy audit where seven Australian centers irradiated three sets of TLDs in a water phantom. The results were in agreement within estimated uncertainties when the TLDs were calibrated with the 192Ir source. Calibrating TLDs in a phantom similar to that used for the audit proved to be the most practical method and provided the greatest confidence in measured dose. When calibrated using 6 MV photons, the TLD results were consistently higher than the 192Ir-calibrated TLDs, suggesting this method does not fully correct for the response of the TLDs when irradiated in the audit phantom. PACS number: 87 PMID:23318392

  2. Methods for Calibration of Prout-Tompkins Kinetics Parameters Using EZM Iteration and GLO

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wemhoff, A P; Burnham, A K; de Supinski, B

    2006-11-07

    This document contains information regarding the standard procedures used to calibrate chemical kinetics parameters for the extended Prout-Tompkins model to match experimental data. Two methods for calibration are described: EZM calibration and GLO calibration. EZM calibration matches kinetics parameters to three data points, while GLO calibration slightly adjusts kinetic parameters to match multiple points. Information is provided regarding the theoretical approach and application procedure for both of these calibration algorithms. It is recommended that the user begin the calibration process with EZM calibration to provide a good estimate, and then fine-tune the parameters using GLO. Two examples have been provided to guide the reader through a general calibration process.

  3. Error modeling and analysis of star cameras for a class of 1U spacecraft

    NASA Astrophysics Data System (ADS)

    Fowler, David M.

    As spacecraft today become increasingly smaller, the demand for smaller components and sensors rises as well. The smartphone, a cutting-edge consumer technology, has an impressive collection of both sensors and processing capabilities and may have the potential to fill this demand in the spacecraft market. If the technologies of a smartphone can be used in space, the cost of building miniature satellites would drop significantly and give a boost to the aerospace and scientific communities. Concentrating on the problem of spacecraft orientation, this study sets out to determine the capabilities of a smartphone camera acting as a star camera. Orientations determined from star images taken with a smartphone camera are compared to those from higher-quality cameras in order to determine the associated accuracies. The results of the study reveal the abilities of low-cost off-the-shelf imagers in space and give a starting point for future research in the field. The study began with a complete geometric calibration of each analyzed imager so that all comparisons start from the same base. After the cameras were calibrated, image processing techniques were introduced to correct for atmospheric, lens, and image sensor effects. Orientations for each test image are calculated by identifying the stars exposed on each image. Analyses of these orientations allow the overall errors of each camera to be defined and provide insight into the abilities of low-cost imagers.

  4. SU-F-BRA-16: Development of a Radiation Monitoring Device Using a Low-Cost CCD Camera Following Radionuclide Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Taneja, S; Fru, L Che; Desai, V

    Purpose: It is now commonplace to handle treatments of hyperthyroidism using iodine-131 as an outpatient procedure due to lower costs and less stringent federal regulations. The Nuclear Regulatory Commission has recently updated release guidelines for these procedures, but there is still a large uncertainty in the dose to the public. Current guidelines to minimize dose to the public require patients to remain isolated after treatment. The purpose of this study was to use a low-cost common device, such as a cell phone, to estimate exposure emitted from a patient to the general public. Methods: Measurements were performed using an Apple iPhone 3GS and a Cs-137 irradiator. The charge-coupled device (CCD) camera on the phone was irradiated at exposure rates ranging from 0.1 mR/hr to 100 mR/hr, and 30-sec videos were taken during irradiation with the camera lens covered by electrical tape. Interactions were detected as white pixels on a black background in each video. Both single threshold (ST) and colony counting (CC) methods were performed using MATLAB®. Calibration curves were determined by comparing the total pixel intensity output from each method to the known exposure rate. Results: The calibration curve showed a linear relationship above 5 mR/hr for both analysis techniques. The number of events counted per unit exposure rate within the linear region was 19.5 ± 0.7 events/mR and 8.9 ± 0.4 events/mR for the ST and CC methods, respectively. Conclusion: Two algorithms were developed and show a linear relationship between photons detected by a CCD camera and low exposure rates, in the range of 5 mR/hr to 100 mR/hr. Future work aims to refine this model by investigating the dose-rate and energy dependencies of the camera response. This algorithm allows for quantitative monitoring of exposure from patients treated with iodine-131 using a simple device outside of the hospital.
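
    A simplified sketch of the single-threshold (ST) analysis and the linear calibration curve described above; the pixel threshold, event counts, and exposure rates are invented for illustration.

```python
import numpy as np

def count_events(frame, threshold=50):
    """Single-threshold (ST) analysis: dark-frame pixels brighter than
    the threshold are counted as radiation hits (illustrative value)."""
    return int(np.sum(frame > threshold))

# A synthetic dark frame with one bright pixel standing in for a hit.
frame = np.zeros((480, 640))
frame[100, 200] = 255.0
print(count_events(frame))  # -> 1 detected event

# Hypothetical calibration data: mean event counts per video at known
# exposure rates (mR/hr) within the reported linear region (>5 mR/hr).
exposure_rate = np.array([5.0, 10.0, 25.0, 50.0, 100.0])
mean_events = np.array([98.0, 195.0, 487.0, 975.0, 1950.0])

# Linear calibration curve: events = slope * exposure_rate + intercept.
slope, intercept = np.polyfit(exposure_rate, mean_events, 1)
print(f"{slope:.1f} events per (mR/hr)")
```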

  5. A calibration method based on virtual large planar target for cameras with large FOV

    NASA Astrophysics Data System (ADS)

    Yu, Lei; Han, Yangyang; Nie, Hong; Ou, Qiaofeng; Xiong, Bangshu

    2018-02-01

    In order to obtain high precision in camera calibration, a target should be large enough to cover the whole field of view (FOV). For cameras with large FOV, using a small target will seriously reduce the precision of calibration. However, using a large target causes many difficulties in making, carrying and employing it. In order to solve this problem, a calibration method based on a virtual large planar target (VLPT), which is virtually constructed from multiple small targets (STs), is proposed for cameras with large FOV. In the VLPT-based calibration method, first, the positions and directions of the STs are changed several times to obtain a number of calibration images. Secondly, the VLPT of each calibration image is created by finding the virtual points corresponding to the feature points of the STs. Finally, the intrinsic and extrinsic parameters of the camera are calculated using the VLPTs. Experimental results show that the proposed method not only achieves calibration precision similar to that of methods employing a large target, but also has good stability over the whole measurement area. Thus, the proposed method, which offers good operability, effectively addresses the difficulty of accurately calibrating cameras with large FOV.

  6. A Full-Envelope Air Data Calibration and Three-Dimensional Wind Estimation Method Using Global Output-Error Optimization and Flight-Test Techniques

    NASA Technical Reports Server (NTRS)

    Taylor, Brian R.

    2012-01-01

    A novel, efficient air data calibration method is proposed for aircraft with limited envelopes. This method uses output-error optimization on three-dimensional inertial velocities to estimate calibration and wind parameters. Calibration parameters are based on assumed calibration models for static pressure, angle of attack, and flank angle. Estimated wind parameters are the north, east, and down components. The only assumptions needed for this method are that the inertial velocities and Euler angles are accurate, the calibration models are correct, and that the steady-state component of wind is constant throughout the maneuver. A two-minute maneuver was designed to excite the aircraft over the range of air data calibration parameters and de-correlate the angle-of-attack bias from the vertical component of wind. Simulation of the X-48B (The Boeing Company, Chicago, Illinois) aircraft was used to validate the method, ultimately using data derived from wind-tunnel testing to simulate the uncalibrated air data measurements. Results from the simulation were accurate and robust to turbulence levels comparable to those observed in flight. Future experiments are planned to evaluate the proposed air data calibration in a flight environment.

  7. Reconstructing paleoclimate fields using online data assimilation with a linear inverse model

    NASA Astrophysics Data System (ADS)

    Perkins, Walter A.; Hakim, Gregory J.

    2017-05-01

    We examine the skill of a new approach to climate field reconstructions (CFRs) using an online paleoclimate data assimilation (PDA) method. Several recent studies have foregone climate model forecasts during assimilation due to the computational expense of running coupled global climate models (CGCMs) and the relatively low skill of these forecasts on longer timescales. Here we greatly diminish the computational cost by employing an empirical forecast model (linear inverse model, LIM), which has been shown to have skill comparable to CGCMs for forecasting annual-to-decadal surface temperature anomalies. We reconstruct annual-average 2 m air temperature over the instrumental period (1850-2000) using proxy records from the PAGES 2k Consortium Phase 1 database; proxy models for estimating proxy observations are calibrated on GISTEMP surface temperature analyses. We compare results for LIMs calibrated using observational (Berkeley Earth), reanalysis (20th Century Reanalysis), and CMIP5 climate model (CCSM4 and MPI) data relative to a control offline reconstruction method. Generally, we find that the usage of LIM forecasts for online PDA increases reconstruction agreement with the instrumental record for both spatial fields and global mean temperature (GMT). Specifically, the coefficient of efficiency (CE) skill metric for detrended GMT increases by an average of 57 % over the offline benchmark. LIM experiments display a common pattern of skill improvement in the spatial fields over Northern Hemisphere land areas and in the high-latitude North Atlantic-Barents Sea corridor. Experiments for non-CGCM-calibrated LIMs reveal region-specific reductions in spatial skill compared to the offline control, likely due to aspects of the LIM calibration process. Overall, the CGCM-calibrated LIMs have the best performance when considering both spatial fields and GMT. A comparison with the persistence forecast experiment suggests that improvements are associated with the linear dynamical constraints of the forecast and not simply persistence of temperature anomalies.

  8. Low Cost Sensor Calibration Options

    EPA Science Inventory

    Low-cost sensors ($100-$500) represent a unique class of air monitoring devices that may provide for more ubiquitous pollutant monitoring. They vary widely in design and measure pollutants ranging from ozone and particulate matter to volatile organic compounds. Many of these senso...

  9. A fast calibration method for 3-D tracking of ultrasound images using a spatial localizer.

    PubMed

    Pagoulatos, N; Haynor, D R; Kim, Y

    2001-09-01

    We have developed a fast calibration method for computing the position and orientation of 2-D ultrasound (US) images in 3-D space where a position sensor is mounted on the US probe. This calibration is required in the fields of 3-D ultrasound and registration of ultrasound with other imaging modalities. Most of the existing calibration methods require a complex and tedious experimental procedure. Our method is simple and it is based on a custom-built phantom. Thirty N-fiducials (markers in the shape of the letter "N") embedded in the phantom provide the basis for our calibration procedure. We calibrated a 3.5-MHz sector phased-array probe with a magnetic position sensor, and we studied the accuracy and precision of our method. A typical calibration procedure requires approximately 2 min. We conclude that we can achieve accurate and precise calibration using a single US image, provided that a large number (approximately ten) of N-fiducials are captured within the US image, enabling a representative sampling of the imaging plane.

  10. Self-calibration method for rotating laser positioning system using interscanning technology and ultrasonic ranging.

    PubMed

    Wu, Jun; Yu, Zhijing; Zhuge, Jingchang

    2016-04-01

    A rotating laser positioning system (RLPS) is an efficient measurement method for large-scale metrology. Because multiple transmitter stations make up the measurement network, the position relationship of these stations must first be calibrated. However, traditional calibration methods, which rely on auxiliary devices such as a laser tracker or a scale bar and on a complex calibration process, greatly reduce measurement efficiency. This paper proposes a self-calibration method for RLPS, which can automatically obtain the position relationship. The method is implemented through interscanning technology by using a calibration bar mounted on each transmitter station. Each bar is composed of three RLPS receivers and one ultrasonic sensor whose coordinates are known in advance. The calibration algorithm is mainly based on multiplane and distance constraints and is introduced in detail through a two-station mathematical model. Repeated experiments demonstrate that the coordinate measurement uncertainty of spatial points using this method is about 0.1 mm, and accuracy experiments show that the average coordinate measurement deviation is about 0.3 mm compared with a laser tracker. This accuracy can meet the requirements of most applications, while the calibration efficiency is significantly improved.

  11. Smart Aquifer Characterisation validated using Information Theory and Cost benefit analysis

    NASA Astrophysics Data System (ADS)

    Moore, Catherine

    2016-04-01

    The field data acquisition required to characterise aquifer systems is time-consuming and expensive. Decisions regarding field testing, the type of field measurements to make, and the spatial and temporal resolution of measurements have significant cost repercussions and impact the accuracy of various predictive simulations. The Smart Aquifer Characterisation (SAC) research programme (New Zealand (NZ)) addresses this issue by assembling and validating a suite of innovative methods for characterising groundwater systems at the large, regional and national scales. The primary outcome is a suite of cost-effective tools and procedures provided to resource managers to advance the understanding and management of groundwater systems and thereby assist decision makers and communities in the management of their groundwater resources, including the setting of land use limits that protect fresh water flows and quality and the ecosystems dependent on that fresh water. The programme has focused on novel investigation approaches, including the use of geophysics, satellite remote sensing, temperature sensing and age dating. The SMART (Save Money And Reduce Time) aspect of the programme emphasises techniques that use these passive, cost-effective data sources to characterise groundwater systems at both the aquifer and the national scale by:
    • Determination of aquifer hydraulic properties
    • Determination of aquifer dimensions
    • Quantification of fluxes between groundwater and surface water
    • Groundwater age dating
    These methods allow either a lower-cost means of estimating these properties and fluxes, or a greater spatial and temporal coverage for the same cost.

    To demonstrate the cost effectiveness of the methods, a 'data worth' analysis is undertaken. The data worth method involves quantification of the utility of observation data in terms of how much it reduces the uncertainty of model parameters and of decision-focussed predictions which depend on these parameters. Such decision-focussed predictions can include many aspects of system behaviour which underpin management decisions, e.g., drawdown of groundwater levels, salt water intrusion, stream depletion, or wetland water level. The value of a data type or an observation location (e.g., remote sensing data (Westerhoff 2015) or a distributed temperature sensing measurement) is greater the more it enhances the certainty with which the model is able to predict such environmental behaviour. By comparing the difference in predictive uncertainty with or without such data, the value of potential observations is assessed. This can be achieved using rapid linear predictive uncertainty analysis methods (Moore 2005, Moore and Doherty 2006). By assessing, in a Pareto analysis, the tension between the cost of data acquisition and the predictive accuracy achieved by gathering these observations, the relative cost effectiveness of these novel methods can be compared with more traditional measurements (e.g., bore logs, aquifer pumping tests, and simultaneous stream loss gaugings) for a suite of pertinent groundwater management decisions (Wallis et al 2014). This comparison illuminates those field data acquisition methods which offer the best value for the specific issues managers face in any region, and also indicates the diminishing returns of increasingly large and expensive data sets.

    References: Wallis, I., Moore, C., Post, V., Wolf, L., Martens, E., Prommer, H. (2014). Using predictive uncertainty analysis to optimise tracer test design and data acquisition. Journal of Hydrology 515, 191-204. Moore, C. (2005). The use of regularized inversion in groundwater model calibration and prediction uncertainty analysis. PhD thesis, The University of Queensland, Australia. Moore, C., and Doherty, J. (2005). Role of the calibration process in reducing model predictive error. Water Resources Research 41(5), W05050. Westerhoff, R.S. (2015). Using uncertainty of Penman and Penman-Monteith methods in combined satellite and ground-based evapotranspiration estimates. Remote Sensing of Environment 169, 102-112.

  12. Spectrophotometric method for quantitative determination of total anthocyanins and quality characteristics of roselle (Hibiscus sabdariffa).

    PubMed

    Sukwattanasinit, Tasamaporn; Burana-Osot, Jankana; Sotanaphun, Uthai

    2007-11-01

    A simple, rapid and cost-saving method for the determination of total anthocyanins in roselle has been developed. The method is based on pH-differential spectrophotometry. The calibration curve of the major anthocyanin in roselle, delphinidin 3-sambubioside (Dp-3-sam), was constructed using methyl orange and their correlation factor. The reliability of this developed method was comparable to that of the direct method using standard Dp-3-sam and of the HPLC method. Quality characteristics of roselle produced in Thailand are also reported. Its physical quality met the required specifications. The overall chemical quality was surveyed here for the first time and was found to be an important parameter corresponding to the commercial grading of roselle. Total contents of anthocyanins and phenolics were proportional to the antiradical capacity.

  13. Coupling HYDRUS-1D Code with PA-DDS Algorithms for Inverse Calibration

    NASA Astrophysics Data System (ADS)

    Wang, Xiang; Asadzadeh, Masoud; Holländer, Hartmut

    2017-04-01

    Numerical models require calibration before they can be used for prediction. A standard approach is inverse calibration, where multi-objective optimization algorithms are generally used to find a solution, e.g., an optimal set of van Genuchten-Mualem (VGM) parameters to predict water fluxes in the vadose zone. We coupled HYDRUS-1D with PA-DDS to add a new, robust function for inverse calibration to the model. The PA-DDS method is a recently developed multi-objective optimization algorithm which combines Dynamically Dimensioned Search (DDS) and the Pareto Archived Evolution Strategy (PAES). The results were compared to a standard method (the Marquardt-Levenberg method) implemented in HYDRUS-1D. Calibration performance was evaluated using observed and simulated soil moisture at two soil layers in southern Abbotsford, British Columbia, Canada, in terms of the root mean squared error (RMSE) and the Nash-Sutcliffe Efficiency (NSE). Results showed low RMSE values of 0.014 and 0.017 and strong NSE values of 0.961 and 0.939. Compared to the results of the Marquardt-Levenberg method, we obtained better calibration results for the deeper soil sensors; the VGM parameters, however, were similar to those found in previous studies. Both methods are equally computationally efficient, and a direct implementation of PA-DDS into HYDRUS-1D should reduce the computational effort further. Thus, the PA-DDS method is efficient for calibrating recharge in complex vadose zone modelling with multiple soil layers and is a potential tool for the calibration of heat and solute transport. Future work should focus on the effectiveness of PA-DDS for calibrating more complex versions of the model, with more soil layers, and against measured heat and solute transport. Keywords: Recharge, Calibration, HYDRUS-1D, Multi-objective Optimization
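
    Both performance metrics used above are straightforward to compute directly; the sketch below evaluates them for an illustrative soil-moisture series.

```python
import numpy as np

def rmse(obs, sim):
    """Root mean squared error between observed and simulated series."""
    return np.sqrt(np.mean((obs - sim) ** 2))

def nse(obs, sim):
    """Nash-Sutcliffe Efficiency: 1 is a perfect fit; values below 0
    mean the model is worse than predicting the observed mean."""
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

# Illustrative soil-moisture series (volumetric fraction).
obs = np.array([0.21, 0.24, 0.28, 0.26, 0.22, 0.20])
sim = np.array([0.22, 0.25, 0.27, 0.25, 0.23, 0.21])
print(f"RMSE={rmse(obs, sim):.3f}, NSE={nse(obs, sim):.3f}")
```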

  14. Volumetric calibration of a plenoptic camera.

    PubMed

    Hall, Elise Munz; Fahringer, Timothy W; Guildenbecher, Daniel R; Thurow, Brian S

    2018-02-01

    The volumetric calibration of a plenoptic camera is explored to correct for inaccuracies due to real-world lens distortions and thin-lens assumptions in current processing methods. Two methods of volumetric calibration based on a polynomial mapping function that does not require knowledge of specific lens parameters are presented and compared to a calibration based on thin-lens assumptions. The first method, volumetric dewarping, is executed by creation of a volumetric representation of a scene using the thin-lens assumptions, which is then corrected in post-processing using a polynomial mapping function. The second method, direct light-field calibration, uses the polynomial mapping in creation of the initial volumetric representation to relate locations in object space directly to image sensor locations. The accuracy and feasibility of these methods is examined experimentally by capturing images of a known dot card at a variety of depths. Results suggest that use of a 3D polynomial mapping function provides a significant increase in reconstruction accuracy and that the achievable accuracy is similar using either polynomial-mapping-based method. Additionally, direct light-field calibration provides significant computational benefits by eliminating some intermediate processing steps found in other methods. Finally, the flexibility of this method is shown for a nonplanar calibration.

  15. Simultaneous overpass off nadir (SOON): a method for unified calibration/validation across IEOS and GEOSS system of systems

    NASA Astrophysics Data System (ADS)

    Ardanuy, Philip; Bergen, Bill; Huang, Allen; Kratz, Gene; Puschell, Jeff; Schueler, Carl; Walker, Joe

    2006-08-01

    The US operates a diverse, evolving constellation of research and operational environmental satellites, principally in polar and geosynchronous orbits. Our current and enhanced future domestic remote sensing capability is complemented by the significant capabilities of our current and potential future international partners. In this analysis, we define "success" through the data customers' "eyes": participating in the sufficient and continuously improving satisfaction of their mission responsibilities. Successfully fusing observations from multiple simultaneous platforms and sensors into a common, self-consistent operational environment requires a unified calibration and validation approach. Here, we develop a concept for an integrating framework for absolute accuracy; long-term stability; self-consistency among sensors, platforms, techniques, and observing systems; and validation and characterization of performance. Across all systems, this is a non-trivial problem. Simultaneous Nadir Overpasses, or SNOs, provide a proven intercomparison technique: simultaneous, collocated, co-angular measurements. Many systems, however, have off-nadir elements or effects that must be calibrated; for these systems, the nadir technique constrains the process. We therefore define the term "SOON," for simultaneous overpass off nadir. We present a target architecture and sensitivity analysis for the affordable, sustainable implementation of a global SOON calibration/validation network that can deliver the much-needed comprehensive, common, self-consistent operational picture in near-real time, at an affordable cost.

  16. Novel crystal timing calibration method based on total variation

    NASA Astrophysics Data System (ADS)

    Yu, Xingjian; Isobe, Takashi; Watanabe, Mitsuo; Liu, Huafeng

    2016-11-01

    A novel crystal timing calibration method based on total variation (TV), abbreviated as 'TV merge', has been developed for a high-resolution positron emission tomography (PET) system. The proposed method was developed for a system with a large number of crystals and can provide timing calibration at the crystal level. In the proposed method, the timing calibration process is formulated as a linear problem, and to robustly optimize the timing resolution, a TV constraint is added to the linear equation. Moreover, to solve the computer memory problem associated with calculating the timing calibration factors for systems with a large number of crystals, a merge component is used for obtaining the crystal-level timing calibration values. In contrast to conventional methods, data measured from a standard cylindrical phantom filled with a radioisotope solution are sufficient for performing a high-precision crystal-level timing calibration. In this paper, both simulation and experimental studies were performed to demonstrate the effectiveness and robustness of the TV merge method. We compare the timing resolutions of a 22Na point source, located in the field of view (FOV) of the brain PET system, for various calibration techniques. After implementing the TV merge method, the timing resolution improved from 3.34 ns at full width at half maximum (FWHM) to 2.31 ns FWHM.

  17. BESTEST-EX | Buildings | NREL

    Science.gov Websites

    BESTEST-EX is a method for testing home energy audit software and associated calibration methods. When completed, the ANSI/RESNET SMOT will specify test procedures for evaluating calibration methods used in conjunction with predicting building energy use.

  18. Calibrating a Rainfall-Runoff and Routing Model for the Continental United States

    NASA Astrophysics Data System (ADS)

    Jankowfsky, S.; Li, S.; Assteerawatt, A.; Tillmanns, S.; Hilberts, A.

    2014-12-01

    Catastrophe risk models are widely used in the insurance industry to estimate the cost of risk. The models consist of hazard models linked to vulnerability and financial loss models. In flood risk models, the hazard model generates inundation maps. In order to develop country-wide inundation maps for different return periods, a rainfall-runoff and routing model is run using stochastic rainfall data. The simulated discharge and runoff are then input to a two-dimensional inundation model, which produces the flood maps. In order to obtain realistic flood maps, the rainfall-runoff and routing models have to be calibrated with observed discharge data. The rainfall-runoff model applied here is a semi-distributed model based on the Topmodel (Beven and Kirkby, 1979) approach, which includes additional snowmelt and evapotranspiration models. The routing model is based on the Muskingum-Cunge (Cunge, 1969) approach and includes the simulation of lakes and reservoirs using the linear reservoir approach. Both models were calibrated using the multiobjective NSGA-II (Deb et al., 2002) genetic algorithm with NLDAS forcing data and around 4500 USGS discharge gauges for the period 1979-2013. Additional gauges having no data after 1979 were calibrated using CPC rainfall data. The model performed well in wetter regions and showed the difficulty of simulating areas with sinks, such as karstic areas, and dry areas. References: Beven, K., Kirkby, M., 1979. A physically based, variable contributing area model of basin hydrology. Hydrol. Sci. Bull. 24(1), 43-69. Cunge, J.A., 1969. On the subject of a flood propagation computation method (Muskingum method). J. Hydr. Research 7(2), 205-230. Deb, K., Pratap, A., Agarwal, S., Meyarivan, T., 2002. A fast and elitist multiobjective genetic algorithm: NSGA-II. IEEE Transactions on Evolutionary Computation 6(2), 182-197.
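
    As background for the routing component in the abstract above, the sketch below implements the classic Muskingum recursion with illustrative values; Muskingum-Cunge additionally derives the parameters K and X from channel and grid properties rather than treating them as free calibration parameters.

```python
import numpy as np

def muskingum(q_in, K=2.0, X=0.2, dt=1.0):
    """Route an inflow hydrograph with the classic Muskingum method.

    K (storage constant, hours) and X (weighting factor) are the kind
    of per-reach parameters a calibration (e.g., NSGA-II) would adjust.
    """
    denom = 2.0 * K * (1.0 - X) + dt
    c0 = (dt - 2.0 * K * X) / denom
    c1 = (dt + 2.0 * K * X) / denom
    c2 = (2.0 * K * (1.0 - X) - dt) / denom  # note c0 + c1 + c2 = 1
    q_out = np.empty_like(q_in)
    q_out[0] = q_in[0]  # assume an initial steady state
    for t in range(1, len(q_in)):
        q_out[t] = c0 * q_in[t] + c1 * q_in[t - 1] + c2 * q_out[t - 1]
    return q_out

q = np.array([10.0, 20.0, 50.0, 80.0, 60.0, 40.0, 25.0, 15.0, 10.0])
print(np.round(muskingum(q), 1))  # attenuated, delayed outflow peak
```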

  19. Quantifying and reducing statistical uncertainty in sample-based health program costing studies in low- and middle-income countries.

    PubMed

    Rivera-Rodriguez, Claudia L; Resch, Stephen; Haneuse, Sebastien

    2018-01-01

    In many low- and middle-income countries, the costs of delivering public health programs such as for HIV/AIDS, nutrition, and immunization are not routinely tracked. A number of recent studies have sought to estimate program costs on the basis of detailed information collected on a subsample of facilities. While unbiased estimates can be obtained via accurate measurement and appropriate analyses, they are subject to statistical uncertainty. Quantification of this uncertainty, for example, via standard errors and/or 95% confidence intervals, provides important contextual information for decision-makers and for the design of future costing studies. While other forms of uncertainty, such as that due to model misspecification, are considered and can be investigated through sensitivity analyses, statistical uncertainty is often not reported in studies estimating the total program costs. This may be due to a lack of awareness/understanding of (1) the technical details regarding uncertainty estimation and (2) the availability of software with which to calculate uncertainty for estimators resulting from complex surveys. We provide an overview of statistical uncertainty in the context of complex costing surveys, emphasizing the various potential specific sources that contribute to overall uncertainty. We describe how analysts can compute measures of uncertainty, either via appropriately derived formulae or through resampling techniques such as the bootstrap. We also provide an overview of calibration as a means of using additional auxiliary information that is readily available for the entire program, such as the total number of doses administered, to decrease uncertainty and thereby improve decision-making and the planning of future studies. A recent study of the national program for routine immunization in Honduras shows that uncertainty can be reduced by using information available prior to the study. This method can not only be used when estimating the total cost of delivering established health programs but also to decrease uncertainty when the interest lies in assessing the incremental effect of an intervention. Measures of statistical uncertainty associated with survey-based estimates of program costs, such as standard errors and 95% confidence intervals, provide important contextual information for health policy decision-making and key inputs for the design of future costing studies. Such measures are often not reported, possibly because of technical challenges associated with their calculation and a lack of awareness of appropriate software. Modern statistical analysis methods for survey data, such as calibration, provide a means to exploit additional information that is readily available but was not used in the design of the study to significantly improve the estimation of total cost through the reduction of statistical uncertainty.
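
    As a simplified illustration of the resampling approach mentioned above, the sketch below bootstraps the standard error and a 95% interval of a survey-weighted total-cost estimate. The costs and weights are hypothetical, and a real complex-survey analysis would resample within strata and clusters rather than treating facilities as i.i.d.

```python
import numpy as np

def bootstrap_total_cost(site_costs, weights, n_boot=2000, seed=0):
    """Bootstrap SE and 95% CI for a survey-weighted total cost.

    site_costs: costs measured at the sampled facilities.
    weights: survey weights (e.g., inverse inclusion probabilities).
    """
    rng = np.random.default_rng(seed)
    n = len(site_costs)
    totals = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, n, n)  # resample facilities with replacement
        totals[b] = np.sum(weights[idx] * site_costs[idx])
    est = np.sum(weights * site_costs)
    se = totals.std(ddof=1)
    lo, hi = np.percentile(totals, [2.5, 97.5])
    return est, se, (lo, hi)

costs = np.array([12000.0, 8500.0, 15200.0, 9900.0, 11000.0, 13400.0])
w = np.full(6, 25.0)  # each sampled facility represents 25 facilities
print(bootstrap_total_cost(costs, w))
```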

  20. High-accuracy self-calibration method for dual-axis rotation-modulating RLG-INS

    NASA Astrophysics Data System (ADS)

    Wei, Guo; Gao, Chunfeng; Wang, Qi; Wang, Qun; Long, Xingwu

    2017-05-01

    The inertial navigation system has been the core component of both military and civil navigation systems. Dual-axis rotation modulation can completely eliminate the constant errors of the inertial elements on all three axes, improving system accuracy, but the errors caused by misalignment angles and scale factor error cannot be eliminated through dual-axis rotation modulation, and discrete calibration methods cannot fulfill the requirements of high-accuracy calibration for mechanically dithered ring laser gyroscope navigation systems with shock absorbers. This paper analyzes the effect of calibration error during one modulation period and presents a new systematic self-calibration method for dual-axis rotation-modulating RLG-INS. A procedure for the self-calibration of dual-axis rotation-modulating RLG-INS is designed. The results of the self-calibration simulation experiment prove that this scheme can estimate all the errors in the calibration error model: the calibration precision of the inertial sensor scale factor error is less than 1 ppm and the misalignment is less than 5″. These results validate the systematic self-calibration method and prove its importance for the accuracy improvement of dual-axis rotation inertial navigation systems with mechanically dithered ring laser gyroscopes.

  1. Uncertainty propagation in the calibration equations for NTC thermistors

    NASA Astrophysics Data System (ADS)

    Liu, Guang; Guo, Liang; Liu, Chunlong; Wu, Qingwen

    2018-06-01

    The uncertainty propagation problem is quite important for temperature measurements, since we rely so much on the sensors and calibration equations. Although uncertainty propagation for platinum resistance or radiation thermometers is well known, there have been few publications concerning negative temperature coefficient (NTC) thermistors. Insight into the propagation characteristics of uncertainty that develop when equations are determined using the Lagrange interpolation or least-squares fitting method is presented here with respect to several of the most common equations used in NTC thermistor calibration. Within this work, analytical expressions of the propagated uncertainties for both fitting methods are derived for the uncertainties in the measured temperature and resistance at each calibration point. High-precision calibration of an NTC thermistor in a precision water bath was performed by means of the comparison method. Results show that, for both fitting methods, the propagated uncertainty is flat in the interpolation region but rises rapidly beyond the calibration range. Also, for temperatures interpolated between calibration points, the propagated uncertainty is generally no greater than that associated with the calibration points. For least-squares fitting, the propagated uncertainty is significantly reduced by increasing the number of calibration points and can be well kept below the uncertainty of the calibration points.
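
    As a concrete instance of the least-squares fitting case, the sketch below fits the Steinhart-Hart equation, a common NTC calibration equation, 1/T = a + b·ln R + c·(ln R)^3, to hypothetical resistance-temperature pairs and interpolates a new reading; the propagated uncertainty would then follow by applying the fitted model's sensitivity coefficients to the uncertainties at these calibration points.

```python
import numpy as np

# Hypothetical calibration points from a comparison calibration in a
# precision water bath: resistance (ohm) vs temperature (K).
R = np.array([32650.0, 25390.0, 19900.0, 15710.0, 12490.0, 10000.0])
T = np.array([278.15, 283.15, 288.15, 293.15, 298.15, 303.15])

# Least-squares fit of 1/T = a + b*ln(R) + c*(ln R)^3.
lnR = np.log(R)
A = np.column_stack([np.ones_like(lnR), lnR, lnR ** 3])
(a, b, c), *_ = np.linalg.lstsq(A, 1.0 / T, rcond=None)

# Temperature interpolated at a new resistance reading.
ln_new = np.log(14000.0)
t_new = 1.0 / (a + b * ln_new + c * ln_new ** 3)
print(f"T(14000 ohm) = {t_new:.3f} K")
```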

  2. A New Online Calibration Method Based on Lord's Bias-Correction.

    PubMed

    He, Yinhong; Chen, Ping; Li, Yong; Zhang, Shumei

    2017-09-01

    The online calibration technique has been widely employed to calibrate new items due to its advantages. Method A is the simplest online calibration method and has recently attracted much attention from researchers. However, a key assumption of Method A is that it treats the person-parameter estimates θ̂_s (obtained by maximum likelihood estimation [MLE]) as their true values θ_s, so deviation of the estimates from their true values may yield inaccurate item calibration when the deviation is nonignorable. To improve the performance of Method A, a new method, MLE-LBCI-Method A, is proposed. This new method combines a modified Lord's bias-correction method (maximum likelihood estimation-Lord's bias-correction with iteration [MLE-LBCI]) with the original Method A in an effort to correct the deviation of θ̂_s, which may adversely affect item calibration precision. Two simulation studies were carried out to explore the performance of both MLE-LBCI and MLE-LBCI-Method A under several scenarios. Simulation results showed that MLE-LBCI could make a significant improvement over the ML ability estimates, and MLE-LBCI-Method A outperformed Method A in almost all experimental conditions.
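
    The paper's MLE-LBCI procedure is specific to the reference. As a generic illustration of the underlying idea (correcting the bias of θ̂ before it is used downstream), the following sketch instead applies a parametric-bootstrap bias correction to the MLE of ability under a two-parameter logistic model; the item parameters and sizes are made up, and this is not the authors' algorithm.

      import numpy as np
      from scipy.optimize import minimize_scalar

      rng = np.random.default_rng(0)
      a = rng.uniform(0.8, 2.0, 30)    # discriminations of 30 operational items (assumed)
      b = rng.normal(0.0, 1.0, 30)     # difficulties (assumed)

      def p2pl(theta, a, b):
          return 1.0 / (1.0 + np.exp(-a * (theta - b)))

      def mle_theta(resp, a, b):
          nll = lambda t: -np.sum(resp * np.log(p2pl(t, a, b) + 1e-12)
                                  + (1 - resp) * np.log(1 - p2pl(t, a, b) + 1e-12))
          return minimize_scalar(nll, bounds=(-4, 4), method="bounded").x

      # Observed responses of one examinee with true theta = 1.5.
      resp = (rng.random(30) < p2pl(1.5, a, b)).astype(float)
      theta_hat = mle_theta(resp, a, b)

      # Parametric bootstrap: simulate from theta_hat, re-estimate, subtract mean bias.
      B = 500
      boot = np.array([mle_theta((rng.random(30) < p2pl(theta_hat, a, b)).astype(float), a, b)
                       for _ in range(B)])
      theta_bc = theta_hat - (boot.mean() - theta_hat)
      print(f"MLE = {theta_hat:.3f}, bias-corrected = {theta_bc:.3f}")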

  3. Calibration procedure of Hukseflux SR25 to Establish the Diffuse Reference for the Outdoor Broadband Radiometer Calibration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reda, Ibrahim M.; Andreas, Afshin M.

    2017-08-01

    Accurate pyranometer calibrations, traceable to internationally recognized standards, are critical for solar irradiance measurements. One calibration method is the component summation method, in which pyranometers are calibrated outdoors under clear sky conditions and the reference global solar irradiance is calculated as the sum of two reference components, the diffuse horizontal and subtended beam solar irradiances. The beam component is measured with pyrheliometers traceable to the World Radiometric Reference, while there is no internationally recognized reference for the diffuse component. In the absence of such a reference, we present a method to consistently calibrate pyranometers for measuring the diffuse component. The method is based on using a modified shade/unshade method and a pyranometer with less than 0.5 W/m² thermal offset. The calibration result shows that the responsivity of the Hukseflux SR25 pyranometer equals 10.98 µV/(W/m²) with ±0.86% uncertainty.
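
    The component-summation arithmetic itself is compact: the reference global irradiance is the pyrheliometer beam projected onto the horizontal plus the reference diffuse, and the responsivity is the pyranometer signal divided by that reference. A minimal sketch with illustrative numbers (not the SR25 data from the record):

      import numpy as np

      # Illustrative clear-sky calibration samples.
      zenith_deg = np.array([30.0, 35.0, 40.0, 45.0])
      beam = np.array([920.0, 900.0, 880.0, 850.0])        # pyrheliometer beam, W/m^2
      diffuse = np.array([95.0, 100.0, 104.0, 110.0])      # reference diffuse, W/m^2
      v_out = np.array([11150., 10820., 10410., 9880.])    # pyranometer signal, microvolts

      # Component-summation reference global irradiance: G = B*cos(Z) + D.
      g_ref = beam * np.cos(np.radians(zenith_deg)) + diffuse

      # Responsivity in microvolts per W/m^2, one value per sample, then the mean.
      rs = v_out / g_ref
      print(rs, rs.mean())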

  4. Radiation calibration for LWIR Hyperspectral Imager Spectrometer

    NASA Astrophysics Data System (ADS)

    Yang, Zhixiong; Yu, Chunchao; Zheng, Wei-jian; Lei, Zhenggang; Yan, Min; Yuan, Xiaochun; Zhang, Peizhong

    2014-11-01

    The radiometric calibration of a LWIR hyperspectral imaging spectrometer is presented. A laboratory radiometric calibration was developed for the LWIR interferometric hyperspectral imaging spectrometer prototype (CHIPED-I). Two-point linear calibration was carried out for the spectrometer using blackbody sources at two temperatures. First, the measured relative intensity was converted to the absolute radiance of the object. Then, the radiance was converted to a brightness-temperature spectrum by the brightness-temperature method. The results indicate that this radiometric calibration method performed well.
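
    A two-point linear radiometric calibration of the kind described reduces to solving for a gain and offset from the counts observed at two blackbody temperatures, then inverting Planck's law for brightness temperature. A sketch under assumed set points and counts (the wavenumber form of Planck's law is used; the constants are standard):

      import numpy as np

      C1 = 1.191042e-5   # first radiation constant, mW/(m^2 sr cm^-1) per (cm^-1)^3
      C2 = 1.4387752     # second radiation constant, K cm

      def planck_radiance(nu, T):
          """Blackbody spectral radiance at wavenumber nu (cm^-1), temperature T (K)."""
          return C1 * nu**3 / (np.exp(C2 * nu / T) - 1.0)

      def brightness_temperature(nu, L):
          """Invert Planck's law for brightness temperature."""
          return C2 * nu / np.log(C1 * nu**3 / L + 1.0)

      # Two-point linear calibration: raw counts while viewing cold/hot blackbodies.
      nu = 1000.0                        # example LWIR wavenumber (10 um band)
      T_cold, T_hot = 293.15, 333.15     # blackbody set points (illustrative)
      counts_cold, counts_hot = 1820.0, 2610.0

      L_cold, L_hot = planck_radiance(nu, T_cold), planck_radiance(nu, T_hot)
      gain = (L_hot - L_cold) / (counts_hot - counts_cold)
      offset = L_cold - gain * counts_cold

      counts_scene = 2235.0              # raw measurement of a scene pixel
      L_scene = gain * counts_scene + offset
      print(L_scene, brightness_temperature(nu, L_scene))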

  5. A proposed standard method for polarimetric calibration and calibration verification

    NASA Astrophysics Data System (ADS)

    Persons, Christopher M.; Jones, Michael W.; Farlow, Craig A.; Morell, L. Denise; Gulley, Michael G.; Spradley, Kevin D.

    2007-09-01

    Accurate calibration of polarimetric sensors is critical to reducing and analyzing phenomenology data, producing uniform polarimetric imagery for deployable sensors, and ensuring predictable performance of polarimetric algorithms. It is desirable to develop a standard calibration method, including verification reporting, in order to increase credibility with customers and foster communication and understanding within the polarimetric community. This paper seeks to facilitate discussions within the community on arriving at such standards. Both the calibration and verification methods presented here are performed easily with common polarimetric equipment, and are applicable to visible and infrared systems with either partial Stokes or full Stokes sensitivity. The calibration procedure has been used on infrared and visible polarimetric imagers over a six year period, and resulting imagery has been presented previously at conferences and workshops. The proposed calibration method involves the familiar calculation of the polarimetric data reduction matrix by measuring the polarimeter's response to a set of input Stokes vectors. With this method, however, linear combinations of Stokes vectors are used to generate highly accurate input states. This allows the direct measurement of all system effects, in contrast with fitting modeled calibration parameters to measured data. This direct measurement of the data reduction matrix allows higher order effects that are difficult to model to be discovered and corrected for in calibration. This paper begins with a detailed tutorial on the proposed calibration and verification reporting methods. Example results are then presented for a LWIR rotating half-wave retarder polarimeter.
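
    The calculation at the heart of the method, estimating a measurement matrix from responses to known input Stokes vectors and taking its pseudoinverse as the data reduction matrix, can be sketched in a few lines. The analyzer configurations and input states below are invented for illustration:

      import numpy as np

      # Known input Stokes vectors (columns), e.g. from a polarization state
      # generator; values here are illustrative unit-intensity states.
      S_in = np.array([[1, 1, 1, 1, 1, 1],
                       [1, -1, 0, 0, 0.5, -0.5],
                       [0, 0, 1, -1, 0.5, 0.5],
                       [0, 0, 0, 0, 0.7, -0.7]], dtype=float)   # 4 x K, K >= 4 states

      # Measured intensities: one row per analyzer configuration (4 configs here).
      # In a real calibration these come from the instrument; simulated here with
      # a "true" measurement matrix plus noise.
      A_true = np.array([[0.5, 0.5, 0.0, 0.0],
                         [0.5, -0.5, 0.0, 0.0],
                         [0.5, 0.0, 0.5, 0.0],
                         [0.5, 0.0, 0.0, 0.5]])
      rng = np.random.default_rng(1)
      M = A_true @ S_in + rng.normal(0, 1e-3, (4, S_in.shape[1]))

      # Least-squares estimate of the measurement matrix; the data reduction
      # matrix is its (pseudo)inverse: S_hat = W @ m for a new measurement m.
      A_hat = M @ np.linalg.pinv(S_in)
      W = np.linalg.pinv(A_hat)
      print(np.round(W @ (A_true @ np.array([1, 0.3, -0.2, 0.1])), 3))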

  6. A low-cost acoustic permeameter

    NASA Astrophysics Data System (ADS)

    Drake, Stephen A.; Selker, John S.; Higgins, Chad W.

    2017-04-01

    Intrinsic permeability is an important parameter that regulates air exchange through porous media such as snow. Standard methods of measuring snow permeability are inconvenient to perform outdoors, are fraught with sampling errors, and require specialized equipment, while bringing intact samples back to the laboratory is also challenging. To address these issues, we designed, built, and tested a low-cost acoustic permeameter that allows computation of volume-averaged intrinsic permeability for a homogeneous medium. In this paper, we validate acoustically derived permeability of homogeneous, reticulated foam samples by comparison with results derived using a standard flow-through permeameter. Acoustic permeameter elements were designed for use in snow, but the measurement methods are not snow-specific. The electronic components - consisting of a signal generator, amplifier, speaker, microphone, and oscilloscope - are inexpensive and easily obtainable. The system is suitable for outdoor use when it is not precipitating, but the electrical components require protection from the elements in inclement weather. The permeameter can be operated with the microphone either internally mounted or buried a known depth in the medium. The calibration method depends on the choice of microphone positioning. For an externally located microphone, calibration was based on a low-frequency approximation applied at 500 Hz that provided an estimate of both intrinsic permeability and tortuosity. The low-frequency approximation that we used is valid up to 2 kHz, but we chose 500 Hz because data reproducibility was maximized at this frequency. For an internally mounted microphone, calibration was based on attenuation at 50 Hz and returned only intrinsic permeability. We found that 50 Hz corresponded to a wavelength that minimized resonance frequencies in the acoustic tube and was also within the response limitations of the microphone. We used reticulated foam of known permeability (ranging from 2 × 10⁻⁷ to 3 × 10⁻⁹ m²) and estimated tortuosity of 1.05 to validate both methods. For the externally mounted microphone the mean normalized standard deviation was 6 % for permeability and 2 % for tortuosity. The mean relative error from known measurements was 17 % for permeability and 2 % for tortuosity. For the internally mounted microphone the mean normalized standard deviation for permeability was 10 % and the relative error was also 10 %. Permeability determination with an externally mounted microphone is less sensitive to environmental noise than with an internally mounted microphone and is therefore the recommended method. The approximation using the internally mounted microphone was developed as an alternative for circumstances in which placing the microphone in the medium is not feasible. Environmental noise degrades the precision of both methods and is recognizable as increased scatter for replicate data points.

  7. The Impact of Indoor and Outdoor Radiometer Calibration on Solar Measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Habte, Aron; Sengupta, Manajit; Andreas, Afshin

    2016-06-02

    This study addresses the effect of calibration methodologies on calibration responsivities and the resulting impact on radiometric measurements. The calibration responsivities used in this study are provided by NREL's broadband outdoor radiometer calibration (BORCAL) and a few prominent manufacturers. The BORCAL method provides outdoor calibration responsivity of pyranometers and pyrheliometers at a 45 degree solar zenith angle, and responsivity as a function of solar zenith angle, determined by clear-sky comparisons to reference irradiance. The BORCAL method also employs a thermal offset correction to the calibration responsivity of single-black thermopile detectors used in pyranometers. Indoor calibrations of radiometers by their manufacturers are performed using a stable artificial light source in a side-by-side comparison of the test radiometer under calibration to a reference radiometer of the same type. These different methods of calibration demonstrated 1 percent to 2 percent differences in solar irradiance measurement. Analyzing these values will ultimately enable a reduction in radiometric measurement uncertainties and assist in developing consensus on a standard for calibration.

  8. Particle swarm optimization algorithm based low cost magnetometer calibration

    NASA Astrophysics Data System (ADS)

    Ali, A. S.; Siddharth, S.; Syed, Z.; El-Sheimy, N.

    2011-12-01

    Inertial navigation systems (INS) consist of accelerometers, gyroscopes, and a microprocessor that provide inertial digital data from which position and orientation are obtained by integrating the specific forces and rotation rates. In addition to the accelerometers and gyroscopes, magnetometers can be used to derive the absolute user heading based on Earth's magnetic field. Unfortunately, the measurements of the magnetic field obtained with low cost sensors are corrupted by several errors, including manufacturing defects and external electromagnetic fields. Consequently, proper calibration of the magnetometer is required to achieve high accuracy heading measurements. In this paper, a Particle Swarm Optimization (PSO) based calibration algorithm is presented to estimate the values of the bias and scale factor of a low cost magnetometer. The main advantage of this technique is the use of artificial intelligence, which does not need any error modeling or awareness of the nonlinearity. The estimated bias and scale factor errors from the proposed algorithm improve the heading accuracy, and the results are statistically significant. The approach can also help in the development of Pedestrian Navigation Devices (PNDs) when combined with INS and GPS/Wi-Fi, especially in indoor environments.
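
    A minimal version of the approach, a hand-rolled particle swarm minimizing the spread of corrected field magnitudes around the local field strength, can be sketched as follows. The swarm constants, bounds, and simulated error values are assumptions, not the paper's settings:

      import numpy as np

      rng = np.random.default_rng(7)

      # Simulate raw magnetometer data: a 50 uT field in random orientations,
      # corrupted by per-axis bias and scale factor (illustrative values).
      B = 50.0
      true_scale = np.array([1.10, 0.95, 1.05])
      true_bias = np.array([8.0, -5.0, 3.0])
      u = rng.normal(size=(400, 3))
      u /= np.linalg.norm(u, axis=1, keepdims=True)
      raw = B * u * true_scale + true_bias + rng.normal(0, 0.1, (400, 3))

      def cost(p):
          # p = [bx, by, bz, sx, sy, sz]; corrected magnitude should equal B.
          corrected = (raw - p[:3]) / p[3:]
          return np.mean((np.linalg.norm(corrected, axis=1) - B) ** 2)

      # Minimal particle swarm optimizer (inertia + cognitive + social terms).
      n_part, n_iter = 40, 300
      lo = np.array([-20, -20, -20, 0.5, 0.5, 0.5])
      hi = np.array([20, 20, 20, 1.5, 1.5, 1.5])
      pos = rng.uniform(lo, hi, (n_part, 6))
      vel = np.zeros_like(pos)
      pbest, pbest_f = pos.copy(), np.array([cost(p) for p in pos])
      gbest = pbest[pbest_f.argmin()].copy()
      for _ in range(n_iter):
          r1, r2 = rng.random((n_part, 6)), rng.random((n_part, 6))
          vel = 0.72 * vel + 1.49 * r1 * (pbest - pos) + 1.49 * r2 * (gbest - pos)
          pos = np.clip(pos + vel, lo, hi)
          f = np.array([cost(p) for p in pos])
          improved = f < pbest_f
          pbest[improved], pbest_f[improved] = pos[improved], f[improved]
          gbest = pbest[pbest_f.argmin()].copy()

      print("estimated bias :", np.round(gbest[:3], 2))
      print("estimated scale:", np.round(gbest[3:], 3))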

  9. A Novel Miniature Wide-band Radiometer for Space Applications

    NASA Astrophysics Data System (ADS)

    Sykulska-Lawrence, Hanna

    2016-10-01

    Design, development and testing of a novel miniaturised infrared radiometer is described. The instrument opens up new possibilities in planetary science for deployment on smaller platforms - such as unmanned aerial vehicles and microprobes - to enable study of a planet's radiation balance, as well as terrestrial volcano plumes and trace gases in planetary atmospheres, using low-cost long-term observations. A key enabling development is thus miniaturised, low-power and well-calibrated instrumentation. The paper reports advances in miniature technology to perform high accuracy visible/IR remote sensing measurements. The infrared radiometer is akin to those widely used for remote sensing in earth and space applications, which are currently either large instruments on orbiting platforms or medium-sized payloads on balloons. We use MEMS microfabrication techniques to shrink a conventional design, combining the calibration benefits of large (>1 kg) radiometers with the flexibility and portability of a <10 g device. The instrument measures broadband (0.2 to 100 µm) upward and downward radiation fluxes, with built-in calibration capability incorporating traceability to temperature standards such as ITS-90. The miniature instrument described here was derived from a concept developed for a European Space Agency study, Dalomis (Proc. of 'i-SAIRAS 2005', Munich, 2005), which involved dropping multiple probes into the atmosphere of Venus from a balloon to sample numerous parts of the planet's complex weather systems. Data from such an in-situ instrument would complement information from a satellite remote sensing instrument or balloon radiosonde. Moreover, the addition of an internal calibration standard facilitates comparisons between datasets. One of the main challenges for a reduced-size device is calibration. We use an in-situ method whereby a blackbody source is integrated within the device and a micromirror switches the input to the detector between the measured signal and the calibration target. Achieving two well-calibrated radiometer channels within a small (<10 g) payload is made possible by using micromachining techniques.

  10. Model Robust Calibration: Method and Application to Electronically-Scanned Pressure Transducers

    NASA Technical Reports Server (NTRS)

    Walker, Eric L.; Starnes, B. Alden; Birch, Jeffery B.; Mays, James E.

    2010-01-01

    This article presents the application of a recently developed statistical regression method to the controlled instrument calibration problem. The statistical method of Model Robust Regression (MRR), developed by Mays, Birch, and Starnes, is shown to improve instrument calibration by reducing the reliance of the calibration on a predetermined parametric (e.g. polynomial, exponential, logarithmic) model. This is accomplished by allowing fits from the predetermined parametric model to be augmented by a certain portion of a fit to the residuals from the initial regression using a nonparametric (locally parametric) regression technique. The method is demonstrated for the absolute scale calibration of silicon-based pressure transducers.
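
    In outline, MRR fits the chosen parametric model, fits the residuals nonparametrically, and adds back a portion λ of the residual fit. The sketch below uses a first-order polynomial plus a Nadaraya-Watson kernel smoother with a fixed λ; the actual method selects λ data-adaptively, and the data here are synthetic:

      import numpy as np

      rng = np.random.default_rng(3)

      # Synthetic calibration data: a near-linear sensor with a small unmodeled bump.
      x = np.linspace(0, 1, 60)
      y = 2.0 + 5.0 * x + 0.3 * np.exp(-((x - 0.6) / 0.07) ** 2) + rng.normal(0, 0.02, 60)

      # Step 1: parametric fit (a first-order polynomial, as a user might specify).
      coef = np.polyfit(x, y, 1)
      y_par = np.polyval(coef, x)
      resid = y - y_par

      # Step 2: nonparametric (Nadaraya-Watson kernel) fit to the residuals.
      def kernel_smooth(x_train, r_train, x_eval, h=0.05):
          w = np.exp(-0.5 * ((x_eval[:, None] - x_train[None, :]) / h) ** 2)
          return (w * r_train).sum(axis=1) / w.sum(axis=1)

      # Step 3: augment the parametric fit with a portion (lambda) of the residual fit.
      lam = 0.8   # mixing parameter; MRR chooses this data-adaptively, fixed here
      y_mrr = y_par + lam * kernel_smooth(x, resid, x)

      print("RMSE parametric:", np.sqrt(np.mean((y - y_par) ** 2)))
      print("RMSE MRR       :", np.sqrt(np.mean((y - y_mrr) ** 2)))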

  11. Spectral characterization and calibration of AOTF spectrometers and hyper-spectral imaging system

    NASA Astrophysics Data System (ADS)

    Katrašnik, Jaka; Pernuš, Franjo; Likar, Boštjan

    2010-02-01

    The goal of this article is to present a novel method for spectral characterization and calibration of spectrometers and hyper-spectral imaging systems based on non-collinear acousto-optic tunable filters (AOTFs). The method characterizes the spectral tuning curve (frequency-wavelength characteristic) of the AOTF by matching the acquired and modeled spectra of an HgAr calibration lamp, which emits a line spectrum that can be well modeled via the AOTF transfer function. In this way, not only tuning curve characterization and the corresponding spectral calibration but also spectral resolution assessment is performed. The obtained results indicate that the proposed method is efficient, accurate and feasible for routine calibration of AOTF spectrometers and hyper-spectral imaging systems, and thereby a highly competitive alternative to existing calibration methods.

  12. High Accuracy Temperature Measurements Using RTDs with Current Loop Conditioning

    NASA Technical Reports Server (NTRS)

    Hill, Gerald M.

    1997-01-01

    To measure temperatures with a greater degree of accuracy than is possible with thermocouples, RTDs (Resistive Temperature Detectors) are typically used. Calibration standards use specialized high precision RTD probes with accuracies approaching 0.001 F. These are extremely delicate devices, and far too costly to be used in test facility instrumentation. Less costly sensors which are designed for aeronautical wind tunnel testing are available and can be readily adapted to probes, rakes, and test rigs. With proper signal conditioning of the sensor, temperature accuracies of 0.1 F are obtainable. For reasons that are explored in this paper, the Anderson current loop is the preferred method of signal conditioning. This scheme has been used in NASA Lewis Research Center's 9 x 15 Low Speed Wind Tunnel and is detailed here.

  13. The Calibration of AVHRR/3 Visible Dual Gain Using Meteosat-8 as a MODIS Calibration Transfer Medium

    NASA Technical Reports Server (NTRS)

    Avey, Lance; Garber, Donald; Nguyen, Louis; Minnis, Patrick

    2007-01-01

    This viewgraph presentation reviews the NOAA-17 AVHRR visible channels calibrated against MET-8/MODIS using dual gain regression methods. The topics include: 1) Motivation; 2) Methodology; 3) Dual Gain Regression Methods; 4) Examples of Regression Methods; 5) AVHRR/3 Regression Strategy; 6) Cross-Calibration Method; 7) Spectral Response Functions; 8) MET-8/NOAA-17; 9) Example of Gain Ratio Adjustment; 10) Effect of Mixed Low/High Count FOV; 11) Monitoring Dual Gains over Time; and 12) Conclusions.

  14. Systematic Calibration for Ultra-High Accuracy Inertial Measurement Units.

    PubMed

    Cai, Qingzhong; Yang, Gongliu; Song, Ningfang; Liu, Yiliang

    2016-06-22

    An inertial navigation system (INS) has been widely used in challenging GPS environments. With the rapid development of modern physics, atomic gyroscopes will come into use in the near future with a predicted accuracy of 5 × 10⁻⁶ °/h or better. However, existing calibration methods and devices cannot satisfy the accuracy requirements of future ultra-high accuracy inertial sensors. In this paper, an improved calibration model is established by introducing gyro g-sensitivity errors, accelerometer cross-coupling errors and lever arm errors. A systematic calibration method is proposed based on a 51-state Kalman filter and smoother. Simulation results show that the proposed calibration method can realize the estimation of all the parameters using a common dual-axis turntable. Laboratory and sailing tests prove that the position accuracy over five days of inertial navigation can be improved by about 8% with the proposed calibration method. The accuracy can be improved by at least 20% when the position accuracy of the atomic gyro INS reaches a level of 0.1 nautical miles/5 d. Compared with existing calibration methods, the proposed method, with more error sources and high-order small error parameters calibrated for ultra-high accuracy inertial measurement units (IMUs) using common turntables, has great application potential in future atomic gyro INSs.

  15. IMU-based online kinematic calibration of robot manipulator.

    PubMed

    Du, Guanglong; Zhang, Ping

    2013-01-01

    Robot calibration is a useful diagnostic method for improving positioning accuracy in robot production and maintenance. An online robot self-calibration method based on an inertial measurement unit (IMU) is presented in this paper. The method requires that the IMU be rigidly attached to the robot manipulator, which makes it possible to obtain the orientation of the manipulator from the orientation of the IMU in real time. This paper proposes an efficient approach that incorporates the Factored Quaternion Algorithm (FQA) and a Kalman filter (KF) to estimate the orientation of the IMU. Then, an extended Kalman filter (EKF) is used to estimate kinematic parameter errors. Using this orientation estimation method results in improved reliability and accuracy in determining the orientation of the manipulator. Compared with existing vision-based self-calibration methods, the great advantage of this method is that it does not need complex steps such as camera calibration, image capture, and corner detection, which makes the robot calibration procedure more autonomous in a dynamic manufacturing environment. Experimental studies on a GOOGOL GRB3016 robot show that this method has better accuracy, convenience, and effectiveness than vision-based methods.

  16. SU-E-I-38: Improved Metal Artifact Correction Using Adaptive Dual Energy Calibration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dong, X; Elder, E; Roper, J

    2015-06-15

    Purpose: The empirical dual energy calibration (EDEC) method corrects for beam-hardening artifacts but shows limited performance on metal artifact correction. In this work, we propose an adaptive dual energy calibration (ADEC) method to correct for metal artifacts. Results: Highly attenuating copper rods cause severe streaking artifacts on standard CT images. EDEC improves the image quality but cannot eliminate the streaking artifacts. Compared to EDEC, the proposed ADEC method further reduces the streaking resulting from metallic inserts and beam-hardening effects and obtains material decomposition images with significantly improved accuracy. Conclusion: ADEC is evaluated with the Shepp-Logan phantom and shows superior metal artifact correction performance. In the future, we will further evaluate the performance of the proposed method with phantom and patient data.

  17. Multimodal Spatial Calibration for Accurately Registering EEG Sensor Positions

    PubMed Central

    Chen, Shengyong; Xiao, Gang; Li, Xiaoli

    2014-01-01

    This paper proposes a fast and accurate calibration method for multiple multimodal sensors using a novel photogrammetry system for fast localization of EEG sensors. The EEG sensors are placed on the human head and the multimodal sensors are installed around the head to simultaneously obtain all EEG sensor positions. A multiple-view calibration process is implemented to obtain the transformations between views. We first develop an efficient local repair algorithm to improve the depth map, and a special calibration body is then designed. Based on these, accurate and robust calibration results can be achieved. We evaluate the proposed method using the corners of a chessboard calibration plate. Experimental results demonstrate that the proposed method achieves good performance, and it can be further applied to EEG source localization applications on the human brain. PMID:24803954

  18. An Innovative Software Tool Suite for Power Plant Model Validation and Parameter Calibration using PMU Measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Yuanyuan; Diao, Ruisheng; Huang, Renke

    Maintaining good quality of power plant stability models is of critical importance to ensure the secure and economic operation and planning of today's power grid with its increasingly stochastic and dynamic behavior. According to North American Electric Reliability Corporation (NERC) standards, all generators in North America with capacities larger than 10 MVA are required to validate their models every five years. Validation is quite costly and can significantly affect the revenue of generator owners, because the traditional staged testing requires generators to be taken offline. Over the past few years, validating and calibrating parameters using online measurements, including phasor measurement units (PMUs) and digital fault recorders (DFRs), has been proven to be a cost-effective approach. In this paper, an innovative open-source tool suite is presented for validating power plant models using the PPMV tool, identifying bad parameters with trajectory sensitivity analysis, and finally calibrating parameters using an ensemble Kalman filter (EnKF) based algorithm. The architectural design and the detailed procedures to run the tool suite are presented, with results of tests on a realistic hydro power plant using PMU measurements for 12 different events. The calibrated parameters of the machine, exciter, governor and PSS models demonstrate much better performance than the original models for all the events and show the robustness of the proposed calibration algorithm.
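
    As a toy illustration of EnKF-based parameter calibration (not the tool suite itself), the sketch below treats the unknown parameters of a first-order plant model as the ensemble state and updates them against a simulated measurement record, the same propagate-and-update pattern the record describes. The model, noise levels, and ensemble size are assumptions:

      import numpy as np

      rng = np.random.default_rng(5)

      # Toy "plant": first-order response y' = (K*u - y)/T with unknown gain K
      # and time constant T, calibrated from a measured step-response event.
      dt, steps, u = 0.1, 100, 1.0
      K_true, T_true = 2.0, 1.5

      def simulate(K, T):
          y, out = 0.0, []
          for _ in range(steps):
              y += dt * (K * u - y) / T
              out.append(y)
          return np.array(out)

      obs = simulate(K_true, T_true) + rng.normal(0, 0.02, steps)  # measured record

      # Ensemble Kalman filter with the parameters themselves as the state:
      # propagate each member through the model, then update with a gain built
      # from ensemble covariances between parameters and predicted outputs.
      N = 50
      ens = np.column_stack([rng.uniform(0.5, 4.0, N), rng.uniform(0.5, 4.0, N)])
      R = 0.02 ** 2
      for k in range(steps):
          pred = np.array([simulate(K, T)[k] for K, T in ens])   # wasteful but simple
          dp = ens - ens.mean(axis=0)
          dy = pred - pred.mean()
          P_py = dp.T @ dy / (N - 1)                             # param-output cov
          P_yy = dy @ dy / (N - 1) + R
          gain = P_py / P_yy
          ens += gain[None, :] * (obs[k] + rng.normal(0, 0.02, N) - pred)[:, None]
          ens = np.clip(ens, 0.1, 10.0)                          # keep params physical

      print("calibrated (K, T):", ens.mean(axis=0))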

  19. Developing a lower-cost atmospheric CO2 monitoring system using commercial NDIR sensor

    NASA Astrophysics Data System (ADS)

    Arzoumanian, E.; Bastos, A.; Gaynullin, B.; Laurent, O.; Vogel, F. R.

    2017-12-01

    Cities release to the atmosphere about 44% of global energy-related CO2. Accurate estimates of the magnitude of anthropogenic and natural urban emissions are clearly needed to assess their influence on the carbon balance. A dense ground-based CO2 monitoring network in cities would potentially allow retrieving sector-specific CO2 emission estimates when combined with an atmospheric inversion framework using reasonably accurate observations (ca. 1 ppm for hourly means). One major barrier to denser observation networks is the high cost of high precision instruments, or the high calibration cost of cheaper but unstable instruments. We have developed and tested novel inexpensive NDIR sensors for CO2 measurements which fulfil the cost and typical parameter requirements (i.e. signal stability, efficient handling, and connectivity) necessary for this task. Such sensors are essential for emission estimates in cities from continuous monitoring networks as well as for leak detection in MRV (monitoring, reporting, and verification) services for industrial sites. We conducted extensive laboratory tests (short and long-term repeatability, cross-sensitivities, etc.) on a series of prototypes, and the final versions were also tested in a climatic chamber. On four final HPP prototypes the sensitivities to pressure and temperature were precisely quantified and correction and calibration strategies developed. Furthermore, we fully integrated these HPP sensors into a Raspberry Pi platform containing the CO2 sensor and additional sensors (pressure, temperature and humidity), a gas supply pump and a fully automated data acquisition unit. This platform was deployed in parallel to Picarro G2401 instruments at the peri-urban site Saclay, next to Paris, and at the urban site Jussieu, Paris, France. These measurements were conducted over several months in order to characterize the long-term drift of our HPP instruments and the ability of the correction and calibration scheme to provide bias-free observations. From the lessons learned in the laboratory tests and field measurements, we developed a specific correction and calibration strategy for our NDIR sensors. The latest results and calibration strategies will be shown.

  20. Volumetric calibration of a plenoptic camera

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hall, Elise Munz; Fahringer, Timothy W.; Guildenbecher, Daniel Robert

    Here, the volumetric calibration of a plenoptic camera is explored to correct for inaccuracies due to real-world lens distortions and thin-lens assumptions in current processing methods. Two methods of volumetric calibration based on a polynomial mapping function that does not require knowledge of specific lens parameters are presented and compared to a calibration based on thin-lens assumptions. The first method, volumetric dewarping, is executed by creation of a volumetric representation of a scene using the thin-lens assumptions, which is then corrected in post-processing using a polynomial mapping function. The second method, direct light-field calibration, uses the polynomial mapping in creation of the initial volumetric representation to relate locations in object space directly to image sensor locations. The accuracy and feasibility of these methods is examined experimentally by capturing images of a known dot card at a variety of depths. Results suggest that use of a 3D polynomial mapping function provides a significant increase in reconstruction accuracy and that the achievable accuracy is similar using either polynomial-mapping-based method. Additionally, direct light-field calibration provides significant computational benefits by eliminating some intermediate processing steps found in other methods. Finally, the flexibility of this method is shown for a nonplanar calibration.

  1. Volumetric calibration of a plenoptic camera

    DOE PAGES

    Hall, Elise Munz; Fahringer, Timothy W.; Guildenbecher, Daniel Robert; ...

    2018-02-01

    Here, the volumetric calibration of a plenoptic camera is explored to correct for inaccuracies due to real-world lens distortions and thin-lens assumptions in current processing methods. Two methods of volumetric calibration based on a polynomial mapping function that does not require knowledge of specific lens parameters are presented and compared to a calibration based on thin-lens assumptions. The first method, volumetric dewarping, is executed by creation of a volumetric representation of a scene using the thin-lens assumptions, which is then corrected in post-processing using a polynomial mapping function. The second method, direct light-field calibration, uses the polynomial mapping in creation of the initial volumetric representation to relate locations in object space directly to image sensor locations. The accuracy and feasibility of these methods is examined experimentally by capturing images of a known dot card at a variety of depths. Results suggest that use of a 3D polynomial mapping function provides a significant increase in reconstruction accuracy and that the achievable accuracy is similar using either polynomial-mapping-based method. Additionally, direct light-field calibration provides significant computational benefits by eliminating some intermediate processing steps found in other methods. Finally, the flexibility of this method is shown for a nonplanar calibration.
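
    The polynomial-mapping idea is straightforward to prototype: fit a low-order 3D polynomial from the distorted reconstruction coordinates of known dot-card targets back to their true world coordinates, then apply it to new reconstructions. The distortion model and data below are invented stand-ins, not the paper's optics:

      import numpy as np

      rng = np.random.default_rng(9)

      # Hypothetical calibration data: known dot positions (world) and their
      # distorted locations in the initial thin-lens volumetric reconstruction.
      world = rng.uniform(-10, 10, (500, 3))
      def distort(p):   # stand-in for real lens distortion
          x, y, z = p.T
          return np.column_stack([x + 0.02 * x * z + 0.01 * y**2,
                                  y - 0.015 * y * z,
                                  z + 0.03 * (x**2 + y**2) / 100])
      measured = distort(world) + rng.normal(0, 0.01, (500, 3))

      # Second-order 3D polynomial basis (1, x, y, z, x^2, xy, ..., z^2): 10 terms.
      def basis(p):
          x, y, z = p.T
          return np.column_stack([np.ones_like(x), x, y, z,
                                  x*x, x*y, x*z, y*y, y*z, z*z])

      # Least-squares mapping from measured (distorted) space back to world space.
      A = basis(measured)
      C, *_ = np.linalg.lstsq(A, world, rcond=None)   # 10 x 3 coefficient matrix

      corrected = basis(measured) @ C
      print("RMS error before:", np.sqrt(np.mean((measured - world) ** 2)))
      print("RMS error after :", np.sqrt(np.mean((corrected - world) ** 2)))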

  2. Blind calibration of radio interferometric arrays using sparsity constraints and its implications for self-calibration

    NASA Astrophysics Data System (ADS)

    Chiarucci, Simone; Wijnholds, Stefan J.

    2018-02-01

    Blind calibration, i.e. calibration without a priori knowledge of the source model, is robust to the presence of unknown sources such as transient phenomena or (low-power) broad-band radio frequency interference that escaped detection. In this paper, we present a novel method for blind calibration of a radio interferometric array assuming that the observed field only contains a small number of discrete point sources. We show the huge computational advantage over previous blind calibration methods and we assess its statistical efficiency and robustness to noise and the quality of the initial estimate. We demonstrate the method on actual data from a Low-Frequency Array low-band antenna station showing that our blind calibration is able to recover the same gain solutions as the regular calibration approach, as expected from theory and simulations. We also discuss the implications of our findings for the robustness of regular self-calibration to poor starting models.

  3. An Accurate Projector Calibration Method Based on Polynomial Distortion Representation

    PubMed Central

    Liu, Miao; Sun, Changku; Huang, Shujun; Zhang, Zonghua

    2015-01-01

    In structured light measurement systems or 3D printing systems, the errors caused by optical distortion of a digital projector always affect measurement precision and cannot be ignored. Existing methods to calibrate projection distortion rely on a calibration plate and photogrammetry, so the calibration performance is largely affected by the quality of the plate and the imaging system. This paper proposes a new projector calibration approach that makes use of photodiodes to directly detect the light emitted from a digital projector. By analyzing the output sequence of the photoelectric module, the pixel coordinates can be accurately obtained by curve fitting. A polynomial distortion representation is employed to reduce the residuals of the traditional distortion representation model. Experimental results and performance evaluation show that the proposed calibration method avoids most of the disadvantages of traditional methods and achieves higher accuracy. The proposed method is also practically applicable to evaluating the geometric optical performance of other optical projection systems. PMID:26492247

  4. Novel Calibration Algorithm for a Three-Axis Strapdown Magnetometer

    PubMed Central

    Liu, Yan Xia; Li, Xi Sheng; Zhang, Xiao Juan; Feng, Yi Bo

    2014-01-01

    A complete error calibration model with 12 independent parameters is established by analyzing the three-axis magnetometer error mechanism. The model conforms to an ellipsoid restriction: the parameters of the ellipsoid equation are estimated, and the ellipsoid coefficient matrix is derived. However, the calibration matrix cannot be determined completely, as there are fewer ellipsoid parameters than calibration model parameters. Mathematically, the calibration matrix derived from the ellipsoid coefficient matrix by different matrix decomposition methods is not unique, and there exists an unknown rotation matrix R between them. This paper puts forward a constant intersection angle method (the angle between the geomagnetic field and the gravitational field is fixed) to estimate R. The Tikhonov method is adopted to address the problem that rounding or other errors may seriously affect the calculation of R when the condition number of the matrix is very large. The geomagnetic field vector and heading error are further corrected by R. The constant intersection angle method is convenient and practical, as it is free from any additional calibration procedure or coordinate transformation. The simulation experiment indicates that the heading error declines from ±1° calibrated by classical ellipsoid fitting to ±0.2° calibrated by the constant intersection angle method at a signal-to-noise ratio of 50 dB. The field experiment shows that the heading error is further corrected from ±0.8° calibrated by classical ellipsoid fitting to ±0.3° calibrated by the constant intersection angle method. PMID:24831110
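
    The ellipsoid-fitting stage that the record builds on can be sketched directly: fit a general quadric to the raw readings by linear least squares, then recover the center (hard-iron bias) and a matrix square root of the shape matrix (soft-iron/scale correction). The simulated errors below are illustrative assumptions:

      import numpy as np

      rng = np.random.default_rng(11)

      # Simulated raw readings: 50 uT field with soft-iron (scale) and hard-iron
      # (bias) distortions plus noise.
      B = 50.0
      S = np.diag([1.08, 0.94, 1.02]); bias = np.array([6.0, -4.0, 2.5])
      u = rng.normal(size=(600, 3)); u /= np.linalg.norm(u, axis=1, keepdims=True)
      raw = (B * u) @ S.T + bias + rng.normal(0, 0.1, (600, 3))

      # Fit the general quadric  x'Mx + 2g'x = 1  by linear least squares.
      x, y, z = raw.T
      D = np.column_stack([x*x, y*y, z*z, 2*x*y, 2*x*z, 2*y*z, 2*x, 2*y, 2*z])
      p, *_ = np.linalg.lstsq(D, np.ones(len(raw)), rcond=None)
      M = np.array([[p[0], p[3], p[4]],
                    [p[3], p[1], p[5]],
                    [p[4], p[5], p[2]]])
      g = p[6:9]

      # Ellipsoid center = hard-iron bias; shape matrix gives the soft-iron fix.
      center = -np.linalg.solve(M, g)
      s = 1.0 + center @ M @ center
      evals, evecs = np.linalg.eigh(M / s)
      W = evecs @ np.diag(np.sqrt(evals)) @ evecs.T   # matrix square root of M/s

      corrected = B * (raw - center) @ W.T
      print("estimated bias:", np.round(center, 2))
      print("corrected norms (~50):", np.round(np.linalg.norm(corrected, axis=1)[:5], 2))

    Note that the matrix square root is unique only up to an orthogonal factor; that residual rotation R is precisely what the record's constant intersection angle method estimates.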

  5. Application of single-image camera calibration for ultrasound augmented laparoscopic visualization

    NASA Astrophysics Data System (ADS)

    Liu, Xinyang; Su, He; Kang, Sukryool; Kane, Timothy D.; Shekhar, Raj

    2015-03-01

    Accurate calibration of laparoscopic cameras is essential for enabling many surgical visualization and navigation technologies such as the ultrasound-augmented visualization system that we have developed for laparoscopic surgery. In addition to accuracy and robustness, there is a practical need for a fast and easy camera calibration method that can be performed on demand in the operating room (OR). Conventional camera calibration methods are not suitable for OR use because they are lengthy and tedious, requiring acquisition of multiple images of a target pattern in its entirety to produce a satisfactory result. In this work, we evaluated the performance of a single-image camera calibration tool (rdCalib; Percieve3D, Coimbra, Portugal) featuring automatic detection of corner points in the image, whether partial or complete, of a custom target pattern. Intrinsic camera parameters of 5-mm and 10-mm standard Stryker® laparoscopes obtained using rdCalib and the well-accepted OpenCV camera calibration method were compared. Target registration error (TRE), as a measure of camera calibration accuracy for our optical tracking-based AR system, was also compared between the two calibration methods. Based on our experiments, single-image camera calibration yields consistent and accurate results (mean TRE = 1.18 ± 0.35 mm for the 5-mm scope and mean TRE = 1.13 ± 0.32 mm for the 10-mm scope), which are comparable to the results obtained using the OpenCV method with 30 images. The new single-image camera calibration method is promising for application to our augmented reality visualization system for laparoscopic surgery.

  6. Application of single-image camera calibration for ultrasound augmented laparoscopic visualization

    PubMed Central

    Liu, Xinyang; Su, He; Kang, Sukryool; Kane, Timothy D.; Shekhar, Raj

    2017-01-01

    Accurate calibration of laparoscopic cameras is essential for enabling many surgical visualization and navigation technologies such as the ultrasound-augmented visualization system that we have developed for laparoscopic surgery. In addition to accuracy and robustness, there is a practical need for a fast and easy camera calibration method that can be performed on demand in the operating room (OR). Conventional camera calibration methods are not suitable for OR use because they are lengthy and tedious, requiring acquisition of multiple images of a target pattern in its entirety to produce a satisfactory result. In this work, we evaluated the performance of a single-image camera calibration tool (rdCalib; Percieve3D, Coimbra, Portugal) featuring automatic detection of corner points in the image, whether partial or complete, of a custom target pattern. Intrinsic camera parameters of 5-mm and 10-mm standard Stryker® laparoscopes obtained using rdCalib and the well-accepted OpenCV camera calibration method were compared. Target registration error (TRE), as a measure of camera calibration accuracy for our optical tracking-based AR system, was also compared between the two calibration methods. Based on our experiments, single-image camera calibration yields consistent and accurate results (mean TRE = 1.18 ± 0.35 mm for the 5-mm scope and mean TRE = 1.13 ± 0.32 mm for the 10-mm scope), which are comparable to the results obtained using the OpenCV method with 30 images. The new single-image camera calibration method is promising for application to our augmented reality visualization system for laparoscopic surgery. PMID:28943703

  7. Application of single-image camera calibration for ultrasound augmented laparoscopic visualization.

    PubMed

    Liu, Xinyang; Su, He; Kang, Sukryool; Kane, Timothy D; Shekhar, Raj

    2015-03-01

    Accurate calibration of laparoscopic cameras is essential for enabling many surgical visualization and navigation technologies such as the ultrasound-augmented visualization system that we have developed for laparoscopic surgery. In addition to accuracy and robustness, there is a practical need for a fast and easy camera calibration method that can be performed on demand in the operating room (OR). Conventional camera calibration methods are not suitable for OR use because they are lengthy and tedious, requiring acquisition of multiple images of a target pattern in its entirety to produce a satisfactory result. In this work, we evaluated the performance of a single-image camera calibration tool (rdCalib; Percieve3D, Coimbra, Portugal) featuring automatic detection of corner points in the image, whether partial or complete, of a custom target pattern. Intrinsic camera parameters of 5-mm and 10-mm standard Stryker® laparoscopes obtained using rdCalib and the well-accepted OpenCV camera calibration method were compared. Target registration error (TRE), as a measure of camera calibration accuracy for our optical tracking-based AR system, was also compared between the two calibration methods. Based on our experiments, single-image camera calibration yields consistent and accurate results (mean TRE = 1.18 ± 0.35 mm for the 5-mm scope and mean TRE = 1.13 ± 0.32 mm for the 10-mm scope), which are comparable to the results obtained using the OpenCV method with 30 images. The new single-image camera calibration method is promising for application to our augmented reality visualization system for laparoscopic surgery.

  8. A novel calibration method of focused light field camera for 3-D reconstruction of flame temperature

    NASA Astrophysics Data System (ADS)

    Sun, Jun; Hossain, Md. Moinul; Xu, Chuan-Long; Zhang, Biao; Wang, Shi-Min

    2017-05-01

    This paper presents a novel geometric calibration method for a focused light field camera to trace the rays of flame radiance and to reconstruct the three-dimensional (3-D) temperature distribution of a flame. A calibration model is developed to calculate the corner points and their projections for the focused light field camera. The matching of main lens and microlens f-numbers is used as an additional constraint for the calibration. Geometric parameters of the focused light field camera are then obtained using the Levenberg-Marquardt algorithm. Totally focused images, in which all points are in focus, are utilized to validate the proposed calibration method. Calibration results are presented and discussed in detail. The maximum mean relative error of the calibration was found to be less than 0.13%, indicating that the proposed method is capable of calibrating the focused light field camera successfully. The parameters obtained by the calibration are then utilized to trace the rays of flame radiance. A least-squares QR-factorization algorithm with Planck's radiation law is used to reconstruct the 3-D temperature distribution of a flame. Experiments were carried out on an ethylene-air fired combustion test rig to reconstruct the temperature distribution of flames. The flame temperature obtained by the proposed method was then compared with that obtained using a high-precision thermocouple; the difference between the two measurements was found to be no greater than 6.7%. Experimental results demonstrate that the proposed calibration method and the applied measurement technique perform well in the reconstruction of flame temperature.

  9. Comparison of infusion pumps calibration methods

    NASA Astrophysics Data System (ADS)

    Batista, Elsa; Godinho, Isabel; do Céu Ferreira, Maria; Furtado, Andreia; Lucas, Peter; Silva, Claudia

    2017-12-01

    Nowadays, several types of infusion pump, such as syringe pumps and peristaltic pumps, are commonly used for drug delivery. These instruments present different measuring features and capacities according to their use and therapeutic application. In order to ensure the metrological traceability of this flow and volume measuring equipment, it is necessary to use suitable calibration methods and standards. Two different calibration methods can be used to determine the flow error of infusion pumps. One is the gravimetric method, considered a primary method and commonly used by National Metrology Institutes. The other, a secondary method, relies on an infusion device analyser (IDA) and is typically used by hospital maintenance offices. The suitability of the IDA calibration method was assessed by testing several infusion instruments at different flow rates using the gravimetric method. In addition, a measurement comparison between Portuguese accredited laboratories and hospital maintenance offices was performed under the coordination of the Portuguese Institute for Quality, the National Metrology Institute. The results obtained were directly related to the calibration method used and are presented in this paper. This work has been developed in the framework of the EURAMET projects EMRP MeDD and EMPIR 15SIP03.
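
    The gravimetric method amounts to timed collection of delivered liquid on a balance and conversion of mass to volume. A minimal sketch with assumed correction factors (the evaporation and air-buoyancy values are illustrative, not those of the cited comparisons):

      import numpy as np

      # Gravimetric check of a syringe pump: collected mass over timed intervals.
      set_flow_ml_h = 10.0
      interval_s = np.array([300.0, 300.0, 300.0])
      mass_g = np.array([0.8302, 0.8288, 0.8315])   # balance readings per interval

      rho_water = 0.99820    # g/mL at 20 C (approximate)
      evap_g = 0.0005        # estimated evaporation loss per interval (assumed)
      buoyancy = 1.00106     # air-buoyancy correction factor (typical, assumed)

      volume_ml = (mass_g + evap_g) * buoyancy / rho_water
      flow_ml_h = volume_ml / (interval_s / 3600.0)

      error_pct = 100.0 * (flow_ml_h.mean() - set_flow_ml_h) / set_flow_ml_h
      print(f"mean flow = {flow_ml_h.mean():.3f} mL/h, error = {error_pct:+.2f} %")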

  10. Low Frequency Error Analysis and Calibration for High-Resolution Optical Satellite's Uncontrolled Geometric Positioning

    NASA Astrophysics Data System (ADS)

    Wang, Mi; Fang, Chengcheng; Yang, Bo; Cheng, Yufeng

    2016-06-01

    Low frequency error is a key factor affecting the uncontrolled geometric positioning accuracy of high-resolution optical imagery. To guarantee the geometric quality of imagery, this paper presents an on-orbit calibration method for the low frequency error based on a geometric calibration field. First, we introduce the overall flow of low frequency error on-orbit analysis and calibration, which includes optical axis angle variation detection for the star sensors, relative calibration among star sensors, multi-star-sensor information fusion, and low frequency error model construction and verification. Second, we use the optical axis angle change detection method to analyze how the low frequency error varies. Third, we use relative calibration and information fusion among the star sensors to realize datum unification and high precision attitude output. Finally, we construct the low frequency error model and optimally estimate its parameters based on the DEM/DOM of the geometric calibration field. To evaluate the performance of the proposed calibration method, real data from a satellite of this type are used. Test results demonstrate that the calibration model describes the variation of the low frequency error well. The uncontrolled geometric positioning accuracy of the high-resolution optical imagery in the WGS-84 coordinate system is clearly improved after the step-wise calibration.

  11. On-Demand Calibration and Evaluation for Electromagnetically Tracked Laparoscope in Augmented Reality Visualization

    PubMed Central

    Liu, Xinyang; Plishker, William; Zaki, George; Kang, Sukryool; Kane, Timothy D.; Shekhar, Raj

    2017-01-01

    Purpose: Common camera calibration methods employed in current laparoscopic augmented reality systems require the acquisition of multiple images of an entire checkerboard pattern from various poses. This lengthy procedure prevents performing laparoscope calibration in the operating room (OR). The purpose of this work was to develop a fast calibration method for electromagnetically (EM) tracked laparoscopes, such that calibration can be performed in the OR on demand. Methods: We designed a mechanical tracking mount to uniquely and snugly position an EM sensor at an appropriate location on a conventional laparoscope. A tool named fCalib was developed to calibrate intrinsic camera parameters, distortion coefficients, and extrinsic parameters (the transformation between the scope lens coordinate system and the EM sensor coordinate system) using a single image that shows an arbitrary portion of a special target pattern. For quick evaluation of calibration results in the OR, we integrated a tube phantom with fCalib and overlaid a virtual representation of the tube on the live video scene. Results: We compared spatial target registration error between the common OpenCV method and the fCalib method in a laboratory setting. In addition, we compared the calibration re-projection error between the EM tracking-based fCalib and the optical tracking-based fCalib in a clinical setting. Our results suggest that the proposed method is comparable to the OpenCV method. However, changing the environment, e.g., inserting or removing surgical tools, would affect re-projection accuracy for the EM tracking-based approach. Computational time of the fCalib method averaged 14.0 s (range 3.5 s - 22.7 s). Conclusions: We developed and validated a prototype for fast calibration and evaluation of EM tracked conventional (forward viewing) laparoscopes. The calibration method achieved acceptable accuracy and was relatively fast and easy to perform in the OR on demand. PMID:27250853

  12. Temperature-Sensitive Coating Sensor Based on Hematite

    NASA Technical Reports Server (NTRS)

    Bencic, Timothy J.

    2011-01-01

    A temperature-sensitive coating, based on hematite (iron III oxide), has been developed to measure surface temperature using spectral techniques. The hematite powder is added to a binder that allows the mixture to be painted on the surface of a test specimen. The coating dynamically changes its relative spectral makeup, or color, with changes in temperature: the color changes from a reddish-brown appearance at room temperature (25 C) to a black-gray appearance at temperatures around 600 C. The color change is reversible and repeatable with temperature cycling from low to high and back to low temperatures. Detection of the spectral changes can be recorded by different sensors, including spectrometers, photodiodes, and cameras. Using a priori information obtained through calibration experiments in known thermal environments, the color change can then be calibrated to yield accurate quantitative temperature information. Temperature information can be obtained at a point, or over an entire surface, depending on the type of equipment used for data acquisition. Because this innovation uses spectrophotometry principles of operation, rather than the current methods, which use photoluminescence principles, white light can be used for illumination rather than high-intensity short-wavelength excitation. The generation of high-intensity white (or potentially filtered long-wavelength) light is much easier, and is used more prevalently for photography and video technologies. In outdoor tests, the Sun can be used for short durations as an illumination source as long as the amplitude remains relatively constant. The reflected light is also much higher in intensity than the light emitted by the inefficient current methods. Having a much brighter surface allows a wider array of detection schemes and devices. Because color change is the principle of operation, high-quality, lower-cost digital cameras can be used for detection, as opposed to the high-cost imagers needed for intensity measurements with the current methods. Alternative methods of detection are possible to increase the measurement sensitivity. For example, a monochrome camera can be used with an appropriate filter and a radiometric measurement of normalized intensity change that is proportional to the change in coating temperature. Using different spectral regions yields different sensitivities and calibration curves for converting intensity change to temperature units. Alternatively, using a color camera, a ratio of the standard red, green, and blue outputs can be used as a self-referenced change. The blue region (less than 500 nm) does not change nearly as much as the red region (greater than 575 nm), so a ratio of color intensities will yield a calibrated temperature image. The new temperature sensor coating is easy to apply, is inexpensive, can conform to complex surface shapes, and can serve as a global surface measurement system based on spectrophotometry. The color change, or relative intensity change, at different colors makes the optical detection under white light illumination, and the associated interpretation, much easier than in the detection systems of the current methods.
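
    The color-ratio detection scheme described at the end reduces to a one-dimensional calibration curve: record the red/blue ratio at known temperatures, fit a smooth curve, and invert it per pixel. A sketch with invented calibration pairs (not measured hematite data):

      import numpy as np

      # Hypothetical calibration: mean red/blue camera ratios recorded while the
      # coated coupon was held at known temperatures in a furnace.
      T_cal = np.array([25., 100., 200., 300., 400., 500., 600.])      # deg C
      ratio_cal = np.array([2.90, 2.61, 2.18, 1.76, 1.38, 1.07, 0.85]) # R/B ratio

      # Fit temperature as a polynomial in the ratio (the ratio is monotonic here).
      coef = np.polyfit(ratio_cal, T_cal, 3)

      def temperature_from_ratio(rb_ratio):
          return np.polyval(coef, rb_ratio)

      # Apply per pixel: a ratio image maps directly to a temperature image.
      ratio_img = np.full((4, 4), 1.5)
      print(temperature_from_ratio(ratio_img))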

  13. Design of an ultra-portable field transfer radiometer supporting automated vicarious calibration

    NASA Astrophysics Data System (ADS)

    Anderson, Nikolaus; Thome, Kurtis; Czapla-Myers, Jeffrey; Biggar, Stuart

    2015-09-01

    The University of Arizona Remote Sensing Group (RSG) began outfitting the radiometric calibration test site (RadCaTS) at Railroad Valley, Nevada, in 2004 for automated vicarious calibration of Earth-observing sensors. RadCaTS was upgraded to use RSG custom 8-band ground-viewing radiometers (GVRs) beginning in 2011, and currently four GVRs are deployed, providing an average reflectance for the test site. This measurement of ground reflectance is the most critical component of vicarious calibration using the reflectance-based method. In order to ensure the quality of these measurements, RSG has been exploring more efficient and accurate methods of on-site calibration evaluation. This work describes the design of, and initial results from, a small portable transfer radiometer for on-site validation of GVR calibration. Prior to deployment, RSG uses high accuracy laboratory calibration methods in order to provide radiance calibrations with low uncertainties for each GVR. After deployment, a solar-radiation-based calibration has typically been used. That method is highly dependent on a clear, stable atmosphere, requires at least two people to perform, is time consuming in post-processing, and depends on several large pieces of equipment. In order to provide more regular and more accurate calibration monitoring, the small portable transfer radiometer is designed for quick, one-person operation and on-site field calibration comparison. The radiometer is also suited for laboratory calibration use and thus could serve as a transfer radiometer calibration standard for ground-viewing radiometers of a RadCalNet site.

  14. The analytical calibration in (bio)imaging/mapping of the metallic elements in biological samples--definitions, nomenclature and strategies: state of the art.

    PubMed

    Jurowski, Kamil; Buszewski, Bogusław; Piekoszewski, Wojciech

    2015-01-01

    Nowadays, studies related to the distribution of metallic elements in biological samples are among the most important issues. Many articles are dedicated to specific analytical atomic spectrometry techniques used for mapping/(bio)imaging of metallic elements in various kinds of biological samples. However, the literature lacks articles dedicated to reviewing calibration strategies and their problems, nomenclature, definitions, and the ways and methods used to obtain quantitative distribution maps. The aim of this article was to characterize analytical calibration in the (bio)imaging/mapping of metallic elements in biological samples, including (1) nomenclature; (2) definitions; and (3) selected, sophisticated examples of calibration strategies with analytical calibration procedures applied in the different analytical methods currently used to study an element's distribution in biological samples/materials, such as LA ICP-MS, SIMS, EDS, XRF and others. The main emphasis was placed on the procedures and methodology of the analytical calibration strategy. Additionally, this work aims to systematize the nomenclature for the calibration terms: analytical calibration, analytical calibration method, analytical calibration procedure and analytical calibration strategy. The authors also want to popularize a division of calibration methods different from those hitherto used. This article is the first work in the literature that refers to and emphasizes the many different and complex aspects of analytical calibration problems in studies related to (bio)imaging/mapping of metallic elements in different kinds of biological samples. Copyright © 2014 Elsevier B.V. All rights reserved.

  15. Automatic Calibration Method for Driver’s Head Orientation in Natural Driving Environment

    PubMed Central

    Fu, Xianping; Guan, Xiao; Peli, Eli; Liu, Hongbo; Luo, Gang

    2013-01-01

    Gaze tracking is crucial for studying driver attention, detecting fatigue, and improving driver assistance systems, but it is difficult in natural driving environments due to nonuniform and highly variable illumination and large head movements. Traditional calibrations that require subjects to follow calibrators are very cumbersome to implement in daily driving situations. A new automatic calibration method, based on a single camera for determining head orientation, which utilizes the side mirrors, the rear-view mirror, the instrument board, and different zones of the windshield as calibration points, is presented in this paper. Supported by a self-learning algorithm, the system tracks the head and categorizes the head pose into 12 gaze zones based on facial features. A particle filter is used to estimate the head pose and obtain an accurate gaze zone by updating the calibration parameters. Experimental results show that, after several hours of driving, the automatic calibration method, without the driver's cooperation, can achieve the same accuracy as a manual calibration method. The mean error of estimated eye gaze was less than 5° in day and night driving. PMID:24639620

  16. Further Evidence on the Effect of Acquisition Policy and Process on Cost Growth of Major Defense Acquisition Programs

    DTIC Science & Technology

    2016-06-01

    Total Package Procurement (TPP) when it was judged to be practicable and, when not, Fixed Price Incentive Fee (FPIF) or Cost Plus Incentive Fee (CPIF) … development contracts in favor of CPIF. (Cost Plus Award Fee may not have been included in the contracting playbook yet.) As a general matter, Packard's … Group; CAPE Cost Assessment and Program Evaluation; CD Compact Disc; CE Current Estimate; CLC Calibrated Learning Curve; CPIF Cost Plus Incentive Fee

  17. Measuring ammonia concentrations and emissions from agricultural land and liquid surfaces: a review.

    PubMed

    Shah, Sanjay B; Westerman, Philip W; Arogo, Jactone

    2006-07-01

    Aerial ammonia concentrations (Cg) are measured using acid scrubbers, filter packs, denuders, or optical methods. Using Cg and wind speed or airflow rate, ammonia emission rate or flux can be directly estimated using enclosures or micrometeorological methods. Using nitrogen (N) recovery is not recommended, mainly because the different gaseous N components cannot be separated. Although low-cost and replicable, chambers modify environmental conditions and are suitable only for comparing treatments. Wind tunnels do not modify environmental conditions as much as chambers, but they may not be appropriate for determining ammonia fluxes; however, they can be used to compare emissions and test models. Larger wind tunnels that also simulate natural wind profiles may be more useful for comparing treatments than micrometeorological methods because the latter require larger plots and are, thus, difficult to replicate. For determining absolute ammonia flux, the micrometeorological methods are the most suitable because they are nonintrusive. For use with micrometeorological methods, both the passive denuders and optical methods give comparable accuracies, although the latter give real-time Cg but at a higher cost. The passive denuder is wind-weighted and also costs less than forced-air Cg measurement methods, but it requires calibration. When ammonia contamination during sample preparation and handling is a concern and separating the gas-phase ammonia and aerosol ammonium is not required, the scrubber is preferred over the passive denuder. The photothermal interferometer, because of its low detection limit and robustness, may hold potential for use in agriculture, but it requires evaluation. With its simpler theoretical basis and fewer restrictions, the integrated horizontal flux (IHF) method is preferable over other micrometeorological methods, particularly for lagoons, where berms and land-lagoon boundaries modify wind flow and flux gradients. With uniform wind flow, the ZINST method requiring measurement at one predetermined height may perform comparably to the IHF method but at a lower cost.
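
    The IHF approach mentioned above rests on a simple mass-balance relation; a hedged statement of its usual textbook form (symbols follow common micrometeorological usage, not notation from this review) is:

      F \;=\; \frac{1}{x}\int_{0}^{z_p} \bar{u}(z)\,\left[\bar{c}(z) - \bar{c}_b(z)\right]\,\mathrm{d}z

    where F is the ammonia flux per unit area of source, x the fetch length, z_p the top of the air layer modified by the source, and \bar{u}(z), \bar{c}(z) and \bar{c}_b(z) the time-averaged horizontal wind speed, downwind concentration profile and background concentration profile, respectively.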

  18. CNV-ROC: A cost effective, computer-aided analytical performance evaluator of chromosomal microarrays.

    PubMed

    Goodman, Corey W; Major, Heather J; Walls, William D; Sheffield, Val C; Casavant, Thomas L; Darbro, Benjamin W

    2015-04-01

    Chromosomal microarrays (CMAs) are routinely used in both research and clinical laboratories; yet, little attention has been given to the estimation of genome-wide true and false negatives during the assessment of these assays and how such information could be used to calibrate various algorithmic metrics to improve performance. Low-throughput, locus-specific methods such as fluorescence in situ hybridization (FISH), quantitative PCR (qPCR), or multiplex ligation-dependent probe amplification (MLPA) preclude rigorous calibration of various metrics used by copy number variant (CNV) detection algorithms. To aid this task, we have established a comparative methodology, CNV-ROC, which is capable of performing a high-throughput, low-cost analysis of CMAs that takes into consideration genome-wide true and false negatives. CNV-ROC uses a higher resolution microarray to confirm calls from a lower resolution microarray and provides a true measure of genome-wide performance metrics at the resolution offered by microarray testing. CNV-ROC also provides a very precise comparison of CNV calls between two microarray platforms without the need to establish an arbitrary degree of overlap. Comparison of CNVs across microarrays is done on a per-probe basis and receiver operator characteristic (ROC) analysis is used to calibrate algorithmic metrics, such as the log2 ratio threshold, to enhance CNV calling performance. CNV-ROC addresses a critical and consistently overlooked aspect of analytical assessments of genome-wide techniques like CMAs, namely the measurement and use of genome-wide true and false negative data for the calculation of performance metrics and comparison of CNV profiles between different microarray experiments. Copyright © 2015 Elsevier Inc. All rights reserved.
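
    As an illustration of the threshold-calibration idea described above (not the CNV-ROC implementation itself), a minimal Python sketch that picks a |log2 ratio| cutoff from per-probe truth labels via ROC analysis might look like this; the array names are hypothetical:

      import numpy as np
      from sklearn.metrics import roc_curve

      def calibrate_log2_threshold(truth, log2_ratios):
          """Pick the |log2 ratio| cutoff maximizing Youden's J (tpr - fpr).
          truth: 1 where the higher-resolution array confirms a CNV at a probe, else 0.
          log2_ratios: per-probe log2 ratios from the lower-resolution array."""
          fpr, tpr, thresholds = roc_curve(truth, np.abs(log2_ratios))
          return thresholds[np.argmax(tpr - fpr)]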

  19. Flood loss modelling with FLF-IT: a new flood loss function for Italian residential structures

    NASA Astrophysics Data System (ADS)

    Hasanzadeh Nafari, Roozbeh; Amadio, Mattia; Ngo, Tuan; Mysiak, Jaroslav

    2017-07-01

    The damage triggered by different flood events costs the Italian economy millions of euros each year. This cost is likely to increase in the future due to climate variability and economic development. In order to avoid or reduce such significant financial losses, risk management requires tools which can provide a reliable estimate of potential flood impacts across the country. Flood loss functions are an internationally accepted method for estimating physical flood damage in urban areas. In this study, we derived a new flood loss function for Italian residential structures (FLF-IT), on the basis of empirical damage data collected from a recent flood event in the region of Emilia-Romagna. The function was developed based on a new Australian approach (FLFA), which represents the confidence limits that exist around the parameterized functional depth-damage relationship. After model calibration, the performance of the model was validated for the prediction of loss ratios and absolute damage values. It was also contrasted with an uncalibrated relative model frequently used in Europe. In this regard, a three-fold cross-validation procedure was carried out over the empirical sample to measure the range of uncertainty from the actual damage data. The predictive capability has also been studied for some sub-classes of water depth. The validation procedure shows that the newly derived function performs well (no bias and only 10 % mean absolute error), especially when the water depth is high. Results of these validation tests illustrate the importance of model calibration. The advantages of the FLF-IT model over other Italian models include calibration with empirical data, consideration of the epistemic uncertainty of the data, and the ability to change parameters based on building practices across Italy.

  20. Fast wavelength calibration method for spectrometers based on waveguide comb optical filter

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yu, Zhengang; Department of Physics and Astronomy, Shanghai Jiao Tong University, Shanghai 200240; Huang, Meizhen, E-mail: mzhuang@sjtu.edu.cn

    2015-04-15

    A novel fast wavelength calibration method for spectrometers, based on a standard spectrometer and a double metal-cladding waveguide comb optical filter (WCOF), is proposed and demonstrated. By using the WCOF device, a wide-spectrum beam is comb-filtered, which is very suitable for spectrometer wavelength calibration. The influence of the waveguide filter's structural parameters and the beam incident angle on the comb absorption peaks' wavelength and bandwidth is also discussed. Verification experiments were carried out in the wavelength range of 200–1100 nm with satisfactory results. Compared with the traditional wavelength calibration method based on discrete, sparse atomic emission or absorption lines, the new method has several advantages: sufficient calibration data, high accuracy, short calibration time, suitability for production processes, and stability.
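
    A minimal sketch of the underlying idea, comb absorption peaks at known wavelengths anchoring a pixel-to-wavelength polynomial, might look like the following Python; the prominence value and polynomial order are illustrative assumptions, not values from the paper:

      import numpy as np
      from scipy.signal import find_peaks

      def pixel_to_wavelength(spectrum, comb_wavelengths_nm, order=3):
          """Fit a pixel -> wavelength polynomial from comb-filter absorption dips.
          spectrum: intensity per detector pixel; dips are found by negating it.
          comb_wavelengths_nm: reference wavelengths of the comb peaks, in order."""
          pixels, _ = find_peaks(-spectrum, prominence=0.05)   # absorption dips
          assert len(pixels) == len(comb_wavelengths_nm)
          coeffs = np.polyfit(pixels, comb_wavelengths_nm, order)
          return np.polyval(coeffs, np.arange(len(spectrum)))  # nm per pixel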

  1. Asymptotic Analysis Of The Total Least Squares ESPRIT Algorithm

    NASA Astrophysics Data System (ADS)

    Ottersten, B. E.; Viberg, M.; Kailath, T.

    1989-11-01

    This paper considers the problem of estimating the parameters of multiple narrowband signals arriving at an array of sensors. Modern approaches to this problem often involve costly procedures for calculating the estimates. The ESPRIT (Estimation of Signal Parameters via Rotational Invariance Techniques) algorithm was recently proposed as a means for obtaining accurate estimates without requiring a costly search of the parameter space. This method utilizes an array invariance to arrive at a computationally efficient multidimensional estimation procedure. Herein, the asymptotic distribution of the estimation error is derived for the Total Least Squares (TLS) version of ESPRIT. The Cramer-Rao Bound (CRB) for the ESPRIT problem formulation is also derived and found to coincide with the variance of the asymptotic distribution through numerical examples. The method is also compared to least squares ESPRIT and MUSIC as well as to the CRB for a calibrated array. Simulations indicate that the theoretic expressions can be used to accurately predict the performance of the algorithm.
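
    For readers unfamiliar with the estimator being analyzed, a compact numerical sketch of TLS ESPRIT for a uniform linear array follows; this is the standard textbook construction, not code from the paper, and the half-wavelength element spacing is an assumption:

      import numpy as np

      def tls_esprit(X, d, spacing_wl=0.5):
          """Estimate arrival angles (degrees) of d narrowband sources.
          X: (m sensors, N snapshots) complex array data, uniform linear array.
          spacing_wl: element spacing in wavelengths."""
          m, N = X.shape
          R = X @ X.conj().T / N                      # sample covariance
          _, V = np.linalg.eigh(R)                    # eigenvalues ascending
          Es = V[:, -d:]                              # signal subspace (m x d)
          E1, E2 = Es[:-1, :], Es[1:, :]              # two shifted subarrays
          _, _, Vh = np.linalg.svd(np.hstack([E1, E2]))
          Vfull = Vh.conj().T                         # (2d x 2d) right vectors
          Psi = -Vfull[:d, d:] @ np.linalg.inv(Vfull[d:, d:])   # TLS solution
          phases = np.angle(np.linalg.eigvals(Psi))   # rotational invariance
          return np.degrees(np.arcsin(phases / (2 * np.pi * spacing_wl)))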

  2. Automatic alignment method for calibration of hydrometers

    NASA Astrophysics Data System (ADS)

    Lee, Y. J.; Chang, K. H.; Chon, J. C.; Oh, C. Y.

    2004-04-01

    This paper presents a new method to automatically align specific scale-marks for the calibration of hydrometers. A hydrometer calibration system adopting the new method consists of a vision system, a stepping motor, and software to control the system. The vision system is composed of a CCD camera and a frame grabber, and is used to acquire images. The stepping motor moves the camera, which is attached to the vessel containing a reference liquid, along the hydrometer. The operating program has two main functions: to process images from the camera to find the position of the horizontal plane and to control the stepping motor for the alignment of the horizontal plane with a particular scale-mark. Any system adopting this automatic alignment method is a convenient and precise means of calibrating a hydrometer. The performance of the proposed method is illustrated by comparing the calibration results using the automatic alignment method with those obtained using the manual method.

  3. An overview of in-orbit radiometric calibration of typical satellite sensors

    NASA Astrophysics Data System (ADS)

    Zhou, G. Q.; Li, C. Y.; Yue, T.; Jiang, L. J.; Liu, N.; Sun, Y.; Li, M. Y.

    2015-06-01

    This paper reviews the development of in-orbit radiometric calibration methods over the past 40 years, summarizing the in-orbit radiometric calibration technology of typical satellite sensors in the visible/near-infrared bands and the thermal infrared band. It focuses on radiometric calibration methods for the visible/near-infrared bands, including lamp-based calibration and solar-radiation-based calibration. It summarizes the calibration technology of the Landsat series sensors (MSS, TM, ETM+, OLI, TIRS) and the SPOT series sensors (HRV, HRS), as well as ALI aboard EO-1 and IRMSS aboard the CBERS series satellites. By comparing the in-orbit radiometric calibration technology of the same type of satellite sensor across different periods, the paper analyzes the similarities and differences between calibration technologies; it also summarizes the advantages and disadvantages of the in-orbit radiometric calibration technology used in the same period by satellite sensors from different countries.

  4. Cloned plasmid DNA fragments as calibrators for controlling GMOs: different real-time duplex quantitative PCR methods.

    PubMed

    Taverniers, Isabel; Van Bockstaele, Erik; De Loose, Marc

    2004-03-01

    Analytical real-time PCR technology is a powerful tool for implementation of the GMO labeling regulations enforced in the EU. The quality of analytical measurement data obtained by quantitative real-time PCR depends on the correct use of calibrator and reference materials (RMs). For GMO methods of analysis, the choice of appropriate RMs is currently under debate. So far, genomic DNA solutions from certified reference materials (CRMs) are most often used as calibrators for GMO quantification by means of real-time PCR. However, due to some intrinsic features of these CRMs, errors may be expected in the estimations of DNA sequence quantities. In this paper, two new real-time PCR methods are presented for Roundup Ready soybean, in which two types of plasmid DNA fragments are used as calibrators. Single-target plasmids (STPs) diluted in a background of genomic DNA were used in the first method. Multiple-target plasmids (MTPs) containing both sequences in one molecule were used as calibrators for the second method. Both methods simultaneously detect a promoter 35S sequence as GMO-specific target and a lectin gene sequence as endogenous reference target in a duplex PCR. For the estimation of relative GMO percentages both "delta C(T)" and "standard curve" approaches are tested. Delta C(T) methods are based on direct comparison of measured C(T) values of both the GMO-specific target and the endogenous target. Standard curve methods measure absolute amounts of target copies or haploid genome equivalents. A duplex delta C(T) method with STP calibrators performed at least as well as a similar method with genomic DNA calibrators from commercial CRMs. Besides this, high quality results were obtained with a standard curve method using MTP calibrators. This paper demonstrates that plasmid DNA molecules containing either one or multiple target sequences form perfect alternative calibrators for GMO quantification and are especially suitable for duplex PCR reactions.
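
    The delta C(T) comparison described above is closely related to the familiar 2^-ddCt rule; a minimal Python sketch (assuming approximately 100% PCR efficiency for both targets, with hypothetical argument names) is:

      def gmo_percent(ct_gmo_sample, ct_ref_sample, ct_gmo_cal, ct_ref_cal,
                      cal_gmo_percent=100.0):
          """Relative GMO content from duplex Ct values via the 2^-ddCt rule.
          ct_gmo_*: GMO-specific target (e.g. promoter 35S) Ct values;
          ct_ref_*: endogenous reference target (e.g. lectin) Ct values."""
          ddct = (ct_gmo_sample - ct_ref_sample) - (ct_gmo_cal - ct_ref_cal)
          return cal_gmo_percent * 2.0 ** (-ddct)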

  5. Evaluating the accuracy of soil water sensors for irrigation scheduling to conserve freshwater

    NASA Astrophysics Data System (ADS)

    Ganjegunte, Girisha K.; Sheng, Zhuping; Clark, John A.

    2012-06-01

    In the Trans-Pecos area, pecan [Carya illinoinensis (Wangenh) C. Koch] is a major irrigated cash crop. Pecan trees require large amounts of water for their growth, and flood (border) irrigation is the most common irrigation method. The pecan crop is often over-irrigated under the traditional scheduling method of counting the number of calendar days since the previous irrigation. Studies in other pecan-growing areas have shown that water use efficiency can be improved significantly, and precious freshwater saved, by scheduling irrigation based on soil moisture conditions. This study evaluated the accuracy of three recent low-cost soil water sensors (ECH2O-5TE, Watermark 200SS and Tensiometer model R) in monitoring volumetric soil water content (θv) to develop improved irrigation scheduling in a mature pecan orchard in El Paso, Texas. Results indicated that while all three sensors successfully followed the general trends of soil moisture conditions during the growing season, actual measurements differed significantly. Statistical analyses indicated that the Tensiometer provided more accurate soil moisture data than the ECH2O-5TE and Watermark without site-specific calibration. The ECH2O-5TE overestimated the soil water content, while the Watermark and Tensiometer underestimated it. The results suggested poor accuracy for all three sensors when the factory calibration and the reported soil water retention curve for the study site's soil texture were used, indicating that the sensors need site-specific calibration to improve the accuracy of their soil water content estimates.
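
    Site-specific calibration of such sensors often amounts to regressing sensor output against independently determined water content; a minimal Python sketch (hypothetical data names; the linear form is an assumption, and some sensors need nonlinear fits) is:

      import numpy as np

      def site_calibration(sensor_theta, reference_theta):
          """Least-squares line mapping factory-calibrated sensor readings to
          reference volumetric water content (e.g. from gravimetric sampling)."""
          slope, intercept = np.polyfit(sensor_theta, reference_theta, 1)
          return lambda reading: slope * np.asarray(reading) + intercept

      # usage: correct = site_calibration(raw_readings, gravimetric_theta)
      #        theta_corrected = correct(0.21)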

  6. A new calibration methodology for thorax and upper limbs motion capture in children using magneto and inertial sensors.

    PubMed

    Ricci, Luca; Formica, Domenico; Sparaci, Laura; Lasorsa, Francesca Romana; Taffoni, Fabrizio; Tamilia, Eleonora; Guglielmelli, Eugenio

    2014-01-09

    Recent advances in wearable sensor technologies for motion capture have produced devices, mainly based on magneto and inertial measurement units (M-IMU), that are now suitable for out-of-the-lab use with children. In fact, their reduced size and weight and their wireless connectivity meet the requirement of minimal obtrusiveness and give scientists the possibility to analyze children's motion in daily life contexts. Typical use of M-IMU motion capture systems is based on attaching a sensing unit to each body segment of interest. The correct use of this setup requires a specific calibration methodology that allows mapping measurements from the sensors' frames of reference into useful kinematic information in the human limbs' frames of reference. The present work addresses this specific issue, presenting a calibration protocol to capture the kinematics of the upper limbs and thorax in typically developing (TD) children. The proposed method allows the construction, on each body segment, of a meaningful system of coordinates that is representative of real physiological motions and that is referred to as a functional frame (FF). We also present a novel cost function for the Levenberg-Marquardt algorithm, to retrieve the rotation matrices between each sensor frame (SF) and the corresponding FF. Reported results on a group of 40 children suggest that the method is repeatable and reliable, opening the way to the extensive use of this technology for out-of-the-lab motion capture in children.
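
    As a sketch of the kind of optimization involved (not the authors' cost function, which is specific to their protocol), fitting an SF-to-FF rotation from paired direction vectors with Levenberg-Marquardt could look like this in Python:

      import numpy as np
      from scipy.optimize import least_squares
      from scipy.spatial.transform import Rotation

      def fit_sf_to_ff(v_sf, v_ff):
          """Rotation R such that R @ v_sf[i] ~ v_ff[i] for paired unit vectors.
          v_sf, v_ff: (N, 3) directions measured in the sensor and functional
          frames (e.g. gravity and segment-axis directions during static poses)."""
          def residuals(rotvec):
              R = Rotation.from_rotvec(rotvec).as_matrix()
              return (v_sf @ R.T - v_ff).ravel()
          sol = least_squares(residuals, x0=np.zeros(3), method="lm")
          return Rotation.from_rotvec(sol.x).as_matrix()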

  7. Experimental Demonstration of In-Place Calibration for Time Domain Microwave Imaging System

    NASA Astrophysics Data System (ADS)

    Kwon, S.; Son, S.; Lee, K.

    2018-04-01

    In this study, an experimental demonstration of in-place calibration was conducted using the developed time domain measurement system. Experiments were conducted using three calibration methods: in-place calibration and two existing calibrations, namely array rotation and differential calibration. The in-place calibration uses dual receivers located at an equal distance from the transmitter. The signals received at the dual receivers contain similar unwanted components, that is, the directly received signal and antenna coupling. In contrast to simulations, the antennas are not perfectly matched and there may be unexpected environmental errors; thus, we used the developed experimental system to demonstrate the proposed method. The possible problems of low signal-to-noise ratio and clock jitter, which may exist in time domain systems, were mitigated by averaging repeatedly measured signals. According to the experimental results, the tumor was successfully detected using all three calibration methods. For a quantitative comparison between the existing rotation calibration and the proposed in-place calibration, the cross correlation was calculated against the reconstructed image of the ideal differential calibration. The mean cross correlation between the in-place calibration and the ideal differential calibration was 0.80, while that of the rotation calibration was 0.55. Furthermore, the simulation results were compared with the experimental results to verify the in-place calibration method; a quantitative analysis shows that the experimental results follow a tendency similar to the simulations.

  8. Influence of Installation Errors On the Output Data of the Piezoelectric Vibrations Transducers

    NASA Astrophysics Data System (ADS)

    Kozuch, Barbara; Chelmecki, Jaroslaw; Tatara, Tadeusz

    2017-10-01

    The paper examines the influence of installation errors of piezoelectric vibration transducers on the output data. PCB Piezotronics piezoelectric accelerometers were used to perform calibrations by comparison. The measurements were performed with a TMS 9155 Calibration Workstation, version 5.4.0, at frequencies in the range of 5 Hz to 2000 Hz. Accelerometers were fixed on the calibration station in a so-called back-to-back configuration in accordance with the applicable international standard, ISO 16063-21: Methods for the calibration of vibration and shock transducers - Part 21: Vibration calibration by comparison to a reference transducer. The first accelerometer was calibrated by suitable methods with traceability to a primary reference transducer. Each subsequent calibration was performed after changing one setting relative to the original calibration. The alterations reflected typical negligence and failures relative to the above-mentioned standard and operating guidelines, e.g., the sensor was not tightened or an appropriate substance was not applied. The mounting method required by the standard was also modified: different kinds of wax, light oil, grease and other assembly methods were used. The aim of the study was to verify the significance of the standard's requirements and to estimate their validity. The authors also wanted to highlight the most significant calibration errors. Moreover, the relation between the various appropriate mounting methods was demonstrated.

  9. Efficient calibration for imperfect computer models

    DOE PAGES

    Tuo, Rui; Wu, C. F. Jeff

    2015-12-01

    Many computer models contain unknown parameters which need to be estimated using physical observations. However, the commonly used calibration method based on Gaussian process models may lead to unreasonable estimates for imperfect computer models. In this work, we extend that line of study to calibration problems with stochastic physical data. We propose a novel method, called L2 calibration, and show its semiparametric efficiency. The conventional method of ordinary least squares is also studied; theoretical analysis shows that it is consistent but not efficient. Numerical examples show that the proposed method outperforms the existing ones.
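
    In rough terms (our paraphrase, with generic notation), the L2 calibration estimator picks the parameter value that makes the computer model output y^s closest, in the L2 sense, to a nonparametric estimate of the true process:

      \hat{\theta}_{L_2} \;=\; \operatorname*{arg\,min}_{\theta \in \Theta}
          \left\| \hat{\zeta}(\cdot) - y^s(\cdot,\theta) \right\|_{L_2(\Omega)}

    where \hat{\zeta} is an estimate of the physical response surface (e.g. from a Gaussian process fit to the observations) and \Omega is the input domain.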

  10. Augmented classical least squares multivariate spectral analysis

    DOEpatents

    Haaland, David M.; Melgaard, David K.

    2004-02-03

    A method of multivariate spectral analysis, termed augmented classical least squares (ACLS), provides an improved CLS calibration model when unmodeled sources of spectral variation are contained in a calibration sample set. The ACLS methods use information derived from component or spectral residuals during the CLS calibration to provide an improved calibration-augmented CLS model. The ACLS methods are based on CLS so that they retain the qualitative benefits of CLS, yet they have the flexibility of PLS and other hybrid techniques in that they can define a prediction model even with unmodeled sources of spectral variation that are not explicitly included in the calibration model. The unmodeled sources of spectral variation may be unknown constituents, constituents with unknown concentrations, nonlinear responses, non-uniform and correlated errors, or other sources of spectral variation that are present in the calibration sample spectra. Also, since the various ACLS methods are based on CLS, they can incorporate the new prediction-augmented CLS (PACLS) method of updating the prediction model for new sources of spectral variation contained in the prediction sample set without having to return to the calibration process. The ACLS methods can also be applied to alternating least squares models. The ACLS methods can be applied to all types of multivariate data.
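
    A heavily simplified numerical sketch of the augmentation idea, as we read it from the abstract (not the patented algorithm itself), appears below: fit CLS, then append leading singular vectors of the spectral residuals to the pure-component matrix before prediction. All array shapes and names are illustrative:

      import numpy as np

      def acls_calibrate(A, C, n_aug=2):
          """A: (samples x wavelengths) calibration spectra; C: (samples x k)
          known concentrations. Returns a residual-augmented pure-component matrix."""
          K = np.linalg.lstsq(C, A, rcond=None)[0]       # CLS fit: A ~ C K
          E = A - C @ K                                  # unmodeled spectral variation
          _, _, Vt = np.linalg.svd(E, full_matrices=False)
          return np.vstack([K, Vt[:n_aug]])              # augment with residual shapes

      def acls_predict(A_new, K_aug, k):
          """Solve A_new ~ C_all K_aug, keep the first k (chemical) components."""
          C_all = np.linalg.lstsq(K_aug.T, A_new.T, rcond=None)[0].T
          return C_all[:, :k]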

  11. Augmented Classical Least Squares Multivariate Spectral Analysis

    DOEpatents

    Haaland, David M.; Melgaard, David K.

    2005-07-26

    A method of multivariate spectral analysis, termed augmented classical least squares (ACLS), provides an improved CLS calibration model when unmodeled sources of spectral variation are contained in a calibration sample set. The ACLS methods use information derived from component or spectral residuals during the CLS calibration to provide an improved calibration-augmented CLS model. The ACLS methods are based on CLS so that they retain the qualitative benefits of CLS, yet they have the flexibility of PLS and other hybrid techniques in that they can define a prediction model even with unmodeled sources of spectral variation that are not explicitly included in the calibration model. The unmodeled sources of spectral variation may be unknown constituents, constituents with unknown concentrations, nonlinear responses, non-uniform and correlated errors, or other sources of spectral variation that are present in the calibration sample spectra. Also, since the various ACLS methods are based on CLS, they can incorporate the new prediction-augmented CLS (PACLS) method of updating the prediction model for new sources of spectral variation contained in the prediction sample set without having to return to the calibration process. The ACLS methods can also be applied to alternating least squares models. The ACLS methods can be applied to all types of multivariate data.

  12. Augmented Classical Least Squares Multivariate Spectral Analysis

    DOEpatents

    Haaland, David M.; Melgaard, David K.

    2005-01-11

    A method of multivariate spectral analysis, termed augmented classical least squares (ACLS), provides an improved CLS calibration model when unmodeled sources of spectral variation are contained in a calibration sample set. The ACLS methods use information derived from component or spectral residuals during the CLS calibration to provide an improved calibration-augmented CLS model. The ACLS methods are based on CLS so that they retain the qualitative benefits of CLS, yet they have the flexibility of PLS and other hybrid techniques in that they can define a prediction model even with unmodeled sources of spectral variation that are not explicitly included in the calibration model. The unmodeled sources of spectral variation may be unknown constituents, constituents with unknown concentrations, nonlinear responses, non-uniform and correlated errors, or other sources of spectral variation that are present in the calibration sample spectra. Also, since the various ACLS methods are based on CLS, they can incorporate the new prediction-augmented CLS (PACLS) method of updating the prediction model for new sources of spectral variation contained in the prediction sample set without having to return to the calibration process. The ACLS methods can also be applied to alternating least squares models. The ACLS methods can be applied to all types of multivariate data.

  13. Quantitative analysis of Sudan dye adulteration in paprika powder using FTIR spectroscopy.

    PubMed

    Lohumi, Santosh; Joshi, Ritu; Kandpal, Lalit Mohan; Lee, Hoonsoo; Kim, Moon S; Cho, Hyunjeong; Mo, Changyeun; Seo, Young-Wook; Rahman, Anisur; Cho, Byoung-Kwan

    2017-05-01

    As adulteration of foodstuffs with Sudan dye, especially paprika- and chilli-containing products, has been reported with some frequency, this issue has become one focal point for addressing food safety. FTIR spectroscopy has been used extensively as an analytical method for quality control and safety determination for food products. Thus, the use of FTIR spectroscopy for rapid determination of Sudan dye in paprika powder was investigated in this study. A net analyte signal (NAS)-based methodology, named HLA/GO (hybrid linear analysis in the literature), was applied to FTIR spectral data to predict Sudan dye concentration. The calibration and validation sets were designed to evaluate the performance of the multivariate method. The obtained results had a high determination coefficient (R²) of 0.98 and low root mean square error (RMSE) of 0.026% for the calibration set, and an R² of 0.97 and RMSE of 0.05% for the validation set. The model was further validated using a second validation set and through the figures of merit, such as sensitivity, selectivity, and limits of detection and quantification. The proposed technique of FTIR combined with HLA/GO is rapid, simple and low cost, making this approach advantageous when compared with the main alternative methods based on liquid chromatography (LC) techniques.

  14. Simultaneous determination of hydroquinone, catechol and resorcinol by voltammetry using graphene screen-printed electrodes and partial least squares calibration.

    PubMed

    Aragó, Miriam; Ariño, Cristina; Dago, Àngela; Díaz-Cruz, José Manuel; Esteban, Miquel

    2016-11-01

    Catechol (CC), resorcinol (RC) and hydroquinone (HQ) are dihydroxybenzene isomers that usually coexist in different samples and can be determined using voltammetric techniques, taking advantage of their fast response, high sensitivity and selectivity, cheap instrumentation, and simple and time-saving operation modes. However, strong overlapping of the CC and HQ signals is observed, hindering their accurate analysis. In the present work, the combination of differential pulse voltammetry with graphene screen-printed electrodes (allowing detection limits of 2.7, 1.7 and 2.4 µmol L⁻¹ for HQ, CC and RC, respectively) and data analysis by partial least squares calibration (giving root mean square errors of prediction, RMSEP values, of 2.6, 4.1 and 2.3 for HQ, CC and RC, respectively) is proposed as a powerful tool for the quantification of mixtures of these dihydroxybenzene isomers. The commercial availability of the screen-printed devices and the low cost and simplicity of the analysis suggest that the proposed method can be a valuable alternative to chromatographic and electrophoretic methods for the considered species. The method has been applied to the analysis of these isomers in spiked tap water. Copyright © 2016 Elsevier B.V. All rights reserved.
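
    A minimal sketch of such a PLS calibration in Python follows; the synthetic data stands in for real voltammograms, and the number of latent variables would in practice be chosen by cross-validation:

      import numpy as np
      from sklearn.cross_decomposition import PLSRegression

      rng = np.random.default_rng(0)
      S = rng.random((3, 200))                      # 3 overlapping pure signals
      C_train, C_test = rng.random((40, 3)), rng.random((10, 3))
      X_train = C_train @ S + 0.01 * rng.standard_normal((40, 200))
      X_test = C_test @ S + 0.01 * rng.standard_normal((10, 200))

      # X: voltammograms (rows); C: [HQ, CC, RC] concentrations per sample
      pls = PLSRegression(n_components=5).fit(X_train, C_train)
      rmsep = np.sqrt(((pls.predict(X_test) - C_test) ** 2).mean(axis=0))
      print(rmsep)                                  # per-analyte prediction error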

  15. Full Flight Envelope Direct Thrust Measurement on a Supersonic Aircraft

    NASA Technical Reports Server (NTRS)

    Conners, Timothy R.; Sims, Robert L.

    1998-01-01

    Direct thrust measurement using strain gages offers advantages over analytically-based thrust calculation methods. For flight test applications, the direct measurement method typically uses a simpler sensor arrangement and minimal data processing compared to analytical techniques, which normally require costly engine modeling and multisensor arrangements throughout the engine. Conversely, direct thrust measurement has historically produced less than desirable accuracy because of difficulty in mounting and calibrating the strain gages and the inability to account for secondary forces that influence the thrust reading at the engine mounts. Consequently, the strain-gage technique has normally been used for simple engine arrangements and primarily in the subsonic speed range. This paper presents the results of a strain gage-based direct thrust-measurement technique developed by the NASA Dryden Flight Research Center and successfully applied to the full flight envelope of an F-15 aircraft powered by two F100-PW-229 turbofan engines. Measurements have been obtained at quasi-steady-state operating conditions at maximum non-augmented and maximum augmented power throughout the altitude range of the vehicle and to a maximum speed of Mach 2.0 and are compared against results from two analytically-based thrust calculation methods. The strain-gage installation and calibration processes are also described.

  16. On-demand calibration and evaluation for electromagnetically tracked laparoscope in augmented reality visualization.

    PubMed

    Liu, Xinyang; Plishker, William; Zaki, George; Kang, Sukryool; Kane, Timothy D; Shekhar, Raj

    2016-06-01

    Common camera calibration methods employed in current laparoscopic augmented reality systems require the acquisition of multiple images of an entire checkerboard pattern from various poses. This lengthy procedure prevents performing laparoscope calibration in the operating room (OR). The purpose of this work was to develop a fast calibration method for electromagnetically (EM) tracked laparoscopes, such that the calibration can be performed in the OR on demand. We designed a mechanical tracking mount to uniquely and snugly position an EM sensor at an appropriate location on a conventional laparoscope. A tool named fCalib was developed to calibrate intrinsic camera parameters, distortion coefficients, and extrinsic parameters (the transformation between the scope lens coordinate system and the EM sensor coordinate system) using a single image that shows an arbitrary portion of a special target pattern. For quick evaluation of calibration results in the OR, we integrated a tube phantom with the fCalib prototype and overlaid a virtual representation of the tube on the live video scene. We compared spatial target registration error between the common OpenCV method and the fCalib method in a laboratory setting. In addition, we compared the calibration re-projection error between the EM tracking-based fCalib and the optical tracking-based fCalib in a clinical setting. Our results suggest that the proposed method is comparable to the OpenCV method. However, changing the environment, e.g., inserting or removing surgical tools, might affect re-projection accuracy for the EM tracking-based approach. Computational time of the fCalib method averaged 14.0 s (range 3.5 s-22.7 s). We developed and validated a prototype for fast calibration and evaluation of EM tracked conventional (forward viewing) laparoscopes. The calibration method achieved acceptable accuracy and was relatively fast and easy to perform in the OR on demand.
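
    For contrast with fCalib's single-image approach, the multi-pose checkerboard baseline (the "common OpenCV method" referred to above) typically amounts to a few library calls; a sketch, with a hypothetical image set and an assumed board size:

      import glob
      import cv2
      import numpy as np

      pattern = (9, 6)                                  # inner-corner grid (assumed)
      objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
      objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

      obj_pts, img_pts = [], []
      for fname in glob.glob("calib_*.png"):            # hypothetical image files
          gray = cv2.imread(fname, cv2.IMREAD_GRAYSCALE)
          found, corners = cv2.findChessboardCorners(gray, pattern)
          if found:
              obj_pts.append(objp)
              img_pts.append(corners)

      # intrinsic matrix K and distortion coefficients from all detected views
      rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
          obj_pts, img_pts, gray.shape[::-1], None, None)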

  17. Developing and refining NIR calibrations for total carbohydrate composition and isoflavones and saponins in ground whole soy meal

    USDA-ARS?s Scientific Manuscript database

    Although many near infrared (NIR) spectrometric calibrations exist for a variety of components in soy, current calibration methods are often limited by either a small sample size on which the calibrations are based or a wide variation in sample preparation and measurement methods, which yields unrel...

  18. Multiplexed fluctuation-dissipation-theorem calibration of optical tweezers inside living cells

    NASA Astrophysics Data System (ADS)

    Yan, Hao; Johnston, Jessica F.; Cahn, Sidney B.; King, Megan C.; Mochrie, Simon G. J.

    2017-11-01

    In order to apply optical tweezers-based force measurements within an uncharacterized viscoelastic medium such as the cytoplasm of a living cell, a quantitative calibration method that may be applied in this complex environment is needed. We describe an improved version of the fluctuation-dissipation-theorem calibration method, which has been developed to perform in situ calibration in viscoelastic media without prior knowledge of the trapped object. Using this calibration procedure, it is possible to extract values of the medium's viscoelastic moduli as well as the force constant describing the optical trap. To demonstrate our method, we calibrate an optical trap in water, in polyethylene oxide solutions of different concentrations, and inside living fission yeast (S. pombe).
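
    The theorem behind the method relates, in thermal equilibrium, the spectrum of spontaneous position fluctuations to the imaginary part of the mechanical response function; in one common (two-sided) convention it reads:

      S_x(\omega) \;=\; \frac{2 k_B T}{\omega}\,\operatorname{Im}\chi(\omega)

    where S_x is the position power spectral density and \chi the response of position to a small applied force. This is the generic textbook relation quoted with our own convention, not the paper's working equations; comparing a measured fluctuation spectrum with a separately measured response is what allows both the viscoelastic moduli and the trap force constant to be extracted.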

  19. A Self-Adaptive Model-Based Wi-Fi Indoor Localization Method.

    PubMed

    Tuta, Jure; Juric, Matjaz B

    2016-12-06

    This paper presents a novel method for indoor localization, developed with the main aim of making it useful for real-world deployments. Many indoor localization methods exist, yet they have several disadvantages in real-world deployments: some are static, which is not suitable for long-term usage; some require costly human recalibration procedures; and others require special hardware such as Wi-Fi anchors and transponders. Our method is self-calibrating and self-adaptive, thus maintenance-free, and based on Wi-Fi only. We have employed two well-known propagation models, free space path loss and the ITU model, which we have extended with additional parameters for better propagation simulation. Our self-calibrating procedure utilizes one propagation model to infer parameters of the space and the other to simulate the propagation of the signal, without requiring any additional hardware beside Wi-Fi access points, which is suitable for real-world usage. Our method is also one of the few model-based, Wi-Fi-only, self-adaptive approaches that do not require the mobile terminal to be in access-point mode. The only input requirements of the method are the Wi-Fi access point positions and the positions and properties of the walls. Our method has been evaluated in single- and multi-room environments, with measured mean errors of 2-3 and 3-4 m, respectively, which is similar to existing methods. The evaluation has proven that usable localization accuracy can be achieved in real-world environments solely by the proposed Wi-Fi method, which relies on simple hardware and software requirements.
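
    The free space path loss idea used above is, in its common log-distance form, a one-line relation; a small Python sketch (the reference power, path-loss exponent and distances are illustrative numbers, and the ITU variant adds floor/wall terms omitted here):

      import numpy as np

      def rss_dbm(d, rss_d0=-40.0, n=2.7, d0=1.0):
          """Log-distance path loss: received power drops 10*n dB per decade."""
          return rss_d0 - 10.0 * n * np.log10(d / d0)

      def distance_m(rss, rss_d0=-40.0, n=2.7, d0=1.0):
          """Invert the model: estimate distance to an access point from RSS."""
          return d0 * 10.0 ** ((rss_d0 - rss) / (10.0 * n))

      print(rss_dbm(10.0), distance_m(-67.0))   # -67 dBm <-> 10 m round trip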

  20. A Self-Adaptive Model-Based Wi-Fi Indoor Localization Method

    PubMed Central

    Tuta, Jure; Juric, Matjaz B.

    2016-01-01

    This paper presents a novel method for indoor localization, developed with the main aim of making it useful for real-world deployments. Many indoor localization methods exist, yet they have several disadvantages in real-world deployments—some are static, which is not suitable for long-term usage; some require costly human recalibration procedures; and others require special hardware such as Wi-Fi anchors and transponders. Our method is self-calibrating and self-adaptive, thus maintenance-free, and based on Wi-Fi only. We have employed two well-known propagation models—free space path loss and ITU models—which we have extended with additional parameters for better propagation simulation. Our self-calibrating procedure utilizes one propagation model to infer parameters of the space and the other to simulate the propagation of the signal without requiring any additional hardware beside Wi-Fi access points, which is suitable for real-world usage. Our method is also one of the few model-based, Wi-Fi-only, self-adaptive approaches that do not require the mobile terminal to be in access-point mode. The only input requirements of the method are the Wi-Fi access point positions and the positions and properties of the walls. Our method has been evaluated in single- and multi-room environments, with measured mean errors of 2–3 and 3–4 m, respectively, which is similar to existing methods. The evaluation has proven that usable localization accuracy can be achieved in real-world environments solely by the proposed Wi-Fi method, which relies on simple hardware and software requirements. PMID:27929453

  1. An efficient, maintenance free and approved method for spectroscopic control and monitoring of blend uniformity: The moving F-test.

    PubMed

    Besseling, Rut; Damen, Michiel; Tran, Thanh; Nguyen, Thanh; van den Dries, Kaspar; Oostra, Wim; Gerich, Ad

    2015-10-10

    Dry powder mixing is a widespread unit operation in the pharmaceutical industry. With the advent of in-line Near Infrared (NIR) spectroscopy and Quality by Design principles, application of Process Analytical Technology to monitor Blend Uniformity (BU) is taking a more prominent role. Yet routine use of NIR for monitoring, let alone control, of blending processes is not common in the industry, despite the improved process understanding and (cost) efficiency that it may offer. Method maintenance, robustness and translation to regulatory requirements have been important barriers to implementing such methods. This paper presents a qualitative NIR-BU method offering a convenient and compliant approach to applying BU control in routine operation and process understanding, without extensive calibration and method maintenance requirements. The method employs a moving F-test to detect the steady state of measured spectral variances and hence the endpoint of mixing. The fundamentals and performance characteristics of the method are presented first, followed by a description of the link to regulatory BU criteria, the method's sensitivity, and practical considerations. Applications in upscaling, tech transfer and commercial production are described, along with an evaluation of the method's performance by comparison with results from quantitative calibration models. A full application, in which endpoint detection via the F-test controls the blending process of a low-dose product, was successfully filed in Europe and Australia, implemented in commercial production and routinely used for about five years and more than 100 batches. Copyright © 2015 Elsevier B.V. All rights reserved.
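
    A minimal sketch of a moving F-test endpoint detector follows (our generic reading of the idea; the window length, significance level and the use of a single summary value per spectrum are assumptions, not the published method):

      import numpy as np
      from scipy.stats import f as f_dist

      def blend_endpoint(signal, win=10, alpha=0.05):
          """signal: one value per NIR spectrum (e.g. a PC score or band variance).
          Declares steady state when two consecutive windows have statistically
          indistinguishable variances (two-sided F-test) and returns that index."""
          lo = f_dist.ppf(alpha / 2, win - 1, win - 1)
          hi = f_dist.ppf(1 - alpha / 2, win - 1, win - 1)
          for t in range(2 * win, len(signal) + 1):
              v1 = np.var(signal[t - 2 * win:t - win], ddof=1)
              v2 = np.var(signal[t - win:t], ddof=1)
              if lo < v1 / v2 < hi:             # variances comparable -> mixed
                  return t
          return None                           # no endpoint detected yet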

  2. Swarm Optimization-Based Magnetometer Calibration for Personal Handheld Devices

    PubMed Central

    Ali, Abdelrahman; Siddharth, Siddharth; Syed, Zainab; El-Sheimy, Naser

    2012-01-01

    Inertial Navigation Systems (INS) consist of accelerometers, gyroscopes and a processor that generates position and orientation solutions by integrating the specific forces and rotation rates. In addition to the accelerometers and gyroscopes, magnetometers can be used to derive the user's heading based on Earth's magnetic field. Unfortunately, magnetic field measurements obtained with low-cost sensors are usually corrupted by several errors, including manufacturing defects and external electromagnetic fields. Consequently, proper calibration of the magnetometer is required to achieve high-accuracy heading measurements. In this paper, a Particle Swarm Optimization (PSO)-based calibration algorithm is presented to estimate the bias and scale factor values of low-cost magnetometers. The main advantage of this technique is its use of artificial intelligence, which does not require any error modeling or awareness of the nonlinearity. Furthermore, the proposed algorithm can help in the development of Pedestrian Navigation Devices (PNDs) when combined with inertial sensors and GPS/Wi-Fi for indoor navigation and Location Based Services (LBS) applications.
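
    As an illustrative sketch (not the authors' algorithm), a bare-bones PSO that recovers per-axis bias and scale by forcing the calibrated field magnitude to be constant over arbitrary rotations of the device could be written as:

      import numpy as np

      def cost(p, M):
          """p = [bx, by, bz, sx, sy, sz]; M: (N, 3) raw magnetometer samples.
          A well-calibrated field magnitude is constant, so penalize its variance."""
          mags = np.linalg.norm((M - p[:3]) * p[3:], axis=1)
          return np.var(mags)

      def pso_calibrate(M, n=40, iters=200, seed=0):
          rng = np.random.default_rng(seed)
          lo = np.r_[M.min(0) - 1.0, 0.5 * np.ones(3)]   # search box (assumed)
          hi = np.r_[M.max(0) + 1.0, 1.5 * np.ones(3)]
          x = rng.uniform(lo, hi, (n, 6))                # particle positions
          v = np.zeros_like(x)                           # particle velocities
          pbest, pcost = x.copy(), np.array([cost(p, M) for p in x])
          g = pbest[pcost.argmin()].copy()               # global best
          for _ in range(iters):
              r1, r2 = rng.random((2, n, 6))
              v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
              x = x + v
              c = np.array([cost(p, M) for p in x])
              improved = c < pcost
              pbest[improved], pcost[improved] = x[improved], c[improved]
              g = pbest[pcost.argmin()].copy()
          return g[:3], g[3:]                            # bias, scale factors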

  3. Flight Test Results of an Angle of Attack and Angle of Sideslip Calibration Method Using Output-Error Optimization

    NASA Technical Reports Server (NTRS)

    Siu, Marie-Michele; Martos, Borja; Foster, John V.

    2013-01-01

    As part of a joint partnership between the NASA Aviation Safety Program (AvSP) and the University of Tennessee Space Institute (UTSI), research on advanced air data calibration methods has been in progress. This research was initiated to expand a novel pitot-static calibration method that was developed to allow rapid in-flight calibration for the NASA Airborne Subscale Transport Aircraft Research (AirSTAR) facility. This approach uses Global Positioning System (GPS) technology coupled with modern system identification methods that rapidly compute optimal pressure error models over a range of airspeeds with defined confidence bounds. Subscale flight tests demonstrated small 2-σ error bounds with a significant reduction in test time compared to other methods. Recent UTSI full-scale flight tests have shown airspeed calibrations with the same accuracy as or better than the Federal Aviation Administration (FAA) accepted GPS 'four-leg' method, in a smaller test area and in less time. The current research was motivated by the desire to extend this method to in-flight calibration of angle of attack (AOA) and angle of sideslip (AOS) flow vanes. An instrumented Piper Saratoga research aircraft from UTSI was used to collect the flight test data and evaluate flight test maneuvers. Results showed that the output-error approach produces good results for flow vane calibration. In addition, maneuvers for pitot-static and flow vane calibration can be integrated to enable simultaneous and efficient testing of each system.

  4. National Transonic Facility Wall Pressure Calibration Using Modern Design of Experiments (Invited)

    NASA Technical Reports Server (NTRS)

    Underwood, Pamela J.; Everhart, Joel L.; DeLoach, Richard

    2001-01-01

    The Modern Design of Experiments (MDOE) has been applied to wind tunnel testing at NASA Langley Research Center for several years. At Langley, MDOE has proven to be a useful and robust approach to aerodynamic testing that yields significant reductions in the cost and duration of experiments while still providing for the highest quality research results. This paper extends its application to include empty tunnel wall pressure calibrations. These calibrations are performed in support of wall interference corrections. This paper will present the experimental objectives, and the theoretical design process. To validate the tunnel-empty-calibration experiment design, preliminary response surface models calculated from previously acquired data are also presented. Finally, lessons learned and future wall interference applications of MDOE are discussed.

  5. Calibration procedure for a laser triangulation scanner with uncertainty evaluation

    NASA Astrophysics Data System (ADS)

    Genta, Gianfranco; Minetola, Paolo; Barbato, Giulio

    2016-11-01

    Most low-cost 3D scanning devices available on the market today are sold without a user calibration procedure to correct measurement errors related to changes in environmental conditions. In addition, there is no specific international standard defining a procedure to check the performance of a 3D scanner over time. This paper details a thorough methodology to calibrate a 3D scanner and assess its measurement uncertainty. The proposed procedure is based on the use of a reference ball plate and is applied to a triangulation laser scanner. Experimental results show that the metrological performance of the instrument can be greatly improved by applying the calibration procedure, which corrects systematic errors and reduces the device's measurement uncertainty.

  6. Calibration of EBT2 film by the PDD method with scanner non-uniformity correction.

    PubMed

    Chang, Liyun; Chui, Chen-Shou; Ding, Hueisch-Jy; Hwang, Ing-Ming; Ho, Sheng-Yow

    2012-09-21

    The EBT2 film together with a flatbed scanner is a convenient dosimetry QA tool for verification of clinical radiotherapy treatments. However, it suffers from a relatively high degree of uncertainty and a tedious film calibration process for every new lot of films, including cutting the films into several small pieces, exposing them to different doses, restoring them and selecting the proper region of interest (ROI) of each piece for curve fitting. In this work, we present a percentage depth dose (PDD) method that can accurately calibrate the EBT2 film together with scanner non-uniformity correction and provide an easy way to perform film dosimetry. All films were scanned before and after irradiation in one of two homemade 2 mm thick acrylic frames (one portrait and the other landscape), located at a fixed position on the scan bed of an Epson 10 000XL scanner. After the pre-irradiation scan, the film was placed parallel to the beam central axis and sandwiched between six polystyrene plates (5 cm thick each), followed by irradiation with a 20 × 20 cm² 6 MV photon beam. Two different beam-on times were used on two different films to deliver doses to the film ranging from 32 to 320 cGy. After the post-irradiation scan, the net optical densities for a total of 235 points on the beam central axis of the films were auto-extracted and compared with the corresponding depth doses, calculated from measurements with a 0.6 cc Farmer chamber and the related PDD table, to perform the curve fitting. The portrait film orientation was selected for routine calibration, since the central beam axis on the film is then parallel to the scanning direction, where non-uniformity correction is not needed (Ferreira et al 2009 Phys. Med. Biol. 54 1073-85). To perform the scanner non-uniformity calibration, the cross-beam profiles of the film were analysed with reference to the measured profiles from a Profiler™. Finally, to verify our method, films were exposed to 60° physical wedge fields and composite fields, and their relative dose profiles were compared with those from water phantom measurements. The fitting uncertainty was less than 0.5% owing to the many calibration points, and the overall calibration uncertainty was within 3% for doses above 50 cGy when the average of four films was used for the calibration. According to our study, the non-uniformity calibration factor was found to be independent of the given dose for the EBT2 film, and the relative dose differences between the profiles measured by the film and the Profiler were within 1.5% after applying the non-uniformity correction. For the verification tests, the relative dose differences between the measurements by films and in the water phantom, when the average of three films was used, were generally within 3% for the 60° wedge fields and composite fields, respectively. In conclusion, our method is convenient, time-saving and cost-effective, since no film cutting is needed and only two films with two exposures are required.
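
    The calibration curve itself is a small nonlinear fit; a hedged Python sketch using a response form commonly applied to radiochromic film follows (the functional form, coefficients and stand-in data are assumptions, not values from the paper):

      import numpy as np
      from scipy.optimize import curve_fit

      def dose_cgy(netod, a, b, n):
          """Dose vs net optical density: linear term plus power-law term."""
          return a * netod + b * netod ** n

      # netod: the auto-extracted on-axis net optical densities;
      # pdd_dose: matching depth doses (cGy) from the Farmer chamber + PDD table
      netod = np.linspace(0.05, 0.5, 235)                  # stand-in data
      pdd_dose = dose_cgy(netod, 420.0, 850.0, 2.3) + np.random.normal(0, 2, 235)
      popt, pcov = curve_fit(dose_cgy, netod, pdd_dose, p0=[400.0, 800.0, 2.5])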

  7. User-friendly freehand ultrasound calibration using Lego bricks and automatic registration.

    PubMed

    Xiao, Yiming; Yan, Charles Xiao Bo; Drouin, Simon; De Nigris, Dante; Kochanowska, Anna; Collins, D Louis

    2016-09-01

    As an inexpensive, noninvasive, and portable clinical imaging modality, ultrasound (US) has been widely employed in many interventional procedures for monitoring potential tissue deformation, surgical tool placement, and locating surgical targets. The application requires the spatial mapping between 2D US images and 3D coordinates of the patient. Although positions of the devices (i.e., ultrasound transducer) and the patient can be easily recorded by a motion tracking system, the spatial relationship between the US image and the tracker attached to the US transducer needs to be estimated through an US calibration procedure. Previously, various calibration techniques have been proposed, where a spatial transformation is computed to match the coordinates of corresponding features in a physical phantom and those seen in the US scans. However, most of these methods are difficult to use for novel users. We proposed an ultrasound calibration method by constructing a phantom from simple Lego bricks and applying an automated multi-slice 2D-3D registration scheme without volumetric reconstruction. The method was validated for its calibration accuracy and reproducibility. Our method yields a calibration accuracy of [Formula: see text] mm and a calibration reproducibility of 1.29 mm. We have proposed a robust, inexpensive, and easy-to-use ultrasound calibration method.

  8. Improved Calibration Shows Images True Colors

    NASA Technical Reports Server (NTRS)

    2015-01-01

    Innovative Imaging and Research, located at Stennis Space Center, used a single SBIR contract with the center to build a large-scale integrating sphere, capable of calibrating a whole array of cameras simultaneously, at a fraction of the usual cost for such a device. Through the use of LEDs, the company also made the sphere far more efficient than existing products and able to mimic sunlight.

  9. IMU-Based Online Kinematic Calibration of Robot Manipulator

    PubMed Central

    2013-01-01

    Robot calibration is a useful diagnostic method for improving the positioning accuracy in robot production and maintenance. An online robot self-calibration method based on inertial measurement unit (IMU) is presented in this paper. The method requires that the IMU is rigidly attached to the robot manipulator, which makes it possible to obtain the orientation of the manipulator with the orientation of the IMU in real time. This paper proposed an efficient approach which incorporates Factored Quaternion Algorithm (FQA) and Kalman Filter (KF) to estimate the orientation of the IMU. Then, an Extended Kalman Filter (EKF) is used to estimate kinematic parameter errors. Using this proposed orientation estimation method will result in improved reliability and accuracy in determining the orientation of the manipulator. Compared with the existing vision-based self-calibration methods, the great advantage of this method is that it does not need the complex steps, such as camera calibration, images capture, and corner detection, which make the robot calibration procedure more autonomous in a dynamic manufacturing environment. Experimental studies on a GOOGOL GRB3016 robot show that this method has better accuracy, convenience, and effectiveness than vision-based methods. PMID:24302854

  10. Extrinsic Calibration of Camera Networks Based on Pedestrians

    PubMed Central

    Guan, Junzhi; Deboeverie, Francis; Slembrouck, Maarten; Van Haerenborgh, Dirk; Van Cauwelaert, Dimitri; Veelaert, Peter; Philips, Wilfried

    2016-01-01

    In this paper, we propose a novel extrinsic calibration method for camera networks by analyzing tracks of pedestrians. First of all, we extract the center lines of walking persons by detecting their heads and feet in the camera images. We propose an easy and accurate method to estimate the 3D positions of the head and feet w.r.t. a local camera coordinate system from these center lines. We also propose a RANSAC-based orthogonal Procrustes approach to compute relative extrinsic parameters connecting the coordinate systems of cameras in a pairwise fashion. Finally, we refine the extrinsic calibration matrices using a method that minimizes the reprojection error. While existing state-of-the-art calibration methods explore epipolar geometry and use image positions directly, the proposed method first computes 3D positions per camera and then fuses the data. This results in simpler computations and a more flexible and accurate calibration method. Another advantage of our method is that it can also handle the case of persons walking along straight lines, which cannot be handled by most of the existing state-of-the-art calibration methods since all head and feet positions are co-planar. This situation often happens in real life. PMID:27171080
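
    The pairwise step rests on the classical orthogonal Procrustes solution; a small Python sketch for rigidly aligning matched 3D head/feet positions expressed in two cameras' local coordinate systems (the RANSAC outlier-rejection loop is omitted):

      import numpy as np

      def rigid_align(P, Q):
          """Least-squares R, t such that R @ P[i] + t ~ Q[i].
          P, Q: (N, 3) matched 3D points from two camera coordinate systems."""
          Pc, Qc = P - P.mean(0), Q - Q.mean(0)
          U, _, Vt = np.linalg.svd(Pc.T @ Qc)
          D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
          R = Vt.T @ D @ U.T                   # nearest rotation (no reflection)
          t = Q.mean(0) - R @ P.mean(0)
          return R, t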

  11. Hybrid Geometric Calibration Method for Multi-Platform Spaceborne SAR Image with Sparse Gcps

    NASA Astrophysics Data System (ADS)

    Lv, G.; Tang, X.; Ai, B.; Li, T.; Chen, Q.

    2018-04-01

    Geometric calibration can provide high-accuracy geometric coordinates for spaceborne SAR images by refining the geometric parameters of the Range-Doppler model using ground control points (GCPs). However, it is very difficult to obtain GCPs covering large-scale areas, especially in mountainous regions. In addition, the traditional calibration method is only applicable to single-platform SAR images and cannot support hybrid geometric calibration of multi-platform images. To solve these problems, a hybrid geometric calibration method for multi-platform spaceborne SAR images with sparse GCPs is proposed in this paper. First, we calibrate the master image that contains GCPs. Second, a point tracking algorithm is used to obtain tie points (TPs) between the master and slave images. Finally, we calibrate the slave images using the TPs as GCPs. We take the Beijing-Tianjin-Hebei region as an example, applying the hybrid geometric calibration method to 3 TerraSAR-X images, 3 TanDEM-X images and 5 GF-3 images covering more than 235 kilometers in the north-south direction. Geometric calibration of all images is completed using only 5 GCPs. GPS data extracted from a GNSS receiver are used to assess the plane accuracy after calibration. The results show that after geometric calibration with sparse GCPs, the geometric positioning accuracy is 3 m for TSX/TDX images and 7.5 m for GF-3 images.

  12. Combining satellite, aerial and ground measurements to assess forest carbon stocks in Democratic Republic of Congo

    NASA Astrophysics Data System (ADS)

    Beaumont, Benjamin; Bouvy, Alban; Stephenne, Nathalie; Mathoux, Pierre; Bastin, Jean-François; Baudot, Yves; Akkermans, Tom

    2015-04-01

    Monitoring tropical forest carbon stock changes has been a rising topic in recent years as a result of REDD+ mechanism negotiations. Such monitoring will be mandatory for any project/country wishing to benefit from these financial incentives in the future. Aerial and satellite remote sensing technologies offer cost advantages for implementing large-scale forest inventories. Despite recent progress in the use of airborne LiDAR for carbon stock estimation, no widely operational and cost-effective method has yet been delivered for central African forest monitoring. Within the Maï Ndombe region of the Democratic Republic of Congo, the EO4REDD project develops a method combining satellite, aerial and ground measurements. This combination is done in three steps: [1] mapping and quantifying forest cover changes (deforestation and forest degradation) using an object-based, semi-automatic change detection methodology based on very high resolution satellite imagery (RapidEye); [2] developing an allometric linear model for above-ground biomass estimation based on dendrometric parameters (tree crown areas and heights) extracted from airborne stereoscopic image pairs and calibrated using ground measurements of individual trees on a data set of 18 one-hectare plots; and [3] relating these two products to assess carbon stock changes at a regional scale. Given the high accuracies obtained in [1] (> 80% for deforestation and 77% for forest degradation) and the suitable model obtained in [2] (R² of 0.7, still to be improved with a larger calibration sample), the EO4REDD products can be seen as a valid and replicable option for carbon stock monitoring in tropical forests. Further improvements are planned to strengthen the cost-effectiveness and REDD+ suitability in the second phase of EO4REDD, which will include [A] specific model developments per forest type; [B] measurements of afforestation, reforestation and natural regeneration processes; and [C] a study of the potential use of Sentinel satellite data series.

  13. Sensor Integration in a Low Cost Land Mobile Mapping System

    PubMed Central

    Madeira, Sergio; Gonçalves, José A.; Bastos, Luísa

    2012-01-01

    Mobile mapping is a multidisciplinary technique which requires several pieces of dedicated equipment, calibration procedures that must be as rigorous as possible, time synchronization of all acquired data, and software for data processing and extraction of additional information. To decrease the cost and complexity of Mobile Mapping Systems (MMS), the use of less expensive sensors and the simplification of procedures for calibration and data acquisition are mandatory features. This article addresses the use of MMS technology, focusing on the main aspects that need to be handled to guarantee proper data acquisition and describing the way those aspects were addressed in a terrestrial MMS developed at the University of Porto. In this case, the main aim was to implement a low cost system while maintaining good quality standards for the acquired georeferenced information. The results discussed here show that this goal has been achieved. PMID:22736985

  14. Standardization of glycohemoglobin results and reference values in whole blood studied in 103 laboratories using 20 methods.

    PubMed

    Weykamp, C W; Penders, T J; Miedema, K; Muskiet, F A; van der Slik, W

    1995-01-01

    We investigated the effect of calibration with lyophilized calibrators on whole-blood glycohemoglobin (glyHb) results. One hundred three laboratories, using 20 different methods, determined glyHb in two lyophilized calibrators and two whole-blood samples. For whole-blood samples with low (5%) and high (9%) glyHb percentages, respectively, calibration decreased overall interlaboratory variation (CV) from 16% to 9% and from 11% to 6% and decreased intermethod variation from 14% to 6% and from 12% to 5%. Forty-seven laboratories, using 14 different methods, determined mean glyHb percentages in self-selected groups of 10 nondiabetic volunteers each. With calibration their overall mean (2SD) was 5.0% (0.5%), very close to the 5.0% (0.3%) derived from the reference method used in the Diabetes Control and Complications Trial. In both experiments the Abbott IMx and Vision showed deviating results. We conclude that, irrespective of the analytical method used, calibration enables standardization of glyHb results, reference values, and interpretation criteria.
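
    A minimal sketch of the standardization idea, assuming the two lyophilized calibrators carry assigned values: fit a two-point linear mapping from a laboratory's measured calibrator values to the assigned values and apply it to whole-blood results. All numbers are illustrative.

        # Assigned glyHb values of the two calibrators (%), known by design.
        assigned = (5.0, 9.0)
        # One laboratory's measured values for the same calibrators (%).
        measured = (5.8, 10.1)

        # Two-point linear calibration: corrected = slope * raw + intercept.
        slope = (assigned[1] - assigned[0]) / (measured[1] - measured[0])
        intercept = assigned[0] - slope * measured[0]

        def standardize(raw_percent):
            return slope * raw_percent + intercept

        print(standardize(6.2))  # a raw patient result mapped to the common scale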

  15. Input variable selection and calibration data selection for storm water quality regression models.

    PubMed

    Sun, Siao; Bertrand-Krajewski, Jean-Luc

    2013-01-01

    Storm water quality models are useful tools in storm water management. Interest has been growing in analyzing existing data to develop models for urban storm water quality evaluations. It is important to select appropriate model inputs when many candidate explanatory variables are available. Model calibration and verification are essential steps in any storm water quality modeling. This study investigates input variable selection and calibration data selection in storm water quality regression models. The two selection problems are mutually dependent, and a procedure is developed to perform the two selection tasks in sequence. The procedure first selects model input variables using a cross-validation method. An appropriate number of variables is identified as model inputs to ensure that a model is neither overfitted nor underfitted. Based on the input selection results, calibration data selection is then studied. The uncertainty of model performance due to calibration data selection is investigated with a random selection method. A cluster-based approach is applied to enhance model calibration practice, based on the principle of selecting representative data for calibration. The comparison between results from the cluster selection method and random selection shows that the former can significantly improve the performance of calibrated models. It is found that the information content of the calibration data is important in addition to its size.
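
    A hedged sketch of the two selection tasks in sequence, with simulated data (not the authors' exact procedure): forward input selection scored by cross-validation, followed by cluster-based selection of representative calibration events.

        import numpy as np
        from sklearn.cluster import KMeans
        from sklearn.linear_model import LinearRegression
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(0)
        X = rng.normal(size=(200, 6))               # candidate explanatory variables
        y = 2.0 * X[:, 0] - 1.5 * X[:, 2] + rng.normal(scale=0.5, size=200)

        # Input variable selection: greedy forward selection scored by cross-validation.
        selected, remaining, best = [], list(range(X.shape[1])), -np.inf
        while remaining:
            scores = {}
            for j in remaining:
                cols = selected + [j]
                scores[j] = cross_val_score(LinearRegression(), X[:, cols], y, cv=5).mean()
            j_best = max(scores, key=scores.get)
            if scores[j_best] <= best:
                break  # adding more inputs no longer helps -> avoid overfitting
            best = scores[j_best]
            selected.append(j_best)
            remaining.remove(j_best)

        # Calibration data selection: pick one representative event per cluster.
        k = 30
        labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X[:, selected])
        calib_idx = np.array([np.flatnonzero(labels == c)[0] for c in range(k)])
        model = LinearRegression().fit(X[np.ix_(calib_idx, selected)], y[calib_idx])
        print(selected, round(model.score(X[:, selected], y), 3))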

  16. Determination of Trace Available Heavy Metals in Soil Using Laser-Induced Breakdown Spectroscopy Assisted with Phase Transformation Method.

    PubMed

    Yi, Rongxing; Yang, Xinyan; Zhou, Ran; Li, Jiaming; Yu, Huiwu; Hao, Zhongqi; Guo, Lianbo; Li, Xiangyou; Lu, Yongfeng; Zeng, Xiaoyan

    2018-05-18

    To detect available heavy metals in soil using laser-induced breakdown spectroscopy (LIBS) and to improve its poor detection sensitivity, a simple and low-cost sample pretreatment method named solid-liquid-solid transformation was proposed. With this method, available heavy metals were extracted from soil through ultrasonic vibration and centrifuging and then deposited on a glass slide. Using this solid-liquid-solid transformation method, available Cd and Pb in soil were detected successfully. The results show that the regression coefficients of the calibration curves for the soil analyses exceed 0.98. The limits of detection reach 0.067 and 0.94 ppm for available Cd and Pb in soil under optimized conditions, respectively, which are much better than those obtained by conventional LIBS.
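
    The calibration-curve step can be sketched as follows (invented numbers): regress line intensity on known concentrations and estimate the limit of detection with the common 3-sigma criterion, LOD = 3*s_blank/slope.

        import numpy as np

        conc = np.array([0.0, 0.5, 1.0, 2.0, 5.0])                  # ppm, spiked standards
        intensity = np.array([40.0, 139.0, 238.0, 450.0, 1041.0])   # a.u., line intensity

        slope, intercept = np.polyfit(conc, intensity, 1)
        pred = slope * conc + intercept
        r2 = 1 - np.sum((intensity - pred) ** 2) / np.sum((intensity - intensity.mean()) ** 2)

        sigma_blank = 7.0                # std. dev. of repeated blank measurements
        lod = 3.0 * sigma_blank / slope  # 3-sigma limit of detection, in ppm
        print(round(r2, 4), round(lod, 3))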

  17. Improved neural network based scene-adaptive nonuniformity correction method for infrared focal plane arrays.

    PubMed

    Lai, Rui; Yang, Yin-tang; Zhou, Duan; Li, Yue-jin

    2008-08-20

    An improved scene-adaptive nonuniformity correction (NUC) algorithm for infrared focal plane arrays (IRFPAs) is proposed. This method simultaneously estimates the infrared detectors' parameters and eliminates the nonuniformity that causes fixed pattern noise (FPN) by using a neural network (NN) approach. In the learning process of neuron parameter estimation, the traditional LMS algorithm is replaced by a newly presented variable step size (VSS) normalized least-mean-square (NLMS) adaptive filtering algorithm, which yields faster convergence, smaller misadjustment, and lower computational cost. In addition, a new NN structure is designed to estimate the desired target value, which considerably improves the calibration precision. The proposed NUC method achieves high correction performance, as validated quantitatively with a simulated test sequence and a real infrared image sequence.
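
    A minimal sketch of neural-network scene-based NUC in the spirit of this family of algorithms (not the paper's exact VSS scheme): each pixel's correction y = g*x + o is adapted by a normalized LMS step toward a desired value taken as the local spatial average of the corrected frame. Step size and frame data are illustrative.

        import numpy as np

        def nlms_nuc(frames, mu=0.05, eps=1e-6):
            """Scene-adaptive NUC: per-pixel gain g and offset o, NLMS updates."""
            g = np.ones_like(frames[0])
            o = np.zeros_like(frames[0])
            for x in frames:
                y = g * x + o                      # corrected frame
                # Desired target: 4-neighbor spatial average of the corrected frame.
                d = 0.25 * (np.roll(y, 1, 0) + np.roll(y, -1, 0) +
                            np.roll(y, 1, 1) + np.roll(y, -1, 1))
                e = d - y                          # per-pixel error
                norm = x * x + 1.0 + eps           # input energy for normalization
                g += mu * e * x / norm             # NLMS gain update
                o += mu * e / norm                 # NLMS offset update
            return g, o

        rng = np.random.default_rng(1)
        true_gain = 1.0 + 0.1 * rng.normal(size=(64, 64))   # simulated FPN
        frames = [true_gain * (rng.uniform(50, 200) * np.ones((64, 64))
                               + rng.normal(size=(64, 64))) for _ in range(200)]
        g, o = nlms_nuc(frames)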

  18. Uncertainty in air quality observations using low-cost sensors

    NASA Astrophysics Data System (ADS)

    Castell, Nuria; Dauge, Franck R.; Dongol, Rozina; Vogt, Matthias; Schneider, Philipp

    2016-04-01

    Air pollution poses a threat to human health, and the WHO has classified air pollution as the world's largest single environmental health risk. In Europe, the majority of the population lives in areas where air quality levels frequently exceed WHO's ambient air quality guidelines. The emergence of low-cost, user-friendly and very compact air pollution platforms allowing observations at high spatial resolution in near real-time provides us with new opportunities to simultaneously enhance existing monitoring systems and enable citizens to engage in more active environmental monitoring (citizen science). However, the data sets generated by low-cost sensors often show questionable data quality. For many sensors, neither the error characteristics nor how measurement capability holds up over time and across a range of environmental conditions has been evaluated. We have conducted an exhaustive evaluation of the commercial low-cost platform AQMesh (measuring NO, NO2, CO, O3, PM10 and PM2.5) in the laboratory and in real-world conditions in the city of Oslo (Norway). Field co-locations of 24 platforms were conducted over a six-month period (April to September 2015), allowing the temporal variability in performance to be characterized. Additionally, field performance was characterized at different urban monitoring sites representative of both traffic and background conditions. All the evaluations were conducted against CEN reference method analyzers maintained according to the Norwegian National Reference Laboratory quality system. The results show clearly that good performance in the laboratory does not imply similar performance in real-world outdoor conditions. Moreover, laboratory calibration is not suitable for subsequent measurements in urban environments. In order to reduce the errors, sensors require on-site field calibration. Even after such field calibration, the platforms show significant variability in performance due to changes in environmental conditions. Currently there is a lack of testing to ensure adequate sensor performance prior to marketing such instruments. Even when manufacturers provide detailed specification sheets, there is little guarantee that the specifications can actually be met in real-world conditions. Data quality is a pertinent concern, especially when citizens are collecting and interpreting the data by themselves. Poor or unknown data quality can lead to incorrect or inappropriate decisions. We present the experiences gained within the EU project CITI-SENSE, where low-cost sensors are one of the tools employed to empower citizens in air quality issues.
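
    On-site field calibration of the kind described can be sketched as a linear correction fitted against a co-located reference analyzer and checked on held-out data; the data below are simulated.

        import numpy as np

        rng = np.random.default_rng(2)
        ref = rng.uniform(5, 80, size=500)                         # reference NO2, ug/m3
        raw = 0.6 * ref + 12.0 + rng.normal(scale=4.0, size=500)   # biased low-cost sensor

        train, test = slice(0, 350), slice(350, 500)               # split by time
        slope, intercept = np.polyfit(raw[train], ref[train], 1)
        corrected = slope * raw[test] + intercept

        rmse_before = np.sqrt(np.mean((raw[test] - ref[test]) ** 2))
        rmse_after = np.sqrt(np.mean((corrected - ref[test]) ** 2))
        print(round(rmse_before, 1), round(rmse_after, 1))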

  19. Determination of dissolved methane in natural waters using headspace analysis with cavity ring-down spectroscopy.

    PubMed

    Roberts, Hannah M; Shiller, Alan M

    2015-01-26

    Methane (CH4) is the third most abundant greenhouse gas (GHG) but is vastly understudied in comparison to carbon dioxide. Estimates of its sources and sinks to the atmosphere, which include fresh and marine water systems, vary considerably. A new method to determine dissolved methane concentrations in discrete water samples has been evaluated. By analyzing an equilibrated headspace using laser cavity ring-down spectroscopy (CRDS), low nanomolar dissolved methane concentrations can be determined with high reproducibility (i.e., 0.13 nM detection limit and typical 4% RSD). While CRDS instruments cost roughly twice as much as the gas chromatographs (GC) usually used for methane determination, the process presented herein is substantially simpler and faster, and requires fewer materials than GC methods. Typically, 70-mL water samples are equilibrated with an equivalent amount of zero air in plastic syringes. The equilibrated headspace is transferred to a clean, dry syringe and then drawn into a Picarro G2301 CRDS analyzer via the instrument's pump. We demonstrate that this instrument holds a linear calibration into the sub-ppmv methane concentration range and holds a stable calibration for at least two years. Application of the method to shipboard dissolved methane determination in the northern Gulf of Mexico as well as in river water is shown. Concentrations spanning nearly six orders of magnitude have been determined with this method. Copyright © 2014 Elsevier B.V. All rights reserved.
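
    A hedged worked example of the headspace mass balance (constants assumed, including an illustrative dimensionless solubility): the original dissolved concentration is recovered from the measured headspace mole fraction via the ideal gas law plus the CH4 still dissolved at equilibrium.

        x_ch4 = 2.5e-6            # measured headspace CH4 mole fraction at equilibrium
        P, T = 101325.0, 293.15   # pressure (Pa) and temperature (K)
        R = 8.314                 # J/(mol K)
        V_hs = 70e-6              # headspace volume, m^3 (70 mL of zero air)
        V_w = 70e-6               # water sample volume, m^3
        beta = 0.034              # assumed dimensionless solubility C_water/C_gas at ~20 C

        C_gas = x_ch4 * P / (R * T)                   # mol CH4 per m^3 of headspace
        n_total = C_gas * V_hs + beta * C_gas * V_w   # headspace + still-dissolved CH4
        C_original = n_total / V_w                    # mol/m^3 in the original sample
        print(round(C_original * 1e6, 1), "nmol/L")   # ~107 nmol/L for these values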

  20. An Improved Fast Self-Calibration Method for Hybrid Inertial Navigation System under Stationary Condition

    PubMed Central

    Liu, Bingqi; Wei, Shihui; Su, Guohua; Wang, Jiping; Lu, Jiazhen

    2018-01-01

    The navigation accuracy of an inertial navigation system (INS) can be greatly improved when inertial measurement unit (IMU) errors, such as gyro drifts and accelerometer biases, are effectively calibrated and compensated. To reduce the requirement for turntable precision in the classical calibration method, a continuous dynamic self-calibration method based on a three-axis rotating frame is presented for the hybrid inertial navigation system (HINS). First, by selecting a suitable IMU frame, the error models of the accelerometers and gyros are established. Then, by taking the navigation errors during rolling as the observations, all twenty-one error parameters of the HINS are identified based on the calculation of an intermediate parameter. Experiments verify that the method can identify all error parameters of the HINS with accuracy equivalent to classical calibration on a high-precision turntable. In addition, the method is rapid, simple and feasible. PMID:29695041

  2. A method for soil moisture probes calibration and validation of satellite estimates.

    PubMed

    Holzman, Mauro; Rivas, Raúl; Carmona, Facundo; Niclòs, Raquel

    2017-01-01

    Optimization of field techniques is crucial to ensure high quality soil moisture data. The aim of this work is to present a sampling method for undisturbed soil and soil water content to calibrate soil moisture probes, in the context of validating the SMOS (Soil Moisture and Ocean Salinity) mission MIRAS Level 2 soil moisture product in the Pampean Region of Argentina. The method avoids soil alteration and is recommended for calibrating probes by soil type under free drying at ambient temperature. A detailed explanation of the field and laboratory procedures to obtain reference soil moisture is given. The calibration results reflected accurate operation of the Delta-T ThetaProbe ML2x probes in most of the analyzed cases (RMSE and bias ≤ 0.05 m³/m³). Post-calibration results indicated that the accuracy improves significantly when applying soil-type-based calibration adjustments (RMSE ≤ 0.022 m³/m³, bias ≤ -0.010 m³/m³).
    • A sampling method that provides high quality soil water content data for probe calibration is described.
    • Calibration based on soil types is important.
    • A calibration process for similar soil types could be suitable in practical terms, depending on the required accuracy level.

  3. Simplified stereo-optical ultrasound plane calibration

    NASA Astrophysics Data System (ADS)

    Hoßbach, Martin; Noll, Matthias; Wesarg, Stefan

    2013-03-01

    Image guided therapy is a natural concept and commonly used in medicine. In anesthesia, a common task is the injection of an anesthetic close to a nerve under freehand ultrasound guidance. Several guidance systems exist that use electromagnetic tracking of the ultrasound probe as well as the needle, providing the physician with a precise projection of the needle into the ultrasound image. This, however, requires additional expensive devices. We suggest using optical tracking with miniature cameras attached to a 2D ultrasound probe to achieve higher acceptance among physicians. The purpose of this paper is to present an intuitive method to calibrate freehand ultrasound needle guidance systems employing a rigid stereo camera system. State of the art methods are based on a complex series of error-prone coordinate system transformations, which makes them susceptible to error accumulation. By reducing the number of calibration steps to a single calibration procedure, we provide a calibration method that is equivalent, yet not prone to error accumulation. It requires a linear calibration object and is validated on three datasets utilizing different calibration objects: a 6 mm metal bar and a 1.25 mm biopsy needle were used for the experiments. Compared to existing calibration methods for freehand ultrasound needle guidance systems, we are able to achieve higher accuracy while also reducing the overall calibration complexity.

  4. Improved GPS-based time link calibration involving ROA and PTB.

    PubMed

    Esteban, Héctor; Palacio, Juan; Galindo, Francisco Javier; Feldmann, Thorsten; Bauch, Andreas; Piester, Dirk

    2010-03-01

    The calibration of time transfer links is mandatory in the context of international collaboration for the realization of International Atomic Time. In this paper, we present the results of the calibration of the GPS time transfer link between the Real Instituto y Observatorio de la Armada (ROA) and the Physikalisch-Technische Bundesanstalt (PTB) by means of a traveling geodetic-type GPS receiver and an evaluation of the achieved type A and B uncertainty. The time transfer results were achieved by using CA, P3, and also carrier phase PPP comparison techniques. We finally use these results to re-calibrate the two-way satellite time and frequency transfer (TWSTFT) link between ROA and PTB, using one month of data. We show that a TWSTFT link can be calibrated by means of GPS time comparisons with an uncertainty below 2 ns, and that potentially even sub-nanosecond uncertainty can be achieved. This is a novel and cost-effective approach compared with the more common calibration using a traveling TWSTFT station.

  5. Characterization of a multi-user indoor positioning system based on low cost depth vision (Kinect) for monitoring human activity in a smart home.

    PubMed

    Sevrin, Loïc; Noury, Norbert; Abouchi, Nacer; Jumel, Fabrice; Massot, Bertrand; Saraydaryan, Jacques

    2015-01-01

    An increasing number of systems use indoor positioning for many scenarios such as asset tracking, health care, games, manufacturing, logistics, shopping, and security. Many technologies are available and the use of depth cameras is becoming more and more attractive as this kind of device becomes affordable and easy to handle. This paper contributes to the effort of creating an indoor positioning system based on low cost depth cameras (Kinect). A method is proposed to optimize the calibration of the depth cameras, to describe the multi-camera data fusion and to specify a global positioning projection to maintain the compatibility with outdoor positioning systems. The monitoring of the people trajectories at home is intended for the early detection of a shift in daily activities which highlights disabilities and loss of autonomy. This system is meant to improve homecare health management at home for a better end of life at a sustainable cost for the community.

  6. The color bar phase meter: A simple and economical method for calibrating crystal oscillators

    NASA Technical Reports Server (NTRS)

    Davis, D. D.

    1973-01-01

    Comparison of crystal oscillators to the rubidium stabilized color burst is made easy and inexpensive by use of the color bar phase meter. Required equipment consists of an unmodified color TV receiver, a color bar synthesizer and a stop watch (a wrist watch or clock with sweep second hand may be used with reduced precision). Measurement precision of 1 × 10^-10 can be realized in measurement times of less than two minutes. If the color bar synthesizer were commercially available, user cost should be less than $200.00, exclusive of the TV receiver. Parts cost for the color bar synthesizer, which translates the crystal oscillator frequency to 3.579 MHz and modulates the received RF signal before it is fed to the receiver antenna terminals, is about $25.00. A more sophisticated automated version, with precision of 1 × 10^-11, would cost about twice as much.

  7. Too Late to Vaccinate? The Incremental Benefits and Cost-effectiveness of a Delayed Catch-up Program Using the 4-Valent Human Papillomavirus Vaccine in Norway

    PubMed Central

    Burger, Emily A.; Sy, Stephen; Nygård, Mari; Kristiansen, Ivar S.; Kim, Jane J.

    2015-01-01

    Background Human papillomavirus (HPV) vaccines are ideally administered before HPV exposure; therefore, catch-up programs for girls past adolescence have not been readily funded. We evaluated the benefits and cost-effectiveness of a delayed, 1-year female catch-up vaccination program in Norway. Methods We calibrated a dynamic HPV transmission model to Norwegian data and projected the costs and benefits associated with 8 HPV-related conditions while varying the upper vaccination age limit to 20, 22, 24, or 26 years. We explored the impact of vaccine protection in women with prior vaccine-targeted HPV infections, vaccine cost, coverage, and natural- and vaccine-induced immunity. Results The incremental benefits and cost-effectiveness decreased as the upper age limit for catch-up increased. Assuming a vaccine cost of $150/dose, vaccination up to age 20 years remained below Norway's willingness-to-pay threshold (approximately $83,000/quality-adjusted life year gained); extension to age 22 years was cost-effective at a lower cost per dose ($50–$75). At high levels of vaccine protection in women with prior HPV exposure, vaccinating up to age 26 years was cost-effective. Results were stable with lower coverage. Conclusions HPV vaccination catch-up programs, 5 years after routine implementation, may be warranted; however, even at low vaccine cost per dose, the cost-effectiveness of vaccinating beyond age 22 years remains uncertain. PMID:25057044

  8. Simultaneous multi-headed imager geometry calibration method

    DOEpatents

    Tran, Vi-Hoa [Newport News, VA]; Meikle, Steven Richard [Penshurst, AU]; Smith, Mark Frederick [Yorktown, VA]

    2008-02-19

    A method for calibrating multi-headed high sensitivity and high spatial resolution dynamic imaging systems, especially those useful in the acquisition of tomographic images of small animals. The method of the present invention comprises: simultaneously calibrating two or more detectors to the same coordinate system; and functionally correcting for unwanted detector movement due to gantry flexing.

  9. Model independent approach to the single photoelectron calibration of photomultiplier tubes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Saldanha, R.; Grandi, L.; Guardincerri, Y.

    2017-08-01

    The accurate calibration of photomultiplier tubes is critical in a wide variety of applications in which it is necessary to know the absolute number of detected photons or to precisely determine the resolution of the signal. Conventional calibration methods rely on fitting the photomultiplier response to a low intensity light source with analytical approximations to the single photoelectron distribution, often leading to biased estimates due to the inability to accurately model the full distribution, especially at low charge values. In this paper we present a simple statistical method to extract the relevant single photoelectron calibration parameters without making any assumptions about the underlying single photoelectron distribution. We illustrate the use of this method through the calibration of a Hamamatsu R11410 photomultiplier tube and study the accuracy and precision of the method using Monte Carlo simulations. The method is found to have significantly reduced bias compared to conventional methods and works under a wide range of light intensities, making it suitable for simultaneously calibrating large arrays of photomultiplier tubes.
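
    A hedged sketch consistent with the statistical idea in this abstract (implementation details assumed): the mean photoelectron occupancy follows from Poisson statistics of the fraction of pedestal-only events, lambda = -ln(N_zero/N_total), and the gain follows from mean charges without fitting any SPE shape.

        import numpy as np

        rng = np.random.default_rng(3)
        n_events, lam = 100_000, 0.4          # triggers and true mean PE occupancy
        gain_true, ped_sigma = 1.6e6, 2.0e4   # electrons per PE; pedestal noise (e-)

        npe = rng.poisson(lam, n_events)
        spe_spread = 0.3 * gain_true          # SPE charge spread, illustrative
        charge = (npe * gain_true
                  + rng.normal(0.0, ped_sigma, n_events)
                  + rng.normal(0.0, spe_spread, n_events) * np.sqrt(npe))

        # Occupancy from the fraction of pedestal-only events (Poisson statistics),
        # assuming a threshold that cleanly separates the pedestal from >=1 PE.
        threshold = 5 * ped_sigma
        lam_est = -np.log(np.mean(charge < threshold))

        # Model-independent gain: mean charge above pedestal per photoelectron.
        gain_est = charge.mean() / lam_est
        print(round(lam_est, 3), f"{gain_est:.3e}")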

  10. A Theoretical Framework for Calibration in Computer Models: Parametrization, Estimation and Convergence Properties

    DOE PAGES

    Tuo, Rui; Jeff Wu, C. F.

    2016-07-19

    Calibration parameters in deterministic computer experiments are those attributes that cannot be measured or are otherwise unavailable in physical experiments. Here, an approach is presented to estimate them using data from physical experiments and computer simulations. A theoretical framework is given which allows us to study the issues of parameter identifiability and estimation. We define L2-consistency for calibration as a justification for calibration methods. It is shown that a simplified version of the original KO method leads to asymptotically L2-inconsistent calibration. This L2-inconsistency can be remedied by modifying the original estimation procedure. A novel calibration method, called L2 calibration, is proposed and proven to be L2-consistent and to enjoy an optimal convergence rate. Furthermore, a numerical example and some mathematical analysis are used to illustrate the source of the L2-inconsistency problem.
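
    One common formalization consistent with this abstract (notation assumed, with zeta the physical response, f the computer model and Omega the input domain):

        \[
        \theta^{*} = \arg\min_{\theta \in \Theta}
          \left\lVert \zeta(\cdot) - f(\cdot,\theta) \right\rVert_{L_2(\Omega)},
        \qquad
        \hat{\theta} \text{ is } L_2\text{-consistent if }
          \left\lVert f(\cdot,\hat{\theta}) - f(\cdot,\theta^{*}) \right\rVert_{L_2(\Omega)}
          \xrightarrow{p} 0.
        \]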

  11. ASTER preflight and inflight calibration and the validation of level 2 products

    USGS Publications Warehouse

    Thome, K.; Arai, K.; Hook, S.; Kieffer, H.; Lang, H.; Matsunaga, T.; Ono, A.; Palluconi, F. D.; Sakuma, H.; Slater, P.; Takashima, T.; Tonooka, H.; Tsuchida, S.; Welch, R.M.; Zalewski, E.

    1998-01-01

    This paper describes the preflight and inflight calibration approaches used for the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER). The system is a multispectral, high-spatial-resolution sensor on the Earth Observing System's (EOS) AM1 platform. Preflight calibration of ASTER uses well-characterized sources to provide calibration, and preflight round-robin exercises to understand biases between the calibration sources of ASTER and other EOS sensors. These round-robins rely on well-characterized, ultra-stable radiometers. An experiment held in Yokohama, Japan, showed that the output from the source used for the visible and near-infrared (VNIR) subsystem of ASTER may be underestimated by 1.5%, but this is still within the 4% specification for the absolute radiometric calibration of these bands. Inflight calibration will rely on vicarious techniques and onboard blackbodies and lamps. Vicarious techniques include ground-reference methods using desert and water sites. A recent joint field campaign gives confidence that these methods currently provide absolute calibration to better than 5%, and indications are that uncertainties less than the required 4% should be achievable at launch. The EOS-AM1 platform will also provide a spacecraft maneuver that will allow ASTER to see the moon, allowing further characterization of the sensor. A method for combining the results of these independent calibration results is presented. The paper also describes the plans for validating the Level 2 data products from ASTER. These plans rely heavily upon field campaigns using methods similar to those used for the ground-reference vicarious calibration methods. © 1998 IEEE.

  12. Modulated CMOS camera for fluorescence lifetime microscopy.

    PubMed

    Chen, Hongtao; Holst, Gerhard; Gratton, Enrico

    2015-12-01

    Widefield frequency-domain fluorescence lifetime imaging microscopy (FD-FLIM) is a fast and accurate method to measure the fluorescence lifetime of entire images. However, the complexity and high cost involved in the construction of such a system limit the extensive use of this technique. PCO AG recently released the first luminescence lifetime imaging camera based on a high frequency modulated CMOS image sensor, QMFLIM2. Here we tested the camera and provide operational procedures to calibrate it and to improve accuracy using corrections necessary for image analysis. With its flexible input/output options, we are able to use a modulated laser diode or a 20 MHz pulsed white supercontinuum laser as the light source. The output of the camera consists of a stack of modulated images that can be analyzed by the SimFCS software using the phasor approach. The nonuniform system response across the image sensor must be calibrated at the pixel level. This pixel calibration is crucial and needed for every camera setting, e.g., modulation frequency and exposure time. A significant dependency of the modulation signal on the intensity was also observed, and hence an additional calibration is needed for each pixel depending on its intensity level. These corrections are important not only for the fundamental frequency, but also for the higher harmonics when using the pulsed supercontinuum laser. With these post-acquisition corrections, the PCO CMOS-FLIM camera can be used for various biomedical applications requiring large-frame and high-speed acquisition. © 2015 Wiley Periodicals, Inc.

  13. Predicting Aspergillus fumigatus exposure from composting facilities using a dispersion model: A conditional calibration and validation.

    PubMed

    Douglas, Philippa; Tyrrel, Sean F; Kinnersley, Robert P; Whelan, Michael; Longhurst, Philip J; Hansell, Anna L; Walsh, Kerry; Pollard, Simon J T; Drew, Gillian H

    2017-01-01

    Bioaerosols are released in elevated quantities from composting facilities and are associated with negative health effects, although dose-response relationships are unclear. Exposure levels are difficult to quantify as established sampling methods are costly, time-consuming and current data provide limited temporal and spatial information. Confidence in dispersion model outputs in this context would be advantageous to provide a more detailed exposure assessment. We present the calibration and validation of a recognised atmospheric dispersion model (ADMS) for bioaerosol exposure assessments. The model was calibrated by a trial and error optimisation of observed Aspergillus fumigatus concentrations at different locations around a composting site. Validation was performed using a second dataset of measured concentrations for a different site. The best fit between modelled and measured data was achieved when emissions were represented as a single area source, with a temperature of 29°C. Predicted bioaerosol concentrations were within an order of magnitude of measured values (1000-10,000 CFU/m³) at the validation site, once minor adjustments were made to reflect local differences between the sites (r² > 0.7 at 150, 300, 500 and 600 m downwind of source). Results suggest that calibrated dispersion modelling can be applied to make reasonable predictions of bioaerosol exposures at multiple sites and may be used to inform site regulation and operational management. Copyright © 2016 The Authors. Published by Elsevier GmbH. All rights reserved.

  14. Apparatus for in-situ calibration of instruments that measure fluid depth

    DOEpatents

    Campbell, Melvin D.

    1994-01-01

    The present invention provides a method and apparatus for in-situ calibration of distance measuring equipment. The method comprises obtaining a first distance measurement in a first location, then obtaining at least one other distance measurement in at least one other location of a precisely known distance from the first location, and calculating a calibration constant. The method is applied specifically to calculating a calibration constant for obtaining fluid level and embodied in an apparatus using a pressure transducer and a spacer of precisely known length. The calibration constant is used to calculate the depth of a fluid from subsequent single pressure measurements at any submerged position.
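
    A hedged worked example of the calibration-constant idea (all readings hypothetical): two pressure readings taken a precisely known spacer length L apart yield a constant K that converts pressure differences to depth.

        # Readings from a submerged pressure transducer (kPa), hypothetical.
        P1 = 57.30          # at the initial position
        P2 = 62.20          # after lowering by the spacer length
        L = 0.500           # spacer length, m (precisely known)

        K = L / (P2 - P1)   # calibration constant, m per kPa
        P_surface = 55.00   # reading with the sensor at the fluid surface
        P_now = 63.85       # any subsequent single measurement

        depth = K * (P_now - P_surface)
        print(round(depth, 3), "m")   # ~0.903 m for these illustrative numbers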

  15. Predicting ambient aerosol thermal-optical reflectance (TOR) measurements from infrared spectra: organic carbon

    NASA Astrophysics Data System (ADS)

    Dillner, A. M.; Takahama, S.

    2015-03-01

    Organic carbon (OC) can constitute 50% or more of the mass of atmospheric particulate matter. Typically, organic carbon is measured from a quartz fiber filter that has been exposed to a volume of ambient air and analyzed using thermal methods such as thermal-optical reflectance (TOR). Here, methods are presented that show the feasibility of using Fourier transform infrared (FT-IR) absorbance spectra from polytetrafluoroethylene (PTFE or Teflon) filters to accurately predict TOR OC. This work marks an initial step in proposing a method that can reduce the operating costs of large air quality monitoring networks with an inexpensive, non-destructive analysis technique using routinely collected PTFE filter samples which, in addition to OC concentrations, can concurrently provide information regarding the composition of organic aerosol. This feasibility study suggests that the minimum detection limit and errors (or uncertainty) of FT-IR predictions are on par with TOR OC, such that evaluation of long-term trends and epidemiological studies would not be significantly impacted. To develop and test the method, FT-IR absorbance spectra are obtained from 794 samples from seven Interagency Monitoring of PROtected Visual Environments (IMPROVE) sites collected during 2011. Partial least-squares regression is used to calibrate sample FT-IR absorbance spectra to TOR OC. The FT-IR spectra are divided into calibration and test sets by sampling site and date. The calibration produces precise and accurate TOR OC predictions of the test set samples by FT-IR, as indicated by a high coefficient of determination (R²; 0.96), low bias (0.02 μg m-3; the nominal IMPROVE sample volume is 32.8 m3), low error (0.08 μg m-3) and low normalized error (11%). These performance metrics can be achieved with various degrees of spectral pretreatment (e.g., including or excluding substrate contributions to the absorbances) and are comparable in precision to collocated TOR measurements. FT-IR spectra are also divided into calibration and test sets by OC mass and by OM / OC ratio, which reflects the organic composition of the particulate matter and is obtained from organic functional group composition; these divisions also lead to precise and accurate OC predictions. Low OC concentrations have higher bias and normalized error due to TOR analytical errors and artifact-correction errors, not due to the range of OC mass of the samples in the calibration set. However, samples with low OC mass can be used to predict samples with high OC mass, indicating that the calibration is linear. Using samples in the calibration set that have different OM / OC or ammonium / OC distributions than the test set leads to only a modest increase in bias and normalized error in the predicted samples. We conclude that FT-IR analysis with partial least-squares regression is a robust method for accurately predicting TOR OC in IMPROVE network samples, providing complementary information to the organic functional group composition and organic aerosol mass estimated previously from the same set of sample spectra (Ruthenburg et al., 2014).
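
    A hedged sketch of this calibration approach with simulated data: partial least-squares regression from absorbance spectra to TOR OC, with the calibration/test split done by site, using scikit-learn's PLSRegression.

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression

        rng = np.random.default_rng(4)
        n, p = 300, 400                      # samples, wavenumber channels
        spectra = rng.normal(size=(n, p))
        true_w = np.zeros(p)
        true_w[50:60] = 0.8                  # e.g., a C-H stretch region drives OC
        oc = spectra @ true_w + rng.normal(scale=0.3, size=n)
        site = rng.integers(0, 7, size=n)    # seven hypothetical monitoring sites

        train = site < 5                     # calibrate on five sites, test on two
        pls = PLSRegression(n_components=8).fit(spectra[train], oc[train])
        pred = pls.predict(spectra[~train]).ravel()

        resid = pred - oc[~train]
        print("bias:", resid.mean().round(3), "error:", np.abs(resid).mean().round(3))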

  16. Adaptive Prior Variance Calibration in the Bayesian Continual Reassessment Method

    PubMed Central

    Zhang, Jin; Braun, Thomas M.; Taylor, Jeremy M.G.

    2012-01-01

    Use of the Continual Reassessment Method (CRM) and other model-based approaches to design in Phase I clinical trials has increased due to the ability of the CRM to identify the maximum tolerated dose (MTD) better than the 3+3 method. However, the CRM can be sensitive to the variance selected for the prior distribution of the model parameter, especially when a small number of patients are enrolled. While methods have emerged to adaptively select skeletons and to calibrate the prior variance only at the beginning of a trial, there has not been any approach developed to adaptively calibrate the prior variance throughout a trial. We propose three systematic approaches to adaptively calibrate the prior variance during a trial and compare them via simulation to methods proposed to calibrate the variance at the beginning of a trial. PMID:22987660

  17. The Use of Color Sensors for Spectrographic Calibration

    NASA Astrophysics Data System (ADS)

    Thomas, Neil B.

    2018-04-01

    The wavelength calibration of spectrographs is an essential but challenging task in many disciplines. Calibration is traditionally accomplished by imaging the spectrum of a light source containing features that are known to appear at certain wavelengths and mapping them to their location on the sensor. This is typically required in conjunction with each scientific observation to account for mechanical and optical variations of the instrument over time, which may span years for certain projects. The method presented here investigates the usage of color itself instead of spectral features to calibrate a spectrograph. The primary advantage of such a calibration is that any broad-spectrum light source such as the sky or an incandescent bulb is suitable. This method allows for calibration using the full optical pathway of the instrument instead of incorporating separate calibration equipment that may introduce errors. This paper focuses on the potential for color calibration in the field of radial velocity astronomy, in which instruments must be finely calibrated for long periods of time to detect tiny Doppler wavelength shifts. This method is not restricted to radial velocity, however, and may find application in any field requiring calibrated spectrometers such as sea water analysis, cellular biology, chemistry, atmospheric studies, and so on. This paper demonstrates that color sensors have the potential to provide calibration with greatly reduced complexity.

  18. Development of composite calibration standard for quantitative NDE by ultrasound and thermography

    NASA Astrophysics Data System (ADS)

    Dayal, Vinay; Benedict, Zach G.; Bhatnagar, Nishtha; Harper, Adam G.

    2018-04-01

    Inspection of aircraft components for damage utilizing ultrasonic Non-Destructive Evaluation (NDE) is a time intensive endeavor. Additional time spent during aircraft inspections translates to added cost for the company performing them, and as such, reducing this expenditure is of great importance. There is also great variance in the calibration samples from one entity to another due to the lack of a common calibration set. By characterizing damage types, we can condense the required calibration sets and reduce the time required to perform calibration, while also providing procedures for the fabrication of these standard sets. We present here our effort to fabricate composite samples with known defects and to quantify the size and location of defects such as delaminations and impact damage. Ultrasonic and thermographic images are digitally enhanced to accurately measure the damage size. Ultrasonic NDE is compared with thermography.

  19. On-orbit characterization of hyperspectral imagers

    NASA Astrophysics Data System (ADS)

    McCorkel, Joel

    The Remote Sensing Group (RSG) at the University of Arizona has a long history of using ground-based test sites for the calibration of airborne and satellite-based sensors. However, ground-truth measurements at these test sites are not always successful due to weather and funding availability. Therefore, RSG has also employed automated ground instrument approaches and cross-calibration methods to verify the radiometric calibration of a sensor. The goal of the cross-calibration method is to transfer the calibration of a well-known sensor to a different sensor. This dissertation presents a method for determining the radiometric calibration of a hyperspectral imager using multispectral imagery. The work relies on a multispectral sensor, the Moderate-resolution Imaging Spectroradiometer (MODIS), as a reference for the hyperspectral sensor Hyperion. Test sites used for comparisons are Railroad Valley in Nevada and a portion of the Libyan Desert in North Africa. A method to predict hyperspectral surface reflectance using a combination of MODIS data and spectral shape information is developed and applied to the characterization of Hyperion. Spectral shape information is based on RSG's historical in situ data for the Railroad Valley test site and spectral library data for the Libyan test site. Average atmospheric parameters, also based on historical measurements, are used in reflectance prediction and transfer to space. Results of several cross-calibration scenarios that differ in image acquisition coincidence, test site, and reference sensor are found for the characterization of Hyperion. These are compared with results from the reflectance-based approach of vicarious calibration, a well-documented method developed by the RSG that serves as a performance baseline for the cross-calibration method developed here. Cross-calibration provides results that are within 2% of the reflectance-based results in most spectral regions. Larger disagreements exist for the shorter wavelengths studied in this work as well as in spectral regions that experience atmospheric absorption.

  20. Rapid determination of free fatty acid content in waste deodorizer distillates using single bounce-attenuated total reflectance-FTIR spectroscopy.

    PubMed

    Naz, Saba; Sherazi, Sayed Tufail Hussain; Talpur, Farah N; Mahesar, Sarfaraz A; Kara, Huseyin

    2012-01-01

    A simple, rapid, economical, and environmentally friendly analytical method was developed for the quantitative assessment of free fatty acids (FFAs) present in deodorizer distillates and crude oils by single bounce-attenuated total reflectance-FTIR spectroscopy. Partial least squares was applied for the calibration model based on the peak region of the carbonyl group (C=O) from 1726 to 1664 cm-1 associated with the FFAs. The proposed method totally avoids the use of organic solvents or costly standards and could be applied easily in the oil processing industry. The accuracy of the method was checked by comparison to the conventional standard American Oil Chemists' Society (AOCS) titrimetric procedure, which provided good correlation (R = 0.99980) with an SD of ±0.05%. Therefore, the proposed method could be used as an alternative to the AOCS titrimetric method for the quantitative determination of FFAs, especially in deodorizer distillates.

  1. A Method of Calibrating Airspeed Installations on Airplanes at Transonic and Supersonic Speeds by the Use of Accelerometer and Attitude-Angle Measurements

    NASA Technical Reports Server (NTRS)

    Zalovick, John A; Lina, Lindsay J; Trant, James P , Jr

    1953-01-01

    A method is described for calibrating airspeed installations on airplanes at transonic and supersonic speeds in vertical-plane maneuvers, in which use is made of measurements of normal and longitudinal accelerations and attitude angle. In this method all the required instrumentation is carried within the airplane. An analytical study of the effects of various sources of error on the accuracy of an airspeed calibration by the accelerometer method indicated that the required measurements can be made accurately enough to ensure a satisfactory calibration.

  2. Fast calibration of electromagnetically tracked oblique-viewing rigid endoscopes.

    PubMed

    Liu, Xinyang; Rice, Christina E; Shekhar, Raj

    2017-10-01

    The oblique-viewing (i.e., angled) rigid endoscope is a commonly used tool in conventional endoscopic surgeries. The relative rotation between its two movable parts, the telescope and the camera head, creates a rotation offset between an actual object and its projection in the camera image. A calibration method tailored to compensate for this offset is needed. We developed a fast calibration method for oblique-viewing rigid endoscopes suitable for clinical use. In contrast to prior approaches based on optical tracking, we used electromagnetic (EM) tracking as the external tracking hardware to improve compactness and practicality. Two EM sensors were mounted on the telescope and the camera head, respectively, with considerations to minimize EM tracking errors. Single-image calibration was incorporated into the method, and a sterilizable plate, laser-marked with the calibration pattern, was also developed. Furthermore, we proposed a general algorithm to estimate the rotation center in the camera image. Formulas for updating the camera matrix in terms of clockwise and counterclockwise rotations were also developed. The proposed calibration method was validated using a conventional [Formula: see text], 5-mm laparoscope. Freehand calibrations were performed using the proposed method, and the calibration time averaged 2 min and 8 s. The calibration accuracy was evaluated in a simulated clinical setting with several surgical tools present in the magnetic field of EM tracking. The root-mean-square re-projection error averaged 4.9 pixels (range 2.4-8.5 pixels, with image resolution of [Formula: see text]) for rotation angles ranging from [Formula: see text] to [Formula: see text]. We developed a method for fast and accurate calibration of oblique-viewing rigid endoscopes. The method was also designed to be performed in the operating room and will therefore support clinical translation of many emerging endoscopic computer-assisted surgical systems.
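
    Compensating the rotation offset can be sketched as follows (a hedged illustration, not the authors' exact update formulas): with the telescope/camera-head angle theta known from the two EM sensors, projected points are rotated about the estimated rotation center in the image.

        import numpy as np

        def compensate_rotation(points_px, center_px, theta_rad):
            """Rotate projected image points about the rotation center.
            points_px: Nx2 projections computed for the reference (zero-angle) pose."""
            c, s = np.cos(theta_rad), np.sin(theta_rad)
            R = np.array([[c, -s], [s, c]])
            return (points_px - center_px) @ R.T + center_px

        # Hypothetical values: rotation center near the image middle, 30 deg rotation.
        center = np.array([320.0, 240.0])
        pts = np.array([[400.0, 300.0], [250.0, 200.0]])
        print(compensate_rotation(pts, center, np.deg2rad(30.0)))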

  3. A New Calibration Method for Commercial RGB-D Sensors.

    PubMed

    Darwish, Walid; Tang, Shenjun; Li, Wenbin; Chen, Wu

    2017-05-24

    Commercial RGB-D sensors such as Kinect and Structure Sensors have been widely used in the game industry, where geometric fidelity is not of utmost importance. For applications in which high quality 3D is required, i.e., 3D building models of centimeter-level accuracy, accurate and reliable calibrations of these sensors are required. This paper presents a new model for calibrating the depth measurements of RGB-D sensors based on the structured light concept. Additionally, a new automatic method is proposed for the calibration of all RGB-D parameters, including internal calibration parameters for all cameras, the baseline between the infrared and RGB cameras, and the depth error model. When compared with traditional calibration methods, this new model shows a significant improvement in depth precision for both near and far ranges.

  4. Model Calibration with Censored Data

    DOE PAGES

    Cao, Fang; Ba, Shan; Brenneman, William A.; ...

    2017-06-28

    Here, the purpose of model calibration is to make the model predictions closer to reality. The classical Kennedy-O'Hagan approach is widely used for model calibration, and can account for the inadequacy of the computer model while simultaneously estimating the unknown calibration parameters. In many applications, the phenomenon of censoring occurs when the exact outcome of the physical experiment is not observed, but is only known to fall within a certain region. In such cases, the Kennedy-O'Hagan approach cannot be used directly, and we propose a method to incorporate the censoring information when performing model calibration. The method is applied to study the compression phenomenon of liquid inside a bottle. The results show significant improvement over the traditional calibration methods, especially when the number of censored observations is large.
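
    One hedged way to write the setup (notation assumed, not taken from the paper): the Kennedy-O'Hagan model augments the computer model with a discrepancy term, and censored runs enter the likelihood through the normal CDF rather than the density,

        \[
        y_i = \eta(x_i,\theta) + \delta(x_i) + \varepsilon_i,
        \qquad \varepsilon_i \sim \mathcal{N}(0,\sigma^2),
        \]
        \[
        L(\theta) \propto
        \prod_{i \notin C} \phi\!\left(\frac{y_i-\mu_i(\theta)}{\sigma}\right)
        \prod_{i \in C} \Phi\!\left(\frac{c_i-\mu_i(\theta)}{\sigma}\right),
        \]

    where C indexes the censored runs with censoring limits c_i and mu_i(theta) = eta(x_i, theta) + delta(x_i).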

  5. Assessment of opacimeter calibration according to International Standard Organization 10155.

    PubMed

    Gomes, J F

    2001-01-01

    This paper compares the calibration method for opacimeters issued by the International Standard Organization (ISO) as ISO 10155 with the manual reference method for determination of dust content in stack gases. ISO 10155 requires at least nine operational measurements, corresponding to three operational measurements for each dust emission range within the stack. The procedure is assessed by comparison with previous calibration methods for opacimeters using only two operational measurements, based on a set of measurements made at stacks of pulp mills. The results show that even though the international standard for opacimeter calibration requires that the calibration curve be obtained using 3 x 3 points, a calibration curve derived using 3 points could at times be acceptable in statistical terms, provided that the amplitude of the individual measurements is low.

  6. Patient-specific calibration of cone-beam computed tomography data sets for radiotherapy dose calculations and treatment plan assessment.

    PubMed

    MacFarlane, Michael; Wong, Daniel; Hoover, Douglas A; Wong, Eugene; Johnson, Carol; Battista, Jerry J; Chen, Jeff Z

    2018-03-01

    In this work, we propose a new method of calibrating cone beam computed tomography (CBCT) data sets for radiotherapy dose calculation and plan assessment. The motivation for this patient-specific calibration (PSC) method is to develop an efficient, robust, and accurate CBCT calibration process that is less susceptible to deformable image registration (DIR) errors. Instead of mapping the CT numbers voxel-by-voxel with traditional DIR calibration methods, the PSC methods generates correlation plots between deformably registered planning CT and CBCT voxel values, for each image slice. A linear calibration curve specific to each slice is then obtained by least-squares fitting, and applied to the CBCT slice's voxel values. This allows each CBCT slice to be corrected using DIR without altering the patient geometry through regional DIR errors. A retrospective study was performed on 15 head-and-neck cancer patients, each having routine CBCTs and a middle-of-treatment re-planning CT (reCT). The original treatment plan was re-calculated on the patient's reCT image set (serving as the gold standard) as well as the image sets produced by voxel-to-voxel DIR, density-overriding, and the new PSC calibration methods. Dose accuracy of each calibration method was compared to the reference reCT data set using common dose-volume metrics and 3D gamma analysis. A phantom study was also performed to assess the accuracy of the DIR and PSC CBCT calibration methods compared with planning CT. Compared with the gold standard using reCT, the average dose metric differences were ≤ 1.1% for all three methods (PSC: -0.3%; DIR: -0.7%; density-override: -1.1%). The average gamma pass rates with thresholds 3%, 3 mm were also similar among the three techniques (PSC: 95.0%; DIR: 96.1%; density-override: 94.4%). An automated patient-specific calibration method was developed which yielded strong dosimetric agreement with the results obtained using a re-planning CT for head-and-neck patients. © 2018 The Authors. Journal of Applied Clinical Medical Physics published by Wiley Periodicals, Inc. on behalf of American Association of Physicists in Medicine.
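
    The per-slice fitting at the heart of PSC can be sketched as follows (registration is assumed done elsewhere; arrays are simulated): correlate registered planning-CT and CBCT voxel values slice by slice, least-squares fit a line, and map each CBCT slice through its own line.

        import numpy as np

        def psc_calibrate(cbct, ct_registered):
            """Per-slice linear calibration of CBCT numbers to CT numbers.
            cbct, ct_registered: 3-D arrays (slices, rows, cols), voxelwise aligned."""
            out = np.empty_like(cbct, dtype=float)
            for k in range(cbct.shape[0]):
                x = cbct[k].ravel().astype(float)
                y = ct_registered[k].ravel().astype(float)
                slope, intercept = np.polyfit(x, y, 1)   # least-squares line per slice
                out[k] = slope * cbct[k] + intercept
            return out

        rng = np.random.default_rng(5)
        ct = rng.normal(0, 300, size=(20, 64, 64))               # planning CT (HU), fake
        cbct = 0.9 * ct - 40 + rng.normal(0, 20, size=ct.shape)  # miscalibrated CBCT
        calibrated = psc_calibrate(cbct, ct)
        print(float(np.abs(calibrated - ct).mean()))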

  7. Research on orbit prediction for solar-based calibration proper satellite

    NASA Astrophysics Data System (ADS)

    Chen, Xuan; Qi, Wenwen; Xu, Peng

    2018-03-01

    Orbit prediction uses the mathematical model of orbital mechanics to forecast a space target's orbit at a given time from the orbit at an initial epoch. The proper-satellite radiometric calibration and calibration orbit prediction process are introduced briefly. On the basis of research on the calibration space-position design method and the radiative transfer model, an orbit prediction method for proper-satellite radiometric calibration is proposed to select an appropriate calibration arc for the remote sensor and to predict the orbit information of the proper satellite and the remote sensor. By analyzing the orbit constraints of proper-satellite calibration, the GF-1 sun-synchronous orbit is chosen as the proper satellite orbit in order to simulate the visible calibration duration for different satellites to be calibrated. The results of simulation and analysis provide a basis for improving the radiometric calibration accuracy of satellite remote sensors, laying the foundation for high-precision and high-frequency radiometric calibration.
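
    A minimal, hedged sketch of the propagation core behind such prediction: two-body Keplerian propagation of the mean anomaly with Newton's method for Kepler's equation (no perturbations; an operational predictor would add J2, drag, etc.).

        import numpy as np

        MU = 3.986004418e14            # Earth's gravitational parameter, m^3/s^2

        def propagate_anomaly(a, e, M0, dt):
            """Propagate mean anomaly and solve Kepler's equation E - e*sinE = M."""
            n = np.sqrt(MU / a**3)     # mean motion, rad/s
            M = (M0 + n * dt) % (2 * np.pi)
            E = M if e < 0.8 else np.pi
            for _ in range(20):        # Newton iterations
                E -= (E - e * np.sin(E) - M) / (1 - e * np.cos(E))
            nu = 2 * np.arctan2(np.sqrt(1 + e) * np.sin(E / 2),
                                np.sqrt(1 - e) * np.cos(E / 2))
            return nu                  # true anomaly after dt seconds

        # Sun-synchronous-like, near-circular orbit at ~645 km altitude (GF-1-like).
        a, e = 6378137.0 + 645e3, 0.001
        print(np.degrees(propagate_anomaly(a, e, 0.0, 3600.0)))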

  8. Evaluation of a physically based quasi-linear and a conceptually based nonlinear Muskingum methods

    NASA Astrophysics Data System (ADS)

    Perumal, Muthiah; Tayfur, Gokmen; Rao, C. Madhusudana; Gurarslan, Gurhan

    2017-03-01

    Two variants of the Muskingum flood routing method formulated to account for the nonlinearity of the channel routing process are investigated in this study. These variants are: (1) the three-parameter conceptual Nonlinear Muskingum (NLM) method advocated by Gill in 1978, and (2) the Variable Parameter McCarthy-Muskingum (VPMM) method proposed by Perumal and Price in 2013. The VPMM method does not require rigorous calibration and validation procedures as required by the NLM method, due to established relationships of its parameters with flow and channel characteristics based on hydrodynamic principles. The parameters of the conceptual nonlinear storage equation used in the NLM method were calibrated using Artificial Intelligence Application (AIA) techniques, such as the Genetic Algorithm (GA), Differential Evolution (DE), Particle Swarm Optimization (PSO) and Harmony Search (HS). The calibration was carried out on a set of hypothetical flood events obtained by routing a given inflow hydrograph through a set of 40 km long prismatic channel reaches using the Saint-Venant (SV) equations. The validation of the calibrated NLM method was investigated using a different set of hypothetical flood hydrographs obtained in the same channel reaches used for the calibration studies. Both sets of solutions obtained in the calibration and validation cases using the NLM method were compared with the corresponding solutions of the VPMM method based on pertinent evaluation measures. The results of the study reveal that the physically based VPMM method accounts for the nonlinear characteristics of flood wave movement better than the conceptually based NLM method, which requires tedious calibration and validation procedures.
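
    A hedged sketch of NLM routing and its calibration with differential evolution, one of the AIA techniques named above: the storage relation S = K[xI + (1-x)O]^m follows Gill's three-parameter form, while the stepping scheme and hydrographs are illustrative.

        import numpy as np
        from scipy.optimize import differential_evolution

        def route_nlm(inflow, K, x, m, dt=1.0):
            """Nonlinear Muskingum routing with storage S = K*(x*I + (1-x)*O)**m."""
            out = np.empty_like(inflow)
            out[0] = inflow[0]
            S = K * inflow[0] ** m                 # initial storage (I = O assumed)
            for t in range(1, len(inflow)):
                O = ((S / K) ** (1.0 / m) - x * inflow[t]) / (1.0 - x)
                O = max(O, 0.0)
                S += dt * (inflow[t] - O)          # continuity: dS/dt = I - O
                out[t] = O
            return out

        # Hypothetical hydrographs (m^3/s); 'observed' made with known parameters.
        t = np.arange(0, 60.0)
        inflow = 20 + 80 * np.exp(-((t - 15) / 6.0) ** 2)
        observed = route_nlm(inflow, K=2.0, x=0.3, m=1.2)

        def sse(p):
            return np.sum((route_nlm(inflow, *p) - observed) ** 2)

        res = differential_evolution(sse, bounds=[(0.5, 5.0), (0.0, 0.49), (1.0, 1.8)],
                                     seed=0, maxiter=200)
        print(res.x)   # should approximately recover (2.0, 0.3, 1.2)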

  9. On-orbit calibration for star sensors without priori information.

    PubMed

    Zhang, Hao; Niu, Yanxiong; Lu, Jiazhen; Zhang, Chengfen; Yang, Yanqiang

    2017-07-24

    The star sensor is an essential navigation device for a spacecraft, and on-orbit calibration is an essential guarantee of its operational performance. However, traditional calibration methods rely on ground information and are invalid without a priori information. Uncertain on-orbit parameters will eventually degrade the performance of the guidance, navigation and control system. In this paper, a novel calibration method without a priori information for on-orbit star sensors is proposed. First, a simplified back-propagation neural network is designed for focal length and main point estimation along with system property evaluation, called coarse calibration. Then the unscented Kalman filter is adopted for the precise calibration of all parameters, including focal length, main point and distortion. The proposed method benefits from self-initialization, and no attitude or preinstalled sensor parameters are required. Precise star sensor parameter estimation can be achieved without a priori information, which is a significant improvement for on-orbit devices. Simulation and experiment results demonstrate that the calibration is easy to operate, with high accuracy and robustness. The proposed method can satisfy the stringent requirements of most star sensors.

  10. A Visual Servoing-Based Method for ProCam Systems Calibration

    PubMed Central

    Berry, Francois; Aider, Omar Ait; Mosnier, Jeremie

    2013-01-01

    Projector-camera systems are currently used in a wide field of applications, such as 3D reconstruction and augmented reality, and can provide accurate measurements, depending on the configuration and calibration. Frequently, the calibration task is divided into two steps: camera calibration followed by projector calibration. The latter still poses certain problems that are not easy to solve, such as the difficulty in obtaining a set of 2D–3D points to compute the projection matrix between the projector and the world. Existing methods are either not sufficiently accurate or not flexible. We propose an easy and automatic method to calibrate such systems that consists in projecting a calibration pattern and superimposing it automatically on a known printed pattern. The projected pattern is provided by a virtual camera observing a virtual pattern in an OpenGL model. The projector displays what the virtual camera visualizes. Thus, the projected pattern can be controlled and superimposed on the printed one with the aid of visual servoing. Our experimental results compare favorably with those of other methods considering both usability and accuracy. PMID:24084121

  11. Using Reliability to Meet Z540.3's 2 percent Rule

    NASA Technical Reports Server (NTRS)

    Mimbs, Scott M.

    2011-01-01

    NASA's Kennedy Space Center (KSC) undertook implementation of ANSI/NCSL Z540.3-2006 in October 2008. Early in the implementation, KSC identified that the largest cost driver of Z540.3 implementation is the measurement uncertainty analysis required for legacy calibration processes. NASA, like other organizations, has a significant inventory of measuring and test equipment (MTE) with documented calibration procedures but without documented measurement uncertainties. This paper provides background information to support the rationale for using high in-tolerance reliability as evidence of compliance with the 2% probability of false acceptance (PFA) quality metric of ANSI/NCSL Z540.3-2006, allowing qualifying legacy processes to be used. NASA is adopting this as policy and is recommending that NCSL International consider it as a method of compliance with Z540.3. Topics covered include compliance issues, using end-of-period reliability (EOPR) to estimate test point uncertainty, reliability data influences within the PFA model, the validity of EOPR data, and an appendix covering "observed" versus "true" EOPR.
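
    A hedged illustration of how an in-tolerance reliability figure relates to the 2% PFA metric: the sketch below infers the unit-under-test error spread from an assumed EOPR, adds an assumed calibration-process uncertainty, and estimates PFA by Monte Carlo. The normal distributions and the tolerance and uncertainty values are illustrative assumptions, not NASA's policy model:

    ```python
    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(0)

    L = 1.0                 # tolerance limit (normalized units)
    eopr = 0.95             # observed end-of-period in-tolerance reliability
    sigma_cal = 0.25 * L    # assumed calibration-process standard uncertainty

    # Infer the unit-under-test error spread consistent with the EOPR
    sigma_uut = L / norm.ppf((1 + eopr) / 2)

    n = 1_000_000
    true_err = rng.normal(0.0, sigma_uut, n)             # UUT bias at test time
    measured = true_err + rng.normal(0.0, sigma_cal, n)  # calibration measurement

    accepted = np.abs(measured) <= L
    pfa = np.mean(accepted & (np.abs(true_err) > L))     # false accepts
    print(f"PFA = {pfa:.2%}  (Z540.3 requirement: <= 2%)")
    ```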

  12. Piezo-thermal Probe Array for High Throughput Applications

    PubMed Central

    Gaitas, Angelo; French, Paddy

    2012-01-01

    Microcantilevers are used in a number of applications including atomic-force microscopy (AFM). In this work, deflection-sensing elements along with heating elements are integrated onto micromachined cantilever arrays to increase sensitivity, and reduce complexity and cost. An array of probes with 5–10 nm gold ultrathin film sensors on silicon substrates for high throughput scanning probe microscopy is developed. The deflection sensitivity is 0.2 ppm/nm. Plots of the change in resistance of the sensing element with displacement are used to calibrate the probes and determine probe contact with the substrate. Topographical scans demonstrate high throughput and nanometer resolution. The heating elements are calibrated and the thermal coefficient of resistance (TCR) is 655 ppm/K. The melting temperature of a material is measured by locally heating the material with the heating element of the cantilever while monitoring the bending with the deflection sensing element. The melting point value measured with this method is in close agreement with the reported value in literature. PMID:23641125

  13. Physicochemical characterization of Lavandula spp. honey with FT-Raman spectroscopy.

    PubMed

    Anjos, Ofélia; Santos, António J A; Paixão, Vasco; Estevinho, Letícia M

    2018-02-01

    This study aimed to evaluate the potential of FT-Raman spectroscopy in the prediction of the chemical composition of Lavandula spp. monofloral honey. Partial Least Squares (PLS) regression models were built for the quantitative estimation and the results were correlated with those obtained using reference methods. Good calibration models were obtained for electrical conductivity, ash, total acidity, reducing sugars, hydroxymethylfurfural (HMF), proline, diastase index, apparent sucrose, total flavonoid content and total phenol content. On the other hand, the model was less accurate for pH determination. The calibration models had high r2 values (ranging between 92.8% and 99.9%), high residual prediction deviations - RPD (ranging between 4.2 and 26.8) and low root mean square errors. These results confirm the hypothesis that FT-Raman is a useful technique for the quality control and evaluation of the chemical properties of Lavandula spp. honey. Its application may improve the efficiency, speed and cost of current laboratory analyses. Copyright © 2017 Elsevier B.V. All rights reserved.
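
    A hedged sketch of a PLS calibration of this kind, using scikit-learn on synthetic "spectra"; the band positions, sample counts and noise level are invented for illustration, not the study's data:

    ```python
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import r2_score, mean_squared_error

    rng = np.random.default_rng(42)

    # Synthetic stand-in for FT-Raman spectra: 120 samples x 600 wavenumbers,
    # with the target (e.g. total phenol content) hidden in two bands plus noise.
    n, p = 120, 600
    spectra = rng.normal(0, 1, (n, p)).cumsum(axis=1)      # smooth-ish baselines
    target = 2.0 * spectra[:, 150] - 1.2 * spectra[:, 400] + rng.normal(0, 0.3, n)

    X_train, X_test, y_train, y_test = train_test_split(spectra, target,
                                                        random_state=0)
    pls = PLSRegression(n_components=8).fit(X_train, y_train)
    y_pred = pls.predict(X_test).ravel()

    rmsep = mean_squared_error(y_test, y_pred) ** 0.5
    print(f"r2 = {r2_score(y_test, y_pred):.3f}, RMSEP = {rmsep:.3f}")
    # RPD = SD(reference values) / RMSEP, the figure of merit quoted above
    print(f"RPD = {y_test.std(ddof=1) / rmsep:.1f}")
    ```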

  14. 75 FR 8039 - Announcement of the American Petroleum Institute's Standards Activities

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-02-23

    ... Provers, 3rd Ed. MPMS Ch. 4.9.3, Methods of Calibration for Displacement and Volumetric Tank Provers, Part 3--Determination of the Volume of Displacement Provers by the Master Meter Method of Calibration, 1st Ed. MPMS Ch. 4.9.4, Methods of Calibration for Displacement and Volumetric Tank Provers, Part 4...

  15. Absolute Radiometric Calibration of Narrow-Swath Imaging Sensors with Reference to Non-Coincident Wide-Swath Sensors

    NASA Technical Reports Server (NTRS)

    McCorkel, Joel; Thome, Kurtis; Lockwood, Ronald

    2012-01-01

    An inter-calibration method is developed to provide absolute radiometric calibration of narrow-swath imaging sensors with reference to non-coincident wide-swath sensors. The method predicts at-sensor radiance using non-coincident imagery from the reference sensor and knowledge of the spectral reflectance of the test site. The imagery of the reference sensor is restricted to acquisitions that provide similar view and solar illumination geometry, to reduce uncertainties due to directional reflectance effects. Spectral reflectance of the test site is found with a simple iterative radiative transfer method using radiance values of a well-understood wide-swath sensor and spectral shape information based on historical ground-based measurements. At-sensor radiance is calculated for the narrow-swath sensor using this spectral reflectance and atmospheric parameters that are also based on historical in situ measurements. Results of the inter-calibration method show agreement at the 2-5 percent level in most spectral regions with the vicarious calibration technique relying on coincident ground-based measurements, referred to as the reflectance-based approach. While the variability of the inter-calibration method based on non-coincident image pairs is significantly larger, results are consistent with techniques relying on in situ measurements. The method is also insensitive to spectral differences between the sensors, since it transfers to surface spectral reflectance prior to prediction of at-sensor radiance. The utility of this inter-calibration method is made clear by its flexibility to use image pairings with acquisition dates differing by more than 30 days, allowing frequent absolute calibration comparisons between wide- and narrow-swath sensors.

  16. Automated calibration of laser spectrometer measurements of δ18O and δ2H values in water vapour using a Dew Point Generator.

    PubMed

    Munksgaard, Niels C; Cheesman, Alexander W; Gray-Spence, Andrew; Cernusak, Lucas A; Bird, Michael I

    2018-06-30

    Continuous measurement of stable O and H isotope compositions in water vapour requires automated calibration for remote field deployments. We developed a new low-cost device for calibration of both water vapour mole fraction and isotope composition. We coupled a commercially available dew point generator (DPG) to a laser spectrometer and developed hardware for water and air handling along with software for automated operation and data processing. We characterised isotopic fractionation in the DPG, conducted a field test and assessed the influence of critical parameters on the performance of the device. An analysis time of 1 hour was sufficient to achieve memory-free analysis of two water vapour standards and the δ18O and δ2H values were found to be independent of water vapour concentration over a range of ≈20,000-33,000 ppm. The reproducibility of the standard vapours over a 10-day period was better than 0.14‰ and 0.75‰ for δ18O and δ2H values, respectively (1σ, n = 11), prior to drift correction and calibration. The analytical accuracy was confirmed by the analysis of a third independent vapour standard. The DPG distillation process requires that isotope calibration takes account of DPG temperature, analysis time, injected water volume and air flow rate. The automated calibration system provides high accuracy and precision and is a robust, cost-effective option for long-term field measurements of water vapour isotopes. The necessary modifications to the DPG are minor and easily reversible. Copyright © 2018 John Wiley & Sons, Ltd.
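
    A minimal sketch of the drift-correction-then-two-point-calibration chain described above, with invented numbers for the two vapour standards:

    ```python
    import numpy as np

    # Hypothetical repeated measurements of two vapour standards over a run
    # (delta-18O, per mil). True assigned values of the standards:
    ref = np.array([-20.0, -5.0])

    t = np.array([0, 2, 4, 6, 8, 10], float)            # hours since run start
    std1 = np.array([-19.52, -19.49, -19.45, -19.41, -19.38, -19.35])
    std2 = np.array([-4.61, -4.58, -4.55, -4.50, -4.47, -4.44])

    # 1) Drift correction: the common linear trend shared by both standards
    drift = np.polyfit(np.r_[t, t], np.r_[std1 - std1.mean(),
                                          std2 - std2.mean()], 1)[0]
    std1_c, std2_c = std1 - drift * t, std2 - drift * t

    # 2) Two-point calibration: map the measured scale onto the reference scale
    meas = np.array([std1_c.mean(), std2_c.mean()])
    slope, intercept = np.polyfit(meas, ref, 1)

    sample = -12.30 - drift * 5.0      # a sample measured at t = 5 h, corrected
    print(f"calibrated d18O = {slope * sample + intercept:.2f} per mil")
    ```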

  17. Curvature-correction-based time-domain CMOS smart temperature sensor with an inaccuracy of -0.8 °C to 1.2 °C after one-point calibration from -40 °C to 120 °C

    NASA Astrophysics Data System (ADS)

    Chen, Chun-Chi; Lin, Shih-Hao; Lin, Yi

    2014-06-01

    This paper proposes a time-domain CMOS smart temperature sensor featuring on-chip curvature correction and one-point calibration support for thermal management systems. Time-domain inverter-based temperature sensors, which exhibit the advantages of low power and low cost, have been proposed for on-chip thermal monitoring. However, the curvature is large for the thermal transfer curve, which substantially affects the accuracy as the temperature range increases. Another problem is that the inverter is sensitive to process variations, resulting in difficulty for the sensors to achieve an acceptable accuracy for one-point calibration. To overcome these two problems, a temperature-dependent oscillator with curvature correction is proposed to increase the linearity of the oscillatory width, thereby resolving the drawback caused by a costly off-chip second-order master curve fitting. For one-point calibration support, an adjustable-gain time amplifier was adopted to eliminate the effect of process variations, with the assistance of a calibration circuit. The proposed circuit occupied a small area of 0.073 mm2 and was fabricated in a TSMC CMOS 0.35-μm 2P4M digital process. The linearization of the oscillator and the effect cancellation of process variations enabled the sensor, which featured a fixed resolution of 0.049 °C/LSB, to achieve an optimal inaccuracy of -0.8 °C to 1.2 °C after one-point calibration of 12 test chips from -40 °C to 120 °C. The power consumption was 35 μW at a sample rate of 10 samples/s.

  18. From mobile ADCP to high-resolution SSC: a cross-section calibration tool

    USGS Publications Warehouse

    Boldt, Justin A.

    2015-01-01

    Sediment is a major cause of stream impairment, and improved sediment monitoring is a crucial need. Point samples of suspended-sediment concentration (SSC) are often not enough to answer critical questions in a changing environment. As technology has improved, there now exists the opportunity to obtain discrete measurements of SSC and flux at a spatial scale unmatched by any other device. Acoustic instruments are ubiquitous in the U.S. Geological Survey (USGS) for making streamflow measurements, but when calibrated with physical sediment samples, they may be used for sediment measurements as well. The acoustic backscatter measured by an acoustic Doppler current profiler (ADCP) has long been known to correlate well with suspended sediment, but until recently it has mainly been used qualitatively. This new method using acoustic surrogates has great potential to leverage routine data collection to provide calibrated, quantitative measures of SSC, which promise to be more accurate, complete, and cost efficient than other methods. This extended abstract presents a method for measuring high spatial- and temporal-resolution SSC using a down-looking, mobile ADCP at discrete cross-sections. The high resolution of the resulting sediment data is a primary advantage and a vast improvement over other discrete methods for measuring SSC. Although acoustic surrogate technology using continuous, fixed-deployment (side-looking) ADCPs is proven, the same methods cannot be used with down-looking ADCPs, because vertical variation in SSC and particle-size distribution violates the underlying theory and complicates its assumptions. A software tool was developed to assist in using acoustic backscatter from a down-looking, mobile ADCP as a surrogate for SSC. This tool has a simple graphical user interface that loads the data, assists in the calibration procedure, and provides data visualization and output options. It is designed to improve ongoing efforts to monitor and predict resource responses to a changing environment. Because ADCPs are used routinely for streamflow measurements, using their acoustic backscatter as a surrogate for SSC has the potential to revolutionize sediment measurements by providing rapid measurements of sediment flux and distribution at spatial and temporal scales far beyond the capabilities of traditional physical samplers.
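
    The calibration step at the heart of such a tool is typically a log-linear regression of lab-analyzed SSC on range-corrected backscatter; a sketch with hypothetical paired samples (the data and coefficients are illustrative, not the USGS tool's):

    ```python
    import numpy as np

    # Hypothetical paired data: range-corrected acoustic backscatter (dB) from
    # the ADCP ensembles nearest each physical sample, and lab-analyzed SSC (mg/L).
    backscatter_db = np.array([72.1, 75.4, 78.9, 81.2, 84.6, 88.0])
    ssc_mg_l = np.array([18.0, 31.0, 55.0, 92.0, 170.0, 305.0])

    # Fit the conventional log-linear sediment surrogate model:
    #   log10(SSC) = a + b * SV
    b, a = np.polyfit(backscatter_db, np.log10(ssc_mg_l), 1)
    print(f"log10(SSC) = {a:.3f} + {b:.4f} * SV")

    # Apply the calibration to a cross-section of backscatter cells
    profile_db = np.linspace(70, 90, 9)
    print(np.round(10 ** (a + b * profile_db), 1))   # mg/L for each cell
    ```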

  19. High-Precision Pulse Generator

    NASA Technical Reports Server (NTRS)

    Katz, Richard; Kleyner, Igor

    2011-01-01

    A document discusses a pulse generator with subnanosecond resolution implemented with a low-cost field-programmable gate array (FPGA) at low power levels. The method exploits the fast carry chains of certain FPGAs. Prototypes have been built and tested in both Actel AX and Xilinx Virtex 4 technologies. Because the delays through the fast carry chains vary with manufacturing variances as well as environmental conditions (voltage, aging, temperature, and radiation), in-flight calibration or control can be performed with a related time-interval measurement technique, by measuring a period of a stable oscillator.

  20. Smart System for Bicarbonate Control in Irrigation for Hydroponic Precision Farming

    PubMed Central

    Cambra, Carlos; Lacuesta, Raquel

    2018-01-01

    Improving sustainability in agriculture is an important challenge today. The automation of irrigation processes via low-cost sensors can help spread technological advances in a sector strongly influenced by economic costs. This article presents an auto-calibrated pH sensor able to detect and adjust imbalances in the pH level of the nutrient solution used in hydroponic agriculture. The sensor is composed of a pH probe and a set of micropumps that sequentially pour the liquid solutions used to maintain the sensor calibration, as well as the water samples from the channels that contain the nutrient solution. To implement our architecture, we use an auto-calibrated pH sensor connected to a wireless node. Several nodes compose the wireless sensor network (WSN) that controls our greenhouse. The sensors periodically measure the pH level of each hydroponic support and send the information to a database (DB), which stores and analyzes the data to alert farmers about the measurements. The data can then be accessed through a user-friendly, web-based interface reachable over the Internet from desktop or mobile devices. This paper also presents the design and test bench for both the auto-calibrated pH sensor and the wireless network to verify their correct operation. PMID:29693611

  1. Smart System for Bicarbonate Control in Irrigation for Hydroponic Precision Farming.

    PubMed

    Cambra, Carlos; Sendra, Sandra; Lloret, Jaime; Lacuesta, Raquel

    2018-04-25

    Improving sustainability in agriculture is an important challenge today. The automation of irrigation processes via low-cost sensors can help spread technological advances in a sector strongly influenced by economic costs. This article presents an auto-calibrated pH sensor able to detect and adjust imbalances in the pH level of the nutrient solution used in hydroponic agriculture. The sensor is composed of a pH probe and a set of micropumps that sequentially pour the liquid solutions used to maintain the sensor calibration, as well as the water samples from the channels that contain the nutrient solution. To implement our architecture, we use an auto-calibrated pH sensor connected to a wireless node. Several nodes compose the wireless sensor network (WSN) that controls our greenhouse. The sensors periodically measure the pH level of each hydroponic support and send the information to a database (DB), which stores and analyzes the data to alert farmers about the measurements. The data can then be accessed through a user-friendly, web-based interface reachable over the Internet from desktop or mobile devices. This paper also presents the design and test bench for both the auto-calibrated pH sensor and the wireless network to verify their correct operation.

  2. Non-orthogonal tool/flange and robot/world calibration.

    PubMed

    Ernst, Floris; Richter, Lars; Matthäus, Lars; Martens, Volker; Bruder, Ralf; Schlaefer, Alexander; Schweikard, Achim

    2012-12-01

    For many robot-assisted medical applications, it is necessary to accurately compute the relation between the robot's coordinate system and the coordinate system of a localisation or tracking device. Today, this is typically carried out using hand-eye calibration methods like those proposed by Tsai/Lenz or Daniilidis. We present a new method for simultaneous tool/flange and robot/world calibration by estimating a solution to the matrix equation AX = YB, computed using a least-squares approach. Because real robots and localisation devices are all afflicted by errors, our approach allows for non-orthogonal matrices, partially compensating for imperfect calibration of the robot or localisation device. We also introduce a new method in which full robot/world and partial tool/flange calibration is possible using localisation devices providing fewer than six degrees of freedom (DOFs). The methods are evaluated on simulation data and on real-world measurements from optical and magnetic tracking devices, volumetric ultrasound providing 3-DOF data, and a surface laser scanning device. We compare our methods with two classical approaches: the method by Tsai/Lenz and the method by Daniilidis. In all experiments, the new algorithms outperform the classical methods in terms of translational accuracy by up to 80% and perform similarly in terms of rotational accuracy. Additionally, the methods are shown to be stable: the number of calibration stations used has far less influence on calibration quality than for the classical methods. Our work shows that the new method can be used for estimating the relationship between the robot's and the localisation device's coordinate systems. The new method can also be used for deficient systems providing only 3-DOF data, and it can be employed in real-time scenarios because of its speed. Copyright © 2012 John Wiley & Sons, Ltd.
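
    A sketch of a least-squares solution of AX = YB in the spirit described (linear in the entries of X and Y, hence tolerant of non-orthogonal blocks); the Kronecker-product and SVD null-space construction below is a generic textbook formulation, not necessarily the authors' exact algorithm:

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    def random_transform():
        """Random rigid 4x4 homogeneous transform (quaternion -> rotation)."""
        q = rng.normal(size=4); q /= np.linalg.norm(q)
        w, x, y, z = q
        R = np.array([[1-2*(y*y+z*z), 2*(x*y-w*z),   2*(x*z+w*y)],
                      [2*(x*y+w*z),   1-2*(x*x+z*z), 2*(y*z-w*x)],
                      [2*(x*z-w*y),   2*(y*z+w*x),   1-2*(x*x+y*y)]])
        T = np.eye(4); T[:3, :3] = R; T[:3, 3] = rng.normal(size=3)
        return T

    def solve_axyb(As, Bs):
        """Least-squares X, Y with A_i X = Y B_i, non-orthogonal blocks allowed.
        Uses vec(AXI) = (I kron A) vec(X), vec(IYB) = (B^T kron I) vec(Y)."""
        I4 = np.eye(4)
        rows = [np.hstack([np.kron(I4, A), -np.kron(B.T, I4)])
                for A, B in zip(As, Bs)]
        _, _, Vt = np.linalg.svd(np.vstack(rows))
        v = Vt[-1]                              # null-space solution, up to scale
        X = v[:16].reshape(4, 4, order="F")     # column-major "vec" inverse
        Y = v[16:].reshape(4, 4, order="F")
        return X / X[3, 3], Y / X[3, 3]         # fix scale so X[3,3] = 1

    X_true, Y_true = random_transform(), random_transform()
    Bs = [random_transform() for _ in range(6)]
    As = [Y_true @ B @ np.linalg.inv(X_true) for B in Bs]

    X, Y = solve_axyb(As, Bs)
    print(np.allclose(X, X_true, atol=1e-6), np.allclose(Y, Y_true, atol=1e-6))
    ```

    With noisy measurements the same stacked system is solved in a least-squares sense, and the recovered blocks need not be exactly orthogonal, which is the compensation effect the abstract describes.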

  3. Regression Analysis and Calibration Recommendations for the Characterization of Balance Temperature Effects

    NASA Technical Reports Server (NTRS)

    Ulbrich, N.; Volden, T.

    2018-01-01

    Analysis and use of temperature-dependent wind tunnel strain-gage balance calibration data are discussed in the paper. First, three different methods are presented and compared that may be used to process temperature-dependent strain-gage balance data. The first method uses an extended set of independent variables in order to process the data and predict balance loads. The second method applies an extended load iteration equation during the analysis of balance calibration data. The third method uses temperature-dependent sensitivities for the data analysis. Physical interpretations of the most important temperature-dependent regression model terms are provided that relate temperature compensation imperfections and the temperature-dependent nature of the gage factor to sets of regression model terms. Finally, balance calibration recommendations are listed so that temperature-dependent calibration data can be obtained and successfully processed using the reviewed analysis methods.
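
    The first method described (an extended set of independent variables) can be illustrated with a toy single-gage regression in which a gage-output-times-temperature cross term captures the temperature-dependent gage factor and a linear temperature term captures imperfect temperature compensation; all numbers are invented:

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    # Synthetic single-gage example: load as a function of gage output R and
    # balance temperature T, with a temperature-dependent sensitivity.
    n = 200
    R = rng.uniform(-1, 1, n)          # bridge output (normalized)
    T = rng.uniform(-20, 60, n)        # balance temperature, deg C
    load = 100 * R + 0.05 * T + 0.8 * R * T + rng.normal(0, 0.1, n)

    # Extended independent-variable set: intercept, R, T, and the R*T cross
    # term; the T coefficient reflects compensation imperfection and the R*T
    # coefficient the temperature dependence of the gage factor.
    A = np.column_stack([np.ones(n), R, T, R * T])
    coef, *_ = np.linalg.lstsq(A, load, rcond=None)
    print(np.round(coef, 3))   # approximately [0, 100, 0.05, 0.8]
    ```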

  4. A Review on Microdialysis Calibration Methods: the Theory and Current Related Efforts.

    PubMed

    Kho, Chun Min; Enche Ab Rahim, Siti Kartini; Ahmad, Zainal Arifin; Abdullah, Norazharuddin Shah

    2017-07-01

    Microdialysis is a sampling technique first introduced in the late 1950s. Although originally designed to study endogenous compounds in the animal brain, it was later modified for use in other organs. Microdialysis is not only able to collect unbound concentrations of compounds from tissue sites; it can also be used to deliver exogenous compounds to a designated area. Due to its versatility, the microdialysis technique is widely employed in a number of areas, including biomedical research. However, for most in vivo studies, the concentration of a substance obtained directly from microdialysis does not accurately describe the concentration of the substance on-site. In order to relate the results collected from microdialysis to the actual in vivo condition, a calibration method is required. To date, various microdialysis calibration methods have been reported, each capable of providing valuable insights into the technique and its applications. This paper provides a critical review of the various calibration methods used in microdialysis applications, starting with a detailed description of the microdialysis technique itself. The review details the calibration methods employed, presents examples of related work including clinical efforts, and discusses the advantages and disadvantages of each method.

  5. Apparatus for in-situ calibration of instruments that measure fluid depth

    DOEpatents

    Campbell, M.D.

    1994-01-11

    The present invention provides a method and apparatus for in-situ calibration of distance measuring equipment. The method comprises obtaining a first distance measurement in a first location, then obtaining at least one other distance measurement in at least one other location at a precisely known distance from the first location, and calculating a calibration constant. The method is applied specifically to calculating a calibration constant for obtaining fluid level, and is embodied in an apparatus using a pressure transducer and a spacer of precisely known length. The calibration constant is used to calculate the depth of a fluid from subsequent single pressure measurements at any submerged position. 8 figures.

  6. Effects of calibration methods on quantitative material decomposition in photon-counting spectral computed tomography using a maximum a posteriori estimator.

    PubMed

    Curtis, Tyler E; Roeder, Ryan K

    2017-10-01

    Advances in photon-counting detectors have enabled quantitative material decomposition using multi-energy or spectral computed tomography (CT). Supervised methods for material decomposition utilize an estimated attenuation for each material of interest at each photon energy level, which must be calibrated based upon calculated or measured values for known compositions. Measurements using a calibration phantom can advantageously account for system-specific noise, but the effect of calibration methods on the material basis matrix and subsequent quantitative material decomposition has not been experimentally investigated. Therefore, the objective of this study was to investigate the influence of the range and number of contrast agent concentrations within a modular calibration phantom on the accuracy of quantitative material decomposition in the image domain. Gadolinium was chosen as a model contrast agent in imaging phantoms, which also contained bone tissue and water as negative controls. The maximum gadolinium concentration (30, 60, and 90 mM) and total number of concentrations (2, 4, and 7) were independently varied to systematically investigate effects of the material basis matrix and scaling factor calibration on the quantitative (root mean squared error, RMSE) and spatial (sensitivity and specificity) accuracy of material decomposition. Images of calibration and sample phantoms were acquired using a commercially available photon-counting spectral micro-CT system with five energy bins selected to normalize photon counts and leverage the contrast agent k-edge. Material decomposition of gadolinium, calcium, and water was performed for each calibration method using a maximum a posteriori estimator. Both the quantitative and spatial accuracy of material decomposition were most improved by using an increased maximum gadolinium concentration (range) in the basis matrix calibration; the effects of using a greater number of concentrations were relatively small in magnitude by comparison. The material basis matrix calibration was more sensitive to changes in the calibration methods than the scaling factor calibration. The material basis matrix calibration significantly influenced both the quantitative and spatial accuracy of material decomposition, while the scaling factor calibration influenced quantitative but not spatial accuracy. Importantly, the median RMSE of material decomposition was as low as ~1.5 mM (~0.24 mg/mL gadolinium), which was similar in magnitude to that measured by optical spectroscopy on the same samples. The accuracy of quantitative material decomposition in photon-counting spectral CT was significantly influenced by calibration methods which must therefore be carefully considered for the intended diagnostic imaging application. © 2017 American Association of Physicists in Medicine.
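
    Image-domain material decomposition of this kind solves, per voxel, a small linear system mu = M c, with M the calibrated material basis matrix. The sketch below uses non-negative least squares as a simpler stand-in for the study's maximum a posteriori estimator, with an invented basis matrix:

    ```python
    import numpy as np
    from scipy.optimize import nnls

    # Hypothetical basis matrix M: attenuation (1/cm) of unit amounts of
    # [gadolinium, calcium, water] in five energy bins, as would be obtained
    # from imaging a calibration phantom of known concentrations.
    M = np.array([[0.80, 0.35, 0.20],
                  [1.10, 0.30, 0.18],    # bin just above the Gd k-edge
                  [0.95, 0.26, 0.17],
                  [0.70, 0.22, 0.16],
                  [0.55, 0.19, 0.15]])

    c_true = np.array([0.03, 0.10, 0.90])            # Gd, Ca, water in one voxel
    mu = M @ c_true + np.random.default_rng(1).normal(0, 0.001, 5)

    c_hat, _ = nnls(M, mu)           # non-negative least-squares decomposition
    print(np.round(c_hat, 3))        # roughly recovers c_true
    ```

    The abstract's finding maps directly onto this picture: widening the calibrated concentration range conditions the columns of M better, which improves the recovered c.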

  7. Simulation of temperature field for temperature-controlled radio frequency ablation using a hyperbolic bioheat equation and temperature-varied voltage calibration: a liver-mimicking phantom study.

    PubMed

    Zhang, Man; Zhou, Zhuhuang; Wu, Shuicai; Lin, Lan; Gao, Hongjian; Feng, Yusheng

    2015-12-21

    This study aims at improving the accuracy of temperature simulation for temperature-controlled radio frequency ablation (RFA). We propose a new voltage-calibration method for the simulation and investigate the feasibility of a hyperbolic bioheat equation (HBE) in RFA simulations with longer durations and higher power. A total of 40 RFA experiments were conducted in a liver-mimicking phantom. Four mathematical models with multipolar electrodes were developed by the finite element method in COMSOL software: the HBE with and without voltage calibration, and the Pennes bioheat equation (PBE) with and without voltage calibration. The temperature-varied voltage calibration used in the simulation was calculated from the experimental power output and the temperature-dependent resistance of liver tissue. We employed the HBE in the simulation with a delay time τ of 16 s. First, for simulations with each kind of bioheat equation (PBE or HBE), we compared simulations using the temperature-varied voltage calibration with those using fixed voltage values. Then, the PBE and the HBE were compared in simulations with the temperature-varied voltage calibration. We verified the simulation results against experimental temperature measurements at nine specific points of the tissue phantom. The results showed that: (1) the proposed voltage-calibration method improved the simulation accuracy of temperature-controlled RFA for both the PBE and the HBE, and (2) for temperature-controlled RFA simulation with the temperature-varied voltage calibration, the HBE method was 0.55 °C more accurate than the PBE method. The proposed temperature-varied voltage calibration may be useful in temperature field simulations of temperature-controlled RFA. In addition, the HBE may serve as an alternative in the simulation of long-duration, high-power RFA.

  8. Simultaneous auto-calibration and gradient delays estimation (SAGE) in non-Cartesian parallel MRI using low-rank constraints.

    PubMed

    Jiang, Wenwen; Larson, Peder E Z; Lustig, Michael

    2018-03-09

    To correct gradient timing delays in non-Cartesian MRI while simultaneously recovering corruption-free auto-calibration data for parallel imaging, without additional calibration scans. The calibration matrix constructed from multi-channel k-space data should be inherently low-rank. This property is used to construct reconstruction kernels or sensitivity maps. Delays between the gradient hardware across different axes and RF receive chain, which are relatively benign in Cartesian MRI (excluding EPI), lead to trajectory deviations and hence data inconsistencies for non-Cartesian trajectories. These in turn lead to higher rank and corrupted calibration information which hampers the reconstruction. Here, a method named Simultaneous Auto-calibration and Gradient delays Estimation (SAGE) is proposed that estimates the actual k-space trajectory while simultaneously recovering the uncorrupted auto-calibration data. This is done by estimating the gradient delays that result in the lowest rank of the calibration matrix. The Gauss-Newton method is used to solve the non-linear problem. The method is validated in simulations using center-out radial, projection reconstruction and spiral trajectories. Feasibility is demonstrated on phantom and in vivo scans with center-out radial and projection reconstruction trajectories. SAGE is able to estimate gradient timing delays with high accuracy at a signal to noise ratio level as low as 5. The method is able to effectively remove artifacts resulting from gradient timing delays and restore image quality in center-out radial, projection reconstruction, and spiral trajectories. The low-rank based method introduced simultaneously estimates gradient timing delays and provides accurate auto-calibration data for improved image quality, without any additional calibration scans. © 2018 International Society for Magnetic Resonance in Medicine.

  9. Minimal-Drift Heading Measurement using a MEMS Gyro for Indoor Mobile Robots.

    PubMed

    Hong, Sung Kyung; Park, Sungsu

    2008-11-17

    To meet the challenge of making low-cost MEMS yaw-rate gyros suitable for precise self-localization of indoor mobile robots, this paper examines a practical and effective method of minimizing drift in the heading angle that relies solely on integration of the rate signals from a gyro. The proposed approach consists of two parts: 1) self-identification of the calibration coefficients that affect long-term performance, and 2) a threshold filter to reject the broadband noise component that affects short-term performance. Experimental results with the proposed method applied to an Epson XV3500 gyro demonstrate that it effectively yields minimal-drift heading angle measurements, overcoming the major error sources in the MEMS gyro output.
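
    A toy version of the two-part idea, calibration coefficients plus a threshold filter ahead of the integrator; the bias, scale factor, noise level and threshold below are invented for illustration:

    ```python
    import numpy as np

    def integrate_heading(rates, dt, bias, scale, threshold):
        """Heading from raw gyro rates (deg/s): apply the identified calibration
        coefficients, then a threshold filter that zeroes near-stationary
        readings so broadband noise is not integrated into drift."""
        corrected = scale * (rates - bias)
        corrected[np.abs(corrected) < threshold] = 0.0   # threshold filter
        return np.cumsum(corrected) * dt                 # integrated heading, deg

    rng = np.random.default_rng(5)
    dt = 0.01                                            # 100 Hz sampling
    true_rate = np.r_[np.zeros(500), np.full(300, 30.0), np.zeros(500)]  # 90 deg turn
    raw = true_rate / 1.02 + 0.8 + rng.normal(0, 0.4, true_rate.size)    # bias+noise

    heading = integrate_heading(raw, dt, bias=0.8, scale=1.02, threshold=1.5)
    print(f"final heading = {heading[-1]:.1f} deg (ideal: 90.0)")
    ```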

  10. Skateboard/Longboard Speedometer Project

    ERIC Educational Resources Information Center

    Hare, Jonathan

    2012-01-01

    A simple, low-cost infrared LED speedometer is described that can be fitted to a skateboard, longboard or even a bicycle to measure speed. Notes on building, setting up and calibration are given. When used with a low-cost data logger, continuous measurements of speed can be made while out and about. The device forms an interesting science club…

  11. A new time calibration method for switched-capacitor-array-based waveform samplers

    NASA Astrophysics Data System (ADS)

    Kim, H.; Chen, C.-T.; Eclov, N.; Ronzhin, A.; Murat, P.; Ramberg, E.; Los, S.; Moses, W.; Choong, W.-S.; Kao, C.-M.

    2014-12-01

    We have developed a new time calibration method for the DRS4 waveform sampler that enables us to precisely measure the non-uniform sampling interval inherent in the switched-capacitor cells of the DRS4. The method uses the proportionality between the differential amplitude and sampling interval of adjacent switched-capacitor cells responding to a sawtooth-shape pulse. In the experiment, a sawtooth-shape pulse with a 40 ns period generated by a Tektronix AWG7102 is fed to a DRS4 evaluation board for calibrating the sampling intervals of all 1024 cells individually. The electronic time resolution of the DRS4 evaluation board with the new time calibration is measured to be 2.4 ps RMS by using two simultaneous Gaussian pulses with 2.35 ns full-width at half-maximum and applying a Gaussian fit. The time resolution dependencies on the time difference with the new time calibration are measured and compared to results obtained by another method. The new method could be applicable for other switched-capacitor-array technology-based waveform samplers for precise time calibration.
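
    A toy reproduction of the calibration principle: for a sawtooth of known slope, the voltage step between adjacent cells is proportional to their sampling interval, so averaging dV/slope over many acquisitions at random phases recovers each cell's interval. The sampler model, cell-to-cell spread and noise level below are invented for illustration:

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    # Toy DRS4-like sampler: 1024 cells with non-uniform sampling intervals
    n_cells = 1024
    nominal_dt = 1.0 / 5.12e9                                # 5.12 GS/s nominal
    true_dt = nominal_dt * (1 + 0.1 * rng.normal(size=n_cells))
    t_cells = np.cumsum(true_dt)

    period, amplitude = 40e-9, 1.0                           # sawtooth stimulus
    slope = amplitude / period                               # known V/s

    sums, counts = np.zeros(n_cells), np.zeros(n_cells)
    for _ in range(200):                                     # many acquisitions
        phase = rng.uniform(0, period)
        v = ((t_cells + phase) % period) * slope             # sampled voltages
        v += rng.normal(0, 1e-3, n_cells)                    # 1 mV readout noise
        dv = np.diff(v)
        ok = dv > 0                                          # drop ramp resets
        sums[1:][ok] += dv[ok] / slope                       # dt_i = dV_i / slope
        counts[1:][ok] += 1

    est_dt = np.where(counts > 0, sums / np.maximum(counts, 1), nominal_dt)
    err_ps = (est_dt[1:] - true_dt[1:]) * 1e12
    print(f"RMS interval error: {np.sqrt(np.mean(err_ps ** 2)):.2f} ps")
    ```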

  12. A New Time Calibration Method for Switched-capacitor-array-based Waveform Samplers.

    PubMed

    Kim, H; Chen, C-T; Eclov, N; Ronzhin, A; Murat, P; Ramberg, E; Los, S; Moses, W; Choong, W-S; Kao, C-M

    2014-12-11

    We have developed a new time calibration method for the DRS4 waveform sampler that enables us to precisely measure the non-uniform sampling interval inherent in the switched-capacitor cells of the DRS4. The method uses the proportionality between the differential amplitude and sampling interval of adjacent switched-capacitor cells responding to a sawtooth-shape pulse. In the experiment, a sawtooth-shape pulse with a 40 ns period generated by a Tektronix AWG7102 is fed to a DRS4 evaluation board for calibrating the sampling intervals of all 1024 cells individually. The electronic time resolution of the DRS4 evaluation board with the new time calibration is measured to be ~2.4 ps RMS by using two simultaneous Gaussian pulses with 2.35 ns full-width at half-maximum and applying a Gaussian fit. The time resolution dependencies on the time difference with the new time calibration are measured and compared to results obtained by another method. The new method could be applicable for other switched-capacitor-array technology-based waveform samplers for precise time calibration.

  13. A New Time Calibration Method for Switched-capacitor-array-based Waveform Samplers

    PubMed Central

    Kim, H.; Chen, C.-T.; Eclov, N.; Ronzhin, A.; Murat, P.; Ramberg, E.; Los, S.; Moses, W.; Choong, W.-S.; Kao, C.-M.

    2014-01-01

    We have developed a new time calibration method for the DRS4 waveform sampler that enables us to precisely measure the non-uniform sampling interval inherent in the switched-capacitor cells of the DRS4. The method uses the proportionality between the differential amplitude and sampling interval of adjacent switched-capacitor cells responding to a sawtooth-shape pulse. In the experiment, a sawtooth-shape pulse with a 40 ns period generated by a Tektronix AWG7102 is fed to a DRS4 evaluation board for calibrating the sampling intervals of all 1024 cells individually. The electronic time resolution of the DRS4 evaluation board with the new time calibration is measured to be ~2.4 ps RMS by using two simultaneous Gaussian pulses with 2.35 ns full-width at half-maximum and applying a Gaussian fit. The time resolution dependencies on the time difference with the new time calibration are measured and compared to results obtained by another method. The new method could be applicable for other switched-capacitor-array technology-based waveform samplers for precise time calibration. PMID:25506113

  14. Application of six sigma and AHP in analysis of variable lead time calibration process instrumentation

    NASA Astrophysics Data System (ADS)

    Rimantho, Dino; Rahman, Tomy Abdul; Cahyadi, Bambang; Tina Hernawati, S.

    2017-02-01

    Calibration of instrumentation equipment in the pharmaceutical industry is an important activity for determining the true value of a measurement. Preliminary studies indicated that lead time in the calibration process disrupted production and laboratory activities. This study aimed to analyze the causes of calibration lead time. Several methods were used: Six Sigma to determine the capability of the equipment calibration process; brainstorming, Pareto diagrams, and fishbone diagrams to identify and analyze the problems; and the Analytic Hierarchy Process (AHP) to create a hierarchical structure and prioritize the problems. The results showed a DPMO value of about 40769.23, equivalent to a sigma level of approximately 3.24σ for the calibration process, indicating the need for improvement. Problem-solving strategies for calibration lead time were then determined, such as shortening the preventive maintenance schedule, increasing the number of instrument calibrators, and training personnel. Consistency tests on all pairwise comparison matrices showed consistency ratios (CR) below 0.1.
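
    For reference, the DPMO-to-sigma-level conversion quoted above follows the usual Six Sigma convention of adding a 1.5-sigma long-term shift:

    ```python
    from scipy.stats import norm

    dpmo = 40769.23                           # defects per million opportunities
    z_lt = norm.ppf(1 - dpmo / 1e6)           # long-term z-score
    sigma_level = z_lt + 1.5                  # conventional 1.5-sigma shift
    print(f"sigma level = {sigma_level:.2f}") # ~3.24, matching the abstract
    ```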

  15. Comparison of global optimization approaches for robust calibration of hydrologic model parameters

    NASA Astrophysics Data System (ADS)

    Jung, I. W.

    2015-12-01

    Robustness of the calibrated parameters of hydrologic models is necessary to provide reliable predictions of future watershed behavior under varying climate conditions. This study investigated calibration performance according to the length of the calibration period, the objective function, the hydrologic model structure and the optimization method. To do this, the combination of three global optimization methods (SCE-UA, Micro-GA, and DREAM) and four hydrologic models (SAC-SMA, GR4J, HBV, and PRMS) was tested with different calibration periods and objective functions. Our results showed that the three global optimization methods provided similar calibration performance across the different calibration periods, objective functions, and hydrologic models. However, using the index of agreement, normalized root mean square error, or Nash-Sutcliffe efficiency as the objective function performed better than using the correlation coefficient or percent bias. Calibration performance for calibration periods ranging from one to seven years was hard to generalize, because the four hydrologic models differ in complexity and different years carry different information content in the hydrological observations. Acknowledgements This research was supported by a grant (14AWMP-B082564-01) from the Advanced Water Management Research Program funded by the Ministry of Land, Infrastructure and Transport of the Korean government.
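
    For concreteness, the objective functions the study found to perform better (and worse) are all one-liners; minimal implementations with a made-up observed/simulated pair:

    ```python
    import numpy as np

    def nse(obs, sim):
        """Nash-Sutcliffe efficiency (1 is perfect)."""
        return 1 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

    def index_of_agreement(obs, sim):
        """Willmott's index of agreement d (1 is perfect)."""
        num = np.sum((obs - sim) ** 2)
        den = np.sum((np.abs(sim - obs.mean()) + np.abs(obs - obs.mean())) ** 2)
        return 1 - num / den

    def pbias(obs, sim):
        """Percent bias (0 is perfect; sign shows over/underestimation)."""
        return 100 * np.sum(sim - obs) / np.sum(obs)

    obs = np.array([12.0, 18.0, 35.0, 50.0, 30.0, 20.0, 14.0])
    sim = np.array([10.0, 20.0, 33.0, 55.0, 28.0, 19.0, 15.0])
    print(f"NSE = {nse(obs, sim):.3f}, d = {index_of_agreement(obs, sim):.3f}, "
          f"PBIAS = {pbias(obs, sim):.1f}%")
    ```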

  16. Influence of Ultrasonic Nonlinear Propagation on Hydrophone Calibration Using Two-Transducer Reciprocity Method

    NASA Astrophysics Data System (ADS)

    Yoshioka, Masahiro; Sato, Sojun; Kikuchi, Tsuneo; Matsuda, Yoichi

    2006-05-01

    In this study, the influence of ultrasonic nonlinear propagation on hydrophone calibration by the two-transducer reciprocity method is investigated quantitatively using the Khokhlov-Zabolotskaya-Kuznetsov (KZK) equation. It is proposed that the correction for the diffraction and attenuation of ultrasonic waves used in two-transducer reciprocity calibration can be derived using the KZK equation to remove the influence of nonlinear propagation. The validity of the correction is confirmed by comparing the sensitivities calibrated by the two-transducer reciprocity method and laser interferometry.

  17. Technique for Radiometer and Antenna Array Calibration - TRAAC

    NASA Technical Reports Server (NTRS)

    Meyer, Paul; Sims, William; Varnavas, Kosta; McCracken, Jeff; Srinivasan, Karthik; Limaye, Ashutosh; Laymon, Charles; Richeson, James

    2012-01-01

    Highly sensitive receivers are used to detect minute amounts of emitted electromagnetic energy. Calibration of these receivers is vital to the accuracy of the measurements. Traditional calibration techniques depend on a calibration reference internal to the receiver as the reference for calibrating the observed electromagnetic energy. Such methods can only calibrate measurement errors introduced by the receiver itself; they cannot account for errors introduced by devices, such as antennas, used to capture the electromagnetic radiation. This severely limits the types of antennas that can be used to make measurements with a high degree of accuracy. Complex antenna systems, such as electronically steerable antennas (also known as phased arrays), while offering potentially significant advantages, suffer from the lack of a reliable and accurate calibration technique. The proximity of antenna elements in an array results in interaction between the electromagnetic fields radiated (or received) by the individual elements, a phenomenon called mutual coupling. The new calibration method uses a known noise source as a calibration load to determine the instantaneous characteristics of the antenna. The noise source is emitted from one element of the antenna array and received by all the other elements due to mutual coupling. This received noise is used as a calibration standard to monitor the stability of the antenna electronics.

  18. A New Calibration Method for Commercial RGB-D Sensors

    PubMed Central

    Darwish, Walid; Tang, Shenjun; Li, Wenbin; Chen, Wu

    2017-01-01

    Commercial RGB-D sensors such as Kinect and Structure Sensors have been widely used in the game industry, where geometric fidelity is not of utmost importance. For applications in which high quality 3D is required, i.e., 3D building models of centimeter-level accuracy, accurate and reliable calibrations of these sensors are required. This paper presents a new model for calibrating the depth measurements of RGB-D sensors based on the structured light concept. Additionally, a new automatic method is proposed for the calibration of all RGB-D parameters, including internal calibration parameters for all cameras, the baseline between the infrared and RGB cameras, and the depth error model. When compared with traditional calibration methods, this new model shows a significant improvement in depth precision for both near and far ranges. PMID:28538695

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tuo, Rui; Jeff Wu, C. F.

    Calibration parameters in deterministic computer experiments are those attributes that cannot be measured or are not available in physical experiments. Here, an approach is presented to estimate them using data from physical experiments and computer simulations. A theoretical framework is given which allows the issues of parameter identifiability and estimation to be studied. We define L2-consistency for calibration as a justification for calibration methods. It is shown that a simplified version of the original KO method leads to asymptotically L2-inconsistent calibration. This L2-inconsistency can be remedied by modifying the original estimation procedure. A novel calibration method, called the L2 calibration, is proposed and proven to be L2-consistent and to enjoy an optimal convergence rate. Furthermore, a numerical example and some mathematical analysis are used to illustrate the source of the L2-inconsistency problem.

  20. [Calibration of a room air gas monitor with certified reference gases].

    PubMed

    Krueger, W A; Trick, M; Schroeder, T H; Unertl, K E

    2003-12-01

    Photo-acoustic infrared spectrometry is considered the gold standard for on-line measurement of anesthetic waste gas in room air. To maintain measurement precision, the manufacturer recommends calibration of the gas monitor every 3-12 months. We investigated whether the use of reference gases with an analysis certificate could serve as a feasible alternative to commercial recalibration. We connected a multi-gas monitor type 1302 (Bruel & Kjaer, Naerum, Denmark) to compressed-air bottles containing reference gases with an analysis certificate. Using a T-piece with a flow-meter, we avoided the entry of room air during the calibration phase. Highly purified nitrogen was used for zero calibration. The reference concentrations for desflurane, enflurane, halothane, isoflurane, and sevoflurane ranged from 41.6-51.1 ml/m(3) (ppm) in synthetic air. Since the infrared absorption spectra of volatile anesthetics overlap with that of the alcohol used in operating rooms, we performed a cross-compensation with iso-propanol (107.0 ppm). A two-point calibration was performed for N(2)O (96.2 and 979.0 ppm), followed by cross-compensation with CO(2). Nafion tubes were used to avoid erroneous measurements due to molecular relaxation phenomena. The deviation of the measurement values ranged initially from 0-2.0% and increased to up to 4.9% after 18 months. For N(2)O, the corresponding values were 4.2% and 2.7%, respectively. Thus, our calibration procedure using certified reference gases yielded precise measurements with low deterioration over 18 months. It is advantageous that the precision can be determined whenever deemed necessary, allowing an individual decision on when the gas monitor needs to be recalibrated. The costs for reference gases and working time, as well as logistic aspects such as storage and expiration dates, must be weighed against the costs of commercial recalibration.

  1. The preliminary checkout, evaluation and calibration of a 3-component force measurement system for calibrating propulsion simulators for wind tunnel models

    NASA Technical Reports Server (NTRS)

    Scott, W. A.

    1984-01-01

    The propulsion simulator calibration laboratory (PSCL), in which calibrations can be performed to determine the gross thrust and airflow of propulsion simulators installed in wind tunnel models, is described. The preliminary checkout, evaluation and calibration of the PSCL's 3-component force measurement system is reported. Methods and equipment were developed for the alignment and calibration of the force measurement system. The initial alignment of the system demonstrated the need for a more efficient means of aligning the system's components; the use of precision alignment jigs increases both the speed and accuracy with which the system is aligned. The calibration of the force measurement system shows that the methods and equipment for this procedure can be successful.

  2. Hand-eye calibration for rigid laparoscopes using an invariant point.

    PubMed

    Thompson, Stephen; Stoyanov, Danail; Schneider, Crispin; Gurusamy, Kurinchi; Ourselin, Sébastien; Davidson, Brian; Hawkes, David; Clarkson, Matthew J

    2016-06-01

    Laparoscopic liver resection has significant advantages over open surgery due to less patient trauma and faster recovery times, yet it can be difficult due to the restricted field of view and lack of haptic feedback. Image guidance provides a potential solution but one current challenge is in accurate "hand-eye" calibration, which determines the position and orientation of the laparoscope camera relative to the tracking markers. In this paper, we propose a simple and clinically feasible calibration method based on a single invariant point. The method requires no additional hardware, can be constructed by theatre staff during surgical setup, requires minimal image processing and can be visualised in real time. Real-time visualisation allows the surgical team to assess the calibration accuracy before use in surgery. In addition, in the laboratory, we have developed a laparoscope with an electromagnetic tracking sensor attached to the camera end and an optical tracking marker attached to the distal end. This enables a comparison of tracking performance. We have evaluated our method in the laboratory and compared it to two widely used methods, "Tsai's method" and "direct" calibration. The new method is of comparable accuracy to existing methods, and we show RMS projected error due to calibration of 1.95 mm for optical tracking and 0.85 mm for EM tracking, versus 4.13 and 1.00 mm respectively, using existing methods. The new method has also been shown to be workable under sterile conditions in the operating room. We have proposed a new method of hand-eye calibration, based on a single invariant point. Initial experience has shown that the method provides visual feedback, satisfactory accuracy and can be performed during surgery. We also show that an EM sensor placed near the camera would provide significantly improved image overlay accuracy.
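
    In its simplest form the invariant-point constraint is a pivot-calibration least-squares problem: every tracked pose (R_i, p_i) must map one fixed (unknown) offset t onto one fixed (unknown) point q, i.e. R_i t + p_i = q. A sketch under that simplification, with synthetic poses rather than laparoscope data:

    ```python
    import numpy as np

    rng = np.random.default_rng(9)

    def rot_z(a):
        c, s = np.cos(a), np.sin(a)
        return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

    def rot_x(a):
        c, s = np.cos(a), np.sin(a)
        return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

    # Ground truth: fixed offset t (marker -> camera/tip) and the invariant
    # point q in tracker coordinates (both in mm, both unknown to the solver).
    t_true = np.array([10.0, -5.0, 120.0])
    q_true = np.array([250.0, 80.0, -40.0])

    # Tracked marker poses pivoting about the invariant point, plus noise
    Rs = [rot_z(a) @ rot_x(b) for a in np.linspace(0, 2, 8)
                               for b in np.linspace(-0.8, 0.8, 5)]
    ps = [q_true - R @ t_true + rng.normal(0, 0.05, 3) for R in Rs]

    # Stack R_i t - q = -p_i and solve for [t, q] in least squares
    A = np.vstack([np.hstack([R, -np.eye(3)]) for R in Rs])
    b = -np.concatenate(ps)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    print("t =", np.round(sol[:3], 2), " q =", np.round(sol[3:], 2))
    ```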

  3. Two laboratory methods for the calibration of GPS speed meters

    NASA Astrophysics Data System (ADS)

    Bai, Yin; Sun, Qiao; Du, Lei; Yu, Mei; Bai, Jie

    2015-01-01

    The set-ups of two calibration systems are presented to investigate methods of calibrating GPS speed meters. The GPS speed meter under calibration is a special type of high-accuracy speed meter for vehicles, which uses Doppler demodulation of GPS signals to calculate the speed of a moving target. Three experiments are performed: simulated calibration, field-test signal replay calibration, and an in-field comparison with an optical speed meter. The experiments are conducted at specific speeds in the range of 40-180 km/h with the same GPS speed meter as the device under calibration. The evaluation of the measurement results validates both methods for calibrating GPS speed meters. The relative deviations between the measurement results of the GPS-based high-accuracy speed meter and those of the optical speed meter are analyzed, and the equivalent uncertainty of the comparison is evaluated. The comparison results justify the use of GPS speed meters as reference equipment if no fewer than seven satellites are available. This study contributes to the widespread use of GPS-based high-accuracy speed meters as legal reference equipment in traffic speed metrology.

  4. Data Adjustments for TRACE-P, INTEX-A and INTEX-B

    Atmospheric Science Data Center

    2013-08-06

    ... that time, we have done repeated calibrations with two other methods: measuring the production of ozone from oxygen photolysis and the ... this notification.   All of our four calibration methods indicate that the PMT calibration is incorrect, but they differ in the ...

  5. Contributions to the problem of piezoelectric accelerometer calibration. [using lock-in voltmeter

    NASA Technical Reports Server (NTRS)

    Jakab, I.; Bordas, A.

    1974-01-01

    After discussing the principal calibration methods for piezoelectric accelerometers, an experimental setup for accelerometer calibration by the reciprocity method is described. It is shown how the use of a lock-in voltmeter eliminates errors due to viscous damping and electrical loading.

  6. A novel calibration method for non-orthogonal shaft laser theodolite measurement system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, Bin, E-mail: wubin@tju.edu.cn, E-mail: xueting@tju.edu.cn; Yang, Fengting; Ding, Wen

    2016-03-15

    A non-orthogonal shaft laser theodolite (N-theodolite) is a new kind of large-scale metrological instrument made up of two rotary tables and one collimated laser. An N-theodolite has three axes: following the naming conventions of the traditional theodolite, the rotary axes of the two tables are called the horizontal axis and the vertical axis, respectively, and the collimated laser beam is named the sight axis. The difference from a traditional theodolite is obvious, since the N-theodolite carries no orthogonality or intersection accuracy requirements. The calibration method for a traditional theodolite is therefore no longer suitable for an N-theodolite, while the calibration method applied currently is rather complicated. Thus this paper introduces a novel calibration method for the non-orthogonal shaft laser theodolite measurement system to simplify the procedure and improve the calibration accuracy. The novel method proposes a simple two-step process: calibration of intrinsic parameters followed by calibration of extrinsic parameters. Experiments have shown its efficiency and accuracy.

  7. A new method to calibrate Lagrangian model with ASAR images for oil slick trajectory.

    PubMed

    Tian, Siyu; Huang, Xiaoxia; Li, Hongga

    2017-03-15

    Since Lagrangian model coefficients vary under different conditions, it is necessary to calibrate the model to obtain the optimal coefficient combination for a specific oil spill accident. This paper proposes a new method to calibrate a Lagrangian model with a time series of Envisat ASAR images. Oil slicks extracted from the time series of images form a detected trajectory of the oil slick, and the Lagrangian model is calibrated by minimizing the difference between the simulated trajectory and the detected trajectory. The mean center position distance difference (MCPD) and rotation difference (RD) of the oil slicks' or particles' standard deviational ellipses (SDEs) are calculated as two evaluation measures, used to assess the performance of the Lagrangian transport model under different coefficient combinations. The method is applied to the Penglai 19-3 oil spill accident. The simulation result with the calibrated model agrees well with the related satellite observations, suggesting that the new method is effective for calibrating Lagrangian models. Copyright © 2016 Elsevier Ltd. All rights reserved.
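
    The two evaluation measures reduce to properties of the standard deviational ellipse: its mean center (for MCPD) and its major-axis orientation (for RD). A sketch on synthetic slick/particle point sets, with invented offsets and rotation:

    ```python
    import numpy as np

    def sde(points):
        """Mean center and major-axis orientation (radians, mod pi) of the
        standard deviational ellipse of a 2-D point set."""
        center = points.mean(axis=0)
        evals, evecs = np.linalg.eigh(np.cov(points.T))
        major = evecs[:, np.argmax(evals)]
        return center, np.arctan2(major[1], major[0]) % np.pi

    rng = np.random.default_rng(4)
    base = rng.normal(0, 1, (500, 2)) @ np.array([[3.0, 0.5], [0.0, 1.0]])

    detected = base + np.array([10.0, 4.0])          # slick pixels from the image
    ang = 0.15                                       # drift rotates the particles
    R = np.array([[np.cos(ang), -np.sin(ang)], [np.sin(ang), np.cos(ang)]])
    simulated = base @ R.T + np.array([10.8, 4.5])   # Lagrangian particle cloud

    (c_d, a_d), (c_s, a_s) = sde(detected), sde(simulated)
    print(f"MCPD = {np.linalg.norm(c_d - c_s):.2f}")       # mean-center distance
    print(f"RD   = {np.degrees(abs(a_d - a_s)):.1f} deg")  # SDE rotation diff
    ```

    Calibration then amounts to searching the model coefficient space for the combination that minimizes these two differences against the ASAR-detected trajectory.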

  8. Improvement of Accuracy in Environmental Dosimetry by TLD Cards Using Three-dimensional Calibration Method

    PubMed Central

    HosseiniAliabadi, S. J.; Hosseini Pooya, S. M.; Afarideh, H.; Mianji, F.

    2015-01-01

    Introduction The angular dependency of the response of TLD cards may cause the results of environmental dosimetry to deviate from their true values, since TLDs may be exposed to radiation at different angles of incidence from the surrounding area. Objective A 3D arrangement of TLD cards was calibrated isotropically in a standard radiation field to evaluate the improvement in measurement accuracy for environmental dosimetry. Method Three personal TLD cards were placed rectangularly in a cylindrical holder and calibrated using both 1D and 3D calibration methods. The dosimeter was then used simultaneously with a reference instrument in a real radiation field, measuring the accumulated dose within a time interval. Result The results show that the accuracy of measurement was improved by 6.5% using the 3D calibration factor in comparison with the normal 1D calibration method. Conclusion This system can be utilized in large-scale environmental monitoring with higher accuracy. PMID:26157729

  9. Calibration of gravitational radiation antenna by dynamic Newton field

    NASA Astrophysics Data System (ADS)

    Suzuki, T.; Tsubono, K.; Kuroda, K.; Hirakawa, H.

    1981-07-01

    A method is presented for calibrating antennas for gravitational radiation. The method, which uses the dynamic Newton field of a rotating body, is suitable for experiments at frequencies up to several hundred hertz. Moreover, the method requires no hardware inside the vacuum chamber of the antenna and is particularly convenient for the calibration of low-temperature antenna systems.

  10. Reflectance calibration of focal plane array hyperspectral imaging system for agricultural and food safety applications

    NASA Astrophysics Data System (ADS)

    Lawrence, Kurt C.; Park, Bosoon; Windham, William R.; Mao, Chengye; Poole, Gavin H.

    2003-03-01

    A method to calibrate a pushbroom hyperspectral imaging system for "near-field" applications in agricultural and food safety has been demonstrated. The method consists of a modified geometric control point correction applied to a focal plane array (FPA) to remove smile and keystone distortion from the system. Once the FPA correction was applied, single wavelength and distance calibrations were used to describe all points on the FPA. Finally, a percent reflectance calibration, applied on a pixel-by-pixel basis, was used for accurate measurements with the hyperspectral imaging system. The method was demonstrated with a stationary prism-grating-prism, pushbroom hyperspectral imaging system. For the system described, wavelength and distance calibrations were used to reduce the wavelength errors to <0.5 nm and distance errors to <0.01 mm (across the entrance slit width). The pixel-by-pixel percent reflectance calibration, which was performed at all wavelengths with dark current and 99% reflectance calibration-panel measurements, was verified with measurements on a certified gradient Spectralon panel with values ranging from about 14% reflectance to 99% reflectance, with errors generally less than 5% at the mid-wavelength measurements. Results from the calibration method indicate the hyperspectral imaging system has a usable range between 420 nm and 840 nm; outside this range, errors increase significantly.
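
    The pixel-by-pixel percent reflectance calibration described is the standard flat-field formula R = R_panel * (raw - dark) / (white - dark), applied per pixel and band; a sketch with a simulated line-scan sensor (all sensor parameters invented):

    ```python
    import numpy as np

    def percent_reflectance(raw, dark, white, panel_reflectance=0.99):
        """Pixel-by-pixel reflectance calibration for a pushbroom line image:
        R = panel_reflectance * (raw - dark) / (white - dark)."""
        return panel_reflectance * (raw - dark) / (white - dark)

    rng = np.random.default_rng(6)
    bands, pixels = 256, 640                    # spectral x spatial dimensions

    dark = rng.normal(100, 2, (bands, pixels))           # dark-current frame
    gain = rng.uniform(0.8, 1.2, (bands, pixels))        # per-pixel responsivity
    white = dark + gain * 3000                           # 99% panel exposure

    scene_true_R = 0.45                                  # flat 45% target
    raw = dark + gain * 3000 * (scene_true_R / 0.99)

    R = percent_reflectance(raw, dark, white)
    print(f"mean recovered reflectance = {R.mean():.4f}")  # ~0.45
    ```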

  11. Wind Tunnel Force Balance Calibration Study - Interim Results

    NASA Technical Reports Server (NTRS)

    Rhew, Ray D.

    2012-01-01

    Wind tunnel force balance calibration is performed using a variety of different methods and does not have a directly traceable standard such as those used for most calibration practices (weights and voltmeters). These calibration methods and practices include, but are not limited to, the loading schedule, the load application hardware, manual and automatic systems, and re-leveling and non-re-leveling. A study of the balance calibration techniques used by NASA was undertaken to develop metrics for reviewing and comparing results using sample calibrations. The study also includes balances of different designs, single- and multi-piece. The calibration systems include the manual and automatic systems provided by NASA and its vendors. The results to date are presented along with the techniques for comparing the results. In addition, future planned calibrations and investigations based on the results are described.

  12. Comparison of spectral radiance responsivity calibration techniques used for backscatter ultraviolet satellite instruments

    NASA Astrophysics Data System (ADS)

    Kowalewski, M. G.; Janz, S. J.

    2015-02-01

    Methods of absolute radiometric calibration of backscatter ultraviolet (BUV) satellite instruments are compared as part of an effort to minimize pre-launch calibration uncertainties. An internally illuminated integrating sphere source has been used for the Shuttle Solar BUV, the Total Ozone Mapping Spectrometer, the Ozone Monitoring Instrument, and the Global Ozone Monitoring Experiment 2, using standardized procedures traceable to national standards. These sphere-based spectral responsivities agree to within the derived combined standard uncertainty of 1.87% relative to calibrations performed using an external diffuser illuminated by standard irradiance sources, the customary spectral radiance responsivity calibration method for BUV instruments. The combined standard uncertainty for these calibration techniques as implemented at the NASA Goddard Space Flight Center’s Radiometric Calibration and Development Laboratory is shown to be less than 2% at 250 nm when using a single traceable calibration standard.

  13. Management of groundwater in-situ bioremediation system using reactive transport modelling under parametric uncertainty: field scale application

    NASA Astrophysics Data System (ADS)

    Verardo, E.; Atteia, O.; Rouvreau, L.

    2015-12-01

    In-situ bioremediation is a commonly used remediation technology to clean up the subsurface of petroleum-contaminated sites. Forecasting remedial performance (in terms of flux and mass reduction) is a challenge due to uncertainties associated with source properties and with the contribution and efficiency of concentration-reducing mechanisms. In this study, predictive uncertainty analysis of bioremediation system efficiency is carried out with the null-space Monte Carlo (NSMC) method, which combines the calibration solution-space parameters with the ensemble of null-space parameters, creating sets of calibration-constrained parameters for input to follow-on analyses of remedial efficiency. The first step in the NSMC methodology for uncertainty analysis is model calibration. The model was calibrated by matching simulated BTEX concentrations to a total of 48 observations from historical data collected before implementation of treatment. Two different bioremediation designs were then implemented in the calibrated model. The first consists of pumping/injection wells and the second of a permeable barrier coupled with infiltration across slotted piping. The NSMC method was used to calculate 1000 calibration-constrained parameter sets for the two different models. Several variants of the method were implemented to investigate their effect on the efficiency of the NSMC method. The first variant implementation of the NSMC is based on a single calibrated model. In the second variant, models were calibrated from different initial parameter sets, and NSMC calibration-constrained parameter sets were sampled from these different calibrated models. We demonstrate that, in the context of a nonlinear model, the second variant avoids underestimating parameter uncertainty, which would otherwise lead to poor quantification of predictive uncertainty. Application of the proposed approach to bioremediation of groundwater at a real site shows that it is effective in supporting management of in-situ bioremediation systems. Moreover, this study demonstrates that the NSMC method provides a computationally efficient and practical methodology for applying model predictive uncertainty methods in environmental management.
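    The core NSMC step can be illustrated, in a linearized setting, as projecting random parameter perturbations onto the null space of the model Jacobian so that the calibrated fit is (approximately) preserved. The sketch below is a simplified stand-in for the actual implementation; all names and the linearized treatment are assumptions for illustration:

        import numpy as np

        def nsmc_parameter_sets(J, p_cal, n_sets=1000, scale=1.0, rank_tol=1e-8):
            """Generate calibration-constrained parameter sets (linearized sketch).

            J:     (n_obs, n_par) Jacobian of observations w.r.t. parameters,
                   evaluated at the calibrated parameter vector p_cal.
            p_cal: (n_par,) calibrated parameter vector.
            """
            _, s, Vt = np.linalg.svd(J, full_matrices=True)
            rank = int((s > rank_tol * s[0]).sum())
            V_null = Vt[rank:].T          # basis of the Jacobian null space
            sets = []
            for _ in range(n_sets):
                dp = scale * np.random.randn(V_null.shape[1])
                # Null-space perturbations leave the (linearized) fit unchanged.
                sets.append(p_cal + V_null @ dp)
            return np.array(sets)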

  14. Innovative methodology for intercomparison of radionuclide calibrators using short half-life in situ prepared radioactive sources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oliveira, P. A.; Santos, J. A. M., E-mail: joao.santos@ipoporto.min-saude.pt; Serviço de Física Médica do Instituto Português de Oncologia do Porto Francisco Gentil, EPE, Porto

    2014-07-15

    Purpose: An original radionuclide calibrator method for activity determination is presented. The method could be used for intercomparison surveys for short half-life radioactive sources used in Nuclear Medicine, such as 99mTc or most positron emission tomography radiopharmaceuticals. Methods: By evaluating the resulting net optical density (netOD) using a standardized scanning method for irradiated Gafchromic XRQA2 film, a comparison of the netOD measurement with a previously determined calibration curve can be made, and the difference between the tested radionuclide calibrator and a radionuclide calibrator used as a reference device can be calculated. To estimate the total expected measurement uncertainties, a careful analysis of the methodology, for the case of 99mTc, was performed: reproducibility determination, scanning conditions, and possible fadeout effects. Since every factor of the activity measurement procedure can influence the final result, the method also evaluates correct syringe positioning inside the radionuclide calibrator. Results: As an alternative to using a calibrated source sent to the surveyed site, which requires a relatively long half-life of the nuclide, or sending a portable calibrated radionuclide calibrator, the proposed method uses a source prepared in situ. An indirect activity determination is achieved by irradiating a radiochromic film using 99mTc under strictly controlled conditions, and by calculating the cumulated activity from the initial activity and total irradiation time. The irradiated Gafchromic film and the irradiator, without the source, can then be sent to a National Metrology Institute for evaluation of the results. Conclusions: The methodology described in this paper was shown to have good potential for accurate (3%) radionuclide calibrator intercomparison studies for 99mTc between Nuclear Medicine centers without source transfer, and it can easily be adapted to other short half-life radionuclides.
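    For readers unfamiliar with film dosimetry, netOD is the base-10 log ratio of the scanner signal before and after irradiation, which is then inverted through a previously fitted calibration curve. A minimal sketch; the power-law curve form and its parameters are assumptions for illustration, not the paper's fitted model:

        import numpy as np

        def net_optical_density(signal_unexposed, signal_exposed):
            # netOD from the scanner signal of the film before and after irradiation.
            return np.log10(signal_unexposed / signal_exposed)

        def activity_from_netod(net_od, a, b):
            # Invert a hypothetical power-law calibration curve netOD = a * A**b,
            # fitted beforehand against the reference radionuclide calibrator.
            return (net_od / a) ** (1.0 / b)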

  15. Calibration method for spectroscopic systems

    DOEpatents

    Sandison, David R.

    1998-01-01

    Calibration spots of optically-characterized material placed in the field of view of a spectroscopic system allow calibration of the spectroscopic system. Response from the calibration spots is measured and used to calibrate for varying spectroscopic system operating parameters. The accurate calibration achieved allows quantitative spectroscopic analysis of responses taken at different times, different excitation conditions, and of different targets.

  16. Calibration method for spectroscopic systems

    DOEpatents

    Sandison, D.R.

    1998-11-17

    Calibration spots of optically-characterized material placed in the field of view of a spectroscopic system allow calibration of the spectroscopic system. Response from the calibration spots is measured and used to calibrate for varying spectroscopic system operating parameters. The accurate calibration achieved allows quantitative spectroscopic analysis of responses taken at different times, different excitation conditions, and of different targets. 3 figs.

  17. Bayesian Treed Calibration: An Application to Carbon Capture With AX Sorbent

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Konomi, Bledar A.; Karagiannis, Georgios; Lai, Kevin

    2017-01-02

    In cases where field or experimental measurements are not available, computer models can simulate real physical or engineering systems to reproduce their outcomes. They are usually calibrated in light of experimental data to create a better representation of the real system. Statistical methods based on Gaussian processes for calibration and prediction have been especially important when the computer models are expensive and experimental data limited. In this paper, we develop Bayesian treed calibration (BTC) as an extension of standard Gaussian process calibration methods to deal with non-stationary computer models and/or their discrepancy from the field (or experimental) data. Our proposed method partitions both the calibration and observable input space, based on a binary tree partitioning, into sub-regions where existing model calibration methods can be applied to connect a computer model with the real system. The estimation of the parameters in the proposed model is carried out using Markov chain Monte Carlo (MCMC) computational techniques. Different strategies have been applied to improve mixing. We illustrate our method in two artificial examples and a real application that concerns the capture of carbon dioxide with AX amine-based sorbents. The source code and the examples analyzed in this paper are available as part of the supplementary materials.

  18. In-Situ Transfer Standard and Coincident-View Intercomparisons for Sensor Cross-Calibration

    NASA Technical Reports Server (NTRS)

    Thome, Kurt; McCorkel, Joel; Czapla-Myers, Jeff

    2013-01-01

    There exist numerous methods for accomplishing on-orbit calibration. Methods include the reflectance-based approach, relying on measurements of surface and atmospheric properties at the time of a sensor overpass, as well as invariant-scene approaches relying on knowledge of the temporal characteristics of the site. The current work examines typical cross-calibration methods and discusses their expected uncertainties. Data from the Advanced Land Imager (ALI), Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER), Enhanced Thematic Mapper Plus (ETM+), Moderate Resolution Imaging Spectroradiometer (MODIS), and Thematic Mapper (TM) are used to demonstrate the limits of relative sensor-to-sensor calibration as applied to current sensors, while Landsat-5 TM and Landsat-7 ETM+ are used to evaluate the limits of in situ site characterizations for SI-traceable cross calibration. The current work examines the difficulties in trending results from cross-calibration approaches, taking into account sampling issues, site-to-site variability, and the accuracy of the method. Special attention is given to the differences caused in the cross-comparison of sensors in radiance space as opposed to reflectance space. The results show that cross calibrations with absolute uncertainties less than 1.5 percent (1 sigma) are currently achievable, even for sensors without coincident views.

  19. A GPS-Based Pitot-Static Calibration Method Using Global Output-Error Optimization

    NASA Technical Reports Server (NTRS)

    Foster, John V.; Cunningham, Kevin

    2010-01-01

    Pressure-based airspeed and altitude measurements for aircraft typically require calibration of the installed system to account for pressure sensing errors such as those due to local flow field effects. In some cases, calibration is used to meet requirements such as those specified in Federal Aviation Regulation Part 25. Several methods are used for in-flight pitot-static calibration, including tower fly-by, pacer aircraft, and trailing cone methods. In the 1990s, the introduction of satellite-based positioning systems to the civilian market enabled new in-flight calibration methods based on accurate ground speed measurements provided by the Global Positioning System (GPS). Use of GPS for airspeed calibration has many advantages such as accuracy, ease of portability (e.g., hand-held), and the flexibility of operating in airspace without the limitations of test range boundaries or ground telemetry support. The current research was motivated by the need for a rapid and statistically accurate method for in-flight calibration of pitot-static systems for remotely piloted, dynamically scaled research aircraft. Existing calibration methods were deemed impractical for this application because of the confined test range size and the limited flight time available for each sortie. A method was developed that uses high-data-rate measurements of static and total pressure, together with GPS-based ground speed measurements, to compute the pressure errors over a range of airspeeds. The novel aspect of this approach is the use of system identification methods that rapidly compute optimal pressure error models with defined confidence intervals in near-real time. This method has been demonstrated in flight tests and has shown 2-sigma bounds of approximately 0.2 kts with an order-of-magnitude reduction in test time over other methods. As part of this experiment, a unique database of wind measurements was acquired concurrently with the flight experiments for the purpose of experimental validation of the optimization method. This paper describes the GPS-based pitot-static calibration method developed for the AirSTAR research testbed operated as part of the Integrated Resilient Aircraft Controls (IRAC) project in the NASA Aviation Safety Program (AvSP). A description of the method is provided, and results from recent flight tests are shown to illustrate the performance and advantages of this approach. Discussion of maneuver requirements and data reduction is included, as well as potential applications.
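    A greatly simplified version of the underlying estimation problem treats a steady wind vector and a constant airspeed error as unknowns and fits them to GPS ground speed recorded on several headings. The sketch below, with hypothetical data, uses ordinary nonlinear least squares rather than the paper's real-time output-error machinery:

        import numpy as np
        from scipy.optimize import least_squares

        # Hypothetical level-flight data on four headings (units: kts, deg).
        heading_deg = np.array([0.0, 90.0, 180.0, 270.0])
        v_indicated = np.array([100.0, 101.0, 99.5, 100.5])   # indicated airspeed
        v_gps = np.array([108.0, 97.0, 92.0, 103.0])          # GPS ground speed

        def residuals(x):
            wind_n, wind_e, bias = x       # unknowns: wind vector, airspeed bias
            psi = np.radians(heading_deg)
            v_true = v_indicated + bias    # simplified constant position-error model
            gs_pred = np.hypot(v_true * np.cos(psi) + wind_n,
                               v_true * np.sin(psi) + wind_e)
            return gs_pred - v_gps

        sol = least_squares(residuals, x0=[0.0, 0.0, 0.0])
        print("wind_N, wind_E, airspeed bias (kts):", sol.x)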

  20. Calibrating the orientation between a microlens array and a sensor based on projective geometry

    NASA Astrophysics Data System (ADS)

    Su, Lijuan; Yan, Qiangqiang; Cao, Jun; Yuan, Yan

    2016-07-01

    We demonstrate a method for calibrating a microlens array (MLA) with a sensor component by building a plenoptic camera with a conventional prime lens. This calibration method includes a geometric model, a setup to adjust the distance (L) between the prime lens and the MLA, a calibration procedure for determining the subimage centers, and an optimization algorithm. The geometric model introduces nine unknown parameters regarding the centers of the microlenses and their images, whereas the distance adjustment setup provides an initial guess for the distance L. The simulation results verify the effectiveness and accuracy of the proposed method. The experimental results demonstrate that the calibration process can be performed with a commercial prime lens and that the proposed method can be used to quantitatively evaluate whether an MLA and a sensor are assembled properly for plenoptic systems.

  1. Cost-effectiveness of the Norwegian breast cancer screening program.

    PubMed

    van Luijt, P A; Heijnsdijk, E A M; de Koning, H J

    2017-02-15

    The Norwegian Breast Cancer Screening Programme (NBCSP) has had nationwide coverage since 2005. All women aged 50-69 years are invited biennially for mammography screening. We evaluated breast cancer mortality reduction and performed a cost-effectiveness analysis using our microsimulation model, calibrated to the most recent data. The microsimulation model allows for the comparison of mortality and costs between a (hypothetical) situation without screening and a situation with screening. Breast cancer incidence in Norway increased steeply in the early 1990s. We calibrated the model to simulate this increase and included recent costs for screening, diagnosis, and treatment of breast cancer, as well as travel and productivity loss. We estimate a 16% breast cancer mortality reduction for a cohort of women invited to screening, followed over their complete lifetimes. Cost-effectiveness is estimated at NOK 112,162 per QALY gained when taking only direct medical costs into account (the cost of the buses, examinations, and invitations), using a 3.5% annual discount rate. The cost-effectiveness estimates are substantially below the threshold of NOK 1,926,366 recommended by the WHO guidelines. For the Norwegian population, which has been gradually exposed to screening, the breast cancer mortality reduction for women exposed to screening is increasing and is estimated to rise to ∼30% in 2020 for women aged 55-80 years. The NBCSP is a highly cost-effective measure to reduce breast cancer specific mortality. We estimate a breast cancer specific mortality reduction of 16-30%, at a cost of NOK 112,162 per QALY gained. © 2016 UICC.
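    The cost-per-QALY figure is an incremental cost-effectiveness ratio (ICER) computed from discounted cost and QALY streams. A minimal sketch of that arithmetic with placeholder numbers (the model's actual streams are not published in the abstract):

        def discounted(values, rate=0.035):
            # Present value of a yearly stream, discounted at `rate` per year.
            return sum(v / (1.0 + rate) ** t for t, v in enumerate(values))

        # Hypothetical yearly costs (NOK) and QALYs, with and without screening.
        cost_screen, cost_none = [5e7] * 30, [3e7] * 30
        qaly_screen, qaly_none = [1050.0] * 30, [1000.0] * 30

        icer = ((discounted(cost_screen) - discounted(cost_none)) /
                (discounted(qaly_screen) - discounted(qaly_none)))
        print(f"ICER: {icer:.0f} NOK per QALY gained")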

  2. Photometric calibration of the COMBO-17 survey with the Softassign Procrustes Matching method

    NASA Astrophysics Data System (ADS)

    Sheikhbahaee, Z.; Nakajima, R.; Erben, T.; Schneider, P.; Hildebrandt, H.; Becker, A. C.

    2017-11-01

    Accurate photometric calibration of optical data is crucial for photometric redshift estimation. We present the Softassign Procrustes Matching (SPM) method to improve colour calibration over the commonly used Stellar Locus Regression (SLR) method for the COMBO-17 survey. Our colour calibration approach can be categorised as a point-set matching method, of the kind frequently used in medical imaging and pattern recognition. We attain a photometric redshift precision Δz/(1 + zs) of better than 2 per cent. Our method is based on aligning the stellar locus of the uncalibrated stars to that of a spectroscopic sample of Sloan Digital Sky Survey standard stars. We achieve this by finding a correspondence matrix between the two point sets and applying the matrix to estimate the appropriate translations in multidimensional colour space. The SPM method is able to find the translation between two point sets despite noise and incompleteness of the common structures in the sets, as long as there is a distinct structure in at least one of the colour-colour pairs. We demonstrate the precision of our colour calibration method with a mock catalogue. The SPM colour calibration code is publicly available at https://neuronphysics@bitbucket.org/neuronphysics/spm.git.
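    A much-simplified stand-in for SPM is an iterated nearest-neighbour estimate of the colour-space translation (ICP-style); the real softassign algorithm instead builds a fuzzy correspondence matrix with deterministic annealing, which is what gives it robustness to noise and incompleteness. A sketch under those simplifying assumptions:

        import numpy as np
        from scipy.spatial import cKDTree

        def estimate_colour_offset(uncal, ref, n_iter=20):
            """Translation aligning `uncal` (N, d) colours to `ref` (M, d) colours."""
            offset = np.zeros(uncal.shape[1])
            tree = cKDTree(ref)
            for _ in range(n_iter):
                # Hard nearest-neighbour correspondences; softassign would use
                # a soft (fuzzy) correspondence matrix here instead.
                _, idx = tree.query(uncal + offset)
                offset += (ref[idx] - (uncal + offset)).mean(axis=0)
            return offset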

  3. On combination of strict Bayesian principles with model reduction technique or how stochastic model calibration can become feasible for large-scale applications

    NASA Astrophysics Data System (ADS)

    Oladyshkin, S.; Schroeder, P.; Class, H.; Nowak, W.

    2013-12-01

    Predicting underground carbon dioxide (CO2) storage represents a challenging problem in a complex dynamic system. Due to lacking information about reservoir parameters, quantification of uncertainties may become the dominant question in risk assessment. Calibration to past observed data from a pilot-scale test injection can improve the predictive power of the involved geological, flow, and transport models. The current work performs history matching to pressure time series from a pilot storage site operated in Europe, recorded during an injection period. Simulation of compressible two-phase flow and transport (CO2/brine) at the considered site is computationally very demanding, requiring about 12 days of CPU time for an individual model run. For that reason, brute-force approaches to calibration are not feasible. In the current work, we explore an advanced framework for history matching based on the arbitrary polynomial chaos expansion (aPC) and strict Bayesian principles. The aPC [1] offers a drastic but accurate stochastic model reduction. Unlike many previous chaos expansions, it can handle arbitrary probability distribution shapes of uncertain parameters, and can therefore directly handle the statistical information appearing during the matching procedure. In our study we keep the spatial heterogeneity suggested by geophysical methods, but consider uncertainty in the magnitude of permeability through zone-wise permeability multipliers. We capture the dependence of model output on these multipliers with the expansion-based reduced model. We then combined the aPC with Bootstrap filtering (a brute-force but fully accurate Bayesian updating mechanism) in order to perform the matching. In comparison to (Ensemble) Kalman Filters, our method accounts for higher-order statistical moments and for the non-linearity of both the forward model and the inversion, and thus allows a rigorous quantification of calibrated model uncertainty. The usually high computational costs of accurate filtering become very feasible within our suggested aPC-based calibration framework. However, the power of aPC-based Bayesian updating strongly depends on the accuracy of prior information. In the current study, the prior assumptions on the model parameters were not satisfactory and strongly underestimated the reservoir pressure. Thus, the aPC-based response surface used in Bootstrap filtering is fitted to a distant and poorly chosen region within the parameter space. Thanks to the iterative procedure suggested in [2], we overcome this drawback with small computational costs. The iteration successively improves the accuracy of the expansion around the current estimate of the posterior distribution. The final result is a calibrated model of the site that can be used for further studies, with an excellent match to the data. References: [1] Oladyshkin S. and Nowak W. Data-driven uncertainty quantification using the arbitrary polynomial chaos expansion. Reliability Engineering and System Safety, 106:179-190, 2012. [2] Oladyshkin S., Class H., Nowak W. Bayesian updating via Bootstrap filtering combined with data-driven polynomial chaos expansions: methodology and application to history matching for carbon dioxide storage in geological formations. Computational Geosciences, 17(4):671-687, 2013.

  4. A formulation of tissue- and water-equivalent materials using the stoichiometric analysis method for CT-number calibration in radiotherapy treatment planning.

    PubMed

    Yohannes, Indra; Kolditz, Daniel; Langner, Oliver; Kalender, Willi A

    2012-03-07

    Tissue- and water-equivalent materials (TEMs) are widely used in quality assurance and calibration procedures, both in radiodiagnostics and radiotherapy. In radiotherapy in particular, TEMs are often used for computed tomography (CT) number calibration in treatment planning systems. However, currently available TEMs may not be very accurate for determining the calibration curves due to their limitations in mimicking the radiation characteristics of the corresponding real tissues in both the low- and high-energy ranges. Therefore, we propose a new formulation of TEMs using a stoichiometric analysis method to obtain TEMs for calibration purposes. We combined the stoichiometric calibration and the basic data method to compose base materials for developing TEMs matching standard real tissues from ICRU Reports 44 and 46. First, the CT numbers of six materials with known elemental compositions were measured to obtain constants for the stoichiometric calibration. The results of the stoichiometric calibration were then used together with the basic data method to formulate new TEMs. These new TEMs were scanned to validate their CT numbers. The electron density and stopping power calibration curves were also generated. The absolute differences of the measured CT numbers of the new TEMs were less than 4 HU for the soft tissues and less than 22 HU for the bone compared to the ICRU real tissues. Furthermore, the calculated relative electron densities and electron and proton stopping powers of the new TEMs differed by less than 2% from the corresponding ICRU real tissues. The new TEMs formulated using the proposed technique increase the simplicity of the calibration process while preserving the accuracy of the stoichiometric calibration.
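    The stoichiometric step amounts to fitting scanner-specific coefficients from the measured CT numbers of materials with known composition. The sketch below uses a simplified linear parameterization in the spirit of stoichiometric calibration; the feature definitions, exponents, and data are illustrative assumptions, not the paper's:

        import numpy as np

        # Materials with known composition: relative electron density rho_e,
        # effective atomic number zhat, and measured CT number (HU).
        # All values below are placeholders, not the paper's data.
        rho_e = np.array([1.00, 1.05, 1.10, 1.30, 1.70, 0.95])
        zhat = np.array([7.4, 7.6, 8.1, 10.0, 12.5, 6.9])
        hu = np.array([0.0, 40.0, 80.0, 400.0, 1100.0, -60.0])

        # Simplified stoichiometric model (illustrative):
        #   HU/1000 + 1 ~= rho_e + k1*rho_e*zhat**3.62 + k2*rho_e*zhat**1.86
        # where k1, k2 are the scanner-specific constants to be fitted.
        A = np.column_stack([rho_e * zhat**3.62, rho_e * zhat**1.86])
        b = hu / 1000.0 + 1.0 - rho_e
        (k1, k2), *_ = np.linalg.lstsq(A, b, rcond=None)

        def predict_hu(rho, z):
            # Predicted CT number for a tissue of known rho_e and zhat.
            return 1000.0 * (rho + k1 * rho * z**3.62 + k2 * rho * z**1.86 - 1.0)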

  5. Integrated calibration sphere and calibration step fixture for improved coordinate measurement machine calibration

    DOEpatents

    Clifford, Harry J [Los Alamos, NM

    2011-03-22

    A method and apparatus for mounting a calibration sphere to a calibration fixture for Coordinate Measurement Machine (CMM) calibration and qualification are described, decreasing the time required for such qualification and thus allowing the CMM to be used more productively. A number of embodiments are disclosed that allow new and retrofit manufacture to perform as integrated calibration sphere and calibration fixture devices. This invention renders unnecessary the removal of a calibration sphere prior to CMM measurement of calibration features on calibration fixtures, thereby greatly reducing the time spent qualifying a CMM.

  6. Simultaneous digital quantification and fluorescence-based size characterization of massively parallel sequencing libraries.

    PubMed

    Laurie, Matthew T; Bertout, Jessica A; Taylor, Sean D; Burton, Joshua N; Shendure, Jay A; Bielas, Jason H

    2013-08-01

    Due to the high cost of failed runs and suboptimal data yields, quantification and determination of fragment size range are crucial steps in the library preparation process for massively parallel sequencing (or next-generation sequencing). Current library quality control methods commonly involve quantification using real-time quantitative PCR and size determination using gel or capillary electrophoresis. These methods are laborious and subject to a number of significant limitations that can make library calibration unreliable. Herein, we propose and test an alternative method for quality control of sequencing libraries using droplet digital PCR (ddPCR). By exploiting a correlation we have discovered between droplet fluorescence and amplicon size, we achieve the joint quantification and size determination of target DNA with a single ddPCR assay. We demonstrate the accuracy and precision of applying this method to the preparation of sequencing libraries.
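    The quantification side of this approach rests on the standard ddPCR Poisson estimate from the fraction of negative droplets, while the size estimate exploits the reported fluorescence-size correlation. A sketch with the standard Poisson formula and a hypothetical linear fluorescence-size fit (the droplet volume and all data are placeholders):

        import numpy as np

        def copies_per_ul(n_negative, n_total, droplet_volume_nl=0.85):
            # Standard ddPCR Poisson estimate of target concentration.
            lam = -np.log(n_negative / n_total)      # mean copies per droplet
            return lam / (droplet_volume_nl * 1e-3)  # copies per microlitre

        # Hypothetical calibration of droplet fluorescence vs amplicon size.
        size_bp = np.array([150, 300, 450, 600])
        fluorescence = np.array([9000, 7800, 6900, 6100])
        slope, intercept = np.polyfit(size_bp, fluorescence, 1)

        # Invert the fit to estimate the size of an unknown library.
        estimated_size = (7200 - intercept) / slope
        print(f"estimated amplicon size: {estimated_size:.0f} bp")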

  7. Suborbital Reusable Launch Vehicles as an Opportunity to Consolidate and Calibrate Ground Based and Satellite Instruments

    NASA Astrophysics Data System (ADS)

    Papadopoulos, K.

    2014-12-01

    XCOR Aerospace, a commercial space company, is planning to provide frequent, low-cost access to near-Earth space with the Lynx suborbital Reusable Launch Vehicle (sRLV). Measurements can be made in the external vacuum environment, and the vehicle can launch from most runways on limited lead time. Lynx can operate as a platform to perform suborbital in situ measurements and remote sensing to supplement models and simulations with new data points. These measurements can serve as a quantitative link to existing instruments and be used as a basis to calibrate detectors on spacecraft. Easier access to suborbital data can improve the longevity and cohesiveness of spacecraft and ground-based resources. A study of how these measurements can be made on the Lynx sRLV will be presented. At the boundary between terrestrial and space weather, measurements from instruments on Lynx can help develop algorithms to optimize the consolidation of ground- and satellite-based data, as well as assimilate global models with new data points. For example, tides and the equatorial electrojet, essential to understanding the Thermosphere-Ionosphere system, can be measured in situ frequently and on short notice. Furthermore, a negative-ion spectrometer and a Faraday cup can take measurements of the D-region ion composition, and a differential GPS receiver can infer the spatial gradient of ionospheric electron density. Instruments and optics on spacecraft degrade over time, leading to calibration drift. Lynx can be a cost-effective platform for deploying a reference instrument to calibrate satellites, with a frequent and fast turnaround and a successful return of the instrument. A calibrated reference instrument on Lynx can make observations collocated with those of another instrument while corrections are made for the latter, thus ensuring data consistency and mission longevity. Aboard an sRLV, atmospheric conditions that distort remotely sensed data (ground- and spacecraft-based) can be measured in situ. Moreover, an active instrument can be deployed on an sRLV under a satellite track and serve as a "standard candle" for instruments on satellites. The yearly calibrations of the Solar Extreme Ultraviolet Experiment (SEE) instrument aboard the TIMED orbiter using sounding rockets illustrate both the necessity and the required frequency of such calibrations.

  8. Determination of antenna factors using a three-antenna method at open-field test site

    NASA Astrophysics Data System (ADS)

    Masuzawa, Hiroshi; Tejima, Teruo; Harima, Katsushige; Morikawa, Takao

    1992-09-01

    Recently, NIST has used the three-antenna method for calibration of the antenna factor of antennas used for EMI measurements. This method does not require the specially designed standard antennas that are necessary in the standard field method or the standard antenna method, and it can be used at an open-field test site. This paper theoretically and experimentally examines the measurement errors of this method and evaluates the precision of the antenna-factor calibration. It is found that the main source of error is the non-ideal propagation characteristics of the test site, which should therefore be measured before the calibration. The precision of the antenna-factor calibration at the test site used in these experiments is estimated to be 0.5 dB.
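    The three-antenna method works because each pairwise measurement yields the sum of two antenna factors (in dB, with the known site and frequency terms folded in), so three pairings determine all three unknowns. A minimal sketch of that algebra; the numeric values are illustrative only:

        # Each pairwise measurement M_ij (dB) satisfies M_ij = AF_i + AF_j,
        # once the known site/frequency terms have been subtracted out.
        def antenna_factors(m12, m13, m23):
            af1 = (m12 + m13 - m23) / 2.0
            af2 = (m12 + m23 - m13) / 2.0
            af3 = (m13 + m23 - m12) / 2.0
            return af1, af2, af3

        print(antenna_factors(30.0, 31.0, 29.0))  # illustrative dB values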

  9. Estimation of stress distribution in ferromagnetic tensile specimens using low cost eddy current stress measurement system and BP neural network.

    PubMed

    Li, Jianwei; Zhang, Weimin; Zeng, Weiqin; Chen, Guolong; Qiu, Zhongchao; Cao, Xinyuan; Gao, Xuanyi

    2017-01-01

    Estimation of the stress distribution in ferromagnetic components is very important for evaluating the working status of mechanical equipment and implementing preventive maintenance. Eddy current testing technology is a promising method in this field because of its advantages of safety, no need for a coupling agent, etc. In order to reduce the cost of eddy current stress measurement systems and obtain the stress distribution in ferromagnetic materials without scanning, a low-cost eddy current stress measurement system based on an Archimedes spiral planar coil was established, and a method based on a BP neural network to obtain the stress distribution from the stress at several discrete test points was proposed. To verify the performance of the developed test system and the validity of the proposed method, experiments were carried out using structural steel (Q235) specimens. Standard curves for the sensors at each test point were obtained, the calibrated data were used to establish the BP neural network model for approximating the stress variation on the specimen surface, and the stress distribution curve of the specimen was obtained by interpolating with the established model. The results show that there is a good linear relationship between the change of signal modulus and the stress over most of the elastic range of the specimen, and the established system can detect the change in stress with a theoretical average sensitivity of -0.4228 mV/MPa. The obtained stress distribution curve agrees well with the theoretical analysis. Finally, possible causes of, and remedies for, problems appearing in the results are discussed. This research is significant for reducing the cost of eddy current stress measurement systems and advancing the engineering application of eddy current stress testing.
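    As a sketch of the interpolation idea, a small feed-forward (back-propagation) network can be trained on the stresses at the discrete test points and then evaluated densely along the specimen. The example below uses scikit-learn's MLPRegressor as a stand-in for the paper's BP network; the positions, stresses, and network sizing are hypothetical:

        import numpy as np
        from sklearn.neural_network import MLPRegressor

        # Hypothetical calibrated stresses (MPa) at discrete test-point positions (mm).
        x_points = np.array([[0.0], [20.0], [40.0], [60.0], [80.0], [100.0]])
        stress = np.array([50.0, 120.0, 180.0, 175.0, 115.0, 45.0])

        # Back-propagation network mapping position -> stress.
        net = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=5000, random_state=0)
        net.fit(x_points, stress)

        # Interpolate the full stress distribution along the specimen.
        x_dense = np.linspace(0, 100, 101).reshape(-1, 1)
        stress_curve = net.predict(x_dense)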

  10. Estimation of stress distribution in ferromagnetic tensile specimens using low cost eddy current stress measurement system and BP neural network

    PubMed Central

    Li, Jianwei; Zeng, Weiqin; Chen, Guolong; Qiu, Zhongchao; Cao, Xinyuan; Gao, Xuanyi

    2017-01-01

    Estimation of the stress distribution in ferromagnetic components is very important for evaluating the working status of mechanical equipment and implementing preventive maintenance. Eddy current testing technology is a promising method in this field because of its advantages of safety, no need for a coupling agent, etc. In order to reduce the cost of eddy current stress measurement systems and obtain the stress distribution in ferromagnetic materials without scanning, a low-cost eddy current stress measurement system based on an Archimedes spiral planar coil was established, and a method based on a BP neural network to obtain the stress distribution from the stress at several discrete test points was proposed. To verify the performance of the developed test system and the validity of the proposed method, experiments were carried out using structural steel (Q235) specimens. Standard curves for the sensors at each test point were obtained, the calibrated data were used to establish the BP neural network model for approximating the stress variation on the specimen surface, and the stress distribution curve of the specimen was obtained by interpolating with the established model. The results show that there is a good linear relationship between the change of signal modulus and the stress over most of the elastic range of the specimen, and the established system can detect the change in stress with a theoretical average sensitivity of -0.4228 mV/MPa. The obtained stress distribution curve agrees well with the theoretical analysis. Finally, possible causes of, and remedies for, problems appearing in the results are discussed. This research is significant for reducing the cost of eddy current stress measurement systems and advancing the engineering application of eddy current stress testing. PMID:29145500

  11. Comparison of Pinus taeda L. whole-tree wood property calibrations using diffuse reflectance near infrared spectra obtained using a variety of sampling options

    Treesearch

    P. David Jones; Laurence R. Schimleck; Richard F. Daniels; Alexander Clark; Robert C. Purnell

    2008-01-01

    A necessary objective for tree-breeding programs with a focus on wood quality is the measurement of wood properties on a whole-tree basis; however, the time and cost involved limit the number of trees sampled. Near infrared (NIR) spectroscopy provides an alternative, and recently it has been demonstrated that calibrations based on milled increment cores and whole-...

  12. Self-Calibration of Cone-Beam CT Geometry Using 3D-2D Image Registration: Development and Application to Task-Based Imaging with a Robotic C-Arm

    PubMed Central

    Ouadah, S.; Stayman, J. W.; Gang, G.; Uneri, A.; Ehtiati, T.; Siewerdsen, J. H.

    2015-01-01

    Purpose Robotic C-arm systems are capable of general noncircular orbits whose trajectories can be driven by the particular imaging task. However, obtaining accurate calibrations for reconstruction in such geometries can be a challenging problem. This work proposes a method to perform a unique geometric calibration of an arbitrary C-arm orbit by registering 2D projections to a previously acquired 3D image to determine the transformation parameters representing the system geometry. Methods Experiments involved a cone-beam CT (CBCT) bench system, a robotic C-arm, and three phantoms. A robust 3D-2D registration process was used to compute the nine-degree-of-freedom (9-DOF) transformation between each projection and an existing 3D image by maximizing normalized gradient information with a digitally reconstructed radiograph (DRR) of the 3D volume. The quality of the resulting “self-calibration” was evaluated in terms of agreement with an established calibration method using a BB phantom as well as image quality in the resulting CBCT reconstruction. Results The self-calibration yielded CBCT images without significant difference in spatial resolution from the standard (“true”) calibration methods (p-value >0.05 for all three phantoms), and the differences between CBCT images reconstructed using the “self” and “true” calibration methods were on the order of 10⁻³ mm⁻¹. Maximum error in magnification was 3.2%, and back-projection ray placement was within 0.5 mm. Conclusion The proposed geometric “self” calibration provides a means for 3D imaging on general noncircular orbits in CBCT systems for which a geometric calibration is either not available or not reproducible. The method forms the basis of advanced “task-based” 3D imaging methods now in development for robotic C-arms. PMID:26388661

  13. A new method to calibrate the absolute sensitivity of a soft X-ray streak camera

    NASA Astrophysics Data System (ADS)

    Yu, Jian; Liu, Shenye; Li, Jin; Yang, Zhiwen; Chen, Ming; Guo, Luting; Yao, Li; Xiao, Shali

    2016-12-01

    In this paper, we introduce a new method to calibrate the absolute sensitivity of a soft X-ray streak camera (SXRSC). The calibrations are done in the static mode using a small laser-produced X-ray source. A calibrated X-ray CCD is used as a secondary standard detector to monitor the X-ray source intensity. In addition, two sets of holographic flat-field grating spectrometers are chosen as the spectral discrimination systems of the SXRSC and the X-ray CCD. The absolute sensitivity of the SXRSC is obtained by comparing the signal counts of the SXRSC to the output counts of the X-ray CCD. Results show that the calibrated spectrum covers the range from 200 eV to 1040 eV. The change of the absolute sensitivity in the vicinity of the K-edge of carbon can also be clearly seen. The experimental values agree with the calculated values to within 29% error. Compared with previous calibration methods, the proposed method has several advantages: a wide spectral range, high accuracy, and simple data processing. Our calibration results can be used to make quantitative X-ray flux measurements in laser fusion research.

  14. Integrated calibration between digital camera and laser scanner from mobile mapping system for land vehicles

    NASA Astrophysics Data System (ADS)

    Zhao, Guihua; Chen, Hong; Li, Xingquan; Zou, Xiaoliang

    The paper presents the concepts of lever arm and boresight angle, the design requirements for calibration sites, and an integrated method for calibrating the boresight angles of a digital camera and a laser scanner. Taking test data collected by Applanix's LandMark system as an example, the camera calibration method is introduced, based on piling three consecutive stereo images and an OTF-calibration method using ground control points. Calibration of the laser scanner boresight angles is performed with manual and automatic methods using ground control points. Integrated calibration between the digital camera and the laser scanner is introduced to improve the systemic precision of the two sensors. Analysis of the measured values between ground control points and their corresponding image points in sequential images shows that object positions derived from the camera and images are within about 15 cm in relative error and 20 cm in absolute error. Comparing the differences between ground control points and their corresponding laser point clouds, the error is less than 20 cm. From the results of these experiments, the mobile mapping system is an efficient and reliable system for rapidly generating high-accuracy, high-density road spatial data.

  15. Systems and methods for optically measuring properties of hydrocarbon fuel gases

    DOEpatents

    Adler-Golden, S.; Bernstein, L.S.; Bien, F.; Gersh, M.E.; Goldstein, N.

    1998-10-13

    A system and method for optical interrogation and measurement of a hydrocarbon fuel gas includes a light source generating light at near-visible wavelengths. A cell containing the gas is optically coupled to the light source, and the light is partially transmitted by the sample. A spectrometer disperses the transmitted light and captures an image thereof. The image is captured by a low-cost silicon-based two-dimensional CCD array. The captured spectral image is processed by electronics for determining the energy or BTU content and composition of the gas. The innovative optical approach provides a relatively inexpensive, durable, maintenance-free sensor and method which is reliable in the field and relatively simple to calibrate. In view of the above, accurate monitoring is possible at a plurality of locations along the distribution chain, leading to more efficient distribution. 14 figs.

  16. Systems and methods for optically measuring properties of hydrocarbon fuel gases

    DOEpatents

    Adler-Golden, Steven; Bernstein, Lawrence S.; Bien, Fritz; Gersh, Michael E.; Goldstein, Neil

    1998-10-13

    A system and method for optical interrogation and measurement of a hydrocarbon fuel gas includes a light source generating light at near-visible wavelengths. A cell containing the gas is optically coupled to the light source, and the light is partially transmitted by the sample. A spectrometer disperses the transmitted light and captures an image thereof. The image is captured by a low-cost silicon-based two-dimensional CCD array. The captured spectral image is processed by electronics for determining the energy or BTU content and composition of the gas. The innovative optical approach provides a relatively inexpensive, durable, maintenance-free sensor and method which is reliable in the field and relatively simple to calibrate. In view of the above, accurate monitoring is possible at a plurality of locations along the distribution chain, leading to more efficient distribution.

  17. An adaptable parallel algorithm for the direct numerical simulation of incompressible turbulent flows using a Fourier spectral/hp element method and MPI virtual topologies

    NASA Astrophysics Data System (ADS)

    Bolis, A.; Cantwell, C. D.; Moxey, D.; Serson, D.; Sherwin, S. J.

    2016-09-01

    A hybrid parallelisation technique for distributed memory systems is investigated for a coupled Fourier-spectral/hp element discretisation of domains characterised by geometric homogeneity in one or more directions. The performance of the approach is mathematically modelled in terms of operation count and communication costs for identifying the most efficient parameter choices. The model is calibrated to target a specific hardware platform after which it is shown to accurately predict the performance in the hybrid regime. The method is applied to modelling turbulent flow using the incompressible Navier-Stokes equations in an axisymmetric pipe and square channel. The hybrid method extends the practical limitations of the discretisation, allowing greater parallelism and reduced wall times. Performance is shown to continue to scale when both parallelisation strategies are used.

  18. A Unified Fisher's Ratio Learning Method for Spatial Filter Optimization.

    PubMed

    Li, Xinyang; Guan, Cuntai; Zhang, Haihong; Ang, Kai Keng

    To detect the mental task of interest, spatial filtering has been widely used to enhance the spatial resolution of electroencephalography (EEG). However, the effectiveness of spatial filtering is undermined due to the significant nonstationarity of EEG. Based on regularization, most of the conventional stationary spatial filter design methods address the nonstationarity at the cost of the interclass discrimination. Moreover, spatial filter optimization is inconsistent with feature extraction when EEG covariance matrices could not be jointly diagonalized due to the regularization. In this paper, we propose a novel framework for a spatial filter design. With Fisher's ratio in feature space directly used as the objective function, the spatial filter optimization is unified with feature extraction. Given its ratio form, the selection of the regularization parameter could be avoided. We evaluate the proposed method on a binary motor imagery data set of 16 subjects, who performed the calibration and test sessions on different days. The experimental results show that the proposed method yields improvement in classification performance for both single broadband and filter bank settings compared with conventional nonunified methods. We also provide a systematic attempt to compare different objective functions in modeling data nonstationarity with simulation studies.
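    The classic way to maximize a Rayleigh-quotient objective such as Fisher's ratio is to solve a generalized eigenvalue problem on the two class-conditional covariance matrices. A simplified two-class sketch (the paper's unified framework differs in how the objective is formed and optimized):

        import numpy as np
        from scipy.linalg import eigh

        def fisher_spatial_filter(cov_class1, cov_class2):
            """Spatial filter w maximizing (w' S1 w) / (w' S2 w).

            cov_class1, cov_class2: class-conditional EEG covariance matrices
            (symmetric; cov_class2 must be positive definite).
            """
            # Generalized symmetric eigenproblem S1 w = lambda * S2 w;
            # scipy's eigh returns eigenvalues in ascending order.
            vals, vecs = eigh(cov_class1, cov_class2)
            return vecs[:, -1]  # filter with the largest ratio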

  19. Automatic Calibration of Global Flow Routing Model Parameters in the Amazon Basin Using Virtual SWOT Data

    NASA Astrophysics Data System (ADS)

    Mouffe, Melodie; Getirana, Augusto; Ricci, Sophie; Lion, Christine; Biancamaria, Sylvian; Boone, Aaron; Mognard, Nelly; Rogel, Philippe

    2013-09-01

    The Surface Water and Ocean Topography (SWOT) wide-swath altimetry mission will provide measurements of water surface elevations (WSE) at a global scale. The aim of this study is to investigate the potential of these satellite data for the calibration of the hydrological model HyMAP over the Amazon river basin. Since SWOT has not yet been launched, synthetic observations are used to calibrate the river bed depth and width, the Manning coefficient, and the baseflow concentration time. The calibration process consists of minimizing, with an evolutionary, global, multi-objective algorithm, a cost function that describes the difference between the simulated and observed WSE. We found that the calibration procedure is able to retrieve an optimal set of parameters that brings the simulated WSE closer to the observations. Still, with a global calibration procedure in which a uniform correction is applied, the improvement is limited to a mean correction over the catchment and the simulation period. We conclude that, in order to benefit from the high resolution and complete coverage of the SWOT mission, the calibration process should be carried out sequentially in time over sub-domains as observations become available.
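    Structurally, the calibration loop evaluates the model for a candidate parameter set, scores it against the (virtual) SWOT observations, and lets a global optimizer propose new candidates. A sketch with scipy's differential evolution standing in for the multi-objective evolutionary algorithm actually used; the model runner, bounds, and data are placeholders:

        import numpy as np
        from scipy.optimize import differential_evolution

        def run_hymap(params):
            # Placeholder for a HyMAP run returning simulated water surface
            # elevations at the virtual SWOT observation points.
            river_depth, river_width, manning_n, baseflow_tc = params
            return np.zeros(100)  # hypothetical WSE series

        wse_observed = np.zeros(100)  # virtual SWOT observations (placeholder)

        def cost(params):
            # Scalar RMSE cost between simulated and observed WSE.
            return np.sqrt(np.mean((run_hymap(params) - wse_observed) ** 2))

        bounds = [(0.5, 20.0),     # river bed depth (m)
                  (10.0, 2000.0),  # river width (m)
                  (0.01, 0.15),    # Manning coefficient
                  (1.0, 100.0)]    # baseflow concentration time (days)
        result = differential_evolution(cost, bounds, maxiter=50, seed=0)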

  20. Linking Item Parameters to a Base Scale

    ERIC Educational Resources Information Center

    Kang, Taehoon; Petersen, Nancy S.

    2012-01-01

    This paper compares three methods of item calibration--concurrent calibration, separate calibration with linking, and fixed item parameter calibration--that are frequently used for linking item parameters to a base scale. Concurrent and separate calibrations were implemented using BILOG-MG. The Stocking and Lord in "Appl Psychol Measure"…
