Sample records for calibration sensitivity analysis

  1. Trend analysis of Terra/ASTER/VNIR radiometric calibration coefficient through onboard and vicarious calibrations as well as cross calibration with MODIS

    NASA Astrophysics Data System (ADS)

    Arai, Kohei

    2012-07-01

    Radiometric Calibration Coefficients (RCC) derived from more than 11 years of onboard and vicarious calibrations are compared, together with a cross comparison to the well-calibrated MODIS RCC. Fault Tree Analysis (FTA) is also conducted to clarify possible causes of the RCC degradation, together with a sensitivity analysis for vicarious calibration. One suspected cause of RCC degradation is identified through the FTA. The test-site dependency of vicarious calibration is quite obvious, because the vicarious-calibration RCC is sensitive to surface reflectance measurement accuracy rather than to atmospheric optical depth. The results from cross calibration with MODIS confirm the significant sensitivity of vicarious calibration to surface reflectance measurements.

  2. Accurate evaluation of sensitivity for calibration between a LiDAR and a panoramic camera used for remote sensing

    NASA Astrophysics Data System (ADS)

    García-Moreno, Angel-Iván; González-Barbosa, José-Joel; Ramírez-Pedraza, Alfonso; Hurtado-Ramos, Juan B.; Ornelas-Rodriguez, Francisco-Javier

    2016-04-01

    Computer-based reconstruction models can be used to approximate urban environments. These models are usually based on several mathematical approximations and the usage of different sensors, which implies dependency on many variables. The sensitivity analysis presented in this paper is used to weigh the relative importance of each uncertainty contributor into the calibration of a panoramic camera-LiDAR system. Both sensors are used for three-dimensional urban reconstruction. Simulated and experimental tests were conducted. For the simulated tests we analyze and compare the calibration parameters using the Monte Carlo and Latin hypercube sampling techniques. Sensitivity analysis for each variable involved into the calibration was computed by the Sobol method, which is based on the analysis of the variance breakdown, and the Fourier amplitude sensitivity test method, which is based on Fourier's analysis. Sensitivity analysis is an essential tool in simulation modeling and for performing error propagation assessments.
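
    The abstract compares plain Monte Carlo with Latin hypercube sampling (LHS) for exploring the calibration parameters. As a minimal sketch (not the authors' code), stratified LHS on the unit hypercube takes a few lines of NumPy; `n_samples` and `n_dims` are illustrative names, and each column would afterwards be mapped through the inverse CDF of the corresponding parameter:

    ```python
    import numpy as np

    def latin_hypercube(n_samples, n_dims, seed=None):
        """One point per equal-probability bin in every dimension, with the
        bin order shuffled independently per dimension."""
        rng = np.random.default_rng(seed)
        u = (np.arange(n_samples)[:, None] + rng.random((n_samples, n_dims))) / n_samples
        for j in range(n_dims):
            rng.shuffle(u[:, j])
        return u

    lhs = latin_hypercube(100, 6, seed=1)            # stratified design
    mc = np.random.default_rng(1).random((100, 6))   # plain Monte Carlo, for comparison
    ```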

  3. CALIBRATION, OPTIMIZATION, AND SENSITIVITY AND UNCERTAINTY ALGORITHMS APPLICATION PROGRAMMING INTERFACE (COSU-API)

    EPA Science Inventory

    The Application Programming Interface (API) for Uncertainty Analysis, Sensitivity Analysis, and Parameter Estimation (UA/SA/PE API) tool development, hereafter referred to as the Calibration, Optimization, and Sensitivity and Uncertainty Algorithms API (COSU-API), was initially d...

  4. Calibration of a complex activated sludge model for the full-scale wastewater treatment plant.

    PubMed

    Liwarska-Bizukojc, Ewa; Olejnik, Dorota; Biernacki, Rafal; Ledakowicz, Stanislaw

    2011-08-01

    In this study, the results of the calibration of the complex activated sludge model implemented in BioWin software for a full-scale wastewater treatment plant are presented. Within the calibration of the model, sensitivity analysis of its parameters and of the fractions of carbonaceous substrate was performed. In the steady-state and dynamic calibrations, successful agreement between the measured and simulated values of the output variables was achieved. Sensitivity analysis based on the normalized sensitivity coefficient (S(i,j)) revealed that 17 (steady state) or 19 (dynamic conditions) kinetic and stoichiometric parameters are sensitive. Most of them are associated with the growth and decay of ordinary heterotrophic organisms and phosphorus-accumulating organisms. The rankings of the ten most sensitive parameters, established from the mean square sensitivity measure (δ(msqr)j), agree irrespective of whether steady-state or dynamic calibration was performed.
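
    For reference, the two measures named in this abstract have standard forms: the normalized sensitivity coefficient S(i,j) = (∂y_i/∂θ_j)(θ_j/y_i) and the mean square measure δ(msqr)j = sqrt(mean_i S(i,j)²). A finite-difference sketch, assuming a generic `model` callable rather than BioWin itself:

    ```python
    import numpy as np

    def normalized_sensitivities(model, theta, eps=0.01):
        """S[i, j] = (dy_i/dtheta_j) * theta_j / y_i via central differences."""
        theta = np.asarray(theta, float)
        y0 = np.asarray(model(theta))
        S = np.empty((y0.size, theta.size))
        for j in range(theta.size):
            h = eps * abs(theta[j]) or eps          # guard against theta[j] == 0
            tp, tm = theta.copy(), theta.copy()
            tp[j] += h
            tm[j] -= h
            dy = (np.asarray(model(tp)) - np.asarray(model(tm))) / (2 * h)
            S[:, j] = dy * theta[j] / y0
        return S

    def mean_square_measure(S):
        """delta_msqr_j = sqrt(mean_i S[i, j]**2), used to rank parameters."""
        return np.sqrt(np.mean(S**2, axis=0))
    ```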

  5. Sensitivity analysis, calibration, and testing of a distributed hydrological model using error‐based weighting and one objective function

    USGS Publications Warehouse

    Foglia, L.; Hill, Mary C.; Mehl, Steffen W.; Burlando, P.

    2009-01-01

    We evaluate the utility of three interrelated means of using data to calibrate the fully distributed rainfall-runoff model TOPKAPI as applied to the Maggia Valley drainage area in Switzerland. The use of error-based weighting of observation and prior information data, local sensitivity analysis, and single-objective-function nonlinear regression provides quantitative evaluation of the sensitivity of the 35 model parameters to the data, identification of the data types most important to the calibration, and identification of correlations among parameters that contribute to nonuniqueness. Sensitivity analysis required only 71 model runs, and regression required about 50 model runs. The approach presented appears to be ideal for evaluation of models with long run times or as a preliminary step to more computationally demanding methods. The statistics used include composite scaled sensitivities, parameter correlation coefficients, leverage, Cook's D, and DFBETAS. Tests suggest that the predictive ability of the calibrated model is typical of hydrologic models.

  6. Linked Sensitivity Analysis, Calibration, and Uncertainty Analysis Using a System Dynamics Model for Stroke Comparative Effectiveness Research.

    PubMed

    Tian, Yuan; Hassmiller Lich, Kristen; Osgood, Nathaniel D; Eom, Kirsten; Matchar, David B

    2016-11-01

    As health services researchers and decision makers tackle more difficult problems using simulation models, the number of parameters and the corresponding degree of uncertainty have increased. This often results in reduced confidence in such complex models to guide decision making. To demonstrate a systematic approach of linked sensitivity analysis, calibration, and uncertainty analysis to improve confidence in complex models. Four techniques were integrated and applied to a System Dynamics stroke model of US veterans, which was developed to inform systemwide intervention and research planning: Morris method (sensitivity analysis), multistart Powell hill-climbing algorithm and generalized likelihood uncertainty estimation (calibration), and Monte Carlo simulation (uncertainty analysis). Of 60 uncertain parameters, sensitivity analysis identified 29 needing calibration, 7 that did not need calibration but significantly influenced key stroke outcomes, and 24 not influential to calibration or stroke outcomes that were fixed at their best guess values. One thousand alternative well-calibrated baselines were obtained to reflect calibration uncertainty and brought into uncertainty analysis. The initial stroke incidence rate among veterans was identified as the most influential uncertain parameter, for which further data should be collected. That said, accounting for current uncertainty, the analysis of 15 distinct prevention and treatment interventions provided a robust conclusion that hypertension control for all veterans would yield the largest gain in quality-adjusted life years. For complex health care models, a mixed approach was applied to examine the uncertainty surrounding key stroke outcomes and the robustness of conclusions. We demonstrate that this rigorous approach can be practical and advocate for such analysis to promote understanding of the limits of certainty in applying models to current decisions and to guide future data collection. © The Author(s) 2016.
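
    The Morris method used here for screening computes elementary effects along one-at-a-time trajectories. A simplified sketch on the unit hypercube (illustrative only, not the authors' implementation; in practice each parameter range would be rescaled):

    ```python
    import numpy as np

    def morris_screening(model, n_params, n_traj=20, delta=0.5, seed=0):
        """Returns mu* (mean |elementary effect|) and sigma for each parameter."""
        rng = np.random.default_rng(seed)
        ee = [[] for _ in range(n_params)]
        for _ in range(n_traj):
            x = rng.random(n_params)                 # random start point
            y = model(x)
            for j in rng.permutation(n_params):      # move each factor once
                step = delta if x[j] + delta <= 1.0 else -delta
                x[j] += step
                y_new = model(x)
                ee[j].append((y_new - y) / step)
                y = y_new
        ee = np.asarray(ee)
        return np.abs(ee).mean(axis=1), ee.std(axis=1)
    ```

    High mu* flags parameters worth calibrating; high sigma relative to mu* flags interactions or nonlinearity.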

  7. Sensitivity of planetary cruise navigation to earth orientation calibration errors

    NASA Technical Reports Server (NTRS)

    Estefan, J. A.; Folkner, W. M.

    1995-01-01

    A detailed analysis was conducted to determine the sensitivity of spacecraft navigation errors to the accuracy and timeliness of Earth orientation calibrations. Analyses based on simulated X-band (8.4-GHz) Doppler and ranging measurements acquired during the interplanetary cruise segment of the Mars Pathfinder heliocentric trajectory were completed for the nominal trajectory design and for an alternative trajectory with a longer transit time. Several error models were developed to characterize the effect of Earth orientation on navigational accuracy based on current and anticipated Deep Space Network calibration strategies. The navigational sensitivity of Mars Pathfinder to calibration errors in Earth orientation was computed for each candidate calibration strategy with the Earth orientation parameters included as estimated parameters in the navigation solution. In these cases, the calibration errors contributed 23 to 58% of the total navigation error budget, depending on the calibration strategy being assessed. Navigation sensitivity calculations were also performed for cases in which Earth orientation calibration errors were not adjusted in the navigation solution. In these cases, Earth orientation calibration errors contributed from 26 to as much as 227% of the total navigation error budget. The final analysis suggests that, not only is the method used to calibrate Earth orientation vitally important for precision navigation of Mars Pathfinder, but perhaps equally important is the method for inclusion of the calibration errors in the navigation solutions.

  8. Sensitivity and Uncertainty Analysis for Streamflow Prediction Using Different Objective Functions and Optimization Algorithms: San Joaquin California

    NASA Astrophysics Data System (ADS)

    Paul, M.; Negahban-Azar, M.

    2017-12-01

    Hydrologic models usually need to be calibrated against observed streamflow at the outlet of a particular drainage area through a careful model calibration. However, a large number of parameters must be fitted in the model because field measurements are unavailable, so it is difficult to calibrate the model over a large number of potentially uncertain parameters. This becomes even more challenging if the model covers a large watershed with multiple land uses and varied geophysical characteristics. Sensitivity analysis (SA) can be used as a tool to identify the most sensitive model parameters, which govern the calibrated model performance. Many different calibration and uncertainty analysis algorithms can be run with different objective functions. By incorporating sensitive parameters in streamflow simulation, the effect of a suitable algorithm on improving model performance can be demonstrated with Soil and Water Assessment Tool (SWAT) modeling. In this study, SWAT was applied to the San Joaquin Watershed in California, covering 19,704 km2, to calibrate daily streamflow. Recently, severe water stress has been escalating in this watershed due to intensified climate variability, prolonged drought, and depletion of groundwater for agricultural irrigation. It is therefore important to perform a proper uncertainty analysis, given the uncertainties inherent in hydrologic modeling, to predict the spatial and temporal variation of the hydrologic processes and to evaluate the impacts of different hydrologic variables. The purpose of this study was to evaluate the sensitivity and uncertainty of the calibrated parameters for predicting streamflow. To evaluate the sensitivity of the calibrated parameters, three different optimization algorithms (Sequential Uncertainty Fitting, SUFI-2; Generalized Likelihood Uncertainty Estimation, GLUE; and Parameter Solution, ParaSol) were used with four different objective functions (coefficient of determination, r2; Nash-Sutcliffe efficiency, NSE; percent bias, PBIAS; and Kling-Gupta efficiency, KGE). Preliminary results showed that the SUFI-2 algorithm with the NSE and KGE objective functions significantly improved the calibration (e.g., R2 and NSE of 0.52 and 0.47, respectively, for daily streamflow calibration).
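
    The four objective functions listed above have closed forms; a compact NumPy sketch (the PBIAS sign follows the common SWAT convention, where positive values indicate underestimation):

    ```python
    import numpy as np

    def nse(obs, sim):
        """Nash-Sutcliffe efficiency: 1 - SSE / variance of the observations."""
        obs, sim = np.asarray(obs, float), np.asarray(sim, float)
        return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

    def pbias(obs, sim):
        obs, sim = np.asarray(obs, float), np.asarray(sim, float)
        return 100.0 * np.sum(obs - sim) / np.sum(obs)

    def kge(obs, sim):
        """Kling-Gupta efficiency from correlation, variability, and bias terms."""
        obs, sim = np.asarray(obs, float), np.asarray(sim, float)
        r = np.corrcoef(obs, sim)[0, 1]
        alpha = sim.std() / obs.std()
        beta = sim.mean() / obs.mean()
        return 1.0 - np.sqrt((r - 1) ** 2 + (alpha - 1) ** 2 + (beta - 1) ** 2)

    def r2(obs, sim):
        return np.corrcoef(obs, sim)[0, 1] ** 2
    ```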

  9. Continuous glucose monitoring in subcutaneous tissue using factory-calibrated sensors: a pilot study.

    PubMed

    Hoss, Udo; Jeddi, Iman; Schulz, Mark; Budiman, Erwin; Bhogal, Claire; McGarraugh, Geoffrey

    2010-08-01

    Commercial continuous subcutaneous glucose monitors require in vivo calibration using capillary blood glucose tests. Feasibility of factory calibration, i.e., sensor batch characterization in vitro with no further need for in vivo calibration, requires a predictable and stable in vivo sensor sensitivity and limited inter- and intra-subject variation of the ratio of interstitial to blood glucose concentration. Twelve volunteers wore two FreeStyle Navigator (Abbott Diabetes Care, Alameda, CA) continuous glucose monitoring systems for 5 days in parallel for two consecutive sensor wears (four sensors per subject, 48 sensors total). Sensors from a prototype sensor lot with a low variability in glucose sensitivity were used for the study. Median sensor sensitivity values based on capillary blood glucose were calculated per sensor and compared for inter- and intra-subject variation. Mean absolute relative difference (MARD) calculation and error grid analysis were performed using a single calibration factor for all sensors to simulate factory calibration and compared to standard fingerstick calibration. Sensor sensitivity variation in vitro was 4.6%, which increased to 8.3% in vivo (P < 0.0001). Analysis of variance revealed no significant inter-subject differences in sensor sensitivity (P = 0.134). Applying a single universal calibration factor retrospectively to all sensors resulted in a MARD of 10.4% and 88.1% of values in Clarke Error Grid Zone A, compared to a MARD of 10.9% and 86% of values in Error Grid Zone A for fingerstick calibration. Factory calibration of sensors for continuous subcutaneous glucose monitoring is feasible with similar accuracy to standard fingerstick calibration. Additional data are required to confirm this result in subjects with diabetes.
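
    The accuracy metric used here, mean absolute relative difference (MARD), and the simulated factory calibration with a single lot-wide factor are both simple to express. The numbers below are hypothetical placeholders, not study data:

    ```python
    import numpy as np

    def mard(sensor, reference):
        """Mean absolute relative difference, in percent."""
        sensor = np.asarray(sensor, float)
        reference = np.asarray(reference, float)
        return 100.0 * np.mean(np.abs(sensor - reference) / reference)

    raw_signal = np.array([812.0, 950.0, 1100.0])    # hypothetical raw sensor counts
    universal_factor = 0.105                         # hypothetical lot-wide factor
    glucose_factory = raw_signal * universal_factor  # "factory" calibration
    # Fingerstick calibration would instead fit a per-sensor factor against
    # capillary blood glucose readings taken during wear.
    ```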

  10. Sensitivity-Based Guided Model Calibration

    NASA Astrophysics Data System (ADS)

    Semnani, M.; Asadzadeh, M.

    2017-12-01

    A common practice in automatic calibration of hydrologic models is to apply sensitivity analysis prior to global optimization, reducing the number of decision variables (DVs) by identifying the most sensitive ones. This two-stage process aims to improve optimization efficiency. However, parameter sensitivity information can also be used to enhance the ability of optimization algorithms to find good-quality solutions in fewer solution evaluations. This improvement can be achieved by focusing the optimization on sampling the most sensitive parameters in each iteration. In this study, the selection process of the dynamically dimensioned search (DDS) optimization algorithm is enhanced by utilizing a sensitivity analysis method to put more emphasis on the most sensitive decision variables for perturbation. The performance of DDS with sensitivity information is compared to the original version of DDS on different mathematical test functions and a model calibration case study. Overall, the results show that DDS with sensitivity information finds nearly the same solutions as the original DDS, but in significantly fewer solution evaluations.
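
    In the original DDS algorithm, each decision variable enters the perturbation set with probability 1 - ln(i)/ln(m) at iteration i of m. The study's idea is to bias this selection toward sensitive DVs; one possible weighting (the authors' exact scheme is not reproduced here) is sketched below:

    ```python
    import numpy as np

    def dds_select(n_dvs, iteration, max_iter, weights=None, seed=None):
        """Pick the subset of decision variables to perturb this iteration."""
        rng = np.random.default_rng(seed)
        p = 1.0 - np.log(iteration) / np.log(max_iter)   # original DDS schedule
        p = np.full(n_dvs, p)
        if weights is not None:                          # sensitivity-informed bias
            w = np.asarray(weights, float)
            p *= w * n_dvs / w.sum()
        mask = rng.random(n_dvs) < np.clip(p, 0.0, 1.0)
        if not mask.any():                               # DDS always perturbs >= 1 DV
            mask[rng.integers(n_dvs)] = True
        return mask
    ```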

  11. Finite Element Model Calibration Approach for Ares I-X

    NASA Technical Reports Server (NTRS)

    Horta, Lucas G.; Reaves, Mercedes C.; Buehrle, Ralph D.; Templeton, Justin D.; Gaspar, James L.; Lazor, Daniel R.; Parks, Russell A.; Bartolotta, Paul A.

    2010-01-01

    Ares I-X is a pathfinder vehicle concept under development by NASA to demonstrate a new class of launch vehicles. Although this vehicle is essentially a shell of what the Ares I vehicle will be, efforts are underway to model and calibrate the analytical models before its maiden flight. Work reported in this document will summarize the model calibration approach used including uncertainty quantification of vehicle responses and the use of non-conventional boundary conditions during component testing. Since finite element modeling is the primary modeling tool, the calibration process uses these models, often developed by different groups, to assess model deficiencies and to update parameters to reconcile test with predictions. Data for two major component tests and the flight vehicle are presented along with the calibration results. For calibration, sensitivity analysis is conducted using Analysis of Variance (ANOVA). To reduce the computational burden associated with ANOVA calculations, response surface models are used in lieu of computationally intensive finite element solutions. From the sensitivity studies, parameter importance is assessed as a function of frequency. In addition, the work presents an approach to evaluate the probability that a parameter set exists to reconcile test with analysis. Comparisons of pretest predictions of frequency response uncertainty bounds with measured data, results from the variance-based sensitivity analysis, and results from component test models with calibrated boundary stiffness models are all presented.

  12. Finite Element Model Calibration Approach for Ares I-X

    NASA Technical Reports Server (NTRS)

    Horta, Lucas G.; Reaves, Mercedes C.; Buehrle, Ralph D.; Templeton, Justin D.; Lazor, Daniel R.; Gaspar, James L.; Parks, Russel A.; Bartolotta, Paul A.

    2010-01-01

    Ares I-X is a pathfinder vehicle concept under development by NASA to demonstrate a new class of launch vehicles. Although this vehicle is essentially a shell of what the Ares I vehicle will be, efforts are underway to model and calibrate the analytical models before its maiden flight. Work reported in this document will summarize the model calibration approach used including uncertainty quantification of vehicle responses and the use of nonconventional boundary conditions during component testing. Since finite element modeling is the primary modeling tool, the calibration process uses these models, often developed by different groups, to assess model deficiencies and to update parameters to reconcile test with predictions. Data for two major component tests and the flight vehicle are presented along with the calibration results. For calibration, sensitivity analysis is conducted using Analysis of Variance (ANOVA). To reduce the computational burden associated with ANOVA calculations, response surface models are used in lieu of computationally intensive finite element solutions. From the sensitivity studies, parameter importance is assessed as a function of frequency. In addition, the work presents an approach to evaluate the probability that a parameter set exists to reconcile test with analysis. Comparisons of pre-test predictions of frequency response uncertainty bounds with measured data, results from the variance-based sensitivity analysis, and results from component test models with calibrated boundary stiffness models are all presented.

  13. Evaluation of the AnnAGNPS Model for Predicting Runoff and Nutrient Export in a Typical Small Watershed in the Hilly Region of Taihu Lake.

    PubMed

    Luo, Chuan; Li, Zhaofu; Li, Hengpeng; Chen, Xiaomin

    2015-09-02

    The application of hydrological and water quality models is an efficient approach to better understand the processes of environmental deterioration. This study evaluated the ability of the Annualized Agricultural Non-Point Source (AnnAGNPS) model to predict runoff, total nitrogen (TN) and total phosphorus (TP) loading in a typical small watershed of a hilly region near Taihu Lake, China. Runoff was calibrated and validated at both annual and monthly scales, and parameter sensitivity analysis was performed for TN and TP before the two water quality components were calibrated. The results showed that the model satisfactorily simulated runoff at annual and monthly scales, during both the calibration and validation processes. Additionally, parameter sensitivity analysis showed that TN output was most sensitive to the parameters Fertilizer rate, Fertilizer organic, Canopy cover and Fertilizer inorganic. In terms of TP, the most sensitive parameters were Residue mass ratio, Fertilizer rate, Fertilizer inorganic and Canopy cover. Based on these sensitive parameters, calibration was performed. TN loading produced satisfactory results for both the calibration and validation processes, whereas the performance of TP loading was slightly poorer. The simulation results showed that AnnAGNPS has the potential to be used as a valuable tool for the planning and management of watersheds.

  14. AN OVERVIEW OF THE UNCERTAINTY ANALYSIS, SENSITIVITY ANALYSIS, AND PARAMETER ESTIMATION (UA/SA/PE) API AND HOW TO IMPLEMENT IT

    EPA Science Inventory

    The Application Programming Interface (API) for Uncertainty Analysis, Sensitivity Analysis, and Parameter Estimation (UA/SA/PE API), also known as the Calibration, Optimization, and Sensitivity and Uncertainty Algorithms API (COSU-API), was developed in a joint effort between several members of both ...

  15. A new methodology based on sensitivity analysis to simplify the recalibration of functional-structural plant models in new conditions.

    PubMed

    Mathieu, Amélie; Vidal, Tiphaine; Jullien, Alexandra; Wu, QiongLi; Chambon, Camille; Bayol, Benoit; Cournède, Paul-Henry

    2018-06-19

    Functional-structural plant models (FSPMs) describe explicitly the interactions between plants and their environment at organ to plant scale. However, the high level of description of the structure or model mechanisms makes this type of model very complex and hard to calibrate. A two-step methodology to facilitate the calibration process is proposed here. First, a global sensitivity analysis method was applied to the calibration loss function. It provided first-order and total-order sensitivity indexes that allow parameters to be ranked by importance in order to select the most influential ones. Second, the Akaike information criterion (AIC) was used to quantify the model's quality of fit after calibration with different combinations of selected parameters. The model with the lowest AIC gives the best combination of parameters to select. This methodology was validated by calibrating the model on an independent data set (same cultivar, another year) with the parameters selected in the second step. All the parameters were set to their nominal value; only the most influential ones were re-estimated. Sensitivity analysis applied to the calibration loss function is a relevant method to underline the most significant parameters in the estimation process. For the studied winter oilseed rape model, 11 out of 26 estimated parameters were selected. Then, the model could be recalibrated for a different data set by re-estimating only three parameters selected with the model selection method. Fitting only a small number of parameters dramatically increases the efficiency of recalibration, increases the robustness of the model and helps identify the principal sources of variation in varying environmental conditions. This innovative method still needs to be more widely validated but already gives interesting avenues to improve the calibration of FSPMs.
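
    For least-squares calibration, the AIC comparison in the second step reduces to a simple function of the residual sum of squares and the number of re-estimated parameters; constant terms can be dropped because only differences between candidate subsets matter. A minimal sketch:

    ```python
    import numpy as np

    def aic_least_squares(residuals, n_fitted_params):
        """AIC = n * ln(RSS / n) + 2k, up to an additive constant."""
        r = np.asarray(residuals, float)
        n = r.size
        return n * np.log(np.sum(r ** 2) / n) + 2 * n_fitted_params

    # Recalibrate each candidate subset of influential parameters, then keep
    # the subset whose fit minimizes the AIC.
    ```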

  16. Global sensitivity analysis of a filtration model for submerged anaerobic membrane bioreactors (AnMBR).

    PubMed

    Robles, A; Ruano, M V; Ribes, J; Seco, A; Ferrer, J

    2014-04-01

    The results of a global sensitivity analysis of a filtration model for submerged anaerobic MBRs (AnMBRs) are assessed in this paper. This study aimed to (1) identify the less- (or non-) influential factors of the model in order to facilitate model calibration and (2) validate the modelling approach (i.e. to determine the need for each of the proposed factors to be included in the model). The sensitivity analysis was conducted using a revised version of the Morris screening method. The dynamic simulations were conducted using long-term data obtained from an AnMBR plant fitted with industrial-scale hollow-fibre membranes. Of the 14 factors in the model, six were identified as influential, i.e. those calibrated using off-line protocols. A dynamic calibration (based on optimisation algorithms) of these influential factors was conducted. The resulting estimated model factors accurately predicted membrane performance. Copyright © 2014 Elsevier Ltd. All rights reserved.

  17. Evaluation of the AnnAGNPS Model for Predicting Runoff and Nutrient Export in a Typical Small Watershed in the Hilly Region of Taihu Lake

    PubMed Central

    Luo, Chuan; Li, Zhaofu; Li, Hengpeng; Chen, Xiaomin

    2015-01-01

    The application of hydrological and water quality models is an efficient approach to better understand the processes of environmental deterioration. This study evaluated the ability of the Annualized Agricultural Non-Point Source (AnnAGNPS) model to predict runoff, total nitrogen (TN) and total phosphorus (TP) loading in a typical small watershed of a hilly region near Taihu Lake, China. Runoff was calibrated and validated at both annual and monthly scales, and parameter sensitivity analysis was performed for TN and TP before the two water quality components were calibrated. The results showed that the model satisfactorily simulated runoff at annual and monthly scales, during both the calibration and validation processes. Additionally, parameter sensitivity analysis showed that TN output was most sensitive to the parameters Fertilizer rate, Fertilizer organic, Canopy cover and Fertilizer inorganic. In terms of TP, the most sensitive parameters were Residue mass ratio, Fertilizer rate, Fertilizer inorganic and Canopy cover. Based on these sensitive parameters, calibration was performed. TN loading produced satisfactory results for both the calibration and validation processes, whereas the performance of TP loading was slightly poorer. The simulation results showed that AnnAGNPS has the potential to be used as a valuable tool for the planning and management of watersheds. PMID:26364642

  18. Influence of Primary Gage Sensitivities on the Convergence of Balance Load Iterations

    NASA Technical Reports Server (NTRS)

    Ulbrich, Norbert Manfred

    2012-01-01

    The connection between the convergence of wind tunnel balance load iterations and the existence of the primary gage sensitivities of a balance is discussed. First, basic elements of two load iteration equations that the iterative method uses, in combination with results of a calibration data analysis, for the prediction of balance loads are reviewed. Then, the connection between the primary gage sensitivities, the load format, the gage output format, and the convergence characteristics of the load iteration equation choices is investigated. A new criterion is also introduced that may be used to objectively determine whether the primary gage sensitivity of a balance gage exists. It is then shown that both load iteration equations will converge as long as a suitable regression model is used for the analysis of the balance calibration data, the combined influence of nonlinear terms of the regression model is very small, and the primary gage sensitivities of all balance gages exist. The last requirement is fulfilled, e.g., if force balance calibration data is analyzed in force balance format. Finally, it is demonstrated that only one of the two load iteration equation choices, i.e., the iteration equation used by the primary load iteration method, converges if one or more primary gage sensitivities are missing. This situation may occur, e.g., if force balance calibration data is analyzed in direct-read format using the original gage outputs. Data from the calibration of a six-component force balance are used to illustrate the connection between the convergence of the load iteration equation choices and the existence of the primary gage sensitivities.

  19. Probabilistic methods for sensitivity analysis and calibration in the NASA challenge problem

    DOE PAGES

    Safta, Cosmin; Sargsyan, Khachik; Najm, Habib N.; ...

    2015-01-01

    In this study, a series of algorithms are proposed to address the problems in the NASA Langley Research Center Multidisciplinary Uncertainty Quantification Challenge. A Bayesian approach is employed to characterize and calibrate the epistemic parameters based on the available data, whereas a variance-based global sensitivity analysis is used to rank the epistemic and aleatory model parameters. A nested sampling of the aleatory–epistemic space is proposed to propagate uncertainties from model parameters to output quantities of interest.
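
    The nested sampling mentioned here is an outer loop over epistemic parameters with an inner loop over aleatory variables, so each epistemic realization yields a full output distribution. A schematic sketch (all names illustrative):

    ```python
    import numpy as np

    def nested_propagation(model, sample_epistemic, sample_aleatory,
                           n_outer=100, n_inner=1000, seed=0):
        """Returns an (n_outer, n_inner) array: one output distribution per
        epistemic realization."""
        rng = np.random.default_rng(seed)
        out = np.empty((n_outer, n_inner))
        for i in range(n_outer):
            e = sample_epistemic(rng)        # held fixed through the inner loop
            for j in range(n_inner):
                a = sample_aleatory(rng)
                out[i, j] = model(e, a)
        return out
    ```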

  20. Probabilistic methods for sensitivity analysis and calibration in the NASA challenge problem

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Safta, Cosmin; Sargsyan, Khachik; Najm, Habib N.

    In this study, a series of algorithms are proposed to address the problems in the NASA Langley Research Center Multidisciplinary Uncertainty Quantification Challenge. A Bayesian approach is employed to characterize and calibrate the epistemic parameters based on the available data, whereas a variance-based global sensitivity analysis is used to rank the epistemic and aleatory model parameters. A nested sampling of the aleatory–epistemic space is proposed to propagate uncertainties from model parameters to output quantities of interest.

  1. Sensitivity Analysis of Fatigue Crack Growth Model for API Steels in Gaseous Hydrogen.

    PubMed

    Amaro, Robert L; Rustagi, Neha; Drexler, Elizabeth S; Slifka, Andrew J

    2014-01-01

    A model to predict fatigue crack growth of API pipeline steels in high pressure gaseous hydrogen has been developed and is presented elsewhere. The model currently has several parameters that must be calibrated for each pipeline steel of interest. This work provides a sensitivity analysis of the model parameters in order to provide (a) insight to the underlying mathematical and mechanistic aspects of the model, and (b) guidance for model calibration of other API steels.

  2. Global Sensitivity Analysis with Small Sample Sizes: Ordinary Least Squares Approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Davis, Michael J.; Liu, Wei; Sivaramakrishnan, Raghu

    2016-12-21

    A new version of global sensitivity analysis is developed in this paper. This new version, coupled with tools from statistics, machine learning, and optimization, can devise small sample sizes that allow for the accurate ordering of sensitivity coefficients for the first 10-30 most sensitive chemical reactions in complex chemical-kinetic mechanisms, and is particularly useful for studying the chemistry in realistic devices. A key part of the paper is calibration of these small samples. Because the small sample sizes are developed for use in realistic combustion devices, the calibration is done over the ranges of conditions in such devices, with a test case being the operating conditions of a compression ignition engine studied earlier. Compression ignition engines operate under low-temperature combustion conditions with quite complicated chemistry, making this calibration difficult and leading to the possibility of false positives and false negatives in the ordering of the reactions. An important aspect of the paper is therefore showing how to handle the trade-off between false positives and false negatives using ideas from the multiobjective optimization literature. The combination of the new global sensitivity method and the calibration yields sample sizes approximately a factor of 10 smaller than were available with our previous algorithm.
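
    Ordering sensitivities by ordinary least squares usually means standardized regression coefficients: regress the standardized output on the standardized inputs and rank by coefficient magnitude. The sketch below illustrates that general idea, not the authors' exact algorithm:

    ```python
    import numpy as np

    def standardized_regression_coefficients(X, y):
        """OLS fit on centered/scaled inputs and output; rank by |beta|."""
        X, y = np.asarray(X, float), np.asarray(y, float)
        Xs = (X - X.mean(axis=0)) / X.std(axis=0)
        ys = (y - y.mean()) / y.std()
        beta, *_ = np.linalg.lstsq(Xs, ys, rcond=None)
        return beta

    # ranking = np.argsort(np.abs(beta))[::-1] orders reactions by sensitivity.
    ```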

  3. Basin-scale geothermal model calibration: experience from the Perth Basin, Australia

    NASA Astrophysics Data System (ADS)

    Wellmann, Florian; Reid, Lynn

    2014-05-01

    The calibration of large-scale geothermal models for entire sedimentary basins is challenging, as direct measurements of rock properties and subsurface temperatures are commonly scarce and the basal boundary conditions poorly constrained. Instead of the often applied "trial-and-error" manual model calibration, we examine here whether we can gain additional insight into parameter sensitivities and model uncertainty with a model analysis and calibration study. Our geothermal model is based on a high-resolution full 3-D geological model, covering an area of more than 100,000 square kilometers and extending to a depth of 55 kilometers. The model contains all major faults (>80) and geological units (13) for the entire basin. This geological model is discretised into a rectilinear mesh with a lateral resolution of 500 x 500 m and a variable resolution at depth. The highest resolution of 25 m is applied to the depth range of 1000-3000 m, where most temperature measurements are available. The entire discretised model consists of approximately 50 million cells. The top thermal boundary condition is derived from surface temperature measurements on land and the ocean floor. The base of the model extends below the Moho, and we apply the heat flux over the Moho as a basal heat flux boundary condition. Rock properties (thermal conductivity, porosity, and heat production) have been compiled from several existing data sets. The conductive geothermal forward simulation is performed with SHEMAT, and we then use the stand-alone capabilities of iTOUGH2 for sensitivity analysis and model calibration. Simulated temperatures are compared to 130 quality-weighted bottom-hole temperature measurements. The sensitivity analysis provided clear insight into the most sensitive parameters and parameter correlations. This proved to be of value, as strong correlations, for example between basal heat flux and heat production in deep geological units, can significantly influence the model calibration procedure. The calibration resulted in a better determination of subsurface temperatures and, in addition, provided insight into model quality. Furthermore, a detailed analysis of the measurements used for calibration highlighted potential outliers and limitations of the model assumptions. Extending the previously existing large-scale geothermal simulation with iTOUGH2 provided valuable insight into the sensitive parameters and data in the model, which would clearly not be possible with a simple trial-and-error calibration method. Using the gained knowledge, future work will include more detailed studies on the influence of advection and convection.

  4. Calibration of Passive Microwave Polarimeters that Use Hybrid Coupler-Based Correlators

    NASA Technical Reports Server (NTRS)

    Piepmeier, J. R.

    2003-01-01

    Four calibration algorithms are studied for microwave polarimeters that use hybrid coupler-based correlators: 1) conventional two-look calibration with hot and cold sources, 2) three looks at combinations of hot and cold sources, 3) two-look calibration with a correlated source, and 4) four-look calibration combining methods 2 and 3. The systematic errors are found to depend on the polarimeter component parameters and on the accuracy of the calibration noise temperatures. A case-study radiometer in four different remote sensing scenarios was considered in light of these results. Applications for ocean surface salinity, ocean surface winds, and soil moisture were found to be sensitive to different systematic errors. Finally, a standard uncertainty analysis was performed on the four-look calibration algorithm, which was found to be most sensitive to the correlated calibration source.
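
    Method 1 above, the conventional two-look calibration, fixes a radiometer's linear transfer function from hot and cold reference looks. A worked numeric sketch with hypothetical values:

    ```python
    # Linear detector model: counts V = gain * T + offset.
    T_hot, T_cold = 300.0, 77.0       # hypothetical reference temperatures, K
    V_hot, V_cold = 8400.0, 2150.0    # hypothetical detector counts

    gain = (V_hot - V_cold) / (T_hot - T_cold)
    offset = V_hot - gain * T_hot
    T_scene = (6500.0 - offset) / gain   # invert a scene measurement
    ```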

  5. Sensitivity Analysis and Uncertainty Quantification for the LAMMPS Molecular Dynamics Simulation Code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Picard, Richard Roy; Bhat, Kabekode Ghanasham

    2017-07-18

    We examine sensitivity analysis and uncertainty quantification for molecular dynamics simulation. Extreme (large or small) output values for the LAMMPS code often occur at the boundaries of input regions, and uncertainties in those boundary values are overlooked by common SA methods. Similarly, input values for which code outputs are consistent with calibration data can also occur near boundaries. Upon applying approaches in the literature for imprecise probabilities (IPs), much more realistic results are obtained than for the complacent application of standard SA and code calibration.

  6. A mixture theory model of fluid and solute transport in the microvasculature of normal and malignant tissues. II: Factor sensitivity analysis, calibration, and validation.

    PubMed

    Schuff, M M; Gore, J P; Nauman, E A

    2013-12-01

    The treatment of cancerous tumors is dependent upon the delivery of therapeutics through the blood by means of the microcirculation. Differences in the vasculature of normal and malignant tissues have been recognized, but it is not fully understood how these differences affect transport and the applicability of existing mathematical models has been questioned at the microscale due to the complex rheology of blood and fluid exchange with the tissue. In addition to determining an appropriate set of governing equations it is necessary to specify appropriate model parameters based on physiological data. To this end, a two stage sensitivity analysis is described which makes it possible to determine the set of parameters most important to the model's calibration. In the first stage, the fluid flow equations are examined and a sensitivity analysis is used to evaluate the importance of 11 different model parameters. Of these, only four substantially influence the intravascular axial flow providing a tractable set that could be calibrated using red blood cell velocity data from the literature. The second stage also utilizes a sensitivity analysis to evaluate the importance of 14 model parameters on extravascular flux. Of these, six exhibit high sensitivity and are integrated into the model calibration using a response surface methodology and experimental intra- and extravascular accumulation data from the literature (Dreher et al. in J Natl Cancer Inst 98(5):335-344, 2006). The model exhibits good agreement with the experimental results for both the mean extravascular concentration and the penetration depth as a function of time for inert dextran over a wide range of molecular weights.

  7. Integrating model behavior, optimization, and sensitivity/uncertainty analysis: overview and application of the MOUSE software toolbox

    USDA-ARS?s Scientific Manuscript database

    This paper provides an overview of the Model Optimization, Uncertainty, and SEnsitivity Analysis (MOUSE) software application, an open-source, Java-based toolbox of visual and numerical analysis components for the evaluation of environmental models. MOUSE is based on the OPTAS model calibration syst...

  8. The impact of standard and hard-coded parameters on the hydrologic fluxes in the Noah-MP land surface model

    NASA Astrophysics Data System (ADS)

    Thober, S.; Cuntz, M.; Mai, J.; Samaniego, L. E.; Clark, M. P.; Branch, O.; Wulfmeyer, V.; Attinger, S.

    2016-12-01

    Land surface models incorporate a large number of processes, described by physical, chemical and empirical equations. The agility of the models to react to different meteorological conditions is artificially constrained by having hard-coded parameters in their equations. Here we searched for hard-coded parameters in the computer code of the land surface model Noah with multiple process options (Noah-MP) to assess the model's agility during parameter estimation. We found 139 hard-coded values in all Noah-MP process options in addition to the 71 standard parameters. We performed a Sobol' global sensitivity analysis to variations of the standard and hard-coded parameters. The sensitivities of the hydrologic output fluxes latent heat and total runoff, their component fluxes, as well as photosynthesis and sensible heat were evaluated at twelve catchments of the Eastern United States with very different hydro-meteorological regimes. Noah-MP's output fluxes are sensitive to two thirds of its standard parameters. The most sensitive parameter is, however, a hard-coded value in the formulation of soil surface resistance for evaporation, which proved to be oversensitive in other land surface models as well. Latent heat and total runoff show very similar sensitivities towards standard and hard-coded parameters. They are sensitive to both soil and plant parameters, which means that model calibrations of hydrologic or land surface models should take both soil and plant parameters into account. Sensible and latent heat exhibit almost the same sensitivities so that calibration or sensitivity analysis can be performed with either of the two. Photosynthesis has almost the same sensitivities as transpiration, which are different from the sensitivities of latent heat. Including photosynthesis and latent heat in model calibration might therefore be beneficial. Surface runoff is sensitive to almost all hard-coded snow parameters. These sensitivities get, however, diminished in total runoff. It is thus recommended to include the most sensitive hard-coded model parameters that were exposed in this study when calibrating Noah-MP.

  9. Regression Analysis and Calibration Recommendations for the Characterization of Balance Temperature Effects

    NASA Technical Reports Server (NTRS)

    Ulbrich, N.; Volden, T.

    2018-01-01

    Analysis and use of temperature-dependent wind tunnel strain-gage balance calibration data are discussed in the paper. First, three different methods are presented and compared that may be used to process temperature-dependent strain-gage balance data. The first method uses an extended set of independent variables in order to process the data and predict balance loads. The second method applies an extended load iteration equation during the analysis of balance calibration data. The third method uses temperature-dependent sensitivities for the data analysis. Physical interpretations of the most important temperature-dependent regression model terms are provided that relate temperature compensation imperfections and the temperature-dependent nature of the gage factor to sets of regression model terms. Finally, balance calibration recommendations are listed so that temperature-dependent calibration data can be obtained and successfully processed using the reviewed analysis methods.

  10. Global Sensitivity Analysis and Parameter Calibration for an Ecosystem Carbon Model

    NASA Astrophysics Data System (ADS)

    Safta, C.; Ricciuto, D. M.; Sargsyan, K.; Najm, H. N.; Debusschere, B.; Thornton, P. E.

    2013-12-01

    We present uncertainty quantification results for a process-based ecosystem carbon model. The model employs 18 parameters and is driven by meteorological data corresponding to years 1992-2006 at the Harvard Forest site. Daily Net Ecosystem Exchange (NEE) observations were available to calibrate the model parameters and test the performance of the model. Posterior distributions show good predictive capabilities for the calibrated model. A global sensitivity analysis was first performed to determine the important model parameters based on their contribution to the variance of NEE. We then proceed to calibrate the model parameters in a Bayesian framework. The daily discrepancies between measured and predicted NEE values were modeled as independent and identically distributed Gaussians with prescribed daily variance according to the recorded instrument error. All model parameters were assumed to have uninformative priors with bounds set according to expert opinion. The global sensitivity results show that the rate of leaf fall (LEAFALL) is responsible for approximately 25% of the total variance in the average NEE for 1992-2005. A set of 4 other parameters, Nitrogen use efficiency (NUE), base rate for maintenance respiration (BR_MR), growth respiration fraction (RG_FRAC), and allocation to plant stem pool (ASTEM) contribute between 5% and 12% to the variance in average NEE, while the rest of the parameters have smaller contributions. The posterior distributions, sampled with a Markov Chain Monte Carlo algorithm, exhibit significant correlations between model parameters. However LEAFALL, the most important parameter for the average NEE, is not informed by the observational data, while less important parameters show significant updates between their prior and posterior densities. The Fisher information matrix values, indicating which parameters are most informed by the experimental observations, are examined to augment the comparison between the calibration and global sensitivity analysis results.
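
    The calibration described here, Gaussian daily NEE discrepancies with prescribed instrument variance and uniform priors, can be sampled with a basic random-walk Metropolis algorithm. The sketch assumes a user-supplied `log_post` that returns -inf outside the prior bounds (all names are illustrative):

    ```python
    import numpy as np

    def metropolis(log_post, theta0, n_steps=50000, step=0.05, seed=0):
        """Random-walk Metropolis sampler over the model parameters."""
        rng = np.random.default_rng(seed)
        theta = np.asarray(theta0, float)
        lp = log_post(theta)
        chain = np.empty((n_steps, theta.size))
        for i in range(n_steps):
            proposal = theta + step * rng.standard_normal(theta.size)
            lp_prop = log_post(proposal)
            if np.log(rng.random()) < lp_prop - lp:   # accept/reject
                theta, lp = proposal, lp_prop
            chain[i] = theta
        return chain
    ```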

  11. Using sensitivity analysis in model calibration efforts

    USGS Publications Warehouse

    Tiedeman, Claire; Hill, Mary C.

    2003-01-01

    In models of natural and engineered systems, sensitivity analysis can be used to assess relations among system state observations, model parameters, and model predictions. The model itself links these three entities, and model sensitivities can be used to quantify the links. Sensitivities are defined as the derivatives of simulated quantities (such as simulated equivalents of observations, or model predictions) with respect to model parameters. We present four measures calculated from model sensitivities that quantify the observation-parameter-prediction links and that are especially useful during the calibration and prediction phases of modeling. These four measures are composite scaled sensitivities (CSS), prediction scaled sensitivities (PSS), the value of improved information (VOII) statistic, and the observation prediction (OPR) statistic. These measures can be used to help guide initial calibration of models, collection of field data beneficial to model predictions, and recalibration of models updated with new field information. Once model sensitivities have been calculated, each of the four measures requires minimal computational effort. We apply the four measures to a three-layer MODFLOW-2000 (Harbaugh et al., 2000; Hill et al., 2000) model of the Death Valley regional ground-water flow system (DVRFS), located in southern Nevada and California. D’Agnese et al. (1997, 1999) developed and calibrated the model using nonlinear regression methods. Figure 1 shows some of the observations, parameters, and predictions for the DVRFS model. Observed quantities include hydraulic heads and spring flows. The 23 defined model parameters include hydraulic conductivities, vertical anisotropies, recharge rates, evapotranspiration rates, and pumpage. Predictions of interest for this regional-scale model are advective transport paths from potential contamination sites underlying the Nevada Test Site and Yucca Mountain.
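
    The first of the four measures, composite scaled sensitivity, has the standard definition css_j = sqrt((1/ND) Σ_i ((∂y_i/∂b_j) · b_j · ω_i^{1/2})²). Given a model Jacobian, the computation is short; the sketch below assumes the Jacobian has already been obtained (e.g., by perturbation runs):

    ```python
    import numpy as np

    def composite_scaled_sensitivity(jacobian, params, weights):
        """jacobian[i, j] = dy_i/db_j for observation i, parameter j."""
        J = np.asarray(jacobian, float)
        b = np.asarray(params, float)
        w = np.asarray(weights, float)
        scaled = J * b[None, :] * np.sqrt(w)[:, None]
        return np.sqrt(np.mean(scaled ** 2, axis=0))
    ```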

  12. Global sensitivity analysis, probabilistic calibration, and predictive assessment for the data assimilation linked ecosystem carbon model

    DOE PAGES

    Safta, C.; Ricciuto, Daniel M.; Sargsyan, Khachik; ...

    2015-07-01

    In this paper we propose a probabilistic framework for an uncertainty quantification (UQ) study of a carbon cycle model and focus on the comparison between steady-state and transient simulation setups. A global sensitivity analysis (GSA) study indicates the parameters and parameter couplings that are important at different times of the year for quantities of interest (QoIs) obtained with the data assimilation linked ecosystem carbon (DALEC) model. We then employ a Bayesian approach and a statistical model error term to calibrate the parameters of DALEC using net ecosystem exchange (NEE) observations at the Harvard Forest site. The calibration results are employed in the second part of the paper to assess the predictive skill of the model via posterior predictive checks.

  13. EO-1 Hyperion reflectance time series at calibration and validation sites: stability and sensitivity to seasonal dynamics

    Treesearch

    Petya K. Entcheva Campbell; Elizabeth M. Middleton; Kurt J. Thome; Raymond F. Kokaly; Karl Fred Huemmrich; David Lagomasino; Kimberly A. Novick; Nathaniel A. Brunsell

    2013-01-01

    This study evaluated Earth Observing 1 (EO-1) Hyperion reflectance time series at established calibration sites to assess the instrument stability and suitability for monitoring vegetation functional parameters. Our analysis using three pseudo-invariant calibration sites in North America indicated that the reflectance time series are devoid of apparent spectral trends...

  14. Multi-site calibration, validation, and sensitivity analysis of the MIKE SHE Model for a large watershed in northern China

    Treesearch

    S. Wang; Z. Zhang; G. Sun; P. Strauss; J. Guo; Y. Tang; A. Yao

    2012-01-01

    Model calibration is essential for hydrologic modeling of large watersheds in a heterogeneous mountain environment. Little guidance is available for model calibration protocols for distributed models that aim at capturing the spatial variability of hydrologic processes. This study used the physically-based distributed hydrologic model, MIKE SHE, to contrast a lumped...

  15. [Numerical simulation and operation optimization of biological filter].

    PubMed

    Zou, Zong-Sen; Shi, Han-Chang; Chen, Xiang-Qiang; Xie, Xiao-Qing

    2014-12-01

    BioWin software and two sensitivity analysis methods were used to simulate the Denitrification Biological Filter (DNBF) + Biological Aerated Filter (BAF) process in the Yuandang Wastewater Treatment Plant. Based on the BioWin model of the DNBF + BAF process, the operation data of September 2013 were used for sensitivity analysis and model calibration, and the operation data of October 2013 were used for model validation. The results indicated that the calibrated model could accurately simulate practical DNBF + BAF processes, and the most sensitive parameters were those related to biofilm, OHOs and aeration. After validation and calibration, the model was used for process optimization by simulating operation results under different conditions. The results showed that the best operation condition for discharge standard B was: reflux ratio = 50%, ceasing methanol addition, influent C/N = 4.43; while the best operation condition for discharge standard A was: reflux ratio = 50%, influent COD = 155 mg x L(-1) after methanol addition, influent C/N = 5.10.

  16. Addressing Curse of Dimensionality in Sensitivity Analysis: How Can We Handle High-Dimensional Problems?

    NASA Astrophysics Data System (ADS)

    Safaei, S.; Haghnegahdar, A.; Razavi, S.

    2016-12-01

    Complex environmental models are now the primary tool to inform decision makers for the current or future management of environmental resources under climate and environmental change. These complex models often contain a large number of parameters that need to be determined by a computationally intensive calibration procedure. Sensitivity analysis (SA) is a very useful tool that not only allows for understanding the model behavior, but also helps in reducing the number of calibration parameters by identifying unimportant ones. The issue is that most global sensitivity techniques are themselves highly computationally demanding if robust and stable sensitivity metrics are to be generated over the entire model response surface. Recently, a novel global sensitivity analysis method, Variogram Analysis of Response Surfaces (VARS), was introduced that can efficiently provide a comprehensive assessment of global sensitivity using the variogram concept. In this work, we aim to evaluate the effectiveness of this highly efficient GSA method in saving computational burden when applied to systems with an extra-large number of input factors (~100). We use a test function and a hydrological modelling case study to demonstrate the capability of the VARS method in reducing problem dimensionality by identifying important vs. unimportant input factors.

  17. Sewer deterioration modeling with condition data lacking historical records.

    PubMed

    Egger, C; Scheidegger, A; Reichert, P; Maurer, M

    2013-11-01

    Accurate predictions of future conditions of sewer systems are needed for efficient rehabilitation planning. For this purpose, a range of sewer deterioration models has been proposed which can be improved by calibration with observed sewer condition data. However, if datasets lack historical records, calibration requires a combination of deterioration and sewer rehabilitation models, as the current state of the sewer network reflects the combined effect of both processes. Otherwise, physical sewer lifespans are overestimated as pipes in poor condition that were rehabilitated are no longer represented in the dataset. We therefore propose the combination of a sewer deterioration model with a simple rehabilitation model which can be calibrated with datasets lacking historical information. We use Bayesian inference for parameter estimation due to the limited information content of the data and limited identifiability of the model parameters. A sensitivity analysis gives an insight into the model's robustness against the uncertainty of the prior. The analysis reveals that the model results are principally sensitive to the means of the priors of specific model parameters, which should therefore be elicited with care. The importance sampling technique applied for the sensitivity analysis permitted efficient implementation for regional sensitivity analysis with reasonable computational outlay. Application of the combined model with both simulated and real data shows that it effectively compensates for the bias induced by a lack of historical data. Thus, the novel approach makes it possible to calibrate sewer pipe deterioration models even when historical condition records are lacking. Since at least some prior knowledge of the model parameters is available, the strength of Bayesian inference is particularly evident in the case of small datasets. Copyright © 2013 Elsevier Ltd. All rights reserved.

  18. A two-step sensitivity analysis for hydrological signatures in Jinhua River Basin, East China

    NASA Astrophysics Data System (ADS)

    Pan, S.; Fu, G.; Chiang, Y. M.; Xu, Y. P.

    2016-12-01

    Owing to model complexity and the large number of parameters, calibration and sensitivity analysis are difficult processes for distributed hydrological models. In this study, a two-step sensitivity analysis approach is proposed for analyzing the hydrological signatures in Jinhua River Basin, East China, using the Distributed Hydrology-Soil-Vegetation Model (DHSVM). A rough sensitivity analysis is first conducted via Analysis of Variance to obtain a preliminary set of influential parameters, reducing their number from eighty-three to sixteen. Afterwards, the sixteen parameters are further analyzed with a variance-based global sensitivity analysis, i.e., Sobol's method, to achieve robust sensitivity rankings and parameter contributions. Parallel computing is applied to reduce the computational burden of the variance-based sensitivity analysis. The results reveal that only a small number of model parameters are significantly sensitive, including the rain LAI multiplier, lateral conductivity, porosity, field capacity, wilting point of clay loam, understory monthly LAI, understory minimum resistance, and root zone depths of croplands. Finally, several hydrological signatures are used to investigate the performance of DHSVM. Results show that high values of the efficiency criteria did not guarantee good performance on the hydrological signatures. For most samples from Sobol's sensitivity analysis, water yield was simulated very well; however, minimum and maximum annual daily runoffs were underestimated, and most seven-day minimum runoffs were overestimated. Nevertheless, good performance on these three signatures still exists in a number of samples. Analysis of peak flows shows that small and medium floods are simulated well, while large floods are slightly underestimated. The work in this study supports further multi-objective calibration of the DHSVM model and indicates where to improve the reliability and credibility of the model simulation.

  19. Nonlinear bias analysis and correction of microwave temperature sounder observations for FY-3C meteorological satellite

    NASA Astrophysics Data System (ADS)

    Hu, Taiyang; Lv, Rongchuan; Jin, Xu; Li, Hao; Chen, Wenxin

    2018-01-01

    The nonlinear bias analysis and correction of receiving channels in the Chinese FY-3C meteorological satellite Microwave Temperature Sounder (MWTS) is a key technology of data assimilation for satellite radiance data. The thermal-vacuum chamber calibration data acquired from the MWTS can be analyzed to evaluate the instrument performance, including radiometric temperature sensitivity, channel nonlinearity and calibration accuracy. In particular, the nonlinearity parameters due to imperfect square-law detectors are calculated from calibration data and further used to correct the nonlinear bias contributions of the microwave receiving channels. Based upon the operational principles and thermal-vacuum chamber calibration procedures of MWTS, this paper mainly focuses on the nonlinear bias analysis and correction methods for improving the calibration accuracy of this important instrument onboard the FY-3C meteorological satellite, from the perspective of theoretical and experimental studies. Furthermore, a series of original results are presented to demonstrate the feasibility and significance of the methods.
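
    Two-point (hot/cold) calibration assumes a linear detector; an imperfect square-law detector adds a roughly quadratic error that vanishes at the two reference points. One common correction form for microwave radiometers is sketched below as an illustration only; the MWTS-specific formulation may differ:

    ```python
    def brightness_temperature(counts, v_cold, v_hot, t_cold, t_hot, u=0.0):
        """Linear two-point calibration plus a quadratic nonlinearity term
        that is zero at both calibration points and largest mid-scale."""
        t_lin = t_cold + (t_hot - t_cold) * (counts - v_cold) / (v_hot - v_cold)
        return t_lin + u * (t_hot - t_lin) * (t_lin - t_cold)

    # The nonlinearity parameter u is fitted from thermal-vacuum looks at
    # scenes of known temperature between the two calibration points.
    ```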

  20. Digital Correlation Microwave Polarimetry: Analysis and Demonstration

    NASA Technical Reports Server (NTRS)

    Piepmeier, J. R.; Gasiewski, A. J.; Krebs, Carolyn A. (Technical Monitor)

    2000-01-01

    The design, analysis, and demonstration of a digital-correlation microwave polarimeter for use in earth remote sensing are presented. We begin with an analysis of three-level digital correlation and develop the correlator transfer function and radiometric sensitivity. A fifth-order polynomial regression is derived for inverting the digital correlation coefficient into the analog statistic. In addition, the effects of quantizer threshold asymmetry and hysteresis are discussed. A two-look unpolarized calibration scheme is developed for identifying correlation offsets. The developed theory and calibration method are verified using a 10.7 GHz and a 37.0 GHz polarimeter. The polarimeters are based upon 1-GS/s three-level digital correlators and measure the first three Stokes parameters. Through experiment, the radiometric sensitivity is shown to approach the theoretical value derived earlier in the paper, and the two-look unpolarized calibration method compares successfully with results from a polarimetric scheme. Finally, sample data from an aircraft experiment demonstrate that the polarimeter is highly useful for ocean wind-vector measurement.
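
    The inversion step can be reproduced numerically: quantize correlated Gaussian channels to three levels, measure the digital correlation, and fit a polynomial back to the analog statistic. The threshold below is arbitrary, and the exercise is a sketch in the spirit of the paper's fifth-order regression, not its exact procedure.

    ```python
    # Three-level ("1.5-bit") digital correlation: quantize two correlated
    # Gaussian channels to {-1, 0, +1}, compute the normalized digital
    # correlation, then fit a polynomial inverse back to the analog coefficient.
    import numpy as np

    rng = np.random.default_rng(0)
    v0 = 0.6  # quantizer threshold, in units of the channel standard deviation

    def three_level(x, v0):
        return np.where(x > v0, 1, np.where(x < -v0, -1, 0))

    rho_true, rho_digital = np.linspace(-0.95, 0.95, 39), []
    for rho in rho_true:
        n = 200_000
        x = rng.standard_normal(n)
        y = rho * x + np.sqrt(1 - rho**2) * rng.standard_normal(n)
        xq, yq = three_level(x, v0), three_level(y, v0)
        # Normalized digital correlation of the quantized streams
        rho_digital.append(np.mean(xq * yq) /
                           np.sqrt(np.mean(xq**2) * np.mean(yq**2)))

    # Fifth-order polynomial regression from digital to analog correlation
    coeffs = np.polyfit(rho_digital, rho_true, deg=5)
    print(np.poly1d(coeffs))
    ```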

  1. OARE flight maneuvers and calibration measurements on STS-58

    NASA Technical Reports Server (NTRS)

    Blanchard, Robert C.; Nicholson, John Y.; Ritter, James R.; Larman, Kevin T.

    1994-01-01

    The Orbital Acceleration Research Experiment (OARE), which has flown on STS-40, STS-50, and STS-58, contains a three-axis accelerometer with a single, nonpendulous, electrostatically suspended proof mass which can resolve accelerations to the nano-g level. The experiment also contains a full calibration station to permit in situ bias and scale factor calibration. This on-orbit calibration capability eliminates the large uncertainty of ground-based calibrations encountered with accelerometers flown in the past on the orbiter, thus providing absolute acceleration measurement accuracy heretofore unachievable. This is the first time accelerometer scale factor measurements have been performed on orbit. A detailed analysis of the calibration process is given, along with results of the calibration factors from the on-orbit OARE flight measurements on STS-58. In addition, the analysis of OARE flight maneuver data used to validate the scale factor measurements in the sensor's most sensitive range is presented. Estimates of calibration uncertainties are discussed, providing bounds on the STS-58 absolute acceleration measurements for future applications.

  2. The status of BAT detector

    NASA Astrophysics Data System (ADS)

    Lien, Amy; Markwardt, Craig B.; Krimm, Hans Albert; Barthelmy, Scott D.; Cenko, Bradley

    2018-01-01

    We will present the current status of the Swift/BAT detector. In particular, we will report the updated detector gain calibration, the number of enabled detectors, and the global bad time intervals with potential calibration issues. We will also summarize the results of the yearly BAT calibration using the Crab nebula. Finally, we will discuss the effects of these changes in detector status on the BAT survey, such as sensitivity, localization, and spectral analysis.

  3. Parameter estimation and sensitivity analysis in an agent-based model of Leishmania major infection

    PubMed Central

    Jones, Douglas E.; Dorman, Karin S.

    2009-01-01

    Computer models of disease take a systems biology approach toward understanding host-pathogen interactions. In particular, data-driven computer model calibration is the basis for inference of immunological and pathogen parameters, assessment of model validity, and comparison between alternative models of immune or pathogen behavior. In this paper we describe the calibration and analysis of an agent-based model of Leishmania major infection. A model of macrophage loss following uptake of necrotic tissue is proposed to explain macrophage depletion following peak infection. Using Gaussian processes to approximate the computer code, we perform a sensitivity analysis to identify important parameters and to characterize their influence on the simulated infection. The analysis indicates that increasing growth rate can favor or suppress pathogen loads, depending on the infection stage and the pathogen's ability to avoid detection. Subsequent calibration of the model against previously published biological observations suggests that L. major has a relatively slow growth rate and can replicate for an extended period of time before damaging the host cell. PMID:19837088
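
    The surrogate idea can be sketched with scikit-learn: fit a Gaussian process to a handful of (parameter, output) pairs so that sensitivity analysis and calibration query the cheap emulator instead of the expensive agent-based code. The stand-in simulation and parameter names below are hypothetical.

    ```python
    # Gaussian-process emulation of an expensive simulation for use in
    # sensitivity analysis and calibration. The "simulation" is a placeholder.
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, ConstantKernel

    def expensive_simulation(theta):
        """Stand-in for one agent-based run: (growth_rate, detection_avoidance)."""
        g, d = theta
        return np.sin(3 * g) * d + 0.5 * g

    X_train = np.random.default_rng(1).uniform(0, 1, size=(40, 2))
    y_train = np.apply_along_axis(expensive_simulation, 1, X_train)

    gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF([0.2, 0.2]),
                                  normalize_y=True).fit(X_train, y_train)

    # The emulator now gives fast predictions (with uncertainty) at new points.
    mean, std = gp.predict(np.array([[0.3, 0.7]]), return_std=True)
    print(mean, std)
    ```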

  4. Combining EEG and MEG for the Reconstruction of Epileptic Activity Using a Calibrated Realistic Volume Conductor Model

    PubMed Central

    Aydin, Ümit; Vorwerk, Johannes; Küpper, Philipp; Heers, Marcel; Kugel, Harald; Galka, Andreas; Hamid, Laith; Wellmer, Jörg; Kellinghaus, Christoph; Rampp, Stefan; Wolters, Carsten Hermann

    2014-01-01

    To increase the reliability of the non-invasive determination of the irritative zone in presurgical epilepsy diagnosis, we introduce here a new experimental and methodological source analysis pipeline that combines the complementary information in EEG and MEG, and apply it to data from a patient suffering from refractory focal epilepsy. Skull conductivity parameters in a six-compartment finite element head model with brain anisotropy, constructed from individual MRI data, are estimated in a calibration procedure using somatosensory evoked potential (SEP) and field (SEF) data. These data are measured in a single run before acquisition of further runs of spontaneous epileptic activity. Our results show that even for single interictal spikes, volume conduction effects dominate over noise and need to be taken into account for accurate source analysis. While cerebrospinal fluid and brain anisotropy influence both modalities, only EEG is sensitive to skull conductivity, and conductivity calibration significantly reduces the differences between the two modalities, especially in depth localization, emphasizing its importance for combined EEG and MEG source analysis. On the other hand, localization differences due to the distinct sensitivity profiles of EEG and MEG persist. In case of a moderate error in skull conductivity, combined source analysis results can still profit from the different sensitivity profiles of EEG and MEG to accurately determine location, orientation and strength of the underlying sources. Conversely, significant errors in skull modeling are reflected in EEG reconstruction errors and can reduce the goodness of fit to combined datasets. For combined EEG and MEG source analysis, we therefore recommend calibrating skull conductivity using additionally acquired SEP/SEF data. PMID:24671208

  5. Reliably detectable flaw size for NDE methods that use calibration

    NASA Astrophysics Data System (ADS)

    Koshti, Ajay M.

    2017-04-01

    Probability of detection (POD) analysis is used in assessing reliably detectable flaw size in nondestructive evaluation (NDE). MIL-HDBK-1823 and the associated mh1823 POD software give the most common methods of POD analysis. In this paper, POD analysis is applied to an NDE method, such as eddy current testing, where calibration is used. NDE calibration standards have known-size artificial flaws, such as electro-discharge machined (EDM) notches and flat bottom hole (FBH) reflectors, which are used to set instrument sensitivity for detection of real flaws. Real flaws such as cracks and crack-like flaws are desired to be detected using these NDE methods. A reliably detectable crack size is required for safe life analysis of fracture critical parts. Therefore, it is important to correlate signal responses from real flaws with signal responses from artificial flaws used in the calibration process to determine reliably detectable flaw size.

  6. Reliably Detectable Flaw Size for NDE Methods that Use Calibration

    NASA Technical Reports Server (NTRS)

    Koshti, Ajay M.

    2017-01-01

    Probability of detection (POD) analysis is used in assessing reliably detectable flaw size in nondestructive evaluation (NDE). MIL-HDBK-1823 and the associated mh1823 POD software give the most common methods of POD analysis. In this paper, POD analysis is applied to an NDE method, such as eddy current testing, where calibration is used. NDE calibration standards have known-size artificial flaws, such as electro-discharge machined (EDM) notches and flat bottom hole (FBH) reflectors, which are used to set instrument sensitivity for detection of real flaws. Real flaws such as cracks and crack-like flaws are desired to be detected using these NDE methods. A reliably detectable crack size is required for safe life analysis of fracture critical parts. Therefore, it is important to correlate signal responses from real flaws with signal responses from artificial flaws used in the calibration process to determine reliably detectable flaw size.

  7. SWAT model uncertainty analysis, calibration and validation for runoff simulation in the Luvuvhu River catchment, South Africa

    NASA Astrophysics Data System (ADS)

    Thavhana, M. P.; Savage, M. J.; Moeletsi, M. E.

    2018-06-01

    The soil and water assessment tool (SWAT) was calibrated for the Luvuvhu River catchment, South Africa, in order to simulate runoff. The model was executed through QSWAT, which is an interface between SWAT and QGIS. Data from four weather stations and four weir stations evenly distributed over the catchment were used. The model was run for the 33-year period 1983-2015. Sensitivity analysis, calibration and validation were conducted using the sequential uncertainty fitting (SUFI-2) algorithm through its interface with the SWAT calibration and uncertainty procedure (SWAT-CUP). Calibration covered the period 1986 to 2005 and validation the period 2006 to 2015. Six model efficiency measures were used, namely: coefficient of determination (R2), Nash-Sutcliffe efficiency (NSE) index, root mean square error (RMSE)-observations standard deviation ratio (RSR), percent bias (PBIAS), probability (P)-factor and correlation coefficient (R)-factor. Initial results indicated an over-estimation of low flows, with a regression slope of less than 0.7. Twelve model parameters were included in the sensitivity analysis, of which four (ALPHA_BF, CN2, GW_DELAY and SOL_K) were found to be distinguishable and sensitive to streamflow (p < 0.05). The SUFI-2 algorithm, through the interface with SWAT-CUP, was capable of capturing the model's behaviour: calibration gave an R2 of 0.63, NSE of 0.66, RSR of 0.56 and a positive PBIAS of 16.3, while validation gave an R2 of 0.52, NSE of 0.48, RSR of 0.72 and PBIAS of 19.90. The model produced a P-factor of 0.67 and an R-factor of 0.68 during calibration, and 0.69 and 0.53 respectively during validation. Although the performance indicators yielded fair results, the P-factor remained below the recommended 70%. Overall, calibration of the SWAT model yielded acceptable performance, while the validation results were inconclusive; with this caveat, the model can be a useful tool for general water resources assessment, though not for analysing hydrological extremes in the Luvuvhu River catchment.
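
    For reference, three of the efficiency measures named above can be written out in a few lines. The definitions below follow the common hydrological conventions (positive PBIAS indicating model underestimation); individual software packages may vary the sign convention.

    ```python
    # NSE, RSR and PBIAS written out explicitly for observed/simulated series.
    import numpy as np

    def nse(obs, sim):
        obs, sim = np.asarray(obs, float), np.asarray(sim, float)
        return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

    def rsr(obs, sim):
        obs, sim = np.asarray(obs, float), np.asarray(sim, float)
        return np.sqrt(np.sum((obs - sim) ** 2)) / np.sqrt(np.sum((obs - obs.mean()) ** 2))

    def pbias(obs, sim):
        obs, sim = np.asarray(obs, float), np.asarray(sim, float)
        return 100.0 * np.sum(obs - sim) / np.sum(obs)

    obs = [3.1, 4.7, 2.9, 8.2, 5.5]
    sim = [2.8, 5.0, 3.2, 7.1, 5.9]
    print(nse(obs, sim), rsr(obs, sim), pbias(obs, sim))
    ```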

  8. Development of a calibration protocol and identification of the most sensitive parameters for the particulate biofilm models used in biological wastewater treatment.

    PubMed

    Eldyasti, Ahmed; Nakhla, George; Zhu, Jesse

    2012-05-01

    Biofilm models are valuable tools for process engineers to simulate biological wastewater treatment. In order to enhance the use of biofilm models implemented in contemporary simulation software, model calibration is both necessary and helpful. The aim of this work was to develop a calibration protocol for the particulate biofilm model, with the help of a sensitivity analysis of the most important parameters in the biofilm model implemented in BioWin®, and to verify the predictability of the calibration protocol. A case study of a circulating fluidized bed bioreactor (CFBBR) system used for biological nutrient removal (BNR), with a fluidized bed respirometric study of the biofilm stoichiometry and kinetics, was used to verify and validate the proposed calibration protocol. Applying the five stages of the biofilm calibration procedure enhanced the applicability of BioWin®, which was capable of predicting most of the performance parameters with an average percentage error (APE) of 0-20%.

  9. The impact of standard and hard-coded parameters on the hydrologic fluxes in the Noah-MP land surface model

    NASA Astrophysics Data System (ADS)

    Cuntz, Matthias; Mai, Juliane; Samaniego, Luis; Clark, Martyn; Wulfmeyer, Volker; Branch, Oliver; Attinger, Sabine; Thober, Stephan

    2016-09-01

    Land surface models incorporate a large number of process descriptions containing a multitude of parameters. These parameters are typically read from tabulated input files; some of them, however, are fixed numbers in the computer code, which hinders model agility during calibration. Here we identified 139 hard-coded parameters in the code of the Noah land surface model with multiple process options (Noah-MP). We performed a Sobol' global sensitivity analysis of Noah-MP for a specific set of process options, covering 42 of the 71 standard parameters and 75 of the 139 hard-coded parameters. The sensitivities of the hydrologic output fluxes latent heat and total runoff, as well as their component fluxes, were evaluated at 12 catchments within the United States with very different hydrometeorological regimes. Noah-MP's hydrologic output fluxes are sensitive to two thirds of its applicable standard parameters (i.e., Sobol' indices above 1%). The most sensitive parameter is, however, a hard-coded value in the formulation of soil surface resistance for direct evaporation, which has proved oversensitive in other land surface models as well. Surface runoff is sensitive to almost all hard-coded parameters of the snow processes and the meteorological inputs; these sensitivities diminish in total runoff. Assessing these parameters in model calibration would require detailed snow observations or the calculation of hydrologic signatures of the runoff data. Latent heat and total runoff exhibit very similar sensitivities because of their tight coupling via the water balance, so a calibration of Noah-MP against either of these fluxes should give comparable results. Moreover, these fluxes are sensitive to both plant and soil parameters; calibrating only soil parameters, for example, limits the ability to derive realistic model parameters. It is thus recommended to include the most sensitive hard-coded model parameters exposed in this study when calibrating Noah-MP.

  10. Effect of cantilever geometry on the optical lever sensitivities and thermal noise method of the atomic force microscope.

    PubMed

    Sader, John E; Lu, Jianing; Mulvaney, Paul

    2014-11-01

    Calibration of the optical lever sensitivities of atomic force microscope (AFM) cantilevers is especially important for determining the force in AFM measurements. These sensitivities depend critically on the cantilever mode used and are known to differ for static and dynamic measurements. Here, we calculate the ratio of the dynamic and static sensitivities for several common AFM cantilevers, whose shapes vary considerably, and experimentally verify these results. The dynamic-to-static optical lever sensitivity ratio is found to range from 1.09 to 1.41 for the cantilevers studied - in stark contrast to the constant value of 1.09 used widely in current calibration studies. This analysis shows that the accuracy of the thermal noise method for the static spring constant is strongly dependent on cantilever geometry - neglect of these dynamic-to-static factors can induce errors exceeding 100%. We also discuss a simple experimental approach to non-invasively and simultaneously determine the dynamic and static spring constants and optical lever sensitivities of cantilevers of arbitrary shape, which is applicable to all AFM platforms that offer the thermal noise method for spring constant calibration.
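
    A quick back-of-envelope check of the error claim, assuming the usual scaling of the thermal-noise spring constant with the inverse square of the optical lever sensitivity:

    ```python
    # If k ~ 1/InvOLS^2 in the thermal noise method, using the static
    # sensitivity where the dynamic one applies biases k by the ratio squared.
    for ratio in (1.09, 1.41):
        error_pct = (ratio**2 - 1.0) * 100.0
        print(f"dynamic/static ratio {ratio:.2f} -> spring-constant error {error_pct:.0f}%")
    # ratio 1.41 gives ~99%, consistent with "errors exceeding 100%" for
    # cantilevers at the upper end of the reported geometric range.
    ```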

  11. Phantom-based standardization of CT angiography images for spot sign detection.

    PubMed

    Morotti, Andrea; Romero, Javier M; Jessel, Michael J; Hernandez, Andrew M; Vashkevich, Anastasia; Schwab, Kristin; Burns, Joseph D; Shah, Qaisar A; Bergman, Thomas A; Suri, M Fareed K; Ezzeddine, Mustapha; Kirmani, Jawad F; Agarwal, Sachin; Shapshak, Angela Hays; Messe, Steven R; Venkatasubramanian, Chitra; Palmieri, Katherine; Lewandowski, Christopher; Chang, Tiffany R; Chang, Ira; Rose, David Z; Smith, Wade; Hsu, Chung Y; Liu, Chun-Lin; Lien, Li-Ming; Hsiao, Chen-Yu; Iwama, Toru; Afzal, Mohammad Rauf; Cassarly, Christy; Greenberg, Steven M; Martin, Renee' Hebert; Qureshi, Adnan I; Rosand, Jonathan; Boone, John M; Goldstein, Joshua N

    2017-09-01

    The CT angiography (CTA) spot sign is a strong predictor of hematoma expansion in intracerebral hemorrhage (ICH). However, CTA parameters vary widely across centers and may negatively impact spot sign accuracy in predicting ICH expansion. We developed a CT iodine calibration phantom that was scanned at different institutions in a large multicenter ICH clinical trial to determine the effect of image standardization on spot sign detection and performance. A custom phantom containing known concentrations of iodine was designed and scanned using the stroke CT protocol at each institution. Custom software was developed to read the CT volume datasets and calculate the Hounsfield unit as a function of iodine concentration for each phantom scan. CTA images obtained within 8 h from symptom onset were analyzed by two trained readers comparing the calibrated vs. uncalibrated density cutoffs for spot sign identification. ICH expansion was defined as hematoma volume growth >33%. A total of 90 subjects qualified for the study, of whom 17/83 (20.5%) experienced ICH expansion. The number of spot sign positive scans was higher in the calibrated analysis (67.8% vs 38.9%, p < 0.001). All spot signs identified in the non-calibrated analysis remained positive after calibration. Calibrated CTA images had higher sensitivity for ICH expansion (76% vs 52%) but inferior specificity (35% vs 63%) compared with uncalibrated images. Normalization of CTA images using phantom data is a feasible strategy to obtain consistent image quantification for spot sign analysis across different sites and may improve sensitivity for identification of ICH expansion.
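
    The phantom step reduces to a linear fit of Hounsfield units against known iodine concentrations, which can then map a density cutoff between scanners; all numbers below are invented for illustration.

    ```python
    # Fit HU vs iodine concentration for one scanner, then convert a reference
    # scanner's spot-sign HU threshold to this scanner's HU scale.
    import numpy as np

    iodine_mg_ml = np.array([0.0, 2.0, 5.0, 10.0, 15.0])       # phantom inserts
    hu_measured  = np.array([2.0, 55.0, 131.0, 262.0, 391.0])  # this scanner

    slope, intercept = np.polyfit(iodine_mg_ml, hu_measured, 1)
    print(f"HU = {slope:.1f} * [I] + {intercept:.1f}")

    # Assumed reference-scanner calibration and threshold, for illustration only
    reference_threshold_hu, ref_slope, ref_intercept = 120.0, 24.0, 3.0
    iodine_at_threshold = (reference_threshold_hu - ref_intercept) / ref_slope
    print(slope * iodine_at_threshold + intercept)  # equivalent local threshold
    ```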

  12. Automating calibration, sensitivity and uncertainty analysis of complex models using the R package Flexible Modeling Environment (FME): SWAT as an example

    USGS Publications Warehouse

    Wu, Y.; Liu, S.

    2012-01-01

    Parameter optimization and uncertainty issues are a great challenge for the application of large environmental models like the Soil and Water Assessment Tool (SWAT), which is a physically based hydrological model for simulating water and nutrient cycles at the watershed scale. In this study, we present a comprehensive modeling environment for SWAT, including automated calibration, and sensitivity and uncertainty analysis capabilities through integration with the R package Flexible Modeling Environment (FME). To address challenges (e.g., calling the model in R and transferring variables between Fortran and R) in developing such a two-language coupling framework, 1) we converted the Fortran-based SWAT model to an R function (R-SWAT) using the RFortran platform, and alternatively 2) we compiled SWAT as a Dynamic Link Library (DLL). We then wrapped SWAT (via R-SWAT) with FME to perform complex applications including parameter identifiability, inverse modeling, and sensitivity and uncertainty analysis in the R environment. The final R-SWAT-FME framework has the following key functionalities: automatic initialization of R, running Fortran-based SWAT and R commands in parallel, transferring parameters and model output between SWAT and R, and inverse modeling with visualization. To examine this framework and demonstrate how it works, a case study simulating streamflow in the Cedar River Basin in Iowa in the United States was used, and we compared it with the built-in auto-calibration tool of SWAT in parameter optimization. Results indicate that both methods performed well and similarly in searching for a set of optimal parameters. Nonetheless, the R-SWAT-FME is more attractive due to its instant visualization and its potential to take advantage of other R packages (e.g., inverse modeling and statistical graphics). The methods presented in the paper are readily adaptable to other model applications that require capability for automated calibration, and sensitivity and uncertainty analysis.
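
    The pattern the framework implements, wrapping the watershed model as a function of its parameters and handing it to an inverse-modelling routine, can be sketched in a language-neutral way. The toy recession model below stands in for a wrapped SWAT run; it is not the FME API.

    ```python
    # Generic "wrap the model, then calibrate" sketch using SciPy least squares.
    import numpy as np
    from scipy.optimize import least_squares

    t = np.linspace(0, 60, 61)
    observed = 5.0 * np.exp(-0.08 * t) + 0.4       # stand-in streamflow record

    def model(params):
        """Stand-in for a wrapped model run: baseflow recession, two parameters."""
        q0, k = params
        return q0 * np.exp(-k * t) + 0.4

    result = least_squares(lambda p: model(p) - observed, x0=[1.0, 0.5],
                           bounds=([0.0, 0.0], [20.0, 1.0]))
    print(result.x)  # recovered parameters, here ~[5.0, 0.08]
    ```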

  13. Simulations of the HDO and H2O-18 atmospheric cycles using the NASA GISS general circulation model - Sensitivity experiments for present-day conditions

    NASA Technical Reports Server (NTRS)

    Jouzel, Jean; Koster, R. D.; Suozzo, R. J.; Russell, G. L.; White, J. W. C.

    1991-01-01

    Incorporating the full geochemical cycles of stable water isotopes (HDO and H2O-18) into an atmospheric general circulation model (GCM) allows an improved understanding of global delta-D and delta-O-18 distributions and might even allow an analysis of the GCM's hydrological cycle. A detailed sensitivity analysis using the NASA/Goddard Institute for Space Studies (GISS) model II GCM is presented that examines the nature of isotope modeling. The tests indicate that delta-D and delta-O-18 values in nonpolar regions are not strongly sensitive to details in the model precipitation parameterizations. This result, while implying that isotope modeling has limited potential use in the calibration of GCM convection schemes, also suggests that certain necessarily arbitrary aspects of these schemes are adequate for many isotope studies. Deuterium excess, a second-order variable, does show some sensitivity to precipitation parameterization and thus may be more useful for GCM calibration.

  14. Development of IR Contrast Data Analysis Application for Characterizing Delaminations in Graphite-Epoxy Structures

    NASA Technical Reports Server (NTRS)

    Havican, Marie

    2012-01-01

    Objective: Develop an infrared (IR) flash thermography application, based on the use of a calibration standard, for inspecting graphite-epoxy laminated/honeycomb structures. Background: Graphite/epoxy composites (laminated and honeycomb) are widely used on NASA programs. Composite materials are susceptible to impact damage that is not readily detected by visual inspection. IR inspection can provide the required sensitivity to detect surface damage in composites during manufacturing and during service, and IR contrast analysis can characterize the depth, size and gap thickness of impact damage. Benefits/Payoffs: The research provides an empirical method of calibrating the flash thermography response in nondestructive evaluation. A physical calibration standard with artificial flaws, such as flat bottom holes of the desired diameter and depth in a desired material, is used in calibration. The research devises several probability of detection (POD) analysis approaches to enable cost-effective POD studies that meet program requirements.

  15. Design, calibration and application of broad-range optical nanosensors for determining intracellular pH.

    PubMed

    Søndergaard, Rikke V; Henriksen, Jonas R; Andresen, Thomas L

    2014-12-01

    Particle-based nanosensors offer a tool for determining the pH in the endosomal-lysosomal system of living cells. Measurements providing absolute values of pH have so far been restricted by the limited sensitivity range of nanosensors, calibration challenges and the complexity of image analysis. This protocol describes the design and application of a polyacrylamide-based nanosensor (∼60 nm) that covalently incorporates two pH-sensitive fluorophores, fluorescein (FS) and Oregon Green (OG), to broaden the sensitivity range of the sensor (pH 3.1-7.0), together with the pH-insensitive fluorophore rhodamine as a reference. The nanosensors are spontaneously taken up via endocytosis and directed to the lysosomes, where dynamic changes in pH can be measured with live-cell confocal microscopy. The most important focus areas of the protocol are the choice of pH-sensitive fluorophores, the design of calibration buffers, the determination of the effective range and, especially, the description of how to critically evaluate results. The entire procedure typically takes 2-3 weeks.
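
    The calibration step can be sketched as a ratiometric curve fit: the pH-sensitive-to-rhodamine intensity ratio is fitted against buffer pH, and the fit is inverted to convert imaged ratios into pH. The sigmoidal form and all values below are assumptions for illustration.

    ```python
    # Ratiometric pH calibration: fit a Boltzmann-type sigmoid, then invert it.
    import numpy as np
    from scipy.optimize import curve_fit

    def boltzmann(pH, R_min, R_max, pKa, slope):
        return R_min + (R_max - R_min) / (1.0 + np.exp((pKa - pH) / slope))

    pH_buffers = np.array([3.0, 3.5, 4.0, 4.5, 5.0, 5.5, 6.0, 6.5, 7.0])
    ratio      = np.array([0.21, 0.25, 0.33, 0.46, 0.62, 0.78, 0.90, 0.97, 1.01])

    popt, _ = curve_fit(boltzmann, pH_buffers, ratio, p0=[0.2, 1.0, 5.0, 0.5])

    def ratio_to_pH(R):
        R_min, R_max, pKa, slope = popt
        return pKa - slope * np.log((R_max - R_min) / (R - R_min) - 1.0)

    print(ratio_to_pH(0.55))  # a measured lysosomal ratio -> estimated pH
    ```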

  16. Simulating soil moisture change in a semiarid rangeland watershed with a process-based water-balance model

    Treesearch

    Howard Evan Canfield; Vicente L. Lopes

    2000-01-01

    A process-based, simulation model for evaporation, soil water and streamflow (BROOK903) was used to estimate soil moisture change on a semiarid rangeland watershed in southeastern Arizona. A sensitivity analysis was performed to select parameters affecting ET and soil moisture for calibration. Automatic parameter calibration was performed using a procedure based on a...

  17. Simulation and sensitivity analysis of carbon storage and fluxes in the New Jersey Pinelands

    Treesearch

    Zewei Miao; Richard G. Lathrop; Ming Xu; Inga P. La Puma; Kenneth L. Clark; John Hom; Nicholas Skowronski; Steve Van Tuyl

    2011-01-01

    A major challenge in modeling the carbon dynamics of vegetation communities is the proper parameterization and calibration of eco-physiological variables that are critical determinants of the ecosystem process-based model behavior. In this study, we improved and calibrated a biochemical process-based WxBGC model by using in situ AmeriFlux eddy covariance tower...

  18. Nitrous oxide emissions from cropland: a procedure for calibrating the DayCent biogeochemical model using inverse modelling

    USGS Publications Warehouse

    Rafique, Rashad; Fienen, Michael N.; Parkin, Timothy B.; Anex, Robert P.

    2013-01-01

    DayCent is a biogeochemical model of intermediate complexity widely used to simulate greenhouse gases (GHG), soil organic carbon and nutrients in crop, grassland, forest and savannah ecosystems. Although this model has been applied to a wide range of ecosystems, it is still typically parameterized through a traditional “trial and error” approach and has not been calibrated using statistical inverse modelling (i.e. algorithmic parameter estimation). The aim of this study is to establish and demonstrate a procedure for calibration of DayCent to improve estimation of GHG emissions. We coupled DayCent with the parameter estimation (PEST) software for inverse modelling. The PEST software can be used for calibration through regularized inversion as well as for model sensitivity and uncertainty analysis. The DayCent model was analysed and calibrated using N2O flux data collected over 2 years at the Iowa State University Agronomy and Agricultural Engineering Research Farms, Boone, IA. Crop year 2003 data were used for model calibration and 2004 data for validation. The optimization of DayCent model parameters using PEST significantly reduced model residuals relative to the default DayCent parameter values. Parameter estimation improved the model performance by reducing the sum of weighted squared residual differences between measured and modelled outputs by up to 67%. For the calibration period, simulation with the default model parameter values underestimated the mean daily N2O flux by 98%; after parameter estimation, the model underestimated the mean daily fluxes by 35%. During the validation period, the calibrated model reduced the sum of weighted squared residuals by 20% relative to the default simulation. The sensitivity analysis performed provides important insights into the model structure, offering guidance for model improvement.

  19. Integrating satellite actual evapotranspiration patterns into distributed model parametrization and evaluation for a mesoscale catchment

    NASA Astrophysics Data System (ADS)

    Demirel, M. C.; Mai, J.; Stisen, S.; Mendiguren González, G.; Koch, J.; Samaniego, L. E.

    2016-12-01

    Distributed hydrologic models are traditionally calibrated and evaluated against observations of streamflow. Spatially distributed remote sensing observations offer a great opportunity to enhance spatial model calibration schemes. To that end, it is important to identify, ahead of satellite-based model calibration, the model parameters that can change spatial patterns. Our study rests on two main pillars: first, we use spatial sensitivity analysis to identify the key parameters controlling the spatial distribution of actual evapotranspiration (AET); second, we investigate the potential benefits of incorporating spatial patterns from MODIS data to calibrate the mesoscale Hydrologic Model (mHM). This distributed model is selected as it allows a change in the spatial distribution of key soil parameters through the calibration of pedo-transfer function (PTF) parameters, and it includes options for using fully distributed daily Leaf Area Index (LAI) directly as input. In addition, the simulated AET can be estimated at a spatial resolution suitable for comparison to the spatial patterns observed in MODIS data. We introduce a new dynamic scaling function employing remotely sensed vegetation to downscale coarse reference evapotranspiration. In total, 17 of the 47 mHM parameters are identified using both sequential screening and Latin hypercube one-at-a-time sampling methods. The spatial patterns are found to be sensitive to the vegetation parameters, whereas streamflow dynamics are sensitive to the PTF parameters. The results of multi-objective model calibration show that calibrating mHM against observed streamflow alone improves the streamflow simulations but does not reduce the spatial errors in AET. We further examine calibration using only spatial objective functions, measuring the association between observed and simulated AET maps, and a case combining spatial and streamflow metrics.

  20. JUPITER PROJECT - JOINT UNIVERSAL PARAMETER IDENTIFICATION AND EVALUATION OF RELIABILITY

    EPA Science Inventory

    The JUPITER (Joint Universal Parameter IdenTification and Evaluation of Reliability) project builds on the technology of two widely used codes for sensitivity analysis, data assessment, calibration, and uncertainty analysis of environmental models: PEST and UCODE.

  1. JUPITER PROJECT - MERGING INVERSE PROBLEM FORMULATION TECHNOLOGIES

    EPA Science Inventory

    The JUPITER (Joint Universal Parameter IdenTification and Evaluation of Reliability) project seeks to enhance and build on the technology and momentum behind two of the most popular sensitivity analysis, data assessment, calibration, and uncertainty analysis programs used in envi...

  2. Application of Temperature Sensitivities During Iterative Strain-Gage Balance Calibration Analysis

    NASA Technical Reports Server (NTRS)

    Ulbrich, N.

    2011-01-01

    A new method is discussed that may be used to correct wind tunnel strain-gage balance load predictions for the influence of residual temperature effects at the location of the strain-gages. The method was designed for the iterative analysis technique that is used in the aerospace testing community to predict balance loads from strain-gage outputs during a wind tunnel test. The new method implicitly applies temperature corrections to the gage outputs during the load iteration process. Therefore, it can use uncorrected gage outputs directly as input for the load calculations. The new method is applied in several steps. First, balance calibration data is analyzed in the usual manner assuming that the balance temperature was kept constant during the calibration. Then, the temperature difference relative to the calibration temperature is introduced as a new independent variable for each strain-gage output. Therefore, sensors must exist near the strain-gages so that the required temperature differences can be measured during the wind tunnel test. In addition, the format of the regression coefficient matrix needs to be extended so that it can support the new independent variables. In the next step, the extended regression coefficient matrix of the original calibration data is modified by using the manufacturer specified temperature sensitivity of each strain-gage as the regression coefficient of the corresponding temperature difference variable. Finally, the modified regression coefficient matrix is converted to a data reduction matrix that the iterative analysis technique needs for the calculation of balance loads. Original calibration data and modified check load data of NASA's MC60D balance are used to illustrate the new method.
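
    The explicit equivalent of this implicit matrix extension, removing the temperature effect from each raw gage output using manufacturer-specified sensitivities, can be sketched as follows; all numbers are illustrative, not data from the MC60D balance.

    ```python
    # Explicit temperature correction of strain-gage outputs before the usual
    # load regression; the paper's method folds this into the regression matrix.
    import numpy as np

    temp_sensitivity = np.array([0.12, -0.08, 0.05])   # assumed, output units per K
    raw_outputs      = np.array([101.3, -54.2, 12.9])  # uncorrected gage outputs
    delta_T          = 4.5                             # K above calibration temperature

    corrected = raw_outputs - temp_sensitivity * delta_T
    print(corrected)  # inputs to the standard load-iteration matrix
    ```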

  3. Calibration of gyro G-sensitivity coefficients with FOG monitoring on precision centrifuge

    NASA Astrophysics Data System (ADS)

    Lu, Jiazhen; Yang, Yanqiang; Li, Baoguo; Liu, Ming

    2017-07-01

    The advantages of mechanical gyros, such as high precision, endurance and reliability, make them widely used as the core parts of inertial navigation systems (INS) in the fields of aeronautics, astronautics and underground exploration. In a high-g environment the accuracy of gyros is degraded, so calibration and compensation of the gyro G-sensitivity coefficients are essential when the INS operates under such conditions. A precision centrifuge with a counter-rotating platform is the typical equipment for calibrating the gyro, as it can generate large centripetal acceleration while keeping the angular rate close to zero; however, its performance is seriously restricted by angular perturbations in the high-speed rotating process. To reduce the dependence on the precision of the centrifuge and counter-rotating platform, an effective calibration method for the gyro G-sensitivity coefficients under fiber-optic gyroscope (FOG) monitoring is proposed herein. The FOG can efficiently compensate spindle error and improve the anti-interference ability. Harmonic analysis is performed for data processing. Simulations show that the gyro G-sensitivity coefficients can be efficiently estimated, recovering up to 99% of the true value, and compensated using a lookup table or fitting method. Repeated tests indicate that the G-sensitivity coefficients can be correctly calibrated even when the angular rate accuracy of the precision centrifuge is as low as 0.01%. Verification tests demonstrate that the attitude errors can be decreased from 0.36° to 0.08° in 200 s. The proposed measuring technology is generally applicable in engineering, as it reduces the accuracy requirements for the centrifuge and the environment.

  4. Modelling suspended-sediment propagation and related heavy metal contamination in floodplains: a parameter sensitivity analysis

    NASA Astrophysics Data System (ADS)

    Hostache, R.; Hissler, C.; Matgen, P.; Guignard, C.; Bates, P.

    2014-09-01

    Fine sediments represent an important vector of pollutant diffusion in rivers. When deposited in floodplains and riverbeds, they can be responsible for soil pollution. In this context, this paper proposes a modelling exercise aimed at predicting the transport and diffusion of fine sediments and dissolved pollutants. The model is based upon the Telemac hydro-informatic system (a dynamic coupling of Telemac-2D and Sisyphe). As empirical and semi-empirical parameters need to be calibrated for such a modelling exercise, a sensitivity analysis is proposed. An innovative point in this study is the assessment of the usefulness of dissolved trace metal contamination information for model calibration. Moreover, to support the modelling exercise, an extensive database was set up during two flood events. It includes water surface elevation records, discharge measurements and geochemistry data, such as time series of dissolved/particulate contaminants and suspended-sediment concentrations. The most sensitive parameters were found to be the hydraulic friction coefficients and the sediment particle settling velocity in water. It was also found that model calibration did not benefit from the dissolved trace metal contamination information. Using the two monitored hydrological events for calibration and validation, it was found that the model is able to satisfactorily predict suspended-sediment and dissolved-pollutant transport in the river channel. In addition, a qualitative comparison between simulated sediment deposition in the floodplain and a soil contamination map shows that the preferential deposition zones identified by the model are realistic.

  5. SENSITIVITY ANALYSIS OF THE USEPA WINS PM 2.5 SEPARATOR

    EPA Science Inventory

    Factors affecting the performance of the US EPA WINS PM2.5 separator have been systematically evaluated. In conjunction with the separator's laboratory-calibrated penetration curve, analysis of the governing equation that describes conventional impactor performance was used to ...

  6. Calibration of a Distributed Hydrological Model using Remote Sensing Evapotranspiration data in the Semi-Arid Punjab Region of Pakistan

    NASA Astrophysics Data System (ADS)

    Becker, R.; Usman, M.

    2017-12-01

    A SWAT (Soil and Water Assessment Tool) model is applied in the semi-arid Punjab region of Pakistan. The physically based hydrological model is set up to simulate hydrological processes and water resources demands under future land use, climate change and irrigation management scenarios. In order to run the model successfully, detailed focus is laid on the calibration procedure. The study deals with the following calibration issues: (i) lack of reliable calibration/validation data, (ii) difficulty of accurately modelling a highly managed system with a physically based hydrological model, and (iii) use of alternative and spatially distributed data sets for model calibration. In our study area, field observations are rare, and the entirely human-controlled irrigation system renders central calibration parameters (e.g., runoff/curve number) unsuitable, as it cannot be assumed that they represent the natural behaviour of the hydrological system. Principal hydrological processes can, however, still be inferred from evapotranspiration (ET). Usman et al. (2015) derived satellite-based monthly ET data for our study area with SEBAL (Surface Energy Balance Algorithm) and created a reliable ET data set, which we use in this study to calibrate our SWAT model. The initial SWAT model performance is evaluated with respect to the SEBAL results using correlation coefficients, RMSE, Nash-Sutcliffe efficiencies and mean differences. Particular focus is laid on the spatial patterns, investigating the potential of a spatially differentiated parameterization instead of spatially uniform calibration data. A sensitivity analysis reveals the parameters most sensitive to changes in ET, which are then selected for the calibration process. Using the SEBAL ET product, we calibrate the SWAT model for the period 2005-2006 with a dynamically dimensioned global search algorithm that minimizes RMSE. The model improvement after the calibration procedure is then evaluated, based on the previously chosen evaluation criteria, for the period 2007-2008. The study reveals the sensitivity of SWAT model parameters to changes in ET in a semi-arid, human-controlled system, and the potential of calibrating those parameters using satellite-derived ET data.

  7. Can we calibrate simultaneously groundwater recharge and aquifer hydrodynamic parameters ?

    NASA Astrophysics Data System (ADS)

    Hassane Maina, Fadji; Ackerer, Philippe; Bildstein, Olivier

    2017-04-01

    By groundwater model calibration we mean here the fitting of measured piezometric heads by estimating the hydrodynamic parameters (storage term and hydraulic conductivity) and the recharge. It is traditionally recommended to avoid simultaneous calibration of groundwater recharge and flow parameters because of the correlation between them: from a physical point of view, little recharge associated with low hydraulic conductivity can produce piezometric changes very similar to those of higher recharge and higher hydraulic conductivity. While this correlation holds under steady-state conditions, we assume that it is much weaker under transient conditions, because recharge varies in time and the parameters do not. Moreover, under many climatic conditions recharge is negligible during summer, due to reduced precipitation and increased evaporation and transpiration by the vegetation cover. We analyze our hypothesis through global sensitivity analysis (GSA) in conjunction with the polynomial chaos expansion (PCE) methodology. We perform GSA by calculating the Sobol indices, which provide a variance-based measure of the effects of the uncertain parameters (storage and hydraulic conductivity) and recharge on the piezometric heads computed by the flow model. The choice of PCE has two benefits: (i) it provides the global sensitivity indices in a straightforward manner, and (ii) the PCE can serve as a surrogate model for the calibration of parameters. The coefficients of the PCE are computed by probabilistic collocation. We perform the GSA on simplified real conditions taken from an existing groundwater model of a subdomain of the Upper-Rhine aquifer (geometry, boundary conditions, climatic data). The GSA shows that the simultaneous calibration of recharge and flow parameters is possible if the calibration is performed over at least one year. It also provides the valuable information of sensitivity versus time, which depends on the aquifer inertia and the climatic conditions: groundwater level variations during recharge (increase) are sensitive to the storage coefficient, whereas variations after recharge (decrease) are sensitive to the hydraulic conductivity. Model calibration performed on synthetic data sets shows that the parameters and recharge are estimated quite accurately.
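
    A hedged sketch of the PCE-based workflow with the chaospy library is given below: build an expansion of a cheap stand-in model by point collocation and read the Sobol indices off it. The parameter ranges and the toy head response are illustrative, not the study's.

    ```python
    # PCE surrogate plus variance-based sensitivity indices via chaospy.
    import numpy as np
    import chaospy

    # Priors for storage coefficient, hydraulic conductivity and a recharge factor
    dist = chaospy.J(chaospy.Uniform(1e-5, 1e-3),
                     chaospy.Uniform(1e-5, 1e-2),
                     chaospy.Uniform(0.5, 1.5))

    def head_model(s, k, r):
        """Stand-in for the flow model's piezometric head response."""
        return 10.0 * r / (k * 100.0 + 1.0) + 5.0 * np.log10(1.0 / s)

    samples = dist.sample(300, rule="sobol")          # collocation points
    evals = head_model(*samples)

    expansion = chaospy.generate_expansion(3, dist)   # degree-3 PCE basis
    surrogate = chaospy.fit_regression(expansion, samples, evals)

    print("first-order Sobol:", chaospy.Sens_m(surrogate, dist))
    print("total Sobol:      ", chaospy.Sens_t(surrogate, dist))
    ```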

  8. Sensitivity analysis and calibration of a dynamic physically based slope stability model

    NASA Astrophysics Data System (ADS)

    Zieher, Thomas; Rutzinger, Martin; Schneider-Muntau, Barbara; Perzl, Frank; Leidinger, David; Formayer, Herbert; Geitner, Clemens

    2017-06-01

    Physically based modelling of slope stability on a catchment scale is still a challenging task. When applying a physically based model on such a scale (1 : 10 000 to 1 : 50 000), parameters with a high impact on the model result should be calibrated to account for (i) the spatial variability of parameter values, (ii) shortcomings of the selected model, (iii) uncertainties of laboratory tests and field measurements or (iv) parameters that cannot be derived experimentally or measured in the field (e.g. calibration constants). While systematic parameter calibration is a common task in hydrological modelling, it is rarely done with physically based slope stability models. In the present study a dynamic, physically based, coupled hydrological-geomechanical slope stability model is calibrated based on a limited number of laboratory tests and a detailed multitemporal shallow landslide inventory covering two landslide-triggering rainfall events in the Laternser valley, Vorarlberg (Austria). Sensitive parameters are identified based on a local one-at-a-time sensitivity analysis. These parameters (hydraulic conductivity, specific storage, angle of internal friction for effective stress, cohesion for effective stress) are systematically sampled and calibrated for a landslide-triggering rainfall event in August 2005. The identified model ensemble, comprising the 25 behavioural model runs with the highest proportion of correctly predicted landslides and non-landslides, is then validated with another landslide-triggering rainfall event in May 1999. The identified model ensemble correctly predicts the location and the assumed triggering time of 73.0 % of the observed landslides triggered in August 2005 and 91.5 % of the observed landslides triggered in May 1999. Results of the model ensemble driven with raised precipitation input reveal a slight increase in areas potentially affected by slope failure. At the same time, the peak run-off increases more markedly, suggesting that precipitation intensities during the investigated landslide-triggering rainfall events were already close to or above the soil's infiltration capacity.
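
    The local one-at-a-time screening used here follows a simple pattern, sketched below with a placeholder model and parameter set: perturb each parameter around a reference value and record the relative change in the output.

    ```python
    # One-at-a-time (OAT) local sensitivity loop; the model function is a toy
    # stand-in for a coupled hydrological-geomechanical run (factor of safety).
    import numpy as np

    reference = {"conductivity": 1e-4, "specific_storage": 1e-5,
                 "friction_angle": 30.0, "cohesion": 5.0}

    def model(p):
        return (np.tan(np.radians(p["friction_angle"])) + 0.02 * p["cohesion"]) \
               / (1.0 + 50.0 * p["conductivity"] / (p["specific_storage"] + 1e-12))

    base = model(reference)
    for name in reference:
        for factor in (0.9, 1.1):          # +/- 10% perturbations
            p = dict(reference)
            p[name] = reference[name] * factor
            change = (model(p) - base) / base * 100.0
            print(f"{name:17s} x{factor:.1f}: {change:+6.2f}% change in output")
    ```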

  9. Dual-angle, self-calibrating Thomson scattering measurements in RFX-MOD

    NASA Astrophysics Data System (ADS)

    Giudicotti, L.; Pasqualotto, R.; Fassina, A.

    2014-11-01

    In the multipoint Thomson scattering (TS) system of the RFX-MOD experiment, the signals from a few spatial positions can be observed simultaneously under two different scattering angles. In addition, the detection system uses optical multiplexing, with signal delays in fiber-optic cables of different lengths, so that the two sets of TS signals can be observed by the same polychromator. Owing to the dependence of the TS spectrum on the scattering angle, it was thus possible to implement self-calibrating TS measurements in which the electron temperature Te, the electron density ne and the relative calibration coefficients Ci of the spectral channel sensitivities are simultaneously determined by a suitable analysis of the two sets of TS data collected at the two angles. The analysis has shown that, in spite of the small difference between the spectra obtained at the two angles, reliable values of the relative calibration coefficients can be determined from good-S/N dual-angle spectra recorded over a few tens of plasma shots. This suggests that in RFX-MOD the calibration of the entire set of TS polychromators by means of the similar dual-laser (Nd:YAG/Nd:YLF) TS technique should be feasible.

  10. Calibration of groundwater vulnerability mapping using the generalized reduced gradient method.

    PubMed

    Elçi, Alper

    2017-12-01

    Groundwater vulnerability assessment studies are essential in water resources management. Overlay-and-index methods such as DRASTIC are widely used for mapping groundwater vulnerability; however, these methods suffer mainly from a subjective selection of model parameters. The objective of this study is to introduce a calibration procedure that results in a more accurate assessment of groundwater vulnerability. The improvement of the assessment is formulated as a parameter optimization problem, using an objective function based on the correlation between actual groundwater contamination and vulnerability index values. The non-linear optimization problem is solved with the generalized-reduced-gradient (GRG) method, a numerical optimization algorithm. To demonstrate the applicability of the procedure, a vulnerability map for the Tahtali stream basin is calibrated using nitrate concentration data. The calibration procedure is easy to implement and aims to maximize the correlation between observed pollutant concentrations and groundwater vulnerability index values. The influence of each vulnerability parameter on the vulnerability index is assessed by performing a single-parameter sensitivity analysis, the results of which show that all factors are effective on the final vulnerability index. Calibration of the vulnerability map improves the correlation between index values and measured nitrate concentrations by 19%; the regression coefficient increases from 0.280 to 0.485. It is evident that the spatial distribution and the proportions of vulnerability class areas are significantly altered by the calibration process. Although the calibration method is demonstrated on the DRASTIC model, the approach is not specific to a certain model and can easily be applied to other overlay-and-index methods.
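
    The optimization can be sketched with SciPy, using a bounded gradient-based solver as a stand-in for GRG: the factor weights are adjusted to maximize the Pearson correlation between the index and observed nitrate. The data and weight bounds below are synthetic.

    ```python
    # Calibrate overlay-and-index weights by maximizing correlation with
    # observed contamination; SLSQP stands in for the GRG algorithm.
    import numpy as np
    from scipy.optimize import minimize
    from scipy.stats import pearsonr

    rng = np.random.default_rng(7)
    ratings = rng.uniform(1, 10, size=(60, 7))    # 7 factor ratings per site
    nitrate = ratings @ np.array([5, 4, 3, 2, 1, 5, 3]) + rng.normal(0, 5, 60)

    def neg_corr(weights):
        index = ratings @ weights                 # overlay-and-index score
        return -pearsonr(index, nitrate)[0]

    res = minimize(neg_corr, x0=np.full(7, 3.0),
                   method="SLSQP", bounds=[(1.0, 5.0)] * 7)
    print(res.x, -res.fun)                        # calibrated weights, correlation
    ```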

  11. Calibration of groundwater vulnerability mapping using the generalized reduced gradient method

    NASA Astrophysics Data System (ADS)

    Elçi, Alper

    2017-12-01

    Groundwater vulnerability assessment studies are essential in water resources management. Overlay-and-index methods such as DRASTIC are widely used for mapping groundwater vulnerability; however, these methods suffer mainly from a subjective selection of model parameters. The objective of this study is to introduce a calibration procedure that results in a more accurate assessment of groundwater vulnerability. The improvement of the assessment is formulated as a parameter optimization problem, using an objective function based on the correlation between actual groundwater contamination and vulnerability index values. The non-linear optimization problem is solved with the generalized-reduced-gradient (GRG) method, a numerical optimization algorithm. To demonstrate the applicability of the procedure, a vulnerability map for the Tahtali stream basin is calibrated using nitrate concentration data. The calibration procedure is easy to implement and aims to maximize the correlation between observed pollutant concentrations and groundwater vulnerability index values. The influence of each vulnerability parameter on the vulnerability index is assessed by performing a single-parameter sensitivity analysis, the results of which show that all factors are effective on the final vulnerability index. Calibration of the vulnerability map improves the correlation between index values and measured nitrate concentrations by 19%; the regression coefficient increases from 0.280 to 0.485. It is evident that the spatial distribution and the proportions of vulnerability class areas are significantly altered by the calibration process. Although the calibration method is demonstrated on the DRASTIC model, the approach is not specific to a certain model and can easily be applied to other overlay-and-index methods.

  12. Definition and sensitivity of the conceptual MORDOR rainfall-runoff model parameters using different multi-criteria calibration strategies

    NASA Astrophysics Data System (ADS)

    Garavaglia, F.; Seyve, E.; Gottardi, F.; Le Lay, M.; Gailhard, J.; Garçon, R.

    2014-12-01

    MORDOR is a conceptual hydrological model extensively used in Électricité de France (EDF, the French electric utility company) operational applications: (i) hydrological forecasting, (ii) flood risk assessment, (iii) water balance and (iv) climate change studies. MORDOR is a lumped, reservoir-type, elevation-based model with hourly or daily areal rainfall and air temperature as the driving input data. The principal hydrological processes represented are evapotranspiration, direct and indirect runoff, groundwater, snow accumulation and melt, and routing. The model has been used intensively at EDF for more than 20 years, in particular for modeling French mountainous watersheds. For parameter calibration we propose and test alternative multi-criteria techniques based on two specific approaches: automatic calibration using single-objective functions, and a priori parameter calibration founded on hydrological watershed features. The automatic calibration approach uses single-objective functions, based on the Kling-Gupta efficiency, to quantify the agreement between simulated and observed runoff, focusing on four different runoff samples: (i) the time series sample, (ii) the annual hydrological regime, (iii) monthly cumulative distribution functions and (iv) recession sequences. The primary purpose of this study is to analyze the definition and sensitivity of MORDOR parameters by testing different calibration techniques in order to (i) simplify the model structure, (ii) increase the calibration-validation performance of the model and (iii) reduce the equifinality problem of the calibration process. We propose an alternative calibration strategy that reaches these goals. The analysis is illustrated by calibrating the MORDOR model to daily data for 50 watersheds located in French mountainous regions.
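
    For reference, the Kling-Gupta efficiency underlying these single-objective functions, in its standard 2009 form (correlation r, variability ratio alpha, bias ratio beta; KGE = 1 is a perfect fit):

    ```python
    # Kling-Gupta efficiency in the standard Gupta et al. (2009) form.
    import numpy as np

    def kge(obs, sim):
        obs, sim = np.asarray(obs, float), np.asarray(sim, float)
        r = np.corrcoef(obs, sim)[0, 1]
        alpha = sim.std() / obs.std()    # variability ratio
        beta = sim.mean() / obs.mean()   # bias ratio
        return 1.0 - np.sqrt((r - 1) ** 2 + (alpha - 1) ** 2 + (beta - 1) ** 2)

    print(kge([3.1, 4.7, 2.9, 8.2, 5.5], [2.8, 5.0, 3.2, 7.1, 5.9]))
    ```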

  13. Power Pattern Sensitivity to Calibration Errors and Mutual Coupling in Linear Arrays through Circular Interval Arithmetics

    PubMed Central

    Anselmi, Nicola; Salucci, Marco; Rocca, Paolo; Massa, Andrea

    2016-01-01

    The sensitivity to both calibration errors and mutual coupling effects of the power pattern radiated by a linear array is addressed. Starting from the knowledge of the nominal excitations of the array elements and the maximum uncertainty on their amplitudes, the bounds of the pattern deviations from the ideal one are analytically derived by exploiting the Circular Interval Analysis (CIA). A set of representative numerical results is reported and discussed to assess the effectiveness and the reliability of the proposed approach also in comparison with state-of-the-art methods and full-wave simulations. PMID:27258274
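
    A simpler, conservative version of the interval idea can be sketched directly: since the array factor is a sum of phasors, an amplitude tolerance of +/- eps per element bounds the pattern deviation by the sum of the tolerances. This is a worst-case sketch, not the paper's tighter circular-interval formulation.

    ```python
    # Worst-case power-pattern bounds for a uniform linear array whose
    # excitation amplitudes are each uncertain by +/- eps.
    import numpy as np

    N, d, eps = 16, 0.5, 0.05              # elements, spacing (wavelengths), tolerance
    a = np.ones(N)                         # nominal uniform excitations
    theta = np.linspace(-np.pi / 2, np.pi / 2, 721)
    n = np.arange(N)

    phase = 2j * np.pi * d * np.outer(np.sin(theta), n)
    af_nominal = np.abs(np.exp(phase) @ a)  # |AF| for the nominal excitations

    spread = eps * N                        # sum of per-element amplitude bounds
    upper = (af_nominal + spread) ** 2      # power-pattern upper bound
    lower = np.maximum(af_nominal - spread, 0.0) ** 2
    print(upper.max(), lower.max())
    ```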

  14. A Novel Protocol for Model Calibration in Biological Wastewater Treatment

    PubMed Central

    Zhu, Ao; Guo, Jianhua; Ni, Bing-Jie; Wang, Shuying; Yang, Qing; Peng, Yongzhen

    2015-01-01

    Activated sludge models (ASMs) have been widely used for process design, operation and optimization in wastewater treatment plants. However, it is still a challenge to achieve an efficient calibration for reliable application using conventional approaches. We therefore propose a novel calibration protocol, the Numerical Optimal Approaching Procedure (NOAP), for the systematic calibration of ASMs. The NOAP consists of three key steps in an iterative scheme: i) global sensitivity analysis of the factors for factor fixing; ii) pseudo-global parameter correlation analysis for detection of non-identifiable factors; and iii) formation of a parameter subset through estimation using a genetic algorithm. The validity and applicability are confirmed using experimental data obtained from two independent wastewater treatment systems, including a sequencing batch reactor and a continuous stirred-tank reactor. The results indicate that the NOAP can effectively determine the optimal parameter subset and successfully perform model calibration and validation for these two different systems. The proposed NOAP is expected to be used for automatic calibration of ASMs and is potentially applicable to other ordinary differential equation models. PMID:25682959
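
    Step (iii) can be sketched with SciPy's differential evolution, a closely related evolutionary method standing in here for the genetic algorithm; the two-parameter decay model below is a toy, not an actual activated sludge model.

    ```python
    # Evolutionary parameter estimation of a toy substrate-decay model.
    import numpy as np
    from scipy.optimize import differential_evolution

    t = np.linspace(0, 10, 50)
    data = 100.0 * np.exp(-0.35 * t) + np.random.default_rng(3).normal(0, 1.5, t.size)

    def sse(params):
        s0, k = params
        return np.sum((s0 * np.exp(-k * t) - data) ** 2)

    result = differential_evolution(sse, bounds=[(10.0, 300.0), (0.01, 2.0)], seed=3)
    print(result.x)  # estimated (initial substrate, decay rate), ~(100, 0.35)
    ```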

  15. Fixed-head star tracker magnitude calibration on the solar maximum mission

    NASA Technical Reports Server (NTRS)

    Pitone, Daniel S.; Twambly, B. J.; Eudell, A. H.; Roberts, D. A.

    1990-01-01

    The sensitivity of the fixed-head star trackers (FHSTs) on the Solar Maximum Mission (SMM) is defined as the accuracy of the electronic response to the magnitude of a star in the sensor field-of-view, which is measured as intensity in volts. To identify stars during attitude determination and control processes, a transformation equation is required to convert from star intensity in volts to units of magnitude and vice versa. To maintain high accuracy standards, this transformation is calibrated frequently. A sensitivity index is defined as the observed intensity in volts divided by the predicted intensity in volts; thus, the sensitivity index is a measure of the accuracy of the calibration. Using the sensitivity index, analysis is presented that compares the strengths and weaknesses of two possible transformation equations. The effect on the transformation equations of variables, such as position in the sensor field-of-view, star color, and star magnitude, is investigated. In addition, results are given that evaluate the aging process of each sensor. The results in this work can be used by future missions as an aid to employing data from star cameras as effectively as possible.
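
    A hedged sketch of such a transformation and of the sensitivity index: a Pogson-style logarithmic form is assumed here purely for illustration, since the paper's actual candidate equations are not reproduced in this record.

    ```python
    # Assumed volts-to-magnitude transformation and the sensitivity index
    # (observed volts / predicted volts). The zero point m0 is hypothetical.
    import numpy as np

    def volts_to_magnitude(v, m0=6.0):
        """Assumed form: magnitude decreases by 2.5 per decade of intensity."""
        return m0 - 2.5 * np.log10(v)

    def predicted_volts(magnitude, m0=6.0):
        return 10.0 ** ((m0 - magnitude) / 2.5)

    observed_v, catalog_mag = 1.32, 5.7      # made-up tracker observation
    sensitivity_index = observed_v / predicted_volts(catalog_mag)
    print(volts_to_magnitude(observed_v), sensitivity_index)
    ```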

  16. In Situ Determination of Trace Elements in Fish Otoliths by Laser Ablation Double Focusing Sector Field Inductively Coupled Plasma Mass Spectrometry Using a Solution Standard Addition Calibration Method

    NASA Astrophysics Data System (ADS)

    Chen, Z.; Jones, C. M.

    2002-05-01

    Microchemistry of fish otoliths (fish ear bones) is a very useful tool for monitoring aquatic environments and fish migration. However, determination of the elemental composition of fish otoliths by ICP-MS has been limited to either analysis of dissolved sample solutions or measurement of a limited number of trace elements by laser ablation (LA)-ICP-MS, due to low sensitivity, lack of available calibration standards, and the complexity of polyatomic molecular interferences. In this study, a method was developed for in situ determination of trace elements in fish otoliths by laser ablation double focusing sector field ultra-high-sensitivity Finnigan Element 2 ICP-MS using a solution standard addition calibration method. Due to the lack of matrix-matched solid calibration standards, sixteen trace elements (Na, Mg, P, Cr, Mn, Fe, Ni, Cu, Rb, Sr, Y, Cd, La, Ba, Pb and U) were determined using a solution standard calibration with Ca as an internal standard. Flexibility, easy preparation and stable signals are the advantages of using solution calibration standards. In order to resolve polyatomic molecular interferences, medium resolution (M/ΔM > 4000) was used for some elements (Na, Mg, P, Cr, Mn, Fe, Ni, and Cu). Both external calibration and standard addition quantification strategies are compared and discussed. Precision, accuracy, and limits of detection are presented.
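
    In its simplest form, standard addition quantification reduces to a linear fit of signal against added concentration, extrapolated to zero signal; the sketch below shows that arithmetic with invented numbers (a real workflow would normalize to the Ca internal standard and handle the solution/ablation geometry).

```python
import numpy as np

# Standard-addition quantification: spike known analyte amounts into
# aliquots, fit signal vs. added concentration, and extrapolate to the
# x-intercept. All numbers below are illustrative.
added = np.array([0.0, 5.0, 10.0, 20.0])          # added conc., ng/g
signal = np.array([1520., 2410., 3295., 5080.])   # Ca-normalized counts

slope, intercept = np.polyfit(added, signal, 1)
native_conc = intercept / slope    # concentration in the unspiked sample
print(f"native concentration ~ {native_conc:.1f} ng/g")
```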

  17. The sensitivity of EGRET to gamma ray polarization

    NASA Astrophysics Data System (ADS)

    Mattox, John R.

    1990-05-01

    A Monte Carlo simulation shows that EGRET (Energetic Gamma-Ray Experimental Telescope) does not have sufficient sensitivity to detect even 100 percent polarized gamma-rays. This is confirmed by analysis of calibration data. A Monte Carlo study shows that the sensitivity of EGRET to polarization peaks around 100 MeV. However, more than 10^5 gamma-ray events with 100 percent polarization would be required for a 3 sigma significance detection - more than available from calibration, and probably more than will result from a single source during flight. A drift chamber gamma ray telescope under development (Hunter and Cuddapah 1989) will offer better sensitivity to polarization. The lateral position uncertainty will be improved by an order of magnitude. Also, if pair production occurs in the drift chamber gas (xenon at 2 bar) instead of tantalum foils, the effects of multiple Coulomb scattering will be reduced.

  18. Calibration test of the temperature and strain sensitivity coefficient in regional reference grating method

    NASA Astrophysics Data System (ADS)

    Wu, Jing; Huang, Junbing; Wu, Hanping; Gu, Hongcan; Tang, Bo

    2014-12-01

    In order to verify the validity of the regional reference grating method for solving the strain/temperature cross-sensitivity problem in an actual ship structural health monitoring system, and to meet engineering requirements, national standard measurement equipment was used to calibrate the temperature sensitivity coefficient of the selected FBG temperature sensor and the strain sensitivity coefficient of the FBG strain sensor. The thermal expansion sensitivity coefficient of the shipbuilding steel was calibrated with a water bath method. The calibration results show that the temperature sensitivity coefficient of the FBG temperature sensor is 28.16 pm/°C within -10~30°C with a linearity greater than 0.999; the strain sensitivity coefficient of the FBG strain sensor is 1.32 pm/μɛ within -2900~2900 μɛ with a linearity close to 1; and the thermal expansion sensitivity coefficient of the shipbuilding steel is 23.438 pm/°C within 30~90°C with a linearity greater than 0.998. Finally, the calibration parameters were used for temperature compensation in the actual ship structural health monitoring system. The results show that the temperature compensation is effective and the calibration parameters meet the engineering requirements, providing an important reference for the wide engineering use of fiber Bragg grating sensors.
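
    Using the calibrated coefficients reported above, a temperature-compensated strain readout can be sketched as follows. The compensation formula is an assumed reference-grating scheme (subtract the thermal-apparent shift inferred from the temperature FBG), not necessarily the paper's exact expression.

```python
# Temperature compensation with the calibrated coefficients from the
# abstract (28.16 pm/degC, 1.32 pm/microstrain, 23.438 pm/degC). The
# compensation formula is a minimal sketch of a reference-grating scheme.
K_T = 28.16        # temperature FBG sensitivity, pm/degC
K_EPS = 1.32       # strain FBG sensitivity, pm/microstrain
K_STEEL = 23.438   # thermal-apparent shift on ship steel, pm/degC

def mechanical_strain(dlambda_strain_pm, dlambda_temp_pm):
    """Strain (microstrain) after removing the thermal-apparent shift."""
    d_temp = dlambda_temp_pm / K_T               # temperature change, degC
    return (dlambda_strain_pm - K_STEEL * d_temp) / K_EPS

# Example: strain FBG shifts +500 pm while the reference FBG shows +140.8 pm
# (a 5 degC rise); the thermal part (117.2 pm) is subtracted out.
print(f"{mechanical_strain(500.0, 140.8):.1f} microstrain")
```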

  19. A calibration protocol of a one-dimensional moving bed bioreactor (MBBR) dynamic model for nitrogen removal.

    PubMed

    Barry, U; Choubert, J-M; Canler, J-P; Héduit, A; Robin, L; Lessard, P

    2012-01-01

    This work suggests a procedure to correctly calibrate the parameters of a one-dimensional MBBR dynamic model in nitrification treatment. The study deals with the MBBR configuration with two reactors in series, one for carbon treatment and the other for nitrogen treatment. Because of the influence of the first reactor on the second one, the approach needs a specific calibration strategy. Firstly, a comparison between measured values and simulated ones obtained with default parameters has been carried out. Simulated values of filtered COD, NH(4)-N and dissolved oxygen are underestimated and nitrates are overestimated compared with observed data. Thus, nitrifying rate and oxygen transfer into the biofilm are overvalued. Secondly, a sensitivity analysis was carried out for parameters and for COD fractionation. It revealed three classes of sensitive parameters: physical, diffusional and kinetic. Then a calibration protocol of the MBBR dynamic model was proposed. It was successfully tested on data recorded at a pilot-scale plant and a calibrated set of values was obtained for four parameters: the maximum biofilm thickness, the detachment rate, the maximum autotrophic growth rate and the oxygen transfer rate.

  20. Spectroradiometric calibration of the thematic mapper and multispectral scanner system

    NASA Technical Reports Server (NTRS)

    Slater, Philip N.; Palmer, James M.

    1986-01-01

    A list of personnel who have contributed to the program is provided. Sixteen publications and presentations are also listed. A preprint summarizing five in-flight absolute radiometric calibrations of the solar reflective bands of the LANDSAT-5 Thematic Mapper is presented. The 23 band calibrations made on the five dates show a 2.5% RMS variation from the mean as a percentage of the mean. A preprint is also presented that discusses the reflectance-based results of the above preprint. It proceeds to analyze and present results of a second, independent calibration method based on radiance measurements from a helicopter. Radiative transfer through the atmosphere, model atmospheres, the calibration methodology used at White Sands, and the results of a sensitivity analysis of the reflectance-based approach are also discussed.

  1. Deep-space navigation with differenced data types. Part 3: An expanded information content and sensitivity analysis

    NASA Technical Reports Server (NTRS)

    Estefan, J. A.; Thurman, S. W.

    1992-01-01

    An approximate six-parameter analytic model for Earth-based differenced range measurements is presented and is used to derive a representative analytic approximation for differenced Doppler measurements. The analytical models are tasked to investigate the ability of these data types to estimate spacecraft geocentric angular motion, Deep Space Network station oscillator (clock/frequency) offsets, and signal-path calibration errors over a period of a few days, in the presence of systematic station location and transmission media calibration errors. Quantitative results indicate that a few differenced Doppler plus ranging passes yield angular position estimates with a precision on the order of 0.1 to 0.4 micro-rad, and angular rate precision on the order of 10 to 25 x 10(exp -12) rad/sec, assuming no a priori information on the coordinate parameters. Sensitivity analyses suggest that troposphere zenith delay calibration error is the dominant systematic error source in most of the tracking scenarios investigated; as expected, the differenced Doppler data were found to be much more sensitive to troposphere calibration errors than differenced range. By comparison, results computed using wideband and narrowband (delta) VLBI under similar circumstances yielded angular precisions of 0.07 to 0.4 micro-rad, and angular rate precisions of 0.5 to 1.0 x 10(exp -12) rad/sec.

  2. Deep-space navigation with differenced data types. Part 3: An expanded information content and sensitivity analysis

    NASA Technical Reports Server (NTRS)

    Estefan, J. A.; Thurman, S. W.

    1992-01-01

    An approximate six-parameter analytic model for Earth-based differenced range measurements is presented and is used to derive a representative analytic approximation for differenced Doppler measurements. The analytical models are tasked to investigate the ability of these data types to estimate spacecraft geocentric angular motion, Deep Space Network station oscillator (clock/frequency) offsets, and signal-path calibration errors over a period of a few days, in the presence of systematic station location and transmission media calibration errors. Quantitative results indicate that a few differenced Doppler plus ranging passes yield angular position estimates with a precision on the order of 0.1 to 0.4 microrad, and angular rate precision on the order of 10 to 25 x 10(exp -12) rad/sec, assuming no a priori information on the coordinate parameters. Sensitivity analyses suggest that troposphere zenith delay calibration error is the dominant systematic error source in most of the tracking scenarios investigated; as expected, the differenced Doppler data were found to be much more sensitive to troposphere calibration errors than differenced range. By comparison, results computed using wideband and narrowband (delta)VLBI under similar circumstances yielded angular precisions of 0.07 to 0.4 microrad, and angular rate precisions of 0.5 to 1.0 x 10(exp -12) rad/sec.

  3. Diagnosing the impact of alternative calibration strategies on coupled hydrologic models

    NASA Astrophysics Data System (ADS)

    Smith, T. J.; Perera, C.; Corrigan, C.

    2017-12-01

    Hydrologic models represent a significant tool for understanding, predicting, and responding to the impacts of water on society and of society on water resources and, as such, are used extensively in water resources planning and management. Given this important role, the validity and fidelity of hydrologic models is imperative. While extensive attention has been paid to improving hydrologic models through better process representation, better parameter estimation, and better uncertainty quantification, significant challenges remain. In this study, we explore a number of competing model calibration scenarios for simple, coupled snowmelt-runoff models to better understand the sensitivity and variability of parameterizations and their impact on model performance, robustness, fidelity, and transferability. Our analysis highlights the sensitivity of coupled snowmelt-runoff model parameterizations to alterations in calibration approach, underscores the concept of information content in hydrologic modeling, and provides insight into potential strategies for improving model robustness and fidelity.

  4. Development of the Burst and Transient Source Experiment (BATSE)

    NASA Technical Reports Server (NTRS)

    Horack, J. M.

    1991-01-01

    The Burst and Transient Source Experiment (BATSE), one of four instruments on the Gamma Ray Observatory, consists of eight identical detector modules mounted on the corners of the spacecraft. Developed at MSFC, BATSE is the most sensitive gamma ray burst detector flown to date. Details of the assembly and test phase of the flight hardware development are presented. Results and descriptions of calibrations performed at MSFC, TRW, and KSC are documented extensively. Alongside each set of calibration results, the reader is provided with the means to access the raw calibration data for further review or analysis.

  5. A multimethod Global Sensitivity Analysis to aid the calibration of geomechanical models via time-lapse seismic data

    NASA Astrophysics Data System (ADS)

    Price, D. C.; Angus, D. A.; Garcia, A.; Fisher, Q. J.; Parsons, S.; Kato, J.

    2018-03-01

    Time-lapse seismic attributes are used extensively in the history matching of production simulator models. However, although proven to contain information regarding production induced stress change, they are typically only loosely (i.e. qualitatively) used to calibrate geomechanical models. In this study we conduct a multimethod Global Sensitivity Analysis (GSA) to assess the feasibility of, and to aid, the quantitative calibration of geomechanical models via near-offset time-lapse seismic data, specifically the calibration of the mechanical properties of the overburden. Via the GSA, we analyse the near-offset overburden seismic traveltimes from over 4000 perturbations of a Finite Element (FE) geomechanical model of a typical High Pressure High Temperature (HPHT) reservoir in the North Sea. We find that, out of an initially large set of material properties, the near-offset overburden traveltimes are primarily affected by Young's modulus and the effective stress (i.e. Biot) coefficient. The unexpected significance of the Biot coefficient highlights the importance of modelling fluid flow and pore pressure outside of the reservoir. The FE model is complex and highly nonlinear. Multiple combinations of model parameters can yield equally plausible model realizations. Consequently, numerical calibration via a large number of random model perturbations is unfeasible. However, the significant differences in traveltime results suggest that more sophisticated calibration methods could potentially be feasible for finding numerous suitable solutions. The results of the time-varying GSA demonstrate how acquiring multiple vintages of time-lapse seismic data can be advantageous. However, they also suggest that significant overburden near-offset seismic time-shifts, useful for model calibration, may take up to 3 yrs after the start of production to manifest. Due to the nonlinearity of the model behaviour, similar uncertainty in the reservoir mechanical properties appears to influence overburden traveltime to a much greater extent. Therefore, reservoir properties must be known to a suitable degree of accuracy before the calibration of the overburden can be considered.
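
    A hedged sketch of the variance-based side of such a multimethod GSA is shown below, using SALib's Saltelli sampling and Sobol' analysis on a stand-in traveltime response. In the study each evaluation is a full finite-element run; the two-parameter function, names, and bounds here are invented.

```python
import numpy as np
from SALib.sample.saltelli import sample as saltelli_sample
from SALib.analyze.sobol import analyze as sobol_analyze

# Variance-based (Sobol') GSA on a toy traveltime proxy. In the study each
# "model run" is a full FE geomechanical simulation followed by near-offset
# traveltime extraction; this stand-in keeps the sketch self-contained.
problem = {"num_vars": 2, "names": ["youngs_modulus", "biot_coeff"],
           "bounds": [[5e9, 40e9], [0.4, 1.0]]}

def traveltime_shift(x):
    E, alpha = x[:, 0], x[:, 1]
    return 1e3 * alpha / np.sqrt(E)     # illustrative nonlinear response

X = saltelli_sample(problem, 1024)      # N a power of two, as recommended
Si = sobol_analyze(problem, traveltime_shift(X))
print(dict(zip(problem["names"], np.round(Si["ST"], 3))))  # total-order indices
```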

  6. Summarising and validating test accuracy results across multiple studies for use in clinical practice.

    PubMed

    Riley, Richard D; Ahmed, Ikhlaaq; Debray, Thomas P A; Willis, Brian H; Noordzij, J Pieter; Higgins, Julian P T; Deeks, Jonathan J

    2015-06-15

    Following a meta-analysis of test accuracy studies, the translation of summary results into clinical practice is potentially problematic. The sensitivity, specificity and positive (PPV) and negative (NPV) predictive values of a test may differ substantially from the average meta-analysis findings, because of heterogeneity. Clinicians thus need more guidance: given the meta-analysis, is a test likely to be useful in new populations, and if so, how should test results inform the probability of existing disease (for a diagnostic test) or future adverse outcome (for a prognostic test)? We propose ways to address this. Firstly, following a meta-analysis, we suggest deriving prediction intervals and probability statements about the potential accuracy of a test in a new population. Secondly, we suggest strategies on how clinicians should derive post-test probabilities (PPV and NPV) in a new population based on existing meta-analysis results and propose a cross-validation approach for examining and comparing their calibration performance. Application is made to two clinical examples. In the first example, the joint probability that both sensitivity and specificity will be >80% in a new population is just 0.19, because of a low sensitivity. However, the summary PPV of 0.97 is high and calibrates well in new populations, with a probability of 0.78 that the true PPV will be at least 0.95. In the second example, post-test probabilities calibrate better when tailored to the prevalence in the new population, with cross-validation revealing a probability of 0.97 that the observed NPV will be within 10% of the predicted NPV. © 2015 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.
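
    The tailoring of post-test probabilities to a new population's prevalence follows directly from Bayes' theorem; the sketch below shows that step with illustrative accuracy values (the paper's prediction intervals and cross-validation machinery are not reproduced here).

```python
# Post-test probabilities tailored to a new population's prevalence,
# via Bayes' theorem from summary sensitivity and specificity.
def post_test_probabilities(sens, spec, prevalence):
    ppv = sens * prevalence / (sens * prevalence + (1 - spec) * (1 - prevalence))
    npv = spec * (1 - prevalence) / (spec * (1 - prevalence) + (1 - sens) * prevalence)
    return ppv, npv

for prev in (0.05, 0.20, 0.50):   # illustrative prevalences
    ppv, npv = post_test_probabilities(sens=0.85, spec=0.90, prevalence=prev)
    print(f"prevalence {prev:.2f}: PPV={ppv:.3f}, NPV={npv:.3f}")
```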

  7. A novel Bayesian approach to accounting for uncertainty in fMRI-derived estimates of cerebral oxygen metabolism fluctuations

    PubMed Central

    Simon, Aaron B.; Dubowitz, David J.; Blockley, Nicholas P.; Buxton, Richard B.

    2016-01-01

    Calibrated blood oxygenation level dependent (BOLD) imaging is a multimodal functional MRI technique designed to estimate changes in cerebral oxygen metabolism from measured changes in cerebral blood flow and the BOLD signal. This technique addresses fundamental ambiguities associated with quantitative BOLD signal analysis; however, its dependence on biophysical modeling creates uncertainty in the resulting oxygen metabolism estimates. In this work, we developed a Bayesian approach to estimating the oxygen metabolism response to a neural stimulus and used it to examine the uncertainty that arises in calibrated BOLD estimation due to the presence of unmeasured model parameters. We applied our approach to estimate the CMRO2 response to a visual task using the traditional hypercapnia calibration experiment as well as to estimate the metabolic response to both a visual task and hypercapnia using the measurement of baseline apparent R2′ as a calibration technique. Further, in order to examine the effects of cerebral spinal fluid (CSF) signal contamination on the measurement of apparent R2′, we examined the effects of measuring this parameter with and without CSF-nulling. We found that the two calibration techniques provided consistent estimates of the metabolic response on average, with a median R2′-based estimate of the metabolic response to CO2 of 1.4%, and R2′- and hypercapnia-calibrated estimates of the visual response of 27% and 24%, respectively. However, these estimates were sensitive to different sources of estimation uncertainty. The R2′-calibrated estimate was highly sensitive to CSF contamination and to uncertainty in unmeasured model parameters describing flow-volume coupling, capillary bed characteristics, and the iso-susceptibility saturation of blood. The hypercapnia-calibrated estimate was relatively insensitive to these parameters but highly sensitive to the assumed metabolic response to CO2. PMID:26790354

  8. A novel Bayesian approach to accounting for uncertainty in fMRI-derived estimates of cerebral oxygen metabolism fluctuations.

    PubMed

    Simon, Aaron B; Dubowitz, David J; Blockley, Nicholas P; Buxton, Richard B

    2016-04-01

    Calibrated blood oxygenation level dependent (BOLD) imaging is a multimodal functional MRI technique designed to estimate changes in cerebral oxygen metabolism from measured changes in cerebral blood flow and the BOLD signal. This technique addresses fundamental ambiguities associated with quantitative BOLD signal analysis; however, its dependence on biophysical modeling creates uncertainty in the resulting oxygen metabolism estimates. In this work, we developed a Bayesian approach to estimating the oxygen metabolism response to a neural stimulus and used it to examine the uncertainty that arises in calibrated BOLD estimation due to the presence of unmeasured model parameters. We applied our approach to estimate the CMRO2 response to a visual task using the traditional hypercapnia calibration experiment as well as to estimate the metabolic response to both a visual task and hypercapnia using the measurement of baseline apparent R2' as a calibration technique. Further, in order to examine the effects of cerebral spinal fluid (CSF) signal contamination on the measurement of apparent R2', we examined the effects of measuring this parameter with and without CSF-nulling. We found that the two calibration techniques provided consistent estimates of the metabolic response on average, with a median R2'-based estimate of the metabolic response to CO2 of 1.4%, and R2'- and hypercapnia-calibrated estimates of the visual response of 27% and 24%, respectively. However, these estimates were sensitive to different sources of estimation uncertainty. The R2'-calibrated estimate was highly sensitive to CSF contamination and to uncertainty in unmeasured model parameters describing flow-volume coupling, capillary bed characteristics, and the iso-susceptibility saturation of blood. The hypercapnia-calibrated estimate was relatively insensitive to these parameters but highly sensitive to the assumed metabolic response to CO2. Copyright © 2016 Elsevier Inc. All rights reserved.
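
    Calibrated BOLD estimates of CMRO2 are commonly obtained by inverting a Davis-type signal model; the sketch below shows that deterministic core with typical alpha and beta values and invented task responses. It is not the authors' Bayesian estimator, which instead propagates uncertainty in such unmeasured model parameters.

```python
# Point-estimate inversion of a Davis-type calibrated-BOLD model:
#   dBOLD/BOLD0 = M * (1 - f**(alpha - beta) * r**beta),
# where f = CBF/CBF0 and r = CMRO2/CMRO2_0. Typical alpha/beta values and
# the task responses below are illustrative assumptions.
def cmro2_ratio(d_bold, f, M, alpha=0.38, beta=1.5):
    return f ** ((beta - alpha) / beta) * (1.0 - d_bold / M) ** (1.0 / beta)

# Illustrative visual-task values: 1.5% BOLD change, 60% flow increase,
# calibration parameter M = 0.08.
r = cmro2_ratio(d_bold=0.015, f=1.60, M=0.08)
print(f"CMRO2 change ~ {100 * (r - 1):.0f}%")   # ~24% with these inputs
```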

  9. Algorithm for automatic analysis of electro-oculographic data

    PubMed Central

    2013-01-01

    Background: Large amounts of electro-oculographic (EOG) data, recorded during electroencephalographic (EEG) measurements, go underutilized. We present an automatic, auto-calibrating algorithm that allows efficient analysis of such data sets. Methods: The auto-calibration is based on automatic threshold value estimation. Amplitude threshold values for saccades and blinks are determined based on features in the recorded signal. The performance of the developed algorithm was tested by analyzing 4854 saccades and 213 blinks recorded in two different conditions: a task where the eye movements were controlled (saccade task) and a task with free viewing (multitask). The results were compared with results from a video-oculography (VOG) device and manually scored blinks. Results: The algorithm achieved 93% detection sensitivity for blinks with 4% false positive rate. The detection sensitivity for horizontal saccades was between 98% and 100%, and for oblique saccades between 95% and 100%. The classification sensitivity for horizontal and large oblique saccades (10 deg) was larger than 89%, and for vertical saccades larger than 82%. The duration and peak velocities of the detected horizontal saccades were similar to those in the literature. In the multitask measurement the detection sensitivity for saccades was 97% with a 6% false positive rate. Conclusion: The developed algorithm enables reliable analysis of EOG data recorded both during EEG and as a separate metric. PMID:24160372

  10. Algorithm for automatic analysis of electro-oculographic data.

    PubMed

    Pettersson, Kati; Jagadeesan, Sharman; Lukander, Kristian; Henelius, Andreas; Haeggström, Edward; Müller, Kiti

    2013-10-25

    Large amounts of electro-oculographic (EOG) data, recorded during electroencephalographic (EEG) measurements, go underutilized. We present an automatic, auto-calibrating algorithm that allows efficient analysis of such data sets. The auto-calibration is based on automatic threshold value estimation. Amplitude threshold values for saccades and blinks are determined based on features in the recorded signal. The performance of the developed algorithm was tested by analyzing 4854 saccades and 213 blinks recorded in two different conditions: a task where the eye movements were controlled (saccade task) and a task with free viewing (multitask). The results were compared with results from a video-oculography (VOG) device and manually scored blinks. The algorithm achieved 93% detection sensitivity for blinks with 4% false positive rate. The detection sensitivity for horizontal saccades was between 98% and 100%, and for oblique saccades between 95% and 100%. The classification sensitivity for horizontal and large oblique saccades (10 deg) was larger than 89%, and for vertical saccades larger than 82%. The duration and peak velocities of the detected horizontal saccades were similar to those in the literature. In the multitask measurement the detection sensitivity for saccades was 97% with a 6% false positive rate. The developed algorithm enables reliable analysis of EOG data recorded both during EEG and as a separate metric.
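
    The abstract does not spell out the threshold-estimation features, so the sketch below shows one way an auto-calibrating velocity threshold for saccade onsets can be derived from the recording's own noise statistics. Sampling rate, noise level, and the MAD-based rule are assumptions for illustration, not the published algorithm.

```python
import numpy as np

# Auto-calibrating saccade detection on a horizontal EOG channel: the
# velocity threshold is derived from the recording's own noise statistics
# (median absolute deviation), so no manual tuning is needed.
def detect_saccades(eog, fs, k=6.0):
    vel = np.gradient(eog) * fs                      # velocity, uV/s
    mad = np.median(np.abs(vel - np.median(vel)))    # robust noise scale
    thresh = k * 1.4826 * mad                        # auto-derived threshold
    above = np.abs(vel) > thresh
    onsets = np.flatnonzero(above[1:] & ~above[:-1]) + 1
    return onsets, thresh

fs = 500.0                                # sampling rate, Hz (assumed)
t = np.arange(0, 2, 1 / fs)
rng = np.random.default_rng(0)
eog = rng.normal(0, 1.0, t.size)          # baseline noise, uV (assumed)
eog[400:410] += np.linspace(0, 120, 10)   # synthetic 120-uV saccade
eog[410:] += 120.0

onsets, thr = detect_saccades(eog, fs)
print(f"threshold {thr:.0f} uV/s, onset(s) at {onsets / fs} s")
```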

  11. Ring Laser Gyro G-Sensitive Misalignment Calibration in Linear Vibration Environments.

    PubMed

    Wang, Lin; Wu, Wenqi; Li, Geng; Pan, Xianfei; Yu, Ruihang

    2018-02-16

    The ring laser gyro (RLG) dither axis will bend and exhibit errors due to the specific forces acting on the instrument, which are known as g-sensitive misalignments of the gyros. The g-sensitive misalignments of the RLG triad will cause severe attitude error in vibration or maneuver environments where large-amplitude specific forces and angular rates coexist. However, g-sensitive misalignments are usually ignored when calibrating the strapdown inertial navigation system (SINS). This paper proposes a novel method to calibrate the g-sensitive misalignments of an RLG triad in linear vibration environments. With the SINS attached to a linear vibration bench through outer rubber dampers, rocking of the SINS occurs when linear vibration is applied; linear vibration environments can thus be created to simulate the harsh environment during aircraft flight. By analyzing the mathematical model of g-sensitive misalignments, the relationship between attitude errors and specific forces as well as angular rates is established, whereby a calibration scheme with approximately optimal observations is designed. Vibration experiments are conducted to calibrate the g-sensitive misalignments of the RLG triad. Vibration tests also show that the SINS velocity error decreases significantly after g-sensitive misalignment compensation.

  12. Dual-angle, self-calibrating Thomson scattering measurements in RFX-MOD

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Giudicotti, L.; Pasqualotto, R.

    2014-11-15

    In the multipoint Thomson scattering (TS) system of the RFX-MOD experiment the signals from a few spatial positions can be observed simultaneously under two different scattering angles. In addition, the detection system uses optical multiplexing by signal delays in fiber optic cables of different length, so that the two sets of TS signals can be observed by the same polychromator. Owing to the dependence of the TS spectrum on the scattering angle, it was then possible to implement self-calibrating TS measurements in which the electron temperature Te, the electron density ne and the relative calibration coefficients of spectral channel sensitivity Ci were simultaneously determined by a suitable analysis of the two sets of TS data collected at the two angles. The analysis has shown that, in spite of the small difference in the spectra obtained at the two angles, reliable values of the relative calibration coefficients can be determined by the analysis of good S/N dual-angle spectra recorded in a few tens of plasma shots. This analysis suggests that in RFX-MOD the calibration of the entire set of TS polychromators by means of a similar, dual-laser (Nd:YAG/Nd:YLF) TS technique should be feasible.

  13. A New Raman Water Vapor Lidar Calibration Technique and Measurements in the Vicinity of Hurricane Bonnie

    NASA Technical Reports Server (NTRS)

    Evans, Keith D.; Demoz, Belay B.; Cadirola, Martin P.; Melfi, S. H.; Whiteman, David N.; Schwemmer, Geary K.; Starr, David OC.; Schmidlin, F. J.; Feltz, Wayne

    2000-01-01

    The NASA/Goddard Space Flight Center Scanning Raman Lidar has made measurements of water vapor and aerosols for almost ten years. Calibration of the water vapor data has typically been performed by comparison with another water vapor sensor such as radiosondes. We present a new method for water vapor calibration that only requires low clouds and surface pressure and temperature measurements. A sensitivity study was performed, and the cloud base algorithm agrees with the radiosonde calibration to within 10-15%. Knowledge of the true atmospheric lapse rate is required to obtain more accurate cloud base temperatures. Analysis of water vapor and aerosol measurements made in the vicinity of Hurricane Bonnie is discussed.

  14. Are quantitative sensitivity analysis methods always reliable?

    NASA Astrophysics Data System (ADS)

    Huang, X.

    2016-12-01

    Physical parameterizations developed to represent subgrid-scale physical processes include various uncertain parameters, leading to large uncertainties in today's Earth System Models (ESMs). Sensitivity Analysis (SA) is an efficient approach to quantitatively determine how the uncertainty of the evaluation metric can be apportioned to each parameter. SA can also identify the most influential parameters and thereby reduce the dimensionality of the parametric space. In previous studies, SA-based approaches such as Sobol' and Fourier amplitude sensitivity testing (FAST) divide the parameters into sensitive and insensitive groups; the first group is retained while the other is eliminated from further study. However, these approaches ignore the disappearance of the interaction effects between the retained parameters and the eliminated ones, which are also part of the total sensitivity indices. Therefore, the wrong parameters might be identified as sensitive by these traditional SA approaches and tools. In this study, we propose a dynamic global sensitivity analysis method (DGSAM), which iteratively removes the least important parameter until only two parameters are left. We use CLM-CASA, a global terrestrial model, as an example to verify our findings with different sample sizes ranging from 7000 to 280000. The results show that DGSAM is able to identify more influential parameters, which is confirmed by parameter calibration experiments using four popular optimization methods. For example, optimization using the top three parameters filtered by DGSAM achieved a substantial improvement of 10% over Sobol'. Furthermore, the computational cost of calibration was reduced to 1/6 of the original. In the future, it is necessary to explore alternative SA methods emphasizing parameter interactions.
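
    A hedged sketch of the backward-elimination idea follows: recompute sensitivities after each removal so that interaction effects lost with an eliminated parameter are reflected in later rankings. The index estimator here is a crude variance proxy standing in for a proper Sobol' estimator, and the toy model is invented.

```python
import numpy as np

# Backward elimination in the spirit of DGSAM: after each removal the
# indices are recomputed on the remaining active parameters.
def sensitivity_proxy(model, active, dim, rng, n=2048):
    base = np.full((n, dim), 0.5)                 # eliminated params fixed
    base[:, active] = rng.uniform(size=(n, len(active)))
    scores = []
    for j in active:
        pert = base.copy()
        pert[:, j] = rng.uniform(size=n)          # re-draw one parameter
        scores.append(np.var(model(pert) - model(base)))
    return np.asarray(scores)

def dgsam(model, names, min_keep=2, seed=0):
    rng = np.random.default_rng(seed)
    active = list(range(len(names)))
    while len(active) > min_keep:
        s = sensitivity_proxy(model, active, len(names), rng)
        drop = active[int(np.argmin(s))]          # least influential parameter
        active.remove(drop)
        print(f"removed {names[drop]}, remaining {[names[i] for i in active]}")
    return [names[i] for i in active]

# Toy model: one dominant, one moderate, one weak, one inert parameter.
f = lambda x: 5 * x[:, 0] + 2 * x[:, 1] ** 2 + 0.2 * x[:, 2] + 0 * x[:, 3]
print("kept:", dgsam(f, ["a", "b", "c", "d"]))
```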

  15. An Open-Source Auto-Calibration Routine Supporting the Stormwater Management Model

    NASA Astrophysics Data System (ADS)

    Tiernan, E. D.; Hodges, B. R.

    2017-12-01

    The stormwater management model (SWMM) is a clustered model that relies on subcatchment-averaged parameter assignments to correctly capture catchment stormwater runoff behavior. Model calibration is considered a critical step for SWMM performance, an arduous task that most stormwater management designers undertake manually. This research presents an open-source, automated calibration routine that increases the efficiency and accuracy of the model calibration process. The routine makes use of a preliminary sensitivity analysis to reduce the dimensions of the parameter space, at which point a multi-objective genetic algorithm (a modified Non-dominated Sorting Genetic Algorithm II) determines the Pareto front for the objective functions within the parameter space. The solutions on this Pareto front represent the optimized parameter value sets for the catchment behavior that could not have been reasonably obtained through manual calibration.
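
    A minimal sketch of the genetic-algorithm stage is shown below using pymoo's NSGA-II, with a toy two-objective function standing in for SWMM runs. Parameter names, bounds, and objectives are invented; the published routine's own code should be consulted for specifics.

```python
import numpy as np
from pymoo.algorithms.moo.nsga2 import NSGA2
from pymoo.core.problem import ElementwiseProblem
from pymoo.optimize import minimize

# Two-objective calibration stand-in: in the real routine each evaluation
# would run SWMM and score the hydrograph (e.g., peak-flow and volume
# errors); a toy function keeps this sketch self-contained.
class CalibrationProblem(ElementwiseProblem):
    def __init__(self):
        # Two surviving parameters after the sensitivity screening,
        # e.g. subcatchment width and imperviousness (illustrative).
        super().__init__(n_var=2, n_obj=2,
                         xl=np.array([10.0, 0.1]), xu=np.array([500.0, 0.9]))

    def _evaluate(self, x, out, *args, **kwargs):
        peak_err = (x[0] / 500.0 - 0.4) ** 2 + 0.1 * x[1]
        vol_err = (x[1] - 0.55) ** 2 + 0.001 * abs(x[0] - 120.0)
        out["F"] = [peak_err, vol_err]

res = minimize(CalibrationProblem(), NSGA2(pop_size=40), ("n_gen", 60),
               seed=1, verbose=False)
print(f"{len(res.F)} non-dominated parameter sets on the Pareto front")
```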

  16. Obtaining changes in calibration-coil to seismometer output constants using sine waves

    USGS Publications Warehouse

    Ringler, Adam T.; Hutt, Charles R.; Gee, Lind S.; Sandoval, Leo D.; Wilson, David C.

    2013-01-01

    The midband sensitivity of a broadband seismometer is one of the most commonly used parameters from station metadata. Thus, it is critical for station operators to robustly estimate this quantity with a high degree of accuracy. We develop an in situ method for estimating changes in sensitivity using sine‐wave calibrations, assuming the calibration coil and its drive are stable over time and temperature. This approach has been used in the past for passive instruments (e.g., geophones) but has not been applied, to our knowledge, to derive sensitivities of modern force‐feedback broadband seismometers. We are able to detect changes in sensitivity to well within 1%, and our method is capable of detecting these sensitivity changes using any frequency of sine calibration within the passband of the instrument.
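
    Estimating the amplitude of a sine-wave calibration at a known drive frequency is a linear least-squares problem; comparing amplitudes across epochs then tracks relative sensitivity change, assuming a stable calibration coil. The sketch below uses synthetic data and made-up drive settings.

```python
import numpy as np

# Fit A*sin + B*cos (plus DC) at the known drive frequency; the ratio of
# fitted amplitudes across epochs tracks relative sensitivity change.
def sine_amplitude(y, t, f0):
    G = np.column_stack([np.sin(2 * np.pi * f0 * t),
                         np.cos(2 * np.pi * f0 * t),
                         np.ones_like(t)])
    coeffs, *_ = np.linalg.lstsq(G, y, rcond=None)
    return np.hypot(coeffs[0], coeffs[1])

fs, f0 = 40.0, 1.0                 # sample rate and drive frequency (assumed)
t = np.arange(0, 60, 1 / fs)
rng = np.random.default_rng(2)
y_2012 = 3.000 * np.sin(2 * np.pi * f0 * t) + rng.normal(0, 0.05, t.size)
y_2013 = 2.985 * np.sin(2 * np.pi * f0 * t) + rng.normal(0, 0.05, t.size)

change = sine_amplitude(y_2013, t, f0) / sine_amplitude(y_2012, t, f0) - 1
print(f"sensitivity change: {100 * change:+.2f}%")  # ~-0.5%, well under 1%
```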

  17. Calibrating SANS data for instrument geometry and pixel sensitivity effects: access to an extended Q range

    PubMed Central

    Karge, Lukas; Gilles, Ralph

    2017-01-01

    An improved data-reduction procedure is proposed and demonstrated for small-angle neutron scattering (SANS) measurements. Its main feature is the correction of geometry- and wavelength-dependent intensity variations on the detector in a separate step from the different pixel sensitivities: the geometric and wavelength effects can be corrected analytically, while pixel sensitivities have to be calibrated to a reference measurement. The geometric effects are treated for position-sensitive 3He proportional counter tubes, where they are anisotropic owing to the cylindrical geometry of the gas tubes. For the calibration of pixel sensitivities, a procedure is developed that is valid for isotropic and anisotropic signals. The proposed procedure can save a significant amount of beamtime which has hitherto been used for calibration measurements. PMID:29021734

  18. Analysis of the calibration methods and error propagation for the sensitivity S and the cooling time constant τc of the gold metal foil bolometers

    NASA Astrophysics Data System (ADS)

    Murari, A.; Cecconello, M.; Marrelli, L.; Mast, K. F.

    2004-08-01

    Bolometers are radiation sensors designed to have a spectral response as constant as possible in the region of interest. In high-temperature plasmas, the main radiation output is in the ultraviolet and SXR part of the spectrum, and the metal foil bolometers are special detectors developed for this interval. For such sensors, as in general for all bolometers, the absolute calibration is a crucial issue. This problem becomes particularly severe when, as in nuclear fusion, the sensors are not easily accessible. In this article, a detailed description of the in situ calibration methods for the bolometer sensitivity S and the cooling time τc, the two essential parameters characterizing the behavior of the sensor, is provided and an estimate of the uncertainties for both constants is presented. The sensitivity S is determined via an electrical calibration, in which the effect of the cables connecting the bolometers to the powering circuitry is taken into account, leading to an effective estimate for S. Experimental measurements confirming the quality of the adopted coaxial cable modelling are reported. The cooling time constant τc is calculated via an optical calibration, in which the bolometer is stimulated by a light-emitting diode. The behavior of τc in a broad pressure range is investigated, showing that it does not depend upon this quantity up until 10^-2 mbar, well above the standard operating conditions of many applications. The described methods were tested on 36 bolometric channels of RFX tomography, providing a significant statistical basis for present applications and future developments of both the calibration procedures and the detectors.
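
    In an optical calibration of this kind, τc can be extracted by fitting an exponential decay to the foil response after the LED is switched off; the sketch below does exactly that on synthetic data (the electrical calibration of S and the coaxial-cable modelling are not reproduced).

```python
import numpy as np
from scipy.optimize import curve_fit

# Extract the cooling time constant tau_c by fitting an exponential decay
# to the bolometer signal after the LED stimulus turns off. The data are
# synthetic; a real calibration fits the digitized foil trace.
def decay(t, v0, tau_c, offset):
    return v0 * np.exp(-t / tau_c) + offset

t = np.linspace(0, 0.5, 500)                       # s
rng = np.random.default_rng(3)
signal = decay(t, v0=1.0, tau_c=0.12, offset=0.02) + rng.normal(0, 0.01, t.size)

popt, pcov = curve_fit(decay, t, signal, p0=(0.5, 0.05, 0.0))
tau, tau_err = popt[1], np.sqrt(pcov[1, 1])
print(f"tau_c = {1e3 * tau:.1f} +/- {1e3 * tau_err:.1f} ms")
```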

  19. Partial pressure analysis in space testing

    NASA Technical Reports Server (NTRS)

    Tilford, Charles R.

    1994-01-01

    For vacuum-system or test-article analysis it is often desirable to know the species and partial pressures of the vacuum gases. Residual gas or Partial Pressure Analyzers (PPA's) are commonly used for this purpose. These are mass spectrometer-type instruments, most commonly employing quadrupole filters. These instruments can be extremely useful, but they should be used with caution. Depending on the instrument design, calibration procedures, and conditions of use, measurements made with these instruments can be accurate to within a few percent, or in error by two or more orders of magnitude. Significant sources of error can include relative gas sensitivities that differ from handbook values by an order of magnitude, changes in sensitivity with pressure by as much as two orders of magnitude, changes in sensitivity with time after exposure to chemically active gases, and the dependence of the sensitivity for one gas on the pressures of other gases. However, for most instruments, these errors can be greatly reduced with proper operating procedures and conditions of use. In this paper, data are presented illustrating performance characteristics for different instruments and gases, operating parameters are recommended to minimize some errors, and calibration procedures are described that can detect and/or correct other errors.

  20. SU-E-T-377: Inaccurate Positioning Might Introduce Significant MapCheck Calibration Error in Flatten Filter Free Beams

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, S; Chao, C

    2014-06-01

    Purpose: This study investigates the calibration error of detector sensitivity for MapCheck due to inaccurate positioning of the device, which is not taken into account by the current commercial iterative calibration algorithm. We hypothesize the calibration is more vulnerable to positioning error for flattening filter free (FFF) beams than for conventional flattened beams. Methods: MapCheck2 was calibrated with 10MV conventional and FFF beams, with careful alignment and with 1cm positioning error during calibration, respectively. Open fields of 37cmx37cm were delivered to gauge the impact of resultant calibration errors. The local calibration error was modeled as a detector-independent multiplication factor, with which propagation error was estimated with positioning error from 1mm to 1cm. The calibrated sensitivities, without positioning error, were compared between the conventional and FFF beams to evaluate the dependence on the beam type. Results: The 1cm positioning error leads to 0.39% and 5.24% local calibration error in the conventional and FFF beams respectively. After propagating to the edges of MapCheck, the calibration errors become 6.5% and 57.7%, respectively. The propagation error increases almost linearly with respect to the positioning error. The difference of sensitivities between the conventional and FFF beams was small (0.11 ± 0.49%). Conclusion: The results demonstrate that the positioning error is not handled by the current commercial calibration algorithm of MapCheck. Particularly, the calibration errors for the FFF beams are ~9 times greater than those for the conventional beams with identical positioning error, and a small 1mm positioning error might lead to up to 8% calibration error. Since the sensitivities are only slightly dependent on the beam type and the conventional beam is less affected by the positioning error, it is advisable to cross-check the sensitivities between the conventional and FFF beams to detect potential calibration errors due to inaccurate positioning. This work was partially supported by DOD Grant No. W81XWH1010862.

  1. Large-scale collision cross-section profiling on a travelling wave ion mobility mass spectrometer

    PubMed Central

    Lietz, Christopher B.; Yu, Qing; Li, Lingjun

    2014-01-01

    Ion mobility (IM) is a gas-phase electrophoretic method that separates ions according to charge and ion-neutral collision cross-section (CCS). Herein, we attempt to apply a travelling wave (TW) IM polyalanine calibration method to shotgun proteomics and create a large peptide CCS database. Mass spectrometry methods that utilize IM, such as HDMSE, often use high transmission voltages for sensitive analysis. However, polyalanine calibration has only been demonstrated with low voltage transmission used to prevent gas-phase activation. If polyalanine ions change conformation under higher transmission voltages used for HDMSE, the calibration may no longer be valid. Thus, we aimed to characterize the accuracy of calibration and CCS measurement under high transmission voltages on a TW IM instrument using the polyalanine calibration method and found that the additional error was not significant. We also evaluated the potential error introduced by liquid chromatography (LC)-HDMSE analysis, and found it to be insignificant as well, validating the calibration method. Finally, we demonstrated the utility of building a large-population peptide CCS database by investigating the effects of terminal lysine position, via LysC or LysN digestion, on the formation of two structural sub-families formed by triply charged ions. PMID:24845359
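
    One common form of TW CCS calibration fits reduced calibrant CCS values against corrected drift times with a power law; the sketch below shows that fit with invented numbers, leaving aside the charge and reduced-mass corrections applied to real data, and is not necessarily the exact procedure of this paper.

```python
import numpy as np

# Power-law TWIM calibration: fit reduced literature CCS values of the
# polyalanine calibrants against corrected drift times, CCS' = A * td'**B
# (linear in log-log space). All values below are illustrative.
td_corr = np.array([2.1, 3.0, 4.2, 5.6, 7.1])       # corrected drift times, ms
ccs_ref = np.array([151., 178., 210., 243., 276.])  # reduced reference CCS, A^2

B, lnA = np.polyfit(np.log(td_corr), np.log(ccs_ref), 1)
A = np.exp(lnA)

def ccs_from_drift(td):
    return A * td ** B     # apply the calibration to analyte drift times

print(f"A={A:.1f}, B={B:.3f}; CCS at td'=4.8 ms -> {ccs_from_drift(4.8):.0f} A^2")
```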

  2. Improvements in absolute seismometer sensitivity calibration using local earth gravity measurements

    USGS Publications Warehouse

    Anthony, Robert E.; Ringler, Adam; Wilson, David

    2018-01-01

    The ability to determine both absolute and relative seismic amplitudes is fundamentally limited by the accuracy and precision with which scientists are able to calibrate seismometer sensitivities and characterize their response. Currently, across the Global Seismic Network (GSN), errors in midband sensitivity exceed 3% at the 95% confidence interval and are the least‐constrained response parameter in seismic recording systems. We explore a new methodology utilizing precise absolute Earth gravity measurements to determine the midband sensitivity of seismic instruments. We first determine the absolute sensitivity of Kinemetrics EpiSensor accelerometers to 0.06% at the 99% confidence interval by inverting them in a known gravity field at the Albuquerque Seismological Laboratory (ASL). After the accelerometer is calibrated, we install it in its normal configuration next to broadband seismometers and subject the sensors to identical ground motions to perform relative calibrations of the broadband sensors. Using this technique, we are able to determine the absolute midband sensitivity of the vertical components of Nanometrics Trillium Compact seismometers to within 0.11% and Streckeisen STS‐2 seismometers to within 0.14% at the 99% confidence interval. The technique enables absolute calibrations from first principles that are traceable to National Institute of Standards and Technology (NIST) measurements while providing nearly an order of magnitude more precision than step‐table calibrations.
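
    The accelerometer stage of the calibration amounts to a ±1 g flip: inverting the sensor in a known gravity field spans exactly 2 g, so the sensitivity follows from two DC output levels. The readings and local gravity value below are illustrative, not the ASL measurements.

```python
import numpy as np

# +/-1 g flip calibration: the difference between the upright and inverted
# DC outputs spans 2 g, giving the sensitivity directly.
g_local = 9.79194           # absolute local gravity, m/s^2 (assumed value)
v_up = np.mean([19.5841, 19.5838, 19.5843])       # output, V (upright)
v_down = np.mean([-19.5846, -19.5841, -19.5844])  # output, V (inverted)

sensitivity = (v_up - v_down) / (2 * g_local)     # V per (m/s^2)
print(f"sensitivity = {sensitivity:.6f} V/(m/s^2)")
```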

  3. Absolute sensitivity calibration of an extreme ultraviolet spectrometer for tokamak measurements

    NASA Astrophysics Data System (ADS)

    Guirlet, R.; Schwob, J. L.; Meyer, O.; Vartanian, S.

    2017-01-01

    An extreme ultraviolet spectrometer installed on the Tore Supra tokamak has been calibrated in absolute units of brightness in the range 10-340 Å. This has been performed by means of a combination of techniques. The range 10-113 Å was absolutely calibrated by using an ultrasoft-X ray source emitting six spectral lines in this range. The calibration transfer to the range 113-182 Å was performed using the spectral line intensity branching ratio method. The range 182-340 Å was calibrated thanks to radiative-collisional modelling of spectral line intensity ratios. The maximum sensitivity of the spectrometer was found to lie around 100 Å. Around this wavelength, the sensitivity is fairly flat in a 80 Å wide interval. The spatial variations of sensitivity along the detector assembly were also measured. The observed trend is related to the quantum efficiency decrease as the angle of the incoming photon trajectories becomes more grazing.

  4. Evaluating remedial alternatives for an acid mine drainage stream: A model post audit

    USGS Publications Warehouse

    Runkel, Robert L.; Kimball, Briant A.; Walton-Day, Katherine; Verplanck, Philip L.; Broshears, Robert E.

    2012-01-01

    A post audit for a reactive transport model used to evaluate acid mine drainage treatment systems is presented herein. The post audit is based on a paired synoptic approach in which hydrogeochemical data are collected at low (existing conditions) and elevated (following treatment) pH. Data obtained under existing, low-pH conditions are used for calibration, and the resultant model is used to predict metal concentrations observed following treatment. Predictions for Al, As, Fe, H+, and Pb accurately reproduce the observed reduction in dissolved concentrations afforded by the treatment system, and the information provided in regard to standard attainment is also accurate (predictions correctly indicate attainment or nonattainment of water quality standards for 19 of 25 cases). Errors associated with Cd, Cu, and Zn are attributed to misspecification of sorbent mass (precipitated Fe). In addition to these specific results, the post audit provides insight in regard to calibration and sensitivity analysis that is contrary to conventional wisdom. Steps taken during the calibration process to improve simulations of As sorption were ultimately detrimental to the predictive results, for example, and the sensitivity analysis failed to bracket observed metal concentrations.

  5. Evaluating remedial alternatives for an acid mine drainage stream: a model post audit.

    PubMed

    Runkel, Robert L; Kimball, Briant A; Walton-Day, Katherine; Verplanck, Philip L; Broshears, Robert E

    2012-01-03

    A post audit for a reactive transport model used to evaluate acid mine drainage treatment systems is presented herein. The post audit is based on a paired synoptic approach in which hydrogeochemical data are collected at low (existing conditions) and elevated (following treatment) pH. Data obtained under existing, low-pH conditions are used for calibration, and the resultant model is used to predict metal concentrations observed following treatment. Predictions for Al, As, Fe, H(+), and Pb accurately reproduce the observed reduction in dissolved concentrations afforded by the treatment system, and the information provided in regard to standard attainment is also accurate (predictions correctly indicate attainment or nonattainment of water quality standards for 19 of 25 cases). Errors associated with Cd, Cu, and Zn are attributed to misspecification of sorbent mass (precipitated Fe). In addition to these specific results, the post audit provides insight in regard to calibration and sensitivity analysis that is contrary to conventional wisdom. Steps taken during the calibration process to improve simulations of As sorption were ultimately detrimental to the predictive results, for example, and the sensitivity analysis failed to bracket observed metal concentrations.

  6. Method-independent, Computationally Frugal Convergence Testing for Sensitivity Analysis Techniques

    NASA Astrophysics Data System (ADS)

    Mai, J.; Tolson, B.

    2017-12-01

    The increasing complexity and runtime of environmental models lead to the current situation that the calibration of all model parameters or the estimation of all of their uncertainty is often computationally infeasible. Hence, techniques to determine the sensitivity of model parameters are used to identify the most important parameters. All subsequent model calibrations or uncertainty estimation procedures then focus only on these subsets of parameters and are hence less computationally demanding. While the examination of the convergence of calibration and uncertainty methods is state-of-the-art, the convergence of the sensitivity methods is usually not checked. If at all, bootstrapping of the sensitivity results is used to determine the reliability of the estimated indexes. Bootstrapping, however, might itself become computationally expensive in case of large model outputs and a high number of bootstraps. We, therefore, present a Model Variable Augmentation (MVA) approach to check the convergence of sensitivity indexes without performing any additional model run. This technique is method- and model-independent. It can be applied either during the sensitivity analysis (SA) or afterwards. The latter case enables the checking of already processed sensitivity indexes. To demonstrate the method's independence of the convergence testing method, we applied it to two widely used, global SA methods: the screening method known as the Morris method or Elementary Effects (Morris 1991) and the variance-based Sobol' method (Sobol' 1993). The new convergence testing method is first scrutinized using 12 analytical benchmark functions (Cuntz & Mai et al. 2015) where the true indexes are known. This proof of principle shows that the method reliably determines the uncertainty of the SA results when different budgets are used for the SA. The results show that the new frugal method is able to test the convergence and therefore the reliability of SA results in an efficient way. The appealing feature of this new technique is that it requires no further model evaluations, which enables the checking of already processed sensitivity results. This is one step towards reliable and transferable, published sensitivity results.

  7. SUMS calibration test report

    NASA Technical Reports Server (NTRS)

    Robertson, G.

    1982-01-01

    Calibration was performed on the shuttle upper atmosphere mass spectrometer (SUMS). The results of the calibration and the as-run test procedures are presented. The output data is described, and engineering data conversion factors, tables and curves, and calibration of instrument gauges are included. Static calibration results are given, including: instrument sensitivity versus external pressure for N2 and O2, data from each calibration scan, data plots for N2 and O2, sensitivity of SUMS at the inlet for N2 and O2, and ratios of 14/28 for nitrogen and 16/32 for oxygen.

  8. Support for Online Calibration in the ALICE HLT Framework

    NASA Astrophysics Data System (ADS)

    Krzewicki, Mikolaj; Rohr, David; Zampolli, Chiara; Wiechula, Jens; Gorbunov, Sergey; Chauvin, Alex; Vorobyev, Ivan; Weber, Steffen; Schweda, Kai; Shahoyan, Ruben; Lindenstruth, Volker; ALICE Collaboration

    2017-10-01

    The ALICE detector employs subdetectors sensitive to environmental conditions such as pressure and temperature, e.g. the time projection chamber (TPC). A precise reconstruction of particle trajectories requires precise calibration of these detectors. Performing the calibration in real time in the HLT improves the online reconstruction and potentially renders certain offline calibration steps obsolete, speeding up offline physics analysis. For LHC Run 3, starting in 2020 when data reduction will rely on reconstructed data, online calibration becomes a necessity. In order to run the calibration online, the HLT now supports the processing of tasks that typically run offline. These tasks run massively in parallel on all HLT compute nodes and their output is gathered and merged periodically. The calibration results are both stored offline for later use and fed back into the HLT chain via a feedback loop in order to apply calibration information to the online track reconstruction. Online calibration and the feedback loop are subject to certain time constraints in order to provide up-to-date calibration information, and they must not interfere with ALICE data taking. Our approach of running these tasks in asynchronous processes enables us to separate them from normal data taking in a way that makes it failure resilient. We performed a first test of online TPC drift time calibration under real conditions during the heavy-ion run in December 2015. We present an analysis and conclusions of this first test, new improvements and developments based on it, as well as our current scheme to commission this for production use.

  9. A new method to calibrate the absolute sensitivity of a soft X-ray streak camera

    NASA Astrophysics Data System (ADS)

    Yu, Jian; Liu, Shenye; Li, Jin; Yang, Zhiwen; Chen, Ming; Guo, Luting; Yao, Li; Xiao, Shali

    2016-12-01

    In this paper, we introduce a new method to calibrate the absolute sensitivity of a soft X-ray streak camera (SXRSC). The calibrations are done in the static mode by using a small laser-produced X-ray source. A calibrated X-ray CCD is used as a secondary standard detector to monitor the X-ray source intensity. In addition, two sets of holographic flat-field grating spectrometers are chosen as the spectral discrimination systems of the SXRSC and the X-ray CCD. The absolute sensitivity of the SXRSC is obtained by comparing the signal counts of the SXRSC to the output counts of the X-ray CCD. Results show that the calibrated spectrum covers the range from 200 eV to 1040 eV. The change of the absolute sensitivity in the vicinity of the carbon K-edge can also be clearly seen. The experimental values agree with the calculated values to within 29% error. Compared with previous calibration methods, the proposed method has several advantages: a wide spectral range, high accuracy, and simple data processing. Our calibration results can be used to make quantitative X-ray flux measurements in laser fusion research.

  10. Utility of bromide and heat tracers for aquifer characterization affected by highly transient flow conditions

    NASA Astrophysics Data System (ADS)

    Ma, Rui; Zheng, Chunmiao; Zachara, John M.; Tonkin, Matthew

    2012-08-01

    A tracer test using both bromide and heat tracers conducted at the Integrated Field Research Challenge site in Hanford 300 Area (300A), Washington, provided an instrument for evaluating the utility of bromide and heat tracers for aquifer characterization. The bromide tracer data were critical to improving the calibration of the flow model complicated by the highly dynamic nature of the flow field. However, most bromide concentrations were obtained from fully screened observation wells, lacking depth-specific resolution for vertical characterization. On the other hand, depth-specific temperature data were relatively simple and inexpensive to acquire. However, temperature-driven fluid density effects influenced heat plume movement. Moreover, the temperature data contained "noise" caused by heating during fluid injection and sampling events. Using the hydraulic conductivity distribution obtained from the calibration of the bromide transport model, the temperature depth profiles and arrival times of temperature peaks simulated by the heat transport model were in reasonable agreement with observations. This suggested that heat can be used as a cost-effective proxy for solute tracers for calibration of the hydraulic conductivity distribution, especially in the vertical direction. However, a heat tracer test must be carefully designed and executed to minimize fluid density effects and sources of noise in temperature data. A sensitivity analysis also revealed that heat transport was most sensitive to hydraulic conductivity and porosity, less sensitive to thermal distribution factor, and least sensitive to thermal dispersion and heat conduction. This indicated that the hydraulic conductivity remains the primary calibration parameter for heat transport.

  11. Utility of Bromide and Heat Tracers for Aquifer Characterization Affected by Highly Transient Flow Conditions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ma, Rui; Zheng, Chunmiao; Zachara, John M.

    A tracer test using both bromide and heat tracers conducted at the Integrated Field Research Challenge site in Hanford 300 Area (300A), Washington, provided an instrument for evaluating the utility of bromide and heat tracers for aquifer characterization. The bromide tracer data were critical to improving the calibration of the flow model complicated by the highly dynamic nature of the flow field. However, most bromide concentrations were obtained from fully screened observation wells, lacking depth-specific resolution for vertical characterization. On the other hand, depth-specific temperature data were relatively simple and inexpensive to acquire. However, temperature-driven fluid density effects influenced heat plume movement. Moreover, the temperature data contained "noise" caused by heating during fluid injection and sampling events. Using the hydraulic conductivity distribution obtained from the calibration of the bromide transport model, the temperature depth profiles and arrival times of temperature peaks simulated by the heat transport model were in reasonable agreement with observations. This suggested that heat can be used as a cost-effective proxy for solute tracers for calibration of the hydraulic conductivity distribution, especially in the vertical direction. However, a heat tracer test must be carefully designed and executed to minimize fluid density effects and sources of noise in temperature data. A sensitivity analysis also revealed that heat transport was most sensitive to hydraulic conductivity and porosity, less sensitive to thermal distribution factor, and least sensitive to thermal dispersion and heat conduction. This indicated that the hydraulic conductivity remains the primary calibration parameter for heat transport.

  12. Fast hydrological model calibration based on the heterogeneous parallel computing accelerated shuffled complex evolution method

    NASA Astrophysics Data System (ADS)

    Kan, Guangyuan; He, Xiaoyan; Ding, Liuqian; Li, Jiren; Hong, Yang; Zuo, Depeng; Ren, Minglei; Lei, Tianjie; Liang, Ke

    2018-01-01

    Hydrological model calibration has been a hot issue for decades. The shuffled complex evolution method developed at the University of Arizona (SCE-UA) has been proved to be an effective and robust optimization approach. However, its computational efficiency deteriorates significantly when the amount of hydrometeorological data increases. In recent years, the rise of heterogeneous parallel computing has brought hope for the acceleration of hydrological model calibration. This study proposed a parallel SCE-UA method and applied it to the calibration of a watershed rainfall-runoff model, the Xinanjiang model. The parallel method was implemented on heterogeneous computing systems using OpenMP and CUDA. Performance testing and sensitivity analysis were carried out to verify its correctness and efficiency. Comparison results indicated that heterogeneous parallel computing-accelerated SCE-UA converged much more quickly than the original serial version and possessed satisfactory accuracy and stability for the task of fast hydrological model calibration.
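
    The heavy cost in SCE-UA sits in the repeated objective-function evaluations, which are independent within each evolution step and therefore map naturally onto parallel workers. A hedged illustration of that idea follows; the paper itself uses OpenMP and CUDA, whereas the toy objective and Python multiprocessing below are stand-ins of ours.

```python
# Hypothetical sketch: parallelizing the objective-function evaluations that
# dominate SCE-UA runtime. Python multiprocessing stands in for the paper's
# OpenMP/CUDA implementation; the objective is a placeholder, not Xinanjiang.
from multiprocessing import Pool

import numpy as np


def objective(params):
    """Stand-in rainfall-runoff objective (e.g., 1 - NSE); a real calibration
    would run the Xinanjiang model here."""
    return float(np.sum((params - 0.3) ** 2))


def evaluate_population(population, processes=4):
    """Evaluate all candidate parameter sets concurrently. In SCE-UA each
    shuffling loop evaluates every point of every complex; those evaluations
    are independent, so they map cleanly onto worker processes."""
    with Pool(processes) as pool:
        return np.array(pool.map(objective, list(population)))


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    pop = rng.uniform(0.0, 1.0, size=(64, 5))   # 64 candidates, 5 parameters
    costs = evaluate_population(pop)
    print("best candidate:", pop[np.argmin(costs)])
```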

  13. A novel implementation of homodyne time interval analysis method for primary vibration calibration

    NASA Astrophysics Data System (ADS)

    Sun, Qiao; Zhou, Ling; Cai, Chenguang; Hu, Hongbo

    2011-12-01

    In this paper, the shortcomings of the conventional homodyne time interval analysis (TIA) method and their causes are described with respect to its software algorithm and hardware implementation, based on which a simplified TIA method is proposed with the help of virtual instrument technology. Equipped with an ordinary Michelson interferometer and a dual-channel synchronous data acquisition card, a primary vibration calibration system using the simplified method can accurately measure the complex sensitivity of accelerometers, meeting the uncertainty requirements laid down in the pertinent ISO standard. The validity and accuracy of the simplified TIA method are verified by simulation and comparison experiments, and its performance is analyzed. Owing to its simplified algorithm and low hardware requirements, this method is recommended for national metrology institutes of developing countries and industrial primary vibration calibration labs.

  14. Monte Carlo calculation of the sensitivity of a commercial dose calibrator to gamma and beta radiation.

    PubMed

    Laedermann, Jean-Pascal; Valley, Jean-François; Bulling, Shelley; Bochud, François O

    2004-06-01

    The detection process used in a commercial dose calibrator was modeled using the GEANT 3 Monte Carlo code. Dose calibrator efficiency for gamma and beta emitters, and the response to monoenergetic photons and electrons, were calculated. The model shows that beta emitters below 2.5 MeV deposit energy indirectly in the detector through bremsstrahlung produced in the chamber wall or in the source itself. Higher energy beta emitters (E > 2.5 MeV) deposit energy directly in the chamber sensitive volume, and dose calibrator sensitivity increases abruptly for these radionuclides. The Monte Carlo calculations were compared with gamma and beta emitter measurements. The calculations show that the variation in dose calibrator efficiency with measuring conditions (source volume, container diameter, container wall thickness and material, position of the source within the calibrator) is relatively small and can be considered insignificant for routine measurement applications. However, dose calibrator efficiency depends strongly on the inner-wall thickness of the detector.

  15. Sensitive analytical method for simultaneous analysis of some vasoconstrictors with highly overlapped analytical signals

    NASA Astrophysics Data System (ADS)

    Nikolić, G. S.; Žerajić, S.; Cakić, M.

    2011-10-01

    Multivariate calibration is a powerful mathematical tool that can be applied in analytical chemistry when the analytical signals are highly overlapped. A method with regression by partial least squares is proposed for the simultaneous spectrophotometric determination of adrenergic vasoconstrictors in a decongestive solution containing two active components: phenylephrine hydrochloride and tramazoline hydrochloride. These sympathomimetic agents are frequently combined in pharmaceutical formulations against the common cold. The proposed method, which is simple and rapid, offers the advantages of sensitivity and a wide range of determinations without the need for extraction of the vasoconstrictors. Different parameters were evaluated in order to minimize the number of factors necessary to obtain the calibration matrix by multivariate calibration. The adequate selection of the spectral regions proved to be important for the number of factors. In order to simultaneously quantify both hydrochlorides among excipients, the spectral region between 250 and 290 nm was selected. Recoveries of the vasoconstrictors were 98-101%. The developed method was applied to the assay of two decongestive pharmaceutical preparations.
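
    A minimal sketch of the approach on synthetic data follows; the band shapes, noise level and concentrations are invented for illustration, and scikit-learn's PLS regression stands in for whatever implementation the authors used.

```python
# Illustrative sketch of multivariate calibration by partial least squares on
# synthetic, heavily overlapped two-component UV spectra. All shapes and
# concentrations below are hypothetical, not taken from the paper.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(1)
wavelengths = np.linspace(250, 290, 200)      # nm, region used in the paper


def gaussian_band(center, width):
    return np.exp(-0.5 * ((wavelengths - center) / width) ** 2)


# Two strongly overlapped pure-component spectra (hypothetical shapes)
s1 = gaussian_band(268.0, 8.0)
s2 = gaussian_band(272.0, 9.0)

# Calibration set: random concentration pairs, Beer-Lambert mixing plus noise
C_train = rng.uniform(0.1, 1.0, size=(30, 2))
A_train = C_train @ np.vstack([s1, s2]) + 0.002 * rng.standard_normal((30, 200))

pls = PLSRegression(n_components=4)
pls.fit(A_train, C_train)

# Predict an unknown mixture and report recovery
c_true = np.array([[0.40, 0.70]])
a_test = c_true @ np.vstack([s1, s2]) + 0.002 * rng.standard_normal((1, 200))
c_pred = pls.predict(a_test)
print("recovery (%):", 100.0 * c_pred / c_true)
```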

  16. Comparison Between One-Point Calibration and Two-Point Calibration Approaches in a Continuous Glucose Monitoring Algorithm

    PubMed Central

    Mahmoudi, Zeinab; Johansen, Mette Dencker; Christiansen, Jens Sandahl

    2014-01-01

    Background: The purpose of this study was to investigate the effect of using a 1-point calibration approach instead of a 2-point calibration approach on the accuracy of a continuous glucose monitoring (CGM) algorithm. Method: A previously published real-time CGM algorithm was compared with its updated version, which used a 1-point calibration instead of a 2-point calibration. In addition, the contribution of the corrective intercept (CI) to the calibration performance was assessed. Finally, the sensor background current was estimated in real time and retrospectively. The study was performed on 132 type 1 diabetes patients. Results: Replacing the 2-point calibration with the 1-point calibration improved the CGM accuracy, with the greatest improvement achieved in hypoglycemia (18.4% median absolute relative difference [MARD] in hypoglycemia for the 2-point calibration, and 12.1% MARD in hypoglycemia for the 1-point calibration). Using the 1-point calibration increased the percentage of sensor readings in zone A+B of the Clarke error grid analysis (EGA) in the full glycemic range, and also enhanced hypoglycemia sensitivity. Exclusion of CI from calibration reduced hypoglycemia accuracy, while slightly increasing euglycemia accuracy. Both real-time and retrospective estimation of the sensor background current suggest that the background current can be considered zero in the calibration of the SCGM1 sensor. Conclusions: The sensor readings calibrated with the 1-point calibration approach showed higher accuracy than those calibrated with the 2-point calibration approach. PMID:24876420
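
    For intuition, a hedged sketch of the two calibration schemes and the MARD metric follows; the sensor currents, glucose values and zero background current are illustrative assumptions, not the study's data.

```python
# Hedged sketch: a 2-point calibration fits slope and intercept from two
# paired sensor-current/reference-glucose samples; a 1-point calibration
# fixes the background current (here zero, as the study suggests for this
# sensor) and estimates only the slope. Numbers are invented.
import numpy as np


def two_point(i1, g1, i2, g2):
    slope = (g2 - g1) / (i2 - i1)
    intercept = g1 - slope * i1
    return slope, intercept


def one_point(i1, g1, background_current=0.0):
    slope = g1 / (i1 - background_current)
    return slope, -slope * background_current


def mard(cgm, reference):
    """Median absolute relative difference (%), as used in the paper."""
    cgm, reference = np.asarray(cgm), np.asarray(reference)
    return 100.0 * np.median(np.abs(cgm - reference) / reference)


# Hypothetical sensor currents (nA) and reference glucose (mg/dL)
currents = np.array([10.0, 14.0, 20.0, 28.0])
reference = np.array([55.0, 80.0, 115.0, 160.0])

s2, b2 = two_point(currents[0], reference[0], currents[2], reference[2])
s1, b1 = one_point(currents[1], reference[1])
print("2-point MARD:", mard(s2 * currents + b2, reference))
print("1-point MARD:", mard(s1 * currents + b1, reference))
```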

  17. In-Flight Calibration of the Energetic Gamma-Ray Experiment Telescope (EGRET) on the Compton Gamma-Ray Observatory

    NASA Technical Reports Server (NTRS)

    Esposito, J. A.; Bertsch, D. L.; Chen, A. W.; Dingus, B. L.; Fichtel, C. E.; Hartman, R. C.; Hunter, S. D.; Kanbach, G.; Kniffen, D. A.; Lin, Y. C.; et al.

    1998-01-01

    The Energetic Gamma-Ray Experiment Telescope (EGRET) on the Compton Gamma-Ray Observatory has been operating for over seven years since its launch in 1991 April. This span of time far exceeds the design lifetime of two years. As the instrument has aged, several changes have occurred due to spark chamber gas exchanges as well as some hardware degradation and failures, all of which have an influence on the instrument sensitivity. This paper describes post-launch measurements and analysis that are done to calibrate the instrument response functions. The updated instrument characteristics are incorporated into the analysis software.

  18. Comparison between Surrogate Indexes of Insulin Sensitivity/Resistance and Hyperinsulinemic Euglycemic Glucose Clamps in Rhesus Monkeys

    PubMed Central

    Lee, Ho-Won; Muniyappa, Ranganath; Yan, Xu; Yue, Lilly Q.; Linden, Ellen H.; Chen, Hui; Hansen, Barbara C.

    2011-01-01

    The euglycemic glucose clamp is the reference method for assessing insulin sensitivity in humans and animals. However, clamps are ill-suited for large studies because of extensive requirements for cost, time, labor, and technical expertise. Simple surrogate indexes of insulin sensitivity/resistance including quantitative insulin-sensitivity check index (QUICKI) and homeostasis model assessment (HOMA) have been developed and validated in humans. However, validation studies of QUICKI and HOMA in both rats and mice suggest that differences in metabolic physiology between rodents and humans limit their value in rodents. Rhesus monkeys are a species more similar to humans than rodents. Therefore, in the present study, we evaluated data from 199 glucose clamp studies obtained from a large cohort of 86 monkeys with a broad range of insulin sensitivity. Data were used to evaluate simple surrogate indexes of insulin sensitivity/resistance (QUICKI, HOMA, Log HOMA, 1/HOMA, and 1/Fasting insulin) with respect to linear regression, predictive accuracy using a calibration model, and diagnostic performance using receiver operating characteristic. Most surrogates had modest linear correlations with SIClamp (r ≈ 0.4–0.64) with comparable correlation coefficients. Predictive accuracy determined by calibration model analysis demonstrated better predictive accuracy of QUICKI than HOMA and Log HOMA. Receiver operating characteristic analysis showed equivalent sensitivity and specificity of most surrogate indexes to detect insulin resistance. Thus, unlike in rodents but similar to humans, surrogate indexes of insulin sensitivity/resistance including QUICKI and log HOMA may be reasonable to use in large studies of rhesus monkeys where it may be impractical to conduct glucose clamp studies. PMID:21209021
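
    The surrogate indexes compared in the study follow their standard definitions (fasting glucose G in mg/dL, fasting insulin I in µU/mL): HOMA-IR = G·I/405 and QUICKI = 1/(log10 I + log10 G). A small worked example with invented values:

```python
# Standard surrogate-index formulas; the fasting values are hypothetical.
import math


def homa_ir(glucose_mg_dl, insulin_uu_ml):
    return glucose_mg_dl * insulin_uu_ml / 405.0


def quicki(glucose_mg_dl, insulin_uu_ml):
    return 1.0 / (math.log10(insulin_uu_ml) + math.log10(glucose_mg_dl))


g, i = 95.0, 12.0                      # hypothetical fasting values
h = homa_ir(g, i)
print(f"HOMA={h:.2f}  log HOMA={math.log10(h):.2f}  "
      f"1/HOMA={1.0 / h:.2f}  QUICKI={quicki(g, i):.3f}  "
      f"1/insulin={1.0 / i:.3f}")
```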

  19. MOESHA: A genetic algorithm for automatic calibration and estimation of parameter uncertainty and sensitivity of hydrologic models

    EPA Science Inventory

    Characterization of uncertainty and sensitivity of model parameters is an essential and often overlooked facet of hydrological modeling. This paper introduces an algorithm called MOESHA that combines input parameter sensitivity analyses with a genetic algorithm calibration routin...

  20. Parameter regionalization of a monthly water balance model for the conterminous United States

    USGS Publications Warehouse

    Bock, Andrew R.; Hay, Lauren E.; McCabe, Gregory J.; Markstrom, Steven L.; Atkinson, R. Dwight

    2016-01-01

    A parameter regionalization scheme to transfer parameter values from gaged to ungaged areas for a monthly water balance model (MWBM) was developed and tested for the conterminous United States (CONUS). The Fourier Amplitude Sensitivity Test, a global-sensitivity algorithm, was implemented on a MWBM to generate parameter sensitivities on a set of 109 951 hydrologic response units (HRUs) across the CONUS. The HRUs were grouped into 110 calibration regions based on similar parameter sensitivities. Subsequently, measured runoff from 1575 streamgages within the calibration regions were used to calibrate the MWBM parameters to produce parameter sets for each calibration region. Measured and simulated runoff at the 1575 streamgages showed good correspondence for the majority of the CONUS, with a median computed Nash–Sutcliffe efficiency coefficient of 0.76 over all streamgages. These methods maximize the use of available runoff information, resulting in a calibrated CONUS-wide application of the MWBM suitable for providing estimates of water availability at the HRU resolution for both gaged and ungaged areas of the CONUS.
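
    The quoted statistic is the Nash-Sutcliffe efficiency, NSE = 1 - Σ(Qobs - Qsim)² / Σ(Qobs - mean(Qobs))²; a minimal implementation with invented runoff values:

```python
# Nash-Sutcliffe efficiency for monthly runoff series; example data invented.
import numpy as np


def nash_sutcliffe(observed, simulated):
    observed, simulated = np.asarray(observed), np.asarray(simulated)
    return 1.0 - np.sum((observed - simulated) ** 2) / np.sum(
        (observed - observed.mean()) ** 2)


obs = np.array([12.0, 30.0, 55.0, 41.0, 18.0, 9.0])   # example runoff (mm)
sim = np.array([10.0, 33.0, 50.0, 44.0, 20.0, 8.0])
print(f"NSE = {nash_sutcliffe(obs, sim):.2f}")
```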

  1. Parameter regionalization of a monthly water balance model for the conterminous United States

    NASA Astrophysics Data System (ADS)

    Bock, Andrew R.; Hay, Lauren E.; McCabe, Gregory J.; Markstrom, Steven L.; Atkinson, R. Dwight

    2016-07-01

    A parameter regionalization scheme to transfer parameter values from gaged to ungaged areas for a monthly water balance model (MWBM) was developed and tested for the conterminous United States (CONUS). The Fourier Amplitude Sensitivity Test, a global-sensitivity algorithm, was implemented on a MWBM to generate parameter sensitivities on a set of 109 951 hydrologic response units (HRUs) across the CONUS. The HRUs were grouped into 110 calibration regions based on similar parameter sensitivities. Subsequently, measured runoff from 1575 streamgages within the calibration regions were used to calibrate the MWBM parameters to produce parameter sets for each calibration region. Measured and simulated runoff at the 1575 streamgages showed good correspondence for the majority of the CONUS, with a median computed Nash-Sutcliffe efficiency coefficient of 0.76 over all streamgages. These methods maximize the use of available runoff information, resulting in a calibrated CONUS-wide application of the MWBM suitable for providing estimates of water availability at the HRU resolution for both gaged and ungaged areas of the CONUS.

  2. Calculations to support JET neutron yield calibration: Modelling of neutron emission from a compact DT neutron generator

    NASA Astrophysics Data System (ADS)

    Čufar, Aljaž; Batistoni, Paola; Conroy, Sean; Ghani, Zamir; Lengar, Igor; Milocco, Alberto; Packer, Lee; Pillon, Mario; Popovichev, Sergey; Snoj, Luka; JET Contributors

    2017-03-01

    At the Joint European Torus (JET) the ex-vessel fission chambers and in-vessel activation detectors are used as the neutron production rate and neutron yield monitors, respectively. In order to ensure that these detectors produce accurate measurements, they need to be experimentally calibrated. A new calibration of neutron detectors to 14 MeV neutrons, resulting from deuterium-tritium (DT) plasmas, is planned at JET using a compact accelerator-based neutron generator (NG) in which a D/T beam impinges on a solid target containing T/D, producing neutrons by DT fusion reactions. This paper presents the analysis that was performed to model the neutron source characteristics in terms of energy spectrum, angle-energy distribution and the effect of the neutron generator geometry. Different codes capable of simulating accelerator-based DT neutron sources are compared and sensitivities to uncertainties in the generator's internal structure analysed. The analysis was performed to support preparation for the experimental measurements carried out to characterize the NG as a calibration source. Further extensive neutronics analyses, performed with this model of the NG, will be needed to support the neutron calibration experiments and to take into account various differences between the calibration experiment and experiments using the plasma as a source of neutrons.

  3. Calibration and Validation of Nonpoint Source Pollution and Erosion Comparison Tool, N-SPECT, for Tropical Conditions

    NASA Astrophysics Data System (ADS)

    Fares, A.; Cheng, C. L.; Dogan, A.

    2006-12-01

    Impaired water quality caused by agriculture, urbanization, and the spread of invasive species has been identified as a major factor in the degradation of coastal ecosystems in the tropics. Watershed-scale nonpoint source pollution models help in evaluating effective management practices to alleviate the negative impacts of different land-use changes. The Non-Point Source Pollution and Erosion Comparison Tool (N-SPECT) is a newly released watershed model that had not previously been tested under tropical conditions. The two objectives of this study were to: i) calibrate and validate N-SPECT for the Hanalei Watershed of the Hawai`ian island of Kaua`i; ii) evaluate the performance of N-SPECT under tropical conditions using the sensitivity analysis approach. The Hanalei watershed contains one of the wettest points on earth, Mt. Waialeale, with an average annual rainfall of 11,000 mm. This rainfall decreases to 2,000 mm at the outlet of the watershed near the coast. The number of rain days is one of the major input parameters that influence N-SPECT's simulation results; this parameter was used to account for plant canopy interception losses. The watershed was divided into sub-basins to accurately distribute the number of rain days throughout the watershed. Total runoff volume predicted by the model compared well with measured data. The model underestimated measured runoff by 1% for the calibration period and 5% for the validation period, due to higher-intensity precipitation in the validation period. Sensitivity analysis revealed that the model was most sensitive to the number of rain days, followed by canopy interception, and least sensitive to the number of sub-basins. The sediment and water quality portion of the model is currently being evaluated.

  4. Development, sensitivity and uncertainty analysis of LASH model

    USDA-ARS?s Scientific Manuscript database

    Many hydrologic models have been developed to help manage natural resources all over the world. Nevertheless, most models have presented a high complexity regarding data base requirements, as well as many calibration parameters. This has brought serious difficulties for applying them in watersheds ...

  5. Sensitivity and Calibration of Non-Destructive Evaluation Method That Uses Neural-Net Processing of Characteristic Fringe Patterns

    NASA Technical Reports Server (NTRS)

    Decker, Arthur J.; Weiland, Kenneth E.

    2003-01-01

    This paper answers some performance and calibration questions about a non-destructive-evaluation (NDE) procedure that uses artificial neural networks to detect structural damage or other changes from sub-sampled characteristic patterns. The method shows increasing sensitivity as the number of sub-samples increases from 108 to 6912. The sensitivity of this robust NDE method is not affected by noisy excitations of the first vibration mode. A calibration procedure is proposed and demonstrated where the output of a trained net can be correlated with the outputs of the point sensors used for vibration testing. The calibration procedure is based on controlled changes of fastener torques. A heterodyne interferometer is used as a displacement sensor for a demonstration of the challenges to be handled in using standard point sensors for calibration.

  6. Testing the capability of ORCHIDEE land surface model to simulate Arctic ecosystems: Sensitivity analysis and site-level model calibration

    NASA Astrophysics Data System (ADS)

    Dantec-Nédélec, S.; Ottlé, C.; Wang, T.; Guglielmo, F.; Maignan, F.; Delbart, N.; Valdayskikh, V.; Radchenko, T.; Nekrasova, O.; Zakharov, V.; Jouzel, J.

    2017-06-01

    The ORCHIDEE land surface model has recently been updated to improve the representation of high-latitude environments. The model now includes improved soil thermodynamics and the representation of permafrost physical processes (soil thawing and freezing), as well as a new snow model to improve the representation of the seasonal evolution of the snow pack and the resulting insulation effects. The model was evaluated against data from the experimental sites of the WSibIso-Megagrant project (www.wsibiso.ru). ORCHIDEE was applied in stand-alone mode, on two experimental sites located in the Yamal Peninsula in the northwestern part of Siberia. These sites are representative of circumpolar-Arctic tundra environments and differ by their respective fractions of shrub/tree cover and soil type. After performing a global sensitivity analysis to identify those parameters that have most influence on the simulation of energy and water transfers, the model was calibrated at local scale and evaluated against in situ measurements (vertical profiles of soil temperature and moisture, as well as active layer thickness) acquired during summer 2012. The results show how sensitivity analysis can identify the dominant processes and thereby reduce the parameter space for the calibration process. We also discuss the model performance at simulating the soil temperature and water content (i.e., energy and water transfers in the soil-vegetation-atmosphere continuum) and the contribution of the vertical discretization of the hydrothermal properties. This work clearly shows, at least at the two sites used for validation, that the new ORCHIDEE vertical discretization can represent the water and heat transfers through complex cryogenic Arctic soils—soils which present multiple horizons sometimes with peat inclusions. The improved model allows us to prescribe the vertical heterogeneity of the soil hydrothermal properties.

  7. Dynamic State Estimation and Parameter Calibration of DFIG based on Ensemble Kalman Filter

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fan, Rui; Huang, Zhenyu; Wang, Shaobu

    2015-07-30

    With the growing interest in the application of wind energy, the doubly fed induction generator (DFIG) plays an essential role in the industry nowadays. To deal with the increasing stochastic variations introduced by intermittent wind resources and responsive loads, dynamic state estimation (DSE) is introduced in any power system associated with DFIGs. However, this dynamic analysis sometimes does not work because the parameters of the DFIGs are not accurate enough. To solve this problem, an ensemble Kalman filter (EnKF) method is proposed for the state estimation and parameter calibration tasks. In this paper, a DFIG is modeled and implemented with the EnKF method. Sensitivity analysis is demonstrated regarding the measurement noise, initial state errors and parameter errors. The results indicate this EnKF method has a robust performance on the state estimation and parameter calibration of DFIGs.
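
    For reference, a generic EnKF analysis step of the kind such studies build on is sketched below; parameters are typically calibrated by augmenting the state vector with them. This is a model-agnostic sketch of ours, not the authors' DFIG implementation.

```python
# Generic ensemble Kalman filter update with perturbed observations.
# The toy dimensions (2 states + 1 appended parameter) are illustrative.
import numpy as np


def enkf_update(ensemble, H, y, obs_cov, rng):
    """ensemble: (n_state, n_members); H: (n_obs, n_state); y: (n_obs,)."""
    n_obs, n_members = H.shape[0], ensemble.shape[1]
    X_mean = ensemble.mean(axis=1, keepdims=True)
    A = ensemble - X_mean                              # state anomalies
    HA = H @ A                                         # observation anomalies
    # Kalman gain from ensemble covariances
    P_yy = HA @ HA.T / (n_members - 1) + obs_cov
    P_xy = A @ HA.T / (n_members - 1)
    K = P_xy @ np.linalg.inv(P_yy)
    # Perturbed observations, one draw per member
    Y = y[:, None] + rng.multivariate_normal(
        np.zeros(n_obs), obs_cov, size=n_members).T
    return ensemble + K @ (Y - H @ ensemble)


rng = np.random.default_rng(2)
ens = rng.normal(0.0, 1.0, size=(3, 50))               # 2 states + 1 parameter
H = np.array([[1.0, 0.0, 0.0]])                        # only state 0 observed
updated = enkf_update(ens, H, np.array([0.8]), np.eye(1) * 0.05, rng)
print("posterior mean:", updated.mean(axis=1))
```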

  8. An automated pressure data acquisition system for evaluation of pressure sensitive paint chemistries

    NASA Technical Reports Server (NTRS)

    Sealey, Bradley S.; Mitchell, Michael; Burkett, Cecil G.; Oglesby, Donald M.

    1993-01-01

    An automated pressure data acquisition system for testing of pressure sensitive phosphorescent paints was designed, assembled, and tested. The purpose of the calibration system is the evaluation and selection of pressure sensitive paint chemistries that could be used to obtain global aerodynamic pressure distribution measurements. The test apparatus and setup used for pressure sensitive paint characterizations is described. The pressure calibrations, thermal sensitivity effects, and photodegradation properties are discussed.

  9. Sensitivity analysis of radionuclides atmospheric dispersion following the Fukushima accident

    NASA Astrophysics Data System (ADS)

    Girard, Sylvain; Korsakissok, Irène; Mallet, Vivien

    2014-05-01

    Atmospheric dispersion models are used in response to accidental releases with two purposes: minimising the population exposure during the accident, and complementing field measurements for the assessment of short and long term environmental and sanitary impacts. The predictions of these models are subject to considerable uncertainties of various origins. Notably, input data, such as meteorological fields or estimations of emitted quantities as a function of time, are highly uncertain. The case studied here is the atmospheric release of radionuclides following the Fukushima Daiichi disaster. The model used in this study is Polyphemus/Polair3D, from which IRSN's operational long-distance atmospheric dispersion model ldX is derived. A sensitivity analysis was conducted in order to estimate the relative importance of a set of identified uncertainty sources. The complexity of this task was increased by four characteristics shared by most environmental models: high-dimensional inputs; correlated inputs or inputs with complex structures; high-dimensional output; and a multiplicity of purposes that require sophisticated and non-systematic post-processing of the output. The sensitivities of a set of outputs were estimated with the Morris screening method. The input ranking was highly dependent on the considered output. Yet, a few variables, such as the horizontal diffusion coefficient or cloud thickness, were found to have a weak influence on most of them and could be discarded from further studies. The sensitivity analysis procedure was also applied to indicators of the model performance computed on a set of gamma dose rate observations. This original approach is of particular interest since observations could later be used to calibrate the probability distributions of the input variables. Indeed, only the variables that are influential on performance scores are likely to allow for calibration. An indicator based on the time matching of emission peaks was elaborated in order to complement classical statistical scores, which were dominated by deposit dose rates and almost insensitive to lower-atmosphere dose rates. The substantial sensitivity of these performance indicators is auspicious for future calibration attempts and indicates that the simple perturbations used here may be sufficient to represent an essential part of the overall uncertainty.
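
    A compact, self-contained version of the Morris elementary-effects screening used here follows, with a toy function standing in for the dispersion model; the trajectory scheme and constants are illustrative.

```python
# Morris screening: rank inputs by the distribution of their elementary
# effects. mu* (mean absolute effect) measures importance; sigma flags
# nonlinearity and interactions. Toy model and settings are ours.
import numpy as np


def elementary_effects(model, n_params, n_trajectories, delta=0.25, seed=0):
    rng = np.random.default_rng(seed)
    effects = [[] for _ in range(n_params)]
    for _ in range(n_trajectories):
        x = rng.uniform(0.0, 1.0 - delta, size=n_params)
        y = model(x)
        for i in rng.permutation(n_params):      # one-at-a-time moves
            x[i] += delta
            y_new = model(x)
            effects[i].append((y_new - y) / delta)
            y = y_new
    mu_star = [float(np.mean(np.abs(e))) for e in effects]
    sigma = [float(np.std(e)) for e in effects]
    return mu_star, sigma


def toy_model(x):                  # stand-in for a dispersion model run
    return x[0] ** 2 + 0.5 * x[1] + 0.01 * x[2]


mu_star, sigma = elementary_effects(toy_model, n_params=3, n_trajectories=30)
print("importance ranking (most to least):", np.argsort(mu_star)[::-1])
```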

  10. A digitally implemented phase-locked loop detection scheme for analysis of the phase and power stability of a calibration tone

    NASA Technical Reports Server (NTRS)

    Densmore, A. C.

    1988-01-01

    A digital phase-locked loop (PLL) scheme is described which detects the phase and power of a high-SNR calibration tone. The digital PLL is implemented in software directly from the given description. It was used to evaluate the stability of the Goldstone Deep Space Station open loop receivers for Radio Science. Included is a derivation of the Allan variance sensitivity of the PLL imposed by additive white Gaussian noise; a lower limit is placed on the carrier frequency.
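
    A minimal software PLL of the kind described, in sketch form: mix the input with a numerically controlled oscillator, low-pass the products, use the arctangent as phase detector, and close the loop with proportional and integral branches. Tone, gains and durations below are invented, not the Goldstone receiver's parameters.

```python
# Sketch of a software PLL tracking the phase and power of a tone.
import numpy as np

fs = 10_000.0                          # sample rate (Hz)
t = np.arange(20_000) / fs
x = 0.8 * np.cos(2 * np.pi * 997.0 * t + 0.5)   # high-SNR calibration tone

phase = 0.0
freq = 2 * np.pi * 1000.0 / fs         # initial NCO guess: 1000 Hz
alpha, beta, lp = 0.02, 1e-4, 0.05     # loop and smoothing gains (illustrative)
i_f = q_f = 0.0

for sample in x:
    i = sample * np.cos(phase)         # in-phase mixer output
    q = -sample * np.sin(phase)        # quadrature mixer output
    i_f += (i - i_f) * lp              # one-pole low-pass removes the 2f term
    q_f += (q - q_f) * lp
    err = np.arctan2(q_f, i_f)         # phase detector
    freq += beta * err                 # integral branch of the loop filter
    phase += freq + alpha * err        # NCO update with proportional branch

print(f"tracked frequency: {freq * fs / (2 * np.pi):.1f} Hz, "
      f"amplitude estimate: {2 * np.hypot(i_f, q_f):.2f}")
```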

  11. Fricke-gel dosimeter: overview of Xylenol Orange chemical behavior

    NASA Astrophysics Data System (ADS)

    Liosi, G. M.; Dondi, D.; Vander Griend, D. A.; Lazzaroni, S.; D'Agostino, G.; Mariani, M.

    2017-11-01

    The complexation between Xylenol Orange (XO) and Fe3+ ions plays a key role in Fricke-gel dosimeters for the determination of the absorbed dose via UV-vis analysis. In this study, the effect of XO and of the acidity of the solution on the complexation mechanism was investigated. Moreover, starting from the results of complexation titration and Equilibrium Restricted Factor Analysis, four XO-Fe3+ complexes were identified as contributing to the absorption spectra. Based on the acquired knowledge, a new [Fe3+] vs dose calibration method is proposed. The preliminary results show a significant improvement of the sensitivity and dose threshold with respect to the commonly used Abs vs dose calibration method.

  12. Method-independent, Computationally Frugal Convergence Testing for Sensitivity Analysis Techniques

    NASA Astrophysics Data System (ADS)

    Mai, Juliane; Tolson, Bryan

    2017-04-01

    The increasing complexity and runtime of environmental models lead to the current situation that the calibration of all model parameters or the estimation of all of their uncertainties is often computationally infeasible. Hence, techniques to determine the sensitivity of model parameters are used to identify the most important parameters or model processes. All subsequent model calibrations or uncertainty estimation procedures then focus only on these subsets of parameters and are hence less computationally demanding. While the examination of the convergence of calibration and uncertainty methods is state-of-the-art, the convergence of the sensitivity methods is usually not checked. If it is checked at all, bootstrapping of the sensitivity results is used to determine the reliability of the estimated indexes. Bootstrapping, however, might as well become computationally expensive in case of large model outputs and a high number of bootstraps. We, therefore, present a Model Variable Augmentation (MVA) approach to check the convergence of sensitivity indexes without performing any additional model run. This technique is method- and model-independent. It can be applied either during the sensitivity analysis (SA) or afterwards. The latter case enables the checking of already processed sensitivity indexes. To demonstrate the method independency of the convergence testing method, we applied it to three widely used, global SA methods: the screening method known as the Morris method or Elementary Effects (Morris 1991, Campolongo et al. 2000), the variance-based Sobol' method (Sobol' 1993, Saltelli et al. 2010) and a derivative-based method known as the Parameter Importance index (Goehler et al. 2013). The new convergence testing method is first scrutinized using 12 analytical benchmark functions (Cuntz & Mai et al. 2015) where the true indexes of the aforementioned three methods are known. This proof of principle shows that the method reliably determines the uncertainty of the SA results when different budgets are used for the SA. Subsequently, we focus on the model independency by testing the frugal method using the hydrologic model mHM (www.ufz.de/mhm) with about 50 model parameters. The results show that the new frugal method is able to test the convergence and therefore the reliability of SA results in an efficient way. The appealing feature of this new technique is that no further model evaluations are needed, which enables checking of already processed (and published) sensitivity results. This is one step towards reliable and transferable published sensitivity results.
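
    The conventional check that MVA is designed to avoid looks like the following sketch: bootstrap a variance-based index from already-computed model outputs. Here the Saltelli et al. (2010) first-order estimator is applied to a toy function of ours; the resampling needs no extra model runs but can itself get expensive for large outputs.

```python
# Bootstrap confidence interval for a first-order Sobol' index, using the
# Saltelli estimator S_i ~ mean(f_B * (f_ABi - f_A)) / Var(f).
import numpy as np

rng = np.random.default_rng(3)
n, d = 2048, 3


def model(X):                                     # toy stand-in for mHM
    return np.sin(X[:, 0]) + 0.7 * X[:, 1] ** 2 + 0.1 * X[:, 2]


A = rng.uniform(0, 1, (n, d))
B = rng.uniform(0, 1, (n, d))
fA, fB = model(A), model(B)

i = 0                                             # index of interest
ABi = A.copy()
ABi[:, i] = B[:, i]
fABi = model(ABi)

elementary = fB * (fABi - fA)                     # per-sample contributions
S_i = elementary.mean() / np.var(np.concatenate([fA, fB]))

# Bootstrap the index: resample the stored outputs, no extra model runs
boot = []
for _ in range(1000):
    idx = rng.integers(0, n, n)
    boot.append(elementary[idx].mean()
                / np.var(np.concatenate([fA[idx], fB[idx]])))
print(f"S_{i} = {S_i:.3f}, 95% CI = "
      f"({np.percentile(boot, 2.5):.3f}, {np.percentile(boot, 97.5):.3f})")
```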

  13. Gamma Ray Observatory (GRO) OBC attitude error analysis

    NASA Technical Reports Server (NTRS)

    Harman, R. R.

    1990-01-01

    This analysis involves an in-depth look into the onboard computer (OBC) attitude determination algorithm. A review of the TRW error analysis and the ground simulations necessary to understand the onboard attitude determination process is performed. In addition, a plan is generated for the in-flight calibration and validation of OBC-computed attitudes. Pre-mission expected accuracies are summarized, and the sensitivity of onboard algorithms to sensor anomalies and filter tuning parameters is addressed.

  14. GHRS Cycle 5 Echelle Wavelength Monitor

    NASA Astrophysics Data System (ADS)

    Soderblom, David

    1995-07-01

    This proposal defines the spectral lamp test for Echelle A. It is an internal test which makes measurements of the wavelength lamp SC2. It calibrates the carrousel function, Y deflections, resolving power, sensitivity, and scattered light. The wavelength calibration dispersion constants will be updated in the PODPS calibration data base. This proposal defines the spectral lamp test for Echelle B. It is an internal test which makes measurements of the wavelength lamp SC2. It calibrates the carrousel function, Y deflections, resolving power, sensitivity, and scattered light. The wavelength calibration dispersion constants will be updated in the PODPS calibration data base. It will be run every 4 months. The wavelengths may be out of range according to PEPSI or TRANS. Please ignore the errors.

  15. Parameter regionalization of a monthly water balance model for the conterminous United States

    NASA Astrophysics Data System (ADS)

    Bock, A. R.; Hay, L. E.; McCabe, G. J.; Markstrom, S. L.; Atkinson, R. D.

    2015-09-01

    A parameter regionalization scheme to transfer parameter values and model uncertainty information from gaged to ungaged areas for a monthly water balance model (MWBM) was developed and tested for the conterminous United States (CONUS). The Fourier Amplitude Sensitivity Test, a global-sensitivity algorithm, was implemented on a MWBM to generate parameter sensitivities on a set of 109 951 hydrologic response units (HRUs) across the CONUS. The HRUs were grouped into 110 calibration regions based on similar parameter sensitivities. Subsequently, measured runoff from 1575 streamgages within the calibration regions were used to calibrate the MWBM parameters to produce parameter sets for each calibration region. Measured and simulated runoff at the 1575 streamgages showed good correspondence for the majority of the CONUS, with a median computed Nash-Sutcliffe Efficiency coefficient of 0.76 over all streamgages. These methods maximize the use of available runoff information, resulting in a calibrated CONUS-wide application of the MWBM suitable for providing estimates of water availability at the HRU resolution for both gaged and ungaged areas of the CONUS.

  16. Detailed Calibration of SphinX instrument at the Palermo XACT facility of INAF-OAPA

    NASA Astrophysics Data System (ADS)

    Gburek, Szymon; Collura, Alfonso; Barbera, Marco; Reale, Fabio; Sylwester, Janusz; Kowalinski, Miroslaw; Bakala, Jaroslaw; Kordylewski, Zbigniew; Plocieniak, Stefan; Podgorski, Piotr; Trzebinski, Witold; Varisco, Salvatore

    The Solar Photometer in X-rays (SphinX) experiment is scheduled for launch in late summer 2008 on board the Russian CORONAS-Photon satellite. SphinX will use three silicon PIN diode detectors with selected effective areas in order to record solar spectra in the X-ray energy range 0.3-15 keV with unprecedented temporal and medium energy resolution. The high sensitivity and large dynamic range of the SphinX instrument will give, for the first time, the possibility of observing solar soft X-ray variability from the weakest levels, ten times below present thresholds, to the largest X20+ flares. We present the results of the ground X-ray calibrations of the SphinX instrument performed at the X-ray Astronomy Calibration and Testing (XACT) facility of INAF-OAPA. The calibrations were essential for the determination of the SphinX detector energy resolution and efficiency. We describe the instrumental set-up of the ground tests and the adopted measurement techniques, and present the results of the calibration data analysis.

  17. Calibration of two complex ecosystem models with different likelihood functions

    NASA Astrophysics Data System (ADS)

    Hidy, Dóra; Haszpra, László; Pintér, Krisztina; Nagy, Zoltán; Barcza, Zoltán

    2014-05-01

    The biosphere is a sensitive carbon reservoir. Terrestrial ecosystems were approximately carbon neutral during the past centuries, but they became net carbon sinks due to climate-change-induced environmental change and the associated CO2 fertilization effect of the atmosphere. Model studies and measurements indicate that the biospheric carbon sink can saturate in the future due to ongoing climate change, which can act as a positive feedback. Robustness of carbon cycle models is a key issue when trying to choose the appropriate model for decision support. The input parameters of process-based models are decisive regarding the model output. At the same time there are several input parameters for which accurate values are hard to obtain directly from experiments or for which no local measurements are available. Due to the uncertainty associated with the unknown model parameters, significant bias can be experienced if the model is used to simulate the carbon and nitrogen cycle components of different ecosystems. In order to improve model performance the unknown model parameters have to be estimated. We developed a multi-objective, two-step calibration method based on a Bayesian approach in order to estimate the unknown parameters of the PaSim and Biome-BGC models. Biome-BGC and PaSim are widely used biogeochemical models that simulate the storage and flux of water, carbon, and nitrogen between the ecosystem and the atmosphere, and within the components of the terrestrial ecosystems (in this research a further developed version of Biome-BGC is used, referred to as BBGC MuSo). Both models were calibrated regardless of the simulated processes and the type of model parameters. The calibration procedure is based on the comparison of measured data with simulated results via calculating a likelihood function (degree of goodness-of-fit between simulated and measured data). In our research different likelihood function formulations were used in order to examine the effect of the model goodness metric on calibration. The different likelihoods are different functions of RMSE (root mean squared error) weighted by measurement uncertainty: exponential / linear / quadratic / linear normalized by correlation. As a first calibration step, sensitivity analysis was performed in order to select the influential parameters which have a strong effect on the output data. In the second calibration step only the sensitive parameters were calibrated (optimal values and confidence intervals were calculated). In the case of PaSim more parameters were found responsible for 95% of the output data variance than in the case of BBGC MuSo. Analysis of the results of the optimized models revealed that the exponential likelihood estimation proved to be the most robust (best model simulation with optimized parameters, highest confidence interval increase). The cross-validation of the model simulations can help in constraining the highly uncertain greenhouse gas budget of grasslands.

  18. Assessment and Reduction of Model Parametric Uncertainties: A Case Study with A Distributed Hydrological Model

    NASA Astrophysics Data System (ADS)

    Gan, Y.; Liang, X. Z.; Duan, Q.; Xu, J.; Zhao, P.; Hong, Y.

    2017-12-01

    The uncertainties associated with the parameters of a hydrological model need to be quantified and reduced for it to be useful for operational hydrological forecasting and decision support. An uncertainty quantification framework is presented to facilitate practical assessment and reduction of model parametric uncertainties. A case study, using the distributed hydrological model CREST for daily streamflow simulation during the period 2008-2010 over ten watersheds, was used to demonstrate the performance of this new framework. Model behaviors across watersheds were analyzed by a two-stage stepwise sensitivity analysis procedure, using the LH-OAT method for screening out insensitive parameters, followed by MARS-based Sobol' sensitivity indices for quantifying each parameter's contribution to the response variance due to its first-order and higher-order effects. Pareto optimal sets of the influential parameters were then found by the adaptive surrogate-based multi-objective optimization procedure, using a MARS model for approximating the parameter-response relationship and the SCE-UA algorithm for searching the optimal parameter sets of the adaptively updated surrogate model. The final optimal parameter sets were validated against the daily streamflow simulation of the same watersheds during the period 2011-2012. The stepwise sensitivity analysis procedure efficiently reduced the number of parameters that need to be calibrated from twelve to seven, which helps to limit the dimensionality of the calibration problem and serves to enhance the efficiency of parameter calibration. The adaptive MARS-based multi-objective calibration exercise provided satisfactory solutions to the reproduction of the observed streamflow for all watersheds. The final optimal solutions showed significant improvement when compared to the default solutions, with about 65-90% reduction in 1-NSE and 60-95% reduction in |RB|. The validation exercise indicated a large improvement in model performance, with about 40-85% reduction in 1-NSE and 35-90% reduction in |RB|. Overall, this uncertainty quantification framework is robust, effective and efficient for parametric uncertainty analysis, and its results provide useful information that helps to understand the model behaviors and improve the model simulations.

  19. Wind Tunnel Strain-Gage Balance Calibration Data Analysis Using a Weighted Least Squares Approach

    NASA Technical Reports Server (NTRS)

    Ulbrich, N.; Volden, T.

    2017-01-01

    A new approach is presented that uses a weighted least squares fit to analyze wind tunnel strain-gage balance calibration data. The weighted least squares fit is specifically designed to increase the influence of single-component loadings during the regression analysis. The weighted least squares fit also reduces the impact of calibration load schedule asymmetries on the predicted primary sensitivities of the balance gages. A weighting factor between zero and one is assigned to each calibration data point that depends on a simple count of its intentionally loaded load components or gages. The greater the number of a data point's intentionally loaded load components or gages is, the smaller its weighting factor becomes. The proposed approach is applicable to both the Iterative and Non-Iterative Methods that are used for the analysis of strain-gage balance calibration data in the aerospace testing community. The Iterative Method uses a reasonable estimate of the tare corrected load set as input for the determination of the weighting factors. The Non-Iterative Method, on the other hand, uses gage output differences relative to the natural zeros as input for the determination of the weighting factors. Machine calibration data of a six-component force balance is used to illustrate benefits of the proposed weighted least squares fit. In addition, a detailed derivation of the PRESS residuals associated with a weighted least squares fit is given in the appendices of the paper as this information could not be found in the literature. These PRESS residuals may be needed to evaluate the predictive capabilities of the final regression models that result from a weighted least squares fit of the balance calibration data.
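
    The core mechanics can be sketched in a few lines: assign each calibration point a weight that shrinks with its count of intentionally loaded components, then solve the weighted regression by scaling rows with the square root of the weights. The 1/count weight and the synthetic data below are illustrative assumptions, not the paper's exact scheme.

```python
# Hedged sketch of a weighted least squares balance-calibration fit.
import numpy as np


def weights_from_load_counts(load_counts):
    """More intentionally loaded components -> smaller weight (our choice)."""
    counts = np.asarray(load_counts, dtype=float)
    return 1.0 / counts


def weighted_least_squares(X, y, w):
    """Solve min_b sum w_k (y_k - X_k b)^2 by scaling rows with sqrt(w)."""
    sw = np.sqrt(w)[:, None]
    coef, *_ = np.linalg.lstsq(X * sw, y * sw.ravel(), rcond=None)
    return coef


rng = np.random.default_rng(4)
X = rng.normal(size=(40, 6))                       # applied load combinations
true_sens = np.array([2.0, 0.5, 1.2, 0.8, 1.5, 0.3])
y = X @ true_sens + 0.01 * rng.normal(size=40)     # gage outputs
counts = rng.integers(1, 4, size=40)               # loaded components per point
print(weighted_least_squares(X, y, weights_from_load_counts(counts)))
```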

  20. Error modeling and sensitivity analysis of a parallel robot with SCARA (selective compliance assembly robot arm) motions

    NASA Astrophysics Data System (ADS)

    Chen, Yuzhen; Xie, Fugui; Liu, Xinjun; Zhou, Yanhua

    2014-07-01

    Parallel robots with SCARA (selective compliance assembly robot arm) motions are widely utilized in the field of high-speed pick-and-place manipulation. Error modeling for these robots generally simplifies the parallelogram structures included in the robots to a single link. As the established error model fails to reflect the error features of the parallelogram structures, the effectiveness of accuracy design and kinematic calibration based on the error model is undermined. An error modeling methodology is proposed to establish an error model of parallel robots with parallelogram structures. The error model can embody the geometric errors of all joints, including the joints of the parallelogram structures, and thus captures more exhaustively the factors that reduce the accuracy of the robot. Based on the error model and some sensitivity indices defined in the sense of statistics, a sensitivity analysis is carried out. Accordingly, some atlases are depicted to express each geometric error's influence on the moving platform's pose errors. From these atlases, the geometric errors that have greater impact on the accuracy of the moving platform are identified, and some sensitive areas where the pose errors of the moving platform are extremely sensitive to the geometric errors are also figured out. By taking into account error factors which are generally neglected in existing modeling methods, the proposed modeling method can thoroughly disclose the process of error transmission and enhance the efficacy of accuracy design and calibration.

  1. Parameter Estimation and Sensitivity Analysis of an Urban Surface Energy Balance Parameterization at a Tropical Suburban Site

    NASA Astrophysics Data System (ADS)

    Harshan, S.; Roth, M.; Velasco, E.

    2014-12-01

    Forecasting of urban weather and climate is of great importance as our cities become more populated, and considering the combined effects of global warming and local land use changes which make urban inhabitants more vulnerable to e.g. heat waves and flash floods. In meso/global scale models, urban parameterization schemes are used to represent the urban effects. However, these schemes require a large set of input parameters related to urban morphological and thermal properties. Obtaining all these parameters through direct measurements is usually not feasible. A number of studies have reported on parameter estimation and sensitivity analysis to adjust and determine the most influential parameters for land surface schemes in non-urban areas. Similar work for urban areas is scarce; in particular, studies on urban parameterization schemes in tropical cities have so far not been reported. In order to address the above issues, the town energy balance (TEB) urban parameterization scheme (part of the SURFEX land surface modeling system) was subjected to a sensitivity and optimization/parameter estimation experiment at a suburban site in tropical Singapore. The sensitivity analysis was carried out as a screening test to identify the most sensitive or influential parameters. Thereafter, an optimization/parameter estimation experiment was performed to calibrate the input parameters. The sensitivity experiment was based on the "improved Sobol's global variance decomposition method". The analysis showed that parameters related to roads, roofs and soil moisture have significant influence on the performance of the model. The optimization/parameter estimation experiment was performed using the AMALGAM (a multi-algorithm genetically adaptive multi-objective method) evolutionary algorithm. The experiment showed a remarkable improvement compared to the simulations using the default parameter set. The calibrated parameters from this optimization experiment can be used for further model validation studies to identify inherent deficiencies in model physics.

  2. Probe-Specific Procedure to Estimate Sensitivity and Detection Limits for 19F Magnetic Resonance Imaging.

    PubMed

    Taylor, Alexander J; Granwehr, Josef; Lesbats, Clémentine; Krupa, James L; Six, Joseph S; Pavlovskaya, Galina E; Thomas, Neil R; Auer, Dorothee P; Meersmann, Thomas; Faas, Henryk M

    2016-01-01

    Due to the low fluorine background signal in vivo, 19F is a good marker to study the fate of exogenous molecules by magnetic resonance imaging (MRI) using equilibrium nuclear spin polarization schemes. Since 19F MRI applications require high sensitivity, it can be important to assess experimental feasibility already at the design stage by estimating the minimum detectable fluorine concentration. Here we propose a simple method for the calibration of MRI hardware, providing sensitivity estimates for a given scanner and coil configuration. An experimental "calibration factor" accounting for variations in coil configuration and hardware set-up is specified. Once it has been determined in a calibration experiment, the sensitivity of an experiment or, alternatively, the minimum number of required spins or the minimum marker concentration can be estimated without the need for a pilot experiment. The definition of this calibration factor is derived from standard equations for the sensitivity in magnetic resonance, yet the method is not restricted by the limited validity of these equations, since additional instrument-dependent factors are implicitly included during calibration. The method is demonstrated using MR spectroscopy and imaging experiments with different 19F samples, both paramagnetically and susceptibility broadened, to approximate a range of realistic environments.
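
    A hedged sketch of how such a calibration factor is used in practice: with equilibrium polarization, SNR scales linearly with concentration and with the square root of averaging time, so one calibration scan fixes the proportionality and the minimum detectable concentration follows. All numbers and names below are invented.

```python
# Extrapolating a minimum detectable concentration from one calibration scan,
# assuming SNR ~ concentration * sqrt(averaging time). Values are invented.
import math


def min_detectable_concentration(c_cal, snr_cal, t_cal, snr_required, t_exp):
    """Extrapolate from a calibration scan at concentration c_cal."""
    snr_per_conc = snr_cal / c_cal * math.sqrt(t_exp / t_cal)
    return snr_required / snr_per_conc


# Hypothetical calibration: 50 mM phantom gives SNR 80 in a 5-minute scan.
c_min = min_detectable_concentration(
    c_cal=50.0, snr_cal=80.0, t_cal=5.0, snr_required=5.0, t_exp=20.0)
print(f"minimum detectable concentration ~ {c_min:.1f} mM in a 20-min scan")
```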

  3. Sensitivity analysis of a multilayer, finite-difference model of the Southeastern Coastal Plain regional aquifer system; Mississippi, Alabama, Georgia, and South Carolina

    USGS Publications Warehouse

    Pernik, Meribeth

    1987-01-01

    The sensitivity of a multilayer finite-difference regional flow model was tested by changing the calibrated values for five parameters in the steady-state model and one in the transient-state model. The parameters changed under the steady-state condition were those that had been routinely adjusted during the calibration process as part of the effort to match pre-development potentiometric surfaces and elements of the water budget. The tested steady-state parameters include: recharge, riverbed conductance, transmissivity, confining unit leakance, and boundary location. In the transient-state model, the storage coefficient was adjusted. The sensitivity of the model to changes in the calibrated values of these parameters was evaluated with respect to the simulated response of net base flow to the rivers and the mean value of the absolute head residual. To provide a standard measurement of sensitivity from one parameter to another, the standard deviation of the absolute head residual was calculated. The steady-state model was shown to be most sensitive to changes in rates of recharge. When the recharge rate was held constant, the model was more sensitive to variations in transmissivity. Near the rivers, the riverbed conductance becomes the dominant parameter in controlling the heads. Changes in confining unit leakance had little effect on simulated base flow, but greatly affected head residuals. The model was relatively insensitive to changes in the location of no-flow boundaries and to moderate changes in the altitude of constant head boundaries. The storage coefficient was adjusted under transient conditions to illustrate the model's sensitivity to changes in storativity. The model is less sensitive to an increase in storage coefficient than to a decrease. As the storage coefficient decreased, the aquifer drawdown increased and the base flow decreased; the opposite response occurred when the storage coefficient was increased. (Author's abstract)

  4. Noninvasive determination of optical lever sensitivity in atomic force microscopy

    NASA Astrophysics Data System (ADS)

    Higgins, M. J.; Proksch, R.; Sader, J. E.; Polcik, M.; Mc Endoo, S.; Cleveland, J. P.; Jarvis, S. P.

    2006-01-01

    Atomic force microscopes typically require knowledge of the cantilever spring constant and optical lever sensitivity in order to accurately determine the force from the cantilever deflection. In this study, we investigate a technique to calibrate the optical lever sensitivity of rectangular cantilevers that does not require contact to be made with a surface. This noncontact approach utilizes the method of Sader et al. [Rev. Sci. Instrum. 70, 3967 (1999)] to calibrate the spring constant of the cantilever in combination with the equipartition theorem [J. L. Hutter and J. Bechhoefer, Rev. Sci. Instrum. 64, 1868 (1993)] to determine the optical lever sensitivity. A comparison is presented between sensitivity values obtained from conventional static mode force curves and those derived using this noncontact approach for a range of different cantilevers in air and liquid. These measurements indicate that the method offers a quick, alternative approach for the calibration of the optical lever sensitivity.
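
    The noncontact recipe reduces to one line of algebra: with the spring constant k known from the Sader method, equipartition gives the thermal deflection variance ⟨x²⟩ = k_BT/k, and dividing the rms displacement by the measured rms photodiode voltage yields the optical lever sensitivity. A sketch with invented numbers (mode-shape correction factors omitted for simplicity):

```python
# Optical lever sensitivity from thermal noise via equipartition.
import math

K_B = 1.380649e-23                 # Boltzmann constant, J/K


def optical_lever_sensitivity(k_spring, v_rms_thermal, temperature=295.0):
    """Return sensitivity in m/V from thermal noise measured in volts (rms)."""
    x_rms = math.sqrt(K_B * temperature / k_spring)   # thermal rms deflection
    return x_rms / v_rms_thermal


# Hypothetical cantilever: k = 0.1 N/m, thermal peak integrates to 2 mV rms
sens = optical_lever_sensitivity(k_spring=0.1, v_rms_thermal=2.0e-3)
print(f"InvOLS ~ {sens * 1e9:.1f} nm/V")
```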

  5. The mechanistic model, GoMDOM: Development, calibration and sensitivity analysis

    EPA Science Inventory

    This presentation is part of a series of Gulf Hypoxia modeling presentations which will be used to: 1) aid NOAA in informing scientific directions and funding decisions for their cooperators, and 2) provide a Technical Review of all models to the Mississippi River Nutrie...

  6. Panoramic attitude sensor

    NASA Technical Reports Server (NTRS)

    Meek, I. C.

    1976-01-01

    Each subassembly, design analysis, and final calibration data on all assemblies for the Panoramic Attitude Sensor (PAS) are described. The PAS is used for coarse attitude determination on the International Ultraviolet Explorer (IUE) spacecraft. The PAS contains a sun sensor which is sensitive only to the sun's radiation and a mechanically scanned sensor which is sensitive to the earth, moon, and sun. The signals from these two sensors are encoded and sent back in the telemetry data stream to determine the spacecraft attitude.

  7. The Whipple Strip Sky Survey

    NASA Astrophysics Data System (ADS)

    Kertzman, M. P.

    As part of the normal operation of the Whipple 10m Gamma Ray telescope, ten-minute drift scan “zenith” runs are made each night of observation for use as calibration. Most of the events recorded during a zenith run are due to the background of cosmic ray showers. However, it would be possible for a hitherto unknown source of gamma rays to drift through the field. This paper reports the results of a search for serendipitous high energy gamma ray sources in the Whipple 10m nightly calibration zenith data. From 2000 to 2004, nightly calibration runs were taken at an elevation of 89°. A 2-D analysis of these drift scan runs produces a strip of width ~3.5° in declination, spanning the full range of right ascension. In the 2004-05 observing season the calibration runs were taken at elevations of 86° and 83°. Beginning in the 2005-06 season, the nightly calibration runs were taken at an elevation of 80°. Collectively, these drift scans cover a strip approximately 12.5° wide in declination, centered at declination 37.18°, and spanning the full range of RA. The analysis procedures developed for drift scan data, the sensitivity of the method, and the results will be presented.

  8. Sobol' sensitivity analysis for stressor impacts on honeybee ...

    EPA Pesticide Factsheets

    We employ Monte Carlo simulation and nonlinear sensitivity analysis techniques to describe the dynamics of a bee exposure model, VarroaPop. Daily simulations are performed of hive population trajectories, taking into account queen strength, foraging success, mite impacts, weather, colony resources, population structure, and other important variables. This allows us to test the effects of defined pesticide exposure scenarios versus controlled simulations that lack pesticide exposure. The daily resolution of the model also allows us to conditionally identify sensitivity metrics. We use the variance-based global decomposition sensitivity analysis method, Sobol', to assess first- and second-order parameter sensitivities within VarroaPop, allowing us to determine how variance in the output is attributed to each of the input variables across different exposure scenarios. Simulations with VarroaPop indicate queen strength, forager life span and pesticide toxicity parameters are consistent, critical inputs for colony dynamics. Further analysis also reveals that the relative importance of these parameters fluctuates throughout the simulation period according to the status of other inputs. Our preliminary results show that model variability is conditional and can be attributed to different parameters depending on different timescales. By using sensitivity analysis to assess model output and variability, calibrations of simulation models can be better informed to yield more

  9. Remote sensing of selective logging in Amazonia: Assessing limitations based on detailed field observations, Landsat ETM+, and textural analysis.

    Treesearch

    Gregory P. Asner; Michael Keller; Rodrigo Pereira; Johan C. Zweede

    2002-01-01

    We combined a detailed field study of forest canopy damage with calibrated Landsat 7 Enhanced Thematic Mapper Plus (ETM+) reflectance data and texture analysis to assess the sensitivity of basic broadband optical remote sensing to selective logging in Amazonia. Our field study encompassed measurements of ground damage and canopy gap fractions along a chronosequence of...

  10. A new approach for the pixel map sensitivity (PMS) evaluation of an electronic portal imaging device (EPID)

    PubMed Central

    Lucio, Francesco; Calamia, Elisa; Russi, Elvio; Marchetto, Flavio

    2013-01-01

    When using an electronic portal imaging device (EPID) for dosimetric verifications, the calibration of the sensitive area is of paramount importance. Two calibration methods are generally adopted: one, empirical, based on an external reference dosimeter or on multiple narrow-beam irradiations, and one based on simulation of the EPID response. In this paper we present an alternative approach based on an intercalibration procedure that is independent from external dosimeters and from simulations, and is quick and easy to perform. Each element of a detector matrix is characterized by a different gain; the aim of the calibration procedure is to relate the gain of each element to a reference one. The method that we used to compute the relative gains is based on recursive acquisitions with the EPID placed in different positions, assuming a constant fluence of the beam for subsequent deliveries. By applying an established procedure and analysis algorithm, the EPID calibration was repeated in several working conditions. Data show that both the photon energy and the presence of a medium between the source and the detector affect the calibration coefficients by less than 1%. The calibration coefficients were then applied to the acquired images, comparing the EPID dose images with films. Measurements were performed with an open field, placing the film at the level of the EPID. The standard deviation of the distribution of the point-to-point difference is 0.6%. An approach of this type for the EPID calibration has many advantages with respect to the standard methods: it does not need an external dosimeter, it is not related to the irradiation techniques, and it is easy to implement in the clinical practice. Moreover, it can be applied in case of transit or nontransit dosimetry, solving the problem of the EPID calibration independently from the dose reconstruction method. PACS number: 87.56.-v PMID:24257285
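
    The principle can be illustrated in one dimension: image the same constant fluence twice with the detector shifted by one pixel; ratios of readings then depend only on the pixel gains, which chain together into a relative gain map. This 1-D, noiseless sketch is ours; the paper's procedure is 2-D and uses several positions.

```python
# 1-D illustration of EPID pixel-gain intercalibration by detector shifts.
import numpy as np

rng = np.random.default_rng(5)
n = 16
gains = 1.0 + 0.05 * rng.standard_normal(n)       # true (unknown) pixel gains
fluence = 1.0 + 0.2 * np.sin(np.linspace(0, 3, n + 1))  # unknown beam profile

M1 = gains * fluence[:n]                          # detector at position 0
M2 = gains * fluence[1:n + 1]                     # detector shifted one pixel

# g[i+1]/g[i] = M1[i+1]/M2[i]; chain the ratios, referenced to pixel 0
rel = np.ones(n)
for i in range(n - 1):
    rel[i + 1] = rel[i] * M1[i + 1] / M2[i]

print("max gain-recovery error:", np.max(np.abs(rel - gains / gains[0])))
```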

  11. Impact of the hard-coded parameters on the hydrologic fluxes of the land surface model Noah-MP

    NASA Astrophysics Data System (ADS)

    Cuntz, Matthias; Mai, Juliane; Samaniego, Luis; Clark, Martyn; Wulfmeyer, Volker; Attinger, Sabine; Thober, Stephan

    2016-04-01

    Land surface models incorporate a large number of processes, described by physical, chemical and empirical equations. The process descriptions contain a number of parameters that can be soil or plant type dependent and are typically read from tabulated input files. Land surface models may have, however, process descriptions that contain fixed, hard-coded numbers in the computer code, which are not identified as model parameters. Here we searched for hard-coded parameters in the computer code of the land surface model Noah with multiple process options (Noah-MP) to assess the importance of the fixed values in restricting the model's agility during parameter estimation. We found 139 hard-coded values in all Noah-MP process options, which are mostly spatially constant values. This is in addition to the 71 standard parameters of Noah-MP, which mostly get distributed spatially by given vegetation and soil input maps. We performed a Sobol' global sensitivity analysis of Noah-MP to variations of the standard and hard-coded parameters for a specific set of process options. 42 standard parameters and 75 hard-coded parameters were active with the chosen process options. The sensitivities of the hydrologic output fluxes latent heat and total runoff, as well as their component fluxes, were evaluated at twelve catchments of the Eastern United States with very different hydro-meteorological regimes. Noah-MP's hydrologic output fluxes are sensitive to two thirds of its standard parameters. The most sensitive parameter is, however, a hard-coded value in the formulation of soil surface resistance for evaporation, which proved to be oversensitive in other land surface models as well. Surface runoff is sensitive to almost all hard-coded parameters of the snow processes and the meteorological inputs. These parameter sensitivities diminish in total runoff. Assessing these parameters in model calibration would require detailed snow observations or the calculation of hydrologic signatures of the runoff data. Latent heat and total runoff exhibit very similar sensitivities towards standard and hard-coded parameters in Noah-MP because of their tight coupling via the water balance. It should therefore be comparable to calibrate Noah-MP either against latent heat observations or against river runoff data. Latent heat and total runoff are sensitive to both plant and soil parameters. Calibrating only a sub-set of parameters, for example only soil parameters, thus limits the ability to derive realistic model parameters. It is thus recommended to include the most sensitive hard-coded model parameters exposed in this study when calibrating Noah-MP.

  12. Orbit-determination performance of Doppler data for interplanetary cruise trajectories. Part 1: Error analysis methodology

    NASA Technical Reports Server (NTRS)

    Ulvestad, J. S.; Thurman, S. W.

    1992-01-01

    An error covariance analysis methodology is used to investigate different weighting schemes for two-way (coherent) Doppler data in the presence of transmission-media and observing-platform calibration errors. The analysis focuses on orbit-determination performance in the interplanetary cruise phase of deep-space missions. Analytical models for the Doppler observable and for transmission-media and observing-platform calibration errors are presented, drawn primarily from previous work. Previously published analytical models were improved upon by the following: (1) considering the effects of errors in the calibration of radio signal propagation through the troposphere and ionosphere as well as station-location errors; (2) modelling the spacecraft state transition matrix using a more accurate piecewise-linear approximation to represent the evolution of the spacecraft trajectory; and (3) incorporating Doppler data weighting functions that are functions of elevation angle, which reduce the sensitivity of the estimated spacecraft trajectory to troposphere and ionosphere calibration errors. The analysis is motivated by the need to develop suitable weighting functions for two-way Doppler data acquired at 8.4 GHz (X-band) and 32 GHz (Ka-band). This weighting is likely to be different from that in the weighting functions currently in use; the current functions were constructed originally for use with 2.3 GHz (S-band) Doppler data, which are affected much more strongly by the ionosphere than are the higher frequency data.
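
    The elevation-dependent weighting mentioned in item (3) can be illustrated with a simple noise model in which the tropospheric contribution grows roughly with the air mass, i.e. with 1/sin(elevation); the constants below are made-up placeholders, not the weights actually derived in the report.

    ```python
    # Illustrative elevation-dependent weighting of Doppler residuals:
    # low-elevation data carry more troposphere/ionosphere error, so they
    # receive less weight in the orbit fit. Constants are placeholders.
    import numpy as np

    def doppler_sigma(elev_deg, sigma_floor=0.05, k_tropo=0.15):
        """1-sigma Doppler uncertainty (mm/s) as a function of elevation."""
        el = np.radians(elev_deg)
        return np.hypot(sigma_floor, k_tropo / np.sin(el))

    elevations = np.array([10.0, 20.0, 45.0, 80.0])
    weights = 1.0 / doppler_sigma(elevations) ** 2
    print(weights / weights.max())    # relative weight versus elevation angle
    ```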

  13. Analysis and Calibration of Sources of Electronic Error in PSD Sensor Response.

    PubMed

    Rodríguez-Navarro, David; Lázaro-Galilea, José Luis; Bravo-Muñoz, Ignacio; Gardel-Vicente, Alfredo; Tsirigotis, Georgios

    2016-04-29

    In order to obtain very precise measurements of the position of agents located at a considerable distance using a sensor system based on position sensitive detectors (PSD), it is necessary to analyze and mitigate the factors that generate substantial errors in the system's response. These sources of error can be divided into electronic and geometric factors. The former stem from the nature and construction of the PSD as well as the performance, tolerances and electronic response of the system, while the latter are related to the sensor's optical system. Here, we focus solely on the electrical effects, since the study, analysis and correction of these are a prerequisite for subsequently addressing geometric errors. A simple calibration method is proposed, which considers PSD response, component tolerances, temperature variations, signal frequency used, signal to noise ratio (SNR), suboptimal operational amplifier parameters, and analog to digital converter (ADC) quantization SNR (SNRQ), etc. Following an analysis of these effects and calibration of the sensor, it was possible to correct the errors, thus rendering the effects negligible, as reported in the results section.

  14. Analysis and Calibration of Sources of Electronic Error in PSD Sensor Response

    PubMed Central

    Rodríguez-Navarro, David; Lázaro-Galilea, José Luis; Bravo-Muñoz, Ignacio; Gardel-Vicente, Alfredo; Tsirigotis, Georgios

    2016-01-01

    In order to obtain very precise measurements of the position of agents located at a considerable distance using a sensor system based on position sensitive detectors (PSD), it is necessary to analyze and mitigate the factors that generate substantial errors in the system's response. These sources of error can be divided into electronic and geometric factors. The former stem from the nature and construction of the PSD as well as the performance, tolerances and electronic response of the system, while the latter are related to the sensor's optical system. Here, we focus solely on the electrical effects, since the study, analysis and correction of these are a prerequisite for subsequently addressing geometric errors. A simple calibration method is proposed, which considers PSD response, component tolerances, temperature variations, signal frequency used, signal to noise ratio (SNR), suboptimal operational amplifier parameters, and analog to digital converter (ADC) quantization SNR (SNRQ), etc. Following an analysis of these effects and calibration of the sensor, it was possible to correct the errors, thus rendering the effects negligible, as reported in the results section. PMID:27136562

  15. Optics-Only Calibration of a Neural-Net Based Optical NDE Method for Structural Health Monitoring

    NASA Technical Reports Server (NTRS)

    Decker, Arthur J.

    2004-01-01

    A calibration process is presented that uses optical measurements alone to calibrate a neural-net based NDE method. The method itself detects small changes in the vibration mode shapes of structures. The optics-only calibration process confirms previous work showing that the sensitivity to vibration-amplitude changes can be as small as 10 nanometers. A more practical value in an NDE service laboratory is shown to be 50 nanometers. Both model-generated and experimental calibrations are demonstrated using two implementations of the calibration technique. The implementations are based on previously published demonstrations of the NDE method and an alternative calibration procedure that depends on comparing neural-net and point-sensor measurements. The optics-only calibration method, unlike the alternative method, does not require modifications of the structure being tested or the creation of calibration objects. The calibration process can be used to test improvements in the NDE process and to develop a vibration-mode independence of damage-detection sensitivity. The calibration effort was intended to support NASA's objective to promote safety in the operation of ground test facilities and in aviation in general, by allowing the detection of the gradual onset of structural changes and damage.

  16. In-situ calibration of nonuniformity in infrared staring and modulated systems

    NASA Astrophysics Data System (ADS)

    Black, Wiley T.

    Infrared cameras can directly measure the apparent temperature of objects, providing thermal imaging. However, the raw output from most infrared cameras suffers from a strong, often limiting noise source called nonuniformity. Manufacturing imperfections in infrared focal planes lead to high pixel-to-pixel sensitivity to electronic bias, focal plane temperature, and other effects. The resulting imagery can only provide useful thermal imaging after a nonuniformity calibration has been performed. Traditionally, these calibrations are performed by momentarily blocking the field of view with a flat-temperature plate or blackbody cavity. However, because the pattern is a coupling of manufactured sensitivities with operational variations, periodic recalibration is required, sometimes on the order of tens of seconds. A class of computational methods called Scene-Based Nonuniformity Correction (SBNUC) has been researched for over 20 years, in which the nonuniformity calibration is estimated in digital processing by analysis of the video stream in the presence of camera motion. The most sophisticated SBNUC methods can completely and robustly eliminate the high-spatial-frequency component of nonuniformity with only an initial reference calibration or potentially no physical calibration. I will demonstrate a novel algorithm that advances these SBNUC techniques to support all spatial frequencies of nonuniformity correction. Long-wave infrared microgrid polarimeters are a class of camera that incorporate a microscale wire-grid polarizer directly affixed to each pixel of the focal plane. These cameras have the capability of simultaneously measuring thermal imagery and polarization in a robust integrated package with no moving parts. I will describe the necessary adaptations of my SBNUC method to operate on this class of sensor, as well as demonstrate SBNUC performance on LWIR polarimetry video collected on the UA mall.

  17. ADVANCED UTILITY SIMULATION MODEL, REPORT OF SENSITIVITY TESTING, CALIBRATION, AND MODEL OUTPUT COMPARISONS (VERSION 3.0)

    EPA Science Inventory

    The report gives results of activities relating to the Advanced Utility Simulation Model (AUSM): sensitivity testing, comparison with a mature electric utility model, and calibration to historical emissions. The activities were aimed at demonstrating AUSM's validity over input va...

  18. Sensitivity analysis of a ground-water-flow model

    USGS Publications Warehouse

    Torak, Lynn J.; ,

    1991-01-01

    A sensitivity analysis was performed on 18 hydrological factors affecting steady-state groundwater flow in the Upper Floridan aquifer near Albany, southwestern Georgia. Computations were based on a calibrated, two-dimensional, finite-element digital model of the stream-aquifer system and the corresponding data inputs. Flow-system sensitivity was analyzed by computing water-level residuals obtained from simulations involving individual changes to each hydrological factor. Hydrological factors to which computed water levels were most sensitive were those that produced the largest change in the sum-of-squares of residuals for the smallest change in factor value. Plots of the sum-of-squares of residuals against multiplier or additive values that effect change in the hydrological factors are used to evaluate the influence of each factor on the simulated flow system. The shapes of these 'sensitivity curves' indicate the importance of each hydrological factor to the flow system. Because the sensitivity analysis can be performed during the preliminary phase of a water-resource investigation, it can be used to identify the types of hydrological data required to accurately characterize the flow system prior to collecting additional data or making management decisions.
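
    A sensitivity curve of the kind described can be traced with a few lines of code: scale one factor by a multiplier, re-run the model, and record the sum-of-squares of residuals. The toy model and observations below are placeholders for the calibrated finite-element model.

    ```python
    # Tracing a 'sensitivity curve': sum-of-squares of water-level residuals
    # versus a multiplier applied to one hydrological factor (toy example).
    import numpy as np

    obs = np.array([52.0, 50.5, 49.0, 47.8])      # observed heads (m), hypothetical

    def simulated_heads(k_multiplier):
        """Stand-in for re-running the flow model with one factor scaled."""
        base = np.array([52.1, 50.4, 49.2, 47.6])
        return base - 1.5 * np.log(k_multiplier)  # fake response to scaling

    multipliers = np.linspace(0.5, 2.0, 25)
    ssr = [np.sum((simulated_heads(m) - obs) ** 2) for m in multipliers]
    for m, s in zip(multipliers[::6], ssr[::6]):
        print(f"multiplier={m:.2f}  SSR={s:.3f}")
    # A steep, narrow curve marks a factor the water levels are sensitive to.
    ```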

  19. Dimethylsulfide model calibration and parametric sensitivity analysis for the Greenland Sea

    NASA Astrophysics Data System (ADS)

    Qu, Bo; Gabric, Albert J.; Zeng, Meifang; Xi, Jiaojiao; Jiang, Limei; Zhao, Li

    2017-09-01

    Sea-to-air fluxes of marine biogenic aerosols have the potential to modify cloud microphysics and regional radiative budgets, and thus moderate Earth's warming. Polar regions play a critical role in the evolution of global climate. In this work, we use a well-established biogeochemical model to simulate the DMS flux from the Greenland Sea (20°W-10°E and 70°N-80°N) for the period 2003-2004. Parameter sensitivity analysis is employed to identify the most sensitive parameters in the model. A genetic algorithm (GA) technique is used for DMS model parameter calibration. Data from phase 5 of the Coupled Model Intercomparison Project (CMIP5) are used to drive the DMS model under 4 × CO2 conditions. DMS flux under quadrupled CO2 levels increases by more than 300% compared with late 20th century levels (1 × CO2). Reasons for the increase in DMS flux include changes in the ocean state, namely an increase in sea surface temperature (SST) and loss of sea ice, and an increase in DMS transfer velocity, especially in spring and summer. Such a large increase in DMS flux could slow the rate of warming in the Arctic via radiative budget changes associated with DMS-derived aerosols.
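
    As a hedged sketch of evolutionary parameter calibration of this kind, the example below fits two illustrative parameters of a seasonal DMS curve by minimizing squared error; scipy's differential evolution stands in for the paper's genetic algorithm, and both the model and the data are toys.

    ```python
    # Evolutionary calibration sketch: recover two parameters of a toy
    # seasonal DMS model from synthetic observations.
    import numpy as np
    from scipy.optimize import differential_evolution

    days = np.arange(0, 365, 5)
    obs_dms = 2.0 + 1.5 * np.sin(2 * np.pi * (days - 120) / 365)  # fake cycle

    def dms_model(params, t):
        amplitude, phase = params
        return 2.0 + amplitude * np.sin(2 * np.pi * (t - phase) / 365)

    def cost(params):
        return np.sum((dms_model(params, days) - obs_dms) ** 2)

    result = differential_evolution(cost, bounds=[(0.1, 5.0), (0.0, 365.0)], seed=1)
    print(result.x)    # should recover roughly (1.5, 120)
    ```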

  20. An analysis of cross-coupling of a multicomponent jet engine test stand using finite element modeling techniques

    NASA Technical Reports Server (NTRS)

    Schweikhard, W. G.; Singnoi, W. N.

    1985-01-01

    A two-axis thrust measuring system was analyzed by using a finite element computer program to determine the sensitivities of the thrust vectoring nozzle system to misalignment of the load cells and applied loads, and to the stiffness of the structural members. Three models were evaluated: (1) the basic measuring element and its internal calibration load cells; (2) the basic measuring element and its external load calibration equipment; and (3) the basic measuring element, external calibration load frame and the altitude facility support structure. Alignment of calibration loads was the greatest source of error for multiaxis thrust measuring systems. Uniform increases or decreases in the stiffness of the members, which might be caused by the selection of materials, have little effect on the accuracy of the measurements. The POLO-FINITE program was found to be a viable tool for designing and analyzing multiaxis thrust measurement systems. The response of the test stand to step inputs that might be encountered in thrust vectoring tests was determined. The dynamic analysis shows a potential problem for measuring the dynamic response characteristics of thrust vectoring systems because of the inherently light damping of the test stand.

  1. Online Calibration of the TPC Drift Time in the ALICE High Level Trigger

    NASA Astrophysics Data System (ADS)

    Rohr, David; Krzewicki, Mikolaj; Zampolli, Chiara; Wiechula, Jens; Gorbunov, Sergey; Chauvin, Alex; Vorobyev, Ivan; Weber, Steffen; Schweda, Kai; Lindenstruth, Volker

    2017-06-01

    A Large Ion Collider Experiment (ALICE) is one of the four major experiments at the Large Hadron Collider (LHC) at CERN. The high level trigger (HLT) is a compute cluster, which reconstructs collisions as recorded by the ALICE detector in real time. It employs a custom online data-transport framework to distribute data and workload among the compute nodes. ALICE employs subdetectors that are sensitive to environmental conditions such as pressure and temperature, e.g., the time projection chamber (TPC). A precise reconstruction of particle trajectories requires calibration of these detectors. Performing calibration in real time in the HLT improves the online reconstruction and renders certain offline calibration steps obsolete, speeding up offline physics analysis. For LHC Run 3, starting in 2020, when data reduction will rely on reconstructed data, online calibration becomes a necessity. Reconstructed particle trajectories form the basis for the calibration, making fast online tracking mandatory. The main detectors used for this purpose are the TPC and the Inner Tracking System. Reconstructing the trajectories in the TPC is the most compute-intensive step. We present several improvements to the ALICE HLT developed to facilitate online calibration. The main new development for online calibration is a wrapper that can run ALICE offline analysis and calibration tasks inside the HLT. In addition, we have added asynchronous processing capabilities to support long-running calibration tasks in the HLT framework, which otherwise runs event-synchronously. In order to improve resiliency, an isolated process performs the asynchronous operations such that even a fatal error does not disturb data taking. We have complemented the original loop-free HLT chain with ZeroMQ data-transfer components. The ZeroMQ components facilitate a feedback loop that inserts the calibration result created at the end of the chain back into tracking components at the beginning of the chain, after a short delay. All these new features are implemented in a general way, such that they have use cases aside from online calibration. In order to gather sufficient statistics for the calibration, the asynchronous calibration component must process enough events per time interval. Since the calibration is valid only for a certain time period, the delay until the feedback loop provides updated calibration data must not be too long. A first full-scale test of the online calibration functionality was performed during the 2015 heavy-ion run under real conditions. Since then, online calibration has been enabled and benchmarked in the 2016 proton-proton data taking. We present a timing analysis of this first online-calibration test, which concludes that the HLT is capable of online TPC drift time calibration fast enough to calibrate the tracking via the feedback loop. We compare the calibration results with the offline calibration and present a comparison of the residuals of the TPC cluster coordinates with respect to offline reconstruction.

  2. Search for the lepton-family-number nonconserving decay μ+→e+γ

    NASA Astrophysics Data System (ADS)

    Ahmed, M.; Amann, J. F.; Barlow, D.; Black, K.; Bolton, R. D.; Brooks, M. L.; Carius, S.; Chen, Y. K.; Chernyshev, A.; Concannon, H. M.; Cooper, M. D.; Cooper, P. S.; Crocker, J.; Dittmann, J. R.; Dzemidzic, M.; Empl, A.; Fisk, R. J.; Fleet, E.; Foreman, W.; Gagliardi, C. A.; Haim, D.; Hallin, A.; Hoffman, C. M.; Hogan, G. E.; Hughes, E. B.; Hungerford, E. V.; Jui, C. C.; Kim, G. J.; Knott, J. E.; Koetke, D. D.; Kozlowski, T.; Kroupa, M. A.; Kunselman, A. R.; Lan, K. A.; Laptev, V.; Lee, D.; Liu, F.; Manweiler, R. W.; Marshall, R.; Mayes, B. W.; Mischke, R. E.; Nefkens, B. M.; Nickerson, L. M.; Nord, P. M.; Oothoudt, M. A.; Otis, J. N.; Phelps, R.; Piilonen, L. E.; Pillai, C.; Pinsky, L.; Ritter, M. W.; Smith, C.; Stanislaus, T. D.; Stantz, K. M.; Szymanski, J. J.; Tang, L.; Tippens, W. B.; Tribble, R. E.; Tu, X. L.; van Ausdeln, L. A.; von Witch, W. H.; Whitehouse, D.; Wilkinson, C.; Wright, B.; Wright, S. C.; Zhang, Y.; Ziock, K. O.

    2002-06-01

    The MEGA experiment, which searched for the muon- and electron-number violating decay μ+→e+γ, is described. The spectrometer system, the calibrations, the data taking procedures, the data analysis, and the sensitivity of the experiment are discussed. The most stringent upper limit on the branching ratio, B(μ+→e+γ)<1.2×10-11 with 90% confidence, is derived from a likelihood analysis.

  3. Fusion neutron detector for time-of-flight measurements in z-pinch and plasma focus experiments.

    PubMed

    Klir, D; Kravarik, J; Kubes, P; Rezac, K; Litseva, E; Tomaszewski, K; Karpinski, L; Paduch, M; Scholz, M

    2011-03-01

    We have developed and tested sensitive neutron detectors for neutron time-of-flight measurements in z-pinch and plasma focus experiments with neutron emission times of tens of nanoseconds and with neutron yields between 10^6 and 10^12 per shot. The neutron detectors are composed of a BC-408 fast plastic scintillator and a Hamamatsu H1949-51 photomultiplier tube (PMT). During the calibration procedure, the PMT delay was determined for various operating voltages. The temporal resolution of the neutron detector was measured for the most commonly used PMT voltage of 1.4 kV. At the PF-1000 plasma focus, a novel method of acquiring a pulse height distribution has been used. This pulse height analysis made it possible to determine the single-neutron sensitivity for various neutron energies and to calibrate the neutron detector for absolute neutron yields at about 2.45 MeV.
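
    The pulse-height route to a yield calibration can be sketched as follows: at low yield, the spectrum of individual pulses gives the mean response per neutron, which then converts the integrated charge from a high-yield shot into a neutron count. All numbers below are toys, and an absolute yield would additionally require the detector's solid angle and efficiency.

    ```python
    # Single-neutron sensitivity from a pulse-height distribution (toy values).
    import numpy as np

    rng = np.random.default_rng(2)
    pulses = rng.normal(loc=1.0, scale=0.25, size=5000)   # single-neutron events (a.u.)
    hist, edges = np.histogram(pulses, bins=60)
    centers = 0.5 * (edges[:-1] + edges[1:])
    q_single = np.average(centers, weights=hist)          # mean response per neutron

    total_charge = 3.2e4                                  # integrated high-yield signal
    detected = total_charge / q_single                    # neutrons seen by the detector
    print(f"response/neutron: {q_single:.3f} a.u., detected neutrons ~ {detected:.3e}")
    ```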

  4. Influence of the quality of intraoperative fluoroscopic images on the spatial positioning accuracy of a CAOS system.

    PubMed

    Wang, Junqiang; Wang, Yu; Zhu, Gang; Chen, Xiangqian; Zhao, Xiangrui; Qiao, Huiting; Fan, Yubo

    2018-06-01

    Spatial positioning accuracy is a key issue in computer-assisted orthopaedic surgery (CAOS) systems. Since intraoperative fluoroscopic images are among the most important input data to a CAOS system, their quality should have a significant influence on its accuracy, but the regularities and mechanisms of this influence have yet to be studied. Two typical spatial positioning methods, a C-arm calibration-based method and a bi-planar positioning method, are used to study the influence of different image quality parameters, such as resolution, distortion, contrast and signal-to-noise ratio, on positioning accuracy. The error propagation rules of image error in the two spatial positioning methods are analyzed by the Monte Carlo method. Correlation analysis showed that resolution and distortion had a significant influence on spatial positioning accuracy. In addition, the C-arm calibration-based method was more sensitive to image distortion, while the bi-planar positioning method was more susceptible to image resolution. Image contrast and signal-to-noise ratio had no significant influence on spatial positioning accuracy. The Monte Carlo analysis showed that, in general, the bi-planar positioning method was more sensitive to image quality than the C-arm calibration-based method. The quality of intraoperative fluoroscopic images is a key issue for the spatial positioning accuracy of a CAOS system. Although the two typical positioning methods have very similar mathematical principles, they showed different sensitivities to different image quality parameters. The results of this research may help to create a realistic standard for intraoperative fluoroscopic images in CAOS systems. Copyright © 2018 John Wiley & Sons, Ltd.

  5. Polarization Calibration of the Chromospheric Lyman-Alpha SpectroPolarimeter for a 0.1% Polarization Sensitivity in the VUV Range. Part II: In-Flight Calibration

    NASA Astrophysics Data System (ADS)

    Giono, G.; Ishikawa, R.; Narukage, N.; Kano, R.; Katsukawa, Y.; Kubo, M.; Ishikawa, S.; Bando, T.; Hara, H.; Suematsu, Y.; Winebarger, A.; Kobayashi, K.; Auchère, F.; Trujillo Bueno, J.; Tsuneta, S.; Shimizu, T.; Sakao, T.; Cirtain, J.; Champey, P.; Asensio Ramos, A.; Štěpán, J.; Belluzzi, L.; Manso Sainz, R.; De Pontieu, B.; Ichimoto, K.; Carlsson, M.; Casini, R.; Goto, M.

    2017-04-01

    The Chromospheric Lyman-Alpha SpectroPolarimeter is a sounding rocket instrument designed to measure for the first time the linear polarization of the hydrogen Lyman-α line (121.6 nm). The instrument was successfully launched on 3 September 2015 and observations were conducted at the solar disc center and close to the limb during the five-minute flight. In this article, the disc center observations are used to provide an in-flight calibration of the instrument's spurious polarization. The derived in-flight spurious polarization is consistent with the spurious polarization levels determined during the pre-flight calibration, and a statistical analysis of the polarization fluctuations of solar origin is conducted to ensure a 0.014% precision on the spurious polarization. The combination of the pre-flight and in-flight polarization calibrations provides a complete picture of the instrument response matrix, and a proper error transfer method is used to confirm the achieved polarization accuracy. As a result, the unprecedented 0.1% polarization accuracy of the instrument in the vacuum ultraviolet is ensured by the polarization calibration.
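
    Once the response matrix is known, applying the calibration amounts to solving a small linear system per measurement. The sketch below inverts an illustrative near-identity 3x3 response with small spurious-polarization terms; the matrix values are invented, not CLASP's.

    ```python
    # Applying a calibrated polarimeter response matrix (illustrative values).
    import numpy as np

    X = np.array([[1.0000, 0.000, 0.000],    # response acting on (I, Q/I, U/I)
                  [0.0004, 0.950, 0.020],    # small I->Q spurious term (~0.04%)
                  [0.0003, -0.020, 0.950]])

    s_meas = np.array([1.0, 0.0021, -0.0008])   # measured (I, Q/I, U/I)
    s_true = np.linalg.solve(X, s_meas)         # calibrated Stokes components
    print(s_true)
    ```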

  6. A practical approach to spectral calibration of short wavelength infrared hyper-spectral imaging systems

    NASA Astrophysics Data System (ADS)

    Bürmen, Miran; Pernuš, Franjo; Likar, Boštjan

    2010-02-01

    Near-infrared spectroscopy is a promising, rapidly developing, reliable and noninvasive technique, used extensively in biomedicine and in the pharmaceutical industry. With the introduction of acousto-optic tunable filters (AOTF) and highly sensitive InGaAs focal plane sensor arrays, real-time high-resolution hyper-spectral imaging has become feasible for a number of new biomedical in vivo applications. However, due to the specificity of the AOTF technology and the lack of spectral calibration standardization, maintaining long-term stability and compatibility of the acquired hyper-spectral images across different systems is still a challenging problem. Efficiently solving both is essential, as the majority of methods for analysis of hyper-spectral images rely on a priori knowledge extracted from large spectral databases, serving as the basis for reliable qualitative or quantitative analysis of various biological samples. In this study, we propose and evaluate fast and reliable spectral calibration of hyper-spectral imaging systems in the short wavelength infrared spectral region. The proposed spectral calibration method is based on light sources or materials exhibiting distinct spectral features, which enable robust non-rigid registration of the acquired spectra. The calibration accounts for all of the components of a typical hyper-spectral imaging system, such as the AOTF, light source, lens and optical fibers. The obtained results indicated that practical, fast and reliable spectral calibration of hyper-spectral imaging systems is possible, thereby assuring long-term stability and inter-system compatibility of the acquired hyper-spectral images.

  7. Calibration of HST wide field camera for quantitative analysis of faint galaxy images

    NASA Technical Reports Server (NTRS)

    Ratnatunga, Kavan U.; Griffiths, Richard E.; Casertano, Stefano; Neuschaefer, Lyman W.; Wyckoff, Eric W.

    1994-01-01

    We present the methods adopted to optimize the calibration of images obtained with the Hubble Space Telescope (HST) Wide Field Camera (WFC) (1991-1993). Our main goal is to improve quantitative measurement of faint images, with special emphasis on the faint (I approximately 20-24 mag) stars and galaxies observed as a part of the Medium-Deep Survey. Several modifications to the standard calibration procedures have been introduced, including improved bias and dark images, and a new supersky flatfield obtained by combining a large number of relatively object-free Medium-Deep Survey exposures of random fields. The supersky flat has a pixel-to-pixel rms error of about 2.0% in F555W and of 2.4% in F785LP; large-scale variations are smaller than 1% rms. Overall, our modifications improve the quality of faint images with respect to the standard calibration by about a factor of five in photometric accuracy and about 0.3 mag in sensitivity, corresponding to about a factor of two in observing time. The relevant calibration images have been made available to the scientific community.

  8. Multi-Dimensional Calibration of Impact Dynamic Models

    NASA Technical Reports Server (NTRS)

    Horta, Lucas G.; Reaves, Mercedes C.; Annett, Martin S.; Jackson, Karen E.

    2011-01-01

    NASA Langley, under the Subsonic Rotary Wing Program, recently completed two helicopter tests in support of an in-house effort to study crashworthiness. As part of this effort, work is ongoing to investigate model calibration approaches and calibration metrics for impact dynamics models. Model calibration of impact dynamics problems has traditionally assessed model adequacy by comparing time histories from analytical predictions to test data at only a few critical locations. Although this approach provides a direct measure of the model's predictive capability, overall system behavior is only qualitatively assessed using full vehicle animations. In order to understand the spatial and temporal relationships of impact loads as they migrate throughout the structure, a more quantitative approach is needed. In this work, impact shapes derived from simulated time history data are used to recommend sensor placement and to assess model adequacy using time-based metrics and multi-dimensional orthogonality metrics. An approach for model calibration is presented that includes metric definitions, uncertainty bounds, parameter sensitivity, and numerical optimization to estimate parameters to reconcile test with analysis. The process is illustrated using simulated experiment data.

  9. Lamp mapping technique for independent determination of the water vapor mixing ratio calibration factor for a Raman lidar system

    NASA Astrophysics Data System (ADS)

    Venable, Demetrius D.; Whiteman, David N.; Calhoun, Monique N.; Dirisu, Afusat O.; Connell, Rasheen M.; Landulfo, Eduardo

    2011-08-01

    We have investigated a technique that allows for the independent determination of the water vapor mixing ratio calibration factor for a Raman lidar system. This technique utilizes a procedure whereby a light source of known spectral characteristics is scanned across the aperture of the lidar system's telescope and the overall optical efficiency of the system is determined. Direct analysis of the temperature-dependent differential scattering cross sections for vibration and vibration-rotation transitions (convolved with narrowband filters) along with the measured efficiency of the system, leads to a theoretical determination of the water vapor mixing ratio calibration factor. A calibration factor was also obtained experimentally from lidar measurements and radiosonde data. A comparison of the theoretical and experimentally determined values agrees within 5%. We report on the sensitivity of the water vapor mixing ratio calibration factor to uncertainties in parameters that characterize the narrowband transmission filters, the temperature-dependent differential scattering cross section, and the variability of the system efficiency ratios as the lamp is scanned across the aperture of the telescope used in the Howard University Raman Lidar system.
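
    The experimental route to the calibration factor reduces to a one-parameter fit: with radiosonde mixing ratios w and lidar signal ratios S_H2O/S_N2, the model w = k(S_H2O/S_N2) gives k by least squares through the origin. The numbers below are invented for illustration.

    ```python
    # Water vapor mixing ratio calibration factor from sonde/lidar pairs (toy data).
    import numpy as np

    signal_ratio = np.array([0.021, 0.035, 0.048, 0.060, 0.081])  # lidar S_H2O/S_N2
    w_sonde = np.array([2.9, 4.8, 6.7, 8.3, 11.2])                # sonde w (g/kg)

    # Zero-intercept least squares: k = sum(x*y) / sum(x^2).
    k = np.sum(signal_ratio * w_sonde) / np.sum(signal_ratio ** 2)
    print(f"calibration factor k = {k:.1f} g/kg per unit signal ratio")
    ```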

  10. Calibrationless parallel magnetic resonance imaging: a joint sparsity model.

    PubMed

    Majumdar, Angshul; Chaudhury, Kunal Narayan; Ward, Rabab

    2013-12-05

    State-of-the-art parallel MRI techniques either explicitly or implicitly require certain parameters to be estimated, e.g., the sensitivity map for SENSE and SMASH, and interpolation weights for GRAPPA and SPIRiT. Thus all these techniques are sensitive to the calibration (parameter estimation) stage. In this work, we have proposed a parallel MRI technique that does not require any calibration but yields reconstruction results that are on par with (or even better than) state-of-the-art methods in parallel MRI. Our proposed method requires solving non-convex analysis- and synthesis-prior joint-sparsity problems; this work also derives the algorithms for solving them. Experimental validation was carried out on two datasets: an eight-channel brain and an eight-channel Shepp-Logan phantom. Two sampling methods were used: variable-density random sampling and non-Cartesian radial sampling. For the brain data an acceleration factor of 4 was used, and for the phantom an acceleration factor of 6. The reconstruction results were quantitatively evaluated by the normalised mean squared error between the reconstructed image and the original. The qualitative evaluation was based on the actual reconstructed images. We compared our work with four state-of-the-art parallel imaging techniques: two calibrated methods (CS SENSE and l1SPIRiT) and two calibration-free techniques (Distributed CS and SAKE). Our method yields better reconstruction results than all of them.

  11. Comparison of laser ablation and dried solution aerosol as sampling systems in inductively coupled plasma mass spectrometry.

    PubMed

    Coedo, A G; Padilla, I; Dorado, M T

    2004-12-01

    This paper describes a study designed to determine the possibility of using a dried solution aerosol for calibration in laser ablation inductively coupled plasma mass spectrometry (LA-ICP-MS). The relative sensitivities of the tested materials mobilized by laser ablation and by aqueous nebulization were established, and the experimentally determined relative sensitivity factors (RSFs) were used in conjunction with aqueous calibration for the analysis of solid steel samples. For this purpose, a set of CRM carbon steel samples (SS-451/1 to SS-460/1) were sampled into an ICP-MS instrument by solution nebulization using a microconcentric nebulizer with membrane desolvation (D-MCN) and by laser ablation (LA). Both systems were operated with the same ICP-MS operating parameters and the analyte signals were compared. The RSF (desolvated aerosol response/ablated solid response) values were close to 1 for the analytes Cr, Ni, Co, V, and W, about 1.3 for Mo, and 1.7 for As, P, and Mn. Complementary tests were carried out using CRM SS-455/1 as a solid standard for one-point calibration, applying LAMTRACE software for data reduction and quantification. The analytical results are in good agreement with the certified values in all cases, showing that dried solution aerosols are a good alternative calibration system for laser ablation sampling.
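
    The way an RSF corrects an aqueous calibration for laser ablation sampling can be shown in three lines; the slope and signal below are invented, while the RSF value of about 1.3 for Mo is the one reported in the study.

    ```python
    # Quantifying a solid by LA-ICP-MS against an aqueous calibration via RSF.
    # RSF = (dried-aerosol response per unit conc.) / (ablation response per unit conc.)
    aqueous_slope = 1250.0    # counts per (ug/g), from nebulized standards (toy)
    rsf_mo = 1.3              # RSF for Mo reported in the study

    la_signal = 5.2e4         # counts from the ablated steel sample (toy)
    conc_mo = la_signal * rsf_mo / aqueous_slope    # ablation slope = aqueous/RSF
    print(f"Mo ~ {conc_mo:.1f} ug/g")
    ```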

  12. Quantitative estimation of α-PVP metabolites in urine by GC-APCI-QTOFMS with nitrogen chemiluminescence detection based on parent drug calibration.

    PubMed

    Mesihää, Samuel; Rasanen, Ilpo; Ojanperä, Ilkka

    2018-05-01

    Gas chromatography (GC) hyphenated with nitrogen chemiluminescence detection (NCD) and quadrupole time-of-flight mass spectrometry (QTOFMS) was applied for the first time to the quantitative analysis of new psychoactive substances (NPS) in urine, based on the N-equimolar response of NCD. A method was developed and validated to estimate the concentrations of three metabolites of the common stimulant NPS α-pyrrolidinovalerophenone (α-PVP) in spiked urine samples, simulating an analysis having no authentic reference standards for the metabolites and using the parent drug instead for quantitative calibration. The metabolites studied were OH-α-PVP (M1), 2″-oxo-α-PVP (M3), and N,N-bis-dealkyl-PVP (2-amino-1-phenylpentan-1-one; M5). Sample preparation involved liquid-liquid extraction with a mixture of ethyl acetate and butyl chloride at a basic pH and subsequent silylation of the sec-hydroxyl and prim-amino groups of M1 and M5, respectively. Simultaneous compound identification was based on the accurate masses of the protonated molecules for each compound by QTOFMS following atmospheric pressure chemical ionization. The accuracy of quantification of the parent-calibrated NCD method was compared with that of the corresponding parent-calibrated QTOFMS method, as well as with a reference QTOFMS method calibrated with the authentic reference standards. The NCD method produced an accuracy equal to that of the reference method for α-PVP, M3 and M5, while a higher negative bias (25%) was obtained for M1, best explained by recovery and stability issues. The performance of the parent-calibrated QTOFMS method was inferior to the reference method, with an especially high negative bias (60%) for M1. The NCD method enabled better quantitative precision than the QTOFMS methods. To evaluate the novel approach in casework, twenty post-mortem urine samples previously found positive for α-PVP were analyzed by the parent-calibrated NCD method and the reference QTOFMS method. The highest difference in the quantitative results between the two methods was only 33%, and the NCD method's precision as the coefficient of variation was better than 13%. The limit of quantification for the NCD method was approximately 0.25 μg/mL in urine, which generally allowed the analysis of α-PVP and the main metabolite M1. However, the sensitivity was not sufficient for the low concentrations of M3 and M5. Consequently, while having potential for instant analysis of NPS and metabolites at moderate concentrations without reference standards, the NCD method should be further developed for improved sensitivity to be more generally applicable. Copyright © 2018 Elsevier B.V. All rights reserved.
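
    The parent-calibration idea rests on the N-equimolar response: the NCD signal is proportional to moles of nitrogen, so a parent calibration curve can be converted to a metabolite concentration through molar masses and nitrogen counts. The sketch below assumes one nitrogen atom each for α-PVP (MW ~231.3) and OH-α-PVP (MW ~247.3); the peak area and slope are invented.

    ```python
    # Parent-drug calibration with an N-equimolar detector (illustrative values).
    def metabolite_conc(peak_area, parent_slope, mw_parent, n_parent, mw_met, n_met):
        """parent_slope: area per (ug/mL) of parent from its calibration curve."""
        area_per_umol_n = parent_slope * mw_parent / n_parent  # area per umol N/mL
        umol_n = peak_area / area_per_umol_n                   # umol N/mL in the peak
        return umol_n / n_met * mw_met                         # ug/mL of metabolite

    print(metabolite_conc(peak_area=8.0e3, parent_slope=1.6e3,
                          mw_parent=231.3, n_parent=1, mw_met=247.3, n_met=1))
    ```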

  13. Sensitive analysis of blonanserin, a novel antipsychotic agent, in human plasma by ultra-performance liquid chromatography-tandem mass spectrometry.

    PubMed

    Ogawa, Tadashi; Hattori, Hideki; Kaneko, Rina; Ito, Kenjiro; Iwai, Masayo; Mizutani, Yoko; Arinobu, Tetsuya; Ishii, Akira; Suzuki, Osamu; Seno, Hiroshi

    2010-01-01

    A rapid and sensitive method for the analysis of blonanserin in human plasma by ultra-performance liquid chromatography-tandem mass spectrometry is presented. After pretreatment of a plasma sample by solid-phase extraction, blonanserin was analyzed by the system with a C18 column. This method gave satisfactory recovery rates, reproducibility, and good linearity of the calibration curve in the range of 0.01-10.0 ng/mL for quality control samples spiked with blonanserin. The detection limit was as low as 1 pg/mL. This method appears very useful in forensic and clinical toxicology and in pharmacokinetic studies.

  14. High purity polyimide analysis by solid sampling graphite furnace atomic absorption spectrometry

    NASA Astrophysics Data System (ADS)

    Santos, Rafael F.; Carvalho, Gabriel S.; Duarte, Fabio A.; Bolzan, Rodrigo C.; Flores, Erico M. M.

    2017-03-01

    In this work, Cr, Cu, Mn, Na and Ni were determined in high-purity polyimides (99.5%) by solid sampling graphite furnace atomic absorption spectrometry (SS-GFAAS) using a Zeeman-effect background correction system with variable magnetic field, making possible simultaneous measurement at high or low sensitivity. The following analytical parameters were evaluated: pyrolysis and atomization temperatures, feasibility of calibration with aqueous solutions, linear calibration range, sample mass range and the use of chemical modifiers. Calibration with aqueous standard solutions was feasible for all analytes. No under- or overestimated results were observed, and up to 10 mg of sample could be introduced on the platform for the determination of Cr, Cu, Mn, Na and Ni. The relative standard deviation ranged from 3 to 20%. The limits of detection (LODs) achieved using the high-sensitivity mode were as low as 7.0, 2.5, 1.7, 17 and 0.12 ng g-1 for Cr, Cu, Mn, Na and Ni, respectively. No addition of a chemical modifier was necessary, except for Mn determination, where Pd was required. The accuracy was evaluated by analyte spiking and by comparison of the results with those obtained by inductively coupled plasma optical emission spectrometry and inductively coupled plasma mass spectrometry after microwave-assisted digestion in a single reaction chamber system, and also by neutron activation analysis. No difference was observed between the results obtained by SS-GFAAS and those obtained by these independent techniques. The SS-GFAAS method showed several advantages, such as the determination of metallic contaminants in high-purity polyimides with practically no sample preparation, very low LODs, calibration with aqueous standards and determination over a wide concentration range.

  15. Positioning system for single or multi-axis sensitive instrument calibration and calibration system for use therewith

    NASA Technical Reports Server (NTRS)

    Finley, Tom D. (Inventor); Parker, Peter A. (Inventor)

    2008-01-01

    A positioning and calibration system are provided for use in calibrating a single- or multi-axis sensitive instrument, such as an inclinometer. The positioning system includes a positioner that defines six planes of tangential contact. A mounting region within the six planes is adapted to have an inclinometer coupled thereto. The positioning system also includes means for defining first and second flat surfaces that are approximately perpendicular to one another, with the first surface adapted to be oriented relative to a local or induced reference field of interest to the instrument being calibrated, such as a gravitational vector. The positioner is positioned such that one of its six planes tangentially rests on the first flat surface and another of its six planes tangentially contacts the second flat surface. A calibration system is formed when the positioning system is used with a data collector and processor.

  16. Probabilistic calibration of the distributed hydrological model RIBS applied to real-time flood forecasting: the Harod river basin case study (Israel)

    NASA Astrophysics Data System (ADS)

    Nesti, Alice; Mediero, Luis; Garrote, Luis; Caporali, Enrica

    2010-05-01

    An automatic probabilistic calibration method for distributed rainfall-runoff models is presented. The high number of parameters in distributed hydrologic models places special demands on the optimization procedure used to estimate model parameters. With the proposed technique it is possible to reduce the complexity of calibration while maintaining adequate model predictions. The first step of the calibration procedure, for the main model parameters, is done manually with the aim of identifying their variation ranges. Afterwards a Monte Carlo technique is applied, which consists of repeated model simulations with randomly generated parameters. The Monte Carlo Analysis Toolbox (MCAT) includes a number of analysis methods to evaluate the results of these Monte Carlo parameter sampling experiments. The study investigates the use of a global sensitivity analysis as a screening tool to reduce the parametric dimensionality of multi-objective hydrological model calibration problems, while maximizing the information extracted from hydrological response data. The method is applied to the calibration of the RIBS flood forecasting model in the Harod river basin in Israel. The Harod basin covers an area of 180 km2. The catchment has a Mediterranean climate and is mainly characterized by a desert landscape, with a soil that is able to absorb large quantities of rainfall and at the same time is capable of generating high discharge peaks. Radar rainfall data with 6-minute temporal resolution are available as input to the model. The aim of the study is the validation of the model for real-time flood forecasting, in order to evaluate the benefits of improved precipitation forecasting within the FLASH European project.

  17. MODIS airborne simulator visible and near-infrared calibration, 1991 FIRE-Cirrus field experiment. Calibration version: FIRE King 1.1

    NASA Technical Reports Server (NTRS)

    Arnold, G. Thomas; Fitzgerald, Michael; Grant, Patrick S.; King, Michael D.

    1994-01-01

    Calibration of the visible and near-infrared channels of the MODIS Airborne Simulator (MAS) is derived from observations of a calibrated light source. For the 1991 FIRE-Cirrus field experiment, the calibrated light source was the NASA Goddard 48-inch integrating hemisphere. Laboratory tests during the FIRE-Cirrus field experiment were conducted to calibrate the hemisphere and then to transfer the calibration from the hemisphere to the MAS. The purpose of this report is to summarize the FIRE-Cirrus hemisphere calibration, and then describe how the MAS was calibrated from observations of the hemisphere data. All MAS calibration measurements are presented, and the determination of the MAS calibration coefficients (raw counts to radiance conversion) is discussed. Thermal sensitivity of the MAS visible and near-infrared calibration is also discussed. Typically, the MAS in flight is 30 to 60 degrees C colder than during the room-temperature laboratory calibration. Results from in-flight temperature measurements and tests of the MAS in a cold chamber are given, and from these, equations are derived to adjust the MAS in-flight data to the values they would have at laboratory conditions. For FIRE-Cirrus data, only channels 3 through 6 were found to be temperature sensitive. The final section of this report describes comparisons to an independent MAS (room temperature) calibration by Ames personnel using their 30-inch integrating sphere.
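
    The two steps described, a counts-to-radiance coefficient from the hemisphere observations and a thermal adjustment of in-flight counts back to laboratory temperature, can be sketched as follows; every coefficient below is a placeholder, and the report's actual correction equations are channel-specific.

    ```python
    # Counts-to-radiance calibration with a toy linear thermal correction.
    hemisphere_radiance = 152.0    # radiance of the calibrated source (toy value)
    counts_lab = 3800.0            # MAS counts viewing the hemisphere, dark-corrected
    cal_coeff = hemisphere_radiance / counts_lab     # radiance per count

    def counts_at_lab_temp(counts_flight, t_flight_c, t_lab_c=20.0, slope=0.002):
        """Toy linear correction: fractional count change per deg C of cooling."""
        return counts_flight * (1.0 + slope * (t_lab_c - t_flight_c))

    radiance = cal_coeff * counts_at_lab_temp(3500.0, t_flight_c=-25.0)
    print(f"{radiance:.1f} (same radiance units as the source)")
    ```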

  18. Use of local noise power spectrum and wavelet analysis in quantitative image quality assurance for EPIDs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, Soyoung

    Purpose: To investigate the use of the local noise power spectrum (NPS) to characterize image noise and wavelet analysis to isolate defective pixels and inter-subpanel flat-fielding artifacts for quantitative quality assurance (QA) of electronic portal imaging devices (EPIDs). Methods: A total of 93 image sets including custom-made bar-pattern images and open exposure images were collected from four iViewGT a-Si EPID systems over three years. Global quantitative metrics such as the modulation transfer function (MTF), NPS, and detective quantum efficiency (DQE) were computed for each image set. The local NPS was also calculated for individual subpanels by sampling regions of interest within each subpanel of the EPID. The 1D NPS, obtained by radially averaging the 2D NPS, was fitted to a power-law function. The r-square value of the linear regression analysis was used as a singular metric to characterize the noise properties of individual subpanels of the EPID. The sensitivity of the local NPS was first compared with the global quantitative metrics using historical image sets. It was then compared with two commonly used commercial QA systems with images collected after applying two different EPID calibration methods (single-level gain and multilevel gain). To detect isolated defective pixels and inter-subpanel flat-fielding artifacts, the Haar wavelet transform was applied to the images. Results: Global quantitative metrics including MTF, NPS, and DQE showed little change over the period of data collection. On the contrary, a strong correlation between the local NPS (r-square values) and the variation of the EPID noise condition was observed. The local NPS analysis indicated image quality improvement, with the r-square values increasing from 0.80 ± 0.03 (before calibration) to 0.85 ± 0.03 (after single-level gain calibration) and to 0.96 ± 0.03 (after multilevel gain calibration), while the commercial QA systems failed to distinguish the image quality improvement between the two calibration methods. With wavelet analysis, defective pixels and inter-subpanel flat-fielding artifacts were clearly identified as spikes after thresholding the inversely transformed images. Conclusions: The proposed local NPS (r-square values) showed superior sensitivity to the noise-level variations of individual subpanels compared with global quantitative metrics such as MTF, NPS, and DQE. Wavelet analysis was effective in detecting isolated defective pixels and inter-subpanel flat-fielding artifacts. The proposed methods are promising for the early detection of imaging artifacts of EPIDs.
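
    The local-NPS metric can be prototyped compactly: estimate the 2-D NPS of a uniform region of interest, radially average it to 1-D, fit a straight line in log-log space, and report the r-square. The sketch below uses smoothed white noise as a stand-in for a flat-field EPID subpanel image.

    ```python
    # r-square of a power-law fit to the radially averaged noise power spectrum.
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def nps_r_square(roi):
        roi = roi - roi.mean()
        nps2d = np.abs(np.fft.fftshift(np.fft.fft2(roi))) ** 2 / roi.size
        cy, cx = nps2d.shape[0] // 2, nps2d.shape[1] // 2
        y, x = np.indices(nps2d.shape)
        r = np.hypot(y - cy, x - cx).astype(int)
        radial = np.bincount(r.ravel(), nps2d.ravel()) / np.bincount(r.ravel())
        f = np.arange(1, min(cy, cx))                 # skip DC, stay inside the image
        logf, logp = np.log(f), np.log(radial[f])
        slope, intercept = np.polyfit(logf, logp, 1)  # straight line in log-log space
        resid = logp - (slope * logf + intercept)
        return 1.0 - resid.var() / logp.var()

    rng = np.random.default_rng(3)
    roi = gaussian_filter(rng.normal(0.0, 2.0, (128, 128)), sigma=1.5)  # correlated noise
    print(f"r^2 = {nps_r_square(roi):.3f}")
    ```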

  19. Deuterium-tritium neutron yield measurements with the 4.5 m neutron-time-of-flight detectors at NIF.

    PubMed

    Moran, M J; Bond, E J; Clancy, T J; Eckart, M J; Khater, H Y; Glebov, V Yu

    2012-10-01

    The first several campaigns of laser fusion experiments at the National Ignition Facility (NIF) included a family of high-sensitivity scintillator/photodetector neutron-time-of-flight (nTOF) detectors for measuring deuterium-deuterium (DD) and DT neutron yields. The detectors provided consistent neutron yield (Yn) measurements from below 10^9 (DD) to nearly 10^15 (DT). The detectors initially demonstrated detector-to-detector Yn precisions better than 5%, but lacked in situ absolute calibrations. Recent experiments at NIF now have provided in situ DT yield calibration data that establish the absolute sensitivity of the 4.5 m differential tissue harmonic imaging (DTHI) detector with an accuracy of ±10% and a precision of ±1%. The 4.5 m nTOF calibration measurements have also helped to establish improved detector impulse response functions and data analysis methods, which have contributed to improving the accuracy of the Yn measurements. These advances have also helped to extend the usefulness of nTOF measurements of ion temperature and downscattered neutron ratio (neutron yield at 10-12 MeV divided by yield at 13-15 MeV) with other nTOF detectors.

  20. Impact of influent data frequency and model structure on the quality of WWTP model calibration and uncertainty.

    PubMed

    Cierkens, Katrijn; Plano, Salvatore; Benedetti, Lorenzo; Weijers, Stefan; de Jonge, Jarno; Nopens, Ingmar

    2012-01-01

    Application of activated sludge models (ASMs) to full-scale wastewater treatment plants (WWTPs) is still hampered by the problem of calibrating these over-parameterised models. This requires either expert knowledge or global methods that explore a large parameter space. However, a better balance in structure between the submodels (ASM, hydraulic, aeration, etc.) and improved quality of influent data result in much smaller calibration efforts. In this contribution, a methodology is proposed that links data frequency and model structure to calibration quality and output uncertainty. It is composed of defining the model structure, the input data, an automated calibration, confidence interval computation and uncertainty propagation to the model output. Apart from the last step, the methodology is applied to an existing WWTP using three models differing only in the aeration submodel. A sensitivity analysis was performed on all models, allowing the ranking of the most important parameters to select in the subsequent calibration step. The aeration submodel proved very important for obtaining good NH4 predictions. Finally, the impact of data frequency was explored. Lowering the frequency resulted in larger deviations of parameter estimates from their default values and larger confidence intervals. Autocorrelation due to high-frequency calibration data has an opposite effect on the confidence intervals. The proposed methodology opens doors to facilitate and improve calibration efforts and to design measurement campaigns.

  1. Multielevation calibration of frequency-domain electromagnetic data

    USGS Publications Warehouse

    Minsley, Burke J.; Kass, M. Andy; Hodges, Greg; Smith, Bruce D.

    2014-01-01

    Systematic calibration errors must be taken into account because they can substantially impact the accuracy of inverted subsurface resistivity models derived from frequency-domain electromagnetic data, resulting in potentially misleading interpretations. We have developed an approach that uses data acquired at multiple elevations over the same location to assess calibration errors. A significant advantage is that this method does not require prior knowledge of subsurface properties from borehole or ground geophysical data (though these can be readily incorporated if available), and it is therefore well suited to remote areas. The multielevation data were used to solve for calibration parameters and a single subsurface resistivity model that are self-consistent over all elevations. The deterministic and Bayesian formulations of the multielevation approach illustrate parameter sensitivity and uncertainty using synthetic- and field-data examples. Multiplicative calibration errors (gain and phase) were found to be better resolved at high frequencies and when data were acquired over a relatively conductive area, whereas additive errors (bias) were reasonably resolved over conductive and resistive areas at all frequencies. The Bayesian approach outperformed the deterministic approach when estimating calibration parameters using multielevation data at a single location; however, joint analysis of multielevation data at multiple locations using the deterministic algorithm yielded the most accurate estimates of calibration parameters. Inversion results using calibration-corrected data revealed marked improvement in misfit, lending added confidence to the interpretation of these models.
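
    The heart of the multielevation idea is that data from several flight heights over one location must be explained by a single subsurface model plus the calibration terms, which makes those terms estimable. The sketch below jointly fits a halfspace resistivity and an additive bias with a deliberately simplified forward model; a real implementation would use a 1-D frequency-domain EM kernel and also solve for gain and phase.

    ```python
    # Joint estimation of one subsurface model and an additive bias from
    # multielevation data (toy forward model, invented numbers).
    import numpy as np
    from scipy.optimize import least_squares

    def forward(rho, h):
        """Toy EM response over a halfspace of resistivity rho at height h:
        decays with height, grows with conductivity (1/rho)."""
        return 1e4 / (rho * (1.0 + h / 30.0) ** 3)

    heights = np.array([30.0, 45.0, 60.0, 90.0])
    true_rho, true_bias = 50.0, 12.0
    data = forward(true_rho, heights) + true_bias     # biased observations

    def residual(p):
        rho, bias = p
        return forward(rho, heights) + bias - data

    sol = least_squares(residual, x0=[100.0, 0.0])
    print(sol.x)   # ~ (50, 12): one model and one bias reconcile all elevations
    ```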

  2. Calibrating Detailed Chemical Analysis of M dwarfs

    NASA Astrophysics Data System (ADS)

    Veyette, Mark; Muirhead, Philip Steven; Mann, Andrew; Brewer, John; Allard, France; Homeier, Derek

    2018-01-01

    The ability to perform detailed chemical analysis of Sun-like F-, G-, and K-type stars is a powerful tool with many applications, including studying the chemical evolution of the Galaxy, assessing membership in stellar kinematic groups, and constraining planet formation theories. Unfortunately, complications in modeling cooler stellar atmospheres have hindered similar analysis of M-dwarf stars. Large surveys of FGK abundances play an important role in developing methods to measure the compositions of M dwarfs by providing benchmark FGK stars that have widely separated M dwarf companions. These systems allow us to empirically calibrate metallicity-sensitive features in M dwarf spectra. However, current methods to measure metallicity in M dwarfs from moderate-resolution spectra are limited to measuring overall metallicity and largely rely on astrophysical abundance correlations in stellar populations. In this talk, I will discuss how large, homogeneous catalogs of precise FGK abundances are crucial to advancing chemical analysis of M dwarfs beyond overall metallicity to direct measurements of individual elemental abundances. I will present a new method to analyze high-resolution NIR spectra of M dwarfs that employs an empirical calibration of synthetic M dwarf spectra to infer effective temperature, Fe abundance, and Ti abundance. This work is a step toward detailed chemical analysis of M dwarfs at a precision similar to that achieved for FGK stars.

  3. An Accurate Temperature Correction Model for Thermocouple Hygrometers 1

    PubMed Central

    Savage, Michael J.; Cass, Alfred; de Jager, James M.

    1982-01-01

    Numerous water relation studies have used thermocouple hygrometers routinely. However, the accurate temperature correction of hygrometer calibration curve slopes seems to have been largely neglected in both psychrometric and dewpoint techniques. In the case of thermocouple psychrometers, two temperature correction models are proposed, each based on measurement of the thermojunction radius and calculation of the theoretical voltage sensitivity to changes in water potential. The first model relies on calibration at a single temperature and the second at two temperatures. Both these models were more accurate than the temperature correction models currently in use for four psychrometers calibrated over a range of temperatures (15-38°C). The model based on calibration at two temperatures is superior to that based on only one calibration. The model proposed for dewpoint hygrometers is similar to that for psychrometers. It is based on the theoretical voltage sensitivity to changes in water potential. Comparison with empirical data from three dewpoint hygrometers calibrated at four different temperatures indicates that these instruments need only be calibrated at, e.g. 25°C, if the calibration slopes are corrected for temperature. PMID:16662241

  4. Revisiting Short-Wave-Infrared (SWIR) Bands for Atmospheric Correction in Coastal Waters

    NASA Technical Reports Server (NTRS)

    Pahlevan, Nima; Roger, Jean-Claude; Ahmad, Ziauddin

    2017-01-01

    The shortwave infrared (SWIR) bands on existing Earth Observing missions like MODIS were designed to meet land and atmospheric science requirements. The future geostationary and polar-orbiting ocean color missions, however, require highly sensitive SWIR bands (greater than 1550nm) to allow for a precise removal of aerosol contributions. This will allow for reasonable retrievals of the remote sensing reflectance (R(sub rs)) using standard NASA atmospheric corrections over turbid coastal waters. Designing, fabricating, and maintaining high-performance SWIR bands at very low signal levels bear significant costs on dedicated ocean color missions. This study aims at providing a full analysis of the utility of alternative SWIR bands within the 1600nm atmospheric window if the bands within the 2200nm window were to be excluded due to engineering/cost constraints. Following a series of sensitivity analyses for various spectral band configurations as a function of water vapor amount, we chose spectral bands centered at 1565 and 1675nm as suitable alternative bands within the 1600nm window for a future geostationary imager. The sensitivity of this band combination to different aerosol conditions, calibration uncertainties, and extreme water turbidity was studied and compared with that of all band combinations available on existing polar-orbiting missions. The combination of the alternative channels was shown to be as sensitive to test aerosol models as existing near-infrared (NIR) band combinations (e.g., 748 and 869nm) over clear open ocean waters. It was further demonstrated that while in extremely turbid waters the 1565/1675 band pair yields R(sub rs) retrievals as good as those derived from all other existing SWIR band pairs (greater than 1550nm), their total calibration uncertainties must be less than 1% to meet current science requirements for ocean color retrievals (i.e., delta R(sub rs) (443) less than 5%). We further show that aerosol removal using the NIR and SWIR bands (available on the existing polar-orbiting missions) can be very sensitive to calibration uncertainties. This underscores the need to monitor the calibration of these bands to ensure consistent multi-mission ocean color products in coastal/inland waters.

  5. Revisiting short-wave-infrared (SWIR) bands for atmospheric correction in coastal waters.

    PubMed

    Pahlevan, Nima; Roger, Jean-Claude; Ahmad, Ziauddin

    2017-03-20

    The shortwave infrared (SWIR) bands on existing Earth Observing missions like MODIS were designed to meet land and atmospheric science requirements. The future geostationary and polar-orbiting ocean color missions, however, require highly sensitive SWIR bands (> 1550nm) to allow for a precise removal of aerosol contributions. This will allow for reasonable retrievals of the remote sensing reflectance (Rrs) using standard NASA atmospheric corrections over turbid coastal waters. Designing, fabricating, and maintaining high-performance SWIR bands at very low signal levels bear significant costs on dedicated ocean color missions. This study aims at providing a full analysis of the utility of alternative SWIR bands within the 1600nm atmospheric window if the bands within the 2200nm window were to be excluded due to engineering/cost constraints. Following a series of sensitivity analyses for various spectral band configurations as a function of water vapor amount, we chose spectral bands centered at 1565 and 1675nm as suitable alternative bands within the 1600nm window for a future geostationary imager. The sensitivity of this band combination to different aerosol conditions, calibration uncertainties, and extreme water turbidity was studied and compared with that of all band combinations available on existing polar-orbiting missions. The combination of the alternative channels was shown to be as sensitive to test aerosol models as existing near-infrared (NIR) band combinations (e.g., 748 and 869nm) over clear open ocean waters. It was further demonstrated that while in extremely turbid waters the 1565/1675 band pair yields Rrs retrievals as good as those derived from all other existing SWIR band pairs (> 1550nm), their total calibration uncertainties must be < 1% to meet current science requirements for ocean color retrievals (i.e., Δ Rrs (443) < 5%). We further show that aerosol removal using the NIR and SWIR bands (available on the existing polar-orbiting missions) can be very sensitive to calibration uncertainties. This underscores the need to monitor the calibration of these bands to ensure consistent multi-mission ocean color products in coastal/inland waters.

  6. High-speed spectral calibration by complex FIR filter in phase-sensitive optical coherence tomography.

    PubMed

    Kim, Sangmin; Raphael, Patrick D; Oghalai, John S; Applegate, Brian E

    2016-04-01

    Swept-laser sources offer a number of advantages for Phase-sensitive Optical Coherence Tomography (PhOCT). However, inter- and intra-sweep variability leads to calibration errors that adversely affect phase sensitivity. While there are several approaches to overcoming this problem, our preferred method is to simply calibrate every sweep of the laser. This approach offers high accuracy and phase stability at the expense of a substantial processing burden. In this approach, the Hilbert phase of the interferogram from a reference interferometer provides the instantaneous wavenumber of the laser, but is computationally expensive. Fortunately, the Hilbert transform may be approximated by a Finite Impulse-Response (FIR) filter. Here we explore the use of several FIR filter based Hilbert transforms for calibration, explicitly considering the impact of filter choice on phase sensitivity and OCT image quality. Our results indicate that the complex FIR filter approach is the most robust and accurate among those considered. It provides similar image quality and slightly better phase sensitivity than the traditional FFT-IFFT based Hilbert transform while consuming fewer resources in an FPGA implementation. We also explored utilizing the Hilbert magnitude of the reference interferogram to calculate an ideal window function for spectral amplitude calibration. The ideal window function is designed to carefully control sidelobes on the axial point spread function. We found that after a simple chromatic correction, calculating the window function using the complex FIR filter and the reference interferometer gave similar results to window functions calculated using a mirror sample and the FFT-IFFT Hilbert transform. Hence, the complex FIR filter can enable accurate and high-speed calibration of the magnitude and phase of spectral interferograms.

  7. High-speed spectral calibration by complex FIR filter in phase-sensitive optical coherence tomography

    PubMed Central

    Kim, Sangmin; Raphael, Patrick D.; Oghalai, John S.; Applegate, Brian E.

    2016-01-01

    Swept-laser sources offer a number of advantages for Phase-sensitive Optical Coherence Tomography (PhOCT). However, inter- and intra-sweep variability leads to calibration errors that adversely affect phase sensitivity. While there are several approaches to overcoming this problem, our preferred method is to simply calibrate every sweep of the laser. This approach offers high accuracy and phase stability at the expense of a substantial processing burden. In this approach, the Hilbert phase of the interferogram from a reference interferometer provides the instantaneous wavenumber of the laser, but is computationally expensive. Fortunately, the Hilbert transform may be approximated by a Finite Impulse-Response (FIR) filter. Here we explore the use of several FIR filter based Hilbert transforms for calibration, explicitly considering the impact of filter choice on phase sensitivity and OCT image quality. Our results indicate that the complex FIR filter approach is the most robust and accurate among those considered. It provides similar image quality and slightly better phase sensitivity than the traditional FFT-IFFT based Hilbert transform while consuming fewer resources in an FPGA implementation. We also explored utilizing the Hilbert magnitude of the reference interferogram to calculate an ideal window function for spectral amplitude calibration. The ideal window function is designed to carefully control sidelobes on the axial point spread function. We found that after a simple chromatic correction, calculating the window function using the complex FIR filter and the reference interferometer gave similar results to window functions calculated using a mirror sample and the FFT-IFFT Hilbert transform. Hence, the complex FIR filter can enable accurate and high-speed calibration of the magnitude and phase of spectral interferograms. PMID:27446666
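
    A Python sketch of the FIR-based Hilbert phase extraction the two records above describe: a type-III FIR Hilbert transformer (designed here with scipy's Remez routine) produces the quadrature component, and the unwrapped analytic phase of a reference-interferometer fringe gives the instantaneous wavenumber used for resampling. The sweep model, filter length, and band edges are illustrative assumptions.

        import numpy as np
        from scipy.signal import remez

        # Synthetic reference-interferometer fringe from a slightly nonlinear sweep.
        n = np.arange(4096)
        phase_true = 2 * np.pi * (0.05 * n + 2e-6 * n**2)
        fringe = np.cos(phase_true)

        # Odd-length (type III) FIR Hilbert transformer via the Remez exchange
        # algorithm; band edges in normalized frequency (fs = 1).
        numtaps = 63
        h = remez(numtaps, [0.05, 0.45], [1], type='hilbert', fs=1.0)
        delay = (numtaps - 1) // 2            # group delay of the linear-phase filter

        # Analytic signal: real part aligned to the filter delay, plus j times
        # the FIR-filtered quadrature part.
        quad = np.convolve(fringe, h, mode='full')[delay:delay + fringe.size]
        analytic = fringe + 1j * quad

        # Unwrapped Hilbert phase -> instantaneous wavenumber (up to scale),
        # used to resample the OCT interferogram onto a uniform k grid.
        k_inst = np.unwrap(np.angle(analytic))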

  8. Sensitivity Analysis of the Land Surface Model NOAH-MP for Different Model Fluxes

    NASA Astrophysics Data System (ADS)

    Mai, Juliane; Thober, Stephan; Samaniego, Luis; Branch, Oliver; Wulfmeyer, Volker; Clark, Martyn; Attinger, Sabine; Kumar, Rohini; Cuntz, Matthias

    2015-04-01

    Land Surface Models (LSMs) use a multitude of process descriptions to represent the carbon, energy and water cycles. They are highly complex and computationally expensive. Practitioners, however, are often only interested in specific outputs of the model such as latent heat or surface runoff. In model applications like parameter estimation, the most important parameters are then chosen by experience or expert knowledge. Hydrologists interested in surface runoff therefore mostly choose soil parameters, while biogeochemists interested in carbon fluxes focus on vegetation parameters. However, this might lead to the omission of parameters that are important, for example, through strong interactions with the parameters chosen. It also happens during model development that some process descriptions contain fixed values, which are supposedly unimportant parameters. These hidden parameters normally remain undetected although they might be highly relevant during model calibration. Sensitivity analyses are used to identify informative model parameters for a specific model output. Standard methods for sensitivity analysis such as Sobol indices require large numbers of model evaluations, specifically in the case of many model parameters. We hence propose to first use a recently developed inexpensive sequential screening method based on Elementary Effects that has proven to identify the relevant informative parameters. This reduces the number of parameters and therefore of model evaluations for subsequent analyses such as sensitivity analysis or model calibration. In this study, we quantify parametric sensitivities of the land surface model NOAH-MP, a state-of-the-art LSM used at regional scale as the land surface scheme of the atmospheric Weather Research and Forecasting Model (WRF). NOAH-MP contains multiple process parameterizations yielding a considerable number of parameters (~100). Sensitivities for the three model outputs (a) surface runoff, (b) soil drainage and (c) latent heat are calculated on twelve Model Parameter Estimation Experiment (MOPEX) catchments ranging in size from 1020 to 4421 km2. This allows investigation of parametric sensitivities for distinct hydro-climatic characteristics, emphasizing different land-surface processes. The sequential screening identifies the most informative parameters of NOAH-MP for different model output variables. The number of parameters is reduced substantially for all three model outputs, to approximately 25. The subsequent Sobol method quantifies the sensitivities of these informative parameters. The study demonstrates the existence of sensitive, important parameters in almost all parts of the model irrespective of the considered output. Soil parameters, e.g., are informative for all three output variables, whereas plant parameters are not only informative for latent heat but also for soil drainage, because soil drainage is strongly coupled to transpiration through the soil water balance. These results contrast with the choice of only soil parameters in hydrological studies and only plant parameters in biogeochemical ones. The sequential screening identified several important hidden parameters that carry large sensitivities and hence have to be included during model calibration.
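
    A compact numpy sketch of the Elementary Effects screening idea referenced above: one-at-a-time perturbations from random base points give a cheap importance measure per parameter. The toy model, step size, and parameter ranges are placeholders, not NOAH-MP.

        import numpy as np

        def elementary_effects(model, bounds, n_traj=20, seed=0):
            # Radial one-at-a-time (Morris-type) elementary effects.
            # model  : callable mapping a parameter vector to a scalar output
            # bounds : (k, 2) array of lower/upper parameter bounds
            # Returns mu_star, the mean absolute elementary effect per parameter.
            rng = np.random.default_rng(seed)
            k = len(bounds)
            lo, hi = bounds[:, 0], bounds[:, 1]
            delta = 0.2                                  # step in unit-scaled space
            ee = np.zeros((n_traj, k))
            for t in range(n_traj):
                x = rng.uniform(0, 1 - delta, size=k)    # random base point
                y0 = model(lo + x * (hi - lo))
                for i in range(k):                       # perturb one parameter at a time
                    xp = x.copy()
                    xp[i] += delta
                    ee[t, i] = (model(lo + xp * (hi - lo)) - y0) / delta
            return np.abs(ee).mean(axis=0)               # mu*: screening measure

        # Toy surrogate: two informative parameters, one that should screen out.
        toy = lambda p: p[0] ** 2 + 0.5 * p[1] + 0.001 * p[2]
        mu_star = elementary_effects(toy, np.array([[0.0, 1.0]] * 3))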

  9. Modelling irrigated maize with a combination of coupled-model simulation and uncertainty analysis, in the northwest of China

    NASA Astrophysics Data System (ADS)

    Li, Y.; Kinzelbach, W.; Zhou, J.; Cheng, G. D.; Li, X.

    2012-05-01

    The hydrologic model HYDRUS-1-D and the crop growth model WOFOST are coupled to efficiently manage water resources in agriculture and improve the prediction of crop production. The results of the coupled model are validated by experimental studies of irrigated maize conducted in the middle reaches of the Heihe River in northwest China, a semi-arid to arid region. Good agreement is achieved between the simulated evapotranspiration, soil moisture and crop production and their respective field measurements made under current maize irrigation and fertilization. Based on the calibrated model, the scenario analysis reveals that the optimal amount of irrigation is 500-600 mm in this region. However, for regions without detailed observation, the results of the numerical simulation can be unreliable for irrigation decision making owing to the lack of calibrated model boundary conditions and parameters. We therefore develop a method that combines model ensemble simulations with uncertainty/sensitivity analysis to estimate the probability distribution of crop production. In our studies, the uncertainty analysis is used to reveal the risk of crop-production loss as irrigation decreases. The global sensitivity analysis is used to test the coupled model and further quantitatively analyse the impact of the uncertainty of coupled model parameters and environmental scenarios on crop production. This method can be used for estimation in regions with no or reduced data availability.

  10. Space shuttle navigation analysis

    NASA Technical Reports Server (NTRS)

    Jones, H. L.; Luders, G.; Matchett, G. A.; Sciabarrasi, J. E.

    1976-01-01

    A detailed analysis of space shuttle navigation for each of the major mission phases is presented. A covariance analysis program for prelaunch IMU calibration and alignment for the orbital flight tests (OFT) is described, and a partial error budget is presented. The ascent, orbital operations and deorbit maneuver study considered GPS-aided inertial navigation in the Phase III GPS (1984+) time frame. The entry and landing study evaluated navigation performance for the OFT baseline system. Detailed error budgets and sensitivity analyses are provided for both the ascent and entry studies.

  11. The DFMS sensor of ROSINA onboard Rosetta: A computer-assisted approach to resolve mass calibration, flux calibration, and fragmentation issues

    NASA Astrophysics Data System (ADS)

    Dhooghe, Frederik; De Keyser, Johan; Altwegg, Kathrin; Calmonte, Ursina; Fuselier, Stephen; Hässig, Myrtha; Berthelier, Jean-Jacques; Mall, Urs; Gombosi, Tamas; Fiethe, Björn

    2014-05-01

    Rosetta will rendezvous with comet 67P/Churyumov-Gerasimenko in May 2014. The Rosetta Orbiter Spectrometer for Ion and Neutral Analysis (ROSINA) instrument comprises three sensors: the pressure sensor (COPS) and two mass spectrometers (RTOF and DFMS). The double focusing mass spectrometer DFMS is optimized for mass resolution and consists of an ion source, a mass analyser and a detector package operated in analogue mode. The magnetic sector of the analyser provides the mass dispersion needed for use with the position-sensitive microchannel plate (MCP) detector. Ions that hit the MCP release electrons that are recorded digitally using a linear electron detector array with 512 pixels. Raw data for a given commanded mass are obtained as ADC counts as a function of pixel number. We have developed a computer-assisted approach to address the problem of calibrating such raw data. Mass calibration: Ion identification is based on their mass-over-charge (m/Z) ratio and requires an accurate correlation of pixel number and m/Z. The m/Z scale depends on the commanded mass and the magnetic field and can be described by an offset of the pixel associated with the commanded mass from the centre of the detector array and a scaling factor. Mass calibration is aided by the built-in gas calibration unit (GCU), which allows one to inject a known gas mixture into the instrument. In a first, fully automatic step of the mass calibration procedure, the calibration uses all GCU spectra and extracts information about the mass peak closest to the centre pixel, since those peaks can be identified unambiguously. This preliminary mass-calibration relation can then be applied to all spectra. Human-assisted identification of additional mass peaks further improves the mass calibration. Ion flux calibration: ADC counts per pixel are converted to ion counts per second using the overall gain, the individual pixel gain, and the total data accumulation time. DFMS can perform an internal scan to determine the pixel gain and related detector aging. The software automatically corrects for these effects to calibrate the fluxes. The COPS sensor can be used for an a posteriori calibration of the fluxes. Neutral gas number densities: Neutrals are ionized in the ion source before they are transferred to the mass analyser, but during this process fragmentation may occur. Our software allows one to identify which neutrals entered the instrument, given the ion fragments that are detected. First, multiple spectra with a limited mass range are combined to provide an overview of as many ion fragments as possible. We then exploit a fragmentation database to assist in figuring out the relation between entering species and recorded fragments. Finally, using experimentally determined sensitivities, gas number densities are obtained. The instrument characterisation (experimental determination of sensitivities, fragmentation patterns for the most common neutral species, etc.) has been conducted by the consortium using an instrument copy in the University of Bern test facilities during the cruise phase of the mission.

  12. Simulating muscular thin films using thermal contraction capabilities in finite element analysis tools.

    PubMed

    Webster, Victoria A; Nieto, Santiago G; Grosberg, Anna; Akkus, Ozan; Chiel, Hillel J; Quinn, Roger D

    2016-10-01

    In this study, new techniques for approximating the contractile properties of cells in biohybrid devices using Finite Element Analysis (FEA) have been investigated. Many current techniques for modeling biohybrid devices use individual cell forces to simulate the cellular contraction. However, such techniques result in long simulation runtimes. In this study we investigated the effect of the use of thermal contraction on simulation runtime. The thermal contraction model was significantly faster than models using individual cell forces, making it beneficial for rapidly designing or optimizing devices. Three techniques, Stoney's Approximation, a Modified Stoney's Approximation, and a Thermostat Model, were explored for calibrating the thermal expansion/contraction parameters (TECPs) needed to simulate cellular contraction using thermal contraction. The TECP values were calibrated using published data on the deflections of muscular thin films (MTFs). Using these techniques, TECP values that suitably approximate experimental deflections can be determined from experimental data obtained from cardiomyocyte MTFs. Furthermore, a sensitivity analysis was performed in order to investigate the contribution of individual variables, such as elastic modulus and layer thickness, to the final calibrated TECP for each calibration technique. Additionally, the TECP values are applicable to other types of biohybrid devices. Two non-MTF models were simulated based on devices reported in the existing literature.
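
    A short Python sketch of the Stoney's Approximation route to a TECP named above: a measured film curvature gives a film stress via Stoney's formula, which is then equated to a biaxial thermal stress to back out the thermal contraction coefficient. All material properties, thicknesses, and the curvature value are illustrative assumptions, not the paper's calibrated numbers.

        # Stoney's approximation: film stress from measured substrate curvature.
        E_s, nu_s, t_s = 1.5e6, 0.49, 20e-6    # PDMS-like substrate: Pa, -, m (illustrative)
        E_f, nu_f, t_f = 1.0e5, 0.45, 5e-6     # cell-layer "film": Pa, -, m (illustrative)
        kappa = 1.0 / 1.0e-3                   # curvature from MTF deflection, 1/m

        # sigma_f = E_s * t_s^2 * kappa / (6 * (1 - nu_s) * t_f)
        sigma_f = E_s * t_s**2 * kappa / (6.0 * (1.0 - nu_s) * t_f)

        # Equate to a biaxial thermal stress E_f/(1 - nu_f) * alpha * dT to get
        # the thermal expansion/contraction parameter driving the FEA cooling step.
        dT = 10.0                              # temperature drop applied in FEA (illustrative)
        alpha = sigma_f * (1.0 - nu_f) / (E_f * dT)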

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Novak, Erik; Trolinger, James D.; Lacey, Ian

    This work reports on the development of a binary pseudo-random test sample optimized to calibrate the MTF of optical microscopes. The sample consists of a number of 1-D and 2-D patterns, with different minimum sizes of spatial artifacts from 300 nm to 2 microns. We describe the mathematical background, fabrication process, and data acquisition and analysis procedure used to return a spatial-frequency-based instrument calibration. We show that the developed samples satisfy the characteristics of a test standard: functionality, ease of specification and fabrication, reproducibility, and low sensitivity to manufacturing error.

  14. HAWC+/SOFIA Instrumental Polarization Calibration

    NASA Astrophysics Data System (ADS)

    Michail, Joseph M.; Chuss, David; Dowell, Charles D.; Santos, Fabio; Siah, Javad; Vaillancourt, John; HAWC+ Instrument Team

    2018-01-01

    HAWC+ is a new far-infrared polarimeter for the NASA/DLR SOFIA (Stratospheric Observatory for Infrared Astronomy) telescope. HAWC+ has the capability to measure the polarization of astronomical sources with unprecedented sensitivity and angular resolution in four bands from 50-250 microns. Using data obtained during commissioning flights, we implemented a calibration strategy that separates the astronomical polarization signal from the induced instrumental polarization. The result of this analysis is a map of the instrumental polarization as a function of position in the instrument's focal plane in each band. The results show consistency between bands, as well as with other methods used to determine preliminary instrumental polarization values.

  15. Remote Calibration Procedure and Results for the Ctbto AS109 STS-2HG at Ybh

    NASA Astrophysics Data System (ADS)

    Uhrhammer, R. A.; Taira, T.; Hellweg, M.

    2013-12-01

    Berkeley Digital Seismic Network (BDSN) station YBH, located in Yreka, CA, USA, is certified as Auxiliary Seismic Station 109 (AS109) by the Preparatory Commission for the Comprehensive Nuclear-Test-Ban Treaty Organization (CTBTO). YBH, sited in an abandoned hard rock mining drift, houses a Streckeisen STS-2HG triaxial broadband seismometer (the AS109 sensor) and a co-sited three-component set of Streckeisen STS-1 broadband seismometers and a Kinemetrics Episensor strong-motion accelerometer (the BDSN sensors). CTBTO requested that we perform a remote calibration test of the STS-2HG (20,000 V/(m/s) nominal sensitivity) to verify its response and sensitivity. The remote calibration test was performed successfully on June 17, 2013, and we report here on the procedure and results of the calibration. The calibration of the STS-2HG (s/n 30235) was accomplished using two Random Telegraph (RT) stimuli, which were applied to the triaxial U, V, W component calibration coils through an appropriate series resistance to limit the drive current. The first was a four-hour RT at 1.25 Hz (to determine the low-frequency response) and the second a one-hour RT at 25 Hz (to determine the high-frequency response). The RT stimulus signals were generated by the Kinemetrics Q330 data logger, and both the stimuli and the response were recorded simultaneously with synchronous sampling at 100 sps. The RT calibrations were invoked remotely from Berkeley. The response to the 1.25 Hz RT stimulus was used to determine the seismometer natural period, fraction of critical damping, and sensitivity of the STS-2HG sensors, and the response to the 25 Hz RT stimulus was used to determine their corresponding high-frequency response. The accuracy of the sensitivity as determined from the response to the RT stimuli is limited by the accuracy of the calibration coil motor constant (2 g/A) provided on the factory calibration sheet. As a check on the accuracy of the sensitivity determined from the response to the RT stimuli, we also compare the ground motions inferred from the STS-2HG with the corresponding ground motions inferred from the co-sited STS-1s and the Episensor strong-motion accelerometer, using seismic signals that have adequate signal-to-noise ratios in the passband common to both instruments.
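
    A small Python sketch of generating a random telegraph stimulus of the kind described above: a two-level signal whose switching events arrive as a Poisson process, sampled at 100 sps. The rates and durations follow the description; the implementation itself is an illustrative assumption.

        import numpy as np

        def random_telegraph(rate_hz, duration_s, fs=100.0, amplitude=1.0, seed=0):
            # Exponential inter-switch intervals accumulate into switch times;
            # the level toggles between +A and -A at every switch.
            rng = np.random.default_rng(seed)
            t = np.arange(0.0, duration_s, 1.0 / fs)
            n_max = int(3 * rate_hz * duration_s) + 10
            switch_times = np.cumsum(rng.exponential(1.0 / rate_hz, size=n_max))
            n_switches = np.searchsorted(switch_times, t)   # switches before each sample
            return t, amplitude * (2 * (n_switches % 2) - 1)

        # Four-hour 1.25 Hz stimulus for the low-frequency response and a
        # one-hour 25 Hz stimulus for the high-frequency response, as above.
        t_lo, rt_lo = random_telegraph(rate_hz=1.25, duration_s=4 * 3600)
        t_hi, rt_hi = random_telegraph(rate_hz=25.0, duration_s=3600, seed=1)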

  16. ASPASIA: A toolkit for evaluating the effects of biological interventions on SBML model behaviour.

    PubMed

    Evans, Stephanie; Alden, Kieran; Cucurull-Sanchez, Lourdes; Larminie, Christopher; Coles, Mark C; Kullberg, Marika C; Timmis, Jon

    2017-02-01

    A calibrated computational model reflects behaviours that are expected or observed in a complex system, providing a baseline upon which sensitivity analysis techniques can be used to analyse pathways that may impact model responses. However, calibration of a model where a behaviour depends on an intervention introduced after a defined time point is difficult, as model responses may be dependent on the conditions at the time the intervention is applied. We present ASPASIA (Automated Simulation Parameter Alteration and SensItivity Analysis), a cross-platform, open-source Java toolkit that addresses a key deficiency in software tools for understanding the impact an intervention has on system behaviour for models specified in Systems Biology Markup Language (SBML). ASPASIA can generate and modify models using SBML solver output as an initial parameter set, allowing interventions to be applied once a steady state has been reached. Additionally, multiple SBML models can be generated where a subset of parameter values are perturbed using local and global sensitivity analysis techniques, revealing the model's sensitivity to the intervention. To illustrate the capabilities of ASPASIA, we demonstrate how this tool has generated novel hypotheses regarding the mechanisms by which Th17-cell plasticity may be controlled in vivo. By using ASPASIA in conjunction with an SBML model of Th17-cell polarisation, we predict that promotion of the Th1-associated transcription factor T-bet, rather than inhibition of the Th17-associated transcription factor RORγt, is sufficient to drive switching of Th17 cells towards an IFN-γ-producing phenotype. Our approach can be applied to all SBML-encoded models to predict the effect that intervention strategies have on system behaviour. ASPASIA, released under the Artistic License (2.0), can be downloaded from http://www.york.ac.uk/ycil/software.

  17. Calibration of marginal oscillator sensitivity for use in ICR spectrometry

    NASA Technical Reports Server (NTRS)

    Anicich, V. G.; Huntress, W. T., Jr.

    1977-01-01

    A constant-reference load is utilized as a Q-spoiler in calibrations of the relative sensitivity variations of a marginal oscillator with frequency. Frequency-dependent effects that were troublesome in earlier Q-spoilers are compensated by employing a purely resistive calibration load with compensation for the small distributed capacitance of large resistors. The validity of the approach is demonstrated for a 2:1 mass ratio range, and validity for a mass ratio range greater than 10:1 is claimed. The circuit and technique were developed for use in ion cyclotron resonance (ICR) spectrometric practice.

  18. Computing sensitivity and selectivity in parallel factor analysis and related multiway techniques: the need for further developments in net analyte signal theory.

    PubMed

    Olivieri, Alejandro C

    2005-08-01

    Sensitivity and selectivity are important figures of merit in multiway analysis, regularly employed for comparison of the analytical performance of methods and for experimental design and planning. They are especially interesting in the second-order advantage scenario, where the latter property allows for the analysis of samples with a complex background, permitting analyte determination even in the presence of unsuspected interferences. Since no general theory exists for estimating the multiway sensitivity, Monte Carlo numerical calculations have been developed for estimating variance inflation factors, as a convenient way of assessing both sensitivity and selectivity parameters for the popular parallel factor (PARAFAC) analysis and also for related multiway techniques. When the second-order advantage is achieved, the existing expressions derived from net analyte signal theory are only able to adequately cover cases where a single analyte is calibrated using second-order instrumental data. However, they fail for certain multianalyte cases, or when third-order data are employed, calling for an extension of net analyte theory. The results have strong implications in the planning of multiway analytical experiments.

  19. Computationally inexpensive identification of noninformative model parameters by sequential screening

    NASA Astrophysics Data System (ADS)

    Cuntz, Matthias; Mai, Juliane; Zink, Matthias; Thober, Stephan; Kumar, Rohini; Schäfer, David; Schrön, Martin; Craven, John; Rakovec, Oldrich; Spieler, Diana; Prykhodko, Vladyslav; Dalmasso, Giovanni; Musuuza, Jude; Langenberg, Ben; Attinger, Sabine; Samaniego, Luis

    2015-08-01

    Environmental models tend to require increasing computational time and resources as physical process descriptions are improved or new descriptions are incorporated. Many-query applications such as sensitivity analysis or model calibration usually require a large number of model evaluations leading to high computational demand. This often limits the feasibility of rigorous analyses. Here we present a fully automated sequential screening method that selects only informative parameters for a given model output. The method requires a number of model evaluations that is approximately 10 times the number of model parameters. It was tested using the mesoscale hydrologic model mHM in three hydrologically unique European river catchments. It identified around 20 informative parameters out of 52, with different informative parameters in each catchment. The screening method was evaluated with subsequent analyses using all 52 as well as only the informative parameters. Subsequent Sobol's global sensitivity analysis led to almost identical results yet required 40% fewer model evaluations after screening. mHM was calibrated with all and with only informative parameters in the three catchments. Model performances for daily discharge were equally high in both cases with Nash-Sutcliffe efficiencies above 0.82. Calibration using only the informative parameters needed just one third of the number of model evaluations. The universality of the sequential screening method was demonstrated using several general test functions from the literature. We therefore recommend the use of the computationally inexpensive sequential screening method prior to rigorous analyses on complex environmental models.

  20. Computationally inexpensive identification of noninformative model parameters by sequential screening

    NASA Astrophysics Data System (ADS)

    Mai, Juliane; Cuntz, Matthias; Zink, Matthias; Thober, Stephan; Kumar, Rohini; Schäfer, David; Schrön, Martin; Craven, John; Rakovec, Oldrich; Spieler, Diana; Prykhodko, Vladyslav; Dalmasso, Giovanni; Musuuza, Jude; Langenberg, Ben; Attinger, Sabine; Samaniego, Luis

    2016-04-01

    Environmental models tend to require increasing computational time and resources as physical process descriptions are improved or new descriptions are incorporated. Many-query applications such as sensitivity analysis or model calibration usually require a large number of model evaluations leading to high computational demand. This often limits the feasibility of rigorous analyses. Here we present a fully automated sequential screening method that selects only informative parameters for a given model output. The method requires a number of model evaluations that is approximately 10 times the number of model parameters. It was tested using the mesoscale hydrologic model mHM in three hydrologically unique European river catchments. It identified around 20 informative parameters out of 52, with different informative parameters in each catchment. The screening method was evaluated with subsequent analyses using all 52 as well as only the informative parameters. Subsequent Sobol's global sensitivity analysis led to almost identical results yet required 40% fewer model evaluations after screening. mHM was calibrated with all and with only informative parameters in the three catchments. Model performances for daily discharge were equally high in both cases with Nash-Sutcliffe efficiencies above 0.82. Calibration using only the informative parameters needed just one third of the number of model evaluations. The universality of the sequential screening method was demonstrated using several general test functions from the literature. We therefore recommend the use of the computationally inexpensive sequential screening method prior to rigorous analyses on complex environmental models.
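
    A numpy sketch of the Saltelli-style Sobol' estimates used in the subsequent analyses above (Jansen estimators for first-order and total-order indices), run on a toy function; the sample size and model are illustrative, not mHM.

        import numpy as np

        def sobol_indices(model, k, n=10000, seed=0):
            # model maps an (m, k) array of unit-cube samples to m outputs.
            rng = np.random.default_rng(seed)
            A = rng.uniform(size=(n, k))
            B = rng.uniform(size=(n, k))
            fA, fB = model(A), model(B)
            var = np.var(np.concatenate([fA, fB]))
            S, ST = np.zeros(k), np.zeros(k)
            for i in range(k):
                ABi = A.copy()
                ABi[:, i] = B[:, i]                  # A with column i from B
                fABi = model(ABi)
                S[i] = np.mean(fB * (fABi - fA)) / var         # first-order index
                ST[i] = 0.5 * np.mean((fA - fABi) ** 2) / var  # total-order index
            return S, ST

        # Toy model: the third input is noninformative and should screen out.
        toy = lambda X: X[:, 0] + 2.0 * X[:, 1] ** 2 + 0.0 * X[:, 2]
        S, ST = sobol_indices(toy, k=3)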

  1. Parameterization and Uncertainty Analysis of SWAT model in Hydrological Simulation of Chaohe River Basin

    NASA Astrophysics Data System (ADS)

    Jie, M.; Zhang, J.; Guo, B. B.

    2017-12-01

    As a typical distributed hydrological model, the SWAT model poses a challenge in calibrating its parameters and analyzing their uncertainty. This paper takes the Chaohe River Basin in China as the study area. After the SWAT model is set up and the DEM data of the Chaohe River basin are loaded, the watershed is automatically divided into several sub-basins. Land use, soil, and slope are analyzed on the basis of the sub-basins, and the hydrological response units (HRUs) of the study area are calculated; after running the SWAT model, simulated runoff values for the watershed are obtained. On this basis, weather data and the known daily runoff of three hydrological stations, combined with the SWAT-CUP automatic program and a manual adjustment method, are used for multi-site calibration of the model parameters. Furthermore, the GLUE algorithm is used to analyze the parameter uncertainty of the SWAT model. Through the sensitivity analysis, calibration, and uncertainty study of SWAT, the results indicate that the parameterization of the hydrological characteristics of the Chaohe River is successful and feasible and can be used to simulate the Chaohe River basin.
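
    A minimal numpy sketch of the GLUE idea used above: Monte Carlo parameter sampling, an informal likelihood (here the Nash-Sutcliffe efficiency), a behavioral threshold, and likelihood weights for uncertainty bounds. The threshold, sample size, and model interface are illustrative assumptions, not the SWAT/Chaohe setup.

        import numpy as np

        def nse(sim, obs):
            # Nash-Sutcliffe efficiency, used as the informal likelihood.
            return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

        def glue(model, obs, bounds, n=5000, threshold=0.5, seed=0):
            # Keep 'behavioral' parameter sets with NSE above the threshold and
            # weight them by rescaled likelihood for predictive uncertainty bands.
            rng = np.random.default_rng(seed)
            lo, hi = bounds[:, 0], bounds[:, 1]
            params = rng.uniform(lo, hi, size=(n, len(bounds)))
            scores = np.array([nse(model(p), obs) for p in params])
            keep = scores > threshold
            weights = scores[keep] - threshold
            return params[keep], weights / weights.sum()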

  2. Cross-calibration of liquid and solid QCT calibration standards: corrections to the UCSF normative data

    NASA Technical Reports Server (NTRS)

    Faulkner, K. G.; Gluer, C. C.; Grampp, S.; Genant, H. K.

    1993-01-01

    Quantitative computed tomography (QCT) has been shown to be a precise and sensitive method for evaluating spinal bone mineral density (BMD) and skeletal response to aging and therapy. Precise and accurate determination of BMD using QCT requires a calibration standard to compensate for and reduce the effects of beam-hardening artifacts and scanner drift. The first standards were based on dipotassium hydrogen phosphate (K2HPO4) solutions. Recently, several manufacturers have developed stable solid calibration standards based on calcium hydroxyapatite (CHA) in water-equivalent plastic. Due to differences in attenuating properties of the liquid and solid standards, the calibrated BMD values obtained with each system do not agree. In order to compare and interpret the results obtained on both systems, cross-calibration measurements were performed in phantoms and patients using the University of California San Francisco (UCSF) liquid standard and the Image Analysis (IA) solid standard on the UCSF GE 9800 CT scanner. From the phantom measurements, a highly linear relationship was found between the liquid- and solid-calibrated BMD values. No influence on the cross-calibration due to simulated variations in body size or vertebral fat content was seen, though a significant difference in the cross-calibration was observed between scans acquired at 80 and 140 kVp. From the patient measurements, a linear relationship between the liquid (UCSF) and solid (IA) calibrated values was derived for GE 9800 CT scanners at 80 kVp (IA = [1.15 x UCSF] - 7.32). (ABSTRACT TRUNCATED AT 250 WORDS)
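
    A small Python sketch of deriving such a liquid-to-solid cross-calibration line from paired BMD readings of the same scans; the data values here are synthetic, generated around the published 80 kVp relation purely for illustration.

        import numpy as np

        # Paired BMD readings (mg/cm^3) calibrated against the liquid (UCSF
        # K2HPO4) and solid (IA hydroxyapatite) standards -- synthetic values.
        rng = np.random.default_rng(0)
        ucsf = np.array([60.0, 90.0, 120.0, 150.0, 180.0])
        ia = 1.15 * ucsf - 7.32 + rng.normal(0.0, 1.5, ucsf.size)

        # Fit IA = slope * UCSF + intercept; at 80 kVp on a GE 9800 the record
        # above reports slope 1.15 and intercept -7.32.
        slope, intercept = np.polyfit(ucsf, ia, deg=1)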

  3. Accelerated dynamic cardiac MRI exploiting sparse-Kalman-smoother self-calibration and reconstruction (k-t SPARKS)

    NASA Astrophysics Data System (ADS)

    Park, Suhyung; Park, Jaeseok

    2015-05-01

    Accelerated dynamic MRI, which exploits spatiotemporal redundancies in k-t space and the coil dimension, has been widely used to reduce the number of signal encodings and thus increase imaging efficiency with minimal loss of image quality. Nonetheless, particularly in cardiac MRI it still suffers from artifacts and amplified noise in the presence of time-drifting coil sensitivity due to relative motion between coil and subject (e.g. free breathing). Furthermore, a substantial number of additional calibrating signals must be acquired to warrant accurate calibration of coil sensitivity. In this work, we propose a novel, accelerated dynamic cardiac MRI with sparse-Kalman-smoother self-calibration and reconstruction (k-t SPARKS), which is robust to time-varying coil sensitivity even with a small number of calibrating signals. The proposed k-t SPARKS incorporates Kalman-smoother self-calibration in k-t space and sparse signal recovery in x-f space into a single optimization problem, leading to iterative, joint estimation of time-varying convolution kernels and missing signals in k-t space. In the Kalman-smoother calibration, motion-induced uncertainties over the entire set of time frames were included in modeling the state transition, while a coil-dependent noise statistic was employed in describing the measurement process. The sparse signal recovery iteratively alternates with the self-calibration to tackle the ill-conditioning problem potentially resulting from insufficient calibrating signals. Simulations and experiments were performed using both the proposed and conventional methods for comparison, revealing that the proposed k-t SPARKS yields higher signal-to-error ratio and superior temporal fidelity in both breath-hold and free-breathing cardiac applications over all reduction factors.

  4. Accelerated dynamic cardiac MRI exploiting sparse-Kalman-smoother self-calibration and reconstruction (k-t SPARKS).

    PubMed

    Park, Suhyung; Park, Jaeseok

    2015-05-07

    Accelerated dynamic MRI, which exploits spatiotemporal redundancies in k-t space and the coil dimension, has been widely used to reduce the number of signal encodings and thus increase imaging efficiency with minimal loss of image quality. Nonetheless, particularly in cardiac MRI it still suffers from artifacts and amplified noise in the presence of time-drifting coil sensitivity due to relative motion between coil and subject (e.g. free breathing). Furthermore, a substantial number of additional calibrating signals must be acquired to warrant accurate calibration of coil sensitivity. In this work, we propose a novel, accelerated dynamic cardiac MRI with sparse-Kalman-smoother self-calibration and reconstruction (k-t SPARKS), which is robust to time-varying coil sensitivity even with a small number of calibrating signals. The proposed k-t SPARKS incorporates Kalman-smoother self-calibration in k-t space and sparse signal recovery in x-f space into a single optimization problem, leading to iterative, joint estimation of time-varying convolution kernels and missing signals in k-t space. In the Kalman-smoother calibration, motion-induced uncertainties over the entire set of time frames were included in modeling the state transition, while a coil-dependent noise statistic was employed in describing the measurement process. The sparse signal recovery iteratively alternates with the self-calibration to tackle the ill-conditioning problem potentially resulting from insufficient calibrating signals. Simulations and experiments were performed using both the proposed and conventional methods for comparison, revealing that the proposed k-t SPARKS yields higher signal-to-error ratio and superior temporal fidelity in both breath-hold and free-breathing cardiac applications over all reduction factors.
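
    A scalar Python sketch of the Kalman-smoother machinery underlying the self-calibration described above: a random-walk state (a slowly drifting, coil-sensitivity-like gain) is filtered forward and then smoothed backward over all time frames. This illustrates the smoothing idea only; it is not the actual k-t SPARKS formulation.

        import numpy as np

        def kalman_rts(z, q=1e-4, r=1e-2, x0=1.0, p0=1.0):
            # State model x_k = x_{k-1} + w (var q); measurement z_k = x_k + v (var r).
            n = len(z)
            xf, pf, xp, pp = (np.zeros(n) for _ in range(4))
            x, p = x0, p0
            for k in range(n):                    # forward Kalman filter
                xp[k], pp[k] = x, p + q           # predict
                K = pp[k] / (pp[k] + r)           # Kalman gain
                x = xp[k] + K * (z[k] - xp[k])    # update
                p = (1.0 - K) * pp[k]
                xf[k], pf[k] = x, p
            xs = xf.copy()
            for k in range(n - 2, -1, -1):        # Rauch-Tung-Striebel smoother
                C = pf[k] / pp[k + 1]
                xs[k] = xf[k] + C * (xs[k + 1] - xp[k + 1])
            return xs

        # Noisy measurements of a drifting gain, smoothed over all time frames.
        t = np.linspace(0.0, 1.0, 200)
        z = 1.0 + 0.1 * np.sin(2 * np.pi * t) \
            + np.random.default_rng(0).normal(0.0, 0.1, t.size)
        gain_hat = kalman_rts(z)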

  5. Are LOD and LOQ Reliable Parameters for Sensitivity Evaluation of Spectroscopic Methods?

    PubMed

    Ershadi, Saba; Shayanfar, Ali

    2018-03-22

    The limit of detection (LOD) and the limit of quantification (LOQ) are common parameters used to assess the sensitivity of analytical methods. In this study, the LOD and LOQ of previously reported terbium-sensitized analysis methods were calculated by different methods, and the results were compared with the sensitivity parameter [lower limit of quantification (LLOQ)] of the U.S. Food and Drug Administration guidelines. The details of the calibration curves and the standard deviations of blank samples of three different terbium-sensitized luminescence methods for the quantification of mycophenolic acid, enrofloxacin, and silibinin were used for the calculation of LOD and LOQ. A comparison of the LOD and LOQ values calculated by the various methods with the LLOQ shows a considerable difference. These significant differences should be taken into account in the sensitivity evaluation of spectroscopic methods.
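
    A short Python sketch of the common calibration-curve-based estimates the record refers to: the slope from a linear fit, the blank standard deviation, and the 3.3 and 10 multipliers of the ICH convention. The data values are illustrative only; other conventions for the standard deviation are what produce the differing results compared above.

        import numpy as np

        # Calibration standards (concentration vs. response) and blank replicates.
        conc = np.array([0.0, 0.5, 1.0, 2.0, 4.0])        # e.g. ug/mL (illustrative)
        resp = np.array([0.02, 0.26, 0.51, 1.01, 1.99])
        blanks = np.array([0.021, 0.018, 0.024, 0.019, 0.022])

        slope, intercept = np.polyfit(conc, resp, deg=1)
        sd_blank = blanks.std(ddof=1)

        lod = 3.3 * sd_blank / slope     # limit of detection
        loq = 10.0 * sd_blank / slope    # limit of quantification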

  6. A primary method for the complex calibration of a hydrophone from 1 Hz to 2 kHz

    NASA Astrophysics Data System (ADS)

    Slater, W. H.; E Crocker, S.; Baker, S. R.

    2018-02-01

    A primary calibration method is demonstrated to obtain the magnitude and phase of the complex sensitivity of a hydrophone at frequencies between 1 Hz and 2 kHz. The measurement is performed in a coupler reciprocity chamber ('coupler'), a closed test chamber in which time-harmonic pressure oscillations can be achieved and the reciprocity conditions required for a primary calibration can be realized. Relevant theory is reviewed and the reciprocity parameter updated for the complex measurement. Systematic errors and corrections for magnitude are reviewed, and further corrections are added for phase. The combined expanded uncertainties of the magnitude and phase of the complex sensitivity at 1 Hz were 0.1 dB re 1 V/μPa and ±1°, respectively. Complex sensitivity, sensitivity magnitude, and phase measurements are presented for an example primary reference hydrophone.

  7. [Sensitivity analysis of AnnAGNPS model's hydrology and water quality parameters based on the perturbation analysis method].

    PubMed

    Xi, Qing; Li, Zhao-Fu; Luo, Chuan

    2014-05-01

    Sensitivity analysis of hydrology and water quality parameters is of great significance for the construction and application of integrated models. Based on the AnnAGNPS model's mechanism, 31 parameters in four major categories (terrain, hydrometeorology, field management, and soil) were selected for sensitivity analysis in the Zhongtian River watershed, a typical small watershed of the hilly region around Taihu Lake, and the perturbation method was then used to evaluate the sensitivity of the parameters to the model's simulation results. The results showed that: among the 11 terrain parameters, LS was sensitive to all the model results, while RMN, RS and RVC were generally or less sensitive to the sediment output but insensitive to the remaining results. For hydrometeorological parameters, CN was more sensitive to runoff and sediment and relatively sensitive for the remaining results. Among the field management, fertilizer and vegetation parameters, CCC, CRM and RR were less sensitive to sediment and particulate pollutants, and the six fertilizer parameters (FR, FD, FID, FOD, FIP, FOP) were particularly sensitive for nitrogen and phosphorus nutrients. For soil parameters, K was quite sensitive to all the results except runoff, and the four soil nitrogen and phosphorus ratio parameters (SONR, SINR, SOPR, SIPR) were less sensitive to the corresponding results. The simulation and verification results of runoff in the Zhongtian watershed show good accuracy, with deviations of less than 10% during 2005-2010. These results provide a direct reference for parameter selection and calibration adjustment of the AnnAGNPS model. The runoff simulation results of the study area also proved that the sensitivity analysis is practicable for parameter adjustment, demonstrated the model's adaptability to hydrological simulation in the hilly region of the Taihu Lake basin, and provide a reference for the model's wider application in China.
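
    A minimal Python sketch of the normalized perturbation sensitivity coefficient used in analyses of this kind: each parameter is perturbed by a fixed fraction around its base value, and the relative output change is ratioed to the relative parameter change. The toy model and perturbation size are illustrative, not AnnAGNPS.

        import numpy as np

        def perturbation_sensitivity(model, params, i, frac=0.1):
            # S = (dO / O_base) / (dP / P_base), central +/-frac perturbation
            # of parameter i with all others held at their base values.
            base = model(params)
            up, dn = params.copy(), params.copy()
            up[i] *= (1.0 + frac)
            dn[i] *= (1.0 - frac)
            return ((model(up) - model(dn)) / base) / (2.0 * frac)

        # Stand-in model; parameter 0 plays the role of a sensitive term like CN.
        toy = lambda p: p[0] ** 1.8 + 0.1 * p[1]
        S0 = perturbation_sensitivity(toy, np.array([70.0, 5.0]), i=0)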

  8. Trace analysis of high-purity graphite by LA-ICP-MS.

    PubMed

    Pickhardt, C; Becker, J S

    2001-07-01

    Laser-ablation inductively coupled plasma mass spectrometry (LA-ICP-MS) has been established as a very efficient and sensitive technique for the direct analysis of solids. In this work the capability of LA-ICP-MS was investigated for the determination of trace elements in high-purity graphite. Synthetic laboratory standards with a graphite matrix were prepared for the purpose of quantifying the analytical results. Doped trace elements, at a concentration of 0.5 microg g(-1) in a laboratory standard, were determined with an accuracy of ±1% to ±7% and a relative standard deviation (RSD) of 2-13%. Solution-based calibration was also used for quantitative analysis of high-purity graphite. It was found that such calibration led to analytical results for trace-element determination in graphite with accuracy similar to that obtained using the synthetic laboratory standards. Results from quantitative determination of trace impurities in a real reactor-graphite sample, using both quantification approaches, were in good agreement. Detection limits for all elements of interest were in the low ng g(-1) concentration range. An improvement of detection limits by a factor of 10 was achieved for analyses of high-purity graphite with LA-ICP-MS under wet plasma conditions, because of the lower background signal and increased element sensitivity.

  9. Calibration of HEC-Ras hydrodynamic model using gauged discharge data and flood inundation maps

    NASA Astrophysics Data System (ADS)

    Tong, Rui; Komma, Jürgen

    2017-04-01

    Flood estimation is essential for disaster mitigation. Hydrodynamic models are implemented to predict the occurrence and extent of floods at different scales. In practice, the calibration of hydrodynamic models aims to find the best possible parameters for representing the natural flow resistance. Recent years have seen the calibration of hydrodynamic models become more practical and faster with advances in Earth observation products and computer-based optimization techniques. In this study, the Hydrologic Engineering Center's River Analysis System (HEC-Ras) model was set up with a high-resolution digital elevation model from laser scanning for the river Inn in Tyrol, Austria. The 10 largest flood events from 19 hourly discharge gauges and flood inundation maps were selected to calibrate the HEC-Ras model. Manning roughness values and lateral inflow factors were automatically optimized as parameters with the Shuffled Complex with Principal Component Analysis (SP-UCI) algorithm, developed from the Shuffled Complex Evolution (SCE-UA) algorithm. Different objective functions (Nash-Sutcliffe model efficiency coefficient, timing of peak, peak value, and root-mean-square deviation) were used singly or in combination. It was found that the lateral inflow factor was the most sensitive parameter. The SP-UCI algorithm could avoid local optima and achieve efficient and effective parameter estimates in the calibration of the HEC-Ras model using flood extent images. As the results showed, calibration by means of gauged discharge data and flood inundation maps, together with the Nash-Sutcliffe model efficiency coefficient as the objective function, was very robust, yielding more reliable flood simulations and capturing both the peak value and the timing of the peak.
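
    A small Python sketch of the calibration criteria listed above (Nash-Sutcliffe efficiency, peak-value error, timing-of-peak error, and root-mean-square deviation); how they are weighted or combined into single or multiple objectives is a choice of the calibration setup, and the implementation here is illustrative.

        import numpy as np

        def nse(sim, obs):
            # Nash-Sutcliffe model efficiency coefficient.
            return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

        def flood_objectives(sim, obs, dt_hours=1.0):
            peak_err = abs(sim.max() - obs.max()) / obs.max()        # peak value
            timing_err = abs(int(np.argmax(sim)) - int(np.argmax(obs))) * dt_hours
            rmsd = np.sqrt(np.mean((sim - obs) ** 2))                # RMSD
            return nse(sim, obs), peak_err, timing_err, rmsd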

  10. A parallel calibration utility for WRF-Hydro on high performance computers

    NASA Astrophysics Data System (ADS)

    Wang, J.; Wang, C.; Kotamarthi, V. R.

    2017-12-01

    A successful modeling of complex hydrological processes comprises establishing an integrated hydrological model that simulates the hydrological processes in each water regime, calibrating and validating the model performance against observation data, and estimating the uncertainties from different sources, especially those associated with parameters. Such a model system requires large computing resources and often has to be run on High Performance Computers (HPC). The recently developed WRF-Hydro modeling system provides a significant advancement in the capability to simulate regional water cycles more completely. The WRF-Hydro model has a large range of parameters, such as those in the input table files (GENPARM.TBL, SOILPARM.TBL and CHANPARM.TBL) and several distributed scaling factors such as OVROUGHRTFAC. These parameters affect the behavior and outputs of the model and thus may need to be calibrated against observations in order to obtain good modeling performance. A parameter calibration tool specifically for automated calibration and uncertainty estimation of the WRF-Hydro model can therefore provide significant convenience for the modeling community. In this study, we developed a customized tool based on the parallel version of the model-independent parameter estimation and uncertainty analysis tool PEST, enabling it to run on HPC systems with the PBS and SLURM workload managers and job schedulers. We also developed a series of PEST input file templates specifically for WRF-Hydro model calibration and uncertainty analysis. Here we present a flood case study that occurred in April 2013 over the US Midwest. The sensitivity and uncertainties are analyzed using the customized PEST tool we developed.

  11. Experimental study on cross-sensitivity of temperature and vibration of embedded fiber Bragg grating sensors

    NASA Astrophysics Data System (ADS)

    Chen, Tao; Ye, Meng-li; Liu, Shu-liang; Deng, Yan

    2018-03-01

    In view of the principle underlying the occurrence of cross-sensitivity, a series of calibration experiments is carried out to solve the cross-sensitivity problem of embedded fiber Bragg gratings (FBGs) using the reference grating method. Moreover, an ultrasonic-vibration-assisted grinding (UVAG) model is established, and finite element analysis (FEA) is carried out under the monitoring environment of an embedded temperature-measurement system. In addition, the related temperature acquisition tests are set up in accordance with the requirements of the reference grating method. Finally, comparative analyses of the simulation and experimental results are performed, and it may be concluded that the reference grating method can be used to effectively resolve the cross-sensitivity of embedded FBGs.
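
    A minimal Python sketch of the reference grating arithmetic: a strain-isolated reference FBG sees only temperature, so its relative wavelength shift removes the thermal part of the measurement grating's shift. The sensitivity coefficients below are typical textbook values for silica FBGs near 1550 nm, not this study's calibration results.

        K_T = 6.7e-6      # relative wavelength shift per degC (thermal sensitivity)
        K_EPS = 0.78e-6   # relative wavelength shift per microstrain

        def compensated_strain(dlam_meas, lam_meas, dlam_ref, lam_ref):
            # The reference grating (strain-free) isolates the thermal shift,
            # which is subtracted from the measurement grating's relative shift.
            thermal_part = dlam_ref / lam_ref            # equals K_T * dT
            return ((dlam_meas / lam_meas) - thermal_part) / K_EPS

        eps = compensated_strain(dlam_meas=0.12e-9, lam_meas=1550e-9,
                                 dlam_ref=0.05e-9, lam_ref=1545e-9)  # microstrain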

  12. A combined application of thermal desorber and gas chromatography to the analysis of gaseous carbonyls with the aid of two internal standards.

    PubMed

    Kim, Ki-Hyun; Anthwal, A; Pandey, Sudhir Kumar; Kabir, Ehsanul; Sohn, Jong Ryeul

    2010-11-01

    In this study, a series of GC calibration experiments was conducted to examine the feasibility of the thermal desorption approach for the quantification of five carbonyl compounds (acetaldehyde, propionaldehyde, butyraldehyde, isovaleraldehyde, and valeraldehyde) in conjunction with two internal standard compounds. The gaseous working standards of carbonyls were calibrated with the aid of thermal desorption as a function of standard concentration and of loading volume. The detection properties were then compared against two types of external calibration data sets derived by a fixed standard volume and a fixed standard concentration approach. According to this comparison, the fixed-standard-volume-based calibration of carbonyls should be more sensitive and reliable than its fixed-standard-concentration counterpart. Moreover, the use of internal standards can improve the analytical reliability of aromatics and some carbonyls to a considerable extent. Our preliminary test on real samples, however, indicates that the performance of internal calibration, when tested using samples of varying dilution ranges, can differ moderately from that derived from standard gases. This suggests that the reliability of calibration approaches should be examined carefully, with consideration of the interactions between compound-specific properties and the operating conditions of the instrumental setup.
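
    A short Python sketch of the internal-standard arithmetic underlying the approach above: the analyte response is ratioed to a co-analyzed internal standard, cancelling run-to-run drifts in desorption and injection. All peak areas and concentrations are illustrative.

        def response_factor(area_analyte, conc_analyte, area_is, conc_is):
            # RRF = (A_analyte / A_IS) / (C_analyte / C_IS), from a standard run.
            return (area_analyte / area_is) / (conc_analyte / conc_is)

        def quantify(area_analyte, area_is, conc_is, rrf):
            # Unknown concentration from the area ratio and the calibrated RRF.
            return (area_analyte / area_is) * conc_is / rrf

        rrf = response_factor(area_analyte=5.2e5, conc_analyte=10.0,   # e.g. ppb
                              area_is=4.8e5, conc_is=10.0)
        c_sample = quantify(area_analyte=2.4e5, area_is=4.6e5, conc_is=10.0, rrf=rrf)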

  13. Distinguishing new science from calibration effects in the electron-volt neutron spectrometer VESUVIO at ISIS

    NASA Astrophysics Data System (ADS)

    Chatzidimitriou-Dreismann, C. A.; Gray, E. MacA.; Blach, T. P.

    2012-06-01

    The "standard" procedure for calibrating the Vesuvio eV neutron spectrometer at the ISIS neutron source, forming the basis for data analysis over at least the last decade, was recently documented in considerable detail by the instrument's scientists. Additionally, we recently derived analytic expressions of the sensitivity of recoil peak positions with respect to fight-path parameters and presented neutron-proton scattering results that together called into question the validity of the "standard" calibration. These investigations should contribute significantly to the assessment of the experimental results obtained with Vesuvio. Here we present new results of neutron-deuteron scattering from D2 in the backscattering angular range (θ>90°) which are accompanied by a striking energy increase that violates the Impulse Approximation, thus leading unequivocally the following dilemma: (A) either the "standard" calibration is correct and then the experimental results represent a novel quantum dynamical effect of D which stands in blatant contradiction of conventional theoretical expectations; (B) or the present "standard" calibration procedure is seriously deficient and leads to artificial outcomes. For Case (A), we allude to the topic of attosecond quantum dynamical phenomena and our recent neutron scattering experiments from H2 molecules. For Case (B), some suggestions as to how the "standard" calibration could be considerably improved are made.

  14. Identification and quantification of ciprofloxacin in urine through excitation-emission fluorescence and three-way PARAFAC calibration.

    PubMed

    Ortiz, M C; Sarabia, L A; Sánchez, M S; Giménez, D

    2009-05-29

    Due to the second-order advantage, calibration models based on parallel factor analysis (PARAFAC) decomposition of three-way data are becoming important in routine analysis. This work studies the possibility of fitting PARAFAC models with excitation-emission fluorescence data for the determination of ciprofloxacin in human urine. The finally chosen PARAFAC decomposition is built with calibration samples spiked with ciprofloxacin, and with other series of urine samples that were also spiked. One of the series of samples also contains another drug, because the patient was taking mesalazine. Mesalazine is a fluorescent substance that interferes with the ciprofloxacin. Finally, the procedure is applied to samples of a patient who was being treated with ciprofloxacin. The trueness has been established by the regression "predicted concentration versus added concentration". The recovery factor is 88.3% for ciprofloxacin in urine, and the mean of the absolute value of the relative errors is 4.2% for 46 test samples. The multivariate sensitivity of the fitted calibration model is evaluated by a regression between the PARAFAC loadings linked to ciprofloxacin and the true concentration in spiked samples. The multivariate capability of discrimination is near 8 microg L(-1) when the probabilities of false non-compliance and false compliance are fixed at 5%.
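
    A Python sketch of the decomposition-then-regression workflow described above, using the tensorly library (an assumption; the record does not specify an implementation, and the exact API may vary between tensorly versions). The data cube, rank, and concentrations are placeholders.

        import numpy as np
        import tensorly as tl
        from tensorly.decomposition import parafac

        # samples x excitation x emission fluorescence cube -- random placeholder;
        # a real cube would hold EEM spectra of calibration and test urine samples.
        X = tl.tensor(np.random.default_rng(0).random((20, 40, 60)))

        # Rank spans analyte plus background/interferent (the second-order
        # advantage); rank selection in practice uses e.g. core consistency.
        weights, (scores, excitation, emission) = parafac(X, rank=3)

        # Pseudo-univariate calibration: regress the analyte component's
        # sample-mode loadings on the spiked (added) concentrations.
        added = np.linspace(0.0, 100.0, 10)                # placeholder, ug/L
        slope, intercept = np.polyfit(added, scores[:10, 0], deg=1)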

  15. Orbit-determination performance of Doppler data for interplanetary cruise trajectories. Part 2: 8.4-GHz performance and data-weighting strategies

    NASA Technical Reports Server (NTRS)

    Ulvestad, J. S.

    1992-01-01

    A consider error covariance analysis was performed in order to investigate the orbit-determination performance attainable using two-way (coherent) 8.4-GHz (X-band) Doppler data for two segments of the planned Mars Observer trajectory. The analysis includes the effects of the current level of calibration errors in tropospheric delay, ionospheric delay, and station locations, with particular emphasis placed on assessing the performance of several candidate elevation-dependent data-weighting functions. One weighting function was found that yields good performance for a variety of tracking geometries. This weighting function is simple and robust; it reduces the danger of error that might exist if an analyst had to select one of several different weighting functions that are highly sensitive to the exact choice of parameters and to the tracking geometry. Orbit-determination accuracy improvements that may be obtained through the use of calibration data derived from Global Positioning System (GPS) satellites also were investigated, and can be as much as a factor of three in some components of the spacecraft state vector. Assuming that both station-location errors and troposphere calibration errors are reduced simultaneously, the recommended data-weighting function need not be changed when GPS calibrations are incorporated in the orbit-determination process.

  16. The efficacy of calibrating hydrologic model using remotely sensed evapotranspiration and soil moisture for streamflow prediction

    NASA Astrophysics Data System (ADS)

    Kunnath-Poovakka, A.; Ryu, D.; Renzullo, L. J.; George, B.

    2016-04-01

    Calibration of spatially distributed hydrologic models is frequently limited by the availability of ground observations. Remotely sensed (RS) hydrologic information provides an alternative source of observations to inform models and extend modelling capability beyond the limits of ground observations. This study examines the capability of RS evapotranspiration (ET) and soil moisture (SM) in calibrating a hydrologic model and its efficacy in improving streamflow predictions. SM retrievals from the Advanced Microwave Scanning Radiometer-EOS (AMSR-E) and daily ET estimates from the CSIRO MODIS ReScaled potential ET (CMRSET) are used to calibrate a simplified Australian Water Resource Assessment - Landscape model (AWRA-L) for a selection of parameters. The Shuffled Complex Evolution Uncertainty Algorithm (SCE-UA) is employed for parameter estimation at eleven catchments in eastern Australia. A subset of parameters for calibration is selected based on the variance-based Sobol' sensitivity analysis. The efficacy of 15 objective functions for calibration is assessed based on streamflow predictions relative to control cases, and the relative merits of each are discussed. Synthetic experiments were conducted to examine the effect of bias in RS ET observations on calibration. The objective function containing the root mean square deviation (RMSD) of ET results in the best streamflow predictions, and its efficacy is superior for catchments with medium to high average runoff. The synthetic experiments revealed that an accurate ET product can improve streamflow predictions in catchments with low average runoff.
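
    As a sketch of what an RS-driven objective function of this type might look like (the model interface, weights, and variable names here are illustrative assumptions, not the AWRA-L code):

        import numpy as np

        def rmsd(sim, obs):
            """Root mean square deviation between simulated and observed series."""
            sim, obs = np.asarray(sim), np.asarray(obs)
            return np.sqrt(np.nanmean((sim - obs) ** 2))

        def objective(params, run_model, et_obs, sm_obs, w_et=1.0, w_sm=0.0):
            """Calibration objective: weighted RMSD against RS ET and soil moisture.
            run_model(params) is a stand-in returning (et_sim, sm_sim) time series."""
            et_sim, sm_sim = run_model(params)
            return w_et * rmsd(et_sim, et_obs) + w_sm * rmsd(sm_sim, sm_obs)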

  17. Quantitative analysis of time-resolved microwave conductivity data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reid, Obadiah G.; Moore, David T.; Li, Zhen

    Flash-photolysis time-resolved microwave conductivity (fp-TRMC) is a versatile, highly sensitive technique for studying the complex photoconductivity of solution, solid, and gas-phase samples. The purpose of this paper is to provide a standard reference work for experimentalists interested in using microwave conductivity methods to study functional electronic materials, describing how to conduct and calibrate these experiments in order to obtain quantitative results. The main focus of the paper is on calculating the calibration factor, K, which is used to connect the measured change in microwave power absorption to the conductance of the sample. We describe the standard analytical formulae that have been used in the past, and compare them to numerical simulations. This comparison shows that the most widely used analytical analysis of fp-TRMC data systematically under-estimates the transient conductivity by ~60%. We suggest a more accurate semi-empirical way of calibrating these experiments. However, we emphasize that the full numerical calculation is necessary to quantify both transient and steady-state conductance for arbitrary sample properties and geometry.
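
    The calibration factor K enters through the standard small-perturbation relation ΔP/P ≈ -K·ΔG, so once K is known the measured power transient converts directly to a conductance transient. A minimal sketch (illustrative numbers only; K is assumed to come from the calibration described above):

        import numpy as np

        def conductance_transient(dP_over_P, K):
            """Convert a fractional microwave power transient Delta P(t)/P into a
            conductance transient Delta G(t), in siemens, via Delta P/P = -K * Delta G."""
            return -np.asarray(dP_over_P) / K

        # Illustrative: a -1e-4 fractional power dip with K = 2e4 per siemens
        print(conductance_transient([-1e-4], K=2e4))  # ~5e-9 S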

  18. Quantitative analysis of time-resolved microwave conductivity data

    DOE PAGES

    Reid, Obadiah G.; Moore, David T.; Li, Zhen; ...

    2017-11-10

    Flash-photolysis time-resolved microwave conductivity (fp-TRMC) is a versatile, highly sensitive technique for studying the complex photoconductivity of solution, solid, and gas-phase samples. The purpose of this paper is to provide a standard reference work for experimentalists interested in using microwave conductivity methods to study functional electronic materials, describing how to conduct and calibrate these experiments in order to obtain quantitative results. The main focus of the paper is on calculating the calibration factor, K, which is used to connect the measured change in microwave power absorption to the conductance of the sample. We describe the standard analytical formulae that have been used in the past, and compare them to numerical simulations. This comparison shows that the most widely used analytical analysis of fp-TRMC data systematically under-estimates the transient conductivity by ~60%. We suggest a more accurate semi-empirical way of calibrating these experiments. However, we emphasize that the full numerical calculation is necessary to quantify both transient and steady-state conductance for arbitrary sample properties and geometry.

  19. Panchromatic Calibration of Astronomical Observations with State-of-the-Art White Dwarf Model Atmospheres

    NASA Astrophysics Data System (ADS)

    Rauch, T.

    2016-05-01

    Theoretical spectral energy distributions (SEDs) of white dwarfs provide a powerful tool for cross-calibration and sensitivity control of instruments from the far infrared to the X-ray energy range. Such SEDs can be calculated from fully metal-line-blanketed NLTE model atmospheres, e.g., those computed by the Tübingen NLTE Model-Atmosphere Package (TMAP), which has arrived at a high level of sophistication. TMAP has been successfully employed for the reliable spectral analysis of many hot, compact post-AGB stars. High-quality stellar spectra obtained over a wide energy range establish a database with a large number of spectral lines of many successive ions of different species. Their analysis makes it possible to determine effective temperatures, surface gravities, and element abundances of individual (pre-)white dwarfs with very small error ranges. We present applications of TMAP SEDs for spectral analyses of hot, compact stars in the parameter range from (pre-)white dwarfs to neutron stars, and demonstrate the improvement of flux calibration using white-dwarf SEDs that are available, e.g., via registered services in the Virtual Observatory.

  20. Hydrogeology and flow of water in a sand and gravel aquifer contaminated by wood-preserving compounds, Pensacola, Florida

    USGS Publications Warehouse

    Franks, B.J.

    1988-01-01

    The sand and gravel aquifer in southern Escambia County, Florida, is a typical surficial aquifer composed of quartz sands and gravels interbedded locally with silts and clays. Problems of groundwater contamination from leaking surface impoundments are common in surficial aquifers and are a subject of increasing concern and attention. A potentially widespread contamination problem involves organic chemicals from wood-preserving processes. Because creosote is the most extensively used industrial preservative in the United States, an abandoned wood-treatment plant near Pensacola was chosen for investigation. This report describes the hydrogeology and groundwater flow system of the sand and gravel aquifer near the plant. A three-dimensional simulation of groundwater flow in the aquifer was evaluated under steady-state conditions. The model was calibrated on the basis of observed water levels from January 1986. Calibration criteria included reproducing all water levels within the accuracy of the data (one-half contour interval in most cases). Sensitivity analysis showed that the simulations were most sensitive to recharge and to the vertical leakance of the confining units between layers 1 and 2, and relatively insensitive to changes in hydraulic conductivity and transmissivity and to other changes in vertical leakance. Applying the results of the calibrated flow model to the evaluation of solute transport may require finer discretization of the contaminated area, including more sublayers than were needed for calibration of the groundwater flow system itself. (USGS)

  1. The Effects of Temperature and Salinity on Mg Incorporation in Planktonic Foraminifera Globigerinoides ruber (white): Results from a Global Sediment Trap Mg/Ca Database

    NASA Astrophysics Data System (ADS)

    Gray, W. R.; Weldeab, S.; Lea, D. W.

    2015-12-01

    Mg/Ca in Globigerinoides ruber is arguably the most important proxy for sea surface temperature (SST) in tropical and subtropical regions, and as such guides our understanding of past climatic change in these regions. However, the sensitivity of Mg/Ca to salinity is debated; while analysis of foraminifera grown in cultures generally indicates a sensitivity of 3-6% per salinity unit, core-top studies have suggested a much higher sensitivity of 15-27% per salinity unit, bringing the utility of Mg/Ca as an SST proxy into dispute. Sediment traps circumvent the issues of dissolution and post-depositional calcite precipitation that hamper core-top calibration studies, whilst allowing the analysis of foraminifera that have calcified under natural conditions within a well-constrained period of time. We collated previously published sediment-trap/plankton-tow G. ruber (white) Mg/Ca data, and generated new Mg/Ca data from a sediment trap located in the highly saline tropical North Atlantic, close to West Africa. Calcification temperature and salinity were calculated for the time interval represented by each trap/tow sample using World Ocean Atlas 2013 data. The resulting dataset comprises >240 Mg/Ca measurements (in the size fraction 150-350 µm) that span a temperature range of 18-28 °C and a salinity range of 33.6-36.7 PSU. Multiple regression of the dataset reveals a temperature sensitivity of 7 ± 0.4% per °C (p < 2.2 × 10⁻¹⁶) and a salinity sensitivity of 4 ± 1% per salinity unit (p = 2 × 10⁻⁵). Application of this calibration has significant implications for both the magnitude and timing of glacial-interglacial temperature changes when variations in salinity are accounted for.
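
    A minimal sketch of such a multiple regression, assuming the conventional exponential calibration form Mg/Ca = B·exp(aT + bS) so that ln(Mg/Ca) is linear in temperature and salinity (the synthetic data and coefficients below are placeholders, not the sediment-trap dataset):

        import numpy as np

        rng = np.random.default_rng(0)
        T = rng.uniform(18.0, 28.0, 240)             # calcification temperature, deg C
        S = rng.uniform(33.6, 36.7, 240)             # salinity
        mgca = 0.4 * np.exp(0.07 * T + 0.04 * S) * rng.lognormal(0.0, 0.05, 240)

        # ln(Mg/Ca) = a*T + b*S + ln(B): ordinary least squares
        X = np.column_stack([T, S, np.ones_like(T)])
        (a, b, lnB), *_ = np.linalg.lstsq(X, np.log(mgca), rcond=None)
        print("T sensitivity: %.1f%%/degC, S sensitivity: %.1f%%/unit" % (100 * a, 100 * b))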

  2. Free-field Calibration of the Pressure Sensitivity of Microphones at Frequencies up to 80 kHz

    NASA Technical Reports Server (NTRS)

    Herring, G. C.; Zuckerwar, Allan J.; Elbing, Brian R.

    2006-01-01

    A free-field (FF) substitution method for calibrating the pressure sensitivity of microphones at frequencies up to 80 kHz is demonstrated with both grazing and normal incidence geometries. The substitution-based method, as opposed to a simultaneous method, avoids problems associated with the non-uniformity of the sound field and, as applied here, uses a 1/2-inch air-condenser pressure microphone as a known reference. Best results were obtained with a centrifugal fan, which is used as a random, broadband sound source. A broadband source minimizes reflection-related interferences that often plague FF measurements. Calibrations were performed on 1/4-inch FF air-condenser, electret, and micro-electromechanical systems (MEMS) microphones in an anechoic chamber. The accuracy of this FF method, estimated by comparing the pressure sensitivity of an air-condenser microphone derived from the FF measurement with that of an electrostatic actuator calibration, is typically 0.3 dB (95% confidence) over the range 2-80 kHz.

  3. Agricultural Policy Environmental eXtender Simulation of Three Adjacent Row-Crop Watersheds in the Claypan Region.

    PubMed

    Anomaa Senaviratne, G M M M; Udawatta, Ranjith P; Baffaut, Claire; Anderson, Stephen H

    2013-01-01

    The Agricultural Policy Environmental eXtender (APEX) model is used to evaluate best management practices on pollutant loading in whole farms or small watersheds. The objectives of this study were to conduct a sensitivity analysis to determine the effect of model parameters on APEX output and to use the parameterized, calibrated, and validated model to evaluate long-term benefits of grass waterways. The APEX model was used to model three (East, Center, and West) adjacent field-size watersheds with claypan soils under a no-till corn (Zea mays L.)/soybean [Glycine max (L.) Merr.] rotation. Twenty-seven parameters were sensitive for crop yield, runoff, sediment, nitrogen (dissolved and total), and phosphorus (dissolved and total) simulations. The model was calibrated using measured event-based data from the Center watershed from 1993 to 1997 and validated with data from the West and East watersheds. Simulated crop yields were within ±13% of the measured yield. The model performance for event-based runoff was excellent, with calibration and validation r² > 0.9 and Nash-Sutcliffe coefficients (NSC) > 0.8, respectively. Sediment and total nitrogen calibration results were satisfactory for larger rainfall events (>50 mm), with r² > 0.5 and NSC > 0.4, but validation results remained poor, with NSC between 0.18 and 0.3. Total phosphorus was well calibrated and validated, with r² > 0.8 and NSC > 0.7, respectively. The presence of grass waterways reduced annual total phosphorus loadings by 13 to 25%. This replicated study indicates that APEX provides a convenient and efficient tool to evaluate long-term benefits of conservation practices. Copyright © by the American Society of Agronomy, Crop Science Society of America, and Soil Science Society of America, Inc.

  4. Sensitivity analysis of urban flood flows to hydraulic controls

    NASA Astrophysics Data System (ADS)

    Chen, Shangzhi; Garambois, Pierre-André; Finaud-Guyot, Pascal; Dellinger, Guilhem; Terfous, Abdelali; Ghenaim, Abdallah

    2017-04-01

    Flooding represents one of the most significant natural hazards on every continent, particularly in highly populated areas, and improving the accuracy and robustness of prediction systems has become a priority. However, in situ measurements of floods remain difficult, while a better understanding of flood flow spatiotemporal dynamics, along with datasets for model validation, appears essential. The present contribution is based on a unique experimental device at 1/200 scale, able to produce urban flooding with flood flows corresponding to frequent-to-rare return periods. The influence of 1D Saint-Venant and 2D shallow water model input parameters on simulated flows is assessed using global sensitivity analysis (GSA). The tested parameters are global and local boundary conditions (water heights and discharge) and spatially uniform or distributed friction coefficients and porosity, each tested over ranges centered on nominal values calibrated from accurate experimental data and their related uncertainties. For various experimental configurations, a variance decomposition method (ANOVA) is used to calculate spatially distributed Sobol' sensitivity indices (Si's). The sensitivity of water depth to input parameters on two main streets of the experimental device is presented here. Results show that the closer to the downstream boundary condition on water height, the higher the Sobol' index, as predicted by hydraulic theory for subcritical flow, while, interestingly, the sensitivity to friction decreases. The sensitivity indices of all lateral inflows, representing crossroads in 1D, are also quantified in this study, along with their asymptotic trends along the flow distance. The relationship between lateral discharge magnitude and the resulting sensitivity index of water depth is investigated. In simulations with distributed friction coefficients, crossroad friction is shown to have a much greater influence on the upstream water-depth profile than street friction coefficients. This methodology could be applied to any urban flood configuration to better understand flow dynamics and repartition, but also to guide model calibration in the light of flow controls.
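
    A compact sketch of variance-based GSA of this flavor, using the SALib library on a stand-in scalar response (the parameter names, bounds, and toy depth function are assumptions; a real study would call the 1D/2D hydraulic solver instead):

        import numpy as np
        from SALib.sample import saltelli
        from SALib.analyze import sobol

        problem = {
            "num_vars": 3,
            "names": ["h_downstream", "friction_n", "q_inflow"],
            "bounds": [[0.9, 1.1], [0.8, 1.2], [0.9, 1.1]],  # scaled around nominal
        }

        X = saltelli.sample(problem, 1024)

        def water_depth(x):
            # Toy monotone response standing in for the shallow-water solver
            h, n, q = x
            return h + 0.3 * n * q ** 2

        Y = np.apply_along_axis(water_depth, 1, X)
        Si = sobol.analyze(problem, Y)
        print(dict(zip(problem["names"], np.round(Si["S1"], 3))))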

  5. Risk finance for catastrophe losses with Pareto-calibrated Lévy-stable severities.

    PubMed

    Powers, Michael R; Powers, Thomas Y; Gao, Siwei

    2012-11-01

    For catastrophe losses, the conventional risk finance paradigm of enterprise risk management identifies transfer, as opposed to pooling or avoidance, as the preferred solution. However, this analysis does not necessarily account for differences between light- and heavy-tailed characteristics of loss portfolios. Of particular concern are the decreasing benefits of diversification (through pooling) as the tails of severity distributions become heavier. In the present article, we study a loss portfolio characterized by nonstochastic frequency and a class of Lévy-stable severity distributions calibrated to match the parameters of the Pareto II distribution. We then propose a conservative risk finance paradigm that can be used to prepare the firm for worst-case scenarios with regard to both (1) the firm's intrinsic sensitivity to risk and (2) the heaviness of the severity's tail. © 2012 Society for Risk Analysis.

  6. Gamma/Hadron Separation for the HAWC Observatory

    NASA Astrophysics Data System (ADS)

    Gerhardt, Michael J.

    The High-Altitude Water Cherenkov (HAWC) Observatory is a gamma-ray observatory sensitive to gamma rays from 100 GeV to 100 TeV with an instantaneous field of view of ˜2 sr. It is located on the Sierra Negra plateau in Mexico at an elevation of 4,100 m and began full operation in March 2015. The purpose of the detector is to study relativistic particles produced by interstellar and intergalactic objects such as pulsars, supernova remnants, molecular clouds, and black holes. To achieve optimal angular resolution, energy reconstruction, and cosmic-ray background suppression for the extensive air showers detected by HAWC, good timing and charge calibration are crucial, as is optimization of the quality cuts on background-suppression variables. Additions to the HAWC timing calibration, in particular automation of the calibration quality checks, and a new method for background suppression using a multivariate analysis are presented in this thesis.

  7. Optical Calibration Process Developed for Neural-Network-Based Optical Nondestructive Evaluation Method

    NASA Technical Reports Server (NTRS)

    Decker, Arthur J.

    2004-01-01

    A completely optical calibration process has been developed at Glenn for calibrating a neural-network-based nondestructive evaluation (NDE) method. The NDE method itself detects very small changes in the characteristic patterns or vibration mode shapes of vibrating structures, as discussed in many references. The mode shapes or characteristic patterns are recorded using television or electronic holography and change when a structure experiences, for example, cracking, debonds, or variations in fastener properties. An artificial neural network can be trained to be very sensitive to changes in the mode shapes, but quantifying or calibrating that sensitivity in a consistent, meaningful, and deliverable manner has been challenging. The standard calibration approach, in which the response of the trained neural network to damage is compared with the responses of vibration-measurement sensors, has been difficult to implement: the vibration-measurement sensors are intrusive, insufficiently sensitive, and not numerous enough. In response to these difficulties, a completely optical alternative to the standard calibration approach was proposed and tested successfully. Specifically, the vibration mode to be monitored for structural damage was intentionally contaminated with known amounts of another mode, and the response of the trained neural network was measured as a function of the peak-to-peak amplitude of the contaminating mode. The neural-network calibration technique essentially uses the vibration mode shapes of the undamaged structure as standards against which the changed mode shapes are compared. The published response of the network can be made nearly independent of the contaminating mode if enough vibration modes are used to train the net. The sensitivity of the neural network can be adjusted for the environment in which the test is to be conducted. The response of a neural network trained with measured vibration patterns, for use on a vibration isolation table in the presence of various sources of laboratory noise, is shown. The output of the neural network is called the degradable classification index. The response curve was generated by a simultaneous comparison of means, and it shows a peak-to-peak sensitivity of about 100 nm. Model-generated data from a compressor blade show that much higher sensitivities are possible when the environment can be controlled better; the peak-to-peak sensitivity in that case is about 20 nm. The training procedure was modified for the compressor-blade case, and the data were subjected to an intensity-dependent transformation called folding. All the measurements for this approach to calibration were optical: the peak-to-peak amplitudes of the vibration modes were measured using heterodyne interferometry, and the modes themselves were recorded using television (electronic) holography.

  8. Method for Ground-to-Satellite Laser Calibration System

    NASA Technical Reports Server (NTRS)

    Lukashin, Constantine (Inventor); Wielicki, Bruce A. (Inventor)

    2015-01-01

    The present invention comprises an approach for calibrating the sensitivity to polarization, optics degradation, and spectral and stray-light response functions of instruments on orbit. The concept is based on using an accurate ground-based laser system, Ground-to-Space Laser Calibration (GSLC), transmitting laser light to an instrument on orbit at night under substantially clear-sky conditions. To minimize the atmospheric contribution to the calibration uncertainty, the calibration cycles should be performed in short time intervals, and all required measurements are designed to be relative. The calibration cycles involve ground operations with laser-beam polarization and wavelength changes.

  9. Method for Ground-to-Space Laser Calibration System

    NASA Technical Reports Server (NTRS)

    Lukashin, Constantine (Inventor); Wielicki, Bruce A. (Inventor)

    2014-01-01

    The present invention comprises an approach for calibrating the sensitivity to polarization, optics degradation, and spectral and stray-light response functions of instruments on orbit. The concept is based on using an accurate ground-based laser system, Ground-to-Space Laser Calibration (GSLC), transmitting laser light to an instrument on orbit at night under substantially clear-sky conditions. To minimize the atmospheric contribution to the calibration uncertainty, the calibration cycles should be performed in short time intervals, and all required measurements are designed to be relative. The calibration cycles involve ground operations with laser-beam polarization and wavelength changes.

  10. Calibration of a Noble Gas Mass Spectrometer with an Atmospheric Argon Standard (Invited)

    NASA Astrophysics Data System (ADS)

    Prasad, V.; Grove, M.

    2009-12-01

    Like other mass spectrometers, gas-source instruments are very good at precisely measuring isotopic ratios but need to be calibrated with a standard to be accurate. The need for calibration arises from the complicated ionization process, which inefficiently and differentially creates ions from the various isotopes that make up the elemental gas. Calibration of the ionization process requires materials with well-understood isotopic compositions as standards. Our project goal was to calibrate a noble gas (Noblesse) mass spectrometer with a purified air sample. Our sample, obtained from Ocean Beach in San Francisco, was collected at known temperature, pressure, volume, and humidity. We corrected the pressure for humidity and used the ideal gas law to calculate the number of moles of argon gas. We then removed all active gases using specialized equipment designed for this purpose at the United States Geological Survey. At the same time, we measured the volume ratios of various parts of the gas-extraction-line system associated with the Noblesse mass spectrometer. Using these data, we calculated how much Ar was transferred to the reservoir from the vacuum-sealed vial that contained the purified gas standard. Using similar measurements, we also calculated how much Ar was introduced into the extraction line from a pipette system and how much of this Ar was ultimately expanded into the Noblesse mass spectrometer. Based on this information, it was possible to calibrate the argon sensitivity of the mass spectrometer. From knowledge of the isotopic composition of air, it was also possible to characterize how ionized argon isotopes were fractionated during analysis. By repeatedly analyzing our standard, we measured a 40Ar sensitivity of 2.05 amps/bar and a 40Ar/36Ar ratio of 309.2 on the Faraday detector. In contrast, measurements carried out by ion counting with electron multipliers yield a value (296.8) that is much closer to the actual atmospheric 40Ar/36Ar value of 295.5.
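
    The gas-amount bookkeeping in this procedure reduces to the ideal gas law with a humidity correction; a worked numerical sketch (the sample values below are illustrative placeholders, not the actual Ocean Beach measurements):

        R = 8.314462618          # J / (mol K)
        AR_FRACTION = 0.00934    # volume fraction of Ar in dry air

        def moles_argon(p_total_pa, volume_m3, temp_k, p_h2o_pa=0.0):
            """Moles of Ar in a moist air sample: subtract the water-vapor partial
            pressure, apply PV = nRT, then take argon's dry-air fraction."""
            n_air = (p_total_pa - p_h2o_pa) * volume_m3 / (R * temp_k)
            return AR_FRACTION * n_air

        # e.g. a 50 mL vial at 1 atm and 15 C, with ~1.2 kPa of water vapor:
        print(moles_argon(101325.0, 50e-6, 288.15, p_h2o_pa=1200.0))  # ~2.0e-5 mol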

  11. Procedures for adjusting regional regression models of urban-runoff quality using local data

    USGS Publications Warehouse

    Hoos, A.B.; Sisolak, J.K.

    1993-01-01

    Statistical operations termed model-adjustment procedures (MAPs) can be used to incorporate local data into existing regression models to improve the prediction of urban-runoff quality. Each MAP is a form of regression analysis in which the local database is used as a calibration data set. Regression coefficients are determined from the local database, and the resulting 'adjusted' regression models can then be used to predict storm-runoff quality at unmonitored sites. The response variable in the regression analyses is the observed load or mean concentration of a constituent in storm runoff for a single storm. The set of explanatory variables used in the regression analyses is different for each MAP, but always includes the predicted value of load or mean concentration from a regional regression model. The four MAPs examined in this study were: single-factor regression against the regional model prediction, P (termed MAP-1F-P); regression against P (termed MAP-R-P); regression against P and additional local variables (termed MAP-R-P+nV); and a weighted combination of P and a local-regression prediction (termed MAP-W). The procedures were tested by means of split-sample analysis, using data from three cities included in the Nationwide Urban Runoff Program: Denver, Colorado; Bellevue, Washington; and Knoxville, Tennessee. The MAP that provided the greatest predictive accuracy for the verification data set differed among the three test databases and among model types (MAP-W for Denver and Knoxville, MAP-1F-P and MAP-R-P for Bellevue load models, and MAP-R-P+nV for Bellevue concentration models) and, in many cases, was not clearly indicated by the values of standard error of estimate for the calibration data set. A scheme to guide MAP selection, based on exploratory data analysis of the calibration data set, is presented and tested. The MAPs were also tested for sensitivity to the size of the calibration data set. As expected, predictive accuracy of all MAPs for the verification data set decreased as the calibration data-set size decreased, but predictive accuracy was not as sensitive for the MAPs as it was for the local regression models.
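
    A minimal sketch of the simplest of these procedures, MAP-R-P, as an ordinary least-squares regression of local observations on the regional prediction P (the function and variable names are illustrative; real storm-load regressions are typically done in log space):

        import numpy as np

        def fit_map_r_p(local_obs, regional_pred):
            """MAP-R-P: regress local storm loads on regional-model predictions P.
            Returns (intercept, slope) of the adjusted model."""
            regional_pred = np.asarray(regional_pred, dtype=float)
            X = np.column_stack([np.ones_like(regional_pred), regional_pred])
            (b0, b1), *_ = np.linalg.lstsq(X, np.asarray(local_obs, float), rcond=None)
            return b0, b1

        def predict_adjusted(p_regional, b0, b1):
            """Adjusted storm-runoff quality prediction at an unmonitored site."""
            return b0 + b1 * p_regional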

  12. Development in the SONNE Instrument, a Solar Neutron Spectrometer for Solar Orbiter and Solar Sentinels

    NASA Astrophysics Data System (ADS)

    Ryan, J. M.; Bravar, U.; Macri, J. R.; McConnell, M. L.; Woolf, R.; Moser, M.; Flueckiger, E.; Pirard, B.; MacKinnon, A.; Mallik, P.; Bruillard, P.

    2007-12-01

    We report on the technical development of SONNE (Solar Neutron Experiment), a solar neutron spectrometer intended for use on the ESA Solar Orbiter and/or the NASA Solar Sentinels missions. Development has taken place on three fronts: (1) simulations of a flight instrument, including the spacecraft radiation environment; (2) calibration of a prototype instrument in a monoenergetic neutron beam; and (3) mechanical and electrical design of a deep-space mission instrument. SONNE will be sensitive to fast neutrons up to 20 MeV, using double-scatter imaging techniques to dramatically reduce background. Preliminary analysis of the beam measurements, conducted just before this abstract was written, supports the advertised design goals for sensitivity and energy resolution, meaning that time-stamping neutron emission from the Sun will be possible. Combined with gamma-ray measurements, new insight into particle acceleration will emerge when the instrument is deployed on an inner heliospheric mission. Progress will be reported on simulations and physical design as well as calibrations.

  13. Calibration of a subcutaneous amperometric glucose sensor. Part 1. Effect of measurement uncertainties on the determination of sensor sensitivity and background current.

    PubMed

    Choleau, C; Klein, J C; Reach, G; Aussedat, B; Demaria-Pesce, V; Wilson, G S; Gifford, R; Ward, W K

    2002-08-01

    The calibration of a continuous glucose monitoring system, i.e., the transformation of the signal I(t) generated by the glucose sensor at time t into an estimation of glucose concentration G(t), represents a key issue. The two-point calibration procedure consists of determining a sensor sensitivity S and a background current I(o) by plotting two values of the sensor signal versus the concomitant blood glucose concentrations. The estimation of G(t) is subsequently given by G(t) = (I(t) - I(o))/S. A glucose sensor was implanted in the subcutaneous tissue of nine type 1 diabetic patients for 3 days (n = 2) or 7 days (n = 7). For each individual trial, S and I(o) were determined from two sets of sensor output and blood glucose concentration separated by at least 1 h, the procedure being repeated for each consecutive pair of values. S and I(o) were found to be negatively correlated, the value of I(o) sometimes being negative. Theoretical analysis demonstrates that this phenomenon can be explained by the effect of measurement uncertainties on the determination of capillary glucose concentration and of sensor output.
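
    The two-point procedure itself is a one-liner; a minimal sketch with hypothetical numbers (the nA and mmol/L values are chosen only for illustration):

        def two_point_calibration(i1, g1, i2, g2):
            """Sensitivity S and background current I0 from two (signal, glucose)
            pairs taken at least 1 h apart, assuming I = I0 + S*G."""
            S = (i2 - i1) / (g2 - g1)
            I0 = i1 - S * g1
            return S, I0

        def estimate_glucose(i_t, S, I0):
            """G(t) = (I(t) - I0) / S."""
            return (i_t - I0) / S

        S, I0 = two_point_calibration(12.0, 5.5, 30.0, 14.0)   # illustrative values
        print(estimate_glucose(21.0, S, I0))                    # ~9.75 mmol/L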

  14. In-flight calibration of SCIAMACHY's polarization sensitivity

    NASA Astrophysics Data System (ADS)

    Liebing, Patricia; Krijger, Matthijs; Snel, Ralph; Bramstedt, Klaus; Noël, Stefan; Bovensmann, Heinrich; Burrows, John P.

    2018-01-01

    This paper describes the in-flight calibration of the polarization response of the SCIAMACHY polarization measurement devices (PMDs) and a selected region of its science channels. With the lack of polarized calibration sources it is not possible to obtain such a calibration from dedicated calibration measurements. Instead, the earthshine itself, together with a simplified radiative transfer model (RTM), is used to derive time-dependent and measurement-configuration-dependent polarization sensitivities. The results are compared to an instrument model that describes the degradation of the instrument as a result of a slow buildup of contaminant layers on its elevation and azimuth scan mirrors. This comparison reveals significant differences between the model prediction and the data, suggesting an unforeseen change between on-ground and in-flight calibration in at least one of the polarization-sensitive components of the optical bench. The possibility of mechanisms other than scan mirror contamination contributing to the degradation of the instrument will be discussed. The data are consistent with a polarization phase shift occurring in the beam split prism used to divert the light coming from the telescope to the different channels and polarization measurement devices. The extension of the instrument degradation model with a linear retarder enables the determination of the relevant parameters to describe this phase shift and ultimately results in a significant improvement of the polarization measurements as well as the polarization response correction of measured radiances.

  15. Approach of technical decision-making by element flow analysis and Monte-Carlo simulation of municipal solid waste stream.

    PubMed

    Tian, Bao-Guo; Si, Ji-Tao; Zhao, Yan; Wang, Hong-Tao; Hao, Ji-Ming

    2007-01-01

    This paper deals with the procedure and methodology that can be used to select the optimal treatment and disposal technology for municipal solid waste (MSW), and to provide practical and effective technical support to policy-making, on the basis of a study of solid-waste management status and development trends in China and abroad. Focusing on various treatment and disposal technologies and processes for MSW, this study established a Monte-Carlo mathematical model of cost minimization for MSW handling subject to environmental constraints. A new method of element-stream analysis (for elements such as C, H, O, N, and S) in combination with economic-stream analysis of MSW was developed. By following the streams of different treatment processes, consisting of various techniques for generation, separation, transfer, transport, treatment, recycling, and disposal of the wastes, the element constitution as well as its economic distribution in terms of possibility functions was identified. Every technique step was evaluated economically. The Monte-Carlo method was then used for model calibration. Sensitivity analysis was also carried out to identify the most sensitive factors. Model calibration indicated that landfill with power generation from landfill gas was economically the optimal technology at the present stage, under the condition that more than 58% of the C, H, O, N, and S goes to landfill. Whether or not to generate electricity was the most sensitive factor. If landfilling costs increase, MSW separation treatment is recommended: screening first, followed by partial incineration and partial composting, with residue landfilling. The possibility of selecting incineration as the optimal technology was affected by city scale. For big cities and metropolises with large MSW generation, the possibility of constructing large-scale incineration facilities increases, whereas for middle-sized and small cities the effectiveness of incinerating waste decreases.
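
    As a toy illustration of the Monte-Carlo cost comparison underlying such a model (the cost and revenue distributions below are invented placeholders, not the study's calibrated values):

        import numpy as np

        rng = np.random.default_rng(1)
        n = 100_000  # Monte Carlo draws per technology route

        # Net cost per tonne = treatment cost minus energy revenue (illustrative):
        landfill = rng.triangular(60, 80, 110, n) - rng.triangular(5, 15, 30, n)
        incinerate = rng.triangular(120, 150, 200, n) - rng.triangular(20, 40, 70, n)

        print("mean net cost, landfill + gas power:", landfill.mean())
        print("mean net cost, incineration:", incinerate.mean())
        print("P(landfill route cheaper):", np.mean(landfill < incinerate))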

  16. Quantitative aspects of inductively coupled plasma mass spectrometry

    NASA Astrophysics Data System (ADS)

    Bulska, Ewa; Wagner, Barbara

    2016-10-01

    Accurate determination of elements in various kinds of samples is essential for many areas, including environmental science, medicine, as well as industry. Inductively coupled plasma mass spectrometry (ICP-MS) is a powerful tool enabling multi-elemental analysis of numerous matrices with high sensitivity and good precision. Various calibration approaches can be used to perform accurate quantitative measurements by ICP-MS. They include the use of pure standards, matrix-matched standards, or relevant certified reference materials, assuring traceability of the reported results. This review critically evaluates the advantages and limitations of different calibration approaches, which are used in quantitative analyses by ICP-MS. Examples of such analyses are provided. This article is part of the themed issue 'Quantitative mass spectrometry'.

  17. Optimization and Calibration of Slat Position for a SPECT With Slit-Slat Collimator and Pixelated Detector Crystals

    NASA Astrophysics Data System (ADS)

    Deng, Xiao; Ma, Tianyu; Lecomte, Roger; Yao, Rutao

    2011-10-01

    To expand the availability of SPECT for biomedical research, we developed a SPECT imaging system on an existing animal PET detector by adding a slit-slat collimator. As the detector crystals are pixelated, the relative slat-to-crystal position (SCP) in the axial direction affects the photon-flux distribution onto the crystals. Accurate knowledge of the SCP is important to the axial resolution and sensitivity of the system. This work presents a method for optimizing the SCP in system design and for determining the SCP in system geometrical calibration. The optimization was achieved by finding the SCP that provides higher spatial resolution, in terms of the average root-mean-square (RMS) width of the axial point spread function (PSF), without loss of sensitivity. The calibration was based on the least-square-error method, which minimizes the difference between the measured and modeled axial point-spread projections. The uniqueness and accuracy of the calibration results were validated through a singular value decomposition (SVD) based approach. Both the optimization and calibration techniques were evaluated with Monte Carlo (MC) simulated data. We showed that the RMS width was improved by about 15% with the optimal SCP as compared to the least-optimal SCP, and that system sensitivity was not affected by the SCP. The SCP error achieved by the proposed calibration method was less than 0.04 mm. The calibrated SCP value was used in MC simulation to generate the system matrix, which was used for image reconstruction. The images of simulated phantoms showed the expected resolution performance and were artifact-free. We conclude that the proposed optimization and calibration method is effective for slit-slat-collimator-based SPECT systems.

  18. Genetic algorithm applied to a Soil-Vegetation-Atmosphere system: Sensitivity and uncertainty analysis

    NASA Astrophysics Data System (ADS)

    Schneider, Sébastien; Jacques, Diederik; Mallants, Dirk

    2010-05-01

    Numerical models are of precious help for predicting water fluxes in the vadose zone, and more specifically in Soil-Vegetation-Atmosphere (SVA) systems. For such simulations, robust models and representative soil hydraulic parameters are required. Calibration of unsaturated hydraulic properties is known to be a difficult optimization problem due to the high non-linearity of the water flow equations; therefore, robust methods are needed to prevent the optimization process from converging to non-optimal parameters. Evolutionary algorithms, and specifically genetic algorithms (GAs), are very well suited to such complex parameter-optimization problems. Additionally, GAs offer the opportunity to assess the confidence in the hydraulic parameter estimates, because of the large number of model realizations. The SVA system in this study concerns a pine stand on a heterogeneous sandy soil (podzol) in the Campine region in the north of Belgium. Throughfall and other meteorological data, as well as water contents at different soil depths, were recorded during one year at a daily time step in two lysimeters. The water table level, which varies between 95 and 170 cm, was recorded at 0.5-hour intervals. The leaf area index was also measured at selected times during the year, in order to evaluate the energy reaching the soil and to deduce the potential evaporation. Based on the profile description, five soil layers were distinguished in the podzol. Two models were used for simulating water fluxes: (i) a mechanistic model, HYDRUS-1D, which solves the Richards equation, and (ii) a compartmental model, which treats the soil profile as a bucket into which water flows until its maximum capacity is reached. A global sensitivity analysis (Morris' one-at-a-time sensitivity analysis) was run prior to the calibration, in order to check the sensitivity over the chosen parameter search space. For the inversion procedure a genetic algorithm (GA) was used, with specific features such as elitism, a roulette-wheel selection operator, and island theory, as sketched below. Optimization was based on the water-content measurements recorded at several depths. Ten scenarios were elaborated and applied to the two lysimeters in order to investigate the impact of the conceptual model, in terms of process description (mechanistic or compartmental) and geometry (number of horizons in the profile description), on the calibration accuracy. Calibration leads to good agreement with the measured water contents. The most critical factors for improving the goodness of fit are the number of horizons and the type of process description. The best fits are found for a mechanistic model with 5 horizons, resulting in absolute differences between observed and simulated water contents of less than 0.02 cm3 cm-3 on average. Parameter-estimate analysis shows that layer thicknesses are poorly constrained, whereas the hydraulic parameters are much better defined.
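
    A self-contained sketch of a real-coded GA with two of the features mentioned (elitism and roulette-wheel selection); the objective below is a stand-in, where a real application would run HYDRUS-1D or the bucket model and score the simulated water contents:

        import numpy as np

        rng = np.random.default_rng(42)

        def ga_minimize(loss, bounds, pop_size=40, generations=200, n_elite=2, mut=0.05):
            """Minimal genetic algorithm with elitism and roulette-wheel selection."""
            lo, hi = np.array(bounds, dtype=float).T
            dim = lo.size
            pop = rng.uniform(lo, hi, size=(pop_size, dim))
            for _ in range(generations):
                cost = np.apply_along_axis(loss, 1, pop)
                elites = pop[np.argsort(cost)[:n_elite]].copy()
                fitness = cost.max() - cost + 1e-12            # lower cost -> fitter
                parents = pop[rng.choice(pop_size, pop_size, p=fitness / fitness.sum())]
                mates = parents[rng.permutation(pop_size)]
                cut = rng.integers(1, dim, pop_size)[:, None]  # one-point crossover
                children = np.where(np.arange(dim) < cut, parents, mates)
                children += rng.normal(0.0, mut, children.shape) * (hi - lo)  # mutation
                pop = np.clip(children, lo, hi)
                pop[:n_elite] = elites                         # elitism
            cost = np.apply_along_axis(loss, 1, pop)
            return pop[np.argmin(cost)]

        # Stand-in objective: misfit of two hydraulic parameters to "observations"
        best = ga_minimize(lambda p: np.sum((p - np.array([0.07, 1.8])) ** 2),
                           bounds=[(0.01, 0.20), (1.1, 3.0)])
        print(best)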

  19. Neural-Net Based Optical NDE Method for Structural Health Monitoring

    NASA Technical Reports Server (NTRS)

    Decker, Arthur J.; Weiland, Kenneth E.

    2003-01-01

    This paper answers some performance and calibration questions about a non-destructive-evaluation (NDE) procedure that uses artificial neural networks to detect structural damage or other changes from sub-sampled characteristic patterns. The method shows increasing sensitivity as the number of sub-samples increases from 108 to 6912. The sensitivity of this robust NDE method is not affected by noisy excitations of the first vibration mode. A calibration procedure is proposed and demonstrated where the output of a trained net can be correlated with the outputs of the point sensors used for vibration testing. The calibration procedure is based on controlled changes of fastener torques. A heterodyne interferometer is used as a displacement sensor for a demonstration of the challenges to be handled in using standard point sensors for calibration.

  20. Using the GOCE star trackers for validating the calibration of its accelerometers

    NASA Astrophysics Data System (ADS)

    Visser, P. N. A. M.

    2017-12-01

    A method for validating the calibration parameters of the six accelerometers on board the Gravity field and steady-state Ocean Circulation Explorer (GOCE) from star tracker observations, originally tested in an end-to-end simulation, has been updated and applied to real data from GOCE. It is shown that the method provides estimates of scale factors for all three axes of the six GOCE accelerometers that are consistent, at a level significantly better than 0.01, with the a priori calibrated value of 1. In addition, relative accelerometer biases and drift terms were estimated that are consistent with values obtained by precise orbit determination, where the first GOCE accelerometer served as reference. The calibration results clearly reveal the different behavior of the sensitive and less-sensitive accelerometer axes.
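
    The core estimation step of such a validation can be pictured as a per-axis linear fit; a minimal sketch under assumed notation (a_ref standing in for the star-tracker-derived reference accelerations, which is a simplification of the actual GOCE processing):

        import numpy as np

        def calibrate_axis(a_acc, a_ref, t):
            """Least-squares estimate of scale s, bias b, and drift d for one axis,
            from the model a_ref = s * a_acc + b + d * t."""
            A = np.column_stack([a_acc, np.ones_like(t), t])
            (s, b, d), *_ = np.linalg.lstsq(A, a_ref, rcond=None)
            return s, b, d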

  1. Construction of robust dynamic genome-scale metabolic model structures of Saccharomyces cerevisiae through iterative re-parameterization.

    PubMed

    Sánchez, Benjamín J; Pérez-Correa, José R; Agosin, Eduardo

    2014-09-01

    Dynamic flux balance analysis (dFBA) has been widely employed in metabolic engineering to predict the effect of genetic modifications and environmental conditions on the cell's metabolism during dynamic cultures. However, the importance of the model parameters used in these methodologies has not been properly addressed. Here, we present a novel and simple procedure to identify dFBA parameters that are relevant for model calibration. The procedure uses metaheuristic optimization and pre/post-regression diagnostics, iteratively fixing the model parameters that do not have a significant role. We evaluated this protocol in a Saccharomyces cerevisiae dFBA framework calibrated for aerobic fed-batch and anaerobic batch cultivations. The model structures achieved have only significant, sensitive, and uncorrelated parameters and can be calibrated against different experimental data sets. We show that consumption, suboptimal growth, and production rates are more useful for calibrating dynamic S. cerevisiae metabolic models than Boolean gene-expression rules, biomass requirements, and ATP maintenance. Copyright © 2014 International Metabolic Engineering Society. Published by Elsevier Inc. All rights reserved.

  2. MODIS Investigation

    NASA Technical Reports Server (NTRS)

    Abbott, Mark R.

    1996-01-01

    The objectives of the last six months were to: (1) complete the sensitivity analysis of fluorescence line-height algorithms; (2) deliver fluorescence algorithm code and test data to the University of Miami for integration; (3) complete the analysis of bio-optical data from the Southern Ocean cruise; (4) conduct laboratory experiments based on analyses of field data; (5) analyze data from the bio-optical mooring off Hawaii; (6) develop a calibration/validation plan for MODIS fluorescence data; (7) respond to the Japanese Research Announcement for GLI; and (8) continue to review plans for EOSDIS and assist the ECS contractor.

  3. Sensitivity and fragmentation calibration of the time-of-flight mass spectrometer RTOF on board ESA's Rosetta mission

    NASA Astrophysics Data System (ADS)

    Gasc, Sébastien; Altwegg, Kathrin; Jäckel, Annette; Le Roy, Léna; Rubin, Martin; Fiethe, Björn; Mall, Urs; Rème, Henri

    2014-05-01

    The European Space Agency's Rosetta mission will rendezvous with comet 67P/Churyumov-Gerasimenko (67P) in September 2014. The Rosetta spacecraft, with the Rosetta Orbiter Spectrometer for Ion and Neutral Analysis (ROSINA) on board, will follow and survey 67P for more than a year, until the comet reaches its perihelion and beyond. ROSINA will provide new information on the global molecular, elemental, and isotopic composition of the coma [1]. ROSINA consists of a pressure sensor (COPS) and two mass spectrometers, the Double Focusing Mass Spectrometer (DFMS) and the Reflectron Time Of Flight mass spectrometer (RTOF). RTOF has a wide mass range, from 1 amu/e to >300 amu/e, and contains two ion sources, a reflectron, and two detectors. The two ion sources, the orthogonal source and the storage source, are both capable of measuring cometary ions, while the latter also allows measurement of cometary neutral gas. In neutral-gas mode, ionization is performed by electron impact. A built-in Gas Calibration Unit (GCU) contains a known gas mixture composed of He, CO2, and Kr that can be used for in-flight calibration of the instrument. Among other ROSINA-specific scientific goals, RTOF's task will be to determine the molecular composition of volatiles by measuring and separating heavy hydrocarbons; it has been designed to study the development of cometary activity as well as coma chemistry between 3.5 AU and perihelion. From spectroscopic studies and in-situ observations of other comets, we expect to find molecules such as H2O, CO, CO2, hydrocarbons, alcohols, formaldehyde, and other organic compounds in the coma of 67P/Churyumov-Gerasimenko [2]. To demonstrate and quantify the sensitivity and functionality of RTOF, calibration measurements have been performed with more than 20 species, among them the most abundant molecules quoted above as well as other species such as PAHs. We will describe the methods used to perform this calibration and will discuss our preliminary results, i.e., RTOF's capabilities in terms of sensitivity, isotopic ratios, and fragmentation patterns. We will demonstrate that RTOF is fully capable of meeting the requirements to address the scientific questions discussed above. [1] Balsiger, H. et al.: ROSINA-Rosetta Orbiter Spectrometer for Ion and Neutral Analysis, Space Science Reviews, Vol. 128, 745-801, 2007. [2] Bockelée-Morvan, D., Crovisier, J., Mumma, M. J., and Weaver, H. A.: The Composition of Cometary Volatiles, in Comets II (M. C. Festou et al., eds.), Univ. Arizona Press, Tucson, 2004.

  4. Uncertainty and sensitivity assessments of an agricultural-hydrological model (RZWQM2) using the GLUE method

    NASA Astrophysics Data System (ADS)

    Sun, Mei; Zhang, Xiaolin; Huo, Zailin; Feng, Shaoyuan; Huang, Guanhua; Mao, Xiaomin

    2016-03-01

    Quantitatively ascertaining and analyzing the effects of model uncertainty on model reliability is a focal point for agricultural-hydrological models, owing to the many uncertainties in inputs and processes. In this study, the generalized likelihood uncertainty estimation (GLUE) method with Latin hypercube sampling (LHS) was used to evaluate the uncertainty of the RZWQM-DSSAT (RZWQM2) model output responses and the sensitivity of 25 parameters related to soil properties, nutrient transport, and crop genetics. To avoid the one-sided risk of model prediction caused by using a single calibration criterion, a combined likelihood (CL) function integrating information on water, nitrogen, and crop production was introduced in the GLUE analysis for the prediction of the following four model output responses: the total amounts of water (T-SWC) and nitrate nitrogen (T-NIT) within the 1-m soil profile, and the seed yields of waxy maize (Y-Maize) and winter wheat (Y-Wheat). In the process of evaluating RZWQM2, measurements and meteorological data were obtained from a field experiment involving a winter wheat and waxy maize crop-rotation system conducted from 2003 to 2004 in southern Beijing. The calibration and validation results indicated that the RZWQM2 model can be used to simulate crop growth and water-nitrogen migration and transformation in a wheat-maize crop-rotation planting system. The results of the uncertainty analysis using the GLUE method showed that T-NIT was sensitive to parameters related to the nitrification coefficient, maize growth characteristics during the seedling period, the wheat vernalization period, and the wheat photoperiod. Parameters for soil saturated hydraulic conductivity, nitrogen nitrification and denitrification, and urea hydrolysis played an important role in the crop-yield components. The prediction errors for RZWQM2 outputs with the CL function were lower and more uniform than with likelihood functions composed of individual calibration criteria. This new and successful application of the GLUE method for determining the uncertainty and sensitivity of RZWQM2 could provide a reference for the optimization of model parameters with different emphases according to research interests.
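
    A compact sketch of the GLUE-with-LHS recipe on a toy simulator (the two-parameter model, bounds, and the 0.5 NSE acceptance threshold are illustrative assumptions, not the RZWQM2 setup):

        import numpy as np
        from scipy.stats import qmc

        def simulate(theta, t):
            a, b = theta                     # toy stand-in for the simulator
            return a * np.exp(-b * t)

        t = np.linspace(0.0, 10.0, 50)
        obs = simulate((2.0, 0.3), t) + np.random.default_rng(7).normal(0, 0.05, t.size)

        lhs = qmc.LatinHypercube(d=2, seed=7)
        theta = qmc.scale(lhs.random(5000), [0.5, 0.05], [5.0, 1.0])

        sims = np.array([simulate(th, t) for th in theta])
        nse = 1.0 - np.sum((sims - obs) ** 2, axis=1) / np.sum((obs - obs.mean()) ** 2)

        keep = nse > 0.5                                  # behavioural threshold
        w = nse[keep] / nse[keep].sum()                   # likelihood weights
        pred_mean = w @ sims[keep]                        # weighted prediction
        lower, upper = np.percentile(sims[keep], [5, 95], axis=0)
        print(keep.sum(), "behavioural parameter sets")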

  5. Calculating the sensitivity of wind turbine loads to wind inputs using response surfaces

    NASA Astrophysics Data System (ADS)

    Rinker, Jennifer M.

    2016-09-01

    This paper presents a methodology for calculating wind turbine load sensitivities to turbulence parameters through the use of response surfaces. A response surface is a high-dimensional polynomial surface that can be calibrated to any set of input/output data and then used to generate synthetic data at low computational cost. Sobol sensitivity indices (SIs) can then be calculated with relative ease using the calibrated response surface. The proposed methodology is demonstrated by calculating the total sensitivity of the maximum blade-root bending moment of the WindPACT 5 MW reference model to four turbulence input parameters: a reference mean wind speed, a reference turbulence intensity, the Kaimal length scale, and a novel parameter reflecting the nonstationarity present in the inflow turbulence. The input/output data used to calibrate the response surface were generated for a previous project. The fit of the calibrated response surface is evaluated in terms of the error between the model and the training data and in terms of convergence. The Sobol SIs are calculated using the calibrated response surface, and their convergence is examined. The Sobol SIs reveal that, of the four turbulence parameters examined in this paper, the variance caused by the Kaimal length scale and the nonstationarity parameter is negligible. Thus, the findings in this paper represent the first systematic evidence that stochastic wind turbine load response statistics can be modeled purely by mean wind speed and turbulence intensity.
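
    A condensed sketch of the workflow (fit a polynomial response surface to training data, then evaluate it cheaply inside a Sobol analysis); the input names, bounds, and toy load function below are assumptions standing in for the WindPACT simulation data:

        import numpy as np
        from sklearn.linear_model import LinearRegression
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import PolynomialFeatures
        from SALib.sample import saltelli
        from SALib.analyze import sobol

        problem = {"num_vars": 4,
                   "names": ["U_mean", "turb_intensity", "kaimal_L", "nonstat"],
                   "bounds": [[4, 24], [0.05, 0.25], [100, 600], [0.0, 1.0]]}

        rng = np.random.default_rng(3)
        X = rng.uniform([4, 0.05, 100, 0.0], [24, 0.25, 600, 1.0], size=(2000, 4))
        y = X[:, 0] ** 2 * (1 + 3 * X[:, 1]) + rng.normal(0, 5, 2000)  # toy loads

        surface = make_pipeline(PolynomialFeatures(degree=3), LinearRegression())
        surface.fit(X, y)

        Xs = saltelli.sample(problem, 2048)       # cheap synthetic evaluations
        Si = sobol.analyze(problem, surface.predict(Xs))
        print(dict(zip(problem["names"], np.round(Si["ST"], 3))))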

  6. Post-Servicing Mission 4 Flux Calibration of the STIS Echelle Modes

    NASA Astrophysics Data System (ADS)

    Azalee Bostroem, K.; Aloisi, A.; Proffitt, C.; Osten, R.; Bohlin, R.

    2011-01-01

    STIS echelle modes show a wavelength-dependent decline in sensitivity with time. While this trend is observed in all STIS spectroscopic modes, the echelle sensitivity is further affected by a time-dependent shift in the blaze function. To improve the echelle flux calibration, new baselines for the echelle sensitivities were derived from post-Servicing Mission 4 (SM4) observations of the Hubble Space Telescope standard star G191-B2B. We present how these baseline sensitivities compare to pre-failure trends, highlight where the new results differ from expectations, and discuss anomalous results found in E140H monitoring observations.

  7. Two-Step Calibration of a Multiwavelength Pyrometer for High Temperature Measurement Using a Quartz Lamp

    NASA Technical Reports Server (NTRS)

    Ng, Daniel

    2001-01-01

    There is no theoretical upper temperature limit for the application of pyrometers to temperature measurement. NASA Glenn's multiwavelength pyrometer can make measurements over wide temperature ranges. However, the radiation spectral response of the pyrometer's detector must be calibrated before any temperature measurement is attempted, and it is recommended that calibration be done at temperatures close to those for which measurements will be made. Calibration is a determination of the constants of proportionality, at all wavelengths, between the detector's output (voltage) and its input signals (usually from a blackbody radiation source), in order to convert detector output into radiation intensity. To measure high temperatures, the detectors are chosen to be sensitive in the spectral range from 0.4 to 2.5 micrometers. A blackbody furnace equilibrated at around 1000 C is often used for this calibration. Though the detector may respond sensitively to short-wavelength radiation, a blackbody furnace at 1000 C emits only feebly at very short wavelengths; as a consequence, the resulting calibration constants may not be the most accurate. For pyrometry calibration, a radiation source emitting strongly at short wavelengths is preferred, and we have chosen a quartz halogen lamp for this purpose.
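
    In essence, calibration fixes a per-wavelength constant C(λ) relating detector voltage to blackbody spectral radiance via Planck's law, after which measured voltages can be inverted for temperature. A minimal sketch with made-up detector numbers (the 1e-9 responsivity and the 1850 K target are placeholders):

        import numpy as np

        h, c, kB = 6.62607015e-34, 2.99792458e8, 1.380649e-23

        def planck(lam, T):
            """Blackbody spectral radiance, W / (m^2 sr m)."""
            return 2 * h * c ** 2 / lam ** 5 / np.expm1(h * c / (lam * kB * T))

        lam = np.linspace(0.4e-6, 2.5e-6, 200)     # detector band, m
        V_cal = 1e-9 * planck(lam, 1273.15)        # placeholder voltages at 1000 C
        C = V_cal / planck(lam, 1273.15)           # calibration constants, V per radiance

        V_new = C * planck(lam, 1850.0)            # a later measurement
        grid = np.linspace(1000.0, 2500.0, 3001)
        resid = [np.sum((V_new / C - planck(lam, T)) ** 2) for T in grid]
        print("recovered temperature: %.0f K" % grid[int(np.argmin(resid))])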

  8. Experimental Results of Site Calibration and Sensitivity Measurements in OTR for UWB Systems

    NASA Astrophysics Data System (ADS)

    Viswanadham, Chandana; Rao, P. Mallikrajuna

    2017-06-01

    System calibration and parameter-accuracy measurement of electronic support measures (ESM) systems is a major activity carried out by electronic warfare (EW) engineers. These activities are very critical and need a good understanding of the microwave, antenna, wave-propagation, digital, and communication domains. EW systems are broadband systems, built with state-of-the-art electronic hardware and installed on many varieties of military platforms to guard a country's security. EW systems operate over wide frequency ranges, typically of the order of thousands of MHz; hence they are ultra-wideband (UWB) systems. A few calibration activities are carried out within the system and in the test sites to meet the accuracies of the final specifications. After calibration, parameters are measured for their accuracies either in feed mode, by injecting RF signals into the front end, or in radiation mode, by transmitting RF signals onto the system antenna. To carry out these activities in radiation mode, a calibrated open test range (OTR) is necessary in the frequency band of interest; thus, site calibration of the OTR must be carried out before taking up system calibration and parameter measurements. This paper presents the experimental results of OTR site calibration and sensitivity measurements of UWB systems in radiation mode.

  9. Technical Note: Procedure for the calibration and validation of kilo-voltage cone-beam CT models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vilches-Freixas, Gloria; Létang, Jean Michel; Rit,

    2016-09-15

    Purpose: The aim of this work is to propose a general and simple procedure for the calibration and validation of kilo-voltage cone-beam CT (kV CBCT) models against experimental data. Methods: The calibration and validation of the CT model is a two-step procedure: first the source model, then the detector model. The source is described by the direction-dependent photon energy spectrum at each voltage, while the detector is described by the pixel intensity value as a function of the direction and energy of incident photons. The measurements for the source consist of a series of dose measurements in air performed at each voltage with varying filter thicknesses and materials in front of the x-ray tube. The measurements for the detector are acquisitions of projection images using the same filters and several tube voltages. The proposed procedure has been applied to calibrate and assess the accuracy of simple models of the source and the detector of three commercial kV CBCT units. If the CBCT system models had been calibrated differently, the current procedure would have been used exclusively to validate the models. Several high-purity attenuation filters of aluminum, copper, and silver, combined with a dosimeter sensitive to the range of voltages of interest, were used. A sensitivity analysis of the model was also conducted for each parameter of the source and detector models. Results: Average deviations between experimental and theoretical dose values are below 1.5% after calibration for the three x-ray sources. The predicted energy deposited in the detector agrees with experimental data within 4% for all imaging systems. Conclusions: The authors developed and applied an experimental procedure to calibrate and validate any model of the source and the detector of a CBCT unit. The present protocol has been successfully applied to three x-ray imaging systems. The minimum requirements in terms of material and equipment make its implementation feasible in most clinical environments.

  10. Empirical performance of the calibrated self-controlled cohort analysis within temporal pattern discovery: lessons for developing a risk identification and analysis system.

    PubMed

    Norén, G Niklas; Bergvall, Tomas; Ryan, Patrick B; Juhlin, Kristina; Schuemie, Martijn J; Madigan, David

    2013-10-01

    Observational healthcare data offer the potential to identify adverse drug reactions that may be missed by spontaneous reporting. The self-controlled cohort analysis within the Temporal Pattern Discovery framework compares the observed-to-expected ratio of medical outcomes during post-exposure surveillance periods with those during a set of distinct pre-exposure control periods in the same patients. It utilizes an external control group to account for systematic differences between the different time periods, thus combining within- and between-patient confounder adjustment in a single measure. The objective was to evaluate the performance of the calibrated self-controlled cohort analysis within Temporal Pattern Discovery as a tool for risk identification in observational healthcare data. Different implementations of the calibrated self-controlled cohort analysis were applied to 399 drug-outcome pairs (165 positive and 234 negative test cases across 4 health outcomes of interest) in 5 real observational databases (four with administrative claims and one with electronic health records). Performance was evaluated on real data through sensitivity/specificity, the area under the receiver operating characteristic curve (AUC), and bias. The calibrated self-controlled cohort analysis achieved good predictive accuracy across the outcomes and databases under study. The optimal design based on this reference set uses a 360-day surveillance period and a single control period 180 days prior to new prescriptions. It achieved an average AUC of 0.75 and AUC >0.70 in all but one scenario. A design with three separate control periods performed better for the electronic health records database and for acute renal failure across all data sets. The estimates for negative test cases were generally unbiased, but a minor negative bias of up to 0.2 on the RR-scale was observed with the configurations using multiple control periods, for acute liver injury and upper gastrointestinal bleeding. The calibrated self-controlled cohort analysis within Temporal Pattern Discovery shows promise as a tool for risk identification; it performs well at discriminating positive from negative test cases. The optimal parameter configuration may vary with the data set and medical outcome of interest.
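
    The core contrast in this design is an observed-to-expected ratio in the post-exposure surveillance window, calibrated by the same ratio in a pre-exposure control window so that systematic differences shared by both windows cancel. A minimal sketch of that arithmetic with invented counts (the shrinkage constant and all numbers are illustrative, not the paper's):

        import math

        def oe_ratio(observed, expected, shrinkage=0.5):
            """Shrunk observed-to-expected ratio, stabilized for small counts."""
            return (observed + shrinkage) / (expected + shrinkage)

        # Made-up counts for one drug-outcome pair.
        obs_surv, exp_surv = 12, 6.0   # 360-day surveillance window
        obs_ctrl, exp_ctrl = 5, 4.5    # control window 180 days before exposure

        # Calibrated measure: surveillance OE divided by control-period OE.
        calibrated = oe_ratio(obs_surv, exp_surv) / oe_ratio(obs_ctrl, exp_ctrl)
        print(f"calibrated OE ratio: {calibrated:.2f} "
              f"(log2: {math.log2(calibrated):.2f})")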

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, J; Li, X; Liu, G

    Purpose: We compare and investigate the dosimetric impacts on pencil beam scanning (PBS) proton treatment plans generated with CT calibration curves from four different CT scanners and one averaged ‘global’ CT calibration curve. Methods: The four CT scanners are located at three different hospital locations within the same health system. CT density calibration curves were collected from these scanners using the same CT calibration phantom and acquisition parameters. Mass density to HU value tables were then commissioned in a commercial treatment planning system. Five disease sites were chosen for dosimetric comparison: brain, lung, head and neck, adrenal, and prostate. Three types of PBS plans were generated at each treatment site using SFUD, IMPT, and robustness-optimized IMPT (RO-IMPT) techniques. 3D dose differences were investigated using 3D gamma analysis. Results: The CT calibration curves for all four scanners display very similar shapes. Large HU differences were observed at both the high-HU and low-HU regions of the curves. Large dose differences were generally observed at the distal edges of the beams, and they are beam-angle dependent. Of the five treatment sites, lung plans exhibit the largest overall range uncertainties and prostate plans have the greatest dose discrepancy. There are no significant differences between the SFUD, IMPT, and RO-IMPT methods. 3D gamma analysis with 3%, 3 mm criteria showed all plans with greater than 95% passing rates. Two of the scanners with close HU values have negligible dose differences except for lung. Conclusion: Our study shows that there are dosimetric differences of more than 5% between different CT calibration curves. PBS treatment plans generated with SFUD, IMPT, and robustness-optimized IMPT have similar sensitivity to the CT density uncertainty. More patient data and tighter gamma criteria based on structure location and size will be used for further investigation.

  12. Cross modality registration of video and magnetic tracker data for 3D appearance and structure modeling

    NASA Astrophysics Data System (ADS)

    Sargent, Dusty; Chen, Chao-I.; Wang, Yuan-Fang

    2010-02-01

    The paper reports a fully-automated, cross-modality sensor data registration scheme between video and magnetic tracker data. This registration scheme is intended for use in computerized imaging systems to model the appearance, structure, and dimension of human anatomy in three dimensions (3D) from endoscopic videos, particularly colonoscopic videos, for cancer research and clinical practices. The proposed cross-modality calibration procedure operates this way: Before a colonoscopic procedure, the surgeon inserts a magnetic tracker into the working channel of the endoscope or otherwise fixes the tracker's position on the scope. The surgeon then maneuvers the scope-tracker assembly to view a checkerboard calibration pattern from a few different viewpoints for a few seconds. The calibration procedure is then completed, and the relative pose (translation and rotation) between the reference frames of the magnetic tracker and the scope is determined. During the colonoscopic procedure, the readings from the magnetic tracker are used to automatically deduce the pose (both position and orientation) of the scope's reference frame over time, without complicated image analysis. Knowing the scope movement over time then allows us to infer the 3D appearance and structure of the organs and tissues in the scene. While there are other well-established mechanisms for inferring the movement of the camera (scope) from images, they are often sensitive to mistakes in image analysis, error accumulation, and structure deformation. The proposed method using a magnetic tracker to establish the camera motion parameters thus provides a robust and efficient alternative for 3D model construction. Furthermore, the calibration procedure requires neither special training nor expensive calibration equipment (except for a camera calibration pattern, a checkerboard that can be printed on any laser or inkjet printer).
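
    Recovering the fixed rotation and translation between the tracker and camera frames is an instance of the classic hand-eye calibration problem. A minimal sketch using OpenCV's solver, with the tracker readings playing the role of the robot "gripper" poses; the pose lists here are random placeholders you would replace with real tracker readings and checkerboard pose estimates:

        import numpy as np
        import cv2

        # Placeholder pose lists; in practice these come from the magnetic
        # tracker and from checkerboard pose estimation at each viewpoint.
        R_tracker, t_tracker, R_board, t_board = [], [], [], []
        rng = np.random.default_rng(0)
        for _ in range(10):
            R, _ = cv2.Rodrigues(rng.normal(size=3))
            R_tracker.append(R)
            t_tracker.append(rng.normal(size=(3, 1)))
            R_board.append(R.T)                  # dummy, roughly consistent
            t_board.append(rng.normal(size=(3, 1)))

        # Solve AX = XB for the fixed camera-to-tracker transform.
        R_cam2tracker, t_cam2tracker = cv2.calibrateHandEye(
            R_tracker, t_tracker, R_board, t_board,
            method=cv2.CALIB_HAND_EYE_TSAI)
        print(R_cam2tracker, t_cam2tracker.ravel())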

  13. A Physically Motivated and Empirically Calibrated Method to Measure the Effective Temperature, Metallicity, and Ti Abundance of M Dwarfs

    NASA Astrophysics Data System (ADS)

    Veyette, Mark J.; Muirhead, Philip S.; Mann, Andrew W.; Brewer, John M.; Allard, France; Homeier, Derek

    2017-12-01

    The ability to perform detailed chemical analysis of Sun-like F-, G-, and K-type stars is a powerful tool with many applications, including studying the chemical evolution of the Galaxy and constraining planet formation theories. Unfortunately, complications in modeling cooler stellar atmospheres hinder similar analyses of M dwarf stars. Empirically calibrated methods to measure M dwarf metallicity from moderate-resolution spectra are currently limited to measuring overall metallicity and rely on astrophysical abundance correlations in stellar populations. We present a new, empirical calibration of synthetic M dwarf spectra that can be used to infer effective temperature, Fe abundance, and Ti abundance. We obtained high-resolution (R ˜ 25,000), Y-band (˜1 μm) spectra of 29 M dwarfs with NIRSPEC on Keck II. Using the PHOENIX stellar atmosphere modeling code (version 15.5), we generated a grid of synthetic spectra covering a range of temperatures, metallicities, and alpha-enhancements. From our observed and synthetic spectra, we measured the equivalent widths of multiple Fe I and Ti I lines and a temperature-sensitive index based on the FeH band head. We used abundances measured from widely separated solar-type companions to empirically calibrate transformations to the observed indices and equivalent widths that force agreement with the models. Our calibration achieves precisions in T eff, [Fe/H], and [Ti/Fe] of 60 K, 0.1 dex, and 0.05 dex, respectively, and is calibrated for 3200 K < T eff < 4100 K, -0.7 < [Fe/H] < +0.3, and -0.05 < [Ti/Fe] < +0.3. This work is a step toward detailed chemical analysis of M dwarfs at a precision similar to what has been achieved for FGK stars.
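
    The equivalent widths that anchor this calibration are simple integrals over a continuum-normalized spectrum. A minimal sketch, assuming a made-up Gaussian absorption line on a unit continuum (wavelengths and line parameters are illustrative only):

        import numpy as np

        # Made-up continuum-normalized spectrum with one absorption line.
        wave = np.linspace(10700.0, 10720.0, 400)  # wavelength (Angstroms)
        flux = 1.0 - 0.4 * np.exp(-0.5 * ((wave - 10710.0) / 0.8) ** 2)

        # Equivalent width: integrated fractional depth across the line window.
        window = (wave > 10705.0) & (wave < 10715.0)
        ew = np.trapz(1.0 - flux[window], wave[window])
        print(f"EW = {ew:.3f} A")  # analytic: 0.4 * 0.8 * sqrt(2*pi) ~ 0.80 A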

  14. 40 CFR 86.122-78 - Carbon monoxide analyzer calibration.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... sensitive range to be used. (2) Zero the carbon monoxide analyzer with either zero-grade air or zero-grade... conditioning columns is one form of corrective action which may be taken.) (b) Initial and periodic calibration... calibrated. (1) Adjust the analyzer to optimize performance. (2) Zero the carbon monoxide analyzer with...

  15. Simultaneous multi-headed imager geometry calibration method

    DOEpatents

    Tran, Vi-Hoa (Newport News, VA); Meikle, Steven Richard (Penshurst, AU); Smith, Mark Frederick (Yorktown, VA)

    2008-02-19

    A method for calibrating multi-headed high sensitivity and high spatial resolution dynamic imaging systems, especially those useful in the acquisition of tomographic images of small animals. The method of the present invention comprises: simultaneously calibrating two or more detectors to the same coordinate system; and functionally correcting for unwanted detector movement due to gantry flexing.

  16. Psychophysical contrast calibration

    PubMed Central

    To, Long; Woods, Russell L; Goldstein, Robert B; Peli, Eli

    2013-01-01

    Electronic displays and computer systems offer numerous advantages for clinical vision testing. Laboratory and clinical measurements of various functions and in particular of (letter) contrast sensitivity require accurately calibrated display contrast. In the laboratory this is achieved using expensive light meters. We developed and evaluated a novel method that uses only psychophysical responses of a person with normal vision to calibrate the luminance contrast of displays for experimental and clinical applications. Our method combines psychophysical techniques (1) for detection (and thus elimination or reduction) of display saturating nonlinearities; (2) for luminance (gamma function) estimation and linearization without use of a photometer; and (3) to measure without a photometer the luminance ratios of the display’s three color channels that are used in a bit-stealing procedure to expand the luminance resolution of the display. Using a photometer we verified that the calibration achieved with this procedure is accurate for both LCD and CRT displays enabling testing of letter contrast sensitivity to 0.5%. Our visual calibration procedure enables clinical, internet and home implementation and calibration verification of electronic contrast testing. PMID:23643843
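
    Once the display gamma has been estimated psychophysically, linearization reduces to an inverse lookup table. A minimal sketch assuming a simple power-law display response (the gamma value is illustrative, and real displays also need the saturation and bit-stealing steps described above):

        import numpy as np

        gamma = 2.2  # illustrative psychophysical estimate of display gamma

        # Display model: luminance ~ (digital value / 255) ** gamma.
        # To request a luminance fraction L, send the inverse-mapped value.
        levels = np.arange(256)
        lut = np.round(255.0 * (levels / 255.0) ** (1.0 / gamma)).astype(np.uint8)

        # A 50% luminance patch needs digital value lut[128], not 128.
        print("linearized value for 50% luminance:", lut[128])  # ~186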

  17. Radiometric calibration of optical microscopy and microspectroscopy apparata over a broad spectral range using a special thin-film luminescence standard

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Valenta, J., E-mail: jan.valenta@mff.cuni.cz; Greben, M.

    2015-04-15

    Application capabilities of optical microscopes and microspectroscopes can be considerably enhanced by proper calibration of their spectral sensitivity. We propose and demonstrate a method of relative and absolute calibration of a microspectroscope over an extraordinarily broad spectral range covered by two (parallel) detection branches in the visible and near-infrared spectral regions. The key point of the absolute calibration of the relative spectral sensitivity is the application of a standard sample formed by a thin layer of Si nanocrystals with stable and efficient photoluminescence. The spectral PL quantum yield and the PL spatial distribution of the standard sample must be characterized by separate experiments. The absolutely calibrated microspectroscope enables characterization of the spectral photon emittance of a studied object, or even its luminescence quantum yield (QY) if additional knowledge about the spatial distribution of emission and about excitance is available. Capabilities of the calibrated microspectroscope are demonstrated by measuring the external QY of electroluminescence from a standard poly-Si solar cell and of photoluminescence of Er-doped Si nanocrystals.

  18. Radiometric properties of the NS001 Thematic Mapper Simulator aircraft multispectral scanner

    NASA Technical Reports Server (NTRS)

    Markham, Brian L.; Ahmad, Suraiya P.

    1990-01-01

    Laboratory tests of the NS001 TM are described emphasizing absolute calibration to determine the radiometry of the simulator's reflective channels. In-flight calibration of the data is accomplished with the NS001 internal integrating-sphere source because instabilities in the source can limit the absolute calibration. The data from 1987-89 indicate uncertainties of up to 25 percent with an apparent average uncertainty of about 15 percent. Also identified are dark current drift and sensitivity changes along the scan line, random noise, and nonlinearity which contribute errors of 1-2 percent. Uncertainties similar to hysteresis are also noted especially in the 2.08-2.35-micron range which can reduce sensitivity and cause errors. The NS001 TM Simulator demonstrates a polarization sensitivity that can generate errors of up to about 10 percent depending on the wavelength.

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gorham, P. W.; Allison, P.; DuVernois, M.

    The Antarctic Impulsive Transient Antenna (ANITA) completed its second Long Duration Balloon flight in January 2009, with 31 days aloft (28.5 live days) over Antarctica. ANITA searches for impulsive coherent radio Cherenkov emission from 200 to 1200 MHz, arising from the Askaryan charge excess in ultrahigh energy neutrino-induced cascades within Antarctic ice. This flight included significant improvements over the first flight in payload sensitivity, efficiency, and flight trajectory. Analysis of in-flight calibration pulses from surface and subsurface locations verifies the expected sensitivity. In a blind analysis, we find 2 surviving events on a background, mostly anthropogenic, of 0.97 ± 0.42 events. We set the strongest limit to date for 10^18-10^21 eV cosmic neutrinos, excluding several current cosmogenic neutrino models.

  20. Columnar aerosol properties over oceans by combining surface and aircraft measurements: sensitivity analysis.

    PubMed

    Zhang, T; Gordon, H R

    1997-04-20

    We report a sensitivity analysis for the algorithm presented by Gordon and Zhang [Appl. Opt. 34, 5552 (1995)] for inverting the radiance exiting the top and bottom of the atmosphere to yield the aerosol-scattering phase function [P(Θ)] and single-scattering albedo (ω0). The study of the algorithm's sensitivity to radiometric calibration errors, mean-zero instrument noise, sea-surface roughness, the curvature of the Earth's atmosphere, the polarization of the light field, and incorrect assumptions regarding the vertical structure of the atmosphere indicates that the retrieved ω0 has excellent stability even for very large values (~2) of the aerosol optical thickness; however, the error in the retrieved P(Θ) strongly depends on the measurement error and on the assumptions made in the retrieval algorithm. The retrieved phase functions in the blue are usually poor compared with those in the near infrared.

  1. Sensitivity of predicted bioaerosol exposure from open windrow composting facilities to ADMS dispersion model parameters.

    PubMed

    Douglas, P; Tyrrel, S F; Kinnersley, R P; Whelan, M; Longhurst, P J; Walsh, K; Pollard, S J T; Drew, G H

    2016-12-15

    Bioaerosols are released in elevated quantities from composting facilities and are associated with negative health effects, although dose-response relationships are not well understood, and require improved exposure classification. Dispersion modelling has great potential to improve exposure classification, but has not yet been extensively used or validated in this context. We present a sensitivity analysis of the ADMS dispersion model specific to input parameter ranges relevant to bioaerosol emissions from open windrow composting. This analysis provides an aid for model calibration by prioritising parameter adjustment and targeting independent parameter estimation. Results showed that predicted exposure was most sensitive to the wet and dry deposition modules and the majority of parameters relating to emission source characteristics, including pollutant emission velocity, source geometry and source height. This research improves understanding of the accuracy of model input data required to provide more reliable exposure predictions.

  2. Understanding hydrological and nitrogen interactions by sensitivity analysis of a catchment-scale nitrogen model

    NASA Astrophysics Data System (ADS)

    Medici, Chiara; Wade, Andrew; Frances, Felix

    2010-05-01

    Nitrogen is present in both terrestrial and aquatic ecosystems, and research is needed to understand its storage, transport and transformations in river catchments world-wide because of its importance in controlling plant growth and freshwater trophic status (Vitousek et al. 2009; Chu et al. 2008; Schlesinger et al. 2006; Ocampo et al. 2006; Green et al. 2004; Arheimer et al. 1996). Numerous mathematical models have been developed to describe nitrogen dynamics, but there is a substantial gap between the outputs now expected from these models and what modellers are able to provide with scientific justification (McIntyre et al., 2005). Models will always necessarily be simplifications of reality; the simplifying assumptions are sources of uncertainty that must be well understood for accurate interpretation of model results. Estimating prediction uncertainties in water quality modelling is therefore becoming increasingly appreciated (Dean et al., 2009; Kruger et al., 2007; Rode et al., 2007). In this work the lumped LU4-N model (Medici et al., 2008; Medici et al., EGU2009-7497) is subjected to an extensive regionalised sensitivity analysis (GSA, based on Monte Carlo simulations) in application to the Fuirosos catchment, Catalonia. The main results are: 1) the hydrological model is greatly affected by the maximum static storage water content (Hu_max), which defines the amount of water held in soil that can leave the catchment only by evapotranspiration; it thereby also defines the amount of water not retained that is free to move and supplies the other model tanks; 2) the use of several objective functions, chosen to capture different hydrograph characteristics, helped to constrain parameter values; 3) for nitrogen, relatively lenient acceptance criteria had to be adopted to obtain a sufficient number of behavioural parameter sets for the statistical analysis; 4) stream water concentrations are sensitive to the shallow aquifer parameters, especially the nitrification constant (Knitr-aquif), and also to certain soil parameters, such as the mineralization constant (Kmin), the annual maximum ammonium uptake (MaxUPNH4) and the mineralization, nitrification and immobilisation thresholds (Umin, Unitr and Uimmob); moreover, the results give a clear indication that the hydrological model greatly affects the streamwater nitrate and ammonium concentrations; 5) the LU4-N model succeeded in achieving near-optimum fits simultaneously to flow and nitrate, but not ammonium; 6) however, the optimum flow model did not produce a near-optimum nitrate model, which indicates that calibrating the flow-related parameters first and then the remaining parameters, instead of calibrating all parameters together, may not be the best strategy, as pointed out for another study by McIntyre et al., 2005; 7) a final analysis also supports the idea that a satisfactory nitrogen simulation requires the flow to be acceptably represented, which leads to the conclusion that observed stream concentrations may indirectly help to calibrate the rainfall-runoff model, or at least the parameters to which they are sensitive.
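
    The regionalised sensitivity analysis used here splits the Monte Carlo parameter sets into behavioural and non-behavioural groups and asks whether a parameter's marginal distribution differs between them, typically with a Kolmogorov-Smirnov statistic. A minimal sketch with a stand-in model and an invented acceptance threshold (parameter names echo the abstract; the model and numbers are not from the study):

        import numpy as np
        from scipy.stats import ks_2samp

        rng = np.random.default_rng(42)
        n = 5000

        # Monte Carlo samples of two parameters over uniform priors.
        hu_max = rng.uniform(50.0, 400.0, n)   # max static storage (mm)
        k_nitr = rng.uniform(0.001, 0.1, n)    # nitrification constant (1/day)

        # Stand-in model skill: pretend only hu_max really matters.
        skill = 1.0 - np.abs(hu_max - 200.0) / 400.0 + rng.normal(0, 0.05, n)
        behavioural = skill > 0.8              # illustrative acceptance rule

        for name, values in [("hu_max", hu_max), ("k_nitr", k_nitr)]:
            d, _ = ks_2samp(values[behavioural], values[~behavioural])
            print(f"{name}: KS distance = {d:.3f} (large => sensitive)")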

  3. Evaluation of dental therapists undertaking dental examinations in a school setting in Scotland.

    PubMed

    O'Keefe, Emma J; McMahon, Alex D; Jones, Colwyn M; Curnow, Morag M; Macpherson, Lorna M D

    2016-12-01

    To measure agreement between dental therapists and the Scottish gold-standard dentist undertaking National Dental Inspection Programme (NDIP) examinations. A study of interexaminer agreement between 19 dental therapists and the national gold-standard dentist was carried out. Pre-calibration training used the caries diagnostic criteria and examination techniques agreed by the British Association for the Study of Community Dentistry (BASCD). Twenty-three 5-year-old (Primary 1) and seventeen 11-year-old (Primary 7) children were examined. Agreement was assessed using kappa statistics on d3mft and D3MFT for P1 and P7 children, sensitivity and specificity values, and kappa statistics on d3t/D3T and ft/FT. Calibration data on P1 and P7 children from 2009-2012 involving dentists as examiners were used for comparison. Economic evaluation was undertaken using a cost-minimization analysis approach. The mean kappa score was 0.84 (SD 0.07), ranging from 0.69 to 0.94. All dental therapists scored good or very good agreement with the gold-standard dentist. This compares with historic NDIP calibration data with dentists, against the same gold-standard dentist, where the mean kappa value was 0.68 (SD 0.22) with a range of 0.35-1.00. The mean sensitivity score was 0.98 (SD 0.04) (range 0.88-1.0) and the mean specificity score was 0.90 (SD 0.06) (range 0.78-0.96). Health economic analysis estimated that salary costs would have been 33.6% lower if dental therapists had been substituted for dentists in the year 2013, an estimated saving of approximately £103,646 per annum on the national budget. We conclude that dental therapists show a high level of interexaminer agreement and, with appropriate annual training and calibration, could undertake dental examinations as part of the NDIP programme.

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, S. N.; Revet, G.; Fuchs, J.

    Radiochromic films (RCF) are commonly used in dosimetry for a wide range of radiation sources (electrons, protons, and photons) for medical, industrial, and scientific applications. They are multi-layered, including plastic substrate layers and sensitive layers that incorporate a radiation-sensitive dye. Quantitative dose can be retrieved by digitizing the film, provided that a prior calibration exists. Here, to calibrate the newly developed EBT3 and HDv2 RCFs from Gafchromic™, we used the Stanford Medical LINAC to deposit various doses of 10 MeV photons in the films, and scanned the films using three independent EPSON Precision 2450 scanners, three independent EPSON V750 scanners, and two independent EPSON 11000XL scanners. The films were scanned in separate RGB channels, as well as in black and white, and film orientation was varied. We found that the green channel of the RGB scan and the grayscale channel are in fact quite consistent over the different scanner models, although this comes at the cost of a reduction in sensitivity (by a factor of ∼2.5 compared to the red channel). To allow any user to extend the absolute calibration reported here to any other scanner, we furthermore provide a calibration curve of the EPSON 2450 scanner based on absolutely calibrated, commercially available optical density filters.

  5. Modeling technical change in climate analysis: evidence from agricultural crop damages.

    PubMed

    Ahmed, Adeel; Devadason, Evelyn S; Al-Amin, Abul Quasem

    2017-05-01

    This study accounts for Hicks-neutral technical change in a calibrated model of climate analysis, to identify the optimum level of technical change for addressing climate change. It quantifies the reduction in crop damages, the costs of technical change, and the net gains from adopting technical change for the climate-sensitive Pakistani economy. The calibrated model assesses the net gains of technical change for the overall economy and at the agriculture-specific level. The study finds that the gains of technical change are overwhelmingly higher than the costs across the agriculture subsectors. The gains and costs following technical change differ substantially for different crops. More importantly, the study finds a cost-effective optimal level of technical change that potentially reduces crop damages to the minimum possible level. The study therefore contends that climate policy for Pakistan should consider the role of technical change in addressing climate impacts on the agriculture sector.

  6. Plasma properties of hot coronal loops utilizing coordinated SMM and solar research rocket observations

    NASA Technical Reports Server (NTRS)

    Moses, J. Daniel

    1989-01-01

    Three improvements in photographic x-ray imaging techniques for solar astronomy are presented. The testing and calibration of a new film processor was conducted; the resulting product will allow photometric development of sounding rocket flight film immediately upon recovery at the missile range. Two fine-grained photographic films were calibrated and flight tested to provide alternative detector choices when the need for high resolution is greater than the need for high sensitivity. An analysis technique used to obtain the characteristic curve directly from photographs of UV solar spectra was applied to the analysis of soft x-ray photographic images. The resulting procedure provides a more complete and straightforward determination of the parameters describing the x-ray characteristic curve than previous techniques. These improvements fall into the category of refinements rather than revolutions, indicating the fundamental suitability of the photographic process for x-ray imaging in solar astronomy.

  7. Development of a semi-adiabatic isoperibol solution calorimeter

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Venkata Krishnan, R.; Jogeswararao, G.; Parthasarathy, R.

    2014-12-15

    A semi-adiabatic isoperibol solution calorimeter has been indigenously developed. The measurement system comprises modules for a sensitive temperature measurement probe, signal processing, data collection, and joule calibration. The sensitivity of the temperature measurement module was enhanced by using a sensitive thermistor coupled with a lock-in-amplifier-based signal processor. A microcontroller coordinates the operation and control of these modules; the microcontroller in turn is controlled through custom-made personal computer (PC) software developed with LabView. An innovative summing amplifier concept was used to cancel out the base resistance of the thermistor, which was placed in the dewar. The temperature calibration was carried out with a standard platinum resistance (PT100) sensor coupled with an 8½-digit multimeter. The water equivalent of this calorimeter was determined by electrical calibration with the joule calibrator. The experimentally measured values of the quantum of heat were validated by measuring the heats of dissolution of pure KCl (for an endotherm) and tris(hydroxymethyl)aminomethane (for an exotherm). The uncertainty in the measurements was found to be within ±3%.

  8. Utility analysis and calibration of QOL assessment in disease management.

    PubMed

    Liu, Mo

    2018-05-02

    In clinical trials, the assessment of health-related quality of life (QOL) (or a patient-reported outcome [PRO] measure) has become very popular, especially in clinical studies conducted to evaluate clinical benefits in patients with chronic, severe, and/or life-threatening diseases. Health-related QOL information and PRO measures are useful in disease management for achieving best clinical practice. In this article, we focus on health-related QOL assessment. The concept, design, and analysis of health-related QOL in clinical trials are reviewed. Validation of the use of a health-related QOL instrument in terms of key performance characteristics such as accuracy, reliability, sensitivity, and responsiveness, for assuring the quality, integrity, and validity of collected QOL data, is discussed. The concept of utility analysis and calibration (e.g., with respect to life events) for optimizing disease management is proposed. Changes in QOL can be translated into different life events for effective disease management. These translations could help evaluate the treatment effect by displaying changes in QOL more directly.

  9. Setting Standards for Reporting and Quantification in Fluorescence-Guided Surgery.

    PubMed

    Hoogstins, Charlotte; Burggraaf, Jan Jaap; Koller, Marjory; Handgraaf, Henricus; Boogerd, Leonora; van Dam, Gooitzen; Vahrmeijer, Alexander; Burggraaf, Jacobus

    2018-05-29

    Intraoperative fluorescence imaging (FI) is a promising technique that could potentially guide oncologic surgeons toward more radical resections and thus improve clinical outcome. Despite the increase in the number of clinical trials, fluorescent agents and imaging systems for intraoperative FI, a standardized approach for imaging system performance assessment and post-acquisition image analysis is currently unavailable. We conducted a systematic, controlled comparison between two commercially available imaging systems using a novel calibration device for FI systems and various fluorescent agents. In addition, we analyzed fluorescence images from previous studies to evaluate signal-to-background ratio (SBR) and determinants of SBR. Using the calibration device, imaging system performance could be quantified and compared, exposing relevant differences in sensitivity. Image analysis demonstrated a profound influence of background noise and the selection of the background on SBR. In this article, we suggest clear approaches for the quantification of imaging system performance assessment and post-acquisition image analysis, attempting to set new standards in the field of FI.

  10. FlowCal: A user-friendly, open source software tool for automatically converting flow cytometry data from arbitrary to calibrated units

    PubMed Central

    Castillo-Hair, Sebastian M.; Sexton, John T.; Landry, Brian P.; Olson, Evan J.; Igoshin, Oleg A.; Tabor, Jeffrey J.

    2017-01-01

    Flow cytometry is widely used to measure gene expression and other molecular biological processes with single cell resolution via fluorescent probes. Flow cytometers output data in arbitrary units (a.u.) that vary with the probe, instrument, and settings. Arbitrary units can be converted to the calibrated unit molecules of equivalent fluorophore (MEF) using commercially available calibration particles. However, there is no convenient, non-proprietary tool available to perform this calibration. Consequently, most researchers report data in a.u., limiting interpretation. Here, we report a software tool named FlowCal to overcome current limitations. FlowCal can be run using an intuitive Microsoft Excel interface, or customizable Python scripts. The software accepts Flow Cytometry Standard (FCS) files as inputs and is compatible with different calibration particles, fluorescent probes, and cell types. Additionally, FlowCal automatically gates data, calculates common statistics, and produces publication quality plots. We validate FlowCal by calibrating a.u. measurements of E. coli expressing superfolder GFP (sfGFP) collected at 10 different detector sensitivity (gain) settings to a single MEF value. Additionally, we reduce day-to-day variability in replicate E. coli sfGFP expression measurements due to instrument drift by 33%, and calibrate S. cerevisiae mVenus expression data to MEF units. Finally, we demonstrate a simple method for using FlowCal to calibrate fluorescence units across different cytometers. FlowCal should ease the quantitative analysis of flow cytometry data within and across laboratories and facilitate the adoption of standard fluorescence units in synthetic biology and beyond. PMID:27110723
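
    The a.u.-to-MEF conversion that FlowCal automates reduces to fitting a standard curve between measured bead fluorescence and the manufacturer-assigned MEF values, then applying it to cell events. A minimal sketch of that idea (bead values are invented, and FlowCal's actual standard-curve model is more careful than this log-log line):

        import numpy as np

        # Manufacturer-assigned MEF values for bead subpopulations (invented)
        # and the median a.u. measured for each on this instrument.
        bead_mef = np.array([792., 2079., 6588., 16471., 47497., 137049.])
        bead_au = np.array([85.0, 210.0, 640.0, 1600.0, 4700.0, 13500.0])

        # Straight line in log-log space: log(MEF) = m * log(a.u.) + b.
        m, b = np.polyfit(np.log(bead_au), np.log(bead_mef), 1)

        def au_to_mef(au):
            """Convert arbitrary units to molecules of equivalent fluorophore."""
            return np.exp(m * np.log(au) + b)

        cells_au = np.array([120.0, 950.0, 8800.0])  # made-up cell events
        print(np.round(au_to_mef(cells_au)))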

  11. Measurements of 55Fe activity in activated steel samples with GEMPix

    NASA Astrophysics Data System (ADS)

    Curioni, A.; Dinar, N.; La Torre, F. P.; Leidner, J.; Murtas, F.; Puddu, S.; Silari, M.

    2017-03-01

    In this paper we present a novel method, based on the recently developed GEMPix detector, to measure the 55Fe content in samples of metallic material activated during operation of CERN accelerators and experimental facilities. The GEMPix, a gas detector with highly pixelated read-out, has been obtained by coupling a triple Gas Electron Multiplier (GEM) to a quad Timepix ASIC. Sample preparation, measurements performed on 45 samples and data analysis are described. The calibration factor (counts per second per unit specific activity) has been obtained via measurements of the 55Fe activity determined by radiochemical analysis of the same samples. Detection limit and sensitivity to the current Swiss exemption limit are calculated. Comparison with radiochemical analysis shows inconsistency for the sensitivity for only two samples, most likely due to underestimated uncertainties of the GEMPix analysis. An operative test phase of this technique is already planned at CERN.
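
    With a calibration factor expressed as counts per second per unit specific activity, converting a net count rate to a specific activity, and checking it against a Currie-style detection limit, is simple arithmetic. A minimal sketch (all numbers invented; the GEMPix analysis itself is more involved):

        import math

        cal_factor = 0.012   # counts/s per (Bq/g), invented
        net_rate = 0.36      # background-subtracted count rate (counts/s)
        bkg_counts = 1200.0  # background counts over the counting time
        t_count = 3600.0     # counting time (s)

        print(f"specific activity: {net_rate / cal_factor:.1f} Bq/g")

        # Currie detection limit in counts, converted to specific activity.
        ld_counts = 2.71 + 4.65 * math.sqrt(bkg_counts)
        print(f"detection limit:   {(ld_counts / t_count) / cal_factor:.1f} Bq/g")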

  12. Sensitivity of blackbody effective emissivity to wavelength and temperature: By genetic algorithm

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ejigu, E. K.; Liedberg, H. G.

    A variable-temperature blackbody (VTBB) is used to calibrate an infrared radiation thermometer (pyrometer). The effective emissivity (ε_eff) of a VTBB depends on temperature and wavelength as well as on the geometry of the VTBB. In the calibration process the effective emissivity is often assumed to be constant within the wavelength and temperature range. There are practical situations where the sensitivity of the effective emissivity needs to be known and a correction has to be applied. We present a method using a genetic algorithm to investigate the sensitivity of the effective emissivity to wavelength and temperature variation. Two MATLAB® programs were written: the first to model the radiance temperature calculation and the second to connect the model to the genetic algorithm optimization toolbox. The effective emissivity parameter is taken as a chromosome and optimized at each wavelength and temperature point. The difference between the contact temperature (read from a platinum resistance thermometer or liquid-in-glass thermometer) and the radiance temperature (calculated from the ε_eff values) is used as an objective function, from which merit values are calculated and best-fit ε_eff values selected. The best-fit ε_eff values obtained as a solution show how sensitive they are to temperature and wavelength variation. Uncertainty components that arise from wavelength and temperature variation are determined based on the sensitivity analysis. Numerical examples are considered for illustration.

  13. Absolute calibration of optical streak cameras on picosecond time scales using supercontinuum generation

    DOE PAGES

    Patankar, S.; Gumbrell, E. T.; Robinson, T. S.; ...

    2017-08-17

    Here we report a new method using high-stability, laser-driven supercontinuum generation in a liquid cell to calibrate the absolute photon response of fast optical streak cameras as a function of wavelength when operating at the fastest sweep speeds. A stable, pulsed white-light source based on self-phase modulation in a salt solution was developed to provide the required brightness on picosecond timescales, enabling streak camera calibration in fully dynamic operation. The measured spectral brightness allowed for absolute photon response calibration over a broad spectral range (425-650 nm). Calibrations performed with two Axis Photonique streak cameras using the Photonis P820PSU streak tube demonstrated responses that qualitatively follow the photocathode response. Peak sensitivities were 1 photon/count above background. The absolute dynamic sensitivity is up to an order of magnitude lower than the static sensitivity. We attribute this to the dynamic response of the phosphor being lower.

  14. Simultaneous overpass off nadir (SOON): a method for unified calibration/validation across IEOS and GEOSS system of systems

    NASA Astrophysics Data System (ADS)

    Ardanuy, Philip; Bergen, Bill; Huang, Allen; Kratz, Gene; Puschell, Jeff; Schueler, Carl; Walker, Joe

    2006-08-01

    The US operates a diverse, evolving constellation of research and operational environmental satellites, principally in polar and geosynchronous orbits. Our current and enhanced future domestic remote sensing capability is complemented by the significant capabilities of our current and potential future international partners. In this analysis, we define "success" through the data customers' eyes: the sufficient and continuously improving satisfaction of their mission responsibilities. Successfully fusing observations from multiple simultaneous platforms and sensors into a common, self-consistent operational environment requires a unified calibration and validation approach. Here, we develop a concept for an integrating framework for absolute accuracy; long-term stability; self-consistency among sensors, platforms, techniques, and observing systems; and validation and characterization of performance. Across all systems, this is a non-trivial problem. Simultaneous Nadir Overpasses (SNOs) provide a proven intercomparison technique: simultaneous, collocated, co-angular measurements. Many systems, however, have off-nadir elements or effects that must be calibrated, and for these the nadir technique constrains the process. We therefore define the term "SOON," for simultaneous overpass off nadir. We present a target architecture and sensitivity analysis for the affordable, sustainable implementation of a global SOON calibration/validation network that can deliver the much-needed comprehensive, common, self-consistent operational picture in near-real time, at an affordable cost.

  15. Uncertainty of climate change impact on groundwater reserves - Application to a chalk aquifer

    NASA Astrophysics Data System (ADS)

    Goderniaux, Pascal; Brouyère, Serge; Wildemeersch, Samuel; Therrien, René; Dassargues, Alain

    2015-09-01

    Recent studies have evaluated the impact of climate change on groundwater resources for different geographical and climatic contexts. However, most studies have either not estimated the uncertainty around projected impacts or have limited the analysis to the uncertainty related to climate models. In this study, the uncertainties around impact projections from several sources (climate models, natural variability of the weather, hydrological model calibration) are calculated and compared for the Geer catchment (465 km2) in Belgium. We use a surface-subsurface integrated model implemented using the finite element code HydroGeoSphere, coupled with climate change scenarios (2010-2085) and the UCODE_2005 inverse model, to assess the uncertainty related to the calibration of the hydrological model. This integrated model provides a more realistic representation of the water exchanges between surface and subsurface domains and constrains more the calibration with the use of both surface and subsurface observed data. Sensitivity and uncertainty analyses were performed on predictions. The linear uncertainty analysis is approximate for this nonlinear system, but it provides some measure of uncertainty for computationally demanding models. Results show that, for the Geer catchment, the most important uncertainty is related to calibration of the hydrological model. The total uncertainty associated with the prediction of groundwater levels remains large. By the end of the century, however, the uncertainty becomes smaller than the predicted decline in groundwater levels.

  16. Parameter Identification and Uncertainty Analysis for Visual MODFLOW based Groundwater Flow Model in a Small River Basin, Eastern India

    NASA Astrophysics Data System (ADS)

    Jena, S.

    2015-12-01

    The overexploitation of groundwater has resulted in the abandonment of many shallow tube wells in the river basin in Eastern India. For the sustainability of groundwater resources, basin-scale modelling of groundwater flow is essential for the efficient planning and management of water resources. The main intent of this study is to develop a 3-D groundwater flow model of the study basin using the Visual MODFLOW package and successfully calibrate and validate it using 17 years of observed data. A sensitivity analysis was carried out to quantify the susceptibility of the aquifer system to river bank seepage, recharge from rainfall and agricultural practices, horizontal and vertical hydraulic conductivities, and specific yield. To quantify the impact of parameter uncertainties, the Sequential Uncertainty Fitting Algorithm (SUFI-2) and Markov chain Monte Carlo (MCMC) techniques were implemented. Results from the two techniques were compared and their advantages and disadvantages analysed. The Nash-Sutcliffe efficiency (NSE) and coefficient of determination (R2) were adopted as the two criteria during calibration and validation of the developed model. NSE and R2 values of the groundwater flow model for the calibration and validation periods were in the acceptable range. Also, the MCMC technique was able to provide more reasonable results than SUFI-2. The calibrated and validated model will be useful for identifying aquifer properties, analysing groundwater flow dynamics, and forecasting changes in groundwater levels.
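
    Both goodness-of-fit criteria used in this study are one-liners. A minimal sketch with made-up observed and simulated groundwater heads:

        import numpy as np

        def nse(obs, sim):
            """Nash-Sutcliffe efficiency: 1 is perfect; below 0 is worse than the mean."""
            obs, sim = np.asarray(obs), np.asarray(sim)
            return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

        def r_squared(obs, sim):
            """Coefficient of determination (squared Pearson correlation)."""
            return np.corrcoef(obs, sim)[0, 1] ** 2

        obs = np.array([12.1, 11.8, 11.2, 10.9, 11.4, 12.0])  # made-up heads (m)
        sim = np.array([12.0, 11.6, 11.4, 10.7, 11.5, 12.2])
        print(f"NSE = {nse(obs, sim):.3f}, R2 = {r_squared(obs, sim):.3f}")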

  17. SRAM Detector Calibration

    NASA Technical Reports Server (NTRS)

    Soli, G. A.; Blaes, B. R.; Beuhler, M. G.

    1994-01-01

    Custom proton sensitive SRAM chips are being flown on the BMDO Clementine missions and Space Technology Research Vehicle experiments. This paper describes the calibration procedure for the SRAM proton detectors and their response to the space environment.

  18. A simplified gross primary production and evapotranspiration model for boreal coniferous forests - is a generic calibration sufficient?

    NASA Astrophysics Data System (ADS)

    Minunno, F.; Peltoniemi, M.; Launiainen, S.; Aurela, M.; Lindroth, A.; Lohila, A.; Mammarella, I.; Minkkinen, K.; Mäkelä, A.

    2015-07-01

    The problem of model complexity has been lively debated in the environmental sciences as well as in the forest modelling community. Simple models are less input-demanding and their calibration involves a smaller number of parameters, but they might be suitable only at the local scale. In this work we calibrated a simplified ecosystem process model (PRELES) to data from multiple sites and tested whether PRELES can be used at the regional scale to estimate the carbon and water fluxes of boreal conifer forests. We compared a multi-site (M-S) calibration with site-specific (S-S) calibrations. Model calibrations and evaluations were carried out by means of Bayesian methods; Bayesian calibration (BC) and Bayesian model comparison (BMC) were used to quantify the uncertainty in model parameters and model structure. To evaluate model performance, BMC results were combined with a more classical analysis of model-data mismatch (M-DM). Evapotranspiration (ET) and gross primary production (GPP) measurements collected at 10 sites in Finland and Sweden were used in the study. Calibration results showed that similar estimates were obtained for the parameters to which model outputs are most sensitive. No significant differences were encountered in the predictions of the multi-site and site-specific versions of PRELES, with the exception of a site with agricultural history (Alkkia). Although PRELES predicted GPP better than evapotranspiration, we concluded that the model can be reliably used at the regional scale to simulate the carbon and water fluxes of boreal forests. Our analyses also underlined the importance of using long and carefully collected flux datasets in model calibration. In fact, even a single site can provide model calibrations that can be applied at a wider spatial scale, provided it covers a wide range of variability in climatic conditions.

  19. Photogrammetry Applied to Wind Tunnel Testing

    NASA Technical Reports Server (NTRS)

    Liu, Tian-Shu; Cattafesta, L. N., III; Radeztsky, R. H.; Burner, A. W.

    2000-01-01

    In image-based measurements, quantitative image data must be mapped to three-dimensional object space. Analytical photogrammetric methods, which may be used to accomplish this task, are discussed from the viewpoint of experimental fluid dynamicists. The Direct Linear Transformation (DLT) for camera calibration, used in pressure-sensitive paint measurements, is summarized. An optimization method for camera calibration is developed that can determine the camera calibration parameters, including those describing lens distortion, from a single image. Combined with the DLT method, this method allows rapid and comprehensive in-situ camera calibration and is therefore particularly useful for quantitative flow visualization and other measurements, such as model attitude and deformation, in production wind tunnels. The paper also includes a brief description of typical photogrammetric applications to temperature- and pressure-sensitive paint measurements and model deformation measurements in wind tunnels.
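
    The DLT estimates the 11 independent camera parameters (a 3x4 projection matrix up to scale) linearly from known 3D-2D point correspondences via a singular value decomposition. A minimal sketch, assuming at least six non-coplanar surveyed target points (the synthetic camera below is invented for the self-check):

        import numpy as np

        def dlt(points3d, points2d):
            """Estimate the 3x4 projection matrix P (up to scale), >= 6 points."""
            rows = []
            for (X, Y, Z), (u, v) in zip(points3d, points2d):
                rows.append([X, Y, Z, 1, 0, 0, 0, 0, -u*X, -u*Y, -u*Z, -u])
                rows.append([0, 0, 0, 0, X, Y, Z, 1, -v*X, -v*Y, -v*Z, -v])
            _, _, vt = np.linalg.svd(np.asarray(rows, dtype=float))
            return vt[-1].reshape(3, 4)  # null-space vector = flattened P

        # Self-check: project points with a known camera, then recover it.
        P_true = np.array([[800., 0., 320., 10.], [0., 800., 240., 5.],
                           [0., 0., 1., 2.]])
        pts3d = np.random.default_rng(1).uniform(-1, 1, (8, 3))
        homog = np.c_[pts3d, np.ones(8)] @ P_true.T
        pts2d = homog[:, :2] / homog[:, 2:3]

        P_est = dlt(pts3d, pts2d)
        print(np.allclose(P_est / P_est[2, 3], P_true / P_true[2, 3], atol=1e-6))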

  20. Efficient Reduction and Analysis of Model Predictive Error

    NASA Astrophysics Data System (ADS)

    Doherty, J.

    2006-12-01

    Most groundwater models are calibrated against historical measurements of head and other system states before being used to make predictions in a real-world context. Through the calibration process, parameter values are estimated or refined such that the model is able to reproduce historical behaviour of the system at pertinent observation points reasonably well. Predictions made by the model are deemed to have greater integrity because of this. Unfortunately, predictive integrity is not as easy to achieve as many groundwater practitioners would like to think. The level of parameterisation detail estimable through the calibration process (especially where estimation takes place on the basis of heads alone) is strictly limited, even where full use is made of modern mathematical regularisation techniques such as those encapsulated in the PEST calibration package. (Use of these mechanisms allows more information to be extracted from a calibration dataset than is possible using simpler regularisation devices such as zones of piecewise constancy.) Where a prediction depends on aspects of parameterisation detail that are simply not inferable through the calibration process (which is often the case for predictions related to contaminant movement, and/or many aspects of groundwater/surface water interaction), then that prediction may be just as much in error as it would have been if the model had not been calibrated at all. Model predictive error arises from two sources. These are (a) the presence of measurement noise within the calibration dataset through which linear combinations of parameters spanning the "calibration solution space" are inferred, and (b) the sensitivity of the prediction to members of the "calibration null space" spanned by linear combinations of parameters which are not inferable through the calibration process. The magnitude of the former contribution depends on the level of measurement noise. The magnitude of the latter contribution (which often dominates the former) depends on the "innate variability" of hydraulic properties within the model domain. Knowledge of both of these is a prerequisite for characterisation of the magnitude of possible model predictive error. Unfortunately, in most cases, such knowledge is incomplete and subjective. Nevertheless, useful analysis of model predictive error can still take place. The present paper briefly discusses the means by which mathematical regularisation can be employed in the model calibration process in order to extract as much information as possible on hydraulic property heterogeneity prevailing within the model domain, thereby reducing predictive error to the lowest that can be achieved on the basis of that dataset. It then demonstrates the means by which predictive error variance can be quantified based on information supplied by the regularised inversion process. Both linear and nonlinear predictive error variance analysis is demonstrated using a number of real-world and synthetic examples.
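
    In the linear analysis referred to above, the two error sources add as independent terms. A hedged rendering of the standard decomposition from the PEST literature (notation assumed here, not taken from this abstract): with prediction sensitivity vector y, resolution matrix R, matrix G mapping measurement noise into estimated parameters, prior parameter covariance C(p), and measurement noise covariance C(ε), the predictive error variance is

        \sigma^2_{s-\hat{s}}
          = \underbrace{\mathbf{y}^{T}(\mathbf{I}-\mathbf{R})\,C(\mathbf{p})\,
                        (\mathbf{I}-\mathbf{R})^{T}\mathbf{y}}_{\text{null space: innate variability}}
          + \underbrace{\mathbf{y}^{T}\mathbf{G}\,C(\boldsymbol{\epsilon})\,
                        \mathbf{G}^{T}\mathbf{y}}_{\text{solution space: measurement noise}}

    The first term vanishes only if the prediction depends solely on parameter combinations the calibration can resolve; the second grows with the noise in the calibration dataset.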

  1. Scientific impact of MODIS C5 calibration degradation and C6+ improvements

    NASA Astrophysics Data System (ADS)

    Lyapustin, A.; Wang, Y.; Xiong, X.; Meister, G.; Platnick, S.; Levy, R.; Franz, B.; Korkin, S.; Hilker, T.; Tucker, J.; Hall, F.; Sellers, P.; Wu, A.; Angal, A.

    2014-12-01

    The Collection 6 (C6) MODIS (Moderate Resolution Imaging Spectroradiometer) land and atmosphere data sets are scheduled for release in 2014. C6 contains significant revisions of the calibration approach to account for sensor aging. This analysis documents the presence of systematic temporal trends in the visible and near-infrared (500 m) bands of the Collection 5 (C5) MODIS Terra and, to a lesser extent, in MODIS Aqua geophysical data sets. Sensor degradation is largest in the blue band (B3) of the MODIS sensor on Terra and decreases with wavelength. Calibration degradation causes negative global trends in multiple MODIS C5 products including the dark target algorithm's aerosol optical depth over land and Ångström exponent over the ocean, global liquid water and ice cloud optical thickness, as well as surface reflectance and vegetation indices, including the normalized difference vegetation index (NDVI) and enhanced vegetation index (EVI). As the C5 production will be maintained for another year in parallel with C6, one objective of this paper is to raise awareness of the calibration-related trends for the broad MODIS user community. The new C6 calibration approach removes major calibration trends in the Level 1B (L1B) data. This paper also introduces an enhanced C6+ calibration of the MODIS data set which includes an additional polarization correction (PC) to compensate for the increased polarization sensitivity of MODIS Terra since about 2007, as well as detrending and Terra-Aqua cross-calibration over quasi-stable desert calibration sites. The PC algorithm, developed by the MODIS ocean biology processing group (OBPG), removes residual scan angle, mirror side and seasonal biases from aerosol and surface reflectance (SR) records along with spectral distortions of SR. Using the multiangle implementation of atmospheric correction (MAIAC) algorithm over deserts, we have also developed a detrending and cross-calibration method which removes residual decadal trends on the order of several tenths of 1% of the top-of-atmosphere (TOA) reflectance in the visible and near-infrared MODIS bands B1-B4, and provides a good consistency between the two MODIS sensors. MAIAC analysis over the southern USA shows that the C6+ approach removed an additional negative decadal trend of Terra ΔNDVI ~ 0.01 as compared to Aqua data. This change is particularly important for analysis of vegetation dynamics and trends in the tropics, e.g., Amazon rainforest, where the morning orbit of Terra provides considerably more cloud-free observations compared to the afternoon Aqua measurements.

  2. Scientific Impact of MODIS C5 Calibration Degradation and C6+ Improvements

    NASA Technical Reports Server (NTRS)

    Lyapustin, A.; Wang, Y.; Xiong, X.; Meister, G.; Platnick, S.; Levy, R.; Franz, B.; Korkin, S.; Hilker, T.; Tucker, J.; et al.

    2014-01-01

    The Collection 6 (C6) MODIS (Moderate Resolution Imaging Spectroradiometer) land and atmosphere data sets are scheduled for release in 2014. C6 contains significant revisions of the calibration approach to account for sensor aging. This analysis documents the presence of systematic temporal trends in the visible and near-infrared (500 m) bands of the Collection 5 (C5) MODIS Terra and, to a lesser extent, in MODIS Aqua geophysical data sets. Sensor degradation is largest in the blue band (B3) of the MODIS sensor on Terra and decreases with wavelength. Calibration degradation causes negative global trends in multiple MODIS C5 products including the dark target algorithm's aerosol optical depth over land and Ångström exponent over the ocean, global liquid water and ice cloud optical thickness, as well as surface reflectance and vegetation indices, including the normalized difference vegetation index (NDVI) and enhanced vegetation index (EVI). As the C5 production will be maintained for another year in parallel with C6, one objective of this paper is to raise awareness of the calibration-related trends for the broad MODIS user community. The new C6 calibration approach removes major calibration trends in the Level 1B (L1B) data. This paper also introduces an enhanced C6+ calibration of the MODIS data set which includes an additional polarization correction (PC) to compensate for the increased polarization sensitivity of MODIS Terra since about 2007, as well as detrending and Terra-Aqua cross-calibration over quasi-stable desert calibration sites. The PC algorithm, developed by the MODIS ocean biology processing group (OBPG), removes residual scan angle, mirror side and seasonal biases from aerosol and surface reflectance (SR) records along with spectral distortions of SR. Using the multiangle implementation of atmospheric correction (MAIAC) algorithm over deserts, we have also developed a detrending and cross-calibration method which removes residual decadal trends on the order of several tenths of 1% of the top-of-atmosphere (TOA) reflectance in the visible and near-infrared MODIS bands B1-B4, and provides a good consistency between the two MODIS sensors. MAIAC analysis over the southern USA shows that the C6+ approach removed an additional negative decadal trend of Terra ΔNDVI ~ 0.01 as compared to Aqua data. This change is particularly important for analysis of vegetation dynamics and trends in the tropics, e.g., Amazon rainforest, where the morning orbit of Terra provides considerably more cloud-free observations compared to the afternoon Aqua measurements.

  3. Radiometric calibration of an ultra-compact microbolometer thermal imaging module

    NASA Astrophysics Data System (ADS)

    Riesland, David W.; Nugent, Paul W.; Laurie, Seth; Shaw, Joseph A.

    2017-05-01

    As microbolometer focal plane array formats steadily decrease, new challenges arise in correcting for thermal drift in the calibration coefficients. As the thermal mass of the cameras decreases, the focal plane becomes more sensitive to external thermal inputs. This paper shows results from a temperature compensation algorithm for characterizing and radiometrically calibrating a FLIR Lepton camera.

  4. Bayesian model calibration of computational models in velocimetry diagnosed dynamic compression experiments.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown, Justin; Hund, Lauren

    2017-02-01

    Dynamic compression experiments are being performed on complicated materials using increasingly complex drivers. The data produced in these experiments are beginning to reach a regime where traditional analysis techniques break down, requiring the solution of an inverse problem. A common measurement in dynamic experiments is an interface velocity as a function of time, and often this functional output can be simulated using a hydrodynamics code. Bayesian model calibration is a statistical framework to estimate inputs into a computational model in the presence of multiple uncertainties, making it well suited to measurements of this type. In this article, we apply Bayesian model calibration to high pressure (250 GPa) ramp compression measurements in tantalum. We address several issues specific to this calibration, including the functional nature of the output as well as parameter and model discrepancy identifiability. Specifically, we propose scaling the likelihood function by an effective sample size rather than modeling the autocorrelation function to accommodate the functional output, and propose sensitivity analyses using the notion of 'modularization' to assess the impact of experiment-specific nuisance input parameters on estimates of material properties. We conclude that the proposed Bayesian model calibration procedure results in simple, fast, and valid inferences on the equation of state parameters for tantalum.

  5. OEDIPE: a new graphical user interface for fast construction of numerical phantoms and MCNP calculations.

    PubMed

    Franck, D; de Carlan, L; Pierrat, N; Broggio, D; Lamart, S

    2007-01-01

    Although great efforts have been made to improve the physical phantoms used to calibrate in vivo measurement systems, these phantoms represent a single average counting geometry and usually contain a uniform distribution of the radionuclide over the tissue substitute. As a matter of fact, significant corrections must be made to phantom-based calibration factors in order to obtain absolute calibration efficiencies applicable to a given individual. The importance of these corrections is particularly crucial when considering in vivo measurements of low energy photons emitted by radionuclides deposited in the lung such as actinides. Thus, it was desirable to develop a method for calibrating in vivo measurement systems that is more sensitive to these types of variability. Previous works have demonstrated the possibility of such a calibration using the Monte Carlo technique. Our research programme extended such investigations to the reconstruction of numerical anthropomorphic phantoms based on personal physiological data obtained by computed tomography. New procedures based on a new graphical user interface (GUI) for development of computational phantoms for Monte Carlo calculations and data analysis are being developed to take advantage of recent progress in image-processing codes. This paper presents the principal features of this new GUI. Results of calculations and comparison with experimental data are also presented and discussed in this work.

  6. Parameter optimization of a hydrologic model in a snow-dominated basin using a modular Python framework

    NASA Astrophysics Data System (ADS)

    Volk, J. M.; Turner, M. A.; Huntington, J. L.; Gardner, M.; Tyler, S.; Sheneman, L.

    2016-12-01

    Many distributed models that simulate watershed hydrologic processes require a collection of multi-dimensional parameters as input, some of which need to be calibrated before the model can be applied. The Precipitation Runoff Modeling System (PRMS) is a physically-based and spatially distributed hydrologic model that contains a considerable number of parameters that often need to be calibrated. Modelers can also benefit from uncertainty analysis of these parameters. To meet these needs, we developed a modular framework in Python to conduct PRMS parameter optimization, uncertainty analysis, interactive visual inspection of parameters and outputs, and other common modeling tasks. Here we present results for multi-step calibration of sensitive parameters controlling solar radiation, potential evapotranspiration, and streamflow in a PRMS model that we applied to the snow-dominated Dry Creek watershed in Idaho. We also demonstrate how our modular approach enables the user to choose among a variety of parameter optimization and uncertainty methods or easily define their own, for example Monte Carlo random sampling, uniform sampling, or optimization methods such as the downhill simplex method and its more robust counterpart, shuffled complex evolution.
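
    A two-stage strategy of this kind, global sampling followed by local refinement, is easy to express in Python. The sketch below uses a synthetic objective as a stand-in for a PRMS run scored against observed streamflow, and SciPy's Nelder-Mead (downhill simplex) in place of a full shuffled complex evolution implementation.

```python
import numpy as np
from scipy.optimize import minimize

def objective(params):
    # Stand-in for 1 - NSE of a PRMS run; in practice this would execute
    # the model with `params` and score simulated vs. observed streamflow.
    return float(np.sum((params - np.array([0.3, 1.5])) ** 2))

rng = np.random.default_rng(0)
bounds = np.array([[0.0, 1.0], [0.5, 3.0]])  # hypothetical parameter ranges

# Stage 1: Monte Carlo screening by uniform random sampling.
samples = rng.uniform(bounds[:, 0], bounds[:, 1], size=(500, 2))
best = samples[np.argmin([objective(s) for s in samples])]

# Stage 2: local refinement with the downhill simplex (Nelder-Mead) method.
result = minimize(objective, best, method="Nelder-Mead")
print(result.x)
```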

  7. Classification of hydrological parameter sensitivity and evaluation of parameter transferability across 431 US MOPEX basins

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ren, Huiying; Hou, Zhangshuan; Huang, Maoyi

    The Community Land Model (CLM) represents physical, chemical, and biological processes of the terrestrial ecosystems that interact with climate across a range of spatial and temporal scales. As CLM includes numerous sub-models and associated parameters, the high-dimensional parameter space presents a formidable challenge for quantifying uncertainty and improving Earth system predictions needed to assess environmental changes and risks. This study aims to evaluate the potential of transferring hydrologic model parameters in CLM through sensitivity analyses and classification across watersheds from the Model Parameter Estimation Experiment (MOPEX) in the United States. The sensitivity of CLM-simulated water and energy fluxes to hydrological parameters across 431 MOPEX basins is first examined using an efficient stochastic sampling-based sensitivity analysis approach. Linear, interaction, and high-order nonlinear impacts are all identified via statistical tests and stepwise backward removal parameter screening. The basins are then classified according to their parameter sensitivity patterns (internal attributes) and, separately, their hydrologic indices/attributes (external hydrologic factors), using a principal component analysis (PCA) and expectation-maximization (EM)-based clustering approach. Similarities and differences among the parameter sensitivity-based classification system (S-Class), the hydrologic indices-based classification (H-Class), and the Köppen climate classification system (K-Class) are discussed. Within each S-Class with similar parameter sensitivity characteristics, similar inversion modeling setups can be used for parameter calibration, and the parameters and their contribution or significance to water and energy cycling may also be more transferrable. This classification study provides guidance on identifiable parameters and on parameterization and inverse model design for CLM, but the methodology is applicable to other models. Inverting parameters at representative sites belonging to the same class can significantly reduce parameter calibration efforts.
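
    The classification step, compressing each basin's parameter sensitivity pattern with PCA and then clustering with an EM-fitted mixture model, maps onto standard scikit-learn calls. The sketch below uses a random matrix as a stand-in for the 431-basin sensitivity table; the component and cluster counts are illustrative.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.mixture import GaussianMixture

# Rows: basins; columns: sensitivity indices of hydrologic parameters.
rng = np.random.default_rng(1)
S = rng.random((431, 12))                      # synthetic stand-in data

scores = PCA(n_components=3).fit_transform(S)  # compress sensitivity patterns
gmm = GaussianMixture(n_components=4, random_state=1)  # fitted by EM
labels = gmm.fit_predict(scores)               # S-Class label for each basin
```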

  8. Atacama Cosmology Telescope: Polarization calibration analysis for CMB measurements with ACTPol and Advanced ACTPol

    NASA Astrophysics Data System (ADS)

    Koopman, Brian; ACTPol Collaboration

    2015-04-01

    The Atacama Cosmology Telescope Polarimeter (ACTPol) is a polarization sensitive upgrade for the Atacama Cosmology Telescope, located at an elevation of 5190 m on Cerro Toco in Chile. Achieving first light in 2013, ACTPol is entering its third observation season. Advanced ACTPol is a next generation upgrade for ACTPol, with additional frequencies, polarization modulation, and new detector arrays, that will begin in 2016. I will first present an overview of the two projects and then focus on describing the methods used for polarization angle calibration of the ACTPol detectors. These methods utilize polarization ray tracing in the optical design software CODE V together with detector positions determined from planet observations and represent a critical input for mapping the polarization of the CMB.

  9. Quantitative aspects of inductively coupled plasma mass spectrometry

    PubMed Central

    Wagner, Barbara

    2016-01-01

    Accurate determination of elements in various kinds of samples is essential for many areas, including environmental science, medicine, as well as industry. Inductively coupled plasma mass spectrometry (ICP-MS) is a powerful tool enabling multi-elemental analysis of numerous matrices with high sensitivity and good precision. Various calibration approaches can be used to perform accurate quantitative measurements by ICP-MS. They include the use of pure standards, matrix-matched standards, or relevant certified reference materials, assuring traceability of the reported results. This review critically evaluates the advantages and limitations of different calibration approaches, which are used in quantitative analyses by ICP-MS. Examples of such analyses are provided. This article is part of the themed issue ‘Quantitative mass spectrometry’. PMID:27644971

  10. A continuous stream flash evaporator for the calibration of an IR cavity ring-down spectrometer for the isotopic analysis of water.

    PubMed

    Gkinis, Vasileios; Popp, Trevor J; Johnsen, Sigfus J; Blunier, Thomas

    2010-12-01

    A new technique for high-resolution simultaneous isotopic analysis of δ¹⁸O and δD in liquid water is presented. A continuous stream flash evaporator has been designed that is able to vapourise a stream of liquid water in a continuous mode and deliver a stable and finely controlled water vapour sample to a commercially available infrared cavity ring-down spectrometer. Injection of sub-microlitre amounts of the liquid water is achieved by pumping liquid water sample through a fused silica capillary and instantaneously vapourising it with 100% efficiency in a home-made oven at a temperature of 170 °C. The system's simplicity, low power consumption and low dead volume, together with the possibility for automated unattended operation, provide a solution for the calibration of laser instruments performing isotopic analysis of water vapour. Our work is mainly driven by the possibility to perform high-resolution online water isotopic analysis on continuous-flow analysis (CFA) systems typically used to analyse the chemical composition of ice cores drilled in polar regions. In the following, we describe the system's precision, stability, and sensitivity to varying levels of sample size, and we assess the observed memory effects. A test run with standard waters of different isotopic compositions is presented, demonstrating the ability to calibrate the spectrometer's measurements on a VSMOW scale with a relatively simple and fast procedure.

  11. An integrated approach to monitoring the calibration stability of operational dual-polarization radars

    DOE PAGES

    Vaccarono, Mattia; Bechini, Renzo; Chandrasekar, Chandra V.; ...

    2016-11-08

    The stability of weather radar calibration is a mandatory aspect for quantitative applications, such as rainfall estimation, short-term weather prediction and initialization of numerical atmospheric and hydrological models. Over the years, calibration monitoring techniques based on external sources have been developed, specifically calibration using the Sun and calibration based on ground clutter returns. In this paper, these two techniques are integrated and complemented with a self-consistency procedure and an intercalibration technique. The aim of the integrated approach is to implement a robust method for online monitoring, able to detect significant changes in the radar calibration. The physical consistency of polarimetric radar observables is exploited using the self-consistency approach, based on the expected correspondence between dual-polarization power and phase measurements in rain. This technique allows a reference absolute value to be provided for the radar calibration, from which any deviations may be detected using the other procedures. In particular, the ground clutter calibration is implemented on both polarization channels (horizontal and vertical) for each radar scan, allowing the polarimetric variables to be monitored and hardware failures to be promptly recognized. The Sun calibration allows the calibration and sensitivity of the radar receiver to be monitored, in addition to the antenna pointing accuracy. It is applied using observations collected during the standard operational scans but requires long integration times (several days) in order to accumulate a sufficient amount of useful data. Finally, an intercalibration technique is developed and performed to compare colocated measurements collected in rain by two radars in overlapping regions. The integrated approach is performed on the C-band weather radar network in northwestern Italy, during July–October 2014. The set of methods considered appears suitable to establish an online tool to monitor the stability of the radar calibration with an accuracy of about 2 dB. In conclusion, this is considered adequate to automatically detect any unexpected change in the radar system requiring further data analysis or on-site measurements.
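
    The self-consistency constraint exploits the fact that, in rain, the specific differential phase is tightly related to the power measurements. One widely published family of relations (used here only as an illustration; the abstract does not give the authors' exact parameterization) is

```latex
K_{dp} \;\approx\; a \, Z_h^{\,b} \, Z_{dr}^{\,c},
```

    where Z_h is the horizontal reflectivity in linear units, Z_dr the differential reflectivity, and a, b, c empirically fitted coefficients. Integrating the K_dp predicted from the measured powers along a ray and comparing it with the measured differential phase change provides an absolute reference for the reflectivity calibration.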

  12. Mesh refinement and numerical sensitivity analysis for parameter calibration of partial differential equations

    NASA Astrophysics Data System (ADS)

    Becker, Roland; Vexler, Boris

    2005-06-01

    We consider the calibration of parameters in physical models described by partial differential equations. This task is formulated as a constrained optimization problem with a cost functional of least squares type using information obtained from measurements. An important issue in the numerical solution of this type of problem is the control of the errors introduced, first, by discretization of the equations describing the physical model, and second, by measurement errors or other perturbations. Our strategy is as follows: we suppose that the user defines an interest functional I, which might depend on both the state variable and the parameters and which represents the goal of the computation. First, we propose an a posteriori error estimator which measures the error with respect to this functional. This error estimator is used in an adaptive algorithm to construct economic meshes by local mesh refinement. The proposed estimator requires the solution of an auxiliary linear equation. Second, we address the question of sensitivity. Applying similar techniques as before, we derive quantities which describe the influence of small changes in the measurements on the value of the interest functional. These numbers, which we call relative condition numbers, give additional information on the problem under consideration. They can be computed by means of the solution of the auxiliary problem determined before. Finally, we demonstrate our approach by means of a parameter calibration problem for a model flow problem.
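
    In compact notation (our reconstruction from the abstract, not necessarily the authors' exact formulation), the calibration problem is the PDE-constrained least-squares problem

```latex
\min_{q,\,u}\; J(u) \;=\; \tfrac{1}{2}\sum_{i=1}^{m}\bigl(C_i(u)-\bar{C}_i\bigr)^2
\qquad \text{s.t.} \qquad a(q,u)(\varphi) = f(\varphi) \quad \forall\,\varphi\in V,
```

    where C_i are the measurement operators, \bar{C}_i the observed values, and a(q,u) the weak form of the governing PDE. The relative condition numbers then take the form \kappa_i = \partial I / \partial \bar{C}_i, quantifying how a perturbation of the i-th measurement propagates to the user-defined interest functional I.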

  13. Remote Determination of the in situ Sensitivity of a Streckeisen STS-2 Broadband Seismometer

    NASA Astrophysics Data System (ADS)

    Uhrhammer, R. A.; Taira, T.; Hellweg, M.

    2015-12-01

    The sensitivity of a STS-2 broadband seismometer can be determined remotely by two basic methods: 1) via comparison of the inferred ground motions with a reference seismometer, and 2) via excitation of the calibration coil with a simultaneously recorded stimulus signal. The first method is limited by the accuracy of the reference seismometer and the second method is limited by the accuracy of the motor constant (Gc) of the calibration coil. The accuracy of both methods is also influenced by the signal-to-noise ratio (SNR) in the presence of background seismic noise and the degree of orthogonality of the tri-axial suspension in the STS-2 seismometer. The Streckeisen STS-2 manual states that the signal coil sensitivity (Gs) is 1500 V/(m/s) (+/-1.5%) and it gives Gc to only one decimal place (i.e., Gc = 2 g/A). Unfortunately, the factory Gc value is not given with sufficient accuracy to be useful for determining the sensitivity Gs to within 1.5%. Thus we need to determine Gc to enable accurate calibration of the STS-2 via remote excitation of the calibration coil with a known stimulus. The Berkeley Digital Seismic Network (BDSN) has 12 STS-2 seismometers with co-sited reference sensors (strong motion accelerometers) and they are all recorded by Q330HR data loggers with factory cabling. The procedure is to first verify the sensitivity of the STS-2 signal coils (Gs) via comparison of the ground motions recorded by the STS-2 with the ground motions recorded by the co-sited strong motion accelerometer for an earthquake which has sufficiently high SNR in a passband common to both sensors. The second step in the procedure is to remotely (from Berkeley) excite the calibration coil with a 1 Hz sinusoid which is simultaneously recorded and, using the above measured Gs values, solve for Gc of the calibration coils. The resulting Gc values are typically 2.20-2.50 g/A (accurate to 3+ decimal places) and once the Gc values are found, the STS-2 absolute sensitivity can be determined remotely to an accuracy of better than 1%. The primary advantage of using strong motion accelerometers as the reference instrument is that their absolute calibration can be checked via tilt tests if the need arises.
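
    For a sinusoidal stimulus the algebra is simple. Assuming the usual coil model in which the injected current produces an equivalent ground acceleration a = Gc·I·g0, the recorded output V = Gs·v with v = a/(2πf) gives Gc directly; the numbers below are illustrative, not BDSN measurements.

```python
import numpy as np

f  = 1.0      # Hz, calibration sinusoid frequency
I  = 1.0e-5   # A, calibration-coil current amplitude (illustrative)
V  = 5.4e-2   # V, recorded seismometer output amplitude (illustrative)
Gs = 1500.0   # V/(m/s), signal-coil sensitivity verified against accelerometer
g0 = 9.81     # m/s^2 per g

# V = Gs * v and v = a / (2*pi*f) for a sinusoid, with a = Gc * I * g0, so:
Gc = 2 * np.pi * f * V / (Gs * I * g0)   # g/A
print(f"Gc = {Gc:.2f} g/A")              # ~2.31 g/A, within the typical range
```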

  14. Limb Sensing, on the Path to Better Weather Forecasting.

    NASA Astrophysics Data System (ADS)

    Gordley, L. L.; Marshall, B. T.; Lachance, R. L.; Fritts, D. C.; Fisher, J.

    2017-12-01

    Earth limb observations from orbiting sensors have a rich history. The cold space background, long optical paths, and limb geometry provide formidable advantages for calibration, sensitivity and retrieval of vertically well-resolved geophysical parameters. The measurement of limb ray refraction now provides temperature and pressure profiles unburdened by requirements of spectral calibration or gas concentration knowledge, leading to reliable long-term trends. This talk discusses those advantages and our relevant achievements with data from the SOFIE instrument on the AIM satellite. We then describe a path to advances in calibration, sensitivity, profile fidelity, and synergy between limb sensors and nadir sounders. These advances also include small-sat compatible size, elimination of on-board calibration systems and simple static designs, dramatically reducing risk, complexity and cost. Finally, we show how these advances, made possible by modern ADCS, FPA and GPS capabilities, will lead to improvements in weather forecasting and climate observation.

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Patankar, S.; Gumbrell, E. T.; Robinson, T. S.

    Here we report a new method using high-stability, laser-driven supercontinuum generation in a liquid cell to calibrate the absolute photon response of fast optical streak cameras as a function of wavelength when operating at the fastest sweep speeds. A stable, pulsed white light source based around the use of self-phase modulation in a salt solution was developed to provide the required brightness on picosecond timescales, enabling streak camera calibration in fully dynamic operation. The measured spectral brightness allowed for absolute photon response calibration over a broad spectral range (425-650 nm). Calibrations performed with two Axis Photonique streak cameras using the Photonis P820PSU streak tube demonstrated responses which qualitatively follow the photocathode response. Peak sensitivities were 1 photon/count above background. The absolute dynamic sensitivity is less than the static by up to an order of magnitude. We attribute this to the dynamic response of the phosphor being lower.

  16. Local sensitivity analysis for inverse problems solved by singular value decomposition

    USGS Publications Warehouse

    Hill, M.C.; Nolan, B.T.

    2010-01-01

    Local sensitivity analysis provides computationally frugal ways to evaluate models commonly used for resource management, risk assessment, and so on. This includes diagnosing inverse model convergence problems caused by parameter insensitivity and(or) parameter interdependence (correlation), understanding what aspects of the model and data contribute to measures of uncertainty, and identifying new data likely to reduce model uncertainty. Here, we consider sensitivity statistics relevant to models in which the process model parameters are transformed using singular value decomposition (SVD) to create SVD parameters for model calibration. The statistics considered include the PEST identifiability statistic, and combined use of the process-model parameter statistics composite scaled sensitivities and parameter correlation coefficients (CSS and PCC). The statistics are complementary in that the identifiability statistic integrates the effects of parameter sensitivity and interdependence, while CSS and PCC provide individual measures of sensitivity and interdependence. PCC quantifies correlations between pairs or larger sets of parameters; when a set of parameters is intercorrelated, the absolute value of PCC is close to 1.00 for all pairs in the set. The number of singular vectors to include in the calculation of the identifiability statistic is somewhat subjective and influences the statistic. To demonstrate the statistics, we use the USDA’s Root Zone Water Quality Model to simulate nitrogen fate and transport in the unsaturated zone of the Merced River Basin, CA. There are 16 log-transformed process-model parameters, including water content at field capacity (WFC) and bulk density (BD) for each of five soil layers. Calibration data consisted of 1,670 observations comprising soil moisture, soil water tension, aqueous nitrate and bromide concentrations, soil nitrate concentration, and organic matter content. All 16 of the SVD parameters could be estimated by regression based on the range of singular values. Identifiability statistic results varied based on the number of SVD parameters included. Identifiability statistics calculated for four SVD parameters indicate the same three most important process-model parameters as CSS/PCC (WFC1, WFC2, and BD2), but the order differed. Additionally, the identifiability statistic showed that BD1 was almost as dominant as WFC1. The CSS/PCC analysis showed that this results from its high correlation with WFC1 (-0.94), and not its individual sensitivity. Such distinctions, combined with analysis of how high correlations and(or) sensitivities result from the constructed model, can produce important insights into, for example, the use of sensitivity analysis to design monitoring networks. In conclusion, the statistics considered identified similar important parameters. They differ because (1) analysis with CSS/PCC can be more awkward because sensitivity and interdependence are considered separately and (2) identifiability requires consideration of how many SVD parameters to include. A continuing challenge is to understand how these computationally efficient methods compare with computationally demanding global methods like Markov-Chain Monte Carlo given common nonlinear processes and the often even more nonlinear models.
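
    Under the regression framework assumed by these statistics, both CSS and PCC follow from the weighted Jacobian, so a small numerical sketch is possible (definitions as in the standard regression literature; variable names are illustrative).

```python
import numpy as np

def css_and_pcc(J, w, b):
    """Composite scaled sensitivities (CSS) and parameter correlation
    coefficients (PCC) from a Jacobian J (n_obs x n_par), observation
    weights w (n_obs,), and parameter values b (n_par,)."""
    n = J.shape[0]
    # Dimensionless scaled sensitivities: J_ij * b_j * sqrt(w_i).
    dss = J * b[None, :] * np.sqrt(w)[:, None]
    css = np.sqrt(np.sum(dss ** 2, axis=0) / n)
    # Parameter covariance (up to the error variance) and correlations.
    cov = np.linalg.inv(J.T @ (w[:, None] * J))
    d = np.sqrt(np.diag(cov))
    pcc = cov / np.outer(d, d)
    return css, pcc
```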

  17. On the cross-sensitivity between water vapor mixing ratio and stable isotope measurements of in-situ analyzers

    NASA Astrophysics Data System (ADS)

    Parkes, Stephen; Wang, Lixin; McCabe, Matthew

    2015-04-01

    In recent years there has been an increasing amount of water vapor stable isotope data collected using in-situ instrumentation. A number of papers have characterized the performance of these in-situ analyzers and suggested methods for calibrating raw measurements. The cross-sensitivity of the isotopic measurements on the mixing ratio has been shown to be a major uncertainty and a variety of techniques have been suggested to characterize this inaccuracy. However, most of these are based on relating isotopic ratios to water vapor mixing ratios from in-situ analyzers when the mixing ratio is varied and the isotopic composition kept constant. An additional correction for the span of the isotopic ratio scale is then applied by measuring different isotopic standards. Here we argue that the water vapor cross-sensitivity arises from different instrument responses (span and offset) of the parent H2O isotope and the heavier isotopes, rather than spectral overlap that could cause a true variation in the isotopic ratio with mixing ratio. This is especially relevant for commercial laser optical instruments where absorption lines are well resolved. Thus, the cross-sensitivity determined using more conventional techniques is dependent on the isotopic ratio of the standard used for the characterization, although errors are expected to be small. Consequently, the cross-sensitivity should be determined by characterizing the span and zero offset of each isotope mixing ratio. In fact, this technique makes the span correction for the isotopic ratio redundant. In this work we model the impact of changes in the span and offset of the heavy and light isotopes and illustrate the impact on the cross-sensitivity of the isotopic ratios on water vapor. This clearly shows the importance of determining the zero offset for the two isotopes. The cross-sensitivity of the isotopic ratios on water vapor is then characterized by determining the instrument response for the individual isotopes for a number of different in-situ analyzers that employ different optical methods. We compare this simplified calibration technique to more conventional characterization of both the cross-sensitivity determined in isotopic ratio space and the isotopic ratio span. Utilizing this simplified calibration approach with improved software control can lead to a significant reduction in time spent calibrating in-situ instrumentation or enable an increase in calibration frequency as required to minimize measurement uncertainty.
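
    The calibration scheme argued for here replaces a ratio-space correction with independent span and zero-offset corrections for each isotopologue, after which the isotope ratio is formed. A minimal sketch follows (all parameter names are illustrative; the VSMOW D/H ratio is the standard literature value).

```python
def delta_d_from_raw(h2o_raw, hdo_raw, span_h2o, off_h2o, span_hdo, off_hdo,
                     r_vsmow=1.5576e-4):
    """Correct each isotopologue mixing ratio with its own instrument
    response (span and zero offset), then form the isotope ratio."""
    h2o = span_h2o * h2o_raw + off_h2o   # light-isotopologue response
    hdo = span_hdo * hdo_raw + off_hdo   # heavy-isotopologue response
    r = hdo / h2o
    return (r / r_vsmow - 1.0) * 1000.0  # delta-D in permil vs. VSMOW
```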

  18. Self-calibration performance in stereoscopic PIV acquired in a transonic wind tunnel

    DOE PAGES

    Beresh, Steven J.; Wagner, Justin L.; Smith, Barton L.

    2016-03-16

    Three stereoscopic PIV experiments have been examined to test the effectiveness of self-calibration under varied circumstances. Measurements taken in a streamwise plane yielded a robust self-calibration that returned common results regardless of the specific calibration procedure, but measurements in the crossplane exhibited substantial velocity bias errors whose nature was sensitive to the particulars of the self-calibration approach. Self-calibration is complicated by thick laser sheets and large stereoscopic camera angles and further exacerbated by small particle image diameters and high particle seeding density. In spite of the different answers obtained by varied self-calibrations, each implementation locked onto an apparently valid solution with small residual disparity and converged adjustment of the calibration plane. Thus, the convergence of self-calibration on a solution with small disparity is not sufficient to indicate negligible velocity error due to the stereo calibration.

  19. Design and Theoretical Analysis of a Resonant Sensor for Liquid Density Measurement

    PubMed Central

    Zheng, Dezhi; Shi, Jiying; Fan, Shangchun

    2012-01-01

    In order to increase the accuracy of on-line liquid density measurements, a sensor equipped with a tuning fork as the resonant sensitive component is designed in this paper. It is a quasi-digital sensor with simple structure and high precision. The sensor is based on resonance theory and composed of a sensitive unit and a closed-loop control unit, where the sensitive unit consists of the actuator, the resonant tuning fork and the detector and the closed-loop control unit comprises precondition circuit, digital signal processing and control unit, analog-to-digital converter and digital-to-analog converter. An approximate parameters model of the tuning fork is established and the impact of liquid density, position of the tuning fork, temperature and structural parameters on the natural frequency of the tuning fork are also analyzed. On this basis, a tuning fork liquid density measurement sensor is developed. In addition, experimental testing on the sensor has been carried out on standard calibration facilities under constant 20 °C, and the sensor coefficients are calibrated. The experimental results show that the repeatability error is about 0.03% and the accuracy is about 0.4 kg/m3. The results also confirm that the method to increase the accuracy of liquid density measurement is feasible. PMID:22969378

  20. Design and theoretical analysis of a resonant sensor for liquid density measurement.

    PubMed

    Zheng, Dezhi; Shi, Jiying; Fan, Shangchun

    2012-01-01

    In order to increase the accuracy of on-line liquid density measurements, a sensor equipped with a tuning fork as the resonant sensitive component is designed in this paper. It is a quasi-digital sensor with simple structure and high precision. The sensor is based on resonance theory and composed of a sensitive unit and a closed-loop control unit, where the sensitive unit consists of the actuator, the resonant tuning fork and the detector and the closed-loop control unit comprises precondition circuit, digital signal processing and control unit, analog-to-digital converter and digital-to-analog converter. An approximate parameters model of the tuning fork is established and the impact of liquid density, position of the tuning fork, temperature and structural parameters on the natural frequency of the tuning fork are also analyzed. On this basis, a tuning fork liquid density measurement sensor is developed. In addition, experimental testing on the sensor has been carried out on standard calibration facilities under constant 20 °C, and the sensor coefficients are calibrated. The experimental results show that the repeatability error is about 0.03% and the accuracy is about 0.4 kg/m3. The results also confirm that the method to increase the accuracy of liquid density measurement is feasible.

  1. EBT-XD Radiochromic Film Sensitivity Calibrations Using Proton Beams from a Pelletron Accelerator

    NASA Astrophysics Data System (ADS)

    Stockler, Barak; Grun, Alexander; Brown, Gunnar; Klein, Matthew; Wood, Jacob; Cooper, Anthony; Ward, Ryan; Freeman, Charlie; Padalino, Stephen; Regan, S. P.; Sangster, T. C.

    2017-10-01

    Radiochromic film (RCF) is a transparent detector film that permanently changes color following exposure to ionizing radiation. RCF is used frequently in medical applications, but also has been used in a variety of high energy density physics diagnostics. RCF is convenient to use because it requires no chemical processing and can be scanned using commercially available document scanners. In this study, the sensitivity of Gafchromic™ EBT-XD RCF to protons and x-rays was measured. Proton beams produced by the SUNY Geneseo Pelletron accelerator were directed into an evacuated target chamber where they scattered off a thin gold foil. The scattered protons were incident on a sample of RCF which subtended a range of angles around the scattering center. A new analysis method, which relies on the variation in scattered proton fluence as a function of scattering angle in accordance with the Rutherford scattering law, is currently being developed to speed up the proton calibrations. Samples of RCF were also exposed to x-ray radiation using an X-RAD 160 x-ray irradiator, allowing the sensitivity of RCF to x-rays to be measured. This work was funded in part by a grant from the DOE through the Laboratory for Laser Energetics as well as the NSF.
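
    The scattering-angle method leans on the Rutherford cross-section: for a beam of energy E scattering off nuclei of charge Z2 (in Gaussian units),

```latex
\frac{d\sigma}{d\Omega} \;=\; \left(\frac{Z_1 Z_2 e^2}{4E}\right)^{\!2}\frac{1}{\sin^4(\theta/2)},
```

    so a single film spanning a range of scattering angles θ samples a known, smoothly varying range of proton fluences in one exposure, which is what allows several calibration points to be collected per shot.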

  2. An ionization gauge for ultrahigh vacuum measurement based on a carbon nanotube cathode

    NASA Astrophysics Data System (ADS)

    Zhang, Huzhong; Cheng, Yongjun; Sun, Jian; Wang, Yongjun; Xi, Zhenhua; Dong, Meng; Li, Detian

    2017-10-01

    This work reports on the complete design and the properties of an ionization gauge based on a carbon nanotube cathode, which can measure ultrahigh vacuum without thermal effects. The gauge is composed of a pressure sensor and an electronic controller. The pressure sensor is constructed based on a hot-cathode ionization gauge, where the traditional hot filament is replaced by an electron source prepared with multi-wall nanotubes. In addition, an electronic controller was developed for bias voltage supply, low current detection, and pressure indication. The gauge was calibrated in the pressure range of 10⁻⁸ to 10⁻⁴ Pa in an XHV/UHV calibration apparatus. The gauge shows good linear characteristics for different gases. The calibrated sensitivity is 0.035 Pa⁻¹ in N2, and the standard deviation of the sensitivity is about 1.1%. In addition, the stability of the sensitivity was monitored over a long period. The standard deviation of the sensitivity factor "S" during one year is 2.0% for Ar and 1.6% for N2.
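
    With the standard hot-cathode gauge relation I_ion = S·P·I_e (assumed here; it is the usual definition under which a sensitivity in Pa⁻¹ is quoted), the calibrated sensitivity converts an ion-current reading directly to pressure:

```python
S = 0.035        # Pa^-1, calibrated N2 sensitivity reported in the abstract
I_e = 1.0e-4     # A, emission current (illustrative value)
I_ion = 7.0e-11  # A, measured ion current (illustrative value)

P = I_ion / (S * I_e)      # invert I_ion = S * P * I_e
print(f"P = {P:.2e} Pa")   # 2.00e-05 Pa
```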

  3. Assessment of Spatial Transferability of Process-Based Hydrological Model Parameters in Two Neighboring Catchments in the Himalayan Region

    NASA Astrophysics Data System (ADS)

    Nepal, S.

    2016-12-01

    The spatial transferability of the model parameters of the process-oriented distributed J2000 hydrological model was investigated in two glaciated sub-catchments of the Koshi river basin in eastern Nepal. The basins had a high degree of similarity with respect to their static landscape features. The model was first calibrated (1986-1991) and validated (1992-1997) in the Dudh Koshi sub-catchment. The calibrated and validated model parameters were then transferred to the nearby Tamor catchment (2001-2009). A sensitivity and uncertainty analysis was carried out for both sub-catchments to discover the sensitivity range of the parameters in the two catchments. The model represented the overall hydrograph well in both sub-catchments, including baseflow and medium range flows (rising and recession limbs). The efficiencies according to both the Nash-Sutcliffe criterion and the coefficient of determination were above 0.84 in both cases. The sensitivity analysis showed that the same parameter was most sensitive for Nash-Sutcliffe (ENS) and Log Nash-Sutcliffe (LNS) efficiencies in both catchments. However, there were some differences in sensitivity to ENS and LNS for moderately and weakly sensitive parameters, although the majority (13 out of 16 for ENS and 16 out of 16 for LNS) had a sensitivity response in a similar range. Generalized likelihood uncertainty estimation (GLUE) results suggest that most of the time the observed runoff is within the parameter uncertainty range, although occasionally the values lie outside the uncertainty range, especially during flood peaks and more often in the Tamor. This may be due to the limited input data resulting from the small number of precipitation stations and lack of representative stations in high-altitude areas, as well as to model structural uncertainty. The results indicate that transfer of the J2000 parameters to a neighboring catchment in the Himalayan region with similar physiographic landscape characteristics is viable. This indicates the possibility of applying the process-based J2000 model to ungauged catchments in the Himalayan region, which could provide important insights into the hydrological system dynamics and provide much needed information to support water resources planning and management.
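
    For reference, the two objective functions named above are the standard Nash-Sutcliffe efficiency and its log-flow variant,

```latex
E_{NS} \;=\; 1-\frac{\sum_t \bigl(Q_{obs,t}-Q_{sim,t}\bigr)^2}{\sum_t \bigl(Q_{obs,t}-\bar{Q}_{obs}\bigr)^2},
\qquad
E_{LNS} \;=\; 1-\frac{\sum_t \bigl(\ln Q_{obs,t}-\ln Q_{sim,t}\bigr)^2}{\sum_t \bigl(\ln Q_{obs,t}-\overline{\ln Q_{obs}}\bigr)^2},
```

    where the log transform in LNS shifts the weight from flood peaks toward baseflow, which is why the two criteria can rank parameter sensitivities differently.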

  4. FT-IR-cPAS—New Photoacoustic Measurement Technique for Analysis of Hot Gases: A Case Study on VOCs

    PubMed Central

    Hirschmann, Christian Bernd; Koivikko, Niina Susanna; Raittila, Jussi; Tenhunen, Jussi; Ojala, Satu; Rahkamaa-Tolonen, Katariina; Marbach, Ralf; Hirschmann, Sarah; Keiski, Riitta Liisa

    2011-01-01

    This article describes a new photoacoustic FT-IR system capable of operating at elevated temperatures. The key hardware component is an optical-readout cantilever microphone that can work up to 200 °C. All parts in contact with the sample gas were put into a heated oven, including the photoacoustic cell. The sensitivity of the built photoacoustic system was tested by measuring 18 different VOCs. At 100 ppm gas concentration, the univariate signal to noise ratios (1σ, measurement time 25.5 min, at highest peak, optical resolution 8 cm−1) of the spectra varied from minimally 19 for o-xylene up to 329 for butyl acetate. The sensitivity can be improved by multivariate analyses over broad wavelength ranges, which effectively co-adds the univariate sensitivities achievable at individual wavelengths. The multivariate limit of detection (3σ, 8.5 min, full useful wavelength range), i.e., the best possible inverse analytical sensitivity achievable at optimum calibration, was calculated using the SBC method and varied from 2.60 ppm for dichloromethane to 0.33 ppm for butyl acetate. Depending on the shape of the spectra, which often only contain a few sharp peaks, the multivariate analysis improved the analytical sensitivity by 2.2 to 9.2 times compared to the univariate case. Selectivity and multi-component capability were tested by a SBC calibration including 5 VOCs and water. The average cross selectivities turned out to be less than 2%, and the resulting inverse analytical sensitivities of the 5 interfering VOCs were increased by a maximum factor of 2.2 compared to the single component sensitivities. Water subtraction using SBC gave the true analyte concentration with a variation coefficient of 3%, although the sample spectra (methyl ethyl ketone, 200 ppm) contained water from 1,400 to 100k ppm and only one water spectrum (10k ppm) was used for the subtraction. The developed device shows significant improvement to the current state-of-the-art measurement methods used in industrial VOC measurements. PMID:22163900

  5. Pathogen transport and fate modeling in the Upper Salem River Watershed using SWAT model.

    PubMed

    Niazi, Mehran; Obropta, Christopher; Miskewitz, Robert

    2015-03-15

    Simulation of the fate and transport of pathogen contamination was conducted with SWAT for the Upper Salem River Watershed, located in Salem County, New Jersey. This watershed is 37 km² and land uses are predominantly agricultural. The watershed drains to a 32 km stretch of the Salem River upstream of the head of tide. This stretch is identified on the 303(d) list as impaired for pathogens. The overall goal of this research was to use SWAT as a tool to help to better understand how two pathogen indicators (Escherichia coli and fecal coliform) are transported throughout the watershed, by determining the model parameters that control the fate and transport of these two indicator species. This effort was the first watershed modeling attempt with SWAT to successfully simulate E. coli and fecal coliform simultaneously. Sensitivity analysis has been performed for flow as well as fecal coliform and E. coli. Hydrologic calibration at six sampling locations indicates that the model provides a "good" prediction of watershed outlet flow (E = 0.69) while at certain upstream calibration locations predictions are less representative (0.32 < E < 0.70). Monthly calibration and validation of the pathogen transport and fate model was conducted for both fecal coliform (0.07 < E < 0.47 and -0.94 < E < 0.33) and E. coli (0.03 < E < 0.39 and -0.81 < E < 0.31) for the six sampling points. The fit of the model compared favorably with many similar pathogen modeling efforts. The research contributes new knowledge in E. coli and fecal coliform modeling and will help increase the understanding of sensitivity analysis and pathogen modeling with SWAT at the watershed scale. Copyright © 2014 Elsevier Ltd. All rights reserved.

  6. Effective groundwater model calibration: With analysis of data, sensitivities, predictions, and uncertainty

    USGS Publications Warehouse

    Hill, Mary C.; Tiedeman, Claire

    2007-01-01

    Methods and guidelines for developing and using mathematical models. Turn to Effective Groundwater Model Calibration for a set of methods and guidelines that can help produce more accurate and transparent mathematical models. The models can represent groundwater flow and transport and other natural and engineered systems. Use this book and its extensive exercises to learn methods to fully exploit the data on hand, maximize the model's potential, and troubleshoot any problems that arise. Use the methods to perform: sensitivity analysis to evaluate the information content of data; data assessment to identify (a) existing measurements that dominate model development and predictions and (b) potential measurements likely to improve the reliability of predictions; calibration to develop models that are consistent with the data in an optimal manner; and uncertainty evaluation to quantify and communicate errors in simulated results that are often used to make important societal decisions. Most of the methods are based on linear and nonlinear regression theory. Fourteen guidelines show the reader how to use the methods advantageously in practical situations. Exercises focus on a groundwater flow system and management problem, enabling readers to apply all the methods presented in the text. The exercises can be completed using the material provided in the book, or as hands-on computer exercises using instructions and files available on the text's accompanying Web site. Throughout the book, the authors stress the need for valid statistical concepts and easily understood presentation methods required to achieve well-tested, transparent models. Most of the examples and all of the exercises focus on simulating groundwater systems; other examples come from surface-water hydrology and geophysics. The methods and guidelines in the text are broadly applicable and can be used by students, researchers, and engineers to simulate many kinds of systems.

  7. Invasive and non-invasive measurement in medicine and biology: calibration issues

    NASA Astrophysics Data System (ADS)

    Rolfe, P.; Zhang, Yan; Sun, Jinwei; Scopesi, F.; Serra, G.; Yamakoshi, K.; Tanaka, S.; Yamakoshi, T.; Yamakoshi, Y.; Ogawa, M.

    2010-08-01

    Invasive and non-invasive measurement sensors and systems perform vital roles in medical care. Devices are based on various principles, including optics, photonics, and plasmonics, electro-analysis, magnetics, acoustics, bio-recognition, etc. Sensors may be inserted directly into the human body, for example into contact with blood, which constitutes Invasive Measurement. This approach is very challenging technically, as sensor performance (sensitivity, response time, linearity) can deteriorate due to interactions between the sensor materials and the biological environment, such as blood or interstitial fluid. Invasive techniques may also be potentially hazardous. Alternatively, sensors or devices may be positioned external to the body surface, for example to analyse respired breath, thereby allowing safer Non-Invasive Measurement. However, such methods are inherently less direct and often require more complex calibration algorithms, perhaps based on chemometric principles. This paper considers and reviews the issue of calibration in both invasive and non-invasive biomedical measurement systems. Systems in current use usually rely upon periodic calibration checks being performed by clinical staff against a variety of laboratory instruments and QC samples. These procedures require careful planning and overall management if reliable data are to be assured.

  8. Radiometric Calibration of a Dual-Wavelength, Full-Waveform Terrestrial Lidar.

    PubMed

    Li, Zhan; Jupp, David L B; Strahler, Alan H; Schaaf, Crystal B; Howe, Glenn; Hewawasam, Kuravi; Douglas, Ewan S; Chakrabarti, Supriya; Cook, Timothy A; Paynter, Ian; Saenz, Edward J; Schaefer, Michael

    2016-03-02

    Radiometric calibration of the Dual-Wavelength Echidna® Lidar (DWEL), a full-waveform terrestrial laser scanner with two simultaneously-pulsing infrared lasers at 1064 nm and 1548 nm, provides accurate dual-wavelength apparent reflectance (ρapp), a physically-defined value that is related to the radiative and structural characteristics of scanned targets and independent of range and instrument optics and electronics. The errors of ρapp are 8.1% for 1064 nm and 6.4% for 1548 nm. A sensitivity analysis shows that ρapp error is dominated by range errors at near ranges, but by lidar intensity errors at far ranges. Our semi-empirical model for radiometric calibration combines a generalized logistic function to explicitly model telescopic effects due to defocusing of return signals at near range with a negative exponential function to model the fall-off of return intensity with range. Accurate values of ρapp from the radiometric calibration improve the quantification of vegetation structure, facilitate the comparison and coupling of lidar datasets from different instruments, campaigns or wavelengths and advance the utilization of bi- and multi-spectral information added to 3D scans by novel spectral lidars.
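
    The semi-empirical calibration curve described here is a product of a generalized logistic term (near-range telescopic defocusing) and a negative-exponential fall-off with range. The exact published parameterization is not given in the abstract, so the functional form and coefficients below are an illustrative reconstruction fitted to synthetic data.

```python
import numpy as np
from scipy.optimize import curve_fit

def cal_curve(r, c0, c1, k, r0, b):
    """Illustrative calibration curve: logistic near-range growth times
    negative-exponential decay with range r."""
    return c0 / (1.0 + np.exp(-k * (r - r0))) * np.exp(-b * r) + c1

# Fit against returns from panels of known reflectance (synthetic here).
r = np.linspace(1.0, 60.0, 120)
rng = np.random.default_rng(2)
intensity = cal_curve(r, 1.0, 0.0, 0.8, 5.0, 0.02) + 0.01 * rng.normal(size=r.size)
popt, _ = curve_fit(cal_curve, r, intensity, p0=[1.0, 0.0, 1.0, 4.0, 0.01])
# Apparent reflectance of a target then follows as measured intensity
# divided by the fitted instrument curve evaluated at the target's range.
```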

  9. Design and Calibration of the QUIET CMB Polarimeter

    NASA Astrophysics Data System (ADS)

    Buder, Immanuel

    2011-04-01

    QUIET is a large--angular-scale Cosmic Microwave Background (CMB) polarimeter designed to measure the B-mode signal from inflation. The design incorporates a new time-stream "double-demodulation" technique, a 1.4-m Mizuguchi-Dragone telescope, natural sky rotation, and frequent boresight rotation to minimize systematic contamination. The levels of contamination in the inflationary signal are below r=0.1, the best yet achieved by any B-mode polarimeter. Moreover, QUIET is unique among B-mode polarimeters in using a large focal-plane array of miniaturized High--Electron-Mobility Transistor (HEMT) based coherent detectors. These detectors take advantage of a breakthrough in microwave-circuit packaging to achieve a sensitivity of 69 μK√s. QUIET has collected > 10,000 hours of data and recently released results from the first observing season at Q band (43 GHz). Analysis of W-band (95-GHz) data is ongoing. I will describe the Q-band calibration plan which uses a combination of astronomical and artificial sources to convert the raw data into polarization measurements with small and well-understood calibration errors. I will also give a status report on calibration for the upcoming W-band results.

  10. Calibrating Parameters of Power System Stability Models using Advanced Ensemble Kalman Filter

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, Renke; Diao, Ruisheng; Li, Yuanyuan

    With the ever increasing penetration of renewable energy, smart loads, energy storage, and new market behavior, today’s power grid becomes more dynamic and stochastic, which may invalidate traditional study assumptions and pose great operational challenges. Thus, it is of critical importance to maintain good-quality models for secure and economic planning and real-time operation. Following the 1996 Western Systems Coordinating Council (WSCC) system blackout, North American Electric Reliability Corporation (NERC) and Western Electricity Coordinating Council (WECC) in North America enforced a number of policies and standards to guide the power industry to periodically validate power grid models and calibrate poor parameters with the goal of building sufficient confidence in model quality. The PMU-based approach using online measurements without interfering with the operation of generators provides a low-cost alternative to meet NERC standards. This paper presents an innovative procedure and tool suites to validate and calibrate models based on a trajectory sensitivity analysis method and an advanced ensemble Kalman filter algorithm. The developed prototype demonstrates excellent performance in identifying and calibrating bad parameters of a realistic hydro power plant against multiple system events.
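
    At its core, ensemble-Kalman-filter parameter calibration repeats a textbook analysis step: run the dynamic model for every ensemble member, compare predicted and measured responses, and nudge the parameter ensemble. A generic sketch of one such step follows (a standard stochastic EnKF update, not the developed production tool).

```python
import numpy as np

def enkf_update(X, y_obs, h, R, rng):
    """One EnKF analysis step for parameter calibration.

    X     : (n_par, n_ens) ensemble of parameter vectors
    y_obs : (n_meas,) observed measurements (e.g., PMU responses)
    h     : forward model mapping a parameter vector to predicted measurements
    R     : (n_meas, n_meas) observation error covariance
    """
    n_ens = X.shape[1]
    Y = np.column_stack([h(X[:, i]) for i in range(n_ens)])
    Xp = X - X.mean(axis=1, keepdims=True)
    Yp = Y - Y.mean(axis=1, keepdims=True)
    Pxy = Xp @ Yp.T / (n_ens - 1)
    Pyy = Yp @ Yp.T / (n_ens - 1) + R
    K = Pxy @ np.linalg.inv(Pyy)                 # Kalman gain
    # Perturbed observations keep the updated ensemble spread consistent.
    Y_obs = y_obs[:, None] + rng.multivariate_normal(
        np.zeros(len(y_obs)), R, n_ens).T
    return X + K @ (Y_obs - Y)
```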

  11. Radiometric Calibration of a Dual-Wavelength, Full-Waveform Terrestrial Lidar

    PubMed Central

    Li, Zhan; Jupp, David L. B.; Strahler, Alan H.; Schaaf, Crystal B.; Howe, Glenn; Hewawasam, Kuravi; Douglas, Ewan S.; Chakrabarti, Supriya; Cook, Timothy A.; Paynter, Ian; Saenz, Edward J.; Schaefer, Michael

    2016-01-01

    Radiometric calibration of the Dual-Wavelength Echidna® Lidar (DWEL), a full-waveform terrestrial laser scanner with two simultaneously-pulsing infrared lasers at 1064 nm and 1548 nm, provides accurate dual-wavelength apparent reflectance (ρapp), a physically-defined value that is related to the radiative and structural characteristics of scanned targets and independent of range and instrument optics and electronics. The errors of ρapp are 8.1% for 1064 nm and 6.4% for 1548 nm. A sensitivity analysis shows that ρapp error is dominated by range errors at near ranges, but by lidar intensity errors at far ranges. Our semi-empirical model for radiometric calibration combines a generalized logistic function to explicitly model telescopic effects due to defocusing of return signals at near range with a negative exponential function to model the fall-off of return intensity with range. Accurate values of ρapp from the radiometric calibration improve the quantification of vegetation structure, facilitate the comparison and coupling of lidar datasets from different instruments, campaigns or wavelengths and advance the utilization of bi- and multi-spectral information added to 3D scans by novel spectral lidars. PMID:26950126

  12. A simple and sensitive quantitation of N,N-dimethyltryptamine by gas chromatography with surface ionization detection.

    PubMed

    Ishii, A; Seno, H; Suzuki, O; Hattori, H; Kumazawa, T

    1997-01-01

    A simple and sensitive method for determination of N,N-dimethyltryptamine (DMT) by gas chromatography (GC) with surface ionization detection (SID) is presented. Whole blood or urine, containing DMT and gramine (internal standard), was subjected to solid-phase extraction with a Sep-Pak C18 cartridge before analysis by GC-SID. The calibration curve was linear in the DMT range of 1.25-20 ng/mL blood or urine. The detection limit of DMT was about 0.5 ng/mL (10 pg on-column). The recovery of both DMT and gramine spiked in biological fluids was above 86%.

  13. Matrix-effect free multi-residue analysis of veterinary drugs in food samples of animal origin by nanoflow liquid chromatography high resolution mass spectrometry.

    PubMed

    Alcántara-Durán, Jaime; Moreno-González, David; Gilbert-López, Bienvenida; Molina-Díaz, Antonio; García-Reyes, Juan F

    2018-04-15

    In this work, a sensitive method based on nanoflow liquid chromatography high-resolution mass spectrometry has been developed for the multiresidue determination of veterinary drug residues in honey, veal muscle, egg and milk. Salting-out supported liquid extraction was employed as sample treatment for milk, veal muscle and egg, while a modified QuEChERS procedure was used in honey. The enhancement of sensitivity provided by the nanoflow LC system also allowed the implementation of dilution factors as high as 100:1. For all matrices tested, matrix effects were negligible starting from a dilution factor of 100, thus enabling the use of external standard calibration instead of matrix-matched calibration of each sample, and the subsequent increase of laboratory throughput. At spiked levels as low as 0.1 or 1 µg kg⁻¹ before the 1:100 dilution, the obtained signals were still significantly higher than the instrumental limit of quantitation (S/N = 10). Copyright © 2017 Elsevier Ltd. All rights reserved.

  14. Noise temperature and noise figure concepts: DC to light

    NASA Technical Reports Server (NTRS)

    Stelzried, C. T.

    1982-01-01

    The Deep Space Network is investigating the use of higher operational frequencies for improved performance. Noise temperature and noise figure concepts are used to describe the noise performance of these receiving systems. It is proposed to modify present noise temperature definitions for linear amplifiers so they will be valid over the full range of (hf/kT), from (hf/kT) << 1 to (hf/kT) >> 1. This is important for systems operating at high frequencies and low noise temperatures, or systems requiring very accurate calibrations. The suggested definitions are such that for an ideal amplifier, T(sub e) = hf/k = T(sub q) and F = 1. These definitions revert to the present definitions for (hf/kT) << 1. Noise temperature calibrations are illustrated with a detailed example. These concepts are applied to system signal-to-noise analysis. The fundamental limit to a receiving system's sensitivity is determined by the thermal noise of the source and the quantum noise limit of the receiver. The sensitivity of a receiving system consisting of an ideal linear amplifier with a 2.7 K source degrades significantly at higher frequencies.
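
    A common Planck-form relation consistent with the limits quoted above (our illustration; the report's exact definitions may differ) expresses the Rayleigh-Jeans-equivalent temperature of a thermal source at physical temperature T as

```latex
T_{RJ}(f,T) \;=\; \frac{hf/k}{e^{\,hf/kT}-1},
```

    which reduces to T for hf/kT << 1 and falls toward zero at high frequency, while an ideal linear amplifier contributes at least the quantum-limited noise temperature T(sub q) = hf/k. Together these two terms set the fundamental sensitivity floor mentioned in the abstract.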

  15. Broadband electromagnetic sensors for aircraft lightning research. [electromagnetic effects of lightning on aircraft digital equipment

    NASA Technical Reports Server (NTRS)

    Trost, T. F.; Zaepfel, K. P.

    1980-01-01

    A set of electromagnetic sensors, or electrically-small antennas, is described. The sensors are designed for installation on an F-106 research aircraft for the measurement of electric and magnetic fields and currents during a lightning strike. The electric and magnetic field sensors mount on the aircraft skin. The current sensor mounts between the nose boom and the fuselage. The sensors are all on the order of 10 cm in size and should produce up to about 100 V for the estimated lightning fields. The basic designs are the same as those developed for nuclear electromagnetic pulse studies. The most important electrical parameters of the sensors are the sensitivity, or equivalent area, and the bandwidth (or rise time). Calibration of sensors with simple geometries is reliably accomplished by a geometric analysis; all the sensors discussed possess geometries for which the sensitivities have been calculated. For the calibration of sensors with more complex geometries and for general testing of all sensors, two transmission lines were constructed to transmit known pulsed fields and currents over the sensors.

  16. Calibration Technique for Polarization-Sensitive Lidars

    NASA Technical Reports Server (NTRS)

    Alvarez, J. M.; Vaughan, M. A.; Hostetler, C. A.; Hung, W. H.; Winker, D. M.

    2006-01-01

    Polarization-sensitive lidars have proven to be highly effective in discriminating between spherical and non-spherical particles in the atmosphere. These lidars use a linearly polarized laser and are equipped with a receiver that can separately measure the components of the return signal polarized parallel and perpendicular to the outgoing beam. In this work we describe a technique for calibrating polarization-sensitive lidars that was originally developed at NASA's Langley Research Center (LaRC) and has been used continually over the past fifteen years. The procedure uses a rotatable half-wave plate inserted into the optical path of the lidar receiver to introduce controlled amounts of polarization cross-talk into a sequence of atmospheric backscatter measurements. Solving the resulting system of nonlinear equations generates the system calibration constants (gain ratio, G, and offset angle, theta) required for deriving calibrated measurements of depolarization ratio from the lidar signals. In addition, this procedure also determines the mean depolarization ratio within the region of the atmosphere that is analyzed. Simulations and error propagation studies show the method to be both reliable and well behaved. Operational details of the technique are illustrated using measurements obtained as part of Langley Research Center's participation in the First ISCCP Regional Experiment (FIRE).
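
    The calibration reduces to fitting a small nonlinear model to signal ratios recorded at several half-wave-plate settings. The model below assumes that a plate rotated by φ rotates the return polarization by 2φ and that the receiver has gain ratio G and offset angle θ; it is a sketch of the idea, not the LaRC equations.

```python
import numpy as np
from scipy.optimize import curve_fit

def ratio_model(phi_deg, G, theta_deg, delta):
    """Perpendicular/parallel signal ratio vs. half-wave-plate angle for
    an atmospheric depolarization ratio delta."""
    a = np.radians(2.0 * phi_deg + theta_deg)
    num = np.sin(a) ** 2 + delta * np.cos(a) ** 2
    den = np.cos(a) ** 2 + delta * np.sin(a) ** 2
    return G * num / den

phi = np.array([0.0, 15.0, 30.0, 45.0])        # plate settings (degrees)
obs = ratio_model(phi, 1.1, 2.0, 0.02)         # synthetic "measurements"
(G, theta, delta), _ = curve_fit(ratio_model, phi, obs, p0=[1.0, 0.0, 0.05])
# Solving for G, theta, and delta simultaneously mirrors the abstract's
# nonlinear system, which also yields the mean depolarization ratio.
```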

  17. Calibration of the pressure sensitivity of microphones by a free-field method at frequencies up to 80 kHz.

    PubMed

    Zuckerwar, Allan J; Herring, G C; Elbing, Brian R

    2006-01-01

    A free-field (FF) substitution method for calibrating the pressure sensitivity of microphones at frequencies up to 80 kHz is demonstrated with both grazing and normal-incidence geometries. The substitution-based method, as opposed to a simultaneous method, avoids problems associated with the nonuniformity of the sound field and, as applied here, uses a 1/4-in. air-condenser pressure microphone as a known reference. Best results were obtained with a centrifugal fan, which is used as a random, broadband sound source. A broadband source minimizes reflection-related interferences that can plague FF measurements. Calibrations were performed on 1/4-in. FF air-condenser, electret, and microelectromechanical systems (MEMS) microphones in an anechoic chamber. The uncertainty of this FF method is estimated by comparing the pressure sensitivity of an air-condenser FF microphone, as derived from the FF measurement, with that of an electrostatic actuator calibration. The root-mean-square difference is found to be +/- 0.3 dB over the range 1-80 kHz, and the combined standard uncertainty of the FF method, including other significant contributions, is +/- 0.41 dB.
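
    The substitution principle itself is a one-line relation: if the reference and test microphones are exposed sequentially to the same sound field, the unknown sensitivity follows from the ratio of output voltages,

```latex
M_{x} \;=\; M_{ref}\,\frac{V_{x}}{V_{ref}},
\qquad\text{or in level form}\qquad
M_{x}\,[\mathrm{dB}] \;=\; M_{ref}\,[\mathrm{dB}] + 20\log_{10}\!\frac{V_{x}}{V_{ref}},
```

    which is why the method is insensitive to the absolute level of the broadband source but sensitive to any change in the field between the two placements.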

  18. Calibration strategy for the COROT photometry

    NASA Astrophysics Data System (ADS)

    Buey, J.-T.; Auvergne, M.; Lapeyrere, V.; Boumier, P.

    2004-01-01

    Like Eddington, the COROT photometer will measure very small fluctuations on a large signal: the amplitudes of planetary transits and solar-like oscillations are expressed in ppm (parts per million). For such an instrument, specific calibration has to be done during the different phases of the development of the instrument and of all the subsystems. Two main things have to be taken into account: - the calibration during the study phase; - the calibration of the sub-systems and building of numerical models. The first item allows us to clearly understand all the perturbations (internal and external) and to identify their relative impacts on the expected signal (by numerical models including expected values of perturbations and sensitivity of the instrument). Methods and a schedule for the calibration process can also be introduced, in good agreement with the development plan of the instrument. The second item is more related to the measurement of the sensitivity of the instrument and all its sub-systems. As the instrument is designed to be as stable as possible, we have to mix measurements (with larger fluctuations of parameters than expected) and numerical models. Some typical reasons for that are: - there are many parameters to introduce in the measurements and results from some models (bread-board for example) may be extrapolated to the flight model; - larger fluctuations than expected are used (to measure precisely the sensitivity) and numerical models give the real value of noise with the expected fluctuations. - Characteristics of sub-systems may be measured and models used to give the sensitivity of the whole system built with them, as end-to-end measurements may be impossible (time, budget, physical limitations). Also, house-keeping measurements have to be set up on the critical parts of the sub-systems: measurements on thermal probes, power supply, pointing, etc. All these house-keeping data are used during ground calibration and during the flight, so that correct correlation between signal and house-keeping can be achieved.

  19. Aquarius Instrument Science Calibration During the Risk Reduction Phase

    NASA Technical Reports Server (NTRS)

    Ruf, Christopher S.

    2004-01-01

    This final report presents the results of work performed under NASA Grant NAG512726 during the period 15 January 2003 through 30 June 2004. An analysis was performed of a possible vicarious calibration method for use by Aquarius to monitor and stabilize the absolute and relative calibration of its microwave radiometer. Stationary statistical properties of the brightness temperature (T(sub B)) measured by a low Earth orbiting radiometer operating at 1.4135 GHz are considered as a means of validating its absolute calibration. The global minimum, maximum, and average T(sub B) are considered, together with a vicarious cold reference method that detects the presence of a sharp lower bound on naturally occurring values for T(sub B). Of particular interest is the reliability with which these statistics can be extracted from a realistic distribution of T(sub B) measurements that would be observed by a typical sensor. Simulations of measurements are performed that include the effects of instrument noise and variable environmental factors such as the global water vapor and ocean surface temperature, salinity and wind distributions. Global minima can vary widely due to instrument noise and are not a reliable calibration reference. Global maxima are strongly influenced by several environmental factors as well as instrument noise and are even less stationary. Global averages are largely insensitive to instrument noise and, in most cases, to environmental conditions as well. The global average T(sub B) varies at only the 0.1 K RMS level except in cases of anomalously high winds, when it can increase considerably more. The vicarious cold reference is similarly insensitive to instrument effects and most environmental factors. It is not significantly affected by high wind conditions. The stability of the vicarious reference is, however, found to be somewhat sensitive (at the several tenths of Kelvins level) to variations in the background cold space brightness, T(sub c). The global average is much less sensitive to this parameter and so using two approaches together can be mutually beneficial.
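
    The vicarious cold reference amounts to locating a sharp lower bound in the distribution of measured T(sub B), and its robustness compared to the raw global minimum is easy to see numerically. A minimal sketch under that reading, in Python (all values are illustrative placeholders, not mission numbers):

      import numpy as np

      rng = np.random.default_rng(0)

      # Synthetic ocean brightness temperatures: a sharp physical floor
      # (illustrative 82 K), environmental spread above it, plus 0.1 K
      # of receiver noise.
      t_floor = 82.0
      tb = t_floor + rng.gamma(shape=2.0, scale=3.0, size=200_000)
      tb += rng.normal(0.0, 0.1, size=tb.size)

      # The global minimum is dominated by noise excursions below the floor,
      # while a low quantile gives a stable "vicarious cold" estimate and
      # the global average tracks overall calibration drift.
      print("global min        :", tb.min())
      print("vicarious cold ref:", np.quantile(tb, 0.001))
      print("global average    :", tb.mean())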

  20. Quasi-Static Calibration Method of a High-g Accelerometer

    PubMed Central

    Wang, Yan; Fan, Jinbiao; Zu, Jing; Xu, Peng

    2017-01-01

    To solve the problem of resonance during quasi-static calibration of high-g accelerometers, we deduce the relationship between the minimum excitation pulse width and the resonant frequency of the calibrated accelerometer from the second-order mathematical model of the accelerometer, and thereby improve the quasi-static calibration theory. We establish a quasi-static calibration testing system, which uses a gas gun to generate high-g acceleration signals and a laser interferometer to reproduce the impact acceleration. These signals are used to drive the calibrated accelerometer. By comparing the excitation acceleration signal with the output response of the calibrated accelerometer, the impact sensitivity of the calibrated accelerometer is obtained. As indicated by the calibration test results, this calibration system produces excitation acceleration signals with a pulse width of less than 1000 μs and realizes the quasi-static calibration of high-g accelerometers with resonant frequencies above 20 kHz, with a calibration error within 3%. PMID:28230743
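
    Read this way, the impact sensitivity is a least-squares ratio between the accelerometer output voltage and the interferometer-reproduced acceleration over the excitation pulse. A hedged sketch of that comparison (signal shapes and numbers are hypothetical, not from the study):

      import numpy as np

      # Hypothetical digitized records over one excitation pulse:
      # a_ref - impact acceleration reproduced by the laser interferometer (m/s^2)
      # v_out - output voltage of the accelerometer under calibration (V)
      t = np.linspace(0.0, 800e-6, 4000)                 # 800 us pulse window
      a_ref = 2.0e5 * np.sin(np.pi * t / 800e-6) ** 2    # synthetic pulse
      v_out = 1.5e-5 * a_ref                             # ideal 15 uV/(m/s^2) device

      # Least-squares impact sensitivity S (V per m/s^2), minimizing
      # sum((v_out - S * a_ref)^2) over the pulse.
      S = np.dot(a_ref, v_out) / np.dot(a_ref, a_ref)
      print(f"impact sensitivity: {S * 1e6:.2f} uV/(m/s^2)")

      # Quasi-static validity check from the second-order model: the pulse
      # should span many natural periods of the accelerometer.
      f_res = 20e3                                       # resonant frequency (Hz)
      print("pulse width in natural periods:", 800e-6 * f_res)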

  1. A computer model of long-term salinity in San Francisco Bay: Sensitivity to mixing and inflows

    USGS Publications Warehouse

    Uncles, R.J.; Peterson, D.H.

    1995-01-01

    A two-level model of the residual circulation and tidally-averaged salinity in San Francisco Bay has been developed in order to interpret long-term (days to decades) salinity variability in the Bay. Applications of the model to biogeochemical studies are also envisaged. The model has been used to simulate daily-averaged salinity in the upper and lower levels of a 51-segment discretization of the Bay over the 22-y period 1967–1988. Observed, monthly-averaged surface salinity data and monthly averages of the daily-simulated salinity are in reasonable agreement, both near the Golden Gate and in the upper reaches, close to the delta. Agreement is less satisfactory in the central reaches of North Bay, in the vicinity of Carquinez Strait. Comparison of daily-averaged data at Station 5 (Pittsburg, in the upper North Bay) with modeled data indicates close agreement with a correlation coefficient of 0.97 for the 4110 daily values. The model successfully simulates the marked seasonal variability in salinity as well as the effects of rapidly changing freshwater inflows. Salinity variability is driven primarily by freshwater inflow. The sensitivity of the modeled salinity to variations in the longitudinal mixing coefficients is investigated. The modeled salinity is relatively insensitive to the calibration factor for vertical mixing and relatively sensitive to the calibration factor for longitudinal mixing. The optimum value of the longitudinal calibration factor is 1.1, compared with the physically-based value of 1.0. Linear time-series analysis indicates that the observed and dynamically-modeled salinity-inflow responses are in good agreement in the lower reaches of the Bay.

  2. Sensitivity Analysis of earth and environmental models: a systematic review to guide scientific advancement

    NASA Astrophysics Data System (ADS)

    Wagener, Thorsten; Pianosi, Francesca

    2016-04-01

    Sensitivity Analysis (SA) investigates how the variation in the output of a numerical model can be attributed to variations of its input factors. SA is increasingly being used in earth and environmental modelling for a variety of purposes, including uncertainty assessment, model calibration and diagnostic evaluation, dominant control analysis and robust decision-making. Here we provide some practical advice regarding best practice in SA and discuss important open questions based on a detailed recent review of the existing body of work in SA. Open questions relate to the consideration of input factor interactions, methods for factor mapping and the formal inclusion of discrete factors in SA (for example for model structure comparison). We will analyse these questions using relevant examples and discuss possible ways forward. We aim at stimulating the discussion within the community of SA developers and users regarding the setting of good practices and on defining priorities for future research.

  3. The Pointing Self-calibration Algorithm for Aperture Synthesis Radio Telescopes

    NASA Astrophysics Data System (ADS)

    Bhatnagar, S.; Cornwell, T. J.

    2017-11-01

    This paper is concerned with algorithms for calibration of direction-dependent effects (DDE) in aperture synthesis radio telescopes (ASRT). After correction of direction-independent effects (DIE) using self-calibration, imaging performance can be limited by the imprecise knowledge of the forward gain of the elements in the array. In general, the forward gain pattern is directionally dependent and varies with time for a number of reasons. Some factors, such as rotation of the primary beam with Parallactic Angle for Azimuth-Elevation mount antennas, are known a priori. Others, such as antenna pointing errors and structural deformation/projection effects for aperture-array elements, cannot be measured a priori. Thus, in addition to algorithms to correct for DD effects known a priori, algorithms to solve for DD gains are required for high dynamic range imaging. Here, we discuss a mathematical framework for antenna-based DDE calibration algorithms and show that this framework leads to computationally efficient optimal algorithms that scale well in a parallel computing environment. As an example of an antenna-based DD calibration algorithm, we demonstrate the Pointing SelfCal (PSC) algorithm to solve for the antenna pointing errors. Our analysis shows that the sensitivity of modern ASRT is sufficient to solve for antenna pointing errors and other DD effects. We also discuss the use of the PSC algorithm in real-time calibration systems and extensions to an antenna Shape SelfCal algorithm for real-time tracking and corrections for pointing offsets and changes in antenna shape.

  4. Stochastic calibration and learning in nonstationary hydroeconomic models

    NASA Astrophysics Data System (ADS)

    Maneta, M. P.; Howitt, R.

    2014-05-01

    Concern about water scarcity and adverse climate events over agricultural regions has motivated a number of efforts to develop operational integrated hydroeconomic models to guide adaptation and the optimal use of water. Once calibrated, these models are used for water management and analysis, assuming they remain valid under future conditions. In this paper, we present and demonstrate a methodology that permits the recursive calibration of economic models of agricultural production from noisy but frequently available data. We use a standard economic calibration approach, namely positive mathematical programming (PMP), integrated in a data assimilation algorithm based on the ensemble Kalman filter (EnKF) equations to identify the economic model parameters. A moving average kernel ensures that new and past information on agricultural activity is blended during the calibration process, avoiding loss of information and overcalibration to the conditions of a single year. A regularization constraint akin to standard Tikhonov regularization is included in the filter to ensure its stability even in the presence of parameters with low sensitivity to observations. The results show that the implementation of the PMP methodology within a data assimilation framework based on the EnKF equations is an effective method to calibrate models of agricultural production even with noisy information. The recursive nature of the method incorporates new information as an added value to the known previous observations of agricultural activity without the need to store historical information. The robustness of the method opens the door to the use of new remote sensing algorithms for operational water management.
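
    A minimal sketch of the parameter-update step described here, assuming a stochastic EnKF with perturbed observations; the observation operator h() is a toy stand-in for the PMP production model, and all numbers are placeholders:

      import numpy as np

      rng = np.random.default_rng(1)

      def h(theta):
          """Toy observation operator mapping parameters to observed
          agricultural activity (stands in for the PMP model)."""
          return np.array([theta[0] + 0.5 * theta[1] ** 2])

      n_ens, r_obs = 100, 0.05 ** 2
      theta = rng.normal([1.0, 0.5], 0.1, size=(n_ens, 2))   # prior ensemble
      y_obs = np.array([1.30])                               # noisy observation

      Y = np.array([h(th) for th in theta])                  # predictions
      th_a, y_a = theta - theta.mean(0), Y - Y.mean(0)       # anomalies

      # Kalman gain from ensemble covariances; the small diagonal term plays
      # the role of the regularization stabilizing low-sensitivity parameters.
      c_ty = th_a.T @ y_a / (n_ens - 1)
      c_yy = y_a.T @ y_a / (n_ens - 1) + r_obs + 1e-6
      gain = c_ty / c_yy

      # Update each member against a perturbed copy of the observation.
      y_pert = y_obs + rng.normal(0.0, np.sqrt(r_obs), size=(n_ens, 1))
      theta += (y_pert - Y) @ gain.T
      print("posterior parameter mean:", theta.mean(0))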

  5. The Pointing Self-calibration Algorithm for Aperture Synthesis Radio Telescopes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bhatnagar, S.; Cornwell, T. J., E-mail: sbhatnag@nrao.edu

    This paper is concerned with algorithms for calibration of direction-dependent effects (DDE) in aperture synthesis radio telescopes (ASRT). After correction of direction-independent effects (DIE) using self-calibration, imaging performance can be limited by the imprecise knowledge of the forward gain of the elements in the array. In general, the forward gain pattern is directionally dependent and varies with time for a number of reasons. Some factors, such as rotation of the primary beam with Parallactic Angle for Azimuth–Elevation mount antennas, are known a priori. Others, such as antenna pointing errors and structural deformation/projection effects for aperture-array elements, cannot be measured a priori. Thus, in addition to algorithms to correct for DD effects known a priori, algorithms to solve for DD gains are required for high dynamic range imaging. Here, we discuss a mathematical framework for antenna-based DDE calibration algorithms and show that this framework leads to computationally efficient optimal algorithms that scale well in a parallel computing environment. As an example of an antenna-based DD calibration algorithm, we demonstrate the Pointing SelfCal (PSC) algorithm to solve for the antenna pointing errors. Our analysis shows that the sensitivity of modern ASRT is sufficient to solve for antenna pointing errors and other DD effects. We also discuss the use of the PSC algorithm in real-time calibration systems and extensions to an antenna Shape SelfCal algorithm for real-time tracking and corrections for pointing offsets and changes in antenna shape.

  6. Optical Mass Displacement Tracking: A simplified field calibration method for the electro-mechanical seismometer.

    NASA Astrophysics Data System (ADS)

    Burk, D. R.; Mackey, K. G.; Hartse, H. E.

    2016-12-01

    We have developed a simplified field calibration method for use in seismic networks that still employ the classical electro-mechanical seismometer. Smaller networks may not always have the financial capability to purchase and operate modern, state-of-the-art equipment. Therefore, these networks generally operate a modern, low-cost digitizer paired with an existing electro-mechanical seismometer. These systems are typically poorly calibrated. The station calibration is difficult to estimate because coil loading, digitizer input impedance, and amplifier gain differences vary by station and digitizer model. It is therefore necessary to calibrate the station channel as a complete system, taking into account all components from the instrument through the amplifier to the digitizer. Routine calibrations at smaller networks are not always consistent, because existing calibration techniques require either specialized equipment or significant technical expertise. To improve station data quality at small networks, we developed a calibration method that utilizes open-source software and a commonly available laser position sensor. Using a signal generator and a small excitation coil, we force the mass of the instrument to oscillate at various frequencies across its operating range. We then compare the channel voltage output to the laser-measured mass displacement to determine the instrument voltage sensitivity at each frequency point. Using the standard equations of forced motion, a representation of the calibration curve as a function of voltage per unit of ground velocity is calculated. A computer algorithm optimizes the curve and then translates the instrument response into the Seismic Analysis Code (SAC) poles & zeros format. Results have been demonstrated to fall within a few percent of a standard laboratory calibration. This method is an effective and affordable option for networks that employ electro-mechanical seismometers, and it is currently being deployed in regional networks throughout Russia and Central Asia.
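
    A simplified sketch of the per-frequency computation: the laser-measured mass displacement amplitude is converted to a velocity amplitude and divided into the channel voltage amplitude. The forced-motion correction that refers mass motion to ground motion (and the subsequent pole-zero fit) is omitted here for brevity; all data are illustrative:

      import numpy as np

      # Hypothetical measurements at each excitation frequency:
      freqs_hz = np.array([0.5, 1.0, 2.0, 5.0, 10.0])       # drive frequencies
      v_amp = np.array([0.8, 1.9, 3.6, 4.1, 4.0])           # channel output (V)
      x_amp_um = np.array([50.0, 60.0, 55.0, 25.0, 12.0])   # mass displacement (um)

      # Velocity amplitude of a sinusoidally driven mass: |v| = 2*pi*f*|x|.
      vel_amp = 2.0 * np.pi * freqs_hz * x_amp_um * 1e-6    # m/s

      # First-pass response curve in V/(m/s); the full method then applies
      # the standard equations of forced motion before fitting poles & zeros.
      for f, s in zip(freqs_hz, v_amp / vel_amp):
          print(f"{f:5.1f} Hz : {s:10.1f} V/(m/s)")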

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vaccarono, Mattia; Bechini, Renzo; Chandrasekar, Chandra V.

    The stability of weather radar calibration is a mandatory aspect for quantitative applications, such as rainfall estimation, short-term weather prediction, and initialization of numerical atmospheric and hydrological models. Over the years, calibration monitoring techniques based on external sources have been developed, specifically calibration using the Sun and calibration based on ground clutter returns. In this paper, these two techniques are integrated and complemented with a self-consistency procedure and an intercalibration technique. The aim of the integrated approach is to implement a robust method for online monitoring, able to detect significant changes in the radar calibration. The physical consistency of polarimetric radar observables is exploited using the self-consistency approach, based on the expected correspondence between dual-polarization power and phase measurements in rain. This technique allows a reference absolute value to be provided for the radar calibration, from which any deviations may be detected using the other procedures. In particular, the ground clutter calibration is implemented on both polarization channels (horizontal and vertical) for each radar scan, allowing the polarimetric variables to be monitored and hardware failures to be promptly recognized. The Sun calibration allows monitoring of the calibration and sensitivity of the radar receiver, in addition to the antenna pointing accuracy. It is applied using observations collected during the standard operational scans but requires long integration times (several days) in order to accumulate a sufficient amount of useful data. Finally, an intercalibration technique is developed and performed to compare colocated measurements collected in rain by two radars in overlapping regions. The integrated approach is applied to the C-band weather radar network in northwestern Italy, during July–October 2014. The set of methods considered appears suitable to establish an online tool to monitor the stability of the radar calibration with an accuracy of about 2 dB. In conclusion, this is considered adequate to automatically detect any unexpected change in the radar system requiring further data analysis or on-site measurements.

  8. Biotrickling filter modeling for styrene abatement. Part 2: Simulating a two-phase partitioning bioreactor.

    PubMed

    San-Valero, Pau; Dorado, Antonio D; Quijano, Guillermo; Álvarez-Hornos, F Javier; Gabaldón, Carmen

    2018-01-01

    A dynamic model describing styrene abatement was developed for a two-phase partitioning bioreactor operated as a biotrickling filter (TPPB-BTF). The model was built as a coupled set of two different systems of partial differential equations depending on whether an irrigation or a non-irrigation period was simulated. The maximum growth rate was previously calibrated from a conventional BTF treating styrene (Part 1). The model was extended to simulate the TPPB-BTF based on the hypothesis that the main change associated with the non-aqueous phase is the modification of the pollutant properties in the liquid phase. The three phases considered were gas, a water-silicone liquid mixture, and biofilm. The selected calibration parameters were related to the physical properties of styrene: Henry's law constant, diffusivity, and the gas-liquid mass transfer coefficient. A sensitivity analysis revealed that Henry's law constant was the most sensitive parameter. The model was successfully calibrated with a goodness of fit of 0.94. It satisfactorily simulated the performance of the TPPB-BTF at styrene loads ranging from 13 to 77 g C m^-3 h^-1 and empty bed residence times of 30-15 s, with the mass transfer enhanced by a factor of 1.6. The model was validated with data obtained in a TPPB-BTF removing styrene continuously. The experimental outlet emissions associated with oscillating inlet concentrations were satisfactorily predicted using the calibrated parameters. Model simulations demonstrated the potential improvement of the mass-transfer performance of a conventional BTF degrading styrene by adding silicone oil. Copyright © 2017 Elsevier Ltd. All rights reserved.

  9. Accuracy of a new real-time continuous glucose monitoring algorithm.

    PubMed

    Keenan, D Barry; Cartaya, Raymond; Mastrototaro, John J

    2010-01-01

    Through minimally invasive sensor-based continuous glucose monitoring (CGM), individuals can manage their blood glucose (BG) levels more aggressively, thereby improving their hemoglobin A1c level while reducing the risk of hypoglycemia. Tighter glycemic control through CGM, however, requires an accurate glucose sensor and a calibration algorithm with increased performance at lower BG levels. Sensor and BG measurements for 72 adult and adolescent subjects were obtained during the course of a 26-week multicenter study evaluating the efficacy of the Paradigm REAL-Time (PRT) sensor-augmented pump system (Medtronic Diabetes, Northridge, CA) in an outpatient setting. Subjects in the study arm performed at least four daily finger stick measurements. A retrospective analysis of the data set was performed to evaluate a new calibration algorithm utilized in the Paradigm Veo insulin pump (Medtronic Diabetes) and to compare these results to performance metrics calculated for the PRT. A total of N = 7193 PRT sensor downloads for 3 days of use, as well as 90,472 temporally and nonuniformly paired data points (sensor and meter values), were evaluated, with 5841 hypoglycemic and 15,851 hyperglycemic events detected through finger stick measurements. The Veo calibration algorithm decreased the overall mean absolute relative difference by more than 0.25 percentage points, to 15.89%, with hypoglycemia sensitivity increased from 54.9% in the PRT to 82.3% in the Veo (90.5% with predictive alerts); hyperglycemia sensitivity, however, decreased only marginally, from 86% in the PRT to 81.7% in the Veo. The Veo calibration algorithm, with sensor error reduced significantly in the 40- to 120-mg/dl range, improves hypoglycemia detection while retaining accuracy at high glucose levels. 2010 Diabetes Technology Society.
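
    The headline metrics reduce to simple computations over temporally paired sensor/meter values. A hedged sketch of the sample-based versions, with hypothetical data, the 70 mg/dl hypoglycemic threshold from the abstract, and an assumed (not stated) 180 mg/dl hyperglycemic threshold:

      import numpy as np

      rng = np.random.default_rng(2)

      meter = rng.uniform(40, 400, size=5000)              # finger-stick BG (mg/dl)
      sensor = meter * rng.normal(1.0, 0.12, size=5000)    # paired CGM readings

      # Mean absolute relative difference (MARD), the headline accuracy metric.
      mard = np.mean(np.abs(sensor - meter) / meter) * 100
      print(f"MARD: {mard:.2f}%")

      # Sample-based hypoglycemia sensitivity at 70 mg/dl: fraction of
      # meter-confirmed hypoglycemic points that the sensor also flags.
      hypo = meter <= 70
      print(f"hypo sensitivity : {np.mean(sensor[hypo] <= 70):.1%}")

      # Hyperglycemia sensitivity, computed the same way at 180 mg/dl.
      hyper = meter >= 180
      print(f"hyper sensitivity: {np.mean(sensor[hyper] >= 180):.1%}")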

  10. DGSA: A Matlab toolbox for distance-based generalized sensitivity analysis of geoscientific computer experiments

    NASA Astrophysics Data System (ADS)

    Park, Jihoon; Yang, Guang; Satija, Addy; Scheidt, Céline; Caers, Jef

    2016-12-01

    Sensitivity analysis plays an important role in geoscientific computer experiments, whether for forecasting, data assimilation or model calibration. In this paper we focus on an extension of a method of regionalized sensitivity analysis (RSA) to applications typical in the Earth Sciences. Such applications involve the building of large complex spatial models, the application of computationally extensive forward modeling codes and the integration of heterogeneous sources of model uncertainty. The aim of this paper is to be practical: 1) provide a Matlab code, 2) provide novel visualization methods to aid users in better understanding the sensitivity, 3) provide a method based on kernel principal component analysis (KPCA) and self-organizing maps (SOM) to account for spatial uncertainty typical in Earth Science applications, and 4) provide an illustration on a real field case where the above-mentioned complexities present themselves. We present methods that extend the original RSA method in several ways. First, we present the calculation of conditional effects, defined as the sensitivity of a parameter given a level of another parameter. Second, we show how this conditional effect can be used to choose nominal values or ranges to fix insensitive parameters, aiming to minimally affect uncertainty in the response. Third, we develop a method based on KPCA and SOM to assign a rank to spatial models in order to calculate the sensitivity to spatial variability in the models. A large oil/gas reservoir case is used as an illustration of these ideas.
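
    At its core, regionalized sensitivity analysis classifies model runs and then asks, per input parameter, how far apart the class-conditional parameter distributions are. A simplified stand-in for the toolbox's distance-based clustering, using a two-class split of a scalar response and a Kolmogorov-Smirnov distance (the toy model and names are hypothetical):

      import numpy as np
      from scipy.stats import ks_2samp

      rng = np.random.default_rng(3)

      # Toy experiment: the response depends strongly on x0, weakly on x1,
      # and not at all on x2.
      X = rng.uniform(0, 1, size=(2000, 3))
      y = X[:, 0] ** 2 + 0.1 * X[:, 1] + 0.01 * rng.normal(size=2000)

      # Classify runs by response (DGSA instead clusters full spatial
      # responses via a distance matrix; a median split suffices here).
      cls = y > np.median(y)

      # Per-parameter sensitivity: distance between the parameter's
      # distributions in the two classes.
      for j in range(X.shape[1]):
          d = ks_2samp(X[cls, j], X[~cls, j]).statistic
          print(f"x{j}: KS distance = {d:.3f}")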

  11. Analysis of H2O in silicate glass using attenuated total reflectance (ATR) micro-FTIR spectroscopy

    USGS Publications Warehouse

    Lowenstern, Jacob B.; Pitcher, Bradley W.

    2013-01-01

    We present a calibration for attenuated total reflectance (ATR) micro-FTIR for analysis of H2O in hydrous glass. A Ge ATR accessory was used to measure evanescent wave absorption by H2O within hydrous rhyolite and other standards. Absorbance at 3450 cm−1 (representing total H2O or H2Ot) and 1630 cm−1 (molecular H2O or H2Om) showed high correlation with measured H2O in the glasses as determined by transmission FTIR spectroscopy and manometry. For rhyolite, wt%H2O=245(±9)×A3450-0.22(±0.03) and wt%H2Om=235(±11)×A1630-0.20(±0.03) where A3450 and A1630 represent the ATR absorption at the relevant infrared wavelengths. The calibration permits determination of volatiles in singly polished glass samples with spot size down to ~5 μm (for H2O-rich samples) and detection limits of ~0.1 wt% H2O. Basaltic, basaltic andesite and dacitic glasses of known H2O concentrations fall along a density-adjusted calibration, indicating that ATR is relatively insensitive to glass composition, at least for calc-alkaline glasses. The following equation allows quantification of H2O in silicate glasses that range in composition from basalt to rhyolite: wt%H2O=(ω×A3450/ρ)+b where ω = 550 ± 21, b = −0.19 ± 0.03, ρ = density, in g/cm3, and A3450 is the ATR absorbance at 3450 cm−1. The ATR micro-FTIR technique is less sensitive than transmission FTIR, but requires only a singly polished sample for quantitative results, thus minimizing time for sample preparation. Compared with specular reflectance, it is more sensitive and better suited for imaging of H2O variations in heterogeneous samples such as melt inclusions. One drawback is that the technique can damage fragile samples and we therefore recommend mounting of unknowns in epoxy prior to polishing. Our calibration should hold for any Ge ATR crystals with the same incident angle (31°). Use of a different crystal type or geometry would require measurement of several H2O-bearing standards to provide a crystal-specific calibration.
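
    Applying the composition-general calibration above is a one-line computation once absorbance and glass density are in hand. A short worked sketch that also propagates the quoted 1-sigma uncertainties in ω and b in quadrature (measurement uncertainty in A3450 and ρ is ignored; the sample values are illustrative):

      import numpy as np

      def h2o_wt_percent(a_3450, rho_g_cm3):
          """wt% H2O = (omega * A3450 / rho) + b, per the density-adjusted
          ATR micro-FTIR calibration quoted above."""
          omega, d_omega = 550.0, 21.0
          b, d_b = -0.19, 0.03
          w = omega * a_3450 / rho_g_cm3 + b
          # 1-sigma uncertainty from omega and b only, added in quadrature.
          dw = np.hypot(d_omega * a_3450 / rho_g_cm3, d_b)
          return w, dw

      # Example: a glass of density 2.35 g/cm^3 with ATR absorbance 0.012
      # at 3450 cm^-1 (both values illustrative).
      w, dw = h2o_wt_percent(0.012, 2.35)
      print(f"H2O = {w:.2f} +/- {dw:.2f} wt%")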

  12. Structured light system calibration method with optimal fringe angle.

    PubMed

    Li, Beiwen; Zhang, Song

    2014-11-20

    For structured light system calibration, one popular approach is to treat the projector as an inverse camera. This is usually performed by projecting horizontal and vertical sequences of patterns to establish one-to-one mapping between camera points and projector points. However, for a well-designed system, either horizontal or vertical fringe images are not sensitive to depth variation and thus yield inaccurate mapping. As a result, the calibration accuracy is jeopardized if a conventional calibration method is used. To address this limitation, this paper proposes a novel calibration method based on optimal fringe angle determination. Experiments demonstrate that our calibration approach can increase the measurement accuracy up to 38% compared to the conventional calibration method with a calibration volume of 300(H)  mm×250(W)  mm×500(D)  mm.

  13. An Improved Interferometric Calibration Method Based on Independent Parameter Decomposition

    NASA Astrophysics Data System (ADS)

    Fan, J.; Zuo, X.; Li, T.; Chen, Q.; Geng, X.

    2018-04-01

    Interferometric SAR is sensitive to earth surface undulation. The accuracy of interferometric parameters plays a significant role in precise digital elevation model (DEM) generation. Interferometric calibration obtains a high-precision global DEM by calculating the interferometric parameters using ground control points (GCPs). However, interferometric parameters are always calculated jointly, making them difficult to decompose precisely. In this paper, we propose an interferometric calibration method based on independent parameter decomposition (IPD). Firstly, the parameters related to the interferometric SAR measurement are determined based on the three-dimensional reconstruction model. Secondly, the sensitivity of the interferometric parameters is quantitatively analyzed after the geometric parameters are completely decomposed. Finally, each interferometric parameter is calculated based on IPD and an interferometric calibration model is established. We take Weinan in Shaanxi province as an example and choose 4 TerraDEM-X image pairs to carry out the interferometric calibration experiment. The results show that the elevation accuracy of all SAR images is better than 2.54 m after interferometric calibration. Furthermore, the proposed method can obtain DEM products with an accuracy better than 2.43 m in flat areas and 6.97 m in mountainous areas, which demonstrates the correctness and effectiveness of the proposed IPD-based interferometric calibration method. The results provide a technical basis for topographic mapping at 1:50,000 and even larger scales in flat and mountainous areas.

  14. A high sensitivity fiber optic macro-bend based gas flow rate transducer for low flow rates: Theory, working principle, and static calibration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schena, Emiliano; Saccomandi, Paola; Silvestri, Sergio

    2013-02-15

    A novel fiber optic macro-bend based gas flowmeter for low flow rates is presented. Theoretical analysis of the sensor working principle, design, and static calibration were performed. The measuring system consists of an optical fiber, a light emitting diode (LED), a quadrant position sensitive detector (QD), and an analog electronic circuit for signal processing. The fiber tip undergoes a deflection in the flow, acting like a cantilever. The consequent displacement of the light spot center is monitored by the QD, generating four unbalanced photocurrents which are a function of the fiber tip position. The analog electronic circuit processes the photocurrents, providing a voltage signal proportional to the light spot position. A circular target was placed on the fiber in order to increase the sensing surface. The sensor, tested in the measurement range up to 10 l min^-1, shows a discrimination threshold of 2 l min^-1, extremely low fluid dynamic resistance (0.17 Pa min l^-1), and high sensitivity, also at low flow rates (i.e., 33 mV min l^-1 up to 4 l min^-1 and 98 mV min l^-1 from 4 l min^-1 up to 10 l min^-1). Experimental results agree with the theoretical predictions. The high sensitivity, along with the reduced dimensions and negligible pressure drop, makes the proposed transducer suitable for medical applications in neonatal ventilation.

  15. mHealth App for Risk Assessment of Pigmented and Nonpigmented Skin Lesions-A Study on Sensitivity and Specificity in Detecting Malignancy.

    PubMed

    Thissen, Monique; Udrea, Andreea; Hacking, Michelle; von Braunmuehl, Tanja; Ruzicka, Thomas

    2017-12-01

    With the advent of smartphone devices, an increasing number of mHealth applications that target melanoma identification have been developed, but none addresses the general context of melanoma and nonmelanoma skin cancer identification. In this study, a smartphone application using fractal and classical image analysis for the risk assessment of skin lesions is systematically evaluated to determine its sensitivity and specificity in the diagnosis of melanoma and nonmelanoma skin cancer along with actinic keratosis and Bowen's disease. In the Department of Dermatology, Catharina Hospital Eindhoven, The Netherlands, 341 melanocytic and nonmelanocytic lesions were imaged using the SkinVision app; 239 underwent histopathological examination, while the remaining 102 lesions were clinically diagnosed as clearly benign and not removed. The algorithm was calibrated using the images of the first 233 lesions. The calibrated version of the algorithm was used on a subset of 108 lesions, and the obtained results were compared with the medical findings. On the 108 cases used for evaluation, the algorithm scored 80% sensitivity and 78% specificity in detecting (pre)malignant conditions. Although less accurate than the dermatologist's clinical eye, the app may offer support to other professionals who are less familiar with differentiating between benign and malignant lesions. An mHealth application for the risk assessment of skin lesions was evaluated. It adds value to diagnostic tools of its type by considering pigmented and nonpigmented lesions together and detecting signs of malignancy with high sensitivity.
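
    The quoted figures reduce to a 2x2 confusion matrix against the medical findings. A small sketch of that bookkeeping (the counts below are hypothetical, chosen only so the 108-case arithmetic is visible):

      def sens_spec(tp, fn, tn, fp):
          """Sensitivity and specificity from a 2x2 confusion matrix."""
          return tp / (tp + fn), tn / (tn + fp)

      # Hypothetical split of 108 evaluation cases: app flag vs. finding.
      sens, spec = sens_spec(tp=40, fn=10, tn=45, fp=13)
      print(f"sensitivity: {sens:.0%}, specificity: {spec:.0%}")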

  16. Precise SAR measurements in the near-field of RF antenna systems

    NASA Astrophysics Data System (ADS)

    Hakim, Bandar M.

    Wireless devices must meet specific safety radiation limits, and in order to assess the health effects of such devices, standard procedures are used in which standard phantoms, tissue-equivalent liquids, and miniature electric field probes are employed. The accuracy of such measurements depends on the precision in measuring the dielectric properties of the tissue-equivalent liquids and the associated calibrations of the electric-field probes. This thesis describes work on the theoretical modeling and experimental measurement of the complex permittivity of tissue-equivalent liquids, and the associated calibration of miniature electric-field probes. The measurement method is based on measurements of the field attenuation factor and the power reflection coefficient of a tissue-equivalent sample. A novel method, to the best of the author's knowledge, for determining the dielectric properties and probe calibration factors is described and validated. The measurement system is validated using saline at different concentrations, and measurements of complex permittivity and calibration factors have been made on tissue-equivalent liquids at 900 MHz and 1800 MHz. Uncertainty analyses were conducted to study the measurement system sensitivity. Using the same waveguide to measure tissue-equivalent permittivity and to calibrate e-field probes eliminates a source of uncertainty associated with using two different measurement systems. The measurement system is used to test GSM cell-phones at 900 MHz and 1800 MHz for Specific Absorption Rate (SAR) compliance using a Specific Anthropomorphic Mannequin (SAM) phantom.

  17. An Innovative Software Tool Suite for Power Plant Model Validation and Parameter Calibration using PMU Measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Yuanyuan; Diao, Ruisheng; Huang, Renke

    Maintaining good quality of power plant stability models is of critical importance to ensure the secure and economic operation and planning of today's power grid with its increasingly stochastic and dynamic behavior. According to North American Electric Reliability Corporation (NERC) standards, all generators in North America with capacities larger than 10 MVA are required to validate their models every five years. Validation is quite costly and can significantly affect the revenue of generator owners, because the traditional staged testing requires generators to be taken offline. Over the past few years, validating and calibrating parameters using online measurements, including phasor measurement units (PMUs) and digital fault recorders (DFRs), has proven to be a cost-effective approach. In this paper, an innovative open-source tool suite is presented for validating power plant models using the PPMV tool, identifying bad parameters with trajectory sensitivity analysis, and finally calibrating parameters using an ensemble Kalman filter (EnKF) based algorithm. The architectural design and the detailed procedures to run the tool suite are presented, with results of tests on a realistic hydro power plant using PMU measurements for 12 different events. The calibrated parameters of the machine, exciter, governor, and PSS models demonstrate much better performance than the original models for all the events and show the robustness of the proposed calibration algorithm.
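
    The bad-parameter screening step rests on trajectory sensitivities: how much the simulated response trajectory moves per relative change in each model parameter. A hedged finite-difference sketch, with a toy simulator standing in for the plant model:

      import numpy as np

      def simulate(params, t):
          """Toy stand-in for the dynamic plant model: returns a response
          trajectory (e.g., active power) after a disturbance."""
          gain, damping = params
          return gain * np.exp(-damping * t) * np.cos(2 * np.pi * t)

      t = np.linspace(0, 5, 500)
      p0 = np.array([1.0, 0.4])               # nominal parameter values

      # Normalized central-difference trajectory sensitivity per parameter:
      # RMS of p_j * d(trajectory)/dp_j, so parameters are comparable.
      for j, name in enumerate(["gain", "damping"]):
          dp = np.zeros_like(p0)
          dp[j] = 0.01 * p0[j]                # 1% perturbation
          grad = (simulate(p0 + dp, t) - simulate(p0 - dp, t)) / (2 * dp[j])
          s = np.sqrt(np.mean((p0[j] * grad) ** 2))
          print(f"{name}: RMS trajectory sensitivity = {s:.3f}")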

  18. A comparison of calibration techniques for hot-wires operated in subsonic compressible slip flows

    NASA Technical Reports Server (NTRS)

    Jones, Gregory S.; Stainback, P. C.; Nagabushana, K. A.

    1992-01-01

    This paper focuses on the correlation of constant temperature anemometer voltages to velocity, density, and total temperature in the transonic slip flow regime. Three different calibration schemes were evaluated. The ultimate use of these hot-wire calibrations is to obtain fluctuations in the flow variables. Without the appropriate mean flow sensitivities of the heated wire, the measurements of these fluctuations cannot be accurately determined.

  19. Optical Interferometric Micrometrology

    NASA Technical Reports Server (NTRS)

    Abel, Phillip B.; Lauer, James R.

    1989-01-01

    Resolutions in angstrom and subangstrom range sought for atomic-scale surface probes. Experimental optical micrometrological system built to demonstrate calibration of piezoelectric transducer to displacement sensitivity of few angstroms. Objective to develop relatively simple system producing and measuring translation, across surface of specimen, of stylus in atomic-force or scanning tunneling microscope. Laser interferometer used to calibrate piezoelectric transducer used in atomic-force microscope. Electronic portion of calibration system made of commercially available components.

  20. Influence of Ultrasonic Nonlinear Propagation on Hydrophone Calibration Using Two-Transducer Reciprocity Method

    NASA Astrophysics Data System (ADS)

    Yoshioka, Masahiro; Sato, Sojun; Kikuchi, Tsuneo; Matsuda, Yoichi

    2006-05-01

    In this study, the influence of ultrasonic nonlinear propagation on hydrophone calibration by the two-transducer reciprocity method is investigated quantitatively using the Khokhlov-Zabolotskaya-Kuznetsov (KZK) equation. It is proposed that the correction for the diffraction and attenuation of ultrasonic waves used in two-transducer reciprocity calibration can be derived using the KZK equation to remove the influence of nonlinear propagation. The validity of the correction is confirmed by comparing the sensitivities calibrated by the two-transducer reciprocity method and laser interferometry.

  1. Heat transfer analysis of cylindrical anaerobic reactors with different sizes: a heat transfer model.

    PubMed

    Liu, Jiawei; Zhou, Xingqiu; Wu, Jiangdong; Gao, Wen; Qian, Xu

    2017-10-01

    Temperature is an essential factor influencing the efficiency of anaerobic reactors. During operation, fluctuations of the ambient temperature can change the internal temperature of the reactor. Therefore, insulation and heating measures are often used to maintain the anaerobic reactor's internal temperature. In this paper, a simplified heat transfer model was developed to study heat transfer between cylindrical anaerobic reactors and their surroundings. Three cylindrical reactors of different sizes were studied, and the internal relations between ambient temperature, thickness of insulation, and temperature fluctuations of the reactors were obtained for the different reactor sizes. The model was calibrated by a sensitivity analysis, and the calibrated model predicted reactor temperature well. The Nash-Sutcliffe model efficiency coefficient was used to assess the predictive power of the heat transfer models. The Nash coefficients of the three reactors were 0.76, 0.60, and 0.45, respectively. The model can provide a reference for the thermal insulation design of cylindrical anaerobic reactors.
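
    The Nash-Sutcliffe efficiency used to score the model is one line over paired observed and simulated temperatures; a minimal sketch (the temperature series below are made up):

      import numpy as np

      def nash_sutcliffe(obs, sim):
          """NSE = 1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2).
          1 is a perfect fit; 0 is no better than the observed mean."""
          obs, sim = np.asarray(obs), np.asarray(sim)
          return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

      # Hypothetical reactor temperatures (deg C), observed vs. modeled.
      obs = [35.1, 34.8, 33.9, 32.5, 31.8, 32.2, 33.0]
      sim = [35.0, 34.5, 34.2, 32.9, 32.3, 32.0, 32.6]
      print(f"NSE = {nash_sutcliffe(obs, sim):.2f}")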

  2. Characterizing property distributions of polymeric nanogels by size-exclusion chromatography.

    PubMed

    Mourey, Thomas H; Leon, Jeffrey W; Bennett, James R; Bryan, Trevor G; Slater, Lisa A; Balke, Stephen T

    2007-03-30

    Nanogels are highly branched, swellable polymer structures with average diameters between 1 and 100 nm. Size-exclusion chromatography (SEC) fractionates materials in this size range, and it is commonly used to measure nanogel molar mass distributions. For many nanogel applications, it may be more important to calculate the particle size distribution from the SEC data than the molar mass distribution. Other useful nanogel property distributions include particle shape, area, and volume, as well as polymer volume fraction per particle. All can be obtained from multi-detector SEC data with proper calibration and data analysis methods. This work develops the basic equations for calculating several of these differential and cumulative property distributions and applies them to SEC data from the analysis of polymeric nanogels. The methods are analogous to those used to calculate the more familiar SEC molar mass distributions. Calibration methods and characteristics of the distributions are discussed, and the effects of detector noise and mismatched concentration- and molar-mass-sensitive detector signals are examined.

  3. Variance-based Sensitivity Analysis of Large-scale Hydrological Model to Prepare an Ensemble-based SWOT-like Data Assimilation Experiments

    NASA Astrophysics Data System (ADS)

    Emery, C. M.; Biancamaria, S.; Boone, A. A.; Ricci, S. M.; Garambois, P. A.; Decharme, B.; Rochoux, M. C.

    2015-12-01

    Land Surface Models (LSMs), coupled with River Routing Schemes (RRMs), are used in Global Climate Models (GCMs) to simulate the continental part of the water cycle. They are key components of GCMs as they provide boundary conditions to the atmospheric and oceanic models. However, at the global scale, errors arise mainly from simplified physics, atmospheric forcing, and input parameters. In particular, the parameters used in RRMs, such as river width, depth, and friction coefficients, are difficult to calibrate and are mostly derived from geomorphologic relationships, which may not always be realistic. In situ measurements are then used to calibrate these relationships and validate the model, but global in situ data are very sparse. Additionally, due to the lack of an existing global river geomorphology database and accurate forcing, models are run at coarse resolution. This is typically the case for the ISBA-TRIP model used in this study. A complementary alternative to in situ data are satellite observations. In this regard, the Surface Water and Ocean Topography (SWOT) satellite mission, jointly developed by NASA/CNES/CSA/UKSA and scheduled for launch around 2020, should be very valuable for calibrating RRM parameters. It will provide maps of water surface elevation for rivers wider than 100 meters over continental surfaces between 78°S and 78°N, as well as direct observations of river geomorphological parameters such as width and slope. Yet, before assimilating such data, the temporal sensitivity of the RRM to its time-constant parameters needs to be analyzed. This study presents such an analysis over large river basins for the TRIP RRM. Model output uncertainty, represented by the unconditional variance, is decomposed into ordered contributions from each parameter. A time-dependent analysis then identifies the parameters to which modeled water level and discharge are most sensitive over a hydrological year. The results show that local parameters directly impact water levels, while discharge is more affected by parameters from the whole upstream drainage area. Understanding the behavior of the model output variance will have a direct impact on the design and performance of the ensemble-based data assimilation platform, for which uncertainties are also modeled by variances. It will help to select more objectively the RRM parameters to correct.
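
    A minimal sketch of the variance decomposition underlying such an analysis: the first-order index S_i = Var(E[Y|X_i]) / Var(Y), estimated here by simple conditional binning (a crude but transparent stand-in for Sobol/Saltelli estimators; the model is a toy, not TRIP):

      import numpy as np

      rng = np.random.default_rng(4)

      # Toy model standing in for the routing scheme: an output driven by
      # three time-constant parameters (say width, depth, friction).
      X = rng.uniform(0, 1, size=(50_000, 3))
      y = 4 * X[:, 0] + np.sin(np.pi * X[:, 1]) + 0.2 * rng.normal(size=50_000)

      def first_order_index(x, y, bins=50):
          """S_i = Var(E[Y | X_i]) / Var(Y), via quantile bins on X_i."""
          edges = np.quantile(x, np.linspace(0, 1, bins + 1))
          idx = np.clip(np.searchsorted(edges, x) - 1, 0, bins - 1)
          cond_mean = np.array([y[idx == b].mean() for b in range(bins)])
          counts = np.bincount(idx, minlength=bins)
          return np.average((cond_mean - y.mean()) ** 2, weights=counts) / y.var()

      for j in range(3):
          print(f"S_{j} = {first_order_index(X[:, j], y):.3f}")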

  4. 21 CFR 874.3310 - Hearing aid calibrator and analysis system.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 21 Food and Drugs 8 2014-04-01 2014-04-01 false Hearing aid calibrator and analysis system. 874.3310 Section 874.3310 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN... aid calibrator and analysis system. (a) Identification. A hearing aid calibrator and analysis system...

  5. 21 CFR 874.3310 - Hearing aid calibrator and analysis system.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 21 Food and Drugs 8 2013-04-01 2013-04-01 false Hearing aid calibrator and analysis system. 874.3310 Section 874.3310 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN... aid calibrator and analysis system. (a) Identification. A hearing aid calibrator and analysis system...

  6. 21 CFR 874.3310 - Hearing aid calibrator and analysis system.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Hearing aid calibrator and analysis system. 874.3310 Section 874.3310 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN... aid calibrator and analysis system. (a) Identification. A hearing aid calibrator and analysis system...

  7. 21 CFR 874.3310 - Hearing aid calibrator and analysis system.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 21 Food and Drugs 8 2012-04-01 2012-04-01 false Hearing aid calibrator and analysis system. 874.3310 Section 874.3310 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN... aid calibrator and analysis system. (a) Identification. A hearing aid calibrator and analysis system...

  8. 21 CFR 874.3310 - Hearing aid calibrator and analysis system.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 21 Food and Drugs 8 2011-04-01 2011-04-01 false Hearing aid calibrator and analysis system. 874.3310 Section 874.3310 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN... aid calibrator and analysis system. (a) Identification. A hearing aid calibrator and analysis system...

  9. Dew Point Calibration System Using a Quartz Crystal Sensor with a Differential Frequency Method.

    PubMed

    Lin, Ningning; Meng, Xiaofeng; Nie, Jing

    2016-11-18

    In this paper, the influence of temperature on quartz crystal microbalance (QCM) sensor response during dew point calibration is investigated. The aim is to present a compensation method that eliminates the temperature impact on frequency acquisition. A new sensitive structure with double QCMs is proposed: one is kept in contact with the environment, whereas the other is not exposed to the atmosphere. A thermally conductive silicone pad between each crystal and the refrigeration device keeps the temperature condition uniform. A differential frequency method is described in detail and applied to calibrate the frequency characteristics of the QCM at a dew point of -3.75 °C. It is worth noting that the frequency changes of the two QCMs were approximately opposite when the temperature conditions were changed simultaneously. The results from continuous experiments show that the frequencies of the two QCMs at the moment the dew point is reached have strong consistency and high repeatability, leading to the conclusion that the sensitive structure can calibrate dew points with high reliability.
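
    A minimal sketch of the differential-frequency idea: because the two crystals respond to temperature with roughly opposite signs while only the exposed one sees the condensation mass loading, combining the two shift channels cancels the thermal component and isolates the dew signature. Signal shapes and numbers below are illustrative:

      import numpy as np

      rng = np.random.default_rng(5)

      t = np.linspace(0, 120, 1200)              # time (s) during cooling
      thermal = 40.0 * np.sin(t / 40.0)          # common thermal effect (Hz)

      # Opposite-signed thermal responses; dew loads only the exposed
      # crystal, with onset at t ~ 80 s in this synthetic run.
      dew = -200.0 * (t > 80)
      f_exposed = +thermal + dew + rng.normal(0, 1, t.size)
      f_sealed = -thermal + rng.normal(0, 1, t.size)

      # Summing the shift channels cancels the opposite-signed drift and
      # leaves the condensation step for simple threshold detection.
      f_comb = f_exposed + f_sealed
      print(f"detected dew onset at t = {t[np.argmax(f_comb < -100.0)]:.1f} s")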

  10. Beat frequency quartz-enhanced photoacoustic spectroscopy for fast and calibration-free continuous trace-gas monitoring

    PubMed Central

    Wu, Hongpeng; Dong, Lei; Zheng, Huadan; Yu, Yajun; Ma, Weiguang; Zhang, Lei; Yin, Wangbao; Xiao, Liantuan; Jia, Suotang; Tittel, Frank K.

    2017-01-01

    Quartz-enhanced photoacoustic spectroscopy (QEPAS) is a sensitive gas detection technique which requires frequent calibration and has a long response time. Here we report beat frequency (BF) QEPAS that can be used for ultra-sensitive calibration-free trace-gas detection and fast spectral scan applications. The resonance frequency and Q-factor of the quartz tuning fork (QTF) as well as the trace-gas concentration can be obtained simultaneously by detecting the beat frequency signal generated when the transient response signal of the QTF is demodulated at its non-resonance frequency. Hence, BF-QEPAS avoids a calibration process and permits continuous monitoring of a targeted trace gas. Three semiconductor lasers were selected as the excitation source to verify the performance of the BF-QEPAS technique. The BF-QEPAS method is capable of measuring lower trace-gas concentration levels with shorter averaging times as compared to conventional PAS and QEPAS techniques and determines the electrical QTF parameters precisely. PMID:28561065

  11. Identification of the most sensitive parameters in the activated sludge model implemented in BioWin software.

    PubMed

    Liwarska-Bizukojc, Ewa; Biernacki, Rafal

    2010-10-01

    In order to simulate biological wastewater treatment processes, data concerning wastewater and sludge composition, process kinetics, and stoichiometry are required. Selection of the most sensitive parameters is an important step of model calibration. The aim of this work is to verify the predictability of the activated sludge model implemented in BioWin software and to select its most influential kinetic and stoichiometric parameters with the help of a sensitivity analysis approach. Two different measures of sensitivity are applied: the normalised sensitivity coefficient (S(i,j)) and the mean square sensitivity measure (δ(msqr)j). It turns out that 17 kinetic and stoichiometric parameters of the BioWin activated sludge (AS) model can be regarded as influential on the basis of S(i,j) calculations. Half of the influential parameters are associated with growth and decay of phosphorus accumulating organisms (PAOs). The identification of the set of the most sensitive parameters should support the users of this model and initiate the elaboration of determination procedures for the parameters for which this has not yet been done. Copyright 2010 Elsevier Ltd. All rights reserved.
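
    Both measures are cheap to compute once the model can be perturbed one parameter at a time: S(i,j) is the relative output change per relative parameter change, and the mean square measure aggregates it over outputs. A hedged sketch with a toy two-output model standing in for the BioWin AS model:

      import numpy as np

      def model(params):
          """Toy two-output stand-in for the activated sludge model
          (think effluent COD and ammonia)."""
          mu_max, b_decay, y_h = params
          return np.array([10.0 / mu_max + b_decay, 2.0 * y_h / mu_max])

      p0 = np.array([4.0, 0.3, 0.6])
      y0 = model(p0)

      # S(i,j) = (dy_i / y_i) / (dx_j / x_j) by central finite differences.
      S = np.zeros((2, 3))
      for j in range(3):
          dp = np.zeros(3)
          dp[j] = 0.01 * p0[j]
          S[:, j] = (model(p0 + dp) - model(p0 - dp)) / y0 * p0[j] / (2 * dp[j])

      # Mean square sensitivity measure per parameter, aggregated over outputs.
      print("S(i,j):\n", np.round(S, 3))
      print("delta(msqr):", np.round(np.sqrt(np.mean(S ** 2, axis=0)), 3))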

  12. An approach to measure parameter sensitivity in watershed ...

    EPA Pesticide Factsheets

    Hydrologic responses vary spatially and temporally according to watershed characteristics. In this study, the hydrologic models that we developed earlier for the Little Miami River (LMR) and Las Vegas Wash (LVW) watersheds were used for detailed sensitivity analyses. To compare the relative sensitivities of the hydrologic parameters of these two models, we used the Normalized Root Mean Square Error (NRMSE). By combining the NRMSE index with flow duration curve analysis, we derived an approach to measure parameter sensitivities under different flow regimes. Results show that the parameters related to groundwater are highly sensitive in the LMR watershed, whereas the LVW watershed is primarily sensitive to near-surface and impervious parameters. High and medium flows are more strongly impacted by most of the parameters. The low flow regime was highly sensitive to groundwater-related parameters. Moreover, our approach is found to be useful in facilitating model development and calibration. This journal article describes hydrological modeling of climate change and land use change effects on stream hydrology, and elucidates the importance of hydrological model construction in generating valid modeling results.
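
    A minimal sketch of the combined NRMSE/flow-duration-curve idea: score model error separately within high-, medium-, and low-flow regimes defined by exceedance percentiles. The regime cutoffs and synthetic flows are illustrative, not the study's:

      import numpy as np

      rng = np.random.default_rng(6)

      obs = np.exp(rng.normal(2.0, 1.0, size=3650))        # synthetic daily flows
      sim = obs * rng.normal(1.0, 0.15, size=obs.size)     # perturbed-model flows

      def nrmse(o, s):
          """RMSE normalized by the observed range within the regime."""
          return np.sqrt(np.mean((o - s) ** 2)) / (o.max() - o.min())

      # Flow-duration-curve regimes: high (<20% exceedance), low (>70%).
      hi_cut, lo_cut = np.quantile(obs, [0.80, 0.30])
      for name, mask in [("high", obs >= hi_cut),
                         ("medium", (obs < hi_cut) & (obs > lo_cut)),
                         ("low", obs <= lo_cut)]:
          print(f"{name:6s} flows: NRMSE = {nrmse(obs[mask], sim[mask]):.3f}")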

  13. A non-contact, thermal noise based method for the calibration of lateral deflection sensitivity in atomic force microscopy.

    PubMed

    Mullin, Nic; Hobbs, Jamie K

    2014-11-01

    Calibration of lateral forces and displacements has been a long standing problem in lateral force microscopies. Recently, it was shown by Wagner et al. that the thermal noise spectrum of the first torsional mode may be used to calibrate the deflection sensitivity of the detector. This method is quick, non-destructive and may be performed in situ in air or liquid. Here we make a full quantitative comparison of the lateral inverse optical lever sensitivity obtained by the lateral thermal noise method and the shape independent method developed by Anderson et al. We find that the thermal method provides accurate results for a wide variety of rectangular cantilevers, provided that the geometry of the cantilever is suitable for torsional stiffness calibration by the torsional Sader method, in-plane bending of the cantilever may be eliminated or accounted for and that any scaling of the lateral deflection signal between the measurement of the lateral thermal noise and the measurement of the lateral deflection is eliminated or corrected for. We also demonstrate that the thermal method may be used to characterize the linearity of the detector signal as a function of position, and find a deviation of less than 8% for the instrument used.
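
    A hedged sketch of the equipartition step at the heart of the thermal method: the integrated area under the first torsional resonance of the voltage spectrum gives the mean-square lateral signal, which equipartition ties to the torsional spring constant and hence to the deflection sensitivity. All input numbers are placeholders; in practice the resonance peak is fit and the detector scaling corrections discussed above are applied:

      import numpy as np

      kB, T = 1.380649e-23, 295.0   # Boltzmann constant (J/K), temperature (K)

      # Placeholder experimental inputs:
      k_phi = 2.0e-9     # torsional spring constant (N m/rad), e.g. from
                         # the torsional Sader method
      v2 = 4.0e-8        # integrated thermal noise power under the first
                         # torsional resonance of the lateral signal (V^2)
      h_tip = 15e-6      # tip height (m), the lever arm at the tip

      # Equipartition: (1/2) k_phi <theta^2> = (1/2) kB T.
      theta2 = kB * T / k_phi                    # mean-square angle (rad^2)

      # Angular, then tip-lateral, sensitivity of the detector signal.
      s_ang = np.sqrt(theta2 / v2)               # rad/V
      print(f"angular sensitivity: {s_ang * 1e3:.2f} mrad/V")
      print(f"lateral sensitivity: {s_ang * h_tip * 1e9:.1f} nm/V")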

  14. Modeling the Effects of Irrigation on Land Surface Fluxes and States over the Conterminous United States: Sensitivity to Input Data and Model Parameters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Leng, Guoyong; Huang, Maoyi; Tang, Qiuhong

    2013-09-16

    Previous studies on irrigation impacts on land surface fluxes/states were mainly conducted as sensitivity experiments, with limited analysis of uncertainties from the input data and model irrigation schemes used. In this study, we calibrated and evaluated the performance of irrigation water use simulated by the Community Land Model version 4 (CLM4) against observations from agriculture census. We investigated the impacts of irrigation on land surface fluxes and states over the conterminous United States (CONUS) and explored possible directions of improvement. Specifically, we found large uncertainty in the irrigation area data from two widely used sources, and CLM4 tended to produce unrealistically large temporal variations of irrigation demand for applications at the water resources region scale over CONUS. At seasonal to interannual time scales, the effects of irrigation on surface energy partitioning appeared to be large and persistent, and more pronounced in dry than wet years. Even with model calibration to yield overall good agreement with the irrigation amounts from the National Agricultural Statistics Service (NASS), differences between the two irrigation area datasets still dominate the differences in the interannual variability of land surface response to irrigation. Our results suggest that irrigation amount simulated by CLM4 can be improved by (1) calibrating model parameter values to account for regional differences in irrigation demand and (2) accurate representation of the spatial distribution and intensity of irrigated areas.

  15. Ultrasensitive, self-calibrated cavity ring-down spectrometer for quantitative trace gas analysis.

    PubMed

    Chen, Bing; Sun, Yu R; Zhou, Ze-Yi; Chen, Jian; Liu, An-Wen; Hu, Shui-Ming

    2014-11-10

    A cavity ring-down spectrometer is built for trace gas detection using telecom distributed feedback (DFB) diode lasers. The longitudinal modes of the ring-down cavity are used as frequency markers without actively locking either the laser or the high-finesse cavity. A control scheme is applied to scan the DFB laser frequency, matching the cavity modes one by one in sequence and resulting in a correct index at each recorded spectral data point, which allows us to calibrate the spectrum with a relative frequency precision of 0.06 MHz. Besides the frequency precision of the spectrometer, a sensitivity (noise-equivalent absorption) of 4×10^-11 cm^-1 Hz^-1/2 has also been demonstrated. A minimum detectable absorption coefficient of 5×10^-12 cm^-1 has been obtained by averaging about 100 spectra recorded in 2 h. The quantitative accuracy is tested by measuring the CO2 concentrations in N2 samples prepared by the gravimetric method, and the relative deviation is less than 0.3%. The trace detection capability is demonstrated by detecting CO2 at ppbv-level concentrations in a high-purity nitrogen gas sample. Simple structure, high sensitivity, and good accuracy make the instrument very suitable for quantitative trace gas analysis.

  16. A method to calibrate phase fluctuation in polarization-sensitive swept-source optical coherence tomography

    NASA Astrophysics Data System (ADS)

    Lu, Zenghai; Kasaragod, Deepa K.; Matcher, Stephen J.

    2011-06-01

    A phase fluctuation calibration method is presented for polarization-sensitive swept-source optical coherence tomography (PS-SS-OCT) using continuous polarization modulation. The method consists of generating a continuously triggered tone-burst waveform rather than an asynchronous waveform with a function generator, and removing the global phases of the measured Jones matrices by matrix normalization. This removes the need for auxiliary optical components for phase fluctuation compensation in the system, reducing system complexity. Phase fluctuation calibration is necessary to obtain the reference Jones matrix by averaging the measured Jones matrices at the sample surface. Measurements on an equine tendon sample were made with the PS-SS-OCT system to validate the proposed method.

  17. Accuracy evaluation of a new real-time continuous glucose monitoring algorithm in hypoglycemia.

    PubMed

    Mahmoudi, Zeinab; Jensen, Morten Hasselstrøm; Dencker Johansen, Mette; Christensen, Toke Folke; Tarnow, Lise; Christiansen, Jens Sandahl; Hejlesen, Ole

    2014-10-01

    The purpose of this study was to evaluate the performance of a new continuous glucose monitoring (CGM) calibration algorithm and to compare it with the Guardian(®) REAL-Time (RT) (Medtronic Diabetes, Northridge, CA) calibration algorithm in hypoglycemia. CGM data were obtained from 10 type 1 diabetes patients undergoing insulin-induced hypoglycemia. Data were obtained in two separate sessions using the Guardian RT CGM device. Data from the same CGM sensor were calibrated by two different algorithms: the Guardian RT algorithm and a new calibration algorithm. The accuracy of the two algorithms was compared using four performance metrics. The median (mean) of absolute relative deviation in the whole range of plasma glucose was 20.2% (32.1%) for the Guardian RT calibration and 17.4% (25.9%) for the new calibration algorithm. The mean (SD) sample-based sensitivity for the hypoglycemic threshold of 70 mg/dL was 31% (33%) for the Guardian RT algorithm and 70% (33%) for the new algorithm. The mean (SD) sample-based specificity at the same hypoglycemic threshold was 95% (8%) for the Guardian RT algorithm and 90% (16%) for the new calibration algorithm. The sensitivity of the event-based hypoglycemia detection for the hypoglycemic threshold of 70 mg/dL was 61% for the Guardian RT calibration and 89% for the new calibration algorithm. Application of the new calibration caused one false-positive instance for the event-based hypoglycemia detection, whereas the Guardian RT caused no false-positive instances. The overestimation of plasma glucose by CGM was corrected from 33.2 mg/dL in the Guardian RT algorithm to 21.9 mg/dL in the new calibration algorithm. The results suggest that the new algorithm may reduce the inaccuracy of Guardian RT CGM system within the hypoglycemic range; however, data from a larger number of patients are required to compare the clinical reliability of the two algorithms.

  18. Hot-wire calibration in subsonic/transonic flow regimes

    NASA Technical Reports Server (NTRS)

    Nagabushana, K. A.; Ash, Robert L.

    1995-01-01

    A different approach for calibrating hot-wires, which simplifies the calibration procedure and reduces the tunnel run-time by an order of magnitude, was sought. In general, it is accepted that the directly measurable quantities in any flow are velocity, density, and total temperature. Very few facilities have the capability of varying the total temperature over an adequate range. However, if the overheat temperature parameter, a_w, is used to calibrate the hot-wire, then the directly measurable quantity, voltage, will be a function of the flow variables and the overheat parameter, i.e., E = f(u, ρ, a_w, T_w), where a_w contains the needed total-temperature information. In this report, various methods of evaluating sensitivities with different dependent and independent variables to calibrate a 3-wire hot-wire probe using a constant temperature anemometer (CTA) in subsonic/transonic flow regimes are presented. The advantage of using a_w as the independent variable, instead of the total temperature t_o or the overheat temperature parameter τ, is that while running a calibration test it is not necessary to know the recovery factor or the coefficients of the wire resistance-temperature relationship for a given probe. It was deduced that the method employing the relationship E = f(u, ρ, a_w) should result in the most accurate calibration of hot-wire probes; any other method would require additional measurements. This method also allows calibration and determination of accurate temperature-fluctuation information even in atmospheric wind tunnels, where at present there is no ability to obtain any temperature sensitivity information. The technique greatly simplifies the calibration process for hot-wires, provides the calibration information needed to obtain temperature fluctuations, and reduces both the tunnel run-time and the test matrix required to calibrate hot-wires. Some results using these techniques are presented in an appendix.
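
    The report's preferred relationship E = f(u, ρ, a_w) is not given in closed form in this record; purely as a hedged illustration, a generalized King's-law fit at a fixed overheat parameter might look like the following (the functional form, coefficients, and data are assumptions).

        import numpy as np
        from scipy.optimize import curve_fit

        def king_law(X, A, B, n):
            # Generalized King's law at fixed a_w: E^2 = A + B*(rho*u)^n
            rho, u = X
            return np.sqrt(A + B * (rho * u) ** n)

        rho = np.full(20, 1.2)                      # air density, kg/m^3
        u = np.linspace(5.0, 100.0, 20)             # velocity, m/s
        E = np.sqrt(1.5 + 0.8 * (rho * u) ** 0.45)  # synthetic bridge voltage, V

        (A, B, n), _ = curve_fit(king_law, (rho, u), E, p0=(1.0, 1.0, 0.5))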

  19. Absolute Radiometric Calibration of EUNIS-06

    NASA Technical Reports Server (NTRS)

    Thomas, R. J.; Rabin, D. M.; Kent, B. J.; Paustian, W.

    2007-01-01

    The Extreme-Ultraviolet Normal-Incidence Spectrometer (EUNIS) is a sounding-rocket payload that obtains imaged high-resolution spectra of individual solar features, providing information about the Sun's corona and upper transition region. Shortly after its successful initial flight last year, a complete end-to-end calibration was carried out to determine the instrument's absolute radiometric response over its longwave bandpass of 300-370 Å. The measurements were done at the Rutherford-Appleton Laboratory (RAL) in England, using the same vacuum facility and EUV radiation source used in the pre-flight calibrations of both SOHO/CDS and Hinode/EIS, as well as in three post-flight calibrations of our SERTS sounding-rocket payload, the precursor to EUNIS. The unique radiation source provided by the Physikalisch-Technische Bundesanstalt (PTB) had been calibrated to an absolute accuracy of 7% (1-sigma) at 12 wavelengths covering our bandpass directly against the Berlin electron storage ring BESSY, which is itself a primary radiometric source standard. Scans of the EUNIS aperture were made to determine the instrument's absolute spectral sensitivity to ±25%, considering all sources of error, and demonstrated that EUNIS-06 was the most sensitive solar EUV spectrometer yet flown. The results will be matched against prior calibrations, which relied on combining measurements of individual optical components and on comparisons with theoretically predicted 'insensitive' line ratios. Coordinated observations were made during the EUNIS-06 flight by SOHO/CDS and EIT that will allow re-calibrations of those instruments as well. In addition, future EUNIS flights will provide similar calibration updates for TRACE, Hinode/EIS, and STEREO/SECCHI/EUVI.

  20. Astrophysical Observations with the HEROES Balloon-borne Payload

    NASA Astrophysics Data System (ADS)

    Wilson, Colleen; Gaskin, J.; Christe, S.; Shih, A. Y.; Swartz, D. A.; Tennant, A. F.; Ramsey, B.

    2014-01-01

    The High Energy Replicated Optics to Explore the Sun (HEROES) payload flew on a balloon from Ft. Sumner, NM, September 21-22, 2013. HEROES is sensitive from about 20 to 75 keV and comprises 8 optics modules, each consisting of 13-14 nickel replicated optics shells, and 8 xenon-filled position-sensitive proportional counter detectors. HEROES is unique as the first hard X-ray telescope to observe the Sun and astrophysical targets in the same balloon flight. Our astrophysics targets include the Crab nebula and pulsar and the black hole binary GRS 1915+105. In this presentation, I will describe the HEROES mission, the data analysis pipeline and calibrations, and preliminary astrophysics results.

  1. Development of an automated scanning monochromator for sensitivity calibration of the MUSTANG instrument

    NASA Astrophysics Data System (ADS)

    Rivers, Thane D.

    1992-06-01

    An automated scanning monochromator was developed using an Acton Research Corporation (ARC) monochromator, an Ealing photomultiplier tube, and a Macintosh PC in conjunction with LabVIEW software. The LabVIEW Virtual Instrument written to operate the ARC monochromator is a mouse-driven, user-friendly program developed for automated spectral data measurements. The resolution and sensitivity of the automated scanning monochromator system were determined experimentally. The automated monochromator was then used for spectral measurements of a platinum lamp. Additionally, the reflectivity curve for a BaSO4-coated screen has been measured. The reflectivity measurements indicate a large discrepancy with expected results; further analysis of the reflectivity experiment is required for conclusive results.

  2. An Investigation of Acoustic Cavitation Produced by Pulsed Ultrasound

    DTIC Science & Technology

    1987-12-01

    Naval Postgraduate School thesis (Monterey, California) by Robert L. Bruce. PVDF hydrophone sensitivity calibration curves are presented; the reciprocity technique was chosen for the hydrophone calibration.

  3. GHRS Ech-B Wavelength Monitor -- Cycle 4

    NASA Astrophysics Data System (ADS)

    Soderblom, David

    1994-01-01

    This proposal defines the spectral lamp test for Echelle B. It is an internal test which makes measurements of the wavelength lamp SC2. It calibrates the carrousel function, Y deflections, resolving power, sensitivity, and scattered light. The wavelength calibration dispersion constants will be updated in the PODPS calibration data base. It will be run every 4 months. The wavelengths may be out of range according to PEPSI or TRANS. Please ignore the errors.

  4. Remote sensing of evapotranspiration using automated calibration: Development and testing in the state of Florida

    NASA Astrophysics Data System (ADS)

    Evans, Aaron H.

    Thermal remote sensing is a powerful tool for measuring the spatial variability of evapotranspiration due to the cooling effect of vaporization. The residual method is a popular technique which calculates evapotranspiration by subtracting sensible heat from available energy. Estimating sensible heat requires the aerodynamic surface temperature, which is difficult to retrieve accurately. Methods such as SEBAL/METRIC correct for this problem by calibrating the relationship between sensible heat and retrieved surface temperature. Disadvantages of these calibrations are that (1) the user must manually identify extremely dry and wet pixels in the image, and (2) each calibration is applicable only over a limited spatial extent. Producing larger maps is operationally limited by the time required to manually calibrate multiple spatial extents over multiple days. This dissertation develops techniques which automatically detect dry and wet pixels. LANDSAT imagery is used because it resolves dry pixels. Calibrations using (1) only dry pixels and (2) including wet pixels are developed. Snapshots of retrieved evaporative fraction and actual evapotranspiration are compared to eddy covariance measurements for five study areas in Florida: (1) Big Cypress, (2) Disney Wilderness, (3) the Everglades, (4) near Gainesville, FL, and (5) Kennedy Space Center. The sensitivity of evaporative fraction to temperature, available energy, roughness length, and wind speed is tested. A technique for temporally interpolating evapotranspiration by fusing LANDSAT and MODIS is developed and tested. The automated algorithm is successful at detecting wet and dry pixels (if they exist). Including wet pixels in the calibration and assuming constant atmospheric conductance significantly improved results for all sites but Big Cypress and Gainesville. Evaporative fraction is not very sensitive to instantaneous available energy, but it is sensitive to temperature when wet pixels are included, because temperature is required for estimating wet-pixel evapotranspiration. Data fusion techniques only slightly outperformed linear interpolation. Eddy covariance comparison and temporal interpolation produced acceptable bias error for most cases, suggesting automated calibration and interpolation could be used to predict monthly or annual ET. Maps demonstrating spatial patterns of evapotranspiration at field scale were successfully produced, but only for limited spatial extents. A framework has been established for producing larger maps by creating a mosaic of smaller individual maps.
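
    As background for the calibration being automated here, the SEBAL/METRIC anchor-pixel step fits a linear near-surface temperature difference dT = a + b*Ts between a wet pixel (sensible heat H ≈ 0) and a dry pixel (H ≈ available energy). A schematic sketch, with all numbers as placeholders:

        RHO_CP = 1200.0  # volumetric heat capacity of air, J m^-3 K^-1 (approx.)

        def calibrate_dT(Ts_cold, Ts_hot, H_cold, H_hot, r_ah):
            """Fit dT = a + b*Ts from two anchor pixels, using H = RHO_CP*dT/r_ah
            (r_ah: aerodynamic resistance to heat transport, s/m)."""
            dT_cold = H_cold * r_ah / RHO_CP
            dT_hot = H_hot * r_ah / RHO_CP
            b = (dT_hot - dT_cold) / (Ts_hot - Ts_cold)
            a = dT_cold - b * Ts_cold
            return a, b

        a, b = calibrate_dT(Ts_cold=298.0, Ts_hot=318.0, H_cold=0.0, H_hot=450.0, r_ah=40.0)
        # For any pixel: H = RHO_CP * (a + b*Ts) / r_ah, and evapotranspiration
        # then follows from the residual LE = Rn - G - H.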

  5. Development of a low background test facility for the SPICA-SAFARI on-ground calibration

    NASA Astrophysics Data System (ADS)

    Dieleman, P.; Laauwen, W. M.; Ferrari, L.; Ferlet, M.; Vandenbussche, B.; Meinsma, L.; Huisman, R.

    2012-09-01

    SAFARI is a far-infrared camera to be launched in 2021 onboard the SPICA satellite. SAFARI offers imaging spectroscopy and imaging photometry in the wavelength range of 34 to 210 μm with a detector NEP of 2×10^-19 W/√Hz. A cryogenic test facility for SAFARI on-ground calibration and characterization is being developed. The main design driver is the required low background of a few attowatts per pixel. This prohibits optical access to room temperature, and hence all test equipment needs to be inside the cryostat at 4.5 K. The instrument parameters to be verified are the interfaces with the SPICA satellite, sensitivity, alignment, image quality, spectral response, frequency calibration, and point spread function. The instrument sensitivity is calibrated with a calibration source providing a spatially homogeneous signal at the attowatt level. This low light intensity is achieved by geometrical dilution of a 150 K source into an integrating sphere. The beam quality and point spread function are measured with a pinhole/mask plate wheel, back-illuminated by a second integrating sphere. This sphere is fed by a stable wide-band source, providing spectral lines via a cryogenic etalon.

  6. Attaining the Photometric Precision Required by Future Dark Energy Projects

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stubbs, Christopher

    2013-01-21

    This report outlines our progress towards achieving the high-precision astronomical measurements needed to derive improved constraints on the nature of the Dark Energy. Our approach to obtaining higher precision flux measurements has three basic components: 1) determination of the optical transmission of the atmosphere; 2) mapping out the instrumental photon sensitivity function vs. wavelength, calibrated by referencing the measurements to the known sensitivity curve of a high-precision silicon photodiode; and 3) using the self-consistency of the spectrum of stars to achieve precise color calibrations.

  7. Solar measurements from the Airglow-Solar Spectrometer Instrument (ASSI) on the San Marco 5 satellite

    NASA Technical Reports Server (NTRS)

    Woods, Thomas N.

    1994-01-01

    The analysis of the solar spectral irradiance from the Airglow-Solar Spectrometer Instrument (ASSI) on the San Marco 5 satellite is the focus of this research grant. A pre-print copy of the paper describing the calibrations of and results from the San Marco ASSI is attached to this report. The calibration of the ASSI included (1) transfer of photometric calibration from a rocket experiment and the Solar Mesosphere Explorer (SME), (2) use of the on-board radioactive calibration sources, (3) validation of the ASSI sensitivity over its field of view, and (4) determination of the degradation of the spectrometers. We have determined that the absolute values of the solar irradiance need adjustment in the current proxy models of the solar UV irradiance, and the amount of solar variability from the proxy models is in reasonable agreement with the ASSI measurements. This research grant has also supported the development of a new solar EUV irradiance proxy model. We expected that the magnetic flux is responsible for most of the heating, via Alfvén waves, in the chromosphere, transition region, and corona. From examining time series of solar irradiance data and magnetic fields at different levels, we did indeed find that the chromospheric emissions correlate best with the large magnetic field levels.

  8. MODOPTIM: A general optimization program for ground-water flow model calibration and ground-water management with MODFLOW

    USGS Publications Warehouse

    Halford, Keith J.

    2006-01-01

    MODOPTIM is a non-linear ground-water model calibration and management tool that simulates flow with MODFLOW-96 as a subroutine. A weighted sum-of-squares objective function defines optimal solutions for calibration and management problems. Water levels, discharges, water quality, subsidence, and pumping-lift costs are the five direct observation types that can be compared in MODOPTIM. Differences between direct observations of the same type can be compared to fit temporal changes and spatial gradients. Water levels in pumping wells, wellbore storage in the observation wells, and rotational translation of observation wells also can be compared. Negative and positive residuals can be weighted unequally so inequality constraints such as maximum chloride concentrations or minimum water levels can be incorporated in the objective function. Optimization parameters are defined with zones and parameter-weight matrices. Parameter change is estimated iteratively with a quasi-Newton algorithm and is constrained to a user-defined maximum parameter change per iteration. Parameters that are less sensitive than a user-defined threshold are not estimated. MODOPTIM facilitates testing more conceptual models by expediting calibration of each conceptual model. Examples of applying MODOPTIM to aquifer-test analysis, ground-water management, and parameter estimation problems are presented.
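
    The asymmetric weighting that lets inequality constraints enter the objective function can be sketched as follows; the weights and residuals are illustrative, and MODOPTIM's actual bookkeeping across its five observation types is richer.

        import numpy as np

        def weighted_ssq(residuals, w_neg, w_pos):
            """Weighted sum of squares with unequal weights for negative
            and positive residuals (residual = observed - simulated)."""
            r = np.asarray(residuals, float)
            w = np.where(r < 0.0, w_neg, w_pos)
            return float(np.sum(w * r ** 2))

        # Example: penalize only simulated water levels that fall below a
        # minimum (here, negative residuals), leaving the rest unconstrained.
        print(weighted_ssq([0.3, -0.1, 0.7], w_neg=100.0, w_pos=0.0))  # -> 1.0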

  9. WE-E-18A-04: Precision In-Vivo Dosimetry Using Optically Stimulated Luminescence Dosimeters and a Pulsed-Stimulating Dose Reader

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Q; Herrick, A; Hoke, S

    Purpose: A new readout technology based on pulsed optically stimulated luminescence is introduced (microSTARii; Landauer, Inc., Glenwood, IL 60425). This investigation searches for approaches that maximize the dosimetry accuracy in clinical applications. Methods: The sensitivity of each optically stimulated luminescence dosimeter (OSLD) was initially characterized by exposing it to a given radiation beam. After readout, the luminescence signal stored in the OSLD was erased by exposing its sensing area to a 21 W white LED light for 24 hours. A set of OSLDs with consistent sensitivities was selected to calibrate the dose reader. Higher-order nonlinear curves were also derived from the calibration readings. OSLDs with cumulative doses below 15 Gy were reused. Before an in-vivo dosimetry measurement, the OSLD luminescence signal was erased with the white LED light. Results: For a set of 68 manufacturer-screened OSLDs, the measured sensitivities vary over a range of 17.3%. A subset of the OSLDs with sensitivities within ±1% was selected for the reader calibration. Three OSLDs in a group were exposed to a given radiation dose. Nine groups were exposed to radiation doses ranging from 0 to 13 Gy. Additional verifications demonstrated that the reader uncertainty is about 3%. With an external calibration function derived by fitting the OSLD readings to a 3rd-order polynomial, the dosimetry uncertainty dropped to 0.5%. The dose-luminescence response curves of individual OSLDs were characterized; all curves converge within 1% after the sensitivity correction. With all uncertainties considered, the systematic uncertainty is about 2%. Additional tests emulating in-vivo dosimetry by exposing the OSLDs to different radiation sources confirmed the claim. Conclusion: The sensitivity of each individual OSLD should be characterized initially. A 3rd-order polynomial function is a more accurate representation of the dose-luminescence response curve. The dosimetry uncertainty specified by the manufacturer is 4%; following the proposed approach, it can be controlled to 2%.
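
    A minimal sketch of the external third-order calibration described above, assuming sensitivity-corrected reader signals for known delivered doses (all numbers are placeholders, not the study's data):

        import numpy as np

        dose = np.array([0.0, 0.5, 1.0, 2.0, 4.0, 7.0, 10.0, 13.0])  # Gy
        signal = np.array([0.0, 5.1e4, 1.03e5, 2.1e5, 4.4e5, 8.1e5,
                           1.22e6, 1.67e6])                          # reader counts

        # 3rd-order polynomial dose-luminescence response, as recommended
        # over the reader's built-in calibration.
        coeffs = np.polyfit(signal, dose, deg=3)

        def signal_to_dose(counts):
            """Convert a sensitivity-corrected reading to dose via the fit."""
            return np.polyval(coeffs, counts)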

  10. MICROSCOPE in-orbit calibration procedure for a test of the equivalence principle at 10^-15.

    PubMed

    Pradels, G; Touboul, P

    2003-01-01

    The scientific objectives of the MICROSCOPE space mission impose a very fine calibration of the on-board accelerometers. However, the required performance cannot be achieved on ground because of the presence of strong disturbing sources. On board the CHAMP satellite, accelerometers similar in concept to the MICROSCOPE instrument have already flown, and analysis of the provided data allowed us to characterise the vibration environment at low altitude as well as the fluctuation of the drag. The requirements of the in-orbit calibration procedure for the MICROSCOPE instrument are demonstrated by modelling the expected applied acceleration signals with the developed analytic model of the mission. The proposed approach exploits the drag-free system of the satellite and the sensitivity of the accelerometers. A specific simulator of the attitude control system of the satellite has been developed, and tests of the proposed solution are performed using nominal conditions or disturbing conditions as observed during the CHAMP mission. © 2003 International Astronautical Federation. Published by Elsevier Science Ltd. All rights reserved.

  11. High-throughput immunomagnetic scavenging technique for quantitative analysis of live VX nerve agent in water, hamburger, and soil matrixes.

    PubMed

    Knaack, Jennifer S; Zhou, Yingtao; Abney, Carter W; Prezioso, Samantha M; Magnuson, Matthew; Evans, Ronald; Jakubowski, Edward M; Hardy, Katelyn; Johnson, Rudolph C

    2012-11-20

    We have developed a novel immunomagnetic scavenging technique for extracting cholinesterase inhibitors from aqueous matrixes using biological targeting and antibody-based extraction. The technique was characterized using the organophosphorus nerve agent VX. The limit of detection for VX in high-performance liquid chromatography (HPLC)-grade water, defined as the lowest calibrator concentration, was 25 pg/mL in a small, 500 μL sample. The method was characterized over the course of 22 sample sets containing calibrators, blanks, and quality control samples. Method precision, expressed as the mean relative standard deviation, was less than 9.2% for all calibrators. Quality control sample accuracy was 102% and 100% of the mean for VX spiked into HPLC-grade water at concentrations of 2.0 and 0.25 ng/mL, respectively. This method was successfully applied to aqueous extracts from soil, hamburger, and finished tap water spiked with VX; recovery was 65%, 81%, and 100% from these matrixes, respectively. Biologically based extraction of organophosphorus compounds represents a new technique for sample extraction that increases extraction specificity and sensitivity.

  12. Cycle 22 COS/NUV Spectroscopic Sensitivity Monitor

    NASA Astrophysics Data System (ADS)

    Taylor, Jo

    2016-09-01

    Observations of HST spectrophotometric standard stars show that there is a significant time dependence of the COS NUV MAMA sensitivity (Debes et al. 2016). Time-dependent sensitivity (TDS) monitoring is necessary for accurate flux calibration. Regular calibration observations monitor the decline in sensitivity for all NUV gratings: G185M, G225M, G285M, and G230L. Results from the Cycle 22 NUV TDS program show that the reflectivity of the G225M and G285M gratings, which are coated in bare aluminum, declines at rates of -3 to -2.5%/year and -10.6 to -11.8%/year, respectively. The G185M and G230L gratings, which are coated in MgF2 over aluminum, show changes of -0.3 to +0.6%/year and -0.4 to +0.9%/year, respectively.

  13. Supplement: “The Rate of Binary Black Hole Mergers Inferred from Advanced LIGO Observations Surrounding GW150914” (2016, ApJL, 833, L1)

    NASA Astrophysics Data System (ADS)

    Abbott, B. P.; Abbott, R.; Abbott, T. D.; Abernathy, M. R.; Acernese, F.; Ackley, K.; Adams, C.; Adams, T.; Addesso, P.; Adhikari, R. X.; Adya, V. B.; Affeldt, C.; Agathos, M.; Agatsuma, K.; Aggarwal, N.; Aguiar, O. D.; Aiello, L.; Ain, A.; Ajith, P.; Allen, B.; Allocca, A.; Altin, P. A.; Anderson, S. B.; Anderson, W. G.; Arai, K.; Araya, M. C.; Arceneaux, C. C.; Areeda, J. S.; Arnaud, N.; Arun, K. G.; Ascenzi, S.; Ashton, G.; Ast, M.; Aston, S. M.; Astone, P.; Aufmuth, P.; Aulbert, C.; Babak, S.; Bacon, P.; Bader, M. K. M.; Baker, P. T.; Baldaccini, F.; Ballardin, G.; Ballmer, S. W.; Barayoga, J. C.; Barclay, S. E.; Barish, B. C.; Barker, D.; Barone, F.; Barr, B.; Barsotti, L.; Barsuglia, M.; Barta, D.; Bartlett, J.; Bartos, I.; Bassiri, R.; Basti, A.; Batch, J. C.; Baune, C.; Bavigadda, V.; Bazzan, M.; Behnke, B.; Bejger, M.; Bell, A. S.; Bell, C. J.; Berger, B. K.; Bergman, J.; Bergmann, G.; Berry, C. P. L.; Bersanetti, D.; Bertolini, A.; Betzwieser, J.; Bhagwat, S.; Bhandare, R.; Bilenko, I. A.; Billingsley, G.; Birch, J.; Birney, R.; Biscans, S.; Bisht, A.; Bitossi, M.; Biwer, C.; Bizouard, M. A.; Blackburn, J. K.; Blair, C. D.; Blair, D. G.; Blair, R. M.; Bloemen, S.; Bock, O.; Bodiya, T. P.; Boer, M.; Bogaert, G.; Bogan, C.; Bohe, A.; Bojtos, P.; Bond, C.; Bondu, F.; Bonnand, R.; Boom, B. A.; Bork, R.; Boschi, V.; Bose, S.; Bouffanais, Y.; Bozzi, A.; Bradaschia, C.; Brady, P. R.; Braginsky, V. B.; Branchesi, M.; Brau, J. E.; Briant, T.; Brillet, A.; Brinkmann, M.; Brisson, V.; Brockill, P.; Brooks, A. F.; Brown, D. A.; Brown, D. D.; Brown, N. M.; Buchanan, C. C.; Buikema, A.; Bulik, T.; Bulten, H. J.; Buonanno, A.; Buskulic, D.; Buy, C.; Byer, R. L.; Cadonati, L.; Cagnoli, G.; Cahillane, C.; Calderón Bustillo, J.; Callister, T.; Calloni, E.; Camp, J. B.; Cannon, K. C.; Cao, J.; Capano, C. D.; Capocasa, E.; Carbognani, F.; Caride, S.; Casanueva Diaz, J.; Casentini, C.; Caudill, S.; Cavaglià, M.; Cavalier, F.; Cavalieri, R.; Cella, G.; Cepeda, C. B.; Cerboni Baiardi, L.; Cerretani, G.; Cesarini, E.; Chakraborty, R.; Chalermsongsak, T.; Chamberlin, S. J.; Chan, M.; Chao, S.; Charlton, P.; Chassande-Mottin, E.; Chen, H. Y.; Chen, Y.; Cheng, C.; Chincarini, A.; Chiummo, A.; Cho, H. S.; Cho, M.; Chow, J. H.; Christensen, N.; Chu, Q.; Chua, S.; Chung, S.; Ciani, G.; Clara, F.; Clark, J. A.; Cleva, F.; Coccia, E.; Cohadon, P.-F.; Colla, A.; Collette, C. G.; Cominsky, L.; Constancio, M., Jr.; Conte, A.; Conti, L.; Cook, D.; Corbitt, T. R.; Cornish, N.; Corsi, A.; Cortese, S.; Costa, C. A.; Coughlin, M. W.; Coughlin, S. B.; Coulon, J.-P.; Countryman, S. T.; Couvares, P.; Cowan, E. E.; Coward, D. M.; Cowart, M. J.; Coyne, D. C.; Coyne, R.; Craig, K.; Creighton, J. D. E.; Cripe, J.; Crowder, S. G.; Cumming, A.; Cunningham, L.; Cuoco, E.; Dal Canton, T.; Danilishin, S. L.; D’Antonio, S.; Danzmann, K.; Darman, N. S.; Dattilo, V.; Dave, I.; Daveloza, H. P.; Davier, M.; Davies, G. S.; Daw, E. J.; Day, R.; De, S.; DeBra, D.; Debreczeni, G.; Degallaix, J.; De Laurentis, M.; Deléglise, S.; Del Pozzo, W.; Denker, T.; Dent, T.; Dereli, H.; Dergachev, V.; De Rosa, R.; DeRosa, R. T.; DeSalvo, R.; Dhurandhar, S.; Díaz, M. C.; Di Fiore, L.; Di Giovanni, M.; Di Lieto, A.; Di Pace, S.; Di Palma, I.; Di Virgilio, A.; Dojcinoski, G.; Dolique, V.; Donovan, F.; Dooley, K. L.; Doravari, S.; Douglas, R.; Downes, T. P.; Drago, M.; Drever, R. W. P.; Driggers, J. C.; Du, Z.; Ducrot, M.; Dwyer, S. E.; Edo, T. B.; Edwards, M. C.; Effler, A.; Eggenstein, H.-B.; Ehrens, P.; Eichholz, J.; Eikenberry, S. 
S.; Engels, W.; Essick, R. C.; Etzel, T.; Evans, M.; Evans, T. M.; Everett, R.; Factourovich, M.; Fafone, V.; Fair, H.; Fairhurst, S.; Fan, X.; Fang, Q.; Farinon, S.; Farr, B.; Farr, W. M.; Favata, M.; Fays, M.; Fehrmann, H.; Fejer, M. M.; Ferrante, I.; Ferreira, E. C.; Ferrini, F.; Fidecaro, F.; Fiori, I.; Fiorucci, D.; Fisher, R. P.; Flaminio, R.; Fletcher, M.; Fong, H.; Fournier, J.-D.; Franco, S.; Frasca, S.; Frasconi, F.; Frei, Z.; Freise, A.; Frey, R.; Frey, V.; Fricke, T. T.; Fritschel, P.; Frolov, V. V.; Fulda, P.; Fyffe, M.; Gabbard, H. A. G.; Gair, J. R.; Gammaitoni, L.; Gaonkar, S. G.; Garufi, F.; Gatto, A.; Gaur, G.; Gehrels, N.; Gemme, G.; Gendre, B.; Genin, E.; Gennai, A.; George, J.; Gergely, L.; Germain, V.; Ghosh, Archisman; Ghosh, S.; Giaime, J. A.; Giardina, K. D.; Giazotto, A.; Gill, K.; Glaefke, A.; Goetz, E.; Goetz, R.; Gondan, L.; González, G.; Gonzalez Castro, J. M.; Gopakumar, A.; Gordon, N. A.; Gorodetsky, M. L.; Gossan, S. E.; Gosselin, M.; Gouaty, R.; Graef, C.; Graff, P. B.; Granata, M.; Grant, A.; Gras, S.; Gray, C.; Greco, G.; Green, A. C.; Groot, P.; Grote, H.; Grunewald, S.; Guidi, G. M.; Guo, X.; Gupta, A.; Gupta, M. K.; Gushwa, K. E.; Gustafson, E. K.; Gustafson, R.; Hacker, J. J.; Hall, B. R.; Hall, E. D.; Hammond, G.; Haney, M.; Hanke, M. M.; Hanks, J.; Hanna, C.; Hannam, M. D.; Hanson, J.; Hardwick, T.; Harms, J.; Harry, G. M.; Harry, I. W.; Hart, M. J.; Hartman, M. T.; Haster, C.-J.; Haughian, K.; Heidmann, A.; Heintze, M. C.; Heitmann, H.; Hello, P.; Hemming, G.; Hendry, M.; Heng, I. S.; Hennig, J.; Heptonstall, A. W.; Heurs, M.; Hild, S.; Hoak, D.; Hodge, K. A.; Hofman, D.; Hollitt, S. E.; Holt, K.; Holz, D. E.; Hopkins, P.; Hosken, D. J.; Hough, J.; Houston, E. A.; Howell, E. J.; Hu, Y. M.; Huang, S.; Huerta, E. A.; Huet, D.; Hughey, B.; Husa, S.; Huttner, S. H.; Huynh-Dinh, T.; Idrisy, A.; Indik, N.; Ingram, D. R.; Inta, R.; Isa, H. N.; Isac, J.-M.; Isi, M.; Islas, G.; Isogai, T.; Iyer, B. R.; Izumi, K.; Jacqmin, T.; Jang, H.; Jani, K.; Jaranowski, P.; Jawahar, S.; Jiménez-Forteza, F.; Johnson, W. W.; Jones, D. I.; Jones, R.; Jonker, R. J. G.; Ju, L.; K, Haris; Kalaghatgi, C. V.; Kalogera, V.; Kandhasamy, S.; Kang, G.; Kanner, J. B.; Karki, S.; Kasprzack, M.; Katsavounidis, E.; Katzman, W.; Kaufer, S.; Kaur, T.; Kawabe, K.; Kawazoe, F.; Kéfélian, F.; Kehl, M. S.; Keitel, D.; Kelley, D. B.; Kells, W.; Kennedy, R.; Key, J. S.; Khalaidovski, A.; Khalili, F. Y.; Khan, I.; Khan, S.; Khan, Z.; Khazanov, E. A.; Kijbunchoo, N.; Kim, C.; Kim, J.; Kim, K.; Kim, Nam-Gyu; Kim, Namjun; Kim, Y.-M.; King, E. J.; King, P. J.; Kinzel, D. L.; Kissel, J. S.; Kleybolte, L.; Klimenko, S.; Koehlenbeck, S. M.; Kokeyama, K.; Koley, S.; Kondrashov, V.; Kontos, A.; Korobko, M.; Korth, W. Z.; Kowalska, I.; Kozak, D. B.; Kringel, V.; Krishnan, B.; Królak, A.; Krueger, C.; Kuehn, G.; Kumar, P.; Kuo, L.; Kutynia, A.; Lackey, B. D.; Landry, M.; Lange, J.; Lantz, B.; Lasky, P. D.; Lazzarini, A.; Lazzaro, C.; Leaci, P.; Leavey, S.; Lebigot, E. O.; Lee, C. H.; Lee, H. K.; Lee, H. M.; Lee, K.; Lenon, A.; Leonardi, M.; Leong, J. R.; Leroy, N.; Letendre, N.; Levin, Y.; Levine, B. M.; Li, T. G. F.; Libson, A.; Littenberg, T. B.; Lockerbie, N. A.; Logue, J.; Lombardi, A. L.; Lord, J. E.; Lorenzini, M.; Loriette, V.; Lormand, M.; Losurdo, G.; Lough, J. D.; Lück, H.; Lundgren, A. P.; Luo, J.; Lynch, R.; Ma, Y.; MacDonald, T.; Machenschalk, B.; MacInnis, M.; Macleod, D. M.; Magaña-Sandoval, F.; Magee, R. 
M.; Mageswaran, M.; Majorana, E.; Maksimovic, I.; Malvezzi, V.; Man, N.; Mandel, I.; Mandic, V.; Mangano, V.; Mansell, G. L.; Manske, M.; Mantovani, M.; Marchesoni, F.; Marion, F.; Márka, S.; Márka, Z.; Markosyan, A. S.; Maros, E.; Martelli, F.; Martellini, L.; Martin, I. W.; Martin, R. M.; Martynov, D. V.; Marx, J. N.; Mason, K.; Masserot, A.; Massinger, T. J.; Masso-Reid, M.; Matichard, F.; Matone, L.; Mavalvala, N.; Mazumder, N.; Mazzolo, G.; McCarthy, R.; McClelland, D. E.; McCormick, S.; McGuire, S. C.; McIntyre, G.; McIver, J.; McManus, D. J.; McWilliams, S. T.; Meacher, D.; Meadors, G. D.; Meidam, J.; Melatos, A.; Mendell, G.; Mendoza-Gandara, D.; Mercer, R. A.; Merilh, E.; Merzougui, M.; Meshkov, S.; Messenger, C.; Messick, C.; Meyers, P. M.; Mezzani, F.; Miao, H.; Michel, C.; Middleton, H.; Mikhailov, E. E.; Milano, L.; Miller, J.; Millhouse, M.; Minenkov, Y.; Ming, J.; Mirshekari, S.; Mishra, C.; Mitra, S.; Mitrofanov, V. P.; Mitselmakher, G.; Mittleman, R.; Moggi, A.; Mohan, M.; Mohapatra, S. R. P.; Montani, M.; Moore, B. C.; Moore, C. J.; Moraru, D.; Moreno, G.; Morriss, S. R.; Mossavi, K.; Mours, B.; Mow-Lowry, C. M.; Mueller, C. L.; Mueller, G.; Muir, A. W.; Mukherjee, Arunava; Mukherjee, D.; Mukherjee, S.; Mukund, N.; Mullavey, A.; Munch, J.; Murphy, D. J.; Murray, P. G.; Mytidis, A.; Nardecchia, I.; Naticchioni, L.; Nayak, R. K.; Necula, V.; Nedkova, K.; Nelemans, G.; Neri, M.; Neunzert, A.; Newton, G.; Nguyen, T. T.; Nielsen, A. B.; Nissanke, S.; Nitz, A.; Nocera, F.; Nolting, D.; Normandin, M. E.; Nuttall, L. K.; Oberling, J.; Ochsner, E.; O’Dell, J.; Oelker, E.; Ogin, G. H.; Oh, J. J.; Oh, S. H.; Ohme, F.; Oliver, M.; Oppermann, P.; Oram, Richard J.; O’Reilly, B.; O’Shaughnessy, R.; Ottaway, D. J.; Ottens, R. S.; Overmier, H.; Owen, B. J.; Pai, A.; Pai, S. A.; Palamos, J. R.; Palashov, O.; Palomba, C.; Pal-Singh, A.; Pan, H.; Pankow, C.; Pannarale, F.; Pant, B. C.; Paoletti, F.; Paoli, A.; Papa, M. A.; Paris, H. R.; Parker, W.; Pascucci, D.; Pasqualetti, A.; Passaquieti, R.; Passuello, D.; Patricelli, B.; Patrick, Z.; Pearlstone, B. L.; Pedraza, M.; Pedurand, R.; Pekowsky, L.; Pele, A.; Penn, S.; Perreca, A.; Phelps, M.; Piccinni, O.; Pichot, M.; Piergiovanni, F.; Pierro, V.; Pillant, G.; Pinard, L.; Pinto, I. M.; Pitkin, M.; Poggiani, R.; Popolizio, P.; Porter, E. K.; Post, A.; Powell, J.; Prasad, J.; Predoi, V.; Premachandra, S. S.; Prestegard, T.; Price, L. R.; Prijatelj, M.; Principe, M.; Privitera, S.; Prodi, G. A.; Prokhorov, L.; Puncken, O.; Punturo, M.; Puppo, P.; Pürrer, M.; Qi, H.; Qin, J.; Quetschke, V.; Quintero, E. A.; Quitzow-James, R.; Raab, F. J.; Rabeling, D. S.; Radkins, H.; Raffai, P.; Raja, S.; Rakhmanov, M.; Rapagnani, P.; Raymond, V.; Razzano, M.; Re, V.; Read, J.; Reed, C. M.; Regimbau, T.; Rei, L.; Reid, S.; Reitze, D. H.; Rew, H.; Reyes, S. D.; Ricci, F.; Riles, K.; Robertson, N. A.; Robie, R.; Robinet, F.; Rocchi, A.; Rolland, L.; Rollins, J. G.; Roma, V. J.; Romano, R.; Romanov, G.; Romie, J. H.; Rosińska, D.; Rowan, S.; Rüdiger, A.; Ruggi, P.; Ryan, K.; Sachdev, S.; Sadecki, T.; Sadeghian, L.; Salconi, L.; Saleem, M.; Salemi, F.; Samajdar, A.; Sammut, L.; Sampson, L.; Sanchez, E. J.; Sandberg, V.; Sandeen, B.; Sanders, J. R.; Sassolas, B.; Sathyaprakash, B. S.; Saulson, P. R.; Sauter, O.; Savage, R. L.; Sawadsky, A.; Schale, P.; Schilling, R.; Schmidt, J.; Schmidt, P.; Schnabel, R.; Schofield, R. M. S.; Schönbeck, A.; Schreiber, E.; Schuette, D.; Schutz, B. F.; Scott, J.; Scott, S. M.; Sellers, D.; Sengupta, A. 
S.; Sentenac, D.; Sequino, V.; Sergeev, A.; Serna, G.; Setyawati, Y.; Sevigny, A.; Shaddock, D. A.; Shah, S.; Shahriar, M. S.; Shaltev, M.; Shao, Z.; Shapiro, B.; Shawhan, P.; Sheperd, A.; Shoemaker, D. H.; Shoemaker, D. M.; Siellez, K.; Siemens, X.; Sigg, D.; Silva, A. D.; Simakov, D.; Singer, A.; Singer, L. P.; Singh, A.; Singh, R.; Singhal, A.; Sintes, A. M.; Slagmolen, B. J. J.; Smith, J. R.; Smith, N. D.; Smith, R. J. E.; Son, E. J.; Sorazu, B.; Sorrentino, F.; Souradeep, T.; Srivastava, A. K.; Staley, A.; Steinke, M.; Steinlechner, J.; Steinlechner, S.; Steinmeyer, D.; Stephens, B. C.; Stevenson, S.; Stone, R.; Strain, K. A.; Straniero, N.; Stratta, G.; Strauss, N. A.; Strigin, S.; Sturani, R.; Stuver, A. L.; Summerscales, T. Z.; Sun, L.; Sutton, P. J.; Swinkels, B. L.; Szczepańczyk, M. J.; Tacca, M.; Talukder, D.; Tanner, D. B.; Tápai, M.; Tarabrin, S. P.; Taracchini, A.; Taylor, R.; Theeg, T.; Thirugnanasambandam, M. P.; Thomas, E. G.; Thomas, M.; Thomas, P.; Thorne, K. A.; Thorne, K. S.; Thrane, E.; Tiwari, S.; Tiwari, V.; Tokmakov, K. V.; Tomlinson, C.; Tonelli, M.; Torres, C. V.; Torrie, C. I.; Töyrä, D.; Travasso, F.; Traylor, G.; Trifirò, D.; Tringali, M. C.; Trozzo, L.; Tse, M.; Turconi, M.; Tuyenbayev, D.; Ugolini, D.; Unnikrishnan, C. S.; Urban, A. L.; Usman, S. A.; Vahlbruch, H.; Vajente, G.; Valdes, G.; Vallisneri, M.; van Bakel, N.; van Beuzekom, M.; van den Brand, J. F. J.; Van Den Broeck, C.; Vander-Hyde, D. C.; van der Schaaf, L.; van Heijningen, J. V.; van Veggel, A. A.; Vardaro, M.; Vass, S.; Vasúth, M.; Vaulin, R.; Vecchio, A.; Vedovato, G.; Veitch, J.; Veitch, P. J.; Venkateswara, K.; Verkindt, D.; Vetrano, F.; Viceré, A.; Vinciguerra, S.; Vine, D. J.; Vinet, J.-Y.; Vitale, S.; Vo, T.; Vocca, H.; Vorvick, C.; Voss, D.; Vousden, W. D.; Vyatchanin, S. P.; Wade, A. R.; Wade, L. E.; Wade, M.; Walker, M.; Wallace, L.; Walsh, S.; Wang, G.; Wang, H.; Wang, M.; Wang, X.; Wang, Y.; Ward, R. L.; Warner, J.; Was, M.; Weaver, B.; Wei, L.-W.; Weinert, M.; Weinstein, A. J.; Weiss, R.; Welborn, T.; Wen, L.; Wesels, P.; Westphal, T.; Wette, K.; Whelan, J. T.; White, D. J.; Whiting, B. F.; Williams, R. D.; Williamson, A. R.; Willis, J. L.; Willke, B.; Wimmer, M. H.; Winkler, W.; Wipf, C. C.; Wittel, H.; Woan, G.; Worden, J.; Wright, J. L.; Wu, G.; Yablon, J.; Yam, W.; Yamamoto, H.; Yancey, C. C.; Yap, M. J.; Yu, H.; Yvert, M.; Zadrożny, A.; Zangrando, L.; Zanolin, M.; Zendri, J.-P.; Zevin, M.; Zhang, F.; Zhang, L.; Zhang, M.; Zhang, Y.; Zhao, C.; Zhou, M.; Zhou, Z.; Zhu, X. J.; Zucker, M. E.; Zuraw, S. E.; Zweizig, J.; LIGO Scientific Collaboration; Virgo Collaboration

    2016-12-01

    This article provides supplemental information for a Letter reporting the rate of binary black hole (BBH) coalescences inferred from 16 days of coincident Advanced LIGO observations surrounding the transient gravitational-wave (GW) signal GW150914. In that work we reported various rate estimates whose 90% confidence intervals fell in the range 2-600 Gpc^-3 yr^-1. Here we give details on our method and computations, including information about our search pipelines, a derivation of our likelihood function for the analysis, a description of the astrophysical search trigger distribution expected from merging BBHs, details on our computational methods, a description of the effects and our model for calibration uncertainty, and an analytic method for estimating our detector sensitivity, which is calibrated to our measurements.

  14. Neutron coincidence counting based on time interval analysis with one- and two-dimensional Rossi-alpha distributions: an application for passive neutron waste assay

    NASA Astrophysics Data System (ADS)

    Bruggeman, M.; Baeten, P.; De Boeck, W.; Carchon, R.

    1996-02-01

    Neutron coincidence counting is commonly used for the non-destructive assay of plutonium-bearing waste and for safeguards verification measurements. A major drawback of conventional coincidence counting is that a valid calibration is needed to convert a neutron coincidence count rate to a 240Pu equivalent mass (240Pu-eq). In waste assay, calibrations are made for representative waste matrices and source distributions; the actual waste, however, may have quite different matrices and source distributions compared to the calibration samples, which often biases the assay result. This paper presents a new neutron-multiplicity-sensitive coincidence counting technique that includes an auto-calibration of the neutron detection efficiency. The coincidence counting principle is based on the recording of one- and two-dimensional Rossi-alpha distributions triggered, respectively, by pulse pairs and by pulse triplets. Rossi-alpha distributions allow an easy discrimination between real and accidental coincidences and are intended to be measured with a PC-based fast time-interval analyser. The Rossi-alpha distributions can easily be expressed in terms of a limited number of factorial moments of the neutron multiplicity distributions. The presented technique allows an unbiased measurement of the 240Pu-eq mass. The presented theory, which will be referred to as Time Interval Analysis (TIA), is complementary to the Time Correlation Analysis (TCA) theories developed in the past, but is from the theoretical point of view much simpler and allows a straightforward calculation of deadtime corrections and error propagation. Analytical expressions are derived for the Rossi-alpha distributions as a function of the factorial moments of the efficiency-dependent multiplicity distributions. The validity of the proposed theory is demonstrated and verified via Monte Carlo simulations of pulse trains and the subsequent analysis of the simulated data.
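
    The one-dimensional, pair-triggered Rossi-alpha distribution described above can be built from a recorded pulse train as sketched below (gate length and binning are illustrative); the paper's two-dimensional, triplet-triggered variant extends the same idea.

        import numpy as np

        def rossi_alpha_1d(times, gate=512e-6, bins=128):
            """Histogram the time differences from each pulse to all later
            pulses within the gate; correlated (real) pairs produce a
            decaying exponential on top of a flat accidentals background."""
            t = np.sort(np.asarray(times, float))
            deltas = []
            for i, t0 in enumerate(t):
                j = i + 1
                while j < len(t) and t[j] - t0 < gate:
                    deltas.append(t[j] - t0)
                    j += 1
            return np.histogram(deltas, bins=bins, range=(0.0, gate))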

  15. Increasing the sensitivity of the Jaffe reaction for creatinine

    NASA Technical Reports Server (NTRS)

    Tom, H. Y.

    1973-01-01

    A study of the analytical procedure has revealed that the linearity of the creatinine calibration curve can be extended by using a 0.03 molar picric acid solution made up in 70 percent ethanol instead of water. Three to five times more creatinine concentration can be encompassed within the linear portion of the calibration curve.

  16. Exploring a Three-Level Model of Calibration Accuracy

    ERIC Educational Resources Information Center

    Schraw, Gregory; Kuch, Fred; Gutierrez, Antonio P.; Richmond, Aaron S.

    2014-01-01

    We compared 5 different statistics (i.e., G index, gamma, "d'", sensitivity, specificity) used in the social sciences and medical diagnosis literatures to assess calibration accuracy in order to examine the relationship among them and to explore whether one statistic provided a best fitting general measure of accuracy. College…

  17. Absolute photometric calibration of IRAC: lessons learned using nine years of flight data

    NASA Astrophysics Data System (ADS)

    Carey, S.; Ingalls, J.; Hora, J.; Surace, J.; Glaccum, W.; Lowrance, P.; Krick, J.; Cole, D.; Laine, S.; Engelke, C.; Price, S.; Bohlin, R.; Gordon, K.

    2012-09-01

    Significant improvements in our understanding of various photometric effects have occurred over the more than nine years of flight operations of the Infrared Array Camera aboard the Spitzer Space Telescope. With the accumulation of calibration data, photometric variations that are intrinsic to the instrument can now be mapped with high fidelity. Using all existing data on calibration stars, the array-location-dependent photometric correction (the variation of flux with position on the array) and the correction for intra-pixel sensitivity variation (pixel-phase) have been modeled simultaneously. Examination of the warm-mission data enabled the characterization of the underlying form of the pixel-phase variation in cryogenic data. In addition to the accumulation of calibration data, significant improvements in the calibration of the truth spectra of the calibrators have taken place. Using the work of Engelke et al. (2006), the K III calibrators have no offset compared to the A V calibrators, providing a second pillar of the calibration scheme. The current cryogenic calibration is better than 3% in an absolute sense, with most of the uncertainty still in the knowledge of the true flux densities of the primary calibrators. We present the final state of the cryogenic IRAC calibration and a comparison of the IRAC calibration to an independent calibration methodology using the HST primary calibrators.

  18. Parameter Optimisation and Uncertainty Analysis in Visual MODFLOW based Flow Model for predicting the groundwater head in an Eastern Indian Aquifer

    NASA Astrophysics Data System (ADS)

    Mohanty, B.; Jena, S.; Panda, R. K.

    2016-12-01

    Overexploitation of groundwater has led to the abandonment of several shallow tube wells in the study basin in Eastern India. For the sustainability of groundwater resources, basin-scale modelling of groundwater flow is indispensable to effective planning and management of the water resources. The intent of this study is to develop a 3-D groundwater flow model of the study basin using the Visual MODFLOW Flex 2014.2 package and to calibrate and validate the model using 17 years of observed data. A sensitivity analysis was carried out to quantify the susceptibility of the aquifer system to river bank seepage, recharge from rainfall and agricultural practices, horizontal and vertical hydraulic conductivities, and specific yield. To quantify the impact of parameter uncertainties, the Sequential Uncertainty Fitting Algorithm (SUFI-2) and Markov chain Monte Carlo (McMC) techniques were implemented; results from the two techniques were compared and their advantages and disadvantages analysed. The Nash-Sutcliffe coefficient (NSE), coefficient of determination (R2), mean absolute error (MAE), mean percent deviation (Dv), and root mean squared error (RMSE) were adopted as criteria of model evaluation during calibration and validation of the developed model. NSE, R2, MAE, Dv, and RMSE values for the groundwater flow model during calibration and validation were in the acceptable range. The McMC technique provided more reasonable results than SUFI-2. The calibrated and validated model will be useful for identifying the aquifer properties and analysing the groundwater flow dynamics and changes in groundwater levels in future forecasts.
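
    The evaluation criteria listed above are standard and compactly computed as below; the sign convention for Dv is an assumption, since the record does not define it.

        import numpy as np

        def gof_metrics(obs, sim):
            """NSE, R2, MAE, Dv (mean percent deviation), and RMSE."""
            obs, sim = np.asarray(obs, float), np.asarray(sim, float)
            err = sim - obs
            return {
                "NSE": 1.0 - np.sum(err ** 2) / np.sum((obs - obs.mean()) ** 2),
                "R2": np.corrcoef(obs, sim)[0, 1] ** 2,
                "MAE": np.mean(np.abs(err)),
                "Dv": 100.0 * np.sum(err) / np.sum(obs),
                "RMSE": np.sqrt(np.mean(err ** 2)),
            }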

  19. Helium Mass Spectrometer Leak Detection: A Method to Quantify Total Measurement Uncertainty

    NASA Technical Reports Server (NTRS)

    Mather, Janice L.; Taylor, Shawn C.

    2015-01-01

    In applications where leak rates of components or systems are evaluated against a leak rate requirement, the uncertainty of the measured leak rate must be included in the reported result. However, in the helium mass spectrometer leak detection method, the sensitivity, or resolution, of the instrument is often the only component of the total measurement uncertainty noted when reporting results. To address this shortfall, a measurement uncertainty analysis method was developed that includes the leak detector unit's resolution, repeatability, hysteresis, and drift, along with the uncertainty associated with the calibration standard. In a step-wise process, the method identifies the bias and precision components of the calibration standard, the measurement correction factor (K-factor), and the leak detector unit. Together these individual contributions to error are combined and the total measurement uncertainty is determined using the root-sum-square method. It was found that the precision component contributes more to the total uncertainty than the bias component, but the bias component is not insignificant. For helium mass spectrometer leak rate tests where unit sensitivity alone is not enough, a thorough evaluation of the measurement uncertainty such as the one presented herein should be performed and reported along with the leak rate value.
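
    The root-sum-square combination at the heart of the method is simple to state; the component names below follow the abstract, while the magnitudes are placeholders.

        import math

        def total_uncertainty(components):
            """Combine independent uncertainty contributors by root-sum-square."""
            return math.sqrt(sum(u * u for u in components.values()))

        u_total = total_uncertainty({
            "resolution": 0.05, "repeatability": 0.08, "hysteresis": 0.03,
            "drift": 0.04, "calibration_standard": 0.06, "k_factor": 0.02,
        })  # all expressed as fractions of the measured leak rate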

  20. Nanometric Integrated Temperature and Thermal Sensors in CMOS-SOI Technology.

    PubMed

    Malits, Maria; Nemirovsky, Yael

    2017-07-29

    This paper reviews and compares the thermal and noise characterization of CMOS (complementary metal-oxide-semiconductor) SOI (silicon-on-insulator) transistors and lateral diodes used as temperature and thermal sensors. DC analysis of the measured sensors and the experimental results over a broad temperature range (300 K up to 550 K) are presented. It is shown that both sensors require small chip area, have low power consumption, and exhibit linearity and high sensitivity over the entire temperature range. However, the diode's sensitivity to temperature variations in CMOS-SOI technology is highly dependent on the diode's perimeter; hence, a careful calibration for each fabrication process is needed. In contrast, the short thermal time constant of the electrons in the transistor's channel enables measuring the instantaneous heating of the channel and determining the local true temperature of the transistor. This allows accurate "on-line" temperature sensing with no additional calibration. In addition, the noise measurements indicate that the diode's small area and perimeter cause high 1/f noise at all measured bias currents. This is a severe drawback for accuracy when the diode is used as a thermal sensor; hence, CMOS-SOI transistors are a better choice for temperature sensing.

  1. A new cloud point extraction procedure for determination of inorganic antimony species in beverages and biological samples by flame atomic absorption spectrometry.

    PubMed

    Altunay, Nail; Gürkan, Ramazan

    2015-05-15

    A new cloud-point extraction (CPE) method for the determination of antimony species in biological and beverage samples by flame atomic absorption spectrometry (FAAS) has been established. The method is based on the formation of competitive ion-pairing complexes of Sb(III) and Sb(V) with Victoria Pure Blue BO (VPB+) at pH 10. The antimony species were individually detected by FAAS. Under the optimized conditions, the calibration range for Sb(V) is 1-250 μg L(-1) with a detection limit of 0.25 μg L(-1) and a sensitivity enhancement factor of 76.3, while the calibration range for Sb(III) is 10-400 μg L(-1) with a detection limit of 5.15 μg L(-1) and a sensitivity enhancement factor of 48.3. The precision, as a relative standard deviation, is in the range of 0.24-2.35%. The method was successfully applied to the speciative determination of antimony in the samples, and the validation was verified by analysis of certified reference materials (CRMs). Copyright © 2014 Elsevier Ltd. All rights reserved.

  2. SeaWiFS technical report series. Volume 13: Case studies for SeaWiFS calibration and validation, part 1

    NASA Technical Reports Server (NTRS)

    Hooker, Stanford B. (Editor); Firestone, Elaine R. (Editor); Mcclain, Charles R.; Comiso, Josefino C.; Fraser, Robert S.; Firestone, James K.; Schieber, Brian D.; Yeh, Eueng-Nan; Arrigo, Kevin R.; Sullivan, Cornelius W.

    1994-01-01

    Although the Sea-viewing Wide Field-of-view Sensor (SeaWiFS) Calibration and Validation Program relies on the scientific community for the collection of bio-optical and atmospheric correction data as well as for algorithm development, it does have the responsibility for evaluating and comparing the algorithms and for ensuring that the algorithms are properly implemented within the SeaWiFS Data Processing System. This report consists of a series of sensitivity and algorithm (bio-optical, atmospheric correction, and quality control) studies based on Coastal Zone Color Scanner (CZCS) and historical ancillary data undertaken to assist in the development of SeaWiFS specific applications needed for the proper execution of that responsibility. The topics presented are as follows: (1) CZCS bio-optical algorithm comparison, (2) SeaWiFS ozone data analysis study, (3) SeaWiFS pressure and oxygen absorption study, (4) pixel-by-pixel pressure and ozone correction study for ocean color imagery, (5) CZCS overlapping scenes study, (6) a comparison of CZCS and in situ pigment concentrations in the Southern Ocean, (7) the generation of ancillary data climatologies, (8) CZCS sensor ringing mask comparison, and (9) sun glint flag sensitivity study.

  3. Modeling pesticide loadings from the San Joaquin watershed into the Sacramento-San Joaquin Delta using SWAT

    NASA Astrophysics Data System (ADS)

    Chen, H.; Zhang, M.

    2016-12-01

    The Sacramento-San Joaquin Delta is an ecologically rich, hydrologically complex area that serves as the hub of California's water supply. However, pesticides have been routinely detected in the Delta waterways, with concentrations exceeding the benchmark for the protection of aquatic life. Pesticide loadings into the Delta are partially attributed to the San Joaquin watershed, a highly productive agricultural watershed located upstream. Therefore, this study aims to simulate pesticide loadings to the Delta by applying the Soil and Water Assessment Tool (SWAT) model to the San Joaquin watershed, under the support of the USDA-ARS Delta Area-Wide Pest Management Program. Pesticide use patterns in the San Joaquin watershed were characterized by combining the California Pesticide Use Reporting (PUR) database and GIS analysis. Sensitivity/uncertainty analyses and multi-site calibration were performed in the simulation of stream flow, sediment, and pesticide loads along the San Joaquin River. Model performance was evaluated using a combination of graphic and quantitative measures. Preliminary results indicated that stream flow was satisfactorily simulated along the San Joaquin River and the major eastern tributaries, whereas stream flow was less accurately simulated in the western tributaries, which are ephemeral small streams that peak during winter storm events and are mainly fed by irrigation return flow during the growing season. The parameters most sensitive for stream flow were CN2, SOL_AWC, HRU_SLP, SLSUBBSN, SLSOIL, GWQMN, and GW_REVAP. Regionalization of parameters is important, as the sensitivity of parameters varies significantly in space. In terms of evaluation metrics, NSE tended to overrate model performance when compared to PBIAS. Anticipated results will include (1) pesticide use pattern analysis, (2) calibration and validation of stream flow, sediment, and pesticide loads, and (3) characterization of spatial patterns and temporal trends of pesticide yield.

  4. Film-based delivery quality assurance for robotic radiosurgery: Commissioning and validation.

    PubMed

    Blanck, Oliver; Masi, Laura; Damme, Marie-Christin; Hildebrandt, Guido; Dunst, Jürgen; Siebert, Frank-Andre; Poppinga, Daniela; Poppe, Björn

    2015-07-01

    Robotic radiosurgery demands comprehensive delivery quality assurance (DQA), but guidelines for commissioning of the DQA method are missing. We investigated the stability and sensitivity of our film-based DQA method with various test scenarios and routine patient plans, as well as the applicability of tight distance-to-agreement (DTA) Gamma-Index criteria. We used radiochromic films with multichannel film dosimetry and re-calibration, and our analysis was performed in four steps: 1) film-to-plan registration; 2) standard Gamma-Index criteria evaluation (local pixel dose difference ≤2%, distance-to-agreement ≤2 mm, pass-rate ≥90%); 3) dose distribution shift until the maximum pass-rate (Maxγ) was found (shift acceptance <1 mm); and 4) final evaluation with tight DTA criteria (≤1 mm). Test scenarios consisted of purposefully introduced phantom misalignments, dose miscalibrations, and undelivered MU. Initial method evaluation was done on 30 clinical plans. Our method showed sensitivity similar to the standard End-2-End test and incorporated an estimate of global system offsets in the analysis. The simulated errors (phantom shifts, global robot misalignment, undelivered MU) were detected by our method, while standard Gamma-Index criteria often did not reveal these deviations. Dose miscalibration was not detected by film alone; hence, simultaneous ion-chamber measurement for film calibration is strongly recommended. 83% of the clinical patient plans were within our tight DTA tolerances. The presented methods provide additional measurements and quality references for film-based DQA, enabling more sensitive error detection. We provide various test scenarios for commissioning of robotic radiosurgery DQA and demonstrate the necessity of tight DTA criteria. Copyright © 2015 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
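
    For reference, the Gamma-Index behind these pass-rate criteria can be sketched in one dimension with the local dose difference the paper uses (a simplified sketch; clinical implementations interpolate the evaluated distribution and guard against zero doses):

        import numpy as np

        def gamma_1d(x_ref, d_ref, x_eval, d_eval, dta_mm=2.0, dd=0.02):
            """gamma_i = min_j sqrt(((x_j - x_i)/DTA)^2 + ((D_j - D_i)/(dd*D_i))^2);
            a reference point passes when gamma <= 1 (d_ref must be nonzero
            for this local-dose normalization)."""
            gam = np.empty(len(x_ref))
            for i, (x0, d0) in enumerate(zip(x_ref, d_ref)):
                dr2 = ((x_eval - x0) / dta_mm) ** 2
                dd2 = ((d_eval - d0) / (dd * d0)) ** 2
                gam[i] = np.sqrt(np.min(dr2 + dd2))
            return gam

        # Pass rate against the >=90% criterion:
        # pass_rate = 100.0 * np.mean(gamma_1d(...) <= 1.0)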

  5. A sensitivity analysis of low salinity habitats simulated by a hydrodynamic model in the Manatee River estuary in Florida, USA

    NASA Astrophysics Data System (ADS)

    Chen, XinJian

    2012-06-01

    This paper presents a sensitivity study of simulated availability of low salinity habitats by a hydrodynamic model for the Manatee River estuary located in the southwest portion of the Florida peninsula. The purpose of the modeling study was to establish a regulatory minimum freshwater flow rate required to prevent the estuarine ecosystem from significant harm. The model used in the study was a multi-block model that dynamically couples a three-dimensional (3D) hydrodynamic model with a laterally averaged (2DV) hydrodynamic model. The model was calibrated and verified against measured real-time data of surface elevation and salinity at five stations during March 2005-July 2006. The calibrated model was then used to conduct a series of scenario runs to investigate effects of the flow reduction on salinity distributions in the Manatee River estuary. Based on simulated salinity distribution in the estuary, water volumes, bottom areas and shoreline lengths for salinity less than certain predefined values were calculated and analyzed to help establish the minimum freshwater flow rate for the estuarine system. The sensitivity analysis conducted during the modeling study for the Manatee River estuary examined effects of the bottom roughness, ambient vertical eddy viscosity/diffusivity, horizontal eddy viscosity/diffusivity, and ungauged flow on the model results and identified the relative importance of these model parameters (input data) to the outcome of the availability of low salinity habitats. It is found that the ambient vertical eddy viscosity/diffusivity is the most influential factor controlling the model outcome, while the horizontal eddy viscosity/diffusivity is the least influential one.

  6. Characterization of metal oxide field-effect transistors for first helical tomotherapy Hi-Art II unit in India.

    PubMed

    Kinhikar, Rajesh A; Pai, Rajeshree; Master, Zubin; Deshpande, Deepak D

    2009-01-01

    To characterize metal oxide semiconductor field-effect transistors (MOSFETs) for a 6-MV photon beam with the first helical tomotherapy Hi-Art II unit in India, standard-sensitivity MOSFETs were first calibrated and then characterized for reproducibility, field size dependence, angular dependence, fade effects, and temperature dependence. The detector sensitivity was estimated for static as well as rotational modes for three jaw settings (1.0 cm x 40 cm, 2.5 cm x 40 cm, and 5 cm x 40 cm) at 1.5-cm depth with a source-to-axis distance (SAD) of 85 cm in virtual water slabs. An A1SL ion chamber and thermoluminescence dosimeters (TLDs) were used to compare the results. No significant difference was found in the detector sensitivity for static and rotational procedures: the average detector sensitivity was 1.10 mV/cGy (SD 0.02) for static procedures and 1.12 mV/cGy (SD 0.02) for rotational procedures, the same within the experimental uncertainty. The MOSFET reading was consistent and its reproducibility was excellent (±0.5%), with no significant dependence on field size. An angular dependence of less than 1.0% was observed, and the fading effect of the MOSFET was negligible. The MOSFET response was found to be independent of temperature in the range of 18-30 degrees. The ion chamber readings were taken as the reference for the estimation of the MOSFET calibration factor, and the ion chamber and the TLD were in good agreement (±2%) with each other. This study deals only with measurements and calibration performed on the surface of the phantom. The MOSFET was calibrated and validated for phantom surface measurements for a 6-MV photon beam generated by a tomotherapy machine. The sensitivity of the detector was the same for both modes of treatment delivery with tomotherapy. The performance of the MOSFET was validated and found satisfactory for the helical tomotherapy Hi-Art II unit. However, MOSFETs may be used for in vivo surface dosimetry only after calibration under conditions replicating as closely as possible the manner in which the dosimeter will be used clinically.

  7. Design and optimization of stress centralized MEMS vector hydrophone with high sensitivity at low frequency

    NASA Astrophysics Data System (ADS)

    Zhang, Guojun; Ding, Junwen; Xu, Wei; Liu, Yuan; Wang, Renxin; Han, Janjun; Bai, Bing; Xue, Chenyang; Liu, Jun; Zhang, Wendong

    2018-05-01

    A micro hydrophone based on the piezoresistive effect, the "MEMS vector hydrophone", was developed for acoustic detection applications. To improve the sensitivity of MEMS vector hydrophones at low frequency, we report a stress-centralized MEMS vector hydrophone (SCVH) intended mainly for the 20-500 Hz range. A stress concentration area was realized in the sensitive unit of the hydrophone by silicon micromachining technology. Piezoresistors were then placed in the stress concentration area for better mechanical response, thereby yielding higher sensitivity. Static analysis was performed to compare the mechanical response of three different sensitive microstructures: the SCVH, the conventional micro-silicon four-beam vector hydrophone (CFVH), and the Lollipop-shaped vector hydrophone (LVH). Fluid-structure interaction (FSI) analysis was used to determine the natural frequency of the SCVH and thus ensure the measurable bandwidth. Finally, a calibration experiment in a standing-wave field was conducted to test the properties of the SCVH and verify the accuracy of the simulation. The results show that the sensitivity of the SCVH is nearly 17.2 dB higher than that of the CFVH and 7.6 dB higher than that of the LVH over 20-500 Hz.

  8. Improving calibration and validation of cosmic-ray neutron sensors in the light of spatial sensitivity

    NASA Astrophysics Data System (ADS)

    Schrön, Martin; Köhli, Markus; Scheiffele, Lena; Iwema, Joost; Bogena, Heye R.; Lv, Ling; Martini, Edoardo; Baroni, Gabriele; Rosolem, Rafael; Weimar, Jannis; Mai, Juliane; Cuntz, Matthias; Rebmann, Corinna; Oswald, Sascha E.; Dietrich, Peter; Schmidt, Ulrich; Zacharias, Steffen

    2017-10-01

    In the last few years the method of cosmic-ray neutron sensing (CRNS) has gained popularity among hydrologists, physicists, and land-surface modelers. The sensor provides continuous soil moisture data, averaged over several hectares and tens of decimeters in depth. However, the signal may still contain unidentified features of hydrological processes, and many calibration datasets are often required in order to find reliable relations between neutron intensity and water dynamics. Recent insights into environmental neutrons accurately described the spatial sensitivity of the sensor and thus allowed one to quantify the contribution of individual sample locations to the CRNS signal. Consequently, it is suggested that data points of calibration and validation datasets be averaged using a more physically based weighting approach. In this work, a revised sensitivity function is used to calculate weighted averages of point data. The function differs from the conventional simple exponential in its pronounced sensitivity to the first few meters around the probe and in its dependencies on air pressure, air humidity, soil moisture, and vegetation. The approach is extensively tested at six distinct monitoring sites: two sites with multiple calibration datasets and four sites with continuous time series datasets. In all cases, the revised averaging method improved the performance of the CRNS products. The revised approach further helped to reveal hidden hydrological processes which otherwise remained unexplained in the data or were lost in the process of overcalibration. The presented weighting approach increases the overall accuracy of CRNS products and will have an impact on all their applications in agriculture, hydrology, and modeling.
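
    The weighting idea can be sketched as follows (assumptions: a schematic radial weight that merely mimics the near-probe emphasis; the paper's revised function additionally depends on air pressure, humidity, soil moisture, and vegetation, and is not reproduced here):

        import numpy as np

        def radial_weight(r_m):
            # Placeholder horizontal weight favoring the first few meters;
            # NOT the revised function of the paper.
            return np.exp(-r_m / 50.0) + 3.0 * np.exp(-r_m / 5.0)

        # Hypothetical calibration samples: distance from the probe (m) and
        # volumetric soil moisture measured at each location.
        r = np.array([2.0, 10.0, 25.0, 75.0, 150.0])
        theta = np.array([0.21, 0.24, 0.27, 0.30, 0.33])

        w = radial_weight(r)
        theta_w = np.sum(w * theta) / np.sum(w)
        print(f"weighted = {theta_w:.3f}, unweighted = {theta.mean():.3f}")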

  9. Sensitivity Analysis of an Automated Calibration Routine for Airborne Cameras

    DTIC Science & Technology

    2013-03-01

    [Extraction fragment from AFIT thesis AFIT-ENG-13-M-51. The recoverable text describes calibration flight maneuvers (e.g., a standard holding pattern with 30-second straight legs and 180° turns at 30° angle of bank) and the residue of a figure plotting error (meters) against pixel noise standard deviation.]

  10. R-SWAT-FME user's guide

    USGS Publications Warehouse

    Wu, Yiping; Liu, Shu-Guang

    2012-01-01

    R program language-Soil and Water Assessment Tool-Flexible Modeling Environment (R-SWAT-FME) (Wu and Liu, 2012) is a comprehensive modeling framework that adopts an R package, Flexible Modeling Environment (FME) (Soetaert and Petzoldt, 2010), for the Soil and Water Assessment Tool (SWAT) model (Arnold and others, 1998; Neitsch and others, 2005). This framework provides the functionalities of parameter identifiability, model calibration, and sensitivity and uncertainty analysis with instant visualization. This user's guide shows how to apply this framework for a customized SWAT project.

  11. Emulation of simulations of atmospheric dispersion at Fukushima for Sobol' sensitivity analysis

    NASA Astrophysics Data System (ADS)

    Girard, Sylvain; Korsakissok, Irène; Mallet, Vivien

    2015-04-01

    Polyphemus/Polair3D, from which derives IRSN's operational model ldX, was used to simulate the atmospheric dispersion at the Japan scale of radionuclides after the Fukushima disaster. A previous study with the screening method of Morris had shown that - The sensitivities depend a lot on the considered output; - Only a few of the inputs are non-influential on all considered outputs; - Most influential inputs have either non-linear effects or are interacting. These preliminary results called for a more detailed sensitivity analysis, especially regarding the characterization of interactions. The method of Sobol' allows for a precise evaluation of interactions but requires large simulation samples. Gaussian process emulators for each considered outputs were built in order to relieve this computational burden. Globally aggregated outputs proved to be easy to emulate with high accuracy, and associated Sobol' indices are in broad agreement with previous results obtained with the Morris method. More localized outputs, such as temporal averages of gamma dose rates at measurement stations, resulted in lesser emulator performances: tests simulations could not satisfactorily be reproduced by some emulators. These outputs are of special interest because they can be compared to available observations, for instance for calibration purpose. A thorough inspection of prediction residuals hinted that the model response to wind perturbations often behaved in very distinct regimes relatively to some thresholds. Complementing the initial sample with wind perturbations set to the extreme values allowed for sensible improvement of some of the emulators while other remained too unreliable to be used in a sensitivity analysis. Adaptive sampling or regime-wise emulation could be tried to circumvent this issue. Sobol' indices for local outputs revealed interesting patterns, mostly dominated by the winds, with very high interactions. The emulators will be useful for subsequent studies. Indeed, our goal is to characterize the model output uncertainty but too little information is available about input uncertainties. Hence, calibration of the input distributions with observation and a Bayesian approach seem necessary. This would probably involve methods such as MCMC which would be intractable without emulators.
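
    The generic emulator-plus-Sobol' workflow can be sketched in a few lines (a toy stand-in for the dispersion model, not IRSN's setup; assumes the SALib and scikit-learn packages):

        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor
        from SALib.sample import saltelli
        from SALib.analyze import sobol

        problem = {"num_vars": 2,
                   "names": ["wind_perturbation", "emission_factor"],
                   "bounds": [[-1.0, 1.0], [0.5, 2.0]]}

        def model(x):  # toy function standing in for full dispersion runs
            return np.sin(3.0 * x[:, 0]) * x[:, 1] + x[:, 1] ** 2

        # Train the emulator on a small design, then evaluate it on the large
        # Saltelli sample that the Sobol' method requires.
        X_train = saltelli.sample(problem, 64)
        gp = GaussianProcessRegressor().fit(X_train, model(X_train))

        X_big = saltelli.sample(problem, 1024)
        Si = sobol.analyze(problem, gp.predict(X_big))
        print("S1 =", Si["S1"], " ST =", Si["ST"])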

  12. CLUSTER STAFF search coils magnetometer calibration - comparisons with FGM

    NASA Astrophysics Data System (ADS)

    Robert, P.; Cornilleau-Wehrlin, N.; Piberne, R.; de Conchy, Y.; Lacombe, C.; Bouzid, V.; Grison, B.; Alison, D.; Canu, P.

    2013-12-01

    The main part of the Cluster Spatio Temporal Analysis of Field Fluctuations (STAFF) experiment consists of triaxial search coils allowing the measurements of the three magnetic components of the waves from 0.1 Hz up to 4 kHz. Two sets of data are produced, one by a module to filter and transmit the corresponding waveform up to either 10 or 180 Hz (STAFF-SC), and the second by an onboard Spectrum Analyser (STAFF-SA) to compute the elements of the spectral matrix for five components of the waves, 3 × B and 2 × E (from the EFW experiment), in the frequency range 8 Hz to 4 kHz. In order to understand the way the output signals of the search coils are calibrated, the transfer functions of the different parts of the instrument are described, as well as the way to transform telemetry data into physical units across various coordinate systems from the spinning sensors to a fixed and known frame. The instrument sensitivity is discussed. Cross-calibration inside STAFF (SC and SA) is presented. Results of cross-calibration between the STAFF search coils and the Cluster Flux Gate Magnetometer (FGM) data are discussed. It is shown that these cross-calibrations lead to an agreement between both data sets at low frequency within a 2% error. By means of statistics done over 10 yr, it is shown that the functionalities and characteristics of both instruments have not changed during this period.

  13. CLUSTER-STAFF search coil magnetometer calibration - comparisons with FGM

    NASA Astrophysics Data System (ADS)

    Robert, P.; Cornilleau-Wehrlin, N.; Piberne, R.; de Conchy, Y.; Lacombe, C.; Bouzid, V.; Grison, B.; Alison, D.; Canu, P.

    2014-09-01

    The main part of the Cluster Spatio-Temporal Analysis of Field Fluctuations (STAFF) experiment consists of triaxial search coils allowing the measurements of the three magnetic components of the waves from 0.1 Hz up to 4 kHz. Two sets of data are produced, one by a module to filter and transmit the corresponding waveform up to either 10 or 180 Hz (STAFF-SC), and the second by the onboard Spectrum Analyser (STAFF-SA) to compute the elements of the spectral matrix for five components of the waves, 3 × B and 2 × E (from the EFW experiment), in the frequency range 8 Hz to 4 kHz. In order to understand the way the output signals of the search coils are calibrated, the transfer functions of the different parts of the instrument are described as well as the way to transform telemetry data into physical units across various coordinate systems from the spinning sensors to a fixed and known frame. The instrument sensitivity is discussed. Cross-calibration inside STAFF (SC and SA) is presented. Results of cross-calibration between the STAFF search coils and the Cluster Fluxgate Magnetometer (FGM) data are discussed. It is shown that these cross-calibrations lead to an agreement between both data sets at low frequency within a 2% error. By means of statistics done over 10 yr, it is shown that the functionalities and characteristics of both instruments have not changed during this period.

  14. UCODE_2005 and six other computer codes for universal sensitivity analysis, calibration, and uncertainty evaluation constructed using the JUPITER API

    USGS Publications Warehouse

    Poeter, Eileen E.; Hill, Mary C.; Banta, Edward R.; Mehl, Steffen; Christensen, Steen

    2006-01-01

    This report documents the computer codes UCODE_2005 and six post-processors. Together the codes can be used with existing process models to perform sensitivity analysis, data needs assessment, calibration, prediction, and uncertainty analysis. Any process model or set of models can be used; the only requirements are that models have numerical (ASCII or text only) input and output files, that the numbers in these files have sufficient significant digits, that all required models can be run from a single batch file or script, and that simulated values are continuous functions of the parameter values. Process models can include pre-processors and post-processors as well as one or more models related to the processes of interest (physical, chemical, and so on), making UCODE_2005 extremely powerful. An estimated parameter can be a quantity that appears in the input files of the process model(s), or a quantity used in an equation that produces a value that appears in the input files. In the latter situation, the equation is user-defined. UCODE_2005 can compare observations and simulated equivalents. The simulated equivalents can be any simulated value written in the process-model output files or can be calculated from simulated values with user-defined equations. The quantities can be model results, or dependent variables. For example, for ground-water models they can be heads, flows, concentrations, and so on. Prior, or direct, information on estimated parameters also can be considered. Statistics are calculated to quantify the comparison of observations and simulated equivalents, including a weighted least-squares objective function. In addition, data-exchange files are produced that facilitate graphical analysis. UCODE_2005 can be used fruitfully in model calibration through its sensitivity analysis capabilities and its ability to estimate parameter values that result in the best possible fit to the observations. Parameters are estimated using nonlinear regression: a weighted least-squares objective function is minimized with respect to the parameter values using a modified Gauss-Newton method or a double-dogleg technique. Sensitivities needed for the method can be read from files produced by process models that can calculate sensitivities, such as MODFLOW-2000, or can be calculated by UCODE_2005 using a more general, but less accurate, forward- or central-difference perturbation technique. Problems resulting from inaccurate sensitivities and solutions related to the perturbation techniques are discussed in the report. Statistics are calculated and printed for use in (1) diagnosing inadequate data and identifying parameters that probably cannot be estimated; (2) evaluating estimated parameter values; and (3) evaluating how well the model represents the simulated processes. Results from UCODE_2005 and codes RESIDUAL_ANALYSIS and RESIDUAL_ANALYSIS_ADV can be used to evaluate how accurately the model represents the processes it simulates. Results from LINEAR_UNCERTAINTY can be used to quantify the uncertainty of model simulated values if the model is sufficiently linear. Results from MODEL_LINEARITY and MODEL_LINEARITY_ADV can be used to evaluate model linearity and, thereby, the accuracy of the LINEAR_UNCERTAINTY results. UCODE_2005 can also be used to calculate nonlinear confidence and predictions intervals, which quantify the uncertainty of model simulated values when the model is not linear. 
CORFAC_PLUS can be used to produce factors that allow intervals to account for model intrinsic nonlinearity and small-scale variations in system characteristics that are not explicitly accounted for in the model or the observation weighting. The six post-processing programs are independent of UCODE_2005 and can use the results of other programs that produce the required data-exchange files. UCODE_2005 and the other six codes are intended for use on any computer operating system. The programs con
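
    The core estimation loop that such codes implement, a weighted least-squares Gauss-Newton iteration with perturbation (finite-difference) sensitivities, can be sketched as follows (a generic toy example, not UCODE_2005 itself):

        import numpy as np

        def gauss_newton(f, p, y_obs, w, n_iter=10, h=1e-6):
            # Minimize sum(w * (y_obs - f(p))**2) with a forward-difference
            # Jacobian, analogous to UCODE's perturbation sensitivities.
            for _ in range(n_iter):
                r = y_obs - f(p)
                J = np.column_stack([(f(p + h * e) - f(p)) / h
                                     for e in np.eye(len(p))])
                W = np.diag(w)
                p = p + np.linalg.solve(J.T @ W @ J, J.T @ W @ r)
            return p

        # Toy process model: two-parameter exponential decay.
        t = np.linspace(0.0, 4.0, 9)
        f = lambda p: p[0] * np.exp(-p[1] * t)
        rng = np.random.default_rng(0)
        y_obs = f(np.array([10.0, 0.7])) + 0.05 * rng.standard_normal(t.size)

        print(gauss_newton(f, np.array([5.0, 0.3]), y_obs, np.ones(t.size)))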

  15. [Quantitative analysis of Cu in water by collinear DP-LIBS].

    PubMed

    Zheng, Mei-Lan; Yao, Ming-Yin; Chen, Tian-Bing; Lin, Yong-Zeng; Li, Wen-Bing; Liu, Mu-Hua

    2014-07-01

    The purpose of this research is to study the influence of double pulse laser induced breakdown spectroscopy (DP-LIBS) on the detection sensitivity for Cu in water. A water solution of Cu was tested by collinear DP-LIBS. The results show that the spectral intensity of Cu can be enhanced markedly by DP-LIBS compared with single pulse laser induced breakdown spectroscopy (SP-LIBS). The experimental results were significantly affected by the delay time between the laser pulse and spectrometer acquisition, the delay time between the two laser pulses, and the laser pulse energy. The optimal conditions for DP-LIBS detection of Cu in water were determined: an acquisition delay time of 1 380 ns, a laser pulse delay time of 25 ns, and a double laser pulse energy of 100 mJ. The characteristic Cu lines at 324.7 and 327.4 nm were analyzed for quantitative analysis. The detection limit was 3.5 microg x mL(-1) at 324.7 nm and 4.84 microg x mL(-1) at 327.4 nm, and the relative standard deviation of the two characteristic spectral lines was within 10%. The calibration curve established from the 327.4 nm line was verified with a 500 microg x mL(-1) sample; the concentration calculated from the calibration curve was 446 microg x mL(-1). This research shows that the detection sensitivity for Cu in water can be improved by DP-LIBS while maintaining high stability.
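
    The 3σ detection-limit computation reported here follows a standard pattern that can be sketched with hypothetical numbers (illustration only, not the authors' data):

        import numpy as np

        # Hypothetical calibration data: Cu concentration (ug/mL) vs. intensity.
        conc = np.array([10.0, 50.0, 100.0, 200.0, 400.0])
        intensity = np.array([152.0, 730.0, 1480.0, 2950.0, 5900.0])
        sigma_blank = 17.0  # std. dev. of repeated blank measurements

        slope, intercept = np.polyfit(conc, intensity, 1)
        print(f"LOD = {3.0 * sigma_blank / slope:.2f} ug/mL")  # 3-sigma limit
        # A sample concentration predicted from the calibration curve:
        print(f"intensity 6600 -> {(6600.0 - intercept) / slope:.0f} ug/mL")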

  16. [Measurement of Water COD Based on UV-Vis Spectroscopy Technology].

    PubMed

    Wang, Xiao-ming; Zhang, Hai-liang; Luo, Wei; Liu, Xue-mei

    2016-01-01

    Ultraviolet/visible (UV/Vis) spectroscopy technology was used to measure water COD. A total of 135 water samples were collected from Zhejiang province. Raw spectra and three different pretreatment methods (Multiplicative Scatter Correction (MSC), Standard Normal Variate (SNV), and 1st derivatives) were compared to determine the optimal pretreatment method for analysis. Spectral variable selection is an important strategy in spectrum modeling, because it tends to produce parsimonious data representations and can lead to multivariate models with better performance. In order to simplify the calibration models, the preprocessed spectra were then used to select sensitive wavelengths by competitive adaptive reweighted sampling (CARS), Random Frog, and Genetic Algorithm (GA) methods. Different numbers of sensitive wavelengths were selected by the different variable selection methods with the SNV preprocessing method. Partial least squares (PLS) was used to build models with the full spectra, and Extreme Learning Machine (ELM) was applied to build models with the selected wavelength variables. The overall results showed that the ELM models performed better than the PLS model, and the ELM model with wavelengths selected by CARS obtained the best results, with a determination coefficient (R2) of 0.82, an RMSEP of 14.48, and an RPD of 2.34 for the prediction set. The results indicate that it is feasible to use UV/Vis spectroscopy with characteristic wavelengths selected by CARS, combined with ELM calibration, for the rapid and accurate determination of COD in aquaculture water. Moreover, this study lays the foundation for further implementation of online analysis of aquaculture water and rapid determination of other water quality parameters.
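
    A minimal sketch of the full-spectrum PLS baseline used for comparison (synthetic spectra in place of the real UV/Vis data; assumes scikit-learn):

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(1)
        # Synthetic stand-in: 135 samples x 600 wavelengths, with COD driven
        # by two "sensitive" wavelengths plus noise.
        X = rng.standard_normal((135, 600))
        cod = 3.0 * X[:, 120] + 2.0 * X[:, 310] + 0.5 * rng.standard_normal(135)

        X_tr, X_te, y_tr, y_te = train_test_split(X, cod, test_size=0.3,
                                                  random_state=0)
        pls = PLSRegression(n_components=8).fit(X_tr, y_tr)
        print(f"R^2 on the prediction set = {pls.score(X_te, y_te):.2f}")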

  17. Calibration of limited-area ensemble precipitation forecasts for hydrological predictions

    NASA Astrophysics Data System (ADS)

    Diomede, Tommaso; Marsigli, Chiara; Montani, Andrea; Nerozzi, Fabrizio; Paccagnella, Tiziana

    2015-04-01

    The main objective of this study is to investigate the impact of calibration for limited-area ensemble precipitation forecasts, to be used for driving discharge predictions up to 5 days in advance. A reforecast dataset spanning 30 years, based on the Consortium for Small Scale Modeling Limited-Area Ensemble Prediction System (COSMO-LEPS), was used for testing the calibration strategy. Three calibration techniques were applied: quantile-to-quantile mapping, linear regression, and analogs. The performance of these methodologies was evaluated in terms of statistical scores for the precipitation forecasts operationally provided by COSMO-LEPS in the years 2003-2007 over Germany, Switzerland, and the Emilia-Romagna region (northern Italy). The analog-based method was preferred because of its capability to correct position errors and spread deficiencies. A suitable spatial domain for the analog search can help to handle model spatial errors as systematic errors. However, the performance of the analog-based method may degrade when only a limited training dataset is available; a sensitivity test on the length of the training dataset over which to perform the analog search was therefore carried out. The quantile-to-quantile mapping and linear regression methods were less effective, mainly because the forecast-analysis relation was not strong for the available training dataset. A comparison between calibration based on the deterministic reforecast and calibration based on the full operational ensemble used as the training dataset was also considered, with the aim of evaluating whether reforecasts are really worthwhile for calibration, given their remarkable computational cost. The calibration process was then verified by coupling ensemble precipitation forecasts with a distributed rainfall-runoff model. This test was carried out for a medium-sized catchment located in Emilia-Romagna and showed a beneficial impact of the analog-based method on the reduction of missed events for discharge predictions.
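
    Of the three techniques, quantile-to-quantile mapping is the simplest to sketch; the following empirical version (hypothetical climatologies, not the COSMO-LEPS data) maps each forecast value to the observed value at the same climatological quantile:

        import numpy as np

        def quantile_map(fcst, fcst_clim, obs_clim):
            # Quantile of each forecast value within the forecast climatology,
            # then the observed value at that same quantile.
            q = np.interp(fcst, np.sort(fcst_clim),
                          np.linspace(0.0, 1.0, fcst_clim.size))
            return np.interp(q, np.linspace(0.0, 1.0, obs_clim.size),
                             np.sort(obs_clim))

        rng = np.random.default_rng(2)
        fcst_clim = rng.gamma(2.0, 4.0, 3000)  # stand-in reforecast precip
        obs_clim = rng.gamma(2.0, 5.0, 3000)   # stand-in observed precip
        print(quantile_map(np.array([1.0, 10.0, 40.0]), fcst_clim, obs_clim))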

  18. Adjustments to the MODIS Terra Radiometric Calibration and Polarization Sensitivity in the 2010 Reprocessing

    NASA Technical Reports Server (NTRS)

    Meister, Gerhard; Franz, Bryan A.

    2011-01-01

    The Moderate-Resolution Imaging Spectroradiometer (MODIS) on NASA's Earth Observing System (EOS) satellite Terra provides global coverage of top-of-atmosphere (TOA) radiances that have been successfully used for terrestrial and atmospheric research. The MODIS Terra ocean color products, however, have been compromised by an inadequate radiometric calibration at the short wavelengths. The Ocean Biology Processing Group (OBPG) at NASA has derived radiometric corrections using ocean color products from the SeaWiFS sensor as truth fields. In the R2010.0 reprocessing, these corrections have been applied to the whole mission life span of 10 years. This paper presents the corrections to the radiometric gains and to the instrument polarization sensitivity, demonstrates the improvement to the Terra ocean color products, and discusses issues that need further investigation. Although the global averages of MODIS Terra ocean color products are now in excellent agreement with those of SeaWiFS and MODIS Aqua, and image quality has been significantly improved, the large corrections applied to the radiometric calibration and polarization sensitivity require additional caution when using the data.

  19. Solar flares observed simultaneously with SphinX, GOES and RHESSI

    NASA Astrophysics Data System (ADS)

    Mrozek, Tomasz; Gburek, Szymon; Siarkowski, Marek; Sylwester, Barbara; Sylwester, Janusz; Kępa, Anna; Gryciuk, Magdalena

    2013-07-01

    In February 2009, during the recent deep solar minimum, the Polish Solar Photometer in X-rays (SphinX) began observations of the Sun in the energy range of 1.2-15 keV. SphinX was almost 100 times more sensitive than the GOES X-ray Sensors. The silicon PIN diode detectors used in the experiment were carefully calibrated on the ground using the synchrotron radiation source BESSY II. The SphinX energy range overlaps with the Ramaty High Energy Solar Spectroscopic Imager (RHESSI) energy range. The instrument provided observations of hundreds of very small flares and X-ray brightenings. We chose a group of solar flares observed simultaneously with GOES, SphinX, and RHESSI and performed spectroscopic analysis of the observations wherever possible. The analysis of the thermal part of the spectra showed that SphinX is a very sensitive complementary observatory for RHESSI and GOES.

  20. Quantitative analysis of [Dmt(1)]DALDA in ovine plasma by capillary liquid chromatography-nanospray ion-trap mass spectrometry.

    PubMed

    Wan, Haibao; Umstot, Edward S; Szeto, Hazel H; Schiller, Peter W; Desiderio, Dominic M

    2004-04-15

    The synthetic opioid peptide analog Dmt-D-Arg-Phe-Lys-NH(2) ([Dmt(1)]DALDA; Dmt = 2',6'-dimethyltyrosine) is a highly potent and selective mu opioid-receptor agonist. A very sensitive and robust capillary liquid chromatography/nanospray ion-trap (IT) mass spectrometry method has been developed to quantify [Dmt(1)]DALDA in ovine plasma, using deuterated [Dmt(1)]DALDA as the internal standard. The standard MS/MS spectra of d(0)- and d(5)-[Dmt(1)]DALDA were obtained, and the collision energy was experimentally optimized to 25%. The product ion [M + 2H - NH(3)](2+) (m/z 312.2) was used to identify and quantify the synthetic opioid peptide analog in ovine plasma samples. The MS/MS detection sensitivity for [Dmt(1)]DALDA was 625 amol. A calibration curve was constructed, and quantitative analysis was performed on a series of ovine plasma samples.

  1. Development of calibration techniques for ultrasonic hydrophone probes in the frequency range from 1 to 100 MHz

    PubMed Central

    Umchid, S.; Gopinath, R.; Srinivasan, K.; Lewin, P. A.; Daryoush, A. S.; Bansal, L.; El-Sherif, M.

    2009-01-01

    The primary objective of this work was to develop and optimize the calibration techniques for ultrasonic hydrophone probes used in acoustic field measurements up to 100 MHz. A dependable, 100 MHz calibration method was necessary to examine the behavior of a sub-millimeter spatial resolution fiber optic (FO) sensor and assess the need for such a sensor as an alternative tool for high frequency characterization of ultrasound fields. Also, it was of interest to investigate the feasibility of using FO probes in high intensity fields such as those employed in HIFU (High Intensity Focused Ultrasound) applications. In addition to the development and validation of a novel, 100 MHz calibration technique the innovative elements of this research include implementation and testing of a prototype FO sensor with an active diameter of about 10 μm that exhibits uniform sensitivity over the considered frequency range and does not require any spatial averaging corrections up to about 75 MHz. The results of the calibration measurements are presented and it is shown that the optimized calibration technique allows the sensitivity of the hydrophone probes to be determined as a virtually continuous function of frequency and is also well suited to verify the uniformity of the FO sensor frequency response. As anticipated, the overall uncertainty of the calibration was dependent on frequency and determined to be about ±12% (±1 dB) up to 40 MHz, ±20% (±1.5 dB) from 40 to 60 MHz and ±25% (±2 dB) from 60 to 100 MHz. The outcome of this research indicates that once fully developed and calibrated, the combined acousto-optic system will constitute a universal reference tool in the wide, 100 MHz bandwidth. PMID:19110289

  2. Two statistics for evaluating parameter identifiability and error reduction

    USGS Publications Warehouse

    Doherty, John; Hunt, Randall J.

    2009-01-01

    Two statistics are presented that can be used to rank input parameters utilized by a model in terms of their relative identifiability based on a given or possible future calibration dataset. Identifiability is defined here as the capability of model calibration to constrain parameters used by a model. Both statistics require that the sensitivity of each model parameter be calculated for each model output for which there are actual or presumed field measurements. Singular value decomposition (SVD) of the weighted sensitivity matrix is then undertaken to quantify the relation between the parameters and observations that, in turn, allows selection of calibration solution and null spaces spanned by unit orthogonal vectors. The first statistic presented, "parameter identifiability", is quantitatively defined as the direction cosine between a parameter and its projection onto the calibration solution space. This varies between zero and one, with zero indicating complete non-identifiability and one indicating complete identifiability. The second statistic, "relative error reduction", indicates the extent to which the calibration process reduces error in estimation of a parameter from its pre-calibration level where its value must be assigned purely on the basis of prior expert knowledge. This is more sophisticated than identifiability, in that it takes greater account of the noise associated with the calibration dataset. Like identifiability, it has a maximum value of one (which can only be achieved if there is no measurement noise). Conceptually it can fall to zero; and even below zero if a calibration problem is poorly posed. An example, based on a coupled groundwater/surface-water model, is included that demonstrates the utility of the statistics. © 2009 Elsevier B.V.
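
    The "parameter identifiability" statistic translates almost directly into code; the sketch below (toy sensitivity matrix, not the groundwater/surface-water example) computes it as the norm of each parameter axis projected onto the solution space from the SVD:

        import numpy as np

        def identifiability(J_weighted, n_solution):
            # Direction cosine of each parameter onto the calibration solution
            # space spanned by the first n_solution right singular vectors.
            _, _, Vt = np.linalg.svd(J_weighted)
            V_sol = Vt[:n_solution].T  # columns span the solution space
            return np.sqrt(np.sum(V_sol**2, axis=1))

        # Toy weighted sensitivity matrix: 6 observations x 3 parameters,
        # where the third parameter barely influences any observation.
        rng = np.random.default_rng(3)
        J = rng.standard_normal((6, 3))
        J[:, 2] *= 1e-3
        print(identifiability(J, n_solution=2))  # ~1, ~1, ~0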

  3. TIMED solar EUV experiment: preflight calibration results for the XUV photometer system

    NASA Astrophysics Data System (ADS)

    Woods, Thomas N.; Rodgers, Erica M.; Bailey, Scott M.; Eparvier, Francis G.; Ucker, Gregory J.

    1999-10-01

    The Solar EUV Experiment (SEE) on the NASA Thermosphere, Ionosphere, and Mesosphere Energetics and Dynamics (TIMED) mission will measure the solar vacuum ultraviolet (VUV) spectral irradiance from 0.1 to 200 nm. To cover this wide spectral range two different types of instruments are used: a grating spectrograph for spectra between 25 and 200 nm with a spectral resolution of 0.4 nm and a set of silicon soft x-ray (XUV) photodiodes with thin film filters as broadband photometers between 0.1 and 35 nm with individual bandpasses of about 5 nm. The grating spectrograph is called the EUV Grating Spectrograph (EGS), and it consists of a normal-incidence, concave diffraction grating used in a Rowland spectrograph configuration with a 64 × 1024 array CODACON detector. The primary calibrations for the EGS are done using the National Institute of Standards and Technology (NIST) Synchrotron Ultraviolet Radiation Facility (SURF-III) in Gaithersburg, Maryland. In addition, detector sensitivity and image quality, the grating scattered light, the grating higher order contributions, and the sun sensor field of view are characterized in the LASP calibration laboratory. The XUV photodiodes are called the XUV Photometer System (XPS), and the XPS includes 12 photodiodes with thin film filters deposited directly on the silicon photodiodes' top surface. The sensitivities of the XUV photodiodes are calibrated at both the NIST SURF-III and the Physikalisch-Technische Bundesanstalt (PTB) electron storage ring called BESSY. The other XPS calibrations, namely the electronics linearity and field of view maps, are performed in the LASP calibration laboratory. The XPS and solar sensor pre-flight calibration results are primarily discussed, as the EGS calibrations at SURF-III have not yet been performed.

  4. Semi-micro high-performance liquid chromatographic analysis of tiropramide in human plasma using column-switching.

    PubMed

    Baek, Soo Kyoung; Lee, Seung Seok; Park, Eun Jeon; Sohn, Dong Hwan; Lee, Hye Suk

    2003-02-05

    A rapid and sensitive column-switching semi-micro high-performance liquid chromatography method was developed for the direct analysis of tiropramide in human plasma. The plasma sample (100 microl) was directly injected onto Capcell Pak MF Ph-1 precolumn where deproteinization and analyte fractionation occurred. Tiropramide was then eluted into an enrichment column (Capcell Pak UG C(18)) using acetonitrile-potassium phosphate (pH 7.0, 50 mM) (12:88, v/v) and was analyzed on a semi-micro C(18) analytical column using acetonitrile-potassium phosphate (pH 7.0, 10 mM) (50:50, v/v). The method showed excellent sensitivity (limit of quantification 5 ng/ml), and good precision (C.V.

  5. The creation and evaluation of a model to simulate the probability of conception in seasonal-calving pasture-based dairy heifers.

    PubMed

    Fenlon, Caroline; O'Grady, Luke; Butler, Stephen; Doherty, Michael L; Dunnion, John

    2017-01-01

    Herd fertility in pasture-based dairy farms is a key driver of farm economics. Models for predicting nulliparous reproductive outcomes are rare, but age, genetics, weight, and BCS have been identified as factors influencing heifer conception. The aim of this study was to create a simulation model of heifer conception to service with thorough evaluation. Artificial Insemination service records from two research herds and ten commercial herds were provided to build and evaluate the models. All were managed as spring-calving pasture-based systems. The factors studied were related to age, genetics, and time of service. The data were split into training and testing sets and bootstrapping was used to train the models. Logistic regression (with and without random effects) and generalised additive modelling were selected as the model-building techniques. Two types of evaluation were used to test the predictive ability of the models: discrimination and calibration. Discrimination, which includes sensitivity, specificity, accuracy and ROC analysis, measures a model's ability to distinguish between positive and negative outcomes. Calibration measures the accuracy of the predicted probabilities with the Hosmer-Lemeshow goodness-of-fit, calibration plot and calibration error. After data cleaning and the removal of services with missing values, 1396 services remained to train the models and 597 were left for testing. Age, breed, genetic predicted transmitting ability for calving interval, month and year were significant in the multivariate models. The regression models also included an interaction between age and month. Year within herd was a random effect in the mixed regression model. Overall prediction accuracy was between 77.1% and 78.9%. All three models had very high sensitivity, but low specificity. The two regression models were very well-calibrated. The mean absolute calibration errors were all below 4%. Because the models were not adept at identifying unsuccessful services, they are not suggested for use in predicting the outcome of individual heifer services. Instead, they are useful for the comparison of services with different covariate values or as sub-models in whole-farm simulations. The mixed regression model was identified as the best model for prediction, as the random effects can be ignored and the other variables can be easily obtained or simulated.
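
    The two evaluation strands described here, discrimination and calibration, can be sketched generically (synthetic predictions, not the heifer data; assumes scikit-learn):

        import numpy as np
        from sklearn.metrics import roc_auc_score
        from sklearn.calibration import calibration_curve

        rng = np.random.default_rng(4)
        p_pred = rng.uniform(0.3, 0.9, 597)               # predicted probabilities
        y = (rng.uniform(size=597) < p_pred).astype(int)  # outcomes drawn from them

        print(f"ROC area = {roc_auc_score(y, p_pred):.2f}")  # discrimination
        frac_pos, mean_pred = calibration_curve(y, p_pred, n_bins=10)
        print(f"mean |calibration error| = "
              f"{np.abs(frac_pos - mean_pred).mean():.3f}")  # calibration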

  6. A diagnostic model for the detection of sensitization to wheat allergens was developed and validated in bakery workers.

    PubMed

    Suarthana, Eva; Vergouwe, Yvonne; Moons, Karel G; de Monchy, Jan; Grobbee, Diederick; Heederik, Dick; Meijer, Evert

    2010-09-01

    To develop and validate a prediction model to detect sensitization to wheat allergens in bakery workers. The prediction model was developed in 867 Dutch bakery workers (development set, prevalence of sensitization 13%) and included questionnaire items (candidate predictors). First, principal component analysis was used to reduce the number of candidate predictors. Then, multivariable logistic regression analysis was used to develop the model. Internal validation and the extent of optimism were assessed with bootstrapping. External validation was studied in 390 independent Dutch bakery workers (validation set, prevalence of sensitization 20%). The prediction model contained the predictors nasoconjunctival symptoms, asthma symptoms, shortness of breath and wheeze, work-related upper and lower respiratory symptoms, and traditional bakery. The model showed good discrimination, with an area under the receiver operating characteristic (ROC) curve of 0.76 (0.75 after internal validation). Application of the model in the validation set gave reasonable discrimination (ROC area = 0.69) and good calibration after a small adjustment of the model intercept. A simple model with questionnaire items only can be used to stratify bakers according to their risk of sensitization to wheat allergens. Its use may increase the cost-effectiveness of (subsequent) medical surveillance.

  7. Uncertainty Analysis of Inertial Model Attitude Sensor Calibration and Application with a Recommended New Calibration Method

    NASA Technical Reports Server (NTRS)

    Tripp, John S.; Tcheng, Ping

    1999-01-01

    Statistical tools, previously developed for nonlinear least-squares estimation of multivariate sensor calibration parameters and the associated calibration uncertainty analysis, have been applied to single- and multiple-axis inertial model attitude sensors used in wind tunnel testing to measure angle of attack and roll angle. The analysis provides confidence and prediction intervals of calibrated sensor measurement uncertainty as functions of applied input pitch and roll angles. A comparative performance study of various experimental designs for inertial sensor calibration is presented along with corroborating experimental data. The importance of replicated calibrations over extended time periods has been emphasized; replication provides independent estimates of calibration precision and bias uncertainties, statistical tests for calibration or modeling bias uncertainty, and statistical tests for sensor parameter drift over time. A set of recommendations for a new standardized model attitude sensor calibration method and usage procedures is included. The statistical information provided by these procedures is necessary for the uncertainty analysis of aerospace test results now required by users of industrial wind tunnel test facilities.

  8. Near surface illumination method to detect particle size information by optical calibration free remission measurements

    NASA Astrophysics Data System (ADS)

    Stocker, Sabrina; Foschum, Florian; Kienle, Alwin

    2017-07-01

    A calibration-free method to detect particle size information is presented. A possible application for such measurements is the investigation of raw milk, since not only the fat and protein content but also the fat droplet size varies. The newly developed method is sensitive to the scattering phase function, which makes it applicable to many other applications as well. By simulating light propagation with Monte Carlo simulations, a calibration-free device can be developed from this principle.

  9. The Seventh SeaWiFS Intercalibration Round-Robin Experiment (SIRREX-7), March 1999

    NASA Technical Reports Server (NTRS)

    Hooker, Stanford B. (Editor); Firestone, Elaine R. (Editor); McLean, Scott; Sherman, Jennifer; Small, Mark; Lazin, Gordana; Zibordi, Giuseppe; Brown, James W.; McClain, Charles R. (Technical Monitor)

    2002-01-01

    This report documents the scientific activities during the seventh SeaWiFS Intercalibration Round-Robin Experiment (SIRREX-7) held at Satlantic, Inc. (Halifax, Canada). The overall objective of SIRREX-7 was to determine the uncertainties of radiometric calibrations and measurements at a single calibration facility. Specifically, this involved the estimation of the uncertainties in a) lamp standards, b) plaque standards (including the uncertainties associated with plaque illumination non-uniformity), c) radiance calibrations, and d) irradiance calibrations. The investigation of the uncertainties in lamp standards included a comparison between a calibration of a new FEL by the National Institute of Standards and Technology (NIST) and Optronic Laboratories, Inc. In addition, the rotation and polarization sensitivity of radiometers were determined, and a procedure for transferring an absolute calibration to portable light sources was defined and executed.

  10. Sensitivity of hot-cathode ionization vacuum gages in several gases

    NASA Technical Reports Server (NTRS)

    Holanda, R.

    1972-01-01

    Four hot-cathode ionization vacuum gages were calibrated in 12 gases. The relative sensitivities of these gages were compared to several gas properties. Ionization cross section was the physical property which correlated best with gage sensitivity. The effects of gage accelerating voltage and ionization-cross-section energy level were analyzed. Recommendations for predicting gage sensitivity according to gage type were made.

  11. Shock Initiation Characteristics of an Aluminized DNAN/RDX Melt-Cast Explosive

    NASA Astrophysics Data System (ADS)

    Cao, Tong-Tang; Zhou, Lin; Zhang, Xiang-Rong; Zhang, Wei; Miao, Fei-Chao

    2017-10-01

    Shock sensitivity is one of the key parameters for newly developed, 2,4-dinitroanisole (DNAN)-based, melt-cast explosives. For this paper, a series of shock initiation experiments were conducted using a one-dimensional Lagrangian system with a manganin piezoresistive pressure gauge technique to evaluate the shock sensitivity of an aluminized DNAN/cyclotrimethylenetrinitramine (RDX) melt-cast explosive. This study fully investigated the effects of particle size distributions in both RDX and aluminum, as well as the RDX's crystal quality on the shock sensitivity of the aluminized DNAN/RDX melt-cast explosive. Ultimately, the shock sensitivity of the aluminized DNAN/RDX melt-cast explosives increases when the particle size decreases in both RDX and aluminum. Additionally, shock sensitivity increases when the RDX's crystal quality decreases. In order to simulate these effects, an Ignition and Growth (I&G) reactive flow model was calibrated. This calibrated I&G model was able to predict the shock initiation characteristics of the aluminized DNAN/RDX melt-cast explosive.

  12. Dew Point Calibration System Using a Quartz Crystal Sensor with a Differential Frequency Method

    PubMed Central

    Lin, Ningning; Meng, Xiaofeng; Nie, Jing

    2016-01-01

    In this paper, the influence of temperature on quartz crystal microbalance (QCM) sensor response during dew point calibration is investigated. The aim is to present a compensation method to eliminate the temperature impact on frequency acquisition. A new sensitive structure is proposed with double QCMs: one is kept in contact with the environment, whereas the other is not exposed to the atmosphere. A thermally conductive silicone pad between each crystal and a refrigeration device keeps a uniform temperature condition. A differential frequency method is described in detail and is applied to calibrate the frequency characteristics of the QCM at a dew point of −3.75 °C. It is worth noting that the frequency changes of the two QCMs were approximately opposite when the temperature conditions were changed simultaneously. The results from continuous experiments show that the frequencies of the two QCMs at the moment the dew point was reached have strong consistency and high repeatability, leading to the conclusion that the sensitive structure can calibrate dew points with high reliability. PMID:27869746
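
    The compensation idea can be sketched numerically (hypothetical frequency shifts; the sign convention simply follows the paper's observation that the two thermal responses are approximately opposite):

        import numpy as np

        # Hypothetical shifts (Hz) of each crystal relative to its baseline.
        # The sealed crystal sees only the thermal drift, assumed approximately
        # opposite in sign to the exposed crystal's drift, so summing cancels
        # it and leaves the dew-induced mass loading on the exposed crystal.
        df_exposed = np.array([-5.0, -12.0, -20.0, -260.0])  # thermal + dew
        df_sealed = np.array([5.1, 11.8, 19.5, 21.0])        # thermal only

        df_compensated = df_exposed + df_sealed
        print("temperature-compensated shift (Hz):", df_compensated)
        # The abrupt extra drop in the last sample flags dew formation.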

  13. Calibration of the QCM/SAW Cascade Impactor for Measurement of Ozone

    NASA Technical Reports Server (NTRS)

    Williams, Cassandra K.; Peterson, C. B.; Morris, V. R.

    1997-01-01

    The Quartz Crystal Microbalance Surface Acoustic Wave (QCM/SAW) cascade impactor is an instrument designed to collect size-fractionated distributions of aerosols on a series of quartz crystals and employ SAW devices coated with chemical sensors for gas detection. We are calibrating the cascade impactor in our laboratory for future deployment in in-situ experiments to measure ozone. Experiments have been performed to characterize and quantify the QCM and SAW mass loading, saturation limits, mass-frequency relationships, and sensitivity, as well as the loss of ozone on different materials.

  14. Energy dependent calibration of XR-QA2 radiochromic film with monochromatic and polychromatic x-ray beams

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Di Lillo, F.; Mettivier, G., E-mail: mettivier@na.infn.it; Sarno, A.

    2016-01-15

    Purpose: This work investigates the energy response and dose-response curve determinations for the XR-QA2 radiochromic film dosimetry system used for synchrotron radiation work and for quality assurance in diagnostic radiology, in the range of effective energies 18–46.5 keV. Methods: Pieces of XR-QA2 films were irradiated, in a plane transverse to the beam axis, with a monochromatic beam of energy in the range 18–40 keV at the ELETTRA synchrotron radiation facility (Trieste, Italy) and with a polychromatic beam from a laboratory x-ray tube operated at 80, 100, and 120 kV. The film calibration curve was expressed as air kerma (measured free-in-air with an ionization chamber) versus the net optical reflectance change (netΔR) derived from the red channel of the RGB scanned film image. Four functional relationships (rational, linear exponential, power, and logarithm) were tested to evaluate the best curve for fitting the calibration data. The adequacy of the various fitting functions was tested by using uncertainty analysis and by assessing the average of the absolute air kerma error, calculated as the difference between calculated and delivered air kerma. The sensitivity of the film was evaluated as the ratio of the change in net reflectance to the corresponding air kerma. Results: The sensitivity of XR-QA2 films increased in the energy range 18–39 keV, with a maximum variation of about 170%, and decreased in the energy range 38–46.5 keV. The present results confirmed and extended previous findings by this and other groups regarding the dose response of the XR-QA2 radiochromic film to monochromatic and polychromatic x-ray beams. Conclusions: The XR-QA2 radiochromic film response showed a strong dependence on beam energy for both monochromatic and polychromatic beams in the range of half value layer values from 0.55 to 6.1 mm Al and corresponding effective energies from 18 to 46.5 keV. In this range, the film response varied by 170%, from a minimum sensitivity of 0.0127 to a maximum sensitivity of 0.0219 at 10 mGy air kerma in air. The most suitable function for air kerma calibration of the XR-QA2 radiochromic film was the power function. A significant batch-to-batch variation, up to 55%, in film response at 120 kV (46.5 keV effective energy) was observed in comparison with published data.
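
    The power-function calibration fit singled out in the conclusions can be sketched with scipy (hypothetical calibration points, not the measured data):

        import numpy as np
        from scipy.optimize import curve_fit

        # Hypothetical points: net reflectance change vs. air kerma (mGy).
        net_dR = np.array([0.02, 0.05, 0.10, 0.15, 0.20])
        kerma = np.array([0.9, 2.6, 6.1, 10.2, 14.8])

        power = lambda x, a, b: a * np.power(x, b)
        (a, b), _ = curve_fit(power, net_dR, kerma, p0=(50.0, 1.0))
        resid = kerma - power(net_dR, a, b)
        print(f"K_air = {a:.1f} * netDR^{b:.2f}, "
              f"mean |error| = {np.abs(resid).mean():.2f} mGy")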

  15. Assessing the sensitivity of benzene cluster cation chemical ionization mass spectrometry toward a wide array of biogenic volatile organic compounds

    NASA Astrophysics Data System (ADS)

    Lavi, Avi; Vermeuel, Michael; Novak, Gordon; Bertram, Timothy

    2017-04-01

    Chemical ionization mass spectrometry (CIMS) is a real-time, sensitive, and selective measurement technique for the detection of volatile organic compounds (VOCs). The benefits of CIMS technology make it highly suitable for field measurements that require fast (10 Hz and higher) response rates, such as the study of surface-atmosphere exchange processes by the eddy covariance method. The use of benzene cluster cations as a reagent ion was previously demonstrated as a sensitive and selective method for the detection of select biogenic VOCs (e.g., isoprene, monoterpenes, and sesquiterpenes) [Kim et al., 2016; Leibrock and Huey, 2000]. Quantitative analysis of atmospheric trace gases necessitates calibration for each analyte as a function of atmospheric conditions. We describe a custom-designed calibration system, based on liquid evaporation, for determination of the sensitivity of the benzene-CIMS to a wide range of organic compounds at atmospherically relevant mixing ratios (<200 ppt). We report on the effect of atmospheric water vapor and oxygen concentrations on instrument response for isoprene and a wide range of monoterpenes and sesquiterpenes. To gain mechanistic insight into the ion-molecule reactions and the role of water vapor and oxygen, we compare our measured sensitivities with a computational analysis of the charge distribution between the analyte, reagent ion, and water molecules in the gas phase. These parameters provide insight into the ionization mechanism and provide parameters for quantification of organic molecules measured during field campaigns. References: Kim, M. J., M. C. Zoerb, N. R. Campbell, K. J. Zimmermann, B. W. Blomquist, B. J. Huebert, and T. H. Bertram (2016), Revisiting benzene cluster cations for the chemical ionization of dimethyl sulfide and select volatile organic compounds, Atmos Meas Tech, 9(4), 1473-1484, doi:10.5194/amt-9-1473-2016. Leibrock, E., and L. G. Huey (2000), Ion chemistry for the detection of isoprene and other volatile organic compounds in ambient air, Geophys Res Lett, 27(12), 1719-1722, doi:10.1029/1999GL010804.

  16. Setting up an atmospheric-hydrologic model for seasonal forecasts of water flow into dams in a mountainous semi-arid environment (Cyprus)

    NASA Astrophysics Data System (ADS)

    Camera, Corrado; Bruggeman, Adriana; Zittis, Georgios; Hadjinicolaou, Panos

    2017-04-01

    Due to limited rainfall concentrated in the winter months and long dry summers, storage and management of water resources is of paramount importance in Cyprus. For water storage purposes, the Cyprus Water Development Department is responsible for the operation of 56 large dams (total volume of 310 Mm3) and 51 smaller reservoirs (total volume of 17 Mm3) over the island. Climate change is also expected to heavily affect Cyprus water resources, with a 1.5%-12% decrease in mean annual rainfall (Camera et al., 2016) projected for the period 2020-2050, relative to 1980-2010. This will make reliable seasonal water inflow forecasts even more important for water managers. The overall aim of this study is to set up the widely used Weather Research and Forecasting (WRF) model with its hydrologic extension (WRF-Hydro) for seasonal forecasts of water inflow into dams located in the Troodos Mountains of Cyprus. The specific objectives of this study are: i) the calibration and evaluation of WRF-Hydro for the simulation of stream flows in the Troodos Mountains for past rainfall seasons; ii) a sensitivity analysis of the model parameters; iii) a comparison of the application of the atmospheric-hydrologic modelling chain versus the use of climate observations as forcing. The hydrologic model is run in its off-line version with daily forcing over a 1-km grid, while the overland and channel routing is performed on a 100-m grid with a time step of 6 seconds. Model outputs are exported on a daily basis. First, WRF-Hydro is calibrated and validated over two 1-year periods (October-September), using a 1-km gridded observational precipitation dataset (Camera et al., 2014) as input. For the calibration and validation periods, years with annual rainfall close to the long-term average and with the presence of extreme rainfall and flow events were selected. A sensitivity analysis is performed for the following parameters: the partitioning of rainfall into runoff and infiltration (REFKDT), the partitioning of deep percolation between losses and baseflow contribution (LOSS_BASE), water retention depth (RETDEPRTFAC), overland roughness (OVROUGHRTFAC), and channel Manning coefficients (MANN). The calibrated WRF-Hydro shows a good ability to reproduce annual total streamflow (-19% error) and total peak discharge volumes (+3% error), although very high values of MANN were used to match the timing of the peaks and obtain positive values of the Nash-Sutcliffe efficiency coefficient (0.13). The two most sensitive parameters for the modeled seasonal flow were REFKDT and LOSS_BASE. Simulations of the calibrated WRF-Hydro with WRF modelled atmospheric forcing showed large errors in comparison with those forced with observations, which could be corrected only by modifying the most sensitive parameters by at least one order of magnitude. This study has received funding from the EU H2020 BINGO Project (GA 641739). References: Camera C., Bruggeman A., Hadjinicolaou P., Pashiardis S., Lange M.A., 2014. Evaluation of interpolation techniques for the creation of gridded daily precipitation (1 × 1 km2); Cyprus, 1980-2010. J Geophys Res Atmos 119, 693-712, DOI:10.1002/2013JD020611. Camera C., Bruggeman A., Hadjinicolaou P., Michaelides S., Lange M.A., 2016. Evaluation of a spatial rainfall generator for generating high resolution precipitation projections over orographically complex terrain. Stoch Environ Res Risk Assess, DOI:10.1007/s00477-016-1239-1.

  17. Considerations for test design to accommodate energy-budget models in ecotoxicology: a case study for acetone in the pond snail Lymnaea stagnalis.

    PubMed

    Barsi, Alpar; Jager, Tjalling; Collinet, Marc; Lagadic, Laurent; Ducrot, Virginie

    2014-07-01

    Toxicokinetic-toxicodynamic (TKTD) modeling offers many advantages in the analysis of ecotoxicity test data. Calibration of TKTD models, however, places different demands on test design compared with classical concentration-response approaches. In the present study, useful complementary information is provided regarding test design for TKTD modeling. A case study is presented for the pond snail Lymnaea stagnalis exposed to the narcotic compound acetone, in which the data on all endpoints were analyzed together using a relatively simple TKTD model called DEBkiss. Furthermore, the influence of the data used for calibration on accuracy and precision of model parameters is discussed. The DEBkiss model described toxic effects on survival, growth, and reproduction over time well, within a single integrated analysis. Regarding the parameter estimates (e.g., no-effect concentration), precision rather than accuracy was affected depending on which data set was used for model calibration. In addition, the present study shows that the intrinsic sensitivity of snails to acetone stays the same across different life stages, including the embryonic stage. In fact, the data on egg development allowed for selection of a unique metabolic mode of action for the toxicant. Practical and theoretical considerations for test design to accommodate TKTD modeling are discussed in the hope that this information will aid other researchers to make the best possible use of their test animals. © 2014 SETAC.

  18. Sediment transport in forested head water catchments - Calibration and validation of a soil erosion and landscape evolution model

    NASA Astrophysics Data System (ADS)

    Hancock, G. R.; Webb, A. A.; Turner, L.

    2017-11-01

    Sediment transport and soil erosion can be determined by a variety of field and modelling approaches. Computer based soil erosion and landscape evolution models (LEMs) offer the potential to be reliable assessment and prediction tools. An advantage of such models is that they provide both erosion and deposition patterns as well as total catchment sediment output. However, before use, like all models they require calibration and validation. In recent years LEMs have been used for a variety of both natural and disturbed landscape assessment. However, these models have not been evaluated for their reliability in steep forested catchments. Here, the SIBERIA LEM is calibrated and evaluated for its reliability for two steep forested catchments in south-eastern Australia. The model is independently calibrated using two methods. Firstly, hydrology and sediment transport parameters are inferred from catchment geomorphology and soil properties and secondly from catchment sediment transport and discharge data. The results demonstrate that both calibration methods provide similar parameters and reliable modelled sediment transport output. A sensitivity study of the input parameters demonstrates the model's sensitivity to correct parameterisation and also how the model could be used to assess potential timber harvesting as well as the removal of vegetation by fire.

  19. Enhanced chemiluminescence for trazodone trace analysis based on acidic permanganate oxidation in concurrent presence of rhodamine 6G.

    PubMed

    Fujimori, Keiichi; Sakata, Yuta; Moriuchi-Kawakami, Takayo; Shibutani, Yasuhiko

    2017-11-01

    A new sensitized chemiluminescence method based on acidic permanganate oxidation was developed for the sensitive determination of trazodone. Rhodamine 6G was used as a fluorescent dye to increase the chemiluminescence intensity. Under optimum conditions, the linear range of the calibration curve was 1-5000 nmol/L. The limit of detection, calculated from 3σ of a blank, was 0.23 nmol/L. Coexistent ions and substances caused no interference with the chemiluminescence measurement. The chemiluminescence spectra were measured to elucidate a possible mechanism for the system. The present method was satisfactorily used in the determination of the drug in pharmaceutical samples and animal serums. Copyright © 2017 John Wiley & Sons, Ltd.

  20. Test of prototype ITER vacuum ultraviolet spectrometer and its application to impurity study in KSTAR plasmas.

    PubMed

    Seon, C R; Hong, J H; Jang, J; Lee, S H; Choe, W; Lee, H H; Cheon, M S; Pak, S; Lee, H G; Biel, W; Barnsley, R

    2014-11-01

    To optimize the design of the ITER vacuum ultraviolet (VUV) spectrometer, a prototype VUV spectrometer was developed. The sensitivity calibration curve of the spectrometer was calculated from the mirror reflectivity, the grating efficiency, and the detector efficiency. The calibration curve was consistent with the calibration points derived experimentally using a calibrated hollow cathode lamp. For application of the prototype ITER VUV spectrometer, it was installed at KSTAR, and various impurity emission lines could be measured. By analyzing about 100 shots, a strong positive correlation between the O VI and C IV emission intensities was found.

  1. Unraveling fabrication and calibration of wearable gas monitor for use under free-living conditions.

    PubMed

    Yue Deng; Cheng Chen; Tsow, Francis; Xiaojun Xian; Forzani, Erica

    2016-08-01

    Volatile organic compounds (VOC) are organic chemicals that have a high vapor pressure at ordinary conditions. Some VOC can be dangerous to human health; it is therefore important to determine real-time indoor and outdoor personal exposures to VOC. To achieve this goal, our group has developed a wearable gas monitor with a complete sensor fabrication and calibration protocol for free-living conditions. Correction factors for calibrating the sensors, accounting for sensitivity, aging, and temperature effects, are encoded in a Quick Response (QR) code, so that the pre-calibrated quartz tuning fork (QTF) sensor can be used with the wearable monitor under free-living conditions.
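
    The abstract does not specify the correction model; a plausible minimal sketch, assuming multiplicative sensitivity and aging factors plus a linear temperature coefficient in the QR payload (all field names and values below are hypothetical):

```python
import json

# Hypothetical QR payload for a pre-calibrated QTF sensor (field names assumed)
qr_payload = json.loads(
    '{"sensitivity": 0.82, "aging_per_day": -0.0004, "temp_coeff": 0.011, "t_ref": 25.0}'
)

def corrected_concentration(raw_signal, days_since_cal, temp_c, cal):
    """Apply sensitivity, aging, and temperature corrections to a raw sensor reading."""
    sens = cal["sensitivity"] * (1 + cal["aging_per_day"] * days_since_cal)
    sens *= 1 + cal["temp_coeff"] * (temp_c - cal["t_ref"])
    return raw_signal / sens

print(corrected_concentration(raw_signal=1.7, days_since_cal=30, temp_c=31.0, cal=qr_payload))
```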

  2. ASME V\\&V challenge problem: Surrogate-based V&V

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Beghini, Lauren L.; Hough, Patricia D.

    2015-12-18

    The process of verification and validation can be resource intensive. From the computational model perspective, the resource demand typically arises from long simulation run times on multiple cores coupled with the need to characterize and propagate uncertainties. In addition, predictive computations performed for safety and reliability analyses have similar resource requirements. For this reason, there is a tradeoff between the time required to complete the requisite studies and the fidelity or accuracy of the results that can be obtained. At a high level, our approach is cast within a validation hierarchy that provides a framework in which we perform sensitivity analysis, model calibration, model validation, and prediction. The evidence gathered as part of these activities is mapped into the Predictive Capability Maturity Model to assess credibility of the model used for the reliability predictions. With regard to specific technical aspects of our analysis, we employ surrogate-based methods, primarily based on polynomial chaos expansions and Gaussian processes, for model calibration, sensitivity analysis, and uncertainty quantification in order to reduce the number of simulations that must be done. The goal is to tip the tradeoff balance to improving accuracy without increasing the computational demands.
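
    As a sketch of the surrogate idea (not the authors' implementation), a Gaussian-process emulator can be trained on a handful of expensive runs and then queried cheaply many thousands of times for calibration or uncertainty propagation; the `expensive_simulation` function below is a stand-in for a long-running physics code:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def expensive_simulation(x):        # stand-in for a long-running physics code
    return np.sin(3 * x[:, 0]) + 0.5 * x[:, 1] ** 2

rng = np.random.default_rng(0)
X_train = rng.uniform(0, 1, size=(30, 2))       # a few expensive runs
y_train = expensive_simulation(X_train)

gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.3), normalize_y=True)
gp.fit(X_train, y_train)

# Propagate 100k input samples through the cheap surrogate instead of the simulator
X_mc = rng.uniform(0, 1, size=(100_000, 2))
y_mc, y_std = gp.predict(X_mc, return_std=True)
print(f"surrogate output mean = {y_mc.mean():.3f}, std = {y_mc.std():.3f}")
```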

  3. GOSAT TIR radiometric validation toward simultaneous GHG column and profile observation

    NASA Astrophysics Data System (ADS)

    Kataoka, F.; Knuteson, R. O.; Kuze, A.; Shiomi, K.; Suto, H.; Saitoh, N.

    2015-12-01

    The Greenhouse gases Observing SATellite (GOSAT) was launched in January 2009 and has continued operating for more than six years. The Thermal And Near infrared Sensor for carbon Observation Fourier-Transform Spectrometer (TANSO-FTS) onboard GOSAT measures greenhouse gases (GHG), such as CO2 and CH4, with wide, high-resolution spectral coverage from the shortwave infrared (SWIR) to the thermal infrared (TIR). This instrument has the advantage of being able to measure the same field of view simultaneously in different spectral ranges. The combination of column GHG from the SWIR bands and vertical GHG profiles from the TIR band provides a better understanding of the distribution of GHG, especially in the troposphere. This work describes the radiometric validation and sensitivity analysis of TANSO-FTS TIR spectra, especially the CO2, atmospheric window and CH4 channels, with forward calculations. In this evaluation, we used accurate in-situ datasets from the HIPPO (HIAPER Pole-to-Pole Observations) airborne campaign and from the GOSAT vicarious calibration and validation campaigns in Railroad Valley, NV. The HIPPO campaign collected accurate atmospheric vertical profiles (T, RH, O3, CO2, CH4, N2O, CO) approximately pole-to-pole from the surface to the tropopause over the ocean. We used these datasets for the forward calculations and built a spectral correction model with respect to wavenumber and internal calibration blackbody temperature. The GOSAT vicarious calibration campaign has been conducted every year since 2009 near the summer solstice in Railroad Valley, a high-temperature desert site. In this campaign, we measured temperature and humidity with a radiosonde and CO2, CH4 and O3 profiles with the AJAX aircraft at the time of the GOSAT overpass. The GHG profiles over Railroad Valley sometimes show air mass advection in the mid-troposphere, depending on the upper-level wind; these advections bring different GHG concentrations to the lower and upper troposphere. Using these cases, we performed a sensitivity analysis of the TANSO-FTS TIR band in the troposphere by varying the in-situ GHG profiles.

  4. Results and lessons learned from MODIS polarization sensitivity characterization

    NASA Astrophysics Data System (ADS)

    Sun, J.; Xiong, X.; Wang, X.; Qiu, S.; Xiong, S.; Waluschka, E.

    2006-08-01

    In addition to radiometric, spatial, and spectral calibration requirements, MODIS design specifications include polarization sensitivity requirements of less than 2% for all Reflective Solar Bands (RSB) except for the band centered at 412 nm. To the best of our knowledge, MODIS was the first imaging radiometer that went through comprehensive system-level (end-to-end) polarization characterization. MODIS polarization sensitivity was measured pre-launch at a number of sensor view angles using a laboratory Polarization Source Assembly (PSA) that consists of a rotatable source, a polarizer (Ahrens prism design), and a collimator. This paper describes MODIS polarization characterization approaches used by the MODIS Characterization Support Team (MCST) at NASA/GSFC and addresses issues and concerns in the measurements. Results (polarization factor and phase angle) using different analyzing methods are discussed. Also included in this paper is a polarization characterization comparison between Terra and Aqua MODIS. Our previous and recent analysis of MODIS RSB polarization sensitivity could provide useful information for future Earth-observing sensor design, development, and characterization.

  5. Pd/Ag coated fiber Bragg grating sensor for hydrogen monitoring in power transformers.

    PubMed

    Ma, G M; Jiang, J; Li, C R; Song, H T; Luo, Y T; Wang, H B

    2015-04-01

    Compared with the conventional dissolved gas analysis (DGA) method for on-line monitoring of power transformers, a fiber Bragg grating (FBG) hydrogen sensor offers marked advantages: immunity to electromagnetic fields, time savings, and convenient defect location. Thus, a novel FBG hydrogen sensor based on a Pd/Ag (palladium/silver) and polyimide composite film for measuring dissolved hydrogen concentration in large power transformers is proposed in this article. The Pd/Ag composite coating enhances mechanical strength and sensitivity; moreover, the influence of oil temperature on response time and sensitivity is compensated by correction lines. Sensitivity measurement and temperature calibration of the hydrogen sensor were performed in the laboratory. Experimental results show a high sensitivity of 0.055 pm/(μl/l) with a response time of about 0.4 h under typical power transformer operating temperatures, which demonstrates the sensor's potential for monitoring transformer health status by detecting the dissolved hydrogen concentration.

  6. Numerical groundwater-flow modeling to evaluate potential effects of pumping and recharge: implications for sustainable groundwater management in the Mahanadi delta region, India

    NASA Astrophysics Data System (ADS)

    Sahoo, Sasmita; Jha, Madan K.

    2017-12-01

    Process-based groundwater models are useful to understand complex aquifer systems and make predictions about their response to hydrological changes. A conceptual model for evaluating responses to environmental changes is presented, considering the hydrogeologic framework, flow processes, aquifer hydraulic properties, boundary conditions, and sources and sinks of the groundwater system. Based on this conceptual model, a quasi-three-dimensional transient groundwater flow model was designed using MODFLOW to simulate the groundwater system of Mahanadi River delta, eastern India. The model was constructed in the context of an upper unconfined aquifer and lower confined aquifer, separated by an aquitard. Hydraulic heads of 13 shallow wells and 11 deep wells were used to calibrate transient groundwater conditions during 1997-2006, followed by validation (2007-2011). The aquifer and aquitard hydraulic properties were obtained by pumping tests and were calibrated along with the rainfall recharge. The statistical and graphical performance indicators suggested a reasonably good simulation of groundwater flow over the study area. Sensitivity analysis revealed that groundwater level is most sensitive to the hydraulic conductivities of both the aquifers, followed by vertical hydraulic conductivity of the confining layer. The calibrated model was then employed to explore groundwater-flow dynamics in response to changes in pumping and recharge conditions. The simulation results indicate that pumping has a substantial effect on the confined aquifer flow regime as compared to the unconfined aquifer. The results and insights from this study have important implications for other regional groundwater modeling studies, especially in multi-layered aquifer systems.

  7. Sensitivity analysis in practice: providing an uncertainty budget when applying supplement 1 to the GUM

    NASA Astrophysics Data System (ADS)

    Allard, Alexandre; Fischer, Nicolas

    2018-06-01

    Sensitivity analysis associated with the evaluation of measurement uncertainty is a very important tool for the metrologist, enabling them to provide an uncertainty budget and to gain a better understanding of the measurand and the underlying measurement process. Using the GUM uncertainty framework, the contribution of an input quantity to the variance of the output quantity is obtained through so-called ‘sensitivity coefficients’. In contrast, such coefficients are no longer computed in cases where a Monte-Carlo method is used. In such a case, supplement 1 to the GUM suggests varying the input quantities one at a time, which is not an efficient method and may provide incorrect contributions to the variance in cases where significant interactions arise. This paper proposes different methods for the elaboration of the uncertainty budget associated with a Monte Carlo method. An application to the mass calibration example described in supplement 1 to the GUM is performed with the corresponding R code for implementation. Finally, guidance is given for choosing a method, including suggestions for a future revision of supplement 1 to the GUM.
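
    The mass calibration example in supplement 1 to the GUM ships with R code in the paper; purely as a language-neutral illustration of recovering variance contributions within a Monte Carlo evaluation, here is a sketch (in Python) of one standard variance-based option, first-order Sobol indices via the pick-and-freeze (Saltelli/Jansen) estimator, applied to a toy measurement model f:

```python
import numpy as np

def model(x):
    # Toy measurement model Y = X1 + 2*X2 + X1*X3 (stands in for a real measurement model)
    return x[:, 0] + 2 * x[:, 1] + x[:, 0] * x[:, 2]

rng = np.random.default_rng(1)
n, d = 100_000, 3
A = rng.normal(size=(n, d))          # two independent input sample matrices
B = rng.normal(size=(n, d))
yA, yB = model(A), model(B)
var_y = yA.var()

for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]              # replace column i of A with that of B
    # First-order Sobol index of X_i via the pick-and-freeze estimator
    S_i = np.mean(yB * (model(ABi) - yA)) / var_y
    print(f"S_{i+1} = {S_i:.3f}")
```

    Unlike the one-at-a-time variation suggested in supplement 1, an estimator of this family remains meaningful when interactions between input quantities contribute to the output variance.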

  8. Modelling Hydrologic Processes in the Mekong River Basin Using a Distributed Model Driven by Satellite Precipitation and Rain Gauge Observations

    PubMed Central

    Wang, Wei; Lu, Hui; Yang, Dawen; Sothea, Khem; Jiao, Yang; Gao, Bin; Peng, Xueting; Pang, Zhiguo

    2016-01-01

    The Mekong River is the most important river in Southeast Asia. It has increasingly suffered from water-related problems due to economic development, population growth and climate change in the surrounding areas. In this study, we built a distributed Geomorphology-Based Hydrological Model (GBHM) of the Mekong River using remote sensing data and other publicly available data. Two numerical experiments were conducted using different rainfall data sets as model inputs. The data sets included rain gauge data from the Mekong River Commission (MRC) and remote sensing rainfall data from the Tropic Rainfall Measurement Mission (TRMM 3B42V7). Model calibration and validation were conducted for the two rainfall data sets. Compared to the observed discharge, both the gauge simulation and TRMM simulation performed well during the calibration period (1998–2001). However, the performance of the gauge simulation was worse than that of the TRMM simulation during the validation period (2002–2012). The TRMM simulation is more stable and reliable at different scales. Moreover, the calibration period was changed to 2, 4, and 8 years to test the impact of the calibration period length on the two simulations. The results suggest that longer calibration periods improved the GBHM performance during validation periods. In addition, the TRMM simulation is more stable and less sensitive to the calibration period length than is the gauge simulation. Further analysis reveals that the uneven distribution of rain gauges makes the input rainfall data less representative and more heterogeneous, worsening the simulation performance. Our results indicate that remotely sensed rainfall data may be more suitable for driving distributed hydrologic models, especially in basins with poor data quality or limited gauge availability. PMID:27010692

  9. Modelling Hydrologic Processes in the Mekong River Basin Using a Distributed Model Driven by Satellite Precipitation and Rain Gauge Observations.

    PubMed

    Wang, Wei; Lu, Hui; Yang, Dawen; Sothea, Khem; Jiao, Yang; Gao, Bin; Peng, Xueting; Pang, Zhiguo

    2016-01-01

    The Mekong River is the most important river in Southeast Asia. It has increasingly suffered from water-related problems due to economic development, population growth and climate change in the surrounding areas. In this study, we built a distributed Geomorphology-Based Hydrological Model (GBHM) of the Mekong River using remote sensing data and other publicly available data. Two numerical experiments were conducted using different rainfall data sets as model inputs. The data sets included rain gauge data from the Mekong River Commission (MRC) and remote sensing rainfall data from the Tropic Rainfall Measurement Mission (TRMM 3B42V7). Model calibration and validation were conducted for the two rainfall data sets. Compared to the observed discharge, both the gauge simulation and TRMM simulation performed well during the calibration period (1998-2001). However, the performance of the gauge simulation was worse than that of the TRMM simulation during the validation period (2002-2012). The TRMM simulation is more stable and reliable at different scales. Moreover, the calibration period was changed to 2, 4, and 8 years to test the impact of the calibration period length on the two simulations. The results suggest that longer calibration periods improved the GBHM performance during validation periods. In addition, the TRMM simulation is more stable and less sensitive to the calibration period length than is the gauge simulation. Further analysis reveals that the uneven distribution of rain gauges makes the input rainfall data less representative and more heterogeneous, worsening the simulation performance. Our results indicate that remotely sensed rainfall data may be more suitable for driving distributed hydrologic models, especially in basins with poor data quality or limited gauge availability.

  10. Time-Distance Analysis of Deep Solar Convection

    NASA Technical Reports Server (NTRS)

    Duvall, T. L., Jr.; Hanasoge, S. M.

    2011-01-01

    Recently it was shown by Hanasoge, Duvall, and DeRosa (2010) that the upper limit to convective flows for spherical harmonic degrees l

  11. Intracavity optogalvanic spectroscopy. An analytical technique for 14C analysis with subattomole sensitivity.

    PubMed

    Murnick, Daniel E; Dogru, Ozgur; Ilkmen, Erhan

    2008-07-01

    We show a new ultrasensitive laser-based analytical technique, intracavity optogalvanic spectroscopy, allowing extremely high sensitivity for detection of (14)C-labeled carbon dioxide. Capable of replacing large accelerator mass spectrometers, the technique quantifies attomoles of (14)C in submicrogram samples. Based on the specificity of narrow laser resonances coupled with the sensitivity provided by standing waves in an optical cavity and detection via impedance variations, limits of detection near 10(-15) in the (14)C/(12)C ratio are obtained. Using a 15-W (14)CO2 laser, a linear calibration with samples from 10(-15) to >1.5 x 10(-12) in (14)C/(12)C ratios, as determined by accelerator mass spectrometry, is demonstrated. Possible applications include microdosing studies in drug development, individualized subtherapeutic tests of drug metabolism, carbon dating, and real-time monitoring of atmospheric radiocarbon. The method can also be applied to the detection of other trace entities.

  12. Covalent functionalization of single-walled carbon nanotubes with polytyrosine: Characterization and analytical applications for the sensitive quantification of polyphenols.

    PubMed

    Eguílaz, Marcos; Gutiérrez, Alejandro; Gutierrez, Fabiana; González-Domínguez, Jose Miguel; Ansón-Casaos, Alejandro; Hernández-Ferrer, Javier; Ferreyra, Nancy F; Martínez, María T; Rivas, Gustavo

    2016-02-25

    This work reports the synthesis and characterization of single-walled carbon nanotubes (SWCNT) covalently functionalized with polytyrosine (Polytyr); the critical analysis of the experimental conditions to obtain the efficient dispersion of the modified carbon nanotubes; and the analytical performance of glassy carbon electrodes (GCE) modified with the dispersion (GCE/SWCNT-Polytyr) for the highly sensitive quantification of polyphenols. Under the optimal conditions, the calibration plot for the amperometric response of gallic acid (GA) shows a linear range between 5.0 × 10(-7) and 1.7 × 10(-4) M, with a sensitivity of (518 ± 5) mA M(-1) cm(-2), and a detection limit of 8.8 nM. The proposed sensor was successfully used for the determination of total polyphenolic content in tea extracts. Copyright © 2016 Elsevier B.V. All rights reserved.

  13. SUPPLEMENT: “THE RATE OF BINARY BLACK HOLE MERGERS INFERRED FROM ADVANCED LIGO OBSERVATIONS SURROUNDING GW150914” (2016, ApJL, 833, L1)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abbott, B. P.; Abbott, R.; Abernathy, M. R.

    This article provides supplemental information for a Letter reporting the rate of binary black hole (BBH) coalescences inferred from 16 days of coincident Advanced LIGO observations surrounding the transient gravitational-wave (GW) signal GW150914. In that work we reported various rate estimates whose 90% confidence intervals fell in the range 2–600 Gpc^(-3) yr^(-1). Here we give details on our method and computations, including information about our search pipelines, a derivation of our likelihood function for the analysis, a description of the astrophysical search trigger distribution expected from merging BBHs, details on our computational methods, a description of the effects and our model for calibration uncertainty, and an analytic method for estimating our detector sensitivity, which is calibrated to our measurements.

  14. Modeling the system dynamics for nutrient removal in an innovative septic tank media filter.

    PubMed

    Xuan, Zhemin; Chang, Ni-Bin; Wanielista, Martin

    2012-05-01

    A next-generation septic tank media filter to replace or enhance current on-site wastewater treatment drainfields was proposed in this study. Unit operations with known treatment efficiencies, flow pattern identification, and system dynamics modeling were cohesively combined in order to prove the concept of the newly developed media filter. A multicompartmental model addressing system dynamics and feedbacks, based on assumed microbiological processes accounting for aerobic, anoxic, and anaerobic conditions in the media filter, was constructed and calibrated with the aid of in situ measurements and an understanding of the flow patterns. The calibrated system dynamics model was then applied in a sensitivity analysis under changing inflow conditions, based on the rates of nitrification and denitrification characterized through field-scale testing. This advancement may contribute to the design of such drainfield media filters in household septic tank systems in the future.

  15. A Wireless Multi-Sensor Dielectric Impedance Spectroscopy Platform

    PubMed Central

    Ghaffari, Seyed Alireza; Caron, William-O.; Loubier, Mathilde; Rioux, Maxime; Viens, Jeff; Gosselin, Benoit; Messaddeq, Younes

    2015-01-01

    This paper describes the development of a low-cost, miniaturized, multiplexed, and connected platform for dielectric impedance spectroscopy (DIS), designed for in situ measurements and adapted to wireless network architectures. The platform has been tested and used as a DIS sensor node on a ZigBee mesh network and was able to interface up to three DIS sensors at the same time and relay the information through the network for data analysis and storage. The system is built from low-cost commercial microelectronics components, performs dielectric spectroscopy from 5 kHz to 100 kHz, and benefits from an on-the-fly calibration system that makes sensor calibration easy. The paper describes the microelectronics design, the Nyquist impedance response, the measurement sensitivity and accuracy, and the testing of the platform for in situ dielectric impedance spectroscopy applications pertaining to fertilizer sensing, water quality sensing, and touch sensing. PMID:26393587

  16. An Empirical Approach to Ocean Color Data: Reducing Bias and the Need for Post-Launch Radiometric Re-Calibration

    NASA Technical Reports Server (NTRS)

    Gregg, Watson W.; Casey, Nancy W.; O'Reilly, John E.; Esaias, Wayne E.

    2009-01-01

    A new empirical approach is developed for ocean color remote sensing. Called the Empirical Satellite Radiance-In situ Data (ESRID) algorithm, the approach uses relationships between satellite water-leaving radiances and in situ data after full processing, i.e., at Level-3, to improve estimates of surface variables while relaxing requirements on post-launch radiometric re-calibration. The approach is evaluated using SeaWiFS chlorophyll, which is the longest time series of the most widely used ocean color geophysical product. The results suggest that ESRID 1) drastically reduces the bias of ocean chlorophyll, most impressively in coastal regions, 2) modestly improves the uncertainty, and 3) reduces the sensitivity of global annual median chlorophyll to changes in radiometric re-calibration. Simulated calibration errors of 1% or less produce small changes in global median chlorophyll (less than 2.7%). In contrast, the standard NASA algorithm set is highly sensitive to radiometric calibration: similar 1% calibration errors produce changes in global median chlorophyll up to nearly 25%. We show that 0.1% radiometric calibration error (about 1% in water-leaving radiance) is needed to prevent radiometric calibration errors from changing global annual median chlorophyll more than the maximum interannual variability observed in the SeaWiFS 9-year record (+/- 3%), using the standard method. This is much more stringent than the goal for SeaWiFS of 5% uncertainty for water leaving radiance. The results suggest ocean color programs might consider less emphasis of expensive efforts to improve post-launch radiometric re-calibration in favor of increased efforts to characterize in situ observations of ocean surface geophysical products. Although the results here are focused on chlorophyll, in principle the approach described by ESRID can be applied to any surface variable potentially observable by visible remote sensing.
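
    The abstract does not give ESRID's functional form; purely as an illustration of the Level-3 empirical matchup idea (relating fully processed satellite radiances to in situ data), a sketch fitting an in situ chlorophyll vs. band-ratio relationship in log space, with invented numbers:

```python
import numpy as np

# Hypothetical Level-3 matchups: blue/green water-leaving radiance ratio vs. in situ chlorophyll
ratio = np.array([0.5, 0.8, 1.2, 1.9, 3.0, 4.5])
chl_insitu = np.array([8.0, 3.5, 1.2, 0.45, 0.15, 0.06])   # mg m^-3

# Empirical fit in log space (the ESRID paper's exact functional form is not given here)
coeffs = np.polyfit(np.log10(ratio), np.log10(chl_insitu), deg=2)

def esrid_like_chl(band_ratio):
    return 10 ** np.polyval(coeffs, np.log10(band_ratio))

print(esrid_like_chl(1.5))   # chlorophyll estimate for a new satellite retrieval
```

    Because the fit is anchored to in situ data after full processing, a constant radiometric calibration drift largely folds into the fitted coefficients, which is the mechanism behind the reduced sensitivity to re-calibration reported above.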

  17. Application of solid-phase microextraction to the quantitative analysis of 1,8-cineole in blood and expired air in a Eucalyptus herbivore, the brushtail possum (Trichosurus vulpecula).

    PubMed

    Boyle, Rebecca R; McLean, Stuart; Brandon, Sue; Pass, Georgia J; Davies, Noel W

    2002-11-25

    We have developed two solid-phase microextraction (SPME) methods, coupled with gas chromatography, for quantitatively analysing the major Eucalyptus leaf terpene, 1,8-cineole, in both expired air and blood from the common brushtail possum (Trichosurus vulpecula). In-line SPME sampling (5 min at 20 degrees C room temperature) of excurrent air from an expiratory chamber containing a possum dosed orally with 1,8-cineole (50 mg/kg) allowed real-time semi-quantitative measurements reflecting 1,8-cineole blood concentrations. Headspace SPME using 50 microl whole blood collected from possums dosed orally with 1,8-cineole (30 mg/kg) resulted in excellent sensitivity (quantitation limit 1 ng/ml) and reproducibility. Blood concentrations ranged between 1 and 1380 ng/ml. Calibration curves were prepared for two concentration ranges (0.05-10 and 10-400 ng/50 microl) for the analysis of blood concentrations. Both calibration curves were linear (r(2)=0.999 and 0.994, respectively) and the equations for the two concentration ranges were consistent. Copyright 2002 Elsevier Science B.V.

  18. Comparative artificial neural network and partial least squares models for analysis of Metronidazole, Diloxanide, Spiramycin and Cliquinol in pharmaceutical preparations.

    PubMed

    Elkhoudary, Mahmoud M; Abdel Salam, Randa A; Hadad, Ghada M

    2014-09-15

    Metronidazole (MNZ) is a widely used antibacterial and amoebicide drug. Therefore, it is important to develop a rapid and specific analytical method for the determination of MNZ in mixtures with Spiramycin (SPY), Diloxanide (DIX) and Cliquinol (CLQ) in pharmaceutical preparations. This work describes six simple, sensitive and reliable multivariate calibration methods, namely linear and nonlinear artificial neural networks preceded by a genetic algorithm (GA-ANN) or principal component analysis (PCA-ANN), as well as partial least squares (PLS) either alone or preceded by a genetic algorithm (GA-PLS), for the UV spectrophotometric determination of MNZ, SPY, DIX and CLQ in pharmaceutical preparations with no interference from pharmaceutical additives. The results illustrate the problem of nonlinearity and how models like ANN can handle it. The analytical performance of these methods was statistically validated with respect to linearity, accuracy, precision and specificity. The developed methods demonstrate the ability of the multivariate calibration models mentioned above to handle and resolve the UV spectra of the four-component mixtures using an easy and widely available UV spectrophotometer. Copyright © 2014 Elsevier B.V. All rights reserved.
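
    As a hedged illustration of how a PLS model resolves overlapped multicomponent UV spectra (not the authors' implementation; the pure spectra and concentrations below are synthetic), a minimal sketch using scikit-learn:

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(2)
wavelengths = np.linspace(200, 400, 201)

# Hypothetical pure-component UV spectra (Gaussian bands) for a 4-analyte mixture
centers, widths = [225, 260, 300, 340], [12, 15, 10, 18]
pure = np.stack([np.exp(-((wavelengths - c) / w) ** 2) for c, w in zip(centers, widths)])

C_train = rng.uniform(0, 10, size=(40, 4))            # training concentrations
X_train = C_train @ pure + rng.normal(0, 0.01, (40, len(wavelengths)))

pls = PLSRegression(n_components=6)
pls.fit(X_train, C_train)

# Predict all four analytes in an unknown mixture spectrum
c_true = np.array([[3.0, 7.0, 1.0, 5.0]])
x_unknown = c_true @ pure + rng.normal(0, 0.01, (1, len(wavelengths)))
print(pls.predict(x_unknown).round(2))
```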

  19. Enhanced orbit determination filter sensitivity analysis: Error budget development

    NASA Technical Reports Server (NTRS)

    Estefan, J. A.; Burkhart, P. D.

    1994-01-01

    An error budget analysis is presented which quantifies the effects of different error sources in the orbit determination process when the enhanced orbit determination filter, recently developed, is used to reduce radio metric data. The enhanced filter strategy differs from more traditional filtering methods in that nearly all of the principal ground system calibration errors affecting the data are represented as filter parameters. Error budget computations were performed for a Mars Observer interplanetary cruise scenario for cases in which only X-band (8.4-GHz) Doppler data were used to determine the spacecraft's orbit, X-band ranging data were used exclusively, and a combined set in which the ranging data were used in addition to the Doppler data. In all three cases, the filter model was assumed to be a correct representation of the physical world. Random nongravitational accelerations were found to be the largest source of error contributing to the individual error budgets. Other significant contributors, depending on the data strategy used, were solar-radiation pressure coefficient uncertainty, random earth-orientation calibration errors, and Deep Space Network (DSN) station location uncertainty.

  20. Comparative artificial neural network and partial least squares models for analysis of Metronidazole, Diloxanide, Spiramycin and Cliquinol in pharmaceutical preparations

    NASA Astrophysics Data System (ADS)

    Elkhoudary, Mahmoud M.; Abdel Salam, Randa A.; Hadad, Ghada M.

    2014-09-01

    Metronidazole (MNZ) is a widely used antibacterial and amoebicide drug. Therefore, it is important to develop a rapid and specific analytical method for the determination of MNZ in mixtures with Spiramycin (SPY), Diloxanide (DIX) and Cliquinol (CLQ) in pharmaceutical preparations. This work describes six simple, sensitive and reliable multivariate calibration methods, namely linear and nonlinear artificial neural networks preceded by a genetic algorithm (GA-ANN) or principal component analysis (PCA-ANN), as well as partial least squares (PLS) either alone or preceded by a genetic algorithm (GA-PLS), for the UV spectrophotometric determination of MNZ, SPY, DIX and CLQ in pharmaceutical preparations with no interference from pharmaceutical additives. The results illustrate the problem of nonlinearity and how models like ANN can handle it. The analytical performance of these methods was statistically validated with respect to linearity, accuracy, precision and specificity. The developed methods demonstrate the ability of the multivariate calibration models mentioned above to handle and resolve the UV spectra of the four-component mixtures using an easy and widely available UV spectrophotometer.

  1. Numerical simulation of groundwater flow in Dar es Salaam Coastal Plain (Tanzania)

    NASA Astrophysics Data System (ADS)

    Luciani, Giulia; Sappa, Giuseppe; Cella, Antonella

    2016-04-01

    This work presents the results of a groundwater modeling study of the coastal aquifer of Dar es Salaam (Tanzania). Dar es Salaam is one of the fastest-growing coastal cities in Sub-Saharan Africa, with more than 4 million inhabitants and a population growth rate of about 8 per cent per year. The city faces periodic water shortages due to the lack of an adequate water supply network. In the last ten years, these two factors have driven an increasing demand for groundwater, met by a large number of private wells drilled to satisfy human demand. A steady-state, three-dimensional groundwater model was set up with the MODFLOW code and calibrated with the UCODE inverse-modeling code. The aim of the model was to characterize the groundwater flow system of the Dar es Salaam coastal plain. Model inputs included the net recharge rate, calculated from precipitation time series (1961-2012), estimates of average groundwater extraction, and estimates of groundwater recharge from zones outside the study area. Hydraulic conductivities were parametrized according to the main geological features of the study area, based on available literature data and information. Boundary conditions were assigned on the basis of hydrogeological boundaries. The conceptual model was refined in subsequent steps, which added some hydrogeological features and excluded others. Calibration was performed with UCODE 2014, using 76 hydraulic head measurements taken in 2012 and referred to the same season. Data were weighted on the basis of the expected errors. Sensitivity analysis performed during calibration identified which parameters could be estimated and which data could support parameter estimation. Calibration was evaluated on the basis of statistical indices, maps of error distribution and tests of independence of residuals. Further analysis was performed after calibration to test model performance under a range of variations of the input variables.

  2. Micro-mass standards to calibrate the sensitivity of mass comparators

    NASA Astrophysics Data System (ADS)

    Madec, Tanguy; Mann, Gaëlle; Meury, Paul-André; Rabault, Thierry

    2007-10-01

    In mass metrology, the standards currently used are calibrated by a chain of comparisons, performed using mass comparators, that extends ultimately from the international prototype (which is the definition of the unit of mass) to the standards in routine use. The differences measured in the course of these comparisons become smaller and smaller as the standards approach the definitions of their units, precisely because of how accurately they have been adjusted. One source of uncertainty in the determination of the difference of mass between the mass compared and the reference mass is the sensitivity error of the comparator used. Unfortunately, no mass standards small enough (of the order of a few hundred micrograms) are available on the market for a valid evaluation of this source of uncertainty. The users of these comparators therefore have no choice but to rely on the characteristics claimed by the makers of the comparators, or else to determine this sensitivity error at higher values (at least 1 mg) and interpolate from this result to smaller differences of mass. For this reason, the LNE decided to produce and calibrate micro-mass standards having nominal values between 100 µg and 900 µg. These standards were developed, then tested in multiple comparisons on an A5-type automatic comparator. They have since been qualified and calibrated in a weighing design, repeatedly and over an extended period of time, to establish their stability with respect to oxidation and the harmlessness of the handling and storage procedure associated with their use. Finally, the micro-standards so qualified were used to characterize the sensitivity errors of two of the LNE's mass comparators, including the one used to tie France's platinum reference standard (Pt 35) to stainless steel and superalloy standards.

  3. Quantitative Clinical Diagnostic Analysis of Acetone in Human Blood by HPLC: A Metabolomic Search for Acetone as Indicator

    PubMed Central

    Akgul Kalkan, Esin; Sahiner, Mehtap; Ulker Cakir, Dilek; Alpaslan, Duygu; Yilmaz, Selehattin

    2016-01-01

    Using high-performance liquid chromatography (HPLC) and 2,4-dinitrophenylhydrazine (2,4-DNPH) as a derivatizing reagent, an analytical method was developed for the quantitative determination of acetone in human blood. The determination was carried out at 365 nm using an ultraviolet-visible (UV-Vis) diode array detector (DAD). For acetone as its 2,4-dinitrophenylhydrazone derivative, a good separation was achieved with a ThermoAcclaim C18 column (15 cm × 4.6 mm × 3 μm) at a retention time (tR) of 12.10 min and a flow rate of 1 mL min−1 using a (methanol/acetonitrile) water elution gradient. The methodology is simple, rapid, sensitive, and of low cost, exhibits good reproducibility, and allows the analysis of acetone in biological fluids. A calibration curve was obtained for acetone using its standard solutions in acetonitrile. Quantitative analysis of acetone in human blood was successfully carried out using this calibration graph. The applied method was validated in terms of linearity, limit of detection and quantification, accuracy, and precision. We also present acetone as a useful tool for the HPLC-based metabolomic investigation of endogenous metabolism and quantitative clinical diagnostic analysis. PMID:27298750

  4. Simultaneous determination of hydroquinone, catechol and resorcinol by voltammetry using graphene screen-printed electrodes and partial least squares calibration.

    PubMed

    Aragó, Miriam; Ariño, Cristina; Dago, Àngela; Díaz-Cruz, José Manuel; Esteban, Miquel

    2016-11-01

    Catechol (CC), resorcinol (RC) and hydroquinone (HQ) are dihydroxybenzene isomers that usually coexist in different samples and can be determined using voltammetric techniques, taking advantage of their fast response, high sensitivity and selectivity, cheap instrumentation, and simple and time-saving operation modes. However, a strong overlapping of the CC and HQ signals hinders their accurate analysis. In the present work, the combination of differential pulse voltammetry with graphene screen-printed electrodes (allowing detection limits of 2.7, 1.7 and 2.4 µmol L(-1) for HQ, CC and RC, respectively) and data analysis by partial least squares calibration (giving root mean square errors of prediction, RMSEP, of 2.6, 4.1 and 2.3 for HQ, CC and RC, respectively) is proposed as a powerful tool for the quantification of mixtures of these dihydroxybenzene isomers. The commercial availability of the screen-printed devices and the low cost and simplicity of the analysis suggest that the proposed method can be a valuable alternative to chromatographic and electrophoretic methods for the considered species. The method has been applied to the analysis of these isomers in spiked tap water. Copyright © 2016 Elsevier B.V. All rights reserved.

  5. Effects of calibration methods on quantitative material decomposition in photon-counting spectral computed tomography using a maximum a posteriori estimator.

    PubMed

    Curtis, Tyler E; Roeder, Ryan K

    2017-10-01

    Advances in photon-counting detectors have enabled quantitative material decomposition using multi-energy or spectral computed tomography (CT). Supervised methods for material decomposition utilize an estimated attenuation for each material of interest at each photon energy level, which must be calibrated based upon calculated or measured values for known compositions. Measurements using a calibration phantom can advantageously account for system-specific noise, but the effect of calibration methods on the material basis matrix and subsequent quantitative material decomposition has not been experimentally investigated. Therefore, the objective of this study was to investigate the influence of the range and number of contrast agent concentrations within a modular calibration phantom on the accuracy of quantitative material decomposition in the image domain. Gadolinium was chosen as a model contrast agent in imaging phantoms, which also contained bone tissue and water as negative controls. The maximum gadolinium concentration (30, 60, and 90 mM) and total number of concentrations (2, 4, and 7) were independently varied to systematically investigate effects of the material basis matrix and scaling factor calibration on the quantitative (root mean squared error, RMSE) and spatial (sensitivity and specificity) accuracy of material decomposition. Images of calibration and sample phantoms were acquired using a commercially available photon-counting spectral micro-CT system with five energy bins selected to normalize photon counts and leverage the contrast agent k-edge. Material decomposition of gadolinium, calcium, and water was performed for each calibration method using a maximum a posteriori estimator. Both the quantitative and spatial accuracy of material decomposition were most improved by using an increased maximum gadolinium concentration (range) in the basis matrix calibration; the effects of using a greater number of concentrations were relatively small in magnitude by comparison. The material basis matrix calibration was more sensitive to changes in the calibration methods than the scaling factor calibration. The material basis matrix calibration significantly influenced both the quantitative and spatial accuracy of material decomposition, while the scaling factor calibration influenced quantitative but not spatial accuracy. Importantly, the median RMSE of material decomposition was as low as ~1.5 mM (~0.24 mg/mL gadolinium), which was similar in magnitude to that measured by optical spectroscopy on the same samples. The accuracy of quantitative material decomposition in photon-counting spectral CT was significantly influenced by calibration methods which must therefore be carefully considered for the intended diagnostic imaging application. © 2017 American Association of Physicists in Medicine.
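
    The study uses a maximum a posteriori estimator; under Gaussian noise and prior assumptions, MAP material decomposition reduces to Tikhonov-regularized least squares per voxel. A bare-bones sketch with an invented basis matrix (in practice the basis matrix comes from the calibration phantom scans described above):

```python
import numpy as np

# Hypothetical material basis matrix A: attenuation of each basis material
# (columns: gadolinium, calcium, water) in each of five energy bins (rows).
A = np.array([[0.9, 0.50, 0.20],
              [1.4, 0.45, 0.18],
              [0.8, 0.40, 0.17],
              [0.6, 0.35, 0.16],
              [0.5, 0.30, 0.15]])

b = np.array([0.52, 0.68, 0.46, 0.39, 0.33])   # measured attenuation in one voxel

# MAP with a zero-mean Gaussian prior reduces to Tikhonov-regularized least squares
lam = 1e-3
x = np.linalg.solve(A.T @ A + lam * np.eye(3), A.T @ b)
print(dict(zip(["Gd", "Ca", "H2O"], x.round(3))))
```

    The study's central finding maps directly onto this sketch: the accuracy of the recovered x depends on how well the columns of A were calibrated, in particular on the concentration range spanned by the calibration phantom.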

  6. Aircraft electric field measurements: Calibration and ambient field retrieval

    NASA Technical Reports Server (NTRS)

    Koshak, William J.; Bailey, Jeff; Christian, Hugh J.; Mach, Douglas M.

    1994-01-01

    An aircraft locally distorts the ambient thundercloud electric field. In order to determine the field in the absence of the aircraft, an aircraft calibration is required. In this work a matrix inversion method is introduced for calibrating an aircraft equipped with four or more electric field sensors and a high-voltage corona point that is capable of charging the aircraft. An analytic, closed-form solution for the estimate of a (3 x 3) aircraft calibration matrix is derived, and an absolute calibration experiment is used to improve the relative magnitudes of the elements of this matrix. To demonstrate the calibration procedure, we analyze actual calibration data derived from a Learjet 28/29 that was equipped with five shutter-type field mill sensors (each with a sensitivity of better than 1 V/m) located on the top, bottom, port, starboard, and aft positions. As a test of the calibration method, we analyze computer-simulated calibration data (derived from known aircraft and ambient fields) and explicitly determine the errors involved in deriving the variety of calibration matrices. We extend our formalism to arrive at an analytic solution for the ambient field, and again carry all errors explicitly.
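
    The paper derives a closed-form (3 x 3) calibration matrix; the sketch below is only a generic least-squares analogue of the same two-step idea (calibrate the mill-response matrix from known states, then invert it to retrieve the ambient field), with simulated data standing in for the Learjet measurements:

```python
import numpy as np

# Model: mill readings m = M @ (Ex, Ey, Ez, q), i.e., a linear map of the
# ambient field components plus the aircraft charge q. Here: 5 mills, 4 unknowns.
rng = np.random.default_rng(3)
M_true = rng.normal(size=(5, 4))                 # "true" response matrix (unknown in practice)

# Calibration phase: known maneuvers / corona-point charging excite known states
states = rng.normal(size=(200, 4))               # known (Ex, Ey, Ez, q) during calibration
mills = states @ M_true.T + rng.normal(0, 0.01, (200, 5))

M_est, *_ = np.linalg.lstsq(states, mills, rcond=None)   # least-squares calibration
M_est = M_est.T

# Retrieval phase: invert the calibrated matrix to recover the ambient field
reading = mills[0]
state_est, *_ = np.linalg.lstsq(M_est, reading, rcond=None)
print("retrieved (Ex, Ey, Ez, q):", state_est.round(3))
```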

  7. Calibration and comparison of the NASA Lewis free-piston Stirling engine model predictions with RE-1000 test data

    NASA Technical Reports Server (NTRS)

    Geng, Steven M.

    1987-01-01

    A free-piston Stirling engine performance code is being upgraded and validated at the NASA Lewis Research Center under an interagency agreement between the Department of Energy's Oak Ridge National Laboratory and NASA Lewis. Many modifications were made to the free-piston code in an attempt to decrease the calibration effort. A procedure was developed that made the code calibration process more systematic. Engine-specific calibration parameters are often used to bring predictions and experimental data into better agreement. The code was calibrated to a matrix of six experimental data points. Predictions of the calibrated free-piston code are compared with RE-1000 free-piston Stirling engine sensitivity test data taken at NASA Lewis. Reasonable agreement was obtained between the code prediction and the experimental data over a wide range of engine operating conditions.

  8. Calibration and comparison of the NASA Lewis free-piston Stirling engine model predictions with RE-1000 test data

    NASA Technical Reports Server (NTRS)

    Geng, Steven M.

    1987-01-01

    A free-piston Stirling engine performance code is being upgraded and validated at the NASA Lewis Research Center under an interagency agreement between the Department of Energy's Oak Ridge National Laboratory and NASA Lewis. Many modifications were made to the free-piston code in an attempt to decrease the calibration effort. A procedure was developed that made the code calibration process more systematic. Engine-specific calibration parameters are often used to bring predictions and experimental data into better agreement. The code was calibrated to a matrix of six experimental data points. Predictions of the calibrated free-piston code are compared with RE-1000 free-piston Stirling engine sensitivity test data taken at NASA Lewis. Reasonable agreement was obtained between the code predictions and the experimental data over a wide range of engine operating conditions.

  9. Handheld Metal Detectors: Nicaraguan Field Test Report

    DTIC Science & Technology

    2001-10-01

    Electronic Sensors Directorate (NVESD), the Organization of American States (OAS) (Organizacion de los Estados Americanos [OEA]), the Assistance...manufacturer's instructional manual, against known targets and soil types to optimize the detector sensitivity. Operators were instructed to operate...operating manual in the training/calibration area. Once calibration was completed, the operators were required to spend some time working with the

  10. Error analysis of speed of sound reconstruction in ultrasound limited angle transmission tomography.

    PubMed

    Jintamethasawat, Rungroj; Lee, Won-Mean; Carson, Paul L; Hooi, Fong Ming; Fowlkes, J Brian; Goodsitt, Mitchell M; Sampson, Richard; Wenisch, Thomas F; Wei, Siyuan; Zhou, Jian; Chakrabarti, Chaitali; Kripfgans, Oliver D

    2018-04-07

    We have investigated limited angle transmission tomography to estimate speed of sound (SOS) distributions for breast cancer detection. That requires both accurate delineations of major tissues, in this case by segmentation of prior B-mode images, and calibration of the relative positions of the opposed transducers. Experimental sensitivity evaluation of the reconstructions with respect to segmentation and calibration errors is difficult with our current system. Therefore, parametric studies of SOS errors in our bent-ray reconstructions were simulated. They included mis-segmentation of an object of interest or a nearby object, and miscalibration of relative transducer positions in 3D. Close correspondence of reconstruction accuracy was verified in the simplest case, a cylindrical object in homogeneous background with induced segmentation and calibration inaccuracies. Simulated mis-segmentation in object size and lateral location produced maximum SOS errors of 6.3% within 10 mm diameter change and 9.1% within 5 mm shift, respectively. Modest errors in assumed transducer separation produced the maximum SOS error from miscalibrations (57.3% within 5 mm shift), still, correction of this type of error can easily be achieved in the clinic. This study should aid in designing adequate transducer mounts and calibration procedures, and in specification of B-mode image quality and segmentation algorithms for limited angle transmission tomography relying on ray tracing algorithms. Copyright © 2018 Elsevier B.V. All rights reserved.

  11. Evaluating the predictive accuracy and the clinical benefit of a nomogram aimed to predict survival in node-positive prostate cancer patients: External validation on a multi-institutional database.

    PubMed

    Bianchi, Lorenzo; Schiavina, Riccardo; Borghesi, Marco; Bianchi, Federico Mineo; Briganti, Alberto; Carini, Marco; Terrone, Carlo; Mottrie, Alex; Gacci, Mauro; Gontero, Paolo; Imbimbo, Ciro; Marchioro, Giansilvio; Milanese, Giulio; Mirone, Vincenzo; Montorsi, Francesco; Morgia, Giuseppe; Novara, Giacomo; Porreca, Angelo; Volpe, Alessandro; Brunocilla, Eugenio

    2018-04-06

    To assess the predictive accuracy and the clinical value of a recent nomogram predicting cancer-specific mortality-free survival after surgery in pN1 prostate cancer patients through an external validation. We evaluated 518 prostate cancer patients treated with radical prostatectomy and pelvic lymph node dissection with evidence of nodal metastases at final pathology, at 10 tertiary centers. External validation was carried out using the regression coefficients of the previously published nomogram. The performance characteristics of the model were assessed by quantifying predictive accuracy, according to the area under the curve in the receiver operating characteristic curve, and model calibration. Furthermore, we systematically analyzed the specificity, sensitivity, positive predictive value and negative predictive value for each nomogram-derived probability cut-off. Finally, we implemented decision curve analysis, in order to quantify the nomogram's clinical value in routine practice. External validation showed inferior predictive accuracy compared with the internal validation (65.8% vs 83.3%, respectively). The discrimination (area under the curve) of the multivariable model was 66.7% (95% CI 60.1-73.0%) when tested with receiver operating characteristic curve analysis. The calibration plot showed an overestimation throughout the range of predicted cancer-specific mortality-free survival probabilities. However, in decision curve analysis, the nomogram's use showed a net benefit when compared with the scenarios of treating all patients or none. In an external setting, the nomogram showed inferior predictive accuracy and suboptimal calibration characteristics compared with those reported in the original population. However, decision curve analysis showed a clinical net benefit, suggesting a clinical implication to correctly manage pN1 prostate cancer patients after surgery. © 2018 The Japanese Urological Association.
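
    For readers unfamiliar with decision curve analysis: the net benefit at a risk threshold p_t is TP/n - (FP/n) * p_t/(1 - p_t), compared against the "treat all" and "treat none" strategies. A minimal sketch with a synthetic cohort (not the study's data):

```python
import numpy as np

def net_benefit(y_true, risk, threshold):
    """Decision-curve net benefit of a model at a given risk threshold p_t."""
    treat = risk >= threshold
    n = len(y_true)
    tp = np.sum(treat & (y_true == 1)) / n
    fp = np.sum(treat & (y_true == 0)) / n
    return tp - fp * threshold / (1 - threshold)

# Hypothetical validation cohort: outcomes and nomogram-predicted risks
rng = np.random.default_rng(4)
y = rng.binomial(1, 0.3, 500)
risk = np.clip(0.3 + 0.25 * (y - 0.3) + rng.normal(0, 0.15, 500), 0.01, 0.99)

for pt in (0.1, 0.2, 0.3, 0.4):
    nb_model = net_benefit(y, risk, pt)
    nb_all = y.mean() - (1 - y.mean()) * pt / (1 - pt)   # "treat all" strategy
    print(f"p_t={pt:.1f}: model NB={nb_model:.3f}, treat-all NB={nb_all:.3f}, treat-none NB=0")
```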

  12. Method development towards qualitative and semi-quantitative analysis of multiple pesticides from food surfaces and extracts by desorption electrospray ionization mass spectrometry as a preselective tool for food control.

    PubMed

    Gerbig, Stefanie; Stern, Gerold; Brunn, Hubertus E; Düring, Rolf-Alexander; Spengler, Bernhard; Schulz, Sabine

    2017-03-01

    Direct analysis of fruit and vegetable surfaces is an important tool for in situ detection of food contaminants such as pesticides. We tested three different ways to prepare samples for the qualitative desorption electrospray ionization mass spectrometry (DESI-MS) analysis of 32 pesticides found on nine authentic fruits collected from food control. The best recovery rates for topically applied pesticides (88%) were found by analyzing the surface of a glass slide which had been rubbed against the surface of the food. The pesticide concentration in all samples was at or below the maximum residue level allowed. In addition to the high sensitivity of the method for qualitative analysis, quantitative or, at least, semi-quantitative information is needed in food control. We developed a DESI-MS method for the simultaneous determination of linear calibration curves of multiple pesticides of the same chemical class using normalization to one internal standard (ISTD). The method was first optimized for food extracts and subsequently evaluated for the quantification of pesticides in three authentic food extracts. Next, pesticides and the ISTD were applied directly onto food surfaces, and the corresponding calibration curves were obtained. The determination of linear calibration curves was still feasible, as demonstrated for three different food surfaces. This proof-of-principle method was used to simultaneously quantify two pesticides on an authentic sample, showing that the method developed could serve as a fast and simple preselective tool for disclosure of pesticide regulation violations. Graphical Abstract: Multiple pesticide residues were detected and quantified in situ from an authentic set of food items and extracts in a proof-of-principle study.

  13. Optical System Error Analysis and Calibration Method of High-Accuracy Star Trackers

    PubMed Central

    Sun, Ting; Xing, Fei; You, Zheng

    2013-01-01

    The star tracker is a high-accuracy attitude measurement device widely used in spacecraft. Its performance depends largely on the precision of the optical system parameters. Therefore, the analysis of the optical system parameter errors and a precise calibration model are crucial to the accuracy of the star tracker. Research in this field has so far lacked a systematic and universal analysis. This paper proposes in detail an approach for the synthetic error analysis of the star tracker, without complicated theoretical derivation. This approach can determine the error propagation relationships of the star tracker and can be used to build an error model intuitively and systematically. The analysis results can serve as a foundation and a guide for the optical design, calibration, and compensation of the star tracker. A calibration experiment is designed and conducted. Excellent calibration results are achieved based on the calibration model. To summarize, the error analysis approach and the calibration method are proved to be adequate and precise, and could provide an important guarantee for the design, manufacture, and measurement of high-accuracy star trackers. PMID:23567527

  14. Flows of dioxins and furans in coastal food webs: inverse modeling, sensitivity analysis, and applications of linear system theory.

    PubMed

    Saloranta, Tuomo M; Andersen, Tom; Naes, Kristoffer

    2006-01-01

    Rate constant bioaccumulation models are applied to simulate the flow of polychlorinated dibenzo-p-dioxins and dibenzofurans (PCDD/Fs) in the coastal marine food web of Frierfjorden, a contaminated fjord in southern Norway. We apply two different ways to parameterize the rate constants in the model, global sensitivity analysis of the models using Extended Fourier Amplitude Sensitivity Test (Extended FAST) method, as well as results from general linear system theory, in order to obtain a more thorough insight to the system's behavior and to the flow pathways of the PCDD/Fs. We calibrate our models against observed body concentrations of PCDD/Fs in the food web of Frierfjorden. Differences between the predictions from the two models (using the same forcing and parameter values) are of the same magnitude as their individual deviations from observations, and the models can be said to perform about equally well in our case. Sensitivity analysis indicates that the success or failure of the models in predicting the PCDD/F concentrations in the food web organisms highly depends on the adequate estimation of the truly dissolved concentrations in water and sediment pore water. We discuss the pros and cons of such models in understanding and estimating the present and future concentrations and bioaccumulation of persistent organic pollutants in aquatic food webs.
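
    As an illustration of the rate-constant model class used here (not the authors' parameterization; all rate constants and concentrations below are invented), a one-compartment uptake-elimination sketch:

```python
import numpy as np

# One-compartment rate-constant model: dC/dt = k_up*C_water + k_diet*C_prey - k_elim*C
k_up, k_diet, k_elim = 0.05, 0.002, 0.01      # d^-1 (aqueous uptake, dietary, elimination)
C_water, C_prey = 0.8, 120.0                  # truly dissolved water conc., prey body conc.

dt, t_end = 0.1, 1000.0
C = 0.0
for _ in range(int(t_end / dt)):              # forward-Euler integration to steady state
    C += dt * (k_up * C_water + k_diet * C_prey - k_elim * C)

print(f"simulated steady-state body concentration ~ {C:.2f}")
print((k_up * C_water + k_diet * C_prey) / k_elim)   # analytic steady state, for comparison
```

    The sensitivity result reported above is visible in this structure: the predicted body concentration scales directly with the truly dissolved water (and pore-water) concentrations feeding the uptake terms.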

  15. Salting-out assisted liquid-liquid extraction and partial least squares regression to assay low molecular weight polycyclic aromatic hydrocarbons leached from soils and sediments

    NASA Astrophysics Data System (ADS)

    Bressan, Lucas P.; do Nascimento, Paulo Cícero; Schmidt, Marcella E. P.; Faccin, Henrique; de Machado, Leandro Carvalho; Bohrer, Denise

    2017-02-01

    A novel method was developed to determine low molecular weight polycyclic aromatic hydrocarbons in aqueous leachates from soils and sediments using a salting-out assisted liquid-liquid extraction, synchronous fluorescence spectrometry and a multivariate calibration technique. Several experimental parameters were controlled and the optimum conditions were: sodium carbonate as the salting-out agent at a concentration of 2 mol L-1, 3 mL of acetonitrile as extraction solvent, 6 mL of aqueous leachate, vortexing for 5 min and centrifuging at 4000 rpm for 5 min. The partial least squares calibration was optimized to the lowest values of root mean squared error, and five latent variables were chosen for each of the targeted compounds. The regression coefficients for the true versus predicted concentrations were higher than 0.99. Figures of merit for the multivariate method were calculated, namely sensitivity, multivariate detection limit and multivariate quantification limit. The selectivity was also evaluated, and other polycyclic aromatic hydrocarbons did not interfere in the analysis. Likewise, high performance liquid chromatography was used as a comparative methodology, and the regression analysis between the methods showed no statistical difference (t-test). The proposed methodology was applied to soils and sediments of a Brazilian river and the recoveries ranged from 74.3% to 105.8%. Overall, the proposed methodology was suitable for the targeted compounds, showing that the extraction method can be applied to spectrofluorometric analysis and that the multivariate calibration is also suitable for these compounds in leachates from real samples.

  16. Balance Calibration – A Method for Assigning a Direct-Reading Uncertainty to an Electronic Balance.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mike Stears

    2010-07-01

    Paper Title: Balance Calibration – A method for assigning a direct-reading uncertainty to an electronic balance. Intended Audience: Those who calibrate or use electronic balances. Abstract: As a calibration facility, we provide on-site (at the customer’s location) calibrations of electronic balances for customers within our company. In our experience, most of our customers are not using their balance as a comparator, but simply putting an unknown quantity on the balance and reading the displayed mass value. Manufacturer’s specifications for balances typically include specifications such as readability, repeatability, linearity, and sensitivity temperature drift, but what does this all mean when the balance user simply reads the displayed mass value and accepts the reading as the true value? This paper discusses a method for assigning a direct-reading uncertainty to a balance based upon the observed calibration data and the environment where the balance is being used. The method requires input from the customer regarding the environment where the balance is used and encourages discussion with the customer regarding sources of uncertainty and possible means for improvement; the calibration process becomes an educational opportunity for the balance user as well as calibration personnel. This paper will cover the uncertainty analysis applied to the calibration weights used for the field calibration of balances; the uncertainty is calculated over the range of environmental conditions typically encountered in the field and the resulting range of air density. The temperature stability in the area of the balance is discussed with the customer and the temperature range over which the balance calibration is valid is decided upon; the decision is based upon the uncertainty needs of the customer and the desired rigor in monitoring by the customer. Once the environmental limitations are decided, the calibration is performed and the measurement data is entered into a custom spreadsheet. The spreadsheet uses measurement results, along with the manufacturer’s specifications, to assign a direct-read measurement uncertainty to the balance. The fact that the assigned uncertainty is a best-case uncertainty is discussed with the customer; the assigned uncertainty contains no allowance for contributions associated with the unknown weighing sample, such as density, static charges, magnetism, etc. The attendee will learn uncertainty considerations associated with balance calibrations along with one method for assigning an uncertainty to a balance used for non-comparison measurements.
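
    The custom spreadsheet itself is not reproduced in the abstract; a generic GUM-style sketch of how such direct-reading components might be combined, with invented specifications and assumed rectangular distributions for the spec-sheet terms:

```python
import math

# Hypothetical manufacturer specs and observed calibration data for a 200 g balance
readability = 0.0001            # g (display resolution d)
repeatability_sd = 0.00012      # g, standard deviation from calibration weighings
linearity = 0.0002              # g, max deviation over the calibrated range
temp_drift = 2e-6 * 200 * 3     # g: 2 ppm/K sensitivity drift, 200 g load, +/-3 K room
cal_weight_unc = 0.00010        # g, standard uncertainty of the reference weights

# Quantization and spec-limit terms treated as rectangular; combine by root-sum-square
components = [
    readability / (2 * math.sqrt(3)),   # display quantization (half-width d/2)
    repeatability_sd,
    linearity / math.sqrt(3),
    temp_drift / math.sqrt(3),
    cal_weight_unc,
]
u_c = math.sqrt(sum(u ** 2 for u in components))
print(f"combined u = {u_c*1000:.3f} mg, expanded (k=2) U = {2*u_c*1000:.3f} mg")
```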

  17. Sensitivity analysis of machine-learning models of hydrologic time series

    NASA Astrophysics Data System (ADS)

    O'Reilly, A. M.

    2017-12-01

    Sensitivity analysis traditionally has been applied to assessing model response to perturbations in model parameters, where the parameters are those model input variables adjusted during calibration. Unlike physics-based models where parameters represent real phenomena, the equivalent of parameters for machine-learning models are simply mathematical "knobs" that are automatically adjusted during training/testing/verification procedures. Thus the challenge of extracting knowledge of hydrologic system functionality from machine-learning models lies in their very nature, leading to the label "black box." Sensitivity analysis of the forcing-response behavior of machine-learning models, however, can provide understanding of how the physical phenomena represented by model inputs affect the physical phenomena represented by model outputs. As part of a previous study, hybrid spectral-decomposition artificial neural network (ANN) models were developed to simulate the observed behavior of hydrologic response contained in multidecadal datasets of lake water level, groundwater level, and spring flow. Model inputs used moving window averages (MWA) to represent various frequencies and frequency-band components of time series of rainfall and groundwater use. Using these forcing time series, the MWA-ANN models were trained to predict time series of lake water level, groundwater level, and spring flow at 51 sites in central Florida, USA. A time series of sensitivities for each MWA-ANN model was produced by perturbing forcing time-series and computing the change in response time-series per unit change in perturbation. Variations in forcing-response sensitivities are evident between types (lake, groundwater level, or spring), spatially (among sites of the same type), and temporally. Two generally common characteristics among sites are more uniform sensitivities to rainfall over time and notable increases in sensitivities to groundwater usage during significant drought periods.
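
    A minimal sketch of the perturbation procedure, with a generic scikit-learn regressor standing in for the study's MWA-ANN models; the input names, data, and perturbation size are hypothetical.

    ```python
    # Sketch: one-at-a-time forcing perturbation of a trained model --
    # change in response per unit change in perturbation. Data are synthetic.
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(1)
    X = rng.normal(size=(500, 3))   # columns: two rainfall MWAs, one pumping MWA
    y = 2.0 * X[:, 0] - 0.5 * X[:, 2] + 0.1 * rng.normal(size=500)
    model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000,
                         random_state=0).fit(X, y)

    delta = 0.01                     # perturbation size
    for j, name in enumerate(["rain_short", "rain_long", "gw_use"]):
        Xp = X.copy()
        Xp[:, j] += delta
        s = (model.predict(Xp) - model.predict(X)) / delta   # per-sample sensitivity
        print(f"{name}: mean sensitivity {s.mean():+.2f}")
    ```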

  18. Micro-Arcsec mission: implications of the monitoring, diagnostic and calibration of the instrument response in the data reduction chain.

    NASA Astrophysics Data System (ADS)

    Busonero, D.; Gai, M.

    The goals of 21st century high angular precision experiments rely on the limiting performance associated with the selected instrumental configuration and observational strategy. Both global and narrow-angle micro-arcsec space astrometry require that the instrument contributions to the overall error budget be less than the desired micro-arcsec level precision. Appropriate modelling of the astrometric response is required for optimal definition of the data reduction and calibration algorithms, in order to ensure high sensitivity to the astrophysical source parameters and, in general, high accuracy. We refer to the framework of the SIM-Lite and Gaia missions, the most challenging space missions of the next decade in the narrow-angle and global astrometry fields, respectively. We focus our discussion on the Gaia data reduction issues and instrument calibration implications. We describe selected topics in the framework of the Astrometric Instrument Modelling for the Gaia mission, highlighting their role in the data reduction chain, and we give a brief overview of the Astrometric Instrument Model Data Analysis Software System, a Java-based pipeline under development by our team.

  19. Modeling Streamflow and Water Temperature in the North Santiam and Santiam Rivers, Oregon, 2001-02

    USGS Publications Warehouse

    Sullivan, Annett B.; Rounds, Stewart A.

    2004-01-01

    To support the development of a total maximum daily load (TMDL) for water temperature in the Willamette Basin, the laterally averaged, two-dimensional model CE-QUAL-W2 was used to construct a water temperature and streamflow model of the Santiam and North Santiam Rivers. The rivers were simulated from downstream of Detroit and Big Cliff dams to the confluence with the Willamette River. Inputs to the model included bathymetric data, flow and temperature from dam releases, tributary flow and temperature, and meteorologic data. The model was calibrated for the period July 1 through November 21, 2001, and confirmed with data from April 1 through October 31, 2002. Flow calibration made use of data from two streamflow gages and travel-time and river-width data. Temperature calibration used data from 16 temperature monitoring locations in 2001 and 5 locations in 2002. A sensitivity analysis was completed by independently varying input parameters, including point-source flow, air temperature, flow and water temperature from dam releases, and riparian shading. Scenario analyses considered hypothetical river conditions without anthropogenic heat inputs, with restored riparian vegetation, with minimum streamflow from the dams, and with a more-natural seasonal water temperature regime from dam releases.

  20. Meteor44 Video Meteor Photometry

    NASA Technical Reports Server (NTRS)

    Swift, Wesley R.; Suggs, Robert M.; Cooke, William J.

    2004-01-01

    Meteor44 is a software system developed at MSFC for the calibration and analysis of video meteor data. The dynamic range of the (8-bit) video data is extended by approximately 4 magnitudes for both meteors and stellar images using saturation compensation. Camera and lens specific saturation compensation coefficients are derived from artificial variable star laboratory measurements. Saturation compensation significantly increases the number of meteors with measured intensity and improves the estimation of meteoroid mass distribution. Astrometry is automated to determine each image's plate coefficient using appropriate star catalogs. The images are simultaneously intensity calibrated from the contained stars to determine the photon sensitivity and the saturation level referenced above the atmosphere. The camera's spectral response is used to compensate for stellar color index and typical meteor spectra in order to report meteor light curves in traditional visual magnitude units. Recent efforts include improved camera calibration procedures, long focal length "streak" meteor photometry and two-station track determination. Meteor44 has been used to analyze data from the 2001, 2002, and 2003 MSFC Leonid observational campaigns as well as several lesser showers. The software is interactive and can be demonstrated using data from recent Leonid campaigns.

  1. Validation of Storm Water Management Model Storm Control Measures Modules

    NASA Astrophysics Data System (ADS)

    Simon, M. A.; Platz, M. C.

    2017-12-01

    EPA's Storm Water Management Model (SWMM) is a computational code heavily relied upon by industry for the simulation of wastewater and stormwater infrastructure performance. Many municipalities are relying on SWMM results to design multi-billion-dollar, multi-decade infrastructure upgrades. Since the 1970s, EPA and others have developed five major releases, the most recent ones containing storm control measures modules for green infrastructure. The main objective of this study was to quantify the accuracy with which SWMM v5.1.10 simulates the hydrologic activity of previously monitored low impact developments. Model performance was evaluated with a mathematical comparison of outflow hydrographs and total outflow volumes, using empirical data and a multi-event, multi-objective calibration method. The calibration methodology utilized PEST++ Version 3, a parameter estimation tool, which aided in the selection of unmeasured hydrologic parameters. From the validation study and sensitivity analysis, several model improvements were identified to advance SWMM LID module performance for permeable pavements, infiltration units, and green roofs; these improvements were implemented and are reported herein. Overall, it was determined that SWMM can successfully simulate low impact development controls given accurate model confirmation, parameter measurement, and model calibration.
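
    The abstract does not state the exact objective functions used for the hydrograph comparison; the Nash-Sutcliffe efficiency below is one common choice, shown as an assumed example with invented event data.

    ```python
    # Sketch: Nash-Sutcliffe efficiency (NSE), a common objective when
    # comparing simulated and observed outflow hydrographs.
    import numpy as np

    def nse(observed, simulated) -> float:
        """NSE: 1 is a perfect fit; values below 0 are worse than the mean."""
        observed = np.asarray(observed, dtype=float)
        simulated = np.asarray(simulated, dtype=float)
        return 1.0 - np.sum((observed - simulated) ** 2) / np.sum(
            (observed - observed.mean()) ** 2)

    obs = np.array([0.0, 0.4, 1.2, 0.9, 0.3, 0.1])  # toy event hydrograph, cms
    sim = np.array([0.0, 0.5, 1.0, 1.0, 0.3, 0.1])
    print(f"NSE = {nse(obs, sim):.3f}")
    ```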

  2. Fluorescent nanosensors for intracellular measurements: synthesis, characterization, calibration, and measurement

    PubMed Central

    Desai, Arpan S.; Chauhan, Veeren M.; Johnston, Angus P. R.; Esler, Tim; Aylott, Jonathan W.

    2013-01-01

    Measurement of intracellular acidification is important for understanding fundamental biological pathways as well as developing effective therapeutic strategies. Fluorescent pH nanosensors are an enabling technology for real-time monitoring of intracellular acidification. The physicochemical characteristics of nanosensors can be engineered to target specific cellular compartments and respond to external stimuli. Therefore, nanosensors represent a versatile approach for probing biological pathways inside cells. The fundamental components of nanosensors comprise a pH-sensitive fluorophore (signal transducer) and a pH-insensitive reference fluorophore (internal standard) immobilized in an inert non-toxic matrix. The inert matrix prevents interference of cellular components with the sensing elements as well as minimizing potentially harmful effects of some fluorophores on cell function. Fluorescent nanosensors are synthesized using standard laboratory equipment and are detectable by non-invasive widely accessible imaging techniques. The outcomes of studies employing this technology are dependent on reliable methodology for performing measurements. In particular, special consideration must be given to conditions for sensor calibration, uptake conditions and parameters for image analysis. We describe procedures for: (1) synthesis and characterization of polyacrylamide and silica based nanosensors, (2) nanosensor calibration and (3) performing measurements using fluorescence microscopy. PMID:24474936
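
    A minimal sketch of the calibration step (2), assuming a Boltzmann-type sigmoid fit of the pH-sensitive/reference fluorescence ratio against buffer pH; the calibration points and parameter names are illustrative, not from the protocol.

    ```python
    # Sketch: fitting a sigmoidal calibration curve to the ratiometric
    # nanosensor response. Calibration pH values and ratios are invented.
    import numpy as np
    from scipy.optimize import curve_fit

    def boltzmann(pH, r_min, r_max, pKa, width):
        """Ratiometric response: sigmoid between acid and base plateaus."""
        return r_min + (r_max - r_min) / (1.0 + np.exp((pKa - pH) / width))

    pH_cal = np.array([4.0, 4.5, 5.0, 5.5, 6.0, 6.5, 7.0, 7.5])
    ratio  = np.array([0.21, 0.25, 0.38, 0.60, 0.83, 0.95, 1.01, 1.03])

    popt, _ = curve_fit(boltzmann, pH_cal, ratio, p0=[0.2, 1.0, 5.8, 0.3])
    print(f"fitted apparent pKa ~ {popt[2]:.2f}")
    ```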

  3. Effects of Contamination Upon the Performance of X-Ray Telescopes

    NASA Technical Reports Server (NTRS)

    O'Dell, Stephen L.; Elsner, Ronald F.; Oosterbroek, Tim

    2010-01-01

    Particulate and molecular contamination can each impact the performance of x-ray telescope systems. Furthermore, any changes in the level of contamination between on-ground calibration and in-space operation can compromise the validity of the calibration. Thus, it is important to understand the sensitivity of telescope performance, especially the net effective area and the wings of the point spread function to contamination. Here, we quantify this sensitivity and discuss the flow-down of science requirements to contamination-control requirements. As an example, we apply this methodology to the International X-ray Observatory (IXO), currently under joint study by ESA, JAXA, and NASA.

  4. Effects of contamination upon the performance of x-ray telescopes

    NASA Astrophysics Data System (ADS)

    O'Dell, Stephen L.; Elsner, Ronald F.; Oosterbroek, Tim

    2010-07-01

    Particulate and molecular contamination can each impact the performance of x-ray telescope systems. Furthermore, any changes in the level of contamination between on-ground calibration and in-space operation can compromise the validity of the calibration. Thus, it is important to understand the sensitivity of telescope performance, especially the net effective area and the wings of the point spread function, to contamination. Here, we quantify this sensitivity and discuss the flow-down of science requirements to contamination-control requirements. As an example, we apply this methodology to the International X-ray Observatory (IXO), currently under joint study by ESA, JAXA, and NASA.

  5. Revision of IRIS/IDA Seismic Station Metadata

    NASA Astrophysics Data System (ADS)

    Xu, W.; Davis, P.; Auerbach, D.; Klimczak, E.

    2017-12-01

    Trustworthy data quality assurance has always been one of the goals of seismic network operators and data management centers. This task is considerably complex and evolving due to the huge quantities as well as the rapidly changing characteristics and complexities of seismic data. Published metadata usually reflect instrument response characteristics and their accuracies, which include zero-frequency sensitivity for both seismometer and data logger as well as other, frequency-dependent elements. In this work, we mainly focus on studying the variation of the seismometer sensitivity with time for IRIS/IDA seismic recording systems, with the goal of improving the metadata accuracy over the history of the network. There are several ways to measure the accuracy of seismometer sensitivity for seismic stations in service. An effective practice recently developed is to collocate a reference seismometer nearby to verify the in-situ sensors' calibration. For those stations with a secondary broadband seismometer, IRIS' MUSTANG metric computation system introduced a transfer function metric to reflect the two sensors' gain ratios in the microseism frequency band. In addition, a simulation approach based on M2 tidal measurements has been proposed and proven to be effective. In this work, we compare and analyze the results from the three different methods, and conclude that the collocated-sensor method is the most stable and reliable, with the smallest uncertainties throughout. For epochs without either a collocated sensor or a secondary seismometer, we rely on the results from the tide method. For the data since 1992 at IDA stations, we computed over 600 revised seismometer sensitivities covering all IRIS/IDA network calibration epochs. Further revision procedures should help guarantee that the metadata accurately describe the data from these stations.
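
    A minimal sketch of the collocated-sensor idea on synthetic data: the gain ratio of two sensors is estimated from their power spectra in the secondary-microseism band. The band limits, sample rate, and signals are all assumed for illustration.

    ```python
    # Sketch: estimating the gain ratio of two collocated seismometers from
    # their spectra in the secondary-microseism band (~0.1-0.3 Hz).
    import numpy as np
    from scipy.signal import welch

    fs = 20.0                                   # samples per second
    t = np.arange(0, 3600, 1 / fs)
    rng = np.random.default_rng(2)
    microseism = np.sin(2 * np.pi * 0.15 * t) + 0.1 * rng.normal(size=t.size)
    primary = 1.00 * microseism                 # reference sensor
    secondary = 0.97 * microseism               # sensor with drifted sensitivity

    f, p1 = welch(primary, fs=fs, nperseg=4096)
    f, p2 = welch(secondary, fs=fs, nperseg=4096)
    band = (f >= 0.1) & (f <= 0.3)
    gain_ratio = np.sqrt(np.median(p2[band] / p1[band]))
    print(f"estimated gain ratio: {gain_ratio:.3f}")   # ~0.97
    ```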

  6. Collection of quantitative chemical release field data.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Demirgian, J.; Macha, S.; Loyola Univ.

    1999-01-01

    Detection and quantitation of chemicals in the environment requires Fourier-transform infrared (FTIR) instruments that are properly calibrated and tested. This calibration and testing requires field testing using matrices that are representative of actual instrument use conditions. Three methods commonly used for developing calibration files and training sets in the field are a closed optical cell or chamber, a large-scale chemical release, and a small-scale chemical release. There is no best method. The advantages and limitations of each method should be considered in evaluating field results. Proper calibration characterizes the sensitivity of an instrument, its ability to detect a component in different matrices, and the quantitative accuracy and precision of the results.

  7. High resolution analysis of trace elements in corals by laser ablation ICP-MS

    NASA Astrophysics Data System (ADS)

    Sinclair, Daniel J.; Kinsley, Leslie P. J.; McCulloch, Malcolm T.

    1998-06-01

    A method has been developed using laser ablation inductively-coupled plasma mass spectrometry (LA-ICP-MS) for rapid high resolution analysis of B, Mg, Sr, Ba, and U in corals. Corals represent a challenge for a microbeam technique due to their compositional and structural heterogeneity, their nonsilicate matrix, and their unusual range of trace element compositions relative to available standards. The method employs an argon-fluoride excimer laser (λ = 193 nm), masked to produce a beam 600 μm wide by 20 μm across to average ablation sampling over a range of structural features. Coral sections are scanned at a constant rate beneath the laser to produce a continuous sampling of the coral surface. Sensitivity drift is controlled by careful preconditioning of the ICP-MS to carbonate material, and standardisation is carried out by bracketing each traverse down the coral sample with analyses of a CaSiO3 glass synthesised from coral powder. The method demonstrates excellent reproducibility of both the shape and magnitude of coralline trace element profiles, with typical precisions of between 1.0 and 3.7% based on analysis of the synthetic standard. Accuracy varies between 3.8% for B and 31% for U. Discrepancies are attributed to heterogeneities in the synthetic standard, and matrix differences between the silicate standard and carbonate sample. The method is demonstrated by analysis of a coral collected from Australia's Great Barrier Reef near a weather station recording in-situ sea-surface temperature (SST). The elements B, Mg, Sr, and U show seasonal compositional cycles, and tentative calibrations against SST have been derived. Using independent ICP-MS solution estimates of the coral composition to correct for standardisation uncertainties, the following calibrations have been derived: B/Ca (μmol/mol) = 1000 (±20) - 20.6 (±0.8) × SST; Mg/Ca (mmol/mol) = 0.0 (±0.3) + 0.16 (±0.01) × SST; Sr/Ca (mmol/mol) = 10.8 (±0.1) - 0.070 (±0.004) × SST; U/Ca (μmol/mol) = 2.24 (±0.07) - 0.046 (±0.003) × SST. These calibrations agree with the literature within experimental errors, except for Mg, which displays a 35% greater temperature dependence than reported previously. None of the elements in the coral appear to be sensitive to decreases in salinity associated with heavy rainfall in the summer of 1991/1992.
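
    For illustration, the Sr/Ca calibration quoted above can be inverted to estimate SST from a measured ratio; the sketch below uses a hypothetical measurement and omits uncertainty propagation.

    ```python
    # Sketch: inverting the Sr/Ca-SST calibration quoted above.
    def sst_from_sr_ca(sr_ca_mmol_per_mol: float) -> float:
        """Sr/Ca (mmol/mol) = 10.8 - 0.070 x SST  =>  solve for SST (deg C)."""
        return (10.8 - sr_ca_mmol_per_mol) / 0.070

    # Hypothetical measurement of 8.9 mmol/mol:
    print(f"Sr/Ca = 8.9 mmol/mol -> SST ~ {sst_from_sr_ca(8.9):.1f} deg C")
    ```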

  8. Rectilinear accelerometer possesses self- calibration feature

    NASA Technical Reports Server (NTRS)

    Henderson, R. B.

    1966-01-01

    Rectilinear accelerometer operates from an ac source with a phase-sensitive ac voltage output proportional to the applied accelerations. The unit includes an independent circuit for self-test which provides a sensor output simulating an acceleration applied to the sensitive axis of the accelerometer.

  9. A Functional and Structural Mongolian Scots Pine (Pinus sylvestris var. mongolica) Model Integrating Architecture, Biomass and Effects of Precipitation

    PubMed Central

    Wang, Feng; Letort, Véronique; Lu, Qi; Bai, Xuefeng; Guo, Yan; de Reffye, Philippe; Li, Baoguo

    2012-01-01

    Mongolian Scots pine (Pinus sylvestris var. mongolica) is one of the principal tree species in the network of Three-North Shelterbelt for windbreak and sand stabilisation in China. The functions of shelterbelts are highly correlated with the architecture and eco-physiological processes of individual trees. Thus, model-assisted analysis of canopy architecture and functional dynamics in Mongolian Scots pine is of value for better understanding its role and behaviour within shelterbelt ecosystems in these arid and semiarid regions. We present here a single-tree functional and structural model, derived from the GreenLab model, which is adapted for young Mongolian Scots pines by incorporation of plant biomass production, allocation, allometric rules and soil water dynamics. The model is calibrated and validated based on experimental measurements taken on Mongolian Scots pines in 2007 and 2006 under local meteorological conditions. Measurements include plant biomass, topology and geometry, as well as soil attributes and standard meteorological data. After calibration, the model allows reconstruction of three-dimensional (3D) canopy architecture and biomass dynamics for trees from one to six years old at the same site using meteorological data for the six years from 2001 to 2006. Sensitivity analysis indicates that rainfall variation has more influence on biomass increment than on architecture, and the internode and needle compartments and the aboveground biomass respond linearly to increases in precipitation. Sensitivity analysis also shows that the balance between internode and needle growth varies only slightly within the range of precipitation considered here. The model is expected to be used to investigate the growth of Mongolian Scots pines in other regions with different soils and climates. PMID:22927982

  10. FY17 Status Report on the Initial Development of a Constitutive Model for Grade 91 Steel

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Messner, M. C.; Phan, V. -T.; Sham, T. -L.

    Grade 91 is a candidate structural material for high temperature advanced reactor applications. Existing ASME Section III, Subsection HB, Subpart B simplified design rules based on elastic analysis are set up as conservative screening tools, with the intent to supplement these screening rules with full inelastic analysis when required. The Code provides general guidelines for suitable inelastic models but does not provide constitutive model implementations. This report describes the development of an inelastic constitutive model for Gr. 91 steel aimed at fulfilling the ASME Code requirements and being included in a new Section III Code appendix, HBB-Z. A large database of over 300 experiments on Gr. 91 was collected and converted to a standard XML form. Five families of Gr. 91 material models were identified in the literature. Of these five, two are potentially suitable for use in the ASME Code. These two models were implemented and evaluated against the experimental database. Both models have deficiencies, so the report develops a framework for developing and calibrating an improved model. This required creating a new modeling method for representing changes in material rate sensitivity across the full ASME allowable temperature range for Gr. 91 structural components: room temperature to 650°C. On top of this framework for rate sensitivity, the report describes calibrating a model for work hardening and softening in the material using genetic algorithm optimization. Future work will focus on improving this trial model by including the tension/compression asymmetry observed in experiments, which is necessary to capture material ratcheting under zero mean stress, and by improving the optimization and analysis framework.

  11. Application of FTLOADDS to Simulate Flow, Salinity, and Surface-Water Stage in the Southern Everglades, Florida

    USGS Publications Warehouse

    Wang, John D.; Swain, Eric D.; Wolfert, Melinda A.; Langevin, Christian D.; James, Dawn E.; Telis, Pamela A.

    2007-01-01

    The Comprehensive Everglades Restoration Plan requires numerical modeling to achieve a sufficient understanding of coastal freshwater flows, nutrient sources, and the evaluation of management alternatives to restore the ecosystem of southern Florida. Numerical models include a regional water-management model to represent restoration changes to the hydrology of southern Florida and a hydrodynamic model to represent the southern and western offshore waters. The coastal interface between these two systems, however, has complex surface-water/ground-water and freshwater/saltwater interactions and requires a specialized modeling effort. The Flow and Transport in a Linked Overland/Aquifer Density Dependent System (FTLOADDS) code was developed to represent connected surface- and ground-water systems with variable-density flow. The first use of FTLOADDS is the Southern Inland and Coastal Systems (SICS) application to the southeastern part of the Everglades/Florida Bay coastal region. The need to (1) expand the domain of the numerical modeling into most of Everglades National Park and the western coastal area, and (2) better represent the effect of water-delivery control structures, led to the application of the FTLOADDS code to the Tides and Inflows in the Mangroves of the Everglades (TIME) domain. This application allows the model to address a broader range of hydrologic issues and incorporate new code modifications. The surface-water hydrology is of primary interest to water managers, and is the main focus of this study. The coupling to ground water, however, was necessary to accurately represent leakage exchange between the surface water and ground water, which transfers substantial volumes of water and salt. Initial calibration and analysis of the TIME application produced simulated results that compare well statistically with field-measured values. A comparison of TIME simulation results to previous SICS results shows improved capabilities, particularly in the representation of coastal flows. This improvement most likely is due to a more stable numerical representation of the coastal creek outlets. Sensitivity analyses were performed by varying frictional resistance, leakage, barriers to flow, and topography. Changing frictional resistance values in inland areas was shown to improve water-level representation locally, but to have a negligible effect on area-wide values. These changes have only local effects and are not physically based (as are the unchanged values), and thus have limited validity. Sensitivity tests indicate that the overall accuracy of the simulation is diminished if leakage between surface water and ground water is not simulated. The inclusion of a major road as a complete barrier to surface-water flow influenced the local distribution and timing of flow; however, the changes in total flow and individual creekflows were negligible. The model land-surface altitude was lowered by 0.1 meter to determine the sensitivity to topographic variation. This topographic sensitivity test produced mixed results in matching field data. Overall, the representation of stage did not improve definitively. A final calibration utilized the results of the sensitivity analysis to refine the TIME application. 
To accomplish this calibration, the friction coefficient was reduced at the northern boundary inflow and increased in the southwestern corner of the model, the evapotranspiration function was varied, additional data were used for the ground-water head boundary along the southeast, and the frictional resistance of the primary coastal creek outlet was increased. The calibration improved the match between measured and simulated total flows to Florida Bay and coastal salinities. Agreement also was improved at most of the water-level sites throughout the model domain.

  12. Hydrology of the Coastal Lowlands aquifer system in parts of Alabama, Florida, Louisiana, and Mississippi

    USGS Publications Warehouse

    Martin, Angel; Whiteman, C.D.

    1999-01-01

    Existing data on water levels, water use, water quality, and aquifer properties were used to construct a multilayer digital model to simulate flow in the aquifer system. The report describes the geohydrologic framework of the aquifer system, and the development, calibration, and sensitivity analysis of the ground-water-flow model, but it is primarily focused on the results of the simulations that show the natural flow of ground water throughout the regional aquifer system and the changes from the natural flow caused by development of ground-water supplies.

  13. An electrooptic probe to determine internal electric fields in a piezoelectric transformer.

    PubMed

    Norgard, Peter; Kovaleski, Scott

    2012-02-01

    A technique using the electrooptic effect to determine the output voltage of an optically clear LiNbO3 piezoelectric transformer was developed and explored. A brief mathematical description of the solution is provided, as well as experimental data demonstrating a linear response under ac resonant operating conditions. A technique to calibrate the diagnostic was developed and is described. Finally, a sensitivity analysis of the electrooptic response to variations in angular alignment between the LiNbO3 transformer and the laser probe is discussed.

  14. GC-MS quantitation of fragrance compounds suspected to cause skin reactions. 1.

    PubMed

    Chaintreau, Alain; Joulain, Daniel; Marin, Christophe; Schmidt, Claus-Oliver; Vey, Matthias

    2003-10-22

    Recent changes in European legislation require the monitoring of 24 volatile compounds in perfumes, as they might elicit skin sensitization. This paper reports a GC-MS quantitation procedure for their determination in fragrance concentrates. GC and MS conditions were optimized for routine use: analysis within 30 min, solvent and internal standard selection, and stock solution stability. Calibration curves were linear in the range of 2-100 mg/L with coefficients of determination in excess of 0.99. The method was tested using real perfumes spiked with known amounts of reference compounds.
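
    A minimal sketch of such a calibration check, with invented data points: fit a line to internal-standard area ratios over the 2-100 mg/L range and verify that the coefficient of determination exceeds the 0.99 criterion.

    ```python
    # Sketch: linear internal-standard calibration with an R^2 check.
    import numpy as np

    conc = np.array([2, 10, 25, 50, 75, 100], dtype=float)        # mg/L
    area_ratio = np.array([0.041, 0.20, 0.51, 1.01, 1.49, 2.02])  # analyte/IS

    slope, intercept = np.polyfit(conc, area_ratio, 1)
    pred = slope * conc + intercept
    r2 = 1 - np.sum((area_ratio - pred) ** 2) / np.sum(
        (area_ratio - area_ratio.mean()) ** 2)
    print(f"slope={slope:.4f}, intercept={intercept:.4f}, R^2={r2:.4f}")
    assert r2 > 0.99, "calibration curve fails the linearity criterion"
    ```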

  15. Towards simplification of hydrologic modeling: Identification of dominant processes

    USGS Publications Warehouse

    Markstrom, Steven; Hay, Lauren E.; Clark, Martyn P.

    2016-01-01

    The Precipitation–Runoff Modeling System (PRMS), a distributed-parameter hydrologic model, has been applied to the conterminous US (CONUS). Parameter sensitivity analysis was used to identify: (1) the sensitive input parameters and (2) particular model output variables that could be associated with the dominant hydrologic process(es). Sensitivity values of 35 PRMS calibration parameters were computed using the Fourier amplitude sensitivity test procedure on 110 000 independent hydrologically based spatial modeling units covering the CONUS and then summarized by process (snowmelt, surface runoff, infiltration, soil moisture, evapotranspiration, interflow, baseflow, and runoff) and by model performance statistic (mean, coefficient of variation, and autoregressive lag 1). Identified parameters and processes provide insight into model performance at the location of each unit and allow the modeler to identify the most dominant process on the basis of which processes are associated with the most sensitive parameters. The results of this study indicate that: (1) the choice of performance statistic and output variables has a strong influence on parameter sensitivity, (2) the apparent model complexity to the modeler can be reduced by focusing on those processes that are associated with sensitive parameters and disregarding those that are not, (3) different processes require different numbers of parameters for simulation, and (4) some sensitive parameters influence only one hydrologic process, while others may influence many.
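
    A sketch of a Fourier amplitude sensitivity test, using the SALib package as an assumed stand-in for the study's tooling; the three toy parameters, their bounds, and the model function are invented for illustration.

    ```python
    # Sketch: FAST sensitivity analysis via SALib on a toy 3-parameter model.
    import numpy as np
    from SALib.sample.fast_sampler import sample
    from SALib.analyze import fast

    problem = {
        "num_vars": 3,
        "names": ["soil_moist_max", "snow_melt_coef", "gw_routing_coef"],
        "bounds": [[1.0, 10.0], [0.01, 0.1], [0.001, 0.5]],
    }
    X = sample(problem, 1000)                            # 3000 model runs
    Y = X[:, 0] ** 0.5 + 5.0 * X[:, 1] + 0.1 * X[:, 2]   # toy model output

    Si = fast.analyze(problem, Y)
    for name, s1 in zip(problem["names"], Si["S1"]):
        print(f"{name}: first-order sensitivity {s1:.2f}")
    ```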

  16. Summary and Outlook

    NASA Astrophysics Data System (ADS)

    Wuest, Martin; Robinson, David W.; Decoste, Dennis

    Calibration is defined as a set of operations that establish, under specified conditions, the relationship between the values of quantities indicated by a measuring instrument or measuring system and the corresponding values realized by standards. Calibration of an instrument means determining by how much the instrument reading is in error by checking it against a measurement standard of known error. Space physics particle instrumentation needs to be calibrated on the ground and in flight to ensure that the data can be properly interpreted. On the ground, calibration is performed by exposing the instrument to a well-characterized incident particle beam. Not only should the nominal range of parameters the instrument is designed to measure be calibrated; the instrument should also be exposed to out-of-band conditions such as higher energies, angles outside of the nominal field of view, and ultraviolet radiation to test susceptibility. There are several challenges to laboratory calibration on the ground. The beam must be well characterized in energy, angle, mass and position. The particle flux must be uniform over the whole aperture area of the instrument to be calibrated. The beam must be very stable in time and space. One difficulty is that, in order to measure the incident particle flux, the beam monitor is placed upstream in front of the instrument, thereby blocking the incident beam and interrupting the beam detection by the device under test. A beam monitor placed outside of the field of view of the instrument to be calibrated is often in a region at the fringes of the beam where the beam is not very stable. This effectively prevents measuring the same beam with a trusted reference detector and the instrument under test at the same time. Further, highly sensitive instruments are calibrated at flux levels too low to be detected with stable Faraday cup detectors. Present-day windowless electron multiplier detectors are able to measure the low flux levels but are sensitive to degradation as a function of contamination and the amount of extracted charge. Windowless electron multipliers are therefore not very stable reference detectors. This makes it difficult to obtain a reliable absolute calibration traceable to a national measurement institute. Calibration is still a time-consuming process. It involves testing the instrument at component, subsystem and integrated level. It is important that the instrument is not only operated in a special calibration configuration to save time, but also in its full flight configuration, exercising the full path of the data through data compression and telemetry. Very seldom is there enough time available to calibrate all the desired points in parameter space. Usually only a subset can be calibrated for schedule and economic reasons. The number of calibration points is often further reduced when the available calibration time is cut due to development schedule slip and a fixed launch date. This increases the uncertainties, as more parameters have to be interpolated or extrapolated. Calibration data should be evaluated, preferably in near-real time, to prevent losing valuable calibration time if something in the instrument or facility is not working properly. Computer simulation models should be used to obtain a thorough understanding of the actual flight instrument. In flight, the instrument performance degrades due to contamination (outgassing), environmental effects (atomic oxygen, radiation) or aging.
One of the most sensitive parts of today's instruments is the detector. Microchannel plate detectors degrade as a function of the extracted charge. Solid-state detectors experience radiation damage, which increases their noise and raises the lower energy detection threshold. The goal of in-flight calibration is to determine this instrument degradation. Calibration is then performed by comparing measurements taken with different bias voltage or discriminator threshold settings. If possible, the instrument data are compared with other sensors covering the same, or at least part of the same, measurand on the same or on a different spacecraft. In-flight calibration is not easy, as no absolute calibration standard for particles exists in space, and measuring the same physical quantity with two different spacecraft under the same environmental conditions is very challenging.

  17. Nanometric Integrated Temperature and Thermal Sensors in CMOS-SOI Technology

    PubMed Central

    Malits, Maria; Nemirovsky, Yael

    2017-01-01

    This paper reviews and compares the thermal and noise characterization of CMOS (complementary metal-oxide-semiconductor) SOI (silicon-on-insulator) transistors and lateral diodes used as temperature and thermal sensors. DC analysis of the measured sensors and the experimental results over a broad (300 K up to 550 K) temperature range are presented. It is shown that both sensors require small chip area, have low power consumption, and exhibit linearity and high sensitivity over the entire temperature range. However, the diode's sensitivity to temperature variations in CMOS-SOI technology is highly dependent on the diode's perimeter; hence, a careful calibration for each fabrication process is needed. In contrast, the short thermal time constant of the electrons in the transistor's channel enables measuring the instantaneous heating of the channel and determining the local true temperature of the transistor. This allows accurate "on-line" temperature sensing while no additional calibration is needed. In addition, the noise measurements indicate that the diode's small area and perimeter cause high 1/f noise at all measured bias currents. This is a severe drawback for accuracy when the diode is used as a thermal sensor; hence, CMOS-SOI transistors are a better choice for temperature sensing. PMID:28758932

  18. A New Load Residual Threshold Definition for the Evaluation of Wind Tunnel Strain-Gage Balance Data

    NASA Technical Reports Server (NTRS)

    Ulbrich, N.; Volden, T.

    2016-01-01

    A new definition of a threshold for the detection of load residual outliers in wind tunnel strain-gage balance data was developed. The new threshold is defined as the product of the inverse of the absolute value of the primary gage sensitivity and an empirical limit of the electrical outputs of a strain gage. The empirical output limit is 2.5 microV/V for balance calibration or check load residuals. A reduced limit of 0.5 microV/V is recommended for the evaluation of differences between repeat load points because, by design, the calculation of these differences removes errors in the residuals that are associated with the regression analysis of the data itself. The definition of the new threshold and different methods for the determination of the primary gage sensitivity are discussed. In addition, calibration data of a six-component force balance and a five-component semi-span balance are used to illustrate the application of the proposed new threshold definition to different types of strain-gage balances. During the discussion of the force balance example it is also explained how the estimated maximum expected output of a balance gage can be used to better understand results of the application of the new threshold definition.
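
    The threshold arithmetic reduces to a one-line formula; in the sketch below the gage sensitivity value and the load units are hypothetical.

    ```python
    # Sketch: residual threshold = output limit / |primary gage sensitivity|.
    def residual_threshold(primary_gage_sensitivity: float,
                           output_limit_uVV: float = 2.5) -> float:
        """Convert an electrical output limit (microV/V) to load units."""
        return output_limit_uVV / abs(primary_gage_sensitivity)

    sens = 2.0   # microV/V per lbf, hypothetical normal-force gage
    print(f"calibration residual threshold: {residual_threshold(sens):.2f} lbf")
    print(f"repeat-point threshold: {residual_threshold(sens, 0.5):.2f} lbf")
    ```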

  19. GOME Total Ozone and Calibration Error Derived Using Version 8 TOMS Algorithm

    NASA Technical Reports Server (NTRS)

    Gleason, J.; Wellemeyer, C.; Qin, W.; Ahn, C.; Gopalan, A.; Bhartia, P.

    2003-01-01

    The Global Ozone Monitoring Experiment (GOME) is a hyper-spectral satellite instrument measuring the ultraviolet backscatter at relatively high spectral resolution. GOME radiances have been slit-averaged to emulate measurements of the Total Ozone Mapping Spectrometer (TOMS) made at discrete wavelengths and processed using the new TOMS Version 8 ozone algorithm. Compared to Differential Optical Absorption Spectroscopy (DOAS) techniques based on local structure in the Huggins bands, the TOMS algorithm uses differential absorption between a pair of wavelengths including the local structure as well as the background continuum. This makes the TOMS algorithm more sensitive to ozone, but it also makes the algorithm more sensitive to instrument calibration errors. While calibration adjustments are not needed for fitting techniques like the DOAS employed in GOME algorithms, some adjustment is necessary when applying the TOMS algorithm to GOME. Using spectral discrimination at near-ultraviolet wavelength channels unabsorbed by ozone, the GOME wavelength-dependent calibration drift is estimated and then checked using pair justification. In addition, the day-one calibration offset is estimated based on the residuals of the Version 8 TOMS algorithm. The estimated drift in the 2b detector of GOME is small through the first four years and then increases rapidly to +5% in normalized radiance at 331 nm relative to 385 nm by mid-2000. The 1b detector appears to be quite well behaved throughout this time period.

  20. Research on the method of establishing the total radiation meter calibration device

    NASA Astrophysics Data System (ADS)

    Gao, Jianqiang; Xia, Ming; Xia, Junwen; Zhang, Dong

    2015-10-01

    A pyranometer is an instrument used to measure solar radiation; depending on how it is installed, it can measure total solar radiation, reflected radiation, or, with the help of a shading device, scattered radiation. A pyranometer operates on the thermoelectric effect: the sensing element is a wire-wound, plated multi-junction thermopile whose surface carries a black coating of high absorptance. The hot junctions lie in the sensing surface while the cold junctions are located in the body, and together they produce a thermoelectric potential. In the linear range, the output signal is proportional to the solar irradiance. As a nationally authorized legal metrology organization, the national meteorological station is responsible for disseminating solar and terrestrial radiation values throughout the national meteorological industry. Using the comparison method with an indoor solar simulator, the standard pyranometer and the pyranometer under test alternately measure the irradiance at the same location, and the radiation sensitivity of the test pyranometer is calculated from the irradiation sensitivity of the standard pyranometer. This paper mainly describes the design and calibration method of the indoor pyranometer calibration device. The uncertainty of the calibration result is also evaluated.
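
    A minimal sketch of the substitution arithmetic (function names and readings assumed): the reference reading fixes the simulator irradiance, from which the test instrument's sensitivity follows.

    ```python
    # Sketch: comparison calibration of a pyranometer against a standard.
    def test_sensitivity(v_test_uV: float, v_ref_uV: float,
                         s_ref_uV_per_Wm2: float) -> float:
        """Same irradiance E = V_ref / S_ref, so S_test = V_test / E."""
        irradiance = v_ref_uV / s_ref_uV_per_Wm2   # W m^-2 under the simulator
        return v_test_uV / irradiance               # microV per W m^-2

    s = test_sensitivity(v_test_uV=8400.0, v_ref_uV=8000.0,
                         s_ref_uV_per_Wm2=8.0)
    print(f"test pyranometer sensitivity: {s:.2f} microV/(W m^-2)")
    ```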

  1. Linear regression analysis and its application to multivariate chromatographic calibration for the quantitative analysis of two-component mixtures.

    PubMed

    Dinç, Erdal; Ozdemir, Abdil

    2005-01-01

    A multivariate chromatographic calibration technique was developed for the quantitative analysis of binary mixtures of enalapril maleate (EA) and hydrochlorothiazide (HCT) in tablets in the presence of losartan potassium (LST). The mathematical algorithm of the multivariate chromatographic calibration technique is based on linear regression equations constructed from the relationship between concentration and peak area at a set of five wavelengths. The algorithm, which has simple mathematical content, is briefly described. This approach is a powerful mathematical tool for optimum chromatographic multivariate calibration and for eliminating fluctuations arising from instrumental and experimental conditions. The multivariate chromatographic calibration involves reducing the multivariate linear regression functions to a univariate data set. The validation of the model was carried out by analyzing various synthetic binary mixtures and using the standard addition technique. The developed calibration technique was applied to the analysis of real pharmaceutical tablets containing EA and HCT. The obtained results were compared with those obtained by a classical HPLC method. It was observed that the proposed multivariate chromatographic calibration gives better results than classical HPLC.

  2. On Learning Cluster Coefficient of Private Networks

    PubMed Central

    Wang, Yue; Wu, Xintao; Zhu, Jun; Xiang, Yang

    2013-01-01

    Enabling accurate analysis of social network data while preserving differential privacy has been challenging since graph features such as clustering coefficient or modularity often have high sensitivity, which is different from traditional aggregate functions (e.g., count and sum) on tabular data. In this paper, we treat a graph statistic as a function f and develop a divide and conquer approach to enforce differential privacy. The basic procedure of this approach is to first decompose the target computation f into several less complex unit computations f1, …, fm connected by basic mathematical operations (e.g., addition, subtraction, multiplication, division), then perturb the output of each fi with Laplace noise derived from its own sensitivity value and the distributed privacy threshold εi, and finally combine those perturbed fi as the perturbed output of computation f. We examine how various operations affect the accuracy of complex computations. When unit computations have large global sensitivity values, we enforce the differential privacy by calibrating noise based on the smooth sensitivity, rather than the global sensitivity. By doing this, we achieve the strict differential privacy guarantee with smaller magnitude noise. We illustrate our approach by using clustering coefficient, which is a popular statistic used in social network analysis. Empirical evaluations on five real social networks and various synthetic graphs generated from three random graph models show that the developed divide and conquer approach outperforms the direct approach. PMID:24429843
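
    A minimal sketch of the per-unit perturbation step, assuming a simple count statistic with sensitivity 1; the budget share and values are illustrative.

    ```python
    # Sketch: Laplace noise calibrated to a unit computation's sensitivity
    # and its share epsilon_i of the privacy budget.
    import numpy as np

    def laplace_perturb(value: float, sensitivity: float, epsilon_i: float,
                        rng: np.random.Generator) -> float:
        """Add Laplace(0, sensitivity / epsilon_i) noise to one unit output."""
        return value + rng.laplace(loc=0.0, scale=sensitivity / epsilon_i)

    rng = np.random.default_rng(3)
    true_count = 1520.0   # e.g., an edge count with global sensitivity 1
    print(f"perturbed output: {laplace_perturb(true_count, 1.0, 0.5, rng):.1f}")
    ```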

  3. Adaptive Prior Variance Calibration in the Bayesian Continual Reassessment Method

    PubMed Central

    Zhang, Jin; Braun, Thomas M.; Taylor, Jeremy M.G.

    2012-01-01

    Use of the Continual Reassessment Method (CRM) and other model-based approaches to design in Phase I clinical trials has increased due to the ability of the CRM to identify the maximum tolerated dose (MTD) better than the 3+3 method. However, the CRM can be sensitive to the variance selected for the prior distribution of the model parameter, especially when a small number of patients are enrolled. While methods have emerged to adaptively select skeletons and to calibrate the prior variance only at the beginning of a trial, there has not been any approach developed to adaptively calibrate the prior variance throughout a trial. We propose three systematic approaches to adaptively calibrate the prior variance during a trial and compare them via simulation to methods proposed to calibrate the variance at the beginning of a trial. PMID:22987660

  4. Germanium resistance thermometer calibration at superfluid helium temperatures

    NASA Technical Reports Server (NTRS)

    Mason, F. C.

    1985-01-01

    The rapid increase in resistance of high-purity semiconducting germanium with decreasing temperature in the superfluid helium range of temperatures makes this material highly adaptable as a very sensitive thermometer. Also, a germanium thermometer exhibits a highly reproducible resistance-versus-temperature characteristic curve upon cycling between liquid helium temperatures and room temperature. These two factors combine to make germanium thermometers ideally suited for measuring temperatures in many cryogenic studies at superfluid helium temperatures. One disadvantage, however, is the relatively high cost of calibrated germanium thermometers. Space helium cryogenic systems often require many such thermometers, leading to a high total cost for calibrated units. The construction of a calibration cryostat and probe that allow six germanium thermometers to be calibrated at one time, effecting substantial savings in the purchase of thermometers, is considered.

  5. Redundant interferometric calibration as a complex optimization problem

    NASA Astrophysics Data System (ADS)

    Grobler, T. L.; Bernardi, G.; Kenyon, J. S.; Parsons, A. R.; Smirnov, O. M.

    2018-05-01

    Observations of the redshifted 21 cm line from the epoch of reionization have recently motivated the construction of low-frequency radio arrays with highly redundant configurations. These configurations provide an alternative calibration strategy - `redundant calibration' - and boost sensitivity on specific spatial scales. In this paper, we formulate calibration of redundant interferometric arrays as a complex optimization problem. We solve this optimization problem via the Levenberg-Marquardt algorithm. This calibration approach is more robust to initial conditions than current algorithms and, by leveraging an approximate matrix inversion, allows for further optimization and an efficient implementation (`redundant STEFCAL'). We also investigated using the preconditioned conjugate gradient method as an alternative to the approximate matrix inverse, but found that its computational performance is not competitive with respect to `redundant STEFCAL'. The efficient implementation of this new algorithm is made publicly available.

  6. Ground calibrations of the X-ray detector system of the Solar Intensity X-ray Spectrometer (SIXS) on board BepiColombo

    NASA Astrophysics Data System (ADS)

    Huovelin, Juhani; Lehtolainen, Arto; Genzer, Maria; Korpela, Seppo; Esko, Eero; Andersson, Hans

    2014-05-01

    SIXS includes X-ray and particle detector systems for the BepiColombo Mercury Planetary Orbiter (MPO). Its task is to monitor the direct solar X-rays and energetic particles in a wide field of view in the energy range of 1-20 keV (X-rays), 0.1-3 MeV (electrons) and 1-30 MeV (protons). The main purpose of these measurements is to provide quantitative information on the high energy radiation incident on Mercury's surface which causes the X-ray glow of the planet measured by the MIXS instrument. The X-ray and particle measurements of SIXS are also useful for investigations of the solar corona and the magnetosphere of Mercury. The ground calibrations of the X-ray detectors of the SIXS flight model were carried out in the X-ray laboratory of the University of Helsinki during May and June 2012. The aim of the ground calibrations was to characterize the performance of the SIXS instrument's three High-Purity Silicon PIN X-ray detectors and verify that they fulfil their scientific performance requirements. The calibrations included the determination of the beginning-of-life energy resolution at different operational temperatures, determination of the detector's sensitivity within the field of view as a function of the off-axis and roll angles, pile-up tests for determining the speed of the read-out electronics, measurements of the low energy threshold of the energy scale, a cross-calibration with the SMART-1 XSM flight spare detector, and the determination of the temperature dependence of the energy scale. An X-ray tube and the detectors' internal Ti-coated 55Fe calibration sources were used as primary X-ray sources. In addition, two external fluorescence sources were used as secondary X-ray sources in the determination of the energy resolutions and in the comparison calibration with the SMART-1 XSM. The calibration results show that the detectors fulfil all of the scientific performance requirements. The ground calibration data combined with the instrument house-keeping data, spacecraft attitude data in relation to the Sun, and the in-flight calibration spectra measured during the operations contain all required information for the final analysis of the solar X-ray data.

  7. SeaWiFS technical report series. Volume 23: SeaWiFS prelaunch radiometric calibration and spectral characterization

    NASA Technical Reports Server (NTRS)

    Barnes, Robert A.; Holmes, Alan W.; Barnes, William L.; Esaias, Wayne E.; Mcclain, Charles R.; Svitek, Tomas; Hooker, Stanford B.; Firestone, Elaine R.; Acker, James G.

    1994-01-01

    Based on the operating characteristics of the Sea-viewing Wide Field-of-view Sensor (SeaWiFS), calibration equations have been developed that allow conversion of the counts from the radiometer into Earth-exiting radiances. These radiances are the geophysical properties the instrument has been designed to measure. SeaWiFS uses bilinear gains to allow high sensitivity measurements of ocean-leaving radiances and low sensitivity measurements of radiances from clouds, which are much brighter than the ocean. The calculation of these bilinear gains is central to the calibration equations. Several other factors within these equations are also included. Among these are the spectral responses of the eight SeaWiFS bands. A band's spectral response includes the ability of the band to isolate a portion of the electromagnetic spectrum and the amount of light that lies outside of that region. The latter is termed out-of-band response. In the calibration procedure, some of the counts from the instrument are produced by radiance in the out-of-band region. The number of those counts for each band is a function of the spectral shape of the source. For the SeaWiFS calibration equations, the out-of-band responses are converted from those for the laboratory source into those for a source with the spectral shape of solar flux. The solar flux, unlike the laboratory calibration, approximates the spectral shape of the Earth-exiting radiance from the oceans. This conversion modifies the results from the laboratory radiometric calibration by 1-4 percent, depending on the band. These and other factors in the SeaWiFS calibration equations are presented here, both for users of the SeaWiFS data set and for researchers making ground-based radiance measurements in support of SeaWiFS.

  8. Data-driven sensitivity inference for Thomson scattering electron density measurement systems.

    PubMed

    Fujii, Keisuke; Yamada, Ichihiro; Hasuo, Masahiro

    2017-01-01

    We developed a method to infer the calibration parameters of multichannel measurement systems, such as channel variations of sensitivity and noise amplitude, from experimental data. We regard such uncertainties of the calibration parameters as dependent noise. The statistical properties of the dependent noise and those of the latent functions were modeled and implemented in the Gaussian process kernel. Based on their statistical difference, both parameters were inferred from the data. We applied this method to the electron density measurement system by Thomson scattering for the Large Helical Device plasma, which is equipped with 141 spatial channels. Based on the 210 sets of experimental data, we evaluated the correction factor of the sensitivity and noise amplitude for each channel. The correction factor varies by ≈10%, and the random noise amplitude is ≈2%; i.e., the measurement accuracy increases by a factor of 5 after this sensitivity correction. An improvement in the certainty of spatial-derivative inference was also demonstrated.

  9. Calibration of the B/Ca proxy in the planktic foraminifer Orbulina universa to Paleocene seawater conditions

    NASA Astrophysics Data System (ADS)

    Haynes, Laura L.; Hönisch, Bärbel; Dyez, Kelsey A.; Holland, Kate; Rosenthal, Yair; Fish, Carina R.; Subhas, Adam V.; Rae, James W. B.

    2017-06-01

    The B/Ca ratio of planktic foraminiferal calcite, a proxy for the surface ocean carbonate system, displays large negative excursions during the Paleocene-Eocene Thermal Maximum (PETM, 55.9 Ma), consistent with rapid ocean acidification at that time. However, the B/Ca excursion measured at the PETM exceeds a magnitude that modern pH calibrations can explain. Numerous other controls on the proxy have been suggested, including foraminiferal growth rate and the total concentration of dissolved inorganic carbon (DIC). Here we present new calibrations for B/Ca versus the combined effects of pH and DIC in the symbiont-bearing planktic foraminifer Orbulina universa, grown in culture solutions with simulated Paleocene seawater elemental composition (high [Ca], low [Mg], and low total boron concentration ([B]T). We also investigate the isolated effects of low seawater [B]T, high [Ca], reduced symbiont photosynthetic activity, and average shell growth rate on O. universa B/Ca in order to further understand the proxy systematics and to determine other possible influences on the PETM records. We find that average shell growth rate does not appear to determine B/Ca in high calcite saturation experiments. In addition, our "Paleocene" calibration shows higher sensitivity than the modern calibration at low [B(OH)4-]/DIC. Given a large DIC pulse at the PETM, this amplification of the B/Ca response can more fully explain the PETM B/Ca excursion. However, further calibrations with other foraminifer species are needed to determine the range of foraminifer species-specific proxy sensitivities under these conditions for quantitative reconstruction of large carbon cycle perturbations.

  10. Legato: Personal Computer Software for Analyzing Pressure-Sensitive Paint Data

    NASA Technical Reports Server (NTRS)

    Schairer, Edward T.

    2001-01-01

    'Legato' is personal computer software for analyzing radiometric pressure-sensitive paint (PSP) data. The software is written in the C programming language and executes under Windows 95/98/NT operating systems. It includes all operations normally required to convert pressure-paint image intensities to normalized pressure distributions mapped to physical coordinates of the test article. The program can analyze data from both single- and bi-luminophore paints and provides for both in situ and a priori paint calibration. In addition, there are functions for determining paint calibration coefficients from calibration-chamber data. The software is designed as a self-contained, interactive research tool that requires as input only the bare minimum of information needed to accomplish each function, e.g., images, model geometry, and paint calibration coefficients (for a priori calibration) or pressure-tap data (for in situ calibration). The program includes functions that can be used to generate needed model geometry files for simple model geometries (e.g., airfoils, trapezoidal wings, rotor blades) based on the model planform and airfoil section. All data files except images are in ASCII format and thus are easily created, read, and edited. The program does not use database files. This simplifies setup but makes the program inappropriate for analyzing massive amounts of data from production wind tunnels. Program output consists of Cartesian plots, false-colored real and virtual images, pressure distributions mapped to the surface of the model, assorted ASCII data files, and a text file of tabulated results. Graphical output is displayed on the computer screen and can be saved as publication-quality (PostScript) files.

  11. Technique for Radiometer and Antenna Array Calibration - TRAAC

    NASA Technical Reports Server (NTRS)

    Meyer, Paul; Sims, William; Varnavas, Kosta; McCracken, Jeff; Srinivasan, Karthik; Limaye, Ashutosh; Laymon, Charles; Richeson, James

    2012-01-01

    Highly sensitive receivers are used to detect minute amounts of emitted electromagnetic energy, and calibration of these receivers is vital to the accuracy of the measurements. Traditional calibration techniques depend on a reference internal to the receiver and can therefore correct only those measurement errors introduced by the receiver itself. The disadvantage of these existing methods is that they cannot account for errors introduced by devices, such as antennas, used to capture the electromagnetic radiation. This severely limits the types of antennas that can be used to make measurements with a high degree of accuracy. Complex antenna systems, such as electronically steerable antennas (also known as phased arrays), while offering potentially significant advantages, have lacked a reliable and accurate calibration technique. The proximity of antenna elements in an array results in interaction between the electromagnetic fields radiated (or received) by the individual elements, a phenomenon called mutual coupling. The new calibration method uses a known noise source as a calibration load to determine the instantaneous characteristics of the antenna: the noise is emitted from one element of the array and, through mutual coupling, received by all the other elements, where it serves as a calibration standard to monitor the stability of the antenna electronics.
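
    To illustrate the idea (a toy sketch, not the flight implementation): if the noise injected by one element is received by the others through mutual coupling, the complex ratio of a later measurement to a baseline measurement isolates receiver gain and phase drift, which can then be corrected per element.

    ```python
    import numpy as np

    # Baseline: complex response of 8 receive elements to the injected noise.
    rng = np.random.default_rng(1)
    baseline = rng.normal(1.0, 0.05, 8) * np.exp(1j * rng.normal(0.0, 0.1, 8))

    # Later measurement of the same injected noise, after receiver drift
    # (here a common 3% gain and 0.05 rad phase drift, for illustration).
    drift = 1.03 * np.exp(1j * 0.05)
    current = baseline * drift

    # Per-element complex correction that restores the baseline response.
    correction = baseline / current
    print(np.round(correction, 4))  # ~= 1/drift for every element
    ```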

  12. Experimental sensitivity analysis of subsoil-slab behaviour regarding degree of fibre-concrete slab reinforcement

    NASA Astrophysics Data System (ADS)

    Hrubesova, E.; Lahuta, H.; Mohyla, M.; Quang, T. B.; Phi, N. D.

    2018-04-01

    The paper presents a sensitivity analysis of the behaviour of the subsoil – foundation system with respect to varying properties of a fibre-concrete slab, which result in different relative stiffnesses of the cooperating system. The slab and its properties strongly influence how external loads are transferred, but the character of the subsoil cannot be neglected either, because it determines the stress-strain behaviour of the whole system and consequently the bearing capacity of the structure. The sensitivity analysis was carried out on experimental results that include both the stress values in the soil below the foundation structure and the settlements of structures containing different quantities of fibres. Flat GEOKON dynamometers were used for the stress measurements below the observed slab, the strains inside the slab were registered by tensometers, and the settlements were monitored geodetically. The paper focuses on the comparison of soil stresses below the slab for different quantities of fibres in the structure. The results obtained from the experimental stand can contribute to more objective knowledge of soil – slab interaction, to the evaluation of the real bearing capacity of the slab, to the calibration of corresponding numerical models, to the optimization of the quantity of fibres in the slab and, finally, to safer and more economical slab design.

  13. Effect of different transport observations on inverse modeling results: case study of a long-term groundwater tracer test monitored at high resolution

    NASA Astrophysics Data System (ADS)

    Rasa, Ehsan; Foglia, Laura; Mackay, Douglas M.; Scow, Kate M.

    2013-11-01

    Conservative tracer experiments can provide information useful for characterizing various subsurface transport properties. This study examines the effectiveness of three different types of transport observations for sensitivity analysis and parameter estimation of a three-dimensional site-specific groundwater flow and transport model: conservative tracer breakthrough curves (BTCs), first temporal moments of BTCs (m1), and tracer cumulative mass discharge (Md) through control planes, combined with hydraulic head observations (h). High-resolution data obtained from a 410-day controlled field experiment at Vandenberg Air Force Base, California (USA), have been used. In this experiment, bromide was injected to create two adjacent plumes monitored at six different transects (perpendicular to groundwater flow) with a total of 162 monitoring wells. A total of 133 different observations of transient hydraulic head, 1,158 of BTC concentration, 23 of first moment, and 36 of mass discharge were used for sensitivity analysis and parameter estimation of nine flow and transport parameters. The importance of each group of transport observations in estimating these parameters was evaluated using sensitivity analysis, and five out of nine parameters were calibrated against these data. Results showed the advantages of using temporal moments of conservative tracer BTCs and mass discharge as observations for inverse modeling.
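
    For readers unfamiliar with these observation types: the normalized first temporal moment of a BTC is m1 = ∫t·C(t)dt / ∫C(t)dt (the mean arrival time), and mass discharge integrates concentration times water flux over time. A minimal sketch with synthetic data (illustrative values, not the Vandenberg data):

    ```python
    import numpy as np

    t = np.linspace(0.0, 410.0, 200)                  # days
    c = np.exp(-0.5 * ((t - 120.0) / 30.0) ** 2)      # mg/L, synthetic BTC
    q = 0.8                                           # L/day, water flux at the well

    m0 = np.trapz(c, t)                               # zeroth moment
    m1 = np.trapz(t * c, t) / m0                      # mean arrival time, days
    md = np.trapz(q * c, t)                           # cumulative mass discharge, mg
    print(m1, md)
    ```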

  14. Water quality modeling for urban reach of Yamuna river, India (1999-2009), using QUAL2Kw

    NASA Astrophysics Data System (ADS)

    Sharma, Deepshikha; Kansal, Arun; Pelletier, Greg

    2017-06-01

    The aim of the study was to characterize and understand the water quality of the river Yamuna in Delhi (India) as the basis for an efficient restoration plan. A combination of monitored data collection, mathematical modeling, and sensitivity and uncertainty analysis was carried out using QUAL2Kw, a river water quality model. The model was applied to simulate DO, BOD, total coliform (TC), and total nitrogen (TN) at four monitoring stations, namely Palla, Old Delhi Railway Bridge, Nizamuddin, and Okhla, for 10 years (October 1999-June 2009), excluding the monsoon seasons (July-September). The study period was divided into two parts: monthly average data from October 1999-June 2004 (45 months) were used to calibrate the model, and monthly average data from October 2005-June 2009 (45 months) were used to validate it. The R2 values for CBODf and TN lie within the ranges 0.53-0.75 and 0.68-0.83, respectively, showing that the model gives satisfactory results in terms of R2 for CBODf, TN, and TC. Sensitivity analysis showed that the DO, CBODf, TN, and TC predictions are highly sensitive to headwater flow and to point-source flow and quality. Uncertainty analysis using Monte Carlo simulation showed that the input data were simulated in accordance with prevailing river conditions.
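
    The R2 reported here is the square of the Pearson correlation between observed and simulated series; a minimal sketch (the DO values below are made up for illustration, not the Yamuna data):

    ```python
    import numpy as np

    def r_squared(obs, sim):
        """Square of the Pearson correlation between observed and simulated series."""
        return np.corrcoef(obs, sim)[0, 1] ** 2

    obs = np.array([6.2, 5.8, 4.9, 5.5, 6.0])  # observed DO, mg/L (illustrative)
    sim = np.array([6.0, 5.9, 5.1, 5.3, 6.2])  # simulated DO, mg/L
    print(r_squared(obs, sim))
    ```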

  15. Application of Partial Least Square (PLS) Analysis on Fluorescence Data of 8-Anilinonaphthalene-1-Sulfonic Acid, a Polarity Dye, for Monitoring Water Adulteration in Ethanol Fuel.

    PubMed

    Kumar, Keshav; Mishra, Ashok Kumar

    2015-07-01

    The fluorescence characteristics of 8-anilinonaphthalene-1-sulfonic acid (ANS) in ethanol-water mixtures, in combination with partial least squares (PLS) analysis, were used to develop a simple and sensitive analytical procedure for monitoring the adulteration of ethanol by water. The proposed procedure was found to be capable of detecting even small levels of adulteration of ethanol by water. The robustness of the procedure is evident from statistical parameters such as the square of the correlation coefficient (R(2)), the root mean square error of calibration (RMSEC), and the root mean square error of prediction (RMSEP), which were found to be well within acceptable limits.
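
    A minimal sketch of the PLS step, using scikit-learn on synthetic stand-in spectra (the spectral shapes, wavelength grid, and 0-10 % adulteration range below are assumptions for illustration, not the paper's data):

    ```python
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.metrics import mean_squared_error

    # Rows of X are emission spectra; y is the water fraction in the mixture.
    rng = np.random.default_rng(0)
    y = np.linspace(0.0, 0.10, 30)
    profile = rng.normal(size=50)                    # one spectral response shape
    X = np.outer(y, profile) + rng.normal(0.0, 0.001, (30, 50))

    pls = PLSRegression(n_components=2)
    pls.fit(X[::2], y[::2])                          # calibration subset

    rmsec = mean_squared_error(y[::2], pls.predict(X[::2]).ravel()) ** 0.5
    rmsep = mean_squared_error(y[1::2], pls.predict(X[1::2]).ravel()) ** 0.5
    print(rmsec, rmsep)                              # calibration vs prediction error
    ```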

  16. Calibration in dogs of a subcutaneous miniaturized glucose sensor using a glucose meter for blood glucose determination.

    PubMed

    Poitout, V; Moatti-Sirat, D; Reach, G

    1992-01-01

    The feasibility of calibrating a glucose sensor using a wearable glucose meter for blood glucose determination, together with moderate variations in blood glucose concentration, was assessed. Six miniaturized glucose sensors were implanted in the subcutaneous tissue of conscious dogs, and the parameters used for the in vivo calibration of the sensor (sensitivity coefficient and extrapolated current in the absence of glucose) were determined from values of blood glucose and sensor response obtained during glucose infusion. (1) Venous plasma glucose and venous whole blood glucose were measured simultaneously on the same sample, using a Beckman analyser and a Glucometer II, respectively. The regression between plasma glucose (x) and whole blood glucose (y) was y = 1.12x - 0.08 mM (n = 114 values, r = 0.96, p = 0.0001). Error grid analysis indicated that the use of a Glucometer II for blood glucose determination was appropriate in dogs. (2) The in vivo sensitivity coefficients were 0.57 +/- 0.11 nA mM-1 when determined from plasma glucose and 0.51 +/- 0.07 nA mM-1 when determined from whole blood glucose (t = 1.53, p = 0.18, n.s.). The background currents were 0.88 +/- 0.57 nA when determined from plasma glucose and 0.63 +/- 0.77 nA when determined from whole blood glucose (t = 0.82, p = 0.45, n.s.). (3) The regression equation comparing the subcutaneous glucose estimates obtained from the two methods was y = 1.04x + 0.56 mM (n = 171 values, r = 0.98, p = 0.0001). (ABSTRACT TRUNCATED AT 250 WORDS)
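
    The two calibration parameters map directly onto a linear fit of sensor current against meter glucose; a sketch with invented numbers (not the study's data):

    ```python
    import numpy as np

    # In vivo two-parameter calibration: regress sensor current (nA) on blood
    # glucose (mM) during infusion to get sensitivity S and background I0.
    glucose = np.array([4.0, 6.0, 8.0, 10.0, 12.0])   # mM, meter readings
    current = 0.55 * glucose + 0.8 + np.random.default_rng(2).normal(0, 0.05, 5)

    S, I0 = np.polyfit(glucose, current, 1)           # nA/mM, nA

    def estimate_glucose(i_na):
        """Invert the calibration to estimate glucose (mM) from current (nA)."""
        return (i_na - I0) / S

    print(S, I0, estimate_glucose(5.0))
    ```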

  17. Simulation of runoff and nutrient export from a typical small watershed in China using the Hydrological Simulation Program-Fortran.

    PubMed

    Li, Zhaofu; Liu, Hongyu; Luo, Chuan; Li, Yan; Li, Hengpeng; Pan, Jianjun; Jiang, Xiaosan; Zhou, Quansuo; Xiong, Zhengqin

    2015-05-01

    The Hydrological Simulation Program-Fortran (HSPF), a hydrological and water-quality computer model developed by the United States Environmental Protection Agency, was employed to simulate runoff and nutrient export from a typical small watershed in a hilly eastern monsoon region of China. First, a parameter sensitivity analysis was performed to assess how changes in the model parameters affect runoff and nutrient export. Next, the model was calibrated and validated using measured runoff and nutrient concentration data. The Nash-Sutcliffe efficiency (ENS) values of the yearly runoff were 0.87 and 0.69 for the calibration and validation periods, respectively. For storm runoff events, the ENS values were 0.93 for the calibration period and 0.47 for the validation period; antecedent precipitation and soil moisture conditions can affect the simulation accuracy of storm event flow. The ENS values for total nitrogen (TN) export were 0.58 for the calibration period and 0.51 for the validation period, and the correlation coefficients between the observed and simulated TN concentrations were 0.84 for the calibration period and 0.74 for the validation period. For phosphorus export, the ENS values were 0.89 for the calibration period and 0.88 for the validation period, and the correlation coefficients between the observed and simulated orthophosphate concentrations were 0.96 and 0.94 for the calibration and validation periods, respectively. The nutrient simulation results are generally satisfactory even though the parameter-lumped HSPF model cannot represent the effects of the spatial pattern of land cover on nutrient export. The model parameters obtained in this study could serve as reference values for applying the model to similar regions, and HSPF can properly describe the characteristics of water quantity and quality processes in this area. After adjustment, calibration, and validation of the parameters, the HSPF model is suitable for hydrological and water-quality simulations in watershed planning and management and for designing best management practices.
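
    The Nash-Sutcliffe efficiency used throughout is ENS = 1 - Σ(obs - sim)² / Σ(obs - mean(obs))²; a minimal sketch with illustrative runoff values (not the study's data):

    ```python
    import numpy as np

    def nash_sutcliffe(obs, sim):
        """Nash-Sutcliffe efficiency: 1 minus SSE over the variance of obs."""
        obs, sim = np.asarray(obs, float), np.asarray(sim, float)
        return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

    obs = [420.0, 515.0, 380.0, 610.0]   # observed yearly runoff, mm (illustrative)
    sim = [400.0, 540.0, 360.0, 590.0]   # simulated yearly runoff, mm
    print(nash_sutcliffe(obs, sim))
    ```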

  18. Parameter identification of the SWAT model on the BANI catchment (West Africa) under limited data condition

    NASA Astrophysics Data System (ADS)

    Chaibou Begou, Jamilatou; Jomaa, Seifeddine; Benabdallah, Sihem; Rode, Michael

    2015-04-01

    Due to climate change, drier conditions have prevailed in West Africa since the 1970s, with important consequences for water resources. In order to identify and implement management strategies for adaptation to climate change in the water sector, it is crucial to improve our physical understanding of the evolution of water resources in the region. To this end, hydrologic modelling is an appropriate tool for flow prediction under changing climate and land use conditions. In this study, the applicability and performance of the recent version of the Soil and Water Assessment Tool (SWAT2012) model were tested on the Bani catchment in West Africa under limited data conditions. Model parameter identification was also tested using single-site and multi-site calibration approaches. The Bani is located in the upper part of the Niger River basin and drains an area of about 101,000 km2 at the Douna outlet. The climate is tropical, humid to semi-arid from south to north, with an average annual rainfall of 1050 mm (period 1981-2000). Global datasets were used for the model setup: the USGS HydroSHEDS DEM, USGS LCI GlobCov2009, and the FAO Digital Soil Map of the World. Daily measured rainfall from nine rain gauges and maximum and minimum temperatures from five weather stations covering the period 1981-1997 were also used. Sensitivity analysis, calibration, and validation were performed within SWAT-CUP using the GLUE procedure, first at the Douna station (single-site calibration) and then at three additional internal stations, Bougouni, Pankourou, and Kouoro1 (multi-site calibration). Model parameters were calibrated at a daily time step for the period 1983-1992 and validated for the period 1993-1997; a period of two years (1981-1982) was used for model warm-up. Single-site calibration yielded a Nash-Sutcliffe efficiency (NS) of 0.76 and a correlation coefficient (R2) of 0.79, while for the validation period the performance improved considerably, with NS and R2 equal to 0.84 and 0.87, respectively. The degree of total uncertainty is quantified by a minimum P-factor of 0.61 and a maximum R-factor of 0.59. These statistics suggest that the model performance can be judged as very good, especially considering the limited data conditions and the high climate, land use, and soil variability in the studied basin. The most sensitive parameters (CN2, OVN, and SLSUBBSN) are related to surface runoff, reflecting the dominance of this process in streamflow generation. In a next step, the multi-site calibration approach will be applied to the Bani basin to assess how much additional observations improve model parameter identification.
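
    The P-factor and R-factor cited here are standard uncertainty statistics for GLUE-style analyses: the fraction of observations bracketed by the 95% prediction band, and the average band width relative to the standard deviation of the observations. A toy sketch (synthetic ensemble, not the Bani results):

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    obs = rng.normal(100.0, 20.0, 50)             # observed flows (illustrative)
    ens = obs + rng.normal(0.0, 15.0, (500, 50))  # behavioural simulations

    lo, hi = np.percentile(ens, [2.5, 97.5], axis=0)  # 95% prediction band
    p_factor = np.mean((obs >= lo) & (obs <= hi))     # coverage of observations
    r_factor = np.mean(hi - lo) / obs.std()           # relative band width
    print(p_factor, r_factor)
    ```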

  19. Nimbus-7 Earth radiation budget calibration history. Part 1: The solar channels

    NASA Technical Reports Server (NTRS)

    Kyle, H. Lee; Hoyt, Douglas V.; Hickey, John R.; Maschhoff, Robert H.; Vallette, Brenda J.

    1993-01-01

    The Earth Radiation Budget (ERB) experiment on the Nimbus-7 satellite measured the total solar irradiance plus broadband spectral components on a nearly daily basis from 16 Nov. 1978 until 16 June 1992. Months of additional observations were taken in late 1992 and in 1993. The emphasis is on the electrically self-calibrating cavity radiometer, channel 10c, which recorded accurate total solar irradiance measurements over the whole period. The spectral channels did not have in-flight calibration adjustment capabilities. These channels can, with some additional corrections, be used for short-term studies (one or two solar rotations, 27 to 60 days), but not for long-term trend analysis. For channel 10c, changing radiometer pointing, the zero offsets, the stability of the gain, the temperature sensitivity, and the influences of other platform instruments are all examined and their effects on the measurements considered. Only the question of relative accuracy (not absolute) is examined. The final channel 10c product is also compared with solar measurements made by independent experiments on other satellites. The Nimbus experiment showed that the mean solar energy was about 0.1 percent (1.4 W/m2) higher in the excited Sun years of 1979 and 1991 than in the quiet Sun years of 1985 and 1986. The error analysis indicated that the measured long-term trends may be as accurate as +/- 0.005 percent; the worst-case error estimate is +/- 0.03 percent.

  20. Fluorescent quantification of terazosin hydrochloride content in human plasma and tablets using second-order calibration based on both parallel factor analysis and alternating penalty trilinear decomposition.

    PubMed

    Zou, Hong-Yan; Wu, Hai-Long; OuYang, Li-Qun; Zhang, Yan; Nie, Jin-Fang; Fu, Hai-Yan; Yu, Ru-Qin

    2009-09-14

    Two second-order calibration methods, based on parallel factor analysis (PARAFAC) and the alternating penalty trilinear decomposition (APTLD), were used for the direct determination of terazosin hydrochloride (THD) in human plasma samples, coupled with excitation-emission matrix fluorescence spectroscopy. The two algorithms, combined with standard addition procedures, were also applied to the determination of terazosin hydrochloride in tablets, and the results were validated by high-performance liquid chromatography with fluorescence detection. Both second-order calibration methods adequately exploited the second-order advantage. For human plasma samples, the average recoveries by the PARAFAC and APTLD algorithms with a factor number of 2 (N=2) were 100.4+/-2.7% and 99.2+/-2.4%, respectively. The accuracy of the two algorithms was also evaluated through elliptical joint confidence region (EJCR) tests and t-tests; both algorithms gave accurate results, with APTLD performing slightly better than PARAFAC. Figures of merit, such as sensitivity (SEN), selectivity (SEL), and limit of detection (LOD), were also calculated to compare the performances of the two strategies. For tablets, the average concentrations of THD were 63.5 and 63.2 ng mL(-1) by the PARAFAC and APTLD algorithms, respectively; accuracy was again evaluated by t-test, and both algorithms gave accurate results.
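
    A minimal PARAFAC sketch on a synthetic excitation-emission tensor, assuming the tensorly library is available (the rank-2 structure mirrors the paper's N=2; all data below are simulated, not the THD measurements):

    ```python
    import numpy as np
    import tensorly as tl
    from tensorly.decomposition import parafac

    # Build a trilinear EEM tensor (samples x excitation x emission) from two
    # synthetic "fluorophores", then add a little noise.
    rng = np.random.default_rng(4)
    conc = rng.uniform(0.2, 1.0, (10, 2))   # relative concentrations
    ex = rng.random((40, 2))                # excitation profiles
    em = rng.random((60, 2))                # emission profiles
    eem = np.einsum('ir,jr,kr->ijk', conc, ex, em) + rng.normal(0, 1e-3, (10, 40, 60))

    # Rank-2 CP/PARAFAC decomposition; the sample-mode factors track concentration.
    weights, factors = parafac(tl.tensor(eem), rank=2, n_iter_max=500)
    print(factors[0].shape)                 # (10, 2): one score per sample and factor
    ```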
