A calibration hierarchy for risk prediction models was defined: from utopia to empirical data.
Van Calster, Ben; Nieboer, Daan; Vergouwe, Yvonne; De Cock, Bavo; Pencina, Michael J; Steyerberg, Ewout W
2016-06-01
Calibrated risk models are vital for valid decision support. We define four levels of calibration and describe implications for model development and external validation of predictions. We present results based on simulated data sets. A common definition of calibration is "having an event rate of R% among patients with a predicted risk of R%," which we refer to as "moderate calibration." Weaker forms of calibration only require the average predicted risk (mean calibration) or the average prediction effects (weak calibration) to be correct. "Strong calibration" requires that the event rate equal the predicted risk for every covariate pattern. This implies that the model is fully correct for the validation setting. We argue that this is unrealistic: the model type may be incorrect, the linear predictor is only asymptotically unbiased, and all nonlinear and interaction effects would have to be modeled correctly. In addition, we prove that moderate calibration guarantees nonharmful decision making. Finally, results indicate that a flexible assessment of calibration in small validation data sets is problematic. Strong calibration is desirable for individualized decision support but unrealistic and counterproductive, because it stimulates the development of overly complex models. Model development and external validation should focus on moderate calibration. Copyright © 2016 Elsevier Inc. All rights reserved.
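The two weaker calibration levels can be checked numerically from predicted risks and observed outcomes. The sketch below is illustrative only, not the authors' code: mean calibration compares the average predicted risk with the observed event rate, and weak calibration fits a logistic recalibration of the linear predictor, for which a well-calibrated model gives intercept ≈ 0 and slope ≈ 1.

```python
import numpy as np

def mean_calibration(y, p):
    """Calibration-in-the-large: average predicted risk vs. observed event rate."""
    return p.mean(), y.mean()

def weak_calibration(y, p):
    """Logistic recalibration of the linear predictor (logit of predicted risk).
    Returns [intercept, slope]; values near [0, 1] indicate weak calibration.
    Fitted with a few Newton-Raphson steps on the logistic log-likelihood."""
    lp = np.log(p / (1 - p))
    X = np.column_stack([np.ones_like(lp), lp])
    beta = np.zeros(2)
    for _ in range(25):
        mu = 1 / (1 + np.exp(-X @ beta))             # current fitted probabilities
        grad = X.T @ (y - mu)                        # score vector
        hess = X.T @ (X * (mu * (1 - mu))[:, None])  # observed information
        beta += np.linalg.solve(hess, grad)
    return beta
```

On data simulated from the model itself, the recalibration intercept comes out near 0 and the slope near 1; systematic departures flag miscalibration.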
NASA Technical Reports Server (NTRS)
Doty, Keith L
1992-01-01
The author has formulated a new, general model for specifying the kinematic properties of serial manipulators. The new model's kinematic parameters do not suffer discontinuities when nominally parallel adjacent axes deviate from exact parallelism. From this new theory the author develops a first-order, lumped-parameter calibration model for the ARID manipulator. Next, the author develops a calibration methodology for the ARID based on visual and acoustic sensing. A sensor platform, consisting of a camera and four sonars attached to the ARID end frame, performs calibration measurements. A calibration measurement consists of processing one visual frame of an accurately placed calibration image and recording four acoustic range measurements. A minimum of two measurement protocols determine the kinematic calibration model of the ARID for a particular region, assuming the joint displacements are accurately measured, the calibration surface is planar, and the kinematic parameters do not vary rapidly in the region. No theoretical or practical limitations appear to contraindicate the feasibility of the calibration method developed here.
SWAT: Model use, calibration, and validation
USDA-ARS?s Scientific Manuscript database
SWAT (Soil and Water Assessment Tool) is a comprehensive, semi-distributed river basin model that requires a large number of input parameters which complicates model parameterization and calibration. Several calibration techniques have been developed for SWAT including manual calibration procedures...
Hunt, R.J.; Anderson, M.P.; Kelson, V.A.
1998-01-01
This paper demonstrates that analytic element models have potential as powerful screening tools that can facilitate or improve calibration of more complicated finite-difference and finite-element models. We demonstrate how a two-dimensional analytic element model was used to identify errors in a complex three-dimensional finite-difference model caused by incorrect specification of boundary conditions. An improved finite-difference model was developed using boundary conditions developed from a far-field analytic element model. Calibration of a revised finite-difference model was achieved using fewer zones of hydraulic conductivity and lake bed conductance than the original finite-difference model. Calibration statistics were also improved in that simulated base-flows were much closer to measured values. The improved calibration is due mainly to improved specification of the boundary conditions made possible by first solving the far-field problem with an analytic element model.
Calibration of steady-state car-following models using macroscopic loop detector data.
DOT National Transportation Integrated Search
2010-05-01
The paper develops procedures for calibrating the steady-state component of various car following models using : macroscopic loop detector data. The calibration procedures are developed for a number of commercially available : microscopic traffic sim...
Input variable selection and calibration data selection for storm water quality regression models.
Sun, Siao; Bertrand-Krajewski, Jean-Luc
2013-01-01
Storm water quality models are useful tools in storm water management. Interest has been growing in analyzing existing data to develop models for urban storm water quality evaluations. It is important to select appropriate model inputs when many candidate explanatory variables are available, and model calibration and verification are essential steps in any storm water quality modeling. This study investigates input variable selection and calibration data selection in storm water quality regression models. The two selection problems interact, so a procedure is developed to address them sequentially. The procedure first selects model input variables using a cross-validation method; an appropriate number of variables is identified so that the model is neither overfitted nor underfitted. Based on the input selection results, calibration data selection is then studied. Uncertainty of model performance due to calibration data selection is investigated with a random selection method, and a cluster-based approach is applied to improve calibration practice by selecting representative data for calibration. The comparison between the cluster selection method and random selection shows that the former significantly improves the performance of calibrated models. It is found that the information content of the calibration data is important in addition to its size.
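The first step of such a procedure, choosing model inputs by cross validation, can be sketched as greedy forward selection. This is a hypothetical implementation, not the paper's exact algorithm: variables are added one at a time as long as the k-fold cross-validation error keeps dropping, which guards against both underfitting and overfitting.

```python
import numpy as np

def cv_mse(X, y, cols, k=5):
    """k-fold cross-validation mean squared error of an OLS model on columns `cols`."""
    idx = np.arange(len(y))
    err = 0.0
    for fold in np.array_split(idx, k):
        train = np.setdiff1d(idx, fold)
        A = np.column_stack([np.ones(len(train)), X[train][:, cols]])
        beta, *_ = np.linalg.lstsq(A, y[train], rcond=None)
        B = np.column_stack([np.ones(len(fold)), X[fold][:, cols]])
        err += np.sum((y[fold] - B @ beta) ** 2)
    return err / len(y)

def forward_select(X, y, k=5):
    """Greedy forward selection: add the candidate that most reduces CV error;
    stop as soon as no candidate improves it."""
    remaining, chosen, best = list(range(X.shape[1])), [], np.inf
    while remaining:
        score, j = min((cv_mse(X, y, chosen + [j], k), j) for j in remaining)
        if score >= best:
            break
        best, chosen = score, chosen + [j]
        remaining.remove(j)
    return chosen
```

With explanatory variables that genuinely drive the response, the CV criterion recovers them and leaves pure-noise candidates out.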
A Comparison of Two Balance Calibration Model Building Methods
NASA Technical Reports Server (NTRS)
DeLoach, Richard; Ulbrich, Norbert
2007-01-01
Simulated strain-gage balance calibration data is used to compare the accuracy of two balance calibration model building methods for different noise environments and calibration experiment designs. The first building method obtains a math model for the analysis of balance calibration data after applying a candidate math model search algorithm to the calibration data set. The second building method uses stepwise regression analysis in order to construct a model for the analysis. Four balance calibration data sets were simulated in order to compare the accuracy of the two math model building methods. The simulated data sets were prepared using the traditional One Factor At a Time (OFAT) technique and the Modern Design of Experiments (MDOE) approach. Random and systematic errors were introduced in the simulated calibration data sets in order to study their influence on the math model building methods. Residuals of the fitted calibration responses and other statistical metrics were compared in order to evaluate the calibration models developed with different combinations of noise environment, experiment design, and model building method. Overall, predicted math models and residuals of both math model building methods show very good agreement. Significant differences in model quality were attributable to noise environment, experiment design, and their interaction. Generally, the addition of systematic error significantly degraded the quality of calibration models developed from OFAT data by either method, but MDOE experiment designs were more robust with respect to the introduction of a systematic component of the unexplained variance.
Model Robust Calibration: Method and Application to Electronically-Scanned Pressure Transducers
NASA Technical Reports Server (NTRS)
Walker, Eric L.; Starnes, B. Alden; Birch, Jeffery B.; Mays, James E.
2010-01-01
This article presents the application of a recently developed statistical regression method to the controlled instrument calibration problem. The statistical method of Model Robust Regression (MRR), developed by Mays, Birch, and Starnes, is shown to improve instrument calibration by reducing the reliance of the calibration on a predetermined parametric (e.g. polynomial, exponential, logarithmic) model. This is accomplished by allowing fits from the predetermined parametric model to be augmented by a certain portion of a fit to the residuals from the initial regression using a nonparametric (locally parametric) regression technique. The method is demonstrated for the absolute scale calibration of silicon-based pressure transducers.
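The MRR idea, a parametric fit augmented by a fraction of a nonparametric fit to its residuals, can be illustrated in a few lines. This sketch uses a fixed mixing fraction and a simple Nadaraya-Watson smoother; the published method chooses the mixing fraction and smoothing parameters data-adaptively, so treat the names and defaults here as assumptions.

```python
import numpy as np

def model_robust_fit(x, y, degree=2, bandwidth=0.5, lam=0.5):
    """Model Robust Regression sketch: a polynomial fit augmented by a fraction
    `lam` of a kernel-smoothed fit to its residuals.
    lam=0 gives the pure parametric fit; lam=1 adds the full residual smooth."""
    coefs = np.polyfit(x, y, degree)
    parametric = np.polyval(coefs, x)
    resid = y - parametric
    # Nadaraya-Watson (Gaussian kernel) smoother applied to the residuals
    w = np.exp(-0.5 * ((x[:, None] - x[None, :]) / bandwidth) ** 2)
    smooth = (w @ resid) / w.sum(axis=1)
    return parametric + lam * smooth
```

When the assumed parametric form is wrong (e.g., a quadratic fitted to a sinusoid), the residual smooth recovers the structure the polynomial misses, which is exactly the robustness the method is after.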
Automatic Calibration of a Semi-Distributed Hydrologic Model Using Particle Swarm Optimization
NASA Astrophysics Data System (ADS)
Bekele, E. G.; Nicklow, J. W.
2005-12-01
Hydrologic simulation models need to be calibrated and validated before they are used for operational predictions. Spatially distributed hydrologic models generally have a large number of parameters to capture the various physical characteristics of a hydrologic system. Manual calibration of such models is a tedious and daunting task, and its success depends on the subjective assessment of a particular modeler, including knowledge of the basic approaches and interactions in the model. To alleviate these shortcomings, an automatic calibration model, which employs an evolutionary optimization technique known as the Particle Swarm Optimizer (PSO) for parameter estimation, is developed. PSO is a heuristic search algorithm inspired by the social behavior of bird flocking and fish schooling. The newly developed calibration model is integrated with the U.S. Department of Agriculture's Soil and Water Assessment Tool (SWAT). SWAT is a physically based, semi-distributed hydrologic model developed to predict the long-term impacts of land management practices on water, sediment, and agricultural chemical yields in large, complex watersheds with varying soils, land use, and management conditions. SWAT was calibrated for streamflow and sediment concentration. The calibration process involves parameter specification, whereby sensitive model parameters are identified, and parameter estimation. To reduce the number of parameters to be calibrated, parameterization was performed. The methodology is applied to a demonstration watershed known as Big Creek, located in southern Illinois. Application results show the effectiveness of the approach; model predictions are significantly improved.
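A minimal PSO of the kind used for such parameter estimation fits in a few lines. This is an illustrative implementation, not the SWAT-coupled code: in a real calibration, `objective` would run the hydrologic model for a candidate parameter vector and return a goodness-of-fit penalty such as 1 − NSE; here it is shown minimizing a toy function.

```python
import numpy as np

def pso(objective, bounds, n_particles=20, iters=60, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm minimizer over box bounds: each particle is pulled
    toward its own best position and the swarm's global best position."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    x = rng.uniform(lo, hi, (n_particles, len(lo)))   # particle positions
    v = np.zeros_like(x)                              # particle velocities
    pbest = x.copy()
    pval = np.array([objective(p) for p in x])
    gbest = pbest[pval.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)                    # keep particles inside bounds
        val = np.array([objective(p) for p in x])
        improved = val < pval
        pbest[improved], pval[improved] = x[improved], val[improved]
        gbest = pbest[pval.argmin()].copy()
    return gbest, pval.min()
```

The inertia weight `w` and acceleration constants `c1`, `c2` trade off exploration against convergence; the values above are common defaults, not the study's settings.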
Absolute radiometric calibration of Landsat using a pseudo invariant calibration site
Helder, D.; Thome, K.J.; Mishra, N.; Chander, G.; Xiong, Xiaoxiong; Angal, A.; Choi, Tae-young
2013-01-01
Pseudo invariant calibration sites (PICS) have been used for on-orbit radiometric trending of optical satellite systems for more than 15 years. This approach to vicarious calibration has demonstrated a high degree of reliability and repeatability at the level of 1-3% depending on the site, spectral channel, and imaging geometries. A variety of sensors have used this approach for trending because it is broadly applicable and easy to implement. Models to describe the surface reflectance properties, as well as the intervening atmosphere have also been developed to improve the precision of the method. However, one limiting factor of using PICS is that an absolute calibration capability has not yet been fully developed. Because of this, PICS are primarily limited to providing only long term trending information for individual sensors or cross-calibration opportunities between two sensors. This paper builds an argument that PICS can be used more extensively for absolute calibration. To illustrate this, a simple empirical model is developed for the well-known Libya 4 PICS based on observations by Terra MODIS and EO-1 Hyperion. The model is validated by comparing model predicted top-of-atmosphere reflectance values to actual measurements made by the Landsat ETM+ sensor reflective bands. Following this, an outline is presented to develop a more comprehensive and accurate PICS absolute calibration model that can be Système international d'unités (SI) traceable. These initial concepts suggest that absolute calibration using PICS is possible on a broad scale and can lead to improved on-orbit calibration capabilities for optical satellite sensors.
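An empirical PICS model of the kind described reduces, at its simplest, to regressing observed top-of-atmosphere reflectance against illumination geometry. The sketch below is a hypothetical, deliberately simplified form (reflectance linear in the cosine of solar zenith angle); the published Libya 4 model also accounts for view geometry, spectral band, and atmospheric effects.

```python
import numpy as np

def fit_pics_model(sza_deg, toa_reflectance):
    """Toy empirical PICS model: least-squares fit of top-of-atmosphere (TOA)
    reflectance against the cosine of solar zenith angle."""
    mu = np.cos(np.radians(sza_deg))
    A = np.column_stack([np.ones_like(mu), mu])
    coef, *_ = np.linalg.lstsq(A, toa_reflectance, rcond=None)
    return coef  # [offset, slope]

def predict_toa(coef, sza_deg):
    """Predict TOA reflectance for a new solar zenith angle."""
    return coef[0] + coef[1] * np.cos(np.radians(sza_deg))
```

Validation then proceeds as in the paper: predictions from the fitted model are compared against an independent sensor's measured TOA reflectances.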
USDA-ARS?s Scientific Manuscript database
Information to support application of hydrologic and water quality (H/WQ) models abounds, yet modelers commonly use arbitrary, ad hoc methods to conduct, document, and report model calibration, validation, and evaluation. Consistent methods are needed to improve model calibration, validation, and e...
NASA Astrophysics Data System (ADS)
Lafontaine, J.; Hay, L.; Markstrom, S. L.
2016-12-01
The United States Geological Survey (USGS) has developed a National Hydrologic Model (NHM) to support coordinated, comprehensive, and consistent hydrologic model development and to facilitate the application of hydrologic simulations within the conterminous United States (CONUS). As many stream reaches in the CONUS are either not gaged or are substantially impacted by water use or flow regulation, ancillary information must be used to determine reasonable parameter estimates for streamflow simulations. Hydrologic models for 1,576 gaged watersheds across the CONUS were developed to test the feasibility of improving streamflow simulations by linking physically based hydrologic models with remotely sensed data products (e.g., snow water equivalent). Initially, the physically based models were calibrated to measured streamflow data to provide a baseline for comparison across multiple calibration strategy tests. In addition, not all ancillary datasets are appropriate for application to all parts of the CONUS (e.g., snow water equivalent in the southeastern U.S., where snow is a rarity). As no one data product or model simulation is expected to be sufficient for representing hydrologic behavior across the entire CONUS, a systematic evaluation was performed of which data products improve hydrologic simulations for various regions. The resulting portfolio of calibration strategies can guide selection of an appropriate combination of modeled and measured information for hydrologic model development and calibration. These calibration strategies have also been designed to be flexible so that new data products can be assimilated. This analysis provides a foundation for understanding how well models work when sufficient streamflow data are not available and could further inform hydrologic model parameter development for ungaged areas.
Bhandari, Ammar B; Nelson, Nathan O; Sweeney, Daniel W; Baffaut, Claire; Lory, John A; Senaviratne, Anomaa; Pierzynski, Gary M; Janssen, Keith A; Barnes, Philip L
2017-11-01
Process-based computer models have been proposed as a tool to generate data for phosphorus (P) Index assessment and development. Although models are commonly used to simulate P loss from agriculture under managements that differ from the calibration data, this use of models has not been fully tested. The objective of this study is to determine if the Agricultural Policy Environmental eXtender (APEX) model can accurately simulate runoff, sediment, total P, and dissolved P loss from 0.4- to 1.5-ha agricultural fields under managements that differ from the calibration data. The APEX model was calibrated with field-scale data from eight different managements at two locations (management-specific models). The calibrated models were then validated, either with the same management used for calibration or with different managements. Location models were also developed by calibrating APEX with data from all managements. The management-specific models performed satisfactorily when used to simulate runoff, total P, and dissolved P within their respective systems, with > 0.50, Nash-Sutcliffe efficiency > 0.30, and percent bias within ±35% for runoff and ±70% for total and dissolved P. When applied outside the calibration management, the management-specific models met the minimum performance criteria in only one-third of the tests. The location models performed better when applied across all managements than the management-specific models did. Our results suggest that models be applied only within the managements used for calibration and that data from multiple management systems be included in calibration when using models to assess management effects on P loss or to evaluate P Indices. Copyright © by the American Society of Agronomy, Crop Science Society of America, and Soil Science Society of America, Inc.
Hydrological processes and model representation: impact of soft data on calibration
J.G. Arnold; M.A. Youssef; H. Yen; M.J. White; A.Y. Sheshukov; A.M. Sadeghi; D.N. Moriasi; J.L. Steiner; Devendra Amatya; R.W. Skaggs; E.B. Haney; J. Jeong; M. Arabi; P.H. Gowda
2015-01-01
Hydrologic and water quality models are increasingly used to determine the environmental impacts of climate variability and land management. Due to differing model objectives and differences in monitored data, there are currently no universally accepted procedures for model calibration and validation in the literature. In an effort to develop accepted model calibration...
Reducing equifinality of hydrological models by integrating Functional Streamflow Disaggregation
NASA Astrophysics Data System (ADS)
Lüdtke, Stefan; Apel, Heiko; Nied, Manuela; Carl, Peter; Merz, Bruno
2014-05-01
A universal problem in the calibration of hydrological models is the equifinality of different parameter sets derived from calibrating models against total runoff values. This is an intrinsic problem stemming from the quality of the calibration data and the simplified process representation in the model. However, discharge data contain additional information that can be extracted by signal processing methods. An analysis specifically developed for the disaggregation of runoff time series into flow components is the Functional Streamflow Disaggregation (FSD; Carl & Behrendt, 2008). This method is used in the calibration of an implementation of the hydrological model SWIM in a medium-sized watershed in Thailand. FSD is applied to disaggregate the discharge time series into three flow components, interpreted as base flow, inter-flow, and surface runoff. In addition to total runoff, the model is calibrated against these three components in a modified GLUE analysis, with the aim of identifying structural model deficiencies, assessing the internal process representation, and tackling equifinality. We developed a model-dependent approach (MDA) calibrating the model's runoff components against the FSD components, and a model-independent approach (MIA) comparing the FSD of the model results with the FSD of the calibration data. The results indicate that the decomposition provides valuable information for the calibration. MDA in particular highlights and discards a number of standard-GLUE behavioural models that underestimate the contribution of soil water to river discharge. Both MDA and MIA reduce the parameter ranges by a factor of up to 3 in comparison to standard GLUE. Based on these results, we conclude that the developed calibration approach is able to reduce the equifinality of hydrological model parameterizations. The effect on the uncertainty of the model predictions is strongest for MDA and shows only minor reductions for MIA.
Besides further validation of FSD, the next steps include an extension of the study to different catchments and other hydrological models with a similar structure.
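The component-wise screening at the heart of the MDA approach can be sketched as a GLUE-style behavioural filter. This is illustrative only: the component names, the NSE criterion, and the threshold are assumptions, not the study's actual settings, and `simulate` stands in for a full SWIM run.

```python
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit, 0 matches the mean of obs."""
    return 1 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def glue_behavioural(param_sets, simulate, obs_components, threshold=0.5):
    """GLUE-style screening in the spirit of MDA: a parameter set is kept only if
    it is behavioural for EVERY disaggregated flow component (base flow,
    inter-flow, surface runoff), not just for total runoff.
    `simulate(theta)` must return a dict of component series keyed like the obs."""
    keep = []
    for theta in param_sets:
        sim = simulate(theta)
        if all(nse(obs_components[k], sim[k]) > threshold for k in obs_components):
            keep.append(theta)
    return keep
```

Requiring all components to pass is what discards parameter sets that reproduce total runoff while misallocating it between soil water and surface contributions.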
Enns, Eva Andrea; Kao, Szu-Yu; Kozhimannil, Katy Backes; Kahn, Judith; Farris, Jill; Kulasingam, Shalini L
2017-10-01
Mathematical models are important tools for assessing prevention and management strategies for sexually transmitted infections. These models are usually developed for a single infection and require calibration to observed epidemiological trends in the infection of interest. Incorporating other outcomes of sexual behavior into the model, such as pregnancy, may better inform the calibration process. We developed a mathematical model of chlamydia transmission and pregnancy in Minnesota adolescents aged 15 to 19 years. We calibrated the model to statewide rates of reported chlamydia cases alone (chlamydia calibration) and in combination with pregnancy rates (dual calibration). We evaluated the impact of calibrating to different outcomes of sexual behavior on estimated input parameter values, predicted epidemiological outcomes, and predicted impact of chlamydia prevention interventions. The two calibration scenarios produced different estimates of the probability of condom use, the probability of chlamydia transmission per sex act, the proportion of asymptomatic infections, and the screening rate among men. These differences resulted in the dual calibration scenario predicting lower prevalence and incidence of chlamydia compared with calibrating to chlamydia cases alone. When evaluating the impact of a 10% increase in condom use, the dual calibration scenario predicted fewer infections averted over 5 years compared with chlamydia calibration alone [111 (6.8%) vs 158 (8.5%)]. While pregnancy and chlamydia in adolescents are often considered separately, both are outcomes of unprotected sexual activity. Incorporating both as calibration targets in a model of chlamydia transmission resulted in different parameter estimates, potentially impacting the intervention effectiveness predicted by the model.
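Conceptually, dual calibration replaces a single-target distance with a weighted multi-target objective that a parameter search then minimizes. The sketch below is illustrative; the target names and weighting scheme are assumptions, not taken from the published model.

```python
def dual_calibration_error(sim, targets, weights):
    """Weighted sum of squared relative errors across calibration targets
    (e.g. reported chlamydia case rates and pregnancy rates). Parameter sets
    minimizing this score must fit both outcomes of sexual behavior at once."""
    return sum(
        weights[k] * ((sim[k] - targets[k]) / targets[k]) ** 2
        for k in targets
    )
```

With only the chlamydia term active, several distinct parameter sets can score equally well; adding the pregnancy term breaks such ties, which is why the two scenarios arrive at different estimates of condom use and transmission probability.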
DOT National Transportation Integrated Search
2006-01-01
A previous study developed a procedure for microscopic simulation model calibration and validation and evaluated the procedure via two relatively simple case studies using three microscopic simulation models. Results showed that default parameters we...
USDA-ARS?s Scientific Manuscript database
The progressive improvement of computer science and development of auto-calibration techniques means that calibration of simulation models is no longer a major challenge for watershed planning and management. Modelers now increasingly focus on challenges such as improved representation of watershed...
A parallel calibration utility for WRF-Hydro on high performance computers
NASA Astrophysics Data System (ADS)
Wang, J.; Wang, C.; Kotamarthi, V. R.
2017-12-01
A successful modeling of complex hydrological processes comprises establishing an integrated hydrological model which simulates the hydrological processes in each water regime, calibrates and validates the model performance based on observation data, and estimates the uncertainties from different sources especially those associated with parameters. Such a model system requires large computing resources and often have to be run on High Performance Computers (HPC). The recently developed WRF-Hydro modeling system provides a significant advancement in the capability to simulate regional water cycles more completely. The WRF-Hydro model has a large range of parameters such as those in the input table files — GENPARM.TBL, SOILPARM.TBL and CHANPARM.TBL — and several distributed scaling factors such as OVROUGHRTFAC. These parameters affect the behavior and outputs of the model and thus may need to be calibrated against the observations in order to obtain a good modeling performance. Having a parameter calibration tool specifically for automate calibration and uncertainty estimates of WRF-Hydro model can provide significant convenience for the modeling community. In this study, we developed a customized tool using the parallel version of the model-independent parameter estimation and uncertainty analysis tool, PEST, to enabled it to run on HPC with PBS and SLURM workload manager and job scheduler. We also developed a series of PEST input file templates that are specifically for WRF-Hydro model calibration and uncertainty analysis. Here we will present a flood case study occurred in April 2013 over Midwest. The sensitivity and uncertainties are analyzed using the customized PEST tool we developed.
Calibration of a COTS Integration Cost Model Using Local Project Data
NASA Technical Reports Server (NTRS)
Boland, Dillard; Coon, Richard; Byers, Kathryn; Levitt, David
1997-01-01
The software measures and estimation techniques appropriate to a Commercial Off the Shelf (COTS) integration project differ from those commonly used for custom software development. Labor and schedule estimation tools that model COTS integration are available. Like all estimation tools, they must be calibrated with the organization's local project data. This paper describes the calibration of a commercial model using data collected by the Flight Dynamics Division (FDD) of the NASA Goddard Spaceflight Center (GSFC). The model calibrated is SLIM Release 4.0 from Quantitative Software Management (QSM). By adopting the SLIM reuse model and by treating configuration parameters as lines of code, we were able to establish a consistent calibration for COTS integration projects. The paper summarizes the metrics, the calibration process and results, and the validation of the calibration.
Evaluation of calibration efficacy under different levels of uncertainty
Heo, Yeonsook; Graziano, Diane J.; Guzowski, Leah; ...
2014-06-10
This study examines how calibration performs under different levels of uncertainty in model input data. It specifically assesses the efficacy of Bayesian calibration to enhance the reliability of EnergyPlus model predictions. A Bayesian approach can be used to update uncertain values of parameters, given measured energy-use data, and to quantify the associated uncertainty. We assess the efficacy of Bayesian calibration under a controlled virtual-reality setup, which enables rigorous validation of the accuracy of calibration results in terms of both calibrated parameter values and model predictions. Case studies demonstrate the performance of Bayesian calibration of base models developed from audit data with differing levels of detail in building design, usage, and operation.
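The Bayesian update of an uncertain parameter from measured data can be sketched with a random-walk Metropolis sampler. This is a one-parameter toy with a uniform prior and Gaussian measurement error, not the study's EnergyPlus setup; `model` stands in for an energy simulation evaluated at a candidate parameter value.

```python
import numpy as np

def metropolis_calibrate(model, observed, prior_bounds, sigma,
                         n_iter=5000, step=0.1, seed=0):
    """Bayesian calibration sketch: random-walk Metropolis sampling of a single
    uncertain parameter, given measurements with error std dev `sigma` and a
    uniform prior over `prior_bounds`. Returns post-burn-in posterior samples."""
    rng = np.random.default_rng(seed)
    lo, hi = prior_bounds

    def log_post(theta):
        if not (lo <= theta <= hi):
            return -np.inf                      # outside the prior support
        resid = observed - model(theta)
        return -0.5 * np.sum((resid / sigma) ** 2)

    theta = 0.5 * (lo + hi)
    lp = log_post(theta)
    samples = []
    for _ in range(n_iter):
        prop = theta + step * rng.normal()      # symmetric random-walk proposal
        lp_prop = log_post(prop)
        if np.log(rng.random()) < lp_prop - lp: # Metropolis accept/reject
            theta, lp = prop, lp_prop
        samples.append(theta)
    return np.array(samples[n_iter // 2:])      # discard first half as burn-in
```

The spread of the returned samples is the quantified parameter uncertainty; pushing them back through the model gives the corresponding prediction uncertainty.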
Finite Element Model Calibration Approach for Ares I-X
NASA Technical Reports Server (NTRS)
Horta, Lucas G.; Reaves, Mercedes C.; Buehrle, Ralph D.; Templeton, Justin D.; Gaspar, James L.; Lazor, Daniel R.; Parks, Russell A.; Bartolotta, Paul A.
2010-01-01
Ares I-X is a pathfinder vehicle concept under development by NASA to demonstrate a new class of launch vehicles. Although this vehicle is essentially a shell of what the Ares I vehicle will be, efforts are underway to model and calibrate the analytical models before its maiden flight. Work reported in this document will summarize the model calibration approach used including uncertainty quantification of vehicle responses and the use of non-conventional boundary conditions during component testing. Since finite element modeling is the primary modeling tool, the calibration process uses these models, often developed by different groups, to assess model deficiencies and to update parameters to reconcile test with predictions. Data for two major component tests and the flight vehicle are presented along with the calibration results. For calibration, sensitivity analysis is conducted using Analysis of Variance (ANOVA). To reduce the computational burden associated with ANOVA calculations, response surface models are used in lieu of computationally intensive finite element solutions. From the sensitivity studies, parameter importance is assessed as a function of frequency. In addition, the work presents an approach to evaluate the probability that a parameter set exists to reconcile test with analysis. Comparisons of pretest predictions of frequency response uncertainty bounds with measured data, results from the variance-based sensitivity analysis, and results from component test models with calibrated boundary stiffness models are all presented.
Finite Element Model Calibration Approach for Ares I-X
NASA Technical Reports Server (NTRS)
Horta, Lucas G.; Reaves, Mercedes C.; Buehrle, Ralph D.; Templeton, Justin D.; Lazor, Daniel R.; Gaspar, James L.; Parks, Russel A.; Bartolotta, Paul A.
2010-01-01
Ares I-X is a pathfinder vehicle concept under development by NASA to demonstrate a new class of launch vehicles. Although this vehicle is essentially a shell of what the Ares I vehicle will be, efforts are underway to model and calibrate the analytical models before its maiden flight. Work reported in this document will summarize the model calibration approach used including uncertainty quantification of vehicle responses and the use of nonconventional boundary conditions during component testing. Since finite element modeling is the primary modeling tool, the calibration process uses these models, often developed by different groups, to assess model deficiencies and to update parameters to reconcile test with predictions. Data for two major component tests and the flight vehicle are presented along with the calibration results. For calibration, sensitivity analysis is conducted using Analysis of Variance (ANOVA). To reduce the computational burden associated with ANOVA calculations, response surface models are used in lieu of computationally intensive finite element solutions. From the sensitivity studies, parameter importance is assessed as a function of frequency. In addition, the work presents an approach to evaluate the probability that a parameter set exists to reconcile test with analysis. Comparisons of pre-test predictions of frequency response uncertainty bounds with measured data, results from the variance-based sensitivity analysis, and results from component test models with calibrated boundary stiffness models are all presented.
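Replacing the finite element model with a cheap response surface is what makes variance-based sensitivity affordable, since the surface can be sampled millions of times. The sketch below estimates first-order (main-effect) indices by conditional-mean binning on each input; it is illustrative and not the ANOVA formulation used in the paper.

```python
import numpy as np

def first_order_sensitivity(surface, bounds, n=20000, n_bins=20, seed=0):
    """First-order sensitivity indices S_i = Var(E[Y|X_i]) / Var(Y), estimated by
    Monte Carlo on a cheap response surface instead of the expensive FE model.
    E[Y|X_i] is approximated by averaging Y within quantile bins of X_i."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    X = rng.uniform(lo, hi, (n, len(lo)))
    y = surface(X)
    var_y = y.var()
    S = []
    for i in range(X.shape[1]):
        edges = np.quantile(X[:, i], np.linspace(0, 1, n_bins + 1))
        which = np.clip(np.digitize(X[:, i], edges) - 1, 0, n_bins - 1)
        means = np.array([y[which == b].mean() for b in range(n_bins)])
        counts = np.array([(which == b).sum() for b in range(n_bins)])
        # weighted variance of the conditional means, normalized by Var(Y)
        S.append(((counts * (means - y.mean()) ** 2).sum() / n) / var_y)
    return np.array(S)
```

For an additive surface the indices sum to roughly 1 and rank the parameters by how much of the output variance each explains, which is exactly the parameter-importance information the calibration needs.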
Calibration of Reduced Dynamic Models of Power Systems using Phasor Measurement Unit (PMU) Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhou, Ning; Lu, Shuai; Singh, Ruchi
2011-09-23
Accuracy of a power system dynamic model is essential to the secure and efficient operation of the system. Lower confidence in model accuracy usually leads to conservative operation and lower asset usage. To improve model accuracy, identification algorithms have been developed to calibrate parameters of individual components using measurement data from staged tests. To facilitate online dynamic studies for large power system interconnections, this paper proposes a model reduction and calibration approach using phasor measurement unit (PMU) data. First, a model reduction method is used to reduce the number of dynamic components. Then, a calibration algorithm is developed to estimate parameters of the reduced model. This approach will help to maintain an accurate dynamic model suitable for online dynamic studies. The performance of the proposed method is verified through simulation studies.
Eldyasti, Ahmed; Nakhla, George; Zhu, Jesse
2012-05-01
Biofilm models are valuable tools for process engineers to simulate biological wastewater treatment. In order to enhance the use of biofilm models implemented in contemporary simulation software, model calibration is both necessary and helpful. The aim of this work was to develop a calibration protocol for the particulate biofilm model with the help of a sensitivity analysis of the most important parameters in the biofilm model implemented in BioWin®, and to verify the predictability of the calibration protocol. A case study of a circulating fluidized bed bioreactor (CFBBR) system used for biological nutrient removal (BNR), with a fluidized bed respirometric study of the biofilm stoichiometry and kinetics, was used to verify and validate the proposed calibration protocol. Applying the five stages of the biofilm calibration procedure enhanced the applicability of BioWin®, which was capable of predicting most of the performance parameters with an average percentage error (APE) of 0-20%. Copyright © 2012 Elsevier Ltd. All rights reserved.
USDA-ARS?s Scientific Manuscript database
Watershed simulation models are used extensively to investigate hydrologic processes, landuse and climate change impacts, pollutant load assessments and best management practices (BMPs). Developing, calibrating and validating these models require a number of critical decisions that will influence t...
Probabilistic calibration of the SPITFIRE fire spread model using Earth observation data
NASA Astrophysics Data System (ADS)
Gomez-Dans, Jose; Wooster, Martin; Lewis, Philip; Spessa, Allan
2010-05-01
There is great interest in understanding how fire affects vegetation distribution and dynamics in the context of global vegetation modelling. A way to include these effects is through the development of embedded fire spread models. However, fire is a complex phenomenon and thus difficult to model. Statistical models based on fire return intervals or fire danger indices need large amounts of data for calibration and are often tied to the epoch for which they were calibrated. Mechanistic models, such as SPITFIRE, try to model the complete fire phenomenon based on simple physical rules, making these models largely independent of calibration data. However, the processes expressed in models such as SPITFIRE require many parameters. These parametrisations are often reliant on site-specific experiments, or in some other cases, parameters might not be measurable directly. Additionally, in many cases, changes in temporal and/or spatial resolution result in parameters becoming effective. To address the difficulties with parametrisation and the often-used fitting methodologies, we propose using a probabilistic framework to calibrate some components of the SPITFIRE fire spread model. We calibrate the model against Earth Observation (EO) data, a global and ever-expanding source of relevant data. We develop a methodology that incorporates the limitations of the EO data and reasonable prior values for parameters, and that results in distributions of parameters, which can be used to infer uncertainty due to parameter estimates. Additionally, the covariance structure of parameters and observations is also derived, which can help inform data-gathering efforts and model development, respectively. For this work, we focus on Southern African savannas, an important ecosystem for fire studies and one with a good amount of EO data relevant to fire studies. As calibration datasets, we use burned area data, estimated number of fires and vegetation moisture dynamics.
Using Active Learning for Speeding up Calibration in Simulation Models.
Cevik, Mucahit; Ergun, Mehmet Ali; Stout, Natasha K; Trentham-Dietz, Amy; Craven, Mark; Alagoz, Oguzhan
2016-07-01
Most cancer simulation models include unobservable parameters that determine disease onset and tumor growth. These parameters play an important role in matching key outcomes such as cancer incidence and mortality, and their values are typically estimated via a lengthy calibration procedure, which involves evaluating a large number of combinations of parameter values via simulation. The objective of this study is to demonstrate how machine learning approaches can be used to accelerate the calibration process by reducing the number of parameter combinations that are actually evaluated. Active learning is a popular machine learning method that enables a learning algorithm such as artificial neural networks to interactively choose which parameter combinations to evaluate. We developed an active learning algorithm to expedite the calibration process. Our algorithm determines the parameter combinations that are more likely to produce desired outputs and therefore reduces the number of simulation runs performed during calibration. We demonstrate our method using the previously developed University of Wisconsin breast cancer simulation model (UWBCS). In a recent study, calibration of the UWBCS required the evaluation of 378 000 input parameter combinations to build a race-specific model, and only 69 of these combinations produced results that closely matched observed data. By using the active learning algorithm in conjunction with standard calibration methods, we identify all 69 parameter combinations by evaluating only 5620 of the 378 000 combinations. Machine learning methods hold potential in guiding model developers in the selection of more promising parameter combinations and hence speeding up the calibration process. Applying our machine learning algorithm to one model shows that evaluating only 1.49% of all parameter combinations would be sufficient for the calibration. © The Author(s) 2015.
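The budget saving can be illustrated with a toy sketch, under invented assumptions: `simulate` below stands in for an expensive simulation run, a parameter combination "matches observed data" when its output lands near a target, and the active learner grows outward from known matches instead of sweeping the whole grid. None of the names correspond to the actual UWBCS code.

```python
# Toy stand-in for an expensive simulation model (not the UWBCS):
# a parameter pair "matches observed data" if the output is near a target.
def simulate(a, b):
    return a * a + b

TARGET, TOL = 1.0, 0.05

def matches(p):
    return abs(simulate(*p) - TARGET) <= TOL

grid = [(i / 20, j / 20) for i in range(21) for j in range(21)]

evaluated, hits = {}, set()

def evaluate(p):
    ok = matches(p)
    evaluated[p] = ok
    if ok:
        hits.add(p)

# Initial design: a deterministic coarse sweep of the parameter space.
for a in (0.0, 0.25, 0.5, 0.75, 1.0):
    for b in (0.0, 0.25, 0.5, 0.75, 1.0):
        evaluate((a, b))

def dist(p, q):
    return ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5

# Active loop: always evaluate the unevaluated combination closest to a
# known match; stop once nothing lies within one grid step of a match.
while hits:
    cand = [p for p in grid if p not in evaluated]
    if not cand:
        break
    best = min(cand, key=lambda p: min(dist(p, h) for h in hits))
    if min(dist(best, h) for h in hits) > 0.08:
        break
    evaluate(best)
```

With this toy target the matching combinations form a connected band, so region-growing from a few seed matches recovers all of them while leaving much of the grid unevaluated — the same kind of saving the abstract reports at far larger scale with a neural-network learner.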
Using Active Learning for Speeding up Calibration in Simulation Models
Cevik, Mucahit; Ali Ergun, Mehmet; Stout, Natasha K.; Trentham-Dietz, Amy; Craven, Mark; Alagoz, Oguzhan
2015-01-01
Background Most cancer simulation models include unobservable parameters that determine the disease onset and tumor growth. These parameters play an important role in matching key outcomes such as cancer incidence and mortality, and their values are typically estimated via a lengthy calibration procedure, which involves evaluating a large number of combinations of parameter values via simulation. The objective of this study is to demonstrate how machine learning approaches can be used to accelerate the calibration process by reducing the number of parameter combinations that are actually evaluated. Methods Active learning is a popular machine learning method that enables a learning algorithm such as artificial neural networks to interactively choose which parameter combinations to evaluate. We develop an active learning algorithm to expedite the calibration process. Our algorithm determines the parameter combinations that are more likely to produce desired outputs, and therefore reduces the number of simulation runs performed during calibration. We demonstrate our method using the previously developed University of Wisconsin Breast Cancer Simulation Model (UWBCS). Results In a recent study, calibration of the UWBCS required the evaluation of 378,000 input parameter combinations to build a race-specific model, and only 69 of these combinations produced results that closely matched observed data. By using the active learning algorithm in conjunction with standard calibration methods, we identify all 69 parameter combinations by evaluating only 5620 of the 378,000 combinations. Conclusion Machine learning methods hold potential in guiding model developers in the selection of more promising parameter combinations and hence speeding up the calibration process. Applying our machine learning algorithm to one model shows that evaluating only 1.49% of all parameter combinations would be sufficient for the calibration. PMID:26471190
Pan, Wenxiao; Galvin, Janine; Huang, Wei Ling; ...
2018-03-25
In this paper we aim to develop a validated device-scale CFD model that can predict quantitatively both hydrodynamics and CO2 capture efficiency for an amine-based solvent absorber column with random Pall ring packing. A Eulerian porous-media approach and a two-fluid model were employed, in which the momentum and mass transfer equations were closed by literature-based empirical closure models. We proposed a hierarchical approach for calibrating the parameters in the closure models to make them accurate for the packed column. Specifically, a parameter for momentum transfer in the closure was first calibrated based on data from a single experiment. With this calibrated parameter, a parameter in the closure for mass transfer was next calibrated under a single operating condition. Last, the closure of the wetting area was calibrated for each gas velocity at three different liquid flow rates. For each calibration, cross validations were pursued using the experimental data under operating conditions different from those used for calibrations. This hierarchical approach can be generally applied to develop validated device-scale CFD models for different absorption columns.
Tian, Hai-Qing; Wang, Chun-Guang; Zhang, Hai-Jun; Yu, Zhi-Hong; Li, Jian-Kang
2012-11-01
Outlier samples strongly influence the precision of the calibration model in soluble solids content measurement of melons using NIR spectra. According to the possible sources of outlier samples, three methods (predicted concentration residual test; Chauvenet test; leverage and studentized residual test) were used to discriminate these outliers. Nine suspicious outliers were detected in the calibration set, which included 85 fruit samples. Because the 9 suspicious outlier samples may contain some non-outlier samples, they were returned to the model one by one to see whether they influenced the model and its prediction precision. In this way, 5 samples that were helpful to the model rejoined the calibration set, and a new model was developed with a correlation coefficient (r) of 0.889 and a root mean square error of calibration (RMSEC) of 0.601 degrees Brix. For 35 unknown samples, the root mean square error of prediction (RMSEP) was 0.854 degrees Brix. This model performed better than the one developed with no outliers eliminated from the calibration set (r = 0.797, RMSEC = 0.849 degrees Brix, RMSEP = 1.19 degrees Brix), and was more representative and stable than the one with all 9 samples eliminated from the calibration set (r = 0.892, RMSEC = 0.605 degrees Brix, RMSEP = 0.862 degrees Brix).
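Of the three screening methods mentioned, the Chauvenet test is the simplest to show. A sketch on hypothetical Brix readings (not the paper's data): a sample is flagged when the expected number of observations as extreme as it, under a normal model, falls below one half.

```python
import math

def chauvenet_outliers(values):
    """Return values rejected by Chauvenet's criterion: the expected
    count of equally extreme observations, n * P(|Z| >= z), is < 0.5."""
    n = len(values)
    mean = sum(values) / n
    std = (sum((v - mean) ** 2 for v in values) / (n - 1)) ** 0.5
    flagged = []
    for v in values:
        z = abs(v - mean) / std
        tail = math.erfc(z / math.sqrt(2))   # two-sided normal tail probability
        if n * tail < 0.5:
            flagged.append(v)
    return flagged

# Hypothetical soluble-solids readings (degrees Brix) with one suspect value.
brix = [10.1, 10.4, 9.8, 10.0, 10.2, 9.9, 10.3, 14.9]
```

Here `chauvenet_outliers(brix)` flags only the 14.9 reading; the remaining samples would stay in the calibration set, mirroring the paper's step of reclaiming suspicious samples that turn out to be harmless.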
DOT National Transportation Integrated Search
2017-02-01
This project covered the development and calibration of a Dynamic Traffic Assignment (DTA) model and explained the procedures, constraints, and considerations for usage of this model for the Reno-Sparks area roadway network in Northern Nevada. A lite...
Horrey, William J; Lesch, Mary F; Mitsopoulos-Rubens, Eve; Lee, John D
2015-03-01
Humans often make inflated or erroneous estimates of their own ability or performance. Such errors in calibration can be due to incomplete processing, neglect of available information or due to improper weighing or integration of the information and can impact our decision-making, risk tolerance, and behaviors. In the driving context, these outcomes can have important implications for safety. The current paper discusses the notion of calibration in the context of self-appraisals and self-competence as well as in models of self-regulation in driving. We further develop a conceptual framework for calibration in the driving context borrowing from earlier models of momentary demand regulation, information processing, and lens models for information selection and utilization. Finally, using the model we describe the implications for calibration (or, more specifically, errors in calibration) for our understanding of driver distraction, in-vehicle automation and autonomous vehicles, and the training of novice and inexperienced drivers. Copyright © 2014 The Authors. Published by Elsevier Ltd.. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Anh Bui; Nam Dinh; Brian Williams
In addition to a validation data plan, development of advanced techniques for calibration and validation of complex multiscale, multiphysics nuclear reactor simulation codes is a main objective of the CASL VUQ plan. Advanced modeling of LWR systems normally involves a range of physico-chemical models describing multiple interacting phenomena, such as thermal hydraulics, reactor physics, coolant chemistry, etc., which occur over a wide range of spatial and temporal scales. To a large extent, the accuracy of (and uncertainty in) overall model predictions is determined by the correctness of various sub-models, which are not conservation-laws based, but empirically derived from measurement data. Such sub-models normally require extensive calibration before the models can be applied to analysis of real reactor problems. This work demonstrates a case study of calibration of a common model of subcooled flow boiling, which is an important multiscale, multiphysics phenomenon in LWR thermal hydraulics. The calibration process is based on a new strategy of model-data integration, in which all sub-models are simultaneously analyzed and calibrated using multiple sets of data of different types. Specifically, both data on large-scale distributions of void fraction and fluid temperature and data on small-scale physics of wall evaporation were used simultaneously in this work's calibration. In a departure from the traditional (or common-sense) practice of tuning/calibrating complex models, a modern calibration technique based on statistical modeling and Bayesian inference was employed, which allowed simultaneous calibration of multiple sub-models (and related parameters) using different datasets. Quality of data (relevancy, scalability, and uncertainty) could be taken into consideration in the calibration process.
This work presents a step forward in the development and realization of the "CIPS Validation Data Plan" at the Consortium for Advanced Simulation of LWRs to enable quantitative assessment of the CASL modeling of the Crud-Induced Power Shift (CIPS) phenomenon, in particular, and the CASL advanced predictive capabilities, in general. This report is prepared for the Department of Energy's Consortium for Advanced Simulation of LWRs program's VUQ Focus Area.
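The simultaneous use of heterogeneous datasets can be sketched with a toy Bayesian calibration: one scalar parameter enters two different "measurement types", and a random-walk Metropolis sampler targets the joint posterior. The model forms, noise levels, and prior bounds below are invented for illustration and have nothing to do with the actual boiling sub-models.

```python
import math
import random

random.seed(2)

K_TRUE = 2.0
xs = [0.5 * i for i in range(1, 9)]
# Two synthetic datasets of different types, both informative about k:
data_a = [K_TRUE * x + random.gauss(0, 0.1) for x in xs]          # profile-type data
data_b = [K_TRUE ** 2 + random.gauss(0, 0.2) for _ in range(5)]   # point-type data

def log_post(k):
    if not 0.0 < k < 5.0:                     # flat prior on (0, 5)
        return -math.inf
    lp = sum(-0.5 * ((y - k * x) / 0.1) ** 2 for x, y in zip(xs, data_a))
    lp += sum(-0.5 * ((z - k * k) / 0.2) ** 2 for z in data_b)
    return lp

# Random-walk Metropolis over k, calibrating against both datasets at once.
k = 1.0
cur = log_post(k)
samples = []
for step in range(20000):
    prop = k + random.gauss(0, 0.05)
    new = log_post(prop)
    if new > cur or random.random() < math.exp(new - cur):
        k, cur = prop, new
    if step >= 5000:                          # discard burn-in
        samples.append(k)

k_hat = sum(samples) / len(samples)           # posterior mean, near 2.0
```

The payoff over deterministic tuning is that `samples` is a full posterior: dataset quality enters through the noise terms in the likelihood, and parameter uncertainty comes out for free.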
Sin, Gürkan; Van Hulle, Stijn W H; De Pauw, Dirk J W; van Griensven, Ann; Vanrolleghem, Peter A
2005-07-01
Modelling activated sludge systems has gained increasing momentum since the introduction of activated sludge models (ASMs) in 1987. Application of dynamic models for full-scale systems essentially requires a calibration of the chosen ASM to the case under study. Numerous full-scale model applications have been performed so far, mostly based on ad hoc approaches and expert knowledge. Further, each modelling study has followed a different calibration approach: e.g. different influent wastewater characterization methods, different kinetic parameter estimation methods, different selection of parameters to be calibrated, different priorities within the calibration steps, etc. In short, there was no standard approach to performing the calibration study, which makes it difficult, if not impossible, to (1) compare different calibrations of ASMs with each other and (2) perform internal quality checks for each calibration study. To address these concerns, systematic calibration protocols have recently been proposed to bring guidance to the modelling of activated sludge systems and in particular to the calibration of full-scale models. In this contribution four existing calibration approaches (BIOMATH, HSG, STOWA and WERF) will be critically discussed using a SWOT (Strengths, Weaknesses, Opportunities, Threats) analysis. It will also be assessed in what way these approaches can be further developed in view of further improving the quality of ASM calibration. In this respect, the potential of automating some steps of the calibration procedure by use of mathematical algorithms is highlighted.
Connections between survey calibration estimators and semiparametric models for incomplete data
Lumley, Thomas; Shaw, Pamela A.; Dai, James Y.
2012-01-01
Survey calibration (or generalized raking) estimators are a standard approach to the use of auxiliary information in survey sampling, improving on the simple Horvitz–Thompson estimator. In this paper we relate the survey calibration estimators to the semiparametric incomplete-data estimators of Robins and coworkers, and to adjustment for baseline variables in a randomized trial. The development based on calibration estimators explains the ‘estimated weights’ paradox and provides useful heuristics for constructing practical estimators. We present some examples of using calibration to gain precision without making additional modelling assumptions in a variety of regression models. PMID:23833390
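A minimal raking (iterative proportional fitting) sketch of the calibration idea: design weights are rescaled until the weighted margins of the auxiliary variables match known population totals. The sample, weights, and totals below are invented.

```python
# Toy sample: each unit carries a base design weight and two categorical
# auxiliary variables (illustrative values, not survey data).
sample = [
    {"sex": "F", "age": "Y", "w": 10.0},
    {"sex": "F", "age": "O", "w": 12.0},
    {"sex": "M", "age": "Y", "w": 9.0},
    {"sex": "M", "age": "O", "w": 11.0},
    {"sex": "F", "age": "Y", "w": 8.0},
    {"sex": "M", "age": "Y", "w": 10.0},
]

# Known population totals for each margin (consistent: both sum to 100).
targets = {"sex": {"F": 52.0, "M": 48.0}, "age": {"Y": 55.0, "O": 45.0}}

def margin(var, level):
    """Current weighted total for one level of one auxiliary variable."""
    return sum(u["w"] for u in sample if u[var] == level)

# Raking: cycle through the margins, rescaling weights until convergence.
for _ in range(100):
    for var, totals in targets.items():
        for level, t in totals.items():
            cur = margin(var, level)
            for u in sample:
                if u[var] == level:
                    u["w"] *= t / cur
```

The calibrated weights reproduce both margins simultaneously, which is where the precision gain over the plain Horvitz–Thompson estimator comes from when the auxiliaries predict the outcome.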
NASA Astrophysics Data System (ADS)
Ubieta, Eduardo; Hoyo, Itzal del; Valenzuela, Loreto; Lopez-Martín, Rafael; Peña, Víctor de la; López, Susana
2017-06-01
A simulation model of a parabolic-trough solar collector developed in the Modelica® language is calibrated and validated. The calibration is performed in order to approximate the behavior of the solar collector model to that of a real one, given the uncertainty in some of the system parameters; i.e., measured data are used during the calibration process. Afterwards, the calibrated model is validated: the results obtained from the model are compared to those obtained during real operation of a collector at the Plataforma Solar de Almería (PSA).
Sediment calibration strategies of Phase 5 Chesapeake Bay watershed model
Wu, J.; Shenk, G.W.; Raffensperger, Jeff P.; Moyer, D.; Linker, L.C.; ,
2005-01-01
Sediment is a primary constituent of concern for Chesapeake Bay due to its effect on water clarity. Accurate representation of sediment processes and behavior in the Chesapeake Bay watershed model is critical for developing sound load reduction strategies. Sediment calibration remains one of the most difficult components of watershed-scale assessment. This is especially true for the Chesapeake Bay watershed model, given the size of the watershed being modeled and the complexity involved in land and stream simulation processes. To obtain the best calibration, the Chesapeake Bay program has developed four different strategies for sediment calibration of the Phase 5 watershed model: 1) comparing observed and simulated sediment rating curves for different parts of the hydrograph; 2) analyzing change of bed depth over time; 3) relating deposition/scour to total annual sediment loads; and 4) calculating "goodness-of-fit" statistics. These strategies allow a more accurate sediment calibration, and also provide insightful information on sediment processes and behavior in the Chesapeake Bay watershed.
Effect of Using Extreme Years in Hydrologic Model Calibration Performance
NASA Astrophysics Data System (ADS)
Goktas, R. K.; Tezel, U.; Kargi, P. G.; Ayvaz, T.; Tezyapar, I.; Mesta, B.; Kentel, E.
2017-12-01
Hydrological models are useful in predicting and developing management strategies for controlling system behaviour. Specifically, they can be used to evaluate streamflow at ungaged catchments, the effects of climate change or best management practices on water resources, or to identify pollution sources in a watershed. This study is part of a TUBITAK project named "Development of a geographical information system based decision-making tool for water quality management of Ergene Watershed using pollutant fingerprints". Within the scope of this project, water resources in the Ergene Watershed are studied first. Streamgages in the basin are identified and daily streamflow measurements are obtained from the State Hydraulic Works of Turkey. Streamflow data are analysed using box-whisker plots, hydrographs and flow-duration curves, focusing on identification of extreme periods, dry or wet. Then a hydrological model is developed for the Ergene Watershed using HEC-HMS in the Watershed Modeling System (WMS) environment. The model is calibrated for various time periods, including dry and wet ones, and the performance of the calibration is evaluated using Nash-Sutcliffe Efficiency (NSE), the correlation coefficient, percent bias (PBIAS) and root mean square error. It is observed that the calibration period affects model performance, and the main purpose of the development of the hydrological model should guide calibration period selection. Acknowledgement: This study is funded by The Scientific and Technological Research Council of Turkey (TUBITAK) under Project Number 115Y064.
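The calibration performance measures named above are short computations; a sketch on made-up daily flows (any units, as long as observed and simulated match):

```python
def nse(obs, sim):
    """Nash-Sutcliffe Efficiency: 1 is perfect; 0 is no better than
    always predicting the observed mean."""
    mean_obs = sum(obs) / len(obs)
    sse = sum((o - s) ** 2 for o, s in zip(obs, sim))
    return 1.0 - sse / sum((o - mean_obs) ** 2 for o in obs)

def pbias(obs, sim):
    """Percent bias: positive when the model underestimates total flow."""
    return 100.0 * sum(o - s for o, s in zip(obs, sim)) / sum(obs)

def rmse(obs, sim):
    """Root mean square error of the simulated series."""
    return (sum((o - s) ** 2 for o, s in zip(obs, sim)) / len(obs)) ** 0.5

obs = [12.0, 18.0, 30.0, 22.0, 14.0]   # observed flows (illustrative)
sim = [11.0, 20.0, 27.0, 23.0, 13.0]   # simulated flows (illustrative)
```

For these numbers NSE is about 0.92 and PBIAS about +2.1%, i.e. a close fit with a slight underestimate; computing the same statistics separately over dry and wet calibration periods is exactly the comparison the study performs.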
A Review of Calibration Transfer Practices and Instrument Differences in Spectroscopy.
Workman, Jerome J
2018-03-01
Calibration transfer for use with spectroscopic instruments, particularly for near-infrared, infrared, and Raman analysis, has been the subject of multiple articles, research papers, book chapters, and technical reviews. There has been a myriad of approaches published and claims made for resolving the problems associated with transferring calibrations; however, the capability of attaining identical results over time from two or more instruments using an identical calibration still eludes technologists. Calibration transfer, in a precise definition, refers to a series of analytical approaches or chemometric techniques used to attempt to apply a single spectral database, and the calibration model developed using that database, for two or more instruments, with statistically retained accuracy and precision. Ideally, one would develop a single calibration for any particular application, and move it indiscriminately across instruments and achieve identical analysis or prediction results. There are many technical aspects involved in such precision calibration transfer, related to the measuring instrument reproducibility and repeatability, the reference chemical values used for the calibration, the multivariate mathematics used for calibration, and sample presentation repeatability and reproducibility. Ideally, a multivariate model developed on a single instrument would provide a statistically identical analysis when used on other instruments following transfer. This paper reviews common calibration transfer techniques, mostly related to instrument differences, and the mathematics of the uncertainty between instruments when making spectroscopic measurements of identical samples. It does not specifically address calibration maintenance or reference laboratory differences.
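One of the simplest members of the transfer family the review covers is slope-and-bias correction: predictions from a second instrument are regressed onto the master instrument's predictions for a small set of transfer standards, and future readings are corrected with the fitted line. The numbers below are fabricated so that the second instrument has a pure gain-and-offset error.

```python
def fit_line(x, y):
    """Ordinary least squares fit y ~ a*x + b."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    a = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    return a, my - a * mx

# Predicted analyte values for five transfer standards on two instruments
# (fabricated: the slave reads 1.1x + 0.1 relative to the master).
master = [4.0, 5.0, 6.0, 7.0, 8.0]
slave = [4.5, 5.6, 6.7, 7.8, 8.9]

a, b = fit_line(slave, master)
corrected = [a * s + b for s in slave]   # now on the master's scale
```

Real instrument differences are rarely this clean — wavelength shifts and bandpass differences are what motivate the multivariate transfer methods the review surveys — but the slope/bias case shows the shape of the problem.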
Prediction models for clustered data: comparison of a random intercept and standard regression model
2013-01-01
Background When study data are clustered, standard regression analysis is considered inappropriate and analytical techniques for clustered data need to be used. For prediction research in which interest centers on predictor effects at the patient level, random effect regression models are probably preferred over standard regression analysis. It is well known that the random effect parameter estimates and the standard logistic regression parameter estimates are different. Here, we compared random effect and standard logistic regression models for their ability to provide accurate predictions. Methods Using an empirical study on 1642 surgical patients at risk of postoperative nausea and vomiting, who were treated by one of 19 anesthesiologists (clusters), we developed prognostic models either with standard or random intercept logistic regression. External validity of these models was assessed in new patients from other anesthesiologists. We supported our results with simulation studies using intra-class correlation coefficients (ICC) of 5%, 15%, or 30%. Standard performance measures and measures adapted for the clustered data structure were estimated. Results The model developed with random effect analysis showed better discrimination than the standard approach, if the cluster effects were used for risk prediction (standard c-index of 0.69 versus 0.66). In the external validation set, both models showed similar discrimination (standard c-index 0.68 versus 0.67). The simulation study confirmed these results. For datasets with a high ICC (≥15%), model calibration was only adequate in external subjects if the performance measure assumed the same data structure as the model development method: standard calibration measures showed good calibration for the standard developed model, while calibration measures adapting the clustered data structure showed good calibration for the prediction model with random intercept.
Conclusion The models with random intercept discriminate better than the standard model only if the cluster effect is used for predictions. The prediction model with random intercept had good calibration within clusters. PMID:23414436
Bouwmeester, Walter; Twisk, Jos W R; Kappen, Teus H; van Klei, Wilton A; Moons, Karel G M; Vergouwe, Yvonne
2013-02-15
When study data are clustered, standard regression analysis is considered inappropriate and analytical techniques for clustered data need to be used. For prediction research in which interest centers on predictor effects at the patient level, random effect regression models are probably preferred over standard regression analysis. It is well known that the random effect parameter estimates and the standard logistic regression parameter estimates are different. Here, we compared random effect and standard logistic regression models for their ability to provide accurate predictions. Using an empirical study on 1642 surgical patients at risk of postoperative nausea and vomiting, who were treated by one of 19 anesthesiologists (clusters), we developed prognostic models either with standard or random intercept logistic regression. External validity of these models was assessed in new patients from other anesthesiologists. We supported our results with simulation studies using intra-class correlation coefficients (ICC) of 5%, 15%, or 30%. Standard performance measures and measures adapted for the clustered data structure were estimated. The model developed with random effect analysis showed better discrimination than the standard approach, if the cluster effects were used for risk prediction (standard c-index of 0.69 versus 0.66). In the external validation set, both models showed similar discrimination (standard c-index 0.68 versus 0.67). The simulation study confirmed these results. For datasets with a high ICC (≥15%), model calibration was only adequate in external subjects if the performance measure assumed the same data structure as the model development method: standard calibration measures showed good calibration for the standard developed model, while calibration measures adapting the clustered data structure showed good calibration for the prediction model with random intercept.
The models with random intercept discriminate better than the standard model only if the cluster effect is used for predictions. The prediction model with random intercept had good calibration within clusters.
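The c-index the authors report (0.66-0.69) is the concordance probability for binary outcomes, and it can be computed directly; a sketch on fabricated risks and outcomes:

```python
def c_index(risks, outcomes):
    """Concordance (c) index for binary outcomes: the probability that a
    randomly chosen event gets a higher predicted risk than a randomly
    chosen non-event; ties count one half."""
    events = [r for r, y in zip(risks, outcomes) if y == 1]
    nonevents = [r for r, y in zip(risks, outcomes) if y == 0]
    concordant = 0.0
    for e in events:
        for n in nonevents:
            if e > n:
                concordant += 1.0
            elif e == n:
                concordant += 0.5
    return concordant / (len(events) * len(nonevents))

# Fabricated predicted risks and observed outcomes (1 = event).
risks = [0.9, 0.8, 0.7, 0.4, 0.3, 0.2]
outcomes = [1, 1, 0, 1, 0, 0]
```

For a random-intercept model, computing the c-index within each cluster and pooling — rather than over all patients at once — is the "measure adapted for the clustered data structure" the abstract alludes to.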
SWAT Model Configuration, Calibration and Validation for Lake Champlain Basin
The Soil and Water Assessment Tool (SWAT) model was used to develop phosphorus loading estimates for sources in the Lake Champlain Basin. This document describes the model setup and parameterization, and presents calibration results.
Ecologically-focused Calibration of Hydrological Models for Environmental Flow Applications
NASA Astrophysics Data System (ADS)
Adams, S. K.; Bledsoe, B. P.
2015-12-01
Hydrologic alteration resulting from watershed urbanization is a common cause of aquatic ecosystem degradation. Developing environmental flow criteria for urbanizing watersheds requires quantitative flow-ecology relationships that describe biological responses to streamflow alteration. Ideally, gaged flow data are used to develop flow-ecology relationships; however, biological monitoring sites are frequently ungaged. For these ungaged locations, hydrologic models must be used to predict streamflow characteristics through calibration and testing at gaged sites, followed by extrapolation to ungaged sites. Physically-based modeling of rainfall-runoff response has frequently utilized "best overall fit" calibration criteria, such as the Nash-Sutcliffe Efficiency (NSE), that do not necessarily focus on specific aspects of the flow regime relevant to biota of interest. This study investigates the utility of employing flow characteristics known a priori to influence regional biological endpoints as "ecologically-focused" calibration criteria compared to traditional, "best overall fit" criteria. For this study, 19 continuous HEC-HMS 4.0 models were created in coastal southern California and calibrated to hourly USGS streamflow gages with nearby biological monitoring sites using one "best overall fit" and three "ecologically-focused" criteria: NSE, Richards-Baker Flashiness Index (RBI), percent of time when the flow is < 1 cfs (%<1), and a Combined Calibration (RBI and %<1). Calibrated models were compared using calibration accuracy, environmental flow metric reproducibility, and the strength of flow-ecology relationships. 
Results indicate that "ecologically-focused" criteria can be calibrated with high accuracy and may provide stronger flow-ecology relationships than "best overall fit" criteria, especially when multiple "ecologically-focused" criteria are used in concert, despite an inability to accurately reproduce additional types of ecological flow metrics to which the models are not explicitly calibrated.
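The two "ecologically-focused" criteria are short computations; a sketch on made-up flow records:

```python
def rb_flashiness(q):
    """Richards-Baker Flashiness Index: sum of absolute step-to-step
    changes in flow divided by total flow; higher means flashier."""
    changes = sum(abs(q[i] - q[i - 1]) for i in range(1, len(q)))
    return changes / sum(q)

def pct_below(q, threshold=1.0):
    """Percent of time steps with flow below a threshold (e.g. 1 cfs)."""
    return 100.0 * sum(1 for x in q if x < threshold) / len(q)

steady = [5.0] * 6                        # constant baseflow record
flashy = [1.0, 9.0, 2.0, 8.0, 1.0, 9.0]   # urban-style flashy record
```

Here `rb_flashiness` returns 0 for the steady record and 1.2 for the flashy one; calibrating a rainfall-runoff model to reproduce statistics like these, rather than NSE alone, is the paper's "ecologically-focused" strategy.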
NASA Technical Reports Server (NTRS)
Tripp, John S.; Tcheng, Ping
1999-01-01
Statistical tools, previously developed for nonlinear least-squares estimation of multivariate sensor calibration parameters and the associated calibration uncertainty analysis, have been applied to single- and multiple-axis inertial model attitude sensors used in wind tunnel testing to measure angle of attack and roll angle. The analysis provides confidence and prediction intervals of calibrated sensor measurement uncertainty as functions of applied input pitch and roll angles. A comparative performance study of various experimental designs for inertial sensor calibration is presented along with corroborating experimental data. The importance of replicated calibrations over extended time periods has been emphasized; replication provides independent estimates of calibration precision and bias uncertainties, statistical tests for calibration or modeling bias uncertainty, and statistical tests for sensor parameter drift over time. A set of recommendations for a new standardized model attitude sensor calibration method and usage procedures is included. The statistical information provided by these procedures is necessary for the uncertainty analysis of aerospace test results now required by users of industrial wind tunnel test facilities.
NASA Technical Reports Server (NTRS)
Hovenac, Edward A.; Lock, James A.
1993-01-01
Scattering calculations using a detailed model of the multimode laser beam in the forward-scattering spectrometer probe (FSSP) were carried out using a recently developed extension to Mie scattering theory. From this model, new calibration curves for the FSSP were calculated. The difference between the old calibration curves and the new ones is small for droplet diameters less than 10 microns, but the difference increases to approximately 10 percent at diameters of 50 microns. When using glass beads to calibrate the FSSP, calibration errors can be minimized by using glass beads of many different diameters, over the entire range of the FSSP. If the FSSP is calibrated using one-diameter glass beads, then the new formalism is necessary to extrapolate the calibration over the entire range.
Geng, Zongyu; Yang, Feng; Chen, Xi; Wu, Nianqiang
2016-01-01
It remains a challenge to accurately calibrate a sensor subject to environmental drift. The calibration task for such a sensor is to quantify the relationship between the sensor’s response and its exposure condition, which is specified by not only the analyte concentration but also the environmental factors such as temperature and humidity. This work developed a Gaussian Process (GP)-based procedure for the efficient calibration of sensors in drifting environments. Adopted as the calibration model, GP is not only able to capture the possibly nonlinear relationship between the sensor responses and the various exposure-condition factors, but also able to provide valid statistical inference for uncertainty quantification of the target estimates (e.g., the estimated analyte concentration of an unknown environment). Built on GP’s inference ability, an experimental design method was developed to achieve efficient sampling of calibration data in a batch sequential manner. The resulting calibration procedure, which integrates the GP-based modeling and experimental design, was applied on a simulated chemiresistor sensor to demonstrate its effectiveness and its efficiency over the traditional method. PMID:26924894
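The GP machinery the abstract relies on can be written down compactly. The following is a toy numpy sketch (kernel choice, hyperparameters, and function names are assumptions, not the authors' implementation): the posterior standard deviation is exactly the uncertainty-quantification quantity that drives their batch-sequential experimental design.

```python
import numpy as np

def gp_posterior(X, y, Xs, length=1.0, sigma_f=1.0, noise=1e-2):
    """Gaussian-process regression with an RBF kernel.
    X: (n, d) calibration exposure conditions (concentration, temperature, ...);
    y: (n,) sensor responses; Xs: (m, d) query conditions.
    Returns posterior mean and pointwise standard deviation."""
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return sigma_f ** 2 * np.exp(-0.5 * d2 / length ** 2)
    K = k(X, X) + noise * np.eye(len(X))
    Ks = k(X, Xs)
    Kss = k(Xs, Xs)
    alpha = np.linalg.solve(K, y)
    mean = Ks.T @ alpha
    cov = Kss - Ks.T @ np.linalg.solve(K, Ks)
    return mean, np.sqrt(np.clip(np.diag(cov), 0.0, None))
```

A design scheme in this spirit would add the next calibration batch where the posterior standard deviation over the exposure-condition space is largest.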
DOE Office of Scientific and Technical Information (OSTI.GOV)
Khan, Yasin; Mathur, Jyotirmay; Bhandari, Mahabir S
2016-01-01
The paper describes a case study of an information technology office building with a radiant cooling system and a conventional variable air volume (VAV) system installed side by side so that performance can be compared. First, a 3D model of the building covering architecture, occupancy, and HVAC operation was developed in the EnergyPlus simulation tool. Second, a calibration methodology was applied to develop the base case for assessing the energy-saving potential. This paper details the calibration of the whole-building energy model down to the component level, including lighting, equipment, and HVAC components such as chillers, pumps, cooling towers, and fans. A new methodology for the systematic selection of influence parameters was also developed for calibrating a simulation model that requires long execution times. The error at the whole-building level, measured as mean bias error (MBE), is 0.2%, and the coefficient of variation of the root mean square error (CvRMSE) is 3.2%. The total hourly errors for the HVAC system are MBE = 8.7% and CvRMSE = 23.9%, which meet the criteria of ASHRAE Guideline 14 (2002) for hourly calibration. Suggestions are given for generalizing the energy savings of radiant cooling systems to existing buildings, and the calibrated model was used as the base case for quantifying the energy-saving potential of the radiant cooling system. It was found that a radiant cooling system integrated with a dedicated outdoor air system (DOAS) can save 28% energy compared with the conventional VAV system.
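The two calibration statistics used above have standard definitions. A minimal numpy sketch (function names are ours; the hourly tolerances commonly cited from ASHRAE Guideline 14 are MBE within ±10% and CvRMSE within 30%):

```python
import numpy as np

def mbe_percent(measured, simulated):
    """Mean bias error as a percentage of total measured energy use."""
    measured = np.asarray(measured, float)
    simulated = np.asarray(simulated, float)
    return 100.0 * np.sum(measured - simulated) / np.sum(measured)

def cvrmse_percent(measured, simulated):
    """Coefficient of variation of the RMSE, relative to the measured mean."""
    measured = np.asarray(measured, float)
    simulated = np.asarray(simulated, float)
    rmse = np.sqrt(np.mean((measured - simulated) ** 2))
    return 100.0 * rmse / np.mean(measured)
```

A model run passing the hourly check would satisfy `abs(mbe_percent(m, s)) <= 10` and `cvrmse_percent(m, s) <= 30` on hourly data.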
Calibration Against the Moon. I: A Disk-Resolved Lunar Model for Absolute Reflectance Calibration
2010-01-01
The report presents a model of the disk-resolved Moon at visible to near-infrared wavelengths, developed in order to use the Moon as a calibration reference for absolute reflectance.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cole, Charles R.; Bergeron, Marcel P.; Wurstner, Signe K.
2001-05-31
This report describes a new initiative to strengthen the technical defensibility of predictions made with the Hanford site-wide groundwater flow and transport model. The focus is on characterizing major uncertainties in the current model. PNNL will develop and implement a calibration approach and methodology that can be used to evaluate alternative conceptual models of the Hanford aquifer system. The calibration process will involve a three-dimensional transient inverse calibration of each numerical model to historical observations of hydraulic and water quality impacts to the unconfined aquifer system from Hanford operations since the mid-1940s.
Wu, Yiping; Liu, Shuguang; Li, Zhengpeng; Dahal, Devendra; Young, Claudia J.; Schmidt, Gail L.; Liu, Jinxun; Davis, Brian; Sohl, Terry L.; Werner, Jeremy M.; Oeding, Jennifer
2014-01-01
Process-oriented ecological models are frequently used for predicting potential impacts of global changes such as climate and land-cover changes, which can be useful for policy making. It is critical but challenging to automatically derive optimal parameter values at different scales, especially at the regional scale, and to validate model performance. In this study, we developed an automatic calibration (auto-calibration) function for a well-established biogeochemical model—the General Ensemble Biogeochemical Modeling System (GEMS)-Erosion Deposition Carbon Model (EDCM)—using a data assimilation technique, the Shuffled Complex Evolution algorithm, and a model-inversion R package, the Flexible Modeling Environment (FME). The new functionality can support multi-parameter and multi-objective auto-calibration of EDCM at both the pixel and regional levels. We also developed a post-processing procedure for GEMS that provides options to save the pixel-based or aggregated county-land cover specific parameter values for subsequent simulations. In our case study, we successfully applied the updated model (EDCM-Auto) to a single crop pixel with a corn–wheat rotation and to a large ecological region (Level II)—the Central USA Plains. The evaluation results indicate that EDCM-Auto is applicable at multiple scales and is capable of handling land cover changes (e.g., crop rotations). The model also performs well in capturing the spatial pattern of grain yield production for crops and net primary production (NPP) for other ecosystems across the region, which is a good example of implementing calibration and validation of ecological models with readily available survey data (grain yield) and remote sensing data (NPP) at regional and national levels. The developed platform for auto-calibration can be readily expanded to incorporate other model inversion algorithms and potential R packages, and can also be applied to other ecological models.
Precision process calibration and CD predictions for low-k1 lithography
NASA Astrophysics Data System (ADS)
Chen, Ting; Park, Sangbong; Berger, Gabriel; Coskun, Tamer H.; de Vocht, Joep; Chen, Fung; Yu, Linda; Hsu, Stephen; van den Broeke, Doug; Socha, Robert; Park, Jungchul; Gronlund, Keith; Davis, Todd; Plachecki, Vince; Harris, Tom; Hansen, Steve; Lambson, Chuck
2005-06-01
Leading resist calibration for sub-0.3 k1 lithography demands accuracy <2nm for CD through pitch. An accurately calibrated resist process is the prerequisite for establishing production-worthy manufacturing under extreme low k1. From an integrated imaging point of view, the following key components must be simultaneously considered during the calibration - high numerical aperture (NA>0.8) imaging characteristics, customized illuminations (measured vs. modeled pupil profiles), resolution enhancement technology (RET) mask with OPC, reticle metrology, and resist thin film substrate. For imaging at NA approaching unity, polarized illumination can significantly impact contrast formation in the resist film stack, and therefore it is an important factor to consider in the CD-based resist calibration. For aggressive DRAM memory core designs at k1<0.3, pattern-specific illumination optimization has proven to be critical for achieving the required imaging performance. Various optimization techniques from source profile optimization with fixed mask design to the combined source and mask optimization have been considered for customer designs and available imaging capabilities. For successful low-k1 process development, verification of the optimization results can only be made with a sufficiently tunable resist model that can predict the wafer printing accurately under various optimized process settings. We have developed, for resist patterning under aggressive low-k1 conditions, a novel 3D diffusion model equipped with double-Gaussian convolution in each dimension. Resist calibration with the new diffusion model has demonstrated a fitness and CD prediction accuracy that rival or outperform the traditional 3D physical resist models. In this work, we describe our empirical approach to achieving the nm-scale precision for advanced lithography process calibrations, using either measured 1D CD through-pitch or 2D memory core patterns.
We show that for ArF imaging, the current resist development and diffusion modeling can readily achieve ~1-2nm max CD errors for common 1D through-pitch and aggressive 2D memory core resist patterns. Sensitivities of the calibrated models to various process parameters are analyzed, including the comparison between the measured and modeled (Gaussian or GRAIL) pupil profiles. We also report our preliminary calibration results under selected polarized illumination conditions.
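The double-Gaussian convolution idea can be illustrated in one dimension. This is an assumed stand-in, not the authors' 3D model: a latent-image profile is blurred with a normalized weighted sum of a narrow and a wide Gaussian, mimicking two diffusion length scales.

```python
import numpy as np

def double_gaussian_blur(profile, dx, s1, s2, w=0.5):
    """Convolve a 1-D latent-image profile with a weighted sum of two
    normalized Gaussians of widths s1 and s2 (grid spacing dx).
    The weight w splits the kernel mass between the two length scales."""
    n = len(profile)
    x = (np.arange(n) - n // 2) * dx
    def g(s):
        k = np.exp(-0.5 * (x / s) ** 2)
        return k / k.sum()  # normalize so the blur conserves total intensity
    kernel = w * g(s1) + (1.0 - w) * g(s2)
    return np.convolve(profile, kernel, mode="same")
```

Calibrating such a model would mean fitting s1, s2, and w so that extracted CDs match measured 1D through-pitch or 2D pattern data.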
Calibration of the heat balance model for prediction of car climate
NASA Astrophysics Data System (ADS)
Pokorný, Jan; Fišer, Jan; Jícha, Miroslav
2012-04-01
In this paper, the authors describe the development of a heat balance model to predict car cabin climate and heat load. The model is implemented in the Modelica language, with Dymola as the interpreter. It is a dynamical system that describes heat exchange between the car cabin and the ambient environment. Inside the cabin, heat exchange among the air zone, the interior, and the air-conditioning system is considered. The model treats 1D heat transfer with heat accumulation and accounts for the relative movement of the Sun with respect to the cabin while the car is moving. Measurements under real operating conditions provided the data for model calibration. The model was calibrated for Škoda Felicia parking and summer driving scenarios.
Hill, M.C.; D'Agnese, F. A.; Faunt, C.C.
2000-01-01
Fourteen guidelines are described which are intended to produce calibrated groundwater models likely to represent the associated real systems more accurately than typically used methods. The 14 guidelines are discussed in the context of the calibration of a regional groundwater flow model of the Death Valley region in the southwestern United States. This groundwater flow system contains two sites of national significance from which the subsurface transport of contaminants could be or is of concern: Yucca Mountain, which is the potential site of the United States high-level nuclear-waste disposal; and the Nevada Test Site, which contains a number of underground nuclear-testing locations. This application of the guidelines demonstrates how they may be used for model calibration and evaluation, and also to direct further model development and data collection.
Niraula, Rewati; Norman, Laura A.; Meixner, Thomas; Callegary, James B.
2012-01-01
In most watershed-modeling studies, flow is calibrated at one monitoring site, usually at the watershed outlet. Like many arid and semi-arid watersheds, the main reach of the Santa Cruz watershed, located on the Arizona-Mexico border, is discontinuous for most of the year except during large flood events, and therefore the flow characteristics at the outlet do not represent the entire watershed. Calibration is required at multiple locations along the Santa Cruz River to improve model reliability. The objective of this study was to best portray surface water flow in this semiarid watershed and evaluate the effect of multi-gage calibration on flow predictions. In this study, the Soil and Water Assessment Tool (SWAT) was calibrated at seven monitoring stations, which improved model performance and increased the reliability of flow predictions in the Santa Cruz watershed. The parameters to which flow was most sensitive were found to be the curve number (CN2), soil evaporation compensation coefficient (ESCO), threshold water depth in the shallow aquifer for return flow to occur (GWQMN), base flow alpha factor (Alpha_Bf), and effective hydraulic conductivity of the soil layer (Ch_K2). In comparison, when the model was established with a single calibration at the watershed outlet, flow predictions at other monitoring gages were inaccurate. This study emphasizes the importance of multi-gage calibration to develop a reliable watershed model in arid and semiarid environments. The developed model, with further calibration of water quality parameters, will be an integral part of the Santa Cruz Watershed Ecosystem Portfolio Model (SCWEPM), an online decision support tool, to assess the impacts of climate change and urban growth in the Santa Cruz watershed.
Visible spectroscopy calibration transfer model in determining pH of Sala mangoes
NASA Astrophysics Data System (ADS)
Yahaya, O. K. M.; MatJafri, M. Z.; Aziz, A. A.; Omar, A. F.
2015-05-01
The purpose of this study is to compare the efficiency of calibration transfer procedures between three spectrometers, two from Ocean Optics Inc., namely the QE65000 and the Jaz, and the ASD FieldSpec 3, in measuring the pH of Sala mango by visible reflectance spectroscopy. This study evaluates the ability of these spectrometers to measure the pH of Sala mango by applying similar calibration algorithms through direct calibration transfer. The technique designates one spectrometer as the master instrument and another as the slave. For Set 1, the multiple linear regression (MLR) calibration model generated using the QE65000 spectrometer is transferred to the Jaz spectrometer and vice versa. The same technique is applied for Set 2, with the QE65000 calibration transferred to the FieldSpec 3 spectrometer and vice versa. For Set 1, the results showed that the QE65000 spectrometer established a calibration model with higher accuracy than that of the Jaz spectrometer. In addition, the calibration model developed on the Jaz spectrometer successfully predicted the pH of Sala mango measured using the QE65000 spectrometer, with a root mean square error of prediction RMSEP = 0.092 pH and coefficient of determination R2 = 0.892. Moreover, the best prediction result is obtained for Set 2, when the calibration model developed on the QE65000 spectrometer is successfully transferred to the FieldSpec 3, with R2 = 0.839 and RMSEP = 0.16 pH.
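The MLR calibration and its two quality metrics (RMSEP and R²) can be sketched with ordinary least squares in numpy. This is illustrative only (function names are ours, and real spectral calibration would use selected wavelengths as the columns of `X`):

```python
import numpy as np

def fit_mlr(X, y):
    """Ordinary least squares with intercept (multiple linear regression)."""
    A = np.column_stack([np.ones(len(X)), X])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef

def predict(coef, X):
    return np.column_stack([np.ones(len(X)), X]) @ coef

def rmsep(y_true, y_pred):
    """Root mean square error of prediction."""
    d = np.asarray(y_true, float) - np.asarray(y_pred, float)
    return float(np.sqrt(np.mean(d ** 2)))

def r2(y_true, y_pred):
    """Coefficient of determination."""
    y_true = np.asarray(y_true, float)
    y_pred = np.asarray(y_pred, float)
    return 1.0 - np.sum((y_true - y_pred) ** 2) / np.sum((y_true - y_true.mean()) ** 2)
```

Direct calibration transfer then amounts to fitting `coef` on the master instrument's spectra and evaluating `rmsep`/`r2` on spectra recorded by the slave.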
Error-in-variables models in calibration
NASA Astrophysics Data System (ADS)
Lira, I.; Grientschnig, D.
2017-12-01
In many calibration operations, the stimuli applied to the measuring system or instrument under test are derived from measurement standards whose values may be considered to be perfectly known. In that case, it is assumed that calibration uncertainty arises solely from inexact measurement of the responses, from imperfect control of the calibration process and from the possible inaccuracy of the calibration model. However, the premise that the stimuli are completely known is never strictly fulfilled and in some instances it may be grossly inadequate. Then, error-in-variables (EIV) regression models have to be employed. In metrology, these models have been approached mostly from the frequentist perspective. In contrast, not much guidance is available on their Bayesian analysis. In this paper, we first present a brief summary of the conventional statistical techniques that have been developed to deal with EIV models in calibration. We then proceed to discuss the alternative Bayesian framework under some simplifying assumptions. Through a detailed example about the calibration of an instrument for measuring flow rates, we provide advice on how the user of the calibration function should employ the latter framework for inferring the stimulus acting on the calibrated device when, in use, a certain response is measured.
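One classical frequentist treatment of the EIV straight-line problem is total (orthogonal) least squares, which penalizes perpendicular distances so that errors in the stimulus and in the response are treated symmetrically. A numpy sketch (illustrative only; not the Bayesian framework of the paper, and it assumes equal error variances on both axes and a non-vertical line):

```python
import numpy as np

def orthogonal_fit(x, y):
    """Total least squares straight-line fit y = intercept + slope * x,
    allowing for errors in both x (stimulus) and y (response)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xc, yc = x - x.mean(), y - y.mean()
    # the first right singular vector spans the best-fit direction
    _, _, vt = np.linalg.svd(np.column_stack([xc, yc]))
    vx, vy = vt[0]
    slope = vy / vx
    intercept = y.mean() - slope * x.mean()
    return intercept, slope
```

Unlike ordinary regression of y on x, this fit is symmetric: swapping the roles of stimulus and response inverts the slope exactly.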
Crop physiology calibration in the CLM
Bilionis, I.; Drewniak, B. A.; Constantinescu, E. M.
2015-04-15
Farming occupies an increasing share of the land surface as population grows and agriculture is increasingly applied to non-nutritional purposes such as biofuel production. This agricultural expansion exerts an increasing impact on the terrestrial carbon cycle. In order to understand the impact of such processes, the Community Land Model (CLM) has been augmented with a CLM-Crop extension that simulates the development of three crop types: maize, soybean, and spring wheat. The CLM-Crop model is a complex system that relies on a suite of parametric inputs that govern plant growth under a given atmospheric forcing and available resources. CLM-Crop development used measurements of gross primary productivity (GPP) and net ecosystem exchange (NEE) from AmeriFlux sites to choose parameter values that optimize crop productivity in the model. In this paper, we calibrate these parameters for one crop type, soybean, in order to provide a faithful projection in terms of both plant development and net carbon exchange. Calibration is performed in a Bayesian framework by developing a scalable and adaptive scheme based on sequential Monte Carlo (SMC). The model showed significant improvement of crop productivity with the new calibrated parameters. We demonstrate that the calibrated parameters are applicable across alternative years and different sites.
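The SMC idea can be caricatured in a single importance-resampling stage: sample parameters from the prior, weight them by the data likelihood, and resample. This heavily simplified numpy sketch (the model, parameter, and all names are invented; a real SMC scheme would anneal through a sequence of distributions) calibrates one scalar parameter of a toy productivity model:

```python
import numpy as np

rng = np.random.default_rng(0)

def smc_calibrate(model, data, prior_lo, prior_hi, n_particles=2000, sigma=0.1):
    """One-stage sequential importance resampling for a scalar parameter.
    model(theta) must return predictions with the same shape as data."""
    theta = rng.uniform(prior_lo, prior_hi, n_particles)      # prior samples
    pred = np.array([model(t) for t in theta])
    # Gaussian log-likelihood of the observations under each particle
    logw = -0.5 * ((pred - data) ** 2).sum(axis=1) / sigma ** 2
    w = np.exp(logw - logw.max())
    w /= w.sum()
    idx = rng.choice(n_particles, n_particles, p=w)           # resample
    return theta[idx]

# hypothetical toy "crop" model: output linear in one efficiency parameter
forcing = np.arange(1.0, 6.0)
obs = 0.6 * forcing                                           # true parameter 0.6
posterior = smc_calibrate(lambda t: t * forcing, obs, 0.0, 1.0)
```

The resampled `posterior` concentrates near the true parameter value, and its spread quantifies the remaining calibration uncertainty.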
The investigation of social networks based on multi-component random graphs
NASA Astrophysics Data System (ADS)
Zadorozhnyi, V. N.; Yudin, E. B.
2018-01-01
Methods for calibrating non-homogeneous random graphs are developed for social network simulation. The graphs are calibrated to the degree distributions of the vertices and the edges. The mathematical foundation of the methods is formed by the theory of random graphs with the nonlinear preferential attachment rule and the theory of Erdős-Rényi random graphs. Well-calibrated network graph models, and computer experiments with these models, would help developers (owners) of the networks to predict their development correctly and to choose effective strategies for controlling network projects.
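A nonlinear preferential attachment generator is simple to sketch in pure Python (function name and defaults are ours, not the paper's): attachment probability is proportional to degree raised to an exponent `alpha`, and calibration would mean tuning `alpha` (and related parameters) until the simulated degree distribution matches the observed network's.

```python
import random

def pref_attachment_graph(n, m=2, alpha=1.0, seed=1):
    """Grow a graph where each new vertex links to m existing vertices,
    chosen with probability proportional to degree**alpha.
    alpha=1 recovers linear (Barabasi-Albert-style) attachment;
    alpha != 1 gives the nonlinear preferential attachment rule."""
    rng = random.Random(seed)
    deg = {0: 1, 1: 1}          # start from a single edge
    edges = [(0, 1)]
    for v in range(2, n):
        nodes = list(deg)
        weights = [deg[u] ** alpha for u in nodes]
        targets = set()
        while len(targets) < min(m, len(nodes)):
            targets.add(rng.choices(nodes, weights=weights)[0])
        for u in targets:
            edges.append((v, u))
            deg[u] += 1
        deg[v] = len(targets)
    return deg, edges
```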
2012-09-28
… spectral-geotechnical libraries and models developed during remote sensing and calibration/validation campaigns conducted by NRL and collaborating institutions in four … (2010; Bachmann, Fry, et al., 2012a). The NRL HITT tool is a model for how we develop and validate software, and the future development of tools by …
Software Tools For Building Decision-support Models For Flood Emergency Situations
NASA Astrophysics Data System (ADS)
Garrote, L.; Molina, M.; Ruiz, J. M.; Mosquera, J. C.
The SAIDA decision-support system was developed by the Spanish Ministry of the Environment to provide assistance to decision-makers during flood situations. SAIDA has been tentatively implemented in two test basins: Jucar and Guadalhorce, and the Ministry is currently planning to have it implemented in all major Spanish basins in a few years' time. During the development cycle of SAIDA, the need for providing assistance to end-users in model definition and calibration was clearly identified. System developers usually emphasise abstraction and generality with the goal of providing a versatile software environment. End users, on the other hand, require concretion and specificity to adapt the general model to their local basins. As decision-support models become more complex, the gap between model developers and users gets wider: who takes care of model definition, calibration and validation? Initially, model developers perform these tasks, but the scope is usually limited to a few small test basins. Before the model enters the operational stage, end users must get involved in model construction and calibration, in order to gain confidence in the model recommendations. However, getting the users involved in these activities is a difficult task. The goal of this research is to develop representation techniques for simulation and management models in order to define, develop and validate a mechanism, supported by a software environment, oriented to provide assistance to the end-user in building decision models for the prediction and management of river floods in real time. The system is based on three main building blocks: a library of simulators of the physical system, an editor to assist the user in building simulation models, and a machine learning method to calibrate decision models based on the simulation models provided by the user.
NASA Astrophysics Data System (ADS)
Ruiz-Pérez, Guiomar; Koch, Julian; Manfreda, Salvatore; Caylor, Kelly; Francés, Félix
2017-12-01
Ecohydrological modeling studies in developing regions, such as sub-Saharan Africa, often face extensive parameter requirements and limited available data. Satellite remote sensing data may be able to fill this gap, but require novel methodologies to exploit their spatio-temporal information so that it can be incorporated into model calibration and validation frameworks. The present study tackles this problem by suggesting an automatic calibration procedure, based on empirical orthogonal functions, for distributed daily ecohydrological models. The procedure is tested with the support of remote sensing data in a data-scarce environment, the upper Ewaso Ngiro river basin in Kenya. In the present application, the TETIS-VEG model is calibrated using only NDVI (Normalized Difference Vegetation Index) data derived from MODIS. The results demonstrate that (1) satellite data of vegetation dynamics can be used to calibrate and validate ecohydrological models in water-controlled and data-scarce regions, (2) the model calibrated using only satellite data is able to reproduce both the spatio-temporal vegetation dynamics and the observed discharge at the outlet and (3) the proposed automatic calibration methodology works satisfactorily and allows for a straightforward incorporation of spatio-temporal data into the calibration and validation framework of a model.
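Empirical orthogonal function (EOF) analysis, the backbone of the calibration procedure above, is a singular value decomposition of the space-time anomaly field. A minimal numpy sketch (function name is ours; the study's actual objective function built on the EOFs is not reproduced here):

```python
import numpy as np

def eof_decompose(field):
    """EOF analysis of a (time, space) field via SVD of its anomalies.
    Returns spatial patterns (EOFs), temporal principal components,
    and the fraction of variance explained by each mode."""
    anomalies = field - field.mean(axis=0)        # remove the temporal mean
    U, s, Vt = np.linalg.svd(anomalies, full_matrices=False)
    variance_frac = s ** 2 / np.sum(s ** 2)
    return Vt, U * s, variance_frac
```

A calibration in this spirit compares the leading EOFs (and their explained variance) of simulated and satellite-observed NDVI, rewarding parameter sets that reproduce the dominant spatio-temporal patterns.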
Calibration of X-Ray diffractometer by the experimental comparison method
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dudka, A. P., E-mail: dudka@ns.crys.ras.ru
2015-07-15
Software for calibrating an X-ray diffractometer with an area detector has been developed. It is proposed to search for detector and goniometer calibration models whose parameters are reproduced in a series of measurements on a reference crystal. Reference (standard) crystals are prepared during the investigation; they should provide the agreement of structural models in repeated analyses. The technique developed has been used to calibrate Xcalibur Sapphire and Eos, Gemini Ruby (Agilent) and Apex x8 and Apex Duo (Bruker) diffractometers. The main conclusions are as follows: the calibration maps are stable for several years and can be used to improve structural results, verified CCD detectors exhibit significant inhomogeneity of the efficiency (response) function, and a Bruker goniometer introduces smaller distortions than an Agilent goniometer.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tripathi, Markandey M.; Krishnan, Sundar R.; Srinivasan, Kalyan K.
Chemiluminescence emissions from OH*, CH*, C2, and CO2 formed within the reaction zone of premixed flames depend upon the fuel-air equivalence ratio in the burning mixture. In the present paper, a new partial least squares regression (PLS-R) based multivariate sensing methodology is investigated and compared with an OH*/CH* intensity-ratio-based calibration model for sensing equivalence ratio in atmospheric methane-air premixed flames. Five replications of spectral data at nine different equivalence ratios ranging from 0.73 to 1.48 were used in the calibration of both models. During model development, the PLS-R model was initially validated with the calibration data set using the leave-one-out cross-validation technique. Since the PLS-R model used the entire raw spectral intensities, it did not need the nonlinear background subtraction of CO2 emission that is required for typical OH*/CH* intensity-ratio calibrations. An unbiased spectral data set (not used in the PLS-R model development), for 28 different equivalence ratio conditions ranging from 0.71 to 1.67, was used to predict equivalence ratios using the PLS-R and the intensity-ratio calibration models. It was found that the equivalence ratios predicted with the PLS-R based multivariate calibration model matched the experimentally measured equivalence ratios within 7%, whereas the OH*/CH* intensity-ratio calibration grossly underpredicted equivalence ratios in comparison to measured equivalence ratios, especially under rich conditions (equivalence ratio > 1.2). The practical implications of the chemiluminescence-based multivariate equivalence ratio sensing methodology are also discussed.
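PLS regression with a single response (PLS1) can be written out via the NIPALS algorithm. A numpy sketch (not the authors' code; function names are invented): spectra are the rows of `X`, the equivalence ratio is `y`, and the number of latent components is the key tuning choice.

```python
import numpy as np

def pls1_fit(X, y, n_comp=2):
    """PLS1 regression via NIPALS. Returns intercept and coefficient vector."""
    X = np.asarray(X, float)
    y = np.asarray(y, float)
    xm, ym = X.mean(0), y.mean()
    Xc, yc = X - xm, y - ym
    W, P, Q = [], [], []
    for _ in range(n_comp):
        w = Xc.T @ yc                   # weights from X-y covariance
        w /= np.linalg.norm(w)
        t = Xc @ w                      # scores
        tt = t @ t
        p = Xc.T @ t / tt               # X loadings
        q = (yc @ t) / tt               # y loading
        Xc = Xc - np.outer(t, p)        # deflate
        yc = yc - q * t
        W.append(w); P.append(p); Q.append(q)
    W, P, Q = np.array(W).T, np.array(P).T, np.array(Q)
    B = W @ np.linalg.solve(P.T @ W, Q)
    return ym - xm @ B, B

def pls1_predict(b0, B, X):
    return b0 + np.asarray(X, float) @ B
```

Unlike the intensity-ratio calibration, the coefficient vector `B` spreads the regression over the whole spectrum, which is why no separate CO2 background subtraction is needed.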
NASA Technical Reports Server (NTRS)
Epperson, David L.; Davis, Jerry M.; Bloomfield, Peter; Karl, Thomas R.; Mcnab, Alan L.; Gallo, Kevin P.
1995-01-01
Multiple regression techniques were used to predict surface shelter temperatures based on the time period 1986-89 using upper-air data from the European Centre for Medium-Range Weather Forecasts (ECMWF) to represent the background climate and site-specific data to represent the local landscape. Global monthly mean temperature models were developed using data from over 5000 stations available in the Global Historical Climate Network (GHCN). Monthly maximum, mean, and minimum temperature models for the United States were also developed using data from over 1000 stations available in the U.S. Cooperative (COOP) Network and comparative monthly mean temperature models were developed using over 1150 U.S. stations in the GHCN. Three-, six-, and full-variable models were developed for comparative purposes. Inferences about the variables selected for the various models were easier for the GHCN models, which displayed month-to-month consistency in which variables were selected, than for the COOP models, which were assigned a different list of variables for nearly every month. These and other results suggest that global calibration is preferred because data from the global spectrum of physical processes that control surface temperatures are incorporated in a global model. All of the models that were developed in this study validated relatively well, especially the global models. Recalibration of the models with validation data resulted in only slightly poorer regression statistics, indicating that the calibration list of variables was valid. Predictions using data from the validation dataset in the calibrated equation were better for the GHCN models, and the globally calibrated GHCN models generally provided better U.S. predictions than the U.S.-calibrated COOP models. Overall, the GHCN and COOP models explained approximately 64%-95% of the total variance of surface shelter temperatures, depending on the month and the number of model variables. 
In addition, root-mean-square errors (RMSEs) were over 3 °C for GHCN models and over 2 °C for COOP models in winter months, and near 2 °C for GHCN models and near 1.5 °C for COOP models in summer months.
Wolfs, Vincent; Villazon, Mauricio Florencio; Willems, Patrick
2013-01-01
Applications such as real-time control, uncertainty analysis and optimization require an extensive number of model iterations. Full hydrodynamic sewer models are not sufficient for these applications due to the excessive computation time. Simplifications are consequently required. A lumped conceptual modelling approach results in a much faster calculation. The process of identifying and calibrating the conceptual model structure could, however, be time-consuming. Moreover, many conceptual models lack accuracy, or do not account for backwater effects. To overcome these problems, a modelling methodology was developed which is suited for semi-automatic calibration. The methodology is tested for the sewer system of the city of Geel in the Grote Nete river basin in Belgium, using both synthetic design storm events and long time series of rainfall input. A MATLAB/Simulink® tool was developed to guide the modeller through the step-wise model construction, significantly reducing the time required for the conceptual modelling process.
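Lumped conceptual models of the kind advocated above are often built from linear reservoirs. A minimal pure-Python sketch (illustrative only, not the authors' MATLAB/Simulink tool): storage drains at a rate proportional to its volume, and the recession constant `k` is the parameter to calibrate against the full hydrodynamic model.

```python
def linear_reservoir(inflow, k, dt=1.0):
    """Single linear reservoir: dS/dt = I - Q with outflow Q = S / k.
    inflow: sequence of inflow volumes per time step; returns outflows."""
    S, out = 0.0, []
    for I in inflow:
        S += I * dt          # fill
        Q = S / k            # drain proportionally to storage
        S -= Q * dt
        out.append(Q)
    return out
```

With `dt=1`, mass is conserved exactly: the total outflow plus the final storage equals the total inflow, and a unit pulse produces a geometric recession.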
Updated radiometric calibration for the Landsat-5 thematic mapper reflective bands
Helder, D.L.; Markham, B.L.; Thome, K.J.; Barsi, J.A.; Chander, G.; Malla, R.
2008-01-01
The Landsat-5 Thematic Mapper (TM) has been the workhorse of the Landsat system. Launched in 1984, it continues collecting data through the time frame of this paper. Thus, it provides an invaluable link to the past history of the land features of the Earth's surface, and it becomes imperative to provide an accurate radiometric calibration of the reflective bands to the user community. Previous calibration has been based on information obtained from prelaunch, the onboard calibrator, vicarious calibration attempts, and cross-calibration with Landsat-7. Currently, additional data sources are available to improve this calibration. Specifically, improvements in vicarious calibration methods and development of the use of pseudoinvariant sites for trending provide two additional independent calibration sources. The use of these additional estimates has resulted in a consistent calibration approach that ties together all of the available calibration data sources. Results from this analysis indicate that either a simple exponential model or a constant model may be used for all bands throughout the lifetime of Landsat-5 TM. Whereas previous exponential models had time constants of approximately one year, the updated model has significantly longer time constants in bands 1-3. In contrast, bands 4, 5, and 7 are shown to be best modeled by a constant. The models proposed in this paper indicate calibration knowledge of 5% or better early in life, decreasing to nearly 2% later in life. These models have been implemented at the U.S. Geological Survey Earth Resources Observation and Science (EROS) Center and are the default calibration used for all Landsat TM data now distributed through EROS. © 2008 IEEE.
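The lifetime-gain model above, gain(t) = a + b·exp(-t/tau), reducing to a constant when b is negligible, can be fitted as follows. The decay values here are synthetic stand-ins, not the published TM coefficients; tau is scanned on a grid, and for each tau the amplitudes follow from linear least squares.

```python
import numpy as np

t = np.linspace(0.0, 25.0, 60)                  # years since launch
gain = 1.00 + 0.08 * np.exp(-t / 4.0)           # synthetic band response

# Scan tau; for each value the amplitudes (a, b) follow from linear least squares.
best = None
for tau in np.linspace(0.5, 10.0, 96):
    A = np.column_stack([np.ones_like(t), np.exp(-t / tau)])
    coef, *_ = np.linalg.lstsq(A, gain, rcond=None)
    sse = float(np.sum((A @ coef - gain) ** 2))
    if best is None or sse < best[0]:
        best = (sse, tau, coef)

sse, tau_hat, (a_hat, b_hat) = best
```

A "constant" band (like bands 4, 5, and 7 above) would simply return `b_hat` near zero.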
Calibrating reaction rates for the CREST model
NASA Astrophysics Data System (ADS)
Handley, Caroline A.; Christie, Michael A.
2017-01-01
The CREST reactive-burn model uses entropy-dependent reaction rates that, until now, have been manually tuned to fit shock-initiation and detonation data in hydrocode simulations. This paper describes the initial development of an automatic method for calibrating CREST reaction-rate coefficients, using particle swarm optimisation. The automatic method is applied to EDC32, to help develop the first CREST model for this conventional high explosive.
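A minimal particle swarm optimisation loop of the kind used above can be sketched as follows. The objective here is a toy quadratic standing in for the hydrocode-versus-experiment misfit; the swarm size, weights, and target coefficients are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def misfit(x):
    # Stand-in for "run the hydrocode and compare with shock-initiation data".
    return np.sum((x - np.array([0.3, -0.7])) ** 2, axis=-1)

n, dim, iters = 30, 2, 200
pos = rng.uniform(-1.0, 1.0, size=(n, dim))
vel = np.zeros((n, dim))
pbest = pos.copy()                              # per-particle best positions
pbest_val = misfit(pos)
gbest = pbest[np.argmin(pbest_val)].copy()      # swarm-wide best

w, c1, c2 = 0.7, 1.5, 1.5                       # inertia and acceleration weights
for _ in range(iters):
    r1, r2 = rng.random((n, dim)), rng.random((n, dim))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = pos + vel
    val = misfit(pos)
    better = val < pbest_val
    pbest[better], pbest_val[better] = pos[better], val[better]
    gbest = pbest[np.argmin(pbest_val)].copy()
```

In the real calibration each `misfit` evaluation is a full hydrocode run, which is why automating this loop matters.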
Calibrating and testing a gap model for simulating forest management in the Oregon Coast Range
Robert J. Pabst; Matthew N. Goslin; Steven L. Garman; Thomas A. Spies
2008-01-01
The complex mix of economic and ecological objectives facing today's forest managers necessitates the development of growth models with a capacity for simulating a wide range of forest conditions while producing outputs useful for economic analyses. We calibrated the gap model ZELIG to simulate stand level forest development in the Oregon Coast Range as part of a...
A methodology for reduced order modeling and calibration of the upper atmosphere
NASA Astrophysics Data System (ADS)
Mehta, Piyush M.; Linares, Richard
2017-10-01
Atmospheric drag is the largest source of uncertainty in accurately predicting the orbit of satellites in low Earth orbit (LEO). Accurately predicting drag for objects that traverse LEO is critical to space situational awareness. Atmospheric models used for orbital drag calculations can be characterized as either empirical or physics-based (first-principles based). Empirical models are fast to evaluate but offer limited real-time predictive/forecasting ability, while physics-based models offer greater predictive/forecasting ability but require dedicated parallel computational resources. In addition, calibration with accurate data is required for either type of model. This paper presents a new methodology based on proper orthogonal decomposition for developing a quasi-physical, predictive, reduced order model that combines the speed of empirical models with the predictive/forecasting capabilities of physics-based models. The methodology is developed to reduce the high dimensionality of physics-based models while maintaining their capabilities. We develop the methodology using the Naval Research Lab's Mass Spectrometer Incoherent Scatter model and show that the diurnal and seasonal variations can be captured using a small number of modes and parameters. We also present calibration of the reduced order model using the CHAMP and GRACE accelerometer-derived densities. Results show that the method performs well for modeling and calibration of the upper atmosphere.
Generic Raman-based calibration models enabling real-time monitoring of cell culture bioreactors.
Mehdizadeh, Hamidreza; Lauri, David; Karry, Krizia M; Moshgbar, Mojgan; Procopio-Melino, Renee; Drapeau, Denis
2015-01-01
Raman-based multivariate calibration models have been developed for real-time in situ monitoring of multiple process parameters within cell culture bioreactors. The developed models are generic, in the sense that they are applicable to various products, media, and cell lines based on Chinese Hamster Ovary (CHO) host cells, and are scalable to large pilot and manufacturing scales. Several batches using different CHO-based cell lines and corresponding proprietary media and process conditions have been used to generate calibration datasets, and models have been validated using independent datasets from separate batch runs. All models have been validated to be generic and capable of predicting process parameters with acceptable accuracy. The developed models allow monitoring of multiple key bioprocess metabolic variables and hence can be utilized as an important enabling tool for Quality by Design approaches, which are strongly supported by the U.S. Food and Drug Administration. © 2015 American Institute of Chemical Engineers.
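The abstract does not state the model form, so as a hedged stand-in here is a principal-component-regression calibration on synthetic "spectra": the same multivariate idea of projecting high-dimensional channels onto a few latent variables and regressing a concentration on them. All data, the peak shape, and the analyte are invented.

```python
import numpy as np

rng = np.random.default_rng(7)
n_samples, n_channels = 60, 200
conc = rng.uniform(0.5, 5.0, n_samples)                 # e.g. a metabolite, g/L
peak = np.exp(-((np.arange(n_channels) - 80) / 10.0) ** 2)
spectra = np.outer(conc, peak) + rng.normal(scale=0.01, size=(n_samples, n_channels))

# Centre, project onto leading principal components, regress concentration.
Xc = spectra - spectra.mean(axis=0)
yc = conc - conc.mean()
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = U[:, :3] * s[:3]
coef, *_ = np.linalg.lstsq(scores, yc, rcond=None)

# Predict a new (noise-free) sample with true concentration 2.0.
new = 2.0 * peak
t_new = (new - spectra.mean(axis=0)) @ Vt[:3].T
c_pred = float(t_new @ coef + conc.mean())
```

Validating on independent batch runs, as the abstract describes, corresponds to scoring `c_pred` on spectra never seen during the SVD/regression step.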
Dinç, Erdal; Ozdemir, Abdil
2005-01-01
A multivariate chromatographic calibration technique was developed for the quantitative analysis of binary mixtures of enalapril maleate (EA) and hydrochlorothiazide (HCT) in tablets in the presence of losartan potassium (LST). The technique is based on linear regression equations constructed from the relationship between concentration and peak area at a set of five wavelengths. The algorithm of this calibration model, which has a simple mathematical content, is briefly described. The approach is a powerful tool for optimal chromatographic multivariate calibration and for eliminating fluctuations arising from instrumental and experimental conditions. The multivariate chromatographic calibration reduces the multivariate linear regression functions to a univariate data set. The model was validated by analyzing various synthetic binary mixtures and by using the standard addition technique. The developed calibration technique was applied to the analysis of real pharmaceutical tablets containing EA and HCT. The results were compared with those obtained by a classical HPLC method, and the proposed multivariate chromatographic calibration was observed to give better results than classical HPLC.
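The per-wavelength calibration-line idea can be sketched as follows: fit a concentration-versus-peak-area line at each wavelength, then estimate an unknown by inverting each line and pooling the results. All concentrations and sensitivities below are invented, not the EA/HCT data.

```python
import numpy as np

conc = np.array([2.0, 4.0, 6.0, 8.0, 10.0])            # standards (units assumed)
slopes_true = np.array([1.2, 1.0, 0.8, 0.6, 0.5])      # sensitivity per wavelength
areas = np.outer(conc, slopes_true)                    # peak areas, rows = standards

# Least-squares line (slope, intercept) at each wavelength.
coefs = [np.polyfit(conc, areas[:, j], 1) for j in range(areas.shape[1])]

# Unknown sample: invert each calibration line, then pool the estimates.
unknown_area = 5.0 * slopes_true                       # true concentration is 5.0
est = float(np.mean([(unknown_area[j] - b) / m for j, (m, b) in enumerate(coefs)]))
```

Pooling the five univariate inversions is what "reduction of multivariate regression to a univariate data set" amounts to in this sketch.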
Agogo, George O.; van der Voet, Hilko; Veer, Pieter van’t; Ferrari, Pietro; Leenders, Max; Muller, David C.; Sánchez-Cantalejo, Emilio; Bamia, Christina; Braaten, Tonje; Knüppel, Sven; Johansson, Ingegerd; van Eeuwijk, Fred A.; Boshuizen, Hendriek
2014-01-01
In epidemiologic studies, measurement error in dietary variables often attenuates the association between dietary intake and disease occurrence. To adjust for this attenuation, regression calibration is commonly used, which requires unbiased reference measurements. Short-term reference measurements for foods that are not consumed daily contain excess zeroes that pose challenges in the calibration model. We adapted a two-part regression calibration model, initially developed for multiple replicates of reference measurements per individual, to a single-replicate setting. We show how to handle excess-zero reference measurements with a two-step modeling approach, how to explore heteroscedasticity in the consumed amount with a variance-mean graph, how to explore nonlinearity with generalized additive modeling (GAM) and empirical logit approaches, and how to select covariates in the calibration model. The performance of the two-part calibration model was compared with its one-part counterpart. We used vegetable intake and mortality data from the European Prospective Investigation into Cancer and Nutrition (EPIC) study, in which reference measurements were taken with 24-hour recalls. For each of the three vegetable subgroups assessed separately, correcting for error with an appropriately specified two-part calibration model resulted in an approximately threefold increase in the strength of association with all-cause mortality, as measured by the log hazard ratio. We further found that the standard way of including covariates in the calibration model can lead to overfitting the two-part calibration model, and that the extent of error adjustment is influenced by the number and forms of covariates in the calibration model. For episodically consumed foods, we advise researchers to pay special attention to the response distribution, nonlinearity, and covariate inclusion when specifying the calibration model. PMID:25402487
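The two-part idea can be sketched with a deliberately simplified toy: part 1 models the probability of any consumption, part 2 the amount given consumption, and the calibrated intake is their product. The real model uses logistic and linear regressions with covariates; here a single binary covariate is handled by grouping, and all values are invented.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 4000
group = rng.integers(0, 2, n)                          # a single covariate, e.g. sex
p_true = np.where(group == 1, 0.6, 0.3)                # probability of any consumption
eats = rng.random(n) < p_true
amount = np.where(eats, rng.gamma(4.0, 50.0, n), 0.0)  # grams on the recall day

cal = np.zeros(2)
for g in (0, 1):
    m = group == g
    p_hat = eats[m].mean()                             # part 1: any consumption
    amt_hat = amount[m][eats[m]].mean()                # part 2: amount if consumed
    cal[g] = p_hat * amt_hat                           # calibrated usual intake
```

A one-part model would instead average the zero-inflated `amount` directly, which is where the heteroscedasticity and excess-zero problems described above arise.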
NASA Technical Reports Server (NTRS)
Scott, W. A.
1984-01-01
The propulsion simulator calibration laboratory (PSCL), in which calibrations can be performed to determine the gross thrust and airflow of propulsion simulators installed in wind tunnel models, is described. The preliminary checkout, evaluation, and calibration of the PSCL's three-component force measurement system are reported. Methods and equipment were developed for the alignment and calibration of the force measurement system. The initial alignment of the system demonstrated the need for a more efficient means of aligning the system's components; the use of precision alignment jigs increases both the speed and accuracy with which the system is aligned. The calibration of the force measurement system shows that the methods and equipment developed for the procedure can be successful.
NASA Astrophysics Data System (ADS)
Kan, Guangyuan; He, Xiaoyan; Ding, Liuqian; Li, Jiren; Hong, Yang; Zuo, Depeng; Ren, Minglei; Lei, Tianjie; Liang, Ke
2018-01-01
Hydrological model calibration has been a hot issue for decades. The shuffled complex evolution method developed at the University of Arizona (SCE-UA) has proven to be an effective and robust optimization approach; however, its computational efficiency deteriorates significantly as the amount of hydrometeorological data increases. In recent years, the rise of heterogeneous parallel computing has brought hope for the acceleration of hydrological model calibration. This study proposed a parallel SCE-UA method and applied it to the calibration of a watershed rainfall-runoff model, the Xinanjiang model. The parallel method was implemented on heterogeneous computing systems using OpenMP and CUDA. Performance testing and sensitivity analysis were carried out to verify its correctness and efficiency. Comparison results indicated that the heterogeneous parallel computing-accelerated SCE-UA converged much more quickly than the original serial version and possessed satisfactory accuracy and stability for the task of fast hydrological model calibration.
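The parallel structure exploited above can be sketched with a much-simplified skeleton: the population is shuffled into complexes, and the expensive per-complex evolution steps run concurrently. The objective is a toy quadratic standing in for a Xinanjiang run, the evolution step is a bare reflect-or-contract move rather than the full CCE algorithm, and threads stand in for the OpenMP/CUDA parallelism.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

rng = np.random.default_rng(4)

def objective(theta):
    # Stand-in for one Xinanjiang simulation plus an error metric.
    return float(np.sum((theta - 0.5) ** 2))

def evolve_complex(members):
    # Simplified competitive-evolution step: reflect the worst point
    # through the centroid of the others, else contract it toward them.
    members = sorted(members, key=objective)
    centroid = np.mean(members[:-1], axis=0)
    trial = 2.0 * centroid - members[-1]
    if objective(trial) < objective(members[-1]):
        members[-1] = trial
    else:
        members[-1] = 0.5 * (members[-1] + centroid)
    return members

pop = [rng.uniform(0.0, 1.0, 2) for _ in range(20)]
start_err = objective(min(pop, key=objective))
for _ in range(100):
    pop.sort(key=objective)
    complexes = [pop[i::4] for i in range(4)]          # shuffle into complexes
    with ThreadPoolExecutor(max_workers=4) as ex:      # evolve them in parallel
        complexes = list(ex.map(evolve_complex, complexes))
    pop = [m for c in complexes for m in c]

best = min(pop, key=objective)
```

Real speed-ups require that the per-complex work dominates, which is exactly the situation with long hydrometeorological records.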
Prospects of second generation artificial intelligence tools in calibration of chemical sensors.
Braibanti, Antonio; Rao, Rupenaguntla Sambasiva; Ramam, Veluri Anantha; Rao, Gollapalli Nageswara; Rao, Vaddadi Venkata Panakala
2005-05-01
Multivariate data-driven calibration models with neural networks (NNs) are developed for binary (Cu++ and Ca++) and quaternary (K+, Ca++, NO3- and Cl-) ion-selective electrode (ISE) data. The response profiles of ISEs with concentration are non-linear and sub-Nernstian. The task represents function approximation of multivariate, multi-response, correlated, non-linear data with unknown noise structure, i.e. multi-component calibration/prediction in chemometric parlance. Radial basis function (RBF) and Fuzzy-ARTMAP-NN models, implemented in the software packages TRAJAN and Professional II, are employed for the calibration. The optimum NN models reported are based on residuals in concentration space. Being a data-driven information technology, an NN does not require a model, a prior or posterior distribution of the data, or a noise structure. Missing information, spikes, or newer trends in different concentration ranges can be modeled through novelty detection. Two simulated data sets generated from mathematical functions are modeled as a function of the number of data points and network parameters such as the number of neurons and nearest neighbors. The success of RBF and Fuzzy-ARTMAP NNs in developing adequate calibration models for experimental data and function approximation models for more complex simulated data sets establishes AI2 (artificial intelligence, 2nd generation) as a promising technology in quantitation.
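A minimal radial-basis-function calibration in the spirit of the above: Gaussian kernels on the response axis map a non-linear, sub-Nernstian-style electrode curve back to concentration via linear least squares on the kernel activations. The response curve, kernel width, and centre count are synthetic assumptions, not the ISE data.

```python
import numpy as np

conc = np.linspace(0.1, 3.0, 25)                      # calibration standards
response = 55.0 * np.log10(conc) + 3.0 * conc         # toy sub-Nernstian curve, mV

def phi(r, centers, width=10.0):
    # Gaussian radial basis functions evaluated on the response axis.
    return np.exp(-((r[:, None] - centers[None, :]) / width) ** 2)

centers = np.linspace(response.min(), response.max(), 15)
Phi = phi(response, centers)
w, *_ = np.linalg.lstsq(Phi, conc, rcond=None)        # calibration weights

# Invert a new electrode reading back to concentration.
r_new = np.array([55.0 * np.log10(1.5) + 3.0 * 1.5])
c_pred = float((phi(r_new, centers) @ w)[0])
```

A trained RBF network differs mainly in how centres and widths are chosen; the linear output layer solved here by `lstsq` is the same.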
Soybean Physiology Calibration in the Community Land Model
NASA Astrophysics Data System (ADS)
Drewniak, B. A.; Bilionis, I.; Constantinescu, E. M.
2014-12-01
With the large influence of agricultural land use on biophysical and biogeochemical cycles, integrating cultivation into Earth System Models (ESMs) is increasingly important. The Community Land Model (CLM) was augmented with a CLM-Crop extension that simulates the development of three crop types: maize, soybean, and spring wheat. The CLM-Crop model is a complex system that relies on a suite of parametric inputs that govern plant growth under a given atmospheric forcing and available resources. However, the strong nonlinearity of ESMs makes parameter fitting a difficult task. In this study, our goal is to calibrate ten of the CLM-Crop parameters for one crop type, soybean, in order to improve model projection of plant development and carbon fluxes. We used measurements of gross primary productivity, net ecosystem exchange, and plant biomass from AmeriFlux sites to choose parameter values that optimize crop productivity in the model. Calibration is performed in a Bayesian framework by developing a scalable and adaptive scheme based on sequential Monte Carlo (SMC). Our scheme can perform model calibration using very few evaluations and, by exploiting parallelism, at a fraction of the time required by plain vanilla Markov Chain Monte Carlo (MCMC). We present the results from a twin experiment (self-validation) and calibration results and validation using real observations from an AmeriFlux tower site in the Midwestern United States, for the soybean crop type. The improved model will help researchers understand how climate affects crop production and resulting carbon fluxes, and additionally, how cultivation impacts climate.
Monte-Carlo-based uncertainty propagation with hierarchical models—a case study in dynamic torque
NASA Astrophysics Data System (ADS)
Klaus, Leonard; Eichstädt, Sascha
2018-04-01
For a dynamic calibration, a torque transducer is described by a mechanical model, and the corresponding model parameters are to be identified from measurement data. A measuring device for the primary calibration of dynamic torque, and a corresponding model-based calibration approach, have recently been developed at PTB. The complete mechanical model of the calibration set-up is very complex, and involves several calibration steps—making a straightforward implementation of a Monte Carlo uncertainty evaluation tedious. With this in mind, we here propose to separate the complete model into sub-models, with each sub-model being treated with individual experiments and analysis. The uncertainty evaluation for the overall model then has to combine the information from the sub-models in line with Supplement 2 of the Guide to the Expression of Uncertainty in Measurement. In this contribution, we demonstrate how to carry this out using the Monte Carlo method. The uncertainty evaluation involves various input quantities of different origin and the solution of a numerical optimisation problem.
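The sub-model combination idea can be sketched as a plain Monte Carlo propagation in the spirit of GUM Supplement 2: each sub-model contributes draws of its own parameter, and the overall model combines them. The stiffness/inertia sub-models, their distributions, and the frequency measurand below are illustrative assumptions, not PTB's actual torque model.

```python
import numpy as np

rng = np.random.default_rng(6)
M = 200_000                                     # Monte Carlo trials

# Sub-model 1: torsional stiffness from its own experiment (normal).
k = rng.normal(2.50, 0.02, M)                   # N·m/rad
# Sub-model 2: moment of inertia from geometry tolerances (uniform).
J = rng.uniform(0.98e-3, 1.02e-3, M)            # kg·m²

# Overall model: undamped natural frequency of the transducer assembly.
f = np.sqrt(k / J) / (2.0 * np.pi)              # Hz

f_mean = float(f.mean())
f_std = float(f.std(ddof=1))
```

Because each sub-model is sampled independently, adding a new calibration step only means adding another array of draws, which is the modularity the paper argues for.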
Modeling complex aquifer systems: a case study in Baton Rouge, Louisiana (USA)
NASA Astrophysics Data System (ADS)
Pham, Hai V.; Tsai, Frank T.-C.
2017-05-01
This study targets two challenges in groundwater model development: grid generation and model calibration for aquifer systems that are fluvial in origin. Realistic hydrostratigraphy can be developed using a large quantity of well log data to capture the complexity of an aquifer system. However, generating valid groundwater model grids to be consistent with the complex hydrostratigraphy is non-trivial. Model calibration can also become intractable for groundwater models that intend to match the complex hydrostratigraphy. This study uses the Baton Rouge aquifer system, Louisiana (USA), to illustrate a technical need to cope with grid generation and model calibration issues. A grid generation technique is introduced based on indicator kriging to interpolate 583 wireline well logs in the Baton Rouge area to derive a hydrostratigraphic architecture with fine vertical discretization. Then, an upscaling procedure is developed to determine a groundwater model structure with 162 layers that captures facies geometry in the hydrostratigraphic architecture. To handle model calibration for such a large model, this study utilizes a derivative-free optimization method in parallel computing to complete parameter estimation in a few months. The constructed hydrostratigraphy indicates the Baton Rouge aquifer system is fluvial in origin. The calibration result indicates hydraulic conductivity for Miocene sands is higher than that for Pliocene to Holocene sands and indicates the Baton Rouge fault and the Denham Springs-Scotlandville fault to be low-permeability leaky aquifers. The modeling result shows significantly low groundwater level in the "2,000-foot" sand due to heavy pumping, indicating potential groundwater upward flow from the "2,400-foot" sand.
NONPOINT SOURCE MODEL CALIBRATION IN HONEY CREEK WATERSHED
The U.S. EPA Non-Point Source Model has been applied and calibrated to a fairly large (187 sq. mi.) agricultural watershed in the Lake Erie Drainage basin of north central Ohio. Hydrologic and chemical routing algorithms have been developed. The model is evaluated for suitability...
Absolute Calibration of Optical Satellite Sensors Using Libya 4 Pseudo Invariant Calibration Site
NASA Technical Reports Server (NTRS)
Mishra, Nischal; Helder, Dennis; Angal, Amit; Choi, Jason; Xiong, Xiaoxiong
2014-01-01
The objective of this paper is to report the improvements in an empirical absolute calibration model developed at South Dakota State University using the Libya 4 (+28.55 deg, +23.39 deg) pseudo invariant calibration site (PICS). The approach was based on the use of the Terra MODIS as the radiometer to develop an absolute calibration model for the spectral channels covered by this instrument from visible to shortwave infrared. Earth Observing One (EO-1) Hyperion, with a spectral resolution of 10 nm, was used to extend the model to cover visible and near-infrared regions. A simple Bidirectional Reflectance Distribution Function (BRDF) model was generated using Terra Moderate Resolution Imaging Spectroradiometer (MODIS) observations over Libya 4, and the resulting model was validated with nadir data acquired from satellite sensors such as Aqua MODIS and the Landsat 7 (L7) Enhanced Thematic Mapper Plus (ETM+). The improvements in the absolute calibration model to account for the BRDF due to off-nadir measurements and annual variations in the atmosphere are summarized. BRDF models for off-nadir viewing angles have been derived using the measurements from EO-1 Hyperion. In addition to L7 ETM+, measurements from other sensors such as Aqua MODIS, the UK-2 Disaster Monitoring Constellation (DMC), the ENVISAT Medium Resolution Imaging Spectrometer (MERIS) and the Operational Land Imager (OLI) onboard Landsat 8 (L8), which was launched in February 2013, were employed to validate the model. These satellite sensors differ in the width of their spectral bandpasses, overpass time, off-nadir viewing capabilities, spatial resolution, and temporal revisit time. The results demonstrate that the proposed empirical calibration model has an accuracy on the order of 3% with an uncertainty of about 2% for the sensors used in the study.
USDA-ARS?s Scientific Manuscript database
Process-based computer models have been proposed as a tool to generate data for phosphorus-index assessment and development. Although models are commonly used to simulate phosphorus (P) loss from agriculture using managements that are different from the calibration data, this use of models has not ...
Lumb, A.M.; McCammon, R.B.; Kittle, J.L.
1994-01-01
Expert system software was developed to assist less experienced modelers with calibration of a watershed model and to facilitate the interaction between the modeler and the modeling process not provided by mathematical optimization. A prototype was developed with artificial intelligence software tools, a knowledge engineer, and two domain experts. The manual procedures used by the domain experts were identified, and the prototype was then coded by the knowledge engineer. The expert system consists of a set of hierarchical rules designed to guide the calibration of the model through a systematic evaluation of model parameters. When the prototype was completed and tested, it was rewritten for portability and operational use and was named HSPEXP. The watershed model Hydrological Simulation Program-FORTRAN (HSPF) is used in the expert system. This report is the user's manual for HSPEXP and contains a discussion of the concepts and detailed steps and examples for using the software. The system has been tested on watersheds in the States of Washington and Maryland, and the system correctly identified the model parameters to be adjusted, and the adjustments led to improved calibration.
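A hierarchical rule base of the kind described can be sketched as an ordered set of checks that compare simulated and observed statistics and suggest a parameter to adjust. The rule order (water balance first, then low flows, then storm response), thresholds, and advice directions below are invented for illustration, not HSPEXP's actual rules, although LZSN, AGWRC, and INFILT are real HSPF parameter names.

```python
def advise(sim, obs):
    # Relative errors of simulated vs. observed statistics.
    errors = {k: (sim[k] - obs[k]) / obs[k] for k in obs}
    # Rules are tried in priority order, mirroring a hierarchical rule base.
    if abs(errors["total_runoff"]) > 0.10:
        # Annual water balance first: adjust lower-zone storage (LZSN-like).
        return "LZSN", "increase" if errors["total_runoff"] > 0 else "decrease"
    if abs(errors["low_flow"]) > 0.10:
        # Then low-flow recession: adjust groundwater recession (AGWRC-like).
        return "AGWRC", "increase" if errors["low_flow"] < 0 else "decrease"
    if abs(errors["storm_peaks"]) > 0.15:
        # Then storm response: adjust infiltration capacity (INFILT-like).
        return "INFILT", "decrease" if errors["storm_peaks"] < 0 else "increase"
    return None, "calibration acceptable"

param, action = advise(
    sim={"total_runoff": 118.0, "low_flow": 30.0, "storm_peaks": 50.0},
    obs={"total_runoff": 100.0, "low_flow": 31.0, "storm_peaks": 52.0},
)
```

Encoding the experts' manual priorities this way is what lets a less experienced modeler follow the same systematic evaluation order.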
Development of a Tool for an Efficient Calibration of CORSIM Models
DOT National Transportation Integrated Search
2014-08-01
This project proposes a Memetic Algorithm (MA) for the calibration of microscopic traffic flow simulation models. The proposed MA includes a combination of genetic and simulated annealing algorithms. The genetic algorithm performs the exploration of ...
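The genetic-plus-simulated-annealing combination can be sketched as follows: a GA explores by crossover and mutation, and a short annealing run refines each offspring locally. The objective is a toy stand-in for the CORSIM-versus-field-data error measure, and all algorithm settings are illustrative, not the project's MA.

```python
import numpy as np

rng = np.random.default_rng(8)

def error(theta):
    # Stand-in for the CORSIM-vs-field-measurement discrepancy.
    return float(np.sum((theta - np.array([0.2, 0.8])) ** 2))

def anneal(theta, steps=50, temp=0.1):
    # Short simulated-annealing refinement around one candidate.
    cur, cur_e = theta.copy(), error(theta)
    for i in range(steps):
        cand = cur + rng.normal(scale=0.05, size=cur.size)
        cand_e = error(cand)
        t = temp * (1.0 - i / steps) + 1e-9           # cooling schedule
        if cand_e < cur_e or rng.random() < np.exp((cur_e - cand_e) / t):
            cur, cur_e = cand, cand_e
    return cur

pop = [rng.uniform(0.0, 1.0, 2) for _ in range(12)]
for _ in range(15):                                    # generations
    pop.sort(key=error)
    parents = pop[:6]                                  # elitist selection
    children = []
    for _ in range(6):
        a, b = rng.choice(6, 2, replace=False)
        child = 0.5 * (parents[a] + parents[b])        # arithmetic crossover
        child += rng.normal(scale=0.02, size=2)        # mutation
        children.append(anneal(child))                 # local refinement
    pop = parents + children

best = min(pop, key=error)
```

The annealing step is what distinguishes a memetic algorithm from a plain GA: each individual is improved locally before competing.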
In this study, the calibration of subsurface batch and reactive-transport models involving complex biogeochemical processes was systematically evaluated. Two hypothetical nitrate biodegradation scenarios were developed and simulated in numerical experiments to evaluate the perfor...
Some advances in experimentation supporting development of viscoplastic constitutive models
NASA Technical Reports Server (NTRS)
Ellis, J. R.; Robinson, D. N.
1985-01-01
The development of a biaxial extensometer capable of measuring axial, torsion, and diametral strains to near-microstrain resolution at elevated temperatures is discussed. An instrument with this capability was needed to provide experimental support to the development of viscoplastic constitutive models. The advantages gained when torsional loading is used to investigate inelastic material response at elevated temperatures are highlighted. The development of the biaxial extensometer was conducted in two stages. The first involved a series of bench calibration experiments performed at room temperature. The second stage involved a series of in-place calibration experiments conducted at room and elevated temperature. A review of the calibration data indicated that all performance requirements regarding resolution, range, stability, and crosstalk had been met by the subject instrument over the temperature range of interest, 21 °C to 651 °C. The scope of the in-place calibration experiments was expanded to investigate the feasibility of generating stress relaxation data under torsional loading.
A Bayesian alternative for multi-objective ecohydrological model specification
NASA Astrophysics Data System (ADS)
Tang, Yating; Marshall, Lucy; Sharma, Ashish; Ajami, Hoori
2018-01-01
Recent studies have identified the importance of vegetation processes in terrestrial hydrologic systems. Process-based ecohydrological models combine hydrological, physical, biochemical and ecological processes of the catchments, and as such are generally more complex and parametric than conceptual hydrological models. Thus, appropriate calibration objectives and model uncertainty analysis are essential for ecohydrological modeling. In recent years, Bayesian inference has become one of the most popular tools for quantifying the uncertainties in hydrological modeling with the development of Markov chain Monte Carlo (MCMC) techniques. The Bayesian approach offers an appealing alternative to traditional multi-objective hydrologic model calibrations by defining proper prior distributions that can be considered analogous to the ad-hoc weighting often prescribed in multi-objective calibration. Our study aims to develop appropriate prior distributions and likelihood functions that minimize the model uncertainties and bias within a Bayesian ecohydrological modeling framework based on a traditional Pareto-based model calibration technique. In our study, a Pareto-based multi-objective optimization and a formal Bayesian framework are implemented in a conceptual ecohydrological model that combines a hydrological model (HYMOD) and a modified Bucket Grassland Model (BGM). Simulations focused on one objective (streamflow/LAI) and multiple objectives (streamflow and LAI) with different emphasis defined via the prior distribution of the model error parameters. Results show more reliable outputs for both predicted streamflow and LAI using Bayesian multi-objective calibration with specified prior distributions for error parameters based on results from the Pareto front in the ecohydrological modeling. 
The methodology implemented here provides insight into the usefulness of multi-objective Bayesian calibration for ecohydrologic systems and the importance of appropriate prior distributions in such approaches.
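The idea that priors on error parameters play the role of multi-objective weights can be sketched with a toy Metropolis sampler: one parameter is fitted against two "objectives" (a flow-like and an LAI-like series), with per-objective error scales fixed as informative priors. Everything below is an invented stand-in, not HYMOD/BGM.

```python
import numpy as np

rng = np.random.default_rng(9)

theta_true = 1.5
x = np.linspace(0.0, 1.0, 40)
flow_obs = theta_true * x + rng.normal(scale=0.05, size=x.size)         # "streamflow"
lai_obs = np.sin(theta_true * x) + rng.normal(scale=0.02, size=x.size)  # "LAI"

def log_post(theta, s_flow=0.05, s_lai=0.02):
    # Fixed error scales act like objective weights (informative priors).
    ll = -0.5 * np.sum((flow_obs - theta * x) ** 2) / s_flow ** 2
    ll += -0.5 * np.sum((lai_obs - np.sin(theta * x)) ** 2) / s_lai ** 2
    return ll - 0.5 * (theta - 1.0) ** 2 / 4.0   # weak N(1, 2^2) prior on theta

theta, lp = 1.0, log_post(1.0)
samples = []
for _ in range(5000):
    cand = theta + rng.normal(scale=0.05)        # random-walk proposal
    lp_cand = log_post(cand)
    if np.log(rng.random()) < lp_cand - lp:      # Metropolis acceptance
        theta, lp = cand, lp_cand
    samples.append(theta)

theta_hat = float(np.mean(samples[1000:]))
```

Shrinking `s_lai` relative to `s_flow` shifts the posterior toward fitting the LAI objective, which is the Bayesian analogue of re-weighting a Pareto trade-off.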
Numerical Analysis of a Radiant Heat Flux Calibration System
NASA Technical Reports Server (NTRS)
Jiang, Shanjuan; Horn, Thomas J.; Dhir, V. K.
1998-01-01
A radiant heat flux gage calibration system exists in the Flight Loads Laboratory at NASA's Dryden Flight Research Center. This calibration system must be well understood if the heat flux gages calibrated in it are to provide useful data during radiant heating ground tests or flight tests of high speed aerospace vehicles. A part of the calibration system characterization process is to develop a numerical model of the flat plate heater element and heat flux gage, which will help identify errors due to convection, heater element erosion, and other factors. A 2-dimensional mathematical model of the gage-plate system has been developed to simulate the combined problem involving convection, radiation and mass loss by chemical reaction. A fourth order finite difference scheme is used to solve the steady state governing equations and determine the temperature distribution in the gage and plate, incident heat flux on the gage face, and flat plate erosion. Initial gage heat flux predictions from the model are found to be within 17% of experimental results.
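A minimal steady-state conduction sketch in the spirit of the model above, simplified to one dimension with second-order central differences rather than the paper's fourth-order 2-D scheme, and omitting radiation, convection, and erosion. The conductivity, heating rate, and boundary temperatures are illustrative values only.

```python
import numpy as np

n = 51
L = 0.1                                 # plate extent, m
dx = L / (n - 1)
k = 15.0                                # thermal conductivity, W/(m·K)
q = 2.0e6                               # volumetric heating, W/m³

# Central-difference discretisation of k*d²T/dx² + q = 0 with fixed ends.
A = np.zeros((n, n))
b = np.full(n, -q * dx * dx / k)
for i in range(1, n - 1):
    A[i, i - 1], A[i, i], A[i, i + 1] = 1.0, -2.0, 1.0
A[0, 0] = A[-1, -1] = 1.0
b[0] = b[-1] = 300.0                    # K, fixed edge temperatures

T = np.linalg.solve(A, b)
T_max = float(T.max())                  # analytic peak: 300 + q*L**2/(8*k)
```

Because the exact solution of this toy problem is quadratic, the central-difference result matches the analytic peak temperature, a convenient check before adding the radiation and convection terms of the full model.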
Application of the precipitation-runoff model in the Warrior coal field, Alabama
Kidd, Robert E.; Bossong, C.R.
1987-01-01
A deterministic precipitation-runoff model, the Precipitation-Runoff Modeling System, was applied in two small basins located in the Warrior coal field, Alabama. Each basin has distinct geologic, hydrologic, and land-use characteristics. Bear Creek basin (15.03 square miles) is undisturbed, is underlain almost entirely by consolidated coal-bearing rocks of Pennsylvanian age (Pottsville Formation), and is drained by an intermittent stream. Turkey Creek basin (6.08 square miles) contains a surface coal mine and is underlain by both the Pottsville Formation and unconsolidated clay, sand, and gravel deposits of Cretaceous age (Coker Formation). Aquifers in the Coker Formation sustain flow through extended rainless periods. Preliminary daily and storm calibrations were developed for each basin. Initial parameter and variable values were determined according to techniques recommended in the user's manual for the modeling system and through field reconnaissance. Parameters with meaningful sensitivity were identified and adjusted to match hydrograph shapes and to compute realistic water year budgets. When the developed calibrations were applied to data exclusive of the calibration period as a verification exercise, results were comparable to those for the calibration period. The model calibrations included preliminary parameter values for the various categories of geology and land use in each basin. The parameter values for areas underlain by the Pottsville Formation in the Bear Creek basin were transferred directly to similar areas in the Turkey Creek basin, and these parameter values were held constant throughout the model calibration. Parameter values for all geologic and land-use categories addressed in the two calibrations can probably be used in ungaged basins where similar conditions exist. The parameter transfer worked well, as a good calibration was obtained for Turkey Creek basin.
Applying Hierarchical Model Calibration to Automatically Generated Items.
ERIC Educational Resources Information Center
Williamson, David M.; Johnson, Matthew S.; Sinharay, Sandip; Bejar, Isaac I.
This study explored the application of hierarchical model calibration as a means of reducing, if not eliminating, the need for pretesting of automatically generated items from a common item model prior to operational use. Ultimately the successful development of automatic item generation (AIG) systems capable of producing items with highly similar…
Optimization of Equation of State and Burn Model Parameters for Explosives
NASA Astrophysics Data System (ADS)
Bergh, Magnus; Wedberg, Rasmus; Lundgren, Jonas
2017-06-01
A reactive burn model implemented in a multi-dimensional hydrocode can be a powerful tool for predicting non-ideal effects as well as initiation phenomena in explosives. Calibration against experiment is, however, critical and non-trivial. Here, a procedure is presented for calibrating the Ignition and Growth Model utilizing hydrocode simulation in conjunction with the optimization program LS-OPT. The model is applied to the explosive PBXN-109. First, a cylinder expansion test is presented together with a new automatic routine for product equation of state calibration. Secondly, rate stick tests and instrumented gap tests are presented. Data from these experiments are used to calibrate burn model parameters. Finally, we discuss the applicability and development of this optimization routine.
Advanced fast 3D DSA model development and calibration for design technology co-optimization
NASA Astrophysics Data System (ADS)
Lai, Kafai; Meliorisz, Balint; Muelders, Thomas; Welling, Ulrich; Stock, Hans-Jürgen; Marokkey, Sajan; Demmerle, Wolfgang; Liu, Chi-Chun; Chi, Cheng; Guo, Jing
2017-04-01
Direct Optimization (DO) of a 3D DSA model is a better approach for a DTCO study, in terms of accuracy and speed, than a Cahn-Hilliard equation solver. DO's shorter run time (10X to 100X faster) and linear scaling make it scalable to the area required for a DTCO study. However, the lack of temporal data output, in contrast to prior art, requires a new calibration method. The new method involves a specific set of calibration patterns. The design of the calibration patterns is extremely important for obtaining robust model parameters when temporal data are absent. A model calibrated to a Hybrid DSA system with a set of device-relevant constructs indicates the effectiveness of using nontemporal data. Preliminary model prediction using programmed defects on chemo-epitaxy shows encouraging results and agrees qualitatively well with theoretical predictions from a strong segregation theory.
Calibration of Airframe and Occupant Models for Two Full-Scale Rotorcraft Crash Tests
NASA Technical Reports Server (NTRS)
Annett, Martin S.; Horta, Lucas G.; Polanco, Michael A.
2012-01-01
Two full-scale crash tests of an MD-500 helicopter were conducted in 2009 and 2010 at NASA Langley's Landing and Impact Research Facility in support of NASA's Subsonic Rotary Wing Crashworthiness Project. The first crash test was conducted to evaluate the performance of an externally mounted composite deployable energy absorber under combined impact conditions. In the second crash test, the energy absorber was removed to establish baseline loads that are regarded as severe but survivable. Accelerations and kinematic data collected from the crash tests were compared to a system-integrated finite element model of the test article. Results from 19 accelerometers placed throughout the airframe were compared to finite element model responses. The model developed for the purposes of predicting acceleration responses from the first crash test was inadequate for evaluating the more severe conditions seen in the second crash test. A newly developed model calibration approach that includes uncertainty estimation, parameter sensitivity, impact shape orthogonality, and numerical optimization was used to calibrate model results for the second full-scale crash test. This combination of heuristic and quantitative methods was used to identify modeling deficiencies, evaluate parameter importance, and propose required model changes. It is shown that the multi-dimensional calibration techniques presented here are particularly effective in identifying model adequacy. Acceleration results for the calibrated model were compared to test results and the original model results. There was a noticeable improvement in the pilot and co-pilot region, a slight improvement in the occupant model response, and an over-stiffening effect in the passenger region. This approach should be adopted early on, in combination with the building-block approaches that are customarily used, for model development and test planning guidance.
Complete crash simulations with validated finite element models can be used to satisfy crash certification requirements, thereby reducing overall development costs.
USDA-ARS?s Scientific Manuscript database
Sunflower and soybean are summer annuals that can be grown as an alternative to corn and may be particularly useful in organic production systems. Rapid and low cost methods of analyzing plant quality would be helpful for crop management. We developed and validated calibration models for Near-infrar...
Improving Environmental Model Calibration and Prediction
2011-01-18
First, we have continued to develop tools for efficient global optimization of environmental models. Our algorithms are hybrid algorithms that combine evolutionary strategies ... toward practical hybrid optimization tools for environmental models.
NASA Astrophysics Data System (ADS)
Xu, Zhongyan; Godrej, Adil N.; Grizzard, Thomas J.
2007-10-01
Summary: Runoff models such as HSPF and reservoir models such as CE-QUAL-W2 are used to model water quality in watersheds. Most often, the models are independently calibrated to observed data. While this approach can achieve good calibration, it does not replicate the physically-linked nature of the system. When models are linked by using the model output from an upstream model as input to a downstream model, the physical reality of a continuous watershed, where the overland and waterbody portions are parts of the whole, is better represented. There are some additional challenges in the calibration of such linked models, because the aim is to simulate the entire system as a whole, rather than piecemeal. When public entities are charged with model development, one of the driving forces is to use public-domain models. This paper describes the use of two such models, HSPF and CE-QUAL-W2, in the linked modeling of the Occoquan watershed located in northern Virginia, USA. The description of the process is provided, and results from the hydrological calibration and validation are shown. The Occoquan model consists of six HSPF and two CE-QUAL-W2 models, linked in a complex way, to simulate two major reservoirs and the associated drainage areas. The overall linked model was calibrated for a three-year period and validated for a two-year period. The results show that a successful calibration can be achieved using the linked approach, with moderate additional effort. Overall flow balances based on the three-year calibration period at four stream stations showed agreement ranging from -3.95% to +3.21%. Flow balances for the two reservoirs, compared via the daily water surface elevations, also showed good agreement (R2 values of 0.937 for Lake Manassas and 0.926 for Occoquan Reservoir), when missing (un-monitored) flows were included.
Validation of the models ranged from poor to fair for the watershed models and excellent for the waterbody models, thus indicating that the current model can be used to explore waterbody issues, but should be used with appropriate care for watershed issues. The study objective of being able to use the Occoquan model to study the impact of land use changes on hydrodynamics and water quality in the waterbodies, particularly the Occoquan Reservoir, can be met with the current model. However, appropriate judgment should be exercised when using the model for the prediction of watershed runoff. One of the advantages of using the linked approach is to develop a direct linkage between upstream land use changes and downstream water quality. This makes it easier for decision-makers to evaluate alternative watershed management plans and for the public to understand the decision-making process. The successful calibration of hydrology provides a solid base for further model development and application.
NASA Astrophysics Data System (ADS)
Messner, Mark C.; Rhee, Moono; Arsenlis, Athanasios; Barton, Nathan R.
2017-06-01
This work develops a method for calibrating a crystal plasticity model to the results of discrete dislocation (DD) simulations. The crystal model explicitly represents junction formation and annihilation mechanisms and applies these mechanisms to describe hardening in hexagonal close packed metals. The model treats these dislocation mechanisms separately from elastic interactions among populations of dislocations, which the model represents through a conventional strength-interaction matrix. This split between elastic interactions and junction formation mechanisms more accurately reproduces the DD data and results in a multi-scale model that better represents the lower scale physics. The fitting procedure employs concepts of machine learning (feature selection by regularized regression and cross-validation) to develop a robust, physically accurate crystal model. The work also presents a method for ensuring the final, calibrated crystal model respects the physical symmetries of the crystal system. Calibrating the crystal model requires fitting two linear operators: one describing elastic dislocation interactions and another describing junction formation and annihilation dislocation reactions. The structure of these operators in the final, calibrated model reflects the crystal symmetry and slip system geometry of the DD simulations.
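The regularized-regression feature selection mentioned in the abstract above can be sketched with a minimal L1-penalized (lasso) fit via cyclic coordinate descent. This is an illustrative stand-in, not the authors' code; the data, names, and penalty value below are hypothetical.

```python
# Lasso via cyclic coordinate descent: an L1 penalty drives the
# coefficients of uninformative features exactly to zero, which is
# what makes regularized regression usable for feature selection.
def lasso(X, y, lam, n_iter=200):
    """Return lasso coefficients for the columns of X (list of rows)."""
    n, p = len(X), len(X[0])
    w = [0.0] * p
    for _ in range(n_iter):
        for j in range(p):
            # Partial residual: remove the contribution of all features
            # except feature j.
            r = [y[i] - sum(w[k] * X[i][k] for k in range(p) if k != j)
                 for i in range(n)]
            rho = sum(X[i][j] * r[i] for i in range(n))
            z = sum(X[i][j] ** 2 for i in range(n))
            # Soft-thresholding update: small correlations are zeroed out.
            if rho < -lam:
                w[j] = (rho + lam) / z
            elif rho > lam:
                w[j] = (rho - lam) / z
            else:
                w[j] = 0.0
    return w

# Feature 0 drives y; feature 1 is irrelevant and gets a zero coefficient.
w = lasso([[1, 1], [2, -1], [3, 1], [4, -1]], [2, 4, 6, 8], lam=0.5)
```

On this toy data the irrelevant feature's coefficient is exactly 0.0 and the informative one is shrunk slightly below its least-squares value of 2, the usual lasso bias.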
Virtual Reality Calibration for Telerobotic Servicing
NASA Technical Reports Server (NTRS)
Kim, W.
1994-01-01
A virtual reality calibration technique of matching a virtual environment of simulated graphics models in 3-D geometry and perspective with actual camera views of the remote site task environment has been developed to enable high-fidelity preview/predictive displays with calibrated graphics overlay on live video.
Aw, Wen C; Ballard, J William O
2013-10-01
The age structure of a natural population is of interest in physiological, life history, and ecological studies, but it is often difficult to determine. One methodological problem is that specimens may need to be sampled invasively, preventing subsequent taxonomic curation. A second problem is that it can be very expensive to accurately determine the age structure of a given population, because large sample sizes are often necessary. In this study, we test the effects of temperature (17 °C, 23 °C and 26 °C) and diet (standard cornmeal and low-calorie diet) on the accuracy of the non-invasive, inexpensive, and high-throughput near-infrared spectroscopy (NIRS) technique for determining the age of Drosophila flies. Composite and simplified calibration models were developed for each sex. Independent sets for each temperature and diet treatment, with flies not involved in the calibration models, were then used to validate the accuracy of the calibration models. The composite NIRS calibration model was generated by including flies reared under all temperatures and diets. This approach permits rapid age measurement and age structure determination in large populations of flies, classifying them as at most 9 days old or more than 9 days old with 85-97% and 64-99% accuracy, respectively. The simplified calibration models were generated by including flies reared at 23 °C on the standard diet. Low accuracy rates were observed when the simplified calibration models were used to identify (a) Drosophila reared at 17 °C and 26 °C and (b) flies reared at 23 °C on the low-calorie diet. These results strongly suggest that appropriate calibration models need to be developed in the laboratory before this technique can be reliably used in the field. These calibration models should include the major environmental variables that change across space and time in the particular natural population to be studied. Copyright © 2013 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Pan, S.; Liu, L.; Xu, Y. P.
2017-12-01
Abstract: Physically based distributed hydrological models involve a large number of parameters representing the spatial heterogeneity of the watershed and the various processes in the hydrologic cycle. Because the Distributed Hydrology Soil Vegetation Model (DHSVM) lacks a calibration module, this study developed a multi-objective calibration module using the Epsilon-Dominance Non-Dominated Sorted Genetic Algorithm II (ɛ-NSGAII), based on parallel computing on a Linux cluster (ɛP-DHSVM). Two key hydrologic elements (runoff and evapotranspiration) are used as objectives in the multi-objective calibration of the model. MODIS evapotranspiration obtained by SEBAL is adopted to compensate for the lack of evapotranspiration observations. The results show that good runoff performance in single-objective calibration does not ensure good simulation of other key hydrologic elements. The self-developed ɛP-DHSVM model makes multi-objective calibration more efficient and effective; running speed increased by more than 20-30 times with ɛP-DHSVM. In addition, runoff and evapotranspiration are simulated well simultaneously by ɛP-DHSVM, with good values for the two efficiency coefficients (NS of 0.74 for runoff and 0.79 for evapotranspiration; PBIAS of -10.5% and -8.6% for runoff and evapotranspiration, respectively).
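The two efficiency coefficients quoted above, Nash-Sutcliffe efficiency (NS) and percent bias (PBIAS), can be computed from paired observed/simulated series as below. This is a generic sketch, not code from the study; note that the sign convention for PBIAS varies between tools (here positive means underestimation).

```python
# NS compares model error against the variance of the observations:
# NS = 1 is a perfect fit, NS = 0 means the model is no better than
# predicting the observed mean.
def nash_sutcliffe(obs, sim):
    mean_obs = sum(obs) / len(obs)
    sse = sum((o - s) ** 2 for o, s in zip(obs, sim))
    sst = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - sse / sst

# PBIAS measures the average tendency of simulated values to deviate
# from observed values, as a percentage of total observed volume.
def pbias(obs, sim):
    return 100.0 * sum(o - s for o, s in zip(obs, sim)) / sum(obs)
```

For example, a simulation that uniformly overestimates flow by 10% gives NS slightly below 1 and PBIAS of -10% under this convention, matching the sign of the values reported in the abstract.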
Dong, Ren G; Welcome, Daniel E; McDowell, Thomas W; Wu, John Z
2013-11-25
The relationship between the vibration transmissibility and driving-point response functions (DPRFs) of the human body is important for understanding vibration exposures of the system and for developing valid models. This study identified their theoretical relationship and demonstrated that the sum of the DPRFs can be expressed as a linear combination of the transmissibility functions of the individual mass elements distributed throughout the system. The relationship is verified using several human vibration models. This study also clarified the requirements for reliably quantifying transmissibility values used as references for calibrating the system models. As an example application, this study used the developed theory to perform a preliminary analysis of the method for calibrating models using both vibration transmissibility and DPRFs. The results of the analysis show that the combined method can theoretically result in a unique and valid solution of the model parameters, at least for linear systems. However, the validation of the method itself does not guarantee the validation of the calibrated model, because the validation of the calibration also depends on the model structure and the reliability and appropriate representation of the reference functions. The basic theory developed in this study is also applicable to the vibration analyses of other structures.
Chotimah, Chusnul; Sudjadi; Riyanto, Sugeng; Rohman, Abdul
2015-01-01
Purpose: Analysis of drugs in multicomponent systems is officially carried out using chromatographic techniques; however, these are laborious and involve sophisticated instrumentation. Therefore, UV-VIS spectrophotometry coupled with multivariate calibration by partial least squares (PLS) was developed for quantitative analysis of metamizole, thiamin, and pyridoxin in the presence of cyanocobalamine without any separation step. Methods: Calibration and validation samples were prepared. The calibration model was built from a series of sample mixtures containing these drugs in certain proportions. Cross validation of the calibration samples using the leave-one-out technique was used to identify the smaller set of components that provides the greatest predictive ability. The evaluation of the calibration model was based on the coefficient of determination (R2) and the root mean square error of calibration (RMSEC). Results: The coefficient of determination (R2) for the relationship between actual and predicted values for all studied drugs was higher than 0.99, indicating good accuracy. The RMSEC values obtained were relatively low, indicating good precision. The accuracy and precision of the developed method showed no significant difference from those obtained by the official HPLC method. Conclusion: The developed method (UV-VIS spectrophotometry in combination with PLS) was successfully used for analysis of metamizole, thiamin and pyridoxin in tablet dosage form. PMID:26819934
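The evaluation metrics named above, RMSEC and leave-one-out cross-validation, can be sketched as follows. A univariate least-squares fit stands in for the multivariate PLS step, and the data are hypothetical; the point is only to show how the calibration error and its cross-validated counterpart are formed.

```python
import math

def fit_line(xs, ys):
    """Ordinary least-squares intercept and slope for one predictor."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

def rmsec(xs, ys):
    """Root mean square error of calibration: fit and score on all samples."""
    a, b = fit_line(xs, ys)
    return math.sqrt(sum((y - (a + b * x)) ** 2
                         for x, y in zip(xs, ys)) / len(xs))

def loo_rmsecv(xs, ys):
    """Leave-one-out RMSE: each sample is predicted by a model fitted
    on the remaining samples, so the error estimate is not optimistic."""
    errs = []
    for i in range(len(xs)):
        xt = xs[:i] + xs[i + 1:]
        yt = ys[:i] + ys[i + 1:]
        a, b = fit_line(xt, yt)
        errs.append((ys[i] - (a + b * xs[i])) ** 2)
    return math.sqrt(sum(errs) / len(errs))
```

RMSEC always looks at least as good as the cross-validated error; a large gap between the two is the usual sign of overfitting, which is why the abstract's leave-one-out step matters for choosing the number of PLS components.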
Bayesian Treed Calibration: An Application to Carbon Capture With AX Sorbent
DOE Office of Scientific and Technical Information (OSTI.GOV)
Konomi, Bledar A.; Karagiannis, Georgios; Lai, Kevin
2017-01-02
In cases where field or experimental measurements are not available, computer models can simulate real physical or engineering systems to reproduce their outcomes. They are usually calibrated in light of experimental data to create a better representation of the real system. Statistical methods based on Gaussian processes for calibration and prediction have been especially important when the computer models are expensive and experimental data are limited. In this paper, we develop Bayesian treed calibration (BTC) as an extension of standard Gaussian process calibration methods to deal with non-stationary computer models and/or their discrepancy from the field (or experimental) data. Our proposed method partitions both the calibration and observable input space, based on a binary tree partitioning, into sub-regions where existing model calibration methods can be applied to connect a computer model with the real system. The estimation of the parameters in the proposed model is carried out using Markov chain Monte Carlo (MCMC) computational techniques. Different strategies have been applied to improve mixing. We illustrate our method in two artificial examples and a real application that concerns the capture of carbon dioxide with AX amine-based sorbents. The source code and the examples analyzed in this paper are available as part of the supplementary materials.
USDA-ARS?s Scientific Manuscript database
The use of distributed parameter models to address water resource management problems has increased in recent years. Calibration is necessary to reduce the uncertainties associated with model input parameters. Manual calibration of a distributed parameter model is a very time consuming effort. There...
Earth Radiation Budget Experiment scanner radiometric calibration results
NASA Technical Reports Server (NTRS)
Lee, Robert B., III; Gibson, M. A.; Thomas, Susan; Meekins, Jeffrey L.; Mahan, J. R.
1990-01-01
The Earth Radiation Budget Experiment (ERBE) scanning radiometers are producing measurements of the incoming solar, earth/atmosphere-reflected solar, and earth/atmosphere-emitted radiation fields with measurement precisions and absolute accuracies approaching 1 percent. ERBE uses thermistor bolometers as the detection elements in the narrow-field-of-view scanning radiometers. The scanning radiometers can sense radiation in the shortwave, longwave, and total broadband spectral regions of 0.2 to 5.0, 5.0 to 50.0, and 0.2 to 50.0 micrometers, respectively. Detailed models of the radiometers' response functions were developed in order to design the most suitable calibration techniques. These models guided the design of in-flight calibration procedures as well as the development and characterization of a vacuum-calibration chamber and the blackbody source which provided the absolute basis upon which the total and longwave radiometers were characterized. The flight calibration instrumentation for the narrow-field-of-view scanning radiometers is presented and evaluated.
Improving ROLO lunar albedo model using PLEIADES-HR satellites extra-terrestrial observations
NASA Astrophysics Data System (ADS)
Meygret, Aimé; Blanchet, Gwendoline; Colzy, Stéphane; Gross-Colzy, Lydwine
2017-09-01
The accurate on-orbit radiometric calibration of optical sensors has become a challenge for space agencies, which have developed different techniques involving on-board calibration systems, ground targets, or extra-terrestrial targets. The combination of different approaches and targets is recommended whenever possible and necessary to reach or demonstrate a high accuracy. Among these calibration targets, the moon is widely used through the well-known ROLO (RObotic Lunar Observatory) model developed by USGS. A great, worldwide recognized effort was made to characterize the moon albedo, which is very stable. However, increasingly demanding calibration accuracy requirements have reached the limitations of the model. This paper deals with two main limitations: the residual error when modelling the phase angle dependency, and the absolute accuracy of the model, which is no longer acceptable for the on-orbit calibration of radiometers. Thanks to the agility of the PLEIADES high-resolution satellites, a significant database of moon and star images was acquired, making it possible to show the limitations of the ROLO model and to characterize the errors. The phase angle residual dependency is modelled using PLEIADES 1B images acquired for different quasi-complete moon cycles with a phase angle varying by less than 1°. The absolute albedo residual error is modelled using PLEIADES 1A images taken over stars and the moon. The accurate knowledge of the stars' spectral irradiance is transferred to the moon spectral albedo using the satellite as a transfer radiometer. This paper describes the data set used, the ROLO model residual errors and their modelling, and the quality of the proposed correction, and shows some calibration results using this improved model.
NASA Technical Reports Server (NTRS)
Halyo, Nesim; Pandey, Dhirendra K.; Taylor, Deborah B.
1989-01-01
The Earth Radiation Budget Experiment (ERBE) is making high-absolute-accuracy measurements of the reflected solar and Earth-emitted radiation as well as the incoming solar radiation from three satellites: ERBS, NOAA-9, and NOAA-10. Each satellite has four Earth-looking nonscanning radiometers and three scanning radiometers. A fifth nonscanner, the solar monitor, measures the incoming solar radiation. The development of the ERBE sensor characterization procedures are described using the calibration data for each of the Earth-looking nonscanners and scanners. Sensor models for the ERBE radiometers are developed including the radiative exchange, conductive heat flow, and electronics processing for transient and steady state conditions. The steady state models are used to interpret the sensor outputs, resulting in the data reduction algorithms for the ERBE instruments. Both ground calibration and flight calibration procedures are treated and analyzed. The ground and flight calibration coefficients for the data reduction algorithms are presented.
Liu, Yaoming; Cohen, Mark E; Hall, Bruce L; Ko, Clifford Y; Bilimoria, Karl Y
2016-08-01
The American College of Surgeons (ACS) NSQIP Surgical Risk Calculator has been widely adopted as a decision aid and informed consent tool by surgeons and patients. Previous evaluations showed excellent discrimination and combined discrimination and calibration, but model calibration alone, and the potential benefits of recalibration, were not explored. Because lack of calibration can lead to systematic errors in assessing surgical risk, our objective was to assess calibration and determine whether spline-based adjustments could improve it. We evaluated Surgical Risk Calculator model calibration, as well as discrimination, for each of 11 outcomes modeled from nearly 3 million patients (2010 to 2014). Using independent random subsets of data, we evaluated model performance for the Development (60% of records), Validation (20%), and Test (20%) datasets, where prediction equations from the Development dataset were recalibrated using restricted cubic splines estimated from the Validation dataset. We also evaluated performance on data subsets composed of higher-risk operations. The nonrecalibrated Surgical Risk Calculator performed well, but there was a slight tendency for predicted risk to be overestimated for the lowest- and highest-risk patients and underestimated for moderate-risk patients. After recalibration, this distortion was eliminated, and p values for miscalibration were most often nonsignificant. Calibration was also excellent for subsets of higher-risk operations, though observed calibration was reduced due to instability associated with smaller sample sizes. Performance of the NSQIP Surgical Risk Calculator models was shown to be excellent and improved with recalibration. Surgeons and patients can rely on the calculator to provide accurate estimates of surgical risk. Copyright © 2016 American College of Surgeons. Published by Elsevier Inc. All rights reserved.
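The recalibration idea above can be illustrated in simplified form. The paper uses restricted cubic splines; the sketch below uses the simpler logit-linear ("intercept and slope") recalibration instead, refitting outcome against the logit of the original prediction by logistic maximum likelihood. The data are synthetic and the sketch only conveys the general mechanism.

```python
import math

def logit(p):
    return math.log(p / (1.0 - p))

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def recalibrate(preds, outcomes, n_iter=25):
    """Fit (a, b) by Newton-Raphson so the recalibrated risk for an
    original prediction p is sigmoid(a + b * logit(p))."""
    x = [logit(p) for p in preds]
    a, b = 0.0, 1.0                     # start at the identity mapping
    for _ in range(n_iter):
        ga = gb = haa = hab = hbb = 0.0
        for xi, yi in zip(x, outcomes):
            mu = sigmoid(a + b * xi)    # current recalibrated risk
            w = mu * (1.0 - mu)
            ga += yi - mu               # gradient of the log-likelihood
            gb += (yi - mu) * xi
            haa += w                    # (negative) Hessian entries
            hab += w * xi
            hbb += w * xi * xi
        det = haa * hbb - hab * hab
        a += (hbb * ga - hab * gb) / det
        b += (haa * gb - hab * ga) / det
    return a, b
```

One useful property: at the maximum-likelihood fit the intercept score equation forces the mean recalibrated risk to equal the observed event rate, so mean calibration holds by construction on the fitting data.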
NASA Astrophysics Data System (ADS)
Mu, Nan; Wang, Kun; Xie, Zexiao; Ren, Ping
2017-05-01
To realize online rapid measurement for complex workpieces, a flexible measurement system based on an articulated industrial robot with a structured light sensor mounted on the end-effector is developed. A method for calibrating the system parameters is proposed in which the hand-eye transformation parameters and the robot kinematic parameters are synthesized in the calibration process. An initial hand-eye calibration is first performed using a standard sphere as the calibration target. By applying the modified complete and parametrically continuous method, we establish a synthesized kinematic model that combines the initial hand-eye transformation and distal link parameters as a whole with the sensor coordinate system as the tool frame. According to the synthesized kinematic model, an error model is constructed based on spheres' center-to-center distance errors. Consequently, the error model parameters can be identified in a calibration experiment using a three-standard-sphere target. Furthermore, the redundancy of error model parameters is eliminated to ensure the accuracy and robustness of the parameter identification. Calibration and measurement experiments are carried out based on an ER3A-C60 robot. The experimental results show that the proposed calibration method achieves high measurement accuracy, and this efficient and flexible system is suitable for online measurement in industrial scenes.
NASA Technical Reports Server (NTRS)
Lerch, F. J.; Nerem, R. S.; Chinn, D. S.; Chan, J. C.; Patel, G. B.; Klosko, S. M.
1993-01-01
A new method has been developed to provide a direct test of the error calibrations of gravity models based on actual satellite observations. The basic approach projects the error estimates of the gravity model parameters onto satellite observations, and the results of these projections are then compared with data residual computed from the orbital fits. To allow specific testing of the gravity error calibrations, subset solutions are computed based on the data set and data weighting of the gravity model. The approach is demonstrated using GEM-T3 to show that the gravity error estimates are well calibrated and that reliable predictions of orbit accuracies can be achieved for independent orbits.
Radulescu, E G; Wójcik, J; Lewin, P A; Nowicki, A
2003-06-01
To facilitate the implementation and verification of the new ultrasound hydrophone calibration techniques described in the companion paper (elsewhere in this issue), a nonlinear propagation model was developed. A brief outline of the theoretical considerations is presented, and the model's advantages and disadvantages are discussed. The results of simulations yielding spatial and temporal acoustic pressure amplitude are also presented and compared with those obtained using the KZK and Field II models. Excellent agreement between all models is evidenced. The applicability of the model to discrete wideband calibration of hydrophones is documented in the companion paper elsewhere in this volume.
Validation and calibration of structural models that combine information from multiple sources.
Dahabreh, Issa J; Wong, John B; Trikalinos, Thomas A
2017-02-01
Mathematical models that attempt to capture structural relationships between their components and combine information from multiple sources are increasingly used in medicine. Areas covered: We provide an overview of methods for model validation and calibration and survey studies comparing alternative approaches. Expert commentary: Model validation entails a confrontation of models with data, background knowledge, and other models, and can inform judgments about model credibility. Calibration involves selecting parameter values to improve the agreement of model outputs with data. When the goal of modeling is quantitative inference on the effects of interventions or forecasting, calibration can be viewed as estimation. This view clarifies issues related to parameter identifiability and facilitates formal model validation and the examination of consistency among different sources of information. In contrast, when the goal of modeling is the generation of qualitative insights about the modeled phenomenon, calibration is a rather informal process for selecting inputs that result in model behavior that roughly reproduces select aspects of the modeled phenomenon and cannot be equated to an estimation procedure. Current empirical research on validation and calibration methods consists primarily of methodological appraisals or case-studies of alternative techniques and cannot address the numerous complex and multifaceted methodological decisions that modelers must make. Further research is needed on different approaches for developing and validating complex models that combine evidence from multiple sources.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, Dongsu; Cox, Sam J.; Cho, Heejin
With increased use of variable refrigerant flow (VRF) systems in the U.S. building sector, interest in the capability of various building energy modeling tools to simulate VRF systems is rising. This paper presents detailed procedures for model calibration of a VRF system with a dedicated outdoor air system (DOAS) by comparison to detailed measured data from an occupancy-emulated small office building. The building energy model is first developed based on as-built drawings and the available building and system characteristics. The whole-building energy modeling tool used for the study is U.S. DOE's EnergyPlus version 8.1. The initial model is then calibrated with hourly measured data from the target building and VRF-DOAS system. In the detailed calibration procedure for the VRF-DOAS, the original EnergyPlus source code is modified to enable modeling of the specific VRF-DOAS installed in the building. After proper calibration during cooling and heating seasons, the VRF-DOAS model can reasonably predict the performance of the actual VRF-DOAS system based on the criteria from ASHRAE Guideline 14-2014. The calibration results show hourly CV-RMSE and NMBE of 15.7% and 3.8%, respectively, which is deemed calibrated. As a result, whole-building energy usage after calibration of the VRF-DOAS model is 1.9% (78.8 kWh) lower than the measurements during the comparison period.
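The two ASHRAE Guideline 14 statistics quoted above, hourly CV-RMSE and NMBE, can be sketched as follows. Guideline 14 subtracts the number of model parameters from n in the denominators; that adjustment is omitted here for brevity, and the sign convention for NMBE (measured minus simulated) also varies between references.

```python
import math

# NMBE: normalized mean bias error, the net over/under-prediction
# as a percentage of mean measured energy use.
def nmbe(measured, simulated):
    n = len(measured)
    mean_m = sum(measured) / n
    return 100.0 * sum(m - s for m, s in zip(measured, simulated)) / (n * mean_m)

# CV-RMSE: coefficient of variation of the root mean square error,
# which penalizes hour-by-hour scatter that NMBE can cancel out.
def cv_rmse(measured, simulated):
    n = len(measured)
    mean_m = sum(measured) / n
    rmse = math.sqrt(sum((m - s) ** 2 for m, s in zip(measured, simulated)) / n)
    return 100.0 * rmse / mean_m
```

Guideline 14's commonly cited hourly tolerances are about 30% for CV-RMSE and ±10% for NMBE, so the 15.7% and 3.8% reported above fall comfortably within them.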
Experiences in Automated Calibration of a Nickel Equation of State
NASA Astrophysics Data System (ADS)
Carpenter, John H.
2017-06-01
Wide availability of large computers has led to increasing incorporation of computational data, such as from density functional theory molecular dynamics, in the development of equation of state (EOS) models. Once a grid of computational data is available, it is usually left to an expert modeler to model the EOS using traditional techniques. One can envision the possibility of using the increasing computing resources to perform black-box calibration of EOS models, with the goal of reducing the workload on the modeler or enabling non-experts to generate good EOSs with such a tool. Progress towards building such a black-box calibration tool will be explored in the context of developing a new, wide-range EOS for nickel. While some details of the model and data will be shared, the focus will be on what was learned by automatically calibrating the model in a black-box method. Model choices and ensuring physicality will also be discussed. Sandia National Laboratories is a multi-mission laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
Potential for calibration of geostationary meteorological satellite imagers using the Moon
Stone, T.C.; Kieffer, H.H.; Grant, I.F.; ,
2005-01-01
Solar-band imagery from geostationary meteorological satellites has been utilized in a number of important applications in Earth Science that require radiometric calibration. Because these satellite systems typically lack on-board calibrators, various techniques have been employed to establish "ground truth", including observations of stable ground sites and oceans, and cross-calibrating with coincident observations made by instruments with on-board calibration systems. The Moon appears regularly in the margins and corners of full-disk operational images of the Earth acquired by meteorological instruments with a rectangular field of regard, typically several times each month, which provides an excellent opportunity for radiometric calibration. The USGS RObotic Lunar Observatory (ROLO) project has developed the capability for on-orbit calibration using the Moon via a model for lunar spectral irradiance that accommodates the geometries of illumination and viewing by a spacecraft. The ROLO model has been used to determine on-orbit response characteristics for several NASA EOS instruments in low Earth orbit. Relative response trending with precision approaching 0.1% per year has been achieved for SeaWiFS as a result of the long time-series of lunar observations collected by that instrument. The method has a demonstrated capability for cross-calibration of different instruments that have viewed the Moon. The Moon appears skewed in high-resolution meteorological images, primarily due to satellite orbital motion during acquisition; however, the geometric correction for this is straightforward. By integrating the lunar disk image to an equivalent irradiance, and using knowledge of the sensor's spectral response, a calibration can be developed through comparison against the ROLO lunar model. The inherent stability of the lunar surface means that lunar calibration can be applied to observations made at any time, including retroactively. 
Archived geostationary imager data that contains the Moon can be used to develop response histories for these instruments, regardless of their current operational status.
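The core arithmetic of such a lunar calibration (integrate the on-disk radiances to an equivalent irradiance, then ratio it against a model reference) can be sketched in a few lines. All numbers below are invented for illustration; the ROLO model itself is not reproduced here.

```python
# All values are invented; a real application would use calibrated radiances,
# the sensor's spectral response, and a ROLO-model reference irradiance.

def disk_irradiance(radiances, pixel_solid_angle_sr):
    """Integrate on-disk radiances (W m^-2 sr^-1 per pixel) to an
    equivalent band irradiance (W m^-2)."""
    return sum(radiances) * pixel_solid_angle_sr

pixels = [4.2, 4.5, 4.4, 4.1, 4.3]       # toy on-disk radiances, skew-corrected
measured = disk_irradiance(pixels, pixel_solid_angle_sr=1.0e-9)

model_irradiance = 2.3e-8                # stand-in for the lunar-model value
gain_correction = model_irradiance / measured
print(round(gain_correction, 4))         # → 1.0698
```

A gain correction above 1 would indicate the sensor under-reports irradiance relative to the reference; trending this ratio over archived images gives the response history.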
NASA Astrophysics Data System (ADS)
Lafontaine, J.; Hay, L.; Archfield, S. A.; Farmer, W. H.; Kiang, J. E.
2014-12-01
The U.S. Geological Survey (USGS) has developed a National Hydrologic Model (NHM) to support coordinated, comprehensive and consistent hydrologic model development, and facilitate the application of hydrologic simulations within the continental US. The portion of the NHM located within the Gulf Coastal Plains and Ozarks Landscape Conservation Cooperative (GCPO LCC) is being used to test the feasibility of improving streamflow simulations in gaged and ungaged watersheds by linking statistically- and physically-based hydrologic models. The GCPO LCC covers part or all of 12 states and 5 sub-geographies, totaling approximately 726,000 km2, and is centered on the lower Mississippi Alluvial Valley. A total of 346 USGS streamgages in the GCPO LCC region were selected to evaluate the performance of this new calibration methodology for the period 1980 to 2013. Initially, the physically-based models are calibrated to measured streamflow data to provide a baseline for comparison. An enhanced calibration procedure then is used to calibrate the physically-based models in the gaged and ungaged areas of the GCPO LCC using statistically-based estimates of streamflow. For this application, the calibration procedure is adjusted to address the limitations of the statistically generated time series to reproduce measured streamflow in gaged basins, primarily by incorporating error and bias estimates. As part of this effort, estimates of uncertainty in the model simulations are also computed for the gaged and ungaged watersheds.
Calibrated Blade-Element/Momentum Theory Aerodynamic Model of the MARIN Stock Wind Turbine: Preprint
DOE Office of Scientific and Technical Information (OSTI.GOV)
Goupee, A.; Kimball, R.; de Ridder, E. J.
2015-04-02
In this paper, a calibrated blade-element/momentum theory aerodynamic model of the MARIN stock wind turbine is developed and documented. The model is created using open-source software and calibrated to closely emulate experimental data obtained by the DeepCwind Consortium using a genetic algorithm optimization routine. The provided model will be useful for those interested in validating floating wind turbine numerical simulators that rely on experiments utilizing the MARIN stock wind turbine, for example, the International Energy Agency Wind Task 30's Offshore Code Comparison Collaboration Continued, with Correlation project.
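The genetic-algorithm calibration described above can be illustrated with a minimal sketch: a toy two-coefficient model is tuned to invented "experimental" data. This is not the authors' code or the MARIN turbine model; the data, fitness function, and GA settings are all hypothetical.

```python
import random

random.seed(0)

# Hypothetical 'experimental' data for a toy model y = a - b*x; the real
# calibration targets DeepCwind measurements, which are not reproduced here.
obs_x = [5.0, 8.0, 11.0]
obs_y = [0.9, 0.75, 0.55]

def model(a, b, x):
    return a - b * x

def fitness(ind):
    a, b = ind
    return -sum((model(a, b, x) - y) ** 2 for x, y in zip(obs_x, obs_y))

# Genetic algorithm: keep the 10 fittest individuals each generation and
# refill the population with Gaussian-mutated copies of them.
pop = [(random.uniform(0.0, 2.0), random.uniform(0.0, 0.2)) for _ in range(40)]
for _ in range(60):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:10]
    pop = parents + [
        (p[0] + random.gauss(0, 0.02), p[1] + random.gauss(0, 0.005))
        for p in random.choices(parents, k=30)
    ]

best = max(pop, key=fitness)
```

Elitism (retaining the parents unmutated) guarantees the best fitness never degrades between generations, which is why even this simple scheme converges reliably on a smooth objective.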
NASA Astrophysics Data System (ADS)
Orłowska-Szostak, Maria; Orłowski, Ryszard
2017-11-01
The paper discusses relevant aspects of calibrating a computer model that describes flows in a water supply system. The authors describe an exemplary water supply system and use it as a practical illustration of calibration. A range of measures is discussed and applied that improves the convergence and efficiency of the calculations in the calibration process, and thereby the validity of the results obtained. The measurement results were processed, i.e. pipe roughnesses were estimated, using a genetic algorithm implemented in software developed by the Brazilian company Resan Labs.
The development of local calibration factors for implementing the highway safety manual in Maryland.
DOT National Transportation Integrated Search
2014-03-01
The goal of the study was to determine local calibration factors (LCFs) to adjust predicted motor vehicle traffic crashes for the Maryland-specific application of the Highway Safety Manual (HSM). Since HSM predictive models were developed using d...
Saha, Dibakar; Alluri, Priyanka; Gan, Albert
2017-01-01
The Highway Safety Manual (HSM) presents statistical models to quantitatively estimate an agency's safety performance. The models were developed using data from only a few U.S. states. To account for the effects of local attributes and temporal factors on crash occurrence, agencies are required to calibrate the HSM-default models for crash predictions. The manual suggests updating calibration factors every two to three years, or preferably on an annual basis. Given that the calibration process involves substantial time, effort, and resources, a comprehensive analysis of the required calibration factor update frequency is valuable to the agencies. Accordingly, the objective of this study is to evaluate the HSM's recommendation and determine the required frequency of calibration factor updates. A robust Bayesian estimation procedure is used to assess the variation between calibration factors computed annually, biennially, and triennially using data collected from over 2400 miles of segments and over 700 intersections on urban and suburban facilities in Florida. The Bayesian model yields a posterior distribution of the model parameters that provides credible information on whether the difference between calibration factors computed at specified intervals differs credibly from the null value, which represents unaltered calibration factors between the comparison years (in other words, zero difference). The concept of the null value is extended to include the range of values that are practically equivalent to zero. Bayesian inference shows that calibration factors based on total crash frequency are required to be updated every two years in cases where the variations between calibration factors are not greater than 0.01. When the variations are between 0.01 and 0.05, calibration factors based on total crash frequency could be updated every three years. Copyright © 2016 Elsevier Ltd. All rights reserved.
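The "practically equivalent to zero" idea can be illustrated with a toy posterior for the difference between two calibration factors. The posterior draw, ROPE bounds, and decision threshold below are invented; the study's actual Bayesian model is not reproduced.

```python
import random

random.seed(1)

# Toy posterior for the difference between two annual calibration factors;
# in the study this would come from the Bayesian model, not a normal draw.
posterior_diff = [random.gauss(0.02, 0.015) for _ in range(10000)]

rope = (-0.05, 0.05)                # region of practical equivalence
inside = sum(rope[0] < d < rope[1] for d in posterior_diff) / len(posterior_diff)

# If nearly all posterior mass lies inside the ROPE, treat the factors as
# practically unchanged and skip the calibration update.
practically_equivalent = inside > 0.95
```

The decision is driven by posterior mass rather than a point estimate, which is what distinguishes this from simply checking whether the two factors differ by less than a fixed amount.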
Fermentation process tracking through enhanced spectral calibration modeling.
Triadaphillou, Sophia; Martin, Elaine; Montague, Gary; Norden, Alison; Jeffkins, Paul; Stimpson, Sarah
2007-06-15
The FDA process analytical technology (PAT) initiative will result in a significant increase in the number of installations of spectroscopic instrumentation. However, to attain the greatest benefit from the data generated, there is a need for calibration procedures that extract the maximum information content. For example, in fermentation processes, the interpretation of the resulting spectra is challenging as a consequence of the large number of wavelengths recorded, the underlying correlation structure evident between the wavelengths, and the impact of the measurement environment. Approaches to the development of calibration models have been based on the application of partial least squares (PLS) either to the full spectral signature or to a subset of wavelengths. This paper presents a new approach to calibration modeling built on a wavelength selection procedure, spectral window selection (SWS), in which windows of wavelengths are automatically selected and subsequently used as the basis of the calibration model. Because the windows selected are not unique when the algorithm is executed repeatedly, multiple models are constructed and then combined using stacking, thereby increasing the robustness of the final calibration model. The methodology is applied to data generated during the monitoring of broth concentrations in an industrial fermentation process from on-line near-infrared (NIR) and mid-infrared (MIR) spectrometers. It is shown that the proposed calibration modeling procedure outperforms traditional calibration procedures, as well as enabling the identification of the critical regions of the spectra with regard to the fermentation process.
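A stripped-down sketch of the window-selection step (not the authors' SWS algorithm) might score contiguous wavelength windows by their correlation with the measured concentration; the stacking step would then average predictions from models built on the best windows. The synthetic spectra below are invented.

```python
import random

random.seed(2)

# Synthetic data: 30 'spectra' of 20 wavelengths; wavelengths 5-9 carry the
# concentration signal, the rest are pure noise.
n_wl, n_samples = 20, 30
conc = [random.uniform(1.0, 5.0) for _ in range(n_samples)]
spectra = [[0.5 * c + random.gauss(0, 0.05) if 5 <= w < 10 else random.gauss(0, 1)
            for w in range(n_wl)] for c in conc]

def corr(xs, ys):
    """Pearson correlation coefficient."""
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = (sum((x - mx) ** 2 for x in xs) * sum((y - my) ** 2 for y in ys)) ** 0.5
    return num / den

# Score each contiguous 5-wavelength window by its mean |correlation| with the
# concentration; a stacking step would average models built on the top windows.
scores = []
for start in range(n_wl - 4):
    score = sum(abs(corr([s[w] for s in spectra], conc))
                for w in range(start, start + 5)) / 5
    scores.append((score, start))

best_score, best_start = max(scores)
```

With this synthetic data the window starting at wavelength 5 scores highest, since it covers exactly the signal-bearing region; real spectra would yield several competing windows, motivating the stacking step.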
Model Calibration Efforts for the International Space Station's Solar Array Mast
NASA Technical Reports Server (NTRS)
Elliott, Kenny B.; Horta, Lucas G.; Templeton, Justin D.; Knight, Norman F., Jr.
2012-01-01
The International Space Station (ISS) relies on sixteen solar-voltaic blankets to provide electrical power to the station. Each pair of blankets is supported by a deployable boom called the Folding Articulated Square Truss Mast (FAST Mast). At certain ISS attitudes, the solar arrays can be positioned in such a way that shadowing of either one or three longerons causes an unexpected asymmetric thermal loading that if unchecked can exceed the operational stability limits of the mast. Work in this paper documents part of an independent NASA Engineering and Safety Center effort to assess the existing operational limits. Because of the complexity of the system, the problem is being worked using a building-block progression from components (longerons), to units (single or multiple bays), to assembly (full mast). The paper presents results from efforts to calibrate the longeron components. The work includes experimental testing of two types of longerons (straight and tapered), development of Finite Element (FE) models, development of parameter uncertainty models, and the establishment of a calibration and validation process to demonstrate adequacy of the models. Models in the context of this paper refer to both FE model and probabilistic parameter models. Results from model calibration of the straight longerons show that the model is capable of predicting the mean load, axial strain, and bending strain. For validation, parameter values obtained from calibration of straight longerons are used to validate experimental results for the tapered longerons.
NASA Technical Reports Server (NTRS)
Annett, Martin S.; Horta, Lucas G.; Jackson, Karen E.; Polanco, Michael A.; Littell, Justin D.
2012-01-01
Two full-scale crash tests of an MD-500 helicopter were conducted in 2009 and 2010 at NASA Langley's Landing and Impact Research Facility in support of NASA's Subsonic Rotary Wing Crashworthiness Project. The first crash test was conducted to evaluate the performance of an externally mounted composite deployable energy absorber (DEA) under combined impact conditions. In the second crash test, the energy absorber was removed to establish baseline loads that are regarded as severe but survivable. The presence of this energy absorbing device reduced the peak impact acceleration levels by a factor of three. Accelerations and kinematic data collected from the crash tests were compared to a system-integrated finite element model of the test article developed in parallel with the test program. In preparation for the full-scale crash test, a series of sub-scale and MD-500 mass simulator tests were conducted to evaluate the impact performance of various components and subsystems, including new crush tubes and the DEA blocks. Parameters defined for the system-integrated finite element model were determined from these tests. Results from 19 accelerometers placed throughout the airframe were compared to finite element model responses. The model developed for the purposes of predicting acceleration responses from the first crash test was inadequate when evaluating more severe conditions seen in the second crash test. A newly developed model calibration approach that includes uncertainty estimation, parameter sensitivity, impact shape orthogonality, and numerical optimization was used to calibrate model results for the full-scale crash test without the DEA. This combination of heuristic and quantitative methods identified modeling deficiencies, evaluated parameter importance, and proposed required model changes. The multidimensional calibration techniques presented here are particularly effective in identifying model adequacy.
Acceleration results for the calibrated model were compared to test results and the original model results. There was a noticeable improvement in the pilot and copilot region, a slight improvement in the occupant model response, and an over-stiffening effect in the passenger region. One lesson learned was that this approach should be adopted early on, in combination with the building-block approaches that are customarily used, for model development and pretest predictions. Complete crash simulations with validated finite element models can be used to satisfy crash certification requirements, potentially reducing overall development costs.
Calibration of the rutting model in HDM 4 on the highway network in Macedonia
NASA Astrophysics Data System (ADS)
Ognjenovic, Slobodan; Okrepilov, Vladimir; Ristov, Riste; Papic, Jovan
2018-03-01
The World Bank HDM 4 model is adopted in many countries worldwide. It consists of developed models for almost all types of deformation of pavement structures, but it cannot be used anywhere in the world as originally developed without proper adjustment to local conditions such as traffic load, climate, construction specificities, maintenance level, etc. This paper presents the results of research carried out in Macedonia to determine the calibration coefficients of the rutting model in HDM 4.
DOT National Transportation Integrated Search
2015-10-01
The objective of this task is to develop the concept and framework for a procedure to routinely create, re-calibrate, and update the Trigger Tables and Performance Models. The scope of work for Task 6 includes a limited review of the recent pavemen...
Wherry, Susan A.; Wood, Tamara M.; Anderson, Chauncey W.
2015-01-01
Using the extended 1991–2010 external phosphorus loading dataset, the lake TMDL model was recalibrated following the same procedures outlined in the Phase 1 review. The version of the model selected for further development incorporated an updated sediment initial condition, a numerical solution method for the chlorophyll a model, changes to light and phosphorus factors limiting algal growth, and a new pH-model regression, which removed Julian day dependence in order to avoid discontinuities in pH at year boundaries. This updated lake TMDL model was recalibrated using the extended dataset in order to compare calibration parameters to those obtained from a calibration with the original 7.5-year dataset. The resulting algal settling velocity calibrated from the extended dataset was more than twice the value calibrated with the original dataset, and, because the calibrated values of algal settling velocity and recycle rate are related (more rapid settling required more rapid recycling), the recycling rate also was larger than that determined with the original dataset. These changes in calibration parameters highlight the uncertainty in critical rates in the Upper Klamath Lake TMDL model and argue for their direct measurement in future data collection to increase confidence in the model predictions.
Development and Application of a Process-based River System Model at a Continental Scale
NASA Astrophysics Data System (ADS)
Kim, S. S. H.; Dutta, D.; Vaze, J.; Hughes, J. D.; Yang, A.; Teng, J.
2014-12-01
Existing global and continental scale river models, mainly designed for integration with global climate models, have very coarse spatial resolutions and lack many important hydrological processes, such as overbank flow, irrigation diversion, and groundwater seepage/recharge, which operate at a much finer resolution. Thus, these models are not suitable for producing streamflow forecasts at fine spatial resolution and water accounts at sub-catchment levels, which are important for water resources planning and management at regional and national scales. A large-scale river system model has been developed and implemented for water accounting in Australia as part of the Water Information Research and Development Alliance between Australia's Bureau of Meteorology (BoM) and CSIRO. The model, developed using a node-link architecture, includes all major hydrological processes, anthropogenic water utilisation and storage routing that influence streamflow in both regulated and unregulated river systems. It includes an irrigation model to compute water diversion for irrigation use and the associated fluxes and stores, and a storage-based floodplain inundation model to compute overbank flow from river to floodplain and the associated floodplain fluxes and stores. An auto-calibration tool has been built into the modelling system to automatically calibrate the model in large river systems using the Shuffled Complex Evolution optimiser and user-defined objective functions. The auto-calibration tool makes the model computationally efficient and practical for large basin applications. The model has been implemented in several large basins in Australia, including the Murray-Darling Basin, covering more than 2 million km2. The results of calibration and validation of the model show highly satisfactory performance. The model has been operationalised in BoM for producing various fluxes and stores for national water accounting.
This paper introduces this newly developed river system model describing the conceptual hydrological framework, methods used for representing different hydrological processes in the model and the results and evaluation of the model performance. The operational implementation of the model for water accounting is discussed.
A comparison of hydrologic models for ecological flows and water availability
Caldwell, Peter V; Kennen, Jonathan G.; Sun, Ge; Kiang, Julie E.; Butcher, John B; Eddy, Michelle C; Hay, Lauren E.; LaFontaine, Jacob H.; Hain, Ernie F.; Nelson, Stacy C; McNulty, Steve G
2015-01-01
Robust hydrologic models are needed to help manage water resources for healthy aquatic ecosystems and reliable water supplies for people, but there is a lack of comprehensive model comparison studies that quantify differences in streamflow predictions among model applications developed to answer management questions. We assessed differences in daily streamflow predictions by four fine-scale models and two regional-scale monthly time step models by comparing model fit statistics and bias in ecologically relevant flow statistics (ERFSs) at five sites in the Southeastern USA. Models were calibrated to different extents, including uncalibrated (level A), calibrated to a downstream site (level B), calibrated specifically for the site (level C) and calibrated for the site with adjusted precipitation and temperature inputs (level D). All models generally captured the magnitude and variability of observed streamflows at the five study sites, and increasing level of model calibration generally improved performance. All models had at least 1 of 14 ERFSs falling outside a ±30% range of hydrologic uncertainty at every site, and ERFSs related to low flows were frequently over-predicted. Our results do not indicate that any specific hydrologic model is superior to the others evaluated at all sites and for all measures of model performance. Instead, we provide evidence that (1) model performance is as likely to be related to calibration strategy as it is to model structure and (2) simple, regional-scale models have comparable performance to the more complex, fine-scale models at a monthly time step.
Early Prediction of Intensive Care Unit-Acquired Weakness: A Multicenter External Validation Study.
Witteveen, Esther; Wieske, Luuk; Sommers, Juultje; Spijkstra, Jan-Jaap; de Waard, Monique C; Endeman, Henrik; Rijkenberg, Saskia; de Ruijter, Wouter; Sleeswijk, Mengalvio; Verhamme, Camiel; Schultz, Marcus J; van Schaik, Ivo N; Horn, Janneke
2018-01-01
An early diagnosis of intensive care unit-acquired weakness (ICU-AW) is often not possible due to impaired consciousness. To avoid a diagnostic delay, we previously developed a prediction model, based on single-center data from 212 patients (development cohort), to predict ICU-AW at 2 days after ICU admission. The objective of this study was to investigate the external validity of the original prediction model in a new, multicenter cohort and, if necessary, to update the model. Newly admitted ICU patients who were mechanically ventilated at 48 hours after ICU admission were included. Predictors were prospectively recorded, and the outcome ICU-AW was defined by an average Medical Research Council score <4. In the validation cohort, consisting of 349 patients, we analyzed performance of the original prediction model by assessment of calibration and discrimination. Additionally, we updated the model in this validation cohort. Finally, we evaluated a new prediction model based on all patients of the development and validation cohort. Of 349 analyzed patients in the validation cohort, 190 (54%) developed ICU-AW. Both model calibration and discrimination of the original model were poor in the validation cohort. The area under the receiver operating characteristics curve (AUC-ROC) was 0.60 (95% confidence interval [CI]: 0.54-0.66). Model updating methods improved calibration but not discrimination. The new prediction model, based on all patients of the development and validation cohort (total of 536 patients) had a fair discrimination, AUC-ROC: 0.70 (95% CI: 0.66-0.75). The previously developed prediction model for ICU-AW showed poor performance in a new independent multicenter validation cohort. Model updating methods improved calibration but not discrimination. The newly derived prediction model showed fair discrimination. This indicates that early prediction of ICU-AW is still challenging and needs further attention.
Barañao, P A; Hall, E R
2004-01-01
Activated Sludge Model No 3 (ASM3) was chosen to model an activated sludge system treating effluents from a mechanical pulp and paper mill. The high COD concentration and the high content of readily biodegradable substrates of the wastewater make this model appropriate for this system. ASM3 was calibrated based on batch respirometric tests using fresh wastewater and sludge from the treatment plant, and on analytical measurements of COD, TSS and VSS. The model, developed for municipal wastewater, was found suitable for fitting a variety of respirometric batch tests, performed at different temperatures and food to microorganism ratios (F/M). Therefore, a set of calibrated parameters, as well as the wastewater COD fractions, was estimated for this industrial wastewater. The majority of the calibrated parameters were in the range of those found in the literature.
NASA Astrophysics Data System (ADS)
Golmohammadi, A.; Jafarpour, B.; M Khaninezhad, M. R.
2017-12-01
Calibration of heterogeneous subsurface flow models leads to ill-posed nonlinear inverse problems, where too many unknown parameters are estimated from limited response measurements. When the underlying parameters form complex (non-Gaussian) structured spatial connectivity patterns, classical variogram-based geostatistical techniques cannot describe the underlying connectivity patterns. Modern pattern-based geostatistical methods that incorporate higher-order spatial statistics are more suitable for describing such complex spatial patterns. Moreover, when the underlying unknown parameters are discrete (geologic facies distribution), conventional model calibration techniques that are designed for continuous parameters cannot be applied directly. In this paper, we introduce a novel pattern-based model calibration method to reconstruct discrete and spatially complex facies distributions from dynamic flow response data. To reproduce complex connectivity patterns during model calibration, we impose a feasibility constraint to ensure that the solution follows the expected higher-order spatial statistics. For model calibration, we adopt a regularized least-squares formulation, involving data mismatch, pattern connectivity, and feasibility constraint terms. Using an alternating directions optimization algorithm, the regularized objective function is divided into a continuous model calibration problem, followed by mapping the solution onto the feasible set. The feasibility constraint to honor the expected spatial statistics is implemented using a supervised machine learning algorithm. The two steps of the model calibration formulation are repeated until the convergence criterion is met. Several numerical examples are used to evaluate the performance of the developed method.
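The alternate-and-project structure of such a formulation can be sketched with a deliberately trivial forward model: a continuous gradient step on the data-mismatch term, followed by a projection onto the feasible (here, binary facies) set. Everything below is illustrative, not the authors' algorithm.

```python
# Toy 'flow response' data on four grid cells; the forward model is the
# identity, which keeps the gradient trivial while preserving the structure
# of the alternating-directions scheme.
observed = [0.0, 1.0, 1.0, 0.0]

def mismatch_grad(m):
    # gradient of the data-mismatch term 0.5 * sum((m_i - d_i)**2)
    return [mi - di for mi, di in zip(m, observed)]

def project_feasible(m):
    # feasibility step: snap each cell to the nearest facies value in {0, 1}
    return [1.0 if mi >= 0.5 else 0.0 for mi in m]

m = [0.5] * 4                        # initial continuous model
for _ in range(20):
    m = [mi - 0.3 * gi for mi, gi in zip(m, mismatch_grad(m))]
facies = project_feasible(m)
print(facies)  # → [0.0, 1.0, 1.0, 0.0]
```

In the paper the projection is far richer (a learned mapping that honors higher-order spatial statistics), but the split into a continuous update followed by a feasibility mapping is the same.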
Calibration and accuracy analysis of a focused plenoptic camera
NASA Astrophysics Data System (ADS)
Zeller, N.; Quint, F.; Stilla, U.
2014-08-01
In this article we introduce new methods for the calibration of depth images from focused plenoptic cameras and validate the results. We start with a brief description of the concept of a focused plenoptic camera and how a depth map can be estimated from the recorded raw image. For this camera, an analytical expression of the depth accuracy is derived for the first time. In the main part of the paper, methods to calibrate a focused plenoptic camera are developed and evaluated. The optical imaging process is calibrated using a method already known from the calibration of traditional cameras. For the calibration of the depth map, two new model-based methods, which make use of the projection concept of the camera, are developed. These new methods are compared to a common curve-fitting approach based on Taylor-series approximation. Both model-based methods show significant advantages compared to the curve-fitting method: they need fewer reference points for calibration and, moreover, supply a function that remains valid beyond the calibration range. In addition, the depth map accuracy of the plenoptic camera was experimentally investigated for different focal lengths of the main lens and compared to the analytical evaluation.
Active Subspace Methods for Data-Intensive Inverse Problems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Qiqi
2017-04-27
The project has developed theory and computational tools to exploit active subspaces to reduce the dimension in statistical calibration problems. This dimension reduction enables MCMC methods to calibrate otherwise intractable models. The same theoretical and computational tools can also reduce the measurement dimension for calibration problems that use large stores of data.
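The active-subspace computation reduces, in its simplest form, to averaging outer products of gradient samples and taking the dominant eigenvector of the result. The toy function below is chosen so the answer is known in advance; real applications replace it with gradients of an expensive simulation.

```python
import random

random.seed(5)

def grad_f(x1, x2):
    # f(x) = 0.5 * (3*x1 + x2)**2, so every gradient lies along w = (3, 1)
    s = 3 * x1 + x2
    return 3 * s, s

# Average the outer products of sampled gradients into a 2x2 matrix C.
c11 = c12 = c22 = 0.0
n = 500
for _ in range(n):
    g1, g2 = grad_f(random.uniform(-1, 1), random.uniform(-1, 1))
    c11 += g1 * g1 / n
    c12 += g1 * g2 / n
    c22 += g2 * g2 / n

# Dominant eigenpair of C = [[c11, c12], [c12, c22]] in closed form.
half_trace = (c11 + c22) / 2
disc = ((c11 - c22) / 2) ** 2 + c12 ** 2
lam = half_trace + disc ** 0.5      # largest eigenvalue
v = (c12, lam - c11)                # corresponding (unnormalized) eigenvector
norm = (v[0] ** 2 + v[1] ** 2) ** 0.5
direction = (v[0] / norm, v[1] / norm)
```

Here the recovered direction is proportional to (3, 1), the single direction along which the toy function varies; an MCMC calibration could then explore that one coordinate instead of the full parameter space.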
Polarimetry With Phased Array Antennas: Theoretical Framework and Definitions
NASA Astrophysics Data System (ADS)
Warnick, Karl F.; Ivashina, Marianna V.; Wijnholds, Stefan J.; Maaskant, Rob
2012-01-01
For phased array receivers, the accuracy with which the polarization state of a received signal can be measured depends on the antenna configuration, array calibration process, and beamforming algorithms. A signal and noise model for a dual-polarized array is developed and related to standard polarimetric antenna figures of merit, and the ideal polarimetrically calibrated, maximum-sensitivity beamforming solution for a dual-polarized phased array feed is derived. A practical polarimetric beamformer solution that does not require exact knowledge of the array polarimetric response is shown to be equivalent to the optimal solution in the sense that when the practical beamformers are calibrated, the optimal solution is obtained. To provide a rough initial polarimetric calibration for the practical beamformer solution, an approximate single-source polarimetric calibration method is developed. The modeled instrumental polarization error for a dipole phased array feed with the practical beamformer solution and single-source polarimetric calibration was -10 dB or lower over the array field of view for elements with alignments perturbed by random rotations with 5 degree standard deviation.
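As a rough illustration of instrumental-polarization correction (not the beamformer solution derived in the paper), a measured two-element signal can be corrected by inverting a 2x2 system response matrix; the matrix entries below are invented.

```python
# Toy dual-polarized system response (Jones-like 2x2 matrix); values invented.
def mat_inv_2x2(m):
    (a, b), (c, d) = m
    det = a * d - b * c
    return ((d / det, -b / det), (-c / det, a / det))

def apply_mat(m, v):
    return (m[0][0] * v[0] + m[0][1] * v[1],
            m[1][0] * v[0] + m[1][1] * v[1])

response = ((1.0, 0.08), (0.05, 0.95))   # off-diagonal terms: cross-pol leakage
true_signal = (1.0, 0.0)                 # purely X-polarized input
measured = apply_mat(response, true_signal)

# Polarimetric calibration: undo the instrumental response.
calibrated = apply_mat(mat_inv_2x2(response), measured)
# calibrated recovers the true signal up to rounding
```

The -10 dB instrumental polarization error quoted in the abstract corresponds to imperfect knowledge of exactly this kind of response matrix.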
Utilization of Expert Knowledge in a Multi-Objective Hydrologic Model Automatic Calibration Process
NASA Astrophysics Data System (ADS)
Quebbeman, J.; Park, G. H.; Carney, S.; Day, G. N.; Micheletty, P. D.
2016-12-01
Spatially distributed continuous simulation hydrologic models have a large number of parameters available for adjustment during the calibration process. Traditional manual calibration of such a modeling system is extremely laborious, which has historically motivated the use of automatic calibration procedures. With a large selection of model parameters, high degrees of objective-space fitness - measured with typical metrics such as Nash-Sutcliffe, Kling-Gupta, RMSE, etc. - can easily be achieved using a range of evolutionary algorithms. A concern with this approach is the high degree of compensatory calibration, with many similarly performing solutions and yet grossly varying parameter-set solutions. To help alleviate this concern, and to mimic manual calibration processes, expert knowledge is proposed for inclusion within the multi-objective functions, which evaluate the parameter decision space. As a result, Pareto solutions are identified with high degrees of fitness, but the parameter sets also maintain and utilize available expert knowledge, resulting in more realistic and consistent solutions. This process was tested using the joint SNOW-17 and Sacramento Soil Moisture Accounting (SAC-SMA) method within the Animas River basin in Colorado. Three different elevation zones, each with a range of parameters, resulted in over 35 model parameters calibrated simultaneously. As a result, high degrees of fitness were achieved, in addition to the development of more realistic and consistent parameter sets such as those typically achieved during manual calibration procedures.
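One simple way to fold expert knowledge into an objective function, sketched below with invented numbers, is to combine a fit metric such as Nash-Sutcliffe efficiency with a penalty for parameter values outside an expert-preferred band. The parameter name and ranges are hypothetical, not taken from the SNOW-17/SAC-SMA setup.

```python
# Invented observed/simulated flows and a hypothetical melt-factor parameter;
# the actual SNOW-17/SAC-SMA parameters and expert ranges are not reproduced.

def nse(sim, obs):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit."""
    mean_obs = sum(obs) / len(obs)
    num = sum((s - o) ** 2 for s, o in zip(sim, obs))
    den = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - num / den

def expert_penalty(value, lo, hi):
    # zero inside the expert-preferred band, growing linearly outside it
    return max(lo - value, value - hi, 0.0)

obs = [2.0, 3.0, 5.0, 4.0]
sim = [2.1, 2.9, 5.2, 3.8]
melt_factor = 7.5                    # candidate value under evaluation

# Penalized objective: a good fit alone is not enough if the parameter value
# is implausible to an expert.
score = nse(sim, obs) - 2.0 * expert_penalty(melt_factor, 1.0, 6.0)
```

With this scoring, a compensatory solution that fits well only by pushing a parameter outside its plausible band is dominated by a slightly worse-fitting but realistic one, which is the effect the abstract describes.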
Parameter estimation uncertainty: Comparing apples and apples?
NASA Astrophysics Data System (ADS)
Hart, D.; Yoon, H.; McKenna, S. A.
2012-12-01
Given a highly parameterized ground water model in which the conceptual model of the heterogeneity is stochastic, an ensemble of inverse calibrations from multiple starting points (MSP) provides an ensemble of calibrated parameters and follow-on transport predictions. However, the multiple calibrations are computationally expensive. Parameter estimation uncertainty can also be modeled by decomposing the parameterization into a solution space and a null space. From a single calibration (single starting point) a single set of parameters defining the solution space can be extracted. The solution space is held constant while Monte Carlo sampling of the parameter set covering the null space creates an ensemble of the null space parameter set. A recently developed null-space Monte Carlo (NSMC) method combines the calibration solution space parameters with the ensemble of null space parameters, creating sets of calibration-constrained parameters for input to the follow-on transport predictions. Here, we examine the consistency between probabilistic ensembles of parameter estimates and predictions using the MSP calibration and the NSMC approaches. A highly parameterized model of the Culebra dolomite previously developed for the WIPP project in New Mexico is used as the test case. A total of 100 estimated fields are retained from the MSP approach and the ensemble of results defining the model fit to the data, the reproduction of the variogram model and prediction of an advective travel time are compared to the same results obtained using NSMC. We demonstrate that the NSMC fields based on a single calibration model can be significantly constrained by the calibrated solution space and the resulting distribution of advective travel times is biased toward the travel time from the single calibrated field. To overcome this, newly proposed strategies to employ a multiple calibration-constrained NSMC approach (M-NSMC) are evaluated. 
Comparison of the M-NSMC and MSP methods suggests that M-NSMC can provide a computationally efficient and practical solution for predictive uncertainty analysis in highly nonlinear and complex subsurface flow and transport models. This material is based upon work supported as part of the Center for Frontiers of Subsurface Energy Security, an Energy Frontier Research Center funded by the U.S. Department of Energy, Office of Science, Office of Basic Energy Sciences under Award Number DE-SC0001114. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
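The null-space idea itself can be shown with a deliberately tiny example: if the data constrain only the sum of two parameters, perturbations along (1, -1) leave the calibration untouched. This is a conceptual toy, not the NSMC implementation used in the study.

```python
import random

random.seed(3)

def forward(p1, p2):
    return p1 + p2                  # toy model: the data see only the sum

obs = 3.0
calibrated = (1.0, 2.0)             # one parameter set that fits the data

# Sample along the null-space direction (1, -1): the fit is preserved exactly,
# yet the parameter values vary across the ensemble.
ensemble = []
for _ in range(100):
    t = random.gauss(0, 0.5)
    ensemble.append((calibrated[0] + t, calibrated[1] - t))

residuals = [abs(forward(p1, p2) - obs) for p1, p2 in ensemble]
```

The bias discussed in the abstract arises because every ensemble member shares the same solution-space component, here the fixed sum p1 + p2 = 3; the M-NSMC strategy varies that component too by starting from multiple calibrations.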
Augmenting epidemiological models with point-of-care diagnostics data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pullum, Laura L.; Ramanathan, Arvind; Nutaro, James J.
2016-04-20
Although adoption of newer Point-of-Care (POC) diagnostics is increasing, there is a significant challenge in using POC diagnostics data to improve epidemiological models. In this work, we propose a method to process zip-code-level POC datasets and apply these processed data to calibrate an epidemiological model. We specifically develop a calibration algorithm using simulated annealing and calibrate a parsimonious equation-based model of modified Susceptible-Infected-Recovered (SIR) dynamics. The results show that parsimonious models are remarkably effective in predicting the dynamics observed in the number of infected patients, and our calibration algorithm is capable of predicting the peak loads observed in POC diagnostics data while staying within reasonable, empirically reported parameter ranges. Additionally, we explore the future use of the calibrated values by testing the correlation between peak load and population density from Census data. Our results show that linearity assumptions for the relationships among various factors can be misleading; therefore, further data sources and analysis are needed to identify relationships between additional parameters and existing calibrated ones. As a result, calibration approaches such as ours can determine the values of newly added parameters along with existing ones and enable policy-makers to make better multi-scale decisions.
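The calibration strategy described above, simulated annealing wrapped around a parsimonious SIR-type model, can be sketched as follows; the discrete-time SIR update, the parameter bounds, and the cooling schedule are illustrative assumptions rather than the authors' exact implementation:

```python
import math
import random

random.seed(1)

def sir_infected(beta, gamma=0.1, i0=0.001, days=60):
    """Discrete-time SIR model; returns the daily infected fraction."""
    s, i, r, series = 1.0 - i0, i0, 0.0, []
    for _ in range(days):
        new_inf = beta * s * i
        new_rec = gamma * i
        s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
        series.append(i)
    return series

# Synthetic stand-in for a processed zip-code-level POC time series,
# generated with a known transmission rate so recovery can be checked.
true_beta = 0.35
observed = sir_infected(true_beta)

def sse(beta):
    """Calibration objective: squared error against the observed series."""
    return sum((m - o) ** 2 for m, o in zip(sir_infected(beta), observed))

# Simulated annealing over beta within a plausible range [0.05, 1.0].
beta, cur = 0.2, sse(0.2)
best_beta, best_sse = beta, cur
temp = 1.0
for _ in range(2000):
    cand = min(1.0, max(0.05, beta + random.gauss(0.0, 0.05)))
    c = sse(cand)
    if c < cur or random.random() < math.exp((cur - c) / temp):
        beta, cur = cand, c
        if c < best_sse:
            best_beta, best_sse = cand, c
    temp *= 0.995                # geometric cooling schedule
```

Fitting against a synthetic series generated with a known beta lets the sketch verify that the annealer recovers a transmission rate close to the truth.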
Online Calibration of Polytomous Items Under the Generalized Partial Credit Model
Zheng, Yi
2016-01-01
Online calibration is a technology-enhanced architecture for item calibration in computerized adaptive tests (CATs). Many CATs are administered continuously over a long term and rely on large item banks. To ensure test validity, these item banks need to be frequently replenished with new items, and these new items need to be pretested before being used operationally. Online calibration dynamically embeds pretest items in operational tests and calibrates their parameters as response data are gradually obtained through continuous test administration. This study extends existing formulas, procedures, and algorithms for dichotomous item response theory models to the generalized partial credit model, a popular model for items scored in more than two categories. A simulation study was conducted to investigate the developed algorithms and procedures under a variety of conditions, including two estimation algorithms, three pretest item selection methods, three seeding locations, two numbers of score categories, and three calibration sample sizes. Results demonstrated acceptable estimation accuracy of the two estimation algorithms in some of the simulated conditions. A variety of findings were also revealed for the interaction effects of the included factors, and corresponding recommendations were made. PMID:29881063
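For context, the generalized partial credit model that the online calibration targets assigns category probabilities from cumulative sums of a(θ − b_k); the item parameters below are hypothetical:

```python
import math

def gpcm_probs(theta, a, b):
    """Category probabilities under the generalized partial credit model.

    theta: examinee ability; a: item discrimination; b: step difficulties
    for steps 1..m (category 0 carries an empty cumulative sum).
    """
    z = [0.0]
    for bk in b:                      # cumulative sums of a * (theta - b_k)
        z.append(z[-1] + a * (theta - bk))
    num = [math.exp(v) for v in z]
    total = sum(num)
    return [v / total for v in num]

# A hypothetical 4-category pretest item at a moderate ability level.
p = gpcm_probs(theta=0.5, a=1.2, b=[-1.0, 0.0, 1.0])
```

In online calibration, these probabilities feed the likelihood that is maximized for each pretest item as responses accumulate.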
Shirazi, Mohammadali; Reddy Geedipally, Srinivas; Lord, Dominique
2017-01-01
Severity distribution functions (SDFs) are used in highway safety to estimate the severity of crashes and to conduct different types of safety evaluations and analyses. Developing a new SDF is a difficult task and demands significant time and resources. To simplify the process, the Highway Safety Manual (HSM) has started to document SDF models for different types of facilities. As such, SDF models have recently been introduced for freeways and ramps in the HSM addendum. However, since these models are fitted and validated using data from a select number of states, they need to be calibrated to local conditions when applied in a new jurisdiction. The HSM provides a methodology to calibrate the models through a scalar calibration factor; however, this methodology for calibrating SDFs was never validated through research. Furthermore, there are no concrete guidelines for selecting a reliable sample size. Using extensive simulation, this paper documents an analysis that examined the bias between the 'true' and 'estimated' calibration factors. Results indicated that as the true calibration factor deviates further from '1', more bias is observed between the 'true' and 'estimated' calibration factors. In addition, simulation studies were performed to determine the calibration sample size for various conditions. It was found that, as the average coefficient of variation (CV) of the 'KAB' and 'C' crashes increases, the analyst needs to collect a larger sample size to calibrate SDF models. Taking this observation into account, sample-size guidelines are proposed based on the average CV of the crash severities used for the calibration process. Copyright © 2016 Elsevier Ltd. All rights reserved.
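The HSM-style scalar calibration factor (total observed over total predicted crashes) and the kind of Monte Carlo study used to examine its bias can be sketched as follows; the gamma spread of site predictions, the negative binomial counts, and all numeric values are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(42)

def estimate_calibration_factor(n_sites, true_c, mean_pred=2.0, disp=0.5):
    """One replication of an HSM-style calibration exercise.

    Site-level predicted crash frequencies are drawn from a gamma spread;
    observed counts are negative binomial (gamma-Poisson mixture) around
    true_c times the prediction. The scalar calibration factor is the
    ratio of total observed to total predicted crashes.
    """
    predicted = rng.gamma(shape=4.0, scale=mean_pred / 4.0, size=n_sites)
    mu = true_c * predicted
    lam = rng.gamma(shape=1.0 / disp, scale=mu * disp)
    observed = rng.poisson(lam)
    return observed.sum() / predicted.sum()

# Monte Carlo check: distribution of the estimated factor for a sample
# of 200 sites when the true factor deviates from 1.
reps = [estimate_calibration_factor(200, true_c=1.4) for _ in range(300)]
mc_mean = float(np.mean(reps))
mc_sd = float(np.std(reps))
```

Repeating such replications across sample sizes and CV levels is the kind of experiment that yields the sample-size guidelines described above.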
Modeling in vivo fluorescence of small animals using TracePro software
NASA Astrophysics Data System (ADS)
Leavesley, Silas; Rajwa, Bartek; Freniere, Edward R.; Smith, Linda; Hassler, Richard; Robinson, J. Paul
2007-02-01
The theoretical modeling of fluorescence excitation, emission, and propagation within living tissue has been a limiting factor in the development and calibration of in vivo small animal fluorescence imagers. To date, no definitive calibration standard, or phantom, has been developed for use with small animal fluorescence imagers. Our work in the theoretical modeling of fluorescence in small animals using solid modeling software is useful in optimizing the design of small animal imaging systems, and in predicting their response to a theoretical model. In this respect, it is also valuable in the design of a fluorescence phantom for use in in vivo small animal imaging. The use of phantoms is a critical step in the testing and calibration of most diagnostic medical imaging systems. Despite this, a realistic, reproducible, and informative phantom has yet to be produced for use in small animal fluorescence imaging. By modeling the theoretical response of various types of phantoms, it is possible to determine which parameters are necessary for accurately modeling fluorescence within inhomogeneous scattering media such as tissue. Here, we present the model that has been developed, the challenges and limitations associated with developing such a model, and the applicability of this model to experimental results obtained in a commercial small animal fluorescence imager.
DOT National Transportation Integrated Search
2011-03-01
This report documents the calibration of the Highway Safety Manual (HSM) safety performance function (SPF) : for rural two-lane two-way roadway segments in Utah and the development of new models using negative : binomial and hierarchical Bayesian mod...
NASA Astrophysics Data System (ADS)
Campforts, Benjamin; Govers, Gerard; Vanacker, Veerle; Tenorio, Gustavo
2013-04-01
River profile development is studied at different timescales: from the response to uplift over millions of years, through steady-state erosion rates over millennia, to the response to a single event such as a major landslide. At present, few attempts have been made to compare data obtained over these various timescales. Therefore, we do not know to what extent data and model results are compatible: do long-term river profile development models yield erosion rates that are compatible with information obtained over shorter time spans, both in terms of absolute rates and spatial patterns? Such comparisons could provide crucial insights into the nature of river development and allow us to assess the confidence we may have when predicting river response at different timescales (e.g. Kirchner et al., 2001). A major issue hampering such comparison is the uncertainty involved in the calibration of long-term river profile development models. Furthermore, calibration data on different timescales are rarely available for a specific region. In this research, we set up a river profile development model similar to the one used by Roberts & White (2010) and successfully calibrated it for the northern Ecuadorian Andes using detailed uplift and sedimentological data. Subsequently, we used the calibrated model to simulate river profile development in the southern Ecuadorian Andes. The calibrated model allows us to reconstruct the Andean uplift history in southern Ecuador, which is characterized by a very strong uplift phase during the last 5 My. Erosion rates derived from the modeled river incision rates were then compared with 10Be-derived basin-wide erosion rates for a series of basins within the study area. We found that the model-inferred erosion rates for the last millennia are broadly compatible with the cosmogenic-derived denudation rates, both in terms of absolute erosion rates and their spatial distribution.
Hence, a relatively simple river profile development model captures the essential controls on long-term landscape development in the studied landscapes. Kirchner, J., Finkel, R., and Riebe, C., 2001, Mountain erosion over 10 yr, 10 ky, and 10 my time scales: Geology, v. 29, no. 7, p. 591-594. Roberts, G., and White, N., 2010, Estimating uplift rate histories from river profiles using African examples: Journal of Geophysical Research, v. 115, p. 1-24.
Bayesian calibration of the Community Land Model using surrogates
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ray, Jaideep; Hou, Zhangshuan; Huang, Maoyi
2014-02-01
We present results from the Bayesian calibration of hydrological parameters of the Community Land Model (CLM), which is often used in climate simulations and Earth system models. A statistical inverse problem is formulated for three hydrological parameters, conditional on observations of latent heat surface fluxes over 48 months. Our calibration method uses polynomial and Gaussian process surrogates of the CLM, and solves the parameter estimation problem using a Markov chain Monte Carlo sampler. Posterior probability densities for the parameters are developed for two sites with different soil and vegetation covers. Our method also allows us to examine the structural error in CLM under two error models. We find that surrogate models can be created for CLM in most cases. The posterior distributions are more predictive than the default parameter values in CLM. Climatologically averaging the observations does not modify the parameters' distributions significantly. The structural error model reveals a correlation time-scale which can be used to identify the physical process that could be contributing to it. While the calibrated CLM has a higher predictive skill, the calibration is under-dispersive.
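The surrogate-plus-MCMC workflow can be sketched in one dimension; the stand-in "model", the polynomial surrogate, and the sampler settings below are illustrative assumptions, not the CLM setup:

```python
import numpy as np

rng = np.random.default_rng(3)

def expensive_model(p):
    """Cheap stand-in for a costly land-model run (parameter -> flux)."""
    return 5.0 * np.sin(p) + p ** 2

true_p = 1.3
sigma = 0.1
obs = expensive_model(true_p) + rng.normal(0.0, sigma, size=48)  # 48 "months"

# Fit a polynomial surrogate from a small design of full-model runs.
design = np.linspace(0.0, 2.0, 9)
surrogate = np.poly1d(np.polyfit(design, expensive_model(design), 4))

def log_post(p):
    """Log posterior: uniform prior on [0, 2], Gaussian likelihood."""
    if not 0.0 <= p <= 2.0:
        return -np.inf
    return -0.5 * float(np.sum((obs - surrogate(p)) ** 2)) / sigma ** 2

# Random-walk Metropolis on the surrogate instead of the full model.
chain, cur, lp = [], 1.0, log_post(1.0)
for _ in range(4000):
    cand = cur + rng.normal(0.0, 0.05)
    lp_cand = log_post(cand)
    if np.log(rng.uniform()) < lp_cand - lp:
        cur, lp = cand, lp_cand
    chain.append(cur)

post_mean = float(np.mean(chain[1000:]))   # discard burn-in
```

The sampler only ever evaluates the surrogate, which is the point of the approach: the thousands of posterior draws cost a handful of full-model runs (here, the nine design points).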
Sen, Rahul; Sharma, Sanjula; Kaur, Gurpreet; Banga, Surinder S
2018-01-31
Very few near-infrared reflectance spectroscopy (NIRS) calibration models are available for non-destructive estimation of seed quality traits in Brassica juncea. Those that are available also fail to adequately discern variation in oleic (C18:1) and linolenic (C18:3) fatty acids, meal glucosinolates and phenols. We report the development of a new NIRS calibration equation that is expected to fill the gaps in the existing NIRS equations. Calibrations were based on the reference values of important quality traits estimated from a purposely selected germplasm set comprising 240 genotypes of B. juncea and 193 of B. napus. We were able to develop optimal NIRS-based calibration models for oil, phenols, glucosinolates, oleic acid, linoleic acid and erucic acid for B. juncea and B. napus. Correlation coefficients (RSQ) of the external validations were greater than 0.7 for the majority of traits, such as oil (0.766, 0.865), phenols (0.821, 0.915), glucosinolates (0.951, 0.986), oleic acid (0.814, 0.810), linoleic acid (0.974, 0.781) and erucic acid (0.963, 0.943) for B. juncea and B. napus, respectively. The results demonstrate the robust predictive power of the developed calibration models for rapid estimation of many quality traits in intact rapeseed-mustard seeds, which will assist plant breeders in effective screening and selection of lines in quality improvement breeding programmes. © 2018 Society of Chemical Industry.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sun, Kaiyu; Yan, Da; Hong, Tianzhen
2014-02-28
Overtime is a common phenomenon around the world. Overtime drives both internal heat gains from occupants, lighting and plug-loads, and HVAC operation during overtime periods. Overtime leads to longer occupancy hours and extended operation of building services systems beyond normal working hours, and thus impacts total building energy use. Current literature lacks methods to model overtime occupancy because overtime is stochastic in nature and varies by individual occupants and by time. To address this gap in the literature, this study aims to develop a new stochastic model based on the statistical analysis of measured overtime occupancy data from an office building. A binomial distribution is used to represent the total number of occupants working overtime, while an exponential distribution is used to represent the duration of overtime periods. The overtime model is used to generate overtime occupancy schedules as an input to the energy model of a second office building. The measured and simulated cooling energy use during the overtime period is compared in order to validate the overtime model. A hybrid approach to energy model calibration is proposed and tested, which combines ASHRAE Guideline 14 for the calibration of the energy model during normal working hours and a proposed Kolmogorov-Smirnov (KS) test for the calibration during overtime. The developed stochastic overtime model and the hybrid calibration approach can be used in building energy simulations to improve the accuracy of results and to better understand the characteristics of overtime in office buildings.
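The two-distribution overtime model can be sketched directly; the population size, overtime probability, and mean duration below are assumed values, not the paper's fitted parameters:

```python
import random

random.seed(7)

N_OCCUPANTS = 40        # assumed office population
P_OVERTIME = 0.25       # assumed per-occupant probability of working late
MEAN_HOURS = 1.5        # assumed mean overtime duration (hours)

def overtime_day():
    """One day's overtime: binomial head count, exponential durations."""
    # Number of occupants staying late ~ Binomial(N, p).
    stayers = sum(1 for _ in range(N_OCCUPANTS)
                  if random.random() < P_OVERTIME)
    # Each stayer's overtime duration ~ Exponential(mean = MEAN_HOURS).
    durations = [random.expovariate(1.0 / MEAN_HOURS) for _ in range(stayers)]
    return stayers, durations

# A year of stochastic overtime schedules for input to an energy model.
days = [overtime_day() for _ in range(250)]
total_stayers = sum(s for s, _ in days)
avg_count = total_stayers / len(days)
avg_hours = sum(sum(d) for _, d in days) / total_stayers
```

Each simulated day can be converted to an hourly occupancy schedule and appended to the normal-hours schedule before the energy simulation is run.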
Calibration and Propagation of Uncertainty for Independence
DOE Office of Scientific and Technical Information (OSTI.GOV)
Holland, Troy Michael; Kress, Joel David; Bhat, Kabekode Ghanasham
This document reports on progress and methods for the calibration and uncertainty quantification of the Independence model developed at UT Austin. The Independence model is an advanced thermodynamic and process model framework for piperazine solutions as a high-performance CO2 capture solvent. Progress is presented within the CCSI standard basic data model inference framework. Recent work has largely focused on the thermodynamic submodels of Independence.
On Inertial Body Tracking in the Presence of Model Calibration Errors
Miezal, Markus; Taetz, Bertram; Bleser, Gabriele
2016-01-01
In inertial body tracking, the human body is commonly represented as a biomechanical model consisting of rigid segments with known lengths and connecting joints. The model state is then estimated via sensor fusion methods based on data from attached inertial measurement units (IMUs). This requires the relative poses of the IMUs w.r.t. the segments—the IMU-to-segment calibrations, subsequently called I2S calibrations—to be known. Since calibration methods based on static poses, movements and manual measurements are still the most widely used, potentially large human-induced calibration errors have to be expected. This work compares three newly developed/adapted extended Kalman filter (EKF) and optimization-based sensor fusion methods with an existing EKF-based method w.r.t. their segment orientation estimation accuracy in the presence of model calibration errors with and without using magnetometer information. While the existing EKF-based method uses a segment-centered kinematic chain biomechanical model and a constant angular acceleration motion model, the newly developed/adapted methods are all based on a free segments model, where each segment is represented with six degrees of freedom in the global frame. Moreover, these methods differ in the assumed motion model (constant angular acceleration, constant angular velocity, inertial data as control input), the state representation (segment-centered, IMU-centered) and the estimation method (EKF, sliding window optimization). In addition to the free segments representation, the optimization-based method also represents each IMU with six degrees of freedom in the global frame. In the evaluation on simulated and real data from a three segment model (an arm), the optimization-based method showed the smallest mean errors, standard deviations and maximum errors throughout all tests. It also showed the lowest dependency on magnetometer information and motion agility. Moreover, it was insensitive w.r.t. 
I2S position and segment length errors in the tested ranges. Errors in the I2S orientations were, however, linearly propagated into the estimated segment orientations. In the absence of magnetic disturbances, severe model calibration errors and fast motion changes, the newly developed IMU centered EKF-based method yielded comparable results with lower computational complexity. PMID:27455266
The Role of Wakes in Modelling Tidal Current Turbines
NASA Astrophysics Data System (ADS)
Conley, Daniel; Roc, Thomas; Greaves, Deborah
2010-05-01
The eventual proper development of arrays of Tidal Current Turbines (TCT) will require a balance which maximizes power extraction while minimizing environmental impacts. Idealized analytical analogues and simple 2-D models are useful tools for investigating questions of a general nature but do not represent a practical tool for application to realistic cases. Some form of 3-D numerical simulation will be required for such applications, and the current project is designed to develop a numerical decision-making tool for use in planning large scale TCT projects. The project is predicated on the use of an existing regional ocean modelling framework (the Regional Ocean Modelling System - ROMS) which is modified to enable the user to account for the effects of TCTs. In such a framework, where mixing processes are highly parametrized, the fidelity of the quantitative results is critically dependent on the parameter values utilized. In light of the early stage of TCT development and the lack of field scale measurements, the calibration of such a model is problematic. In the absence of explicit calibration data sets, the device wake structure has been identified as an efficient feature for model calibration. This presentation discusses efforts to design an appropriate calibration scheme focused on wake decay; the motivation for this approach, the techniques applied, validation results from simple test cases, and limitations will be presented.
On constraining pilot point calibration with regularization in PEST
Fienen, M.N.; Muffels, C.T.; Hunt, R.J.
2009-01-01
Ground water model calibration has made great advances in recent years, with practical tools such as PEST being instrumental in making the latest techniques available to practitioners. As models and calibration tools become more sophisticated, however, the power of these tools can be misapplied, resulting in poor parameter estimates and/or nonoptimally calibrated models that do not suit their intended purpose. Here, we focus on an increasingly common technique for calibrating highly parameterized numerical models: pilot point parameterization with Tikhonov regularization. Pilot points are a popular method for spatially parameterizing complex hydrogeologic systems; however, the additional flexibility offered by pilot points can become problematic if not constrained by Tikhonov regularization. The objective of this work is to explain and illustrate the specific roles played by control variables in the PEST software for Tikhonov regularization applied to pilot points. A recent study encountered difficulties implementing this approach, but through examination of that analysis, insight into underlying sources of potential misapplication can be gained and some guidelines for overcoming them developed. © 2009 National Ground Water Association.
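The effect of Tikhonov regularization on an under-determined pilot-point problem can be sketched with a linear toy model; the dimensions, the "preferred value" target of zero, and the regularization weights are illustrative assumptions, not PEST's actual control variables:

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical setup: 20 pilot-point conductivity values but only
# 8 head observations, so calibration is under-determined.
n_pp, n_obs = 20, 8
X = rng.normal(size=(n_obs, n_pp))          # sensitivity (Jacobian) matrix
k_true = rng.normal(size=n_pp)
d_obs = X @ k_true + rng.normal(0.0, 0.01, size=n_obs)

def solve(reg_weight):
    """Tikhonov-regularized least squares: minimize the data misfit plus
    reg_weight * ||k - k0||^2 with a homogeneous preferred value k0 = 0
    (the 'preferred value' style of regularization)."""
    A = X.T @ X + reg_weight * np.eye(n_pp)
    return np.linalg.solve(A, X.T @ d_obs)

k_weak = solve(1e-8)    # nearly unconstrained pilot points
k_reg = solve(1.0)      # pilot points constrained toward the preferred value

# Regularization trades a slightly worse fit for a tamer parameter field.
misfit_weak = float(np.linalg.norm(X @ k_weak - d_obs))
misfit_reg = float(np.linalg.norm(X @ k_reg - d_obs))
```

The trade-off is monotone in the regularization weight: larger weights shrink the pilot-point field toward the preferred value while the data misfit grows, which is exactly the balance the PEST control variables tune.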
NASA Technical Reports Server (NTRS)
Crawford, Bradley L.
2007-01-01
The angle measurement system (AMS) developed at NASA Langley Research Center (LaRC) is a system with many uses. It was originally developed to check taper fits in the wind tunnel model support system. The system was further developed to measure simultaneous pitch and roll angles using three orthogonally mounted accelerometers (3-axis). This 3-axis arrangement is used as a transfer standard from the calibration standard to the wind tunnel facility. It is generally used to establish model pitch and roll zero and to perform in-situ calibration of model attitude devices. The AMS originally used a laptop computer running DOS-based software but has recently been upgraded to operate in a Windows environment. Other improvements have also been made to the software to enhance its accuracy and add features. This paper discusses the accuracy and calibration methodologies used in this system and some of the features that have contributed to its popularity.
NASA Astrophysics Data System (ADS)
Roberts, S. J.; Foster, L. C.; Pearson, E. J.; Steve, J.; Hodgson, D.; Saunders, K. M.; Verleyen, E.
2016-12-01
Temperature calibration models based on the relative abundances of sedimentary glycerol dialkyl glycerol tetraethers (GDGTs) have been used to reconstruct past temperatures in both marine and terrestrial environments, but have not been widely applied in high-latitude environments. This is mainly because the performance of GDGT-temperature calibrations at lower temperatures and GDGT provenance in many lacustrine settings remain uncertain. To address these issues, we examined surface sediments from 32 Antarctic, sub-Antarctic and Southern Chilean lakes. First, we quantified the GDGT compositions present and then investigated modern-day environmental controls on GDGT composition. GDGTs were found in all 32 lakes studied. Branched GDGTs (brGDGTs) were dominant in 31 lakes, and statistical analyses showed that their composition was strongly correlated with mean summer air temperature (MSAT) rather than pH, conductivity or water depth. Second, we developed the first regional brGDGT-temperature calibration for Antarctic and sub-Antarctic lakes based on four brGDGT compounds (GDGT-Ib, GDGT-II, GDGT-III and GDGT-IIIb). Of these, GDGT-IIIb proved particularly important in cold lacustrine environments. Our brGDGT-Antarctic temperature calibration dataset has an improved statistical performance at low temperatures compared to previous global calibrations (r2=0.83, RMSE=1.45°C, RMSEP-LOO=1.68°C, n=36 samples), highlighting the importance of basing palaeotemperature reconstructions on regional GDGT-temperature calibrations, especially if specific compounds lead to improved model performance. Finally, we applied the new Antarctic brGDGT-temperature calibration to two key lake records from the Antarctic Peninsula and South Georgia. In both, downcore temperature reconstructions show similarities to known Holocene warm periods, providing proof of concept for the new Antarctic calibration model.
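A brGDGT-temperature calibration of this form is, at its core, a regression of temperature on a few fractional abundances; the sketch below uses synthetic data with invented weights, not the Antarctic training set:

```python
import numpy as np

rng = np.random.default_rng(11)

# Synthetic stand-in for a surface-sediment training set: fractional
# abundances of four brGDGT compounds (each row sums to 1) paired with
# mean summer air temperature (MSAT). Weights and noise are invented.
n = 36
abund = rng.dirichlet(alpha=[2.0, 2.0, 2.0, 2.0], size=n)  # Ib, II, III, IIIb
true_w = np.array([18.0, 2.0, -7.0, 13.0])                 # assumed degC weights
msat = abund @ true_w + rng.normal(0.0, 0.8, size=n)

# Ordinary least squares on the four compounds; because the abundances
# sum to 1, a separate intercept would be redundant and is omitted.
coef, *_ = np.linalg.lstsq(abund, msat, rcond=None)
pred = abund @ coef

rmse = float(np.sqrt(np.mean((msat - pred) ** 2)))
ss_res = float(np.sum((msat - pred) ** 2))
ss_tot = float(np.sum((msat - msat.mean()) ** 2))
r2 = 1.0 - ss_res / ss_tot
```

Once fitted, applying the coefficient vector to downcore abundance measurements yields the palaeotemperature reconstruction; cross-validated error (the RMSEP-LOO quoted above) is the more honest skill measure than in-sample RMSE.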
Surrogate Based Uni/Multi-Objective Optimization and Distribution Estimation Methods
NASA Astrophysics Data System (ADS)
Gong, W.; Duan, Q.; Huo, X.
2017-12-01
Parameter calibration has been demonstrated as an effective way to improve the performance of dynamic models, such as hydrological models, land surface models, weather and climate models etc. Traditional optimization algorithms usually require a huge number of model evaluations, making dynamic model calibration very difficult, or even computationally prohibitive. With the help of a series of recently developed adaptive surrogate-modelling based optimization methods - the uni-objective optimization method ASMO, the multi-objective optimization method MO-ASMO, and the probability distribution estimation method ASMO-PODE - the number of model evaluations can be significantly reduced to several hundred, making it possible to calibrate very expensive dynamic models, such as regional high-resolution land surface models, weather forecast models such as WRF, and intermediate complexity earth system models such as LOVECLIM. This presentation provides a brief introduction to the common framework of the adaptive surrogate-based optimization algorithms ASMO, MO-ASMO and ASMO-PODE, a case study of Common Land Model (CoLM) calibration in the Heihe river basin in Northwest China, and an outlook on potential applications of the surrogate-based optimization methods.
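The common framework of adaptive surrogate-based optimization can be sketched in one dimension: fit a cheap surrogate to a small design, optimize the surrogate, spend a true-model evaluation only at the surrogate's optimum, and refit. The stand-in objective and polynomial surrogate below are illustrative assumptions, not the ASMO implementation:

```python
import numpy as np

def expensive_model(x):
    """Stand-in for a costly dynamic-model objective (1-D, bumpy)."""
    return (x - 0.62) ** 2 + 0.05 * np.sin(12.0 * x)

# Initial space-filling design and true-model responses.
xs = list(np.linspace(0.0, 1.0, 5))
ys = [float(expensive_model(x)) for x in xs]

# Adaptive loop: fit a cheap polynomial surrogate, optimize it on a fine
# grid, evaluate the true model only at the surrogate's optimum, refit.
for _ in range(10):
    coeffs = np.polyfit(xs, ys, deg=min(4, len(xs) - 1))
    grid = np.linspace(0.0, 1.0, 1001)
    x_new = float(grid[np.argmin(np.polyval(coeffs, grid))])
    xs.append(x_new)
    ys.append(float(expensive_model(x_new)))

best_x = xs[int(np.argmin(ys))]
n_true_evals = len(xs)       # 15 expensive evaluations in total
```

The budget accounting is the point: the expensive model is run 15 times in total, whereas a conventional optimizer searching the same interval would typically need orders of magnitude more evaluations.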
A Bayesian approach for calibrating probability judgments
NASA Astrophysics Data System (ADS)
Firmino, Paulo Renato A.; Santana, Nielson A.
2012-10-01
Eliciting experts' opinions has been one of the main alternatives for addressing paucity of data. In the vanguard of this area is the development of calibration models (CMs). CMs are models dedicated to overcome miscalibration, i.e. judgment biases reflecting deficient strategies of reasoning adopted by the expert when inferring about an unknown. One of the main challenges of CMs is to determine how and when to intervene against miscalibration, in order to enhance the tradeoff between costs (time spent with calibration processes) and accuracy of the resulting models. The current paper dedicates special attention to this issue by presenting a dynamic Bayesian framework for monitoring, diagnosing, and handling miscalibration patterns. The framework is based on Beta-, Uniform, or Triangular-Bernoulli models and classes of judgmental calibration theories. Issues regarding the usefulness of the proposed framework are discussed and illustrated via simulation studies.
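The monitoring idea can be sketched with the simplest of the cited families, a Beta-Bernoulli model; the stated confidence level and the outcome record below are hypothetical:

```python
# Monitoring an expert's calibration with a Beta-Bernoulli model: events
# the expert judged "90% likely" are treated as Bernoulli trials whose
# unknown success rate receives a conjugate Beta posterior.

stated_confidence = 0.9

# Hypothetical track record: outcomes of 20 events the expert judged
# 90% likely (1 = the event occurred).
outcomes = [1, 1, 0, 1, 1, 1, 0, 1, 1, 0, 1, 1, 1, 0, 1, 1, 1, 0, 1, 1]

# Beta(1, 1) prior; conjugate update with the observed hits and misses.
alpha = 1 + sum(outcomes)
beta = 1 + len(outcomes) - sum(outcomes)
posterior_mean = alpha / (alpha + beta)

# A large gap between the posterior rate and the stated confidence flags
# miscalibration (here: overconfidence) worth intervening against.
miscalibration = stated_confidence - posterior_mean
```

Updating the posterior after each new judged event gives the dynamic monitoring the framework describes: intervention is triggered only when the posterior clearly separates from the stated confidence, balancing calibration cost against accuracy.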
NREL Improves Building Energy Simulation Programs Through Diagnostic Testing (Fact Sheet)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
2012-01-01
This technical highlight describes NREL research to develop the Building Energy Simulation Test for Existing Homes (BESTEST-EX) to increase the quality and accuracy of energy analysis tools for the building retrofit market. Researchers at the National Renewable Energy Laboratory (NREL) have developed a new test procedure to increase the quality and accuracy of energy analysis tools for the building retrofit market. The Building Energy Simulation Test for Existing Homes (BESTEST-EX) is a test procedure that enables software developers to evaluate the performance of their audit tools in modeling energy use and savings in existing homes when utility bills are available for model calibration. Similar to NREL's previous energy analysis tests, such as HERS BESTEST and other BESTEST suites included in ANSI/ASHRAE Standard 140, BESTEST-EX compares software simulation findings to reference results generated with state-of-the-art simulation tools such as EnergyPlus, SUNREL, and DOE-2.1E. The BESTEST-EX methodology: (1) Tests software predictions of retrofit energy savings in existing homes; (2) Ensures building physics calculations and utility bill calibration procedures perform to a minimum standard; and (3) Quantifies impacts of uncertainties in input audit data and occupant behavior. BESTEST-EX includes building physics and utility bill calibration test cases. The diagram illustrates the utility bill calibration test cases. Participants are given input ranges and synthetic utility bills. Software tools use the utility bills to calibrate key model inputs and predict energy savings for the retrofit cases. Participant energy savings predictions using calibrated models are compared to NREL predictions using state-of-the-art building energy simulation programs.
In-flight calibration/validation of the ENVISAT/MWR
NASA Astrophysics Data System (ADS)
Tran, N.; Obligis, E.; Eymard, L.
2003-04-01
Retrieval algorithms for wet tropospheric correction, integrated vapor and liquid water contents, and atmospheric attenuations of backscattering coefficients in Ku and S band have been developed using a database of geophysical parameters from global analyses of a meteorological model and the corresponding brightness temperatures and backscattering cross-sections simulated by a radiative transfer model. Meteorological data correspond to 12-hour predictions from the European Center for Medium-range Weather Forecasts (ECMWF) model. Relationships between satellite measurements and geophysical parameters are determined using a statistical method. The quality of the retrieval algorithms therefore depends on the representativity of the database, the accuracy of the radiative transfer model used for the simulations, and finally on the quality of the inversion model. The database has been built using the latest version of the ECMWF forecast model, which has been operationally run since November 2000. The 60 levels in the model allow a complete description of the troposphere/stratosphere profiles, and the horizontal resolution is now half a degree. The radiative transfer model is the emissivity model developed at the Université Catholique de Louvain [Lemaire, 1998], coupled to an atmospheric model [Liebe et al, 1993] for gaseous absorption. For the inversion, we have replaced the classical log-linear regression with a neural network inversion. For Envisat, the backscattering coefficient in Ku band is used in the different algorithms to take into account the surface roughness, as is done with the 18 GHz channel for the TOPEX algorithms or with an additional wind speed term for the ERS2 algorithms. The in-flight calibration/validation of the Envisat radiometer has been performed with the tuning of 3 internal parameters (the transmission coefficient of the reflector, the sky horn feed transmission coefficient and the main antenna transmission coefficient).
First, an adjustment of the ERS2 brightness temperatures to the simulations for the 2000/2001 version of the ECMWF model has been applied. Then, Envisat brightness temperatures have been calibrated on these adjusted ERS2 values. The advantages of this calibration approach are that: i) such a method provides the relative discrepancy with respect to the simulation chain; the results, obtained simultaneously for several radiometers (we repeat the same analysis with the TOPEX and JASON radiometers), can be used to detect significant calibration problems (more than 2-3 K); ii) the retrieval algorithms have been developed using the same meteorological model (the 2000/2001 version of the ECMWF model) and the same radiative transfer model as the calibration process, ensuring consistency between calibration and retrieval processing; retrieval parameters are then optimized; iii) the calibration of the Envisat brightness temperatures against the 2000/2001 version of the ECMWF model, as well as the recommendation to use the same model as a reference to correct ERS2 brightness temperatures, allow the use of the same retrieval algorithms for the two missions, providing continuity between the two; iv) by comparison with other calibration methods (such as systematically calibrating an instrument or its products against those of a previous mission), this method is more satisfactory since improvements in terms of technology, modeling and retrieval processing are taken into account. For the validation of the brightness temperatures, we use either a direct comparison with measurements provided by other instruments in similar channels, or monitoring over stable areas (coldest ocean points, stable continental areas).
The validation of the wet tropospheric correction can also be performed by comparison with other radiometer products, but the only true validation relies on comparing coincident retrieved products with in-situ measurements from radiosondes.
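As a concrete illustration of the inversion step, the sketch below fits a log-linear regression of the kind the abstract says was replaced by the neural-network inversion. The functional form, the channel set, and all coefficients are illustrative assumptions applied to synthetic data, not the operational Envisat algorithm.

```python
# Hedged sketch: classical log-linear retrieval fitted to a synthetic
# database. The coefficients and channels are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
n = 500
tb_low = rng.uniform(140.0, 200.0, n)    # low-frequency channel (K)
tb_high = rng.uniform(150.0, 250.0, n)   # water-vapour-sensitive channel (K)
sigma0 = rng.uniform(8.0, 14.0, n)       # Ku-band backscatter (dB)

# Synthetic "truth": wet tropospheric correction generated from assumed
# coefficients plus noise, mimicking a radiative-transfer-built database.
true_coef = np.array([2.0, -0.8, 1.5, -0.05])
X = np.column_stack([np.ones(n), np.log(285.0 - tb_low),
                     np.log(285.0 - tb_high), sigma0])
wtc = X @ true_coef + rng.normal(0.0, 0.01, n)

# Fit the log-linear model by least squares (the "classical" inversion).
coef, *_ = np.linalg.lstsq(X, wtc, rcond=None)
residual_rms = float(np.sqrt(np.mean((X @ coef - wtc) ** 2)))
print(coef.round(2), round(residual_rms, 3))
```

The residual RMS recovers the injected noise level, which is the sense in which the retrieval quality depends on the representativeness of the database.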
Calibration of a distributed hydrologic model for six European catchments using remote sensing data
NASA Astrophysics Data System (ADS)
Stisen, S.; Demirel, M. C.; Mendiguren González, G.; Kumar, R.; Rakovec, O.; Samaniego, L. E.
2017-12-01
While observed streamflow has been the single reference for most conventional hydrologic model calibration exercises, the availability of spatially distributed remote sensing observations provides new possibilities for multi-variable calibration that assesses both the spatial and the temporal variability of different hydrologic processes. In this study, we first identify the key transfer parameters of the mesoscale Hydrologic Model (mHM) controlling both the discharge and the spatial distribution of actual evapotranspiration (AET) across six central European catchments (Elbe, Main, Meuse, Moselle, Neckar and Vienne). These catchments were selected for their limited topographical and climatic variability, which makes it possible to evaluate the effect of spatial parameterization on the simulated evapotranspiration patterns. We develop a European-scale remote-sensing-based actual evapotranspiration dataset at 1-km grid scale, driven primarily by land surface temperature observations from MODIS using the TSEB approach. Using the observed AET maps, we analyze the potential benefits of incorporating spatial patterns from MODIS data to calibrate the mHM model. The model allows calibrating one basin at a time or all basins together using its unique structure and multi-parameter regionalization approach. Results will indicate any tradeoffs between spatial-pattern and discharge simulation during model calibration and through validation against independent internal discharge locations. Moreover, the added value for internal water balances will be analyzed.
Computer simulation of storm runoff for three watersheds in Albuquerque, New Mexico
Knutilla, R.L.; Veenhuis, J.E.
1994-01-01
Rainfall-runoff data from three watersheds were selected for calibration and verification of the U.S. Geological Survey's Distributed Routing Rainfall-Runoff Model. The watersheds chosen are residentially developed. The conceptually based model uses an optimization process that adjusts selected parameters to achieve the best fit between measured and simulated runoff volumes and peak discharges. Three of these optimization parameters represent soil-moisture conditions, three represent infiltration, and one accounts for effective impervious area. Each watershed modeled was divided into overland-flow segments and channel segments. The overland-flow segments were further subdivided to reflect pervious and impervious areas. Each overland-flow and channel segment was assigned representative values of area, slope, percentage of imperviousness, and roughness coefficients. Rainfall-runoff data for each watershed were separated into two sets for use in calibration and verification. For model calibration, seven input parameters were optimized to attain a best fit of the data. For model verification, parameter values were set using values from model calibration. The standard error of estimate for calibration of runoff volumes ranged from 19 to 34 percent, and for peak discharge calibration ranged from 27 to 44 percent. The standard error of estimate for verification of runoff volumes ranged from 26 to 31 percent, and for peak discharge verification ranged from 31 to 43 percent.
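The fit statistics above can be reproduced in miniature. The sketch below computes a standard error of estimate in percent; the exact formula (RMS of residuals expressed as a percentage of the observed mean) and the storm volumes are assumptions for illustration, since the report's precise definition is not reproduced here.

```python
# Hedged sketch: one common form of the standard error of estimate,
# applied to hypothetical runoff volumes.
import math

def standard_error_pct(observed, simulated):
    n = len(observed)
    rms = math.sqrt(sum((s - o) ** 2 for o, s in zip(observed, simulated)) / n)
    return 100.0 * rms / (sum(observed) / n)

# Hypothetical measured and simulated runoff volumes (inches) for 5 storms.
obs = [0.42, 0.55, 0.31, 0.60, 0.48]
sim = [0.38, 0.61, 0.35, 0.52, 0.50]
print(round(standard_error_pct(obs, sim), 1))  # → 11.0
```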
Calibrating Charisma: The many-facet Rasch model for leader measurement and automated coaching
NASA Astrophysics Data System (ADS)
Barney, Matt
2016-11-01
No one is a leader unless others follow. Consequently, leadership is fundamentally a social judgment construct and may be best measured via a Many-Facet Rasch Model (MFRM) designed for this purpose. Uniquely, the MFRM allows objective, accurate, and precise estimation of leader attributes, along with identification of rater biases and other distortions of the available information. This presentation outlines a mobile computer-adaptive measurement system that measures and develops charisma, among other attributes. The approach calibrates and mass-personalizes artificially intelligent, Rasch-calibrated electronic coaching that is neither too hard nor too easy but “just right” to help each unique leader develop improved charisma.
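A minimal sketch of the dichotomous many-facet Rasch probability underlying this kind of measurement: the log-odds of a positive rating are modelled as leader ability minus item difficulty minus rater severity, which is how rater bias enters the model as a separate facet. The parameter values below are hypothetical.

```python
# Hedged sketch of a dichotomous many-facet Rasch probability.
import math

def mfrm_probability(ability, item_difficulty, rater_severity):
    # Log-odds decompose additively across the facets (logits).
    logit = ability - item_difficulty - rater_severity
    return 1.0 / (1.0 + math.exp(-logit))

# A leader 1 logit above average, on an average item, scored by a harsh
# rater (severity 0.5 logits). All values are illustrative.
p = mfrm_probability(ability=1.0, item_difficulty=0.0, rater_severity=0.5)
print(round(p, 3))  # → 0.622
```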
Janssen, Daniël M C; van Kuijk, Sander M J; d'Aumerie, Boudewijn B; Willems, Paul C
2018-05-16
A prediction model for surgical site infection (SSI) after spine surgery was developed in 2014 by Lee et al. The model computes an individual estimate of the probability of SSI after spine surgery based on the patient's comorbidity profile and the invasiveness of surgery. Before any prediction model can be validly implemented in daily medical practice, it should be externally validated to assess how it performs in patients sampled independently of the derivation cohort. We included 898 consecutive patients who underwent instrumented thoracolumbar spine surgery. Overall performance was quantified using Nagelkerke's R² statistic, and discriminative ability was quantified as the area under the receiver operating characteristic curve (AUC). We computed the slope of the calibration plot to judge prediction accuracy. Sixty patients developed an SSI. The overall performance of the prediction model in our population was poor: Nagelkerke's R² was 0.01. The AUC was 0.61 (95% confidence interval (CI) 0.54-0.68). The estimated slope of the calibration plot was 0.52. The previously published prediction model thus showed poor performance in our academic external validation cohort. To predict SSI after instrumented thoracolumbar spine surgery in the present population, a better-fitting prediction model should be developed.
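The AUC reported above can be computed without any model-fitting library: it equals the probability that a randomly chosen patient with SSI receives a higher predicted risk than one without (the Mann-Whitney formulation). The labels and predicted risks below are hypothetical.

```python
# Hedged sketch: AUC as the Mann-Whitney concordance statistic.
def auc(labels, risks):
    pos = [r for l, r in zip(labels, risks) if l == 1]
    neg = [r for l, r in zip(labels, risks) if l == 0]
    # Count concordant case/control pairs; ties score half.
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

labels = [1, 0, 1, 0, 0, 1, 0, 0]          # 1 = developed SSI (hypothetical)
risks  = [0.30, 0.10, 0.25, 0.20, 0.05, 0.15, 0.18, 0.22]
print(auc(labels, risks))  # → 0.8
```

An AUC near 0.5 means no discrimination; the 0.61 reported above is only slightly better than chance, consistent with the authors' conclusion.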
Simulation of natural flows in major river basins in Alabama
Hunt, Alexandria M.; García, Ana María
2014-01-01
The Office of Water Resources (OWR) in the Alabama Department of Economic and Community Affairs (ADECA) is charged with the assessment of the State’s water resources. This study developed a watershed model for the major river basins that are within Alabama or that cross Alabama’s borders, which serves as a planning tool for water-resource decision makers. The watershed model chosen to assess the natural amount of available water was the Precipitation-Runoff Modeling System (PRMS). Models were configured and calibrated for the following four river basins: Mobile, Gulf of Mexico, Middle Tennessee, and Chattahoochee. These models required calibrating unregulated U.S. Geological Survey (USGS) streamflow gaging stations to estimate natural flows, with emphasis on low-flow calibration. The target calibration criteria required that errors be within: (1) ±10 percent for total-streamflow volume, (2) ±10 percent for low-flow volume, (3) ±15 percent for high-flow volume, (4) ±30 percent for summer volume, and (5) above 0.5 for the correlation coefficient (R²). Seventy-one of the 90 calibration stations in the watershed models for the four major river basins within Alabama met the target calibration criteria. Variability in model performance can be attributed to limitations in correctly representing certain hydrologic conditions characteristic of some of the ecoregions in Alabama. Ecoregions consisting of predominantly clayey soils and (or) low topographic relief yielded less successful calibration results, whereas ecoregions consisting of loamy and sandy soils and (or) high topographic relief yielded more successful results. Results indicate that the model does well in hilly regions with sandy soils because of rapid surface runoff and more direct interaction with subsurface flow.
Model Calibration in Watershed Hydrology
NASA Technical Reports Server (NTRS)
Yilmaz, Koray K.; Vrugt, Jasper A.; Gupta, Hoshin V.; Sorooshian, Soroosh
2009-01-01
Hydrologic models use relatively simple mathematical equations to conceptualize and aggregate the complex, spatially distributed, and highly interrelated water, energy, and vegetation processes in a watershed. A consequence of process aggregation is that the model parameters often do not represent directly measurable entities and must, therefore, be estimated using measurements of the system inputs and outputs. During this process, known as model calibration, the parameters are adjusted so that the behavior of the model approximates, as closely and consistently as possible, the observed response of the hydrologic system over some historical period of time. This Chapter reviews the current state-of-the-art of model calibration in watershed hydrology with special emphasis on our own contributions in the last few decades. We discuss the historical background that has led to current perspectives, and review different approaches for manual and automatic single- and multi-objective parameter estimation. In particular, we highlight the recent developments in the calibration of distributed hydrologic models using parameter dimensionality reduction sampling, parameter regularization and parallel computing.
NASA Technical Reports Server (NTRS)
Estefan, J. A.; Sovers, O. J.
1994-01-01
The standard tropospheric calibration model implemented in the operational Orbit Determination Program is the seasonal model developed by C. C. Chao in the early 1970s. The seasonal model has seen only slight modification since its release, particularly in the format and content of the zenith delay calibrations. Chao's most recent standard mapping tables, which are used to project the zenith delay calibrations along the station-to-spacecraft line of sight, have not been modified since they were first published in late 1972. This report focuses principally on proposed upgrades to the zenith delay mapping process, although modeling improvements to the zenith delay calibration process are also discussed. A number of candidate approximation models for the tropospheric mapping are evaluated, including the semi-analytic mapping function of Lanyi and the semi-empirical mapping functions of Davis et al. ('CfA-2.2'), of Ifadis (global solution model), of Herring ('MTT'), and of Niell ('NMF'). All of the candidate mapping functions are superior to the Chao standard mapping tables and approximation formulas when evaluated against the current Deep Space Network Mark 3 intercontinental very long baseline interferometry database.
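Several of the candidates above (Herring's MTT, Niell's NMF) share a continued-fraction mapping-function form whose numerator normalizes the mapping factor to exactly 1 at zenith. The sketch below shows that form; the a, b, c coefficients are hypothetical placeholders, not values from any of the cited models.

```python
# Hedged sketch: normalized continued-fraction tropospheric mapping
# function. Coefficient values are illustrative only.
import math

def mapping_function(elev_deg, a, b, c):
    s = math.sin(math.radians(elev_deg))
    numer = 1.0 + a / (1.0 + b / (1.0 + c))   # normalizes m(90°) to 1
    denom = s + a / (s + b / (s + c))
    return numer / denom

a, b, c = 1.2e-3, 2.9e-3, 62.6e-3  # hypothetical coefficients
print(round(mapping_function(90.0, a, b, c), 6))  # → 1.0 at zenith
print(round(mapping_function(10.0, a, b, c), 3))  # grows toward the horizon
```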
Sensitivity Analysis of Fatigue Crack Growth Model for API Steels in Gaseous Hydrogen.
Amaro, Robert L; Rustagi, Neha; Drexler, Elizabeth S; Slifka, Andrew J
2014-01-01
A model to predict fatigue crack growth of API pipeline steels in high-pressure gaseous hydrogen has been developed and is presented elsewhere. The model currently has several parameters that must be calibrated for each pipeline steel of interest. This work provides a sensitivity analysis of the model parameters in order to provide (a) insight into the underlying mathematical and mechanistic aspects of the model, and (b) guidance for model calibration of other API steels.
NASA Astrophysics Data System (ADS)
Cannata, Massimiliano; Neumann, Jakob; Cardoso, Mirko; Rossetto, Rudy; Foglia, Laura; Borsi, Iacopo
2017-04-01
In situ time-series are an important aspect of environmental modelling, especially with the advancement of numerical simulation techniques and increased model complexity. In order to make use of the increasing data available through the requirements of the EU Water Framework Directive, the FREEWAT GIS environment incorporates the newly developed Observation Analysis Tool (OAT) for time-series analysis. The tool is used to import time-series data into QGIS from local CSV files, from online sensors using the istSOS service, or from MODFLOW model result files, and it enables visualisation, pre-processing of data for model development, and post-processing of model results. OAT can be used as a pre-processor for calibration observations, integrating the creation of observations for calibration directly from sensor time-series. The tool consists of an expandable Python library of processing methods and an interface integrated into the QGIS FREEWAT plug-in, which includes a large number of modelling capabilities, data management tools and calibration capacity.
Developing an Abaqus *HYPERFOAM Model for M9747 (4003047) Cellular Silicone Foam
DOE Office of Scientific and Technical Information (OSTI.GOV)
Siranosian, Antranik A.; Stevens, R. Robert
This report documents work done to develop an Abaqus *HYPERFOAM hyperelastic model for M9747 (4003047) cellular silicone foam for use in quasi-static analyses at ambient temperature. Experimental data from acceptance tests for 'Pad A' conducted at the Kansas City Plant (KCP) were used to calibrate the model. The data include gap (relative displacement) and load measurements from three locations on the pad. Thirteen sets of data, from pads with different serial numbers, were provided. The thirty-nine gap-load curves were extracted from the thirteen supplied Excel spreadsheets and analyzed, and from those thirty-nine, one set of data, representing a qualitative mean, was chosen to calibrate the model. The data were converted from gap and load to nominal (engineering) strain and nominal stress in order to implement them in Abaqus. Strain computations required initial pad thickness estimates. An Abaqus model of a right-circular cylinder was used to evaluate and calibrate the *HYPERFOAM model.
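The gap/load-to-nominal-strain/stress conversion described above is a simple scaling by initial thickness and loaded area. The sketch below shows it with hypothetical values; the report's actual pad thickness estimates and geometry are not reproduced here.

```python
# Hedged sketch: converting gap/load measurements to nominal
# (engineering) strain and stress. All numbers are hypothetical.
def nominal_strain(gap, initial_thickness):
    # engineering strain from relative displacement of the pad faces
    return gap / initial_thickness

def nominal_stress(load, area):
    # engineering stress referenced to the undeformed loaded area
    return load / area

thickness0 = 2.5   # mm, assumed initial pad thickness
area = 100.0       # mm^2, assumed loaded area
print(nominal_strain(0.5, thickness0), nominal_stress(250.0, area))  # → 0.2 2.5
```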
Computer Vision Assisted Virtual Reality Calibration
NASA Technical Reports Server (NTRS)
Kim, W.
1999-01-01
A computer vision assisted semi-automatic virtual reality (VR) calibration technology has been developed that can accurately match a virtual environment of graphically simulated three-dimensional (3-D) models to the video images of the real task environment.
Calibration power of the Braden scale in predicting pressure ulcer development.
Chen, Hong-Lin; Cao, Ying-Juan; Wang, Jing; Huai, Bao-Sha
2016-11-02
Calibration is the degree of correspondence between the estimated probability produced by a model and the actual observed probability. The aim of this study was to investigate the calibration power of the Braden scale in predicting pressure ulcer (PU) development. A retrospective analysis was performed among consecutive patients in 2013. The patients were separated into a training group and a validation group. The predicted incidence was calculated using a logistic regression model in the training group, and the Hosmer-Lemeshow test was used to assess goodness of fit. In the validation cohort, the observed and predicted incidences were compared with the chi-square (χ²) goodness-of-fit test to assess calibration power. We included 2585 patients in the study, of whom 78 (3.0%) developed a PU. Differences in patient characteristics between the training and validation groups were non-significant (p>0.05). In the training group, the logistic regression model for predicting pressure ulcers was Logit(P) = -0.433*Braden score + 2.616. The Hosmer-Lemeshow test indicated a lack of fit (χ²=13.472; p=0.019). In the validation group, the predicted pressure ulcer incidence also did not fit the observed incidence well (χ²=42.154, p<0.001 by Braden scores; and χ²=17.223, p=0.001 by Braden scale risk classification). The Braden scale has low calibration power in predicting PU formation.
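The Hosmer-Lemeshow statistic used above can be sketched as follows: patients are binned by predicted risk, and observed versus expected event counts are compared per bin. The data and the 4-bin grouping below are hypothetical (the conventional choice is deciles of risk).

```python
# Hedged sketch of the Hosmer-Lemeshow goodness-of-fit statistic.
def hosmer_lemeshow(pred, outcome, groups=4):
    pairs = sorted(zip(pred, outcome))           # sort patients by risk
    size = len(pairs) // groups
    chi2 = 0.0
    for g in range(groups):
        chunk = pairs[g * size:(g + 1) * size if g < groups - 1 else None]
        n = len(chunk)
        obs = sum(y for _, y in chunk)           # observed events in bin
        exp = sum(p for p, _ in chunk)           # expected events in bin
        pbar = exp / n
        chi2 += (obs - exp) ** 2 / (n * pbar * (1.0 - pbar))
    return chi2

# Hypothetical predicted risks and outcomes for 8 patients.
pred = [0.2, 0.2, 0.4, 0.4, 0.6, 0.6, 0.8, 0.8]
outcome = [0, 0, 0, 1, 1, 1, 1, 1]
print(round(hosmer_lemeshow(pred, outcome), 2))  # → 2.42
```

A large statistic relative to the χ² distribution with groups−2 degrees of freedom (as in the χ²=13.472, p=0.019 above) signals lack of fit.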
Dudley, Robert W.; Nielsen, Martha G.
2011-01-01
The U.S. Geological Survey (USGS) began a study in 2008 to investigate anticipated changes in summer streamflows and stream temperatures in four coastal Maine river basins and the potential effects of those changes on populations of endangered Atlantic salmon. To achieve this purpose, it was necessary to characterize the quantity and timing of streamflow in these rivers by developing and evaluating a distributed-parameter watershed model for a part of each river basin by using the USGS Precipitation-Runoff Modeling System (PRMS). The GIS (geographic information system) Weasel, a USGS software application, was used to delineate the four study basins and their many subbasins, and to derive parameters for their geographic features. The models were calibrated using a four-step optimization procedure in which model output was evaluated against four datasets for calibrating solar radiation, potential evapotranspiration, annual and seasonal water balances, and daily streamflows. The calibration procedure involved thousands of model runs that used the USGS software application Luca (Let us calibrate). Luca uses the Shuffled Complex Evolution (SCE) global search algorithm to calibrate the model parameters. The calibrated watershed models performed satisfactorily, in that Nash-Sutcliffe efficiency (NSE) statistic values for the calibration periods ranged from 0.59 to 0.75 (on a scale of negative infinity to 1) and NSE statistic values for the evaluation periods ranged from 0.55 to 0.73. The calibrated watershed models simulate daily streamflow at many locations in each study basin. 
These models enable natural resources managers to characterize the timing and amount of streamflow in order to support a variety of water-resources efforts including water-quality calculations, assessments of water use, modeling of population dynamics and migration of Atlantic salmon, modeling and assessment of habitat, and simulation of anticipated changes to streamflow and water temperature resulting from changes forecast for air temperature and precipitation.
Uncertainty Analysis of Instrument Calibration and Application
NASA Technical Reports Server (NTRS)
Tripp, John S.; Tcheng, Ping
1999-01-01
Experimental aerodynamic researchers require estimated precision and bias uncertainties of measured physical quantities, typically at 95 percent confidence levels. Uncertainties of final computed aerodynamic parameters are obtained by propagating individual measurement uncertainties through the defining functional expressions. In this paper, rigorous mathematical techniques are extended to determine the precision and bias uncertainties of any instrument-sensor system. Through this analysis, instrument uncertainties determined through calibration are expressed as functions of the corresponding measurement for linear and nonlinear univariate and multivariate processes. Treatment of correlated measurement precision error is developed. During laboratory calibration, calibration standard uncertainties are assumed to be an order of magnitude less than those of the instrument being calibrated; often, calibration standards do not satisfy this assumption. This paper applies rigorous statistical methods to include calibration standard uncertainty and the covariance due to the order of their application. The effects of mathematical modeling error on calibration bias uncertainty are quantified, and the effects of experimental design on uncertainty are analyzed. The importance of replication is emphasized, and techniques for estimating both bias and precision uncertainties using replication are developed. Statistical tests for the stationarity of calibration parameters over time are obtained.
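The propagation step described above is, at first order, a root-sum-square of sensitivity-weighted input uncertainties. The sketch below applies it to a hypothetical force coefficient C = F/(qS), assuming uncorrelated inputs; the paper's actual instrument systems and its correlated-error treatment are not reproduced.

```python
# Hedged sketch: first-order (Taylor-series) uncertainty propagation
# through a defining expression, for uncorrelated inputs.
import math

def propagate(partials, uncertainties):
    # root-sum-square of (dC/dx_i * u_i)
    return math.sqrt(sum((d * u) ** 2 for d, u in zip(partials, uncertainties)))

F, q, S = 120.0, 500.0, 0.5      # force (N), dynamic pressure (Pa), area (m^2)
uF, uq, uS = 0.6, 2.5, 0.001     # assumed 95%-level input uncertainties
C = F / (q * S)
grads = [1.0 / (q * S), -F / (q ** 2 * S), -F / (q * S ** 2)]
uC = propagate(grads, [uF, uq, uS])
print(round(C, 3), round(uC, 5))  # → 0.48 0.00353
```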
Daytime sky polarization calibration limitations
NASA Astrophysics Data System (ADS)
Harrington, David M.; Kuhn, Jeffrey R.; Ariste, Arturo López
2017-01-01
The daytime sky has recently been demonstrated as a useful calibration tool for deriving polarization cross-talk properties of large astronomical telescopes. The Daniel K. Inouye Solar Telescope and other large telescopes under construction can benefit from precise polarimetric calibration of large mirrors. Several atmospheric phenomena and instrumental errors potentially limit the technique's accuracy. At the 3.67-m AEOS telescope on Haleakala, we performed a large observing campaign with the HiVIS spectropolarimeter to identify limitations and develop algorithms for extracting consistent calibrations. Effective sampling of the telescope optical configurations and filtering of data for several derived parameters provide robustness to the derived Mueller matrix calibrations. Second-order scattering models of the sky show that this method is relatively insensitive to multiple-scattering in the sky, provided calibration observations are done in regions of high polarization degree. The technique is also insensitive to assumptions about telescope-induced polarization, provided the mirror coatings are highly reflective. Zemax-derived polarization models show agreement between the functional dependence of polarization predictions and the corresponding on-sky calibrations.
NASA Astrophysics Data System (ADS)
Asadzadeh, M.; Maclean, A.; Tolson, B. A.; Burn, D. H.
2009-05-01
Hydrologic model calibration aims to find a set of parameters that adequately simulates observations of watershed behavior, such as streamflow, or a state variable, such as snow water equivalent (SWE). There are different metrics for evaluating calibration effectiveness that involve quantifying prediction errors, such as the Nash-Sutcliffe (NS) coefficient and bias evaluated for the entire calibration period, on a seasonal basis, for low flows, or for high flows. Many of these metrics are conflicting such that the set of parameters that maximizes the high flow NS differs from the set of parameters that maximizes the low flow NS. Conflicting objectives are very likely when different calibration objectives are based on different fluxes and/or state variables (e.g., NS based on streamflow versus SWE). One of the most popular ways to balance different metrics is to aggregate them based on their importance and find the set of parameters that optimizes a weighted sum of the efficiency metrics. Comparing alternative hydrologic models (e.g., assessing model improvement when a process or more detail is added to the model) based on the aggregated objective might be misleading since it represents one point on the tradeoff of desired error metrics. To derive a more comprehensive model comparison, we solved a bi-objective calibration problem to estimate the tradeoff between two error metrics for each model. Although this approach is computationally more expensive than the aggregation approach, it results in a better understanding of the effectiveness of selected models at each level of every error metric and therefore provides a better rationale for judging relative model quality. The two alternative models used in this study are two MESH hydrologic models (version 1.2) of the Wolf Creek Research basin that differ in their watershed spatial discretization (a single Grouped Response Unit, GRU, versus multiple GRUs). 
The MESH model, currently under development by Environment Canada, is a coupled land-surface and hydrologic model. Results will demonstrate the conclusions a modeller might make regarding the value of additional watershed spatial discretization under both an aggregated (single-objective) and multi-objective model comparison framework.
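The two ingredients of the comparison above, a Nash-Sutcliffe efficiency per variable and the weighted-sum aggregation that collapses the metrics into a single calibration objective, can be sketched directly. The observations, simulations, and equal weights below are hypothetical; the aggregated value is exactly the "one point on the tradeoff" that the bi-objective approach avoids relying on.

```python
# Hedged sketch: Nash-Sutcliffe efficiency and weighted-sum aggregation.
def nash_sutcliffe(obs, sim):
    mean_obs = sum(obs) / len(obs)
    ss_err = sum((o - s) ** 2 for o, s in zip(obs, sim))
    ss_tot = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - ss_err / ss_tot  # 1 is perfect; can be negative

# Hypothetical streamflow (m^3/s) and SWE (mm) observations/simulations.
flow_obs, flow_sim = [3.0, 8.0, 5.0, 2.0], [3.5, 7.0, 5.5, 2.5]
swe_obs, swe_sim = [40.0, 90.0, 60.0, 10.0], [35.0, 95.0, 55.0, 20.0]
ns_flow = nash_sutcliffe(flow_obs, flow_sim)
ns_swe = nash_sutcliffe(swe_obs, swe_sim)
aggregated = 0.5 * ns_flow + 0.5 * ns_swe   # one point on the tradeoff curve
print(round(ns_flow, 3), round(ns_swe, 3), round(aggregated, 3))
```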
Coupling HYDRUS-1D Code with PA-DDS Algorithms for Inverse Calibration
NASA Astrophysics Data System (ADS)
Wang, Xiang; Asadzadeh, Masoud; Holländer, Hartmut
2017-04-01
Numerical modelling requires calibration before it can be used to predict future states. A standard approach is inverse calibration, in which multi-objective optimization algorithms are generally used to find a solution, e.g. an optimal set of van Genuchten-Mualem (VGM) parameters for predicting water fluxes in the vadose zone. We coupled HYDRUS-1D with PA-DDS to add a new, robust function for inverse calibration to the model. PA-DDS is a recently developed multi-objective optimization algorithm that combines Dynamically Dimensioned Search (DDS) and the Pareto Archived Evolution Strategy (PAES). The results were compared to a standard method (the Marquardt-Levenberg method) implemented in HYDRUS-1D. Calibration performance was evaluated using observed and simulated soil moisture at two soil layers in southern Abbotsford, British Columbia, Canada, in terms of the root mean squared error (RMSE) and the Nash-Sutcliffe Efficiency (NSE). Results showed low RMSE values of 0.014 and 0.017 and strong NSE values of 0.961 and 0.939. Compared to the Marquardt-Levenberg method, we obtained better calibration results for the deeper soil sensors, while the VGM parameters were similar to those reported in previous studies. Both methods are equally computationally efficient, although a direct implementation of PA-DDS into HYDRUS-1D should reduce the computational effort further. Thus, the PA-DDS method is efficient for calibrating recharge in complex vadose zone modelling with multiple soil layers and is a potential tool for calibrating heat and solute transport. Future work should focus on the effectiveness of PA-DDS for calibrating more complex versions of the model with complex vadose zone settings, with more soil layers, and against measured heat and solute transport. Keywords: Recharge, Calibration, HYDRUS-1D, Multi-objective Optimization
NASA Astrophysics Data System (ADS)
Joiner, N.; Esser, B.; Fertig, M.; Gülhan, A.; Herdrich, G.; Massuti-Ballester, B.
2016-12-01
This paper summarises the final synthesis of an ESA technology research programme entitled "Development of an Innovative Validation Strategy of Gas Surface Interaction Modelling for Re-entry Applications". The focus of the project was to demonstrate the correct pressure dependency of catalytic surface recombination, with an emphasis on Low Earth Orbit (LEO) re-entry conditions and thermal protection system materials. A physics-based model describing the prevalent recombination mechanisms was proposed for implementation into two CFD codes, TINA and TAU. A dedicated experimental campaign was performed to calibrate and validate the CFD model on TPS materials pertinent to the EXPERT space vehicle at a wide range of temperatures and pressures relevant to LEO. A new set of catalytic recombination data was produced that was able to improve the chosen model calibration for CVD-SiC and provide the first model calibration for the Nickel-Chromium super-alloy PM1000. The experimentally observed pressure dependency of catalytic recombination can only be reproduced by the Langmuir-Hinshelwood recombination mechanism. Due to decreasing degrees of (enthalpy and hence) dissociation with facility stagnation pressure, it was not possible to obtain catalytic recombination coefficients from the measurements at high experimental stagnation pressures. Therefore, the CFD model calibration has been improved by this activity based on the low pressure results. The results of the model calibration were applied to the existing EXPERT mission profile to examine the impact of the experimentally calibrated model at flight relevant conditions. 
The heat flux overshoot at the CVD-SiC/PM1000 junction on EXPERT is confirmed to produce radiative equilibrium temperatures in close proximity to the PM1000 melt temperature. This was anticipated within the margins of the vehicle design; however, because the measurements made here are the first at temperatures relevant to the junction, increased confidence can be placed in the computations.
Development and calibration of the statewide land use-transport model
DOT National Transportation Integrated Search
1999-02-12
The TRANUS package has been used to develop an integrated land use-transport model at the statewide level. It is based upon a specific modeling approach described by de la Barra (1989,1995). For the prototype statewide model developed during this pro...
The new camera calibration system at the US Geological Survey
Light, D.L.
1992-01-01
Modern computerized photogrammetric instruments can utilize both radial and decentering camera calibration parameters, which can increase plotting accuracy over that of the older analog instrumentation of previous decades. In addition, recent design improvements in aerial cameras have minimized distortions and increased the resolving power of camera systems, which should improve the performance of the overall photogrammetric process. In concert with these improvements, the Geological Survey has adopted the rigorous mathematical model for camera calibration developed by Duane Brown. The Geological Survey's calibration facility and the additional calibration parameters now provided in the USGS calibration certificate are reviewed.
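The Brown model mentioned above combines radial and decentering (tangential) distortion terms. A sketch in its standard published form follows; the coefficient values are hypothetical, not values from any USGS calibration certificate.

```python
# Hedged sketch of the Brown radial-plus-decentering distortion model.
def brown_distortion(x, y, k1, k2, p1, p2):
    # x, y: image coordinates relative to the principal point (mm)
    r2 = x * x + y * y
    radial = k1 * r2 + k2 * r2 * r2          # radial polynomial terms
    dx = x * radial + p1 * (r2 + 2 * x * x) + 2 * p2 * x * y
    dy = y * radial + p2 * (r2 + 2 * y * y) + 2 * p1 * x * y
    return dx, dy

# Image point 40 mm from the principal point along x; hypothetical
# coefficients of typical magnitude for a mapping camera.
dx, dy = brown_distortion(40.0, 0.0, k1=1e-5, k2=-1e-9, p1=2e-6, p2=1e-6)
print(round(dx, 4), round(dy, 4))  # → 0.5472 0.0016
```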
Li, Weiyong; Worosila, Gregory D
2005-05-13
This research note demonstrates the simultaneous quantitation of a pharmaceutical active ingredient and three excipients in a simulated powder blend containing acetaminophen, Prosolv and Crospovidone. An experimental-design approach was used to generate a 5-level (%, w/w) calibration sample set of 125 samples. The samples were prepared by weighing suitable amounts of powders into separate 20-mL scintillation vials and mixing them manually. Partial least squares (PLS) regression was used for calibration model development. The models generated accurate results for quantitation of Crospovidone (at 5%, w/w) and magnesium stearate (at 0.5%, w/w). Further testing demonstrated that 2-level models were as effective as the 5-level ones, which reduced the number of calibration samples to 50. The models had a small bias for quantitation of acetaminophen (at 30%, w/w) and Prosolv (at 64.5%, w/w) in the blend. The implication of this bias is discussed.
Tian, Yuan; Hassmiller Lich, Kristen; Osgood, Nathaniel D; Eom, Kirsten; Matchar, David B
2016-11-01
As health services researchers and decision makers tackle more difficult problems using simulation models, the number of parameters and the corresponding degree of uncertainty have increased. This often results in reduced confidence in such complex models to guide decision making. To demonstrate a systematic approach of linked sensitivity analysis, calibration, and uncertainty analysis to improve confidence in complex models. Four techniques were integrated and applied to a System Dynamics stroke model of US veterans, which was developed to inform systemwide intervention and research planning: Morris method (sensitivity analysis), multistart Powell hill-climbing algorithm and generalized likelihood uncertainty estimation (calibration), and Monte Carlo simulation (uncertainty analysis). Of 60 uncertain parameters, sensitivity analysis identified 29 needing calibration, 7 that did not need calibration but significantly influenced key stroke outcomes, and 24 not influential to calibration or stroke outcomes that were fixed at their best guess values. One thousand alternative well-calibrated baselines were obtained to reflect calibration uncertainty and brought into uncertainty analysis. The initial stroke incidence rate among veterans was identified as the most influential uncertain parameter, for which further data should be collected. That said, accounting for current uncertainty, the analysis of 15 distinct prevention and treatment interventions provided a robust conclusion that hypertension control for all veterans would yield the largest gain in quality-adjusted life years. For complex health care models, a mixed approach was applied to examine the uncertainty surrounding key stroke outcomes and the robustness of conclusions. We demonstrate that this rigorous approach can be practical and advocate for such analysis to promote understanding of the limits of certainty in applying models to current decisions and to guide future data collection. © The Author(s) 2016.
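The Morris screening step described above ranks parameters by their mean absolute elementary effect, which is how the 60 uncertain parameters were triaged into "calibrate", "influential", and "fixed" groups. The sketch below implements a basic one-at-a-time version on a toy three-parameter model; the model and all values are hypothetical stand-ins for the stroke model.

```python
# Hedged sketch of Morris elementary-effects screening.
import random

def elementary_effects(model, n_params, trajectories=50, delta=0.1, seed=1):
    rng = random.Random(seed)
    effects = [[] for _ in range(n_params)]
    for _ in range(trajectories):
        # random base point kept inside [0, 1 - delta] so x + delta stays valid
        x = [rng.random() * (1 - delta) for _ in range(n_params)]
        base = model(x)
        for i in range(n_params):
            xp = list(x)
            xp[i] += delta
            effects[i].append((model(xp) - base) / delta)
    # mean absolute elementary effect: overall influence of each parameter
    return [sum(abs(e) for e in es) / len(es) for es in effects]

# Toy model: parameter 0 is strongly influential, parameter 2 is inert.
model = lambda x: 10.0 * x[0] + 2.0 * x[1] * x[1] + 0.0 * x[2]
mu_star = elementary_effects(model, 3)
print([round(m, 2) for m in mu_star])
```

Parameters with large mean absolute effects are sent to calibration; those near zero can be fixed at best-guess values, as in the abstract.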
NASA Astrophysics Data System (ADS)
Norton, P. A., II
2015-12-01
The U.S. Geological Survey is developing a National Hydrologic Model (NHM) to support consistent hydrologic modeling across the conterminous United States (CONUS). The Precipitation-Runoff Modeling System (PRMS) simulates daily hydrologic and energy processes in watersheds and is used for the NHM application. For PRMS, each watershed is divided into hydrologic response units (HRUs); by default each HRU is assumed to have a uniform hydrologic response. The Geospatial Fabric (GF) is a database containing initial parameter values for input to PRMS and was created for the NHM. The parameter values in the GF were derived from datasets that characterize the physical features of the entire CONUS. The NHM application is composed of more than 100,000 HRUs from the GF. Selected parameter values commonly are adjusted by basin in PRMS using an automated calibration process based on calibration targets, such as streamflow. Providing each HRU with distinct values that capture variability within the CONUS may improve simulation performance of the NHM. During calibration of the NHM by HRU, selected parameter values are adjusted for PRMS based on calibration targets such as streamflow, snow water equivalent (SWE), and actual evapotranspiration (AET). Simulated SWE, AET, and runoff were compared to value ranges derived from multiple sources (e.g., the Snow Data Assimilation System, the Moderate Resolution Imaging Spectroradiometer (MODIS) Global Evapotranspiration Project, the Simplified Surface Energy Balance model, and the Monthly Water Balance Model). This provides each HRU with a distinct set of parameter values that captures variability within the CONUS, leading to improved model performance. We present simulation results from the NHM after preliminary calibration, including results of basin-level calibration using: 1) default initial GF parameter values, and 2) parameter values calibrated by HRU.
A calibration model for screen-caged Peltier thermocouple psychrometers
Ray W. Brown; Dale L. Bartos
1982-01-01
A calibration model for screen-caged Peltier thermocouple psychrometers was developed that applies to a water potential range of 0 to -80 bars, over a temperature range of 0° to 40°C, and for cooling times of 15 to 60 seconds. In addition, the model corrects for the effects of temperature gradients over zero-offsets from -60 to +60 microvolts. Complete details of...
Procedure for the Selection and Validation of a Calibration Model I-Description and Application.
Desharnais, Brigitte; Camirand-Lemyre, Félix; Mireault, Pascal; Skinner, Cameron D
2017-05-01
Calibration model selection is required for all quantitative methods in toxicology and, more broadly, in bioanalysis. This typically involves selecting the equation order (quadratic or linear) and the weighting factor that correctly model the data. An incorrect selection of the calibration model will generate lower quality control (QC) accuracy, with an error up to 154%. Unfortunately, simple tools to perform this selection and tests to validate the resulting model are lacking. We present a stepwise, analyst-independent scheme for selection and validation of calibration models. The success rate of this scheme is on average 40% higher than a traditional "fit and check the QC accuracy" method of selecting the calibration model. Moreover, the process was completely automated through a script (available in Supplemental Data 3) running in RStudio (free, open-source software). The need for weighting was assessed through an F-test using the variances of the upper limit of quantification and lower limit of quantification replicate measurements. When weighting was required, the choice between 1/x and 1/x^2 was determined by calculating which option generated the smallest spread of weighted normalized variances. Finally, model order was selected through a partial F-test. The chosen calibration model was validated through Cramer-von Mises or Kolmogorov-Smirnov normality testing of the standardized residuals. Performance of the different tests was assessed using 50 simulated data sets per possible calibration model (e.g., linear-no weight, quadratic-no weight, linear-1/x, etc.). This first of two papers describes the tests, procedures and outcomes of the developed procedure using real LC-MS-MS results for the quantification of cocaine and naltrexone. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
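The weighting decision described above can be sketched numerically. This is an illustrative sketch, not the paper's script: the function names, the default critical F value (assumed here for 5 vs. 5 replicates at alpha = 0.05), and the max/min spread measure are assumptions.

```python
import numpy as np

def needs_weighting(lloq_reps, uloq_reps, f_crit=6.39):
    """F-test on replicate variances at the LLOQ and ULOQ.

    If the variance at the high end of the range is significantly larger
    than at the low end, homoscedasticity is rejected and a weighted fit
    is needed. f_crit is a hypothetical critical value (F for 4 and 4
    degrees of freedom at alpha = 0.05)."""
    f = np.var(uloq_reps, ddof=1) / np.var(lloq_reps, ddof=1)
    return bool(f > f_crit)

def pick_weight(conc, replicate_sds):
    """Choose between 1/x and 1/x^2 weights: keep the option giving the
    smallest spread of weighted normalized variances across the range."""
    conc = np.asarray(conc, float)
    sds = np.asarray(replicate_sds, float)
    best, best_spread = None, np.inf
    for p in (1, 2):
        weighted_var = (sds ** 2) / conc ** p   # weight w = 1/x^p
        spread = weighted_var.max() / weighted_var.min()
        if spread < best_spread:
            best, best_spread = p, spread
    return "1/x^2" if best == 2 else "1/x"
```

For data whose standard deviation grows proportionally to concentration, this selects 1/x^2, which makes the weighted variances constant across the calibration range.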
Bello, Alessandra; Bianchi, Federica; Careri, Maria; Giannetto, Marco; Mori, Giovanni; Musci, Marilena
2007-11-05
A new NIR method based on multivariate calibration for the determination of ethanol in industrially packed wholemeal bread was developed and validated. GC-FID was used as the reference method to determine the actual ethanol concentration of wholemeal bread samples with known amounts of added ethanol, ranging from 0 to 3.5% (w/w). Stepwise discriminant analysis was carried out on the NIR dataset in order to reduce the number of original variables by selecting those able to discriminate between samples of different ethanol concentrations. A multivariate calibration model was then obtained from the selected variables by multiple linear regression. The prediction power of the linear model was optimized by a new "leave one out" method, further reducing the number of original variables.
NASA Astrophysics Data System (ADS)
Lafontaine, J.; Hay, L.
2015-12-01
The United States Geological Survey (USGS) has developed a National Hydrologic Model (NHM) to support coordinated, comprehensive and consistent hydrologic model development, and facilitate the application of hydrologic simulations within the conterminous United States (CONUS). More than 1,700 gaged watersheds across the CONUS were modeled to test the feasibility of improving streamflow simulations in gaged and ungaged watersheds by linking statistically- and physically-based hydrologic models with remotely-sensed data products (i.e. - snow water equivalent) and estimates of uncertainty. Initially, the physically-based models were calibrated to measured streamflow data to provide a baseline for comparison. As many stream reaches in the CONUS are either not gaged, or are substantially impacted by water use or flow regulation, ancillary information must be used to determine reasonable parameter estimations for streamflow simulations. In addition, not all ancillary datasets are appropriate for application to all parts of the CONUS (e.g. - snow water equivalent in the southeastern U.S., where snow is a rarity). As it is not expected that any one data product or model simulation will be sufficient for representing hydrologic behavior across the entire CONUS, a systematic evaluation of which data products improve simulations of streamflow for various regions across the CONUS was performed. The resulting portfolio of calibration strategies can be used to guide selection of an appropriate combination of simulated and measured information for model development and calibration at a given location of interest. In addition, these calibration strategies have been developed to be flexible so that new data products or simulated information can be assimilated. This analysis provides a foundation to understand how well models work when streamflow data is either not available or is limited and could be used to further inform hydrologic model parameter development for ungaged areas.
Groundwater flow simulation of the Savannah River Site general separations area
DOE Office of Scientific and Technical Information (OSTI.GOV)
Flach, G.; Bagwell, L.; Bennett, P.
The most recent groundwater flow model of the General Separations Area, Savannah River Site, is referred to as the “GSA/PORFLOW” model. GSA/PORFLOW was developed in 2004 by porting an existing General Separations Area groundwater flow model from the FACT code to the PORFLOW code. The preceding “GSA/FACT” model was developed in 1997 using characterization and monitoring data through the mid-1990s. Both models were manually calibrated to field data. Significantly more field data have been acquired since the 1990s, and model calibration using mathematical optimization software has become routine and recommended practice. The current task involved updating the GSA/PORFLOW model using selected field data current through at least 2015, and use of the PEST code to calibrate the model and quantify parameter uncertainty. This new GSA groundwater flow model is named “GSA2016” in reference to the year in which most development occurred. The GSA2016 model update is intended to address issues raised by the DOE Low-Level Waste (LLW) Disposal Facility Federal Review Group (LFRG) in a 2008 review of the E-Area Performance Assessment, and by the Nuclear Regulatory Commission in reviews of tank closure and Saltstone Disposal Facility Performance Assessments.
NASA Astrophysics Data System (ADS)
de Almeida Bressiani, D.; Srinivasan, R.; Mendiondo, E. M.
2013-12-01
The use of distributed or semi-distributed models to represent the processes and dynamics of a watershed has increased in recent years. These models are important tools to predict and forecast the hydrological responses of watersheds, and they can support disaster risk management and planning. However, they usually have many parameters that, due to the spatial and temporal variability of the processes, are not known, especially in developing countries; a robust and sensible calibration is therefore very important. This study conducted a sub-daily calibration and parameterization of the Soil & Water Assessment Tool (SWAT) for a 12,600 km2 watershed in southeast Brazil, and uses ensemble forecasts to evaluate whether the model can be used as a tool for flood forecasting. The Piracicaba Watershed, in São Paulo State, is mainly rural, but has a population of about 4 million in highly relevant urban areas, and three cities on the list of critical cities of the National Center for Natural Disasters Monitoring and Alerts. For calibration, the watershed was divided into areas with similar hydrological characteristics, and for each of these areas one gauge station was chosen for calibration; this procedure was performed to evaluate the effectiveness of calibrating in fewer places, since areas with the same group of groundwater, soil, land use and slope characteristics should have similar parameters, making calibration a less time-consuming task. The sensitivity analysis and calibration were performed with the software SWAT-CUP and the optimization algorithm Sequential Uncertainty Fitting Version 2 (SUFI-2), which uses a Latin hypercube sampling scheme in an iterative process.
Calibration and validation performance was evaluated with the Nash-Sutcliffe efficiency coefficient (NSE), determination coefficient (r2), root mean square error (RMSE), and percent bias (PBIAS), with monthly average values of NSE around 0.70, r2 of 0.9, normalized RMSE of 0.01, and PBIAS of 10. Past events were analysed to evaluate the possibility of using the SWAT model developed for the Piracicaba watershed as a tool for ensemble flood forecasting. For the ensemble evaluation, members from the numerical model Eta were used. Eta is an atmospheric model used for research and operational purposes, with 5 km resolution, updated twice a day (00 and 12 UTC) for a ten-day horizon, with precipitation and weather estimates for each hour. The parameterized SWAT model performed well overall for ensemble flood forecasting.
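The four skill metrics named above have standard definitions and can be computed directly. This sketch is not SWAT-CUP code, and the function name is invented:

```python
import numpy as np

def skill_metrics(obs, sim):
    """Standard streamflow calibration metrics.

    NSE:   1 - sum(err^2) / sum((obs - mean(obs))^2); 1 is a perfect fit.
    r2:    squared Pearson correlation between observed and simulated.
    RMSE:  root mean square error, in the units of the data.
    PBIAS: percent bias; positive means the simulation overestimates."""
    obs = np.asarray(obs, float)
    sim = np.asarray(sim, float)
    err = sim - obs
    nse = 1.0 - np.sum(err ** 2) / np.sum((obs - obs.mean()) ** 2)
    r2 = np.corrcoef(obs, sim)[0, 1] ** 2
    rmse = np.sqrt(np.mean(err ** 2))
    pbias = 100.0 * err.sum() / obs.sum()
    return {"NSE": nse, "r2": r2, "RMSE": rmse, "PBIAS": pbias}
```

A perfect simulation gives NSE = 1, r2 = 1, RMSE = 0 and PBIAS = 0; a constant offset leaves r2 at 1 while degrading NSE and PBIAS, which is why several metrics are reported together.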
An efficient surrogate-based simulation-optimization method for calibrating a regional MODFLOW model
NASA Astrophysics Data System (ADS)
Chen, Mingjie; Izady, Azizallah; Abdalla, Osman A.
2017-01-01
Simulation-optimization entails a large number of model simulations, which is computationally intensive or even prohibitive if the model simulation is extremely time-consuming. Statistical models have been examined as surrogates of the high-fidelity physical model during the simulation-optimization process to tackle this problem. Among them, Multivariate Adaptive Regression Splines (MARS), a non-parametric adaptive regression method, is superior in overcoming problems of high dimensionality and discontinuities in the data. Furthermore, the stability and accuracy of a MARS model can be improved by bootstrap aggregating methods, namely bagging. In this paper, the Bagging MARS (BMARS) method is integrated into a surrogate-based simulation-optimization framework to calibrate a three-dimensional MODFLOW model, which is developed to simulate the groundwater flow in an arid hardrock-alluvium region in northwestern Oman. The physical MODFLOW model is surrogated by the statistical model developed using the BMARS algorithm. The surrogate model, which is fitted and validated using a training dataset generated by the physical model, can approximate solutions rapidly. An efficient Sobol' method is employed to calculate global sensitivities of head outputs to input parameters, which are used to analyze their importance for the model outputs spatiotemporally. Only sensitive parameters are included in the calibration process to further improve computational efficiency. The normalized root mean square error (NRMSE) between measured and simulated heads at observation wells is used as the objective function to be minimized during optimization. The reasonable history match between the simulated and observed heads demonstrates the feasibility of this highly efficient calibration framework.
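The surrogate idea can be sketched minimally: a handful of expensive model runs train a cheap approximation of the objective, which is then minimized instead of the physical model. Here a quadratic polynomial stands in for the bagged MARS surrogate, and NRMSE is normalized by the observed range (the paper's exact normalization is not stated); all names are illustrative.

```python
import numpy as np

def nrmse(observed, simulated):
    """Normalized RMSE objective (normalized here by the observed range)."""
    observed = np.asarray(observed, float)
    simulated = np.asarray(simulated, float)
    rmse = np.sqrt(np.mean((simulated - observed) ** 2))
    return rmse / (observed.max() - observed.min())

def surrogate_calibrate(run_model, bounds, observed, n_train=20, seed=0):
    """Calibrate one scalar parameter via a surrogate of the objective.

    run_model(x) is the expensive simulator, called only n_train times.
    A quadratic fit of the squared objective (a deliberately simple
    stand-in for BMARS) is then minimized on a dense grid."""
    rng = np.random.default_rng(seed)
    x_train = rng.uniform(bounds[0], bounds[1], n_train)
    y_train = np.array([nrmse(observed, run_model(x)) for x in x_train])
    coeffs = np.polyfit(x_train, y_train ** 2, deg=2)   # cheap surrogate
    grid = np.linspace(bounds[0], bounds[1], 2001)
    return grid[np.argmin(np.polyval(coeffs, grid))]     # surrogate optimum
```

For a model that is linear in its parameter, the squared objective is exactly quadratic, so the surrogate recovers the true optimum from only 20 simulator calls.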
Lin, Yiqing; Li, Weiyong; Xu, Jin; Boulas, Pierre
2015-07-05
The aim of this study is to develop an at-line near infrared (NIR) method for the rapid and simultaneous determination of four structurally similar active pharmaceutical ingredients (APIs) in powder blends intended for the manufacturing of tablets. Two of the four APIs in the formula are present in relatively small amounts, one at 0.95% and the other at 0.57%. Such small amounts in addition to the similarity in structures add significant complexity to the blend uniformity analysis. The NIR method is developed using spectra from six laboratory-created calibration samples augmented by a small set of spectra from a large-scale blending sample. Applying the quality by design (QbD) principles, the calibration design included concentration variations of the four APIs and a main excipient, microcrystalline cellulose. A bench-top FT-NIR instrument was used to acquire the spectra. The obtained NIR spectra were analyzed by applying principal component analysis (PCA) before calibration model development. Score patterns from the PCA were analyzed to reveal relationship between latent variables and concentration variations of the APIs. In calibration model development, both PLS-1 and PLS-2 models were created and evaluated for their effectiveness in predicting API concentrations in the blending samples. The final NIR method shows satisfactory specificity and accuracy. Copyright © 2015 Elsevier B.V. All rights reserved.
Calibrating Historical IR Sensors Using GEO, and AVHRR Infrared Tropical Mean Calibration Models
NASA Technical Reports Server (NTRS)
Scarino, Benjamin; Doelling, David R.; Minnis, Patrick; Gopalan, Arun; Haney, Conor; Bhatt, Rajendra
2014-01-01
Long-term, remote-sensing-based climate data records (CDRs) are highly dependent on having consistent, well-calibrated satellite instrument measurements of the Earth's radiant energy. Therefore, by making historical satellite calibrations consistent with those of today's imagers, the Earth-observing community can benefit from a CDR that spans a minimum of 30 years. Most operational meteorological satellites rely on an onboard blackbody and space looks to provide on-orbit IR calibration, but neither target is traceable to absolute standards. The IR channels can also be affected by ice on the detector window, angle dependency of the scan mirror emissivity, stray light, and detector-to-detector striping. Being able to quantify and correct such degradations would mean IR data from any satellite imager could contribute to a CDR. Recent efforts have focused on utilizing well-calibrated modern hyperspectral sensors to intercalibrate concurrent operational IR imagers to a single reference. In order to consistently calibrate both historical and current IR imagers to the same reference, however, another strategy is needed. Large, well-characterized tropical-domain Earth targets have the potential of providing an Earth-view reference accuracy of within 0.5 K. To that end, NASA Langley is developing an IR tropical mean calibration model in order to calibrate historical Advanced Very High Resolution Radiometer (AVHRR) instruments. Using Meteosat-9 (Met-9) as a reference, empirical models are built based on spatially/temporally binned Met-9 and AVHRR tropical IR brightness temperatures. By demonstrating the stability of the Met-9 tropical models, NOAA-18 AVHRR can be calibrated to Met-9 by matching the AVHRR monthly histogram averages with the Met-9 model. This method is validated with ray-matched AVHRR and Met-9 bias-difference time series.
Establishing the validity of this empirical model will allow for the calibration of historical AVHRR sensors to within 0.5 K, and thereby establish a climate-quality IR data record.
Calibration of sea ice dynamic parameters in an ocean-sea ice model using an ensemble Kalman filter
NASA Astrophysics Data System (ADS)
Massonnet, F.; Goosse, H.; Fichefet, T.; Counillon, F.
2014-07-01
The choice of parameter values is crucial in the course of sea ice model development, since parameters largely affect the modeled mean sea ice state. Manual tuning of parameters will soon become impractical, as sea ice models will likely include more parameters to calibrate, leading to an exponential increase in the number of possible combinations to test. Objective and automatic methods for parameter calibration are thus progressively called on to replace the traditional heuristic, "trial-and-error" recipes. Here a method for calibration of parameters based on the ensemble Kalman filter is implemented, tested and validated in the ocean-sea ice model NEMO-LIM3. Three dynamic parameters are calibrated: the ice strength parameter P*, the ocean-sea ice drag parameter Cw, and the atmosphere-sea ice drag parameter Ca. In twin, perfect-model experiments, the default parameter values are retrieved within 1 year of simulation. Using 2007-2012 real sea ice drift data, the calibration of the ice strength parameter P* and the oceanic drag parameter Cw clearly improves the Arctic sea ice drift properties. It is found that the estimation of the atmospheric drag Ca is not necessary if P* and Cw are already estimated. The large reduction in the sea ice speed bias with calibrated parameters comes with a slight overestimation of the winter sea ice areal export through Fram Strait and a slight improvement in the sea ice thickness distribution. Overall, the estimation of parameters with the ensemble Kalman filter represents an encouraging alternative to manual tuning for ocean-sea ice models.
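The parameter-estimation step can be sketched as a single ensemble Kalman analysis applied to a scalar parameter, as in parameter-augmented state estimation. The function and the twin-experiment setup below are illustrative, not NEMO-LIM3 code:

```python
import numpy as np

def enkf_parameter_update(params, predictions, obs, obs_err, seed=1):
    """One ensemble Kalman filter analysis step applied to a parameter.

    params:      (n_ens,) ensemble of one scalar parameter (e.g. P* or Cw)
    predictions: (n_ens,) model-predicted observation for each member
                 (e.g. simulated ice drift speed)
    obs:         the measured value; obs_err its standard error.
    The parameter is updated through its ensemble covariance with the
    predicted observation; the fixed seed makes this sketch reproducible."""
    cov_pd = np.cov(params, predictions)[0, 1]          # param-prediction covariance
    var_d = np.var(predictions, ddof=1) + obs_err ** 2  # innovation variance
    gain = cov_pd / var_d                                # Kalman gain
    rng = np.random.default_rng(seed)
    perturbed_obs = obs + rng.normal(0.0, obs_err, params.size)
    return params + gain * (perturbed_obs - predictions)
```

In a twin experiment with a linear observation operator (predictions = 3 * parameter) and truth 2.0, a prior ensemble centered on 1.0 is pulled onto the true value in a single analysis, mirroring the paper's recovery of default parameters in twin runs.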
Hunt, R.J.; Feinstein, D.T.; Pint, C.D.; Anderson, M.P.
2006-01-01
As part of the USGS Water, Energy, and Biogeochemical Budgets project and the NSF Long-Term Ecological Research work, a parameter estimation code was used to calibrate a deterministic groundwater flow model of the Trout Lake Basin in northern Wisconsin. Observations included traditional calibration targets (head, lake stage, and baseflow observations) as well as unconventional targets such as groundwater flows to and from lakes, depth of a lake water plume, and time of travel. The unconventional data types were important for parameter estimation convergence and allowed the development of a more detailed parameterization capable of resolving model objectives with well-constrained parameter values. Independent estimates of groundwater inflow to lakes were most important for constraining lakebed leakance and the depth of the lake water plume was important for determining hydraulic conductivity and conceptual aquifer layering. The most important target overall, however, was a conventional regional baseflow target that led to correct distribution of flow between sub-basins and the regional system during model calibration. The use of an automated parameter estimation code: (1) facilitated the calibration process by providing a quantitative assessment of the model's ability to match disparate observed data types; and (2) allowed assessment of the influence of observed targets on the calibration process. The model calibration required the use of a 'universal' parameter estimation code in order to include all types of observations in the objective function. The methods described in this paper help address issues of watershed complexity and non-uniqueness common to deterministic watershed models. © 2005 Elsevier B.V. All rights reserved.
Cárdenas, V; Cordobés, M; Blanco, M; Alcalà, M
2015-10-10
The pharmaceutical industry is under stringent regulations on quality control of its products because quality is critical for both the production process and consumer safety. Within the framework of "process analytical technology" (PAT), a complete understanding of the process and stepwise monitoring of manufacturing are required. Near infrared spectroscopy (NIRS) combined with chemometrics has lately proven efficient, useful and robust for pharmaceutical analysis. One crucial step in developing effective NIRS-based methodologies is selecting an appropriate calibration set to construct models affording accurate predictions. In this work, we developed calibration models for a pharmaceutical formulation during its three manufacturing stages: blending, compaction and coating. A novel methodology, the "process spectrum", is proposed for selecting the calibration set, into which physical changes in the samples at each stage are algebraically incorporated. We also established a "model space" defined by Hotelling's T(2) and Q-residuals statistics for outlier identification (inside/outside the defined space) in order to objectively select the factors to be used in calibration set construction. The results obtained confirm the efficacy of the proposed methodology for stepwise pharmaceutical quality control, and the relevance of the study as a guideline for the implementation of this easy and fast methodology in the pharma industry. Copyright © 2015 Elsevier B.V. All rights reserved.
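A sketch of the "model space" idea: PCA scores give Hotelling's T(2) (distance within the model plane) and the reconstruction residual gives Q (distance off the plane). This assumes a plain SVD-based PCA with simplified statistics; the paper's control limits and factor-selection procedure are not reproduced.

```python
import numpy as np

def pca_t2_q(X, n_comp=2):
    """Hotelling's T^2 and Q residuals from a PCA model of spectra X.

    X: (n_samples, n_wavelengths) matrix, rows are sample spectra.
    Returns per-sample T^2 (variance-scaled score distance) and Q
    (sum of squared reconstruction residuals). Samples exceeding
    calibration-set limits on either statistic fall outside the
    model space and are flagged as outliers."""
    Xc = X - X.mean(axis=0)                       # mean-center
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    scores = U[:, :n_comp] * s[:n_comp]           # PCA scores
    t2 = np.sum((scores / scores.std(axis=0, ddof=1)) ** 2, axis=1)
    recon = scores @ Vt[:n_comp]                  # rank-n_comp reconstruction
    q = np.sum((Xc - recon) ** 2, axis=1)         # residual off the model plane
    return t2, q
```

A sample with spectral variation orthogonal to the retained components (e.g. an unexpected physical change) shows up with a large Q even when its T(2) is unremarkable, which is what makes the pair of statistics useful together.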
Zhang, Lin; Small, Gary W; Arnold, Mark A
2003-11-01
The transfer of multivariate calibration models is investigated between a primary (A) and two secondary Fourier transform near-infrared (near-IR) spectrometers (B, C). The application studied in this work is the use of bands in the near-IR combination region of 5000-4000 cm⁻¹ to determine physiological levels of glucose in a buffered aqueous matrix containing varying levels of alanine, ascorbate, lactate, triacetin, and urea. The three spectrometers are used to measure 80 samples produced through a randomized experimental design that minimizes correlations between the component concentrations and between the concentrations of glucose and water. Direct standardization (DS), piecewise direct standardization (PDS), and guided model reoptimization (GMR) are evaluated for use in transferring partial least-squares calibration models developed with the spectra of 64 samples from the primary instrument to the prediction of glucose concentrations in 16 prediction samples measured with each secondary spectrometer. The three algorithms are evaluated as a function of the number of standardization samples used in transferring the calibration models. Performance criteria for judging the success of the calibration transfer are established as the standard error of prediction (SEP) for internal calibration models built with the spectra of the 64 calibration samples collected with each secondary spectrometer. These SEP values are 1.51 and 1.14 mM for spectrometers B and C, respectively. When calibration standardization is applied, the GMR algorithm is observed to outperform DS and PDS. With spectrometer C, the calibration transfer is highly successful, producing an SEP value of 1.07 mM. However, an SEP of 2.96 mM indicates unsuccessful calibration standardization with spectrometer B. This failure is attributed to differences in the variance structure of the spectra collected with spectrometers A and B.
Diagnostic procedures are presented for use with the GMR algorithm that forecast the successful calibration transfer with spectrometer C and the unsatisfactory results with spectrometer B.
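Of the three transfer algorithms, direct standardization is the simplest to sketch: a least-squares transform maps secondary-instrument spectra into the primary instrument's response space, so the primary calibration model can be applied unchanged. This minimal version omits the mean-centering and additive-background terms often used in practice.

```python
import numpy as np

def direct_standardization(primary_spectra, secondary_spectra):
    """Direct standardization (DS) transfer matrix.

    primary_spectra, secondary_spectra: (n_standards, n_wavelengths)
    spectra of the same transfer samples measured on both instruments.
    Returns F such that secondary_spectra @ F approximates the primary
    response; new secondary spectra are transformed the same way before
    being fed to the primary PLS model."""
    F, *_ = np.linalg.lstsq(secondary_spectra, primary_spectra, rcond=None)
    return F
```

When the secondary instrument's response is (approximately) a linear distortion of the primary's, DS recovers the inverse distortion exactly; the failure mode noted in the abstract, a changed variance structure between instruments, is precisely where such a linear map stops being adequate.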
Process analytical technology in continuous manufacturing of a commercial pharmaceutical product.
Vargas, Jenny M; Nielsen, Sarah; Cárdenas, Vanessa; Gonzalez, Anthony; Aymat, Efrain Y; Almodovar, Elvin; Classe, Gustavo; Colón, Yleana; Sanchez, Eric; Romañach, Rodolfo J
2018-03-01
The implementation of process analytical technology and continuous manufacturing at an FDA-approved commercial manufacturing site is described. In this direct compaction process, the blends produced were monitored with a Near Infrared (NIR) spectroscopic calibration model developed with partial least squares (PLS) regression. The authors understand that this is the first study where the continuous manufacturing (CM) equipment was used as a gravimetric reference method for the calibration model. A principal component analysis (PCA) model was also developed to identify the powder blend and determine whether it was similar to the calibration blends. An air diagnostic test was developed to assure that powder was present within the interface when the NIR spectra were obtained. The air diagnostic test as well as the PCA and PLS calibration models were integrated into an industrial software platform that collects the real-time NIR spectra and applies the calibration models. The PCA test successfully detected an equipment malfunction. Variographic analysis was also performed to estimate the sampling and analytical errors that affect the results from the NIR spectroscopic method during commercial production. The system was used to monitor and control a 28 h continuous manufacturing run, where the average drug concentration determined by the NIR method was 101.17% of label claim with a standard deviation of 2.17%, based on 12,633 spectra collected. The average drug concentration for the tablets produced from these blends was 100.86% of label claim with a standard deviation of 0.4%, for 500 tablets analyzed by Fourier Transform Near Infrared (FT-NIR) transmission spectroscopy. The excellent agreement between the mean drug concentration values in the blends and tablets produced provides further evidence of the suitability of the validation strategy that was followed. Copyright © 2018 Elsevier B.V. All rights reserved.
Calibration of two complex ecosystem models with different likelihood functions
NASA Astrophysics Data System (ADS)
Hidy, Dóra; Haszpra, László; Pintér, Krisztina; Nagy, Zoltán; Barcza, Zoltán
2014-05-01
The biosphere is a sensitive carbon reservoir. Terrestrial ecosystems were approximately carbon neutral during the past centuries, but they became net carbon sinks due to climate-change-induced environmental change and the associated CO2 fertilization effect of the atmosphere. Model studies and measurements indicate that the biospheric carbon sink can saturate in the future due to ongoing climate change, which can act as a positive feedback. Robustness of carbon cycle models is a key issue when trying to choose the appropriate model for decision support. The input parameters of process-based models are decisive regarding the model output. At the same time, there are several input parameters for which accurate values are hard to obtain directly from experiments, or no local measurements are available. Due to the uncertainty associated with the unknown model parameters, significant bias can be experienced if the model is used to simulate the carbon and nitrogen cycle components of different ecosystems. In order to improve model performance, the unknown model parameters have to be estimated. We developed a multi-objective, two-step calibration method based on a Bayesian approach in order to estimate the unknown parameters of the PaSim and Biome-BGC models. Biome-BGC and PaSim are widely used biogeochemical models that simulate the storage and flux of water, carbon, and nitrogen between the ecosystem and the atmosphere, and within the components of the terrestrial ecosystems (in this research a further developed version of Biome-BGC is used, referred to as BBGC MuSo). Both models were calibrated regardless of the simulated processes and type of model parameters. The calibration procedure is based on the comparison of measured data with simulated results via calculating a likelihood function (degree of goodness-of-fit between simulated and measured data).
In our research, different likelihood function formulations were used in order to examine the effect of the model goodness metric on calibration. The different likelihoods are different functions of RMSE (root mean squared error) weighted by measurement uncertainty: exponential / linear / quadratic / linear normalized by correlation. As a first calibration step, sensitivity analysis was performed in order to select the influential parameters which have a strong effect on the output data. In the second calibration step, only the sensitive parameters were calibrated (optimal values and confidence intervals were calculated). In the case of PaSim, more parameters were found responsible for 95% of the output data variance than in the case of BBGC MuSo. Analysis of the results of the optimized models revealed that the exponential likelihood estimation proved to be the most robust (best model simulation with optimized parameters, highest confidence interval increase). Cross-validation of the model simulations can help in constraining the highly uncertain greenhouse gas budget of grasslands.
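The four likelihood families can be sketched as monotone functions of the uncertainty-weighted RMSE. The exact functional forms, constants, and the interpretation of "linear normalized by correlation" below are assumptions for illustration, not the paper's definitions:

```python
import numpy as np

def weighted_rmse(sim, obs, sigma):
    """RMSE with each residual scaled by its measurement uncertainty."""
    sim = np.asarray(sim, float)
    obs = np.asarray(obs, float)
    return np.sqrt(np.mean(((sim - obs) / sigma) ** 2))

def likelihoods(sim, obs, sigma):
    """Illustrative likelihood formulations built from weighted RMSE.

    All four equal 1 for a perfect fit and decrease with error; they
    differ in how sharply they penalize misfit, which is what the
    calibration comparison in the study probes."""
    e = weighted_rmse(sim, obs, sigma)
    r = np.corrcoef(np.asarray(sim, float), np.asarray(obs, float))[0, 1]
    return {
        "exponential": np.exp(-e),
        "linear": max(0.0, 1.0 - e),
        "quadratic": max(0.0, 1.0 - e ** 2),
        "linear_by_corr": max(0.0, 1.0 - e) * r,
    }
```

The exponential form never reaches zero, so it keeps discriminating between poor parameter sets where the truncated linear and quadratic forms saturate; this is one plausible reason a calibration driven by it can be more robust.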
Validation of Storm Water Management Model Storm Control Measures Modules
NASA Astrophysics Data System (ADS)
Simon, M. A.; Platz, M. C.
2017-12-01
EPA's Storm Water Management Model (SWMM) is a computational code heavily relied upon by industry for the simulation of wastewater and stormwater infrastructure performance. Many municipalities rely on SWMM results to design multi-billion-dollar, multi-decade infrastructure upgrades. Since the 1970s, EPA and others have developed five major releases, the most recent ones containing storm control measures modules for green infrastructure. The main objective of this study was to quantify the accuracy with which SWMM v5.1.10 simulates the hydrologic activity of previously monitored low impact developments. Model performance was evaluated with a mathematical comparison of outflow hydrographs and total outflow volumes, using empirical data and a multi-event, multi-objective calibration method. The calibration methodology utilized PEST++ Version 3, a parameter estimation tool, which aided in the selection of unmeasured hydrologic parameters. From the validation study and sensitivity analysis, several model improvements were identified to advance SWMM LID Module performance for permeable pavements, infiltration units and green roofs; these improvements were implemented and reported herein. Overall, it was determined that SWMM can successfully simulate low impact development controls given accurate model confirmation, parameter measurement, and model calibration.
ERIC Educational Resources Information Center
Zhang, Mo; Williamson, David M.; Breyer, F. Jay; Trapani, Catherine
2012-01-01
This article describes two separate, related studies that provide insight into the effectiveness of "e-rater" score calibration methods based on different distributional targets. In the first study, we developed and evaluated a new type of "e-rater" scoring model that was cost-effective and applicable under conditions of absent human rating and…
Mohamad Nabavi; Joseph Dahlen; Laurence Schimleck; Thomas L. Eberhardt; Cristian Montes
2018-01-01
This study developed regional calibration models for the prediction of loblolly pine (Pinus taeda) tracheid properties using near-infrared (NIR) spectroscopy. A total of 1842 pith-to-bark radial strips, aged 19-31 years, were acquired from 268 trees from 109 stands across the southeastern USA. Diffuse reflectance NIR spectra were collected at 10-mm...
Assessment of uncertainty in ROLO lunar irradiance for on-orbit calibration
Stone, T.C.; Kieffer, H.H.; Barnes, W.L.; Butler, J.J.
2004-01-01
A system to provide radiometric calibration of remote sensing imaging instruments on-orbit using the Moon has been developed by the US Geological Survey RObotic Lunar Observatory (ROLO) project. ROLO has developed a model for lunar irradiance which treats the primary geometric variables of phase and libration explicitly. The model fits hundreds of data points in each of 23 VNIR and 9 SWIR bands; input data are derived from lunar radiance images acquired by the project's on-site telescopes, calibrated to exoatmospheric radiance and converted to disk-equivalent reflectance. Experimental uncertainties are tracked through all stages of the data processing and modeling. Model fit residuals are ~1% in each band over the full range of observed phase and libration angles. Application of ROLO lunar calibration to SeaWiFS has demonstrated the capability for long-term instrument response trending with precision approaching 0.1% per year. Current work involves assessing the error in absolute responsivity and relative spectral response of the ROLO imaging systems, and propagation of error through the data reduction and modeling software systems with the goal of reducing the uncertainty in the absolute scale, now estimated at 5-10%. This level is similar to the scatter seen in ROLO lunar irradiance comparisons of multiple spacecraft instruments that have viewed the Moon. A field calibration campaign involving NASA and NIST has been initiated that ties the ROLO lunar measurements to the NIST (SI) radiometric scale.
Adaptive Prior Variance Calibration in the Bayesian Continual Reassessment Method
Zhang, Jin; Braun, Thomas M.; Taylor, Jeremy M.G.
2012-01-01
Use of the Continual Reassessment Method (CRM) and other model-based approaches to design in Phase I clinical trials has increased due to the ability of the CRM to identify the maximum tolerated dose (MTD) better than the 3+3 method. However, the CRM can be sensitive to the variance selected for the prior distribution of the model parameter, especially when a small number of patients are enrolled. While methods have emerged to adaptively select skeletons and to calibrate the prior variance only at the beginning of a trial, there has not been any approach developed to adaptively calibrate the prior variance throughout a trial. We propose three systematic approaches to adaptively calibrate the prior variance during a trial and compare them via simulation to methods proposed to calibrate the variance at the beginning of a trial. PMID:22987660
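The sensitivity to the prior variance can be made concrete with a minimal sketch of the one-parameter power-model CRM, with the posterior handled by a simple grid quadrature. The skeleton, trial data, and the `prior_sd` default below are illustrative assumptions, not values from the paper; `prior_sd` is the quantity an adaptive scheme would recalibrate during the trial.

```python
import numpy as np

def crm_recommend(skeleton, target, doses_given, tox, prior_sd=1.34):
    """One-parameter power-model CRM: p_i(a) = skeleton_i ** exp(a),
    with prior a ~ N(0, prior_sd^2); posterior computed on a grid."""
    a = np.linspace(-10.0, 10.0, 4001)                 # parameter grid
    prior = np.exp(-0.5 * (a / prior_sd) ** 2)         # unnormalised normal prior
    p = np.asarray(skeleton)[:, None] ** np.exp(a)     # dose-toxicity curves, (n_dose, n_grid)
    lik = np.ones_like(a)
    for d, y in zip(doses_given, tox):                 # binomial likelihood of observed outcomes
        lik *= p[d] if y else (1.0 - p[d])
    post = prior * lik
    post /= post.sum()                                 # normalise discrete posterior weights
    p_hat = (p * post).sum(axis=1)                     # posterior mean toxicity per dose
    return int(np.argmin(np.abs(p_hat - target))), p_hat

# hypothetical trial: 3 patients treated, one toxicity at dose index 1
dose, p_hat = crm_recommend([0.05, 0.12, 0.25, 0.40], target=0.25,
                            doses_given=[0, 1, 1], tox=[0, 0, 1])
```

Rerunning with a different `prior_sd` shifts `p_hat`, and hence the recommended dose, which is exactly the sensitivity the adaptive calibration approaches aim to control.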
Calibration of 3D ALE finite element model from experiments on friction stir welding of lap joints
NASA Astrophysics Data System (ADS)
Fourment, Lionel; Gastebois, Sabrina; Dubourg, Laurent
2016-10-01
In order to support the design of a process as complex as Friction Stir Welding (FSW) for the aeronautic industry, numerical simulation software requires (1) developing an efficient and accurate Finite Element (F.E.) formulation that allows predicting welding defects, (2) properly modeling the thermo-mechanical complexity of the FSW process and (3) calibrating the F.E. model from accurate measurements from FSW experiments. This work uses a parallel ALE formulation developed in the Forge® F.E. code to model the different possible defects (flashes and worm holes), while pin and shoulder threads are modeled by a new friction law at the tool / material interface. FSW experiments require using a complex tool with scroll on shoulder, which is instrumented for providing sensitive thermal data close to the joint. Calibration of unknown material thermal coefficients, constitutive equations parameters and friction model from measured forces, torques and temperatures is carried out using two F.E. models, Eulerian and ALE, to reach a satisfactory agreement assessed by the proper sensitivity of the simulation to process parameters.
Doherty, John E.; Hunt, Randall J.; Tonkin, Matthew J.
2010-01-01
Analysis of the uncertainty associated with parameters used by a numerical model, and with predictions that depend on those parameters, is fundamental to the use of modeling in support of decisionmaking. Unfortunately, predictive uncertainty analysis with regard to models can be very computationally demanding, due in part to complex constraints on parameters that arise from expert knowledge of system properties on the one hand (knowledge constraints) and from the necessity for the model parameters to assume values that allow the model to reproduce historical system behavior on the other hand (calibration constraints). Enforcement of knowledge and calibration constraints on parameters used by a model does not eliminate the uncertainty in those parameters. In fact, in many cases, enforcement of calibration constraints simply reduces the uncertainties associated with a number of broad-scale combinations of model parameters that collectively describe spatially averaged system properties. The uncertainties associated with other combinations of parameters, especially those that pertain to small-scale parameter heterogeneity, may not be reduced through the calibration process. To the extent that a prediction depends on system-property detail, its postcalibration variability may be reduced very little, if at all, by applying calibration constraints; knowledge constraints remain the only limits on the variability of predictions that depend on such detail. Regrettably, in many common modeling applications, these constraints are weak. Though the PEST software suite was initially developed as a tool for model calibration, recent developments have focused on the evaluation of model-parameter and predictive uncertainty. 
As a complement to functionality that it provides for highly parameterized inversion (calibration) by means of formal mathematical regularization techniques, the PEST suite provides utilities for linear and nonlinear error-variance and uncertainty analysis in these highly parameterized modeling contexts. Availability of these utilities is particularly important because, in many cases, a significant proportion of the uncertainty associated with model parameters (and the predictions that depend on them) arises from differences between the complex properties of the real world and the simplified representation of those properties that is expressed by the calibrated model. This report is intended to guide intermediate to advanced modelers in the use of capabilities available with the PEST suite of programs for evaluating model predictive error and uncertainty. A brief theoretical background is presented on sources of parameter and predictive uncertainty and on the means for evaluating this uncertainty. Applications of PEST tools are then discussed for overdetermined and underdetermined problems, both linear and nonlinear. PEST tools for calculating contributions to model predictive uncertainty, as well as optimization of data acquisition for reducing parameter and predictive uncertainty, are presented. The appendixes list the relevant PEST variables, files, and utilities required for the analyses described in the document.
NASA Astrophysics Data System (ADS)
Ercan, Mehmet Bulent
Watershed-scale hydrologic models are used for a variety of applications from flood prediction, to drought analysis, to water quality assessments. A particular challenge in applying these models is calibration of the model parameters, many of which are difficult to measure at the watershed-scale. A primary goal of this dissertation is to contribute new computational methods and tools for calibration of watershed-scale hydrologic models and the Soil and Water Assessment Tool (SWAT) model, in particular. SWAT is a physically-based, watershed-scale hydrologic model developed to predict the impact of land management practices on water quality and quantity. The dissertation follows a manuscript format, meaning it is composed of three separate but interrelated research studies. The first two research studies focus on SWAT model calibration, and the third research study presents an application of the new calibration methods and tools to study climate change impacts on water resources in the Upper Neuse Watershed of North Carolina using SWAT. The objective of the first two studies is to overcome computational challenges associated with calibration of SWAT models. The first study evaluates a parallel SWAT calibration tool built using the Windows Azure cloud environment and a parallel version of the Dynamically Dimensioned Search (DDS) calibration method modified to run in Azure. The calibration tool was tested for six model scenarios constructed using three watersheds of increasing size (the Eno, Upper Neuse, and Neuse) for both a 2 year and 10 year simulation duration. Leveraging the cloud as an on-demand computing resource allowed for a significantly reduced calibration time such that calibration of the Neuse watershed went from taking 207 hours on a personal computer to only 3.4 hours using 256 cores in the Azure cloud.
The second study aims at increasing SWAT model calibration efficiency by creating an open source, multi-objective calibration tool using the Non-Dominated Sorting Genetic Algorithm II (NSGA-II). This tool was demonstrated through an application for the Upper Neuse Watershed in North Carolina, USA. The objective functions used for the calibration were Nash-Sutcliffe (E) and Percent Bias (PB), and the objective sites were the Flat, Little, and Eno watershed outlets. The results show that the use of multi-objective calibration algorithms for SWAT calibration improved model performance especially in terms of minimizing PB compared to the single objective model calibration. The third study builds upon the first two studies by leveraging the new calibration methods and tools to study future climate impacts on the Upper Neuse watershed. Statistically downscaled outputs from eight Global Circulation Models (GCMs) were used for both low and high emission scenarios to drive a well calibrated SWAT model of the Upper Neuse watershed. The objective of the study was to understand the potential hydrologic response of the watershed, which serves as a public water supply for the growing Research Triangle Park region of North Carolina, under projected climate change scenarios. The future climate change scenarios, in general, indicate an increase in precipitation and temperature for the watershed in coming decades. The SWAT simulations using the future climate scenarios, in general, suggest an increase in soil water and water yield, and a decrease in evapotranspiration within the Upper Neuse watershed. 
In summary, this dissertation advances the field of watershed-scale hydrologic modeling by (i) providing some of the first work to apply cloud computing for the computationally-demanding task of model calibration; (ii) providing a new, open source library that can be used by SWAT modelers to perform multi-objective calibration of their models; and (iii) advancing understanding of climate change impacts on water resources for an important watershed in the Research Triangle Park region of North Carolina. The third study leveraged the methodological advances presented in the first two studies. Therefore, the dissertation contains three independent but interrelated studies that collectively advance the field of watershed-scale hydrologic modeling and analysis.
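The two objective functions named in the second study, Nash-Sutcliffe efficiency (E) and Percent Bias (PB), can be sketched directly. The sign convention for PB below follows the common hydrology definition (positive means underestimation); the dissertation's own convention may differ.

```python
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit, 0 matches the
    observed mean, negative values are worse than the mean."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def pbias(obs, sim):
    """Percent bias: 0 is perfect; PBIAS = 100 * sum(obs - sim) / sum(obs)."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 100.0 * np.sum(obs - sim) / np.sum(obs)
```

In a multi-objective setting such as NSGA-II, these two scores are evaluated per objective site and traded off as a Pareto front rather than collapsed into a single weighted score.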
Masterson, John P.; Fienen, Michael N.; Gesch, Dean B.; Carlson, Carl S.
2013-01-01
A three-dimensional groundwater-flow model was developed for Assateague Island in eastern Maryland and Virginia to simulate both groundwater flow and solute (salt) transport to evaluate the groundwater system response to sea-level rise. The model was constructed using geologic and spatial information to represent the island geometry, boundaries, and physical properties and was calibrated using an inverse modeling parameter-estimation technique. An initial transient solute-transport simulation was used to establish the freshwater-saltwater boundary for a final calibrated steady-state model of groundwater flow. This model was developed as part of an ongoing investigation by the U.S. Geological Survey Climate and Land Use Change Research and Development Program to improve capabilities for predicting potential climate-change effects and provide the necessary tools for adaptation and mitigation of potentially adverse impacts.
Review of technological advancements in calibration systems for laser vision correction
NASA Astrophysics Data System (ADS)
Arba-Mosquera, Samuel; Vinciguerra, Paolo; Verma, Shwetabh
2018-02-01
Using PubMed and our internal database, we extensively reviewed the literature on technological advancements in calibration systems, with the aim of presenting an account of the development history and the latest developments in calibration systems used in refractive surgery laser systems. As a secondary aim, we explored the clinical impact of the error introduced due to the roughness in ablation and its corresponding effect on system calibration. The inclusion criterion for this review was strict relevance to the clinical questions under research. The existing calibration methods, including various plastic models, are highly affected by various factors involved in refractive surgery, such as temperature, airflow, and hydration. Surface roughness plays an important role in the accurate measurement of ablation performance on calibration materials. The ratio of ablation efficiency between the human cornea and calibration material is very critical and highly dependent on the laser beam characteristics and test conditions. Objective evaluation of the calibration data and corresponding adjustment of the laser systems at regular intervals are essential for the continuing success and further improvements in outcomes of laser vision correction procedures.
Development and calibration of an accurate 6-degree-of-freedom measurement system with total station
NASA Astrophysics Data System (ADS)
Gao, Yang; Lin, Jiarui; Yang, Linghui; Zhu, Jigui
2016-12-01
To meet the demand of high-accuracy, long-range and portable use in large-scale metrology for pose measurement, this paper develops a 6-degree-of-freedom (6-DOF) measurement system based on total station by utilizing its advantages of long range and relative high accuracy. The cooperative target sensor, which is mainly composed of a pinhole prism, an industrial lens, a camera and a biaxial inclinometer, is designed to be portable in use. Subsequently, a precise mathematical model is proposed from the input variables observed by total station, imaging system and inclinometer to the output six pose variables. The model must be calibrated in two levels: the intrinsic parameters of imaging system, and the rotation matrix between coordinate systems of the camera and the inclinometer. Then corresponding approaches are presented. For the first level, we introduce a precise two-axis rotary table as a calibration reference. And for the second level, we propose a calibration method by varying the pose of a rigid body with the target sensor and a reference prism on it. Finally, through simulations and various experiments, the feasibilities of the measurement model and calibration methods are validated, and the measurement accuracy of the system is evaluated.
NASA Astrophysics Data System (ADS)
Zimmerman, Naomi; Presto, Albert A.; Kumar, Sriniwasa P. N.; Gu, Jason; Hauryliuk, Aliaksei; Robinson, Ellis S.; Robinson, Allen L.; Subramanian, R.
2018-01-01
Low-cost sensing strategies hold the promise of denser air quality monitoring networks, which could significantly improve our understanding of personal air pollution exposure. Additionally, low-cost air quality sensors could be deployed to areas where limited monitoring exists. However, low-cost sensors are frequently sensitive to environmental conditions and pollutant cross-sensitivities, which have historically been poorly addressed by laboratory calibrations, limiting their utility for monitoring. In this study, we investigated different calibration models for the Real-time Affordable Multi-Pollutant (RAMP) sensor package, which measures CO, NO2, O3, and CO2. We explored three methods: (1) laboratory univariate linear regression, (2) empirical multiple linear regression, and (3) machine-learning-based calibration models using random forests (RF). Calibration models were developed for 16-19 RAMP monitors (varied by pollutant) using training and testing windows spanning August 2016 through February 2017 in Pittsburgh, PA, US. The random forest models matched (CO) or significantly outperformed (NO2, CO2, O3) the other calibration models, and their accuracy and precision were robust over time for testing windows of up to 16 weeks. Following calibration, average mean absolute error on the testing data set from the random forest models was 38 ppb for CO (14 % relative error), 10 ppm for CO2 (2 % relative error), 3.5 ppb for NO2 (29 % relative error), and 3.4 ppb for O3 (15 % relative error), and Pearson r versus the reference monitors exceeded 0.8 for most units. Model performance is explored in detail, including a quantification of model variable importance, accuracy across different concentration ranges, and performance in a range of monitoring contexts including the National Ambient Air Quality Standards (NAAQS) and the US EPA Air Sensors Guidebook recommendations of minimum data quality for personal exposure measurement. 
A key strength of the RF approach is that it accounts for pollutant cross-sensitivities. This highlights the importance of developing multipollutant sensor packages (as opposed to single-pollutant monitors); we determined this is especially critical for NO2 and CO2. The evaluation reveals that only the RF-calibrated sensors meet the US EPA Air Sensors Guidebook recommendations of minimum data quality for personal exposure measurement. We also demonstrate that the RF-model-calibrated sensors could detect differences in NO2 concentrations between a near-road site and a suburban site less than 1.5 km away. From this study, we conclude that combining RF models with carefully controlled state-of-the-art multipollutant sensor packages as in the RAMP monitors appears to be a very promising approach to address the poor performance that has plagued low-cost air quality sensors.
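The RF calibration idea can be sketched with synthetic data in which the raw NO2 signal carries a built-in temperature drift and an O3 cross-sensitivity; feeding the co-located signals and environmental variables to a random forest lets it learn those cross-terms. All coefficients and variable names here are invented for illustration and are not RAMP data or models.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
temp = rng.uniform(0, 35, n)                 # deg C
rh = rng.uniform(20, 90, n)                  # % relative humidity
true_no2 = rng.uniform(0, 50, n)             # ppb, reference-grade "truth"
true_o3 = rng.uniform(0, 60, n)

# synthetic raw sensor signals: temperature drift + pollutant cross-sensitivity
no2_raw = 0.8 * true_no2 + 0.5 * true_o3 + 0.3 * temp + rng.normal(0, 1, n)
o3_raw = 0.9 * true_o3 - 0.2 * true_no2 + 0.1 * rh + rng.normal(0, 1, n)

# multipollutant feature set: both raw signals plus environmental covariates
X = np.column_stack([no2_raw, o3_raw, temp, rh])
X_tr, X_te, y_tr, y_te = train_test_split(X, true_no2, random_state=0)
rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
mae = np.mean(np.abs(rf.predict(X_te) - y_te))   # mean absolute error, ppb
```

Dropping `o3_raw` from `X` degrades the fit noticeably in this toy setup, which mirrors the paper's point that multipollutant packages are critical for correcting NO2 cross-sensitivities.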
Ray, Jaideep; Lefantzi, Sophia; Arunajatesan, Srinivasan; ...
2017-09-07
In this paper, we demonstrate a statistical procedure for learning a high-order eddy viscosity model (EVM) from experimental data and using it to improve the predictive skill of a Reynolds-averaged Navier–Stokes (RANS) simulator. The method is tested in a three-dimensional (3D), transonic jet-in-crossflow (JIC) configuration. The process starts with a cubic eddy viscosity model (CEVM) developed for incompressible flows. It is fitted to limited experimental JIC data using shrinkage regression. The shrinkage process removes all the terms from the model, except an intercept, a linear term, and a quadratic one involving the square of the vorticity. The shrunk eddy viscosity model is implemented in an RANS simulator and calibrated, using vorticity measurements, to infer three parameters. The calibration is Bayesian and is solved using a Markov chain Monte Carlo (MCMC) method. A 3D probability density distribution for the inferred parameters is constructed, thus quantifying the uncertainty in the estimate. The phenomenal cost of using a 3D flow simulator inside an MCMC loop is mitigated by using surrogate models (“curve-fits”). A support vector machine classifier (SVMC) is used to impose our prior belief regarding parameter values, specifically to exclude nonphysical parameter combinations. The calibrated model is compared, in terms of its predictive skill, to simulations using uncalibrated linear and CEVMs. Finally, we find that the calibrated model, with one quadratic term, is more accurate than the uncalibrated simulator. The model is also checked at a flow condition at which the model was not calibrated.
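The shrinkage-regression step can be illustrated on a small stand-in problem: a lasso fit over a polynomial basis of two invariants, contrived so that only an intercept, one linear term, and one squared term carry signal. The basis, the invariants, and all coefficients below are assumptions for illustration, not the CEVM's actual tensor terms.

```python
import numpy as np
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(1)
S = rng.normal(size=(500, 1))    # stand-in for a strain-rate invariant
W = rng.normal(size=(500, 1))    # stand-in for a vorticity invariant

# candidate basis (intercept is fitted separately by the model)
X = np.column_stack([S, W, S**2, W**2, S * W, S**3, W**3])
# synthetic "data": intercept + linear S term + squared-vorticity term
y = 0.5 + 1.2 * S[:, 0] + 0.8 * W[:, 0] ** 2 + rng.normal(0, 0.05, 500)

lasso = LassoCV(cv=5).fit(X, y)                      # L1 shrinkage with CV-chosen penalty
kept = [i for i, c in enumerate(lasso.coef_) if abs(c) > 1e-3]
```

The L1 penalty drives the coefficients of the uninformative cubic and cross terms toward zero, leaving a sparse model analogous to the shrunk EVM described in the abstract.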
Calibration of HEC-Ras hydrodynamic model using gauged discharge data and flood inundation maps
NASA Astrophysics Data System (ADS)
Tong, Rui; Komma, Jürgen
2017-04-01
The estimation of flood is essential for disaster alleviation. Hydrodynamic models are implemented to predict the occurrence and variance of floods at different scales. In practice, the calibration of hydrodynamic models aims to find the best possible parameters to represent the natural flow resistance. In recent years, the calibration of hydrodynamic models has become faster and more accurate, following advances in earth observation products and computer-based optimization techniques. In this study, the Hydrologic Engineering River Analysis System (HEC-Ras) model was set up with a high-resolution digital elevation model from laser scanning for the river Inn in Tyrol, Austria. The 10 largest flood events from 19 hourly discharge gauges and flood inundation maps were selected to calibrate the HEC-Ras model. Manning roughness values and lateral inflow factors were automatically optimized as parameters with the Shuffled Complex with Principal Component Analysis (SP-UCI) algorithm developed from the Shuffled Complex Evolution (SCE-UA) algorithm. Different objective functions (Nash-Sutcliffe model efficiency coefficient, timing of peak, peak value, and root-mean-square deviation) were used singly or in combination. The lateral inflow factor was found to be the most sensitive parameter. The SP-UCI algorithm could avoid local optima and achieve efficient and effective parameters in the calibration of the HEC-Ras model using flood extent images. As the results showed, calibration by means of gauged discharge data and flood inundation maps, together with the Nash-Sutcliffe objective function, was very robust, yielding more reliable flood simulations that better reproduced the peak value and timing.
Advanced very high resolution radiometer
NASA Technical Reports Server (NTRS)
1976-01-01
The advanced very high resolution radiometer development program is considered. The program covered the design, construction, and test of a breadboard model, engineering model, protoflight model, mechanical structural model, and a life test model. Special bench test and calibration equipment was also developed for use on the program.
NASA Astrophysics Data System (ADS)
Zheng, Feifei; Maier, Holger R.; Wu, Wenyan; Dandy, Graeme C.; Gupta, Hoshin V.; Zhang, Tuqiao
2018-02-01
Hydrological models are used for a wide variety of engineering purposes, including streamflow forecasting and flood-risk estimation. To develop such models, it is common to allocate the available data to calibration and evaluation data subsets. Surprisingly, the issue of how this allocation can affect model evaluation performance has been largely ignored in the research literature. This paper discusses the evaluation performance bias that can arise from how available data are allocated to calibration and evaluation subsets. As a first step to assessing this issue in a statistically rigorous fashion, we present a comprehensive investigation of the influence of data allocation on the development of data-driven artificial neural network (ANN) models of streamflow. Four well-known formal data splitting methods are applied to 754 catchments from Australia and the U.S. to develop 902,483 ANN models. Results clearly show that the choice of the method used for data allocation has a significant impact on model performance, particularly for runoff data that are more highly skewed, highlighting the importance of considering the impact of data splitting when developing hydrological models. The statistical behavior of the data splitting methods investigated is discussed and guidance is offered on the selection of the most appropriate data splitting methods to achieve representative evaluation performance for streamflow data with different statistical properties. Although our results are obtained for data-driven models, they highlight the fact that this issue is likely to have a significant impact on all types of hydrological models, especially conceptual rainfall-runoff models.
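The allocation issue can be illustrated with a toy comparison of a purely random split against a simple magnitude-stratified split on skewed, runoff-like data; stratification guarantees that both subsets span the full flow range. The stratified scheme below is a generic illustration, not one of the four formal data splitting methods studied in the paper.

```python
import numpy as np

rng = np.random.default_rng(2)
runoff = rng.lognormal(mean=1.0, sigma=1.2, size=1000)   # skewed, runoff-like series

def random_split(x, frac=0.7):
    """Allocate a random frac of the data to calibration, the rest to evaluation."""
    idx = rng.permutation(len(x))
    k = int(frac * len(x))
    return x[idx[:k]], x[idx[k:]]

def stratified_split(x, frac=0.7, bins=10):
    """Allocate within magnitude bins so both subsets cover the full range."""
    order = np.argsort(x)                       # indices sorted by magnitude
    cal, ev = [], []
    for chunk in np.array_split(order, bins):   # equal-size magnitude bins
        sub = rng.permutation(chunk)
        k = int(frac * len(sub))
        cal.append(x[sub[:k]])
        ev.append(x[sub[k:]])
    return np.concatenate(cal), np.concatenate(ev)

cal_r, ev_r = random_split(runoff)
cal_s, ev_s = stratified_split(runoff)
```

With highly skewed data, a random split can leave the evaluation subset without any of the rare large events, biasing the reported performance; the stratified split places some of the largest (and smallest) values in both subsets by construction.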
Geometric Characterization of Multi-Axis Multi-Pinhole SPECT
DiFilippo, Frank P.
2008-01-01
A geometric model and calibration process are developed for SPECT imaging with multiple pinholes and multiple mechanical axes. Unlike the typical situation where pinhole collimators are mounted directly to rotating gamma ray detectors, this geometric model allows for independent rotation of the detectors and pinholes, for the case where the pinhole collimator is physically detached from the detectors. This geometric model is applied to a prototype small animal SPECT device with a total of 22 pinholes and which uses dual clinical SPECT detectors. All free parameters in the model are estimated from a calibration scan of point sources and without the need for a precision point source phantom. For a full calibration of this device, a scan of four point sources with 360° rotation is suitable for estimating all 95 free parameters of the geometric model. After a full calibration, a rapid calibration scan of two point sources with 180° rotation is suitable for estimating the subset of 22 parameters associated with repositioning the collimation device relative to the detectors. The high accuracy of the calibration process is validated experimentally. Residual differences between predicted and measured coordinates are normally distributed with 0.8 mm full width at half maximum and are estimated to contribute 0.12 mm root mean square to the reconstructed spatial resolution. Since this error is small compared to other contributions arising from the pinhole diameter and the detector, the accuracy of the calibration is sufficient for high resolution small animal SPECT imaging. PMID:18293574
Wang, Gang; Briskot, Till; Hahn, Tobias; Baumann, Pascal; Hubbuch, Jürgen
2017-03-03
Mechanistic modeling has been repeatedly successfully applied in process development and control of protein chromatography. For each combination of adsorbate and adsorbent, the mechanistic models have to be calibrated. Some of the model parameters, such as system characteristics, can be determined reliably by applying well-established experimental methods, whereas others cannot be measured directly. In common practice of protein chromatography modeling, these parameters are identified by applying time-consuming methods such as frontal analysis combined with gradient experiments, curve-fitting, or combined Yamamoto approach. For new components in the chromatographic system, these traditional calibration approaches require to be conducted repeatedly. In the presented work, a novel method for the calibration of mechanistic models based on artificial neural network (ANN) modeling was applied. An in silico screening of possible model parameter combinations was performed to generate learning material for the ANN model. Once the ANN model was trained to recognize chromatograms and to respond with the corresponding model parameter set, it was used to calibrate the mechanistic model from measured chromatograms. The ANN model's capability of parameter estimation was tested by predicting gradient elution chromatograms. The time-consuming model parameter estimation process itself could be reduced down to milliseconds. The functionality of the method was successfully demonstrated in a study with the calibration of the transport-dispersive model (TDM) and the stoichiometric displacement model (SDM) for a protein mixture. Copyright © 2017 The Author(s). Published by Elsevier B.V. All rights reserved.
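The in-silico screening idea can be sketched with a toy forward model standing in for the TDM/SDM: sample parameter sets, simulate the corresponding chromatograms, and train an ANN to invert the mapping from chromatogram to parameters. The single-Gaussian elution model, the parameter ranges, and the network settings below are all illustrative assumptions, not the paper's models.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(3)
t = np.linspace(0.0, 10.0, 200)              # elution time grid

def simulate(params):
    """Toy forward model: one Gaussian elution peak per parameter set."""
    rt, width = params                       # retention time, peak width
    return np.exp(-0.5 * ((t - rt) / width) ** 2)

# in-silico screening: sample parameter combinations, generate training chromatograms
params = np.column_stack([rng.uniform(2.0, 8.0, 2000),    # retention time
                          rng.uniform(0.2, 1.0, 2000)])   # peak width
X = np.array([simulate(p) for p in params])

# train the ANN to respond with the parameter set that produced a chromatogram
ann = MLPRegressor(hidden_layer_sizes=(64,), max_iter=400,
                   random_state=0).fit(X, params)

# a "measured" chromatogram is then inverted in milliseconds
est = ann.predict(simulate((5.0, 0.5))[None, :])[0]
```

The expensive part (simulating the training set) happens once offline; afterwards each calibration is a single forward pass, which is what reduces parameter estimation "down to milliseconds."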
NASA Astrophysics Data System (ADS)
Attia, Khalid A. M.; Nassar, Mohammed W. I.; El-Zeiny, Mohamed B.; Serag, Ahmed
2017-01-01
For the first time, a new variable selection method based on swarm intelligence, namely the firefly algorithm, is coupled with three different multivariate calibration models, namely concentration residual augmented classical least squares, artificial neural network, and support vector regression, for UV spectral data. A comparative study between the firefly algorithm and the well-known genetic algorithm was developed. The discussion revealed the superiority of using this new powerful algorithm over the well-known genetic algorithm. Moreover, different statistical tests were performed and no significant differences were found between the models regarding their predictive ability. This ensures that simpler and faster models were obtained without any deterioration of the quality of the calibration.
NASA Technical Reports Server (NTRS)
Zhai, Chengxing; Milman, Mark H.; Regehr, Martin W.; Best, Paul K.
2007-01-01
In the companion paper, [Appl. Opt. 46, 5853 (2007)] a highly accurate white light interference model was developed from just a few key parameters characterized in terms of various moments of the source and instrument transmission function. We develop and implement the end-to-end process of calibrating these moment parameters together with the differential dispersion of the instrument and applying them to the algorithms developed in the companion paper. The calibration procedure developed herein is based on first obtaining the standard monochromatic parameters at the pixel level: wavenumber, phase, intensity, and visibility parameters via a nonlinear least-squares procedure that exploits the structure of the model. The pixel level parameters are then combined to obtain the required 'global' moment and dispersion parameters. The process is applied to both simulated scenarios of astrometric observations and to data from the microarcsecond metrology testbed (MAM), an interferometer testbed that has played a prominent role in the development of this technology.
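The first stage, obtaining pixel-level wavenumber, phase, intensity, and visibility via nonlinear least squares, can be illustrated with a generic monochromatic fringe model fitted by `scipy.optimize.curve_fit`. The model form and all parameter values below are a textbook stand-in, not the instrument's actual response or the structured solver used in the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

def fringe(x, I0, V, k, phi):
    """Monochromatic pixel model: mean intensity I0, visibility V,
    wavenumber k, and phase offset phi."""
    return I0 * (1.0 + V * np.cos(k * x + phi))

rng = np.random.default_rng(4)
opd = np.linspace(-5.0, 5.0, 400)              # optical path delay samples
truth = (100.0, 0.8, 2.0, 0.3)                 # assumed "true" pixel parameters
data = fringe(opd, *truth) + rng.normal(0, 0.5, opd.size)

# nonlinear least squares from a starting guess near the truth
popt, pcov = curve_fit(fringe, opd, data, p0=(90.0, 0.6, 1.9, 0.2))
```

In the calibration described by the paper, per-pixel estimates like `popt` are then combined across pixels to form the "global" moment and dispersion parameters.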
Calibration strategy for the COROT photometry
NASA Astrophysics Data System (ADS)
Buey, J.-T.; Auvergne, M.; Lapeyrere, V.; Boumier, P.
2004-01-01
Like Eddington, the COROT photometer will measure very small fluctuations on a large signal: the amplitudes of planetary transits and solar-like oscillations are expressed in ppm (parts per million). For such an instrument, specific calibration has to be done during the different phases of the development of the instrument and of all the subsystems. Two main things have to be taken into account: the calibration during the study phase, and the calibration of the sub-systems and the building of numerical models. The first item allows us to clearly understand all the perturbations (internal and external) and to identify their relative impacts on the expected signal (by numerical models including expected values of perturbations and the sensitivity of the instrument). Methods and a schedule for the calibration process can also be introduced, in good agreement with the development plan of the instrument. The second item is more related to the measurement of the sensitivity of the instrument and all its sub-systems. As the instrument is designed to be as stable as possible, we have to mix measurements (with larger fluctuations of parameters than expected) and numerical models. Some typical reasons for this are: there are many parameters to introduce in the measurements, and results from some models (bread-board for example) may be extrapolated to the flight model; larger fluctuations than expected are used (to measure the sensitivity precisely), and numerical models give the real value of noise with the expected fluctuations; characteristics of sub-systems may be measured and models used to give the sensitivity of the whole system built with them, as end-to-end measurements may be impossible (time, budget, physical limitations). Also, house-keeping measurements have to be set up on the critical parts of the sub-systems: measurements on thermal probes, power supply, pointing, etc.
All these house-keeping data are used during ground calibration and during the flight, so that correct correlation between signal and house-keeping can be achieved.
Load estimator (LOADEST): a FORTRAN program for estimating constituent loads in streams and rivers
Runkel, Robert L.; Crawford, Charles G.; Cohn, Timothy A.
2004-01-01
LOAD ESTimator (LOADEST) is a FORTRAN program for estimating constituent loads in streams and rivers. Given a time series of streamflow, additional data variables, and constituent concentration, LOADEST assists the user in developing a regression model for the estimation of constituent load (calibration). Explanatory variables within the regression model include various functions of streamflow, decimal time, and additional user-specified data variables. The formulated regression model then is used to estimate loads over a user-specified time interval (estimation). Mean load estimates, standard errors, and 95 percent confidence intervals are developed on a monthly and(or) seasonal basis. The calibration and estimation procedures within LOADEST are based on three statistical estimation methods. The first two methods, Adjusted Maximum Likelihood Estimation (AMLE) and Maximum Likelihood Estimation (MLE), are appropriate when the calibration model errors (residuals) are normally distributed. Of the two, AMLE is the method of choice when the calibration data set (time series of streamflow, additional data variables, and concentration) contains censored data. The third method, Least Absolute Deviation (LAD), is an alternative to maximum likelihood estimation when the residuals are not normally distributed. LOADEST output includes diagnostic tests and warnings to assist the user in determining the appropriate estimation method and in interpreting the estimated loads. This report describes the development and application of LOADEST. Sections of the report describe estimation theory, input/output specifications, sample applications, and installation instructions.
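The rating-curve regression that LOADEST calibrates can be sketched in a few lines. The toy example below (synthetic data, plain ordinary least squares rather than LOADEST's AMLE/MLE/LAD estimators, with centered variables) illustrates the seven-parameter form combining log streamflow, seasonal sine/cosine terms, and decimal time:

```python
import numpy as np

def design_matrix(lnq, t):
    """Regression terms of a LOADEST-style model: intercept, lnQ, lnQ^2,
    seasonal sine/cosine of decimal time, and linear/quadratic time trends.
    Inputs are assumed centered, as LOADEST centers its explanatory variables."""
    return np.column_stack([
        np.ones_like(lnq), lnq, lnq**2,
        np.sin(2 * np.pi * t), np.cos(2 * np.pi * t),
        t, t**2,
    ])

rng = np.random.default_rng(0)
t = rng.uniform(-2.5, 2.5, 400)          # centered decimal time (years)
lnq = rng.normal(0.0, 0.8, 400)          # centered log streamflow
true_beta = np.array([1.0, 0.9, 0.05, 0.3, -0.2, 0.01, 0.002])
lnload = design_matrix(lnq, t) @ true_beta + rng.normal(0, 0.1, 400)

# "calibration": fit the regression to the synthetic load time series
beta, *_ = np.linalg.lstsq(design_matrix(lnq, t), lnload, rcond=None)
```

The fitted coefficients would then be used to estimate loads over a user-specified interval, as the abstract describes.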
Development of a prognostic model for predicting spontaneous singleton preterm birth.
Schaaf, Jelle M; Ravelli, Anita C J; Mol, Ben Willem J; Abu-Hanna, Ameen
2012-10-01
To develop and validate a prognostic model for prediction of spontaneous preterm birth. Prospective cohort study using data of the nationwide perinatal registry in The Netherlands. We studied 1,524,058 singleton pregnancies between 1999 and 2007. We developed a multiple logistic regression model to estimate the risk of spontaneous preterm birth based on maternal and pregnancy characteristics. We used bootstrapping techniques to internally validate our model. Discrimination (AUC), accuracy (Brier score) and calibration (calibration graphs and Hosmer-Lemeshow C-statistic) were used to assess the model's predictive performance. Our primary outcome measure was spontaneous preterm birth at <37 completed weeks. Spontaneous preterm birth occurred in 57,796 (3.8%) pregnancies. The final model included 13 variables for predicting preterm birth. The predicted probabilities ranged from 0.01 to 0.71 (IQR 0.02-0.04). The model had an area under the receiver operating characteristic curve (AUC) of 0.63 (95% CI 0.63-0.63), the Brier score was 0.04 (95% CI 0.04-0.04) and the Hosmer-Lemeshow C-statistic was significant (p<0.0001). The calibration graph showed overprediction at higher values of predicted probability. The positive predictive value was 26% (95% CI 20-33%) for the 0.4 probability cut-off point. The model's discrimination was fair and its calibration was modest. Previous preterm birth, drug abuse and vaginal bleeding in the first half of pregnancy were the most important predictors of spontaneous preterm birth. Although not yet applicable in clinical practice, this model is a next step towards early prediction of spontaneous preterm birth that would enable caregivers to start preventive therapy in women at higher risk. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
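The performance measures named above are easy to sketch on synthetic data. This illustrative snippet (invented risks, not the registry data) computes the Brier score, the observed/expected ratio, and a decile-based comparison of the kind plotted in a calibration graph:

```python
import numpy as np

rng = np.random.default_rng(1)
p = rng.uniform(0.01, 0.3, 10_000)                 # predicted risks
y = (rng.uniform(size=p.size) < p).astype(float)   # outcomes drawn from those risks

brier = np.mean((p - y) ** 2)        # accuracy: mean squared error of the risks
oe_ratio = y.mean() / p.mean()       # calibration-in-the-large: observed/expected

# grouped calibration check (deciles of predicted risk), as in a calibration graph
bins = np.quantile(p, np.linspace(0, 1, 11))
idx = np.clip(np.digitize(p, bins[1:-1]), 0, 9)
obs = np.array([y[idx == k].mean() for k in range(10)])
exp = np.array([p[idx == k].mean() for k in range(10)])
```

Because the outcomes here are generated from the predicted risks themselves, observed and expected rates agree within sampling noise; a miscalibrated model would show systematic gaps between `obs` and `exp`.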
DOT National Transportation Integrated Search
2013-12-01
Travel forecasting models predict travel demand based on the present transportation system and its use. Transportation modelers must develop, validate, and calibrate models to ensure that predicted travel demand is as close to reality as possible. Mo...
Note: A portable automatic capillary viscometer for transparent and opaque liquids
NASA Astrophysics Data System (ADS)
Soltani Ghalehjooghi, A.; Minaei, S.; Gholipour Zanjani, N.; Beheshti, B.
2017-07-01
A portable automatic capillary viscometer, equipped with an AVR microcontroller, was designed and developed. The viscometer was calibrated with Certified Reference Material (CRM) s200 and utilized for measurement of kinematic viscosity. A quadratic equation was developed for calibration of the instrument at various temperatures. Also, a model was developed for viscosity determination in terms of the viscometer dimensions. Development of the portable viscometer provides for on-site monitoring of engine oil viscosity.
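A quadratic temperature-calibration equation like the one described can be obtained with a simple polynomial fit. The data points below are invented for illustration; they are not the instrument's actual CRM s200 measurements:

```python
import numpy as np

# hypothetical calibration data: temperature (deg C) vs. viscosity correction
# factor for a certified reference material (values are illustrative only)
temp = np.array([20.0, 30.0, 40.0, 50.0, 60.0, 70.0])
factor = np.array([1.21, 1.10, 1.00, 0.93, 0.88, 0.85])

coeffs = np.polyfit(temp, factor, deg=2)   # quadratic calibration curve
calibrate = np.poly1d(coeffs)

corrected = calibrate(45.0)                # correction factor at 45 deg C
```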
NASA Astrophysics Data System (ADS)
Huang, C. L.; Hsu, N. S.; Hsu, F. C.; Liu, H. J.
2016-12-01
This study develops a novel methodology for spatiotemporal groundwater calibration of large numbers of recharge quantities and parameters by coupling a specialized numerical model with analytical empirical orthogonal functions (EOF). The actual spatiotemporal patterns of groundwater pumpage are estimated by an originally developed back-propagation neural network-based response matrix combined with electricity-consumption analysis. The spatiotemporal patterns of recharge from surface water and of the hydrogeological parameters (i.e., horizontal hydraulic conductivity and vertical leakance) are calibrated by EOF analysis of the simulated error hydrograph of groundwater storage, in order to identify the multiple error sources and quantify the required corrections. The objective function of the optimization model minimizes the root mean square error (RMSE) of the simulated storage error percentage across multiple aquifers, subject to mass balance of the groundwater budget and the transient-state governing equation. The established method was applied to the groundwater system of the Chou-Shui River Alluvial Fan. The simulated period is from January 2012 to December 2014. The total numbers of hydraulic conductivity, vertical leakance, and surface-water recharge parameters among the four aquifers are 126, 96 and 1080, respectively. Results showed that the RMSE decreased dramatically during calibration and converged within six iterations, owing to efficient filtering of the error transmission induced by the estimated error and the recharge across the boundary. Moreover, the average simulated error percentage of groundwater level corresponding to the calibrated budget variables and parameters of aquifer one is as small as 0.11%. 
This indicates that the developed methodology not only effectively detects the flow tendency and error sources in all aquifers to achieve accurate spatiotemporal calibration, but also captures the peaks and fluctuations of groundwater levels in the shallow aquifer.
Harrison, David A; Brady, Anthony R; Parry, Gareth J; Carpenter, James R; Rowan, Kathy
2006-05-01
To assess the performance of published risk prediction models in common use in adult critical care in the United Kingdom and to recalibrate these models in a large representative database of critical care admissions. Prospective cohort study. A total of 163 adult general critical care units in England, Wales, and Northern Ireland, during the period of December 1995 to August 2003. A total of 231,930 admissions, of which 141,106 met inclusion criteria and had sufficient data recorded for all risk prediction models. None. The published versions of the Acute Physiology and Chronic Health Evaluation (APACHE) II, APACHE II UK, APACHE III, Simplified Acute Physiology Score (SAPS) II, and Mortality Probability Models (MPM) II were evaluated for discrimination and calibration by means of a combination of appropriate statistical measures recommended by an expert steering committee. All models showed good discrimination (the c index varied from 0.803 to 0.832) but imperfect calibration. Recalibration of the models, performed both by the Cox method and by re-estimating coefficients, led to improved discrimination and calibration, although all models still showed significant departures from perfect calibration. Risk prediction models developed in another country require validation and recalibration before being used to provide risk-adjusted outcomes within a new country setting. Periodic reassessment is beneficial to ensure calibration is maintained.
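Recalibration by the Cox method refits an intercept and slope on the logit of the original model's predictions. A minimal sketch on synthetic data follows, using a deliberately overpredicting model; the small Newton-Raphson logistic fit stands in for standard statistical software:

```python
import numpy as np

def logit(p):
    return np.log(p / (1 - p))

def recalibrate_cox(p_old, y, iters=50):
    """Fit y ~ intercept + slope * logit(p_old) by Newton-Raphson
    (plain logistic regression with the old logit as sole covariate)."""
    X = np.column_stack([np.ones_like(p_old), logit(p_old)])
    beta = np.zeros(2)
    for _ in range(iters):
        mu = 1 / (1 + np.exp(-X @ beta))
        grad = X.T @ (y - mu)
        hess = X.T @ (X * (mu * (1 - mu))[:, None])
        beta += np.linalg.solve(hess, grad)
    return beta  # (intercept, slope); (0, 1) means no recalibration needed

rng = np.random.default_rng(2)
p_true = rng.uniform(0.05, 0.6, 20_000)
y = (rng.uniform(size=p_true.size) < p_true).astype(float)
p_over = np.clip(p_true * 1.5, 0.01, 0.99)   # a model that overpredicts risk

alpha, slope = recalibrate_cox(p_over, y)
```

For an overpredicting model, the fitted intercept comes out negative, pulling predicted risks back down toward the observed event rate.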
NASA Astrophysics Data System (ADS)
Zammouri, Mounira; Ribeiro, Luis
2017-05-01
A groundwater flow model of the transboundary Saharan aquifer system was developed in 2003 and is used for management and decision-making by Algeria, Tunisia and Libya. In decision-making processes, reliability plays a decisive role. This paper examines the reliability of the Saharan aquifer model, aiming to detect the shortcomings of a model considered properly calibrated. After presenting the calibration results of the 2003 modelling effort, the uncertainty in the model arising from the lack of groundwater-level and transmissivity data is analyzed using kriging and a stochastic approach. Structural analyses of the steady-state piezometry and of the logarithms of transmissivity were carried out for the Continental Intercalaire (CI) and the Complexe Terminal (CT) aquifers. The available data (piezometry and transmissivity) were compared with the calculated values using a geostatistical approach. Using a stochastic approach, 2,500 realizations of a log-normal random transmissivity field of the CI aquifer were generated to assess the errors in the model output due to the uncertainty in transmissivity. Two types of poor calibration are shown. In some regions, calibration should be improved using the available data. In other areas, model refinement requires gathering new data to enhance knowledge of the aquifer system. The stochastic simulation results showed that the calculated drawdowns in 2050 could be higher than the values predicted by the calibrated model.
Longitudinal train dynamics model for a rail transit simulation system
Wang, Jinghui; Rakha, Hesham A.
2018-01-01
The paper develops a longitudinal train dynamics model in support of microscopic railway transportation simulation. The model can be calibrated without any mechanical data, making it ideal for implementation in transportation simulators. The calibration and validation work is based on data collected from the Portland light rail train fleet. The calibration procedure is mathematically formulated as a constrained non-linear optimization problem. The validity of the model is assessed by comparing instantaneous model predictions against field observations, and also evaluated in the domains of acceleration/deceleration versus speed and acceleration/deceleration versus distance. A test is conducted to investigate the adequacy of the model in simulation implementation. The results demonstrate that the proposed model can adequately capture instantaneous train dynamics, and provides good performance in the simulation test. Thus, the model provides a simple theoretical foundation for microscopic simulators and will significantly support the planning, management and control of railway transportation systems.
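The calibration idea, fitting resistance coefficients so that modeled accelerations match field observations, can be sketched with synthetic data. This toy uses an invented point-mass model with Davis-type resistance; because the toy is linear in its coefficients, ordinary least squares suffices here, whereas the paper formulates the real problem as a constrained non-linear optimization:

```python
import numpy as np

rng = np.random.default_rng(3)
v = rng.uniform(2.0, 25.0, 500)                    # observed speeds (m/s)

# hypothetical Davis-type resistance per unit mass: c0 + c1*v + c2*v^2 (m/s^2)
c_true = np.array([0.05, 0.002, 0.0004])
effort = np.minimum(1.2, 80.0 / v)                 # power-limited tractive effort (m/s^2)
X = np.column_stack([np.ones_like(v), v, v**2])
a_obs = effort - X @ c_true + rng.normal(0, 0.01, v.size)  # noisy field accelerations

# calibration: recover the resistance coefficients from speed/acceleration data
c_hat, *_ = np.linalg.lstsq(X, effort - a_obs, rcond=None)
rmse = np.sqrt(np.mean((effort - X @ c_hat - a_obs) ** 2))
```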
NASA Astrophysics Data System (ADS)
Stern, M. A.; Flint, L. E.; Flint, A. L.; Wright, S. A.; Minear, J. T.
2014-12-01
A watershed model of the Sacramento River Basin, CA, was developed to simulate streamflow and suspended sediment transport to the San Francisco Bay Delta (SFBD) for fifty years (1958-2008) using the Hydrological Simulation Program - FORTRAN (HSPF). To compensate for the large model domain and sparse data, rigorous meteorological development and characterization of hydraulic geometry were employed to spatially distribute climate and hydrologic processes in unmeasured locations. Parameterization techniques sought to include known spatial information for tributaries, such as soil information and slope, and parameters were then scaled up or down during calibration to retain the spatial characteristics of the land surface in un-gaged areas. Accuracy was assessed by comparing model calibration to measured streamflow. Calibration and validation of the Sacramento River ranged from "good" to "very good" performance based upon a "goodness-of-fit" statistical guideline. Measured sediment loads were underestimated by the calibrated model on average by 39% for the Sacramento River, and suspended sediment concentrations were underestimated on average by 22%. Sediment loads showed a slight decreasing trend from 1958-2008, which was significant (p < 0.0025) in the lower 50% of stream flows. Hypothetical climate change scenarios were developed using the Climate Assessment Tool (CAT). Several wet and dry scenarios coupled with temperature increases were imposed on the historical base conditions to evaluate the sensitivity of streamflow and sediment to potential changes in climate. Wet scenarios showed an increase of 9.7 - 17.5% in streamflow, a 7.6 - 17.5% increase in runoff, and a 30 - 93% increase in sediment loads. The dry scenarios showed a roughly 5% decrease in flow and runoff, and a 16 - 18% decrease in sediment loads. 
The base hydrology was most sensitive to a temperature increase of 1.5 degrees Celsius and an increase in storm intensity and frequency. The complete calibrated HSPF model will use future climate scenarios to make projections of potential hydrologic and sediment trends to the SFBD from 2000-2100.
Development of Landsat-5 Thematic Mapper internal calibrator gain and offset table
Barsi, J.A.; Chander, G.; Micijevic, E.; Markham, B.L.; Haque, Md. O.
2008-01-01
The National Landsat Archive Production System (NLAPS) has been the primary processing system for Landsat data since the U.S. Geological Survey (USGS) Earth Resources Observation and Science (EROS) Center started archiving Landsat data. NLAPS converts raw satellite data into radiometrically and geometrically calibrated products. NLAPS has historically used the Internal Calibrator (IC) to calibrate the reflective bands of the Landsat-5 Thematic Mapper (TM), even though the lamps in the IC were less stable than the TM detectors, as evidenced by vicarious calibration results. In 2003, a major effort was made to model the actual TM gain change and to update NLAPS to use this model rather than the unstable IC data for radiometric calibration. The model coefficients were revised in 2007 to reflect greater understanding of the changes in the TM responsivity. While the calibration updates are important to users with recently processed data, the processing system no longer calculates the original IC gain or offset. For specific applications, it is useful to have a record of the gain and offset actually applied to the older data. Thus, the NLAPS calibration database was used to generate estimated daily values for the radiometric gain and offset that might have been applied to TM data. This paper discusses the need for and generation of the NLAPS IC gain and offset tables. A companion paper covers the application of and errors associated with using these tables.
Multielevation calibration of frequency-domain electromagnetic data
Minsley, Burke J.; Kass, M. Andy; Hodges, Greg; Smith, Bruce D.
2014-01-01
Systematic calibration errors must be taken into account because they can substantially impact the accuracy of inverted subsurface resistivity models derived from frequency-domain electromagnetic data, resulting in potentially misleading interpretations. We have developed an approach that uses data acquired at multiple elevations over the same location to assess calibration errors. A significant advantage is that this method does not require prior knowledge of subsurface properties from borehole or ground geophysical data (though these can be readily incorporated if available), and is, therefore, well suited to remote areas. The multielevation data were used to solve for calibration parameters and a single subsurface resistivity model that are self-consistent over all elevations. The deterministic and Bayesian formulations of the multielevation approach illustrate parameter sensitivity and uncertainty using synthetic- and field-data examples. Multiplicative calibration errors (gain and phase) were found to be better resolved at high frequencies and when data were acquired over a relatively conductive area, whereas additive errors (bias) were reasonably resolved over conductive and resistive areas at all frequencies. The Bayesian approach outperformed the deterministic approach when estimating calibration parameters using multielevation data at a single location; however, joint analysis of multielevation data at multiple locations using the deterministic algorithm yielded the most accurate estimates of calibration parameters. Inversion results using calibration-corrected data revealed marked improvement in misfit, lending added confidence to the interpretation of these models.
McKee, Paul W.; Clark, Brian R.
2003-01-01
The Sparta aquifer, which consists of the Sparta Sand, in southeastern Arkansas and north-central Louisiana is a major water resource and provides water for municipal, industrial, and agricultural uses. In recent years, the demand in some areas has resulted in withdrawals from the Sparta aquifer that substantially exceed replenishment of the aquifer. Considerable drawdown has occurred in the potentiometric surface forming regional cones of depression as water is removed from storage by withdrawals. These cones of depression are centered beneath the Grand Prairie area and the cities of Pine Bluff and El Dorado in Arkansas, and Monroe in Louisiana. The rate of decline for hydraulic heads in the aquifer has been greater than 1 foot per year for more than a decade in much of southern Arkansas and northern Louisiana where hydraulic heads are now below the top of the Sparta Sand. Continued hydraulic-head declines have caused water users and managers alike to question the ability of the aquifer to supply water for the long term. Concern over protecting the Sparta aquifer as a sustainable resource has resulted in a continued, cooperative effort by the Arkansas Soil and Water Conservation Commission, U.S. Army Corps of Engineers, and the U.S. Geological Survey to develop, maintain, and utilize numerical ground-water flow models to manage and further analyze the ground-water system. The work presented in this report describes the development and calibration of a ground-water flow model representing the Sparta aquifer to simulate observed hydraulic heads, documents major differences in the current Sparta model compared to the previous Sparta model calibrated in the mid-1980's, and presents the results of three hypothetical future withdrawal scenarios. The current Sparta model-a regional scale, three-dimensional numerical ground-water flow model-was constructed and calibrated using available hydrogeologic, hydraulic, and water-use data from 1898 to 1997. 
Significant changes from the previous model include grid rediscretization of the aquifer, extension of the active model area northward beyond the Cane River Formation facies change, and representation of model boundaries. The current model was calibrated with the aid of parameter estimation, a nonlinear regression technique, combined with trial-and-error parameter adjustment using a total of 795 observations from 316 wells over 4 different years: 1970, 1985, 1990, and 1997. The calibration data set provides broad spatial and temporal coverage of aquifer conditions. Analysis of the residual statistics, spatial distribution of residuals, simulated compared to observed hydrographs, and simulated compared to observed potentiometric surfaces were used to assess the ability of the calibrated model to simulate aquifer conditions within acceptable error. The calibrated model has a root mean square error of 18 feet for all observations, an improvement of more than 12 feet over the previous model. The current Sparta model was used to predict the effects of three hypothetical withdrawal scenarios on hydraulic heads over the period 1998-2027, with one of those extended indefinitely until equilibrium (steady-state) conditions were attained. In scenario 1a, withdrawals representing the time period from 1990 to 1997 were held constant for 30 years from 1998 to 2027. Hydraulic heads in the middle of the cone of depression centered on El Dorado decreased by 10 feet from the 1997 simulation to 222 feet below NGVD of 1929 in 2027. Hydraulic heads in the Pine Bluff cone of depression showed a greater decline, from 61 feet below NGVD of 1929 to 78 feet below NGVD of 1929 in the center of the cone. 
With these same withdrawals extended to steady state (scenario 1b), hydraulic heads in the center of the Pine Bluff cone of depression declined further.
A self-modifying cellular automaton model of historical urbanization in the San Francisco Bay area
Clarke, K.C.; Hoppen, S.; Gaydos, L.
1997-01-01
In this paper we describe a cellular automaton (CA) simulation model developed to predict urban growth as part of a project for estimating the regional and broader impact of urbanization on the San Francisco Bay area's climate. The rules of the model are more complex than those of a typical CA and involve the use of multiple data sources, including topography, road networks, and existing settlement distributions, and their modification over time. In addition, the control parameters of the model are allowed to self-modify: that is, the CA adapts itself to the circumstances it generates, in particular, during periods of rapid growth or stagnation. In addition, the model was written to allow the accumulation of probabilistic estimates based on Monte Carlo methods. Calibration of the model has been accomplished by the use of historical maps to compare model predictions of urbanization, based solely upon the distribution in year 1900, with observed data for years 1940, 1954, 1962, 1974, and 1990. The complexity of this model has made calibration a particularly demanding step. Lessons learned about the methods, measures, and strategies developed to calibrate the model may be of use in other environmental modeling contexts. With the calibration complete, the model is being used to generate a set of future scenarios for the San Francisco Bay area along with their probabilities based on the Monte Carlo version of the model. Animated dynamic mapping of the simulations will be used to allow visualization of the impact of future urban growth.
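A minimal flavor of such a model, probabilistic growth at urban edges accumulated over Monte Carlo runs, can be sketched in a few lines. The rules and parameters below are invented for illustration and are far simpler than the paper's self-modifying CA:

```python
import numpy as np

def grow_step(urban, diffusion, rng):
    """One growth step: non-urban cells with an urban neighbor urbanize
    with probability `diffusion` (von Neumann neighborhood, wrapping edges)."""
    nbrs = sum(np.roll(urban, s, axis=ax) for ax in (0, 1) for s in (-1, 1))
    candidates = (~urban) & (nbrs >= 1)
    return urban | (candidates & (rng.uniform(size=urban.shape) < diffusion))

seed = np.zeros((50, 50), dtype=bool)
seed[25, 25] = True                       # a single initial settlement

# Monte Carlo: accumulate per-cell urbanization probabilities over repeated runs
n_runs, counts = 20, np.zeros((50, 50))
for k in range(n_runs):
    rng = np.random.default_rng(k)
    state = seed.copy()
    for _ in range(30):
        state = grow_step(state, 0.3, rng)
    counts += state
prob = counts / n_runs                    # probabilistic urbanization map
```

Calibration in the paper's sense would adjust parameters like `diffusion` until the simulated urban extent matches historical maps.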
NASA Astrophysics Data System (ADS)
Jena, S.
2015-12-01
The overexploitation of groundwater has resulted in the abandonment of many shallow tube wells in the river basin in Eastern India. For the sustainability of groundwater resources, basin-scale modelling of groundwater flow is essential for the efficient planning and management of water resources. The main intent of this study is to develop a 3-D groundwater flow model of the study basin using the Visual MODFLOW package and to calibrate and validate it using 17 years of observed data. A sensitivity analysis was carried out to quantify the susceptibility of the aquifer system to river bank seepage, recharge from rainfall and agricultural practices, horizontal and vertical hydraulic conductivities, and specific yield. To quantify the impact of parameter uncertainties, the Sequential Uncertainty Fitting Algorithm (SUFI-2) and Markov chain Monte Carlo (MCMC) techniques were implemented. Results from the two techniques were compared and their advantages and disadvantages analysed. The Nash-Sutcliffe efficiency (NSE) and coefficient of determination (R²) were adopted as the two criteria during calibration and validation of the developed model. NSE and R² values of the groundwater flow model for the calibration and validation periods were in the acceptable range. The MCMC technique was able to provide more reasonable results than SUFI-2. The calibrated and validated model will be useful for identifying aquifer properties, analysing groundwater flow dynamics, and forecasting changes in groundwater levels.
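The two goodness-of-fit criteria named above are straightforward to compute; here is a short sketch on synthetic water-level series (invented data, not the study basin's):

```python
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 minus error variance over observed variance."""
    return 1 - np.sum((obs - sim) ** 2) / np.sum((obs - np.mean(obs)) ** 2)

def r2(obs, sim):
    """Coefficient of determination: squared Pearson correlation."""
    return np.corrcoef(obs, sim)[0, 1] ** 2

rng = np.random.default_rng(5)
obs = 10 + np.cumsum(rng.normal(0, 0.2, 365))   # synthetic groundwater levels (m)
sim = obs + rng.normal(0, 0.3, 365)             # model output with simulation error
```

NSE equals 1 for a perfect simulation and can go negative when the model is worse than simply predicting the observed mean.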
Koláčková, Pavla; Růžičková, Gabriela; Gregor, Tomáš; Šišperová, Eliška
2015-08-30
Calibration models for the Fourier transform near-infrared (FT-NIR) instrument were developed for quick and non-destructive determination of oil and fatty acids in whole achenes of milk thistle. Samples with a range of oil and fatty acid levels were collected and their transmittance spectra were obtained with the FT-NIR instrument. Based on these spectra and data gained by means of the reference methods (Soxhlet extraction and gas chromatography (GC)), calibration models were created by means of partial least squares (PLS) regression analysis. Precision and accuracy of the calibration models were verified via cross-validation of validation samples whose spectra were not part of the calibration model, and also according to the root mean square error of prediction (RMSEP), root mean square error of calibration (RMSEC), root mean square error of cross-validation (RMSECV) and the validation coefficient of determination (R²). R² values for whole seeds were 0.96, 0.96, 0.83 and 0.67 and the RMSEP values were 0.76, 1.68, 1.24 and 0.54 for oil, linoleic (C18:2), oleic (C18:1) and palmitic (C16:0) acids, respectively. The calibration models are appropriate for the non-destructive determination of oil and fatty acid levels in whole seeds of milk thistle. © 2014 Society of Chemical Industry.
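A PLS calibration of this kind can be sketched with a minimal single-response NIPALS implementation on synthetic "spectra"; the study used an FT-NIR instrument and standard chemometrics software, and all data and dimensions below are invented:

```python
import numpy as np

def pls1_fit(X, y, n_comp):
    """Minimal PLS1 (NIPALS) returning a regression vector b with y ~ X @ b.
    X and y are assumed column-centered."""
    Xk, yk = X.copy(), y.copy()
    W, P, q = [], [], []
    for _ in range(n_comp):
        w = Xk.T @ yk
        w /= np.linalg.norm(w)          # weight vector
        t = Xk @ w                      # score vector
        tt = t @ t
        p = Xk.T @ t / tt               # X loading
        qk = yk @ t / tt                # y loading
        Xk, yk = Xk - np.outer(t, p), yk - qk * t   # deflate
        W.append(w); P.append(p); q.append(qk)
    W, P, q = np.array(W).T, np.array(P).T, np.array(q)
    return W @ np.linalg.solve(P.T @ W, q)

rng = np.random.default_rng(6)
X = rng.normal(size=(60, 500))                    # 60 "spectra" x 500 wavelengths
b_true = np.zeros(500)
b_true[[50, 200, 400]] = [1.0, -0.5, 0.8]         # a few informative wavelengths
y = X @ b_true + rng.normal(0, 0.05, 60)          # e.g. oil content by reference method

Xc, yc = X - X.mean(0), y - y.mean()
b = pls1_fit(Xc, yc, n_comp=5)
rmsec = np.sqrt(np.mean((yc - Xc @ b) ** 2))      # root mean square error of calibration
```

RMSEP and RMSECV would be computed the same way, but on held-out validation samples rather than the calibration set.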
Too Little Too Soon? Modeling the Risks of Spiral Development
2007-04-30
[Figure residue: simulation-output chart plotting "Work started and active" work packages over time (weeks 270-900) for the Requirements, Technology, Design, and Manufacturing phases of iteration 1 of the JavelinCalibration model run.]
Kalukin, Andrew; Endo, Satashi
2016-08-30
Test the feasibility of incorporating atmospheric models to improve simulation algorithms of image collection, developed at NGA. Various calibration objects will be used to compare simulated image products with real image products.
Hickey, Graeme L.; Grant, Stuart W.; Murphy, Gavin J.; Bhabra, Moninder; Pagano, Domenico; McAllister, Katherine; Buchan, Iain; Bridgewater, Ben
2013-01-01
OBJECTIVES Progressive loss of calibration of the original EuroSCORE models has necessitated the introduction of the EuroSCORE II model. Poor model calibration has important implications for clinical decision-making and risk adjustment of governance analyses. The objective of this study was to explore the reasons for the calibration drift of the logistic EuroSCORE. METHODS Data from the Society for Cardiothoracic Surgery in Great Britain and Ireland database were analysed for procedures performed at all National Health Service and some private hospitals in England and Wales between April 2001 and March 2011. The primary outcome was in-hospital mortality. EuroSCORE risk factors, overall model calibration and discrimination were assessed over time. RESULTS A total of 317 292 procedures were included. Over the study period, mean age at surgery increased from 64.6 to 67.2 years. The proportion of procedures that were isolated coronary artery bypass grafts decreased from 67.5 to 51.2%. In-hospital mortality fell from 4.1 to 2.8%, but the mean logistic EuroSCORE increased from 5.6 to 7.6%. The logistic EuroSCORE remained a good discriminant throughout the study period (area under the receiver-operating characteristic curve between 0.79 and 0.85), but calibration (observed-to-expected mortality ratio) fell from 0.76 to 0.37. Inadequate adjustment for decreasing baseline risk affected calibration considerably. DISCUSSIONS Patient risk factors and case-mix in adult cardiac surgery change dynamically over time. Models like the EuroSCORE that are developed using a ‘snapshot’ of data in time do not account for this and can subsequently lose calibration. It is therefore important to regularly revalidate clinical prediction models. PMID:23152436
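The calibration measure at issue, the observed-to-expected mortality ratio, is simple to compute. Using the summary figures quoted above (illustrative only, since the published ratios were computed from patient-level counts):

```python
def oe_ratio(observed_rate, mean_predicted_risk):
    """Calibration-in-the-large: observed mortality over mean predicted risk."""
    return observed_rate / mean_predicted_risk

# summary figures from the abstract (start vs. end of the study period)
start = oe_ratio(0.041, 0.056)   # in-hospital mortality 4.1%, mean EuroSCORE 5.6%
end = oe_ratio(0.028, 0.076)     # in-hospital mortality 2.8%, mean EuroSCORE 7.6%
```

Falling observed mortality combined with rising mean predicted risk drives the ratio down, which is exactly the calibration drift the study documents.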
Development of PBPK Models for Gasoline in Adult and ...
Concern for potential developmental effects of exposure to gasoline-ethanol blends has grown along with their increased use in the US fuel supply. Physiologically-based pharmacokinetic (PBPK) models for these complex mixtures were developed to address dosimetric issues related to selection of exposure concentrations for in vivo toxicity studies. Sub-models for individual hydrocarbon (HC) constituents were first developed and calibrated with published literature or QSAR-derived data where available. Successfully calibrated sub-models for individual HCs were combined, assuming competitive metabolic inhibition in the liver, and a priori simulations of mixture interactions were performed. Blood HC concentration data were collected from exposed adult non-pregnant (NP) rats (9K ppm total HC vapor, 6h/day) to evaluate performance of the NP mixture model. This model was then converted to a pregnant (PG) rat mixture model using gestational growth equations that enabled a priori estimation of life-stage specific kinetic differences. To address the impact of changing relevant physiological parameters from NP to PG, the PG mixture model was first calibrated against the NP data. The PG mixture model was then evaluated against data from PG rats that were subsequently exposed (9K ppm/6.33h gestation days (GD) 9-20). Overall, the mixture models adequately simulated concentrations of HCs in blood from single (NP) or repeated (PG) exposures (within ~2-3 fold of measured values of
New NIR Calibration Models Speed Biomass Composition and Reactivity Characterization
DOE Office of Scientific and Technical Information (OSTI.GOV)
2015-09-01
Obtaining accurate chemical composition and reactivity (measures of carbohydrate release and yield) information for biomass feedstocks in a timely manner is necessary for the commercialization of biofuels. This highlight describes NREL's work to use near-infrared (NIR) spectroscopy and partial least squares multivariate analysis to develop calibration models to predict the feedstock composition and the release and yield of soluble carbohydrates generated by a bench-scale dilute acid pretreatment and enzymatic hydrolysis assay. This highlight is being developed for the September 2015 Alliance S&T Board meeting.
Cooley, Richard L.
1993-01-01
Calibration data (observed values corresponding to model-computed values of dependent variables) are incorporated into a general method of computing exact Scheffé-type confidence intervals analogous to the confidence intervals developed in part 1 (Cooley, this issue) for a function of parameters derived from a groundwater flow model. Parameter uncertainty is specified by a distribution of parameters conditioned on the calibration data. This distribution was obtained as a posterior distribution by applying Bayes' theorem to the hydrogeologically derived prior distribution of parameters from part 1 and a distribution of differences between the calibration data and corresponding model-computed dependent variables. Tests show that the new confidence intervals can be much smaller than the intervals of part 1 because the prior parameter variance-covariance structure is altered so that combinations of parameters that give poor model fit to the data are unlikely. The confidence intervals of part 1 and the new confidence intervals can be effectively employed in a sequential method of model construction whereby new information is used to reduce confidence interval widths at each stage.
Hypersonic Wind Tunnel Calibration Using the Modern Design of Experiments
NASA Technical Reports Server (NTRS)
Rhode, Matthew N.; DeLoach, Richard
2005-01-01
A calibration of a hypersonic wind tunnel has been conducted using formal experiment design techniques and response surface modeling. Data from a compact, highly efficient experiment were used to create a regression model of the pitot pressure as a function of the facility operating conditions as well as the longitudinal location within the test section. The new calibration utilized far fewer design points than prior experiments, but covered a wider range of the facility's operating envelope while revealing interactions between factors not captured in previous calibrations. A series of points chosen randomly within the design space was used to verify the accuracy of the response model. The development of the experiment design is discussed along with tactics used in the execution of the experiment to defend against systematic variation in the results. Trends in the data are illustrated, and comparisons are made to earlier findings.
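The response-surface regression described above amounts to fitting a full second-order polynomial to the calibration data by least squares, then checking it against randomly chosen confirmation points. A sketch with invented factors and coefficients (not the actual facility data):

```python
import numpy as np

def quadratic_design_matrix(x1, x2):
    """Full second-order model: intercept, main effects, interaction, squares."""
    return np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1**2, x2**2])

# Hypothetical calibration data in coded units: x1 = an operating condition,
# x2 = axial station in the test section; y = pitot pressure response.
rng = np.random.default_rng(1)
x1 = rng.uniform(-1, 1, 30)
x2 = rng.uniform(-1, 1, 30)
y = 5.0 + 1.2 * x1 - 0.4 * x2 + 0.3 * x1 * x2 + 0.1 * x1**2 + rng.normal(0, 0.01, 30)

A = quadratic_design_matrix(x1, x2)
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

# Confirmation points drawn randomly in the design space, as in the abstract
x1c, x2c = rng.uniform(-1, 1, 5), rng.uniform(-1, 1, 5)
pred = quadratic_design_matrix(x1c, x2c) @ coef
```

The interaction column `x1 * x2` is what lets such a model capture the between-factor effects the abstract says earlier calibrations missed.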
NASA Astrophysics Data System (ADS)
Chen, Hua-cai; Chen, Xing-dan; Lu, Yong-jun; Cao, Zhi-qiang
2006-01-01
Near infrared (NIR) reflectance spectroscopy was used to develop a fast determination method for total ginsenosides in ginseng (Panax ginseng) powder. The spectra were analyzed with the multiplicative signal correction (MSC) correlation method. The spectral regions best correlated with total ginsenoside content were 1660~1880 nm and 2230~2380 nm. The NIR calibration models for ginsenosides were built with multiple linear regression (MLR), principal component regression (PCR), and partial least squares (PLS) regression, respectively. The results showed that the best calibration model was the one built with PLS combined with MSC over the optimal spectral region. The correlation coefficient and root mean square error of calibration (RMSEC) of the best calibration model were 0.98 and 0.15%, respectively. The optimal spectral region for calibration was 1204~2014 nm. The results suggest that using NIR to rapidly determine the total ginsenoside content in ginseng powder is feasible.
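The MSC preprocessing mentioned above regresses each spectrum on a reference (usually the mean) spectrum and removes the fitted multiplicative and additive scatter effects. A minimal sketch on synthetic data (scaled and offset copies of one spectrum, which MSC collapses back onto the reference):

```python
import numpy as np

def msc(spectra, reference=None):
    """Multiplicative signal correction: regress each spectrum on the
    reference (mean) spectrum, then remove the fitted offset and slope."""
    ref = spectra.mean(axis=0) if reference is None else reference
    corrected = np.empty_like(spectra, dtype=float)
    for i, s in enumerate(spectra):
        slope, offset = np.polyfit(ref, s, 1)   # s ~ offset + slope * ref
        corrected[i] = (s - offset) / slope
    return corrected

# Synthetic check: scaled/offset copies of one spectrum collapse onto it
base = np.sin(np.linspace(0, 3, 100)) + 2.0
raw = np.array([a * base + b for a, b in [(1.2, 0.3), (0.8, -0.2), (1.0, 0.0)]])
corrected = msc(raw, reference=base)
```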
NASA Astrophysics Data System (ADS)
Akerib, D. S.; Alsum, S.; Araújo, H. M.; Bai, X.; Bailey, A. J.; Balajthy, J.; Beltrame, P.; Bernard, E. P.; Bernstein, A.; Biesiadzinski, T. P.; Boulton, E. M.; Brás, P.; Byram, D.; Cahn, S. B.; Carmona-Benitez, M. C.; Chan, C.; Currie, A.; Cutter, J. E.; Davison, T. J. R.; Dobi, A.; Dobson, J. E. Y.; Druszkiewicz, E.; Edwards, B. N.; Faham, C. H.; Fallon, S. R.; Fan, A.; Fiorucci, S.; Gaitskell, R. J.; Gehman, V. M.; Genovesi, J.; Ghag, C.; Gilchriese, M. G. D.; Hall, C. R.; Hanhardt, M.; Haselschwardt, S. J.; Hertel, S. A.; Hogan, D. P.; Horn, M.; Huang, D. Q.; Ignarra, C. M.; Jacobsen, R. G.; Ji, W.; Kamdin, K.; Kazkaz, K.; Khaitan, D.; Knoche, R.; Larsen, N. A.; Lee, C.; Lenardo, B. G.; Lesko, K. T.; Lindote, A.; Lopes, M. I.; Manalaysay, A.; Mannino, R. L.; Marzioni, M. F.; McKinsey, D. N.; Mei, D.-M.; Mock, J.; Moongweluwan, M.; Morad, J. A.; Murphy, A. St. J.; Nehrkorn, C.; Nelson, H. N.; Neves, F.; O'Sullivan, K.; Oliver-Mallory, K. C.; Palladino, K. J.; Pease, E. K.; Reichhart, L.; Rhyne, C.; Shaw, S.; Shutt, T. A.; Silva, C.; Solmaz, M.; Solovov, V. N.; Sorensen, P.; Sumner, T. J.; Szydagis, M.; Taylor, D. J.; Taylor, W. C.; Tennyson, B. P.; Terman, P. A.; Tiedt, D. R.; To, W. H.; Tripathi, M.; Tvrznikova, L.; Uvarov, S.; Velan, V.; Verbus, J. R.; Webb, R. C.; White, J. T.; Whitis, T. J.; Witherell, M. S.; Wolfs, F. L. H.; Xu, J.; Yazdani, K.; Young, S. K.; Zhang, C.; LUX Collaboration
2018-05-01
The LUX experiment has performed searches for dark-matter particles scattering elastically on xenon nuclei, leading to stringent upper limits on the nuclear scattering cross sections for dark matter. Here, for results derived from 1.4 × 10⁴ kg days of target exposure in 2013, details of the calibration, event-reconstruction, modeling, and statistical tests that underlie the results are presented. Detector performance is characterized, including measured efficiencies, stability of response, position resolution, and discrimination between electron- and nuclear-recoil populations. Models are developed for the drift field, optical properties, background populations, the electron- and nuclear-recoil responses, and the absolute rate of low-energy background events. Innovations in the analysis include in situ measurement of the photomultipliers' response to xenon scintillation photons, verification of fiducial mass with a low-energy internal calibration source, and new empirical models for low-energy signal yield based on large-sample, in situ calibrations.
On the Photometric Calibration of FORS2 and the Sloan Digital Sky Survey
NASA Astrophysics Data System (ADS)
Bramich, D.; Moehler, S.; Coccato, L.; Freudling, W.; Garcia-Dabó, C. E.; Müller, P.; Saviane, I.
2012-09-01
An accurate absolute calibration of photometric data to place them on a standard magnitude scale is very important for many science goals. Absolute calibration requires the observation of photometric standard stars and analysis of the observations with an appropriate photometric model including all relevant effects. In the FORS Absolute Photometry (FAP) project, we have developed a standard star observing strategy and modelling procedure that enables calibration of science target photometry to better than 3% accuracy on photometrically stable nights given sufficient signal-to-noise. In the application of this photometric modelling to large photometric databases, we have investigated the Sloan Digital Sky Survey (SDSS) and found systematic trends in the published photometric data. The amplitudes of these trends are similar to the reported typical precision (~1% and ~2%) of the SDSS photometry in the griz- and u-bands, respectively.
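A photometric model of the kind used for absolute calibration typically solves, from standard-star observations, for a zero point, an atmospheric extinction coefficient, and a colour term. A toy least-squares version with invented numbers (not FAP or SDSS values):

```python
import numpy as np

# Hypothetical standard-star solution: instrumental-minus-catalog magnitude
# modeled as zero point - extinction * airmass + colour term * (g - r).
rng = np.random.default_rng(3)
airmass = rng.uniform(1.0, 2.2, 25)
colour = rng.uniform(-0.2, 1.5, 25)
zp_true, k_true, c_true = 27.5, 0.12, -0.04   # illustrative values only
dmag = zp_true - k_true * airmass + c_true * colour + rng.normal(0, 0.005, 25)

# Linear least-squares fit for (zero point, extinction, colour term)
A = np.column_stack([np.ones(25), -airmass, colour])
(zp, k, c), *_ = np.linalg.lstsq(A, dmag, rcond=None)
```

Real photometric models add further effects (nonphotometric nights, per-detector terms), but the least-squares core is the same.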
NASA Astrophysics Data System (ADS)
Javernick, L.; Bertoldi, W.; Redolfi, M.
2017-12-01
Accessing or acquiring high-quality, low-cost topographic data has never been easier, owing to recent developments in the photogrammetric technique of Structure-from-Motion (SfM). Researchers can acquire the necessary SfM imagery with various platforms, capturing millimetre resolution and accuracy, or large-scale areas with the help of unmanned platforms. Such datasets, in combination with numerical modelling, have opened up new opportunities to study the physical and ecological relationships of river environments. While a numerical model's overall predictive accuracy is most influenced by topography, proper model calibration requires hydraulic and morphological data; however, rich hydraulic and morphological datasets remain scarce. This lack of field and laboratory data has limited model advancement through the inability to properly calibrate, assess the sensitivity of, and validate model performance. However, new time-lapse imagery techniques have shown success in identifying instantaneous sediment transport in flume experiments and in improving hydraulic model calibration. With new capabilities to capture high-resolution spatial and temporal datasets of flume experiments, there is a need to further assess model performance. To address this demand, this research used braided-river flume experiments and captured time-lapse observations of sediment transport and repeat SfM elevation surveys to provide unprecedented spatial and temporal datasets. Through newly created metrics that quantified observed and modelled activation, deactivation, and bank erosion rates, the numerical model Delft3D was calibrated. This increased temporal data, both high-resolution time series and long-term temporal coverage, provided significantly improved calibration routines that refined calibration parameterization. Model results show that there is a trade-off between achieving quantitative statistical and qualitative morphological representations.
Specifically, simulations calibrated for statistical agreement struggled to represent braided planforms (evolving toward meandering), while parameterizations that preserved braiding produced exaggerated activation and bank erosion rates. Marie Sklodowska-Curie Individual Fellowship: River-HMV, 656917
DOE Office of Scientific and Technical Information (OSTI.GOV)
Atamturktur, Sez; Unal, Cetin; Hemez, Francois
The project proposed to provide a Predictive Maturity Framework with its companion metrics that (1) introduce a formalized, quantitative means to communicate information between interested parties, (2) provide scientifically dependable means to claim completion of Validation and Uncertainty Quantification (VU) activities, and (3) guide the decision makers in the allocation of Nuclear Energy’s resources for code development and physical experiments. The project team proposed to develop this framework based on two complementary criteria: (1) the extent of experimental evidence available for the calibration of simulation models and (2) the sophistication of the physics incorporated in simulation models. The proposed framework is capable of quantifying the interaction between the required number of physical experiments and the degree of physics sophistication. The project team has developed this framework and implemented it with a multi-scale model for simulating creep of a core reactor cladding. The multi-scale model is composed of the viscoplastic self-consistent (VPSC) code at the meso-scale, which represents the visco-plastic behavior and changing properties of a highly anisotropic material, and a Finite Element (FE) code at the macro-scale to represent the elastic behavior and apply the loading. The framework developed takes advantage of the transparency provided by partitioned analysis, where independent constituent codes are coupled in an iterative manner. This transparency allows model developers to better understand and remedy the source of biases and uncertainties, whether they stem from the constituents or the coupling interface, by exploiting separate-effect experiments conducted within the constituent domain and integral-effect experiments conducted within the full-system domain. The project team has implemented this procedure with the multi-scale VPSC-FE model and demonstrated its ability to improve the predictive capability of the model. 
Within this framework, the project team has focused on optimizing resource allocation for improving numerical models through further code development and experimentation. Related to further code development, we have developed a code prioritization index (CPI) for coupled numerical models. CPI is implemented to effectively improve the predictive capability of the coupled model by increasing the sophistication of constituent codes. In relation to designing new experiments, we investigated the information gained by the addition of each new experiment used for calibration and bias correction of a simulation model. Additionally, the variability of ‘information gain’ through the design domain has been investigated in order to identify the experiment settings where maximum information gain occurs and thus guide the experimenters in the selection of the experiment settings. This idea was extended to show how the information gain from each experiment can be improved by intelligently selecting the experiments, leading to the development of the Batch Sequential Design (BSD) technique. Additionally, we evaluated the importance of sufficiently exploring the domain of applicability in experiment-based validation of high-consequence modeling and simulation by developing a new metric to quantify coverage. This metric has also been incorporated into the design of new experiments. Finally, we have proposed a data-aware calibration approach for the calibration of numerical models. This new method considers the complexity of a numerical model (the number of parameters to be calibrated, parameter uncertainty, and form of the model) and seeks to identify the number of experiments necessary to calibrate the model based on the level of sophistication of the physics. The final component in the project team’s work to improve model calibration and validation methods is the incorporation of robustness to non-probabilistic uncertainty in the input parameters. 
This is an improvement to model validation and uncertainty quantification stemming beyond the originally proposed scope of the project. We have introduced a new metric for incorporating the concept of robustness into experiment-based validation of numerical models. This project supported the graduation of two Ph.D. students (Kendra Van Buren and Josh Hegenderfer) and two M.S. students (Matthew Egeberg and Parker Shields). One of the doctoral students is now working in the nuclear engineering field and the other is a post-doctoral fellow at the Los Alamos National Laboratory. Additionally, two more Ph.D. students (Garrison Stevens and Tunc Kulaksiz) who are working towards graduation have been supported by this project.
Li, Rui; Ye, Hongfei; Zhang, Weisheng; Ma, Guojun; Su, Yewang
2015-10-29
Spring constant calibration of the atomic force microscope (AFM) cantilever is of fundamental importance for quantifying the force between the AFM cantilever tip and the sample. Calibration within the framework of thin plate theory undoubtedly has higher accuracy and broader scope than that within the well-established beam theory. However, accurate analytic determination of the constant based on thin plate theory has been perceived as an extremely difficult issue. In this paper, we implement thin plate theory-based analytic modeling of the static behavior of rectangular AFM cantilevers, which reveals that the three-dimensional effect and the Poisson effect play important roles in accurate determination of the spring constants. A quantitative scaling law is found: the normalized spring constant depends only on the Poisson's ratio, the normalized dimension, and the normalized load coordinate. Both the literature and our refined finite element model validate the present results. The developed model is expected to serve as the benchmark for accurate calibration of rectangular AFM cantilevers.
Stochastic calibration and learning in nonstationary hydroeconomic models
NASA Astrophysics Data System (ADS)
Maneta, M. P.; Howitt, R.
2014-05-01
Concern about water scarcity and adverse climate events over agricultural regions has motivated a number of efforts to develop operational integrated hydroeconomic models to guide adaptation and optimal use of water. Once calibrated, these models are used for water management and analysis assuming they remain valid under future conditions. In this paper, we present and demonstrate a methodology that permits the recursive calibration of economic models of agricultural production from noisy but frequently available data. We use a standard economic calibration approach, namely positive mathematical programming (PMP), integrated in a data assimilation algorithm based on the ensemble Kalman filter (EnKF) equations to identify the economic model parameters. A moving average kernel ensures that new and past information on agricultural activity are blended during the calibration process, avoiding loss of information and overcalibration for the conditions of a single year. A regularization constraint akin to standard Tikhonov regularization is included in the filter to ensure its stability even in the presence of parameters with low sensitivity to observations. The results show that the implementation of the PMP methodology within a data assimilation framework based on the EnKF equations is an effective method to calibrate models of agricultural production even with noisy information. The recursive nature of the method incorporates new information as an added value to the known previous observations of agricultural activity without the need to store historical information. The robustness of the method opens the door to the use of new remote sensing algorithms for operational water management.
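The ensemble Kalman filter update at the heart of such a recursive calibration nudges an ensemble of parameter estimates toward each new noisy observation using sample covariances. A minimal sketch with a single invented "economic parameter" observed directly (the real application updates PMP model parameters, which is not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(4)

def enkf_update(ensemble, obs, obs_err_std, H):
    """One EnKF analysis step: move each ensemble member toward the
    perturbed observations via the sample cross-covariance."""
    n_ens = ensemble.shape[1]
    X = ensemble - ensemble.mean(axis=1, keepdims=True)     # anomalies
    HX = H @ ensemble
    HXp = HX - HX.mean(axis=1, keepdims=True)
    R = obs_err_std ** 2 * np.eye(len(obs))
    # Kalman gain: Pxy @ inv(Pyy); the (n_ens - 1) factors cancel into R
    K = (X @ HXp.T) @ np.linalg.inv(HXp @ HXp.T + (n_ens - 1) * R)
    perturbed_obs = obs[:, None] + rng.normal(0, obs_err_std, (len(obs), n_ens))
    return ensemble + K @ (perturbed_obs - HX)

# Toy recursion: a scalar parameter observed once per "year" with noise
truth = np.array([2.0])
H = np.eye(1)
ens = rng.normal(0.0, 1.0, (1, 50))      # prior parameter ensemble
for _ in range(10):                       # ten noisy yearly observations
    y = truth + rng.normal(0, 0.1, 1)
    ens = enkf_update(ens, y, 0.1, H)
est = float(ens.mean())
```

After ten assimilation cycles the ensemble mean converges near the true value, illustrating how the recursive scheme blends new observations without storing the historical record.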
Prognostic models for complete recovery in ischemic stroke: a systematic review and meta-analysis.
Jampathong, Nampet; Laopaiboon, Malinee; Rattanakanokchai, Siwanon; Pattanittum, Porjai
2018-03-09
Prognostic models have been increasingly developed to predict complete recovery in ischemic stroke. However, questions arise about the performance characteristics of these models. The aim of this study was to systematically review and synthesize performance of existing prognostic models for complete recovery in ischemic stroke. We searched journal publications indexed in PUBMED, SCOPUS, CENTRAL, ISI Web of Science and OVID MEDLINE from inception until 4 December, 2017, for studies designed to develop and/or validate prognostic models for predicting complete recovery in ischemic stroke patients. Two reviewers independently examined titles and abstracts, and assessed whether each study met the pre-defined inclusion criteria and also independently extracted information about model development and performance. We evaluated validation of the models by medians of the area under the receiver operating characteristic curve (AUC) or c-statistic and calibration performance. We used a random-effects meta-analysis to pool AUC values. We included 10 studies with 23 models developed from elderly patients with a moderately severe ischemic stroke, mainly in three high income countries. Sample sizes for each study ranged from 75 to 4441. Logistic regression was the only analytical strategy used to develop the models. The number of various predictors varied from one to 11. Internal validation was performed in 12 models with a median AUC of 0.80 (95% CI 0.73 to 0.84). One model reported good calibration. Nine models reported external validation with a median AUC of 0.80 (95% CI 0.76 to 0.82). Four models showed good discrimination and calibration on external validation. The pooled AUC of the two validation models of the same developed model was 0.78 (95% CI 0.71 to 0.85). The performance of the 23 models found in the systematic review varied from fair to good in terms of internal and external validation. 
Further models should be developed with internal and external validation in low and middle income countries.
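Random-effects pooling of AUC values, as in the meta-analysis above, is commonly done with the DerSimonian-Laird estimator. A self-contained sketch with hypothetical AUCs and variances (not the reviewed studies' data):

```python
import math

def dersimonian_laird(effects, variances):
    """Random-effects pooling (DerSimonian-Laird): returns the pooled
    estimate and its 95% confidence interval."""
    w = [1.0 / v for v in variances]
    fixed = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
    # Cochran's Q heterogeneity statistic and between-study variance tau^2
    q = sum(wi * (e - fixed) ** 2 for wi, e in zip(w, effects))
    df = len(effects) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)
    # Re-weight with tau^2 added to each within-study variance
    w_star = [1.0 / (v + tau2) for v in variances]
    pooled = sum(wi * e for wi, e in zip(w_star, effects)) / sum(w_star)
    se = math.sqrt(1.0 / sum(w_star))
    return pooled, pooled - 1.96 * se, pooled + 1.96 * se

# Hypothetical external-validation AUCs with their variances
aucs = [0.78, 0.81, 0.76, 0.80]
variances = [0.0004, 0.0009, 0.0006, 0.0005]
pooled, lo, hi = dersimonian_laird(aucs, variances)
```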
NASA Astrophysics Data System (ADS)
Yulia, M.; Suhandy, D.
2018-03-01
NIR spectra obtained from a spectral data acquisition system contain both chemical information on the samples and physical information, such as particle size and bulk density. Several methods have been established for developing calibration models that can compensate for variations in sample physical properties. One common approach is to include the physical variation in the calibration model, either explicitly or implicitly. The objective of this study was to evaluate the feasibility of using the explicit method to compensate for the influence of different particle sizes of coffee powder on NIR calibration model performance. A total of 220 coffee powder samples with two types of coffee (civet and non-civet) and two particle sizes (212 and 500 µm) were prepared. Spectral data were acquired using a NIR spectrometer equipped with an integrating sphere for diffuse reflectance measurement. A discrimination method based on PLS-DA was conducted and the influence of particle size on the performance of PLS-DA was investigated. In the explicit method, we added the particle size directly as a predicted variable, resulting in an X block containing only the NIR spectra and a Y block containing both the particle size and the type of coffee. The explicit inclusion of the particle size in the calibration model is expected to improve the accuracy of coffee-type determination. The results show that with the explicit method the developed calibration model for coffee-type determination is slightly superior, with a coefficient of determination (R²) of 0.99 and a root mean square error of cross-validation (RMSECV) of 0.041. The performance of the PLS2 calibration model for coffee-type determination with particle size compensation was quite good, predicting the type of coffee at both particle sizes with relatively high R² prediction values. The prediction also resulted in low bias and RMSEP values.
Dudley, Robert W.
2008-01-01
The U.S. Geological Survey (USGS), in cooperation with the Maine Department of Marine Resources Bureau of Sea Run Fisheries and Habitat, began a study in 2004 to characterize the quantity, variability, and timing of streamflow in the Dennys River. The study included a synoptic summary of historical streamflow data at a long-term streamflow gage, collecting data from an additional four short-term streamflow gages, and the development and evaluation of a distributed-parameter watershed model for the Dennys River Basin. The watershed model used in this investigation was the USGS Precipitation-Runoff Modeling System (PRMS). The Geographic Information System (GIS) Weasel was used to delineate the Dennys River Basin and subbasins and derive parameters for their physical geographic features. Calibration of the models used in this investigation involved a four-step procedure in which model output was evaluated against four calibration data sets using computed objective functions for solar radiation, potential evapotranspiration, annual and seasonal water budgets, and daily streamflows. The calibration procedure involved thousands of model runs and was carried out using the USGS software application Luca (Let us calibrate). Luca uses the Shuffled Complex Evolution (SCE) global search algorithm to calibrate the model parameters. The SCE method reliably produces satisfactory solutions for large, complex optimization problems. The primary calibration effort went into the Dennys main stem watershed model. Calibrated parameter values obtained for the Dennys main stem model were transferred to the Cathance Stream model, and a similar four-step SCE calibration procedure was performed; this effort was undertaken to determine the potential to transfer modeling information to a nearby basin in the same region. 
The calibrated Dennys main stem watershed model performed with Nash-Sutcliffe efficiency (NSE) statistic values of 0.79 and 0.76 for the calibration period and evaluation period, respectively. The Cathance Stream model had an NSE value of 0.68. The Dennys River Basin models make use of limited streamflow-gaging station data and provide information to characterize subbasin hydrology. The calibrated PRMS watershed models of the Dennys River Basin provide simulated daily streamflow time series from October 1, 1985, through September 30, 2006, for nearly any location within the basin. These models enable natural-resources managers to characterize the timing and quantity of water moving through the basin to support many endeavors including geochemical calculations, water-use assessment, Atlantic salmon population dynamics and migration modeling, habitat modeling and assessment, and other resource-management scenario evaluations. Characterizing streamflow contributions from subbasins in the basin and the relative amounts of surface- and ground-water contributions to streamflow throughout the basin will lead to a better understanding of water quantity and quality in the basin. Improved water-resources information will support Atlantic salmon protection efforts.
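The Nash-Sutcliffe efficiency quoted above compares model error against the variance of the observations about their own mean: 1 is a perfect fit, 0 means the model is no better than predicting the observed mean. A short sketch with invented daily streamflows (not Dennys River data):

```python
def nash_sutcliffe(observed, simulated):
    """NSE = 1 - sum of squared errors / sum of squared deviations
    of the observations from their mean."""
    mean_obs = sum(observed) / len(observed)
    sse = sum((o - s) ** 2 for o, s in zip(observed, simulated))
    ss_tot = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - sse / ss_tot

# Hypothetical daily streamflows (m^3/s)
obs = [5.0, 7.0, 12.0, 9.0, 6.0, 4.0]
sim = [5.5, 6.5, 11.0, 9.5, 6.2, 4.4]
nse = nash_sutcliffe(obs, sim)
```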
The Status of the Tropical Rainfall Measuring Mission (TRMM) after 2 Years in Orbit
NASA Technical Reports Server (NTRS)
Kummerow, C.; Simpson, J.; Thiele, O.; Barnes, W.; Chang, A. T. C.; Stocker, E.; Adler, R. F.; Hou, A.; Kakar, R.; Wentz, F.
1999-01-01
The Tropical Rainfall Measuring Mission (TRMM) satellite was launched on November 27, 1997, and data from all the instruments first became available approximately 30 days after launch. Since then, much progress has been made in the calibration of the sensors, the improvement of the rainfall algorithms, in related modeling applications, and in new datasets tailored specifically for these applications. This paper reports the latest results regarding the calibration of the TRMM Microwave Imager (TMI), Precipitation Radar (PR), and Visible and Infrared Sensor (VIRS). For the TMI, a new product is in place that corrects for a still unknown source of radiation leaking into the TMI receiver. The PR calibration has been adjusted upward slightly (by 0.6 dBZ) to better match ground reference targets, while the VIRS calibration remains largely unchanged. In addition to the instrument calibration, great strides have been made with the rainfall algorithms as well, with the new rainfall products agreeing with each other to within less than 20% over monthly zonally averaged statistics. The TRMM Science Data and Information System (TSDIS) has responded equally well by making a number of new products, including real-time and fine-resolution gridded rainfall fields, available to the modeling community. The TRMM Ground Validation (GV) program is also responding with improved radar calibration techniques and rainfall algorithms to provide more accurate GV products, which will be further enhanced with the new multiparameter 10 cm radar being developed for TRMM validation and precipitation studies. Progress in these various areas has, in turn, led to exciting new developments in the modeling area, where data assimilation and weather forecast models are showing dramatic improvements after the assimilation of observed rainfall fields.
Brito, Rita S; Pinheiro, Helena M; Ferreira, Filipa; Matos, José S; Pinheiro, Alexandre; Lourenço, Nídia D
2016-03-01
Online monitoring programs based on spectroscopy have a high application potential for the detection of hazardous wastewater discharges in sewer systems. Wastewater hydraulics poses a challenge for in situ spectroscopy, especially when the system includes storm water connections leading to rapid changes in water depth, velocity, and the water quality matrix. Thus, there is a need to optimize and fix the location of in situ instruments, limiting their availability for calibration. In this context, the development of calibration models on bench spectrophotometers to estimate wastewater quality parameters from spectra acquired with in situ instruments could be very useful. However, spectra contain information not only from the samples but also from the spectrophotometer, generally invalidating this approach. The use of calibration transfer methods is a promising solution to this problem. In this study, calibration models were developed using interval partial least squares (iPLS) for the estimation of total suspended solids (TSS) and chemical oxygen demand (COD) in sewage from ultraviolet-visible spectra acquired on a bench scanning spectrophotometer. The feasibility of calibration transfer to a submersible diode-array instrument, to be subsequently operated in situ, was assessed using three procedures: slope and bias correction (SBC); single wavelength standardization (SWS) on mean spectra; and local centering (LC). The results showed that SBC was the most adequate for the available data, adding insignificant error to the base model estimates. Single wavelength standardization was a close second best, potentially more robust, and independent of the base iPLS model. Local centering was shown to be inadequate for the samples and instruments used. © The Author(s) 2016.
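Slope and bias correction, the transfer method the study found most adequate, simply fits a line between the biased predictions on the target instrument and the reference values, then inverts it. A sketch with invented transfer-sample numbers (not the study's TSS/COD data):

```python
import numpy as np

# Hypothetical scenario: bench-model predictions applied to the in situ
# instrument's spectra show a systematic slope/offset versus the reference
# values; SBC fits that line on transfer samples and corrects it.
rng = np.random.default_rng(6)
reference = rng.uniform(100, 600, 15)                      # e.g. COD, mg/L
slave_pred = 0.9 * reference + 20 + rng.normal(0, 3, 15)   # biased predictions

slope, bias = np.polyfit(slave_pred, reference, 1)

def sbc_correct(pred):
    """Apply the fitted slope/bias correction to target-instrument predictions."""
    return slope * pred + bias

corrected = sbc_correct(slave_pred)
rmse_before = np.sqrt(np.mean((slave_pred - reference) ** 2))
rmse_after = np.sqrt(np.mean((corrected - reference) ** 2))
```

Because SBC only touches the predictions, it leaves the base iPLS model untouched, which is why it is attractive when the in situ instrument cannot be brought back for recalibration.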
Westenbroek, Stephen M.; Doherty, John; Walker, John F.; Kelson, Victor A.; Hunt, Randall J.; Cera, Timothy B.
2012-01-01
The TSPROC (Time Series PROCessor) computer software uses a simple scripting language to process and analyze time series. It was developed primarily to assist in the calibration of environmental models. The software is designed to perform calculations on time-series data commonly associated with surface-water models, including calculation of flow volumes, transformation by means of basic arithmetic operations, and generation of seasonal and annual statistics and hydrologic indices. TSPROC can also be used to generate some of the key input files required to perform parameter optimization by means of the PEST (Parameter ESTimation) computer software. Through the use of TSPROC, the objective function for use in the model-calibration process can be focused on specific components of a hydrograph.
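The kinds of reductions TSPROC performs (flow volumes, seasonal statistics) are simple time-series aggregations; a minimal stdlib sketch with an invented daily flow record, not TSPROC syntax or output:

```python
import datetime

# Hypothetical daily flow record: (date, discharge in m^3/s)
record = [
    (datetime.date(2000, 1, d), q)
    for d, q in [(1, 4.0), (2, 5.5), (3, 6.0)]
] + [
    (datetime.date(2000, 7, d), q)
    for d, q in [(1, 1.0), (2, 1.2), (3, 0.9)]
]

def flow_volume(series):
    """Total volume in m^3, treating each value as a one-day mean flow."""
    return sum(q for _, q in series) * 86_400

def seasonal_mean(series, months):
    """Mean discharge over the records falling in the given months."""
    vals = [q for d, q in series if d.month in months]
    return sum(vals) / len(vals)

winter_mean = seasonal_mean(record, {12, 1, 2})
summer_mean = seasonal_mean(record, {6, 7, 8})
volume = flow_volume(record)
```

In a PEST workflow, statistics like these become the observation groups of the objective function, focusing calibration on chosen hydrograph components.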
Attia, Khalid A M; Nassar, Mohammed W I; El-Zeiny, Mohamed B; Serag, Ahmed
2017-01-05
For the first time, a new variable selection method based on swarm intelligence, namely the firefly algorithm, is coupled with three different multivariate calibration models, namely concentration residual augmented classical least squares, artificial neural networks, and support vector regression, using UV spectral data. A comparative study between the firefly algorithm and the well-known genetic algorithm was carried out. The discussion revealed the superiority of this new, powerful algorithm over the well-known genetic algorithm. Moreover, different statistical tests were performed and no significant differences were found between the models regarding their predictive ability. This ensures that simpler and faster models were obtained without any deterioration in the quality of the calibration. Copyright © 2016 Elsevier B.V. All rights reserved.
Steps towards Improving GNSS Systematic Errors and Biases
NASA Astrophysics Data System (ADS)
Herring, T.; Moore, M.
2017-12-01
Four general areas of analysis method improvements, three related to data analysis models and the fourth to calibration methods, have been recommended at the recent unified analysis workshop (UAW), and we discuss aspects of these areas for improvement. The gravity fields used in the GNSS orbit integrations should be updated to match modern fields, making them consistent with the fields used by the other IAG services. The update would include the static part of the field and a time-variable component. The force models associated with radiation forces are the most uncertain, and modeling of these forces can be made more consistent with the exchange of attitude information. The International GNSS Service (IGS) will develop an attitude format and make attitude information available so that analysis centers can validate their models. The IGS has noted the appearance of the GPS draconitic period and harmonics of this period in time series of various geodetic products (e.g., positions and Earth orientation parameters). An updated short-period (diurnal and semidiurnal) model is needed, and a method to determine the best model must be developed. The final area, not directly related to analysis models, is the recommendation that site-dependent calibration of GNSS antennas is needed, since these calibrations have a direct effect on the ITRF realization and on position offsets when antennas are changed. Evaluation of the effects of the use of antenna-specific phase center models will be investigated for those sites where these values are available without disturbing an existing antenna installation. Potential development of an in-situ antenna calibration system is strongly encouraged. In-situ calibration would be deployed at core sites where GNSS sites are tied to other geodetic systems. 
With the recent expansion of the number of GPS satellites transmitting unencrypted codes on the GPS L2 frequency and the availability of software GNSS receivers, in-situ calibration between an existing installation and a movable directional antenna is now more likely to generate accurate results than earlier analog switching systems. With all of these improvements, there is the expectation of better agreement among the space geodetic methods, thus allowing more definitive assessment and modeling of the Earth's time-variable shape and gravity field.
Real-time monitoring of high-gravity corn mash fermentation using in situ Raman spectroscopy.
Gray, Steven R; Peretti, Steven W; Lamb, H Henry
2013-06-01
In situ Raman spectroscopy was employed for real-time monitoring of simultaneous saccharification and fermentation (SSF) of corn mash by an industrial strain of Saccharomyces cerevisiae. An accurate univariate calibration model for ethanol was developed based on the very strong 883 cm(-1) C-C stretching band. Multivariate partial least squares (PLS) calibration models for total starch, dextrins, maltotriose, maltose, glucose, and ethanol were developed using data from eight batch fermentations and validated using predictions for a separate batch. The starch, ethanol, and dextrins models showed significant prediction improvement when the calibration data were divided into separate high- and low-concentration sets. Collinearity between the ethanol and starch models was avoided by excluding regions containing strong ethanol peaks from the starch model and, conversely, excluding regions containing strong saccharide peaks from the ethanol model. The two-set calibration models for starch (R(2) = 0.998, percent error = 2.5%) and ethanol (R(2) = 0.999, percent error = 2.1%) provide more accurate predictions than any previously published spectroscopic models. Glucose, maltose, and maltotriose are modeled to an accuracy comparable to previous work on less complex fermentation processes. Our results demonstrate that Raman spectroscopy is capable of real-time, in situ monitoring of a complex industrial biomass fermentation. To our knowledge, this is the first PLS-based chemometric modeling of corn mash fermentation under typical industrial conditions, and the first Raman-based monitoring of a fermentation process with glucose, oligosaccharides, and polysaccharides present. Copyright © 2013 Wiley Periodicals, Inc.
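The univariate ethanol model described above amounts to a linear least-squares fit between a single band intensity and known concentrations. A minimal sketch, with hypothetical intensity/concentration values (the paper's actual calibration data are not reproduced here):

```python
import numpy as np

# Hypothetical calibration data: baseline-corrected intensity of the
# 883 cm^-1 C-C stretching band vs. known ethanol concentration (g/L).
intensity = np.array([0.10, 0.35, 0.61, 0.88, 1.12, 1.40])
ethanol = np.array([10.0, 35.0, 60.0, 88.0, 112.0, 140.0])

# Ordinary least-squares fit of the univariate linear calibration model.
slope, intercept = np.polyfit(intensity, ethanol, 1)

def predict_ethanol(band_intensity):
    """Predict ethanol concentration from the 883 cm^-1 band intensity."""
    return slope * band_intensity + intercept

# Coefficient of determination (R^2) over the calibration set.
pred = predict_ethanol(intensity)
ss_res = np.sum((ethanol - pred) ** 2)
ss_tot = np.sum((ethanol - ethanol.mean()) ** 2)
r_squared = 1.0 - ss_res / ss_tot
```

A real implementation would first baseline-correct the spectra and integrate or peak-fit the band before this fit; only the regression step is shown.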
A dynamic ventilation model for gravity sewer networks.
Wang, Y C; Nobi, N; Nguyen, T; Vorreiter, L
2012-01-01
To implement any effective odour and corrosion control technology in the sewer network, it is imperative that the airflow through gravity sewer airspaces be quantified. This paper presents a full dynamic airflow model for gravity sewer systems. The model, which is developed using the finite element method, is a compressible air transport model. The model has been applied to the North Head Sewerage Ocean Outfall System (NSOOS) and calibrated using the air pressure and airflow data collected during October 2008. Although the calibration is focused on forced ventilation, the model can be applied to natural ventilation as well.
Sasakura, D; Nakayama, K; Sakamoto, T; Chikuma, T
2015-05-01
The use of transmission near infrared spectroscopy (TNIRS) is of particular interest in the pharmaceutical industry because TNIRS does not require sample preparation and can analyze several tens of tablet samples in an hour. It has the capability to measure all relevant information from a tablet while still on the production line. However, TNIRS has a narrow spectral range, and overtone vibrations often overlap. To perform content uniformity testing in tablets by TNIRS, various properties in the tableting process need to be analyzed by a multivariate prediction model, such as partial least squares regression. One issue is that typical approaches require several hundred reference samples as the basis of the method, rather than a strategically designed calibration set. This means that many batches are needed to prepare the reference samples, which requires time and is not cost effective. Our group investigated the concentration dependence of the calibration model with a strategic design. Consequently, we developed a more effective approach to the TNIRS calibration model than the existing methodology.
Water-Balance Model to Simulate Historical Lake Levels for Lake Merced, California
NASA Astrophysics Data System (ADS)
Maley, M. P.; Onsoy, S.; Debroux, J.; Eagon, B.
2009-12-01
Lake Merced is a freshwater lake located in southwestern San Francisco, California. In the late 1980s and early 1990s, an extended, severe drought impacted the area and caused significant declines in Lake Merced lake levels, raising concerns about the long-term health of the lake. In response to these concerns, the Lake Merced Water Level Restoration Project was developed to evaluate an engineered solution to increase and maintain Lake Merced lake levels. The Lake Merced Lake-Level Model was developed to support the conceptual engineering design to restore lake levels. It is a spreadsheet-based water-balance model that performs monthly water-balance calculations based on the hydrological conceptual model. The model independently calculates each water-balance component based on available climate and hydrological data. The model objective was to develop a practical, rule-based approach for the water balance and to calibrate the model results to measured lake levels. The advantage of a rule-based approach is that once the rules are defined, they enhance the ability to adapt the model for future-case simulations. The model was calibrated to historical lake levels over a 70-year period from 1939 to 2009. Calibrating the model over this long historical range tested the model over a variety of hydrological conditions, including wet, normal, and dry precipitation years, flood events, and periods of high and low lake levels. The historical lake-level range was over 16 feet. Calibration of simulated to historical lake levels yielded a residual mean of 0.02 feet and an absolute residual mean of 0.42 feet. More importantly, the model demonstrated the ability to simulate both long-term and short-term trends, with strong correlation in the magnitude of both annual and seasonal lake-level fluctuations.
The calibration results demonstrate an improved conceptual understanding of the key hydrological factors that control lake levels, reduce uncertainty in the hydrological conceptual model, and increase confidence in the model’s ability to forecast future lake conditions. The Lake Merced Lake-Level Model will help decision-makers with a straightforward, practical analysis of the major contributions to lake-level declines that can be used to support engineering, environmental and other decisions.
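The monthly water-balance bookkeeping described above can be sketched in a few lines. All component values below are hypothetical, and the real model computes each component from climate and hydrological data rather than taking it as a given array:

```python
import numpy as np

# Minimal monthly water-balance sketch (illustrative values, not Lake Merced data).
# The lake level responds to the net monthly balance divided by lake surface area.

def simulate_levels(initial_level_ft, inflow_af, precip_af, evap_af,
                    outflow_af, area_acres):
    """Return monthly lake levels (ft) from acre-foot balance components."""
    levels = [initial_level_ft]
    for i in range(len(inflow_af)):
        net_af = inflow_af[i] + precip_af[i] - evap_af[i] - outflow_af[i]
        levels.append(levels[-1] + net_af / area_acres)  # acre-ft / acres -> feet
    return np.array(levels)

def calibration_stats(simulated, measured):
    """Residual mean and absolute residual mean, the statistics used above."""
    residuals = np.asarray(simulated) - np.asarray(measured)
    return residuals.mean(), np.abs(residuals).mean()
```

A residual mean near zero with a small absolute residual mean, as reported for the Lake Merced model, indicates the simulation tracks measured levels without systematic bias.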
The Evaluation of HOMER as a Marine Corps Expeditionary Energy Predeployment Tool
2010-09-01
experiment was used to ensure the HOMER models were accurate. Following the calibration, the concept of expeditionary energy density as it pertains to power ... The following process was used to analyze HOMER's modeling capability: • Conduct photovoltaic (PV) experiment, • Develop a calibration process to match the HOMER
Development, sensitivity and uncertainty analysis of LASH model
USDA-ARS?s Scientific Manuscript database
Many hydrologic models have been developed to help manage natural resources all over the world. Nevertheless, most models are highly complex in terms of database requirements and have many calibration parameters. This has brought serious difficulties for applying them in watersheds ...
Cumulative impact of developments on the surrounding roadways' traffic.
DOT National Transportation Integrated Search
2011-10-01
"In order to recommend a procedure for cumulative impact study, four different travel : demand models were developed, calibrated, and validated. The base year for the models was 2005. : Two study areas were used, and the models were run for three per...
An improved error assessment for the GEM-T1 gravitational model
NASA Technical Reports Server (NTRS)
Lerch, F. J.; Marsh, J. G.; Klosko, S. M.; Pavlis, E. C.; Patel, G. B.; Chinn, D. S.; Wagner, C. A.
1988-01-01
Several tests were designed to determine the correct error variances for the Goddard Earth Model (GEM)-T1 gravitational solution which was derived exclusively from satellite tracking data. The basic method employs both wholly independent and dependent subset data solutions and produces a full field coefficient estimate of the model uncertainties. The GEM-T1 errors were further analyzed using a method based upon eigenvalue-eigenvector analysis which calibrates the entire covariance matrix. Dependent satellite and independent altimetric and surface gravity data sets, as well as independent satellite deep resonance information, confirm essentially the same error assessment. These calibrations (utilizing each of the major data subsets within the solution) yield very stable calibration factors which vary by approximately 10 percent over the range of tests employed. Measurements of gravity anomalies obtained from altimetry were also used directly as observations to show that GEM-T1 is calibrated. The mathematical representation of the covariance error in the presence of unmodeled systematic error effects in the data is analyzed and an optimum weighting technique is developed for these conditions. This technique yields an internal self-calibration of the error model, a process which GEM-T1 is shown to approximate.
Analysis of pork adulteration in beef meatball using Fourier transform infrared (FTIR) spectroscopy.
Rohman, A; Sismindari; Erwanto, Y; Che Man, Yaakob B
2011-05-01
Meatball is one of the favorite foods in Indonesia, and adulteration of beef meatballs with pork occurs frequently. This study aimed to develop a fast and nondestructive technique for the detection and quantification of pork in beef meatballs using Fourier transform infrared (FTIR) spectroscopy and partial least squares (PLS) calibration. The spectral bands associated with pork fat (PF), beef fat (BF), and their mixtures in meatball formulation were scanned, interpreted, and identified by relating them to those spectroscopically representative of pure PF and BF. For quantitative analysis, PLS regression was used to develop a calibration model at the selected fingerprint region of 1200-1000 cm(-1). The equation obtained for the relationship between actual PF values and FTIR-predicted values in the PLS calibration model was y = 0.999x + 0.004, with a coefficient of determination (R(2)) of 0.999 and a root mean square error of calibration of 0.442. The PLS calibration model was subsequently used for the prediction of independent samples using laboratory-made meatball samples containing mixtures of BF and PF. Using 4 principal components, the root mean square error of prediction was 0.742. The results showed that FTIR spectroscopy can be used for the detection and quantification of pork in beef meatball formulations for Halal verification purposes. Copyright © 2010 The American Meat Science Association. Published by Elsevier Ltd. All rights reserved.
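The calibration statistics quoted above (regression line, R(2), and root mean square error) can be reproduced from paired actual and predicted values. The numbers below are illustrative, not the paper's data:

```python
import numpy as np

# Hypothetical actual vs. FTIR/PLS-predicted pork fat (PF) percentages.
actual = np.array([0.0, 10.0, 25.0, 50.0, 75.0, 90.0, 100.0])
predicted = np.array([0.3, 9.8, 25.4, 49.5, 75.6, 89.7, 100.2])

# Regression line y = a*x + b between actual and predicted values.
a, b = np.polyfit(actual, predicted, 1)

# Coefficient of determination of that line, and the root mean square
# error of calibration (RMSEC) of the predictions themselves.
fit = a * actual + b
ss_res = np.sum((predicted - fit) ** 2)
ss_tot = np.sum((predicted - predicted.mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot
rmsec = np.sqrt(np.mean((predicted - actual) ** 2))
```

A slope near 1 and intercept near 0, as in the reported y = 0.999x + 0.004, indicate an unbiased calibration.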
A frequentist approach to computer model calibration
Wong, Raymond K. W.; Storlie, Curtis Byron; Lee, Thomas C. M.
2016-05-05
The paper considers the computer model calibration problem and provides a general frequentist solution. Under the proposed framework, the data model is semiparametric, with a nonparametric discrepancy function that accounts for any discrepancy between physical reality and the computer model. In an attempt to solve a fundamentally important (but often ignored) identifiability issue between the computer model parameters and the discrepancy function, the paper proposes a new and identifiable parameterization of the calibration problem. It also develops a two-step procedure for estimating all the relevant quantities under the new parameterization. This estimation procedure is shown to enjoy excellent rates of convergence and can be straightforwardly implemented with existing software. For uncertainty quantification, bootstrapping is adopted to construct confidence regions for the quantities of interest. Finally, the practical performance of the methodology is illustrated through simulation examples and an application to a computational fluid dynamics model.
Parameter regionalization of a monthly water balance model for the conterminous United States
NASA Astrophysics Data System (ADS)
Bock, A. R.; Hay, L. E.; McCabe, G. J.; Markstrom, S. L.; Atkinson, R. D.
2015-09-01
A parameter regionalization scheme to transfer parameter values and model uncertainty information from gaged to ungaged areas for a monthly water balance model (MWBM) was developed and tested for the conterminous United States (CONUS). The Fourier Amplitude Sensitivity Test, a global-sensitivity algorithm, was implemented on a MWBM to generate parameter sensitivities on a set of 109 951 hydrologic response units (HRUs) across the CONUS. The HRUs were grouped into 110 calibration regions based on similar parameter sensitivities. Subsequently, measured runoff from 1575 streamgages within the calibration regions were used to calibrate the MWBM parameters to produce parameter sets for each calibration region. Measured and simulated runoff at the 1575 streamgages showed good correspondence for the majority of the CONUS, with a median computed Nash-Sutcliffe Efficiency coefficient of 0.76 over all streamgages. These methods maximize the use of available runoff information, resulting in a calibrated CONUS-wide application of the MWBM suitable for providing estimates of water availability at the HRU resolution for both gaged and ungaged areas of the CONUS.
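The Nash-Sutcliffe Efficiency coefficient used to summarize the MWBM calibration can be computed directly from paired simulated and observed runoff series; a value of 1 indicates a perfect fit, and 0 indicates the model is no better than the observed mean:

```python
import numpy as np

def nash_sutcliffe(simulated, observed):
    """Nash-Sutcliffe Efficiency: 1 - SSE(model) / SSE(observed mean)."""
    observed = np.asarray(observed, dtype=float)
    simulated = np.asarray(simulated, dtype=float)
    return 1.0 - np.sum((observed - simulated) ** 2) / \
                 np.sum((observed - observed.mean()) ** 2)
```

The study's median value of 0.76 over 1575 streamgages means that, at the typical gage, the calibrated model explains substantially more runoff variance than the long-term mean would.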
NASA Technical Reports Server (NTRS)
Martos, Borja; Kiszely, Paul; Foster, John V.
2011-01-01
As part of the NASA Aviation Safety Program (AvSP), a novel pitot-static calibration method was developed to allow rapid in-flight calibration for subscale aircraft while flying within confined test areas. This approach uses Global Positioning System (GPS) technology coupled with modern system identification methods that rapidly compute optimal pressure error models over a range of airspeeds with defined confidence bounds. This method has been demonstrated in subscale flight tests and has shown small 2-sigma error bounds with a significant reduction in test time compared to other methods. The current research was motivated by the desire to further evaluate and develop this method for full-scale aircraft. A goal of this research was to develop an accurate calibration method that enables reductions in test equipment and flight time, thus reducing costs. The approach involved analysis of data acquisition requirements, development of efficient flight patterns, and analysis of pressure error models based on system identification methods. Flight tests were conducted at The University of Tennessee Space Institute (UTSI) utilizing an instrumented Piper Navajo research aircraft. In addition, the UTSI engineering flight simulator was used to investigate test maneuver requirements and handling qualities issues associated with this technique. This paper provides a summary of piloted simulation and flight test results that illustrates the performance and capabilities of the NASA calibration method. Discussion of maneuver requirements and data analysis methods is included, as well as recommendations for piloting technique.
Modeling Photo-multiplier Gain and Regenerating Pulse Height Data for Application Development
NASA Astrophysics Data System (ADS)
Aspinall, Michael D.; Jones, Ashley R.
2018-01-01
Systems that adopt organic scintillation detector arrays often require a calibration process prior to the intended measurement campaign to correct for significant performance variances between detectors within the array. These differences exist because of the low tolerances associated with photo-multiplier tube technology and environmental influences. Differences in detector response can be corrected for by adjusting the supplied photo-multiplier tube voltage to control its gain, observing the effect this has on the pulse height spectra from a gamma-only calibration source with a defined photo-peak. Automated methods that analyze these spectra and adjust the photo-multiplier tube bias accordingly are emerging for hardware that integrates acquisition electronics and high-voltage control. However, development of such algorithms requires access to the hardware, multiple detectors, and a calibration source for prolonged periods, all with associated constraints and risks. In this work, we report on a software function and related models developed to rescale and regenerate pulse height data acquired from a single scintillation detector. Such a function could be used to generate significant and varied pulse height data for integration-testing algorithms that automatically response-match multiple detectors using pulse height spectrum analysis. Furthermore, a function of this sort removes the dependence on multiple detectors, digital analyzers, and a calibration source. Results show a good match between the real and regenerated pulse height data. The function has also been used successfully to develop auto-calibration algorithms.
Conditioning of MVM '73 radio-tracking data
NASA Technical Reports Server (NTRS)
Koch, R. E.; Chao, C. C.; Winn, F. B.; Yip, K. W.
1974-01-01
An extensive effort was undertaken to edit Mariner 10 radiometric tracking data. Interactive computer graphics were used for the first time by an interplanetary mission to detect blunder points and spurious signatures in the data. Interactive graphics improved the former process time by a factor of 10 to 20, while increasing reliability. S/X dual Doppler data was used for the first time to calibrate charged particles in the tracking medium. Application of the charged particle calibrations decreased the orbit determination error for a short data arc following the 16 March 1974 maneuver by about 80%. A new model was developed to calibrate the earth's troposphere with seasonal adjustments. The new model has a 2% accuracy and is 5 times better than the old model.
Post Audit of a Field Scale Reactive Transport Model of Uranium at a Former Mill Site
NASA Astrophysics Data System (ADS)
Curtis, G. P.
2015-12-01
Reactive transport of hexavalent uranium (U(VI)) in a shallow alluvial aquifer at a former uranium mill tailings site near Naturita, CO, has been monitored for nearly 30 years by the US Department of Energy and the US Geological Survey. Groundwater at the site has high concentrations of chloride, alkalinity, and U(VI) owing to ore processing at the site from 1941 to 1974. We previously calibrated a multicomponent reactive transport model to data collected at the site from 1986 to 2001. A two-dimensional nonreactive transport model used a uniform hydraulic conductivity estimated from observed chloride concentrations and tritium-helium age dates. A reactive transport model for the 2 km long site was developed by including an equilibrium U(VI) surface complexation model calibrated to laboratory data and calcite equilibrium. The calibrated model reproduced both the nonreactive tracers and the observed U(VI), pH, and alkalinity. Forward simulations for the period 2002-2015 conducted with the calibrated model predict significantly faster natural attenuation of U(VI) concentrations than has been observed; U(VI) concentrations have remained persistently high at the site. Alternative modeling approaches are being evaluated using recent data to determine if the persistence can be explained by multirate mass transfer models developed from experimental observations at the column scale (~0.2 m), the laboratory tank scale (~2 m), the field tracer-test scale (~1-4 m), or the geophysical observation scale (~1-5 m). Results of this comparison should provide insight into the persistence of U(VI) plumes and improved management options.
NASA Technical Reports Server (NTRS)
Jung, Hahn Chul; Jasinski, Michael; Kim, Jin-Woo; Shum, C. K.; Bates, Paul; Neal, Jeffrey; Lee, Hyongki; Alsdorf, Doug
2011-01-01
This study focuses on the feasibility of using SAR interferometry to support 2D hydrodynamic model calibration and to provide water storage change in the floodplain. Two-dimensional (2D) flood inundation modeling has been widely studied using storage cell approaches with the availability of high-resolution, remotely sensed floodplain topography. The development of coupled 1D/2D flood modeling has shown improved calculation of 2D floodplain inundation as well as channel water elevation. Most floodplain model results have been validated using remote sensing methods for inundation extent. However, few studies show quantitative validation of spatial variations in floodplain water elevations in 2D modeling, since most gauges are located along main river channels and traditional single-track satellite altimetry over the floodplain is limited. Synthetic Aperture Radar (SAR) interferometry has recently proven useful for measuring centimeter-scale water elevation changes over the floodplain. In the current study, we apply the LISFLOOD hydrodynamic model to the central Atchafalaya River Basin, Louisiana, during a 62-day period from 1 April to 1 June 2008 using two different calibration schemes for Manning's n. First, the model is calibrated in terms of water elevations from a single in situ gauge, representing the more traditional approach. Because the gauge is located in the channel, this calibration is more sensitive to channel roughness than to floodplain roughness. Second, the model is calibrated in terms of water elevation changes calculated from ALOS PALSAR interferometry over the 46-day image acquisition interval from 16 April 2008 to 1 June 2008. Since SAR interferometry receives strong scattering from the floodplain due to the double-bounce effect, as compared to the specular scattering of open water, this calibration is more sensitive to floodplain roughness.
An iterative approach is used to determine the best-fit Manning's n for the two different calibration approaches. Results suggest similar floodplain roughness but slightly different channel roughness. However, application of SAR interferometry provides a unique view of the floodplain flow gradients, not possible with a single-gauge calibration. These gradients allow improved computation of water storage change over the 46-day simulation period. Overall, the results suggest that the use of 2D SAR water elevation changes in the Atchafalaya basin offers improved understanding and modeling of floodplain hydrodynamics.
Methodology for the development and calibration of the SCI-QOL item banks
Tulsky, David S.; Kisala, Pamela A.; Victorson, David; Choi, Seung W.; Gershon, Richard; Heinemann, Allen W.; Cella, David
2015-01-01
Objective: To develop a comprehensive, psychometrically sound, and conceptually grounded patient reported outcomes (PRO) measurement system for individuals with spinal cord injury (SCI). Methods: Individual interviews (n = 44) and focus groups (n = 65 individuals with SCI and n = 42 SCI clinicians) were used to select key domains for inclusion and to develop PRO items. Verbatim items from other cutting-edge measurement systems (i.e. PROMIS, Neuro-QOL) were included to facilitate linkage and cross-population comparison. Items were field tested in a large sample of individuals with traumatic SCI (n = 877). Dimensionality was assessed with confirmatory factor analysis. Local item dependence and differential item functioning were assessed, and items were calibrated using the item response theory (IRT) graded response model. Finally, computer adaptive tests (CATs) and short forms were administered in a new sample (n = 245) to assess test-retest reliability and stability. Participants and Procedures: A calibration sample of 877 individuals with traumatic SCI across five SCI Model Systems sites and one Department of Veterans Affairs medical center completed SCI-QOL items in interview format. Results: We developed 14 unidimensional calibrated item banks and 3 calibrated scales across physical, emotional, and social health domains. When combined with the five Spinal Cord Injury – Functional Index physical function banks, the final SCI-QOL system consists of 22 IRT-calibrated item banks/scales. Item banks may be administered as CATs or short forms. Scales may be administered in a fixed-length format only. Conclusions: The SCI-QOL measurement system provides SCI researchers and clinicians with a comprehensive, relevant and psychometrically robust system for measurement of physical-medical, physical-functional, emotional, and social outcomes. All SCI-QOL instruments are freely available on Assessment CenterSM. PMID:26010963
Limits of Predictability in Commuting Flows in the Absence of Data for Calibration
Yang, Yingxiang; Herrera, Carlos; Eagle, Nathan; González, Marta C.
2014-01-01
The estimation of commuting flows at different spatial scales is a fundamental problem for different areas of study. Many current methods rely on parameters requiring calibration from empirical trip volumes. Their values are often not generalizable to cases without calibration data. To solve this problem we develop a statistical expression to calculate commuting trips with a quantitative functional form to estimate the model parameter when empirical trip data is not available. We calculate commuting trip volumes at scales from within a city to an entire country, introducing a scaling parameter α to the recently proposed parameter free radiation model. The model requires only widely available population and facility density distributions. The parameter can be interpreted as the influence of the region scale and the degree of heterogeneity in the facility distribution. We explore in detail the scaling limitations of this problem, namely under which conditions the proposed model can be applied without trip data for calibration. On the other hand, when empirical trip data is available, we show that the proposed model's estimation accuracy is as good as other existing models. We validated the model in different regions in the U.S., then successfully applied it in three different countries. PMID:25012599
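For context, the parameter-free radiation model that the abstract extends predicts commuting flux from population counts alone; the sketch below implements that base form (the paper's α-scaled variant modifies this expression, and its exact functional form is not reproduced here):

```python
# Parameter-free radiation model: expected trips from region i to region j
# given the origin population m_i, destination population n_j, and the
# population s_ij within the circle of radius r_ij centered on i
# (excluding i and j themselves). Values used below are illustrative.

def radiation_flux(T_i, m_i, n_j, s_ij):
    """Expected trips from i to j; T_i is the total trips leaving region i."""
    return T_i * (m_i * n_j) / ((m_i + s_ij) * (m_i + n_j + s_ij))
```

Note how the intervening population s_ij suppresses the flux: the larger the population of closer opportunities, the fewer trips reach j. This is the term the cited work rescales with α to account for region size and facility heterogeneity.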
A high-resolution dissolved oxygen mass balance model was developed for the Louisiana coastal shelf in the northern Gulf of Mexico. GoMDOM (Gulf of Mexico Dissolved Oxygen Model) was developed to assist in evaluating the impacts of nutrient loading on hypoxia development and exte...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liangzhe Zhang; Anthony D. Rollett; Timothy Bartel
2012-02-01
A calibrated Monte Carlo (cMC) approach, which quantifies grain boundary kinetics within a generic setting, is presented. The influence of misorientation is captured by adding a scaling coefficient in the spin flipping probability equation, while the contribution of different driving forces is weighted using a partition function. The calibration process relies on the established parametric links between Monte Carlo (MC) and sharp-interface models. The cMC algorithm quantifies microstructural evolution under complex thermomechanical environments and remedies some of the difficulties associated with conventional MC models. After validation, the cMC approach is applied to quantify the texture development of polycrystalline materials with influences of misorientation and inhomogeneous bulk energy across grain boundaries. The results are in good agreement with theory and experiments.
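The spin-flipping rule that the cMC approach calibrates is, at its core, a Metropolis-style Potts update with a scaling coefficient on the flip probability. A minimal sketch in which the `mobility` argument stands in (hypothetically) for the paper's misorientation-dependent scaling; the actual calibrated functional form is not shown:

```python
import math
import random

def site_energy(spins, i, j):
    """Number of unlike nearest neighbors on a periodic square lattice."""
    n = len(spins)
    s = spins[i][j]
    nbrs = [spins[(i - 1) % n][j], spins[(i + 1) % n][j],
            spins[i][(j - 1) % n], spins[i][(j + 1) % n]]
    return sum(1 for t in nbrs if t != s)

def attempt_flip(spins, i, j, new_spin, kT=0.5, mobility=1.0):
    """Metropolis acceptance with a mobility scaling on the flip probability."""
    old = spins[i][j]
    e_old = site_energy(spins, i, j)
    spins[i][j] = new_spin
    d_e = site_energy(spins, i, j) - e_old
    p = mobility * (1.0 if d_e <= 0 else math.exp(-d_e / kT))
    if random.random() < p:
        return True          # keep the flip
    spins[i][j] = old        # reject: restore the old orientation
    return False
```

The cMC calibration fixes how `mobility` and the temperature-like parameter map onto physical boundary mobility and driving forces via the parametric links to sharp-interface models.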
CHALLENGES OF PROCESSING BIOLOGICAL DATA FOR INCORPORATION INTO A LAKE EUTROPHICATION MODEL
A eutrophication model is in development as part of the Lake Michigan Mass Balance Project (LMMBP). Successful development and calibration of this model required the processing and incorporation of extensive biological data. Data were drawn from multiple sources, including nutrie...
Phosphorus and Phytoplankton in Lake Michigan: Model Post-audit and Projections
The eutrophication model, LM3-Eutro, was developed in support of the Lake Michigan Mass Balance Project to simulate chlorophyll-a (phytoplankton), phosphorus and carbon concentrations in the lake. This high-resolution carbon-based model was developed and calibrated using extensi...
Calibration strategies for a groundwater model in a highly dynamic alpine floodplain
Foglia, L.; Burlando, P.; Hill, Mary C.; Mehl, S.
2004-01-01
Most surface flows to the 20-km-long Maggia Valley in Southern Switzerland are impounded, and the valley is being investigated to determine environmental flow requirements. The aim of the investigation is the development of a modelling framework that simulates the dynamics of the groundwater, hydrologic, and ecologic systems. Because of the multi-scale nature of the modelling framework, large-scale models are first developed to provide the boundary conditions for more detailed models of reaches that are of ecological importance. We describe here the initial (large-scale) groundwater/surface water model and its calibration in relation to initial and boundary conditions. A MODFLOW-2000 model was constructed to simulate the interaction of groundwater and surface water and was developed parsimoniously to avoid modelling artefacts and parameter inconsistencies. Model calibration includes two steady-state conditions, with and without recharge to the aquifer from the adjoining hillslopes. Parameters are defined to represent areal recharge, hydraulic conductivity of the aquifer (up to 5 classes), and streambed hydraulic conductivity. Model performance was investigated for two system representations. The first representation assumed unknown flow input at the northern end of the groundwater domain and unknown lateral inflow. The second representation used simulations of the lateral flow obtained by means of a raster-based, physically oriented, continuous-in-time rainfall-runoff (R-R) model. Results based on these two representations are compared and discussed.
NASA Astrophysics Data System (ADS)
Lin, Tsungpo
Performance engineers face a major challenge in modeling and simulation of after-market power systems due to system degradation and measurement errors. Currently, most of the power generation industry uses deterministic data matching to calibrate models and cascade system degradation, which causes significant calibration uncertainty and increases the risk of providing performance guarantees. In this research work, maximum-likelihood-based simultaneous data reconciliation and model calibration (SDRMC) is used for power system modeling and simulation. By replacing deterministic data matching with SDRMC, one can reduce the calibration uncertainty and mitigate error propagation into the performance simulation. A modeling and simulation environment for a complex power system with a given degree of degradation has been developed. In this environment, multiple data sets are imported when carrying out simultaneous data reconciliation and model calibration. Calibration uncertainties are estimated through error analyses and propagated to the performance simulation using the principle of error propagation. System degradation is then quantified by comparing the performance of the calibrated model against its expected new-and-clean status. To mitigate smearing effects caused by gross errors, gross error detection (GED) is carried out in two stages. The first is a screening stage, in which serious gross errors are eliminated in advance. The GED techniques used in the screening stage are based on multivariate data analysis (MDA), including multivariate data visualization and principal component analysis (PCA). Subtle gross errors are treated in the second stage, in which serial bias compensation or a robust M-estimator is engaged. To achieve better efficiency in the combined scheme of least-squares-based data reconciliation and hypothesis-testing-based GED, the Levenberg-Marquardt (LM) algorithm is used as the optimizer.
To reduce computation time and stabilize problem solving for a complex power system such as a combined-cycle power plant, meta-modeling using response surface equations (RSEs) and system/process decomposition are incorporated into the simultaneous SDRMC scheme. The goal of this research work is to reduce the calibration uncertainties and, thus, the risks of providing performance guarantees that arise from uncertainties in performance simulation.
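As a toy illustration of the reconciliation step at the heart of SDRMC, the sketch below adjusts three redundant flow measurements so that they satisfy a single mass-balance constraint, by weighted least squares. The measurement values and standard deviations are invented, and the closed-form update is a textbook special case (one linear constraint), not the paper's full maximum-likelihood scheme:

```python
# Minimal sketch of weighted least-squares data reconciliation with one
# linear constraint: measurements m (std devs s) must satisfy f1 + f2 = f3.
# This is a simplified stand-in for the SDRMC approach described above.

def reconcile(m, s, a):
    """Adjust measurements m (std devs s) to satisfy sum(a_i * x_i) = 0."""
    S = sum(ai * ai * si * si for ai, si in zip(a, s))  # A * Sigma * A^T (scalar)
    r = sum(ai * mi for ai, mi in zip(a, m))            # constraint residual
    return [mi - si * si * ai * r / S for mi, si, ai in zip(m, s, a)]

m = [10.2, 5.1, 14.9]   # measured flows f1, f2, f3 (invented values)
s = [0.2, 0.1, 0.3]     # measurement standard deviations
x = reconcile(m, s, [1, 1, -1])
print(x)                # reconciled values exactly satisfy x1 + x2 = x3
```

Measurements with larger uncertainty absorb more of the adjustment, which is the maximum-likelihood solution under independent Gaussian errors.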
NASA Technical Reports Server (NTRS)
Nguyen, Quang-Viet; Kojima, Jun
2005-01-01
Researchers from NASA Glenn Research Center's Combustion Branch and the Ohio Aerospace Institute (OAI) have developed a transferable calibration standard for an optical technique called spontaneous Raman scattering (SRS) in high-pressure flames. SRS is perhaps the only technique that provides spatially and temporally resolved, simultaneous multiscalar measurements in turbulent flames. Such measurements are critical for the validation of numerical models of combustion. This study has been a combined experimental and theoretical effort to develop a spectral calibration database for multiscalar diagnostics using SRS in high-pressure flames. In the past, such measurements have used a one-of-a-kind experimental setup and a setup-dependent calibration procedure to empirically account for spectral interferences, or crosstalk, among the major species of interest. Such calibration procedures, being non-transferable, are prohibitively expensive to duplicate. A goal of this effort is to provide an SRS calibration database using transferable standards that can be implemented widely by other researchers for both atmospheric-pressure and high-pressure (less than 30 atm) SRS studies. A secondary goal of this effort is to provide quantitative multiscalar diagnostics in high-pressure environments to validate computational combustion codes.
Cai, Longyan; He, Hong S.; Wu, Zhiwei; Lewis, Benard L.; Liang, Yu
2014-01-01
Understanding the fire-prediction capabilities of fuel models is vital to forest fire management. Various fuel models have been developed for the Great Xing'an Mountains in Northeast China. However, the performance of these fuel models has not been tested against historical occurrences of wildfires, so their applicability requires further investigation. This paper therefore aims to develop standard fuel models. Seven vegetation types were combined into three fuel models according to potential fire behaviors, which were clustered using Euclidean distance algorithms. Fuel-model parameter sensitivity was analyzed by the Morris screening method. Results showed that the parameters 1-hour time-lag loading, dead heat content, live heat content, 1-hour time-lag SAV (surface-area-to-volume ratio), live shrub SAV, and fuel bed depth have high sensitivity. Two of the most sensitive parameters, 1-hour time-lag loading and fuel bed depth, were chosen as adjustment parameters because of their high spatio-temporal variability. The FARSITE model was then used to test the fire-prediction capabilities of the combined (uncalibrated) fuel models, and was shown to yield unrealistic predictions of the historical fire. The calibrated fuel models, however, significantly improved prediction of the actual fire, with an accuracy of 89%. Validation results also showed that the model can estimate actual fires with an accuracy exceeding 56% when using the calibrated fuel models. These fuel models can therefore be used to calculate fire behaviors efficiently, which can be helpful in forest fire management. PMID:24714164
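The Morris screening used above can be sketched as a one-at-a-time elementary-effects calculation: perturb each parameter in turn along random trajectories and average the absolute response change. The surrogate "fire-behaviour model" and the two parameter ranges below are purely illustrative stand-ins, not the paper's fuel models:

```python
import random

# Hedged sketch of Morris one-at-a-time screening. Each parameter's mean
# absolute elementary effect (mu*) ranks its influence on the model output.

def morris_mu_star(model, bounds, n_traj=50, delta=0.1, seed=1):
    rng = random.Random(seed)
    k = len(bounds)
    ee = [[] for _ in range(k)]
    for _ in range(n_traj):
        # random base point in the unit cube, mapped onto parameter bounds
        x = [rng.uniform(0, 1 - delta) for _ in range(k)]
        scale = lambda u: [lo + ui * (hi - lo) for ui, (lo, hi) in zip(u, bounds)]
        for i in range(k):
            xp = list(x)
            xp[i] += delta                         # perturb one factor only
            dy = model(scale(xp)) - model(scale(x))
            ee[i].append(abs(dy) / delta)          # elementary effect
    return [sum(e) / len(e) for e in ee]           # mu* per parameter

# toy surrogate: output strongly driven by parameter 0 (e.g. fuel-bed depth),
# weakly by parameter 1 (e.g. dead heat content) -- invented for illustration
model = lambda p: 4.0 * p[0] + 0.2 * p[1]
mu = morris_mu_star(model, bounds=[(0.1, 1.0), (16.0, 20.0)])
print(mu)   # the first parameter should screen as far more influential
```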
A Kinematic Calibration Process for Flight Robotic Arms
NASA Technical Reports Server (NTRS)
Collins, Curtis L.; Robinson, Matthew L.
2013-01-01
The Mars Science Laboratory (MSL) robotic arm is ten times more massive than any Mars robotic arm before it, yet with similar accuracy and repeatability positioning requirements. In order to assess and validate these requirements, a higher-fidelity model and calibration processes were needed. Kinematic calibration of robotic arms is a common and necessary process to ensure good positioning performance. Most methodologies assume a rigid arm, high-accuracy data collection, and some kind of optimization of kinematic parameters. A new detailed kinematic and deflection model of the MSL robotic arm was formulated in the design phase and used to update the initial positioning and orientation accuracy and repeatability requirements. This model included a higher-fidelity link stiffness matrix representation, as well as a link level thermal expansion model. In addition, it included an actuator backlash model. Analytical results highlighted the sensitivity of the arm accuracy to its joint initialization methodology. Because of this, a new technique for initializing the arm joint encoders through hardstop calibration was developed. This involved selecting arm configurations to use in Earth-based hardstop calibration that had corresponding configurations on Mars with the same joint torque to ensure repeatability in the different gravity environment. The process used to collect calibration data for the arm included the use of multiple weight stand-in turrets with enough metrology targets to reconstruct the full six-degree-of-freedom location of the rover and tool frames. The follow-on data processing of the metrology data utilized a standard differential formulation and linear parameter optimization technique.
Putnam, Jacob B; Somers, Jeffrey T; Wells, Jessica A; Perry, Chris E; Untaroiu, Costin D
2015-09-01
New vehicles are currently being developed to transport humans to space. During the landing phases, crewmembers may be exposed to spinal and frontal loading. To reduce the risk of injuries during these common impact scenarios, the National Aeronautics and Space Administration (NASA) is developing new safety standards for spaceflight. The Test Device for Human Occupant Restraint (THOR) advanced multi-directional anthropomorphic test device (ATD), with the National Highway Traffic Safety Administration modification kit, has been chosen to evaluate occupant spacecraft safety because of its improved biofidelity. NASA tested the THOR ATD at Wright-Patterson Air Force Base (WPAFB) in various impact configurations, including frontal and spinal loading. A computational finite element model (FEM) of the THOR, matching these latest modifications, was developed in LS-DYNA. The main goal of this study was to calibrate and validate the THOR FEM for use in future spacecraft safety studies. An optimization-based method was developed to calibrate the material models of the lumbar joints and pelvic flesh. Compression test data were used to calibrate the quasi-static material properties of the pelvic flesh, while whole-body THOR ATD kinematic and kinetic responses under spinal and frontal loading conditions were used for dynamic calibration. The performance of the calibrated THOR FEM was evaluated by simulating separate THOR ATD tests with different crash pulses along both spinal and frontal directions. The model response was compared with test data by calculating its correlation score using the CORrelation and Analysis (CORA) rating system. The biofidelity of the THOR FEM was then evaluated against tests recorded on human volunteers under three different frontal and spinal impact pulses. The calibrated THOR FEM responded with high similarity to the THOR ATD in all validation tests.
The THOR FEM showed good biofidelity relative to human-volunteer data under spinal loading, but limited biofidelity under frontal loading. This may suggest a need for further improvements in both the THOR ATD and FEM. Overall, results presented in this study provide confidence in the THOR FEM for use in predicting THOR ATD responses for conditions, such as those observed in spacecraft landing, and for use in evaluating THOR ATD biofidelity. Copyright © 2015 Elsevier Ltd. All rights reserved.
Quentin, A G; Rodemann, T; Doutreleau, M-F; Moreau, M; Davies, N W; Millard, Peter
2017-01-31
Near-infrared reflectance spectroscopy (NIRS) is frequently used for the assessment of key nutrients of forage or crops but remains underused in ecological and physiological studies, especially to quantify non-structural carbohydrates. The aim of this study was to develop calibration models to assess the content of soluble sugars (fructose, glucose, sucrose) and starch in foliar material of Eucalyptus globulus. A partial least squares (PLS) regression was used on the sample spectral data and was compared to the contents measured using standard wet chemistry methods. The calibration models were validated using a completely independent set of samples. We used key indicators such as the ratio of prediction to deviation (RPD) and the range error ratio to assess the performance of the calibration models. Accurate calibration models were obtained for fructose and glucose content (R2 > 0.85, root mean square error of prediction (RMSEP) of 0.95%–1.26% in the validation models), followed by sucrose and total soluble sugar content (R2 ~ 0.70 and RMSEP > 2.3%). In comparison to the others, the calibration of the starch model performed very poorly, with RPD = 1.70. This study establishes the ability of the NIRS calibration model to infer soluble sugar content in foliar samples of E. globulus in a rapid and cost-effective way. We suggest a complete redevelopment of the starch analysis using a more specific quantification method, such as an HPLC-based technique, to achieve higher performance in the starch model. Overall, NIRS could serve as a high-throughput phenotyping tool to study plant response to stress factors.
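The performance indicators quoted above (RMSEP, RPD, range error ratio) are simple functions of the reference and predicted values. The sketch below computes them on invented sugar-content numbers, not data from the study:

```python
import math

# Illustrative calibration-performance indicators for a validation set.

def rmsep(ref, pred):
    return math.sqrt(sum((r - p) ** 2 for r, p in zip(ref, pred)) / len(ref))

def rpd(ref, pred):
    mean = sum(ref) / len(ref)
    sd = math.sqrt(sum((r - mean) ** 2 for r in ref) / (len(ref) - 1))
    return sd / rmsep(ref, pred)            # ratio of prediction to deviation

def rer(ref, pred):
    return (max(ref) - min(ref)) / rmsep(ref, pred)   # range error ratio

ref  = [2.1, 3.4, 5.0, 6.2, 7.8, 9.1]   # e.g. sugar content, % dry mass (made up)
pred = [2.3, 3.1, 5.2, 6.0, 8.1, 8.8]   # NIRS-predicted values (made up)
print(rmsep(ref, pred), rpd(ref, pred), rer(ref, pred))
```

Higher RPD and RER mean the prediction error is small relative to the spread of the reference data; an RPD near 1, as reported above for starch, means the model predicts little better than the mean.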
Automated Camera Array Fine Calibration
NASA Technical Reports Server (NTRS)
Clouse, Daniel; Padgett, Curtis; Ansar, Adnan; Cheng, Yang
2008-01-01
Using aerial imagery, the JPL FineCalibration (JPL FineCal) software automatically tunes a set of existing CAHVOR camera models for an array of cameras. The software finds matching features in the overlap region between images from adjacent cameras, and uses these features to refine the camera models. It is not necessary to take special imagery of a known target and no surveying is required. JPL FineCal was developed for use with an aerial, persistent surveillance platform.
Kumar, Keshav
2018-03-01
Excitation-emission matrix fluorescence (EEMF) and total synchronous fluorescence spectroscopy (TSFS) are two fluorescence techniques commonly used for the analysis of multifluorophoric mixtures. The two techniques are conceptually different and offer certain advantages over each other. Manual analysis of the large volumes of highly correlated EEMF and TSFS data involved in developing a calibration model is difficult. Partial least squares (PLS) analysis can handle such large EEMF and TSFS data sets by finding factors that maximize the correlation between the spectral and concentration information for each fluorophore. However, applying PLS analysis to the entire data set often does not provide a robust calibration model, and a suitable pre-processing step is required. The present work evaluates the application of genetic algorithm (GA) analysis prior to PLS analysis of EEMF and TSFS data sets to improve the precision and accuracy of the calibration model. The GA essentially combines the advantages of stochastic methods with those of deterministic approaches, and can find the set of EEMF and TSFS variables that correlate well with the concentration of each fluorophore present in a multifluorophoric mixture. The utility of GA-assisted PLS analysis is successfully validated using (i) EEMF data sets acquired for dilute aqueous mixtures of four biomolecules and (ii) TSFS data sets acquired for dilute aqueous mixtures of four carcinogenic polycyclic aromatic hydrocarbons (PAHs). The present work shows that the GA can significantly improve the accuracy and precision of PLS calibration models developed for both EEMF and TSFS data sets, and should therefore be considered a useful pre-processing technique when developing EEMF and TSFS calibration models.
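A toy sketch of GA-based variable selection as a pre-processing step is given below. Everything here is a simplified stand-in: the fitness is the R-squared of a fit on the mean of the selected channels (a crude proxy for the PLS scoring used in the study), and the synthetic "spectra" are invented, with two informative channels and three noise channels:

```python
import random

# Toy genetic algorithm for variable (channel) selection. Chromosomes are
# binary masks over channels; fitness rewards masks whose selected channels
# correlate with the analyte concentration y.

def r2(x, y):
    mx, my = sum(x) / len(x), sum(y) / len(y)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy * sxy / (sxx * syy)

def fitness(mask, X, y):
    cols = [j for j, m in enumerate(mask) if m]
    if not cols:
        return 0.0
    latent = [sum(row[j] for j in cols) / len(cols) for row in X]
    return r2(latent, y)                 # crude stand-in for a PLS score

def ga_select(X, y, gens=40, pop_size=20, seed=0):
    rng = random.Random(seed)
    k = len(X[0])
    pop = [[rng.randint(0, 1) for _ in range(k)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=lambda m: fitness(m, X, y), reverse=True)
        keep = pop[: pop_size // 2]              # truncation selection (elitist)
        children = []
        for _ in range(pop_size - len(keep)):
            a, b = rng.sample(keep, 2)
            cut = rng.randrange(1, k)            # one-point crossover
            child = a[:cut] + b[cut:]
            child[rng.randrange(k)] ^= 1         # point mutation
            children.append(child)
        pop = keep + children
    return max(pop, key=lambda m: fitness(m, X, y))

# synthetic data: channels 0-1 carry the signal, channels 2-4 are noise
rng = random.Random(1)
y = [rng.uniform(0, 1) for _ in range(30)]
X = [[yi + rng.gauss(0, 0.01), yi + rng.gauss(0, 0.01)]
     + [rng.uniform(0, 1) for _ in range(3)] for yi in y]
best = ga_select(X, y)
print(best)   # the informative channels should end up selected
```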
NASA Astrophysics Data System (ADS)
Mohanty, B.; Jena, S.; Panda, R. K.
2016-12-01
Overexploitation of groundwater has resulted in the abandonment of several shallow tube wells in the study basin in Eastern India. For the sustainability of groundwater resources, basin-scale modelling of groundwater flow is indispensable for effective planning and management of water resources. The main objective of this study was to develop a 3-D groundwater flow model of the study basin using the Visual MODFLOW Flex 2014.2 package and to calibrate and validate the model using 17 years of observed data. A sensitivity analysis was carried out to quantify the susceptibility of the aquifer system to river bank seepage, recharge from rainfall and agricultural practices, horizontal and vertical hydraulic conductivities, and specific yield. To quantify the impact of parameter uncertainties, the Sequential Uncertainty Fitting Algorithm (SUFI-2) and Markov chain Monte Carlo (McMC) techniques were implemented; results from the two techniques were compared and their advantages and disadvantages analysed. The Nash-Sutcliffe coefficient (NSE), coefficient of determination (R2), mean absolute error (MAE), mean percent deviation (Dv) and root mean squared error (RMSE) were adopted as model evaluation criteria during calibration and validation of the developed model. NSE, R2, MAE, Dv and RMSE values for the groundwater flow model during calibration and validation were within acceptable ranges, and the McMC technique provided more reasonable results than SUFI-2. The calibrated and validated model will be useful for identifying aquifer properties, analysing groundwater flow dynamics and forecasting changes in groundwater levels.
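The goodness-of-fit criteria listed above are standard formulas; the sketch below computes them on made-up observed and simulated groundwater heads, not data from the study:

```python
import math

# Common model-evaluation criteria for calibration/validation.

def nse(obs, sim):   # Nash-Sutcliffe efficiency
    mo = sum(obs) / len(obs)
    return 1 - sum((o - s) ** 2 for o, s in zip(obs, sim)) / \
               sum((o - mo) ** 2 for o in obs)

def rmse(obs, sim):  # root mean squared error
    return math.sqrt(sum((o - s) ** 2 for o, s in zip(obs, sim)) / len(obs))

def mae(obs, sim):   # mean absolute error
    return sum(abs(o - s) for o, s in zip(obs, sim)) / len(obs)

def dv(obs, sim):    # mean percent deviation
    return 100 * sum(o - s for o, s in zip(obs, sim)) / sum(obs)

obs = [12.1, 11.8, 11.2, 10.9, 11.5, 12.3]   # observed heads, m (invented)
sim = [12.0, 11.9, 11.4, 10.7, 11.6, 12.1]   # simulated heads, m (invented)
print(nse(obs, sim), rmse(obs, sim), mae(obs, sim), dv(obs, sim))
```

NSE ranges up to 1 (perfect fit); a value near 0 means the model is no better than predicting the observed mean.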
The calibration and flight test performance of the space shuttle orbiter air data system
NASA Technical Reports Server (NTRS)
Dean, A. S.; Mena, A. L.
1983-01-01
The Space Shuttle air data system (ADS) is used by the guidance, navigation and control system (GN&C) to guide the vehicle to a safe landing. In addition, postflight aerodynamic analysis requires a precise knowledge of flight conditions. Since the orbiter is essentially an unpowered vehicle, the conventional methods of obtaining the ADS calibration were not available; therefore, the calibration was derived using a unique and extensive wind tunnel test program. This test program included subsonic tests with a 0.36-scale orbiter model, transonic and supersonic tests with a smaller 0.2-scale model, and numerous ADS probe-alone tests. The wind tunnel calibration was further refined with subsonic results from the approach and landing test (ALT) program, thus producing the ADS calibration for the orbital flight test (OFT) program. The calibration of the Space Shuttle ADS and its performance during flight are discussed in this paper. A brief description of the system is followed by a discussion of the calibration methodology, and then by a review of the wind tunnel and flight test programs. Finally, the flight results are presented, including an evaluation of the system performance for on-board systems use and a description of the calibration refinements developed to provide the best possible air data for postflight analysis work.
Calibration of a γ-Re_θ transition model and its application in low-speed flows
NASA Astrophysics Data System (ADS)
Wang, YunTao; Zhang, YuLun; Meng, DeHong; Wang, GunXue; Li, Song
2014-12-01
The prediction of laminar-turbulent transition in the boundary layer is very important for obtaining accurate aerodynamic characteristics with computational fluid dynamics (CFD) tools, because laminar-turbulent transition is directly related to complex flow phenomena in the boundary layer and to separated flow. Unfortunately, the transition effect is not included in today's major CFD tools because of the non-local calculations required in transition modeling. In this paper, Menter's γ-Re_θ transition model is calibrated and incorporated into a Reynolds-Averaged Navier-Stokes (RANS) code, the Trisonic Platform (TRIP), developed at the China Aerodynamics Research and Development Center (CARDC). Based on flat-plate experimental data from the literature, the empirical correlations involved in the transition model are modified and calibrated numerically. Numerical simulation of the low-speed flow over the Trapezoidal Wing (Trap Wing) is performed and compared with the corresponding experimental data. The results indicate that the γ-Re_θ transition model can accurately predict the location of separation-induced transition and of natural transition in flow regions with moderate pressure gradient. The transition model effectively improves the simulation accuracy of the boundary layer and of the aerodynamic characteristics.
Igne, Benoit; Shi, Zhenqi; Drennen, James K; Anderson, Carl A
2014-02-01
The impact of raw material variability on the prediction ability of a near-infrared calibration model was studied. Calibrations, developed from a quaternary mixture design comprising theophylline anhydrous, lactose monohydrate, microcrystalline cellulose, and soluble starch, were challenged by intentional variation of raw material properties. A design with two theophylline physical forms, three lactose particle sizes, and two starch manufacturers was created to test model robustness. Further challenges to the models were accomplished through environmental conditions. Along with full-spectrum partial least squares (PLS) modeling, variable selection by dynamic backward PLS and genetic algorithms was utilized in an effort to mitigate the effects of raw material variability. In addition to evaluating models based on their prediction statistics, prediction residuals were analyzed by analyses of variance and model diagnostics (Hotelling's T² and Q residuals). Full-spectrum models were significantly affected by lactose particle size. Models developed by selecting variables gave lower prediction errors and proved to be a good approach to limit the effect of changing raw material characteristics. Hotelling's T² and Q residuals provided valuable information that was not detectable when studying only prediction trends. Diagnostic statistics were demonstrated to be critical in the appropriate interpretation of the prediction of quality parameters. © 2013 Wiley Periodicals, Inc. and the American Pharmacists Association.
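The two diagnostics mentioned above complement each other: Hotelling's T² flags samples that are extreme within the calibration model's subspace, while the Q residual flags samples that lie off that subspace. The sketch below computes both for a one-component PCA fitted by power iteration to synthetic three-channel "spectra"; all numbers are illustrative:

```python
import math, random

# One-component PCA diagnostics: Hotelling's T^2 (distance within the model
# subspace) and Q residual (squared distance off the model subspace).

def power_iteration(C, iters=200):
    v = [1.0] * len(C)
    for _ in range(iters):
        w = [sum(C[i][j] * v[j] for j in range(len(v))) for i in range(len(v))]
        n = math.sqrt(sum(x * x for x in w))
        v = [x / n for x in w]
    lam = sum(v[i] * sum(C[i][j] * v[j] for j in range(len(v)))
              for i in range(len(v)))
    return v, lam            # leading eigenvector and eigenvalue

def diagnostics(X):
    m = [sum(col) / len(X) for col in zip(*X)]
    Xc = [[x - mi for x, mi in zip(row, m)] for row in X]
    C = [[sum(row[i] * row[j] for row in Xc) / (len(X) - 1)
          for j in range(len(m))] for i in range(len(m))]
    p, lam = power_iteration(C)
    t2, q = [], []
    for row in Xc:
        score = sum(x * pi for x, pi in zip(row, p))
        t2.append(score * score / lam)                       # Hotelling's T^2
        recon = [score * pi for pi in p]
        q.append(sum((x - r) ** 2 for x, r in zip(row, recon)))  # Q residual
    return t2, q

rng = random.Random(0)
X = [[t + rng.gauss(0, 0.05), 2 * t + rng.gauss(0, 0.05), rng.gauss(0, 0.05)]
     for t in [rng.uniform(0, 1) for _ in range(40)]]
X.append([5.0, 10.0, 0.0])   # in-model but extreme -> large T^2
X.append([0.5, 1.0, 3.0])    # off-model direction -> large Q
t2, q = diagnostics(X)
print(t2[-2], q[-1])
```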
NASA Astrophysics Data System (ADS)
Luo, L.
2011-12-01
Automated calibration of complex deterministic water quality models with a large number of biogeochemical parameters can reduce time-consuming iterative simulations involving empirical judgements of model fit. We undertook auto-calibration of the one-dimensional hydrodynamic-ecological lake model DYRESM-CAEDYM, using a Monte Carlo sampling (MCS) method, in order to test the applicability of this procedure for shallow, polymictic Lake Rotorua (New Zealand). The calibration procedure involved independently minimising the root-mean-square error (RMSE) and maximizing the Pearson correlation coefficient (r) and Nash-Sutcliffe efficiency coefficient (Nr) for comparisons of model state variables against measured data. An assigned number of parameter permutations was used for 10,000 simulation iterations. The 'optimal' temperature calibration produced an RMSE of 0.54 °C, an Nr-value of 0.99 and an r-value of 0.98 through the whole water column, based on comparisons with 540 observed water temperatures collected between 13 July 2007 and 13 January 2009. The modeled bottom dissolved oxygen concentration (20.5 m below surface) was compared with 467 available observations: the RMSE of the simulations against the measurements was 1.78 mg L-1, the Nr-value was 0.75 and the r-value was 0.87. The auto-calibrated model was further tested on an independent data set by simulating bottom-water hypoxia events for the period 15 January 2009 to 8 June 2011 (875 days). This verification produced an accurate simulation of five hypoxic events corresponding to DO < 2 mg L-1 during the summers of 2009-2011; the RMSE was 2.07 mg L-1, the Nr-value 0.62 and the r-value 0.81, based on the available data set of 738 days. The auto-calibration software for DYRESM-CAEDYM developed here is substantially less time-consuming and more efficient in parameter optimisation than traditional manual calibration, which has been the standard approach for similar complex water quality models.
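The core of such a Monte Carlo auto-calibration is a simple loop: draw random parameter sets within assigned ranges, run the model, and keep the set that minimises the objective. In the sketch below the "model" is a toy annual water-temperature sine curve, not DYRESM-CAEDYM, and all ranges are invented:

```python
import math, random

# Minimal Monte Carlo sampling (MCS) calibration loop minimising RMSE.

def rmse(obs, sim):
    return math.sqrt(sum((o - s) ** 2 for o, s in zip(obs, sim)) / len(obs))

def model(days, amp, mean_t):
    # toy seasonal water-temperature model (annual sine cycle)
    return [mean_t + amp * math.sin(2 * math.pi * d / 365) for d in days]

days = list(range(0, 365, 15))
obs = model(days, amp=4.0, mean_t=14.0)   # synthetic "observations"

rng = random.Random(42)
best = (float("inf"), None)
for _ in range(10000):                    # MCS iterations
    amp = rng.uniform(0, 10)              # sampled parameter ranges (invented)
    mean_t = rng.uniform(5, 25)
    err = rmse(obs, model(days, amp, mean_t))
    if err < best[0]:
        best = (err, (amp, mean_t))
print(best)   # best RMSE should approach 0 near (amp=4, mean_t=14)
```

A real implementation would track r and Nr alongside RMSE, as in the study, rather than a single objective.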
NASA Astrophysics Data System (ADS)
Shedekar, Vinayak S.; King, Kevin W.; Fausey, Norman R.; Soboyejo, Alfred B. O.; Harmel, R. Daren; Brown, Larry C.
2016-09-01
Three different models of tipping bucket rain gauges (TBRs), viz. HS-TB3 (Hydrological Services Pty Ltd.), ISCO-674 (Isco, Inc.) and TR-525 (Texas Electronics, Inc.), were calibrated in the lab to quantify measurement errors across a range of rainfall intensities (5 mm·h-1 to 250 mm·h-1) and three different volumetric settings. Instantaneous and cumulative values of simulated rainfall were recorded at 1, 2, 5, 10 and 20-min intervals. All three TBR models showed statistically significant (α = 0.05) deviations in measurements from actual rainfall depths, with increasing underestimation errors at greater rainfall intensities. Simple linear regression equations were developed for each TBR to correct the TBR readings based on measured intensities (R2 > 0.98). Additionally, two dynamic calibration techniques, viz. a quadratic model (R2 > 0.7) and a T vs. 1/Q model (R2 > 0.98), were tested and found to be useful in situations when the volumetric settings of TBRs are unknown. The correction models were successfully applied to correct field-collected rainfall data from the respective TBR models. The calibration parameters of the correction models were found to be highly sensitive to changes in the volumetric calibration of TBRs. Overall, the HS-TB3 model (with a better-protected tipping bucket mechanism and consistent measurement errors across a range of rainfall intensities) was found to be the most reliable and consistent for rainfall measurements, followed by the ISCO-674 (susceptible to clogging, with relatively smaller measurement errors across a range of rainfall intensities) and the TR-525 (highly susceptible to clogging and to frequent changes in volumetric calibration, with highly intensity-dependent measurement errors). The study demonstrated that corrections based on dynamic and volumetric calibration can only help minimize, but not completely eliminate, the measurement errors.
The findings from this study will be useful for correcting field data from TBRs and may have major implications for field- and watershed-scale hydrologic studies.
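The simple linear correction described above amounts to regressing reference intensity on TBR-recorded intensity and applying the fitted line to field readings. The numbers below are invented, showing the typical under-catch at high intensities:

```python
# Least-squares fit of a linear TBR correction: reference vs. recorded intensity.

def linfit(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((a - mx) * (c - my) for a, c in zip(x, y)) / \
        sum((a - mx) ** 2 for a in x)
    return my - b * mx, b                  # intercept, slope

ref = [5, 25, 50, 100, 150, 250]           # simulated intensity, mm/h (invented)
tbr = [4.9, 24.2, 47.5, 92.0, 135.0, 220.0]  # TBR reading, mm/h (under-catch)
a, b = linfit(tbr, ref)                    # map TBR readings back to reference
corrected = [a + b * t for t in tbr]
print(a, b)
```

A slope greater than 1 reflects the systematic underestimation at higher intensities; the same fit applied to field data yields the corrected depths.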
NASA Technical Reports Server (NTRS)
Tonkay, Gregory
1990-01-01
The following separate topics are addressed: (1) improving a robotic tracking system; and (2) providing insights into orbiter position calibration for radiator inspection. The objective of the tracking system project was to provide the capability to track moving targets more accurately by adjusting parameters in the control system and implementing a predictive algorithm. A computer model was developed to emulate the tracking system. Using this model as a test bed, a self-tuning algorithm was developed to tune the system gains. The model yielded important findings concerning factors that affect the gains. The self-tuning algorithms will provide the concepts to write a program to automatically tune the gains in the real system. The section concerning orbiter position calibration provides a comparison to previous work that had been performed for plant growth. It provided the conceptualized routines required to visually determine the orbiter position and orientation. Furthermore, it identified the types of information which are required to flow between the robot controller and the vision system.
Assessment of bitterness intensity and suppression effects using an Electronic Tongue
NASA Astrophysics Data System (ADS)
Legin, A.; Rudnitskaya, A.; Kirsanov, D.; Frolova, Yu.; Clapham, D.; Caricofe, R.
2009-05-01
Quantification of the bitterness intensity and the effectiveness of bitterness suppression of a novel active pharmaceutical ingredient (API) being developed by GSK was performed using an Electronic Tongue (ET) based on potentiometric chemical sensors. Calibration of the ET was performed with solutions of quinine hydrochloride in the concentration range 0.4-360 mg L-1. An MLR calibration model was developed for predicting bitterness intensity, expressed as "equivalent quinine concentration", of a series of solutions of quinine, bittrex and the API. Additionally, the effectiveness of sucralose, a mixture of aspartame and acesulfame K, and grape juice in masking the bitter taste of the API was assessed using two approaches. In the first, PCA models were produced and the distances between compound-containing solutions and the corresponding placebos were calculated. The second approach consisted of calculating the "equivalent quinine concentration" using the calibration model with respect to quinine concentration. According to both methods, the most effective taste masking was produced by grape juice, followed by the mixture of aspartame and acesulfame K.
NASA Astrophysics Data System (ADS)
Chahinian, Nanée; Moussa, Roger; Andrieux, Patrick; Voltz, Marc
2006-07-01
Tillage operations are known to greatly influence local overland flow, infiltration and depressional storage by altering soil hydraulic properties and soil surface roughness. The calibration of runoff models for tilled fields is not identical to that of untilled fields, as it has to take into consideration the temporal variability of parameters due to the transient nature of surface crusts. In this paper, we apply a rainfall-runoff model and develop a calibration methodology that accounts for the impact of tillage on overland flow simulation at the scale of a tilled plot (3240 m²) located in southern France. The selected model couples the Morel-Seytoux infiltration equation (Morel-Seytoux, H.J., 1978. Derivation of equations for variable rainfall infiltration. Water Resources Research 14(4), 561-568) to a transfer function based on the diffusive wave equation. The parameters to be calibrated are the hydraulic conductivity at natural saturation Ks, the surface detention Sd and the lag time ω. A two-step calibration procedure is presented. First, eleven rainfall-runoff events are calibrated individually and the variability of the calibrated parameters is analysed. The individually calibrated Ks values decrease monotonically with the total amount of rainfall since tillage. No clear relationship is observed between the two parameters Sd and ω and the date of tillage. However, the lag time ω increases inversely with the peak flow of the events. Fairly good agreement is observed between the simulated and measured hydrographs of the calibration set. Simple mathematical laws describing the evolution of Ks and ω are selected, while Sd is considered constant. The second step involves the collective calibration of the law of evolution of each parameter on the whole calibration set. This procedure is calibrated on 11 events and validated on ten runoff-inducing and four non-runoff-inducing rainfall events.
The suggested calibration methodology seems robust and can be transposed to other gauged sites.
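The second calibration step described above replaces the individually calibrated Ks values with a simple law of evolution. The abstract only states that Ks decreases monotonically with cumulative rainfall since tillage, so the exponential form below is purely an assumption for illustration, as are all the numbers:

```python
import math

# Fit an assumed decay law Ks = K0 * exp(-a * P) to per-event calibrated Ks
# values, where P is cumulative rainfall since tillage. Linearise via logs,
# then ordinary least squares.

def fit_exp_decay(P, Ks):
    x, y = P, [math.log(k) for k in Ks]    # ln Ks = ln K0 - a * P
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    a = -sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    k0 = math.exp(my + a * mx)
    return k0, a

P  = [0, 20, 45, 80, 120, 160]             # cumulative rainfall, mm (invented)
Ks = [32.0, 24.5, 17.8, 11.2, 7.4, 5.1]    # calibrated Ks per event, mm/h (invented)
k0, a = fit_exp_decay(P, Ks)
print(k0, a)   # fitted initial conductivity and decay rate
```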
Majarena, Ana C.; Santolaria, Jorge; Samper, David; Aguilar, Juan J.
2010-01-01
This paper presents an overview of the literature on kinematic and calibration models of parallel mechanisms, the influence of sensors in the mechanism accuracy and parallel mechanisms used as sensors. The most relevant classifications to obtain and solve kinematic models and to identify geometric and non-geometric parameters in the calibration of parallel robots are discussed, examining the advantages and disadvantages of each method, presenting new trends and identifying unsolved problems. This overview tries to answer and show the solutions developed by the most up-to-date research to some of the most frequent questions that appear in the modelling of a parallel mechanism, such as how to measure, the number of sensors and necessary configurations, the type and influence of errors or the number of necessary parameters. PMID:22163469
Parallel computing for automated model calibration
DOE Office of Scientific and Technical Information (OSTI.GOV)
Burke, John S.; Danielson, Gary R.; Schulz, Douglas A.
2002-07-29
Natural resources model calibration is a significant burden on computing and staff resources in modeling efforts. Most assessments must consider multiple calibration objectives (for example, the magnitude and timing of the stream flow peak). An automated calibration process that allows real-time updating of data and models, freeing scientists to focus their effort on improving the models themselves, is needed. We are in the process of building a fully featured multi-objective calibration tool capable of processing multiple models cheaply and efficiently using null cycle computing. Our parallel processing and calibration software routines have been written generically, but our focus has been on natural resources model calibration. So far, the natural resources models have been friendly to parallel calibration efforts in that they require no inter-process communication, need only a small amount of input data, and output only a small amount of statistical information for each calibration run. A typical auto-calibration run might involve running a model 10,000 times with a variety of input parameters and summary statistical output. In the past, model calibration has been done against individual models for each data set. The individual model runs are relatively fast, ranging from seconds to minutes, and the process was run on a single computer using a simple iterative procedure. We have completed two auto-calibration prototypes and are currently designing a more feature-rich tool. Our prototypes have focused on running the calibration in a distributed, cross-platform computing environment. They allow the incorporation of "smart" calibration parameter generation (using artificial intelligence processing techniques). Null cycle computing, similar to SETI@home, has also been a focus of our efforts. This paper details the design of the latest prototype and discusses our plans for the next revision of the software.
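Because each calibration run needs no inter-process communication, the workload is embarrassingly parallel: each worker runs the model once for a sampled parameter set and returns a summary statistic. The sketch below uses a thread pool and a one-parameter stand-in "model" (both invented here for illustration); the tool described above would instead distribute process-level or null-cycle workers across machines:

```python
from concurrent.futures import ThreadPoolExecutor
import random

OBS = 7.5   # observed peak flow (made-up target)

def run_model(param):
    # stand-in model run: returns the parameter and its misfit to OBS
    sim = param * param / 2
    return param, abs(sim - OBS)

def calibrate(n_runs=1000, seed=3):
    rng = random.Random(seed)
    params = [rng.uniform(0, 10) for _ in range(n_runs)]
    # one independent model run per task -- no inter-process communication
    with ThreadPoolExecutor(max_workers=4) as pool:
        results = list(pool.map(run_model, params))
    return min(results, key=lambda r: r[1])   # best-fitting parameter set

best_param, best_err = calibrate()
print(best_param, best_err)
```

Threads suffice for this toy model; a real deployment of a slow natural-resources model would swap in process- or cluster-level workers with the same map/reduce shape.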
Grant, S.W.; Hickey, G.L.; Carlson, E.D.; McCollum, C.N.
2014-01-01
Objective/background: A number of contemporary risk prediction models for mortality following elective abdominal aortic aneurysm (AAA) repair have been developed. Before a model is used either in clinical practice or to risk-adjust surgical outcome data, it is important that its performance is assessed in external validation studies. Methods: The British Aneurysm Repair (BAR) score, Medicare, and Vascular Governance North West (VGNW) models were validated using an independent, prospectively collected sample of multicentre clinical audit data. Consecutive data on 1,124 patients undergoing elective AAA repair at 17 hospitals in the north-west of England and Wales between April 2011 and March 2013 were analysed. The outcome measure was in-hospital mortality. Model calibration (observed to expected ratio with chi-square test, calibration plots, calibration intercept and slope) and discrimination (area under the receiver operating characteristic curve [AUC]) were assessed in the overall cohort and procedural subgroups. Results: The mean age of the population was 74.4 years (SD 7.7); 193 (17.2%) patients were women and the majority of patients (759, 67.5%) underwent endovascular aneurysm repair. All three models demonstrated good calibration in the overall cohort and procedural subgroups. Overall discrimination was excellent for the BAR score (AUC 0.83, 95% confidence interval [CI] 0.76–0.89), and acceptable for the Medicare and VGNW models, with AUCs of 0.78 (95% CI 0.70–0.86) and 0.75 (95% CI 0.65–0.84) respectively. Only the BAR score demonstrated good discrimination in procedural subgroups. Conclusion: All three models demonstrated good calibration and discrimination for the prediction of in-hospital mortality following elective AAA repair and are potentially useful. The BAR score has a number of advantages, which include being developed on the most contemporaneous data, excellent overall discrimination, and good performance in procedural subgroups. 
Regular model validations and recalibration will be essential. PMID:24837173
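As a minimal illustration of the validation metrics named above, the observed-to-expected (O/E) ratio and the AUC can be computed with plain numpy; the outcomes and predicted risks below are made up:

```python
# Sketch of two external-validation metrics: the O/E ratio (mean calibration)
# and the AUC (discrimination), on a fabricated eight-patient example.
import numpy as np

y = np.array([0, 0, 1, 0, 1, 1, 0, 1])                   # observed deaths
p = np.array([0.1, 0.6, 0.8, 0.3, 0.4, 0.9, 0.2, 0.7])   # predicted risks

oe_ratio = y.sum() / p.sum()            # observed / expected event counts

# AUC via the Mann-Whitney identity: the share of (event, non-event) pairs
# in which the event received the higher predicted risk
pos, neg = p[y == 1], p[y == 0]
auc = (pos[:, None] > neg[None, :]).mean()
```

An O/E ratio near 1 indicates good mean calibration; the calibration intercept and slope mentioned in the abstract would additionally require regressing the outcomes on the logit of the predicted risks.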
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kercher, J.R.; Chambers, J.Q.
1995-10-01
We have developed a geographically distributed ecosystem model, TERRA, for the carbon, nitrogen, and water dynamics of the terrestrial biosphere. The local ecosystem model of TERRA consists of coupled, modified versions of TEM and DAYTRANS. The ecosystem model in each grid cell calculates water fluxes of evaporation, transpiration, and runoff; carbon fluxes of gross primary productivity, litterfall, and plant and soil respiration; and nitrogen fluxes of vegetation uptake, litterfall, mineralization, immobilization, and system loss. The state variables are soil water content; carbon in live vegetation; carbon in soil; nitrogen in live vegetation; organic nitrogen in soil and litter; available inorganic nitrogen, aggregating nitrites, nitrates, and ammonia; and a variable for allocation. Carbon and nitrogen dynamics are calibrated to specific sites in 17 vegetation types. Eight parameters are determined during calibration for each of the 17 vegetation types. At calibration, the annual average values of carbon in vegetation, C_v, show site differences that derive from the vegetation-type-specific parameters and intersite variation in climate and soils. From calibration, we recover the average C_v of forests, woodlands, savannas, grasslands, shrublands, and tundra that were used to develop the model initially. The timing of the phases of the annual variation is driven by temperature and light in the high-latitude and moist temperate zones. The dry temperate zones are driven by temperature, precipitation, and light. In the tropics, precipitation is the key variable in annual variation. The seasonal responses are even more clearly demonstrated in net primary production and show the same controlling factors.
Paech, S.J.; Mecikalski, J.R.; Sumner, D.M.; Pathak, C.S.; Wu, Q.; Islam, S.; Sangoyomi, T.
2009-01-01
Estimates of incoming solar radiation (insolation) from Geostationary Operational Environmental Satellite observations have been produced for the state of Florida over a 10-year period (1995-2004). These insolation estimates were developed into well-calibrated half-hourly and daily integrated solar insolation fields over the state at 2 km resolution, in addition to a 2-week running minimum surface albedo product. Model results of the daily integrated insolation were compared with ground-based pyranometers, and as a result, the entire dataset was calibrated. This calibration was accomplished through a three-step process: (1) comparison with ground-based pyranometer measurements on clear (noncloudy) reference days, (2) correcting for a bias related to cloudiness, and (3) deriving a monthly bias correction factor. Precalibration results indicated good model performance, with a station-averaged model error of 2.2 MJ m-2/day (13%). Calibration reduced errors to 1.7 MJ m-2/day (10%), and also removed temporal, seasonal, and satellite-sensor-related biases. The calibrated insolation dataset will subsequently be used by state of Florida Water Management Districts to produce statewide, 2-km resolution maps of estimated daily reference and potential evapotranspiration for water management-related activities. © 2009 American Water Resources Association.
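Step (3) above, the monthly bias-correction factor, amounts to rescaling the satellite estimates by the ratio of ground-measured to modeled means; the numbers below are invented for illustration:

```python
# Sketch of a monthly bias-correction factor for satellite insolation,
# computed against ground-based pyranometer measurements (fabricated data).
import numpy as np

satellite = np.array([18.2, 19.5, 21.0, 20.3, 17.8])    # model, MJ m-2/day
pyranometer = np.array([17.0, 18.1, 19.6, 19.0, 16.5])  # ground truth

factor = pyranometer.mean() / satellite.mean()   # monthly correction factor
corrected = satellite * factor                   # bias-corrected insolation
```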
Yurko, Joseph P.; Buongiorno, Jacopo; Youngblood, Robert
2015-05-28
System codes for simulation of safety performance of nuclear plants may contain parameters whose values are not known very accurately. New information from tests or operating experience is incorporated into safety codes by a process known as calibration, which reduces uncertainty in the output of the code and thereby improves its support for decision-making. The work reported here implements several improvements on classic calibration techniques afforded by modern analysis techniques. The key innovation has come from development of code surrogate model (or code emulator) construction and prediction algorithms. Use of a fast emulator makes the calibration processes used here with Markov Chain Monte Carlo (MCMC) sampling feasible. This study uses Gaussian Process (GP) based emulators, which have been used previously to emulate computer codes in the nuclear field. The present work describes the formulation of an emulator that incorporates GPs into a factor analysis-type or pattern recognition-type model. This “function factorization” Gaussian Process (FFGP) model allows overcoming limitations present in standard GP emulators, thereby improving both accuracy and speed of the emulator-based calibration process. Calibration of a friction-factor example using a Method of Manufactured Solution is performed to illustrate key properties of the FFGP based process.
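The role of the emulator can be seen in a small numpy sketch: fit a Gaussian Process to a handful of runs of a (stand-in) slow code, then evaluate the cheap posterior mean thousands of times inside MCMC. The kernel and hyperparameters below are illustrative, not the FFGP model of the paper:

```python
# Minimal GP emulator: train on 15 runs of an "expensive" code, then predict
# cheaply anywhere in the input range. Kernel/length-scale are assumptions.
import numpy as np

def slow_code(x):                 # stand-in for the expensive system code
    return np.sin(3 * x)

X = np.linspace(0, 2, 15)         # training runs of the code
y = slow_code(X)

def rbf(a, b, ell=0.3):
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell ** 2)

K = rbf(X, X) + 1e-8 * np.eye(len(X))    # jitter for numerical stability
alpha = np.linalg.solve(K, y)

def emulate(xs):                  # GP posterior mean: fast surrogate prediction
    return rbf(xs, X) @ alpha

xs = np.linspace(0, 2, 50)
max_err = np.max(np.abs(emulate(xs) - slow_code(xs)))
```

Inside an MCMC calibration loop, `emulate` would replace `slow_code` in the likelihood evaluation, which is what makes the sampling affordable.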
Photogrammetry Applied to Wind Tunnel Testing
NASA Technical Reports Server (NTRS)
Liu, Tian-Shu; Cattafesta, L. N., III; Radeztsky, R. H.; Burner, A. W.
2000-01-01
In image-based measurements, quantitative image data must be mapped to three-dimensional object space. Analytical photogrammetric methods, which may be used to accomplish this task, are discussed from the viewpoint of experimental fluid dynamicists. The Direct Linear Transformation (DLT) for camera calibration, as used in pressure-sensitive paint measurements, is summarized. An optimization method for camera calibration is developed that can be used to determine the camera calibration parameters, including those describing lens distortion, from a single image. Combined with the DLT method, this method allows a rapid and comprehensive in-situ camera calibration and therefore is particularly useful for quantitative flow visualization and other measurements such as model attitude and deformation in production wind tunnels. The paper also includes a brief description of typical photogrammetric applications to temperature- and pressure-sensitive paint measurements and model deformation measurements in wind tunnels.
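A bare-bones version of the DLT solve summarized above: with at least six known 3D-2D correspondences, the 11 independent camera parameters follow from a homogeneous least-squares problem. The synthetic camera and target points below are illustrative:

```python
# DLT sketch: recover the 3x4 camera matrix from noise-free 3D-2D pairs by
# solving ||A p|| -> min with an SVD. Camera and targets are fabricated.
import numpy as np

# ground-truth camera: focal length 800 px, principal point (320, 240),
# looking down +z with the target cloud 4-6 units in front of it
K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
Rt = np.hstack([np.eye(3), np.array([[0.0], [0.0], [5.0]])])
P_true = K @ Rt

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(8, 3))              # 3D calibration targets
Xh = np.hstack([X, np.ones((8, 1))])
uvw = Xh @ P_true.T
uv = uvw[:, :2] / uvw[:, 2:3]                    # projected image points

# build the 2n x 12 DLT design matrix, two rows per correspondence
rows = []
for (x, y, z, w), (u, v) in zip(Xh, uv):
    rows.append([x, y, z, w, 0, 0, 0, 0, -u*x, -u*y, -u*z, -u*w])
    rows.append([0, 0, 0, 0, x, y, z, w, -v*x, -v*y, -v*z, -v*w])
A = np.array(rows)
_, _, Vt = np.linalg.svd(A)
P_est = Vt[-1].reshape(3, 4)                     # null vector = camera matrix
P_est /= P_est[2, 3]                             # fix the arbitrary scale

reproj = Xh @ P_est.T
uv_est = reproj[:, :2] / reproj[:, 2:3]
err = np.max(np.abs(uv_est - uv))                # reprojection error, pixels
```

The paper's contribution goes further (lens distortion from a single image); this sketch covers only the linear DLT step.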
Improving Photometric Calibration of Meteor Video Camera Systems
NASA Technical Reports Server (NTRS)
Ehlert, Steven; Kingery, Aaron; Cooke, William
2016-01-01
Current optical observations of meteors are commonly limited by systematic uncertainties in photometric calibration at the level of approximately 0.5 mag or higher. Future improvements to meteor ablation models, luminous efficiency models, or emission spectra will hinge on new camera systems and techniques that significantly reduce calibration uncertainties and can reliably perform absolute photometric measurements of meteors. In this talk we discuss the algorithms and tests that NASA's Meteoroid Environment Office (MEO) has developed to better calibrate photometric measurements for the existing All-Sky and Wide-Field video camera networks as well as for a newly deployed four-camera system for measuring meteor colors in Johnson-Cousins BVRI filters. In particular we will emphasize how the MEO has been able to address two long-standing concerns with the traditional procedure, discussed in more detail below.
Christiansen, Daniel E.; Haj, Adel E.; Risley, John C.
2017-10-24
The U.S. Geological Survey, in cooperation with the Iowa Department of Natural Resources, constructed Precipitation-Runoff Modeling System models to estimate daily streamflow for 12 river basins in western Iowa that drain into the Missouri River. The Precipitation-Runoff Modeling System is a deterministic, distributed-parameter, physical-process-based modeling system developed to evaluate the response of streamflow and general drainage basin hydrology to various combinations of climate and land use. Calibration periods for each basin varied depending on the period of record available for daily mean streamflow measurements at U.S. Geological Survey streamflow-gaging stations. A geographic information system tool was used to delineate each basin and estimate initial values for model parameters based on basin physical and geographical features. A U.S. Geological Survey automatic calibration tool that uses a shuffled complex evolution algorithm was used for initial calibration, and then manual modifications were made to parameter values to complete the calibration of each basin model. The main objective of the calibration was to match daily discharge values of simulated streamflow to measured daily discharge values. The Precipitation-Runoff Modeling System model was calibrated at 42 sites located in the 12 river basins in western Iowa. The accuracy of the simulated daily streamflow values at the 42 calibration sites varied by river and by site. The models were satisfactory at 36 of the sites based on statistical results. Unsatisfactory performance at the six other sites can be attributed to several factors: (1) low flow, no flow, and flashy flow conditions in headwater subbasins having a small drainage area; (2) poor representation of the groundwater and storage components of flow within a basin; (3) lack of accounting for basin withdrawals and water use; and (4) limited availability and accuracy of meteorological input data. 
The Precipitation-Runoff Modeling System models of 12 river basins in western Iowa will provide water-resource managers with a consistent and documented method for estimating streamflow at ungaged sites and aid in environmental studies, hydraulic design, water management, and water-quality projects.
Point Cloud Refinement with a Target-Free Intrinsic Calibration of a Mobile Multi-Beam LIDAR System
NASA Astrophysics Data System (ADS)
Nouiraa, H.; Deschaud, J. E.; Goulettea, F.
2016-06-01
LIDAR sensors are widely used in mobile mapping systems. Mobile mapping platforms allow fast acquisition in cities, for example, which would take much longer with static mapping systems. LIDAR sensors provide reliable and precise 3D information, which can be used in various applications: mapping of the environment, localization of objects, and detection of changes. With recent developments, multi-beam LIDAR sensors have appeared, able to provide a large amount of data with a high level of detail. A mono-beam LIDAR sensor mounted on a mobile platform requires an extrinsic calibration, so that the data acquired and registered in the sensor reference frame can be represented in the body reference frame modeling the mobile system. For a multi-beam LIDAR sensor, the calibration can be separated into two distinct parts: on one hand, an extrinsic calibration, in common with mono-beam LIDAR sensors, which gives the transformation between the sensor Cartesian reference frame and the body reference frame; on the other hand, an intrinsic calibration, which gives the relations between the beams of the multi-beam sensor. This calibration depends on a model given by the manufacturer, but the model can be non-optimal, which introduces errors and noise into the acquired point clouds. In the literature, some optimizations of the calibration parameters have been proposed, but they need a specific routine or environment, which can be constraining and time-consuming. In this article, we present an automatic method for improving the intrinsic calibration of a multi-beam LIDAR sensor, the Velodyne HDL-32E. The proposed approach does not need any calibration target and only uses information from the acquired point clouds, which makes it simple and fast to use. A corrected model for the Velodyne sensor is also proposed. 
An energy function which penalizes points far from local planar surfaces is used to optimize the different proposed parameters for the corrected model, and we are able to give a confidence value for the calibration parameters found. Optimization results on both synthetic and real data are presented.
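The plane-based energy described above can be sketched as follows: each point is penalized by its squared distance to a plane fitted (via PCA) to its nearest neighbours, so a well-calibrated cloud scores lower. The point clouds below are synthetic:

```python
# Sketch of a target-free planarity energy: points far from their local
# best-fit plane contribute more. Cloud sizes and noise level are arbitrary.
import numpy as np

def planarity_energy(pts, k=8):
    energy = 0.0
    for p in pts:
        d2 = np.sum((pts - p) ** 2, axis=1)
        nn = pts[np.argsort(d2)[1:k + 1]]        # k nearest neighbours
        centroid = nn.mean(axis=0)
        # smallest principal direction of the neighbourhood = plane normal
        _, _, Vt = np.linalg.svd(nn - centroid)
        normal = Vt[-1]
        energy += float(np.dot(p - centroid, normal) ** 2)
    return energy

rng = np.random.default_rng(1)
flat = np.column_stack([rng.uniform(0, 1, (40, 2)), np.zeros(40)])  # a plane
noisy = flat + rng.normal(0, 0.05, flat.shape)   # "miscalibrated" cloud

e_flat = planarity_energy(flat)
e_noisy = planarity_energy(noisy)
```

Minimising such an energy over the intrinsic calibration parameters, as the paper does, flattens surfaces that a miscalibrated beam model has thickened.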
Load Model Verification, Validation and Calibration Framework by Statistical Analysis on Field Data
NASA Astrophysics Data System (ADS)
Jiao, Xiangqing; Liao, Yuan; Nguyen, Thai
2017-11-01
Accurate load models are critical for power system analysis and operation. A large amount of research work has been done on load modeling. Most of the existing research focuses on developing load models, while little has been done on developing formal load model verification and validation (V&V) methodologies or procedures. Most existing load model validation is based on qualitative rather than quantitative analysis. In addition, not all aspects of the model V&V problem have been addressed by the existing approaches. To complement the existing methods, this paper proposes a novel load model verification and validation framework that can systematically and more comprehensively examine a load model's effectiveness and accuracy. Statistical analysis, instead of visual checks, quantifies the load model's accuracy and provides a confidence level of the developed load model for model users. The analysis results can also be used to calibrate load models. The proposed framework can serve as guidance for utility engineers and researchers in systematically examining load models. The proposed method is demonstrated through analysis of field measurements collected from a utility system.
Stone, T.C.
2008-01-01
With the increased emphasis on monitoring the Earth's climate from space, more stringent calibration requirements are being placed on the data products from remote sensing satellite instruments. Among these are stability over decade-length time scales and consistency across sensors and platforms. For radiometer instruments in the solar reflectance wavelength range (visible to shortwave infrared), maintaining calibration on orbit is difficult due to the lack of absolute radiometric standards suitable for flight use. The Moon presents a luminous source that can be viewed by all instruments in Earth orbit. Considered as a solar diffuser, the lunar surface is exceedingly stable. The chief difficulty with using the Moon is the strong variations in the Moon's brightness with illumination and viewing geometry. This mandates the use of a photometric model to compare lunar observations, either over time by the same instrument or between instruments. The U.S. Geological Survey in Flagstaff, Arizona, under NASA sponsorship, has developed a model for the lunar spectral irradiance that explicitly accounts for the effects of phase, the lunar librations, and the lunar surface reflectance properties. The model predicts variations in the Moon's brightness with a precision of about 1% over a continuous phase range from eclipse to the quarter lunar phases. Given a time series of Moon observations taken by an instrument, the geometric prediction capability of the lunar irradiance model enables sensor calibration stability with sub-percent per year precision. Cross-calibration of instruments with similar passbands can be achieved with precision comparable to the model precision. Although the Moon observations used for intercomparison can be widely separated in phase angle and/or time, SeaWiFS and MODIS have acquired lunar views closely spaced in time. These data provide an example to assess inter-calibration biases between these two instruments.
CubiCal - Fast radio interferometric calibration suite exploiting complex optimisation
NASA Astrophysics Data System (ADS)
Kenyon, J. S.; Smirnov, O. M.; Grobler, T. L.; Perkins, S. J.
2018-05-01
It has recently been shown that radio interferometric gain calibration can be expressed succinctly in the language of complex optimisation. In addition to providing an elegant framework for further development, it exposes properties of the calibration problem which can be exploited to accelerate traditional non-linear least squares solvers such as Gauss-Newton and Levenberg-Marquardt. We extend existing derivations to chains of Jones terms: products of several gains which model different aberrant effects. In doing so, we find that the useful properties found in the single term case still hold. We also develop several specialised solvers which deal with complex gains parameterised by real values. The newly developed solvers have been implemented in a Python package called CubiCal, which uses a combination of Cython, multiprocessing and shared memory to leverage the power of modern hardware. We apply CubiCal to both simulated and real data, and perform both direction-independent and direction-dependent self-calibration. Finally, we present the results of some rudimentary profiling to show that CubiCal is competitive with respect to existing calibration tools such as MeqTrees.
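The flavour of such gain solvers can be illustrated with a damped alternating least-squares iteration (in the style of StEFCal) on a toy interferometer. This is a simplified stand-in for CubiCal's solvers, and the array, model visibilities, and gains below are synthetic:

```python
# Toy antenna-gain calibration: V_pq = g_p * conj(g_q) * M_pq, solved by a
# damped per-antenna least-squares update. All quantities are fabricated.
import numpy as np

rng = np.random.default_rng(2)
n = 6                                            # antennas
g_true = rng.normal(1, 0.2, n) + 1j * rng.normal(0, 0.2, n)
M = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
M = M + M.conj().T                               # Hermitian model visibilities
V = np.outer(g_true, g_true.conj()) * M          # "observed" visibilities

g = np.ones(n, dtype=complex)
for _ in range(300):
    Z = g[None, :].conj() * M                    # Z[p, q] = conj(g_q) M_pq
    g_new = (V * Z.conj()).sum(axis=1) / (np.abs(Z) ** 2).sum(axis=1)
    g = 0.5 * (g + g_new)                        # damping stabilises the iteration

# gain solutions are defined up to one overall phase; align on antenna 0
ref = lambda a: a * np.exp(-1j * np.angle(a[0]))
err = np.max(np.abs(ref(g) - ref(g_true)))
```

CubiCal generalises far beyond this (Jones chains, parameterised gains, Gauss-Newton and Levenberg-Marquardt in the complex domain), but the fixed-point structure is the same.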
Prognosis model for stand development
Albert R. Stage
1973-01-01
Describes a set of computer programs for developing prognoses of the development of existing stands under alternative regimes of management. Calibration techniques, modeling procedures, and a procedure for including stochastic variation are described. Implementation of the system for lodgepole pine, including assessment of losses attributed to an infestation of mountain...
New Multi-objective Uncertainty-based Algorithm for Water Resource Models' Calibration
NASA Astrophysics Data System (ADS)
Keshavarz, Kasra; Alizadeh, Hossein
2017-04-01
Water resource models are powerful tools to support the water management decision-making process and are developed to deal with a broad range of issues, including land use and climate change impact analysis, water allocation, systems design and operation, waste load control and allocation, etc. These models are divided into two categories of simulation and optimization models, whose calibration has been addressed extensively in the literature: efforts in recent decades have led to two main categories of auto-calibration methods, namely uncertainty-based algorithms such as GLUE, MCMC, and PEST, and optimization-based algorithms, including single-objective optimization such as SCE-UA and multi-objective optimization such as MOCOM-UA and MOSCEM-UA. Although algorithms that benefit from the capabilities of both types, such as SUFI-2, have been developed, this paper proposes a new auto-calibration algorithm which is capable of both finding optimal parameter values with regard to multiple objectives, like optimization-based algorithms, and providing interval estimations of parameters, like uncertainty-based algorithms. The algorithm is in fact developed to improve the quality of SUFI-2 results. Based on a single objective, e.g. NSE or RMSE, SUFI-2 proposes a routine to find the best point and interval estimation of parameters and the corresponding prediction intervals (95PPU) of the time series of interest. To assess the goodness of calibration, final results are presented using two uncertainty measures: the p-factor, quantifying the percentage of observations covered by the 95PPU, and the r-factor, quantifying the degree of uncertainty; the analyst then has to select the point and interval estimation of parameters which are non-dominated with regard to both uncertainty measures. 
Based on the described properties of SUFI-2, two important questions arise, the answers to which motivate our research: Given that the final selection in SUFI-2 is based on the two measures or objectives, and knowing that SUFI-2 has no multi-objective optimization mechanism, are the final estimations Pareto-optimal? Can systematic methods be applied to select the final estimations? Dealing with these questions, a new auto-calibration algorithm is proposed in which the uncertainty measures are considered as two objectives, and non-dominated interval estimations of parameters are found by coupling Monte Carlo simulation with Multi-Objective Particle Swarm Optimization. Both the proposed algorithm and SUFI-2 were applied to calibrate the parameters of the water resources planning model of the Helleh river basin, Iran. The model is a comprehensive water quantity-quality model developed in previous research using WEAP software in order to analyze the impacts of different water resources management strategies, including dam construction, increasing cultivation area, utilization of more efficient irrigation technologies, changing the crop pattern, etc. Comparing the Pareto frontier resulting from the proposed auto-calibration algorithm with the SUFI-2 results, it was revealed that the new algorithm leads to a better and also continuous Pareto frontier, even though it is more computationally expensive. Finally, the Nash and Kalai-Smorodinsky bargaining methods were used to choose a compromise interval estimation on the Pareto frontier.
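For concreteness, the two SUFI-2 measures treated as objectives above can be computed as follows; the observations and the ensemble standing in for model runs are synthetic:

```python
# Sketch of the SUFI-2 uncertainty measures: p-factor (coverage of the
# observations by the 95PPU band) and r-factor (relative band width).
import numpy as np

rng = np.random.default_rng(3)
obs = rng.normal(10, 2, 100)                         # observed series
sims = obs[None, :] + rng.normal(0, 1.5, (50, 100))  # ensemble of model runs
lower = np.percentile(sims, 2.5, axis=0)             # 95PPU lower bound
upper = np.percentile(sims, 97.5, axis=0)            # 95PPU upper bound

p_factor = np.mean((obs >= lower) & (obs <= upper))  # coverage of observations
r_factor = np.mean(upper - lower) / np.std(obs)      # relative band width
```

A good calibration pushes the p-factor toward 1 while keeping the r-factor small; the two pull in opposite directions, which is exactly why they form the Pareto trade-off the paper optimises.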
Parameter regionalization of a monthly water balance model for the conterminous United States
Bock, Andrew R.; Hay, Lauren E.; McCabe, Gregory J.; Markstrom, Steven L.; Atkinson, R. Dwight
2016-01-01
A parameter regionalization scheme to transfer parameter values from gaged to ungaged areas for a monthly water balance model (MWBM) was developed and tested for the conterminous United States (CONUS). The Fourier Amplitude Sensitivity Test, a global-sensitivity algorithm, was implemented on a MWBM to generate parameter sensitivities on a set of 109 951 hydrologic response units (HRUs) across the CONUS. The HRUs were grouped into 110 calibration regions based on similar parameter sensitivities. Subsequently, measured runoff from 1575 streamgages within the calibration regions were used to calibrate the MWBM parameters to produce parameter sets for each calibration region. Measured and simulated runoff at the 1575 streamgages showed good correspondence for the majority of the CONUS, with a median computed Nash–Sutcliffe efficiency coefficient of 0.76 over all streamgages. These methods maximize the use of available runoff information, resulting in a calibrated CONUS-wide application of the MWBM suitable for providing estimates of water availability at the HRU resolution for both gaged and ungaged areas of the CONUS.
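The Nash–Sutcliffe efficiency used above to judge the calibrated runs is a one-line computation; the monthly runoff values below are invented:

```python
# Nash-Sutcliffe efficiency on a fabricated eight-month runoff series.
import numpy as np

obs = np.array([12.0, 30.0, 55.0, 80.0, 60.0, 35.0, 20.0, 10.0])  # measured
sim = np.array([15.0, 28.0, 50.0, 85.0, 55.0, 38.0, 18.0, 12.0])  # simulated

# NSE = 1 is a perfect match; NSE = 0 is no better than the observed mean
nse = 1 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)
```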
McGinitie, Teague M; Ebrahimi-Najafabadi, Heshmatollah; Harynuk, James J
2014-02-21
A new method for calibrating thermodynamic data to be used in the prediction of analyte retention times is presented. The method allows thermodynamic data collected on one column to be used in making predictions across columns of the same stationary phase but with varying geometries. This calibration is essential, as slight variances in the column inner diameter and stationary phase film thickness between columns, or as a column ages, will adversely affect the accuracy of predictions. The calibration technique uses a Grob standard mixture along with a Nelder-Mead simplex algorithm and a previously developed three-parameter thermodynamic model of GC retention times to estimate both the inner diameter and the stationary phase film thickness. The calibration method is highly successful, with the predicted retention times for a set of alkanes, ketones and alcohols having an average error of 1.6 s across three columns. Copyright © 2014 Elsevier B.V. All rights reserved.
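A toy version of this calibration idea: fit effective column geometry by minimising retention-time mismatch with a Nelder-Mead simplex. The retention "law" below is a deliberately simplified stand-in for the paper's three-parameter thermodynamic model, and all numbers are invented:

```python
# Sketch: recover inner diameter d (mm) and film thickness df (um) from
# observed retention times of standards, via scipy's Nelder-Mead simplex.
import numpy as np
from scipy.optimize import minimize

c = np.array([2.0, 3.5, 5.0, 7.0])               # per-compound constants

def times(d, df):
    # toy retention law: a holdup term in d plus a term scaling with df/d
    return 4.0 * d + 0.01 * c * (df / d)

t_obs = times(0.25, 0.25)                        # "measured" standard mixture

def loss(p):
    return np.sum((times(*p) - t_obs) ** 2)

res = minimize(loss, x0=[0.32, 0.4], method="Nelder-Mead",
               options={"xatol": 1e-10, "fatol": 1e-14, "maxiter": 5000})
d_fit, df_fit = res.x
```

With the geometry recalibrated this way, retention times predicted from the shared thermodynamic data transfer to the new column.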
Alves, Julio Cesar L; Poppi, Ronei J
2013-01-30
This work verifies the potential of the support vector machine (SVM) algorithm, applied to near infrared (NIR) spectroscopy data, for developing multivariate calibration models for the determination of biodiesel content in diesel fuel blends that are effective and appropriate for present-day analytical determinations of this type of fuel, providing the usual extended analytical range with the required accuracy. Considering the difficulty of developing suitable models for this type of determination over an extended analytical range, and that in practice biodiesel/diesel fuel blends nowadays most often contain between 0 and 30% (v/v) biodiesel, a calibration model is suggested for the range 0-35% (v/v) of biodiesel in diesel blends. The possibility of using a calibration model for the range 0-100% (v/v) of biodiesel in diesel fuel blends was also investigated, and the difficulty in obtaining adequate results for this full analytical range is discussed. The SVM models are compared with those obtained with PLS models. The best result was obtained by the SVM model using the spectral region 4400-4600 cm(-1), providing an RMSEP value of 0.11% for the 0-35% biodiesel content calibration model. This model provides the determination of biodiesel content in agreement with the accuracy required by the ABNT NBR and ASTM reference methods and without interference due to the presence of vegetable oil in the mixture. The superior fit of the SVM model for the relationship studied is also verified by its similar prediction results when the 4400-6200 cm(-1) spectral range is used, whereas the PLS results are much worse over this spectral region. Copyright © 2012 Elsevier B.V. All rights reserved.
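The calibration task can be imitated on fabricated data: synthetic two-component "spectra" are mixed according to blend content, and a support vector regression maps spectra back to concentration. This is a schematic stand-in for the paper's NIR/SVM models, using scikit-learn's SVR rather than the authors' implementation:

```python
# Multivariate calibration sketch: SVR predicting % biodiesel from fake
# two-component spectra over the 0-35% range. All data are synthetic.
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(5)
wl = np.linspace(0, 1, 60)                       # arbitrary wavelength axis
biodiesel = np.exp(-((wl - 0.3) / 0.05) ** 2)    # fake pure-component spectrum
diesel = np.exp(-((wl - 0.7) / 0.08) ** 2)       # fake pure-component spectrum

conc = rng.uniform(0, 35, 80)                    # % biodiesel, 0-35 range
X = np.outer(conc / 100, biodiesel) + np.outer(1 - conc / 100, diesel)
X += rng.normal(0, 0.001, X.shape)               # instrument noise

model = SVR(kernel="rbf", C=100.0, epsilon=0.1).fit(X[:60], conc[:60])
rmsep = np.sqrt(np.mean((model.predict(X[60:]) - conc[60:]) ** 2))
```

Real NIR blend spectra are far less cleanly separable than this two-Gaussian toy, which is why the paper's spectral-region and kernel choices matter.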
Analysis of Multivariate Experimental Data Using A Simplified Regression Model Search Algorithm
NASA Technical Reports Server (NTRS)
Ulbrich, Norbert M.
2013-01-01
A new regression model search algorithm was developed that may be applied to both general multivariate experimental data sets and wind tunnel strain-gage balance calibration data. The algorithm is a simplified version of a more complex algorithm that was originally developed for the NASA Ames Balance Calibration Laboratory. The new algorithm performs regression model term reduction to prevent overfitting of data. It has the advantage that it needs only about one tenth of the original algorithm's CPU time for the completion of a regression model search. In addition, extensive testing showed that the prediction accuracy of math models obtained from the simplified algorithm is similar to the prediction accuracy of math models obtained from the original algorithm. The simplified algorithm, however, cannot guarantee that search constraints related to a set of statistical quality requirements are always satisfied in the optimized regression model. Therefore, the simplified algorithm is not intended to replace the original algorithm. Instead, it may be used to generate an alternate optimized regression model of experimental data whenever the application of the original search algorithm fails or requires too much CPU time. Data from a machine calibration of NASA's MK40 force balance is used to illustrate the application of the new search algorithm.
Statistical analysis of target acquisition sensor modeling experiments
NASA Astrophysics Data System (ADS)
Deaver, Dawne M.; Moyer, Steve
2015-05-01
The U.S. Army RDECOM CERDEC NVESD Modeling and Simulation Division is charged with the development and advancement of military target acquisition models to estimate expected soldier performance when using all types of imaging sensors. Two elements of sensor modeling are (1) laboratory-based psychophysical experiments used to measure task performance and calibrate the various models and (2) field-based experiments used to verify the model estimates for specific sensors. In both types of experiments, it is common practice to control or measure environmental, sensor, and target physical parameters in order to minimize the uncertainty of the physics-based modeling. Predicting the minimum number of test subjects required to calibrate or validate the model should be, but is not always, done during test planning. The objective of this analysis is to develop guidelines for test planners that recommend the number and types of test samples required to yield a statistically significant result.
SCS-CN based time-distributed sediment yield model
NASA Astrophysics Data System (ADS)
Tyagi, J. V.; Mishra, S. K.; Singh, Ranvir; Singh, V. P.
2008-05-01
A sediment yield model is developed to estimate the temporal rates of sediment yield from rainfall events on natural watersheds. The model utilizes the SCS-CN based infiltration model for computation of the rainfall-excess rate, and the SCS-CN-inspired proportionality concept for computation of sediment excess. For computation of sedimentographs, the sediment excess is routed to the watershed outlet using a single linear reservoir technique. Analytical development of the model shows that the ratio of the potential maximum erosion (A) to the potential maximum retention (S) of the SCS-CN method is constant for a watershed. The model is calibrated and validated on a number of events using the data of seven watersheds from India and the USA. Representative values of the A/S ratio computed for the watersheds from calibration are used for the validation of the model. The encouraging results of the proposed simple four-parameter model exhibit its potential in field application.
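The SCS-CN rainfall-excess computation that the model builds on is standard and compact. A minimal sketch (the CN value and storm depth in the example are illustrative, not from the paper's watersheds):

```python
def scs_cn_runoff(P, CN, lam=0.2):
    """SCS-CN rainfall-excess depth (mm) for an event rainfall P (mm).

    S  : potential maximum retention, S = 25400/CN - 254 (mm)
    Ia : initial abstraction, Ia = lam * S (lam = 0.2 is the standard value)
    Q  : (P - Ia)^2 / (P - Ia + S) for P > Ia, else 0
    """
    S = 25400.0 / CN - 254.0
    Ia = lam * S
    if P <= Ia:
        return 0.0
    return (P - Ia) ** 2 / (P - Ia + S)

# 80 mm storm on a watershed with CN = 75
print(round(scs_cn_runoff(80.0, 75), 1))  # about 26.9 mm of runoff
```

The paper's proportionality concept then scales sediment excess off this same S, which is why the A/S ratio emerges as a watershed constant.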
A proposed standard method for polarimetric calibration and calibration verification
NASA Astrophysics Data System (ADS)
Persons, Christopher M.; Jones, Michael W.; Farlow, Craig A.; Morell, L. Denise; Gulley, Michael G.; Spradley, Kevin D.
2007-09-01
Accurate calibration of polarimetric sensors is critical to reducing and analyzing phenomenology data, producing uniform polarimetric imagery for deployable sensors, and ensuring predictable performance of polarimetric algorithms. It is desirable to develop a standard calibration method, including verification reporting, in order to increase credibility with customers and foster communication and understanding within the polarimetric community. This paper seeks to facilitate discussions within the community on arriving at such standards. Both the calibration and verification methods presented here are performed easily with common polarimetric equipment, and are applicable to visible and infrared systems with either partial Stokes or full Stokes sensitivity. The calibration procedure has been used on infrared and visible polarimetric imagers over a six year period, and resulting imagery has been presented previously at conferences and workshops. The proposed calibration method involves the familiar calculation of the polarimetric data reduction matrix by measuring the polarimeter's response to a set of input Stokes vectors. With this method, however, linear combinations of Stokes vectors are used to generate highly accurate input states. This allows the direct measurement of all system effects, in contrast with fitting modeled calibration parameters to measured data. This direct measurement of the data reduction matrix allows higher order effects that are difficult to model to be discovered and corrected for in calibration. This paper begins with a detailed tutorial on the proposed calibration and verification reporting methods. Example results are then presented for a LWIR rotating half-wave retarder polarimeter.
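The core linear algebra of the calibration described above, measuring the polarimeter's response to known Stokes inputs and inverting to get the data reduction matrix, can be sketched directly. The 4-channel analyzer matrix and the calibration set below are hypothetical illustrations, not the paper's instrument:

```python
import numpy as np

# Known input Stokes vectors (columns): unpolarized, horizontal linear,
# +45 deg linear, and right-circular, which together span Stokes space.
S_in = np.array([
    [1, 1, 1, 1],
    [0, 1, 0, 0],
    [0, 0, 1, 0],
    [0, 0, 0, 1],
], dtype=float)

# A hypothetical 4-channel polarimeter response (unknown to the analyst):
# channels measure 0.5*(I+Q), 0.5*(I-Q), 0.5*(I+U), 0.5*(I+V).
A_true = np.array([
    [0.5,  0.5, 0.0, 0.0],
    [0.5, -0.5, 0.0, 0.0],
    [0.5,  0.0, 0.5, 0.0],
    [0.5,  0.0, 0.0, 0.5],
])

M = A_true @ S_in                    # measured intensities per input state
A_est = M @ np.linalg.pinv(S_in)     # estimated measurement matrix
W = np.linalg.pinv(A_est)            # data reduction matrix

s_unknown = np.array([1.0, 0.3, -0.2, 0.1])
s_recovered = W @ (A_true @ s_unknown)
print(np.allclose(s_recovered, s_unknown))
```

Because W is measured rather than modeled, any higher-order instrument effect present in the calibration responses is absorbed into it automatically, which is the advantage the abstract emphasizes.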
A simple topography-driven, calibration-free runoff generation model
NASA Astrophysics Data System (ADS)
Gao, H.; Birkel, C.; Hrachowitz, M.; Tetzlaff, D.; Soulsby, C.; Savenije, H. H. G.
2017-12-01
Determining the amount of runoff generated from rainfall occupies a central place in rainfall-runoff modelling. Moreover, reading landscapes and developing calibration-free runoff generation models that adequately reflect land surface heterogeneity remain the focus of much hydrological research. In this study, we created a new method to estimate runoff generation - the HAND-based Storage Capacity curve (HSC) - which uses a topographic index (HAND, Height Above the Nearest Drainage) to identify hydrological similarity and the partially saturated areas of catchments. We then coupled the HSC model with the Mass Curve Technique (MCT) to estimate root zone storage capacity (SuMax) and obtained the calibration-free runoff generation model HSC-MCT. Both models (HSC and HSC-MCT) allow us to estimate runoff generation and simultaneously visualize the spatial dynamics of the saturated area. We tested the two models in the data-rich Bruntland Burn (BB) experimental catchment in Scotland, which has a rare time series of field-mapped saturated area extent. The models were subsequently tested in 323 MOPEX (Model Parameter Estimation Experiment) catchments in the United States, with HBV and TOPMODEL used as benchmarks. We found that the HSC performed better than TOPMODEL, which is based on the topographic wetness index (TWI), in reproducing the spatio-temporal pattern of the observed saturated areas in the BB catchment. The HSC also outperformed HBV and TOPMODEL in the MOPEX catchments for both calibration and validation. Despite having no calibrated parameters, the HSC-MCT model performed comparably well with the calibrated HBV and TOPMODEL, highlighting both the robustness of the HSC model in describing the spatial distribution of root zone storage capacity and the efficiency of the MCT method in estimating SuMax. Moreover, the HSC-MCT model facilitated effective visualization of the saturated area, which has the potential to be used for broader geoscience studies beyond hydrology.
Akerib, DS; Alsum, S; Araújo, HM; ...
2018-01-05
The LUX experiment has performed searches for dark matter particles scattering elastically on xenon nuclei, leading to stringent upper limits on the nuclear scattering cross sections for dark matter. Here, for results derived from $1.4\times 10^{4}$ kg days of target exposure in 2013, details of the calibration, event reconstruction, modeling, and statistical tests that underlie the results are presented. Detector performance is characterized, including measured efficiencies, stability of response, position resolution, and discrimination between electron- and nuclear-recoil populations. Models are developed for the drift field, optical properties, background populations, the electron- and nuclear-recoil responses, and the absolute rate of low-energy background events. Innovations in the analysis include in situ measurement of the photomultipliers' response to xenon scintillation photons, verification of the fiducial mass with a low-energy internal calibration source, and new empirical models for low-energy signal yield based on large-sample, in situ calibrations.
Predictive process simulation of cryogenic implants for leading edge transistor design
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gossmann, Hans-Joachim; Zographos, Nikolas; Park, Hugh
2012-11-06
Two cryogenic implant TCAD modules have been developed: (i) a continuum-based compact model targeted at a TCAD production environment, calibrated against an extensive data set for all common dopants. Ion-specific calibration parameters related to damage generation and dynamic annealing were used and resulted in excellent fits to the calibration data set. (ii) A kinetic Monte Carlo (kMC) model including the full time dependence of the ion exposure that a particular spot on the wafer experiences, as well as the resulting temperature-versus-time profile of that spot. It was calibrated by adjusting damage generation and dynamic annealing parameters. The kMC simulations clearly demonstrate the importance of the time structure of the beam for the amorphization process: assuming an average dose rate does not capture all of the physics and may lead to incorrect conclusions. The model enables optimization of the amorphization process through tool parameters such as scan speed or beam height.
Photogrammetric calibration of the NASA-Wallops Island image intensifier system
NASA Technical Reports Server (NTRS)
Harp, B. F.
1972-01-01
An image intensifier was designed for use as one of the primary tracking systems for the barium cloud experiment at Wallops Island. Two computer programs, a definitive stellar camera calibration program and a geodetic stellar camera orientation program, were originally developed at Wallops on a GE 625 computer. A mathematical procedure for determining the image intensifier distortions is outlined, and the implementation of the model in the Wallops computer programs is described. The analytical calibration of metric cameras is also discussed.
Modeling Phosphorus Losses through Surface Runoff and Subsurface Drainage Using ICECREAM.
Qi, Hongkai; Qi, Zhiming; Zhang, T Q; Tan, C S; Sadhukhan, Debasis
2018-03-01
Modeling soil phosphorus (P) losses by surface and subsurface flow pathways is essential in developing successful strategies for P pollution control. We used the ICECREAM model to simultaneously simulate P losses in surface and subsurface flow, as well as to assess effectiveness of field practices in reducing P losses. Monitoring data from a mineral-P-fertilized clay loam field in southwestern Ontario, Canada, were used for calibration and validation. After careful adjustment of model parameters, ICECREAM was shown to satisfactorily simulate all major processes of surface and subsurface P losses. When the calibrated model was used to assess tillage and fertilizer management scenarios, results point to a 10% reduction in total P losses by shifting autumn tillage to spring, and a 25.4% reduction in total P losses by injecting fertilizer rather than broadcasting. Although the ICECREAM model was effective in simulating surface and subsurface P losses when thoroughly calibrated, further testing is needed to confirm these results with manure P application. As illustrated here, successful use of simulation models requires careful verification of model routines and comprehensive calibration to ensure that site-specific processes are accurately represented. Copyright © by the American Society of Agronomy, Crop Science Society of America, and Soil Science Society of America, Inc.
Comparison of modelled and empirical atmospheric propagation data
NASA Technical Reports Server (NTRS)
Schott, J. R.; Biegel, J. D.
1983-01-01
The radiometric integrity of TM thermal infrared channel data was evaluated and monitored to develop improved radiometric preprocessing calibration techniques for removal of atmospheric effects. Modelled atmospheric transmittance and path radiance were compared with empirical values derived from aircraft underflight data. Aircraft thermal infrared imagery and calibration data were available on two dates, as were corresponding atmospheric radiosonde data. The radiosonde data were used as input to the LOWTRAN 5A code, which was modified to output atmospheric path radiance in addition to transmittance. The aircraft data were calibrated and used to generate analogous measurements. These data indicate that there is a tendency for the LOWTRAN model to underestimate atmospheric path radiance and transmittance as compared to empirical data. A plot of transmittance versus altitude for both LOWTRAN and empirical data is presented.
Small Scale Mass Flow Plug Calibration
NASA Technical Reports Server (NTRS)
Sasson, Jonathan
2015-01-01
A simple control volume model has been developed to calculate the discharge coefficient through a mass flow plug (MFP) and validated with a calibration experiment. The maximum error of the model in the operating region of the MFP is 0.54%. The model uses the MFP geometry and operating pressure and temperature to couple continuity, momentum, energy, an equation of state, and wall shear. Effects of boundary layer growth and the reduction in cross-sectional flow area are calculated using an integral method. A CFD calibration is shown to be of lower accuracy, with a maximum error of 1.35%, and slower by a factor of 100. Effects of total pressure distortion are taken into account in the experiment. Distortion creates a loss in flow rate and can be characterized by two different distortion descriptors.
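The discharge coefficient in question compares measured mass flow to the 1-D isentropic choked ideal. A minimal sketch using the standard choked-flow relation (not the paper's full control-volume model with wall shear and boundary-layer corrections; the throat area and stagnation conditions below are invented):

```python
from math import sqrt

def ideal_choked_mass_flow(A, p0, T0, gamma=1.4, R=287.05):
    """1-D isentropic choked mass flow (kg/s) through throat area A (m^2)
    at stagnation pressure p0 (Pa) and stagnation temperature T0 (K)."""
    term = (2.0 / (gamma + 1.0)) ** ((gamma + 1.0) / (2.0 * (gamma - 1.0)))
    return A * p0 * sqrt(gamma / (R * T0)) * term

def discharge_coefficient(m_measured, A, p0, T0):
    """Cd = measured / ideal mass flow; boundary-layer growth makes Cd < 1."""
    return m_measured / ideal_choked_mass_flow(A, p0, T0)

# Air at 300 kPa, 290 K through a 10 cm^2 throat
m_id = ideal_choked_mass_flow(1e-3, 300e3, 290.0)
print(round(m_id, 3))
print(round(discharge_coefficient(0.98 * m_id, 1e-3, 300e3, 290.0), 2))
```

The abstract's integral boundary-layer method effectively replaces the geometric area A with an effective flow area, which is where the sub-unity Cd comes from.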
Optical interferometry and Gaia parallaxes for a robust calibration of the Cepheid distance scale
NASA Astrophysics Data System (ADS)
Kervella, Pierre; Mérand, Antoine; Gallenne, Alexandre; Trahin, Boris; Borgniet, Simon; Pietrzynski, Grzegorz; Nardetto, Nicolas; Gieren, Wolfgang
2018-04-01
We present the modeling tool we developed to incorporate multi-technique observations of Cepheids in a single pulsation model: the Spectro-Photo-Interferometry of Pulsating Stars (SPIPS). The combination of angular diameters from optical interferometry, radial velocities and photometry with the coming Gaia DR2 parallaxes of nearby Galactic Cepheids will soon enable us to calibrate the projection factor of the classical Parallax-of-Pulsation method. This will extend its applicability to Cepheids too distant for accurate Gaia parallax measurements, and allow us to precisely calibrate the Leavitt law's zero point. As an example application, we present the SPIPS model of the long-period Cepheid RS Pup that provides a measurement of its projection factor, using the independent distance estimated from its light echoes.
The report is one of 11 in a series describing the initial development of the Advanced Utility Simulation Model (AUSM) by the Universities Research Group on Energy (URGE) and its continued development by the Science Applications International Corporation (SAIC) research team. The...
Poole, Sandra; Vis, Marc; Knight, Rodney; Seibert, Jan
2017-01-01
Ecologically relevant streamflow characteristics (SFCs) of ungauged catchments are often estimated from simulated runoff of hydrologic models that were originally calibrated on gauged catchments. However, SFC estimates of the gauged donor catchments and subsequently the ungauged catchments can be substantially uncertain when models are calibrated using traditional approaches based on optimization of statistical performance metrics (e.g., Nash–Sutcliffe model efficiency). An improved calibration strategy for gauged catchments is therefore crucial to help reduce the uncertainties of estimated SFCs for ungauged catchments. The aim of this study was to improve SFC estimates from modeled runoff time series in gauged catchments by explicitly including one or several SFCs in the calibration process. Different types of objective functions were defined consisting of the Nash–Sutcliffe model efficiency, single SFCs, or combinations thereof. We calibrated a bucket-type runoff model (HBV – Hydrologiska Byråns Vattenavdelning – model) for 25 catchments in the Tennessee River basin and evaluated the proposed calibration approach on 13 ecologically relevant SFCs representing major flow regime components and different flow conditions. While the model generally tended to underestimate the tested SFCs related to mean and high-flow conditions, SFCs related to low flow were generally overestimated. The highest estimation accuracies were achieved by a SFC-specific model calibration. Estimates of SFCs not included in the calibration process were of similar quality when comparing a multi-SFC calibration approach to a traditional model efficiency calibration. For practical applications, this implies that SFCs should preferably be estimated from targeted runoff model calibration, and modeled estimates need to be carefully interpreted.
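The objective functions described above can be sketched concretely: a Nash-Sutcliffe efficiency term blended with the relative error of one streamflow characteristic. The Q95 low-flow index, the 50/50 weighting, and the toy flow series below are illustrative assumptions, not the study's 13 SFCs or its HBV setup:

```python
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe model efficiency (1 is a perfect fit)."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def q95_low_flow(q):
    """Example SFC: flow exceeded 95% of the time (a low-flow index)."""
    return np.percentile(q, 5)

def combined_objective(obs, sim, w=0.5):
    """Blend overall fit (NSE) with the relative error in one SFC,
    mirroring a multi-criteria calibration objective."""
    sfc_err = abs(q95_low_flow(sim) - q95_low_flow(obs)) / q95_low_flow(obs)
    return w * nse(obs, sim) + (1 - w) * (1.0 - sfc_err)

obs = np.array([5.0, 8.0, 20.0, 45.0, 30.0, 12.0, 7.0, 6.0])
sim = np.array([6.0, 9.0, 18.0, 40.0, 33.0, 11.0, 6.5, 6.0])
print(round(nse(obs, sim), 3))
```

Maximizing `combined_objective` rather than NSE alone is the paper's point: it trades a little overall fit for much better reproduction of the targeted flow characteristic.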
Chambers, Robert S.; Tandon, Rajan; Stavig, Mark E.
2015-07-07
To analyze the stresses and strains generated during the solidification of glass-forming materials, stress and volume relaxation must be predicted accurately. Although the modeling attributes required to depict physical aging in organic glassy thermosets strongly resemble the structural relaxation in inorganic glasses, the historical modeling approaches have been distinctly different. To determine whether a common constitutive framework can be applied to both classes of materials, the nonlinear viscoelastic simplified potential energy clock (SPEC) model, developed originally for glassy thermosets, was calibrated for the Schott 8061 inorganic glass and used to analyze a number of tests. A practical methodology for material characterization and model calibration is discussed, and the structural relaxation mechanism is interpreted in the context of the SPEC model constitutive equations. SPEC predictions compared to inorganic glass data collected from thermal strain measurements and creep tests demonstrate the ability to achieve engineering accuracy and make the SPEC model feasible for engineering applications involving a much broader class of glassy materials.
A SCR Model Calibration Approach with Spatially Resolved Measurements and NH 3 Storage Distributions
Song, Xiaobo; Parker, Gordon G.; Johnson, John H.; ...
2014-11-27
Selective catalytic reduction (SCR) is a technology used for reducing NOx emissions in heavy-duty diesel (HDD) engine exhaust. In this study, the spatially resolved capillary inlet infrared spectroscopy (Spaci-IR) technique was used to study the gas concentration and NH3 storage distributions in a SCR catalyst, and to provide data for developing a SCR model to analyze the axial gaseous concentrations and axial distributions of NH3 storage. A two-site SCR model is described for simulating the reaction mechanisms. The model equations and a calculation method were developed using the Spaci-IR measurements to determine the NH3 storage capacity and the relationships between certain kinetic parameters of the model. A calibration approach was then applied for tuning the kinetic parameters using the spatial gaseous measurements and the calculated NH3 storage as a function of axial position, instead of inlet and outlet gaseous concentrations of NO, NO2, and NH3. The equations and the approach for determining the NH3 storage capacity of the catalyst, and a method of dividing the NH3 storage capacity between the two storage sites, are presented. It was determined that the kinetic parameters of the adsorption and desorption reactions have to follow certain relationships for the model to simulate the experimental data. Finally, the modeling results served as a basis for developing full model calibrations to SCR lab reactor and engine data and for state estimator development, as described in the references (Song et al. 2013a, b; Surenahalli et al. 2013).
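The adsorption/desorption balance at the heart of such NH3 storage models can be sketched for a single site (the paper uses a two-site model; the rate constants and gas concentration below are invented, not calibrated values):

```python
def nh3_coverage(C, ka, kd, theta0=0.0, dt=0.01, t_end=50.0):
    """Integrate d(theta)/dt = ka*C*(1 - theta) - kd*theta by forward
    Euler, where theta is the fractional NH3 surface coverage, C the
    gas-phase NH3 concentration, ka/kd adsorption/desorption rates."""
    theta = theta0
    for _ in range(int(t_end / dt)):
        theta += dt * (ka * C * (1.0 - theta) - kd * theta)
    return theta

# Equilibrium coverage should approach theta_eq = ka*C / (ka*C + kd)
theta = nh3_coverage(C=400e-6, ka=5000.0, kd=0.5)
print(round(theta, 3))
```

Because only the ratio ka*C/kd fixes the equilibrium coverage, ka and kd are not separately identifiable from steady-state data alone, which is consistent with the abstract's finding that the kinetic parameters must follow certain relationships.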
Calibration of an Unsteady Groundwater Flow Model for a Complex, Strongly Heterogeneous Aquifer
NASA Astrophysics Data System (ADS)
Curtis, Z. K.; Liao, H.; Li, S. G.; Phanikumar, M. S.; Lusch, D.
2016-12-01
Modeling of groundwater systems characterized by complex three-dimensional structure and heterogeneity remains a significant challenge. Most of today's groundwater models are developed from relatively simple conceptual representations for the sake of easier calibration. As more complexity is modeled, e.g., by adding more layers and/or zones or by introducing transient processes, more parameters must be estimated, and issues related to ill-posed groundwater problems and non-unique calibration arise. Here, we explore an alternative conceptual representation for groundwater modeling that is fully three-dimensional and can capture complex 3D heterogeneity (both systematic and "random") without over-parameterizing the aquifer system. In particular, we apply Transition Probability (TP) geostatistics to high-resolution borehole data from a water well database to characterize the complex 3D geology. Different aquifer material classes, e.g., 'AQ' (aquifer material), 'MAQ' (marginal aquifer material), 'PCM' (partially confining material), and 'CM' (confining material), are simulated, with the hydraulic properties of each material type serving as tuning parameters during calibration. The TP-based approach is applied to simulate unsteady groundwater flow in a large, complex, and strongly heterogeneous glacial aquifer system in Michigan across multiple spatial and temporal scales. The resulting model is calibrated to observed static water level data over a time span of 50 years. The results show that the TP-based conceptualization enables much more accurate and robust calibration/simulation than that based on conventional deterministic layer/zone-based conceptual representations.
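The simplest analogue of transition-probability indicator simulation is a 1-D Markov chain over the material classes. The transition matrix below is a hypothetical illustration (real TP geostatistics fits such probabilities to borehole lag statistics and works in 3D), but it shows how a categorical column is realized:

```python
import random

# Hypothetical vertical transition probabilities between material classes
# (each row sums to 1): illustration only, not field-derived values.
CLASSES = ["AQ", "MAQ", "PCM", "CM"]
TP = {
    "AQ":  [0.80, 0.10, 0.07, 0.03],
    "MAQ": [0.15, 0.70, 0.10, 0.05],
    "PCM": [0.05, 0.10, 0.75, 0.10],
    "CM":  [0.02, 0.05, 0.13, 0.80],
}

def simulate_column(n_cells, start="AQ", seed=1):
    """Markov-chain realization of a 1-D material column: each cell's
    class is drawn conditioned on the class of the cell above it."""
    rng = random.Random(seed)
    col = [start]
    for _ in range(n_cells - 1):
        col.append(rng.choices(CLASSES, weights=TP[col[-1]])[0])
    return col

col = simulate_column(1000)
frac_aq = col.count("AQ") / len(col)
print(round(frac_aq, 2))
```

The diagonal entries control the mean thickness of runs of each material, which is how TP methods encode geologically meaningful layer continuity without explicit zonation.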
Technical Note: Procedure for the calibration and validation of kilo-voltage cone-beam CT models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vilches-Freixas, Gloria; Létang, Jean Michel; Rit,
2016-09-15
Purpose: The aim of this work is to propose a general and simple procedure for the calibration and validation of kilo-voltage cone-beam CT (kV CBCT) models against experimental data. Methods: The calibration and validation of the CT model is a two-step procedure: first the source model, then the detector model. The source is described by the direction-dependent photon energy spectrum at each voltage, while the detector is described by the pixel intensity value as a function of the direction and the energy of incident photons. The measurements for the source consist of a series of dose measurements in air performed at each voltage with varying filter thicknesses and materials in front of the x-ray tube. The measurements for the detector are acquisitions of projection images using the same filters and several tube voltages. The proposed procedure has been applied to calibrate and assess the accuracy of simple models of the source and the detector of three commercial kV CBCT units. If the CBCT system models had been calibrated differently, the current procedure would have been exclusively used to validate the models. Several high-purity attenuation filters of aluminum, copper, and silver combined with a dosimeter which is sensitive to the range of voltages of interest were used. A sensitivity analysis of the model has also been conducted for each parameter of the source and the detector models. Results: Average deviations between experimental and theoretical dose values are below 1.5% after calibration for the three x-ray sources. The predicted energy deposited in the detector agrees with experimental data within 4% for all imaging systems. Conclusions: The authors developed and applied an experimental procedure to calibrate and validate any model of the source and the detector of a CBCT unit. The present protocol has been successfully applied to three x-ray imaging systems.
The minimum requirements in terms of material and equipment would make its implementation suitable in most clinical environments.
Experimental Investigation of Nozzle/Plume Aerodynamics at Hypersonic Speeds
NASA Technical Reports Server (NTRS)
Heinemann, K.; Bogdanoff, David W.; Cambier, Jean-Luc
1992-01-01
The work performed by D. W. Bogdanoff and J.-L. Cambier during the period of 1 Feb. - 31 Oct. 1992 is presented. The following topics are discussed: (1) improvement in the operation of the facility; (2) the wedge model; (3) calibration of the new test section; (4) combustor model; (5) hydrogen fuel system for combustor model; (6) three inch calibration/development tunnel; (7) shock tunnel unsteady flow; (8) pulse detonation wave engine; (9) DCAF flow simulation; (10) high temperature shock layer simulation; and (11) the one dimensional Godunov CFD code.
NASA Technical Reports Server (NTRS)
Boyce, L.
1992-01-01
A probabilistic general material strength degradation model has been developed for structural components of aerospace propulsion systems subjected to diverse random effects. The model has been implemented in two FORTRAN programs, PROMISS (Probabilistic Material Strength Simulator) and PROMISC (Probabilistic Material Strength Calibrator). PROMISS calculates the random lifetime strength of an aerospace propulsion component due to as many as eighteen diverse random effects. Results are presented in the form of probability density functions and cumulative distribution functions of lifetime strength. PROMISC calibrates the model by calculating the values of empirical material constants.
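The kind of output PROMISS produces, a distribution of lifetime strength under many random effects, can be illustrated with a generic Monte Carlo sketch. The multiplicative degradation form and the beta-distributed loss factors below are assumptions for illustration, not the actual PROMISS strength model:

```python
import random

def lifetime_strength_samples(n, s0=1.0, n_effects=5, seed=42):
    """Monte Carlo sketch: each random effect multiplies strength by an
    independent degradation factor in (0, 1); the empirical distribution
    of the product approximates the lifetime-strength distribution."""
    rng = random.Random(seed)
    out = []
    for _ in range(n):
        s = s0
        for _ in range(n_effects):
            # Beta(2, 18) loss has mean 0.1, i.e. ~10% loss per effect.
            s *= 1.0 - rng.betavariate(2.0, 18.0)
        out.append(s)
    return out

def cdf_at(samples, x):
    """Empirical cumulative probability that lifetime strength <= x."""
    return sum(1 for s in samples if s <= x) / len(samples)

samples = lifetime_strength_samples(20000)
mean = sum(samples) / len(samples)
print(round(mean, 2))
```

Reporting `cdf_at` over a grid of strength values gives exactly the cumulative distribution functions that the abstract says PROMISS outputs; calibrating the loss-factor parameters against test data is the PROMISC-style step.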
Caliver: An R package for CALIbration and VERification of forest fire gridded model outputs.
Vitolo, Claudia; Di Giuseppe, Francesca; D'Andrea, Mirko
2018-01-01
The name caliver stands for CALIbration and VERification of forest fire gridded model outputs. This is a package developed for the R programming language and available under an APACHE-2 license from a public repository. In this paper we describe the functionalities of the package and give examples using publicly available datasets. Fire danger model outputs are taken from the modeling components of the European Forest Fire Information System (EFFIS) and observed burned areas from the Global Fire Emission Database (GFED). Complete documentation, including a vignette, is also available within the package.
NASA Astrophysics Data System (ADS)
Scott, M. L.; Gagarin, N.; Mekemson, J. R.; Chintakunta, S. R.
2011-06-01
Until recently, civil engineering material calibration data could only be obtained from material sample cores or via time consuming, stationary calibration measurements in a limited number of locations. Calibration data are used to determine material propagation velocities of electromagnetic waves in test materials for use in layer thickness measurements and subsurface imaging. Limitations these calibration methods impose have been a significant impediment to broader use of nondestructive evaluation methods such as ground-penetrating radar (GPR). In 2006, a new rapid, continuous calibration approach was designed using simulation software to address these measurement limitations during a Federal Highway Administration (FHWA) research and development effort. This continuous calibration method combines a digitally-synthesized step-frequency (SF)-GPR array and a data collection protocol sequence for the common midpoint (CMP) method. Modeling and laboratory test results for various data collection protocols and materials are presented in this paper. The continuous-CMP concept was finally implemented for FHWA in a prototype demonstration system called the Advanced Pavement Evaluation (APE) system in 2009. Data from the continuous-CMP protocol is processed using a semblance/coherency analysis to determine material propagation velocities. Continuously calibrated pavement thicknesses measured with the APE system in 2009 are presented. This method is efficient, accurate, and cost-effective.
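Once a propagation velocity has been calibrated, the layer-thickness computation itself is simple: velocity follows from the relative permittivity, and thickness from the two-way travel time. A minimal sketch (the permittivity and travel time in the example are typical illustrative values, not APE system data):

```python
C0 = 0.2998  # free-space speed of light, m/ns

def velocity(eps_r):
    """EM propagation velocity (m/ns) in a low-loss dielectric:
    v = c / sqrt(eps_r)."""
    return C0 / eps_r ** 0.5

def layer_thickness(two_way_time_ns, eps_r):
    """Thickness from the two-way reflection travel time, once the
    material velocity has been calibrated."""
    return velocity(eps_r) * two_way_time_ns / 2.0

# Concrete pavement (eps_r ~ 6), 4.1 ns two-way reflection delay
print(round(layer_thickness(4.1, 6.0), 3))  # about 0.251 m
```

The continuous-CMP method in the abstract supplies the velocity estimate along the survey line via semblance analysis, so eps_r never has to be assumed from cores.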
High Accuracy Transistor Compact Model Calibrations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hembree, Charles E.; Mar, Alan; Robertson, Perry J.
2015-09-01
Typically, transistors are modeled by the application of calibrated nominal and range models. These models consist of differing parameter values that describe the location and the upper and lower limits of a distribution of some transistor characteristic such as current capacity. Correspondingly, when using this approach, high degrees of accuracy of the transistor models are not expected, since the set of models is a surrogate for a statistical description of the devices. The use of these types of models describes expected performance considering the extremes of process or transistor deviations. In contrast, circuits that have very stringent accuracy requirements require modeling techniques with higher accuracy. Since these accurate models have low error in transistor descriptions, they can be used to describe part-to-part variations as well as to give an accurate description of a single circuit instance. Thus, models that meet these stipulations also enable the quantification of margins with respect to a functional threshold and of the uncertainties in these margins. Given this need, new high-accuracy model calibration techniques for bipolar junction transistors have been developed and are described in this report.
NASA Astrophysics Data System (ADS)
Beckers, J.; Frind, E. O.
2001-03-01
A steady-state groundwater model of the Oro Moraine aquifer system in Central Ontario, Canada, is developed. The model is used to identify the role of baseflow in the water balance of the Minesing Swamp, a 70 km² wetland of international significance. Lithologic descriptions are used to develop a hydrostratigraphic conceptual model of the aquifer system. The numerical model uses long-term averages to represent temporal variations of the flow regime and includes a mechanism to redistribute recharge in response to near-surface geologic heterogeneity. The model is calibrated to water level and streamflow measurements through inverse modeling. Observed baseflow and runoff quantities validate the water mass balance of the numerical model and provide information on the fraction of the water surplus that contributes to groundwater flow. The inverse algorithm is used to compare alternative model zonation scenarios, illustrating the power of non-linear regression in calibrating complex aquifer systems. The adjoint method is used to identify sensitive recharge areas for groundwater discharge to the Minesing Swamp. Model results suggest that nearby urban development will have a significant impact on baseflow to the swamp. Although the direct baseflow contribution makes up only a small fraction of the total inflow to the swamp, it provides an important steady influx of water over relatively large portions of the wetland. Urban development will also impact baseflow to the headwaters of local streams. The model provides valuable insight into crucial characteristics of the aquifer system although definite conclusions regarding details of its water budget are difficult to draw given current data limitations. The model therefore also serves to guide future data collection and studies of sub-areas within the basin.
NASA Technical Reports Server (NTRS)
Steele, W. G.; Molder, K. J.; Hudson, S. T.; Vadasy, K. V.; Rieder, P. T.; Giel, T.
2005-01-01
NASA and the U.S. Air Force are working on a joint project to develop a new hydrogen-fueled, full-flow, staged combustion rocket engine. The initial testing and modeling work for the Integrated Powerhead Demonstrator (IPD) project is being performed by NASA Marshall and Stennis Space Centers. A key factor in the testing of this engine is the ability to predict and measure the transient fluid flow during engine start and shutdown phases of operation. A model built by NASA Marshall in the ROCket Engine Transient Simulation (ROCETS) program is used to predict transient engine fluid flows. The model is initially calibrated to data from previous tests on the Stennis E1 test stand. The model is then used to predict the next run. Data from this run can then be used to recalibrate the model, providing a tool to guide the test program in incremental steps to reduce the risk to the prototype engine. In this paper, the authors define this type of model as a calibrated model. This paper proposes a method to estimate the uncertainty of a model calibrated to a set of experimental test data. The method is similar to that used in the calibration of experimental instrumentation. For the IPD example used in this paper, the model uncertainty is determined for both LOX and LH flow rates using previous data. The model is then successfully used to predict another similar test run within the uncertainty bounds. The paper summarizes the uncertainty methodology when a model is continually recalibrated with new test data. The methodology is general and can be applied to other calibrated models.
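The instrumentation-style uncertainty estimate described above can be sketched as follows. This is a minimal illustration, not the paper's exact procedure: the function name, the example residuals, and the 2-sigma coverage factor are all assumptions.

```python
import numpy as np

def calibration_uncertainty(model_pred, measured, coverage_k=2.0):
    """Estimate the uncertainty of a calibrated model the way instrument
    calibrations are handled: the mean residual is the systematic offset
    (removed by recalibration), and k times the standard deviation of the
    remaining residuals bounds the random part."""
    resid = np.asarray(measured, float) - np.asarray(model_pred, float)
    bias = resid.mean()
    u_random = coverage_k * resid.std(ddof=1)
    return bias, u_random

# Hypothetical flow-rate data from a previous calibrated test run
pred = np.array([10.2, 11.5, 9.8, 12.1, 10.9])
meas = np.array([10.0, 11.9, 9.9, 12.4, 11.0])
bias, u = calibration_uncertainty(pred, meas)
# A new model prediction p would then be reported as (p + bias) +/- u
```

Recalibrating with each new test run shrinks the bias term while the residual scatter tracks how well the model generalizes.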
A Precipitation-Runoff Model for the Blackstone River Basin, Massachusetts and Rhode Island
Barbaro, Jeffrey R.; Zarriello, Phillip J.
2007-01-01
A Hydrological Simulation Program-FORTRAN (HSPF) precipitation-runoff model of the Blackstone River Basin was developed and calibrated to study the effects of changing land- and water-use patterns on water resources. The 474.5 mi2 Blackstone River Basin in southeastern Massachusetts and northern Rhode Island is experiencing rapid population and commercial growth throughout much of its area. This growth and the corresponding changes in land-use patterns are increasing stress on water resources and raising concerns about the future availability of water to meet residential and commercial needs. Increased withdrawals and wastewater-return flows also could adversely affect aquatic habitat, water quality, and the recreational value of the streams in the basin. The Blackstone River Basin was represented by 19 hydrologic response units (HRUs): 17 types of pervious areas (PERLNDs) established from combinations of surficial geology, land-use categories, and the distribution of public water and public sewer systems, and two types of impervious areas (IMPLNDs). Wetlands were combined with open water and simulated as stream reaches that receive runoff from surrounding pervious and impervious areas. This approach was taken to achieve greater flexibility in calibrating evapotranspiration losses from wetlands during the growing season. The basin was segmented into 50 reaches (RCHRES) to represent junctions at tributaries, major lakes and reservoirs, and drainage areas to streamflow-gaging stations. Climatological, streamflow, water-withdrawal, and wastewater-return data were collected during the study to develop the HSPF model. Climatological data collected at Worcester Regional Airport in Worcester, Massachusetts and T.F. Green Airport in Warwick, Rhode Island, were used for model calibration. A total of 15 streamflow-gaging stations were used in the calibration. Streamflow was measured at eight continuous-record streamflow-gaging stations that are part of the U.S. 
Geological Survey cooperative streamflow-gaging network, and at seven partial-record stations installed in 2004 for this study. Because the model-calibration period preceded data collection at the partial-record stations, a continuous streamflow record was estimated at these stations by correlation with flows at nearby continuous-record stations to provide additional streamflow data for model calibration. Water-use information was compiled for 1996-2001 and included municipal and commercial/industrial withdrawals, private residential withdrawals, golf-course withdrawals, municipal wastewater-return flows, and on-site septic effluent return flows. Streamflow depletion was computed for all time-varying ground-water withdrawals prior to simulation. Water-use data were included in the model to represent the net effect of water use on simulated hydrographs. Consequently, the calibrated values of the hydrologic parameters better represent the hydrologic response of the basin to precipitation. The model was calibrated for 1997-2001 to coincide with the land-use and water-use data compiled for the study. Four long-term stations (Nipmuc River near Harrisville, Rhode Island; Quinsigamond River at North Grafton, Massachusetts; Branch River at Forestdale, Rhode Island; and Blackstone River at Woonsocket, Rhode Island) that monitor flow at 3.3, 5.4, 19, and 88 percent of the total basin area, respectively, provided the primary model-calibration points. Hydrographs, scatter plots, and flow-duration curves of observed and simulated discharges, along with various model-fit statistics, indicated that the model performed well over a range of hydrologic conditions. 
For example, the total runoff volume for the calibration period simulated at the Nipmuc River near Harrisville, Rhode Island; Quinsigamond River at North Grafton, Massachusetts; Branch River at Forestdale, Rhode Island; and Blackstone River at Woonsocket, Rhode Island streamflow-gaging stations differed from the observed runoff v
FATE 5: A natural attenuation calibration tool for groundwater fate and transport modeling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nevin, J.P.; Connor, J.A.; Newell, C.J.
1997-12-31
A new groundwater attenuation modeling tool (FATE 5) has been developed to assist users with determining site-specific natural attenuation rates for organic constituents dissolved in groundwater. FATE 5 is based on, and represents an enhancement to, the Domenico analytical groundwater transport model. These enhancements include use of an optimization routine to match results from the Domenico model to actual measured site concentrations, an extensive database of chemical property data, and calculation of an estimate of the length of time needed for a plume to reach steady-state conditions. FATE 5 was developed in Microsoft® Excel and is controlled by means of a simple, user-friendly graphical interface. Using the Solver routine built into Excel, FATE 5 is able to calibrate the attenuation rate used by the Domenico model to match site-specific data. By calibrating the decay rate to site-specific measurements, FATE 5 can yield accurate predictions of long-term natural attenuation processes within a groundwater plume. In addition, FATE 5 includes a formulation of the transient Domenico solution used to help the user determine if the steady-state assumptions employed by the model are appropriate. The calibrated groundwater flow model can then be used either to (i) predict upper-bound constituent concentrations in groundwater, based on an observed source zone concentration, or (ii) back-calculate a lower-bound SSTL value, based on a user-specified exposure point concentration at the groundwater point of exposure (POE). This paper reviews the major elements of the FATE 5 model and gives results for real-world applications. Key modeling assumptions and summary guidelines regarding calculation procedures and input parameter selection are also addressed.
Modeling soil temperature change in Seward Peninsula, Alaska
NASA Astrophysics Data System (ADS)
Debolskiy, M. V.; Nicolsky, D.; Romanovsky, V. E.; Muskett, R. R.; Panda, S. K.
2017-12-01
Increasing demand for assessment of climate change-induced permafrost degradation and its consequences promotes the creation of high-resolution modeling products of soil temperature change. This is especially relevant for areas with highly vulnerable warm discontinuous permafrost in Western Alaska. In this study, we apply an ecotype-based modeling approach to simulate high-resolution permafrost distribution and its temporal dynamics in the Seward Peninsula, Alaska. To model soil temperature dynamics, we use a transient soil heat transfer model developed at the Geophysical Institute Permafrost Laboratory (GIPL-2). The model solves a one-dimensional nonlinear heat equation with phase change. The model is forced with a combination of historical climate data and different future scenarios for 1900-2100 at 2x2 km resolution prepared by the Scenarios Network for Alaska and Arctic Planning (2017). Vegetation, snow, and soil properties are calibrated by ecotype and up-scaled using the Alaska Existing Vegetation Type map for Western Alaska (Flemming, 2015) at 30x30 m resolution provided by the Geographic Information Network of Alaska (UAF). The calibrated ecotypes cover over 75% of the study area. We calibrate the model using a data assimilation technique utilizing available observations of air, surface, and sub-surface temperatures and snow cover collected by various agencies and research groups (USGS, Geophysical Institute, USDA). The calibration approach takes into account the natural variability between stations in the same ecotype and finds an optimal set of model parameters (snow and soil properties) within the study area. This approach reduces the influence of microscale heterogeneity on soil temperature data aggregated from shallow boreholes, which are highly dependent on local conditions.
As a result of this study we present a series of preliminary high resolution maps for the Seward Peninsula showing changes in the active layer depth and ground temperatures for the current climate and future climate change scenarios.
Soo, Danielle H E; Pendharkar, Sayali A; Jivanji, Chirag J; Gillies, Nicola A; Windsor, John A; Petrov, Maxim S
2017-10-01
Approximately 40% of patients develop abnormal glucose metabolism after a single episode of acute pancreatitis. This study aimed to develop and validate a prediabetes self-assessment screening score for patients after acute pancreatitis. Data from non-overlapping training (n=82) and validation (n=80) cohorts were analysed. Univariate logistic and linear regression identified variables associated with prediabetes after acute pancreatitis. Multivariate logistic regression was used to develop the score, which ranges from 0 to 215. The area under the receiver-operating characteristic curve (AUROC), the Hosmer-Lemeshow χ² statistic, and calibration plots were used to assess model discrimination and calibration. The developed score was validated using data from the validation cohort. The score had an AUROC of 0.88 (95% CI, 0.80-0.97) and a Hosmer-Lemeshow χ² statistic of 5.75 (p=0.676). Patients with a score of ≥75 had a 94.1% probability of having prediabetes, and were 29 times more likely to have prediabetes than those with a score of <75. The AUROC in the validation cohort was 0.81 (95% CI, 0.70-0.92) and the Hosmer-Lemeshow χ² statistic was 5.50 (p=0.599). The score showed good calibration in both cohorts. The developed and validated score, called PERSEUS, is the first instrument to identify individuals who are at high risk of developing abnormal glucose metabolism following an episode of acute pancreatitis. Copyright © 2017 Editrice Gastroenterologica Italiana S.r.l. Published by Elsevier Ltd. All rights reserved.
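The two headline metrics above, AUROC for discrimination and the Hosmer-Lemeshow χ² for calibration, can be sketched from first principles as below. The simulated risks are illustrative; real scores such as PERSEUS are evaluated on patient data.

```python
import numpy as np
from scipy.stats import chi2

def auroc(y_true, y_prob):
    """Area under the ROC curve: the probability that a randomly chosen event
    case is ranked above a randomly chosen non-event case (ties count half)."""
    pos, neg = y_prob[y_true == 1], y_prob[y_true == 0]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

def hosmer_lemeshow(y_true, y_prob, n_bins=10):
    """Hosmer-Lemeshow chi-square: compare observed vs. expected event counts
    across risk-ordered bins; a large statistic (small p) signals miscalibration."""
    order = np.argsort(y_prob)
    stat = 0.0
    for idx in np.array_split(order, n_bins):
        n, exp, obs = len(idx), y_prob[idx].sum(), y_true[idx].sum()
        mean_p = exp / n
        stat += (obs - exp) ** 2 / (n * mean_p * (1.0 - mean_p))
    p_value = 1.0 - chi2.cdf(stat, n_bins - 2)  # df = bins - 2 on fitting data
    return stat, p_value

# Simulated, well-calibrated risks for illustration
rng = np.random.default_rng(0)
p = rng.uniform(0.05, 0.95, 2000)
y = (rng.random(2000) < p).astype(int)
print(auroc(y, p), hosmer_lemeshow(y, p))
```

With well-calibrated inputs, the HL p-value is typically non-significant, mirroring the p=0.676 and p=0.599 reported above.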
The spectral irradiance of the moon
Kieffer, H.H.; Stone, T.C.
2005-01-01
Images of the Moon at 32 wavelengths from 350 to 2450 nm have been obtained from a dedicated observatory during the bright half of each month over a period of several years. The ultimate goal is to develop a spectral radiance model of the Moon with an angular resolution and radiometric accuracy appropriate for calibration of Earth-orbiting spacecraft. An empirical model of irradiance has been developed that treats phase and libration explicitly, with absolute scale founded on the spectra of the star Vega and returned Apollo samples. A selected set of 190 standard stars is observed regularly to provide nightly extinction correction and long-term calibration of the observations. The extinction model is wavelength-coupled and based on the absorption coefficients of a number of gases and aerosols. The empirical irradiance model has the same form at each wavelength, with 18 coefficients, eight of which are constant across wavelength, for a total of 328 coefficients. Over 1000 lunar observations are fitted at each wavelength; the average residual is less than 1%. The irradiance model is actively being used in lunar calibration of several spacecraft instruments and can track sensor response changes at the 0.1% level. © 2005. The American Astronomical Society. All rights reserved.
Alamar, Priscila D; Caramês, Elem T S; Poppi, Ronei J; Pallone, Juliana A L
2016-07-01
The present study investigated the application of near infrared spectroscopy as a green, quick, and efficient alternative to analytical methods currently used to evaluate the quality (moisture, total sugars, acidity, soluble solids, pH and ascorbic acid) of frozen guava and passion fruit pulps. Fifty samples were analyzed by near infrared spectroscopy (NIR) and reference methods. Partial least squares regression (PLSR) was used to develop calibration models relating the NIR spectra to the reference values. Reference methods indicated adulteration by water addition in 58% of guava pulp samples and 44% of yellow passion fruit pulp samples. The PLS models produced low values of root mean square error of calibration (RMSEC) and root mean square error of prediction (RMSEP), and coefficients of determination above 0.7. Moisture and total sugars presented the best calibration models (RMSEP of 0.240 and 0.269, respectively, for guava pulp; RMSEP of 0.401 and 0.413, respectively, for passion fruit pulp), which enables the application of these models to detect adulteration of guava and yellow passion fruit pulp by water or sugar addition. The models constructed for calibration of quality parameters of frozen fruit pulps in this study indicate that NIR spectroscopy coupled with the multivariate calibration technique could be applied to determine the quality of guava and yellow passion fruit pulp. Copyright © 2016 Elsevier Ltd. All rights reserved.
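The PLSR calibration workflow (fit on a calibration set, report RMSEP on held-out samples) can be sketched with a minimal single-response NIPALS implementation. The simulated "spectra", band count, and component count below are illustrative assumptions, not the paper's data.

```python
import numpy as np

def pls1_fit(X, y, n_comp):
    """Fit a one-response PLS regression (NIPALS). Returns the coefficient
    vector in centered space plus the means needed for prediction."""
    x_mean, y_mean = X.mean(0), y.mean()
    Xk, yk = X - x_mean, y - y_mean
    W, P, Q = [], [], []
    for _ in range(n_comp):
        w = Xk.T @ yk
        w /= np.linalg.norm(w)        # weight vector
        t = Xk @ w                    # scores
        tt = t @ t
        p = Xk.T @ t / tt             # X loadings
        q = (yk @ t) / tt             # y loading
        Xk = Xk - np.outer(t, p)      # deflate X and y
        yk = yk - q * t
        W.append(w); P.append(p); Q.append(q)
    W, P, Q = np.array(W).T, np.array(P).T, np.array(Q)
    B = W @ np.linalg.solve(P.T @ W, Q)
    return B, x_mean, y_mean

def pls1_predict(X, B, x_mean, y_mean):
    return (X - x_mean) @ B + y_mean

# Illustrative data: a 'moisture' value drives 20 spectral bands plus noise
rng = np.random.default_rng(1)
y = rng.uniform(80, 90, 50)                              # e.g. % moisture
spectra = np.outer(y, rng.random(20)) + rng.normal(0, 0.5, (50, 20))
B, xm, ym = pls1_fit(spectra[:40], y[:40], n_comp=2)     # calibration set
pred = pls1_predict(spectra[40:], B, xm, ym)             # prediction set
rmsep = np.sqrt(np.mean((pred - y[40:]) ** 2))
```

RMSEC is computed the same way on the calibration samples; a large gap between RMSEC and RMSEP would indicate overfitting of the latent components.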
Li, Zhaofu; Liu, Hongyu; Luo, Chuan; Li, Yan; Li, Hengpeng; Pan, Jianjun; Jiang, Xiaosan; Zhou, Quansuo; Xiong, Zhengqin
2015-05-01
The Hydrological Simulation Program-Fortran (HSPF), which is a hydrological and water-quality computer model that was developed by the United States Environmental Protection Agency, was employed to simulate runoff and nutrient export from a typical small watershed in a hilly eastern monsoon region of China. First, a parameter sensitivity analysis was performed to assess how changes in the model parameters affect runoff and nutrient export. Next, the model was calibrated and validated using measured runoff and nutrient concentration data. The Nash-Sutcliffe efficiency (ENS) values of the yearly runoff were 0.87 and 0.69 for the calibration and validation periods, respectively. For storm runoff events, the ENS values were 0.93 for the calibration period and 0.47 for the validation period. Antecedent precipitation and soil moisture conditions can affect the simulation accuracy of storm event flow. The ENS values for the total nitrogen (TN) export were 0.58 for the calibration period and 0.51 for the validation period. In addition, the correlation coefficients between the observed and simulated TN concentrations were 0.84 for the calibration period and 0.74 for the validation period. For phosphorus export, the ENS values were 0.89 for the calibration period and 0.88 for the validation period. In addition, the correlation coefficients between the observed and simulated orthophosphate concentrations were 0.96 and 0.94 for the calibration and validation periods, respectively. The nutrient simulation results are generally satisfactory even though the parameter-lumped HSPF model cannot represent the effects of the spatial pattern of land cover on nutrient export. The model parameters obtained in this study could serve as reference values for applying the model to similar regions. In addition, HSPF can properly describe the characteristics of water quantity and quality processes in this area. 
After adjustment, calibration, and validation of the parameters, the HSPF model is suitable for hydrological and water-quality simulations in watershed planning and management and for designing best management practices.
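The Nash-Sutcliffe efficiency quoted throughout the abstract above has a simple closed form; a minimal sketch (the runoff numbers are made up for illustration):

```python
import numpy as np

def nash_sutcliffe(obs, sim):
    """Nash-Sutcliffe efficiency (ENS): 1 minus the ratio of model error
    variance to the variance of the observations about their mean.
    1 = perfect fit; 0 = no better than predicting the observed mean;
    negative = worse than the mean."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

# Illustrative yearly runoff (mm): observed vs. simulated
obs = [412.0, 380.0, 455.0, 398.0, 430.0]
sim = [405.0, 392.0, 440.0, 410.0, 428.0]
print(round(nash_sutcliffe(obs, sim), 2))  # → 0.83
```

Because the denominator is the variance of the observations, ENS is harder to score well on low-variability validation periods, one reason calibration and validation values (e.g. 0.93 vs. 0.47 for storm events) can differ sharply.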
Rapid analysis of pharmaceutical drugs using LIBS coupled with multivariate analysis.
Tiwari, P K; Awasthi, S; Kumar, R; Anand, R K; Rai, P K; Rai, A K
2018-02-01
Type 2 diabetes drug tablets containing voglibose at dose strengths of 0.2 and 0.3 mg from various brands have been examined using the laser-induced breakdown spectroscopy (LIBS) technique. Statistical methods such as principal component analysis (PCA) and partial least squares regression (PLSR) have been employed on the LIBS spectral data to classify the drug samples and develop calibration models for them. We have developed a ratio-based calibration model applying PLSR, in which the relative spectral intensity ratios H/C, H/N, and O/N are used. Further, the developed model has been employed to predict the relative elemental concentrations in unknown drug samples. The experiment has been performed in air and in an argon atmosphere, and the obtained results have been compared. The present model provides a rapid spectroscopic method for drug analysis with high statistical significance for online control and measurement processes in a wide variety of pharmaceutical industrial applications.
Gurarie, David; Karl, Stephan; Zimmerman, Peter A; King, Charles H; St Pierre, Timothy G; Davis, Timothy M E
2012-01-01
Agent-based modeling of Plasmodium falciparum infection offers an attractive alternative to the conventional Ross-Macdonald methodology, as it allows simulation of heterogeneous communities subjected to realistic transmission (inoculation patterns). We developed a new agent-based model that accounts for the essential in-host processes: parasite replication and its regulation by innate and adaptive immunity. The model also incorporates a simplified version of antigenic variation by Plasmodium falciparum. We calibrated the model using data from malaria-therapy (MT) studies, and developed a novel calibration procedure that accounts for a deterministic and a pseudo-random component in the observed parasite density patterns. Using the parasite density patterns of 122 MT patients, we generated a large number of calibrated parameters. The resulting data set served as a basis for constructing and simulating heterogeneous agent-based (AB) communities of MT-like hosts. We conducted several numerical experiments subjecting AB communities to realistic inoculation patterns reported from previous field studies, and compared the model output to the observed malaria prevalence in the field. There was overall consistency, supporting the potential of this agent-based methodology to represent transmission in realistic communities. Our approach represents a novel, convenient and versatile method to model Plasmodium falciparum infection.
Analysis of Multivariate Experimental Data Using A Simplified Regression Model Search Algorithm
NASA Technical Reports Server (NTRS)
Ulbrich, Norbert Manfred
2013-01-01
A new regression model search algorithm was developed in 2011 that may be used to analyze both general multivariate experimental data sets and wind tunnel strain-gage balance calibration data. The new algorithm is a simplified version of a more complex search algorithm that was originally developed at the NASA Ames Balance Calibration Laboratory. The new algorithm has the advantage that it needs only about one tenth of the original algorithm's CPU time for the completion of a search. In addition, extensive testing showed that the prediction accuracy of math models obtained from the simplified algorithm is similar to the prediction accuracy of math models obtained from the original algorithm. The simplified algorithm, however, cannot guarantee that search constraints related to a set of statistical quality requirements are always satisfied in the optimized regression models. Therefore, the simplified search algorithm is not intended to replace the original search algorithm. Instead, it may be used to generate an alternate optimized regression model of experimental data whenever the application of the original search algorithm either fails or requires too much CPU time. Data from a machine calibration of NASA's MK40 force balance is used to illustrate the application of the new regression model search algorithm.
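A toy forward-selection search in the spirit of, but far simpler than, the balance-calibration algorithm above: it greedily adds the candidate regressor that most reduces the residual sum of squares and enforces none of the statistical quality constraints the paper describes. Variable names and data are illustrative.

```python
import numpy as np

def rss(X_sub, y):
    """Residual sum of squares of an ordinary least-squares fit with intercept."""
    A = np.column_stack([np.ones(len(y)), X_sub])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    r = y - A @ beta
    return r @ r

def forward_select(X, y, names, max_terms):
    """Greedy forward selection: repeatedly add the regressor whose inclusion
    yields the lowest residual sum of squares."""
    chosen, remaining = [], list(range(X.shape[1]))
    for _ in range(max_terms):
        best = min(remaining, key=lambda j: rss(X[:, chosen + [j]], y))
        chosen.append(best)
        remaining.remove(best)
    return [names[j] for j in chosen]

# Illustrative data: y depends on two of five candidate regressors
rng = np.random.default_rng(2)
X = rng.normal(size=(200, 5))
y = 2.0 * X[:, 0] + 3.0 * X[:, 2] + rng.normal(0, 0.1, 200)
selected = forward_select(X, y, ["x1", "x2", "x3", "x4", "x5"], 2)
print(selected)
```

A production search like the one in the paper would additionally screen each candidate model against search constraints (e.g. term hierarchy and statistical significance) before accepting it.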
Markgraf, Rainer; Deutschinoff, Gerd; Pientka, Ludger; Scholten, Theo; Lorenz, Cristoph
2001-01-01
Background: Mortality predictions calculated using scoring scales are often not accurate in populations other than those in which the scales were developed because of differences in case-mix. The present study investigates the effect of first-level customization, using a logistic regression technique, on discrimination and calibration of the Acute Physiology and Chronic Health Evaluation (APACHE) II and III scales. Method: Probabilities of hospital death for patients were estimated by applying APACHE II and III and comparing these with observed outcomes. Using the split sample technique, a customized model to predict outcome was developed by logistic regression. The overall goodness-of-fit of the original and the customized models was assessed. Results: Of 3383 consecutive intensive care unit (ICU) admissions over 3 years, 2795 patients could be analyzed, and were split randomly into development and validation samples. The discriminative powers of APACHE II and III were unchanged by customization (areas under the receiver operating characteristic [ROC] curve 0.82 and 0.85, respectively). Hosmer-Lemeshow goodness-of-fit tests showed good calibration for APACHE II, but insufficient calibration for APACHE III. Customization improved calibration for both models, with a good fit for APACHE III as well. However, fit was different for various subgroups. Conclusions: The overall goodness-of-fit of APACHE III mortality prediction was improved significantly by customization, but uniformity of fit in different subgroups was not achieved. Therefore, application of the customized model provides no advantage, because differences in case-mix still limit comparisons of quality of care. PMID:11178223
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shen, Bo; Abdelaziz, Omar; Shrestha, Som S.
Based on the laboratory investigation in FY16, for R-22 and R-410A alternative low GWP refrigerants in two baseline rooftop air conditioners (RTU), we used the DOE/ORNL Heat Pump Design Model to model the two RTUs and calibrated the models against the experimental data. Using the calibrated equipment models, we compared the compressor efficiencies, heat exchanger performances. An efficiency-based compressor mapping method was developed, which is able to predict compressor performances of the alternative low GWP refrigerants accurately. Extensive model-based optimizations were conducted to provide a fair comparison between all the low GWP candidates by selecting their preferred configurations at themore » same cooling capacity and compressor efficiencies.« less
Calibration of hydrological models using flow-duration curves
NASA Astrophysics Data System (ADS)
Westerberg, I. K.; Guerrero, J.-L.; Younger, P. M.; Beven, K. J.; Seibert, J.; Halldin, S.; Freer, J. E.; Xu, C.-Y.
2010-12-01
The degree of belief we have in predictions from hydrologic models depends on how well they can reproduce observations. Calibrations with traditional performance measures such as the Nash-Sutcliffe model efficiency are challenged by problems including: (1) uncertain discharge data, (2) variable importance of the performance with flow magnitudes, (3) influence of unknown input/output errors and (4) inability to evaluate model performance when observation time periods for discharge and model input data do not overlap. A new calibration method using flow-duration curves (FDCs) was developed which addresses these problems. The method focuses on reproducing the observed discharge frequency distribution rather than the exact hydrograph. It consists of applying limits of acceptability for selected evaluation points (EPs) of the observed uncertain FDC in the extended GLUE approach. Two ways of selecting the EPs were tested - based on equal intervals of discharge and of volume of water. The method was tested and compared to a calibration using the traditional model efficiency for the daily four-parameter WASMOD model in the Paso La Ceiba catchment in Honduras and for Dynamic TOPMODEL evaluated at an hourly time scale for the Brue catchment in Great Britain. The volume method of selecting EPs gave the best results in both catchments with better calibrated slow flow, recession and evaporation than the other criteria. Observed and simulated time series of uncertain discharges agreed better for this method both in calibration and prediction in both catchments without resulting in overpredicted simulated uncertainty. An advantage with the method is that the rejection criterion is based on an estimation of the uncertainty in discharge data and that the EPs of the FDC can be chosen to reflect the aims of the modelling application e.g. using more/less EPs at high/low flows. 
While the new method is less sensitive to epistemic input/output errors than the normal use of limits of acceptability applied directly to the time series of discharge, it still requires a reasonable representation of the distribution of inputs. Additional constraints might therefore be required in catchments subject to snow. The results suggest that the new calibration method can be useful when observation time periods for discharge and model input data do not overlap. The new method could also be suitable for calibration to regional FDCs while taking uncertainties in the hydrological model and data into account.
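The FDC construction and the "volume method" of choosing evaluation points can be sketched as below; the synthetic lognormal discharge series is an illustrative assumption.

```python
import numpy as np

def flow_duration_curve(q):
    """Empirical flow-duration curve: discharges sorted in descending order
    against their exceedance probabilities."""
    q_sorted = np.sort(np.asarray(q, float))[::-1]
    exceedance = (np.arange(1, len(q_sorted) + 1) - 0.5) / len(q_sorted)
    return exceedance, q_sorted

def volume_evaluation_points(q, n_points):
    """Select evaluation points (EPs) so each interval between successive EPs
    carries an equal share of the total discharged volume -- the 'volume
    method' of EP selection described above."""
    exceedance, q_sorted = flow_duration_curve(q)
    cum_vol = np.cumsum(q_sorted) / q_sorted.sum()
    targets = (np.arange(1, n_points + 1) - 0.5) / n_points
    idx = np.searchsorted(cum_vol, targets)
    return exceedance[idx], q_sorted[idx]

# Illustrative synthetic daily discharges (m3/s)
rng = np.random.default_rng(5)
q = rng.lognormal(mean=1.0, sigma=0.6, size=1000)
ep_exceedance, ep_flows = volume_evaluation_points(q, 10)
```

Because high flows carry most of the volume, the volume method naturally concentrates EPs at low exceedance probabilities; limits of acceptability around each EP would then come from the estimated discharge-data uncertainty.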
An examination of land use impacts of flooding induced by sea level rise
NASA Astrophysics Data System (ADS)
Song, Jie; Fu, Xinyu; Gu, Yue; Deng, Yujun; Peng, Zhong-Ren
2017-03-01
Coastal regions have become unprecedentedly vulnerable to coastal hazards associated with sea level rise. The purpose of this paper is therefore to simulate prospective urban exposure to changing sea levels. This article first applied the cellular-automaton-based SLEUTH model (Project Gigalopolis, 2016) to calibrate historical urban dynamics in Bay County, Florida (USA), a region that is greatly threatened by rising sea levels. This paper estimated five urban growth parameters by multiple-calibration procedures that used different Monte Carlo iterations to account for modeling uncertainties. It then employed the calibrated model to predict three scenarios of urban growth up to 2080: historical trend, urban sprawl, and compact development. We also assessed land use impacts of four policies: no regulations; flood mitigation plans based on the whole study region and on those areas that are prone to experience growth; and the protection of conservation lands. This study lastly overlaid projected urban areas in 2030 and 2080 with 500-year flooding maps that were developed under 0, 0.2, and 0.9 m of sea level rise. The calibration results show that a substantial number of built-up regions extend from established coastal settlements. The predictions suggest that if urbanization progresses with few policy interventions, the total flooded area of newly urbanized regions in 2080 would be more than 25 times that under the flood mitigation policy. The joint model generates new knowledge in the domain between land use modeling and sea level rise. It contributes to coastal spatial planning by helping develop hazard mitigation schemes and can be employed in other international communities that face the combined pressure of urban growth and climate change.
Yasuda, Tomomi; Yonemura, Seiichiro; Tani, Akira
2012-01-01
Many sensors have to be used simultaneously for multipoint carbon dioxide (CO(2)) observation. All the sensors should be calibrated in advance, but this is a time-consuming process. To seek a simplified calibration method, we used four commercial CO(2) sensor models and characterized their output tendencies against ambient temperature and length of use, in addition to offset characteristics. We used four samples of standard gas with different CO(2) concentrations (0, 407, 1,110, and 1,810 ppm). The outputs of K30 and AN100 models showed linear relationships with temperature and length of use. Calibration coefficients for sensor models were determined using the data from three individual sensors of the same model to minimize the relative RMS error. When the correction was applied to the sensors, the accuracy of measurements improved significantly in the case of the K30 and AN100 units. In particular, in the case of K30 the relative RMS error decreased from 24% to 4%. Hence, we have chosen K30 for developing a portable CO(2) measurement device (10 × 10 × 15 cm, 900 g). Data of CO(2) concentration, measurement time and location, temperature, humidity, and atmospheric pressure can be recorded onto a Secure Digital (SD) memory card. The CO(2) concentration in a high-school lecture room was monitored with this device. The CO(2) data, when corrected for simultaneously measured temperature, water vapor partial pressure, and atmospheric pressure, showed a good agreement with the data measured by a highly accurate CO(2) analyzer, LI-6262. This indicates that acceptable accuracy can be realized using the calibration method developed in this study.
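The pooled correction the study fits per sensor model can be sketched as a linear least-squares problem. The linear form, coefficient values, and simulated sensor data below are assumptions for illustration, mirroring the reported linear trends with temperature and length of use.

```python
import numpy as np

def fit_correction(raw, temp, hours, true_conc):
    """Pooled least-squares fit of an assumed linear correction
    corrected = raw + a*temp + b*hours + c, with coefficients shared
    across sensors of the same model."""
    A = np.column_stack([temp, hours, np.ones_like(raw)])
    coef, *_ = np.linalg.lstsq(A, true_conc - raw, rcond=None)
    return coef  # a, b, c

def apply_correction(raw, temp, hours, coef):
    a, b, c = coef
    return raw + a * temp + b * hours + c

# Illustrative pooled readings from three sensors of one model
rng = np.random.default_rng(4)
true_conc = rng.uniform(400, 1800, 90)          # standard-gas CO2, ppm
temp = rng.uniform(10, 35, 90)                  # ambient temperature, deg C
hours = rng.uniform(0, 5000, 90)                # length of use, h
raw = true_conc - 0.8 * temp - 0.01 * hours - 5 + rng.normal(0, 2, 90)
coef = fit_correction(raw, temp, hours, true_conc)
corrected = apply_correction(raw, temp, hours, coef)
```

Fitting one coefficient set per model (rather than per unit) is what makes the simplified calibration cheap: new sensors of the same model reuse the pooled correction.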
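The linear drift correction described above can be sketched as an ordinary least squares fit. All data below are synthetic: the drift coefficients (0.8 ppm/°C, 0.02 ppm/h) and noise level are invented to illustrate the shape of such a calibration, not the paper's actual values.

```python
import numpy as np

# Synthetic example: raw readings of a 407 ppm standard gas drift with
# ambient temperature and length of use; the calibration coefficients are
# recovered by ordinary least squares. All coefficients are invented.
rng = np.random.default_rng(0)
true_co2 = np.full(100, 407.0)                 # standard gas concentration, ppm
temp = rng.uniform(10.0, 35.0, 100)            # ambient temperature, deg C
age = rng.uniform(0.0, 1000.0, 100)            # length of use, hours

# Simulated raw sensor output with temperature/age drift plus noise
raw = true_co2 + 0.8 * (temp - 25.0) + 0.02 * age + rng.normal(0.0, 2.0, 100)

# Fit calibration coefficients: raw - true = a*(temp - 25) + b*age + c
X = np.column_stack([temp - 25.0, age, np.ones_like(temp)])
coef, *_ = np.linalg.lstsq(X, raw - true_co2, rcond=None)

corrected = raw - X @ coef
rel_rms = np.sqrt(np.mean(((corrected - true_co2) / true_co2) ** 2))
print(f"relative RMS error after correction: {100 * rel_rms:.2f}%")
```

In the study, coefficients fitted on three sensors of a model were applied to other sensors of the same model, which is what makes the method a simplified, per-model calibration.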
NASA Astrophysics Data System (ADS)
Bolten, J. D.; Mohammed, I. N.; Srinivasan, R.; Lakshmi, V.
2017-12-01
Better understanding of the hydrological cycle of the Lower Mekong River Basin (LMRB) and addressing the value-added information of using remote sensing data on the spatial variability of soil moisture over the Mekong Basin is the objective of this work. We present the development and assessment of the LMRB (drainage area of 495,000 km2) Soil and Water Assessment Tool (SWAT). The coupled model framework presented is part of SERVIR, a joint capacity-building venture between NASA and the U.S. Agency for International Development, providing state-of-the-art, satellite-based earth monitoring, imaging and mapping data, geospatial information, predictive models, and science applications to improve environmental decision-making among multiple developing nations. The developed LMRB SWAT model enables the integration of satellite-based daily gridded precipitation, air temperature, digital elevation model, soil texture, and land cover and land use data to drive SWAT model simulations over the Lower Mekong River Basin. The LMRB SWAT model driven by remote sensing climate data was calibrated and verified with observed runoff data at the watershed outlet as well as at multiple sites along the main river course. Another LMRB SWAT model set driven by in-situ climate observations was also calibrated and verified against streamflow data. Simulated soil moisture estimates from the two models were then examined and compared to downscaled Soil Moisture Active Passive (SMAP) 36 km radiometer products. Results from this work present a framework for improving SWAT performance by utilizing downscaled SMAP soil moisture products for model calibration and validation. Index Terms: 1622: Earth system modeling; 1631: Land/atmosphere interactions; 1800: Hydrology; 1836: Hydrological cycles and budgets; 1840: Hydrometeorology; 1855: Remote sensing; 1866: Soil moisture; 6334: Regional Planning
Evaluation of Potential Evapotranspiration from a Hydrologic Model on a National Scale
NASA Astrophysics Data System (ADS)
Hakala, K. A.; Hay, L.; Markstrom, S. L.
2014-12-01
The US Geological Survey has developed a National Hydrologic Model (NHM) to support coordinated, comprehensive and consistent hydrologic model development and facilitate the application of simulations on the scale of the continental US. The NHM has a consistent geospatial fabric for modeling, consisting of over 100,000 hydrologic response units (HRUs). Each HRU requires accurate parameter estimates, some of which are attained from automated calibration. However, improved calibration can be achieved by initially utilizing as many parameters as possible from national data sets. This presentation investigates the effectiveness of calculating potential evapotranspiration (PET) parameters based on mean monthly values from the NOAA PET Atlas. Additional PET products are then used to evaluate the PET parameters. Effectively utilizing existing national-scale data sets can simplify the effort in establishing a robust NHM.
Modeling to Support the Development of Habitat Targets for Piping Plovers on the Missouri River
DOE Office of Scientific and Technical Information (OSTI.GOV)
Buenau, Kate E.
2015-05-05
Report on modeling and analyses done in support of developing quantitative sandbar habitat targets for piping plovers, including assessment of reference, historical, dams-present-but-not-operated, and habitat construction scenarios calibrated to meet population viability targets.
Development of a Dynamic Traffic Assignment Model for Northern Nevada
DOT National Transportation Integrated Search
2014-06-01
The objective of this research is to build and calibrate a DTA model for Northern Nevada (Reno-Sparks area) based on the network profile and travel demand information updated to date. The critical procedures include development of consistent and readi...
Data-base development for water-quality modeling of the Patuxent River basin, Maryland
Fisher, G.T.; Summers, R.M.
1987-01-01
Procedures and rationale used to develop a data base and data management system for the Patuxent Watershed Nonpoint Source Water Quality Monitoring and Modeling Program of the Maryland Department of the Environment and the U.S. Geological Survey are described. A detailed data base and data management system has been developed to facilitate modeling of the watershed for water quality planning purposes; statistical analysis; plotting of meteorologic, hydrologic, and water quality data; and geographic data analysis. The system is Maryland's prototype for development of a basinwide water quality management program. A key step in the program is to build a calibrated and verified water quality model of the basin using the Hydrological Simulation Program--FORTRAN (HSPF) hydrologic model, which has been used extensively in large-scale basin modeling. The compilation of the substantial existing data base for preliminary calibration of the basin model, including meteorologic, hydrologic, and water quality data from federal and state data bases and a geographic information system containing digital land use and soils data, is described. The data base development is significant in its application of an integrated, uniform approach to data base management and modeling.
A strain-mediated corrosion model for bioabsorbable metallic stents.
Galvin, E; O'Brien, D; Cummins, C; Mac Donald, B J; Lally, C
2017-06-01
This paper presents a strain-mediated phenomenological corrosion model, based on the discrete finite element modelling method which was developed for use with the ANSYS Implicit finite element code. The corrosion model was calibrated from experimental data and used to simulate the corrosion performance of a WE43 magnesium alloy stent. The model was found to be capable of predicting the experimentally observed plastic strain-mediated mass loss profile. The non-linear plastic strain model, extrapolated from the experimental data, was also found to adequately capture the corrosion-induced reduction in the radial stiffness of the stent over time. The model developed will help direct future design efforts towards the minimisation of plastic strain during device manufacture, deployment and in-service, in order to reduce corrosion rates and prolong the mechanical integrity of magnesium devices. The need for corrosion models that explore the interaction of strain with corrosion damage has been recognised as one of the current challenges in degradable material modelling (Gastaldi et al., 2011). A finite element based plastic strain-mediated phenomenological corrosion model was developed in this work and was calibrated based on the results of the corrosion experiments. It was found to be capable of predicting the experimentally observed plastic strain-mediated mass loss profile and the corrosion-induced reduction in the radial stiffness of the stent over time. To the author's knowledge, the results presented here represent the first experimental calibration of a plastic strain-mediated corrosion model of a corroding magnesium stent. Copyright © 2017 Acta Materialia Inc. Published by Elsevier Ltd. All rights reserved.
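A minimal sketch of a discrete, plastic-strain-mediated corrosion model in the spirit described above: a 2D grid of "elements" corrodes from the exposed top surface, an element's damage rate grows with its local plastic strain, and fully damaged elements are removed, exposing those below. The grid, rates, and the linear strain factor are illustrative assumptions, not the paper's calibrated WE43 model.

```python
import numpy as np

# Discrete element-removal corrosion sketch. All parameters are invented;
# the strain amplification factor (1 + 5*strain) is an assumed form.
rng = np.random.default_rng(4)
ny, nx = 20, 30
alive = np.ones((ny, nx), dtype=bool)          # elements not yet removed
damage = np.zeros((ny, nx))                    # corrosion damage in [0, 1]
base_rate = 0.05
plastic_strain = rng.uniform(0.0, 0.1, (ny, nx))   # imposed strain field

def exposed(alive):
    # alive elements on the top row, or with a removed element directly above
    above_removed = np.vstack([np.ones((1, nx), dtype=bool), ~alive[:-1]])
    return alive & above_removed

mass = [int(alive.sum())]
for _ in range(200):
    surf = exposed(alive)
    # damage accumulates faster where plastic strain is high
    rate = base_rate * (1.0 + 5.0 * plastic_strain[surf])
    damage[surf] += rate * rng.uniform(0.5, 1.5, surf.sum())
    alive &= damage < 1.0                      # remove fully corroded elements
    mass.append(int(alive.sum()))

print(f"elements remaining after 200 steps: {mass[-1]} of {ny * nx}")
```

Calibration, as in the paper, would consist of tuning the rate parameters until the simulated mass-loss profile matches the experimental one.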
Calibrating Parameters of Power System Stability Models using Advanced Ensemble Kalman Filter
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huang, Renke; Diao, Ruisheng; Li, Yuanyuan
With the ever-increasing penetration of renewable energy, smart loads, energy storage, and new market behavior, today’s power grid becomes more dynamic and stochastic, which may invalidate traditional study assumptions and pose great operational challenges. Thus, it is of critical importance to maintain good-quality models for secure and economic planning and real-time operation. Following the 1996 Western Systems Coordinating Council (WSCC) system blackout, the North American Electric Reliability Corporation (NERC) and the Western Electricity Coordinating Council (WECC) in North America enforced a number of policies and standards to guide the power industry to periodically validate power grid models and calibrate poor parameters with the goal of building sufficient confidence in model quality. The PMU-based approach using online measurements without interfering with the operation of generators provides a low-cost alternative to meet NERC standards. This paper presents an innovative procedure and tool suites to validate and calibrate models based on a trajectory sensitivity analysis method and an advanced ensemble Kalman filter algorithm. The developed prototype demonstrates excellent performance in identifying and calibrating bad parameters of a realistic hydro power plant against multiple system events.
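A minimal ensemble-Kalman-filter parameter-calibration sketch, assuming a toy scalar decay model in place of the power-plant dynamics: each ensemble member carries a (state, parameter) pair, and the perturbed-observation update is applied to both. All numbers are illustrative assumptions, not values from the paper.

```python
import numpy as np

# EnKF parameter calibration on a toy model x_{k+1} = x_k - c * x_k * dt.
# The parameter c is estimated jointly with the state from noisy
# observations of x. Everything here is an illustrative stand-in.
rng = np.random.default_rng(1)
dt, steps, n_ens = 0.05, 80, 200
c_true = 0.9

# Generate a noisy "measured" trajectory with the true parameter
x, xs = 1.0, []
for _ in range(steps):
    x -= c_true * x * dt
    xs.append(x)
obs = np.array(xs) + rng.normal(0.0, 0.01, steps)
r = 0.01 ** 2                                  # observation error variance

state = np.full(n_ens, 1.0)                    # ensemble of states
param = rng.uniform(0.1, 2.0, n_ens)           # prior ensemble for c

for k in range(steps):
    state = state - param * state * dt         # forecast each member
    var_x = np.var(state, ddof=1)
    cov_cx = np.cov(param, state)[0, 1]        # parameter-state covariance
    innov = obs[k] + rng.normal(0.0, 0.01, n_ens) - state  # perturbed obs
    state = state + (var_x / (var_x + r)) * innov
    param = param + (cov_cx / (var_x + r)) * innov

print(f"estimated c = {param.mean():.2f} (true value {c_true})")
```

The key mechanism is the same as in large-scale applications: the ensemble's parameter-state covariance channels observation misfits into parameter corrections, without differentiating the model.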
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Chao; Xu, Zhijie; Lai, Kevin
2017-10-24
Part 1 of this paper presents a numerical model for non-reactive physical mass transfer across a wetted wall column (WWC). In Part 2, we improved the existing computational fluid dynamics (CFD) model to simulate chemical absorption occurring in a WWC as a bench-scale study of solvent-based carbon dioxide (CO2) capture. To generate data for WWC model validation, CO2 mass transfer across a monoethanolamine (MEA) solvent was first measured on a WWC experimental apparatus. The numerical model developed in this work can account for both chemical absorption and desorption of CO2 in MEA. In addition, the overall mass transfer coefficient predicted using traditional/empirical correlations is compared with CFD prediction results for both steady and wavy falling films. A Bayesian statistical calibration algorithm is adopted to calibrate the reaction rate constants in chemical absorption/desorption of CO2 across a falling film of MEA. The posterior distributions of the two transport properties, i.e., Henry's constant and gas diffusivity in the non-reacting nitrous oxide (N2O)/MEA system obtained from Part 1 of this study, serve as priors for the calibration of CO2 reaction rate constants after using the N2O/CO2 analogy method. The calibrated model can be used to predict the CO2 mass transfer in a WWC for a wider range of operating conditions.
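The Bayesian calibration step can be illustrated with a toy random-walk Metropolis sampler: a rate constant is inferred from noisy data, with a Gaussian prior playing the role the Part 1 posteriors play in the paper. The first-order model and all numbers are stand-ins, not the WWC CFD model.

```python
import numpy as np

# Toy Bayesian calibration of a reaction rate constant k for the model
# y(t) = 1 - exp(-k t), via random-walk Metropolis. Illustrative only.
rng = np.random.default_rng(2)
t = np.linspace(0.1, 3.0, 30)
k_true, sigma = 1.5, 0.02
y_obs = 1.0 - np.exp(-k_true * t) + rng.normal(0.0, sigma, t.size)

def log_post(k):
    if k <= 0.0:
        return -np.inf                             # rate must be positive
    resid = y_obs - (1.0 - np.exp(-k * t))
    log_lik = -0.5 * np.sum(resid ** 2) / sigma ** 2
    log_prior = -0.5 * ((k - 1.0) / 0.5) ** 2      # N(1.0, 0.5^2) prior on k
    return log_lik + log_prior

k, lp, chain = 1.0, log_post(1.0), []
for _ in range(5000):
    prop = k + rng.normal(0.0, 0.1)                # random-walk proposal
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:       # Metropolis accept/reject
        k, lp = prop, lp_prop
    chain.append(k)

posterior = np.array(chain[1000:])                 # discard burn-in
print(f"posterior mean k = {posterior.mean():.2f}, sd = {posterior.std():.3f}")
```

Chaining posteriors into priors, as the paper does with the N2O/MEA transport properties, amounts to replacing the Gaussian prior above with the distribution obtained from the earlier calibration.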
Christiansen, Daniel E.
2012-01-01
The U.S. Geological Survey, in cooperation with the Iowa Department of Natural Resources, conducted a study to examine techniques for estimation of daily streamflows using hydrological models and statistical methods. This report focuses on the use of a hydrologic model, the U.S. Geological Survey's Precipitation-Runoff Modeling System, to estimate daily streamflows at gaged and ungaged locations. The Precipitation-Runoff Modeling System is a modular, physically based, distributed-parameter modeling system developed to evaluate the impacts of various combinations of precipitation, climate, and land use on surface-water runoff and general basin hydrology. The Cedar River Basin was selected to construct a Precipitation-Runoff Modeling System model that simulates the period from January 1, 2000, to December 31, 2010. The calibration period was from January 1, 2000, to December 31, 2004, and the validation periods were from January 1, 2005, to December 31, 2010, and from January 1, 2000, to December 31, 2010. A Geographic Information System tool was used to delineate the Cedar River Basin and subbasins for the Precipitation-Runoff Modeling System model and to derive parameters based on the physical geographical features. Calibration of the Precipitation-Runoff Modeling System model was completed using a U.S. Geological Survey calibration software tool. The main objective of the calibration was to match the daily streamflow simulated by the Precipitation-Runoff Modeling System model with streamflow measured at U.S. Geological Survey streamflow gages. The Cedar River Basin daily streamflow model performed with a Nash-Sutcliffe efficiency that ranged from 0.82 to 0.33 during the calibration period and from 0.77 to -0.04 during the validation period. The Cedar River Basin model meets the criterion of a Nash-Sutcliffe efficiency greater than 0.50 and is a good fit for streamflow conditions for the calibration period at all but one location, Austin, Minnesota.
The Precipitation-Runoff Modeling System model accurately simulated streamflow at four of six uncalibrated sites within the basin. Overall, there was good agreement between simulated and measured seasonal and annual volumes throughout the basin for calibration and validation sites. The calibration period ranged from 0.2 to 20.8 percent difference, and the validation period ranged from 0.0 to 19.5 percent difference across all seasons and total annual runoff. The Precipitation-Runoff Modeling System model tended to underestimate lower streamflows compared to the observed streamflow values. This is an indication that the Precipitation-Runoff Modeling System model needs more detailed groundwater and storage information to properly model the low-flow conditions in the Cedar River Basin.
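The Nash-Sutcliffe efficiency quoted above is a standard goodness-of-fit criterion for streamflow models and is simple to compute; a minimal sketch with made-up daily streamflows:

```python
import numpy as np

# Nash-Sutcliffe efficiency (NSE): 1 - SSE(simulation) / SSE(observed mean).
# NSE = 1 is a perfect fit, 0 means the model does no better than predicting
# the observed mean, and negative values mean it does worse. The streamflow
# numbers below are made up for illustration.
def nse(observed, simulated):
    observed = np.asarray(observed, dtype=float)
    simulated = np.asarray(simulated, dtype=float)
    return 1.0 - np.sum((observed - simulated) ** 2) / np.sum(
        (observed - observed.mean()) ** 2
    )

obs = np.array([10.0, 12.0, 30.0, 55.0, 40.0, 22.0, 15.0])  # observed flows
sim = np.array([11.0, 13.0, 27.0, 50.0, 43.0, 24.0, 14.0])  # simulated flows
print(f"NSE = {nse(obs, sim):.2f}")
```

The report's 0.50 threshold is a common rule of thumb: above it, the model explains at least half of the observed variance beyond the mean.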
The Extended HANDS Characterization and Analysis of Metric Biases
NASA Astrophysics Data System (ADS)
Kelecy, T.; Knox, R.; Cognion, R.
The Extended High Accuracy Network Determination System (Extended HANDS) consists of a network of low cost, high accuracy optical telescopes designed to support space surveillance and development of space object characterization technologies. Comprising off-the-shelf components, the telescopes are designed to provide sub arc-second astrometric accuracy. The design and analysis team are in the process of characterizing the system through development of an error allocation tree whose assessment is supported by simulation, data analysis, and calibration tests. The metric calibration process has revealed 1-2 arc-second biases in the right ascension and declination measurements of reference satellite position, and these have been observed to have fairly distinct characteristics that appear to have some dependence on orbit geometry and tracking rates. The work presented here outlines error models developed to aid in development of the system error budget, and examines characteristic errors (biases, time dependence, etc.) that might be present in each of the relevant system elements used in the data collection and processing, including the metric calibration processing. The relevant reference frames are identified, and include the sensor (CCD camera) reference frame, Earth-fixed topocentric frame, topocentric inertial reference frame, and the geocentric inertial reference frame. The errors modeled in each of these reference frames, when mapped into the topocentric inertial measurement frame, reveal how errors might manifest themselves through the calibration process. The error analysis results that are presented use satellite-sensor geometries taken from periods where actual measurements were collected, and reveal how modeled errors manifest themselves over those specific time periods. These results are compared to the real calibration metric data (right ascension and declination residuals), and sources of the bias are hypothesized. 
In turn, the actual right ascension and declination calibration residuals are also mapped to other relevant reference frames in an attempt to validate the source of the bias errors. These results will serve as the basis for more focused investigation into specific components embedded in the system and system processes that might contain the source of the observed biases.
Modeling the Death Valley regional ground-water flow system
D'Agnese, F. A.; Faunt, C.C.; Hill, M.C.
2004-01-01
The development of a regional ground-water flow model of the Death Valley region in the southwestern United States is discussed in the context of the fourteen guidelines of Hill. This application of the guidelines demonstrates how they may be used for model calibration and evaluation, and to direct further model development and data collection.
The Lake Michigan Mass Balance Project (LMMBP) was initiated to support the development of a Lake Wide Management Plan (LaMP) for Lake Michigan. As one of the models in the LMMBP modeling framework, the Level 2 Lake Michigan containment transport and fate (LM2) model has been dev...
Development and Evaluation of a UAV-Photogrammetry System for Precise 3D Environmental Modeling.
Shahbazi, Mozhdeh; Sohn, Gunho; Théau, Jérôme; Menard, Patrick
2015-10-30
The specific requirements of UAV-photogrammetry necessitate particular solutions for system development, which have mostly been ignored or not assessed adequately in recent studies. Accordingly, this paper presents the methodological and experimental aspects of correctly implementing a UAV-photogrammetry system. The hardware of the system consists of an electric-powered helicopter, a high-resolution digital camera and an inertial navigation system. The software of the system includes the in-house programs specifically designed for camera calibration, platform calibration, system integration, on-board data acquisition, flight planning and on-the-job self-calibration. The detailed features of the system are discussed, and solutions are proposed in order to enhance the system and its photogrammetric outputs. The developed system is extensively tested for precise modeling of the challenging environment of an open-pit gravel mine. The accuracy of the results is evaluated under various mapping conditions, including direct georeferencing and indirect georeferencing with different numbers, distributions and types of ground control points. Additionally, the effects of imaging configuration and network stability on modeling accuracy are assessed. The experiments demonstrated that 1.55 m horizontal and 3.16 m vertical absolute modeling accuracy could be achieved via direct geo-referencing, which was improved to 0.4 cm and 1.7 cm after indirect geo-referencing.
Comparison of Test and Finite Element Analysis for Two Full-Scale Helicopter Crash Tests
NASA Technical Reports Server (NTRS)
Annett, Martin S.; Horta, Lucas G.
2011-01-01
Finite element analyses have been performed for two full-scale crash tests of an MD-500 helicopter. The first crash test was conducted to evaluate the performance of a composite deployable energy absorber under combined flight loads. In the second crash test, the energy absorber was removed to establish the baseline loads. The use of an energy absorbing device reduced the impact acceleration levels by a factor of three. Accelerations and kinematic data collected from the crash tests were compared to analytical results. Details of the full-scale crash tests and development of the system-integrated finite element model are briefly described along with direct comparisons of acceleration magnitudes and durations for the first full-scale crash test. Because load levels were significantly different between tests, models developed for the purposes of predicting the overall system response with external energy absorbers were not adequate under more severe conditions seen in the second crash test. Relative error comparisons were inadequate to guide model calibration. A newly developed model calibration approach that includes uncertainty estimation, parameter sensitivity, impact shape orthogonality, and numerical optimization was used for the second full-scale crash test. The calibrated parameter set reduced 2-norm prediction error by 51% but did not improve impact shape orthogonality.
Gamma model and its analysis for phase measuring profilometry.
Liu, Kai; Wang, Yongchang; Lau, Daniel L; Hao, Qi; Hassebrook, Laurence G
2010-03-01
Phase measuring profilometry is a method of structured light illumination whose three-dimensional reconstructions are susceptible to error from nonunitary gamma in the associated optical devices. While the effects of this distortion diminish with an increasing number of employed phase-shifted patterns, gamma distortion may be unavoidable in real-time systems where the number of projected patterns is limited by the presence of target motion. A mathematical model is developed for predicting the effects of nonunitary gamma on phase measuring profilometry, while also introducing an accurate gamma calibration method and two strategies for minimizing gamma's effect on phase determination. These phase correction strategies include phase corrections with and without gamma calibration. Along with the reduction in noise, for three-step phase measuring profilometry, analysis of the root-mean-squared error of the corrected phase shows a 60x reduction in phase error when the proposed gamma calibration is performed, versus a 33x reduction without calibration.
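A sketch of three-step phase recovery and of how nonunitary gamma distorts it: three patterns shifted by 2π/3 are "captured" through a power-law gamma, and the phase is extracted with and without linearizing the intensities first. The simple power-law pre-correction shown is a generic illustration, not the specific calibration strategy proposed in the paper.

```python
import numpy as np

# Three-step phase measuring profilometry with a simulated gamma of 2.2.
# Numbers and the pre-correction approach are illustrative assumptions.
phi = np.linspace(-np.pi, np.pi, 512, endpoint=False)   # true phase map
gamma = 2.2                                             # nonunitary gamma

shifts = np.array([-2.0 * np.pi / 3.0, 0.0, 2.0 * np.pi / 3.0])
ideal = 0.5 + 0.5 * np.cos(phi[None, :] + shifts[:, None])  # patterns in [0, 1]
captured = ideal ** gamma                               # gamma-distorted capture

def three_step_phase(i1, i2, i3):
    # standard three-step phase extraction for shifts -120, 0, +120 degrees
    return np.arctan2(np.sqrt(3.0) * (i1 - i3), 2.0 * i2 - i1 - i3)

def rms_phase_error(estimate):
    wrapped = np.angle(np.exp(1j * (estimate - phi)))   # wrap to (-pi, pi]
    return np.sqrt(np.mean(wrapped ** 2))

phi_distorted = three_step_phase(*captured)
phi_corrected = three_step_phase(*(captured ** (1.0 / gamma)))

print(f"RMS phase error without correction: {rms_phase_error(phi_distorted):.3f} rad")
print(f"RMS phase error after linearization: {rms_phase_error(phi_corrected):.1e} rad")
```

The distorted result shows the periodic phase ripple characteristic of gamma-induced harmonics; linearizing with the (here, exactly known) gamma removes it, which is why an accurate gamma calibration matters.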
Hydrological Scenario Using Tools and Applications Available in enviroGRIDS Portal
NASA Astrophysics Data System (ADS)
Bacu, V.; Mihon, D.; Stefanut, T.; Rodila, D.; Cau, P.; Manca, S.; Soru, C.; Gorgan, D.
2012-04-01
Nowadays not only decision makers but also citizens are concerned with the sustainability and vulnerability of land management practices in various respects, in particular regarding water quality and quantity in complex watersheds. The Black Sea Catchment is an important watershed in Central and Eastern Europe. The FP7 project enviroGRIDS [1] developed a Web Portal that incorporates different tools and applications focused on geospatial data management, hydrologic model calibration, execution and visualization, and training activities. This presentation highlights, from the end-user point of view, the scenario related to hydrological models using the tools and applications available in the enviroGRIDS Web Portal [2]. The development of SWAT (Soil and Water Assessment Tool) hydrological models is a well-known procedure for hydrological specialists [3]. Starting from the primary data (information related to weather, soil properties, topography, vegetation, and land management practices of the particular watershed) that are used to develop SWAT hydrological models, through to specific reports about the water quality in the studied watershed, the hydrological specialist will use different applications available in the enviroGRIDS portal. The tools and applications available through the enviroGRIDS portal do not deal with the building up of the SWAT hydrological models themselves. They are mainly focused on: the calibration procedure (gSWAT [4]), which uses the GRID computational infrastructure to speed up the calibration process; the development of specific scenarios (BASHYT [5]), which starts from an already calibrated SWAT hydrological model and defines new scenarios; the execution of scenarios (gSWATSim [6]), which executes the scenarios exported from BASHYT; and visualization (BASHYT), which displays charts, tables, and maps. Each application is built up as a stack of functional layers. We combine different layers of applications by vertical interoperability in order to build the desired complex functionality.
On the other hand, the applications can collaborate at the same architectural levels, which represents the horizontal interoperability. Both horizontal and vertical interoperability are accomplished by services and by exchanging data. The calibration procedure requires huge computational resources, which are provided by the Grid infrastructure. On the other hand, the scenario development through BASHYT requires a flexible way of interacting with the SWAT model in order to easily change the input model. The large user community of SWAT, from the enviroGRIDS consortium or outside, may greatly benefit from the tools and applications related to the calibration process, scenario development, and execution available in the enviroGRIDS portal. [1]. enviroGRIDS project, http://envirogrids.net/ [2]. Gorgan D., Abbaspour K., Cau P., Bacu V., Mihon D., Giuliani G., Ray N., Lehmann A., Grid Based Data Processing Tools and Applications for Black Sea Catchment Basin. IDAACS 2011 - The 6th IEEE International Conference on Intelligent Data Acquisition and Advanced Computing Systems: Technology and Applications 15-17 September 2011, Prague. IEEE Computer Press, pp. 223 - 228 (2011). [3]. Soil and Water Assessment Tool, http://www.brc.tamus.edu/swat/index.html [4]. Bacu V., Mihon D., Rodila D., Stefanut T., Gorgan D., Grid Based Architectural Components for SWAT Model Calibration. HPCS 2011 - International Conference on High Performance Computing and Simulation, 4-8 July, Istanbul, Turkey, ISBN 978-1-61284-381-0, doi: 10.1109/HPCSim.2011.5999824, pp. 193-198 (2011). [5]. Manca S., Soru C., Cau P., Meloni G., Fiori M., A multi model and multiscale, GIS oriented Web framework based on the SWAT model to face issues of water and soil resource vulnerability. Presentation at the 5th International SWAT Conference, August 3-7, 2009, http://www.brc.tamus.edu/swat/4thswatconf/docs/rooma/session5/Cau-Bashyt.pdf [6].
Bacu V., Mihon D., Stefanut T., Rodila D., Gorgan D., Cau P., Manca S., Grid Based Services and Tools for Hydrological Model Processing and Visualization. SYNASC 2011 - 13th International Symposium on Symbolic and Numeric Algorithms for Scientific Computing (in press).
Brouckaert, D; Uyttersprot, J-S; Broeckx, W; De Beer, T
2018-03-01
Calibration transfer or standardisation aims at creating a uniform spectral response on different spectroscopic instruments or under varying conditions, without requiring a full recalibration for each situation. In the current study, this strategy is applied to construct at-line multivariate calibration models and consequently employ them in-line in a continuous industrial production line, using the same spectrometer. Firstly, quantitative multivariate models are constructed at-line at laboratory scale for predicting the concentration of two main ingredients in hard surface cleaners. By regressing the Raman spectra of a set of small-scale calibration samples against their reference concentration values, partial least squares (PLS) models are developed to quantify the surfactant levels in the liquid detergent compositions under investigation. After evaluating the models' performance with a set of independent validation samples, a univariate slope/bias correction is applied in view of transporting these at-line calibration models to an in-line manufacturing set-up. This standardisation technique allows a fast and easy transfer of the PLS regression models, by simply correcting the model predictions on the in-line set-up, without adjusting anything in the original multivariate calibration models. An extensive statistical analysis is performed in order to assess the predictive quality of the transferred regression models. Before and after transfer, the R2 and RMSEP of both models are compared to evaluate whether their magnitudes are similar. T-tests are then performed to investigate whether the slope and intercept of the transferred regression line are statistically different from 1 and 0, respectively. Furthermore, it is inspected whether any significant bias can be noted. F-tests are executed as well, for assessing the linearity of the transfer regression line and for investigating the statistical coincidence of the transfer and validation regression lines.
Finally, a paired t-test is performed to compare the original at-line model to the slope/bias corrected in-line model, using interval hypotheses. It is shown that the calibration models of Surfactant 1 and Surfactant 2 yield satisfactory in-line predictions after slope/bias correction. While Surfactant 1 passes seven out of eight statistical tests, the recommended validation parameters are 100% successful for Surfactant 2. It is hence concluded that the proposed strategy for transferring at-line calibration models to an in-line industrial environment via a univariate slope/bias correction of the predicted values offers a successful standardisation approach. Copyright © 2017 Elsevier B.V. All rights reserved.
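The univariate slope/bias correction at the heart of this transfer strategy can be sketched in a few lines: predictions of the at-line model, applied on the in-line set-up, are corrected with a single linear map fitted on a small set of transfer samples. The "predictions" below are synthetic (a systematic slope/bias shift plus noise), and a generic linear fit stands in for the PLS model itself.

```python
import numpy as np

# Slope/bias correction of model predictions on a new measurement set-up.
# All numbers are synthetic; the 1.08 slope and -0.9 bias are invented.
rng = np.random.default_rng(3)

reference = np.linspace(5.0, 20.0, 12)         # known surfactant levels, %
# at-line model predictions on in-line spectra with a systematic shift
predicted = 1.08 * reference - 0.9 + rng.normal(0.0, 0.15, reference.size)

# Fit the correction: reference ~ slope * predicted + bias
slope, bias = np.polyfit(predicted, reference, 1)

corrected = slope * predicted + bias
rmsep_before = np.sqrt(np.mean((predicted - reference) ** 2))
rmsep_after = np.sqrt(np.mean((corrected - reference) ** 2))
print(f"RMSEP before: {rmsep_before:.2f}; after slope/bias correction: {rmsep_after:.2f}")
```

Because only the predicted values are remapped, the multivariate model itself is left untouched, which is exactly what makes this transfer cheap compared with a full recalibration.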
Wang, Wei; Lu, Hui; Yang, Dawen; Sothea, Khem; Jiao, Yang; Gao, Bin; Peng, Xueting; Pang, Zhiguo
2016-01-01
The Mekong River is the most important river in Southeast Asia. It has increasingly suffered from water-related problems due to economic development, population growth and climate change in the surrounding areas. In this study, we built a distributed Geomorphology-Based Hydrological Model (GBHM) of the Mekong River using remote sensing data and other publicly available data. Two numerical experiments were conducted using different rainfall data sets as model inputs. The data sets included rain gauge data from the Mekong River Commission (MRC) and remote sensing rainfall data from the Tropical Rainfall Measuring Mission (TRMM 3B42V7). Model calibration and validation were conducted for the two rainfall data sets. Compared to the observed discharge, both the gauge simulation and TRMM simulation performed well during the calibration period (1998–2001). However, the performance of the gauge simulation was worse than that of the TRMM simulation during the validation period (2002–2012). The TRMM simulation is more stable and reliable at different scales. Moreover, the calibration period was changed to 2, 4, and 8 years to test the impact of the calibration period length on the two simulations. The results suggest that longer calibration periods improved the GBHM performance during validation periods. In addition, the TRMM simulation is more stable and less sensitive to the calibration period length than is the gauge simulation. Further analysis reveals that the uneven distribution of rain gauges makes the input rainfall data less representative and more heterogeneous, worsening the simulation performance. Our results indicate that remotely sensed rainfall data may be more suitable for driving distributed hydrologic models, especially in basins with poor data quality or limited gauge availability. PMID:27010692
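Discharge-simulation skill comparisons like the one above are commonly summarized with the Nash-Sutcliffe efficiency (NSE). A minimal sketch with made-up discharge values; the study's own skill scores are not reproduced here:

```python
# Nash-Sutcliffe efficiency: 1 is a perfect fit; values <= 0 mean the model
# is no better than predicting the observed mean.

def nse(observed, simulated):
    mean_obs = sum(observed) / len(observed)
    num = sum((o - s) ** 2 for o, s in zip(observed, simulated))
    den = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - num / den

# Hypothetical monthly discharges (m^3/s) for two forcing data sets
obs = [100.0, 150.0, 200.0, 250.0]
sim_gauge = [110.0, 140.0, 210.0, 240.0]
sim_trmm = [105.0, 148.0, 198.0, 252.0]
score_gauge = nse(obs, sim_gauge)
score_trmm = nse(obs, sim_trmm)
```

Computing the score separately for calibration and validation windows is what reveals the kind of performance drop described above.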
Optics-Only Calibration of a Neural-Net Based Optical NDE Method for Structural Health Monitoring
NASA Technical Reports Server (NTRS)
Decker, Arthur J.
2004-01-01
A calibration process is presented that uses optical measurements alone to calibrate a neural-net based NDE method. The method itself detects small changes in the vibration mode shapes of structures. The optics-only calibration process confirms previous work showing that the sensitivity to vibration-amplitude changes can be as small as 10 nanometers. A more practical value in an NDE service laboratory is shown to be 50 nanometers. Both model-generated and experimental calibrations are demonstrated using two implementations of the calibration technique. The implementations are based on previously published demonstrations of the NDE method and an alternative calibration procedure that depends on comparing neural-net and point-sensor measurements. The optics-only calibration method, unlike the alternative method, does not require modifications of the structure being tested or the creation of calibration objects. The calibration process can be used to test improvements in the NDE process and to develop a vibration-mode independence of damage-detection sensitivity. The calibration effort was intended to support NASA's objective to promote safety in the operation of ground test facilities and in aviation safety in general, by allowing the detection of the gradual onset of structural changes and damage.
NASA Astrophysics Data System (ADS)
Lei, Xiaohui; Wang, Yuhui; Liao, Weihong; Jiang, Yunzhong; Tian, Yu; Wang, Hao
2011-09-01
Many regions are still threatened with frequent floods and water resource shortage problems in China. Consequently, the task of reproducing and predicting the hydrological process in watersheds is hard and unavoidable for reducing the risks of damage and loss. Thus, it is necessary to develop an efficient and cost-effective hydrological tool in China, since many areas need to be modeled. Currently, developed hydrological tools such as Mike SHE and ArcSWAT (soil and water assessment tool based on ArcGIS) show significant power in improving the precision of hydrological modeling in China by considering spatial variability both in land cover and in soil type. However, adopting developed commercial tools in such a large developing country comes at a high cost. Commercial modeling tools usually contain large numbers of formulas, complicated data formats, and many preprocessing or postprocessing steps that may make it difficult for the user to carry out simulation, thus lowering the efficiency of the modeling process. In addition, commercial hydrological models usually cannot be modified or improved to suit some special hydrological conditions in China. Some other hydrological models are open source, but integrated into commercial GIS systems. Therefore, by integrating the hydrological simulation code EasyDHM, a hydrological simulation tool named MWEasyDHM was developed based on the open-source MapWindow GIS, the purpose of which is to establish the first open-source GIS-based distributed hydrological model tool in China by integrating modules of preprocessing, model computation, parameter estimation, result display, and analysis. MWEasyDHM provides users with a user-friendly MapWindow GIS interface, selectable multifunctional hydrological processing modules, and, more importantly, an efficient and cost-effective hydrological simulation tool.
The general construction of MWEasyDHM consists of four major parts: (1) a general GIS module for hydrological analysis, (2) a preprocessing module for modeling inputs, (3) a model calibration module, and (4) a postprocessing module. The general GIS module for hydrological analysis is developed on the basis of the fully open-source GIS software MapWindow, which contains basic GIS functions. The preprocessing module is made up of three submodules: a DEM-based submodule for hydrological analysis, a submodule for default parameter calculation, and a submodule for the spatial interpolation of meteorological data. The calibration module supports parallel computation, real-time computation, and visualization. The postprocessing module provides visualization of calibration results and of model outputs in tabular form and on spatial grids. MWEasyDHM makes efficient modeling and calibration of EasyDHM possible, and promises further development of cost-effective applications in various watersheds.
An atlas of selected calibrated stellar spectra
NASA Technical Reports Server (NTRS)
Walker, Russell G.; Cohen, Martin
1992-01-01
Five hundred fifty-six stars in the IRAS PSC-2 that are suitable as stellar radiometric standards and are brighter than 1 Jy at 25 microns were identified. In addition, 123 stars were identified that meet all of our criteria for calibration standards but lack a luminosity class. An approach to the absolute stellar calibration of broadband infrared filters, based upon new models of Vega and Sirius due to Kurucz (1992), is presented. A general technique for assembling continuous wide-band calibrated infrared spectra is described; an absolutely calibrated 1-35 micron spectrum of alpha Tau is constructed, and the method is independently validated using new and carefully designed observations. The absolute calibration of the IRAS Low Resolution Spectrometer (LRS) database is investigated by comparing the observed spectrum of alpha Tau with that assumed in the original LRS calibration scheme. Neglect of the SiO fundamental band in alpha Tau has led to the presence of a spurious 'emission' feature in all LRS spectra near 8.5 microns, and to an incorrect spectral slope between 8 and 12 microns. Finally, some of the properties of asteroids that affect their utility as calibration objects for the middle and far infrared region are examined. A technique is developed to determine, from IRAS multiwaveband observations, the basic physical parameters needed by various asteroid thermal models while minimizing the number of assumptions required.
Calibration OGSEs for multichannel radiometers for Mars atmosphere studies
NASA Astrophysics Data System (ADS)
Jiménez, J. J.; J Álvarez, F.; Gonzalez-Guerrero, M.; Apéstigue, V.; Martín, I.; Fernández, J. M.; Fernán, A. A.; Arruego, I.
2018-02-01
This work describes several Optical Ground Support Equipment (OGSE) setups developed by INTA (Spanish Institute of Aerospace Technology—Instituto Nacional de Técnica Aeroespacial) for the calibration and characterization of their self-manufactured multichannel radiometers (solar irradiance sensors—SIS) developed for working on the surface of Mars and studying the atmosphere of that planet. INTA is currently developing two SIS for the ESA ExoMars 2020 and the JPL/NASA Mars 2020 missions. These calibration OGSEs have been improved since the first model, developed in 2011 for the Mars MetNet Precursor mission. This work describes the currently used OGSE. Calibration tests provide objective evidence of the SIS performance, allowing the conversion of the electrical sensor output into accurate physical measurements (irradiance) with uncertainty bounds. Calibration results of the SIS of the Dust characterisation, Risk assessment, and Environment Analyzer on the Martian Surface (DREAMS) payload on board the ExoMars 2016 Schiaparelli module (EDM—entry and descent module) are also presented, as well as their error propagation. The theoretical precision and accuracy of the instrument are determined by these results. Two types of OGSE are used, depending on the pursued aim: calibration OGSEs and Optical Fast Verification (OFV) OGSE. The calibration OGSEs consist of three setups which characterize, with the highest possible accuracy, the responsivity, the angular response and the thermal behavior; the OFV OGSE verifies that the performance of the sensor is close to nominal after every environmental and qualification test. Results show that the accuracy of the calibrated sensors is a function of the accuracy of the optical detectors and of the light conditions. For normal direct incidence and diffuse light, the accuracy is of the same order as the uncertainty of the reference cell used for fixing the irradiance, which is about 1%.
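At its core, such a radiometric calibration produces a responsivity that converts sensor output into irradiance, with independent relative uncertainties combined in quadrature. A minimal illustrative sketch; the numbers and names are invented, not DREAMS/SIS calibration values:

```python
# Applying a calibrated responsivity and propagating relative uncertainties.

def to_irradiance(signal_volts, responsivity):
    """responsivity in V per (W/m^2); invert it to recover irradiance."""
    return signal_volts / responsivity

def relative_uncertainty(u_signal_rel, u_resp_rel):
    """Combine independent relative uncertainties in quadrature."""
    return (u_signal_rel ** 2 + u_resp_rel ** 2) ** 0.5

e = to_irradiance(0.250, 0.0005)        # hypothetical output and responsivity
u = relative_uncertainty(0.002, 0.01)   # dominated by a ~1% reference cell
```

The second function illustrates why the sensor accuracy ends up on the same order as the reference-cell uncertainty: the quadrature sum is dominated by its largest term.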
Multiple-Objective Stepwise Calibration Using Luca
Hay, Lauren E.; Umemoto, Makiko
2007-01-01
This report documents Luca (Let us calibrate), a multiple-objective, stepwise, automated procedure for hydrologic model calibration and the associated graphical user interface (GUI). Luca is a wizard-style user-friendly GUI that provides an easy systematic way of building and executing a calibration procedure. The calibration procedure uses the Shuffled Complex Evolution global search algorithm to calibrate any model compiled with the U.S. Geological Survey's Modular Modeling System. This process assures that intermediate and final states of the model are simulated consistently with measured values.
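Stepwise calibration of the kind Luca automates can be sketched as optimizing one parameter subset per step while parameters from earlier steps stay fixed. The toy model and grid search below are illustrative only; they are not the Shuffled Complex Evolution algorithm or the Modular Modeling System interface:

```python
# Stepwise calibration sketch: each step tunes one parameter subset against
# the objective while previously calibrated parameters are held fixed.
# toy_model, the bounds, and the grid search are all hypothetical.

def toy_model(params, x):
    return [params["a"] * xi + params["b"] for xi in x]

def sse(obs, sim):
    return sum((o - s) ** 2 for o, s in zip(obs, sim))

def calibrate_step(params, names, bounds, objective, n=101):
    """Grid-search each named parameter, holding all others fixed."""
    for name in names:
        lo, hi = bounds[name]
        grid = (lo + (hi - lo) * i / (n - 1) for i in range(n))
        params[name] = min(grid, key=lambda v: objective({**params, name: v}))
    return params

x = [0.0, 1.0, 2.0, 3.0]
obs = [1.0, 3.0, 5.0, 7.0]                  # generated by a=2, b=1
params = {"a": 0.0, "b": 0.0}
bounds = {"a": (0.0, 5.0), "b": (0.0, 5.0)}
obj = lambda p: sse(obs, toy_model(p, x))
for step in (["a"], ["b"]):                 # one parameter subset per step
    params = calibrate_step(params, step, bounds, obj)
# The fit improves at every step, but a single stepwise pass need not recover
# the true values exactly when parameters interact.
```

A real stepwise procedure would use a global search such as Shuffled Complex Evolution within each step and a step-specific objective (e.g. water balance first, then timing).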
Sophocleous, M.A.; Koelliker, J.K.; Govindaraju, R.S.; Birdie, T.; Ramireddygari, S.R.; Perkins, S.P.
1999-01-01
The objective of this article is to develop and implement a comprehensive computer model that is capable of simulating the surface-water, ground-water, and stream-aquifer interactions on a continuous basis for the Rattlesnake Creek basin in south-central Kansas. The model is to be used as a tool for evaluating long-term water-management strategies. The agriculturally based watershed model SWAT and the ground-water model MODFLOW with stream-aquifer interaction routines, suitably modified, were linked into a comprehensive basin model known as SWATMOD. The hydrologic-response-unit concept was implemented to overcome the quasi-lumped nature of SWAT and represent the heterogeneity within each subbasin of the basin model. A graphical user interface and a decision support system were also developed to evaluate scenarios involving manipulation of water rights and agricultural land uses on stream-aquifer system response. An extensive sensitivity analysis on model parameters was conducted, and model limitations and parameter uncertainties were emphasized. A combination of trial-and-error and inverse modeling techniques was employed to calibrate the model against multiple calibration targets of measured ground-water levels, streamflows, and reported irrigation amounts. The split-sample technique was employed to corroborate the calibrated model. The model was run for a 40-year historical simulation period and a 40-year prediction period. A number of hypothetical management scenarios involving reductions and variations in withdrawal rates and patterns were simulated. The SWATMOD model was developed as a hydrologically rational low-flow model for analyzing, in a user-friendly manner, the conditions in the basin when there is a shortage of water.
NASA Astrophysics Data System (ADS)
Ichii, K.; Suzuki, T.; Kato, T.; Ito, A.; Hajima, T.; Ueyama, M.; Sasai, T.; Hirata, R.; Saigusa, N.; Ohtani, Y.; Takagi, K.
2010-07-01
Terrestrial biosphere models show large differences when simulating carbon and water cycles, and reducing these differences is a priority for developing more accurate estimates of the condition of terrestrial ecosystems and future climate change. To reduce uncertainties and improve the understanding of their carbon budgets, we investigated the utility of eddy flux datasets to improve model simulations and reduce the variability among multi-model outputs of terrestrial biosphere models in Japan. Using 9 terrestrial biosphere models (Support Vector Machine-based regressions, TOPS, CASA, VISIT, Biome-BGC, DAYCENT, SEIB, LPJ, and TRIFFID), we conducted two simulations: (1) point simulations at four eddy flux sites in Japan and (2) spatial simulations for Japan with a default model (based on original settings) and a modified model (based on model parameter tuning using eddy flux data). Generally, models run with default settings showed large deviations from observations, with large model-by-model variability. However, after we calibrated the model parameters using eddy flux data (GPP, RE and NEP), most models successfully simulated seasonal variations in the carbon cycle, with less variability among models. We also found that interannual variations in the carbon cycle are mostly consistent among models and observations. Spatial analysis also showed a large reduction in the variability among model outputs. This study demonstrated that careful validation and calibration of models with available eddy flux data reduced model-by-model differences. However, site history, analysis of model structural differences, and a more objective model calibration procedure should be included in further analyses.
Exploring the calibration of a wind forecast ensemble for energy applications
NASA Astrophysics Data System (ADS)
Heppelmann, Tobias; Ben Bouallegue, Zied; Theis, Susanne
2015-04-01
In the German research project EWeLiNE, Deutscher Wetterdienst (DWD) and the Fraunhofer Institute for Wind Energy and Energy System Technology (IWES) are collaborating with three German Transmission System Operators (TSOs) in order to provide the TSOs with improved probabilistic power forecasts. Probabilistic power forecasts are derived from probabilistic weather forecasts, themselves derived from ensemble prediction systems (EPS). Since the considered raw ensemble wind forecasts suffer from underdispersiveness and bias, calibration methods are developed for the correction of the model bias and the ensemble spread bias. The overall aim is to improve the ensemble forecasts such that the uncertainty of the possible weather development is captured by the ensemble spread from the first forecast hours. Additionally, the ensemble members should remain physically consistent scenarios after calibration. We focus on probabilistic hourly wind forecasts with a horizon of 21 h delivered by the convection-permitting high-resolution ensemble system COSMO-DE-EPS, which became operational in 2012 at DWD. The ensemble consists of 20 members driven by four different global models. The model area includes the whole of Germany and parts of Central Europe with a horizontal resolution of 2.8 km and a vertical resolution of 50 model levels. For verification we use wind mast measurements at around 100 m height, which corresponds to the hub height of the wind turbines in wind farms within the model area. Calibration of the ensemble forecasts can be performed by different statistical methods applied to the raw ensemble output. Here, we explore local bivariate Ensemble Model Output Statistics at individual sites and quantile regression with different predictors. Applying these methods, we already show an improvement of ensemble wind forecasts from COSMO-DE-EPS for energy applications.
In addition, an ensemble copula coupling approach transfers the time-dependencies of the raw ensemble to the calibrated ensemble. The calibrated wind forecasts are evaluated first with univariate probabilistic scores and additionally with diagnostics of wind ramps in order to assess the time-consistency of the calibrated ensemble members.
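One simple way to correct both ensemble mean bias and spread bias while keeping the members physically consistent scenarios is an affine adjustment about the ensemble mean, which preserves the rank order of members. A sketch with invented wind speeds; this is only in the spirit of, not identical to, the EMOS and quantile-regression methods named above:

```python
# Affine ensemble calibration: remove the mean bias and inflate the anomalies
# about the ensemble mean. Because every member is transformed by the same
# monotone map, member ordering (and hence scenario consistency) is preserved.

def calibrate_members(members, bias, spread_factor):
    m = sum(members) / len(members)
    return [(m - bias) + spread_factor * (x - m) for x in members]

raw = [6.0, 6.4, 6.5, 6.7, 7.0]   # underdispersive raw wind speeds (m/s)
cal = calibrate_members(raw, bias=-0.5, spread_factor=1.8)
```

In practice `bias` and `spread_factor` would be estimated from past forecast-observation pairs at the verification masts, per site and lead time.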
Acute health impacts of airborne particles estimated from satellite remote sensing.
Wang, Zhaoxi; Liu, Yang; Hu, Mu; Pan, Xiaochuan; Shi, Jing; Chen, Feng; He, Kebin; Koutrakis, Petros; Christiani, David C
2013-01-01
Satellite-based remote sensing provides a unique opportunity to monitor air quality from space at global, continental, national and regional scales. Most current research has focused on developing empirical models using ground measurements of ambient particulate matter. However, the application of satellite-based exposure assessment in environmental health is still limited, especially for acute effects, because the development of satellite PM(2.5) models depends on the availability of ground measurements. We tested the hypothesis that MODIS AOD (aerosol optical depth) exposure estimates, obtained from NASA satellites, are directly associated with daily health outcomes. Three independent healthcare databases were used: unscheduled outpatient visits, hospital admissions, and mortality collected in the Beijing metropolitan area, China, during 2006. We used generalized linear models to compare the short-term effects of air pollution assessed by ground monitoring (PM(10)) with adjustment for absolute humidity (AH) and by AH-calibrated AOD. Across all databases we found that both AH-calibrated AOD and PM(10) (adjusted by AH) were consistently associated with elevated daily events on the current day and/or lag days for cardiovascular diseases, ischemic heart diseases, and COPD. The relative risks estimated by AH-calibrated AOD and PM(10) (adjusted by AH) were similar. Additionally, compared to ground PM(10), we found that AH-calibrated AOD had narrower confidence intervals for all models and was more robust in estimating the current-day and lag-day effects. Our preliminary findings suggest that, with proper adjustment for meteorological factors, satellite AOD can be used directly to estimate the acute health impacts of ambient particles without prior calibration against the sparse ground monitoring networks. Copyright © 2012 Elsevier Ltd. All rights reserved.
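In such time-series studies, a log-linear (Poisson-family GLM) coefficient is typically converted into a relative risk per exposure increment. A worked sketch; the coefficient and standard error are invented, not estimates from this study:

```python
# Converting a hypothetical log-linear GLM coefficient into a relative risk
# per 10 µg/m^3 of PM10, with a 95% confidence interval.
import math

beta, se = 0.0008, 0.0003        # hypothetical log-RR per 1 µg/m^3 and its SE
increment = 10.0                 # report risk per 10 µg/m^3

rr = math.exp(beta * increment)
ci_lo = math.exp((beta - 1.96 * se) * increment)
ci_hi = math.exp((beta + 1.96 * se) * increment)
```

The width of `(ci_lo, ci_hi)` is what the abstract compares between the AOD-based and PM(10)-based exposure metrics.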
Multisite Evaluation of APEX for Water Quality: II. Regional Parameterization.
Nelson, Nathan O; Baffaut, Claire; Lory, John A; Anomaa Senaviratne, G M M M; Bhandari, Ammar B; Udawatta, Ranjith P; Sweeney, Daniel W; Helmers, Matt J; Van Liew, Mike W; Mallarino, Antonio P; Wortmann, Charles S
2017-11-01
Phosphorus (P) Index assessment requires independent estimates of long-term average annual P loss from fields, representing multiple climatic scenarios, management practices, and landscape positions. Because currently available measured data are insufficient to evaluate P Index performance, calibrated and validated process-based models have been proposed as tools to generate the required data. The objectives of this research were to develop a regional parameterization for the Agricultural Policy Environmental eXtender (APEX) model to estimate edge-of-field runoff, sediment, and P losses in restricted-layer soils of Missouri and Kansas and to assess the performance of this parameterization using monitoring data from multiple sites in this region. Five site-specific calibrated models (SSCM) from within the region were used to develop a regionally calibrated model (RCM), which was further calibrated and validated with measured data. Performance of the RCM was similar to that of the SSCMs for runoff simulation and had Nash-Sutcliffe efficiency (NSE) > 0.72 and absolute percent bias (|PBIAS|) < 18% for both calibration and validation. The RCM could not simulate sediment loss (NSE < 0, |PBIAS| > 90%) and was particularly ineffective at simulating sediment loss from locations with small sediment loads. The RCM had acceptable performance for simulation of total P loss (NSE > 0.74, |PBIAS| < 30%) but underperformed the SSCMs. Total P-loss estimates should be used with caution due to poor simulation of sediment loss. Although we did not attain our goal of a robust regional parameterization of APEX for estimating sediment and total P losses, runoff estimates with the RCM were acceptable for P Index evaluation. Copyright © by the American Society of Agronomy, Crop Science Society of America, and Soil Science Society of America, Inc.
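The |PBIAS| criterion quoted above is percent bias, the relative error between simulated and observed totals. A minimal sketch with invented data; sign conventions differ between references, but only the magnitude matters for acceptance thresholds like |PBIAS| < 18%:

```python
# Percent bias: relative volume error of the simulation over the whole record.
# Here positive means oversimulation; some hydrology references flip the sign.

def pbias(observed, simulated):
    return 100.0 * sum(s - o for o, s in zip(observed, simulated)) / sum(observed)

obs = [10.0, 20.0, 30.0]    # hypothetical observed runoff totals
sim = [9.0, 19.0, 33.0]     # hypothetical simulated runoff totals
pb = pbias(obs, sim)
```

Paired with NSE, this gives the two-criterion check (efficiency plus volume error) used to judge the regional parameterization.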
DOT National Transportation Integrated Search
1997-06-01
The present study has been conducted to evaluate eight flood prediction models for an ungauged small watershed. These models are either frequently used by or were developed by Louisiana Department of Transportation and Development (LADOTD). The eight...
NASA Astrophysics Data System (ADS)
Alipour, M. H.; Kibler, Kelly M.
2018-02-01
A framework methodology is proposed for streamflow prediction in poorly-gauged rivers located within large-scale regions of sparse hydrometeorologic observation. A multi-criteria model evaluation is developed to select models that balance runoff efficiency with selection of accurate parameter values. Sparse observed data are supplemented by uncertain or low-resolution information, incorporated as 'soft' data, to estimate parameter values a priori. Model performance is tested in two catchments within a data-poor region of southwestern China, and results are compared to models selected using alternative calibration methods. While all models perform consistently with respect to runoff efficiency (NSE range of 0.67-0.78), models selected using the proposed multi-objective method may incorporate more representative parameter values than those selected by traditional calibration. Notably, parameter values estimated by the proposed method resonate with direct estimates of catchment subsurface storage capacity (parameter residuals of 20 and 61 mm for maximum soil moisture capacity (Cmax), and 0.91 and 0.48 for soil moisture distribution shape factor (B); where a parameter residual is equal to the centroid of a soft parameter value minus the calibrated parameter value). A model more traditionally calibrated to observed data only (single-objective model) estimates a much lower soil moisture capacity (residuals of Cmax = 475 and 518 mm and B = 1.24 and 0.7). A constrained single-objective model also underestimates maximum soil moisture capacity relative to a priori estimates (residuals of Cmax = 246 and 289 mm). The proposed method may allow managers to more confidently transfer calibrated models to ungauged catchments for streamflow predictions, even in the world's most data-limited regions.
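The multi-criteria idea above can be sketched as a combined objective that trades runoff efficiency against agreement with soft a-priori parameter estimates. The weight, parameter values, and function name below are invented for illustration; they are not the study's actual formulation:

```python
# Multi-criteria model selection sketch: reward runoff efficiency (NSE) and
# penalize distance of calibrated parameters from soft-data centroids.

def combined_objective(nse_value, params, soft_centroids, weight=0.001):
    penalty = sum(abs(params[k] - soft_centroids[k]) for k in soft_centroids)
    return nse_value - weight * penalty      # higher is better

score = combined_objective(0.75,
                           {"Cmax": 320.0, "B": 1.1},    # candidate parameters
                           {"Cmax": 300.0, "B": 1.5})    # soft-data centroids
```

With such an objective, two models of equal NSE are ranked by how small their parameter residuals are, which is exactly the property the abstract emphasizes.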
NASA Technical Reports Server (NTRS)
Putnam, J. B.; Unataroiu, C. D.; Somers, J. T.
2014-01-01
The THOR anthropomorphic test device (ATD) has been developed and continuously improved by the National Highway Traffic Safety Administration to provide automotive manufacturers an advanced tool that can be used to assess the injury risk of vehicle occupants in crash tests. Recently, a series of modifications were completed to improve the biofidelity of the THOR ATD [1]. The updated THOR Modification Kit (THOR-K) ATD was employed at Wright-Patterson Air Force Base in 22 impact tests in three configurations: vertical, lateral, and spinal [2]. Although a computational finite element (FE) model of the THOR had been previously developed [3], updates to the model were needed to incorporate the recent changes in the modification kit. The main goal of this study was to develop and validate a FE model of the THOR-K ATD. The CAD drawings of the THOR-K ATD were reviewed and FE models were developed for the updated parts. For example, the head-skin geometry was found to change significantly, so its model was re-meshed (Fig. 1a). A protocol was developed to calibrate each component identified as key to the kinematic and kinetic response of the THOR-K head/neck ATD FE model (Fig. 1b). The available ATD tests were divided into two groups: a) calibration tests, where the unknown material parameters of deformable parts (e.g., head skin, pelvis foam) were optimized to match the data, and b) validation tests, where the model response was only compared with test data by calculating a score using the CORrelation and Analysis (CORA) rating system. Finally, the whole ATD model was validated under horizontal-, vertical-, and lateral-loading conditions against data recorded in the Wright-Patterson tests [2]. Overall, the final THOR-K ATD model developed in this study is shown to respond similarly to the ATD in all validation tests. This good performance indicates that the optimization performed during calibration using the CORA score as the objective function is not test specific.
Therefore, confidence is provided in the ATD model for use in predicting responses under test conditions not performed in this study, such as those observed during spacecraft landing. Comparison studies with ATD and human models may also be performed to contribute to future changes in THOR ATD design in an effort to improve its biofidelity, which has traditionally been based on post-mortem human subject testing and designer experience.
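Correlation-based rating systems such as CORA score how closely a simulated time history tracks a measured one. The toy score below is only in that spirit; it is not the actual CORA implementation, which also rates phase shift and magnitude in a corridor-based scheme:

```python
# Toy signal-comparison score: Pearson correlation between a measured and a
# simulated time history, mapped onto [0, 1] (1 = in phase, 0 = anti-phase).

def correlation_score(test, sim):
    n = len(test)
    mt, ms = sum(test) / n, sum(sim) / n
    cov = sum((a - mt) * (b - ms) for a, b in zip(test, sim))
    st = sum((a - mt) ** 2 for a in test) ** 0.5
    ss = sum((b - ms) ** 2 for b in sim) ** 0.5
    r = cov / (st * ss)
    return 0.5 * (r + 1.0)

# Hypothetical acceleration histories: simulation closely tracks the test
score = correlation_score([0.0, 1.0, 0.0, -1.0], [0.0, 0.9, 0.1, -0.9])
```

Using such a score as the optimization objective, as described above, rewards matching the shape of the whole response rather than a single peak value.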
The In-flight Spectroscopic Performance of the Swift XRT CCD Camera During 2006-2007
NASA Technical Reports Server (NTRS)
Godet, O.; Beardmore, A.P.; Abbey, A.F.; Osborne, J.P.; Page, K.L.; Evans, P.; Starling, R.; Wells, A.A.; Angelini, L.; Burrows, D.N.;
2007-01-01
The Swift X-ray Telescope focal plane camera is a front-illuminated MOS CCD, providing a spectral response kernel of 135 eV FWHM at 5.9 keV as measured before launch. We describe the CCD calibration program based on celestial and on-board calibration sources, relevant in-flight experiences, and developments in the CCD response model. We illustrate how the revised response model describes the calibration sources well. Comparison of observed spectra with models folded through the instrument response produces negative residuals around and below the Oxygen edge. We discuss several possible causes for such residuals. Traps created by proton damage on the CCD increase the charge transfer inefficiency (CTI) over time. We describe the evolution of the CTI since the launch and its effect on the CCD spectral resolution and the gain.
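Charge transfer inefficiency (CTI) can be compensated to first order by scaling the measured amplitude back up by the charge fraction lost per transfer. A sketch with invented numbers, not the Swift XRT calibration values:

```python
# First-order CTI correction: each of N transfers retains a (1 - cti)
# fraction of the charge, so the measured amplitude is divided by
# (1 - cti)**N to recover the deposited energy. Values are hypothetical.

def correct_cti(measured_ev, transfers, cti):
    return measured_ev / ((1.0 - cti) ** transfers)

# A 5.9 keV line measured slightly low after 600 row transfers
energy = correct_cti(5850.0, 600, 2e-6)
```

Because radiation damage increases `cti` over time, such a correction must be re-derived from the on-board and celestial calibration sources as the mission ages.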
van Heeswijk, Marijke
2006-01-01
Surface water has been diverted from the Salmon Creek Basin for irrigation purposes since the early 1900s, when the Bureau of Reclamation built the Okanogan Project. Spring snowmelt runoff is stored in two reservoirs, Conconully Reservoir and Salmon Lake Reservoir, and gradually released during the growing season. As a result of the out-of-basin streamflow diversions, the lower 4.3 miles of Salmon Creek typically has been a dry creek bed for almost 100 years, except during the spring snowmelt season during years of high runoff. To continue meeting the water needs of irrigators but also leave water in lower Salmon Creek for fish passage and to help restore the natural ecosystem, changes are being considered in how the Okanogan Project is operated. This report documents development of a precipitation-runoff model for the Salmon Creek Basin that can be used to simulate daily unregulated streamflows. The precipitation-runoff model is a component of a Decision Support System (DSS) that includes a water-operations model the Bureau of Reclamation plans to develop to study the water resources of the Salmon Creek Basin. The DSS will be similar to the DSS that the Bureau of Reclamation and the U.S. Geological Survey developed previously for the Yakima River Basin in central southern Washington. The precipitation-runoff model was calibrated for water years 1950-89 and tested for water years 1990-96. The model was used to simulate daily streamflows that were aggregated on a monthly basis and calibrated against historical monthly streamflows for Salmon Creek at Conconully Dam. Additional calibration data were provided by the snowpack water-equivalent record for a SNOTEL station in the basin. Model input time series of daily precipitation and minimum and maximum air temperatures were based on data from climate stations in the study area. Historical records of unregulated streamflow for Salmon Creek at Conconully Dam do not exist for water years 1950-96. 
Instead, estimates of historical monthly mean unregulated streamflow based on reservoir outflows and storage changes were used as a surrogate for the missing data and to calibrate and test the model. The estimated unregulated streamflows were corrected for evaporative losses from Conconully Reservoir (about 1 ft3/s) and ground-water losses from the basin (about 2 ft3/s). The total of the corrections was about 9 percent of the mean uncorrected streamflow of 32.2 ft3/s (23,300 acre-ft/yr) for water years 1949-96. For the calibration period, the basinwide mean annual evapotranspiration was simulated to be 19.1 inches, or about 83 percent of the mean annual precipitation of 23.1 inches. Model calibration and testing indicated that the daily streamflows simulated using the precipitation-runoff model should be used only to analyze historical and forecasted annual mean and April-July mean streamflows for Salmon Creek at Conconully Dam. Because of the paucity of model input data and uncertainty in the estimated unregulated streamflows, the model is not adequately calibrated and tested to estimate monthly mean streamflows for individual months, such as during low-flow periods, or for shorter periods such as during peak flows. No data were available to test the accuracy of simulated streamflows for lower Salmon Creek. As a result, simulated streamflows for lower Salmon Creek should be used with caution. For the calibration period (water years 1950-89), both the simulated mean annual streamflow and the simulated mean April-July streamflow compared well with the estimated uncorrected unregulated streamflow (UUS) and corrected unregulated streamflow (CUS). The simulated mean annual streamflow exceeded UUS by 5.9 percent and was less than CUS by 2.7 percent. Similarly, the simulated mean April-July streamflow exceeded UUS by 1.8 percent and was less than CUS by 3.1 percent. 
However, streamflow was significantly undersimulated during the low-flow, baseflow-dominated months of November through February.
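The loss corrections described in this abstract reduce to simple mass-balance arithmetic. A minimal Python sketch, using the values quoted above (the function name is hypothetical, not from the report):

```python
# Corrected unregulated streamflow (CUS) from the loss estimates in the abstract.
evap_loss_cfs = 1.0    # evaporative loss from Conconully Reservoir, ft3/s
gw_loss_cfs = 2.0      # ground-water loss from the basin, ft3/s
mean_uus_cfs = 32.2    # mean uncorrected unregulated streamflow, ft3/s

total_correction = evap_loss_cfs + gw_loss_cfs
pct_of_mean = 100.0 * total_correction / mean_uus_cfs  # about 9 percent, as reported

def corrected_flow(uus_cfs):
    """Add the basin loss corrections back to an uncorrected flow estimate."""
    return uus_cfs + evap_loss_cfs + gw_loss_cfs
```

The roughly 9 percent figure in the abstract follows directly from 3 ft3/s of corrections against a 32.2 ft3/s mean.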
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
1995-11-01
The Data Fusion Modeling (DFM) approach has been used to develop a groundwater flow and transport model of the Old Burial Grounds (OBG) at the US Department of Energy's Savannah River Site (SRS). The resulting DFM model was compared to an existing model that was calibrated via the typical trial-and-error method. The OBG was chosen because a substantial amount of hydrogeologic information is available, a FACT (derivative of VAM3DCG) flow and transport model of the site exists, and the calibration and numerics were challenging with standard approaches. The DFM flow model developed here is similar to the flow model by Flach et al. This allows comparison of the two flow models and validates the utility of DFM. The contaminant of interest for this study is tritium, because it is a geochemically conservative tracer that has been monitored along the seepline near the F-Area effluent and Fourmile Branch for several years.
Analysis of Flavonoid in Medicinal Plant Extract Using Infrared Spectroscopy and Chemometrics
Retnaningtyas, Yuni; Nuri; Lukman, Hilmia
2016-01-01
Infrared (IR) spectroscopy combined with chemometrics has been developed for simple analysis of flavonoid in medicinal plant extracts. Flavonoid was extracted from medicinal plant leaves by ultrasonication and maceration. IR spectra of selected medicinal plant extracts were correlated with flavonoid content using chemometrics. The chemometric method used for calibration analysis was Partial Least Squares (PLS) regression, and the methods used for classification analysis were Linear Discriminant Analysis (LDA), Soft Independent Modelling of Class Analogies (SIMCA), and Support Vector Machines (SVM). In this study, the NIR calibration model showed the best performance, with R2 and RMSEC values of 0.9916499 and 2.1521897, respectively, while the accuracy of all classification models (LDA, SIMCA, and SVM) was 100%. The R2 and RMSEC values of the FTIR calibration model were 0.8653689 and 8.8958149, respectively, while the accuracies of LDA, SIMCA, and SVM were 86.0%, 91.2%, and 77.3%, respectively. The PLS and LDA NIR models were further used to predict unknown flavonoid content in commercial samples. Using these models, the flavonoid contents measured by NIR and UV-Vis spectrophotometry were compared with a paired-samples t-test; the two methods gave no significant difference. PMID:27529051
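The PLS calibration step described above can be sketched with a minimal single-response PLS (NIPALS) fit on synthetic "spectra". This is an illustrative stand-in under stated assumptions, not the authors' chemometric pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)

def pls1_fit(X, y, n_comp):
    """Minimal single-response PLS (NIPALS). Returns (coef, x_mean, y_mean)."""
    xm, ym = X.mean(axis=0), y.mean()
    Xk, yk = X - xm, y - ym
    W, P, q = [], [], []
    for _ in range(n_comp):
        w = Xk.T @ yk
        w = w / np.linalg.norm(w)         # weight vector for this component
        t = Xk @ w                        # scores
        tt = t @ t
        p = Xk.T @ t / tt                 # X loadings
        W.append(w)
        P.append(p)
        q.append((yk @ t) / tt)           # y loading
        Xk = Xk - np.outer(t, p)          # deflate X and y
        yk = yk - q[-1] * t
    W, P, q = np.array(W).T, np.array(P).T, np.array(q)
    coef = W @ np.linalg.inv(P.T @ W) @ q
    return coef, xm, ym

def pls1_predict(model, X):
    coef, xm, ym = model
    return (X - xm) @ coef + ym

# Synthetic "spectra": 60 samples x 50 wavelengths; the analyte drives two bands.
X = rng.normal(size=(60, 50))
y = 3.0 * X[:, 10] + 2.0 * X[:, 25] + rng.normal(scale=0.1, size=60)

model = pls1_fit(X, y, n_comp=3)
resid = y - pls1_predict(model, X)
rmsec = float(np.sqrt(np.mean(resid ** 2)))                       # calibration RMSE
r2 = float(1.0 - np.sum(resid ** 2) / np.sum((y - y.mean()) ** 2))
```

R2 and RMSEC computed this way on the calibration set correspond to the figures of merit quoted in the abstract.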
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lai, Canhai; Xu, Zhijie; Pan, Wenxiao
2016-01-01
To quantify the predictive confidence of a solid sorbent-based carbon capture design, a hierarchical validation methodology, consisting of basic unit problems of increasing physical complexity coupled with filtered model-based geometric upscaling, has been developed and implemented. This paper describes the computational fluid dynamics (CFD) multi-phase reactive flow simulations and the associated data flows among the different unit problems performed within this hierarchical validation approach. The bench-top experiments used in this calibration and validation effort were carefully designed to follow the desired simple-to-complex unit problem hierarchy, with corresponding data acquisition to support model parameter calibration at each unit problem level. A Bayesian calibration procedure is employed, and the posterior model parameter distributions obtained at one unit-problem level are used as prior distributions for the same parameters in the next-tier simulations. Overall, the results have demonstrated that the multiphase reactive flow models within MFIX can be used to capture the bed pressure, temperature, CO2 capture capacity, and kinetics with quantitative accuracy. The CFD modeling methodology and associated uncertainty quantification techniques presented herein offer a solid framework for estimating the predictive confidence in the virtual scale-up of a larger carbon capture device.
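The tier-to-tier Bayesian flow, in which the posterior from one unit-problem level becomes the prior at the next, can be illustrated with a conjugate normal-normal update (toy numbers, not from the study):

```python
import numpy as np

def normal_update(prior_mu, prior_var, obs, obs_var):
    """Conjugate normal-normal update for a scalar model parameter."""
    obs = np.asarray(obs, dtype=float)
    n = obs.size
    post_var = 1.0 / (1.0 / prior_var + n / obs_var)
    post_mu = post_var * (prior_mu / prior_var + obs.sum() / obs_var)
    return post_mu, post_var

# Tier 1: calibrate a kinetic parameter against bench-top data (hypothetical values).
mu1, var1 = normal_update(prior_mu=0.0, prior_var=10.0,
                          obs=[1.9, 2.1, 2.0], obs_var=0.04)
# Tier 2: the tier-1 posterior becomes the prior for the next unit problem.
mu2, var2 = normal_update(prior_mu=mu1, prior_var=var1,
                          obs=[2.05, 1.95], obs_var=0.04)
```

Each tier narrows the parameter distribution, which is how the hierarchy propagates calibration information toward the full-scale simulation.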
Cho, Jae Heon; Lee, Jong Ho
2015-11-01
Manual calibration is common in rainfall-runoff model applications. However, rainfall-runoff models include several complicated parameters; thus, significant time and effort are required to calibrate the parameters manually, individually, and repeatedly. Automatic calibration has relative merit regarding time efficiency and objectivity but shortcomings regarding understanding of indigenous processes in the basin. In this study, a watershed model calibration framework was developed using an influence coefficient algorithm and genetic algorithm (WMCIG) to automatically calibrate distributed models. The optimization problem, minimizing the sum of squares of the normalized residuals of the observed and predicted values, was solved using a genetic algorithm (GA). The final model parameters were determined from the iteration with the smallest sum of squares of the normalized residuals of all iterations. The WMCIG was applied to the Gomakwoncheon watershed, located in an area subject to a total maximum daily load (TMDL) in Korea. The proportion of urbanized area in this watershed is low, and the diffuse pollution loads of nutrients such as phosphorus are greater than the point-source pollution loads because rainfall is concentrated in the summer. The pollution discharges from the watershed were estimated for each land-use type, and the seasonal variations of the pollution loads were analyzed. Consecutive flow measurement gauges have not been installed in this area, and it is difficult to survey the flow and water quality during the frequent heavy rainfall that occurs in the wet season. The Hydrological Simulation Program-Fortran (HSPF) model was used to calculate the runoff flow and water quality in this basin.
Using the water quality results, a load duration curve was constructed for the basin, the exceedance frequency of the water quality standard was calculated for each hydrologic condition class, and the percent reduction required to achieve the water quality standard was estimated. The R(2) value for the calibrated BOD5 was 0.60, which is a moderate result, and the R(2) value for the TP was 0.86, which is a good result. The percent differences obtained for the calibrated BOD5 and TP were very good; therefore, the calibration results using WMCIG were satisfactory. From the load duration curve analysis, the WQS exceedance frequencies of the BOD5 under dry conditions and low-flow conditions were 75.7% and 65%, respectively, and the exceedance frequencies under moist and mid-range conditions were higher than under other conditions. The exceedance frequencies of the TP for the high-flow, moist and mid-range conditions were high and the exceedance rate for the high-flow condition was particularly high. Most of the data from the high-flow conditions exceeded the WQSs. Thus, nonpoint-source pollutants from storm-water runoff substantially affected the TP concentration in the Gomakwoncheon. Copyright © 2015 Elsevier Ltd. All rights reserved.
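The GA calibration idea, minimizing the sum of squares of normalized residuals, can be sketched with a toy forward model standing in for HSPF. This is a simplified selection-plus-mutation GA; all names and the two-parameter model are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate(params, forcing):
    """Toy 2-parameter runoff model standing in for HSPF (hypothetical)."""
    a, b = params
    return a * forcing + b * np.sqrt(forcing)

def objective(params, forcing, observed):
    """Sum of squares of normalized residuals, as in the WMCIG framework."""
    pred = simulate(params, forcing)
    return float(np.sum(((observed - pred) / observed) ** 2))

def ga_calibrate(forcing, observed, pop=40, gens=60, bounds=(0.0, 5.0)):
    lo, hi = bounds
    P = rng.uniform(lo, hi, size=(pop, 2))            # random initial population
    for _ in range(gens):
        fit = np.array([objective(p, forcing, observed) for p in P])
        elite = P[np.argsort(fit)[: pop // 2]]        # selection: keep best half
        kids = elite[rng.integers(0, len(elite), pop - len(elite))]
        kids = kids + rng.normal(scale=0.05, size=kids.shape)  # mutation
        P = np.clip(np.vstack([elite, kids]), lo, hi)
    fit = np.array([objective(p, forcing, observed) for p in P])
    return P[np.argmin(fit)]

forcing = np.linspace(1.0, 10.0, 20)
observed = simulate((2.0, 1.0), forcing)   # synthetic "observations" from known truth
best = ga_calibrate(forcing, observed)
```

With a known synthetic truth, the GA should recover parameters close to (2.0, 1.0); a real application would substitute observed flows and the distributed model.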
Luczak, Susan E; Hawkins, Ashley L; Dai, Zheng; Wichmann, Raphael; Wang, Chunming; Rosen, I Gary
2018-08-01
Biosensors have been developed to measure transdermal alcohol concentration (TAC), but converting TAC into interpretable indices of blood/breath alcohol concentration (BAC/BrAC) is difficult because of variations that occur in TAC across individuals, drinking episodes, and devices. We have developed mathematical models and the BrAC Estimator software for calibrating and inverting TAC into quantifiable BrAC estimates (eBrAC). The calibration protocol to determine the individualized parameters for a specific individual wearing a specific device requires a drinking session in which BrAC and TAC measurements are obtained simultaneously. This calibration protocol was originally conducted in the laboratory with breath analyzers used to produce the BrAC data. Here we develop and test an alternative calibration protocol using drinking diary data collected in the field with the smartphone app Intellidrink to produce the BrAC calibration data. We compared BrAC Estimator software results for 11 drinking episodes collected by an expert user when using Intellidrink versus breath analyzer measurements as BrAC calibration data. Inversion phase results indicated the Intellidrink calibration protocol produced similar eBrAC curves and captured peak eBrAC to within 0.0003%, time of peak eBrAC to within 18 min, and area under the eBrAC curve to within 0.025% alcohol-hours relative to the breath analyzer calibration protocol. This study provides evidence that drinking diary data can be used in place of breath analyzer data in the BrAC Estimator software calibration procedure, which can reduce participant and researcher burden and expand the potential software user pool beyond researchers studying participants who can drink in the laboratory. Copyright © 2017. Published by Elsevier Ltd.
NASA Astrophysics Data System (ADS)
Xiong, X.; Stone, T. C.
2017-12-01
To meet objectives for assembling continuous Earth environmental data records from multiple satellite instruments, a key consideration is to assure consistent and stable sensor calibration across platforms and spanning mission lifetimes. Maintaining and verifying calibration stability in orbit is particularly challenging for reflected solar band (RSB) radiometer instruments, as options for stable references are limited. The Moon is used regularly as a calibration target, which has capabilities for long-term sensor performance monitoring and for use as a common reference for RSB sensor inter-calibration. Suomi NPP VIIRS has viewed the Moon nearly every month since launch, utilizing spacecraft roll maneuvers to acquire lunar observations within a small range of phase angles. The VIIRS Characterization Support Team (VCST) at NASA GSFC has processed the Moon images acquired by SNPP VIIRS into irradiance measurements for calibration purposes; however, the variations in the Moon's brightness still require normalizing the VIIRS lunar measurements using radiometric reference values generated by the USGS lunar calibration system, i.e. the ROLO model. Comparison of the lunar irradiance time series to the calibration f-factors derived from the VIIRS on-board solar diffuser system shows similar overall trends in sensor response, but also reveals residual geometric anomalies in the lunar model results. The excellent lunar radiometry achieved by SNPP VIIRS is actively being used to advance lunar model development at USGS. Both MODIS instruments also have viewed the Moon regularly since launch, providing a practical application of sensor inter-calibration using the Moon as a common reference. 
This paper discusses ongoing efforts aimed toward demonstrating and utilizing the full potential of lunar observations to support long-term calibration stability and consistency for SNPP VIIRS and MODIS, thus contributing to level-1B data quality assurance for continuity and monitoring global environmental changes.
NASA Astrophysics Data System (ADS)
Tian, Jialin; Smith, William L.; Gazarik, Michael J.
2008-12-01
The ultimate remote sensing benefits of the high resolution infrared radiance spectrometers will be realized with their geostationary satellite implementation in the form of imaging spectrometers. This will enable dynamic features of the atmosphere's thermodynamic fields and pollutant and greenhouse gas constituents to be observed for revolutionary improvements in weather forecasts and more accurate air quality and climate predictions. As an important step toward realizing this application objective, the Geostationary Imaging Fourier Transform Spectrometer (GIFTS) Engineering Demonstration Unit (EDU) was successfully developed under the NASA New Millennium Program, 2000-2006. The GIFTS-EDU instrument employs three focal plane arrays (FPAs), which gather measurements across the long-wave IR (LWIR), short/mid-wave IR (SMWIR), and visible spectral bands. The GIFTS calibration is achieved using internal blackbody calibration references at ambient (260 K) and hot (286 K) temperatures. In this paper, we introduce a refined calibration technique that utilizes Principal Component (PC) analysis to compensate for instrument distortions and artifacts, thereby enhancing the absolute calibration accuracy. This method is applied to data collected during the GIFTS Ground Based Measurement (GBM) experiment, together with simultaneous observations by the accurately calibrated AERI (Atmospheric Emitted Radiance Interferometer), both simultaneously zenith viewing the sky through the same external scene mirror at ten-minute intervals throughout a cloudless day at Logan, Utah on September 13, 2006. The accurately calibrated GIFTS radiances are produced using the first four PC scores in the GIFTS-AERI regression model. Temperature and moisture profiles retrieved from the PC-calibrated GIFTS radiances are verified against radiosonde measurements collected throughout the GIFTS sky measurement period.
Using the GIFTS GBM calibration model, we compute calibrated radiances from data collected during the moon tracking and viewing experiment events, from which we derive the lunar surface temperature and emissivity associated with the moon viewing measurements.
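The PC-score regression underlying this refined calibration can be illustrated on synthetic low-rank "spectra": principal component scores of the instrument data are regressed onto reference radiances. This is an illustrative sketch under stated assumptions, not the GIFTS-AERI processing code:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic stand-in: the instrument signal lives in a 4-dimensional principal
# subspace, and the reference radiance is a linear function of those modes.
n, ch = 100, 30
modes = rng.normal(size=(4, ch))
amps = rng.normal(size=(n, 4))
spectra = amps @ modes + 0.01 * rng.normal(size=(n, ch))
reference = amps @ np.array([2.0, -1.0, 0.5, 0.3]) + 0.01 * rng.normal(size=n)

# PC scores via SVD of the centered spectra; keep the first four, as in the paper.
Xc = spectra - spectra.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
pc_scores = U[:, :4] * s[:4]

# Least-squares regression of the reference radiance on the four PC scores.
A = np.column_stack([pc_scores, np.ones(n)])
coef, *_ = np.linalg.lstsq(A, reference, rcond=None)
calibrated = A @ coef
rms_err = float(np.sqrt(np.mean((calibrated - reference) ** 2)))
```

Because the distortions and signal are both confined to a few principal modes, a regression on the leading PC scores absorbs them jointly, which is the essence of the PC-based calibration refinement.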
NASA Technical Reports Server (NTRS)
Tian, Jialin; Smith, William L.; Gazarik, Michael J.
2008-01-01
The ultimate remote sensing benefits of the high resolution infrared radiance spectrometers will be realized with their geostationary satellite implementation in the form of imaging spectrometers. This will enable dynamic features of the atmosphere's thermodynamic fields and pollutant and greenhouse gas constituents to be observed for revolutionary improvements in weather forecasts and more accurate air quality and climate predictions. As an important step toward realizing this application objective, the Geostationary Imaging Fourier Transform Spectrometer (GIFTS) Engineering Demonstration Unit (EDU) was successfully developed under the NASA New Millennium Program, 2000-2006. The GIFTS-EDU instrument employs three focal plane arrays (FPAs), which gather measurements across the long-wave IR (LWIR), short/mid-wave IR (SMWIR), and visible spectral bands. The GIFTS calibration is achieved using internal blackbody calibration references at ambient (260 K) and hot (286 K) temperatures. In this paper, we introduce a refined calibration technique that utilizes Principal Component (PC) analysis to compensate for instrument distortions and artifacts, thereby enhancing the absolute calibration accuracy. This method is applied to data collected during the GIFTS Ground Based Measurement (GBM) experiment, together with simultaneous observations by the accurately calibrated AERI (Atmospheric Emitted Radiance Interferometer), both simultaneously zenith viewing the sky through the same external scene mirror at ten-minute intervals throughout a cloudless day at Logan, Utah on September 13, 2006. The accurately calibrated GIFTS radiances are produced using the first four PC scores in the GIFTS-AERI regression model. Temperature and moisture profiles retrieved from the PC-calibrated GIFTS radiances are verified against radiosonde measurements collected throughout the GIFTS sky measurement period.
Using the GIFTS GBM calibration model, we compute calibrated radiances from data collected during the moon tracking and viewing experiment events, from which we derive the lunar surface temperature and emissivity associated with the moon viewing measurements.
Robot calibration with a photogrammetric on-line system using reseau scanning cameras
NASA Astrophysics Data System (ADS)
Diewald, Bernd; Godding, Robert; Henrich, Andreas
1994-03-01
The possibility for testing and calibration of industrial robots becomes more and more important for manufacturers and users of such systems. Exacting applications in connection with the off-line programming techniques or the use of robots as measuring machines are impossible without a preceding robot calibration. At the LPA an efficient calibration technique has been developed. Instead of modeling the kinematic behavior of a robot, the new method describes the pose deviations within a user-defined section of the robot's working space. High-precision determination of 3D coordinates of defined path positions is necessary for calibration and can be done by digital photogrammetric systems. For the calibration of a robot at the LPA a digital photogrammetric system with three Rollei Reseau Scanning Cameras was used. This system allows an automatic measurement of a large number of robot poses with high accuracy.
NASA Technical Reports Server (NTRS)
Geng, Steven M.
1987-01-01
A free-piston Stirling engine performance code is being upgraded and validated at the NASA Lewis Research Center under an interagency agreement between the Department of Energy's Oak Ridge National Laboratory and NASA Lewis. Many modifications were made to the free-piston code in an attempt to decrease the calibration effort. A procedure was developed that made the code calibration process more systematic. Engine-specific calibration parameters are often used to bring predictions and experimental data into better agreement. The code was calibrated to a matrix of six experimental data points. Predictions of the calibrated free-piston code are compared with RE-1000 free-piston Stirling engine sensitivity test data taken at NASA Lewis. Reasonable agreement was obtained between the code predictions and the experimental data over a wide range of engine operating conditions.
NASA Astrophysics Data System (ADS)
Norton, P. A., II; Haj, A. E., Jr.
2014-12-01
The United States Geological Survey is currently developing a National Hydrologic Model (NHM) to support and facilitate coordinated and consistent hydrologic modeling efforts at the scale of the continental United States. As part of this effort, the Geospatial Fabric (GF) for the NHM was created. The GF is a database that contains parameters derived from datasets that characterize the physical features of watersheds. The GF was used to aggregate catchments and flowlines defined in the National Hydrography Dataset Plus dataset for more than 100,000 hydrologic response units (HRUs), and to establish initial parameter values for input to the Precipitation-Runoff Modeling System (PRMS). Many parameter values are adjusted in PRMS using an automated calibration process. Using these adjusted parameter values, the PRMS model estimated variables such as evapotranspiration (ET), potential evapotranspiration (PET), snow-covered area (SCA), and snow water equivalent (SWE). In order to evaluate the effectiveness of parameter calibration, and model performance in general, several satellite-based Moderate Resolution Imaging Spectroradiometer (MODIS) and Snow Data Assimilation System (SNODAS) gridded datasets including ET, PET, SCA, and SWE were compared to PRMS-simulated values. The MODIS and SNODAS data were spatially averaged for each HRU, and compared to PRMS-simulated ET, PET, SCA, and SWE values for each HRU in the Upper Missouri River watershed. Default initial GF parameter values and PRMS calibration ranges were evaluated. Evaluation results, and the use of MODIS and SNODAS datasets to update GF parameter values and PRMS calibration ranges, are presented and discussed.
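The HRU-wise spatial averaging of the gridded MODIS and SNODAS fields can be sketched as a simple zonal mean (the helper function is hypothetical, not USGS code):

```python
import numpy as np

def hru_zonal_mean(grid, hru_id):
    """Spatially average a gridded field (e.g. MODIS ET) over each HRU.
    grid and hru_id are equal-shape arrays; hru_id holds integer HRU labels."""
    return {int(h): float(grid[hru_id == h].mean()) for h in np.unique(hru_id)}

# Tiny example: a 4x4 ET grid split between two HRUs.
et = np.array([[1.0, 1.0, 3.0, 3.0],
               [1.0, 1.0, 3.0, 3.0],
               [1.0, 1.0, 3.0, 3.0],
               [1.0, 1.0, 3.0, 3.0]])
hru = np.array([[1, 1, 2, 2]] * 4)
means = hru_zonal_mean(et, hru)
```

Each HRU's zonal mean can then be compared directly against the PRMS-simulated value for that HRU.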
Kuligowski, Julia; Carrión, David; Quintás, Guillermo; Garrigues, Salvador; de la Guardia, Miguel
2011-01-01
The selection of an appropriate calibration set is a critical step in multivariate method development. In this work, the effect of using different calibration sets, based on a previous classification of unknown samples, on the partial least squares (PLS) regression model performance has been discussed. As an example, attenuated total reflection (ATR) mid-infrared spectra of deep-fried vegetable oil samples from three botanical origins (olive, sunflower, and corn oil), with increasing polymerized triacylglyceride (PTG) content induced by a deep-frying process, were employed. The use of a one-class-classifier partial least squares-discriminant analysis (PLS-DA) and a rooted binary directed acyclic graph tree provided accurate oil classification. Oil samples fried without foodstuff could be classified correctly, independent of their PTG content. However, class separation of oil samples fried with foodstuff was less evident. The combined use of double-cross model validation with permutation testing was used to validate the obtained PLS-DA classification models, confirming the results. To discuss the usefulness of the selection of an appropriate PLS calibration set, the PTG content was determined by calculating a PLS model based on the previously selected classes. In comparison to a PLS model calculated using a pooled calibration set containing samples from all classes, the root mean square error of prediction could be improved significantly using PLS models based on the calibration sets selected using PLS-DA, ranging between 1.06 and 2.91% (w/w).
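The benefit of class-specific calibration sets can be illustrated with a toy one-dimensional analogue: three "origins" whose response to the analyte differs, fitted once with a pooled calibration set and once per class (illustrative only, not the authors' ATR-IR data):

```python
import numpy as np

rng = np.random.default_rng(3)

# Three classes with different response slopes: a pooled model must compromise,
# while class-specific calibration sets fit each origin separately.
slopes = {0: 1.0, 1: 2.0, 2: 3.0}
X, y, cls = [], [], []
for c, a in slopes.items():
    x = rng.uniform(0, 10, 40)
    X.append(x)
    y.append(a * x + rng.normal(scale=0.1, size=40))
    cls += [c] * 40
X, y, cls = np.concatenate(X), np.concatenate(y), np.array(cls)

def fit_line(x, t):
    A = np.column_stack([x, np.ones_like(x)])
    return np.linalg.lstsq(A, t, rcond=None)[0]

def rmse(x, t, coef):
    return float(np.sqrt(np.mean((coef[0] * x + coef[1] - t) ** 2)))

pooled = fit_line(X, y)
rmse_pooled = rmse(X, y, pooled)
rmse_by_class = float(np.mean([rmse(X[cls == c], y[cls == c],
                                    fit_line(X[cls == c], y[cls == c]))
                               for c in slopes]))
```

The per-class error is far below the pooled error, which is the same effect the PLS-DA-guided calibration set selection exploits for the PTG models.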
Lindner-Lunsford, J. B.; Ellis, S.R.
1984-01-01
The U.S. Geological Survey's Distributed Routing Rainfall-Runoff Model--Version II was calibrated and verified for five urban basins in the Denver metropolitan area. Land-use types in the basins were light commercial, multifamily housing, single-family housing, and a shopping center. The overall accuracy of model predictions of peak flows and runoff volumes was about 15 percent for storms with rainfall intensities of less than 1 inch per hour and runoff volume of greater than 0.01 inch. Predictions generally were unsatisfactory for storms having a rainfall intensity of more than 1 inch per hour, or runoff of 0.01 inch or less. The Distributed Routing Rainfall-Runoff Model-Quality, a multievent runoff-quality model developed by the U.S. Geological Survey, was calibrated and verified on four basins. The model was found to be most useful in the prediction of seasonal loads of constituents in the runoff resulting from rainfall. The model was not very accurate in the prediction of runoff loads of individual constituents. (USGS)
A Testbed for Model Development
NASA Astrophysics Data System (ADS)
Berry, J. A.; Van der Tol, C.; Kornfeld, A.
2014-12-01
Carbon cycle and land-surface models used in global simulations need to be computationally efficient and have a high standard of software engineering. These models also make a number of scaling assumptions to simplify the representation of complex biochemical and structural properties of ecosystems. This makes it difficult to use these models to test new ideas for parameterizations or to evaluate scaling assumptions. The stripped-down nature of these models also makes it difficult to connect with current disciplinary research, which tends to be focused on much more nuanced topics than can be included in the models. In our opinion and experience, this indicates the need for another type of model that can more faithfully represent the complexity of ecosystems and has the flexibility to change or interchange parameterizations and to run optimization codes for calibration. We have used the SCOPE (Soil Canopy Observation, Photochemistry and Energy fluxes) model in this way to develop, calibrate, and test parameterizations for solar induced chlorophyll fluorescence, OCS exchange, and stomatal parameterizations at the canopy scale. Examples of the data sets and procedures used to develop and test new parameterizations are presented.
NASA Astrophysics Data System (ADS)
Karssenberg, D.; Wanders, N.; de Roo, A.; de Jong, S.; Bierkens, M. F.
2013-12-01
Large-scale hydrological models are nowadays mostly calibrated using observed discharge. As a result, a large part of the hydrological system that is not directly linked to discharge, in particular the unsaturated zone, remains uncalibrated, or might be modified unrealistically. Soil moisture observations from satellites have the potential to fill this gap, as these provide the closest thing to a direct measurement of the state of the unsaturated zone, and thus are potentially useful in calibrating unsaturated zone model parameters. This is expected to result in a better identification of the complete hydrological system, potentially leading to improved forecasts of the hydrograph as well. Here we evaluate this added value of remotely sensed soil moisture in calibration of large-scale hydrological models by addressing two research questions: 1) Which parameters of hydrological models can be identified by calibration with remotely sensed soil moisture? 2) Does calibration with remotely sensed soil moisture lead to an improved calibration of hydrological models compared to approaches that calibrate only with discharge, such that this leads to improved forecasts of soil moisture content and discharge as well? To answer these questions we use a dual state and parameter ensemble Kalman filter to calibrate the hydrological model LISFLOOD for the Upper Danube area. Calibration is done with discharge and remotely sensed soil moisture acquired by AMSR-E, SMOS and ASCAT. Four scenarios are studied: no calibration (expert knowledge), calibration on discharge, calibration on remote sensing data (three satellites) and calibration on both discharge and remote sensing data. Using a split-sample approach, the model is calibrated for a period of 2 years and validated for the calibrated model parameters on a validation period of 10 years. 
Results show that calibration with discharge data improves the estimation of groundwater parameters (e.g., the groundwater reservoir constant) and routing parameters. Calibration with only remotely sensed soil moisture results in an accurate calibration of parameters related to land surface processes (e.g., the saturated conductivity of the soil), which is not possible when calibrating on discharge alone. For the upstream area up to 40,000 km2, calibration on both discharge and soil moisture reduces the RMSE of discharge simulations by 10-30% compared to calibration on discharge alone. For discharge in the downstream area, model performance with assimilation of remotely sensed soil moisture is unchanged or slightly decreased, most probably due to the larger relative importance of routing and the contribution of groundwater in downstream areas. When microwave soil moisture is used for calibration, the RMSE of soil moisture simulations decreases from 0.072 m3m-3 to 0.062 m3m-3. The conclusion is that remotely sensed soil moisture holds potential for calibration of hydrological models, leading to a better simulation of soil moisture content throughout the basin and a better simulation of discharge in upstream areas, particularly if discharge observations are sparse.
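The dual state-parameter ensemble Kalman filter update can be sketched in a few lines: the ensemble of augmented [parameter, state] vectors is corrected by a stochastic EnKF analysis step. This is a minimal sketch with a toy one-parameter model, not LISFLOOD code:

```python
import numpy as np

rng = np.random.default_rng(4)

def enkf_update(ensemble, obs, obs_var, H):
    """Stochastic EnKF analysis step on an augmented [parameter, state] ensemble.
    ensemble: (n_ens, n_dim); H: observation operator matrix (n_obs, n_dim)."""
    n_ens = ensemble.shape[0]
    A = ensemble - ensemble.mean(axis=0)          # ensemble anomalies
    HX = ensemble @ H.T                           # predicted observations
    HA = HX - HX.mean(axis=0)
    P_hh = HA.T @ HA / (n_ens - 1) + obs_var * np.eye(H.shape[0])
    P_xh = A.T @ HA / (n_ens - 1)                 # cross-covariance state/obs
    K = P_xh @ np.linalg.inv(P_hh)                # Kalman gain
    perturbed = obs + rng.normal(scale=np.sqrt(obs_var), size=(n_ens, H.shape[0]))
    return ensemble + (perturbed - HX) @ K.T

# Toy model: soil moisture theta = k * forcing; only theta is observed (as by a
# satellite), yet the correlated parameter k is corrected too.
true_k, forcing = 0.5, 2.0
k0 = rng.normal(1.0, 0.3, size=100)               # uncertain parameter ensemble
ens = np.column_stack([k0, k0 * forcing])         # augmented [k, theta]
H = np.array([[0.0, 1.0]])                        # observe soil moisture only
obs = np.array([true_k * forcing])
ens_a = enkf_update(ens, obs, obs_var=0.01, H=H)
```

Because parameter and state are jointly represented in the ensemble covariance, assimilating the soil moisture observation pulls the parameter toward its true value, which is the mechanism behind the dual state and parameter calibration used in the study.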
NASA Astrophysics Data System (ADS)
Pham, H. V.; Parashar, R.; Sund, N. L.; Pohlmann, K.
2017-12-01
Pahute Mesa, located in the north-western region of the Nevada National Security Site, is an area where numerous underground nuclear tests were conducted. The mesa contains several fractured aquifers that can potentially provide high permeability pathways for migration of radionuclides away from testing locations. The BULLION Forced-Gradient Experiment (FGE) conducted on Pahute Mesa injected and pumped solute and colloid tracers from a system of three wells for obtaining site-specific information about the transport of radionuclides in fractured rock aquifers. This study aims to develop reliable three-dimensional discrete fracture network (DFN) models to simulate the BULLION FGE as a means for computing realistic ranges of important parameters describing fractured rock. Multiple conceptual DFN models were developed using dfnWorks, a parallelized computational suite developed by Los Alamos National Laboratory, to simulate flow and conservative particle movement in subsurface fractured rocks downgradient from the BULLION test. The model domain is 100x200x100 m and includes the three tracer-test wells of the BULLION FGE and the Pahute Mesa Lava-flow aquifer. The model scenarios considered differ from each other in terms of boundary conditions and fracture density. For each conceptual model, a number of statistically equivalent fracture network realizations were generated using data from fracture characterization studies. We adopt the covariance matrix adaptation evolution strategy (CMA-ES), a stochastic, derivative-free optimization method, to calibrate the DFN models using groundwater levels and tracer breakthrough data obtained from the three wells. Models of fracture apertures based on fracture type and size are proposed and the values of apertures in each model are estimated during model calibration.
The ranges of fracture aperture values resulting from this study are expected to enhance understanding of radionuclide transport in fractured rocks and support development of improved large-scale flow and transport models for Pahute Mesa.
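The derivative-free calibration loop can be illustrated with a simplified (1+1) evolution strategy with success-based step-size control, used here as a stand-in for full CMA-ES; the forward model is a hypothetical toy, not a dfnWorks simulation:

```python
import numpy as np

rng = np.random.default_rng(5)

def simulate_heads(apertures):
    """Toy forward model standing in for the DFN flow simulation (hypothetical):
    wider apertures -> higher transmissivity -> lower heads at the wells."""
    return 10.0 - 2.0 * np.log(apertures)

# Synthetic "observed" heads generated from known true apertures (meters).
true_apertures = np.array([1.5e-3, 3.0e-3, 0.8e-3])
observed = simulate_heads(true_apertures)

def misfit(log_ap):
    """Sum-of-squares misfit in log-aperture space."""
    return float(np.sum((simulate_heads(np.exp(log_ap)) - observed) ** 2))

# Simplified (1+1) evolution strategy: mutate, keep improvements, and grow or
# shrink the step size depending on success (a stand-in for CMA-ES).
x = np.log(np.full(3, 1.0e-3))     # initial guess: all apertures 1 mm
fx = misfit(x)
sigma = 0.5
for _ in range(400):
    cand = x + sigma * rng.normal(size=3)
    fc = misfit(cand)
    if fc < fx:
        x, fx = cand, fc
        sigma *= 1.2               # expand step after a success
    else:
        sigma *= 0.98              # contract step after a failure
calibrated_apertures = np.exp(x)
```

Optimizing in log space keeps the apertures positive and handles their wide dynamic range, a common choice when calibrating hydraulic parameters.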
Sun, Tong; Xu, Wen-Li; Hu, Tian; Liu, Mu-Hua
2013-12-01
The objective of the present research was to assess the soluble solids content (SSC) of Nanfeng mandarin by visible/near-infrared (Vis/NIR) spectroscopy combined with a new variable selection method, to simplify the prediction model, and to improve its performance. A total of 300 Nanfeng mandarin samples were used; the calibration, validation, and prediction sets contained 150, 75, and 75 samples, respectively. Vis/NIR spectra were acquired with a QualitySpec spectrometer over the wavelength range of 350-1000 nm. Uninformative variables elimination (UVE) was used to remove wavelength variables carrying little SSC information, and independent component analysis (ICA) was then used to extract independent components (ICs) from the reduced spectra. Finally, least squares support vector machine (LS-SVM) regression was used to develop calibration models for SSC from the extracted ICs, and the 75 prediction samples, which had not been used for model development, were used to evaluate model performance. The results indicate that Vis/NIR spectroscopy combined with UVE-ICA-LS-SVM is suitable for assessing the SSC of Nanfeng mandarin with high prediction precision. UVE-ICA is an effective method to eliminate uninformative wavelength variables, extract important spectral information, simplify the prediction model, and improve its performance. The SSC model developed by UVE-ICA-LS-SVM is superior to those developed by PLS, PCA-LS-SVM, or ICA-LS-SVM; the coefficients of determination and root mean square errors in the calibration, validation, and prediction sets were 0.978 and 0.230%, 0.965 and 0.301%, and 0.967 and 0.292%, respectively.
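The UVE step can be illustrated with synthetic data. This sketch follows the usual uninformative-variable-elimination recipe (append pure-noise variables, fit leave-one-out regressions, keep only real variables whose reliability exceeds the best noise score); it uses simple ridge fits rather than the paper's modeling chain, and all data are simulated assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "spectra": 100 samples x 20 wavelengths; only the first 5
# wavelengths carry SSC-like information (all values are simulated)
n, p, p_inf = 100, 20, 5
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:p_inf] = rng.uniform(0.5, 1.5, p_inf)
y = X @ beta + 0.1 * rng.standard_normal(n)

# UVE: append noise variables, fit leave-one-out regressions, and keep
# only real variables more "reliable" than the strictest noise variable
noise = rng.standard_normal((n, p))
Xa = np.hstack([X, noise])

coefs = np.empty((n, 2 * p))
for i in range(n):                                   # leave-one-out fits
    mask = np.arange(n) != i
    A = Xa[mask]
    coefs[i] = np.linalg.solve(A.T @ A + 1e-3 * np.eye(2 * p), A.T @ y[mask])

reliability = np.abs(coefs.mean(axis=0) / coefs.std(axis=0))
cutoff = reliability[p:].max()                       # best noise score
keep = np.where(reliability[:p] > cutoff)[0]         # retained wavelengths
```

The retained wavelengths would then feed the ICA extraction and LS-SVM regression stages described in the abstract.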
IMPROVED SPECTROPHOTOMETRIC CALIBRATION OF THE SDSS-III BOSS QUASAR SAMPLE
DOE Office of Scientific and Technical Information (OSTI.GOV)
Margala, Daniel; Kirkby, David; Dawson, Kyle
2016-11-10
We present a model for spectrophotometric calibration errors in observations of quasars from the third generation of the Sloan Digital Sky Survey Baryon Oscillation Spectroscopic Survey (BOSS) and describe the correction procedure we have developed and applied to this sample. Calibration errors are primarily due to atmospheric differential refraction and guiding offsets during each exposure. The corrections potentially reduce the systematics for any studies of BOSS quasars, including the measurement of baryon acoustic oscillations using the Ly α forest. Our model suggests that, on average, the observed quasar flux in BOSS is overestimated by ∼19% at 3600 Å and underestimated by ∼24% at 10,000 Å. Our corrections for the entire BOSS quasar sample are publicly available.
The Adaptive Calibration Model of stress responsivity
Ellis, Bruce J.; Shirtcliff, Elizabeth A.
2010-01-01
This paper presents the Adaptive Calibration Model (ACM), an evolutionary-developmental theory of individual differences in the functioning of the stress response system. The stress response system has three main biological functions: (1) to coordinate the organism’s allostatic response to physical and psychosocial challenges; (2) to encode and filter information about the organism’s social and physical environment, mediating the organism’s openness to environmental inputs; and (3) to regulate the organism’s physiology and behavior in a broad range of fitness-relevant areas including defensive behaviors, competitive risk-taking, learning, attachment, affiliation and reproductive functioning. The information encoded by the system during development feeds back on the long-term calibration of the system itself, resulting in adaptive patterns of responsivity and individual differences in behavior. Drawing on evolutionary life history theory, we build a model of the development of stress responsivity across life stages, describe four prototypical responsivity patterns, and discuss the emergence and meaning of sex differences. The ACM extends the theory of biological sensitivity to context (BSC) and provides an integrative framework for future research in the field. PMID:21145350
On-orbit radiometric calibration over time and between spacecraft using the moon
Kieffer, H.H.; Stone, T.C.; Barnes, R.A.; Bender, S.; Eplee, R.E.; Mendenhall, J.; Ong, L.; ,
2002-01-01
The Robotic Lunar Observatory (ROLO) project has developed a spectral irradiance model of the Moon that accounts for variations with lunar phase through the bright half of a month, lunar librations, and the location of an Earth-orbiting spacecraft. The methodology of comparing spacecraft observations of the Moon with this model has been developed to a set of standardized procedures so that comparisons can be readily made. In the cases where observations extend over several years (e.g., SeaWiFS), instrument response degradation has been determined with precision of about 0.1% per year. Because of the strong dependence of lunar irradiance on geometric angles, observations by two spacecraft cannot be directly compared unless acquired at the same time and location. Rather, the lunar irradiance based on each spacecraft instrument calibration can be compared with the lunar irradiance model. Even single observations by an instrument allow inter-comparison of its radiometric scale with other instruments participating in the lunar calibration program. Observations by SeaWiFS, ALI, Hyperion and MTI are compared here.
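The degradation-trend estimate quoted above (precision of about 0.1% per year) amounts to fitting a trend to the instrument/model irradiance ratio over time. A minimal sketch, with an assumed synthetic ratio series standing in for real instrument/ROLO comparisons:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical time series: instrument-measured lunar irradiance divided by
# the ROLO model prediction for the same viewing geometry, once per month
years = np.arange(0, 8, 1 / 12.0)                 # 8 years of observations
true_rate = -0.004                                # assumed -0.4 %/yr degradation
ratio = np.exp(true_rate * years) * (1 + 0.001 * rng.standard_normal(years.size))

# The ROLO model removes the strong phase/libration dependence, so the
# residual trend in log(ratio) reflects the sensor's response degradation
slope, _ = np.polyfit(years, np.log(ratio), 1)
rate_percent_per_year = 100 * slope
```

Because the geometric dependence is divided out by the model, ratios taken at very different lunar phases and librations can all contribute to one trend fit.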
NASA Technical Reports Server (NTRS)
Abid, R.; Speziale, C. G.
1993-01-01
Turbulent channel flow and homogeneous shear flow have served as basic building block flows for the testing and calibration of Reynolds stress models. A direct theoretical connection is made between homogeneous shear flow in equilibrium and the log-layer of fully-developed turbulent channel flow. It is shown that if a second-order closure model is calibrated to yield good equilibrium values for homogeneous shear flow it will also yield good results for the log-layer of channel flow provided that the Rotta coefficient is not too far removed from one. Most of the commonly used second-order closure models introduce an ad hoc wall reflection term in order to mask deficient predictions for the log-layer of channel flow that arise either from an inaccurate calibration of homogeneous shear flow or from the use of a Rotta coefficient that is too large. Illustrative model calculations are presented to demonstrate this point which has important implications for turbulence modeling.
Select Components and Finish System Design of a Window Air Conditioner with Propane
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shen, Bo; Abdelaziz, Omar
This report describes the technical targets for developing a high-efficiency window air conditioner (WAC) using propane (R-290). The baseline unit selected for this activity is a GE R-410A WAC. We established a collaboration with a Chinese rotary compressor manufacturer to select an R-290 compressor. We first modelled and calibrated the WAC system model using R-410A. Next, we applied the calibrated system model to design the R-290 WAC and determined strategies to reduce the system charge below 260 grams while achieving the capacity and efficiency targets.
NASA Astrophysics Data System (ADS)
Shafii, M.; Tolson, B.; Matott, L. S.
2012-04-01
Hydrologic modeling has benefited from significant developments over the past two decades. As a result, higher levels of complexity have been built into hydrologic models, which makes the model evaluation process (parameter estimation via calibration and uncertainty analysis) more challenging. In order to avoid unreasonable parameter estimates, many researchers have suggested implementation of multi-criteria calibration schemes. Furthermore, for predictive hydrologic models to be useful, proper consideration of uncertainty is essential. Consequently, recent research has emphasized comprehensive model assessment procedures in which multi-criteria parameter estimation is combined with statistically based uncertainty analysis routines such as Bayesian inference using Markov chain Monte Carlo (MCMC) sampling. Such a procedure relies on the use of formal likelihood functions based on statistical assumptions; moreover, Bayesian inference structured on MCMC samplers requires a considerably large number of simulations. Due to these issues, especially in complex nonlinear hydrological models, a variety of alternative informal approaches have been proposed for uncertainty analysis in the multi-criteria context. This study explores a number of such informal uncertainty analysis techniques in multi-criteria calibration of hydrological models. The informal methods addressed in this study are (i) Pareto optimality, which quantifies parameter uncertainty using the Pareto solutions, (ii) DDS-AU, which uses the weighted sum of objective functions to derive the prediction limits, and (iii) GLUE, which describes the total uncertainty through identification of behavioral solutions. The main objective is to compare such methods with MCMC-based Bayesian inference with respect to factors such as computational burden and predictive capacity, which are evaluated based on multiple comparative measures.
The measures for comparison are calculated both for calibration and evaluation periods. The uncertainty analysis methodologies are applied to a simple 5-parameter rainfall-runoff model, called HYMOD.
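The GLUE procedure mentioned in (iii) is simple to sketch: Monte Carlo parameter sampling, an informal likelihood, a behavioral threshold, and percentile prediction limits from the retained runs. The rainfall-runoff model below is a hypothetical one-parameter linear reservoir, not HYMOD itself, and the threshold value is an assumption.

```python
import numpy as np

rng = np.random.default_rng(3)

def simple_model(k, rain):
    """Hypothetical one-parameter linear-reservoir stand-in for HYMOD."""
    q, store = np.zeros_like(rain), 0.0
    for t, r in enumerate(rain):
        store += r
        q[t] = k * store        # outflow proportional to storage
        store -= q[t]
    return q

rain = rng.exponential(2.0, 200)
q_obs = simple_model(0.3, rain) + 0.05 * rng.standard_normal(200)

# GLUE: Monte Carlo sampling, an informal likelihood (Nash-Sutcliffe
# efficiency), a behavioral threshold, and percentile prediction limits
ks = rng.uniform(0.01, 0.99, 2000)

def nse(sim):
    return 1 - np.sum((sim - q_obs) ** 2) / np.sum((q_obs - q_obs.mean()) ** 2)

sims = np.array([simple_model(k, rain) for k in ks])
scores = np.array([nse(s) for s in sims])
behavioral = scores > 0.8                                   # behavioral runs
lo, hi = np.percentile(sims[behavioral], [5, 95], axis=0)   # 90% limits
```

The width of the [lo, hi] band is GLUE's informal expression of total predictive uncertainty, in contrast to the formal posterior intervals an MCMC sampler would produce.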
Automated model optimisation using the Cylc workflow engine (Cyclops v1.0)
NASA Astrophysics Data System (ADS)
Gorman, Richard M.; Oliver, Hilary J.
2018-06-01
Most geophysical models include many parameters that are not fully determined by theory, and can be tuned
to improve the model's agreement with available data. We might attempt to automate this tuning process in an objective way by employing an optimisation algorithm to find the set of parameters that minimises a cost function derived from comparing model outputs with measurements. A number of algorithms are available for solving optimisation problems, in various programming languages, but interfacing such software to a complex geophysical model simulation presents certain challenges. To tackle this problem, we have developed an optimisation suite (Cyclops) based on the Cylc workflow engine that implements a wide selection of optimisation algorithms from the NLopt Python toolbox (Johnson, 2014). The Cyclops optimisation suite can be used to calibrate any modelling system that has itself been implemented as a (separate) Cylc model suite, provided it includes computation and output of the desired scalar cost function. A growing number of institutions are using Cylc to orchestrate complex distributed suites of interdependent cycling tasks within their operational forecast systems, and in such cases application of the optimisation suite is particularly straightforward. As a test case, we applied Cyclops to calibrate a global implementation of the WAVEWATCH III (v4.18) third-generation spectral wave model, forced by ERA-Interim input fields. This was calibrated over a 1-year period (1997), before applying the calibrated model to a full (1979-2016) wave hindcast. The chosen error metric was the spatial average of the root mean square error of hindcast significant wave height compared with collocated altimeter records. We describe the results of a calibration in which up to 19 parameters were optimised.
NASA Astrophysics Data System (ADS)
Hayley, Kevin; Schumacher, J.; MacMillan, G. J.; Boutin, L. C.
2014-05-01
Expanding groundwater datasets collected by automated sensors, and improved groundwater databases, have caused a rapid increase in calibration data available for groundwater modeling projects. Improved methods of subsurface characterization have increased the need for model complexity to represent geological and hydrogeological interpretations. The larger calibration datasets and the need for meaningful predictive uncertainty analysis have both increased the degree of parameterization necessary during model calibration. Due to these competing demands, modern groundwater modeling efforts require a massive degree of parallelization in order to remain computationally tractable. A methodology for the calibration of highly parameterized, computationally expensive models using the Amazon EC2 cloud computing service is presented. The calibration of a regional-scale model of groundwater flow in Alberta, Canada, is provided as an example. The model covers a 30,865-km2 domain and includes 28 hydrostratigraphic units. Aquifer properties were calibrated to more than 1,500 static hydraulic head measurements and 10 years of measurements during industrial groundwater use. Three regionally extensive aquifers were parameterized (with spatially variable hydraulic conductivity fields), as was the areal recharge boundary condition, leading to 450 adjustable parameters in total. The PEST-based model calibration was parallelized on up to 250 computing nodes located on Amazon's EC2 servers.
Using the cloud to speed-up calibration of watershed-scale hydrologic models (Invited)
NASA Astrophysics Data System (ADS)
Goodall, J. L.; Ercan, M. B.; Castronova, A. M.; Humphrey, M.; Beekwilder, N.; Steele, J.; Kim, I.
2013-12-01
This research focuses on using the cloud to address computational challenges associated with hydrologic modeling. One example is calibration of a watershed-scale hydrologic model, which can take days of execution time on typical computers. While parallel algorithms for model calibration exist and some researchers have used multi-core computers or clusters to run them, these solutions do not fully address the challenge because (i) calibration can still be too time consuming even on multicore personal computers and (ii) few in the community have the time and expertise needed to manage a compute cluster. Given this, another option that we are exploring through this work is the use of the cloud for speeding up calibration of watershed-scale hydrologic models. The cloud used in this capacity provides a means for renting a specific number and type of machines for only the time needed to perform a calibration run. The cloud allows one to precisely balance the duration of the calibration with the financial costs so that, if the budget allows, the calibration can be performed more quickly by renting more machines. Focusing specifically on the SWAT hydrologic model and a parallel version of the DDS calibration algorithm, we show significant speed-up across a range of watershed sizes using up to 256 cores to perform a model calibration. The tool provides a simple web-based user interface and the ability to monitor job submission and progress during the calibration. Finally, the talk concludes with initial work to leverage the cloud for other tasks associated with hydrologic modeling, including preparing inputs for constructing place-based hydrologic models.
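The serial form of the DDS algorithm used here is compact enough to sketch directly. The objective below is a toy quadratic stand-in for a SWAT misfit evaluation, and the tuning constants follow commonly published defaults (assumed here, not taken from this talk); the parallel version distributes the objective evaluations.

```python
import numpy as np

rng = np.random.default_rng(4)

def dds(objective, lo, hi, max_iter=500, r=0.2):
    """Dynamically Dimensioned Search: perturb a randomly chosen subset of
    parameters, shrinking the subset as the iteration budget is spent, and
    accept greedily. r scales perturbations by each parameter's range."""
    x_best = lo + rng.uniform(size=lo.size) * (hi - lo)
    f_best = objective(x_best)
    for i in range(1, max_iter + 1):
        p_select = 1.0 - np.log(i) / np.log(max_iter)   # shrinking probability
        select = rng.uniform(size=lo.size) < p_select
        if not select.any():                            # always perturb >= 1
            select[rng.integers(lo.size)] = True
        x = x_best.copy()
        x[select] += r * (hi - lo)[select] * rng.standard_normal(select.sum())
        x = np.clip(x, lo, hi)
        f = objective(x)
        if f < f_best:                                  # greedy acceptance
            x_best, f_best = x, f
    return x_best, f_best

# Toy stand-in for a SWAT misfit: quadratic bowl around assumed "true" values
true_p = np.array([0.4, 0.6, 0.1, 0.9])
best_x, best_f = dds(lambda x: np.sum((x - true_p) ** 2),
                     np.zeros(4), np.ones(4))
```

The early iterations perturb most parameters (global search) while the later ones perturb only one or two (local refinement), which is what makes DDS effective within a fixed budget of expensive model runs.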
NASA Astrophysics Data System (ADS)
Nouri, N. M.; Mostafapour, K.; Kamran, M.
2018-02-01
In a closed water-tunnel circuit, multi-component strain gauge force and moment sensors (also known as balances) are generally used to measure hydrodynamic forces and moments acting on scaled models. These balances are periodically calibrated by static loading, and their performance and accuracy depend significantly on the rig and the method of calibration. In this research, a new calibration rig was designed and constructed to calibrate multi-component internal strain gauge balances. The calibration rig has six degrees of freedom and six different component-loading structures that can be applied separately or synchronously. The system was designed around the applicability of formal experimental design techniques, using gravity for balance loading and for balance positioning and alignment relative to gravity. To evaluate the calibration rig, a six-component internal balance developed by Iran University of Science and Technology was calibrated using response surface methodology. According to the results, the calibration rig met all design criteria. This rig provides the means by which various formal experimental design techniques can be implemented. The simplicity of the rig saves time and money in the design of experiments and in balance calibration while simultaneously increasing the accuracy of these activities.
SCALA: In situ calibration for integral field spectrographs
NASA Astrophysics Data System (ADS)
Lombardo, S.; Küsters, D.; Kowalski, M.; Aldering, G.; Antilogus, P.; Bailey, S.; Baltay, C.; Barbary, K.; Baugh, D.; Bongard, S.; Boone, K.; Buton, C.; Chen, J.; Chotard, N.; Copin, Y.; Dixon, S.; Fagrelius, P.; Feindt, U.; Fouchez, D.; Gangler, E.; Hayden, B.; Hillebrandt, W.; Hoffmann, A.; Kim, A. G.; Leget, P.-F.; McKay, L.; Nordin, J.; Pain, R.; Pécontal, E.; Pereira, R.; Perlmutter, S.; Rabinowitz, D.; Reif, K.; Rigault, M.; Rubin, D.; Runge, K.; Saunders, C.; Smadja, G.; Suzuki, N.; Taubenberger, S.; Tao, C.; Thomas, R. C.; Nearby Supernova Factory
2017-11-01
Aims: The scientific yield of current and future optical surveys is increasingly limited by systematic uncertainties in the flux calibration. This is the case for type Ia supernova (SN Ia) cosmology programs, where an improved calibration directly translates into improved cosmological constraints. Current methodology rests on models of stars. Here we aim to obtain flux calibration that is traceable to state-of-the-art detector-based calibration. Methods: We present the SNIFS Calibration Apparatus (SCALA), a color (relative) flux calibration system developed for the SuperNova integral field spectrograph (SNIFS), operating at the University of Hawaii 2.2 m (UH 88) telescope. Results: By comparing the color trend of the illumination generated by SCALA during two commissioning runs, and to previous laboratory measurements, we show that we can determine the light emitted by SCALA with a long-term repeatability better than 1%. We describe the calibration procedure necessary to control for system aging. We present measurements of the SNIFS throughput as estimated by SCALA observations. Conclusions: The SCALA calibration unit is now fully deployed at the UH 88 telescope, and with it color-calibration between 4000 Å and 9000 Å is stable at the percent level over a one-year baseline.
Evaluation of the CMODIS-measured radiance
NASA Astrophysics Data System (ADS)
Mao, Zhihua; Pan, Delu; Huang, Haiqing
2006-12-01
A Chinese Moderate Resolution Imaging Spectrometer (CMODIS) on the "Shenzhou-3" spaceship was launched on March 25, 2002. CMODIS has 34 channels: 30 visible and near-infrared channels and 4 infrared channels. The 30 channels are 20 nm wide, with wavelengths ranging from 403 nm to 1023 nm. The radiance calibration of CMODIS was completed through laboratory measurements before launch, and the laboratory calibration coefficients were used to calibrate the CMODIS raw data. Since no on-board absolute radiance calibration devices (internal lamp systems, or calibration systems based on solar reflectance and lunar irradiance) were installed with the sensor, the accuracy of the CMODIS-measured radiance is a key question for remote sensing data processing and ocean applications. A new model was developed as a program to evaluate the accuracy of the calibrated radiance measured by CMODIS at the top of the atmosphere (TOA). The program computes the Rayleigh scattering radiance and aerosol scattering radiance together with the water-leaving radiance component to deduce the total radiance at TOA under observation conditions similar to those of CMODIS. Both multiple-scattering and atmospheric absorption effects are taken into account in the radiative transfer model to improve the accuracy of the atmospheric scattering radiances. The model was used to deduce the spectral radiances at TOA, which were compared with radiances measured by the Sea-viewing Wide Field-of-view Sensor (SeaWiFS) to check the performance of the model; the spectral radiances from the model show only small differences from those of SeaWiFS. The spectral radiances of the model can therefore be taken as reference values to evaluate the accuracy of the CMODIS calibrated radiance. The relative differences between the two radiances are large, ranging from 16% to 300%; in particular, the CMODIS radiances at the near-infrared channels are more than twice the model values.
It is shown that the calibration coefficients from the laboratory measurements are not reliable and that the CMODIS radiance needs to be recalibrated before the data are used for oceanographic applications. The results show that the model is effective in evaluating the CMODIS sensor and can easily be modified to evaluate other ocean color satellite sensors.
Automatic calibration system for analog instruments based on DSP and CCD sensor
NASA Astrophysics Data System (ADS)
Lan, Jinhui; Wei, Xiangqin; Bai, Zhenlong
2008-12-01
Currently, the calibration of analog measurement instruments is mainly performed manually, and many problems remain to be solved. In this paper, an automatic calibration system (ACS) based on a Digital Signal Processor (DSP) and a Charge Coupled Device (CCD) sensor is developed and a real-time calibration algorithm is presented. In the ACS, a TI DM643 DSP processes the data received by the CCD sensor and the outcome is displayed on a Liquid Crystal Display (LCD) screen. In the algorithm, the pointer region is first extracted to improve calibration speed. A mathematical model of the pointer is then built to thin the pointer and determine the instrument's reading. In numerous experiments, a single reading took no more than 20 milliseconds, compared with several seconds when performed manually. At the same time, the reading error satisfies the instruments' accuracy requirements. It is proven that the automatic calibration system can effectively accomplish the calibration of analog measurement instruments.
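The pointer-reading step can be sketched as a line fit plus a linear dial scale. The pixel cloud, angles, and scale endpoints below are hypothetical, and the principal-axis fit is a stand-in for the paper's pointer-thinning mathematical model.

```python
import numpy as np

rng = np.random.default_rng(6)

# Hypothetical pointer-region pixels (x, y) extracted from a CCD frame;
# a straight-line fit through the pixel cloud stands in for the paper's
# pointer "math model" and thinning step
true_angle = np.deg2rad(30)                       # assumed dial angle
r = rng.uniform(0, 50, 200)                       # distance along the pointer
pts = np.stack([r * np.sin(true_angle), r * np.cos(true_angle)], axis=1)
pts += 0.5 * rng.standard_normal(pts.shape)       # pixelation noise

centered = pts - pts.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
axis = vt[0]                                      # principal axis of the cloud
angle = np.arctan2(abs(axis[0]), abs(axis[1]))    # fold into 0..90 degrees

def reading_from_angle(theta, theta_min, theta_max, v_min, v_max):
    """Linear dial scale: interpolate between the end-stop angles."""
    return v_min + (theta - theta_min) / (theta_max - theta_min) * (v_max - v_min)

value = reading_from_angle(angle, 0.0, np.pi / 2, 0.0, 100.0)
```

On a DSP, the same least-squares line fit can be computed incrementally from pixel coordinates, which is consistent with the millisecond-level reading times reported.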
HST/WFC3 flux calibration ladder: Vega
NASA Astrophysics Data System (ADS)
Deustua, Susana E.; Bohlin, Ralph; Pirzkal, Nor; MacKenty, John
2014-08-01
Vega is one of only a few stars calibrated against an SI-traceable blackbody, and is the historical flux standard. Photometric zeropoints of the Hubble Space Telescope's instruments rely on Vega, through the transfer of its calibration via stellar atmosphere models to the suite of standard stars. HST's recently implemented scan mode has enabled us to develop a path to an absolute, SI-traceable calibration for HST IR observations. To fill in the crucial gap between 0.9 and 1.7 micron in the absolute calibration, we acquired -1st order spectra of Vega with the two WFC3 infrared grisms. At the same time, we have improved the calibration of the -1st orders of both WFC3 IR grisms and extended the dynamic range of WFC3 science observations by a factor of 10,000. We describe our progress to date on the WFC3 `flux calibration ladder' project to provide currently needed accurate zeropoint measurements in the IR.
Absolute calibration of Doppler coherence imaging velocity images
NASA Astrophysics Data System (ADS)
Samuell, C. M.; Allen, S. L.; Meyer, W. H.; Howard, J.
2017-08-01
A new technique has been developed for absolutely calibrating a Doppler Coherence Imaging Spectroscopy interferometer for measuring plasma ion and neutral velocities. An optical model of the interferometer is used to generate zero-velocity reference images for the plasma spectral line of interest from a calibration source some spectral distance away. Validation of this technique using a tunable diode laser demonstrated an accuracy better than 0.2 km/s over an extrapolation range of 3.5 nm; a two order of magnitude improvement over linear approaches. While a well-characterized and very stable interferometer is required, this technique opens up the possibility of calibrated velocity measurements in difficult viewing geometries and for complex spectral line-shapes.
Consideration of Real World Factors Influencing Greenhouse ...
This presentation discusses a variety of factors that influence simulated fuel economy and GHG emissions and are often overlooked, along with updates made to ALPHA based on benchmarking data observed across a range of vehicles and transmissions. ALPHA model calibration is also examined, focusing on developing generic calibrations for driver behavior, transmission gear selection, and torque converter lockup. In addition, the derivation of correction factors needed to estimate cold-start emission results is shown. The goal is to provide an overview of the ALPHA tool with additional focus on recent updates, first presenting the approach for validating and calibrating ALPHA to match particular vehicles in a general sense, then examining the individual losses and calibration factors likely to influence fuel economy.
ERIC Educational Resources Information Center
Tobias, Robert
2009-01-01
This article presents a social psychological model of prospective memory and habit development. The model is based on relevant research literature, and its dynamics were investigated by computer simulations. Time-series data from a behavior-change campaign in Cuba were used for calibration and validation of the model. The model scored well in…
Evaluation of Potential Evapotranspiration from a Hydrologic Model on a National Scale
NASA Astrophysics Data System (ADS)
Hakala, Kirsti; Markstrom, Steven; Hay, Lauren
2015-04-01
The U.S. Geological Survey has developed a National Hydrologic Model (NHM) to support coordinated, comprehensive, and consistent hydrologic model development and to facilitate the application of simulations at the scale of the continental U.S. The NHM has a consistent geospatial fabric for modeling, consisting of over 100,000 hydrologic response units (HRUs). Each HRU requires accurate parameter estimates, some of which are attained from automated calibration. However, improved calibration can be achieved by initially utilizing as many parameters as possible from national data sets. This presentation investigates the effectiveness of calculating potential evapotranspiration (PET) parameters based on mean monthly values from the NOAA PET Atlas. Additional PET products are then used to evaluate the PET parameters. Effectively utilizing existing national-scale data sets can simplify the effort of establishing a robust NHM.
Beyond allostatic load: rethinking the role of stress in regulating human development.
Ellis, Bruce J; Del Giudice, Marco
2014-02-01
How do exposures to stress affect biobehavioral development and, through it, psychiatric and biomedical disorder? In the health sciences, the allostatic load model provides a widely accepted answer to this question: stress responses, while essential for survival, have negative long-term effects that promote illness. Thus, the benefits of mounting repeated biological responses to threat are traded off against costs to mental and physical health. The adaptive calibration model, an evolutionary-developmental theory of stress-health relations, extends this logic by conceptualizing these trade-offs as decision nodes in allocation of resources. Each decision node influences the next in a chain of resource allocations that become instantiated in the regulatory parameters of stress response systems. Over development, these parameters filter and embed information about key dimensions of environmental stress and support, mediating the organism's openness to environmental inputs, and function to regulate life history strategies to match those dimensions. Drawing on the adaptive calibration model, we propose that consideration of biological fitness trade-offs, as delineated by life history theory, is needed to more fully explain the complex relations between developmental exposures to stress, stress responsivity, behavioral strategies, and health. We conclude that the adaptive calibration model and allostatic load model are only partially complementary and, in some cases, support different approaches to intervention. In the long run, the field may be better served by a model informed by life history theory that addresses the adaptive role of stress response systems in regulating alternative developmental pathways.
A Calibration Method for Nanowire Biosensors to Suppress Device-to-device Variation
Ishikawa, Fumiaki N.; Curreli, Marco; Chang, Hsiao-Kang; Chen, Po-Chiang; Zhang, Rui; Cote, Richard J.; Thompson, Mark E.; Zhou, Chongwu
2009-01-01
Nanowire/nanotube biosensors have stimulated significant interest; however, the inevitable device-to-device variation in biosensor performance remains a great challenge. We have developed an analytical method to calibrate nanowire biosensor responses that can significantly suppress the device-to-device variation in sensing response. The method is based on our discovery of a strong correlation between the biosensor gate dependence (dIds/dVg) and the absolute response (absolute change in current, ΔI). In2O3 nanowire based biosensors for streptavidin detection were used as the model system. Studying the liquid gate effect and ionic concentration dependence of streptavidin sensing indicates that electrostatic interaction is the dominant mechanism for the sensing response. Based on this sensing mechanism and transistor physics, a linear correlation between the absolute sensor response (ΔI) and the gate dependence (dIds/dVg) is predicted and confirmed experimentally. Using this correlation, a calibration method was developed in which the absolute response is divided by dIds/dVg for each device; the calibrated responses from different devices behaved almost identically. Compared to the common normalization method (normalizing the conductance/resistance/current by the initial value), this calibration method was shown to be advantageous using a conventional transistor model. The method presented here substantially suppresses device-to-device variation, allowing the use of nanosensors in large arrays. PMID:19921812
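The calibration itself is a one-line operation once dIds/dVg is measured for each device. A sketch with simulated device data (the proportionality constant and noise level are assumptions, not measurements from the paper):

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical device array: each nanowire's absolute response dI scales
# with its own transconductance dIds/dVg, mimicking the linear correlation
# reported in the paper; the factor 3.0 and the 2% noise are assumed
n_dev = 8
gm = rng.uniform(0.5, 2.0, n_dev)                # dIds/dVg per device (a.u.)
delta_I = 3.0 * gm * (1 + 0.02 * rng.standard_normal(n_dev))

calibrated = delta_I / gm                        # the proposed calibration

# Device-to-device spread (coefficient of variation) before and after
cv_raw = delta_I.std() / delta_I.mean()
cv_cal = calibrated.std() / calibrated.mean()
```

Dividing by the transconductance removes the device-specific gain, so the calibrated responses cluster tightly even when the raw currents vary severalfold.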
Igne, Benoît; Drennen, James K; Anderson, Carl A
2014-01-01
Changes in raw materials and process wear and tear can have significant effects on the prediction error of near-infrared calibration models. When the variability that is present during routine manufacturing is not included in the calibration, test, and validation sets, the long-term performance and robustness of the model will be limited. Nonlinearity is a major source of interference. In near-infrared spectroscopy, nonlinearity can arise from light path-length differences caused by differences in particle size or density. The usefulness of support vector machine (SVM) regression for handling nonlinearity and improving the robustness of calibration models was evaluated in scenarios where the calibration set did not include all the variability present in the test set. Compared to partial least squares (PLS) regression, SVM regression was less affected by physical (particle size) and chemical (moisture) differences. The linearity of the SVM-predicted values was also improved. Nevertheless, although visualization and interpretation tools have been developed to enhance the usability of SVM-based methods, work remains to be done to provide chemometricians in the pharmaceutical industry with a regression method that can supplement PLS-based methods.
A Robust Bayesian Random Effects Model for Nonlinear Calibration Problems
Fong, Y.; Wakefield, J.; De Rosa, S.; Frahm, N.
2013-01-01
In the context of a bioassay or an immunoassay, calibration means fitting a curve, usually nonlinear, through the observations collected on a set of samples containing known concentrations of a target substance, and then using the fitted curve and observations collected on samples of interest to predict the concentrations of the target substance in these samples. Recent technological advances have greatly improved our ability to quantify minute amounts of substance from a tiny volume of biological sample. This has in turn led to a need to improve statistical methods for calibration. In this paper, we focus on developing calibration methods robust to dependent outliers. We introduce a novel normal mixture model with dependent error terms to model the experimental noise. In addition, we propose a re-parameterization of the five-parameter logistic nonlinear regression model that allows us to better incorporate prior information. We examine the performance of our methods with simulation studies and show that they lead to a substantial increase in performance measured in terms of mean squared error of estimation and a measure of the average prediction accuracy. A real data example from the HIV Vaccine Trials Network Laboratory is used to illustrate the methods. PMID:22551415
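A common starting point for such assay calibration is the five-parameter logistic (5PL) curve. The sketch below fits a standard 5PL (not the paper's robust Bayesian re-parameterization) to a simulated standard curve and inverts it to recover a concentration; all numbers are fabricated, not HIV Vaccine Trials Network data:

```python
import numpy as np
from scipy.optimize import curve_fit

def five_pl(x, a, d, c, b, g):
    """Five-parameter logistic: a = lower asymptote, d = upper asymptote,
    c = mid-range concentration, b = slope, g = asymmetry."""
    return d + (a - d) / (1.0 + (x / c) ** b) ** g

# Simulated standard curve: known concentrations with multiplicative noise.
conc = np.array([0.1, 0.3, 1.0, 3.0, 10.0, 30.0, 100.0])
true = five_pl(conc, 50.0, 2000.0, 5.0, 1.2, 0.8)
rng = np.random.default_rng(1)
obs = true * rng.lognormal(0.0, 0.05, size=conc.size)

popt, _ = curve_fit(five_pl, conc, obs, p0=[50, 2000, 5, 1, 1],
                    bounds=(1e-6, np.inf), maxfev=10000)

# Calibration step: invert the fitted curve (numerically, on a grid) to
# map an unknown sample's readout back to a concentration.
grid = np.logspace(-2, 3, 2000)
unknown_readout = five_pl(10.0, *popt)
estimate = grid[np.argmin(np.abs(five_pl(grid, *popt) - unknown_readout))]
print(f"estimated concentration ~ {estimate:.2f}")
```

The paper's contribution replaces the independent-Gaussian noise implicit in this least-squares fit with a dependent-outlier-robust mixture model and a prior-friendly parameterization.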
Wan, Boyong; Small, Gary W.
2010-01-01
Wavelet analysis is developed as a preprocessing tool for use in removing background information from near-infrared (near-IR) single-beam spectra before the construction of multivariate calibration models. Three data sets collected with three different near-IR spectrometers are investigated that involve the determination of physiological levels of glucose (1-30 mM) in a simulated biological matrix containing alanine, ascorbate, lactate, triacetin, and urea in phosphate buffer. A factorial design is employed to optimize the specific wavelet function used and the level of decomposition applied, in addition to the spectral range and number of latent variables associated with a partial least-squares calibration model. The prediction performance of the computed models is studied with separate data acquired after the collection of the calibration spectra. This evaluation includes one data set collected over a period of more than six months. Preprocessing with wavelet analysis is also compared to the calculation of second-derivative spectra. Over the three data sets evaluated, wavelet analysis is observed to produce better-performing calibration models, with improvements in concentration predictions on the order of 30% being realized relative to models based on either second-derivative spectra or spectra preprocessed with simple additive and multiplicative scaling correction. This methodology allows the construction of stable calibrations directly with single-beam spectra, thereby eliminating the need for the collection of a separate background or reference spectrum. PMID:21035604
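The background-removal idea can be sketched with PyWavelets: decompose a single-beam "spectrum", zero the deep-level approximation coefficients that carry the broad baseline, and reconstruct. The signal, the wavelet (`sym8`), and the decomposition level below are illustrative choices, not the factorial-design optimum found in the study:

```python
import numpy as np
import pywt  # PyWavelets

# Fabricated single-beam "spectrum": a sharp analyte band riding on a
# broad, slowly varying background (not real near-IR glucose data).
x = np.linspace(0.0, 1.0, 512)
band = np.exp(-0.5 * ((x - 0.6) / 0.01) ** 2)
background = 3.0 + 2.0 * x + np.sin(2.0 * np.pi * x)
spectrum = band + background

# Multilevel wavelet decomposition: the deepest approximation coefficients
# carry the broad background, so zero them before reconstructing.
coeffs = pywt.wavedec(spectrum, "sym8", level=6)
coeffs[0] = np.zeros_like(coeffs[0])          # drop low-frequency background
corrected = pywt.waverec(coeffs, "sym8")[: spectrum.size]

# The narrow band survives while the offset and slow drift are suppressed;
# the corrected signal can then feed a PLS calibration directly.
print(f"mean before {spectrum.mean():.2f}, after {corrected.mean():.2f}")
```

This is the sense in which the approach removes the need for a separate reference spectrum: the background estimate comes from the measurement itself.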
Wan, Boyong; Small, Gary W
2010-11-29
Wavelet analysis is developed as a preprocessing tool for use in removing background information from near-infrared (near-IR) single-beam spectra before the construction of multivariate calibration models. Three data sets collected with three different near-IR spectrometers are investigated that involve the determination of physiological levels of glucose (1-30 mM) in a simulated biological matrix containing alanine, ascorbate, lactate, triacetin, and urea in phosphate buffer. A factorial design is employed to optimize the specific wavelet function used and the level of decomposition applied, in addition to the spectral range and number of latent variables associated with a partial least-squares calibration model. The prediction performance of the computed models is studied with separate data acquired after the collection of the calibration spectra. This evaluation includes one data set collected over a period of more than 6 months. Preprocessing with wavelet analysis is also compared to the calculation of second-derivative spectra. Over the three data sets evaluated, wavelet analysis is observed to produce better-performing calibration models, with improvements in concentration predictions on the order of 30% being realized relative to models based on either second-derivative spectra or spectra preprocessed with simple additive and multiplicative scaling correction. This methodology allows the construction of stable calibrations directly with single-beam spectra, thereby eliminating the need for the collection of a separate background or reference spectrum. Copyright © 2010 Elsevier B.V. All rights reserved.
The moon as a radiometric reference source for on-orbit sensor stability calibration
Stone, T.C.
2009-01-01
The wealth of data generated by the world's Earth-observing satellites, now spanning decades, allows the construction of long-term climate records. A key consideration for detecting climate trends is precise quantification of temporal changes in sensor calibration on-orbit. For radiometer instruments in the solar reflectance wavelength range (near-UV to shortwave-IR), the Moon can be viewed as a solar diffuser with exceptional stability properties. A model for the lunar spectral irradiance that predicts the geometric variations in the Moon's brightness with ~1% precision has been developed at the U.S. Geological Survey in Flagstaff, AZ. Lunar model results corresponding to a series of Moon observations taken by an instrument can be used to stabilize sensor calibration with sub-percent per year precision, as demonstrated by the Sea-viewing Wide Field-of-view Sensor (SeaWiFS). The inherent stability of the Moon and the operational model to utilize the lunar irradiance quantity make the Moon a reference source for monitoring radiometric calibration in orbit. This represents an important capability for detecting terrestrial climate change from space-based radiometric measurements.
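Using lunar model results to stabilize sensor calibration amounts to trending the measured-to-modelled irradiance ratio over time. A minimal sketch with fabricated observation values (the drift rate, cadence, and noise level are assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(3)

# Fabricated record of quarterly Moon observations over five years:
# each entry is measured lunar irradiance divided by the model prediction.
t_years = np.arange(0.0, 5.0, 0.25)
true_drift = -0.004                           # assume -0.4 %/yr degradation
ratio = 1.0 + true_drift * t_years + rng.normal(0.0, 0.001, t_years.size)

# A linear fit to the measured/model ratio tracks the sensor's response
# decay; the slope is the on-orbit calibration drift per year.
slope, intercept = np.polyfit(t_years, ratio, 1)
print(f"estimated drift: {100.0 * slope:+.2f} %/yr")
```

With per-observation scatter at the model's ~1% geometric precision or better, a multi-year series constrains the drift to sub-percent-per-year levels, which is the SeaWiFS-style result the abstract describes.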
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Chao; Xu, Zhijie; Lai, Kevin
The first part of this paper (Part 1) presents a numerical model for non-reactive physical mass transfer across a wetted wall column (WWC). In Part 2, we improved the existing computational fluid dynamics (CFD) model to simulate chemical absorption occurring in a WWC as a bench-scale study of solvent-based carbon dioxide (CO2) capture. To generate data for WWC model validation, CO2 mass transfer across a monoethanolamine (MEA) solvent was first measured on a WWC experimental apparatus. The numerical model developed in this work has the ability to account for both chemical absorption and desorption of CO2 in MEA. In addition, the overall mass transfer coefficient is also predicted using traditional/empirical correlations and compared with CFD prediction results for both steady and wavy falling films. A Bayesian statistical calibration algorithm is adopted to calibrate the reaction rate constants in chemical absorption/desorption of CO2 across a falling film of MEA. The posterior distributions of the two transport properties, i.e., Henry’s constant and gas diffusivity in the non-reacting nitrous oxide (N2O)/MEA system obtained from Part 1 of this study, serve as priors for the calibration of CO2 reaction rate constants after using the N2O/CO2 analogy method. The calibrated model can be used to predict the CO2 mass transfer in a WWC for a wider range of operating conditions.
Wang, Chao; Xu, Zhijie; Lai, Kevin; ...
2017-10-24
Part 1 of this paper presents a numerical model for non-reactive physical mass transfer across a wetted wall column (WWC). In Part 2, we improved the existing computational fluid dynamics (CFD) model to simulate chemical absorption occurring in a WWC as a bench-scale study of solvent-based carbon dioxide (CO2) capture. In this study, to generate data for WWC model validation, CO2 mass transfer across a monoethanolamine (MEA) solvent was first measured on a WWC experimental apparatus. The numerical model developed in this work can account for both chemical absorption and desorption of CO2 in MEA. In addition, the overall mass transfer coefficient is also predicted using traditional/empirical correlations and compared with CFD prediction results for both steady and wavy falling films. A Bayesian statistical calibration algorithm is adopted to calibrate the reaction rate constants in chemical absorption/desorption of CO2 across a falling film of MEA. The posterior distributions of the two transport properties, i.e., Henry's constant and gas diffusivity in the non-reacting nitrous oxide (N2O)/MEA system obtained from Part 1 of this study, serve as priors for the calibration of CO2 reaction rate constants after using the N2O/CO2 analogy method. Finally, the calibrated model can be used to predict the CO2 mass transfer in a WWC for a wider range of operating conditions.
Modelling dimercaptosuccinic acid (DMSA) plasma kinetics in humans.
van Eijkeren, Jan C H; Olie, J Daniël N; Bradberry, Sally M; Vale, J Allister; de Vries, Irma; Meulenbelt, Jan; Hunault, Claudine C
2016-11-01
No kinetic models presently exist that simulate the effect of chelation therapy on lead blood concentrations in lead poisoning. Our aim was to develop a kinetic model that describes the kinetics of dimercaptosuccinic acid (DMSA; succimer), a commonly used chelating agent, that could be used in developing a lead chelating model. This was a kinetic modelling study. We used a two-compartment model, with a non-systemic gastrointestinal compartment (gut lumen) and the whole body as one systemic compartment. The only data available from the literature were used to calibrate the unknown model parameters. The calibrated model was then validated by comparing its predictions with measured data from three different experimental human studies. The model predicted total DMSA plasma and urine concentrations measured in three healthy volunteers after ingestion of DMSA 10 mg/kg. The model was then validated by using data from three other published studies; it predicted concentrations within a factor of two, representing inter-human variability. A simple kinetic model simulating the kinetics of DMSA in humans has been developed and validated. The value of this model lies in its future potential to predict blood lead concentrations in lead-poisoned patients treated with DMSA.
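The two-compartment structure (gut lumen plus one systemic compartment) can be sketched as a pair of first-order ODEs. The rate constants and distribution volume below are illustrative guesses, not the paper's calibrated parameters:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative guesses (NOT the paper's calibrated values):
ka = 1.0             # 1/h, first-order absorption from the gut lumen
ke = 0.35            # 1/h, first-order (urinary) elimination
V = 15.0             # L, apparent volume of the systemic compartment
dose = 10.0 * 70.0   # mg, 10 mg/kg DMSA for a 70 kg subject

def rhs(t, y):
    gut, body = y
    return [-ka * gut,             # depletion of the gut-lumen depot
            ka * gut - ke * body]  # uptake into and loss from the body

sol = solve_ivp(rhs, (0.0, 24.0), [dose, 0.0], dense_output=True)
t = np.linspace(0.0, 24.0, 97)
plasma = sol.sol(t)[1] / V         # total DMSA plasma concentration, mg/L

t_peak = t[np.argmax(plasma)]
print(f"peak ~{plasma.max():.1f} mg/L at ~{t_peak:.1f} h after ingestion")
```

Calibrating such a model means adjusting ka, ke, and V until the simulated plasma and urine curves match the published volunteer data, which is the step the authors validated against three independent studies.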
40 CFR 86.1830-01 - Acceptance of vehicles for emission testing.
Code of Federal Regulations, 2010 CFR
2010-07-01
... previous model year emission data vehicles, running change vehicles, fuel economy data vehicles, and... judgment in making such determinations. Development vehicles which were used to develop the calibration...
Calibration artefacts in radio interferometry - II. Ghost patterns for irregular arrays
NASA Astrophysics Data System (ADS)
Wijnholds, S. J.; Grobler, T. L.; Smirnov, O. M.
2016-04-01
Calibration artefacts, like the self-calibration bias, usually emerge when data are calibrated using an incomplete sky model. In the first paper of this series, in which we analysed calibration artefacts in data from the Westerbork Synthesis Radio Telescope, we showed that these artefacts take the form of spurious positive and negative sources, which we refer to as ghosts or ghost sources. We also developed a mathematical framework with which we could predict the ghost pattern of an east-west interferometer for a simple two-source test case. In this paper, we extend our analysis to more general array layouts. This provides us with a useful method for the analysis of ghosts that we refer to as extrapolation. Combining extrapolation with a perturbation analysis, we are able to (1) analyse the ghost pattern for a two-source test case with one modelled and one unmodelled source for an arbitrary array layout, (2) explain why some ghosts are brighter than others, (3) define a taxonomy allowing us to classify the different ghosts, (4) derive closed-form expressions for the fluxes and positions of the brightest ghosts, and (5) explain the strange two-peak structure with which some ghosts manifest during imaging. We illustrate our mathematical predictions using simulations of the KAT-7 (seven-dish Karoo Array Telescope) array. These results show the explanatory power of our mathematical model. The insights gained in this paper provide a solid foundation for studying calibration artefacts in arbitrary incomplete sky models (i.e. more complicated than the two-source example discussed here) or full synthesis observations including direction-dependent effects.
Luthi, François; Deriaz, Olivier; Vuistiner, Philippe; Burrus, Cyrille; Hilfiker, Roger
2014-01-01
Background Workers with persistent disabilities after orthopaedic trauma may need occupational rehabilitation. Despite various risk profiles for non-return-to-work (non-RTW), there is no available predictive model. Moreover, injured workers may have various origins (immigrant workers), which may either affect their return to work or their eligibility for research purposes. The aim of this study was to develop and validate a predictive model that estimates the likelihood of non-RTW after occupational rehabilitation using predictors which do not rely on the worker’s background. Methods Prospective cohort study (3177 participants, native (51%) and immigrant workers (49%)) with two samples: a) Development sample with patients from 2004 to 2007 with Full and Reduced Models, b) External validation of the Reduced Model with patients from 2008 to March 2010. We collected patients’ data and biopsychosocial complexity with an observer rated interview (INTERMED). Non-RTW was assessed two years after discharge from the rehabilitation. Discrimination was assessed by the area under the receiver operating curve (AUC) and calibration was evaluated with a calibration plot. The model was reduced with random forests. Results At 2 years, the non-RTW status was known for 2462 patients (77.5% of the total sample). The prevalence of non-RTW was 50%. The full model (36 items) and the reduced model (19 items) had acceptable discrimination performance (AUC 0.75, 95% CI 0.72 to 0.78 and 0.74, 95% CI 0.71 to 0.76, respectively) and good calibration. For the validation model, the discrimination performance was acceptable (AUC 0.73; 95% CI 0.70 to 0.77) and calibration was also adequate. Conclusions Non-RTW may be predicted with a simple model constructed with variables independent of the patient’s education and language fluency. This model is useful for all kinds of trauma in order to adjust for case mix and it is applicable to vulnerable populations like immigrant workers. PMID:24718689
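The validation metrics used here, AUC for discrimination and a calibration assessment for agreement between predicted and observed risk, can be sketched on simulated data. The logistic-recalibration slope shown is one common way to quantify calibration, not necessarily the exact calibration plot used in the study:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(4)

# Simulated external validation: predicted non-RTW risks for 500 workers
# and outcomes drawn so that the predictions are well calibrated by design.
n = 500
risk = rng.uniform(0.05, 0.95, n)
outcome = (rng.uniform(size=n) < risk).astype(int)

# Discrimination: area under the ROC curve.
auc = roc_auc_score(outcome, risk)

# Calibration: refit the outcome on the logit of the predicted risk; a
# slope near 1 (and intercept near 0) indicates adequate calibration.
logit = np.log(risk / (1.0 - risk)).reshape(-1, 1)
recal = LogisticRegression(C=1e6).fit(logit, outcome)
print(f"AUC {auc:.2f}, calibration slope {recal.coef_[0][0]:.2f}")
```

An AUC around 0.73-0.75, as reported for the Reduced Model, means the model ranks a randomly chosen non-RTW case above a randomly chosen RTW case roughly three times out of four.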
NASA Astrophysics Data System (ADS)
Guerrero, C.; Zornoza, R.; Gómez, I.; Mataix-Solera, J.; Navarro-Pedreño, J.; Mataix-Beneyto, J.; García-Orenes, F.
2009-04-01
Near infrared (NIR) reflectance spectroscopy offers important advantages because it is a non-destructive technique, minimal sample pre-treatment is needed, and the spectrum of a sample is obtained in less than 1 minute without the need for chemical reagents. For these reasons, NIR is a fast and cost-effective method. Moreover, NIR allows several constituents or parameters to be analysed simultaneously from the same spectrum once it is obtained. To this end, a necessary step is the development of soil spectral libraries (sets of samples analysed and scanned) and calibrations (using multivariate techniques). The calibrations should contain the variability of the target-site soils in which the calibration is to be used. This premise is often not easy to fulfil, especially in recently developed libraries. A classical way to solve this problem is through the repopulation of libraries and the subsequent recalibration of the models. In this work we studied the changes in the accuracy of the predictions as a consequence of successively adding samples during repopulation. In general, calibrations with a high number of samples and high diversity are desired. But we hypothesized that calibrations with fewer samples (smaller size) will absorb the spectral characteristics of the target site more easily. Thus, we suspect that the size of the calibration (model) that will be repopulated could be important. For this reason we also studied this effect on the accuracy of predictions of the repopulated models. In this study we used those spectra of our library which contained data on soil Kjeldahl Nitrogen (NKj) content (nearly 1500 samples). First, the spectra from the target site were removed from the spectral library. Then, different quantities of samples from the library were selected (representing 5, 10, 25, 50, 75 and 100% of the total library). These samples were used to develop calibrations of different sizes (% of samples). 
We used partial least squares regression with leave-one-out cross-validation as the calibration method. Two methods were used to select the different quantities (model sizes) of samples: (1) Based on Characteristics of Spectra (BCS), and (2) Based on NKj Values of Samples (BVS). Both methods tried to select representative samples. Each of the calibrations (containing 5, 10, 25, 50, 75 or 100% of the total samples of the library) was repopulated with samples from the target site and then recalibrated (by leave-one-out cross-validation). This procedure was sequential: in each step, 2 samples from the target site were added to the models, which were then recalibrated. This process was repeated 10 times, for a total of 20 samples added. A local model was also created with the 20 samples used for repopulation. The repopulated, non-repopulated and local calibrations were used to predict the NKj content in those samples from the target site not included in repopulations. To measure the accuracy of the predictions, the r2, RMSEP and slopes were calculated comparing predicted with analysed NKj values. This scheme was repeated for each of the four target sites studied. In general, few differences were found between results obtained with BCS and BVS models. We observed that the repopulation of models increased the r2 of the predictions in sites 1 and 3. Repopulation caused little change in the r2 of the predictions in sites 2 and 4, perhaps due to the high initial values (using non-repopulated models, r2 >0.90). As a consequence of repopulation, the RMSEP decreased in all the sites except site 2, where a very low RMSEP was obtained before the repopulation (0.4 g×kg-1). The slopes tended to approach 1, but this value was reached only in site 4, after repopulation with 20 samples. In sites 3 and 4, accurate predictions were obtained using the local models. 
Predictions obtained with models of similar size (similar %) were averaged with the aim of describing the main patterns. Predictions obtained with larger models were not more accurate (in terms of r2) than those obtained with smaller models. After repopulation, the RMSEP of predictions using smaller models (5, 10 and 25% of the samples of the library) was lower than the RMSEP obtained with larger models (75 and 100%), indicating that small models can more easily integrate the variability of the soils from the target site. The results suggest that small calibrations could be repopulated and "converted" into local calibrations. Accordingly, most effort can be focused on obtaining highly accurate analytical values for a reduced set of samples (including some samples from the target sites). The patterns observed here are at odds with the idea of global models. These results could encourage the expansion of this technique, because very large databases seem not to be needed. Future studies with very different samples will help to confirm the robustness of the patterns observed. The authors thank "Bancaja-UMH" for financial support of the project "NIRPROS".
NASA Astrophysics Data System (ADS)
Jackson-Blake, Leah; Helliwell, Rachel
2015-04-01
Process-based catchment water quality models are increasingly used as tools to inform land management. However, for such models to be reliable they need to be well calibrated and shown to reproduce key catchment processes. Calibration can be challenging for process-based models, which tend to be complex and highly parameterised. Calibrating a large number of parameters generally requires a large amount of monitoring data, spanning all hydrochemical conditions. However, regulatory agencies and research organisations generally only sample at a fortnightly or monthly frequency, even in well-studied catchments, often missing peak flow events. The primary aim of this study was therefore to investigate how the quality and uncertainty of model simulations produced by a process-based, semi-distributed catchment model, INCA-P (the INtegrated CAtchment model of Phosphorus dynamics), were improved by calibration to higher frequency water chemistry data. Two model calibrations were carried out for a small rural Scottish catchment: one using 18 months of daily total dissolved phosphorus (TDP) concentration data, another using a fortnightly dataset derived from the daily data. To aid comparability, calibrations were carried out automatically using the Markov Chain Monte Carlo - DiffeRential Evolution Adaptive Metropolis (MCMC-DREAM) algorithm. Calibration to daily data resulted in improved simulation of peak TDP concentrations and improved model performance statistics. Parameter-related uncertainty in simulated TDP was large when fortnightly data was used for calibration, with a 95% credible interval of 26 μg/l. This uncertainty is comparable in size to the difference between Water Framework Directive (WFD) chemical status classes, and would therefore make it difficult to use this calibration to predict shifts in WFD status. The 95% credible interval reduced markedly with the higher frequency monitoring data, to 6 μg/l. 
The number of parameters that could be reliably auto-calibrated was lower for the fortnightly data, with a physically unrealistic TDP simulation being produced when too many parameters were allowed to vary during model calibration. Parameters should not therefore be varied spatially for models such as INCA-P unless there is solid evidence that this is appropriate, or there is a real need to do so for the model to fulfil its purpose. This study highlights the potential pitfalls of using low-frequency time series of observed water quality to calibrate complex process-based models. For reliable model calibrations to be produced, monitoring programmes need to be designed which capture system variability, in particular nutrient dynamics during high flow events. In addition, there is a need for simpler models, so that all model parameters can be included in auto-calibration and uncertainty analysis, and to reduce the data needs during calibration.
A short-term ensemble wind speed forecasting system for wind power applications
NASA Astrophysics Data System (ADS)
Baidya Roy, S.; Traiteur, J. J.; Callicutt, D.; Smith, M.
2011-12-01
This study develops an adaptive, blended forecasting system to provide accurate wind speed forecasts 1 hour ahead of time for wind power applications. The system consists of an ensemble of 21 forecasts with different configurations of the Weather Research and Forecasting Single Column Model (WRFSCM) and a persistence model. The ensemble is calibrated against observations for a 2 month period (June-July, 2008) at a potential wind farm site in Illinois using the Bayesian Model Averaging (BMA) technique. The forecasting system is evaluated against observations for August 2008 at the same site. The calibrated ensemble forecasts significantly outperform the forecasts from the uncalibrated ensemble while significantly reducing forecast uncertainty under all environmental stability conditions. The system also generates significantly better forecasts than persistence, autoregressive (AR) and autoregressive moving average (ARMA) models during the morning transition and the diurnal convective regimes. This forecasting system is computationally more efficient than traditional numerical weather prediction models and can generate a calibrated forecast, including model runs and calibration, in approximately 1 minute. Currently, hour-ahead wind speed forecasts are almost exclusively produced using statistical models. However, numerical models have several distinct advantages over statistical models including the potential to provide turbulence forecasts. Hence, there is an urgent need to explore the role of numerical models in short-term wind speed forecasting. This work is a step in that direction and is likely to trigger a debate within the wind speed forecasting community.
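The BMA calibration step can be sketched as a small EM procedure that learns member weights and a shared Gaussian spread from a training period; the 21-member toy ensemble below is fabricated and omits the member-specific bias correction and other details of the real system:

```python
import numpy as np

rng = np.random.default_rng(6)

# Fabricated ensemble: 21 members forecasting hourly wind speed (m/s);
# each member is truth plus a member-specific bias and random error.
truth = 8.0 + 2.0 * np.sin(np.linspace(0.0, 6.0 * np.pi, 300))
bias = rng.normal(0.0, 0.5, 21)
members = truth[None, :] + bias[:, None] + rng.normal(0.0, 1.0, (21, 300))
train, test = slice(0, 200), slice(200, 300)

# Minimal BMA: the predictive density is a weighted mixture of Gaussian
# kernels centred on the member forecasts; weights and a shared variance
# are estimated by EM against the training-period observations.
w = np.full(21, 1.0 / 21.0)
sigma2 = 1.0
err2 = (truth[train] - members[:, train]) ** 2
for _ in range(50):
    dens = np.exp(-0.5 * err2 / sigma2) / np.sqrt(2.0 * np.pi * sigma2)
    z = w[:, None] * dens                    # E-step: responsibilities
    z /= z.sum(axis=0, keepdims=True)
    w = z.mean(axis=1)                       # M-step: member weights
    sigma2 = np.sum(z * err2) / z.sum()      # M-step: shared kernel variance

# BMA predictive mean on the held-out period vs. the plain ensemble mean.
bma_mean = w @ members[:, test]
rmse_bma = np.sqrt(np.mean((bma_mean - truth[test]) ** 2))
rmse_raw = np.sqrt(np.mean((members[:, test].mean(axis=0) - truth[test]) ** 2))
print(f"raw ensemble-mean RMSE {rmse_raw:.2f}, BMA-mean RMSE {rmse_bma:.2f}")
```

Beyond the point forecast, the fitted mixture also supplies a full predictive distribution, which is what lets a calibrated ensemble quantify forecast uncertainty across stability regimes.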
Chase, K.J.
2011-01-01
This report documents the development of a precipitation-runoff model for the South Fork Flathead River Basin, Mont. The Precipitation-Runoff Modeling System model, developed in cooperation with the Bureau of Reclamation, can be used to simulate daily mean unregulated streamflow upstream and downstream from Hungry Horse Reservoir for water-resources planning. Two input files are required to run the model. The time-series data file contains daily precipitation data and daily minimum and maximum air-temperature data from climate stations in and near the South Fork Flathead River Basin. The parameter file contains values of parameters that describe the basin topography, the flow network, the distribution of the precipitation and temperature data, and the hydrologic characteristics of the basin soils and vegetation. A primary-parameter file was created for simulating streamflow during the study period (water years 1967-2005). The model was calibrated for water years 1991-2005 using the primary-parameter file. This calibration was further refined using snow-covered area data for water years 2001-05. The model then was tested for water years 1967-90. Calibration targets included mean monthly and daily mean unregulated streamflow upstream from Hungry Horse Reservoir, mean monthly unregulated streamflow downstream from Hungry Horse Reservoir, basin mean monthly solar radiation and potential evapotranspiration, and daily snapshots of basin snow-covered area. Simulated streamflow generally was in better agreement with observed streamflow at the upstream gage than at the downstream gage. Upstream from the reservoir, simulated mean annual streamflow was within 0.0 percent of observed mean annual streamflow for the calibration period and was about 2 percent higher than observed mean annual streamflow for the test period. 
Simulated mean April-July streamflow upstream from the reservoir was about 1 percent lower than observed streamflow for the calibration period and about 4 percent higher than observed for the test period. Downstream from the reservoir, simulated mean annual streamflow was 17 percent lower than observed streamflow for the calibration period and 12 percent lower than observed streamflow for the test period. Simulated mean April-July streamflow downstream from the reservoir was 13 percent lower than observed streamflow for the calibration period and 6 percent lower than observed streamflow for the test period. Calibrating to solar radiation, potential evapotranspiration, and snow-covered area improved the model representation of evapotranspiration, snow accumulation, and snowmelt processes. Simulated basin mean monthly solar radiation values for both the calibration and test periods were within 9 percent of observed values except during the month of December (28 percent different). Simulated basin potential evapotranspiration values for both the calibration and test periods were within 10 percent of observed values except during the months of January (100 percent different) and February (13 percent different). The larger percent errors in simulated potential evaporation occurred in the winter months when observed potential evapotranspiration values were very small; in January the observed value was 0.000 inches and in February the observed value was 0.009 inches. Simulated start of melting of the snowpack occurred at about the same time as observed start of melting. The simulated snowpack accumulated to 90-100 percent snow-covered area 1 to 3 months earlier than observed snowpack. This overestimated snowpack during the winter corresponded to underestimated streamflow during the same period. 
In addition to the primary-parameter file, four other parameter files were created: for a "recent" period (1991-2005), a historical period (1967-90), a "wet" period (1989-97), and a "dry" period (1998-2005). For each data file of projected precipitation and air temperature, a single parameter file can be used to simulate a s
NASA Astrophysics Data System (ADS)
Antoshechkina, P. M.; Wolf, A. S.; Hamecher, E. A.; Asimow, P. D.; Ghiorso, M. S.
2013-12-01
Community databases such as EarthChem, LEPR, and AMCSD both increase demand for quantitative petrological tools, including thermodynamic models like the MELTS family of algorithms, and are invaluable in development of such tools. The need to extend existing solid solution models to include minor components such as Cr and Na has been evident for years but as the number of components increases it becomes impossible to completely separate derivation of end-member thermodynamic data from calibration of solution properties. In Hamecher et al. (2012; 2013) we developed a calibration scheme that directly interfaces with a MySQL database based on LEPR, with volume data from AMCSD and elsewhere. Here we combine that scheme with a Bayesian approach, where independent constraints on parameter values (e.g. existence of miscibility gaps) are combined with uncertainty propagation to give a more reliable best-fit along with associated model uncertainties. We illustrate the scheme with a new model of molar volume for (Ca,Fe,Mg,Mn,Na)3(Al,Cr,Fe3+,Fe2+,Mg,Mn,Si,Ti)2Si3O12 cubic garnets. For a garnet in this chemical system, the model molar volume is obtained by adding excess volume terms to a linear combination of nine independent end-member volumes. The model calibration is broken into three main stages: (1) estimation of individual end-member thermodynamic properties; (2) calibration of standard state volumes for all available independent and dependent end members; (3) fitting of binary and mixed composition data. For each calibration step, the goodness-of-fit includes weighted residuals as well as χ2-like penalty terms representing the (not necessarily Gaussian) prior constraints on parameter values. Using the Bayesian approach, uncertainties are correctly propagated forward to subsequent steps, allowing determination of final parameter values and correlated uncertainties that account for the entire calibration process. 
For the aluminosilicate garnets, optimal values of the bulk modulus and its pressure derivative are obtained by fitting published compression data using the Vinet equation of state, with the Mie-Grüneisen-Debye thermal pressure formalism to model thermal expansion. End-member thermal parameters are obtained by fitting volume data while ensuring that the heat capacity is consistent with the thermodynamic database of Berman and co-workers. For other end members, data for related compositions are used where such data exist; otherwise ultrasonic data or density functional theory results are taken or, for thermal parameters, systematics in cation radii are used. In stages (2) and (3) the remaining data at ambient conditions are fit. Using this step-wise calibration scheme, most parameters are modified little by subsequent calibration steps but some, such as the standard state volume of the Ti-bearing end member, can vary within calculated uncertainties. The final model satisfies desired criteria and fits almost all the data (more than 1000 points); only excess parameters that are justified by the data are activated. The scheme can be easily extended to calibration of end-member and solution properties from experimental phase equilibria. As a first step we obtain the internally consistent standard state entropy and enthalpy of formation for knorringite and discuss differences between our results and those of Klemme and co-workers.
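The goodness-of-fit described above, weighted residuals plus χ²-like penalty terms for prior constraints, can be sketched as follows (Gaussian priors are assumed here purely for illustration; the abstract notes the priors need not be Gaussian, and the parameter names are ours):

```python
def penalized_misfit(residuals, sigmas, params, priors):
    """Weighted sum of squared residuals plus chi^2-like penalty terms.

    priors maps a parameter name to a (mean, std) soft constraint,
    standing in for independent information such as miscibility gaps."""
    data_term = sum((r / s) ** 2 for r, s in zip(residuals, sigmas))
    prior_term = sum(((params[k] - m) / sd) ** 2
                     for k, (m, sd) in priors.items())
    return data_term + prior_term
```

Minimizing this quantity balances fitting the volume data against staying consistent with the prior constraints carried forward from earlier calibration stages.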
NASA Astrophysics Data System (ADS)
Minunno, Francesco; Peltoniemi, Mikko; Launiainen, Samuli; Mäkelä, Annikki
2014-05-01
Biogeochemical models quantify the material and energy fluxes exchanged between biosphere, atmosphere, and soil; however, considerable uncertainty still underpins model structure and parametrization. The increasing availability of data from multiple sources provides useful information for model calibration and validation at different spatial and temporal scales. We calibrated the simplified ecosystem process model PRELES to data from multiple sites. Our objective was to compare a multi-site calibration with site-specific calibrations, in order to test whether PRELES is a model of general applicability and how well one parameterization can predict ecosystem fluxes. Model calibration and evaluation were carried out by means of Bayesian methods; Bayesian calibration (BC) and Bayesian model comparison (BMC) were used to quantify the uncertainty in model parameters and model structure. Evapotranspiration (ET) and gross primary production (GPP) measurements collected at nine sites in Finland and Sweden were used in the study; half of the dataset was used for model calibrations and half for the comparative analyses. Ten BCs were performed; the model was independently calibrated for each of the nine sites (site-specific calibrations), and a multi-site calibration was achieved using the data from all the sites in one BC. Then nine BMCs were carried out, one for each site, using output from the multi-site and site-specific versions of PRELES. Similar estimates were obtained for the parameters to which model outputs are most sensitive. Not surprisingly, the joint posterior distribution achieved through the multi-site calibration was characterized by lower uncertainty, because more data were involved in the calibration process. 
No significant differences were encountered between the predictions of the multi-site and site-specific versions of PRELES, and after BMC we concluded that the model can be reliably used at regional scale to simulate the carbon and water fluxes of boreal forests. Despite being a simple model, PRELES provided good estimates of GPP and ET; only at one site did the multi-site version of PRELES underestimate water fluxes. Our study implies a convergence of GPP and water processes in the boreal zone to the extent that their plausible prediction is possible with a simple model using a global parameterization.
Building an Evaluation Framework for the VIC Model in the NLDAS Testbed
NASA Astrophysics Data System (ADS)
Xia, Y.; Mocko, D. M.; Wang, S.; Pan, M.; Kumar, S.; Peters-Lidard, C. D.; Wei, H.; Ek, M. B.
2017-12-01
Since the second phase of the North American Land Data Assimilation System (NLDAS-2) was operationally implemented at NCEP in August 2014, developing the third phase of the NLDAS system (NLDAS-3) has been a key task for the NCEP and NASA NLDAS team. The Variable Infiltration Capacity (VIC) model is one major component of the NLDAS system. The current operational NLDAS-2 uses version 4.0.3 (VIC403), research NLDAS-2 uses version 4.0.5 (VIC405), and LIS-based (Land Information System) NLDAS uses version 4.1.2 (VIC412). The purpose of this study is to comprehensively evaluate the three versions and document changes in model behavior towards VIC412 for NLDAS-3. To do so, we developed a relatively comprehensive framework including multiple variables and metrics to assess the performance of the different versions. This framework is being incorporated into the NASA Land Verification Toolkit (LVT) for evaluation of other LSMs for NLDAS-3 development. The evaluation results show large and significant improvements for VIC412 in the southeastern United States when compared with VIC403 and VIC405. In the other regions, improvements are very limited, and performance even degrades to some extent. Potential reasons include: (1) few USGS streamflow observations for soil and hydrologic parameter calibration, (2) the lack of re-calibration of VIC412 in the NLDAS domain, and (3) changes in model physics from VIC403 to VIC412. Overall, the model version upgrade significantly enhances model performance and skill scores across the United States except for the Great Plains, suggesting that VIC model development is moving in the right direction. Further efforts are needed to improve the scientific understanding of land surface physical processes in the Great Plains, and a re-calibration of VIC412 using reasonable reference datasets is suggested.
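An evaluation framework of this kind typically combines several skill metrics per variable. As a hedged illustration (the metric choices here are ours, not necessarily those implemented in LVT), Nash-Sutcliffe efficiency and RMSE can be computed as:

```python
def rmse(sim, obs):
    """Root-mean-square error between simulated and observed series."""
    return (sum((s - o) ** 2 for s, o in zip(sim, obs)) / len(obs)) ** 0.5

def nse(sim, obs):
    """Nash-Sutcliffe efficiency: 1 is a perfect match; 0 means the
    simulation is no better than the observed mean."""
    mean_obs = sum(obs) / len(obs)
    num = sum((s - o) ** 2 for s, o in zip(sim, obs))
    den = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - num / den
```

Applied per region and per model version, such metrics make statements like "large and significant improvements in the southeastern United States" quantitative.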
Gradient-based model calibration with proxy-model assistance
NASA Astrophysics Data System (ADS)
Burrows, Wesley; Doherty, John
2016-02-01
Use of a proxy model in gradient-based calibration and uncertainty analysis of a complex groundwater model with long run times and problematic numerical behaviour is described. The methodology is general and can be used with models of all types. The proxy model is based on a series of analytical functions that link all model outputs used in the calibration process to all parameters requiring estimation. In enforcing history-matching constraints during the calibration and post-calibration uncertainty analysis processes, the proxy model is run to populate the Jacobian matrix, while the original model is run when testing parameter upgrades; the latter process is readily parallelized. Use of a proxy model in this fashion dramatically reduces the computational burden of complex model calibration and uncertainty analysis. At the same time, the effect of model numerical misbehaviour on the calculation of local gradients is mitigated, thus allowing access to the benefits of gradient-based analysis where lack of integrity in finite-difference derivative calculation would otherwise have impeded such access. Construction of a proxy model, and its subsequent use in calibrating a complex model and in analysing the uncertainties of predictions made by that model, is implemented in the PEST suite.
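The division of labor described above, proxy for the gradient, original model for testing parameter upgrades, can be sketched in one dimension. The model and proxy below are hypothetical stand-ins, not the PEST implementation:

```python
def calibrate_with_proxy(model, proxy_grad, p, target, iters=25):
    """One-parameter Gauss-Newton-style calibration.

    The local gradient comes from a cheap proxy, while each candidate
    parameter upgrade is tested by running the (expensive) model itself."""
    for _ in range(iters):
        r = model(p) - target           # residual from a real model run
        if abs(r) < 1e-12:
            break
        step = -r / proxy_grad(p)       # search direction from the proxy
        lam = 1.0
        # accept the largest step that actually reduces the real-model misfit
        while abs(model(p + lam * step) - target) >= abs(r) and lam > 1e-6:
            lam *= 0.5
        p += lam * step
    return p
```

Because only the step-acceptance tests require real model runs, and those runs are independent, this stage parallelizes readily, which is the point made in the abstract.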
NASA Astrophysics Data System (ADS)
Hdeib, Rouya; Abdallah, Chadi; Moussa, Roger; Colin, Francois
2017-04-01
Developing flood inundation maps of defined exceedance probabilities is required to provide information on flood hazard and the associated risk. A methodology has been developed to model flood inundation in poorly gauged basins, where reliable information on the hydrological characteristics of floods is uncertain and only partially captured by traditional rain-gauge networks. Flood inundation modeling is performed by coupling a hydrological rainfall-runoff (RR) model (HEC-HMS) with a hydraulic model (HEC-RAS). The RR model is calibrated against the January 2013 flood event in the Awali River basin, Lebanon (300 km2), whose flood peak discharge was estimated by post-event measurements. The resulting flows of the RR model are set as boundary conditions of the hydraulic model, which is run to generate the corresponding water surface profiles and calibrated against 20 cross sections surveyed after the January 2013 flood event. An uncertainty analysis is performed to assess the results of the models. The coupled flood inundation model is then run with design storms, and flood inundation maps of defined exceedance probabilities are generated. The peak discharges estimated by the simulated RR model were in close agreement with the results from different empirical and statistical methods. This methodology can be extended to other poorly gauged basins facing common stage-gauge failure, or characterized by floods with a stage exceeding the gauge measurement level or higher than that defined by the rating curve.
Berenbrock, Charles; Tranmer, Andrew W.
2008-01-01
A one-dimensional sediment-transport model and a multi-dimensional hydraulic and bed shear stress model were developed to investigate the hydraulic, sediment transport, and sediment mobility characteristics of the lower Coeur d'Alene River in northern Idaho. This report documents the development and calibration of those models, as well as the results of model simulations. The one-dimensional sediment-transport model (HEC-6) was developed, calibrated, and used to simulate flow hydraulics and erosion, deposition, and transport of sediment in the lower Coeur d'Alene River. The HEC-6 modeled reach, comprising 234 cross sections, extends from Enaville, Idaho, on the North Fork of the Coeur d'Alene River and near Pinehurst, Idaho, on the South Fork of the river to near Harrison, Idaho, on the main stem of the river. Bed-sediment samples collected by previous investigators and samples collected for this study in 2005 were used in the model. Sediment discharge curves from a previous study were updated using suspended-sediment samples collected at three sites since April 2000. The HEC-6 model was calibrated using river discharge and water-surface elevations measured at five U.S. Geological Survey gaging stations. The calibrated HEC-6 model allowed simulation of management alternatives to assess erosion and deposition from proposed dredging of contaminated streambed sediments in the Dudley reach. Four management alternatives were simulated with HEC-6. Before the start of simulation for these alternatives, seven cross sections in the reach near Dudley, Idaho, were deepened 20 feet (removing about 296,000 cubic yards of sediment) to simulate dredging. Management alternative 1 simulated stage-discharge conditions from 2000, and alternative 2 simulated conditions from 1997. Results from alternatives 1 and 2 indicated that about 6,500 and 12,300 cubic yards, respectively, were deposited in the dredged reach. 
These figures represent 2 and 4 percent, respectively, of the total volume of dredged sediments removed before the start of simulation. In alternatives 3 and 4, the incoming total sediment discharges from the South Fork of the river were decreased by one-half. Management alternative 3 simulated stage-discharge conditions from 2000, and alternative 4 simulated conditions from 1997. Reducing incoming sediment discharge from the South Fork did not affect the streambed and deposition in the Dudley and downstream reaches, probably because the distance between the South Fork and the Dudley reach is long enough for sediment supply, transport capacity, and channel geometry to be balanced before reaching the Dudley and downstream reaches. Development and calibration of a multi-dimensional hydraulic and bed shear stress model (FASTMECH) allowed simulation of water-surface elevation, depth, velocity, bed shear stress, and sediment mobility in the Dudley reach (5.3 miles). The computational grid incorporated bathymetric and Light Detection and Ranging (LIDAR) data, with a node spacing of about 2.5 meters. With the exception of the fourth FASTMECH calibration simulation, results from the FASTMECH calibration simulations indicated that flow depths, flow velocities, and bed shear stresses increased as river discharge increased. Water-surface elevations in the fourth calibration simulation were about 2 feet higher than those in the other simulations because high lake levels in Coeur d'Alene Lake caused backwater conditions. Average simulated velocities along the thalweg ranged from about 3 to 5.3 feet per second, and maximum simulated velocities ranged from 3.9 to 7 feet per second. In the dredged reach, average simulated velocity along the thalweg ranged from 3.5 to 6 feet per second. The model also simulated several back-eddies (flow reversals); the largest eddy encompassed about one-third of the river width. 
Average bed shear stresses increased more than 200 percent from the first to the last simulation. Simulated sediment mobility, asses
DOE Office of Scientific and Technical Information (OSTI.GOV)
Akerib, DS; Alsum, S; Araújo, HM
The LUX experiment has performed searches for dark matter particles scattering elastically on xenon nuclei, leading to stringent upper limits on the nuclear scattering cross sections for dark matter. Here, for results derived from 1.4×10^4 kg·days of target exposure in 2013, details of the calibration, event reconstruction, modeling, and statistical tests that underlie the results are presented. Detector performance is characterized, including measured efficiencies, stability of response, position resolution, and discrimination between electron- and nuclear-recoil populations. Models are developed for the drift field, optical properties, background populations, the electron- and nuclear-recoil responses, and the absolute rate of low-energy background events. Innovations in the analysis include in situ measurement of the photomultipliers' response to xenon scintillation photons, verification of fiducial mass with a low-energy internal calibration source, and new empirical models for low-energy signal yield based on large-sample, in situ calibrations.
Improving Risk Adjustment for Mortality After Pediatric Cardiac Surgery: The UK PRAiS2 Model.
Rogers, Libby; Brown, Katherine L; Franklin, Rodney C; Ambler, Gareth; Anderson, David; Barron, David J; Crowe, Sonya; English, Kate; Stickley, John; Tibby, Shane; Tsang, Victor; Utley, Martin; Witter, Thomas; Pagel, Christina
2017-07-01
Partial Risk Adjustment in Surgery (PRAiS), a risk model for 30-day mortality after children's heart surgery, has been used by the UK National Congenital Heart Disease Audit to report expected risk-adjusted survival since 2013. This study aimed to improve the model by incorporating additional comorbidity and diagnostic information. The model development dataset was all procedures performed between 2009 and 2014 in all UK and Ireland congenital cardiac centers. The outcome measure was death within each 30-day surgical episode. Model development followed an iterative process of clinical discussion and development and assessment of models using logistic regression under 25 × 5 cross-validation. Performance was measured using Akaike information criterion, the area under the receiver-operating characteristic curve (AUC), and calibration. The final model was assessed in an external 2014 to 2015 validation dataset. The development dataset comprised 21,838 30-day surgical episodes, with 539 deaths (mortality, 2.5%). The validation dataset comprised 4,207 episodes, with 97 deaths (mortality, 2.3%). The updated risk model included 15 procedural, 11 diagnostic, and 4 comorbidity groupings, and nonlinear functions of age and weight. Performance under cross-validation was: median AUC of 0.83 (range, 0.82 to 0.83), median calibration slope and intercept of 0.92 (range, 0.64 to 1.25) and -0.23 (range, -1.08 to 0.85) respectively. In the validation dataset, the AUC was 0.86 (95% confidence interval [CI], 0.82 to 0.89), and the calibration slope and intercept were 1.01 (95% CI, 0.83 to 1.18) and 0.11 (95% CI, -0.45 to 0.67), respectively, showing excellent performance. A more sophisticated PRAiS2 risk model for UK use was developed with additional comorbidity and diagnostic information, alongside age and weight as nonlinear variables. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.
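The calibration slope and intercept reported above come from regressing observed outcomes on the logit of the predicted risks (logistic recalibration). A self-contained sketch of that fit, with the function name ours:

```python
import math

def calibration_slope_intercept(p_hat, y, iters=50):
    """Calibration intercept and slope: logistic regression of the
    binary outcome y on the logit of the predicted risks p_hat.
    Slope 1 and intercept 0 indicate perfect calibration."""
    x = [math.log(p / (1 - p)) for p in p_hat]
    a, b = 0.0, 1.0
    for _ in range(iters):
        # Newton-Raphson on the two-parameter log-likelihood
        g0 = g1 = h00 = h01 = h11 = 0.0
        for xi, yi in zip(x, y):
            mu = 1.0 / (1.0 + math.exp(-(a + b * xi)))
            w = mu * (1.0 - mu)
            g0 += yi - mu
            g1 += (yi - mu) * xi
            h00 += w
            h01 += w * xi
            h11 += w * xi * xi
        det = h00 * h11 - h01 * h01
        if abs(det) < 1e-12:
            break
        a += (h11 * g0 - h01 * g1) / det
        b += (h00 * g1 - h01 * g0) / det
    return a, b
```

Values such as the validation-set slope of 1.01 and intercept of 0.11 quoted above indicate predictions that track observed mortality closely.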
Development of Long-term Datasets from Satellite BUV Instruments: The "Soft" Calibration Approach
NASA Technical Reports Server (NTRS)
Bhartia, Pawan K.; Taylor, Steven; Jaross, Glen
2005-01-01
The first BUV instrument was launched in April 1970 on NASA's Nimbus-4 satellite. More than a dozen instruments, broadly based on the same principle but using very different technologies, have been launched in the last 35 years on NASA, NOAA, Japanese, and European satellites. In this paper we describe the basic principles of the "soft" calibration approach that we have successfully applied to the data from many of these instruments to produce a consistent long-term record of total ozone, ozone profile, and aerosols. This approach uses accurate radiative transfer models and assumed or known properties of the atmosphere in the ultraviolet to derive calibration parameters. Although the accuracy of the results inevitably depends upon how well the assumed atmospheric properties are known, the technique has several built-in cross-checks that improve the robustness of the method. To develop further confidence in the data, the soft calibration technique can be combined with data collected from a few well-calibrated ground-based instruments. We will use examples from past and present BUV instruments to show how the method works.
Simple Parametric Model for Intensity Calibration of Cassini Composite Infrared Spectrometer Data
NASA Technical Reports Server (NTRS)
Brasunas, J.; Mamoutkine, A.; Gorius, N.
2016-01-01
Accurate intensity calibration of a linear Fourier-transform spectrometer typically requires the unknown science target and the two calibration targets to be acquired under identical conditions. We present a simple model suitable for vector calibration that enables accurate calibration via adjustments of measured spectral amplitudes and phases when these three targets are recorded at different detector or optics temperatures. Our model makes calibration more accurate both by minimizing biases due to changing instrument temperatures that are always present at some level and by decreasing estimate variance through incorporating larger averages of science and calibration interferogram scans.
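The underlying principle is the classic two-reference (two-point) radiometric calibration, in which two targets of known temperature fix the instrument's gain and offset. A scalar sketch follows; the paper's vector calibration additionally adjusts complex spectral amplitudes and phases, which is not shown here:

```python
def two_point_calibrate(raw_science, raw_hot, raw_cold, t_hot, t_cold):
    """Linear two-point calibration: gain and offset are determined by
    two reference targets of known radiance/temperature, then applied
    to the raw science measurement."""
    gain = (t_hot - t_cold) / (raw_hot - raw_cold)
    return t_cold + gain * (raw_science - raw_cold)
```

The model in the paper corrects for the fact that the science and reference measurements may be taken at different detector or optics temperatures, a situation this idealized sketch assumes away.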
Prediction models for successful external cephalic version: a systematic review.
Velzel, Joost; de Hundt, Marcella; Mulder, Frederique M; Molkenboer, Jan F M; Van der Post, Joris A M; Mol, Ben W; Kok, Marjolein
2015-12-01
To provide an overview of existing prediction models for successful external cephalic version (ECV), and to assess their quality, development, and performance. We searched MEDLINE, EMBASE, and the Cochrane Library to identify all articles reporting on prediction models for successful ECV published from inception to January 2015. We extracted information on study design, sample size, model-building strategies, and validation. We evaluated the phases of model development and summarized model performance in terms of discrimination, calibration, and clinical usefulness. We collected the different predictor variables together with their reported significance, in order to identify important predictor variables for successful ECV. We identified eight articles reporting on seven prediction models. All models were subjected to internal validation. Only one model was also validated in an external cohort. Two prediction models had a low overall risk of bias, of which only one showed promising predictive performance at internal validation. This model also completed the phase of external validation. For none of the models was their impact on clinical practice evaluated. The most important predictor variables for successful ECV described in the selected articles were parity, placental location, breech engagement, and whether the fetal head was palpable. One model was assessed for both discrimination and calibration at internal (AUC 0.71) and external validation (AUC 0.64), while two other models were assessed with discrimination and calibration, respectively. We found one prediction model for breech presentation that was validated in an external cohort and had acceptable predictive performance. This model should be used to counsel women considering ECV. Copyright © 2015. Published by Elsevier Ireland Ltd.
An Accurate Temperature Correction Model for Thermocouple Hygrometers
Savage, Michael J.; Cass, Alfred; de Jager, James M.
1982-01-01
Numerous water relation studies have used thermocouple hygrometers routinely. However, the accurate temperature correction of hygrometer calibration curve slopes seems to have been largely neglected in both psychrometric and dewpoint techniques. In the case of thermocouple psychrometers, two temperature correction models are proposed, each based on measurement of the thermojunction radius and calculation of the theoretical voltage sensitivity to changes in water potential. The first model relies on calibration at a single temperature and the second at two temperatures. Both these models were more accurate than the temperature correction models currently in use for four psychrometers calibrated over a range of temperatures (15-38°C). The model based on calibration at two temperatures is superior to that based on only one calibration. The model proposed for dewpoint hygrometers is similar to that for psychrometers. It is based on the theoretical voltage sensitivity to changes in water potential. Comparison with empirical data from three dewpoint hygrometers calibrated at four different temperatures indicates that these instruments need only be calibrated at, e.g. 25°C, if the calibration slopes are corrected for temperature. PMID:16662241
Regression Model Term Selection for the Analysis of Strain-Gage Balance Calibration Data
NASA Technical Reports Server (NTRS)
Ulbrich, Norbert Manfred; Volden, Thomas R.
2010-01-01
The paper discusses the selection of regression model terms for the analysis of wind tunnel strain-gage balance calibration data. Different function class combinations are presented that may be used to analyze calibration data using either a non-iterative or an iterative method. The role of the intercept term in a regression model of calibration data is reviewed. In addition, useful algorithms and metrics originating from linear algebra and statistics are recommended that will help an analyst (i) to identify and avoid both linear and near-linear dependencies between regression model terms and (ii) to make sure that the selected regression model of the calibration data uses only statistically significant terms. Three different tests are suggested that may be used to objectively assess the predictive capability of the final regression model of the calibration data. These tests use both the original data points and regression model independent confirmation points. Finally, data from a simplified manual calibration of the Ames MK40 balance is used to illustrate the application of some of the metrics and tests to a realistic calibration data set.
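One simple diagnostic for the near-linear dependencies mentioned above is the variance inflation factor. A minimal two-term sketch (function name ours; the paper recommends more general linear-algebra metrics for full regression models):

```python
def vif_two_terms(x1, x2):
    """Variance inflation factor for two candidate terms, 1/(1 - r^2).
    Values far above ~10 flag a near-linear dependency between terms."""
    n = len(x1)
    m1 = sum(x1) / n
    m2 = sum(x2) / n
    cov = sum((a - m1) * (b - m2) for a, b in zip(x1, x2))
    v1 = sum((a - m1) ** 2 for a in x1)
    v2 = sum((b - m2) ** 2 for b in x2)
    r2 = cov * cov / (v1 * v2)
    return 1.0 / (1.0 - r2)
```

Terms whose VIF explodes would be candidates for removal before fitting the final regression model of the calibration data.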
Aquarius Whole Range Calibration: Celestial Sky, Ocean, and Land Targets
NASA Technical Reports Server (NTRS)
Dinnat, Emmanuel P.; Le Vine, David M.; Bindlish, Rajat; Piepmeier, Jeffrey R.; Brown, Shannon T.
2014-01-01
Aquarius is a spaceborne instrument that uses L-band radiometers to monitor sea surface salinity globally. Other applications of its data over land and the cryosphere are being developed. Combining its measurements with existing and upcoming L-band sensors will allow for long-term studies. For that purpose, calibration of the radiometers is critical. Aquarius measurements are currently calibrated over the oceans. They have been found to be too cold at the low end (celestial sky) of the brightness temperature scale, and too warm at the warm end (land and ice). We assess the impact of the antenna pattern model on the biases and propose a correction. We re-calibrate Aquarius measurements using the corrected antenna pattern and measurements over the sky and oceans. The performance of the new calibration is evaluated using measurements over well-instrumented land sites.
NASA Astrophysics Data System (ADS)
Wanders, N.; Bierkens, M. F. P.; de Jong, S. M.; de Roo, A.; Karssenberg, D.
2014-08-01
Large-scale hydrological models are nowadays mostly calibrated using observed discharge. As a result, a large part of the hydrological system, in particular the unsaturated zone, remains uncalibrated. Soil moisture observations from satellites have the potential to fill this gap. Here we evaluate the added value of remotely sensed soil moisture in calibration of large-scale hydrological models by addressing two research questions: (1) Which parameters of hydrological models can be identified by calibration with remotely sensed soil moisture? (2) Does calibration with remotely sensed soil moisture lead to an improved calibration of hydrological models compared to calibration based only on discharge observations, such that this leads to improved simulations of soil moisture content and discharge? A dual state and parameter Ensemble Kalman Filter is used to calibrate the hydrological model LISFLOOD for the Upper Danube. Calibration is done using discharge and remotely sensed soil moisture acquired by AMSR-E, SMOS, and ASCAT. Calibration with discharge data improves the estimation of groundwater and routing parameters. Calibration with only remotely sensed soil moisture results in an accurate identification of parameters related to land-surface processes. For the Upper Danube upstream area up to 40,000 km2, calibration on both discharge and soil moisture results in a reduction by 10-30% in the RMSE for discharge simulations, compared to calibration on discharge alone. The conclusion is that remotely sensed soil moisture holds potential for calibration of hydrological models, leading to a better simulation of soil moisture content throughout the catchment and a better simulation of discharge in upstream areas.
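The analysis step of an ensemble Kalman filter can be sketched for a directly observed scalar; in the dual state-parameter scheme used above, the same update is applied to an augmented state-parameter vector, so parameters are calibrated alongside states. This is a minimal stand-in, not the LISFLOOD implementation:

```python
import random

def enkf_scalar_update(members, obs, obs_var, rng):
    """One analysis step of a scalar ensemble Kalman filter.

    Each ensemble member is nudged toward a perturbed copy of the
    observation; the Kalman gain weighs ensemble spread against
    observation error."""
    n = len(members)
    mean = sum(members) / n
    var = sum((m - mean) ** 2 for m in members) / (n - 1)
    gain = var / (var + obs_var)  # gain for a direct observation
    return [m + gain * (obs + rng.gauss(0.0, obs_var ** 0.5) - m)
            for m in members]
```

With a precise observation (small obs_var) the gain approaches 1 and the ensemble collapses toward the observation; with a vague one the prior ensemble dominates.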
A physiologically based pharmacokinetic (PBPK) model for the organoarsenical dimethylarsinic acid (DMA(V)) was developed in mice. The model was calibrated using tissue time course data from multiple tissues in mice administered DMA(V) intravenously. The final model structure was ...
Dependency of EBT2 film calibration curve on postirradiation time
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chang, Liyun, E-mail: liyunc@isu.edu.tw; Ding, Hueisch-Jy; Ho, Sheng-Yow
2014-02-15
Purpose: The Ashland Inc. product EBT2 film model is a widely used quality assurance tool, especially for verification of 2-dimensional dose distributions. In general, the calibration film and the dose measurement film are irradiated, scanned, and calibrated at the same postirradiation time (PIT), 1-2 days after the films are irradiated. However, for a busy clinic or in some special situations, the PIT for the dose measurement film may differ from that of the calibration film. In this case, the measured dose will be incorrect. This paper proposes a film calibration method that includes the effect of PIT. Methods: The dose versus film optical density was fitted to a power function with three parameters. One of these parameters was PIT dependent, while the other two were found to be almost constant, with a standard deviation of the mean of less than 4%. The PIT-dependent parameter was fitted to another power function of PIT. The EBT2 film model was calibrated using the PDD method with 14 different PITs ranging from 1 h to 2 months. Ten of the fourteen PITs were used for finding the fitting parameters, and the other four were used for testing the model. Results: The verification test shows that the differences between the delivered doses and the film doses calculated with this model were mainly within 2% for delivered doses above 60 cGy, and the total uncertainties were generally under 5%. The errors and total uncertainties of the film dose calculation were independent of the PIT when the proposed calibration procedure was used. However, the fitting uncertainty increased with decreasing dose or PIT, but stayed below 1.3% in this study. Conclusions: The EBT2 film dose can be modeled as a function of PIT. For ease of routine calibration, five PITs are suggested. It is recommended that two PITs be located in the fast-developing period (1-6 h), one at 1-2 days, one around a week, and one around a month.
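Both fits described above, dose versus optical density and the PIT-dependent coefficient versus PIT, are power laws, which can be fitted by least squares in log space. A sketch with the function name ours (the actual calibration fits three parameters jointly, which this two-parameter fit does not capture):

```python
import math

def fit_power(xs, ys):
    """Least-squares fit of y = k * x**m, linearized as
    log y = log k + m * log x."""
    lx = [math.log(x) for x in xs]
    ly = [math.log(y) for y in ys]
    n = len(xs)
    mx = sum(lx) / n
    my = sum(ly) / n
    m = (sum((a - mx) * (b - my) for a, b in zip(lx, ly))
         / sum((a - mx) ** 2 for a in lx))
    k = math.exp(my - m * mx)
    return k, m
```

Applied once to the dose-density data at each PIT, and once more to the resulting PIT-dependent coefficients, this yields a calibration valid at arbitrary postirradiation times.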
Saint-Maurice, Pedro F; Welk, Gregory J; Beyler, Nicholas K; Bartee, Roderick T; Heelan, Kate A
2014-05-16
Decoupling Principle Analysis and Development of a Parallel Three-Dimensional Force Sensor
Zhao, Yanzhi; Jiao, Leihao; Weng, Dacheng; Zhang, Dan; Zheng, Rencheng
2016-01-01
In the development of multi-dimensional force sensors, dimension coupling is the ubiquitous factor restricting improvement of the measurement accuracy. To effectively reduce the influence of dimension coupling on the parallel multi-dimensional force sensor, a novel parallel three-dimensional force sensor is proposed using a mechanical decoupling principle, and the influence of friction on dimension coupling is effectively reduced by replacing sliding friction with rolling friction. In this paper, the mathematical model is established in combination with the structure model of the parallel three-dimensional force sensor, and the modeling and analysis of the mechanical decoupling are carried out. The coupling degree (ε) of the designed sensor is defined and calculated, and the results show that the mechanically decoupled parallel structure of the sensor possesses good decoupling performance. A prototype of the parallel three-dimensional force sensor was developed, and FEM analysis was carried out. A load calibration and data acquisition experiment system was built, and calibration experiments were performed. According to these experiments, the measurement error is less than 2.86% and the coupling error is less than 3.02%. The experimental results show that the sensor system possesses high measuring accuracy, which provides a basis for applied research on parallel multi-dimensional force sensors. PMID:27649194
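A common way to quantify and remove dimension coupling is a least-squares decoupling (calibration) matrix estimated from known applied loads; the sketch below is a generic illustration with invented sensitivity values, not the paper's sensor data:

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical channel sensitivity matrix with small off-diagonal
# (coupling) terms; values are invented for illustration.
G = np.array([[1.00, 0.03, 0.02],
              [0.02, 0.95, 0.04],
              [0.01, 0.02, 1.05]])
F_cal = rng.uniform(-50.0, 50.0, size=(20, 3))  # applied calibration loads (N)
V_cal = F_cal @ G.T                             # recorded raw channel outputs

# Decoupling matrix D: solve V_cal @ D = F_cal in the least-squares sense
D, *_ = np.linalg.lstsq(V_cal, F_cal, rcond=None)

F_test = np.array([10.0, -20.0, 5.0])
F_hat = (F_test @ G.T) @ D  # decoupled force estimate
```

With noise-free synthetic data the decoupled estimate reproduces the applied load exactly; the residual off-diagonal terms of G.T @ D would quantify any remaining coupling.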
A methodology to develop computational phantoms with adjustable posture for WBC calibration
NASA Astrophysics Data System (ADS)
Ferreira Fonseca, T. C.; Bogaerts, R.; Hunt, John; Vanhavere, F.
2014-11-01
A Whole Body Counter (WBC) is a facility to routinely assess the internal contamination of exposed workers, especially in the case of radiation release accidents. The calibration of the counting device is usually done by using anthropomorphic physical phantoms representing the human body. Due to such a challenge of constructing representative physical phantoms a virtual calibration has been introduced. The use of computational phantoms and the Monte Carlo method to simulate radiation transport have been demonstrated to be a worthy alternative. In this study we introduce a methodology developed for the creation of realistic computational voxel phantoms with adjustable posture for WBC calibration. The methodology makes use of different software packages to enable the creation and modification of computational voxel phantoms. This allows voxel phantoms to be developed on demand for the calibration of different WBC configurations. This in turn helps to study the major source of uncertainty associated with the in vivo measurement routine which is the difference between the calibration phantoms and the real persons being counted. The use of realistic computational phantoms also helps the optimization of the counting measurement. Open source codes such as MakeHuman and Blender software packages have been used for the creation and modelling of 3D humanoid characters based on polygonal mesh surfaces. Also, a home-made software was developed whose goal is to convert the binary 3D voxel grid into a MCNPX input file. This paper summarizes the development of a library of phantoms of the human body that uses two basic phantoms called MaMP and FeMP (Male and Female Mesh Phantoms) to create a set of male and female phantoms that vary both in height and in weight. Two sets of MaMP and FeMP phantoms were developed and used for efficiency calibration of two different WBC set-ups: the Doel NPP WBC laboratory and AGM laboratory of SCK-CEN in Mol, Belgium.
Calibration of X-Ray Observatories
NASA Technical Reports Server (NTRS)
Weisskopf, Martin C.; O'Dell, Stephen L.
2011-01-01
Accurate calibration of x-ray observatories has proved an elusive goal. Inaccuracies and inconsistencies amongst on-ground measurements, differences between on-ground and in-space performance, in-space performance changes, and the absence of cosmic calibration standards whose physics we truly understand have precluded absolute calibration better than several percent and relative spectral calibration better than a few percent. The philosophy "the model is the calibration" relies upon a complete high-fidelity model of performance and an accurate verification and calibration of this model. As high-resolution x-ray spectroscopy begins to play a more important role in astrophysics, additional issues in accurately calibrating at high spectral resolution become more evident. Here we review the challenges of accurately calibrating the absolute and relative response of x-ray observatories. On-ground x-ray testing by itself is unlikely to achieve a high-accuracy calibration of in-space performance, especially when the performance changes with time. Nonetheless, it remains an essential tool in verifying functionality and in characterizing and verifying the performance model. In the absence of verified cosmic calibration sources, we also discuss the notion of an artificial, in-space x-ray calibration standard.
Neuromusculoskeletal model self-calibration for on-line sequential Bayesian moment estimation
NASA Astrophysics Data System (ADS)
Bueno, Diana R.; Montano, L.
2017-04-01
Objective. Neuromusculoskeletal models involve many subject-specific physiological parameters that need to be adjusted to adequately represent muscle properties. Traditionally, neuromusculoskeletal models have been calibrated with a forward-inverse dynamic optimization, which is time-consuming and unfeasible for rehabilitation therapy. No self-calibration algorithms have previously been applied to these models. To the best of our knowledge, the algorithm proposed in this work is the first on-line calibration algorithm for muscle models that allows a generic model to be adjusted to different subjects in a few steps. Approach. In this paper we propose a reformulation of the traditional muscle models that is able to sequentially estimate the kinetics (net joint moments), and also to perform a full self-calibration (estimating the subject-specific internal parameters of the muscle from a set of arbitrary uncalibrated data), based on the unscented Kalman filter. The nonlinearity of the model, as well as its calibration problem, has obliged us to adopt the sum-of-Gaussians filter, which is suitable for nonlinear systems. Main results. This sequential Bayesian self-calibration algorithm achieves a complete muscle model calibration using as input only a dataset of uncalibrated sEMG and kinematics data. The approach is validated experimentally using data from the upper limbs of 21 subjects. Significance. The results show the feasibility of neuromusculoskeletal model self-calibration. This study will contribute to a better understanding of the generalization of muscle models for subject-specific rehabilitation therapies. Moreover, this work is very promising for rehabilitation devices such as electromyography-driven exoskeletons or prostheses.
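For a linear-in-parameters moment model, the sequential-estimation idea can be illustrated with a standard Kalman-filter measurement update; this is a simplified linear stand-in for the unscented/sum-of-Gaussians filters used in the paper, with invented regressors and parameter values:

```python
import numpy as np

# Toy moment model: tau = phi(t) . theta, with theta the "muscle"
# parameters to identify (values invented for illustration).
theta_true = np.array([2.0, -1.0])
theta = np.zeros(2)         # prior mean
P = np.eye(2) * 10.0        # prior covariance
r = 0.01                    # assumed measurement-noise variance

for t in np.linspace(0.0, 5.0, 200):
    phi = np.array([np.sin(t), np.cos(t)])   # stand-in for EMG/kinematic features
    tau = phi @ theta_true                   # noise-free "measured" joint moment
    k = P @ phi / (phi @ P @ phi + r)        # Kalman gain
    theta = theta + k * (tau - phi @ theta)  # sequential (on-line) update
    P = P - np.outer(k, phi) @ P             # covariance update
```

Each new sample refines the parameter estimate on-line, which is the property that makes self-calibration feasible within a rehabilitation session.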
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cartas, Raul; Mimendia, Aitor; Valle, Manel del
2009-05-23
Calibration models for multi-analyte electronic tongues have commonly been built using a set of sensors, at least one per analyte under study. Complex signals recorded with these systems are formed by the sensors' responses to the analytes of interest plus interferents, from which a multivariate response model is then developed. This work describes a data treatment method for the simultaneous quantification of two species in solution employing the signal from a single sensor. The approach used here takes advantage of the complex information recorded in one electrode's transient after insertion of the sample for building the calibration models for both analytes. The signal from the electrode was first processed by the discrete wavelet transform to extract useful information and reduce its length, and then by artificial neural networks to fit a model. Two different potentiometric sensors were used as a study case to corroborate the effectiveness of the approach.
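The wavelet preprocessing step can be illustrated with a minimal one-level Haar transform, which halves the signal length while separating trend from detail; this is a generic stand-in, not the specific wavelet or network used in the paper:

```python
import numpy as np

def haar_dwt(signal):
    """One level of the Haar discrete wavelet transform: returns the
    approximation (trend) and detail coefficients, each half the
    input length (input length assumed even)."""
    x = np.asarray(signal, dtype=float)
    even, odd = x[0::2], x[1::2]
    approx = (even + odd) / np.sqrt(2.0)
    detail = (even - odd) / np.sqrt(2.0)
    return approx, detail
```

Applying several levels to an electrode transient yields a short coefficient vector that can serve as the input to an artificial neural network.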
Tyler Jon Smith; Lucy Amanda Marshall
2010-01-01
Model selection is an extremely important aspect of many hydrologic modeling studies because of the complexity, variability, and uncertainty that surrounds the current understanding of watershed-scale systems. However, development and implementation of a complete precipitation-runoff modeling framework, from model selection to calibration and uncertainty analysis, are...
NASA Astrophysics Data System (ADS)
Jumadi, Nur Anida; Beng, Gan Kok; Ali, Mohd Alauddin Mohd; Zahedi, Edmond; Morsin, Marlia
2017-09-01
The implementation of a surface-based Monte Carlo simulation technique for oxygen saturation (SaO2) calibration curve estimation is demonstrated in this paper. Generally, the calibration curve is estimated either from an empirical study using animals as the subject of experiment or is derived from mathematical equations. However, the determination of the calibration curve using animals is time-consuming and requires expertise to conduct the experiment. Alternatively, optical simulation techniques have been used widely in the biomedical optics field due to their capability to exhibit real tissue behavior. The mathematical relationship between optical density (OD) and optical density ratios (ODR) associated with SaO2 during systole and diastole is used as the basis of obtaining the theoretical calibration curve. The optical properties corresponding to systolic and diastolic behavior were applied to the tissue model to mimic the optical properties of the tissues. Based on the absorbed ray flux at the detectors, the OD and ODR were successfully calculated. The simulated optical density ratios at every 20% interval of SaO2 are presented, with a maximum error of 2.17% when compared with a previous numerical simulation technique (MC model). The findings reveal the potential of the proposed method to be used for extended calibration curve studies using other wavelength pairs.
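The OD and ODR quantities named above can be computed directly from the simulated systolic and diastolic fluxes; the definitions below follow the usual pulse-oximetry convention and are an assumption about the paper's exact formulation:

```python
import numpy as np

def od(i_diastole, i_systole):
    """Pulsatile optical density at one wavelength: OD = log10(I_dia / I_sys)."""
    return np.log10(i_diastole / i_systole)

def odr(i_dia_red, i_sys_red, i_dia_ir, i_sys_ir):
    """Optical density ratio (red over infrared), the abscissa of the
    SaO2 calibration curve."""
    return od(i_dia_red, i_sys_red) / od(i_dia_ir, i_sys_ir)
```

Tabulating the ODR for tissue models at each SaO2 level then gives the theoretical calibration curve.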
NASA Astrophysics Data System (ADS)
Laborda, Francisco; Medrano, Jesús; Castillo, Juan R.
2004-06-01
The quality of the quantitative results obtained from transient signals in high-performance liquid chromatography-inductively coupled plasma mass spectrometry (HPLC-ICPMS) and flow injection-inductively coupled plasma mass spectrometry (FI-ICPMS) was investigated under multielement conditions. Quantification methods were based on multiple-point calibration by simple and weighted linear regression, and on double-point calibration (measurement of the baseline and one standard). An uncertainty model, which includes the main sources of uncertainty in FI-ICPMS and HPLC-ICPMS (signal measurement, sample flow rate, and injection volume), was developed to estimate peak area uncertainties and the statistical weights used in weighted linear regression. The behaviour of the ICPMS instrument was characterized so that it could be accounted for in the model, concluding that the instrument works as a concentration detector when it is used to monitor transient signals from flow injection or chromatographic separations. Proper quantification by the three calibration methods was achieved when compared to reference materials, and the double-point calibration yielded results of the same quality as the multiple-point calibration while shortening the calibration time. Relative expanded uncertainties ranged from 10-20% for concentrations around the LOQ to 5% for concentrations higher than 100 times the LOQ.
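The two calibration strategies compared in the abstract can be sketched on synthetic, noise-free peak-area data (all numbers invented for illustration):

```python
import numpy as np

conc = np.array([0.0, 10.0, 50.0, 100.0, 500.0])  # standard concentrations
area = 12.0 * conc + 150.0                        # synthetic peak areas (baseline 150)

# Multiple-point weighted linear regression: np.polyfit's weights multiply
# the residuals, so pass sqrt(1/u_i^2) for variance-based weighting.
w = 1.0 / np.maximum(area, 1.0)                   # illustrative statistical weights
slope, intercept = np.polyfit(conc, area, 1, w=np.sqrt(w))

# Double-point calibration: the baseline plus a single standard.
slope_dp = (area[3] - area[0]) / conc[3]
```

On noise-free data both approaches recover the same sensitivity, consistent with the abstract's finding that double-point calibration can match multiple-point calibration while shortening the calibration time.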
NASA Astrophysics Data System (ADS)
Abbaspour, K. C.; Rouholahnejad, E.; Vaghefi, S.; Srinivasan, R.; Yang, H.; Kløve, B.
2015-05-01
A combination of driving forces is increasing pressure on local, national, and regional water supplies needed for irrigation, energy production, industrial uses, domestic purposes, and the environment. In many parts of Europe groundwater quantity, and in particular quality, has undergone severe degradation, and water levels have decreased, resulting in negative environmental impacts. Rapid improvements in the economies of the eastern European bloc of countries and uncertainties with regard to freshwater availability create challenges for water managers. At the same time, climate change adds a new level of uncertainty with regard to freshwater supplies. In this research we build and calibrate an integrated hydrological model of Europe using the Soil and Water Assessment Tool (SWAT) program. Different components of water resources are simulated, and crop yield and water quality are considered at the Hydrological Response Unit (HRU) level. The water resources are quantified at subbasin level with monthly time intervals. Leaching of nitrate into groundwater is also simulated at a finer spatial level (HRU). The use of large-scale, high-resolution water resources models enables consistent and comprehensive examination of integrated system behavior through physically based, data-driven simulation. In this article we discuss issues with data availability and calibration of large-scale distributed models, and outline procedures for model calibration and uncertainty analysis. The calibrated model and results provide information support to the European Water Framework Directive and lay the basis for further assessment of the impact of climate change on water availability and quality. The approach and methods developed are general and can be applied to any large region around the world.
Three-dimensional numerical model of ground-water flow in northern Utah Valley, Utah County, Utah
Gardner, Philip M.
2009-01-01
A three-dimensional, finite-difference, numerical model was developed to simulate ground-water flow in northern Utah Valley, Utah. The model includes expanded areal boundaries as compared to a previous ground-water flow model of the valley and incorporates more than 20 years of additional hydrologic data. The model boundary was generally expanded to include the bedrock in the surrounding mountain block as far as the surface-water divide. New wells have been drilled in basin-fill deposits near the consolidated-rock boundary. Simulating the hydrologic conditions within the bedrock allows for improved simulation of the effect of withdrawal from these wells. The inclusion of bedrock also allowed for the use of a recharge model that provided an alternative method for spatially distributing areal recharge over the mountains. The model was calibrated to steady- and transient-state conditions. The steady-state simulation was developed and calibrated by using hydrologic data that represented average conditions for 1947. The transient-state simulation was developed and calibrated by using hydrologic data collected from 1947 to 2004. Areally, the model grid is 79 rows by 70 columns, with variable cell size. Cells throughout most of the model domain represent 0.3 mile on each side. The largest cells are rectangular with dimensions of about 0.3 by 0.6 mile. The largest cells represent the mountain block on the eastern edge of the model domain where the least hydrologic data are available. Vertically, the aquifer system is divided into 4 layers which incorporate 11 hydrogeologic units. The model simulates recharge to the ground-water flow system as (1) infiltration of precipitation over the mountain block, (2) infiltration of precipitation over the valley floor, (3) infiltration of unconsumed irrigation water from fields, lawns, and gardens, (4) seepage from streams and canals, and (5) subsurface inflow from Cedar Valley.
Discharge of ground water is simulated by the model to (1) flowing and pumping wells, (2) drains and springs, (3) evapotranspiration, (4) Utah Lake, (5) the Jordan River and mountain streams, and (6) Salt Lake Valley by subsurface outflow through the Jordan Narrows. During steady-state calibration, variables were adjusted within probable ranges to minimize differences between model-computed and measured water levels as well as between model-computed and independently estimated flows that include: recharge by seepage from individual streams and canals, discharge by seepage to individual streams and the Jordan River, discharge to Utah Lake, discharge to drains and springs, discharge by evapotranspiration, and subsurface flows into and out of northern Utah Valley from Cedar Valley and to Salt Lake Valley, respectively. The transient-state simulation was calibrated to measured water levels and water-level changes with consideration given to annual changes in the flows listed above.
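The finite-difference approach used by such models can be illustrated with a one-dimensional steady-state head equation, d/dx(T dh/dx) + R = 0, solved by relaxation; the grid size, transmissivity, and recharge below are invented toy values, not the northern Utah Valley model's inputs:

```python
import numpy as np

n, dx = 21, 100.0   # cells and cell size (m), toy grid
T = 500.0           # uniform transmissivity (m^2/day), assumed
R = 0.001           # areal recharge rate (m/day), assumed
h = np.zeros(n)
h[0], h[-1] = 100.0, 90.0   # fixed heads at the boundaries (m)

# Jacobi relaxation of the discrete balance
# h[i-1] - 2*h[i] + h[i+1] = -R*dx**2 / T
for _ in range(20000):
    h[1:-1] = 0.5 * (h[:-2] + h[2:]) + R * dx**2 / (2.0 * T)
```

The converged heads follow the analytic parabolic profile between the boundary values; production codes such as MODFLOW solve the same balance in three dimensions with heterogeneous properties and many boundary-condition packages.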
NASA Astrophysics Data System (ADS)
Bijl, Piet; Reynolds, Joseph P.; Vos, Wouter K.; Hogervorst, Maarten A.; Fanning, Jonathan D.
2011-05-01
The TTP (Targeting Task Performance) metric, developed at NVESD, is the current standard US Army model to predict EO/IR Target Acquisition performance. This model however does not have a corresponding lab or field test to empirically assess the performance of a camera system. The TOD (Triangle Orientation Discrimination) method, developed at TNO in The Netherlands, provides such a measurement. In this study, we make a direct comparison between TOD performance for a range of sensors and the extensive historical US observer performance database built to develop and calibrate the TTP metric. The US perception data were collected doing an identification task by military personnel on a standard 12 target, 12 aspect tactical vehicle image set that was processed through simulated sensors for which the most fundamental sensor parameters such as blur, sampling, spatial and temporal noise were varied. In the present study, we measured TOD sensor performance using exactly the same sensors processing a set of TOD triangle test patterns. The study shows that good overall agreement is obtained when the ratio between target characteristic size and TOD test pattern size at threshold equals 6.3. Note that this number is purely based on empirical data without any intermediate modeling. The calibration of the TOD to the TTP is highly beneficial to the sensor modeling and testing community for a variety of reasons. These include: i) a connection between requirement specification and acceptance testing, and ii) a very efficient method to quickly validate or extend the TTP range prediction model to new systems and tasks.
Hill, Mary C.; Tiedeman, Claire
2007-01-01
Methods and guidelines for developing and using mathematical models. Turn to Effective Groundwater Model Calibration for a set of methods and guidelines that can help produce more accurate and transparent mathematical models. The models can represent groundwater flow and transport and other natural and engineered systems. Use this book and its extensive exercises to learn methods to fully exploit the data on hand, maximize the model's potential, and troubleshoot any problems that arise. Use the methods to perform: sensitivity analysis to evaluate the information content of data; data assessment to identify (a) existing measurements that dominate model development and predictions and (b) potential measurements likely to improve the reliability of predictions; calibration to develop models that are consistent with the data in an optimal manner; and uncertainty evaluation to quantify and communicate errors in simulated results that are often used to make important societal decisions. Most of the methods are based on linear and nonlinear regression theory. Fourteen guidelines show the reader how to use the methods advantageously in practical situations. Exercises focus on a groundwater flow system and management problem, enabling readers to apply all the methods presented in the text. The exercises can be completed using the material provided in the book, or as hands-on computer exercises using instructions and files available on the text's accompanying Web site. Throughout the book, the authors stress the need for valid statistical concepts and easily understood presentation methods required to achieve well-tested, transparent models. Most of the examples and all of the exercises focus on simulating groundwater systems; other examples come from surface-water hydrology and geophysics. The methods and guidelines in the text are broadly applicable and can be used by students, researchers, and engineers to simulate many kinds of systems.
Lamain-de Ruiter, Marije; Kwee, Anneke; Naaktgeboren, Christiana A; de Groot, Inge; Evers, Inge M; Groenendaal, Floris; Hering, Yolanda R; Huisjes, Anjoke J M; Kirpestein, Cornel; Monincx, Wilma M; Siljee, Jacqueline E; Van 't Zelfde, Annewil; van Oirschot, Charlotte M; Vankan-Buitelaar, Simone A; Vonk, Mariska A A W; Wiegers, Therese A; Zwart, Joost J; Franx, Arie; Moons, Karel G M; Koster, Maria P H
2016-08-30
To perform an external validation and direct comparison of published prognostic models for early prediction of the risk of gestational diabetes mellitus, including predictors applicable in the first trimester of pregnancy. External validation of all published prognostic models in large scale, prospective, multicentre cohort study. 31 independent midwifery practices and six hospitals in the Netherlands. Women recruited in their first trimester (<14 weeks) of pregnancy between December 2012 and January 2014, at their initial prenatal visit. Women with pre-existing diabetes mellitus of any type were excluded. Discrimination of the prognostic models was assessed by the C statistic, and calibration assessed by calibration plots. 3723 women were included for analysis, of whom 181 (4.9%) developed gestational diabetes mellitus in pregnancy. 12 prognostic models for the disorder could be validated in the cohort. C statistics ranged from 0.67 to 0.78. Calibration plots showed that eight of the 12 models were well calibrated. The four models with the highest C statistics included almost all of the following predictors: maternal age, maternal body mass index, history of gestational diabetes mellitus, ethnicity, and family history of diabetes. Prognostic models had a similar performance in a subgroup of nulliparous women only. Decision curve analysis showed that the use of these four models always had a positive net benefit. In this external validation study, most of the published prognostic models for gestational diabetes mellitus show acceptable discrimination and calibration. The four models with the highest discriminative abilities in this study cohort, which also perform well in a subgroup of nulliparous women, are easy models to apply in clinical practice and therefore deserve further evaluation regarding their clinical impact. Published by the BMJ Publishing Group Limited. 
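The C statistic used to assess discrimination is the probability that a randomly chosen case (event) receives a higher predicted risk than a randomly chosen non-case; a direct pairwise implementation is:

```python
import numpy as np

def c_statistic(risk, outcome):
    """Concordance (C) statistic: fraction of event/non-event pairs in
    which the event has the higher predicted risk; ties count one half."""
    risk = np.asarray(risk, dtype=float)
    outcome = np.asarray(outcome, dtype=int)
    pos, neg = risk[outcome == 1], risk[outcome == 0]
    diff = pos[:, None] - neg[None, :]
    return ((diff > 0).sum() + 0.5 * (diff == 0).sum()) / diff.size
```

A value of 0.5 indicates no discrimination and 1.0 perfect discrimination, so the reported range of 0.67 to 0.78 reflects acceptable discriminative ability.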
A theory of forest dynamics: Spatially explicit models and issues of scale
NASA Technical Reports Server (NTRS)
Pacala, S.
1990-01-01
Good progress has been made in the first year of DOE grant #FG02-90ER60933. The purpose of the project is to develop and investigate models of forest dynamics that apply across a range of spatial scales. The grant is one third of a three-part project. The second third was funded by the NSF this year and is intended to provide the empirical data necessary to calibrate and test small-scale (less than or equal to 1000 ha) models. The final third was also funded this year (NASA), and will provide data to calibrate and test the large-scale features of the models.
Using sensitivity analysis in model calibration efforts
Tiedeman, Claire; Hill, Mary C.
2003-01-01
In models of natural and engineered systems, sensitivity analysis can be used to assess relations among system state observations, model parameters, and model predictions. The model itself links these three entities, and model sensitivities can be used to quantify the links. Sensitivities are defined as the derivatives of simulated quantities (such as simulated equivalents of observations, or model predictions) with respect to model parameters. We present four measures calculated from model sensitivities that quantify the observation-parameter-prediction links and that are especially useful during the calibration and prediction phases of modeling. These four measures are composite scaled sensitivities (CSS), prediction scaled sensitivities (PSS), the value of improved information (VOII) statistic, and the observation prediction (OPR) statistic. These measures can be used to help guide initial calibration of models, collection of field data beneficial to model predictions, and recalibration of models updated with new field information. Once model sensitivities have been calculated, each of the four measures requires minimal computational effort. We apply the four measures to a three-layer MODFLOW-2000 (Harbaugh et al., 2000; Hill et al., 2000) model of the Death Valley regional ground-water flow system (DVRFS), located in southern Nevada and California. D’Agnese et al. (1997, 1999) developed and calibrated the model using nonlinear regression methods. Figure 1 shows some of the observations, parameters, and predictions for the DVRFS model. Observed quantities include hydraulic heads and spring flows. The 23 defined model parameters include hydraulic conductivities, vertical anisotropies, recharge rates, evapotranspiration rates, and pumpage. Predictions of interest for this regional-scale model are advective transport paths from potential contamination sites underlying the Nevada Test Site and Yucca Mountain.
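The composite scaled sensitivity for parameter j is CSS_j = sqrt((1/ND) * sum_i (dy_i/db_j * b_j * w_i^(1/2))^2). A finite-difference sketch of this measure on a toy two-parameter model (invented, not the DVRFS model) is:

```python
import numpy as np

def css(model, params, weights, eps=1e-6):
    """Composite scaled sensitivities via forward finite differences."""
    params = np.asarray(params, dtype=float)
    y0 = np.asarray(model(params), dtype=float)
    out = np.empty(params.size)
    for j, b in enumerate(params):
        p = params.copy()
        h = eps * max(abs(b), 1.0)
        p[j] += h
        dy = (np.asarray(model(p), dtype=float) - y0) / h  # dy_i/db_j
        out[j] = np.sqrt(np.mean((dy * b * np.sqrt(weights)) ** 2))
    return out

# Toy "flow model": two simulated heads depending mostly on params[0]
toy = lambda b: np.array([2.0 * b[0] + 0.1 * b[1], 3.0 * b[0]])
s = css(toy, [1.0, 1.0], weights=np.ones(2))
```

A much larger CSS for the first parameter indicates that the observations carry far more information about it, which is exactly the guidance these measures provide during calibration.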
Yang, Tao; Sezer, Hayri; Celik, Ismail B.; ...
2015-06-02
In the present paper, a physics-based procedure combining experiments and multi-physics numerical simulations is developed for overall analysis of SOFC operational diagnostics and performance predictions. In this procedure, essential information for the fuel cell is extracted first by utilizing empirical polarization analysis in conjunction with experiments and refined by multi-physics numerical simulations via simultaneous analysis and calibration of the polarization curve and impedance behavior. The performance at different utilization cases and operating currents is also predicted to confirm the accuracy of the proposed model. It is demonstrated that, with the present electrochemical model, three air/fuel flow conditions are needed to produce a set of complete data for better understanding of the processes occurring within SOFCs. After calibration against button cell experiments, the methodology can be used to assess the performance of planar cells without further calibration. The proposed methodology would accelerate the calibration process and improve the efficiency of design and diagnostics.
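A polarization curve of the kind analyzed can be written as the open-circuit voltage minus ohmic, activation (Tafel), and concentration losses; the functional form and every parameter value below are generic textbook placeholders, not the calibrated SOFC model:

```python
import numpy as np

def cell_voltage(i, ocv=1.05, r_ohm=0.15, b=0.06, i0=0.01, c=0.05, i_lim=2.0):
    """Illustrative polarization model, V(i) in volts for current density i
    (A/cm^2). All parameter values are hypothetical."""
    return (ocv
            - r_ohm * i                     # ohmic loss
            - b * np.log(i / i0)            # activation (Tafel) loss
            + c * np.log(1.0 - i / i_lim))  # concentration loss (negative term)
```

Fitting such a curve at several air/fuel flow conditions, together with impedance data, is what allows the individual loss terms to be separated, as the procedure above describes.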
Meat mixture detection in Iberian pork sausages.
Ortiz-Somovilla, V; España-España, F; De Pedro-Sanz, E J; Gaitán-Jurado, A J
2005-11-01
Five homogenized meat mixture treatments of Iberian (I) and/or Standard (S) pork were set up. Each treatment was analyzed by NIRS as a fresh product (N=75) and as dry-cured sausage (N=75). Spectra acquisition was carried out using DA 7000 equipment (Perten Instruments), obtaining a total of 750 spectra. Several absorption peaks and bands were selected as the most representative for homogenized dry-cured and fresh sausages. Discriminant analysis and mixture prediction equations were carried out based on the spectral data gathered. The best results using discriminant models were for fresh products, with 98.3% (calibration) and 60% (validation) correct classification. For dry-cured sausages, 91.7% (calibration) and 80% (validation) of the samples were correctly classified. Models developed using mixture prediction equations showed SECV = 4.7, r² = 0.98 (calibration), and 73.3% of the validation set was correctly classified for the fresh product. The corresponding values for dry-cured sausages were SECV = 5.9, r² = 0.99 (calibration), and 93.3% correctly classified for validation.
Model development for MODIS thermal band electronic cross-talk
NASA Astrophysics Data System (ADS)
Chang, Tiejun; Wu, Aisheng; Geng, Xu; Li, Yonghong; Brinkmann, Jake; Keller, Graziela; Xiong, Xiaoxiong (Jack)
2016-10-01
The MODerate-resolution Imaging Spectroradiometer (MODIS) has 36 bands, among them 16 thermal emissive bands covering the wavelength range from 3.8 to 14.4 μm. After 16 years of on-orbit operation, the electronic crosstalk of a few Terra MODIS thermal emissive bands has developed substantial issues that cause biases in the Earth-view (EV) brightness temperature measurements and surface feature contamination. The crosstalk effects on band 27, with a center wavelength of 6.7 μm, and band 29, at 8.5 μm, have increased significantly in recent years, affecting downstream products such as water vapor and cloud mask. The crosstalk can be observed in the nearly monthly scheduled lunar measurements, from which the crosstalk coefficients can be derived. Most MODIS thermal bands, however, saturate at lunar surface temperatures, so the development of an alternative approach is very helpful for verification. In this work, a physical model was developed to assess the crosstalk impact on calibration as well as on Earth-view brightness temperature retrieval. This model was applied empirically to Terra MODIS band 29 for correction of Earth brightness temperature measurements. The model development takes the detector's nonlinear response into account. The impacts of the electronic crosstalk are assessed in two steps. The first step determines the impact on calibration using the on-board blackbody (BB). Due to the detector's nonlinear response and the large background signal, both the linear and nonlinear calibration coefficients are affected by the crosstalk from the sending bands; the crosstalk impact on the calibration coefficients was calculated. The second step calculates the effects on the Earth-view brightness temperature retrieval, which include those from the affected calibration coefficients and the contamination of the Earth-view measurements. The model links the measurement bias with the crosstalk coefficients, the detector nonlinearity, and the ratio of Earth measurements between the sending and receiving bands.
The correction of the electronic crosstalk can then be implemented empirically from the processed bias at different brightness temperatures. The implementation can follow two approaches. In the first, as in routine calibration assessment for the thermal infrared bands, trending over selected Earth scenes is processed for all detectors in a band and the band-averaged bias is derived for a given time period; the correction of an affected band is made by regressing the model against the band-averaged bias, after which corrections for detector differences are applied. The second approach requires trending for individual detectors, and the bias of each detector is used in the regression with the model. A test of the first approach was made for Terra MODIS band 29, with the biases derived from long-term trending of sea surface temperature and Dome-C surface temperature.
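To first order, the crosstalk mechanism described above can be sketched as a sending band leaking a fixed fraction of its signal into a receiving band; the coefficient is then recovered by regressing the observed bias against the sending-band signal. This toy sketch omits the detector nonlinearity and background terms that the paper's model includes, and the coefficient value is illustrative.

```python
import numpy as np

def estimate_coefficient(bias, dn_sending):
    """Recover the crosstalk coefficient as the slope (through the
    origin) of observed bias vs. sending-band digital counts."""
    return float(np.dot(dn_sending, bias) / np.dot(dn_sending, dn_sending))

def correct_crosstalk(dn_receiving, dn_sending, c):
    """First-order correction: remove the leaked fraction c of the
    sending band's counts from the receiving band."""
    return dn_receiving - c * dn_sending

# Synthetic scene measurements: true crosstalk coefficient 0.003
dn_sending = np.array([1000.0, 2000.0, 3000.0, 4000.0])
dn_true = 0.5 * dn_sending                # uncontaminated receiving-band counts
bias = 0.003 * dn_sending                 # leaked signal
c = estimate_coefficient(bias, dn_sending)
dn_clean = correct_crosstalk(dn_true + bias, dn_sending, c)
```

In the actual correction, the bias enters through both the calibration coefficients and the Earth-view measurements, so the regression is performed against the model rather than a single linear term.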
Objective calibration of numerical weather prediction models
NASA Astrophysics Data System (ADS)
Voudouri, A.; Khain, P.; Carmona, I.; Bellprat, O.; Grazzini, F.; Avgoustoglou, E.; Bettems, J. M.; Kaufmann, P.
2017-07-01
Numerical weather prediction (NWP) and climate models use parameterization schemes for physical processes, which often include free or poorly constrained parameters. Model developers normally calibrate the values of these parameters subjectively to improve the agreement of forecasts with available observations, a procedure referred to as expert tuning. A practicable objective multivariate calibration method built on a quadratic meta-model (MM), previously applied to a regional climate model (RCM), has been shown to be at least as good as expert tuning. Based on these results, an approach for implementing the methodology in an NWP model is presented in this study. The challenges in transferring the methodology from an RCM to NWP are not restricted to the higher resolution and different time scales: the sensitivity of NWP model quality with respect to the model parameter space has to be clarified, and the overall procedure must be optimized in terms of the computing resources required to calibrate an NWP model. Three free model parameters, mainly affecting the turbulence parameterization schemes, were initially selected for their influence on variables associated with daily forecasts, such as daily minimum and maximum 2 m temperature and 24 h accumulated precipitation. Preliminary results indicate that the approach is both affordable in terms of computing resources and meaningful in terms of improved forecast quality. In addition, the proposed methodology has the advantage of being a replicable procedure that can be applied when an updated model version is launched and/or to customize the same model implementation for different climatological areas.
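The core idea of the quadratic meta-model can be shown in one dimension: a handful of tuning runs sample the forecast-error surface over a free parameter, a quadratic is fitted to those samples, and its vertex gives the calibrated parameter value without further expensive model runs. This is a minimal one-parameter sketch (the published method is multivariate, with cross terms between parameters); the error values below are synthetic.

```python
import numpy as np

def fit_quadratic_metamodel(params, scores):
    """Fit score ~ a*p**2 + b*p + c and return the parameter value
    minimizing the fitted quadratic (the vertex -b / 2a)."""
    a, b, c = np.polyfit(params, scores, 2)
    return -b / (2.0 * a), (a, b, c)

# Hypothetical tuning runs: forecast error as a function of one free
# parameter, with a synthetic optimum at p = 0.55.
p = np.array([0.2, 0.4, 0.6, 0.8, 1.0])
err = (p - 0.55) ** 2 + 0.1

p_opt, _coeffs = fit_quadratic_metamodel(p, err)
```

The appeal of the meta-model is exactly this economy: a few full model integrations constrain the fitted surface, and the minimization itself is essentially free.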
NASA Astrophysics Data System (ADS)
Becker, R.; Usman, M.
2017-12-01
A SWAT (Soil and Water Assessment Tool) model is applied to the semi-arid Punjab region of Pakistan. The physically based hydrological model is set up to simulate hydrological processes and water resource demands under future land use, climate change, and irrigation management scenarios. In order to run the model successfully, detailed attention is paid to its calibration procedure. The study addresses the following calibration issues: (i) the lack of reliable calibration/validation data, (ii) the difficulty of accurately modeling a highly managed system with a physically based hydrological model, and (iii) the use of alternative, spatially distributed data sets for model calibration. In our study area field observations are rare, and the entirely human-controlled irrigation system renders central calibration parameters (e.g., runoff/curve number) unsuitable, as it cannot be assumed that they represent the natural behavior of the hydrological system. Principal hydrological processes can, however, still be inferred from evapotranspiration (ET). Usman et al. (2015) derived satellite-based monthly ET data for our study area with SEBAL (Surface Energy Balance Algorithm for Land) and created a reliable ET data set, which we use in this study to calibrate our SWAT model. The initial SWAT model performance is evaluated against the SEBAL results using correlation coefficients, RMSE, Nash-Sutcliffe efficiencies, and mean differences. Particular focus is placed on the spatial patterns, investigating the potential of a spatially differentiated parameterization instead of spatially uniform calibration data. A sensitivity analysis reveals the parameters most sensitive to changes in ET, which are then selected for the calibration process. Using the SEBAL ET product, we calibrate the SWAT model for the period 2005-2006 with a dynamically dimensioned global search algorithm that minimizes the RMSE.
The model improvement after the calibration procedure is finally evaluated based on the previously chosen evaluation criteria for the time period 2007-2008. The study reveals the sensitivity of SWAT model parameters to changes in ET in a semi-arid and human controlled system and the potential of calibrating those parameters using satellite derived ET data.
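The calibrate-against-satellite-ET loop described above reduces, in its simplest form, to searching a parameter space for the value that minimizes the RMSE between simulated and reference monthly ET. The sketch below uses a one-parameter toy "model" and an exhaustive grid search in place of SWAT and the dynamically dimensioned search algorithm; all series and values are synthetic.

```python
import numpy as np

def rmse(sim, obs):
    """Root mean square error between simulated and observed series."""
    return float(np.sqrt(np.mean((sim - obs) ** 2)))

def toy_et_model(k, forcing):
    """Stand-in for the hydrological model: monthly ET as a scaled
    forcing series, with k the single calibration parameter."""
    return k * forcing

# "Satellite" reference ET, generated here with k_true = 0.7
forcing = np.array([80.0, 95.0, 120.0, 140.0, 150.0, 130.0])  # mm/month
et_ref = 0.7 * forcing

# One-parameter grid search minimizing RMSE against the reference ET
grid = np.linspace(0.1, 1.5, 141)
best_k = min(grid, key=lambda k: rmse(toy_et_model(k, forcing), et_ref))
```

With many parameters the grid becomes infeasible, which is why global search algorithms such as dynamically dimensioned search are used; the objective function, however, stays the same.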
Comparison of global optimization approaches for robust calibration of hydrologic model parameters
NASA Astrophysics Data System (ADS)
Jung, I. W.
2015-12-01
Robustness of the calibrated parameters of hydrologic models is necessary to provide a reliable prediction of future watershed behavior under varying climate conditions. This study investigated calibration performance as a function of the length of the calibration period, the objective function, the hydrologic model structure, and the optimization method. To do this, the combination of three global optimization methods (SCE-UA, Micro-GA, and DREAM) and four hydrologic models (SAC-SMA, GR4J, HBV, and PRMS) was tested with different calibration periods and objective functions. Our results showed that the three global optimization methods provided similar calibration performance across the different calibration periods, objective functions, and hydrologic models. However, using the index of agreement, normalized root mean square error, or Nash-Sutcliffe efficiency as the objective function performed better than using the correlation coefficient or percent bias. Calibration performance for calibration periods from one to seven years was hard to generalize, because the four hydrologic models have different levels of complexity and different years carry different information content in the hydrological observations. Acknowledgements: This research was supported by a grant (14AWMP-B082564-01) from the Advanced Water Management Research Program funded by the Ministry of Land, Infrastructure and Transport of the Korean government.
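Several of the objective functions compared across these calibration studies have compact standard definitions. The sketch below implements three of them (Nash-Sutcliffe efficiency, Willmott's index of agreement, and percent bias) following the commonly used formulas; note that the sign convention for percent bias varies between authors, and the example flow series is synthetic.

```python
import numpy as np

def nse(sim, obs):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit; 0 means the
    simulation is no better than the observed mean."""
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def index_of_agreement(sim, obs):
    """Willmott's index of agreement, bounded in [0, 1]."""
    num = np.sum((obs - sim) ** 2)
    den = np.sum((np.abs(sim - obs.mean()) + np.abs(obs - obs.mean())) ** 2)
    return 1.0 - num / den

def pbias(sim, obs):
    """Percent bias; with this convention, positive values indicate
    underestimation by the simulation."""
    return 100.0 * np.sum(obs - sim) / np.sum(obs)

# Synthetic observed and simulated monthly flows
obs = np.array([10.0, 20.0, 35.0, 50.0, 40.0])
sim = np.array([12.0, 18.0, 33.0, 52.0, 41.0])

scores = {
    "NSE": nse(sim, obs),
    "d": index_of_agreement(sim, obs),
    "PBIAS": pbias(sim, obs),
}
```

The study's finding is visible in these formulas: NSE and the index of agreement penalize pointwise errors, whereas percent bias only constrains the total volume, so a calibration driven by percent bias alone can leave the hydrograph shape poorly matched.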