C.F. Ahlers, H.H. Liu
2001-12-18
The purpose of this Analysis/Model Report (AMR) is to document the Calibrated Properties Model that provides calibrated parameter sets for unsaturated zone (UZ) flow and transport process models for the Yucca Mountain Site Characterization Project (YMP). This work was performed in accordance with the AMR Development Plan for U0035 Calibrated Properties Model REV00 (CRWMS M&O 1999c). These calibrated property sets include matrix and fracture parameters for the UZ Flow and Transport Model (UZ Model), drift seepage models, drift-scale and mountain-scale coupled-processes models, and Total System Performance Assessment (TSPA) models as well as Performance Assessment (PA) and other participating national laboratories and government agencies. These process models provide the necessary framework to test conceptual hypotheses of flow and transport at different scales and predict flow and transport behavior under a variety of climatic and thermal-loading conditions.
C. Ahlers; H. Liu
2000-03-12
The purpose of this Analysis/Model Report (AMR) is to document the Calibrated Properties Model that provides calibrated parameter sets for unsaturated zone (UZ) flow and transport process models for the Yucca Mountain Site Characterization Project (YMP). This work was performed in accordance with the ''AMR Development Plan for U0035 Calibrated Properties Model REV00''. These calibrated property sets include matrix and fracture parameters for the UZ Flow and Transport Model (UZ Model), drift seepage models, drift-scale and mountain-scale coupled-processes models, and Total System Performance Assessment (TSPA) models as well as Performance Assessment (PA) and other participating national laboratories and government agencies. These process models provide the necessary framework to test conceptual hypotheses of flow and transport at different scales and predict flow and transport behavior under a variety of climatic and thermal-loading conditions.
Calibration for the SAGE III/EOS instruments
NASA Technical Reports Server (NTRS)
Chu, W. P.; Mccormick, M. P.; Zawodny, J. M.; Mcmaster, L. R.
1991-01-01
The calibration plan for the SAGE III instruments for maintaining instrument performance during the Earth Observing System (EOS) mission lifetime is described. The SAGE III calibration plan consists of detailed preflight and inflight calibration of the instrument performance, together with a correlative measurement program to validate the data products from the inverted satellite measurements. Since the measurement technique is primarily solar/lunar occultation, the instrument will be self-calibrating by using the sun as the calibration source during routine operation in flight. The instrument is designed to perform radiometric calibration of throughput, spectral, and spatial response in flight during routine operation. Spectral calibration can be performed in-flight from observation of the solar Fraunhofer lines within the spectral region from 290 to 1030 nm wavelength.
T. Ghezzehej
2004-10-04
The purpose of this model report is to document the calibrated properties model that provides calibrated property sets for unsaturated zone (UZ) flow and transport process models (UZ models). The calibration of the property sets is performed through inverse modeling. This work followed, and was planned in, ''Technical Work Plan (TWP) for: Unsaturated Zone Flow Analysis and Model Report Integration'' (BSC 2004 [DIRS 169654], Sections 1.2.6 and 2.1.1.6). Direct inputs to this model report were derived from the following upstream analysis and model reports: ''Analysis of Hydrologic Properties Data'' (BSC 2004 [DIRS 170038]); ''Development of Numerical Grids for UZ Flow and Transport Modeling'' (BSC 2004 [DIRS 169855]); ''Simulation of Net Infiltration for Present-Day and Potential Future Climates'' (BSC 2004 [DIRS 170007]); ''Geologic Framework Model'' (GFM2000) (BSC 2004 [DIRS 170029]). Additionally, this model report incorporates errata of the previous version and closure of the Key Technical Issue agreement TSPAI 3.26 (Section 6.2.2 and Appendix B), and it is revised for improved transparency.
H. H. Liu
2003-02-14
This report has documented the methodologies and the data used for developing rock property sets for three infiltration maps. Model calibration is necessary to obtain parameter values appropriate for the scale of the process being modeled. Although some hydrogeologic property data (prior information) are available, these data cannot be directly used to predict flow and transport processes because they were measured on scales smaller than those characterizing property distributions in models used for the prediction. Since model calibrations were done directly on the scales of interest, the upscaling issue was automatically considered. On the other hand, joint use of the data and the prior information in inversions can further increase the reliability of the developed parameters compared with the prior information alone. Rock parameter sets were developed for both the mountain and drift scales because of the scale-dependent behavior of fracture permeability. Note that these parameter sets, except those for faults, were determined using 1-D simulations. Therefore, they cannot be directly used for modeling lateral flow, such as that caused by perched water in the unsaturated zone (UZ) of Yucca Mountain. Further calibration may be needed for two- and three-dimensional modeling studies. As discussed above in Section 6.4, uncertainties for these calibrated properties are difficult to determine accurately, because of the inaccuracy of simplified methods for this complex problem or the extremely large computational expense of more rigorous methods. One estimate of uncertainty that may be useful to investigators using these properties is the uncertainty used for the prior information. In most cases, the inversions did not change the properties very much with respect to the prior information. The Output DTNs (including the input and output files for all runs) from this study are given in Section 9.4.
Calibrated multi-subband Monte Carlo modeling of tunnel-FETs in silicon and III-V channel materials
NASA Astrophysics Data System (ADS)
Revelant, A.; Palestri, P.; Osgnach, P.; Selmi, L.
2013-10-01
We present a semiclassical model for Tunnel-FET (TFET) devices capable of describing band-to-band tunneling (BtBT) as well as far-from-equilibrium transport of the generated carriers. BtBT generation is implemented as an add-on to an existing multi-subband Monte Carlo (MSMC) transport simulator that also accounts for effects typical of alternative channel materials and high-κ dielectrics. A simple but accurate correction to the calculation of the BtBT generation rate to account for carrier confinement in the subbands is proposed and verified by comparison with full 2D quantum calculations.
NASA Astrophysics Data System (ADS)
Grobler, T. L.; Stewart, A. J.; Wijnholds, S. J.; Kenyon, J. S.; Smirnov, O. M.
2016-09-01
This is the third installment in a series of papers in which we investigate calibration artefacts. Calibration artefacts (also known as ghosts or spurious sources) are created when we calibrate with an incomplete model. In the first two papers of this series, we developed a mathematical framework which enabled us to study the ghosting mechanism itself. An interesting concomitant of the second paper was that ghosts appear in symmetrical pairs. This could possibly account for spurious symmetrization. Spurious symmetrization refers to the appearance of a spurious source (the antighost) symmetrically opposite an unmodelled source around a modelled source. The analysis in the first two papers indicates that the antighost is usually very faint, in particular, when a large number of antennas are used. This suggests that spurious symmetrization will mainly occur at an almost undetectable flux level. In this paper, we show that phase-only calibration produces an antighost that is N-times (where N denotes the number of antennas in the array) as bright as the one produced by phase and amplitude calibration and that this already bright ghost can be further amplified by the primary beam correction.
Model Calibration in Watershed Hydrology
NASA Technical Reports Server (NTRS)
Yilmaz, Koray K.; Vrugt, Jasper A.; Gupta, Hoshin V.; Sorooshian, Soroosh
2009-01-01
Hydrologic models use relatively simple mathematical equations to conceptualize and aggregate the complex, spatially distributed, and highly interrelated water, energy, and vegetation processes in a watershed. A consequence of process aggregation is that the model parameters often do not represent directly measurable entities and must, therefore, be estimated using measurements of the system inputs and outputs. During this process, known as model calibration, the parameters are adjusted so that the behavior of the model approximates, as closely and consistently as possible, the observed response of the hydrologic system over some historical period of time. This Chapter reviews the current state-of-the-art of model calibration in watershed hydrology with special emphasis on our own contributions in the last few decades. We discuss the historical background that has led to current perspectives, and review different approaches for manual and automatic single- and multi-objective parameter estimation. In particular, we highlight the recent developments in the calibration of distributed hydrologic models using parameter dimensionality reduction sampling, parameter regularization and parallel computing.
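The calibration process described here, adjusting parameters until model behavior approximates the observed response over a historical period, can be sketched in toy form. The one-parameter linear-reservoir model, the rainfall record, and the grid-search range below are illustrative assumptions for the sketch, not taken from the chapter:

```python
import math

def linear_reservoir(k, rain, s0=10.0):
    """Toy watershed model: storage S is filled by rain and
    drained each step as discharge Q = k * S."""
    s, flows = s0, []
    for p in rain:
        s += p
        q = k * s
        s -= q
        flows.append(q)
    return flows

def rmse(sim, obs):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(sim, obs)) / len(obs))

# Synthetic "observed" record generated with a known k, so the
# calibration target is recoverable.
rain = [5.0, 0.0, 12.0, 3.0, 0.0, 0.0, 8.0, 1.0]
obs = linear_reservoir(0.35, rain)

# Manual-style calibration: scan the feasible parameter range and keep
# the value that best reproduces the observed hydrograph.
best_k = min((k / 100 for k in range(1, 100)),
             key=lambda k: rmse(linear_reservoir(k, rain), obs))
print(best_k)  # -> 0.35
```

Real calibrations replace the brute-force scan with single- or multi-objective search, but the objective-function structure is the same.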
Calibration of models using groundwater age
Sanford, W.
2011-01-01
There have been substantial efforts recently by geochemists to determine the age of groundwater (time since water entered the system) and its uncertainty, and by hydrologists to use these data to help calibrate groundwater models. This essay discusses the calibration of models using groundwater age, with conclusions that emphasize what is practical given current limitations rather than theoretical possibilities.
Assessing calibration of multinomial risk prediction models.
Van Hoorde, Kirsten; Vergouwe, Yvonne; Timmerman, Dirk; Van Huffel, Sabine; Steyerberg, Ewout W; Van Calster, Ben
2014-07-10
Calibration, that is, whether observed outcomes agree with predicted risks, is important when evaluating risk prediction models. For dichotomous outcomes, several tools exist to assess different aspects of model calibration, such as calibration-in-the-large, logistic recalibration, and (non-)parametric calibration plots. We aim to extend these tools to prediction models for polytomous outcomes. We focus on models developed using multinomial logistic regression (MLR): outcome Y with k categories is predicted using k - 1 equations comparing each category i (i = 2, … ,k) with reference category 1 using a set of predictors, resulting in k - 1 linear predictors. We propose a multinomial logistic recalibration framework that involves an MLR fit where Y is predicted using the k - 1 linear predictors from the prediction model. A non-parametric alternative may use vector splines for the effects of the linear predictors. The parametric and non-parametric frameworks can be used to generate multinomial calibration plots. Further, the parametric framework can be used for the estimation and statistical testing of calibration intercepts and slopes. Two illustrative case studies are presented, one on the diagnosis of malignancy of ovarian tumors and one on residual mass diagnosis in testicular cancer patients treated with cisplatin-based chemotherapy. The risk prediction models were developed on data from 2037 and 544 patients and externally validated on 1107 and 550 patients, respectively. We conclude that calibration tools can be extended to polytomous outcomes. The polytomous calibration plots are particularly informative through the visual summary of the calibration performance.
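The logistic recalibration idea, refitting an intercept and slope on the linear predictor of a previously developed model, can be illustrated for the dichotomous case; the paper's multinomial framework generalises this to k - 1 linear predictors. The synthetic validation data, sample size, and plain gradient-ascent fit below are assumptions for the sketch:

```python
import math, random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Synthetic validation set: linear predictors lp from a prediction model,
# with outcomes drawn from a perfectly calibrated relationship, so the
# true calibration intercept is 0 and the true slope is 1.
random.seed(0)
n = 1000
lp = [random.uniform(-2.0, 2.0) for _ in range(n)]
y = [1 if random.random() < sigmoid(v) else 0 for v in lp]

# Logistic recalibration: fit logit P(y=1) = a + b*lp by maximising the
# Bernoulli log-likelihood with plain gradient ascent (the problem is concave).
a = b = 0.0
for _ in range(2000):
    resid = [yi - sigmoid(a + b * v) for yi, v in zip(y, lp)]
    a += sum(resid) / n
    b += sum(r * v for r, v in zip(resid, lp)) / n

# An intercept near 0 and a slope near 1 indicate good calibration.
```

Statistical testing of the fitted intercept and slope against (0, 1) then proceeds as in the parametric framework the authors describe.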
Preserving Flow Variability in Watershed Model Calibrations
Background/Question/Methods Although watershed modeling flow calibration techniques often emphasize a specific flow mode, ecological conditions that depend on flow-ecology relationships often emphasize a range of flow conditions. We used informal likelihood methods to investig...
A Novel Protocol for Model Calibration in Biological Wastewater Treatment
Zhu, Ao; Guo, Jianhua; Ni, Bing-Jie; Wang, Shuying; Yang, Qing; Peng, Yongzhen
2015-01-01
Activated sludge models (ASMs) have been widely used for process design, operation and optimization in wastewater treatment plants. However, it is still a challenge to achieve an efficient calibration for reliable application using conventional approaches. Here, we propose a novel calibration protocol, the Numerical Optimal Approaching Procedure (NOAP), for the systematic calibration of ASMs. The NOAP consists of three key steps in an iterative scheme: i) global sensitivity analysis of the factors for factor fixing; ii) pseudo-global parameter correlation analysis for detection of non-identifiable factors; and iii) determination of an optimal parameter subset through estimation using a genetic algorithm. The validity and applicability are confirmed using experimental data obtained from two independent wastewater treatment systems, including a sequencing batch reactor and a continuous stirred-tank reactor. The results indicate that the NOAP can effectively determine the optimal parameter subset and successfully perform model calibration and validation for these two different systems. The proposed NOAP is expected to be used for automatic calibration of ASMs and to be potentially applicable to other ordinary-differential-equation models. PMID:25682959
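The estimation step of such a protocol, fitting a parameter subset with a genetic algorithm, can be sketched on a toy Monod-kinetics rate law. The kinetic form, parameter bounds, population size, and operator choices below are illustrative assumptions, not the NOAP configuration:

```python
import random

def rate(mu_max, ks, s):
    """Toy Monod substrate-uptake rate, a common ASM building block."""
    return mu_max * s / (ks + s)

# Synthetic observations generated with known parameters (0.5, 2.0).
substrates = [0.5, 1.0, 2.0, 4.0, 8.0, 16.0]
obs = [rate(0.5, 2.0, s) for s in substrates]

def sse(p):
    return sum((rate(p[0], p[1], s) - o) ** 2 for s, o in zip(substrates, obs))

# Minimal elitist genetic algorithm: selection, blend crossover, mutation.
random.seed(1)
pop = [(random.uniform(0.1, 1.0), random.uniform(0.1, 5.0)) for _ in range(40)]
for _ in range(80):
    pop.sort(key=sse)
    elite = pop[:10]                      # keep the best candidates
    children = []
    while len(children) < 30:
        pa, pb = random.sample(elite, 2)  # crossover: blend two elites
        w = random.random()
        child = tuple(w * x + (1 - w) * y for x, y in zip(pa, pb))
        # mutation: small Gaussian perturbation, clipped to stay positive
        child = (max(0.01, child[0] + random.gauss(0, 0.02)),
                 max(0.01, child[1] + random.gauss(0, 0.1)))
        children.append(child)
    pop = elite + children
best = min(pop, key=sse)   # should recover roughly (0.5, 2.0)
```

The sensitivity-analysis and correlation-analysis steps that precede estimation in the protocol decide which parameters enter this search at all.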
Adaptable Multivariate Calibration Models for Spectral Applications
THOMAS,EDWARD V.
1999-12-20
Multivariate calibration techniques have been used in a wide variety of spectroscopic situations. In many of these situations spectral variation can be partitioned into meaningful classes. For example, suppose that multiple spectra are obtained from each of a number of different objects wherein the level of the analyte of interest varies within each object over time. In such situations the total spectral variation observed across all measurements has two distinct general sources of variation: intra-object and inter-object. One might want to develop a global multivariate calibration model that predicts the analyte of interest accurately both within and across objects, including new objects not involved in developing the calibration model. However, this goal might be hard to realize if the inter-object spectral variation is complex and difficult to model. If the intra-object spectral variation is consistent across objects, an effective alternative approach might be to develop a generic intra-object model that can be adapted to each object separately. This paper contains recommendations for experimental protocols and data analysis in such situations. The approach is illustrated with an example involving the noninvasive measurement of glucose using near-infrared reflectance spectroscopy. Extensions to calibration maintenance and calibration transfer are discussed.
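The generic intra-object model idea, estimating a calibration relationship from within-object variation only and then adapting it to each new object, can be sketched with scalar data. The toy feature/analyte pairs and the single-point offset adaptation below are assumptions for illustration:

```python
# Per object: a spectra-derived scalar feature x and the analyte level y.
data = {
    "obj_A": [(1.0, 10.2), (2.0, 12.1), (3.0, 14.0)],
    "obj_B": [(1.0, 20.1), (2.0, 22.0), (3.0, 23.9)],
}

# Generic intra-object model: one slope shared by all objects, estimated
# from within-object (mean-centred) variation only, so inter-object
# offsets cannot contaminate it.
num = den = 0.0
for pts in data.values():
    mx = sum(x for x, _ in pts) / len(pts)
    my = sum(y for _, y in pts) / len(pts)
    num += sum((x - mx) * (y - my) for x, y in pts)
    den += sum((x - mx) ** 2 for x, _ in pts)
slope = num / den

# Adapting the generic model to a new object needs only one measurement
# to fix that object's intercept.
x0, y0 = 1.0, 30.3            # single calibration point on a new object
intercept_new = y0 - slope * x0
predict = lambda x: intercept_new + slope * x
```

This mirrors the paper's recommendation: model the consistent intra-object variation once, then adapt cheaply per object instead of modeling complex inter-object variation globally.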
Objective calibration of regional climate models
NASA Astrophysics Data System (ADS)
Bellprat, O.; Kotlarski, S.; Lüthi, D.; SchäR, C.
2012-12-01
Climate models are subject to high parametric uncertainty induced by poorly confined parameters of parameterized physical processes. Uncertain model parameters are typically calibrated in order to increase the agreement of the model with available observations. The common practice is to adjust uncertain model parameters manually, often referred to as expert tuning, which lacks objectivity and transparency in the use of observations. These shortcomings often haze model inter-comparisons and hinder the implementation of new model parameterizations. Methods that would allow systematic calibration of model parameters are unfortunately often not applicable to state-of-the-art climate models, due to computational constraints arising from the high dimensionality and non-linearity of the problem. Here we present an approach to objectively calibrate a regional climate model, using reanalysis-driven simulations and building upon a quadratic metamodel presented by Neelin et al. (2010) that serves as a computationally cheap surrogate of the model. Five model parameters originating from different parameterizations are selected for the optimization according to their influence on model performance. The metamodel accurately estimates spatial averages of 2 m temperature, precipitation and total cloud cover, with an uncertainty of similar magnitude to the internal variability of the regional climate model. The non-linearities of the parameter perturbations are well captured, such that only a limited number of 20-50 simulations is needed to estimate optimal parameter settings. Parameter interactions are small, which allows a further reduction in the number of simulations. In comparison to an ensemble of the same model which has undergone expert tuning, the calibration yields similar optimal model configurations, but with an additional reduction of the model error. The performance range captured is much wider than that sampled with the expert-tuned ensemble and the presented
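The metamodel strategy, fitting a cheap quadratic surrogate to a handful of expensive simulations and optimising the surrogate analytically instead of running the full model further, can be sketched for a single parameter. The error function, sample points, and exactly quadratic form below are illustrative assumptions:

```python
def expensive_model_error(p):
    """Stand-in for a costly climate-model run: an aggregate
    model-observation error as a function of one uncertain parameter p
    (assumed smooth and quadratic near the optimum)."""
    return (p - 0.7) ** 2 + 0.1

def solve3(m, v):
    """Gaussian elimination with partial pivoting for a 3x3 system."""
    a = [row[:] + [b] for row, b in zip(m, v)]
    for i in range(3):
        piv = max(range(i, 3), key=lambda r: abs(a[r][i]))
        a[i], a[piv] = a[piv], a[i]
        for r in range(i + 1, 3):
            f = a[r][i] / a[i][i]
            for c in range(i, 4):
                a[r][c] -= f * a[i][c]
    x = [0.0] * 3
    for i in range(2, -1, -1):
        x[i] = (a[i][3] - sum(a[i][c] * x[c] for c in range(i + 1, 3))) / a[i][i]
    return x

# Metamodel: run the expensive model at only three parameter settings and
# fit E(p) ~ c0 + c1*p + c2*p^2 through them.
samples = [0.2, 0.5, 0.9]
errors = [expensive_model_error(p) for p in samples]
design = [[1.0, p, p * p] for p in samples]
c0, c1, c2 = solve3(design, errors)

# The surrogate is minimised analytically, with no further model runs.
p_opt = -c1 / (2.0 * c2)
print(round(p_opt, 3))  # -> 0.7
```

With several interacting parameters the quadratic gains cross terms, which is why the number of required simulations in the study grows to 20-50 rather than 3.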
Modelling PTB's spatial angle autocollimator calibrator
NASA Astrophysics Data System (ADS)
Kranz, Oliver; Geckeler, Ralf D.; Just, Andreas; Krause, Michael
2013-05-01
The accurate and traceable form measurement of optical surfaces has been greatly advanced by a new generation of surface profilometers which are based on the reflection of light at the surface and the measurement of the reflection angle. For this application, high-resolution electronic autocollimators provide accurate and traceable angle metrology. In recent years, great progress has been made at the Physikalisch-Technische Bundesanstalt (PTB) in autocollimator calibration. For advanced autocollimator characterisation, a novel calibration device has been built at PTB: the Spatial Angle Autocollimator Calibrator (SAAC). The system makes use of an innovative Cartesian arrangement of three autocollimators (two reference autocollimators and the autocollimator to be calibrated), which allows a precise measurement of the angular orientation of a reflector cube. Each reference autocollimator is sensitive primarily to changes in one of the two relevant tilt angles, whereas the autocollimator to be calibrated is sensitive to both. The distance between the reflector cube and the autocollimator to be calibrated can be varied flexibly. In this contribution, we present the SAAC and aspects of the mathematical modelling of the system for deriving analytical expressions for the autocollimators' angle responses. These efforts will allow form measurement with autocollimator-based profilometers to be advanced substantially, approaching fundamental measurement limits. Additionally, they will help manufacturers of autocollimators to improve their instruments and will provide improved angle measurement methods for precision engineering.
Calibration and validation of rockfall models
NASA Astrophysics Data System (ADS)
Frattini, Paolo; Valagussa, Andrea; Zenoni, Stefania; Crosta, Giovanni B.
2013-04-01
Calibrating and validating landslide models is extremely difficult due to the particular characteristics of landslides: limited recurrence in time, relatively low frequency of events, short durability of post-event traces, and poor availability of continuous monitoring data, especially for small landslides and rockfalls. For this reason, most of the rockfall models presented in the literature completely lack calibration and validation of the results. In this contribution, we explore different strategies for rockfall model calibration and validation starting from both a historical event and a full-scale field test. The event occurred in 2012 in Courmayeur (Western Alps, Italy) and caused serious damage to quarrying facilities. This event was studied soon after its occurrence through a field campaign aimed at mapping the blocks arrested along the slope, the shape and location of the detachment area, and the traces of scars associated with impacts of blocks on the slope. The full-scale field test was performed by Geovert Ltd in the Christchurch area (New Zealand) after the 2011 earthquake. During the test, a number of large blocks were mobilized from the upper part of the slope and filmed with high-velocity cameras from different viewpoints. The movies of each released block were analysed to identify the block shape, the propagation path, the location of impacts, the height of the trajectory, and the velocity of the block along the path. Both calibration and validation of rockfall models should be based on the optimization of the agreement between the actual trajectories or locations of arrested blocks and the simulated ones. A measure that describes this agreement is therefore needed. For calibration purposes, this measure should be simple enough to allow trial-and-error repetitions of the model for parameter optimization. In this contribution we explore different calibration/validation measures: (1) the percentage of simulated blocks arresting within a buffer of the
More robust model built using SEM calibration
NASA Astrophysics Data System (ADS)
Wang, Ching-Heng; Liu, Qingwei; Zhang, Liguo
2007-10-01
A more robust Optical Proximity Correction (OPC) model is increasingly required as integrated-circuit critical dimensions (CDs) shrink. Generally, a large amount of wafer data for line-end features needs to be collected for modeling. Scanning Electron Microscope (SEM) images are a source of vast 2D information, so adding SEM image calibration to the current model flow is preferable. This paper presents a method using Mentor Graphics' Calibre SEMcal and ContourCal to integrate SEM calibration into the model flow. First, a simulated contour is generated and aligned with the SEM image automatically. Second, the contour is edited, for example by fixing gaps; CD measurement spots are also applied to obtain a more accurate contour. Lastly, the final contour is extracted and input to the model flow. EPE is calculated from the SEM image contour. Thus a more stable and robust OPC model is generated. SEM calibration can accommodate structures such as asymmetrical CDs, line-end pullback, and corner rounding, and saves considerable time on measuring line-end wafer CDs.
Modeling and Calibration of Automatic Guided Vehicle
NASA Astrophysics Data System (ADS)
Sawada, Kenji; Tanaka, Kosuke; Shin, Seiichi; Kumagai, Kenji; Yoneda, Hisato
This paper presents a modeling of an automatic guided vehicle (AGV) to achieve model-based control. The modeling includes three kinds of choices: a choice of input-output data pair from 14 candidate pairs, a choice of system identification technique from 5 candidate techniques, and a choice of discrete-to-continuous transform method from 2 candidate methods. In order to obtain reliable plant models of the AGV, an approach for calibration between a statistical model and a physical model is also presented here. In our approach, the models are combined according to the weight of the AGV. As a result, our calibration problem is recast as a nonlinear optimization problem that can be solved by the quasi-Newton method.
User-calibration of Fowler Ultra-Cal Mark III Digital caliper
Estill, J.
1996-09-19
The purpose of this technical implementing procedure (TIP) is to describe the procedure that will be employed for user-calibration of a digital caliper used in the determination of specimen dimensions. A caliper is used for some of the activities of the Scientific Investigation Plan (SIP) Metal Barrier Selection and Testing (SIP-CM-01, WBS # 1.2.2.5.1). In particular, it will be used for Activity E-20-50, Long-Term Corrosion Studies. This procedure describes the methodology for user calibration of a Fowler Ultra-Cal Mark III digital caliper. National Institute of Standards and Technology (NIST) traceable gauge blocks are employed in the calibration procedure.
Statistical regional calibration of subsidence prediction models
Cleaver, D.N.; Reddish, D.J.; Dunham, R.K.; Shadbolt, C.H.
1995-11-01
Like other influence function methods, the SWIFT subsidence prediction program, developed within the Mineral Resources Engineering Department at the University of Nottingham, requires calibration to regional data in order to produce accurate predictions of ground movements. Previously, this software had been solely calibrated to give results consistent with the Subsidence Engineer's Handbook (NCB, 1975). This approach was satisfactory for the majority of cases based in the United Kingdom, upon which the calibration was based. However, in certain circumstances within the UK and, almost always, in overseas case studies, the predictions did not correspond to observed patterns of ground movement. Therefore, in order that SWIFT, and other subsidence prediction packages, can be considered more universal, an improved and adaptable method of regional calibration must be incorporated. This paper describes the analysis of a large database of case histories from the UK industry and international publications. Observed maximum subsidence, mining geometry and Geological Index for several hundred cases have been statistically analyzed in terms of developing prediction models. The models developed not only predict maximum subsidence more accurately than previously used systems but are also capable of indicating the likely range of prediction error to a certain degree of probability. Finally, the paper illustrates how this statistical approach can be incorporated as a calibration system for the influence function program SWIFT.
New Method of Calibrating IRT Models.
ERIC Educational Resources Information Center
Jiang, Hai; Tang, K. Linda
This discussion of new methods for calibrating item response theory (IRT) models looks into new optimization procedures, such as the Genetic Algorithm (GA), to improve on the use of the Newton-Raphson procedure. The advantage of using a global optimization procedure like the GA is that this kind of procedure is not easily affected by local optima and…
Hydrological model calibration for enhancing global flood forecast skill
NASA Astrophysics Data System (ADS)
Hirpa, Feyera A.; Beck, Hylke E.; Salamon, Peter; Thielen-del Pozo, Jutta
2016-04-01
Early warning systems play a key role in flood risk reduction, and their effectiveness is directly linked to streamflow forecast skill. The skill of a streamflow forecast is affected by several factors, among them (i) model errors due to incomplete representation of physical processes and inaccurate parameterization, (ii) uncertainty in the model initial conditions, and (iii) errors in the meteorological forcing. In macro-scale (continental or global) modeling, it is common practice to use a priori parameter estimates over large river basins or wider regions, resulting in suboptimal streamflow estimates. The aim of this work is to improve the flood forecast skill of the Global Flood Awareness System (GloFAS; www.globalfloods.eu), a grid-based forecasting system that produces flood forecasts up to 30 days of lead time, through calibration of the distributed hydrological model parameters. We use a combination of in-situ and satellite-based streamflow data for automatic calibration using a multi-objective genetic algorithm. We will present the calibrated global parameter maps and report the forecast skill improvements achieved. Furthermore, we discuss current challenges and future opportunities with regard to global-scale early flood warning systems.
High Accuracy Transistor Compact Model Calibrations
Hembree, Charles E.; Mar, Alan; Robertson, Perry J.
2015-09-01
Typically, transistors are modeled by the application of calibrated nominal and range models. These models consist of differing parameter values that describe the location and the upper and lower limits of a distribution of some transistor characteristic such as current capacity. Correspondingly, when using this approach, high degrees of accuracy of the transistor models are not expected, since the set of models is a surrogate for a statistical description of the devices. The use of these types of models describes expected performance considering the extremes of process or transistor deviations. In contrast, circuits that have very stringent accuracy requirements require modeling techniques with higher accuracy. Since these accurate models have low error in their transistor descriptions, they can be used to describe part-to-part variations as well as to give an accurate description of a single circuit instance. Thus, models that meet these stipulations also enable the quantification of margins with respect to a functional threshold, and of the uncertainties in these margins. Given this need, new high-accuracy model calibration techniques for bipolar junction transistors have been developed and are described in this report.
Gradient-based model calibration with proxy-model assistance
NASA Astrophysics Data System (ADS)
Burrows, Wesley; Doherty, John
2016-02-01
Use of a proxy model in gradient-based calibration and uncertainty analysis of a complex groundwater model with long run times and problematic numerical behaviour is described. The methodology is general, and can be used with models of all types. The proxy model is based on a series of analytical functions that link all model outputs used in the calibration process to all parameters requiring estimation. In enforcing history-matching constraints during the calibration and post-calibration uncertainty analysis processes, the proxy model is run for the purposes of populating the Jacobian matrix, while the original model is run when testing parameter upgrades; the latter process is readily parallelized. Use of a proxy model in this fashion dramatically reduces the computational burden of complex model calibration and uncertainty analysis. At the same time, the effect of model numerical misbehaviour on the calculation of local gradients is mitigated, thus allowing access to the benefits of gradient-based analysis where lack of integrity in finite-difference derivative calculation would otherwise have impeded such access. Construction of a proxy model, and its subsequent use in calibration of a complex model, and in analysing the uncertainties of predictions made by that model, is implemented in the PEST suite.
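The division of labour described here, a cheap proxy populating the Jacobian while the original model tests parameter upgrades, can be sketched for a one-parameter decay model. The exponential "original" model, the Euler-type proxy, and the scalar Gauss-Newton update below are illustrative assumptions, not the PEST implementation:

```python
import math

T = [0.5, 1.0, 2.0, 3.0, 5.0]

def original_model(k):
    """Stand-in for the expensive, slow simulator."""
    return [math.exp(-k * t) for t in T]

def proxy_model(k, dt=0.5):
    """Cheap analytical surrogate: coarse explicit-Euler decay."""
    return [(1.0 - k * dt) ** round(t / dt) for t in T]

def sse(model_vals, obs):
    return sum((o - m) ** 2 for o, m in zip(obs, model_vals))

obs = original_model(0.3)        # synthetic observations, true k = 0.3

k, h = 0.1, 1e-6
for _ in range(25):
    resid = [o - m for o, m in zip(obs, original_model(k))]   # expensive run
    # Jacobian column from the *proxy* via finite differences (cheap runs)
    jac = [(a - b) / h for a, b in zip(proxy_model(k + h), proxy_model(k))]
    step = sum(j * r for j, r in zip(jac, resid)) / sum(j * j for j in jac)
    trial = k + step
    # the upgrade is accepted only after testing it on the original model
    if sse(original_model(trial), obs) <= sse(original_model(k), obs):
        k = trial

print(round(k, 4))  # -> 0.3
```

Because the proxy derivative is only approximately right, each step is inexact, yet the iteration still converges to the point where the original model's residuals vanish, which is the essential robustness argument of the approach.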
CALIBRATIONS OF ATMOSPHERIC PARAMETERS OBTAINED FROM THE FIRST YEAR OF SDSS-III APOGEE OBSERVATIONS
Mészáros, Sz.; Allende Prieto, C.; Holtzman, J.; García Pérez, A. E.; Chojnowski, S. D.; Hearty, F. R.; Majewski, S. R.; Schiavon, R. P.; Basu, S.; Bizyaev, D.; Chaplin, W. J.; Elsworth, Y.; Cunha, K.; Epstein, C.; Johnson, J. A.; Frinchaboy, P. M.; García, R. A.; Kallinger, T.; Koesterke, L.; and others
2013-11-01
The Sloan Digital Sky Survey III (SDSS-III) Apache Point Observatory Galactic Evolution Experiment (APOGEE) is a three-year survey that is collecting 10^5 high-resolution spectra in the near-IR across multiple Galactic populations. To derive stellar parameters and chemical compositions from this massive data set, the APOGEE Stellar Parameters and Chemical Abundances Pipeline (ASPCAP) has been developed. Here, we describe empirical calibrations of stellar parameters presented in the first SDSS-III APOGEE data release (DR10). These calibrations were enabled by observations of 559 stars in 20 globular and open clusters. The cluster observations were supplemented by observations of stars in NASA's Kepler field that have well determined surface gravities from asteroseismic analysis. We discuss the accuracy and precision of the derived stellar parameters, considering especially effective temperature, surface gravity, and metallicity; we also briefly discuss the derived results for the abundances of the α-elements, carbon, and nitrogen. Overall, we find that ASPCAP achieves reasonably accurate results for temperature and metallicity, but suffers from systematic errors in surface gravity. We derive calibration relations that bring the raw ASPCAP results into better agreement with independently determined stellar parameters. The internal scatter of ASPCAP parameters within clusters suggests that metallicities are measured with a precision better than 0.1 dex, effective temperatures better than 150 K, and surface gravities better than 0.2 dex. The understanding provided by the clusters and Kepler giants on the current accuracy and precision will be invaluable for future improvements of the pipeline.
NASA Astrophysics Data System (ADS)
Cornic, Philippe; Illoul, Cédric; Cheminet, Adam; Le Besnerais, Guy; Champagnat, Frédéric; Le Sant, Yves; Leclaire, Benjamin
2016-09-01
We address calibration and self-calibration of tomographic PIV experiments within a pinhole model of cameras. A complete and explicit pinhole model of a camera equipped with a two-tilt-angle Scheimpflug adapter is presented. It is then used in a calibration procedure based on a freely moving calibration plate. While the resulting calibrations are accurate enough for Tomo-PIV, we confirm, through a simple experiment, that they are not stable in time, and illustrate how the pinhole framework can be used to provide a quantitative evaluation of geometrical drifts in the setup. We propose an original self-calibration method based on global optimization of the extrinsic parameters of the pinhole model. These methods are successfully applied to the tomographic PIV of an air jet experiment. An unexpected by-product of our work is to show that volume self-calibration induces a change in the world frame coordinates. Provided the calibration drift is small, as generally observed in PIV, the bias on the estimated velocity field is negligible, but the absolute location cannot be accurately recovered using standard calibration data.
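A minimal numeric sketch of the pinhole projection at the core of such calibrations (the paper's Scheimpflug and distortion terms are omitted; the intrinsics and pose below are made-up illustrative values, not from the experiment):

```python
import numpy as np

def project(K, R, t, X_world):
    """Project a 3-D world point to pixel coordinates with a pinhole model.
    K: 3x3 intrinsic matrix; R, t: extrinsic rotation and translation."""
    X_cam = R @ X_world + t   # world frame -> camera frame
    x = K @ X_cam             # camera frame -> homogeneous image coordinates
    return x[:2] / x[2]       # perspective division

K = np.array([[1000.0, 0.0, 320.0],
              [0.0, 1000.0, 240.0],
              [0.0, 0.0, 1.0]])       # focal length 1000 px, principal point (320, 240)
R = np.eye(3)
t = np.array([0.0, 0.0, 5.0])        # scene 5 units in front of the camera
uv = project(K, R, t, np.array([0.1, -0.2, 0.0]))  # maps to pixel (340, 200)
```

Self-calibration in this framework amounts to re-optimizing R and t (the extrinsics) so that reprojected particle positions match the recorded images, which is why a drift in the setup shows up as a change of the recovered world frame.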
Seepage Calibration Model and Seepage Testing Data
P. Dixon
2004-02-17
The purpose of this Model Report is to document the Seepage Calibration Model (SCM). The SCM is developed (1) to establish the conceptual basis for the Seepage Model for Performance Assessment (SMPA), and (2) to derive seepage-relevant, model-related parameters and their distributions for use in the SMPA and seepage abstraction in support of the Total System Performance Assessment for License Application (TSPA-LA). The SCM is intended to be used only within this Model Report for the estimation of seepage-relevant parameters through calibration of the model against seepage-rate data from liquid-release tests performed in several niches along the Exploratory Studies Facility (ESF) Main Drift and in the Cross Drift. The SCM does not predict seepage into waste emplacement drifts under thermal or ambient conditions. Seepage predictions for waste emplacement drifts under ambient conditions will be performed with the SMPA (see upcoming REV 02 of CRWMS M&O 2000 [153314]), which inherits the conceptual basis and model-related parameters from the SCM. Seepage during the thermal period is examined separately in the Thermal Hydrologic (TH) Seepage Model (see BSC 2003 [161530]). The scope of this work is (1) to evaluate seepage rates measured during liquid-release experiments performed in several niches in the Exploratory Studies Facility (ESF) and in the Cross Drift, which was excavated for enhanced characterization of the repository block (ECRB); (2) to evaluate air-permeability data measured in boreholes above the niches and the Cross Drift to obtain the permeability structure for the seepage model; (3) to use inverse modeling to calibrate the SCM and to estimate seepage-relevant, model-related parameters on the drift scale; (4) to estimate the epistemic uncertainty of the derived parameters, based on the goodness-of-fit to the observed data and the sensitivity of calculated seepage with respect to the parameters of interest; (5) to characterize the aleatory uncertainty of
Christchurch field data for rockfall model calibration
NASA Astrophysics Data System (ADS)
Vick, L.; Glover, J.; Davies, T. R.
2013-12-01
The Canterbury earthquake sequence of 2010-2011 triggered devastating rockfalls in the Port Hills in Christchurch: over 8000 boulders fell, resulting in fatalities and severe building damage. There is a requirement for detailed and defensible rockfall hazard analysis to guide planning decisions in response to these rockfall events; most commonly this is performed with a rockfall model. Calibrating a rockfall model requires a robust data set of past rockfall events. Information on rockfall deposit shape and size should be mapped over the affected area, in addition to information on the dynamics of the rockfall events such as jump heights and velocities of rocks. It is often the case that such information is obtained from expensive rock rolling studies; however, the dynamics of an event can be estimated from the runout terrain and impact scars. In this study a calibration of a 3D rigid-body rockfall model was performed based on mapped boulder sizes and shapes over the rockfall-affected zones of Christchurch, and estimations of boulder velocities gleaned from rock impact scars of individual trajectories and a high-resolution digital terrain model produced following the rockfall events. The impact scars were mapped, recording their length and depth of penetration into the loess soil cover of the runout zones. Two methods to estimate the boulder velocities have been applied. The first crudely estimates the velocity based on the vertical free-fall potential between the rockfall shadow line and the terrain surface, and a velocity correction factor to account for friction. The second uses the impact scars, assuming a parabolic trajectory between rock-ground impacts, giving an indication of both jump height and velocity. Maximum runout distances produced a shadow angle of 23° in the area. Applying the first method suggests velocities can reach up to ~26 m s-1 and maxima concentrate in gullies and steep terrain. On average the distance between impact scars was 23 m, from which jump
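The two velocity-estimation methods described in the abstract can be sketched as follows. The friction-correction factor (0.85) and the example drop and jump heights are illustrative assumptions, not values reported by the study:

```python
import math

G = 9.81  # gravitational acceleration, m s^-2

def freefall_velocity(drop_height, correction=0.85):
    """Method 1: bound the velocity by the vertical free-fall potential
    between the shadow line and the terrain surface, scaled by a
    friction-correction factor (0.85 is an illustrative assumption)."""
    return correction * math.sqrt(2.0 * G * drop_height)

def scar_velocity(scar_spacing, jump_height):
    """Method 2: assume a parabolic trajectory between two ground impacts;
    the jump height fixes the flight time, the scar spacing the speed."""
    t_flight = 2.0 * math.sqrt(2.0 * jump_height / G)
    return scar_spacing / t_flight   # horizontal velocity component

v1 = freefall_velocity(47.0)   # ~26 m/s for an assumed ~47 m fall height
v2 = scar_velocity(23.0, 2.0)  # mean scar spacing 23 m, assumed 2 m jump
```

The second method is the one that also yields jump heights, since the parabola between consecutive scars is fully determined once spacing and apex height are known.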
Thematic Mapper. Volume 1: Calibration report flight model, LANDSAT 5
NASA Technical Reports Server (NTRS)
Cooley, R. C.; Lansing, J. C.
1984-01-01
The calibration of the Flight 1 Model Thematic Mapper is discussed. Spectral response, scan profile, coherent noise, line spread profiles and white light leaks, square wave response, radiometric calibration, and commands and telemetry are specifically addressed.
NASA Astrophysics Data System (ADS)
Al-Abed, N. A.; Whiteley, H. R.
2002-11-01
Calibrating a comprehensive, multi-parameter conceptual hydrological model, such as the Hydrological Simulation Program Fortran (HSPF) model, is a major challenge. This paper describes calibration procedures for the water-quantity parameters of HSPF version 10.11 using the automatic-calibration parameter estimator model coupled with a geographical information system (GIS) approach for spatially averaged properties. The study area was the Grand River watershed, located in southern Ontario, Canada, between 79°30′ and 80°57′W longitude and 42°51′ and 44°31′N latitude. The drainage area is 6965 km2. Calibration efforts were directed to those model parameters that produced large changes in model response during sensitivity tests run prior to undertaking calibration. A GIS was used extensively in this study. It was first used in the watershed segmentation process. During calibration, the GIS data were used to establish realistic starting values for the surface and subsurface zone parameters LZSN, UZSN, COVER, and INFILT, and physically reasonable ratios of these parameters among watersheds were preserved during calibration, with the ratios based on the known properties of the subwatersheds determined using GIS. This calibration procedure produced very satisfactory results; the percentage difference between the simulated and the measured yearly discharge ranged from 4 to 16%, which is classified as good to very good calibration. The average simulated daily discharge for the watershed outlet at Brantford for the years 1981-85 was 67 m3 s-1 and the average measured discharge at Brantford was 70 m3 s-1. The coupling of a GIS with automatic calibration produced a realistic and accurate calibration for the HSPF model with much less effort and subjectivity than would be required for unassisted calibration.
Calibration of hydrological model with programme PEST
NASA Astrophysics Data System (ADS)
Brilly, Mitja; Vidmar, Andrej; Kryžanowski, Andrej; Bezak, Nejc; Šraj, Mojca
2016-04-01
PEST is a tool based on minimization of an objective function related to the root mean square error between the model output and the measurements. We used the "singular value decomposition" (SVD) section of the PEST control file and the Tikhonov regularization method to successfully estimate model parameters. PEST can fail when inverse problems are ill-posed, but SVD ensures that PEST maintains numerical stability. The choice of the initial guess for the parameter values is an important issue in PEST and requires expert knowledge. The flexible nature of the PEST software and its ability to be applied to whole catchments at once produced a calibration that performed extremely well across a high number of sub-catchments. The parallel computing version of PEST, called BeoPEST, was successfully used to speed up the calibration process. BeoPEST employs smart slaves and point-to-point communications to transfer data between the master and slave computers. The HBV-light model is a simple multi-tank-type model for simulating precipitation-runoff. It is a conceptual balance model of catchment hydrology which simulates discharge using rainfall, temperature and estimates of potential evaporation. The HBV-light-CLI version allows the user to run HBV-light from the command line. Input and results files are in XML form, which allows it to be easily connected with other applications such as pre- and post-processing utilities and PEST itself. The procedure was applied to a hydrological model of the Savinja catchment (1852 km2) consisting of twenty-one sub-catchments. Data are processed temporally on an hourly basis.
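The SVD-plus-Tikhonov idea that keeps PEST stable on ill-posed problems can be sketched as a single regularized linear solve. This is a toy stand-in for PEST's actual (iterative, model-independent) implementation:

```python
import numpy as np

def tikhonov_svd(J, r, lam):
    """Solve min ||J p - r||^2 + lam^2 ||p||^2 via the SVD of the
    Jacobian J. The filter factors s/(s^2 + lam^2) damp the small
    singular values that make ill-posed inversions blow up."""
    U, s, Vt = np.linalg.svd(J, full_matrices=False)
    f = s / (s ** 2 + lam ** 2)   # regularized inverse singular values
    return Vt.T @ (f * (U.T @ r))

# Near-singular Jacobian: an unregularized solve would amplify the second
# component by a factor of 1e8; the regularized one stays bounded.
J = np.array([[1.0, 0.0], [0.0, 1e-8]])
p = tikhonov_svd(J, np.array([1.0, 1.0]), lam=1e-3)
```

With lam = 0 the function reduces to ordinary least squares; increasing lam trades data misfit for parameter stability, which is precisely the role of Tikhonov regularization in the calibration described above.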
The Adaptive Calibration Model of stress responsivity
Ellis, Bruce J.; Shirtcliff, Elizabeth A.
2010-01-01
This paper presents the Adaptive Calibration Model (ACM), an evolutionary-developmental theory of individual differences in the functioning of the stress response system. The stress response system has three main biological functions: (1) to coordinate the organism’s allostatic response to physical and psychosocial challenges; (2) to encode and filter information about the organism’s social and physical environment, mediating the organism’s openness to environmental inputs; and (3) to regulate the organism’s physiology and behavior in a broad range of fitness-relevant areas including defensive behaviors, competitive risk-taking, learning, attachment, affiliation and reproductive functioning. The information encoded by the system during development feeds back on the long-term calibration of the system itself, resulting in adaptive patterns of responsivity and individual differences in behavior. Drawing on evolutionary life history theory, we build a model of the development of stress responsivity across life stages, describe four prototypical responsivity patterns, and discuss the emergence and meaning of sex differences. The ACM extends the theory of biological sensitivity to context (BSC) and provides an integrative framework for future research in the field. PMID:21145350
Improving the Generic Camera Calibration Technique by an Extended Model of Calibration Display
NASA Astrophysics Data System (ADS)
Reh, T.; Li, W.; Burke, J.; Bergmann, R. B.
2014-10-01
Generic camera calibration is a method to characterize vision sensors by describing a line of sight for every single pixel. This procedure frees the calibration process from the restriction to pinhole-like optics that arises in the common photogrammetric camera models. Generic camera calibration also enables the calibration of high-frequency distortions, which is beneficial for high-precision measurement systems. The calibration process is as follows: To collect sufficient data for calculating a line of sight for each pixel, active grids are used as calibration reference rather than static markers such as corners of chessboard patterns. A common implementation of active grids is sinusoidal fringes presented on a flat TFT display. So far, the displays have always been treated as ideally flat. In this work we propose new and more sophisticated models to account for additional properties of the active grid display: The refraction of light in the glass cover is taken into account as well as a possible deviation of the top surface from absolute flatness. To examine the effectiveness of the new models, an example fringe projection measurement system is characterized with the resulting calibration methods and with the original generic camera calibration. Evaluating measurements using the different calibration methods shows that the extended display model substantially improves the uncertainty of the measurement system.
A Comparison of Two Balance Calibration Model Building Methods
NASA Technical Reports Server (NTRS)
DeLoach, Richard; Ulbrich, Norbert
2007-01-01
Simulated strain-gage balance calibration data is used to compare the accuracy of two balance calibration model building methods for different noise environments and calibration experiment designs. The first building method obtains a math model for the analysis of balance calibration data after applying a candidate math model search algorithm to the calibration data set. The second building method uses stepwise regression analysis in order to construct a model for the analysis. Four balance calibration data sets were simulated in order to compare the accuracy of the two math model building methods. The simulated data sets were prepared using the traditional One Factor At a Time (OFAT) technique and the Modern Design of Experiments (MDOE) approach. Random and systematic errors were introduced in the simulated calibration data sets in order to study their influence on the math model building methods. Residuals of the fitted calibration responses and other statistical metrics were compared in order to evaluate the calibration models developed with different combinations of noise environment, experiment design, and model building method. Overall, predicted math models and residuals of both math model building methods show very good agreement. Significant differences in model quality were attributable to noise environment, experiment design, and their interaction. Generally, the addition of systematic error significantly degraded the quality of calibration models developed from OFAT data by either method, but MDOE experiment designs were more robust with respect to the introduction of a systematic component of the unexplained variance.
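A stripped-down version of the second model building method (forward selection on residual sum of squares only; real stepwise regression also applies entry/exit significance tests) might look like this, on synthetic data standing in for balance calibration responses:

```python
import numpy as np

def forward_stepwise(X, y, n_terms):
    """Greedy forward selection: at each step add the candidate regressor
    (column of X) that most reduces the residual sum of squares."""
    chosen = []
    for _ in range(n_terms):
        best_j, best_rss = None, np.inf
        for j in range(X.shape[1]):
            if j in chosen:
                continue
            cols = X[:, chosen + [j]]
            coef, *_ = np.linalg.lstsq(cols, y, rcond=None)
            rss = float(np.sum((y - cols @ coef) ** 2))
            if rss < best_rss:
                best_j, best_rss = j, rss
        chosen.append(best_j)
    return chosen

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 5))           # five candidate calibration terms
y = 3.0 * X[:, 2] - 1.5 * X[:, 4]      # true model uses terms 2 and 4 only
terms = forward_stepwise(X, y, 2)      # recovers the two active terms
```

The first method in the abstract (a candidate math model search) differs in that it scores whole candidate models rather than growing one model a term at a time, which is one reason the two approaches can disagree in noisy environments.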
Seepage Calibration Model and Seepage Testing Data
S. Finsterle
2004-09-02
The purpose of this Model Report is to document the Seepage Calibration Model (SCM). The SCM was developed (1) to establish the conceptual basis for the Seepage Model for Performance Assessment (SMPA), and (2) to derive seepage-relevant, model-related parameters and their distributions for use in the SMPA and seepage abstraction in support of the Total System Performance Assessment for License Application (TSPA-LA). This Model Report has been revised in response to a comprehensive, regulatory-focused evaluation performed by the Regulatory Integration Team [''Technical Work Plan for: Regulatory Integration Evaluation of Analysis and Model Reports Supporting the TSPA-LA'' (BSC 2004 [DIRS 169653])]. The SCM is intended to be used only within this Model Report for the estimation of seepage-relevant parameters through calibration of the model against seepage-rate data from liquid-release tests performed in several niches along the Exploratory Studies Facility (ESF) Main Drift and in the Cross-Drift. The SCM does not predict seepage into waste emplacement drifts under thermal or ambient conditions. Seepage predictions for waste emplacement drifts under ambient conditions will be performed with the SMPA [''Seepage Model for PA Including Drift Collapse'' (BSC 2004 [DIRS 167652])], which inherits the conceptual basis and model-related parameters from the SCM. Seepage during the thermal period is examined separately in the Thermal Hydrologic (TH) Seepage Model [see ''Drift-Scale Coupled Processes (DST and TH Seepage) Models'' (BSC 2004 [DIRS 170338])]. The scope of this work is (1) to evaluate seepage rates measured during liquid-release experiments performed in several niches in the Exploratory Studies Facility (ESF) and in the Cross-Drift, which was excavated for enhanced characterization of the repository block (ECRB); (2) to evaluate air-permeability data measured in boreholes above the niches and the Cross-Drift to obtain the permeability structure for the seepage model
Lithography process window analysis with calibrated model
NASA Astrophysics Data System (ADS)
Zhou, Wenzhan; Yu, Jin; Lo, James; Liu, Johnson
2004-05-01
As critical dimensions shrink below 0.13 μm, SPC (statistical process control) based on CD (critical dimension) control in the lithography process becomes more difficult. The increasing requirements of a shrinking process window have created a need for more accurate determination of the process window center. However, in practical fabrication we found that systematic error introduced by metrology and/or the resist process can significantly impact the process window analysis result. In particular, when simple polynomial functions are used to fit the lithographic data from a focus exposure matrix (FEM), the model will fit these systematic errors rather than filter them out. This will definitely impact the process window analysis and the determination of the best process condition. In this paper, we propose using a calibrated first-principles model for process window analysis. With this method, the systematic metrology error can be filtered out efficiently, giving a more reasonable window analysis result.
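The simple polynomial approach the authors criticize is easy to state: fit CD versus focus with a quadratic (a Bossung-curve fit) and take the vertex as best focus. On the synthetic, noise-free data below it recovers the center exactly; the paper's point is that real FEM data carry systematic errors that such a fit absorbs rather than filters out (all numbers here are illustrative):

```python
import numpy as np

# Synthetic Bossung data: CD (nm) vs focus (um) at fixed dose,
# parabolic with the best focus placed at +0.05 um.
focus = np.linspace(-0.3, 0.3, 13)
cd = 130.0 + 80.0 * (focus - 0.05) ** 2

c2, c1, c0 = np.polyfit(focus, cd, 2)   # quadratic (Bossung) fit
best_focus = -c1 / (2.0 * c2)           # vertex of the fitted parabola
```

A calibrated first-principles model replaces the free polynomial coefficients with physically constrained ones, so a metrology offset cannot masquerade as a shift of the window center.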
Rohani Moghadam, Masoud; Haji Shabani, Ali Mohammad; Dadfarnia, Shayessteh
2015-01-25
A solidified floating organic drop microextraction (SFODME) procedure was developed for the simultaneous extraction and preconcentration of Fe(III) and Al(III) from water samples. The method was based on the formation of cationic complexes between Fe(III) and Al(III) and 3,5,7,2',4'-pentahydroxyflavone (morin), which were extracted into 1-undecanol as ion pairs with perchlorate ions. The absorbance of the extracted complexes was then measured in the wavelength range of 300-450 nm. Finally, the concentration of each metal ion was determined by the use of the orthogonal signal correction-partial least squares (OSC-PLS) calibration method. Several experimental parameters that may affect the extraction process, such as the type and volume of extraction solvent, pH of the aqueous solution, morin and perchlorate concentration, and extraction time, were optimized. Under the optimum conditions, Fe(III) and Al(III) were determined in the ranges of 0.83-27.00 μg L-1 (R2 = 0.9985) and 1.00-32.00 μg L-1 (R2 = 0.9979), respectively. The relative standard deviations (n = 6) at 12.80 μg L-1 of Fe(III) and 17.00 μg L-1 of Al(III) were 3.2% and 3.5%, respectively. Enhancement factors of 102 and 96 were obtained for Fe(III) and Al(III) ions, respectively. The procedure was successfully applied to the determination of iron and aluminum in steam and water samples of a thermal power plant, and the accuracy was assessed through recovery experiments and independent analysis by electrothermal atomic absorption spectroscopy (ETAAS). PMID:25168229
NASA Technical Reports Server (NTRS)
Tripp, John S.; Tcheng, Ping
1999-01-01
Statistical tools, previously developed for nonlinear least-squares estimation of multivariate sensor calibration parameters and the associated calibration uncertainty analysis, have been applied to single- and multiple-axis inertial model attitude sensors used in wind tunnel testing to measure angle of attack and roll angle. The analysis provides confidence and prediction intervals of calibrated sensor measurement uncertainty as functions of applied input pitch and roll angles. A comparative performance study of various experimental designs for inertial sensor calibration is presented along with corroborating experimental data. The importance of replicated calibrations over extended time periods has been emphasized; replication provides independent estimates of calibration precision and bias uncertainties, statistical tests for calibration or modeling bias uncertainty, and statistical tests for sensor parameter drift over time. A set of recommendations for a new standardized model attitude sensor calibration method and usage procedures is included. The statistical information provided by these procedures is necessary for the uncertainty analysis of aerospace test results now required by users of industrial wind tunnel test facilities.
Towards automatic calibration of 2-dimensional flood propagation models
NASA Astrophysics Data System (ADS)
Fabio, P.; Aronica, G. T.; Apel, H.
2009-11-01
Hydraulic models for flood propagation description are an essential tool in many fields, e.g. civil engineering, flood hazard and risk assessments, evaluation of flood control measures, etc. Nowadays many models of different complexity regarding the mathematical foundation and spatial dimensions are available, and most of them are comparatively easy to operate due to sophisticated tools for model setup and control. However, the calibration of these models is still underdeveloped in contrast to other models like, e.g., hydrological models or models used in ecosystem analysis. This has basically two reasons: first, the lack of relevant data against which the models can be calibrated, because flood events are very rarely monitored due to the disturbances inflicted by them and the lack of appropriate measuring equipment in place. Second, the two-dimensional models in particular are computationally very demanding, and therefore the use of available sophisticated automatic calibration procedures is restricted in many cases. This study takes a well documented flood event in August 2002 at the Mulde River in Germany as an example and investigates the most appropriate calibration strategy for a full 2-D hyperbolic finite element model. The model-independent optimiser PEST, which offers the possibility of automatic calibration, is used. The application of the parallel version of the optimiser to the model and calibration data showed that (a) it is possible to use automatic calibration in combination with a 2-D hydraulic model, and (b) equifinality of model parameterisation can also be caused by a too large number of degrees of freedom in the calibration data in contrast to a too simple model setup. In order to improve model calibration and reduce equifinality, a method was developed to identify calibration data with likely errors that obstruct model calibration.
Calibration of a fuel relocation model in BISON
Swiler, L. P.; Williamson, R. L.; Perez, D. M.
2013-07-01
We demonstrate parameter calibration in the context of the BISON nuclear fuels performance analysis code. Specifically, we present the calibration of a parameter governing fuel relocation: the power level at which the relocation model is activated. This relocation activation parameter is a critical value in obtaining reasonable comparison with fuel centerline temperature measurements. It also is the subject of some debate in terms of the optimal values. We show that the optimal value does vary across the calibration to individual rods. We also demonstrate an aggregated calibration, where we calibrate to observations from six rods. (authors)
METHODOLOGIES FOR CALIBRATION AND PREDICTIVE ANALYSIS OF A WATERSHED MODEL
The use of a fitted-parameter watershed model to address water quantity and quality management issues requires that it be calibrated under a wide range of hydrologic conditions. However, rarely does model calibration result in a unique parameter set. Parameter nonuniqueness can l...
Calibrating Historical IR Sensors Using GEO, and AVHRR Infrared Tropical Mean Calibration Models
NASA Technical Reports Server (NTRS)
Scarino, Benjamin; Doelling, David R.; Minnis, Patrick; Gopalan, Arun; Haney, Conor; Bhatt, Rajendra
2014-01-01
Long-term, remote-sensing-based climate data records (CDRs) are highly dependent on having consistent, well-calibrated satellite instrument measurements of the Earth's radiant energy. Therefore, by making historical satellite calibrations consistent with those of today's imagers, the Earth-observing community can benefit from a CDR that spans a minimum of 30 years. Most operational meteorological satellites rely on an onboard blackbody and space looks to provide on-orbit IR calibration, but neither target is traceable to absolute standards. The IR channels can also be affected by ice on the detector window, angle dependency of the scan mirror emissivity, stray light, and detector-to-detector striping. Being able to quantify and correct such degradations would mean IR data from any satellite imager could contribute to a CDR. Recent efforts have focused on utilizing well-calibrated modern hyper-spectral sensors to intercalibrate concurrent operational IR imagers to a single reference. In order to consistently calibrate both historical and current IR imagers to the same reference, however, another strategy is needed. Large, well-characterized tropical-domain Earth targets have the potential of providing an Earth-view reference accuracy of within 0.5 K. To that end, NASA Langley is developing an IR tropical mean calibration model in order to calibrate historical Advanced Very High Resolution Radiometer (AVHRR) instruments. Using Meteosat-9 (Met-9) as a reference, empirical models are built based on spatially/temporally binned Met-9 and AVHRR tropical IR brightness temperatures. By demonstrating the stability of the Met-9 tropical models, NOAA-18 AVHRR can be calibrated to Met-9 by matching the AVHRR monthly histogram averages with the Met-9 model. This method is validated with ray-matched AVHRR and Met-9 bias-difference time series. Establishing the validity of this empirical model will allow for the calibration of historical AVHRR sensors to within 0.5 K, and thereby
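The matching step can be sketched as a linear gain/offset fit between monthly AVHRR histogram averages and the Met-9-based model values. All numbers below are synthetic placeholders, not real brightness temperatures:

```python
import numpy as np

# Monthly tropical-mean brightness temperatures (K): reference model values
# vs a sensor with a small synthetic gain/offset drift imposed.
model_bt = np.array([285.1, 286.0, 287.2, 286.5, 285.8, 286.9])
avhrr_bt = 0.98 * model_bt + 4.0          # drifted raw monthly averages

gain, offset = np.polyfit(avhrr_bt, model_bt, 1)  # linear calibration fit
calibrated_bt = gain * avhrr_bt + offset          # back on the reference scale
```

In the real method the monthly averages come from binned tropical-domain histograms, which is what makes the target stable enough to serve as a common reference for sensors that never flew at the same time.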
Evaluation of “Autotune” calibration against manual calibration of building energy models
Chaudhary, Gaurav; New, Joshua; Sanyal, Jibonananda; Im, Piljae; O’Neill, Zheng; Garg, Vishal
2016-08-26
Our paper demonstrates the application of Autotune, a methodology aimed at automatically producing calibrated building energy models using measured data, in two case studies. In the first case, a building model is de-tuned by deliberately injecting faults into more than 60 parameters. This model was then calibrated using Autotune and its accuracy with respect to the original model was evaluated in terms of the industry-standard normalized mean bias error and coefficient of variation of root mean squared error metrics set forth in ASHRAE Guideline 14. In addition to whole-building energy consumption, outputs including lighting, plug load profiles, HVAC energy consumption, zone temperatures, and other variables were analyzed. In the second case, Autotune calibration is compared directly to experts' manual calibration of an emulated-occupancy, full-size residential building with comparable calibration results in much less time. Lastly, our paper concludes with a discussion of the key strengths and weaknesses of auto-calibration approaches.
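The two accuracy metrics named above are simple to compute; a sketch of their common form is below (the degrees-of-freedom adjustment that Guideline 14 applies to the denominators is omitted for brevity):

```python
import numpy as np

def nmbe(measured, simulated):
    """Normalized mean bias error, in %: net bias relative to the mean
    of the measured data (sign shows over- vs under-prediction)."""
    m, s = np.asarray(measured, float), np.asarray(simulated, float)
    return 100.0 * np.sum(m - s) / (m.size * m.mean())

def cv_rmse(measured, simulated):
    """Coefficient of variation of the RMSE, in %: scatter of the
    simulation about the measurements relative to the measured mean."""
    m, s = np.asarray(measured, float), np.asarray(simulated, float)
    return 100.0 * np.sqrt(np.mean((m - s) ** 2)) / m.mean()

m = [10.0, 10.0, 10.0, 10.0]   # measured energy use (arbitrary units)
s = [9.0, 11.0, 9.0, 11.0]     # simulated: unbiased but scattered
bias, scatter = nmbe(m, s), cv_rmse(m, s)   # 0.0 and 10.0
```

The example illustrates why both metrics are needed: a model can have zero net bias (NMBE = 0) while still missing every individual interval (CV(RMSE) = 10%).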
Automatically calibrating admittances in KATE's autonomous launch operations model
NASA Astrophysics Data System (ADS)
Morgan, Steve
1992-09-01
This report documents a 1000-line Symbolics LISP program that automatically calibrates all 15 fluid admittances in KATE's Autonomous Launch Operations (ALO) model. (KATE is Kennedy Space Center's Knowledge-based Autonomous Test Engineer, a diagnosis and repair expert system created for use on the Space Shuttle's various fluid flow systems.) As a new KATE application, the calibrator described here breaks new ground for KSC's Artificial Intelligence Lab by allowing KATE to both control and measure the hardware she supervises. By automating a formerly manual process, the calibrator: (1) saves the ALO model builder untold amounts of labor; (2) enables quick repairs after workmen accidentally adjust ALO's hand valves; and (3) frees the modeler to pursue new KATE applications that previously were too complicated. Also reported are suggestions for enhancing the program: (1) to calibrate ALO's TV cameras, pumps, and sensor tolerances; and (2) to calibrate devices in other KATE models, such as the shuttle's LOX and Environment Control System (ECS).
Polarimetric PALSAR System Model Assessment and Calibration
NASA Astrophysics Data System (ADS)
Touzi, R.; Shimada, M.
2009-04-01
Polarimetric PALSAR system parameters are assessed using data sets collected over various calibration sites. The data collected over the Amazonian forest permit validation of the zero-Faraday-rotation hypothesis near the equator. The analysis of the Amazonian forest data and the response of the corner reflectors deployed during the PALSAR acquisitions lead to the conclusion that the antenna is highly isolated (better than -35 dB). These results are confirmed using data collected over the Sweden and Ottawa calibration sites. The 5-m trihedrals deployed at the Sweden calibration site by the Chalmers University of Technology permit accurate measurement of antenna parameters, and detection of 2-3 degrees of Faraday rotation during day acquisition, whereas no Faraday rotation was noted during night acquisition. Small Faraday rotation angles (2-3 degrees) have been measured using acquisitions over the DLR Oberpfaffenhofen and the Ottawa calibration sites. The presence of small but still significant Faraday rotation (2-3 degrees) induces a corner-reflector return at the cross-polarizations HV and VH that should not be interpreted as the actual antenna cross-talk. The PALSAR antenna is highly isolated (better than -35 dB), and diagonal antenna distortion matrices (with zero cross-talk terms) can be used for accurate calibration of PALSAR polarimetric data.
Cook, D A; Joyce, C J; Barnett, R J; Birgan, S P; Playford, H; Cockings, J G L; Hurford, R W
2002-06-01
Evaluation of the performance of the APACHE III (Acute Physiology and Chronic Health Evaluation) ICU (intensive care unit) and hospital mortality models at the Princess Alexandra Hospital, Brisbane is reported. Prospective collection of demographic, diagnostic, physiological, laboratory, admission and discharge data of 5681 consecutive eligible admissions (1 January 1995 to 1 January 2000) was conducted at the Princess Alexandra Hospital, a metropolitan Australian tertiary referral medical/surgical adult ICU. ROC (receiver operating characteristic) curve areas for the APACHE III ICU mortality and hospital mortality models demonstrated excellent discrimination. Observed ICU mortality (9.1%) was significantly overestimated by the APACHE III model adjusted for hospital characteristics (10.1%), but did not significantly differ from the prediction of the generic APACHE III model (8.6%). In contrast, observed hospital mortality (14.8%) agreed well with the prediction of the APACHE III model adjusted for hospital characteristics (14.6%), but was significantly underestimated by the unadjusted APACHE III model (13.2%). Calibration curves and goodness-of-fit analysis using Hosmer-Lemeshow statistics demonstrated that calibration was good with the unadjusted APACHE III ICU mortality model and with the APACHE III hospital mortality model adjusted for hospital characteristics. Post hoc analysis revealed a declining annual SMR (standardized mortality ratio) during the study period. This trend was present in each of the non-surgical, emergency and elective surgical diagnostic groups, and the change was temporally related to increased specialist staffing levels. This study demonstrates that the APACHE III model performs well on independent assessment in an Australian hospital. Changes observed in annual SMR using such a validated model support a hypothesis of improved survival outcomes 1995-1999. PMID:12075637
NASA Astrophysics Data System (ADS)
Garavaglia, F.; Seyve, E.; Gottardi, F.; Le Lay, M.; Gailhard, J.; Garçon, R.
2014-12-01
MORDOR is a conceptual hydrological model extensively used in Électricité de France (EDF, French electric utility company) operational applications: (i) hydrological forecasting, (ii) flood risk assessment, (iii) water balance and (iv) climate change studies. MORDOR is a lumped, reservoir-type, elevation-based model with hourly or daily areal rainfall and air temperature as the driving input data. The principal hydrological processes represented are evapotranspiration, direct and indirect runoff, groundwater, snow accumulation and melt, and routing. The model has been used intensively at EDF for more than 20 years, in particular for modeling French mountainous watersheds. Regarding parameter calibration, we propose and test alternative multi-criteria techniques based on two specific approaches: automatic calibration using single-objective functions and a priori parameter calibration founded on hydrological watershed features. The automatic calibration approach uses single-objective functions, based on the Kling-Gupta efficiency, to quantify the agreement between simulated and observed runoff, focusing on four different runoff samples: (i) the time series itself, (ii) the annual hydrological regime, (iii) monthly cumulative distribution functions and (iv) recession sequences. The primary purpose of this study is to analyze the definition and sensitivity of MORDOR parameters by testing different calibration techniques in order to: (i) simplify the model structure, (ii) increase the calibration-validation performance of the model and (iii) reduce the equifinality problem of the calibration process. We propose an alternative calibration strategy that reaches these goals. The analysis is illustrated by calibrating the MORDOR model to daily data for 50 watersheds located in French mountainous regions.
Impact of data quality and quantity and the calibration procedure on crop growth model calibration
NASA Astrophysics Data System (ADS)
Seidel, Sabine J.; Werisch, Stefan
2014-05-01
Crop growth models are a commonly used tool for assessing the impact of climate variability and climate change on crop yields and water use. Process-based crop models rely on algorithms that approximate the main physiological plant processes by a set of equations containing several calibration parameters as well as basic underlying assumptions. It is well recognized that model calibration is essential to improve the accuracy and reliability of model predictions. However, model calibration and validation are often hindered by the limited quantity and quality of available data. Recent studies suggest that crop model parameters can only be derived from field experiments in which plant growth and development processes have been measured. To achieve a reliable prediction of crop growth under irrigation or drought stress, the correct characterization of the whole soil-plant-atmosphere system is essential. In this context, the accurate simulation of crop development, yield and soil water dynamics plays an important role. In this study we aim to investigate the importance of a site- and cultivar-specific model calibration based on experimental data using the SVAT model Daisy. We investigate to which extent different data sets and different parameter estimation procedures affect yield estimates, irrigation water demand and the soil water dynamics in particular. The comprehensive experimental data were derived from an experiment conducted in Germany in which five irrigation regimes were imposed on cabbage. Data collection included continuous measurements of soil tension and soil water content in two plots at three depths, weekly measurements of LAI, plant heights, leaf-N-content, stomatal conductivity, biomass partitioning, rooting depth as well as harvested yields and duration of growing period. Three crop growth calibration strategies were compared: (1) manual calibration based on yield and duration of growing period, (2) manual calibration based on yield
Multi-fidelity approach to dynamics model calibration
NASA Astrophysics Data System (ADS)
Absi, Ghina N.; Mahadevan, Sankaran
2016-02-01
This paper investigates the use of structural dynamics computational models with multiple levels of fidelity in the calibration of system parameters. Different types of models may be available for the estimation of unmeasured system properties, with different levels of physics fidelity, mesh resolution and boundary condition assumptions. In order to infer these system properties, Bayesian calibration uses information from multiple sources (including experimental data and prior knowledge), and comprehensively quantifies the uncertainty in the calibration parameters. Estimating the posteriors is done using Markov Chain Monte Carlo sampling, which requires a large number of computations, thus making the use of a high-fidelity model for calibration prohibitively expensive. On the other hand, use of a low-fidelity model could lead to significant error in calibration and prediction. Therefore, this paper develops an approach for model parameter calibration with a low-fidelity model corrected using higher fidelity simulations, and investigates the trade-off between accuracy and computational effort. The methodology is illustrated for a curved panel located in the vicinity of a hypersonic aircraft engine, subjected to acoustic loading. Two models (a frequency response analysis and a full time history analysis) are combined to calibrate the damping characteristics of the panel.
Calibration of stormwater quality regression models: a random process?
Dembélé, A; Bertrand-Krajewski, J-L; Barillon, B
2010-01-01
Regression models are among the most frequently used models to estimate pollutant event mean concentrations (EMC) in wet weather discharges in urban catchments. Two main questions dealing with the calibration of EMC regression models are investigated: i) the sensitivity of models to the size and content of the data sets used for their calibration; ii) the change in modelling results when models are re-calibrated as data sets grow and change over time with the collection of new experimental data. Based on an experimental data set of 64 rain events monitored in a densely urbanised catchment, four TSS EMC regression models (two log-linear and two linear models) with two or three explanatory variables have been derived and analysed. Model calibration with the iteratively re-weighted least squares method is less sensitive and leads to more robust results than the ordinary least squares method. Three calibration options have been investigated: two options accounting for the chronological order of the observations, and one option using random samples of events from the whole available data set. Results obtained with the best-performing nonlinear model clearly indicate that the model is highly sensitive to the size and content of the data set used for its calibration.
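A minimal sketch of why iteratively re-weighted least squares is more robust than ordinary least squares for this kind of calibration. The synthetic data, the Huber tuning constant, and the MAD scale estimate are illustrative assumptions, not the paper's TSS data set:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic calibration set: a linear EMC-style model with a few outlier
# events that would otherwise dominate an ordinary least squares (OLS) fit.
n = 60
X = np.column_stack([np.ones(n), rng.uniform(0, 1, n), rng.uniform(0, 1, n)])
beta_true = np.array([1.0, 2.0, -1.0])
y = X @ beta_true + rng.normal(0, 0.1, n)
y[:5] += 5.0                                       # a few outlier events

def irls(X, y, c=1.345, iters=20):
    """Iteratively re-weighted least squares with Huber-type weights."""
    beta = np.linalg.lstsq(X, y, rcond=None)[0]    # OLS starting point
    for _ in range(iters):
        r = y - X @ beta
        s = np.median(np.abs(r)) / 0.6745 + 1e-12  # robust scale (MAD)
        u = np.abs(r / s)
        w = np.where(u <= c, 1.0, c / u)           # Huber weights
        sw = np.sqrt(w)                            # weighted LS via sqrt(w)
        beta = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)[0]
    return beta

beta_ols = np.linalg.lstsq(X, y, rcond=None)[0]
beta_rob = irls(X, y)
# The robust fit recovers beta_true much more closely than OLS.
print(np.abs(beta_rob - beta_true).max() < np.abs(beta_ols - beta_true).max())
```

The down-weighting of large residuals is what makes the re-calibration less sensitive to which events happen to be in the data set.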
Calibration of the Site-Scale Saturated Zone Flow Model
G. A. Zyvoloski
2001-06-28
The purpose of the flow calibration analysis work is to provide Performance Assessment (PA) with the calibrated site-scale saturated zone (SZ) flow model that will be used to make radionuclide transport calculations. As such, it is one of the most important models developed in the Yucca Mountain project. This model will be a culmination of much of our knowledge of the SZ flow system. The objective of this study is to provide a defensible site-scale SZ flow and transport model that can be used for assessing total system performance. A defensible model would include geologic and hydrologic data that are used to form the hydrogeologic framework model; also, it would include hydrochemical information to infer transport pathways, in-situ permeability measurements, and water level and head measurements. In addition, the model should include information on major model sensitivities. Especially important are those that affect calibration, the direction of transport pathways, and travel times. Finally, if warranted, alternative calibrations representing different conceptual models should be included. To obtain a defensible model, all available data should be used (or at least considered) to obtain a calibrated model. The site-scale SZ model was calibrated using measured and model-generated water levels and hydraulic head data, specific discharge calculations, and flux comparisons along several of the boundaries. Model validity was established by comparing model-generated permeabilities with the permeability data from field and laboratory tests; by comparing fluid pathlines obtained from the SZ flow model with those inferred from hydrochemical data; and by comparing the upward gradient generated with the model with that observed in the field. This analysis is governed by the Office of Civilian Radioactive Waste Management (OCRWM) Analysis and Modeling Report (AMR) Development Plan ''Calibration of the Site-Scale Saturated Zone Flow Model'' (CRWMS M&O 1999a).
Model Calibration of Exciter and PSS Using Extended Kalman Filter
Kalsi, Karanjit; Du, Pengwei; Huang, Zhenyu
2012-07-26
Power system modeling and controls continue to become more complex with the advent of smart grid technologies and large-scale deployment of renewable energy resources. As demonstrated in recent studies, inaccurate system models could lead to large-scale blackouts, thereby motivating the need for model calibration. Current methods of model calibration rely on manual tuning based on engineering experience, are time-consuming, and can yield inaccurate parameter estimates. In this paper, the Extended Kalman Filter (EKF) is used as a tool to calibrate exciter and Power System Stabilizer (PSS) models of a particular type of machine in the Western Electricity Coordinating Council (WECC). The EKF-based parameter estimation is a recursive prediction-correction process which uses the mismatch between simulation and measurement to adjust the model parameters at every time step. Numerical simulations using actual field test data demonstrate the effectiveness of the proposed approach in calibrating the parameters.
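The recursive prediction-correction loop can be sketched for a scalar toy case. The saturation-type measurement model h(), the noise levels, and the random-walk treatment of the parameter are illustrative assumptions, not the WECC exciter/PSS models of the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: an unknown scalar model parameter theta (e.g. a gain) is
# treated as a random-walk state; each measurement y_k is a nonlinear
# function h(theta, u) of it plus noise.
def h(theta, u):
    return theta * u / (1.0 + theta * u)   # assumed saturation-type response

def dh(theta, u):                          # Jacobian dh/dtheta
    return u / (1.0 + theta * u) ** 2

theta_true, R, Q = 2.0, 1e-4, 1e-6        # true value, meas. and process noise
theta, P = 0.5, 1.0                        # initial guess and covariance
for k in range(200):
    u = 0.5 + 0.5 * rng.random()           # excitation input
    y = h(theta_true, u) + rng.normal(0.0, np.sqrt(R))
    P = P + Q                              # predict (random-walk parameter)
    H = dh(theta, u)
    K = P * H / (H * P * H + R)            # Kalman gain
    theta = theta + K * (y - h(theta, u))  # correct with the mismatch
    P = (1.0 - K * H) * P

print(round(theta, 2))                     # converges near theta_true = 2.0
```

Each pass uses the simulation-measurement mismatch y - h(theta, u) to nudge the parameter, exactly the prediction-correction idea the abstract describes, just in one dimension.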
Finite Element Model Calibration Approach for Ares I-X
NASA Technical Reports Server (NTRS)
Horta, Lucas G.; Reaves, Mercedes C.; Buehrle, Ralph D.; Templeton, Justin D.; Lazor, Daniel R.; Gaspar, James L.; Parks, Russel A.; Bartolotta, Paul A.
2010-01-01
Ares I-X is a pathfinder vehicle concept under development by NASA to demonstrate a new class of launch vehicles. Although this vehicle is essentially a shell of what the Ares I vehicle will be, efforts are underway to model and calibrate the analytical models before its maiden flight. Work reported in this document will summarize the model calibration approach used including uncertainty quantification of vehicle responses and the use of nonconventional boundary conditions during component testing. Since finite element modeling is the primary modeling tool, the calibration process uses these models, often developed by different groups, to assess model deficiencies and to update parameters to reconcile test with predictions. Data for two major component tests and the flight vehicle are presented along with the calibration results. For calibration, sensitivity analysis is conducted using Analysis of Variance (ANOVA). To reduce the computational burden associated with ANOVA calculations, response surface models are used in lieu of computationally intensive finite element solutions. From the sensitivity studies, parameter importance is assessed as a function of frequency. In addition, the work presents an approach to evaluate the probability that a parameter set exists to reconcile test with analysis. Comparisons of pre-test predictions of frequency response uncertainty bounds with measured data, results from the variance-based sensitivity analysis, and results from component test models with calibrated boundary stiffness models are all presented.
Sediment calibration strategies of Phase 5 Chesapeake Bay watershed model
Wu, J.; Shenk, G.W.; Raffensperger, J.; Moyer, D.; Linker, L.C.; ,
2005-01-01
Sediment is a primary constituent of concern for Chesapeake Bay due to its effect on water clarity. Accurate representation of sediment processes and behavior in the Chesapeake Bay watershed model is critical for developing sound load reduction strategies. Sediment calibration remains one of the most difficult components of watershed-scale assessment. This is especially true for the Chesapeake Bay watershed model, given the size of the watershed being modeled and the complexity involved in the land and stream simulation processes. To obtain the best calibration, the Chesapeake Bay Program has developed four different strategies for sediment calibration of the Phase 5 watershed model: 1) comparing observed and simulated sediment rating curves for different parts of the hydrograph; 2) analyzing the change of bed depth over time; 3) relating deposition/scour to total annual sediment loads; and 4) calculating "goodness-of-fit" statistics. These strategies allow a more accurate sediment calibration and also provide some insightful information on sediment processes and behavior in the Chesapeake Bay watershed.
Tradeoffs among watershed model calibration targets for parameter estimation
Hydrologic models are commonly calibrated by optimizing a single objective function target to compare simulated and observed flows, although individual targets are influenced by specific flow modes. Nash-Sutcliffe efficiency (NSE) emphasizes flood peaks in evaluating simulation f...
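The Nash-Sutcliffe efficiency mentioned above can be written out directly; the sample series below is invented for illustration:

```python
import numpy as np

def nse(observed, simulated):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit, 0 matches the
    observed-mean benchmark, negative values are worse than the mean."""
    observed = np.asarray(observed, dtype=float)
    simulated = np.asarray(simulated, dtype=float)
    return 1.0 - np.sum((observed - simulated) ** 2) / np.sum(
        (observed - observed.mean()) ** 2
    )

# Squared residuals make NSE weight the large (flood-peak) errors most
# heavily, which is why NSE-calibrated models emphasize peaks.
obs = np.array([1.0, 2.0, 10.0, 3.0, 1.0])
print(nse(obs, obs))                       # perfect fit -> 1.0
print(nse(obs, obs.mean() * np.ones(5)))   # mean benchmark -> 0.0
```

The peak emphasis follows from the squared-error numerator: a single large flood-peak residual contributes far more than many small low-flow residuals.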
Calibration model for the DCXC x-ray camera
Fehl, D.L.; Chang, J.
1980-01-01
A physical model for the DCXC camera used in x-radiographic studies of inertial confinement fusion (ICF) targets is described. Empirical calibration procedures, based on pulsed, bremsstrahlung sources, are proposed.
Analysis of Sting Balance Calibration Data Using Optimized Regression Models
NASA Technical Reports Server (NTRS)
Ulbrich, N.; Bader, Jon B.
2010-01-01
Calibration data of a wind tunnel sting balance was processed using a candidate math model search algorithm that recommends an optimized regression model for the data analysis. During the calibration the normal force and the moment at the balance moment center were selected as independent calibration variables. The sting balance itself had two moment gages. Therefore, after analyzing the connection between calibration loads and gage outputs, it was decided to choose the difference and the sum of the gage outputs as the two responses that best describe the behavior of the balance. The math model search algorithm was applied to these two responses. An optimized regression model was obtained for each response. Classical strain gage balance load transformations and the equations of the deflection of a cantilever beam under load are used to show that the search algorithm's two optimized regression models are supported by a theoretical analysis of the relationship between the applied calibration loads and the measured gage outputs. The analysis of the sting balance calibration data set is a rare example of a situation when terms of a regression model of a balance can directly be derived from first principles of physics. In addition, it is interesting to note that the search algorithm recommended the correct regression model term combinations using only a set of statistical quality metrics that were applied to the experimental data during the algorithm's term selection process.
Method calibration of the model 13145 infrared target projectors
NASA Astrophysics Data System (ADS)
Huang, Jianxia; Gao, Yuan; Han, Ying
2014-11-01
The SBIR Model 13145 Infrared Target Projector (hereafter, the Evaluation Unit) is used to characterize the performance of infrared imaging systems; the test items include SiTF, MTF, NETD, MRTD, MDTD and NPS. The projector comprises two area blackbodies, a 12-position target wheel and an all-reflective collimator. It provides high-spatial-frequency differential targets, which are imaged by the infrared imaging system under test and converted photoelectrically into analog or digital signals; the application software (IR Windows TM 2001) then evaluates the performance of the infrared imaging system. For calibration of the unit as a whole, the distributed components are first calibrated individually: the area blackbodies are calibrated according to the calibration specification for area blackbodies, error factors are corrected to calibrate the all-reflective collimator, radiance calibration of the infrared target projector is performed using the SR5000 spectral radiometer, and the systematic error is analyzed. For the parameters of the infrared imaging system, an integrated evaluation method is needed: in accordance with GJB2340-1995, General Specification for Military Thermal Imaging Sets, the tested parameters of the infrared imaging system are compared with results from the Optical Calibration Testing Laboratory, with the goal of establishing the true calibration performance of the Evaluation Unit.
Simultaneous calibration of hydrological models in geographical space
NASA Astrophysics Data System (ADS)
Bárdossy, András; Huang, Yingchun; Wagener, Thorsten
2016-07-01
Hydrological models are usually calibrated for selected catchments individually using specific performance criteria. This procedure assumes that the catchments show individual behavior. As a consequence, the transfer of model parameters to other ungauged catchments is problematic. In this paper, the possibility of transferring part of the model parameters was investigated. Three different conceptual hydrological models were considered. The models were restructured by introducing a new parameter η which exclusively controls water balances. This parameter was considered as individual to each catchment. All other parameters, which mainly control the dynamics of the discharge (dynamical parameters), were considered for spatial transfer. Three hydrological models combined with three different performance measures were used in three different numerical experiments to investigate this transferability. The first numerical experiment, involving individual calibration of the models for 15 selected MOPEX catchments, showed that it is difficult to identify which catchments share common dynamical parameters. Parameters of one catchment might be good for another catchment but not the opposite. In the second numerical experiment, a common spatial calibration strategy was used. It was explicitly assumed that the catchments share common dynamical parameters. This strategy leads to parameters which perform well on all catchments. A leave-one-out common calibration showed that in this case a good parameter transfer to ungauged catchments can be achieved. In the third numerical experiment, the common calibration methodology was applied for 96 catchments. Another set of 96 catchments was used to test the transfer of common dynamical parameters. The results show that even a large number of catchments share similar dynamical parameters. The performance is worse than those obtained by individual calibration, but the transfer to ungauged catchments remains possible. The performance of the
A calibration model for screen-caged Peltier thermocouple psychrometers
NASA Astrophysics Data System (ADS)
Brown, R. W.; Bartos, D. L.
1982-07-01
A calibration model for screen-caged Peltier thermocouple psychrometers was developed that applies to a water potential range of 0 to 80 bars, over a temperature range of 0 to 40 C, and for cooling times of 15 to 60 seconds. In addition, the model corrects for the effects of temperature gradients over zero-offsets from -60 to +60 microvolts. Complete details of model development are discussed, together with the theory of thermocouple psychrometers, and techniques of calibration and cleaning. Also, information for computer programming and tabular summaries of model characteristics are provided.
Calibrating RZWQM2 model for maize responses to deficit irrigation
Technology Transfer Automated Retrieval System (TEKTRAN)
Calibrating a system model for field research is a challenge and requires collaboration between modelers and experimentalists. In this study, the Root Zone Water Quality Model-DSSAT (RZWQM2) was used for simulating plant water stresses in corn in Eastern Colorado. The experiments were conducted in 2...
Stormwater quality modelling in combined sewers: calibration and uncertainty analysis.
Kanso, A; Chebbo, G; Tassin, B
2005-01-01
Estimating the level of uncertainty in urban stormwater quality models is vital for their utilization. This paper presents the results of the application of a Markov chain Monte Carlo method based on Bayesian theory for the calibration and uncertainty analysis of a stormwater quality model commonly used in available software. The tested model uses a hydrologic/hydrodynamic scheme to estimate the accumulation, erosion and transport of pollutants on surfaces and in sewers. It was calibrated for four different initial conditions of in-sewer deposits. Calibration results showed large variability in the model's responses as a function of the initial conditions. They demonstrated that the model's predictive capacity is very low. PMID:16206845
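The Markov chain Monte Carlo calibration described above can be sketched with a plain Metropolis sampler. The toy exponential buildup model, the flat prior, and the noise level are assumptions for illustration, not the accumulation-erosion-transport model of the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy pollutant-accumulation model m(t) = M * (1 - exp(-k t)); we infer the
# buildup-rate parameter k from noisy synthetic observations.
def model(k, t, M=10.0):
    return M * (1.0 - np.exp(-k * t))

t = np.linspace(0.5, 8.0, 15)
data = model(0.4, t) + rng.normal(0.0, 0.3, t.size)   # true k = 0.4

def log_post(k, sigma=0.3):
    if k <= 0.0 or k > 5.0:                 # flat prior on (0, 5]
        return -np.inf
    r = data - model(k, t)
    return -0.5 * np.sum(r * r) / sigma**2  # Gaussian log-likelihood

k, lp, chain = 1.0, log_post(1.0), []
for i in range(5000):
    k_new = k + rng.normal(0.0, 0.05)       # random-walk proposal
    lp_new = log_post(k_new)
    if np.log(rng.random()) < lp_new - lp:  # Metropolis accept/reject
        k, lp = k_new, lp_new
    chain.append(k)

posterior = np.array(chain[1000:])          # discard burn-in
print(round(posterior.mean(), 2))           # near the true value 0.4
```

The spread of the retained chain, not just its mean, is the uncertainty estimate the paper is after; running the sampler from different initial in-sewer deposit conditions would expose the sensitivity it reports.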
Real-data Calibration Experiments On A Distributed Hydrologic Model
NASA Astrophysics Data System (ADS)
Brath, A.; Montanari, A.; Toth, E.
The increasing availability of extended information on study watersheds does not generally overcome the need to determine, through calibration, at least a part of the parameters of distributed hydrologic models. The complexity of such models, which makes the computations highly intensive, has often prevented an extensive analysis of calibration issues. The purpose of this study is an evaluation of the validation results of a series of automatic calibration experiments (using the shuffled complex evolution method, Duan et al., 1992) performed with a highly conceptualised, continuously simulating, distributed hydrologic model applied to real data from a mid-sized Italian watershed. Major flood events that occurred in the 1990-2000 decade are simulated with the parameters obtained by calibrating the model against discharge data observed at the closure section of the watershed, and the hydrological features (overall agreement, volumes, peaks and times to peak) of the discharges obtained both at the closure section and at an interior stream gauge are analysed for validation purposes. A first set of calibrations investigates the effect of the variability of the calibration periods, using data from several single flood events and from longer, continuous periods. Another analysis regards the influence of the rainfall input: it is carried out by varying the size and distribution of the raingauge network, in order to examine the relation between the spatial pattern of observed rainfall and the variability of modelled runoff. Lastly, a comparison is presented of the hydrographs obtained for the flood events with the model parameterisations resulting when the objective function to be minimised in the automatic calibration procedure is modified.
An Example Multi-Model Analysis: Calibration and Ranking
NASA Astrophysics Data System (ADS)
Ahlmann, M.; James, S. C.; Lowry, T. S.
2007-12-01
Modeling solute transport is a complex process governed by multiple site-specific parameters like porosity and hydraulic conductivity as well as many solute-dependent processes such as diffusion and reaction. Furthermore, it must be determined whether a steady or time-variant model is most appropriate. A problem arises because over-parameterized conceptual models may be easily calibrated to exactly reproduce measured data, even if these data contain measurement noise. During preliminary site investigation stages where available data may be scarce it is often advisable to develop multiple independent conceptual models, but the question immediately arises: which model is best? This work outlines a method for quickly calibrating and ranking multiple models using the parameter estimation code PEST in conjunction with the second-order-bias-corrected Akaike Information Criterion (AICc). The method is demonstrated using the twelve analytical solutions to the one-dimensional convective-dispersive-reactive solute transport equation as the multiple conceptual models (van Genuchten, M. Th. and W. J. Alves, 1982. Analytical solutions of the one-dimensional convective-dispersive solute transport equation, USDA ARS Technical Bulletin Number 1661. U.S. Salinity Laboratory, 4500 Glenwood Drive, Riverside, CA 92501.). Each solution is calibrated to three data sets, each comprising an increasing number of calibration points that represent increased knowledge of the modeled site (calibration points are selected from one of the analytical solutions that provides the "correct" model). The AICc is calculated after each successive calibration to the three data sets, yielding model weights that are functions of the sum of the squared, weighted residuals, the number of parameters, and the number of observations (calibration data points), ultimately indicating which model has the highest likelihood of being correct. The results illustrate how the sparser data sets can be modeled
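For least-squares calibration, the AICc ranking step can be sketched as follows: the criterion is computed from the sum of squared residuals and converted into Akaike model weights. The Gaussian-error form of AIC and the example numbers are illustrative assumptions:

```python
import math

def aicc_weights(ssr_list, k_list, n):
    """Rank calibrated models by AICc and return Akaike weights (sum to 1).
    ssr_list: sum of squared (weighted) residuals per model
    k_list:   number of estimated parameters per model
    n:        number of calibration observations"""
    aicc = []
    for ssr, k in zip(ssr_list, k_list):
        aic = n * math.log(ssr / n) + 2 * k          # Gaussian-error AIC
        aicc.append(aic + 2 * k * (k + 1) / (n - k - 1))  # small-sample term
    best = min(aicc)
    rel = [math.exp(-0.5 * (a - best)) for a in aicc]     # relative likelihoods
    total = sum(rel)
    return [r / total for r in rel]

# Two models with similar fit but different parameter counts: the simpler
# model receives nearly all the weight.
w = aicc_weights(ssr_list=[2.0, 1.9], k_list=[3, 6], n=20)
print(w)
```

This is exactly the trade-off the abstract describes: the weights balance goodness of fit (SSR) against parameter count, penalizing the over-parameterized model that merely fits the noise.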
Stochastic calibration and learning in nonstationary hydroeconomic models
NASA Astrophysics Data System (ADS)
Maneta, M. P.; Howitt, R.
2014-05-01
Concern about water scarcity and adverse climate events over agricultural regions has motivated a number of efforts to develop operational integrated hydroeconomic models to guide adaptation and optimal use of water. Once calibrated, these models are used for water management and analysis assuming they remain valid under future conditions. In this paper, we present and demonstrate a methodology that permits the recursive calibration of economic models of agricultural production from noisy but frequently available data. We use a standard economic calibration approach, namely positive mathematical programming, integrated in a data assimilation algorithm based on the ensemble Kalman filter equations to identify the economic model parameters. A moving average kernel ensures that new and past information on agricultural activity are blended during the calibration process, avoiding loss of information and overcalibration for the conditions of a single year. A regularization constraint akin to the standard Tikhonov regularization is included in the filter to ensure its stability even in the presence of parameters with low sensitivity to observations. The results show that the implementation of the PMP methodology within a data assimilation framework based on the enKF equations is an effective method to calibrate models of agricultural production even with noisy information. The recursive nature of the method incorporates new information as an added value to the known previous observations of agricultural activity without the need to store historical information. The robustness of the method opens the door to the use of new remote sensing algorithms for operational water management.
Insights into multivariate calibration using errors-in-variables modeling
Thomas, E.V.
1996-09-01
A q-vector of responses, y, is related to a p-vector of explanatory variables, x, through a causal linear model. In analytical chemistry, y and x might represent the spectrum and associated set of constituent concentrations of a multicomponent sample, which are related through Beer's law. The model parameters are estimated during a calibration process in which both x and y are available for a number of observations (samples/specimens), collectively referred to as the calibration set. For new observations, the fitted calibration model is then used as the basis for predicting the unknown values of the new x's (concentrations) from the associated new y's (spectra) in the prediction set. This prediction procedure can be viewed as parameter estimation in an errors-in-variables (EIV) framework. In addition to providing a basis for simultaneous inference about the new x's, consideration of the EIV framework yields a number of insights relating to the design and execution of calibration studies. A particularly interesting result is that predictions of the new x's for individual samples can be improved by using seemingly unrelated information contained in the y's from the other members of the prediction set. Furthermore, motivated by this EIV analysis, this result can be extended beyond the causal modeling context to a broader range of applications of multivariate calibration that involve the use of principal components regression.
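A minimal classical-calibration sketch of the Beer's-law setting (calibrate on known pairs, then predict new concentrations from new spectra). The dimensions, noise level, and the plain least-squares estimator are illustrative assumptions; the paper's EIV refinements are not implemented here:

```python
import numpy as np

rng = np.random.default_rng(3)

# Beer's law setting, y = K x + noise: K holds the pure-component spectra.
p, q, n = 2, 20, 30            # constituents, wavelengths, calibration samples
K = rng.uniform(0.2, 1.0, (q, p))          # true pure-component spectra
X_cal = rng.uniform(0.0, 1.0, (n, p))      # known concentrations
Y_cal = X_cal @ K.T + rng.normal(0, 0.01, (n, q))  # measured spectra

# Calibration step: estimate K by least squares from the calibration set.
K_hat = np.linalg.lstsq(X_cal, Y_cal, rcond=None)[0].T

# Prediction step: recover x for a new spectrum by regressing y on K_hat.
x_new = np.array([0.3, 0.7])
y_new = K @ x_new + rng.normal(0, 0.01, q)
x_hat = np.linalg.lstsq(K_hat, y_new, rcond=None)[0]
print(np.round(x_hat, 2))                  # close to [0.3, 0.7]
```

The EIV insight in the abstract is that, because K_hat itself carries estimation error, the y's of the other prediction-set samples carry usable information about that shared error; the plain estimator above ignores it.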
Cloud-Based Model Calibration Using OpenStudio: Preprint
Hale, E.; Lisell, L.; Goldwasser, D.; Macumber, D.; Dean, J.; Metzger, I.; Parker, A.; Long, N.; Ball, B.; Schott, M.; Weaver, E.; Brackney, L.
2014-03-01
OpenStudio is a free, open source Software Development Kit (SDK) and application suite for performing building energy modeling and analysis. The OpenStudio Parametric Analysis Tool has been extended to allow cloud-based simulation of multiple OpenStudio models parametrically related to a baseline model. This paper describes the new cloud-based simulation functionality and presents a model calibration case study. Calibration is initiated by entering actual monthly utility bill data into the baseline model. Multiple parameters are then varied over multiple iterations to reduce the difference between actual energy consumption and model simulation results, as calculated and visualized by billing period and by fuel type. Simulations are performed in parallel using the Amazon Elastic Compute Cloud (EC2) service. This paper highlights model parameterizations (measures) used for calibration, but the same multi-nodal computing architecture is available for other purposes, for example, recommending combinations of retrofit energy saving measures using the calibrated model as the new baseline.
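The "difference between actual energy consumption and model simulation results" in monthly utility-bill calibration is conventionally scored with two statistics, NMBE and CV(RMSE). The sketch below is an assumed, minimal implementation of those metrics (not OpenStudio's code), with illustrative monthly data.

```python
# Normalized mean bias error (NMBE) and coefficient of variation of the
# RMSE (CVRMSE), the two metrics commonly used to judge monthly
# utility-bill calibration. Both use n-1 degrees of freedom here.

def nmbe(measured, simulated):
    n = len(measured)
    mean = sum(measured) / n
    return 100.0 * sum(m - s for m, s in zip(measured, simulated)) / ((n - 1) * mean)

def cvrmse(measured, simulated):
    n = len(measured)
    mean = sum(measured) / n
    rmse = (sum((m - s) ** 2 for m, s in zip(measured, simulated)) / (n - 1)) ** 0.5
    return 100.0 * rmse / mean

bills = [820, 760, 700, 620, 560, 540, 610, 650, 600, 640, 720, 800]  # kWh, illustrative
model = [800, 770, 690, 640, 550, 560, 600, 660, 590, 650, 700, 810]
print(round(nmbe(bills, model), 2), round(cvrmse(bills, model), 2))
```

ASHRAE Guideline 14 is commonly cited as requiring roughly |NMBE| ≤ 5% and CVRMSE ≤ 15% for monthly calibration; a tool like the one described iterates on measures until the metrics fall inside such bounds.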
Calibration and Confirmation in Geophysical Models
NASA Astrophysics Data System (ADS)
Werndl, Charlotte
2016-04-01
For policy decisions the best geophysical models are needed. To evaluate geophysical models, it is essential that the best available methods for confirmation are used. A hotly debated issue on confirmation in climate science (as well as in philosophy) is the requirement of use-novelty (i.e., that data can only confirm models if they have not already been used before). This talk investigates the issue of use-novelty and double-counting for geophysical models. We will see that the conclusions depend on the framework of confirmation, and that it is not clear that use-novelty is a valid requirement or that double-counting is illegitimate.
SMAP Global Model Calibration Using SMOS Time-Series Observations
NASA Astrophysics Data System (ADS)
Chan, S.; Njoku, E. G.; Bindlish, R.; O'Neill, P. E.; Jackson, T. J.
2014-12-01
Within the suite of SMAP's standard data products is the Level 2 Passive Soil Moisture Product, which is derived primarily from SMAP's brightness temperature (TB) observations. The baseline retrieval algorithm uses an established microwave emission model that had been extensively tested in many past field experiments. One approach to applying the same model at a global scale with SMAP's TB observations is to use the same calibration coefficients derived from past field experiments and apply them globally. Although this approach is a simplification of reality, it resulted in accurate retrieval in several geographically limited studies. Nevertheless, significant retrieval bias may occur in areas where land cover types had not been considered in past field experiments. In this work, a time-series global model calibration approach is proposed and evaluated. One year of gridded L-band TB observations from the Soil Moisture and Ocean Salinity (SMOS) mission were used as the primary input. At each land pixel on the SMAP grid, the observed TBs were compared with the simulated TBs according to the model with unknown calibration coefficients to be determined. Because of the time-series nature of the input, the above comparison could be repeated for successive revisit dates as a system of equations until the number of known variables (TBs) exceeds the number of unknown variables (calibration coefficients and/or geophysical retrieval). Global nonlinear optimization techniques were then applied to the equations to solve for the optimal model calibration coefficients for that pixel. Following global application of this approach, soil moisture estimates were extracted and compared with in-situ ground measurement. The resulting soil moisture estimates were shown to have an accuracy comparable to what was observed in past field experiments, confirming the versatility of this global model calibration approach.
Multi-Dimensional Calibration of Impact Dynamic Models
NASA Technical Reports Server (NTRS)
Horta, Lucas G.; Reaves, Mercedes C.; Annett, Martin S.; Jackson, Karen E.
2011-01-01
NASA Langley, under the Subsonic Rotary Wing Program, recently completed two helicopter tests in support of an in-house effort to study crashworthiness. As part of this effort, work is on-going to investigate model calibration approaches and calibration metrics for impact dynamics models. Model calibration of impact dynamics problems has traditionally assessed model adequacy by comparing time histories from analytical predictions to test at only a few critical locations. Although this approach provides for a direct measure of the model predictive capability, overall system behavior is only qualitatively assessed using full vehicle animations. In order to understand the spatial and temporal relationships of impact loads as they migrate throughout the structure, a more quantitative approach is needed. In this work impact shapes derived from simulated time history data are used to recommend sensor placement and to assess model adequacy using time based metrics and orthogonality multi-dimensional metrics. An approach for model calibration is presented that includes metric definitions, uncertainty bounds, parameter sensitivity, and numerical optimization to estimate parameters to reconcile test with analysis. The process is illustrated using simulated experiment data.
40 CFR Appendix III to Part 86 - Constant Volume Sampler Flow Calibration
Code of Federal Regulations, 2010 CFR
2010-07-01
... connected in series with the pump. The calculated flow rate (ft³/rev at pump inlet absolute pressure and... measurement of these same pump parameters enables the user to calculate the flow rate from the calibration... ETI °F ±0.1 °F. Pressure depression upstream of LFE, EPI, ″H2O ±0.1 ″H2O. Pressure drop across the...
Stepwise calibration procedure for regional coupled hydrological-hydrogeological models
NASA Astrophysics Data System (ADS)
Labarthe, Baptiste; Abasq, Lena; de Fouquet, Chantal; Flipo, Nicolas
2014-05-01
Stream-aquifer interaction is a complex process depending on regional and local processes. Indeed, the groundwater component of the hydrosystem and large-scale heterogeneities control the regional flows towards the alluvial plains and the rivers. Second, the local distribution of streambed permeabilities controls the dynamics of stream-aquifer water fluxes within the alluvial plain, and therefore the near-river piezometric head distribution. In order to better understand water circulation and pollutant transport in watersheds, these multi-dimensional processes have to be integrated into a modelling platform. Thus, the nested-interfaces concept in continental hydrosystem modelling (where regional fluxes, simulated by large-scale models, are imposed at local stream-aquifer interfaces) was presented in Flipo et al. (2014). This concept has been implemented in the EauDyssée modelling platform for a large alluvial plain model (900 km2), part of an 11000 km2 multi-layer aquifer system located in the Seine basin (France). The hydrosystem modelling platform is composed of four spatially distributed modules (Surface, Sub-surface, River and Groundwater), corresponding to four components of the terrestrial water cycle. Considering the large number of parameters to be inferred simultaneously, the calibration process of coupled models is highly computationally demanding and therefore hardly applicable to a real case study of 10000 km2. In order to improve the efficiency of the calibration process, a stepwise calibration procedure is proposed. The stepwise methodology involves determining optimal parameters of all components of the coupled model, to provide near-optimum prior information for the global calibration. It starts with calibration of the surface component parameters. The surface parameters are optimised based on the comparison between simulated and observed discharges (or filtered discharges) at various locations. Once the surface parameters
Technical note: Bayesian calibration of dynamic ruminant nutrition models.
Reed, K F; Arhonditsis, G B; France, J; Kebreab, E
2016-08-01
Mechanistic models of ruminant digestion and metabolism have advanced our understanding of the processes underlying ruminant animal physiology. Deterministic modeling practices ignore the inherent variation within and among individual animals and thus have no way to assess how sources of error influence model outputs. We introduce Bayesian calibration of mathematical models to address the need for robust mechanistic modeling tools that can accommodate error analysis by remaining within the bounds of data-based parameter estimation. For the purpose of prediction, the Bayesian approach generates a posterior predictive distribution that represents the current estimate of the value of the response variable, taking into account both the uncertainty about the parameters and model residual variability. Predictions are expressed as probability distributions, thereby conveying significantly more information than point estimates in regard to uncertainty. Our study illustrates some of the technical advantages of Bayesian calibration and discusses the future perspectives in the context of animal nutrition modeling.
Calibration of hydrologic models using flow-duration curves
NASA Astrophysics Data System (ADS)
Westerberg, I.; Younger, P.; Guerrero, J.; Beven, K.; Seibert, J.; Halldin, S.; Xu, C.
2010-12-01
The usefulness of hydrological models depends on their skill to mimic real-world hydrology as attested by some efficiency criterion. The suitability of traditional criteria, such as the Nash-Sutcliffe efficiency, for model calibration has been much debated. Discharge data are plentiful for a few decades around the 1970s but much less available in recent decades, since the reported number of discharge stations in the world has gone down substantially from the peak in the late 1970s. At the same time global precipitation and climate data such as TRMM and ERA-Interim, used to drive hydrological models, have become more readily available in the last 10-20 years. This mismatch of observation time periods makes traditional model calibration difficult or even impossible for basins where there are no overlapping periods of model input and evaluation data. A new calibration method is proposed here that addresses this mismatch and at the same time accounts for uncertainty in discharge data. An estimation of the discharge-data uncertainty is used as a basis to set limits of acceptability for observed flow-duration curves. These limits are then used for model calibration and evaluation within a Generalised Likelihood Uncertainty Estimation (GLUE) framework. Advantages of the new approach include less risk of bias because of epistemic (knowledge) type input-output errors (e.g. no simulated discharge for an observed flow peak because of no rain gauges in the only part of the catchment where it rained), a calibration that addresses the model performance for the whole flow regime (low, medium and high flows) simultaneously and a more realistic uncertainty estimation since discharge uncertainty is addressed. The new method is most suitable for water-balance model applications. Additional limits of acceptability for snow-routine parameters will be needed in basins with snow and frozen soils.
Bayesian calibration of the Community Land Model using surrogates
Ray, Jaideep; Hou, Zhangshuan; Huang, Maoyi; Swiler, Laura Painton
2014-02-01
We present results from the Bayesian calibration of hydrological parameters of the Community Land Model (CLM), which is often used in climate simulations and Earth system models. A statistical inverse problem is formulated for three hydrological parameters, conditional on observations of latent heat surface fluxes over 48 months. Our calibration method uses polynomial and Gaussian process surrogates of the CLM, and solves the parameter estimation problem using a Markov chain Monte Carlo sampler. Posterior probability densities for the parameters are developed for two sites with different soil and vegetation covers. Our method also allows us to examine the structural error in CLM under two error models. We find that surrogate models can be created for CLM in most cases. The posterior distributions are more predictive than the default parameter values in CLM. Climatologically averaging the observations does not modify the parameters' distributions significantly. The structural error model reveals a correlation time-scale which can be used to identify the physical process that could be contributing to it. While the calibrated CLM has a higher predictive skill, the calibration is under-dispersive.
Bayesian Calibration of the Community Land Model using Surrogates
Ray, Jaideep; Hou, Zhangshuan; Huang, Maoyi; Sargsyan, K.; Swiler, Laura P.
2015-01-01
We present results from the Bayesian calibration of hydrological parameters of the Community Land Model (CLM), which is often used in climate simulations and Earth system models. A statistical inverse problem is formulated for three hydrological parameters, conditioned on observations of latent heat surface fluxes over 48 months. Our calibration method uses polynomial and Gaussian process surrogates of the CLM, and solves the parameter estimation problem using a Markov chain Monte Carlo sampler. Posterior probability densities for the parameters are developed for two sites with different soil and vegetation covers. Our method also allows us to examine the structural error in CLM under two error models. We find that accurate surrogate models can be created for CLM in most cases. The posterior distributions lead to better prediction than the default parameter values in CLM. Climatologically averaging the observations does not modify the parameters’ distributions significantly. The structural error model reveals a correlation time-scale which can potentially be used to identify physical processes that could be contributing to it. While the calibrated CLM has a higher predictive skill, the calibration is under-dispersive.
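The surrogate-plus-MCMC workflow described in the two CLM records above can be illustrated end-to-end in miniature. The sketch below is an assumption-laden toy (a cubic stand-in for the "expensive" model, one parameter instead of three, a flat prior), not the authors' pipeline; it shows the two key ingredients: a polynomial surrogate fitted to design runs, and a random-walk Metropolis sampler run against the surrogate.

```python
import numpy as np

# Surrogate-based Bayesian calibration, minimal sketch. The surrogate is a
# cubic polynomial fitted to 9 "design of experiments" runs of a stand-in
# model; the posterior of one parameter is then sampled by Metropolis MCMC.

rng = np.random.default_rng(3)
expensive = lambda t: t**3 + t                   # stand-in for a costly simulator

design = np.linspace(-2.0, 2.0, 9)               # design runs
coef = np.polyfit(design, expensive(design), deg=3)
surrogate = lambda t: np.polyval(coef, t)        # cheap replacement model

theta_true, sigma = 0.8, 0.05
y_obs = expensive(theta_true) + rng.normal(0.0, sigma)

def log_post(theta):                             # flat prior on [-2, 2]
    if not -2.0 <= theta <= 2.0:
        return -np.inf
    return -0.5 * ((y_obs - surrogate(theta)) / sigma) ** 2

samples, theta, lp = [], 0.0, log_post(0.0)
for _ in range(5000):
    prop = theta + rng.normal(0.0, 0.1)          # random-walk proposal
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:     # Metropolis accept/reject
        theta, lp = prop, lp_prop
    samples.append(theta)
posterior = np.array(samples[1000:])             # discard burn-in
print(round(posterior.mean(), 2))
```

The payoff is the one stressed in the abstracts: every MCMC step evaluates only the cheap surrogate, so tens of thousands of posterior samples cost almost nothing once the design runs are done.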
WEPP: Model use, calibration, and validation
Technology Transfer Automated Retrieval System (TEKTRAN)
The Water Erosion Prediction Project (WEPP) model is a process-based, continuous simulation, distributed parameter, hydrologic and soil erosion prediction system. It has been developed over the past 25 years to allow for easy application to a large number of land management scenarios. Most general o...
Calibration of hydrological models using flow-duration curves
NASA Astrophysics Data System (ADS)
Westerberg, I. K.; Guerrero, J.-L.; Younger, P. M.; Beven, K. J.; Seibert, J.; Halldin, S.; Freer, J. E.; Xu, C.-Y.
2011-07-01
The degree of belief we have in predictions from hydrologic models will normally depend on how well they can reproduce observations. Calibrations with traditional performance measures, such as the Nash-Sutcliffe model efficiency, are challenged by problems including: (1) uncertain discharge data, (2) variable sensitivity of different performance measures to different flow magnitudes, (3) influence of unknown input/output errors and (4) inability to evaluate model performance when observation time periods for discharge and model input data do not overlap. This paper explores a calibration method using flow-duration curves (FDCs) to address these problems. The method focuses on reproducing the observed discharge frequency distribution rather than the exact hydrograph. It consists of applying limits of acceptability for selected evaluation points (EPs) on the observed uncertain FDC in the extended GLUE approach. Two ways of selecting the EPs were tested - based on equal intervals of discharge and of volume of water. The method was tested and compared to a calibration using the traditional model efficiency for the daily four-parameter WASMOD model in the Paso La Ceiba catchment in Honduras and for Dynamic TOPMODEL evaluated at an hourly time scale for the Brue catchment in Great Britain. The volume method of selecting EPs gave the best results in both catchments with better calibrated slow flow, recession and evaporation than the other criteria. Observed and simulated time series of uncertain discharges agreed better for this method both in calibration and prediction in both catchments. An advantage with the method is that the rejection criterion is based on an estimation of the uncertainty in discharge data and that the EPs of the FDC can be chosen to reflect the aims of the modelling application, e.g. using more/less EPs at high/low flows. While the method appears less sensitive to epistemic input/output errors than previous use of limits of acceptability applied
Calibration of hydrological models using flow-duration curves
NASA Astrophysics Data System (ADS)
Westerberg, I. K.; Guerrero, J.-L.; Younger, P. M.; Beven, K. J.; Seibert, J.; Halldin, S.; Freer, J. E.; Xu, C.-Y.
2010-12-01
The degree of belief we have in predictions from hydrologic models depends on how well they can reproduce observations. Calibrations with traditional performance measures such as the Nash-Sutcliffe model efficiency are challenged by problems including: (1) uncertain discharge data, (2) variable importance of the performance with flow magnitudes, (3) influence of unknown input/output errors and (4) inability to evaluate model performance when observation time periods for discharge and model input data do not overlap. A new calibration method using flow-duration curves (FDCs) was developed which addresses these problems. The method focuses on reproducing the observed discharge frequency distribution rather than the exact hydrograph. It consists of applying limits of acceptability for selected evaluation points (EPs) of the observed uncertain FDC in the extended GLUE approach. Two ways of selecting the EPs were tested - based on equal intervals of discharge and of volume of water. The method was tested and compared to a calibration using the traditional model efficiency for the daily four-parameter WASMOD model in the Paso La Ceiba catchment in Honduras and for Dynamic TOPMODEL evaluated at an hourly time scale for the Brue catchment in Great Britain. The volume method of selecting EPs gave the best results in both catchments with better calibrated slow flow, recession and evaporation than the other criteria. Observed and simulated time series of uncertain discharges agreed better for this method both in calibration and prediction in both catchments without resulting in overpredicted simulated uncertainty. An advantage with the method is that the rejection criterion is based on an estimation of the uncertainty in discharge data and that the EPs of the FDC can be chosen to reflect the aims of the modelling application e.g. using more/less EPs at high/low flows. While the new method is less sensitive to epistemic input/output errors than the normal use of limits of
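The limits-of-acceptability idea in the two FDC-calibration records above can be sketched concretely. The code below is illustrative only (synthetic lognormal flows, a flat ±15% discharge-uncertainty band, five evaluation points chosen by equal probability intervals rather than the paper's discharge/volume schemes).

```python
import numpy as np

# Build an observed flow-duration curve (FDC), widen it by an assumed
# discharge-data uncertainty, and accept a simulation only if its FDC lies
# inside the limits at every evaluation point. Synthetic data throughout.

def fdc(flows):
    """Return (exceedance probability, flows sorted descending)."""
    q = np.sort(np.asarray(flows))[::-1]
    p = np.arange(1, len(q) + 1) / (len(q) + 1)     # Weibull plotting position
    return p, q

rng = np.random.default_rng(2)
observed = rng.lognormal(mean=1.0, sigma=0.8, size=1000)
p_obs, q_obs = fdc(observed)

eps = 0.15                                          # assumed +/-15% discharge uncertainty
eval_p = np.array([0.05, 0.25, 0.5, 0.75, 0.95])    # evaluation points (EPs)
q_at = np.interp(eval_p, p_obs, q_obs)
lower, upper = (1 - eps) * q_at, (1 + eps) * q_at   # limits of acceptability

# A "behavioural" candidate model: reproduces the flow distribution closely.
simulated = observed * rng.uniform(0.95, 1.05, size=observed.size)
p_sim, q_sim = fdc(simulated)
q_sim_at = np.interp(eval_p, p_sim, q_sim)
accepted = bool(np.all((q_sim_at >= lower) & (q_sim_at <= upper)))
print(accepted)
```

Note that only the frequency distributions are compared, never time-aligned hydrographs, which is what makes the method usable when discharge and forcing records do not overlap in time.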
Simultaneous calibration of hydrological models in geographical space
NASA Astrophysics Data System (ADS)
Bárdossy, A.; Huang, Y.; Wagener, T.
2015-10-01
Hydrological models are usually calibrated for selected catchments individually using specific performance criteria. This procedure assumes that the catchments show individual behavior. As a consequence, the transfer of model parameters to other ungauged catchments is problematic. In this paper, the possibility of transferring part of the model parameters was investigated. Three different conceptual hydrological models were considered. The models were restructured by introducing a new parameter η which exclusively controls water balances. This parameter was considered as individual to each catchment. All other parameters, which mainly control the dynamics of the discharge (dynamical parameters), were considered for spatial transfer. Three hydrological models combined with three different performance measures were used in four different numerical experiments to investigate this transferability. The first numerical experiment, individual calibration of the models for 15 selected MOPEX catchments, showed that it is difficult to identify which catchments share common dynamical parameters. Parameters of one catchment might be good for another catchment but not vice versa. In the second numerical experiment, a common spatial calibration strategy was used. It was explicitly assumed that the catchments share common dynamical parameters. This strategy leads to parameters which perform well on all catchments. A leave-one-out common calibration showed that in this case a good parameter transfer to ungauged catchments can be achieved. In the third numerical experiment, the common calibration methodology was applied for 96 catchments. Another set of 96 catchments was used to test the transfer of common dynamical parameters. The results show that even a large number of catchments share similar dynamical parameters. The performance is worse than those obtained by individual calibration, but the transfer to ungauged catchments remains possible. The performance of the common
Calibrating Subjective Probabilities Using Hierarchical Bayesian Models
NASA Astrophysics Data System (ADS)
Merkle, Edgar C.
A body of psychological research has examined the correspondence between a judge's subjective probability of an event's outcome and the event's actual outcome. The research generally shows that subjective probabilities are noisy and do not match the "true" probabilities. However, subjective probabilities are still useful for forecasting purposes if they bear some relationship to true probabilities. The purpose of the current research is to exploit relationships between subjective probabilities and outcomes to create improved, model-based probabilities for forecasting. Once the model has been trained in situations where the outcome is known, it can then be used in forecasting situations where the outcome is unknown. These concepts are demonstrated using experimental psychology data, and potential applications are discussed.
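A minimal version of the recalibration idea above is to fit a single log-odds scaling parameter mapping a judge's stated probability to a model-based probability, trained on events whose outcomes are known. This is an illustrative stand-in for the chapter's hierarchical Bayesian treatment; the one-parameter model and simulated data are assumptions.

```python
import math
import random

# Recalibrate subjective probabilities p via P(outcome) = sigmoid(a * logit(p)),
# fitting the scaling a by maximum likelihood (Newton's method). a < 1 means
# judges are overconfident and their probabilities should be shrunk toward 0.5.

def sigmoid(z): return 1.0 / (1.0 + math.exp(-z))
def logit(p): return math.log(p / (1.0 - p))

random.seed(7)
a_true = 0.5                                        # simulated overconfident judges
judged = [random.uniform(0.05, 0.95) for _ in range(5000)]
outcomes = [1 if random.random() < sigmoid(a_true * logit(p)) else 0 for p in judged]

a = 1.0
for _ in range(25):                                 # Newton steps on the log-likelihood
    g = sum((y - sigmoid(a * logit(p))) * logit(p) for p, y in zip(judged, outcomes))
    h = -sum(sigmoid(a * logit(p)) * (1.0 - sigmoid(a * logit(p))) * logit(p) ** 2
             for p in judged)
    a -= g / h
print(round(a, 2))
```

Once fitted on resolved events, the map `sigmoid(a * logit(p))` is applied to new, unresolved forecasts, exactly the train-then-forecast usage the abstract describes.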
Hydrologic and water quality models: Use, calibration, and validation
Technology Transfer Automated Retrieval System (TEKTRAN)
This paper introduces a special collection of 22 research articles that present and discuss calibration and validation concepts in detail for hydrologic and water quality models by their developers and presents a broad framework for developing the American Society of Agricultural and Biological Engi...
Robust camera calibration for sport videos using court models
NASA Astrophysics Data System (ADS)
Farin, Dirk; Krabbe, Susanne; de With, Peter H. N.; Effelsberg, Wolfgang
2003-12-01
We propose an automatic camera calibration algorithm for court sports. The obtained camera calibration parameters are required for applications that need to convert positions in the video frame to real-world coordinates or vice versa. Our algorithm uses a model of the arrangement of court lines for calibration. Since the court model can be specified by the user, the algorithm can be applied to a variety of different sports. The algorithm starts with a model initialization step which locates the court in the image without any user assistance or a-priori knowledge about the most probable position. Image pixels are classified as court line pixels if they pass several tests including color and local texture constraints. A Hough transform is applied to extract line elements, forming a set of court line candidates. The subsequent combinatorial search establishes correspondences between lines in the input image and lines from the court model. For the succeeding input frames, an abbreviated calibration algorithm is used, which predicts the camera parameters for the new image and optimizes the parameters using a gradient-descent algorithm. We have conducted experiments on a variety of sport videos (tennis, volleyball, and goal area sequences of soccer games). Video scenes with considerable difficulties were selected to test the robustness of the algorithm. Results show that the algorithm is very robust to occlusions, partial court views, bad lighting conditions, or shadows.
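The geometric core of court-model calibration is a homography between the court plane and the image, recoverable from four line-intersection correspondences. The sketch below shows the standard direct linear transform (DLT) solution; the court dimensions are real tennis measurements but the pixel coordinates are hypothetical, and the line-detection stages (color tests, Hough transform, combinatorial search) are omitted.

```python
import numpy as np

# Solve for the 3x3 homography H with dst ~ H @ src (homogeneous coordinates)
# from point correspondences, via the SVD null space of the DLT system.

def homography(src, dst):
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, vt = np.linalg.svd(np.array(rows, float))
    return vt[-1].reshape(3, 3)       # null vector = H up to scale

def project(H, pt):
    p = H @ np.array([pt[0], pt[1], 1.0])
    return p[0] / p[2], p[1] / p[2]

court = [(0, 0), (10.97, 0), (10.97, 23.77), (0, 23.77)]  # tennis court corners (m)
image = [(120, 400), (520, 410), (460, 80), (180, 75)]    # hypothetical pixel positions
H = homography(court, image)
print(tuple(round(c, 1) for c in project(H, court[1])))
```

With exactly four correspondences in general position the fit is exact; the paper's combinatorial search is essentially a way of finding which detected image lines yield such correspondences against the court model.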
Optical model and calibration of a sun tracker
NASA Astrophysics Data System (ADS)
Volkov, Sergei N.; Samokhvalov, Ignatii V.; Cheong, Hai Du; Kim, Dukhyeon
2016-09-01
Sun trackers are widely used to investigate scattering and absorption of solar radiation in the Earth's atmosphere. We present a method for optimization of the optical altazimuth sun tracker model with output radiation direction aligned with the axis of a stationary spectrometer. The method solves the problem of stability loss in tracker pointing at the Sun near the zenith. An optimal method for tracker calibration at the measurement site is proposed in the present work. A method of moving calibration is suggested for mobile applications in the presence of large temperature differences and errors in the alignment of the optical system of the tracker.
Calibration of longwavelength exotech model 20-C spectroradiometer
NASA Technical Reports Server (NTRS)
Kumar, R.; Robinson, B.; Silva, L.
1978-01-01
A brief description is given of the Exotech model 20-C field spectroradiometer, which measures the spectral radiance of a target in the wavelength ranges 0.37 to 2.5 microns (short-wavelength unit) and 2.8 to 5.6 microns and 7.0 to 14 microns (long-wavelength unit). Wavelength calibration of the long-wavelength unit was performed using the strong, sharp, and accurately known absorption bands of polystyrene, atmospheric carbon dioxide, and methyl cyclohexane (liquid) in the infrared wavelength region. The spectral radiance calibration was done by recording spectral scans of hot and cold blackbodies and assuming that spectral radiance varies linearly with the signal.
Zhang, Xuesong; Srinivasan, Raghavan; Arnold, J. G.; Izaurralde, Roberto C.; Bosch, David
2011-04-21
Accurate analysis of water flow pathways from rainfall to streams is critical for simulating water use, climate change impact, and contaminant transport. In this study, we developed a new scheme to simultaneously calibrate surface flow (SF) and baseflow (BF) simulations of the soil and water assessment tool (SWAT) by combining evolutionary multi-objective optimization (EMO) and BF separation techniques. The application of this scheme demonstrated a pronounced trade-off in SWAT's performance on SF and BF simulations. The simulated major water fluxes and storage variables (e.g. soil moisture, evapotranspiration, and groundwater) using the multiple parameter sets from EMO span wide ranges. Uncertainty analysis was conducted by Bayesian model averaging of the Pareto optimal solutions. The 90% confidence interval (CI) estimated using all streamflows substantially overestimates the uncertainty of low flows on BF days while underestimating the uncertainty of high flows on SF days. Despite using statistical criteria calculated based on streamflow for model selection, it is important to conduct diagnostic analysis of the agreement between SWAT behaviour and actual watershed dynamics. The new calibration technique can serve as a useful tool to explore the trade-off between SF and BF simulations and provide candidates for further diagnostic assessment and model identification.
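The trade-off the study describes surfaces as a Pareto front: parameter sets for which no other set is better on both the surface-flow and baseflow objectives at once. A minimal sketch of identifying that front follows; the error values are illustrative, not SWAT results.

```python
# Identify Pareto-optimal (non-dominated) candidates for two objectives to be
# minimized: (surface-flow error, baseflow error). Illustrative values only.

def pareto_front(points):
    """Return points not dominated by any other point (minimizing both)."""
    front = []
    for p in points:
        dominated = any(q[0] <= p[0] and q[1] <= p[1] and q != p for q in points)
        if not dominated:
            front.append(p)
    return front

candidates = [(0.30, 0.60), (0.35, 0.40), (0.50, 0.20), (0.45, 0.45), (0.60, 0.15)]
print(pareto_front(candidates))
```

In the paper's scheme, EMO generates such candidates and Bayesian model averaging over the surviving front is what yields the uncertainty intervals.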
Analysis of Sting Balance Calibration Data Using Optimized Regression Models
NASA Technical Reports Server (NTRS)
Ulbrich, Norbert; Bader, Jon B.
2009-01-01
Calibration data of a wind tunnel sting balance was processed using a search algorithm that identifies an optimized regression model for the data analysis. The selected sting balance had two moment gages that were mounted forward and aft of the balance moment center. The difference and the sum of the two gage outputs were fitted in the least squares sense using the normal force and the pitching moment at the balance moment center as independent variables. The regression model search algorithm predicted that the difference of the gage outputs should be modeled using the intercept and the normal force. The sum of the two gage outputs, on the other hand, should be modeled using the intercept, the pitching moment, and the square of the pitching moment. Equations of the deflection of a cantilever beam are used to show that the search algorithm's two recommended math models can also be obtained after performing a rigorous theoretical analysis of the deflection of the sting balance under load. The analysis of the sting balance calibration data set is a rare example of a situation when regression models of balance calibration data can be derived directly from first principles of physics and engineering. In addition, it is interesting to see that the search algorithm recommended the same regression models for the data analysis using only a set of statistical quality metrics.
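A regression-model search of the kind described can be sketched as an exhaustive fit over candidate term subsets scored by a statistical quality metric. The code below is an assumed, simplified stand-in (adjusted R² over all subsets of four candidate terms, simulated gage data), not NASA's algorithm or metrics.

```python
import numpy as np
from itertools import combinations

# Fit every subset of candidate regressor terms and keep the model with the
# best adjusted R^2. The synthetic response mimics the paper's finding that
# the gage-output sum depends on M and M^2 only.

rng = np.random.default_rng(5)
N = rng.uniform(-1, 1, 200)                       # normal force
M = rng.uniform(-1, 1, 200)                       # pitching moment
terms = {"N": N, "M": M, "M^2": M**2, "N^2": N**2}
y = 0.3 + 1.5 * M + 0.8 * M**2 + rng.normal(0, 0.01, 200)

def adj_r2(y, X):
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    n, p = X.shape
    ss_res, ss_tot = (resid**2).sum(), ((y - y.mean())**2).sum()
    return 1 - (ss_res / (n - p)) / (ss_tot / (n - 1))

best = None
for k in range(1, len(terms) + 1):
    for combo in combinations(terms, k):
        X = np.column_stack([np.ones_like(y)] + [terms[t] for t in combo])
        score = adj_r2(y, X)
        if best is None or score > best[0]:
            best = (score, combo)
print(sorted(best[1]))
```

Real balance-calibration searches use richer metrics (e.g. term significance and hierarchy rules) and many more candidate terms, but the structure, enumerate models and rank by statistical quality, is the same.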
Freire, Ricardo O; da Costa, Nivan B; Rocha, Gerd B; Simas, Alfredo M
2007-07-01
The Sparkle/PM3 model is extended to neodymium(III), promethium(III), and samarium(III) complexes. The unsigned mean error, for all Sparkle/PM3 interatomic distances between the trivalent lanthanide ion and the ligand atoms of the first sphere of coordination, is 0.074 Å for Nd(III); 0.057 Å for Pm(III); and 0.075 Å for Sm(III). These figures are similar to the Sparkle/AM1 ones of 0.076 Å, 0.059 Å, and 0.075 Å, respectively, indicating they are all comparable models. Moreover, their accuracy is similar to what can be obtained by present-day ab initio effective potential calculations on such lanthanide complexes. Hence, the choice of which model to utilize will depend on the assessment of the effect of either AM1 or PM3 on the quantum chemical description of the organic ligands. Finally, we present a preliminary attempt to verify the geometry prediction consistency of Sparkle/PM3. Since lanthanide complexes are usually flexible, we randomly generated 200 different input geometries for the samarium complex QIPQOV which were then fully optimized by Sparkle/PM3. A trend appeared in that, on average, the lower the total energy of the local minima found, the lower the unsigned mean errors, and the higher the accuracy of the model. These preliminary results do indicate that attempting to find, with Sparkle/PM3, a global minimum for the geometry of a given complex, with the understanding that it will tend to be closer to the experimental geometry, appears to be warranted. Therefore, the sparkle model is seemingly a trustworthy semiempirical quantum chemical model for the prediction of lanthanide complexes geometries.
An Expectation-Maximization Method for Calibrating Synchronous Machine Models
Meng, Da; Zhou, Ning; Lu, Shuai; Lin, Guang
2013-07-21
The accuracy of a power system dynamic model is essential to its secure and efficient operation. Lower confidence in model accuracy usually leads to conservative operation and lowers asset usage. To improve model accuracy, this paper proposes an expectation-maximization (EM) method to calibrate the synchronous machine model using phasor measurement unit (PMU) data. First, an extended Kalman filter (EKF) is applied to estimate the dynamic states using measurement data. Then, the parameters are calculated based on the estimated states using the maximum likelihood estimation (MLE) method. The EM method iterates over the preceding two steps to improve estimation accuracy. The proposed EM method's performance is evaluated using a single-machine infinite-bus system and compared with a method where both states and parameters are estimated using an EKF. Sensitivity studies of the parameter calibration using the EM method are also presented to show the robustness of the proposed method for different levels of measurement noise and initial parameter uncertainty.
Calibration of the hydrogeological model of the Baltic Artesian Basin
NASA Astrophysics Data System (ADS)
Virbulis, J.; Klints, I.; Timuhins, A.; Sennikovs, J.; Bethers, U.
2012-04-01
We consider the calibration of a model of the Baltic Artesian Basin (BAB), a complex hydrogeological system in the southeastern Baltic region with a surface area close to 0.5 million square kilometers. The model of the geological structure contains 42 layers, including aquifers and aquitards, with sediment ages ranging from the Cambrian up to the Quaternary. A finite element model was developed for the calculation of steady-state three-dimensional groundwater flow with a free surface. No-flow boundary conditions were applied on the rock bottom and the side boundaries of the BAB, while a simple hydrological model is applied on the surface. The level of the lakes, rivers and the sea is fixed as a constant hydraulic head. A constant mean value of 70 mm/year was assumed for the infiltration flux elsewhere and adjusted during the automatic calibration process. Averaged long-term water extraction was applied at the water supply wells. Calibration is one of the most important steps during the development of a hydrogeological model. Knowledge about the parameters of the modeled system is often insufficient, especially for large regional models, and a lack of geometric and hydraulic conductivity data is typical. The quasi-Newton optimization method L-BFGS-B is used for the calibration of the BAB model. The model is calibrated against the available water level measurements in monitoring wells and level measurements in boreholes during their installation. As the available data are not uniformly distributed over the covered area, a weight coefficient is assigned to each borehole so that clusters of boreholes are not overweighted. The year 2000 is chosen as the reference year for the present-time scenario, and data from surrounding years are also taken into account, but with smaller weighting coefficients. The objective function to be minimized by the calibration process is the weighted sum of squared differences between observed and modeled piezometric heads.
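The calibration setup in the abstract (L-BFGS-B minimizing a weighted sum of squared head residuals) can be sketched with `scipy.optimize`. The forward model, parameter names, ranges, and weights below are invented placeholders; a real BAB run would call the finite element solver instead of the toy `heads` function.

```python
import numpy as np
from scipy.optimize import minimize

# Toy stand-in: observed heads at monitoring wells, a cheap forward model
# heads(p), and per-borehole weights w_i (all names are illustrative).
rng = np.random.default_rng(1)
x_obs = np.linspace(0.0, 1.0, 30)                  # well locations (arbitrary units)
true = np.array([10.0, -3.0])                      # "true" model parameters

def heads(p):                                      # simple linear forward model
    return p[0] + p[1] * x_obs

h_obs = heads(true) + rng.normal(0, 0.05, x_obs.size)
w = np.full(x_obs.size, 1.0)                       # weights to de-cluster boreholes

def objective(p):
    r = heads(p) - h_obs
    return float(np.sum(w * r * r))                # weighted sum of squared residuals

res = minimize(objective, x0=[0.0, 0.0], method="L-BFGS-B",
               bounds=[(-50.0, 50.0), (-50.0, 50.0)])
```

Bounds keep the quasi-Newton search inside physically plausible parameter ranges, which is also why L-BFGS-B (rather than plain BFGS) is a common choice for such calibrations.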
NASA Technical Reports Server (NTRS)
Lokos, William; Miller, Eric; Hudson, Larry; Holguin, Andrew; Neufeld, David; Haraguchi, Ronnie
2015-01-01
This paper describes the design and conduct of the strain gage load calibration ground test of the SubsoniC Research Aircraft Testbed, Gulfstream III aircraft, and the subsequent data analysis and its results. The goal of this effort was to create and validate multi-gage load equations for shear force, bending moment, and torque for two wing measurement stations. For some of the testing the aircraft was supported by three air bags in order to isolate the wing structure from extraneous load inputs through the main landing gear. Thirty-two strain gage bridges were installed on the left wing. Hydraulic loads were applied to the wing lower surface through a total of 16 load zones. Some dead weight load cases were applied to the upper wing surface using shot bags. Maximum applied loads reached 54,000 pounds.
Non-linear calibration models for near infrared spectroscopy.
Ni, Wangdong; Nørgaard, Lars; Mørup, Morten
2014-02-27
Different calibration techniques are available for spectroscopic applications that show nonlinear behavior. This study presents a comprehensive comparison of different nonlinear calibration techniques: kernel PLS (KPLS), support vector machines (SVM), least-squares SVM (LS-SVM), relevance vector machines (RVM), Gaussian process regression (GPR), artificial neural networks (ANN), and Bayesian ANN (BANN). Partial least squares (PLS) regression is used as a linear benchmark, and the relationship of the methods to traditional calibration by ridge regression (RR) is considered. The performance of the different methods is demonstrated through practical applications to three real-life near infrared (NIR) data sets. Different aspects of the various approaches are discussed, including computational time, model interpretability, potential over-fitting when using nonlinear models on linear problems, robustness to small or medium sample sets, and robustness to pre-processing. The results suggest that GPR and BANN are powerful and promising methods for handling linear as well as nonlinear systems, even when the data sets are moderately small. LS-SVM is also attractive due to its good predictive performance for both linear and nonlinear calibrations.
Soybean Physiology Calibration in the Community Land Model
NASA Astrophysics Data System (ADS)
Drewniak, B. A.; Bilionis, I.; Constantinescu, E. M.
2014-12-01
With the large influence of agricultural land use on biophysical and biogeochemical cycles, integrating cultivation into Earth System Models (ESMs) is increasingly important. The Community Land Model (CLM) was augmented with a CLM-Crop extension that simulates the development of three crop types: maize, soybean, and spring wheat. The CLM-Crop model is a complex system that relies on a suite of parametric inputs that govern plant growth under a given atmospheric forcing and available resources. However, the strong nonlinearity of ESMs makes parameter fitting a difficult task. In this study, our goal is to calibrate ten of the CLM-Crop parameters for one crop type, soybean, in order to improve model projection of plant development and carbon fluxes. We used measurements of gross primary productivity, net ecosystem exchange, and plant biomass from AmeriFlux sites to choose parameter values that optimize crop productivity in the model. Calibration is performed in a Bayesian framework by developing a scalable and adaptive scheme based on sequential Monte Carlo (SMC). Our scheme can perform model calibration using very few evaluations and, by exploiting parallelism, at a fraction of the time required by plain vanilla Markov Chain Monte Carlo (MCMC). We present the results from a twin experiment (self-validation) and calibration results and validation using real observations from an AmeriFlux tower site in the Midwestern United States, for the soybean crop type. The improved model will help researchers understand how climate affects crop production and resulting carbon fluxes, and additionally, how cultivation impacts climate.
Calibration of two complex ecosystem models with different likelihood functions
NASA Astrophysics Data System (ADS)
Hidy, Dóra; Haszpra, László; Pintér, Krisztina; Nagy, Zoltán; Barcza, Zoltán
2014-05-01
The biosphere is a sensitive carbon reservoir. Terrestrial ecosystems were approximately carbon neutral during the past centuries, but they became net carbon sinks due to climate-change-induced environmental change and the associated CO2 fertilization effect of the atmosphere. Model studies and measurements indicate that the biospheric carbon sink can saturate in the future due to ongoing climate change, which can act as a positive feedback. Robustness of carbon cycle models is a key issue when trying to choose the appropriate model for decision support. The input parameters of process-based models are decisive regarding the model output. At the same time, there are several input parameters for which accurate values are hard to obtain directly from experiments or for which no local measurements are available. Due to the uncertainty associated with the unknown model parameters, significant bias can be experienced if the model is used to simulate the carbon and nitrogen cycle components of different ecosystems. In order to improve model performance, the unknown model parameters have to be estimated. We developed a multi-objective, two-step calibration method based on a Bayesian approach in order to estimate the unknown parameters of the PaSim and Biome-BGC models. Biome-BGC and PaSim are widely used biogeochemical models that simulate the storage and flux of water, carbon, and nitrogen between the ecosystem and the atmosphere, and within the components of terrestrial ecosystems (in this research a further developed version of Biome-BGC is used, referred to as BBGC MuSo). Both models were calibrated regardless of the simulated processes and the type of model parameters. The calibration procedure is based on the comparison of measured data with simulated results via calculating a likelihood function (degree of goodness-of-fit between simulated and measured data). In our research different likelihood function formulations were used in order to examine the effect of the different model
Model calibration criteria for estimating ecological flow characteristics
Vis, Marc; Knight, Rodney; Poole, Sandra; Wolfe, William; Seibert, Jan; Breuer, Lutz; Kraft, Philipp
2016-01-01
Quantification of streamflow characteristics in ungauged catchments remains a challenge. Hydrological modeling is often used to derive flow time series and to calculate streamflow characteristics for subsequent applications that may differ from those envisioned by the modelers. While the estimation of model parameters for ungauged catchments is a challenging research task in itself, it is important to evaluate whether simulated time series preserve critical aspects of the streamflow hydrograph. To address this question, seven calibration objective functions were evaluated for their ability to preserve ecologically relevant streamflow characteristics of the average annual hydrograph using a runoff model, HBV-light, at 27 catchments in the southeastern United States. Calibration trials were repeated 100 times to reduce parameter uncertainty effects on the results, and 12 ecological flow characteristics were computed for comparison. Our results showed that the most suitable calibration strategy varied according to streamflow characteristic. Combined objective functions generally gave the best results, though a clear underprediction bias was observed. The occurrence of low prediction errors for certain combinations of objective function and flow characteristic suggests that (1) incorporating multiple ecological flow characteristics into a single objective function would increase model accuracy, potentially benefitting decision-making processes; and (2) there may be a need to have different objective functions available to address specific applications of the predicted time series.
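The idea of combined objective functions for hydrograph calibration can be illustrated with a minimal sketch: a Nash-Sutcliffe efficiency (NSE) on flows blended with an NSE on log-flows, so that high-flow and low-flow behavior both influence the fit. The blend and weights are generic hedged examples, not the seven objective functions evaluated in the paper.

```python
import numpy as np

def nse(sim, obs):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit, 0 matches the obs mean."""
    sim = np.asarray(sim, float)
    obs = np.asarray(obs, float)
    return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

def combined_objective(sim, obs, w=0.5):
    """Weighted blend of NSE on flows and NSE on log-flows (flows must be > 0),
    so both flood peaks and low-flow recessions shape the calibration.
    Illustrative only; not one of the paper's specific formulations."""
    return w * nse(sim, obs) + (1.0 - w) * nse(np.log(sim), np.log(obs))

obs = np.array([1.0, 2.0, 8.0, 3.0, 1.5, 0.8])   # toy observed flows
good = obs * 1.05                                # simulation close to observations
poor = np.full_like(obs, obs.mean())             # simulation stuck at the mean
```

A calibration routine would maximize `combined_objective` over the model's parameters; here the "good" simulation scores clearly higher than the mean-flow baseline.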
MODELING NATURAL ATTENUATION OF FUELS WITH BIOPLUME III
A natural attenuation model that simulates the aerobic and anaerobic biodegradation of fuel hydrocarbons was developed. The resulting model, BIOPLUME III, demonstrates the importance of biodegradation in reducing contaminant concentrations in ground water. In hypothetical simulat...
Computationally efficient calibration of WATCLASS Hydrologic models using surrogate optimization
NASA Astrophysics Data System (ADS)
Kamali, M.; Ponnambalam, K.; Soulis, E. D.
2007-07-01
In this approach, exploration of the cost function space was performed with an inexpensive surrogate function rather than the expensive original function. The Design and Analysis of Computer Experiments (DACE) surrogate function, an approximate model that uses a correlation function for the error term, was employed. The results for Monte Carlo sampling, Latin hypercube sampling, and the DACE approximate model were compared. The results show that the DACE model has good potential for predicting the trend of simulation results. The case study of this work was calibration of the WATCLASS hydrologic model on the Smokey River watershed.
A controlled experiment in ground water flow model calibration
Hill, M.C.; Cooley, R.L.; Pollock, D.W.
1998-01-01
Nonlinear regression was introduced to ground water modeling in the 1970s, but has been used very little to calibrate numerical models of complicated ground water systems. Apparently, nonlinear regression is thought by many to be incapable of addressing such complex problems. With what we believe to be the most complicated synthetic test case used for such a study, this work investigates using nonlinear regression in ground water model calibration. Results of the study fall into two categories. First, the study demonstrates how systematic use of a well designed nonlinear regression method can indicate the importance of different types of data and can lead to successive improvement of models and their parameterizations. Our method differs from previous methods presented in the ground water literature in that (1) weighting is more closely related to expected data errors than is usually the case; (2) defined diagnostic statistics allow for more effective evaluation of the available data, the model, and their interaction; and (3) prior information is used more cautiously. Second, our results challenge some commonly held beliefs about model calibration. For the test case considered, we show that (1) field measured values of hydraulic conductivity are not as directly applicable to models as their use in some geostatistical methods imply; (2) a unique model does not necessarily need to be identified to obtain accurate predictions; and (3) in the absence of obvious model bias, model error was normally distributed. The complexity of the test case involved implies that the methods used and conclusions drawn are likely to be powerful in practice.
Use of Cloud Computing to Calibrate a Highly Parameterized Model
NASA Astrophysics Data System (ADS)
Hayley, K. H.; Schumacher, J.; MacMillan, G.; Boutin, L.
2012-12-01
We present a case study using cloud computing to facilitate the calibration of a complex and highly parameterized model of regional groundwater flow. The calibration dataset consisted of many (~1500) measurements or estimates of static hydraulic head, a high resolution time series of groundwater extraction and disposal rates at 42 locations and pressure monitoring at 147 locations with a total of more than one million raw measurements collected over a ten year pumping history, and base flow estimates at 5 surface water monitoring locations. This modeling project was undertaken to assess the sustainability of groundwater withdrawal and disposal plans for in situ heavy oil extraction in Northeast Alberta, Canada. The geological interpretations used for model construction were based on more than 5,000 wireline logs collected throughout the 30,865 km2 regional study area (RSA), and resulted in a model with 28 slices and 28 hydrostratigraphic units (average model thickness of 700 m, with aquifers ranging from a depth of 50 to 500 m below ground surface). The finite element FEFLOW model constructed on this geological interpretation had 331,408 nodes and required 265 time steps to simulate the ten year transient calibration period. This numerical model of groundwater flow required 3 hours to run on a server with two 2.8 GHz processors and 16 GB of RAM. Calibration was completed using PEST. Horizontal and vertical hydraulic conductivity as well as specific storage for each unit were independent parameters. For the recharge and the horizontal hydraulic conductivity in the three aquifers with the most transient groundwater use, a pilot point parameterization was adopted. A 7 × 7 grid of pilot points was defined over the RSA that defined a spatially variable horizontal hydraulic conductivity or recharge field. A 7 × 7 grid of multiplier pilot points that perturbed the more regional field was then superimposed over the 3,600 km2 local study area (LSA). The pilot point
Design of Experiments, Model Calibration and Data Assimilation
Williams, Brian J.
2014-07-30
This presentation provides an overview of emulation, calibration and experiment design for computer experiments. Emulation refers to building a statistical surrogate from a carefully selected and limited set of model runs to predict unsampled outputs. The standard kriging approach to emulation of complex computer models is presented. Calibration refers to the process of probabilistically constraining uncertain physics/engineering model inputs to be consistent with observed experimental data. An initial probability distribution for these parameters is updated using the experimental information. Markov chain Monte Carlo (MCMC) algorithms are often used to sample the calibrated parameter distribution. Several MCMC algorithms commonly employed in practice are presented, along with a popular diagnostic for evaluating chain behavior. Space-filling approaches to experiment design for selecting model runs to build effective emulators are discussed, including Latin Hypercube Design and extensions based on orthogonal array skeleton designs and imposed symmetry requirements. Optimization criteria that further enforce space-filling, possibly in projections of the input space, are mentioned. Designs to screen for important input variations are summarized and used for variable selection in a nuclear fuels performance application. This is followed by illustration of sequential experiment design strategies for optimization, global prediction, and rare event inference.
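The space-filling Latin hypercube designs mentioned in the overview are directly available in SciPy. The sketch below draws a 20-run design over three scaled inputs and checks the defining Latin hypercube property: each one-dimensional projection places exactly one point in each of the n equal-probability bins. The parameter ranges are invented for illustration.

```python
import numpy as np
from scipy.stats import qmc

# One-shot space-filling design: a Latin hypercube over 3 scaled inputs.
sampler = qmc.LatinHypercube(d=3, seed=0)
unit = sampler.random(n=20)                        # 20 runs in [0, 1)^3
lo = [0.1, 300.0, 1e-4]                            # made-up physical lower bounds
hi = [2.0, 900.0, 1e-2]                            # made-up physical upper bounds
design = qmc.scale(unit, lo, hi)                   # runs in physical units

# Latin hypercube property: one point per bin in every 1-D projection.
bins = np.floor(unit * 20).astype(int)
```

Each row of `design` is one model run for building the emulator; extensions such as orthogonal-array-based or symmetric designs constrain the same unit-cube sample further before scaling.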
A new sewage exfiltration model--parameters and calibration.
Karpf, Christian; Krebs, Peter
2011-01-01
Exfiltration of waste water from sewer systems represents a potential danger for the soil and the aquifer. Common models, which are used to describe the exfiltration process, are based on the law of Darcy, extended by a more or less detailed consideration of the expansion of leaks, the characteristics of the soil and the colmation layer. But, due to the complexity of the exfiltration process, the calibration of these models includes a significant uncertainty. In this paper, a new exfiltration approach is introduced, which implements the dynamics of the clogging process and the structural conditions near sewer leaks. The calibration is realised according to experimental studies and analysis of groundwater infiltration to sewers. Furthermore, exfiltration rates and the sensitivity of the approach are estimated and evaluated, respectively, by Monte-Carlo simulations.
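The Darcy-law starting point and the Monte Carlo uncertainty analysis described in the abstract can be sketched generically. The flux law below is the textbook form, not the paper's extended clogging-dynamics approach, and all parameter distributions are invented for illustration.

```python
import numpy as np

def darcy_exfiltration(k, A, dh, L):
    """Darcy-type leak flux Q = k * A * dh / L (m^3/s).
    k: hydraulic conductivity of the colmation layer (m/s),
    A: leak area (m^2), dh: head difference (m), L: layer thickness (m).
    Generic textbook form, not the paper's extended model."""
    return k * A * dh / L

# Monte Carlo propagation of parameter uncertainty through the flux law.
rng = np.random.default_rng(42)
n = 10_000
k = rng.lognormal(mean=np.log(1e-6), sigma=0.5, size=n)   # m/s (assumed)
A = rng.uniform(1e-4, 1e-3, n)                            # m^2 (assumed)
dh = rng.uniform(0.05, 0.5, n)                            # m   (assumed)
L = rng.uniform(0.01, 0.05, n)                            # m   (assumed)
Q = darcy_exfiltration(k, A, dh, L)
q5, q50, q95 = np.percentile(Q, [5, 50, 95])
```

The spread between the 5th and 95th percentiles of `Q` is the kind of output such Monte Carlo runs use to express the calibration uncertainty the paper discusses.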
Spatial and Temporal Self-Calibration of a Hydroeconomic Model
NASA Astrophysics Data System (ADS)
Howitt, R. E.; Hansen, K. M.
2008-12-01
Hydroeconomic modeling of water systems where risk and reliability of water supply are of critical importance must address explicitly how to model water supply uncertainty. When large fluctuations in annual precipitation and significant variation in flows by location are present, a model which solves with perfect foresight of future water conditions may be inappropriate for some policy and research questions. We construct a simulation-optimization model with limited foresight of future water conditions using positive mathematical programming and self-calibration techniques. This limited foresight netflow (LFN) model signals the value of storing water for future use and reflects a more accurate economic value of water at key locations, given that future water conditions are unknown. Failure to explicitly model this uncertainty could lead to undervaluation of storage infrastructure and contractual mechanisms for managing water supply risk. A model based on sequentially updated information is more realistic, since water managers make annual storage decisions without knowledge of yet-to-be-realized future water conditions. The LFN model runs historical hydrological conditions through the current configuration of the California water system to determine the economically efficient allocation of water under current economic conditions and infrastructure. The model utilizes current urban and agricultural demands, storage and conveyance infrastructure, and the state's hydrological history to indicate the scarcity value of water at key locations within the state. Further, the temporal calibration penalty functions vary by year type, reflecting agricultural water users' ability to alter cropping patterns in response to water conditions. The model employs techniques from positive mathematical programming (Howitt, 1995; Howitt, 1998; Cai and Wang, 2006) to generate penalty functions that are applied to deviations from observed data. The functions are applied to monthly flows.
Methane emission modeling with MCMC calibration for a boreal peatland
NASA Astrophysics Data System (ADS)
Raivonen, Maarit; Smolander, Sampo; Susiluoto, Jouni; Backman, Leif; Li, Xuefei; Markkanen, Tiina; Kleinen, Thomas; Makela, Jarmo; Aalto, Tuula; Rinne, Janne; Brovkin, Victor; Vesala, Timo
2016-04-01
Natural wetlands, particularly peatlands of the boreal latitudes, are a significant source of methane (CH4). At the moment, the emission estimates are highly uncertain. These natural emissions respond to climatic variability, so it is necessary to understand their dynamics in order to be able to predict how they affect the greenhouse gas balance in the future. We have developed a model of CH4 production, oxidation and transport in boreal peatlands. It simulates production of CH4 as a proportion of anaerobic peat respiration, transport of CH4 and oxygen between the soil and the atmosphere via diffusion in aerenchymatous plants and in peat pores (water and air filled), ebullition, and oxidation of CH4 by methanotrophic microbes. Ultimately, we aim to add the model functionality to global climate models such as JSBACH (Reick et al., 2013), the land surface scheme of the MPI Earth System Model. We tested the model with methane fluxes measured using the eddy covariance technique at the Siikaneva site, an oligotrophic boreal fen in southern Finland (61°49' N, 24°11' E), over the years 2005-2011. To give the model estimates regional reliability, we calibrated the model using the Markov chain Monte Carlo (MCMC) technique. Although the simulations and the research are still ongoing, preliminary results from the MCMC calibration can be described as very promising considering that the model is still at a relatively early stage. We will present the model and its dynamics as well as results from the MCMC calibration and the comparison with Siikaneva flux data.
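The MCMC machinery behind such a calibration can be sketched with a minimal Metropolis sampler on a one-parameter toy problem. The "emission model" (`flux = exp(theta)`), prior, and noise level below are invented placeholders standing in for the peatland CH4 model and the Siikaneva flux data.

```python
import numpy as np

def log_posterior(theta, obs, prior_sd=2.0, noise_sd=0.3):
    """Gaussian log-likelihood of observed fluxes under a toy emission model
    flux = exp(theta), plus a Gaussian prior on theta. Illustrative of the
    MCMC machinery only, not the peatland CH4 model itself."""
    pred = np.exp(theta)
    loglik = -0.5 * np.sum((obs - pred) ** 2) / noise_sd ** 2
    logprior = -0.5 * theta ** 2 / prior_sd ** 2
    return loglik + logprior

def metropolis(obs, n_steps=5000, step=0.1, seed=0):
    """Random-walk Metropolis sampler for the toy posterior."""
    rng = np.random.default_rng(seed)
    theta = 0.0
    lp = log_posterior(theta, obs)
    chain = np.empty(n_steps)
    for i in range(n_steps):
        prop = theta + rng.normal(0, step)            # symmetric proposal
        lp_prop = log_posterior(prop, obs)
        if np.log(rng.uniform()) < lp_prop - lp:      # accept/reject
            theta, lp = prop, lp_prop
        chain[i] = theta
    return chain

obs = np.full(20, np.exp(0.7))        # synthetic fluxes; true theta = 0.7
chain = metropolis(obs)
theta_hat = chain[1000:].mean()       # posterior mean after burn-in
```

The posterior mean recovers the generating parameter; a real calibration would replace the toy forward model with the CH4 transport model and sample many parameters jointly.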
Xenon arc lamp spectral radiance modelling for satellite instrument calibration
NASA Astrophysics Data System (ADS)
Rolt, Stephen; Clark, Paul; Schmoll, Jürgen; Shaw, Benjamin J. R.
2016-07-01
Precise radiometric measurements play a central role in many areas of astronomical and terrestrial observation. We focus on the use of continuum light sources in the absolute radiometric calibration of detectors in an imaging spectrometer for space applications. The application, in this instance, revolves around the ground based calibration of the Sentinel-4/UVN instrument. This imaging spectrometer instrument is expected to be deployed in 2019 and will make spatially resolved spectroscopic measurements of atmospheric chemistry. The instrument, which operates across the UV/VIS and NIR spectrum from 305-775 nm, is designed to measure the absolute spectral radiance of the Earth and compare it with the absolute spectral irradiance of the Sun. Of key importance to the fidelity of these absolute measurements is the ground based calibration campaign. Continuum lamp sources that are temporally stable and spatially well defined are central to this process. Xenon short arc lamps provide highly intense and efficient continuum illumination in a range extending from the ultra-violet to the infra-red, and their spectrum is well matched to this specific application. Despite their widespread commercial use, certain aspects of their performance are not well documented in the literature. One of the important requirements in this calibration application is the delivery of highly uniform, collimated illumination at high radiance. In this process, it cannot be assumed that the xenon arc is a point source; the spatial distribution of the radiance must be characterised accurately. We present here careful measurements that thoroughly characterise the spatial distribution of the spectral radiance of a 1000 W xenon lamp. A mathematical model is presented describing the spatial distribution. Temporal stability is another exceptionally important requirement in the calibration process. As such, the paper also describes strategies to reinforce the temporal stability of the lamp output by
An improved calibration technique for wind tunnel model attitude sensors
NASA Technical Reports Server (NTRS)
Tripp, John S.; Wong, Douglas T.; Finley, Tom D.; Tcheng, Ping
1993-01-01
Aerodynamic wind tunnel tests at NASA Langley Research Center (LaRC) require accurate measurement of model attitude. Inertial accelerometer packages have been the primary sensor used to measure model attitude to an accuracy of +/- 0.01 deg as required for aerodynamic research. The calibration parameters of the accelerometer package are currently obtained from a seven-point tumble test using a simplified empirical approximation. The inaccuracy due to the approximation exceeds the accuracy requirement as the misalignment angle between the package axis and the model body axis increases beyond 1.4 deg. This paper presents the exact solution derived from the coordinate transformation to eliminate inaccuracy caused by the approximation. In addition, a new calibration procedure is developed in which the data taken from the seven-point tumble test is fit to the exact solution by means of a least-squares estimation procedure. Validation tests indicate that the new calibration procedure provides +/- 0.005-deg accuracy over large package misalignments, which is not possible with the current procedure.
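The least-squares fit of tumble-test data to an exact solution can be sketched with a standard accelerometer model. Writing the output as b + s·sin(θ + m) and expanding s·sin(θ + m) = A·sin θ + B·cos θ makes the problem linear, so ordinary least squares recovers bias, sensitivity, and misalignment in one step. The model form and all numbers are illustrative assumptions, not NASA LaRC's actual calibration equations.

```python
import numpy as np

# Hypothetical seven-point tumble test: known pitch angles theta, measured
# accelerometer outputs y, modeled as y = b + s*sin(theta + m).
theta = np.deg2rad(np.arange(0, 360, 360 / 7))           # seven tumble positions
b_true = 0.02                                            # bias (assumed)
s_true = 1.0                                             # sensitivity (assumed)
m_true = np.deg2rad(2.5)                                 # misalignment (assumed)
rng = np.random.default_rng(3)
y = b_true + s_true * np.sin(theta + m_true) + rng.normal(0, 1e-4, theta.size)

# Linearize: s*sin(theta + m) = A*sin(theta) + B*cos(theta), then solve by OLS.
M = np.column_stack([np.ones_like(theta), np.sin(theta), np.cos(theta)])
(b, A, B), *_ = np.linalg.lstsq(M, y, rcond=None)
s = np.hypot(A, B)                                       # recovered sensitivity
m = np.arctan2(B, A)                                     # recovered misalignment (rad)
```

Because the fit uses the exact trigonometric form rather than a small-angle approximation, the recovered misalignment stays accurate even when it is well beyond the 1.4 deg limit of the empirical method.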
KINEROS2-AGWA: Model Use, Calibration, and Validation
NASA Technical Reports Server (NTRS)
Goodrich, D. C.; Burns, I. S.; Unkrich, C. L.; Semmens, D. J.; Guertin, D. P.; Hernandez, M.; Yatheendradas, S.; Kennedy, J. R.; Levick, L. R.
2013-01-01
KINEROS (KINematic runoff and EROSion) originated in the 1960s as a distributed event-based model that conceptualizes a watershed as a cascade of overland flow model elements that flow into trapezoidal channel model elements. KINEROS was one of the first widely available watershed models that interactively coupled a finite difference approximation of the kinematic overland flow equations to a physically based infiltration model. Development and improvement of KINEROS continued from the 1960s on a variety of projects for a range of purposes, which has resulted in a suite of KINEROS-based modeling tools. This article focuses on KINEROS2 (K2), a spatially distributed, event-based watershed rainfall-runoff and erosion model, and the companion ArcGIS-based Automated Geospatial Watershed Assessment (AGWA) tool. AGWA automates the time-consuming tasks of watershed delineation into distributed model elements and initial parameterization of these elements using commonly available, national GIS data layers. A variety of approaches have been used to calibrate and validate K2 successfully across a relatively broad range of applications (e.g., urbanization, pre- and post-fire, hillslope erosion, erosion from roads, runoff and recharge, and manure transport). The case studies presented in this article (1) compare lumped to stepwise calibration and validation of runoff and sediment at plot, hillslope, and small watershed scales; and (2) demonstrate an uncalibrated application to address relative change in watershed response to wildfire.
Synthetic calibration of a Rainfall-Runoff Model
Thompson, David B.; Westphal, Jerome A.; ,
1990-01-01
A method for synthetically calibrating storm-mode parameters for the U.S. Geological Survey's Precipitation-Runoff Modeling System is described. Synthetic calibration is accomplished by adjusting storm-mode parameters to minimize deviations between the pseudo-probability distributions represented by regional regression equations and actual frequency distributions fitted to model-generated peak discharge and runoff volume. Results of modeling storm hydrographs using synthetic and analytic storm-mode parameters are presented. Comparisons are made between model results from both parameter sets and between model results and observed hydrographs. Although mean storm runoff is reproducible to within about 26 percent of the observed mean storm runoff for five or six parameter sets, runoff from individual storms is subject to large disparities. Predicted storm runoff volume ranged from 2 percent to 217 percent of commensurate observed values. Furthermore, simulation of peak discharges was poor. Predicted peak discharges from individual storm events ranged from 2 percent to 229 percent of commensurate observed values. The model was incapable of satisfactorily executing storm-mode simulations for the study watersheds. This result is not considered a particular fault of the model, but instead is indicative of deficiencies in similar conceptual models.
Design driven test patterns for OPC models calibration
NASA Astrophysics Data System (ADS)
Al-Imam, Mohamed
2009-03-01
In the modern photolithography process for manufacturing integrated circuits, geometry dimensions need to be realized on silicon that are much smaller than the exposure wavelength. Thus Resolution Enhancement Techniques (RET) have an indispensable role in the implementation of a successful technology process node. Finding an appropriate RET recipe that answers the needs of a certain fabrication process usually involves intensive computational simulations. These simulations have to reflect how different elements in the lithography process under study will behave. In order to achieve this, accurate models are needed that truly represent the transmission of patterns from mask to silicon. A common practice in calibrating lithography models is to collect data for the dimensions of some test structures created on the exposure mask along with the corresponding dimensions of these test structures on silicon after exposure. This data is used to tune the models for good predictions. The models are guaranteed to accurately predict the test structures that have been used in their tuning. However, real designs might have a much greater variety of structures that might not have been included in the test structures. This paper explores a method for compiling the test structures to be used in the model calibration process using design layouts as an input. The method relies on reducing structures in the design layout to the essential unique structures from the lithography model's point of view, thus ensuring that the test structures represent what the model would actually have to predict during the simulations.
Differences between GPS receiver antenna calibration models and influence on geodetic positioning
NASA Astrophysics Data System (ADS)
Baire, Q.; Bruyninx, C.; Pottiaux, E.; Legrand, J.; Aerts, W.
2012-12-01
Since April 2011, the igs08.atx antenna calibration model has been used in the routine IGS (International GNSS Service) data analysis. The model includes mean robot calibrations to correct for the offset and phase center variations of GNSS receiver antennas. These so-called "type" calibrations are means of the individual calibrations available for a specific antenna/radome combination. The aim of this study is to quantify the offset on the computed station positions when using different receiver antenna calibration models in the analysis. First, type calibrations are compared to individual receiver antenna calibrations. We analyze the observations of the 43 EUREF Permanent Network (EPN) stations equipped with individually calibrated receiver antennas over the period covering 2003 to 2010 using the Precise Point Positioning (PPP) technique. The difference between individual and type calibrations has a larger impact on the vertical component: the position offsets reach 4 mm in the horizontal components and 10 mm in the vertical component. In a second step, the effect of different individual calibration models of the same antenna on the positioning is assessed. For that purpose, data from several GNSS stations equipped with an antenna that has been individually calibrated at two calibration agencies are used. Those agencies are GEO++, performing robot calibrations, and the University of Bonn, performing anechoic chamber calibrations, both recognized by the IGS. Initial results show that the position offsets induced by different calibration methods can reach 3 mm in the horizontal components and 7 mm in the vertical component.
Calibrating the Abaqus Crushable Foam Material Model using UNM Data
Schembri, Philip E.; Lewis, Matthew W.
2014-02-27
Triaxial test data from the University of New Mexico and uniaxial test data from W-14 are used to calibrate the Abaqus crushable foam material model to represent the syntactic foam, comprised of an APO-BMI matrix and carbon microballoons, used in the W76. The material model is an elasto-plasticity model in which the yield strength depends on pressure. Both the elastic properties and the yield stress are estimated by fitting a line to the elastic region of each test response. The model parameters are fit to the data (in a non-rigorous way) to provide both a conservative and a non-conservative material model. The model is verified to perform as intended by comparing the values of pressure and shear stress at yield, as well as the shear and volumetric stress-strain response, to the test data.
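The abstract's step of "fitting a line to the elastic region of each test response" is ordinary least squares on the low-strain portion of a stress-strain curve. A minimal sketch, with synthetic data and an assumed modulus of 2.0 (not values from the actual UNM tests):

```python
# Sketch: estimate an elastic modulus by fitting a line to the
# low-strain (elastic) region of a stress-strain curve.
# Data and the "true" modulus of 2.0 are illustrative only.

def fit_line(xs, ys):
    """Ordinary least-squares fit y = a*x + b; returns (a, b)."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    a = sxy / sxx
    return a, my - a * mx

# Synthetic elastic region: stress = 2.0 * strain
strains = [0.001, 0.002, 0.003, 0.004, 0.005]
stresses = [2.0 * e for e in strains]  # perfectly linear for the sketch

modulus, intercept = fit_line(strains, stresses)
print(modulus)  # slope recovers the assumed modulus
```

In practice the cutoff separating the elastic region from the onset of yield must be chosen from the data before the fit.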
CALIBRATING STELLAR POPULATION MODELS WITH MAGELLANIC CLOUD STAR CLUSTERS
Noël, N. E. D.; Carollo, C. M.; Greggio, L.; Renzini, A.; Maraston, C.
2013-07-20
Stellar population models are commonly calculated using star clusters as calibrators for those evolutionary stages that depend on free parameters. However, discrepancies exist among different models, even if similar sets of calibration clusters are used. With the aim of understanding these discrepancies, and of improving the calibration procedure, we consider a set of 43 Magellanic Cloud (MC) clusters, taking age and photometric information from the literature. We carefully assign ages to each cluster based on up-to-date determinations, ensuring that these are as homogeneous as possible. To cope with statistical fluctuations, we stack the clusters in five age bins, deriving for each of them integrated luminosities and colors. We find that clusters become abruptly red in optical and optical-infrared colors as they age from ~0.6 to ~1 Gyr, which we interpret as due to the development of a well-populated thermally pulsing asymptotic giant branch (TP-AGB). We argue that other studies missed this detection because of coarser age binnings. Maraston and Girardi et al. models predict the presence of a populated TP-AGB at ~0.6 Gyr, with a correspondingly very red integrated color, at variance with the data; Bruzual and Charlot and Conroy models run within the error bars at all ages. The discrepancy between the synthetic colors of Maraston models and the average colors of MC clusters results from the now-obsolete age scale adopted. Finally, our finding that the TP-AGB phase appears to develop between ~0.6 and 1 Gyr is dependent on the adopted age scale for the clusters and may have important implications for stellar evolution.
Calibrating Building Energy Models Using Supercomputer Trained Machine Learning Agents
Sanyal, Jibonananda; New, Joshua Ryan; Edwards, Richard; Parker, Lynne Edwards
2014-01-01
Building Energy Modeling (BEM) is an approach to model the energy usage in buildings for design and retrofit purposes. EnergyPlus is the flagship Department of Energy software that performs BEM for different types of buildings. The input to EnergyPlus often comprises on the order of a few thousand parameters, which have to be calibrated manually by an expert for realistic energy modeling. This makes calibration challenging and expensive, rendering building energy modeling infeasible for smaller projects. In this paper, we describe the Autotune research effort, which employs machine learning algorithms to generate agents for the different kinds of standard reference buildings in the U.S. building stock. The parametric space and the variety of building locations and types make this a challenging computational problem necessitating the use of supercomputers. Millions of EnergyPlus simulations are run on supercomputers and subsequently used to train machine learning algorithms to generate agents. These agents, once created, can run in a fraction of the time, thereby allowing cost-effective calibration of building models.
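The surrogate idea in the abstract, where expensive simulations are run once and a cheap learned "agent" then answers queries in their place, can be sketched in miniature. Here a 1-nearest-neighbour lookup stands in for the trained machine-learning agent, and a toy function stands in for EnergyPlus; both are assumptions for illustration:

```python
# Sketch of the surrogate-agent idea: sample the expensive simulator
# offline, then answer queries cheaply from the learned table.

def expensive_simulation(x):
    return x * x  # placeholder for an EnergyPlus run

# Offline: sample the parameter space with the expensive model
training = [(x / 10.0, expensive_simulation(x / 10.0)) for x in range(11)]

def agent(x):
    """Cheap surrogate: return the output of the nearest training sample."""
    return min(training, key=lambda p: abs(p[0] - x))[1]

print(agent(0.42))  # nearest training input is 0.4
```

A real agent would be a regression model trained on millions of simulation runs, but the division of labour (costly offline sampling, cheap online queries) is the same.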
Calibration of the crop processes in the climate community model
NASA Astrophysics Data System (ADS)
Constantinescu, E. M.; Drewniak, B. A.; Zeng, X.
2012-12-01
Farming is gaining significant terrestrial ground with increases in population and the expanding use of agriculture for non-nutritional purposes such as biofuel production. This agricultural expansion exerts an increasing impact on the terrestrial carbon cycle. To better represent the impact of such processes, the Community Land Model (CLM) has been augmented with a CLM-Crop extension that simulates the development of three crop types: maize, soybean, and spring wheat. The CLM-Crop model is a complex system that relies on a suite of parametric inputs that govern plant growth under a given atmospheric forcing and available resources. CLM-Crop development used measurements of GPP and NEE from AmeriFlux sites to choose parameter values that optimize crop productivity in the model. In this research we aim to calibrate these parametric forms to provide faithful projections in terms of both plant development and net carbon exchange. To this end, we propose a new calibration procedure based on a Bayesian approach, implemented through a parallel Markov chain Monte Carlo (MCMC) technique. We present results from a twin experiment (self-validation) as well as calibration and validation results using real observations from AmeriFlux towers for two sites in the Midwestern U.S. rotating corn and soybean. Data for GPP, NEE, and plant carbon have been collected at Bondville, IL and Mead, NE since the 1990s. The improved model will enhance our understanding of how climate will affect crop production and the resulting carbon fluxes and, additionally, how cultivation will impact climate.
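The accept/reject machinery of MCMC calibration can be shown on a toy problem. This sketch calibrates a single parameter theta of a trivial model y = theta against synthetic observations with Gaussian error; CLM-Crop itself is far richer, and all numbers here are illustrative:

```python
import math
import random

# Minimal Metropolis MCMC sketch of Bayesian parameter calibration.
random.seed(0)
obs = [1.9, 2.1, 2.0, 2.2, 1.8]  # synthetic observations, true theta near 2
sigma = 0.2                       # assumed measurement error

def log_likelihood(theta):
    return sum(-0.5 * ((y - theta) / sigma) ** 2 for y in obs)

theta, samples = 0.0, []
for _ in range(5000):
    prop = theta + random.gauss(0.0, 0.5)  # random-walk proposal
    if math.log(random.random()) < log_likelihood(prop) - log_likelihood(theta):
        theta = prop                       # accept the proposal
    samples.append(theta)

# Discard burn-in, then summarize the posterior
posterior_mean = sum(samples[1000:]) / len(samples[1000:])
print(round(posterior_mean, 2))  # should sit near 2.0
```

The parallel MCMC of the abstract runs many such chains at once over a high-dimensional parameter space, but each chain performs exactly this propose/accept loop.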
Parameter Calibration of Mini-LEO Hill Slope Model
NASA Astrophysics Data System (ADS)
Siegel, H.
2015-12-01
The mini-LEO hill slope, located at Biosphere 2, is a small-scale catchment model used to study the ways landscapes change in response to biological, chemical, and hydrological processes. Previous experiments have shown that soil heterogeneity can develop as a result of groundwater flow, changing the characteristics of the landscape. To determine whether flow has caused heterogeneity within the mini-LEO hill slope, numerical models were used to simulate the observed seepage flow, water table height, and storativity. To begin, a numerical model of the hill slope was created using CATchment HYdrology (CATHY). The model was brought to an initial steady state by applying a rainfall rate of 5 mm/day for 180 days, after which a specific rainfall experiment of alternating intensities was applied. Next, a parameter calibration was conducted to fit the model to the observed data by changing soil parameters individually. The parameters of the best-fitting calibration were taken to be the most representative of those present within the mini-LEO hill slope. Our model indicated that heterogeneities had indeed arisen as a result of the rainfall events, resulting in a lower hydraulic conductivity downslope. The lower hydraulic conductivity downslope in turn caused an increased storage of water and a decrease in seepage flow compared to homogeneous models. This shows that the hydraulic processes acting within a landscape can change the characteristics of the landscape itself, namely the permeability and conductivity of the soil. In the future, results from the excavation of soil in mini-LEO can be compared to the model's results to improve the model and validate its findings.
Dynamic calibration of agent-based models using data assimilation
Ward, Jonathan A.; Evans, Andrew J.; Malleson, Nicolas S.
2016-01-01
A widespread approach to investigating the dynamical behaviour of complex social systems is via agent-based models (ABMs). In this paper, we describe how such models can be dynamically calibrated using the ensemble Kalman filter (EnKF), a standard method of data assimilation. Our goal is twofold. First, we want to present the EnKF in a simple setting for the benefit of ABM practitioners who are unfamiliar with it. Second, we want to illustrate to data assimilation experts the value of using such methods in the context of ABMs of complex social systems and the new challenges these types of model present. We work towards these goals within the context of a simple question of practical value: how many people are there in Leeds (or any other major city) right now? We build a hierarchy of exemplar models that we use to demonstrate how to apply the EnKF and calibrate these using open data of footfall counts in Leeds. PMID:27152214
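The EnKF analysis step the paper introduces to ABM practitioners reduces, for a scalar state such as a city-wide footfall count, to a few lines: the forecast uncertainty is carried by an ensemble, and each member is nudged toward a perturbed observation by the Kalman gain. A minimal sketch with illustrative numbers:

```python
import random

# One ensemble Kalman filter analysis step for a scalar state.
random.seed(1)
ensemble = [random.gauss(900.0, 50.0) for _ in range(100)]  # model forecast
obs, obs_sd = 1000.0, 25.0                                  # observation

mean = sum(ensemble) / len(ensemble)
var = sum((x - mean) ** 2 for x in ensemble) / (len(ensemble) - 1)
gain = var / (var + obs_sd ** 2)  # Kalman gain: forecast vs. obs uncertainty

# Update each member toward a perturbed copy of the observation
analysis = [x + gain * (obs + random.gauss(0.0, obs_sd) - x) for x in ensemble]
a_mean = sum(analysis) / len(analysis)
print(round(a_mean, 1))  # pulled from the forecast mean toward the observation
```

For an ABM the "state" would be the vector of agent-level or aggregate quantities, and the forecast ensemble would come from running the model itself many times.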
Efficiency of Evolutionary Algorithms for Calibration of Watershed Models
NASA Astrophysics Data System (ADS)
Ahmadi, M.; Arabi, M.
2009-12-01
Since the promulgation of the Clean Water Act in the U.S. and of similar legislation around the world over the past three decades, watershed management programs have focused on the nexus of pollution prevention and mitigation. In this context, hydrologic/water quality models have been increasingly embedded in the decision-making process. Simulation models are now commonly used to investigate the hydrologic response of watershed systems under varying climatic and land use conditions, and also to study the fate and transport of contaminants at various spatiotemporal scales. Adequate calibration and corroboration of models for various outputs at varying scales is an essential component of watershed modeling. The parameter estimation process can be challenging when multiple objectives are important. For example, improving streamflow predictions of the model at one stream location may degrade model predictions for sediments and/or nutrients at the same location or at other outlets. This paper aims to evaluate the applicability and efficiency of single- and multi-objective evolutionary algorithms for parameter estimation of complex watershed models. To this end, the Shuffled Complex Evolution (SCE-UA) algorithm, a single-objective genetic algorithm (GA), and a multi-objective genetic algorithm (NSGA-II) were coupled with the Soil and Water Assessment Tool (SWAT) to calibrate the model at various locations within the Wildcat Creek Watershed, Indiana. The efficiency of these methods was investigated using different error statistics, including root mean square error, coefficient of determination, and Nash-Sutcliffe efficiency coefficient, for the output variables as well as the baseflow component of the stream discharge. A sensitivity analysis was carried out to screen model parameters that bear significant uncertainties. Results indicated that while flow processes can be reasonably ascertained, parameterization of nutrient and pesticide processes
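The error statistics used to judge these calibrations are simple to compute. A sketch of the root mean square error and the Nash-Sutcliffe efficiency (NSE: 1 is a perfect fit, values at or below 0 mean the model is no better than the observed mean), on illustrative flow values:

```python
import math

def rmse(obs, sim):
    """Root mean square error between observed and simulated series."""
    return math.sqrt(sum((o - s) ** 2 for o, s in zip(obs, sim)) / len(obs))

def nash_sutcliffe(obs, sim):
    """NSE = 1 - (residual sum of squares) / (variance about the obs mean)."""
    mean_obs = sum(obs) / len(obs)
    ss_res = sum((o - s) ** 2 for o, s in zip(obs, sim))
    ss_tot = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - ss_res / ss_tot

observed = [10.0, 12.0, 14.0, 13.0, 11.0]   # synthetic streamflow
simulated = [10.5, 11.5, 14.5, 12.5, 11.0]  # synthetic model output
print(rmse(observed, simulated), nash_sutcliffe(observed, simulated))
```

In a multi-objective calibration such as NSGA-II, statistics like these computed for flow, sediment, and nutrients at several outlets become the competing objectives.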
Calibration of hydraulic models: effects of rating-curve uncertainty
NASA Astrophysics Data System (ADS)
Domeneghetti, Alessio; Castellarin, Attilio; Brath, Armando
2010-05-01
This research focuses on the uncertainty of rating curves and on how this uncertainty propagates to Manning's roughness coefficient during the calibration of numerical hydraulic models. Rating curves, relating stage and flow discharge, are traditionally used for describing boundary conditions. The uncertainty associated with rating curves is often neglected and generally considered to be less important than other factors (see e.g., Di Baldassarre and Montanari, HESS, 2009). We performed a series of simulation experiments aimed at (1) quantitatively assessing the uncertainty of the curves and (2) investigating its effects on the calibration of Manning's roughness coefficient. We used a quasi-two-dimensional (quasi-2D) model of the middle-lower reach of the River Po (Northern Italy) to simulate 10 different historical flood events for the hydrometric river cross-section located at Cremona. Using the simulated data, we mimicked 15 measurement campaigns for each flood event and corrupted the discharge data values according to the indications on measurement campaigns and errors reported in the literature (i.e., EU. ISO EN 748, 1997). We then constructed the 90% confidence interval for the synthetic curves. Finally, we performed an additional set of model runs downstream of the Cremona cross-section to assess how the uncertainty of rating curves affects the estimated Manning coefficients during the calibration phase. The results of the study show that the variation of Manning's roughness coefficient resulting from the rating-curve uncertainty is significant. This variation is analysed and discussed relative to the variability of Manning's coefficient reported in the literature for different channel conditions characterising lower reaches of large natural streams.
NASA Technical Reports Server (NTRS)
Lokos, William A.; Miller, Eric J.; Hudson, Larry D.; Holguin, Andrew C.; Neufeld, David C.; Haraguchi, Ronnie
2015-01-01
This paper describes the design and conduct of the strain-gage load calibration ground test of the SubsoniC Research Aircraft Testbed, Gulfstream III aircraft, and the subsequent data analysis and results. The goal of this effort was to create and validate multi-gage load equations for shear force, bending moment, and torque for two wing measurement stations. For some of the testing the aircraft was supported by three airbags in order to isolate the wing structure from extraneous load inputs through the main landing gear. Thirty-two strain gage bridges were installed on the left wing. Hydraulic loads were applied to the wing lower surface through a total of 16 load zones. Some dead-weight load cases were applied to the upper wing surface using shot bags. Maximum applied loads reached 54,000 lb. Twenty-six load cases were applied with the aircraft resting on its landing gear, and 16 load cases were performed with the aircraft supported by the nose gear and three airbags around the center of gravity. Maximum wing tip deflection reached 17 inches. An assortment of 2, 3, 4, and 5 strain-gage load equations were derived and evaluated against independent check cases. The better load equations had root mean square errors less than 1 percent. Test techniques and lessons learned are discussed.
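Deriving a multi-gage load equation, as in the ground test above, is a least-squares regression of applied load on strain-gage bridge outputs. A two-gage sketch with synthetic readings (the true coefficients 3 and 2 are an assumption of the example, not test values), solving the normal equations directly:

```python
# Sketch: fit load = c1*g1 + c2*g2 (no intercept) by least squares.
# Synthetic readings generated from the assumed equation load = 3*g1 + 2*g2.
gage1 = [1.0, 2.0, 3.0, 4.0]
gage2 = [2.0, 1.0, 4.0, 3.0]
load = [3 * a + 2 * b for a, b in zip(gage1, gage2)]

# Normal equations: [s11 s12; s12 s22] [c1; c2] = [t1; t2]
s11 = sum(a * a for a in gage1)
s12 = sum(a * b for a, b in zip(gage1, gage2))
s22 = sum(b * b for b in gage2)
t1 = sum(a * y for a, y in zip(gage1, load))
t2 = sum(b * y for b, y in zip(gage2, load))
det = s11 * s22 - s12 * s12
c1 = (t1 * s22 - t2 * s12) / det
c2 = (t2 * s11 - t1 * s12) / det
print(c1, c2)  # recovers the assumed coefficients
```

The actual study fit equations with up to five gages and evaluated them against independent check cases; with more gages one would solve the larger normal system (or use QR decomposition) rather than this closed 2x2 form.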
New NIR Calibration Models Speed Biomass Composition and Reactivity Characterization
2015-09-01
Obtaining accurate chemical composition and reactivity (measures of carbohydrate release and yield) information for biomass feedstocks in a timely manner is necessary for the commercialization of biofuels. This highlight describes NREL's work to use near-infrared (NIR) spectroscopy and partial least squares multivariate analysis to develop calibration models to predict the feedstock composition and the release and yield of soluble carbohydrates generated by a bench-scale dilute acid pretreatment and enzymatic hydrolysis assay. This highlight is being developed for the September 2015 Alliance S&T Board meeting.
Air pollution modeling and its application III
De Wispelaere, C.
1984-01-01
This book focuses on the Lagrangian modeling of air pollution. Modeling cooling tower and power plant plumes, modeling the dispersion of heavy gases, remote sensing as a tool for air pollution modeling, dispersion modeling including photochemistry, and the evaluation of model performances in practical applications are discussed. Specific topics considered include dispersion in the convective boundary layer, the application of personal computers to Lagrangian modeling, the dynamic interaction of cooling tower and stack plumes, the diffusion of heavy gases, correlation spectrometry as a tool for mesoscale air pollution modeling, Doppler acoustic sounding, tetroon flights, photochemical air quality simulation modeling, acid deposition of photochemical oxidation products, atmospheric diffusion modeling, applications of an integral plume rise model, and the estimation of diffuse hydrocarbon leakages from petrochemical factories. This volume constitutes the proceedings of the Thirteenth International Technical Meeting on Air Pollution Modeling and Its Application held in France in 1982.
Testing calibration routines for LISFLOOD, a distributed hydrological model
NASA Astrophysics Data System (ADS)
Pannemans, B.
2009-04-01
Traditionally, hydrological models are considered difficult to calibrate: their high non-linearity results in rugged response surfaces where calibration algorithms easily get stuck in local minima. For the calibration of distributed hydrological models two extra factors play an important role: on the one hand they are often computationally costly, restricting the feasible number of model runs; on the other hand their distributed nature smooths the response surface, facilitating the search for a global minimum. LISFLOOD is a distributed hydrological model currently used for the European Flood Alert System - EFAS (Van der Knijff et al., 2008). Its upcoming recalibration over more than 200 catchments, each with an average runtime of 2-3 minutes, proved a perfect occasion to put several existing calibration algorithms to the test. The tested routines are Downhill Simplex (DHS; Nelder and Mead, 1965), SCE-UA (Duan et al., 1993), SCEM (Vrugt et al., 2003), and AMALGAM (Vrugt et al., 2008), and they were evaluated on their capability to converge efficiently onto the global minimum and on the spread in the solutions found in repeated runs. The routines were tested on a simple hyperbolic function, on a LISFLOOD catchment using model output as observations, and on two LISFLOOD catchments using real observations (one on the river Inn in the Alps, the other along the downstream stretch of the Elbe). On the mathematical problem and on the catchment with synthetic observations, DHS proved to be the fastest and the most efficient in finding a solution. SCE-UA and AMALGAM are slower, but while SCE-UA keeps converging on the exact solution, AMALGAM slows down after about 600 runs. For the LISFLOOD models with real observations, AMALGAM (a hybrid algorithm that combines several other algorithms; we used CMA, PSO, and GA) came out of the tests as the fastest, giving comparable results in consecutive runs. However, some more work is needed to tweak the stopping
Empirical calibration of the near-infrared Ca II triplet - III. Fitting functions
NASA Astrophysics Data System (ADS)
Cenarro, A. J.; Gorgas, J.; Cardiel, N.; Vazdekis, A.; Peletier, R. F.
2002-02-01
Using a near-infrared stellar library of 706 stars with a wide coverage of atmospheric parameters, we study the behaviour of the CaII triplet strength in terms of effective temperature, surface gravity and metallicity. Empirical fitting functions for recently defined line-strength indices, namely CaT*, CaT and PaT, are provided. These functions can be easily implemented into stellar population models to provide accurate predictions for integrated CaII strengths. We also present a thorough study of the various error sources and their relation to the residuals of the derived fitting functions. Finally, the derived functional forms and the behaviour of the predicted CaII are compared with those of previous works in the field.
Sin, Gürkan; Van Hulle, Stijn W H; De Pauw, Dirk J W; van Griensven, Ann; Vanrolleghem, Peter A
2005-07-01
Modelling activated sludge systems has gained an increasing momentum after the introduction of activated sludge models (ASMs) in 1987. Application of dynamic models for full-scale systems requires essentially a calibration of the chosen ASM to the case under study. Numerous full-scale model applications have been performed so far which were mostly based on ad hoc approaches and expert knowledge. Further, each modelling study has followed a different calibration approach: e.g. different influent wastewater characterization methods, different kinetic parameter estimation methods, different selection of parameters to be calibrated, different priorities within the calibration steps, etc. In short, there was no standard approach in performing the calibration study, which makes it difficult, if not impossible, to (1) compare different calibrations of ASMs with each other and (2) perform internal quality checks for each calibration study. To address these concerns, systematic calibration protocols have recently been proposed to bring guidance to the modeling of activated sludge systems and in particular to the calibration of full-scale models. In this contribution four existing calibration approaches (BIOMATH, HSG, STOWA and WERF) will be critically discussed using a SWOT (Strengths, Weaknesses, Opportunities, Threats) analysis. It will also be assessed in what way these approaches can be further developed in view of further improving the quality of ASM calibration. In this respect, the potential of automating some steps of the calibration procedure by use of mathematical algorithms is highlighted. PMID:15993465
Mars Entry Atmospheric Data System Modeling, Calibration, and Error Analysis
NASA Technical Reports Server (NTRS)
Karlgaard, Christopher D.; VanNorman, John; Siemers, Paul M.; Schoenenberger, Mark; Munk, Michelle M.
2014-01-01
The Mars Science Laboratory (MSL) Entry, Descent, and Landing Instrumentation (MEDLI)/Mars Entry Atmospheric Data System (MEADS) project installed seven pressure ports through the MSL Phenolic Impregnated Carbon Ablator (PICA) heatshield to measure heatshield surface pressures during entry. These measured surface pressures are used to generate estimates of atmospheric quantities based on modeled surface pressure distributions. In particular, the quantities to be estimated from the MEADS pressure measurements include the dynamic pressure, angle of attack, and angle of sideslip. This report describes the calibration of the pressure transducers utilized to reconstruct the atmospheric data and associated uncertainty models, pressure modeling and uncertainty analysis, and system performance results. The results indicate that the MEADS pressure measurement system hardware meets the project requirements.
Differential Evolution algorithm applied to FSW model calibration
NASA Astrophysics Data System (ADS)
Idagawa, H. S.; Santos, T. F. A.; Ramirez, A. J.
2014-03-01
Friction Stir Welding (FSW) is a solid state welding process that can be modelled using a Computational Fluid Dynamics (CFD) approach. These models use adjustable parameters to control the heat transfer and the heat input to the weld. These parameters are used to calibrate the model and they are generally determined using the conventional trial and error approach. Since this method is not very efficient, we used the Differential Evolution (DE) algorithm to successfully determine these parameters. In order to improve the success rate and to reduce the computational cost of the method, this work studied different characteristics of the DE algorithm, such as the evolution strategy, the objective function, the mutation scaling factor and the crossover rate. The DE algorithm was tested using a friction stir weld performed on a UNS S32205 Duplex Stainless Steel.
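A minimal differential evolution loop (DE/rand/1 with binomial-style crossover) shows the mutation, crossover, and greedy-selection steps the study tuned. The objective here is a toy quadratic standing in for the CFD model's misfit, and the population size, F, and CR values are illustrative settings, not those of the paper:

```python
import random

random.seed(2)

def objective(p):
    """Toy misfit with minimum at (1.5, -0.5); stands in for the CFD model."""
    return (p[0] - 1.5) ** 2 + (p[1] + 0.5) ** 2

NP, F, CR, dims = 20, 0.8, 0.9, 2  # population size, mutation factor, crossover rate
pop = [[random.uniform(-5, 5) for _ in range(dims)] for _ in range(NP)]

for _ in range(200):  # generations
    for i in range(NP):
        # Mutation: combine three distinct members other than the target
        a, b, c = random.sample([p for j, p in enumerate(pop) if j != i], 3)
        # Crossover: mix mutant and target dimension by dimension
        trial = [a[d] + F * (b[d] - c[d]) if random.random() < CR else pop[i][d]
                 for d in range(dims)]
        # Greedy selection: keep whichever is better
        if objective(trial) <= objective(pop[i]):
            pop[i] = trial

best = min(pop, key=objective)
print([round(x, 3) for x in best])  # converges near (1.5, -0.5)
```

For FSW calibration each objective evaluation is a full CFD run, so the choices the paper studies (strategy, F, CR, objective function) directly control how many expensive runs are needed.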
Model Free Gate Design and Calibration For Superconducting Qubits
NASA Astrophysics Data System (ADS)
Egger, Daniel; Wilhelm, Frank
2014-03-01
Gates for superconducting qubits are realized by time-dependent control pulses. The pulse shape for a specific gate depends on the parameters of the superconducting qubits, e.g. frequency and non-linearity. Based on one's knowledge of these parameters and using a specific model, the pulse shape is determined either analytically or numerically using optimal control [arXiv:1306.6894, arXiv:1306.2279]. However, the performance of the pulse is limited by the accuracy of the model. For a pulse with few parameters this is generally not a problem, since it can be "debugged" manually. Here we present an automated method for calibrating multiparameter pulses. We use the Nelder-Mead simplex method to close the control loop. This scheme uses the experiment as feedback and thus does not need a model. It requires few iterations and circumvents process tomography, making it a fast and versatile tool for gate design.
Calibration Modeling Methodology to Optimize Performance for Low Range Applications
NASA Technical Reports Server (NTRS)
McCollum, Raymond A.; Commo, Sean A.; Parker, Peter A.
2010-01-01
Calibration is a vital process in characterizing the performance of an instrument in an application environment and seeks to obtain acceptable accuracy over the entire design range. Often, project requirements specify a maximum total measurement uncertainty expressed as a percent of full scale. However, in some applications enhanced performance is sought at the low end of the range, and expressing the accuracy as a percent of reading should then be considered as a modeling strategy. For example, it is common to use a force balance in multiple facilities or regimes, often well below its designed full-scale capacity. This paper presents a general statistical methodology for optimizing calibration mathematical models based on a percent-of-reading accuracy requirement, which has broad application in all types of transducer applications where low-range performance is required. A case study illustrates the proposed methodology for the Mars Entry Atmospheric Data System, which employs seven strain-gage-based pressure transducers mounted on the heatshield of the Mars Science Laboratory mission.
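One simple way to target percent-of-reading rather than percent-of-full-scale accuracy is to weight the calibration residuals by the inverse square of the reading, so a low-range point counts as much, relatively, as a full-scale one. This is a sketch of that idea, not the paper's actual methodology; the straight-line response and the data are assumptions:

```python
# Sketch: weighted least-squares calibration fit through the origin,
# with weights 1/x^2 so the fit is judged in percent-of-reading terms.
applied = [1.0, 2.0, 5.0, 10.0, 50.0, 100.0]    # reference loads
output = [1.02, 2.01, 5.05, 10.0, 50.2, 100.5]  # synthetic transducer readings

w = [1.0 / (x * x) for x in applied]             # percent-of-reading weights
sw_xx = sum(wi * x * x for wi, x in zip(w, applied))
sw_xy = sum(wi * x * y for wi, x, y in zip(w, applied, output))
gain = sw_xy / sw_xx                             # through-origin WLS slope

# Worst residual expressed as a fraction of the reading
worst = max(abs(y - gain * x) / x for x, y in zip(applied, output))
print(round(gain, 5), round(100 * worst, 3))     # gain, worst % of reading
```

An unweighted fit would let the 100-unit point dominate; the weighting spreads the relative error more evenly across the range, which is the behaviour a percent-of-reading requirement asks for.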
Impact of different individual GNSS receiver antenna calibration models on geodetic positioning
NASA Astrophysics Data System (ADS)
Baire, Q.; Pottiaux, E.; Bruyninx, C.; Defraigne, P.; Aerts, W.; Legrand, J.; Bergeot, N.; Chevalier, J. M.
2012-04-01
Since April 2011, the igs08.atx antenna calibration model has been used in the routine IGS (International GNSS Service) data analysis. The model includes mean robot calibrations to correct for the offset and phase center variations of the GNSS receiver antennas. These so-called "type" calibrations are means of the individual calibrations available for a specific antenna/radome combination. The GNSS data analysis performed within the EUREF Permanent Network (EPN) aims at being as consistent as possible with the IGS analysis. This also applies to the receiver antenna calibrations. However, when available, individual antenna calibrations are used within the EPN analysis instead of the "type" calibration. When these individual calibrations are unavailable, the EPN analysis falls back to (type) calibrations identical to the ones used within the IGS (igs08.atx). The aim of this study is to evaluate the significance of the offset caused by using different receiver antenna calibration models on the station position. Using the PPP (Precise Point Positioning) technique, we first investigate the differences in positioning obtained when switching between individual antenna calibrations and type calibrations. We analyze the observations of the 43 EPN stations equipped with individually calibrated receiver antennas over the period from 2003 to 2010, and we show that these differences can reach up to 4 mm in the horizontal components and 10 mm in the vertical component. Secondly, we study the accuracy of the individual calibration models and evaluate the effect of different sets of individual calibrations on the positioning. For that purpose, we use data from 6 GNSS stations equipped with an antenna that has been individually calibrated at two calibration facilities recognized by the IGS: GEO++ and the University of Bonn.
Modeling Prairie Pothole Lakes: Linking Satellite Observation and Calibration (Invited)
NASA Astrophysics Data System (ADS)
Schwartz, F. W.; Liu, G.; Zhang, B.; Yu, Z.
2009-12-01
This paper examines the response of a complex lake-wetland system to variations in climate. The focus is on the lakes and wetlands of the Missouri Coteau, which is part of the larger Prairie Pothole Region of the Central Plains of North America. Information on lake size was enumerated from satellite images and yielded power-law relationships for different hydrological conditions. More traditional lake-stage data were made available to us from the USGS Cottonwood Lake Study Site in North Dakota. A Probabilistic Hydrologic Model (PHM) was developed to simulate lake complexes comprising tens of thousands or more individual closed-basin lakes and wetlands. What is new about this model is a calibration scheme that utilizes remotely sensed data on lake area as well as stage data for individual lakes. Some ¼ million individual data points are used within a Genetic Algorithm to calibrate the model by comparing the simulated results with observed lake area-frequency power-law relationships derived from Landsat images and water depths from seven individual lakes and wetlands. The simulated lake behaviors show good agreement with the observations under average, dry, and wet climatic conditions. The calibrated model is used to examine the impact of climate variability on a large lake complex in North Dakota, in particular during the "Dust Bowl" drought of the 1930s. This most famous drought of the 20th century devastated the agricultural economy of the Great Plains, with health and social impacts lingering for years afterwards. Interestingly, the drought of the 1930s is unremarkable in relation to others of greater intensity and frequency before AD 1200 in the Great Plains. Major droughts and deluges have the ability to create marked variability of the power-law function (e.g. up to one and a half orders of magnitude variability from the extreme Dust Bowl drought to the extreme 1993-2001 deluge). This new probabilistic modeling approach provides a novel tool to examine the response of the
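A lake area-frequency power law N = c * A**(-b), like those enumerated from the satellite images above, is conventionally fit by linear regression in log-log space. A sketch with synthetic counts generated from an assumed exponent of 2 and prefactor of 1000 (not values from the study):

```python
import math

# Sketch: fit N = c * A**(-b) by ordinary least squares on (log A, log N).
areas = [1.0, 2.0, 4.0, 8.0, 16.0]               # lake-area bins
counts = [1000.0, 250.0, 62.5, 15.625, 3.90625]  # synthetic: N = 1000 * A**-2

lx = [math.log(a) for a in areas]
ly = [math.log(n) for n in counts]
n = len(lx)
mx, my = sum(lx) / n, sum(ly) / n
slope = (sum((x - mx) * (y - my) for x, y in zip(lx, ly))
         / sum((x - mx) ** 2 for x in lx))
b = -slope                      # power-law exponent
c = math.exp(my - slope * mx)   # prefactor
print(b, c)  # recovers the assumed exponent and prefactor
```

With real data the fit would differ between wet and dry conditions, which is exactly the variability of the power-law function the paper exploits for calibration.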
A New Perspective for the Calibration of Computational Predictor Models.
Crespo, Luis Guillermo
2014-11-01
This paper presents a framework for calibrating computational models using data from several and possibly dissimilar validation experiments. The offset between model predictions and observations, which might be caused by measurement noise, model-form uncertainty, and numerical error, drives the process by which uncertainty in the model's parameters is characterized. The resulting description of uncertainty along with the computational model constitute a predictor model. Two types of predictor models are studied: Interval Predictor Models (IPMs) and Random Predictor Models (RPMs). IPMs use sets to characterize uncertainty, whereas RPMs use random vectors. The propagation of a set through a model makes the response an interval valued function of the state, whereas the propagation of a random vector yields a random process. Optimization-based strategies for calculating both types of predictor models are proposed. Whereas the formulations used to calculate IPMs target solutions leading to the interval value function of minimal spread containing all observations, those for RPMs seek to maximize the models' ability to reproduce the distribution of observations. Regarding RPMs, we choose a structure for the random vector (i.e., the assignment of probability to points in the parameter space) solely dependent on the prediction error. As such, the probabilistic description of uncertainty is not a subjective assignment of belief, nor is it expected to asymptotically converge to a fixed value, but instead it is a description of the model's ability to reproduce the experimental data. This framework enables evaluating the spread and distribution of the predicted response of target applications depending on the same parameters beyond the validation domain (i.e., roll-up and extrapolation).
Calibration of Predictor Models Using Multiple Validation Experiments
NASA Technical Reports Server (NTRS)
Crespo, Luis G.; Kenny, Sean P.; Giesy, Daniel P.
2015-01-01
This paper presents a framework for calibrating computational models using data from several and possibly dissimilar validation experiments. The offset between model predictions and observations, which might be caused by measurement noise, model-form uncertainty, and numerical error, drives the process by which uncertainty in the model's parameters is characterized. The resulting description of uncertainty along with the computational model constitute a predictor model. Two types of predictor models are studied: Interval Predictor Models (IPMs) and Random Predictor Models (RPMs). IPMs use sets to characterize uncertainty, whereas RPMs use random vectors. The propagation of a set through a model makes the response an interval valued function of the state, whereas the propagation of a random vector yields a random process. Optimization-based strategies for calculating both types of predictor models are proposed. Whereas the formulations used to calculate IPMs target solutions leading to the interval value function of minimal spread containing all observations, those for RPMs seek to maximize the models' ability to reproduce the distribution of observations. Regarding RPMs, we choose a structure for the random vector (i.e., the assignment of probability to points in the parameter space) solely dependent on the prediction error. As such, the probabilistic description of uncertainty is not a subjective assignment of belief, nor is it expected to asymptotically converge to a fixed value, but instead it is a description of the model's ability to reproduce the experimental data. This framework enables evaluating the spread and distribution of the predicted response of target applications depending on the same parameters beyond the validation domain.
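The IPM idea of a minimal-spread interval containing all observations can be sketched for the simplest case of a fixed polynomial center: the half-width is then just the largest absolute residual. This toy sketch (the data and the polynomial parameterization are assumptions, not the papers' optimization formulation) shows the interval-valued prediction:

```python
import numpy as np

def interval_predictor(x, y, degree=1):
    """Minimal-spread IPM for a fixed polynomial center: the half-width is
    the smallest constant such that every observation lies inside."""
    coeffs = np.polyfit(x, y, degree)
    half_width = np.abs(y - np.polyval(coeffs, x)).max()
    def predict(xq):
        c = np.polyval(coeffs, xq)
        return c - half_width, c + half_width    # interval-valued prediction
    return predict

rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 50)
y = 1.0 + 2.0 * x + rng.normal(0.0, 0.1, x.size)
predict = interval_predictor(x, y)
lo, hi = predict(x)   # every observation is enclosed by construction
```

The papers optimize over the center as well, which generally yields a tighter interval than this fixed-center shortcut.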
Optimizing the lithography model calibration algorithms for NTD process
NASA Astrophysics Data System (ADS)
Hu, C. M.; Lo, Fred; Yang, Elvis; Yang, T. H.; Chen, K. C.
2016-03-01
As patterns shrink to the resolution limits of up-to-date ArF immersion lithography technology, the negative tone development (NTD) process has been an increasingly adopted technique for obtaining superior imaging quality, employing bright-field (BF) masks to print the critical dark-field (DF) metal and contact layers. However, from the fundamental materials and process interaction perspectives, several key differences inherently exist between the NTD process and the traditional positive tone development (PTD) system, especially the horizontal/vertical resist shrinkage and developer depletion effects; hence the traditional resist parameters developed for the typical PTD process no longer fit NTD process modeling well. In order to cope with the inherent differences between PTD and NTD processes and thereby improve NTD modeling accuracy, several NTD models with different combinations of complementary terms were built to account for the NTD-specific resist shrinkage, developer depletion and diffusion, and wafer CD jump induced by sub-resolution assist feature (SRAF) effects. Each new complementary NTD term has a definite aim in dealing with the NTD-specific phenomena. In this study, the modeling accuracy is compared among different models for the specific patterning characteristics of various feature types. Multiple complementary NTD terms were finally proposed to address all the NTD-specific behaviors simultaneously and further optimize the NTD modeling accuracy. The new algorithm of multiple complementary NTD terms, tested on our critical dark-field layers, demonstrates consistent model accuracy improvement for both calibration and verification.
Metal ion leaching from contaminated soils: Model calibration and application
Ganguly, C.; Rabideau, A.J.; Matsumoto, M.R.; Van Benschoten, J.E.
1998-12-01
A previously developed model that describes leaching of heavy metals from contaminated soils is applied to four hazardous-waste-site soils contaminated with Pb. Processes included in the model are intraparticle diffusion, rate expressions for irreversibly and reversibly sorbed fractions, and metal complexation by ions in solution. The model is calibrated using laboratory experimental data in the pH 1-3 range, liquid-to-solid mass ratios from 5 to 20, and leaching times of 24 h. Parameters for the model are estimated through a combination of independent experiments, literature correlations, and mathematical optimization. Equilibrium data were used to estimate site density and an adsorption equilibrium constant. Two kinetic rate coefficients, a particle tortuosity factor, and a distribution coefficient (α_a) that defined the amount of Pb in two contaminant fractions were adjusted to match kinetic leaching data. Using one set of parameter estimates for each soil, the model successfully simulated experimental data collected under different leaching conditions. The fraction of Pb associated with the easily leachable, irreversibly sorbed fraction (1 − α_a) provides some insight into the geochemical distribution of Pb in the soils tested. The model is used to explore effects of process variables such as liquid-to-solid ratio and sequential washes. The model should be useful for simulating ex-situ soil washing processes and may, with further development, have applications for in-situ flushing processes.
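The two-fraction partitioning can be illustrated with a toy first-order release model. This is a deliberately simplified sketch: the rate constants, the α_a value, and the forward-Euler stepping are hypothetical, and the paper's full model additionally includes intraparticle diffusion and solution complexation:

```python
def leach(hours, dt, k_rev, k_irr, alpha_a, m0):
    """Toy two-fraction release model: a fraction alpha_a of the initial
    load m0 leaches at first-order rate k_rev (per hour), the remaining
    (1 - alpha_a) at rate k_irr; forward-Euler time stepping."""
    m_rev = alpha_a * m0
    m_irr = (1.0 - alpha_a) * m0
    dissolved = 0.0
    for _ in range(int(hours / dt)):
        r1 = k_rev * m_rev * dt
        r2 = k_irr * m_irr * dt
        m_rev -= r1
        m_irr -= r2
        dissolved += r1 + r2
    return dissolved

# 24 h leach with hypothetical rates and a 60/40 split of a 100 mg/kg load;
# the fast fraction is nearly exhausted while the slow one has barely begun
released = leach(hours=24.0, dt=0.01, k_rev=0.5, k_irr=0.02, alpha_a=0.6, m0=100.0)
```

In a calibration setting, the two rate constants and alpha_a would be the adjustable parameters fit against kinetic leaching data.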
Development of Metropolitan (CITY III) Model. Final Report.
ERIC Educational Resources Information Center
House, Peter
CITY III, a computer-assisted simulation model to be used in the study of complex interactions and consequences of public and private decision-making in an urban setting, is described in this report. The users of the model, with the help of a computer, become public and private decision-makers in a simulated city and, by interacting with one…
Application of Extended Kalman Filter Techniques for Dynamic Model Parameter Calibration
Huang, Zhenyu; Du, Pengwei; Kosterev, Dmitry; Yang, Bo
2009-07-26
Phasor measurement has previously been used for sub-system model validation, which enables rigorous comparison of model simulation and recorded dynamics and facilitates identification of problematic model components. Recent work extends the sub-system model validation approach with a focus on how model parameters may be calibrated to match recorded dynamics. In this paper, a calibration method using the Extended Kalman Filter (EKF) technique is proposed. The paper presents the formulation as well as case studies showing the validity of the EKF-based parameter calibration method. The proposed calibration method is expected to be a cost-effective complement to traditional equipment testing for improving dynamic model quality.
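The standard trick behind EKF-based parameter calibration is to append the unknown parameter to the state vector and update both jointly. A minimal sketch on a toy first-order system (the model, noise levels, and tuning values are illustrative assumptions, not the paper's power-system formulation):

```python
import numpy as np

def ekf_calibrate(z, dt, q, r, x0, a0):
    """Joint state/parameter EKF for the toy model x' = -a*x with unknown a:
    the parameter is appended to the state and updated from measurements."""
    s = np.array([x0, a0])                 # augmented state [x, a]
    P = np.eye(2)
    Q = np.diag([q, q])
    R = np.array([[r]])
    H = np.array([[1.0, 0.0]])             # only x is observed
    for zk in z:
        x, a = s
        s = np.array([x - dt * a * x, a])  # predict; a modeled as constant
        F = np.array([[1.0 - dt * a, -dt * x],
                      [0.0, 1.0]])         # Jacobian of the prediction step
        P = F @ P @ F.T + Q
        K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
        s = s + K @ (np.array([zk]) - H @ s)
        P = (np.eye(2) - K @ H) @ P
    return s[1]                            # calibrated parameter estimate

# Synthetic record from the true system (a = 0.8) with small measurement noise
rng = np.random.default_rng(2)
dt, a_true, x = 0.01, 0.8, 5.0
z = []
for _ in range(2000):
    x -= dt * a_true * x
    z.append(x + rng.normal(0.0, 0.001))
a_hat = ekf_calibrate(z, dt, q=1e-6, r=1e-6, x0=5.0, a0=0.3)
```

Starting from a deliberately wrong guess (a0 = 0.3), the filter pulls the parameter toward its true value while the transient still carries information.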
Root zone water quality model (RZWQM2): Model use, calibration and validation
Ma, Liwang; Ahuja, Lajpat; Nolan, B.T.; Malone, Robert; Trout, Thomas; Qi, Z.
2012-01-01
The Root Zone Water Quality Model (RZWQM2) has been used widely for simulating agricultural management effects on crop production and soil and water quality. Although it is a one-dimensional model, it has many desirable features for the modeling community. This article outlines the principles of calibrating the model component by component with one or more datasets and validating the model with independent datasets. Users should consult the RZWQM2 user manual distributed along with the model and a more detailed protocol on how to calibrate RZWQM2 provided in a book chapter. Two case studies (or examples) are included in this article. One is from an irrigated maize study in Colorado to illustrate the use of field and laboratory measured soil hydraulic properties on simulated soil water and crop production. It also demonstrates the interaction between soil and plant parameters in simulated plant responses to water stresses. The other is from a maize-soybean rotation study in Iowa to show a manual calibration of the model for crop yield, soil water, and N leaching in tile-drained soils. Although the commonly used trial-and-error calibration method works well for experienced users, as shown in the second example, an automated calibration procedure is more objective, as shown in the first example. Furthermore, the incorporation of the Parameter Estimation Software (PEST) into RZWQM2 made the calibration of the model more efficient than a grid (ordered) search of model parameters. In addition, PEST provides sensitivity and uncertainty analyses that should help users in selecting the right parameters to calibrate.
NASA Astrophysics Data System (ADS)
Tonkin, Matthew; Doherty, John
2009-12-01
We describe a subspace Monte Carlo (SSMC) technique that reduces the burden of calibration-constrained Monte Carlo when undertaken with highly parameterized models. When Monte Carlo methods are used to evaluate the uncertainty in model outputs, ensuring that parameter realizations reproduce the calibration data requires many model runs to condition each realization. In the new SSMC approach, the model is first calibrated using a subspace regularization method, ideally the hybrid Tikhonov-TSVD "superparameter" approach described by Tonkin and Doherty (2005). Sensitivities calculated with the calibrated model are used to define the calibration null-space, which is spanned by parameter combinations that have no effect on simulated equivalents to available observations. Next, a stochastic parameter generator is used to produce parameter realizations, and for each a difference is formed between the stochastic parameters and the calibrated parameters. This difference is projected onto the calibration null-space and added to the calibrated parameters. If the model is no longer calibrated, parameter combinations that span the calibration solution space are reestimated while retaining the null-space projected parameter differences as additive values. The recalibration can often be undertaken using existing sensitivities, so that conditioning requires only a small number of model runs. Using synthetic and real-world model applications we demonstrate that the SSMC approach is general (it is not limited to any particular model or any particular parameterization scheme) and that it can rapidly produce a large number of conditioned parameter sets.
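The null-space projection step at the heart of SSMC can be sketched with a toy linear model (the Jacobian and parameter values here are invented for illustration; the paper's approach operates on large regularized models and may require recalibration of the solution-space components):

```python
import numpy as np

def null_space_conditioned(theta_cal, theta_stoch, J, tol=1e-8):
    """SSMC-style step: project the stochastic-minus-calibrated parameter
    difference onto the calibration null-space of the Jacobian J
    (rows = observations, columns = parameters) and add it back."""
    _, s, Vt = np.linalg.svd(J)
    rank = int(np.sum(s > tol))
    V_null = Vt[rank:].T                    # basis of the calibration null-space
    d = theta_stoch - theta_cal
    return theta_cal + V_null @ (V_null.T @ d)

# Toy problem: 2 observations, 4 parameters -> a 2-dimensional null-space
J = np.array([[1.0, 0.0, 1.0, 0.0],
              [0.0, 1.0, 0.0, 1.0]])
theta_cal = np.array([1.0, 2.0, 3.0, 4.0])
theta_stoch = theta_cal + np.array([0.5, -0.3, 0.2, 0.1])
theta_new = null_space_conditioned(theta_cal, theta_stoch, J)
# for a linear model, J @ theta_new equals J @ theta_cal exactly:
# the simulated equivalents to the observations are unchanged
```

For a nonlinear model the projected realization may drift out of calibration, which is why the paper reestimates the solution-space components when needed.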
Using Runoff Data to Calibrate the Community Land Model
NASA Astrophysics Data System (ADS)
Ray, J.; Hou, Z.; Huang, M.; Swiler, L.
2014-12-01
We present a statistical method for calibrating the Community Land Model (CLM) using streamflow observations collected between 1999 and 2008 at the outlets of two river basins from the Model Parameter Estimation Experiment (MOPEX): the Oostanaula River at Resaca, GA, and the Walnut River at Winfield, KS. The observed streamflow shows variability over a large range of time-scales, none of which significantly dominates the others; consequently, the time-series appears noisy and is difficult to use directly in model parameter estimation efforts without significant filtering. We perform a multi-resolution wavelet decomposition of the observed streamflow and use the wavelet power coefficients (WPC) as the tuning data. We construct a mapping (a surrogate model) between WPC and three hydrological parameters of the CLM using a training set of 256 CLM runs. The dependence of WPC on the parameters is complex and cannot be captured using a surrogate unless the parameter combinations yield physically plausible model predictions, i.e., those that are skillful when compared to observations. Retaining only the top quartile of the runs ensures skillfulness, as measured by the RMS error between observations and CLM predictions. This "screening" of the training data yields a region (the "valid" region) in the parameter space where accurate surrogate models can be created. We construct a classifier for the "valid" region and, in conjunction with the surrogate models for WPC, pose a Bayesian inverse problem for the three hydrological parameters. The inverse problem is solved using an adaptive Markov chain Monte Carlo (MCMC) method to construct a three-dimensional posterior distribution for the hydrological parameters. Posterior predictive tests using the surrogate model reveal that the posterior distribution is more predictive than the nominal values of the parameters, which are used as default values in the current version of CLM. The effectiveness of the inversion is then validated by
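A Haar decomposition is one simple way to obtain wavelet power coefficients of the kind used here as tuning data. A minimal sketch (the synthetic "streamflow", the Haar wavelet, and the level count are illustrative assumptions; the abstract does not specify the wavelet family):

```python
import numpy as np

def haar_wpc(signal, levels):
    """Mean squared Haar detail coefficient per level: a simple stand-in
    for the wavelet power coefficients used as calibration targets."""
    a = np.asarray(signal, dtype=float)
    power = []
    for _ in range(levels):
        pairs = a[: a.size // 2 * 2].reshape(-1, 2)
        detail = (pairs[:, 0] - pairs[:, 1]) / np.sqrt(2.0)
        a = (pairs[:, 0] + pairs[:, 1]) / np.sqrt(2.0)   # approximation coefficients
        power.append(float(np.mean(detail ** 2)))
    return power

# Synthetic "streamflow": slow seasonal cycle plus fast unit-variance noise;
# the finest level's power is dominated by the noise
rng = np.random.default_rng(3)
t = np.arange(4096)
flow = 10.0 + 3.0 * np.sin(2.0 * np.pi * t / 365.0) + rng.normal(0.0, 1.0, t.size)
wpc = haar_wpc(flow, levels=5)
```

Summaries like these, rather than the raw noisy series, are what the surrogate model maps to the CLM parameters.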
Hotspot detection and design recommendation using silicon calibrated CMP model
NASA Astrophysics Data System (ADS)
Hui, Colin; Wang, Xian Bin; Huang, Haigou; Katakamsetty, Ushasree; Economikos, Laertis; Fayaz, Mohammed; Greco, Stephen; Hua, Xiang; Jayathi, Subramanian; Yuan, Chi-Min; Li, Song; Mehrotra, Vikas; Chen, Kuang Han; Gbondo-Tugbawa, Tamba; Smith, Taber
2009-03-01
Chemical Mechanical Polishing (CMP) has been used in the manufacturing of the copper (Cu) damascene process. It is well known that dishing and erosion occur during the CMP process, and they strongly depend on metal density and line width. The inherent thickness and topography variations become an increasing concern for today's designs running through advanced process nodes (sub-65nm). Excessive thickness and topography variations can have major impacts on chip yield and performance; as such, they need to be accounted for during the design stage. In this paper, we demonstrate an accurate physics-based CMP model and its application to CMP-related hotspot detection. Model-based checking capability is most useful to identify highly environment-sensitive layouts that are prone to early process window limitation and hence failure. Model-based checking, as opposed to rule-based checking, can identify the weak points in a design more accurately and enable designers to provide improved layout for the areas with the highest leverage for manufacturability improvement. Further, CMP modeling has the ability to provide information on interlevel effects such as copper puddling from underlying topography that cannot be captured in Design-for-Manufacturing (DfM) recommended rules. The model has been calibrated against silicon produced with the 45nm process from Common Platform (IBM-Chartered-Samsung) technology. It is one of the earliest 45nm CMP models available today. We show that CMP-related hotspots can often occur around the spaces between analog macros and digital blocks in SoC designs. With the help of the CMP model-based prediction, the design, the dummy fill, or the placement of the blocks can be modified to improve planarity and eliminate CMP-related hotspots. The CMP model can be used to pass design recommendations to designers to improve chip yield and performance.
Interplanetary density models as inferred from solar Type III bursts
NASA Astrophysics Data System (ADS)
Oppeneiger, Lucas; Boudjada, Mohammed Y.; Lammer, Helmut; Lichtenegger, Herbert
2016-04-01
We report on density models derived from spectral features of solar Type III bursts. These bursts are generated by beams of electrons travelling outward from the Sun along open magnetic field lines. The electrons generate Langmuir waves at the plasma frequency along their ray paths through the corona and the interplanetary medium. Type III bursts cover a large frequency band, from several MHz down to a few kHz. In this analysis, we consider the previous empirical density models proposed to describe the electron density in the interplanetary medium. We show that those models are mainly based on the analysis of Type III bursts generated in the interplanetary medium and observed by satellites (e.g. RAE, HELIOS, VOYAGER, ULYSSES, WIND). Those models are compared with stereoscopic observations of Type III bursts recorded by the WIND, ULYSSES and CASSINI spacecraft. We discuss the spatial evolution of the electron beam through the interplanetary medium, where the trajectory is an Archimedean spiral. We show that the inferred electron beams and source locations depend on the choice of the empirical density model.
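The frequency-to-distance mapping underlying such density models follows from the plasma-frequency relation f_p[kHz] ≈ 8.98·√(n_e[cm⁻³]). A minimal sketch with an assumed inverse-square density profile (the 1-AU density value and the fundamental-emission assumption are illustrative; the paper compares several empirical profiles):

```python
import math

F_KHZ = 8.98  # plasma frequency in kHz for n_e = 1 cm^-3

def plasma_freq_khz(n_cm3):
    """Electron plasma frequency in kHz for a density in cm^-3."""
    return F_KHZ * math.sqrt(n_cm3)

def burst_distance_au(f_khz, n_1au=7.0):
    """Distance (AU) at which an assumed n(r) = n_1au * (1 AU / r)**2
    profile has plasma frequency f_khz (fundamental emission assumed)."""
    # f ∝ sqrt(n) ∝ 1/r for this profile, so r = f_p(1 AU) / f
    return plasma_freq_khz(n_1au) / f_khz

# Under this model, a burst drifting down to ~24 kHz has reached roughly 1 AU
r = burst_distance_au(24.0)
```

Tracking the observed frequency drift against time then constrains the exciter speed along the assumed Archimedean-spiral path.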
NASA Astrophysics Data System (ADS)
Razavi, S.; Anderson, D.; Martin, P.; MacMillan, G.; Tolson, B.; Gabriel, C.; Zhang, B.
2012-12-01
Many sophisticated groundwater models tend to be computationally intensive as they rigorously represent detailed scientific knowledge about the groundwater systems. Calibration (model inversion), which is a vital step of groundwater model development, can require hundreds or thousands of model evaluations (runs) for different sets of parameters and as such demand prohibitively large computational time and resources. One common strategy to circumvent this computational burden is surrogate modelling which is concerned with developing and utilizing fast-to-run surrogates of the original computationally intensive models (also called fine models). Surrogates can be either based on statistical and data-driven models such as kriging and neural networks or simplified physically-based models with lower fidelity to the original system (also called coarse models). Fidelity in this context refers to the degree of the realism of a simulation model. This research initially investigates different strategies for developing lower-fidelity surrogates of a fine groundwater model and their combinations. These strategies include coarsening the fine model, relaxing the numerical convergence criteria, and simplifying the model geological conceptualisation. Trade-offs between model efficiency and fidelity (accuracy) are of special interest. A methodological framework is developed for coordinating the original fine model with its lower-fidelity surrogates with the objective of efficiently calibrating the parameters of the original model. This framework is capable of mapping the original model parameters to the corresponding surrogate model parameters and also mapping the surrogate model response for the given parameters to the original model response. This framework is general in that it can be used with different optimization and/or uncertainty analysis techniques available for groundwater model calibration and parameter/predictive uncertainty assessment. A real-world computationally
Test data sets for calibration of stochastic and fractional stochastic volatility models.
Pospíšil, Jan; Sobotka, Tomáš
2016-09-01
Data for calibration and out-of-sample error testing of option pricing models are provided alongside data obtained from the optimization procedures in "On calibration of stochastic and fractional stochastic volatility models" [1]. First we describe the testing data sets; then the calibration data obtained from combined optimizers are depicted visually as interactive 3D bar plots. The data are suitable for further comparison of other optimization routines and also for benchmarking different pricing models.
Achleitner, S; Rinderer, M; Kirnbauer, R
2009-01-01
For the Tyrolean part of the river Inn, a hybrid model for flood forecasting has been set up and is currently in its test phase. The system comprises a hydraulic 1D model of the river Inn, the hydrological model HQsim (rainfall-runoff-discharge model), and the snow and ice melt model SES, the latter two modeling the rainfall runoff from non-glaciated and glaciated tributary catchments, respectively. Within this paper the focus is put on the hydrological modeling of the 49 connected non-glaciated catchments, realized with the software HQsim. In the course of model calibration, the identification of the most sensitive parameters is important for an efficient calibration procedure. The indicators used for explaining the parameter sensitivities were chosen specifically for the purpose of flood forecasting. Finally, five model parameters could be identified as being sensitive for model calibration when aiming for a well-calibrated model for flood conditions. In addition, two parameters were identified which are sensitive in situations where the snow line plays an important role.
NASA Astrophysics Data System (ADS)
Aggett, Graeme; Spies, Ryan; Szfranski, Bill; Hahn, Claudia; Weil, Page
2016-04-01
Even an adequate forecasting model may not perform well if it is inadequately calibrated. Model calibration is often constrained by the lack of adequate calibration data, especially for small river basins with high spatial rainfall variability. Rainfall/snow station networks may not be dense enough to accurately estimate catchment rainfall/SWE. High discharges during flood events are subject to significant error due to flow gauging difficulty. Dynamic changes in catchment conditions (e.g., urbanization; losses in karstic systems) invariably introduce non-homogeneity into the water level and flow data. This presentation will highlight some of the challenges in reliable calibration of U.S. National Weather Service operational flood forecast models, emphasizing the various challenges in different physiographic/climatic domains. It will also highlight the benefit of using various data visualization techniques to transfer information about model calibration to operational forecasters so they may understand the influence of the calibration on model performance under various conditions.
Calibration of visual model for space manipulator with a hybrid LM-GA algorithm
NASA Astrophysics Data System (ADS)
Jiang, Wensong; Wang, Zhongyu
2016-01-01
A hybrid LM-GA algorithm is proposed to calibrate the camera system of a space manipulator to improve its locational accuracy. This algorithm dynamically fuses the Levenberg-Marquardt (LM) algorithm and a Genetic Algorithm (GA) to minimize the error of the nonlinear camera model. The LM algorithm is called to optimize the initial camera parameters that were previously generated by the genetic process. Iteration is stopped if the optimized camera parameters meet the accuracy requirements. Otherwise, new populations are generated again by the GA and optimized afresh by the LM algorithm until the optimal solutions meet the accuracy requirements. A novel measuring machine for the space manipulator is designed for on-orbit dynamic simulation and precision testing. The camera system of the space manipulator, calibrated by the hybrid LM-GA algorithm, is used for locational precision tests in this measuring instrument. The experimental results show that the mean composite errors are 0.074 mm for the hybrid LM-GA camera calibration model, 1.098 mm for the LM camera calibration model, and 1.202 mm for the GA camera calibration model. Furthermore, the composite standard deviations are 0.103 mm for the hybrid LM-GA camera calibration model, 1.227 mm for the LM camera calibration model, and 1.351 mm for the GA camera calibration model. The accuracy of the hybrid LM-GA camera calibration model is more than 10 times higher than that of the other two methods. All in all, the hybrid LM-GA camera calibration model is superior to both the LM and GA camera calibration models.
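The hybrid idea, a genetic search whose best candidate is polished by a local least-squares step, can be sketched on a toy fitting problem. Everything here is an illustrative assumption: Gauss-Newton stands in for LM (no damping), and the exponential "model", parameter bounds, and GA settings are invented, not the paper's camera model:

```python
import numpy as np

rng = np.random.default_rng(4)

def sse(p, x, y):
    """Sum of squared residuals of the toy model y = a * exp(b * x)."""
    a, b = p
    return float(np.sum((y - a * np.exp(b * x)) ** 2))

def gauss_newton(p, x, y, iters=20):
    """Local polish standing in for the Levenberg-Marquardt step."""
    p = np.array(p, dtype=float)
    for _ in range(iters):
        a, b = p
        e = np.exp(b * x)
        r = y - a * e
        J = np.column_stack([-e, -a * x * e])   # d(residual)/d(a, b)
        if not (np.isfinite(r).all() and np.isfinite(J).all()):
            break                               # bail out if the step diverged
        p = p - np.linalg.lstsq(J, r, rcond=None)[0]
    return p

def hybrid_ga_lm(x, y, pop=30, gens=15):
    """GA explores globally; each generation's best candidate is refined
    locally, and the refinement is kept only if it improves the fit."""
    P = rng.uniform([0.1, -2.0], [5.0, 2.0], size=(pop, 2))
    for _ in range(gens):
        P = P[np.argsort([sse(p, x, y) for p in P])]
        elite = P[: pop // 2]
        children = elite + rng.normal(0.0, 0.1, elite.shape)   # mutation
        P = np.vstack([elite, children])
        cand = gauss_newton(P[0], x, y)
        if np.isfinite(cand).all() and sse(cand, x, y) < sse(P[0], x, y):
            P[0] = cand
    return P[np.argmin([sse(p, x, y) for p in P])]

x = np.linspace(0.0, 1.0, 40)
y = 2.0 * np.exp(0.7 * x)                 # noiseless synthetic data, a=2, b=0.7
p_hat = hybrid_ga_lm(x, y)
```

The division of labor mirrors the paper's scheme: the GA supplies diverse starting points so the local optimizer is not trapped by a poor initialization.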
Yu, Hua; Small, Gary W
2015-02-01
A diagnostic and updating strategy is explored for multivariate calibrations based on near-infrared spectroscopy. For use with calibration models derived from spectral fitting or decomposition techniques, the proposed method constructs models that relate the residual concentrations remaining after a prediction to the residual spectra remaining after the information associated with the calibration model has been extracted. This residual modeling approach is evaluated for use with partial least-squares (PLS) models for predicting physiological levels of glucose in a simulated biological matrix. Residual models are constructed with both PLS and a hybrid technique based on the use of PLS scores as inputs to support vector regression. Calibration and residual models are built with both absorbance and single-beam data collected over 416 days. Effective models for the spectral residuals are built with both types of data and demonstrate the ability to diagnose and correct deviations in performance of the calibration model with time. PMID:25473807
Barañao, P A; Hall, E R
2004-01-01
Activated Sludge Model No 3 (ASM3) was chosen to model an activated sludge system treating effluents from a mechanical pulp and paper mill. The high COD concentration and the high content of readily biodegradable substrates of the wastewater make this model appropriate for this system. ASM3 was calibrated based on batch respirometric tests using fresh wastewater and sludge from the treatment plant, and on analytical measurements of COD, TSS and VSS. The model, developed for municipal wastewater, was found suitable for fitting a variety of respirometric batch tests, performed at different temperatures and food to microorganism ratios (F/M). Therefore, a set of calibrated parameters, as well as the wastewater COD fractions, was estimated for this industrial wastewater. The majority of the calibrated parameters were in the range of those found in the literature.
View of a five inch standard Mark III model 1 ...
View of a five inch standard Mark III model 1 #39, manufactured in 1916 at the Naval Gun Factory, Watervliet, NY; this is the only gun remaining on Olympia dating from the period when it was in commission; note ammunition lift at left side of photograph. (p36) - USS Olympia, Penn's Landing, 211 South Columbus Boulevard, Philadelphia, Philadelphia County, PA
Impact of Spatial Scale on Calibration and Model Output for a Grid-based SWAT Model
NASA Astrophysics Data System (ADS)
Pignotti, G.; Vema, V. K.; Rathjens, H.; Raj, C.; Her, Y.; Chaubey, I.; Crawford, M. M.
2014-12-01
The traditional implementation of the Soil and Water Assessment Tool (SWAT) model utilizes common landscape characteristics known as hydrologic response units (HRUs). Discretization into HRUs provides a simple, computationally efficient framework for simulation, but also represents a significant limitation of the model, as spatial connectivity between HRUs is ignored. SWATgrid, a newly developed, distributed version of SWAT, provides modified landscape routing via a grid, overcoming these limitations. However, the current implementation of SWATgrid has significant computational overhead, which effectively precludes traditional calibration and limits the total number of grid cells in a given modeling scenario. Moreover, as SWATgrid is a relatively new modeling approach, it remains largely untested, with little understanding of the impact of spatial resolution on model output. The objective of this study was to determine the effects of user-defined input resolution on SWATgrid predictions in the Upper Cedar Creek Watershed (near Auburn, IN, USA). Original input data, nominally at 30 m resolution, were rescaled for a range of resolutions between 30 and 4,000 m. A 30 m traditional SWAT model was developed as the baseline for model comparison. Monthly calibration was performed, and the calibrated parameter set was then transferred to all other SWAT and SWATgrid models to isolate the effects of resolution on prediction uncertainty relative to the baseline. Model output was evaluated with respect to stream flow at the outlet and water quality parameters. Additionally, output of SWATgrid models was compared to output of traditional SWAT models at each resolution, utilizing the same scaled input data. A secondary objective considered the effect of scale on calibrated parameter values, where each standard SWAT model was calibrated independently, and parameters were transferred to SWATgrid models at equivalent scales. For each model, computational requirements were evaluated
Dong, Ren G.; Welcome, Daniel E.; McDowell, Thomas W.; Wu, John Z.
2015-01-01
While simulations of the measured biodynamic responses of the whole human body or body segments to vibration are conventionally interpreted as summaries of biodynamic measurements, and the resulting models are considered quantitative, this study looked at these simulations from a different angle: model calibration. The specific aims of this study are to review and clarify the theoretical basis for model calibration, to help formulate the criteria for calibration validation, and to help appropriately select and apply calibration methods. In addition to established vibration theory, a novel theorem of mechanical vibration is also used to enhance the understanding of the mathematical and physical principles of the calibration. Based on this enhanced understanding, a set of criteria was proposed and used to systematically examine the calibration methods. Besides theoretical analyses, a numerical testing method is also used in the examination. This study identified the basic requirements for each calibration method to obtain a unique calibration solution. This study also confirmed that the solution becomes more robust if more than sufficient calibration references are provided. Practically, however, as more references are used, more inconsistencies can arise among the measured data for representing the biodynamic properties. To help account for the relative reliabilities of the references, a baseline weighting scheme is proposed. The analyses suggest that the best choice of calibration method depends on the modeling purpose, the model structure, and the availability and reliability of representative reference data. PMID:26740726
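The baseline weighting of calibration references discussed in this abstract is closely related to weighted least squares, where each reference pulls the fit in proportion to its reliability. A hedged sketch (the design matrix, weights, and the deliberately corrupted reference are invented for illustration, not the study's biodynamic models):

```python
import numpy as np

def weighted_calibration(F, y, w):
    """Weighted least squares: solve min_theta sum_i w_i * (F_i theta - y_i)^2
    by scaling rows with sqrt(w) and using an ordinary least-squares solver."""
    sw = np.sqrt(np.asarray(w, dtype=float))
    theta, *_ = np.linalg.lstsq(F * sw[:, None], y * sw, rcond=None)
    return theta

# Toy calibration: four references constrain two parameters; the third
# reference is off by 0.5 and is down-weighted accordingly
F = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [1.0, -1.0]])
theta_true = np.array([2.0, -1.0])
y = F @ theta_true
y[2] += 0.5
w = np.array([1.0, 1.0, 0.01, 1.0])
theta = weighted_calibration(F, y, w)
# the recovered parameters stay close to theta_true despite the bad reference
```

With equal weights the corrupted reference would bias both parameters; assigning reliability-based weights is the essence of the proposed baseline scheme.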
NSLS-II: Nonlinear Model Calibration for Synchrotrons
Bengtsson, J.
2010-10-08
This tech note is essentially a summary of a lecture we delivered to the Acc. Phys. Journal Club in April 2010. However, since the estimated accuracy of these methods in the field of particle accelerators has been naive and misleading, i.e., it ignores the impact of noise, we elaborate on this in some detail. A prerequisite for a calibration of the nonlinear Hamiltonian is that the quadratic part has been understood, i.e., that the linear optics for the real accelerator has been calibrated. For synchrotron light source operations, this problem has been solved by the interactive LOCO technique/tool (Linear Optics from Closed Orbits). Before that, in the context of hadron accelerators, it was done by signal processing of turn-by-turn BPM data. We have outlined how to make a basic calibration of the nonlinear model for synchrotrons. In particular, we have shown how this was done for LEAR, CERN (antiprotons) in the mid-80s. Specifically, our accuracy for frequency estimation was ~1 x 10^-5 for 1024 turns (to calibrate the linear optics) and ~1 x 10^-4 for 256 turns for the tune footprint and betatron spectrum. For comparison, the estimated tune footprint for stable beam for NSLS-II is ~0.1, and the transverse damping time is ~20 msec, i.e., ~4,000 turns. There is no fundamental difference between antiprotons, protons, and electrons in this case. Because the estimated accuracy for these methods in the field of particle accelerators has been naive, i.e., ignoring the impact of noise, we have also derived explicit formulas, from first principles, for a quantitative statement. For, e.g., N = 256 and 5% noise we obtain δν ~ 1 x 10^-5. A comparison with the state of the art in, e.g., telecom and electrical engineering since the 60s is quite revealing. For example, the Kalman filter (1960), crucial for the Ranger, Mariner, and Apollo (including the Lunar Module) missions during the 60s. Or Claude Shannon et al
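The frequency-estimation step quoted above (tune from turn-by-turn BPM data) can be sketched generically in Python: a Hann-windowed, zero-padded FFT with parabolic interpolation of the spectral peak. This is an illustration of the technique, not the author's code; the padding factor and test tune are arbitrary choices.

```python
import numpy as np

def tune_estimate(x, n_pad=16):
    """Estimate the betatron tune from turn-by-turn data: Hann window,
    zero-padded FFT, and parabolic interpolation of the spectral peak."""
    n = len(x)
    spec = np.abs(np.fft.rfft((x - x.mean()) * np.hanning(n), n * n_pad))
    k = int(np.argmax(spec))
    a, b, c = spec[k - 1], spec[k], spec[k + 1]
    delta = 0.5 * (a - c) / (a - 2.0 * b + c)   # parabolic peak refinement
    return (k + delta) / (n * n_pad)
```

For a noiseless 256-turn signal this recovers the tune to well below one padded FFT bin, consistent in spirit with the ~1 x 10^-4 accuracy quoted for 256 turns.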
Lee, Kenneth L.; Korellis, John S.; McFadden, Sam X.
2006-01-01
Experimental data for material plasticity and failure model calibration and validation were obtained from 304L stainless steel. Model calibration data were taken from smooth tension, notched tension, and compression tests. Model validation data were provided from experiments using thin-walled tube specimens subjected to path dependent combinations of internal pressure, extension, and torsion.
Monte Carlo strategies for calibration in climate models
NASA Astrophysics Data System (ADS)
Villagran-Hernandez, Alejandro
Intensive computational methods have been used by Earth scientists in a wide range of problems in data inversion and uncertainty quantification such as earthquake epicenter location and climate projections. To quantify the uncertainties resulting from a range of plausible model configurations it is necessary to estimate a multidimensional probability distribution. The computational cost of estimating these distributions for geoscience applications is impractical using traditional methods such as Metropolis/Gibbs algorithms as simulation costs limit the number of experiments that can be obtained reasonably. Several alternative sampling strategies have been proposed that could improve on the sampling efficiency, including Multiple Very Fast Simulated Annealing (MVFSA) and Adaptive Metropolis algorithms. As a goal of this research, the performance of these proposed sampling strategies is evaluated with a surrogate climate model that is able to approximate the noise and response behavior of a realistic atmospheric general circulation model (AGCM). The surrogate model is fast enough that its evaluation can be embedded in these Monte Carlo algorithms. The goal of this thesis is to show that adaptive methods can be superior to MVFSA in approximating the known posterior distribution with fewer forward evaluations. However, the adaptive methods can also be limited by inadequate sample mixing. The Single Component and Delayed Rejection Adaptive Metropolis algorithms were found to resolve these limitations, although challenges remain in approximating multi-modal distributions. The results show that these advanced methods of statistical inference can provide practical solutions to the climate model calibration problem and challenges in quantifying climate projection uncertainties. The computational methods would also be useful for problems outside climate prediction, particularly those where sampling is limited by availability of computational resources.
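The Adaptive Metropolis idea discussed above can be sketched as follows: a Haario-style sampler whose proposal covariance is periodically re-estimated from the chain history. The target, step scaling, and adaptation schedule here are illustrative assumptions, not the thesis implementation.

```python
import numpy as np

def adaptive_metropolis(log_post, x0, n_iter=5000, adapt_start=500, seed=0):
    """Haario-style Adaptive Metropolis: the proposal covariance is
    periodically re-estimated from the chain history."""
    rng = np.random.default_rng(seed)
    d = len(x0)
    chain = np.empty((n_iter, d))
    x, lp = np.asarray(x0, float), log_post(x0)
    cov = np.eye(d) * 0.1              # initial proposal covariance
    sd = 2.38 ** 2 / d                 # Gelman et al. optimal scaling
    for i in range(n_iter):
        prop = rng.multivariate_normal(x, cov)
        lp_prop = log_post(prop)
        if np.log(rng.uniform()) < lp_prop - lp:   # Metropolis accept/reject
            x, lp = prop, lp_prop
        chain[i] = x
        if i >= adapt_start and i % 100 == 0:
            # adapt: empirical covariance of the chain so far, plus jitter
            cov = sd * np.cov(chain[:i].T).reshape(d, d) + 1e-8 * np.eye(d)
    return chain
```

The adaptation makes the proposal match the posterior's scale, which is what reduces the number of forward evaluations relative to a fixed-proposal Metropolis sampler.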
Calibrating a Magnetotail Model for Storm/Substorm Forecasting
NASA Astrophysics Data System (ADS)
Horton, W.; Siebert, S.; Mithaiwala, M.; Doxas, I.
2003-12-01
The physics network model called WINDMI for the solar WIND driven Magnetosphere-Ionosphere weather system is calibrated on substorm databases [1] using a genetic algorithm. We report on the use of the network as a digital filter to classify the substorms into three types, a process traditionally performed by individual inspection. We then turn to using the filter on the seven Geospace Environmental Modeling (GEM) storms designated for community-wide study. These storms cover periods of days and contain many substorms. First the WINDMI model is run with the 14 parameters set from the study based on the Blanchard-McPherron database of 117 isolated substorms, with 80% of the data having AL below -500 nT. In contrast, the GEM storms have long periods with AL in the range of -1000 nT. The prediction error measured with the average relative variance (ARV) is approximately unity. Reapplying the genetic algorithm, the parameters shift such that the one long storm has an ARV = 0.59. Physics modifications of the basic WINDMI model, including the injection of sheet plasma into the ring current, are being evaluated in terms of their impact on the ARV and comparisons with non-physics-based signal processing prediction filters. Ensembles of initial conditions are run with 700 MHz G3 CPU run times of order 17 sec per orbit per day of real data. The AMD AthlonXP 1700+ processor takes 5 sec per orbit per day. The IBM SP-2 speed will be reported. With such speeds it is possible to run balls of initial conditions. [1] Substorm Classification with the WINDMI Model, W. Horton, R.S. Weigel, D. Vassiliadis, and I. Doxas, Nonlinear Processes in Geophysics, 1-9, 2003. This work was supported by the National Science Foundation Grant ATM-0229863.
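The average relative variance (ARV) used above as the prediction-error measure can be written in a few lines. The definition below (mean squared error normalized by the variance of the observations) is the standard one and is assumed to match the paper's usage: ARV near 1 means the model does no better than predicting the mean, and ARV well below 1 indicates skill.

```python
import numpy as np

def arv(obs, pred):
    """Average relative variance: mean squared prediction error
    normalized by the variance of the observations."""
    return float(np.mean((obs - pred) ** 2) / np.var(obs))
```

With this convention, a perfect prediction gives ARV = 0 and a constant prediction at the observed mean gives ARV = 1, which is why the reported drop from roughly unity to 0.59 represents a real gain in skill.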
Technology Transfer Automated Retrieval System (TEKTRAN)
In this paper, the Genetic Algorithms (GA) and Bayesian model averaging (BMA) were combined to simultaneously conduct calibration and uncertainty analysis for the Soil and Water Assessment Tool (SWAT). In this hybrid method, several SWAT models with different structures are first selected; next GA i...
The value of subsidence data in ground water model calibration.
Yan, Tingting; Burbey, Thomas J
2008-01-01
The accurate estimation of aquifer parameters such as transmissivity and specific storage is often an important objective during a ground water modeling investigation or aquifer resource evaluation. Parameter estimation is often accomplished with changes in hydraulic head data as the key and most abundant type of observation. The availability and accessibility of global positioning system and interferometric synthetic aperture radar data in heavily pumped alluvial basins can provide important subsidence observations that can greatly aid parameter estimation. The aim of this investigation is to evaluate the value of spatial and temporal subsidence data for automatically estimating parameters with and without observation error using UCODE-2005 and MODFLOW-2000. A synthetic conceptual model (24 separate cases) containing seven transmissivity zones and three zones each for elastic and inelastic skeletal specific storage was used to simulate subsidence and drawdown in an aquifer with variably thick interbeds with delayed drainage. Five pumping wells of variable rates were used to stress the system for up to 15 years. Calibration results indicate that (1) the inverse of the square of the observation values is a reasonable way to weight the observations, (2) spatially abundant subsidence data typically produce superior parameter estimates under constant pumping even with observation error, (3) only a small number of subsidence observations are required to achieve accurate parameter estimates, and (4) for seasonal pumping, accurate parameter estimates for elastic skeletal specific storage values are largely dependent on the quantity of temporal observational data and less on the quantity of available spatial data. PMID:18384595
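The inverse-square observation weighting found reasonable in result (1) can be illustrated with a small weighted least-squares sketch. The design matrix and data below are synthetic and purely illustrative; UCODE-2005 and MODFLOW-2000 are not involved.

```python
import numpy as np

def fit_weighted(X, y):
    """Weighted least squares with weights w_i = 1 / y_i**2,
    i.e. each observation weighted by the inverse of its square."""
    w = 1.0 / y ** 2                   # inverse-square observation weights
    W = np.diag(w)
    # Solve the normal equations (X^T W X) beta = X^T W y
    return np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
```

Weighting by 1/y^2 makes residuals enter the objective in relative rather than absolute terms, which is useful when drawdown and subsidence observations span very different magnitudes.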
Model Robust Calibration: Method and Application to Electronically-Scanned Pressure Transducers
NASA Technical Reports Server (NTRS)
Walker, Eric L.; Starnes, B. Alden; Birch, Jeffery B.; Mays, James E.
2010-01-01
This article presents the application of a recently developed statistical regression method to the controlled instrument calibration problem. The statistical method of Model Robust Regression (MRR), developed by Mays, Birch, and Starnes, is shown to improve instrument calibration by reducing the reliance of the calibration on a predetermined parametric (e.g. polynomial, exponential, logarithmic) model. This is accomplished by allowing fits from the predetermined parametric model to be augmented by a certain portion of a fit to the residuals from the initial regression using a nonparametric (locally parametric) regression technique. The method is demonstrated for the absolute scale calibration of silicon-based pressure transducers.
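The MRR idea of augmenting a parametric fit with a portion of a nonparametric fit to its residuals can be sketched as follows. A simple kernel smoother stands in for the locally parametric regression of the paper, and the mixing fraction `lam` and bandwidth are illustrative free parameters, not values from the article.

```python
import numpy as np

def kernel_smooth(x, y, x_eval, bw):
    """Nadaraya-Watson smoother used as the nonparametric component."""
    w = np.exp(-0.5 * ((x_eval[:, None] - x[None, :]) / bw) ** 2)
    return (w * y).sum(axis=1) / w.sum(axis=1)

def mrr_predict(x, y, x_eval, degree=1, lam=0.5, bw=0.5):
    """Model Robust Regression sketch: a parametric polynomial fit
    augmented by a fraction lam of a nonparametric fit to its residuals."""
    coef = np.polyfit(x, y, degree)
    resid = y - np.polyval(coef, x)
    return np.polyval(coef, x_eval) + lam * kernel_smooth(x, resid, x_eval, bw)
```

When the parametric model is adequate the residuals carry no structure and the nonparametric correction vanishes; when it is misspecified, the residual fit absorbs the lack of fit, which is the robustness the method trades on.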
Huang, Zhenyu; Du, Pengwei; Kosterev, Dmitry; Yang, Steve
2013-05-01
Disturbance data recorded by phasor measurement units (PMUs) offer opportunities to improve the integrity of dynamic models. However, manually tuning parameters through play-back of events demands significant effort and engineering experience. In this paper, a calibration method using the extended Kalman filter (EKF) technique is proposed. The formulation of EKF with parameter calibration is discussed. Case studies are presented to demonstrate its validity. The proposed calibration method is cost-effective and complementary to traditional equipment testing for improving dynamic model quality.
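A minimal sketch of EKF-based parameter calibration by state augmentation is shown below for a scalar toy system. The model, noise levels, and parameter names are assumptions for illustration, not the power-system formulation of the paper: the unknown parameter `a` of x[k+1] = a*x[k] + u is appended to the state as a random walk and estimated jointly with x.

```python
import numpy as np

def ekf_calibrate(zs, u, a0=0.5, q_a=1e-4, r=0.01):
    """EKF parameter calibration sketch: estimate the unknown gain `a`
    of x[k+1] = a*x[k] + u from noisy observations zs of x."""
    s = np.array([zs[0], a0])               # augmented state [x, a]
    P = np.diag([r, 1.0])                   # initial covariance
    for z in zs[1:]:
        x, a = s
        s = np.array([a * x + u, a])        # predict (a is a random walk)
        F = np.array([[a, x], [0.0, 1.0]])  # Jacobian of the transition
        P = F @ P @ F.T + np.diag([1e-6, q_a])
        H = np.array([[1.0, 0.0]])          # we observe x only
        S = H @ P @ H.T + r                 # innovation covariance
        K = P @ H.T / S                     # Kalman gain
        s = s + (K * (z - s[0])).ravel()    # measurement update
        P = (np.eye(2) - K @ H) @ P
    return s[1]                             # calibrated parameter estimate
```

The cross-covariance between x and a built up by the Jacobian is what lets measurement residuals in x correct the parameter, the same mechanism that drives the play-back calibration described in the paper.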
NASA Astrophysics Data System (ADS)
Hughes, J. D.; White, J.
2013-12-01
For many numerical hydrologic models it is a challenge to quantitatively demonstrate that complex models are preferable to simpler models. Typically, a decision is made to develop and calibrate a complex model at the beginning of a study. The value of selecting a complex model over simpler models is then commonly inferred from its fewer simplifications of the governing equations, because it can be time-consuming to develop another numerical code with data processing and parameter estimation functionality. High-level programming languages like Python can greatly reduce the effort required to develop and calibrate simple models that can be used to quantitatively demonstrate the increased value of a complex model. We have developed and calibrated a spatially distributed surface-water/groundwater flow model for managed basins in southeast Florida, USA, to (1) evaluate the effect of municipal groundwater pumpage on surface-water/groundwater exchange, (2) investigate how the study area will respond to sea-level rise, and (3) explore combinations of these forcing functions. To demonstrate the increased value of this complex model, we developed a two-parameter conceptual-benchmark-discharge model for each basin in the study area. The conceptual-benchmark-discharge model includes seasonal scaling and lag parameters and is driven by basin rainfall. The conceptual-benchmark-discharge models were developed in the Python programming language and used weekly rainfall data. Calibration was implemented with the Broyden-Fletcher-Goldfarb-Shanno method available in the Scientific Python (SciPy) library. Normalized benchmark efficiencies calculated using output from the complex model and the corresponding conceptual-benchmark-discharge model indicate that the complex model has more explanatory power than the simple model driven only by rainfall.
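A two-parameter conceptual discharge model calibrated with SciPy's BFGS, as described above, might look like the following sketch. The linear-reservoir formulation, parameter names, and synthetic forcing are illustrative assumptions, not the authors' exact benchmark model; only the use of `scipy.optimize` BFGS follows the abstract.

```python
import numpy as np
from scipy.optimize import minimize

def simulate(params, rain):
    """Two-parameter conceptual discharge model (illustrative):
    a scaled linear reservoir with gain c and recession rate k."""
    c, k = params
    s, q = 0.0, np.empty_like(rain)
    for t, r in enumerate(rain):
        s = s + c * r - k * s          # storage update
        q[t] = k * s                   # discharge
    return q

def calibrate(rain, q_obs, x0=(0.7, 0.35)):
    """Calibrate (c, k) by minimizing the sum of squared errors with BFGS."""
    sse = lambda p: float(np.sum((simulate(p, rain) - q_obs) ** 2))
    return minimize(sse, x0, method="BFGS").x
```

A conceptual benchmark of this size calibrates in seconds, which is exactly why it is cheap to build one alongside a complex model for comparison.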
NASA Astrophysics Data System (ADS)
Pauwels, V. R.; De Vleeschouwer, N.
2013-12-01
In this paper the potential of discharge-based indirect calibration of the probability-distributed model (PDM), a lumped rainfall-runoff (RR) model, is examined for six selected catchments in Flanders. The concept of indirect calibration indicates that one has to estimate the calibration data because the catchment is ungauged or scarcely gauged. A first case in which indirect calibration is applied is that of spatial gauging divergence: because no observed discharge records are available at the outlet of the ungauged catchment, the calibration is carried out based on a rescaled discharge time series of a very similar donor catchment. Both a calibration in the time domain and one in the frequency domain (also known as the spectral domain) are carried out. Furthermore, the case of temporal gauging divergence is considered: limited (e.g. historical or very recent) discharge records are available at the outlet of the scarcely gauged catchment. Additionally, no time overlap exists between the forcing and discharge records. Therefore, only an indirect spectral calibration can be performed in this case. To conclude, the combined case of spatio-temporal gauging divergence is also considered. In this last case only limited discharge records are available at the outlet of a donor catchment. Again, the forcing and discharge records are not concomitant, which makes only an indirect spectral calibration feasible. For most catchments the modelled discharge time series is found to be acceptable in the considered cases. In the case of spatial gauging divergence, indirect temporal calibration results in a better model performance than indirect spectral calibration. Furthermore, indirect spectral calibration in the case of temporal gauging divergence leads to a better model performance than indirect spectral calibration in the case of spatial gauging divergence. Finally, the combination of spatial and temporal gauging divergence does not lead to a notably worse model performance compared to
NASA Astrophysics Data System (ADS)
De Vleeschouwer, N.; Pauwels, V. R. N.
2013-01-01
In this paper the potential of discharge-based indirect calibration of the Probability Distributed Model (PDM), a lumped rainfall-runoff (RR) model, is examined for six selected catchments in Flanders. The concept of indirect calibration indicates that one has to estimate the calibration data because the catchment is ungauged. A first case in which indirect calibration is applied is that of spatial gauging divergence: because no observed discharge records are available at the outlet of the ungauged catchment, the calibration is carried out based on a rescaled discharge time series of a very similar donor catchment. Both a calibration in the time domain and one in the frequency domain (a.k.a. spectral domain) are carried out. Furthermore, the case of temporal gauging divergence is considered: limited (e.g. historical or very recent) discharge records are available at the outlet of the ungauged catchment. Additionally, no time overlap exists between the forcing and discharge records. Therefore, only an indirect spectral calibration can be performed in this case. To conclude, the combined case of spatio-temporal gauging divergence is also considered. In this last case only limited discharge records are available at the outlet of a donor catchment. Again, the forcing and discharge records are not contemporaneous, which makes only an indirect spectral calibration feasible. The modelled discharge time series are found to be acceptable in all three considered cases. In the case of spatial gauging divergence, indirect temporal calibration results in a slightly better model performance than indirect spectral calibration. Furthermore, indirect spectral calibration in the case of temporal gauging divergence leads to a better model performance than indirect spectral calibration in the case of spatial gauging divergence. Finally, the combination of spatial and temporal gauging divergence does not necessarily lead to a worse model performance compared to the separate cases of spatial
NASA Astrophysics Data System (ADS)
De Vleeschouwer, N.; Pauwels, V. R. N.
2013-05-01
In this paper the potential of discharge-based indirect calibration of the probability-distributed model (PDM), a lumped rainfall-runoff (RR) model, is examined for six selected catchments in Flanders. The concept of indirect calibration indicates that one has to estimate the calibration data because the catchment is ungauged or scarcely gauged. A first case in which indirect calibration is applied is that of spatial gauging divergence: because no observed discharge records are available at the outlet of the ungauged catchment, the calibration is carried out based on a rescaled discharge time series of a very similar donor catchment. Both a calibration in the time domain and one in the frequency domain (also known as the spectral domain) are carried out. Furthermore, the case of temporal gauging divergence is considered: limited (e.g. historical or very recent) discharge records are available at the outlet of the scarcely gauged catchment. Additionally, no time overlap exists between the forcing and discharge records. Therefore, only an indirect spectral calibration can be performed in this case. To conclude, the combined case of spatio-temporal gauging divergence is also considered. In this last case only limited discharge records are available at the outlet of a donor catchment. Again, the forcing and discharge records are not concomitant, which makes only an indirect spectral calibration feasible. For most catchments the modelled discharge time series is found to be acceptable in the considered cases. In the case of spatial gauging divergence, indirect temporal calibration results in a better model performance than indirect spectral calibration. Furthermore, indirect spectral calibration in the case of temporal gauging divergence leads to a better model performance than indirect spectral calibration in the case of spatial gauging divergence. Finally, the combination of spatial and temporal gauging divergence does not lead to a notably worse model performance compared to
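The indirect spectral calibration described in these abstracts rests on comparing modelled and observed discharge in the frequency domain, where the records need not overlap in time. A minimal sketch of such a spectral objective follows; the log-power form of the misfit is an illustrative choice, not necessarily the one used for the PDM.

```python
import numpy as np

def spectral_objective(q_obs, q_sim):
    """Spectral-domain calibration objective sketch: compare power
    spectra of observed and simulated discharge, so the two records
    need not be concomitant in time."""
    s_obs = np.abs(np.fft.rfft(q_obs - q_obs.mean())) ** 2
    s_sim = np.abs(np.fft.rfft(q_sim - q_sim.mean())) ** 2
    return float(np.mean((np.log1p(s_obs) - np.log1p(s_sim)) ** 2))
```

Because the power spectrum is invariant to time shifts, a simulated series that reproduces the observed variability at each frequency scores well even if the two records cover different periods.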
Using the cloud to speed-up calibration of watershed-scale hydrologic models (Invited)
NASA Astrophysics Data System (ADS)
Goodall, J. L.; Ercan, M. B.; Castronova, A. M.; Humphrey, M.; Beekwilder, N.; Steele, J.; Kim, I.
2013-12-01
This research focuses on using the cloud to address computational challenges associated with hydrologic modeling. One example is calibration of a watershed-scale hydrologic model, which can take days of execution time on typical computers. While parallel algorithms for model calibration exist and some researchers have used multi-core computers or clusters to run them, these solutions do not fully address the challenge because (i) calibration can still be too time consuming even on multicore personal computers and (ii) few in the community have the time and expertise needed to manage a compute cluster. Given this, another option we are exploring through this work is the use of the cloud to speed up calibration of watershed-scale hydrologic models. The cloud used in this capacity provides a means for renting a specific number and type of machines for only the time needed to perform a calibration run. The cloud allows one to precisely balance the duration of the calibration with the financial costs so that, if the budget allows, the calibration can be performed more quickly by renting more machines. Focusing specifically on the SWAT hydrologic model and a parallel version of the DDS calibration algorithm, we show significant speed-up across a range of watershed sizes using up to 256 cores to perform a model calibration. The tool provides a simple web-based user interface and the ability to monitor job submission and progress during the calibration. Finally, this talk concludes with initial work to leverage the cloud for other tasks associated with hydrologic modeling, including preparing inputs for constructing place-based hydrologic models.
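The DDS algorithm referenced above (Tolson and Shoemaker's dynamically dimensioned search) can be sketched in serial form as follows. The greedy acceptance and the shrinking number of perturbed dimensions are the core of the method; the reflective bound handling is simplified here to clipping, and the step size `r` is the usual default rather than a value from the talk.

```python
import numpy as np

def dds(obj, lo, hi, n_iter=1000, r=0.2, seed=0):
    """Dynamically Dimensioned Search sketch: perturb a random subset of
    parameters whose expected size shrinks as iterations proceed, and
    accept a candidate only if it does not worsen the objective."""
    rng = np.random.default_rng(seed)
    d = len(lo)
    x = lo + rng.uniform(size=d) * (hi - lo)    # random start in bounds
    fx = obj(x)
    for i in range(1, n_iter + 1):
        p = 1.0 - np.log(i) / np.log(n_iter)    # inclusion probability
        mask = rng.uniform(size=d) < p
        if not mask.any():
            mask[rng.integers(d)] = True        # always perturb one parameter
        cand = x.copy()
        step = rng.normal(0.0, r * (hi - lo))
        cand[mask] += step[mask]
        cand = np.clip(cand, lo, hi)            # simplified bound handling
        fc = obj(cand)
        if fc <= fx:                            # greedy acceptance
            x, fx = cand, fc
    return x, fx
```

Each candidate requires one model run and candidates are independent given the current best, which is what makes the algorithm easy to parallelize across rented cloud cores.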
Calibration models for density borehole logging - construction report
Engelmann, R.E.; Lewis, R.E.; Stromswold, D.C.
1995-10-01
Two machined blocks of magnesium and aluminum alloys form the basis for Hanford's density models. The blocks provide known densities of 1.780 ± 0.002 g/cm^3 and 2.804 ± 0.002 g/cm^3 for calibrating borehole logging tools that measure density based on gamma-ray scattering from a source in the tool. Each block is approximately 33 x 58 x 91 cm (13 x 23 x 36 in.), with cylindrical grooves cut into the sides of the blocks to hold steel casings of inner diameter 15 cm (6 in.) and 20 cm (8 in.). Spacers that can be inserted between the blocks and casings can create air gaps of thickness 0.64, 1.3, 1.9, and 2.5 cm (0.25, 0.5, 0.75, and 1.0 in.), simulating air gaps that can occur in actual wells from hole enlargements behind the casing.
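With two blocks of known density, a gamma-gamma density tool can be calibrated from a two-point fit. Since count rate falls roughly exponentially with density for such tools, the fit is conventionally linear in the logarithm of the count rate. The sketch below uses made-up count rates; only the two block densities come from the text, and the exponential response is a stated modeling assumption.

```python
import numpy as np

RHO_MG, RHO_AL = 1.780, 2.804          # block densities, g/cm^3

def density_calibration(counts_mg, counts_al):
    """Two-point calibration: return a function mapping tool count rate
    to density (g/cm^3), assuming ln(count rate) is linear in density."""
    slope = (RHO_AL - RHO_MG) / (np.log(counts_al) - np.log(counts_mg))
    return lambda c: RHO_MG + slope * (np.log(c) - np.log(counts_mg))
```

The calibration reproduces the two block densities exactly and interpolates (or cautiously extrapolates) in log-count space between them; the spacer-induced air gaps would in practice be handled with additional correction curves.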
Thermal Modeling Method Improvements for SAGE III on ISS
NASA Technical Reports Server (NTRS)
Liles, Kaitlin; Amundsen, Ruth; Davis, Warren; McLeod, Shawn
2015-01-01
The Stratospheric Aerosol and Gas Experiment III (SAGE III) instrument is the fifth in a series of instruments developed for monitoring aerosols and gaseous constituents in the stratosphere and troposphere. SAGE III will be delivered to the International Space Station (ISS) via the SpaceX Dragon vehicle. A detailed thermal model of the SAGE III payload, which consists of multiple subsystems, has been developed in Thermal Desktop (TD). Many innovative analysis methods have been used in developing this model; these will be described in the paper. This paper builds on a paper presented at TFAWS 2013, which described some of the initial developments of efficient methods for SAGE III; the current paper describes additional improvements made since that time. To expedite the correlation of the model to thermal vacuum (TVAC) testing, the chambers and ground support equipment (GSE) for both TVAC chambers at Langley used to test the payload were incorporated within the thermal model. This allowed TVAC predictions and correlations to be run within the flight model, eliminating the need for separate TVAC models. In one TVAC test, radiant lamps were used, which necessitated shooting rays from the lamps and running in both solar and IR wavebands. A new Dragon model was incorporated, which entailed a change in orientation; that change was made using an assembly, so that any additional new Dragon orbits could be added in the future without modification of the model. The Earth orbit parameters such as albedo and Earth infrared flux were incorporated as time-varying values that change over the course of the orbit; despite being required in one of the ISS documents, this had not been done before by any previous payload. All parameters such as initial temperature, heater voltage, and location of the payload are defined based on the case definition. For one component, testing was performed in both air and vacuum; incorporating the air convection in a submodel that was
Evaluation of impact of length of calibration time period on the APEX model streamflow simulation
Technology Transfer Automated Retrieval System (TEKTRAN)
Due to resource constraints, continuous long-term measured data for model calibration and validation (C/V) are rare. As a result, most hydrologic and water quality models are calibrated and, if possible, validated using limited available measured data. However, little research has been carried out t...
Technology Transfer Automated Retrieval System (TEKTRAN)
Availability of continuous long-term measured data for model calibration and validation is limited due to time and resources constraints. As a result, hydrologic and water quality models are calibrated and, if possible, validated when measured data is available. Past work reported on the impact of t...
Technology Transfer Automated Retrieval System (TEKTRAN)
The reliability of common calibration practices for process based water quality models has recently been questioned. A so-called “adequately calibrated model” may contain input errors not readily identifiable by model users, or may not realistically represent intra-watershed responses. These short...
Technology Transfer Automated Retrieval System (TEKTRAN)
Watershed simulation models can be calibrated using “hard data” such as temporal streamflow observations; however, users may find upon examination of detailed outputs that some of the calibrated models may not reflect summative actual watershed behavior. Thus, it is necessary to use “soft data” (i....
HRMA calibration handbook: EKC gravity compensated XRCF models
NASA Technical Reports Server (NTRS)
Tananbaum, H. D.; Jerius, D.; Hughes, J.
1994-01-01
This document, consisting of hardcopy printout of explanatory text, figures, and tables, represents one incarnation of the AXAF high resolution mirror assembly (HRMA) Calibration Handbook. However, as we have envisioned it, the handbook also consists of electronic versions of this hardcopy printout (in the form of PostScript files), the individual scripts which produced the various figures and the associated input data, the model raytrace files, and all scripts, parameter files, and input data necessary to generate the raytraces. These data are all available electronically as either ASCII or FITS files. The handbook is intended to be a living document and will be updated as new information and/or fabrication data on the HRMA are obtained, or when the need for additional results is indicated. The SAO Mission Support Team (MST) is developing a high-fidelity HRMA model, consisting of analytical and numerical calculations, computer software, and databases of fundamental physical constants, laboratory measurements, configuration data, finite element models, AXAF assembly data, and so on. This model serves as the basis for the simulations presented in the handbook. The 'core' of the model is the raytrace package OSAC, which we have substantially modified and now refer to as SAOsac. One major structural modification to the software has been to utilize the UNIX binary pipe data transport mechanism for passing rays between program modules. This change has made it possible to simulate rays which are distributed randomly over the entrance aperture of the telescope. It has also resulted in a highly efficient system for tracing large numbers of rays. In one application to date (the analysis of VETA-I ring focus data) we have employed 2 x 10^7 rays, a substantial improvement over the limit of 1 x 10^4 rays in the original OSAC module. A second major modification is the manner in which SAOsac incorporates low spatial frequency surface errors into the geometric raytrace
Why Bother to Calibrate? Model Consistency and the Value of Prior Information
NASA Astrophysics Data System (ADS)
Hrachowitz, Markus; Fovet, Ophelie; Ruiz, Laurent; Euser, Tanja; Gharari, Shervan; Nijzink, Remko; Savenije, Hubert; Gascuel-Odoux, Chantal
2015-04-01
Hydrological models frequently suffer from limited predictive power despite adequate calibration performance. This can indicate insufficient representation of the underlying processes. Thus, ways are sought to increase model consistency while satisfying the contrasting priorities of increased model complexity and limited equifinality. In this study the value of a systematic use of hydrological signatures and expert knowledge for increasing model consistency was tested. It was found that a simple conceptual model, constrained by 4 calibration objective functions, was able to adequately reproduce the hydrograph in the calibration period. The model, however, could not reproduce 20 hydrological signatures, indicating a lack of model consistency. Subsequently, testing 11 models, model complexity was increased in a stepwise way and counter-balanced by prior information about the system, imposing "prior constraints" inferred from expert knowledge to ensure a model that behaves well with respect to the modeller's perception of the system. We showed that, in spite of unchanged calibration performance, the most complex model set-up exhibited increased performance in the independent test period and skill to reproduce all 20 signatures, indicating a better system representation. The results suggest that a model may be inadequate despite good performance with respect to multiple calibration objectives, and that increasing model complexity, if efficiently counter-balanced by available prior constraints, can increase the predictive performance of a model and its skill to reproduce hydrological signatures. The results strongly illustrate the need to balance automated model calibration with a more expert-knowledge-driven strategy of constraining models.
NASA Technical Reports Server (NTRS)
Hovenac, Edward A.; Lock, James A.
1993-01-01
Scattering calculations using a detailed model of the multimode laser beam in the forward-scattering spectrometer probe (FSSP) were carried out using a recently developed extension to Mie scattering theory. From this model, new calibration curves for the FSSP were calculated. The difference between the old calibration curves and the new ones is small for droplet diameters less than 10 microns, but the difference increases to approximately 10 percent at diameters of 50 microns. When using glass beads to calibrate the FSSP, calibration errors can be minimized by using glass beads of many different diameters, over the entire range of the FSSP. If the FSSP is calibrated using one-diameter glass beads, then the new formalism is necessary to extrapolate the calibration over the entire range.
NASA Technical Reports Server (NTRS)
Hovenac, Edward A.; Lock, James A.
1993-01-01
Scattering calculations using a more detailed model of the multimode laser beam in the forward-scattering spectrometer probe (FSSP) were carried out by using a recently developed extension to Mie scattering theory. From this model, new calibration curves for the FSSP were calculated. The difference between the old calibration curves and the new ones is small for droplet diameters less than 10 micrometers, but the difference increases to approximately 10% at diameters of 50 micrometers. When using glass beads to calibrate the FSSP, calibration errors can be minimized by using glass beads of many different diameters, over the entire range of the FSSP. If the FSSP is calibrated using one-diameter glass beads, then the new formalism is necessary to extrapolate the calibration over the entire range.
Automation of sample plan creation for process model calibration
NASA Astrophysics Data System (ADS)
Oberschmidt, James; Abdo, Amr; Desouky, Tamer; Al-Imam, Mohamed; Krasnoperova, Azalia; Viswanathan, Ramya
2010-04-01
The process of preparing a sample plan for optical and resist model calibration has always been tedious, not only because the plan must accurately represent full-chip designs with countless combinations of widths, spaces, and environments, but also because of the constraints imposed by metrology, which may limit the number of structures to be measured. There are also limits on the types of these structures, mainly due to the variation in measurement accuracy across different types of geometries. For instance, pitch measurements are normally more accurate than corner rounding. Thus, only certain geometrical shapes are typically considered when creating a sample plan. In addition, the time factor is becoming crucial as we migrate from one technology node to another due to the increase in the number of development and production nodes, and the process becomes more complicated if process-window-aware models are to be developed in a reasonable time frame; thus there is a need for reliable methods to choose sample plans which also help reduce cycle time. In this context, an automated flow is proposed for sample plan creation. Once the illumination and film stack are defined, all the errors in the input data are fixed and sites are centered. Then, bad sites are excluded. Afterwards, the clean data are reduced based on geometrical resemblance. Also, an editable database of measurement-reliable and critical structures is provided, and their percentage in the final sample plan as well as the total number of 1D/2D samples can be predefined. The flow has the advantage of eliminating manual selection or filtering techniques, it provides powerful tools for customizing the final plan, and the time needed to generate these plans is greatly reduced.
Austin, Peter C; Steyerberg, Ewout W
2014-02-10
Predicting the probability of the occurrence of a binary outcome or condition is important in biomedical research. While assessing discrimination is an essential issue in developing and validating binary prediction models, less attention has been paid to methods for assessing model calibration. Calibration refers to the degree of agreement between observed and predicted probabilities and is often assessed by testing for lack-of-fit. The objective of our study was to examine the ability of graphical methods to assess the calibration of logistic regression models. We examined lack of internal calibration, which was related to misspecification of the logistic regression model, and external calibration, which was related to an overfit model or to shrinkage of the linear predictor. We conducted an extensive set of Monte Carlo simulations with a locally weighted least squares regression smoother (i.e., the loess algorithm) to examine the ability of graphical methods to assess model calibration. We found that loess-based methods were able to provide evidence of moderate departures from linearity and indicate omission of a moderately strong interaction. Misspecification of the link function was harder to detect. Visual patterns were clearer with higher sample sizes, higher incidence of the outcome, or higher discrimination. Loess-based methods were also able to identify the lack of calibration in external validation samples when an overfit regression model had been used. In conclusion, loess-based smoothing methods are adequate tools to graphically assess calibration and merit wider application.
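The graphical loess-based check described above can be sketched in a few lines. The smoother below is a minimal tricube-weighted local linear regression standing in for the loess algorithm, and the data are synthetic (a perfectly calibrated model), not from the study:

```python
import numpy as np

def lowess_smooth(x, y, frac=0.4):
    """Minimal tricube-weighted local linear regression (loess-style)."""
    n = len(x)
    k = max(2, int(np.ceil(frac * n)))          # neighbors per local fit
    fitted = np.empty(n)
    A = np.column_stack([np.ones(n), x])
    for i in range(n):
        d = np.abs(x - x[i])
        h = np.sort(d)[k - 1]                   # bandwidth: k-th nearest distance
        w = np.clip(1 - (d / max(h, 1e-12)) ** 3, 0, 1) ** 3   # tricube weights
        sw = np.sqrt(w)
        beta, *_ = np.linalg.lstsq(A * sw[:, None], y * sw, rcond=None)
        fitted[i] = beta[0] + beta[1] * x[i]
    return fitted

# Calibration-plot data: predicted probabilities vs. binary outcomes
# from a synthetic, perfectly calibrated model.
rng = np.random.default_rng(0)
p_pred = np.sort(rng.uniform(0.05, 0.95, 400))
y_obs = (rng.uniform(size=400) < p_pred).astype(float)
smoothed = lowess_smooth(p_pred, y_obs)
# For a calibrated model the smoothed curve should hug the 45-degree line.
max_dev = np.max(np.abs(smoothed - p_pred))
```

Plotting `smoothed` against `p_pred` gives the calibration curve; systematic departures from the diagonal indicate lack of calibration.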
An analysis of calibration curve models for solid-state heat-flow calorimeters
Hypes, P. A.; Bracken, D. S.; McCabe, G.
2001-01-01
Various calibration curve models for solid-state calorimeters are compared to determine which model best fits the calibration data. The calibration data are discussed, the criteria used to select the best model are explained, and a conclusion regarding the best model for the calibration curve is presented. These results can also be used to evaluate the random and systematic error of a calorimetric measurement. A linear/quadratic model has been used for decades to fit the calibration curves of Wheatstone bridge calorimeters, with excellent results. The Multical software package uses this model for the calibration curve, a choice supported by 40 years [1] of calorimeter data. There is good empirical support for the linear/quadratic model: calorimeter response is strongly linear, and calorimeter sensitivity is slightly lower at higher powers, which the negative coefficient of the x^2 term accounts for. The solid-state calorimeter is operated using the Multical [2] software package. An investigation was undertaken to determine whether the linear/quadratic model is the best model for the new sensor technology used in the solid-state calorimeter.
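Fitting and inverting a linear/quadratic calibration curve of the kind discussed above is straightforward; the power and reading values below are hypothetical, not instrument data:

```python
import numpy as np

# Hypothetical calorimeter calibration data: bridge response (mV) at
# known powers (W); values are illustrative, not instrument readings.
power = np.array([0.5, 1.0, 2.0, 4.0, 8.0, 16.0])
reading = (10.0 * power - 0.05 * power**2
           + np.array([0.01, -0.02, 0.015, -0.01, 0.02, -0.015]))  # small noise

# Fit the linear/quadratic calibration model: reading = b0 + b1*P + b2*P^2.
b2, b1, b0 = np.polyfit(power, reading, 2)   # highest power first

# Invert the calibration to recover power from a new reading.
new_reading = 39.0
roots = np.roots([b2, b1, b0 - new_reading])
roots = roots[np.isreal(roots)].real
power_est = roots[(roots > 0) & (roots < 20)][0]  # root in the calibrated range
```

The negative fitted `b2` reproduces the slightly reduced sensitivity at higher powers described in the abstract.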
Automatic Calibration of a Semi-Distributed Hydrologic Model Using Particle Swarm Optimization
NASA Astrophysics Data System (ADS)
Bekele, E. G.; Nicklow, J. W.
2005-12-01
Hydrologic simulation models need to be calibrated and validated before they are used for operational predictions. Spatially-distributed hydrologic models generally have a large number of parameters to capture the various physical characteristics of a hydrologic system. Manual calibration of such models is a tedious and daunting task, and its success depends on the subjective assessment of a particular modeler, including knowledge of the basic approaches and interactions in the model. To alleviate these shortcomings, an automatic calibration model, which employs an evolutionary optimization technique known as the Particle Swarm Optimizer (PSO) for parameter estimation, is developed. PSO is a heuristic search algorithm inspired by the social behavior of bird flocking and fish schooling. The newly-developed calibration model is integrated with the U.S. Department of Agriculture's Soil and Water Assessment Tool (SWAT), a physically-based, semi-distributed hydrologic model developed to predict the long-term impacts of land management practices on water, sediment, and agricultural chemical yields in large, complex watersheds with varying soils, land use, and management conditions. SWAT was calibrated for streamflow and sediment concentration. The calibration process involves parameter specification, whereby sensitive model parameters are identified, and parameter estimation. To reduce the number of parameters to be calibrated, parameterization was performed. The methodology is applied to a demonstration watershed known as Big Creek, located in southern Illinois. Application results show the effectiveness of the approach, and model predictions are significantly improved.
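A minimal global-best PSO of the general kind described can be sketched as follows; it is applied to a toy quadratic misfit rather than SWAT itself, and all parameter settings (inertia, cognitive/social weights, swarm size) are illustrative:

```python
import numpy as np

def pso(objective, bounds, n_particles=30, iters=200, seed=1):
    """Minimal particle swarm optimizer (global-best topology)."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, float).T
    dim = len(bounds)
    x = rng.uniform(lo, hi, (n_particles, dim))      # particle positions
    v = np.zeros_like(x)                             # particle velocities
    pbest = x.copy()                                 # personal bests
    pbest_f = np.apply_along_axis(objective, 1, x)
    gbest = pbest[np.argmin(pbest_f)].copy()         # global best
    w, c1, c2 = 0.7, 1.5, 1.5                        # inertia, cognitive, social
    for _ in range(iters):
        r1, r2 = rng.uniform(size=(2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)                   # keep particles in bounds
        f = np.apply_along_axis(objective, 1, x)
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        gbest = pbest[np.argmin(pbest_f)].copy()
    return gbest, pbest_f.min()

# Toy "calibration": recover three hypothetical model parameters by
# minimizing a squared-error misfit (stand-in for a streamflow objective).
target = np.array([0.3, -1.2, 2.5])
best, err = pso(lambda p: np.sum((p - target) ** 2), [(-5, 5)] * 3)
```

In a real application `objective` would run the hydrologic model and return an error metric against observed streamflow or sediment data.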
Calibration and uncertainty issues of a hydrological model (SWAT) applied to West Africa
NASA Astrophysics Data System (ADS)
Schuol, J.; Abbaspour, K. C.
2006-09-01
Distributed hydrological models like SWAT (Soil and Water Assessment Tool) are often highly over-parameterized, making parameter specification and parameter estimation inevitable steps in model calibration. Manual calibration is almost infeasible due to the complexity of large-scale models with many objectives. Therefore we used a multi-site semi-automated inverse modelling routine (SUFI-2) for calibration and uncertainty analysis. Nevertheless, the question of when a model is sufficiently calibrated remains open and requires a project-dependent definition. Due to the non-uniqueness of effective parameter sets, parameter calibration and prediction uncertainty of a model are intimately related. We address some calibration and uncertainty issues using SWAT to model a four million km2 area in West Africa, comprising mainly the basins of the Niger, Volta, and Senegal rivers. This model is a case study in a larger project whose goal is to quantify the amount of global country-based available freshwater. Annual and monthly simulations with the "calibrated" model for West Africa show promising results with respect to the freshwater quantification, but also point out the importance of evaluating the conceptual model uncertainty as well as the parameter uncertainty.
Technology Transfer Automated Retrieval System (TEKTRAN)
Watershed simulation models are used extensively to investigate hydrologic processes, landuse and climate change impacts, pollutant load assessments and best management practices (BMPs). Developing, calibrating and validating these models require a number of critical decisions that will influence t...
Comparison of global optimization approaches for robust calibration of hydrologic model parameters
NASA Astrophysics Data System (ADS)
Jung, I. W.
2015-12-01
Robustness of the calibrated parameters of hydrologic models is necessary to provide a reliable prediction of future watershed behavior under varying climate conditions. This study investigated calibration performance as a function of the length of the calibration period, the objective function, the hydrologic model structure, and the optimization method. To do this, the combination of three global optimization methods (SCE-UA, Micro-GA, and DREAM) and four hydrologic models (SAC-SMA, GR4J, HBV, and PRMS) was tested with different calibration periods and objective functions. Our results showed that the three global optimization methods provided similar calibration performance across the different calibration periods, objective functions, and hydrologic models. However, using the index of agreement, normalized root mean square error, or Nash-Sutcliffe efficiency as the objective function yielded better performance than using the correlation coefficient or percent bias. Results for calibration periods ranging from one to seven years were hard to generalize because the four hydrologic models differ in complexity and different years differ in the information content of their hydrological observations. Acknowledgements This research was supported by a grant (14AWMP-B082564-01) from the Advanced Water Management Research Program funded by the Ministry of Land, Infrastructure and Transport of the Korean government.
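The objective functions compared in studies like this one are short to implement; the functions below follow their standard definitions, and the discharge arrays are invented for illustration:

```python
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe efficiency (1 = perfect; < 0 = worse than the mean)."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def index_of_agreement(obs, sim):
    """Willmott's index of agreement d, bounded in [0, 1]."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    denom = np.sum((np.abs(sim - obs.mean()) + np.abs(obs - obs.mean())) ** 2)
    return 1 - np.sum((obs - sim) ** 2) / denom

def nrmse(obs, sim):
    """Root mean square error normalized by the observed range."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return np.sqrt(np.mean((obs - sim) ** 2)) / (obs.max() - obs.min())

def pbias(obs, sim):
    """Percent bias; positive means the model underestimates on average."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 100 * np.sum(obs - sim) / np.sum(obs)

# Illustrative observed and simulated discharges (not study data).
q_obs = np.array([5.0, 8.0, 12.0, 20.0, 9.0, 6.0])
q_sim = np.array([5.5, 7.0, 13.0, 18.0, 10.0, 6.5])
```

Any of these can serve as the objective handed to a global optimizer such as SCE-UA or DREAM.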
NASA Astrophysics Data System (ADS)
Zhang, Y.; Wang, B.; Vaze, J.; Chiew, F. H.; Guerschman, J. P.; McVicar, T.
2012-12-01
Estimating runoff in ungauged or poorly gauged catchments is one of the most challenging tasks in surface water hydrology. This study focuses on runoff estimates across large regions using multiple data sources together with different regional model calibration schemes. First, 228 gauged catchments widely distributed across south-east Australia (~1.4 million km2) are selected. Half of the catchments are randomly selected for regional model calibrations and the remainder are used for cross-validations. Four rainfall-runoff and landscape hydrological models (Xinanjiang, AWBM, Sacramento and AWRA-L) are regionally calibrated against multiple data sources, including recorded daily streamflow, gridded monthly remotely-sensed actual evapotranspiration (ETa), and gridded daily remotely-sensed soil moisture (SM) data. The modeling results are assessed against recorded streamflow, remotely-sensed ETa, and remotely-sensed SM in the other half of the catchments. Results indicate that the multi-objective calibrations are better than the traditional model calibration solely against streamflow data, in terms of overall model performance in simulating daily runoff, monthly actual ET, and daily SM in the validation catchments. The runoff prediction results using the regional model calibration schemes perform similarly to (or slightly better than) the traditional regionalization approach, i.e., the nearest neighbor (or spatial proximity) approach. However, the regional model calibration approach has an important advantage for runoff estimates across large regions where gauging stations are relatively sparse.
NASA Astrophysics Data System (ADS)
Klostermann, U. K.; Mülders, T.; Schmöller, T.; Lorusso, G. F.; Hendrickx, E.
2010-04-01
In this paper, we discuss the performance of EUV resist models in terms of predictive accuracy, and we assess the readiness of the corresponding model calibration methodology. The study is done on an extensive OPC data set collected at IMEC for the ShinEtsu resist SEVR-59 on the ASML EUV Alpha Demo Tool (ADT), with the data set including more than a thousand CD values. We address practical aspects such as the speed of calibration and the selection of calibration patterns. The model is calibrated on 12 process window data series varying in pattern width (32, 36, 40 nm), orientation (H, V) and pitch (dense, isolated). The minimum measured feature size at the nominal process condition is a 32 nm CD at a dense pitch of 64 nm. Mask metrology is applied to verify and, where necessary, correct the nominal width of the drawn CD. Cross-sectional SEM information is included in the calibration to tune the simulated resist loss and sidewall angle. The achieved calibration RMS is ~1.0 nm. We show which elements are important to obtain a well-calibrated model. We discuss the impact of 3D mask effects on the Bossung tilt. We demonstrate that a correct representation of the flare level during the calibration is important to achieve high predictability at various flare conditions. Although the model calibration is performed on a limited subset of the measurement data (one-dimensional structures only), its accuracy is validated on a large number of OPC patterns (at nominal dose and focus conditions) not included in the calibration; validation RMS results as small as 1 nm can be reached. Furthermore, we study the model's extendibility to two-dimensional end of line (EOL) structures. Finally, we correlate the experimentally observed fingerprint of the CD uniformity to a model in which EUV tool specific signatures are taken into account.
Reaction-based reactive transport modeling of Fe(III)
Kemner, K.M.; Kelly, S.D.; Burgos, Bill; Roden, Eric
2006-06-01
This research project (started Fall 2004) was funded by a grant to Argonne National Laboratory, The Pennsylvania State University, and The University of Alabama in the Integrative Studies Element of the NABIR Program (DE-FG04-ER63914/63915/63196). Dr. Eric Roden, formerly at The University of Alabama, is now at the University of Wisconsin, Madison. Our project focuses on the development of a mechanistic understanding and quantitative models of coupled Fe(III)/U(VI) reduction in FRC Area 2 sediments. This work builds on our previous studies of microbial Fe(III) and U(VI) reduction, and is directly aligned with the Scheibe et al. NABIR FRC Field Project at Area 2.
Modified calibration protocol evaluated in a model-based testing of SBR flexibility.
Corominas, Lluís; Sin, Gürkan; Puig, Sebastià; Balaguer, Maria Dolors; Vanrolleghem, Peter A; Colprim, Jesús
2011-02-01
The purpose of this paper is to refine the BIOMATH calibration protocol for SBR systems, in particular to develop a pragmatic calibration protocol that takes advantage of SBR information-rich data, defines a simulation strategy to obtain proper initial conditions for model calibration and provides statistical evaluation of the calibration outcome. The updated calibration protocol is then evaluated on a case study to obtain a thoroughly validated model for testing the flexibility of an N-removing SBR to adapt the operating conditions to the changing influent wastewater load. The performance of reference operation using fixed phase length and dissolved oxygen set points and two real-time control strategies is compared to find optimal operation under dynamic conditions. The results show that a validated model of high quality is obtained using the updated protocol and that the optimization of the system's performance can be achieved in different manners by implementing the proposed control strategies.
On the interpretation of recharge estimates from steady-state model calibrations.
Anderson, William P; Evans, David G
2007-01-01
Ground water recharge is often estimated through the calibration of ground water flow models. We examine the nature of calibration errors by considering some simple mathematical and numerical calculations. From these calculations, we conclude that calibrating a steady-state ground water flow model to water level extremes yields estimates of recharge that have the same value as the time-varying recharge at the time the water levels are measured. These recharge values, however, are a subdued version of the actual transient recharge signal. In addition, calibrating a steady-state ground water flow model to data collected during periods of rising water levels will produce recharge values that underestimate the actual transient recharge. Similarly, calibrating during periods of falling water levels will overestimate the actual transient recharge. We also demonstrate that average water levels can be used to estimate the actual average recharge rate provided that water level data have been collected for a sufficient amount of time.
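The argument can be reproduced numerically with a linear-reservoir sketch; the aquifer model, constants, and recharge signal below are illustrative assumptions, not taken from the paper:

```python
import numpy as np

# Linear-reservoir aquifer with unit storage: dh/dt = R(t) - k*h.
# A steady-state calibration that matches an observed level h infers
# R_est = k*h, which equals R(t) - dh/dt at the measurement time.
k = 0.01                    # 1/day, outflow constant (illustrative)
dt = 1.0                    # day
t = np.arange(0.0, 730.0, dt)
R = 1.0 + 0.5 * np.sin(2 * np.pi * t / 365.0)   # true transient recharge

h = np.empty_like(t)
h[0] = R[0] / k             # start at equilibrium with the initial recharge
for i in range(len(t) - 1):                      # explicit Euler integration
    h[i + 1] = h[i] + dt * (R[i] - k * h[i])

R_est = k * h               # recharge inferred by steady-state calibration
dhdt = np.gradient(h, dt)
rising = dhdt > 0

# The inferred signal is a subdued version of the true one, and calibrating
# while levels rise underestimates the contemporaneous recharge.
amp_true = R.max() - R.min()
amp_est = R_est[365:].max() - R_est[365:].min()  # after the initial transient
```

The time-averaged inferred recharge still matches the true average, echoing the paper's point that average water levels recover the average recharge rate.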
Genetic Algorithm Calibration of Probabilistic Cellular Automata for Modeling Mining Permit Activity
Louis, S.J.; Raines, G.L.
2003-01-01
We use a genetic algorithm to calibrate a spatially and temporally resolved cellular automaton to model mining activity on public land in Idaho and western Montana. The genetic algorithm searches through a space of transition rule parameters of a two-dimensional cellular automaton model to find rule parameters that fit observed mining activity data. Previous work by one of the authors in calibrating the cellular automaton took weeks; the genetic algorithm takes a day and produces rules leading to about the same (or better) fit to observed data. These preliminary results indicate that genetic algorithms are a viable tool for calibrating cellular automata for this application. Experience gained during the calibration of this cellular automaton suggests that mineral resource information is a critical factor in the quality of the results. With automated calibration, further refinements of how the mineral-resource information is provided to the cellular automaton will probably improve our model.
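A minimal real-coded genetic algorithm of the general kind described can be sketched as follows. It calibrates three hypothetical rule parameters against a toy misfit; the cellular automaton itself is not modeled here, and all operator settings are illustrative:

```python
import random

def ga_calibrate(fitness, dim, pop_size=40, gens=100, seed=2):
    """Minimal real-coded GA: tournament selection, blend crossover,
    Gaussian mutation, and elitism. Parameters live in [0, 1]."""
    rng = random.Random(seed)
    pop = [[rng.uniform(0, 1) for _ in range(dim)] for _ in range(pop_size)]

    def tournament():
        a, b = rng.sample(pop, 2)
        return a if fitness(a) < fitness(b) else b

    for _ in range(gens):
        new_pop = [min(pop, key=fitness)]            # keep the best (elitism)
        while len(new_pop) < pop_size:
            p1, p2 = tournament(), tournament()
            alpha = rng.random()                     # blend crossover
            child = [alpha * x + (1 - alpha) * y for x, y in zip(p1, p2)]
            if rng.random() < 0.2:                   # Gaussian mutation
                i = rng.randrange(dim)
                child[i] = min(1.0, max(0.0, child[i] + rng.gauss(0, 0.1)))
            new_pop.append(child)
        pop = new_pop
    return min(pop, key=fitness)

# Toy misfit: squared distance from hypothetical "observed" rule parameters.
observed = [0.25, 0.7, 0.5]
best = ga_calibrate(lambda p: sum((a - b) ** 2 for a, b in zip(p, observed)), 3)
```

In the application described above, `fitness` would run the cellular automaton with candidate rule parameters and score the simulated mining activity against the observed data.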
Wolfrum, E. J.; Sluiter, A. D.
2009-01-01
We have studied rapid calibration models to predict the composition of a variety of biomass feedstocks by correlating near-infrared (NIR) spectroscopic data to compositional data produced using traditional wet chemical analysis techniques. The rapid calibration models are developed using multivariate statistical analysis of the spectroscopic and wet chemical data. This work discusses the latest versions of the NIR calibration models for corn stover feedstock and dilute-acid pretreated corn stover. Measures of the calibration precision and uncertainty are presented. No statistically significant differences (p = 0.05) are seen between NIR calibration models built using different mathematical pretreatments. Finally, two common algorithms for building NIR calibration models are compared; no statistically significant differences (p = 0.05) are seen for the major constituents glucan, xylan, and lignin, but the algorithms did produce different predictions for total extractives. A single calibration model combining the corn stover feedstock and dilute-acid pretreated corn stover samples gave less satisfactory predictions than the separate models.
Exploring a Three-Level Model of Calibration Accuracy
ERIC Educational Resources Information Center
Schraw, Gregory; Kuch, Fred; Gutierrez, Antonio P.; Richmond, Aaron S.
2014-01-01
We compared 5 different statistics (i.e., G index, gamma, "d'", sensitivity, specificity) used in the social sciences and medical diagnosis literatures to assess calibration accuracy in order to examine the relationship among them and to explore whether one statistic provided a best fitting general measure of accuracy. College…
NASA Astrophysics Data System (ADS)
Willem Vervoort, R.; Miechels, Susannah F.; van Ogtrop, Floris F.; Guillaume, Joseph H. A.
2014-11-01
Physically representative hydrological models are essential for water resource management. New satellite evapotranspiration (ETobs) data might offer opportunities to improve model structure and parameter identifiability if used as an independent calibration set. This study used a modelling experiment on 4 catchments in New South Wales, Australia, to investigate whether MODIS (16A3) ETobs can be used to improve parameter calibration for low-parameter conceptual models. The catchment moisture deficit and exponential routing form of the model IHACRES was used to test calibration against streamflow, MODIS ETobs, or a combination set of the two. Results were compared against a regionalized parameter model and a model using MODIS ETobs directly as input. Firstly, the results indicated that the observed water balance of the catchments has large, currently unexplained, positive differences which impact the calibrated parameters. More generally, using MODIS ETobs as a calibration set results in a reduction of the model performance, as all residuals of the local water balance and timing differences between the water balance and the outflow need to be resolved by the routing component of the model. This is further complicated by variations in land cover affecting the MODIS ETobs. Finally this study confirms that the calibration of models using multiple environmental timeseries (such as MODIS ETobs and Q) can be used to identify structural model issues.
Calibration of the γ-Equation Transition Model for High Reynolds Flows at Low Mach
NASA Astrophysics Data System (ADS)
Colonia, S.; Leble, V.; Steijl, R.; Barakos, G.
2016-09-01
The numerical simulation of flows over large-scale wind turbine blades without considering the transition from laminar to fully turbulent flow may result in incorrect estimates of the blade loads and performance. Thanks to its relative simplicity and promising results, the Local-Correlation-based Transition Modelling concept represents a valid way to include transitional effects in practical CFD simulations. However, the model involves coefficients that need tuning. In this paper, the γ-equation transition model is assessed and calibrated for a wide range of Reynolds numbers at low Mach, as needed for wind turbine applications. An aerofoil is used to evaluate the original model and calibrate it, while a large-scale wind turbine blade is employed to show that the calibrated model can lead to reliable solutions for complex three-dimensional flows. The calibrated model shows promising results for both two-dimensional and three-dimensional flows, even though cross-flow instabilities are neglected.
Calibration Methods Used in Cancer Simulation Models and Suggested Reporting Guidelines
Stout, Natasha K.; Knudsen, Amy B.; Kong, Chung Yin (Joey); McMahon, Pamela M.; Gazelle, G. Scott
2009-01-01
Background Increasingly, computer simulation models are used for economic and policy evaluation in cancer prevention and control. A model’s predictions of key outcomes such as screening effectiveness depend on the values of unobservable natural history parameters. Calibration is the process of determining the values of unobservable parameters by constraining model output to replicate observed data. Because there are many approaches for model calibration and little consensus on best practices, we surveyed the literature to catalogue the use and reporting of these methods in cancer simulation models. Methods We conducted a MEDLINE search (1980 through 2006) for articles on cancer screening models and supplemented search results with articles from our personal reference databases. For each article, two authors independently abstracted pre-determined items using a standard form. Data items included cancer site, model type, methods used for determination of unobservable parameter values, and description of any calibration protocol. All authors reached consensus on items of disagreement. Reviews and non-cancer models were excluded. Articles describing analytical models which estimate parameters with statistical approaches (e.g., maximum likelihood) were catalogued separately. Models that included unobservable parameters were analyzed and classified by whether calibration methods were reported and if so, the methods used. Results The review process yielded 154 articles that met our inclusion criteria and of these, we concluded that 131 may have used calibration methods to determine model parameters. Although the term “calibration” was not always used, descriptions of calibration or “model fitting” were found in 50% (n=66) of the articles with an additional 16% (n=21) providing a reference to methods. Calibration target data were identified in nearly all of these articles. Other methodologic details such as the goodness-of-fit metric were discussed in 54% (n=47
Regression Model Term Selection for the Analysis of Strain-Gage Balance Calibration Data
NASA Technical Reports Server (NTRS)
Ulbrich, Norbert Manfred; Volden, Thomas R.
2010-01-01
The paper discusses the selection of regression model terms for the analysis of wind tunnel strain-gage balance calibration data. Different function class combinations are presented that may be used to analyze calibration data using either a non-iterative or an iterative method. The role of the intercept term in a regression model of calibration data is reviewed. In addition, useful algorithms and metrics originating from linear algebra and statistics are recommended that will help an analyst (i) to identify and avoid both linear and near-linear dependencies between regression model terms and (ii) to make sure that the selected regression model of the calibration data uses only statistically significant terms. Three different tests are suggested that may be used to objectively assess the predictive capability of the final regression model of the calibration data. These tests use both the original data points and regression model independent confirmation points. Finally, data from a simplified manual calibration of the Ames MK40 balance is used to illustrate the application of some of the metrics and tests to a realistic calibration data set.
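Two of the linear-algebra screens recommended above, the design-matrix condition number and variance inflation factors, can be sketched as follows; the regressors and term names are synthetic, not balance data:

```python
import numpy as np

# Candidate regression terms for a hypothetical two-gage calibration model:
# intercept, n1, n2, n1*n2, n1^2 (synthetic data for illustration).
rng = np.random.default_rng(3)
n1 = rng.uniform(-1.0, 1.0, 50)
n2 = rng.uniform(-1.0, 1.0, 50)
X = np.column_stack([np.ones(50), n1, n2, n1 * n2, n1**2])

def vif(X, j):
    """Variance inflation factor of column j: regress it on the others."""
    others = np.delete(X, j, axis=1)
    beta, *_ = np.linalg.lstsq(others, X[:, j], rcond=None)
    resid = X[:, j] - others @ beta
    r2 = 1.0 - resid.var() / X[:, j].var()
    return 1.0 / (1.0 - r2)

cond_ok = np.linalg.cond(X)
vifs = [vif(X, j) for j in range(1, X.shape[1])]    # skip the intercept

# Adding a term that is almost a multiple of n1 creates a near-linear
# dependency, which the condition number immediately flags.
X_bad = np.column_stack([X, 2.0 * n1 + 1e-8 * n2])
cond_bad = np.linalg.cond(X_bad)
```

A large condition number or a large VIF flags a term that should be dropped or re-expressed before the regression model is fit.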
Matching Images to Models: Camera Calibration for 3-D Surface Reconstruction
NASA Technical Reports Server (NTRS)
Morris, Robin D.; Smelyanskiy, Vadim N.; Cheeseman, Peter C.; Norvig, Peter (Technical Monitor)
2001-01-01
In a previous paper we described a system which recursively recovers a super-resolved three dimensional surface model from a set of images of the surface. In that paper we assumed that the camera calibration for each image was known. In this paper we solve two problems. Firstly, if an estimate of the surface is already known, the problem is to calibrate a new image relative to the existing surface model. Secondly, if no surface estimate is available, the relative camera calibration between the images in the set must be estimated. This will allow an initial surface model to be estimated. Results of both types of estimation are given.
Seismology on a Comet: Calibration Measurements, Modeling and Inversion
NASA Astrophysics Data System (ADS)
Faber, C.; Hoppe, J.; Knapmeyer, M.; Fischer, H.; Seidensticker, K. J.
2011-12-01
The Rosetta mission was launched to comet 67P/Churyumov-Gerasimenko in 2004. It will finally reach the comet and deliver the lander Philae to the surface of the nucleus in November 2014. The Lander carries ten experiments, one of which is the Surface Electric Sounding and Acoustic Monitoring Experiment (SESAME). Part of this experiment is the Comet Acoustic Surface Sounding Experiment (CASSE), housed in the three feet of the lander. The primary goal of CASSE is to determine the elastic parameters of the surface material, such as the Young's modulus and the Poisson ratio. Additional goals are the determination of shallow structure, the quantification of porosity, and the location of activity spots and of thermally and impact-induced cometary activity. We conduct calibration measurements with accelerometers identical to the flight model. The goal of these measurements is to develop inversion procedures for travel times and to estimate the expected accuracy that CASSE can achieve in terms of elastic wave velocity, elastic parameters, and source location. The experiments are conducted mainly on sandy soil, in dry, wet, or frozen conditions, and away from buildings with their reflecting walls and artificial noise sources. We expect that natural sources, like thermal cracking at sunrise and sunset, can be located to an accuracy of about 10 degrees in direction and a few decimeters (1σ) in distance if they occur within the sensor triangle and from first arrivals alone. The accuracy of the direction is essentially independent of the distance, whereas distance determination depends critically on the identification of later arrivals. Determination of elastic wave velocities on the comet will be conducted with controlled sources at known positions and is likely to achieve an accuracy of σ=15% for the velocity of the first arriving wave. Limitations are due to the fixed source-receiver geometry and the wavelength emitted by the CASSE piezo-ceramic sources. In addition to the
McFadden, Sam X.; Korellis, John S.; Lee, Kenneth L.; Rogillio, Brendan R.; Hatch, Paul W.
2008-03-01
Experimental data for material plasticity and failure model calibration and validation were obtained from 6061-T651 aluminum, in the form of a 4-in. diameter extruded rod. Model calibration data were taken from smooth tension, notched tension, and shear tests. Model validation data were provided from experiments using thin-walled tube specimens subjected to path-dependent combinations of internal pressure, extension, and torsion.
NASA Astrophysics Data System (ADS)
McGurk, B. J.; Painter, T. H.
2014-12-01
Deterministic snow accumulation and ablation simulation models are widely used by runoff managers throughout the world to predict runoff quantities and timing. Model fitting is typically based on matching modeled runoff volumes and timing with observed flow time series at a few points in the basin. In recent decades, sparse networks of point measurements of the mountain snowpack have been available to compare with the modeled snowpack, but the comparability of results from a snow sensor or course to model polygons of 5 to 50 sq. km is suspect. However, snowpack extent, depth, and derived snow water equivalent have been produced by the NASA/JPL Airborne Snow Observatory (ASO) mission for the springs of 2013 and 2014 in the Tuolumne River basin above Hetch Hetchy Reservoir. These high-resolution snowpack data have exposed the weakness in a model calibration based on runoff alone. The U.S. Geological Survey's Precipitation Runoff Modeling System (PRMS) calibration, based on 30 years of inflow to Hetch Hetchy, produces reasonable inflow results, but modeled spatial snowpack location and water quantity diverged significantly from the weekly measurements made by ASO during the two ablation seasons. The reason is that the PRMS model has many flow paths, storages, and water transfer equations, and a calibrated outflow time series can be right for many wrong reasons. The addition of detailed knowledge of snow extent and water content constrains the model so that it is a better representation of the actual watershed hydrology. The mechanics of recalibrating PRMS to the ASO measurements will be described, and comparisons of observed versus modeled flow for both a small subbasin and the entire Hetch Hetchy basin will be shown. The recalibrated model provided a better fit to the snowmelt recession, a key factor for water managers as they balance declining inflows with demand for power generation and ecosystem releases during the final months of snowmelt runoff.
Stepwise, multiple objective calibration of a hydrologic model for a snowmelt dominated basin
Hay, L.E.; Leavesley, G.H.; Clark, M.P.; Markstrom, S.L.; Viger, R.J.; Umemoto, M.
2006-01-01
The ability to apply a hydrologic model to large numbers of basins for forecasting purposes requires a quick and effective calibration strategy. This paper presents a stepwise, multiple-objective, automated procedure for hydrologic model calibration. This procedure includes the sequential calibration of a model's simulation of solar radiation (SR), potential evapotranspiration (PET), water balance, and daily runoff. The procedure uses the Shuffled Complex Evolution global search algorithm to calibrate the U.S. Geological Survey's Precipitation Runoff Modeling System in the Yampa River basin of Colorado. This process assures that intermediate states of the model (SR and PET on a monthly mean basis), as well as the water balance and components of the daily hydrograph, are simulated consistently with measured values.
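The stepwise idea, each step fixing the parameters calibrated earlier and tuning the next group against its own objective, can be sketched on a toy model chain; all model forms, parameter names, and values below are hypothetical:

```python
import numpy as np

def grid_search(simulate, observed, grid):
    """Pick the parameter value minimizing RMSE against one objective."""
    errs = [np.sqrt(np.mean((simulate(p) - observed) ** 2)) for p in grid]
    return grid[int(np.argmin(errs))]

t = np.arange(365.0)
forcing = 200.0 + 100.0 * np.sin(2 * np.pi * t / 365.0)   # synthetic driver
sr_true, pet_true, rc_true = 1.2, 0.8, 0.35               # "true" parameters
precip = 2.5                                              # mm/day, constant

# Synthetic "observations" generated from the true parameters.
sr_obs = sr_true * forcing                                # solar radiation
pet_obs = pet_true * 0.01 * sr_obs                        # PET from SR
q_obs = rc_true * np.clip(precip - pet_obs, 0.0, None)    # runoff

grid = np.linspace(0.05, 2.0, 391)
# Step 1: calibrate SR.  Step 2: calibrate PET with SR fixed.
# Step 3: calibrate the runoff coefficient with SR and PET fixed.
sr = grid_search(lambda p: p * forcing, sr_obs, grid)
pet = grid_search(lambda p: p * 0.01 * sr * forcing, pet_obs, grid)
rc = grid_search(lambda p: p * np.clip(precip - pet * 0.01 * sr * forcing,
                                       0.0, None), q_obs, grid)
```

In the actual procedure each grid search would be replaced by a Shuffled Complex Evolution run over the corresponding parameter group, with SR and PET matched on a monthly mean basis before the water balance and daily runoff steps.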
NASA Astrophysics Data System (ADS)
Di Luzio, Mauro; Arnold, Jeffrey G.
2004-10-01
This paper describes the background, formulation, and results of an hourly input-output calibration approach proposed for the Soil and Water Assessment Tool (SWAT) watershed model, presented for 24 representative storm events occurring between 1994 and 2000 in the Blue River watershed (1233 km², located in Oklahoma). This effort is the first follow-up to the participation in the National Weather Service Distributed Modeling Intercomparison Project (DMIP), an opportunity to apply, for the first time within the SWAT modeling framework, routines for hourly stream flow prediction based on gridded precipitation (NEXRAD) data input. Previous SWAT model simulations, uncalibrated and with moderate manual calibration (only the water balance over the calibration period), were provided for the entire set of watersheds and associated outlets for the comparison designed in the DMIP project. The extended goal of this follow-up was to verify the model's efficiency in simulating hourly hydrographs by calibrating each storm event using the formulated approach. This included a combination of manual and automatic calibration (the Shuffled Complex Evolution method) and the use of input parameter values allowed to vary only within their physical extent. While the model provided reasonable water budget results with minimal calibration, event simulations with the revised calibration were significantly improved. The combination of NEXRAD precipitation data input, the soil water balance and runoff equations, and the calibration strategy described in the paper appears to adequately describe the storm events. The presented application and the formulated calibration method are initial steps toward improving the hourly simulation of the SWAT model loading variables associated with storm flow, such as sediment and pollutants, and the success of Total Maximum Daily Load (TMDL) projects.
NASA Astrophysics Data System (ADS)
De Vleeschouwer, Niels; Pauwels, Valentijn R. N.
2013-04-01
In this research the potential of discharge-based indirect calibration of the Probability Distributed Model (PDM), a lumped rainfall-runoff (RR) model, is examined for six selected catchments in Flanders. The concept of indirect calibration indicates that the calibration data must be estimated because the catchment is ungauged. A first case in which indirect calibration is applied is that of spatial gauging divergence: because no observed discharge records are available at the outlet of the ungauged catchment, the calibration is carried out based on a rescaled discharge time series of a very similar donor catchment. The latter is selected out of a catchment population on the basis of a dissimilarity measure which takes into account the mutual catchment distance and differences in drainage area, land topography, soil composition and land cover. Both a calibration in the time domain and in the frequency domain (a.k.a. the spectral domain) are carried out. Furthermore, the case of temporal gauging divergence is considered: limited (e.g. historical or very recent) discharge records are available at the outlet of the ungauged catchment. Additionally, no time overlap exists between the forcing and discharge records; therefore, only an indirect spectral calibration can be performed in this case. Finally, the combined case of spatio-temporal gauging divergence is considered. In this last case only limited discharge records are available at the outlet of a donor catchment. Again the forcing and discharge records are not contemporaneous, which makes only an indirect spectral calibration feasible. The post-calibration model performance is assessed using four indicators: the Pearson correlation coefficient (R), the relative absolute bias (BIASn), the relative root mean square error (RMSEn) and the Nash-Sutcliffe coefficient (NS). The modelled discharge time series are found to be acceptable in all three considered cases. In the case of spatial gauging divergence, indirect
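The spectral calibration idea above can be sketched as an objective that compares discharge power spectra rather than time series; this is an illustrative reconstruction, not the authors' code:

```python
import numpy as np

def spectral_objective(q_obs, q_sim):
    """Sum of squared differences between log power spectra of two
    equal-length discharge series. Because only spectra are compared,
    the series need not overlap in time, which is the idea behind
    indirect spectral calibration."""
    p_obs = np.abs(np.fft.rfft(q_obs - q_obs.mean())) ** 2 / len(q_obs)
    p_sim = np.abs(np.fft.rfft(q_sim - q_sim.mean())) ** 2 / len(q_sim)
    return float(np.sum((np.log1p(p_obs) - np.log1p(p_sim)) ** 2))
```

A time-shifted copy of a periodic series yields the same spectrum, so the objective ignores timing and targets the variability structure of the discharge instead.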
Group Sparsity Regularization for Calibration of Subsurface Flow Models under Geologic Uncertainty
NASA Astrophysics Data System (ADS)
Golmohammadi, A.; Jafarpour, B.
2014-12-01
Subsurface flow model calibration inverse problems typically involve inference of high-dimensional aquifer properties from limited monitoring and performance data. To find plausible solutions, the dynamic flow and pressure data are augmented with prior geological information about the unknown properties. Specifically, geologic continuity that exhibits itself as strong spatial correlation in heterogeneous rock properties has motivated various regularization and parameterization techniques for solving ill-posed model calibration inverse problems. However, complex geologic formations, such as fluvial facies distribution, are not amenable to generic regularization techniques; hence, more specific prior models about the shape and connectivity of the underlying geologic patterns are necessary for constraining the solution properly. Inspired by recent advances in signal processing, sparsity regularization uses effective basis functions to compactly represent complex geologic patterns for efficient model calibration. Here, we present a novel group-sparsity regularization that can discriminate between alternative plausible prior models based on the dynamic response data. This regularization property is used to select prior models that better reconstruct the complex geo-spatial connectivity during calibration. With group sparsity, the dominant spatial connectivity patterns are encoded into several parameter groups where each group is tuned to represent certain types of geologic patterns. In the model calibration process, dynamic flow and pressure data are used to select a small subset of groups to estimate aquifer properties. We demonstrate the effectiveness of the group sparsity regularization for solving ill-posed model calibration inverse problems.
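A minimal sketch of the group soft-thresholding step at the heart of group-sparsity regularization (the groups and values are hypothetical; this is not the authors' implementation):

```python
import numpy as np

def group_soft_threshold(v, groups, lam):
    """Shrink each coefficient group toward zero: groups whose l2 norm
    falls below lam are zeroed entirely, which is how group sparsity
    'deselects' a candidate prior model during calibration."""
    out = np.zeros_like(v, dtype=float)
    for idx in groups:
        g = v[idx]
        norm = np.linalg.norm(g)
        if norm > lam:
            out[idx] = g * (1.0 - lam / norm)
    return out

# two hypothetical groups of basis coefficients
v = np.array([3.0, 4.0, 0.1, -0.1])
groups = [np.array([0, 1]), np.array([2, 3])]
shrunk = group_soft_threshold(v, groups, 1.0)
```

Iterating this operator inside a gradient scheme keeps only the few groups (prior models) supported by the dynamic data.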
Calibrating corneal material model parameters using only inflation data: an ill-posed problem.
Kok, S; Botha, N; Inglis, H M
2014-12-01
Goldmann applanation tonometry (GAT) is a method used to estimate the intraocular pressure by measuring the indentation resistance of the cornea. A popular approach to investigate the sensitivity of GAT results to material and geometry variations is to perform numerical modelling using the finite element method, for which a calibrated material model is required. These material models are typically calibrated using experimental inflation data by solving an inverse problem. In the inverse problem, the underlying material constitutive behaviour is inferred from the measured macroscopic response (chamber pressure versus apical displacement). In this study, a biomechanically motivated elastic fibre-reinforced corneal material model is chosen. The inverse problem of calibrating the corneal material model parameters using only experimental inflation data is demonstrated to be ill-posed, with small variations in the experimental data leading to large differences in the calibrated model parameters. This can result in different groups of researchers, calibrating their material model with the same inflation test data, drawing vastly different conclusions about the effect of material parameters on GAT results. It is further demonstrated that multiple loading scenarios, such as inflation as well as bending, would be required to reliably calibrate such a corneal material model.
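The ill-posedness described above can be illustrated with a toy linear calibration whose two responses are nearly collinear: a tiny perturbation of the measured data produces an O(1) change in the calibrated parameters (the numbers are purely illustrative):

```python
import numpy as np

# A toy ill-posed calibration: two nearly collinear "responses".
A = np.array([[1.0, 1.0],
              [1.0, 1.001]])

y = np.array([2.0, 2.001])               # "measured" inflation data
y_perturbed = y + np.array([1e-3, 0.0])  # tiny experimental error

p = np.linalg.solve(A, y)                # close to [1., 1.]
p_perturbed = np.linalg.solve(A, y_perturbed)
# parameters change by O(1) for an O(1e-3) data change
print(p, p_perturbed)
```

Both parameter sets reproduce the data almost equally well, which is exactly why different groups fitting the same inflation data can report very different material parameters.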
NASA Astrophysics Data System (ADS)
Lin, Z.; Radcliffe, D. E.; Doherty, J.
2004-12-01
Automatic calibration has been applied to conceptual rainfall-runoff models for more than three decades, usually to lumped models. Even when a (semi-)distributed model that allows spatial variability of parameters is calibrated using an automated process, the parameters of the model are often lumped over space so that the model is simplified as a lumped model. Our objective was to develop a two-stage routine for automatically calibrating the Soil Water Assessment Tool (SWAT, a semi-distributed watershed model) that would find the optimal values for the model parameters, preserve the spatial variability in essential parameters, and lead to a measure of the model prediction uncertainty. In the first stage of this proposed calibration scheme, a global search method, namely, the Shuffled Complex Evolution (SCE-UA) method, was employed to find the ``best'' values for the lumped model parameters. That is, in order to limit the number of the calibrated parameters, the model parameters were assumed to be invariant over different subbasins and hydrologic response units (HRU, the basic calculation unit in the SWAT model). However, in the second stage, the spatial variability of the original model parameters was restored and the number of the calibrated parameters was dramatically increased (from a few to nearly a hundred). Hence, a local search method, namely, a variation of the Levenberg-Marquardt method, was preferred to find the more distributed set of parameters using the results of the previous stage as starting values. Furthermore, in order to prevent the parameters from taking extreme values, a strategy called ``regularization'' was adopted, through which the distributed parameters were constrained to vary as little as possible from the initial values of the lumped parameters. We calibrated the stream flow in the Etowah River measured at Canton, GA (a watershed area of 1,580 km2) for the years 1983-1992 and used the years 1993-2001 for validation. Calibration for daily and
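The ``regularization'' strategy above (constraining distributed parameters to stay near the lumped first-stage values) has a closed form for a linear model; the following is an illustrative sketch, not the SWAT implementation:

```python
import numpy as np

def regularized_calibration(A, y, p0, lam):
    """Tikhonov-style regularization: solve
    min ||A p - y||^2 + lam * ||p - p0||^2, so the distributed
    parameters p are pulled toward the lumped first-stage values p0."""
    n = len(p0)
    lhs = A.T @ A + lam * np.eye(n)
    rhs = A.T @ y + lam * np.asarray(p0)
    return np.linalg.solve(lhs, rhs)
```

As lam grows the solution collapses to the lumped values p0; as lam shrinks it approaches the unconstrained (and potentially overfitted) least-squares solution.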
Sparkle/AM1 Parameters for the Modeling of Samarium(III) and Promethium(III) Complexes.
Freire, Ricardo O; da Costa, Nivan B; Rocha, Gerd B; Simas, Alfredo M
2006-01-01
The Sparkle/AM1 model is extended to samarium(III) and promethium(III) complexes. A set of 15 structures of high crystallographic quality (R factor < 0.05 Å), with ligands chosen to be representative of all samarium complexes in the Cambridge Crystallographic Database 2004, CSD, with nitrogen or oxygen directly bonded to the samarium ion, was used as a training set. In the validation procedure, we used a set of 42 other complexes, also of high crystallographic quality. The results show that this parametrization for the Sm(III) ion is similar in accuracy to the previous parametrizations for Eu(III), Gd(III), and Tb(III). On the other hand, promethium is an artificial radioactive element with no stable isotope. So far, there are no promethium complex crystallographic structures in CSD. To circumvent this, we confirmed our previous result that RHF/STO-3G/ECP, with the MWB effective core potential (ECP), appears to be the most efficient ab initio model chemistry in terms of coordination polyhedron crystallographic geometry predictions from isolated lanthanide complex ion calculations. We thus generated a set of 15 RHF/STO-3G/ECP promethium complex structures with ligands chosen to be representative of complexes available in the CSD for all other trivalent lanthanide cations, with nitrogen or oxygen directly bonded to the lanthanide ion. For the 42 samarium(III) complexes and 15 promethium(III) complexes considered, the Sparkle/AM1 unsigned mean error, for all interatomic distances between the Ln(III) ion and the ligand atoms of the first sphere of coordination, is 0.07 and 0.06 Å, respectively, a level of accuracy comparable to present day ab initio/ECP geometries, while being hundreds of times faster.
NASA Astrophysics Data System (ADS)
Bolisetti, T.; Datta, A. R.; Balachandar, R.
2009-05-01
Studies on impact assessment and the corresponding uncertainties in hydrologic regime predictions are of paramount importance in developing water resources management plans under climate change scenarios. The variability in hydrologic model parameters is one of the major sources of uncertainty associated with climate change impact on streamflow. Uncertainty in hydrologic model parameters may arise from the choice of model calibration technique, model calibration period, model structure and response variables. Recent studies show that consideration of uncertainties in input variables (precipitation, evapotranspiration, etc.) during calibration of a hydrologic model results in decreased prediction uncertainty. The present study has examined the significance of input uncertainty in hydrologic model calibration for climate change impact studies. A physically distributed hydrologic model, the Soil and Water Assessment Tool (SWAT), is calibrated considering uncertainties in (i) model parameters only, and (ii) both model parameters and precipitation input. The Markov chain Monte Carlo algorithm is used to estimate the posterior probability density function of hydrologic model parameters. The observed daily precipitation and streamflow data of the Canard River watershed of the Essex region, Ontario, Canada are used as input and output variables, respectively, during calibration. The parameter sets of the 100 most skillful hydrologic model simulations obtained from each calibration technique are used for predicting streamflow by the 2070s under climate change conditions. In each run, the climate predictions of the Canadian Regional Climate Model (CRCM) for SRES scenario A2 are used as input to the hydrologic model for streamflow prediction. The paper presents the results of uncertainty in seasonal and annual streamflow prediction. The outcome of the study is expected to contribute to the assessment of uncertainty in climate change impact studies and better management of available
Visible spectroscopy calibration transfer model in determining pH of Sala mangoes
NASA Astrophysics Data System (ADS)
Yahaya, O. K. M.; MatJafri, M. Z.; Aziz, A. A.; Omar, A. F.
2015-05-01
The purpose of this study is to compare the efficiency of calibration transfer procedures between three spectrometers, involving two Ocean Optics Inc. spectrometers, namely QE65000 and Jaz, and an ASD FieldSpec 3, in measuring the pH of Sala mango by visible reflectance spectroscopy. This study evaluates the ability of these spectrometers to measure the pH of Sala mango by applying similar calibration algorithms through direct calibration transfer. This visible reflectance spectroscopy technique defines one spectrometer as a master instrument and another spectrometer as a slave. The multiple linear regression (MLR) calibration model generated using the QE65000 spectrometer is transferred to the Jaz spectrometer and vice versa for Set 1. The same technique is applied for Set 2, in which the model of the QE65000 spectrometer is transferred to the FieldSpec 3 spectrometer and vice versa. For Set 1, the results showed that the QE65000 spectrometer established a calibration model with higher accuracy than that of the Jaz spectrometer. In addition, the calibration model developed on the Jaz spectrometer successfully predicted the pH of Sala mango measured using the QE65000 spectrometer, with a root mean square error of prediction RMSEP = 0.092 pH and coefficient of determination R2 = 0.892. Moreover, the best prediction result was obtained for Set 2 when the calibration model developed on the QE65000 spectrometer was successfully transferred to the FieldSpec 3 with R2 = 0.839 and RMSEP = 0.16 pH.
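Direct calibration transfer between a master and a slave instrument can be sketched as a per-wavelength linear mapping fitted on samples measured by both; this is a simplified illustration, not the study's exact procedure:

```python
import numpy as np

def fit_transfer(master_spectra, slave_spectra):
    """Per-wavelength linear map (slope, offset) taking slave readings
    onto the master instrument's scale, fitted from samples measured
    on both instruments (a simple direct-transfer step)."""
    slopes, offsets = [], []
    for j in range(master_spectra.shape[1]):
        a, b = np.polyfit(slave_spectra[:, j], master_spectra[:, j], 1)
        slopes.append(a)
        offsets.append(b)
    return np.array(slopes), np.array(offsets)

def apply_transfer(spectra, slopes, offsets):
    """Map slave spectra onto the master scale before applying the
    master's MLR calibration model."""
    return spectra * slopes + offsets
```

After the transfer, the master's pH calibration model can be applied unchanged to spectra acquired on the slave instrument.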
NASA Astrophysics Data System (ADS)
Moeck, Christian; Von Freyberg, Jana; Schirmer, Mario
2016-04-01
An important question in recharge impact studies is how model choice, structure and calibration period affect recharge predictions. It is still unclear whether a certain model type or structure is less affected when the model is run on time periods whose hydrological conditions differ from those of the calibration period. This aspect, however, is crucial to ensure reliable predictions of groundwater recharge. In this study, we quantify and compare the effects of groundwater recharge model choice, model parametrization and calibration period in a systematic way. This analysis was possible thanks to a unique data set from a large-scale lysimeter in a pre-alpine catchment, where daily long-term recharge rates are available. More specifically, the following issues are addressed: We systematically evaluate how the choice of hydrological models influences predictions of recharge. We assess how different parameterizations of models due to parameter non-identifiability affect predictions of recharge by applying a Monte Carlo approach. We systematically assess how the choice of calibration periods influences predictions of recharge within a differential split-sample test focusing on model performance under extreme climatic and hydrological conditions. Results indicate that all applied models (from simple lumped to complex physically based models) were able to simulate the observed recharge rates for five different calibration periods. However, there was a marked impact of the calibration period when the complete 20-year validation period was simulated. Both seasonal and annual differences between simulated and observed daily recharge rates occurred when the hydrological conditions differed from those of the calibration period. These differences were, however, less distinct for the physically based models, whereas the simpler models over- or underestimated the observed recharge depending on the considered season. It is, however, possible to reduce the differences for the simple models by
Role of Imaging Spectrometer Data for Model-based Cross-calibration of Imaging Sensors
NASA Technical Reports Server (NTRS)
Thome, Kurtis John
2014-01-01
Site characterization benefits from imaging spectrometry to determine the spectral bi-directional reflectance of a well-understood surface. Topics covered include cross-calibration approaches, uncertainties, the role of imaging spectrometry, model-based site characterization, and application to product validation.
In this study, the calibration of subsurface batch and reactive-transport models involving complex biogeochemical processes was systematically evaluated. Two hypothetical nitrate biodegradation scenarios were developed and simulated in numerical experiments to evaluate the perfor...
NASA Astrophysics Data System (ADS)
Neilson, B. T.; Chapra, S. C.; Stevens, D. K.; Bandaragoda, C.
2010-12-01
This paper presents the formulation and calibration of the temperature portion of a two-zone temperature and solute (TZTS) model which separates transient storage into surface (STS) and subsurface transient storage (HTS) zones. The inclusion of temperature required the TZTS model formulation to differ somewhat from past transient storage models in order to accommodate terms associated with heat transfer. These include surface heat fluxes in the main channel (MC) and STS, heat and mass exchange between the STS and MC, heat and mass exchange between the HTS and MC, and heat exchange due to bed and deeper ground conduction. To estimate the additional parameters associated with a two-zone model, a data collection effort was conducted to provide temperature time series within each zone. Both single-objective and multiobjective calibration algorithms were then linked to the TZTS model to assist in parameter estimation. Single-objective calibrations based on MC temperatures at two different locations along the study reach provided reasonable predictions in the MC and STS. The HTS temperatures, however, were typically poorly estimated. The two-objective calibration using MC temperatures simultaneously at two locations illustrated that the TZTS model accurately predicts temperatures observed in MC, STS, and HTS zones, including those not used in the calibration. These results suggest that multiple data sets representing different characteristics of the system should be used when calibrating complex in-stream models.
Predictive sensor based x-ray calibration using a physical model.
de la Fuente, Matías; Lutz, Peter; Wirtz, Dieter C; Radermacher, Klaus
2007-04-01
Many computer assisted surgery systems are based on intraoperative x-ray images. To achieve reliable and accurate results these images have to be calibrated concerning geometric distortions, which can be distinguished between constant distortions and distortions caused by magnetic fields. Instead of using an intraoperative calibration phantom that has to be visible within each image resulting in overlaying markers, the presented approach directly takes advantage of the physical background of the distortions. Based on a computed physical model of an image intensifier and a magnetic field sensor, an online compensation of distortions can be achieved without the need of an intraoperative calibration phantom. The model has to be adapted once to each specific image intensifier through calibration, which is based on an optimization algorithm systematically altering the physical model parameters, until a minimal error is reached. Once calibrated, the model is able to predict the distortions caused by the measured magnetic field vector and build an appropriate dewarping function. The time needed for model calibration is not yet optimized and takes up to 4 h on a 3 GHz CPU. In contrast, the time needed for distortion correction is less than 1 s and therefore absolutely acceptable for intraoperative use. First evaluations showed that by using the model based dewarping algorithm the distortions of an XRII with a 21 cm FOV could be significantly reduced. The model was able to predict and compensate distortions by approximately 80% to a remaining error of 0.45 mm (max) (0.19 mm rms). PMID:17500446
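The model-based calibration idea (fit physical distortion parameters once, then dewarp images without an intraoperative phantom) can be sketched with a toy single-coefficient radial distortion model; the model form and values are hypothetical, far simpler than the image-intensifier physics in the paper:

```python
import numpy as np

def calibrate_distortion(true_pts, observed_pts):
    """Estimate a single radial-distortion coefficient k for the toy
    model p_obs = p_true * (1 + k * |p_true|^2), by least squares.
    This stands in for calibrating the physical model's parameters."""
    r2 = np.sum(true_pts ** 2, axis=1)
    # p_obs - p_true = k * r2 * p_true  ->  linear in k
    num = np.sum((observed_pts - true_pts) * true_pts * r2[:, None])
    den = np.sum((true_pts * r2[:, None]) ** 2)
    return num / den

def dewarp(points, k):
    """Approximate first-order inversion of the distortion, i.e. the
    online correction applied once the model is calibrated."""
    r2 = np.sum(points ** 2, axis=1)
    return points / (1.0 + k * r2)[:, None]
```

Calibration is done once offline (the expensive step), while dewarping is a cheap evaluation, mirroring the 4 h calibration versus sub-second correction reported above.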
Ecologically-focused Calibration of Hydrological Models for Environmental Flow Applications
NASA Astrophysics Data System (ADS)
Adams, S. K.; Bledsoe, B. P.
2015-12-01
Hydrologic alteration resulting from watershed urbanization is a common cause of aquatic ecosystem degradation. Developing environmental flow criteria for urbanizing watersheds requires quantitative flow-ecology relationships that describe biological responses to streamflow alteration. Ideally, gaged flow data are used to develop flow-ecology relationships; however, biological monitoring sites are frequently ungaged. For these ungaged locations, hydrologic models must be used to predict streamflow characteristics through calibration and testing at gaged sites, followed by extrapolation to ungaged sites. Physically-based modeling of rainfall-runoff response has frequently utilized "best overall fit" calibration criteria, such as the Nash-Sutcliffe Efficiency (NSE), that do not necessarily focus on specific aspects of the flow regime relevant to biota of interest. This study investigates the utility of employing flow characteristics known a priori to influence regional biological endpoints as "ecologically-focused" calibration criteria compared to traditional, "best overall fit" criteria. For this study, 19 continuous HEC-HMS 4.0 models were created in coastal southern California and calibrated to hourly USGS streamflow gages with nearby biological monitoring sites using one "best overall fit" and three "ecologically-focused" criteria: NSE, Richards-Baker Flashiness Index (RBI), percent of time when the flow is < 1 cfs (%<1), and a Combined Calibration (RBI and %<1). Calibrated models were compared using calibration accuracy, environmental flow metric reproducibility, and the strength of flow-ecology relationships. Results indicate that "ecologically-focused" criteria can be calibrated with high accuracy and may provide stronger flow-ecology relationships than "best overall fit" criteria, especially when multiple "ecologically-focused" criteria are used in concert, despite inabilities to accurately reproduce additional types of ecological flow metrics to which the
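The two "ecologically-focused" criteria named above are straightforward to compute from a flow series; an illustrative sketch with hypothetical flow values:

```python
def rb_flashiness(q):
    """Richards-Baker Flashiness Index: the path length of the
    hydrograph divided by total flow; higher values mean a flashier
    runoff response."""
    changes = sum(abs(q[i] - q[i - 1]) for i in range(1, len(q)))
    return changes / sum(q)

def pct_below(q, threshold=1.0):
    """Fraction of time steps with flow below a threshold, e.g. the
    %<1 cfs low-flow criterion."""
    return sum(1 for v in q if v < threshold) / len(q)

q = [0.2, 0.4, 5.0, 2.0, 0.8, 0.3]  # hypothetical flows (cfs)
print(round(rb_flashiness(q), 3), round(pct_below(q), 3))
```

In an "ecologically-focused" calibration these metrics, rather than an overall-fit score such as NSE, are matched between simulated and gaged flows.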
Multivariate Calibration Models for Sorghum Composition using Near-Infrared Spectroscopy
Wolfrum, E.; Payne, C.; Stefaniak, T.; Rooney, W.; Dighe, N.; Bean, B.; Dahlberg, J.
2013-03-01
NREL developed calibration models based on near-infrared (NIR) spectroscopy coupled with multivariate statistics to predict compositional properties relevant to cellulosic biofuels production for a variety of sorghum cultivars. A robust calibration population was developed in an iterative fashion. The quality of models developed using the same sample geometry on two different types of NIR spectrometers and two different sample geometries on the same spectrometer did not vary greatly.
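A sketch of one common multivariate calibration approach for NIR spectra, principal-component regression (a stand-in for illustration; the NREL models may use a different chemometric method such as PLS):

```python
import numpy as np

def fit_pcr(X, y, n_comp=2):
    """Principal-component regression: center the spectra, keep the top
    n_comp spectral components, and regress the property on them."""
    x_mean, y_mean = X.mean(axis=0), y.mean()
    U, s, Vt = np.linalg.svd(X - x_mean, full_matrices=False)
    scores = U[:, :n_comp] * s[:n_comp]
    b = np.linalg.lstsq(scores, y - y_mean, rcond=None)[0]
    coef = Vt[:n_comp].T @ b
    return x_mean, y_mean, coef

def predict_pcr(X, model):
    """Predict the compositional property from new spectra."""
    x_mean, y_mean, coef = model
    return (X - x_mean) @ coef + y_mean
```

The number of retained components plays the role of model complexity and is normally chosen by cross-validation over the calibration population.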
Strain gage balance for half models 302-6. Calibration report
NASA Astrophysics Data System (ADS)
Blaettler, Heinz
1986-02-01
A six-component strain gage balance for half models 302-6 for the transonic wind tunnel was developed and calibrated. The calibration was executed with a special lever so that forces and moments could be loaded at the point of attack of the model. Point 8 (for recording buffeting) was also measured. The balance is designed for: X = +/- 100 (N); Mx = +/- 200 (Nm); Y = +/- 200 (N); My = +/- 35 (Nm); Z = +/- 1000 (N); and Mz = +/- 30 (Nm).
A Methodology for Calibrating a WATFLOOD Model of the Upper South Saskatchewan River
NASA Astrophysics Data System (ADS)
Dunning, C. F.; Soulis, R. D.; Craig, J. R.
2009-05-01
The upper South Saskatchewan River consists of the Red Deer River, the Bow River, and the Old Man River. With a contributing area of 120,000 km2, these three watersheds flow through a diverse range of land types including mountains, foothills and prairies. Using WATFLOOD, a model has been developed to simulate stream flow in this basin, and this model is used as the case study for a straightforward calibration approach. The input for this model is interpolated rainfall data from twenty-three rain gauges throughout the basin, and the model output (stream flow) is compared to measured stream flow data from thirty stream gauges. The basin is divided into nine land classes and four river classes. Because of the diversity of land types in this basin, proper identification of the parameters for individual land classes and river classes contributes significantly to the accuracy of the model. Critical land class and river class parameters are initially calibrated manually in representative sub-basins comprised (>90%) of a single land class to determine the effect each parameter has on the system and to determine a reasonable starting estimate of each parameter. Once manual calibration is complete, DDS (the Dynamically Dimensioned Search algorithm) is used to automatically calibrate the model one sub-basin at a time. During this process only the parameters found significant during the manual calibration are altered, and focus is on the land classes and river classes that dominate that sub-basin. The process of automated calibration is repeated once more, but with multiple sub-basins and a stream flow weighting method. This is the final step towards a model that is calibrated to represent the diversity of the entire basin. The technique described is intended to be a general method for calibrating a regional-scale model with diverse land types. The method is straightforward and allows adjusted parameters to provide relative accuracy over the entire basin.
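The automated stage above uses DDS; a compact sketch of the Dynamically Dimensioned Search of Tolson and Shoemaker, with illustrative settings (the perturbation size r and iteration count are assumptions, not from this study):

```python
import math
import random

def dds(objective, lo, hi, max_iter=300, r=0.2, seed=1):
    """Dynamically Dimensioned Search: greedily perturb a randomly
    chosen subset of parameters, shrinking the subset as iterations
    proceed so the search narrows from global to local. Requires
    max_iter >= 2."""
    rng = random.Random(seed)
    best = [(a + b) / 2.0 for a, b in zip(lo, hi)]
    best_val = objective(best)
    n = len(best)
    for i in range(1, max_iter + 1):
        p_select = 1.0 - math.log(i) / math.log(max_iter)
        idx = [j for j in range(n) if rng.random() < p_select]
        if not idx:  # always perturb at least one parameter
            idx = [rng.randrange(n)]
        cand = best[:]
        for j in idx:
            cand[j] += rng.gauss(0.0, r * (hi[j] - lo[j]))
            if cand[j] < lo[j]:  # reflect at the bounds
                cand[j] = lo[j] + (lo[j] - cand[j])
            if cand[j] > hi[j]:
                cand[j] = hi[j] - (cand[j] - hi[j])
            cand[j] = min(max(cand[j], lo[j]), hi[j])
        val = objective(cand)
        if val < best_val:  # greedy acceptance
            best, best_val = cand, val
    return best, best_val
```

In the watershed application the objective would be a stream flow misfit for one sub-basin, with only the parameters of its dominant land and river classes left free.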
Model Calibration and Optics Correction Using Orbit Response Matrix in the Fermilab Booster
Lebedev, V.A.; Prebys, E.; Petrenko, A.V.; Kopp, S.E.; McAteer, M.J. (Texas U.)
2012-05-01
We have calibrated the lattice model and measured the beta and dispersion functions in Fermilab's fast-ramping Booster synchrotron using the Linear Optics from Closed Orbit (LOCO) method. We used the calibrated model to implement ramped coupling, dispersion, and beta-beating corrections throughout the acceleration cycle, reducing horizontal beta beating from its initial magnitude of {approx}30% to {approx}10%, and essentially eliminating vertical beta-beating and transverse coupling.
Objective calibration of regional climate models: Application over Europe and North America
NASA Astrophysics Data System (ADS)
Bellprat, O.; de Elía, R.; Frigon, A.; Kotlarski, S.; Lüthi, D.; Laprise, R.; Schär, C.
2014-12-01
An important source of model uncertainty in climate models arises from unconfined model parameters in physical parameterizations. These parameters are commonly estimated on the basis of manual adjustments (expert tuning), which carries the risk of over-tuning the parameters for a specific climate region or time period. This issue is particularly germane in the case of regional climate models (RCM), which are often developed and used in one or a few geographical regions only. Here we address the role of objective parameter calibration in this context. Using a previously developed objective calibration methodology, we calibrate an RCM over two regions (Europe and North America) and investigate the transferability of the results. A total of eight different model parameters are calibrated, using a metamodel to account for parameter interactions. We demonstrate that the calibration is effective in reducing model biases in both domains. For Europe, this concerns in particular a pronounced reduction of the summer warm bias and the associated overestimation of interannual temperature variability, which has persisted through previous expert tuning efforts and is common in many global and regional climate models. The key process behind this improvement is an increased hydraulic conductivity. Over North America, there is also some reduction of the summer warm bias, but in addition the calibration achieves a pronounced reduction of winter biases in interannual temperature variability. We also find that the calibrated parameter values are almost identical for both domains, i.e. the parameter calibration is transferable between the two regions. This is a promising result and indicates that models may be more universal than previously considered.
Tian, Hai-Qing; Wang, Chun-Guang; Zhang, Hai-Jun; Yu, Zhi-Hong; Li, Jian-Kang
2012-11-01
Outlier samples strongly influence the precision of the calibration model in soluble solids content measurement of melons using NIR spectra. According to the possible sources of outlier samples, three methods (predicted concentration residual test; Chauvenet test; leverage and studentized residual test) were used to discriminate these outliers. Nine suspicious outliers were detected from the calibration set, which included 85 fruit samples. Because the 9 suspicious outlier samples might contain some non-outlier samples, they were returned to the model one by one to see whether they influenced the model and prediction precision. In this way, 5 samples that were helpful to the model rejoined the calibration set, and a new model was developed with a correlation coefficient (r) of 0.889 and root mean square error of calibration (RMSEC) of 0.601 °Brix. For 35 unknown samples, the root mean square error of prediction (RMSEP) was 0.854 °Brix. This model performed better than the model developed with no outliers eliminated from the calibration set (r = 0.797, RMSEC = 0.849 °Brix, RMSEP = 1.19 °Brix), and was more representative and stable than the model with all 9 samples eliminated from the calibration set (r = 0.892, RMSEC = 0.605 °Brix, RMSEP = 0.862 °Brix).
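The leverage and studentized residual test mentioned above can be sketched for a linear calibration as follows (synthetic data with one planted outlier; not the authors' code):

```python
import numpy as np

def leverage_and_residuals(X, y):
    """Hat-matrix leverages and internally studentized residuals for a
    linear calibration; unusually large values flag candidate outlier
    samples for closer inspection."""
    Xc = np.column_stack([np.ones(len(X)), X])
    H = Xc @ np.linalg.pinv(Xc.T @ Xc) @ Xc.T
    h = np.diag(H)                       # leverages, each in (0, 1)
    resid = y - H @ y
    dof = len(y) - Xc.shape[1]
    s = np.sqrt(np.sum(resid ** 2) / dof)
    studentized = resid / (s * np.sqrt(1.0 - h))
    return h, studentized
```

Flagged samples are then removed (or, as above, reinstated one by one) and the calibration statistics recomputed to judge whether each sample helps or harms the model.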
NASA Astrophysics Data System (ADS)
Jepsen, S. M.; Harmon, T. C.; Shi, Y.
2016-04-01
Calibration of watershed models to the shape of the base flow recession curve is a way to capture the important relationship between groundwater discharge and subsurface water storage in a catchment. In some montane Mediterranean regions, such as the midelevation Providence Creek catchment in the southern Sierra Nevada of California (USA), nearly all base flow recession occurs after snowmelt, and during this time evapotranspiration (ET) usually exceeds base flow. We assess the accuracy to which watershed models can be calibrated to ET-dominated base flow recession in Providence Creek, both in terms of fitting a discharge time-series and realistically capturing the observed discharge-storage relationship for the catchment. Model parameters estimated from calibrations to ET-dominated recession are compared to parameters estimated from reference calibrations to base flow recession with ET-effects removed ("potential recession"). We employ the Penn State Integrated Hydrologic Model (PIHM) for simulations of base flow and ET, and methods that are otherwise general in nature. In models calibrated to ET-dominated recession, simulation errors in ET and the targeted relationship for recession (-dQ/dt versus Q) contribute substantially (up to 57% and 46%, respectively) to overestimates in the discharge-storage differential, defined as d(lnQ)/dS, relative to that derived from water flux observations. These errors result in overestimates of deep-subsurface hydraulic conductivity in models calibrated to ET-dominated recession, by up to an order of magnitude, relative to reference calibrations to potential recession. These results illustrate a potential opportunity for improving model representation of discharge-storage dynamics by calibrating to the shape of base flow recession after removing the complicating effects of ET.
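The targeted recession relationship, -dQ/dt versus Q, can be estimated from a receding-limb series by log-log regression; an illustrative sketch (the power-law form and test data are assumptions for illustration):

```python
import numpy as np

def recession_slope(q, dt=1.0):
    """Fit log(-dQ/dt) = log(a) + b*log(Q) over a discharge series,
    using only strictly receding steps; (a, b) characterize the
    -dQ/dt vs. Q relationship targeted in recession calibration."""
    dq = np.diff(q) / dt
    qm = 0.5 * (q[1:] + q[:-1])  # midpoint discharge for each step
    mask = dq < 0                # keep strictly receding steps
    b, log_a = np.polyfit(np.log(qm[mask]), np.log(-dq[mask]), 1)
    return np.exp(log_a), b
```

Removing the ET signal from the observed recession before this fit, as the study above does, changes the (a, b) pair and hence the calibrated storage parameters.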
Anh Bui; Nam Dinh; Brian Williams
2013-09-01
In addition to the validation data plan, the development of advanced techniques for calibration and validation of complex multiscale, multiphysics nuclear reactor simulation codes is a main objective of the CASL VUQ plan. Advanced modeling of LWR systems normally involves a range of physico-chemical models describing multiple interacting phenomena, such as thermal hydraulics, reactor physics, coolant chemistry, etc., which occur over a wide range of spatial and temporal scales. To a large extent, the accuracy of (and uncertainty in) overall model predictions is determined by the correctness of the various sub-models, which are not based on conservation laws but are empirically derived from measurement data. Such sub-models normally require extensive calibration before the models can be applied to the analysis of real reactor problems. This work demonstrates a case study of the calibration of a common model of subcooled flow boiling, an important multiscale, multiphysics phenomenon in LWR thermal hydraulics. The calibration process is based on a new strategy of model-data integration, in which all sub-models are simultaneously analyzed and calibrated using multiple sets of data of different types. Specifically, both data on large-scale distributions of void fraction and fluid temperature and data on the small-scale physics of wall evaporation were used simultaneously in this work's calibration. In a departure from the traditional (or common-sense) practice of tuning/calibrating complex models, a modern calibration technique based on statistical modeling and Bayesian inference was employed, which allowed simultaneous calibration of multiple sub-models (and related parameters) using different datasets. Quality of data (relevancy, scalability, and uncertainty) could be taken into consideration in the calibration process. This work presents a step forward in the development and realization of the "CIPS Validation Data Plan" at the Consortium for Advanced Simulation of LWRs to enable
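As a toy illustration of the Bayesian-inference style of calibration the abstract describes, the sketch below calibrates a single empirical coefficient of an invented void-fraction "sub-model" with random-walk Metropolis. The model form, data, and priors are all made up and far simpler than the paper's actual framework:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy "sub-model": predicted void fraction alpha(z) = 1 - exp(-k z), where k is
# an empirical coefficient calibrated against noisy synthetic "measurements".
z = np.linspace(0.1, 2.0, 20)
k_true, sigma = 1.2, 0.02
obs = 1.0 - np.exp(-k_true * z) + rng.normal(scale=sigma, size=z.size)

def log_post(k):
    """Gaussian likelihood with known noise sigma, flat prior on k > 0."""
    if k <= 0:
        return -np.inf
    resid = obs - (1.0 - np.exp(-k * z))
    return -0.5 * np.sum(resid**2) / sigma**2

# Random-walk Metropolis sampler over the single parameter k
k, lp = 0.5, log_post(0.5)
samples = []
for _ in range(20000):
    k_prop = k + rng.normal(scale=0.05)
    lp_prop = log_post(k_prop)
    if np.log(rng.uniform()) < lp_prop - lp:   # accept/reject step
        k, lp = k_prop, lp_prop
    samples.append(k)
post = np.array(samples[5000:])  # discard burn-in
k_mean = post.mean()
```

The posterior spread of `post` is the calibrated parameter uncertainty; the paper's framework does this jointly for many sub-model parameters against heterogeneous datasets.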
NASA Astrophysics Data System (ADS)
Parajka, J.; Merz, R.; Blöschl, G.
2007-02-01
We examine the value of additional information in multiple objective calibration in terms of model performance and parameter uncertainty. We calibrate and validate a semi-distributed conceptual catchment model for two 11-year periods in 320 Austrian catchments and test three approaches of parameter calibration: (a) traditional single objective calibration (SINGLE) on daily runoff; (b) multiple objective calibration (MULTI) using daily runoff and snow cover data; (c) multiple objective calibration (APRIORI) that incorporates an a priori expert guess about the parameter distribution as additional information to runoff and snow cover data. Results indicate that the MULTI approach performs slightly worse than the SINGLE approach in terms of runoff simulations, but significantly better in terms of snow cover simulations. The APRIORI approach is essentially as good as the SINGLE approach in terms of runoff simulations but is slightly worse than the MULTI approach in terms of snow cover simulations. An analysis of the parameter uncertainty indicates that the MULTI approach significantly decreases the uncertainty of the model parameters related to snow processes but does not decrease the uncertainty of other model parameters as compared to the SINGLE case. The APRIORI approach tends to decrease the uncertainty of all model parameters as compared to the SINGLE case.
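A multiple-objective criterion like MULTI can be sketched as a weighted compromise between runoff fit and snow-cover agreement. The weights, the snow-cover score, and the tiny data below are hypothetical, not the authors' formulation:

```python
import numpy as np

def nse(sim, obs):
    """Nash-Sutcliffe efficiency: 1 = perfect, 0 = no better than the obs mean."""
    return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

def multi_objective(q_sim, q_obs, snow_sim, snow_obs, w_runoff=0.7, w_snow=0.3):
    """Weighted combination of runoff NSE and fractional snow-cover agreement."""
    runoff_score = nse(q_sim, q_obs)
    snow_score = 1.0 - np.mean(np.abs(snow_sim - snow_obs))  # cover in [0, 1]
    return w_runoff * runoff_score + w_snow * snow_score

# Tiny illustration: a "simulation" close to the observations scores near 1
q_obs = np.array([1.0, 3.0, 2.5, 4.0, 2.0])
snow_obs = np.array([0.9, 0.7, 0.5, 0.2, 0.0])
score = multi_objective(q_obs * 1.05, q_obs, snow_obs + 0.02, snow_obs)
```

Adding the snow term constrains the snow-related parameters, which is why the abstract finds MULTI shrinks their uncertainty even as the runoff fit degrades slightly.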
NASA Astrophysics Data System (ADS)
Kunnath-Poovakka, A.; Ryu, D.; Renzullo, L. J.; George, B.
2016-04-01
Calibration of spatially distributed hydrologic models is frequently limited by the availability of ground observations. Remotely sensed (RS) hydrologic information provides an alternative source of observations to inform models and extend modelling capability beyond the limits of ground observations. This study examines the capability of RS evapotranspiration (ET) and soil moisture (SM) in calibrating a hydrologic model and its efficacy to improve streamflow predictions. SM retrievals from the Advanced Microwave Scanning Radiometer-EOS (AMSR-E) and daily ET estimates from the CSIRO MODIS ReScaled potential ET (CMRSET) are used to calibrate a simplified Australian Water Resource Assessment - Landscape model (AWRA-L) for a selection of parameters. The Shuffled Complex Evolution Uncertainty Algorithm (SCE-UA) is employed for parameter estimation at eleven catchments in eastern Australia. A subset of parameters for calibration is selected based on the variance-based Sobol' sensitivity analysis. The efficacy of 15 objective functions for calibration is assessed based on streamflow predictions relative to control cases, and the relative merits of each are discussed. Synthetic experiments were conducted to examine the effect of bias in RS ET observations on calibration. The objective function containing the root mean square deviation (RMSD) of ET results in the best streamflow predictions, and its efficacy is superior for catchments with medium to high average runoff. The synthetic experiments revealed that an accurate ET product can improve streamflow predictions in catchments with low average runoff.
NASA Astrophysics Data System (ADS)
Matott, L. S.; Rabideau, A. J.
2006-05-01
Nitrate-contaminated groundwater discharge may be a significant source of pollutant loading to impaired waterbodies, and this contribution may be assessed via large-scale regional modeling of subsurface nitrogen transport. Several aspects of large-scale subsurface transport modeling make automated calibration a difficult task. First, the appropriate level of model complexity for a regional subsurface nitrogen transport model is not obvious. Additionally, there are immense computational costs associated with large-scale transport modeling, and these costs are further exacerbated by automated calibration, which can require thousands of model evaluations. Finally, available evidence suggests that highly complex reactive transport models suffer from parameter non-uniqueness, a characteristic that can frustrate traditional regression-based calibration algorithms. These difficulties are the topic of ongoing research at the University at Buffalo, and a preliminary modeling and calibration approach will be presented. The approach is in the early stages of development and is being tested on a 400 square kilometer model that encompasses an agricultural research site in the Neuse River Basin (the Lizzie Research Station), located on an active and privately owned hog farm. Early results highlight the sensitivity of calibrated denitrification rate constants to a variety of secondary processes, including surface complexation of iron and manganese, ion exchange, and the precipitation/dissolution of calcite and metals.
Bayesian calibration for electrochemical thermal model of lithium-ion cells
NASA Astrophysics Data System (ADS)
Tagade, Piyush; Hariharan, Krishnan S.; Basu, Suman; Verma, Mohan Kumar Singh; Kolake, Subramanya Mayya; Song, Taewon; Oh, Dukjin; Yeo, Taejung; Doo, Seokgwang
2016-07-01
The pseudo-two-dimensional electrochemical thermal (P2D-ECT) model contains many parameters that are difficult to evaluate experimentally. Estimation of these model parameters is challenging due to the computational cost and the transient nature of the model. Due to a lack of complete physical understanding, this issue is aggravated at extreme conditions such as low temperature (LT) operation. This paper presents a Bayesian calibration framework for estimation of the P2D-ECT model parameters. The framework uses a matrix variate Gaussian process representation to obtain a computationally tractable formulation for calibration of the transient model. Performance of the framework is investigated for calibration of the P2D-ECT model across a range of temperatures (333 K to 263 K) and operating protocols. In the absence of complete physical understanding, the framework also quantifies structural uncertainty in the calibrated model. This information is used by the framework to test the validity of new physical phenomena before their incorporation in the model. This capability is demonstrated by introducing temperature dependence of Bruggeman's coefficient and lithium plating formation at LT. With the incorporation of the new physics, the calibrated P2D-ECT model accurately predicts the cell voltage with high confidence. The accurate predictions are used to obtain new insights into low temperature lithium-ion cell behavior.
Thornton, Peter E; Wang, Weile; Law, Beverly E.; Nemani, Ramakrishna R
2009-01-01
The increasing complexity of ecosystem models represents a major difficulty in tuning model parameters and analyzing simulated results. To address this problem, this study develops a hierarchical scheme that simplifies the Biome-BGC model into three functionally cascaded tiers and analyzes them sequentially. The first-tier model focuses on leaf-level ecophysiological processes; it simulates evapotranspiration and photosynthesis with prescribed leaf area index (LAI). The restriction on LAI is then lifted in the following two model tiers, which analyze how carbon and nitrogen are cycled at the whole-plant level (the second tier) and in all litter/soil pools (the third tier) to dynamically support the prescribed canopy. In particular, this study analyzes the steady state of these two model tiers with a set of equilibrium equations that are derived from Biome-BGC algorithms and are based on the principle of mass balance. Instead of spinning up the model for thousands of climate years, these equations are able to estimate carbon/nitrogen stocks and fluxes of the target (steady-state) ecosystem directly from the results obtained by the first-tier model. The model hierarchy is examined with model experiments at four AmeriFlux sites. The results indicate that the proposed scheme can effectively calibrate Biome-BGC to simulate observed fluxes of evapotranspiration and photosynthesis, and the carbon/nitrogen stocks estimated by the equilibrium analysis approach are highly consistent with the results of model simulations. Therefore, the scheme developed in this study may serve as a practical guide to calibrating and analyzing Biome-BGC; it also provides an efficient way to solve the problem of model spin-up, especially for applications over large regions. The same methodology may help analyze other similar ecosystem models as well.
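The equilibrium-instead-of-spin-up idea can be shown on a one-pool toy carbon balance (invented numbers, far simpler than Biome-BGC's coupled pools): setting the mass balance to zero gives the steady state directly, matching what a long spin-up converges to.

```python
# One-pool carbon balance: dC/dt = u - k*C (u = input flux, k = turnover rate).
u, k = 120.0, 0.03     # illustrative values, e.g. gC m^-2 yr^-1 and yr^-1

# Mass-balance equilibrium: set dC/dt = 0  ->  C* = u / k
c_eq = u / k

# The traditional alternative: spin the model up year by year until it settles
c = 0.0
for _ in range(1000):
    c = c + (u - k * c)   # explicit Euler step, dt = 1 yr
```

With many coupled pools the same idea becomes a system of equilibrium equations, which is what the study derives from the Biome-BGC algorithms.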
More efficient evolutionary strategies for model calibration with watershed model for demonstration
NASA Astrophysics Data System (ADS)
Baggett, J. S.; Skahill, B. E.
2008-12-01
Evolutionary strategies allow automatic calibration of more complex models than traditional gradient-based approaches, but they are more computationally intensive. We present several efficiency enhancements for evolution strategies, many of which are not new, but when combined have been shown to dramatically decrease the number of model runs required for calibration of synthetic problems. To reduce the number of expensive model runs we employ a surrogate objective function for an adaptively determined fraction of the population at each generation (Kern et al., 2006). We demonstrate improvements to the adaptive ranking strategy that increase its efficiency while sacrificing little reliability and further reduce the number of model runs required in densely sampled parts of parameter space. Furthermore, we include a gradient individual in each generation that is usually not selected when the search is in a global phase or when the derivatives are poorly approximated, but when selected near a smooth local minimum can dramatically increase convergence speed (Tahk et al., 2007). Finally, the selection of the gradient individual is used to adapt the size of the population near local minima. We show, by incorporating these enhancements into the Covariance Matrix Adaptation Evolution Strategy (CMAES; Hansen, 2006), that their synergetic effect is greater than the sum of their individual parts. This hybrid evolutionary strategy exploits smooth structure when it is present but degrades, at worst, to an ordinary evolutionary strategy if smoothness is not present. Calibration of 2D-3D synthetic models with the modified CMAES requires approximately 10%-25% of the model runs of ordinary CMAES. A preliminary demonstration of this hybrid strategy will be shown for watershed model calibration problems. Hansen, N. (2006). The CMA Evolution Strategy: A Comparing Review. In J.A. Lozano, P. Larrañaga, I. Inza and E. Bengoetxea (Eds.), Towards a New Evolutionary Computation. Advances in estimation of
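For readers unfamiliar with the base algorithm being enhanced, here is a minimal (mu, lambda) evolution strategy on a toy calibration surface. This is deliberately not CMA-ES (no covariance adaptation) and omits the surrogate and gradient-individual enhancements; the step-size rule and all numbers are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)

def es_minimize(f, x0, sigma=0.5, mu=5, lam=20, generations=200):
    """Minimal (mu, lambda) evolution strategy: isotropic Gaussian mutation,
    truncation selection, and a crude multiplicative step-size adaptation."""
    mean = np.asarray(x0, dtype=float)
    best_x, best_f = mean.copy(), f(mean)
    for _ in range(generations):
        pop = mean + sigma * rng.normal(size=(lam, mean.size))  # lambda offspring
        fit = np.array([f(x) for x in pop])
        order = np.argsort(fit)
        elite = pop[order[:mu]]                                 # best mu survive
        if fit[order[0]] < best_f:
            best_f, best_x = fit[order[0]], pop[order[0]].copy()
        new_mean = elite.mean(axis=0)                           # recombination
        # grow the step while the mean keeps improving, shrink it otherwise
        sigma *= 1.1 if f(new_mean) < f(mean) else 0.9
        mean = new_mean
    return best_x, best_f

# Calibrate a 3-parameter toy "model" by minimizing its squared-error surface
target = np.array([0.3, -1.2, 2.0])
sse = lambda p: float(np.sum((p - target) ** 2))
x_best, f_best = es_minimize(sse, x0=np.zeros(3))
```

Each call to `f` stands in for one expensive model run; the surrogate trick in the abstract replaces a fraction of those calls with a cheap approximation.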
NASA Astrophysics Data System (ADS)
Looper, Jonathan P.; Vieux, Baxter E.; Moreno, Maria A.
2012-02-01
Physics-based distributed (PBD) hydrologic models predict runoff throughout a basin using the laws of conservation of mass and momentum, and benefit from more accurate and representative precipitation input. Vflo™ is a gridded distributed hydrologic model that predicts runoff and continuously updates soil moisture. As a participating model in the second Distributed Model Intercomparison Project (DMIP2), Vflo™ is applied to the Illinois and Blue River basins in Oklahoma. Model parameters are derived from geospatial data for initial setup, and then adjusted to reproduce the observed flow under continuous time-series simulations and on an event basis. Simulation results demonstrate that certain runoff events are governed by saturation-excess processes, while in others, infiltration-rate-excess processes dominate. Streamflow prediction accuracy is enhanced when multi-sensor precipitation estimates (MPE) are bias-corrected through re-analysis of the MPE provided in the DMIP2 experiment, resulting in gauge-corrected precipitation estimates (GCPE). Model calibration identified a set of parameters that minimized objective functions for errors in runoff volume and instantaneous discharge. Simulated streamflows for the Blue and Illinois River basins have Nash-Sutcliffe efficiency coefficients of 0.61 and 0.68, respectively, for the 1996-2002 period using GCPE. The streamflow prediction accuracy improves by 74% in terms of Nash-Sutcliffe efficiency when GCPE is used during the calibration period. Without model calibration, excellent agreement between hourly simulated and observed discharge is obtained for the Illinois, whereas in the Blue River, adjustment of parameters affecting both saturation-excess and infiltration-rate-excess processes was necessary. During the 1996-2002 period, GCPE input was more important than model calibration for the Blue River, while model calibration proved more important for the Illinois River. During the verification period (2002
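The two quantities this abstract leans on, gauge-based bias correction of radar precipitation and the Nash-Sutcliffe efficiency, can be sketched together. The rainfall depths below are invented, and the single multiplicative bias factor is a simplification of the DMIP2 re-analysis:

```python
import numpy as np

# Illustrative bias correction of multi-sensor precipitation estimates (MPE)
# against rain gauges, the kind of re-analysis that produces GCPE.
mpe = np.array([4.0, 0.0, 12.0, 6.5, 2.0, 9.0])     # radar-based estimates (mm)
gauge = np.array([5.1, 0.0, 14.9, 8.2, 2.6, 11.4])  # gauge observations (mm)

# Field bias factor: total gauge depth over total MPE depth
bias = gauge.sum() / mpe.sum()
gcpe = bias * mpe

def nash_sutcliffe(sim, obs):
    """NSE = 1 - SSE / variance of obs about its mean; 1 is a perfect fit."""
    return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

nse_raw = nash_sutcliffe(mpe, gauge)
nse_corrected = nash_sutcliffe(gcpe, gauge)
```

In the study the NSE is computed on streamflow, not on precipitation, but the same formula applies; the point of GCPE is that removing the precipitation bias propagates into better simulated discharge.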
NASA Astrophysics Data System (ADS)
Trendafiloski, G.; Gaspa Rebull, O.; Ewing, C.; Podlaha, A.; Magee, B.
2012-04-01
Calibration and validation are crucial steps in the production of catastrophe models for the insurance industry, in order to assure a model's reliability and to quantify its uncertainty. Calibration is needed in all components of model development, including hazard and vulnerability. Validation is required to ensure that the losses calculated by the model match those observed in past events and those which could happen in the future. Impact Forecasting, the catastrophe modelling development centre of excellence within Aon Benfield, has recently launched its earthquake model for Algeria as part of the earthquake model for the Maghreb region. The earthquake model went through a detailed calibration process including: (1) the seismic intensity attenuation model, by use of macroseismic observations and maps from past earthquakes in Algeria; (2) calculation of country-specific vulnerability modifiers, by use of past damage observations in the country. The Benouar (1994) ground motion prediction relationship was proven the most appropriate for our model. Calculation of the regional vulnerability modifiers for the country led to 10% to 40% larger vulnerability indexes for different building types compared to average European indexes. The country-specific damage models also included aggregate damage models for residential, commercial and industrial properties, considering the description of the building stock given by the World Housing Encyclopaedia and local rebuilding cost factors equal to 10% for damage grade 1, 20% for damage grade 2, 35% for damage grade 3, 75% for damage grade 4 and 100% for damage grade 5. The damage grades comply with the European Macroseismic Scale (EMS-1998). The model was validated by use of "as-if" historical scenario simulations of three past earthquake events in Algeria: the M6.8 2003 Boumerdes, M7.3 1980 El-Asnam and M7.3 1856 Djidjelli earthquakes. The calculated return periods of the losses for the client market portfolio align with the
Radiative type III seesaw model and its collider phenomenology
NASA Astrophysics Data System (ADS)
von der Pahlen, Federico; Palacio, Guillermo; Restrepo, Diego; Zapata, Oscar
2016-08-01
We analyze the present bounds of a scotogenic model, the radiative type III seesaw, in which an additional scalar doublet and at least two fermion triplets of SU(2)L are added to the Standard Model. In the radiative type III seesaw, the new physics (NP) sector is odd under an exact global Z2 symmetry. This symmetry guarantees that the lightest NP neutral particle is stable, providing a natural dark matter candidate, and leads to naturally suppressed neutrino masses generated by a one-loop realization of an effective Weinberg operator. We focus on the region with the highest sensitivity in present and future LHC searches, with light scalar dark matter and at least one NP fermion triplet at the sub-TeV scale. This region allows for significant production cross sections of NP fermion pairs at the LHC. We reinterpret a set of LHC searches for supersymmetric particles, using the package CheckMATE, to set limits on our model as a function of the masses of the NP particles and their Yukawa interactions. The most sensitive search channel is found to be dileptons plus missing transverse energy. In order to target the case of tau-enhanced decays and the case of compressed spectra, we reinterpret the recent slepton and chargino search bounds by ATLAS. For a lightest NP fermion triplet with a maximal branching ratio to either electrons or muons, we exclude NP fermion masses of up to 650 GeV, while this bound is reduced to approximately 400 GeV in the tau-philic case. Allowing for a general flavor structure, we set limits on the Yukawa couplings, which are directly related to the neutrino flavor structure.
NASA Astrophysics Data System (ADS)
Uddameri, V.; Kuchanur, M.
2007-01-01
Soil moisture balance studies provide a convenient approach to estimate aquifer recharge when only limited site-specific data are available. A monthly mass-balance approach has been utilized in this study to estimate recharge in a small watershed in the coastal bend of South Texas. The developed lumped-parameter model employs four adjustable parameters to calibrate model-predicted stream runoff to observations at a gaging station. A new procedure was developed to correctly capture the intermittent nature of rainfall. The total monthly rainfall was assigned to a single equivalent storm whose duration was obtained via calibration. A total of four calibrations were carried out using an evolutionary computing technique called genetic algorithms as well as the conventional gradient descent (GD) technique. Ordinary least squares and heteroscedastic maximum likelihood error (HMLE) based objective functions were evaluated as part of this study. While the genetic algorithm based calibrations were relatively better at capturing the peak runoff events, the GD based calibration did slightly better at capturing the low flow events. Treating the Box-Cox exponent in the HMLE function as a calibration parameter did not yield better estimates, and the study corroborates the suggestion made in the literature of fixing this exponent at 0.3. The model outputs were compared against available information, and the results indicate that the developed modeling approach provides a conservative estimate of recharge.
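The HMLE criterion with the Box-Cox exponent fixed at 0.3 can be made concrete. The weighting below follows the common Sorooshian-Dracup form of the estimator, and the flow values are invented for illustration:

```python
import numpy as np

def hmle(sim, obs, lam=0.3):
    """Heteroscedastic maximum likelihood error with Box-Cox exponent lam.
    Weights w_t = obs_t^(2*(lam-1)) down-weight errors on large flows,
    reflecting measurement noise that grows with flow magnitude."""
    w = obs ** (2.0 * (lam - 1.0))
    resid = sim - obs
    # normalized by the geometric mean of the weights
    return np.mean(w * resid**2) / np.exp(np.mean(np.log(w)))

# Illustration: the same absolute error costs less on a peak flow than on a
# low flow, which is the heteroscedastic behavior the estimator encodes.
obs = np.array([0.5, 1.0, 2.0, 8.0, 20.0])
err_on_peak = hmle(obs + np.array([0.0, 0.0, 0.0, 0.0, 1.0]), obs)
err_on_low = hmle(obs + np.array([1.0, 0.0, 0.0, 0.0, 0.0]), obs)
```

With lam = 1 the weights collapse to 1 and HMLE reduces to the ordinary mean squared error, the other objective the study evaluated.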
NASA Technical Reports Server (NTRS)
Scott, W. A.
1984-01-01
The propulsion simulator calibration laboratory (PSCL), in which calibrations can be performed to determine the gross thrust and airflow of propulsion simulators installed in wind tunnel models, is described. The preliminary checkout, evaluation, and calibration of the PSCL's three-component force measurement system is reported. Methods and equipment were developed for the alignment and calibration of the force measurement system. The initial alignment of the system demonstrated the need for a more efficient means of aligning the system's components. The use of precision alignment jigs increases both the speed and accuracy with which the system is aligned. The calibration of the force measurement system shows that the methods and equipment for this procedure can be successful.
Calibrating the Johnson-Holmquist Ceramic Model for SiC using CTH
NASA Astrophysics Data System (ADS)
Cazamias, James
2009-06-01
The Johnson-Holmquist ceramic material model has been calibrated and successfully applied to numerically simulate ballistic events using the Lagrangian code EPIC. While the majority of the constants are "physics" based, two of the constants for the failed material response are calibrated using ballistic experiments conducted on a confined cylindrical ceramic target. The maximum strength of the failed ceramic is calibrated by matching the penetration velocity. The second refers to the equivalent plastic strain at failure under constant pressure and is calibrated using the dwell time. Use of these two constants in the CTH Eulerian hydrocode does not predict the ballistic response. This difference may be due to the phenomenological nature of the model and the different numerical schemes used by the codes. This paper determines the aforementioned material constants for SiC suitable for simulating ballistic events using CTH.
NASA Astrophysics Data System (ADS)
Tian, Jialin; Smith, William L.; Gazarik, Michael J.
2008-12-01
The ultimate remote sensing benefits of high resolution infrared radiance spectrometers will be realized with their geostationary satellite implementation in the form of imaging spectrometers. This will enable dynamic features of the atmosphere's thermodynamic fields and pollutant and greenhouse gas constituents to be observed for revolutionary improvements in weather forecasts and more accurate air quality and climate predictions. As an important step toward realizing this application objective, the Geostationary Imaging Fourier Transform Spectrometer (GIFTS) Engineering Demonstration Unit (EDU) was successfully developed under the NASA New Millennium Program, 2000-2006. The GIFTS-EDU instrument employs three focal plane arrays (FPAs), which gather measurements across the long-wave IR (LWIR), short/mid-wave IR (SMWIR), and visible spectral bands. The GIFTS calibration is achieved using internal blackbody calibration references at ambient (260 K) and hot (286 K) temperatures. In this paper, we introduce a refined calibration technique that utilizes Principal Component (PC) analysis to compensate for instrument distortions and artifacts, thereby enhancing the absolute calibration accuracy. This method is applied to data collected during the GIFTS Ground Based Measurement (GBM) experiment, together with simultaneous observations by the accurately calibrated AERI (Atmospheric Emitted Radiance Interferometer), both zenith viewing the sky through the same external scene mirror at ten-minute intervals throughout a cloudless day at Logan, Utah, on September 13, 2006. The accurately calibrated GIFTS radiances are produced using the first four PC scores in the GIFTS-AERI regression model. Temperature and moisture profiles retrieved from the PC-calibrated GIFTS radiances are verified against radiosonde measurements collected throughout the GIFTS sky measurement period. Using the GIFTS GBM calibration model, we compute the calibrated radiances from data
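The PC-score regression at the heart of this calibration can be sketched on synthetic data: spectra built from a few latent patterns plus an artificial distortion stand in for the GIFTS/AERI pairs, and the first four PC scores of the measured spectra are regressed against the reference. Everything below is invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic stand-in: a low-rank "true" spectral signal, plus a structured
# instrument distortion and small noise, observed over 60 scenes x 40 channels.
n_obs, n_chan = 60, 40
latent = rng.normal(size=(n_obs, 3))
loadings = rng.normal(size=(3, n_chan))
truth = latent @ loadings                               # reference (AERI-like)
distortion = np.outer(rng.normal(size=n_obs), rng.normal(size=n_chan))
measured = truth + 0.5 * distortion + 0.05 * rng.normal(size=(n_obs, n_chan))

# Principal components of the measured spectra via SVD of the centered matrix
mean_spec = measured.mean(axis=0)
U, S, Vt = np.linalg.svd(measured - mean_spec, full_matrices=False)
scores = U[:, :4] * S[:4]          # first four PC scores per observation

# Regress each reference channel on the PC scores (plus an intercept)
A = np.column_stack([np.ones(n_obs), scores])
coef, *_ = np.linalg.lstsq(A, truth, rcond=None)
calibrated = A @ coef

rms_before = np.sqrt(np.mean((measured - truth) ** 2))
rms_after = np.sqrt(np.mean((calibrated - truth) ** 2))
```

Because the distortion lives in a few spectral patterns, it is absorbed into the PC scores and regressed out, which is the mechanism the paper exploits with the GIFTS-AERI pairs.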
Inaccuracy Determination in the Mathematical Model of the LabSOCS Efficiency Calibration Program
NASA Astrophysics Data System (ADS)
Kuznetsov, M.; Nikishkin, T.; Chursin, S.
2016-08-01
A study of the quantitative inaccuracy in radioactive material determination caused by semiconductor detector aging is presented in this article. The study was conducted using a p-type coaxial GC 1518 detector made of high-purity germanium, produced by Canberra, and the LabSOCS mathematical efficiency calibration program. It was discovered that during 8 years of operation the efficiency of the detector had decreased due to growth of the dead layer of the germanium crystal. Increasing the thickness of the dead layer leads to two effects that contribute to the efficiency decrease: a shielding effect and a reduction of the active volume of the germanium crystal. It is found that the shielding effect contributes at energies below 88 keV. At energies above 88 keV the inaccuracy is connected with the decrease of the germanium crystal's active volume, caused by lithium thermal diffusion.
Impact of the calibration period on the conceptual rainfall-runoff model parameter estimates
NASA Astrophysics Data System (ADS)
Todorovic, Andrijana; Plavsic, Jasna
2015-04-01
A conceptual rainfall-runoff model is defined by its structure and parameters, which are commonly inferred through model calibration. Parameter estimates depend on the objective function(s), optimisation method, and calibration period. Model calibration over different periods may result in dissimilar parameter estimates, while model efficiency decreases outside the calibration period. The problem of model (parameter) transferability, which conditions the reliability of hydrologic simulations, has been investigated for decades. In this paper, the dependence of the parameter estimates and model performance on the calibration period is analysed. The main question addressed is: are there any changes in optimised parameters and model efficiency that can be linked to changes in hydrologic or meteorological variables (flow, precipitation and temperature)? The conceptual, semi-distributed HBV-light model is calibrated over five-year periods shifted by one year (sliding time windows). The length of the calibration periods is selected to enable identification of all parameters. One water year of model warm-up precedes every simulation, which starts at the beginning of a water year. The model is calibrated using the built-in GAP optimisation algorithm. The objective function used for calibration is composed of the Nash-Sutcliffe coefficient for flows and for logarithms of flows, and the volumetric error, all of which participate in the composite objective function with approximately equal weights. The same prior parameter ranges are used in all simulations. The model is calibrated against flows observed at the Slovac stream gauge on the Kolubara River in Serbia (records from 1954 to 2013). There are no trends in precipitation or flows; however, there is a statistically significant increasing trend in temperature in this catchment. Parameter variability across the calibration periods is quantified in terms of standard deviations of normalised parameters, enabling detection of the most variable parameters
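The variability measure described at the end, the standard deviation of parameters normalised to their prior ranges, can be sketched directly. The parameter values and prior ranges below are made up, not the HBV-light calibration results:

```python
import numpy as np

# Optimised parameter values from hypothetical five-year sliding calibration
# windows (rows = windows, columns = parameters; all values invented).
params = np.array([
    [1.9, 0.45, 120.0],
    [2.1, 0.52, 118.0],
    [2.0, 0.71, 150.0],
    [1.8, 0.40,  95.0],
])
prior_lo = np.array([0.5, 0.1,  50.0])
prior_hi = np.array([4.0, 1.0, 300.0])

# Normalise each parameter to its prior range so variabilities are comparable
norm = (params - prior_lo) / (prior_hi - prior_lo)

# Standard deviation across calibration periods, per parameter: large values
# mark the parameters least stable under a change of calibration period.
variability = norm.std(axis=0)
most_variable = int(np.argmax(variability))
```

Without the normalisation, parameters with wide prior ranges (like the third column here) would dominate the comparison purely because of their units.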
Expression of CA III in rodent models of obesity.
Stanton, L W; Ponte, P A; Coleman, R T; Snyder, M A
1991-06-01
To achieve a better understanding of the biochemical basis of obesity, we have undertaken comparative analyses of adipose tissue of lean and obese mice. By two-dimensional gel analysis, carbonic anhydrase-III (CA III) has been identified as a major constituent of murine adipose tissue. Quantitative comparisons of CA III protein and mRNA levels indicate that this enzyme is expressed at lower levels in adipose tissue from animals that were either genetically obese or had experimentally induced obesity compared to levels in the corresponding lean controls. This decrease in CA III expression was unique to adipose tissue, since other CA III-containing organs and tissues did not show a change when lean and obese animals were compared. Additionally, levels of CA III in adipose tissue from obese animals responded to acute changes in energy balance of the animal. These results are discussed in light of possible metabolic roles for CA III.
NASA Astrophysics Data System (ADS)
Keawbunsong, P.; Supanakoon, P.; Promwong, S.
2015-05-01
This article presents a calibration of Hata's path loss model for predicting Digital Terrestrial Television (DTTV) propagation in an urban area of southern Thailand, based on measurements of the power signal broadcast by the network operators on 4 channels within the Haadyai urban area, Songkla Province. The chosen location is a densely populated area at a distance of 2.5-6.5 km from the broadcasting station. The calibration was conducted through a statistical method using the root mean square error (RMSE) between the received power signal and the predicted path loss model, followed by computing relative errors to indicate the efficiency of the calibrated model. The RMSE analysis of CH 26 at 514 MHz, CH 42 at 642 MHz, CH 46 at 674 MHz, and CH 54 at 738 MHz shows that the calibrated Hata path loss model is closer to the measured data than the original and the other models, and its relative errors are closer to zero than those of the uncalibrated path loss model. This makes the calibrated Hata path loss model more accurate in prediction and subsequently more suitable for use in planning the network design.
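The simplest form of such a calibration is an additive offset that minimises the RMSE between measured and predicted path loss. The Hata urban formula below is the standard textbook form (valid 150-1500 MHz, so it covers the 514-738 MHz channels); the drive-test values, antenna heights, and offset are invented for illustration:

```python
import numpy as np

def hata_urban(f_mhz, d_km, h_b=100.0, h_m=1.5):
    """Classical Hata urban path-loss model (dB) for base-station height h_b (m)
    and mobile height h_m (m), using the small/medium-city correction a(h_m)."""
    a_hm = (1.1 * np.log10(f_mhz) - 0.7) * h_m - (1.56 * np.log10(f_mhz) - 0.8)
    return (69.55 + 26.16 * np.log10(f_mhz) - 13.82 * np.log10(h_b) - a_hm
            + (44.9 - 6.55 * np.log10(h_b)) * np.log10(d_km))

# Hypothetical drive-test path losses (dB) for one channel at 514 MHz over the
# 2.5-6.5 km range mentioned in the abstract; numbers invented for illustration.
d = np.array([2.5, 3.5, 4.5, 5.5, 6.5])
measured = hata_urban(514.0, d) + 6.0 + np.array([0.8, -1.1, 0.4, -0.3, 0.6])

predicted = hata_urban(514.0, d)
rmse = lambda e: float(np.sqrt(np.mean(e**2)))
rmse_raw = rmse(measured - predicted)

# Calibration: an additive offset equal to the mean residual, which is exactly
# the constant correction that minimises the RMSE.
offset = float(np.mean(measured - predicted))
rmse_cal = rmse(measured - (predicted + offset))
```

Repeating this per channel (514, 642, 674, 738 MHz) and comparing `rmse_cal` across candidate models mirrors the selection procedure the abstract describes.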
Zhu, Feng; Dong, Liqiang; Jin, Xin; Jiang, Binhui; Kalra, Anil; Shen, Ming; Yang, King H
2015-11-01
Anthropometric test devices (ATDs), such as the Hybrid III crash-test dummy, have been used to simulate the lower-extremity responses of military personnel subjected to loading conditions from anti-vehicular (AV) landmine blasts. Numerical simulations [e.g., finite element (FE) analysis] of such high-speed vertical loading on ATD parts require accurate material parameters that are dependent on strain rate. This study presents a combined experimental and computational approach to calibrate the rate-dependent properties of three materials on the lower extremities of the Hybrid III dummy. The three materials are heel-pad foam, foot skin, and lower-leg flesh, and each has properties that can affect simulation results for forces and moments transferred to the lower extremities. Specifically, the behavior of the heel-pad foam was directly calibrated through standard compression tests, and the properties of the foot skin and lower-leg flesh were calibrated through an optimization procedure in which the material parameters were adjusted for the best least-squares fit between the calculated force-deflection responses and the experimental data. The material models updated with strain-rate effects were then integrated into an ATD full-body FE model (FEM), which was used to simulate vertical impulsive loading responses at different speeds. Validations using this model demonstrated basic replication of experimentally obtained response patterns of the tibia. The bending moments matched those calculated from the experimental data 25-40% more accurately than those obtained from the original model, and axial forces were 60-90% more accurate. However, neither the original nor the modified models well captured whole-body response patterns, and further improvements are required. As a generalized approach, the optimization method presented in this paper can be applied to characterize material constants for a wide range of materials. PMID:26660755
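As a toy version of the force-deflection calibration described above, material constants can be fitted by least squares. The two-parameter foam law and the synthetic data below are assumptions for illustration, not the Hybrid III measurements or material models; because the model is linear in its constants, the normal equations can be solved directly.

```python
# Hypothetical foam law F = k1*x + k3*x**3, fitted by linear least squares.
def fit_foam(x, f):
    # Sums for the 2x2 normal equations in (k1, k3)
    s11 = sum(xi ** 2 for xi in x)
    s13 = sum(xi ** 4 for xi in x)
    s33 = sum(xi ** 6 for xi in x)
    b1 = sum(xi * fi for xi, fi in zip(x, f))
    b3 = sum(xi ** 3 * fi for xi, fi in zip(x, f))
    det = s11 * s33 - s13 * s13
    k1 = (b1 * s33 - b3 * s13) / det
    k3 = (s11 * b3 - s13 * b1) / det
    return k1, k3

# Synthetic "experimental" force-deflection curve from known constants
xs = [0.002 * i for i in range(1, 11)]           # deflections (m)
fs = [120.0 * xi + 5.0e6 * xi ** 3 for xi in xs] # forces (N)
k1, k3 = fit_foam(xs, fs)
```

In the paper's setting the model is evaluated by FE simulation rather than a closed-form law, so the fit is done iteratively by an optimizer instead of normal equations, but the objective (least-squares mismatch to measured force-deflection data) is the same.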
Methane oxidation in a biofilter (Part 2): A lab-scale experiment for model calibration.
Amodeo, Corrado; Masi, Salvatore; Van Hulle, Stijn W H; Zirpoli, Pier Francesco; Mancini, Ignazio M; Caniani, Donatella
2015-01-01
In this study, an experiment on a biological methane oxidation column is presented with the aim of calibrating a mathematical model developed in an earlier study. The column was designed to reproduce a real biofilter at lab scale, taking into account the most probable landfill boundary conditions. Although the methane oxidation efficiency in the column was lower than expected (around 35%), an appropriate model implementation showed acceptable agreement between the model simulation and the experimental data (Theil's Inequality Coefficient of 0.08). A calibrated model allows better management of biofilter performance in terms of methane oxidation.
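The agreement measure cited above, Theil's Inequality Coefficient, is bounded between 0 (perfect fit) and 1 (worst). A minimal sketch of the computation, with made-up outlet concentration values rather than the study's data:

```python
import math

def theil_u(simulated, observed):
    """Theil's Inequality Coefficient: 0 = perfect agreement, 1 = worst."""
    n = len(observed)
    num = math.sqrt(sum((s - o) ** 2 for s, o in zip(simulated, observed)) / n)
    den = (math.sqrt(sum(s * s for s in simulated) / n)
           + math.sqrt(sum(o * o for o in observed) / n))
    return num / den

# Illustrative outlet CH4 concentrations (% v/v); values are invented
obs = [4.1, 3.8, 3.5, 3.3, 3.0, 2.9]
sim = [4.0, 3.9, 3.6, 3.1, 3.1, 2.8]
u = theil_u(sim, obs)
```

A value near 0.08, as reported in the abstract, indicates simulated and measured series that track each other closely relative to their magnitudes.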
NASA Astrophysics Data System (ADS)
Romanowicz, Renata; Osuch, Marzena; Grabowiecka, Magdalena
2013-12-01
Despite the development of new measuring techniques, monitoring systems and advances in computer technology, rainfall-flow modelling is still a challenge. The reasons are multiple and fairly well known. They include the distributed, heterogeneous nature of the environmental variables affecting flow from the catchment: precipitation, evapotranspiration and, in some seasons and catchments in Poland, snowmelt. This paper presents a review of work done on the calibration and validation of rainfall-runoff modelling, with a focus on the conceptual HBV model. We give a synthesis of the problems and propose a practical guide to the calibration and validation of rainfall-runoff models.
ERIC Educational Resources Information Center
Kubinger, Klaus D.
2005-01-01
This article emphasizes that the Rasch model is not only very useful for psychological test calibration but is also necessary if the number of solved items is to be used as an examinee's score. Simplified proof that the Rasch model implies specific objective parameter comparisons is given. Consequently, a model check per se is possible. For data…
Maximin Calibration Designs for the Nominal Response Model: An Empirical Evaluation
ERIC Educational Resources Information Center
Passos, Valeria Lima; Berger, Martijn P. F.
2004-01-01
The problem of finding optimal calibration designs for dichotomous item response theory (IRT) models has been extensively studied in the literature. In this study, this problem is extended to polytomous IRT models. Focus is given to items described by the nominal response model (NRM). The optimization's objective is to minimize the generalized…
Rafique, Rashad; Fienen, Michael N.; Parkin, Timothy B.; Anex, Robert P.
2013-01-01
DayCent is a biogeochemical model of intermediate complexity widely used to simulate greenhouse gases (GHG), soil organic carbon and nutrients in crop, grassland, forest and savannah ecosystems. Although this model has been applied to a wide range of ecosystems, it is still typically parameterized through a traditional “trial and error” approach and has not been calibrated using statistical inverse modelling (i.e. algorithmic parameter estimation). The aim of this study is to establish and demonstrate a procedure for calibration of DayCent to improve estimation of GHG emissions. We coupled DayCent with the parameter estimation (PEST) software for inverse modelling. The PEST software can be used for calibration through regularized inversion as well as model sensitivity and uncertainty analysis. The DayCent model was analysed and calibrated using N2O flux data collected over 2 years at the Iowa State University Agronomy and Agricultural Engineering Research Farms, Boone, IA. Crop year 2003 data were used for model calibration and 2004 data were used for validation. The optimization of DayCent model parameters using PEST significantly reduced model residuals relative to the default DayCent parameter values. Parameter estimation improved the model performance by reducing the sum of weighted squared residual differences between measured and modelled outputs by up to 67%. For the calibration period, simulation with the default model parameter values underestimated mean daily N2O flux by 98%. After parameter estimation, the model underestimated the mean daily fluxes by 35%. During the validation period, the calibrated model reduced the sum of weighted squared residuals by 20% relative to the default simulation. The sensitivity analysis performed provides important insights into the model structure, offering guidance for model improvement.
Evaluation of Automated Model Calibration Techniques for Residential Building Energy Simulation
Robertson, J.; Polly, B.; Collis, J.
2013-09-01
This simulation study adapts and applies the general framework described in BESTEST-EX (Judkoff et al 2010) for self-testing residential building energy model calibration methods. BEopt/DOE-2.2 is used to evaluate four mathematical calibration methods in the context of monthly, daily, and hourly synthetic utility data for a 1960's-era existing home in a cooling-dominated climate. The home's model inputs are assigned probability distributions representing uncertainty ranges, random selections are made from the uncertainty ranges to define 'explicit' input values, and synthetic utility billing data are generated using the explicit input values. The four calibration methods evaluated in this study are: an ASHRAE 1051-RP-based approach (Reddy and Maor 2006), a simplified simulated annealing optimization approach, a regression metamodeling optimization approach, and a simple output ratio calibration approach. The calibration methods are evaluated for monthly, daily, and hourly cases; various retrofit measures are applied to the calibrated models and the methods are evaluated based on the accuracy of predicted savings, computational cost, repeatability, automation, and ease of implementation.
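Of the four methods compared above, the output-ratio approach is the simplest to sketch: scale an adjustable model multiplier by the ratio of measured to simulated totals so the overall output matches. The toy monthly model and numbers below are assumptions for illustration, not BEopt/DOE-2.2 behavior.

```python
# Hypothetical one-knob energy model: monthly use = multiplier * driver.
def simulate(multiplier, monthly_drivers):
    return [multiplier * d for d in monthly_drivers]

drivers = [310, 280, 250, 200, 160, 140, 150, 170, 190, 230, 270, 300]
measured = simulate(1.27, drivers)        # synthetic "utility bills"
baseline = simulate(1.00, drivers)        # uncalibrated model output

# Output-ratio calibration: one scaling factor matches annual totals
ratio = sum(measured) / sum(baseline)
calibrated = simulate(1.00 * ratio, drivers)
```

The appeal is its negligible computational cost; the limitation, which the study's comparison highlights, is that a single ratio cannot correct input errors that distort the monthly or hourly shape of the load.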
Toward cosmological-model-independent calibrations for the luminosity relations of Gamma-Ray Bursts
NASA Astrophysics Data System (ADS)
Ding, Xuheng; Li, Zhengxiang; Zhu, Zong-Hong
2015-05-01
Gamma-ray bursts (GRBs) have been widely used as distance indicators to measure the cosmic expansion and explore the nature of dark energy. A popular method adopted in previous works is to calibrate the luminosity relations, which are responsible for the distance estimation of GRBs, with more primary (low-redshift) cosmic distance ladder objects, type Ia supernovae (SNe Ia). Since the distances of SNe Ia in the samples used to calibrate GRB luminosity relations were usually derived from a global fit in a specific cosmological model, the distance of a GRB at a given redshift calibrated with matching SNe Ia was still cosmological-model-dependent. In this paper, we first directly determine the distances of SNe Ia from the Angular Diameter Distances (ADDs) of galaxy clusters, without any assumption about the background of the universe, and then calibrate the GRB luminosity relations with our cosmology-independent distances of SNe Ia. The results suggest that, compared to the previous approach in which the distances of SNe Ia used as calibrators are determined from a global fit in a particular cosmological model, our treatment yields almost the same calibrations of the GRB luminosity relations, and their cosmological implications do not suffer from any circularity.
ATOMIC DATA AND SPECTRAL MODEL FOR Fe III
Bautista, Manuel A.; Ballance, Connor P.; Quinet, Pascal
2010-08-01
We present new atomic data (radiative transition rates and collision strengths) from large-scale calculations and a non-LTE spectral model for Fe III. This model is in very good agreement with observed astronomical emission spectra, in contrast with previous models that yield large discrepancies with observations. The present atomic computations employ a combination of atomic physics methods, e.g., relativistic Hartree-Fock, the Thomas-Fermi-Dirac potential, and Dirac-Fock computations of A-values, together with the R-matrix method with intermediate-coupling frame transformation and the Dirac R-matrix. We study the advantages and shortcomings of each method. It is found that the Dirac R-matrix collision strengths yield excellent agreement with observations, much improved over previously available models. By contrast, the transformation of the LS-coupling R-matrix fails to yield accurate effective collision strengths at around 10^4 K, despite using very large configuration expansions, due to the limited treatment of spin-orbit effects in the near-threshold resonances of the collision strengths. The present work demonstrates that accurate atomic data for low-ionization iron-peak species are now within reach.
Rebaudo, François; Faye, Emile; Dangles, Olivier
2016-01-01
A large body of literature has recently recognized the role of microclimates in controlling the physiology and ecology of species, yet the relevance of fine-scale climatic data for modeling species performance and distribution remains a matter of debate. Using a 6-year monitoring of three potato moth species, major crop pests in the tropical Andes, we asked whether the spatiotemporal resolution of temperature data affects the predictions of models of moth performance and distribution. For this, we used three different climatic data sets: (i) the WorldClim dataset (global dataset), (ii) air temperature recorded using data loggers (weather station dataset), and (iii) air crop canopy temperature (microclimate dataset). We developed a statistical procedure to calibrate all datasets to monthly and yearly variation in temperatures, while keeping both spatial and temporal variances (air monthly temperature at 1 km² for the WorldClim dataset, air hourly temperature for the weather station, and air minute temperature over 250 m radius disks for the microclimate dataset). Then, we computed pest performances based on these three datasets. Results for temperature ranging from 9 to 11°C revealed discrepancies in the simulation outputs in both survival and development rates depending on the spatiotemporal resolution of the temperature dataset. Temperature and simulated pest performances were then combined into multiple linear regression models to compare predicted vs. field data. We used an additional set of study sites to test whether the results of our model could be extrapolated over larger scales. Results showed that the model implemented with microclimatic data best predicted observed pest abundances for our study sites, but was less accurate than the global dataset model when performed at larger scales. Our simulations therefore stress the importance of considering different temperature datasets depending on the issue to be solved in order to accurately predict species
NASA Astrophysics Data System (ADS)
Minville, Marie; Cartier, Dominique; Guay, Catherine; Leclaire, Louis-Alexandre; Audet, Charles; Le Digabel, Sébastien; Merleau, James
2014-06-01
Different sets of calibrated model parameters can yield divergent hydrological simulations which in turn can lead to different operational decisions or scientific conclusions. In order to obtain reliable hydrological results, proper calibration is therefore fundamental. This article proposes a new calibration approach for conceptual hydrological models based on the paradigm that hydrological process representation, along with the reproduction of observed streamflows, need to be taken into account when assessing the performance of a hydrological model. Several studies have shown that complementary data can be used to improve hydrological process representation and make hydrological modeling more robust. In the current study, the process of interest is actual evapotranspiration (AET). In order to obtain a more realistic representation of AET, meteorological variables and the AET mean annual cycle simulated by a regional climate model (RCM) driven by reanalysis are used to impose constraints during the optimization procedure. This calibration strategy is compared to a second strategy which relies on AET derived from reference data and to the classical approach based solely on the reproduction of observed discharges. The different methodologies are applied to calibrate the lumped conceptual model HSAMI, used operationally at Hydro-Québec, for six Canadian snow-dominated basins with various hydrometeorological and physiographical characteristics.
Rafique, Rashid; Kumar, Sandeep; Luo, Yiqi; Kiely, Gerard; Asrar, Ghassem R.
2015-02-01
The accurate calibration of complex biogeochemical models is essential for the robust estimation of soil greenhouse gases (GHG) as well as other environmental conditions and parameters that are used in research and policy decisions. DayCent is a popular biogeochemical model used both nationally and internationally for this purpose. Despite DayCent's popularity, its complex parameter estimation is often based on experts' knowledge, which is somewhat subjective. In this study we used the inverse modelling parameter estimation software PEST to calibrate the DayCent model based on sensitivity and identifiability analysis. Using previously published N2O and crop yield data as the basis of our calibration approach, we found that half of the 140 parameters used in this study were the primary drivers of calibration differences (i.e. the most sensitive), and the remaining parameters could not be identified given the data set and parameter ranges we used in this study. The post-calibration results showed improvement over the pre-calibration parameter set based on a decrease in residual differences of 79% for N2O fluxes and 84% for crop yield, and an increase in the coefficient of determination of 63% for N2O fluxes and 72% for corn yield. The results of our study suggest that future studies need to better characterize germination temperature, number of degree-days and the temperature dependency of plant growth; these processes were highly sensitive and could not be adequately constrained by the data used in our study. Furthermore, the sensitivity and identifiability analysis was helpful in providing deeper insight into important processes and associated parameters that can lead to further improvement in the calibration of the DayCent model.
When are multiobjective calibration trade-offs in hydrologic models meaningful?
NASA Astrophysics Data System (ADS)
Kollat, J. B.; Reed, P. M.; Wagener, T.
2012-03-01
This paper applies a four-objective calibration strategy focusing on peak flows, low flows, water balance, and flashiness to 392 model parameter estimation experiment (MOPEX) watersheds across the United States. Our analysis explores the influence of model structure by analyzing how the multiobjective calibration trade-offs for two conceptual hydrologic models, the Hydrology Model (HYMOD) and the Hydrologiska Byråns Vattenbalansavdelning (HBV) model, compare for each of the 392 catchments. Our results demonstrate that for modern multiobjective calibration frameworks to identify any meaningful measure of model structural failure, users must be able to carefully control the precision by which they evaluate their trade-offs. Our study demonstrates that the concept of epsilon-dominance provides an effective means of attaining bounded and meaningful hydrologic model calibration trade-offs. When analyzed at an appropriate precision, we found that meaningful multiobjective trade-offs are far less frequent than prior literature has suggested. However, when trade-offs do exist at a meaningful precision, they have significant value for supporting hydrologic model selection, distinguishing core model deficiencies, and identifying hydroclimatic regions where hydrologic model prediction is highly challenging.
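The epsilon-dominance concept invoked above can be sketched as a grid-snapping check: objectives are floored onto an epsilon grid, and only differences that cross a grid cell count as meaningful. The epsilon values and objective vectors below are illustrative, not the MOPEX results.

```python
import math

# Snap each (minimization) objective onto an epsilon grid, then compare boxes.
def eps_box(obj, eps):
    return tuple(math.floor(v / e) for v, e in zip(obj, eps))

def eps_dominates(a, b, eps):
    ba, bb = eps_box(a, eps), eps_box(b, eps)
    return all(x <= y for x, y in zip(ba, bb)) and any(x < y for x, y in zip(ba, bb))

eps = (0.05, 0.05)     # precision at which trade-offs count as meaningful
a = (0.31, 0.41)       # e.g. (peak-flow error, low-flow error), illustrative
b = (0.33, 0.52)       # worse by more than one grid cell in the second objective
c = (0.34, 0.44)       # differs from a only within the grid resolution
```

Here `a` epsilon-dominates `b`, but `a` and `c` land in the same boxes, so their apparent trade-off vanishes at this precision, which is the paper's point about bounding which calibration trade-offs are meaningful.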
Necpálová, Magdalena; Anex, Robert P.; Fienen, Michael N.; Del Grosso, Stephen J.; Castellano, Michael J.; Sawyer, John E.; Iqbal, Javed; Pantoja, Jose L.; Barker, Daniel W.
2015-01-01
The ability of biogeochemical ecosystem models to represent agro-ecosystems depends on their correct integration with field observations. We report simultaneous calibration of 67 DayCent model parameters using multiple observation types through inverse modeling using the PEST parameter estimation software. Parameter estimation reduced the total sum of weighted squared residuals by 56% and improved model fit to crop productivity, soil carbon, volumetric soil water content, soil temperature, N2O, and soil NO3− compared to the default simulation. Inverse modeling substantially reduced predictive model error relative to the default model for all model predictions, except for soil NO3− and NH4+. Post-processing analyses provided insights into parameter–observation relationships based on parameter correlations, sensitivity and identifiability. Inverse modeling tools are shown to be a powerful way to systematize and accelerate the process of biogeochemical model interrogation, improving our understanding of model function and the underlying ecosystem biogeochemical processes that they represent.
Validation of the predictive power of a calibrated physical stochastic resist model
NASA Astrophysics Data System (ADS)
Robertson, Stewart A.; Biafore, John J.; Smith, Mark D.; Reilly, Michael T.; Wandell, Jerome
2009-12-01
A newly developed stochastic resist model, implemented in a prototype version of the PROLITH lithography simulation software, is fitted to experimental data for a commercially available immersion ArF photoresist, EPIC 2013 (Dow Electronic Materials). Calibration is performed considering only the mean CD value through focus and dose for three line/space features of varying pitch (dense, semi-dense and isolated). An unweighted Root Mean Squared Error (RMSE) of approximately 2.0 nm is observed when the calibrated model is compared to the experimental data. Although the model is calibrated only to mean CD values, it is able to accurately predict LER through focus to better than 1.5 nm RMSE and highly accurate CDU distributions at fixed focus and dose conditions. It is also shown how a stochastic model can be used to describe the bridging behavior often observed at marginal focus and exposure conditions.
NASA Astrophysics Data System (ADS)
Wentworth, Mami Tonoe
Uncertainty quantification plays an important role when making predictive estimates of model responses. In this context, uncertainty quantification is defined as quantifying and reducing uncertainties, and the objective is to quantify uncertainties in parameters, models and measurements, and propagate the uncertainties through the model, so that one can make a predictive estimate with quantified uncertainties. Two of the aspects of uncertainty quantification that must be performed prior to propagating uncertainties are model calibration and parameter selection. There are several efficient techniques for these processes; however, the accuracy of these methods is often not verified. This is the motivation for our work, and in this dissertation, we present and illustrate verification frameworks for model calibration and parameter selection in the context of biological and physical models. First, HIV models, developed and improved by [2, 3, 8], describe the viral infection dynamics of HIV disease. These are also used to make predictive estimates of viral loads and T-cell counts and to construct an optimal control for drug therapy. Estimating input parameters is an essential step prior to uncertainty quantification. However, not all the parameters are identifiable, implying that they cannot be uniquely determined by the observations. These unidentifiable parameters can be partially removed by performing parameter selection, a process in which parameters that have minimal impact on the model response are determined. We provide verification techniques for Bayesian model calibration and parameter selection for an HIV model. As an example of a physical model, we employ a heat model with experimental measurements presented in [10]. A steady-state heat model represents a prototypical behavior for the heat conduction and diffusion processes involved in a thermal-hydraulic model, which is a part of nuclear reactor models. We employ this simple heat model to illustrate verification
Effectiveness of a regional model calibrated to different parts of a flow regime in regionalisation
NASA Astrophysics Data System (ADS)
Kim, H. S.
2015-07-01
The objective of this study was to reduce the parameter uncertainty which affects the identification of the relationship between catchment characteristics and catchment response dynamics in ungauged catchments. A water balance model calibrated to represent rainfall-runoff characteristics over long time scales had a potential limitation in its capacity to accurately predict the hydrological effects of non-stationary catchment response dynamics under different climate conditions (distinct wet and dry periods). The accuracy and precision of hydrological modelling predictions were assessed to yield a better understanding of the potential improvement of the model's predictability. In the assessment of the model structure's suitability to represent the non-stationary catchment response characteristics, there was a flow-dependent bias in the runoff simulations. In particular, over-prediction of streamflow was dominant during the dry period. The poor model performance during the dry period was associated with the largely different impulse response estimates for the entire period and the dry period. A refined calibration approach was established based on the assessment of model deficiencies. The rainfall-runoff models were separately calibrated to different parts of the flow regime, and the calibrated models for the separated time series were used to establish regional models of relevant parts of the flow regime (i.e. wet and dry periods). The effectiveness of the parameter values from the refined approach in regionalisation was evaluated by investigating the accuracy of predictions of the regional models. The predictability was demonstrated using only the dry period to highlight the improvement in model performance easily veiled by the performance of the model for the whole period. The regional models from the refined calibration approach clearly enhanced the hydrological behaviour by improving the identification of the relationships between
Calibrated Blade-Element/Momentum Theory Aerodynamic Model of the MARIN Stock Wind Turbine: Preprint
Goupee, A.; Kimball, R.; de Ridder, E. J.; Helder, J.; Robertson, A.; Jonkman, J.
2015-04-02
In this paper, a calibrated blade-element/momentum theory aerodynamic model of the MARIN stock wind turbine is developed and documented. The model is created using open-source software and calibrated to closely emulate experimental data obtained by the DeepCwind Consortium using a genetic algorithm optimization routine. The provided model will be useful for those interested in validating floating wind turbine numerical simulators that rely on experiments utilizing the MARIN stock wind turbine—for example, the International Energy Agency Wind Task 30's Offshore Code Comparison Collaboration Continued, with Correlation project.
NASA Astrophysics Data System (ADS)
Rientjes, T. H. M.; Muthuwatta, L. P.; Bos, M. G.; Booij, M. J.; Bhatti, H. A.
2013-11-01
A procedure is tested to complete energy-balance-based daily ETa series with MODIS data. The HBV model is calibrated on two water balance terms: ETa and stream flow (Q). HBV calibration on Q alone shows poor ETa results for inter-rainfall and recession periods. Multi-variable (MV) calibration showed the best HBV performance compared with single-variable calibration. Large volume differences in Q and ETa do not essentially affect MV calibration.
Hill, M.C.; D'Agnese, F. A.; Faunt, C.C.
2000-01-01
Fourteen guidelines are described which are intended to produce calibrated groundwater models likely to represent the associated real systems more accurately than typically used methods. The 14 guidelines are discussed in the context of the calibration of a regional groundwater flow model of the Death Valley region in the southwestern United States. This groundwater flow system contains two sites of national significance from which the subsurface transport of contaminants could be or is of concern: Yucca Mountain, which is the potential site of the United States high-level nuclear-waste disposal; and the Nevada Test Site, which contains a number of underground nuclear-testing locations. This application of the guidelines demonstrates how they may be used for model calibration and evaluation, and also to direct further model development and data collection.
Sun, Kaiyu; Yan, Da; Hong, Tianzhen; Guo, Siyue
2014-02-28
Overtime is a common phenomenon around the world. Overtime drives internal heat gains from occupants, lighting and plug-loads, as well as HVAC operation, during overtime periods. Because overtime leads to longer occupancy hours and extended operation of building services systems beyond normal working hours, it impacts total building energy use. Current literature lacks methods to model overtime occupancy because overtime is stochastic in nature and varies by individual occupant and over time. To address this gap in the literature, this study develops a new stochastic model based on statistical analysis of measured overtime occupancy data from an office building. A binomial distribution is used to represent the total number of occupants working overtime, while an exponential distribution is used to represent the duration of overtime periods. The overtime model is used to generate overtime occupancy schedules as an input to the energy model of a second office building. The measured and simulated cooling energy use during the overtime period are compared in order to validate the overtime model. A hybrid approach to energy model calibration is proposed and tested, which combines ASHRAE Guideline 14 for the calibration of the energy model during normal working hours and a proposed KS test for the calibration during overtime. The developed stochastic overtime model and the hybrid calibration approach can be used in building energy simulations to improve the accuracy of results and better understand the characteristics of overtime in office buildings.
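A minimal sketch of the two-distribution occupancy idea described above: a binomial draw for how many occupants work overtime on a given day, and an exponential draw for how long each stays. The parameter values are assumptions for illustration; the paper's fitted office-building estimates are not reproduced here.

```python
import random

def overtime_schedule(n_occupants, p_overtime, mean_hours, rng):
    """Draw who stays late (binomial via Bernoulli trials) and for how long
    (exponential); returns a list of overtime durations in hours."""
    n_staying = sum(1 for _ in range(n_occupants) if rng.random() < p_overtime)
    return [rng.expovariate(1.0 / mean_hours) for _ in range(n_staying)]

rng = random.Random(42)
# Assumed parameters: 100 occupants, 20% overtime probability, 1.5 h mean duration
durations = overtime_schedule(100, 0.20, 1.5, rng)
```

Repeating the draw for each workday yields a stochastic overtime schedule that can feed occupancy, lighting, plug-load and HVAC inputs of an energy model, in the spirit of the paper's approach.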
Meininger, Daniel J; Chee-Garza, Max; Arman, Hadi D; Tonzetich, Zachary J
2016-03-01
Gallium(III) tetraphenylporphyrinates (TPP) containing anionic sulfur ligands have been prepared and characterized in the solid state and solution. The complexes serve as structural models for iron(III) heme sites containing sulfur coordination that otherwise prove challenging to synthesize due to the propensity for reduction to iron(II). The compounds prepared include the first well-characterized example of a trivalent metalloporphyrinate containing a terminal hydrosulfide ligand, [Ga(SH)(TPP)], as well as [Ga(SEt)(TPP)], [Ga(SPh)(TPP)], and [Ga(SSi(i)Pr3)(TPP)]. The stability of these compounds toward reduction has permitted an investigation of their solid-state structures and electrochemistry. The structural features and reaction chemistry of the complexes in relation to their iron(III) analogs are discussed.
Bayesian calibration of the Unified budburst model in six temperate tree species
NASA Astrophysics Data System (ADS)
Fu, Yongshuo H.; Campioli, Matteo; Demarée, Gaston; Deckmyn, Alex; Hamdi, Rafiq; Janssens, Ivan A.; Deckmyn, Gaby
2012-01-01
Numerous phenology models developed to predict the budburst date of trees have been merged into one Unified model (Chuine, 2000, J. Theor. Biol. 207, 337-347). In this study, we tested a simplified version of the Unified model (Unichill model) on six woody species. Budburst and temperature data were available for five sites across Belgium from 1957 to 1995. We calibrated the Unichill model using a Bayesian calibration procedure, which reduced the uncertainty of the parameter coefficients and quantified the prediction uncertainty. The model performance differed among species. For two species (chestnut and black locust), the model showed good performance when tested against independent data not used for calibration. For the four other species (beech, oak, birch, ash), the model performed poorly. Model performance improved substantially for most species when using site-specific parameter coefficients instead of across-site parameter coefficients. This suggested that budburst is influenced by local environment and/or genetic differences among populations. Chestnut, black locust and birch were found to be temperature-driven species, and we therefore analyzed the sensitivity of budburst date to forcing temperature in those three species. Model results showed that budburst advanced with increasing temperature by 1-3 days per °C, which agreed with the observed trends. In summary, our results suggest that the Unichill model can be successfully applied to chestnut and black locust (with both across-site and site-specific calibration) and to birch (with site-specific calibration). For other species, temperature is not the only determinant of budburst and additional influencing factors will need to be included in the model.
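The Bayesian calibration step above can be illustrated with a minimal single-parameter Metropolis sampler. This is a toy stand-in: the real study calibrates the multi-parameter Unichill model, whereas here the "phenology model" simply predicts a constant budburst day theta, with a flat prior and a Gaussian observation error of a few days:

```python
import math, random

def log_posterior(theta, obs, sigma=3.0):
    # Flat prior on [0, 200] days of year; Gaussian likelihood with
    # sigma days of observation error (both assumed, not from the paper).
    if not 0.0 <= theta <= 200.0:
        return -math.inf
    return -sum((theta - o) ** 2 for o in obs) / (2 * sigma ** 2)

def metropolis(obs, n_iter=5000, step=2.0, seed=1):
    rng = random.Random(seed)
    theta = 100.0
    lp = log_posterior(theta, obs)
    samples = []
    for _ in range(n_iter):
        prop = theta + rng.gauss(0, step)
        lp_prop = log_posterior(prop, obs)
        # Metropolis accept/reject on the log scale.
        if math.log(rng.random()) < lp_prop - lp:
            theta, lp = prop, lp_prop
        samples.append(theta)
    return samples

obs = [110, 112, 115, 111, 113]   # observed budburst days of year (synthetic)
post = metropolis(obs)[1000:]     # drop burn-in
print(sum(post) / len(post))      # posterior mean near the data mean (~112.2)
```

The spread of the retained samples is what provides the parameter and prediction uncertainty that the abstract highlights as a benefit of the Bayesian procedure.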
NASA Astrophysics Data System (ADS)
Wang, Ling; van Meerveld, Ilja; Seibert, Jan
2016-04-01
Streamflow isotope samples taken during rainfall-runoff events are very useful for multi-criteria model calibration because they can help decrease parameter uncertainty and improve internal model consistency. However, the number of samples that can be collected and analysed is often restricted by practical and financial constraints. It is, therefore, important to choose an appropriate sampling strategy and to obtain samples that have the highest information content for model calibration. We used the Birkenes hydrochemical model and synthetic rainfall, streamflow and isotope data to explore which samples are most informative for model calibration. Starting with error-free observations, we investigated how many samples are needed to obtain a certain model fit. Based on different parameter sets, representing different catchments, and different rainfall events, we also determined which sampling times provide the most informative data for model calibration. Our results show that simulation performance for models calibrated with the isotopic data from two intelligently selected samples was comparable to simulations based on isotopic data for all 100 time steps. The models calibrated with the intelligently selected samples also performed better than the model calibrations with two benchmark sampling strategies (random selection and selection based on hydrologic information). Surprisingly, samples on the rising limb and at the peak were less informative than expected and, generally, samples taken at the end of the event were most informative. The timing of the most informative samples depends on the proportion of different flow components (baseflow, slow response flow, fast response flow and overflow). For events dominated by baseflow and slow response flow, samples taken at the end of the event after the fast response flow has ended were most informative; when the fast response flow was dominant, samples taken near the peak were most informative. However when overflow
NASA Technical Reports Server (NTRS)
Jung, Hahn Chul; Jasinski, Michael; Kim, Jin-Woo; Shum, C. K.; Bates, Paul; Lee, Hgongki; Neal, Jeffrey; Alsdorf, Doug
2012-01-01
Two-dimensional (2D) satellite imagery has been increasingly employed to improve prediction of floodplain inundation models. However, most focus has been on validation of inundation extent, with little attention on the 2D spatial variations of water elevation and slope. The availability of high resolution Interferometric Synthetic Aperture Radar (InSAR) imagery offers an unprecedented opportunity for quantitative validation of surface water heights and slopes derived from 2D hydrodynamic models. In this study, the LISFLOOD-ACC hydrodynamic model is applied to the central Atchafalaya River Basin, Louisiana, during high flows typical of spring floods in the Mississippi Delta region, for the purpose of demonstrating the utility of InSAR in coupled 1D/2D model calibration. Two calibration schemes focusing on Manning's roughness are compared. First, the model is calibrated in terms of water elevations at a single in situ gage during a 62 day simulation period from 1 April 2008 to 1 June 2008. Second, the model is calibrated in terms of water elevation changes calculated from ALOS PALSAR interferometry during the 46 days of the image acquisition interval from 16 April 2008 to 1 June 2008. The best-fit models show that the mean absolute errors are 3.8 cm for the single in situ gage calibration and 5.7 cm/46 days for the InSAR water level calibration. The optimum values of Manning's roughness coefficients are 0.024/0.10 for the channel/floodplain, respectively, using the single in situ gage, and 0.028/0.10 for the channel/floodplain using SAR. Based on the calibrated water elevation changes, daily storage changes within the approximately 230 sq km model area are also calculated to be of the order of 10^7 cubic m/day during high water of the modeled period. This study demonstrates the feasibility of SAR interferometry to support 2D hydrodynamic model calibration and as a tool for improved understanding of complex floodplain hydrodynamics.
Examining the Invariance of Rater and Project Calibrations Using a Multi-facet Rasch Model.
ERIC Educational Resources Information Center
O'Neill, Thomas R.; Lunz, Mary E.
To generalize test results beyond the particular test administration, an examinee's ability estimate must be independent of the particular items attempted, and the item difficulty calibrations must be independent of the particular sample of people attempting the items. This stability is a key concept of the Rasch model, a latent trait model of…
Model Calibration Efforts for the International Space Station's Solar Array Mast
NASA Technical Reports Server (NTRS)
Elliott, Kenny B.; Horta, Lucas G.; Templeton, Justin D.; Knight, Norman F., Jr.
2012-01-01
The International Space Station (ISS) relies on sixteen solar-voltaic blankets to provide electrical power to the station. Each pair of blankets is supported by a deployable boom called the Folding Articulated Square Truss Mast (FAST Mast). At certain ISS attitudes, the solar arrays can be positioned in such a way that shadowing of either one or three longerons causes an unexpected asymmetric thermal loading that if unchecked can exceed the operational stability limits of the mast. Work in this paper documents part of an independent NASA Engineering and Safety Center effort to assess the existing operational limits. Because of the complexity of the system, the problem is being worked using a building-block progression from components (longerons), to units (single or multiple bays), to assembly (full mast). The paper presents results from efforts to calibrate the longeron components. The work includes experimental testing of two types of longerons (straight and tapered), development of Finite Element (FE) models, development of parameter uncertainty models, and the establishment of a calibration and validation process to demonstrate adequacy of the models. Models in the context of this paper refer to both FE model and probabilistic parameter models. Results from model calibration of the straight longerons show that the model is capable of predicting the mean load, axial strain, and bending strain. For validation, parameter values obtained from calibration of straight longerons are used to validate experimental results for the tapered longerons.
Technology Transfer Automated Retrieval System (TEKTRAN)
Process-based watershed models typically require a large number of parameters to describe complex hydrologic and biogeochemical processes in highly variable environments. Most of such parameters are not directly measured in field and require calibration, in most cases through matching modeled fluxes...
NASA Astrophysics Data System (ADS)
Corbari, Chiara; Manchini, Marco; Li, Jiren; Su, Zhongbo
2013-12-01
Calibration and validation of distributed models at basin scale generally refer to external variables, which are integrated catchment model outputs, and usually depend on the comparison between simulated and observed discharges at the available river cross sections, which are usually very few. However, distributed models allow an internal validation due to their intrinsic structure, so that internal processes and variables of the model can be controlled in each cell of the domain. In particular, this work investigates the potential to control evapotranspiration and its spatial and temporal variability through the detection of land surface temperature from satellite remote sensing. This study proposes a methodology for the calibration of distributed hydrological models at basin scale through constraints on an internal model variable using remote sensing data of land surface temperature. The model (FEST-EWB) algorithm solves the system of energy and mass balances in terms of the equilibrium pixel temperature, or representative equilibrium temperature, that governs the fluxes of energy and mass over the basin domain. This equilibrium surface temperature, which is a critical model state variable, is compared to land surface temperature from MODIS and AATSR. Soil hydraulic parameters and vegetation variables will be calibrated according to the comparison between observed and simulated land surface temperature, minimizing the errors. A similar procedure will also be applied performing the traditional calibration using only discharge measurements. These analyses are performed for the Upper Yangtze River basin (China) in the framework of the DRAGON-2 and DRAGON-3 Programmes funded by NRSCC and ESA.
Matsui, Y; Itoshiro, S; Buma, M; Matsushita, T; Hosogoe, K; Yuasa, A; Shinoda, S; Inoue, T
2002-01-01
Hydrological diffuse pollution models require calibration before they can be used to make accurate long-term predictions for a range of hydrological and meteorological conditions. As such, the applicability of the models to the dispersion of new pesticides is limited due to the lack of calibration data. In this study, the performance of a GIS-based basin-scale runoff model for predicting the concentrations of paddy-farming pesticides in river water was examined when calibrated using hydrological data alone, without optimization based on empirical pesticide concentration data. The prediction accuracy on a daily or hourly scale was somewhat unsatisfactory due to inevitable compromises concerning rice farming schedules. However, the month-averaged pesticide concentrations were satisfactorily accurate; more than 50% of predicted values were between half and twice the observed values, considering the deficiencies of the input data, particularly for pesticide usage, which may include up to 50% error.
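The "between half and twice the observed values" criterion used above is the common factor-of-two (FAC2) accuracy metric, which can be computed directly:

```python
def fac2(pred, obs):
    """Fraction of predictions within a factor of two of the observations."""
    ok = sum(1 for p, o in zip(pred, obs) if 0.5 * o <= p <= 2.0 * o)
    return ok / len(pred)

# Illustrative month-averaged pesticide concentrations (not the study's data).
obs  = [1.0, 2.0, 4.0, 0.5]
pred = [0.6, 3.9, 1.0, 0.55]
print(fac2(pred, obs))  # 0.75: three of four predictions within a factor of two
```

By this measure, the study's "more than 50%" result means fac2 exceeded 0.5 for the month-averaged concentrations.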
Atomic model of the type III secretion system needle.
Loquet, Antoine; Sgourakis, Nikolaos G; Gupta, Rashmi; Giller, Karin; Riedel, Dietmar; Goosmann, Christian; Griesinger, Christian; Kolbe, Michael; Baker, David; Becker, Stefan; Lange, Adam
2012-05-20
Pathogenic bacteria using a type III secretion system (T3SS) to manipulate host cells cause many different infections including Shigella dysentery, typhoid fever, enterohaemorrhagic colitis and bubonic plague. An essential part of the T3SS is a hollow needle-like protein filament through which effector proteins are injected into eukaryotic host cells. Currently, the three-dimensional structure of the needle is unknown because it is not amenable to X-ray crystallography and solution NMR, as a result of its inherent non-crystallinity and insolubility. Cryo-electron microscopy combined with crystal or solution NMR subunit structures has recently provided a powerful hybrid approach for studying supramolecular assemblies, resulting in low-resolution and medium-resolution models. However, such approaches cannot deliver atomic details, especially of the crucial subunit-subunit interfaces, because of the limited cryo-electron microscopic resolution obtained in these studies. Here we report an alternative approach combining recombinant wild-type needle production, solid-state NMR, electron microscopy and Rosetta modelling to reveal the supramolecular interfaces and ultimately the complete atomic structure of the Salmonella typhimurium T3SS needle. We show that the 80-residue subunits form a right-handed helical assembly with roughly 11 subunits per two turns, similar to that of the flagellar filament of S. typhimurium. In contrast to established models of the needle in which the amino terminus of the protein subunit was assumed to be α-helical and positioned inside the needle, our model reveals an extended amino-terminal domain that is positioned on the surface of the needle, while the highly conserved carboxy terminus points towards the lumen.
NASA Astrophysics Data System (ADS)
Camici, Stefania; Tito Aronica, Giuseppe; Tarpanelli, Angelica; Moramarco, Tommaso
2013-04-01
Hydraulic models are an essential tool in many fields, e.g. civil engineering, flood hazard and risk assessments, evaluation of flood control measures, etc. Nowadays there are many models available of different complexity regarding the mathematical foundation and spatial dimensions, and most of them are comparatively easy to operate due to sophisticated tools for model setup and control. However, the calibration of these models is still underdeveloped in contrast to other models like e.g. hydrological models or models used in ecosystem analysis. This has basically two reasons. First, the lack of relevant data necessary for the model calibration. Indeed, flood events are very rarely monitored due to the disturbances inflicted by them and the lack of appropriate measuring equipment. The second reason is related to the choice of suitable performance measures for calibrating and evaluating model predictions in a credible and consistent way (and for reducing the uncertainty). This study takes a well documented flood event of November 2012 in the Paglia river basin (Central Italy). For this area, a detailed description of the main channel morphology, obtained from accurate topographical surveys and from a DEM with a spatial resolution of 2 m, was available for the post-event analysis, together with maximum water levels measured at several points within the floodplain areas. On the basis of this information, a two-dimensional inertial finite-element hydraulic model was set up and calibrated using different performance measures. Manning roughness coefficients obtained from the different calibrations were then used for the delineation of inundation maps, including also uncertainty. The water levels of three hydrometric stations and flooded area extents, derived by video recording the day after the flood event, have been used for the validation of the model.
Study of the performance of stereoscopic panomorph systems calibrated with traditional pinhole model
NASA Astrophysics Data System (ADS)
Poulin-Girard, Anne-Sophie; Thibault, Simon; Laurendeau, Denis
2016-06-01
With their large field of view, anamorphosis, and areas of enhanced magnification, panomorph lenses are an interesting choice for navigation systems for mobile robotics in which knowledge of the surroundings is mandatory. However, panomorph lenses' special characteristics can be challenging during the calibration process. This study focuses on the calibration of two panomorph stereoscopic systems with a model and technique developed for narrow-angle lenses, the "Camera Calibration Toolbox for MATLAB." In order to assess the performance of the systems, the mean reprojection error (MRE) related to the calibration and the reconstruction error of control points of an object of interest at various locations in the field of view are used. The calibrations were successful and exhibit MREs of less than one pixel in all cases. However, some poorly reconstructed control points illustrate that an acceptable MRE guarantees neither the quality of 3-D reconstruction nor its uniformity in the field of view. In addition, the nonuniformity in the 3-D reconstruction quality indicates that panomorph lenses require a more accurate estimation of the principal point (center of distortion) coordinates to improve the calibration and therefore the 3-D reconstruction.
NASA Technical Reports Server (NTRS)
Jung, Hahn Chul; Jasinski, Michael; Kim, Jin-Woo; Shum, C. K.; Bates, Paul; Neal, Jeffrey; Lee, Hyongki; Alsdorf, Doug
2011-01-01
This study focuses on the feasibility of using SAR interferometry to support 2D hydrodynamic model calibration and provide water storage change in the floodplain. Two-dimensional (2D) flood inundation modeling has been widely studied using storage cell approaches with the availability of high resolution, remotely sensed floodplain topography. The development of coupled 1D/2D flood modeling has shown improved calculation of 2D floodplain inundation as well as channel water elevation. Most floodplain model results have been validated using remote sensing methods for inundation extent. However, few studies show the quantitative validation of spatial variations in floodplain water elevations in 2D modeling, since most of the gauges are located along main river channels and traditional single-track satellite altimetry over the floodplain is limited. Synthetic Aperture Radar (SAR) interferometry has recently been proven useful for measuring centimeter-scale water elevation changes over the floodplain. In the current study, we apply the LISFLOOD hydrodynamic model to the central Atchafalaya River Basin, Louisiana, during a 62 day period from 1 April to 1 June 2008 using two different calibration schemes for Manning's n. First, the model is calibrated in terms of water elevations from a single in situ gauge, which represents a more traditional approach. Due to the gauge location in the channel, the calibration shows more sensitivity to channel roughness relative to floodplain roughness. Second, the model is calibrated in terms of water elevation changes calculated from ALOS PALSAR interferometry during the 46 days of the image acquisition interval from 16 April 2008 to 1 June 2008. Since SAR interferometry receives strong scattering in the floodplain due to the double-bounce effect, as compared to the specular scattering of open water, the calibration shows more dependence on floodplain roughness. An iterative approach is used to determine the best-fit Manning's n for the two
Multi-metric calibration of hydrological model to capture overall flow regimes
NASA Astrophysics Data System (ADS)
Zhang, Yongyong; Shao, Quanxi; Zhang, Shifeng; Zhai, Xiaoyan; She, Dunxian
2016-08-01
Flow regimes (e.g., magnitude, frequency, variation, duration, timing and rate of change) play a critical role in water supply and flood control, environmental processes, as well as biodiversity and life history patterns in the aquatic ecosystem. The traditional flow magnitude-oriented calibration of hydrological models is usually inadequate to capture all the characteristics of observed flow regimes. In this study, we simulated multiple flow regime metrics simultaneously by coupling a distributed hydrological model with an equally weighted multi-objective optimization algorithm. Two headwater watersheds in the arid Hexi Corridor were selected for the case study. Sixteen metrics were selected as optimization objectives, which could represent the major characteristics of flow regimes. Model performance was compared with that of the single objective calibration. Results showed that most metrics were better simulated by the multi-objective approach than by the single objective calibration, especially the low and high flow magnitudes, frequency and variation, duration, maximum flow timing and rate. However, the model performance for middle flow magnitude was not significantly improved because this metric was usually well captured by single objective calibration. The timing of minimum flow was poorly predicted by both the multi-metric and single calibrations due to the uncertainties in model structure and input data. The sensitive parameter values of the hydrological model changed remarkably, and the hydrological processes simulated by the multi-metric calibration became more reliable because more flow characteristics were considered. The study is expected to provide more detailed flow information by hydrological simulation for integrated water resources management, and to improve the simulation performance for overall flow regimes.
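The equally weighted multi-objective scheme above can be sketched as a single scalar objective: each flow-regime metric contributes a normalized error term, and the terms are averaged with equal weights. The four metrics below are illustrative stand-ins for the study's sixteen:

```python
def regime_metrics(flows):
    """A few illustrative flow-regime metrics from a daily flow series."""
    s = sorted(flows)
    n = len(s)
    return {
        "low_flow":  s[int(0.1 * n)],   # low-flow magnitude (~Q90)
        "median":    s[n // 2],         # middle flow magnitude
        "high_flow": s[int(0.9 * n)],   # high-flow magnitude (~Q10)
        "mean":      sum(s) / n,
    }

def multi_metric_objective(sim, obs):
    ms, mo = regime_metrics(sim), regime_metrics(obs)
    # Equal weights: mean of relative absolute errors over all metrics.
    errs = [abs(ms[k] - mo[k]) / (abs(mo[k]) + 1e-9) for k in mo]
    return sum(errs) / len(errs)

obs = [1, 2, 2, 3, 5, 8, 13, 21, 34, 55]
print(multi_metric_objective(obs, obs))  # 0.0 for a perfect simulation
```

A calibration algorithm then minimizes this objective over the hydrological model's parameters, rather than matching flow magnitude alone.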
NASA Astrophysics Data System (ADS)
Dung, N. V.; Merz, B.; Bárdossy, A.; Thang, T. D.; Apel, H.
2011-04-01
Automatic and multi-objective calibration of hydrodynamic models is - compared to other disciplines like e.g. hydrology - still underdeveloped. This has mainly two reasons: the lack of appropriate data and the large computational demand in terms of CPU-time. Both aspects are aggravated in large-scale applications. However, there are recent developments that improve the situation on both the data and computing side. Remote sensing, especially radar-based techniques, has proved to provide highly valuable information on flood extents and, where high-precision DEMs are present, also on spatially distributed inundation depths. On the computing side, the use of parallelization techniques has brought significant performance gains. In the presented study we build on these developments by calibrating a large-scale 1-dimensional hydrodynamic model of the whole Mekong Delta downstream of Kratie in Cambodia: we combined in-situ data from a network of river gauging stations, i.e. data with high temporal but low spatial resolution, with a series of inundation maps derived from ENVISAT Advanced Synthetic Aperture Radar (ASAR) satellite images, i.e. data with low temporal but high spatial resolution, in a multi-objective automatic calibration process. It is shown that an automatic, multi-objective calibration of hydrodynamic models is possible, even for a model as large and complex as that of the Mekong Delta. Furthermore, the calibration process revealed deficiencies in the model structure, i.e. the representation of the dike system in Vietnam, which would have been difficult to detect by a standard manual calibration procedure.
NASA Astrophysics Data System (ADS)
Dung, N. V.; Merz, B.; Bárdossy, A.; Thang, T. D.; Apel, H.
2010-12-01
Calibration of hydrodynamic models is - compared to other disciplines like e.g. hydrology - still underdeveloped. This has mainly two reasons: the lack of appropriate data and the large computational demand in terms of CPU-time. Both aspects are aggravated in large-scale applications. However, there are recent developments that improve the situation on both the data and computing side. Remote sensing, especially radar-based techniques, has proved to provide highly valuable information on flood extents and, where high-precision DEMs are present, also on spatially distributed inundation depths. On the computing side, the use of parallelization techniques has brought significant performance gains. In the presented study we build on these developments by calibrating a large-scale 1-D hydrodynamic model of the whole Mekong Delta downstream of Kratie in Cambodia: we combined in-situ data from a network of river gauging stations, i.e. data with high temporal but low spatial resolution, with a series of inundation maps derived from ENVISAT Advanced Synthetic Aperture Radar (ASAR) satellite images, i.e. data with low temporal but high spatial resolution, in a multi-objective automatic calibration process. It is shown that an automatic, multi-objective calibration of hydrodynamic models is possible, even for a model as large and complex as that of the Mekong Delta. Furthermore, the calibration process revealed deficiencies in the model structure, i.e. the representation of the dike system in Vietnam, which would have been difficult to detect by a standard manual calibration procedure.
AUTOMATIC CALIBRATION OF A STOCHASTIC-LAGRANGIAN TRANSPORT MODEL (SLAM)
Numerical models are a useful tool in evaluating and designing NAPL remediation systems. Traditional constitutive finite difference and finite element models are complex and expensive to apply. For this reason, this paper presents the application of a simplified stochastic-Lagran...
THE EFFECT OF METALLICITY-DEPENDENT T-τ RELATIONS ON CALIBRATED STELLAR MODELS
Tanner, Joel D.; Basu, Sarbani; Demarque, Pierre
2014-04-10
Mixing length theory is the predominant treatment of convection in stellar models today. Usually described by a single free parameter, α, the common practice is to calibrate it using the properties of the Sun, and apply it to all other stellar models as well. Asteroseismic data from Kepler and CoRoT provide precise properties of other stars which can be used to determine α as well, and a recent study of stars in the Kepler field of view found α to vary with metallicity. Interpreting α obtained from calibrated stellar models, however, is complicated by the fact that the value for α depends on the surface boundary condition of the stellar model, or T-τ relation. Calibrated models that use typical T-τ relations, which are static and insensitive to chemical composition, do not include the complete effect of metallicity on α. We use three-dimensional radiation-hydrodynamic simulations to extract metallicity-dependent T-τ relations and use them in calibrated stellar models. We find the previously reported α-metallicity trend to be robust, and not significantly affected by the surface boundary condition of the stellar models.
Estimating the Health Impact of Climate Change with Calibrated Climate Model Output.
Zhou, Jingwen; Chang, Howard H; Fuentes, Montserrat
2012-09-01
Studies on the health impacts of climate change routinely use climate model output as future exposure projection. Uncertainty quantification, usually in the form of sensitivity analysis, has focused predominantly on the variability arising from different emission scenarios or multi-model ensembles. This paper describes a Bayesian spatial quantile regression approach to calibrate climate model output for examining the risks of future temperature on adverse health outcomes. Specifically, we first estimate the spatial quantile process for climate model output using nonlinear monotonic regression during a historical period. The quantile process is then calibrated using the quantile functions estimated from the observed monitoring data. Our model also down-scales the gridded climate model output to the point level for projecting future exposure over a specific geographical region. The quantile regression approach is motivated by the need to better characterize the tails of the future temperature distribution, where the greatest health impacts are likely to occur. We applied the methodology to calibrate temperature projections from a regional climate model for the period 2041 to 2050. Accounting for calibration uncertainty, we calculated the number of excess deaths attributed to future temperature for three cities in the US state of Alabama.
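The core idea of calibrating model output against observed quantile functions can be illustrated with a minimal empirical quantile-mapping sketch. The paper's method is a Bayesian spatial quantile regression; this nonparametric version, with made-up temperatures, only shows the mapping step:

```python
import bisect

def quantile_map(value, model_hist, obs_hist):
    """Map one model value to the observed value at the same quantile."""
    ms, os_ = sorted(model_hist), sorted(obs_hist)
    # Empirical quantile of `value` within the historical model distribution.
    rank = bisect.bisect_left(ms, value)
    q = min(rank / len(ms), 1.0 - 1e-9)
    # Look up the observed distribution at that quantile.
    return os_[int(q * len(os_))]

model_hist = [20, 22, 24, 26, 28]   # model runs ~2 degrees too warm
obs_hist   = [18, 20, 22, 24, 26]   # station observations, same period
print(quantile_map(26, model_hist, obs_hist))  # 24: the warm bias is removed
```

The same fitted mapping is then applied to future projections, which is what lets the tails of the projected temperature distribution be corrected rather than just the mean.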
Comparison of a priori calibration models for respiratory inductance plethysmography during running.
Leutheuser, Heike; Heyde, Christian; Gollhofer, Albert; Eskofier, Bjoern M
2014-01-01
Respiratory inductive plethysmography (RIP) has been introduced as an alternative for measuring ventilation by means of body surface displacement (diameter changes in rib cage and abdomen). Using a posteriori calibration, it has been shown that RIP may provide accurate measurements for ventilatory tidal volume under exercise conditions. Methods for a priori calibration would facilitate the application of RIP. Currently, to the best knowledge of the authors, none of the existing ambulant procedures for RIP calibration can be used a priori for valid subsequent measurements of ventilatory volume under exercise conditions. The purpose of this study is to develop and validate a priori calibration algorithms for ambulant application of RIP data recorded in running exercise. We calculated Volume Motion Coefficients (VMCs) using seven different models on resting data and compared the root mean squared error (RMSE) of each model applied on running data. Least squares approximation (LSQ) without offset of a two-degree-of-freedom model achieved the lowest RMSE value. In this work, we showed that a priori calibration of RIP exercise data is possible using VMCs calculated from 5 min resting phase where RIP and flowmeter measurements were performed simultaneously. The results demonstrate that RIP has the potential for usage in ambulant applications.
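The best-performing variant above, least squares without offset for a two-degree-of-freedom model, amounts to fitting V = a*ribcage + b*abdomen with no intercept. A sketch on synthetic, noise-free data (the coefficients and units are illustrative, not the study's):

```python
import numpy as np

rng = np.random.default_rng(0)
ribcage = rng.uniform(0.5, 2.0, 100)   # band diameter changes (arbitrary units)
abdomen = rng.uniform(0.5, 2.0, 100)
volume = 0.4 * ribcage + 0.6 * abdomen  # "flowmeter" tidal volume (L), synthetic

# Design matrix with no intercept column: calibrate the two Volume Motion
# Coefficients a and b only (the LSQ-without-offset model).
X = np.column_stack([ribcage, abdomen])
(a, b), *_ = np.linalg.lstsq(X, volume, rcond=None)
print(round(a, 3), round(b, 3))  # recovers 0.4, 0.6 on noise-free data
```

In the study's a priori setting, the coefficients are fitted once on a 5 min resting phase with simultaneous flowmeter data, then applied unchanged to the running data.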
Self-calibrating models for dynamic monitoring and diagnosis
NASA Technical Reports Server (NTRS)
Kuipers, Benjamin
1994-01-01
The present goal in qualitative reasoning is to develop methods for automatically building qualitative and semiquantitative models of dynamic systems and to use them for monitoring and fault diagnosis. The qualitative approach to modeling provides a guarantee of coverage while our semiquantitative methods support convergence toward a numerical model as observations are accumulated. We have developed and applied methods for automatic creation of qualitative models, developed two methods for obtaining tractable results on problems that were previously intractable for qualitative simulation, and developed more powerful methods for learning semiquantitative models from observations and deriving semiquantitative predictions from them. With these advances, qualitative reasoning comes significantly closer to realizing its aims as a practical engineering method.
A computer program for calculating relative-transmissivity input arrays to aid model calibration
Weiss, Emanuel
1982-01-01
A program is documented that calculates a transmissivity distribution for input to a digital ground-water flow model. Factors that are taken into account in the calculation are: aquifer thickness, ground-water viscosity and its dependence on temperature and dissolved solids, and permeability and its dependence on overburden pressure. Other factors affecting ground-water flow are indicated. With small changes in the program code, leakance also could be calculated. The purpose of these calculations is to provide a physical basis for efficient calibration, and to extend rational transmissivity trends into areas where model calibration is insensitive to transmissivity values.
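As a sketch of the physics such a program encodes (not the documented program itself), transmissivity can be taken as hydraulic conductivity times aquifer thickness, with conductivity K = k·ρ·g/μ, so that temperature and dissolved solids enter through density ρ and viscosity μ:

```python
def transmissivity(k, b, rho, g=9.81, mu=1.0e-3):
    """Transmissivity T = K * b (m^2/s), with hydraulic conductivity
    K = k * rho * g / mu, where
      k   intrinsic permeability (m^2),
      b   aquifer thickness (m),
      rho water density (kg/m^3),
      mu  dynamic viscosity (Pa*s, temperature-dependent)."""
    return k * rho * g / mu * b

# sandy aquifer, 10 m thick, fresh water at ~20 C
print(transmissivity(1e-12, 10.0, 1000.0))  # about 9.8e-5 m^2/s
```

Dividing each cell's T by a reference cell's T would yield the kind of relative-transmissivity array the abstract refers to.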
NASA Astrophysics Data System (ADS)
Chijimatsu, Masakazu; Börgesson, Lenart; Fujita, Tomoo; Jussila, Petri; Nguyen, Son; Rutqvist, Jonny; Jing, Lanru
2009-05-01
In the international DECOVALEX-THMC project, five research teams study the influence of thermal-hydro-mechanical (THM) coupling on the safety of a hypothetical geological repository for spent fuel. In order to improve the analyses, the teams calibrated their bentonite models with results from laboratory experiments, including swelling pressure tests, water uptake tests, thermal gradient tests, and the CEA mock-up THM experiment. This paper describes the mathematical models used by the teams, and compares the results of their calibrations with the experimental data.
Multi-objective calibration of a reservoir model: aggregation and non-dominated sorting approaches
NASA Astrophysics Data System (ADS)
Huang, Y.
2012-12-01
Numerical reservoir models can be helpful tools for water resource management. These models are generally calibrated against historical measurement data made in reservoirs. In this study, two methods are proposed for the multi-objective calibration of such models: aggregation and non-dominated sorting methods. Both methods use a hybrid genetic algorithm as an optimization engine and differ in fitness assignment. In the aggregation method, a weighted sum of scaled simulation errors is designed as an overall objective function to measure the fitness of solutions (i.e. parameter values). The contribution of this study to the aggregation method is the correlation analysis and its implication for the choice of weight factors. In the non-dominated sorting method, a novel method based on non-dominated sorting and the method of minimal distance is used to calculate the dummy fitness of solutions. The proposed methods are illustrated using a water quality model that was set up to simulate the water quality of Pepacton Reservoir, which lies north of New York City and is part of the city's water supply. The study also compares the aggregation and the non-dominated sorting methods. The purpose of this comparison is not to evaluate the pros and cons of the two methods but to determine whether the parameter values, objective function values (simulation errors) and simulated results obtained differ significantly from each other. The final results (objective function values) from the two methods are a good compromise among all objective functions, and none of these results is the worst for any objective function. The calibrated model provides an overall good performance, and the simulated results with the calibrated parameter values match the observed data better than those with the uncalibrated parameters, which supports and justifies the use of multi-objective calibration. The results achieved in this study can be very useful for the calibration of water
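The aggregation method's fitness assignment, a weighted sum of scaled simulation errors, can be sketched as follows; the range-based scaling and equal weights are illustrative assumptions, not the study's exact choices:

```python
def aggregate_fitness(errors, scales, weights):
    """Weighted sum of scaled simulation errors (lower is better).
    Each raw error is divided by a scale (e.g. the range of the
    observations for that objective) so objectives are comparable
    before the weights are applied."""
    return sum(w * e / s for w, e, s in zip(weights, errors, scales))

# two objectives, equal weights: each error is half its scale
print(aggregate_fitness([1.0, 2.0], [2.0, 4.0], [0.5, 0.5]))  # -> 0.5
```

In a genetic algorithm, this scalar would rank candidate parameter sets; the correlation analysis in the abstract addresses how the `weights` should be chosen when objectives are not independent.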
Chijimatsu, M.; Borgesson, L.; Fujita, T.; Jussila, P.; Nguyen, S.; Rutqvist, J.; Jing, L.; Hernelind, J.
2009-02-01
In Task A of the international DECOVALEX-THMC project, five research teams study the influence of thermal-hydro-mechanical (THM) coupling on the safety of a hypothetical geological repository for spent fuel. In order to improve the analyses, the teams calibrated their bentonite models with results from laboratory experiments, including swelling pressure tests, water uptake tests, thermal gradient tests, and the CEA mock-up THM experiment. This paper describes the mathematical models used by the teams, and compares the results of their calibrations with the experimental data.
Toward diagnostic model calibration and evaluation: Approximate Bayesian computation
NASA Astrophysics Data System (ADS)
Vrugt, Jasper A.; Sadegh, Mojtaba
2013-07-01
The ever-increasing pace of growth in computational power, along with continued advances in measurement technologies and improvements in process understanding, has stimulated the development of increasingly complex hydrologic models that simulate soil moisture flow, groundwater recharge, surface runoff, root water uptake, and river discharge at different spatial and temporal scales. Reconciling these high-order system models with perpetually larger volumes of field data is becoming more and more difficult, particularly because classical likelihood-based fitting methods lack the power to detect and pinpoint deficiencies in the model structure. Gupta et al. (2008) have recently proposed steps (amongst others) toward the development of a more robust and powerful method of model evaluation. Their diagnostic approach uses signature behaviors and patterns observed in the input-output data to illuminate to what degree a representation of the real world has been adequately achieved and how the model should be improved for the purpose of learning and scientific discovery. In this paper, we introduce approximate Bayesian computation (ABC) as a vehicle for diagnostic model evaluation. This statistical methodology relaxes the need for an explicit likelihood function in favor of one or more summary statistics rooted in hydrologic theory that together have a clearer and more compelling diagnostic power than some average measure of the size of the error residuals. Two illustrative case studies are used to demonstrate that ABC is relatively easy to implement, and readily employs signature-based indices to analyze and pinpoint which part of the model is malfunctioning and in need of further improvement.
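A minimal rejection-sampling form of ABC can be sketched as follows; the toy simulator, the sample-mean summary statistic, and the tolerance are illustrative assumptions, not the hydrologic signatures used in the paper:

```python
import random

def abc_rejection(simulate, summary, obs_stat, prior_draw, tol, n, seed=1):
    """Rejection ABC: keep prior draws whose simulated summary
    statistic falls within `tol` of the observed one, with no
    explicit likelihood function."""
    rng = random.Random(seed)
    kept = []
    for _ in range(n):
        theta = prior_draw(rng)
        if abs(summary(simulate(theta, rng)) - obs_stat) <= tol:
            kept.append(theta)
    return kept

# toy problem: recover the mean of noisy data from its sample mean
mean = lambda xs: sum(xs) / len(xs)
sim = lambda theta, rng: [theta + rng.gauss(0.0, 0.1) for _ in range(50)]
obs = sim(3.0, random.Random(42))                    # "observed" data
post = abc_rejection(sim, mean, mean(obs),
                     lambda rng: rng.uniform(0.0, 10.0), 0.2, 2000)
# accepted draws cluster near the true parameter value of 3.0
```

Replacing the single summary with several signature indices, each with its own tolerance, gives the diagnostic flavor described in the abstract: a rejected draw tells you which signature it failed.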
Calibration of a flood inundation model using a SAR image: influence of acquisition time
NASA Astrophysics Data System (ADS)
Van Wesemael, Alexandra; Gobeyn, Sacha; Neal, Jeffrey; Lievens, Hans; Van Eerdenbrugh, Katrien; De Vleeschouwer, Niels; Schumann, Guy; Vernieuwe, Hilde; Di Baldassarre, Giuliano; De Baets, Bernard; Bates, Paul; Verhoest, Niko
2016-04-01
Flood risk management has always been in search of effective prediction approaches, and the calibration of flood inundation models is continuously being improved. In practice, this calibration process consists of finding the optimal roughness parameters, both channel and floodplain Manning coefficients, since these values considerably influence the flood extent in a catchment. In addition, Synthetic Aperture Radar (SAR) images have proven to be a very useful tool in calibrating the flood extent. These images can distinguish between wet (flooded) and dry (non-flooded) pixels through the intensity of backscattered radio waves. To date, however, a satellite overpass often occurs only once during a flood event. Therefore, this study is specifically concerned with the effect of the timing of the SAR data acquisition on calibration results. In order to model the flood extent, the raster-based inundation model LISFLOOD-FP is used together with a high-resolution synthetic aperture radar image (ERS-2 SAR) of a flood event of the river Dee, Wales, in December 2006. As only one satellite image of the considered case study is available, a synthetic framework is implemented in order to generate a time series of SAR observations. These synthetic observations are then used to calibrate the model at different time instants. In doing so, the sensitivity of the model output to the channel and floodplain Manning coefficients is studied through time. The results suggest a clear difference in the spatial variability with which water is held within the floodplain, and these differences appear to vary through time. Calibration with satellite flood observations acquired on the rising or receding limb would generally lead to more reliable results than calibration with near-peak-flow observations.
Essa, Mohamed; Sayed, Tarek
2015-11-01
Several studies have investigated the relationship between field-measured conflicts and the conflicts obtained from micro-simulation models using the Surrogate Safety Assessment Model (SSAM). Results from recent studies have shown that while reasonable correlation between simulated and real traffic conflicts can be obtained, especially after proper calibration, more work is still needed to confirm that simulated conflicts provide safety measures beyond what can be expected from exposure. As well, the results have emphasized that using micro-simulation models to evaluate safety without proper model calibration should be avoided. The calibration process adjusts relevant simulation parameters to maximize the correlation between field-measured and simulated conflicts. The main objective of this study is to investigate the transferability of calibrated parameters of the traffic simulation model (VISSIM) for safety analysis between different sites. The main purpose is to examine whether the calibrated parameters, when applied to other sites, give reasonable results in terms of the correlation between the field-measured and the simulated conflicts. Eighty-three hours of video data from two signalized intersections in Surrey, BC were used in this study. Automated video-based computer vision techniques were used to extract vehicle trajectories and identify field-measured rear-end conflicts. Calibrated VISSIM parameters obtained from the first intersection, which maximized the correlation between simulated and field-observed conflicts, were used to estimate traffic conflicts at the second intersection and to compare the results to parameters optimized specifically for the second intersection. The results show that the VISSIM parameters are generally transferable between the two locations, as the transferred parameters provided better correlation between simulated and field-measured conflicts than the default VISSIM parameters. Of the six VISSIM parameters identified as
NASA Astrophysics Data System (ADS)
Khaninezhad, M. R. M.; Jafarpour, B.
2014-12-01
Inference of spatially distributed reservoir and aquifer properties from scattered and spatially limited data poses a poorly constrained nonlinear inverse problem that can have many solutions. In particular, the uncertainty in the geologic continuity model can remarkably degrade the quality of fluid displacement predictions, hence, the efficiency of resource development plans. For model calibration, instead of estimating aquifer properties for each grid cell in the model, the sparse representation of the aquifer properties is estimated from nonlinear production data. The resulting calibration problem can be solved using recent developments in sparse signal processing, widely known as compressed sensing. This novel formulation leads to a sparse data inversion technique that effectively searches for relevant geologic patterns that can explain the available spatiotemporal data. We recently introduced a new model calibration framework by using sparse geologic dictionaries that are constructed from uncertain prior geologic models. Here, we first demonstrate the effectiveness of the proposed sparse geologic dictionaries for flexible and robust model calibration under prior geologic uncertainty. We illustrate the effectiveness of the proposed approach in using limited nonlinear production data to identify a consistent geologic scenario from a number of candidate scenarios, which is usually a challenging problem in geostatistical reservoir characterization. We then evaluate the feasibility of adopting this framework for field application. In particular, we present subsurface field model calibration applications in which sparse geologic dictionaries are learned from uncertain prior information on large-scale reservoir property descriptions. We consider two large-scale field case studies, the Brugge and Norne field examples. We discuss the construction of geologic dictionaries for large-scale problems and present reduced-order methods to speed up the computational
Calibration of SWAT model for woody plant encroachment using paired experimental watershed data
NASA Astrophysics Data System (ADS)
Qiao, Lei; Zou, Chris B.; Will, Rodney E.; Stebler, Elaine
2015-04-01
Globally, rangeland has been undergoing a transition from herbaceous-dominated grasslands into tree- or shrub-dominated woodlands, with great uncertainty in the associated changes in water budget. Previous modeling studies simulated the impact of woody plant encroachment on hydrological processes using models calibrated and constrained primarily by historic streamflow from intermediate-sized watersheds. In this study, we calibrated the Soil and Water Assessment Tool (SWAT) model, a widely used model for cropping and grazing systems, for a prolifically encroaching juniper species, eastern redcedar (Juniperus virginiana), in the south-central Great Plains using species-specific biophysical and hydrological parameters and in situ meteorological forcing from three pairs of experimental watersheds (grassland versus eastern redcedar woodland) for a period of 3 years covering a dry-to-wet cycle. The multiple paired watersheds eliminated the potentially confounding edaphic and topographic influences from changes in hydrological processes related to woody encroachment. The SWAT model was optimized with the Shuffled Complexes with Principal Component Analysis (SP-UCI) algorithm, developed from the Shuffled Complex Evolution (SCE-UA) algorithm. The mean Nash-Sutcliffe coefficient (NSCE) values of the calibrated model for daily and monthly runoff from the experimental watersheds reached 0.96 and 0.97 for grassland, respectively, and 0.90 and 0.84 for eastern redcedar woodland, respectively. We then validated the calibrated model with a nearby, larger watershed undergoing rapid eastern redcedar encroachment. The NSCE value for monthly streamflow over a period of 22 years was 0.79. We provide detailed biophysical and hydrological parameters for tallgrass prairie under moderate grazing and eastern redcedar, which can be used to calibrate any model for further validation and application by the hydrologic modeling community.
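The Nash-Sutcliffe coefficient (NSCE) used to score the calibrations has a standard definition that is easy to state in code; this is the textbook formula, not code from the study:

```python
def nash_sutcliffe(obs, sim):
    """NSCE = 1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2).
    1.0 is a perfect fit; values <= 0 mean the model predicts no
    better than the mean of the observations."""
    m = sum(obs) / len(obs)
    sse = sum((o - s) ** 2 for o, s in zip(obs, sim))
    sst = sum((o - m) ** 2 for o in obs)
    return 1.0 - sse / sst

print(nash_sutcliffe([1.0, 2.0, 3.0], [1.0, 2.0, 2.5]))  # -> 0.875
```

Values such as the reported 0.96 (daily grassland runoff) or 0.79 (22-year monthly validation) are computed exactly this way from paired observed and simulated series.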
Chen, Da; Grant, Edward
2012-11-01
When paired with high-powered chemometric analysis, spectrometric methods offer great promise for the high-throughput analysis of complex systems. Effective classification or quantification often relies on signal preprocessing to reduce spectral interference and optimize the apparent performance of a calibration model. However, less frequently addressed by systematic research is the effect of preprocessing on the statistical accuracy of a calibration result. The present work demonstrates the effectiveness of two criteria for validating the performance of signal preprocessing in multivariate models in the important dimensions of bias and precision. To assess the extent of bias, we explore the applicability of the elliptic joint confidence region (EJCR) test and devise a new means to evaluate precision by a bias-corrected root mean square error of prediction. We show how these criteria can effectively gauge the success of signal pretreatments in suppressing spectral interference while providing a straightforward means to determine the optimal level of model complexity. This methodology offers a graphical diagnostic by which to visualize the consequences of pretreatment on complex multivariate models, enabling optimization with greater confidence. To demonstrate the application of the EJCR criterion in this context, we evaluate the validity of representative calibration models using standard pretreatment strategies on three spectral data sets. The results indicate that the proposed methodology facilitates the reliable optimization of a well-validated calibration model, thus improving the capability of spectrophotometric analysis.
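The bias-corrected root mean square error of prediction mentioned above can be sketched by splitting the residuals into a mean offset (the bias) and the spread about that offset; the function name is an assumption for illustration:

```python
import math

def bias_and_sep(obs, pred):
    """Return (bias, SEP): bias is the mean residual, and SEP is the
    root mean square of the residuals after the bias is removed
    (a bias-corrected RMSEP)."""
    resid = [o - p for o, p in zip(obs, pred)]
    bias = sum(resid) / len(resid)
    sep = math.sqrt(sum((r - bias) ** 2 for r in resid) / len(resid))
    return bias, sep

# a model that is consistently 1 unit low: all error is bias, no spread
print(bias_and_sep([1.0, 2.0, 3.0], [0.0, 1.0, 2.0]))  # -> (1.0, 0.0)
```

Separating the two terms matters because a preprocessing step can shrink the spread while silently introducing a systematic offset, which is exactly what the EJCR test is meant to catch.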
Liu, Jianchun; Sonnenthal, Eric L.; Bodvarsson, Gudmundur S.
2002-09-01
In this study, porewater chloride data from Yucca Mountain, Nevada, are analyzed and modeled by 3-D chemical transport simulations and analytical methods. The simulation modeling approach is based on a continuum formulation of coupled multiphase fluid flow and tracer transport processes through fractured porous rock, using a dual-continuum concept. Infiltration rates were calibrated using the porewater chloride data, and with the calibrated rates the simulated chloride distributions matched the observed data more closely. Statistical analyses of the frequency distributions of overall percolation fluxes and chloride concentrations in the unsaturated zone system demonstrate that the use of the calibrated infiltration rates had an insignificant effect on the distribution of simulated percolation fluxes but significantly changed the predicted distribution of simulated chloride concentrations. An analytical method was also applied to model transient chloride transport; verification against the 3-D simulation results showed that it captures the major transient chemical behavior and trends. Effects of lateral flow in the Paintbrush nonwelded unit on percolation fluxes and chloride distribution were studied by 3-D simulations with increased horizontal permeability. The combined results from these model calibrations furnish important information for the UZ model studies, contributing to the performance assessment of the potential repository.
How Does Knowing Snowpack Distribution Help Model Calibration and Reservoir Management?
NASA Astrophysics Data System (ADS)
Graham, C. B.; Mazurkiewicz, A.; McGurk, B. J.; Painter, T. H.
2014-12-01
Well-calibrated hydrologic models are a necessary tool for reservoir managers to meet increasingly complicated regulatory, environmental and consumptive demands on water supply systems. Achieving these objectives is difficult during periods of drought, such as seen in the Sierra Nevada in recent years, which emphasizes the importance of accurate watershed modeling and forecasting of runoff. While basin discharge has traditionally been the main criterion for model calibration, many studies have shown it to be a poor control on model calibration where correct understanding of the subbasin hydrologic processes is required. Additional data sources such as snowpack accumulation and melt are often required to create a reliable model calibration. When allocating resources for monitoring snowpack conditions, water system managers often must choose between monitoring point locations at high temporal resolution (i.e. real-time weather and snow monitoring stations) and large spatial surveys (i.e. remote sensing). NASA's Airborne Snow Observatory (ASO) provides a unique opportunity to test the relative value of spatially dense, temporally sparse measurements vs. temporally dense, spatially sparse measurements for hydrologic model calibration. The ASO is a demonstration mission using a coupled LiDAR and imaging spectrometer mounted on an aircraft flying at 6100 m to collect high-spatial-density measurements of snow water content and albedo over the 1189 km2 Tuolumne River Basin. Snow depth and albedo were collected weekly throughout the snowmelt runoff period at 5 m2 resolution during the 2013-2014 snowmelt. We developed an implementation of the USGS Precipitation Runoff Modeling System (PRMS) for the Tuolumne River above Hetch Hetchy Reservoir, the primary water source for San Francisco. The modeled snow accumulation and ablation were calibrated in 2 models using either 2 years of weekly measurements of distributed snow water equivalent from the ASO, or 2 years of 15 minute snow
Modelling, calibration, and error analysis of seven-hole pressure probes
NASA Technical Reports Server (NTRS)
Zilliac, G. G.
1993-01-01
This report describes the calibration of a nonnulling, conical, seven-hole pressure probe over a large range of flow onset angles. The calibration procedure is based on the use of differential pressures to determine the three components of velocity. The method allows determination of the flow angle and velocity magnitude to within an average error of 1.0 deg and 1.0 percent, respectively. Greater accuracy can be achieved by using high-quality pressure transducers. Also included is an examination of the factors which limit the use of the probe, a description of the measurement chain, an error analysis, and a typical experimental result. In addition, a new general analytical model of pressure probe behavior is described, and the validity of the model is demonstrated by comparing it with experimentally measured calibration data for a three-hole yaw meter and a seven-hole probe.
NASA Astrophysics Data System (ADS)
Ye, Yan; Song, Xiaomeng; Zhang, Jianyun; Kong, Fanzhe; Ma, Guangwen
2014-06-01
Practical experience has demonstrated that single objective functions, no matter how carefully chosen, prove to be inadequate in providing proper measurements for all of the characteristics of the observed data. One strategy to circumvent this problem is to define multiple fitting criteria that measure different aspects of system behavior, and to use multi-criteria optimization to identify non-dominated optimal solutions. Unfortunately, these analyses require running the original simulation model thousands of times and thus demand prohibitively large computational budgets. As a result, surrogate models have been used in combination with a variety of multi-objective optimization algorithms to approximate the true Pareto front within a limited number of evaluations of the original model. In this study, multi-objective optimization based on surrogate modeling (multivariate adaptive regression splines, MARS) is proposed for a conceptual rainfall-runoff model (the Xin'anjiang model, XAJ). Taking the Yanduhe basin of the Three Gorges region in the upper reaches of the Yangtze River, China, as a case study, three evaluation criteria were selected to quantify the goodness-of-fit of observations against values calculated by the simulation model: the Nash-Sutcliffe efficiency coefficient and the relative errors of peak flow and runoff volume (REPF and RERV). The efficacy of this method is demonstrated on the calibration of the XAJ model. Compared to the single-objective optimization results, the multi-objective optimization method can infer the most probable parameter set. The results also demonstrate that the use of surrogate modeling enables much more efficient optimization: the total computational cost is reduced by about 92.5% compared to optimization without surrogate modeling. The results obtained with the proposed method support the feasibility of applying parameter optimization to computationally intensive simulation
Near infrared spectroscopic calibration models for real time monitoring of powder density.
Román-Ospino, Andrés D; Singh, Ravendra; Ierapetritou, Marianthi; Ramachandran, Rohit; Méndez, Rafael; Ortega-Zuñiga, Carlos; Muzzio, Fernando J; Romañach, Rodolfo J
2016-10-15
Near infrared spectroscopic (NIRS) calibration models for real-time prediction of powder density (tap, bulk and consolidated) were developed for a pharmaceutical formulation. Powder density is a critical property in the manufacturing of solid oral dosages, related to critical quality attributes such as tablet mass, hardness and dissolution. Establishing calibration techniques for powder density is highly desirable for the development of control strategies. Three techniques were evaluated to obtain the required variation in powder density for the calibration sets: 1) different tap density levels (for a single component), 2) generating different strain levels, and as a consequence different powder densities, in powder blends through a modified shear Couette cell, and 3) applying normal forces to a pharmaceutical blend during a compressibility test with a powder rheometer. For each variation in powder density, near infrared spectra were acquired to develop partial least squares (PLS) calibration models. Test samples were predicted with a relative standard error of prediction of 0.38%, 7.65% and 0.93% for the tap density (single component), shear, and rheometer approaches, respectively. Spectra obtained in real time in a continuous manufacturing (CM) plant were compared to the spectra from the three approaches used to vary powder density. The calibration based on the application of different strain levels showed the greatest similarity with the blends produced in the CM plant.
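A one-latent-variable PLS1 step (the core of the NIPALS algorithm behind PLS calibration) can be sketched in plain Python; this didactic version assumes mean-centred spectra X and reference values y, and is not the models built in the study:

```python
def pls1_one_lv(X, y):
    """One-latent-variable PLS1 (a single NIPALS step).
    X: list of mean-centred spectra (rows); y: mean-centred values.
    Returns a predictor mapping a new spectrum to a value."""
    n, m = len(X), len(X[0])
    # weight vector w proportional to X^T y, normalised to unit length
    w = [sum(X[i][j] * y[i] for i in range(n)) for j in range(m)]
    norm = sum(v * v for v in w) ** 0.5
    w = [v / norm for v in w]
    # scores t = X w and regression coefficient b = (t . y) / (t . t)
    t = [sum(X[i][j] * w[j] for j in range(m)) for i in range(n)]
    b = sum(ti * yi for ti, yi in zip(t, y)) / sum(ti * ti for ti in t)
    return lambda x: b * sum(xj * wj for xj, wj in zip(x, w))

# toy two-channel spectra in which only the first channel carries signal
predict = pls1_one_lv([[1.0, 0.0], [-1.0, 0.0], [0.0, 0.0]],
                      [1.0, -1.0, 0.0])
print(predict([3.0, 0.0]))  # -> 3.0
```

A full PLS model repeats this step on deflated X and y to extract further latent variables; real NIRS models typically need several, chosen by cross-validation.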
Stellar models with mixing length and T(τ) relations calibrated on 3D convection simulations
NASA Astrophysics Data System (ADS)
Salaris, Maurizio; Cassisi, Santi
2015-05-01
The calculation of the thermal stratification in the superadiabatic layers of stellar models with convective envelopes is a long-standing problem of stellar astrophysics, and has a major impact on predicted observational properties such as radius and effective temperature. The mixing length theory, almost universally used to model the superadiabatic convective layers, contains one free parameter to be calibrated (αml) whose value controls the resulting effective temperature. Here we present the first self-consistent stellar evolution models calculated by employing the atmospheric temperature stratification, Rosseland opacities, and calibrated variable αml (dependent on effective temperature and surface gravity) from a recently published large suite of three-dimensional radiation hydrodynamics simulations of stellar convective envelopes and atmospheres for solar stellar composition. From our calculations (with the same composition of the radiation hydrodynamics simulations), we find that the effective temperatures of models with the hydro-calibrated variable αml (that ranges between ~1.6 and ~2.0 in the parameter space covered by the simulations) present only minor differences, by at most ~30-50 K, compared to models calculated at constant solar αml (equal to 1.76, as obtained from the same simulations). The depth of the convective regions is essentially the same in both cases. We also analyzed the role played by the hydro-calibrated T(τ) relationships in determining the evolution of the model effective temperatures, when compared to alternative T(τ) relationships often used in stellar model computations. The choice of the T(τ) can have a larger impact than the use of a variable αml compared to a constant solar value. We found that the solar semi-empirical T(τ) by Vernazza et al. (1981, ApJS, 45, 635) provides stellar model effective temperatures that agree quite well with the results with the hydro-calibrated relationships.
NASA Astrophysics Data System (ADS)
liu, li; Solmon, Fabien; Giorgi, Filippo; Vautard, Robert
2014-05-01
Ragweed (Ambrosia artemisiifolia L.) is a highly allergenic invasive plant. Its pollen can be transported over large distances and has been recognized as a significant cause of hayfever and asthma (D'Amato et al., 2007). In the context of the ATOPICA EU program we are studying how climate, land-use and ecological changes affect ragweed pollen emissions and concentrations. For this purpose, we implemented a pollen emission/transport module in the RegCM4 regional climate model in collaboration with ATOPICA partners. The Abdus Salam International Centre for Theoretical Physics (ICTP) regional climate model RegCM4 was adapted to incorporate pollen emissions from the French ORCHIDEE global land surface model and a pollen tracer model describing pollen convective transport, turbulent mixing, and dry and wet deposition over extensive domains, using consistent assumptions regarding the transport of multiple species (Solmon et al., 2008). We performed two families of recent-past simulations on the Euro-CORDEX domain (simulations of future conditions are being considered). Hindcast simulations (2000-2011) were driven by the ERA-Interim re-analyses and designed to best simulate past airborne pollen; they were calibrated with part of the observations and verified by comparison with the remaining observations. Historical simulations (1985-2004) were driven by HadGEM CMIP5 output and designed to serve as a baseline for comparison with future airborne concentrations as obtained from climate and land-use scenarios. To reduce the uncertainties in the ragweed pollen emission, an assimilation-like method (Rouïl et al., 2009) was used to calibrate the release based on airborne pollen observations. The observations were divided into two groups and used for calibration and validation separately. A wide range of possible calibration coefficients was tested for each calibration station, keeping the bias between observations and simulations within an admissible value
Calibrating Bayesian Network Representations of Social-Behavioral Models
Whitney, Paul D.; Walsh, Stephen J.
2010-04-08
While human behavior has long been studied, recent and ongoing advances in computational modeling present opportunities for recasting research outcomes in human behavior. In this paper we describe how Bayesian networks can represent outcomes of human behavior research. We demonstrate a Bayesian network that represents political radicalization research – and show a corresponding visual representation of aspects of this research outcome. Since Bayesian networks can be quantitatively compared with external observations, the representation can also be used for empirical assessments of the research which the network summarizes. For a political radicalization model based on published research, we show this empirical comparison with data taken from the Minorities at Risk Organizational Behaviors database.
Kinetic modeling of antimony(III) oxidation and sorption in soils.
Cai, Yongbing; Mi, Yuting; Zhang, Hua
2016-10-01
Kinetic batch and saturated column experiments were performed to study the oxidation, adsorption and transport of Sb(III) in two soils with contrasting properties. Kinetic and column experiment results clearly demonstrated the extensive oxidation of Sb(III) in soils, which can in turn influence the adsorption and transport of Sb. Both the sorption capacity and the kinetic oxidation rate were much higher in the calcareous Huanjiang soil than in the acid red Yingtan soil. The results indicate that soils serve as a catalyst, promoting the oxidation of Sb(III) even under anaerobic conditions. A PHREEQC model with kinetic formulations was developed to simulate the oxidation, sorption and transport of Sb(III) in soils. The model successfully described the Sb(III) oxidation and sorption data in the kinetic batch experiment. It was less successful in simulating the reactive transport of Sb(III) in soil columns. Additional processes such as colloid-facilitated transport need to be quantified and considered in the model.
Kinetic modeling of antimony(III) oxidation and sorption in soils.
Cai, Yongbing; Mi, Yuting; Zhang, Hua
2016-10-01
Kinetic batch and saturated column experiments were performed to study the oxidation, adsorption and transport of Sb(III) in two soils with contrasting properties. Kinetic and column experiment results clearly demonstrated the extensive oxidation of Sb(III) in soils, and this can in return influence the adsorption and transport of Sb. Both sorption capacity and kinetic oxidation rate were much higher in calcareous Huanjiang soil than in acid red Yingtan soil. The results indicate that soil serve as a catalyst in promoting oxidation of Sb(III) even under anaerobic conditions. A PHREEQC model with kinetic formulations was developed to simulate the oxidation, sorption and transport of Sb(III) in soils. The model successfully described Sb(III) oxidation and sorption data in kinetic batch experiment. It was less successful in simulating the reactive transport of Sb(III) in soil columns. Additional processes such as colloid facilitated transport need to be quantified and considered in the model. PMID:27214003
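The kinetic formulation in the abstract above is not spelled out; as an illustrative sketch only, the following assumes simple first-order oxidation of dissolved Sb(III), with a hypothetical rate constant (the paper's PHREEQC model additionally couples sorption terms not shown here):

```python
import math

def sb3_remaining(c0, k, t):
    """First-order decay of dissolved Sb(III): C(t) = C0 * exp(-k * t).

    c0: initial Sb(III) concentration (mg/L), k: oxidation rate constant
    (1/h), t: time (h). Both the rate law and the constant below are
    assumptions for illustration, not values from the paper.
    """
    return c0 * math.exp(-k * t)

# Half-life implied by a hypothetical k = 0.05 1/h
t_half = math.log(2) / 0.05
print(round(t_half, 1))  # half-life ≈ 13.9 h
```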
Self-calibrating models for dynamic monitoring and diagnosis
NASA Technical Reports Server (NTRS)
Kuipers, Benjamin
1996-01-01
A method for automatically building qualitative and semi-quantitative models of dynamic systems, and using them for monitoring and fault diagnosis, is developed and demonstrated. The qualitative approach and semi-quantitative method are applied to monitoring observation streams, and to design of non-linear control systems.
Remote sensing estimation of evapotranspiration for SWAT Model Calibration
Technology Transfer Automated Retrieval System (TEKTRAN)
Hydrological models are used to assess many water resource problems from water quantity to water quality issues. The accurate assessment of the water budget, primarily the influence of precipitation and evapotranspiration (ET), is a critical first-step evaluation, which is often overlooked in hydro...
Calibration of Airframe and Occupant Models for Two Full-Scale Rotorcraft Crash Tests
NASA Technical Reports Server (NTRS)
Annett, Martin S.; Horta, Lucas G.; Polanco, Michael A.
2012-01-01
Two full-scale crash tests of an MD-500 helicopter were conducted in 2009 and 2010 at NASA Langley's Landing and Impact Research Facility in support of NASA's Subsonic Rotary Wing Crashworthiness Project. The first crash test was conducted to evaluate the performance of an externally mounted composite deployable energy absorber under combined impact conditions. In the second crash test, the energy absorber was removed to establish baseline loads that are regarded as severe but survivable. Accelerations and kinematic data collected from the crash tests were compared to a system-integrated finite element model of the test article. Results from 19 accelerometers placed throughout the airframe were compared to finite element model responses. The model developed for the purposes of predicting acceleration responses from the first crash test was inadequate for evaluating the more severe conditions seen in the second crash test. A newly developed model calibration approach that includes uncertainty estimation, parameter sensitivity, impact shape orthogonality, and numerical optimization was used to calibrate model results for the second full-scale crash test. This combination of heuristic and quantitative methods was used to identify modeling deficiencies, evaluate parameter importance, and propose required model changes. It is shown that the multi-dimensional calibration techniques presented here are particularly effective in identifying model adequacy. Acceleration results for the calibrated model were compared to test results and the original model results. There was a noticeable improvement in the pilot and co-pilot region, a slight improvement in the occupant model response, and an over-stiffening effect in the passenger region. This approach should be adopted early on, in combination with the building-block approaches that are customarily used, for model development and test planning guidance. Complete crash simulations with validated finite element models can be used
Prieto, D; Das, T K
2016-03-01
The uncertainty of pandemic influenza viruses continues to cause major preparedness challenges for public health policymakers. Decisions to mitigate influenza outbreaks often involve a tradeoff between the social costs of interventions (e.g., school closure) and the cost of uncontrolled spread of the virus. To achieve a balance, policymakers must assess the impact of mitigation strategies once an outbreak begins and the virus characteristics are known. Agent-based (AB) simulation is a useful tool for building highly granular disease spread models incorporating the epidemiological features of the virus as well as the demographic and social behavioral attributes of tens of millions of affected people. Such disease spread models provide an excellent basis on which various mitigation strategies can be tested before they are adopted and implemented by policymakers. However, to serve as a testbed for the mitigation strategies, the AB simulation models must be operational. A critical requirement for operational AB models is that they are amenable to quick and simple calibration. The calibration process works as follows: the AB model accepts information available from the field and uses it to update its parameters such that some of its outputs in turn replicate the field data. In this paper, we present our epidemiological-model-based calibration methodology, which has low computational complexity and is easy to interpret. Our model accepts a field estimate of the basic reproduction number and then uses it to update (calibrate) the infection probabilities such that their effect, combined with the effects of the given virus epidemiology, demographics, and social behavior, results in an infection pattern yielding a similar value of the basic reproduction number. We evaluate the accuracy of the calibration methodology by applying it to an AB simulation model mimicking a regional outbreak in the US. The calibrated model is shown to yield infection patterns closely replicating
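The calibration loop described above (accept a field estimate of R0, then adjust infection probabilities until the model reproduces it) can be sketched with a deliberately simplified stand-in. The closed-form relation R0 = p * contacts per day * infectious days used below is an assumption for illustration; in the paper, R0 emerges from the full agent-based simulation rather than a formula:

```python
def calibrate_infection_prob(r0_target, contacts_per_day, infectious_days,
                             tol=1e-6):
    """Find the per-contact infection probability p such that the implied
    basic reproduction number p * contacts_per_day * infectious_days
    matches a field estimate, by bisection on [0, 1].
    """
    implied_r0 = lambda p: p * contacts_per_day * infectious_days
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if implied_r0(mid) < r0_target:
            lo = mid  # p too small: implied R0 below the field estimate
        else:
            hi = mid
    return (lo + hi) / 2

# Hypothetical field estimate and contact structure
p = calibrate_infection_prob(r0_target=1.4, contacts_per_day=10,
                             infectious_days=4.1)
print(round(p * 10 * 4.1, 2))  # → 1.4, the target R0 is recovered
```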
Ozone Monitoring Instrument flight-model on-ground calibration from a scientific point of view
NASA Astrophysics Data System (ADS)
Dobber, M.; Dirksen, R.; Levelt, P.; van den Oord, B.; Jaross, G.; Kowalewski, M.; Mount, G.; Heath, D.; Hilsenrath, E.; de Vries, J.
2003-04-01
In 2002 the on-ground calibration of the flight-model of the Ozone Monitoring Instrument (OMI), scheduled for launch in January 2004 on the EOS-AURA satellite, was performed by the industrial contractor in close cooperation with the Principal Investigator team from the Royal Netherlands Meteorological Institute (KNMI) and the scientific Calibration Working Group with members from The Netherlands, Finland, and the United States. OMI will observe the Earth in nadir with a field of view of 114 degrees, providing daily Earth coverage. The instrument observes the sun once per day via one of three on-board diffusers, and once per week and once per month via the other two diffusers, respectively, to monitor diffuser degradation. The instrument is equipped with an internal white light source for detector calibration purposes. This presentation will focus on the measurements, analyses, and preliminary results of the on-ground calibration from a scientific perspective, which are particularly important for meeting the scientific objectives of the OMI instrument. The OMI instrument is operated in flight at close to vacuum pressure and at temperatures of the CCD detectors and the instrument optical bench of 266 K and 265 K, respectively. Special attention is therefore given to the methods for obtaining a flight-representative on-ground calibration of the instrument. The OMI instrument has a large observational swath field of view of 114 degrees and a high spectral resolution. For this reason the on-ground calibration of the swath-angle dependence of the radiometric calibration and the calibration description of spectral instrument features are discussed in the presentation. The spectral instrument features are important in view of the Differential Optical Absorption Spectroscopy (DOAS) retrieval technique used for the OMI instrument. They originate from the innovative polarisation scrambling device in the Earth mode and from the three on-board diffusers in the sun mode
Dowding, Kevin J.; Hills, Richard Guy
2005-04-01
Numerical models of complex phenomena often contain approximations due to our inability to fully model the underlying physics, the excessive computational resources required to fully resolve the physics, the need to calibrate constitutive models, or in some cases, our ability to only bound behavior. Here we illustrate the relationship between approximation, calibration, extrapolation, and model validation through a series of examples that use the linear transient convective/dispersion equation to represent the nonlinear behavior of Burgers equation. While the use of these models represents a simplification relative to the types of systems we normally address in engineering and science, the present examples do support the tutorial nature of this document without obscuring the basic issues presented with unnecessarily complex models.
Knot, E.A.; de Jong, E.; ten Cate, J.W.; Iburg, A.H.; Henny, C.P.; Bruin, T.; Stibbe, J.
1986-01-01
Purified human radioiodinated antithrombin III (125I-AT III) was used to study its metabolism in six members from three different families with a known hereditary AT III deficiency. Six healthy volunteers served as a control group. Sodium dodecyl sulfate-polyacrylamide gel electrophoresis (SDS-PAGE) and crossed immunoelectrophoresis (CIE) showed the purified AT III to be homogeneous. Amino acid analysis of the protein revealed a composition identical to a highly purified internal standard. The specific activity was 5.6 U/mg. Analysis of plasma radioactivity data was performed using a three-compartment model. Neither plasma disappearance half-times nor fractional catabolic rate constants differed significantly between patients and control subjects. The mean absolute catabolic rate in the patient group was significantly lower than that of the control group, at 2.57 +/- 0.44 and 4.46 +/- 0.80 mg/kg/day, respectively. In addition, the mean patient alpha 1-phase, the flux ratio (k1,2 and k2,1) of the second-compartment alpha 2-phase, and the influx (k3,1) of the third compartment were significantly reduced as compared with control values. It has been tentatively concluded that the observed reduction in the second compartment may be caused by a decrease in endothelial cell surface binding.
NASA Astrophysics Data System (ADS)
Price, K.; Purucker, T.; Kraemer, S.; Babendreier, J. E.
2011-12-01
Four nested sub-watersheds (21 to 10100 km^2) of the Neuse River in North Carolina are used to investigate calibration tradeoffs in goodness-of-fit metrics using multiple likelihood methods. Calibration of watershed hydrologic models is commonly achieved by optimizing a single goodness-of-fit metric to characterize simulated versus observed flows (e.g., R^2 and Nash-Sutcliffe Efficiency Coefficient, or NSE). However, each of these objective functions heavily weights a particular aspect of streamflow. For example, NSE and R^2 both emphasize high flows in evaluating simulation fit, while the Modified Nash-Sutcliffe Efficiency Coefficient (MNSE) emphasizes low flows. Other metrics, such as the ratio of the simulated versus observed flow standard deviations (SDR), prioritize overall flow variability. In this comparison, we use informal likelihood methods to investigate the tradeoffs of calibrating streamflow on three standard goodness-of-fit metrics (NSE, MNSE, and SDR), as well as an index metric that equally weights these three objective functions to address a range of flow characteristics. We present a flexible method that allows calibration targets to be determined by modeling goals. In this process, we begin by using Latin Hypercube Sampling (LHS) to reduce the simulations required to explore the full parameter space. The correlation structure of a large suite of goodness-of-fit metrics is explored to select metrics for use in an index function that incorporates a range of flow characteristics while avoiding redundancy. An iterative informal likelihood procedure is used to narrow parameter ranges after each simulation set to areas of the range with the most support from the observed data. A stopping rule is implemented to characterize the overall goodness-of-fit associated with the parameter set for each pass, with the best-fit pass distributions used as the calibrated set for the next simulation set. This process allows a great deal of flexibility. The process is
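The three goodness-of-fit metrics named above have standard definitions that are easy to state in code. This sketch uses the common forms (NSE with squared errors, a modified NSE with absolute errors, and the ratio of standard deviations); the exact MNSE exponent used in the study is an assumption here:

```python
import statistics

def nse(obs, sim):
    """Nash-Sutcliffe Efficiency: squared errors emphasize high flows."""
    m = statistics.mean(obs)
    return 1 - sum((o - s) ** 2 for o, s in zip(obs, sim)) / \
               sum((o - m) ** 2 for o in obs)

def mnse(obs, sim):
    """Modified NSE with absolute errors, giving low flows more weight."""
    m = statistics.mean(obs)
    return 1 - sum(abs(o - s) for o, s in zip(obs, sim)) / \
               sum(abs(o - m) for o in obs)

def sdr(obs, sim):
    """Ratio of simulated to observed flow standard deviation."""
    return statistics.stdev(sim) / statistics.stdev(obs)

# Invented daily flows; a perfect simulation scores 1.0 on all metrics
obs = [1.0, 2.0, 8.0, 3.0, 1.5]
perfect = list(obs)
print(nse(obs, perfect), mnse(obs, perfect), sdr(obs, perfect))
```

An equally weighted index of the three, as described in the abstract, would simply average these values for a candidate parameter set.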
Calibration of a distributed hydrologic model using observed spatial patterns from MODIS data
NASA Astrophysics Data System (ADS)
Demirel, Mehmet C.; González, Gorka M.; Mai, Juliane; Stisen, Simon
2016-04-01
Distributed hydrologic models are typically calibrated against streamflow observations at the outlet of the basin. Along with these observations from gauging stations, satellite-based estimates offer independent evaluation data such as remotely sensed actual evapotranspiration (aET) and land surface temperature. The primary objective of the study is to compare model calibrations against traditional downstream discharge measurements with calibrations against simulated spatial patterns and combinations of both types of observations. While discharge-based model calibration typically improves the temporal dynamics of the model, it seems to yield only minimal improvement of the simulated spatial patterns. In contrast, objective functions specifically targeting spatial pattern performance could potentially increase the spatial model performance. However, most modeling studies, including the model formulations and parameterization, are not designed to actually change the simulated spatial pattern during calibration. This study investigates the potential benefits of incorporating spatial patterns from MODIS data to calibrate the mesoscale hydrologic model (mHM). This model is selected as it allows for a change in the spatial distribution of key soil parameters through the optimization of pedo-transfer function parameters and includes options for using fully distributed daily Leaf Area Index (LAI) values directly as input. In addition, the simulated aET can be estimated at a spatial resolution suitable for comparison to the spatial patterns observed with MODIS data. To increase our control on spatial calibration we introduced three additional parameters to the model. These new parameters are part of an empirical equation to calculate the crop coefficient (Kc) from daily LAI maps and are used to update potential evapotranspiration (PET) as model input. This is done instead of correcting/updating PET with just a uniform (or aspect-driven) factor as in the mHM model
GROUNDWATER FLOW MODEL CALIBRATION USING WATER LEVEL MEASUREMENTS AT SHORT INTERVALS
Groundwater flow models are usually calibrated with respect to water level measurements collected at intervals of several months or even years. Measurements of these kinds are not sensitive to sudden or short stress conditions, such as impact from stormwater drainage flow or flas...
Note: curve fit models for atomic force microscopy cantilever calibration in water.
Kennedy, Scott J; Cole, Daniel G; Clark, Robert L
2011-11-01
Atomic force microscopy stiffness calibrations performed on commercial instruments using the thermal noise method on the same cantilever in both air and water can differ by as much as 20% when a simple harmonic oscillator model and white noise are used in curve fitting. In this note, several fitting strategies are described that reduce this difference to about 11%.
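The core of the thermal noise method is the equipartition theorem, which relates the cantilever's mean-square thermal deflection to its stiffness. A minimal sketch, omitting the harmonic oscillator spectral fit and correction factors that the note is actually concerned with, and using an illustrative deflection value:

```python
KB = 1.380649e-23  # Boltzmann constant, J/K

def thermal_stiffness(mean_square_deflection_m2, temperature_k=298.0):
    """Cantilever spring constant from the equipartition theorem:
    (1/2) k <x^2> = (1/2) kB T, hence k = kB T / <x^2>.

    Real thermal noise calibrations fit a (damped) harmonic oscillator
    to the deflection noise spectrum and apply mode-shape correction
    factors; this sketch keeps only the equipartition step.
    """
    return KB * temperature_k / mean_square_deflection_m2

# A soft cantilever at room temperature: assume <x^2> = 1e-19 m^2
k = thermal_stiffness(1e-19)
print(round(k, 3))  # ≈ 0.041 N/m
```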
Complex permittivity model for time domain reflectometry soil water content sensing: II. Calibration
Technology Transfer Automated Retrieval System (TEKTRAN)
Despite numerous applications of time domain reflectometry (TDR), serious difficulties in estimating accurate soil water contents under field conditions remain, especially in fine-textured soils. Our objectives were to calibrate a complex dielectric mixing model described by Schwartz et al. (this is...
ERIC Educational Resources Information Center
Zhang, Mo; Williamson, David M.; Breyer, F. Jay; Trapani, Catherine
2012-01-01
This article describes two separate, related studies that provide insight into the effectiveness of "e-rater" score calibration methods based on different distributional targets. In the first study, we developed and evaluated a new type of "e-rater" scoring model that was cost-effective and applicable under conditions of absent human rating and…
Calibration of TSI model 3025 ultrafine condensation particle counter
Kesten, J.; Reineking, A.; Porstendoerfer, J.
1991-01-01
The registration efficiency of the TSI model 3025 ultrafine condensation particle counter for Ag and NaCl particles of between 2 and 20 nm in diameter was determined. Taking into account the different shapes of the input aerosol size distributions entering the differential mobility analyzer (DMA) and the transfer function of the DMA, the counting efficiencies of condensation nucleus counters (CNC) for monodisperse Ag and NaCl particles were estimated. In addition, the dependence of the CNC registration efficiency on the particle concentration was investigated.
Double-layer parallelization for hydrological model calibration on HPC systems
NASA Astrophysics Data System (ADS)
Zhang, Ang; Li, Tiejian; Si, Yuan; Liu, Ronghua; Shi, Haiyun; Li, Xiang; Li, Jiaye; Wu, Xia
2016-04-01
Large-scale problems that demand high precision have remarkably increased the computational time of numerical simulation models. Therefore, the parallelization of models has been widely implemented in recent years. However, computing time remains a major challenge when a large model is calibrated using optimization techniques. To overcome this difficulty, we proposed a double-layer parallel system for hydrological model calibration using high-performance computing (HPC) systems. The lower-layer parallelism is achieved using a hydrological model, the Digital Yellow River Integrated Model, which was parallelized by decomposing river basins. The upper-layer parallelism is achieved by simultaneous hydrological simulations with different parameter combinations in the same generation of the genetic algorithm and is implemented using the job scheduling functions of an HPC system. The proposed system was applied to the upstream of the Qingjian River basin, a sub-basin of the middle Yellow River, to calibrate the model effectively by making full use of the computing resources in the HPC system and to investigate the model's behavior under various parameter combinations. This approach is applicable to most of the existing hydrology models for many applications.
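The upper-layer parallelism described above (concurrent model runs for every parameter combination in one genetic algorithm generation) can be sketched generically. The model, objective surface, and GA settings below are toy stand-ins, not the Digital Yellow River Integrated Model or an HPC job scheduler:

```python
import random
from concurrent.futures import ThreadPoolExecutor

def simulate(params):
    """Stand-in for one hydrological model run (in the paper, each run is
    itself parallelized across sub-basins in the lower layer). Returns a
    mock objective value, peaking at the hypothetical optimum (0.3, 0.7)."""
    a, b = params
    return -((a - 0.3) ** 2 + (b - 0.7) ** 2)

def evolve(generations=20, pop_size=16, seed=1):
    """Upper layer: evaluate all parameter sets of a generation
    concurrently, then keep the best half and add mutated copies."""
    rng = random.Random(seed)
    pop = [(rng.random(), rng.random()) for _ in range(pop_size)]
    with ThreadPoolExecutor(max_workers=8) as pool:
        for _ in range(generations):
            scores = list(pool.map(simulate, pop))  # concurrent evaluations
            ranked = [p for _, p in sorted(zip(scores, pop), reverse=True)]
            elite = ranked[: pop_size // 2]
            pop = elite + [(a + rng.gauss(0, 0.05), b + rng.gauss(0, 0.05))
                           for a, b in elite]
    return max(pop, key=simulate)

best = evolve()
print(best)  # converges toward the toy optimum (0.3, 0.7)
```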
Generic camera model and its calibration for computational integral imaging and 3D reconstruction.
Li, Weiming; Li, Youfu
2011-03-01
Integral imaging (II) is an important 3D imaging technology. To reconstruct 3D information of the viewed objects, modeling and calibrating the optical pickup process of II are necessary. This work focuses on the modeling and calibration of an II system consisting of a lenslet array, an imaging lens, and a charge-coupled device camera. Most existing work on such systems assumes a pinhole array model (PAM). In this work, we explore a generic camera model that accommodates more generality. This model is an empirical model based on measurements, and we constructed a setup for its calibration. Experimental results show a significant difference between the generic camera model and the PAM. Images of planar patterns and 3D objects were computationally reconstructed with the generic camera model. Compared with the images reconstructed using the PAM, the images present higher fidelity and preserve more high spatial frequency components. To the best of our knowledge, this is the first attempt in applying a generic camera model to an II system.
NASA Astrophysics Data System (ADS)
Jackson-Blake, Leah; Helliwell, Rachel
2015-04-01
Process-based catchment water quality models are increasingly used as tools to inform land management. However, for such models to be reliable they need to be well calibrated and shown to reproduce key catchment processes. Calibration can be challenging for process-based models, which tend to be complex and highly parameterised. Calibrating a large number of parameters generally requires a large amount of monitoring data, spanning all hydrochemical conditions. However, regulatory agencies and research organisations generally only sample at a fortnightly or monthly frequency, even in well-studied catchments, often missing peak flow events. The primary aim of this study was therefore to investigate how the quality and uncertainty of model simulations produced by a process-based, semi-distributed catchment model, INCA-P (the INtegrated CAtchment model of Phosphorus dynamics), were improved by calibration to higher-frequency water chemistry data. Two model calibrations were carried out for a small rural Scottish catchment: one using 18 months of daily total dissolved phosphorus (TDP) concentration data, another using a fortnightly dataset derived from the daily data. To aid comparability, calibrations were carried out automatically using the Markov Chain Monte Carlo - DiffeRential Evolution Adaptive Metropolis (MCMC-DREAM) algorithm. Calibration to daily data resulted in improved simulation of peak TDP concentrations and improved model performance statistics. Parameter-related uncertainty in simulated TDP was large when fortnightly data was used for calibration, with a 95% credible interval of 26 μg/l. This uncertainty is comparable in size to the difference between Water Framework Directive (WFD) chemical status classes, and would therefore make it difficult to use this calibration to predict shifts in WFD status. The 95% credible interval reduced markedly with the higher-frequency monitoring data, to 6 μg/l. The number of parameters that could be reliably auto-calibrated
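The 95% credible intervals quoted above are computed from posterior parameter samples such as those an MCMC-DREAM chain produces. A minimal equal-tailed version, run on fabricated draws rather than the paper's TDP posteriors:

```python
def credible_interval(samples, mass=0.95):
    """Equal-tailed credible interval from posterior samples: the
    (1-mass)/2 and (1+mass)/2 empirical quantiles of the draws."""
    s = sorted(samples)
    lo_idx = int((1 - mass) / 2 * (len(s) - 1))
    hi_idx = int((1 + mass) / 2 * (len(s) - 1))
    return s[lo_idx], s[hi_idx]

# Fabricated posterior draws spanning 0.0 to 99.9 (illustration only)
samples = [i / 10 for i in range(1000)]
lo, hi = credible_interval(samples)
print(round(hi - lo, 1))  # interval width of the fake posterior
```

A narrower interval, as obtained with the daily calibration data, corresponds directly to a tighter spread of these posterior draws.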
NASA Astrophysics Data System (ADS)
Finger, David; Vis, Marc; Huss, Matthias; Seibert, Jan
2015-04-01
The assessment of snow, glacier, and rainfall runoff contributions to discharge in mountain streams is of major importance for adequate water resource management. Such contributions can be estimated via hydrological models, provided that the modeling adequately accounts for snow and glacier melt, as well as rainfall runoff. We present a multiple-data-set calibration approach to estimate runoff composition using hydrological models with three levels of complexity. For this purpose, the code of the conceptual runoff model HBV-light was enhanced to allow calibration and validation of simulations against glacier mass balances, satellite-derived snow cover area, and measured discharge. Three levels of complexity of the model were applied to glacierized catchments in Switzerland, ranging from 39 to 103 km^2. The results indicate that all three observational data sets are reproduced adequately by the model, allowing an accurate estimation of the runoff composition in the three mountain streams. However, calibration against only runoff leads to unrealistic snow and glacier melt rates. Based on these results, we recommend using all three observational data sets in order to constrain model parameters and compute snow, glacier, and rain contributions. Finally, based on the comparison of model performance at different complexities, we postulate that the availability and use of different data sets to calibrate hydrological models might be more important than model complexity for achieving realistic estimations of runoff composition.
NASA Astrophysics Data System (ADS)
Skinner, Christopher J.; Bellerby, Timothy J.; Greatrex, Helen; Grimes, David I. F.
2015-03-01
The potential for satellite rainfall estimates to drive hydrological models has been long understood, but at the high spatial and temporal resolutions often required by these models the uncertainties in satellite rainfall inputs are both significant in magnitude and spatiotemporally autocorrelated. Conditional stochastic modelling of ensemble observed fields provides one possible approach to representing this uncertainty in a form suitable for hydrological modelling. Previous studies have concentrated on the uncertainty within the satellite rainfall estimates themselves, sometimes applying ensemble inputs to a pre-calibrated hydrological model. This approach does not account for the interaction between input uncertainty and model uncertainty, and in particular the impact of input uncertainty on model calibration. Moreover, it may not be appropriate to use deterministic inputs to calibrate a model that is intended to be driven by an ensemble. A novel whole-ensemble calibration approach has been developed to overcome some of these issues. This study used ensemble rainfall inputs produced by a conditional satellite-driven stochastic rainfall generator (TAMSIM) to drive a version of the Pitman rainfall-runoff model, calibrated using the whole-ensemble approach. Simulated ensemble discharge outputs were assessed using metrics adapted from ensemble forecast verification, showing that the ensemble outputs produced using the whole-ensemble calibrated Pitman model outperformed equivalent ensemble outputs created using a Pitman model calibrated against either the ensemble mean or a theoretical infinite-ensemble expected value. Overall, for the verification period the whole-ensemble calibration provided a mean RMSE of 61.7% of the mean wet season discharge, compared to 83.6% using a calibration based on the daily mean of the ensemble estimates. Using a Brier skill score to assess the performance of the ensemble against a climatic estimate, the whole
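The Brier skill score used above to compare the ensemble against a climatic estimate has a standard definition; this sketch uses invented forecasts and outcomes purely for illustration:

```python
def brier_score(probs, outcomes):
    """Mean squared error of probabilistic forecasts of a binary event."""
    return sum((p - o) ** 2 for p, o in zip(probs, outcomes)) / len(probs)

def brier_skill_score(probs, outcomes, ref_probs):
    """BSS = 1 - BS / BS_ref. Positive values mean the forecast beats
    the reference (here, a climatological probability); 1 is perfect."""
    return 1 - brier_score(probs, outcomes) / brier_score(ref_probs, outcomes)

# Invented flow-exceedance events and forecast probabilities
outcomes = [1, 0, 1, 1, 0]
forecast = [0.9, 0.2, 0.8, 0.7, 0.1]
climate = [0.6] * 5  # hypothetical climatic base rate as the reference
print(round(brier_skill_score(forecast, outcomes, climate), 3))  # ≈ 0.842
```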
A flexible modeling and calibration for the optical triangulation probe using a planar pattern
NASA Astrophysics Data System (ADS)
Lin, Yimin; Lu, Naiguang; Lou, Xiaoping
2013-12-01
The optical triangulation probe (OTP), which consists of a light spot projector and a camera, has found widespread application for three-dimensional (3D) measurement and quality control of products in industrial manufacturing. OTP calibration is an extremely important issue, since performance characteristics such as accuracy and repeatability depend crucially on the calibration results. This paper presents a flexible approach for modeling and calibration of the OTP, which requires only planar patterns observed from a few different orientations, along with light spots projected on the planes. In the calibration procedure, the structural parameters of the OTP are calculated, such as the camera extrinsic and intrinsic parameters, which include the coefficients of lens distortion, and the directional equation for the light axis of the projector. For the measuring procedure, the formulations of the 3D computation are concisely described using the calibration results. Experimental tests of the real system confirm suitable accuracy and repeatability. Furthermore, the technique proposed here is easily generalized for OTP integration in robot arms or Coordinate Measuring Machines (CMMs).
NASA Technical Reports Server (NTRS)
Weiss, H.; Cebula, Richard P.; Laamann, K.; Mcpeters, R. D.
1994-01-01
The Solar Backscatter Ultraviolet Radiometer, Model 2 (SBUV/2) instruments, as part of their regular operation, deploy ground-aluminum reflective diffusers to deflect solar irradiance into the instrument's field of view. Previous SBUV instrument diffusers have shown a tendency to degrade in their reflective efficiencies. This degradation will add a trend to the ozone measurements if left uncorrected. An extensive in-flight calibration system was designed into the SBUV/2 instruments to effectively measure the degradation of the solar diffuser (Ball Aerospace Systems Division 1981). Soon after launch, the NOAA-9 SBUV/2 calibration system was unable to track the diffuser's reflectivity changes due, in part, to design flaws (Frederick et al. 1986). Subsequently, the NOAA-11 SBUV/2 calibration system was redesigned, and an analysis of the first 2 years of data (Weiss et al. 1991) indicated the NOAA-11 SBUV/2 onboard calibration system's performance to be exceeding preflight expectations. This paper will describe the analysis of the first three years of NOAA-11 SBUV/2 calibration system data.
NASA Astrophysics Data System (ADS)
Junker, Philipp; Hackl, Klaus
2016-06-01
Numerical simulations are a powerful tool to analyze the complex thermo-mechanically coupled material behavior of shape memory alloys during product engineering. The benefit of the simulations strongly depends on the quality of the underlying material model. In this contribution, we discuss a variational approach which is based solely on energetic considerations and demonstrate that a unique calibration of such a model is sufficient to predict the material behavior at varying ambient temperature. In the beginning, we recall the necessary equations of the material model and explain the fundamental idea. Afterwards, we focus on the numerical implementation and provide all information that is needed for programming. Then, we show two different ways to calibrate the model and discuss the results. Furthermore, we show how this model is used during real-life industrial product engineering.
Generic Raman-based calibration models enabling real-time monitoring of cell culture bioreactors.
Mehdizadeh, Hamidreza; Lauri, David; Karry, Krizia M; Moshgbar, Mojgan; Procopio-Melino, Renee; Drapeau, Denis
2015-01-01
Raman-based multivariate calibration models have been developed for real-time in situ monitoring of multiple process parameters within cell culture bioreactors. The developed models are generic, in the sense that they are applicable to various products, media, and cell lines based on Chinese Hamster Ovary (CHO) host cells, and are scalable to large pilot and manufacturing scales. Several batches using different CHO-based cell lines and corresponding proprietary media and process conditions have been used to generate calibration datasets, and models have been validated using independent datasets from separate batch runs. All models have been validated to be generic and capable of predicting process parameters with acceptable accuracy. The developed models allow monitoring of multiple key bioprocess metabolic variables, and hence can be utilized as an important enabling tool for Quality by Design approaches, which are strongly supported by the U.S. Food and Drug Administration.
Calibration Of 2D Hydraulic Inundation Models In The Floodplain Region Of The Lower Tagus River
NASA Astrophysics Data System (ADS)
Pestanana, R.; Matias, M.; Canelas, R.; Araujo, A.; Roque, D.; Van Zeller, E.; Trigo-Teixeira, A.; Ferreira, R.; Oliveira, R.; Heleno, S.
2013-12-01
In terms of inundated area, the largest floods in Portugal occur in the Lower Tagus River. On average, the river overflows every 2.5 years, at times blocking roads and causing significant agricultural damage. This paper focuses on the calibration of 2D-horizontal flood simulation models for the floods of 2001 and 2006 on a 70-km stretch of the Lower Tagus River. Flood extent maps derived from ERS SAR and ENVISAT ASAR imagery were compared with the flood extent maps obtained for each simulation to calibrate roughness coefficients. The combination of the calibration results from the 2001 and 2006 floods provided a preliminary Manning coefficient map of the study area.
Calibration of sensory and cognitive judgments: a single model for both.
Ferrell, W R
1994-12-01
In a recent issue of this journal, Winman and Juslin (34, 135-148, 1993) present a model of the calibration of subjective probability judgments for sensory discrimination tasks. They claim that the model predicts a pervasive underconfidence bias observed in such tasks, and present evidence from a training experiment that they interpret as supporting the notion that different models are needed to describe judgment of confidence in sensory and in cognitive tasks. The model is actually part of the more comprehensive decision variable partition model of subjective probability calibration that was originally proposed in Ferrell and McGoey (Organizational Behavior and Human Performance, 26, 32-53, 1980). The characteristics of the model are described and it is demonstrated that the model does not predict underconfidence, that it is fully compatible with the overconfidence frequently found in calibration studies with cognitive tasks, and that it well represents experimental results from such studies. It is concluded that only a single model is needed for both types of task. PMID:7809583
Scott, D.T.; Gooseff, M.N.; Bencala, K.E.; Runkel, R.L.
2003-01-01
The hydrologic processes of advection, dispersion, and transient storage are the primary physical mechanisms affecting solute transport in streams. The estimation of parameters for a conservative solute transport model is an essential step to characterize transient storage and other physical features that cannot be directly measured, and it is often a preliminary step in the study of reactive solutes. Our study used inverse modeling to estimate parameters of the transient storage model OTIS (One-dimensional Transport with Inflow and Storage). Observations from a tracer injection experiment performed on Uvas Creek, California, USA, are used to illustrate the application of automated solute transport model calibration to conservative and nonconservative stream solute transport. A computer code for universal inverse modeling (UCODE) is used for the calibrations. Results of this procedure are compared with a previous study that used a trial-and-error parameter estimation approach. The results demonstrated: 1) the importance of proper estimation of discharge and lateral inflow within the stream system; 2) that although the fit to the observations is not much better when transient storage is invoked, the residuals are more randomly distributed (suggesting non-systematic error), indicating that transient storage is occurring; 3) that inclusion of transient storage for a reactive solute (Sr2+) provided a better fit to the observations, highlighting the importance of robust model parameterization; and 4) that applying an automated inverse-modeling calibration approach resulted in a comprehensive understanding of the model results and the limitations of the input data.
NASA Astrophysics Data System (ADS)
Boyle, Douglas P.; Gupta, Hoshin V.; Sorooshian, Soroosh
2000-12-01
Automatic methods for model calibration seek to take advantage of the speed and power of digital computers while being objective and relatively easy to implement. However, they do not provide parameter estimates and hydrograph simulations that are considered acceptable by the hydrologists responsible for operational forecasting, and they have therefore not entered into widespread use. In contrast, the manual approach, which has been developed and refined over the years to produce excellent model calibrations, is complicated and highly labor-intensive, and the expertise acquired by one individual with a specific model is not easily transferred to another person (or model). In this paper, we propose a hybrid approach that combines the strengths of each. A multicriteria formulation is used to "model" the evaluation techniques and strategies used in manual calibration, and the resulting optimization problem is solved by means of a computerized algorithm. The new approach provides a stronger test of model performance than methods that use a single overall statistic to aggregate model errors over a large range of hydrologic behaviors. The power of the new approach is illustrated by means of a case study using the Sacramento Soil Moisture Accounting model.
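The multicriteria formulation above replaces a single aggregate error statistic with several error measures evaluated jointly. One common way to solve such a problem, sketched here as an assumption rather than the authors' exact algorithm, is to retain only the Pareto-optimal parameter sets, i.e. those not dominated on every criterion. The error triples below are hypothetical.

```python
def dominates(a, b):
    """True if a is no worse than b on every criterion and strictly better on one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(errors):
    """Indices of the non-dominated parameter sets."""
    return [i for i, e in enumerate(errors)
            if not any(dominates(errors[j], e)
                       for j in range(len(errors)) if j != i)]

# Hypothetical (peak-flow error, low-flow error, volume bias) per parameter set
errors = [(0.2, 0.5, 0.1), (0.3, 0.2, 0.2), (0.4, 0.6, 0.3)]
front = pareto_front(errors)   # the third set is dominated by the first
```

Neither of the two surviving sets beats the other on all three criteria, which is exactly the trade-off a manual calibrator weighs by eye.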
Khan, Yasin; Mathur, Jyotirmay; Bhandari, Mahabir S
2016-01-01
The paper describes a case study of an information technology office building with a radiant cooling system and a conventional variable air volume (VAV) system installed side by side so that performance can be compared. First, a 3D model of the building covering architecture, occupancy, and HVAC operation was developed in the simulation tool EnergyPlus. Second, a calibration methodology was applied to develop the base case for assessing the energy saving potential. This paper details the calibration of the whole-building energy model down to the component level, including lighting, equipment, and HVAC components such as chillers, pumps, cooling towers, and fans. A new methodology for the systematic selection of influence parameters was also developed for calibrating a simulated model that requires a long execution time. The error at the whole-building level, measured as mean bias error (MBE), is 0.2%, and the coefficient of variation of root mean square error (CvRMSE) is 3.2%. The total errors in HVAC at the hourly level are MBE = 8.7% and CvRMSE = 23.9%, which meet the criteria of ASHRAE Guideline 14 (2002) for hourly calibration. Several suggestions are made for generalizing the energy savings of a radiant cooling system to existing building systems. A base case model was therefore developed from the calibrated model to quantify the energy saving potential of the radiant cooling system. It was found that a radiant cooling system integrated with a DOAS can save 28% energy compared with the conventional VAV system.
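The MBE and CvRMSE figures quoted above are the standard whole-building calibration statistics of ASHRAE Guideline 14 (hourly criteria of roughly |MBE| <= 10% and CvRMSE <= 30%). A minimal sketch of both, with hypothetical hourly energy readings:

```python
import math

def mbe_percent(measured, simulated):
    """Mean bias error as a percentage of the total measured value."""
    return 100.0 * sum(m - s for m, s in zip(measured, simulated)) / sum(measured)

def cvrmse_percent(measured, simulated):
    """Coefficient of variation of the RMSE, as a percentage of the mean."""
    n = len(measured)
    rmse = math.sqrt(sum((m - s) ** 2 for m, s in zip(measured, simulated)) / n)
    return 100.0 * rmse / (sum(measured) / n)

measured  = [10.0, 12.0, 11.0, 13.0]     # hypothetical metered kWh per hour
simulated = [ 9.0, 12.0, 12.0, 13.0]     # corresponding model output
bias = mbe_percent(measured, simulated)      # errors of +1 and -1 cancel here
spread = cvrmse_percent(measured, simulated) # but do not cancel in CvRMSE
```

The example shows why both statistics are required: offsetting errors drive MBE to zero while CvRMSE still exposes the scatter.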
Caruso, Rosario; Gambino, Grazia Laura; Scordino, Monica; Sabatino, Leonardo; Traulo, Pasqualino; Gagliano, Giacomo
2011-12-01
The influence of the wine distillation process on methanol content has been determined by quantitative analysis using gas chromatography with flame ionization detection (GC-FID). A comparative study between direct injection of diluted wine and injection of distilled wine was performed. The distillation process does not affect methanol quantification in wines in proportions higher than 10%. While quantification performed on distilled samples gives more reliable results, a screening method based on wine injection after a 1:5 water dilution could be employed. The proposed technique was found to be a compromise between the time-consuming distillation process and direct wine injection. In the studied calibration range, the stability of the volatile compounds in the reference solution is concentration-dependent: stability is higher in the less concentrated reference solution. To shorten the operation time, a steeper temperature ramp and a higher carrier flow rate were employed. Under these conditions, helium consumption and column thermal stress increased; however, detection limits, calibration limits, and analytical method performance are not affected substantially by changing from normal to forced GC conditions. Statistical data evaluation was performed using both ordinary (OLS) and bivariate least squares (BLS) calibration models. Further confirmation was obtained that limit of detection (LOD) values calculated according to the 3-sigma approach are lower than those from the Hubaux-Vos (H-V) calculation method. The H-V LOD depends upon background noise, calibration parameters, and the number of reference standard solutions employed in producing the calibration curve. These remarks are confirmed by both calibration models used. PMID:22312744
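A minimal sketch of the 3-sigma LOD computation mentioned above: fit an OLS calibration line, estimate the residual standard error s, and take LOD = 3s/slope. The concentration/response pairs are hypothetical; the Hubaux-Vos procedure the abstract compares against additionally accounts for calibration-design effects and is not implemented here.

```python
def ols(x, y):
    """Slope and intercept of the ordinary least-squares calibration line."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
             / sum((xi - mx) ** 2 for xi in x))
    return slope, my - slope * mx

def lod_3sigma(x, y):
    """3-sigma detection limit from the calibration residual standard error."""
    slope, intercept = ols(x, y)
    resid = [yi - (slope * xi + intercept) for xi, yi in zip(x, y)]
    s = (sum(r * r for r in resid) / (len(x) - 2)) ** 0.5
    return 3.0 * s / slope

conc = [0.0, 1.0, 2.0, 3.0]    # e.g. methanol standard concentrations (toy)
resp = [0.1, 1.0, 2.1, 3.0]    # detector response, arbitrary units (toy)
lod = lod_3sigma(conc, resp)
```

With perfectly linear data the residuals, and hence the LOD, go to zero, which is why replicate noise in the standards matters.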
Calibration and Monte Carlo modelling of neutron long counters
NASA Astrophysics Data System (ADS)
Tagziria, Hamid; Thomas, David J.
2000-10-01
The Monte Carlo technique has become a very powerful tool in radiation transport as full advantage is taken of enhanced cross-section data, more powerful computers and statistical techniques, together with better characterisation of neutron and photon source spectra. At the National Physical Laboratory, calculations using the Monte Carlo radiation transport code MCNP-4B have been combined with accurate measurements to characterise two long counters routinely used to standardise monoenergetic neutron fields. New and more accurate response function curves have been produced for both long counters. A novel approach using Monte Carlo methods has been developed, validated and used to model the response function of the counters and determine more accurately their effective centres, which have always been difficult to establish experimentally. Calculations and measurements agree well, especially for the De Pangher long counter for which details of the design and constructional material are well known. The sensitivity of the Monte Carlo calculations for the efficiency of the De Pangher long counter to perturbations in density and cross-section of the polyethylene used in the construction has been investigated.
Hidden Connections between Regression Models of Strain-Gage Balance Calibration Data
NASA Technical Reports Server (NTRS)
Ulbrich, Norbert
2013-01-01
Hidden connections between regression models of wind tunnel strain-gage balance calibration data are investigated. These connections become visible whenever balance calibration data is supplied in its design format and both the Iterative and Non-Iterative Method are used to process the data. First, it is shown how the regression coefficients of the fitted balance loads of a force balance can be approximated by using the corresponding regression coefficients of the fitted strain-gage outputs. Then, data from the manual calibration of the Ames MK40 six-component force balance is chosen to illustrate how estimates of the regression coefficients of the fitted balance loads can be obtained from the regression coefficients of the fitted strain-gage outputs. The study illustrates that load predictions obtained by applying the Iterative or the Non-Iterative Method originate from two related regression solutions of the balance calibration data as long as balance loads are given in the design format of the balance, gage outputs behave highly linear, strict statistical quality metrics are used to assess regression models of the data, and regression model term combinations of the fitted loads and gage outputs can be obtained by a simple variable exchange.
Marker-based monitoring of seated spinal posture using a calibrated single-variable threshold model.
Walsh, Pauline; Dunne, Lucy E; Caulfield, Brian; Smyth, Barry
2006-01-01
This work, as part of a larger project developing wearable posture monitors for the work environment, seeks to monitor and model seated posture during computer use. A non-wearable marker-based optoelectronic motion capture system was used to monitor seated posture for ten healthy subjects during a calibration exercise and a typing task. Machine learning techniques were used to select overall spinal sagittal flexion as the best indicator of posture from a set of marker and vector variables. Overall flexion data from the calibration exercise were used to define a threshold model designed to classify posture for each subject, which was then applied to the typing task data. Results of the model were analysed visually by qualified physiotherapists with experience in ergonomics and posture analysis to confirm the accuracy of the calibration. The calibration formula was found to be accurate for 100% of subjects. This process will be used as a comparative measure in the evaluation of several wearable posture sensors, and to inform the design of the wearable system. PMID:17946301
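A minimal sketch of a single-variable threshold classifier like the one described above: a per-subject threshold is derived from calibration data and then applied to task data. The midpoint rule and the flexion values are illustrative assumptions, not the authors' exact calibration formula.

```python
def calibrate_threshold(upright, slumped):
    """Midpoint between the mean upright and mean slumped flexion angles."""
    mean = lambda xs: sum(xs) / len(xs)
    return (mean(upright) + mean(slumped)) / 2.0

def classify(flexion, threshold):
    """Label a single sample of overall sagittal flexion (degrees)."""
    return "slumped" if flexion > threshold else "upright"

upright = [5.0, 7.0, 6.0]      # calibration-exercise flexion, good posture (toy)
slumped = [25.0, 28.0, 31.0]   # calibration-exercise flexion, poor posture (toy)
t = calibrate_threshold(upright, slumped)        # per-subject threshold
labels = [classify(a, t) for a in (4.0, 22.0)]   # applied to task data
```

Calibrating the threshold per subject, as the study does, absorbs individual differences in baseline spinal curvature.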
An initial inverse calibration of the ground-water flow model for the Hanford unconfined aquifer
Jacobson, E.A. . Desert Research Inst.); Freshly, M.D. )
1990-03-01
Large volumes of process cooling water are discharged to the ground from U.S. Department of Energy (DOE) nuclear fuel processing operations in the central portion of the Hanford Site in southeastern Washington. Over the years, these large volumes of waste water have recharged the unconfined aquifer at the Site. This artificial recharge has affected ground-water levels and contaminant movement in the unconfined aquifer. Ground-water flow and contaminant transport models have been applied to assess the impacts of site operations on the rate and direction of ground-water flow and contaminant transport in the unconfined aquifer at the Hanford Site. The inverse calibration method developed by Neuman and modified by Jacobson was applied to improve calibration of a ground-water flow model of the unconfined aquifer at the Hanford Site. All information about estimates of hydraulic properties of the aquifer, hydraulic heads, boundary conditions, and discharges to and withdrawals from the aquifer is included in the inverse method to obtain an initial calibration of the ground-water flow model. The purpose of this report is to describe the inverse method and its initial application to the unconfined aquifer at Hanford, and to present results of the initial inverse calibration. 28 refs., 19 figs., 1 tab.
NASA Astrophysics Data System (ADS)
Goldberg, D.; Heimbach, P.; Joughin, I.; Smith, B.
2015-08-01
A glacial flow model of Smith, Pope and Kohler Glaciers has been calibrated by means of inverse methods against time-varying, annually resolved observations of ice height and velocities, covering the period 2002 to 2011. The inversion - termed "transient calibration" - produces an optimal set of time-mean, spatially varying parameters together with a time-evolving state that accounts for the transient nature of the observations and the model dynamics. Serving as an optimal initial condition, the estimated state for 2011 is used, with no additional forcing, for predicting grounded ice volume loss and grounding line retreat over the ensuing 30 years. The transiently calibrated model predicts a near-steady loss of grounded ice volume of approximately 21 km3 a-1 over this period, as well as a loss of 33 km2 a-1 of grounded area. We contrast this prediction with one obtained following a commonly used "snapshot" or steady-state inversion, which does not consider time dependence and assumes all observations to be contemporaneous. Transient calibration is shown to achieve a better fit with observations of thinning and grounding line retreat histories, and it yields a quantitatively different projection with respect to ice volume loss and ungrounding. Sensitivity studies suggest large near-future levels of unforced, i.e., committed sea-level contribution from these ice streams under reasonable assumptions regarding uncertainties of the unknown parameters.
Model calibration and validation for OFMSW and sewage sludge co-digestion reactors
Esposito, G.; Frunzo, L.; Panico, A.; Pirozzi, F.
2011-12-15
Highlights: (1) Disintegration is the limiting step of the anaerobic co-digestion process. (2) The disintegration kinetic constant does not depend on the waste particle size. (3) The disintegration kinetic constant depends only on the waste nature and composition. (4) Model calibration can be performed on organic waste of any particle size. - Abstract: A mathematical model has recently been proposed by the authors to simulate the biochemical processes that prevail in a co-digestion reactor fed with sewage sludge and the organic fraction of municipal solid waste. This model is based on the Anaerobic Digestion Model no. 1 of the International Water Association, which has been extended to include the co-digestion processes, using surface-based kinetics to model the organic waste disintegration and conversion to carbohydrates, proteins and lipids. When organic waste solids are present in the reactor influent, the disintegration process is the rate-limiting step of the overall co-digestion process. The main advantage of the proposed modeling approach is that the kinetic constant of such a process does not depend on the waste particle size distribution (PSD) and rather depends only on the nature and composition of the waste particles. The model calibration aimed to assess the kinetic constant of the disintegration process can therefore be conducted using organic waste samples of any PSD, and the resulting value will be suitable for all the organic wastes of the same nature as the investigated samples, independently of their PSD. This assumption was proven in this study by biomethane potential experiments that were conducted on organic waste samples with different particle sizes. The results of these experiments were used to calibrate and validate the mathematical model, resulting in a good agreement between the simulated and observed data for any investigated particle size of the solid waste. This study confirms the strength of the proposed model and calibration procedure, which can
On Inertial Body Tracking in the Presence of Model Calibration Errors.
Miezal, Markus; Taetz, Bertram; Bleser, Gabriele
2016-07-22
In inertial body tracking, the human body is commonly represented as a biomechanical model consisting of rigid segments with known lengths and connecting joints. The model state is then estimated via sensor fusion methods based on data from attached inertial measurement units (IMUs). This requires the relative poses of the IMUs w.r.t. the segments-the IMU-to-segment calibrations, subsequently called I2S calibrations-to be known. Since calibration methods based on static poses, movements and manual measurements are still the most widely used, potentially large human-induced calibration errors have to be expected. This work compares three newly developed/adapted extended Kalman filter (EKF) and optimization-based sensor fusion methods with an existing EKF-based method w.r.t. their segment orientation estimation accuracy in the presence of model calibration errors with and without using magnetometer information. While the existing EKF-based method uses a segment-centered kinematic chain biomechanical model and a constant angular acceleration motion model, the newly developed/adapted methods are all based on a free segments model, where each segment is represented with six degrees of freedom in the global frame. Moreover, these methods differ in the assumed motion model (constant angular acceleration, constant angular velocity, inertial data as control input), the state representation (segment-centered, IMU-centered) and the estimation method (EKF, sliding window optimization). In addition to the free segments representation, the optimization-based method also represents each IMU with six degrees of freedom in the global frame. In the evaluation on simulated and real data from a three segment model (an arm), the optimization-based method showed the smallest mean errors, standard deviations and maximum errors throughout all tests. It also showed the lowest dependency on magnetometer information and motion agility. Moreover, it was insensitive w.r.t. I2S position and
NASA Astrophysics Data System (ADS)
Singh, A.; Karsten, A.
2011-06-01
The accuracy of the calibration model for the single and double integrating sphere systems is compared for a white light system. A calibration model is created from a matrix of samples with known absorption and reduced scattering coefficients. In this instance the samples are made using different concentrations of intralipid and black ink. The total and diffuse transmittance and reflectance are measured on both setups, and the accuracy of each model is compared by evaluating the prediction errors of the calibration model for the different systems. Current results indicate that the single integrating sphere setup is more accurate than the double sphere method. This is based on the low prediction errors of the model for the single sphere system for a He-Ne laser as well as a white light source. The model still needs to be refined for more absorption factors. Prediction accuracy was then tested by extracting the optical properties of solid resin-based phantoms on each system. When these properties of the phantoms were used as input to the modeling software, excellent agreement between measured and simulated data was found for the single sphere system.
Calibration of the heat balance model for prediction of car climate
NASA Astrophysics Data System (ADS)
Pokorný, Jan; Fišer, Jan; Jícha, Miroslav
2012-04-01
In the paper, the authors describe the development of a heat balance model to predict car cabin climate and heat load. The model is developed in the Modelica language, using Dymola as the interpreter. It is a dynamical system that describes the heat exchange between the car cabin and the ambient environment. Inside the car cabin, heat exchange among the air zone, the interior, and the air-conditioning system is considered. The model treats 1D heat transfer with heat accumulation and the relative movement of the Sun with respect to the car cabin while the car is moving. Measurements under real operating conditions provided the data for model calibration. The model was calibrated for Škoda Felicia parking-summer scenarios.
Liu, Miao; Yang, Shourui; Wang, Zhangying; Huang, Shujun; Liu, Yue; Niu, Zhenqi; Zhang, Xiaoxuan; Zhu, Jigui; Zhang, Zonghua
2016-05-30
Augmented reality systems can be applied to provide precise guidance for various kinds of manual work. The adaptability and guiding accuracy of such systems are determined by the computational model and the corresponding calibration method. In this paper, a novel type of augmented reality guiding system and the corresponding design scheme are proposed. Guided by external positioning equipment, the proposed system can achieve high relative indication accuracy in a large working space. The system is realized with a digital projector, and the general back projection model is derived from the geometric relationship between the digitized 3D model and the projector in free space. A corresponding calibration method is designed for the proposed system to obtain the parameters of the projector. To validate the proposed back projection model, coordinate data collected by 3D positioning equipment are used to calculate and optimize the extrinsic parameters. The final projecting indication accuracy of the system is verified with a subpixel pattern projecting technique.
A directional HF noise model: Calibration and validation in the Australian region
NASA Astrophysics Data System (ADS)
Pederick, L. H.; Cervera, M. A.
2016-01-01
The performance of systems using HF (high frequency) radio waves, such as over-the-horizon radars, is strongly dependent on the external noise environment. However, this environment has complex behavior and is known to vary with location, time, season, sunspot number, and radio frequency. It is also highly anisotropic, with the directional variation of noise being very important for the design and development of next generation over-the-horizon radar. By combining global maps of lightning occurrence, raytracing propagation, a model background ionosphere and ionospheric absorption, the behavior of noise at HF may be modeled. This article outlines the principles, techniques, and current progress of the model and calibrates it against a 5 year data set of background noise measurements. The calibrated model is then compared with data at a second site.
Technology Transfer Automated Retrieval System (TEKTRAN)
Process based and distributed watershed models possess a large number of parameters that are not directly measured in field and need to be calibrated through matching modeled in-stream fluxes with monitored data. Recently, there have been waves of concern about the reliability of this common practic...
Impact of Land Model Calibration on Coupled Land-Atmosphere Prediction
NASA Technical Reports Server (NTRS)
Santanello, Joseph A., Jr.; Kumar, Sujay V.; Peters-Lidard, Christa D.; Harrison, Ken; Zhou, Shujia
2012-01-01
Land-atmosphere (L-A) interactions play a critical role in determining the diurnal evolution of both planetary boundary layer (PBL) and land surface heat and moisture budgets, as well as controlling feedbacks with clouds and precipitation that lead to the persistence of dry and wet regimes. Recent efforts to quantify the strength of L-A coupling in prediction models have produced diagnostics that integrate across both the land and PBL components of the system. In this study, we examine the impact of improved specification of land surface states, anomalies, and fluxes on coupled WRF forecasts during the summers of extreme dry and wet land surface conditions in the U.S. Southern Great Plains. The improved land initialization and surface flux parameterizations are obtained through calibration of the Noah land surface model using the new optimization and uncertainty estimation subsystem in NASA's Land Information System (LIS-OPT/UE). The impact of the calibration on the a) spinup of the land surface used as initial conditions, and b) the simulated heat and moisture states and fluxes of the coupled WRF simulations is then assessed. Changes in ambient weather and land-atmosphere coupling are evaluated along with measures of uncertainty propagation into the forecasts. In addition, the sensitivity of this approach to the period of calibration (dry, wet, average) is investigated. Results indicate that the offline calibration leads to systematic improvements in land-PBL fluxes and near-surface temperature and humidity, and in the process provide guidance on the questions of what, how, and when to calibrate land surface models for coupled model prediction.
Kay, D; McDonald, A
1983-01-01
This paper reports on the calibration and use of a multiple regression model designed to predict concentrations of Escherichia coli and total coliforms in two upland British impoundments. The multivariate approach has improved predictive capability over previous univariate linear models because it includes predictor variables for the timing and magnitude of hydrological input to the reservoirs and physiochemical parameters of water quality. The significance of these results for catchment management research is considered. PMID:6639016
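The multivariate approach above amounts to fitting a multiple linear regression of bacterial concentration on several predictors at once. A minimal sketch via the normal equations follows; the predictor names and data are hypothetical stand-ins for the hydrological and physiochemical variables the study used.

```python
def fit_multiple_regression(X, y):
    """Least-squares coefficients [b0, b1, ...] via the normal equations."""
    rows = [[1.0] + list(x) for x in X]           # prepend intercept column
    k = len(rows[0])
    A = [[sum(r[i] * r[j] for r in rows) for j in range(k)] for i in range(k)]
    c = [sum(r[i] * yi for r, yi in zip(rows, y)) for i in range(k)]
    for i in range(k):                            # Gaussian elimination, pivoting
        p = max(range(i, k), key=lambda r: abs(A[r][i]))
        A[i], A[p], c[i], c[p] = A[p], A[i], c[p], c[i]
        for r in range(i + 1, k):
            f = A[r][i] / A[i][i]
            A[r] = [a - f * b for a, b in zip(A[r], A[i])]
            c[r] -= f * c[i]
    b = [0.0] * k
    for i in reversed(range(k)):                  # back substitution
        b[i] = (c[i] - sum(A[i][j] * b[j] for j in range(i + 1, k))) / A[i][i]
    return b

# x1 = antecedent rainfall index, x2 = water temperature (both hypothetical)
X = [(1, 0), (0, 1), (1, 1), (2, 0), (0, 2)]
y = [5.0, 1.0, 4.0, 8.0, 0.0]                    # generated from 2 + 3*x1 - x2
coeffs = fit_multiple_regression(X, y)
```

Because the toy data are exactly linear, the fit recovers the generating coefficients; field data would leave residuals that drive the predictive-capability comparison the abstract describes.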
Comparison of Various Optimization Methods for Calibration of Conceptual Rainfall-Runoff Models
NASA Astrophysics Data System (ADS)
Bhatt, Divya; Jain, Ashu
2010-05-01
Runoff forecasts are needed in many water resources activities such as flood and drought management, irrigation practice, and water distribution systems. Runoff is generally forecasted using rainfall-runoff models driven by hydrologic data in the catchment. Computer-based hydrologic models have become popular with practicing hydrologists and water resources engineers for performing hydrologic forecasts and for managing water systems. The Rainfall-Runoff Library (RRL) is computer software developed by the Cooperative Research Centre for Catchment Hydrology (CRCCH), Australia. The RRL consists of five different conceptual rainfall-runoff models and has been used in many water resources applications in Australia. RRL is designed to simulate catchment runoff using daily rainfall and evapotranspiration data. In this paper, the results of an investigation on the use of different optimization methods for the calibration of the conceptual rainfall-runoff models available in the RRL toolkit are presented. Of the five conceptual models in the RRL toolkit, the AWBM (Australian Water Balance Model) has been employed. Seven optimization methods are investigated for the calibration of the AWBM: uniform random sampling, pattern search, multi-start pattern search, Rosenbrock search, Rosenbrock multi-start search, Shuffled Complex Evolution (SCE-UA), and a Genetic Algorithm (GA). Trial-and-error procedures were employed to arrive at the best values of the parameters involved in each optimizer when developing the AWBM. The results obtained from the best configuration of the AWBM are presented here for all optimization methods. The daily rainfall and runoff data derived from Bird Creek Basin, Oklahoma, USA, have been employed to develop all the models included here. A wide range of error statistics has been used to evaluate the performance of all the models developed in this study. It has been found that
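Uniform random sampling, the simplest of the optimization methods listed above, can be sketched for a toy two-parameter runoff model: draw parameter sets uniformly within bounds and keep the set with the lowest error statistic. The single-linear-reservoir stand-in and its parameter names are hypothetical, not the AWBM itself.

```python
import math
import random

def rmse(obs, sim):
    """Root mean square error between observed and simulated flow series."""
    return math.sqrt(sum((o - s) ** 2 for o, s in zip(obs, sim)) / len(obs))

def uniform_random_calibration(run_model, observed, bounds, n_samples, seed=0):
    """Draw parameter sets uniformly within bounds; keep the lowest-RMSE one."""
    rng = random.Random(seed)
    best, best_err = None, float("inf")
    for _ in range(n_samples):
        params = {name: rng.uniform(lo, hi) for name, (lo, hi) in bounds.items()}
        err = rmse(observed, run_model(params))
        if err < best_err:
            best, best_err = params, err
    return best, best_err

rain = [0.0, 10.0, 5.0, 0.0, 0.0]                # daily rainfall (toy)

def toy_runoff(p):
    """Single linear reservoir: a stand-in for a conceptual runoff model."""
    store, flow = 0.0, []
    for r in rain:
        store += r * p["runoff_coeff"]            # effective rainfall into store
        q = store * p["recession"]                # outflow proportional to storage
        store -= q
        flow.append(q)
    return flow

observed = toy_runoff({"runoff_coeff": 0.6, "recession": 0.3})
best, err = uniform_random_calibration(
    toy_runoff, observed,
    {"runoff_coeff": (0.1, 1.0), "recession": (0.05, 0.9)}, 2000)
```

The same objective function can be handed unchanged to the smarter optimizers in the list (pattern search, SCE-UA, GA); only the sampling strategy differs.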
Including sugar cane in the agro-ecosystem model ORCHIDEE-STICS: calibration and validation
NASA Astrophysics Data System (ADS)
Valade, A.; Vuichard, N.; Ciais, P.; Viovy, N.
2011-12-01
Sugarcane is currently the most efficient bioenergy crop with regard to the energy produced per hectare. With approximately half the global bioethanol production in 2005, and a devoted land area expected to expand globally in the years to come, sugar cane is at the heart of the biofuel debate. Dynamic global vegetation models coupled with agronomical models are powerful and novel tools to tackle many of the environmental issues related to biofuels, provided they are carefully calibrated and validated against field observations. Here we adapt the agro-terrestrial model ORCHIDEE-STICS for sugar cane simulations. Observational data on LAI are used to evaluate the sensitivity of the model to parameters of nitrogen absorption and phenology, which are calibrated in a systematic way for six sites in Australia and La Reunion. We find that the optimal set of parameters is highly dependent on the sites' characteristics and that the model can reproduce the evolution of LAI satisfactorily. This careful calibration of ORCHIDEE-STICS for sugar cane biomass production for different locations and technical itineraries provides a strong basis for further analysis of the impacts of bioenergy-related land use change on carbon cycle budgets. As a next step, a sensitivity analysis is carried out to estimate the uncertainty of the model in biomass and carbon flux simulation due to its parameterization.
NASA Astrophysics Data System (ADS)
Walton, Richard S.; Hunter, Heather M.
2009-11-01
Quantifying relationships between stream water quality and catchment land uses is a major goal of many water quality monitoring programs. This is a challenging task that is rarely achieved through simple analysis of raw data alone. Multiple regression analysis provides one approach which, despite significant limitations, can be successful when very large data sets are available and only annual estimates are required. However, regression techniques have limited application to sub-annual data sets. We present a new method for isolating the water quality responses of different land uses from monitoring data through hydrological model calibration, using a process of simultaneous calibration at several monitoring sites. In addition to model parameters, model algorithm complexity and the number of land-attribute groups are also used as calibration 'parameters'. This helps increase model parameter uniqueness and model predictive certainty. We applied the technique to water quality data from the Johnstone River catchment (1602 km²) in north-east Australia, using the HSPF model. The data comprised >4000 samples from over five years of monitoring at 16 sites, which drained sub-catchments of differing land area and proportions of each land use. Monitoring occurred at flow gauging sites during high stream flows, and regularly at all sites during non-event periods. Variables modelled included discharge, suspended sediment, and various forms of nitrogen and phosphorus. The calibration process aimed to maximise both goodness-of-fit and parameter sensitivity. We achieved a substantial simplification of HSPF algorithms without appreciable reduction in goodness-of-fit by a combination of fixing parameters, tying parameters, and introducing new, simpler equations. Two key calibration tools were reducing the number of land-use groups (by combining land uses) and tying parameters between the three flow paths modelled (surface flow, interflow and base flow). These in turn
Calibration and validation of an integrated nitrate transport model within a well capture zone.
Bonton, Alexandre; Bouchard, Christian; Rouleau, Alain; Rodriguez, Manuel J; Therrien, René
2012-02-01
Groundwater contamination by nitrate was investigated in an agricultural area in southern Quebec, Canada, where a municipal well is the local source of drinking water. A network of 38 piezometers was installed within the capture zone of the municipal well to monitor water table levels and nitrate concentrations in the aquifer. Nitrate concentrations were also measured in the municipal well. A Water flow and Nitrate transport Global Model (WNGM) was developed to simulate the impact of agricultural activities on nitrate concentrations in both the aquifer and municipal well. The WNGM first uses the Agriflux model to simulate vertical water and nitrate fluxes below the root zone for each of the seventy agricultural fields located within the capture zone of the municipal well. The WNGM then uses the HydroGeoSphere model to simulate three-dimensional variably-saturated groundwater flow and nitrate transport in the aquifer using water and nitrate fluxes computed with the Agriflux model as the top boundary conditions. The WNGM model was calibrated by reproducing water levels measured from 2005 to 2007 in the network of piezometers and nitrate concentrations measured in the municipal well from 1997 to 2007. The nitrate concentrations measured in the network of piezometers, however, showed greater variability than in the municipal well and could not be reproduced by the calibrated model. After calibration, the model was validated by successfully reproducing the decrease of nitrate concentrations observed in the municipal well in 2006 and 2007. Although it cannot predict nitrate concentrations in individual piezometers, the calibrated and validated WNGM can be used to assess the impact of changes in agricultural practices on global nitrate concentrations in the aquifer and in the municipal well.
Modeling rare earth complexes: Sparkle/AM1 parameters for thulium (III)
NASA Astrophysics Data System (ADS)
Freire, Ricardo O.; Rocha, Gerd B.; Simas, Alfredo M.
2005-08-01
The Sparkle/AM1 model, recently defined for Eu(III), Gd(III) and Tb(III) [R.O. Freire, G.B. Rocha, A.M. Simas, Inorg. Chem. 44 (2005) 3299], is extended to Tm(III). A set of 15 structures of high crystallographic quality from the Cambridge Crystallographic Database, with ligands chosen to be representative of all complexes with nitrogen or oxygen directly bonded to the Tm(III) ion, was used as a training set. For the 15 complexes, the Sparkle/AM1 unsigned mean error, for all interatomic distances between the Tm(III) ion and the oxygen or nitrogen ligand atoms of the first sphere of coordination, is 0.07 Å, a level of accuracy useful for luminescent complex design.
Yan, Huiping; Qian, Yun; Lin, Guang; Leung, Lai-Yung R.; Yang, Ben; Fu, Q.
2014-03-25
Convective parameterizations used in weather and climate models all display sensitivity to model resolution and variable skill in different climatic regimes. Although parameters in convective schemes can be calibrated using observations to reduce model errors, it is not clear whether optimal parameters calibrated on regional data can robustly improve model skill across different model resolutions and climatic regimes. In this study, this issue is investigated using a regional modeling framework based on the Weather Research and Forecasting (WRF) model. To quantify the response and sensitivity of model performance to model parameters, we identified five key input parameters and specified their ranges in the Kain-Fritsch (KF) convection scheme in WRF and calibrated them across different spatial resolutions, climatic regimes, and radiation schemes using observed precipitation data. Results show that the optimal values of the five input parameters in the KF scheme are close across experiments, and that model sensitivity and error exhibit a similar dependence on the input parameters in all experiments conducted in this study, despite differences in precipitation climatology. We found that the model's overall performance in simulating precipitation is most sensitive to the coefficients of downdraft (Pd) and entrainment (Pe) mass flux and the starting height of downdraft (Ph). However, we found that rainfall biases, which are probably more related to structural errors, still exist over some regions in the simulation even with the optimal parameters, suggesting further studies are needed to identify the sources of uncertainties and reduce the model biases or structural errors associated with missed or misrepresented physical processes and/or potential problems with the modeling framework.
J.C. Rowland; D.R. Harp; C.J. Wilson; A.L. Atchley; V.E. Romanovsky; E.T. Coon; S.L. Painter
2016-02-02
This Modeling Archive is in support of an NGEE Arctic publication available at doi:10.5194/tc-10-341-2016. This dataset contains an ensemble of thermal-hydro soil parameters including porosity, thermal conductivity, thermal conductivity shape parameters, and residual saturation of peat and mineral soil. The ensemble was generated using a Null-Space Monte Carlo analysis of parameter uncertainty based on a calibration to soil temperatures collected at the Barrow Environmental Observatory site by the NGEE team. The micro-topography of ice wedge polygons present at the site is included in the analysis using three 1D column models to represent polygon center, rim and trough features. The Arctic Terrestrial Simulator (ATS) was used in the calibration to model multiphase thermal and hydrological processes in the subsurface.
Wind waves modelling on the water body with coupled WRF and WAVEWATCH III models
NASA Astrophysics Data System (ADS)
Kuznetsova, Alexandra; Troitskaya, Yuliya; Kandaurov, Alexander; Baydakov, Georgy; Vdovin, Maxim; Papko, Vladislav; Sergeev, Daniil
2015-04-01
Simulation of ocean and sea waves is an accepted instrument for the improvement of weather forecasts. Wave modelling with coupled models is well developed for open seas [1] but less so for moderate and small inland reservoirs and lakes, though it is of considerable interest for inland navigation. Our goal is to tune the WAVEWATCH III model to the conditions of an inland reservoir and to carry out simulations of surface wind waves with the coupled WRF (Weather Research and Forecasting) and WAVEWATCH III models. The Gorky Reservoir, an artificial lake in the central part of the Volga River formed by a hydroelectric dam, was considered as an example of an inland reservoir. In contrast to [2], where moderate constant winds (u10 up to 9 m/s) of different directions blowing steadily over the whole surface of the reservoir were considered, here we apply the atmospheric model WRF to provide the wind input to WAVEWATCH III. WRF computations were performed on the Yellowstone supercomputer for 4 nested domains with a minimum scale of 1 km. The WAVEWATCH III model was tuned for the conditions of the Gorky Reservoir. Satellite topographic data on altitudes ranging from 56.6° N to 57.5° N and from 42.9° E to 43.5° E with increments of 0.00833° in both directions were used. 31 frequencies ranging from 0.2 Hz to 4 Hz and 30 directions were considered. The minimal significant wave height was changed to a lower value. The waves in the model developed from an initial seeding spectral distribution (Gaussian in frequency and space, cosine in direction). The range of the significant wave height observed in the numerical experiment was from less than 1 cm up to 30 cm. The field experiments were carried out in the southern part of the Gorky Reservoir from a boat [2, 3]. 1-D spectra from the field experiment were compared with those obtained in the numerical experiments with the different flux parameterizations provided in WAVEWATCH III, both with constant wind input and with WRF wind input. For all the
Not Available
1981-10-29
This volume contains a description of the software comprising the National Utility Financial Statement Model (NUFS). This is the third of three volumes describing NUFS provided by ICF Incorporated under contract DEAC-01-79EI-10579. The three volumes are entitled: model overview and description, user's guide, and software guide.
Calibration of a bubble evolution model to observed bubble incidence in divers.
Gault, K A; Tikuisis, P; Nishi, R Y
1995-09-01
The method of maximum likelihood was used to calibrate a probabilistic bubble evolution model against data of bubbles detected in divers. These data were obtained from a diverse set of 2,064 chamber man-dives involving air and heliox with and without oxygen decompression. Bubbles were measured with Doppler ultrasound and graded according to the Kisman-Masurel code from which a single maximum bubble grade (BG) per diver was compared to the maximum bubble radius (Rmax) predicted by the model. This comparison was accomplished using multinomial statistics by relating BG to Rmax through a series of probability functions. The model predicted the formation of the bubble according to the critical radius concept and its evolution was predicted by assuming a linear rate of inert gas exchange across the bubble boundary. Gas exchange between the model compartment and blood was assumed to be perfusion-limited. The most successful calibration of the model was found using a trinomial grouping of BG according to no bubbles, low, and high bubble activity, and by assuming a single tissue compartment. Parameter estimations converge to a tissue volume of 0.00036 cm3, a surface tension of 5.0 dyne.cm-1, respective time constants of 27.9 and 9.3 min for nitrogen and helium, and respective Ostwald tissue solubilities of 0.0438 and 0.0096. Although not part of the calibration algorithm, the predicted evolution of bubble size compares reasonably well with the temporal recordings of BGs.
Simplification of high order polynomial calibration model for fringe projection profilometry
NASA Astrophysics Data System (ADS)
Yu, Liandong; Zhang, Wei; Li, Weishi; Pan, Chengliang; Xia, Haojie
2016-10-01
In fringe projection profilometry systems, high-order polynomial calibration models can be employed to improve accuracy. However, fitting a high-order polynomial model with least-squares algorithms is not stable. In this paper, a novel method is presented to analyze the significance of each polynomial term and simplify the high-order polynomial calibration model. Term significance is evaluated by comparing the loading-vector elements of the first few principal components obtained with principal component analysis, and trivial terms are identified and removed from the high-order polynomial calibration model. As a result, the high-order model is simplified with a significant improvement in computational stability and little loss of reconstruction accuracy. An interesting finding is that some zeroth- and first-order terms, as well as some high-order terms related to the image direction perpendicular to the phase-change direction, are trivial for this specific problem. Experimental results are shown to validate the proposed method.
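The term-pruning idea above, ranking polynomial terms by their loadings in the first few principal components and flagging the low-loading ones as trivial, can be sketched as follows. The variance-weighted scoring rule and the synthetic data are assumptions for illustration, not the authors' exact algorithm:

```python
import numpy as np

def term_significance(X, n_components=3):
    """Score each column (polynomial term) of X by the magnitude of its
    loadings in the first few principal components of the standardized
    term matrix; a near-zero score marks a candidate trivial term."""
    Xc = X - X.mean(axis=0)
    Xc = Xc / (Xc.std(axis=0) + 1e-12)   # guard against constant columns
    # rows of Vt are the principal-component loading vectors
    _, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    k = min(n_components, Vt.shape[0])
    # weight each component's loadings by its share of explained variance
    weights = s[:k] ** 2 / np.sum(s[:k] ** 2)
    return np.abs(Vt[:k]).T @ weights    # one score per term

# synthetic design matrix: constant, linear, quadratic, cubic terms
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 200)
terms = np.column_stack([np.ones_like(x), x, x**2, x**3])
scores = term_significance(terms)
print(scores)
```

On this toy input the constant column carries no variance after centering, so its score collapses toward zero, which is exactly the signature the paper uses to drop a term from the calibration model.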
A model-based approach to the spatial and spectral calibration of NIRSpec onboard JWST
NASA Astrophysics Data System (ADS)
Dorner, B.; Giardino, G.; Ferruit, P.; Alves de Oliveira, C.; Birkmann, S. M.; Böker, T.; De Marchi, G.; Gnata, X.; Köhler, J.; Sirianni, M.; Jakobsen, P.
2016-08-01
Context. The NIRSpec instrument for the James Webb Space Telescope (JWST) can be operated in multiobject spectroscopy (MOS), long-slit, and integral field unit (IFU) mode with spectral resolutions from 100 to 2700. Its MOS mode uses about a quarter of a million individually addressable minislits for object selection, covering a field of view of ~9 arcmin2. Aims: The pipeline used to extract wavelength-calibrated spectra from NIRSpec detector images relies heavily on a model of NIRSpec optical geometry. We demonstrate how dedicated calibration data from a small subset of NIRSpec modes and apertures can be used to optimize this parametric model to the necessary levels of fidelity. Methods: Following an iterative procedure, the initial fiducial values of the model parameters are manually adjusted and then automatically optimized, so that the model predicted location of the images and spectral lines from the fixed slits, the IFU, and a small subset of the MOS apertures matches their measured location in the main optical planes of the instrument. Results: The NIRSpec parametric model is able to reproduce the spatial and spectral position of the input spectra with high fidelity. The intrinsic accuracy (1-sigma, rms) of the model, as measured from the extracted calibration spectra, is better than 1/10 of a pixel along the spatial direction and better than 1/20 of a resolution element in the spectral direction for all of the grating-based spectral modes. This is fully consistent with the corresponding allocation in the spatial and spectral calibration budgets of NIRSpec.
ERIC Educational Resources Information Center
Nyasulu, Frazier; Barlag, Rebecca
2011-01-01
The well-known colorimetric determination of the equilibrium constant of the iron(III)-thiocyanate complex is simplified by preparing solutions in a cuvette. For the calibration plot, 0.10 mL increments of 0.00100 M KSCN are added to 4.00 mL of 0.200 M Fe(NO₃)₃, and for the equilibrium solutions, 0.50 mL increments of…
On the behavior of mud floc size distribution: model calibration and model behavior
NASA Astrophysics Data System (ADS)
Mietta, Francesca; Chassagne, Claire; Verney, Romaric; Winterwerp, Johan C.
2011-03-01
In this paper, we study a population balance equation (PBE) where flocs are distributed into classes according to their mass. Each class i contains i primary particles with mass m_p and size L_p. All differently sized flocs can aggregate, binary breakup into two equally sized flocs is used, and the flocs' fractal dimension is d_0 = 2, independent of their size. The collision efficiency is kept constant, and the collision frequency derived by Saffman and Turner (J Fluid Mech 1:16-30, 1956) is used. For the breakup rate, the formulation by Winterwerp (J Hydraul Eng Res 36(3):309-326, 1998), which accounts for the porosity of flocs, is used. We show that the mean floc size computed with the PBE varies with the shear rate as the Kolmogorov microscale, as observed both in the laboratory and in situ. Moreover, the equilibrium mean floc size varies linearly with a global parameter P which is proportional to the ratio between the rates of aggregation and breakup. The ratio between the parameters of aggregation and breakup can therefore be estimated analytically from the observed equilibrium floc size, and the parameter for aggregation can be calibrated from the temporal evolution of the mean floc size. We calibrate the PBE model using mixing-jar flocculation experiments; see Mietta et al. (J Colloid Interface Sci 336(1):134-141, 2009a; Ocean Dyn 59:751-763, 2009b) for details. We show that this model can reproduce the experimental data fairly accurately. The collision efficiency α and the ratio between the aggregation and breakup parameters are shown to decrease linearly with increasing absolute value of the ζ-potential, both for mud and kaolinite suspensions. Suspensions at high pH and different dissolved-salt types and concentrations have been used. We show that the temporal evolution of the floc size distribution computed with this PBE is very similar to that computed with the PBE developed by Verney et al. (Cont Shelf Res, 2010) where classes are distributed
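A minimal mass-class population balance in the spirit described above can be integrated explicitly. The constant aggregation kernel and the size-proportional binary breakup of even classes below are simplified stand-ins for the Saffman-Turner and Winterwerp kernels, chosen only to show the class bookkeeping and the mass conservation it must satisfy:

```python
import numpy as np

def pbe_step(n, ka, kb, dt):
    """One explicit Euler step of a toy population balance: class i holds
    flocs of i primary particles (i = 1..N). All pairs aggregate at a
    constant rate ka; even-sized flocs break into two equal halves at a
    rate proportional to kb and their size (illustrative kernels only)."""
    N = len(n)
    dn = np.zeros(N)
    for i in range(1, N + 1):                 # aggregation i + j -> i+j
        for j in range(1, N + 1 - i):
            rate = ka * n[i - 1] * n[j - 1]
            dn[i - 1] -= rate
            dn[j - 1] -= rate
            dn[i + j - 1] += rate
    for i in range(2, N + 1, 2):              # binary breakup i -> 2 * i/2
        rate = kb * i * n[i - 1]
        dn[i - 1] -= rate
        dn[i // 2 - 1] += 2 * rate
    return n + dt * dn

def mean_mass(n):
    """Mass-weighted mean class, a crude proxy for mean floc size."""
    i = np.arange(1, len(n) + 1)
    return np.sum(i * i * n) / np.sum(i * n)

n = np.zeros(16)
n[0] = 1.0                                    # start from primary particles
for _ in range(2000):
    n = pbe_step(n, ka=0.5, kb=0.05, dt=0.01)
print(mean_mass(n))
```

Both mechanisms conserve total primary-particle mass exactly, so the summed mass is a built-in check on the implementation; the mean class grows until aggregation and breakup balance, mirroring the equilibrium floc size discussed in the abstract.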
NASA Astrophysics Data System (ADS)
Sueoka, Stacey
2016-05-01
The Daniel K. Inouye Solar Telescope (DKIST) will have a suite of first-light polarimetric instrumentation requiring calibration of a complex off-axis optical path. The DKIST polarization calibration process requires modeling and fitting for several optical, thermal, and mechanical effects. Three-dimensional polarization ray-trace codes (Polaris-M) allow modeling of the polarization errors inherent in assuming a linear retardation as a function of angle of incidence for our calibration retarders at the Gregorian and Coudé foci. Stress-induced retardation effects from substrate and coating absorption, mechanical mounting stresses, and inherent polishing uniformity tolerances introduce polarization effects at significant levels. These effects require careful characterization and modeling for mitigation during design, construction, calibration, and science observations. Modeling efforts, amplitude estimates, and mitigation efforts will be presented for the suite of DKIST calibration optics planned for first-light operations.
Semi-automated calibration method for modelling of mountain permafrost evolution in Switzerland
NASA Astrophysics Data System (ADS)
Marmy, A.; Rajczak, J.; Delaloye, R.; Hilbich, C.; Hoelzle, M.; Kotlarski, S.; Lambiel, C.; Noetzli, J.; Phillips, M.; Salzmann, N.; Staub, B.; Hauck, C.
2015-09-01
Permafrost is a widespread phenomenon in the European Alps. Many important topics, such as the future evolution of permafrost under climate change and the detection of permafrost at potential natural-hazard sites, are of major concern to our society. Numerical permafrost models are the only tools which facilitate the projection of the future evolution of permafrost. Due to the complexity of the processes involved and the heterogeneity of Alpine terrain, models must be carefully calibrated and results should be compared with observations at the site (borehole) scale. However, a large number of local point data are necessary to obtain a broad overview of the thermal evolution of mountain permafrost over a larger area, such as the Swiss Alps, and site-specific model calibration of each point would be time-consuming. To address this issue, this paper presents a semi-automated calibration method using Generalized Likelihood Uncertainty Estimation (GLUE) as implemented in a 1-D soil model (CoupModel) and applies it to six permafrost sites in the Swiss Alps prior to long-term permafrost evolution simulations. We show that this automated calibration method is able to accurately reproduce the main thermal characteristics, with some limitations at sites with unique conditions such as 3-D air or water circulation, which have to be calibrated manually. The calibration obtained was used for RCM-based long-term simulations under the A1B climate scenario, specifically downscaled at each borehole site. The projection shows general permafrost degradation, with thawing at 10 m depth, partially even reaching 20 m depth, by the end of the century, but with different timing among the sites. The degradation is more rapid at bedrock sites, whereas ice-rich sites with a blocky surface cover showed a reduced sensitivity to climate change. The snow cover duration is expected to be reduced drastically (by 20 to 37 %), impacting the ground thermal regime. However
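The GLUE procedure named above reduces to a simple recipe: sample parameter sets from prior ranges, score each with a likelihood measure, and retain the "behavioural" sets that exceed a threshold. The sketch below uses a toy linear model and Nash-Sutcliffe efficiency as the measure; it is a generic illustration of GLUE, not the CoupModel setup of the paper:

```python
import random

def glue(model, obs, priors, threshold, n=5000, seed=0):
    """GLUE sketch: sample parameter sets from uniform priors, keep the
    behavioural sets whose Nash-Sutcliffe efficiency (NS) exceeds the
    threshold, returning (NS, parameters) pairs for the retained sets."""
    rng = random.Random(seed)
    mean_obs = sum(obs) / len(obs)
    var_obs = sum((o - mean_obs) ** 2 for o in obs)
    behavioural = []
    for _ in range(n):
        theta = [rng.uniform(lo, hi) for lo, hi in priors]
        sim = model(theta)
        sse = sum((o - s) ** 2 for o, s in zip(obs, sim))
        ns = 1.0 - sse / var_obs
        if ns > threshold:
            behavioural.append((ns, theta))
    return behavioural

# toy "ground temperature" model with two hypothetical parameters
model = lambda th: [th[0] + th[1] * t for t in range(10)]
obs = [1.0 + 0.5 * t for t in range(10)]      # truth: th = (1.0, 0.5)
accepted = glue(model, obs, [(0.0, 2.0), (0.0, 1.0)], threshold=0.9)
best = max(accepted)[1]
print(len(accepted), best)
```

The ensemble of accepted sets, rather than a single optimum, is what makes GLUE attractive for the uncertainty-aware long-term projections described in the abstract.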
NASA Astrophysics Data System (ADS)
Han, Feng; Zheng, Yi
2016-02-01
While watershed water quality (WWQ) models have been widely used to support water quality management, their profound modeling uncertainty remains an unaddressed issue. Data assimilation via Bayesian calibration is a promising solution to the uncertainty, but has been rarely practiced for WWQ modeling. This study applied multiple-response Bayesian calibration (MRBC) to SWAT, a classic WWQ model, using the nitrate pollution in the Newport Bay Watershed (southern California, USA) as the study case. How typical input and model structure errors would impact modeling uncertainty, parameter identification and management decision-making was systematically investigated through both synthetic and real-situation modeling cases. The main study findings include: (1) with an efficient sampling scheme, MRBC is applicable to WWQ modeling in characterizing its parametric and predictive uncertainties; (2) incorporating hydrology responses, which are less susceptible to input and model structure errors than water quality responses, can improve the Bayesian calibration results and benefit potential modeling-based management decisions; and (3) the value of MRBC to modeling-based decision-making essentially depends on pollution severity, management objective and decision maker's risk tolerance.
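The core of a multiple-response Bayesian calibration, a likelihood that sums over several response types (here a "flow" and a "quality" series, each with its own error scale) explored with MCMC, can be sketched with a toy one-parameter model. Nothing below is SWAT or the study's actual sampler; the model, data, and error variances are invented for illustration:

```python
import math
import random

def log_likelihood(theta, responses):
    """Gaussian log-likelihood summed over multiple response types,
    each entry being (observations, model function, error sigma)."""
    ll = 0.0
    for obs, model, sigma in responses:
        for o, m in zip(obs, model(theta)):
            ll += -0.5 * ((o - m) / sigma) ** 2 - math.log(sigma)
    return ll

def metropolis(log_like, theta0, step, n_iter=5000, seed=42):
    """Random-walk Metropolis sampler over the model parameters;
    returns the chain of the first parameter component."""
    rng = random.Random(seed)
    theta, ll = theta0, log_like(theta0)
    chain = []
    for _ in range(n_iter):
        prop = [t + rng.gauss(0, step) for t in theta]
        ll_prop = log_like(prop)
        if math.log(rng.random()) < ll_prop - ll:   # accept/reject
            theta, ll = prop, ll_prop
        chain.append(theta[0])
    return chain

# toy model: one parameter scales both a flow and a quality response
flow_model = lambda th: [th[0] * x for x in (1.0, 2.0, 3.0)]
qual_model = lambda th: [0.5 * th[0] * x for x in (1.0, 2.0, 3.0)]
obs_flow = [2.0, 4.0, 6.0]      # both series consistent with th[0] = 2
obs_qual = [1.0, 2.0, 3.0]
responses = [(obs_flow, flow_model, 0.5), (obs_qual, qual_model, 0.5)]
chain = metropolis(lambda th: log_likelihood(th, responses), [1.0], 0.2)
post_mean = sum(chain[1000:]) / len(chain[1000:])
print(post_mean)
```

Because the flow observations constrain the shared parameter independently of the quality observations, adding them tightens the posterior, which is the mechanism behind the paper's finding that hydrology responses improve the water-quality calibration.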
Automatic Multi-Scale Calibration Procedure for Nested Hydrological-Hydrogeological Regional Models
NASA Astrophysics Data System (ADS)
Labarthe, B.; Abasq, L.; Flipo, N.; de Fouquet, C. D.
2014-12-01
Large hydrosystem modelling and understanding is a complex process depending on regional and local processes. A nested interface concept has been implemented in the hydrosystem modelling platform for a large alluvial plain model (300 km2) part of a 11000 km2 multi-layer aquifer system, included in the Seine basin (65000 km2, France). The platform couples hydrological and hydrogeological processes through four spatially distributed modules (Mass balance, Unsaturated Zone, River and Groundwater). An automatic multi-scale calibration procedure is proposed. Using different data sets from regional scale (117 gauging stations and 183 piezometers over the 65000 km2) to the intermediate scale(dense past piezometric snapshot), it permits the calibration and homogenization of model parameters over scales.The stepwise procedure starts with the optimisation of the water mass balance parameters at regional scale using a conceptual 7 parameters bucket model coupled with the inverse modelling tool PEST. The multi-objective function is derived from river discharges and their de-composition by hydrograph separation. The separation is performed at each gauging station using an automatic procedure based one Chapman filter. Then, the model is run at the regional scale to provide recharge estimate and regional fluxes to the groundwater local model. Another inversion method is then used to determine the local hydrodynamic parameters. This procedure used an initial kriged transmissivity field which is successively updated until the simulated hydraulic head distribution equals a reference one obtained by krigging. Then, the local parameters are upscaled to the regional model by renormalisation procedure.This multi-scale automatic calibration procedure enhances both the local and regional processes representation. Indeed, it permits a better description of local heterogeneities and of the associated processes which are transposed into the regional model, improving the overall performances
On Inertial Body Tracking in the Presence of Model Calibration Errors
Miezal, Markus; Taetz, Bertram; Bleser, Gabriele
2016-01-01
In inertial body tracking, the human body is commonly represented as a biomechanical model consisting of rigid segments with known lengths and connecting joints. The model state is then estimated via sensor fusion methods based on data from attached inertial measurement units (IMUs). This requires the relative poses of the IMUs w.r.t. the segments—the IMU-to-segment calibrations, subsequently called I2S calibrations—to be known. Since calibration methods based on static poses, movements and manual measurements are still the most widely used, potentially large human-induced calibration errors have to be expected. This work compares three newly developed/adapted extended Kalman filter (EKF) and optimization-based sensor fusion methods with an existing EKF-based method w.r.t. their segment orientation estimation accuracy in the presence of model calibration errors with and without using magnetometer information. While the existing EKF-based method uses a segment-centered kinematic chain biomechanical model and a constant angular acceleration motion model, the newly developed/adapted methods are all based on a free segments model, where each segment is represented with six degrees of freedom in the global frame. Moreover, these methods differ in the assumed motion model (constant angular acceleration, constant angular velocity, inertial data as control input), the state representation (segment-centered, IMU-centered) and the estimation method (EKF, sliding window optimization). In addition to the free segments representation, the optimization-based method also represents each IMU with six degrees of freedom in the global frame. In the evaluation on simulated and real data from a three segment model (an arm), the optimization-based method showed the smallest mean errors, standard deviations and maximum errors throughout all tests. It also showed the lowest dependency on magnetometer information and motion agility. Moreover, it was insensitive w.r.t. I2S position and
Finsterle, S.; Kowalsky, M.B.
2010-10-15
We propose a modification to the Levenberg-Marquardt minimization algorithm for a more robust and more efficient calibration of highly parameterized, strongly nonlinear models of multiphase flow through porous media. The new method combines the advantages of truncated singular value decomposition with those of the classical Levenberg-Marquardt algorithm, thus enabling a more robust solution of underdetermined inverse problems with complex relations between the parameters to be estimated and the observable state variables used for calibration. The truncation limit separating the solution space from the calibration null space is re-evaluated during the iterative calibration process. In between these re-evaluations, fewer forward simulations are required, compared to the standard approach, to calculate the approximate sensitivity matrix. Truncated singular values are used to calculate the Levenberg-Marquardt parameter updates, ensuring that safe small steps along the steepest-descent direction are taken for highly correlated parameters of low sensitivity, whereas efficient quasi-Gauss-Newton steps are taken for independent parameters with high impact. The performance of the proposed scheme is demonstrated for a synthetic data set representing infiltration into a partially saturated, heterogeneous soil, where hydrogeological, petrophysical, and geostatistical parameters are estimated based on the joint inversion of hydrological and geophysical data.
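The proposed update, a Levenberg-Marquardt step computed in the truncated-SVD basis of the Jacobian so that null-space directions receive no update, can be sketched as follows. The truncation rule and the toy exponential-fitting problem are illustrative assumptions, not the authors' iTOUGH2 implementation:

```python
import numpy as np

def tsvd_lm_step(J, r, lam, trunc_tol=1e-8):
    """One Levenberg-Marquardt update in the truncated-SVD basis of the
    Jacobian J with residual vector r: singular directions below
    trunc_tol * s_max are assigned to the calibration null space and get
    no update, while the damping lam shortens steps along poorly
    constrained directions (sketch of the idea only)."""
    U, s, Vt = np.linalg.svd(J, full_matrices=False)
    keep = s > trunc_tol * s[0]          # solution space vs. null space
    s, U, Vt = s[keep], U[:, keep], Vt[keep]
    factors = s / (s ** 2 + lam)         # damped inverse singular values
    return -Vt.T @ (factors * (U.T @ r))

# toy nonlinear least squares: fit y = a * exp(b * x) to noiseless data
x = np.linspace(0.0, 1.0, 20)
y = 2.0 * np.exp(-1.5 * x)
theta = np.array([1.0, 0.0])             # initial guess for (a, b)
for _ in range(50):
    a, b = theta
    model = a * np.exp(b * x)
    r = model - y
    J = np.column_stack([np.exp(b * x),          # d model / d a
                         a * x * np.exp(b * x)]) # d model / d b
    theta = theta + tsvd_lm_step(J, r, lam=1e-3)
print(theta)
```

For small lam the step approaches Gauss-Newton along well-constrained directions, while large lam bends it toward steepest descent, which is the robustness/efficiency trade-off the abstract describes.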
Hydrologic Modeling in the Kenai River Watershed using Event Based Calibration
NASA Astrophysics Data System (ADS)
Wells, B.; Toniolo, H. A.; Stuefer, S. L.
2015-12-01
Understanding hydrologic changes is key for preparing for possible future scenarios. On the Kenai Peninsula in Alaska, the yearly salmon runs provide a valuable stimulus to the economy: they are the focus of a large commercial fishing fleet and also a prime tourist attraction. Modeling of anadromous waters provides a tool that assists in the prediction of future salmon run size. Beaver Creek, in Kenai, Alaska, is a lowland stream that has been modeled using the Army Corps of Engineers' event-based modeling package HEC-HMS. Using historic precipitation and discharge data, the model was calibrated to observed discharge values. The hydrologic parameters were measured in the field or calculated, while soil parameters were estimated and adjusted during calibration. With the calibrated parameters for HEC-HMS, discharge estimates can be used by other researchers studying the area and help guide communities and officials in making better-educated decisions regarding the changing hydrology in the area and the tied economic drivers.
The role of modeling in the calibration of the Chandra's optics
NASA Astrophysics Data System (ADS)
Jerius, Diab H.; Cohen, Lester; Edgar, Richard J.; Freeman, Mark; Gaetz, Terrance J.; Hughes, John P.; Nguyen, Dan; Podgorski, William A.; Tibbetts, Michael; Van Speybroeck, Leon P.; Zhao, Ping
2004-02-01
The mirrors flown in the Chandra Observatory are, without doubt, some of the most exquisite optics ever flown on a space mission. Their angular resolution is matched by no other X-ray observatory, existing or planned. The promise of that performance, along with a goal of achieving 1% calibration of the optics' characteristics, led to a decision early in the construction and assembly phase of the mission to develop an accurate and detailed model of the optics and their support structure. This model has served in both engineering and scientific capacities: as a cross-check of the design and a predictor of scientific performance; as a driver of the ground calibration effort; and as a diagnostic of the as-built performance. Finally, it serves, directly and indirectly, as the primary vehicle with which Chandra observers interpret the contribution of the optics' characteristics to their data. We present the underlying concepts in the model, as well as the mechanical, engineering, and metrology inputs. We discuss its use during ground calibration and as a characterization of on-orbit performance. Finally, we present measures of the model's accuracy, where further improvements may be made, and its applicability to other missions.
NASA Astrophysics Data System (ADS)
Seibert, Jan
2015-04-01
Simple runoff models with a low number of model parameters are often able to simulate catchment runoff reasonably well, but these models usually rely on model calibration, which makes their use in ungauged basins challenging. Here a dataset of 600+ gauged basins in the US was used to study how good a model performance could be achieved when, instead of streamflow data, only stream level data were available. The latter are obviously easier to observe, and in practice several approaches could be used for such stream level observations: water level loggers have become less expensive and easier to install; stream levels will in the near future be increasingly available from satellite remote sensing, resulting in evenly spaced time series; and community-based approaches (e.g., crowdhydrology.org) can offer level observations at irregular time intervals. Here we present a study where a runoff model (the HBV model) was calibrated for the 600+ gauged basins. Pretending that only stream level observations at different time intervals, representing the temporal resolution of the different observation approaches mentioned above, were available, the model was calibrated on these data subsets. Afterwards the simulations were evaluated against the full observed streamflow record. The results indicate that stream level data alone can already provide surprisingly good model simulation results in humid catchments, whereas in arid catchments some form of quantitative information (a streamflow observation or a regional average value) is needed to obtain good results. These results are encouraging for hydrological observations in data-scarce regions, as level observations are much easier to obtain than streamflow observations. Based on runoff modeling, it might be possible to derive streamflow series from level observations using loggers, satellites or community-based approaches. The approach presented here also allows comparing the value of different types of observations
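Calibrating against stage instead of discharge works because a monotone rating curve preserves rank order, so a rank-based objective such as Spearman correlation between simulated flow and observed level needs no quantitative flow information. A minimal sketch of that objective (illustrative; not the study's actual HBV objective function, and assuming no tied values):

```python
def spearman(a, b):
    """Spearman rank correlation for two equally long series without ties."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0] * len(v)
        for rank, idx in enumerate(order):
            r[idx] = rank
        return r
    ra, rb = ranks(a), ranks(b)
    n = len(a)
    d2 = sum((x - y) ** 2 for x, y in zip(ra, rb))
    return 1 - 6 * d2 / (n * (n * n - 1))

# a monotone rating curve preserves rank order, so simulated discharge
# can be scored against observed stage without any discharge data
flows = [3.0, 1.0, 4.0, 1.5, 5.0, 9.0, 2.0]   # simulated discharge
stage = [f ** 0.6 for f in flows]             # hypothetical rating curve
print(spearman(flows, stage))                 # → 1.0
```

A calibration driven by this objective fixes the timing and relative magnitude of flows; as the abstract notes for arid catchments, some quantitative anchor is still needed to pin down the absolute volumes.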
NASA Technical Reports Server (NTRS)
Annett, Martin S.; Horta, Lucas G.; Jackson, Karen E.; Polanco, Michael A.; Littell, Justin D.
2012-01-01
Two full-scale crash tests of an MD-500 helicopter were conducted in 2009 and 2010 at NASA Langley's Landing and Impact Research Facility in support of NASA's Subsonic Rotary Wing Crashworthiness Project. The first crash test was conducted to evaluate the performance of an externally mounted composite deployable energy absorber (DEA) under combined impact conditions. In the second crash test, the energy absorber was removed to establish baseline loads that are regarded as severe but survivable. The presence of this energy absorbing device reduced the peak impact acceleration levels by a factor of three. Accelerations and kinematic data collected from the crash tests were compared to a system-integrated finite element model of the test article developed in parallel with the test program. In preparation for the full-scale crash test, a series of sub-scale and MD-500 mass simulator tests were conducted to evaluate the impact performance of various components and subsystems, including new crush tubes and the DEA blocks. Parameters defined for the system-integrated finite element model were determined from these tests. Results from 19 accelerometers placed throughout the airframe were compared to finite element model responses. The model developed for the purposes of predicting acceleration responses from the first crash test was inadequate when evaluating the more severe conditions seen in the second crash test. A newly developed model calibration approach that includes uncertainty estimation, parameter sensitivity, impact shape orthogonality, and numerical optimization was used to calibrate model results for the full-scale crash test without the DEA. This combination of heuristic and quantitative methods identified modeling deficiencies, evaluated parameter importance, and proposed required model changes. The multidimensional calibration techniques presented here are particularly effective in identifying model adequacy. Acceleration results for the calibrated model were
Calibrating landscape process modelling with Caesium-137 data and typhoon rainfall records
NASA Astrophysics Data System (ADS)
Schoorl, J. M.; Chang, K. T.; Chiu, Y. J.; Veldkamp, A.
2009-04-01
Calibration of landscape evolution models (LEMs) needs long term input data on climate and soil redistribution. Rainfall data over the last decades can often be estimated from field stations and interpolation. A decadal estimate of soil redistribution can be derived from analysing the spatial variation of the Caesium-137 inventory in the soil. The objective of this case study is to calibrate LEM LAPSUS for a small watershed in Taiwan, using historical rainfall records and Caesium-137 derived soil redistribution estimates. In general, the point location soil redistribution estimates from the Caesium-137 activity can be modelled with LAPSUS within a certain margin of error. However, the LAPSUS soil redistribution maps can differ considerably from the point-interpolation derived maps. This is mainly due to the process based LAPSUS methodology, where the DEM and water flow routing are the main driving factors.
Real-scale 3D models of the scoliotic spine from biplanar radiography without calibration objects.
Moura, Daniel C; Barbosa, Jorge G
2014-10-01
This paper presents a new method for modelling the spines of subjects and making accurate 3D measurements using standard radiologic systems without requiring calibration objects. The method makes use of the focal distance and statistical models for estimating the geometrical parameters of the system. A dataset of 32 subjects was used to assess this method. The results show small errors for the main clinical indices, such as an RMS error of 0.49° for the Cobb angle, 0.50° for kyphosis, 0.38° for lordosis, and 2.62 mm for the spinal length. This method is the first to achieve this level of accuracy without requiring the use of calibration objects when acquiring radiographs. We conclude that the proposed method allows for the evaluation of scoliosis with a much simpler setup than currently available methods. PMID:24908193
Michalas, L. Marcelli, R.; Wang, F.; Brillard, C.; Theron, D.
2015-11-30
This paper presents the full modeling and a methodology for de-embedding interferometric scanning microwave microscopy measurements by means of dopant profile calibration. A Si calibration sample with areas of different boron-doping levels is used to that end. The analysis of the experimentally obtained S11 amplitudes based on the proposed model confirms the validity of the methodology. As a specific finding, changes in the tip radius between new and used tips have been clearly identified, leading to effective tip radius values of 45 nm and 85 nm, respectively. Experimental results are also discussed in terms of the effective area concept, taking into consideration details related to the nature of the tip-to-sample interaction.
Calibration of the k-ɛ model constants for use in CFD applications
NASA Astrophysics Data System (ADS)
Glover, Nina; Guillias, Serge; Malki-Epshtein, Liora
2011-11-01
The k-ɛ turbulence model is a popular choice in CFD modelling due to its robust nature and the fact that it has been well validated. However, it has been noted in previous research that the k-ɛ model has problems predicting flow separation as well as unconfined and transient flows. The model contains five empirical constants whose values were found through data fitting for a wide range of flows (Launder 1972), but ad-hoc adjustments are often made to these values depending on the situation being modeled. Here we use the example of flow within a regular street canyon to perform a Bayesian calibration of the model constants against wind tunnel data. This allows us to assess the sensitivity of the CFD model to changes in these constants, find the most suitable values for the constants, and quantify the uncertainty related to the constants and to the CFD model as a whole.
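The Bayesian calibration step can be sketched in a few lines (a linear toy model and synthetic reference data stand in for the CFD solver and the wind tunnel measurements; all numbers here are illustrative): random-walk Metropolis sampling of a single model constant under a Gaussian likelihood and a flat prior.

```python
import math
import random

def log_likelihood(c, xs, ys, sigma=0.1):
    """Gaussian misfit between data ys and the toy model prediction c * x."""
    return -sum((y - c * x) ** 2 for x, y in zip(xs, ys)) / (2 * sigma ** 2)

def metropolis(xs, ys, c0=1.0, steps=5000, width=0.05, seed=0):
    """Random-walk Metropolis chain over the constant c (flat prior)."""
    rng = random.Random(seed)
    c, ll = c0, log_likelihood(c0, xs, ys)
    samples = []
    for _ in range(steps):
        cand = c + rng.gauss(0.0, width)
        ll_cand = log_likelihood(cand, xs, ys)
        # Accept with probability min(1, likelihood ratio).
        if math.log(rng.random()) < ll_cand - ll:
            c, ll = cand, ll_cand
        samples.append(c)
    return samples
```

The spread of the retained samples is what quantifies the parameter uncertainty; in the study the expensive forward model is the CFD solve, not a closed-form expression.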
Cole, Charles R.; Bergeron, Marcel P.; Wurstner, Signe K.; Thorne, Paul D.; Orr, Samuel; Mckinley, Mathew I.
2001-05-31
This report describes a new initiative to strengthen the technical defensibility of predictions made with the Hanford site-wide groundwater flow and transport model. The focus is on characterizing major uncertainties in the current model. PNNL will develop and implement a calibration approach and methodology that can be used to evaluate alternative conceptual models of the Hanford aquifer system. The calibration process will involve a three-dimensional transient inverse calibration of each numerical model to historical observations of hydraulic and water quality impacts to the unconfined aquifer system from Hanford operations since the mid-1940s.
Li, Rui; Ye, Hongfei; Zhang, Weisheng; Ma, Guojun; Su, Yewang
2015-10-29
Spring constant calibration of the atomic force microscope (AFM) cantilever is of fundamental importance for quantifying the force between the AFM cantilever tip and the sample. The calibration within the framework of thin plate theory undoubtedly has a higher accuracy and broader scope than that within the well-established beam theory. However, thin plate theory-based accurate analytic determination of the constant has been perceived as an extremely difficult issue. In this paper, we implement the thin plate theory-based analytic modeling for the static behavior of rectangular AFM cantilevers, which reveals that the three-dimensional effect and Poisson effect play important roles in accurate determination of the spring constants. A quantitative scaling law is found that the normalized spring constant depends only on the Poisson's ratio, normalized dimension and normalized load coordinate. Both the literature and our refined finite element model validate the present results. The developed model is expected to serve as the benchmark for accurate calibration of rectangular AFM cantilevers.
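For comparison, the classical beam-theory result that the plate-theory analysis refines fits in one line (textbook formula; the numbers in the note below are typical silicon-cantilever values, not taken from the paper):

```python
def spring_constant_beam(E, w, t, L):
    """Normal spring constant (N/m) of a rectangular cantilever under an
    end load, from Euler-Bernoulli beam theory: k = E*w*t^3 / (4*L^3)."""
    return E * w * t ** 3 / (4 * L ** 3)
```

For E = 170 GPa, w = 30 µm, t = 2 µm, L = 200 µm this gives about 1.3 N/m; the paper's point is that such beam estimates ignore the Poisson and three-dimensional effects that the plate model captures, so the normalized spring constant also depends on the Poisson's ratio and the normalized dimensions.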
Extremely Low-Stress Triaxiality Tests in Calibration of Fracture Models in Metal-Cutting Simulation
NASA Astrophysics Data System (ADS)
Šebek, František; Kubík, Petr; Petruška, Jindřich; Hůlka, Jiří
2016-04-01
The cutting process now stands alongside machining, milling, and drilling as one of the most widespread manufacturing operations, used across various fields of engineering. From an economic point of view, it is desirable to maintain the process in the most effective way in terms of fracture surface quality or minimizing the burr. It is not possible to manage this experimentally in mass production. Therefore, it is convenient to use numerical computation. To include crack initiation and propagation in the computations, it is necessary to implement a suitable ductile fracture criterion. Uncoupled ductile fracture models need to be calibrated first from fracture tests, where test selection is crucial. In the present article, widely used uncoupled ductile fracture models were calibrated with, among others, an extremely low-stress triaxiality test realized through the compression of a cylinder with a specific recess. The whole experimental program, together with the cutting process experiment, was carried out on AISI 1045 carbon steel. After the fracture models were calibrated and the cutting process was simulated with their use, fracture surfaces and force responses from the computations were compared with those obtained experimentally, and concluding remarks were made.
Greenland Ice Sheet Annually-resolved Accumulation Rates (1958-2007), a Spatially Calibrated Model
NASA Astrophysics Data System (ADS)
Burgess, E. W.; Forster, R. R.; Box, J. W.; Smith, L. C.; Bromwich, D. H.
2008-12-01
The Greenland Ice Sheet (GIS) has responded dramatically to recent temperature increases, making it an important contributor to sea level rise. Accurate predictions of Greenland's future contribution to sea level will require a thorough understanding of the GIS system, and refining our understanding of accumulation is a critical step towards this goal. The most accurate existing estimates of Greenland accumulation rates are multi-year averages; existing annual estimates contain poorly quantified uncertainties. This project developed a superior Greenland accumulation dataset that is spatially comprehensive, has annual resolution, is calibrated to field observations and contains sound uncertainty estimates. Accumulation output from a 1958-2007 run of the Fifth Generation Mesoscale Model modified for polar climates (PMM5) was calibrated against 133 firn cores and coastal meteorological stations. PMM5 accumulation rate estimates contained spatially dependent systematic biases that were modeled and removed using spatial interpolation of zonally derived regressions. The calibrated accumulation dataset contains residual uncertainties exhibiting a strong spatial pattern that was modeled to estimate ice-sheet-wide uncertainty. No significant 1958-2007 trends in Greenland accumulation are evident. The average annual accumulation rate is estimated at 0.339 m w.e., or 593 km3, with an RMSE uncertainty of +/-83 km3 (+/-14%). The accumulation dataset will be made publicly available.
A test of the facultative calibration/reactive heritability model of extraversion
Haysom, Hannah J.; Mitchem, Dorian G.; Lee, Anthony J.; Wright, Margaret J.; Martin, Nicholas G.; Keller, Matthew C.; Zietsch, Brendan P.
2015-01-01
A model proposed by Lukaszewski and Roney (2011) suggests that each individual’s level of extraversion is calibrated to other traits that predict the success of an extraverted behavioural strategy. Under ‘facultative calibration’, extraversion is not directly heritable, but rather exhibits heritability through its calibration to directly heritable traits (“reactive heritability”). The current study uses biometrical modelling of 1659 identical and non-identical twins and their siblings to assess whether the genetic variation in extraversion is calibrated to variation in facial attractiveness, intelligence, height in men and body mass index (BMI) in women. Extraversion was significantly positively correlated with facial attractiveness in both males (r=.11) and females (r=.18), but correlations between extraversion and the other variables were not consistent with predictions. Further, twin modelling revealed that the genetic variation in facial attractiveness did not account for a substantial proportion of the variation in extraversion in either males (2.4%) or females (0.5%). PMID:26880866
Modeling and calibration of pointing errors with alt-az telescope
NASA Astrophysics Data System (ADS)
Huang, Long; Ma, Wenli; Huang, Jinlong
2016-08-01
This paper presents a new model for improving the pointing accuracy of a telescope. The Denavit-Hartenberg (D-H) convention was used to perform an error analysis of the telescope's kinematics. A kinematic model was used to relate pointing errors to mechanical errors and the parameters of the kinematic model were estimated with a statistical model fit using data from two large astronomical telescopes. The model illustrates the geometric errors caused by imprecision in manufacturing and assembly processes and their effects on the pointing accuracy of the telescope. A kinematic model relates pointing error to axis position when certain geometric errors are assumed to be present in a telescope. In the parameter estimation portion, the semi-parametric regression model was introduced to compensate for remaining nonlinear errors. The experimental results indicate that the proposed semi-parametric regression model eliminates both geometric and nonlinear errors, and that the telescope's pointing accuracy significantly improves after this calibration.
NASA Astrophysics Data System (ADS)
Tolley, D. G.; Foglia, L.; Neumann, J.; Harter, T.
2014-12-01
Late summer streamflow for the Scott River in northern California has decreased approximately 50% since the mid-1960s, resulting in increased water temperatures and disconnection of certain portions of the stream, which negatively impacts the aquatic habitat of fish species such as coho and fall-run Chinook salmon. In collaboration with local stakeholders, the Scott Valley Integrated Hydrologic Model has been developed, which combines a water budget model and a groundwater-surface water model (MODFLOW) of the 200 km2 basin. The goal of the integrated model is to better understand the hydrologic system of the valley and explore the effects of different groundwater management scenarios on late summer streamflow. The groundwater model has a quarter-hectare resolution with aggregated monthly stress periods over a 21-year period (1990-2011). The Scott River is represented using either the river (RIV) or streamflow routing (SFR) package. UCODE was used for sensitivity analysis and calibration using head observations for 52 wells in the basin and gain/loss observations for two sections of the river. Of 32 model parameters (hydraulic conductivity, specific storage, riverbed conductance and mountain recharge), 13 were found to be significantly sensitive to observations. Results from the calibration show excellent agreement between modeled and observed heads and to seasonal and interannual variations in streamflow. The calibrated model was used to evaluate several management scenarios: 1) an alternative water budget which takes into account measured irrigation rates in the valley, 2) in-lieu recharge, where surface water instead of groundwater is used to irrigate fields near the river while streamflow is sufficiently high, and 3) managed recharge on agricultural fields in gulches on the eastern side of the valley in the winter months. Preliminary results indicate that alternative water management scenarios (in-lieu and managed recharge) significantly increase late summer streamflow by keeping
A comparison of Type III metric radio bursts and global solar potential field models
NASA Technical Reports Server (NTRS)
Jackson, B. V.; Levine, R. H.
1981-01-01
Evidence of coronal magnetic fields from polarized metric type III radio bursts is compared with (1) global potential field models, (2) direct averages of the observed photospheric magnetic field, and (3) H-alpha synoptic charts. The comparison clearly indicates both that the principal aspects of type III burst radiation are understood and that global potential field models are a significantly more accurate representation of coronal magnetic field structure than either the large-scale photospheric field or H-alpha synoptic charts.
Kenoyer, David A.; Anderson, Kurt S.; Myrabo, Leik N.
2008-04-28
A detailed description is provided of the flight dynamics model and development, as well as the procedures used and results obtained in the verification, validation, and calibration of a further refined, flight dynamics system model for a laser lightcraft. The full system model is composed of individual aerodynamic, engine, laser beam, variable vehicle inertial, and 6 DOF dynamics models which have been integrated to represent all major phenomena in a consistent framework. The resulting system level model and associated code was then validated and calibrated using experimental flight information from a 16 flight trajectory data base. This model and code are being developed for the purpose of providing a physics-based predictive tool, which may be used to evaluate the performance of proposed future lightcraft vehicle concepts, engine systems, beam shapes, and active control strategies, thereby aiding in the development of the next generation of laser propelled lightcraft. This paper describes the methods used for isolating the effects of individual component models (e.g. beam, engine, dynamics, etc.) so that the performance of each of these key components could be assessed and adjusted as necessary. As the individual component models were validated, a protocol was developed which permitted the investigators to focus on individual aspects of the system and thereby identify phenomena which explain system behavior, and account for observed deviations between portions of the simulation predictions from experimental flights. These protocols are provided herein, along with physics-based explanations for deviations observed.
NASA Astrophysics Data System (ADS)
Xu, T.; Valocchi, A. J.
2014-12-01
Effective water resource management typically relies on numerical models to analyse groundwater flow and solute transport processes. These models are usually subject to model structure error due to simplification and/or misrepresentation of the real system. As a result, the model outputs may systematically deviate from measurements, thus violating a key assumption for traditional regression-based calibration and uncertainty analysis. On the other hand, the bias induced by model structure error can be described statistically in an inductive, data-driven way based on historical model-to-measurement misfit. We adopt a fully Bayesian approach that integrates a Gaussian process error model, which accounts for model structure error, into the calibration, prediction and uncertainty analysis of groundwater models. The posterior distributions of the parameters of the groundwater model and the Gaussian process error model are jointly inferred using DREAM, an efficient Markov chain Monte Carlo sampler. We test the usefulness of the fully Bayesian approach on a synthetic case study of surface water-groundwater interaction under changing pumping conditions. We first illustrate through this example that traditional least squares regression without accounting for model structure error yields biased parameter estimates, due to parameter compensation, as well as biased predictions. In contrast, the Bayesian approach gives less biased parameter estimates. Moreover, the integration of a Gaussian process error model significantly reduces predictive bias and leads to prediction intervals that are more consistent with observations. The results highlight the importance of explicit treatment of model structure error, especially in circumstances where subsequent decision-making and risk analysis require accurate prediction and uncertainty quantification. In addition, the data-driven error modelling approach is capable of extracting more information from observation data than using a groundwater model alone.
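A minimal one-dimensional sketch of the error-model idea (synthetic residuals and a pure-Python solver, not the DREAM/groundwater setup of the abstract): a Gaussian process with a squared-exponential kernel learns the structural bias from historical model-to-measurement residuals and predicts it at new points.

```python
import math

def sq_exp(a, b, length=1.0, var=1.0):
    """Squared-exponential covariance between two scalar inputs."""
    return var * math.exp(-((a - b) ** 2) / (2 * length ** 2))

def solve(A, b):
    """Gaussian elimination with partial pivoting (small systems only)."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def gp_bias(train_x, residuals, test_x, noise=1e-6):
    """Posterior mean of the GP error model (residuals = observed - simulated)."""
    K = [[sq_exp(a, b) + (noise if i == j else 0.0)
          for j, b in enumerate(train_x)] for i, a in enumerate(train_x)]
    alpha = solve(K, residuals)
    return [sum(sq_exp(x, a) * w for a, w in zip(train_x, alpha))
            for x in test_x]
```

The corrected prediction is the simulator output plus `gp_bias`; the GP's predictive variance is what widens the prediction intervals the abstract describes.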
Cai, Longyan; He, Hong S; Wu, Zhiwei; Lewis, Benard L; Liang, Yu
2014-01-01
Understanding the fire prediction capabilities of fuel models is vital to forest fire management. Various fuel models have been developed in the Great Xing'an Mountains in Northeast China. However, the performances of these fuel models have not been tested for historical occurrences of wildfires. Consequently, the applicability of these models requires further investigation. Thus, this paper aims to develop standard fuel models. Seven vegetation types were combined into three fuel models according to potential fire behaviors, which were clustered using Euclidean distance algorithms. Fuel model parameter sensitivity was analyzed by the Morris screening method. Results showed that the fuel model parameters 1-hour time-lag loading, dead heat content, live heat content, 1-hour time-lag SAV (surface-area-to-volume ratio), live shrub SAV, and fuel bed depth have high sensitivity. Two main sensitive fuel parameters, 1-hour time-lag loading and fuel bed depth, were determined as adjustment parameters because of their high spatio-temporal variability. The FARSITE model was then used to test the fire prediction capabilities of the combined fuel models (uncalibrated fuel models). FARSITE was shown to yield an unrealistic prediction of the historical fire. However, the calibrated fuel models significantly improved the capabilities of the fuel models to predict the actual fire with an accuracy of 89%. Validation results also showed that the model can estimate the actual fires with an accuracy exceeding 56% by using the calibrated fuel models. Therefore, these fuel models can be efficiently used to calculate fire behaviors, which can be helpful in forest fire management.
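The Morris screening step can be sketched in a few lines (a synthetic test function stands in for the FARSITE fuel model; the trajectory count and step size are illustrative): one-at-a-time perturbations along random trajectories yield elementary effects whose mean magnitude ranks parameter sensitivity.

```python
import random

def morris(model, n_params, trajectories=50, delta=0.1, seed=0):
    """Mean absolute elementary effect per parameter (Morris screening)."""
    rng = random.Random(seed)
    effects = [[] for _ in range(n_params)]
    for _ in range(trajectories):
        # Random start in [0, 1 - delta] so the +delta step stays in range.
        x = [rng.random() * (1 - delta) for _ in range(n_params)]
        base = model(x)
        for i in rng.sample(range(n_params), n_params):
            x2 = x[:]
            x2[i] += delta
            y = model(x2)
            effects[i].append(abs(y - base) / delta)
            x, base = x2, y
    return [sum(e) / len(e) for e in effects]
```

Parameters with large mean elementary effects are the ones worth keeping as calibration knobs, mirroring how 1-hour time-lag loading and fuel bed depth were singled out above.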
A fully Bayesian method for jointly fitting instrumental calibration and X-ray spectral models
Xu, Jin; Yu, Yaming; Van Dyk, David A.; Kashyap, Vinay L.; Siemiginowska, Aneta; Drake, Jeremy; Ratzlaff, Pete; Connors, Alanna; Meng, Xiao-Li
2014-10-20
Owing to a lack of robust principled methods, systematic instrumental uncertainties have generally been ignored in astrophysical data analysis despite wide recognition of the importance of including them. Ignoring calibration uncertainty can cause bias in the estimation of source model parameters and can lead to underestimation of the variance of these estimates. We previously introduced a pragmatic Bayesian method to address this problem. The method is 'pragmatic' in that it introduced an ad hoc technique that simplified computation by neglecting the potential information in the data for narrowing the uncertainty for the calibration product. Following that work, we use a principal component analysis to efficiently represent the uncertainty of the effective area of an X-ray (or γ-ray) telescope. Here, however, we leverage this representation to enable a principled, fully Bayesian method that coherently accounts for the calibration uncertainty in high-energy spectral analysis. In this setting, the method is compared with standard analysis techniques and the pragmatic Bayesian method. The advantage of the fully Bayesian method is that it allows the data to provide information not only for estimation of the source parameters but also for the calibration product—here the effective area, conditional on the adopted spectral model. In this way, it can yield more accurate and efficient estimates of the source parameters along with valid estimates of their uncertainty. Provided that the source spectrum can be accurately described by a parameterized model, this method allows rigorous inference about the effective area by quantifying which possible curves are most consistent with the data.
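A minimal sketch of the principal-component representation (synthetic curves and pure-Python power iteration, not the real effective-area calibration products): an ensemble of calibration curves is compressed into a mean curve plus a coefficient on a leading component, which is what makes the calibration uncertainty tractable inside the Bayesian fit.

```python
def mean_curve(curves):
    """Pointwise mean of an ensemble of equal-length curves."""
    n = len(curves)
    return [sum(c[i] for c in curves) / n for i in range(len(curves[0]))]

def leading_pc(curves, iters=200):
    """Mean curve and leading principal component, via power iteration
    on the (unnormalised) covariance of the deviations."""
    mu = mean_curve(curves)
    devs = [[x - m for x, m in zip(c, mu)] for c in curves]
    v = [1.0] * len(mu)
    for _ in range(iters):
        proj = [sum(d[i] * v[i] for i in range(len(v))) for d in devs]
        w = [sum(p * d[i] for p, d in zip(proj, devs)) for i in range(len(v))]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return mu, v

def coefficient(curve, mu, v):
    """Score of one curve on the leading component."""
    return sum((x - m) * vi for x, m, vi in zip(curve, mu, v))
```

In the papers' setting each sampled coefficient corresponds to one plausible effective-area curve, so the spectral fit can marginalise over calibration uncertainty by sampling a handful of scalars instead of whole curves.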
Exploring calibration strategies of SEDD model in two olive orchard watersheds
NASA Astrophysics Data System (ADS)
Burguet Marimón, Maria; Taguas, Encarnación V.; Gómez, José A.
2016-04-01
To optimize soil conservation strategies in catchments, an accurate diagnosis of the areas contributing to soil erosion is required, using models such as SEDD (Ferro and Minacapilly, 1995). In this study, different calibration strategies for the SEDD model were explored in two commercial olive microcatchments in Spain, Setenil (6.7 ha) and Conchuela (8 ha), monitored for 6 years. The main objectives were to calibrate the model for the two watersheds, which differ in environmental characteristics, soil management practices, and runoff conditions, and to evaluate the temporal variability of the sediment delivery ratio (SDR) at the event and annual scales. The calibration used five different erosivity scenarios with different weights of precipitation components and concentrated flow. To optimize the calibration, biweekly and annual C-RUSLE values and the weights of the travel times of the different watershed morphological units were evaluated. The SEDD model was calibrated successfully in the Conchuela watershed, whereas poor adjustments were found for the Setenil watershed. In Conchuela, the best calibration scenarios were associated with concentrated flow, while the erosivity of Setenil was only rain-dependent. Biweekly C-RUSLE values provided suitable, consistent results in Conchuela, where soil moisture varies over the year. In contrast, there were no appreciable improvements between annual and biweekly C-RUSLE values in Setenil, probably due to the narrower variation interval. The analysis of the SDR function justified grouping the different β values according to their sign (positive or negative) as a calibration strategy in Setenil. The medians of these groups of events allowed the model to be adjusted (E = 0.7; RMSE = 6.4). In the Conchuela watershed, this variation in the model calibration produced only minor improvements to an adjustment that was already good. The sediment delivery ratios (SDR) in both watersheds indicate very dynamic sediment transport. The mean annual SDR
Modeling As(III) oxidation and removal with iron electrocoagulation in groundwater.
Li, Lei; van Genuchten, Case M; Addy, Susan E A; Yao, Juanjuan; Gao, Naiyun; Gadgil, Ashok J
2012-11-01
Understanding the chemical kinetics of arsenic during electrocoagulation (EC) treatment is essential for a deeper understanding of arsenic removal using EC under a variety of operating conditions and solution compositions. We describe a highly constrained, simple chemical dynamic model of As(III) oxidation and As(III,V), Si, and P sorption for the EC system using model parameters extracted from some of our experimental results and previous studies. Our model predictions agree well with both data extracted from previous studies and our observed experimental data over a broad range of operating conditions (charge dosage rate) and solution chemistry (pH, co-occurring ions) without free model parameters. Our model provides insights into why higher pH and lower charge dosage rate (Coulombs/L/min) facilitate As(III) removal by EC and sheds light on the debate in the recently published literature regarding the mechanism of As(III) oxidation during EC. Our model also provides practically useful estimates of the minimum amount of iron required to remove 500 μg/L As(III) to <50 μg/L. Parameters measured in this work include the ratio of rate constants for Fe(II) and As(III) reactions with Fe(IV) in synthetic groundwater (k1/k2 = 1.07) and the apparent rate constant of Fe(II) oxidation with dissolved oxygen at pH 7 (k_app = 10^0.22 M^-1 s^-1).
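The Fe(II) oxidation step quantified above (k_app = 10^0.22 M^-1 s^-1 at pH 7) can be illustrated with a minimal rate-law integration. This is a hedged sketch, not the authors' full As(III)/Fe(IV) model; the concentrations and the assumption of constant dissolved oxygen are illustrative:

```python
# Sketch: second-order Fe(II) oxidation by dissolved oxygen,
# d[Fe(II)]/dt = -k_app * [O2] * [Fe(II)], with k_app = 10**0.22 M^-1 s^-1
# taken from the abstract; all other numbers are assumptions.

def integrate_feii(fe0, o2, k_app, dt, t_end):
    """Explicit Euler integration of Fe(II) decay at (assumed) constant O2."""
    fe, t = fe0, 0.0
    while t < t_end:
        fe += -k_app * o2 * fe * dt
        t += dt
    return fe

k_app = 10 ** 0.22                       # M^-1 s^-1, from the abstract
fe_remaining = integrate_feii(fe0=1e-4,  # 0.1 mM Fe(II), illustrative
                              o2=2.5e-4, # ~8 mg/L dissolved O2, illustrative
                              k_app=k_app, dt=1.0, t_end=3600.0)
print(fe_remaining)
```

With these assumed values roughly three quarters of the Fe(II) is oxidized within an hour, consistent with oxidation kinetics being rate-limiting at lower pH.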
Monte Carlo modeling provides accurate calibration factors for radionuclide activity meters.
Zagni, F; Cicoria, G; Lucconi, G; Infantino, A; Lodi, F; Marengo, M
2014-12-01
Accurate determination of calibration factors for radionuclide activity meters is crucial for quantitative studies and in the optimization step of radiation protection, as these detectors are widespread in radiopharmacy and nuclear medicine facilities. In this work we developed a Monte Carlo model of a widely used activity meter, using the Geant4 simulation toolkit; more precisely, the PENELOPE EM physics models were employed. The model was validated by means of several certified sources, traceable to primary activity standards, and other sources locally standardized with spectrometry measurements, plus other experimental tests. Great care was taken to accurately reproduce the geometrical details of the gas chamber and of the activity sources, each of which is different in shape and enclosed in a unique container. Both relative calibration factors and ionization currents obtained with simulations were compared against experimental measurements; further tests were carried out, such as comparing the relative response of the chamber for a source placed at different positions. The results showed a satisfactory level of accuracy in the energy range of interest, with discrepancies below 4% for all the tested parameters. This shows that accurate Monte Carlo modeling of this type of detector is feasible using the low-energy physics models embedded in Geant4. The obtained Monte Carlo model establishes a powerful tool for first-instance determination of new calibration factors for non-standard radionuclides or custom containers when a reference source is not available. Moreover, the model provides an experimental setup for further research and optimization with regard to the materials and geometrical details of the measuring setup, such as the ionization chamber itself or the container configuration.
NASA Technical Reports Server (NTRS)
Lerch, F. J.; Nerem, R. S.; Chinn, D. S.; Chan, J. C.; Patel, G. B.; Klosko, S. M.
1993-01-01
A new method has been developed to provide a direct test of the error calibrations of gravity models based on actual satellite observations. The basic approach projects the error estimates of the gravity model parameters onto satellite observations, and the results of these projections are then compared with data residuals computed from the orbital fits. To allow specific testing of the gravity error calibrations, subset solutions are computed based on the data set and data weighting of the gravity model. The approach is demonstrated using GEM-T3 to show that the gravity error estimates are well calibrated and that reliable predictions of orbit accuracies can be achieved for independent orbits.
NASA Astrophysics Data System (ADS)
Pappenberger, F.; Beven, K. J.; Frodsham, K.; Matgen, P.
2005-12-01
Flood inundation models play an increasingly important role in assessing flood risk. The growth of 2D inundation models that are intimately related to raster maps of floodplains is occurring at the same time as an increase in the availability of 2D remote sensing data (e.g. SAR images and aerial photographs), against which model performance can be evaluated. This requires new techniques to be explored in order to evaluate model performance in two-dimensional space. In this paper we present a fuzzified pattern-matching algorithm that compares favorably with a set of traditional measures. However, we further argue that model calibration has to go beyond the comparison of physical properties and should demonstrate how a weighting towards consequences, such as loss of property, can enhance model focus and prediction. Indeed, it will be necessary to abandon a fully spatial comparison in many scenarios and concentrate the model calibration exercise on specific points such as hospitals, police stations or emergency response centers. It can be shown that such point evaluations lead to significantly different flood hazard maps due to the averaging effect of a spatial performance measure. A strategy to balance the different needs (accuracy at certain spatial points and acceptable spatial performance) has to be based on a public and political decision-making process.
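A fuzzified pattern match of the kind described can be sketched as a neighbourhood comparison of binary wet/dry maps, where a predicted wet cell earns partial credit if an observed wet cell lies nearby. This is a generic illustration, not the paper's exact algorithm; the maps, halo size and decay weight are assumptions:

```python
# Toy fuzzy map comparison: a predicted wet cell scores 1.0 on an exact hit,
# and a distance-decayed partial score if an observed wet cell sits within a
# small neighbourhood (the "halo"). Strict cell-by-cell overlap is the
# special case halo=0.

def fuzzy_agreement(pred, obs, halo=1, decay=0.5):
    rows, cols = len(pred), len(pred[0])
    score, wet = 0.0, 0
    for i in range(rows):
        for j in range(cols):
            if not pred[i][j]:
                continue
            wet += 1
            best = 0.0
            for di in range(-halo, halo + 1):
                for dj in range(-halo, halo + 1):
                    ii, jj = i + di, j + dj
                    if 0 <= ii < rows and 0 <= jj < cols and obs[ii][jj]:
                        dist = max(abs(di), abs(dj))   # Chebyshev distance
                        best = max(best, decay ** dist)
            score += best
    return score / wet if wet else 1.0

pred = [[1, 1, 0], [0, 1, 0], [0, 0, 0]]   # simulated inundation (illustrative)
obs  = [[1, 0, 0], [0, 1, 1], [0, 0, 0]]   # observed inundation (illustrative)
print(fuzzy_agreement(pred, obs))
```

Evaluating the same score only at selected cells (a hospital, a police station) rather than over the whole raster is the point-evaluation strategy the abstract argues for.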
Calibration of complex models through Bayesian evidence synthesis: a demonstration and tutorial.
Jackson, Christopher H; Jit, Mark; Sharples, Linda D; De Angelis, Daniela
2015-02-01
Decision-analytic models must often be informed using data that are only indirectly related to the main model parameters. The authors outline how to implement a Bayesian synthesis of diverse sources of evidence to calibrate the parameters of a complex model. A graphical model is built to represent how observed data are generated from statistical models with unknown parameters and how those parameters are related to quantities of interest for decision making. This forms the basis of an algorithm to estimate a posterior probability distribution, which represents the updated state of evidence for all unknowns given all data and prior beliefs. This process calibrates the quantities of interest against data and, at the same time, propagates all parameter uncertainties to the results used for decision making. To illustrate these methods, the authors demonstrate how a previously developed Markov model for the progression of human papillomavirus (HPV-16) infection was rebuilt in a Bayesian framework. Transition probabilities between states of disease severity are inferred indirectly from cross-sectional observations of prevalence of HPV-16 and HPV-16-related disease by age, cervical cancer incidence, and other published information. Previously, a discrete collection of plausible scenarios was identified but with no further indication of which of these are more plausible. Instead, the authors derive a Bayesian posterior distribution, in which scenarios are implicitly weighted according to how well they are supported by the data. In particular, the authors emphasize the appropriate choice of prior distributions and the checking and comparison of fitted models.
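The core idea, weighting candidate scenarios by how well they reproduce observed data rather than keeping an unweighted list of plausible ones, can be sketched with a toy binomial likelihood. The scenario names, prevalences and counts below are illustrative, not values from the HPV-16 model:

```python
# Toy Bayesian calibration: three candidate parameter scenarios, each implying
# a model prevalence, are reweighted by a binomial likelihood of the observed
# count. The result is a posterior over scenarios instead of a flat list.

from math import comb

def binom_lik(k, n, p):
    return comb(n, k) * p**k * (1 - p)**(n - k)

scenarios = {"slow": 0.10, "medium": 0.20, "fast": 0.35}  # implied prevalences
k_obs, n_obs = 22, 100                                    # observed cases (illustrative)

prior = {s: 1 / len(scenarios) for s in scenarios}        # uniform prior
unnorm = {s: prior[s] * binom_lik(k_obs, n_obs, p)
          for s, p in scenarios.items()}
z = sum(unnorm.values())
posterior = {s: w / z for s, w in unnorm.items()}
print(posterior)
```

In the full method the same reweighting happens jointly over many parameters via a graphical model, so all data sources and prior beliefs update every unknown at once.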
Calibration at regional scale for rainfall-runoff modeling in ungauged catchments.
NASA Astrophysics Data System (ADS)
Montosi, E.; Montanari, A.; Toth, E.; Parajka, J.; Blöschl, G.
2012-04-01
The objective of this study is to explore one possible solution for optimising the parameters of rainfall-runoff models in ungauged catchments. We propose a cross-calibration procedure based on the adoption, for selected pairs of catchments, of a unique, space-invariant parameter set, which can be identified by using information from gauged catchments in the same region. Each basin in the study region is selected in turn as the target catchment and treated as ungauged; we refer to all the remaining catchments in the same region as the donors. The rainfall-runoff model is calibrated on each donor in turn, thereby identifying the donor that provides the most reliable parameter set. Then, a similarity measure is elaborated to assist in the selection of the best-performing donor catchment, thereby proposing a quantitative criterion to identify the most appropriate information to be used in ungauged conditions. The similarity measure, which depends on geomorphoclimatic behaviours, can be used to identify more than one donor catchment in case one needs to increase the consistency of the available database. We analyse the trade-off between assuming the parameters homogeneous in space and adding new information as the cross-calibration evolves. The analysis is performed with reference to a case study of a set of 7 catchments located in Northern Italy.
Efficient computation of net analyte signal vector in inverse multivariate calibration models.
Faber, N K
1998-12-01
The net analyte signal vector has been defined by Lorber as the part of a mixture spectrum that is unique for the analyte of interest; i.e., it is orthogonal to the spectra of the interferences. It plays a key role in the development of multivariate analytical figures of merit. Applications have been reported that imply its utility for spectroscopic wavelength selection as well as calibration method comparison. Currently available methods for computing the net analyte signal vector in inverse multivariate calibration models are based on the evaluation of projection matrices. Due to the size of these matrices (p × p, with p the number of wavelengths) the computation may be highly memory- and time-consuming. This paper shows that the net analyte signal vector can be obtained in a highly efficient manner by a suitable scaling of the regression vector. Computing the scaling factor only requires the evaluation of an inner product (p multiplications and additions). The mathematical form of the newly derived expression is discussed, and the generalization to multiway calibration models is briefly outlined.
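Faber's shortcut can be sketched directly: the net analyte signal vector equals the regression vector b scaled by (x·b)/(b·b), which agrees with the O(p²) projection-matrix route while needing only one inner product. A small self-contained check (illustrative vectors, not real spectra):

```python
# Two routes to the NAS vector in an inverse calibration model:
#   fast: scale the regression vector b by (x.b)/(b.b)      -- O(p)
#   slow: apply the explicit p x p projection matrix b b^T / (b.b) to x -- O(p^2)

def inner(u, v):
    return sum(a * b for a, b in zip(u, v))

def nas_fast(x, b):
    """NAS vector as a scaled regression vector."""
    s = inner(x, b) / inner(b, b)
    return [s * bi for bi in b]

def nas_slow(x, b):
    """Same vector via the explicit projection matrix, row by row."""
    bb = inner(b, b)
    return [sum(bi * bj * xj for bj, xj in zip(b, x)) / bb for bi in b]

b = [0.2, -1.3, 0.7, 2.1]     # regression vector (illustrative)
x = [1.0, 0.5, -0.4, 0.9]     # mixture spectrum (illustrative)
fast, slow = nas_fast(x, b), nas_slow(x, b)
print(all(abs(f - s) < 1e-12 for f, s in zip(fast, slow)))
```

For p wavelengths the fast route needs only p multiplications and additions for the scaling factor, which is exactly the efficiency gain the abstract describes.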
NASA Astrophysics Data System (ADS)
Saikia, C. K.; Woods, B. B.; Thio, H. K.
Regional crustal waveguide calibration is essential to the retrieval of source parameters and the location of smaller (M<4.8) seismic events. This path calibration of regional seismic phases is strongly dependent on the accuracy of hypocentral locations of calibration (or master) events. This information can be difficult to obtain, especially for smaller events. Generally, explosion- or quarry-blast-generated travel-time data with known locations and origin times are useful for developing the path calibration parameters, but in many regions such data sets are scanty or do not exist. We present a method that is useful for regional path calibration independent of such data, i.e. with earthquakes, which is applicable for events down to Mw = 4 and which has successfully been applied in India, central Asia, the western Mediterranean, North Africa, Tibet and the former Soviet Union. These studies suggest that reliably determining depth is essential to establishing accurate epicentral location and origin time for events. We find that the error in source depth does not necessarily trade off only with the origin time for events with poor azimuthal coverage, but with the horizontal location as well, thus resulting in poor epicentral locations. For example, hypocenters for some events in central Asia were found to move from their fixed-depth locations by about 20 km. Such errors in location and depth will propagate into path calibration parameters, particularly with respect to travel times. The modeling of teleseismic depth phases (pP, sP) yields accurate depths for earthquakes down to magnitude Mw = 4.7. This Mw threshold can be lowered to 4 if regional seismograms are used in conjunction with a calibrated velocity structure model to determine depth, with the relative amplitude of the Pnl waves to the surface waves and the interaction of regional sPmP and pPmP phases being good indicators of event depths. We also found that for deep events a seismic phase which follows an S
The Benefit of Multi-Mission Altimetry Series for the Calibration of Hydraulic Models
NASA Astrophysics Data System (ADS)
Domeneghetti, Alessio; Tarpanelli, Angelica; Tourian, Mohammad J.; Brocca, Luca; Moramarco, Tommaso; Castellarin, Attilio; Sneeuw, Nico
2016-04-01
The growing availability of satellite altimetric time series during the last few decades has fostered their use in many hydrological and hydraulic applications. However, the use of remotely sensed water level series is still hampered by the limited temporal resolution of each sensor (i.e. revisit times varying from 10 to 35 days), as well as by the accuracy of the different instrumentation adopted for monitoring inland water. As a consequence, each sensor is characterized by distinctive potentials and limitations that constrain its use for hydrological applications. In this study we refer to a stretch of about 140 km of the Po River (the longest Italian river) in order to investigate the performance of different altimetry series for the calibration of a quasi-2d model built with detailed topographic information. The usefulness of remotely sensed water surface elevation is tested using data collected by different altimetry missions (i.e., ERS-2, ENVISAT, TOPEX/Poseidon, JASON-2 and SARAL/Altika) by investigating the effect of (i) record length (i.e. the number of satellite measurements provided by a given sensor at a specific satellite track) and (ii) data uncertainty (i.e. altimetry measurement errors). Since the relatively poor time resolution of satellites constrains the operational use of altimetric time series, in this study we also investigate the use of multi-mission altimetry series obtained by merging datasets sensed by different sensors over the study area. Benefits of the higher temporal frequency of multi-mission series are tested by calibrating the quasi-2d model referring in turn to the original satellite series and the multi-mission datasets. JASON-2 and ENVISAT outperform the other sensors, ensuring the reliability of the calibration process even for shorter time series. The multi-mission dataset appears particularly reliable and suitable for the calibration of the hydraulic model. If short time periods are considered, the performance of the multi-mission dataset
Automatic calibration of a global flow routing model in the Amazon basin using virtual SWOT data
NASA Astrophysics Data System (ADS)
Rogel, P. Y.; Mouffe, M.; Getirana, A.; Ricci, S. M.; Lion, C.; Mognard, N. M.; Biancamaria, S.; Boone, A.
2012-12-01
The Surface Water and Ocean Topography (SWOT) wide-swath altimetry mission will provide global coverage of surface water elevation, which will be used to help correct water height and discharge predictions from hydrological models. Here, the aim is to investigate the use of virtually generated SWOT data to improve water height and discharge simulation through calibration of model parameters (such as river width, river depth and roughness coefficient). In this work, we use the HyMAP model to estimate water height and discharge over the Amazon catchment area. Before reaching the river network, surface and subsurface runoff are delayed by a set of linear and independent reservoirs. The flow routing is performed by the kinematic wave equation. Since the SWOT mission has not yet been launched, virtual SWOT data are generated with a set of true parameters for HyMAP as well as measurement errors from a SWOT data simulator (i.e. a twin experiment approach is implemented). These virtual observations are used to calibrate key parameters of HyMAP through the minimization of a cost function defining the difference between the simulated and observed water heights over a one-year simulation period. The automatic calibration procedure is achieved using the MOCOM-UA multicriteria global optimization algorithm as well as the local optimization algorithm BC-DFO, which is considered a computationally cheaper alternative. First, to reduce the computational cost of the calibration procedure, each spatially distributed parameter (Manning coefficient, river width and river depth) is corrupted through multiplication by a spatially uniform factor, which is the only factor optimized. In this case, it is shown that, when the measurement errors are small, the true water heights and discharges are easily retrieved. Because of equifinality, the true parameters are not always identified. A spatial correction of the model parameters is then investigated and the domain is divided into 4 regions
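The twin-experiment logic described above, corrupting a distributed parameter by a single uniform factor and recovering it by minimizing a cost on simulated versus "observed" water heights, can be sketched with a toy stand-in model (this is not HyMAP; the model form and all numbers are illustrative):

```python
# Toy twin experiment: virtual observations are generated with the true
# factor (1.0), then a crude 1-D search recovers that factor by minimizing
# a squared-error cost between simulated and "observed" water heights.

def model(widths, factor):
    # stand-in for a routing model: water height grows with corrected width
    return [(factor * w) ** 0.5 for w in widths]

widths = [120.0, 80.0, 200.0, 55.0]   # illustrative river widths (m)
truth = model(widths, 1.0)            # error-free "virtual SWOT" observations

def cost(factor):
    sim = model(widths, factor)
    return sum((s - o) ** 2 for s, o in zip(sim, truth))

# scan the corruption factor over [0.5, 1.5] in steps of 0.01
best = min((cost(f / 100.0), f / 100.0) for f in range(50, 151))[1]
print(best)   # recovers the true factor, 1.0
```

With noisy observations or several interacting factors the cost surface develops near-equivalent minima, which is the equifinality problem the abstract reports; that is what motivates the MOCOM-UA and BC-DFO optimizers in the real study.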
Langevin, Christian D.; Hughes, Joseph D.
2010-01-01
A model with a small amount of numerical dispersion was used to represent saltwater intrusion in a homogeneous aquifer for a 10-year historical calibration period with one groundwater withdrawal location followed by a 10-year prediction period with two groundwater withdrawal locations. Time-varying groundwater concentrations at arbitrary locations in this low-dispersion model were then used as observations to calibrate a model with a greater amount of numerical dispersion. The low-dispersion model was solved using a Total Variation Diminishing numerical scheme; an implicit finite difference scheme with upstream weighting was used for the calibration simulations. Calibration focused on estimating a three-dimensional hydraulic conductivity field that was parameterized using a regular grid of pilot points in each layer and a smoothness constraint. Other model parameters (dispersivity, porosity, recharge, etc.) were fixed at the known values. The discrepancy between observed and simulated concentrations (due solely to numerical dispersion) was reduced by adjusting hydraulic conductivity through the calibration process. Within the transition zone, hydraulic conductivity tended to be lower than the true value for the calibration runs tested. The calibration process introduced lower hydraulic conductivity values to compensate for numerical dispersion and improve the match between observed and simulated concentration breakthrough curves at monitoring locations. Concentrations were underpredicted at both groundwater withdrawal locations during the 10-year prediction period.
NASA Astrophysics Data System (ADS)
Luo, Yue; Ye, Shujun; Wu, Jichun; Wang, Hanmei; Jiao, Xun
2016-05-01
Land-subsidence prediction depends on an appropriate subsidence model and the calibration of its parameter values. A modified inverse procedure is developed and applied to calibrate five parameters in a compacting confined aquifer system using records of field data from vertical extensometers and corresponding hydrographs. The inverse procedure of COMPAC (InvCOMPAC) has been used in the past for calibrating vertical hydraulic conductivity of the aquitards, nonrecoverable and recoverable skeletal specific storages of the aquitards, skeletal specific storage of the aquifers, and initial preconsolidation stress within the aquitards. InvCOMPAC is modified to increase robustness in this study. There are two main differences in the modified InvCOMPAC model (MInvCOMPAC). One is that field data are smoothed before diagram analysis to reduce local oscillation of data and remove abnormal data points. A robust locally weighted regression method is applied to smooth the field data. The other difference is that the Newton-Raphson method, with a variable scale factor, is used to conduct the computer-based inverse adjustment procedure. MInvCOMPAC is then applied to calibrate parameters in a land subsidence model of Shanghai, China. Five parameters of aquifers and aquitards at 15 multiple-extensometer sites are calibrated. Vertical deformation of sedimentary layers can be predicted by the one-dimensional COMPAC model with these calibrated parameters at extensometer sites. These calibrated parameters could also serve as good initial values for parameters of three-dimensional regional land subsidence models of Shanghai.
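The Newton-Raphson update with a variable scale factor mentioned above can be sketched as a damped iteration that halves the step until the residual actually decreases. This is a generic illustration of the idea, not the MInvCOMPAC code; the test function is arbitrary:

```python
# Damped Newton-Raphson: take the full Newton step, but shrink it by a
# variable scale factor whenever it fails to reduce |f(x)|. This adds
# robustness when the full step overshoots on ill-behaved objectives.

def damped_newton(f, df, x0, tol=1e-10, max_iter=100):
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            return x
        step = fx / df(x)
        scale = 1.0
        # halve the step until the residual decreases (or the step vanishes)
        while abs(f(x - scale * step)) >= abs(fx) and scale > 1e-8:
            scale *= 0.5
        x -= scale * step
    return x

# illustrative use: find the real root of x^3 - 2
root = damped_newton(lambda x: x**3 - 2.0, lambda x: 3 * x**2, x0=5.0)
print(root)   # ~ 2 ** (1/3)
```

In the calibration setting f would be the misfit gradient between simulated and extensometer-measured compaction rather than a scalar polynomial.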
Calibration of interphase fluorescence in situ hybridization cutoff by mathematical models.
Du, Qinghua; Li, Qingshan; Sun, Daochun; Chen, Xiaoyan; Yu, Bizhen; Ying, Yi
2016-03-01
Fluorescence in situ hybridization (FISH) continues to play an important role in clinical investigations. Laboratories may create their own cutoff, a percentage of positive nuclei to determine whether a specimen is positive or negative, to eliminate false positives that are created by signal overlap in most cases. In some cases, it is difficult to determine the cutoff value because of differences in both the area of nuclei and the number of signals. To address these problems, we established two mathematical models using probability theory. To verify these two models, normal disomy cells from healthy individuals were used to simulate cells with different numbers of signals by hybridization with different probes. We used an X/Y probe to obtain the average distance between two signals and the probability of signal overlap in different nuclei area. Frequencies of all signal patterns were scored and compared with theoretical frequencies, and models were assessed using a goodness of fit test. We used five BCR/ABL1-positive samples, 20 BCR/ABL1-negative samples and two samples with ambiguous results to verify the cutoff calibrated by these two models. The models were in agreement with experimental results. The dynamic cutoff can classify cases in routine analysis correctly, and it can also correct for influences from nuclei area and the number of signals in some ambiguous cases. The probability models can be used to assess the effect of signal overlap and calibrate the cutoff. PMID:26580488
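A probability model for random signal overlap can be sketched in the same spirit (this is a toy formulation, not the authors' model): if two uniformly placed signal centres closer than a fusion distance d are scored as one signal, the pairwise overlap probability is roughly πd²/A, and a binomial over signal pairs gives the chance of an apparently reduced count:

```python
# Toy overlap model: signal centres uniform in a nucleus of area A; two
# signals closer than d fuse into one, mimicking a false signal-count
# reduction. All numbers below are illustrative assumptions.

from math import pi, comb

def pair_overlap_prob(d, area):
    """Approximate probability that one given pair of signals overlaps."""
    return min(1.0, pi * d * d / area)

def false_signal_loss_prob(n_signals, d, area):
    """P(at least one of the C(n,2) pairs overlaps), assuming independence."""
    p = pair_overlap_prob(d, area)
    n_pairs = comb(n_signals, 2)
    return 1.0 - (1.0 - p) ** n_pairs

# illustrative: 1 um fusion distance, 80 um^2 nucleus, 4 signals per nucleus
print(false_signal_loss_prob(4, 1.0, 80.0))
```

Because the probability grows with both signal count and the d²/A ratio, a single fixed cutoff cannot fit all probes and nucleus sizes, which is the motivation for a calibrated, dynamic cutoff.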
Self-calibration of digital aerial camera using combined orthogonal models
NASA Astrophysics Data System (ADS)
Babapour, Hadi; Mokhtarzade, Mehdi; Valadan Zoej, Mohamad Javad
2016-07-01
The emergence of new digital aerial cameras and the diverse designs and technologies used in this type of camera require in-situ calibration. Self-calibration methods, e.g. the Fourier model, are primarily used; however, the additional parameters employed in such methods have not yet desirably modeled the complex multiple distortions present in digital aerial cameras. The present study proposes the Chebyshev-Fourier (CHF) and Jacobi-Fourier (JF) combined orthogonal models. The models are evaluated for multiple distortions using both simulated and real data, the latter derived from an UltraCam digital camera. The results indicate that the JF model is superior to the other methods; in the UltraCam scenario, for example, it improves the planimetric and vertical accuracy over the Fourier model by 18% and 22%, respectively. Furthermore, reductions of 30% and 16% in external and internal correlation are obtained with this approach, which is very promising.
Chambers, Robert S.; Tandon, Rajan; Stavig, Mark E.
2015-07-07
In this study, to analyze the stresses and strains generated during the solidification of glass-forming materials, stress and volume relaxation must be predicted accurately. Although the modeling attributes required to depict physical aging in organic glassy thermosets strongly resemble the structural relaxation in inorganic glasses, the historical modeling approaches have been distinctly different. To determine whether a common constitutive framework can be applied to both classes of materials, the nonlinear viscoelastic simplified potential energy clock (SPEC) model, developed originally for glassy thermosets, was calibrated for the Schott 8061 inorganic glass and used to analyze a number of tests. A practical methodology for material characterization and model calibration is discussed, and the structural relaxation mechanism is interpreted in the context of SPEC model constitutive equations. SPEC predictions compared to inorganic glass data collected from thermal strain measurements and creep tests demonstrate the ability to achieve engineering accuracy and make the SPEC model feasible for engineering applications involving a much broader class of glassy materials.
NASA Astrophysics Data System (ADS)
Gilson, L.; Rabet, L.; Imad, A.; Kakogiannis, D.; Coghe, F.
2016-05-01
Among the different material surrogates used to study the effect of small-calibre projectiles on the human body, ballistic gelatine is one of the most commonly used because of its specific material properties. For many applications, numerical simulations of this material could add important value to understanding the different phenomena observed during ballistic testing. However, the material response of gelatine is highly non-linear and complex. Recent developments in this field are available in the literature. Experimental and numerical data on the impact of rigid steel spheres in gelatine available in the literature were considered as a basis for the selection of the best model for further work. To this end, a comparison of two models for Fackler gelatine was made. The selected model was afterwards exploited for a real threat consisting of two types of ammunition: 9 mm and .44 Magnum calibre projectiles. A high-speed camera and a pressure sensor were used in order to measure the velocity decay of the projectiles and the pressure at a given location in the gelatine during penetration of the projectile. The observed instability of the 9 mm bullets was also studied. Four numerical models were developed and solved with LS-DYNA and compared with the experimental data. Good agreement was obtained between the models and the experiments, validating the selected gelatine model for future use.
Calibrating a forest landscape model to simulate frequent fire in Mediterranean-type shrublands
Syphard, A.D.; Yang, J.; Franklin, J.; He, H.S.; Keeley, J.E.
2007-01-01
In Mediterranean-type ecosystems (MTEs), fire disturbance influences the distribution of most plant communities, and altered fire regimes may be more important than climate factors in shaping future MTE vegetation dynamics. Models that simulate the high-frequency fire and post-fire response strategies characteristic of these regions will be important tools for evaluating potential landscape change scenarios. However, few existing models have been designed to simulate these properties over long time frames and broad spatial scales. We refined a landscape disturbance and succession (LANDIS) model to operate on an annual time step and to simulate altered fire regimes in a southern California Mediterranean landscape. After developing a comprehensive set of spatial and non-spatial variables and parameters, we calibrated the model to simulate very high fire frequencies and evaluated the simulations under several parameter scenarios representing hypotheses about system dynamics. The goal was to ensure that observed model behavior would simulate the specified fire regime parameters, and that the predictions were reasonable based on current understanding of community dynamics in the region. After calibration, the two dominant plant functional types responded realistically to different fire regime scenarios. Therefore, this model offers a new alternative for simulating altered fire regimes in MTE landscapes. © 2007 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Keijsers, J. G. S.; Schoorl, J. M.; Chang, K.-T.; Chiang, S.-H.; Claessens, L.; Veldkamp, A.
2011-10-01
In this paper we optimise the spatially explicit prediction of landslide hazard, landslide triggering and subsequent movement downslope of materials for a mountainous catchment in Taiwan. The location prediction is optimised by subsequently adding three location parameters: rainfall distribution, land-use classes and DEM derived slopes. Then the three most important model parameters are calibrated to find the best prediction for both stable and unstable areas. The landslides predicted by the LAPSUS-LS model are compared with a landslide inventory to validate the output. The optimal model settings for the calibration area are then applied to a validation area. Results show that model performance can be improved by adding the spatial distribution of rainfall and by stratifying according to land-use classes. Landslide prediction is better with fine resolution DEMs, mainly because the local topography is smoothed in coarser resolutions. Although in general the amount of landslides is over-predicted, the overall performance indicates that the model is able to capture the important factors determining landslide location. Additional spatially distributed data such as regolith or soil depth and regeneration rates of the legacy effect can further enhance the model's prediction.
NASA Astrophysics Data System (ADS)
Chanumolu, Anantha; Jones, Damien; Thirupathi, Sivarani
2015-06-01
We present a modelling scheme that predicts the centroids of spectral line features for a high-resolution Echelle spectrograph to high accuracy. Towards this, a computing scheme is used whereby any astronomical spectrograph can be modelled and controlled without recourse to a ray-tracing program. The computations are based on a paraxial ray trace, with exact corrections added for certain surface types and Buchdahl aberration coefficients for complex modules. The resultant chain of paraxial ray traces and corrections for all relevant components is used to calculate the location of any spectral line on the detector under all normal operating conditions with a high degree of certainty. This will allow semi-autonomous control using simple in-house programming modules. The scheme is simple enough to be implemented even in a spreadsheet or in any scripting language. Such a model, along with an optimization routine, can represent the real-time behaviour of the instrument. We present here a case study for the Hanle Echelle Spectrograph. We show that our results match well with those of a popular commercial ray-tracing program. The model is further optimized using Thorium Argon calibration lamp exposures taken during the preliminary alignment of the instrument. The model predictions matched the calibration frames at the level of 0.08 pixel. Monte Carlo simulations were performed to show the effect of photon noise on the model predictions.
Calibration and use of integrated hydrological models in a large groundwater basin in Northern Italy
NASA Astrophysics Data System (ADS)
Gandolfi, Claudio; Giudici, Mauro; Ponzini, Giansilvio; Agostani, Davide; Rienzner, Michele
2010-05-01
We present and discuss the main steps of the implementation and use of the groundwater flow model of a large alluvial aquifer system underlying a densely settled and heavily irrigated territory, with a special focus on the estimation of the distributed recharge and on the calibration of the model. The 2500 km² groundwater basin lies in the Padana plain (Northern Italy), one of the most developed industrial and agricultural areas of Europe, and is bordered by the rivers Adda, Oglio and Po. The model implementation was urged by the water management and administration authorities in the area, which in recent years have been under increasing pressure for the release of pumping consents, especially from the irrigation sector. Indeed, the limitation of water withdrawal from rivers to ensure the minimum instream flow, along with a sequence of very dry years, pushed the farmers to seek new sources of irrigation water. On the other side, the water authorities are trying to drive a process of transformation of the irrigation systems towards an increase of their water use efficiency. The same authorities, however, are aware that this process must be carefully controlled in order to protect a number of groundwater-dependent ecosystems that rely largely on the distributed recharge due to irrigation. Therefore, the main practical goal of the model is to provide a tool for the assessment of both the sustainability of increased groundwater withdrawals and the effects of changes in the irrigation systems' characteristics. Distributed recharge, mainly due to rainfall and irrigation, has often been treated in a simplified way in many applications of groundwater models, in spite of the fact that the unsaturated-zone scientific community has made significant progress in the modelling of soil-water-atmosphere interactions. Indeed, especially when irrigation systems are densely spread over a large area but poorly efficient, the distributed recharge term may represent
Assessing the Birkenes model of stream acidification using a multisignal calibration methodology
Hooper, R.P.; Stone, A.; Christophersen, N.; De Grosbois, E.; Seip, H.M.
1988-01-01
The Birkenes model of streamwater acidification has been revised to incorporate additional chemical and hydrologic information gained since its original construction. An analysis of the hydrologic submodel with the goal of extending it to predict concentrations of a conservative tracer in stream water is given. An objective calibration of the model indicated that the model is overparameterized. Only one passive store is identifiable, not two as currently contained in the model, and the routing between the two reservoirs is not determined by the data. Inclusion of the conservative tracer improved the identifiability of the dimensional parameters, but had little effect on the rate or routing parameters. If the hydrologic structure is to be determined from the hydrograph and conservative tracer alone, it must be simplified to eliminate unidentifiable parameters. The validity of using more complex rainfall-runoff models in hydrochemical models which seek to test chemical mechanisms is called into question by this analysis. -from Authors
An advanced simulation model for membrane bioreactors: development, calibration and validation.
Ludwig, T; Gaida, D; Keysers, C; Pinnekamp, J; Bongards, M; Kern, P; Wolf, C; Sousa Brito, A L
2012-01-01
Membrane wastewater treatment plants (WWTPs) have several advantages compared with conventionally designed WWTPs with classical purification techniques. The filtration process is the key to their commercial success in Germany with respect to energy consumption and effectiveness, enabled by the optimization of filtration using a dynamic simulation model. This work is focused on the development of a robust, flexible and practically applicable membrane simulation model for submerged hollow-fibre and flat-sheet membrane modules. The model is based on standard parameters usually measured on membrane WWTPs. The performance of the model is demonstrated by successful calibration and validation for three different full-scale membrane WWTPs achieving good results. Furthermore, the model is combinable with Activated Sludge Models.
Lake Michigan eutrophication model: calibration, sensitivity, and five-year hindcast analysis
Lesht, B.M.
1984-09-01
A dynamic, deterministic, eutrophication model of Lake Michigan that was developed by Rodgers and Salisbury (1981) and installed at Argonne National Laboratory as part of Interagency Agreement AD-89 F-0-145-0 is described in this report. The focus is on model formulation, calibration and verification, and the relationship between these processes and the available field data. Field data are too sparse for detailed analysis, but the model does produce a reasonable five-year simulation of several water quality variables, including total phosphorus and chlorophyll-a. The model provides a valuable framework for understanding the nutrient cycle in Lake Michigan, but forecasts made using the model must be considered within the context of model limitations. 20 references, 37 figures, 7 tables.
Liu, Yan; Cai, Wensheng; Shao, Xueguang
2016-12-01
Calibration transfer is essential for practical applications of near infrared (NIR) spectroscopy because the spectra may be measured on different instruments and the difference between the instruments must be corrected. For most calibration transfer methods, standard samples are necessary to construct the transfer model using the spectra of the samples measured on two instruments, referred to as the master and slave instruments, respectively. In this work, a method named linear model correction (LMC) is proposed for calibration transfer without standard samples. The method is based on the fact that, for samples with similar physical and chemical properties, the spectra measured on different instruments are linearly correlated. This fact makes the coefficients of the linear models constructed from the spectra measured on different instruments similar in profile. Therefore, by using a constrained optimization method, the coefficients of the master model can be transferred into those of the slave model with a few spectra measured on the slave instrument. Two NIR datasets of corn and plant leaf samples measured with different instruments are used to test the performance of the method. The results show that, for both datasets, the spectra can be correctly predicted using the transferred partial least squares (PLS) models. Because standard samples are not necessary in the method, it may be more useful in practical applications. PMID:27380302
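The core idea of transferring master-model coefficients using a few slave-instrument spectra can be sketched as a penalized least-squares problem. This is a simplified stand-in for illustration, not the LMC algorithm itself; the data, the penalty form, and the parameter `lam` are all invented here.

```python
import numpy as np

def transfer_coefficients(X_slave, y_slave, b_master, lam=1.0):
    """Anchor slave-model coefficients to the master model.

    Solves min_b ||X_slave @ b - y_slave||^2 + lam * ||b - b_master||^2:
    a few slave-instrument spectra correct the master coefficients while
    the penalty keeps their overall profile similar (lam is a tuning choice).
    """
    n = X_slave.shape[1]
    A = X_slave.T @ X_slave + lam * np.eye(n)
    return np.linalg.solve(A, X_slave.T @ y_slave + lam * b_master)

rng = np.random.default_rng(0)
b_master = np.array([1.0, -0.5, 0.25])   # coefficients of the master model (toy)
X_master = rng.normal(size=(20, 3))      # master-instrument spectra (toy data)
y = X_master @ b_master                  # reference property values
X_slave = 1.1 * X_master                 # slave spectra: linear response shift

# Transfer using only 10 spectra measured on the slave instrument.
b_slave = transfer_coefficients(X_slave[:10], y[:10], b_master, lam=0.05)
err_master = np.linalg.norm(X_slave @ b_master - y)   # master model used as-is
err_transfer = np.linalg.norm(X_slave @ b_slave - y)  # transferred coefficients
```

Because the slave spectra are (by construction) linearly related to the master spectra, the transferred coefficients predict the slave measurements noticeably better than the uncorrected master model.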
NASA Astrophysics Data System (ADS)
Stephenson, John; Gallagher, Kerry; Holmes, Chris
2006-10-01
We present a new approach for modelling annealing of fission tracks in apatite, aiming to address various problems with existing models. We cast the model in a fully Bayesian context, which allows us explicitly to deal with data and parameter uncertainties and correlations, and also to deal with the predictive uncertainties. We focus on a well-known annealing algorithm [Laslett, G.M., Green, P.F., Duddy, I.R., Gleadow, A.J.W., 1987. Thermal annealing of fission tracks in apatite. 2. A quantitative-analysis. Chem. Geol., 65 (1), 1-13], and build a hierarchical Bayesian model to incorporate both laboratory and geological timescale data as direct constraints. Relative to the original model calibration, we find a better (in terms of likelihood) model conditioned just on the reported laboratory data. We then include the uncertainty on the temperatures recorded during the laboratory annealing experiments. We again find a better model, but the predictive uncertainty when extrapolated to geological timescales is increased due to the uncertainty on the laboratory temperatures. Finally, we explicitly include a data set [Vrolijk, P., Donelick, R.A., Quenq, J., Cloos, M., 1992. Testing models of fission track annealing in apatite in a simple thermal setting: site 800, leg 129. In: Larson, R., Lancelet, Y. (Eds.), Proceedings of the Ocean Drilling Program, Scientific Results, vol. 129, pp. 169-176] which provides low-temperature geological timescale constraints for the model calibration. When combined with the laboratory data, we find a model which satisfies both the low-temperature and high-temperature geological timescale benchmarks, although the fit to the original laboratory data is degraded. However, when extrapolated to geological timescales, this combined model significantly reduces the well-known rapid recent cooling artifact found in many published thermal models for geological samples.
Approaches to highly parameterized inversion-A guide to using PEST for groundwater-model calibration
Doherty, John E.; Hunt, Randall J.
2010-01-01
Highly parameterized groundwater models can create calibration difficulties. Regularized inversion-the combined use of large numbers of parameters with mathematical approaches for stable parameter estimation-is becoming a common approach to address these difficulties and enhance the transfer of information contained in field measurements to parameters used to model that system. Though commonly used in other industries, regularized inversion is somewhat imperfectly understood in the groundwater field. There is concern that this unfamiliarity can lead to underuse, and misuse, of the methodology. This document is constructed to facilitate the appropriate use of regularized inversion for calibrating highly parameterized groundwater models. The presentation is directed at an intermediate- to advanced-level modeler, and it focuses on the PEST software suite-a frequently used tool for highly parameterized model calibration and one that is widely supported by commercial graphical user interfaces. A brief overview of the regularized inversion approach is provided, and techniques for mathematical regularization offered by PEST are outlined, including Tikhonov, subspace, and hybrid schemes. Guidelines for applying regularized inversion techniques are presented after a logical progression of steps for building suitable PEST input. The discussion starts with use of pilot points as a parameterization device and processing/grouping observations to form multicomponent objective functions. A description of potential parameter solution methodologies and resources available through the PEST software and its supporting utility programs follows. Directing the parameter-estimation process through PEST control variables is then discussed, including guidance for monitoring and optimizing the performance of PEST. Comprehensive listings of PEST control variables, and of the roles performed by PEST utility support programs, are presented in the appendixes.
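The Tikhonov branch of regularized inversion mentioned above can be sketched for a linear toy problem: a penalty toward preferred parameter values stabilizes an otherwise underdetermined calibration. This is a generic illustration of the mathematics, not PEST itself; the matrix, data, and weight are invented.

```python
import numpy as np

def tikhonov_solve(J, d, alpha, p_ref=None):
    """Solve min_p ||J @ p - d||^2 + alpha^2 * ||p - p_ref||^2.

    J     : (n_obs, n_par) sensitivity (Jacobian) matrix
    d     : (n_obs,) observations
    alpha : regularization weight (larger -> solution pulled toward p_ref)
    p_ref : preferred parameter values (defaults to zeros)
    """
    n_par = J.shape[1]
    if p_ref is None:
        p_ref = np.zeros(n_par)
    # Normal equations with the Tikhonov term added to the diagonal,
    # which keeps the system solvable even when J alone is rank-deficient.
    A = J.T @ J + alpha**2 * np.eye(n_par)
    b = J.T @ d + alpha**2 * p_ref
    return np.linalg.solve(A, b)

# Underdetermined toy calibration: 2 observations, 3 parameters.
J = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0]])
d = np.array([2.0, 2.0])
p = tikhonov_solve(J, d, alpha=1e-3)
print(np.round(p, 3))  # → [0.667 1.333 0.667]
```

With a small `alpha` the solution fits the observations while defaulting to the smallest-departure parameter set, which is the essence of the "stable parameter estimation" the text describes.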
Calibration of Gurson-type models for porous sheet metals with anisotropic non-quadratic plasticity
NASA Astrophysics Data System (ADS)
Gologanu, M.; Kami, A.; Comsa, D. S.; Banabic, D.
2016-08-01
The growth and coalescence of voids in sheet metals are not only the main active mechanisms in the final stages of fracture in a necking band, but they also contribute to the forming limits via changes in the normal directions to the yield surface. A widely accepted method to include void effects is the development of a Gurson-type model for the appropriate yield criterion, based on an approximate limit analysis of a unit cell containing a single spherical, spheroidal or ellipsoidal void. We have recently [2] obtained dissipation functions and Gurson-type models for porous sheet metals with ellipsoidal voids and anisotropic non-quadratic plasticity, including yield criteria based on linear transformations (Yld91 and Yld2004-18p) and a pure plane stress yield criterion (BBC2005). These Gurson-type models contain several parameters that depend on the void and cell geometries and on the selected yield criterion. Best results are obtained when these key parameters are calibrated via numerical simulations using the same unit cell and a few representative loading conditions. The single most important such loading condition corresponds to a pure hydrostatic macroscopic stress (pure pressure), and the corresponding velocity field found during the solution of the limit analysis problem describes the expansion of the cavity. However, for the case of sheet metals, the condition of plane stress precludes macroscopic stresses with large triaxiality or ratio of mean stress to equivalent stress, including the pure hydrostatic case. Also, pure plane stress yield criteria like BBC2005 must first be extended to 3D stresses before attempting to develop a Gurson-type model, and such extensions are purely phenomenological with no due account for the out-of-plane anisotropic properties of the sheet. Therefore, we propose a new calibration method for Gurson-type models that uses only boundary conditions compatible with the plane stress requirement. For each such boundary condition we use
NASA Astrophysics Data System (ADS)
Vazdekis, A.; Cenarro, A. J.; Gorgas, J.; Cardiel, N.; Peletier, R. F.
2003-04-01
We present a new evolutionary stellar population synthesis model, which predicts spectral energy distributions for single-age single-metallicity stellar populations (SSPs) at resolution 1.5 Å (FWHM) in the spectral region of the near-infrared CaII triplet feature. The main ingredient of the model is a new extensive empirical stellar spectral library that has been recently presented by Cenarro et al., which is composed of more than 600 stars with an unprecedented coverage of the stellar atmospheric parameters. Two main products of interest for stellar population analysis are presented. The first is a spectral library for SSPs with metallicities -1.7 < [Fe/H] < +0.2, a large range of ages (0.1-18 Gyr) and initial mass function (IMF) types. They are well suited to modelling galaxy data, since the SSP spectra, with flux-calibrated response curves, can be smoothed to the resolution of the observational data, taking into account the internal velocity dispersion of the galaxy, allowing the user to analyse the observed spectrum in its own system. We also produce integrated absorption-line indices (namely CaT*, CaT and PaT) for the same SSPs in the form of equivalent widths. We find the following behaviour for the CaII triplet feature in old-aged SSPs: (i) the strength of the CaT* index does not change much with time for all metallicities for ages larger than ~3 Gyr; (ii) this index shows a strong dependence on metallicity for values below [M/H]~-0.5 and (iii) for larger metallicities this feature does not show a significant dependence either on age or on the metallicity, being more sensitive to changes in the slope of power-like IMF shapes. The SSP spectra have been calibrated with measurements for globular clusters by Armandroff & Zinn, which are well reproduced, probing the validity of using the integrated CaII triplet feature for determining the metallicities of these systems. Fitting the models to two early-type galaxies of different luminosities (NGC 4478 and 4365
Brasil, Beatriz; Bettencourt da Silva, Ricardo J N; Camões, M Filomena G F C; Salgueiro, Pedro A S
2013-12-01
The linear weighted regression model (LW) can be used to calibrate analytical instrumentation in a range of quantities (e.g. concentration or mass) wider than possible by the linear unweighted regression model, LuW (i.e. the least squares regression model), since this model can be applied when signals are not equally precise through the calibration range. If precision of signals varies within the calibration range, the regression line should be defined taking into account that more precise signals are more reliable and should count more to define regression parameters. Nevertheless, the LW requires the determination of the variation of signal precision through the calibration range. Typically, this information is collected experimentally for each calibration, requiring a large number of replicate collections of signals of calibrators. This work proposes reducing the number of signals needed to perform LW calibrations by developing models of weighting factors robust to daily variations of instrument sensitivity. These models were applied to the determination of the ionic composition of the water soluble fraction of explosives. The adequacy of the developed models was tested through the analysis of control standards, certified reference materials and the ion balance of anions and cations in aqueous extracts of explosives, considering the measurement uncertainty estimated by detailed metrological models. The high success rate of the comparisons between estimated and known quantity values of reference solutions, considering results uncertainty, proves the validity of developed metrological models. The relative expanded measurement uncertainty of single determinations ranged from 1.93% to 35.7% for calibrations performed along 4 months. PMID:24267095
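The weighted-versus-unweighted distinction above can be made concrete with a minimal weighted least-squares fit, where each calibrator signal is weighted by the inverse of its modeled variance. The weight model (noise proportional to concentration) and all numbers are assumptions for illustration, not the paper's metrological models.

```python
import numpy as np

def weighted_linear_fit(x, y, w):
    """Weighted least-squares fit y ≈ a + b*x with weights w (e.g. 1/s_i**2)."""
    W = np.sum(w)
    xbar = np.sum(w * x) / W          # weighted means replace plain means
    ybar = np.sum(w * y) / W
    b = np.sum(w * (x - xbar) * (y - ybar)) / np.sum(w * (x - xbar) ** 2)
    a = ybar - b * xbar
    return a, b

# Calibration standards over a wide concentration range. The assumed
# precision model says signal noise is proportional to concentration,
# so weights 1/(rel_sd * x)^2 let the precise low-level standards count more.
x = np.array([1.0, 2.0, 5.0, 10.0, 20.0])
y = 0.5 + 2.0 * x                # ideal instrument response (intercept, slope)
w = 1.0 / (0.02 * x) ** 2        # weight model, a stated assumption
a, b = weighted_linear_fit(x, y, w)
print(round(a, 3), round(b, 3))  # → 0.5 2.0
```

Replacing the experimentally replicated variances with a robust weight model, as the paper proposes, means `w` comes from a fitted precision function rather than from fresh replicates at every calibration.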
Modeling Spectralon's Bidirectional Reflectance for In-flight Calibration of Earth-Orbiting Sensors
NASA Technical Reports Server (NTRS)
Flasse, Stephane P.; Verstraete, Michel M.; Pinty, Bernard; Bruegge, Carol J.
1993-01-01
The in-flight calibration of the EOS Multi-angle Imaging SpectroRadiometer (MISR) will be achieved, in part, by observing deployable Spectralon panels. This material reflects light diffusely, and allows all cameras to view a near constant radiance field. This is particularly true when a panel is illuminated near the surface normal. To meet the challenging MISR calibration requirements, however, very accurate knowledge of the panel reflectance must be known for all utilized angles of illumination, and for all camera and monitoring photodiode view angles. It is believed that model predictions of the panel's Bidirectional Reflectance Distribution Function (BRDF) can be used in conjunction with a measurements program to provide the required characterization. This paper describes the results of a model inversion which was conducted using measured Spectralon BRDF data at several illumination angles. Four physical parameters of the material were retrieved, and are available for use with the model to predict reflectance for any arbitrary illumination or view angle. With these data the root mean square difference between the model and the observations is currently of the order of the noise in the data, at about +/- 1%. With this success the model will now be used in a variety of future studies, including the development of a measurements test plan, the validation of these data, and the prediction of a new BRDF profile, should the material degrade in space.
Conaway, Jeffrey S.; Moran, Edward H.
2004-01-01
Bathymetric and hydraulic data were collected by the U.S. Geological Survey on the Tanana River in proximity to Alaska Department of Transportation and Public Facilities' bridge number 505 at mile 80.5 of the Alaska Highway. Data were collected from August 7-9, 2002, over an approximate 5,000- foot reach of the river. These data were combined with topographic data provided by Alaska Department of Transportation and Public Facilities to generate a two-dimensional hydrodynamic model. The hydrodynamic model was calibrated with water-surface elevations, flow velocities, and flow directions collected at a discharge of 25,600 cubic feet per second. The calibrated model was then used for a simulation of the 100-year recurrence interval discharge of 51,900 cubic feet per second. The existing bridge piers were removed from the model geometry in a second simulation to model the hydraulic conditions in the channel without the piers' influence. The water-surface elevations, flow velocities, and flow directions from these simulations can be used to evaluate the influence of the piers on flow hydraulics and will assist the Alaska Department of Transportation and Public Facilities in the design of a replacement bridge.
NASA Astrophysics Data System (ADS)
Yang, Kun; Zhu, La; Chen, Yingying; Zhao, Long; Qin, Jun; Lu, Hui; Tang, Wenjun; Han, Menglei; Ding, Baohong; Fang, Nan
2016-02-01
Soil moisture is a key variable in the climate system, and its accurate simulation needs effective soil parameter values. Conventional approaches may obtain soil parameter values at point scale, but they are costly and not efficient at the grid scale (10-100 km) of current climate models. This study explores the possibility to estimate soil parameter values by assimilating AMSR-E (Advanced Microwave Scanning Radiometer for Earth Observing System) brightness temperature (TB) data. In the assimilation system, the TB is simulated by the coupled system of a land surface model (LSM) and a radiative transfer model (RTM), and the simulation errors highly depend on parameters in both the LSM and the RTM. Thus, sensitive soil parameters may be inversely estimated through minimizing the TB errors. A crucial step for the parameter estimation is made to suppress the contamination of uncertainties in atmospheric forcing data. The effectiveness of the estimated parameter values is evaluated against intensive measurements of soil parameters and soil moisture in three grasslands of the Tibetan Plateau and the Mongolian Plateau. The results indicate that this satellite data-based approach can improve the data quality of soil porosity, a key parameter for soil moisture modeling, and LSM simulations with the estimated parameter values reasonably reproduce the measured soil moisture. This demonstrates that it is feasible to calibrate LSMs for soil moisture simulations at grid scale by assimilating microwave satellite data, although more efforts are expected to improve the robustness of the model calibration.
Calibration of GEOtop for a Mountainous Watershed—a Hydrological Land-Surface Model.
NASA Astrophysics Data System (ADS)
Fullhart, A. T.; Kelleners, T.
2015-12-01
GEOtop is a distributed finite-difference hydrological land-surface model with a built-in snow evolution package. Ongoing model calibrations and solutions are presented for a very small, low-order watershed within a forested mountain range at ~10,000 ft. elevation. The catchment has a hydrological budget that is dominated by snow input. During model calibration, potential configurations for spatial discretization and resolution are tested by comparison to field measurements, as are alternative soil properties and surface runoff parameters. Also demonstrated is the effect of variable geomorphology as it relates to the energy budget and the subsequent distribution of modeled outputs. Within the larger scope of the WYCEHG research group (i.e. The Wyoming Center for Environmental Hydrology and Geophysics), which works towards a multi-disciplinary approach to field modeling, additional complexities beyond stream flow and soil moisture can be conceptualized and tested based on measurements of snowpacks, evapotranspiration, and geophysical imaging. A combination of these gives a better understanding of critical components of the hydrological balance, some of which are in states of flux, e.g., tree cover (due to beetle-kill), and future climate change scenarios.
Cao, Jianping; Xiong, Jianyin; Wang, Lixin; Xu, Ying; Zhang, Yinping
2016-09-01
Solid-phase microextraction (SPME) is regarded as a nonexhaustive sampling technique with a smaller extraction volume and a shorter extraction time than traditional sampling techniques and is hence widely used. The SPME sampling process is affected by the convection or diffusion effect along the coating surface, but this factor has seldom been studied. This paper derives an analytical model to characterize SPME sampling for semivolatile organic compounds (SVOCs) as well as for volatile organic compounds (VOCs) by considering the surface mass transfer process. Using this model, the chemical concentrations in a sample matrix can be conveniently calculated. In addition, the model can be used to determine the characteristic parameters (partition coefficient and diffusion coefficient) for typical SPME chemical samplings (SPME calibration). Experiments using SPME samplings of two typical SVOCs, dibutyl phthalate (DBP) in sealed chamber and di(2-ethylhexyl) phthalate (DEHP) in ventilated chamber, were performed to measure the two characteristic parameters. The experimental results demonstrated the effectiveness of the model and calibration method. Experimental data from the literature (VOCs sampled by SPME) were used to further validate the model. This study should prove useful for relatively rapid quantification of concentrations of different chemicals in various circumstances with SPME. PMID:27476381
Mathematical Model and Calibration Procedure of a PSD Sensor Used in Local Positioning Systems
Rodríguez-Navarro, David; Lázaro-Galilea, José Luis; Bravo-Muñoz, Ignacio; Gardel-Vicente, Alfredo; Domingo-Perez, Francisco; Tsirigotis, Georgios
2016-01-01
Here, we propose a mathematical model and a calibration procedure for a PSD (position sensitive device) sensor equipped with an optical system, to enable accurate measurement of the angle of arrival of one or more beams of light emitted by infrared (IR) transmitters located at distances of between 4 and 6 m. To achieve this objective, it was necessary to characterize the intrinsic parameters that model the system and obtain their values. This first approach was based on a pin-hole model, to which system nonlinearities were added, and this was used to model the points obtained with the nA currents provided by the PSD. In addition, we analyzed the main sources of error, including PSD sensor signal noise, gain factor imbalances and PSD sensor distortion. The results indicated that the proposed model and method provided satisfactory calibration and yielded precise parameter values, enabling accurate measurement of the angle of arrival with a low degree of error, as evidenced by the experimental results. PMID:27649189
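The pin-hole model with added nonlinearities described above can be sketched as an angle-of-arrival computation with a polynomial radial-distortion correction. The functional form, focal length, and coefficients below are assumptions for illustration, not the authors' calibrated model.

```python
import math

def angle_of_arrival(x, y, f, k1=0.0, k2=0.0):
    """Pin-hole model with a simple radial-distortion correction.

    x, y   : spot position on the PSD surface (same units as f)
    f      : effective focal length of the optics (hypothetical value)
    k1, k2 : distortion coefficients obtained from calibration
             (the polynomial form is an assumed nonlinearity model)
    Returns the incidence angle of the IR beam in degrees.
    """
    r = math.hypot(x, y)                          # radial spot distance
    r_corr = r * (1.0 + k1 * r**2 + k2 * r**4)    # undo lens nonlinearity
    return math.degrees(math.atan2(r_corr, f))

# With an ideal lens (k1 = k2 = 0), a spot at radial distance r = f
# corresponds to a 45-degree angle of arrival.
print(round(angle_of_arrival(3.0, 4.0, 5.0), 1))  # → 45.0
```

Calibration in this framing means estimating `f`, `k1`, `k2` (plus gain imbalances and noise terms) from spots produced by transmitters at known angles.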
NASA Astrophysics Data System (ADS)
Tolson, B. A.; Shoemaker, C. A.; Méndez, F.; Regis, R.
2003-12-01
This study compares multiple heuristic optimization algorithms for automatic calibration of a watershed model to flow and water quality data. The automatic calibration of the watershed model in this study involves simultaneously calibrating the model to flow, suspended sediment, particulate and total dissolved phosphorus data by modifying up to 20 model parameters. A single-objective function is defined by a weighted sum of the measures of model and data agreement for each of the measured constituents. The heuristic optimization algorithms compared in this study are the Shuffled Complex Evolution (SCE-UA) algorithm developed by Duan et al., a Genetic Algorithm (GA) and a newly developed function approximation algorithm by Regis and Shoemaker described in another paper appearing in this session. Although the SCE and GA algorithms have been previously used to calibrate hydrologic models, their application to calibration of watershed models to measured sediment or nutrient data is much less common. This is the first application of a function approximation algorithm to the calibration of a watershed model to flow, sediment and phosphorus data. The watershed model applied in this study is a modified version of the Soil and Water Assessment Tool (SWAT2000). The case study area is a small (37 km2), mainly rural watershed in Upstate New York called Town Brook that drains to New York City's (NYC) drinking water supply system. Town Brook is a sub-watershed in NYC's Cannonsville drinking water reservoir watershed (1178 km2). Mitigating phosphorus loading to this and other NYC drinking water reservoirs is critical to avoid construction of an estimated 8 billion dollar water filtration plant. A SWAT2000 model was previously developed by the authors to model phosphorus loading from the entire watershed to the Cannonsville Reservoir. Initial SWAT2000 model parameters for the Town Brook scale model were derived from the larger scale Cannonsville Basin model. This study is focused
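A weighted-sum single objective over several measured constituents, as described above, can be sketched as follows. The normalization (sum of squared errors scaled by the observed variance) and the weights are illustrative choices, not the authors' exact formulation.

```python
def weighted_objective(sim, obs, weights):
    """Collapse a multi-constituent calibration into one number to minimize.

    sim, obs : dicts mapping constituent name -> list of values
               (e.g. "flow", "sediment", dissolved/particulate phosphorus)
    weights  : dict mapping constituent name -> relative weight
    """
    total = 0.0
    for name, w in weights.items():
        s, o = sim[name], obs[name]
        mean_o = sum(o) / len(o)
        sse = sum((si - oi) ** 2 for si, oi in zip(s, o))
        var = sum((oi - mean_o) ** 2 for oi in o) or 1.0
        total += w * sse / var   # variance-scaling makes constituents comparable
    return total

obs = {"flow": [1.0, 2.0, 3.0], "sediment": [10.0, 20.0, 30.0]}
sim_perfect = {"flow": [1.0, 2.0, 3.0], "sediment": [10.0, 20.0, 30.0]}
sim_biased = {"flow": [1.5, 2.5, 3.5], "sediment": [10.0, 20.0, 30.0]}
weights = {"flow": 0.7, "sediment": 0.3}
print(weighted_objective(sim_perfect, obs, weights))  # → 0.0
```

An optimizer such as SCE-UA or a GA would then adjust the 20 model parameters to drive this single value down across all constituents at once.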
Modeling and simulation research of a new calibration platform for visual test system
NASA Astrophysics Data System (ADS)
Bo, Liu; Dong, Ye; Che, Rensheng
2009-05-01
Measuring the 3D motion parameters of a rocket motor nozzle with vision-based techniques is a new approach, but the dynamic measurement precision of the vision measuring system must be evaluated. A calibration platform carrying a nozzle model can simulate the actual motion of a rocket motor nozzle and supply standard motion parameters for dynamic calibration of the vision measurement system. After analyzing the motion of a typical rocket motor nozzle, a new parallel calibration table is proposed. The mechanism consists of a base, a moving table and three links, and has three degrees of freedom: rotation about the X and Y coordinate axes and displacement along the Z coordinate axis. The rotation angles are measured by photoelectric encoders, and the displacement is measured by a grating scale. The closed-loop test system has two main features. First, the rotation center is fixed by the cross shaft. Second, the position and pose of the table can be measured with high precision. The forward (normal) kinematic solution for the position and pose of the table is then presented. A virtual prototype is constructed in Pro/E and the motion simulation is performed in Adams, verifying the correctness of the forward kinematic solution.
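A minimal sketch of the forward position-pose kinematics implied above: two rotations about fixed axes plus a vertical translation. The rotation order (X first, then Y) and the function names are assumptions for illustration:

```python
import numpy as np

def rot_x(a):
    """Rotation matrix about the X axis by angle a (radians)."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[1.0, 0.0, 0.0], [0.0, c, -s], [0.0, s, c]])

def rot_y(b):
    """Rotation matrix about the Y axis by angle b (radians)."""
    c, s = np.cos(b), np.sin(b)
    return np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])

def table_pose(alpha, beta, dz):
    """Pose of the moving table: rotate about X, then Y, then
    translate along Z (assumed order; encoders give alpha, beta,
    the grating scale gives dz)."""
    R = rot_y(beta) @ rot_x(alpha)
    t = np.array([0.0, 0.0, dz])
    return R, t
```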
Cone-Probe Rake Design and Calibration for Supersonic Wind Tunnel Models
NASA Technical Reports Server (NTRS)
Won, Mark J.
1999-01-01
A series of experimental investigations were conducted at the NASA Langley Unitary Plan Wind Tunnel (UPWT) to calibrate cone-probe rakes designed to measure the flow field on 1-2% scale, high-speed wind tunnel models from Mach 2.15 to 2.4. The rakes were developed from a previous design that exhibited unfavorable measurement characteristics caused by a high probe spatial density and flow blockage from the rake body. Calibration parameters included Mach number, total pressure recovery, and flow angularity. Reference conditions were determined from a localized UPWT test section flow survey using a 10° supersonic wedge probe. Test section Mach number and total pressure were determined using a novel iterative technique that accounted for boundary layer effects on the wedge surface. Cone-probe measurements were correlated to the surveyed flow conditions using analytical functions and recursive algorithms that resolved Mach number, pressure recovery, and flow angle to within ±0.01, ±1%, and ±0.1°, respectively, for angles of attack and sideslip between ±8°. Uncertainty estimates indicated the overall cone-probe calibration accuracy was strongly influenced by the propagation of measurement error into the calculated results.
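The kind of recursive Mach-number solution mentioned above can be illustrated with the standard Rayleigh pitot formula, inverted by bisection. This is a generic compressible-flow sketch, not the specific UPWT wedge-probe procedure; the bracketing interval and tolerances are assumptions:

```python
import math

GAMMA = 1.4  # ratio of specific heats for air

def rayleigh_pitot_ratio(M, g=GAMMA):
    """Pitot-to-static pressure ratio behind a normal shock (M > 1),
    per the Rayleigh pitot formula."""
    a = ((g + 1) * M * M / 2.0) ** (g / (g - 1))
    b = ((g + 1) / (2.0 * g * M * M - (g - 1))) ** (1.0 / (g - 1))
    return a * b

def mach_from_pitot(ratio, lo=1.001, hi=5.0, tol=1e-10):
    """Recover Mach number from a measured pressure ratio by bisection;
    the ratio is monotonically increasing in M, so the root is unique."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if rayleigh_pitot_ratio(mid) < ratio:
            lo = mid
        else:
            hi = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)
```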
Sorption of Eu(III) on granite: EPMA, LA-ICP-MS, batch and modeling studies.
Fukushi, Keisuke; Hasegawa, Yusuke; Maeda, Koushi; Aoi, Yusuke; Tamura, Akihiro; Arai, Shoji; Yamamoto, Yuhei; Aosai, Daisuke; Mizuno, Takashi
2013-11-19
Eu(III) sorption on granite was assessed using combined microscopic and macroscopic approaches in neutral to acidic conditions, where the mobility of Eu(III) is generally considered to be high. Polished thin sections of the granite were reacted with solutions containing 10 μM of Eu(III) and were analyzed using EPMA and LA-ICP-MS. On most of the biotite grains, Eu enrichment up to 6 wt % was observed. The Eu-enriched parts of biotite commonly lose K, the interlayer cation of biotite, indicating that the sorption mode of Eu(III) by the biotite is cation exchange in the interlayer. The distributions of Eu appeared along the original cracks of the biotite. Those occurrences indicate that prior water-rock interaction along the cracks altered the biotite so that it developed an affinity for Eu(III). Batch Eu(III) sorption experiments on granite and biotite powders were conducted as functions of pH, Eu(III) loading, and ionic strength. The macroscopic sorption behavior of biotite was consistent with that of granite. At pH > 4, there was little pH dependence but strong ionic strength dependence of Eu(III) sorption. At pH < 4, the sorption of Eu(III) abruptly decreased with decreasing pH. The sorption behavior at pH > 4 was reproduced reasonably well by a model considering single-site cation exchange reactions. The decrease of Eu(III) sorption at pH < 4 was explained by the occupation of exchangeable sites by dissolved cationic species such as Al and Fe from granite and biotite in low-pH conditions. Granites are complex mineral assemblages. However, the combined microscopic and macroscopic approaches revealed that elementary reactions of a single mineral phase can be representative of the bulk sorption reaction in complex mineral assemblages. PMID:24171426
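The strong ionic-strength (but weak pH) dependence reported above is what single-site Eu³⁺/Na⁺ cation exchange predicts: under trace Eu loading the distribution ratio scales as the inverse cube of the background cation concentration. The trace-loading simplification, the selectivity value, and the function name below are illustrative assumptions, not the paper's fitted model:

```python
def distribution_ratio(k_sel, cec, na_molar):
    """Schematic trace-level Eu(3+)/Na(+) exchange on a single site:
        Eu3+ + 3 NaX <-> EuX3 + 3 Na+
    Under trace Eu loading the Na-occupied site fraction is ~1, so the
    distribution ratio Rd scales as k_sel * CEC / [Na]^3."""
    return k_sel * cec / na_molar ** 3
```

The inverse-cube scaling means a tenfold drop in background Na concentration raises the predicted distribution ratio a thousandfold, which is why ionic strength, rather than pH, dominates the sorption at pH > 4.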
NASA Astrophysics Data System (ADS)
Verstraeten, W. W.; Minnaert, M.; Meiresonne, L.; van Slycken, J.; Lust, N.; Muys, B.; Feyen, J.
Knowledge of hydrology, and particularly of water use in forest ecosystems, is rather scarce in Flanders. In order to assess the impact of forests on catchment hydrology, a model approach is required based on available or easily measurable parameters on meteorology, forest patrimonium and soil cover. A pragmatic approach to calculating water use by forests is to implement a soil water balance model, which enables a reasonable estimate of evapotranspiration (ET) despite the fragmented forest, and therefore the strong boundary effects, typical of Flanders. The scientific objectives of this project are multiple: the calibration (i) and validation (ii) of the water balance model WAVE (Water and Agrochemicals in soil, crop and Vadose Environment) to indirectly calculate the evapotranspiration of forests (for oak, beech, ash, poplar and pine) on 17 intensively and extensively sampled plots; verification of the evapotranspiration from the WAVE output with sap-flow measurements (iii); and comparison of the evapotranspiration of forests with that of pasture and cropland (iv). Measurements of rainfall, throughfall, stemflow, capillary rise from the groundwater table (possibly recharge), percolation and changes in soil water content are conducted on a weekly basis, except in winter (every two weeks). From these water balance terms the forest evapotranspiration is derived. The Leaf Area Index was obtained using hemispherical canopy images. This parameter is used to separate the soil evaporation and tree transpiration components of the simulated evapotranspiration. Sap-flow measurements are gathered using the Heat Field Deformation method (Čermák and Nadezhdina, 1998) in four plots (2 pine stands, poplar, beech/oak). The preliminary results of the calibration and validation of the soil water balance model WAVE for forest stands in Flanders are shown in part 2.
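Deriving ET as the closure term of the measured water balance terms can be sketched as below; the sign conventions and the neglect of surface runoff are assumptions made for illustration:

```python
def forest_et(rainfall, capillary_rise, percolation, delta_storage):
    """Evapotranspiration (mm per measurement interval) as the residual
    of the plot water balance: water inputs minus drainage minus the
    change in soil water storage. Surface runoff is assumed negligible
    for the forest plots; interception loss (rainfall minus throughfall
    and stemflow) is part of the ET residual."""
    return rainfall + capillary_rise - percolation - delta_storage
```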
Xu, Lixin
2012-04-01
To date, the redshifts of gamma-ray bursts (GRBs) extend to z ∼ 8, which makes them a complementary probe of dark energy to Type Ia supernovae (SNe Ia). However, the calibration of GRBs remains a major challenge when they are used to constrain cosmological models. Although the absolute magnitude of GRBs is still unknown, the slopes of GRB correlations can be used to constrain dark energy in a completely cosmology-model-independent way. In this paper, we follow Wang's model-independent distance measurement method and calculate the distances of 109 GRB events via the so-called Amati relation. We then use the obtained model-independent distances to constrain the ΛCDM model as an example.
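A minimal sketch of the ΛCDM distance computation that such constraints rest on: the flat-ΛCDM luminosity distance by numerical integration. The parameter values and function names are illustrative, and this is the standard textbook relation rather than Wang's specific distance measures:

```python
import math

def comoving_distance(z, omega_m=0.27, h=0.7, n=1000):
    """Flat LambdaCDM comoving distance in Mpc, by trapezoidal
    integration of 1/E(z) with E(z) = sqrt(Om(1+z)^3 + (1-Om))."""
    c = 299792.458          # speed of light, km/s
    H0 = 100.0 * h          # Hubble constant, km/s/Mpc
    def inv_E(zz):
        return 1.0 / math.sqrt(omega_m * (1.0 + zz) ** 3 + (1.0 - omega_m))
    dz = z / n
    s = 0.5 * (inv_E(0.0) + inv_E(z))
    for i in range(1, n):
        s += inv_E(i * dz)
    return (c / H0) * s * dz

def luminosity_distance(z, **kw):
    """Luminosity distance D_L = (1+z) * comoving distance (flat space)."""
    return (1.0 + z) * comoving_distance(z, **kw)
```

Comparing distance moduli built from `luminosity_distance` against calibrated GRB (or SN Ia) distances is the basic operation behind the parameter constraints the abstract describes.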
Distribution system model calibration with big data from AMI and PV inverters
Peppanen, Jouni; Reno, Matthew J.; Broderick, Robert J.; Grijalva, Santiago
2016-03-03
Efficient management and coordination of distributed energy resources with advanced automation schemes requires accurate distribution system modeling and monitoring. Big data from smart meters and photovoltaic (PV) micro-inverters can be leveraged to calibrate existing utility models. This paper presents computationally efficient distribution system parameter estimation algorithms to improve the accuracy of existing utility feeder radial secondary circuit model parameters. The method is demonstrated using a real utility feeder model with advanced metering infrastructure (AMI) and PV micro-inverters, along with alternative parameter estimation approaches that can be used to improve secondary circuit models when limited measurement data is available. Lastly, the parameter estimation accuracy is demonstrated for both a three-phase test circuit with typical secondary circuit topologies and single-phase secondary circuits in a real mixed-phase test system.
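One common form of such parameter estimation, fitting a service-drop series impedance to AMI voltage and power readings with linear least squares, can be sketched as follows. The linearized voltage-drop model and all names here are assumptions for illustration, not the paper's specific algorithm:

```python
import numpy as np

def estimate_line_params(v_src, v_meter, p_kw, q_kvar):
    """Estimate the series resistance R and reactance X (ohms) of a
    secondary service drop from paired AMI measurements, using the
    linearized voltage-drop model dV ~ (R*P + X*Q) / V and solving
    the overdetermined system in the least-squares sense."""
    v_src, v_meter = np.asarray(v_src, float), np.asarray(v_meter, float)
    p = np.asarray(p_kw, float) * 1e3    # active power, W
    q = np.asarray(q_kvar, float) * 1e3  # reactive power, var
    dv = v_src - v_meter                 # measured voltage drop, V
    A = np.column_stack([p / v_meter, q / v_meter])
    (r, x), *_ = np.linalg.lstsq(A, dv, rcond=None)
    return r, x
```

With many AMI intervals per meter, the same least-squares structure scales to whole-feeder calibration, which is where the computational-efficiency concerns in the abstract come in.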
Thermodynamics of shape memory alloy wire: Modeling, experimental calibration, and simulation
NASA Astrophysics Data System (ADS)
Chang, Bi-Chiau
A thermomechanical model for a shape memory alloy (SMA) wire under uniaxial loading is implemented in a finite element framework, and simulation results are compared with mechanical and infrared experimental data. The constitutive model is a one-dimensional strain-gradient continuum model of an SMA wire element, including two internal field variables, possible unstable mechanical behavior, and the relevant ther