Science.gov

Sample records for iii model calibration

  1. Calibrated Properties Model

    SciTech Connect

    C. Ahlers; H. Liu

    2000-03-12

    The purpose of this Analysis/Model Report (AMR) is to document the Calibrated Properties Model that provides calibrated parameter sets for unsaturated zone (UZ) flow and transport process models for the Yucca Mountain Site Characterization Project (YMP). This work was performed in accordance with the ''AMR Development Plan for U0035 Calibrated Properties Model REV00''. These calibrated property sets include matrix and fracture parameters for the UZ Flow and Transport Model (UZ Model), drift seepage models, drift-scale and mountain-scale coupled-processes models, and Total System Performance Assessment (TSPA) models, as well as models used by Performance Assessment (PA) and other participating national laboratories and government agencies. These process models provide the necessary framework to test conceptual hypotheses of flow and transport at different scales and to predict flow and transport behavior under a variety of climatic and thermal-loading conditions.

  2. Calibrated Properties Model

    SciTech Connect

    T. Ghezzehej

    2004-10-04

    The purpose of this model report is to document the calibrated properties model that provides calibrated property sets for unsaturated zone (UZ) flow and transport process models (UZ models). The calibration of the property sets is performed through inverse modeling. This work followed, and was planned in, ''Technical Work Plan (TWP) for: Unsaturated Zone Flow Analysis and Model Report Integration'' (BSC 2004 [DIRS 169654], Sections 1.2.6 and 2.1.1.6). Direct inputs to this model report were derived from the following upstream analysis and model reports: ''Analysis of Hydrologic Properties Data'' (BSC 2004 [DIRS 170038]); ''Development of Numerical Grids for UZ Flow and Transport Modeling'' (BSC 2004 [DIRS 169855]); ''Simulation of Net Infiltration for Present-Day and Potential Future Climates'' (BSC 2004 [DIRS 170007]); ''Geologic Framework Model'' (GFM2000) (BSC 2004 [DIRS 170029]). Additionally, this model report incorporates errata of the previous version and closure of the Key Technical Issue agreement TSPAI 3.26 (Section 6.2.2 and Appendix B), and it is revised for improved transparency.

  3. Calibrated Properties Model

    SciTech Connect

    H. H. Liu

    2003-02-14

    This report documents the methodologies and the data used for developing rock property sets for three infiltration maps. Model calibration is necessary to obtain parameter values appropriate for the scale of the process being modeled. Although some hydrogeologic property data (prior information) are available, these data cannot be directly used to predict flow and transport processes because they were measured on scales smaller than those characterizing property distributions in the models used for prediction. Since model calibrations were done directly on the scales of interest, the upscaling issue was automatically considered. In addition, joint use of data and prior information in inversions can further increase the reliability of the developed parameters compared with the prior information alone. Rock parameter sets were developed for both the mountain and drift scales because of the scale-dependent behavior of fracture permeability. Note that these parameter sets, except those for faults, were determined using 1-D simulations; therefore, they cannot be directly used for modeling lateral flow, such as that caused by perched water, in the unsaturated zone (UZ) of Yucca Mountain. Further calibration may be needed for two- and three-dimensional modeling studies. As discussed in Section 6.4, uncertainties for these calibrated properties are difficult to determine accurately, because simplified methods are inaccurate for this complex problem and more rigorous methods are extremely computationally expensive. One estimate of uncertainty that may be useful to investigators using these properties is the uncertainty assigned to the prior information. In most cases, the inversions did not change the properties very much with respect to the prior information. The Output DTNs (including the input and output files for all runs) from this study are given in Section 9.4.

  4. Bayesian Calibration of Microsimulation Models.

    PubMed

    Rutter, Carolyn M; Miglioretti, Diana L; Savarino, James E

    2009-12-01

    Microsimulation models that describe disease processes synthesize information from multiple sources and can be used to estimate the effects of screening and treatment on cancer incidence and mortality at a population level. These models are characterized by simulation of individual event histories for an idealized population of interest. Microsimulation models are complex and invariably include parameters that are not well informed by existing data. Therefore, a key component of model development is the choice of parameter values. Microsimulation model parameter values are selected to reproduce expected or known results through the process of model calibration. Calibration may be done by perturbing model parameters one at a time or by using a search algorithm. As an alternative, we propose a Bayesian method to calibrate microsimulation models that uses Markov chain Monte Carlo. We show that this approach converges to the target distribution and use a simulation study to demonstrate its finite-sample performance. Although computationally intensive, this approach has several advantages over previously proposed methods, including the use of statistical criteria to select parameter values, simultaneous calibration of multiple parameters to multiple data sources, incorporation of information via prior distributions, description of parameter identifiability, and the ability to obtain interval estimates of model parameters. We develop a microsimulation model for colorectal cancer and use our proposed method to calibrate model parameters. The microsimulation model provides a good fit to the calibration data. We find evidence that some parameters are identified primarily through prior distributions. Our results underscore the need to incorporate multiple sources of variability (i.e., due to calibration data, unknown parameters, and estimated parameters and predicted values) when calibrating and applying microsimulation models.
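
    A minimal sketch of this kind of Bayesian calibration, assuming a toy two-parameter disease model and invented prevalence targets (none of which come from the paper), might use a random-walk Metropolis-Hastings sampler:

      import numpy as np

      rng = np.random.default_rng(0)

      def simulate(params, n=5000):
          """Toy microsimulation: each person develops a lesion with
          probability p_onset, which progresses to cancer with p_prog."""
          p_onset, p_prog = params
          lesions = rng.random(n) < p_onset
          cancers = lesions & (rng.random(n) < p_prog)
          return lesions.mean(), cancers.mean()

      def log_posterior(params, targets, sd=0.005):
          """Gaussian pseudo-likelihood around the calibration targets,
          with flat priors on (0, 1) for both parameters."""
          if not all(0.0 < p < 1.0 for p in params):
              return -np.inf
          sim = simulate(params)
          return -0.5 * sum(((s - t) / sd) ** 2 for s, t in zip(sim, targets))

      targets = (0.30, 0.06)          # invented lesion and cancer prevalence
      theta = np.array([0.5, 0.5])
      lp = log_posterior(theta, targets)
      chain = []
      for _ in range(20000):
          prop = theta + rng.normal(scale=0.02, size=2)   # random-walk proposal
          lp_prop = log_posterior(prop, targets)
          if np.log(rng.random()) < lp_prop - lp:         # Metropolis acceptance
              theta, lp = prop, lp_prop
          chain.append(theta.copy())

      chain = np.array(chain[5000:])  # drop burn-in
      print("posterior means:", chain.mean(axis=0))
      print("95% intervals:", np.percentile(chain, [2.5, 97.5], axis=0).T)

    Because the simulator is stochastic, the likelihood is itself noisy; a large simulated population (or a fixed per-evaluation seed) keeps that noise small in this sketch.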

  5. Model Calibration in Watershed Hydrology

    NASA Technical Reports Server (NTRS)

    Yilmaz, Koray K.; Vrugt, Jasper A.; Gupta, Hoshin V.; Sorooshian, Soroosh

    2009-01-01

    Hydrologic models use relatively simple mathematical equations to conceptualize and aggregate the complex, spatially distributed, and highly interrelated water, energy, and vegetation processes in a watershed. A consequence of process aggregation is that the model parameters often do not represent directly measurable entities and must, therefore, be estimated using measurements of the system inputs and outputs. During this process, known as model calibration, the parameters are adjusted so that the behavior of the model approximates, as closely and consistently as possible, the observed response of the hydrologic system over some historical period of time. This chapter reviews the current state of the art of model calibration in watershed hydrology, with special emphasis on our own contributions over the last few decades. We discuss the historical background that has led to current perspectives, and review different approaches for manual and automatic single- and multi-objective parameter estimation. In particular, we highlight recent developments in the calibration of distributed hydrologic models using parameter dimensionality reduction sampling, parameter regularization and parallel computing.
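
    As a minimal illustration of the calibration loop the chapter describes, the following sketch fits a hypothetical one-bucket rainfall-runoff model to synthetic streamflow by maximising the Nash-Sutcliffe efficiency with a global optimiser; the model structure, parameter ranges and forcing data are all invented for the example:

      import numpy as np
      from scipy.optimize import differential_evolution

      def bucket_model(params, rain, pet):
          """One-bucket rainfall-runoff model: storage capacity smax (mm)
          and a linear recession coefficient k (both hypothetical)."""
          smax, k = params
          s, q = 0.0, []
          for p, e in zip(rain, pet):
              s = min(max(s + p - e, 0.0), smax)  # update storage, spill excess
              q.append(k * s)                     # linear-reservoir outflow
              s -= k * s
          return np.array(q)

      def neg_nse(params, rain, pet, q_obs):
          """Negative Nash-Sutcliffe efficiency (NSE = 1 is a perfect fit)."""
          q_sim = bucket_model(params, rain, pet)
          return -(1.0 - np.sum((q_obs - q_sim) ** 2)
                         / np.sum((q_obs - q_obs.mean()) ** 2))

      rng = np.random.default_rng(1)
      rain = rng.gamma(0.8, 5.0, 365)             # synthetic daily forcing
      pet = np.full(365, 2.0)
      q_obs = bucket_model((120.0, 0.05), rain, pet) + rng.normal(0, 0.1, 365)

      res = differential_evolution(neg_nse, [(10, 300), (0.001, 0.5)],
                                   args=(rain, pet, q_obs), seed=1)
      print("parameters:", res.x, "NSE:", -res.fun)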

  6. Modeling metrology for calibration of OPC models

    NASA Astrophysics Data System (ADS)

    Mack, Chris A.; Raghunathan, Ananthan; Sturtevant, John; Deng, Yunfei; Zuniga, Christian; Adam, Kostas

    2016-03-01

    Optical Proximity Correction (OPC) has continually improved in accuracy over the years by adding more physically based models. Here, we further extend OPC modeling by adding the Analytical Linescan Model (ALM) to account for systematic biases in CD-SEM metrology. The ALM was added to a conventional OPC model calibration flow and the accuracy of the calibrated model with the ALM was compared to the standard model without the ALM using validation data. Without using any adjustable parameters in the ALM, OPC validation accuracy was improved by 5%. While very preliminary, these results give hope that modeling metrology could be an important next step in OPC model improvement.

  7. Calibration of landslide runout models

    NASA Astrophysics Data System (ADS)

    Cepeda, Jose; Alexánder Chávez, José; Cruz Martínez, Celina

    2010-05-01

    A review of the existing procedures for selecting runout model parameters from back analyses shows that these procedures do not allow integrating different types of runout criteria and generally lack a systematic approach. A new method based on Receiver Operating Characteristic (ROC) analysis and aimed at overcoming these limitations is proposed herein. The method consists of estimating discrete classifiers for every runout simulation associated with a set of model parameters. The set of parameters that yields the best prediction is selected using ROC metrics and ROC space. The procedure is illustrated with the back analysis of a rainfall-triggered debris flow that killed 300-500 people in the Metropolitan Area of San Salvador (AMSS), El Salvador. The selected model parameters are used to produce forward predictions for scenarios corresponding to different return periods. The proposed procedure may be useful in the assessment of areas potentially affected by landslides. In turn, this information can be used in producing or updating land-use plans and zonations, similar to the work currently being carried out by the Office for Urban Planning of the Metropolitan Area of San Salvador (OPAMSS), El Salvador. Finally, practical aspects of the application of the method are discussed in the context of calibration at regional scales and considering uncertainty in the input variables.
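
    The core of the proposed selection step can be sketched as follows, assuming each simulation yields a binary runout map that is compared cell by cell with the observed runout extent; the toy maps and parameter values are illustrative:

      import numpy as np

      def roc_point(predicted, observed):
          """Treat every map cell as a binary classification outcome and
          return the (false positive rate, true positive rate) pair."""
          tp = np.sum(predicted & observed)
          fp = np.sum(predicted & ~observed)
          fn = np.sum(~predicted & observed)
          tn = np.sum(~predicted & ~observed)
          return fp / (fp + tn), tp / (tp + fn)

      def best_parameter_set(simulations, observed):
          """Pick the parameter set whose ROC point lies closest to the
          perfect classifier at (0, 1) in ROC space."""
          scores = []
          for params, runout_map in simulations:
              fpr, tpr = roc_point(runout_map, observed)
              scores.append((np.hypot(fpr, 1.0 - tpr), params))
          return min(scores)[1]

      rng = np.random.default_rng(2)
      observed = rng.random((50, 50)) < 0.2       # observed runout extent
      sims = [((mu,), rng.random((50, 50)) < mu) for mu in (0.1, 0.2, 0.3)]
      print(best_parameter_set(sims, observed))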

  8. Calibration of models using groundwater age

    USGS Publications Warehouse

    Sanford, W.

    2011-01-01

    There have been substantial efforts recently by geochemists to determine the age of groundwater (time since water entered the system) and its uncertainty, and by hydrologists to use these data to help calibrate groundwater models. This essay discusses the calibration of models using groundwater age, with conclusions that emphasize what is practical given current limitations rather than theoretical possibilities.

  9. A Method to Test Model Calibration Techniques

    SciTech Connect

    Judkoff, Ron; Polly, Ben; Neymark, Joel

    2016-08-26

    This paper describes a method for testing model calibration techniques. Calibration is commonly used in conjunction with energy retrofit audit models. An audit is conducted to gather information about the building needed to assemble an input file for a building energy modeling tool. A calibration technique is used to reconcile model predictions with utility data, and then the 'calibrated model' is used to predict energy savings from a variety of retrofit measures and combinations thereof. Current standards and guidelines such as BPI-2400 and ASHRAE-14 set criteria for 'goodness of fit' and assume that if the criteria are met, then the calibration technique is acceptable. While it is logical to use the actual performance data of the building to tune the model, it is not certain that a good fit will result in a model that better predicts post-retrofit energy savings. Therefore, the basic idea here is that the simulation program (intended for use with the calibration technique) is used to generate surrogate utility bill data and retrofit energy savings data against which the calibration technique can be tested. This provides three figures of merit for testing a calibration technique: (1) accuracy of the post-retrofit energy savings prediction, (2) closure on the 'true' input parameter values, and (3) goodness of fit to the utility bill data. The paper also discusses the pros and cons of using this synthetic surrogate data approach versus trying to use real data sets from actual buildings.
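
    The surrogate-data test harness can be sketched as follows; the two-parameter stand-in building model, the 30% retrofit assumption, and the error metrics below are illustrative, not taken from the paper:

      import numpy as np

      def building_model(params, hdd):
          """Stand-in audit model: monthly energy from a hypothetical
          envelope coefficient ua and a base load."""
          ua, base = params
          return base + ua * hdd

      def test_calibration(calibrate, true_params=(2.0, 300.0)):
          rng = np.random.default_rng(3)
          hdd = rng.uniform(100, 800, 12)              # heating degree-days
          bills = building_model(true_params, hdd) + rng.normal(0, 20, 12)
          fitted = calibrate(hdd, bills)
          retrofit = lambda p: building_model((0.7 * p[0], p[1]), hdd).sum()
          # 1) accuracy of the predicted post-retrofit savings (30% lower ua)
          true_sav = building_model(true_params, hdd).sum() - retrofit(true_params)
          fit_sav = building_model(fitted, hdd).sum() - retrofit(fitted)
          # 2) closure on the 'true' input parameter values
          param_err = np.linalg.norm(np.subtract(fitted, true_params))
          # 3) goodness of fit to the utility bills (CV of RMSE)
          resid = bills - building_model(fitted, hdd)
          cvrmse = np.sqrt(np.mean(resid ** 2)) / bills.mean()
          return abs(true_sav - fit_sav), param_err, cvrmse

      # Calibration technique under test: ordinary least squares on the bills
      ols = lambda w, b: tuple(np.linalg.lstsq(
          np.column_stack([w, np.ones_like(w)]), b, rcond=None)[0])
      print(test_calibration(ols))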

  10. Calibrating etch model with SEM contours

    NASA Astrophysics Data System (ADS)

    Weisbuch, François; Omran, A.; Jantzen, Kenneth

    2015-03-01

    To ensure high patterning quality, etch effects have to be corrected within the OPC recipe in addition to the traditional lithographic effects. This requires the calibration of an accurate etch model and optimization of its implementation in the OPC flow. Using SEM contours is a promising approach to obtaining numerous and highly reliable measurements, especially of 2D structures, for etch model calibration. A 28nm active layer was selected to calibrate and verify an etch model with 50 structures in total. We optimized the selection of the calibration structures as well as the model density. The implementation of the etch model to adjust the litho target layer allows a significant reduction of weak points. We also demonstrate that the etch model, incorporated into the ORC recipe and run on large designs, can predict many hotspots.

  11. Efficient Calibration of Computationally Intensive Hydrological Models

    NASA Astrophysics Data System (ADS)

    Poulin, A.; Huot, P. L.; Audet, C.; Alarie, S.

    2015-12-01

    A new hybrid optimization algorithm for the calibration of computationally-intensive hydrological models is introduced. The calibration of hydrological models is a blackbox optimization problem where the only information available to the optimization algorithm is the objective function value. In the case of distributed hydrological models, the calibration process is often known to be hampered by computational efficiency issues. Running a single simulation may take several minutes and since the optimization process may require thousands of model evaluations, the computational time can easily expand to several hours or days. A blackbox optimization algorithm, which can substantially improve the calibration efficiency, has been developed. It merges both the convergence analysis and robust local refinement from the Mesh Adaptive Direct Search (MADS) algorithm, and the global exploration capabilities from the heuristic strategies used by the Dynamically Dimensioned Search (DDS) algorithm. The new algorithm is applied to the calibration of the distributed and computationally-intensive HYDROTEL model on three different river basins located in the province of Quebec (Canada). Two calibration problems are considered: (1) calibration of a 10-parameter version of HYDROTEL, and (2) calibration of a 19-parameter version of the same model. A previous study by the authors had shown that the original version of DDS was the most efficient method for the calibration of HYDROTEL, when compared to the MADS and the very well-known SCEUA algorithms. The computational efficiency of the hybrid DDS-MADS method is therefore compared with the efficiency of the DDS algorithm based on a 2000 model evaluations budget. Results show that the hybrid DDS-MADS method can reduce the total number of model evaluations by 70% for the 10-parameter version of HYDROTEL and by 40% for the 19-parameter version without compromising the quality of the final objective function value.
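
    For reference, the DDS component of the hybrid is simple enough to sketch in full; the reflection step at the bounds and the MADS local-refinement stage are omitted here, and the toy objective stands in for a HYDROTEL calibration run:

      import numpy as np

      def dds(objective, bounds, max_evals=2000, r=0.2, seed=0):
          """Dynamically Dimensioned Search (Tolson & Shoemaker, 2007):
          greedy search that perturbs a shrinking random subset of
          parameters as the evaluation budget is consumed."""
          rng = np.random.default_rng(seed)
          lo, hi = np.array(bounds).T
          best = lo + rng.random(lo.size) * (hi - lo)
          f_best = objective(best)
          for i in range(1, max_evals):
              # probability of perturbing each dimension decays with i
              p = 1.0 - np.log(i) / np.log(max_evals)
              mask = rng.random(lo.size) < p
              if not mask.any():
                  mask[rng.integers(lo.size)] = True
              cand = best.copy()
              cand[mask] += r * (hi - lo)[mask] * rng.normal(size=mask.sum())
              cand = np.clip(cand, lo, hi)       # simplified bound handling
              f_cand = objective(cand)
              if f_cand < f_best:                # greedy acceptance
                  best, f_best = cand, f_cand
          return best, f_best

      # Usage on a toy 10-parameter problem standing in for HYDROTEL
      print(dds(lambda x: np.sum((x - 0.3) ** 2), [(0, 1)] * 10)[1])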

  12. Preserving Flow Variability in Watershed Model Calibrations

    EPA Science Inventory

    Background/Question/Methods Although watershed modeling flow calibration techniques often emphasize a specific flow mode, ecological conditions that depend on flow-ecology relationships often emphasize a range of flow conditions. We used informal likelihood methods to investig...

  14. Autotune Calibrates Models to Building Use Data

    ScienceCinema

    2016-09-02

    Models of existing buildings are currently unreliable unless calibrated manually by a skilled professional. Autotune, as the name implies, automates this process by calibrating the model of an existing building to measured data, and is now available as open source software. This enables private businesses to incorporate Autotune into their products so that their customers can more effectively estimate cost savings of reduced energy consumption measures in existing buildings.

  15. Autotune Calibrates Models to Building Use Data

    SciTech Connect

    2016-08-26

    Models of existing buildings are currently unreliable unless calibrated manually by a skilled professional. Autotune, as the name implies, automates this process by calibrating the model of an existing building to measured data, and is now available as open source software. This enables private businesses to incorporate Autotune into their products so that their customers can more effectively estimate cost savings of reduced energy consumption measures in existing buildings.

  16. Scatterer Modeling/Calibration Study

    DTIC Science & Technology

    1992-06-01

    of the corner diffracted field in the section above and Figure 4.5 (which gives the cross section in dBsm), the corner diffracted field has a...Buyukdura, R. J. Marhefka, and W. Ebihara, "Radar cross-section studies, Phase III," Technical Report 716622-1, The Ohio State University Electro...higher order terms are important to obtain more complete lower frequency behavior. In some cases correct amplitude information in lower cross section

  17. A Novel Protocol for Model Calibration in Biological Wastewater Treatment

    NASA Astrophysics Data System (ADS)

    Zhu, Ao; Guo, Jianhua; Ni, Bing-Jie; Wang, Shuying; Yang, Qing; Peng, Yongzhen

    2015-02-01

    Activated sludge models (ASMs) have been widely used for process design, operation and optimization in wastewater treatment plants. However, it is still a challenge to achieve an efficient calibration for reliable application using conventional approaches. Here we propose a novel calibration protocol, the Numerical Optimal Approaching Procedure (NOAP), for the systematic calibration of ASMs. The NOAP consists of three key steps in an iterative scheme: (i) global sensitivity analysis for factor fixing; (ii) pseudo-global parameter correlation analysis for detecting non-identifiable factors; and (iii) formation of a parameter subset through estimation using a genetic algorithm. The validity and applicability are confirmed using experimental data obtained from two independent wastewater treatment systems, including a sequencing batch reactor and a continuous stirred-tank reactor. The results indicate that the NOAP can effectively determine the optimal parameter subset and successfully perform model calibration and validation for these two different systems. The proposed NOAP is expected to be used for the automatic calibration of ASMs and can potentially be applied to other ordinary differential equation models.
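
    A compressed sketch of such a protocol follows, assuming a crude correlation-based screening in place of a full global sensitivity analysis and an evolutionary optimiser standing in for the genetic algorithm; the pseudo-global correlation step (ii) is omitted, and the toy model and data are invented:

      import numpy as np
      from scipy.optimize import differential_evolution

      def screen_factors(model, bounds, data, n=200, keep=0.6, seed=0):
          """Step (i), crude stand-in for global sensitivity analysis:
          sample the factor space and rank factors by the correlation
          between each factor and the model-data misfit."""
          rng = np.random.default_rng(seed)
          lo, hi = np.array(bounds).T
          X = lo + rng.random((n, lo.size)) * (hi - lo)
          misfit = np.array([np.sum((model(x) - data) ** 2) for x in X])
          score = np.abs([np.corrcoef(X[:, j], misfit)[0, 1]
                          for j in range(lo.size)])
          return score >= np.quantile(score, 1.0 - keep)  # factors kept free

      def calibrate(model, bounds, data, free, default):
          """Step (iii): estimate only the free factors with an evolutionary
          optimiser (standing in for the genetic algorithm); fixed factors
          keep their default values."""
          def obj(sub):
              full = default.copy()
              full[free] = sub
              return np.sum((model(full) - data) ** 2)
          sub_bounds = [b for b, f in zip(bounds, free) if f]
          res = differential_evolution(obj, sub_bounds, seed=1)
          out = default.copy()
          out[free] = res.x
          return out

      model = lambda x: x[:3]                # only 3 of 5 factors matter here
      data = np.array([0.2, 0.5, 0.7])
      bounds = [(0.0, 1.0)] * 5
      free = screen_factors(model, bounds, data)
      print(calibrate(model, bounds, data, free, np.full(5, 0.5)))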

  18. Robust calibration of a global aerosol model

    NASA Astrophysics Data System (ADS)

    Lee, L.; Carslaw, K. S.; Pringle, K. J.; Reddington, C.

    2013-12-01

    Comparison of models and observations is vital for evaluating how well computer models can simulate real-world processes. However, many current methods are lacking in their assessment of model uncertainty, which raises questions regarding the robustness of the observationally constrained model. In most cases, models are evaluated against observations using a single baseline simulation considered to represent the model's best estimate. The model is then improved in some way so that its comparison to observations is improved. Continuous adjustments in such a way may result in a model that compares better to observations, but there may be many compensating features which make prediction with the newly calibrated model difficult to justify. There may also be some model outputs whose comparison to observations becomes worse in some regions/seasons as others improve. In such cases calibration cannot be considered robust. We present details of the calibration of a global aerosol model, GLOMAP, in which we consider not just a single model setup but a perturbed physics ensemble with 28 uncertain parameters. We first quantify the uncertainty in various model outputs (CCN, CN) for the year 2008 and use statistical emulation to identify which of the 28 parameters contribute most to this uncertainty. We then compare the emulated model simulations in the entire parametric uncertainty space to observations. Regions where the entire ensemble lies outside the error of the observations indicate structural model error or gaps in current knowledge, which allows us to target future research areas. Where there is some agreement with the observations, we use the information on the sources of the model uncertainty to identify geographical regions in which the important parameters are similar. Identification of regional calibration clusters helps us to use information from observation-rich regions to calibrate regions with sparse observations and allow us to make recommendations for
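
    The emulate-then-compare strategy can be sketched as follows, with a cheap analytic function standing in for GLOMAP and an invented observation; the implausibility threshold of 3 is a common history-matching convention, not a value from this abstract:

      import numpy as np
      from sklearn.gaussian_process import GaussianProcessRegressor
      from sklearn.gaussian_process.kernels import RBF, ConstantKernel

      rng = np.random.default_rng(0)

      def aerosol_model(x):
          """Cheap analytic stand-in for one simulator output (e.g. a
          regional mean CCN concentration) over two scaled parameters."""
          return np.sin(3.0 * x[..., 0]) + 0.5 * x[..., 1] ** 2

      X = rng.random((40, 2))                   # small perturbed-physics ensemble
      y = aerosol_model(X)

      gp = GaussianProcessRegressor(ConstantKernel() * RBF([0.2, 0.2]),
                                    normalize_y=True).fit(X, y)

      grid = rng.random((50000, 2))             # dense sweep, essentially free
      mean, sd = gp.predict(grid, return_std=True)

      # History-matching style comparison: rule out parameter settings whose
      # emulated output is implausibly far from the observation
      obs, obs_err = 0.8, 0.1
      implausibility = np.abs(mean - obs) / np.sqrt(sd ** 2 + obs_err ** 2)
      print("fraction of parameter space ruled out:",
            np.mean(implausibility > 3.0))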

  19. Adaptable Multivariate Calibration Models for Spectral Applications

    SciTech Connect

    THOMAS,EDWARD V.

    1999-12-20

    Multivariate calibration techniques have been used in a wide variety of spectroscopic situations. In many of these situations spectral variation can be partitioned into meaningful classes. For example, suppose that multiple spectra are obtained from each of a number of different objects wherein the level of the analyte of interest varies within each object over time. In such situations the total spectral variation observed across all measurements has two distinct general sources of variation: intra-object and inter-object. One might want to develop a global multivariate calibration model that predicts the analyte of interest accurately both within and across objects, including new objects not involved in developing the calibration model. However, this goal might be hard to realize if the inter-object spectral variation is complex and difficult to model. If the intra-object spectral variation is consistent across objects, an effective alternative approach might be to develop a generic intra-object model that can be adapted to each object separately. This paper contains recommendations for experimental protocols and data analysis in such situations. The approach is illustrated with an example involving the noninvasive measurement of glucose using near-infrared reflectance spectroscopy. Extensions to calibration maintenance and calibration transfer are discussed.

  20. Objective calibration of regional climate models

    NASA Astrophysics Data System (ADS)

    Bellprat, O.; Kotlarski, S.; Lüthi, D.; SchäR, C.

    2012-12-01

    Climate models are subject to high parametric uncertainty induced by poorly confined parameters of parameterized physical processes. Uncertain model parameters are typically calibrated in order to increase the agreement of the model with available observations. The common practice is to adjust uncertain model parameters manually, often referred to as expert tuning, which lacks objectivity and transparency in the use of observations. These shortcomings often haze model inter-comparisons and hinder the implementation of new model parameterizations. Methods that would allow systematic calibration of model parameters are unfortunately often not applicable to state-of-the-art climate models, due to computational constraints arising from the high dimensionality and non-linearity of the problem. Here we present an approach to objectively calibrate a regional climate model, using reanalysis-driven simulations and building upon a quadratic metamodel presented by Neelin et al. (2010) that serves as a computationally cheap surrogate of the model. Five model parameters originating from different parameterizations are selected for the optimization according to their influence on the model performance. The metamodel accurately estimates spatial averages of 2 m temperature, precipitation and total cloud cover, with an uncertainty of similar magnitude as the internal variability of the regional climate model. The non-linearities of the parameter perturbations are well captured, such that only a limited number of 20-50 simulations are needed to estimate optimal parameter settings. Parameter interactions are small, which allows a further reduction in the number of simulations. In comparison to an ensemble of the same model which has undergone expert tuning, the calibration yields similar optimal model configurations, but with an additional reduction of the model error. The performance range captured is much wider than sampled with the expert-tuned ensemble and the presented
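
    A quadratic metamodel of the kind referred to here can be sketched directly; the five-parameter stand-in error surface and the ensemble size below are illustrative:

      import numpy as np
      from itertools import combinations

      def quad_features(P):
          """Design matrix for a full quadratic in the parameters:
          intercept, linear, squared, and pairwise interaction terms."""
          cols = [np.ones(len(P))]
          cols += [P[:, j] for j in range(P.shape[1])]
          cols += [P[:, j] ** 2 for j in range(P.shape[1])]
          cols += [P[:, i] * P[:, j]
                   for i, j in combinations(range(P.shape[1]), 2)]
          return np.column_stack(cols)

      rng = np.random.default_rng(4)
      d = 5                                   # five uncertain parameters
      P = rng.random((40, d))                 # a few dozen training simulations
      true_opt = np.array([0.3, 0.7, 0.5, 0.2, 0.6])
      score = np.sum((P - true_opt) ** 2, axis=1)   # stand-in model error

      beta, *_ = np.linalg.lstsq(quad_features(P), score, rcond=None)

      # Optimal parameters: evaluate the cheap metamodel on a large sample
      cand = rng.random((100000, d))
      pred = quad_features(cand) @ beta
      print("metamodel optimum:", cand[np.argmin(pred)])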

  1. Modelling PTB's spatial angle autocollimator calibrator

    NASA Astrophysics Data System (ADS)

    Kranz, Oliver; Geckeler, Ralf D.; Just, Andreas; Krause, Michael

    2013-05-01

    The accurate and traceable form measurement of optical surfaces has been greatly advanced by a new generation of surface profilometers which are based on the reflection of light at the surface and the measurement of the reflection angle. For this application, high-resolution electronic autocollimators provide accurate and traceable angle metrology. In recent years, great progress has been made at the Physikalisch-Technische Bundesanstalt (PTB) in autocollimator calibration. For advanced autocollimator characterisation, a novel calibration device has been built at PTB: the Spatial Angle Autocollimator Calibrator (SAAC). The system makes use of an innovative Cartesian arrangement of three autocollimators (two reference autocollimators and the autocollimator to be calibrated), which allows a precise measurement of the angular orientation of a reflector cube. Each reference autocollimator is sensitive primarily to changes in one of the two relevant tilt angles, whereas the autocollimator to be calibrated is sensitive to both. The distance between the reflector cube and the autocollimator to be calibrated can be varied flexibly. In this contribution, we present the SAAC and aspects of the mathematical modelling of the system for deriving analytical expressions for the autocollimators' angle responses. These efforts will allow substantial advances in form measurement with autocollimator-based profilometers, approaching fundamental measurement limits. They will also help manufacturers of autocollimators to improve their instruments and will provide improved angle measurement methods for precision engineering.

  2. Calibration and validation of rockfall models

    NASA Astrophysics Data System (ADS)

    Frattini, Paolo; Valagussa, Andrea; Zenoni, Stefania; Crosta, Giovanni B.

    2013-04-01

    Calibrating and validating landslide models is extremely difficult due to particular characteristics of landslides: limited recurrence in time, relatively low frequency of events, short durability of post-event traces, and poor availability of continuous monitoring data, especially for small landslides and rockfalls. For this reason, most of the rockfall models presented in the literature completely lack calibration and validation of the results. In this contribution, we explore different strategies for rockfall model calibration and validation starting from both a historical event and a full-scale field test. The event occurred in 2012 in Courmayeur (Western Alps, Italy) and caused serious damage to quarrying facilities. This event was studied soon after its occurrence through a field campaign aimed at mapping the blocks arrested along the slope, the shape and location of the detachment area, and the traces of scars associated with impacts of blocks on the slope. The full-scale field test was performed by Geovert Ltd in the Christchurch area (New Zealand) after the 2011 earthquake. During the test, a number of large blocks were mobilized from the upper part of the slope and filmed with high-speed cameras from different viewpoints. The movies of each released block were analysed to identify the block shape, the propagation path, the location of impacts, the height of the trajectory and the velocity of the block along the path. Both calibration and validation of rockfall models should be based on the optimization of the agreement between the actual trajectories or locations of arrested blocks and the simulated ones. A measure that describes this agreement is therefore needed. For calibration purposes, this measure should be simple enough to allow trial-and-error repetitions of the model for parameter optimization. In this contribution we explore different calibration/validation measures: (1) the percentage of simulated blocks arresting within a buffer of the

  3. Calibrating binary lumped parameter models

    NASA Astrophysics Data System (ADS)

    Morgenstern, Uwe; Stewart, Mike

    2017-04-01

    Groundwater at its discharge point is a mixture of water from short and long flowlines, and therefore has a distribution of ages rather than a single age. Various transfer functions describe the distribution of ages within the water sample. Lumped parameter models (LPMs), which are mathematical models of water transport based on simplified aquifer geometry and flow configuration, can account for such mixing of groundwater of different ages, usually representing the age distribution with two parameters: the mean residence time and the mixing parameter. Simple lumped parameter models can often match the measured time-varying age tracer concentrations well, and therefore are a good representation of the groundwater mixing at these sites. Usually a few tracer data (time series and/or multi-tracer) can constrain both parameters. With the building of larger data sets of age tracer data throughout New Zealand, including tritium, SF6, CFCs, and recently Halon-1301, and time series of these tracers, we realised that for a number of wells the groundwater ages obtained using a simple lumped parameter model were inconsistent between the different tracer methods. Contamination or degradation of individual tracers is unlikely because the different tracers show consistent trends over years and decades. This points toward a more complex mixing of groundwaters with different ages at such wells than is represented by the simple lumped parameter models. Binary (or compound) mixing models are able to represent a more complex mixing, combining water of two different age distributions. The problem with these models is that they usually have five parameters, which makes them data-hungry and makes it difficult to constrain all of the parameters. Two or more age tracers with different input functions, with multiple measurements over time, can provide the information required to constrain the parameters of the binary mixing model. We obtained excellent results using tritium time series encompassing
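
    A sketch of a binary mixing model built from two exponential piston flow components follows; the age grid, the tritium input curve, and all parameter values are invented, and a single shared mixing parameter replaces the two separate ones of a full five-parameter model:

      import numpy as np

      def epm_age_distribution(t, mrt, f_exp):
          """Exponential piston flow model (EPM): no water younger than
          the piston-flow delay, an exponential tail beyond it."""
          tau = mrt * f_exp                  # mean of the exponential part
          delay = mrt - tau                  # piston-flow delay
          g = np.where(t >= delay,
                       np.exp(-np.clip(t - delay, 0.0, None) / tau) / tau, 0.0)
          return g / (g.sum() * (t[1] - t[0]))   # normalise on the grid

      def tracer_output(c_in, t, mrt, f_exp, half_life=12.32):
          """Convolve the tracer input history with the age distribution,
          decaying each parcel; 12.32 yr is the tritium half-life."""
          g = epm_age_distribution(t, mrt, f_exp)
          decay = np.exp(-np.log(2.0) * t / half_life)
          return (c_in * g * decay).sum() * (t[1] - t[0])

      def binary_output(c_in, t, p, mrt_young, mrt_old, f_exp=0.9):
          """Binary mixing model: fraction p of a young component plus
          (1 - p) of an old component, each its own EPM."""
          return (p * tracer_output(c_in, t, mrt_young, f_exp)
                  + (1 - p) * tracer_output(c_in, t, mrt_old, f_exp))

      t = np.arange(0.0, 100.0, 0.5)              # ages in years
      recharge_year = 2017.0 - t
      c_in = 2.0 + 50.0 * np.exp(-((recharge_year - 1965.0) / 5.0) ** 2)
      # ^ hypothetical tritium input peaking at the 1960s bomb pulse
      print(binary_output(c_in, t, p=0.3, mrt_young=5.0, mrt_old=80.0))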

  4. Validating instrument models through the calibration process

    NASA Astrophysics Data System (ADS)

    Bingham, G. E.; Tansock, J. J.

    2006-08-01

    The performance of modern IR instruments is becoming so good that meeting science requirements requires that an accurate instrument model be used throughout the design and development process. The huge cost overruns on recent major programs indicate that the design and cost models being used to predict performance have lagged behind anticipated performance. Tuning these models to accurately reflect the true performance of target instruments requires a modeling process that has been developed over several instruments and validated by careful calibration. The process of developing a series of Engineering Development Models is often used on longer-duration programs to achieve this end. The accuracy of the models and their components has to be validated by a carefully planned calibration process, preferably considered in the instrument design. However, a good model does not satisfy all the requirements needed to bring acquisition programs under control. Careful detail in the specification process and a similar, validated model on the government side will also be required. This paper discusses the model development process and calibration approaches used to verify and update the models of several new instruments, including the Geosynchronous Imaging Fourier Transform Spectrometer (GIFTS) and Far Infrared Spectroscopy of the Troposphere (FIRST).

  5. New Method of Calibrating IRT Models.

    ERIC Educational Resources Information Center

    Jiang, Hai; Tang, K. Linda

    This discussion of new methods for calibrating item response theory (IRT) models looks into new optimization procedures, such as the Genetic Algorithm (GA), to improve on the use of the Newton-Raphson procedure. The advantage of using a global optimization procedure like GA is that this kind of procedure is not easily affected by local optima and…

  6. Methods and guidelines for effective model calibration

    USGS Publications Warehouse

    Hill, M.C.

    2004-01-01

    This paper briefly describes nonlinear regression methods, a set of 14 guidelines for model calibration, how they are implemented in and supported by two public domain computer programs, and a demonstration and a test of the methods and guidelines. Copyright ASCE 2004.

  7. A frequentist approach to computer model calibration

    SciTech Connect

    Wong, Raymond K. W.; Storlie, Curtis Byron; Lee, Thomas C. M.

    2016-05-05

    The paper considers the computer model calibration problem and provides a general frequentist solution. Under the framework proposed, the data model is semiparametric with a non-parametric discrepancy function which accounts for any discrepancy between physical reality and the computer model. In an attempt to solve a fundamentally important (but often ignored) identifiability issue between the computer model parameters and the discrepancy function, the paper proposes a new and identifiable parameterization of the calibration problem. It also develops a two-step procedure for estimating all the relevant quantities under the new parameterization. This estimation procedure is shown to enjoy excellent rates of convergence and can be straightforwardly implemented with existing software. For uncertainty quantification, bootstrapping is adopted to construct confidence regions for the quantities of interest. As a result, the practical performance of the methodology is illustrated through simulation examples and an application to a computational fluid dynamics model.
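
    The two-step idea can be sketched as follows, with a polynomial smoother standing in for the paper's non-parametric estimator and an invented simulator and discrepancy:

      import numpy as np
      from scipy.optimize import minimize_scalar

      def computer_model(x, theta):
          """Hypothetical simulator output at inputs x with parameter theta."""
          return np.sin(theta * x)

      rng = np.random.default_rng(6)
      x = np.sort(rng.uniform(0, 3, 200))
      # Physical reality = model at theta = 2 plus a systematic discrepancy
      y = np.sin(2.0 * x) + 0.3 * x + rng.normal(0, 0.05, x.size)

      # Step 1: estimate the true response non-parametrically
      fhat = np.polyval(np.polyfit(x, y, 8), x)

      # Step 2: choose theta so the model is as close as possible to the
      # smoothed response in L2, attributing the remainder to discrepancy
      res = minimize_scalar(
          lambda th: np.mean((fhat - computer_model(x, th)) ** 2),
          bounds=(0.1, 5.0), method="bounded")
      theta_hat = res.x
      discrepancy = fhat - computer_model(x, theta_hat)
      print("theta_hat:", theta_hat)

    The L2-projection in step 2 is what makes the parameter identifiable: theta is defined as the best-fitting model parameter, and everything left over is, by construction, the discrepancy.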

  8. A frequentist approach to computer model calibration

    DOE PAGES

    Wong, Raymond K. W.; Storlie, Curtis Byron; Lee, Thomas C. M.

    2016-05-05

    The paper considers the computer model calibration problem and provides a general frequentist solution. Under the framework proposed, the data model is semiparametric with a non-parametric discrepancy function which accounts for any discrepancy between physical reality and the computer model. In an attempt to solve a fundamentally important (but often ignored) identifiability issue between the computer model parameters and the discrepancy function, the paper proposes a new and identifiable parameterization of the calibration problem. It also develops a two-step procedure for estimating all the relevant quantities under the new parameterization. This estimation procedure is shown to enjoy excellent rates of convergence and can be straightforwardly implemented with existing software. For uncertainty quantification, bootstrapping is adopted to construct confidence regions for the quantities of interest. As a result, the practical performance of the methodology is illustrated through simulation examples and an application to a computational fluid dynamics model.

  9. Using sensitivity analysis in model calibration efforts

    USGS Publications Warehouse

    Tiedeman, Claire R.; Hill, Mary C.

    2003-01-01

    In models of natural and engineered systems, sensitivity analysis can be used to assess relations among system state observations, model parameters, and model predictions. The model itself links these three entities, and model sensitivities can be used to quantify the links. Sensitivities are defined as the derivatives of simulated quantities (such as simulated equivalents of observations, or model predictions) with respect to model parameters. We present four measures calculated from model sensitivities that quantify the observation-parameter-prediction links and that are especially useful during the calibration and prediction phases of modeling. These four measures are composite scaled sensitivities (CSS), prediction scaled sensitivities (PSS), the value of improved information (VOII) statistic, and the observation prediction (OPR) statistic. These measures can be used to help guide initial calibration of models, collection of field data beneficial to model predictions, and recalibration of models updated with new field information. Once model sensitivities have been calculated, each of the four measures requires minimal computational effort. We apply the four measures to a three-layer MODFLOW-2000 (Harbaugh et al., 2000; Hill et al., 2000) model of the Death Valley regional ground-water flow system (DVRFS), located in southern Nevada and California. D’Agnese et al. (1997, 1999) developed and calibrated the model using nonlinear regression methods. Figure 1 shows some of the observations, parameters, and predictions for the DVRFS model. Observed quantities include hydraulic heads and spring flows. The 23 defined model parameters include hydraulic conductivities, vertical anisotropies, recharge rates, evapotranspiration rates, and pumpage. Predictions of interest for this regional-scale model are advective transport paths from potential contamination sites underlying the Nevada Test Site and Yucca Mountain.
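
    The first of these measures, the composite scaled sensitivity, can be computed directly from a model's Jacobian; the formula follows the CSS definition, while the toy head function and parameter values below are invented for illustration:

      import numpy as np

      def composite_scaled_sensitivity(jac, params, weights):
          """CSS_j = sqrt( (1/ND) * sum_i (J_ij * b_j * w_i**0.5)**2 ),
          where J is the Jacobian of simulated equivalents with respect
          to the parameters b, and w are the observation weights."""
          nd = jac.shape[0]
          scaled = jac * params[None, :] * np.sqrt(weights)[:, None]
          return np.sqrt((scaled ** 2).sum(axis=0) / nd)

      # Toy model: heads depend strongly on K1, weakly on K2 (hypothetical)
      def heads(b):
          return np.array([b[0] * 2.0 + 0.01 * b[1],
                           b[0] * 1.5 - 0.02 * b[1],
                           b[0] * 0.5 + 0.03 * b[1]])

      b = np.array([10.0, 10.0])
      eps = 1e-6
      jac = np.column_stack([(heads(b + eps * np.eye(2)[j]) - heads(b)) / eps
                             for j in range(2)])
      print(composite_scaled_sensitivity(jac, b, np.ones(3)))

    A parameter with a much smaller CSS than the others is poorly supported by the observations, which is exactly the signal used to guide calibration and further data collection.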

  10. Calibrating reaction rates for the CREST model

    NASA Astrophysics Data System (ADS)

    Handley, Caroline A.; Christie, Michael A.

    2017-01-01

    The CREST reactive-burn model uses entropy-dependent reaction rates that, until now, have been manually tuned to fit shock-initiation and detonation data in hydrocode simulations. This paper describes the initial development of an automatic method for calibrating CREST reaction-rate coefficients, using particle swarm optimisation. The automatic method is applied to EDC32, to help develop the first CREST model for this conventional high explosive.
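
    A plain particle swarm optimiser of the kind applied here fits in a few lines; the toy objective below stands in for the misfit between simulated and experimental shock-initiation data, and all coefficients are generic defaults rather than values from the paper:

      import numpy as np

      def pso(objective, bounds, n_particles=30, iters=200, seed=0,
              w=0.7, c1=1.5, c2=1.5):
          """Particle swarm optimisation: each particle tracks its personal
          best and is pulled toward the swarm's global best."""
          rng = np.random.default_rng(seed)
          lo, hi = np.array(bounds).T
          x = lo + rng.random((n_particles, lo.size)) * (hi - lo)
          v = np.zeros_like(x)
          pbest, pval = x.copy(), np.array([objective(p) for p in x])
          g = pbest[pval.argmin()].copy()
          for _ in range(iters):
              r1, r2 = rng.random((2, n_particles, lo.size))
              v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
              x = np.clip(x + v, lo, hi)
              val = np.array([objective(p) for p in x])
              better = val < pval
              pbest[better], pval[better] = x[better], val[better]
              g = pbest[pval.argmin()].copy()
          return g, pval.min()

      # Stand-in misfit for trial reaction-rate coefficients
      target = np.array([1.2, 0.8, 2.0])
      print(pso(lambda k: np.sum((k - target) ** 2), [(0, 5)] * 3)[0])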

  11. Hydrological model calibration for enhancing global flood forecast skill

    NASA Astrophysics Data System (ADS)

    Hirpa, Feyera A.; Beck, Hylke E.; Salamon, Peter; Thielen-del Pozo, Jutta

    2016-04-01

    Early warning systems play a key role in flood risk reduction, and their effectiveness is directly linked to streamflow forecast skill. The skill of a streamflow forecast is affected by several factors; among them are (i) model errors due to incomplete representation of physical processes and inaccurate parameterization, (ii) uncertainty in the model initial conditions, and (iii) errors in the meteorological forcing. In macro-scale (continental or global) modeling, it is a common practice to use a priori parameter estimates over large river basins or wider regions, resulting in suboptimal streamflow estimations. The aim of this work is to improve the flood forecast skill of the Global Flood Awareness System (GloFAS; www.globalfloods.eu), a grid-based forecasting system that produces flood forecasts up to 30 days ahead, through calibration of the distributed hydrological model parameters. We use a combination of in-situ and satellite-based streamflow data for automatic calibration using a multi-objective genetic algorithm. We will present the calibrated global parameter maps and report the forecast skill improvements achieved. Furthermore, we discuss current challenges and future opportunities with regard to global-scale early flood warning systems.

  12. Grid based calibration of SWAT hydrological models

    NASA Astrophysics Data System (ADS)

    Gorgan, D.; Bacu, V.; Mihon, D.; Rodila, D.; Abbaspour, K.; Rouholahnejad, E.

    2012-07-01

    The calibration and execution of large hydrological models, such as SWAT (Soil and Water Assessment Tool), developed for large areas, high resolution, and huge input data, require not only long execution times but also substantial computational resources. The SWAT hydrological model supports studies and predictions of the impact of land management practices on water, sediment, and agricultural chemical yields in complex watersheds. The paper presents the gSWAT application as a practical web-based solution for environmental specialists to calibrate extensive hydrological models and to run scenarios, hiding the complex control of processes and heterogeneous resources across the grid-based high-performance computation infrastructure. The paper highlights the basic functionalities of the gSWAT platform and the features of the graphical user interface. The presentation is concerned with the development of working sessions, interactive control of calibration, direct and basic editing of parameters, process monitoring, and graphical and interactive visualization of the results. The experiments performed on different SWAT models and the results obtained demonstrate the benefits brought by the grid parallel and distributed environment as a processing platform. All the instances of SWAT models used in the reported experiments have been developed through the enviroGRIDS project, targeting the Black Sea catchment area.

  13. High Accuracy Transistor Compact Model Calibrations

    SciTech Connect

    Hembree, Charles E.; Mar, Alan; Robertson, Perry J.

    2015-09-01

    Typically, transistors are modeled by the application of calibrated nominal and range models. These models consist of differing parameter values that describe the location and the upper and lower limits of a distribution of some transistor characteristic such as current capacity. Correspondingly, when using this approach, high degrees of accuracy of the transistor models are not expected, since the set of models is a surrogate for a statistical description of the devices. The use of these types of models describes expected performance considering the extremes of process or transistor deviations. In contrast, circuits that have very stringent accuracy requirements require modeling techniques with higher accuracy. Since these accurate models have low error in their transistor descriptions, they can be used to describe part-to-part variations as well as to give an accurate description of a single circuit instance. Thus, models that meet these stipulations also enable the quantification of margins with respect to a functional threshold, and of the uncertainties in these margins. Given this need, new high-accuracy model calibration techniques for bipolar junction transistors have been developed and are described in this report.

  14. Objective calibration of numerical weather prediction models

    NASA Astrophysics Data System (ADS)

    Voudouri, A.; Khain, P.; Carmona, I.; Bellprat, O.; Grazzini, F.; Avgoustoglou, E.; Bettems, J. M.; Kaufmann, P.

    2017-07-01

    Numerical weather prediction (NWP) and climate models use parameterization schemes for physical processes, which often include free or poorly confined parameters. Model developers normally calibrate the values of these parameters subjectively to improve the agreement of forecasts with available observations, a procedure referred to as expert tuning. A practicable objective multivariate calibration method built on a quadratic metamodel (MM), which has been applied to a regional climate model (RCM), has been shown to be at least as good as expert tuning. Based on these results, an approach to implementing the methodology for an NWP model is presented in this study. Challenges in transferring the methodology from RCM to NWP are not restricted to the use of higher resolution and different time scales: the sensitivity of the NWP model quality with respect to the model parameter space has to be clarified, and the overall procedure optimized in terms of the computing resources required for the calibration of an NWP model. Three free model parameters, affecting mainly the turbulence parameterization schemes, were originally selected with respect to their influence on the variables associated with daily forecasts, such as daily minimum and maximum 2 m temperature as well as 24 h accumulated precipitation. Preliminary results indicate that the approach is both affordable in terms of computer resources and meaningful in terms of improved forecast quality. In addition, the proposed methodology has the advantage of being a replicable procedure that can be applied when an updated model version is launched and/or to customize the same model implementation over different climatological areas.

  15. Gradient-based model calibration with proxy-model assistance

    NASA Astrophysics Data System (ADS)

    Burrows, Wesley; Doherty, John

    2016-02-01

    Use of a proxy model in gradient-based calibration and uncertainty analysis of a complex groundwater model with large run times and problematic numerical behaviour is described. The methodology is general, and can be used with models of all types. The proxy model is based on a series of analytical functions that link all model outputs used in the calibration process to all parameters requiring estimation. In enforcing history-matching constraints during the calibration and post-calibration uncertainty analysis processes, the proxy model is run for the purposes of populating the Jacobian matrix, while the original model is run when testing parameter upgrades; the latter process is readily parallelized. Use of a proxy model in this fashion dramatically reduces the computational burden of complex model calibration and uncertainty analysis. At the same time, the effect of model numerical misbehaviour on the calculation of local gradients is mitigated, thus allowing access to the benefits of gradient-based analysis where lack of integrity in finite-difference derivatives calculation would otherwise have impeded such access. Construction of a proxy model, and its subsequent use in calibration of a complex model, and in analysing the uncertainties of predictions made by that model, is implemented in the PEST suite.
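
    The division of labour described here can be sketched as a Gauss-Newton iteration in which the Jacobian comes from the cheap proxy while candidate upgrades are tested on the full model; both models below are invented stand-ins, with the proxy given a deliberately imperfect decay constant:

      import numpy as np

      x_grid = np.linspace(0.0, 1.0, 50)

      def full_model(theta):
          """Stand-in for the expensive simulator (imagine minutes per run)."""
          return theta[0] * np.exp(-theta[1] * x_grid) + theta[2] * x_grid

      def proxy_model(theta):
          """Cheap analytical surrogate: same structure but a deliberately
          imperfect decay constant, so its derivatives only approximate
          those of the full model."""
          return theta[0] * np.exp(-0.95 * theta[1] * x_grid) + theta[2] * x_grid

      def jacobian(fun, theta, eps=1e-6):
          """Finite-difference Jacobian, evaluated on the proxy only."""
          base = fun(theta)
          eye = np.eye(len(theta))
          return np.column_stack([(fun(theta + eps * eye[j]) - base) / eps
                                  for j in range(len(theta))])

      obs = full_model(np.array([2.0, 3.0, 0.5]))     # synthetic observations
      theta = np.array([1.0, 1.0, 0.0])
      for _ in range(20):
          r = obs - full_model(theta)                 # one expensive run
          J = jacobian(proxy_model, theta)            # cheap proxy runs only
          step, *_ = np.linalg.lstsq(J, r, rcond=None)
          lam = 1.0                                   # test upgrades on the full model
          while (np.sum((obs - full_model(theta + lam * step)) ** 2)
                 > np.sum(r ** 2) and lam > 1e-3):
              lam *= 0.5
          theta = theta + lam * step
      print("estimated parameters:", theta)

    An approximate Jacobian is tolerable here because the iteration's fixed point is determined by the full-model residuals, not by the proxy; the proxy only shapes the search direction.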

  16. SDSS-III/APOGEE: Science and Survey Calibrations and using Open Clusters

    NASA Astrophysics Data System (ADS)

    Frinchaboy, Peter M.; O'Connell, J.; Meszaros, Sz.; Cunha, K. M.; Smith, V. V.; Garcia Perez, A.; Shetrone, M. D.; Allende Prieto, C.; Johnson, J.; Zasowski, G.; Majewski, S. R.; Schiavon, R. P.; Holtzman, J. A.; Nidever, D.; Bizyaev, D.; Hearty, F. R.; Jackson, K.; Thompson, B. A.; Wilson, J. C.; Beers, T. C.

    2013-01-01

    We present results from the first year of the SDSS-III/Apache Point Observatory Galactic Evolution Experiment (APOGEE) survey of open cluster targets. APOGEE is studying several key open clusters for calibration and science (e.g., M67, NGC 6791), and here we present early science results and comparison to previous work on a number of clusters focusing on radial velocities, stellar parameters, and abundances. Funding for SDSS-III has been provided by the Alfred P. Sloan Foundation, the Participating Institutions, the National Science Foundation, and the U.S. Department of Energy Office of Science. The SDSS-III web site is http://www.sdss3.org/. SDSS-III is managed by the Astrophysical Research Consortium for the Participating Institutions of the SDSS-III Collaboration including the University of Arizona, the Brazilian Participation Group, Brookhaven National Laboratory, University of Cambridge, Carnegie Mellon University, University of Florida, the French Participation Group, the German Participation Group, Harvard University, the Instituto de Astrofisica de Canarias, the Michigan State/Notre Dame/JINA Participation Group, Johns Hopkins University, Lawrence Berkeley National Laboratory, Max Planck Institute for Astrophysics, Max Planck Institute for Extraterrestrial Physics, New Mexico State University, New York University, Ohio State University, Pennsylvania State University, University of Portsmouth, Princeton University, the Spanish Participation Group, University of Tokyo, University of Utah, Vanderbilt University, University of Virginia, University of Washington, and Yale University.

  17. Process analytical technology case study, part III: calibration monitoring and transfer.

    PubMed

    Cogdill, Robert P; Anderson, Carl A; Drennen, James K

    2005-10-06

    This is the third of a series of articles detailing the development of near-infrared spectroscopy methods for solid dosage form analysis. Experiments were conducted at the Duquesne University Center for Pharmaceutical Technology to develop a system for continuous calibration monitoring and formulate an appropriate strategy for calibration transfer. Indicators of high-flux noise (noise factor level) and wavelength uncertainty were developed. These measurements, in combination with Hotelling's T² and Q residual, are used to continuously monitor instrument performance and model relevance. Four calibration transfer techniques were compared. Three established techniques, finite impulse response filtering, generalized least squares weighting, and piecewise direct standardization, were evaluated. A fourth technique, baseline subtraction, was the most effective for calibration transfer. Using as few as 15 transfer samples, the predictive capability of the analytical method was maintained across multiple instruments and major instrument maintenance.
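
    Baseline subtraction for calibration transfer can be sketched as follows, assuming an additive instrument-to-instrument baseline difference estimated from a small set of transfer samples measured on both instruments; the synthetic spectra and the ridge-regression calibration are illustrative stand-ins:

      import numpy as np

      rng = np.random.default_rng(5)
      wl = np.linspace(1100, 2500, 200)            # wavelength grid, nm

      def spectra(conc, offset=0.0):
          """Synthetic NIR-like spectra: one analyte band plus a smooth
          instrument-specific baseline (purely illustrative)."""
          band = np.exp(-((wl - 1700) / 60.0) ** 2)
          base = offset * (1.0 + 0.0005 * (wl - wl[0]))
          return conc[:, None] * band + base + rng.normal(0, 0.001,
                                                          (len(conc), len(wl)))

      # Calibration on the master instrument (ridge-regularised least squares)
      c_train = rng.uniform(0, 1, 50)
      X_master = spectra(c_train)
      A = X_master.T @ X_master + 1e-3 * np.eye(len(wl))
      beta = np.linalg.solve(A, X_master.T @ c_train)

      # The slave instrument shows an additive baseline shift
      c_transfer = rng.uniform(0, 1, 15)           # as few as 15 transfer samples
      X_slave = spectra(c_transfer, offset=0.05)
      X_slave_on_master = spectra(c_transfer)      # same samples on the master

      # Baseline subtraction: remove the mean spectral difference
      baseline = (X_slave - X_slave_on_master).mean(axis=0)

      c_new = rng.uniform(0, 1, 20)
      X_new = spectra(c_new, offset=0.05)
      pred = (X_new - baseline) @ beta
      print("RMSEP after transfer:", np.sqrt(np.mean((pred - c_new) ** 2)))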

  18. CALIBRATIONS OF ATMOSPHERIC PARAMETERS OBTAINED FROM THE FIRST YEAR OF SDSS-III APOGEE OBSERVATIONS

    SciTech Connect

    Mészáros, Sz.; Allende Prieto, C.; Holtzman, J.; García Pérez, A. E.; Chojnowski, S. D.; Hearty, F. R.; Majewski, S. R.; Schiavon, R. P.; Basu, S.; Bizyaev, D.; Chaplin, W. J.; Elsworth, Y.; Cunha, K.; Epstein, C.; Johnson, J. A.; Frinchaboy, P. M.; García, R. A.; Kallinger, T.; Koesterke, L.; and others

    2013-11-01

    The Sloan Digital Sky Survey III (SDSS-III) Apache Point Observatory Galactic Evolution Experiment (APOGEE) is a three-year survey that is collecting 10^5 high-resolution spectra in the near-IR across multiple Galactic populations. To derive stellar parameters and chemical compositions from this massive data set, the APOGEE Stellar Parameters and Chemical Abundances Pipeline (ASPCAP) has been developed. Here, we describe empirical calibrations of stellar parameters presented in the first SDSS-III APOGEE data release (DR10). These calibrations were enabled by observations of 559 stars in 20 globular and open clusters. The cluster observations were supplemented by observations of stars in NASA's Kepler field that have well determined surface gravities from asteroseismic analysis. We discuss the accuracy and precision of the derived stellar parameters, considering especially effective temperature, surface gravity, and metallicity; we also briefly discuss the derived results for the abundances of the α-elements, carbon, and nitrogen. Overall, we find that ASPCAP achieves reasonably accurate results for temperature and metallicity, but suffers from systematic errors in surface gravity. We derive calibration relations that bring the raw ASPCAP results into better agreement with independently determined stellar parameters. The internal scatter of ASPCAP parameters within clusters suggests that metallicities are measured with a precision better than 0.1 dex, effective temperatures better than 150 K, and surface gravities better than 0.2 dex. The understanding provided by the clusters and Kepler giants on the current accuracy and precision will be invaluable for future improvements of the pipeline.

  19. Evaluation of “Autotune” calibration against manual calibration of building energy models

    SciTech Connect

    Chaudhary, Gaurav; New, Joshua; Sanyal, Jibonananda; Im, Piljae; O’Neill, Zheng; Garg, Vishal

    2016-08-26

    Our paper demonstrates the application of Autotune, a methodology aimed at automatically producing calibrated building energy models using measured data, in two case studies. In the first case, a building model is de-tuned by deliberately injecting faults into more than 60 parameters. This model was then calibrated using Autotune and its accuracy with respect to the original model was evaluated in terms of the industry-standard normalized mean bias error and coefficient of variation of root mean squared error metrics set forth in ASHRAE Guideline 14. In addition to whole-building energy consumption, outputs including lighting, plug load profiles, HVAC energy consumption, zone temperatures, and other variables were analyzed. In the second case, Autotune calibration is compared directly to experts’ manual calibration of an emulated-occupancy, full-size residential building with comparable calibration results in much less time. Lastly, our paper concludes with a discussion of the key strengths and weaknesses of auto-calibration approaches.

  1. Seepage Calibration Model and Seepage Testing Data

    SciTech Connect

    P. Dixon

    2004-02-17

    The purpose of this Model Report is to document the Seepage Calibration Model (SCM). The SCM is developed (1) to establish the conceptual basis for the Seepage Model for Performance Assessment (SMPA), and (2) to derive seepage-relevant, model-related parameters and their distributions for use in the SMPA and seepage abstraction in support of the Total System Performance Assessment for License Application (TSPA-LA). The SCM is intended to be used only within this Model Report for the estimation of seepage-relevant parameters through calibration of the model against seepage-rate data from liquid-release tests performed in several niches along the Exploratory Studies Facility (ESF) Main Drift and in the Cross Drift. The SCM does not predict seepage into waste emplacement drifts under thermal or ambient conditions. Seepage predictions for waste emplacement drifts under ambient conditions will be performed with the SMPA (see upcoming REV 02 of CRWMS M&O 2000 [153314]), which inherits the conceptual basis and model-related parameters from the SCM. Seepage during the thermal period is examined separately in the Thermal Hydrologic (TH) Seepage Model (see BSC 2003 [161530]). The scope of this work is (1) to evaluate seepage rates measured during liquid-release experiments performed in several niches in the Exploratory Studies Facility (ESF) and in the Cross Drift, which was excavated for enhanced characterization of the repository block (ECRB); (2) to evaluate air-permeability data measured in boreholes above the niches and the Cross Drift to obtain the permeability structure for the seepage model; (3) to use inverse modeling to calibrate the SCM and to estimate seepage-relevant, model-related parameters on the drift scale; (4) to estimate the epistemic uncertainty of the derived parameters, based on the goodness-of-fit to the observed data and the sensitivity of calculated seepage with respect to the parameters of interest; (5) to characterize the aleatory uncertainty of

  2. Modelling Experimental Procedures for Manipulator Calibration

    DTIC Science & Technology

    1991-12-01

    AD-A245 603. Naval Postgraduate School, Monterey, California. Master's thesis: "Modelling Experimental Procedures for Manipulator Calibration," by William E. Swayze, December 1991. Thesis advisor: Morris R. Driels. Approved for public release; distribution is unlimited.

  3. Thematic Mapper. Volume 1: Calibration report flight model, LANDSAT 5

    NASA Technical Reports Server (NTRS)

    Cooley, R. C.; Lansing, J. C.

    1984-01-01

    The calibration of the Flight 1 Model Thematic Mapper is discussed. Spectral response, scan profile, coherent noise, line spread profiles and white light leaks, square wave response, radiometric calibration, and commands and telemetry are specifically addressed.

  4. Sensor modelling and camera calibration for close-range photogrammetry

    NASA Astrophysics Data System (ADS)

    Luhmann, Thomas; Fraser, Clive; Maas, Hans-Gerd

    2016-05-01

    Metric calibration is a critical prerequisite to the application of modern, mostly consumer-grade digital cameras for close-range photogrammetric measurement. This paper reviews aspects of sensor modelling and photogrammetric calibration, with attention being focussed on techniques of automated self-calibration. Following an initial overview of the history and the state of the art, selected topics of current interest within calibration for close-range photogrammetry are addressed. These include sensor modelling, with standard, extended and generic calibration models being summarised, along with non-traditional camera systems. Self-calibration via both targeted planar arrays and targetless scenes amenable to SfM-based exterior orientation are then discussed, after which aspects of calibration and measurement accuracy are covered. Whereas camera self-calibration is largely a mature technology, there is always scope for additional research to enhance the models and processes employed with the many camera systems nowadays utilised in close-range photogrammetry.

  5. Calibrated predictions for multivariate competing risks models.

    PubMed

    Gorfine, Malka; Hsu, Li; Zucker, David M; Parmigiani, Giovanni

    2014-04-01

    Prediction models for time-to-event data play a prominent role in assessing the individual risk of a disease, such as cancer. Accurate disease prediction models provide an efficient tool for identifying individuals at high risk, and provide the groundwork for estimating the population burden and cost of disease and for developing patient care guidelines. We focus on risk prediction of a disease in which family history is an important risk factor that reflects inherited genetic susceptibility, shared environment, and common behavior patterns. In this work family history is accommodated using frailty models, with the main novel feature being allowing for competing risks, such as other diseases or mortality. We show through a simulation study that naively treating competing risks as independent right censoring events results in non-calibrated predictions, with the expected number of events overestimated. Discrimination performance is not affected by ignoring competing risks. Our proposed prediction methodologies correctly account for competing events, are very well calibrated, and easy to implement.

  6. Calibrating Treasure Valley Groundwater Model using MODFLOW

    NASA Astrophysics Data System (ADS)

    Hernandez, J.; Tan, K.

    2016-12-01

    Groundwater plays an especially important role in Idaho: according to the Idaho Department of Environmental Quality (2011), it supplies 95% of the state's drinking water. The USGS estimates that Idaho withdraws 117 million cubic meters (95,000 acre-feet) per year from groundwater sources for domestic usage, which includes drinking water, and 5,140 million cubic meters (4,170,000 acre-feet) per year for irrigation. Quantifying and managing that resource and estimating future groundwater levels are important for a variety of socio-economic reasons. As the population of the Treasure Valley continues to grow, the demand for clean, usable groundwater increases: depending on the reliability of the groundwater source, the economic growth of the area can be constrained or allowed to flourish, and mismanagement can impair the source's sustainability and quality and hamper development by increasing operation and maintenance costs, so proper water management is critical. The objective of this study was to develop and calibrate a groundwater model with the purpose of understanding the short- and long-term effects of existing and alternative land use scenarios on groundwater changes. Hydrologic simulations were done using the MODFLOW-2000 model. The model was calibrated for the predevelopment period by reproducing and comparing groundwater levels of the years before 1925, using steady-state boundary conditions representing no change in land use.

  7. Optimum Experimental Design applied to MEMS accelerometer calibration for 9-parameter auto-calibration model.

    PubMed

    Ye, Lin; Su, Steven W

    2015-01-01

    Optimum Experimental Design (OED) is an information gathering technique used to estimate parameters, which aims to minimize the variance of parameter estimation and prediction. In this paper, we further investigate an OED for MEMS accelerometer calibration of the 9-parameter auto-calibration model. Based on a linearized 9-parameter accelerometer model, we show the proposed OED is both G-optimal and rotatable, which are the desired properties for the calibration of wearable sensors for which only simple calibration devices are available. The experimental design is carried out with a newly developed wearable health monitoring device and desired experimental results have been achieved.
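
    The paper's exact 9-parameter formulation is not reproduced in this listing, so the sketch below fits the general affine accelerometer model (a 3x3 scale/misalignment matrix plus a 3-vector bias, i.e., 12 coefficients) by linear least squares from a six-orientation static test; a 9-parameter variant would constrain three of the off-diagonal terms. All sensor values are synthetic.

```python
import numpy as np

g = 9.81
# Reference specific-force vectors for a six-orientation static test (each axis up/down).
refs = np.array([[ g, 0, 0], [-g, 0, 0], [0,  g, 0], [0, -g, 0], [0, 0,  g], [0, 0, -g]], float)

# Hypothetical "true" sensor errors used to synthesize raw readings.
K_true = np.array([[1.02, 0.01, 0.00],
                   [0.00, 0.98, 0.02],
                   [0.01, 0.00, 1.01]])   # scale factors and cross-axis coupling
b_true = np.array([0.05, -0.03, 0.08])    # biases
rng = np.random.default_rng(0)
raw = refs @ K_true.T + b_true + rng.normal(0.0, 1e-3, refs.shape)

# Linear least squares for raw = [ref, 1] @ theta, one column per sensor axis.
X = np.hstack([refs, np.ones((len(refs), 1))])
theta, *_ = np.linalg.lstsq(X, raw, rcond=None)
K_est, b_est = theta[:3].T, theta[3]
print("estimated scale/misalignment matrix:\n", np.round(K_est, 4))
print("estimated bias vector:", np.round(b_est, 4))
```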

  8. Calibration of hydrological model with programme PEST

    NASA Astrophysics Data System (ADS)

    Brilly, Mitja; Vidmar, Andrej; Kryžanowski, Andrej; Bezak, Nejc; Šraj, Mojca

    2016-04-01

    PEST is a tool based on minimizing an objective function related to the root mean square error between model output and measurements. We use the "singular value decomposition" (SVD) section of the PEST control file and the Tikhonov regularization method to estimate model parameters successfully. PEST can fail when the inverse problem is ill-posed, but SVD ensures that it maintains numerical stability. The choice of the initial parameter values is an important issue in PEST and requires expert knowledge. The flexible nature of the PEST software and its ability to be applied to whole catchments at once allowed the calibration to perform extremely well across a high number of sub-catchments. The parallel computing version of PEST, called BeoPEST, was successfully used to speed up the calibration process. BeoPEST employs smart slaves and point-to-point communications to transfer data between the master and slave computers. The HBV-light model is a simple multi-tank-type model for simulating precipitation-runoff; it is a conceptual balance model of catchment hydrology which simulates discharge using rainfall, temperature, and estimates of potential evaporation. The HBV-light-CLI version allows the user to run HBV-light from the command line, with input and results files in XML form; this makes it easy to connect the model with other applications, such as pre- and post-processing utilities and PEST itself. The procedure was applied to a hydrological model of the Savinja catchment (1852 km2), which consists of twenty-one sub-catchments. Data are processed on an hourly basis.
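
    As a rough illustration of the kind of objective PEST minimizes, the sketch below augments a measurement-misfit term with a Tikhonov penalty that pulls parameters toward their prior values, and solves the problem with SciPy rather than PEST itself. The two-parameter reservoir "model" is a hypothetical stand-in, not HBV-light.

```python
import numpy as np
from scipy.optimize import least_squares

def simulate(params, forcing):
    # Hypothetical two-parameter linear-reservoir stand-in for the real model.
    k, smax = params
    storage, flow = 0.0, []
    for p in forcing:
        storage = min(storage + p, smax)   # fill the store, capped at smax
        out = k * storage                  # linear release
        storage -= out
        flow.append(out)
    return np.array(flow)

def residuals(params, forcing, observed, prior, weight):
    # PEST-style composite: measurement misfit plus a Tikhonov pull toward the prior.
    return np.concatenate([
        simulate(params, forcing) - observed,
        weight * (np.asarray(params) - prior),
    ])

rng = np.random.default_rng(1)
forcing = rng.exponential(2.0, 200)
observed = simulate([0.3, 15.0], forcing) + rng.normal(0.0, 0.05, 200)
prior = np.array([0.25, 12.0])

fit = least_squares(residuals, x0=prior, args=(forcing, observed, prior, 0.1))
print("calibrated parameters:", np.round(fit.x, 3))
```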

  9. The Adaptive Calibration Model of stress responsivity

    PubMed Central

    Ellis, Bruce J.; Shirtcliff, Elizabeth A.

    2010-01-01

    This paper presents the Adaptive Calibration Model (ACM), an evolutionary-developmental theory of individual differences in the functioning of the stress response system. The stress response system has three main biological functions: (1) to coordinate the organism’s allostatic response to physical and psychosocial challenges; (2) to encode and filter information about the organism’s social and physical environment, mediating the organism’s openness to environmental inputs; and (3) to regulate the organism’s physiology and behavior in a broad range of fitness-relevant areas including defensive behaviors, competitive risk-taking, learning, attachment, affiliation and reproductive functioning. The information encoded by the system during development feeds back on the long-term calibration of the system itself, resulting in adaptive patterns of responsivity and individual differences in behavior. Drawing on evolutionary life history theory, we build a model of the development of stress responsivity across life stages, describe four prototypical responsivity patterns, and discuss the emergence and meaning of sex differences. The ACM extends the theory of biological sensitivity to context (BSC) and provides an integrative framework for future research in the field. PMID:21145350

  10. A pelagic ecosystem model calibrated with BATS data

    NASA Astrophysics Data System (ADS)

    Hurtt, George C.; Armstrong, Robert A.

    Mechanistic models of ocean ecosystem dynamics are of fundamental importance to understanding and predicting the role of marine ecosystems in the oceanic uptake of carbon. In this paper, a new pelagic ecosystem model that is descended from the model of Fasham et al. (Journal of Marine Research, 48 (1990) 591-639) (FDM model) is presented. During model development, the FDM model was first simplified to reduce the number of variables unconstrained by data and to reduce the number of parameters to be estimated. Many alternative simplified model formulations were tested in an attempt to fit 1988-1991 Bermuda Atlantic Time-series Study (BATS) data. The model presented here incorporates the changes found to be important. (i) A feature of the FDM physics that gives rise to a troublesome fall bloom was replaced. (ii) A biodiversity effect was added: the addition of larger algal and detrital size classes as phytoplankton and detrital biomasses increase. (iii) A phytoplankton physiological effect was also added: the adjustment of the chlorophyll-to-nitrogen ratio by phytoplankton in response to light and nutrient availabilities. The new model has only four state variables and a total of 11 biological parameters; yet it fits the average annual cycle in BATS data better than the FDM model. The new model also responds reasonably well to interannual variability in physical forcing. Based on the justification for changes (i)-(iii) from empirical studies and the success of this simple model at fitting BATS data, it is argued that these changes may be generally important. It is also shown that two alternative assumptions about ammonium concentrations lead to very different model calibrations, emphasizing the need for time series data on ammonium.

  11. Automated Calibration For Numerical Models Of Riverflow

    NASA Astrophysics Data System (ADS)

    Fernandez, Betsaida; Kopmann, Rebekka; Oladyshkin, Sergey

    2017-04-01

    Calibration has been fundamental to all types of hydro-system modeling since the field began, as it approximates the parameters that allow the model to mimic the overall system behavior. Thus, an assessment of different deterministic and stochastic optimization methods is undertaken to compare their robustness, computational feasibility, and global search capacity, and the uncertainty of the most suitable methods is analyzed. These optimization methods minimize an objective function that compares synthetic measurements with simulated data. Synthetic measurement data replace the observed data set to guarantee that a parameter solution exists. The input data for the objective function derive from a hydro-morphological dynamics numerical model representing a 180-degree bend channel. The underlying mathematical problem is highly ill-posed. Minimization of the objective function by the different candidate methods indicates failure of some of the gradient-based methods, such as Newton Conjugate Gradient and BFGS. Others, such as Nelder-Mead, Polak-Ribiere, L-BFGS-B, Truncated Newton Conjugate Gradient, and Trust-Region Newton Conjugate Gradient, show partial convergence. Still others, such as Levenberg-Marquardt and least-squares routines, yield parameter solutions outside the physical limits. Moreover, there is a significant computational demand for the stochastic global methods, such as Differential Evolution and Basin-Hopping, as well as for brute-force methods. The deterministic Sequential Least Squares Programming method and the stochastic Bayesian inference approach give the best optimization results. Keywords: automated calibration of hydro-morphological dynamic numerical models, Bayesian inference theory, deterministic optimization methods.
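
    Several of the optimizers compared in this abstract are available in SciPy, so the comparison is easy to reproduce in miniature. The sketch below runs a few of them on a synthetic two-parameter least-squares calibration problem; it is a toy analogue, not the authors' hydro-morphological model.

```python
import numpy as np
from scipy.optimize import minimize, differential_evolution

def objective(theta, x, y_obs):
    # Sum-of-squares misfit for a toy exponential-decay "model".
    a, b = theta
    return np.sum((a * np.exp(-b * x) - y_obs) ** 2)

rng = np.random.default_rng(2)
x = np.linspace(0.0, 5.0, 50)
y_obs = 2.0 * np.exp(-0.7 * x) + rng.normal(0.0, 0.02, x.size)

bounds = [(0.1, 10.0), (0.01, 5.0)]
for method in ["Nelder-Mead", "L-BFGS-B", "TNC"]:
    res = minimize(objective, x0=[1.0, 1.0], args=(x, y_obs), method=method,
                   bounds=bounds if method != "Nelder-Mead" else None)
    print(f"{method:12s} -> theta = {np.round(res.x, 3)}, f = {res.fun:.4f}")

res = differential_evolution(objective, bounds, args=(x, y_obs), seed=2)
print(f"{'DiffEvol':12s} -> theta = {np.round(res.x, 3)}, f = {res.fun:.4f}")
```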

  12. A Comparison of Two Balance Calibration Model Building Methods

    NASA Technical Reports Server (NTRS)

    DeLoach, Richard; Ulbrich, Norbert

    2007-01-01

    Simulated strain-gage balance calibration data is used to compare the accuracy of two balance calibration model building methods for different noise environments and calibration experiment designs. The first building method obtains a math model for the analysis of balance calibration data after applying a candidate math model search algorithm to the calibration data set. The second building method uses stepwise regression analysis in order to construct a model for the analysis. Four balance calibration data sets were simulated in order to compare the accuracy of the two math model building methods. The simulated data sets were prepared using the traditional One Factor At a Time (OFAT) technique and the Modern Design of Experiments (MDOE) approach. Random and systematic errors were introduced in the simulated calibration data sets in order to study their influence on the math model building methods. Residuals of the fitted calibration responses and other statistical metrics were compared in order to evaluate the calibration models developed with different combinations of noise environment, experiment design, and model building method. Overall, predicted math models and residuals of both math model building methods show very good agreement. Significant differences in model quality were attributable to noise environment, experiment design, and their interaction. Generally, the addition of systematic error significantly degraded the quality of calibration models developed from OFAT data by either method, but MDOE experiment designs were more robust with respect to the introduction of a systematic component of the unexplained variance.
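
    As a sketch of the second building method, the snippet below performs a greedy forward-selection variant of stepwise regression on synthetic two-component balance load data. Production stepwise algorithms also drop terms using significance tests, which is omitted here, and the regressor names are made up.

```python
import numpy as np

def forward_stepwise(X, y, names, max_terms=3):
    """Greedy forward selection by residual sum of squares (minimal sketch)."""
    selected, remaining = [], list(range(X.shape[1]))
    for _ in range(max_terms):
        best = None
        for j in remaining:
            A = np.column_stack([np.ones(len(y)), X[:, selected + [j]]])
            coef, *_ = np.linalg.lstsq(A, y, rcond=None)
            rss = np.sum((y - A @ coef) ** 2)
            if best is None or rss < best[0]:
                best = (rss, j)
        remaining.remove(best[1])
        selected.append(best[1])
        print(f"added {names[best[1]]:>6s}, RSS = {best[0]:.4f}")
    return selected

rng = np.random.default_rng(3)
N1, N2 = rng.normal(size=(2, 100))                         # applied load components
X = np.column_stack([N1, N2, N1 * N2, N1**2, N2**2])       # candidate regressors
y = 1.5 * N1 + 0.4 * N1 * N2 + rng.normal(0.0, 0.01, 100)  # synthetic gage response
forward_stepwise(X, y, ["N1", "N2", "N1*N2", "N1^2", "N2^2"])
```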

  13. Mortality Probability Model III and Simplified Acute Physiology Score II

    PubMed Central

    Vasilevskis, Eduard E.; Kuzniewicz, Michael W.; Cason, Brian A.; Lane, Rondall K.; Dean, Mitzi L.; Clay, Ted; Rennie, Deborah J.; Vittinghoff, Eric; Dudley, R. Adams

    2009-01-01

    Background: To develop and compare ICU length-of-stay (LOS) risk-adjustment models using three commonly used mortality or LOS prediction models. Methods: Between 2001 and 2004, we performed a retrospective, observational study of 11,295 ICU patients from 35 hospitals in the California Intensive Care Outcomes Project. We compared the accuracy of the following three LOS models: a recalibrated acute physiology and chronic health evaluation (APACHE) IV-LOS model; and models developed using risk factors in the mortality probability model III at zero hours (MPM0) and the simplified acute physiology score (SAPS) II mortality prediction model. We evaluated models by calculating the following: (1) grouped coefficients of determination; (2) differences between observed and predicted LOS across subgroups; and (3) intraclass correlations of observed/expected LOS ratios between models. Results: The grouped coefficients of determination were APACHE IV with coefficients recalibrated to the LOS values of the study cohort (APACHE IVrecal) [R2 = 0.422], mortality probability model III at zero hours (MPM0 III) [R2 = 0.279], and simplified acute physiology score (SAPS II) [R2 = 0.008]. For each decile of predicted ICU LOS, the mean predicted LOS vs the observed LOS was significantly different (p ≤ 0.05) for three, two, and six deciles using APACHE IVrecal, MPM0 III, and SAPS II, respectively. Plots of the predicted vs the observed LOS ratios of the hospitals revealed a threefold variation in LOS among hospitals with high model correlations. Conclusions: APACHE IV and MPM0 III were more accurate than SAPS II for the prediction of ICU LOS. APACHE IV is the most accurate and best calibrated model. Although it is less accurate, MPM0 III may be a reasonable option if the data collection burden or the treatment effect bias is a consideration. PMID:19363210

  14. Seepage Calibration Model and Seepage Testing Data

    SciTech Connect

    S. Finsterle

    2004-09-02

    The purpose of this Model Report is to document the Seepage Calibration Model (SCM). The SCM was developed (1) to establish the conceptual basis for the Seepage Model for Performance Assessment (SMPA), and (2) to derive seepage-relevant, model-related parameters and their distributions for use in the SMPA and seepage abstraction in support of the Total System Performance Assessment for License Application (TSPA-LA). This Model Report has been revised in response to a comprehensive, regulatory-focused evaluation performed by the Regulatory Integration Team [''Technical Work Plan for: Regulatory Integration Evaluation of Analysis and Model Reports Supporting the TSPA-LA'' (BSC 2004 [DIRS 169653])]. The SCM is intended to be used only within this Model Report for the estimation of seepage-relevant parameters through calibration of the model against seepage-rate data from liquid-release tests performed in several niches along the Exploratory Studies Facility (ESF) Main Drift and in the Cross-Drift. The SCM does not predict seepage into waste emplacement drifts under thermal or ambient conditions. Seepage predictions for waste emplacement drifts under ambient conditions will be performed with the SMPA [''Seepage Model for PA Including Drift Collapse'' (BSC 2004 [DIRS 167652])], which inherits the conceptual basis and model-related parameters from the SCM. Seepage during the thermal period is examined separately in the Thermal Hydrologic (TH) Seepage Model [see ''Drift-Scale Coupled Processes (DST and TH Seepage) Models'' (BSC 2004 [DIRS 170338])]. The scope of this work is (1) to evaluate seepage rates measured during liquid-release experiments performed in several niches in the Exploratory Studies Facility (ESF) and in the Cross-Drift, which was excavated for enhanced characterization of the repository block (ECRB); (2) to evaluate air-permeability data measured in boreholes above the niches and the Cross-Drift to obtain the permeability structure for the seepage model

  15. Uncertainty Analysis of Inertial Model Attitude Sensor Calibration and Application with a Recommended New Calibration Method

    NASA Technical Reports Server (NTRS)

    Tripp, John S.; Tcheng, Ping

    1999-01-01

    Statistical tools, previously developed for nonlinear least-squares estimation of multivariate sensor calibration parameters and the associated calibration uncertainty analysis, have been applied to single- and multiple-axis inertial model attitude sensors used in wind tunnel testing to measure angle of attack and roll angle. The analysis provides confidence and prediction intervals of calibrated sensor measurement uncertainty as functions of applied input pitch and roll angles. A comparative performance study of various experimental designs for inertial sensor calibration is presented along with corroborating experimental data. The importance of replicated calibrations over extended time periods has been emphasized; replication provides independent estimates of calibration precision and bias uncertainties, statistical tests for calibration or modeling bias uncertainty, and statistical tests for sensor parameter drift over time. A set of recommendations for a new standardized model attitude sensor calibration method and usage procedures is included. The statistical information provided by these procedures is necessary for the uncertainty analysis of aerospace test results now required by users of industrial wind tunnel test facilities.

  16. Model calibration in the continual reassessment method.

    PubMed

    Lee, Shing M; Cheung, Ying Kuen

    2009-06-01

    The continual reassessment method (CRM) is an adaptive model-based design used to estimate the maximum tolerated dose in dose-finding clinical trials. One way to evaluate the sensitivity of a given CRM model, including the functional form of the dose-toxicity curve, the prior distribution on the model parameter, and the initial guesses of toxicity probability at each dose, is to use indifference intervals. While the indifference interval technique provides a succinct summary of model sensitivity, there are infinitely many possible ways to specify the initial guesses of toxicity probability; in practice, these are generally specified by trial and error through extensive simulations. By using indifference intervals, the initial guesses used in the CRM can be selected by specifying a range of acceptable toxicity probabilities in addition to the target probability of toxicity. An algorithm is proposed for obtaining the indifference interval that maximizes the average percentage of correct selection across a set of scenarios of true probabilities of toxicity, providing a systematic approach for selecting initial guesses in a much less time-consuming manner than the trial-and-error method. The methods are compared in the context of two real CRM trials. For both trials, the initial guesses selected by the proposed algorithm had operating characteristics (percentage of correct selection, average absolute difference between the true probability of the dose selected and the target probability of toxicity, percentage treated at each dose, and overall percentage of toxicity) similar to those of the initial guesses used during the conduct of the trials, which had been obtained by trial and error through a time-consuming calibration process. The average percentages of correct selection for the scenarios considered were 61.5 and 62.0% in the lymphoma trial, and 62.9 and 64.0% in the stroke trial, for the trial-and-error method versus the proposed approach. We only present

  17. Improved Spectrophotometric Calibration of the SDSS-III BOSS Quasar Sample

    NASA Astrophysics Data System (ADS)

    Margala, Daniel; Kirkby, David; Dawson, Kyle; Bailey, Stephen; Blanton, Michael; Schneider, Donald P.

    2016-11-01

    We present a model for spectrophotometric calibration errors in observations of quasars from the third generation of the Sloan Digital Sky Survey Baryon Oscillation Spectroscopic Survey (BOSS) and describe the correction procedure we have developed and applied to this sample. Calibration errors are primarily due to atmospheric differential refraction and guiding offsets during each exposure. The corrections potentially reduce the systematics for any studies of BOSS quasars, including the measurement of baryon acoustic oscillations using the Lyα forest. Our model suggests that, on average, the observed quasar flux in BOSS is overestimated by ∼19% at 3600 Å and underestimated by ∼24% at 10,000 Å. Our corrections for the entire BOSS quasar sample are publicly available.

  18. Calibration of a fuel relocation model in BISON

    SciTech Connect

    Swiler, L. P.; Williamson, R. L.; Perez, D. M.

    2013-07-01

    We demonstrate parameter calibration in the context of the BISON nuclear fuels performance analysis code. Specifically, we present the calibration of a parameter governing fuel relocation: the power level at which the relocation model is activated. This relocation activation parameter is a critical value in obtaining reasonable comparison with fuel centerline temperature measurements. It also is the subject of some debate in terms of the optimal values. We show that the optimal value does vary across the calibration to individual rods. We also demonstrate an aggregated calibration, where we calibrate to observations from six rods. (authors)

  19. Residual bias in a multiphase flow model calibration and prediction

    USGS Publications Warehouse

    Poeter, E.P.; Johnson, R.H.

    2002-01-01

    When calibrated models produce biased residuals, we assume it is due to an inaccurate conceptual model and revise the model, choosing the most representative model as the one with the best fit and least biased residuals. However, if the calibration data are biased, we may fail to identify an acceptable model or choose an incorrect model. Conceptual model revision could not eliminate biased residuals during inversion of simulated DNAPL migration under controlled conditions at the Borden Site in Ontario, Canada. This paper delineates hypotheses for the source of bias and explains the evolution of the calibration and resulting model predictions.

  20. Accuracy of hospital standardized mortality rates: effects of model calibration.

    PubMed

    Kipnis, Patricia; Liu, Vincent; Escobar, Gabriel J

    2014-04-01

    Risk-adjusted mortality rates are commonly used in quality report cards to compare hospital performance. The risk adjustment depends on models that are assessed for goodness-of-fit using various discrimination and calibration measures. However, the relationship between model fit and the accuracy of hospital comparisons is not well characterized. To evaluate the impact of imperfect model calibration (miscalibration) on the accuracy of hospital comparisons. We constructed Monte Carlo simulations where a risk-adjustment model is used in a population with a different mortality distribution than in the original model. We estimated the power of calibration metrics to detect miscalibration. We estimated the sensitivity and specificity of a hospital comparisons method under different imperfect model calibration scenarios using an empirical method. The U-statistics showed the highest power to detect intercept and slope deviations in the calibration curve, followed by the Hosmer-Lemeshow, and the calibration intercept and slope tests. The specificity decreased with increased intercept and slope deviations and with hospital size. The effect of an imperfect model fit on sensitivity is a function of the true standardized mortality ratio, the underlying mortality rate, sample size, and observed intercept and slope deviations. Poorly performing hospitals can appear as good performers and vice versa, depending on the deviation magnitude and direction. Deviations from perfect model calibration have a direct impact on the accuracy of hospital comparisons. Publishing the calibration intercept and slope of risk-adjustment models would allow the users to monitor their performance against the true standard population.
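
    The calibration intercept and slope recommended for publication in the conclusion are commonly estimated by logistic recalibration, i.e., regressing observed outcomes on the logit of the predicted risks. A minimal sketch with simulated, deliberately miscalibrated predictions (using statsmodels):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
p_pred = np.clip(rng.beta(2, 8, 5000), 1e-6, 1 - 1e-6)  # model-predicted risks
logit = np.log(p_pred / (1 - p_pred))

# Simulate outcomes from a deliberately miscalibrated truth:
# intercept 0.3 and slope 0.8 on the logit scale.
p_true = 1.0 / (1.0 + np.exp(-(0.3 + 0.8 * logit)))
y = rng.binomial(1, p_true)

# Logistic recalibration: regress outcomes on the logit of the predicted risk.
fit = sm.Logit(y, sm.add_constant(logit)).fit(disp=0)
intercept, slope = fit.params
print(f"calibration intercept = {intercept:.3f}, slope = {slope:.3f}")
# Perfect calibration corresponds to intercept ~ 0 and slope ~ 1.
```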

  1. Simultaneous spectrophotometric determination of Fe(III) and Al(III) using orthogonal signal correction-partial least squares calibration method after solidified floating organic drop microextraction

    NASA Astrophysics Data System (ADS)

    Rohani Moghadam, Masoud; Haji Shabani, Ali Mohammad; Dadfarnia, Shayessteh

    2015-01-01

    A solidified floating organic drop microextraction (SFODME) procedure was developed for the simultaneous extraction and preconcentration of Fe(III) and Al(III) from water samples. The method was based on the formation of cationic complexes between Fe(III) and Al(III) and 3,5,7,2‧,4‧-pentahydroxyflavone (morin), which were extracted into 1-undecanol as ion pairs with perchlorate ions. The absorbance of the extracted complexes was then measured in the wavelength range of 300-450 nm. Finally, the concentration of each metal ion was determined by the use of the orthogonal signal correction-partial least squares (OSC-PLS) calibration method. Several experimental parameters that may affect the extraction process, such as the type and volume of extraction solvent, pH of the aqueous solution, morin and perchlorate concentrations, and extraction time, were optimized. Under the optimum conditions, Fe(III) and Al(III) were determined in the ranges of 0.83-27.00 μg L-1 (R2 = 0.9985) and 1.00-32.00 μg L-1 (R2 = 0.9979), respectively. The relative standard deviations (n = 6) at 12.80 μg L-1 of Fe(III) and 17.00 μg L-1 of Al(III) were 3.2% and 3.5%, respectively. Enhancement factors of 102 and 96 were obtained for Fe(III) and Al(III) ions, respectively. The procedure was successfully applied to the determination of iron and aluminum in steam and water samples of a thermal power plant; the accuracy was assessed through recovery experiments and independent analysis by electrothermal atomic absorption spectroscopy (ETAAS).
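
    The PLS half of the OSC-PLS method can be sketched with scikit-learn; the orthogonal signal correction preprocessing step is omitted, and the Gaussian "pure-component spectra" and concentration ranges below are synthetic stand-ins for the real morin-complex absorbance data.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(5)
wavelengths = np.linspace(300, 450, 151)
# Hypothetical pure-component spectra for the two morin complexes.
s_fe = np.exp(-0.5 * ((wavelengths - 410) / 18) ** 2)
s_al = np.exp(-0.5 * ((wavelengths - 385) / 15) ** 2)

# Training mixtures spanning roughly the paper's calibration ranges (ug/L).
conc = rng.uniform([0.8, 1.0], [27.0, 32.0], size=(30, 2))
spectra = 0.01 * conc @ np.vstack([s_fe, s_al]) + rng.normal(0.0, 1e-3, (30, 151))

pls = PLSRegression(n_components=3).fit(spectra, conc)
test = 0.01 * np.array([[12.8, 17.0]]) @ np.vstack([s_fe, s_al])
print("predicted [Fe(III), Al(III)] ug/L:", np.round(pls.predict(test), 2))
```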

  2. Simultaneous spectrophotometric determination of Fe(III) and Al(III) using orthogonal signal correction-partial least squares calibration method after solidified floating organic drop microextraction.

    PubMed

    Rohani Moghadam, Masoud; Haji Shabani, Ali Mohammad; Dadfarnia, Shayessteh

    2015-01-25

    A solidified floating organic drop microextraction (SFODME) procedure was developed for the simultaneous extraction and preconcentration of Fe(III) and Al(III) from water samples. The method was based on the formation of cationic complexes between Fe(III) and Al(III) and 3,5,7,2',4'-pentahydroxyflavone (morin), which were extracted into 1-undecanol as ion pairs with perchlorate ions. The absorbance of the extracted complexes was then measured in the wavelength range of 300-450 nm. Finally, the concentration of each metal ion was determined by the use of the orthogonal signal correction-partial least squares (OSC-PLS) calibration method. Several experimental parameters that may affect the extraction process, such as the type and volume of extraction solvent, pH of the aqueous solution, morin and perchlorate concentrations, and extraction time, were optimized. Under the optimum conditions, Fe(III) and Al(III) were determined in the ranges of 0.83-27.00 μg L(-1) (R(2)=0.9985) and 1.00-32.00 μg L(-1) (R(2)=0.9979), respectively. The relative standard deviations (n=6) at 12.80 μg L(-1) of Fe(III) and 17.00 μg L(-1) of Al(III) were 3.2% and 3.5%, respectively. Enhancement factors of 102 and 96 were obtained for Fe(III) and Al(III) ions, respectively. The procedure was successfully applied to the determination of iron and aluminum in steam and water samples of a thermal power plant; the accuracy was assessed through recovery experiments and independent analysis by electrothermal atomic absorption spectroscopy (ETAAS).

  3. Calibrating Historical IR Sensors Using GEO and AVHRR Infrared Tropical Mean Calibration Models

    NASA Technical Reports Server (NTRS)

    Scarino, Benjamin; Doelling, David R.; Minnis, Patrick; Gopalan, Arun; Haney, Conor; Bhatt, Rajendra

    2014-01-01

    Long-term, remote-sensing-based climate data records (CDRs) are highly dependent on having consistent, well-calibrated satellite instrument measurements of the Earth's radiant energy. Therefore, by making historical satellite calibrations consistent with those of today's imagers, the Earth-observing community can benefit from a CDR that spans a minimum of 30 years. Most operational meteorological satellites rely on an onboard blackbody and space looks to provide on-orbit IR calibration, but neither target is traceable to absolute standards. The IR channels can also be affected by ice on the detector window, angle dependency of the scan mirror emissivity, stray light, and detector-to-detector striping. Being able to quantify and correct such degradations would mean IR data from any satellite imager could contribute to a CDR. Recent efforts have focused on utilizing well-calibrated modern hyper-spectral sensors to intercalibrate concurrent operational IR imagers to a single reference. In order to consistently calibrate both historical and current IR imagers to the same reference, however, another strategy is needed. Large, well-characterized tropical-domain Earth targets have the potential of providing an Earth-view reference accuracy of within 0.5 K. To that effort, NASA Langley is developing an IR tropical mean calibration model in order to calibrate historical Advanced Very High Resolution Radiometer (AVHRR) instruments. Using Meteosat-9 (Met-9) as a reference, empirical models are built based on spatially/temporally binned Met-9 and AVHRR tropical IR brightness temperatures. By demonstrating the stability of the Met-9 tropical models, NOAA-18 AVHRR can be calibrated to Met-9 by matching the AVHRR monthly histogram averages with the Met-9 model. This method is validated with ray-matched AVHRR and Met-9 bias-difference time series. Establishing the validity of this empirical model will allow for the calibration of historical AVHRR sensors to within 0.5 K, and thereby

  4. Calibrating historical IR sensors using GEO and AVHRR infrared tropical mean calibration models

    NASA Astrophysics Data System (ADS)

    Scarino, Benjamin; Doelling, David R.; Minnis, Patrick; Gopalan, Arun; Haney, Conor; Bhatt, Rajendra

    2014-09-01

    Long-term, remote-sensing-based climate data records (CDRs) are highly dependent on having consistent, well-calibrated satellite instrument measurements of the Earth's radiant energy. Therefore, by making historical satellite calibrations consistent with those of today's imagers, the Earth-observing community can benefit from a CDR that spans a minimum of 30 years. Most operational meteorological satellites rely on an onboard blackbody and space looks to provide on-orbit IR calibration, but neither target is traceable to absolute standards. The IR channels can also be affected by ice on the detector window, angle dependency of the scan mirror emissivity, stray light, and detector-to-detector striping. Being able to quantify and correct such degradations would mean IR data from any satellite imager could contribute to a CDR. Recent efforts have focused on utilizing well-calibrated modern hyper-spectral sensors to intercalibrate concurrent operational IR imagers to a single reference. In order to consistently calibrate both historical and current IR imagers to the same reference, however, another strategy is needed. Large, well-characterized tropical-domain Earth targets have the potential of providing an Earth-view reference accuracy of within 0.5 K. To that effort, NASA Langley is developing an IR tropical mean calibration model in order to calibrate historical Advanced Very High Resolution Radiometer (AVHRR) instruments. Using Meteosat-9 (Met-9) as a reference, empirical models are built based on spatially/temporally binned Met-9 and AVHRR tropical IR brightness temperatures. By demonstrating the stability of the Met-9 tropical models, NOAA-18 AVHRR can be calibrated to Met-9 by matching the AVHRR monthly histogram averages with the Met-9 model. This method is validated with ray-matched AVHRR and Met-9 bias-difference time series. Establishing the validity of this empirical model will allow for the calibration of historical AVHRR sensors to within 0.5 K, and

  5. Evaluation of “Autotune” calibration against manual calibration of building energy models

    DOE PAGES

    Chaudhary, Gaurav; New, Joshua; Sanyal, Jibonananda; ...

    2016-08-26

    Our paper demonstrates the application of Autotune, a methodology aimed at automatically producing calibrated building energy models using measured data, in two case studies. In the first case, a building model is de-tuned by deliberately injecting faults into more than 60 parameters. This model was then calibrated using Autotune and its accuracy with respect to the original model was evaluated in terms of the industry-standard normalized mean bias error and coefficient of variation of root mean squared error metrics set forth in ASHRAE Guideline 14. In addition to whole-building energy consumption, outputs including lighting, plug load profiles, HVAC energy consumption, zone temperatures, and other variables were analyzed. In the second case, Autotune calibration is compared directly to experts’ manual calibration of an emulated-occupancy, full-size residential building with comparable calibration results in much less time. Lastly, our paper concludes with a discussion of the key strengths and weaknesses of auto-calibration approaches.

  6. METHODOLOGIES FOR CALIBRATION AND PREDICTIVE ANALYSIS OF A WATERSHED MODEL

    EPA Science Inventory

    The use of a fitted-parameter watershed model to address water quantity and quality management issues requires that it be calibrated under a wide range of hydrologic conditions. However, rarely does model calibration result in a unique parameter set. Parameter nonuniqueness can l...

  7. METHODOLOGIES FOR CALIBRATION AND PREDICTIVE ANALYSIS OF A WATERSHED MODEL

    EPA Science Inventory

    The use of a fitted-parameter watershed model to address water quantity and quality management issues requires that it be calibrated under a wide range of hydrologic conditions. However, rarely does model calibration result in a unique parameter set. Parameter nonuniqueness can l...

  8. Distributed calibrating snow models using remotely sensed snow cover information

    NASA Astrophysics Data System (ADS)

    Li, H.

    2015-12-01

    To improve the simulation accuracy of a snow model, remotely sensed snow cover data are used to calibrate the spatial parameters of the model. A physically based snow model is developed, and snow parameters, including snow surface roughness, new snow density, and the critical threshold temperature distinguishing snowfall from rainfall, are spatially calibrated in this study. The study region, the Babaohe basin in northwestern China, has seasonal snow cover and complex terrain. The results indicate that spatial calibration of the snow model parameters makes the simulation results more reasonable; the simulated snow accumulation days and plot-scale snow depths are better than those obtained with lumped calibration.

  9. A Method to Test Model Calibration Techniques: Preprint

    SciTech Connect

    Judkoff, Ron; Polly, Ben; Neymark, Joel

    2016-09-01

    This paper describes a method for testing model calibration techniques. Calibration is commonly used in conjunction with energy retrofit audit models. An audit is conducted to gather information about the building needed to assemble an input file for a building energy modeling tool. A calibration technique is used to reconcile model predictions with utility data, and then the 'calibrated model' is used to predict energy savings from a variety of retrofit measures and combinations thereof. Current standards and guidelines such as BPI-2400 and ASHRAE-14 set criteria for 'goodness of fit' and assume that if the criteria are met, then the calibration technique is acceptable. While it is logical to use the actual performance data of the building to tune the model, it is not certain that a good fit will result in a model that better predicts post-retrofit energy savings. Therefore, the basic idea here is that the simulation program (intended for use with the calibration technique) is used to generate surrogate utility bill data and retrofit energy savings data against which the calibration technique can be tested. This provides three figures of merit for testing a calibration technique, 1) accuracy of the post-retrofit energy savings prediction, 2) closure on the 'true' input parameter values, and 3) goodness of fit to the utility bill data. The paper will also discuss the pros and cons of using this synthetic surrogate data approach versus trying to use real data sets of actual buildings.

  10. Polarimetric PALSAR System Model Assessment and Calibration

    NASA Astrophysics Data System (ADS)

    Touzi, R.; Shimada, M.

    2009-04-01

    Polarimetric PALSAR system parameters are assessed using data sets collected over various calibration sites. The data collected over the Amazonian forest permit validation of the zero-Faraday-rotation hypothesis near the equator. The analysis of the Amazonian forest data and the response of the corner reflectors deployed during the PALSAR acquisitions lead to the conclusion that the antenna is highly isolated (better than -35 dB). These results are confirmed using data collected over the Sweden and Ottawa calibration sites. The 5-m trihedrals deployed at the Sweden calibration site by the Chalmers University of Technology permit accurate measurement of antenna parameters and detection of 2-3 degrees of Faraday rotation during daytime acquisitions, whereas no Faraday rotation was noted during night acquisitions. Small Faraday rotation angles (2-3 degrees) have been measured using acquisitions over the DLR Oberpfaffenhofen and Ottawa calibration sites. The presence of small but still significant Faraday rotation (2-3 degrees) induces a corner-reflector return at the cross-polarizations HV and VH that should not be interpreted as actual antenna cross-talk. The PALSAR antenna is highly isolated (better than -35 dB), and diagonal antenna distortion matrices (with zero cross-talk terms) can be used for accurate calibration of PALSAR polarimetric data.

  11. Automatically calibrating admittances in KATE's autonomous launch operations model

    NASA Astrophysics Data System (ADS)

    Morgan, Steve

    1992-09-01

    This report documents a 1000-line Symbolics LISP program that automatically calibrates all 15 fluid admittances in KATE's Autonomous Launch Operations (ALO) model. (KATE is Kennedy Space Center's Knowledge-based Autonomous Test Engineer, a diagnosis and repair expert system created for use on the Space Shuttle's various fluid flow systems.) As a new KATE application, the calibrator described here breaks new ground for KSC's Artificial Intelligence Lab by allowing KATE to both control and measure the hardware she supervises. By automating a formerly manual process, the calibrator: (1) saves the ALO model builder untold amounts of labor; (2) enables quick repairs after workmen accidentally adjust ALO's hand valves; and (3) frees the modeler to pursue new KATE applications that previously were too complicated. Also reported are suggestions for enhancing the program: (1) to calibrate ALO's TV cameras, pumps, and sensor tolerances; and (2) to calibrate devices in other KATE models, such as the shuttle's LOX and Environment Control System (ECS).

  12. Automatically calibrating admittances in KATE's autonomous launch operations model

    NASA Technical Reports Server (NTRS)

    Morgan, Steve

    1992-01-01

    This report documents a 1000-line Symbolics LISP program that automatically calibrates all 15 fluid admittances in KATE's Autonomous Launch Operations (ALO) model. (KATE is Kennedy Space Center's Knowledge-based Autonomous Test Engineer, a diagnosis and repair expert system created for use on the Space Shuttle's various fluid flow systems.) As a new KATE application, the calibrator described here breaks new ground for KSC's Artificial Intelligence Lab by allowing KATE to both control and measure the hardware she supervises. By automating a formerly manual process, the calibrator: (1) saves the ALO model builder untold amounts of labor; (2) enables quick repairs after workmen accidentally adjust ALO's hand valves; and (3) frees the modeler to pursue new KATE applications that previously were too complicated. Also reported are suggestions for enhancing the program: (1) to calibrate ALO's TV cameras, pumps, and sensor tolerances; and (2) to calibrate devices in other KATE models, such as the shuttle's LOX and Environment Control System (ECS).

  13. SWAT Model Configuration, Calibration and Validation for Lake Champlain Basin

    EPA Pesticide Factsheets

    The Soil and Water Assessment Tool (SWAT) model was used to develop phosphorus loading estimates for sources in the Lake Champlain Basin. This document describes the model setup and parameterization, and presents calibration results.

  14. Velocity prediction errors related to flow model calibration uncertainty

    SciTech Connect

    Stephenson, D.E.; Duffield, G.M.; Buss, D.R.

    1990-01-01

    At the Savannah River Site (SRS), a United States Department of Energy facility in South Carolina, a three-dimensional, steady-state numerical model has been developed for a four-aquifer, three-aquitard groundwater flow system. This model has been used for numerous predictive simulation applications at SRS, and since the initial calibration, the model has been refined several times. Originally, calibration of the model was accomplished using a nonlinear least-squares inverse technique for a set of 50 water-level calibration targets non-uniformly distributed in the four aquifers. The estimated hydraulic properties from this calibration generally showed reasonable agreement with values estimated from field tests. Subsequent model refinements and application of this model to field problems have shown that uncertainties in the model parameterization become much more apparent in the prediction of the velocity field than in the simulation of the distribution of hydraulic heads. The combined use of three types of information (hydraulic head distributions, geologic framework models, and velocity field monitoring) provides valuable calibration data for flow modeling investigations; however, calibration of a flow model typically relies upon measured water levels. For a given set of water-level calibration targets, the uncertainties associated with imperfect knowledge of physical system parameters or groundwater velocities may not be discernible in the calibrated hydraulic head distribution. In this paper, modeling results from studies at SRS illustrate examples of model inadequacy resulting from calibrating only on observed water levels, and the effects of these inadequacies on velocity field prediction are discussed. 14 refs., 6 figs.

  15. Impact of data quality and quantity and the calibration procedure on crop growth model calibration

    NASA Astrophysics Data System (ADS)

    Seidel, Sabine J.; Werisch, Stefan

    2014-05-01

    Crop growth models are a commonly used tool for impact assessment of climate variability and climate change on crop yields and water use. Process-based crop models rely on algorithms that approximate the main physiological plant processes by a set of equations containing several calibration parameters, as well as basic underlying assumptions. It is well recognized that model calibration is essential to improve the accuracy and reliability of model predictions. However, model calibration and validation are often hindered by the limited quantity and quality of available data. Recent studies suggest that crop model parameters can only be derived from field experiments in which plant growth and development processes have been measured. To achieve a reliable prediction of crop growth under irrigation or drought stress, the correct characterization of the whole soil-plant-atmosphere system is essential. In this context, accurate simulation of crop development, yield, and soil water dynamics plays an important role. In this study we aim to investigate the importance of a site- and cultivar-specific model calibration based on experimental data using the SVAT model Daisy. We investigate to what extent different data sets and different parameter estimation procedures affect yield estimates, irrigation water demand, and the soil water dynamics in particular. The comprehensive experimental data were derived from an experiment conducted in Germany where five irrigation regimes were imposed on cabbage. Data collection included continuous measurements of soil tension and soil water content in two plots at three depths; weekly measurements of LAI, plant heights, leaf N content, stomatal conductivity, biomass partitioning, and rooting depth; as well as harvested yields and the duration of the growing period. Three crop growth calibration strategies were compared: (1) manual calibration based on yield and duration of growing period, (2) manual calibration based on yield

  16. Multi-fidelity approach to dynamics model calibration

    NASA Astrophysics Data System (ADS)

    Absi, Ghina N.; Mahadevan, Sankaran

    2016-02-01

    This paper investigates the use of structural dynamics computational models with multiple levels of fidelity in the calibration of system parameters. Different types of models may be available for the estimation of unmeasured system properties, with different levels of physics fidelity, mesh resolution and boundary condition assumptions. In order to infer these system properties, Bayesian calibration uses information from multiple sources (including experimental data and prior knowledge), and comprehensively quantifies the uncertainty in the calibration parameters. Estimating the posteriors is done using Markov Chain Monte Carlo sampling, which requires a large number of computations, thus making the use of a high-fidelity model for calibration prohibitively expensive. On the other hand, use of a low-fidelity model could lead to significant error in calibration and prediction. Therefore, this paper develops an approach for model parameter calibration with a low-fidelity model corrected using higher fidelity simulations, and investigates the trade-off between accuracy and computational effort. The methodology is illustrated for a curved panel located in the vicinity of a hypersonic aircraft engine, subjected to acoustic loading. Two models (a frequency response analysis and a full time history analysis) are combined to calibrate the damping characteristics of the panel.
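
    The Bayesian calibration loop described here can be illustrated with a deliberately cheap model; the cost of repeating such a loop with a high-fidelity simulator is exactly what motivates the paper's multi-fidelity correction. Below is a random-walk Metropolis sketch on a hypothetical one-parameter damping model, not the authors' panel model.

```python
import numpy as np

def low_fidelity(theta, t):
    # Hypothetical cheap model: damped oscillation with damping parameter theta.
    return np.exp(-theta * t) * np.cos(2 * np.pi * t)

rng = np.random.default_rng(6)
t = np.linspace(0.0, 3.0, 60)
data = low_fidelity(0.8, t) + rng.normal(0.0, 0.05, t.size)  # synthetic "experiment"
sigma, prior_mu, prior_sd = 0.05, 1.0, 0.5

def log_post(theta):
    if theta <= 0:
        return -np.inf
    log_like = -0.5 * np.sum((data - low_fidelity(theta, t)) ** 2) / sigma**2
    log_prior = -0.5 * ((theta - prior_mu) / prior_sd) ** 2
    return log_like + log_prior

# Random-walk Metropolis; each step costs one model run, which is why placing a
# high-fidelity simulator inside this loop quickly becomes prohibitive.
theta, samples = 1.0, []
for _ in range(20000):
    prop = theta + rng.normal(0.0, 0.05)
    if np.log(rng.uniform()) < log_post(prop) - log_post(theta):
        theta = prop
    samples.append(theta)
post = np.array(samples[5000:])
print(f"posterior mean = {post.mean():.3f} +/- {post.std():.3f}")
```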

  17. Calibration of stormwater quality regression models: a random process?

    PubMed

    Dembélé, A; Bertrand-Krajewski, J-L; Barillon, B

    2010-01-01

    Regression models are among the most frequently used models to estimate pollutant event mean concentrations (EMC) in wet weather discharges in urban catchments. Two main questions dealing with the calibration of EMC regression models are investigated: i) the sensitivity of models to the size and content of the data sets used for their calibration, and ii) the change in modelling results when models are re-calibrated as data sets grow and change over time with newly collected experimental data. Based on an experimental data set of 64 rain events monitored in a densely urbanised catchment, four TSS EMC regression models (two log-linear and two linear models) with two or three explanatory variables have been derived and analysed. Model calibration with the iteratively re-weighted least squares method is less sensitive and leads to more robust results than the ordinary least squares method. Three calibration options have been investigated: two options accounting for the chronological order of the observations, and one option using random samples of events from the whole available data set. Results obtained with the best-performing non-linear model clearly indicate that the model is highly sensitive to the size and content of the data set used for its calibration.
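
    The iteratively re-weighted least squares calibration favored in this abstract can be sketched in a few lines. The version below uses Huber weights with a median-based scale estimate, which is one common choice rather than necessarily the authors'; the rainfall and TSS numbers are synthetic, with a few gross outliers to show the robustness.

```python
import numpy as np

def irls(x, y, iters=20, c=1.345):
    """Iteratively re-weighted least squares with Huber weights (minimal sketch)."""
    A = np.column_stack([np.ones(len(y)), x])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)          # ordinary LS start
    for _ in range(iters):
        r = y - A @ beta
        s = np.median(np.abs(r)) / 0.6745 + 1e-12         # robust scale estimate
        w = np.minimum(1.0, c / (np.abs(r / s) + 1e-12))  # Huber weights
        sw = np.sqrt(w)
        beta, *_ = np.linalg.lstsq(A * sw[:, None], y * sw, rcond=None)
    return beta  # [intercept, slope]

rng = np.random.default_rng(7)
x = rng.uniform(0.0, 50.0, 64)                 # e.g. rainfall depth per event
y = 3.0 * x + 20.0 + rng.normal(0.0, 5.0, 64)  # e.g. TSS event mean concentration
y[::10] += 150.0                               # a few gross outlier events
print("OLS  [slope, intercept]:", np.round(np.polyfit(x, y, 1), 2))
print("IRLS [slope, intercept]:", np.round(irls(x, y)[::-1], 2))
```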

  18. Calibration of the Site-Scale Saturated Zone Flow Model

    SciTech Connect

    G. A. Zyvoloski

    2001-06-28

    The purpose of the flow calibration analysis work is to provide Performance Assessment (PA) with the calibrated site-scale saturated zone (SZ) flow model that will be used to make radionuclide transport calculations. As such, it is one of the most important models developed in the Yucca Mountain project. This model will be a culmination of much of our knowledge of the SZ flow system. The objective of this study is to provide a defensible site-scale SZ flow and transport model that can be used for assessing total system performance. A defensible model would include geologic and hydrologic data that are used to form the hydrogeologic framework model; also, it would include hydrochemical information to infer transport pathways, in-situ permeability measurements, and water level and head measurements. In addition, the model should include information on major model sensitivities. Especially important are those that affect calibration, the direction of transport pathways, and travel times. Finally, if warranted, alternative calibrations representing different conceptual models should be included. To obtain a defensible model, all available data should be used (or at least considered) to obtain a calibrated model. The site-scale SZ model was calibrated using measured and model-generated water levels and hydraulic head data, specific discharge calculations, and flux comparisons along several of the boundaries. Model validity was established by comparing model-generated permeabilities with the permeability data from field and laboratory tests; by comparing fluid pathlines obtained from the SZ flow model with those inferred from hydrochemical data; and by comparing the upward gradient generated with the model with that observed in the field. This analysis is governed by the Office of Civilian Radioactive Waste Management (OCRWM) Analysis and Modeling Report (AMR) Development Plan ''Calibration of the Site-Scale Saturated Zone Flow Model'' (CRWMS M&O 1999a).

  19. Calibration of disease simulation model using an engineering approach.

    PubMed

    Kong, Chung Yin; McMahon, Pamela M; Gazelle, G Scott

    2009-06-01

    Calibrating a disease simulation model's outputs to existing clinical data is vital to generate confidence in the model's predictive ability. Calibration involves two challenges: 1) defining a total goodness-of-fit (GOF) score for multiple targets if simultaneous fitting is required, and 2) searching for the optimal parameter set that minimizes the total GOF score (i.e., yields the best fit). To address these two prominent challenges, we have applied an engineering approach to calibrate a microsimulation model, the Lung Cancer Policy Model (LCPM). First, 11 targets derived from clinical and epidemiologic data were combined into a total GOF score by a weighted-sum approach, accounting for the user-defined relative importance of the calibration targets. Second, two automated parameter search algorithms, simulated annealing (SA) and genetic algorithm (GA), were independently applied to a simultaneous search of 28 natural history parameters to minimize the total GOF score. Algorithm performance metrics were defined for speed and model fit. Both search algorithms obtained total GOF scores below 95 within 1000 search iterations. Our results show that SA outperformed GA in locating a lower GOF. After calibration, the LCPM's predicted natural history of lung cancer was consistent with that of other mathematical models of lung cancer development. An engineering-based calibration method was able to simultaneously fit LCPM output to multiple calibration targets, with the benefits of fast computational speed and a reduced need for human input and its potential bias.
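
    A weighted-sum GOF score combined with a simulated-annealing search can be sketched with SciPy's dual_annealing routine (a relative of the SA algorithm used in the paper, not the authors' implementation); the two-output "model", targets, and weights below are invented for illustration.

```python
import numpy as np
from scipy.optimize import dual_annealing

def model(params):
    # Hypothetical simulator returning two output series (e.g. two target curves).
    a, b = params
    t = np.arange(10)
    return a * t, b * np.sqrt(t + 1)

rng = np.random.default_rng(8)
truth = model((2.0, 5.0))
targets = [truth[0] + rng.normal(0.0, 0.2, 10), truth[1] + rng.normal(0.0, 0.3, 10)]
weights = [1.0, 2.0]  # user-defined relative importance of each calibration target

def total_gof(params):
    # Weighted sum of per-target mean squared errors.
    return sum(w * np.mean((out - tgt) ** 2)
               for w, out, tgt in zip(weights, model(params), targets))

res = dual_annealing(total_gof, bounds=[(0.1, 10.0), (0.1, 10.0)], seed=8)
print("best-fit parameters:", np.round(res.x, 3), " total GOF:", round(res.fun, 4))
```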

  20. Definition and sensitivity of the conceptual MORDOR rainfall-runoff model parameters using different multi-criteria calibration strategies

    NASA Astrophysics Data System (ADS)

    Garavaglia, F.; Seyve, E.; Gottardi, F.; Le Lay, M.; Gailhard, J.; Garçon, R.

    2014-12-01

    MORDOR is a conceptual hydrological model extensively used in Électricité de France (EDF, the French electric utility company) operational applications: (i) hydrological forecasting, (ii) flood risk assessment, (iii) water balance, and (iv) climate change studies. MORDOR is a lumped, reservoir-type, elevation-based model with hourly or daily areal rainfall and air temperature as the driving input data. The principal hydrological processes represented are evapotranspiration, direct and indirect runoff, groundwater, snow accumulation and melt, and routing. The model has been used intensively at EDF for more than 20 years, in particular for modeling French mountainous watersheds. Regarding parameter calibration, we propose and test alternative multi-criteria techniques based on two specific approaches: automatic calibration using single-objective functions, and a priori parameter calibration founded on hydrological watershed features. The automatic calibration approach uses single-objective functions, based on the Kling-Gupta efficiency, to quantify the agreement between simulated and observed runoff, focusing on four different runoff samples: (i) the time series itself, (ii) the annual hydrological regime, (iii) monthly cumulative distribution functions, and (iv) recession sequences. The primary purpose of this study is to analyze the definition and sensitivity of MORDOR parameters by testing different calibration techniques in order to: (i) simplify the model structure, (ii) increase the calibration-validation performance of the model, and (iii) reduce the equifinality problem of the calibration process. We propose an alternative calibration strategy that reaches these goals. The analysis is illustrated by calibrating the MORDOR model to daily data for 50 watersheds located in French mountainous regions.
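
    The Kling-Gupta efficiency used as the single-objective function here decomposes into correlation, variability, and bias terms. A minimal sketch of the standard 2009 formulation on synthetic daily runoff:

```python
import numpy as np

def kge(sim, obs):
    """Kling-Gupta efficiency (2009 form) and its three components."""
    sim, obs = np.asarray(sim, float), np.asarray(obs, float)
    r = np.corrcoef(sim, obs)[0, 1]   # linear correlation
    alpha = sim.std() / obs.std()     # variability ratio
    beta = sim.mean() / obs.mean()    # bias ratio
    return 1 - np.sqrt((r - 1) ** 2 + (alpha - 1) ** 2 + (beta - 1) ** 2), (r, alpha, beta)

rng = np.random.default_rng(9)
obs = np.maximum(rng.gamma(2.0, 5.0, 365), 0.1)   # synthetic daily runoff
sim = 0.9 * obs + rng.normal(0.0, 2.0, 365)       # an imperfect simulation
score, (r, alpha, beta) = kge(sim, obs)
print(f"KGE = {score:.3f} (r = {r:.3f}, alpha = {alpha:.3f}, beta = {beta:.3f})")
```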

  1. Finite Element Model Calibration Approach for Ares I-X

    NASA Technical Reports Server (NTRS)

    Horta, Lucas G.; Reaves, Mercedes C.; Buehrle, Ralph D.; Templeton, Justin D.; Lazor, Daniel R.; Gaspar, James L.; Parks, Russell A.; Bartolotta, Paul A.

    2010-01-01

    Ares I-X is a pathfinder vehicle concept under development by NASA to demonstrate a new class of launch vehicles. Although this vehicle is essentially a shell of what the Ares I vehicle will be, efforts are underway to model and calibrate the analytical models before its maiden flight. Work reported in this document will summarize the model calibration approach used including uncertainty quantification of vehicle responses and the use of nonconventional boundary conditions during component testing. Since finite element modeling is the primary modeling tool, the calibration process uses these models, often developed by different groups, to assess model deficiencies and to update parameters to reconcile test with predictions. Data for two major component tests and the flight vehicle are presented along with the calibration results. For calibration, sensitivity analysis is conducted using Analysis of Variance (ANOVA). To reduce the computational burden associated with ANOVA calculations, response surface models are used in lieu of computationally intensive finite element solutions. From the sensitivity studies, parameter importance is assessed as a function of frequency. In addition, the work presents an approach to evaluate the probability that a parameter set exists to reconcile test with analysis. Comparisons of pre-test predictions of frequency response uncertainty bounds with measured data, results from the variance-based sensitivity analysis, and results from component test models with calibrated boundary stiffness models are all presented.

  2. Assessment of simulation-based calibration of rectangular pulse models

    NASA Astrophysics Data System (ADS)

    Vanhaute, Willem Jan; Vandenberghe, Sander; Willems, Patrick; Verhoest, Niko E. C.

    2013-04-01

    The use of stochastic rainfall models has become widespread in many hydrologic applications, especially when historical rainfall records lack the length or quality needed for practical purposes. Among a variety of models, rectangular pulse models such as the Neyman-Scott and Bartlett-Lewis type models are known for their parsimonious nature and relative ease in simulating long rainfall time series. The aforementioned models are often calibrated using the generalized method of moments, which fits modeled to observed moments. To ease the computational burden, the expected values of the modeled moments are usually expressed as functions of the model parameters through analytical expressions. The derivation of such analytical expressions is considered to be an important bottleneck in the development of these rectangular pulse models: any adjustment to the model structure must be accompanied by an adjustment of the analytical moments in order to be able to calibrate the adjusted model. To avoid the use of analytical moments during calibration, a simulation-based calibration is needed. The latter would enable the modeler to make and validate adjustments in a more organic manner. However, such simulation-based calibration must account for the randomness of the simulation. As such, ensemble runs must be made for every objective function evaluation, resulting in considerable computational requirements. The presented research investigates how to exploit today's available computational resources to enable simulation-based calibration. Once this type of calibration is feasible, it will open the door to implementing adjustments to the model structure (such as the introduction of dependencies between model variables by using copulas) without the need to rely on analytical expressions of the different moments.
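
    The ensemble idea described above can be made concrete in a few lines: average the simulated moments over many stochastic runs before comparing them with the observed moments. In the sketch below, simulate_rainfall is an invented stand-in for a rectangular pulse model, and the moment set and weighting are assumptions.

      import numpy as np

      def simulate_rainfall(params, seed):
          # Invented stand-in for a rectangular pulse simulation of an hourly series.
          rng = np.random.default_rng(seed)
          return rng.gamma(shape=params[0], scale=params[1], size=24 * 365)

      def moments(series):
          # First three central moments of the hourly series.
          return np.array([series.mean(), series.var(),
                           np.mean((series - series.mean()) ** 3)])

      def ensemble_objective(params, observed_moments, n_runs=50):
          # Average the simulated moments over the ensemble to damp sampling noise,
          # then measure a generalized-method-of-moments style relative distance.
          sim = np.mean([moments(simulate_rainfall(params, s)) for s in range(n_runs)], axis=0)
          return np.sum(((sim - observed_moments) / observed_moments) ** 2)

      obs_moments = moments(simulate_rainfall([2.0, 0.06], seed=999))  # synthetic "observations"
      score = ensemble_objective([2.0, 0.06], obs_moments)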

  3. Improvement of hydrological model calibration by selecting multiple parameter ranges

    NASA Astrophysics Data System (ADS)

    Wu, Qiaofeng; Liu, Shuguang; Cai, Yi; Li, Xinjian; Jiang, Yangming

    2017-01-01

    The parameters of hydrological models are usually calibrated to achieve good performance, owing to the highly non-linear nature of hydrological process modelling. However, parameter calibration efficiency has a direct relation to the parameter ranges, and parameter range selection is affected by the probability distribution of parameter values, parameter sensitivity, and correlation. A newly proposed method is employed to determine the optimal combination of multi-parameter ranges for improving the calibration of hydrological models. First, the probability distribution was specified for each parameter of the model based on genetic algorithm (GA) calibration. Then, several ranges were selected for each parameter according to the corresponding probability distribution, and the optimal range was determined by comparing the model results calibrated with the different selected ranges. Next, parameter correlation and sensitivity were evaluated by quantifying two indices, RC Y,X and SE, which can be used to coordinate the negatively correlated parameters and thereby specify the optimal combination of ranges of all parameters for calibrating models. The investigation shows that the probability distribution of calibrated values of any particular parameter in the Xinanjiang model approaches a normal or exponential distribution. The multi-parameter optimal range selection method is superior to the single-parameter one for calibrating hydrological models with multiple parameters. The combination of the optimal ranges of all parameters is not itself the optimum, inasmuch as some parameters have negative effects on other parameters. The application of the proposed methodology gives rise to an increase of 0.01 in the minimum Nash-Sutcliffe efficiency (ENS) compared with that of the pure GA method. The rise of the minimum ENS with little change in the maximum may shrink the range of possible solutions, which can effectively reduce the uncertainty of the model performance.

  4. Tradeoffs among watershed model calibration targets for parameter estimation

    EPA Science Inventory

    Hydrologic models are commonly calibrated by optimizing a single objective function target to compare simulated and observed flows, although individual targets are influenced by specific flow modes. Nash-Sutcliffe efficiency (NSE) emphasizes flood peaks in evaluating simulation f...

  5. Sediment calibration strategies of Phase 5 Chesapeake Bay watershed model

    USGS Publications Warehouse

    Wu, J.; Shenk, G.W.; Raffensperger, J.; Moyer, D.; Linker, L.C.; ,

    2005-01-01

    Sediment is a primary constituent of concern for Chesapeake Bay due to its effect on water clarity. Accurate representation of sediment processes and behavior in the Chesapeake Bay watershed model is critical for developing sound load reduction strategies. Sediment calibration remains one of the most difficult components of watershed-scale assessment. This is especially true for the Chesapeake Bay watershed model, given the size of the watershed being modeled and the complexity involved in land and stream simulation processes. To obtain the best calibration, the Chesapeake Bay Program has developed four different strategies for sediment calibration of the Phase 5 watershed model, including 1) comparing observed and simulated sediment rating curves for different parts of the hydrograph; 2) analyzing change of bed depth over time; 3) relating deposition/scour to total annual sediment loads; and 4) calculating "goodness-of-fit" statistics. These strategies allow a more accurate sediment calibration, and also provide some insightful information on sediment processes and behavior in the Chesapeake Bay watershed.

  6. Calibration of the STEMS diameter growth model using FIA data

    Treesearch

    Veronica C. Lessard

    2000-01-01

    The diameter growth model used in STEMS, the Stand and Tree Evaluation and Modeling System, was originally calibrated using data from permanent growth plots in Minnesota, Wisconsin, and Michigan. Because the model has been applied in predicting growth using Forest Inventory and Analysis (FIA) data, it was appropriate to refit the model to FIA data. The model was...

  7. Bayesian calibration of groundwater models with input data uncertainty

    NASA Astrophysics Data System (ADS)

    Xu, Tianfang; Valocchi, Albert J.; Ye, Ming; Liang, Feng; Lin, Yu-Feng

    2017-04-01

    Effective water resources management typically relies on numerical models to analyze groundwater flow and solute transport processes. Groundwater models are often subject to input data uncertainty, as some inputs (such as recharge and well pumping rates) are estimated and subject to uncertainty. Current practices of groundwater model calibration often overlook uncertainties in input data; this can lead to biased parameter estimates and compromised predictions. Through a synthetic case study of surface water-groundwater interaction under changing pumping conditions and land use, we investigate the impacts of uncertain pumping and recharge rates on model calibration and uncertainty analysis. We then present a Bayesian framework of model calibration to handle uncertain input of groundwater models. The framework implements a marginalizing step to account for input data uncertainty when evaluating the likelihood. It was found that not accounting for input uncertainty may lead to biased, overconfident parameter estimates because parameters could be over-adjusted to compensate for possible input data errors. Parameter compensation can have deleterious impacts when the calibrated model is used to make forecasts under a scenario that is different from the calibration conditions. By marginalizing input data uncertainty, the Bayesian calibration approach effectively alleviates parameter compensation and gives more accurate predictions in the synthetic case study. The marginalizing Bayesian method also decomposes prediction uncertainty into uncertainties contributed by parameters, input data, and measurements. The results underscore the need to account for input uncertainty to better inform post-modeling decision making.
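
    The marginalizing step described above can be approximated by Monte Carlo: average the likelihood over draws of the uncertain inputs instead of conditioning on a single input estimate. The sketch below is schematic; run_model and the input priors are invented placeholders.

      import numpy as np

      def run_model(params, recharge, pumping):
          # Invented stand-in for a groundwater model returning a simulated head.
          return params[0] * recharge - params[1] * pumping

      def marginal_likelihood(params, obs, sigma_obs, n_draws=200, seed=0):
          # Average the Gaussian likelihood over draws of the uncertain inputs,
          # instead of conditioning on a single (possibly erroneous) input estimate.
          rng = np.random.default_rng(seed)
          total = 0.0
          for _ in range(n_draws):
              recharge = rng.normal(1.0, 0.1)   # assumed prior on recharge
              pumping = rng.normal(0.5, 0.05)   # assumed prior on pumping rate
              resid = obs - run_model(params, recharge, pumping)
              total += np.exp(-0.5 * np.sum((resid / sigma_obs) ** 2))
          return total / n_draws

      L = marginal_likelihood(np.array([1.0, 0.8]), obs=np.array([0.6]), sigma_obs=0.1)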

  8. Analysis of Sting Balance Calibration Data Using Optimized Regression Models

    NASA Technical Reports Server (NTRS)

    Ulbrich, N.; Bader, Jon B.

    2010-01-01

    Calibration data of a wind tunnel sting balance were processed using a candidate math model search algorithm that recommends an optimized regression model for the data analysis. During the calibration the normal force and the moment at the balance moment center were selected as independent calibration variables. The sting balance itself had two moment gages. Therefore, after analyzing the connection between calibration loads and gage outputs, it was decided to choose the difference and the sum of the gage outputs as the two responses that best describe the behavior of the balance. The math model search algorithm was applied to these two responses. An optimized regression model was obtained for each response. Classical strain gage balance load transformations and the equations of the deflection of a cantilever beam under load are used to show that the search algorithm's two optimized regression models are supported by a theoretical analysis of the relationship between the applied calibration loads and the measured gage outputs. The analysis of the sting balance calibration data set is a rare example of a situation in which the terms of a balance regression model can be derived directly from first principles of physics. In addition, it is interesting to note that the search algorithm recommended the correct regression model term combinations using only a set of statistical quality metrics that were applied to the experimental data during the algorithm's term selection process.
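
    As a toy version of such a candidate math model search, one can enumerate regression models built from a set of candidate terms and keep the one with the best statistical quality metric. The loads, term set and the use of adjusted R-squared below are illustrative assumptions, not the actual algorithm's metrics.

      import itertools
      import numpy as np

      rng = np.random.default_rng(0)
      N = rng.uniform(-1, 1, 50)                        # normal-force calibration loads
      M = rng.uniform(-1, 1, 50)                        # moment calibration loads
      resp = 1.2 * N + 0.4 * M + 0.2 * N * M + rng.normal(0, 0.01, 50)  # synthetic gage response

      candidates = {"N": N, "M": M, "N*M": N * M, "N^2": N ** 2, "M^2": M ** 2}

      def adj_r2(y, yhat, p):
          # Adjusted R^2 penalizes extra regression terms (p regressors plus intercept).
          n = len(y)
          ss_res = np.sum((y - yhat) ** 2)
          ss_tot = np.sum((y - y.mean()) ** 2)
          return 1 - (ss_res / ss_tot) * (n - 1) / (n - p - 1)

      best = (-np.inf, None)
      for r in range(1, len(candidates) + 1):
          for combo in itertools.combinations(candidates, r):
              X = np.column_stack([np.ones(len(resp))] + [candidates[t] for t in combo])
              beta, *_ = np.linalg.lstsq(X, resp, rcond=None)
              score = adj_r2(resp, X @ beta, r)
              if score > best[0]:
                  best = (score, combo)                 # keep the best-scoring term set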

  9. Calibration method for the Model 13145 infrared target projectors

    NASA Astrophysics Data System (ADS)

    Huang, Jianxia; Gao, Yuan; Han, Ying

    2014-11-01

    The SBIR Model 13145 Infrared Target Projector (hereafter, the Evaluation Unit) is used for characterizing the performance of infrared imaging systems. Test items include SiTF, MTF, NETD, MRTD, MDTD and NPS. The infrared target projector comprises two area blackbodies, a 12-position target wheel and an all-reflective collimator. It provides high-spatial-frequency differential targets, which are imaged by the infrared imaging system under test and converted by photoelectric conversion into analog or digital signals. Application software (IRWindows 2001) evaluates the performance of the infrared imaging system. For calibration of the unit as a whole, the distributed components are first calibrated individually: the area blackbodies are calibrated according to the calibration specification for area blackbodies; the all-reflective collimator is calibrated by correcting its error factors; and the radiance of the infrared target projector is calibrated using an SR5000 spectral radiometer, together with an analysis of systematic errors. For the parameters of the infrared imaging system, an integrated evaluation method is required. Following GJB 2340-1995, 'General specification for military thermal imaging sets', the test parameters of the infrared imaging system are measured and the results are compared with those from the Optical Calibration Testing Laboratory, the goal being the true calibrated performance of the Evaluation Unit.

  10. Mask model calibration for MPC applications utilizing shot dose assignment

    NASA Astrophysics Data System (ADS)

    Bork, Ingo; Buck, Peter; Paninjath, Sankaranarayanan; Mishra, Kushlendra; Bürgel, Christian; Standiford, Keith; Chua, Gek Soon

    2014-10-01

    Shrinking feature sizes and the need for tighter CD (Critical Dimension) control require the introduction of new technologies in mask making processes. One of those methods is the dose assignment of individual shots on VSB (Variable Shaped Beam) mask writers to compensate for CD non-linearity effects and improve dose edge slope. By using increased dose levels only for the most critical features, generally only for the smallest CDs on a mask, the change in mask write time is minimal while the increase in image quality can be significant. However, this technology requires accurate modeling of the mask effects, especially the CD/dose dependencies. This paper describes a mask model calibration flow for Mask Process Correction (MPC) applications with shot dose assignment. The first step in the calibration flow is the selection of appropriate test structures. For this work, a combination of line/space patterns as well as a series of contact patterns is used for calibration. Feature sizes vary from 34 nm up to several micrometers in order to capture a wide range of CDs and pattern densities. After mask measurements are completed, the results are carefully analyzed; measurements very close to the process window limit, as well as outliers, are removed from the data set. One key finding of this study is that, by including patterns exposed at various dose levels, the simulated contours of the calibrated model match the SEM contours very well even if the calibration was based entirely on gauge-based CD values. In the calibration example shown in this paper, only 1D line and space measurements as well as 1D contact measurements are used for calibration. However, those measurements include patterns exposed at dose levels between 75% and 150% of the nominal dose. The best model achieved in this study uses 2 e-beam kernels and 4 kernels for the simulation of development and etch effects. The model error RMS over a large range of CDs, down to a 34 nm line CD, is 0.71 nm. The calibrated model is then

  11. Multi-objective model calibration and validation based on runoff and groundwater levels

    NASA Astrophysics Data System (ADS)

    Beldring, S.

    2003-04-01

    The multi-objective calibration procedure MOCOM-UA was used to evaluate the validity of a precipitation-runoff model by forcing the model to simulate several observed system responses simultaneously. The model is based on kinematic wave approximations to saturated subsurface flow and saturation overland flow at the hillslope scale in a landscape with a shallow layer of permeable deposits overlying relatively impermeable bedrock. Data from a catchment with till deposits in the boreal forest zone in south-east Norway were used in this study. The following results were found: (i) the MOCOM-UA method was capable of exploiting information about the physical system contained in the measurement data time series; (ii) the multi-objective calibration procedure provided estimates of the uncertainty associated with model predictions and parameters; (iii) multi-objective calibration constraining the behaviour of the precipitation-runoff model to observed runoff and groundwater levels reduced the uncertainty of model predictions; (iv) the multi-objective method reduced the uncertainty of the estimates of model parameters; (v) the precipitation-runoff model was able to reproduce several observed system responses simultaneously during both calibration and validation periods; and (vi) groundwater table depths exerted a major control on the hydrological response of the investigated catchment.

  12. Representative parameter estimation for hydrological models using a lexicographic calibration strategy

    NASA Astrophysics Data System (ADS)

    Gelleszun, Marlene; Kreye, Phillip; Meon, Günter

    2017-10-01

    We introduce a lexicographic calibration strategy developed to circumvent the imbalance between sophisticated hydrological models and the complex optimisation algorithms used to calibrate them. The criteria for the evaluation of the approach were (i) robustness and transferability of the resulting parameters, (ii) goodness-of-fit criteria in calibration and validation and (iii) time-efficiency. An order of preference was determined prior to the calibration and the parameters were separated into groups for a stepwise calibration to reduce the search space. A comparison with the global optimisation method SCE-UA showed that only 6% of the calculation time was needed; the conditions on total volume, seasonality and shape of the hydrograph were successfully met for the calibration and for the cross-validation periods. Furthermore, the parameter sets obtained by the lexicographic calibration strategy for different time periods were much more similar to each other than the parameters obtained by SCE-UA. Besides the similarities of the parameter sets, the goodness-of-fit criteria for the cross-validation were better for the lexicographic approach and the water balance components were also more similar. Thus, we concluded that the resulting parameters were more representative of the corresponding catchments and therefore more suitable for transfer. Time-efficient approximate methods were used to account for parameter uncertainty, confidence intervals and the stability of the solution in the optimum.

  13. Using Active Learning for Speeding up Calibration in Simulation Models.

    PubMed

    Cevik, Mucahit; Ergun, Mehmet Ali; Stout, Natasha K; Trentham-Dietz, Amy; Craven, Mark; Alagoz, Oguzhan

    2016-07-01

    Most cancer simulation models include unobservable parameters that determine disease onset and tumor growth. These parameters play an important role in matching key outcomes such as cancer incidence and mortality, and their values are typically estimated via a lengthy calibration procedure, which involves evaluating a large number of combinations of parameter values via simulation. The objective of this study is to demonstrate how machine learning approaches can be used to accelerate the calibration process by reducing the number of parameter combinations that are actually evaluated. Active learning is a popular machine learning method that enables a learning algorithm such as artificial neural networks to interactively choose which parameter combinations to evaluate. We developed an active learning algorithm to expedite the calibration process. Our algorithm determines the parameter combinations that are more likely to produce desired outputs and therefore reduces the number of simulation runs performed during calibration. We demonstrate our method using the previously developed University of Wisconsin breast cancer simulation model (UWBCS). In a recent study, calibration of the UWBCS required the evaluation of 378 000 input parameter combinations to build a race-specific model, and only 69 of these combinations produced results that closely matched observed data. By using the active learning algorithm in conjunction with standard calibration methods, we identify all 69 parameter combinations by evaluating only 5620 of the 378 000 combinations. Machine learning methods hold potential in guiding model developers in the selection of more promising parameter combinations and hence speeding up the calibration process. Applying our machine learning algorithm to one model shows that evaluating only 1.49% of all parameter combinations would be sufficient for the calibration.
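
    In outline, an active learning calibration of this kind trains a cheap learner to predict which parameter combinations are likely to match the targets, and spends simulation time only on the most promising ones. The sketch below uses scikit-learn's MLPClassifier as the learner; the pool size, batch sizes and run_simulation stub are invented for illustration.

      import numpy as np
      from sklearn.neural_network import MLPClassifier

      def run_simulation(x):
          # Stand-in for an expensive model run; True if outputs match the targets.
          return np.linalg.norm(x - 0.7) < 0.25

      rng = np.random.default_rng(1)
      pool = rng.uniform(0, 1, size=(50000, 3))            # candidate parameter sets
      evaluated = np.zeros(len(pool), dtype=bool)
      idx = rng.choice(len(pool), size=200, replace=False) # small random seed set
      evaluated[idx] = True
      X, y = pool[idx], np.array([run_simulation(x) for x in pool[idx]])

      learner = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500)
      for _ in range(10):                                  # active-learning rounds
          learner.fit(X, y)
          scores = learner.predict_proba(pool)[:, 1]       # P(combination hits targets)
          scores[evaluated] = -1.0                         # never re-query evaluated points
          query = np.argsort(-scores)[:100]                # most promising candidates
          evaluated[query] = True
          y_new = np.array([run_simulation(x) for x in pool[query]])
          X, y = np.vstack([X, pool[query]]), np.concatenate([y, y_new])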

  14. Sensor placement for calibration of spatially varying model parameters

    NASA Astrophysics Data System (ADS)

    Nath, Paromita; Hu, Zhen; Mahadevan, Sankaran

    2017-08-01

    This paper presents a sensor placement optimization framework for the calibration of spatially varying model parameters. To account for the randomness of the calibration parameters over space and across specimens, the spatially varying parameter is represented as a random field. Based on this representation, Bayesian calibration of spatially varying parameter is investigated. To reduce the required computational effort during Bayesian calibration, the original computer simulation model is substituted with Kriging surrogate models based on the singular value decomposition (SVD) of the model response and the Karhunen-Loeve expansion (KLE) of the spatially varying parameters. A sensor placement optimization problem is then formulated based on the Bayesian calibration to maximize the expected information gain measured by the expected Kullback-Leibler (K-L) divergence. The optimization problem needs to evaluate the expected K-L divergence repeatedly which requires repeated calibration of the spatially varying parameter, and this significantly increases the computational effort of solving the optimization problem. To overcome this challenge, an approximation for the posterior distribution is employed within the optimization problem to facilitate the identification of the optimal sensor locations using the simulated annealing algorithm. A heat transfer problem with spatially varying thermal conductivity is used to demonstrate the effectiveness of the proposed method.
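
    The Karhunen-Loeve expansion mentioned above reduces a spatially varying random field to a handful of coefficients that can then be calibrated. A minimal sketch for a 1-D field with an assumed squared-exponential covariance follows.

      import numpy as np

      x = np.linspace(0, 1, 100)                        # spatial grid
      # Assumed squared-exponential covariance of the spatially varying parameter field.
      C = np.exp(-0.5 * ((x[:, None] - x[None, :]) / 0.2) ** 2)
      vals, vecs = np.linalg.eigh(C)
      order = np.argsort(vals)[::-1]                    # sort eigenpairs, largest first
      vals, vecs = vals[order], vecs[:, order]

      # Truncated KL expansion: field ~ mean + sum_k sqrt(lambda_k) * xi_k * phi_k(x).
      k = 5
      xi = np.random.default_rng(3).normal(size=k)      # the coefficients to be calibrated
      field = 1.0 + vecs[:, :k] @ (np.sqrt(vals[:k]) * xi)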

  15. Simultaneous calibration of hydrological models in geographical space

    NASA Astrophysics Data System (ADS)

    Bárdossy, András; Huang, Yingchun; Wagener, Thorsten

    2016-07-01

    Hydrological models are usually calibrated for selected catchments individually using specific performance criteria. This procedure assumes that the catchments show individual behavior. As a consequence, the transfer of model parameters to other, ungauged catchments is problematic. In this paper, the possibility of transferring part of the model parameters was investigated. Three different conceptual hydrological models were considered. The models were restructured by introducing a new parameter η which exclusively controls water balances. This parameter was considered as individual to each catchment. All other parameters, which mainly control the dynamics of the discharge (dynamical parameters), were considered for spatial transfer. Three hydrological models combined with three different performance measures were used in three different numerical experiments to investigate this transferability. The first numerical experiment, involving individual calibration of the models for 15 selected MOPEX catchments, showed that it is difficult to identify which catchments share common dynamical parameters: parameters of one catchment might be good for another catchment, but not vice versa. In the second numerical experiment, a common spatial calibration strategy was used. It was explicitly assumed that the catchments share common dynamical parameters. This strategy leads to parameters which perform well on all catchments. A leave-one-out common calibration showed that in this case a good parameter transfer to ungauged catchments can be achieved. In the third numerical experiment, the common calibration methodology was applied to 96 catchments. Another set of 96 catchments was used to test the transfer of common dynamical parameters. The results show that even a large number of catchments share similar dynamical parameters. The performance is worse than that obtained by individual calibration, but the transfer to ungauged catchments remains possible. The performance of the

  16. Modelling the arsenic (V) and (III) adsorption

    NASA Astrophysics Data System (ADS)

    Rau, I.; Meghea, A.; Peleanu, I.; Gonzalo, A.; Valiente, M.; Zaharescu, M.

    2003-01-01

    Arsenic has gained great notoriety historically for its toxic properties. In the aquatic environment, arsenic can exist in several oxidation states, as both inorganic and organometallic species; As(V) is less toxic than As(III). Most research has been directed to the control of arsenic pollution of potable water. Various techniques such as precipitation with iron and aluminium hydroxides, ion exchange, reverse osmosis and adsorption are used for As(V) removal from surface and waste waters. Because of its easy handling, sludge-free operation and regeneration capability, the adsorption technique has secured a place as one of the advanced methods of arsenic removal. A study of As(III) and As(V) sorption onto several different adsorbents (Fe(III)-iminodiacetate resin, nanocomposite materials, Fe(III)-loaded forager sponge), covering kinetic considerations and modelling of the process, is presented. All the systems studied are best described by the Freundlich-Langmuir isotherm, and rate-constant evaluation shows a sub-unitary order for the adsorption process.

  17. A calibration model for screen-caged Peltier thermocouple psychrometers

    NASA Astrophysics Data System (ADS)

    Brown, R. W.; Bartos, D. L.

    1982-07-01

    A calibration model for screen-caged Peltier thermocouple psychrometers was developed that applies to a water potential range of 0 to 80 bars, over a temperature range of 0 to 40 C, and for cooling times of 15 to 60 seconds. In addition, the model corrects for the effects of temperature gradients over zero-offsets from -60 to +60 microvolts. Complete details of model development are discussed, together with the theory of thermocouple psychrometers, and techniques of calibration and cleaning. Also, information for computer programing and tabular summaries of model characteristics are provided.

  18. Calibrating the ECCO ocean general circulation model using Green's functions

    NASA Technical Reports Server (NTRS)

    Menemenlis, D.; Fu, L. L.; Lee, T.; Fukumori, I.

    2002-01-01

    Green's functions provide a simple, yet effective, method to test and calibrate General Circulation Model (GCM) parameterizations, to study and quantify model and data errors, to correct model biases and trends, and to blend estimates from different solutions and data products.
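
    In essence, the Green's function approach runs the model once per perturbed parameter, treats the response differences as sensitivity kernels, and solves a small least-squares problem for the parameter corrections. The sketch below is generic (the gcm stub and numbers are invented), not the ECCO implementation.

      import numpy as np

      def gcm(eta):
          # Invented stand-in for a GCM run returning sampled model equivalents of the data.
          return np.array([2.0, 1.0, 0.5]) + np.array([[1.0, 0.2],
                                                       [0.3, 1.5],
                                                       [0.1, 0.4]]) @ eta

      data = np.array([2.3, 1.4, 0.6])        # observations to be matched
      eta0 = np.zeros(2)                      # baseline parameter values
      base = gcm(eta0)

      # One model run per perturbed parameter gives the Green's functions (columns of G).
      eps = 0.1
      G = np.column_stack([(gcm(eta0 + eps * e) - base) / eps for e in np.eye(2)])

      # Least-squares estimate of the parameter corrections that best fit the data.
      eta_hat, *_ = np.linalg.lstsq(G, data - base, rcond=None)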

  1. Calibrating RZWQM2 model for maize responses to deficit irrigation

    USDA-ARS?s Scientific Manuscript database

    Calibrating a system model for field research is a challenge and requires collaboration between modelers and experimentalists. In this study, the Root Zone Water Quality Model-DSSAT (RZWQM2) was used for simulating plant water stresses in corn in Eastern Colorado. The experiments were conducted in 2...

  2. Load Modeling and Calibration Techniques for Power System Studies

    SciTech Connect

    Chassin, Forrest S.; Mayhorn, Ebony T.; Elizondo, Marcelo A.; Lu, Shuai

    2011-09-23

    Load modeling is the most uncertain area in power system simulations. Having an accurate load model is important for power system planning and operation. Here, a review of load modeling and calibration techniques is given. This paper is not comprehensive, but covers some of the techniques most commonly found in the literature. The advantages and disadvantages of each technique are outlined.

  3. Real-data Calibration Experiments On A Distributed Hydrologic Model

    NASA Astrophysics Data System (ADS)

    Brath, A.; Montanari, A.; Toth, E.

    The increasing availability of extended information on study watersheds does not generally eliminate the need to determine, through calibration, at least part of the parameters of distributed hydrologic models. The complexity of such models, which makes the computations highly intensive, has often prevented an extensive analysis of calibration issues. The purpose of this study is an evaluation of the validation results of a series of automatic calibration experiments (using the shuffled complex evolution method, Duan et al., 1992) performed with a highly conceptualised, continuously simulating, distributed hydrologic model applied to real data from a mid-sized Italian watershed. Major flood events that occurred in the 1990-2000 decade are simulated with the parameters obtained by calibrating the model against discharge data observed at the closure section of the watershed, and the hydrological features (overall agreement, volumes, peaks and times to peak) of the discharges obtained both at the closure section and at an interior stream gauge are analysed for validation purposes. A first set of calibrations investigates the effect of the variability of the calibration periods, using data from several single flood events and from longer, continuous periods. Another analysis regards the influence of the rainfall input; it is carried out by varying the size and distribution of the raingauge network, in order to examine the relation between the spatial pattern of observed rainfall and the variability of modelled runoff. Lastly, a comparison is presented of the hydrographs obtained for the flood events with the model parameterisations that result when the objective function minimised in the automatic calibration procedure is modified.

  4. Sensitivity analysis and calibration of a dynamic physically based slope stability model

    NASA Astrophysics Data System (ADS)

    Zieher, Thomas; Rutzinger, Martin; Schneider-Muntau, Barbara; Perzl, Frank; Leidinger, David; Formayer, Herbert; Geitner, Clemens

    2017-06-01

    Physically based modelling of slope stability on a catchment scale is still a challenging task. When applying a physically based model on such a scale (1 : 10 000 to 1 : 50 000), parameters with a high impact on the model result should be calibrated to account for (i) the spatial variability of parameter values, (ii) shortcomings of the selected model, (iii) uncertainties of laboratory tests and field measurements or (iv) parameters that cannot be derived experimentally or measured in the field (e.g. calibration constants). While systematic parameter calibration is a common task in hydrological modelling, this is rarely done using physically based slope stability models. In the present study a dynamic, physically based, coupled hydrological-geomechanical slope stability model is calibrated based on a limited number of laboratory tests and a detailed multitemporal shallow landslide inventory covering two landslide-triggering rainfall events in the Laternser valley, Vorarlberg (Austria). Sensitive parameters are identified based on a local one-at-a-time sensitivity analysis. These parameters (hydraulic conductivity, specific storage, angle of internal friction for effective stress, cohesion for effective stress) are systematically sampled and calibrated for a landslide-triggering rainfall event in August 2005. The identified model ensemble, including 25 behavioural model runs with the highest portion of correctly predicted landslides and non-landslides, is then validated with another landslide-triggering rainfall event in May 1999. The identified model ensemble correctly predicts the location and the supposed triggering timing of 73.0 % of the observed landslides triggered in August 2005 and 91.5 % of the observed landslides triggered in May 1999. Results of the model ensemble driven with raised precipitation input reveal a slight increase in areas potentially affected by slope failure. At the same time, the peak run-off increases more markedly, suggesting

  5. An Example Multi-Model Analysis: Calibration and Ranking

    NASA Astrophysics Data System (ADS)

    Ahlmann, M.; James, S. C.; Lowry, T. S.

    2007-12-01

    Modeling solute transport is a complex process governed by multiple site-specific parameters like porosity and hydraulic conductivity as well as many solute-dependent processes such as diffusion and reaction. Furthermore, it must be determined whether a steady or time-variant model is most appropriate. A problem arises because over-parameterized conceptual models may be easily calibrated to exactly reproduce measured data, even if these data contain measurement noise. During preliminary site investigation stages, where available data may be scarce, it is often advisable to develop multiple independent conceptual models, but the question immediately arises: which model is best? This work outlines a method for quickly calibrating and ranking multiple models using the parameter estimation code PEST in conjunction with the second-order-bias-corrected Akaike Information Criterion (AICc). The method is demonstrated using the twelve analytical solutions to the one-dimensional convective-dispersive-reactive solute transport equation as the multiple conceptual models (van Genuchten, M. Th. and W. J. Alves, 1982. Analytical solutions of the one-dimensional convective-dispersive solute transport equation, USDA ARS Technical Bulletin Number 1661, U.S. Salinity Laboratory, Riverside, CA). Each solution is calibrated to three data sets, each comprising an increasing number of calibration points that represent increased knowledge of the modeled site (calibration points are selected from one of the analytical solutions, which provides the "correct" model). The AICc is calculated after each successive calibration to the three data sets, yielding model weights that are functions of the sum of the squared, weighted residuals, the number of parameters, and the number of observations (calibration data points), and ultimately indicates which model has the highest likelihood of being correct. The results illustrate how the sparser data sets can be modeled
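
    For concreteness, the AICc used for the ranking can be computed from each calibrated model's residual sum of squares, and the resulting scores converted into model weights. The numbers below are hypothetical.

      import numpy as np

      def aicc(ssr, n_obs, n_params):
          # AICc for least-squares calibration: AIC plus a small-sample correction.
          k = n_params + 1                               # +1 for the error variance
          aic = n_obs * np.log(ssr / n_obs) + 2 * k
          return aic + 2 * k * (k + 1) / (n_obs - k - 1)

      # Akaike weights: relative likelihood of each candidate model being best.
      scores = np.array([aicc(s, 30, p) for s, p in [(4.2, 3), (3.9, 5), (4.0, 4)]])
      delta = scores - scores.min()
      weights = np.exp(-0.5 * delta) / np.sum(np.exp(-0.5 * delta))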

  6. Stochastic calibration and learning in nonstationary hydroeconomic models

    NASA Astrophysics Data System (ADS)

    Maneta, M. P.; Howitt, R.

    2014-05-01

    Concern about water scarcity and adverse climate events over agricultural regions has motivated a number of efforts to develop operational integrated hydroeconomic models to guide adaptation and optimal use of water. Once calibrated, these models are used for water management and analysis under the assumption that they remain valid under future conditions. In this paper, we present and demonstrate a methodology that permits the recursive calibration of economic models of agricultural production from noisy but frequently available data. We use a standard economic calibration approach, namely positive mathematical programming (PMP), integrated in a data assimilation algorithm based on the ensemble Kalman filter (EnKF) equations to identify the economic model parameters. A moving average kernel ensures that new and past information on agricultural activity are blended during the calibration process, avoiding loss of information and overcalibration to the conditions of a single year. A regularization constraint akin to standard Tikhonov regularization is included in the filter to ensure its stability even in the presence of parameters with low sensitivity to observations. The results show that the implementation of the PMP methodology within a data assimilation framework based on the EnKF equations is an effective method to calibrate models of agricultural production even with noisy information. The recursive nature of the method incorporates new information as an added value to the known previous observations of agricultural activity without the need to store historical information. The robustness of the method opens the door to the use of new remote sensing algorithms for operational water management.
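
    A bare-bones ensemble Kalman filter parameter update of the sort described, without the PMP economics, the moving-average kernel or the regularization constraint, can be written in a few lines of NumPy; everything below is a schematic assumption.

      import numpy as np

      def enkf_update(params, predictions, obs, obs_cov, rng):
          # params: (n_ens, n_par) ensemble; predictions: (n_ens, n_obs) model outputs.
          P, Y = params - params.mean(0), predictions - predictions.mean(0)
          n = len(params)
          C_py = P.T @ Y / (n - 1)                       # parameter-output covariance
          C_yy = Y.T @ Y / (n - 1) + obs_cov             # output covariance + obs error
          K = C_py @ np.linalg.inv(C_yy)                 # Kalman gain
          # Perturb observations so the updated ensemble keeps the correct spread.
          obs_pert = obs + rng.multivariate_normal(np.zeros(len(obs)), obs_cov, n)
          return params + (obs_pert - predictions) @ K.T

      rng = np.random.default_rng(0)
      ens = rng.normal(1.0, 0.3, size=(100, 2))          # prior parameter ensemble
      preds = np.column_stack([ens[:, 0] + ens[:, 1], ens[:, 0] * 2])
      post = enkf_update(ens, preds, np.array([2.1, 2.4]), 0.05 * np.eye(2), rng)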

  7. A suggestion for computing objective function in model calibration

    USGS Publications Warehouse

    Wu, Yiping; Liu, Shuguang

    2014-01-01

    A parameter-optimization process (model calibration) is usually required for numerical model applications, and it involves the use of an objective function to determine the model cost (model-data errors). The sum of squared errors (SSR) has been widely adopted as the objective function in various optimization procedures. However, ‘square error’ calculation was found to be more sensitive to extreme or high values. Thus, we proposed that the sum of absolute errors (SAR) may be a better option than SSR for model calibration. To test this hypothesis, we used two case studies (a hydrological model calibration and a biogeochemical model calibration) to investigate the behavior of a group of potential objective functions: SSR, SAR, sum of squared relative deviation (SSRD), and sum of absolute relative deviation (SARD). Mathematical evaluation of model performance demonstrates that ‘absolute error’ (SAR and SARD) is superior to ‘square error’ (SSR and SSRD) in calculating the objective function for model calibration, and SAR behaved best (with the least error and highest efficiency). This study suggests that SSR might be overused in real applications, and that SAR may be a reasonable choice in common optimization implementations that do not emphasize either high or low values (e.g., modeling to support resources management).
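
    The four candidate objective functions compared above are one-liners, which the following sketch makes explicit.

      import numpy as np

      def objectives(sim, obs):
          sim, obs = np.asarray(sim, float), np.asarray(obs, float)
          e = sim - obs
          return {
              "SSR":  np.sum(e ** 2),            # squares emphasize large errors
              "SAR":  np.sum(np.abs(e)),         # absolute errors weight all residuals evenly
              "SSRD": np.sum((e / obs) ** 2),    # squared relative deviation
              "SARD": np.sum(np.abs(e / obs)),   # absolute relative deviation
          }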

  8. Calibration of Disease Simulation Model Using an Engineering Approach

    PubMed Central

    Kong, Chung Yin; McMahon, Pamela M.; Gazelle, G. Scott

    2009-01-01

    Objectives Calibrating a disease simulation model’s outputs to existing clinical data is vital to generate confidence in the model’s predictive ability. Calibration involves two challenges: 1) defining a total goodness-of-fit score for multiple targets if simultaneous fitting is required; and 2) searching for the optimal parameter set that minimizes the total goodness-of-fit score (i.e., yields the best fit). To address these two prominent challenges, we have applied an engineering approach to calibrate a microsimulation model, the Lung Cancer Policy Model (LCPM). Methods First, eleven targets derived from clinical and epidemiological data were combined into a total goodness-of-fit score by a weighted-sum approach, accounting for the user-defined relative importance of the calibration targets. Second, two automated parameter search algorithms, Simulated Annealing (SA) and Genetic Algorithm (GA), were independently applied to a simultaneous search of 28 natural history parameters to minimize the total goodness-of-fit score. Algorithm performance metrics were defined for speed and model fit. Results Both search algorithms obtained total goodness-of-fit scores below 95 within 1,000 search iterations. Our results show that SA outperformed GA in locating a lower goodness-of-fit. After calibrating our LCPM, the predicted natural history of lung cancer was consistent with other mathematical models of lung cancer development. Conclusion An engineering-based calibration method was able to simultaneously fit LCPM output to multiple calibration targets, with the benefits of fast computational speed and reduced need for human input and its potential bias. PMID:19900254

  9. Tradeoffs among watershed model calibration targets for parameter estimation

    NASA Astrophysics Data System (ADS)

    Price, Katie; Purucker, S. Thomas; Kraemer, Stephen R.; Babendreier, Justin E.

    2012-10-01

    Hydrologic models are commonly calibrated by optimizing a single objective function target to compare simulated and observed flows, although individual targets are influenced by specific flow modes. Nash-Sutcliffe efficiency (NSE) emphasizes flood peaks in evaluating simulation fit, while modified Nash-Sutcliffe efficiency (MNS) emphasizes lower flows, and the ratio of the simulated to observed standard deviations (RSD) prioritizes flow variability. We investigated tradeoffs of calibrating streamflow on three standard objective functions (NSE, MNS, and RSD), as well as a multiobjective function aggregating these three targets to simultaneously address a range of flow conditions, for calibration of the Soil and Water Assessment Tool (SWAT) daily streamflow simulations in two watersheds. A suite of objective functions was explored to select a minimally redundant set of metrics addressing a range of flow characteristics. After each pass of 2001 simulations, an iterative informal likelihood procedure was used to subset parameter ranges. The ranges from each best-fit simulation set were used for model validation. Values for optimized parameters vary among calibrations using different objective functions, which underscores the importance of linking modeling objectives to calibration target selection. The simulation set approach yielded validated models of similar quality as seen with a single best-fit parameter set, with the added benefit of uncertainty estimations. Our approach represents a novel compromise between equifinality-based approaches and Pareto optimization. Combining the simulation set approach with the multiobjective function was demonstrated to be a practicable and flexible approach for model calibration, which can be readily modified to suit modeling goals, and is not model or location specific.
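
    The three calibration targets named above can be written compactly; the simple aggregation at the end is an assumption for illustration, not the paper's exact multiobjective function, and the MNS variant shown here uses log-transformed flows, one common way to emphasize low flows.

      import numpy as np

      def nse(sim, obs):
          # Nash-Sutcliffe efficiency; squared errors emphasize flood peaks.
          return 1 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

      def mns(sim, obs):
          # Modified NSE on log flows, emphasizing the low-flow part of the record.
          ls, lo = np.log(sim), np.log(obs)
          return 1 - np.sum((lo - ls) ** 2) / np.sum((lo - lo.mean()) ** 2)

      def rsd(sim, obs):
          # Ratio of simulated to observed standard deviations (flow variability).
          return sim.std() / obs.std()

      def multiobjective(sim, obs):
          # Illustrative aggregation: distance of each metric from its ideal value of 1.
          return abs(1 - nse(sim, obs)) + abs(1 - mns(sim, obs)) + abs(1 - rsd(sim, obs))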

  10. SEM image contouring for OPC model calibration and verification

    NASA Astrophysics Data System (ADS)

    Tabery, Cyrus; Morokuma, Hidetoshi; Matsuoka, Ryoichi; Page, Lorena; Bailey, George E.; Kusnadi, Ir; Do, Thuy

    2007-03-01

    Lithography models for leading-edge OPC and design verification must be calibrated with empirical data, and these data are traditionally collected as a one-dimensional quantification of the features acquired by a CD-SEM. Two-dimensional proximity features such as line-end, bar-to-bar, or bar-to-line are only partially characterized because of the difficulty of transferring the complete information of a SEM image into the OPC model building process. A new method of two-dimensional measurement uses the contouring of large numbers of SEM images, acquired within the context of a design-based metrology system, to drive improvement in the quality of the final calibrated model. Hitachi High-Technologies has continued to develop a fully automated EPE measurement and contouring function based on the design layout and the detected edges of the SEM image. This function can measure edge placement error everywhere in a SEM image and pass the result as a design layout (GDSII) into the Mentor Graphics model calibration flow. Classification of the critical design elements using tagging scripts is used to weight the critical contours in the evaluation of model fitness. During placement of the detected SEM edges into the coordinate system of the design, coordinate errors are inevitably introduced because of pattern matching errors. Also, line edge roughness in 2D features introduces noise that is large compared to the model building accuracy requirements of advanced technology nodes. This required the development of contour averaging algorithms: contours from multiple SEM images of a feature are acquired and averaged before being passed into the model calibration. This function has been incorporated into the prototype Calibre Workbench model calibration flow. Based on these methods, experimental data are presented detailing the model accuracy of a 45nm immersion lithography process using traditional 1D calibration only, and a hybrid model calibration using SEM image contours and 1D measurement

  11. Cloud-Based Model Calibration Using OpenStudio: Preprint

    SciTech Connect

    Hale, E.; Lisell, L.; Goldwasser, D.; Macumber, D.; Dean, J.; Metzger, I.; Parker, A.; Long, N.; Ball, B.; Schott, M.; Weaver, E.; Brackney, L.

    2014-03-01

    OpenStudio is a free, open source Software Development Kit (SDK) and application suite for performing building energy modeling and analysis. The OpenStudio Parametric Analysis Tool has been extended to allow cloud-based simulation of multiple OpenStudio models parametrically related to a baseline model. This paper describes the new cloud-based simulation functionality and presents a model calibration case study. Calibration is initiated by entering actual monthly utility bill data into the baseline model. Multiple parameters are then varied over multiple iterations to reduce the difference between actual energy consumption and model simulation results, as calculated and visualized by billing period and by fuel type. Simulations are performed in parallel using the Amazon Elastic Cloud service. This paper highlights model parameterizations (measures) used for calibration, but the same multi-nodal computing architecture is available for other purposes, for example, recommending combinations of retrofit energy saving measures using the calibrated model as the new baseline.

  12. Applying Hierarchical Model Calibration to Automatically Generated Items.

    ERIC Educational Resources Information Center

    Williamson, David M.; Johnson, Matthew S.; Sinharay, Sandip; Bejar, Isaac I.

    This study explored the application of hierarchical model calibration as a means of reducing, if not eliminating, the need for pretesting of automatically generated items from a common item model prior to operational use. Ultimately the successful development of automatic item generation (AIG) systems capable of producing items with highly similar…

  13. Multi-Dimensional Calibration of Impact Dynamic Models

    NASA Technical Reports Server (NTRS)

    Horta, Lucas G.; Reaves, Mercedes C.; Annett, Martin S.; Jackson, Karen E.

    2011-01-01

    NASA Langley, under the Subsonic Rotary Wing Program, recently completed two helicopter tests in support of an in-house effort to study crashworthiness. As part of this effort, work is on-going to investigate model calibration approaches and calibration metrics for impact dynamics models. Model calibration of impact dynamics problems has traditionally assessed model adequacy by comparing time histories from analytical predictions to test at only a few critical locations. Although this approach provides for a direct measure of the model predictive capability, overall system behavior is only qualitatively assessed using full vehicle animations. In order to understand the spatial and temporal relationships of impact loads as they migrate throughout the structure, a more quantitative approach is needed. In this work impact shapes derived from simulated time history data are used to recommend sensor placement and to assess model adequacy using time based metrics and orthogonality multi-dimensional metrics. An approach for model calibration is presented that includes metric definitions, uncertainty bounds, parameter sensitivity, and numerical optimization to estimate parameters to reconcile test with analysis. The process is illustrated using simulated experiment data.

  14. Calibration of a large-scale hydrological model using satellite-based soil moisture and evapotranspiration products

    NASA Astrophysics Data System (ADS)

    López López, Patricia; Sutanudjaja, Edwin H.; Schellekens, Jaap; Sterk, Geert; Bierkens, Marc F. P.

    2017-06-01

    A considerable number of river basins around the world lack sufficient ground observations of hydro-meteorological data for effective water resources assessment and management. Several approaches can be developed to increase the quality and availability of data in these poorly gauged or ungauged river basins; among them, the use of Earth observation products has recently become promising. Earth observations of various environmental variables can potentially be used to increase knowledge about the hydrological processes in the basin and to improve streamflow model estimates, via assimilation or calibration. The present study aims to calibrate the large-scale hydrological model PCRaster GLOBal Water Balance (PCR-GLOBWB) using satellite-based products of evapotranspiration and soil moisture for the Moroccan Oum er Rbia River basin. Daily simulations at a spatial resolution of 5 × 5 arcmin are performed with varying parameter values for the 32-year period 1979-2010. Five different calibration scenarios are inter-compared: (i) a reference scenario using the hydrological model with the standard parameterization, (ii) calibration using in situ observed discharge time series, (iii) calibration using the Global Land Evaporation Amsterdam Model (GLEAM) actual evapotranspiration time series, (iv) calibration using ESA Climate Change Initiative (CCI) surface soil moisture time series and (v) step-wise calibration using GLEAM actual evapotranspiration and ESA CCI surface soil moisture time series. The impact of precipitation on discharge estimates, in comparison with model parameter calibration, is investigated using three global precipitation products, including ERA-Interim (EI), the WATCH Forcing methodology applied to ERA-Interim reanalysis data (WFDEI) and Multi-Source Weighted-Ensemble Precipitation data merging gauge, satellite and reanalysis data (MSWEP). Results show that GLEAM evapotranspiration and ESA CCI soil moisture may be used for model calibration resulting in

  15. Challenges of OPC model calibration from SEM contours

    NASA Astrophysics Data System (ADS)

    Granik, Yuri; Kusnadi, Ir

    2008-03-01

    Traditionally, OPC models are calibrated to match CD measurements from selected test pattern locations. This demand for massive CD data drives advances in metrology. Considerable progress has recently been achieved in complementing this CD data with SEM contours. Here we propose solutions to some challenges that emerge in calibrating OPC models from the experimental contours. We discuss and state the minimization objective as a measure of the distance between simulated and experimental contours. The main challenge is to correctly process the inevitable gaps, discontinuities and roughness of the SEM contours. We also discuss standardizing the data interchange formats and procedures between OPC and metrology vendors.

  16. Stepwise calibration procedure for regional coupled hydrological-hydrogeological models

    NASA Astrophysics Data System (ADS)

    Labarthe, Baptiste; Abasq, Lena; de Fouquet, Chantal; Flipo, Nicolas

    2014-05-01

    Stream-aquifer interaction is a complex process that depends on regional and local processes. Indeed, the groundwater component of the hydrosystem and large-scale heterogeneities control the regional flows towards the alluvial plains and the rivers. Secondly, the local distribution of streambed permeabilities controls the dynamics of stream-aquifer water fluxes within the alluvial plain, and therefore the near-river piezometric head distribution. In order to better understand water circulation and pollutant transport in watersheds, these multi-dimensional processes have to be integrated into a modelling platform. Thus, the nested-interfaces concept in continental hydrosystem modelling (where regional fluxes, simulated by large-scale models, are imposed at local stream-aquifer interfaces) was presented in Flipo et al. (2014). This concept has been implemented in the EauDyssée modelling platform for a large alluvial plain model (900 km2), part of an 11000 km2 multi-layer aquifer system located in the Seine basin (France). The hydrosystem modelling platform is composed of four spatially distributed modules (Surface, Sub-surface, River and Groundwater), corresponding to four components of the terrestrial water cycle. Considering the large number of parameters to be inferred simultaneously, the calibration of coupled models is highly computationally demanding and therefore hardly applicable to a real case study of 10000 km2. In order to improve the efficiency of the calibration process, a stepwise calibration procedure is proposed. The stepwise methodology involves determining optimal parameters for all components of the coupled model, to provide near-optimal prior information for the global calibration. It starts with the calibration of the surface component parameters, which are optimised based on the comparison between simulated and observed discharges (or filtered discharges) at various locations. Once the surface parameters

  17. MT3DMS: Model use, calibration, and validation

    USGS Publications Warehouse

    Zheng, C.; Hill, Mary C.; Cao, G.; Ma, R.

    2012-01-01

    MT3DMS is a three-dimensional multi-species solute transport model for solving advection, dispersion, and chemical reactions of contaminants in saturated groundwater flow systems. MT3DMS interfaces directly with the U.S. Geological Survey finite-difference groundwater flow model MODFLOW for the flow solution and supports the hydrologic and discretization features of MODFLOW. MT3DMS contains multiple transport solution techniques in one code, which can often be important, including in model calibration. Since its first release in 1990 as MT3D for single-species mass transport modeling, MT3DMS has been widely used in research projects and practical field applications. This article provides a brief introduction to MT3DMS and presents recommendations about calibration and validation procedures for field applications of MT3DMS. The examples presented suggest the need to consider alternative processes as models are calibrated and suggest opportunities and difficulties associated with using groundwater age in transport model calibration.

  19. Technical note: Bayesian calibration of dynamic ruminant nutrition models.

    PubMed

    Reed, K F; Arhonditsis, G B; France, J; Kebreab, E

    2016-08-01

    Mechanistic models of ruminant digestion and metabolism have advanced our understanding of the processes underlying ruminant animal physiology. Deterministic modeling practices ignore the inherent variation within and among individual animals and thus have no way to assess how sources of error influence model outputs. We introduce Bayesian calibration of mathematical models to address the need for robust mechanistic modeling tools that can accommodate error analysis by remaining within the bounds of data-based parameter estimation. For the purpose of prediction, the Bayesian approach generates a posterior predictive distribution that represents the current estimate of the value of the response variable, taking into account both the uncertainty about the parameters and model residual variability. Predictions are expressed as probability distributions, thereby conveying significantly more information than point estimates in regard to uncertainty. Our study illustrates some of the technical advantages of Bayesian calibration and discusses the future perspectives in the context of animal nutrition modeling.
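
    As a minimal illustration of the posterior predictive idea described above (not the authors' model), the following sketch propagates hypothetical posterior parameter draws through a linear response with residual noise; all names and values are placeholders.

```python
# Sketch: turning posterior parameter samples into a posterior predictive
# distribution. `posterior` would come from an MCMC fit; here it is a
# placeholder Gaussian cloud of (slope, residual sigma) draws.
import numpy as np

rng = np.random.default_rng(1)
posterior = rng.normal([2.0, 0.5], [0.2, 0.05], size=(4000, 2))
x_new = 1.5                                        # new covariate value

# For each parameter draw, simulate the response including residual noise,
# so the interval reflects both parameter and residual uncertainty.
draws = posterior[:, 0] * x_new + rng.normal(0.0, posterior[:, 1])
lo, hi = np.percentile(draws, [2.5, 97.5])
print(f"posterior predictive 95% interval: [{lo:.2f}, {hi:.2f}]")
```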

  20. Benchmarking the Sandia Pulsed Reactor III cavity neutron spectrum for electronic parts calibration and testing

    SciTech Connect

    Kelly, J.G.; Griffin, P.J.; Fan, W.C.

    1993-08-01

    The SPR III bare cavity spectrum and integral parameters have been determined with 24 measured spectrum sensor responses and an independent, detailed, MCNP transport calculation. This environment qualifies as a benchmark field for electronic parts testing.

  1. WEPP: Model use, calibration, and validation

    USDA-ARS?s Scientific Manuscript database

    The Water Erosion Prediction Project (WEPP) model is a process-based, continuous simulation, distributed parameter, hydrologic and soil erosion prediction system. It has been developed over the past 25 years to allow for easy application to a large number of land management scenarios. Most general o...

  3. Bayesian Calibration of the Community Land Model using Surrogates

    SciTech Connect

    Ray, Jaideep; Hou, Zhangshuan; Huang, Maoyi; Sargsyan, K.; Swiler, Laura P.

    2015-01-01

    We present results from the Bayesian calibration of hydrological parameters of the Community Land Model (CLM), which is often used in climate simulations and Earth system models. A statistical inverse problem is formulated for three hydrological parameters, conditioned on observations of latent heat surface fluxes over 48 months. Our calibration method uses polynomial and Gaussian process surrogates of the CLM, and solves the parameter estimation problem using a Markov chain Monte Carlo sampler. Posterior probability densities for the parameters are developed for two sites with different soil and vegetation covers. Our method also allows us to examine the structural error in CLM under two error models. We find that accurate surrogate models can be created for CLM in most cases. The posterior distributions lead to better prediction than the default parameter values in CLM. Climatologically averaging the observations does not modify the parameters’ distributions significantly. The structural error model reveals a correlation time-scale which can potentially be used to identify physical processes that could be contributing to it. While the calibrated CLM has a higher predictive skill, the calibration is under-dispersive.
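
    A minimal sketch of the surrogate-plus-MCMC workflow described above, with a toy one-parameter model standing in for CLM; the kernel choice, prior bounds, and observation values are all assumptions for illustration.

```python
# Sketch of surrogate-based calibration: a Gaussian-process surrogate is
# trained on a handful of model runs and then sampled with a random-walk
# Metropolis chain. `run_clm` is a toy stand-in, not the real land model.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(2)

def run_clm(theta):                       # toy "expensive" model
    return np.sin(3.0 * theta) + 0.5 * theta

theta_design = np.linspace(0.0, 2.0, 12)[:, None]       # design runs
y_design = run_clm(theta_design).ravel()
surrogate = GaussianProcessRegressor(kernel=RBF(0.5)).fit(theta_design, y_design)

y_obs, sigma = 1.2, 0.1                   # observation and its error

def log_post(theta):
    if not 0.0 <= theta <= 2.0:           # uniform prior bounds (assumed)
        return -np.inf
    pred = surrogate.predict(np.array([[theta]]))[0]
    return -0.5 * ((pred - y_obs) / sigma) ** 2

chain, cur = [], 1.0
for _ in range(5000):                     # random-walk Metropolis
    prop = cur + rng.normal(0, 0.1)
    if np.log(rng.uniform()) < log_post(prop) - log_post(cur):
        cur = prop
    chain.append(cur)
```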

  4. Bayesian calibration of the Community Land Model using surrogates

    SciTech Connect

    Ray, Jaideep; Hou, Zhangshuan; Huang, Maoyi; Swiler, Laura Painton

    2014-02-01

    We present results from the Bayesian calibration of hydrological parameters of the Community Land Model (CLM), which is often used in climate simulations and Earth system models. A statistical inverse problem is formulated for three hydrological parameters, conditional on observations of latent heat surface fluxes over 48 months. Our calibration method uses polynomial and Gaussian process surrogates of the CLM, and solves the parameter estimation problem using a Markov chain Monte Carlo sampler. Posterior probability densities for the parameters are developed for two sites with different soil and vegetation covers. Our method also allows us to examine the structural error in CLM under two error models. We find that surrogate models can be created for CLM in most cases. The posterior distributions are more predictive than the default parameter values in CLM. Climatologically averaging the observations does not modify the parameters' distributions significantly. The structural error model reveals a correlation time-scale which can be used to identify the physical process that could be contributing to it. While the calibrated CLM has a higher predictive skill, the calibration is under-dispersive.

  5. Calibration of hydrological models using flow-duration curves

    NASA Astrophysics Data System (ADS)

    Westerberg, I. K.; Guerrero, J.-L.; Younger, P. M.; Beven, K. J.; Seibert, J.; Halldin, S.; Freer, J. E.; Xu, C.-Y.

    2010-12-01

    The degree of belief we have in predictions from hydrologic models depends on how well they can reproduce observations. Calibrations with traditional performance measures such as the Nash-Sutcliffe model efficiency are challenged by problems including: (1) uncertain discharge data, (2) variable importance of the performance with flow magnitudes, (3) influence of unknown input/output errors and (4) inability to evaluate model performance when observation time periods for discharge and model input data do not overlap. A new calibration method using flow-duration curves (FDCs) was developed which addresses these problems. The method focuses on reproducing the observed discharge frequency distribution rather than the exact hydrograph. It consists of applying limits of acceptability for selected evaluation points (EPs) of the observed uncertain FDC in the extended GLUE approach. Two ways of selecting the EPs were tested - based on equal intervals of discharge and of volume of water. The method was tested and compared to a calibration using the traditional model efficiency for the daily four-parameter WASMOD model in the Paso La Ceiba catchment in Honduras and for Dynamic TOPMODEL evaluated at an hourly time scale for the Brue catchment in Great Britain. The volume method of selecting EPs gave the best results in both catchments with better calibrated slow flow, recession and evaporation than the other criteria. Observed and simulated time series of uncertain discharges agreed better for this method both in calibration and prediction in both catchments without resulting in overpredicted simulated uncertainty. An advantage with the method is that the rejection criterion is based on an estimation of the uncertainty in discharge data and that the EPs of the FDC can be chosen to reflect the aims of the modelling application e.g. using more/less EPs at high/low flows. While the new method is less sensitive to epistemic input/output errors than the normal use of limits of
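
    The volume-based selection of evaluation points can be sketched as follows; the synthetic discharge series and the number of points are placeholders, and the uncertainty bounds central to the GLUE approach are omitted.

```python
# Sketch: build a flow-duration curve and pick evaluation points (EPs) by
# equal increments of cumulative volume, as in the "volume method" above.
import numpy as np

rng = np.random.default_rng(3)
q = rng.lognormal(mean=0.0, sigma=1.0, size=3650)    # synthetic daily flows

q_sorted = np.sort(q)[::-1]                          # descending flows
exceedance = np.arange(1, q.size + 1) / (q.size + 1) # exceedance probability

cum_vol = np.cumsum(q_sorted) / q_sorted.sum()       # fraction of total volume
targets = np.linspace(0.05, 0.95, 10)                # 10 equal volume slices
idx = np.searchsorted(cum_vol, targets)
eval_points = list(zip(exceedance[idx], q_sorted[idx]))
```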

  6. Calibration of hydrological models using flow-duration curves

    NASA Astrophysics Data System (ADS)

    Westerberg, I. K.; Guerrero, J.-L.; Younger, P. M.; Beven, K. J.; Seibert, J.; Halldin, S.; Freer, J. E.; Xu, C.-Y.

    2011-07-01

    The degree of belief we have in predictions from hydrologic models will normally depend on how well they can reproduce observations. Calibrations with traditional performance measures, such as the Nash-Sutcliffe model efficiency, are challenged by problems including: (1) uncertain discharge data, (2) variable sensitivity of different performance measures to different flow magnitudes, (3) influence of unknown input/output errors and (4) inability to evaluate model performance when observation time periods for discharge and model input data do not overlap. This paper explores a calibration method using flow-duration curves (FDCs) to address these problems. The method focuses on reproducing the observed discharge frequency distribution rather than the exact hydrograph. It consists of applying limits of acceptability for selected evaluation points (EPs) on the observed uncertain FDC in the extended GLUE approach. Two ways of selecting the EPs were tested - based on equal intervals of discharge and of volume of water. The method was tested and compared to a calibration using the traditional model efficiency for the daily four-parameter WASMOD model in the Paso La Ceiba catchment in Honduras and for Dynamic TOPMODEL evaluated at an hourly time scale for the Brue catchment in Great Britain. The volume method of selecting EPs gave the best results in both catchments with better calibrated slow flow, recession and evaporation than the other criteria. Observed and simulated time series of uncertain discharges agreed better for this method both in calibration and prediction in both catchments. An advantage with the method is that the rejection criterion is based on an estimation of the uncertainty in discharge data and that the EPs of the FDC can be chosen to reflect the aims of the modelling application, e.g. using more/less EPs at high/low flows. While the method appears less sensitive to epistemic input/output errors than previous use of limits of acceptability applied

  7. Calibrating Subjective Probabilities Using Hierarchical Bayesian Models

    NASA Astrophysics Data System (ADS)

    Merkle, Edgar C.

    A body of psychological research has examined the correspondence between a judge's subjective probability of an event's outcome and the event's actual outcome. The research generally shows that subjective probabilities are noisy and do not match the "true" probabilities. However, subjective probabilities are still useful for forecasting purposes if they bear some relationship to true probabilities. The purpose of the current research is to exploit relationships between subjective probabilities and outcomes to create improved, model-based probabilities for forecasting. Once the model has been trained in situations where the outcome is known, it can then be used in forecasting situations where the outcome is unknown. These concepts are demonstrated using experimental psychology data, and potential applications are discussed.

  8. Hydrologic and water quality models: Key calibration and validation topics

    USDA-ARS?s Scientific Manuscript database

    As a continuation of efforts to provide a common background and platform for accordant development of calibration and validation (C/V) engineering practices, ASABE members worked to determine critical topics related to model C/V, perform a synthesis of the Moriasi et al. (2012) special collection of...

  9. An Application of the Poisson Race Model to Confidence Calibration

    ERIC Educational Resources Information Center

    Merkle, Edgar C.; Van Zandt, Trisha

    2006-01-01

    In tasks as diverse as stock market predictions and jury deliberations, a person's feelings of confidence in the appropriateness of different choices often impact that person's final choice. The current study examines the mathematical modeling of confidence calibration in a simple dual-choice task. Experiments are motivated by an accumulator…

  10. Hydrologic and water quality models: Use, calibration, and validation

    USDA-ARS?s Scientific Manuscript database

    This paper introduces a special collection of 22 research articles that present and discuss calibration and validation concepts in detail for hydrologic and water quality models by their developers and presents a broad framework for developing the American Society of Agricultural and Biological Engi...

  12. Robust camera calibration for sport videos using court models

    NASA Astrophysics Data System (ADS)

    Farin, Dirk; Krabbe, Susanne; de With, Peter H. N.; Effelsberg, Wolfgang

    2003-12-01

    We propose an automatic camera calibration algorithm for court sports. The obtained camera calibration parameters are required for applications that need to convert positions in the video frame to real-world coordinates or vice versa. Our algorithm uses a model of the arrangement of court lines for calibration. Since the court model can be specified by the user, the algorithm can be applied to a variety of different sports. The algorithm starts with a model initialization step which locates the court in the image without any user assistance or a-priori knowledge about the most probable position. Image pixels are classified as court line pixels if they pass several tests including color and local texture constraints. A Hough transform is applied to extract line elements, forming a set of court line candidates. The subsequent combinatorial search establishes correspondences between lines in the input image and lines from the court model. For the succeeding input frames, an abbreviated calibration algorithm is used, which predicts the camera parameters for the new image and optimizes the parameters using a gradient-descent algorithm. We have conducted experiments on a variety of sport videos (tennis, volleyball, and goal area sequences of soccer games). Video scenes with considerable difficulties were selected to test the robustness of the algorithm. Results show that the algorithm is very robust to occlusions, partial court views, bad lighting conditions, or shadows.
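
    A sketch of the pixel-classification and line-extraction stages using OpenCV (an assumption; the paper does not prescribe this toolchain, and the thresholds and file name are illustrative). The correspondence search and model fitting are omitted.

```python
# Sketch: classify bright (white) pixels as court-line candidates, then
# extract line segments with a probabilistic Hough transform.
import cv2
import numpy as np

frame = cv2.imread("frame.png")                      # hypothetical input frame
hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)

# Court lines are roughly white: low saturation, high value.
mask = cv2.inRange(hsv, (0, 0, 180), (180, 60, 255))

segments = cv2.HoughLinesP(mask, rho=1, theta=np.pi / 180,
                           threshold=80, minLineLength=50, maxLineGap=10)
for x1, y1, x2, y2 in (segments[:, 0] if segments is not None else []):
    cv2.line(frame, (x1, y1), (x2, y2), (0, 0, 255), 2)  # draw candidates
```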

  13. I-spline Smoothing for Calibrating Predictive Models.

    PubMed

    Wu, Yuan; Jiang, Xiaoqian; Kim, Jihoon; Ohno-Machado, Lucila

    2012-01-01

    We proposed the I-spline Smoothing approach for calibrating predictive models by solving a nonlinear monotone regression problem. We took advantage of I-spline properties to obtain globally optimal solutions while keeping the computational cost low. Numerical studies based on three data sets showed empirical evidence that I-spline Smoothing improves calibration (1.6x, 1.4x, and 1.4x on the three data sets relative to the average of the competitors: Binning, Platt Scaling, Isotonic Regression, Monotone Spline Smoothing, and Smooth Isotonic Regression) without deterioration of discrimination.
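
    The I-spline smoother itself is not reproduced here; as a runnable stand-in, this sketch applies isotonic regression, one of the comparator methods named in the abstract, via scikit-learn.

```python
# Sketch: monotone recalibration of raw classifier scores with isotonic
# regression (a comparator to I-spline Smoothing, not the paper's method).
import numpy as np
from sklearn.isotonic import IsotonicRegression

rng = np.random.default_rng(4)
scores = rng.uniform(0, 1, 500)                     # raw model scores
labels = (rng.uniform(0, 1, 500) < scores ** 2).astype(float)  # miscalibrated

iso = IsotonicRegression(out_of_bounds="clip").fit(scores, labels)
calibrated = iso.predict(scores)   # monotone map from score to probability
```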

  14. Calibration of long wavelength Exotech model 20-C spectroradiometer

    NASA Technical Reports Server (NTRS)

    Kumar, R.; Robinson, B.; Silva, L.

    1978-01-01

    A brief description is given of the Exotech model 20-C field spectroradiometer, which measures the spectral radiance of a target in the wavelength ranges 0.37 to 2.5 microns (short wavelength unit), and 2.8 to 5.6 microns and 7.0 to 14 microns (long wavelength unit). Wavelength calibration of the long wavelength unit was performed using the strong, sharp, and accurately known absorption bands of polystyrene, atmospheric carbon dioxide, and methyl cyclohexane (liquid) in the infrared region. The spectral radiance calibration was done by recording spectral scans of hot and cold blackbodies and assuming that spectral radiance varies linearly with the signal.
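
    The linear radiance calibration described above reduces to a per-wavelength two-point fit; a minimal sketch (array names are illustrative):

```python
# Sketch: per-wavelength gain/offset from hot and cold blackbody scans,
# assuming radiance is linear in the recorded signal.
import numpy as np

def linear_calibration(sig_hot, sig_cold, L_hot, L_cold):
    """Return per-wavelength gain and offset from two reference scans."""
    gain = (L_hot - L_cold) / (sig_hot - sig_cold)
    offset = L_cold - gain * sig_cold
    return gain, offset

# Target radiance then follows from the target signal:
# L_target = gain * sig_target + offset
```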

  15. Atmospheric drag model calibrations for spacecraft lifetime prediction

    NASA Technical Reports Server (NTRS)

    Binebrink, A. L.; Radomski, M. S.; Samii, M. V.

    1989-01-01

    Although solar activity prediction uncertainty normally dominates the decay prediction error budget for near-Earth spacecraft, the effect of drag force modeling errors for given levels of solar activity also needs to be considered. Two atmospheric density models, the modified Harris-Priester model and the Jacchia-Roberts model, were analyzed for their ability to reproduce the decay histories of the Solar Mesosphere Explorer (SME) and Solar Maximum Mission (SMM) spacecraft in the 490- to 540-kilometer altitude range. Historical solar activity data were used as input to the density computations. For each spacecraft and atmospheric model, a drag scaling adjustment factor was determined for a high-solar-activity year, such that the observed annual decay in the mean semimajor axis was reproduced by an averaged variation-of-parameters (VOP) orbit propagation. The SME (SMM) calibration was performed using calendar year 1983 (1982). The resulting calibration factors differ by 20 to 40 percent from the predictions of the prelaunch ballistic coefficients. The orbit propagations for each spacecraft were extended to the middle of 1988 using the calibrated drag models. For the Jacchia-Roberts density model, the observed decay in the mean semimajor axis of SME (SMM) over the 4.5-year (5.5-year) predictive period was reproduced to within 1.5 (4.4) percent. The corresponding figure for the Harris-Priester model was 8.6 (20.6) percent. Detailed results and conclusions regarding the importance of accurate drag force modeling for lifetime predictions are presented.

  16. Theoretical model atmosphere spectra used for the calibration of infrared instruments

    NASA Astrophysics Data System (ADS)

    Decin, L.; Eriksson, K.

    2007-09-01

    Context: One of the key ingredients in establishing the relation between input signal and output flux from a spectrometer is accurate determination of the spectrophotometric calibration. In the case of spectrometers onboard satellites, the accuracy of this part of the calibration pedigree is ultimately linked to the accuracy of the set of reference spectral energy distributions (SEDs) that the spectrophotometric calibration is built on. Aims: In this paper, we deal with the spectrophotometric calibration of infrared (IR) spectrometers onboard satellites in the 2 to 200 μm wavelength range. We aim at comparing the different reference SEDs used for the IR spectrophotometric calibration. The emphasis is on the reference SEDs of stellar standards with spectral type later than A0, with special focus on the theoretical model atmosphere spectra. Methods: Using the MARCS model atmosphere code, spectral reference SEDs were constructed for a set of IR stellar standards (A dwarfs, solar analogs, G9-M0 giants). A detailed error analysis was performed to estimate proper uncertainties on the predicted flux values. Results: It is shown that the uncertainty on the predicted fluxes can be as high as 10%, but where high-resolution optical or near-IR observations are available and an IR excess can be excluded, the uncertainty on medium-resolution SEDs can be reduced to 1-2% in the near-IR, to ~3% in the mid-IR, and to ~5% in the far-IR. Moreover, it is argued that theoretical stellar atmosphere spectra are at the moment the best representations for the IR fluxes of cool stellar standards. Conclusions: When aiming at a determination of the spectrophotometric calibration of IR spectrometers better than 3%, effort should be put into constructing an appropriate set of stellar reference SEDs based on theoretical atmosphere spectra for some 15 standard stars with spectral types between A0 V and M0 III.

  17. In vitro calibration of the equilibrium reactions of the metallochromic indicator dye antipyrylazo III with calcium.

    PubMed Central

    Hollingworth, S; Aldrich, R W; Baylor, S M

    1987-01-01

    The equilibrium reactions of the metallochromic indicator dye Antipyrylazo III with calcium at physiological ionic strength have been investigated spectrophotometrically. Dye absorbance as a function of wavelength was measured at various total dye and calcium concentrations. Analysis of the absorbance spectra indicated that at pH 6.9 at least three calcium:dye complexes form, with 1:1, 1:2, and possibly 2:2 stoichiometries. The dissociation constant and the changes in dye extinction coefficients on formation of the 1:2 complex, the main complex which forms when Antipyrylazo III is used to study cytoplasmic calcium transients, have been characterized. PMID:3567312

  18. Simultaneous calibration of surface flow and baseflow simulations: a revisit of the SWAT model calibration framework

    SciTech Connect

    Zhang, Xuesong; Srinivasan, Ragahvan; Arnold, J. G.; Izaurralde, Roberto C.; Bosch, David

    2011-04-21

    Accurate analysis of water flow pathways from rainfall to streams is critical for simulating water use, climate change impacts, and contaminant transport. In this study, we developed a new scheme to simultaneously calibrate surface flow (SF) and baseflow (BF) simulations of the Soil and Water Assessment Tool (SWAT) by combining evolutionary multi-objective optimization (EMO) and BF separation techniques. The application of this scheme demonstrated a pronounced trade-off between SWAT's performance on SF and BF simulations. The major simulated water fluxes and storage variables (e.g., soil moisture, evapotranspiration, and groundwater) span wide ranges across the multiple parameter sets from EMO. Uncertainty analysis was conducted by Bayesian model averaging of the Pareto optimal solutions. The 90% confidence interval (CI) estimated using all streamflows substantially overestimates the uncertainty of low flows on BF days while underestimating the uncertainty of high flows on SF days. Despite the use of statistical criteria calculated from streamflow for model selection, it remains important to conduct diagnostic analysis of the agreement between SWAT behaviour and actual watershed dynamics. The new calibration technique can serve as a useful tool to explore the trade-off between SF and BF simulations and provide candidates for further diagnostic assessment and model identification.
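
    The abstract does not specify its baseflow separation algorithm; as one common choice, the sketch below applies the Lyne-Hollick single-parameter recursive digital filter to split a discharge series into baseflow and quickflow.

```python
# Sketch: Lyne-Hollick recursive digital filter for baseflow separation.
import numpy as np

def lyne_hollick(q, alpha=0.925):
    """One forward pass of the filter; practice often applies three
    passes (forward, backward, forward) for smoother separation."""
    qf = np.zeros_like(q, dtype=float)          # quickflow component
    for t in range(1, q.size):
        qf[t] = alpha * qf[t - 1] + 0.5 * (1.0 + alpha) * (q[t] - q[t - 1])
        qf[t] = max(qf[t], 0.0)                 # quickflow cannot be negative
    return q - qf, qf                           # (baseflow, quickflow)
```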

  19. Analysis of Sting Balance Calibration Data Using Optimized Regression Models

    NASA Technical Reports Server (NTRS)

    Ulbrich, Norbert; Bader, Jon B.

    2009-01-01

    Calibration data of a wind tunnel sting balance were processed using a search algorithm that identifies an optimized regression model for the data analysis. The selected sting balance had two moment gages mounted forward and aft of the balance moment center. The difference and the sum of the two gage outputs were fitted in the least squares sense using the normal force and the pitching moment at the balance moment center as independent variables. The regression model search algorithm predicted that the difference of the gage outputs should be modeled using the intercept and the normal force. The sum of the two gage outputs, on the other hand, should be modeled using the intercept, the pitching moment, and the square of the pitching moment. Equations for the deflection of a cantilever beam are used to show that the search algorithm's two recommended math models can also be obtained from a rigorous theoretical analysis of the deflection of the sting balance under load. The analysis of this sting balance calibration data set is a rare example of a situation in which regression models of balance calibration data can be derived directly from first principles of physics and engineering. In addition, it is interesting to see that the search algorithm recommended the same regression models using only a set of statistical quality metrics.
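
    The two recommended regression models are simple enough to fit directly; a sketch with synthetic gage data (all coefficients and noise levels are invented):

```python
# Sketch: least-squares fits of the two recommended models, using numpy.
import numpy as np

rng = np.random.default_rng(5)
N = rng.uniform(-1, 1, 200)       # normal force at balance moment center
M = rng.uniform(-1, 1, 200)       # pitching moment
diff = 0.1 + 2.0 * N + rng.normal(0, 0.01, 200)          # gage difference
summ = 0.2 + 1.5 * M + 0.3 * M ** 2 + rng.normal(0, 0.01, 200)  # gage sum

# Difference model: intercept + normal force.
X_diff = np.column_stack([np.ones_like(N), N])
coef_diff, *_ = np.linalg.lstsq(X_diff, diff, rcond=None)

# Sum model: intercept + pitching moment + pitching moment squared.
X_sum = np.column_stack([np.ones_like(M), M, M ** 2])
coef_sum, *_ = np.linalg.lstsq(X_sum, summ, rcond=None)
```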

  20. Calibration of the hydrogeological model of the Baltic Artesian Basin

    NASA Astrophysics Data System (ADS)

    Virbulis, J.; Klints, I.; Timuhins, A.; Sennikovs, J.; Bethers, U.

    2012-04-01

    We consider the calibration of the hydrogeological model of the Baltic Artesian Basin (BAB), a complex hydrogeological system in the southeastern Baltic with a surface area close to 0.5 million square kilometers. The model of the geological structure contains 42 layers, including aquifers and aquitards, with sediment ages ranging from the Cambrian up to the Quaternary. A finite element model was developed to calculate steady-state three-dimensional groundwater flow with a free surface. No-flow boundary conditions were applied on the rock bottom and the side boundaries of the BAB, while a simple hydrological model is applied at the surface. The levels of lakes, rivers, and the sea are fixed as constant hydraulic heads. A constant mean value of 70 mm/year was assumed for the infiltration flux elsewhere and adjusted during the automatic calibration process. Averaged long-term water extraction was applied at the water supply wells. Calibration is one of the most important steps in the development of a hydrogeological model. Knowledge of the parameters of the modeled system is often insufficient, especially for large regional models, and a lack of geometric and hydraulic conductivity data is typical. The quasi-Newton optimization method L-BFGS-B is used for the calibration of the BAB model. The model is calibrated against available water level measurements in monitoring wells and level measurements in boreholes during their installation. As the available data are not uniformly distributed over the covered area, a weight coefficient is assigned to each borehole so as not to overweight clusters of boreholes. The year 2000 was chosen as the reference year for the present-time scenario, and data from surrounding years are also taken into account, but with smaller weighting coefficients. The objective function to be minimized by the calibration process is the weighted sum of squared differences between observed and modeled piezometric heads
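
    A minimal sketch of the calibration loop described above: a weighted sum of squared head residuals minimized with SciPy's L-BFGS-B. The head model, weights, and bounds are toy placeholders, not the BAB model.

```python
# Sketch: weighted sum-of-squares objective over piezometric heads,
# minimized with the quasi-Newton L-BFGS-B method.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(6)
h_obs = rng.normal(50.0, 5.0, 100)          # observed piezometric heads
w = rng.uniform(0.2, 1.0, 100)              # per-borehole declustering weights

def heads_model(params):
    """Toy stand-in for the finite element groundwater model."""
    return params[0] + params[1] * np.arange(100.0)

def objective(params):
    r = heads_model(params) - h_obs
    return np.sum(w * r ** 2)               # weighted SSE

res = minimize(objective, x0=[40.0, 0.0], method="L-BFGS-B",
               bounds=[(0.0, 100.0), (-1.0, 1.0)])
```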

  1. A calibration model for screen-caged Peltier thermocouple psychrometers

    Treesearch

    Ray W. Brown; Dale L. Bartos

    1982-01-01

    A calibration model for screen-caged Peltier thermocouple psychrometers was developed that applies to a water potential range of 0 to -80 bars, over a temperature range of 0° to 40° C, and for cooling times of 15 to 60 seconds. In addition, the model corrects for the effects of temperature gradients over zero-offsets from -60 to +60 microvolts. Complete details of...

  2. A new selection metric for multiobjective hydrologic model calibration

    NASA Astrophysics Data System (ADS)

    Asadzadeh, Masoud; Tolson, Bryan A.; Burn, Donald H.

    2014-09-01

    A novel selection metric called Convex Hull Contribution (CHC) is introduced for solving multiobjective (MO) optimization problems with Pareto fronts that can be accurately approximated by a convex curve. The hydrologic model calibration literature shows that many biobjective calibration problems with a proper setup result in such Pareto fronts. The CHC selection approach identifies a subset of archived nondominated solutions whose map in the objective space forms convex approximation of the Pareto front. The optimization algorithm can sample solely from these solutions to more accurately approximate the convex shape of the Pareto front. It is empirically demonstrated that CHC improves the performance of Pareto Archived Dynamically Dimensioned Search (PA-DDS) when solving MO problems with convex Pareto fronts. This conclusion is based on the results of several benchmark mathematical problems and several hydrologic model calibration problems with two or three objective functions. The impact of CHC on PA-DDS performance is most evident when the computational budget is somewhat limited. It is also demonstrated that 1,000 solution evaluations (limited budget in this study) is sufficient for PA-DDS with CHC-based selection to achieve very high quality calibration results relative to the results achieved after 10,000 solution evaluations.
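
    For a biobjective minimization with a mutually nondominated archive, the convex-hull-supported subset can be extracted directly; a sketch with synthetic points (in practice, degenerate hulls and any dominated vertices would need handling):

```python
# Sketch: for a 2-D Pareto archive near a convex front, the vertices of
# the archive's convex hull are the points supporting the convex
# approximation of the front (CHC-style candidate subset).
import numpy as np
from scipy.spatial import ConvexHull

rng = np.random.default_rng(7)
f1 = np.sort(rng.uniform(0, 1, 30))
f2 = 1.0 / (1.0 + 5.0 * f1) + rng.normal(0, 0.01, 30)  # roughly convex front
archive = np.column_stack([f1, f2])

hull = ConvexHull(archive)
convex_subset = archive[hull.vertices]  # candidates for hull-based sampling
```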

  3. Calibration and validation of DRAINMOD to model bioretention hydrology

    NASA Astrophysics Data System (ADS)

    Brown, R. A.; Skaggs, R. W.; Hunt, W. F.

    2013-04-01

    Summary: Previous field studies have shown that the hydrologic performance of bioretention cells varies greatly because of factors such as underlying soil type, physiographic region, drainage configuration, surface storage volume, drainage area to bioretention surface area ratio, and media depth. To more accurately describe bioretention hydrologic response, a long-term hydrologic model that generates a water balance is needed. Some current bioretention models lack the ability to perform long-term simulations and others have never been calibrated from field monitored bioretention cells with underdrains. All peer-reviewed models lack the ability to simultaneously perform both of the following functions: (1) model an internal water storage (IWS) zone drainage configuration and (2) account for soil-water content using the soil-water characteristic curve. DRAINMOD, a widely-accepted agricultural drainage model, was used to simulate the hydrologic response of runoff entering a bioretention cell. The concepts of water movement in bioretention cells are very similar to those of agricultural fields with drainage pipes, so many bioretention design specifications corresponded directly to DRAINMOD inputs. Detailed hydrologic measurements were collected from two bioretention field sites in Nashville and Rocky Mount, North Carolina, to calibrate and test the model. Each field site had two sets of bioretention cells with varying media depths, media types, drainage configurations, underlying soil types, and surface storage volumes. After 12 months, one of these characteristics was altered - surface storage volume at Nashville and IWS zone depth at Rocky Mount. At Nashville, during the second year (post-repair period), the Nash-Sutcliffe coefficients for drainage and exfiltration/evapotranspiration (ET) both exceeded 0.8 during the calibration and validation periods. During the first year (pre-repair period), the Nash-Sutcliffe coefficients for drainage, overflow, and exfiltration

  4. Model calibration for changing climates: lessons from Australian droughts.

    NASA Astrophysics Data System (ADS)

    Fowler, K.; Peel, M. C.; Western, A. W.; Zhang, L.

    2016-12-01

    Hydrologic models have potential to be useful tools in planning for future climate variability. They are often used to translate projected climatic shifts (e.g., in rainfall or PET) into potential shortfalls in water availability. However, recent literature suggests that conceptual rainfall-runoff models have variable performance in simulating runoff under changing climatic conditions. Models calibrated to wetter conditions tend to perform poorly when climatic conditions become drier. In particular, models often provide biased simulations after a change in climate. This suggests that either the models themselves are deficient, and/or common calibration methods need to be improved. Therefore, this research tested alternative calibration methods. The overall goal was to find parameter sets that are robust to changes in climate and provide better performance when evaluated over multi-year droughts. Two broad approaches were trialled: hydrologic signature matching (using the DREAM-ABC algorithm) and single-objective optimisation (using the CMA-ES algorithm). For hydrologic signature matching, 36 hydrologic signatures were defined and over 200 combinations of these signatures were trialled. For single-objective optimisation, 15 different objective functions were trialled. For both methods, testing was carried out in 86 catchments in South East Australia using 5 different rainfall-runoff models. The results indicate two broad strategies for improving calibration methods for changing climates. First, common 'least squares' methods are too sensitive to day-to-day variations and not sufficiently sensitive to long-term changes; signatures or objective functions that incorporate longer timescales (e.g., annual) may do better. Second, the least squares method tended to be outperformed by methods that take the absolute error, such as the Index of Agreement. Together, these two strategies have the potential to better prepare models for future climatic changes.
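
    To make the contrast concrete, the sketch below implements a least-squares criterion alongside the modified (absolute-error) form of the Index of Agreement; the choice of the modified form is an assumption, since the abstract does not specify which variant was used.

```python
# Sketch: least-squares criterion vs. the modified Index of Agreement.
import numpy as np

def mse(obs, sim):
    """Classic least-squares criterion (lower is better)."""
    return np.mean((obs - sim) ** 2)

def index_of_agreement(obs, sim):
    """Modified (absolute-error) Index of Agreement; 1 is perfect."""
    num = np.sum(np.abs(obs - sim))
    den = np.sum(np.abs(sim - obs.mean()) + np.abs(obs - obs.mean()))
    return 1.0 - num / den
```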

  5. Automatic Calibration Method for a Storm Water Runoff Model

    NASA Astrophysics Data System (ADS)

    Barco, J.; Wong, K. M.; Hogue, T.; Stenstrom, M. K.

    2007-12-01

    Major metropolitan areas are characterized by continuous increases in imperviousness due to urban development. Increasing imperviousness increases runoff volume and maximum rates of runoff, with generally negative consequences for natural systems. To avoid environmental degradation, new development standards often prohibit increases in total runoff volume and may limit maximum flow rates. Methods to reduce runoff volume and maximum runoff rate are required, and solutions to the problems may benefit from the use of advanced models. In this study the U.S. Storm Water Management Model (SWMM) was adapted and calibrated to the Ballona Creek watershed, a large urban catchment in Southern California. A geographic information system (GIS) was used to process the input data and generate the spatial distribution of precipitation. An optimization procedure using the Complex Method was incorporated to estimate runoff parameters, and ten storms were used for calibration and validation. The calibrated model predicted the observed outputs with reasonable accuracy. A sensitivity analysis showed the impact of the model parameters, and results were most sensitive to imperviousness and impervious depression storage and least sensitive to Manning roughness for surface flow. Optimized imperviousness was greater than imperviousness predicted from landuse information. The results demonstrate that this methodology of integrating GIS and stormwater model with a constrained optimization technique can be applied to large watersheds, and can be a useful tool to evaluate alternative strategies to reduce runoff rate and volume.

  6. Model calibration criteria for estimating ecological flow characteristics

    USGS Publications Warehouse

    Vis, Marc; Knight, Rodney; Poole, Sandra; Wolfe, William J.; Seibert, Jan; Breuer, Lutz; Kraft, Philipp

    2016-01-01

    Quantification of streamflow characteristics in ungauged catchments remains a challenge. Hydrological modeling is often used to derive flow time series and to calculate streamflow characteristics for subsequent applications that may differ from those envisioned by the modelers. While the estimation of model parameters for ungauged catchments is a challenging research task in itself, it is important to evaluate whether simulated time series preserve critical aspects of the streamflow hydrograph. To address this question, seven calibration objective functions were evaluated for their ability to preserve ecologically relevant streamflow characteristics of the average annual hydrograph using a runoff model, HBV-light, at 27 catchments in the southeastern United States. Calibration trials were repeated 100 times to reduce parameter uncertainty effects on the results, and 12 ecological flow characteristics were computed for comparison. Our results showed that the most suitable calibration strategy varied according to streamflow characteristic. Combined objective functions generally gave the best results, though a clear underprediction bias was observed. The occurrence of low prediction errors for certain combinations of objective function and flow characteristic suggests that (1) incorporating multiple ecological flow characteristics into a single objective function would increase model accuracy, potentially benefitting decision-making processes; and (2) there may be a need to have different objective functions available to address specific applications of the predicted time series.

  7. Numerical modeling, calibration, and validation of an ultrasonic separator.

    PubMed

    Cappon, Hans; Keesman, Karel J

    2013-03-01

    Our overall goal is to apply acoustic separation technology for the recovery of valuable particulate matter from wastewater in industry. Such large-scale separator systems require detailed design and evaluation to optimize the system performance at the earliest stage possible. Numerical models can facilitate and accelerate the design of this application; therefore, a finite element (FE) model of an ultrasonic particle separator is a prerequisite. In our application, the particle separator consists of a glass resonator chamber with a piezoelectric transducer attached to the glass by means of epoxy adhesive. Separation occurs most efficiently when the system is operated at its main eigenfrequency. The goal of the paper is to calibrate and validate a model of a demonstrator ultrasonic separator, preserving known physical parameters and estimating the remaining unknown or less-certain parameters to allow extrapolation of the model beyond the measured system. A two-step approach was applied to obtain a validated model of the separator. The first step involved the calibration of the piezoelectric transducer. The second step, the subject of this paper, involves the calibration and validation of the entire separator using nonlinear optimization techniques. The results show that the approach led to a fully calibrated 2-D model of the empty separator, which was validated with experiments on a filled separator chamber. The large sensitivity of the separator to small variations indicated that such a system should either be made and operated within tight specifications to obtain the required performance, or the operation of the system should be adaptable to cope with a slightly off-spec system, requiring a feedback controller.

  8. Calibration process of highly parameterized semi-distributed hydrological model

    NASA Astrophysics Data System (ADS)

    Vidmar, Andrej; Brilly, Mitja

    2017-04-01

    Hydrological phenomena take place in a hydrological system governed by nature and are essentially stochastic. These phenomena are unique, non-recurring, and changeable across space and time. Since any river basin, with its own natural characteristics, and any hydrological event therein are unique, modelling them is a complex process that is not researched enough. Calibration is the procedure of determining those parameters of a model that are not known well enough: input and output variables and the mathematical model expressions are known, while some parameters are unknown and must be determined by calibrating the model. The software used for hydrological modelling nowadays is equipped with sophisticated calibration algorithms that leave the modeller little possibility to manage the process, and the results are often not the best. We therefore developed a procedure for an expert-driven calibration process. We use the HBV-light-CLI hydrological model, which has a command line interface, and couple it with PEST. PEST is a parameter estimation tool widely used in groundwater modelling that can also be applied to surface waters. A calibration process managed directly by the expert, in proportion to the expert's knowledge, affects the outcome of the inversion procedure and achieves better results than if the procedure had been left to the selected optimization algorithm alone. The first step is to properly define the spatial characteristics and structural design of the semi-distributed model, including all morphological and hydrological features such as karstic, alluvial, and forest areas. This step requires the geological, meteorological, hydraulic, and hydrological knowledge of the modeller. The second step is to set initial parameter values at their preferred values based on expert knowledge. In this step we also define all parameter and observation groups. Peak data are essential in the calibration process if we are mainly interested in flood events. Each sub-catchment in the model has its own observation group

  9. Stellar model chromospheres. III - Arcturus /K2 III/

    NASA Technical Reports Server (NTRS)

    Ayres, T. R.; Linsky, J. L.

    1975-01-01

    Models are constructed for the upper photosphere and chromosphere of Arcturus based on the H, K, and IR triplet lines of Ca II and the h and k lines of Mg II. The chromosphere model is derived from complete redistribution solutions for a five-level Ca II ion and a two-level Mg II ion. A photospheric model is derived from the Ca II wings using first the 'traditional' complete-redistribution limit and then the more realistic partial-redistribution approximation. The temperature and mass column densities for the temperature-minimum region and the chromosphere-transition region boundary are computed, and the pressure in the transition region and corona are estimated. It is found that the ratio of minimum temperature to effective temperature is approximately 0.77 for Arcturus, Procyon, and the sun, and that mass tends to increase at the temperature minimum with decreasing gravity. The pressure is found to be about 1 percent of the solar value, and the surface brightness of the Arcturus transition region and coronal spectrum is estimated to be much less than for the sun. The partial-redistribution calculation for the Ca II K line indicates that the emission width is at least partially determined by damping rather than Doppler broadening, suggesting a reexamination of previous explanations for the Wilson-Bappu effect.

  10. Use of Cloud Computing to Calibrate a Highly Parameterized Model

    NASA Astrophysics Data System (ADS)

    Hayley, K. H.; Schumacher, J.; MacMillan, G.; Boutin, L.

    2012-12-01

    We present a case study using cloud computing to facilitate the calibration of a complex and highly parameterized model of regional groundwater flow. The calibration dataset consisted of many (~1500) measurements or estimates of static hydraulic head, a high-resolution time series of groundwater extraction and disposal rates at 42 locations, pressure monitoring at 147 locations with a total of more than one million raw measurements collected over a ten-year pumping history, and baseflow estimates at 5 surface water monitoring locations. This modeling project was undertaken to assess the sustainability of groundwater withdrawal and disposal plans for in situ heavy oil extraction in Northeast Alberta, Canada. The geological interpretations used for model construction were based on more than 5,000 wireline logs collected throughout the 30,865 km2 regional study area (RSA) and resulted in a model with 28 slices and 28 hydrostratigraphic units (average model thickness of 700 m, with aquifers ranging from a depth of 50 to 500 m below ground surface). The finite element FEFLOW model constructed on this geological interpretation had 331,408 nodes and required 265 time steps to simulate the ten-year transient calibration period. This numerical model of groundwater flow required 3 hours to run on a server with two 2.8 GHz processors and 16 GB of RAM. Calibration was completed using PEST. Horizontal and vertical hydraulic conductivity as well as specific storage for each unit were independent parameters. For the recharge and the horizontal hydraulic conductivity in the three aquifers with the most transient groundwater use, a pilot point parameterization was adopted. A 7x7 grid of pilot points defined over the RSA described a spatially variable horizontal hydraulic conductivity or recharge field. A 7x7 grid of multiplier pilot points that perturbed the more regional field was then superimposed over the 3,600 km2 local study area (LSA). The pilot point

  11. A controlled experiment in ground water flow model calibration

    USGS Publications Warehouse

    Hill, M.C.; Cooley, R.L.; Pollock, D.W.

    1998-01-01

    Nonlinear regression was introduced to ground water modeling in the 1970s, but has been used very little to calibrate numerical models of complicated ground water systems. Apparently, nonlinear regression is thought by many to be incapable of addressing such complex problems. With what we believe to be the most complicated synthetic test case used for such a study, this work investigates using nonlinear regression in ground water model calibration. Results of the study fall into two categories. First, the study demonstrates how systematic use of a well designed nonlinear regression method can indicate the importance of different types of data and can lead to successive improvement of models and their parameterizations. Our method differs from previous methods presented in the ground water literature in that (1) weighting is more closely related to expected data errors than is usually the case; (2) defined diagnostic statistics allow for more effective evaluation of the available data, the model, and their interaction; and (3) prior information is used more cautiously. Second, our results challenge some commonly held beliefs about model calibration. For the test case considered, we show that (1) field measured values of hydraulic conductivity are not as directly applicable to models as their use in some geostatistical methods imply; (2) a unique model does not necessarily need to be identified to obtain accurate predictions; and (3) in the absence of obvious model bias, model error was normally distributed. The complexity of the test case involved implies that the methods used and conclusions drawn are likely to be powerful in practice.

  12. Design of Experiments, Model Calibration and Data Assimilation

    SciTech Connect

    Williams, Brian J.

    2014-07-30

    This presentation provides an overview of emulation, calibration and experiment design for computer experiments. Emulation refers to building a statistical surrogate from a carefully selected and limited set of model runs to predict unsampled outputs. The standard kriging approach to emulation of complex computer models is presented. Calibration refers to the process of probabilistically constraining uncertain physics/engineering model inputs to be consistent with observed experimental data. An initial probability distribution for these parameters is updated using the experimental information. Markov chain Monte Carlo (MCMC) algorithms are often used to sample the calibrated parameter distribution. Several MCMC algorithms commonly employed in practice are presented, along with a popular diagnostic for evaluating chain behavior. Space-filling approaches to experiment design for selecting model runs to build effective emulators are discussed, including Latin Hypercube Design and extensions based on orthogonal array skeleton designs and imposed symmetry requirements. Optimization criteria that further enforce space-filling, possibly in projections of the input space, are mentioned. Designs to screen for important input variations are summarized and used for variable selection in a nuclear fuels performance application. This is followed by illustration of sequential experiment design strategies for optimization, global prediction, and rare event inference.
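
    A minimal sketch of the space-filling design step mentioned above, using SciPy's quasi-Monte Carlo module (requires scipy >= 1.7); the dimension, sample size, and parameter ranges are placeholders.

```python
# Sketch: Latin Hypercube Design for selecting emulator training runs.
import numpy as np
from scipy.stats import qmc

sampler = qmc.LatinHypercube(d=3, seed=0)       # three uncertain inputs
unit_design = sampler.random(n=20)              # 20 runs in the unit cube

# Scale to physical parameter ranges before running the simulator.
lower, upper = [0.1, 1e-5, 0.0], [0.9, 1e-3, 45.0]
design = qmc.scale(unit_design, lower, upper)
```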

  13. Strain Gage Loads Calibration Testing with Airbag Support for the Gulfstream III SubsoniC Research Aircraft Testbed (SCRAT)

    NASA Technical Reports Server (NTRS)

    Lokos, William; Miller, Eric; Hudson, Larry; Holguin, Andrew; Neufeld, David; Haraguchi, Ronnie

    2015-01-01

    This paper describes the design and conduct of the strain gage load calibration ground test of the SubsoniC Research Aircraft Testbed, Gulfstream III aircraft, and the subsequent data analysis and its results. The goal of this effort was to create and validate multi-gage load equations for shear force, bending moment, and torque for two wing measurement stations. For some of the testing the aircraft was supported by three air bags in order to isolate the wing structure from extraneous load inputs through the main landing gear. Thirty-two strain gage bridges were installed on the left wing. Hydraulic loads were applied to the wing lower surface through a total of 16 load zones. Some dead weight load cases were applied to the upper wing surface using shot bags. Maximum applied loads reached 54,000 pounds.

  14. Efficient Accommodation of Local Minima in Watershed Model Calibration

    DTIC Science & Technology

    2006-02-02

    ...should notify the user of this, and of the fact that parameter estimates forthcoming from the calibration process are nonunique. Whether or not an... challenges posed by parameter nonuniqueness and local objective function minima will lead to the necessity to carry out more model runs than that

  15. Spatial and Temporal Self-Calibration of a Hydroeconomic Model

    NASA Astrophysics Data System (ADS)

    Howitt, R. E.; Hansen, K. M.

    2008-12-01

    Hydroeconomic modeling of water systems where risk and reliability of water supply are of critical importance must address explicitly how to model water supply uncertainty. When large fluctuations in annual precipitation and significant variation in flows by location are present, a model which solves with perfect foresight of future water conditions may be inappropriate for some policy and research questions. We construct a simulation-optimization model with limited foresight of future water conditions using positive mathematical programming and self-calibration techniques. This limited foresight netflow (LFN) model signals the value of storing water for future use and reflects a more accurate economic value of water at key locations, given that future water conditions are unknown. Failure to explicitly model this uncertainty could lead to undervaluation of storage infrastructure and contractual mechanisms for managing water supply risk. A model based on sequentially updated information is more realistic, since water managers make annual storage decisions without knowledge of yet to be realized future water conditions. The LFN model runs historical hydrological conditions through the current configuration of the California water system to determine the economically efficient allocation of water under current economic conditions and infrastructure. The model utilizes current urban and agricultural demands, storage and conveyance infrastructure, and the state's hydrological history to indicate the scarcity value of water at key locations within the state. Further, the temporal calibration penalty functions vary by year type, reflecting agricultural water users' ability to alter cropping patterns in response to water conditions. The model employs techniques from positive mathematical programming (Howitt, 1995; Howitt, 1998; Cai and Wang, 2006) to generate penalty functions that are applied to deviations from observed data. The functions are applied to monthly flows

  16. Methane emission modeling with MCMC calibration for a boreal peatland

    NASA Astrophysics Data System (ADS)

    Raivonen, Maarit; Smolander, Sampo; Susiluoto, Jouni; Backman, Leif; Li, Xuefei; Markkanen, Tiina; Kleinen, Thomas; Makela, Jarmo; Aalto, Tuula; Rinne, Janne; Brovkin, Victor; Vesala, Timo

    2016-04-01

    Natural wetlands, particularly peatlands of the boreal latitudes, are a significant source of methane (CH4). At the moment, the emission estimates are highly uncertain. These natural emissions respond to climatic variability, so it is necessary to understand their dynamics in order to be able to predict how they affect the greenhouse gas balance in the future. We have developed a model of CH4 production, oxidation, and transport in boreal peatlands. It simulates production of CH4 as a proportion of anaerobic peat respiration; transport of CH4 and oxygen between the soil and the atmosphere via diffusion in aerenchymatous plants and in peat pores (water- and air-filled); ebullition; and oxidation of CH4 by methanotrophic microbes. Ultimately, we aim to add the model functionality to global climate models such as JSBACH (Reick et al., 2013), the land surface scheme of the MPI Earth System Model. We tested the model with methane fluxes measured by the eddy covariance technique at the Siikaneva site, an oligotrophic boreal fen in southern Finland (61°49' N, 24°11' E), over the years 2005-2011. To give the model estimates regional reliability, we calibrated the model using the Markov chain Monte Carlo (MCMC) technique. Although the simulations and the research are still ongoing, preliminary results from the MCMC calibration are very promising considering that the model is still at a relatively early stage. We will present the model and its dynamics as well as results from the MCMC calibration and the comparison with the Siikaneva flux data.
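
    A minimal sketch of the MCMC calibration step, assuming (for illustration only) a single parameter, the fraction of anaerobic respiration emitted as CH4, and a toy flux model; a plain random-walk Metropolis chain targets the posterior.

```python
# Sketch: random-walk Metropolis calibration of one emission parameter.
import numpy as np

rng = np.random.default_rng(8)
resp = rng.gamma(5.0, 1.0, 200)                  # anaerobic respiration series
flux_obs = 0.3 * resp + rng.normal(0, 0.5, 200)  # synthetic "measured" fluxes

def log_post(r, sigma=0.5):
    if not 0.0 < r < 1.0:                        # uniform prior on (0, 1)
        return -np.inf
    return -0.5 * np.sum(((r * resp - flux_obs) / sigma) ** 2)

chain, cur = [], 0.5
for _ in range(10000):
    prop = cur + rng.normal(0, 0.02)             # random-walk proposal
    if np.log(rng.uniform()) < log_post(prop) - log_post(cur):
        cur = prop
    chain.append(cur)
posterior = np.array(chain[2000:])               # discard burn-in
```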

  17. Xenon arc lamp spectral radiance modelling for satellite instrument calibration

    NASA Astrophysics Data System (ADS)

    Rolt, Stephen; Clark, Paul; Schmoll, Jürgen; Shaw, Benjamin J. R.

    2016-07-01

    Precise radiometric measurements play a central role in many areas of astronomical and terrestrial observation. We focus on the use of continuum light sources in the absolute radiometric calibration of detectors in an imaging spectrometer for space applications. The application, in this instance, revolves around the ground-based calibration of the Sentinel-4/UVN instrument. This imaging spectrometer instrument is expected to be deployed in 2019 and will make spatially resolved spectroscopic measurements of atmospheric chemistry. The instrument, which operates across the UV/VIS and NIR spectrum from 305-775 nm, is designed to measure the absolute spectral radiance of the Earth and compare it with the absolute spectral irradiance of the Sun. Of key importance to the fidelity of these absolute measurements is the ground-based calibration campaign. Continuum lamp sources that are temporally stable and spatially well defined are central to this process. Xenon short arc lamps provide highly intense and efficient continuum illumination in a range extending from the ultraviolet to the infrared, and their spectrum is well matched to this specific application. Despite their widespread commercial use, certain aspects of their performance are not well documented in the literature. One of the important requirements in this calibration application is the delivery of highly uniform, collimated illumination at high radiance. In this process, it cannot be assumed that the xenon arc is a point source; the spatial distribution of the radiance must be characterised accurately. We present here careful measurements that thoroughly characterise the spatial distribution of the spectral radiance of a 1000 W xenon lamp. A mathematical model is presented describing the spatial distribution. Temporal stability is another exceptionally important requirement in the calibration process. As such, the paper also describes strategies to reinforce the temporal stability of the lamp output by

  18. A Robust Bayesian Random Effects Model for Nonlinear Calibration Problems

    PubMed Central

    Fong, Y.; Wakefield, J.; De Rosa, S.; Frahm, N.

    2013-01-01

    Summary: In the context of a bioassay or an immunoassay, calibration means fitting a curve, usually nonlinear, through the observations collected on a set of samples containing known concentrations of a target substance, and then using the fitted curve and observations collected on samples of interest to predict the concentrations of the target substance in these samples. Recent technological advances have greatly improved our ability to quantify minute amounts of substance from a tiny volume of biological sample. This has in turn led to a need to improve statistical methods for calibration. In this paper, we focus on developing calibration methods robust to dependent outliers. We introduce a novel normal mixture model with dependent error terms to model the experimental noise. In addition, we propose a re-parameterization of the five-parameter logistic nonlinear regression model that allows us to better incorporate prior information. We examine the performance of our methods with simulation studies and show that they lead to a substantial increase in performance measured in terms of mean squared error of estimation and a measure of the average prediction accuracy. A real data example from the HIV Vaccine Trials Network Laboratory is used to illustrate the methods. PMID:22551415

  19. An improved calibration technique for wind tunnel model attitude sensors

    NASA Technical Reports Server (NTRS)

    Tripp, John S.; Wong, Douglas T.; Finley, Tom D.; Tcheng, Ping

    1993-01-01

    Aerodynamic wind tunnel tests at NASA Langley Research Center (LaRC) require accurate measurement of model attitude. Inertial accelerometer packages have been the primary sensor used to measure model attitude to an accuracy of +/- 0.01 deg as required for aerodynamic research. The calibration parameters of the accelerometer package are currently obtained from a seven-point tumble test using a simplified empirical approximation. The inaccuracy due to the approximation exceeds the accuracy requirement as the misalignment angle between the package axis and the model body axis increases beyond 1.4 deg. This paper presents the exact solution derived from the coordinate transformation to eliminate inaccuracy caused by the approximation. In addition, a new calibration procedure is developed in which the data taken from the seven-point tumble test is fit to the exact solution by means of a least-squares estimation procedure. Validation tests indicate that the new calibration procedure provides +/- 0.005-deg accuracy over large package misalignments, which is not possible with the current procedure.
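
    As a rough illustration of the least-squares step, a minimal single-axis sketch with made-up numbers (not the LaRC package model or its exact coordinate-transformation solution): the measured output at each tumble angle can be written in a form that is linear in the unknown sensitivity, misalignment, and bias, and solved directly.

        import numpy as np

        # Simplified one-axis tumble-test model (illustrative only): at pitch
        # angle theta the sensor ideally reads g*sin(theta); sensitivity k,
        # misalignment phi and bias b give
        #   a(theta) = k*sin(theta + phi) + b
        #            = k*cos(phi)*sin(theta) + k*sin(phi)*cos(theta) + b,
        # which is linear in p = (k*cos(phi), k*sin(phi), b).
        theta = np.radians([0, 30, 60, 90, 120, 180, 270])   # seven-point tumble
        k_true, phi_true, b_true = 0.998, np.radians(2.0), 0.004
        a_meas = k_true * np.sin(theta + phi_true) + b_true  # mock readings

        A = np.column_stack([np.sin(theta), np.cos(theta), np.ones_like(theta)])
        p, *_ = np.linalg.lstsq(A, a_meas, rcond=None)

        k_hat = np.hypot(p[0], p[1])
        phi_hat = np.arctan2(p[1], p[0])
        print(k_hat, np.degrees(phi_hat), p[2])              # recovers k, phi, b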

  20. KINEROS2-AGWA: Model Use, Calibration, and Validation

    NASA Technical Reports Server (NTRS)

    Goodrich, D. C.; Burns, I. S.; Unkrich, C. L.; Semmens, D. J.; Guertin, D. P.; Hernandez, M.; Yatheendradas, S.; Kennedy, J. R.; Levick, L. R.

    2013-01-01

    KINEROS (KINematic runoff and EROSion) originated in the 1960s as a distributed event-based model that conceptualizes a watershed as a cascade of overland flow model elements that flow into trapezoidal channel model elements. KINEROS was one of the first widely available watershed models that interactively coupled a finite difference approximation of the kinematic overland flow equations to a physically based infiltration model. Development and improvement of KINEROS continued from the 1960s on a variety of projects for a range of purposes, which has resulted in a suite of KINEROS-based modeling tools. This article focuses on KINEROS2 (K2), a spatially distributed, event-based watershed rainfall-runoff and erosion model, and the companion ArcGIS-based Automated Geospatial Watershed Assessment (AGWA) tool. AGWA automates the time-consuming tasks of watershed delineation into distributed model elements and initial parameterization of these elements using commonly available, national GIS data layers. A variety of approaches have been used to calibrate and validate K2 successfully across a relatively broad range of applications (e.g., urbanization, pre- and post-fire, hillslope erosion, erosion from roads, runoff and recharge, and manure transport). The case studies presented in this article (1) compare lumped to stepwise calibration and validation of runoff and sediment at plot, hillslope, and small watershed scales; and (2) demonstrate an uncalibrated application to address relative change in watershed response to wildfire.

  1. Synthetic calibration of a Rainfall-Runoff Model

    USGS Publications Warehouse

    Thompson, David B.; Westphal, Jerome A.; ,

    1990-01-01

    A method for synthetically calibrating storm-mode parameters for the U.S. Geological Survey's Precipitation-Runoff Modeling System is described. Synthetic calibration is accomplished by adjusting storm-mode parameters to minimize deviations between the pseudo-probability distributions represented by regional regression equations and actual frequency distributions fitted to model-generated peak discharge and runoff volume. Results of modeling storm hydrographs using synthetic and analytic storm-mode parameters are presented. Comparisons are made between model results from both parameter sets and between model results and observed hydrographs. Although mean storm runoff is reproducible to within about 26 percent of the observed mean storm runoff for five or six parameter sets, runoff from individual storms is subject to large disparities. Predicted storm runoff volume ranged from 2 percent to 217 percent of commensurate observed values. Furthermore, simulation of peak discharges was poor. Predicted peak discharges from individual storm events ranged from 2 percent to 229 percent of commensurate observed values. The model was incapable of satisfactorily executing storm-mode simulations for the study watersheds. This result is not considered a particular fault of the model, but instead is indicative of deficiencies in similar conceptual models.

  2. Design driven test patterns for OPC models calibration

    NASA Astrophysics Data System (ADS)

    Al-Imam, Mohamed

    2009-03-01

    In the modern photolithography process for manufacturing integrated circuits, geometries much smaller than the exposure wavelength must be realized on silicon. Resolution Enhancement Techniques (RETs) therefore have an indispensable role in the implementation of a successful technology process node. Finding an appropriate RET recipe that answers the needs of a certain fabrication process usually involves intensive computational simulations. These simulations have to reflect how different elements in the lithography process under study will behave. In order to achieve this, accurate models are needed that truly represent the transmission of patterns from mask to silicon. A common practice in calibrating lithography models is to collect data for the dimensions of some test structures created on the exposure mask along with the corresponding dimensions of these test structures on silicon after exposure. These data are used to tune the models for good predictions. The models are guaranteed to accurately predict only the test structures used in their tuning. However, real designs may contain a much greater variety of structures that might not have been included in the test structures. This paper explores a method for compiling the test structures to be used in the model calibration process using design layouts as an input. The method relies on reducing the structures in a design layout to the essential unique structures from the lithography model's point of view, thus ensuring that the test structures represent what the model will actually have to predict during the simulations.

  3. Approximate Bayesian Computation for Diagnostic Model Calibration and Evaluation

    NASA Astrophysics Data System (ADS)

    Vrugt, J. A.; Sadegh, M.

    2013-12-01

    In this talk I will discuss theory, concepts and applications of Approximate Bayesian Computation (ABC) for diagnostic model calibration and evaluation. This statistical methodology relaxes the need for an explicit likelihood function in favor of one or more summary statistics, rooted in hydrologic theory, that together have a clearer and more compelling diagnostic power than some average measure of the size of the error residuals. A few illustrative case studies are used to demonstrate that ABC is relatively easy to implement, and readily employs signature-based indices to analyze and pinpoint which part of the model is malfunctioning and in need of further improvement.
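
    A minimal rejection-sampling sketch of the ABC idea, with a toy one-parameter model and summary statistics of my own choosing (not the case studies of the talk): parameter draws are kept whenever their simulated summaries land within a tolerance of the observed ones.

        import numpy as np

        rng = np.random.default_rng(0)

        def summary(q):
            """Signature-type summaries of a flow series: mean and flashiness."""
            return np.array([q.mean(), np.abs(np.diff(q)).mean()])

        def model(theta, n=365):
            """Toy one-parameter 'hydrologic' model: AR(1) recession plus noise."""
            q = np.empty(n); q[0] = 1.0
            for t in range(1, n):
                q[t] = theta * q[t-1] + rng.exponential(0.1)
            return q

        q_obs = model(0.8)                       # pretend these are observations
        s_obs = summary(q_obs)

        # ABC rejection: accept prior draws whose summaries are close to s_obs.
        accepted = []
        for _ in range(5000):
            theta = rng.uniform(0.0, 1.0)        # prior draw
            if np.linalg.norm(summary(model(theta)) - s_obs) < 0.05:
                accepted.append(theta)

        print(len(accepted), np.mean(accepted))  # approximate posterior sample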

  4. Application of variance components estimation to calibrate geoid error models.

    PubMed

    Guo, Dong-Mei; Xu, Hou-Ze

    2015-01-01

    The method of using Global Positioning System-leveling data to obtain orthometric heights has been well studied. A simple formulation for the weighted least squares problem was presented in an earlier work. This formulation allows one to directly employ errors-in-variables models that completely describe the covariance matrices of the observables. However, the important question of what accuracy level can be achieved has not yet been satisfactorily answered by this traditional formulation. One of the main reasons for this is the incorrectness of the stochastic models in the adjustment, which in turn motivates improving the stochastic models of the measurement noises. Determining the stochastic model of the observables in a combined adjustment with heterogeneous height types is therefore the main focus of this paper. First, the well-known method of variance component estimation is employed to calibrate the errors of heterogeneous height data in a combined least squares adjustment of ellipsoidal, orthometric, and gravimetric geoid heights. Specifically, the iterative algorithms of minimum norm quadratic unbiased estimation (MINQUE) are used to estimate the variance components for each of the heterogeneous observations. Second, two different statistical models are presented to illustrate the theory. The first directly uses the errors-in-variables covariance matrices as a priori information, and the second analyzes the biases of the variance components and proposes bias-corrected variance component estimators. Several numerical test results show the capability and effectiveness of the variance component estimation procedure for calibrating geoid error models in a combined adjustment.
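
    The combined adjustment rests on the standard compatibility condition between the three height types, written here with a generic variance-component structure (not necessarily the authors' exact parameterization):

        \ell = h - H - N, \qquad
        \Sigma_{\ell} = \sigma_h^{2} Q_h + \sigma_H^{2} Q_H + \sigma_N^{2} Q_N

    where h is the GPS ellipsoidal height, H the leveled orthometric height, N the gravimetric geoid undulation, the Q are cofactor matrices, and the variance components sigma^2 are what MINQUE estimates iteratively.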

  5. Calibrating the Abaqus Crushable Foam Material Model using UNM Data

    SciTech Connect

    Schembri, Philip E.; Lewis, Matthew W.

    2014-02-27

    Triaxial test data from the University of New Mexico and uniaxial test data from W-14 are used to calibrate the Abaqus crushable foam material model to represent the syntactic foam, comprised of an APO-BMI matrix and carbon microballoons, used in the W76. The material model is an elasto-plasticity model in which the yield strength depends on pressure. Both the elastic properties and the yield stress are estimated by fitting a line to the elastic region of each test response. The model parameters are fit to the data (in a non-rigorous way) to provide both a conservative and a non-conservative material model. The model is verified to perform as intended by comparing the values of pressure and shear stress at yield, as well as the shear and volumetric stress-strain response, to the test data.
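
    A minimal sketch of the "fit a line to the elastic region" step on hypothetical stress-strain data (the UNM data and the actual Abaqus parameter mapping are not reproduced here):

        import numpy as np

        # Hypothetical uniaxial test data, not the UNM/W-14 measurements.
        strain = np.array([0.000, 0.002, 0.004, 0.006, 0.008, 0.012, 0.020, 0.040])
        stress = np.array([0.0,   1.0,   2.0,   3.0,   3.9,   4.3,   4.5,   4.6])  # MPa

        elastic = strain <= 0.008                   # assumed elastic region
        E, intercept = np.polyfit(strain[elastic], stress[elastic], 1)
        print(f"elastic modulus ~ {E:.0f} MPa")

        # Simple 0.2%-offset yield estimate: first point where the measured
        # stress falls below the elastic line shifted by a 0.002 strain offset.
        offset_line = E * (strain - 0.002) + intercept
        yield_idx = np.argmax(stress < offset_line)
        print(f"yield stress ~ {stress[yield_idx]:.1f} MPa")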

  6. CALIBRATING STELLAR POPULATION MODELS WITH MAGELLANIC CLOUD STAR CLUSTERS

    SciTech Connect

    Noël, N. E. D.; Carollo, C. M.; Greggio, L.; Renzini, A.; Maraston, C.

    2013-07-20

    Stellar population models are commonly calculated using star clusters as calibrators for those evolutionary stages that depend on free parameters. However, discrepancies exist among different models, even if similar sets of calibration clusters are used. With the aim of understanding these discrepancies, and of improving the calibration procedure, we consider a set of 43 Magellanic Cloud (MC) clusters, taking age and photometric information from the literature. We carefully assign ages to each cluster based on up-to-date determinations, ensuring that these are as homogeneous as possible. To cope with statistical fluctuations, we stack the clusters in five age bins, deriving for each of them integrated luminosities and colors. We find that clusters become abruptly red in optical and optical-infrared colors as they age from ~0.6 to ~1 Gyr, which we interpret as due to the development of a well-populated thermally pulsing asymptotic giant branch (TP-AGB). We argue that other studies missed this detection because of coarser age binnings. Maraston and Girardi et al. models predict the presence of a populated TP-AGB at ~0.6 Gyr, with a correspondingly very red integrated color, at variance with the data; Bruzual and Charlot and Conroy models run within the error bars at all ages. The discrepancy between the synthetic colors of Maraston models and the average colors of MC clusters results from the now obsolete age scale adopted. Finally, our finding that the TP-AGB phase appears to develop between ~0.6 and 1 Gyr is dependent on the adopted age scale for the clusters and may have important implications for stellar evolution.

  7. Sparkle/PM3 Parameters for the Modeling of Neodymium(III), Promethium(III), and Samarium(III) Complexes.

    PubMed

    Freire, Ricardo O; da Costa, Nivan B; Rocha, Gerd B; Simas, Alfredo M

    2007-07-01

    The Sparkle/PM3 model is extended to neodymium(III), promethium(III), and samarium(III) complexes. The unsigned mean error, for all Sparkle/PM3 interatomic distances between the trivalent lanthanide ion and the ligand atoms of the first sphere of coordination, is 0.074 Å for Nd(III); 0.057 Å for Pm(III); and 0.075 Å for Sm(III). These figures are similar to the Sparkle/AM1 ones of 0.076 Å, 0.059 Å, and 0.075 Å, respectively, indicating they are all comparable models. Moreover, their accuracy is similar to what can be obtained by present-day ab initio effective potential calculations on such lanthanide complexes. Hence, the choice of which model to utilize will depend on the assessment of the effect of either AM1 or PM3 on the quantum chemical description of the organic ligands. Finally, we present a preliminary attempt to verify the geometry prediction consistency of Sparkle/PM3. Since lanthanide complexes are usually flexible, we randomly generated 200 different input geometries for the samarium complex QIPQOV which were then fully optimized by Sparkle/PM3. A trend appeared in that, on average, the lower the total energy of the local minima found, the lower the unsigned mean errors, and the higher the accuracy of the model. These preliminary results do indicate that attempting to find, with Sparkle/PM3, a global minimum for the geometry of a given complex, with the understanding that it will tend to be closer to the experimental geometry, appears to be warranted. Therefore, the sparkle model is seemingly a trustworthy semiempirical quantum chemical model for the prediction of lanthanide complexes geometries.

  8. Calibrating Building Energy Models Using Supercomputer Trained Machine Learning Agents

    SciTech Connect

    Sanyal, Jibonananda; New, Joshua Ryan; Edwards, Richard; Parker, Lynne Edwards

    2014-01-01

    Building Energy Modeling (BEM) is an approach to model the energy usage in buildings for design and retrofit purposes. EnergyPlus is the flagship Department of Energy software that performs BEM for different types of buildings. The input to EnergyPlus can extend to a few thousand parameters, which must be calibrated manually by an expert for realistic energy modeling. This makes calibration challenging and expensive, rendering building energy modeling infeasible for smaller projects. In this paper, we describe the Autotune research, which employs machine learning algorithms to generate agents for the different kinds of standard reference buildings in the U.S. building stock. The parametric space and the variety of building locations and types make this a challenging computational problem necessitating the use of supercomputers. Millions of EnergyPlus simulations are run on supercomputers and subsequently used to train machine learning algorithms to generate agents. These agents, once created, can run in a fraction of the time, thereby allowing cost-effective calibration of building models.
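
    A toy sketch of the surrogate-agent idea, with synthetic inputs and a mock response standing in for real EnergyPlus runs (this is not the Autotune code):

        import numpy as np
        from sklearn.ensemble import RandomForestRegressor

        rng = np.random.default_rng(1)

        # X holds sampled EnergyPlus-style inputs (e.g., insulation R-value,
        # infiltration rate, setpoint); y is the simulated annual energy use.
        X = rng.uniform([1.0, 0.1, 18.0], [6.0, 1.0, 24.0], size=(5000, 3))
        y = 100.0 / X[:, 0] + 50.0 * X[:, 1] + 2.0 * (22.0 - X[:, 2]) ** 2  # mock sim

        agent = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

        # Once trained, the agent evaluates in microseconds instead of a full
        # simulation run, so it can sit inside a calibration loop.
        print(agent.predict([[3.0, 0.5, 21.0]]))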

  9. A road map for multi-way calibration models.

    PubMed

    Escandar, Graciela M; Olivieri, Alejandro C

    2017-08-07

    A large number of experimental applications of multi-way calibration are known, and a variety of chemometric models are available for the processing of multi-way data. While the main focus has been directed towards three-way data, due to the availability of various instrumental matrix measurements, a growing number of reports are being produced on higher-order signals of increasing complexity. The purpose of this review is to present a general scheme for selecting the appropriate data processing model, according to the properties exhibited by the multi-way data. In spite of the complexity of the multi-way instrumental measurements, simple criteria can be proposed for model selection, based on the presence and number of the so-called multi-linearity breaking modes (instrumental modes that break the low-rank multi-linearity of the multi-way arrays), and also on the existence of mutually dependent instrumental modes. Recent literature reports on multi-way calibration are reviewed, with emphasis on the models that were selected for data processing.

  10. BLOOD FLOW IN THE CIRCLE OF WILLIS: MODELING AND CALIBRATION*

    PubMed Central

    DEVAULT, KRISTEN; GREMAUD, PIERRE A.; NOVAK, VERA; OLUFSEN, METTE S.; VERNIÈRES, GUILLAUME; ZHAO, PENG

    2008-01-01

    A numerical model based on one-dimensional balance laws and ad hoc zero-dimensional boundary conditions is tested against experimental data. The study concentrates on the circle of Willis, a vital subnetwork of the cerebral vasculature. The main goal is to obtain efficient and reliable numerical tools with predictive capabilities. The flow is assumed to obey the Navier–Stokes equations, while the mechanical reactions of the arterial walls follow a viscoelastic model. Like many previous studies, a dimension reduction is performed through averaging. Unlike most previous work, the resulting model is both calibrated and validated against in vivo data, more precisely transcranial Doppler data of cerebral blood velocity. The network considered has three inflow vessels and six outflow vessels. Inflow conditions come from the data, while outflow conditions are modeled. Parameters in the outflow conditions are calibrated using a subset of the data through ensemble Kalman filtering techniques. The rest of the data is used for validation. The results demonstrate the viability of the proposed approach. PMID:19043621

  11. Dynamic calibration of agent-based models using data assimilation.

    PubMed

    Ward, Jonathan A; Evans, Andrew J; Malleson, Nicolas S

    2016-04-01

    A widespread approach to investigating the dynamical behaviour of complex social systems is via agent-based models (ABMs). In this paper, we describe how such models can be dynamically calibrated using the ensemble Kalman filter (EnKF), a standard method of data assimilation. Our goal is twofold. First, we want to present the EnKF in a simple setting for the benefit of ABM practitioners who are unfamiliar with it. Second, we want to illustrate to data assimilation experts the value of using such methods in the context of ABMs of complex social systems and the new challenges these types of model present. We work towards these goals within the context of a simple question of practical value: how many people are there in Leeds (or any other major city) right now? We build a hierarchy of exemplar models that we use to demonstrate how to apply the EnKF and calibrate these using open data of footfall counts in Leeds.

  12. Dynamic calibration of agent-based models using data assimilation

    PubMed Central

    Ward, Jonathan A.; Evans, Andrew J.; Malleson, Nicolas S.

    2016-01-01

    A widespread approach to investigating the dynamical behaviour of complex social systems is via agent-based models (ABMs). In this paper, we describe how such models can be dynamically calibrated using the ensemble Kalman filter (EnKF), a standard method of data assimilation. Our goal is twofold. First, we want to present the EnKF in a simple setting for the benefit of ABM practitioners who are unfamiliar with it. Second, we want to illustrate to data assimilation experts the value of using such methods in the context of ABMs of complex social systems and the new challenges these types of model present. We work towards these goals within the context of a simple question of practical value: how many people are there in Leeds (or any other major city) right now? We build a hierarchy of exemplar models that we use to demonstrate how to apply the EnKF and calibrate these using open data of footfall counts in Leeds. PMID:27152214
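
    A compact sketch of the EnKF analysis step with perturbed observations, the update both records describe (toy dimensions and numbers, not the Leeds footfall models):

        import numpy as np

        rng = np.random.default_rng(42)

        def enkf_update(X, y, H, R):
            """One EnKF analysis step with perturbed observations.

            X : (n, N) ensemble of state vectors (columns are members)
            y : (m,)   observation vector
            H : (m, n) linear observation operator
            R : (m, m) observation-error covariance
            """
            n, N = X.shape
            A = X - X.mean(axis=1, keepdims=True)         # ensemble anomalies
            P = A @ A.T / (N - 1)                         # sample covariance
            K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)  # Kalman gain
            Y = y[:, None] + rng.multivariate_normal(
                np.zeros(len(y)), R, size=N).T            # perturbed observations
            return X + K @ (Y - H @ X)

        # Toy use: 100-member ensemble of a scalar "city population" state,
        # updated with one footfall-count-style observation.
        X = rng.normal(700_000, 50_000, size=(1, 100))
        X = enkf_update(X, np.array([730_000.0]), np.eye(1), np.array([[1e8]]))
        print(X.mean())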

  13. Efficiency of Evolutionary Algorithms for Calibration of Watershed Models

    NASA Astrophysics Data System (ADS)

    Ahmadi, M.; Arabi, M.

    2009-12-01

    Since the promulgation of the Clean Water Act in the U.S. and other similar legislation around the world over the past three decades, watershed management programs have focused on the nexus of pollution prevention and mitigation. In this context, hydrologic/water quality models have been increasingly embedded in the decision making process. Simulation models are now commonly used to investigate the hydrologic response of watershed systems under varying climatic and land use conditions, and also to study the fate and transport of contaminants at various spatiotemporal scales. Adequate calibration and corroboration of models for various outputs at varying scales is an essential component of watershed modeling. The parameter estimation process can be challenging when multiple objectives are important. For example, improving streamflow predictions of the model at one stream location may degrade model predictions for sediments and/or nutrients at the same location or at other outlets. This paper aims to evaluate the applicability and efficiency of single- and multi-objective evolutionary algorithms for parameter estimation of complex watershed models. To this end, the Shuffled Complex Evolution (SCE-UA) algorithm, a single-objective genetic algorithm (GA), and a multi-objective genetic algorithm (NSGA-II) were reconciled with the Soil and Water Assessment Tool (SWAT) to calibrate the model at various locations within the Wildcat Creek Watershed, Indiana. The efficiency of these methods was investigated using different error statistics, including root mean square error, coefficient of determination, and Nash-Sutcliffe efficiency coefficient, for the output variables as well as the baseflow component of the stream discharge. A sensitivity analysis was carried out to screen model parameters that bear significant uncertainties. Results indicated that while flow processes can be reasonably ascertained, parameterization of nutrient and pesticide processes
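
    For reference, the Nash-Sutcliffe efficiency used here as a calibration criterion is straightforward to compute (the flow values below are hypothetical):

        import numpy as np

        def nash_sutcliffe(obs, sim):
            """Nash-Sutcliffe efficiency: 1 is perfect, 0 matches the mean of obs."""
            obs, sim = np.asarray(obs, float), np.asarray(sim, float)
            return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

        def rmse(obs, sim):
            return float(np.sqrt(np.mean((np.asarray(obs) - np.asarray(sim)) ** 2)))

        q_obs = [3.2, 5.1, 9.8, 7.4, 4.0]   # hypothetical daily flows
        q_sim = [2.9, 5.6, 9.1, 7.9, 4.4]
        print(nash_sutcliffe(q_obs, q_sim), rmse(q_obs, q_sim))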

  14. Simple Parametric Model for Intensity Calibration of Cassini Composite Infrared Spectrometer Data

    NASA Technical Reports Server (NTRS)

    Brasunas, J.; Mamoutkine, A.; Gorius, N.

    2016-01-01

    Accurate intensity calibration of a linear Fourier-transform spectrometer typically requires the unknown science target and the two calibration targets to be acquired under identical conditions. We present a simple model suitable for vector calibration that enables accurate calibration via adjustments of measured spectral amplitudes and phases when these three targets are recorded at different detector or optics temperatures. Our model makes calibration more accurate both by minimizing biases due to changing instrument temperatures that are always present at some level and by decreasing estimate variance through incorporating larger averages of science and calibration interferogram scans.

  16. Simple parametric model for intensity calibration of Cassini composite infrared spectrometer data.

    PubMed

    Brasunas, J; Mamoutkine, A; Gorius, N

    2016-06-10

    Accurate intensity calibration of a linear Fourier-transform spectrometer typically requires the unknown science target and the two calibration targets to be acquired under identical conditions. We present a simple model suitable for vector calibration that enables accurate calibration via adjustments of measured spectral amplitudes and phases when these three targets are recorded at different detector or optics temperatures. Our model makes calibration more accurate both by minimizing biases due to changing instrument temperatures that are always present at some level and by decreasing estimate variance through incorporating larger averages of science and calibration interferogram scans.
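
    The underlying scheme in both records above is the classic two-point (hot/cold blackbody) complex calibration of a Fourier-transform spectrometer; in its textbook form (the paper's parametric temperature corrections are not reproduced here),

        L_{sci}(\nu) = \frac{S_{sci}(\nu) - S_{cold}(\nu)}{S_{hot}(\nu) - S_{cold}(\nu)}
                       \left[ B(\nu, T_{hot}) - B(\nu, T_{cold}) \right] + B(\nu, T_{cold})

    where the S are complex uncalibrated spectra and B is the Planck radiance of each calibration target.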

  17. New NIR Calibration Models Speed Biomass Composition and Reactivity Characterization

    SciTech Connect

    2015-09-01

    Obtaining accurate chemical composition and reactivity (measures of carbohydrate release and yield) information for biomass feedstocks in a timely manner is necessary for the commercialization of biofuels. This highlight describes NREL's work to use near-infrared (NIR) spectroscopy and partial least squares multivariate analysis to develop calibration models to predict the feedstock composition and the release and yield of soluble carbohydrates generated by a bench-scale dilute acid pretreatment and enzymatic hydrolysis assay. This highlight is being developed for the September 2015 Alliance S&T Board meeting.
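
    A minimal sketch of the partial least squares calibration workflow on synthetic spectra (illustrative only, not the NREL models or data):

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression
        from sklearn.model_selection import cross_val_predict

        rng = np.random.default_rng(7)

        # 200 synthetic "spectra" x 500 wavelengths; the response is a hidden
        # linear combination of two spectral channels plus noise.
        X = rng.normal(size=(200, 500))
        y = 2.0 * X[:, 50] - 1.5 * X[:, 300] + rng.normal(scale=0.1, size=200)

        pls = PLSRegression(n_components=10)
        y_cv = cross_val_predict(pls, X, y, cv=10).ravel()

        press = np.sum((y - y_cv) ** 2)
        r2_cv = 1 - press / np.sum((y - y.mean()) ** 2)
        print(f"cross-validated R^2 = {r2_cv:.3f}")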

  18. MODELING NATURAL ATTENUATION OF FUELS WITH BIOPLUME III

    EPA Science Inventory

    A natural attenuation model that simulates the aerobic and anaerobic biodegradation of fuel hydrocarbons was developed. The resulting model, BIOPLUME III, demonstrates the importance of biodegradation in reducing contaminant concentrations in ground water. In hypothetical simulat...

  19. Testing calibration routines for LISFLOOD, a distributed hydrological model

    NASA Astrophysics Data System (ADS)

    Pannemans, B.

    2009-04-01

    Traditionally, hydrological models are considered difficult to calibrate: their high non-linearity results in rugged response surfaces where calibration algorithms easily get stuck in local minima. For the calibration of distributed hydrological models two extra factors play an important role: on the one hand they are often computationally costly, thus restricting the feasible number of model runs; on the other hand their distributed nature smooths the response surface, thus facilitating the search for a global minimum. Lisflood is a distributed hydrological model currently used for the European Flood Alert System - EFAS (Van der Knijff et al., 2008). Its upcoming recalibration over more than 200 catchments, each with an average runtime of 2-3 minutes, proved a perfect occasion to put several existing calibration algorithms to the test. The tested routines are Downhill Simplex (DHS, Nelder and Mead, 1965), SCEUA (Duan et al., 1993), SCEM (Vrugt et al., 2003) and AMALGAM (Vrugt et al., 2008), and they were evaluated on their capability to efficiently converge onto the global minimum and on the spread in the solutions found in repeated runs. The routines were let loose on a simple hyperbolic function, on a Lisflood catchment using model output as observation, and on two Lisflood catchments using real observations (one on the river Inn in the Alps, the other along the downstream stretch of the Elbe). On the mathematical problem and on the catchment with synthetic observations DHS proved to be the fastest and the most efficient in finding a solution. SCEUA and AMALGAM are slower, but while SCEUA keeps converging on the exact solution, AMALGAM slows down after about 600 runs. For the Lisflood models with real observations, AMALGAM (a hybrid algorithm that combines several other algorithms; we used CMA, PSO and GA) came out of the tests as the fastest, giving comparable results in consecutive runs. However, some more work is needed to tweak the stopping

  20. Analytical characterization of a Bruderhedral calibration target model

    NASA Astrophysics Data System (ADS)

    Cremona-Simmons, Peter M.

    1996-06-01

    The Army Research Laboratory (ARL) has constructed a variation of the bruderhedral calibration and radar cross section (RCS) target model and measured its radar characteristics in the field. A computer version of the same model was generated, and later characterized in both elevation and azimuth for validation. Our goal is to develop a millimeter-wave (MMW) signature generation tool for guidance integrated fuzing (GIF) systems and applications. Before realizing this goal, one must develop a test-bed of tools and approaches upon which to build. ARL has identified approaches to developing generic analytical target-signature models based on some existing electromagnetic scattering codes. A high-frequency RCS and signature prediction software model was selected to perform the radar analysis and provide a mechanism, a synthetic aperture radar (SAR) model, for recognizing prominent scatterers off high-fidelity target models. This method will assist us in creating suitable far- to near-field 3-D transitional models at MMW frequencies. Two target model descriptions were used in the signature prediction model: a flat facet format and a curved surface format. This paper introduces these software models, and some optics and SAR considerations relating to the test wavelength and the size of the target. Also, the simulated azimuthal and elevation response patterns, along with some results from the SAR model, are presented.

  1. A critical comparison of systematic calibration protocols for activated sludge models: a SWOT analysis.

    PubMed

    Sin, Gürkan; Van Hulle, Stijn W H; De Pauw, Dirk J W; van Griensven, Ann; Vanrolleghem, Peter A

    2005-07-01

    Modelling activated sludge systems has gained an increasing momentum after the introduction of activated sludge models (ASMs) in 1987. Application of dynamic models for full-scale systems requires essentially a calibration of the chosen ASM to the case under study. Numerous full-scale model applications have been performed so far which were mostly based on ad hoc approaches and expert knowledge. Further, each modelling study has followed a different calibration approach: e.g. different influent wastewater characterization methods, different kinetic parameter estimation methods, different selection of parameters to be calibrated, different priorities within the calibration steps, etc. In short, there was no standard approach in performing the calibration study, which makes it difficult, if not impossible, to (1) compare different calibrations of ASMs with each other and (2) perform internal quality checks for each calibration study. To address these concerns, systematic calibration protocols have recently been proposed to bring guidance to the modeling of activated sludge systems and in particular to the calibration of full-scale models. In this contribution four existing calibration approaches (BIOMATH, HSG, STOWA and WERF) will be critically discussed using a SWOT (Strengths, Weaknesses, Opportunities, Threats) analysis. It will also be assessed in what way these approaches can be further developed in view of further improving the quality of ASM calibration. In this respect, the potential of automating some steps of the calibration procedure by use of mathematical algorithms is highlighted.

  2. A calibration hierarchy for risk models was defined: from utopia to empirical data.

    PubMed

    Van Calster, Ben; Nieboer, Daan; Vergouwe, Yvonne; De Cock, Bavo; Pencina, Michael J; Steyerberg, Ewout W

    2016-06-01

    Calibrated risk models are vital for valid decision support. We define four levels of calibration and describe implications for model development and external validation of predictions. We present results based on simulated data sets. A common definition of calibration is "having an event rate of R% among patients with a predicted risk of R%," which we refer to as "moderate calibration." Weaker forms of calibration only require the average predicted risk (mean calibration) or the average prediction effects (weak calibration) to be correct. "Strong calibration" requires that the event rate equals the predicted risk for every covariate pattern. This implies that the model is fully correct for the validation setting. We argue that this is unrealistic: the model type may be incorrect, the linear predictor is only asymptotically unbiased, and all nonlinear and interaction effects should be correctly modeled. In addition, we prove that moderate calibration guarantees nonharmful decision making. Finally, results indicate that a flexible assessment of calibration in small validation data sets is problematic. Strong calibration is desirable for individualized decision support but unrealistic and counterproductive by stimulating the development of overly complex models. Model development and external validation should focus on moderate calibration. Copyright © 2016 Elsevier Inc. All rights reserved.
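
    A small sketch of how mean and weak calibration can be checked in practice: regress outcomes on the model's log-odds; a well-calibrated model gives intercept ~0 and slope ~1. The data below are simulated, with deliberately overconfident predictions.

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(3)

        # True risks, observed events, and a model whose log-odds are too steep.
        p_true = rng.uniform(0.05, 0.6, size=2000)
        events = rng.binomial(1, p_true)
        logit = lambda p: np.log(p / (1 - p))
        p_model = 1 / (1 + np.exp(-1.8 * logit(p_true)))   # miscalibrated

        # Mean calibration ("calibration-in-the-large") and weak calibration
        # (logistic recalibration of outcomes on the model's log-odds).
        lr = LogisticRegression(C=1e6).fit(logit(p_model).reshape(-1, 1), events)
        print("mean calibration:", events.mean(), "vs", p_model.mean())
        print("intercept, slope:", lr.intercept_[0], lr.coef_[0][0])  # slope ~ 1/1.8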

  3. Model Free Gate Design and Calibration For Superconducting Qubits

    NASA Astrophysics Data System (ADS)

    Egger, Daniel; Wilhelm, Frank

    2014-03-01

    Gates for superconducting qubits are realized by time-dependent control pulses. The pulse shape for a specific gate depends on the parameters of the superconducting qubits, e.g. frequency and non-linearity. Based on one's knowledge of these parameters, and using a specific model, the pulse shape is determined either analytically or numerically using optimal control [arXiv:1306.6894, arXiv:1306.2279]. However, the performance of the pulse is limited by the accuracy of the model. For a pulse with few parameters this is generally not a problem, since it can be "debugged" manually. Here we present an automated method for calibrating multiparameter pulses. We use the Nelder-Mead simplex method to close the control loop. This scheme uses the experiment as feedback and thus does not need a model. It requires few iterations and circumvents process tomography, therefore making it a fast and versatile tool for gate design.
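
    A minimal sketch of such a model-free loop, with a mock "experiment" standing in for the actual gate-fidelity measurement (the pulse parameters and cost surface are hypothetical):

        import numpy as np
        from scipy.optimize import minimize

        rng = np.random.default_rng(0)

        def run_experiment(pulse_params):
            """Stand-in for the qubit experiment: returns a measured gate
            infidelity for a pulse, including shot noise. Hypothetical model."""
            amp, freq_detune = pulse_params
            infidelity = (amp - 0.93) ** 2 + 0.5 * (freq_detune - 0.12) ** 2
            return infidelity + rng.normal(scale=1e-4)

        # Model-free closed loop: the simplex queries only the experiment,
        # never a Hamiltonian model of the device.
        res = minimize(run_experiment, x0=[1.0, 0.0], method="Nelder-Mead",
                       options={"xatol": 1e-3, "fatol": 1e-4, "maxiter": 200})
        print(res.x, res.fun)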

  4. Mars Entry Atmospheric Data System Modeling, Calibration, and Error Analysis

    NASA Technical Reports Server (NTRS)

    Karlgaard, Christopher D.; VanNorman, John; Siemers, Paul M.; Schoenenberger, Mark; Munk, Michelle M.

    2014-01-01

    The Mars Science Laboratory (MSL) Entry, Descent, and Landing Instrumentation (MEDLI)/Mars Entry Atmospheric Data System (MEADS) project installed seven pressure ports through the MSL Phenolic Impregnated Carbon Ablator (PICA) heatshield to measure heatshield surface pressures during entry. These measured surface pressures are used to generate estimates of atmospheric quantities based on modeled surface pressure distributions. In particular, the quantities to be estimated from the MEADS pressure measurements include the dynamic pressure, angle of attack, and angle of sideslip. This report describes the calibration of the pressure transducers utilized to reconstruct the atmospheric data and associated uncertainty models, pressure modeling and uncertainty analysis, and system performance results. The results indicate that the MEADS pressure measurement system hardware meets the project requirements.

  5. Antithrombin III in animal models of sepsis and organ failure.

    PubMed

    Dickneite, G

    1998-01-01

    Antithrombin III (AT III) is the physiological inhibitor of thrombin and other serine proteases of the clotting cascade. In the development of sepsis, septic shock and organ failure, the plasma levels of AT III decrease considerably, suggesting the concept of a substitution therapy with the inhibitor. A decrease of AT III plasma levels might also be associated with other pathological disorders like trauma, burns, pancreatitis or preeclampsia. Activation of coagulation and consumption of AT III is the consequence of a generalized inflammation called SIRS (systemic inflammatory response syndrome). The clotting cascade is also frequently activated after organ transplantation, especially if organs are grafted between different species (xenotransplantation). During the past years AT III has been investigated in numerous corresponding disease models in different animal species, which are reviewed here. The bulk of evidence suggests that AT III substitution reduces morbidity and mortality in the diseased animals. With growing experience with AT III, the concept of substitution therapy to maximal baseline plasma levels (100%) appears to be insufficient. Evidence from clinical and preclinical studies now suggests adjusting the AT III plasma levels to about 200%, i.e., doubling the normal value. During the last few years several authors have proposed that AT III might not only be an anti-thrombotic agent, but might in addition have an anti-inflammatory effect.

  6. Calibration Modeling Methodology to Optimize Performance for Low Range Applications

    NASA Technical Reports Server (NTRS)

    McCollum, Raymond A.; Commo, Sean A.; Parker, Peter A.

    2010-01-01

    Calibration is a vital process in characterizing the performance of an instrument in an application environment and seeks to obtain acceptable accuracy over the entire design range. Often, project requirements specify a maximum total measurement uncertainty expressed as a percent of full scale. In some applications, however, we seek enhanced performance at the low end of the range, and expressing the accuracy as a percent of reading should then be considered as a modeling strategy. For example, it is common to desire to use a force balance in multiple facilities or regimes, often well below its designed full-scale capacity. This paper presents a general statistical methodology for optimizing calibration mathematical models based on a percent-of-reading accuracy requirement, which has broad applicability to transducers where low-range performance is required. A case study illustrates the proposed methodology for the Mars Entry Atmospheric Data System, which employs seven strain-gage based pressure transducers mounted on the heatshield of the Mars Science Laboratory mission.
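
    One simple way to encode a percent-of-reading requirement in the calibration regression (a generic device, not necessarily the paper's exact formulation) is to minimize relative rather than absolute residuals,

        \hat{\beta} = \arg\min_{\beta} \sum_{i} \left( \frac{y_i - f(x_i; \beta)}{y_i} \right)^{2}

    which penalizes each residual as a fraction of the reading rather than as a fraction of full scale.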

  7. A Solvatochromic Model Calibrates Nitriles’ Vibrational Frequencies to Electrostatic Fields

    PubMed Central

    Bagchi, Sayan; Fried, Stephen D.; Boxer, Steven G.

    2012-01-01

    Electrostatic interactions provide a primary connection between a protein’s three-dimensional structure and its function. Infrared (IR) probes are useful because vibrational frequencies of certain chemical groups, such as nitriles, are linearly sensitive to local electrostatic field, and can serve as a molecular electric field meter. IR spectroscopy has been used to study electrostatic changes or fluctuations in proteins, but measured peak frequencies have not been previously mapped to total electric fields, because of the absence of a field-frequency calibration and the complication of local chemical effects such as H-bonds. We report a solvatochromic model that provides a means to assess the H-bonding status of aromatic nitrile vibrational probes, and calibrates their vibrational frequencies to electrostatic field. The analysis involves correlations between the nitrile’s IR frequency and its 13C chemical shift, whose observation is facilitated by a robust method for introducing isotopes into aromatic nitriles. The method is tested on the model protein Ribonuclease S (RNase S) containing a labeled p-CN-Phe near the active site. Comparison of the measurements in RNase S against solvatochromic data gives an estimate of the average total electrostatic field at this location. The value determined agrees quantitatively with MD simulations, suggesting broader potential for the use of IR probes in the study of protein electrostatics. PMID:22694663

  8. A solvatochromic model calibrates nitriles' vibrational frequencies to electrostatic fields.

    PubMed

    Bagchi, Sayan; Fried, Stephen D; Boxer, Steven G

    2012-06-27

    Electrostatic interactions provide a primary connection between a protein's three-dimensional structure and its function. Infrared probes are useful because vibrational frequencies of certain chemical groups, such as nitriles, are linearly sensitive to local electrostatic field and can serve as a molecular electric field meter. IR spectroscopy has been used to study electrostatic changes or fluctuations in proteins, but measured peak frequencies have not been previously mapped to total electric fields, because of the absence of a field-frequency calibration and the complication of local chemical effects such as H-bonds. We report a solvatochromic model that provides a means to assess the H-bonding status of aromatic nitrile vibrational probes and calibrates their vibrational frequencies to electrostatic field. The analysis involves correlations between the nitrile's IR frequency and its (13)C chemical shift, whose observation is facilitated by a robust method for introducing isotopes into aromatic nitriles. The method is tested on the model protein ribonuclease S (RNase S) containing a labeled p-CN-Phe near the active site. Comparison of the measurements in RNase S against solvatochromic data gives an estimate of the average total electrostatic field at this location. The value determined agrees quantitatively with molecular dynamics simulations, suggesting broader potential for the use of IR probes in the study of protein electrostatics.
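
    The field-frequency calibration in both records rests on the linear vibrational Stark relation; in its usual form (stated here for reference, not as the authors' exact model),

        \bar{\nu}_{obs} = \bar{\nu}_{0} - \frac{1}{hc}\, \Delta\vec{\mu} \cdot \vec{F}

    where the Stark tuning rate (difference dipole) of the probe is denoted by Delta-mu and F is the local electrostatic field, so a measured frequency shift maps linearly onto a field change.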

  9. How useful are stream level observations for model calibration?

    NASA Astrophysics Data System (ADS)

    Seibert, Jan; Vis, Marc; Pool, Sandra

    2014-05-01

    Streamflow estimation in ungauged basins is especially challenging in data-scarce regions, and it might be reasonable to take at least a few measurements. Recent studies demonstrated that a few streamflow measurements, representing data that could be collected with limited effort in an ungauged basin, might be enough to constrain runoff models for simulations in ungauged basins. While in these previous studies we assumed that a few streamflow measurements were taken at different points in time over one year, it would obviously also be reasonable to measure stream levels. Several approaches could be used in practice for such stream level observations: water level loggers have become less expensive and easier to install and can be used to obtain continuous stream level time series; stream levels will in the near future be increasingly available from satellite remote sensing, resulting in evenly spaced time series; community-based approaches (e.g., crowdhydrology.org), finally, can offer level observations at irregular time intervals. Here we present a study where a catchment runoff model (the HBV model) was calibrated for gauged basins in Switzerland assuming that only a subset of the data was available. We pretended that only stream level observations at different time intervals, representing the temporal resolution of the different observation approaches mentioned before, and a small number of streamflow observations were available. The model, which was calibrated based on these data subsets, was then evaluated against the full observed streamflow record. Our results indicate that stream level data alone already provide surprisingly good model simulation results, which can be further improved by the combination with one streamflow observation. The surprisingly good results with only stream level time series can be explained by the relatively high precipitation in the studied catchments. Constructing a hypothetical catchment with reduced precipitation resulted in poorer

  10. Rainfall stochastic disaggregation models: Calibration and validation of a multiplicative cascade model

    NASA Astrophysics Data System (ADS)

    Gaume, E.; Mouhous, N.; Andrieu, H.

    2007-05-01

    The simulation of long time series of rainfall rates at short time steps remains an important issue for various applications in hydrology. Among the various types of simulation models, random multiplicative cascade models (RMC models) appear as an appealing solution, with the advantages of being parameter-parsimonious and linked to multifractal theory. This paper deals with the calibration and validation of RMC models. More precisely, it discusses the limits of the scaling exponent function method often used to calibrate RMC models, and presents a hydrological validation of calibrated RMC models. An 8-year time series of 1-min rainfall rates is used for the calibration and the validation of the tested models. The paper is organized in three parts. In the first part, the scaling invariance properties of the studied rainfall series are shown using various methods (q-moments, PDMS, autocovariance structure) and a RMC model is calibrated on the basis of the rainfall data scaling exponent function. A detailed analysis of the obtained results reveals that the shape of the scaling exponent function, and hence the values of the calibrated parameters of the RMC model, are highly sensitive to sampling fluctuation and may also be biased. In the second part, the origin of the sensitivity to sampling fluctuation and of the bias is studied in detail and a modified Jackknife estimator is tested to reduce the bias. Finally, two hydrological applications are proposed to validate two candidate RMC models: a canonical model based on a log-Poisson random generator, and a basic micro-canonical model based on a uniform random generator. This third part tests whether the models faithfully reproduce the statistical distribution of rainfall characteristics on which they have not been calibrated. The results obtained for the two validation tests are relatively satisfactory but also show that the temporal structure of the measured rainfall time series at small time steps is not well
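
    A minimal sketch of the basic micro-canonical cascade named above, with a uniform random generator and exact mass conservation (the parameters are illustrative, not the calibrated model):

        import numpy as np

        rng = np.random.default_rng(5)

        def microcanonical_cascade(total_depth, levels):
            """Disaggregate a rainfall depth over 2**levels sub-intervals with a
            basic micro-canonical multiplicative cascade: at each branching the
            mass splits as (w, 1-w) with w uniform, so mass is conserved exactly."""
            series = np.array([total_depth])
            for _ in range(levels):
                w = rng.uniform(size=series.size)   # one weight per parent cell
                series = np.column_stack([series * w, series * (1 - w)]).ravel()
            return series

        # e.g. split a 24 mm daily total down to 1024 sub-intervals (~84 s each)
        rain = microcanonical_cascade(24.0, 10)
        print(rain.size, rain.sum())                # 1024, 24.0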

  11. Strain Gage Loads Calibration Testing with Airbag Support for the Gulfstream III SubsoniC Research Aircraft Testbed (SCRAT)

    NASA Technical Reports Server (NTRS)

    Lokos, William A.; Miller, Eric J.; Hudson, Larry D.; Holguin, Andrew C.; Neufeld, David C.; Haraguchi, Ronnie

    2015-01-01

    This paper describes the design and conduct of the strain-gage load calibration ground test of the SubsoniC Research Aircraft Testbed, Gulfstream III aircraft, and the subsequent data analysis and results. The goal of this effort was to create and validate multi-gage load equations for shear force, bending moment, and torque for two wing measurement stations. For some of the testing the aircraft was supported by three airbags in order to isolate the wing structure from extraneous load inputs through the main landing gear. Thirty-two strain gage bridges were installed on the left wing. Hydraulic loads were applied to the wing lower surface through a total of 16 load zones. Some dead-weight load cases were applied to the upper wing surface using shot bags. Maximum applied loads reached 54,000 lb. Twenty-six load cases were applied with the aircraft resting on its landing gear, and 16 load cases were performed with the aircraft supported by the nose gear and three airbags around the center of gravity. Maximum wing tip deflection reached 17 inches. An assortment of 2, 3, 4, and 5 strain-gage load equations were derived and evaluated against independent check cases. The better load equations had root mean square errors less than 1 percent. Test techniques and lessons learned are discussed.

  12. Modeling Prairie Pothole Lakes: Linking Satellite Observation and Calibration (Invited)

    NASA Astrophysics Data System (ADS)

    Schwartz, F. W.; Liu, G.; Zhang, B.; Yu, Z.

    2009-12-01

    This paper examines the response of a complex lake wetland system to variations in climate. The focus is on the lakes and wetlands of the Missouri Coteau, which is part of the larger Prairie Pothole Region of the Central Plains of North America. Information on lake size was enumerated from satellite images, and yielded power law relationships for different hydrological conditions. More traditional lake-stage data were made available to us from the USGS Cottonwood Lake Study Site in North Dakota. A Probabilistic Hydrologic Model (PHM) was developed to simulate lake complexes comprised of tens-of-thousands or more individual closed-basin lakes and wetlands. What is new about this model is a calibration scheme that utilizes remotely-sensed data on lake area as well as stage data for individual lakes. Some ¼ million individual data points are used within a Genetic Algorithm to calibrate the model by comparing the simulated results with observed lake area-frequency power law relationships derived from Landsat images and water depths from seven individual lakes and wetlands. The simulated lake behaviors show good agreement with the observations under average, dry, and wet climatic conditions. The calibrated model is used to examine the impact of climate variability on a large lake complex in ND, in particular, the “Dust Bowl Drought” 1930s. This most famous drought of the 20th Century devastated the agricultural economy of the Great Plains with health and social impacts lingering for years afterwards. Interestingly, the drought of 1930s is unremarkable in relation to others of greater intensity and frequency before AD 1200 in the Great Plains. Major droughts and deluges have the ability to create marked variability of the power law function (e.g. up to one and a half orders of magnitude variability from the extreme Dust Bowl Drought to the extreme 1993-2001 deluge). This new probabilistic modeling approach provides a novel tool to examine the response of the

  13. Soil and water assessment tool model calibration results for different catchment sizes in poland.

    PubMed

    Ostojski, Mieczyslaw S; Niedbala, Jerzy; Orlinska-Wozniak, Paulina; Wilk, Pawel; Gębala, Joanna

    2014-01-01

    The watershed model SWAT (Soil and Water Assessment Tool) can be used to implement the requirements of international agreements that Poland has ratified. Among these requirements are the establishment of catchment-based, rather than administrative-based, management plans and spatial information systems. Furthermore, Polish law requires that management of water resources be based on catchment systems. This article explores the use of the SWAT model in the implementation of catchment-based water management in Poland. Specifically, the impacts of basin size on calibration and on the results of the simulation process were analyzed. SWAT was set up and calibrated for three Polish watersheds of varying sizes: (i) Gąsawka, a small basin (593.7 km²), (ii) Rega, a medium-sized basin (2766.8 km²), and (iii) Warta, a large basin (54,500 km²) representing about 17.4% of Polish territory. The results indicated that the size of the catchment has an impact on the calibration process and simulation outputs. Several factors influenced by the size of the catchment affected the modeling results. Among these factors are the number of measurement points within the basin and the length of the measuring period and data quality at checkpoints as determined by the position of the measuring station. It was concluded that the SWAT model is a suitable tool for the implementation of catchment-based water management in Poland regardless of watershed size. Copyright © by the American Society of Agronomy, Crop Science Society of America, and Soil Science Society of America, Inc.

  14. Bayesian calibration of hyperelastic constitutive models of soft tissue.

    PubMed

    Madireddy, Sandeep; Sista, Bhargava; Vemaganti, Kumar

    2016-06-01

    There is inherent variability in the experimental response used to characterize the hyperelastic mechanical response of soft tissues. This has to be accounted for while estimating the parameters in the constitutive models to obtain reliable estimates of the quantities of interest. The traditional least squares method of parameter estimation does not give due importance to this variability. We use a Bayesian calibration framework based on nested Monte Carlo sampling to account for the variability in the experimental data and its effect on the estimated parameters through a systematic probability-based treatment. We consider three different constitutive models to represent the hyperelastic nature of soft tissue: Mooney-Rivlin model, exponential model, and Ogden model. Three stress-strain data sets corresponding to the deformation of agarose gel, bovine liver tissue, and porcine brain tissue are considered. Bayesian fits and parameter estimates are compared with the corresponding least squares values. Finally, we propagate the uncertainty in the parameters to a quantity of interest (QoI), namely the force-indentation response, to study the effect of model form on the values of the QoI. Our results show that the quality of the fit alone is insufficient to determine the adequacy of the model, and due importance has to be given to the maximum likelihood value, the landscape of the likelihood distribution, and model complexity. Copyright © 2015 Elsevier Ltd. All rights reserved.
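
    For reference, common forms of the three strain-energy functions named above (the exponential family in particular has several variants; one Fung-type form is shown, so take these as representative rather than the paper's exact expressions):

        W_{MR}    = C_{10}(\bar{I}_1 - 3) + C_{01}(\bar{I}_2 - 3)
        W_{exp}   = \frac{A}{B}\left[ e^{B(\bar{I}_1 - 3)} - 1 \right]
        W_{Ogden} = \sum_{p=1}^{P} \frac{\mu_p}{\alpha_p}
                    \left( \lambda_1^{\alpha_p} + \lambda_2^{\alpha_p} + \lambda_3^{\alpha_p} - 3 \right)

    in terms of the isochoric invariants I1-bar, I2-bar and principal stretches lambda_i; the Bayesian calibration places a posterior distribution over (C10, C01), (A, B), or (mu_p, alpha_p) rather than reporting point estimates alone.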

  15. Empirical calibration of the near-infrared Ca II triplet - III. Fitting functions

    NASA Astrophysics Data System (ADS)

    Cenarro, A. J.; Gorgas, J.; Cardiel, N.; Vazdekis, A.; Peletier, R. F.

    2002-02-01

    Using a near-infrared stellar library of 706 stars with a wide coverage of atmospheric parameters, we study the behaviour of the CaII triplet strength in terms of effective temperature, surface gravity and metallicity. Empirical fitting functions for recently defined line-strength indices, namely CaT*, CaT and PaT, are provided. These functions can be easily implemented into stellar population models to provide accurate predictions for integrated CaII strengths. We also present a thorough study of the various error sources and their relation to the residuals of the derived fitting functions. Finally, the derived functional forms and the behaviour of the predicted CaII are compared with those of previous works in the field.

  16. Mathematical modelling to support traceable dynamic calibration of pressure sensors

    NASA Astrophysics Data System (ADS)

    Matthews, C.; Pennecchi, F.; Eichstädt, S.; Malengo, A.; Esward, T.; Smith, I.; Elster, C.; Knott, A.; Arrhén, F.; Lakka, A.

    2014-06-01

    This paper focuses on the mathematical modelling required to support the development of new primary standard systems for traceable calibration of dynamic pressure sensors. We address two fundamentally different approaches to realizing primary standards, specifically the shock tube method and the drop-weight method. Focusing on the shock tube method, the paper presents first results of system identification and discusses future experimental work that is required to improve the mathematical and statistical models. We use simulations to identify differences between the shock tube and drop-weight methods, to investigate sources of uncertainty in the system identification process and to assist experimentalists in designing the required measuring systems. We demonstrate the identification method on experimental results and draw conclusions.

  17. Using land subsidence observations for groundwater model calibration

    NASA Astrophysics Data System (ADS)

    Tufekci, Nesrin; Schoups, Gerrit; Faitouri, Mohamed Al; Mahapatra, Pooja; van de Giesen, Nick; Hanssen, Ramon

    2017-04-01

    PS-InSAR derived subsidence and groundwater level time series are used to calibrate a groundwater model of the Tazerbo well field, Libya, by estimating the spatially varying elastic skeletal storage (Sske) and hydraulic conductivity (Hk) of the model area. The Tazerbo well field is part of the Great Man-Made River Project (GMMRP), designed with 108 wells and a total pumping rate of 1 million m³/day. The water is pumped from the deep sandstone aquifer (Nubian sandstone), which is overlain by a thick mudstone-siltstone aquitard. Pumping-related deformation patterns around the Tazerbo well field are obtained by processing 20 descending Envisat scenes for the period between 2004 and 2010, which yield a concentrated deformation around the well field with a maximum deformation rate of about 4 mm/yr. The trends of the time series of groundwater head and subsidence are in good agreement for observation wells located in the vicinity of the pumping wells, and the pattern of subsidence correlates with the locations of active wells. At the beginning of calibration, different pairs of Sske and Hk are assigned at observation well locations by trial and error so that the simulation results of the forward model approximate the heads and mean linear deformation velocity at these locations. Accordingly, the estimated initial parameters suggest a relatively constant Hk (5 m/d) and an Sske increasing from south to north (1×10⁻⁶ m⁻¹ to 5×10⁻⁶ m⁻¹). In order to refine their spatial distribution, representative values of Sske and Hk are assigned at 25 equidistant points over the area, constrained by the predetermined values. To calibrate the parameters at the assigned locations, UCODE is used along with MATLAB. Once convergence is achieved, the estimated parameter values at these locations are held constant and new "in between" equidistant locations are determined at which to estimate Sske and Hk, in order to spatially refine their distribution. This approach is followed until the relation between observed and

  18. Calibration of Predictor Models Using Multiple Validation Experiments

    NASA Technical Reports Server (NTRS)

    Crespo, Luis G.; Kenny, Sean P.; Giesy, Daniel P.

    2015-01-01

    This paper presents a framework for calibrating computational models using data from several and possibly dissimilar validation experiments. The offset between model predictions and observations, which might be caused by measurement noise, model-form uncertainty, and numerical error, drives the process by which uncertainty in the model's parameters is characterized. The resulting description of uncertainty along with the computational model constitutes a predictor model. Two types of predictor models are studied: Interval Predictor Models (IPMs) and Random Predictor Models (RPMs). IPMs use sets to characterize uncertainty, whereas RPMs use random vectors. The propagation of a set through a model makes the response an interval-valued function of the state, whereas the propagation of a random vector yields a random process. Optimization-based strategies for calculating both types of predictor models are proposed. Whereas the formulations used to calculate IPMs target solutions leading to the interval-valued function of minimal spread containing all observations, those for RPMs seek to maximize the models' ability to reproduce the distribution of observations. Regarding RPMs, we choose a structure for the random vector (i.e., the assignment of probability to points in the parameter space) solely dependent on the prediction error. As such, the probabilistic description of uncertainty is not a subjective assignment of belief, nor is it expected to asymptotically converge to a fixed value, but instead it is a description of the model's ability to reproduce the experimental data. This framework enables evaluating the spread and distribution of the predicted response of target applications depending on the same parameters beyond the validation domain.
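    For a model linear in its parameters, the IPM of minimal spread can be posed as a linear program: find polynomial lower and upper bounds whose average separation is smallest while every observation lies between them. A minimal sketch under those simplifying assumptions (synthetic data; not the authors' exact formulation):

      import numpy as np
      from scipy.optimize import linprog

      # Hypothetical observations pooled from several validation experiments
      rng = np.random.default_rng(0)
      x = rng.uniform(-1.0, 1.0, 40)
      y = 1.0 + 0.5 * x + 0.3 * x**2 + rng.normal(0.0, 0.1, 40)

      phi = np.vander(x, 3)          # quadratic polynomial basis at the observations
      n, k = phi.shape

      # Decision variables z = [a, b]: coefficients of the lower bound l(x) = phi@a
      # and upper bound u(x) = phi@b.  Minimize the average spread u(x) - l(x)
      # subject to containment l(x_i) <= y_i <= u(x_i) for every observation.
      mean_phi = phi.mean(axis=0)
      c = np.concatenate([-mean_phi, mean_phi])
      A_ub = np.vstack([np.hstack([phi, np.zeros((n, k))]),    #  phi@a <= y
                        np.hstack([np.zeros((n, k)), -phi])])  # -phi@b <= -y
      b_ub = np.concatenate([y, -y])

      res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(None, None)] * (2 * k))
      a, b = res.x[:k], res.x[k:]
      print("lower-bound coeffs:", a)
      print("upper-bound coeffs:", b)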

  19. A New Perspective for the Calibration of Computational Predictor Models.

    SciTech Connect

    Crespo, Luis Guillermo

    2014-11-01

    This paper presents a framework for calibrating computational models using data from several and possibly dissimilar validation experiments. The offset between model predictions and observations, which might be caused by measurement noise, model-form uncertainty, and numerical error, drives the process by which uncertainty in the model's parameters is characterized. The resulting description of uncertainty along with the computational model constitutes a predictor model. Two types of predictor models are studied: Interval Predictor Models (IPMs) and Random Predictor Models (RPMs). IPMs use sets to characterize uncertainty, whereas RPMs use random vectors. The propagation of a set through a model makes the response an interval-valued function of the state, whereas the propagation of a random vector yields a random process. Optimization-based strategies for calculating both types of predictor models are proposed. Whereas the formulations used to calculate IPMs target solutions leading to the interval-valued function of minimal spread containing all observations, those for RPMs seek to maximize the models' ability to reproduce the distribution of observations. Regarding RPMs, we choose a structure for the random vector (i.e., the assignment of probability to points in the parameter space) solely dependent on the prediction error. As such, the probabilistic description of uncertainty is not a subjective assignment of belief, nor is it expected to asymptotically converge to a fixed value, but instead it is a description of the model's ability to reproduce the experimental data. This framework enables evaluating the spread and distribution of the predicted response of target applications depending on the same parameters beyond the validation domain (i.e., roll-up and extrapolation).

  20. Optimizing the lithography model calibration algorithms for NTD process

    NASA Astrophysics Data System (ADS)

    Hu, C. M.; Lo, Fred; Yang, Elvis; Yang, T. H.; Chen, K. C.

    2016-03-01

    As patterns shrink to the resolution limits of up-to-date ArF immersion lithography technology, the negative tone development (NTD) process has been an increasingly adopted technique for obtaining superior imaging quality, employing bright-field (BF) masks to print the critical dark-field (DF) metal and contact layers. However, from the fundamental materials and process interaction perspectives, several key differences inherently exist between the NTD process and the traditional positive tone development (PTD) system, especially the horizontal/vertical resist shrinkage and developer depletion effects; hence the traditional resist parameters developed for the typical PTD process no longer fit well in NTD process modeling. In order to cope with the inherent differences between PTD and NTD processes and thereby improve NTD modeling accuracy, several NTD models with different combinations of complementary terms were built to account for the NTD-specific resist shrinkage, developer depletion and diffusion, and wafer CD jump induced by sub-resolution assist feature (SRAF) effects. Each new complementary NTD term has a definite aim: to deal with a specific NTD phenomenon. In this study, the modeling accuracy is compared among the different models for the specific patterning characteristics of various feature types. Multiple complementary NTD terms were finally proposed to address all the NTD-specific behaviors simultaneously and further optimize the NTD modeling accuracy. The new algorithm with multiple complementary NTD terms, tested on our critical dark-field layers, demonstrates consistent model accuracy improvement for both calibration and verification.

  1. VS2DI: Model use, calibration, and validation

    USGS Publications Warehouse

    Healy, Richard W.; Essaid, Hedeff I.

    2012-01-01

    VS2DI is a software package for simulating water, solute, and heat transport through soils or other porous media under conditions of variable saturation. The package contains a graphical preprocessor for constructing simulations, a postprocessor for displaying simulation results, and numerical models that solve for flow and solute transport (VS2DT) and flow and heat transport (VS2DH). Flow is described by the Richards equation, and solute and heat transport are described by advection-dispersion equations; the finite-difference method is used to solve these equations. Problems can be simulated in one, two, or three (assuming radial symmetry) dimensions. This article provides an overview of calibration techniques that have been used with VS2DI; included is a detailed description of calibration procedures used in simulating the interaction between groundwater and a stream fed by drainage from agricultural fields in central Indiana. Brief descriptions of VS2DI and the various types of problems that have been addressed with the software package are also presented.
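    For reference, the governing equations named in this abstract have standard forms, which can be written as follows (a generic statement of the equations, not transcribed from the VS2DI documentation):

      % Richards equation (theta: volumetric moisture content, h: pressure head,
      % K(h): unsaturated hydraulic conductivity, z: elevation, q: source/sink)
      \frac{\partial \theta}{\partial t}
        = \nabla \cdot \left[ K(h)\, \nabla (h + z) \right] + q

      % Advection-dispersion equation for a solute of concentration c
      % (D: dispersion tensor, v: pore-water velocity, s: sources/sinks, reactions)
      \frac{\partial (\theta c)}{\partial t}
        = \nabla \cdot ( \theta \mathbf{D}\, \nabla c )
          - \nabla \cdot ( \theta \mathbf{v}\, c ) + s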

  2. A Comparison of Linking and Concurrent Calibration under the Graded Response Model.

    ERIC Educational Resources Information Center

    Kim, Seock-Ho; Cohen, Allan S.

    2002-01-01

    Compared two methods for developing a common metric for the graded response model under item response theory: (1) linking separate calibration runs using equating coefficients from the characteristic curve method; and (2) concurrent calibration using the combined data of the base and target groups. Concurrent calibration yielded consistently,…

  3. Mixture models for calibrating the BED for HIV incidence testing

    PubMed Central

    Mahiane, Severin Guy; Fiamma, Agnès; Auvert, Bertran

    2014-01-01

    A number of antibody biomarkers have been developed to distinguish between recent and established Human Immunodeficiency Virus (HIV) infection and used for HIV incidence estimation from cross-sectional specimens. In general, a cut-off value is specified and estimates of the following parameters are needed: a) the mean time interval (w) between seroconversion and reaching that cut-off, b) the probability of correctly identifying individuals who became infected in the last w years (sensitivity) and c) the probability of correctly identifying individuals who have been infected for more than w years (specificity). We develop two statistical methods to study the distribution of a biomarker and derive a formula for estimating HIV incidence from a cross-sectional survey. Both methods allow handling interval-censored data and basically consist of using a generalised mixture model to model the growth of the biomarker as a function of time since infection. The first uses data from all followed-up individuals and allows incidence estimation in the cohort, while the second only uses data from seroconverters. We illustrate our methods using repeated measures of the IgG capture BED enzyme immunoassay. Estimates of the calibration parameters, i.e. mean window period, mean recency period, sensitivity and specificities, obtained from both models are comparable. The formula derived for incidence estimation gives the maximum likelihood estimate of incidence which, for a given window period, depends only on sensitivity and specificity. The optimal choice of the window period is discussed. Numerical simulations suggest that data from seroconverters can provide reasonable estimates of the calibration parameters. PMID:24834521

  4. Parameter Estimation for the Thurstone Case III Model.

    ERIC Educational Resources Information Center

    Mackay, David B.; Chaiy, Seoil

    1982-01-01

    The ability of three estimation criteria to recover parameters of the Thurstone Case V and Case III models from comparative judgment data was investigated via Monte Carlo techniques. Significant differences in recovery are shown to exist. (Author/JKS)

  5. Root zone water quality model (RZWQM2): Model use, calibration and validation

    USGS Publications Warehouse

    Ma, Liwang; Ahuja, Lajpat; Nolan, B.T.; Malone, Robert; Trout, Thomas; Qi, Z.

    2012-01-01

    The Root Zone Water Quality Model (RZWQM2) has been used widely for simulating agricultural management effects on crop production and soil and water quality. Although it is a one-dimensional model, it has many desirable features for the modeling community. This article outlines the principles of calibrating the model component by component with one or more datasets and validating the model with independent datasets. Users should consult the RZWQM2 user manual distributed along with the model and a more detailed protocol on how to calibrate RZWQM2 provided in a book chapter. Two case studies (or examples) are included in this article. One is from an irrigated maize study in Colorado to illustrate the use of field and laboratory measured soil hydraulic properties on simulated soil water and crop production. It also demonstrates the interaction between soil and plant parameters in simulated plant responses to water stresses. The other is from a maize-soybean rotation study in Iowa to show a manual calibration of the model for crop yield, soil water, and N leaching in tile-drained soils. Although the commonly used trial-and-error calibration method works well for experienced users, as shown in the second example, an automated calibration procedure is more objective, as shown in the first example. Furthermore, the incorporation of the Parameter Estimation Software (PEST) into RZWQM2 made the calibration of the model more efficient than a grid (ordered) search of model parameters. In addition, PEST provides sensitivity and uncertainty analyses that should help users in selecting the right parameters to calibrate.

  6. Development and Calibration of Reaction Models for Multilayered Nanocomposites

    NASA Astrophysics Data System (ADS)

    Vohra, Manav

    This dissertation focuses on the development and calibration of reaction models for multilayered nanocomposites. The nanocomposites comprise sputter deposited alternating layers of distinct metallic elements. Specifically, we focus on the equimolar Ni-Al and Zr-Al multilayered systems. Computational models are developed to capture the transient reaction phenomena as well as understand the dependence of reaction properties on the microstructure, composition and geometry of the multilayers. Together with the available experimental data, simulations are used to calibrate the models and enhance the accuracy of their predictions. Recent modeling efforts for the Ni-Al system have investigated the nature of self-propagating reactions in the multilayers. Model fidelity was enhanced by incorporating melting effects due to aluminum [Besnoin et al. (2002)]. Salloum and Knio formulated a reduced model to mitigate computational costs associated with multi-dimensional reaction simulations [Salloum and Knio (2010a)]. However, existing formulations relied on a single Arrhenius correlation for diffusivity, estimated for the self-propagating reactions, and cannot be used to quantify mixing rates at lower temperatures within reasonable accuracy [Fritz (2011)]. We thus develop a thermal model for a multilayer stack comprising a reactive Ni-Al bilayer (nanocalorimeter) and exploit temperature evolution measurements to calibrate the diffusion parameters associated with solid state mixing (≈720-860 K) in the bilayer. The equimolar Zr-Al multilayered system when reacted aerobically is shown to exhibit slow aerobic oxidation of zirconium (in the intermetallic), sustained for about 2-10 seconds after completion of the formation reaction. In a collaborative effort, we aim to exploit the sustained heat release for bio-agent defeat application. A simplified computational model is developed to capture the extended reaction regime characterized by oxidation of Zr-Al multilayers
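    The diffusivity correlations mentioned above are of Arrhenius form, D(T) = D0 exp(-Ea/RT), so calibrating them reduces to estimating D0 and Ea. A minimal sketch of that step by linearizing ln D = ln D0 - Ea/(RT); the diffusivity values below are made up for illustration:

      import numpy as np

      R = 8.314  # gas constant, J/(mol K)

      # Hypothetical diffusivity estimates in the solid-state mixing range
      T = np.array([720.0, 760.0, 800.0, 860.0])          # temperature, K
      D = np.array([2.0e-19, 1.1e-18, 5.0e-18, 4.0e-17])  # m^2/s, illustrative

      # Linear regression of ln D on 1/T:  ln D = ln D0 - (Ea/R) * (1/T)
      slope, intercept = np.polyfit(1.0 / T, np.log(D), 1)
      Ea = -slope * R          # activation energy, J/mol
      D0 = np.exp(intercept)   # pre-exponential factor, m^2/s
      print(f"Ea = {Ea / 1000:.1f} kJ/mol, D0 = {D0:.3e} m^2/s")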

  7. Operation and calibration of the Wincharger 450 model SWECS

    NASA Astrophysics Data System (ADS)

    Bryant, P. J.; Boeh, M.

    This paper presents an analysis of the operation of the new 450 model Wincharger. Assembly, testing, output power calibrations and other operational parameters are presented. Techniques of testing are described, including the use of a pickup truck for Controlled Velocity Tests (CVT). The measured output power was just above the rated values when only 12 volts was applied to the generator field. When a separate and constant 15 volt field was applied the output ranged from 46 watts for a 10 mi/h wind speed to 1146 watts for 35 mi/h. At the rated 25 mi/h speed an output of 774 watts was obtained by tuning a resistive load. These values are much greater than the ratings for this unit. However, it is being tested here with a separate field supply and without a voltage regulator.

  8. Linking big models to big data: efficient ecosystem model calibration through Bayesian model emulation

    NASA Astrophysics Data System (ADS)

    Fer, I.; Kelly, R.; Andrews, T.; Dietze, M.; Richardson, A. D.

    2016-12-01

    Our ability to forecast ecosystems is limited by how well we parameterize ecosystem models. Direct measurements for all model parameters are not always possible, and inverse estimation of these parameters through Bayesian methods is computationally costly. A solution to the computational challenges of Bayesian calibration is to approximate the posterior probability surface using a Gaussian process that emulates the complex process-based model. Here we report the integration of this method within an ecoinformatics toolbox, the Predictive Ecosystem Analyzer (PEcAn), and its application with two ecosystem models: SIPNET and ED2.1. SIPNET is a simple model, allowing application of MCMC methods both to the model itself and to its emulator. We used both approaches to assimilate flux (CO2 and latent heat), soil respiration, and soil carbon data from Bartlett Experimental Forest. This comparison showed that the emulator is reliable in terms of convergence to the posterior distribution. A 10,000-iteration MCMC analysis with SIPNET itself required more than two orders of magnitude greater computation time than an MCMC run of the same length with its emulator. This difference would be greater for a more computationally demanding model. Validation of the emulator-calibrated SIPNET against both the assimilated data and out-of-sample data showed improved fit and reduced uncertainty around model predictions. We next applied the validated emulator method to ED2, whose complexity precludes standard Bayesian data assimilation. We used the ED2 emulator to assimilate demographic data from a network of inventory plots. For validation of the calibrated ED2, we compared the model to results from Empirical Succession Mapping (ESM), a novel synthesis of successional patterns in Forest Inventory and Analysis data. Our results revealed that while the pre-assimilation ED2 formulation cannot capture the emergent demographic patterns from the ESM analysis, constrained model parameters controlling demographic
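    A compact illustration of the emulator idea (a sketch, not the PEcAn implementation): evaluate the expensive model's log-likelihood at a small parameter design, fit a Gaussian process to those values, and run MCMC on the cheap GP surface instead of the model itself. Everything below, including the stand-in model, is invented for illustration:

      import numpy as np
      from sklearn.gaussian_process import GaussianProcessRegressor
      from sklearn.gaussian_process.kernels import RBF, ConstantKernel

      rng = np.random.default_rng(1)

      def expensive_log_like(p):
          """Stand-in for a costly process model; a real model run goes here."""
          return -0.5 * ((p[0] - 0.3) ** 2 / 0.01 + (p[1] - 0.7) ** 2 / 0.04)

      # 1. Design: evaluate the model at a small set of parameter points
      design = rng.uniform(0.0, 1.0, size=(40, 2))
      logp = np.array([expensive_log_like(p) for p in design])

      # 2. Emulate the log-likelihood surface with a Gaussian process
      gp = GaussianProcessRegressor(ConstantKernel() * RBF([0.2, 0.2]),
                                    normalize_y=True).fit(design, logp)

      # 3. Metropolis MCMC on the emulator (each step is now nearly free)
      chain = np.empty((5000, 2))
      cur = np.array([0.5, 0.5])
      cur_lp = gp.predict(cur.reshape(1, -1))[0]
      for i in range(chain.shape[0]):
          prop = cur + rng.normal(0.0, 0.05, 2)
          if np.all((prop >= 0.0) & (prop <= 1.0)):
              prop_lp = gp.predict(prop.reshape(1, -1))[0]
              if np.log(rng.uniform()) < prop_lp - cur_lp:
                  cur, cur_lp = prop, prop_lp
          chain[i] = cur

      print("posterior mean:", chain[1000:].mean(axis=0))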

  9. Hydrological processes and model representation: impact of soft data on calibration

    Treesearch

    J.G. Arnold; M.A. Youssef; H. Yen; M.J. White; A.Y. Sheshukov; A.M. Sadeghi; D.N. Moriasi; J.L. Steiner; Devendra Amatya; R.W. Skaggs; E.B. Haney; J. Jeong; M. Arabi; P.H. Gowda

    2015-01-01

    Hydrologic and water quality models are increasingly used to determine the environmental impacts of climate variability and land management. Due to differing model objectives and differences in monitored data, there are currently no universally accepted procedures for model calibration and validation in the literature. In an effort to develop accepted model calibration...

  10. Air pollution modeling and its application III

    SciTech Connect

    De Wispelaere, C.

    1984-01-01

    This book focuses on the Lagrangian modeling of air pollution. Modeling cooling tower and power plant plumes, modeling the dispersion of heavy gases, remote sensing as a tool for air pollution modeling, dispersion modeling including photochemistry, and the evaluation of model performances in practical applications are discussed. Specific topics considered include dispersion in the convective boundary layer, the application of personal computers to Lagrangian modeling, the dynamic interaction of cooling tower and stack plumes, the diffusion of heavy gases, correlation spectrometry as a tool for mesoscale air pollution modeling, Doppler acoustic sounding, tetroon flights, photochemical air quality simulation modeling, acid deposition of photochemical oxidation products, atmospheric diffusion modeling, applications of an integral plume rise model, and the estimation of diffuse hydrocarbon leakages from petrochemical factories. This volume constitutes the proceedings of the Thirteenth International Technical Meeting on Air Pollution Modeling and Its Application held in France in 1982.

  11. Simultaneous Semi-Distributed Model Calibration Guided by ...

    EPA Pesticide Factsheets

    Modelling approaches that transfer hydrologically relevant information from locations with streamflow measurements to locations without such measurements continue to be an active field of research for hydrologists. The Pacific Northwest Hydrologic Landscapes (PNW HL) provide a solid conceptual classification framework based on our understanding of dominant processes. A Hydrologic Landscape code (a 5-letter descriptor based on physical and climatic properties) describes each assessment unit, and these units average 60 km² in area. The core function of these HL codes is to relate and transfer hydrologically meaningful information between watersheds without the need for streamflow time series. We present a novel approach based on the HL framework to answer the question "How can we calibrate models across separate watersheds simultaneously, guided by our understanding of dominant processes?". We should be able to apply the same parameterizations to assessment units with common HL codes if 1) the Hydrologic Landscapes contain hydrologic information transferable between watersheds at a sub-watershed scale and 2) we use a conceptual hydrologic model and parameters that reflect the hydrologic behavior of a watershed. This work specifically tests whether HL codes can be used to inform and share model parameters across watersheds in the Pacific Northwest. EPA's Western Ecology Division has published and is refining a framework for defining la

  12. Finite element model calibration of a nonlinear perforated plate

    NASA Astrophysics Data System (ADS)

    Ehrhardt, David A.; Allen, Matthew S.; Beberniss, Timothy J.; Neild, Simon A.

    2017-03-01

    This paper presents a case study in which the finite element model for a curved circular plate is calibrated to reproduce both the linear and nonlinear dynamic response measured from two nominally identical samples. The linear dynamic response is described with the linear natural frequencies and mode shapes identified with a roving hammer test. Due to the uncertainty in the stiffness characteristics from the manufactured perforations, the linear natural frequencies are used to update the effective modulus of elasticity of the full order finite element model (FEM). The nonlinear dynamic response is described with nonlinear normal modes (NNMs) measured using force appropriation and high speed 3D digital image correlation (3D-DIC). The measured NNMs are used to update the boundary conditions of the full order FEM through comparison with NNMs calculated from a nonlinear reduced order model (NLROM). This comparison revealed that the nonlinear behavior could not be captured without accounting for the small curvature of the plate from manufacturing as confirmed in literature. So, 3D-DIC was also used to identify the initial static curvature of each plate and the resulting curvature was included in the full order FEM. The updated models are then used to understand how the stress distribution changes at large response amplitudes providing a possible explanation of failures observed during testing.

  13. Calibration and testing of models of the global carbon cycle

    SciTech Connect

    Emanuel, W.R.; Killough, G.G.; Shugart, H.H. Jr.

    1980-01-01

    A ten-compartment model of the global biogeochemical cycle of carbon is presented. The two less-abundant isotopes of carbon, ¹³C and ¹⁴C, as well as total carbon, are considered. The cycling of carbon in the ocean is represented by two well-mixed compartments and in the world's terrestrial ecosystems by seven compartments, five of which are dynamic and two with instantaneous transfer. An internally consistent procedure for calibrating this model against an assumed initial steady state is discussed. In particular, the constraint that the average ¹³C/¹²C ratio in the total flux from the terrestrial component of the model to the atmosphere be equal to that of the steady-state atmosphere is investigated. With this additional constraint, the model provides a more accurate representation of the influence of the terrestrial system on the ¹³C/¹²C ratio of the atmosphere and provides an improved basis for interpreting records, such as tree rings, that reflect historical changes in this ratio.

  14. Using Runoff Data to Calibrate the Community Land Model

    NASA Astrophysics Data System (ADS)

    Ray, J.; Hou, Z.; Huang, M.; Swiler, L.

    2014-12-01

    We present a statistical method for calibrating the Community Land Model (CLM) using streamflow observations collected between 1999 and 2008 at the outlets of two river basins from the Model Parameter Estimation Experiment (MOPEX): the Oostanaula River at Resaca, GA, and the Walnut River at Winfield, KS. The observed streamflow shows variability over a large range of time-scales, none of which significantly dominates the others; consequently, the time series appears noisy and is difficult to use directly in model parameter estimation efforts without significant filtering. We perform a multi-resolution wavelet decomposition of the observed streamflow, and use the wavelet power coefficients (WPC) as the tuning data. We construct a mapping (a surrogate model) between WPC and three hydrological parameters of the CLM using a training set of 256 CLM runs. The dependence of WPC on the parameters is complex and cannot be captured using a surrogate unless the parameter combinations yield physically plausible model predictions, i.e., those that are skillful when compared to observations. Retaining only the top quartile of the runs ensures skillfulness, as measured by the RMS error between observations and CLM predictions. This "screening" of the training data yields a region (the "valid" region) in the parameter space where accurate surrogate models can be created. We construct a classifier for the "valid" region and, in conjunction with the surrogate models for WPC, pose a Bayesian inverse problem for the three hydrological parameters. The inverse problem is solved using an adaptive Markov chain Monte Carlo (MCMC) method to construct a three-dimensional posterior distribution for the hydrological parameters. Posterior predictive tests using the surrogate model reveal that the posterior distribution is more predictive than the nominal values of the parameters, which are used as default values in the current version of CLM. The effectiveness of the inversion is then validated by
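    A minimal sketch of the multi-resolution preprocessing step described here, using the PyWavelets package to turn a noisy daily streamflow series into per-scale wavelet power coefficients usable as calibration targets; the synthetic series, wavelet choice, and level count are illustrative assumptions:

      import numpy as np
      import pywt  # PyWavelets, assumed installed

      rng = np.random.default_rng(2)

      # Hypothetical 10 years of daily streamflow (m^3/s): seasonal cycle + noise
      t = np.arange(3650)
      flow = 50 + 30 * np.sin(2 * np.pi * t / 365.25) + rng.gamma(2.0, 5.0, t.size)

      # Multi-resolution discrete wavelet decomposition
      coeffs = pywt.wavedec(flow, "db4", level=6)

      # Wavelet power per scale: a compact, noise-robust summary of the series
      # that can replace the raw hydrograph as the tuning data.
      wpc = np.array([np.sum(c ** 2) for c in coeffs])
      print(wpc)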

  15. Hotspot detection and design recommendation using silicon calibrated CMP model

    NASA Astrophysics Data System (ADS)

    Hui, Colin; Wang, Xian Bin; Huang, Haigou; Katakamsetty, Ushasree; Economikos, Laertis; Fayaz, Mohammed; Greco, Stephen; Hua, Xiang; Jayathi, Subramanian; Yuan, Chi-Min; Li, Song; Mehrotra, Vikas; Chen, Kuang Han; Gbondo-Tugbawa, Tamba; Smith, Taber

    2009-03-01

    Chemical mechanical polishing (CMP) is an established step in the copper (Cu) damascene manufacturing process. It is well known that dishing and erosion occur during the CMP process, and they strongly depend on metal density and line width. The inherent thickness and topography variations become an increasing concern for today's designs running through advanced process nodes (sub-65 nm). Excessive thickness and topography variations can have major impacts on chip yield and performance; as such, they need to be accounted for during the design stage. In this paper, we demonstrate an accurate physics-based CMP model and its application for CMP-related hotspot detection. Model-based checking capability is most useful for identifying highly environment-sensitive layouts that are prone to early process-window limitation and hence failure. Model-based checking, as opposed to rule-based checking, can identify the weak points in a design more accurately and enable designers to provide improved layout for the areas with the highest leverage for manufacturability improvement. Further, CMP modeling has the ability to provide information on interlevel effects, such as copper puddling from underlying topography, that cannot be captured in Design-for-Manufacturing (DfM) recommended rules. The model has been calibrated against silicon produced with the 45 nm process from Common Platform (IBM-Chartered-Samsung) technology. It is one of the earliest 45 nm CMP models available today. We show that CMP-related hotspots can often occur around the spaces between analog macros and digital blocks in SoC designs. With the help of the CMP model-based prediction, the design, the dummy fill, or the placement of the blocks can be modified to improve planarity and eliminate CMP-related hotspots. The CMP model can be used to pass design recommendations to designers to improve chip yield and performance.

  16. Comparison of the Mortality Probability Admission Model III, National Quality Forum, and Acute Physiology and Chronic Health Evaluation IV hospital mortality models: implications for national benchmarking*.

    PubMed

    Kramer, Andrew A; Higgins, Thomas L; Zimmerman, Jack E

    2014-03-01

    To examine the accuracy of the original Mortality Probability Admission Model III, the ICU Outcomes Model/National Quality Forum modification of Mortality Probability Admission Model III, and the Acute Physiology and Chronic Health Evaluation IVa models for comparing observed and risk-adjusted hospital mortality predictions. Retrospective paired analyses of day 1 hospital mortality predictions using three prognostic models. Fifty-five ICUs at 38 U.S. hospitals from January 2008 to December 2012. Among 174,001 intensive care admissions, 109,926 met model inclusion criteria and 55,304 had data for mortality prediction using all three models. None. We compared patient exclusions and the discrimination, calibration, and accuracy of each model. Acute Physiology and Chronic Health Evaluation IVa excluded 10.7% of all patients, ICU Outcomes Model/National Quality Forum 20.1%, and Mortality Probability Admission Model III 24.1%. Discrimination of Acute Physiology and Chronic Health Evaluation IVa was superior, with an area under the receiver operating characteristic curve of 0.88, compared with Mortality Probability Admission Model III (0.81) and ICU Outcomes Model/National Quality Forum (0.80). Acute Physiology and Chronic Health Evaluation IVa was better calibrated (lowest Hosmer-Lemeshow statistic). The accuracy of Acute Physiology and Chronic Health Evaluation IVa was superior (adjusted Brier score = 31.0%) to that of Mortality Probability Admission Model III (16.1%) and ICU Outcomes Model/National Quality Forum (17.8%). Compared with observed mortality, Acute Physiology and Chronic Health Evaluation IVa overpredicted mortality by 1.5% and Mortality Probability Admission Model III by 3.1%; ICU Outcomes Model/National Quality Forum underpredicted mortality by 1.2%. Calibration curves showed that Acute Physiology and Chronic Health Evaluation performed well over the entire risk range, unlike the Mortality Probability Admission Model and ICU Outcomes Model/National Quality Forum models. Acute
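    The three properties compared here map onto standard statistics: AUC for discrimination, the Hosmer-Lemeshow statistic for calibration, and the Brier score for accuracy. A sketch of computing them on synthetic predictions, assuming scikit-learn is available (the Hosmer-Lemeshow statistic is computed by hand, since scikit-learn does not provide it):

      import numpy as np
      from sklearn.metrics import roc_auc_score, brier_score_loss

      rng = np.random.default_rng(3)

      # Synthetic stand-ins for predicted death probabilities and outcomes
      p = rng.beta(2.0, 8.0, 5000)
      y = (rng.uniform(size=5000) < p).astype(int)

      print("AUC   (discrimination):", roc_auc_score(y, p))
      print("Brier (accuracy)      :", brier_score_loss(y, p))

      # Hosmer-Lemeshow statistic (calibration): compare observed vs expected
      # deaths within deciles of predicted risk; smaller means better calibrated.
      deciles = np.quantile(p, np.linspace(0, 1, 11))
      hl = 0.0
      for lo, hi in zip(deciles[:-1], deciles[1:]):
          if hi == deciles[-1]:
              m = (p >= lo) & (p <= hi)   # include the upper edge in the last bin
          else:
              m = (p >= lo) & (p < hi)
          o, e, n = y[m].sum(), p[m].sum(), m.sum()
          hl += (o - e) ** 2 / (e * (1 - e / n) + 1e-12)
      print("Hosmer-Lemeshow       :", hl)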

  17. Efficient Calibration of Computationally Intensive Groundwater Models through Surrogate Modelling with Lower Levels of Fidelity

    NASA Astrophysics Data System (ADS)

    Razavi, S.; Anderson, D.; Martin, P.; MacMillan, G.; Tolson, B.; Gabriel, C.; Zhang, B.

    2012-12-01

    Many sophisticated groundwater models tend to be computationally intensive as they rigorously represent detailed scientific knowledge about the groundwater systems. Calibration (model inversion), which is a vital step of groundwater model development, can require hundreds or thousands of model evaluations (runs) for different sets of parameters and as such demand prohibitively large computational time and resources. One common strategy to circumvent this computational burden is surrogate modelling which is concerned with developing and utilizing fast-to-run surrogates of the original computationally intensive models (also called fine models). Surrogates can be either based on statistical and data-driven models such as kriging and neural networks or simplified physically-based models with lower fidelity to the original system (also called coarse models). Fidelity in this context refers to the degree of the realism of a simulation model. This research initially investigates different strategies for developing lower-fidelity surrogates of a fine groundwater model and their combinations. These strategies include coarsening the fine model, relaxing the numerical convergence criteria, and simplifying the model geological conceptualisation. Trade-offs between model efficiency and fidelity (accuracy) are of special interest. A methodological framework is developed for coordinating the original fine model with its lower-fidelity surrogates with the objective of efficiently calibrating the parameters of the original model. This framework is capable of mapping the original model parameters to the corresponding surrogate model parameters and also mapping the surrogate model response for the given parameters to the original model response. This framework is general in that it can be used with different optimization and/or uncertainty analysis techniques available for groundwater model calibration and parameter/predictive uncertainty assessment. A real-world computationally

  18. Enhancing the quality of hydrologic model calibrations and their transfer to operational flood forecasters

    NASA Astrophysics Data System (ADS)

    Aggett, Graeme; Spies, Ryan; Szfranski, Bill; Hahn, Claudia; Weil, Page

    2016-04-01

    Even an otherwise adequate forecasting model may not perform well if it is inadequately calibrated. Model calibration is often constrained by the lack of adequate calibration data, especially for small river basins with high spatial rainfall variability. Rainfall/snow station networks may not be dense enough to accurately estimate catchment rainfall/SWE. High discharges during flood events are subject to significant error due to flow gauging difficulty. Dynamic changes in catchment conditions (e.g., urbanization; losses in karstic systems) invariably introduce non-homogeneity into the water level and flow data. This presentation will highlight some of the challenges in the reliable calibration of National Weather Service (i.e., U.S.) operational flood forecast models, emphasizing the various challenges in different physiographic/climatic domains. It will also highlight the benefit of using various data visualization techniques to transfer information about model calibration to operational forecasters so they may understand the influence of the calibration on model performance under various conditions.

  19. Hydrological modeling in alpine catchments: sensing the critical parameters towards an efficient model calibration.

    PubMed

    Achleitner, S; Rinderer, M; Kirnbauer, R

    2009-01-01

    For the Tyrolean part of the river Inn, a hybrid model for flood forecasting has been set up and is currently in its test phase. The system comprises a 1D hydraulic model of the river Inn and two hydrological models: HQsim, a rainfall-runoff-discharge model for the non-glaciated tributary catchments, and the snow- and ice-melt model SES for the glaciated tributary catchments. This paper focuses on the hydrological modeling of the 49 connected non-glaciated catchments, realized with the software HQsim. In the course of model calibration, identifying the most sensitive parameters is important for an efficient calibration procedure. The indicators used for explaining the parameter sensitivities were chosen specifically for the purpose of flood forecasting. Finally, five model parameters could be identified as being sensitive for model calibration when aiming for a well-calibrated model for flood conditions. In addition, two parameters were identified which are sensitive in situations where the snow line plays an important role.

  20. Theoretical foundation, methods, and criteria for calibrating human vibration models using frequency response functions

    PubMed Central

    Dong, Ren G.; Welcome, Daniel E.; McDowell, Thomas W.; Wu, John Z.

    2015-01-01

    While simulations of the measured biodynamic responses of the whole human body or body segments to vibration are conventionally interpreted as summaries of biodynamic measurements, and the resulting models are considered quantitative, this study looked at these simulations from a different angle: model calibration. The specific aims of this study are to review and clarify the theoretical basis for model calibration, to help formulate the criteria for calibration validation, and to help appropriately select and apply calibration methods. In addition to established vibration theory, a novel theorem of mechanical vibration is also used to enhance the understanding of the mathematical and physical principles of the calibration. Based on this enhanced understanding, a set of criteria was proposed and used to systematically examine the calibration methods. Besides theoretical analyses, a numerical testing method is also used in the examination. This study identified the basic requirements for each calibration method to obtain a unique calibration solution. This study also confirmed that the solution becomes more robust if more than sufficient calibration references are provided. Practically, however, as more references are used, more inconsistencies can arise among the measured data for representing the biodynamic properties. To help account for the relative reliabilities of the references, a baseline weighting scheme is proposed. The analyses suggest that the best choice of calibration method depends on the modeling purpose, the model structure, and the availability and reliability of representative reference data. PMID:26740726
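    As a toy version of FRF-based calibration: fit the mass, damping, and stiffness of a single-degree-of-freedom model so its frequency response matches a "measured" FRF. The system, noise level, and choice of log-magnitude residual below are illustrative assumptions, not the paper's method:

      import numpy as np
      from scipy.optimize import least_squares

      def frf(w, m, c, k):
          """Receptance FRF magnitude of a single mass-spring-damper."""
          return 1.0 / np.abs(k - m * w**2 + 1j * c * w)

      # "Measured" FRF: true system (m=1, c=2, k=400) plus measurement noise
      w = np.linspace(1.0, 40.0, 200)
      rng = np.random.default_rng(4)
      H_meas = frf(w, 1.0, 2.0, 400.0) * (1 + 0.02 * rng.normal(size=w.size))

      # Calibrate (m, c, k) by minimizing the log-magnitude residual across all
      # frequency lines -- one possible choice of calibration reference.
      res = least_squares(
          lambda p: np.log(frf(w, *p)) - np.log(H_meas),
          x0=[0.5, 1.0, 300.0], bounds=([1e-6] * 3, [np.inf] * 3))
      print("calibrated m, c, k:", res.x)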

  1. Theoretical foundation, methods, and criteria for calibrating human vibration models using frequency response functions

    NASA Astrophysics Data System (ADS)

    Dong, Ren G.; Welcome, Daniel E.; McDowell, Thomas W.; Wu, John Z.

    2015-11-01

    While simulations of the measured biodynamic responses of the whole human body or body segments to vibration are conventionally interpreted as summaries of biodynamic measurements, and the resulting models are considered quantitative, this study looked at these simulations from a different angle: model calibration. The specific aims of this study are to review and clarify the theoretical basis for model calibration, to help formulate the criteria for calibration validation, and to help appropriately select and apply calibration methods. In addition to established vibration theory, a novel theorem of mechanical vibration is also used to enhance the understanding of the mathematical and physical principles of the calibration. Based on this enhanced understanding, a set of criteria was proposed and used to systematically examine the calibration methods. Besides theoretical analyses, a numerical testing method is also used in the examination. This study identified the basic requirements for each calibration method to obtain a unique calibration solution. This study also confirmed that the solution becomes more robust if more than sufficient calibration references are provided. Practically, however, as more references are used, more inconsistencies can arise among the measured data for representing the biodynamic properties. To help account for the relative reliabilities of the references, a baseline weighting scheme is proposed. The analyses suggest that the best choice of calibration method depends on the modeling purpose, the model structure, and the availability and reliability of representative reference data.

  2. Modelling carbon oxidation in pulp mill activated sludge systems: calibration of Activated Sludge Model No 3.

    PubMed

    Barañao, P A; Hall, E R

    2004-01-01

    Activated Sludge Model No 3 (ASM3) was chosen to model an activated sludge system treating effluents from a mechanical pulp and paper mill. The high COD concentration and the high content of readily biodegradable substrates of the wastewater make this model appropriate for this system. ASM3 was calibrated based on batch respirometric tests using fresh wastewater and sludge from the treatment plant, and on analytical measurements of COD, TSS and VSS. The model, developed for municipal wastewater, was found suitable for fitting a variety of respirometric batch tests, performed at different temperatures and food to microorganism ratios (F/M). Therefore, a set of calibrated parameters, as well as the wastewater COD fractions, was estimated for this industrial wastewater. The majority of the calibrated parameters were in the range of those found in the literature.

  3. NSLS-II: Nonlinear Model Calibration for Synchrotrons

    SciTech Connect

    Bengtsson, J.

    2010-10-08

    This tech note is essentially a summary of a lecture we delivered to the Acc. Phys. Journal Club in April 2010. However, since the estimated accuracy of these methods in the field of particle accelerators has been naive and misleading, i.e., it ignores the impact of noise, we will elaborate on this in some detail. A prerequisite for a calibration of the nonlinear Hamiltonian is that the quadratic part has been understood, i.e., that the linear optics for the real accelerator has been calibrated. For synchrotron light source operations, this problem has been solved by the interactive LOCO technique/tool (Linear Optics from Closed Orbits). Before that, in the context of hadron accelerators, it was done by signal processing of turn-by-turn BPM data. We have outlined how to make a basic calibration of the nonlinear model for synchrotrons. In particular, we have shown how this was done for LEAR, CERN (antiprotons) in the mid-80s. Specifically, our accuracy for frequency estimation was ≈1×10⁻⁵ for 1024 turns (to calibrate the linear optics) and ≈1×10⁻⁴ for 256 turns for the tune footprint and betatron spectrum. For comparison, the estimated tune footprint for stable beam for NSLS-II is ≈0.1, and the transverse damping time is ≈20 msec, i.e., ≈4,000 turns. There is no fundamental difference between antiprotons, protons, and electrons in this case. Because the estimated accuracy for these methods in the field of particle accelerators has been naive, i.e., ignoring the impact of noise, we have also derived explicit formulas, from first principles, for a quantitative statement. For example, for N = 256 and 5% noise we obtain δν ≈ 1×10⁻⁵. A comparison with the state of the art in, e.g., telecom and electrical engineering since the 60s is quite revealing. For example, the Kalman filter (1960) was crucial for the Ranger, Mariner, and Apollo (including the Lunar Module) missions during the 60s. Or Claude Shannon et al

  4. Impact of Spatial Scale on Calibration and Model Output for a Grid-based SWAT Model

    NASA Astrophysics Data System (ADS)

    Pignotti, G.; Vema, V. K.; Rathjens, H.; Raj, C.; Her, Y.; Chaubey, I.; Crawford, M. M.

    2014-12-01

    The traditional implementation of the Soil and Water Assessment Tool (SWAT) model utilizes common landscape characteristics known as hydrologic response units (HRUs). Discretization into HRUs provides a simple, computationally efficient framework for simulation, but it also represents a significant limitation of the model, as spatial connectivity between HRUs is ignored. SWATgrid, a newly developed, distributed version of SWAT, provides modified landscape routing via a grid, overcoming these limitations. However, the current implementation of SWATgrid has significant computational overhead, which effectively precludes traditional calibration and limits the total number of grid cells in a given modeling scenario. Moreover, as SWATgrid is a relatively new modeling approach, it remains largely untested, with little understanding of the impact of spatial resolution on model output. The objective of this study was to determine the effects of user-defined input resolution on SWATgrid predictions in the Upper Cedar Creek Watershed (near Auburn, IN, USA). Original input data, nominally at 30 m resolution, were rescaled for a range of resolutions between 30 and 4,000 m. A 30 m traditional SWAT model was developed as the baseline for model comparison. Monthly calibration was performed, and the calibrated parameter set was then transferred to all other SWAT and SWATgrid models to isolate the effects of resolution on prediction uncertainty relative to the baseline. Model output was evaluated with respect to streamflow at the outlet and water quality parameters. Additionally, outputs of SWATgrid models were compared to outputs of traditional SWAT models at each resolution, utilizing the same scaled input data. A secondary objective considered the effect of scale on calibrated parameter values, where each standard SWAT model was calibrated independently and parameters were transferred to SWATgrid models at equivalent scales. For each model, computational requirements were evaluated

  5. The impact of asynchronicity on event-flow estimation in basin-scale hydrologic model calibration

    USDA-ARS?s Scientific Manuscript database

    The calibration of basin-scale hydrologic models consists of adjusting parameters such that simulated values closely match observed values. However, due to inevitable inaccuracies in models and model inputs, simulated response hydrographs for multi-year calibrations will not be perfectly synchroniz...

  6. Experiments for calibration and validation of plasticity and failure material modeling: 304L stainless steel.

    SciTech Connect

    Lee, Kenneth L.; Korellis, John S.; McFadden, Sam X.

    2006-01-01

    Experimental data for material plasticity and failure model calibration and validation were obtained from 304L stainless steel. Model calibration data were taken from smooth tension, notched tension, and compression tests. Model validation data were provided from experiments using thin-walled tube specimens subjected to path dependent combinations of internal pressure, extension, and torsion.

  7. Modelling and calibration of a ring-shaped electrostatic meter

    NASA Astrophysics Data System (ADS)

    Zhang, Jianyong; Zhou, Bin; Xu, Chuanlong; Wang, Shimin

    2009-02-01

    Ring-shaped electrostatic flow meters can provide very useful information on pneumatically transported air-solids mixtures. This type of meter is popular for measuring and controlling the pulverized coal flow distribution among conveyors leading to burners in coal-fired power stations, and it has also been used for research purposes, e.g. for the investigation of the electrification mechanism of air-solids two-phase flow. In this paper, the finite element method (FEM) is employed to analyze the characteristics of ring-shaped electrostatic meters, and a mathematical model has been developed to express the relationship between the meter's voltage output and the motion of charged particles in the sensing volume. The theoretical analysis and the test results using a belt rig demonstrate that the output of the meter depends upon many parameters, including the characteristics of the conditioning circuitry, the particle velocity vector, the amount and rate of change of the charge carried by particles, and the locations of particles. This paper also introduces a method to optimize the theoretical model via calibration.

  8. The value of subsidence data in ground water model calibration.

    PubMed

    Yan, Tingting; Burbey, Thomas J

    2008-01-01

    The accurate estimation of aquifer parameters such as transmissivity and specific storage is often an important objective during a ground water modeling investigation or aquifer resource evaluation. Parameter estimation is often accomplished with changes in hydraulic head data as the key and most abundant type of observation. The availability and accessibility of global positioning system and interferometric synthetic aperture radar data in heavily pumped alluvial basins can provide important subsidence observations that can greatly aid parameter estimation. The aim of this investigation is to evaluate the value of spatial and temporal subsidence data for automatically estimating parameters with and without observation error using UCODE-2005 and MODFLOW-2000. A synthetic conceptual model (24 separate cases) containing seven transmissivity zones and three zones each for elastic and inelastic skeletal specific storage was used to simulate subsidence and drawdown in an aquifer with variably thick interbeds with delayed drainage. Five pumping wells of variable rates were used to stress the system for up to 15 years. Calibration results indicate that (1) the inverse of the square of the observation values is a reasonable way to weight the observations, (2) spatially abundant subsidence data typically produce superior parameter estimates under constant pumping even with observation error, (3) only a small number of subsidence observations are required to achieve accurate parameter estimates, and (4) for seasonal pumping, accurate parameter estimates for elastic skeletal specific storage values are largely dependent on the quantity of temporal observational data and less on the quantity of available spatial data.
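    Finding (1) above, weighting each observation by the inverse of its squared value, amounts to minimizing relative rather than absolute residuals, which is what lets centimetre-scale subsidence observations contribute alongside metre-scale drawdowns. A small numerical illustration with invented values:

      import numpy as np

      # Hypothetical observations of very different magnitudes:
      # drawdowns (~10 m) and subsidence (~0.01 m)
      obs = np.array([12.0, 9.5, 0.012, 0.008])
      sim = np.array([11.0, 10.2, 0.016, 0.006])  # model outputs at same points

      unweighted = np.sum((sim - obs) ** 2)
      weighted = np.sum(((sim - obs) / obs) ** 2)  # weights = 1 / obs^2

      print(unweighted)  # dominated entirely by the drawdown misfit
      print(weighted)    # subsidence misfit now contributes comparably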

  9. Model Robust Calibration: Method and Application to Electronically-Scanned Pressure Transducers

    NASA Technical Reports Server (NTRS)

    Walker, Eric L.; Starnes, B. Alden; Birch, Jeffery B.; Mays, James E.

    2010-01-01

    This article presents the application of a recently developed statistical regression method to the controlled instrument calibration problem. The statistical method of Model Robust Regression (MRR), developed by Mays, Birch, and Starnes, is shown to improve instrument calibration by reducing the reliance of the calibration on a predetermined parametric (e.g. polynomial, exponential, logarithmic) model. This is accomplished by allowing fits from the predetermined parametric model to be augmented by a certain portion of a fit to the residuals from the initial regression using a nonparametric (locally parametric) regression technique. The method is demonstrated for the absolute scale calibration of silicon-based pressure transducers.
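    A sketch of the MRR idea under simplifying assumptions: fit the parametric calibration curve, fit a nonparametric smoother (here a Gaussian kernel) to its residuals, and return the parametric fit plus a fraction lam of the residual fit. The smoother, data, and fixed mixing parameter are illustrative, not the authors' exact estimator:

      import numpy as np

      def kernel_smooth(x_train, r_train, x_eval, h=0.1):
          """Nadaraya-Watson smoother of residuals r at points x_eval."""
          w = np.exp(-0.5 * ((x_eval[:, None] - x_train[None, :]) / h) ** 2)
          return (w @ r_train) / w.sum(axis=1)

      rng = np.random.default_rng(5)
      x = np.sort(rng.uniform(0, 1, 80))          # normalized applied pressure
      y = 2.0 * x + 0.3 * np.sin(6 * np.pi * x) + rng.normal(0, 0.05, 80)

      # 1. Parametric stage: low-order polynomial calibration curve
      beta = np.polyfit(x, y, 1)
      y_par = np.polyval(beta, x)

      # 2. Nonparametric stage: smooth the residuals of the parametric fit
      resid_fit = kernel_smooth(x, y - y_par, x)

      # 3. MRR: parametric fit augmented by a portion lam of the residual fit
      lam = 0.7   # fixed here for illustration; MRR chooses it data-driven
      y_mrr = y_par + lam * resid_fit
      print("RMS error, parametric:", np.sqrt(np.mean((y - y_par) ** 2)))
      print("RMS error, MRR       :", np.sqrt(np.mean((y - y_mrr) ** 2)))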

  10. Generator Dynamic Model Validation and Parameter Calibration Using Phasor Measurements at the Point of Connection

    SciTech Connect

    Huang, Zhenyu; Du, Pengwei; Kosterev, Dmitry; Yang, Steve

    2013-05-01

    Disturbance data recorded by phasor measurement units (PMUs) offer opportunities to improve the integrity of dynamic models. However, manually tuning parameters through play-back events demands significant effort and engineering experience. In this paper, a calibration method using the extended Kalman filter (EKF) technique is proposed. The formulation of the EKF with parameter calibration is discussed. Case studies are presented to demonstrate its validity. The proposed calibration method is cost-effective and complementary to traditional equipment testing for improving dynamic model quality.
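    A toy version of EKF-based parameter calibration, under stated assumptions: augment the state with the unknown parameter, propagate, and correct against PMU-like angle measurements. The "generator" below is a simple swing-equation-like oscillator with unknown damping, invented for illustration and not the paper's model:

      import numpy as np

      dt, M, Pm, Pe = 0.01, 4.0, 0.9, 1.2
      rng = np.random.default_rng(6)

      def f(x):
          """One Euler step of [angle, speed, damping]; damping is a parameter."""
          d, w, D = x
          return np.array([d + dt * w,
                           w + dt * (Pm - Pe * np.sin(d) - D * w) / M,
                           D])

      def F(x):
          """Jacobian of f, linearized at the current estimate, for the EKF."""
          d, w, D = x
          return np.array([[1.0, dt, 0.0],
                           [-dt * Pe * np.cos(d) / M, 1.0 - dt * D / M, -dt * w / M],
                           [0.0, 0.0, 1.0]])

      H = np.array([[1.0, 0.0, 0.0]])   # PMU-like measurement: rotor angle only
      Q = np.diag([1e-8, 1e-8, 1e-6])   # small process noise keeps D adaptable
      R = np.array([[1e-4]])

      # Simulate the "true" system (D = 0.8) and run the EKF from a wrong D guess
      xt = np.array([0.5, 0.0, 0.8])
      x, P = np.array([0.5, 0.0, 0.3]), np.eye(3)
      for _ in range(4000):
          xt = f(xt)
          z = xt[0] + rng.normal(0.0, 1e-2)
          x, P = f(x), F(x) @ P @ F(x).T + Q          # EKF predict
          K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
          x = x + K @ (z - H @ x)                     # EKF update
          P = (np.eye(3) - K @ H) @ P

      print("calibrated damping D:", x[2])  # should move toward the true 0.8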

  11. Calibration of Uncertainty Analysis of the SWAT Model Using Genetic Algorithms and Bayesian Model Averaging

    USDA-ARS?s Scientific Manuscript database

    In this paper, the Genetic Algorithms (GA) and Bayesian model averaging (BMA) were combined to simultaneously conduct calibration and uncertainty analysis for the Soil and Water Assessment Tool (SWAT). In this hybrid method, several SWAT models with different structures are first selected; next GA i...

  12. Development of Conceptual Benchmark Models to Evaluate Complex Hydrologic Model Calibration in Managed Basins Using Python

    NASA Astrophysics Data System (ADS)

    Hughes, J. D.; White, J.

    2013-12-01

    For many numerical hydrologic models it is a challenge to quantitatively demonstrate that complex models are preferable to simpler models. Typically, a decision is made to develop and calibrate a complex model at the beginning of a study. The value of selecting a complex model over simpler models is commonly inferred from use of a model with fewer simplifications of the governing equations because it can be time consuming to develop another numerical code with data processing and parameter estimation functionality. High-level programming languages like Python can greatly reduce the effort required to develop and calibrate simple models that can be used to quantitatively demonstrate the increased value of a complex model. We have developed and calibrated a spatially-distributed surface-water/groundwater flow model for managed basins in southeast Florida, USA, to (1) evaluate the effect of municipal groundwater pumpage on surface-water/groundwater exchange, (2) investigate how the study area will respond to sea-level rise, and (3) explore combinations of these forcing functions. To demonstrate the increased value of this complex model, we developed a two-parameter conceptual-benchmark-discharge model for each basin in the study area. The conceptual-benchmark-discharge model includes seasonal scaling and lag parameters and is driven by basin rainfall. The conceptual-benchmark-discharge models were developed in the Python programming language and used weekly rainfall data. Calibration was implemented with the Broyden-Fletcher-Goldfarb-Shanno method available in the Scientific Python (SciPy) library. Normalized benchmark efficiencies calculated using output from the complex model and the corresponding conceptual-benchmark-discharge model indicate that the complex model has more explanatory power than the simple model driven only by rainfall.
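    A sketch of such a two-parameter conceptual benchmark: a seasonal scaling parameter and a lag time constant acting on weekly rainfall, calibrated with the SciPy BFGS optimizer mentioned in the abstract. The data and functional form are invented for illustration and are not the authors' model:

      import numpy as np
      from scipy.optimize import minimize

      rng = np.random.default_rng(7)
      rain = rng.gamma(1.5, 10.0, 520)              # weekly rainfall, 10 years

      def benchmark_q(params, rain):
          """Two-parameter benchmark: discharge = scale * lagged rainfall."""
          scale, tau = params
          k = np.exp(-np.arange(20) / max(tau, 1e-3))   # exponential lag kernel
          k /= k.sum()
          return scale * np.convolve(rain, k)[:rain.size]

      # Synthetic "observed" discharge from known parameters, plus noise
      q_obs = benchmark_q([0.6, 3.0], rain) + rng.normal(0.0, 0.5, rain.size)

      res = minimize(lambda p: np.sum((benchmark_q(p, rain) - q_obs) ** 2),
                     x0=[1.0, 1.0], method="BFGS")
      print("calibrated scale, lag:", res.x)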

  13. Calibration of model constants in a biological reaction model for sewage treatment plants.

    PubMed

    Amano, Ken; Kageyama, Kohji; Watanabe, Shoji; Takemoto, Takeshi

    2002-02-01

    Various biological reaction models have been proposed to estimate concentrations of soluble and insoluble components in the effluent of sewage treatment plants. These models should be useful for developing better operating systems and plant designs, but their formulas consist of nonlinear equations, and there are many model constants, which are not easy to calibrate. A technique has been proposed to determine the model constants by precise experiments, but it is not practical for design engineers or process operators to perform these experiments regularly. Other approaches, which calibrate the model constants by mathematical techniques, should be used instead. In this paper, the optimal regulator method of modern control theory is applied as a mathematical technique to calibrate the model constants. The method is applied in a small sewage treatment testing facility. Calibration of the model constants is examined to decrease the deviations between calculated and measured concentrations. Results show that calculated values of component concentrations approach measured values and that the method is useful for actual plants.

  14. Using the cloud to speed-up calibration of watershed-scale hydrologic models (Invited)

    NASA Astrophysics Data System (ADS)

    Goodall, J. L.; Ercan, M. B.; Castronova, A. M.; Humphrey, M.; Beekwilder, N.; Steele, J.; Kim, I.

    2013-12-01

    This research focuses on using the cloud to address computational challenges associated with hydrologic modeling. One example is calibration of a watershed-scale hydrologic model, which can take days of execution time on typical computers. While parallel algorithms for model calibration exist and some researchers have used multi-core computers or clusters to run these algorithms, these solutions do not fully address the challenge because (i) calibration can still be too time consuming even on multicore personal computers and (ii) few in the community have the time and expertise needed to manage a compute cluster. Given this, another option for addressing this challenge that we are exploring through this work is the use of the cloud for speeding up calibration of watershed-scale hydrologic models. The cloud used in this capacity provides a means for renting a specific number and type of machines for only the time needed to perform a calibration run. The cloud allows one to precisely balance the duration of the calibration with the financial costs so that, if the budget allows, the calibration can be performed more quickly by renting more machines. Focusing specifically on the SWAT hydrologic model and a parallel version of the DDS calibration algorithm, we show significant speed-ups across a range of watershed sizes, using up to 256 cores to perform a model calibration. The tool provides a simple web-based user interface and the ability to monitor job submissions during the calibration process. Finally, this talk concludes with initial work to leverage the cloud for other tasks associated with hydrologic modeling, including tasks related to preparing inputs for constructing place-based hydrologic models.
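    For reference, the serial core of the DDS algorithm named above (Tolson and Shoemaker, 2007) is short enough to sketch; the parallel version distributes the objective-function evaluations (the SWAT runs) across workers. The objective and bounds below are stand-ins, and the boundary handling is simplified:

      import numpy as np

      def dds(obj, lo, hi, n_eval=1000, r=0.2, seed=0):
          """Dynamically Dimensioned Search: greedy search that shifts from
          global to local by perturbing fewer variables as the budget is spent."""
          rng = np.random.default_rng(seed)
          lo, hi = np.asarray(lo, float), np.asarray(hi, float)
          x_best = lo + rng.uniform(size=lo.size) * (hi - lo)
          f_best = obj(x_best)
          for i in range(1, n_eval):
              # Each variable is perturbed with a probability that decays in time
              p = 1.0 - np.log(i) / np.log(n_eval)
              mask = rng.uniform(size=lo.size) < p
              if not mask.any():
                  mask[rng.integers(lo.size)] = True
              x = x_best.copy()
              x[mask] += r * (hi[mask] - lo[mask]) * rng.normal(size=mask.sum())
              x = np.clip(x, lo, hi)  # DDS reflects at bounds; clipping simplifies
              f = obj(x)              # in practice this is a full SWAT model run
              if f < f_best:
                  x_best, f_best = x, f
          return x_best, f_best

      # Toy objective standing in for the SWAT calibration error
      x, fx = dds(lambda v: np.sum((v - 0.3) ** 2), lo=np.zeros(5), hi=np.ones(5))
      print(x, fx)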

  15. Calibration models for density borehole logging - construction report

    SciTech Connect

    Engelmann, R.E.; Lewis, R.E.; Stromswold, D.C.

    1995-10-01

    Two machined blocks of magnesium and aluminum alloys form the basis for Hanford's density models. The blocks provide known densities of 1.780 ± 0.002 g/cm³ and 2.804 ± 0.002 g/cm³ for calibrating borehole logging tools that measure density based on gamma-ray scattering from a source in the tool. Each block is approximately 33 x 58 x 91 cm (13 x 23 x 36 in.) with cylindrical grooves cut into the sides of the blocks to hold steel casings of inner diameter 15 cm (6 in.) and 20 cm (8 in.). Spacers that can be inserted between the blocks and casings can create air gaps of thickness 0.64, 1.3, 1.9, and 2.5 cm (0.25, 0.5, 0.75 and 1.0 in.), simulating air gaps that can occur in actual wells from hole enlargements behind the casing.

  16. Optimal Calibration Designs for Tests of Polytomously Scored Items Described by Item Response Theory Models.

    ERIC Educational Resources Information Center

    Holman, Rebecca; Berger, Martijn P. F.

    2001-01-01

    Studied calibration designs that maximize the determinants of Fisher's information matrix on the item parameters for sets of polytomously scored items. Analyzed these items using a number of item response theory models. Results show that for the data and models used, a D-optimal calibration design for an answer or set of answers can reduce the…

  17. Augmenting watershed model calibration with incorporation of ancillary data sources and qualitative soft data sources

    USDA-ARS?s Scientific Manuscript database

    Watershed simulation models can be calibrated using “hard data” such as temporal streamflow observations; however, users may find upon examination of detailed outputs that some of the calibrated models may not reflect summative actual watershed behavior. Thus, it is necessary to use “soft data” (i....

  18. Automated calibration of a three-dimensional ground water flow model

    SciTech Connect

    Baker, F.G.; Guo, X.; Zigich, D.

    1996-12-31

    A three-dimensional ground water flow model was developed and calibrated for use as a quantitative tool for the evaluation of several potential ground water remedial alternatives during the On-Post Feasibility Study for the Rocky Mountain Arsenal. The USGS MODFLOW code was implemented and calibrated for steady-state conditions over the entire model area and for transient conditions where local pumping test data were available. Strict modeling goals and calibration criteria were established before modeling was initiated and formed a basis to guide the modeling process as it proceeded. The modeling effort utilized a non-traditional optimization technique to assist in model calibration. During calibration, a practical and systematic parameter-adjustment procedure was used in which parameter changes were tightly constrained by preset geologic and hydrogeologic conditions. The hydraulic conductivity parameter was adjusted, within limits, based on frequent comparison of calculated to observed heads, until the calibration criteria met predetermined targets. The paper presents the calibration approach and discusses the model application for the evaluation of alternatives.
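
    A minimal sketch of such a constrained, head-matching adjustment loop is given below; run_model() is a hypothetical stand-in for a MODFLOW forward run, and the bounds, step sizes, and tolerance are invented.

      # Hypothetical sketch: nudge hydraulic conductivity K within preset
      # geologic bounds until simulated heads match observations to a target.
      import numpy as np

      def run_model(K):                        # stand-in for a MODFLOW run
          return 100.0 - 5.0 * np.log(K)       # toy head response to K

      h_obs, K = 95.0, 2.0                     # observed head, initial K
      K_lo, K_hi, target = 0.5, 50.0, 0.05     # geologic bounds, tolerance
      step, prev_sign = 0.10, 0
      for _ in range(200):
          resid = run_model(K) - h_obs         # simulated minus observed head
          if abs(resid) < target:
              break
          sign = 1 if resid > 0 else -1        # head too high -> raise K
          if prev_sign and sign != prev_sign:
              step *= 0.5                      # shrink step after an overshoot
          K = float(np.clip(K * (1 + sign * step), K_lo, K_hi))
          prev_sign = sign
      print(round(K, 3), round(resid, 4))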

  19. Impact of length of calibration period on the APEX model water quantity and quality simulation performance

    USDA-ARS?s Scientific Manuscript database

    Availability of continuous long-term measured data for model calibration and validation is limited due to time and resources constraints. As a result, hydrologic and water quality models are calibrated and, if possible, validated when measured data is available. Past work reported on the impact of t...

  20. Evaluation of impact of length of calibration time period on the APEX model streamflow simulation

    USDA-ARS?s Scientific Manuscript database

    Due to resource constraints, continuous long-term measured data for model calibration and validation (C/V) are rare. As a result, most hydrologic and water quality models are calibrated and, if possible, validated using limited available measured data. However, little research has been carried out t...

  1. Impact of length of dataset on streamflow calibration parameters and performance of APEX model

    USDA-ARS?s Scientific Manuscript database

    Due to resource constraints, long-term monitoring data for calibration and validation of hydrologic and water quality models are rare. As a result, most models are calibrated and, if possible, validated using limited measured data. However, little research has been done to determine the impact of le...

  2. Improving SWAT model prediction using an upgraded denitrification scheme and constrained auto calibration

    USDA-ARS?s Scientific Manuscript database

    The reliability of common calibration practices for process based water quality models has recently been questioned. A so-called “adequately calibrated model” may contain input errors not readily identifiable by model users, or may not realistically represent intra-watershed responses. These short...

  3. HRMA calibration handbook: EKC gravity compensated XRCF models

    NASA Technical Reports Server (NTRS)

    Tananbaum, H. D.; Jerius, D.; Hughes, J.

    1994-01-01

    This document, consisting of hardcopy printout of explanatory text, figures, and tables, represents one incarnation of the AXAF high resolution mirror assembly (HRMA) Calibration Handbook. However, as we have envisioned it, the handbook also consists of electronic versions of this hardcopy printout (in the form of postscript files), the individual scripts which produced the various figures and the associated input data, the model raytrace files, and all scripts, parameter files, and input data necessary to generate the raytraces. These data are all available electronically as either ASCII or FITS files. The handbook is intended to be a living document and will be updated as new information and/or fabrication data on the HRMA are obtained, or when the need for additional results is indicated. The SAO Mission Support Team (MST) is developing a high fidelity HRMA model, consisting of analytical and numerical calculations, computer software, and databases of fundamental physical constants, laboratory measurements, configuration data, finite element models, AXAF assembly data, and so on. This model serves as the basis for the simulations presented in the handbook. The 'core' of the model is the raytrace package OSAC, which we have substantially modified and now refer to as SAOsac. One major structural modification to the software has been to utilize the UNIX binary pipe data transport mechanism for passing rays between program modules. This change has made it possible to simulate rays which are distributed randomly over the entrance aperture of the telescope. It has also resulted in a highly efficient system for tracing large numbers of rays. In one application to date (the analysis of VETA-I ring focus data) we have employed 2 × 10⁷ rays, a substantial improvement over the limit of 1 × 10⁴ rays in the original OSAC module. A second major modification is the manner in which SAOsac incorporates low spatial frequency surface errors into the geometric raytrace…

  4. Calibration of the Forward-scattering Spectrometer Probe: Modeling Scattering from a Multimode Laser Beam

    NASA Technical Reports Server (NTRS)

    Hovenac, Edward A.; Lock, James A.

    1993-01-01

    Scattering calculations using a more detailed model of the multimode laser beam in the forward-scattering spectrometer probe (FSSP) were carried out by using a recently developed extension to Mie scattering theory. From this model, new calibration curves for the FSSP were calculated. The difference between the old calibration curves and the new ones is small for droplet diameters less than 10 micrometers, but the difference increases to approximately 10% at diameters of 50 micrometers. When using glass beads to calibrate the FSSP, calibration errors can be minimized, by using glass beads of many different diameters, over the entire range of the FSSP. If the FSSP is calibrated using one-diameter glass beads, then the new formalism is necessary to extrapolate the calibration over the entire range.

  5. Calibration of the forward-scattering spectrometer probe - Modeling scattering from a multimode laser beam

    NASA Technical Reports Server (NTRS)

    Hovenac, Edward A.; Lock, James A.

    1993-01-01

    Scattering calculations using a detailed model of the multimode laser beam in the forward-scattering spectrometer probe (FSSP) were carried out using a recently developed extension to Mie scattering theory. From this model, new calibration curves for the FSSP were calculated. The difference between the old calibration curves and the new ones is small for droplet diameters less than 10 microns, but the difference increases to approximately 10 percent at diameters of 50 microns. When using glass beads to calibrate the FSSP, calibration errors can be minimized by using glass beads of many different diameters, over the entire range of the FSSP. If the FSSP is calibrated using one-diameter glass beads, then the new formalism is necessary to extrapolate the calibration over the entire range.
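
    As a hedged illustration of the multi-diameter bead approach recommended above, the sketch below builds a piecewise-linear calibration curve from bead responses and inverts it for field measurements; all values are invented.

      # Hedged sketch: FSSP-style calibration curve from glass beads of
      # several diameters; instrument responses below are illustrative only.
      import numpy as np

      d_beads = np.array([5.0, 10.0, 20.0, 30.0, 50.0])   # bead diameters (um)
      resp    = np.array([0.9, 2.1, 4.6, 7.4, 13.8])      # instrument response

      def diameter_from_response(r):
          # piecewise-linear inverse of the calibration curve
          return np.interp(r, resp, d_beads)

      print(diameter_from_response(6.0))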

  6. Schematic model for QCD. III. Hadronic states

    SciTech Connect

    Nunez, M.V.; Lerma, S.H.; Hess, P.O.; Jesgarz, S.; Civitarese, O.; Reboiro, M.

    2004-09-01

    The hadronic spectrum obtained in the framework of a QCD-inspired schematic model is presented. The model is the extension of a previous version, whose basic degrees of freedom are constituent quarks, antiquarks, and gluons. The interaction between quarks and gluons is a phenomenological interaction whose parameters are fixed from data. The classification of the states, in terms of quark, antiquark, and gluon configurations, is based on symmetry considerations and is independent of the chosen interaction. Following this procedure, nucleon and Δ resonances are identified, as well as various pentaquark and heptaquark states. The lowest pentaquark state is predicted at 1.5 GeV with negative parity, while the lowest heptaquark state has positive parity and an energy of the order of 2.5 GeV.

  7. Graphical assessment of internal and external calibration of logistic regression models by using loess smoothers.

    PubMed

    Austin, Peter C; Steyerberg, Ewout W

    2014-02-10

    Predicting the probability of the occurrence of a binary outcome or condition is important in biomedical research. While assessing discrimination is an essential issue in developing and validating binary prediction models, less attention has been paid to methods for assessing model calibration. Calibration refers to the degree of agreement between observed and predicted probabilities and is often assessed by testing for lack-of-fit. The objective of our study was to examine the ability of graphical methods to assess the calibration of logistic regression models. We examined lack of internal calibration, which was related to misspecification of the logistic regression model, and external calibration, which was related to an overfit model or to shrinkage of the linear predictor. We conducted an extensive set of Monte Carlo simulations with a locally weighted least squares regression smoother (i.e., the loess algorithm) to examine the ability of graphical methods to assess model calibration. We found that loess-based methods were able to provide evidence of moderate departures from linearity and indicate omission of a moderately strong interaction. Misspecification of the link function was harder to detect. Visual patterns were clearer with higher sample sizes, higher incidence of the outcome, or higher discrimination. Loess-based methods were also able to identify the lack of calibration in external validation samples when an overfit regression model had been used. In conclusion, loess-based smoothing methods are adequate tools to graphically assess calibration and merit wider application.
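
    A minimal sketch of such a loess-based calibration plot, assuming simulated data and the statsmodels lowess smoother rather than the authors' exact setup, is given below; a well calibrated model's smooth tracks the 45-degree diagonal.

      # Hedged sketch: lowess smooth of a binary outcome against predicted
      # probabilities from a logistic model, on simulated data.
      import numpy as np
      import statsmodels.api as sm
      from statsmodels.nonparametric.smoothers_lowess import lowess

      rng = np.random.default_rng(1)
      x = rng.standard_normal(2000)
      y = rng.binomial(1, 1 / (1 + np.exp(-(0.5 + 1.2 * x))))

      model = sm.Logit(y, sm.add_constant(x)).fit(disp=0)
      p_hat = model.predict(sm.add_constant(x))

      smooth = lowess(y, p_hat, frac=0.3)   # columns: sorted p_hat, smoothed rate
      # plot smooth[:, 0] vs smooth[:, 1] against the diagonal to assess calibration
      print(smooth[:5])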

  8. Why Bother to Calibrate? Model Consistency and the Value of Prior Information

    NASA Astrophysics Data System (ADS)

    Hrachowitz, Markus; Fovet, Ophelie; Ruiz, Laurent; Euser, Tanja; Gharari, Shervan; Nijzink, Remko; Savenije, Hubert; Gascuel-Odoux, Chantal

    2015-04-01

    Hydrological models frequently suffer from limited predictive power despite adequate calibration performances. This can indicate insufficient representations of the underlying processes. Thus ways are sought to increase model consistency while satisfying the contrasting priorities of increased model complexity and limited equifinality. In this study the value of a systematic use of hydrological signatures and expert knowledge for increasing model consistency was tested. It was found that a simple conceptual model, constrained by 4 calibration objective functions, was able to adequately reproduce the hydrograph in the calibration period. The model, however, could not reproduce 20 hydrological signatures, indicating a lack of model consistency. Subsequently, testing 11 models, model complexity was increased in a stepwise way and counter-balanced by using prior information about the system to impose "prior constraints", inferred from expert knowledge and to ensure a model which behaves well with respect to the modeller's perception of the system. We showed that, in spite of unchanged calibration performance, the most complex model set-up exhibited increased performance in the independent test period and skill to reproduce all 20 signatures, indicating a better system representation. The results suggest that a model may be inadequate despite good performance with respect to multiple calibration objectives and that increasing model complexity, if efficiently counter-balanced by available prior constraints, can increase predictive performance of a model and its skill to reproduce hydrological signatures. The results strongly illustrate the need to balance automated model calibration with a more expert-knowledge driven strategy of constraining models.

  9. Why Bother and Calibrate? Model Consistency and the Value of Prior Information.

    NASA Astrophysics Data System (ADS)

    Hrachowitz, M.; Fovet, O.; Ruiz, L.; Euser, T.; Gharari, S.; Nijzink, R.; Freer, J. E.; Savenije, H.; Gascuel-Odoux, C.

    2014-12-01

    Hydrological models frequently suffer from limited predictive power despite adequate calibration performances. This can indicate insufficient representations of the underlying processes. Thus ways are sought to increase model consistency while satisfying the contrasting priorities of increased model complexity and limited equifinality. In this study the value of a systematic use of hydrological signatures and expert knowledge for increasing model consistency was tested. It was found that a simple conceptual model, constrained by 4 calibration objective functions, was able to adequately reproduce the hydrograph in the calibration period. The model, however, could not reproduce 20 hydrological signatures, indicating a lack of model consistency. Subsequently, testing 11 models, model complexity was increased in a stepwise way and counter-balanced by using prior information about the system to impose "prior constraints", inferred from expert knowledge and to ensure a model which behaves well with respect to the modeller's perception of the system. We showed that, in spite of unchanged calibration performance, the most complex model set-up exhibited increased performance in the independent test period and skill to reproduce all 20 signatures, indicating a better system representation. The results suggest that a model may be inadequate despite good performance with respect to multiple calibration objectives and that increasing model complexity, if efficiently counter-balanced by available prior constraints, can increase predictive performance of a model and its skill to reproduce hydrological signatures. The results strongly illustrate the need to balance automated model calibration with a more expert-knowledge driven strategy of constraining models.

  10. Research on quasi-dynamic calibration model of plastic sensitive element based on neural networks

    NASA Astrophysics Data System (ADS)

    Wang, Fang; Kong, Deren; Yang, Lixia; Zhang, Zouzou

    2017-08-01

    Quasi-dynamic calibration accuracy of a plastic sensitive element depends on the accuracy of the fitted model between pressure and deformation. Exploiting the excellent nonlinear mapping ability of RBF (Radial Basis Function) neural networks, a calibration model is established in this paper that uses the peak pressure as input and the deformation of the plastic sensitive element as output. Calibration experiments on a batch of copper cylinders are carried out on a quasi-dynamic pressure calibration device over the pressure range of 200 MPa to 700 MPa, with data acquired by a standard pressure monitoring system. The network is trained on the quasi-dynamic calibration data using the MATLAB neural network toolbox. On the testing samples, the prediction accuracy of the neural network model is compared with that of an exponential fitting model and a second-order polynomial fitting model. The results show that the predictions of the neural network model are closest to the testing samples: its accuracy is better than 0.5%, one order of magnitude better than the second-order polynomial fitting model and two orders better than the exponential fitting model. The quasi-dynamic calibration model between peak pressure and deformation of the plastic sensitive element, based on a neural network, thus provides an important basis for creating higher-accuracy quasi-dynamic calibration tables.
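
    A hedged sketch of an RBF calibration model of this kind, using SciPy's RBFInterpolator on invented pressure-deformation pairs rather than the authors' MATLAB toolbox setup, is given below.

      # Hedged sketch: RBF model mapping peak pressure (MPa) to copper-cylinder
      # deformation (mm); the data pairs are simulated, not the paper's.
      import numpy as np
      from scipy.interpolate import RBFInterpolator

      rng = np.random.default_rng(0)
      p = np.linspace(200.0, 700.0, 11)[:, None]            # pressures (MPa)
      d = 0.8 * np.sqrt(p[:, 0]) + rng.normal(0.0, 0.02, 11)  # deformations (mm)

      rbf = RBFInterpolator(p, d, kernel="thin_plate_spline")
      print(rbf(np.array([[450.0]])))   # predicted deformation at 450 MPa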

  11. An analysis of calibration curve models for solid-state heat-flow calorimeters

    SciTech Connect

    Hypes, P. A.; Bracken, D. S.; McCabe, G.

    2001-01-01

    Various calibration curve models for solid-state calorimeters are compared to determine which model best fits the calibration data. The calibration data are discussed, the criteria used to select the best model are explained, and a conclusion regarding the best model for the calibration curve is presented. These results can also be used to evaluate the random and systematic error of a calorimetric measurement. A linear/quadratic model has been used for decades to fit the calibration curves of Wheatstone bridge calorimeters, with excellent results; the Multical software package uses this model for the calibration curve. The choice of this model is supported by 40 years [1] of calorimeter data, so there is good empirical support for the linear/quadratic model. Calorimeter response is strongly linear, with slightly lower sensitivity at higher powers; the negative coefficient of the x² term accounts for this. The solid-state calorimeter is operated using the Multical [2] software package. An investigation was undertaken to determine whether the linear/quadratic model is the best model for the new sensor technology used in the solid-state calorimeter.
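
    As a hedged illustration, fitting the linear/quadratic calibration curve described above takes one polynomial least-squares fit; the power-response data below are invented.

      # Hedged sketch: linear vs. linear/quadratic calibration-curve fits,
      # with a small negative quadratic term as described in the abstract.
      import numpy as np

      rng = np.random.default_rng(0)
      power = np.array([0.5, 1.0, 2.0, 4.0, 8.0, 16.0])       # W
      resp = 3.0 * power - 0.01 * power**2 + rng.normal(0, 0.01, 6)

      quad = np.polyfit(power, resp, 2)   # [c, b, a]; c expected < 0
      lin = np.polyfit(power, resp, 1)
      print("quadratic coefficient:", quad[0])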

  12. On the calibration and verification of two-dimensional, distributed, Hortonian, continuous watershed models

    NASA Astrophysics Data System (ADS)

    Senarath, Sharika U. S.; Ogden, Fred L.; Downer, Charles W.; Sharif, Hatim O.

    2000-02-01

    Physically based, two-dimensional, distributed parameter Hortonian hydrologic models are sensitive to a number of spatially varied parameters and inputs and are particularly sensitive to the initial soil moisture field. However, soil moisture data are generally unavailable for most catchments. Given an erroneous initial soil moisture field, single-event calibrations are easily achieved using different combinations of model parameters, including physically unrealistic values. Verification of single-event calibrations is very difficult for models of this type because of parameter estimation errors that arise from initial soil moisture field uncertainty. The purpose of this study is to determine if the likelihood of obtaining a verifiable calibration increases when a continuous flow record, consisting of multiple runoff-producing events, is used for model calibration. The physically based, two-dimensional, distributed, Hortonian hydrologic model CASC2D [Julien et al., 1995] is converted to a continuous formulation that simulates the temporal evolution of soil moisture between rainfall events. Calibration is performed using 6 weeks of record from the 21.3 km² Goodwin Creek Experimental Watershed, located in northern Mississippi. Model parameters are assigned based on soil textures, land use/land cover maps, and a combination of both. The sensitivity of the new model formulation to parameter variation is evaluated. Calibration is performed using the shuffled complex evolution method [Duan et al., 1991]. Three different tests are conducted to evaluate model performance based on continuous calibration. Results show that calibration on a continuous basis significantly improves model performance for periods, or subcatchments, not used in calibration and the likelihood of obtaining realistic simulations of spatially varied catchment dynamics. The automated calibration reveals that the parameter assignment methodology used in this study results in overparameterization…

  13. Calibration and uncertainty issues of a hydrological model (SWAT) applied to West Africa

    NASA Astrophysics Data System (ADS)

    Schuol, J.; Abbaspour, K. C.

    2006-09-01

    Distributed hydrological models like SWAT (Soil and Water Assessment Tool) are often highly over-parameterized, making parameter specification and parameter estimation inevitable steps in model calibration. Manual calibration is almost infeasible due to the complexity of large-scale models with many objectives. Therefore we used a multi-site semi-automated inverse modelling routine (SUFI-2) for calibration and uncertainty analysis. Nevertheless, the question of when a model is sufficiently calibrated remains open and requires a project-dependent definition. Due to the non-uniqueness of effective parameter sets, parameter calibration and prediction uncertainty of a model are intimately related. We address some calibration and uncertainty issues using SWAT to model a four million km² area in West Africa, including mainly the basins of the rivers Niger, Volta, and Senegal. This model is a case study in a larger project with the goal of quantifying the amount of global country-based available freshwater. Annual and monthly simulations with the "calibrated" model for West Africa show promising results with respect to the freshwater quantification but also point out the importance of evaluating the conceptual model uncertainty as well as the parameter uncertainty.

  14. Interplanetary density models as inferred from solar Type III bursts

    NASA Astrophysics Data System (ADS)

    Oppeneiger, Lucas; Boudjada, Mohammed Y.; Lammer, Helmut; Lichtenegger, Herbert

    2016-04-01

    We report on density models derived from spectral features of solar Type III bursts. These bursts are generated by beams of electrons travelling outward from the Sun along open magnetic field lines. The electrons generate Langmuir waves at the plasma frequency along their ray paths through the corona and the interplanetary medium, and the Type III bursts cover a large frequency band from several MHz down to a few kHz. In this analysis, we consider the empirical density models previously proposed to describe the electron density in the interplanetary medium. We show that those models are mainly based on the analysis of Type III bursts generated in the interplanetary medium and observed by satellites (e.g. RAE, HELIOS, VOYAGER, ULYSSES, WIND). Those models are confronted with stereoscopic observations of Type III bursts recorded by the WIND, ULYSSES and CASSINI spacecraft. We discuss the spatial evolution of the electron beam through the interplanetary medium, where the trajectory is an Archimedean spiral, and show that the inferred electron beams and source locations depend on the choice of empirical density model.

  15. Impact of model development, calibration and validation decisions on hydrological simulations in West Lake Erie Basin

    USDA-ARS?s Scientific Manuscript database

    Watershed simulation models are used extensively to investigate hydrologic processes, landuse and climate change impacts, pollutant load assessments and best management practices (BMPs). Developing, calibrating and validating these models require a number of critical decisions that will influence t...

  16. Comparison of global optimization approaches for robust calibration of hydrologic model parameters

    NASA Astrophysics Data System (ADS)

    Jung, I. W.

    2015-12-01

    Robustness of the calibrated parameters of hydrologic models is necessary to provide reliable predictions of future watershed behavior under varying climate conditions. This study investigated calibration performance as a function of the length of the calibration period, the objective function, the hydrologic model structure, and the optimization method. To do this, the combination of three global optimization methods (SCE-UA, Micro-GA, and DREAM) and four hydrologic models (SAC-SMA, GR4J, HBV, and PRMS) was tested with different calibration periods and objective functions. Our results showed that the three global optimization methods provided similar calibration performance across the different calibration periods, objective functions, and hydrologic models. However, using the index of agreement, normalized root mean square error, or Nash-Sutcliffe efficiency as the objective function gave better performance than using the correlation coefficient or percent bias. Calibration performance for calibration periods ranging from one to seven years was hard to generalize, because the four hydrologic models have different levels of complexity and different years carry different information content in the hydrological observations. Acknowledgements: This research was supported by a grant (14AWMP-B082564-01) from the Advanced Water Management Research Program funded by the Ministry of Land, Infrastructure and Transport of the Korean government.
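
    For reference, hedged implementations of the objective functions compared in the study (Nash-Sutcliffe efficiency, index of agreement, normalized RMSE, percent bias) are sketched below on invented data.

      # Hedged reference implementations of common calibration objectives.
      import numpy as np

      def nse(obs, sim):
          return 1 - np.sum((obs - sim)**2) / np.sum((obs - obs.mean())**2)

      def index_of_agreement(obs, sim):
          denom = np.sum((np.abs(sim - obs.mean()) + np.abs(obs - obs.mean()))**2)
          return 1 - np.sum((obs - sim)**2) / denom

      def nrmse(obs, sim):
          return np.sqrt(np.mean((obs - sim)**2)) / (obs.max() - obs.min())

      def pbias(obs, sim):
          return 100.0 * np.sum(sim - obs) / np.sum(obs)

      obs = np.array([3.1, 4.0, 2.5, 5.2, 4.4])
      sim = np.array([2.9, 4.2, 2.8, 4.9, 4.6])
      print(nse(obs, sim), index_of_agreement(obs, sim), pbias(obs, sim))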

  17. Efficient Auto-Calibration of Computationally Intensive Hydrologic Models by Running the Model on Short Data Periods

    NASA Astrophysics Data System (ADS)

    Razavi, S.; Tolson, B.

    2012-04-01

    Sophisticated hydrologic models may require very long run times to simulate medium-sized and long data periods. With such models in hand, activities like automatic calibration, parameter space exploration, and uncertainty analysis become very computationally intensive, as the models must be run hundreds or thousands of times. This study proposes a strategy to improve the computational efficiency of these activities by utilizing a secondary model in conjunction with the original model, which works on a medium-sized or long calibration data period. The secondary model is basically the same as the original model but runs on a relatively short data period that is a portion of the calibration data period. Relationships can be identified between the performance of the model on the entire calibration period and the performance of the secondary model on the short data period. Upon establishing such a relationship, the performance of the model for a given parameter set over the entire calibration period can be probabilistically predicted after running the model with the same parameter set over the short data period. The appeal of this strategy is demonstrated in a SWAT hydrologic model automatic calibration case study. A SWAT2000 model of the Cannonsville reservoir watershed in New York, United States, with 14 parameters is calibrated over a 6-year period. Kriging is used to establish the relationship between the modelling performances for the entire calibration period and the short period. The Covariance Matrix Adaptation Evolution Strategy (CMA-ES) is used as the optimizing engine to explore the parameter space during calibration. Numerical results show that the proposed strategy can significantly reduce the computational budget required in automatic calibration practice, and that these efficiency gains are achievable with a minimal sacrifice of accuracy. Results also show that through this strategy the parameter space can be…
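
    A hedged sketch of the core idea is given below, with an off-the-shelf Gaussian-process (kriging) regressor standing in for the paper's kriging relationship between short-period and full-period objective values; all scores are simulated.

      # Hedged sketch: learn a kriging relationship between a parameter set's
      # short-period objective and its full-period objective, then screen
      # candidates with the cheap short-period run.
      import numpy as np
      from sklearn.gaussian_process import GaussianProcessRegressor
      from sklearn.gaussian_process.kernels import RBF, WhiteKernel

      rng = np.random.default_rng(0)
      f_short = rng.uniform(0.2, 0.9, 40)[:, None]            # short-period NSE
      f_full = 0.8 * f_short[:, 0] + 0.05 * rng.standard_normal(40)

      gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel()).fit(f_short, f_full)
      mean, sd = gp.predict(np.array([[0.75]]), return_std=True)
      # run the expensive full-period model only if the predicted score
      # (mean +/- sd) looks competitive with the current best
      print(mean, sd)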

  18. View of a five inch standard Mark III model 1 ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    View of a five inch standard Mark III model 1 #39, manufactured in 1916 at the Naval Gun Factory, Watervliet, NY; this is the only gun remaining on Olympia dating from the period when it was in commission; note the ammunition lift at the left side of the photograph. (p36) - USS Olympia, Penn's Landing, 211 South Columbus Boulevard, Philadelphia, Philadelphia County, PA

  19. Automatic Calibration of Hydrological Models in the Newly Reconstructed Catchments: Issues, Methods and Uncertainties

    NASA Astrophysics Data System (ADS)

    Nazemi, Alireza; Elshorbagy, Amin

    2010-05-01

    The use of optimisation methods has a long tradition in the calibration of conceptual hydrological models; nevertheless, most previous investigations have been made in catchments with long periods of data collection and only with respect to runoff information. The present study focuses on the automatic calibration of hydrological models using states (i.e., soil moisture) as well as fluxes (i.e., AET) in a prototype catchment in which an intensive gauging network collects a variety of catchment variables, yet only a short period of data is available. First, the characteristics of such a calibration attempt are highlighted and discussed, and a number of research questions are proposed. Then, four different optimisation methods, i.e., Latin Hypercube Sampling, Shuffled Complex Evolution Metropolis, Multi-Objective Shuffled Complex Evolution Metropolis, and Non-dominated Sorting Genetic Algorithm II, are considered and applied to the automatic calibration of the GSDW model in a newly reconstructed oil-sands catchment in northern Alberta, Canada. It is worth mentioning that the original GSDW model had to be translated into MATLAB to enable automatic calibration. Different conceptualisation scenarios are generated and calibrated. The calibration results are analysed and compared in terms of the optimality and quality of the solutions. The concepts of multi-objectivity and lack of identifiability are addressed in the calibration solutions, and the best calibration algorithm is selected based on the error in representing the soil moisture content in different layers. The study also considers uncertainties that might occur in the formulation of the calibration process by considering different calibration scenarios using the same model and dataset. The interactions among accuracy, identifiability, and model parsimony are addressed and discussed. The present investigation concludes that the calibration of…

  20. Stochastic finite element model calibration based on frequency responses and bootstrap sampling

    NASA Astrophysics Data System (ADS)

    Vakilzadeh, Majid K.; Yaghoubi, Vahid; Johansson, Anders T.; Abrahamsson, Thomas J. S.

    2017-05-01

    A new stochastic finite element model calibration framework for estimation of the uncertainty in model parameters and predictions from the measured frequency responses is proposed in this paper. It combines the principles of bootstrapping with the technique of FE model calibration with damping equalization. The challenge for the calibration problem is to find an initial estimate of the parameters that is reasonably close to the global minimum of the deviation between model predictions and measurement data. The idea of model calibration with damping equalization is to formulate the calibration metric as the deviation between the logarithm of the frequency responses of FE model and a test data model found from measurement where the same level of modal damping is imposed on all modes. This formulation gives a smooth metric with a large radius of convergence to the global minimum. In this study, practical suggestions are made to improve the performance of this calibration procedure in dealing with noisy measurements. A dedicated frequency sampling strategy is suggested for measurement of frequency responses in order to improve the estimate of a test data model. The deviation metric at each frequency line is weighted using the signal-to-noise ratio of the measured frequency responses. The solution to the improved calibration procedure with damping equalization is viewed as a starting value for the optimization procedure used for uncertainty quantification. The experimental data is then resampled using the bootstrapping approach and the FE model calibration problem, initiating from the estimated starting value, is solved on each individual resampled dataset to produce uncertainty bounds on the model parameters and predictions. The proposed stochastic model calibration framework is demonstrated on a six degree-of-freedom spring-mass system prior to being applied to a general purpose satellite structure.
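
    A hedged sketch of the bootstrap layer of such a framework is given below, with a trivial calibrate() standing in for the FE-model calibration with damping equalization; the data and replicate count are invented.

      # Hedged sketch: bootstrap uncertainty bounds around a calibration.
      import numpy as np

      rng = np.random.default_rng(0)
      data = 2.5 + 0.1 * rng.standard_normal(200)     # measured responses

      def calibrate(sample):
          return sample.mean()                        # stand-in for an FE-model fit

      reps = [calibrate(rng.choice(data, data.size, replace=True))
              for _ in range(1000)]
      lo, hi = np.percentile(reps, [2.5, 97.5])       # parameter uncertainty bounds
      print(round(lo, 4), round(hi, 4))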

  1. A Hierarchical Bayesian Model for Calibrating Estimates of Species Divergence Times

    PubMed Central

    Heath, Tracy A.

    2012-01-01

    In Bayesian divergence time estimation methods, incorporating calibrating information from the fossil record is commonly done by assigning prior densities to ancestral nodes in the tree. Calibration prior densities are typically parametric distributions offset by minimum age estimates provided by the fossil record. Specification of the parameters of calibration densities requires the user to quantify his or her prior knowledge of the age of the ancestral node relative to the age of its calibrating fossil. The values of these parameters can, potentially, result in biased estimates of node ages if they lead to overly informative prior distributions. Accordingly, determining parameter values that lead to adequate prior densities is not straightforward. In this study, I present a hierarchical Bayesian model for calibrating divergence time analyses with multiple fossil age constraints. This approach applies a Dirichlet process prior as a hyperprior on the parameters of calibration prior densities. Specifically, this model assumes that the rate parameters of exponential prior distributions on calibrated nodes are distributed according to a Dirichlet process, whereby the rate parameters are clustered into distinct parameter categories. Both simulated and biological data are analyzed to evaluate the performance of the Dirichlet process hyperprior. Compared with fixed exponential prior densities, the hierarchical Bayesian approach results in more accurate and precise estimates of internal node ages. When this hyperprior is applied using Markov chain Monte Carlo methods, the ages of calibrated nodes are sampled from mixtures of exponential distributions and uncertainty in the values of calibration density parameters is taken into account. PMID:22334343

  2. A hierarchical Bayesian model for calibrating estimates of species divergence times.

    PubMed

    Heath, Tracy A

    2012-10-01

    In Bayesian divergence time estimation methods, incorporating calibrating information from the fossil record is commonly done by assigning prior densities to ancestral nodes in the tree. Calibration prior densities are typically parametric distributions offset by minimum age estimates provided by the fossil record. Specification of the parameters of calibration densities requires the user to quantify his or her prior knowledge of the age of the ancestral node relative to the age of its calibrating fossil. The values of these parameters can, potentially, result in biased estimates of node ages if they lead to overly informative prior distributions. Accordingly, determining parameter values that lead to adequate prior densities is not straightforward. In this study, I present a hierarchical Bayesian model for calibrating divergence time analyses with multiple fossil age constraints. This approach applies a Dirichlet process prior as a hyperprior on the parameters of calibration prior densities. Specifically, this model assumes that the rate parameters of exponential prior distributions on calibrated nodes are distributed according to a Dirichlet process, whereby the rate parameters are clustered into distinct parameter categories. Both simulated and biological data are analyzed to evaluate the performance of the Dirichlet process hyperprior. Compared with fixed exponential prior densities, the hierarchical Bayesian approach results in more accurate and precise estimates of internal node ages. When this hyperprior is applied using Markov chain Monte Carlo methods, the ages of calibrated nodes are sampled from mixtures of exponential distributions and uncertainty in the values of calibration density parameters is taken into account.
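
    In hedged LaTeX shorthand, the hierarchy described in the abstract can be summarized as follows (the notation is assumed here, not the author's):

      % Exponential calibration densities whose rate parameters are drawn
      % from a Dirichlet process, inducing clustering across calibrated nodes.
      % a_i: age of calibrated node i; f_i: minimum age from its fossil;
      % G_0: base distribution on rates; alpha: concentration parameter.
      \begin{align*}
        a_i - f_i &\sim \mathrm{Exponential}(\lambda_i), \quad i = 1, \dots, n \\
        \lambda_i &\sim G \\
        G &\sim \mathrm{DP}(\alpha, G_0)
      \end{align*}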

  3. Calibration of hydrological models using TOPEX/Poseidon radar altimetry observations

    NASA Astrophysics Data System (ADS)

    Sun, W.; Song, H.; Cheng, T.; Yu, J.

    2015-05-01

    This paper describes an approach for calibrating hydrological models using satellite radar altimetric observations of river water level at the basin outlet, aiming to provide a new direction for solving the calibration problem in ungauged basins where streamflow observations are unavailable. The methodology is illustrated by a case study in the Upper Mississippi basin, with water level data derived from the TOPEX/Poseidon (T/P) satellite. The Generalized Likelihood Uncertainty Estimation (GLUE) method is employed for model calibration and uncertainty analysis. The Nash-Sutcliffe efficiency of the averaged streamflow simulated by behavioural parameter sets is 64.50%, and the uncertainty bounds of the ensemble simulation embrace about 65% of the daily streamflow observations. These results indicate that the hydrological model has been calibrated effectively. At the same time, comparison with traditional calibration using streamflow data illustrates that the proposed method is only valuable for applications in ungauged basins.
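
    A minimal sketch of the GLUE steps described above is given below, with a toy model standing in for the hydrological model and the altimetry-derived observations; the threshold and all data are invented.

      # Hedged sketch of GLUE: sample parameter sets, keep "behavioural" ones
      # above a likelihood threshold, form bounds from the behavioural ensemble.
      import numpy as np

      rng = np.random.default_rng(0)

      def run_model(theta, t):
          return theta[0] * np.sin(0.5 * t) + theta[1]   # toy hydrograph

      t = np.linspace(0.0, 20.0, 50)
      obs = run_model([2.0, 5.0], t) + rng.normal(0, 0.3, 50)

      thetas = rng.uniform([0.0, 0.0], [5.0, 10.0], size=(2000, 2))
      sims = np.array([run_model(th, t) for th in thetas])
      nse = 1 - ((sims - obs) ** 2).sum(1) / ((obs - obs.mean()) ** 2).sum()

      behavioural = nse > 0.5                 # GLUE acceptance threshold
      lower, upper = np.percentile(sims[behavioural], [2.5, 97.5], axis=0)
      print(behavioural.sum(), "behavioural sets retained")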

  4. Genetic Algorithm Calibration of Probabilistic Cellular Automata for Modeling Mining Permit Activity

    USGS Publications Warehouse

    Louis, S.J.; Raines, G.L.

    2003-01-01

    We use a genetic algorithm to calibrate a spatially and temporally resolved cellular automaton to model mining activity on public land in Idaho and western Montana. The genetic algorithm searches through a space of transition rule parameters of a two-dimensional cellular automaton model to find rule parameters that fit observed mining activity data. Previous work by one of the authors in calibrating the cellular automaton took weeks; the genetic algorithm takes a day and produces rules leading to about the same (or better) fit to observed data. These preliminary results indicate that genetic algorithms are a viable tool for calibrating cellular automata in this application. Experience gained during the calibration of this cellular automaton suggests that mineral resource information is a critical factor in the quality of the results. With automated calibration, further refinements of how the mineral-resource information is provided to the cellular automaton will probably improve our model.
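
    A hedged sketch of such a genetic-algorithm calibration loop is given below; fitness() comparing simulated to observed mining activity is a stand-in, as are the population size, crossover, and mutation settings.

      # Hedged sketch: GA search over transition-rule parameters.
      import numpy as np

      rng = np.random.default_rng(0)

      def fitness(rule):                      # higher is better (toy target)
          return -np.sum((rule - 0.42) ** 2)

      pop = rng.uniform(0, 1, size=(50, 8))   # 8 transition-rule parameters
      for gen in range(100):
          f = np.array([fitness(p) for p in pop])
          parents = pop[np.argsort(f)[-25:]]          # truncation selection
          a, b = parents[rng.integers(25, size=(2, 50))]
          cut = rng.random((50, 8)) < 0.5             # uniform crossover
          pop = np.where(cut, a, b)
          mut = rng.random((50, 8)) < 0.02            # mutation
          pop[mut] = rng.uniform(0, 1, mut.sum())
      best = pop[np.argmax([fitness(p) for p in pop])]
      print(best.round(3))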

  5. Modeling a self-calibrating thermocouple for use in a smart temperature measurement system

    SciTech Connect

Ruppel, F.R.

    1990-12-01

    A finite-difference computer-simulation program was developed to explain the thermodynamic behavior of the self-calibrating thermocouple. Based on a literature review and simulation analysis, a method was developed to recognize which point on the time-temperature curve is the calibration point. A description of the model and results of parametric studies are given.

  6. Calibration for nonlinear mixed effects models: an application to the withdrawal time prediction.

    PubMed

    Concordet, D; Nunez, O G

    2000-12-01

    We propose calibration methods for nonlinear mixed effects models. Using an estimator whose asymptotic properties are known, four different statistics are used to perform the calibration. Simulations are carried out to compare the performance of these statistics. Finally, the milk discard time prediction of an antibiotic, which has motivated this study, is performed on real data.

  7. Development, Calibration and Application of Runoff Forecasting Models for the Allegheny River Basin

    DTIC Science & Technology

    1988-06-01

    coefficients are based on calibration with higher flows. The calibrated HECIF models are now in day-to-day use in the Pittsburgh District.

  8. Observational calibration of the projection factor of Cepheids. III. The long-period Galactic Cepheid RS Puppis

    NASA Astrophysics Data System (ADS)

    Kervella, Pierre; Trahin, Boris; Bond, Howard E.; Gallenne, Alexandre; Szabados, Laszlo; Mérand, Antoine; Breitfelder, Joanne; Dailloux, Julien; Anderson, Richard I.; Fouqué, Pascal; Gieren, Wolfgang; Nardetto, Nicolas; Pietrzyński, Grzegorz

    2017-04-01

    The projection factor (p-factor) is an essential component of the classical Baade-Wesselink (BW) technique, which is commonly used to determine the distances to pulsating stars. It is a multiplicative parameter used to convert radial velocities into pulsational velocities. As the BW distances are linearly proportional to the p-factor, its accurate calibration for Cepheids is of critical importance for the reliability of their distance scale. We focus on the observational determination of the p-factor of the long-period Cepheid RS Pup (P = 41.5 days). This star is particularly important as this is one of the brightest Cepheids in the Galaxy and an analog of the Cepheids used to determine extragalactic distances. An accurate distance of 1910 ± 80 pc (± 4.2%) has recently been determined for RS Pup using the light echoes propagating in its circumstellar nebula. We combine this distance with new VLTI/PIONIER interferometric angular diameters, photometry, and radial velocities to derive the p-factor of RS Pup using the code Spectro-Photo-Interferometry of Pulsating Stars (SPIPS). We obtain p = 1.250 ± 0.064 (± 5.1%), defined for cross-correlation radial velocities. Together with measurements from the literature, the p-factor of RS Pup confirms the good agreement of a constant-p model with the observations. We conclude that the p-factor of Cepheids is constant or mildly variable over a broad range of periods (3.7 to 41.5 days). Based on observations collected at the European Organisation for Astronomical Research in the Southern Hemisphere under ESO programs 093.D-0316(A), 094.D-0773(B), 096.D-0341(A) and 098.D-0067(A). Based in part on observations with the 1.3 m telescope operated by the SMARTS Consortium at Cerro Tololo Interamerican Observatory.
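
    In hedged shorthand, the BW degeneracy that makes the p-factor critical can be written as follows (sign conventions vary; the notation here is assumed, not the authors'):

      % The angular-diameter variation follows the integrated pulsational
      % velocity p * v_rad, so the distance d and the p-factor are fully
      % degenerate and p must be calibrated externally (here via the
      % light-echo distance of RS Pup). v_gamma is the systemic velocity.
      \theta(t) \simeq \theta_0 - \frac{2\,p}{d}
        \int_{t_0}^{t} \left[ v_\mathrm{rad}(t') - v_\gamma \right] \, dt'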

  9. Improved Multivariate Calibration Models for Corn Stover Feedstock and Dilute-Acid Pretreated Corn Stover

    SciTech Connect

    Wolfrum, E. J.; Sluiter, A. D.

    2009-01-01

    We have studied rapid calibration models to predict the composition of a variety of biomass feedstocks by correlating near-infrared (NIR) spectroscopic data to compositional data produced using traditional wet chemical analysis techniques. The rapid calibration models are developed using multivariate statistical analysis of the spectroscopic and wet chemical data. This work discusses the latest versions of the NIR calibration models for corn stover feedstock and dilute-acid pretreated corn stover. Measures of the calibration precision and uncertainty are presented. No statistically significant differences (p = 0.05) are seen between NIR calibration models built using different mathematical pretreatments. Finally, two common algorithms for building NIR calibration models are compared; no statistically significant differences (p = 0.05) are seen for the major constituents glucan, xylan, and lignin, but the algorithms did produce different predictions for total extractives. A single calibration model combining the corn stover feedstock and dilute-acid pretreated corn stover samples gave less satisfactory predictions than the separate models.
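
    A hedged sketch of such a multivariate NIR calibration, using partial least squares on simulated spectra in place of the report's actual chemometric pipeline, is given below.

      # Hedged sketch: PLS regression from NIR spectra to a wet-chemistry
      # constituent (e.g., glucan %); spectra and reference values simulated.
      import numpy as np
      from sklearn.cross_decomposition import PLSRegression
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(0)
      X = rng.standard_normal((120, 700))     # 120 spectra, 700 wavelengths
      y = 2.0 * X[:, 100] + X[:, 400] + rng.normal(0, 0.1, 120)

      pls = PLSRegression(n_components=10)
      print(cross_val_score(pls, X, y, cv=5, scoring="r2").mean())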

  10. Exploring a Three-Level Model of Calibration Accuracy

    ERIC Educational Resources Information Center

    Schraw, Gregory; Kuch, Fred; Gutierrez, Antonio P.; Richmond, Aaron S.

    2014-01-01

    We compared 5 different statistics (i.e., G index, gamma, "d'", sensitivity, specificity) used in the social sciences and medical diagnosis literatures to assess calibration accuracy in order to examine the relationship among them and to explore whether one statistic provided a best fitting general measure of accuracy. College…

  12. System-Wide Calibration of River System Models: Opportunities and Challenges

    NASA Astrophysics Data System (ADS)

    Kim, S. S. H.; Hughes, J. D.; Dutta, D.; Vaze, J.

    2014-12-01

    Semi-distributed river system models are traditionally calibrated using a reach-by-reach approach that starts from headwater gauges and moves downstream toward the end of the system. Such a calibration method poses a unique problem, since errors related to over-fitting, poor gauging data, and uncertain physical connections are passed downstream. Reach-by-reach calibration, while efficient, cannot compensate for the limited or poor calibration data of some gauges. To overcome these limitations, a system calibration approach is proposed in which all river reaches within a basin are calibrated together using a global objective function over all streamflow gauges. In this approach, relative weights can be assigned in the global objective function to different gauges based on the magnitude and quality of the available data. The system calibration approach was implemented in a river network covering 11 streamflow gauges within the Murrumbidgee catchment (Australia). This study optimises flow at the selected gauges within the river network simultaneously (36 calibrated parameters) utilising a process-based semi-distributed river system model. The model includes processes such as routing, localised runoff, irrigation diversion, overbank flow, and losses to groundwater. Goodness of fit is evaluated at the 11 gauges, and a flow-based weighting scheme is employed to find posterior distributions of parameters using Approximate Bayesian Computation. The method is evaluated against a reach-by-reach calibration scheme. The comparison shows that the system calibration approach systematically de-values poor-quality gauges and thereby provides an overall improved basin-wide goodness-of-fit. Clusters of viable parameter sets are determined from the posterior distributions and each is examined to assess the effects of parameter uncertainty on internal model states. Such a method of calibration provides a lot more…
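
    A hedged sketch of the system-calibration objective, one weighted score across all gauges rather than reach-by-reach fits, is given below; simulate_basin(), the weights, and the toy data are stand-ins.

      # Hedged sketch: one global objective over all gauges, with per-gauge
      # weights reflecting data quality and magnitude.
      import numpy as np

      def global_objective(params, obs_by_gauge, weights, simulate_basin):
          sims = simulate_basin(params)        # dict: gauge -> simulated flow
          score = 0.0
          for g, obs in obs_by_gauge.items():
              nse = 1 - np.sum((obs - sims[g])**2) / np.sum((obs - obs.mean())**2)
              score += weights[g] * nse        # poor gauges get low weight
          return -score                        # minimize negative weighted NSE

      obs = {"g1": np.array([1.0, 2.0, 3.0]), "g2": np.array([2.0, 2.0, 4.0])}
      w = {"g1": 1.0, "g2": 0.3}
      sim = lambda p: {g: p * o for g, o in obs.items()}   # toy basin model
      print(global_objective(1.05, obs, w, sim))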

  13. A sequential approach to calibrate ecosystem models with multiple time series data

    NASA Astrophysics Data System (ADS)

    Oliveros-Ramos, Ricardo; Verley, Philippe; Echevin, Vincent; Shin, Yunne-Jai

    2017-02-01

    When models are aimed to support decision-making, their credibility is essential to consider. Model fitting to observed data is one major criterion for assessing such credibility. However, because the complexity of ecosystem models makes their calibration challenging, the scientific community has given more attention to the exploration of model behavior than to rigorous comparison with observations. This work highlights some issues related to the comparison of complex ecosystem models to data and proposes a methodology for a sequential multi-phase calibration (or parameter estimation) of ecosystem models. We first propose two criteria to classify the parameters of a model: the model dependency and the time variability of the parameters. These criteria, together with the availability of approximate initial estimates, are then used as decision rules to determine which parameters need to be estimated and their precedence order in the sequential calibration process. The end-to-end (E2E) ecosystem model ROMS-PISCES-OSMOSE applied to the Northern Humboldt Current Ecosystem is used as an illustrative case study. The model is calibrated using an evolutionary algorithm and a likelihood approach to fit time series data of landings, abundance indices, and catch-at-length distributions from 1992 to 2008. Testing different calibration schemes with respect to the number of phases, the precedence of the parameters' estimation, and the consideration of time-varying parameters, the results show that a multiple-phase calibration conducted under our criteria improved the model fit.

  14. Calibration of the γ-Equation Transition Model for High Reynolds Flows at Low Mach

    NASA Astrophysics Data System (ADS)

    Colonia, S.; Leble, V.; Steijl, R.; Barakos, G.

    2016-09-01

    The numerical simulation of flows over large-scale wind turbine blades without considering the transition from laminar to fully turbulent flow may result in incorrect estimates of the blade loads and performance. Thanks to its relative simplicity and promising results, the local-correlation-based transition modelling concept represents a valid way to include transitional effects in practical CFD simulations. However, the model involves coefficients that need tuning. In this paper, the γ-equation transition model is assessed and calibrated for a wide range of Reynolds numbers at low Mach number, as needed for wind turbine applications. An aerofoil is used to evaluate the original model and to calibrate it, while a large-scale wind turbine blade is employed to show that the calibrated model can lead to reliable solutions for complex three-dimensional flows. The calibrated model shows promising results for both two-dimensional and three-dimensional flows, even if cross-flow instabilities are neglected.

  15. A regional calibration scheme for a distributed hydrologic model based on a copula dissimilarity measure

    NASA Astrophysics Data System (ADS)

    Kumar, R.; Samaniego, L. E.

    2011-12-01

    Spatially distributed hydrologic models at the mesoscale are based on the conceptualization and generalization of hydrological processes, and therefore require parameter adjustment for successful application at a given scale. Automatic computer-based algorithms are commonly used for this calibration. While such algorithms can deliver much faster and more efficient results than traditional manual calibration, they are also prone to overtraining a parameter set for a given catchment. As a result, the transferability of model parameters from a calibration site to an uncalibrated site is limited. In this study, we propose a regional multi-basin calibration scheme to prevent the overtraining of model parameters in a specific catchment. The idea is to split the available catchments into two disjoint groups such that catchments in the first group are used for calibration (i.e., for minimization or maximization of the objective functions), while catchments in the other group are used for cross-validation of the model performance for each generated parameter set. The calibration process should be stopped if the model shows a significant decrease in performance at the cross-validation catchments while performance at the calibration sites continues to increase. Hydrologically diverse catchments were selected as members of both the calibration and cross-validation groups to obtain a robust regional parameter set. A dissimilarity measure based on runoff and antecedent precipitation copulas was used for the selection of the disjoint sets. The proposed methodology was used to calibrate transfer function parameters of a distributed mesoscale hydrologic model (mHM), whose parameter fields are linked to catchment characteristics through a set of transfer functions using a multiscale parameter regionalisation method. This study was carried out in 106 south German catchments ranging in size from 4 km² to 12 700 km². Initial test results…

  16. Regression Model Term Selection for the Analysis of Strain-Gage Balance Calibration Data

    NASA Technical Reports Server (NTRS)

    Ulbrich, Norbert Manfred; Volden, Thomas R.

    2010-01-01

    The paper discusses the selection of regression model terms for the analysis of wind tunnel strain-gage balance calibration data. Different function class combinations are presented that may be used to analyze calibration data using either a non-iterative or an iterative method. The role of the intercept term in a regression model of calibration data is reviewed. In addition, useful algorithms and metrics originating from linear algebra and statistics are recommended that will help an analyst (i) to identify and avoid both linear and near-linear dependencies between regression model terms and (ii) to make sure that the selected regression model of the calibration data uses only statistically significant terms. Three different tests are suggested that may be used to objectively assess the predictive capability of the final regression model of the calibration data. These tests use both the original data points and regression model independent confirmation points. Finally, data from a simplified manual calibration of the Ames MK40 balance is used to illustrate the application of some of the metrics and tests to a realistic calibration data set.
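
    A hedged sketch of two of the recommended checks, a condition-number test for near-linear dependencies between candidate terms and a significance screen on the fitted coefficients, is given below on invented balance-load data.

      # Hedged sketch: dependency and significance checks for candidate
      # regression terms of a calibration model.
      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(0)
      N, F = rng.standard_normal((80, 2)).T               # two balance loads
      X = np.column_stack([N, F, N * F, N**2])            # candidate terms
      y = 1.0 + 2.0 * N + 0.5 * N * F + rng.normal(0, 0.1, 80)

      Xc = sm.add_constant(X)
      print("condition number:", np.linalg.cond(Xc))      # dependency check
      fit = sm.OLS(y, Xc).fit()
      keep = fit.pvalues < 0.05                           # significance screen
      print(fit.pvalues.round(4), keep)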

  17. Matching Images to Models: Camera Calibration for 3-D Surface Reconstruction

    NASA Technical Reports Server (NTRS)

Morris, Robin D.; Smelyanskiy, Vadim N.; Cheeseman, Peter C.; Norvig, Peter (Technical Monitor)

    2001-01-01

    In a previous paper we described a system which recursively recovers a super-resolved three dimensional surface model from a set of images of the surface. In that paper we assumed that the camera calibration for each image was known. In this paper we solve two problems. Firstly, if an estimate of the surface is already known, the problem is to calibrate a new image relative to the existing surface model. Secondly, if no surface estimate is available, the relative camera calibration between the images in the set must be estimated. This will allow an initial surface model to be estimated. Results of both types of estimation are given.

  18. Seismology on a Comet: Calibration Measurements, Modeling and Inversion

    NASA Astrophysics Data System (ADS)

    Faber, C.; Hoppe, J.; Knapmeyer, M.; Fischer, H.; Seidensticker, K. J.

    2011-12-01

    The Rosetta mission was launched to comet 67P/Churyumov-Gerasimenko in 2004. It will finally reach the comet and deliver the lander Philae to the surface of the nucleus in November 2014. The lander carries ten experiments, one of which is the Surface Electric Sounding and Acoustic Monitoring Experiment (SESAME). Part of this experiment is the Comet Acoustic Surface Sounding Experiment (CASSE), housed in the three feet of the lander. The primary goal of CASSE is to determine the elastic parameters of the surface material, such as Young's modulus and Poisson's ratio. Additional goals are the determination of shallow structure, the quantification of porosity, and the location of activity spots and of thermally and impact-induced cometary activity. We conduct calibration measurements with accelerometers identical to the flight model. The goal of these measurements is to develop inversion procedures for travel times and to estimate the accuracy that CASSE can achieve in terms of elastic wave velocity, elastic parameters, and source location. The experiments are conducted mainly on sandy soil, in dry, wet, or frozen conditions, and away from buildings with their reflecting walls and artificial noise sources. We expect that natural sources, such as thermal cracking at sunrise and sunset, can be located to an accuracy of about 10 degrees in direction and a few decimeters (1σ) in distance, if occurring within the sensor triangle, from first arrivals alone. The accuracy of the direction is essentially independent of the distance, whereas distance determination depends critically on the identification of later arrivals. Determination of elastic wave velocities on the comet will be conducted with controlled sources at known positions and is likely to achieve an accuracy of σ = 15% for the velocity of the first arriving wave. Limitations are due to the fixed source-receiver geometry and the wavelength emitted by the CASSE piezo-ceramic sources. In addition to the…

  19. Effects of temporal and spatial resolution of calibration data on integrated hydrologic water quality model identification

    NASA Astrophysics Data System (ADS)

    Jiang, Sanyuan; Jomaa, Seifeddine; Büttner, Olaf; Rode, Michael

    2014-05-01

    Hydrological water quality modeling is increasingly used for investigating runoff and nutrient transport processes as well as watershed management, but it is largely unclear how data availability determines model identification. In this study, the HYPE (HYdrological Predictions for the Environment) model, a process-based, semi-distributed hydrological water quality model, was applied in two mesoscale catchments (Selke, 463 km2, and Weida, 99 km2) located in central Germany to simulate discharge and inorganic nitrogen (IN) transport. PEST and DREAM(ZS) were combined with the HYPE model to conduct parameter calibration and uncertainty analysis. A split-sample test was used for model calibration (1994-1999) and validation (1999-2004). IN concentration and daily IN load were found to be highly correlated with discharge, indicating that IN leaching is mainly controlled by runoff. Both dynamics and balances of water and IN load were well captured, with NSE greater than 0.83 during the validation period. Multi-objective calibration (calibrating hydrological and water quality parameters simultaneously) was found to outperform step-wise calibration in terms of model robustness. Multi-site calibration improved model performance at internal sites and decreased parameter posterior uncertainty and prediction uncertainty. Nitrogen-process parameters calibrated against continuous daily averages of nitrate-N concentration observations produced better and more robust simulations of IN concentration and load, and lower posterior parameter uncertainty and IN concentration prediction uncertainty, than calibration against discontinuous biweekly nitrate-N concentration measurements. Both PEST and DREAM(ZS) are efficient in parameter calibration. However, DREAM(ZS) is more sound in terms of parameter identification and uncertainty analysis than PEST because of its capability to evolve parameter posterior distributions and estimate prediction uncertainty based on global
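
    For reference, the Nash-Sutcliffe efficiency (NSE) quoted above can be computed as follows; this is a generic sketch, not HYPE or PEST code:

        import numpy as np

        def nse(sim, obs):
            """Nash-Sutcliffe efficiency: 1 is a perfect fit; 0 means the model
            is no better than predicting the observed mean."""
            sim, obs = np.asarray(sim, float), np.asarray(obs, float)
            return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)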

  20. Calibration drift in regression and machine learning models for acute kidney injury.

    PubMed

    Davis, Sharon E; Lasko, Thomas A; Chen, Guanhua; Siew, Edward D; Matheny, Michael E

    2017-03-31

    Predictive analytics create opportunities to incorporate personalized risk estimates into clinical decision support. Models must be well calibrated to support decision-making, yet calibration deteriorates over time. This study explored the influence of modeling methods on performance drift and connected observed drift with data shifts in the patient population. Using 2003 admissions to Department of Veterans Affairs hospitals nationwide, we developed 7 parallel models for hospital-acquired acute kidney injury using common regression and machine learning methods, validating each over 9 subsequent years. Discrimination was maintained for all models. Calibration declined as all models increasingly overpredicted risk. However, the random forest and neural network models maintained calibration across ranges of probability, capturing more admissions than did the regression models. The magnitude of overprediction increased over time for the regression models while remaining stable and small for the machine learning models. Changes in the rate of acute kidney injury were strongly linked to increasing overprediction, while changes in predictor-outcome associations corresponded with diverging patterns of calibration drift across methods. Efficient and effective updating protocols will be essential for maintaining accuracy of, user confidence in, and safety of personalized risk predictions to support decision-making. Model updating protocols should be tailored to account for variations in calibration drift across methods and respond to periods of rapid performance drift rather than be limited to regularly scheduled annual or biannual intervals.
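
    A common way to quantify the overprediction described above is calibration-in-the-large, the ratio of observed to predicted event rates, tracked separately for each validation year; the sketch below is illustrative only, and yearly_data is a hypothetical container:

        import numpy as np

        def observed_to_expected(y_true, p_pred):
            """Calibration-in-the-large: O/E ratio; values < 1 mean overprediction."""
            return float(np.mean(y_true) / np.mean(p_pred))

        # drift check: score each validation year separately
        # for year, (y, p) in sorted(yearly_data.items()):
        #     print(year, observed_to_expected(y, p))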

  1. Spatially-distributed Calibration of Two Macroscale Hydrologic Models Across the Columbia River Basin

    NASA Astrophysics Data System (ADS)

    Chegwidden, O.; Xiao, M.; Rupp, D. E.; Stumbaugh, M. R.; Hamman, J.; Pan, M.; Nijssen, B.

    2015-12-01

    Hydrologic models are often calibrated to streamflow observations at discrete points along a river network. Even if the area contributing to each flow location is discretized into multiple model elements, the calibration parameters are typically adjusted uniformly, either by setting them to the same value or by transforming them in the same way (for example, multiplying each parameter value by a given factor). Such a procedure typically results in sharp gradients in calibrated parameters between neighboring subbasins and disregards parameter heterogeneity at the subbasin scale. Here we apply a streamflow disaggregation procedure to develop daily, spatially-distributed runoff fields at the same resolution as the model application. We then use these fields to calibrate selected model parameters for each model grid cell independently. We have implemented two hydrologic models (the Variable Infiltration Capacity model and the Precipitation Runoff Modeling System) across the Columbia River Basin plus the coastal drainages in Oregon and Washington at a subdaily timestep and a spatial resolution of 1/16 degree or ~6 km, resulting in 23,929 individual model grid cells. All model grid cells are calibrated independently to the distributed runoff fields using the shuffled complex evolution method and the Kling-Gupta Efficiency (KGE) as the objective function. The KGE was calculated on a weekly time step to minimize the effects of timing errors in the disaggregated runoff fields. We will present calibrated parameter fields and then discuss their structure (or lack thereof), which can provide important insight into parameter identifiability and uncertainty.
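
    For reference, the Kling-Gupta Efficiency objective, evaluated on weekly means as described above, can be sketched as follows (generic illustration, not the study's code):

        import numpy as np

        def kge(sim, obs):
            """Kling-Gupta efficiency from correlation (r), variability ratio
            (alpha), and bias ratio (beta); 1 is a perfect fit."""
            r = np.corrcoef(sim, obs)[0, 1]
            alpha = np.std(sim) / np.std(obs)
            beta = np.mean(sim) / np.mean(obs)
            return 1.0 - np.sqrt((r - 1) ** 2 + (alpha - 1) ** 2 + (beta - 1) ** 2)

        def weekly_kge(sim_daily, obs_daily):
            """Aggregate daily series to weekly means before scoring."""
            n = len(sim_daily) // 7 * 7
            sim_w = np.asarray(sim_daily[:n], float).reshape(-1, 7).mean(axis=1)
            obs_w = np.asarray(obs_daily[:n], float).reshape(-1, 7).mean(axis=1)
            return kge(sim_w, obs_w)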

  2. Calibration of a distributed flood forecasting model with input uncertainty using a Bayesian framework

    NASA Astrophysics Data System (ADS)

    Li, Mingliang; Yang, Dawen; Chen, Jinsong; Hubbard, Susan S.

    2012-08-01

    In the process of calibrating distributed hydrological models, accounting for input uncertainty is important, yet challenging. In this study, we develop a Bayesian model to estimate parameters associated with a geomorphology-based hydrological model (GBHM). The GBHM model uses geomorphic characteristics to simplify model structure and physically based methods to represent hydrological processes. We divide the observed discharge into low- and high-flow data, and use the first-order autoregressive model to describe their temporal dependence. We consider relative errors in rainfall as spatially distributed variables and estimate them jointly with the GBHM parameters. The joint posterior probability distribution is explored using Markov chain Monte Carlo methods, which include Metropolis-Hastings, delayed rejection adaptive Metropolis, and Gibbs sampling methods. We evaluate the Bayesian model using both synthetic and field data sets. The synthetic case study demonstrates that the developed method generally is effective in calibrating GBHM parameters and in estimating their associated uncertainty. The calibration ignoring input errors has lower accuracy and lower reliability compared to the calibration that includes estimation of the input errors, especially under model structure uncertainty. The field case study shows that calibration of GBHM parameters under complex field conditions remains a challenge. Although jointly estimating input errors and GBHM parameters improves the continuous ranked probability score and the consistency of the predictive distribution with the observed data, the improvement is incremental. To better calibrate parameters in a distributed model, such as GBHM here, we need to develop a more complex model and incorporate much more information.
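
    A stripped-down illustration of the ingredients named above, a first-order autoregressive error likelihood and a random-walk Metropolis-Hastings sampler, is given below; it is a generic sketch that conditions on the first residual, not the authors' GBHM implementation:

        import numpy as np

        def ar1_loglik(residuals, rho, sigma):
            """Gaussian log-likelihood of model residuals under a first-order
            autoregressive error model (conditioning on the first residual)."""
            e = np.asarray(residuals, float)
            innov = e[1:] - rho * e[:-1]
            n = innov.size
            return (-0.5 * n * np.log(2.0 * np.pi * sigma ** 2)
                    - 0.5 * np.sum(innov ** 2) / sigma ** 2)

        def metropolis(logpost, theta0, step, n_iter=5000, seed=0):
            """Random-walk Metropolis-Hastings over the joint parameter vector."""
            rng = np.random.default_rng(seed)
            theta = np.array(theta0, float)
            lp = logpost(theta)
            chain = np.empty((n_iter, theta.size))
            for i in range(n_iter):
                prop = theta + step * rng.standard_normal(theta.size)
                lp_prop = logpost(prop)
                if np.log(rng.uniform()) < lp_prop - lp:   # accept/reject
                    theta, lp = prop, lp_prop
                chain[i] = theta
            return chain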

  3. Weighting observations in the context of calibrating ground-water models

    USGS Publications Warehouse

    Hill, M.C.; Tiedeman, C.R.

    2002-01-01

    This paper investigates four issues related to weighting observations in the context of ground-water models calibrated with nonlinear regression: (1) terminology, (2) determining values for the weighting, (3) measurement and model errors, and (4) the effect weighting can have on the accuracy of calibrated models and measures of uncertainty. It is shown that the confusing aspects of weighting can be managed, and are not a practical barrier to using regression methods.
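
    A convention commonly discussed in this literature sets each observation's weight to the inverse of its error variance; a minimal weighted-least-squares sketch (linear case for brevity, whereas the paper treats nonlinear regression):

        import numpy as np

        def weighted_least_squares(X, y, sigma):
            """Solve min_b sum_i w_i * (y_i - x_i . b)^2 with w_i = 1/sigma_i^2."""
            w = 1.0 / np.asarray(sigma, float) ** 2
            Xw = X * w[:, None]                      # row-scale X by the weights
            return np.linalg.solve(X.T @ Xw, Xw.T @ y)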

  4. Experiments for Calibration and Validation of Plasticity and Failure Material Modeling: 6061-T651 Aluminum

    SciTech Connect

    McFadden, Sam X.; Korellis, John S.; Lee, Kenneth L.; Rogillio, Brendan R.; Hatch, Paul W.

    2008-03-01

    Experimental data for material plasticity and failure model calibration and validation were obtained from 6061-T651 aluminum, in the form of a 4-in. diameter extruded rod. Model calibration data were taken from smooth tension, notched tension, and shear tests. Model validation data were provided from experiments using thin-walled tube specimens subjected to path-dependent combinations of internal pressure, extension, and torsion.

  5. Validation and calibration of structural models that combine information from multiple sources.

    PubMed

    Dahabreh, Issa J; Wong, John B; Trikalinos, Thomas A

    2017-02-01

    Mathematical models that attempt to capture structural relationships between their components and combine information from multiple sources are increasingly used in medicine. Areas covered: We provide an overview of methods for model validation and calibration and survey studies comparing alternative approaches. Expert commentary: Model validation entails a confrontation of models with data, background knowledge, and other models, and can inform judgments about model credibility. Calibration involves selecting parameter values to improve the agreement of model outputs with data. When the goal of modeling is quantitative inference on the effects of interventions or forecasting, calibration can be viewed as estimation. This view clarifies issues related to parameter identifiability and facilitates formal model validation and the examination of consistency among different sources of information. In contrast, when the goal of modeling is the generation of qualitative insights about the modeled phenomenon, calibration is a rather informal process for selecting inputs that result in model behavior that roughly reproduces select aspects of the modeled phenomenon and cannot be equated to an estimation procedure. Current empirical research on validation and calibration methods consists primarily of methodological appraisals or case-studies of alternative techniques and cannot address the numerous complex and multifaceted methodological decisions that modelers must make. Further research is needed on different approaches for developing and validating complex models that combine evidence from multiple sources.

  6. Revisiting Runoff Model Calibration: Airborne Snow Observatory Results Allow Improved Modeling Results

    NASA Astrophysics Data System (ADS)

    McGurk, B. J.; Painter, T. H.

    2014-12-01

    Deterministic snow accumulation and ablation simulation models are widely used by runoff managers throughout the world to predict runoff quantities and timing. Model fitting is typically based on matching modeled runoff volumes and timing with observed flow time series at a few points in the basin. In recent decades, sparse networks of point measurements of the mountain snowpacks have been available to compare with modeled snowpack, but the comparability of results from a snow sensor or course to model polygons of 5 to 50 sq. km is suspect. However, snowpack extent, depth, and derived snow water equivalent have been produced by the NASA/JPL Airborne Snow Observatory (ASO) mission for the springs of 2013 and 2014 in the Tuolumne River basin above Hetch Hetchy Reservoir. These high-resolution snowpack data have exposed the weakness in a model calibration based on runoff alone. The U.S. Geological Survey's Precipitation Runoff Modeling System (PRMS) calibration that was based on 30 years of inflow to Hetch Hetchy produces reasonable inflow results, but modeled spatial snowpack location and water quantity diverged significantly from the weekly measurements made by ASO during the two ablation seasons. The reason is that the PRMS model has many flow paths, storages, and water transfer equations, and a calibrated outflow time series can be right for many wrong reasons. The addition of a detailed knowledge of snow extent and water content constrains the model so that it is a better representation of the actual watershed hydrology. The mechanics of recalibrating PRMS to the ASO measurements will be described, and comparisons in observed versus modeled flow for both a small subbasin and the entire Hetch Hetchy basin will be shown. The recalibrated model provided a better fit to the snowmelt recession, a key factor for water managers as they balance declining inflows with demand for power generation and ecosystem releases during the final months of snowmelt runoff.

  7. Calibration of denitrifying activity of polyphosphate accumulating organisms in an extended ASM2d model.

    PubMed

    García-Usach, F; Ribes, J; Ferrer, J; Seco, A

    2010-10-01

    This paper presents the results of an experimental study for the modelling and calibration of denitrifying activity of polyphosphate accumulating organisms (PAOs) in full-scale WWTPs that incorporate simultaneous nitrogen and phosphorus removal. The convenience of using different yields under aerobic and anoxic conditions for modelling biological phosphorus removal processes with the ASM2d has been demonstrated. Thus, parameter η(PAO) in the model is given a physical meaning and represents the fraction of PAOs that are able to follow the DPAO metabolism. Using stoichiometric relationships, which are based on assumed biochemical pathways, the anoxic yields considered in the extended ASM2d can be obtained as a function of their respective aerobic yields. Thus, this modification does not mean an extra calibration effort to obtain the new parameters. In this work, an off-line calibration methodology has been applied to validate the model, where general relationships among stoichiometric parameters are proposed to avoid increasing the number of parameters to calibrate. The results have been validated through a UCT scheme pilot plant that is fed with municipal wastewater. The good concordance obtained between experimental and simulated values validates the use of anoxic yields as well as the calibration methodology. Deterministic modelling approaches, together with off-line calibration methodologies, are proposed to assist in decision-making about further process optimization in biological phosphate removal, since parameter values obtained by off-line calibration give valuable information about the activated sludge process such as the amount of DPAOs in the system.

  8. Step wise, multiple objective calibration of a hydrologic model for a snowmelt dominated basin

    USGS Publications Warehouse

    Hay, L.E.; Leavesley, G.H.; Clark, M.P.; Markstrom, S.L.; Viger, R.J.; Umemoto, M.

    2006-01-01

    The ability to apply a hydrologic model to large numbers of basins for forecasting purposes requires a quick and effective calibration strategy. This paper presents a step wise, multiple objective, automated procedure for hydrologic model calibration. This procedure includes the sequential calibration of a model's simulation of solar radiation (SR), potential evapotranspiration (PET), water balance, and daily runoff. The procedure uses the Shuffled Complex Evolution global search algorithm to calibrate the U.S. Geological Survey's Precipitation Runoff Modeling System in the Yampa River basin of Colorado. This process assures that intermediate states of the model (SR and PET on a monthly mean basis), as well as the water balance and components of the daily hydrograph are simulated, consistently with measured values.

  9. Thermal Modeling Method Improvements for SAGE III on ISS

    NASA Technical Reports Server (NTRS)

    Liles, Kaitlin; Amundsen, Ruth; Davis, Warren; McLeod, Shawn

    2015-01-01

    The Stratospheric Aerosol and Gas Experiment III (SAGE III) instrument is the fifth in a series of instruments developed for monitoring aerosols and gaseous constituents in the stratosphere and troposphere. SAGE III will be delivered to the International Space Station (ISS) via the SpaceX Dragon vehicle. A detailed thermal model of the SAGE III payload, which consists of multiple subsystems, has been developed in Thermal Desktop (TD). Many innovative analysis methods have been used in developing this model; these will be described in the paper. This paper builds on a paper presented at TFAWS 2013, which described some of the initial developments of efficient methods for SAGE III. The current paper describes additional improvements that have been made since that time. To expedite the correlation of the model to thermal vacuum (TVAC) testing, the chambers and ground support equipment (GSE) for both TVAC chambers at Langley used to test the payload were incorporated within the thermal model. This allowed TVAC predictions and correlations to be run within the flight model, thus eliminating the need for separate models for TVAC. In one TVAC test, radiant lamps were used, which necessitated shooting rays from the lamps and running in both solar and IR wavebands. A new Dragon model was incorporated, which entailed a change in orientation; that change was made using an assembly, so that any potential additional new Dragon orbits could be added in the future without modification of the model. The Earth orbit parameters such as albedo and Earth infrared flux were incorporated as time-varying values that change over the course of the orbit; despite being required in one of the ISS documents, this had not been done before by any previous payload. All parameters such as initial temperature, heater voltage, and location of the payload are defined based on the case definition. For one component, testing was performed in both air and vacuum; incorporating the air convection in a submodel that was

  10. Automated model-based calibration of imaging spectrographs

    NASA Astrophysics Data System (ADS)

    Kosec, Matjaž; Bürmen, Miran; Tomaževič, Dejan; Pernuš, Franjo; Likar, Boštjan

    2012-03-01

    Hyper-spectral imaging has gained recognition as an important non-invasive research tool in the field of biomedicine. Among the variety of available hyperspectral imaging systems, systems comprising an imaging spectrograph, lens, wideband illumination source and a corresponding camera stand out for their short acquisition time and good signal-to-noise ratio. The individual images acquired by imaging spectrograph-based systems contain full spectral information along one spatial dimension. Due to imperfections in the camera lens and in particular in the optical components of the imaging spectrograph, the acquired images are subject to spatial and spectral distortions, resulting in scene-dependent nonlinear spectral degradations and spatial misalignments which need to be corrected. However, the existing correction methods require complex calibration setups and tedious manual involvement; therefore, the correction of the distortions is often neglected. Such a simplified approach can lead to significant errors in the analysis of the acquired hyperspectral images. In this paper, we present a novel fully automated method for correction of the geometric and spectral distortions in the acquired images. The method is based on automated non-rigid registration of the reference and acquired images corresponding to the proposed calibration object, which incorporates standardized spatial and spectral information. The obtained transformation was successfully used for sub-pixel correction of various hyperspectral images, resulting in significant improvement of the spectral and spatial alignment. It was found that the proposed calibration is highly accurate and suitable for routine use in applications involving either diffuse reflectance or transmittance measurement setups.

  11. Neuromusculoskeletal model self-calibration for on-line sequential bayesian moment estimation

    NASA Astrophysics Data System (ADS)

    Bueno, Diana R.; Montano, L.

    2017-04-01

    Objective. Neuromusculoskeletal models involve many subject-specific physiological parameters that need to be adjusted to adequately represent muscle properties. Traditionally, neuromusculoskeletal models have been calibrated with a forward-inverse dynamic optimization, which is time-consuming and infeasible for rehabilitation therapy. Non-self-calibrating algorithms have been applied to these models. To the best of our knowledge, the algorithm proposed in this work is the first on-line calibration algorithm for muscle models that allows a generic model to be adjusted to different subjects in a few steps. Approach. In this paper we propose a reformulation of the traditional muscle models that is able to sequentially estimate the kinetics (net joint moments) and also to perform full self-calibration (estimating the subject-specific internal parameters of the muscle from a set of arbitrary uncalibrated data), based on the unscented Kalman filter. The nonlinearity of the model, as well as its calibration problem, obliged us to adopt a sum-of-Gaussians filter suitable for nonlinear systems. Main results. This sequential Bayesian self-calibration algorithm achieves a complete muscle model calibration using as input only a dataset of uncalibrated sEMG and kinematics data. The approach is validated experimentally using data from the upper limbs of 21 subjects. Significance. The results show the feasibility of neuromusculoskeletal model self-calibration. This study will contribute to a better understanding of the generalization of muscle models for subject-specific rehabilitation therapies. Moreover, this work is very promising for rehabilitation devices such as electromyography-driven exoskeletons or prostheses.

  12. Calibrating corneal material model parameters using only inflation data: an ill-posed problem.

    PubMed

    Kok, S; Botha, N; Inglis, H M

    2014-12-01

    Goldmann applanation tonometry (GAT) is a method used to estimate the intraocular pressure by measuring the indentation resistance of the cornea. A popular approach to investigate the sensitivity of GAT results to material and geometry variations is to perform numerical modelling using the finite element method, for which a calibrated material model is required. These material models are typically calibrated using experimental inflation data by solving an inverse problem. In the inverse problem, the underlying material constitutive behaviour is inferred from the measured macroscopic response (chamber pressure versus apical displacement). In this study, a biomechanically motivated elastic fibre-reinforced corneal material model is chosen. The inverse problem of calibrating the corneal material model parameters using only experimental inflation data is demonstrated to be ill-posed, with small variations in the experimental data leading to large differences in the calibrated model parameters. This can result in different groups of researchers, calibrating their material model with the same inflation test data, drawing vastly different conclusions about the effect of material parameters on GAT results. It is further demonstrated that multiple loading scenarios, such as inflation as well as bending, would be required to reliably calibrate such a corneal material model.

  13. Visible spectroscopy calibration transfer model in determining pH of Sala mangoes

    NASA Astrophysics Data System (ADS)

    Yahaya, O. K. M.; MatJafri, M. Z.; Aziz, A. A.; Omar, A. F.

    2015-05-01

    The purpose of this study is to compare the efficiency of calibration transfer procedures among three spectrometers: two Ocean Optics Inc. spectrometers, the QE65000 and the Jaz, and an ASD FieldSpec 3, in measuring the pH of Sala mango by visible reflectance spectroscopy. This study evaluates the ability of these spectrometers to measure the pH of Sala mango by applying similar calibration algorithms through direct calibration transfer. This visible reflectance spectroscopy technique defines one spectrometer as the master instrument and another as the slave. For Set 1, the multiple linear regression (MLR) calibration model generated using the QE65000 spectrometer is transferred to the Jaz spectrometer and vice versa. The same technique is applied for Set 2, with the QE65000 model transferred to the FieldSpec 3 spectrometer and vice versa. For Set 1, the result showed that the QE65000 spectrometer established a calibration model with higher accuracy than that of the Jaz spectrometer. In addition, the calibration model developed on the Jaz spectrometer successfully predicted the pH of Sala mango measured using the QE65000 spectrometer, with a root mean square error of prediction (RMSEP) of 0.092 pH and a coefficient of determination (R2) of 0.892. Moreover, the best prediction result is obtained for Set 2 when the calibration model developed on the QE65000 spectrometer is successfully transferred to the FieldSpec 3, with R2 = 0.839 and RMSEP = 0.16 pH.
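
    The direct-transfer workflow described above can be sketched as follows: an MLR model fitted on master-instrument spectra is applied unchanged to slave-instrument spectra of the same samples and scored by RMSEP. This is a generic illustration, and the array names in the commented usage are hypothetical:

        import numpy as np

        def mlr_fit(X, y):
            """Ordinary least-squares MLR with an intercept."""
            A = np.column_stack([np.ones(len(y)), X])
            coef, *_ = np.linalg.lstsq(A, y, rcond=None)
            return coef

        def mlr_predict(coef, X):
            return coef[0] + X @ coef[1:]

        def rmsep(pred, ref):
            return float(np.sqrt(np.mean((np.asarray(pred) - np.asarray(ref)) ** 2)))

        # direct transfer: the master-instrument model is applied unchanged to
        # slave-instrument spectra of the same samples (array names hypothetical)
        # coef = mlr_fit(X_master, ph)
        # print(rmsep(mlr_predict(coef, X_slave), ph))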

  14. Utilization of Expert Knowledge in a Multi-Objective Hydrologic Model Automatic Calibration Process

    NASA Astrophysics Data System (ADS)

    Quebbeman, J.; Park, G. H.; Carney, S.; Day, G. N.; Micheletty, P. D.

    2016-12-01

    Spatially distributed continuous simulation hydrologic models have a large number of parameters for potential adjustment during the calibration process. Traditional manual calibration of such a modeling system is extremely laborious, which has historically motivated the use of automatic calibration procedures. With a large selection of model parameters, high degrees of objective-space fitness - measured with typical metrics such as Nash-Sutcliffe, Kling-Gupta, and RMSE - can easily be achieved using a range of evolutionary algorithms. A concern with this approach is the high degree of compensatory calibration, with many similarly performing solutions and yet grossly varying parameter-set solutions. To help alleviate this concern, and to mimic manual calibration processes, expert knowledge is proposed for inclusion within the multi-objective functions that evaluate the parameter decision space. As a result, Pareto solutions are identified with high degrees of fitness, but the parameter sets also maintain and utilize available expert knowledge, resulting in more realistic and consistent solutions. This process was tested using the joint SNOW-17 and Sacramento Soil Moisture Accounting (SAC-SMA) models within the Animas River basin in Colorado. Three different elevation zones, each with a range of parameters, resulted in over 35 model parameters being calibrated simultaneously. As a result, high degrees of fitness were achieved, in addition to the development of more realistic and consistent parameter sets such as those typically achieved during manual calibration procedures.

  15. The value of hydrograph partitioning curves for calibrating hydrological models in glacierized basins

    NASA Astrophysics Data System (ADS)

    He, Zhihua; Vorogushyn, Sergiy; Unger-Shayesteh, Katy; Gafurov, Abror; Merz, Bruno

    2017-04-01

    This study uses a novel method for calibrating a glacio-hydrological model based on hydrograph partitioning curves (HPC), and evaluates its value in comparison to multi-criteria optimization approaches which use glacier mass balance, satellite snow cover images and discharge. The HPCs are extracted from the observed flow hydrographs using, in addition, catchment precipitation and temperature gradients. They indicate the periods when the various runoff processes dominate the basin hydrograph. The annual cumulative curve of the difference between average daily temperature and melt threshold temperature over the basin, as well as the annual cumulative curve of average daily snowfall on the glacierized areas, are used to identify the start and end dates of snow and glacier ablation periods. Model parameters characterizing different runoff processes are calibrated on different HPCs in a stepwise and iterative way. Results show that 1) the HPC-based method guarantees model-internal consistency comparable to the multi-criteria calibration methods; 2) the HPC-based method presents higher parameter identifiability and improves the stability of calibrated parameter values across various calibration periods; and 3) the HPC-based method outperforms the other calibration methods in simulating the share of groundwater, as well as in reproducing the seasonal dynamics of snow and glacier melt. Our findings indicate the potential of HPCs to substitute multi-criteria methods for hydrological model calibration in glacierized basins where other data than discharge are often not available or very costly to obtain.
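
    One plausible reading of the cumulative-curve construction (our interpretation, not the authors' code): the cumulative sum of daily temperature minus the melt threshold falls while melt is absent and rises during ablation, so its minimum and maximum bracket the ablation period.

        import numpy as np

        def ablation_window(temp_daily, t_melt=0.0):
            """Start/end indices of the melt season from the annual cumulative
            curve of (T - T_melt): the curve declines while days stay below the
            threshold and climbs during the ablation period."""
            c = np.cumsum(np.asarray(temp_daily, float) - t_melt)
            return int(np.argmin(c)), int(np.argmax(c))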

  16. Efficient calibration of a distributed pde-based hydrological model using grid coarsening

    NASA Astrophysics Data System (ADS)

    von Gunten, D.; Wöhling, T.; Haslauer, C.; Merchán, D.; Causapé, J.; Cirpka, O. A.

    2014-11-01

    Partial-differential-equation based integrated hydrological models are now regularly used at catchment scale. They rely on the shallow water equations for surface flow and on the Richards equation for subsurface flow, allowing a spatially explicit representation of properties and states. However, these models usually come at high computational costs, which limit their accessibility to state-of-the-art methods of parameter estimation and uncertainty quantification, because these methods require a large number of model evaluations. In this study, we present an efficient model calibration strategy, based on a hierarchy of grid resolutions, each of them resolving the same zonation of subsurface and land-surface units. We first analyze which model outputs show the highest similarities between the original model and two differently coarsened grids. Then we calibrate the coarser models by comparing these similar outputs to the measurements. We finish the calibration using the fully resolved model, taking the result of the preliminary calibration as the starting point. We apply the proposed approach to the well monitored Lerma catchment in North-East Spain, using the model HydroGeoSphere. The original model grid with 80,000 finite elements was complemented with two other model variants with approximately 16,000 and 10,000 elements, respectively. Comparing the model results for these different grids, we observe differences in peak discharge, evapotranspiration, and near-surface saturation. Hydraulic heads and low flow, however, are very similar for all tested parameter sets, which allows the use of these variables to calibrate our model. The calibration results are satisfactory and the duration of the calibration has been greatly decreased by using different model grid resolutions.

  17. Calibration of a Hydrologic Model via Densely Distributed Soil Moisture Observations

    NASA Astrophysics Data System (ADS)

    Thorstensen, A. R.; Nguyen, P.; Hsu, K. L.; Sorooshian, S.

    2014-12-01

    The complexity of a catchment's physical heterogeneities is often addressed through calibration via observed streamflow. As hydrologic models move from lumped to distributed, and Earth observations increase in number and variety, the question is raised as to whether or not such distributed observations can be used to satisfy the possibly heterogenic calibration needs of a catchment. The goal of this study is to examine if calibration of a distributed hydrologic model using soil moisture observations can improve simulated streamflow. The NWS's Hydrology Laboratory Research Distributed Hydrologic Model (HL-RDHM) is used in this study. HL-RDHM uses the Sacramento Heat Transfer with enhanced Evapotranspiration for rainfall-runoff production and can convert conceptual storages to soil layers. This allows for calibration of conceptual parameters based on observed soil moisture profiles. HL-RDHM is calibrated using scalar multipliers of a-priori grids derived from soil surveys, with the premise that heterogeneity of these grids is correct. This assumption is relaxed to study the benefit of distributed calibration. Soil moisture measurements in the Turkey River Basin, which was equipped with 20 in-situ soil moisture sites for the Iowa Flood Studies campaign, were used for calibration of parameters related to soil moisture (i.e. storage and release parameters). The Shuffled Complex Evolution method was used to calibrate pixels collocated with in-situ probes based on soil moisture RMSE at point scale. Methods to allocate calibrated parameter values to remaining pixels include an averaging method, spatial interpolation, and a similarity method. Calibration was done for spring 2013, and validation for 2009 and 2011. Results show that calibration using stream gauges remains the superior method, especially for correlation. This is because calibration based on streamflow can correct peak timing by adjusting routing parameters. Such adjustments using soil moisture cannot be done

  18. Evaluation of different validation strategies and long term effects in NIR calibration models.

    PubMed

    Sileoni, Valeria; Marconi, Ombretta; Perretti, Giuseppe; Fantozzi, Paolo

    2013-12-01

    Stable and reliable NIR calibration models for barley malt quality assessment were developed and exhaustively evaluated. The measured parameters are: fine extract, fermentability, pH, soluble nitrogen, viscosity, friability and free-amino nitrogen. The reliability of the developed calibration models was evaluated by comparing classic leave-one-out internal validation with a more challenging approach exploiting a re-sampling scheme. The long-term effects, understood as possible alterations of the NIR method's predictive power due to variation between samples collected in different years, were evaluated through an external validation, which demonstrated the stability of the developed calibration models. Finally, the accuracy and the precision of the developed calibration models were evaluated in comparison with the reference methods. This exhaustive evaluation offers a realistic idea of the developed NIR methods' predictive power for future unknown samples and their application in the beer industry.
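
    As an illustration of the two validation schemes being contrasted, the sketch below compares leave-one-out with a repeated re-sampling scheme using scikit-learn; the PLS model, component count, and synthetic data are assumptions standing in for the barley malt spectra:

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression
        from sklearn.model_selection import (LeaveOneOut, RepeatedKFold,
                                             cross_val_score)

        rng = np.random.default_rng(0)
        X = rng.standard_normal((60, 200))                 # stand-in spectra
        y = X[:, :5].sum(axis=1) + 0.1 * rng.standard_normal(60)  # stand-in reference

        model = PLSRegression(n_components=8)              # assumed component count

        # classic leave-one-out internal validation
        rmse_loo = -cross_val_score(model, X, y, cv=LeaveOneOut(),
                                    scoring="neg_root_mean_squared_error").mean()

        # more challenging repeated re-sampling validation
        cv = RepeatedKFold(n_splits=5, n_repeats=20, random_state=0)
        rmse_rs = -cross_val_score(model, X, y, cv=cv,
                                   scoring="neg_root_mean_squared_error").mean()

        print(f"LOO RMSE: {rmse_loo:.3f}  re-sampled RMSE: {rmse_rs:.3f}")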

  19. Role of Imaging Spectrometer Data for Model-based Cross-calibration of Imaging Sensors

    NASA Technical Reports Server (NTRS)

    Thome, Kurtis John

    2014-01-01

    Site characterization benefits from imaging spectrometry to determine the spectral bi-directional reflectance of a well-understood surface. The presentation covers cross-calibration approaches, uncertainties, the role of imaging spectrometry, model-based site characterization, and application to product validation.

  20. Effect of photobleaching on calibration model development in biological Raman spectroscopy

    PubMed Central

    Barman, Ishan; Kong, Chae-Ryon; Singh, Gajendra P.; Dasari, Ramachandra R.

    2011-01-01

    A major challenge in performing quantitative biological studies using Raman spectroscopy lies in overcoming the influence of the dominant sample fluorescence background. Moreover, the prediction accuracy of a calibration model can be severely compromised by the quenching of the endogenous fluorophores, due to the introduction of spurious correlations between analyte concentrations and fluorescence levels. Apparently functional models can be obtained from such correlated samples, but they cannot be used successfully for prospective prediction. This work investigates the deleterious effects of photobleaching on prediction accuracy of implicit calibration algorithms, particularly for transcutaneous glucose detection using Raman spectroscopy. Using numerical simulations and experiments on physical tissue models, we show that the prospective prediction error can be substantially larger when the calibration model is developed on a photobleaching correlated dataset compared to an uncorrelated one. Furthermore, we demonstrate that the application of shifted subtracted Raman spectroscopy (SSRS) reduces the prediction errors obtained with photobleaching correlated calibration datasets compared to those obtained with uncorrelated ones. PMID:21280891

  1. CALIBRATION OF SUBSURFACE BATCH AND REACTIVE-TRANSPORT MODELS INVOLVING COMPLEX BIOGEOCHEMICAL PROCESSES

    EPA Science Inventory

    In this study, the calibration of subsurface batch and reactive-transport models involving complex biogeochemical processes was systematically evaluated. Two hypothetical nitrate biodegradation scenarios were developed and simulated in numerical experiments to evaluate the perfor...

  2. Ecologically-focused Calibration of Hydrological Models for Environmental Flow Applications

    NASA Astrophysics Data System (ADS)

    Adams, S. K.; Bledsoe, B. P.

    2015-12-01

    Hydrologic alteration resulting from watershed urbanization is a common cause of aquatic ecosystem degradation. Developing environmental flow criteria for urbanizing watersheds requires quantitative flow-ecology relationships that describe biological responses to streamflow alteration. Ideally, gaged flow data are used to develop flow-ecology relationships; however, biological monitoring sites are frequently ungaged. For these ungaged locations, hydrologic models must be used to predict streamflow characteristics through calibration and testing at gaged sites, followed by extrapolation to ungaged sites. Physically-based modeling of rainfall-runoff response has frequently utilized "best overall fit" calibration criteria, such as the Nash-Sutcliffe Efficiency (NSE), that do not necessarily focus on specific aspects of the flow regime relevant to biota of interest. This study investigates the utility of employing flow characteristics known a priori to influence regional biological endpoints as "ecologically-focused" calibration criteria compared to traditional, "best overall fit" criteria. For this study, 19 continuous HEC-HMS 4.0 models were created in coastal southern California and calibrated to hourly USGS streamflow gages with nearby biological monitoring sites using one "best overall fit" and three "ecologically-focused" criteria: NSE, Richards-Baker Flashiness Index (RBI), percent of time when the flow is < 1 cfs (%<1), and a Combined Calibration (RBI and %<1). Calibrated models were compared using calibration accuracy, environmental flow metric reproducibility, and the strength of flow-ecology relationships. Results indicate that "ecologically-focused" criteria can be calibrated with high accuracy and may provide stronger flow-ecology relationships than "best overall fit" criteria, especially when multiple "ecologically-focused" criteria are used in concert, despite inabilities to accurately reproduce additional types of ecological flow metrics to which the
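
    For reference, the two "ecologically-focused" criteria named above can be computed as follows (generic sketch; the exact formulations used in the study may differ):

        import numpy as np

        def rb_flashiness(q):
            """Richards-Baker Flashiness Index: summed day-to-day flow changes
            divided by total flow over the record."""
            q = np.asarray(q, float)
            return np.sum(np.abs(np.diff(q))) / np.sum(q)

        def pct_below(q, threshold=1.0):
            """Percent of time the flow is below a threshold (e.g. 1 cfs)."""
            return 100.0 * np.mean(np.asarray(q, float) < threshold)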

  3. Predictive sensor based x-ray calibration using a physical model

    SciTech Connect

    Fuente, Matias de la; Lutz, Peter; Wirtz, Dieter C.; Radermacher, Klaus

    2007-04-15

    Many computer assisted surgery systems are based on intraoperative x-ray images. To achieve reliable and accurate results these images have to be calibrated concerning geometric distortions, which can be distinguished between constant distortions and distortions caused by magnetic fields. Instead of using an intraoperative calibration phantom that has to be visible within each image resulting in overlaying markers, the presented approach directly takes advantage of the physical background of the distortions. Based on a computed physical model of an image intensifier and a magnetic field sensor, an online compensation of distortions can be achieved without the need of an intraoperative calibration phantom. The model has to be adapted once to each specific image intensifier through calibration, which is based on an optimization algorithm systematically altering the physical model parameters, until a minimal error is reached. Once calibrated, the model is able to predict the distortions caused by the measured magnetic field vector and build an appropriate dewarping function. The time needed for model calibration is not yet optimized and takes up to 4 h on a 3 GHz CPU. In contrast, the time needed for distortion correction is less than 1 s and therefore absolutely acceptable for intraoperative use. First evaluations showed that by using the model based dewarping algorithm the distortions of an XRII with a 21 cm FOV could be significantly reduced. The model was able to predict and compensate distortions by approximately 80% to a remaining error of 0.45 mm (max) (0.19 mm rms)

  4. The effects of model complexity and calibration period on groundwater recharge simulations

    NASA Astrophysics Data System (ADS)

    Moeck, Christian; von Freyberg, Jana; Schirmer, Mario

    2017-04-01

    A significant number of groundwater recharge models exist that vary in terms of complexity (i.e., structure and parametrization). Typically, model selection and conceptualization are very subjective and can be a key source of uncertainty in the recharge simulations. Another source of uncertainty is the implicit assumption that model parameters, calibrated over historical periods, are also valid for the simulation period. To the best of our knowledge there is no systematic evaluation of the effect of model complexity and calibration strategy on the performance of recharge models. To address this gap, we utilized a long-term recharge data set (20 years) from a large weighing lysimeter. We performed a differential split sample test with four groundwater recharge models that vary in terms of complexity. They were calibrated using six calibration periods with climatically contrasting conditions in a constrained Monte Carlo approach. Despite the climatically contrasting conditions, all models performed similarly well during the calibration. However, during validation a clear effect of the model structure on model performance was evident. The more complex, physically-based models predicted recharge best, even when calibration and prediction periods had very different climatic conditions. In contrast, the simpler soil-water balance and lumped models performed poorly under such conditions. For these models we found a strong dependency on the chosen calibration period. In particular, our analysis showed that this can have relevant implications when using recharge models as decision-making tools in a broad range of applications (e.g. water availability, climate change impact studies, water resource management, etc.).

  5. Crash test for groundwater recharge models: The effects of model complexity and calibration period on groundwater recharge predictions

    NASA Astrophysics Data System (ADS)

    Moeck, Christian; von Freyberg, Jana; Schirmer, Mario

    2016-04-01

    An important question in recharge impact studies is how model choice, structure and calibration period affect recharge predictions. It is still unclear whether a certain model type or structure is less affected by running the model on time periods with hydrological conditions different from those of the calibration period. This aspect, however, is crucial to ensure reliable predictions of groundwater recharge. In this study, we quantify and compare the effect of groundwater recharge model choice, model parametrization and calibration period in a systematic way. This analysis was possible thanks to a unique data set from a large-scale lysimeter in a pre-alpine catchment where daily long-term recharge rates are available. More specifically, the following issues are addressed: We systematically evaluate how the choice of hydrological models influences predictions of recharge. We assess how different parameterizations of models due to parameter non-identifiability affect predictions of recharge by applying a Monte Carlo approach. We systematically assess how the choice of calibration periods influences predictions of recharge within a differential split sample test focusing on the model performance under extreme climatic and hydrological conditions. Results indicate that all applied models (simple lumped to complex physically based models) were able to simulate the observed recharge rates for five different calibration periods. However, there was a marked impact of the calibration period when the complete 20-year validation period was simulated. Both seasonal and annual differences between simulated and observed daily recharge rates occurred when the hydrological conditions were different from the calibration period. These differences were, however, less distinct for the physically based models, whereas the simpler models over- or underestimated the observed recharge depending on the considered season. It is, however, possible to reduce the differences for the simple models by

  6. Calibration of a groundwater flow and contaminant transport computer model: Progress toward model validation

    SciTech Connect

    Lee, R. R.; Ketelle, R. H.; Bownds, J. M.; Rizk, T. A.

    1989-09-01

    A groundwater flow and contaminant transport model calibration was performed to evaluate the ability of a typical, verified computer code to simulate groundwater tracer migration in the shallow aquifer of the Conasauga Group. Previously, standard practice site data interpretation and groundwater modeling resulted in inaccurate simulations of contaminant transport direction and rate compared with tracer migration behavior. The site's complex geology, the presence of flow in both fractured and weathered zones, and the transient character of flow in the shallow aquifer combined to render inaccurate assumptions of steady-state, homogeneous groundwater flow. The improvement of previous modeling results required iterative phases of conceptual model development, hypothesis testing, site field investigations, and modeling. The activities focused on generating a model grid that was compatible with site hydrogeologic conditions and on establishing boundary conditions based on site data. An annual average water table configuration derived from site data and fixed head boundary conditions was used as input for flow modeling. The contaminant transport model was combined with the data-driven flow model to obtain a preliminary contaminant plume. Calibration of the transport code was achieved by comparison with site tracer migration and concentration data. This study documents the influence of fractures and the transient character of flow and transport in the shallow aquifer. Although compatible with porous medium theory, site data demonstrate that the tracer migration pathway would not be anticipated using conventional porous medium analysis. 126 figs., 22 refs., 5 tabs.

  7. Multivariate Calibration Models for Sorghum Composition using Near-Infrared Spectroscopy

    SciTech Connect

    Wolfrum, E.; Payne, C.; Stefaniak, T.; Rooney, W.; Dighe, N.; Bean, B.; Dahlberg, J.

    2013-03-01

    NREL developed calibration models based on near-infrared (NIR) spectroscopy coupled with multivariate statistics to predict compositional properties relevant to cellulosic biofuels production for a variety of sorghum cultivars. A robust calibration population was developed in an iterative fashion. The quality of models developed using the same sample geometry on two different types of NIR spectrometers and two different sample geometries on the same spectrometer did not vary greatly.
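
    The abstract does not state the preprocessing or regression details, so the sketch below is an assumption-laden illustration of a typical NIR workflow: standard normal variate pretreatment followed by a PLS fit. Array names and the component count are hypothetical:

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression

        def snv(spectra):
            """Standard normal variate: center and scale each spectrum (row)."""
            s = np.asarray(spectra, float)
            return (s - s.mean(axis=1, keepdims=True)) / s.std(axis=1, keepdims=True)

        # hypothetical workflow: calibrate on one sample set, validate on another
        # pls = PLSRegression(n_components=10).fit(snv(X_cal), y_cal)
        # pred = pls.predict(snv(X_val)).ravel()
        # rmsep = np.sqrt(np.mean((pred - y_val) ** 2))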

  8. A Methodology for Calibrating a WATFLOOD Model of the Upper South Saskatchewan River

    NASA Astrophysics Data System (ADS)

    Dunning, C. F.; Soulis, R. D.; Craig, J. R.

    2009-05-01

    The upper South Saskatchewan River consists of the Red Deer River, the Bow River, and the Old Man River. With a contributing area of 120,000 km2, these three watersheds flow through a diverse range of land types including mountains, foothills and prairies. Using WATFLOOD, a model has been developed to simulate stream flow in this basin, and this model is used as the case study for a straightforward calibration approach. The input for this model is interpolated rainfall data from twenty-three rain gauges throughout the basin, and the model output (stream flow) will be compared to measured stream flow data from thirty stream gauges. The basin is divided into nine land classes and four river classes. Because of the diversity of land types in this basin, proper identification of the parameters for individual land classes and river classes contributes significantly to the accuracy of the model. Critical land class and river class parameters are initially calibrated manually in representative sub-basins (composed of >90% of a single land class) to determine the effect each parameter has on the system and to determine a reasonable starting estimate of each parameter. Once manual calibration is complete, DDS (Dynamically Dimensioned Search Algorithm) is used to automatically calibrate the model one sub-basin at a time. During this process only the parameters found significant during the manual calibration are altered, and focus is on the land classes and river classes that dominate that sub-basin. The process of automated calibration is repeated once more but is done with multiple sub-basins and uses a stream flow weighting method. This is the final step towards a model that is calibrated to represent the diversity of the entire basin. The technique described is intended to be a general method for calibrating a regional scale model with diverse land types. The method is straightforward and allows adjusted parameters to provide relative accuracy over the entire basin.

  9. Model Calibration and Optics Correction Using Orbit Response Matrix in the Fermilab Booster

    SciTech Connect

    Lebedev, V.A.; Prebys, E.; Petrenko, A.V.; Kopp, S.E.; McAteer, M.J.

    2012-05-01

    We have calibrated the lattice model and measured the beta and dispersion functions in Fermilab's fast-ramping Booster synchrotron using the Linear Optics from Closed Orbit (LOCO) method. We used the calibrated model to implement ramped coupling, dispersion, and beta-beating corrections throughout the acceleration cycle, reducing horizontal beta beating from its initial magnitude of {approx}30% to {approx}10%, and essentially eliminating vertical beta-beating and transverse coupling.

  10. The impact of modelling errors on interferometer calibration for 21 cm power spectra

    NASA Astrophysics Data System (ADS)

    Ewall-Wice, Aaron; Dillon, Joshua S.; Liu, Adrian; Hewitt, Jacqueline

    2017-09-01

    We study the impact of sky-based calibration errors from source mismodelling on 21 cm power spectrum measurements with an interferometer and propose a method for suppressing their effects. While emission from faint sources that are not accounted for in calibration catalogues is believed to be spectrally smooth, deviations of true visibilities from model visibilities are not, due to the inherent chromaticity of the interferometer's sky response (the 'wedge'). Thus, unmodelled foregrounds, below the confusion limit of many instruments, introduce frequency structure into gain solutions on the same line-of-sight scales on which we hope to observe the cosmological signal. We derive analytic expressions describing these errors using linearized approximations of the calibration equations and estimate the impact of this bias on measurements of the 21 cm power spectrum during the epoch of reionization. Given our current precision in primary beam and foreground modelling, this noise will significantly impact the sensitivity of existing experiments that rely on sky-based calibration. Our formalism describes the scaling of calibration with array and sky-model parameters and can be used to guide future instrument design and calibration strategy. We find that sky-based calibration that downweights long baselines can eliminate contamination in most of the region outside of the wedge with only a modest increase in instrumental noise.

  11. Respiratory Inductance Plethysmography calibration for pediatric upper airway obstruction: an animal model

    PubMed Central

    Khemani, Robinder G.; Flink, Rutger; Hotz, Justin; Ross, Patrick A.; Ghuman, Anoopindar; Newth, Christopher JL

    2014-01-01

    Background To determine optimal methods of Respiratory Inductance Plethysmography (RIP) flow calibration for application to pediatric post-extubation upper airway obstruction (UAO). Methods We measured RIP, spirometry, and esophageal manometry in spontaneously breathing, intubated Rhesus monkeys with increasing inspiratory resistance. RIP calibration was based on: ΔµVao ≈ M[ΔµVRC + K(ΔµVAB)], where K establishes the relationship between the uncalibrated rib cage (ΔµVRC) and abdominal (ΔµVAB) RIP signals. We calculated K during: (1) isovolume maneuvers during a negative inspiratory force (NIF); (2) Quantitative Diagnostic Calibration (QDC) during (a) tidal breathing, (b) continuous positive airway pressure (CPAP), and (c) increasing degrees of UAO. We compared the calibrated RIP flow waveform to spirometry quantitatively and qualitatively. Results Isovolume calibrated RIP flow tracings were more accurate (against spirometry), both quantitatively and qualitatively, than those from QDC (p<0.0001), with bigger differences as UAO worsened. Isovolume calibration yielded nearly identical clinical interpretation of inspiratory flow limitation as spirometry. Conclusions In an animal model of pediatric UAO, isovolume calibrated RIP flow tracings are accurate against spirometry. QDC during tidal breathing yields poor RIP flow calibration, particularly as UAO worsens. Routine use of a NIF maneuver before extubation affords the opportunity to use RIP to study post-extubation UAO in children. PMID:25279987
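
    The isovolume step can be made concrete: with the airway occluded, ΔµVao ≈ 0, so the rib cage and abdominal signals must cancel and K = -ΔµVRC/ΔµVAB; the scale M is then fit against spirometer flow. A minimal sketch (illustrative only; array names are assumptions):

        import numpy as np

        def k_isovolume(d_rc, d_ab):
            """K from an isovolume maneuver: with the airway occluded the two RIP
            signals are equal and opposite, so K is minus the regression slope of
            rib cage on abdomen."""
            return -np.polyfit(d_ab, d_rc, 1)[0]

        def m_scale(flow_spiro, d_rc, d_ab, k):
            """Least-squares scale M matching M*(d_rc + k*d_ab) to spirometer flow."""
            rip = d_rc + k * d_ab
            return float(np.dot(rip, flow_spiro) / np.dot(rip, rip))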

  12. Respiratory inductance plethysmography calibration for pediatric upper airway obstruction: an animal model.

    PubMed

    Khemani, Robinder G; Flink, Rutger; Hotz, Justin; Ross, Patrick A; Ghuman, Anoopindar; Newth, Christopher J L

    2015-01-01

    We sought to determine optimal methods of respiratory inductance plethysmography (RIP) flow calibration for application to pediatric postextubation upper airway obstruction. We measured RIP, spirometry, and esophageal manometry in spontaneously breathing, intubated Rhesus monkeys with increasing inspiratory resistance. RIP calibration was based on: ΔµV(ao) ≈ M[ΔµV(RC) + K(ΔµV(AB))] where K establishes the relationship between the uncalibrated rib cage (ΔµV(RC)) and abdominal (ΔµV(AB)) RIP signals. We calculated K during (i) isovolume maneuvers during a negative inspiratory force (NIF), (ii) quantitative diagnostic calibration (QDC) during (a) tidal breathing, (b) continuous positive airway pressure (CPAP), and (c) increasing degrees of upper airway obstruction (UAO). We compared the calibrated RIP flow waveform to spirometry quantitatively and qualitatively. Isovolume calibrated RIP flow tracings were more accurate (against spirometry) both quantitatively and qualitatively than those from QDC (P < 0.0001), with bigger differences as UAO worsened. Isovolume calibration yielded nearly identical clinical interpretation of inspiratory flow limitation as spirometry. In an animal model of pediatric UAO, isovolume calibrated RIP flow tracings are accurate against spirometry. QDC during tidal breathing yields poor RIP flow calibration, particularly as UAO worsens. Routine use of a NIF maneuver before extubation affords the opportunity to use RIP to study postextubation UAO in children.

  13. Modeling, Calibration, and Sensitivity Analysis of Coupled Land-Surface Models

    NASA Astrophysics Data System (ADS)

    Liu, Y.; Gupta, H. V.; Bastidas, L. A.; Sorooshian, S.

    2002-12-01

    To better understand various land-surface hydrological processes, it is desirable and pressing to extend land-surface modeling from off-line modes to coupled modes to explore the significance of various land surface-atmospheric interactions in regulating the energy and water balance of the hydrologic cycle. While it is extremely difficult to directly test the parameterizations of a global climate model due to its complexity, a locally coupled single-column model provides a favorable environment for investigations into the complicated interactions between the land surface and the overlying atmosphere. In this research, the off-line NCAR LSM and the coupled NCAR Single-column Community Climate Model (NCAR SCCM) are used. Extensive efforts have been focused on the impacts that the coupling of the two systems may have on the sensitivities of the land-surface model to both land-surface parameters and land-surface parameterizations. Additional efforts are directed to comparisons of results from off-line and coupled calibration experiments using the optimization algorithm MOCOM-UA and IOP data sets from the Atmospheric Radiation Measurement-Cloud and Radiation Testbed (ARM-CART) project. Possibilities of calibrating some atmospheric parameters in the coupled model are also explored. Preliminary results show that the parameterization of surface energy and water balance is crucial in coupled systems and that the land-atmosphere coupling can significantly affect the estimation of land-surface parameters. In addition, it has been found that solar radiation and precipitation play an extremely important role in a coupled land-surface model by dominating the two-way interactions within the coupled system. This study will also enable us to investigate the feasibility of applying the parameter estimation methods used for point-validations of LSM over grid-boxes in a coupled environment, and facilitate subsequent studies on the effects that a coupled environment would have

  15. [Outlier sample discriminating methods for building calibration model in melons quality detecting using NIR spectra].

    PubMed

    Tian, Hai-Qing; Wang, Chun-Guang; Zhang, Hai-Jun; Yu, Zhi-Hong; Li, Jian-Kang

    2012-11-01

    Outlier samples strongly influence the precision of the calibration model in soluble solids content measurement of melons using NIR spectra. According to the possible sources of outlier samples, three methods (the predicted concentration residual test, the Chauvenet test, and the leverage and studentized residual test) were used to discriminate these outliers. Nine suspicious outliers were detected in the calibration set, which included 85 fruit samples. Because the 9 suspicious outlier samples might contain some non-outlier samples, they were returned to the model one by one to see whether they influenced the model and prediction precision. In this way, 5 samples that were helpful to the model rejoined the calibration set, and a new model was developed with a correlation coefficient (r) of 0.889 and a root mean square error of calibration (RMSEC) of 0.601 °Brix. For 35 unknown samples, the root mean square error of prediction (RMSEP) was 0.854 °Brix. This model performed better than the model developed with no outliers eliminated from the calibration set (r = 0.797, RMSEC = 0.849 °Brix, RMSEP = 1.19 °Brix), and was more representative and stable than the model with all 9 samples eliminated from the calibration set (r = 0.892, RMSEC = 0.605 °Brix, RMSEP = 0.862 °Brix).
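
    One of the screening methods named above, the leverage and studentized residual test, can be sketched in a few lines. The snippet below (Python; synthetic data standing in for PCA/PLS scores of NIR spectra, with illustrative thresholds) flags samples whose hat-matrix leverage or internally studentized residual is unusually large.

        # Minimal sketch of the leverage / studentized residual outlier test,
        # assuming a linear calibration on synthetic predictor scores.
        import numpy as np

        rng = np.random.default_rng(0)
        X = rng.normal(size=(85, 5))                      # stand-in predictor scores
        y = X @ rng.normal(size=5) + 0.3 * rng.normal(size=85)  # soluble solids content

        H = X @ np.linalg.pinv(X.T @ X) @ X.T             # hat matrix
        leverage = np.diag(H)
        residuals = y - H @ y
        s2 = residuals @ residuals / (X.shape[0] - X.shape[1])
        studentized = residuals / np.sqrt(s2 * (1.0 - leverage))

        n, p = X.shape
        suspects = np.where((leverage > 2 * p / n) | (np.abs(studentized) > 2.5))[0]
        print("suspicious outlier sample indices:", suspects)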

  16. Comparative study on ATR-FTIR calibration models for monitoring solution concentration in cooling crystallization

    NASA Astrophysics Data System (ADS)

    Zhang, Fangkun; Liu, Tao; Wang, Xue Z.; Liu, Jingxiang; Jiang, Xiaobin

    2017-02-01

    In this paper, calibration model building based on ATR-FTIR spectroscopy is investigated for in-situ measurement of the solution concentration during a cooling crystallization process. The cooling crystallization of L-glutamic acid (LGA) is studied as a case. It was found that using the metastable zone (MSZ) data for model calibration can guarantee the prediction accuracy for monitoring the operating window of cooling crystallization, compared to the use of undersaturated zone (USZ) spectra for model building as traditionally practiced. Calibration experiments were performed for LGA solutions at different concentrations. Four candidate calibration models were established using data from different zones for comparison, by applying a multivariate partial least-squares (PLS) regression algorithm to the collected spectra together with the corresponding temperature values. Experiments under different process conditions, including changes of solution concentration and operating temperature, were conducted. The results indicate that using the MSZ spectra for model calibration gives more accurate prediction of the solution concentration during the crystallization process, while maintaining accuracy under changes of the operating temperature. The primary source of prediction error was identified as spectral nonlinearity between the USZ and MSZ for in-situ measurement. In addition, an LGA cooling crystallization experiment was performed to verify the sensitivity of these calibration models for monitoring the crystal growth process.
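
    The core calibration step, PLS regression on spectra augmented with temperature, can be sketched as follows (Python with scikit-learn; the synthetic data, component count, and variable names are assumptions for illustration, not the paper's setup).

        # Minimal sketch of a PLS calibration mapping (spectrum, temperature)
        # to solution concentration, on synthetic data.
        import numpy as np
        from sklearn.cross_decomposition import PLSRegression
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(1)
        spectra = rng.normal(size=(120, 300))             # ATR-FTIR absorbances
        temperature = rng.uniform(20, 60, size=(120, 1))  # deg C
        concentration = spectra[:, :5].sum(axis=1) + 0.02 * temperature.ravel()

        X = np.hstack([spectra, temperature])             # spectra + temperature inputs
        X_tr, X_te, y_tr, y_te = train_test_split(X, concentration, random_state=0)

        pls = PLSRegression(n_components=5)
        pls.fit(X_tr, y_tr)
        print("R^2 on held-out spectra:", pls.score(X_te, y_te))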

  17. Watershed model calibration to the base flow recession curve with and without evapotranspiration effects

    NASA Astrophysics Data System (ADS)

    Jepsen, S. M.; Harmon, T. C.; Shi, Y.

    2016-04-01

    Calibration of watershed models to the shape of the base flow recession curve is a way to capture the important relationship between groundwater discharge and subsurface water storage in a catchment. In some montane Mediterranean regions, such as the mid-elevation Providence Creek catchment in the southern Sierra Nevada of California (USA), nearly all base flow recession occurs after snowmelt, and during this time evapotranspiration (ET) usually exceeds base flow. We assess the accuracy with which watershed models can be calibrated to ET-dominated base flow recession in Providence Creek, both in terms of fitting a discharge time series and realistically capturing the observed discharge-storage relationship for the catchment. Model parameters estimated from calibrations to ET-dominated recession are compared to parameters estimated from reference calibrations to base flow recession with ET effects removed ("potential recession"). We employ the Penn State Integrated Hydrologic Model (PIHM) for simulations of base flow and ET, and methods that are otherwise general in nature. In models calibrated to ET-dominated recession, simulation errors in ET and the targeted relationship for recession (-dQ/dt versus Q) contribute substantially (up to 57% and 46%, respectively) to overestimates in the discharge-storage differential, defined as d(lnQ)/dS, relative to that derived from water flux observations. These errors result in overestimates of deep-subsurface hydraulic conductivity in models calibrated to ET-dominated recession, by up to an order of magnitude, relative to reference calibrations to potential recession. These results illustrate a potential opportunity for improving model representation of discharge-storage dynamics by calibrating to the shape of base flow recession after removing the complicating effects of ET.
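
    The recession target used in this calibration, the -dQ/dt versus Q relationship, is straightforward to construct from a discharge record. A minimal sketch follows (Python, synthetic discharge; the power-law form -dQ/dt = aQ^b and all names are illustrative, and the removal of ET to obtain potential recession is not shown).

        # Minimal sketch: build the -dQ/dt vs Q recession relationship and fit
        # a power law to it, using a synthetic exponential recession.
        import numpy as np

        q = 10.0 * np.exp(-0.05 * np.arange(120)) + 0.01  # daily discharge (m3/s)
        dq_dt = np.gradient(q)                            # per day
        recession = dq_dt < 0                             # keep declining flow only

        log_q = np.log(q[recession])
        log_neg_dq = np.log(-dq_dt[recession])
        b, log_a = np.polyfit(log_q, log_neg_dq, 1)       # log-log linear fit
        print(f"-dQ/dt ~= {np.exp(log_a):.4f} * Q^{b:.2f}")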

  18. Use of Energy and Other Monitored Data to Calibrate a Whole Building Energy Model

    NASA Astrophysics Data System (ADS)

    Reddy, David

    This thesis documents an approach to utilizing energy and other measured data to improve the calibration of a whole building energy model. Each chapter documents important steps of the process and provides building energy analysts with insight on how to use this information to improve modeling assumptions, and hence energy model predictions. Important components of the study included creation of a custom annual simulation weather file, design and implementation of an electrical sub-metering system, and disaggregation of electrical energy use by model zone and energy end-use. Data and information were aggregated to create a DOE-2.2 whole building energy model, and the incremental improvement in model calibration was demonstrated as input assumptions were refined. The results of this study show that accurate description of dynamic model inputs, particularly inputs that describe occupants' manipulation of building systems, was the most influential factor affecting energy model calibration.

  19. Validation and Calibration of Nuclear Thermal Hydraulics Multiscale Multiphysics Models - Subcooled Flow Boiling Study

    SciTech Connect

    Anh Bui; Nam Dinh; Brian Williams

    2013-09-01

    In addition to a validation data plan, development of advanced techniques for calibration and validation of complex multiscale, multiphysics nuclear reactor simulation codes is a main objective of the CASL VUQ plan. Advanced modeling of LWR systems normally involves a range of physico-chemical models describing multiple interacting phenomena, such as thermal hydraulics, reactor physics, coolant chemistry, etc., which occur over a wide range of spatial and temporal scales. To a large extent, the accuracy of (and uncertainty in) overall model predictions is determined by the correctness of various sub-models, which are not based on conservation laws but are empirically derived from measurement data. Such sub-models normally require extensive calibration before the models can be applied to analysis of real reactor problems. This work demonstrates a case study of calibration of a common model of subcooled flow boiling, which is an important multiscale, multiphysics phenomenon in LWR thermal hydraulics. The calibration process is based on a new strategy of model-data integration, in which all sub-models are simultaneously analyzed and calibrated using multiple sets of data of different types. Specifically, both data on large-scale distributions of void fraction and fluid temperature and data on small-scale physics of wall evaporation were used simultaneously in this work's calibration. In a departure from the traditional (or common-sense) practice of tuning/calibrating complex models, a modern calibration technique based on statistical modeling and Bayesian inference was employed, which allowed simultaneous calibration of multiple sub-models (and related parameters) using different datasets. Quality of data (relevancy, scalability, and uncertainty) could be taken into consideration in the calibration process. This work presents a step forward in the development and realization of the "CIPS Validation Data Plan" at the Consortium for Advanced Simulation of LWRs to enable
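
    The central idea, one posterior built from several datasets of different types so that all sub-model parameters are calibrated simultaneously, can be sketched with a toy model (Python; a basic random-walk Metropolis sampler and synthetic "large-scale" and "small-scale" data standing in for the void-fraction and wall-evaporation measurements; nothing below is the CASL implementation).

        # Minimal sketch of simultaneous Bayesian calibration against two
        # datasets of different types, via one combined log-posterior.
        import numpy as np

        rng = np.random.default_rng(2)
        theta_true = np.array([0.8, 1.5])
        x = np.linspace(0, 1, 20)
        data_large = theta_true[0] * x + rng.normal(0, 0.02, x.size)      # dataset 1
        data_small = theta_true[1] * x**2 + rng.normal(0, 0.05, x.size)   # dataset 2

        def log_post(theta):
            if np.any(theta <= 0) or np.any(theta > 10):  # flat prior on (0, 10]
                return -np.inf
            r1 = data_large - theta[0] * x
            r2 = data_small - theta[1] * x**2
            return -0.5 * (r1 @ r1 / 0.02**2 + r2 @ r2 / 0.05**2)

        theta = np.array([1.0, 1.0])
        lp = log_post(theta)
        samples = []
        for _ in range(20000):                            # random-walk Metropolis
            prop = theta + rng.normal(0, 0.02, 2)
            lp_prop = log_post(prop)
            if np.log(rng.uniform()) < lp_prop - lp:
                theta, lp = prop, lp_prop
            samples.append(theta)
        print("posterior mean:", np.mean(samples, axis=0))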

  1. ADVANCED UTILITY SIMULATION MODEL, REPORT OF SENSITIVITY TESTING, CALIBRATION, AND MODEL OUTPUT COMPARISONS (VERSION 3.0)

    EPA Science Inventory

    The report gives results of activities relating to the Advanced Utility Simulation Model (AUSM): sensitivity testing, comparison with a mature electric utility model, and calibration to historical emissions. The activities were aimed at demonstrating AUSM's validity over input va...

  2. Efficient Calibration of Distributed Catchment Models Using Perceptual Understanding and Hydrologic Signatures

    NASA Astrophysics Data System (ADS)

    Hutton, C.; Wagener, T.; Freer, J. E.; Duffy, C.; Han, D.

    2015-12-01

    Distributed models offer the potential to resolve catchment systems in more detail, and therefore to simulate the hydrological impacts of spatial changes in catchment forcing (e.g. landscape change). Such models may contain a large number of model parameters which are computationally expensive to calibrate. Even when calibration is possible, insufficient data can result in model parameter and structural equifinality. In order to help reduce the space of feasible models and supplement traditional outlet discharge calibration data, semi-quantitative information (e.g. knowledge of relative groundwater levels) may also be used to identify behavioural models when applied to constrain spatially distributed predictions of states and fluxes. The challenge is to combine these different sources of information to identify a behavioural region of state-space, and to efficiently search a large, complex parameter space to identify behavioural parameter sets that produce predictions falling within this behavioural region. Here we present a methodology to incorporate different sources of data to efficiently calibrate distributed catchment models. Metrics of model performance may be derived from multiple sources of data (e.g. perceptual understanding and measured or regionalised hydrologic signatures). For each metric, an interval or inequality is used to define the behaviour of the catchment system, accounting for data uncertainties. These intervals are then combined to produce a hyper-volume in state space. The calibration task is then recast as a multi-objective optimisation problem, and the Borg MOEA is applied to first find, and then populate, the hyper-volume, thereby identifying acceptable model parameter sets. We apply the methodology to calibrate the PIHM model at Plynlimon, UK, by incorporating perceptual and hydrologic data into the calibration problem. Furthermore, we explore how to improve calibration efficiency through search initialisation from shorter model runs.
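
    A toy version of the limits-of-acceptability idea described above (Python; the two-parameter stand-in model, the signature names, and the intervals are all assumptions): each metric gets an interval, and a parameter set is behavioural only if every simulated metric falls inside its interval. The Borg MOEA search itself is not reproduced; plain rejection sampling is used instead.

        # Minimal sketch: accept parameter sets whose simulated metrics fall
        # inside behavioural intervals derived from signatures and perception.
        import numpy as np

        rng = np.random.default_rng(3)

        def simulate_metrics(params):
            """Toy stand-in for a distributed model run returning two metrics."""
            a, b = params
            runoff_ratio = a / (a + b)           # a hydrologic signature
            gw_gradient = b - a                  # e.g. relative groundwater levels
            return runoff_ratio, gw_gradient

        intervals = {"runoff_ratio": (0.3, 0.5),    # from regionalised signatures
                     "gw_gradient": (0.0, np.inf)}  # perceptual constraint

        candidates = rng.uniform(0.1, 2.0, size=(10000, 2))
        behavioural = []
        for p in candidates:
            rr, gg = simulate_metrics(p)
            if intervals["runoff_ratio"][0] <= rr <= intervals["runoff_ratio"][1] \
                    and intervals["gw_gradient"][0] <= gg <= intervals["gw_gradient"][1]:
                behavioural.append(p)
        print(f"{len(behavioural)} of {len(candidates)} parameter sets are behavioural")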

  3. Impact of influent data frequency and model structure on the quality of WWTP model calibration and uncertainty.

    PubMed

    Cierkens, Katrijn; Plano, Salvatore; Benedetti, Lorenzo; Weijers, Stefan; de Jonge, Jarno; Nopens, Ingmar

    2012-01-01

    Application of activated sludge models (ASMs) to full-scale wastewater treatment plants (WWTPs) is still hampered by the problem of model calibration of these over-parameterised models. This either requires expert knowledge or global methods that explore a large parameter space. However, a better balance in structure between the submodels (ASM, hydraulic, aeration, etc.) and improved quality of influent data result in much smaller calibration efforts. In this contribution, a methodology is proposed that links data frequency and model structure to calibration quality and output uncertainty. It is composed of defining the model structure, the input data, an automated calibration, confidence interval computation and uncertainty propagation to the model output. Apart from the last step, the methodology is applied to an existing WWTP using three models differing only in the aeration submodel. A sensitivity analysis was performed on all models, allowing the ranking of the most important parameters to select in the subsequent calibration step. The aeration submodel proved very important for obtaining good NH4 predictions. Finally, the impact of data frequency was explored. Lowering the frequency resulted in larger deviations of parameter estimates from their default values and larger confidence intervals. Autocorrelation due to high-frequency calibration data has an opposite effect on the confidence intervals. The proposed methodology opens doors to facilitate and improve calibration efforts and to design measurement campaigns.

  4. Determining probability distributions of parameter performances for time-series model calibration: A river system trial

    NASA Astrophysics Data System (ADS)

    Kim, Shaun Sang Ho; Hughes, Justin Douglas; Chen, Jie; Dutta, Dushmanta; Vaze, Jai

    2015-11-01

    A calibration method is presented that uses a sub-period resampling method to estimate probability distributions of performance for different parameter sets. Where conventional calibration methods implicitly identify the best performing parameterisations on average, the new method looks at the consistency of performance during sub-periods. The method is implemented with the conceptual river reach algorithms within the Australian Water Resources Assessments River (AWRA-R) model in the Murray-Darling Basin, Australia. The new method is tested for 192 reaches in a cross-validation scheme and results are compared to a traditional split-sample calibration-validation implementation. This is done to evaluate the new technique's ability to predict daily streamflow outside the calibration period. The new calibration method produced parameterisations that performed better in validation periods than optimum calibration parameter sets for 103 reaches and produced the same parameterisations for 35 reaches. The method showed a statistically significant improvement in predictive performance and potentially provides more rational flux terms than traditional split-sample calibration methods. Particular strengths of the proposed calibration method are that it avoids extra weighting towards rare periods of good agreement and prevents compensating biases through time. The method can be used as a diagnostic tool to evaluate the stochasticity of modelled systems and to determine suitable model structures of different time-series models. Although the method is demonstrated using a hydrological model, it is not limited to the field of hydrology and could be adopted for many different time-series modelling applications.
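
    The sub-period idea can be sketched directly (Python, synthetic data; the toy model, NSE scoring, and percentile summary are illustrative assumptions): score each parameter set on many resampled sub-periods and compare the resulting distributions rather than a single whole-period value, so that a consistently adequate set can be preferred over one that is excellent on average but poor in some periods.

        # Minimal sketch: distributions of sub-period NSE for two parameter sets.
        import numpy as np

        rng = np.random.default_rng(4)
        t = np.arange(3650)                       # ten years of daily flow
        obs = 5 + 2 * np.sin(2 * np.pi * t / 365) + 0.5 * rng.standard_normal(t.size)

        def simulate(params):
            amp, bias = params                    # toy stand-in for the river model
            return 5 + amp * np.sin(2 * np.pi * t / 365) + bias

        def nse(sim, ob):
            return 1 - np.sum((ob - sim) ** 2) / np.sum((ob - ob.mean()) ** 2)

        def subperiod_scores(params, n_periods=50, length=365):
            sim = simulate(params)
            starts = rng.integers(0, t.size - length, n_periods)
            return np.array([nse(sim[s:s + length], obs[s:s + length]) for s in starts])

        for params in [(2.0, 0.0), (1.6, 0.3)]:
            scores = subperiod_scores(params)
            print(params, "median NSE %.3f, 10th pct %.3f"
                  % (np.median(scores), np.percentile(scores, 10)))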

  5. Effects of FRAX(®) model calibration on intervention rates: a simulation study.

    PubMed

    Leslie, William D; Lix, Lisa M

    2011-01-01

    The WHO fracture risk assessment tool (FRAX(®)) estimates an individual's 10-yr major osteoporotic and hip fracture probabilities using a tool customized to the fracture epidemiology of a specific population. Incorrect model calibration could therefore affect performance of the model in clinical practice. The current analysis was undertaken to explore how simulated miscalibration in the FRAX(®) tool would affect the numbers of individuals meeting specific intervention criteria (10-yr major osteoporotic fracture probability ≥20%, 10-yr hip fracture probability ≥3%). The study cohort included 36,730 women and 2873 men aged 50 yr and older with FRAX(®) probability estimates using femoral neck bone mineral density. We simulated relative miscalibration error in 10% increments from -50% to +50% relative to a correctly calibrated FRAX(®) model. We found that small changes in model calibration (even on the order of 10%) had large effects on the number of individuals qualifying for treatment. There was a steep gradient in the relationship between relative change in calibration and relative change in intervention rates: for every 1% change in calibration, there was a 2.5% change in intervention rates for women and 4.1% for men. For hip fracture probability, the gradient of the relationship was closer to unity. These results highlight the importance of FRAX(®) model calibration and speak to the importance of using high-quality fracture epidemiology in constructing FRAX(®) tools.
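
    The miscalibration experiment has a simple structure that can be sketched numerically (Python; the lognormal distribution of 10-yr fracture probabilities is an assumption for illustration, not the study cohort): scale each probability by a relative calibration error and count how many individuals cross the fixed intervention threshold.

        # Minimal sketch: effect of relative miscalibration on intervention rates.
        import numpy as np

        rng = np.random.default_rng(5)
        prob = np.clip(rng.lognormal(mean=np.log(0.10), sigma=0.6, size=36730), 0, 1)
        threshold = 0.20                          # 10-yr major fracture probability

        base_rate = np.mean(prob >= threshold)
        for rel_error in (-0.5, -0.1, 0.1, 0.5):  # simulated miscalibration
            rate = np.mean(np.clip(prob * (1 + rel_error), 0, 1) >= threshold)
            print(f"{rel_error:+.0%} calibration error -> "
                  f"{(rate - base_rate) / base_rate:+.1%} change in intervention rate")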

  6. 20nm CMP model calibration with optimized metrology data and CMP model applications

    NASA Astrophysics Data System (ADS)

    Katakamsetty, Ushasree; Koli, Dinesh; Yeo, Sky; Hui, Colin; Ghulghazaryan, Ruben; Aytuna, Burak; Wilson, Jeff

    2015-03-01

    Chemical Mechanical Polishing (CMP) is an essential process for planarization of the wafer surface in semiconductor manufacturing. The CMP process helps to produce smaller ICs with more electronic circuits, improving chip speed and performance. CMP also helps to increase throughput and yield, which reduces an IC manufacturer's total production costs. A CMP simulation model helps to predict CMP manufacturing hotspots early and to minimize CMP and CMP-induced lithography and etch defects [2]. In the advanced process nodes, conventional dummy fill insertion for uniform density is not able to address all the CMP short-range, long-range, multi-layer stacking and other effects like pad conditioning, slurry selectivity, etc. In this paper, we present the flow for 20nm CMP modeling using Mentor Graphics CMP modeling tools to build a multilayer Cu-CMP model and study hotspots. We present the inputs required for good CMP model calibration, challenges faced with metrology collection, and techniques to optimize the wafer cost. We showcase the CMP model validation results and the model applications to predict multilayer topography accumulation effects for hotspot detection. We provide the flow for early detection of CMP hotspots with Calibre CMPAnalyzer to improve Design-for-Manufacturability (DFM) robustness.

  7. Nonlinear model calibration of a shear wall building using time and frequency data features

    NASA Astrophysics Data System (ADS)

    Asgarieh, Eliyar; Moaveni, Babak; Barbosa, Andre R.; Chatzi, Eleni

    2017-02-01

    This paper investigates the effects of different factors on the performance of nonlinear model updating for a seven-story shear wall building model. The accuracy of models calibrated using different data features and modeling assumptions is studied by comparing the time and frequency responses of the models with the exact simulated ones. Simplified nonlinear finite element models of the shear wall building are calibrated so that the misfit between the considered response data features of the models and the structure is minimized. A refined FE model of the test structure, which was calibrated manually to match the shake table test data, is used in place of the real structure for this performance evaluation study. The simplified parsimonious FE models are composed of simple nonlinear beam-column fiber elements, with nonlinearity infused in them by assigning generated hysteretic nonlinear material behaviors to the uniaxial stress-strain relationships of the fibers. Four different types of data features and their combinations are used for model calibration: (1) time-varying instantaneous modal parameters, (2) displacement time histories, (3) acceleration time histories, and (4) dissipated hysteretic energy. It has been observed that the calibrated simplified FE models can accurately predict the nonlinear structural response in the absence of significant modeling errors. In the last part of this study, the physics-based models are further simplified for casting into state-space form and a real-time identification is performed using an unscented Kalman filter. It has been shown that the performance of calibrated state-space models can be satisfactory when reasonable modeling assumptions are used.

  8. Calibration of Regional-Scale Subsurface Nitrogen Transport Models to Support the Analysis of Impaired Watersheds

    NASA Astrophysics Data System (ADS)

    Matott, L. S.; Rabideau, A. J.

    2006-05-01

    Nitrate-contaminated groundwater discharge may be a significant source of pollutant loading to impaired water bodies, and this contribution may be assessed via large-scale regional modeling of subsurface nitrogen transport. Several aspects of large-scale subsurface transport modeling make automated calibration a difficult task. First, the appropriate level of model complexity for a regional subsurface nitrogen transport model is not obvious. Additionally, there are immense computational costs associated with large-scale transport modeling, and these costs are further exacerbated by automated calibration, which can require thousands of model evaluations. Finally, available evidence suggests that highly complex reactive transport models suffer from parameter non-uniqueness, a characteristic that can frustrate traditional regression-based calibration algorithms. These difficulties are the topic of ongoing research at the University at Buffalo, and a preliminary modeling and calibration approach will be presented. The approach is in the early stages of development and is being tested on a 400 square kilometer model that encompasses an agricultural research site in the Neuse River Basin (the Lizzie Research Station), located on an active and privately owned hog farm. Early results highlight the sensitivity of calibrated denitrification rate constants to a variety of secondary processes, including surface complexation of iron and manganese, ion exchange, and the precipitation/dissolution of calcite and metals.

  9. Bayesian calibration for electrochemical thermal model of lithium-ion cells

    NASA Astrophysics Data System (ADS)

    Tagade, Piyush; Hariharan, Krishnan S.; Basu, Suman; Verma, Mohan Kumar Singh; Kolake, Subramanya Mayya; Song, Taewon; Oh, Dukjin; Yeo, Taejung; Doo, Seokgwang

    2016-07-01

    The pseudo-two-dimensional electrochemical thermal (P2D-ECT) model contains many parameters that are difficult to evaluate experimentally. Estimation of these model parameters is challenging due to computational cost and the transient nature of the model. Due to lack of complete physical understanding, this issue is aggravated at extreme conditions such as low temperature (LT) operation. This paper presents a Bayesian calibration framework for estimation of the P2D-ECT model parameters. The framework uses a matrix variate Gaussian process representation to obtain a computationally tractable formulation for calibration of the transient model. Performance of the framework is investigated for calibration of the P2D-ECT model across a range of temperatures (333 K to 263 K) and operating protocols. In the absence of complete physical understanding, the framework also quantifies structural uncertainty in the calibrated model. This information is used by the framework to test the validity of new physical phenomena before incorporation in the model. This capability is demonstrated by introducing temperature dependence of Bruggeman's coefficient and lithium plating formation at LT. With the incorporation of the new physics, the calibrated P2D-ECT model accurately predicts the cell voltage with high confidence. The accurate predictions are used to obtain new insights into low temperature lithium ion cell behavior.

  10. Reaction-based reactive transport modeling of Fe(III)

    SciTech Connect

    Kemner, K.M.; Kelly, S.D.; Burgos, Bill; Roden, Eric

    2006-06-01

    This research project (started Fall 2004) was funded by a grant to Argonne National Laboratory, The Pennsylvania State University, and The University of Alabama in the Integrative Studies Element of the NABIR Program (DE-FG04-ER63914/63915/63196). Dr. Eric Roden, formerly at The University of Alabama, is now at the University of Wisconsin, Madison. Our project focuses on the development of a mechanistic understanding and quantitative models of coupled Fe(III)/U(VI) reduction in FRC Area 2 sediments. This work builds on our previous studies of microbial Fe(III) and U(VI) reduction, and is directly aligned with the Scheibe et al. NABIR FRC Field Project at Area 2.

  11. Calibration of Distributed Hydrologic Models Considering the Heterogeneity of the Parameters across the Basin

    NASA Astrophysics Data System (ADS)

    Athira, P.; Sudheer, K.

    2013-12-01

    Parameter estimation is one of the major tasks in the application of any physics-based distributed model. Generally the calibration does not consider the heterogeneity of the parameters across the basin, and as a result the model simulation conforms to the location for which it has been calibrated. However, the major advantage of distributed hydrological models is to provide reasonably good simulations at various locations in the watershed, including ungauged locations. While multi-site calibration can address this issue to some extent, the availability of multiple gauge sites in a watershed is not always guaranteed. When single-site calibration is performed, a uniform variation of the parameters is generally considered across the basin, which does not ensure the true heterogeneity of the parameters in the basin. The primary objective of this study is to compare the effect of uniform variation of the parameters with a procedure that identifies the actual heterogeneity of the parameters across the basin while performing calibration of distributed hydrological models. In order to demonstrate the objective, a case study of two watersheds in the USA using the Soil and Water Assessment Tool (SWAT) is presented and discussed. Initially, the SWAT model is calibrated for both watersheds in the traditional way, considering uniform variation of the sensitive parameters during the calibration. Further, the hydrological response units (HRUs) delineated in SWAT are classified into various clusters based on land use, soil type and slope. A random perturbation of the parameters is performed in these clusters during calibration. The rationale behind this approach was to identify plausible parameter values that simulate the hydrological processes in these clusters appropriately. The proposed procedure is applied to both basins. The results indicate that the simulations obtained for upstream ungauged locations (other than those used for calibration) are much better when a

  12. The preliminary checkout, evaluation and calibration of a 3-component force measurement system for calibrating propulsion simulators for wind tunnel models

    NASA Technical Reports Server (NTRS)

    Scott, W. A.

    1984-01-01

    The propulsion simulator calibration laboratory (PSCL), in which calibrations can be performed to determine the gross thrust and airflow of propulsion simulators installed in wind tunnel models, is described. The preliminary checkout, evaluation and calibration of the PSCL's 3-component force measurement system is reported. Methods and equipment were developed for the alignment and calibration of the force measurement system. The initial alignment of the system demonstrated the need for more efficient means of aligning the system's components. The use of precision alignment jigs increases both the speed and accuracy with which the system is aligned. The calibration of the force measurement system shows that the methods and equipment for this procedure can be successful.

  13. More efficient evolutionary strategies for model calibration with watershed model for demonstration

    NASA Astrophysics Data System (ADS)

    Baggett, J. S.; Skahill, B. E.

    2008-12-01

    Evolutionary strategies allow automatic calibration of more complex models than traditional gradient-based approaches, but they are more computationally intensive. We present several efficiency enhancements for evolution strategies, many of which are not new, but which when combined have been shown to dramatically decrease the number of model runs required for calibration of synthetic problems. To reduce the number of expensive model runs we employ a surrogate objective function for an adaptively determined fraction of the population at each generation (Kern et al., 2006). We demonstrate improvements to the adaptive ranking strategy that increase its efficiency while sacrificing little reliability, and further reduce the number of model runs required in densely sampled parts of parameter space. Furthermore, we include a gradient individual in each generation that is usually not selected when the search is in a global phase or when the derivatives are poorly approximated, but that when selected near a smooth local minimum can dramatically increase convergence speed (Tahk et al., 2007). Finally, the selection of the gradient individual is used to adapt the size of the population near local minima. We show, by incorporating these enhancements into the Covariance Matrix Adaptation Evolution Strategy (CMA-ES; Hansen, 2006), that their synergistic effect is greater than the sum of their individual parts. This hybrid evolutionary strategy exploits smooth structure when it is present but degrades, at worst, to an ordinary evolutionary strategy if smoothness is not present. Calibration of 2D-3D synthetic models with the modified CMA-ES requires approximately 10%-25% of the model runs of ordinary CMA-ES. A preliminary demonstration of this hybrid strategy will be shown for watershed model calibration problems. Hansen, N. (2006). The CMA Evolution Strategy: A Comparing Review. In J.A. Lozano, P. Larrañaga, I. Inza and E. Bengoetxea (Eds.), Towards a new evolutionary computation. Advances in estimation of

  14. A hierarchical analysis of terrestrial ecosystem model Biome-BGC: Equilibrium analysis and model calibration

    SciTech Connect

    Thornton, Peter E; Wang, Weile; Law, Beverly E.; Nemani, Ramakrishna R

    2009-01-01

    The increasing complexity of ecosystem models represents a major difficulty in tuning model parameters and analyzing simulated results. To address this problem, this study develops a hierarchical scheme that simplifies the Biome-BGC model into three functionally cascaded tiers and analyzes them sequentially. The first-tier model focuses on leaf-level ecophysiological processes; it simulates evapotranspiration and photosynthesis with prescribed leaf area index (LAI). The restriction on LAI is then lifted in the following two model tiers, which analyze how carbon and nitrogen are cycled at the whole-plant level (the second tier) and in all litter/soil pools (the third tier) to dynamically support the prescribed canopy. In particular, this study analyzes the steady state of these two model tiers with a set of equilibrium equations that are derived from Biome-BGC algorithms and based on the principle of mass balance. Instead of spinning up the model for thousands of climate years, these equations are able to estimate the carbon/nitrogen stocks and fluxes of the target (steady-state) ecosystem directly from the results obtained by the first-tier model. The model hierarchy is examined with model experiments at four AmeriFlux sites. The results indicate that the proposed scheme can effectively calibrate Biome-BGC to simulate observed fluxes of evapotranspiration and photosynthesis, and that the carbon/nitrogen stocks estimated by the equilibrium analysis approach are highly consistent with the results of model simulations. Therefore, the scheme developed in this study may serve as a practical guide to calibrating/analyzing Biome-BGC; it also provides an efficient way to solve the problem of model spin-up, especially for applications over large regions. The same methodology may help analyze other similar ecosystem models as well.

  15. Simulation of average and low flows through the regional calibration of a rainfall-runoff model, II: model calibration and simulation results

    NASA Astrophysics Data System (ADS)

    Lombardi, Laura; Montanari, Alberto; Toth, Elena

    2010-05-01

    The study presents, in two companion papers, an approach for regional calibration of a rainfall-runoff model that may be applied also to ungauged or scarcely gauged catchments, since it is based on the knowledge of characteristics of the catchment and of its climate other than hydrometric measurements. In the companion presentation, we describe the use of a regional procedure for estimating selected river flow statistics that describe the main properties of the river flow time series, on the basis of geo-morpho-climatic attributes of the catchments. This second presentation describes instead i) the calibration of the rainfall-runoff model obtained by optimizing the simulation of the statistics derived in the companion presentation and ii) the analyses of the modelled streamflow in simulation mode, focussing in particular on the reproduction of average and low flows. In detail, the maximum likelihood function in the spectral domain proposed by Whittle is approximated in the time domain by maximising the simultaneous fit (through a multiobjective optimisation) of the selected statistics of streamflow values, with the aim of proposing a calibration procedure that can be applied at the regional scale. A simple conceptual rainfall-runoff model is used: a lumped, continuously simulating model characterised by a relatively small number of parameters to be calibrated. The experiments are carried out for different study watersheds in the analysed region, where, along with detailed climatic and geomorphologic information, precipitation and evapotranspiration daily time series are also available. The selected catchments are treated as ungauged, but simultaneous daily series of streamflow are available for evaluating the results and for comparison with the simulation provided by a classical least-squares calibration of the daily errors. The comparison analyses, in particular, performance indexes that highlight the fit of average and low streamflows.
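
    For reference, the spectral-domain likelihood referred to above is commonly written as follows (a standard form of the Whittle approximation; the paper's contribution is its time-domain approximation through the multiobjective fit of flow statistics):

        \ell_W(\theta) = -\sum_{j} \left[ \ln f_\theta(\omega_j)
            + \frac{I_n(\omega_j)}{f_\theta(\omega_j)} \right],
        \qquad
        I_n(\omega_j) = \frac{1}{2\pi n} \left| \sum_{t=1}^{n} x_t \, e^{-i t \omega_j} \right|^2,

    where f_\theta is the spectral density of the model, I_n is the periodogram of the series x_t, and \omega_j = 2\pi j / n are the Fourier frequencies.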

  16. Estimating aquifer recharge in Mission River watershed, Texas: model development and calibration using genetic algorithms

    NASA Astrophysics Data System (ADS)

    Uddameri, V.; Kuchanur, M.

    2007-01-01

    Soil moisture balance studies provide a convenient approach to estimate aquifer recharge when only limited site-specific data are available. A monthly mass-balance approach has been utilized in this study to estimate recharge in a small watershed in the coastal bend of South Texas. The developed lumped parameter model employs four adjustable parameters to calibrate model-predicted stream runoff to observations at a gaging station. A new procedure was developed to correctly capture the intermittent nature of rainfall: the total monthly rainfall was assigned to a single equivalent storm whose duration was obtained via calibration. A total of four calibrations were carried out using genetic algorithms, an evolutionary computing technique, as well as the conventional gradient descent (GD) technique. Ordinary least squares and heteroscedastic maximum likelihood error (HMLE) based objective functions were also evaluated as part of this study. While the genetic algorithm based calibrations were relatively better at capturing the peak runoff events, the GD based calibration did slightly better at capturing the low flow events. Treating the Box-Cox exponent in the HMLE function as a calibration parameter did not yield better estimates, and the study corroborates the suggestion made in the literature of fixing this exponent at 0.3. The model outputs were compared against available information, and the results indicate that the developed modeling approach provides a conservative estimate of recharge.

  17. Optimizing hydrological consistency by incorporating hydrological signatures into model calibration objectives

    NASA Astrophysics Data System (ADS)

    Shafii, Mahyar; Tolson, Bryan A.

    2015-05-01

    The simulated outcome of a calibrated hydrologic model should be hydrologically consistent with the measured response data. Hydrologic modelers typically calibrate models to optimize residual-based goodness-of-fit measures, e.g., the Nash-Sutcliffe efficiency measure, and then evaluate the obtained results with respect to hydrological signatures, e.g., the flow duration curve indices. The literature indicates that the consideration of a large number of hydrologic signatures has not been addressed in a full multiobjective optimization context. This research develops a model calibration methodology to achieve hydrological consistency using goodness-of-fit measures, many hydrological signatures, as well as a level of acceptability for each signature. The proposed framework relies on a scoring method that transforms any hydrological signature to a calibration objective. These scores are used to develop the hydrological consistency metric, which is maximized to obtain hydrologically consistent parameter sets during calibration. This consistency metric is implemented in different signature-based calibration formulations that adapt the sampling according to hydrologic signature values. These formulations are compared with the traditional formulations found in the literature for seven case studies. The results reveal that Pareto dominance-based multiobjective optimization yields the highest level of consistency among all formulations. Furthermore, it is found that the choice of optimization algorithms does not affect the findings of this research.
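
    The scoring construction described above can be sketched compactly (Python; the signatures, acceptability levels, and the conservative min-combination are illustrative assumptions, not the paper's exact formulation): each signature is mapped to a [0, 1] score through its deviation and acceptability level, and the scores are combined into a single consistency metric to be maximised during calibration.

        # Minimal sketch: turn hydrological signatures into calibration scores
        # and combine them into one consistency metric.
        import numpy as np

        def signature_score(sim, obs, tol):
            """1 inside the acceptability band; decays linearly to 0 at twice the band."""
            dev = abs(sim - obs) / tol
            return float(np.clip(2.0 - dev, 0.0, 1.0))

        signatures_obs = {"runoff_ratio": 0.42, "q95_over_q50": 0.31, "fdc_slope": 1.8}
        tolerance = {"runoff_ratio": 0.05, "q95_over_q50": 0.05, "fdc_slope": 0.3}

        def consistency(signatures_sim):
            scores = [signature_score(signatures_sim[k], signatures_obs[k], tolerance[k])
                      for k in signatures_obs]
            return min(scores)   # conservative: the weakest signature governs

        print(consistency({"runoff_ratio": 0.45, "q95_over_q50": 0.33, "fdc_slope": 2.3}))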

  18. Modeling and Calibration of a Novel One-Mirror Galvanometric Laser Scanner.

    PubMed

    Yu, Chengyi; Chen, Xiaobo; Xi, Juntong

    2017-01-15

    A laser stripe sensor has limited application when a point cloud of geometric samples on the surface of the object needs to be collected, so a galvanometric laser scanner is designed using a one-mirror galvanometer element as the mechanical device that drives the laser stripe to sweep across the object. A novel mathematical model is derived for the proposed galvanometer laser scanner without any position assumptions, and a model-driven calibration procedure is then proposed. Compared with available model-driven approaches, the influence of machining and assembly errors is considered in the proposed model. Meanwhile, a plane-constraint-based approach is proposed to extract a large number of calibration points effectively and accurately to calibrate the galvanometric laser scanner. Repeatability and accuracy of the galvanometric laser scanner are evaluated on the automobile production line to verify the efficiency and accuracy of the proposed calibration method. Experimental results show that the proposed calibration approach yields measurement performance similar to that of a look-up table calibration method.

  19. Modeling and Calibration of a Novel One-Mirror Galvanometric Laser Scanner

    PubMed Central

    Yu, Chengyi; Chen, Xiaobo; Xi, Juntong

    2017-01-01

    A laser stripe sensor has limited application when a point cloud of geometric samples on the surface of the object needs to be collected, so a galvanometric laser scanner is designed using a one-mirror galvanometer element as the mechanical device that drives the laser stripe to sweep across the object. A novel mathematical model is derived for the proposed galvanometer laser scanner without any position assumptions, and a model-driven calibration procedure is then proposed. Compared with available model-driven approaches, the influence of machining and assembly errors is considered in the proposed model. Meanwhile, a plane-constraint-based approach is proposed to extract a large number of calibration points effectively and accurately to calibrate the galvanometric laser scanner. Repeatability and accuracy of the galvanometric laser scanner are evaluated on the automobile production line to verify the efficiency and accuracy of the proposed calibration method. Experimental results show that the proposed calibration approach yields measurement performance similar to that of a look-up table calibration method. PMID:28098844

  20. Calibration of the Variable Infiltration Capacity Model from Hyper-Resolution to the Regional Scale

    NASA Astrophysics Data System (ADS)

    Uijlenhoet, R.; Melsen, L. A.; Teuling, R.; Torfs, P.; Zappa, M.

    2014-12-01

    The Variable Infiltration Capacity (VIC, Liang et al., 1994) model has been used for a broad range of applications, in hydrology as well as in the fields of climate and global change. Calibration of the often distributed application of the model is difficult, certainly in light of the ongoing discussion on applying global models at hyper-resolution (Wood et al., 2011). To improve the calibration procedure for VIC applied at grid resolutions varying from meso-scale catchments to the 1 km 'hyper' resolution now used in several global modeling studies, the parameters of the model are studied in more detail with a specific focus on scale effects. A lumped VIC model was constructed for three nested basins: the Rietholzbach (3.4 km2), Jonschwil (492 km2) and the Thur basin (1700 km2) in Switzerland. With the DELSA sensitivity analysis method (Rakovec et al., 2013) it was shown that parameter sensitivity does not change with scale. Extensive calibration of the lumped models using the DREAM algorithm (Vrugt et al., 2008) revealed that most of the calibrated parameter values of the three basins were within each other's uncertainty bounds based on the converged part of the posterior. This information was used for calibration of the distributed VIC models, which were constructed for the Thur basin at grid resolutions of 1x1 km, 5x5 km and 10x10 km.

  1. CALIBRATING THE JOHNSON-HOLMQUIST CERAMIC MODEL FOR SIC USING CTH

    SciTech Connect

    Cazamias, J. U.; Bilyk, S. R.

    2009-12-28

    The Johnson-Holmquist ceramic material model has been calibrated and successfully applied to numerically simulate ballistic events using the Lagrangian code EPIC. While the majority of the constants are "physics" based, two of the constants for the failed material response are calibrated using ballistic experiments conducted on a confined cylindrical ceramic target. The maximum strength of the failed ceramic is calibrated by matching the penetration velocity. The second constant refers to the equivalent plastic strain at failure under constant pressure and is calibrated using the dwell time. Use of these two constants in the CTH Eulerian hydrocode does not reproduce the ballistic response. This difference may be due to the phenomenological nature of the model and the different numerical schemes used by the codes. This paper determines the aforementioned material constants for SiC suitable for simulating ballistic events using CTH.

  2. Calibration and validation of earthquake catastrophe models. Case study: Impact Forecasting Earthquake Model for Algeria

    NASA Astrophysics Data System (ADS)

    Trendafiloski, G.; Gaspa Rebull, O.; Ewing, C.; Podlaha, A.; Magee, B.

    2012-04-01

    Calibration and validation are crucial steps in the production of catastrophe models for the insurance industry in order to assure the model's reliability and to quantify its uncertainty. Calibration is needed in all components of model development, including hazard and vulnerability. Validation is required to ensure that the losses calculated by the model match those observed in past events and those which could happen in the future. Impact Forecasting, the catastrophe modelling development centre of excellence within Aon Benfield, has recently launched its earthquake model for Algeria as part of the earthquake model for the Maghreb region. The earthquake model went through a detailed calibration process including: (1) the seismic intensity attenuation model, by use of macroseismic observations and maps from past earthquakes in Algeria; (2) calculation of country-specific vulnerability modifiers, by use of past damage observations in the country. The Benouar (1994) ground motion prediction relationship proved the most appropriate for our model. Calculation of the regional vulnerability modifiers for the country led to 10% to 40% larger vulnerability indexes for different building types compared to average European indexes. The country-specific damage models also included aggregate damage models for residential, commercial and industrial properties, considering the description of the building stock given by the World Housing Encyclopaedia and local rebuilding cost factors equal to 10% for damage grade 1, 20% for damage grade 2, 35% for damage grade 3, 75% for damage grade 4 and 100% for damage grade 5. The damage grades comply with the European Macroseismic Scale (EMS-1998). The model was validated by use of "as-if" historical scenario simulations of three past earthquake events in Algeria: the M6.8 2003 Boumerdes, M7.3 1980 El-Asnam and M7.3 1856 Djidjelli earthquakes. The calculated return periods of the losses for client market portfolio align with the

  3. Radiometric modeling and calibration of the Geostationary Imaging Fourier Transform Spectrometer (GIFTS) ground based measurement experiment

    NASA Astrophysics Data System (ADS)

    Tian, Jialin; Smith, William L.; Gazarik, Michael J.

    2008-12-01

    The ultimate remote sensing benefits of high resolution infrared radiance spectrometers will be realized with their geostationary satellite implementation in the form of imaging spectrometers. This will enable dynamic features of the atmosphere's thermodynamic fields and pollutant and greenhouse gas constituents to be observed for revolutionary improvements in weather forecasts and more accurate air quality and climate predictions. As an important step toward realizing this application objective, the Geostationary Imaging Fourier Transform Spectrometer (GIFTS) Engineering Demonstration Unit (EDU) was successfully developed under the NASA New Millennium Program, 2000-2006. The GIFTS-EDU instrument employs three focal plane arrays (FPAs), which gather measurements across the long-wave IR (LWIR), short/mid-wave IR (SMWIR), and visible spectral bands. The GIFTS calibration is achieved using internal blackbody calibration references at ambient (260 K) and hot (286 K) temperatures. In this paper, we introduce a refined calibration technique that utilizes Principal Component (PC) analysis to compensate for instrument distortions and artifacts, thereby enhancing the absolute calibration accuracy. This method is applied to data collected during the GIFTS Ground Based Measurement (GBM) experiment, together with simultaneous observations by the accurately calibrated AERI (Atmospheric Emitted Radiance Interferometer), both simultaneously zenith viewing the sky through the same external scene mirror at ten-minute intervals throughout a cloudless day at Logan, Utah, on September 13, 2006. The accurately calibrated GIFTS radiances are produced using the first four PC scores in the GIFTS-AERI regression model. Temperature and moisture profiles retrieved from the PC-calibrated GIFTS radiances are verified against radiosonde measurements collected throughout the GIFTS sky measurement period. Using the GIFTS GBM calibration model, we compute the calibrated radiances from data
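
    A principal-component regression calibration of the general kind described here can be sketched briefly (Python, synthetic spectra; the use of four scores mirrors the text, but the data and regression setup are illustrative assumptions): project each instrument spectrum onto the leading PC scores and regress reference radiances on those scores.

        # Minimal sketch: PC-score regression against reference radiances.
        import numpy as np

        rng = np.random.default_rng(6)
        raw = rng.normal(size=(144, 500))             # uncalibrated instrument spectra
        truth = raw @ rng.normal(size=500) * 1e-3     # reference (e.g. AERI) radiances

        mean = raw.mean(axis=0)
        _, _, vt = np.linalg.svd(raw - mean, full_matrices=False)
        scores = (raw - mean) @ vt[:4].T              # first four PC scores

        A = np.hstack([scores, np.ones((scores.shape[0], 1))])  # scores + intercept
        coef, *_ = np.linalg.lstsq(A, truth, rcond=None)
        calibrated = A @ coef
        print("rms residual:", np.sqrt(np.mean((calibrated - truth) ** 2)))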

  4. Inaccuracy Determination in Mathematical Model of Labsocs Efficiency Calibration Program

    NASA Astrophysics Data System (ADS)

    Kuznetsov, M.; Nikishkin, T.; Chursin, S.

    2016-08-01

    A study of the quantitative inaccuracy in radioactive material assay caused by semiconductor detector aging is presented. The study was conducted using a p-type coaxial GC 1518 detector made of high-purity germanium, produced by Canberra, and the LabSOCS mathematical efficiency calibration program. It was discovered that during 8 years of operation the efficiency of the detector had decreased due to growth of the dead layer of the germanium crystal. Increasing the thickness of the dead layer leads to two effects that reduce the efficiency: a shielding effect and a reduction of the active volume of the germanium crystal. It is found that the shielding effect contributes at energies below 88 keV. At energies above 88 keV the inaccuracy is connected with the decrease of the germanium crystal's active volume caused by thermal diffusion of lithium.

  5. Impact of the calibration period on the conceptual rainfall-runoff model parameter estimates

    NASA Astrophysics Data System (ADS)

    Todorovic, Andrijana; Plavsic, Jasna

    2015-04-01

    A conceptual rainfall-runoff model is defined by its structure and parameters, which are commonly inferred through model calibration. Parameter estimates depend on the objective function(s), optimisation method, and calibration period. Model calibration over different periods may result in dissimilar parameter estimates, while model efficiency decreases outside the calibration period. The problem of model (parameter) transferability, which conditions the reliability of hydrologic simulations, has been investigated for decades. In this paper, the dependence of parameter estimates and model performance on the calibration period is analysed. The main question addressed is: are there any changes in the optimised parameters and model efficiency that can be linked to changes in hydrologic or meteorological variables (flow, precipitation and temperature)? The conceptual, semi-distributed HBV-light model is calibrated over five-year periods shifted by one year (sliding time windows). The length of the calibration periods is selected to enable identification of all parameters. One water year of model warm-up precedes every simulation, which starts at the beginning of a water year. The model is calibrated using the built-in GAP optimisation algorithm. The objective function used for calibration is composed of the Nash-Sutcliffe coefficient for flows and logarithms of flows, and the volumetric error, all of which participate in the composite objective function with approximately equal weights. The same prior parameter ranges are used in all simulations. The model is calibrated against flows observed at the Slovac stream gauge on the Kolubara River in Serbia (records from 1954 to 2013). There are no trends in precipitation or flows; however, there is a statistically significant increasing trend in temperature in this catchment. Parameter variability across the calibration periods is quantified in terms of standard deviations of normalised parameters, enabling detection of the most variable parameters.

  6. Binary Classifier Calibration using an Ensemble of Near Isotonic Regression Models

    PubMed Central

    Naeini, Mahdi Pakdaman; Cooper, Gregory F.

    2017-01-01

    Learning accurate probabilistic models from data is crucial in many practical tasks in data mining. In this paper we present a new non-parametric calibration method called ensemble of near isotonic regression (ENIR). The method can be considered as an extension of BBQ [20], a recently proposed calibration method, as well as the commonly used calibration method based on isotonic regression (IsoRegC) [27]. ENIR is designed to address the key limitation of IsoRegC which is the monotonicity assumption of the predictions. Similar to BBQ, the method post-processes the output of a binary classifier to obtain calibrated probabilities. Thus it can be used with many existing classification models to generate accurate probabilistic predictions. We demonstrate the performance of ENIR on synthetic and real datasets for commonly applied binary classification models. Experimental results show that the method outperforms several common binary classifier calibration methods. In particular on the real data, ENIR commonly performs statistically significantly better than the other methods, and never worse. It is able to improve the calibration power of classifiers, while retaining their discrimination power. The method is also computationally tractable for large scale datasets, as it is O(N log N) time, where N is the number of samples. PMID:28316511
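
    The baseline that ENIR extends, plain isotonic-regression calibration (IsoRegC), is available directly in scikit-learn and can be sketched as follows; the ensemble of near-isotonic fits that defines ENIR itself is not reproduced here.

        # Minimal sketch of isotonic-regression calibration of classifier scores.
        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.isotonic import IsotonicRegression
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import train_test_split

        X, y = make_classification(n_samples=4000, random_state=0)
        X_tr, X_cal, y_tr, y_cal = train_test_split(X, y, test_size=0.5, random_state=0)

        clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
        scores = clf.predict_proba(X_cal)[:, 1]          # raw classifier scores

        iso = IsotonicRegression(out_of_bounds="clip").fit(scores, y_cal)
        calibrated = iso.predict(scores)                 # monotone score -> probability
        print("mean |raw - calibrated|:", np.abs(scores - calibrated).mean())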

  7. Evaluation of Automated Model Calibration Techniques for Residential Building Energy Simulation

    SciTech Connect

    Robertson, J.; Polly, B.; Collis, J.

    2013-09-01

    This simulation study adapts and applies the general framework described in BESTEST-EX (Judkoff et al 2010) for self-testing residential building energy model calibration methods. BEopt/DOE-2.2 is used to evaluate four mathematical calibration methods in the context of monthly, daily, and hourly synthetic utility data for a 1960's-era existing home in a cooling-dominated climate. The home's model inputs are assigned probability distributions representing uncertainty ranges, random selections are made from the uncertainty ranges to define 'explicit' input values, and synthetic utility billing data are generated using the explicit input values. The four calibration methods evaluated in this study are: an ASHRAE 1051-RP-based approach (Reddy and Maor 2006), a simplified simulated annealing optimization approach, a regression metamodeling optimization approach, and a simple output ratio calibration approach. The calibration methods are evaluated for monthly, daily, and hourly cases; various retrofit measures are applied to the calibrated models and the methods are evaluated based on the accuracy of predicted savings, computational cost, repeatability, automation, and ease of implementation.

  8. Evaluation of Automated Model Calibration Techniques for Residential Building Energy Simulation

    SciTech Connect

    Robertson, Joseph; Polly, Ben; Collis, Jon

    2013-09-01

    This simulation study adapts and applies the general framework described in BESTEST-EX (Judkoff et al 2010) for self-testing residential building energy model calibration methods. BEopt/DOE-2.2 is used to evaluate four mathematical calibration methods in the context of monthly, daily, and hourly synthetic utility data for a 1960's-era existing home in a cooling-dominated climate. The home's model inputs are assigned probability distributions representing uncertainty ranges, random selections are made from the uncertainty ranges to define "explicit" input values, and synthetic utility billing data are generated using the explicit input values. The four calibration methods evaluated in this study are: an ASHRAE 1051-RP-based approach (Reddy and Maor 2006), a simplified simulated annealing optimization approach, a regression metamodeling optimization approach, and a simple output ratio calibration approach. The calibration methods are evaluated for monthly, daily, and hourly cases; various retrofit measures are applied to the calibrated models and the methods are evaluated based on the accuracy of predicted savings, computational cost, repeatability, automation, and ease of implementation.

  9. Toward cosmological-model-independent calibrations for the luminosity relations of Gamma-Ray Bursts

    NASA Astrophysics Data System (ADS)

    Ding, Xuheng; Li, Zhengxiang; Zhu, Zong-Hong

    2015-05-01

    Gamma-ray bursts (GRBs) have been widely used as distance indicators to measure the cosmic expansion and explore the nature of dark energy. A popular method adopted in previous works is to calibrate the luminosity relations that are responsible for the distance estimation of GRBs with more primary (low-redshift) cosmic distance ladder objects, type Ia supernovae (SNe Ia). Since the distances of SNe Ia in all SN Ia samples used to calibrate GRB luminosity relations were usually derived from the global fit in a specific cosmological model, the distance of a GRB at a given redshift calibrated with matching SNe Ia was still cosmological-model-dependent. In this paper, we first directly determine the distances of SNe Ia from the Angular Diameter Distances (ADDs) of galaxy clusters without any assumption about the background of the universe, and then calibrate GRB luminosity relations with our cosmology-independent distances of SNe Ia. The results suggest that, compared to the previous approach in which distances of SNe Ia used as calibrators are determined from the global fit in a particular cosmological model, our treatment yields almost the same calibrations of the GRB luminosity relations, and their cosmological implications do not suffer from any circularity.

  11. Hydro-abrasive erosion on coated Pelton runners: Partial calibration of the IEC model based on measurements in HPP Fieschertal

    NASA Astrophysics Data System (ADS)

    Felix, D.; Abgottspon, A.; Albayrak, I.; Boes, R. M.

    2016-11-01

    At medium- and high-head hydropower plants (HPPs) on sediment-laden rivers, hydro-abrasive erosion of hydraulic turbines is a major economic issue. For the optimization of such HPPs, there is an interest in equations to predict erosion depths. Such a semi-empirical equation suitable for engineering practice is proposed in the relevant guideline of the International Electrotechnical Commission (IEC 62364). However, for Pelton turbines no numerical values of the model's calibration parameters have been available yet. Within the scope of a research project at the high-head HPP Fieschertal, Switzerland, the particle load and the erosion on the buckets of two hard-coated 32 MW Pelton runners have been measured since 2012. Based on three years of field data, the numerical values of a group of calibration parameters of the IEC erosion model were determined for five application cases: (i) reduction of splitter height, (ii) increase of splitter width and (iii) increase of cut-out depth due to erosion of mainly base material, as well as erosion of coating on (iv) the splitter crests and (v) inside the buckets. Further laboratory and field investigations are recommended to quantify the effects of individual parameters as well as to improve, generalize and validate erosion models for uncoated and coated Pelton turbines.

  12. Maximin Calibration Designs for the Nominal Response Model: An Empirical Evaluation

    ERIC Educational Resources Information Center

    Passos, Valeria Lima; Berger, Martijn P. F.

    2004-01-01

    The problem of finding optimal calibration designs for dichotomous item response theory (IRT) models has been extensively studied in the literature. In this study, this problem will be extended to polytomous IRT models. Focus is given to items described by the nominal response model (NRM). The optimization objective is to minimize the generalized…

  13. Psychological Test Calibration Using the Rasch Model: Some Critical Suggestions on Traditional Approaches

    ERIC Educational Resources Information Center

    Kubinger, Klaus D.

    2005-01-01

    This article emphasizes that the Rasch model is not only very useful for psychological test calibration but is also necessary if the number of solved items is to be used as an examinee's score. Simplified proof that the Rasch model implies specific objective parameter comparisons is given. Consequently, a model check per se is possible. For data…

  14. Psychological Test Calibration Using the Rasch Model--Some Critical Suggestions on Traditional Approaches

    ERIC Educational Resources Information Center

    Kubinger, Klaus D.

    2005-01-01

    In this article, we emphasize that the Rasch model is not only very useful for psychological test calibration but is also necessary if the number of solved items is to be used as an examinee's score. Simplified proof that the Rasch model implies specific objective parameter comparisons is given. Consequently, a model check per se is possible. For…

  15. Calibration of the APEX model to simulate management practice effects on runoff, sediment, and phosphorus loss

    USDA-ARS?s Scientific Manuscript database

    Process-based computer models have been proposed as a tool to generate data for phosphorus-index assessment and development. Although models are commonly used to simulate phosphorus (P) loss from agriculture using managements that are different from the calibration data, this use of models has not ...

  16. Spatial calibration and temporal validation of flow for regional scale hydrologic modeling

    USDA-ARS?s Scientific Manuscript database

    Physically based regional scale hydrologic modeling is gaining importance for the planning and management of water resources. Calibration and validation of such a regional scale model are necessary before applying it for scenario assessment. However, in most regional scale hydrologic modeling, flow validat...

  20. On the Bayesian calibration of computer model mixtures through experimental data, and the design of predictive models

    NASA Astrophysics Data System (ADS)

    Karagiannis, Georgios; Lin, Guang

    2017-08-01

    For many real systems, several computer models may exist with different physics and predictive abilities. To achieve more accurate simulations/predictions, it is desirable for these models to be properly combined and calibrated. We propose the Bayesian calibration of computer model mixture method, which relies on the idea of representing the real system output as a mixture of the available computer model outputs with unknown input-dependent weight functions. The method builds a fully Bayesian predictive model as an emulator for the real system output by combining, weighting, and calibrating the available models in the Bayesian framework. Moreover, it fits a mixture of calibrated computer models that can be used by the domain scientist as a means to combine the available computer models, in a flexible and principled manner, and perform reliable simulations. It can address realistic cases where one model may be more accurate than the others at different input values because the mixture weights, indicating the contribution of each model, are functions of the input. Inference on the calibration parameters can consider multiple computer models associated with different physics. The method does not require knowledge of the fidelity order of the models. We provide a technique, suited to the mixture model framework, that mitigates the computational overhead of considering multiple computer models. We implement the proposed method in a real-world application involving the Weather Research and Forecasting large-scale climate model.
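
    A minimal sketch of the central idea, an input-dependent mixture of two computer models, fit here by least squares rather than the paper's fully Bayesian machinery; the two toy "models" and the logistic weight function are assumptions for illustration.

        import numpy as np
        from scipy.optimize import least_squares

        def model_a(x):                      # first computer model (toy)
            return np.sin(x)

        def model_b(x):                      # second computer model (toy)
            return 0.5 * x

        def mixture(theta, x):
            a, b = theta
            w = 1.0 / (1.0 + np.exp(-(a + b * x)))   # input-dependent weight in [0, 1]
            return w * model_a(x) + (1.0 - w) * model_b(x)

        rng = np.random.default_rng(1)
        x_obs = np.linspace(0, 4, 40)
        # "Real system": behaves like model A at small x and model B at large x.
        y_obs = np.where(x_obs < 2, np.sin(x_obs), 0.5 * x_obs) + rng.normal(0, 0.05, 40)

        fit = least_squares(lambda th: mixture(th, x_obs) - y_obs, x0=[0.0, 0.0])
        print("weight-function parameters:", fit.x)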

  1. Nitrous oxide emissions from cropland: a procedure for calibrating the DayCent biogeochemical model using inverse modelling

    USGS Publications Warehouse

    Rafique, Rashad; Fienen, Michael N.; Parkin, Timothy B.; Anex, Robert P.

    2013-01-01

    DayCent is a biogeochemical model of intermediate complexity widely used to simulate greenhouse gases (GHG), soil organic carbon and nutrients in crop, grassland, forest and savannah ecosystems. Although this model has been applied to a wide range of ecosystems, it is still typically parameterized through a traditional “trial and error” approach and has not been calibrated using statistical inverse modelling (i.e. algorithmic parameter estimation). The aim of this study is to establish and demonstrate a procedure for calibration of DayCent to improve estimation of GHG emissions. We coupled DayCent with the parameter estimation (PEST) software for inverse modelling. The PEST software can be used for calibration through regularized inversion as well as for model sensitivity and uncertainty analysis. The DayCent model was analysed and calibrated using N2O flux data collected over 2 years at the Iowa State University Agronomy and Agricultural Engineering Research Farms, Boone, IA. Crop year 2003 data were used for model calibration and 2004 data were used for validation. The optimization of DayCent model parameters using PEST significantly reduced model residuals relative to the default DayCent parameter values. Parameter estimation improved model performance by reducing the sum of weighted squared residual differences between measured and modelled outputs by up to 67%. For the calibration period, simulation with the default model parameter values underestimated the mean daily N2O flux by 98%. After parameter estimation, the model underestimated the mean daily fluxes by 35%. During the validation period, the calibrated model reduced the sum of weighted squared residuals by 20% relative to the default simulation. The sensitivity analysis provides important insights into the model structure and offers guidance for model improvement.
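
    A minimal sketch of the same workflow, with scipy standing in for PEST's Gauss-Levenberg-Marquardt search and a two-parameter toy flux model standing in for DayCent (both substitutions are assumptions):

        import numpy as np
        from scipy.optimize import least_squares

        def toy_n2o_model(params, soil_temp):
            """Stand-in forward model: daily N2O flux vs. soil temperature."""
            base_rate, q10 = params
            return base_rate * q10 ** ((soil_temp - 20.0) / 10.0)

        rng = np.random.default_rng(42)
        temps = rng.uniform(5, 30, 60)
        observed = toy_n2o_model([1.2, 2.0], temps) + rng.normal(0, 0.1, 60)

        def weighted_residuals(params):
            return (toy_n2o_model(params, temps) - observed) / 0.1  # weight = 1/std

        result = least_squares(weighted_residuals, x0=[0.5, 1.5], method="lm")
        print("estimated parameters:", result.x)
        print("sum of weighted squared residuals:", 2 * result.cost)  # cost = SSWR/2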

  2. Basin-scale geothermal model calibration: experience from the Perth Basin, Australia

    NASA Astrophysics Data System (ADS)

    Wellmann, Florian; Reid, Lynn

    2014-05-01

    The calibration of large-scale geothermal models for entire sedimentary basins is challenging, as direct measurements of rock properties and subsurface temperatures are commonly scarce and the basal boundary conditions are poorly constrained. Instead of the often-applied "trial-and-error" manual model calibration, we examine here whether we can gain additional insight into parameter sensitivities and model uncertainty with a model analysis and calibration study. Our geothermal model is based on a high-resolution full 3-D geological model, covering an area of more than 100,000 square kilometers and extending to a depth of 55 kilometers. The model contains all major faults (>80) and geological units (13) for the entire basin. This geological model is discretised into a rectilinear mesh with a lateral resolution of 500 x 500 m and a variable resolution at depth. The highest resolution of 25 m is applied to the depth range of 1000-3000 m, where most temperature measurements are available. The entire discretised model consists of approximately 50 million cells. The top thermal boundary condition is derived from surface temperature measurements on land and on the ocean floor. The base of the model extends below the Moho, and we apply the heat flux over the Moho as a basal heat flux boundary condition. Rock properties (thermal conductivity, porosity, and heat production) have been compiled from several existing data sets. The conductive geothermal forward simulation is performed with SHEMAT, and we then use the stand-alone capabilities of iTOUGH2 for sensitivity analysis and model calibration. Simulated temperatures are compared to 130 quality-weighted bottom-hole temperature measurements. The sensitivity analysis provided clear insight into the most sensitive parameters and parameter correlations. This proved to be of value, as strong correlations, for example between basal heat flux and heat production in deep geological units, can significantly influence the model calibration procedure.
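
    A minimal sketch of this kind of local sensitivity analysis: a finite-difference Jacobian of simulated temperatures with respect to two parameters, and the parameter correlation implied by it. The one-dimensional conductive geotherm below is a toy stand-in for the SHEMAT/iTOUGH2 workflow.

        import numpy as np

        def forward(params, z):
            """Steady 1-D conduction: basal heat flux q (W/m2) and heat
            production A (W/m3), with conductivity k held fixed."""
            q, A = params
            k = 2.5
            return 15.0 + q * z / k - A * z**2 / (2.0 * k)

        z = np.linspace(500, 3000, 20)            # observation depths (m)
        p0 = np.array([0.06, 1.0e-6])

        def fd_jacobian(params, z, rel=1e-4):
            J = np.empty((z.size, params.size))
            for j in range(params.size):
                dp = rel * abs(params[j])
                up, dn = params.copy(), params.copy()
                up[j] += dp
                dn[j] -= dp
                J[:, j] = (forward(up, z) - forward(dn, z)) / (2 * dp)
            return J

        J = fd_jacobian(p0, z)
        cov = np.linalg.inv(J.T @ J)              # unscaled parameter covariance
        d = np.sqrt(np.diag(cov))
        print("heat flux vs. heat production correlation:", cov[0, 1] / (d[0] * d[1]))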

  3. A user-friendly forest model with a multiplicative mathematical structure: a Bayesian approach to calibration

    NASA Astrophysics Data System (ADS)

    Bagnara, M.; Van Oijen, M.; Cameron, D.; Gianelle, D.; Magnani, F.; Sottocornola, M.

    2014-10-01

    Forest models are increasingly being used to study ecosystem functioning, through the reproduction of carbon fluxes and productivity in very different forests all over the world. Over the last two decades, the need for simple and "easy to use" models for practical applications, characterized by few parameters and equations, has become clear, and some have been developed for this purpose. These models aim to represent the main drivers underlying forest ecosystem processes while being applicable to the widest possible range of forest ecosystems. Recently, it has also become clear that model performance should be assessed not only in terms of accuracy of estimations and predictions, but also in terms of estimates of model uncertainties. Therefore, the Bayesian approach has increasingly been applied to calibrate forest models, with the aim of estimating the uncertainty of their results and of comparing their performances. Some forest models considered to be user-friendly rely on a multiplicative or quasi-multiplicative mathematical structure, which is known to cause problems during the calibration process, mainly due to high correlations between parameters. In a Bayesian framework using Markov chain Monte Carlo sampling, this is likely to impair convergence of the chains and sampling from the correct posterior distribution. Here we show two methods to reach proper convergence when using a forest model with a multiplicative structure, applying different algorithms with different numbers of iterations during the Markov chain Monte Carlo or a two-step calibration. The results showed that recently proposed algorithms for adaptive calibration do not confer a clear advantage over the Metropolis-Hastings random walk algorithm for the forest model used here. Moreover, the calibration remains time-consuming and mathematically difficult, so the advantages of using a fast and user-friendly model can be lost due to the calibration process that is
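
    A minimal sketch of the Metropolis-Hastings random-walk calibration discussed above, sampling a multiplicative toy model in log-parameter space (one common way to ease such parameter correlations); the model, prior and data are all illustrative assumptions.

        import numpy as np

        rng = np.random.default_rng(7)

        def model(theta, light, temp):
            """Multiplicative structure: flux = p1 * f(light) * g(temperature)."""
            p1, p2, p3 = np.exp(theta)            # parameters sampled in log space
            return p1 * (light / (light + p2)) * np.exp(-((temp - 20.0) / p3) ** 2)

        light = rng.uniform(100, 800, 50)
        temp = rng.uniform(5, 30, 50)
        obs = model(np.log([10.0, 300.0, 12.0]), light, temp) + rng.normal(0, 0.3, 50)

        def log_post(theta):                      # Gaussian likelihood + wide prior
            resid = obs - model(theta, light, temp)
            return -0.5 * np.sum(resid**2) / 0.3**2 - 0.5 * np.sum(theta**2) / 10.0

        theta = np.log([5.0, 200.0, 10.0])
        lp = log_post(theta)
        chain = []
        for _ in range(5000):
            prop = theta + rng.normal(0, 0.05, 3) # random-walk proposal
            lp_prop = log_post(prop)
            if np.log(rng.uniform()) < lp_prop - lp:
                theta, lp = prop, lp_prop
            chain.append(theta)
        print("posterior mean (original scale):", np.exp(np.mean(chain, axis=0)))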

  4. Microclimate Data Improve Predictions of Insect Abundance Models Based on Calibrated Spatiotemporal Temperatures

    PubMed Central

    Rebaudo, François; Faye, Emile; Dangles, Olivier

    2016-01-01

    A large body of literature has recently recognized the role of microclimates in controlling the physiology and ecology of species, yet the relevance of fine-scale climatic data for modeling species performance and distribution remains a matter of debate. Using 6-year monitoring data for three potato moth species, major crop pests in the tropical Andes, we asked whether the spatiotemporal resolution of temperature data affects the predictions of models of moth performance and distribution. For this, we used three different climatic data sets: (i) the WorldClim dataset (global dataset), (ii) air temperature recorded using data loggers (weather station dataset), and (iii) air crop canopy temperature (microclimate dataset). We developed a statistical procedure to calibrate all datasets to monthly and yearly variation in temperatures, while keeping both spatial and temporal variances (air monthly temperature at 1 km² for the WorldClim dataset, air hourly temperature for the weather station, and air minute temperature over 250 m radius disks for the microclimate dataset). Then, we computed pest performances based on these three datasets. Results for temperatures ranging from 9 to 11°C revealed discrepancies in the simulation outputs in both survival and development rates depending on the spatiotemporal resolution of the temperature dataset. Temperature and simulated pest performances were then combined into multiple linear regression models to compare predicted vs. field data. We used an additional set of study sites to test the ability of the results of our model to be extrapolated over larger scales. Results showed that the model implemented with microclimatic data best predicted observed pest abundances for our study sites, but was less accurate than the global dataset model when performed at larger scales. Our simulations therefore stress the importance of considering different temperature datasets depending on the issue to be solved in order to accurately predict species

  6. An algorithmic calibration approach to identify globally optimal parameters for constraining the DayCent model

    SciTech Connect

    Rafique, Rashid; Kumar, Sandeep; Luo, Yiqi; Kiely, Gerard; Asrar, Ghassem R.

    2015-02-01

    The accurate calibration of complex biogeochemical models is essential for the robust estimation of soil greenhouse gases (GHG) as well as other environmental conditions and parameters that are used in research and policy decisions. DayCent is a popular biogeochemical model used both nationally and internationally for this purpose. Despite DayCent's popularity, its complex parameter estimation is often based on experts' knowledge, which is somewhat subjective. In this study we used the inverse modelling parameter estimation software (PEST) to calibrate the DayCent model based on sensitivity and identifiability analysis. Using previously published N2O and crop yield data as a basis of our calibration approach, we found that half of the 140 parameters used in this study were the primary drivers of calibration differences (i.e. the most sensitive) and the remaining parameters could not be identified given the data set and parameter ranges we used in this study. The post-calibration results showed improvement over the pre-calibration parameter set based on a decrease in residual differences of 79% for N2O fluxes and 84% for crop yield, and an increase in the coefficient of determination of 63% for N2O fluxes and 72% for corn yield. The results of our study suggest that future studies need to better characterize germination temperature, number of degree-days and temperature dependency of plant growth; these processes were highly sensitive and could not be adequately constrained by the data used in our study. Furthermore, the sensitivity and identifiability analysis was helpful in providing deeper insight into important processes and associated parameters that can lead to further improvement in the calibration of the DayCent model.

  7. Development of a univariate calibration model for pharmaceutical analysis based on NIR spectra.

    PubMed

    Blanco, M; Cruz, J; Bautista, M

    2008-12-01

    Near-infrared spectroscopy (NIRS) has been widely used in the pharmaceutical field because of its ability to provide quality information about drugs in near-real time. In practice, however, the NIRS technique requires construction of multivariate models in order to correct collinearity and the typically poor selectivity of NIR spectra. In this work, a new methodology for constructing simple NIR calibration models has been developed, based on the spectrum for the target analyte (usually the active pharmaceutical ingredient, API), which is compared with that of the sample in order to calculate a correlation coefficient. To this end, calibration samples are prepared spanning an adequate concentration range for the API and their spectra are recorded. The model thus obtained by relating the correlation coefficient to the sample concentration is subjected to least-squares regression. The API concentration in validation samples is predicted by interpolating their correlation coefficients on the straight calibration line previously obtained. The proposed method affords quantitation of the API in pharmaceuticals undergoing physical changes during their production process (e.g. granulates, and coated and non-coated tablets). The results obtained with the proposed methodology, based on correlation coefficients, were compared with the predictions of PLS1 calibration models, in which a different model is required for each type of sample. Error values lower than 1-2% were obtained in the analysis of three types of sample using the same model; these errors are similar to those obtained by applying three PLS models for granules, and non-coated and coated samples. Based on the outcome, our methodology is a straightforward choice for constructing calibration models affording expeditious prediction of new samples with varying physical properties. This makes it an effective alternative to multivariate calibration, which requires use of a different model for each type of sample, depending on
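
    A minimal sketch of the correlation-coefficient calibration described above: correlate each sample spectrum with a pure-API reference spectrum and regress concentration on the correlation coefficient; the Gaussian-band spectra below are invented for illustration.

        import numpy as np

        rng = np.random.default_rng(3)
        wl = np.linspace(1100, 2500, 200)                       # wavelength grid (nm)
        api_spectrum = np.exp(-((wl - 1700) / 80) ** 2)         # pure API (toy)
        excipient = np.exp(-((wl - 2100) / 150) ** 2)           # matrix (toy)

        def sample_spectrum(api_frac):
            noise = rng.normal(0, 0.005, wl.size)
            return api_frac * api_spectrum + (1 - api_frac) * excipient + noise

        conc = np.array([0.05, 0.10, 0.15, 0.20, 0.25])         # calibration set
        r = np.array([np.corrcoef(api_spectrum, sample_spectrum(c))[0, 1]
                      for c in conc])

        slope, intercept = np.polyfit(r, conc, 1)               # straight calibration line
        r_new = np.corrcoef(api_spectrum, sample_spectrum(0.18))[0, 1]
        print("predicted API fraction:", slope * r_new + intercept)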

  8. Calibration of a large-scale semi-distributed hydrological model for the continental United States

    NASA Astrophysics Data System (ADS)

    Li, S.; Lohmann, D.

    2011-12-01

    Recent major flood losses have raised awareness of flood risk worldwide. In large-scale (e.g., country-wide) flood simulation, a semi-distributed hydrological model shows its advantage in capturing the spatial heterogeneity of hydrological characteristics within a basin at relatively low computational cost. However, it is still very challenging to calibrate such a model over a large scale and a wide variety of hydroclimatic conditions. The objectives of this study are (1) to compare the effectiveness of state-of-the-art evolutionary multiobjective algorithms in calibrating a semi-distributed hydrological model used in the RMS flood loss model; and (2) to calibrate the model over the entire continental United States. First, the computational efficiency of the following four algorithms is evaluated: the Non-Dominated Sorted Genetic Algorithm II (NSGAII), the Strength Pareto Evolutionary Algorithm 2 (SPEA2), the Epsilon-Dominance Non-Dominated Sorted Genetic Algorithm II (ɛ-NSGAII), and the Epsilon-Dominance Multi-Objective Evolutionary Algorithm (ɛMOEA). The test was conducted on four river basins with a wide variety of hydro-climatic conditions in the US. The optimization objectives include RMSE and high-flow RMSE. Results of the analysis indicate that NSGAII has the best performance in terms of effectiveness and stability. We then applied the modified version of NSGAII to calibrate the hydrological model over the entire continental US. Comparison with observations and published data shows that the performance of the calibrated model is good overall. This well-calibrated model allows more accurate modeling of flood risk and loss in the continental United States. Furthermore, it will allow underwriters to better manage the exposure.
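
    A minimal sketch of the non-dominated (Pareto) sorting at the core of NSGAII-style multiobjective calibration, for two minimization objectives such as RMSE and high-flow RMSE; the objective values are invented.

        import numpy as np

        def pareto_front(objectives):
            """Indices of solutions not dominated by any other solution."""
            front = []
            for i in range(objectives.shape[0]):
                dominated = np.any(
                    np.all(objectives <= objectives[i], axis=1)
                    & np.any(objectives < objectives[i], axis=1)
                )
                if not dominated:
                    front.append(i)
            return front

        # Columns: (RMSE, high-flow RMSE) for five candidate parameter sets.
        objs = np.array([[1.2, 3.0], [1.0, 3.5], [1.5, 2.5], [1.1, 3.1], [2.0, 4.0]])
        print("Pareto-optimal parameter sets:", pareto_front(objs))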

  9. Gaussian process based modeling and experimental design for sensor calibration in drifting environments

    PubMed Central

    Geng, Zongyu; Yang, Feng; Chen, Xi; Wu, Nianqiang

    2016-01-01

    It remains a challenge to accurately calibrate a sensor subject to environmental drift. The calibration task for such a sensor is to quantify the relationship between the sensor’s response and its exposure condition, which is specified by not only the analyte concentration but also the environmental factors such as temperature and humidity. This work developed a Gaussian Process (GP)-based procedure for the efficient calibration of sensors in drifting environments. Adopted as the calibration model, GP is not only able to capture the possibly nonlinear relationship between the sensor responses and the various exposure-condition factors, but also able to provide valid statistical inference for uncertainty quantification of the target estimates (e.g., the estimated analyte concentration of an unknown environment). Built on GP’s inference ability, an experimental design method was developed to achieve efficient sampling of calibration data in a batch sequential manner. The resulting calibration procedure, which integrates the GP-based modeling and experimental design, was applied to a simulated chemiresistor sensor to demonstrate its effectiveness and its efficiency over the traditional method. PMID:26924894
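
    A minimal sketch of the procedure: fit a GP to responses over (concentration, temperature, humidity) and invert it for the concentration of an unknown exposure by grid search; the simulated response surface and all numbers are assumptions.

        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF, WhiteKernel

        rng = np.random.default_rng(5)
        X = rng.uniform([0, 15, 20], [10, 35, 80], size=(60, 3))  # conc, temp, RH
        y = (2.0 * X[:, 0] + 0.05 * X[:, 1] * X[:, 0] - 0.01 * X[:, 2]
             + rng.normal(0, 0.1, 60))                            # drifting response

        gp = GaussianProcessRegressor(kernel=RBF([3.0, 10.0, 30.0]) + WhiteKernel(0.01))
        gp.fit(X, y)

        # Invert: environment is known (25 C, 50 %RH); observed response is 11.0.
        conc = np.linspace(0, 10, 201)
        trials = np.column_stack([conc, np.full(201, 25.0), np.full(201, 50.0)])
        pred = gp.predict(trials)
        print("estimated concentration:", conc[np.argmin(np.abs(pred - 11.0))])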

  10. Gas chromatographic quantitative analysis of methanol in wine: operative conditions, optimization and calibration model choice.

    PubMed

    Caruso, Rosario; Gambino, Grazia Laura; Scordino, Monica; Sabatino, Leonardo; Traulo, Pasqualino; Gagliano, Giacomo

    2011-12-01

    The influence of the wine distillation process on methanol content has been determined by quantitative analysis using gas chromatography with flame ionization detection (GC-FID). A comparative study between direct injection of diluted wine and injection of distilled wine was performed. The distillation process does not affect methanol quantification in wines in proportions higher than 10%. While quantification performed on distilled samples gives more reliable results, a screening method based on wine injection after a 1:5 water dilution could be employed. The proposed technique was found to be a compromise between the time-consuming distillation process and direct wine injection. In the studied calibration range, the stability of the volatile compounds in the reference solution is concentration-dependent. The stability is higher in the less concentrated reference solution. To shorten the operation time, a stronger temperature ramp and a higher carrier flow rate were employed. Under these conditions, helium consumption and column thermal stress were increased. However, detection limits, calibration limits, and analytical method performance are not affected substantially by changing from normal to forced GC conditions. Statistical data evaluations were made using both ordinary least squares (OLS) and bivariate least squares (BLS) calibration models. Further confirmation was obtained that limit of detection (LOD) values calculated according to the 3-sigma approach are lower than those from the respective Hubaux-Vos (H-V) calculation method. The H-V LOD depends upon background noise, calibration parameters and the number of reference standard solutions employed in producing the calibration curve. These remarks are confirmed by both calibration models used.
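
    A minimal sketch of the 3-sigma LOD calculation referred to above, taking the residual standard deviation of an OLS calibration line as the noise estimate; the calibration points are invented.

        import numpy as np

        conc = np.array([10, 20, 40, 80, 160], dtype=float)    # mg/L standards
        area = np.array([51, 98, 205, 395, 810], dtype=float)  # GC-FID peak areas

        slope, intercept = np.polyfit(conc, area, 1)
        residuals = area - (slope * conc + intercept)
        s_yx = np.sqrt(np.sum(residuals**2) / (conc.size - 2)) # residual std deviation

        lod = 3 * s_yx / slope      # 3-sigma limit of detection, concentration units
        print(f"LOD = {lod:.2f} mg/L")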

  12. Development of a camera model and calibration procedure for oblique-viewing endoscopes.

    PubMed

    Yamaguchi, Tetsuzo; Nakamoto, Masahiko; Sato, Yoshinobu; Konishi, Kozo; Hashizume, Makoto; Sugano, Nobuhiko; Yoshikawa, Hideki; Tamura, Shinichi

    2004-01-01

    Oblique-viewing endoscopes (oblique scopes) are widely used in medical practice. They are essential for certain procedures such as laparoscopy, arthroscopy and sinus endoscopy. In an oblique scope the viewing directions are changeable by rotating the scope cylinder. Although a camera calibration method is necessary to apply augmented reality technologies to oblique endoscopic procedures, no method for oblique scope calibration has yet been developed. In the present paper, we formulate a camera model and a calibration procedure for oblique scopes. In the calibration procedure, Tsai's calibration is performed at zero rotation of the scope cylinder, then the variation of the external camera parameters corresponding to the rotation of the scope cylinder is modeled and estimated as a function of the rotation angle. Accurate estimation of the rotational axis is included in the procedure. The accuracy of this estimation was demonstrated to have a significant effect on overall calibration accuracy in the experimental evaluation, especially with large rotation angles. The projection error in the image plane was approximately two pixels. The proposed method was shown to be clinically applicable.
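
    A minimal sketch of the geometric core of such a model: the extrinsic parameters at cylinder angle theta are obtained by rotating the zero-angle extrinsics about the estimated rotation axis (Rodrigues formula); the axis, base pose and angle below are illustrative, not the paper's calibration values.

        import numpy as np

        def rodrigues(axis, angle):
            """Rotation matrix for `angle` radians about the unit vector `axis`."""
            k = axis / np.linalg.norm(axis)
            K = np.array([[0, -k[2], k[1]], [k[2], 0, -k[0]], [-k[1], k[0], 0]])
            return np.eye(3) + np.sin(angle) * K + (1 - np.cos(angle)) * (K @ K)

        R0 = np.eye(3)                       # extrinsic rotation at zero cylinder angle
        t0 = np.array([0.0, 0.0, 50.0])      # extrinsic translation (mm), illustrative
        axis = np.array([0.02, -0.01, 1.0])  # estimated cylinder rotation axis

        def extrinsics(theta):
            """Camera extrinsics after rotating the scope cylinder by theta."""
            R = rodrigues(axis, theta)
            return R @ R0, R @ t0

        R, t = extrinsics(np.deg2rad(30))
        print(np.round(R, 3), np.round(t, 2))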

  13. Multi-site calibration, validation, and sensitivity analysis of the MIKE SHE Model for a large watershed in northern China

    Treesearch

    S. Wang; Z. Zhang; G. Sun; P. Strauss; J. Guo; Y. Tang; A. Yao

    2012-01-01

    Model calibration is essential for hydrologic modeling of large watersheds in a heterogeneous mountain environment. Little guidance is available for model calibration protocols for distributed models that aim at capturing the spatial variability of hydrologic processes. This study used the physically-based distributed hydrologic model, MIKE SHE, to contrast a lumped...

  14. Chemometric modelling based on 2D-fluorescence spectra without a calibration measurement.

    PubMed

    Solle, D; Geissler, D; Stärk, E; Scheper, T; Hitzmann, B

    2003-01-22

    2D fluorescence spectra provide information on intracellular compounds. Fluorophores like tryptophan, tyrosine and phenylalanine, as well as NADH and flavins, make the corresponding measurement systems very important for bioprocess supervision and control. The evaluation is usually based on chemometric modelling, using off-line measurements of the desired process variables for the calibration procedure. Due to this data-driven approach, many off-line measurements are required. Here a methodology is presented which enables the calibration of chemometric models without any further measurements. The necessary information for the calibration procedure is provided by a priori knowledge about the process, i.e. a mathematical model whose parameters are estimated during the calibration procedure, as well as the fact that the substrate should be consumed at the end of the process run. The new methodology for chemometric calibration is applied to a batch cultivation of aerobically grown S. cerevisiae on the glucose Schatzmann medium. As will be presented, the chemometric models determined by this method can be used for prediction during new process runs. The MATLAB routine is freely available on request from the authors.

  15. When are multiobjective calibration trade-offs in hydrologic models meaningful?

    NASA Astrophysics Data System (ADS)

    Kollat, J. B.; Reed, P. M.; Wagener, T.

    2012-03-01

    This paper applies a four-objective calibration strategy focusing on peak flows, low flows, water balance, and flashiness to 392 model parameter estimation experiment (MOPEX) watersheds across the United States. Our analysis explores the influence of model structure by analyzing how the multiobjective calibration trade-offs for two conceptual hydrologic models, the Hydrology Model (HYMOD) and the Hydrologiska Byråns Vattenbalansavdelning (HBV) model, compare for each of the 392 catchments. Our results demonstrate that for modern multiobjective calibration frameworks to identify any meaningful measure of model structural failure, users must be able to carefully control the precision by which they evaluate their trade-offs. Our study demonstrates that the concept of epsilon-dominance provides an effective means of attaining bounded and meaningful hydrologic model calibration trade-offs. When analyzed at an appropriate precision, we found that meaningful multiobjective trade-offs are far less frequent than prior literature has suggested. However, when trade-offs do exist at a meaningful precision, they have significant value for supporting hydrologic model selection, distinguishing core model deficiencies, and identifying hydroclimatic regions where hydrologic model prediction is highly challenging.
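
    A minimal sketch of the epsilon-dominance test that bounds trade-off precision: objective vectors are snapped to a grid of width epsilon before the usual dominance comparison, so differences smaller than epsilon are treated as meaningless; the numbers are invented.

        import numpy as np

        def eps_dominates(a, b, eps):
            """True if minimization objectives `a` epsilon-dominate `b`."""
            a_box = np.floor(np.asarray(a) / eps)
            b_box = np.floor(np.asarray(b) / eps)
            return bool(np.all(a_box <= b_box) and np.any(a_box < b_box))

        # Two parameter sets whose errors differ by less than the precision:
        print(eps_dominates([0.52, 0.31], [0.55, 0.33], eps=0.1))  # False: same box
        print(eps_dominates([0.52, 0.31], [0.75, 0.33], eps=0.1))  # True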

  16. Sparkle/AM1 Parameters for the Modeling of Samarium(III) and Promethium(III) Complexes.

    PubMed

    Freire, Ricardo O; da Costa, Nivan B; Rocha, Gerd B; Simas, Alfredo M

    2006-01-01

    The Sparkle/AM1 model is extended to samarium(III) and promethium(III) complexes. A set of 15 structures of high crystallographic quality (R factor < 0.05), with ligands chosen to be representative of all samarium complexes in the Cambridge Structural Database 2004, CSD, with nitrogen or oxygen directly bonded to the samarium ion, was used as a training set. In the validation procedure, we used a set of 42 other complexes, also of high crystallographic quality. The results show that this parametrization for the Sm(III) ion is similar in accuracy to the previous parametrizations for Eu(III), Gd(III), and Tb(III). On the other hand, promethium is an artificial radioactive element with no stable isotope. So far, there are no promethium complex crystallographic structures in the CSD. To circumvent this, we confirmed our previous result that RHF/STO-3G/ECP, with the MWB effective core potential (ECP), appears to be the most efficient ab initio model chemistry in terms of coordination polyhedron crystallographic geometry predictions from isolated lanthanide complex ion calculations. We thus generated a set of 15 RHF/STO-3G/ECP promethium complex structures with ligands chosen to be representative of complexes available in the CSD for all other trivalent lanthanide cations, with nitrogen or oxygen directly bonded to the lanthanide ion. For the 42 samarium(III) complexes and 15 promethium(III) complexes considered, the Sparkle/AM1 unsigned mean error, for all interatomic distances between the Ln(III) ion and the ligand atoms of the first coordination sphere, is 0.07 and 0.06 Å, respectively, a level of accuracy comparable to present-day ab initio/ECP geometries, while being hundreds of times faster.

  17. Calibrated Blade-Element/Momentum Theory Aerodynamic Model of the MARIN Stock Wind Turbine: Preprint

    SciTech Connect

    Goupee, A.; Kimball, R.; de Ridder, E. J.; Helder, J.; Robertson, A.; Jonkman, J.

    2015-04-02

    In this paper, a calibrated blade-element/momentum theory aerodynamic model of the MARIN stock wind turbine is developed and documented. The model is created using open-source software and calibrated to closely emulate experimental data obtained by the DeepCwind Consortium using a genetic algorithm optimization routine. The provided model will be useful for those interested in validating floating wind turbine numerical simulators that rely on experiments utilizing the MARIN stock wind turbine—for example, the International Energy Agency Wind Task 30’s Offshore Code Comparison Collaboration Continued, with Correlation project.

  18. Multi-purpose calibration of HBV models for the Rhine with OpenDA

    NASA Astrophysics Data System (ADS)

    van Verseveld, W. J.; Sperna-Weiland, F.; Meissner, D.; Winsemius, H. C.; Weerts, A. H.; Hummel, S.; Sumihar, J. H.; Hegnauer, M.

    2012-04-01

    Calibration strategies for hydrological models nearly always depend on user interests. These interests are strongly determined by the eventual practical application of the model: what information should the model primarily provide, e.g. low flows, high flows, or accumulated inflows; what spatial and temporal information density is available in terms of data, and what information is needed in terms of practical use; and should parameter uncertainty estimation of the hydrological model be included? The Open-source Data Assimilation toolbox (OpenDA) is an open software framework for calibration and data assimilation of hydrological models. In this contribution, we show that OpenDA can be used to rapidly calibrate a hydrological model which is to be used for different purposes or under different circumstances such as those mentioned above. To this end, OpenDA includes a number of calibration algorithms, can communicate with a multitude of hydrological and hydraulic models, and can handle multiple calibration signals in one calibration experiment. It can therefore be employed in complex calibration experiments. New algorithms and models can be included efficiently in the software. Our case study focuses on an HBV model structure for the international Rhine basin (area ~185,000 km2), consisting of 134 sub-catchment units containing many different gauging stations. This model is embedded in Delft-FEWS, an operational forecasting system which can also be used for offline data management and model integration. We performed a recalibration focussing on two applications: • FEWS-Rivers / FEWS-BfG (operational forecasting): Simulations of snow pack and melt within HBV performed poorly in this application. The model was optimized on an hourly time scale. Parameters related to snow processes were identified and optimized on a large number of available gauge data sets, using the Shuffled Complex Evolution algorithm. • FEWS-GRADE (extreme discharges for dike design): In this application

  19. Guidelines for model calibration and application to flow simulation in the Death Valley regional groundwater system

    USGS Publications Warehouse

    Hill, M.C.; D'Agnese, F. A.; Faunt, C.C.

    2000-01-01

    Fourteen guidelines are described which are intended to produce calibrated groundwater models likely to represent the associated real systems more accurately than typically used methods. The 14 guidelines are discussed in the context of the calibration of a regional groundwater flow model of the Death Valley region in the southwestern United States. This groundwater flow system contains two sites of national significance from which the subsurface transport of contaminants could be or is of concern: Yucca Mountain, which is the potential site of the United States high-level nuclear-waste disposal; and the Nevada Test Site, which contains a number of underground nuclear-testing locations. This application of the guidelines demonstrates how they may be used for model calibration and evaluation, and also to direct further model development and data collection.

  20. Understanding the DayCent model: Calibration, sensitivity, and identifiability through inverse modeling

    USGS Publications Warehouse

    Necpálová, Magdalena; Anex, Robert P.; Fienen, Michael N.; Del Grosso, Stephen J.; Castellano, Michael J.; Sawyer, John E.; Iqbal, Javed; Pantoja, Jose L.; Barker, Daniel W.

    2015-01-01

    The ability of biogeochemical ecosystem models to represent agro-ecosystems depends on their correct integration with field observations. We report simultaneous calibration of 67 DayCent model parameters using multiple observation types through inverse modeling using the PEST parameter estimation software. Parameter estimation reduced the total sum of weighted squared residuals by 56% and improved model fit to crop productivity, soil carbon, volumetric soil water content, soil temperature, N2O, and soil NO3- compared to the default simulation. Inverse modeling substantially reduced predictive model error relative to the default model for all model predictions, except for soil NO3- and NH4+. Post-processing analyses provided insights into parameter–observation relationships based on parameter correlations, sensitivity and identifiability. Inverse modeling tools are shown to be a powerful way to systematize and accelerate the process of biogeochemical model interrogation, improving our understanding of model function and the underlying ecosystem biogeochemical processes that they represent.

  1. Verification Techniques for Parameter Selection and Bayesian Model Calibration Presented for an HIV Model

    NASA Astrophysics Data System (ADS)

    Wentworth, Mami Tonoe

    Uncertainty quantification plays an important role when making predictive estimates of model responses. In this context, uncertainty quantification is defined as quantifying and reducing uncertainties, and the objective is to quantify uncertainties in parameters, models and measurements, and to propagate these uncertainties through the model, so that one can make predictive estimates with quantified uncertainties. Two aspects of uncertainty quantification that must be performed prior to propagating uncertainties are model calibration and parameter selection. There are several efficient techniques for these processes; however, the accuracy of these methods is often not verified. This is the motivation for our work, and in this dissertation, we present and illustrate verification frameworks for model calibration and parameter selection in the context of biological and physical models. First, HIV models, developed and improved by [2, 3, 8], describe the viral infection dynamics of an HIV disease. These are also used to make predictive estimates of viral loads and T-cell counts and to construct an optimal control for drug therapy. Estimating input parameters is an essential step prior to uncertainty quantification. However, not all the parameters are identifiable, implying that they cannot be uniquely determined by the observations. These unidentifiable parameters can be partially removed by performing parameter selection, a process in which parameters that have minimal impact on the model response are determined. We provide verification techniques for Bayesian model calibration and parameter selection for an HIV model. As an example of a physical model, we employ a heat model with experimental measurements presented in [10]. A steady-state heat model represents prototypical behavior for the heat conduction and diffusion processes involved in a thermal-hydraulic model, which is part of a nuclear reactor model. We employ this simple heat model to illustrate verification

  2. Stochastic Modeling of Overtime Occupancy and Its Application in Building Energy Simulation and Calibration

    SciTech Connect

    Sun, Kaiyu; Yan, Da; Hong, Tianzhen; Guo, Siyue

    2014-02-28

    Overtime is a common phenomenon around the world. Overtime drives both internal heat gains from occupants, lighting and plug-loads, and HVAC operation during overtime periods. Overtime leads to longer occupancy hours and extended operation of building services systems beyond normal working hours, thus overtime impacts total building energy use. Current literature lacks methods to model overtime occupancy because overtime is stochastic in nature and varies by individual occupants and by time. To address this gap in the literature, this study aims to develop a new stochastic model based on the statistical analysis of measured overtime occupancy data from an office building. A binomial distribution is used to represent the total number of occupants working overtime, while an exponential distribution is used to represent the duration of overtime periods. The overtime model is used to generate overtime occupancy schedules as an input to the energy model of a second office building. The measured and simulated cooling energy use during the overtime period is compared in order to validate the overtime model. A hybrid approach to energy model calibration is proposed and tested, which combines ASHRAE Guideline 14 for the calibration of the energy model during normal working hours, and a proposed KS test for the calibration of the energy model during overtime. The developed stochastic overtime model and the hybrid calibration approach can be used in building energy simulations to improve the accuracy of results, and better understand the characteristics of overtime in office buildings.
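
    A minimal sketch of the stochastic structure described above: a binomial draw for how many occupants work overtime and exponential draws for how long; the parameter values are illustrative assumptions, not the paper's fitted estimates.

        import numpy as np

        rng = np.random.default_rng(11)
        n_occupants = 40       # building population on a workday (assumed)
        p_overtime = 0.2       # probability an occupant works overtime (assumed)
        mean_duration_h = 1.5  # mean overtime duration in hours (assumed)

        def overtime_schedule(n_days):
            """Per-day (count, durations) pairs for overtime occupancy."""
            days = []
            for _ in range(n_days):
                count = rng.binomial(n_occupants, p_overtime)
                durations = rng.exponential(mean_duration_h, size=count)
                days.append((count, durations))
            return days

        for day, (count, durations) in enumerate(overtime_schedule(3), start=1):
            print(f"day {day}: {count} occupants, {durations.sum():.1f} person-hours")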

  3. KINEROS2/AGWA: Model use, calibration, and validation

    USDA-ARS?s Scientific Manuscript database

    KINEROS (KINematic runoff and EROSion) originated in the 1960s as a distributed event-based model that conceptualizes a watershed as a cascade of overland flow model elements that flow into trapezoidal channel model elements. Development and improvement of KINEROS continued from the 1960s on a vari...

  4. Tweaking Model Parameters: Manual Adjustment and Self Calibration

    NASA Astrophysics Data System (ADS)

    Schulz, B.; Tuffs, R. J.; Laureijs, R. J.; Lu, N.; Peschke, S. B.; Gabriel, C.; Khan, I.

    2002-12-01

    The reduction of P32 data is not always straightforward, and the application of the transient model needs tight control by the user. This paper describes how to access the model parameters within the P32Tools software and how to work with the "Inspect signals per pixel" panel, in order to explore the parameter space and improve the model fit.

  5. Eutrophication Model Accuracy - Comparison of Calibration and Verification Performance of a Model of the Neuse River Estuary, North Carolina

    NASA Astrophysics Data System (ADS)

    Bowen, J. D.

    2004-12-01

    A modified version of an existing two-dimensional, laterally averaged model (CE-QUAL-W2) was applied to predict water quality conditions in the lower 80-km of the Neuse River Estuary. Separate time periods were modeled for calibration and verification (model testing). The calibration time period ran from June 1997 to December 1999, while the verification time period ran from January to December 2000. During this time the estuary received two periods of unusually high inflows in early 1998 and again in September and October 1999. The latter rainfall event loaded the estuary with the equivalent of nearly two years worth of water and dissolved inorganic nitrogen in just six weeks. Overall, the level of calibration performance achieved by the model was comparable to that attained in other eutrophication model studies of eastern U.S. estuaries. The model most accurately simulated water quality constituents having a consistent spatial variation within the estuary (e.g. nitrate, salinity), and was least accurate for constituents without a consistent spatial variation (e.g. phosphate, chlorophyll-a). Calibration performance varied widely between the three algal groupings modeled (diatoms and dinoflagellates, cryptomonads and chlorophytes, cyanobacteria). Model performance during verification was comparable to the performance seen during calibration. The model's salinity prediction capabilities were somewhat better in the validation, while dissolved oxygen performance in the validation year was slightly poorer compared to calibration performance. Nutrient and chlorophyll-a performance were virtually the same between the calibration and verification exercises. As part of the TMDL analysis, an unsuccessful attempt was made to capture model error as a component of model uncertainty, but it was found that model residuals were neither unbiased nor normally distributed.

  6. Use of multivariate calibration models based on UV-Vis spectra for seawater quality monitoring in Tianjin Bohai Bay, China.

    PubMed

    Liu, Xianhua; Wang, Lili

    2015-01-01

    A series of ultraviolet-visible (UV-Vis) spectra from seawater samples collected from sites along the coastline of Tianjin Bohai Bay in China were subjected to multivariate partial least squares (PLS) regression analysis. Calibration models were developed for monitoring chemical oxygen demand (COD) and concentrations of total organic carbon (TOC). Three different PLS models were developed using the spectra from raw samples (Model-1), diluted samples (Model-2), and diluted and raw samples combined (Model-3). Experimental results showed that: (i) possible nonlinearities in the signal concentration relationships were well accounted for by the multivariate PLS model; (ii) the predicted values of COD and TOC fit the analytical values well; the high correlation coefficients and small root mean squared error of cross-validation (RMSECV) showed that this method can be used for seawater quality monitoring; and (iii) compared with Model-1 and Model-2, Model-3 had the highest coefficient of determination (R2) and the lowest number of latent variables. This latter finding suggests that only large data sets that include data representing different combinations of conditions (i.e., various seawater matrices) will produce stable site-specific regressions. The results of this study illustrate the effectiveness of the proposed method and its potential for use as a seawater quality monitoring technique.
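
    A minimal sketch of PLS calibration of COD against UV-Vis spectra with a cross-validated RMSECV, in the spirit of the Model-1/2/3 comparison above; the spectra and concentrations are simulated stand-ins for the seawater data.

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression
        from sklearn.model_selection import cross_val_predict

        rng = np.random.default_rng(2)
        wl = np.linspace(200, 700, 120)                        # wavelength grid (nm)
        cod = rng.uniform(1, 20, 50)                           # COD in mg/L (toy)
        spectra = np.outer(cod, np.exp(-((wl - 280) / 60) ** 2))
        spectra += rng.normal(0, 0.02, spectra.shape)          # measurement noise

        pls = PLSRegression(n_components=3)
        predicted = cross_val_predict(pls, spectra, cod, cv=5).ravel()
        rmsecv = np.sqrt(np.mean((predicted - cod) ** 2))
        print(f"RMSECV = {rmsecv:.2f} mg/L")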

  7. Bayesian calibration of the Unified budburst model in six temperate tree species

    NASA Astrophysics Data System (ADS)

    Fu, Yongshuo H.; Campioli, Matteo; Demarée, Gaston; Deckmyn, Alex; Hamdi, Rafiq; Janssens, Ivan A.; Deckmyn, Gaby

    2012-01-01

    Numerous phenology models developed to predict the budburst date of trees have been merged into one Unified model (Chuine, 2000, J. Theor. Biol. 207, 337-347). In this study, we tested a simplified version of the Unified model (the Unichill model) on six woody species. Budburst and temperature data were available for five sites across Belgium from 1957 to 1995. We calibrated the Unichill model using a Bayesian calibration procedure, which reduced the uncertainty of the parameter coefficients and quantified the prediction uncertainty. Model performance differed among species. For two species (chestnut and black locust), the model showed good performance when tested against independent data not used for calibration. For the four other species (beech, oak, birch, ash), the model performed poorly. Model performance improved substantially for most species when using site-specific parameter coefficients instead of across-site parameter coefficients. This suggests that budburst is influenced by the local environment and/or genetic differences among populations. Chestnut, black locust and birch were found to be temperature-driven species, and we therefore analyzed the sensitivity of budburst date to forcing temperature in those three species. Model results showed that budburst advanced with increasing temperature by 1-3 days °C-1, which agreed with the observed trends. In summary, our results suggest that the Unichill model can be successfully applied to chestnut and black locust (with both across-site and site-specific calibration) and to birch (with site-specific calibration). For other species, temperature is not the only determinant of budburst, and additional influencing factors will need to be included in the model.
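
    A minimal sketch of a one-phase forcing (thermal-time) budburst model of the kind the Unified/Unichill family generalizes: daily forcing units from a sigmoid of temperature accumulate until a critical sum is reached; all parameter values are illustrative, not the paper's Bayesian estimates.

        import numpy as np

        def budburst_day(daily_temp, b=-0.25, c=10.0, f_crit=30.0):
            """Day of year at which accumulated forcing first exceeds f_crit."""
            forcing = 0.0
            for day, temp in enumerate(daily_temp, start=1):
                forcing += 1.0 / (1.0 + np.exp(b * (temp - c)))  # sigmoid forcing unit
                if forcing >= f_crit:
                    return day
            return None  # threshold never reached

        rng = np.random.default_rng(4)
        doy = np.arange(1, 181)
        temps = 5 + 15 * np.sin(doy * np.pi / 365) + rng.normal(0, 2, doy.size)
        print("predicted budburst day of year:", budburst_day(temps))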

  8. Calibration of Two-dimensional Floodplain Modeling in the Atchafalaya River Basin Using SAR Interferometry

    NASA Technical Reports Server (NTRS)

    Jung, Hahn Chul; Jasinski, Michael; Kim, Jin-Woo; Shum, C. K.; Bates, Paul; Lee, Hyongki; Neal, Jeffrey; Alsdorf, Doug

    2012-01-01

    Two-dimensional (2D) satellite imagery has been increasingly employed to improve prediction of floodplain inundation models. However, most focus has been on validation of inundation extent, with little attention on the 2D spatial variations of water elevation and slope. The availability of high resolution Interferometric Synthetic Aperture Radar (InSAR) imagery offers an unprecedented opportunity for quantitative validation of surface water heights and slopes derived from 2D hydrodynamic models. In this study, the LISFLOOD-ACC hydrodynamic model is applied to the central Atchafalaya River Basin, Louisiana, during high flows typical of spring floods in the Mississippi Delta region, for the purpose of demonstrating the utility of InSAR in coupled 1D/2D model calibration. Two calibration schemes focusing on Manning's roughness are compared. First, the model is calibrated in terms of water elevations at a single in situ gage during a 62-day simulation period from 1 April 2008 to 1 June 2008. Second, the model is calibrated in terms of water elevation changes calculated from ALOS PALSAR interferometry during the 46-day image acquisition interval from 16 April 2008 to 1 June 2008. The best-fit models show that the mean absolute errors are 3.8 cm for the single in situ gage calibration and 5.7 cm/46 days for the InSAR water level calibration. The optimum values of Manning's roughness coefficients are 0.024/0.10 for the channel/floodplain, respectively, using a single in situ gage, and 0.028/0.10 for the channel/floodplain using SAR. Based on the calibrated water elevation changes, daily storage changes within the approx. 230 sq km model area are also calculated to be of the order of 10⁷ cubic m/day during high water of the modeled period. This study demonstrates the feasibility of SAR interferometry to support 2D hydrodynamic model calibration and as a tool for improved understanding of complex floodplain hydrodynamics.
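
    The two-scheme roughness calibration above reduces to a small search over Manning's n. Below is a hedged, self-contained sketch of that idea: a grid search over channel and floodplain roughness, scoring each pair by mean absolute error against an observation record. The function simulate_water_levels is a hypothetical stand-in for a hydrodynamic model run, not LISFLOOD itself.

      import itertools
      import numpy as np

      def simulate_water_levels(n_channel, n_floodplain):
          # Toy response surface standing in for a 62-day hydrodynamic simulation.
          return 10.0 + 40.0 * n_channel + 5.0 * n_floodplain + np.linspace(0, 0.5, 62)

      observed = simulate_water_levels(0.024, 0.10)   # pretend gage record (62 daily values)

      # Grid search over plausible channel / floodplain Manning's n pairs.
      best = min(
          itertools.product(np.arange(0.02, 0.041, 0.002), np.arange(0.06, 0.141, 0.02)),
          key=lambda p: np.mean(np.abs(simulate_water_levels(*p) - observed)),
      )
      print("best-fit Manning's n (channel, floodplain):", best)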

  9. How does observation uncertainty influence which stream water samples are most informative for model calibration?

    NASA Astrophysics Data System (ADS)

    Wang, Ling; van Meerveld, Ilja; Seibert, Jan

    2016-04-01

    Streamflow isotope samples taken during rainfall-runoff events are very useful for multi-criteria model calibration because they can help decrease parameter uncertainty and improve internal model consistency. However, the number of samples that can be collected and analysed is often restricted by practical and financial constraints. It is, therefore, important to choose an appropriate sampling strategy and to obtain samples that have the highest information content for model calibration. We used the Birkenes hydrochemical model and synthetic rainfall, streamflow and isotope data to explore which samples are most informative for model calibration. Starting with error-free observations, we investigated how many samples are needed to obtain a certain model fit. Based on different parameter sets, representing different catchments, and different rainfall events, we also determined which sampling times provide the most informative data for model calibration. Our results show that simulation performance for models calibrated with the isotopic data from two intelligently selected samples was comparable to simulations based on isotopic data for all 100 time steps. The models calibrated with the intelligently selected samples also performed better than the model calibrations with two benchmark sampling strategies (random selection and selection based on hydrologic information). Surprisingly, samples on the rising limb and at the peak were less informative than expected and, generally, samples taken at the end of the event were most informative. The timing of the most informative samples depends on the proportion of different flow components (baseflow, slow response flow, fast response flow and overflow). For events dominated by baseflow and slow response flow, samples taken at the end of the event after the fast response flow has ended were most informative; when the fast response flow was dominant, samples taken near the peak were most informative. However, when overflow

  10. Examining the Invariance of Rater and Project Calibrations Using a Multi-facet Rasch Model.

    ERIC Educational Resources Information Center

    O'Neill, Thomas R.; Lunz, Mary E.

    To generalize test results beyond the particular test administration, an examinee's ability estimate must be independent of the particular items attempted, and the item difficulty calibrations must be independent of the particular sample of people attempting the items. This stability is a key concept of the Rasch model, a latent trait model of…

  11. Guidebook on LANDFIRE fuels data acquisition, critique, modification, maintenance, and model calibration

    Treesearch

    Richard D. Stratton

    2009-01-01

    With the advent of LANDFIRE fuels layers, an increasing number of specialists are using the data in a variety of fire modeling systems. However, a comprehensive guide on acquiring, critiquing, and editing (ACE) geospatial fuels data does not exist. This paper provides guidance on ACE as well as on assembling a geospatial fuels team, model calibration, and maintaining...

  12. Model Calibration Efforts for the International Space Station's Solar Array Mast

    NASA Technical Reports Server (NTRS)

    Elliott, Kenny B.; Horta, Lucas G.; Templeton, Justin D.; Knight, Norman F., Jr.

    2012-01-01

    The International Space Station (ISS) relies on sixteen solar-voltaic blankets to provide electrical power to the station. Each pair of blankets is supported by a deployable boom called the Folding Articulated Square Truss Mast (FAST Mast). At certain ISS attitudes, the solar arrays can be positioned in such a way that shadowing of either one or three longerons causes an unexpected asymmetric thermal loading that, if unchecked, can exceed the operational stability limits of the mast. Work in this paper documents part of an independent NASA Engineering and Safety Center effort to assess the existing operational limits. Because of the complexity of the system, the problem is being worked using a building-block progression from components (longerons), to units (single or multiple bays), to assembly (full mast). The paper presents results from efforts to calibrate the longeron components. The work includes experimental testing of two types of longerons (straight and tapered), development of Finite Element (FE) models, development of parameter uncertainty models, and the establishment of a calibration and validation process to demonstrate adequacy of the models. Models in the context of this paper refer to both FE models and probabilistic parameter models. Results from model calibration of the straight longerons show that the model is capable of predicting the mean load, axial strain, and bending strain. For validation, parameter values obtained from calibration of straight longerons are used to validate experimental results for the tapered longerons.

  13. Calibrating and testing a gap model for simulating forest management in the Oregon Coast Range

    Treesearch

    Robert J. Pabst; Matthew N. Goslin; Steven L. Garman; Thomas A. Spies

    2008-01-01

    The complex mix of economic and ecological objectives facing today's forest managers necessitates the development of growth models with a capacity for simulating a wide range of forest conditions while producing outputs useful for economic analyses. We calibrated the gap model ZELIG to simulate stand level forest development in the Oregon Coast Range as part of a...

  14. Calibration of a complex watershed model using high resolution remotely sensed evapotranspiration retrievals

    USDA-ARS?s Scientific Manuscript database

    Process-based watershed models typically require a large number of parameters to describe complex hydrologic and biogeochemical processes in highly variable environments. Most of such parameters are not directly measured in field and require calibration, in most cases through matching modeled fluxes...

  15. Calibration and validation of the SWAT model for a forested watershed in coastal South Carolina

    Treesearch

    Devendra M. Amatya; Elizabeth B. Haley; Norman S. Levine; Timothy J. Callahan; Artur Radecki-Pawlik; Manoj K. Jha

    2008-01-01

    Modeling the hydrology of low-gradient coastal watersheds on shallow, poorly drained soils is a challenging task due to the complexities in watershed delineation, runoff generation processes and pathways, flooding, and submergence caused by tropical storms. The objective of the study is to calibrate and validate a GIS-based spatially-distributed hydrologic model, SWAT...

  16. Calibration Of A Distributed Hydrological Model Using Satellite Data Of LST And Ground Discharge Measurements

    NASA Astrophysics Data System (ADS)

    Corbari, Chiara; Mancini, Marco; Li, Jiren; Su, Zhongbo

    2013-12-01

    Calibration and validation of distributed models at the basin scale generally refer to external variables, which are integrated catchment model outputs, and usually depend on the comparison between simulated and observed discharges at the available river cross sections, which are usually very few. However, distributed models allow an internal validation due to their intrinsic structure, so that internal processes and variables of the model can be controlled in each cell of the domain. In particular, this work investigates the potential to constrain evapotranspiration and its spatial and temporal variability through the detection of land surface temperature from satellite remote sensing. This study proposes a methodology for the calibration of distributed hydrological models at the basin scale through constraints on an internal model variable using remote sensing data of land surface temperature. The model (FEST-EWB) algorithm solves the system of energy and mass balances in terms of the equilibrium pixel temperature, or representative equilibrium temperature, that governs the fluxes of energy and mass over the basin domain. This equilibrium surface temperature, which is a critical model state variable, is compared to land surface temperature from MODIS and AATSR. Soil hydraulic parameters and vegetation variables are then calibrated by minimizing the errors between observed and simulated land surface temperature. A similar procedure is also applied for the traditional calibration using only discharge measurements. These analyses are performed for the Upper Yangtze River basin (China) in the framework of the DRAGON-2 and DRAGON-3 Programmes funded by NRSCC and ESA.

  17. Hydrological processes and model representation: Impact of soft data on calibration

    USDA-ARS?s Scientific Manuscript database

    Hydrologic and water quality models are increasingly used to determine the environmental impacts of climate variability and land management. Due to differing model objectives and differences in monitored data, there are currently no universally accepted procedures for calibration and validation in ...

  18. A model for adjustment of differential gravity measurements with simultaneous gravimeter calibration

    NASA Astrophysics Data System (ADS)

    Dias, F. J. S. S.; Escobar, Í. P.

    2001-05-01

    A mathematical model is proposed for the adjustment of differential or relative gravity measurements, involving simultaneously the instrumental readings, the coefficients of the calibration function, and the gravity values of selected base stations. Tests were performed with LaCoste and Romberg model G gravimeter measurements for a set of base stations located along a north-south line with a 1750 mGal gravity range. This line was linked to nine control stations, where absolute gravity values had been determined by the free-fall method with an accuracy better than 10 µGal. The model shows good consistency and stability. Results show the possibility of improving the calibration functions of gravimeters, as well as a better estimation of the gravity values, due to the flexibility allowed in the values of the calibration coefficients.
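
    The joint adjustment can be illustrated with ordinary least squares. The toy below solves simultaneously for station gravity values and a single linear calibration factor k from a handful of gravimeter ties, with one absolute station fixing the datum. The published model is richer (full calibration functions, drift terms), so treat this purely as a sketch with invented numbers.

      import numpy as np

      # True (unknown) gravity values at 4 stations, mGal, and a true scale factor.
      g_true = np.array([0.0, 350.0, 900.0, 1750.0])
      k_true = 1.0005

      rng = np.random.default_rng(2)
      ties = [(0, 1), (1, 2), (2, 3), (0, 2), (1, 3)]   # gravimeter ties between stations
      readings = [(g_true[j] - g_true[i]) / k_true + rng.normal(0, 0.01) for i, j in ties]

      # Unknowns x = [g_1, g_2, g_3, dk] with k = 1 + dk; g_0 = 0 fixes the datum.
      # Observation equation: g_j - g_i = (1 + dk) * r  =>  g_j - g_i - dk*r = r.
      A = np.zeros((len(ties), 4))
      b = np.zeros(len(ties))
      for row, ((i, j), r) in enumerate(zip(ties, readings)):
          if i > 0:
              A[row, i - 1] = -1.0
          if j > 0:
              A[row, j - 1] = 1.0
          A[row, 3] = -r
          b[row] = r
      x, *_ = np.linalg.lstsq(A, b, rcond=None)

      g_est = np.concatenate(([0.0], x[:3]))
      print("estimated gravity values:", np.round(g_est, 3))
      print("estimated calibration factor:", 1.0 + x[3])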

  19. Study of the performance of stereoscopic panomorph systems calibrated with traditional pinhole model

    NASA Astrophysics Data System (ADS)

    Poulin-Girard, Anne-Sophie; Thibault, Simon; Laurendeau, Denis

    2016-06-01

    With their large field of view, anamorphosis, and areas of enhanced magnification, panomorph lenses are an interesting choice for navigation systems for mobile robotics in which knowledge of the surroundings is mandatory. However, the special characteristics of panomorph lenses can make the calibration process challenging. This study focuses on the calibration of two panomorph stereoscopic systems with a model and technique developed for narrow-angle lenses, the "Camera Calibration Toolbox for MATLAB." In order to assess the performance of the systems, the mean reprojection error (MRE) related to the calibration and the reconstruction error of control points of an object of interest at various locations in the field of view are used. The calibrations were successful and exhibit MREs of less than one pixel in all cases. However, some poorly reconstructed control points illustrate that an acceptable MRE guarantees neither the quality of 3-D reconstruction nor its uniformity in the field of view. In addition, the nonuniformity in the 3-D reconstruction quality indicates that panomorph lenses require a more accurate estimation of the principal point (center of distortion) coordinates to improve the calibration and therefore the 3-D reconstruction.
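
    For reference, the MRE quoted above is just the average pixel distance between detected image points and points reprojected through the calibrated camera model; a minimal computation might look like the following (the point arrays are illustrative, not from the study).

      import numpy as np

      def mean_reprojection_error(detected, reprojected):
          # Both arrays have shape (n_points, 2), in pixel coordinates.
          return np.mean(np.linalg.norm(detected - reprojected, axis=1))

      detected = np.array([[100.0, 200.0], [410.5, 320.25]])
      reprojected = np.array([[100.4, 199.7], [410.0, 321.0]])
      print(mean_reprojection_error(detected, reprojected))   # ~0.7 px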

  20. Calibration of an advanced material model for a shotcrete lining

    NASA Astrophysics Data System (ADS)

    Chalmovský, Juraj; Závacký, Martin; Miča, Lumír

    2017-09-01

    Proper choice of a constitutive model is an essential part of any successful application of numerical methods in geotechnical engineering. In most cases, attention is paid to the soil constitutive model. For structural elements, such as tunnel linings, retaining structures, etc., elastic constitutive models are often used. These material models, however, do not capture many aspects of real structural behavior, such as limited tensile and compressive strength, strain softening, and time-dependent behavior during the service life of a structure. In this paper, an application of a novel constitutive model for shotcrete (Schädlich, Schweiger, 2014) is presented. The paper focuses on the process of determining the input parameter values of this model based on laboratory tests. A section of the primary collector network in Brno was chosen for the purpose of obtaining shotcrete lining samples.

  1. Improved Calibration of Modeled Discharge and Storage Change in the Atchafalaya Floodplain Using SAR Interferometry

    NASA Technical Reports Server (NTRS)

    Jung, Hahn Chul; Jasinski, Michael; Kim, Jin-Woo; Shum, C. K.; Bates, Paul; Neal, Jeffrey; Lee, Hyongki; Alsdorf, Doug

    2011-01-01

    This study focuses on the feasibility of using SAR interferometry to support 2D hydrodynamic model calibration and provide water storage change in the floodplain. Two-dimensional (2D) flood inundation modeling has been widely studied using storage cell approaches with the availability of high resolution, remotely sensed floodplain topography. The development of coupled 1D/2D flood modeling has shown improved calculation of 2D floodplain inundation as well as channel water elevation. Most floodplain model results have been validated using remote sensing methods for inundation extent. However, few studies show the quantitative validation of spatial variations in floodplain water elevations in 2D modeling, since most gauges are located along main river channels and traditional single-track satellite altimetry over the floodplain is limited. Synthetic Aperture Radar (SAR) interferometry has recently been proven to be useful for measuring centimeter-scale water elevation changes over the floodplain. In the current study, we apply the LISFLOOD hydrodynamic model to the central Atchafalaya River Basin, Louisiana, during a 62 day period from 1 April to 1 June 2008 using two different calibration schemes for Manning's n. First, the model is calibrated in terms of water elevations from a single in situ gauge, which represents the more traditional approach. Due to the gauge location in the channel, the calibration shows more sensitivity to channel roughness relative to floodplain roughness. Second, the model is calibrated in terms of water elevation changes calculated from ALOS PALSAR interferometry during the 46-day image acquisition interval from 16 April 2008 to 1 June 2008. Since SAR interferometry receives strong backscatter in the floodplain, due to the double-bounce effect, as compared to the specular scattering of open water, the calibration shows more sensitivity to floodplain roughness. An iterative approach is used to determine the best-fit Manning's n for the two

  2. Multi-metric calibration of hydrological model to capture overall flow regimes

    NASA Astrophysics Data System (ADS)

    Zhang, Yongyong; Shao, Quanxi; Zhang, Shifeng; Zhai, Xiaoyan; She, Dunxian

    2016-08-01

    Flow regimes (e.g., magnitude, frequency, variation, duration, timing and rate of change) play a critical role in water supply and flood control, environmental processes, as well as biodiversity and life history patterns in the aquatic ecosystem. The traditional flow magnitude-oriented calibration of hydrological models is usually inadequate to capture all the characteristics of observed flow regimes. In this study, we simulated multiple flow regime metrics simultaneously by coupling a distributed hydrological model with an equally weighted multi-objective optimization algorithm. Two headwater watersheds in the arid Hexi Corridor were selected for the case study. Sixteen metrics were selected as optimization objectives, which could represent the major characteristics of flow regimes. Model performance was compared with that of the single objective calibration. Results showed that most metrics were better simulated by the multi-objective approach than by the single objective calibration, especially the low and high flow magnitudes, frequency and variation, duration, and maximum flow timing and rate of change. However, the model performance for middle flow magnitude was not significantly improved, because this metric was usually well captured by single objective calibration. The timing of minimum flow was poorly predicted by both the multi-metric and single calibrations due to uncertainties in model structure and input data. The sensitive parameter values of the hydrological model changed remarkably, and the simulated hydrological processes from the multi-metric calibration became more reliable, because more flow characteristics were considered. The study is expected to provide more detailed flow information by hydrological simulation for integrated water resources management, and to improve the simulation performance of overall flow regimes.
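
    The equally weighted multi-metric objective can be sketched compactly: compute a vector of flow-regime metrics for the simulated and observed series, normalize the errors, and average them. The five metrics below are illustrative stand-ins for the sixteen used in the study, and the discharge series are synthetic.

      import numpy as np

      def flow_metrics(q):
          # A few example flow-regime metrics for a daily discharge series.
          return np.array([
              np.percentile(q, 10),   # low-flow magnitude
              np.percentile(q, 50),   # median flow
              np.percentile(q, 90),   # high-flow magnitude
              q.std(),                # variation
              float(np.argmax(q)),    # timing of maximum flow
          ])

      def multi_metric_objective(q_sim, q_obs):
          m_sim, m_obs = flow_metrics(q_sim), flow_metrics(q_obs)
          rel_err = np.abs(m_sim - m_obs) / (np.abs(m_obs) + 1e-12)
          return rel_err.mean()       # equal weights across all metrics

      rng = np.random.default_rng(3)
      q_obs = np.abs(rng.gamma(2.0, 5.0, 365))
      q_sim = q_obs * (1 + 0.1 * rng.normal(size=365))
      print(multi_metric_objective(q_sim, q_obs))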

  3. Semi-automated calibration method for modelling of mountain permafrost evolution in Switzerland

    NASA Astrophysics Data System (ADS)

    Marmy, Antoine; Rajczak, Jan; Delaloye, Reynald; Hilbich, Christin; Hoelzle, Martin; Kotlarski, Sven; Lambiel, Christophe; Noetzli, Jeannette; Phillips, Marcia; Salzmann, Nadine; Staub, Benno; Hauck, Christian

    2016-11-01

    Permafrost is a widespread phenomenon in mountainous regions of the world such as the European Alps. Many important topics, such as the future evolution of permafrost under climate change and the detection of permafrost at potential natural-hazard sites, are of major concern to our society. Numerical permafrost models are the only tools which allow for the projection of the future evolution of permafrost. Due to the complexity of the processes involved and the heterogeneity of Alpine terrain, models must be carefully calibrated, and results should be compared with observations at the site (borehole) scale. However, for large-scale applications, a site-specific model calibration for a multitude of grid points would be very time-consuming. To tackle this issue, this study presents a semi-automated calibration method using Generalized Likelihood Uncertainty Estimation (GLUE) as implemented in a 1-D soil model (CoupModel) and applies it to six permafrost sites in the Swiss Alps. We show that this semi-automated calibration method is able to accurately reproduce the main thermal condition characteristics, with some limitations at sites with unique conditions such as 3-D air or water circulation, which have to be calibrated manually. The calibration obtained was used for global and regional climate model (GCM/RCM)-based long-term climate projections under the A1B climate scenario (EU-ENSEMBLES project), specifically downscaled at each borehole site. The projections show general permafrost degradation with thawing at 10 m, even partially reaching 20 m depth by the end of the century, but with different timing among the sites and with partly considerable uncertainties due to the spread of the applied climatic forcing.
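
    A minimal GLUE loop in the spirit described above, under strong simplifying assumptions: a toy linear model stands in for CoupModel, Nash-Sutcliffe efficiency serves as the likelihood measure, and the behavioral threshold is fixed arbitrarily.

      import numpy as np

      rng = np.random.default_rng(4)
      t = np.arange(100.0)
      obs = 2.0 + 0.03 * t + rng.normal(0, 0.2, t.size)   # synthetic ground temperature record

      def model(a, b):                                    # toy stand-in for the soil model
          return a + b * t

      # Monte Carlo sampling of the parameter space from uniform priors.
      samples = rng.uniform([0.0, 0.0], [5.0, 0.1], size=(5000, 2))
      sims = np.array([model(a, b) for a, b in samples])
      nse = 1 - ((sims - obs) ** 2).sum(axis=1) / ((obs - obs.mean()) ** 2).sum()

      behavioral = nse > 0.7                              # GLUE acceptance threshold
      weights = nse[behavioral] / nse[behavioral].sum()   # likelihood weights
      mean_pred = weights @ sims[behavioral]              # likelihood-weighted prediction
      lower, upper = np.percentile(sims[behavioral], [5, 95], axis=0)
      print(f"{behavioral.sum()} behavioral parameter sets")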

  4. AUTOMATIC CALIBRATION OF A STOCHASTIC-LAGRANGIAN TRANSPORT MODEL (SLAM)

    EPA Science Inventory

    Numerical models are a useful tool in evaluating and designing NAPL remediation systems. Traditional constitutive finite difference and finite element models are complex and expensive to apply. For this reason, this paper presents the application of a simplified stochastic-Lagran...

  6. EPIC and APEX: Model use, calibration, and validation

    USDA-ARS?s Scientific Manuscript database

    The Environmental Policy Integrated Climate (EPIC) and Agricultural Policy/Environmental eXtender (APEX) models have been developed to assess a wide variety of agricultural water resource, water quality, and other environmental problems. The EPIC model is designed to be applied at a field-scale leve...

  7. THE EFFECT OF METALLICITY-DEPENDENT T-τ RELATIONS ON CALIBRATED STELLAR MODELS

    SciTech Connect

    Tanner, Joel D.; Basu, Sarbani; Demarque, Pierre

    2014-04-10

    Mixing length theory is the predominant treatment of convection in stellar models today. Usually described by a single free parameter, α, the common practice is to calibrate it using the properties of the Sun, and apply it to all other stellar models as well. Asteroseismic data from Kepler and CoRoT provide precise properties of other stars which can be used to determine α as well, and a recent study of stars in the Kepler field of view found α to vary with metallicity. Interpreting α obtained from calibrated stellar models, however, is complicated by the fact that the value for α depends on the surface boundary condition of the stellar model, or T-τ relation. Calibrated models that use typical T-τ relations, which are static and insensitive to chemical composition, do not include the complete effect of metallicity on α. We use three-dimensional radiation-hydrodynamic simulations to extract metallicity-dependent T-τ relations and use them in calibrated stellar models. We find the previously reported α-metallicity trend to be robust, and not significantly affected by the surface boundary condition of the stellar models.

  8. Estimating the Health Impact of Climate Change with Calibrated Climate Model Output

    PubMed Central

    Zhou, Jingwen; Chang, Howard H.; Fuentes, Montserrat

    2013-01-01

    Studies on the health impacts of climate change routinely use climate model output as future exposure projections. Uncertainty quantification, usually in the form of sensitivity analysis, has focused predominantly on the variability arising from different emission scenarios or multi-model ensembles. This paper describes a Bayesian spatial quantile regression approach to calibrate climate model output for examining the risks of future temperature on adverse health outcomes. Specifically, we first estimate the spatial quantile process for climate model output using nonlinear monotonic regression during a historical period. The quantile process is then calibrated using the quantile functions estimated from the observed monitoring data. Our model also down-scales the gridded climate model output to the point level for projecting future exposure over a specific geographical region. The quantile regression approach is motivated by the need to better characterize the tails of the future temperature distribution, where the greatest health impacts are likely to occur. We applied the methodology to calibrate temperature projections from a regional climate model for the period 2041 to 2050. Accounting for calibration uncertainty, we calculated the number of excess deaths attributed to future temperature for three cities in the US state of Alabama. PMID:24039385
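
    The distribution-level calibration idea can be illustrated with plain empirical quantile mapping; the paper's Bayesian spatial quantile regression is considerably richer, so the sketch below is only a conceptual analogue with synthetic temperature data.

      import numpy as np

      rng = np.random.default_rng(5)
      model_hist = rng.normal(24.0, 4.0, 3650)     # climate-model temps, historical period
      obs_hist = rng.normal(26.0, 3.0, 3650)       # station observations, same period
      model_future = rng.normal(25.5, 4.0, 3650)   # raw future projection

      # Map each future value through the observed quantile function evaluated at the
      # model's historical CDF value; the tails get corrected, which matters for health
      # impacts concentrated at extreme temperatures.
      probs = np.linspace(0.01, 0.99, 99)
      model_q = np.quantile(model_hist, probs)
      obs_q = np.quantile(obs_hist, probs)
      calibrated = np.interp(model_future, model_q, obs_q)

      print("raw 99th pct:", np.quantile(model_future, 0.99).round(1),
            "| calibrated:", np.quantile(calibrated, 0.99).round(1))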

  9. Improvement of spectral calibration for food analysis through multi-model fusion.

    PubMed

    Tan, Chao; Chen, Hui; Xu, Zehong; Wu, Tong; Wang, Li; Zhu, Wanping

    2012-10-01

    Near-infrared (NIR) spectroscopy will present a more promising tool for quantitative analysis if the predictive ability of the calibration model is further improved. To achieve this goal, a new ensemble calibration method based on uninformative variable elimination (UVE)-partial least squares (PLS) is proposed, named ensemble PLS (EPLS), meaning a fusion of multiple PLS models. In this method, different calibration sets are first generated by bootstrapping and different PLS models are obtained. Then, UVE is used to shrink the original variable space into a specific subspace. By repeating this process, a fixed number of candidate PLS member models are obtained. Finally, a subset of the candidate models is integrated to produce an ensemble model. In order to verify the performance of EPLS, three NIR spectral datasets from the food industry were used for illustration. Both full-spectrum PLS and UVEPLS single models were used as references. It was found that the proposed method could lead to a lower RMSEP (root mean square error of prediction) value than PLS and UVEPLS, and that this improvement is statistically significant according to a paired t-test. The results showed that the method is of value for enhancing the predictive ability of PLS-based calibration involving complex NIR matrices in food analysis.
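
    The bootstrap-and-combine core of EPLS is easy to sketch; the UVE variable-elimination and member-selection steps of the published method are omitted here for brevity, and the spectra are synthetic stand-ins.

      import numpy as np
      from sklearn.cross_decomposition import PLSRegression

      rng = np.random.default_rng(6)
      X = rng.normal(size=(80, 150))                        # stand-in NIR spectra
      y = X[:, 10:20].mean(axis=1) + 0.05 * rng.normal(size=80)
      X_new = rng.normal(size=(5, 150))                     # new samples to predict

      preds = []
      for _ in range(50):                                   # 50 bootstrap ensemble members
          idx = rng.integers(0, len(y), len(y))             # resample calibration set
          member = PLSRegression(n_components=5).fit(X[idx], y[idx])
          preds.append(member.predict(X_new).ravel())
      ensemble_prediction = np.mean(preds, axis=0)          # fuse member predictions
      print(ensemble_prediction)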

  10. Improvement of spectral calibration for food analysis through multi-model fusion

    NASA Astrophysics Data System (ADS)

    Tan, Chao; Chen, Hui; Xu, Zehong; Wu, Tong; Wang, Li; Zhu, Wanping

    2012-10-01

    Near-infrared (NIR) spectroscopy will present a more promising tool for quantitative analysis if the predictive ability of the calibration model is further improved. To achieve this goal, a new ensemble calibration method based on uninformative variable elimination (UVE)-partial least squares (PLS) is proposed, named ensemble PLS (EPLS), meaning a fusion of multiple PLS models. In this method, different calibration sets are first generated by bootstrapping and different PLS models are obtained. Then, UVE is used to shrink the original variable space into a specific subspace. By repeating this process, a fixed number of candidate PLS member models are obtained. Finally, a subset of the candidate models is integrated to produce an ensemble model. In order to verify the performance of EPLS, three NIR spectral datasets from the food industry were used for illustration. Both full-spectrum PLS and UVEPLS single models were used as references. It was found that the proposed method could lead to a lower RMSEP (root mean square error of prediction) value than PLS and UVEPLS, and that this improvement is statistically significant according to a paired t-test. The results showed that the method is of value for enhancing the predictive ability of PLS-based calibration involving complex NIR matrices in food analysis.

  11. Estimating the Health Impact of Climate Change with Calibrated Climate Model Output.

    PubMed

    Zhou, Jingwen; Chang, Howard H; Fuentes, Montserrat

    2012-09-01

    Studies on the health impacts of climate change routinely use climate model output as future exposure projections. Uncertainty quantification, usually in the form of sensitivity analysis, has focused predominantly on the variability arising from different emission scenarios or multi-model ensembles. This paper describes a Bayesian spatial quantile regression approach to calibrate climate model output for examining the risks of future temperature on adverse health outcomes. Specifically, we first estimate the spatial quantile process for climate model output using nonlinear monotonic regression during a historical period. The quantile process is then calibrated using the quantile functions estimated from the observed monitoring data. Our model also down-scales the gridded climate model output to the point level for projecting future exposure over a specific geographical region. The quantile regression approach is motivated by the need to better characterize the tails of the future temperature distribution, where the greatest health impacts are likely to occur. We applied the methodology to calibrate temperature projections from a regional climate model for the period 2041 to 2050. Accounting for calibration uncertainty, we calculated the number of excess deaths attributed to future temperature for three cities in the US state of Alabama.

  12. Self-calibrating models for dynamic monitoring and diagnosis

    NASA Technical Reports Server (NTRS)

    Kuipers, Benjamin

    1994-01-01

    The present goal in qualitative reasoning is to develop methods for automatically building qualitative and semiquantitative models of dynamic systems and to use them for monitoring and fault diagnosis. The qualitative approach to modeling provides a guarantee of coverage while our semiquantitative methods support convergence toward a numerical model as observations are accumulated. We have developed and applied methods for automatic creation of qualitative models, developed two methods for obtaining tractable results on problems that were previously intractable for qualitative simulation, and developed more powerful methods for learning semiquantitative models from observations and deriving semiquantitative predictions from them. With these advances, qualitative reasoning comes significantly closer to realizing its aims as a practical engineering method.

  13. Bayesian Calibration, Validation and Uncertainty Quantification for Predictive Modelling of Tumour Growth: A Tutorial.

    PubMed

    Collis, Joe; Connor, Anthony J; Paczkowski, Marcin; Kannan, Pavitra; Pitt-Francis, Joe; Byrne, Helen M; Hubbard, Matthew E

    2017-04-01

    In this work, we present a pedagogical tumour growth example, in which we apply calibration and validation techniques to an uncertain, Gompertzian model of tumour spheroid growth. The key contribution of this article is the discussion and application of these methods (that are not commonly employed in the field of cancer modelling) in the context of a simple model, whose deterministic analogue is widely known within the community. In the course of the example, we calibrate the model against experimental data that are subject to measurement errors, and then validate the resulting uncertain model predictions. We then analyse the sensitivity of the model predictions to the underlying measurement model. Finally, we propose an elementary learning approach for tuning a threshold parameter in the validation procedure in order to maximize predictive accuracy of our validated model.

  14. Multi-objective calibration of a reservoir model: aggregation and non-dominated sorting approaches

    NASA Astrophysics Data System (ADS)

    Huang, Y.

    2012-12-01

    Numerical reservoir models can be helpful tools for water resource management. These models are generally calibrated against historical measurement data made in reservoirs. In this study, two methods are proposed for the multi-objective calibration of such models: aggregation and non-dominated sorting methods. Both methods use a hybrid genetic algorithm as an optimization engine and differ in fitness assignment. In the aggregation method, a weighted sum of scaled simulation errors is designed as an overall objective function to measure the fitness of solutions (i.e. parameter values). The contribution of this study to the aggregation method is the correlation analysis and its implication for the choice of weight factors. In the non-dominated sorting method, a novel method based on non-dominated sorting and the method of minimal distance is used to calculate the dummy fitness of solutions. The proposed methods are illustrated using a water quality model that was set up to simulate the water quality of Pepacton Reservoir, which is located north of New York City and is used for the city's water supply. The study also compares the aggregation and the non-dominated sorting methods. The purpose of this comparison is not to evaluate the pros and cons of the two methods but to determine whether the parameter values, objective function values (simulation errors) and simulated results obtained differ significantly from each other. The final results (objective function values) from the two methods are a good compromise between all objective functions, and none of these results are the worst for any objective function. The calibrated model provides an overall good performance, and the simulated results with the calibrated parameter values match the observed data better than the un-calibrated parameters, which supports and justifies the use of multi-objective calibration. The results achieved in this study can be very useful for the calibration of water

  15. A computer program for calculating relative-transmissivity input arrays to aid model calibration

    USGS Publications Warehouse

    Weiss, Emanuel

    1982-01-01

    A program is documented that calculates a transmissivity distribution for input to a digital ground-water flow model. Factors that are taken into account in the calculation are: aquifer thickness, ground-water viscosity and its dependence on temperature and dissolved solids, and permeability and its dependence on overburden pressure. Other factors affecting ground-water flow are indicated. With small changes in the program code, leakance also could be calculated. The purpose of these calculations is to provide a physical basis for efficient calibration, and to extend rational transmissivity trends into areas where model calibration is insensitive to transmissivity values.

  16. Model development and calibration for the coupled thermal, hydraulic and mechanical phenomena of the bentonite

    SciTech Connect

    Chijimatsu, M.; Borgesson, L.; Fujita, T.; Jussila, P.; Nguyen, S.; Rutqvist, J.; Jing, L.; Hernelind, J.

    2009-02-01

    In Task A of the international DECOVALEX-THMC project, five research teams study the influence of thermal-hydro-mechanical (THM) coupling on the safety of a hypothetical geological repository for spent fuel. In order to improve the analyses, the teams calibrated their bentonite models with results from laboratory experiments, including swelling pressure tests, water uptake tests, thermal gradient tests, and the CEA mock-up THM experiment. This paper describes the mathematical models used by the teams and compares the results of their calibrations with the experimental data.

  17. Calibration of an agricultural-hydrological model (RZWQM2) using surrogate global optimization

    SciTech Connect

    Xi, Maolong; Lu, Dan; Gui, Dongwei; Qi, Zhiming; Zhang, Guannan

    2016-11-27

    Robust calibration of an agricultural-hydrological model is critical for simulating crop yield and water quality and making reasonable agricultural management decisions. However, calibration of agricultural-hydrological system models is challenging because of model complexity, the existence of strong parameter correlation, and significant computational requirements. Therefore, only a limited number of simulations can be allowed in any attempt to find a near-optimal solution within an affordable time, which greatly restricts the successful application of the model. The goal of this study is to locate the optimal solution of the Root Zone Water Quality Model (RZWQM2) given a limited simulation time, so as to improve the model simulation and help make rational and effective agricultural-hydrological decisions. To this end, we propose a computationally efficient global optimization procedure using sparse-grid-based surrogates. We first used advanced sparse grid (SG) interpolation to construct a surrogate system of the actual RZWQM2, and we then calibrated the surrogate model using the global optimization algorithm Quantum-behaved Particle Swarm Optimization (QPSO). As the surrogate model is a polynomial with fast evaluation, it can be efficiently evaluated a sufficiently large number of times during the optimization, which facilitates the global search. We calibrated seven model parameters against five years of yield, drain flow, and NO3-N loss data from a subsurface-drained corn-soybean field in Iowa. Results indicate that an accurate surrogate model can be created for the RZWQM2 with a relatively small number of SG points (i.e., RZWQM2 runs). Compared to the conventional QPSO algorithm, our surrogate-based optimization method can achieve a smaller objective function value and better calibration performance using fewer expensive RZWQM2 executions, which greatly improves computational efficiency.
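
    The surrogate strategy can be sketched with generic tools: sample the expensive model a modest number of times, fit a cheap polynomial surrogate, and run a global optimizer on the surrogate alone. In the sketch below, random sampling stands in for the sparse-grid design, SciPy's differential evolution stands in for QPSO, and expensive_model is a hypothetical placeholder for an RZWQM2 run.

      import numpy as np
      from scipy.optimize import differential_evolution
      from sklearn.linear_model import LinearRegression
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import PolynomialFeatures

      def expensive_model(theta):
          # Stand-in for an RZWQM2 run returning a calibration objective value.
          x, y = theta
          return (x - 0.3) ** 2 + 2 * (y + 0.1) ** 2 + 0.1 * np.sin(5 * x)

      rng = np.random.default_rng(7)
      thetas = rng.uniform(-1, 1, size=(60, 2))            # 60 "expensive" model runs
      f_vals = np.array([expensive_model(t) for t in thetas])

      # Cheap polynomial surrogate fitted to the sampled objective values.
      surrogate = make_pipeline(PolynomialFeatures(degree=4), LinearRegression())
      surrogate.fit(thetas, f_vals)

      # Global search on the surrogate, which is fast to evaluate many times.
      res = differential_evolution(
          lambda t: float(surrogate.predict(t.reshape(1, -1))[0]),
          bounds=[(-1, 1), (-1, 1)], seed=0)
      print("surrogate optimum:", res.x, "| true objective there:", expensive_model(res.x))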

  18. Calibration of an agricultural-hydrological model (RZWQM2) using surrogate global optimization

    DOE PAGES

    Xi, Maolong; Lu, Dan; Gui, Dongwei; ...

    2016-11-27

    Robust calibration of an agricultural-hydrological model is critical for simulating crop yield and water quality and making reasonable agricultural management decisions. However, calibration of agricultural-hydrological system models is challenging because of model complexity, the existence of strong parameter correlation, and significant computational requirements. Therefore, only a limited number of simulations can be allowed in any attempt to find a near-optimal solution within an affordable time, which greatly restricts the successful application of the model. The goal of this study is to locate the optimal solution of the Root Zone Water Quality Model (RZWQM2) given a limited simulation time, so as to improve the model simulation and help make rational and effective agricultural-hydrological decisions. To this end, we propose a computationally efficient global optimization procedure using sparse-grid-based surrogates. We first used advanced sparse grid (SG) interpolation to construct a surrogate system of the actual RZWQM2, and we then calibrated the surrogate model using the global optimization algorithm Quantum-behaved Particle Swarm Optimization (QPSO). As the surrogate model is a polynomial with fast evaluation, it can be efficiently evaluated a sufficiently large number of times during the optimization, which facilitates the global search. We calibrated seven model parameters against five years of yield, drain flow, and NO3-N loss data from a subsurface-drained corn-soybean field in Iowa. Results indicate that an accurate surrogate model can be created for the RZWQM2 with a relatively small number of SG points (i.e., RZWQM2 runs). Compared to the conventional QPSO algorithm, our surrogate-based optimization method can achieve a smaller objective function value and better calibration performance using fewer expensive RZWQM2 executions, which greatly improves computational efficiency.

  19. Calibration of an agricultural-hydrological model (RZWQM2) using surrogate global optimization

    NASA Astrophysics Data System (ADS)

    Xi, Maolong; Lu, Dan; Gui, Dongwei; Qi, Zhiming; Zhang, Guannan

    2017-01-01

    Robust calibration of an agricultural-hydrological model is critical for simulating crop yield and water quality and making reasonable agricultural management decisions. However, calibration of agricultural-hydrological system models is challenging because of model complexity, the existence of strong parameter correlation, and significant computational requirements. Therefore, only a limited number of simulations can be allowed in any attempt to find a near-optimal solution within an affordable time, which greatly restricts the successful application of the model. The goal of this study is to locate the optimal solution of the Root Zone Water Quality Model (RZWQM2) given a limited simulation time, so as to improve the model simulation and help make rational and effective agricultural-hydrological decisions. To this end, we propose a computationally efficient global optimization procedure using sparse-grid-based surrogates. We first used advanced sparse grid (SG) interpolation to construct a surrogate system of the actual RZWQM2, and we then calibrated the surrogate model using the global optimization algorithm Quantum-behaved Particle Swarm Optimization (QPSO). As the surrogate model is a polynomial with fast evaluation, it can be efficiently evaluated a sufficiently large number of times during the optimization, which facilitates the global search. We calibrated seven model parameters against five years of yield, drain flow, and NO3-N loss data from a subsurface-drained corn-soybean field in Iowa. Results indicate that an accurate surrogate model can be created for the RZWQM2 with a relatively small number of SG points (i.e., RZWQM2 runs). Compared to the conventional QPSO algorithm, our surrogate-based optimization method can achieve a smaller objective function value and better calibration performance using fewer expensive RZWQM2 executions, which greatly improves computational efficiency.

  20. Toward diagnostic model calibration and evaluation: Approximate Bayesian computation

    NASA Astrophysics Data System (ADS)

    Vrugt, Jasper A.; Sadegh, Mojtaba

    2013-07-01

    The ever-increasing pace of growth in computational power, along with continued advances in measurement technologies and improvements in process understanding, has stimulated the development of increasingly complex hydrologic models that simulate soil moisture flow, groundwater recharge, surface runoff, root water uptake, and river discharge at different spatial and temporal scales. Reconciling these high-order system models with perpetually larger volumes of field data is becoming more and more difficult, particularly because classical likelihood-based fitting methods lack the power to detect and pinpoint deficiencies in the model structure. Gupta et al. (2008) have recently proposed steps (amongst others) toward the development of a more robust and powerful method of model evaluation. Their diagnostic approach uses signature behaviors and patterns observed in the input-output data to illuminate to what degree a representation of the real world has been adequately achieved and how the model should be improved for the purpose of learning and scientific discovery. In this paper, we introduce approximate Bayesian computation (ABC) as a vehicle for diagnostic model evaluation. This statistical methodology relaxes the need for an explicit likelihood function in favor of one or multiple different summary statistics rooted in hydrologic theory that together have a clearer and more compelling diagnostic power than some average measure of the size of the error residuals. Two illustrative case studies are used to demonstrate that ABC is relatively easy to implement, and readily employs signature-based indices to analyze and pinpoint which part of the model is malfunctioning and in need of further improvement.
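
    At its simplest, ABC works by rejection: draw parameters from the prior, simulate, and accept draws whose summary statistics fall within a tolerance of the observed ones. A minimal sketch with synthetic, discharge-like gamma data (the summaries, tolerance, and priors are illustrative choices, not the paper's):

      import numpy as np

      rng = np.random.default_rng(8)
      obs = rng.gamma(shape=3.0, scale=2.0, size=500)       # "observed" data

      def summaries(x):
          # Signature-style summary statistics replacing an explicit likelihood.
          return np.array([x.mean(), x.std(), np.percentile(x, 90)])

      s_obs = summaries(obs)
      accepted = []
      for _ in range(20000):
          shape, scale = rng.uniform(0.5, 6.0), rng.uniform(0.5, 5.0)   # prior draws
          sim = rng.gamma(shape, scale, size=500)
          if np.linalg.norm((summaries(sim) - s_obs) / s_obs) < 0.1:    # tolerance test
              accepted.append((shape, scale))

      print(len(accepted), "accepted; posterior mean:", np.mean(accepted, axis=0))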

  1. Combining multiobjective optimization and Bayesian model averaging to calibrate forecast ensembles of soil hydraulic models

    NASA Astrophysics Data System (ADS)

    Wöhling, Thomas; Vrugt, Jasper A.

    2008-12-01

    Most studies in vadose zone hydrology use a single conceptual model for predictive inference and analysis. Focusing on the outcome of a single model is prone to statistical bias and underestimation of uncertainty. In this study, we combine multiobjective optimization and Bayesian model averaging (BMA) to generate forecast ensembles of soil hydraulic models. To illustrate our method, we use observed tensiometric pressure head data at three different depths in a layered vadose zone of volcanic origin in New Zealand. A set of seven different soil hydraulic models is calibrated using a multiobjective formulation with three different objective functions that each measure the mismatch between observed and predicted soil water pressure head at one specific depth. The Pareto solution space corresponding to these three objectives is estimated with AMALGAM and used to generate four different model ensembles. These ensembles are postprocessed with BMA and used for predictive analysis and uncertainty estimation. Our most important conclusions for the vadose zone under consideration are (1) the mean BMA forecast exhibits similar predictive capabilities as the best individual performing soil hydraulic model, (2) the size of the BMA uncertainty ranges increase with increasing depth and dryness in the soil profile, (3) the best performing ensemble corresponds to the compromise (or balanced) solution of the three-objective Pareto surface, and (4) the combined multiobjective optimization and BMA framework proposed in this paper is very useful to generate forecast ensembles of soil hydraulic models.
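
    The BMA combination step can be sketched as follows: member forecasts are weighted by their likelihood on calibration data and combined into a mixture mean and variance. A single-pass Gaussian weighting stands in here for the full EM estimation used in practice, and the three "models" are synthetic.

      import numpy as np

      rng = np.random.default_rng(9)
      y = np.sin(np.linspace(0, 6, 50)) + rng.normal(0, 0.1, 50)              # observations
      preds = np.stack([y + rng.normal(0, s, 50) for s in (0.1, 0.3, 0.6)])   # 3 member forecasts

      # Weight members by Gaussian likelihood of their calibration-period errors.
      sigma = 0.2
      log_lik = -0.5 * ((preds - y) ** 2 / sigma**2).sum(axis=1)
      weights = np.exp(log_lik - log_lik.max())
      weights /= weights.sum()

      bma_mean = weights @ preds
      bma_var = weights @ (preds - bma_mean) ** 2 + sigma**2   # between- plus within-model spread
      print("BMA weights:", np.round(weights, 3))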

  2. Transferability of calibrated microsimulation model parameters for safety assessment using simulated conflicts.

    PubMed

    Essa, Mohamed; Sayed, Tarek

    2015-11-01

    Several studies have investigated the relationship between field-measured conflicts and the conflicts obtained from micro-simulation models using the Surrogate Safety Assessment Model (SSAM). Results from recent studies have shown that while reasonable correlation between simulated and real traffic conflicts can be obtained, especially after proper calibration, more work is still needed to confirm that simulated conflicts provide safety measures beyond what can be expected from exposure. As well, the results have emphasized that using micro-simulation models to evaluate safety without proper model calibration should be avoided. The calibration process adjusts relevant simulation parameters to maximize the correlation between field-measured and simulated conflicts. The main objective of this study is to investigate the transferability of calibrated parameters of the traffic simulation model (VISSIM) for safety analysis between different sites. The main purpose is to examine whether the calibrated parameters, when applied to other sites, give reasonable results in terms of the correlation between the field-measured and the simulated conflicts. Eighty-three hours of video data from two signalized intersections in Surrey, BC were used in this study. Automated video-based computer vision techniques were used to extract vehicle trajectories and identify field-measured rear-end conflicts. Calibrated VISSIM parameters obtained from the first intersection, which maximized the correlation between simulated and field-observed conflicts, were used to estimate traffic conflicts at the second intersection, and the results were compared to parameters optimized specifically for the second intersection. The results show that the VISSIM parameters are generally transferable between the two locations, as the transferred parameters provided better correlation between simulated and field-measured conflicts than the default VISSIM parameters. Of the six VISSIM parameters identified as

  3. Calibration of a flood inundation model using a SAR image: influence of acquisition time

    NASA Astrophysics Data System (ADS)

    Van Wesemael, Alexandra; Gobeyn, Sacha; Neal, Jeffrey; Lievens, Hans; Van Eerdenbrugh, Katrien; De Vleeschouwer, Niels; Schumann, Guy; Vernieuwe, Hilde; Di Baldassarre, Giuliano; De Baets, Bernard; Bates, Paul; Verhoest, Niko

    2016-04-01

    Flood risk management has long been searching for effective prediction approaches. As such, the calibration of flood inundation models is continuously improved. In practice, this calibration process consists of finding the optimal roughness parameters, both channel and floodplain Manning coefficients, since these values considerably influence the flood extent in a catchment. In addition, Synthetic Aperture Radar (SAR) images have been proven to be a very useful tool for calibrating the flood extent. These images can distinguish between wet (flooded) and dry (non-flooded) pixels through the intensity of backscattered radio waves. To date, however, a satellite overpass often occurs only once during a flood event. Therefore, this study is specifically concerned with the effect of the timing of the SAR data acquisition on calibration results. In order to model the flood extent, the raster-based inundation model LISFLOOD-FP is used together with a high resolution synthetic aperture radar image (ERS-2 SAR) of a flood event on the river Dee, Wales, in December 2006. As only one satellite image of the considered case study is available, a synthetic framework is implemented in order to generate a time series of SAR observations. These synthetic observations are then used to calibrate the model at different time instants. In doing so, the sensitivity of the model output to the channel and floodplain Manning coefficients is studied through time. The results suggest that there are clear differences in the spatial variability with which water is held within the floodplain, and that these differences vary through time. Calibration by means of satellite flood observations obtained from the rising or receding limb would generally lead to more reliable results than near-peak-flow observations.

  4. Multi-Objective Calibration of Conceptual and Artificial Neural Network Models for Improved Runoff Forecasting

    NASA Astrophysics Data System (ADS)

    de Vos, N. J.; Rientjes, T. H.; Gupta, H. V.

    2006-12-01

    The forecasting of river discharges and water levels requires models that simulate the transformation of rainfall on a watershed into runoff. The most popular approach to this complex modeling issue is to use conceptual hydrological models. In recent years, however, data-driven model alternatives have gained significant attention. Such models extract and re-use information that is implicit in hydrological data and do not directly take into account the physical laws that underlie rainfall-runoff processes. In this study, we have made a comparison between a conceptual hydrological model and the popular data-driven approach of Artificial Neural Network (ANN) modeling. ANNs use flexible model structures that simulate rainfall-runoff processes by mapping the transformation from system input and/or system states (e.g., rainfall, evaporation, soil moisture content) to system output (e.g. river discharge). Special attention was paid to the calibration procedure for both approaches. Single objective functions based on squared-error performance measures, such as the Mean Squared Error (MSE), are commonly used in rainfall-runoff modeling. However, not all differences between modeled and observed hydrograph characteristics can be adequately expressed by a single performance measure. Nowadays it is acknowledged that the calibration of rainfall-runoff models is inherently multi-objective. Therefore, Multi-Objective Evolutionary Algorithms (MOEAs) were tested as alternatives to traditional single-objective algorithms for calibration of both a conceptual and an ANN model for forecasting runoff. The MOEAs compare favorably to traditional single-objective methods in terms of performance, and they shed more light on the trade-offs between various objective functions. Additionally, the distribution of model parameter values gives insights into model parameter uncertainty and model structural deficiencies. Summarizing, the current study presents interesting and promising

  5. Value of using remotely sensed evapotranspiration for SWAT model calibration

    USDA-ARS?s Scientific Manuscript database

    Hydrologic models are useful management tools for assessing water resources solutions and estimating the potential impact of climate variation scenarios. A comprehensive understanding of the water budget components and especially the evapotranspiration (ET) is critical and often overlooked for adeq...

  6. Algorithms for Model Calibration of Ground Water Simulators

    DTIC Science & Technology

    2014-11-20


  7. Calibration of HEC-Ras hydrodynamic model using gauged discharge data and flood inundation maps

    NASA Astrophysics Data System (ADS)

    Tong, Rui; Komma, Jürgen

    2017-04-01

    The estimation of floods is essential for disaster alleviation. Hydrodynamic models are implemented to predict the occurrence and variation of floods at different scales. In practice, the calibration of hydrodynamic models aims to find the best possible parameters for representing the natural flow resistance. In recent years, the calibration of hydrodynamic models has become faster and more practical with the advance of Earth observation products and computer-based optimization techniques. In this study, the Hydrologic Engineering Center's River Analysis System (HEC-Ras) model was set up with a high-resolution digital elevation model from laser scanning for the river Inn in Tyrol, Austria. The 10 largest flood events from 19 hourly discharge gauges, together with flood inundation maps, were selected to calibrate the HEC-Ras model. Manning roughness values and lateral inflow factors were automatically optimized as parameters with the Shuffled Complex with Principal Component Analysis (SP-UCI) algorithm, developed from Shuffled Complex Evolution (SCE-UA). Different objective functions (Nash-Sutcliffe model efficiency coefficient, timing of the peak, peak value, and root-mean-square deviation) were used singly or in combination. It was found that the lateral inflow factor was the most sensitive parameter. The SP-UCI algorithm could avoid local optima and achieve efficient and effective parameter values in the calibration of the HEC-Ras model using flood extent images. The results showed that calibration by means of gauged discharge data and flood inundation maps, together with the Nash-Sutcliffe model efficiency objective function, was very robust in obtaining more reliable flood simulations and in capturing the peak value and timing.
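
    Of the objective functions listed, the Nash-Sutcliffe efficiency is the most common and is simple to state: 1 is a perfect fit, 0 means the simulation is no better than the mean of the observations. A minimal implementation:

      import numpy as np

      def nash_sutcliffe(sim, obs):
          # NSE = 1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2)
          sim, obs = np.asarray(sim, float), np.asarray(obs, float)
          return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

      print(nash_sutcliffe([1.0, 2.1, 2.9], [1.0, 2.0, 3.0]))   # 0.99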

  8. Identifying best-fitting inputs in health-economic model calibration: a Pareto frontier approach.

    PubMed

    Enns, Eva A; Cipriano, Lauren E; Simons, Cyrena T; Kong, Chung Yin

    2015-02-01

    To identify best-fitting input sets using model calibration, individual calibration target fits are often combined into a single goodness-of-fit (GOF) measure using a set of weights. Decisions in the calibration process, such as which weights to use, influence which sets of model inputs are identified as best-fitting, potentially leading to different health economic conclusions. We present an alternative approach to identifying best-fitting input sets based on the concept of Pareto-optimality. A set of model inputs is on the Pareto frontier if no other input set simultaneously fits all calibration targets as well or better. We demonstrate the Pareto frontier approach in the calibration of 2 models: a simple, illustrative Markov model and a previously published cost-effectiveness model of transcatheter aortic valve replacement (TAVR). For each model, we compare the input sets on the Pareto frontier to an equal number of best-fitting input sets according to 2 possible weighted-sum GOF scoring systems, and we compare the health economic conclusions arising from these different definitions of best-fitting. For the simple model, outcomes evaluated over the best-fitting input sets according to the 2 weighted-sum GOF schemes were virtually nonoverlapping on the cost-effectiveness plane and resulted in very different incremental cost-effectiveness ratios ($79,300 [95% CI 72,500-87,600] v. $139,700 [95% CI 79,900-182,800] per quality-adjusted life-year [QALY] gained). Input sets on the Pareto frontier spanned both regions ($79,000 [95% CI 64,900-156,200] per QALY gained). The TAVR model yielded similar results. Choices in generating a summary GOF score may result in different health economic conclusions. The Pareto frontier approach eliminates the need to make these choices by using an intuitive and transparent notion of optimality as the basis for identifying best-fitting input sets.
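
    The Pareto-frontier criterion itself is compact: keep an input set unless some other set fits every calibration target at least as well and at least one target strictly better. A hedged sketch over a matrix of target errors (rows are candidate input sets, columns are calibration targets, lower is better; the data are random placeholders):

      import numpy as np

      def pareto_frontier(errors):
          # Returns a boolean mask of non-dominated rows.
          n = errors.shape[0]
          keep = np.ones(n, dtype=bool)
          for i in range(n):
              dominated = (np.all(errors <= errors[i], axis=1)
                           & np.any(errors < errors[i], axis=1))
              if dominated.any():        # some other set fits every target at least as well
                  keep[i] = False
          return keep

      rng = np.random.default_rng(10)
      target_errors = rng.uniform(size=(200, 3))   # 200 input sets, 3 calibration targets
      frontier = pareto_frontier(target_errors)
      print(frontier.sum(), "input sets on the Pareto frontier")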

  9. Using Diverse Data Types to Calibrate a Watershed Model of the Trout Lake Basin, Northern Wisconsin

    NASA Astrophysics Data System (ADS)

    Hunt, R. J.; Feinstein, D. T.; Pint, C. D.; Anderson, M. P.

    2004-12-01

    As part of the USGS Water, Energy, and Biogeochemical Budgets project and NSF Long-Term Ecological Research work, a parameter estimation code was used to calibrate a deterministic groundwater flow model of the Trout Lake Basin in northern Wisconsin. Observations included traditional calibration targets (head, lake stage, and baseflow observations) as well as unconventional targets such as groundwater flows to and from lakes, depth of a lake plume, and time of travel. The unconventional data types were important for parameter estimation convergence and allowed the development of a more parameterized model. Independent estimates of groundwater inflow to lakes were most important for constraining lakebed leakance, and the depth of the lake plume was important for determining hydraulic conductivity and conceptual aquifer layering. The most important target, however, was a conventional regional baseflow target that was important for correctly distributing flow between sub-basins and the regional system. The use of parameter estimation: 1) facilitated the calibration process by providing a quantitative assessment of the model's ability to match disparate observed data types; and 2) provided a best fit for the particular model conceptualization. The model calibration required the use of a "universal" parameter estimation code in order to include all types of observations in the objective function. The methods described here help address issues of watershed complexity and non-uniqueness common to deterministic watershed models.
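
    Folding disparate observation types into one objective function, as "universal" parameter-estimation codes such as PEST or UCODE do, amounts to a weighted sum of squared residuals over observation groups. The sketch below is a hedged illustration; the group names, weights, and numbers are hypothetical.

    ```python
    import numpy as np

    def weighted_sum_of_squares(groups):
        """PEST-style objective: each observation group (heads, lake
        stages, baseflows, plume depth, travel time, ...) contributes
        weighted squared residuals, so observations with different
        units can share one objective function.

        groups: iterable of (observed, simulated, weight) tuples.
        """
        phi = 0.0
        for obs, sim, w in groups:
            phi += np.sum((w * (np.asarray(obs) - np.asarray(sim))) ** 2)
        return phi

    # Hypothetical example mixing heads (m), baseflow (m3/s), travel time (yr).
    phi = weighted_sum_of_squares([
        (np.array([501.2, 498.7]), np.array([501.0, 499.1]), 1.0),   # heads
        (np.array([0.42]),         np.array([0.47]),         50.0),  # baseflow
        (np.array([12.0]),         np.array([10.5]),         2.0),   # travel time
    ])
    print(phi)
    ```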

  10. Assessing groundwater vulnerability in the Kinshasa region, DR Congo, using a calibrated DRASTIC model

    NASA Astrophysics Data System (ADS)

    Mfumu Kihumba, Antoine; Vanclooster, Marnik; Ndembo Longo, Jean

    2017-02-01

    This study assessed the vulnerability of groundwater to pollution in the Kinshasa region, DR Congo, in support of a groundwater protection program. The parametric vulnerability model (DRASTIC) was modified and calibrated to predict the intrinsic vulnerability as well as the groundwater pollution risk. The method uses groundwater-body-specific parameters for the calibration of the factor ratings and weightings of the original DRASTIC model. These groundwater-specific parameters are inferred from the statistical relation between the original DRASTIC model and observed nitrate pollution for a specific period. In addition, site-specific land use parameters are integrated into the method. The method is fully embedded in a Geographic Information System (GIS). Following these modifications, the correlation coefficient between groundwater pollution risk and observed nitrate concentrations for the 2013-2014 survey improved from r = 0.42, for the original DRASTIC model, to r = 0.61 for the calibrated model. As a way to validate this pollution risk map, observed nitrate concentrations from another survey (2008) were compared to the pollution risk indices, showing a good degree of agreement (r = 0.51). The study shows that calibration of a vulnerability model is recommended when vulnerability maps are used for groundwater resource management and land-use planning at the regional scale, since it adapts the model to the specific area.
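
    The DRASTIC index itself is a weighted sum of seven factor ratings; calibration as described above adjusts the ratings and weightings until the index correlates with observed nitrate. A minimal sketch using the original (uncalibrated) DRASTIC weights; the example ratings are invented:

    ```python
    import numpy as np

    # Original DRASTIC weights for the seven factors: Depth to water,
    # net Recharge, Aquifer media, Soil media, Topography, Impact of
    # the vadose zone, and hydraulic Conductivity.
    WEIGHTS = np.array([5, 4, 3, 2, 1, 5, 3])

    def drastic_index(ratings, weights=WEIGHTS):
        """Vulnerability index = sum of factor rating x factor weight.

        ratings: (..., 7) array of factor ratings (typically 1-10) per
        grid cell. Calibration amounts to adjusting ratings/weights
        until the index correlates with observed nitrate.
        """
        return np.asarray(ratings) @ weights

    # One hypothetical grid cell:
    print(drastic_index([9, 6, 8, 5, 10, 8, 6]))  # -> 171
    ```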

  11. Calibration of Yucca Mountain unsaturated zone flow and transport model using porewater chloride data

    SciTech Connect

    Liu, Jianchun; Sonnenthal, Eric L.; Bodvarsson, Gudmundur S.

    2002-09-01

    In this study, porewater chloride data from Yucca Mountain, Nevada, are analyzed and modeled by 3-D chemical transport simulations and analytical methods. The simulation modeling approach is based on a continuum formulation of coupled multiphase fluid flow and tracer transport processes through fractured porous rock, using a dual-continuum concept. Infiltration-rate calibrations were performed using the porewater chloride data. With the calibrated infiltration rates, modeled chloride distributions matched the observed data more closely. Statistical analyses of the frequency distribution for overall percolation fluxes and chloride concentration in the unsaturated zone system demonstrate that the use of the calibrated infiltration rates had an insignificant effect on the distribution of simulated percolation fluxes but significantly changed the predicted distribution of simulated chloride concentrations. An analytical method was also applied to model transient chloride transport. The method was verified against 3-D simulation results and shown to capture the major transient chemical behavior and trends. Effects of lateral flow in the Paintbrush nonwelded unit on percolation fluxes and chloride distribution were studied by 3-D simulations with increased horizontal permeability. The combined results from these model calibrations furnish important information for the UZ model studies, contributing to performance assessment of the potential repository.

  12. Calibration of multiple Kinect depth sensors for full surface model reconstruction

    NASA Astrophysics Data System (ADS)

    Tsui, Kwan Pang; Wong, Kin Hong; Wang, Changling; Kam, Ho Chuen; Yau, Hing Tuen; Yu, Ying Kin

    2016-07-01

    In this paper, we have investigated different methods of calibrating a 3-D scanning system consisting of multiple Kinect sensors. The main function of the scanning system is the reconstruction of the full surface model of an object. In this work, we build a four-Kinect system in which the Kinect range sensors are positioned around the target object. Each Kinect captures a partial local model, and the local models are combined into the full model. To build such a system, calibration of the poses among the Kinects is essential. We tested a number of methods, using (1) a sphere, (2) a checkerboard, and (3) a cube as the calibration object. After calibration, the results of methods (1) and (2) are used in the multiple-Kinect system for obtaining the 3-D model of a real object. Results are shown and compared. For method (3), we performed only a simulation test of finding the rotation between two Kinects; the result is promising. This is the first part of a long-term project on building a full-surface-model capture system. Such a system should be useful in robot vision, scientific research, and many other industrial applications.
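
    Pose calibration between two Kinects reduces to estimating a rigid transform from corresponding 3-D points (e.g., sphere centres seen by both sensors). The sketch below uses the standard Kabsch/SVD solution; it is a generic illustration under that assumption, not the paper's exact pipeline, and the test data are synthetic.

    ```python
    import numpy as np

    def rigid_transform(A, B):
        """Least-squares rotation R and translation t with B ~ A @ R.T + t.

        A, B: (n, 3) arrays of corresponding points (e.g., sphere
        centres observed by two Kinects). Kabsch/SVD solution.
        """
        ca, cb = A.mean(axis=0), B.mean(axis=0)
        H = (A - ca).T @ (B - cb)
        U, _, Vt = np.linalg.svd(H)
        d = np.sign(np.linalg.det(Vt.T @ U.T))      # guard against reflections
        R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
        t = cb - R @ ca
        return R, t

    # Hypothetical check: recover a known pose from noiseless correspondences.
    rng = np.random.default_rng(0)
    A = rng.normal(size=(10, 3))
    R_true = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
    B = A @ R_true.T + np.array([0.1, -0.2, 0.3])
    R, t = rigid_transform(A, B)
    print(np.allclose(R, R_true), np.round(t, 3))
    ```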

  13. Calibration of Yucca Mountain unsaturated zone flow and transport model using porewater chloride data.

    PubMed

    Liu, Jianchun; Sonnenthal, Eric L; Bodvarsson, Gudmundur S

    2003-01-01

    In this study, porewater chloride data from Yucca Mountain, NV are analyzed and modeled by three-dimensional chemical transport simulation and analytical methods. The simulation modeling approach is based on a continuum formulation of coupled multiphase fluid flow and tracer transport processes through fractured porous rock using a dual-continuum concept. Infiltration rate calibrations were performed using the porewater chloride data. Model results of chloride distributions were improved in matching the observed data with the calibrated infiltration rates. Statistical analyses of the frequency distribution for overall percolation fluxes and chloride concentration in the unsaturated zone system demonstrate that the use of the calibrated infiltration rates had an insignificant effect on the distribution of simulated percolation fluxes but significantly changed the predicted distribution of simulated chloride concentrations. An analytical method was also applied to model transient chloride transport. The method was verified by three-dimensional simulation results to be capable of capturing major chemical transient behavior and trends. Effects of lateral flow in the Paintbrush nonwelded unit on percolation fluxes and chloride distribution were studied by three-dimensional simulations with increased horizontal permeability. The combined results from these model calibrations furnish important information for the UZ model studies, contributing to performance assessment of the potential repository.
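
    The physical idea behind using porewater chloride to constrain infiltration is the classical chloride mass balance: at steady state, the chloride flux delivered by precipitation equals the flux carried downward by net infiltration. The one-liner below illustrates that first-order check; the numbers are purely illustrative, and the paper's actual calibration uses full three-dimensional transport simulations rather than this shortcut.

    ```python
    def cmb_infiltration(precip_mm_yr, cl_precip_mg_l, cl_porewater_mg_l):
        """Chloride mass balance: q = P * Cl_P / Cl_porewater.

        A first-order sanity check on calibrated infiltration rates;
        all inputs here are illustrative placeholders.
        """
        return precip_mm_yr * cl_precip_mg_l / cl_porewater_mg_l

    print(cmb_infiltration(170.0, 0.62, 25.0))  # ~4.2 mm/yr net infiltration
    ```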

  14. How Does Knowing Snowpack Distribution Help Model Calibration and Reservoir Management?

    NASA Astrophysics Data System (ADS)

    Graham, C. B.; Mazurkiewicz, A.; McGurk, B. J.; Painter, T. H.

    2014-12-01

    Well-calibrated hydrologic models are a necessary tool for reservoir managers to meet increasingly complicated regulatory, environmental, and consumptive demands on water supply systems. Achieving these objectives is difficult during periods of drought, such as seen in the Sierra Nevada in recent years, which emphasizes the importance of accurate watershed modeling and forecasting of runoff. While basin discharge has traditionally been the main criterion for model calibration, many studies have shown it to be a poor control on model calibration where correct understanding of the subbasin hydrologic processes is required. Additional data sources such as snowpack accumulation and melt are often required to create a reliable model calibration. When allocating resources for monitoring snowpack conditions, water system managers often must choose between monitoring point locations at high temporal resolution (i.e., real-time weather and snow monitoring stations) and large spatial surveys (i.e., remote sensing). NASA's Airborne Snow Observatory (ASO) provides a unique opportunity to test the relative value of spatially dense, temporally sparse measurements vs. temporally dense, spatially sparse measurements for hydrologic model calibration. The ASO is a demonstration mission using a coupled LiDAR and imaging spectrometer mounted on an aircraft flying at 6100 m to collect high-spatial-density measurements of snow water content and albedo over the 1189 km2 Tuolumne River Basin. Snow depth and albedo were collected weekly throughout the snowmelt runoff period at 5 m2 resolution during the 2013-2014 snowmelt. We developed an implementation of the USGS Precipitation Runoff Modeling System (PRMS) for the Tuolumne River above Hetch Hetchy Reservoir, the primary water source for San Francisco. The modeled snow accumulation and ablation were calibrated in 2 models using either 2 years of weekly measurements of distributed snow water equivalent from the ASO, or 2 years of 15 minute snow

  15. Radiative type III seesaw model and its collider phenomenology

    NASA Astrophysics Data System (ADS)

    von der Pahlen, Federico; Palacio, Guillermo; Restrepo, Diego; Zapata, Oscar

    2016-08-01

    We analyze the present bounds of a scotogenic model, the radiative type III seesaw, in which an additional scalar doublet and at least two fermion triplets of SU(2)_L are added to the Standard Model. In the radiative type III seesaw, the new physics (NP) sector is odd under an exact global Z2 symmetry. This symmetry guarantees that the lightest NP neutral particle is stable, providing a natural dark matter candidate, and leads to naturally suppressed neutrino masses generated by a one-loop realization of an effective Weinberg operator. We focus on the region with the highest sensitivity in present and future LHC searches, with light scalar dark matter and at least one NP fermion triplet at the sub-TeV scale. This region allows for significant production cross sections of NP fermion pairs at the LHC. We reinterpret a set of LHC searches for supersymmetric particles, using the package CheckMATE, to set limits on our model as a function of the masses of the NP particles and their Yukawa interactions. The most sensitive search channel is found to be dileptons plus missing transverse energy. In order to target the case of tau-enhanced decays and the case of compressed spectra, we reinterpret the recent slepton and chargino search bounds by ATLAS. For a lightest NP fermion triplet with a maximal branching ratio to either electrons or muons, we exclude NP fermion masses of up to 650 GeV, while this bound is reduced to approximately 400 GeV in the tau-philic case. Allowing for a general flavor structure, we set limits on the Yukawa couplings, which are directly related to the neutrino flavor structure.

  16. Modelling, calibration, and error analysis of seven-hole pressure probes

    NASA Technical Reports Server (NTRS)

    Zillac, G. G.

    1993-01-01

    This report describes the calibration of a nonnulling, conical, seven-hole pressure probe over a large range of flow onset angles. The calibration procedure is based on the use of differential pressures to determine the three components of velocity. The method allows determination of the flow angle and velocity magnitude to within an average error of 1.0 deg and 1.0 percent, respectively. Greater accuracy can be achieved by using high-quality pressure transducers. Also included is an examination of the factors which limit the use of the probe, a description of the measurement chain, an error analysis, and a typical experimental result. In addition, a new general analytical model of pressure probe behavior is described, and the validity of the model is demonstrated by comparing it with experimentally measured calibration data for a three-hole yaw meter and a seven-hole probe.
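
    A nonnulling calibration of this kind typically fits low-order polynomials that map dimensionless pressure coefficients to flow angles over the calibration data set. The sketch below illustrates only that regression step, on synthetic data; the coefficient definitions and numbers are placeholders, not the report's actual calibration surfaces.

    ```python
    import numpy as np

    # Synthetic stand-ins for pitch/yaw pressure coefficients built from
    # hole-pressure differences, and a "true" pitch angle response.
    rng = np.random.default_rng(0)
    n = 200
    c_alpha = rng.uniform(-1, 1, n)          # pitch pressure coefficient
    c_beta = rng.uniform(-1, 1, n)           # yaw pressure coefficient
    alpha = 20.0 * c_alpha + 3.0 * c_alpha * c_beta + rng.normal(0, 0.2, n)

    # Design matrix of polynomial terms in (c_alpha, c_beta), fit by
    # least squares to give the angle-calibration surface.
    X = np.column_stack([np.ones(n), c_alpha, c_beta, c_alpha * c_beta,
                         c_alpha**2, c_beta**2])
    coef, *_ = np.linalg.lstsq(X, alpha, rcond=None)
    print(np.round(coef, 2))   # recovers roughly [0, 20, 0, 3, 0, 0]
    ```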

  17. Modeling and Experimental Analysis of Piezoelectric Shakers for High-Frequency Calibration of Accelerometers

    SciTech Connect

    Vogl, Gregory W.; Harper, Kari K.; Payne, Bev

    2010-05-28

    Piezoelectric shakers have been developed and used at the National Institute of Standards and Technology (NIST) for decades for high-frequency calibration of accelerometers. Recently, NIST researchers built new piezoelectric shakers in the hopes of reducing the uncertainties in the calibrations of accelerometers while extending the calibration frequency range beyond 20 kHz. The ability to build and measure piezoelectric shakers invites modeling of these systems in order to improve their design for increased performance, which includes a sinusoidal motion with lower distortion, lower cross-axial motion, and an increased frequency range. In this paper, we present a model of piezoelectric shakers and match it to experimental data. The equations of motion for all masses are solved along with the coupled state equations for the piezoelectric actuator. Finally, additional electrical elements like inductors, capacitors, and resistors are added to the piezoelectric actuator for matching of experimental and theoretical frequency responses.
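
    The mechanical core of such a shaker model is a set of coupled mass-spring-damper equations; a single resonant mode already shows the shape of the frequency response being matched to experiment. A minimal, purely illustrative sketch (the natural frequency, damping ratio, and gain below are invented, not NIST values):

    ```python
    import numpy as np

    def shaker_response(f, f_n=30e3, zeta=0.02, gain=1.0):
        """Magnitude of a single-mode (mass-spring-damper) transfer
        function, a stand-in for one resonance of a shaker armature
        model; f and f_n in Hz.
        """
        r = f / f_n
        return gain / np.sqrt((1 - r**2) ** 2 + (2 * zeta * r) ** 2)

    f = np.linspace(1e3, 50e3, 5)
    print(np.round(shaker_response(f), 3))
    ```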

  18. The criterion-calibration model of cue interaction in contingency judgments.

    PubMed

    Hannah, Samuel D; Allan, Lorraine G

    2011-05-01

    Siegel, Allan, Hannah, and Crump (2009) demonstrated that cue interaction effects in human contingency judgments reflect processing that occurs after the acquisition of information. This finding is in conflict with a broad class of theories. We present a new postacquisition model, the criterion-calibration model, that describes cue interaction effects as involving shifts in a report criterion. The model accounts for the Siegel et al. data and outperforms the only other postacquisition model of cue interaction, Stout and Miller's (2007) SOCR model. We present new data from an experiment designed to evaluate a prediction of the two models regarding reciprocal cue interaction effects. The new data provide further support for the criterion-calibration model.

  19. Evaluation of WAVEWATCH III Wave Model under Tropical Cyclone Conditions

    NASA Astrophysics Data System (ADS)

    Port, J.; Hara, T.; Reichl, B. G.; Ginis, I.

    2016-02-01

    In order to best prepare coastal regions for incoming storms, the ability to model tropical cyclone (hurricane) track and intensity has never been more vital. The ocean surface wave field (sea state) may significantly impact the storm intensity forecast because it modifies the air-sea momentum and heat fluxes as well as the upper-ocean turbulent mixing. Therefore, it is important to include accurate sea state predictions in hurricane prediction models. WAVEWATCH III (WW3) is one of the most skillful surface wave models, and NOAA plans to incorporate it into its next-generation hurricane prediction models. However, WW3 performance under hurricane conditions has not been thoroughly tested and requires further validation against observational data. This study compares the significant wave height (SWH) predicted by WW3 with satellite and Scanning Radar Altimeter (SRA) observations during Hurricanes Irene (2011) and Edouard (2014). The WW3 data are generated with and without considering ocean currents and with different wind forcing products. The inclusion of currents generally reduces the predicted SWH and improves the correlation between WW3 predictions and observational data. While both SRA and satellite data offer reasonably good correlations with the WW3 data, the standard deviation of the satellite data from the WW3 data is significantly smaller than that of the SRA data. The generally good correlation found between the observed SWH readings and the SWH values from WW3 supports the validity of the WW3 wave model results under tropical cyclone conditions.
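
    Comparisons like these usually reduce to a few summary statistics per flight or overpass: correlation, bias, and the centered (bias-removed) RMS difference. A hedged sketch with hypothetical SWH pairs:

    ```python
    import numpy as np

    def comparison_stats(obs, model):
        """Correlation, bias, and centered RMS difference between
        observed and modeled significant wave height (or any paired
        series)."""
        obs, model = np.asarray(obs), np.asarray(model)
        r = np.corrcoef(obs, model)[0, 1]
        bias = np.mean(model - obs)
        crmsd = np.sqrt(np.mean(((model - model.mean()) - (obs - obs.mean())) ** 2))
        return r, bias, crmsd

    # Hypothetical SRA-vs-WW3 SWH pairs (metres):
    print(comparison_stats([4.1, 5.3, 6.8, 7.2], [4.4, 5.0, 7.1, 7.6]))
    ```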

  20. Calibration under uncertainty for finite element models of masonry monuments

    SciTech Connect

    Atamturktur, Sezer,; Hemez, Francois,; Unal, Cetin

    2010-02-01

    Historical unreinforced masonry buildings often include features such as load bearing unreinforced masonry vaults and their supporting framework of piers, fill, buttresses, and walls. The masonry vaults of such buildings are among the most vulnerable structural components and certainly among the most challenging to analyze. The versatility of finite element (FE) analyses in incorporating various constitutive laws, as well as practically all geometric configurations, has resulted in the widespread use of the FE method for the analysis of complex unreinforced masonry structures over the last three decades. However, an FE model is only as accurate as its input parameters, and there are two fundamental challenges while defining FE model input parameters: (1) material properties and (2) support conditions. The difficulties in defining these two aspects of the FE model arise from the lack of knowledge in the common engineering understanding of masonry behavior. As a result, engineers are unable to define these FE model input parameters with certainty, and, inevitably, uncertainties are introduced to the FE model.

  1. Near infrared spectroscopic calibration models for real time monitoring of powder density.

    PubMed

    Román-Ospino, Andrés D; Singh, Ravendra; Ierapetritou, Marianthi; Ramachandran, Rohit; Méndez, Rafael; Ortega-Zuñiga, Carlos; Muzzio, Fernando J; Romañach, Rodolfo J

    2016-10-15

    Near infrared spectroscopic (NIRS) calibration models for real-time prediction of powder density (tap, bulk, and consolidated) were developed for a pharmaceutical formulation. Powder density is a critical property in the manufacturing of solid oral dosages, related to critical quality attributes such as tablet mass, hardness, and dissolution. Establishing calibration techniques for powder density is highly desirable for the development of control strategies. Three techniques were evaluated to obtain the required variation in powder density for the calibration sets: 1) preparing different tap density levels (for a single component); 2) generating different strain levels, and as a consequence different powder densities, in powder blends through a modified Couette shear cell; and 3) applying normal forces to a pharmaceutical blend during a compressibility test with a powder rheometer. For each variation in powder density, near infrared spectra were acquired to develop partial least squares (PLS) calibration models. Test samples were predicted with relative standard errors of prediction of 0.38%, 7.65%, and 0.93% for the tap density (single component), shear, and rheometer approaches, respectively. Spectra obtained in real time in a continuous manufacturing (CM) plant were compared to the spectra from the three approaches used to vary powder density. The calibration based on the application of different strain levels showed the greatest similarity with the blends produced in the CM plant.
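
    A partial least squares calibration of this kind can be sketched in a few lines with scikit-learn; the synthetic "spectra" below merely encode density in a broad band plus noise, standing in for measured NIR spectra and reference tap/bulk densities.

    ```python
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression

    # Synthetic stand-in: 60 "spectra" whose band intensity scales with
    # a reference density (g/mL); real work uses measured NIR spectra.
    rng = np.random.default_rng(0)
    density = rng.uniform(0.4, 0.7, 60)
    wavelengths = np.linspace(900, 1700, 200)            # nm
    spectra = (density[:, None] * np.exp(-((wavelengths - 1200) / 80) ** 2)
               + rng.normal(0, 0.005, (60, 200)))

    pls = PLSRegression(n_components=3)
    pls.fit(spectra[:40], density[:40])                  # calibration set
    pred = pls.predict(spectra[40:]).ravel()             # test set

    # Relative standard error of prediction, as reported in the study.
    rsep = 100 * np.sqrt(np.mean((pred - density[40:]) ** 2)) / density[40:].mean()
    print(f"RSEP = {rsep:.2f}%")
    ```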

  2. Predicting mortality in the intensive care unit: a comparison of the University Health Consortium expected probability of mortality and the Mortality Prediction Model III.

    PubMed

    Lipshutz, Angela K M; Feiner, John R; Grimes, Barbara; Gropper, Michael A

    2016-01-01

    Quality benchmarks are increasingly being used to compare the delivery of healthcare and may affect reimbursement in the future. The University Health Consortium (UHC) expected probability of mortality (EPM) is one such quality benchmark. Although the UHC EPM is used to compare quality across UHC members, it has not been prospectively validated in the critically ill. We aimed to define the performance characteristics of the UHC EPM in the critically ill and compare its ability to predict mortality with the Mortality Prediction Model III (MPM-III). The first 100 consecutive adult patients discharged from the hospital (including deaths) each quarter from January 1, 2009, through September 30, 2011, who had an intensive care unit (ICU) stay were included. We assessed model discrimination, calibration, and overall performance, and compared the two models using Bland-Altman plots. Eight hundred ninety-one patients were included. Both the UHC EPM and the MPM-III had excellent overall performance (Brier score 0.05 and 0.06, respectively). The area under the curve was good for both models (UHC 0.90, MPM-III 0.87, p = 0.28). Goodness of fit was statistically significant for both models (UHC p = 0.002, MPM-III p = 0.0003) but improved with logit transformation (UHC p = 0.41; MPM-III p = 0.07). The Bland-Altman plot showed good agreement at the extremes of mortality, but agreement diverged as mortality approached 50%. The UHC EPM exhibited excellent overall performance, calibration, and discrimination, and performed similarly to the MPM-III. Correlation between the two models was poor due to divergence when mortality was maximally uncertain.
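
    The two headline metrics in this comparison, overall performance (Brier score) and discrimination (area under the ROC curve), are straightforward to compute; a sketch with invented probabilities and outcomes:

    ```python
    import numpy as np
    from sklearn.metrics import brier_score_loss, roc_auc_score

    # Hypothetical predicted mortality probabilities and outcomes (1 = died).
    p = np.array([0.05, 0.10, 0.80, 0.30, 0.02, 0.65, 0.15, 0.90])
    y = np.array([0,    0,    1,    0,    0,    1,    0,    1])

    print("Brier:", brier_score_loss(y, p))  # overall performance (lower is better)
    print("AUC:  ", roc_auc_score(y, p))     # discrimination
    ```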

  3. Ambrosia artemisiifolia L. pollen simulations over the Euro-CORDEX domain: model description and emission calibration

    NASA Astrophysics Data System (ADS)

    Liu, Li; Solmon, Fabien; Giorgi, Filippo; Vautard, Robert

    2014-05-01

    Ragweed (Ambrosia artemisiifolia L.) is a highly allergenic invasive plant. Its pollen can be transported over large distances and has been recognized as a significant cause of hay fever and asthma (D'Amato et al., 2007). In the context of the ATOPICA EU program, we are studying the effects of climate, land-use, and ecological changes on ragweed pollen emissions and concentrations. For this purpose, we implemented a pollen emission/transport module in the RegCM4 regional climate model in collaboration with ATOPICA partners. The Abdus Salam International Centre for Theoretical Physics (ICTP) regional climate model RegCM4 was adapted to incorporate pollen emissions from the French global land surface model ORCHIDEE and a pollen tracer model describing pollen convective transport, turbulent mixing, and dry and wet deposition over extensive domains, using consistent assumptions regarding the transport of multiple species (Solmon et al., 2008). We performed two families of recent-past simulations on the Euro-CORDEX domain (simulations for future conditions are being considered). Hindcast simulations (2000-2011) were driven by the ERA-Interim re-analyses and designed to best simulate past airborne pollen; they were calibrated against part of the observations and verified by comparison with the remaining observations. Historical simulations (1985-2004) were driven by HadGEM CMIP5 output and designed to serve as a baseline for comparison with future airborne concentrations as obtained from climate and land-use scenarios. To reduce the uncertainties in the ragweed pollen emission, an assimilation-like method (Rouïl et al., 2009) was used to calibrate the release based on airborne pollen observations. The observations were divided into two groups, used separately for calibration and validation. A wide range of possible calibration coefficients was tested for each calibration station, bringing the bias between observations and simulations within an admissible value
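
    The calibration scan described above can be caricatured as choosing, per station, the emission scaling that minimizes the bias between observed airborne pollen and a unit-emission simulation. The sketch below is a toy version under that assumption; the coefficient grid and the series are invented.

    ```python
    import numpy as np

    def calibrate_release(obs, sim_unit, coeffs=np.logspace(-2, 2, 81)):
        """Pick the emission scaling minimizing |mean bias| between
        observed pollen counts and a unit-emission simulation,
        mimicking a station-by-station calibration scan."""
        bias = np.array([np.mean(c * sim_unit - obs) for c in coeffs])
        return coeffs[np.argmin(np.abs(bias))]

    # Hypothetical seasonal series at one station:
    obs = np.array([3.0, 10.0, 24.0, 12.0, 4.0])
    sim_unit = np.array([0.4, 1.1, 2.6, 1.5, 0.5])
    print(calibrate_release(obs, sim_unit))
    ```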

  4. Calibrating Bayesian Network Representations of Social-Behavioral Models

    SciTech Connect

    Whitney, Paul D.; Walsh, Stephen J.

    2010-04-08

    While human behavior has long been studied, recent and ongoing advances in computational modeling present opportunities for recasting research outcomes in human behavior. In this paper we describe how Bayesian networks can represent outcomes of human behavior research. We demonstrate a Bayesian network that represents political radicalization research – and show a corresponding visual representation of aspects of this research outcome. Since Bayesian networks can be quantitatively compared with external observations, the representation can also be used for empirical assessments of the research which the network summarizes. For a political radicalization model based on published research, we show this empirical comparison with data taken from the Minorities at Risk Organizational Behaviors database.
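
    The quantitative comparison mentioned above amounts to checking the probabilities implied by the network against externally observed frequencies. A deliberately tiny, hypothetical two-node example (the structure and tables are illustrative, not taken from the cited research):

    ```python
    from scipy.stats import binom

    # Toy network: Grievance -> Radicalization, with made-up tables.
    p_grievance = 0.3
    p_radical_given_grievance = {True: 0.40, False: 0.05}

    def implied_p_radical():
        """Marginal probability of radicalization implied by the network."""
        return (p_grievance * p_radical_given_grievance[True]
                + (1 - p_grievance) * p_radical_given_grievance[False])

    # Empirical assessment: score hypothetical external data (45 cases
    # out of 250 observed groups) against the network-implied marginal.
    p_model = implied_p_radical()
    print(p_model, binom.logpmf(45, 250, p_model))
    ```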

  5. HYDROLOGIC MODEL CALIBRATION AND UNCERTAINTY IN SCENARIO ANALYSIS

    EPA Science Inventory

    A systematic analysis of model performance during simulations based on observed land-cover/use change is used to quantify error associated with water-yield simulations for a series of known landscape conditions over a 24-year period with the goal of evaluatin...

  6. Remote sensing estimation of evapotranspiration for SWAT Model Calibration

    USDA-ARS?s Scientific Manuscript database

    Hydrological models are used to assess many water resource problems from water quantity to water quality issues. The accurate assessment of the water budget, primarily the influence of precipitation and evapotranspiration (ET), is a critical first-step evaluation, which is often overlooked in hydro...

  7. Calibration of state and transition models with FVS

    Treesearch

    Melinda Moeur; Don Vandendriesche

    2010-01-01

    The Interagency Mapping and Assessment Project (IMAP), a partnership between federal and state agencies, is developing mid-scale vegetation data and state and transition models (STM) for comparing the likely outcomes of alternative management policies on forested landscapes across the Pacific Northwest Region. In an STM, acres within a forested ecosystem transition...

  8. Utilizing inventory information to calibrate a landscape simulation model

    Treesearch

    Steven R. Shifley; Frank R. Thompson, III; David R. Larsen; David J. Mladenoff; Eric J. Gustafson

    2000-01-01

    LANDIS is a spatially explicit model that uses mapped landscape conditions as a starting point and projects the patterns in forest vegetation that will result from alternative harvest practices, alternative fire regimes, and wind events. LANDIS was originally developed for Lake States forests, but it is capable of handling the input, output, bookkeeping, and mapping...

  9. Self-calibrating models for dynamic monitoring and diagnosis

    NASA Technical Reports Server (NTRS)

    Kuipers, Benjamin

    1996-01-01

    A method for automatically building qualitative and semi-quantitative models of dynamic systems, and using them for monitoring and fault diagnosis, is developed and demonstrated. The qualitative approach and semi-quantitative method are applied to monitoring observation streams and to the design of non-linear control systems.

  10. Integrating spatial altimetry data into the automatic calibration of hydrological models

    NASA Astrophysics Data System (ADS)

    Getirana, Augusto C. V.

    2010-06-01

    The automatic calibration of hydrological models has traditionally been performed using gauged data. However, inaccessibility of remote areas and lack of financial support cause data to be lacking in large tropical basins, such as the Amazon basin. Advances in the acquisition, processing, and availability of spatially distributed remotely sensed data make the evaluation of computational models easier and more practical. This paper presents the pioneering integration of spatial altimetry data into the automatic calibration of a hydrological model. The study area is the Branco River basin, located in the Northern Amazon basin. An empirical stage-discharge relation is obtained for the Negro River and transposed to the Branco River, which enables the correlation of spatial altimetry data with water discharge derived from the MGB-IPH hydrological model. Six scenarios are created