Sample records for model parameters needed

  1. Parameter optimization of a hydrologic model in a snow-dominated basin using a modular Python framework

    NASA Astrophysics Data System (ADS)

    Volk, J. M.; Turner, M. A.; Huntington, J. L.; Gardner, M.; Tyler, S.; Sheneman, L.

    2016-12-01

    Many distributed models that simulate watershed hydrologic processes require a collection of multi-dimensional parameters as input, some of which need to be calibrated before the model can be applied. The Precipitation Runoff Modeling System (PRMS) is a physically based and spatially distributed hydrologic model that contains a considerable number of parameters that often need to be calibrated. Modelers can also benefit from uncertainty analysis of these parameters. To meet these needs, we developed a modular framework in Python to conduct PRMS parameter optimization, uncertainty analysis, interactive visual inspection of parameters and outputs, and other common modeling tasks. Here we present results for multi-step calibration of sensitive parameters controlling solar radiation, potential evapotranspiration, and streamflow in a PRMS model that we applied to the snow-dominated Dry Creek watershed in Idaho. We also demonstrate how our modular approach lets the user choose among a variety of parameter optimization and uncertainty methods, or easily define their own; examples include Monte Carlo random sampling, uniform sampling, and optimization methods such as the downhill simplex method and its more robust, widely used counterpart, shuffled complex evolution.
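    A minimal sketch of the modular search interface such a framework could expose, pairing uniform Monte Carlo sampling with SciPy's downhill simplex (Nelder-Mead) refinement. All names and the toy two-parameter model are hypothetical, not taken from the actual framework:

```python
import numpy as np
from scipy.optimize import minimize

def objective(params, observed, simulate):
    """Sum of squared errors between simulated and observed streamflow."""
    return np.sum((simulate(params) - observed) ** 2)

def monte_carlo_search(bounds, observed, simulate, n_samples=1000, seed=0):
    """Uniform random sampling of the parameter space (hypothetical name)."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds).T
    samples = rng.uniform(lo, hi, size=(n_samples, len(bounds)))
    costs = [objective(s, observed, simulate) for s in samples]
    return samples[int(np.argmin(costs))]

def simplex_search(x0, observed, simulate):
    """Local refinement with the downhill simplex (Nelder-Mead) method."""
    return minimize(objective, x0, args=(observed, simulate),
                    method="Nelder-Mead").x

# Toy usage: calibrate a two-parameter power-law 'model' to synthetic data.
simulate = lambda p: p[0] * np.arange(1.0, 11.0) ** p[1]
observed = simulate(np.array([2.0, 0.5]))
x0 = monte_carlo_search([(0, 5), (0, 2)], observed, simulate)
print(simplex_search(x0, observed, simulate))  # ~[2.0, 0.5]
```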

  2. Different Manhattan project: automatic statistical model generation

    NASA Astrophysics Data System (ADS)

    Yap, Chee Keng; Biermann, Henning; Hertzmann, Aaron; Li, Chen; Meyer, Jon; Pao, Hsing-Kuo; Paxia, Salvatore

    2002-03-01

    We address the automatic generation of large geometric models. This is important in visualization for several reasons. First, many applications need access to large but interesting data models. Second, we often need such data sets with particular characteristics (e.g., urban models, park and recreation landscapes). Thus we need the ability to generate models with different parameters. We propose a new approach for generating such models, based on a top-down propagation of statistical parameters. We illustrate the method in the generation of a statistical model of Manhattan, but the method is generally applicable to the generation of models of large geographical regions. Our work is related to the literature on generating complex natural scenes (smoke, forests, etc.) based on procedural descriptions. The difference in our approach stems from three characteristics: modeling with statistical parameters, integration of ground truth (actual map data), and a library-based approach for texture mapping.

  3. Parameter Estimates in Differential Equation Models for Chemical Kinetics

    ERIC Educational Resources Information Center

    Winkel, Brian

    2011-01-01

    We discuss the need for devoting time in differential equations courses to modelling and the completion of the modelling process with efforts to estimate the parameters in the models using data. We estimate the parameters present in several differential equation models of chemical reactions of order n, where n = 0, 1, 2, and apply more general…
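    The kind of exercise the abstract describes can be illustrated with a short fit of the rate constant k in d[A]/dt = -k[A]^n; the data below are synthetic and the function names are hypothetical:

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import minimize_scalar

t_obs = np.linspace(0.0, 10.0, 11)
A_obs = np.exp(-0.3 * t_obs)              # synthetic first-order data, k = 0.3

def sse(k, n=1):
    """Squared error between integrated d[A]/dt = -k[A]^n and the data."""
    sol = solve_ivp(lambda t, A: -k * A**n, (0.0, 10.0), [1.0], t_eval=t_obs)
    return np.sum((sol.y[0] - A_obs) ** 2)

k_hat = minimize_scalar(sse, bounds=(0.01, 2.0), method="bounded").x
print(k_hat)                               # ~0.3
```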

  4. An analytical-numerical approach for parameter determination of a five-parameter single-diode model of photovoltaic cells and modules

    NASA Astrophysics Data System (ADS)

    Hejri, Mohammad; Mokhtari, Hossein; Azizian, Mohammad Reza; Söder, Lennart

    2016-04-01

    Parameter extraction of the five-parameter single-diode model of solar cells and modules from experimental data is a challenging problem. These parameters are evaluated from a set of nonlinear equations that cannot be solved analytically. On the other hand, a numerical solution of such equations needs a suitable initial guess to converge. This paper presents a new set of approximate analytical solutions for the parameters of a five-parameter single-diode model of photovoltaic (PV) cells and modules. The proposed solutions provide a good initial point that guarantees convergence of the numerical analysis. The proposed technique needs only a few data points from the PV current-voltage characteristics, namely the open-circuit voltage Voc, the short-circuit current Isc, and the maximum power point current and voltage (Im, Vm), making it a fast and low-cost parameter determination technique. The accuracy of the presented theoretical I-V curves is verified by experimental data.
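    For reference, the implicit five-parameter single-diode equation can be solved numerically once an initial parameter set is available; the sketch below uses illustrative placeholder values rather than the paper's analytical initial guesses:

```python
import numpy as np
from scipy.optimize import brentq

def cell_current(V, Iph, I0, Rs, Rsh, a):
    """Solve I = Iph - I0*(exp((V + I*Rs)/a) - 1) - (V + I*Rs)/Rsh for I."""
    f = lambda I: Iph - I0 * np.expm1((V + I * Rs) / a) - (V + I * Rs) / Rsh - I
    return brentq(f, -Iph, 2 * Iph)   # f is monotone decreasing in I

# Illustrative values: Iph, I0 [A]; Rs, Rsh [ohm]; a = n*Ns*kT/q [V].
params = dict(Iph=5.0, I0=1e-9, Rs=0.2, Rsh=300.0, a=1.6)
for V in (0.0, 10.0, 20.0):
    print(V, cell_current(V, **params))
```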

  5. NASA Workshop on Distributed Parameter Modeling and Control of Flexible Aerospace Systems

    NASA Technical Reports Server (NTRS)

    Marks, Virginia B. (Compiler); Keckler, Claude R. (Compiler)

    1994-01-01

    Although significant advances have been made in modeling and controlling flexible systems, there remains a need for improvements in model accuracy and in control performance. The finite element models of flexible systems are unduly complex and almost intractable for optimum parameter estimation and refinement using experimental data. Distributed parameter or continuum modeling offers some advantages and some challenges in both modeling and control. Continuum models often result in a significantly reduced number of model parameters, thereby enabling optimum parameter estimation. The dynamic equations of motion of continuum models provide the advantage of allowing the embedding of the control system dynamics, thus forming a complete set of system dynamics. There is also increased insight provided by the continuum model approach.

  6. Summary of the DREAM8 Parameter Estimation Challenge: Toward Parameter Identification for Whole-Cell Models.

    PubMed

    Karr, Jonathan R; Williams, Alex H; Zucker, Jeremy D; Raue, Andreas; Steiert, Bernhard; Timmer, Jens; Kreutz, Clemens; Wilkinson, Simon; Allgood, Brandon A; Bot, Brian M; Hoff, Bruce R; Kellen, Michael R; Covert, Markus W; Stolovitzky, Gustavo A; Meyer, Pablo

    2015-05-01

    Whole-cell models that explicitly represent all cellular components at the molecular level have the potential to predict phenotype from genotype. However, even for simple bacteria, whole-cell models will contain thousands of parameters, many of which are poorly characterized or unknown. New algorithms are needed to estimate these parameters and enable researchers to build increasingly comprehensive models. We organized the Dialogue for Reverse Engineering Assessments and Methods (DREAM) 8 Whole-Cell Parameter Estimation Challenge to develop new parameter estimation algorithms for whole-cell models. We asked participants to identify a subset of parameters of a whole-cell model given the model's structure and in silico "experimental" data. Here we describe the challenge, the best performing methods, and new insights into the identifiability of whole-cell models. We also describe several valuable lessons we learned toward improving future challenges. Going forward, we believe that collaborative efforts supported by inexpensive cloud computing have the potential to solve whole-cell model parameter estimation.

  7. Is there a 'universal' dynamic zero-parameter hydrological model? Evaluation of a dynamic Budyko model in US and India

    NASA Astrophysics Data System (ADS)

    Patnaik, S.; Biswal, B.; Sharma, V. C.

    2017-12-01

    River flow varies greatly in space and time, and the single biggest challenge for hydrologists and ecologists around the world is the fact that most rivers are either ungauged or poorly gauged. Although it is relatively easy to predict the long-term average flow of a river using the 'universal' zero-parameter Budyko model, lack of data hinders short-term flow prediction at ungauged locations using traditional hydrological models, as they require observed flow data for calibration. Flow prediction in ungauged basins thus requires a dynamic zero-parameter hydrological model. One way to achieve this is to regionalize a dynamic hydrological model's parameters; however, a zero-parameter dynamic model obtained by regionalization is not 'universal'. An alternative attempt was made recently to develop a zero-parameter dynamic model by defining an instantaneous dryness index as a function of antecedent rainfall and solar energy inputs, using a decay function together with the original Budyko function. The model was tested first in 63 US catchments and later in 50 Indian catchments; the median Nash-Sutcliffe efficiency (NSE) was found to be close to 0.4 in both cases. Although improvements need to be incorporated before the model can be used for reliable prediction, the main aim of this study was rather to understand hydrological processes. The overall results seem to suggest that the dynamic zero-parameter Budyko model is 'universal': natural catchments around the world are strikingly similar to each other in the way they respond to hydrologic inputs, and we thus need to focus more on utilizing catchment similarities in hydrological modelling instead of over-parameterizing our models.
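    As a point of reference, the original zero-parameter Budyko curve and the Nash-Sutcliffe efficiency used to score the dynamic model look as follows; the paper's instantaneous dryness index with its antecedent-rainfall decay function is not reproduced here:

```python
import numpy as np

def budyko_evaporative_fraction(dryness):
    """E/P as a function of the dryness index PET/P (Budyko, 1974 form)."""
    return np.sqrt(dryness * np.tanh(1.0 / dryness) * -np.expm1(-dryness))

def nse(simulated, observed):
    """Nash-Sutcliffe efficiency: 1 is perfect, 0 matches the mean of obs."""
    observed = np.asarray(observed, float)
    resid = np.asarray(simulated, float) - observed
    return 1.0 - np.sum(resid**2) / np.sum((observed - observed.mean())**2)

print(budyko_evaporative_fraction(np.array([0.5, 1.0, 2.0])))
```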

  8. Summary of the DREAM8 Parameter Estimation Challenge: Toward Parameter Identification for Whole-Cell Models

    PubMed Central

    Karr, Jonathan R.; Williams, Alex H.; Zucker, Jeremy D.; Raue, Andreas; Steiert, Bernhard; Timmer, Jens; Kreutz, Clemens; Wilkinson, Simon; Allgood, Brandon A.; Bot, Brian M.; Hoff, Bruce R.; Kellen, Michael R.; Covert, Markus W.; Stolovitzky, Gustavo A.; Meyer, Pablo

    2015-01-01

    Whole-cell models that explicitly represent all cellular components at the molecular level have the potential to predict phenotype from genotype. However, even for simple bacteria, whole-cell models will contain thousands of parameters, many of which are poorly characterized or unknown. New algorithms are needed to estimate these parameters and enable researchers to build increasingly comprehensive models. We organized the Dialogue for Reverse Engineering Assessments and Methods (DREAM) 8 Whole-Cell Parameter Estimation Challenge to develop new parameter estimation algorithms for whole-cell models. We asked participants to identify a subset of parameters of a whole-cell model given the model’s structure and in silico “experimental” data. Here we describe the challenge, the best performing methods, and new insights into the identifiability of whole-cell models. We also describe several valuable lessons we learned toward improving future challenges. Going forward, we believe that collaborative efforts supported by inexpensive cloud computing have the potential to solve whole-cell model parameter estimation. PMID:26020786

  9. Quantifying Groundwater Model Uncertainty

    NASA Astrophysics Data System (ADS)

    Hill, M. C.; Poeter, E.; Foglia, L.

    2007-12-01

    Groundwater models are characterized by the (a) processes simulated, (b) boundary conditions, (c) initial conditions, (d) method of solving the equation, (e) parameterization, and (f) parameter values. Models are related to the system of concern using data, some of which form the basis of observations used most directly, through objective functions, to estimate parameter values. Here we consider situations in which parameter values are determined by minimizing an objective function. Other methods of model development are not considered because their ad hoc nature generally prohibits clear quantification of uncertainty. Quantifying prediction uncertainty ideally includes contributions from (a) to (f). The parameter values of (f) tend to be continuous with respect to both the simulated equivalents of the observations and the predictions, while many aspects of (a) through (e) are discrete. This fundamental difference means that there are options for evaluating the uncertainty related to parameter values that generally do not exist for other aspects of a model. While the methods available for (a) to (e) can be used for the parameter values (f), the inferential methods uniquely available for (f) generally are less computationally intensive and often can be used to considerable advantage. However, inferential approaches require calculation of sensitivities. Whether the numerical accuracy and stability of the model solution required for accurate sensitivities is more broadly important to other model uses is an issue that needs to be addressed. Alternative global methods can require 100 or even 1,000 times the number of runs needed by inferential methods, though methods of reducing the number of needed runs are being developed and tested. Here we present three approaches for quantifying model uncertainty and investigate their strengths and weaknesses. (1) Represent more aspects as parameters so that the computationally efficient methods can be broadly applied. This approach is attainable through universal model analysis software such as UCODE-2005, PEST, and joint use of these programs, which allow many aspects of a model to be defined as parameters. (2) Use highly parameterized models to quantify aspects of (e). While promising, this approach implicitly includes parameterizations that may be considered unreasonable if investigated explicitly, so that resulting measures of uncertainty may be too large. (3) Use a combination of inferential and global methods that can be facilitated using the new software MMA (Multi-Model Analysis), which is constructed using the JUPITER API. Here we consider issues related to the model discrimination criteria calculated by MMA.
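    A sketch of the inferential, sensitivity-based uncertainty measure the abstract contrasts with global methods: a linear approximation of the parameter covariance from the Jacobian of simulated equivalents. The numbers are illustrative:

```python
import numpy as np

def linear_parameter_covariance(jacobian, weights, residuals, n_params):
    """Cov(p) ~ s^2 * (J^T W J)^-1 from a converged least-squares fit."""
    J, W = np.asarray(jacobian), np.diag(weights)
    dof = len(residuals) - n_params
    s2 = residuals @ W @ residuals / dof          # error variance estimate
    return s2 * np.linalg.inv(J.T @ W @ J)

# Toy example: 5 observations, 2 parameters.
J = np.array([[1., 0.], [1., 1.], [1., 2.], [1., 3.], [1., 4.]])
w = np.ones(5)
r = np.array([0.1, -0.05, 0.02, -0.1, 0.03])
cov = linear_parameter_covariance(J, w, r, 2)
print(np.sqrt(np.diag(cov)))  # linear standard errors on the two parameters
```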

  10. On the applicability of surrogate-based Markov chain Monte Carlo-Bayesian inversion to the Community Land Model: Case studies at flux tower sites

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, Maoyi; Ray, Jaideep; Hou, Zhangshuan

    2016-07-04

    The Community Land Model (CLM) has been widely used in climate and Earth system modeling. Accurate estimation of model parameters is needed for reliable model simulations and predictions under current and future conditions, respectively. In our previous work, a subset of hydrological parameters has been identified to have significant impact on surface energy fluxes at selected flux tower sites based on parameter screening and sensitivity analysis, which indicate that the parameters could potentially be estimated from surface flux observations at the towers. To date, such estimates do not exist. In this paper, we assess the feasibility of applying a Bayesian model calibration technique to estimate CLM parameters at selected flux tower sites under various site conditions. The parameters are estimated as a joint probability density function (PDF) that provides estimates of uncertainty of the parameters being inverted, conditional on climatologically-average latent heat fluxes derived from observations. We find that the simulated mean latent heat fluxes from CLM using the calibrated parameters are generally improved at all sites when compared to those obtained with CLM simulations using default parameter sets. Further, our calibration method also results in credibility bounds around the simulated mean fluxes which bracket the measured data. The modes (or maximum a posteriori values) and 95% credibility intervals of the site-specific posterior PDFs are tabulated as suggested parameter values for each site. Analysis of relationships between the posterior PDFs and site conditions suggests that the parameter values are likely correlated with the plant functional type, which needs to be confirmed in future studies by extending the approach to more sites.
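    The essence of surrogate-based MCMC can be sketched in a few lines: a cheap surrogate stands in for the expensive land-model run inside a Metropolis sampler. The quadratic surrogate and Gaussian likelihood below are illustrative stand-ins, not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(1)
surrogate = lambda p: 100.0 + 30.0 * p - 8.0 * p**2   # latent-heat-flux proxy
observed, sigma = 120.0, 5.0

def log_post(p):
    if not 0.0 <= p <= 3.0:                            # uniform prior bounds
        return -np.inf
    return -0.5 * ((surrogate(p) - observed) / sigma) ** 2

chain, p = [], 1.0
for _ in range(20000):
    q = p + rng.normal(0.0, 0.2)                       # random-walk proposal
    if np.log(rng.uniform()) < log_post(q) - log_post(p):
        p = q
    chain.append(p)
post = np.array(chain[5000:])                          # discard burn-in
print(post.mean(), np.percentile(post, [2.5, 97.5]))   # posterior summary
```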

  11. On the applicability of surrogate-based MCMC-Bayesian inversion to the Community Land Model: Case studies at Flux tower sites

    DOE PAGES

    Huang, Maoyi; Ray, Jaideep; Hou, Zhangshuan; ...

    2016-06-01

    The Community Land Model (CLM) has been widely used in climate and Earth system modeling. Accurate estimation of model parameters is needed for reliable model simulations and predictions under current and future conditions, respectively. In our previous work, a subset of hydrological parameters has been identified to have significant impact on surface energy fluxes at selected flux tower sites based on parameter screening and sensitivity analysis, which indicate that the parameters could potentially be estimated from surface flux observations at the towers. To date, such estimates do not exist. In this paper, we assess the feasibility of applying a Bayesian model calibration technique to estimate CLM parameters at selected flux tower sites under various site conditions. The parameters are estimated as a joint probability density function (PDF) that provides estimates of uncertainty of the parameters being inverted, conditional on climatologically average latent heat fluxes derived from observations. We find that the simulated mean latent heat fluxes from CLM using the calibrated parameters are generally improved at all sites when compared to those obtained with CLM simulations using default parameter sets. Further, our calibration method also results in credibility bounds around the simulated mean fluxes which bracket the measured data. The modes (or maximum a posteriori values) and 95% credibility intervals of the site-specific posterior PDFs are tabulated as suggested parameter values for each site. As a result, analysis of relationships between the posterior PDFs and site conditions suggests that the parameter values are likely correlated with the plant functional type, which needs to be confirmed in future studies by extending the approach to more sites.

  12. On the applicability of surrogate-based MCMC-Bayesian inversion to the Community Land Model: Case studies at Flux tower sites

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, Maoyi; Ray, Jaideep; Hou, Zhangshuan

    The Community Land Model (CLM) has been widely used in climate and Earth system modeling. Accurate estimation of model parameters is needed for reliable model simulations and predictions under current and future conditions, respectively. In our previous work, a subset of hydrological parameters has been identified to have significant impact on surface energy fluxes at selected flux tower sites based on parameter screening and sensitivity analysis, which indicate that the parameters could potentially be estimated from surface flux observations at the towers. To date, such estimates do not exist. In this paper, we assess the feasibility of applying a Bayesian model calibration technique to estimate CLM parameters at selected flux tower sites under various site conditions. The parameters are estimated as a joint probability density function (PDF) that provides estimates of uncertainty of the parameters being inverted, conditional on climatologically average latent heat fluxes derived from observations. We find that the simulated mean latent heat fluxes from CLM using the calibrated parameters are generally improved at all sites when compared to those obtained with CLM simulations using default parameter sets. Further, our calibration method also results in credibility bounds around the simulated mean fluxes which bracket the measured data. The modes (or maximum a posteriori values) and 95% credibility intervals of the site-specific posterior PDFs are tabulated as suggested parameter values for each site. As a result, analysis of relationships between the posterior PDFs and site conditions suggests that the parameter values are likely correlated with the plant functional type, which needs to be confirmed in future studies by extending the approach to more sites.

  13. On the applicability of surrogate-based Markov chain Monte Carlo-Bayesian inversion to the Community Land Model: Case studies at flux tower sites

    NASA Astrophysics Data System (ADS)

    Huang, Maoyi; Ray, Jaideep; Hou, Zhangshuan; Ren, Huiying; Liu, Ying; Swiler, Laura

    2016-07-01

    The Community Land Model (CLM) has been widely used in climate and Earth system modeling. Accurate estimation of model parameters is needed for reliable model simulations and predictions under current and future conditions, respectively. In our previous work, a subset of hydrological parameters has been identified to have significant impact on surface energy fluxes at selected flux tower sites based on parameter screening and sensitivity analysis, which indicate that the parameters could potentially be estimated from surface flux observations at the towers. To date, such estimates do not exist. In this paper, we assess the feasibility of applying a Bayesian model calibration technique to estimate CLM parameters at selected flux tower sites under various site conditions. The parameters are estimated as a joint probability density function (PDF) that provides estimates of uncertainty of the parameters being inverted, conditional on climatologically average latent heat fluxes derived from observations. We find that the simulated mean latent heat fluxes from CLM using the calibrated parameters are generally improved at all sites when compared to those obtained with CLM simulations using default parameter sets. Further, our calibration method also results in credibility bounds around the simulated mean fluxes which bracket the measured data. The modes (or maximum a posteriori values) and 95% credibility intervals of the site-specific posterior PDFs are tabulated as suggested parameter values for each site. Analysis of relationships between the posterior PDFs and site conditions suggests that the parameter values are likely correlated with the plant functional type, which needs to be confirmed in future studies by extending the approach to more sites.

  14. How does higher frequency monitoring data affect the calibration of a process-based water quality model?

    NASA Astrophysics Data System (ADS)

    Jackson-Blake, Leah; Helliwell, Rachel

    2015-04-01

    Process-based catchment water quality models are increasingly used as tools to inform land management. However, for such models to be reliable they need to be well calibrated and shown to reproduce key catchment processes. Calibration can be challenging for process-based models, which tend to be complex and highly parameterised. Calibrating a large number of parameters generally requires a large amount of monitoring data, spanning all hydrochemical conditions. However, regulatory agencies and research organisations generally only sample at a fortnightly or monthly frequency, even in well-studied catchments, often missing peak flow events. The primary aim of this study was therefore to investigate how the quality and uncertainty of model simulations produced by a process-based, semi-distributed catchment model, INCA-P (the INtegrated CAtchment model of Phosphorus dynamics), were improved by calibration to higher frequency water chemistry data. Two model calibrations were carried out for a small rural Scottish catchment: one using 18 months of daily total dissolved phosphorus (TDP) concentration data, another using a fortnightly dataset derived from the daily data. To aid comparability, calibrations were carried out automatically using the Markov Chain Monte Carlo - DiffeRential Evolution Adaptive Metropolis (MCMC-DREAM) algorithm. Calibration to daily data resulted in improved simulation of peak TDP concentrations and improved model performance statistics. Parameter-related uncertainty in simulated TDP was large when fortnightly data were used for calibration, with a 95% credible interval of 26 μg/l. This uncertainty is comparable in size to the difference between Water Framework Directive (WFD) chemical status classes, and would therefore make it difficult to use this calibration to predict shifts in WFD status. The 95% credible interval narrowed markedly with the higher frequency monitoring data, to 6 μg/l. The number of parameters that could be reliably auto-calibrated was lower for the fortnightly data, with a physically unrealistic TDP simulation being produced when too many parameters were allowed to vary during model calibration. Parameters should not therefore be varied spatially for models such as INCA-P unless there is solid evidence that this is appropriate, or there is a real need to do so for the model to fulfil its purpose. This study highlights the potential pitfalls of using low frequency time series of observed water quality to calibrate complex process-based models. For reliable model calibrations to be produced, monitoring programmes need to be designed which capture system variability, in particular nutrient dynamics during high flow events. In addition, there is a need for simpler models, so that all model parameters can be included in auto-calibration and uncertainty analysis, and to reduce the data needs during calibration.
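    The credible-interval comparison in the abstract (26 μg/l versus 6 μg/l) reduces to a percentile computation over posterior predictive samples; the samples below are synthetic stand-ins:

```python
import numpy as np

def credible_interval_width(samples, level=0.95):
    lo, hi = np.percentile(samples, [(1 - level) / 2 * 100,
                                     (1 + level) / 2 * 100])
    return hi - lo

rng = np.random.default_rng(0)
fortnightly_posterior = rng.normal(20.0, 6.6, 10000)   # illustrative samples
daily_posterior = rng.normal(20.0, 1.5, 10000)
print(credible_interval_width(fortnightly_posterior))  # ~26
print(credible_interval_width(daily_posterior))        # ~6
```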

  15. A Lagrangian Subgridscale Model for Particle Transport Improvement and Application in the Adriatic Sea Using the Navy Coastal Ocean Model

    DTIC Science & Technology

    2006-12-01

    …based on input statistical parameters, such as the turbulent velocity fluctuation and correlation time scale, without the need of an underlying… [The remainder of the indexed excerpt is garbled: it defines a mismatch measure, Eq. (8), that is zero when the model and real parameters coincide, introduces a correlation coefficient between the model and corrected velocities, and includes caption text from a figure comparing parameters estimated from the real, model, and corrected velocities.]

  16. Characterizing uncertainty and variability in physiologically based pharmacokinetic models: state of the science and needs for research and implementation.

    PubMed

    Barton, Hugh A; Chiu, Weihsueh A; Setzer, R Woodrow; Andersen, Melvin E; Bailer, A John; Bois, Frédéric Y; Dewoskin, Robert S; Hays, Sean; Johanson, Gunnar; Jones, Nancy; Loizou, George; Macphail, Robert C; Portier, Christopher J; Spendiff, Martin; Tan, Yu-Mei

    2007-10-01

    Physiologically based pharmacokinetic (PBPK) models are used in mode-of-action based risk and safety assessments to estimate internal dosimetry in animals and humans. When used in risk assessment, these models can provide a basis for extrapolating between species, doses, and exposure routes or for justifying nondefault values for uncertainty factors. Characterization of uncertainty and variability is increasingly recognized as important for risk assessment; this represents a continuing challenge for both PBPK modelers and users. Current practices show significant progress in specifying deterministic biological models and nondeterministic (often statistical) models, estimating parameters using diverse data sets from multiple sources, using them to make predictions, and characterizing uncertainty and variability of model parameters and predictions. The International Workshop on Uncertainty and Variability in PBPK Models, held 31 Oct-2 Nov 2006, identified the state of the science, needed changes in practice and implementation, and research priorities. For the short term, these include (1) multidisciplinary teams to integrate deterministic and nondeterministic/statistical models; (2) broader use of sensitivity analyses, including for structural and global (rather than local) parameter changes; and (3) enhanced transparency and reproducibility through improved documentation of model structure(s), parameter values, sensitivity and other analyses, and supporting, discrepant, or excluded data. Longer-term needs include (1) theoretical and practical methodological improvements for nondeterministic/statistical modeling; (2) better methods for evaluating alternative model structures; (3) peer-reviewed databases of parameters and covariates, and their distributions; (4) expanded coverage of PBPK models across chemicals with different properties; and (5) training and reference materials, such as case studies, bibliographies/glossaries, model repositories, and enhanced software. The multidisciplinary dialogue initiated by this Workshop will foster the collaboration, research, data collection, and training necessary to make characterizing uncertainty and variability a standard practice in PBPK modeling and risk assessment.

  17. A comparison between a new model and current models for estimating trunk segment inertial parameters.

    PubMed

    Wicke, Jason; Dumas, Genevieve A; Costigan, Patrick A

    2009-01-05

    Modeling of the body segments to estimate segment inertial parameters is required in the kinetic analysis of human motion. A new geometric model for the trunk has been developed that uses various cross-sectional shapes to estimate segment volume and adopts a non-uniform density function that is gender-specific. The goal of this study was to test the accuracy of the new model for estimating the trunk's inertial parameters by comparing it to the models currently used in biomechanical research. Trunk inertial parameters estimated from dual X-ray absorptiometry (DXA) were used as the standard. Twenty-five female and 24 male college-aged participants were recruited for the study. Comparisons of the new model to the accepted models were accomplished by determining the error between the models' trunk inertial estimates and those from DXA. Results showed that the new model was more accurate across all inertial estimates than the other models. The new model had errors within 6.0% for both genders, whereas the other models had higher average errors ranging from 10% to over 50% and were much more inconsistent between the genders. In addition, there was little consistency in the level of accuracy for the other models when estimating the different inertial parameters. These results suggest that the new model provides more accurate and consistent trunk inertial estimates than the other models for both female and male college-aged individuals. However, similar studies need to be performed using other populations, such as elderly individuals or individuals of a distinct morphology (e.g., obese). In addition, the effect of using different models on the outcome of kinetic parameters, such as joint moments and forces, needs to be assessed.

  18. Reconstructing the hidden states in time course data of stochastic models.

    PubMed

    Zimmer, Christoph

    2015-11-01

    Parameter estimation is central for analyzing models in systems biology. The relevance of stochastic modeling in the field is increasing, and so is the need for tailored parameter estimation techniques. Challenges for parameter estimation are partial observability, measurement noise, and the computational complexity arising from the dimension of the parameter space. This article extends the 'multiple shooting for stochastic systems' method, developed for inference in intrinsically stochastic systems. The treatment of extrinsic noise and the estimation of the unobserved states are improved by taking into account the correlation between unobserved and observed species. This article demonstrates the power of the method on different scenarios of a Lotka-Volterra model, including cases in which the prey population dies out or explodes, and on a calcium oscillation system. Besides showing how the new extension improves the accuracy of the parameter estimates, this article analyzes the accuracy of the state estimates. In contrast to previous approaches, the new approach is well able to estimate states and parameters for all the scenarios. As it does not need stochastic simulations, it is of the same order of speed as conventional least squares parameter estimation methods with respect to computational time. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.

  19. Additional Research Needs to Support the GENII Biosphere Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Napier, Bruce A.; Snyder, Sandra F.; Arimescu, Carmen

    In the course of evaluating the current parameter needs for the GENII Version 2 code (Snyder et al. 2013), areas of possible improvement for both the data and the underlying models were identified. As the data review was implemented, PNNL staff identified areas where the models can be improved, both to accommodate the locally significant pathways identified and to incorporate newer models. These areas comprise general data needs for the existing models and improved formulations for the pathway models.

  20. Bayesian uncertainty analysis for complex systems biology models: emulation, global parameter searches and evaluation of gene functions.

    PubMed

    Vernon, Ian; Liu, Junli; Goldstein, Michael; Rowe, James; Topping, Jen; Lindsey, Keith

    2018-01-02

    Many mathematical models have now been employed across every area of systems biology. These models increasingly involve large numbers of unknown parameters, have complex structure which can result in substantial evaluation time relative to the needs of the analysis, and need to be compared to observed data of various forms. The correct analysis of such models usually requires a global parameter search, over a high-dimensional parameter space, that incorporates and respects the most important sources of uncertainty. This can be an extremely difficult task, but it is essential for any meaningful inference or prediction to be made about any biological system. It hence represents a fundamental challenge for the whole of systems biology. Bayesian statistical methodology for the uncertainty analysis of complex models is introduced, which is designed to address the high-dimensional global parameter search problem. Bayesian emulators that mimic the systems biology model but which are extremely fast to evaluate are embedded within an iterative history match: an efficient method to search high-dimensional spaces within a more formal statistical setting, while incorporating major sources of uncertainty. The approach is demonstrated via application to a model of hormonal crosstalk in Arabidopsis root development, which has 32 rate parameters, for which we identify the sets of rate parameter values that lead to acceptable matches between model output and observed trend data. The multiple insights into the model's structure that this analysis provides are discussed. The methodology is applied to a second related model, and the biological consequences of the resulting comparison, including the evaluation of gene functions, are described. Bayesian uncertainty analysis for complex models using both emulators and history matching is shown to be a powerful technique that can greatly aid the study of a large class of systems biology models. It both provides insight into model behaviour and identifies the sets of rate parameters of interest.
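    History matching hinges on an implausibility measure: a parameter setting is ruled out when the emulator's prediction is too far from the observation relative to all quantified uncertainties. A minimal sketch with illustrative numbers:

```python
import numpy as np

def implausibility(z, emulator_mean, emulator_var, obs_var, model_disc_var):
    """I(x) = |z - E[f(x)]| / sqrt(total variance); I > 3 => implausible."""
    return np.abs(z - emulator_mean) / np.sqrt(
        emulator_var + obs_var + model_disc_var)

z = 1.8                                  # observed trend value
x_grid = np.linspace(0.0, 2.0, 5)        # candidate rate-parameter values
em_mean = 1.0 + 0.8 * x_grid             # fast emulator of the slow model
em_var = np.full_like(x_grid, 0.01)
I = implausibility(z, em_mean, em_var, obs_var=0.04, model_disc_var=0.01)
print(x_grid[I <= 3.0])                  # the non-implausible region to refine
```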

  1. The Supernovae Analysis Application (SNAP)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bayless, Amanda J.; Fryer, Christopher Lee; Wollaeger, Ryan Thomas

    The SuperNovae Analysis aPplication (SNAP) is a new tool for the analysis of SN observations and validation of SN models. SNAP consists of a publicly available relational database with observational light curve, theoretical light curve, and correlation table sets with statistical comparison software, and a web interface available to the community. The theoretical models are intended to span a gridded range of parameter space. The goal is to have users upload new SN models or new SN observations and run the comparison software to determine correlations via the website. There are problems looming on the horizon that SNAP is beginning to solve. For example, large surveys will discover thousands of SNe annually. Frequently, the parameter space of a new SN event is unbounded. SNAP will be a resource to constrain parameters and determine if an event needs follow-up without spending resources to create new light curve models from scratch. Second, there is no rapidly available, systematic way to determine degeneracies between parameters, or even what physics is needed to model a realistic SN. The correlations made within the SNAP system are beginning to solve these problems.

  2. Comparing Different Approaches of Bias Correction for Ability Estimation in IRT Models. Research Report. ETS RR-08-13

    ERIC Educational Resources Information Center

    Lee, Yi-Hsuan; Zhang, Jinming

    2008-01-01

    The method of maximum likelihood is typically applied to item response theory (IRT) models when the ability parameter is estimated while conditioning on the true item parameters. In practice, the item parameters are unknown and need to be estimated first from a calibration sample. Lewis (1985) and Zhang and Lu (2007) proposed the expected response…
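    Conditioning on known item parameters, maximum-likelihood ability estimation in a 2PL IRT model is a one-dimensional optimization; the item values below are illustrative:

```python
import numpy as np
from scipy.optimize import minimize_scalar

a = np.array([1.2, 0.8, 1.5, 1.0])      # item discriminations (assumed known)
b = np.array([-0.5, 0.0, 0.7, 1.2])     # item difficulties (assumed known)
u = np.array([1, 1, 0, 0])              # observed 0/1 responses

def neg_log_lik(theta):
    p = 1.0 / (1.0 + np.exp(-a * (theta - b)))   # 2PL response probabilities
    return -np.sum(u * np.log(p) + (1 - u) * np.log(1 - p))

theta_hat = minimize_scalar(neg_log_lik, bounds=(-4, 4), method="bounded").x
print(theta_hat)
```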

  3. The Supernovae Analysis Application (SNAP)

    DOE PAGES

    Bayless, Amanda J.; Fryer, Christopher Lee; Wollaeger, Ryan Thomas; ...

    2017-09-06

    The SuperNovae Analysis aPplication (SNAP) is a new tool for the analysis of SN observations and validation of SN models. SNAP consists of a publicly available relational database with observational light curve, theoretical light curve, and correlation table sets with statistical comparison software, and a web interface available to the community. The theoretical models are intended to span a gridded range of parameter space. The goal is to have users upload new SN models or new SN observations and run the comparison software to determine correlations via the website. There are problems looming on the horizon that SNAP is beginning to solve. For example, large surveys will discover thousands of SNe annually. Frequently, the parameter space of a new SN event is unbounded. SNAP will be a resource to constrain parameters and determine if an event needs follow-up without spending resources to create new light curve models from scratch. Second, there is no rapidly available, systematic way to determine degeneracies between parameters, or even what physics is needed to model a realistic SN. The correlations made within the SNAP system are beginning to solve these problems.

  4. The Supernovae Analysis Application (SNAP)

    NASA Astrophysics Data System (ADS)

    Bayless, Amanda J.; Fryer, Chris L.; Wollaeger, Ryan; Wiggins, Brandon; Even, Wesley; de la Rosa, Janie; Roming, Peter W. A.; Frey, Lucy; Young, Patrick A.; Thorpe, Rob; Powell, Luke; Landers, Rachel; Persson, Heather D.; Hay, Rebecca

    2017-09-01

    The SuperNovae Analysis aPplication (SNAP) is a new tool for the analysis of SN observations and validation of SN models. SNAP consists of a publicly available relational database with observational light curve, theoretical light curve, and correlation table sets with statistical comparison software, and a web interface available to the community. The theoretical models are intended to span a gridded range of parameter space. The goal is to have users upload new SN models or new SN observations and run the comparison software to determine correlations via the website. There are problems looming on the horizon that SNAP is beginning to solve. For example, large surveys will discover thousands of SNe annually. Frequently, the parameter space of a new SN event is unbounded. SNAP will be a resource to constrain parameters and determine if an event needs follow-up without spending resources to create new light curve models from scratch. Second, there is no rapidly available, systematic way to determine degeneracies between parameters, or even what physics is needed to model a realistic SN. The correlations made within the SNAP system are beginning to solve these problems.

  5. Using Bayesian regression to test hypotheses about relationships between parameters and covariates in cognitive models.

    PubMed

    Boehm, Udo; Steingroever, Helen; Wagenmakers, Eric-Jan

    2018-06-01

    Important tools in the advancement of cognitive science are quantitative models that represent different cognitive variables in terms of model parameters. To evaluate such models, their parameters are typically tested for relationships with behavioral and physiological variables that are thought to reflect specific cognitive processes. However, many models do not come equipped with the statistical framework needed to relate model parameters to covariates. Instead, researchers often revert to classifying participants into groups depending on their values on the covariates, and subsequently comparing the estimated model parameters between these groups. Here we develop a comprehensive solution to the covariate problem in the form of a Bayesian regression framework. Our framework can be easily added to existing cognitive models and allows researchers to quantify the evidential support for relationships between covariates and model parameters using Bayes factors. Moreover, we present a simulation study that demonstrates the superiority of the Bayesian regression framework to the conventional classification-based approach.
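    A rough flavor of quantifying evidence for a parameter-covariate relation is a BIC-approximated Bayes factor comparing regressions with and without the covariate; this is a generic stand-in, not the paper's hierarchical framework:

```python
import numpy as np

def bic_linear(y, X):
    """BIC of an ordinary least-squares regression."""
    beta, ssr = np.linalg.lstsq(X, y, rcond=None)[:2]
    n, k = X.shape
    return n * np.log(ssr[0] / n) + k * np.log(n)

rng = np.random.default_rng(3)
covariate = rng.normal(size=80)                      # e.g. a physiological measure
param = 0.4 * covariate + rng.normal(0.0, 1.0, 80)   # fitted model parameters
X0 = np.ones((80, 1))                                # intercept-only model
X1 = np.column_stack([np.ones(80), covariate])       # adds the covariate
bf10 = np.exp(0.5 * (bic_linear(param, X0) - bic_linear(param, X1)))
print(bf10)   # >1 favors a parameter-covariate relation
```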

  6. Important Physiological Parameters and Physical Activity Data for Evaluating Exposure Modeling Performance: a Synthesis

    EPA Science Inventory

    The purpose of this report is to develop a database of physiological parameters needed for understanding and evaluating performance of the APEX and SHEDS exposure/intake dose rate model used by the Environmental Protection Agency (EPA) as part of its regulatory activities. The A...

  7. MMA, A Computer Code for Multi-Model Analysis

    USGS Publications Warehouse

    Poeter, Eileen P.; Hill, Mary C.

    2007-01-01

    This report documents the Multi-Model Analysis (MMA) computer code. MMA can be used to evaluate results from alternative models of a single system using the same set of observations for all models. As long as the observations, the observation weighting, and the system being represented are the same, the models can differ in nearly any way imaginable. For example, they may include different processes, different simulation software, different temporal definitions (for example, steady-state and transient models could be considered), and so on. The multiple models need to be calibrated by nonlinear regression. Calibration of the individual models needs to be completed before application of MMA. MMA can be used to rank models and calculate posterior model probabilities. These can be used to (1) determine the relative importance of the characteristics embodied in the alternative models, (2) calculate model-averaged parameter estimates and predictions, and (3) quantify the uncertainty of parameter estimates and predictions in a way that integrates the variations represented by the alternative models. There is a lack of consensus on what model analysis methods are best, so MMA provides four default methods. Two are based on Kullback-Leibler information, and use the AIC (Akaike Information Criterion) or AICc (second-order-bias-corrected AIC) model discrimination criteria. The other two default methods are the BIC (Bayesian Information Criterion) and the KIC (Kashyap Information Criterion) model discrimination criteria. Use of the KIC criterion is equivalent to using the maximum-likelihood Bayesian model averaging (MLBMA) method. AIC, AICc, and BIC can be derived from Frequentist or Bayesian arguments. The default methods based on Kullback-Leibler information have a number of theoretical advantages, including that they tend to favor more complicated models as more data become available than do the other methods, which makes sense in many situations. Many applications of MMA will be well served by the default methods provided. To use the default methods, the only required input for MMA is a list of directories where the files for the alternate models are located. Evaluation and development of model-analysis methods are active areas of research. To facilitate exploration and innovation, MMA allows the user broad discretion to define alternatives to the default procedures. For example, MMA allows the user to (a) rank models based on model criteria defined using a wide range of provided and user-defined statistics in addition to the default AIC, AICc, BIC, and KIC criteria, (b) create their own criteria using model measures available from the code, and (c) define how each model criterion is used to calculate related posterior model probabilities. The default model criteria rate models based on model fit to observations, the number of observations and estimated parameters, and, for KIC, the Fisher information matrix. In addition, MMA allows the analysis to include an evaluation of estimated parameter values. This is accomplished by allowing the user to define unreasonable estimated parameter values or relative estimated parameter values. An example of the latter is that it may be expected that one parameter value will be less than another, as might be the case if two parameters represented the hydraulic conductivity of distinct materials such as fine and coarse sand. Models with parameter values that violate the user-defined conditions are excluded from further consideration by MMA.
Ground-water models are used as examples in this report, but MMA can be used to evaluate any set of models for which the required files have been produced. MMA needs to read files from a separate directory for each alternative model considered. The needed files are produced when using the Sensitivity-Analysis or Parameter-Estimation mode of UCODE_2005, or, possibly, the equivalent capability of another program. MMA is constructed using
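    The default criteria MMA reports, and the posterior model probabilities derived from them, can be sketched as follows (standard exp(-Δ/2) weighting; KIC requires the Fisher information matrix and is omitted):

```python
import numpy as np

def criteria(ssr, n, k):
    """AIC, AICc, BIC for a least-squares model: n obs, k parameters."""
    aic = n * np.log(ssr / n) + 2 * k
    aicc = aic + 2 * k * (k + 1) / (n - k - 1)
    bic = n * np.log(ssr / n) + k * np.log(n)
    return aic, aicc, bic

def posterior_probabilities(crit_values):
    """Normalized exp(-delta/2) weights over a set of criterion values."""
    delta = np.asarray(crit_values) - np.min(crit_values)
    w = np.exp(-0.5 * delta)
    return w / w.sum()

# Three alternative calibrated models of one system (illustrative numbers):
ssr, k, n = [4.2, 3.9, 3.8], [3, 5, 8], 60
aicc = [criteria(s, n, ki)[1] for s, ki in zip(ssr, k)]
print(posterior_probabilities(aicc))  # weights for model averaging
```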

  8. Volume effects of late term normal tissue toxicity in prostate cancer radiotherapy

    NASA Astrophysics Data System (ADS)

    Bonta, Dacian Viorel

    Modeling of volume effects for treatment toxicity is paramount for optimization of radiation therapy. This thesis proposes a new model for calculating volume effects in gastro-intestinal and genito-urinary normal tissue complication probability (NTCP) following radiation therapy for prostate carcinoma. The radiobiological and the pathological basis for this model and its relationship to other models are detailed. A review of the radiobiological experiments and published clinical data identified salient features and specific properties a biologically adequate model has to conform to. The new model was fit to a set of actual clinical data. In order to verify the goodness of fit, two established NTCP models and a non-NTCP measure for complication risk were fitted to the same clinical data. The method of fit for the model parameters was maximum likelihood estimation. Within the framework of the maximum likelihood approach I estimated the parameter uncertainties for each complication prediction model. The quality-of-fit was determined using the Akaike Information Criterion. Based on the model that provided the best fit, I identified the volume effects for both types of toxicities. Computer-based bootstrap resampling of the original dataset was used to estimate the bias and variance for the fitted parameter values. Computer simulation was also used to estimate the population size that generates a specific uncertainty level (3%) in the value of predicted complication probability. The same method was used to estimate the size of the patient population needed for accurate choice of the model underlying the NTCP. The results indicate that, depending on the number of parameters of a specific NTCP model, 100 patients (for two-parameter models) or 500 patients (for three-parameter models) are needed for an accurate parameter fit. Correlation of complication occurrence in patients was also investigated. The results suggest that complication outcomes are correlated within a patient, although the correlation coefficient is rather small.
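    Bootstrap estimation of bias and variance for fitted parameters, as in the thesis' resampling study, can be sketched as follows; the two-parameter logistic dose-response fit is a generic stand-in for the actual NTCP model:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(42)
dose = rng.uniform(40.0, 80.0, 120)                  # per-patient dose metric
p_true = 1.0 / (1.0 + np.exp(-(dose - 65.0) / 4.0))  # 'true' D50=65, slope=4
events = rng.uniform(size=dose.size) < p_true        # complication outcomes

def fit(d, y):
    """Maximum-likelihood fit of a two-parameter logistic response curve."""
    def nll(th):
        p = 1.0 / (1.0 + np.exp(-(d - th[0]) / th[1]))
        return -np.sum(np.log(np.where(y, p, 1.0 - p) + 1e-12))
    return minimize(nll, x0=[60.0, 5.0], method="Nelder-Mead").x

theta_hat = fit(dose, events)
boot = np.array([fit(dose[idx], events[idx])
                 for idx in rng.integers(0, dose.size, (200, dose.size))])
print("bias:", boot.mean(axis=0) - theta_hat)
print("sd:  ", boot.std(axis=0))
```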

  9. Parameters Estimation of Geographically Weighted Ordinal Logistic Regression (GWOLR) Model

    NASA Astrophysics Data System (ADS)

    Zuhdi, Shaifudin; Retno Sari Saputro, Dewi; Widyaningsih, Purnami

    2017-06-01

    A regression model represents the relationship between independent variables and a dependent variable. In logistic regression the dependent variable is categorical, and the model is used to calculate odds; when the categories of the dependent variable are ordered, the appropriate model is ordinal logistic regression. The GWOLR model is an ordinal logistic regression model that is influenced by the geographical location of the observation sites. Parameter estimation is needed to determine population values from a sample. The purpose of this research is to estimate the parameters of the GWOLR model using R software. Parameter estimation uses data on the number of dengue fever patients in Semarang City, with 144 villages in Semarang City as the observation units. The results give a local GWOLR model for each village and the probability of each category of the number of dengue fever patients.

  10. Online Estimation of Model Parameters of Lithium-Ion Battery Using the Cubature Kalman Filter

    NASA Astrophysics Data System (ADS)

    Tian, Yong; Yan, Rusheng; Tian, Jindong; Zhou, Shijie; Hu, Chao

    2017-11-01

    Online estimation of state variables, including state-of-charge (SOC), state-of-energy (SOE) and state-of-health (SOH), is crucial for the operational safety of lithium-ion batteries. In order to improve the estimation accuracy of these state variables, a precise battery model needs to be established. As the lithium-ion battery is a nonlinear time-varying system, the model parameters vary significantly with many factors, such as ambient temperature, discharge rate and depth of discharge. This paper presents an online estimation method of model parameters for lithium-ion batteries based on the cubature Kalman filter. The commonly used first-order resistor-capacitor equivalent circuit model is selected as the battery model, based on which the model parameters are estimated online. Experimental results show that the presented method can accurately track the parameter variation in different scenarios.
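    The first-order resistor-capacitor model referenced in the abstract can be written as a plain discrete-time simulation; the cubature Kalman filter that estimates its parameters online is not reproduced in this sketch, and the numbers are illustrative:

```python
import numpy as np

def simulate_terminal_voltage(current, dt, ocv, R0, R1, C1):
    """V_t[k] = OCV - I[k]*R0 - V1[k], with V1 the RC polarization voltage."""
    alpha = np.exp(-dt / (R1 * C1))
    v1, out = 0.0, []
    for i in current:
        out.append(ocv - i * R0 - v1)
        v1 = alpha * v1 + R1 * (1.0 - alpha) * i   # RC state update
    return np.array(out)

# Illustrative pulse-discharge test (2 A pulse for 60 s, then rest):
dt = 1.0
current = np.r_[np.full(60, 2.0), np.zeros(60)]    # amps, discharge positive
v = simulate_terminal_voltage(current, dt, ocv=3.7,
                              R0=0.05, R1=0.02, C1=2000.0)
print(v[:3], v[-3:])
```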

  11. Leaf photosynthesis and respiration of three bioenergy crops in relation to temperature and leaf nitrogen: how conserved are biochemical model parameters among crop species?

    PubMed Central

    Archontoulis, S. V.; Yin, X.; Vos, J.; Danalatos, N. G.; Struik, P. C.

    2012-01-01

    Given the need for parallel increases in food and energy production from crops in the context of global change, crop simulation models and data sets to feed these models with photosynthesis and respiration parameters are increasingly important. This study provides information on photosynthesis and respiration for three energy crops (sunflower, kenaf, and cynara), reviews relevant information for five other crops (wheat, barley, cotton, tobacco, and grape), and assesses how conserved photosynthesis parameters are among crops. Using large data sets and optimization techniques, we parameterized the C3 leaf photosynthesis model of Farquhar, von Caemmerer, and Berry (FvCB) and an empirical night respiration model for the tested energy crops, accounting for effects of temperature and leaf nitrogen. Instead of the common approach of using information on the net photosynthesis response to CO2 at the stomatal cavity (An–Ci), the model was parameterized by analysing the photosynthesis response to incident light intensity (An–Iinc). Convincing evidence is provided that the maximum Rubisco carboxylation rate or the maximum electron transport rate was very similar whether derived from An–Ci or from An–Iinc data sets. Parameters characterizing Rubisco limitation, electron transport limitation, the degree to which light inhibits leaf respiration, night respiration, and the minimum leaf nitrogen required for photosynthesis were then determined. Model predictions were validated against independent data sets. Only a few FvCB parameters were conserved among crop species, so species-specific FvCB model parameters are needed for crop modelling. Therefore, information from readily available but underexplored An–Iinc data should be re-analysed, thereby expanding the potential of combining classical photosynthetic data and the biochemical model. PMID:22021569
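    The core of the FvCB model that the study parameterizes: net assimilation is the minimum of the Rubisco-limited and electron-transport-limited rates minus respiration. The kinetic constants below are typical 25 °C textbook values, not the paper's fitted ones:

```python
import numpy as np

def fvcb_net_assimilation(Ci, Vcmax, J, Rd,
                          gamma_star=42.75, Kc=404.9, Ko=278.4, O=210.0):
    """Ci, Kc, gamma_star in umol/mol; Ko, O in mmol/mol; rates in umol/m2/s."""
    Ac = Vcmax * (Ci - gamma_star) / (Ci + Kc * (1.0 + O / Ko))   # Rubisco-limited
    Aj = J * (Ci - gamma_star) / (4.0 * Ci + 8.0 * gamma_star)    # e-transport-limited
    return np.minimum(Ac, Aj) - Rd

print(fvcb_net_assimilation(Ci=np.array([100.0, 250.0, 600.0]),
                            Vcmax=90.0, J=160.0, Rd=1.5))
```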

  12. [Simulation model for estimating the cancer care infrastructure required by the public health system].

    PubMed

    Gomes Junior, Saint Clair Santos; Almeida, Rosimary Terezinha

    2009-02-01

    To develop a simulation model using public data to estimate the cancer care infrastructure required by the public health system in the state of São Paulo, Brazil. Public data from the Unified Health System database regarding cancer surgery, chemotherapy, and radiation therapy, from January 2002 to January 2004, were used to estimate the number of cancer cases in the state. The percentages recorded for each therapy in the Hospital Cancer Registry of Brazil were combined with the data collected from the database to estimate the need for services. Mixture models were used to identify subgroups of cancer cases with regard to the length of time that chemotherapy and radiation therapy were required. A simulation model was used to estimate the infrastructure required, taking these parameters into account. The model indicated the need for surgery in 52.5% of the cases, radiation therapy in 42.7%, and chemotherapy in 48.5%. The mixture models identified two subgroups for radiation therapy and four subgroups for chemotherapy with regard to mean usage time for each. These parameters yielded the following infrastructure estimates: 147 operating rooms, 2 653 operating beds, 297 chemotherapy chairs, and 102 radiation therapy devices. These estimates suggest the need for a 1.2-fold increase in the number of chemotherapy services and a 2.4-fold increase in the number of radiation therapy services when compared with the parameters currently used by the public health system. A simulation model, such as the one used in the present study, permits better distribution of health care resources because it is based on specific, local needs.

  13. Assessment and Reduction of Model Parametric Uncertainties: A Case Study with A Distributed Hydrological Model

    NASA Astrophysics Data System (ADS)

    Gan, Y.; Liang, X. Z.; Duan, Q.; Xu, J.; Zhao, P.; Hong, Y.

    2017-12-01

    The uncertainties associated with the parameters of a hydrological model need to be quantified and reduced for the model to be useful for operational hydrological forecasting and decision support. An uncertainty quantification framework is presented to facilitate practical assessment and reduction of model parametric uncertainties. A case study, using the distributed hydrological model CREST for daily streamflow simulation during the period 2008-2010 over ten watersheds, was used to demonstrate the performance of this new framework. Model behaviors across watersheds were analyzed by a two-stage stepwise sensitivity analysis procedure, using the LH-OAT method for screening out insensitive parameters, followed by MARS-based Sobol' sensitivity indices for quantifying each parameter's contribution to the response variance due to its first-order and higher-order effects. Pareto optimal sets of the influential parameters were then found by the adaptive surrogate-based multi-objective optimization procedure, using a MARS model for approximating the parameter-response relationship and the SCE-UA algorithm for searching the optimal parameter sets of the adaptively updated surrogate model. The final optimal parameter sets were validated against the daily streamflow simulation of the same watersheds during the period 2011-2012. The stepwise sensitivity analysis procedure efficiently reduced the number of parameters that need to be calibrated from twelve to seven, which helps to limit the dimensionality of the calibration problem and serves to enhance the efficiency of parameter calibration. The adaptive MARS-based multi-objective calibration exercise provided satisfactory solutions to the reproduction of the observed streamflow for all watersheds. The final optimal solutions showed significant improvement when compared to the default solutions, with about 65-90% reduction in 1-NSE and 60-95% reduction in |RB|. The validation exercise indicated a large improvement in model performance, with about 40-85% reduction in 1-NSE and 35-90% reduction in |RB|. Overall, this uncertainty quantification framework is robust, effective and efficient for parametric uncertainty analysis; its results provide useful information that helps to understand model behaviors and improve model simulations.
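    Variance-based screening in the spirit of the two-stage procedure can be sketched with SALib (assuming it is installed) as a stand-in for the paper's LH-OAT screening and MARS-based Sobol' estimates; the parameter names and the cheap response function are hypothetical:

```python
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

problem = {
    "num_vars": 3,
    "names": ["infiltration", "roughness", "storage"],  # hypothetical names
    "bounds": [[0.0, 1.0]] * 3,
}
X = saltelli.sample(problem, 1024)
# Stand-in response: a cheap function playing the role of 1-NSE of model runs.
Y = X[:, 0] ** 2 + 0.5 * X[:, 0] * X[:, 1] + 0.01 * X[:, 2]
Si = sobol.analyze(problem, Y)
print(Si["S1"], Si["ST"])   # keep parameters with large total-order indices
```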

  14. Computing the modal mass from the state space model in combined experimental-operational modal analysis

    NASA Astrophysics Data System (ADS)

    Cara, Javier

    2016-05-01

    Modal parameters comprise natural frequencies, damping ratios, modal vectors and modal masses. In a theoretical framework, these parameters are the basis for the solution of vibration problems using the theory of modal superposition. In practice, they can be computed from input-output vibration data: the usual procedure is to estimate a mathematical model from the data and then to compute the modal parameters from the estimated model. The most popular models for input-output data are based on the frequency response function, but in recent years the state space model in the time domain has become popular among researchers and practitioners of modal analysis with experimental data. In this work, the equations to compute the modal parameters from the state space model when input and output data are available (as in combined experimental-operational modal analysis) are derived in detail using invariants of the state space model: the equations needed to compute natural frequencies, damping ratios and modal vectors are well known in the operational modal analysis framework, but the equation needed to compute the modal masses has not received much attention in the technical literature. These equations are applied to both a numerical simulation and an experimental study in the last part of the work.
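
    For the well-known part of the derivation, frequencies and damping ratios follow directly from the eigenvalues of the discrete-time state matrix; a minimal numpy sketch for a toy single-DOF system (not the paper's test cases) is given below. The modal-mass equation derived in the paper needs the full set of state-space invariants and is not reproduced here.

```python
import numpy as np
from scipy.linalg import expm

dt = 0.01                                  # sampling interval [s]
wn, zeta = 2 * np.pi * 5.0, 0.02           # toy 1-DOF system: 5 Hz, 2% damping
Ac = np.array([[0.0, 1.0], [-wn**2, -2.0 * zeta * wn]])
A = expm(Ac * dt)                          # exact discrete-time state matrix

mu = np.linalg.eigvals(A)                  # discrete-time poles
lam = np.log(mu) / dt                      # continuous-time poles
freqs = np.abs(lam) / (2.0 * np.pi)        # natural frequencies [Hz]
damping = -lam.real / np.abs(lam)          # damping ratios
print(np.round(freqs, 3), np.round(damping, 4))   # ~ [5, 5] and [0.02, 0.02]
```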

  15. Cognitive models of risky choice: parameter stability and predictive accuracy of prospect theory.

    PubMed

    Glöckner, Andreas; Pachur, Thorsten

    2012-04-01

    In the behavioral sciences, a popular approach to describe and predict behavior is cognitive modeling with adjustable parameters (i.e., which can be fitted to data). Modeling with adjustable parameters allows, among other things, measuring differences between people. At the same time, parameter estimation also bears the risk of overfitting. Are individual differences as measured by model parameters stable enough to improve the ability to predict behavior as compared to modeling without adjustable parameters? We examined this issue in cumulative prospect theory (CPT), arguably the most widely used framework to model decisions under risk. Specifically, we examined (a) the temporal stability of CPT's parameters; and (b) how well different implementations of CPT, varying in the number of adjustable parameters, predict individual choice relative to models with no adjustable parameters (such as CPT with fixed parameters, expected value theory, and various heuristics). We presented participants with risky choice problems and fitted CPT to each individual's choices in two separate sessions (which were 1 week apart). All parameters were correlated across time, in particular when using a simple implementation of CPT. CPT allowing for individual variability in parameter values predicted individual choice better than CPT with fixed parameters, expected value theory, and the heuristics. CPT's parameters thus seem to pick up stable individual differences that need to be considered when predicting risky choice. Copyright © 2011 Elsevier B.V. All rights reserved.
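
    As a hedged illustration of the model class (not the paper's fitted estimates), the sketch below scores a simple gains-only gamble with a power value function and the Tversky-Kahneman (1992) weighting function, using textbook parameter values.

```python
import numpy as np

def value(x, alpha=0.88, beta=0.88, lam=2.25):
    """Power value function with loss aversion (illustrative parameters)."""
    return x**alpha if x >= 0 else -lam * (-x)**beta

def weight(p, gamma=0.61):
    """Tversky-Kahneman (1992) probability weighting function."""
    return p**gamma / (p**gamma + (1.0 - p)**gamma) ** (1.0 / gamma)

def cpt_value(x_hi, p_hi, x_lo):
    """CPT value of a gains-only two-outcome gamble, x_hi > x_lo >= 0."""
    return weight(p_hi) * value(x_hi) + (1.0 - weight(p_hi)) * value(x_lo)

risky = cpt_value(100.0, 0.5, 0.0)   # gamble: 100 with p = 0.5, else 0
safe = value(40.0)                   # sure payment of 40
print(f"risky {risky:.1f} vs safe {safe:.1f} ->",
      "choose risky" if risky > safe else "choose safe")
```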

  16. A Probabilistic Approach to Model Update

    NASA Technical Reports Server (NTRS)

    Horta, Lucas G.; Reaves, Mercedes C.; Voracek, David F.

    2001-01-01

    Finite element models are often developed for load validation, structural certification, response predictions, and to study alternate design concepts. On rare occasions, models developed with a nominal set of parameters agree with experimental data without the need to update parameter values. Today, model updating is generally heuristic and often performed by a skilled analyst with in-depth understanding of the model assumptions. Parameter uncertainties play a key role in understanding the model update problem, and therefore probabilistic analysis tools, developed for reliability and risk analysis, may be used to incorporate uncertainty in the analysis. In this work, probability analysis (PA) tools are used to aid the parameter update task using experimental data and some basic knowledge of potential error sources. Discussed here is the first application of PA tools to update parameters of a finite element model for a composite wing structure. Static deflection data at six locations are used to update five parameters. It is shown that while prediction of individual response values may not be matched identically, the system response is significantly improved with moderate changes in parameter values.

  17. Measures of GCM Performance as Functions of Model Parameters Affecting Clouds and Radiation

    NASA Astrophysics Data System (ADS)

    Jackson, C.; Mu, Q.; Sen, M.; Stoffa, P.

    2002-05-01

    This abstract is one of three related presentations at this meeting dealing with several issues surrounding optimal parameter and uncertainty estimation of model predictions of climate. Uncertainty in model predictions of climate depends in part on the uncertainty produced by model approximations or parameterizations of unresolved physics. Evaluating these uncertainties is computationally expensive because one needs to evaluate how arbitrary choices for any given combination of model parameters affect model performance. Because the computational effort grows exponentially with the number of parameters being investigated, it is important to choose parameters carefully. Evaluating whether a parameter is worth investigating depends on two considerations: 1) do reasonable choices of parameter values produce a large range in model response relative to observational uncertainty? and 2) does the model response depend non-linearly on various combinations of model parameters? We have decided to narrow our attention to parameters that affect clouds and radiation, as it is likely that these parameters will dominate uncertainties in model predictions of future climate. We present preliminary results of ~20 to 30 AMIP II-style climate model integrations using NCAR's CCM3.10 that show model performance as functions of individual parameters controlling 1) the critical relative humidity for cloud formation (RHMIN) and 2) the boundary layer critical Richardson number (RICR). We also explore various definitions of model performance that include some or all observational data sources (surface air temperature and pressure, meridional and zonal winds, clouds, long- and short-wave cloud forcings, etc.) and evaluate in a few select cases whether the model's response depends non-linearly on the parameter values we have selected.

  18. Inverse models: A necessary next step in ground-water modeling

    USGS Publications Warehouse

    Poeter, E.P.; Hill, M.C.

    1997-01-01

    Inverse models using, for example, nonlinear least-squares regression, provide capabilities that help modelers take full advantage of the insight available from ground-water models. However, lack of information about the requirements and benefits of inverse models is an obstacle to their widespread use. This paper presents a simple ground-water flow problem to illustrate the requirements and benefits of the nonlinear least-squares regression method of inverse modeling and discusses how these attributes apply to field problems. The benefits of inverse modeling include: (1) expedited determination of best fit parameter values; (2) quantification of the (a) quality of calibration, (b) data shortcomings and needs, and (c) confidence limits on parameter estimates and predictions; and (3) identification of issues that are easily overlooked during nonautomated calibration.

  19. Matching experimental and three dimensional numerical models for structural vibration problems with uncertainties

    NASA Astrophysics Data System (ADS)

    Langer, P.; Sepahvand, K.; Guist, C.; Bär, J.; Peplow, A.; Marburg, S.

    2018-03-01

    A simulation model that examines the dynamic behavior of real structures needs to address the impact of uncertainty in both geometry and material parameters. This article investigates three-dimensional finite element models for structural dynamics problems with respect to both model and parameter uncertainties. The parameter uncertainties are determined via laboratory measurements on several beam-like samples. The parameters are then considered as random variables to the finite element model for exploring the uncertainty effects on the quality of the model outputs, i.e. natural frequencies. The accuracy of the output predictions from the model is compared with the experimental results. To this end, non-contact experimental modal analysis is conducted to identify the natural frequencies of the samples. The results show good agreement with the experimental data. Furthermore, it is demonstrated that geometrical uncertainties have more influence on the natural frequencies than material parameters, even though the material uncertainties are about two times larger than the geometrical ones. This gives valuable insight into the parameter ranges required when a modeling process involves uncertainty, and thus into how the finite element model can be improved.

  20. An Initial Non-Equilibrium Porous-Media Model for CFD Simulation of Stirling Regenerators

    NASA Technical Reports Server (NTRS)

    Tew, Roy; Simon, Terry; Gedeon, David; Ibrahim, Mounir; Rong, Wei

    2006-01-01

    The objective of this paper is to define empirical parameters (or closure models) for an initial thermal non-equilibrium porous-media model for use in Computational Fluid Dynamics (CFD) codes for simulation of Stirling regenerators. The two CFD codes currently being used at Glenn Research Center (GRC) for Stirling engine modeling are Fluent and CFD-ACE. The porous-media models available in each of these codes are equilibrium models, which assume that the solid matrix and the fluid are in thermal equilibrium at each spatial location within the porous medium. This is believed to be a poor assumption for the oscillating-flow environment within Stirling regenerators; Stirling 1-D regenerator models, used in Stirling design, use non-equilibrium regenerator models and suggest regenerator matrix and gas average temperatures can differ by several degrees at a given axial location and time during the cycle. A NASA regenerator research grant has been providing experimental and computational results to support definition of various empirical coefficients needed in defining a non-equilibrium, macroscopic, porous-media model (i.e., to define "closure" relations). The grant effort is being led by Cleveland State University, with subcontractor assistance from the University of Minnesota, Gedeon Associates, and Sunpower, Inc. Friction-factor and heat-transfer correlations based on data taken with the NASA/Sunpower oscillating-flow test rig also provide experimentally based correlations that are useful in defining parameters for the porous-media model; these correlations are documented in Gedeon Associates' Sage Stirling-Code Manuals. These sources of experimentally based information were used to define the following terms and parameters needed in the non-equilibrium porous-media model: hydrodynamic dispersion, permeability, inertial coefficient, fluid effective thermal conductivity (including thermal dispersion and an estimate of tortuosity effects), and fluid-solid heat transfer coefficient. Solid effective thermal conductivity (including the effect of tortuosity) was also estimated. Determination of the porous-media model parameters was based on planned use in a CFD model of Infinia's Stirling Technology Demonstration Convertor (TDC), which uses a random-fiber regenerator matrix. The non-equilibrium porous-media model presented is considered to be an initial, or "draft," model for possible incorporation in commercial CFD codes, with the expectation that the empirical parameters will likely need to be updated once resulting Stirling CFD model regenerator and engine results have been analyzed. The emphasis of the paper is on the use of available data to define empirical parameters (and closure models) needed in a thermal non-equilibrium porous-media model for Stirling regenerator simulation. Such a model has not yet been implemented by the authors or their associates. However, it is anticipated that a thermal non-equilibrium model such as that presented here, when incorporated in the CFD codes, will improve our ability to accurately model Stirling regenerators with CFD relative to current thermal-equilibrium porous-media models.

  1. Soil Erosion as a stochastic process

    NASA Astrophysics Data System (ADS)

    Casper, Markus C.

    2015-04-01

    The main tools to provide estimations concerning risk and amount of erosion are different types of soil erosion models: on the one hand, there are empirically based model concepts; on the other hand, there are more physically based or process-based models. However, both types of models have substantial weak points. All empirical model concepts are only capable of providing rough estimates over larger temporal and spatial scales, and they do not account for many driving factors that are in the scope of scenario-related analysis. The physically based models, in turn, contain important empirical parts, and hence the demand for universality and transferability is not met. As a common feature, we find that all models rely on parameters and input variables that are, to a certain extent, spatially and temporally averaged. A central question is whether the apparent heterogeneity of soil properties or the random nature of driving forces needs to be better considered in our modelling concepts. Traditionally, researchers have attempted to remove spatial and temporal variability through homogenization. However, homogenization has been achieved through physical manipulation of the system, or by statistical averaging procedures. The price for obtaining these homogenized (average) model concepts of soils and soil-related processes has often been a failure to recognize the profound importance of heterogeneity in many of the properties and processes that we study. Soil infiltrability and erosion resistance (also called "critical shear stress" or "critical stream power") are especially important empirical factors of physically based erosion models. The erosion resistance is theoretically a substrate-specific parameter, but in reality the threshold where soil erosion begins is determined experimentally. The soil infiltrability is often calculated with empirical relationships (e.g. based on grain size distribution). Consequently, to better fit reality, this value needs to be corrected experimentally. To overcome this disadvantage of our current models, soil erosion models are needed that are able to use stochastic variables and parameter distributions directly. There are only a few minor approaches in this direction. The most advanced is the model "STOSEM" proposed by Sidorchuk in 2005. In this model, only a small part of the soil erosion processes is described, namely aggregate detachment and aggregate transport by flowing water. The concept is highly simplified; for example, many parameters are temporally invariant. Nevertheless, the main problem is that our existing measurements and experiments are not geared to provide stochastic parameters (e.g. as probability density functions); in the best case they deliver a statistical validation of the mean values. Again, we get effective parameters, spatially and temporally averaged. There is an urgent need for laboratory and field experiments on overland flow structure, raindrop effects and erosion rate, which deliver information on the spatial and temporal structure of soil and surface properties and processes.

  2. Optimal experimental design for improving the estimation of growth parameters of Lactobacillus viridescens from data under non-isothermal conditions.

    PubMed

    Longhi, Daniel Angelo; Martins, Wiaslan Figueiredo; da Silva, Nathália Buss; Carciofi, Bruno Augusto Mattar; de Aragão, Gláucia Maria Falcão; Laurindo, João Borges

    2017-01-02

    In predictive microbiology, model parameters have been estimated using the sequential two-step modeling (TSM) approach, in which primary models are fitted to the microbial growth data, and then secondary models are fitted to the primary model parameters to represent their dependence on the environmental variables (e.g., temperature). The Optimal Experimental Design (OED) approach allows reducing the experimental workload and costs and improving model identifiability, because primary and secondary models are fitted simultaneously from non-isothermal data. Lactobacillus viridescens was selected for this study because it is a lactic acid bacterium of great interest for meat product preservation. The objectives of this study were to estimate the growth parameters of L. viridescens in culture medium from the TSM and OED approaches and to evaluate both the number of experimental data and the time needed in each approach, as well as the confidence intervals of the model parameters. Experimental data for estimating the model parameters with the TSM approach were obtained at six temperatures (total experimental time of 3540 h and 196 experimental data points of microbial growth). Data for the OED approach were obtained from four optimal non-isothermal profiles (total experimental time of 588 h and 60 experimental data points of microbial growth), two profiles with increasing temperatures (IT) and two with decreasing temperatures (DT). The Baranyi and Roberts primary model and the square root secondary model were used to describe the microbial growth, in which the parameters b and Tmin (±95% confidence interval) were estimated from the experimental data. The parameters obtained from the TSM approach were b = 0.0290 (±0.0020) [1/(h^0.5 °C)] and Tmin = -1.33 (±1.26) [°C], with R^2 = 0.986 and RMSE = 0.581, and the parameters obtained with the OED approach were b = 0.0316 (±0.0013) [1/(h^0.5 °C)] and Tmin = -0.24 (±0.55) [°C], with R^2 = 0.990 and RMSE = 0.436. The parameters obtained from the OED approach presented smaller confidence intervals and better statistical indices than those from the TSM approach. In addition, fewer experimental data and less time were needed to estimate the model parameters with OED than with TSM. Furthermore, the OED model parameters were validated against non-isothermal experimental data with great accuracy. In this way, the OED approach is feasible and is a very useful tool to improve the prediction of microbial growth under non-isothermal conditions. Copyright © 2016 Elsevier B.V. All rights reserved.
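
    A small sketch of the secondary-model fit, assuming fabricated growth-rate data: the square-root model sqrt(mu_max) = b(T - Tmin) is fitted with scipy's curve_fit, mirroring the b and Tmin estimates reported above.

```python
import numpy as np
from scipy.optimize import curve_fit

def sqrt_model(T, b, Tmin):
    """Square-root secondary model: sqrt(mu_max) = b * (T - Tmin)."""
    return b * (T - Tmin)

rng = np.random.default_rng(1)
T = np.array([4.0, 8.0, 12.0, 16.0, 20.0, 30.0])             # degC
sqrt_mu = 0.03 * (T + 1.0) + rng.normal(0.0, 0.005, T.size)  # fabricated data

(b, Tmin), cov = curve_fit(sqrt_model, T, sqrt_mu, p0=[0.02, 0.0])
se = np.sqrt(np.diag(cov))
print(f"b = {b:.4f} (+/- {1.96 * se[0]:.4f}) 1/(h^0.5 degC)")
print(f"Tmin = {Tmin:.2f} (+/- {1.96 * se[1]:.2f}) degC")
```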

  3. Wiener-Hammerstein system identification - an evolutionary approach

    NASA Astrophysics Data System (ADS)

    Naitali, Abdessamad; Giri, Fouad

    2016-01-01

    The problem of identifying parametric Wiener-Hammerstein (WH) systems is addressed within the evolutionary optimisation context. Specifically, a hybrid culture identification method is developed that involves model structure adaptation using genetic recombination and model parameter learning using particle swarm optimisation. The method enjoys three interesting features: (1) the risk of premature convergence of model parameter estimates to local optima is significantly reduced, due to the constantly maintained diversity of model candidates; (2) no prior knowledge is needed except for upper bounds on the system structure indices; (3) the method is fully autonomous, as no interaction with the user is needed during the optimum search process. The performance of the proposed method is illustrated and compared to alternative methods using a well-established WH benchmark.

  4. Smsynth: AN Imagery Synthesis System for Soil Moisture Retrieval

    NASA Astrophysics Data System (ADS)

    Cao, Y.; Xu, L.; Peng, J.

    2018-04-01

    Soil moisture (SM) is an important variable in various research areas, such as weather and climate forecasting, agriculture, drought and flood monitoring and prediction, and human health. An ongoing challenge in estimating SM via synthetic aperture radar (SAR) is the development of SM retrieval methods; in particular, empirical models need large numbers of SM and soil-roughness measurements as training samples, and such measurements are very difficult to acquire. It is therefore difficult to develop empirical models using real SAR imagery, and methods to synthesize SAR imagery are needed. To tackle this issue, a SAR imagery synthesis system based on SM, named SMSynth, is presented, which can simulate radar signals that are as realistic as possible compared with real SAR imagery. In SMSynth, SAR backscatter coefficients for each soil type are simulated via the Oh model under a Bayesian framework, where the spatial correlation is modeled by a Markov random field (MRF) model. The backscattering coefficients, simulated from the designed soil and sensor parameters, enter the Bayesian framework through the data likelihood; the soil and sensor parameters are set as realistically as possible with respect to conditions on the ground and within the validity range of the Oh model. In this way, a complete and coherent Bayesian probabilistic framework is established. Experimental results show that SMSynth is capable of generating realistic SAR images that meet the need for large numbers of training samples for empirical models.

  5. Estimating the Expected Value of Sample Information Using the Probabilistic Sensitivity Analysis Sample

    PubMed Central

    Oakley, Jeremy E.; Brennan, Alan; Breeze, Penny

    2015-01-01

    Health economic decision-analytic models are used to estimate the expected net benefits of competing decision options. The true values of the input parameters of such models are rarely known with certainty, and it is often useful to quantify the value to the decision maker of reducing uncertainty through collecting new data. In the context of a particular decision problem, the value of a proposed research design can be quantified by its expected value of sample information (EVSI). EVSI is commonly estimated via a 2-level Monte Carlo procedure in which plausible data sets are generated in an outer loop, and then, conditional on these, the parameters of the decision model are updated via Bayes rule and sampled in an inner loop. At each iteration of the inner loop, the decision model is evaluated. This is computationally demanding and may be difficult if the posterior distribution of the model parameters conditional on sampled data is hard to sample from. We describe a fast nonparametric regression-based method for estimating per-patient EVSI that requires only the probabilistic sensitivity analysis sample (i.e., the set of samples drawn from the joint distribution of the parameters and the corresponding net benefits). The method avoids the need to sample from the posterior distributions of the parameters and avoids the need to rerun the model. The only requirement is that sample data sets can be generated. The method is applicable with a model of any complexity and with any specification of model parameter distribution. We demonstrate in a case study the superior efficiency of the regression method over the 2-level Monte Carlo method. PMID:25810269
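
    The regression idea can be reduced to a few lines: reuse the PSA draws, simulate one plausible summary statistic per draw, regress net benefit on it, and compare expected maxima. The toy two-option decision model below is an assumption for illustration, not the paper's case study.

```python
import numpy as np

rng = np.random.default_rng(42)
S = 10_000
theta = rng.normal(0.3, 0.2, S)           # PSA sample of the uncertain effect
wtp = 20_000.0                            # willingness to pay per unit effect
nb_treat = wtp * theta - 5_000.0          # net benefit: treat
nb_none = np.zeros(S)                     # net benefit: do nothing

# Proposed study: n patients, sample mean xbar ~ N(theta, sigma^2 / n)
n, sigma = 50, 1.0
xbar = rng.normal(theta, sigma / np.sqrt(n))

# Regress each option's net benefit on the summary statistic (cubic basis)
X = np.vander(xbar, 4)
beta = np.linalg.lstsq(X, nb_treat, rcond=None)[0]
g_treat = X @ beta                        # ~ E[NB_treat | proposed data]
g_none = nb_none                          # trivially zero

evsi = np.mean(np.maximum(g_treat, g_none)) - max(nb_treat.mean(), nb_none.mean())
print(f"EVSI of the proposed study: {evsi:.0f} (monetary units)")
```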

  6. Comparison of Radiation Pressure Perturbations on Rocket Bodies and Debris at Geosynchronous Earth Orbit

    DTIC Science & Technology

    2014-09-01

    has highlighted the need for physically consistent radiation pressure and Bidirectional Reflectance Distribution Function (BRDF) models. This paper ... seeks to evaluate the impact of BRDF-consistent radiation pressure models compared to changes in the other BRDF parameters. The differences in ... orbital position arising because of changes in the shape, attitude, angular rates, BRDF parameters, and radiation pressure model are plotted as a

  7. Quantifying the Uncertainties and Multi-parameter Trade-offs in Joint Inversion of Receiver Functions and Surface Wave Velocity and Ellipticity

    NASA Astrophysics Data System (ADS)

    Gao, C.; Lekic, V.

    2016-12-01

    When constraining the structure of the Earth's continental lithosphere, multiple seismic observables are often combined due to their complementary sensitivities. The transdimensional Bayesian (TB) approach in seismic inversion allows model parameter uncertainties and trade-offs to be quantified with few assumptions. TB sampling yields an adaptive parameterization that enables simultaneous inversion for different model parameters (Vp, Vs, density, radial anisotropy), without the need for strong prior information or regularization. We use a reversible jump Markov chain Monte Carlo (rjMcMC) algorithm to incorporate different seismic observables - surface wave dispersion (SWD), Rayleigh wave ellipticity (ZH ratio), and receiver functions - into the inversion for the profiles of shear velocity (Vs), compressional velocity (Vp), density (ρ), and radial anisotropy (ξ) beneath a seismic station. By analyzing all three data types individually and together, we show that TB sampling can eliminate the need for a fixed parameterization based on prior information and reduce trade-offs in model estimates. We then explore the effect of different types of misfit functions for receiver function inversion, which is a highly non-unique problem. We compare the synthetic inversion results obtained using the L2-norm, cross-correlation-type, and integral-type misfit functions in terms of their convergence rates and retrieved seismic structures. In inversions in which only one type of model parameter is inverted (Vs for the case of SWD), assumed scaling relationships are often applied to account for sensitivity to other model parameters (e.g. Vp, ρ, ξ). Here we show that under a TB framework we can eliminate scaling assumptions while simultaneously constraining multiple model parameters to varying degrees. Furthermore, we compare the performance of TB inversion when different types of model parameters either share the same parameterization or use independent ones. We show that different parameterizations can lead to differences in retrieved model parameters, consistent with limited data constraints. We then quantitatively examine the model parameter trade-offs and find that trade-offs between Vp and radial anisotropy might limit our ability to constrain shallow-layer radial anisotropy using current seismic observables.

  8. Validation of Slosh Model Parameters and Anti-Slosh Baffle Designs of Propellant Tanks by Using Lateral Slosh Testing

    NASA Technical Reports Server (NTRS)

    Perez, Jose G.; Parks, Russel A.; Lazor, Daniel R.

    2012-01-01

    The slosh dynamics of propellant tanks can be represented by an equivalent mass-pendulum-dashpot mechanical model. The parameters of this equivalent model, identified as slosh mechanical model parameters, are slosh frequency, slosh mass, and pendulum hinge point location. They can be obtained by both analysis and testing for discrete fill levels. Anti-slosh baffles are usually needed in propellant tanks to control the movement of the fluid inside the tank. Lateral slosh testing, involving both random excitation testing and free-decay testing, is performed to validate the slosh mechanical model parameters and the damping added to the fluid by the anti-slosh baffles. Traditional modal analysis procedures were used to extract the parameters from the experimental data. The test setup of sub-scale tanks will be described. A comparison between experimental results and analysis will be presented.

  9. Validation of Slosh Model Parameters and Anti-Slosh Baffle Designs of Propellant Tanks by Using Lateral Slosh Testing

    NASA Technical Reports Server (NTRS)

    Perez, Jose G.; Parks, Russel A.; Lazor, Daniel R.

    2012-01-01

    The slosh dynamics of propellant tanks can be represented by an equivalent pendulum-mass mechanical model. The parameters of this equivalent model, identified as slosh model parameters, are slosh mass, slosh mass center of gravity, slosh frequency, and smooth-wall damping. They can be obtained by both analysis and testing for discrete fill heights. Anti-slosh baffles are usually needed in propellant tanks to control the movement of the fluid inside the tank. Lateral slosh testing, involving both random testing and free-decay testing, is performed to validate the slosh model parameters and the damping added to the fluid by the anti-slosh baffles. Traditional modal analysis procedures are used to extract the parameters from the experimental data. The test setup of sub-scale test articles of cylindrical and spherical shapes will be described. A comparison between experimental results and analysis will be presented.

  10. A New Formulation of the Filter-Error Method for Aerodynamic Parameter Estimation in Turbulence

    NASA Technical Reports Server (NTRS)

    Grauer, Jared A.; Morelli, Eugene A.

    2015-01-01

    A new formulation of the filter-error method for estimating aerodynamic parameters in nonlinear aircraft dynamic models during turbulence was developed and demonstrated. The approach uses an estimate of the measurement noise covariance to identify the model parameters, their uncertainties, and the process noise covariance, in a relaxation method analogous to the output-error method. Prior information on the model parameters and uncertainties can be supplied, and a post-estimation correction to the uncertainty was included to account for colored residuals not considered in the theory. No tuning parameters that need adjustment by the analyst are used in the estimation. The method was demonstrated in simulation using the NASA Generic Transport Model, then applied to flight data from the subscale T-2 jet-engine transport aircraft. Modeling results in different levels of turbulence were compared with results from time-domain output-error and frequency-domain equation-error methods to demonstrate the effectiveness of the approach.

  11. Wall Shear Stress Distribution in a Patient-Specific Cerebral Aneurysm Model using Reduced Order Modeling

    NASA Astrophysics Data System (ADS)

    Han, Suyue; Chang, Gary Han; Schirmer, Clemens; Modarres-Sadeghi, Yahya

    2016-11-01

    We construct a reduced-order model (ROM) to study the Wall Shear Stress (WSS) distributions in image-based patient-specific aneurysm models. The magnitude of WSS has been shown to be a critical factor in the growth and rupture of human aneurysms. We start the process by running a training case using a Computational Fluid Dynamics (CFD) simulation with time-varying flow parameters, chosen so that they cover the range of parameters of interest. The method of snapshot Proper Orthogonal Decomposition (POD) is utilized to construct the reduced-order bases from the training CFD simulation. The resulting ROM enables us to study the flow patterns and the WSS distributions over a range of system parameters computationally very efficiently with a relatively small number of modes. This enables comprehensive analysis of the model system across a range of physiological conditions without the need to re-compute the simulation for small changes in the system parameters.
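
    Snapshot POD itself is compact enough to sketch: an SVD of the mean-centred snapshot matrix gives the reduced-order basis, and the leading modes reconstruct the field. The synthetic snapshots below stand in for the training CFD run.

```python
import numpy as np

rng = np.random.default_rng(0)
n_points, n_snapshots = 2000, 60
t = np.linspace(0.0, 1.0, n_snapshots)
x = np.linspace(0.0, 1.0, n_points)[:, None]
snapshots = (np.sin(2 * np.pi * x) * np.cos(2 * np.pi * t)
             + 0.1 * np.sin(6 * np.pi * x) * np.sin(4 * np.pi * t)
             + 0.001 * rng.normal(size=(n_points, n_snapshots)))

mean = snapshots.mean(axis=1, keepdims=True)
U, s, Vt = np.linalg.svd(snapshots - mean, full_matrices=False)  # POD modes in U
energy = np.cumsum(s**2) / np.sum(s**2)
r = int(np.searchsorted(energy, 0.999) + 1)        # modes for 99.9% energy
reconstruction = mean + U[:, :r] @ np.diag(s[:r]) @ Vt[:r]
err = np.linalg.norm(snapshots - reconstruction) / np.linalg.norm(snapshots)
print(f"{r} modes retained, relative reconstruction error {err:.2e}")
```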

  12. Numerically accurate computational techniques for optimal estimator analyses of multi-parameter models

    NASA Astrophysics Data System (ADS)

    Berger, Lukas; Kleinheinz, Konstantin; Attili, Antonio; Bisetti, Fabrizio; Pitsch, Heinz; Mueller, Michael E.

    2018-05-01

    Modelling unclosed terms in partial differential equations typically involves two steps: First, a set of known quantities needs to be specified as input parameters for a model, and second, a specific functional form needs to be defined to model the unclosed terms by the input parameters. Both steps involve a certain modelling error, with the former known as the irreducible error and the latter referred to as the functional error. Typically, only the total modelling error, which is the sum of functional and irreducible error, is assessed, but the concept of the optimal estimator enables the separate analysis of the total and the irreducible errors, yielding a systematic modelling error decomposition. In this work, attention is paid to the techniques required for the practical computation of irreducible errors. Typically, histograms are used for optimal estimator analyses, but this technique is found to add a non-negligible spurious contribution to the irreducible error if models with multiple input parameters are assessed. Thus, the error decomposition of an optimal estimator analysis becomes inaccurate, and misleading conclusions concerning modelling errors may be drawn. In this work, numerically accurate techniques for optimal estimator analyses are identified and a suitable evaluation of irreducible errors is presented. Four different computational techniques are considered: a histogram technique, artificial neural networks, multivariate adaptive regression splines, and an additive model based on a kernel method. For multiple input parameter models, only artificial neural networks and multivariate adaptive regression splines are found to yield satisfactorily accurate results. Beyond a certain number of input parameters, the assessment of models in an optimal estimator analysis even becomes practically infeasible if histograms are used. The optimal estimator analysis in this paper is applied to modelling the filtered soot intermittency in large eddy simulations using a dataset of a direct numerical simulation of a non-premixed sooting turbulent flame.
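
    The optimal-estimator decomposition is easy to state in one dimension, where the histogram technique still works well: the irreducible error is what remains after subtracting the conditional mean. A sketch with a synthetic one-parameter model:

```python
import numpy as np

rng = np.random.default_rng(3)
N = 100_000
x = rng.uniform(-1.0, 1.0, N)                      # single input parameter
y = np.sin(np.pi * x) + 0.2 * rng.normal(size=N)   # true irreducible error = 0.04

# Histogram (binning) technique: conditional mean E[y | x] per bin
bins = np.linspace(-1.0, 1.0, 51)
idx = np.digitize(x, bins) - 1
cond_mean = np.array([y[idx == k].mean() for k in range(len(bins) - 1)])

# Irreducible error: E[(y - E[y | x])^2]
irreducible = np.mean((y - cond_mean[idx]) ** 2)
print(f"estimated irreducible error: {irreducible:.4f} (truth 0.0400)")
```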

  13. Parameter Estimation of Partial Differential Equation Models.

    PubMed

    Xun, Xiaolei; Cao, Jiguo; Mallick, Bani; Carroll, Raymond J; Maity, Arnab

    2013-01-01

    Partial differential equation (PDE) models are commonly used to model complex dynamic systems in applied sciences such as biology and finance. The forms of these PDE models are usually proposed by experts based on their prior knowledge and understanding of the dynamic system. Parameters in PDE models often have interesting scientific interpretations, but their values are often unknown and need to be estimated from measurements of the dynamic system in the presence of measurement errors. Most PDEs used in practice have no analytic solutions and can only be solved with numerical methods. Currently, methods for estimating PDE parameters require repeatedly solving PDEs numerically under thousands of candidate parameter values, and thus the computational load is high. In this article, we propose two methods to estimate parameters in PDE models: a parameter cascading method and a Bayesian approach. In both methods, the underlying dynamic process modeled with the PDE model is represented via basis function expansion. For the parameter cascading method, we develop two nested levels of optimization to estimate the PDE parameters. For the Bayesian method, we develop a joint model for the data and the PDE, and a novel hierarchical model allowing us to employ Markov chain Monte Carlo (MCMC) techniques to make posterior inference. Simulation studies show that the Bayesian method and parameter cascading method are comparable, and both outperform other available methods in terms of estimation accuracy. The two methods are demonstrated by estimating parameters in a PDE model from LIDAR data.
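
    For contrast with the paper's methods, the brute-force approach they improve upon can be sketched directly: solve the PDE numerically inside a least-squares loop. The toy problem below recovers the diffusivity of a 1-D heat equation, u_t = D u_xx, from noisy synthetic data.

```python
import numpy as np
from scipy.optimize import least_squares

nx, nt, L, T = 50, 400, 1.0, 0.1
dx, dt = L / (nx - 1), T / nt
x = np.linspace(0.0, L, nx)

def solve_heat(D):
    """Explicit FTCS scheme; stable here since D * dt / dx^2 < 0.5 for D <= 0.8."""
    u = np.sin(np.pi * x).copy()           # initial condition, zero boundaries
    for _ in range(nt):
        u[1:-1] += D * dt / dx**2 * (u[2:] - 2 * u[1:-1] + u[:-2])
    return u

rng = np.random.default_rng(7)
u_obs = solve_heat(0.5) + 0.01 * rng.normal(size=nx)   # synthetic data, D = 0.5

# Repeatedly solve the PDE inside the optimizer (the costly step the paper avoids)
res = least_squares(lambda D: solve_heat(D[0]) - u_obs, x0=[0.1], bounds=(1e-3, 0.8))
print(f"estimated D = {res.x[0]:.3f} (truth 0.5)")
```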

  14. Analysis of Brown camera distortion model

    NASA Astrophysics Data System (ADS)

    Nowakowski, Artur; Skarbek, Władysław

    2013-10-01

    Contemporary image acquisition devices introduce optical distortion into images. It results in pixel displacement and therefore needs to be compensated for in many computer vision applications. The distortion is usually modeled by the Brown distortion model, whose parameters can be included in the camera calibration task. In this paper we describe the original model and its dependencies, and analyze the orthogonality with regard to radius of its decentering distortion component. We also report experiments with the camera calibration algorithm included in the OpenCV library; in particular, the stability of distortion parameter estimation is evaluated.
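
    The OpenCV calibration experiment follows the standard chessboard workflow from the OpenCV documentation, with cv2.calibrateCamera returning the Brown coefficients (k1, k2, p1, p2, k3) alongside the camera matrix; the image directory below is a placeholder.

```python
import glob
import cv2
import numpy as np

pattern = (9, 6)                                   # inner corners of the board
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

objpoints, imgpoints = [], []                      # 3D board points, 2D detections
for fname in glob.glob("calib_images/*.png"):      # placeholder directory
    gray = cv2.imread(fname, cv2.IMREAD_GRAYSCALE)
    ok, corners = cv2.findChessboardCorners(gray, pattern)
    if ok:
        objpoints.append(objp)
        imgpoints.append(corners)

rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    objpoints, imgpoints, gray.shape[::-1], None, None)
print("RMS reprojection error:", rms)
print("distortion [k1 k2 p1 p2 k3]:", dist.ravel())
```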

  15. The application of the pilot points in groundwater numerical inversion model

    NASA Astrophysics Data System (ADS)

    Hu, Bin; Teng, Yanguo; Cheng, Lirong

    2015-04-01

    Numerical inversion has been widely applied in groundwater simulation. Compared to traditional forward modeling, inverse modeling offers more room for investigation. Zonation and cell-by-cell inversion are the conventional methods; the pilot-point method sits between them. The traditional inverse modeling method often uses software to divide the model into several zones, leaving only a few parameters to be inverted. However, such a distribution is usually too simple, and the simulation deviates as a result. Cell-by-cell inversion yields, in theory, the most realistic parameter distribution, but it demands great computational effort and large quantities of survey data for geostatistical simulation of the area. In contrast, the pilot-point method distributes a set of points throughout the model domains for parameter estimation. Property values are assigned to model cells by kriging, which preserves parameter heterogeneity within geological units. This reduces the geostatistical data requirements for the simulation area and bridges the gap between the two methods above. Pilot points not only save calculation time and improve the goodness of fit, but also reduce the instability of the numerical model caused by large numbers of parameters, among other advantages. In this paper, we use pilot points in a field whose structural heterogeneity and hydraulic parameters were unknown, and we compare the inversion results of the zonation and pilot-point methods. Through comparative analysis, we explore the characteristics of pilot points in groundwater inversion modeling. First, the modeler generates an initial spatially correlated field for a given geostatistical model based on the description of the case site, using the software Groundwater Vistas 6. Second, kriging is defined to obtain the values of the field function (hydraulic conductivity) over the model domain on the basis of its values at measurement and pilot-point locations; pilot points are then assigned to the interpolated field, which has been divided into four zones, and a range of disturbance values is added to the inversion targets to calculate the hydraulic conductivity. Third, after inversion (PEST), the interpolated field minimizes an objective function measuring the misfit between calculated and measured data; finding the optimum parameter values is an optimization problem. After the inversion modeling, the following major conclusions can be drawn: (1) In a field with heterogeneous structure, the results of the pilot-point method are more realistic: the parameters fit better, and the numerical simulation is more stable (stable residual distribution). Compared to zonation, it better reflects the heterogeneity of the study field. (2) The pilot-point method ensures that each parameter is sensitive and not entirely dependent on other parameters, which guarantees the relative independence and authenticity of the parameter estimates. However, it costs more computation time than zonation. Key words: groundwater; pilot point; inverse model; heterogeneity; hydraulic conductivity

  16. A comprehensive evaluation of various sensitivity analysis methods: A case study with a hydrological model

    DOE PAGES

    Gan, Yanjun; Duan, Qingyun; Gong, Wei; ...

    2014-01-01

    Sensitivity analysis (SA) is a commonly used approach for identifying important parameters that dominate model behaviors. We use a newly developed software package, a Problem Solving environment for Uncertainty Analysis and Design Exploration (PSUADE), to evaluate the effectiveness and efficiency of ten widely used SA methods, including seven qualitative and three quantitative ones. All SA methods are tested using a variety of sampling techniques to screen out the most sensitive (i.e., important) parameters from the insensitive ones. The Sacramento Soil Moisture Accounting (SAC-SMA) model, which has thirteen tunable parameters, is used for illustration. The South Branch Potomac River basin near Springfield, West Virginia in the U.S. is chosen as the study area. The key findings from this study are: (1) For qualitative SA methods, Correlation Analysis (CA), Regression Analysis (RA), and Gaussian Process (GP) screening methods are shown to be ineffective in this example. Morris One-At-a-Time (MOAT) screening is the most efficient, needing only 280 samples to identify the most important parameters, but it is the least robust method. Multivariate Adaptive Regression Splines (MARS), Delta Test (DT) and Sum-Of-Trees (SOT) screening methods need about 400-600 samples for the same purpose. Monte Carlo (MC), Orthogonal Array (OA) and Orthogonal Array based Latin Hypercube (OALH) are appropriate sampling techniques for them; (2) For quantitative SA methods, at least 2777 samples are needed for the Fourier Amplitude Sensitivity Test (FAST) to identify parameter main effects. The McKay method needs about 360 samples to evaluate the main effects and more than 1000 samples to assess the two-way interaction effects. OALH and LPτ (LPTAU) sampling techniques are more appropriate for the McKay method. For the Sobol' method, a minimum of 1050 samples is needed to compute the first-order and total sensitivity indices correctly. These comparisons show that qualitative SA methods are more efficient, but less accurate and robust, than quantitative ones.

  17. OPC modeling by genetic algorithm

    NASA Astrophysics Data System (ADS)

    Huang, W. C.; Lai, C. M.; Luo, B.; Tsai, C. K.; Tsay, C. S.; Lai, C. W.; Kuo, C. C.; Liu, R. G.; Lin, H. T.; Lin, B. J.

    2005-05-01

    Optical proximity correction (OPC) is usually used to pre-distort mask layouts to make the printed patterns as close to the desired shapes as possible. For model-based OPC, a lithographic model to predict critical dimensions after lithographic processing is needed. The model is usually obtained via a regression of parameters based on experimental data containing optical proximity effects. When the parameters involve a mix of continuous (optical and resist models) and discrete (kernel numbers) sets, traditional numerical optimization methods may have difficulty handling the model fitting. In this study, an artificial-intelligence optimization method was used to regress the parameters of the lithographic models for OPC. The implemented phenomenological models were constant-threshold models that combine diffused aerial image models with loading effects. Optical kernels decomposed from Hopkins' equation were used to calculate aerial images on the wafer. The number of optical kernels was likewise treated as a regression parameter. In this way, good regression results were obtained with different sets of optical proximity effect data.
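
    A stripped-down genetic algorithm in the same spirit, with the lithographic model replaced by a simple polynomial regression so the example stays self-contained: one discrete gene (a model order, standing in for the kernel count) is evolved by recombination alongside continuous coefficients tuned by mutation.

```python
import numpy as np

rng = np.random.default_rng(5)
x = np.linspace(-1.0, 1.0, 40)
target = 1.0 + 0.5 * x - 2.0 * x**2                 # data to be regressed

def fitness(order, coeffs):
    pred = sum(c * x**k for k, c in enumerate(coeffs[: order + 1]))
    return -np.mean((pred - target) ** 2) - 0.01 * order   # penalize complexity

POP, GENS, MAXORD = 40, 60, 5
orders = rng.integers(0, MAXORD + 1, POP)           # discrete gene
coeffs = rng.normal(0.0, 1.0, (POP, MAXORD + 1))    # continuous genes

for _ in range(GENS):
    fit = np.array([fitness(o, c) for o, c in zip(orders, coeffs)])
    i, j = rng.integers(0, POP, (2, POP))           # tournament selection
    parents = np.where(fit[i] > fit[j], i, j)
    orders, coeffs = orders[parents].copy(), coeffs[parents].copy()
    mates = rng.permutation(POP)                    # recombine the discrete gene
    swap = rng.random(POP) < 0.5
    orders[swap] = orders[mates][swap]
    coeffs += rng.normal(0.0, 0.05, coeffs.shape)   # mutate continuous genes

best = int(np.argmax([fitness(o, c) for o, c in zip(orders, coeffs)]))
print("best order:", orders[best],
      "coeffs:", np.round(coeffs[best, : orders[best] + 1], 2))
```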

  18. Is ET often oversimplified in hydrologic models? Using long records to elucidate unaccounted for controls on ET

    NASA Astrophysics Data System (ADS)

    Kelleher, Christa A.; Shaw, Stephen B.

    2018-02-01

    Recent research has found that hydrologic modeling over decadal time periods often requires time-variant model parameters. Most prior work has focused on assessing time variance in model parameters conceptualizing watershed features and functions. In this paper, we assess whether adding a time-variant scalar to potential evapotranspiration (PET) can be used in place of time-variant parameters. Using the HBV hydrologic model and four different simple but common PET methods (Hamon, Priestley-Taylor, Oudin, and Hargreaves), we simulated 60+ years of daily discharge on four rivers in New York state. Allowing all ten model parameters to vary in time achieved good model fits in terms of daily NSE and long-term water balance. However, allowing single model parameters to vary in time - including a scalar on PET - achieved nearly equivalent model fits across PET methods. Overall, varying a PET scalar in time is likely more physically consistent with known biophysical controls on PET than varying parameters that conceptualize innate watershed properties such as wilting point and field capacity. This work suggests that the seeming need for time variance in innate watershed parameters may be due to overly simple evapotranspiration formulations that do not account for all factors controlling evapotranspiration over long time periods.
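
    As a hedged sketch of the idea, the snippet below applies a year-specific scalar to one common form of the Hamon formulation, PET [mm/day] = 29.8 D e_sat / (T + 273.2), with D the daylength in hours and e_sat the saturation vapour pressure in kPa; all values are illustrative only.

```python
import numpy as np

def e_sat_kpa(T):
    """Tetens-type saturation vapour pressure [kPa] at temperature T [degC]."""
    return 0.6108 * np.exp(17.27 * T / (T + 237.3))

def hamon_pet(T, daylength_hr, scalar=1.0):
    """One common Hamon PET form [mm/day], multiplied by a time-variant scalar."""
    return scalar * 29.8 * daylength_hr * e_sat_kpa(T) / (T + 273.2)

T = np.array([5.0, 15.0, 25.0])       # daily mean temperature [degC]
D = np.array([9.0, 13.0, 15.0])       # daylength [hours]
for year_scalar in (0.9, 1.0, 1.1):   # the year-specific multiplier
    print(year_scalar, np.round(hamon_pet(T, D, year_scalar), 2))
```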

  19. A multi-model assessment of terrestrial biosphere model data needs

    NASA Astrophysics Data System (ADS)

    Gardella, A.; Cowdery, E.; De Kauwe, M. G.; Desai, A. R.; Duveneck, M.; Fer, I.; Fisher, R.; Knox, R. G.; Kooper, R.; LeBauer, D.; McCabe, T.; Minunno, F.; Raiho, A.; Serbin, S.; Shiklomanov, A. N.; Thomas, A.; Walker, A.; Dietze, M.

    2017-12-01

    Terrestrial biosphere models provide us with the means to simulate the impacts of climate change and their uncertainties. Going beyond direct observation and experimentation, models synthesize our current understanding of ecosystem processes and can give us insight into the data needed to constrain model parameters. In previous work, we leveraged the Predictive Ecosystem Analyzer (PEcAn) to assess the contribution of different parameters to the uncertainty of the Ecosystem Demography model v2 (ED) outputs across various North American biomes (Dietze et al., JGR-G, 2014). While this analysis identified key research priorities, the extent to which these priorities were model- and/or biome-specific was unclear. Furthermore, because the analysis studied only one model, we were unable to comment on the effect of variability in model structure on overall predictive uncertainty. Here, we expand this analysis to all biomes globally and a wide sample of models that vary in complexity: BioCro, CABLE, CLM, DALEC, ED2, FATES, G'DAY, JULES, LANDIS, LINKAGES, LPJ-GUESS, MAESPA, PRELES, SDGVM, SIPNET, and TEM. Prior to performing uncertainty analyses, model parameter uncertainties were assessed by assimilating all available trait data from the combination of the BETYdb and TRY trait databases, using an updated multivariate version of PEcAn's hierarchical Bayesian meta-analysis. Next, sensitivity analyses were performed for all models across a range of sites globally to assess sensitivities for a range of different outputs (GPP, ET, SH, Ra, NPP, Rh, NEE, LAI) at multiple time scales from the sub-annual to the decadal. Finally, parameter uncertainties and model sensitivities were combined to evaluate the fractional contribution of each parameter to the predictive uncertainty for a specific variable at a specific site and timescale. Facilitated by PEcAn's automated workflows, this analysis represents the broadest assessment of the sensitivities and uncertainties in terrestrial models to date, and provides a comprehensive roadmap for constraining model uncertainties through model development and data collection.

  20. Dynamical compensation and structural identifiability of biological models: Analysis, implications, and reconciliation.

    PubMed

    Villaverde, Alejandro F; Banga, Julio R

    2017-11-01

    The concept of dynamical compensation has been recently introduced to describe the ability of a biological system to keep its output dynamics unchanged in the face of varying parameters. However, the original definition of dynamical compensation amounts to a lack of structural identifiability. This is relevant if model parameters need to be estimated, as is often the case in biological modelling. Care should be taken when using an unidentifiable model to extract biological insight: the estimated values of structurally unidentifiable parameters are meaningless, and model predictions about unmeasured state variables can be wrong. Taking this into account, we explore alternative definitions of dynamical compensation that do not necessarily imply structural unidentifiability. Accordingly, we show different ways in which a model can be made identifiable while exhibiting dynamical compensation. Our analyses enable the use of the new concept of dynamical compensation in the context of parameter identification, and reconcile it with the desirable property of structural identifiability.

  1. Simulating settlement during waste placement at a landfill with waste lifts placed under frozen conditions.

    PubMed

    Van Geel, Paul J; Murray, Kathleen E

    2015-12-01

    Twelve instrument bundles were placed within two waste profiles as waste was placed in an operating landfill in Ste. Sophie, Quebec, Canada. The settlement data were simulated using a three-component model to account for primary (instantaneous) compression, secondary compression (mechanical creep), and biodegradation-induced settlement. The model parameters regressed from the first waste layer were able to predict the settlement of the remaining four waste layers with good agreement. The model parameters were compared to values published in the literature. An MSW landfill scenario referenced in the literature was used to illustrate how the parameter values from the different studies predicted settlement. The parameters determined in this study and in other studies with total waste heights between 15 and 60 m provided similar estimates of total settlement in the long term, while the settlement rates and relative magnitudes of the three components varied. The parameters determined from studies with total waste heights less than 15 m resulted in larger secondary compression indices and lower biodegradation-induced settlements. When these were applied to an MSW landfill scenario with a total waste height of 30 m, the settlement was overestimated and unrealistic values were obtained. This study concludes that more field studies are needed to measure waste settlement during the filling stage of landfill operations, and more field data are needed to assess different settlement models and their respective parameters. Copyright © 2015 Elsevier Ltd. All rights reserved.
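
    A generic three-component settlement curve of the kind fitted in the study can be written down directly; the functional forms and parameter values below are placeholders, not the paper's regressed values.

```python
import numpy as np

def settlement(t_days, H=30.0, eps_p=0.05, c_alpha=0.01, eps_b=0.08, k=2e-3, t0=1.0):
    """Placeholder three-component settlement [m] for a waste column of height H."""
    primary = eps_p * H                                   # instantaneous compression
    creep = c_alpha * H * np.log10(np.maximum(t_days, t0) / t0)  # mechanical creep
    biodegradation = eps_b * H * (1.0 - np.exp(-k * t_days))     # first-order decay
    return primary + creep + biodegradation

t = np.array([1.0, 30.0, 365.0, 3650.0, 10950.0])         # 1 day .. 30 years
print(np.round(settlement(t), 2))                          # total settlement [m]
```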

  2. Development of a Dynamic Visco-elastic Vehicle-Soil Interaction Model for Rut Depth, and Power Determinations

    DTIC Science & Technology

    2011-09-06

    Presentation outline: A) review of soil model governing equations; B) development of pedo-transfer functions (terrain database to engineering properties); C) ... (lateral earth pressure). B) Development of pedo-transfer functions: engineering parameters needed by the soil model include the compression index and rebound ... inches, RCI for fine-grained soils, CI for coarse-grained soils. Pedo-transfer function: need to transfer the existing terrain database

  3. Computationally inexpensive identification of noninformative model parameters by sequential screening

    NASA Astrophysics Data System (ADS)

    Cuntz, Matthias; Mai, Juliane; Zink, Matthias; Thober, Stephan; Kumar, Rohini; Schäfer, David; Schrön, Martin; Craven, John; Rakovec, Oldrich; Spieler, Diana; Prykhodko, Vladyslav; Dalmasso, Giovanni; Musuuza, Jude; Langenberg, Ben; Attinger, Sabine; Samaniego, Luis

    2015-08-01

    Environmental models tend to require increasing computational time and resources as physical process descriptions are improved or new descriptions are incorporated. Many-query applications such as sensitivity analysis or model calibration usually require a large number of model evaluations leading to high computational demand. This often limits the feasibility of rigorous analyses. Here we present a fully automated sequential screening method that selects only informative parameters for a given model output. The method requires a number of model evaluations that is approximately 10 times the number of model parameters. It was tested using the mesoscale hydrologic model mHM in three hydrologically unique European river catchments. It identified around 20 informative parameters out of 52, with different informative parameters in each catchment. The screening method was evaluated with subsequent analyses using all 52 as well as only the informative parameters. Subsequent Sobol's global sensitivity analysis led to almost identical results yet required 40% fewer model evaluations after screening. mHM was calibrated with all and with only informative parameters in the three catchments. Model performances for daily discharge were equally high in both cases with Nash-Sutcliffe efficiencies above 0.82. Calibration using only the informative parameters needed just one third of the number of model evaluations. The universality of the sequential screening method was demonstrated using several general test functions from the literature. We therefore recommend the use of the computationally inexpensive sequential screening method prior to rigorous analyses on complex environmental models.

  4. Computationally inexpensive identification of noninformative model parameters by sequential screening

    NASA Astrophysics Data System (ADS)

    Mai, Juliane; Cuntz, Matthias; Zink, Matthias; Thober, Stephan; Kumar, Rohini; Schäfer, David; Schrön, Martin; Craven, John; Rakovec, Oldrich; Spieler, Diana; Prykhodko, Vladyslav; Dalmasso, Giovanni; Musuuza, Jude; Langenberg, Ben; Attinger, Sabine; Samaniego, Luis

    2016-04-01

    Environmental models tend to require increasing computational time and resources as physical process descriptions are improved or new descriptions are incorporated. Many-query applications such as sensitivity analysis or model calibration usually require a large number of model evaluations leading to high computational demand. This often limits the feasibility of rigorous analyses. Here we present a fully automated sequential screening method that selects only informative parameters for a given model output. The method requires a number of model evaluations that is approximately 10 times the number of model parameters. It was tested using the mesoscale hydrologic model mHM in three hydrologically unique European river catchments. It identified around 20 informative parameters out of 52, with different informative parameters in each catchment. The screening method was evaluated with subsequent analyses using all 52 as well as only the informative parameters. Subsequent Sobol's global sensitivity analysis led to almost identical results yet required 40% fewer model evaluations after screening. mHM was calibrated with all and with only informative parameters in the three catchments. Model performances for daily discharge were equally high in both cases with Nash-Sutcliffe efficiencies above 0.82. Calibration using only the informative parameters needed just one third of the number of model evaluations. The universality of the sequential screening method was demonstrated using several general test functions from the literature. We therefore recommend the use of the computationally inexpensive sequential screening method prior to rigorous analyses on complex environmental models.

  5. Using internal discharge data in a distributed conceptual model to reduce uncertainty in streamflow simulations

    NASA Astrophysics Data System (ADS)

    Guerrero, J.; Halldin, S.; Xu, C.; Lundin, L.

    2011-12-01

    Distributed hydrological models are important tools in water management, as they account for the spatial variability of hydrological data and are able to produce spatially distributed outputs. They can directly incorporate and assess potential changes in the characteristics of our basins. A recognized problem for models in general is equifinality, which is only exacerbated for distributed models, which tend to have a large number of parameters. We need to deal with the fundamentally ill-posed nature of the problem that such models force us to face, i.e. a large number of parameters and very few variables that can be used to constrain them, often only the catchment discharge. There is a growing but still limited literature showing how the internal states of a distributed model can be used to calibrate/validate its predictions. In this paper, a distributed version of WASMOD, a conceptual rainfall-runoff model with only three parameters, combined with a routing algorithm based on the high-resolution HydroSHEDS data, was used to simulate the discharge in the Paso La Ceiba basin in Honduras. The parameter space was explored using Monte Carlo simulations, and the region of space containing the parameter sets that were considered behavioral according to two different criteria was delimited using the geometric concept of alpha-shapes. The discharge data from five internal sub-basins were used to aid in the calibration of the model and to answer the following questions: Can this information improve the simulations at the outlet of the catchment, or decrease their uncertainty? Also, after reducing the number of model parameters needing calibration through sensitivity analysis: Is it possible to relate them to basin characteristics? The analysis revealed that in most cases the internal discharge data can be used to reduce the uncertainty in the discharge at the outlet, albeit with little improvement in the overall simulation results.
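
    The Monte Carlo/behavioral-set workflow (minus the alpha-shape step) can be sketched in a GLUE-like fashion; the three-parameter toy model below is a stand-in for WASMOD, and the NSE threshold is an arbitrary choice.

```python
import numpy as np

rng = np.random.default_rng(11)
rain = rng.gamma(2.0, 2.0, 200)                 # synthetic forcing

def toy_model(p):
    """Placeholder 3-parameter bucket model: (scale, recession, loss)."""
    s, q = 0.0, np.empty_like(rain)
    for i, r in enumerate(rain):
        s += r * (1.0 - p[2])                   # losses removed from input
        q[i] = p[1] * s                         # linear-reservoir outflow
        s -= q[i]
    return q * p[0]

def nse(sim, obs):
    return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

truth = np.array([1.0, 0.3, 0.2])
obs = toy_model(truth) + rng.normal(0.0, 0.05, rain.size)

samples = rng.uniform([0.5, 0.05, 0.0], [1.5, 0.6, 0.5], size=(5000, 3))
scores = np.array([nse(toy_model(p), obs) for p in samples])
behavioral = samples[scores > 0.7]              # arbitrary behavioral threshold
print(f"{len(behavioral)} behavioral sets; parameter ranges:")
print(behavioral.min(axis=0).round(2), behavioral.max(axis=0).round(2))
```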

  6. Classification of hydrological parameter sensitivity and evaluation of parameter transferability across 431 US MOPEX basins

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ren, Huiying; Hou, Zhangshuan; Huang, Maoyi

    The Community Land Model (CLM) represents physical, chemical, and biological processes of the terrestrial ecosystems that interact with climate across a range of spatial and temporal scales. As CLM includes numerous sub-models and associated parameters, the high-dimensional parameter space presents a formidable challenge for quantifying uncertainty and improving Earth system predictions needed to assess environmental changes and risks. This study aims to evaluate the potential of transferring hydrologic model parameters in CLM through sensitivity analyses and classification across watersheds from the Model Parameter Estimation Experiment (MOPEX) in the United States. The sensitivity of CLM-simulated water and energy fluxes to hydrological parameters across 431 MOPEX basins is first examined using an efficient stochastic sampling-based sensitivity analysis approach. Linear, interaction, and high-order nonlinear impacts are all identified via statistical tests and stepwise backward-removal parameter screening. The basins are then classified according to their parameter sensitivity patterns (internal attributes), as well as their hydrologic indices/attributes (external hydrologic factors) separately, using a principal component analysis (PCA) and expectation-maximization (EM)-based clustering approach. Similarities and differences among the parameter sensitivity-based classification system (S-Class), the hydrologic indices-based classification (H-Class), and the Köppen climate classification system (K-Class) are discussed. Within each S-Class with similar parameter sensitivity characteristics, similar inversion modeling setups can be used for parameter calibration, and the parameters and their contribution or significance to water and energy cycling may also be more transferable. This classification study provides guidance on identifiable parameters, and on parameterization and inverse model design for CLM, but the methodology is applicable to other models. Inverting parameters at representative sites belonging to the same class can significantly reduce parameter calibration efforts.

  7. Efficient Bayesian parameter estimation with implicit sampling and surrogate modeling for a vadose zone hydrological problem

    NASA Astrophysics Data System (ADS)

    Liu, Y.; Pau, G. S. H.; Finsterle, S.

    2015-12-01

    Parameter inversion involves inferring model parameter values from sparse observations of some observables. To infer the posterior probability distributions of the parameters, Markov chain Monte Carlo (MCMC) methods are typically used. However, the large number of forward simulations needed and limited computational resources limit the complexity of the hydrological model that can be used in these methods. In view of this, we studied the implicit sampling (IS) method, an efficient importance sampling technique that generates samples in the high-probability region of the posterior distribution and thus reduces the number of forward simulations that need to be run. For a pilot-point inversion of a heterogeneous permeability field based on a synthetic ponded infiltration experiment simulated with TOUGH2 (a subsurface modeling code), we showed that IS with a linear map provides an accurate Bayesian description of the parameterized permeability field at the pilot points with only approximately 500 forward simulations. We further studied the use of surrogate models to improve the computational efficiency of parameter inversion. We implemented two reduced-order models (ROMs) for the TOUGH2 forward model. One is based on polynomial chaos expansion (PCE), whose coefficients are obtained using the sparse Bayesian learning technique to mitigate the "curse of dimensionality" in the number of PCE terms. The other is Gaussian process regression (GPR), for which different covariance, likelihood and inference models are considered. Preliminary results indicate that ROMs constructed over the prior parameter space perform poorly. It is thus impractical to replace the hydrological model with a ROM directly in an MCMC method. However, the IS method can work with a ROM constructed for parameters in the close vicinity of the maximum a posteriori probability (MAP) estimate. We discuss the accuracy and computational efficiency of using ROMs in the implicit sampling procedure for the hydrological problem considered. This work was supported, in part, by the U.S. Dept. of Energy under Contract No. DE-AC02-05CH11231.
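
    As an illustration of the surrogate idea, a GPR emulator can be trained on forward-model runs and then queried inside the sampler. A minimal sketch using scikit-learn, with a stand-in function in place of TOUGH2; all names and dimensions are ours:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

# Training data: parameter samples (here, near the MAP estimate) and the
# corresponding forward-model outputs. The analytic function below is a
# stand-in; in the study these would be TOUGH2 runs.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 5))            # 5 pilot-point parameters
y = np.sin(X[:, 0]) + X[:, 1] ** 2 + 0.1 * X[:, 4]

kernel = ConstantKernel(1.0) * RBF(length_scale=np.ones(5))  # anisotropic RBF
gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, y)

# The surrogate replaces the forward model inside the sampler; its
# predictive standard deviation flags regions where it is untrustworthy.
mean, std = gpr.predict(rng.uniform(-1, 1, size=(3, 5)), return_std=True)
print(mean, std)
```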

  8. Estimated snow parameters for vehicle mobility modeling in Korea, Germany and interior Alaska

    DOT National Transportation Integrated Search

    1995-09-01

    Snow is a crucial factor affecting the U.S. Army's operations in cold regions. Values for snow depth and snow density are needed for vehicle mobility studies, but unfortunately the available historical records of these parameters tend to be relativel...

  9. How Does Higher Frequency Monitoring Data Affect the Calibration of a Process-Based Water Quality Model?

    NASA Astrophysics Data System (ADS)

    Jackson-Blake, L.

    2014-12-01

    Process-based catchment water quality models are increasingly used as tools to inform land management. However, for such models to be reliable they need to be well calibrated and shown to reproduce key catchment processes. Calibration can be challenging for process-based models, which tend to be complex and highly parameterised. Calibrating a large number of parameters generally requires a large amount of monitoring data, but even in well-studied catchments, streams are often only sampled at a fortnightly or monthly frequency. The primary aim of this study was therefore to investigate how the quality and uncertainty of model simulations produced by one process-based catchment model, INCA-P (the INtegrated CAtchment model of Phosphorus dynamics), were improved by calibration to higher frequency water chemistry data. Two model calibrations were carried out for a small rural Scottish catchment: one using 18 months of daily total dissolved phosphorus (TDP) concentration data, another using a fortnightly dataset derived from the daily data. To aid comparability, calibrations were carried out automatically using the MCMC-DREAM algorithm. Using daily rather than fortnightly data resulted in improved simulation of the magnitude of peak TDP concentrations, in turn resulting in improved model performance statistics. Marginal posteriors were better constrained by the higher frequency data, resulting in a large reduction in parameter-related uncertainty in simulated TDP (the 95% credible interval decreased from 26 to 6 μg/l). The number of parameters that could be reliably auto-calibrated was lower for the fortnightly data, leading to the recommendation that parameters should not be varied spatially for models such as INCA-P unless there is solid evidence that this is appropriate, or there is a real need to do so for the model to fulfil its purpose. Secondary study aims were to highlight the subjective elements involved in auto-calibration and suggest practical improvements that could make models such as INCA-P more suited to auto-calibration and uncertainty analyses. Two key improvements include model simplification, so that all model parameters can be included in an analysis of this kind, and better documenting of recommended ranges for each parameter, to help in choosing sensible priors.

  10. Estimation of Transport and Kinetic Parameters of Vanadium Redox Batteries Using Static Cells

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, Seong Beom; Pratt, III, Harry D.; Anderson, Travis M.

    Mathematical models of Redox Flow Batteries (RFBs) can be used to analyze cell performance, optimize battery operation, and control the energy storage system efficiently. Among many other models, physics-based electrochemical models are capable of predicting internal states of the battery, such as temperature, state-of-charge, and state-of-health. In these models, parameter estimation is an important step in studying, analyzing, and validating the models against experimental data. A common practice is to determine these parameters either by conducting experiments or from information available in the literature. However, it is not easy to obtain all the necessary parameters in this way, and there are occasions when important information, such as diffusion coefficients and rate constants of ions, has not been studied. Also, the parameters needed for modeling charge-discharge are not always available. In this paper, an efficient way to estimate the parameters of physics-based redox battery models is proposed. Furthermore, this paper demonstrates that the proposed approach can be used to study and analyze aspects of capacity loss/fade, kinetics, and transport phenomena of the RFB system.

  11. Estimation of Transport and Kinetic Parameters of Vanadium Redox Batteries Using Static Cells

    DOE PAGES

    Lee, Seong Beom; Pratt, III, Harry D.; Anderson, Travis M.; ...

    2018-03-27

    Mathematical models of Redox Flow Batteries (RFBs) can be used to analyze cell performance, optimize battery operation, and control the energy storage system efficiently. Among many other models, physics-based electrochemical models are capable of predicting internal states of the battery, such as temperature, state-of-charge, and state-of-health. In these models, parameter estimation is an important step in studying, analyzing, and validating the models against experimental data. A common practice is to determine these parameters either by conducting experiments or from information available in the literature. However, it is not easy to obtain all the necessary parameters in this way, and there are occasions when important information, such as diffusion coefficients and rate constants of ions, has not been studied. Also, the parameters needed for modeling charge-discharge are not always available. In this paper, an efficient way to estimate the parameters of physics-based redox battery models is proposed. Furthermore, this paper demonstrates that the proposed approach can be used to study and analyze aspects of capacity loss/fade, kinetics, and transport phenomena of the RFB system.

  12. Calibrating Physical Parameters in House Models Using Aggregate AC Power Demand

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sun, Yannan; Stevens, Andrew J.; Lian, Jianming

    For residential houses, air conditioning (AC) units are one of the major resources that can provide significant flexibility in energy use for the purpose of demand response. To quantify this flexibility, the characteristics of all the houses need to be accurately estimated, so that house models can be used to predict the dynamics of the house temperatures and adjust the setpoints accordingly, providing demand response while maintaining the same comfort levels. In this paper, we propose an approach using the Reverse Monte Carlo modeling method and aggregate house models to calibrate the distribution parameters of the house models for a population of residential houses. Given the aggregate AC power demand for the population, the approach can successfully estimate the distribution parameters for the sensitive physical parameters identified in our previous uncertainty quantification study, such as the mean of the floor areas of the houses.

  13. Thermodynamic modeling of transcription: sensitivity analysis differentiates biological mechanism from mathematical model-induced effects.

    PubMed

    Dresch, Jacqueline M; Liu, Xiaozhou; Arnosti, David N; Ay, Ahmet

    2010-10-24

    Quantitative models of gene expression generate parameter values that can shed light on biological features such as transcription factor activity, cooperativity, and local effects of repressors. An important element in such investigations is sensitivity analysis, which determines how strongly a model's output reacts to variations in parameter values. Parameters of low sensitivity may not be accurately estimated, leading to unwarranted conclusions. Low sensitivity may reflect the nature of the biological data, or it may be a result of the model structure. Here, we focus on the analysis of thermodynamic models, which have been used extensively to analyze gene transcription. Extracted parameter values have been interpreted biologically, but until now little attention has been given to parameter sensitivity in this context. We apply local and global sensitivity analyses to two recent transcriptional models to determine the sensitivity of individual parameters. We show that in one case, values for repressor efficiencies are very sensitive, while values for protein cooperativities are not, and provide insights into how these differential sensitivities stem from both biological effects and the structure of the applied models. In a second case, we demonstrate that parameters that were thought to prove the system's dependence on activator-activator cooperativity are relatively insensitive. We show that there are numerous parameter sets that do not satisfy the relationships proffered as the optimal solutions, indicating that structural differences between the two types of transcriptional enhancers analyzed may not be as simple as altered activator cooperativity. Our results emphasize the need for sensitivity analysis to examine model construction and the forms of biological data used for modeling transcriptional processes, in order to determine the significance of estimated parameter values for thermodynamic models. Knowledge of parameter sensitivities can provide the necessary context to determine how modeling results should be interpreted in biological systems.

  14. Cell death, perfusion and electrical parameters are critical in models of hepatic radiofrequency ablation

    PubMed Central

    Hall, Sheldon K.; Ooi, Ean H.; Payne, Stephen J.

    2015-01-01

    Purpose: A sensitivity analysis has been performed on a mathematical model of radiofrequency ablation (RFA) in the liver. The purpose is to identify the most important parameters in the model, defined as those that produce the largest changes in the prediction. This is important in understanding the role of uncertainty and when comparing model predictions to experimental data. Materials and methods: The Morris method was chosen to perform the sensitivity analysis because it is ideal for models with many parameters or that take a significant length of time to obtain solutions. A comprehensive literature review was performed to obtain the ranges over which the model parameters are expected to vary, which is crucial input information. Results: The most important parameters in predicting the ablation zone size in our model of RFA are those representing the blood perfusion, electrical conductivity and the cell death model. The size of the 50 °C isotherm is sensitive to the electrical properties of tissue while the heat source is active, and to the thermal parameters during cooling. Conclusions: The parameter ranges chosen for the sensitivity analysis are believed to represent all that is currently known about their values in combination. The Morris method is able to compute global parameter sensitivities taking into account the interaction of all parameters, something that has not been done before. Research is needed to better understand the uncertainties in the cell death, electrical conductivity and perfusion models, but the other parameters are only of second order, providing a significant simplification. PMID:26000972
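
    The Morris method is implemented in, for example, the SALib Python package. A minimal sketch of such a screening, with placeholder parameter names and a stand-in model rather than the RFA model itself:

```python
import numpy as np
from SALib.sample.morris import sample
from SALib.analyze import morris

problem = {
    "num_vars": 3,
    "names": ["perfusion", "elec_conductivity", "death_rate"],  # placeholders
    "bounds": [[0.5, 2.0], [0.1, 0.6], [0.01, 0.1]],
}

X = sample(problem, N=50, num_levels=4)                          # N trajectories
Y = np.array([x[0] * 2.0 + x[1] ** 2 + 0.1 * x[2] for x in X])   # stand-in model

Si = morris.analyze(problem, X, Y, num_levels=4)
print(Si["mu_star"])  # mean |elementary effect|: global importance ranking
```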

  15. MODELED MESOSCALE METEOROLOGICAL FIELDS WITH FOUR-DIMENSIONAL DATA ASSIMILATION IN REGIONAL SCALE AIR QUALITY MODELS

    EPA Science Inventory

    This paper addresses the need to increase the temporal and spatial resolution of meteorological data currently used in air quality simulation models, AQSMs. Transport and diffusion parameters including mixing heights and stability used in regulatory air quality dispersion models a...

  16. A "total parameter estimation" method in the varification of distributed hydrological models

    NASA Astrophysics Data System (ADS)

    Wang, M.; Qin, D.; Wang, H.

    2011-12-01

    Conventionally, hydrological models are used for runoff or flood forecasting, so model parameters are commonly estimated from discharge measurements at the catchment outlet. With the advancement of hydrological science and computer technology, distributed hydrological models based on physical mechanisms, such as SWAT, MIKE SHE, and WEP, have gradually become the mainstream models in hydrology. However, the assessment of distributed hydrological models and the determination of their parameters still rely on runoff and, occasionally, groundwater level measurements. It is essential in many countries, including China, to understand the local and regional water cycle: not only do we need to simulate the runoff generation process for flood forecasting in wet areas, we also need to grasp the water cycle pathways and the processes of consumption and transformation in arid and semi-arid regions for conservation and integrated water resources management. As a distributed hydrological model can simulate the physical processes within a catchment, we can obtain a more realistic representation of the actual water cycle in the simulation model. Runoff is the combined result of various hydrological processes, so using runoff alone for parameter estimation is inherently problematic, and the resulting accuracy is difficult to assess. In particular, in arid areas such as the Haihe River Basin in China, runoff accounts for only 17% of the rainfall and is concentrated in the rainy season from June to August each year. During other months, many of the perennial rivers within the basin dry up. Thus calibration against runoff alone does not fully exploit a distributed hydrological model in arid and semi-arid regions. This paper proposes a "total parameter estimation" method that verifies a distributed hydrological model against multiple water cycle processes, including runoff, evapotranspiration, groundwater, and soil water, and applies it to the Haihe River Basin in China. The application results demonstrate that this comprehensive testing method is very useful in the development of a distributed hydrological model and provides a new way of thinking in hydrological science.

  17. Functional Fault Modeling of a Cryogenic System for Real-Time Fault Detection and Isolation

    NASA Technical Reports Server (NTRS)

    Ferrell, Bob; Lewis, Mark; Oostdyk, Rebecca; Perotti, Jose

    2009-01-01

    When setting out to model and/or simulate a complex mechanical or electrical system, a modeler is faced with a vast array of tools, software, equations, algorithms and techniques that may individually or in concert aid in the development of the model. Mature requirements and a well-understood purpose for the model may considerably shrink the field of possible tools and algorithms that will suit the modeling solution. Is the model intended to be used in an offline fashion or in real time? On what platform does it need to execute? How long will the model be allowed to run before it outputs the desired parameters? What resolution is desired? Do the parameters need to be qualitative or quantitative? Is it more important to capture the physics or the function of the system in the model? Does the model need to produce simulated data? All these questions and more will drive the selection of the appropriate tools and algorithms, but the modeler must be diligent to bear in mind the final application throughout the modeling process to ensure the model meets its requirements without needless iterations of the design. The purpose of this paper is to describe the considerations and techniques used in the process of creating a functional fault model of a liquid hydrogen (LH2) system that will be used in a real-time environment to automatically detect and isolate failures.

  18. Applications of Monte Carlo method to nonlinear regression of rheological data

    NASA Astrophysics Data System (ADS)

    Kim, Sangmo; Lee, Junghaeng; Kim, Sihyun; Cho, Kwang Soo

    2018-02-01

    In rheological studies, one often needs to determine the parameters of rheological models from experimental data. Since both the rheological data and the parameter values vary on logarithmic scales and the number of parameters is quite large, conventional nonlinear regression methods such as the Levenberg-Marquardt (LM) method are usually ineffective. Gradient-based methods such as LM are apt to be caught in local minima, which give unphysical values of the parameters whenever the initial guess is far from the global optimum. Although this problem can be addressed by simulated annealing (SA), SA involves adjustable parameters that are usually determined in an ad hoc manner. We suggest a simplified version of SA, a Monte Carlo (MC) method that yields effective parameter values for complicated rheological models such as the Carreau-Yasuda model of steady shear viscosity, the discrete relaxation spectrum, and the zero-shear viscosity as a function of concentration and molecular weight.
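
    A minimal sketch of this kind of annealing fit in logarithmic scale, using the Carreau-Yasuda viscosity model; this is our own simplified implementation under stated assumptions, not the authors' code:

```python
import numpy as np

def carreau_yasuda(gdot, log_p):
    """eta(gdot); log_p holds log10(eta0, eta_inf, lambda, a), then n."""
    eta0, eta_inf, lam, a = 10.0 ** log_p[:4]
    n = log_p[4]  # power-law index kept in linear scale
    return eta_inf + (eta0 - eta_inf) * (1.0 + (lam * gdot) ** a) ** ((n - 1.0) / a)

def anneal(gdot, eta_obs, p0, n_iter=20000, t0=1.0, seed=0):
    """Simulated-annealing minimization of squared log-residuals."""
    rng = np.random.default_rng(seed)
    cost = lambda p: np.sum((np.log10(carreau_yasuda(gdot, p))
                             - np.log10(eta_obs)) ** 2)
    p, c = np.array(p0, float), cost(np.array(p0, float))
    for k in range(n_iter):
        T = t0 * (1.0 - k / n_iter)                   # linear cooling schedule
        q = p + rng.normal(scale=0.02, size=p.size)   # random local move
        cq = cost(q)
        if cq < c or rng.random() < np.exp(-(cq - c) / max(T, 1e-9)):
            p, c = q, cq                              # Metropolis acceptance
    return p
```

    Working in log10 keeps the step size meaningful for parameters spanning several decades, which is the point the abstract makes about logarithmic scales.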

  19. Determining the accuracy of maximum likelihood parameter estimates with colored residuals

    NASA Technical Reports Server (NTRS)

    Morelli, Eugene A.; Klein, Vladislav

    1994-01-01

    An important part of building high fidelity mathematical models based on measured data is calculating the accuracy associated with statistical estimates of the model parameters. Indeed, without some idea of the accuracy of parameter estimates, the estimates themselves have limited value. In this work, an expression based on theoretical analysis was developed to properly compute parameter accuracy measures for maximum likelihood estimates with colored residuals. This result is important because experience from the analysis of measured data reveals that the residuals from maximum likelihood estimation are almost always colored. The calculations involved can be appended to conventional maximum likelihood estimation algorithms. Simulated data runs were used to show that the parameter accuracy measures computed with this technique accurately reflect the quality of the parameter estimates from maximum likelihood estimation without the need for analysis of the output residuals in the frequency domain or heuristically determined multiplication factors. The result is general, although the application studied here is maximum likelihood estimation of aerodynamic model parameters from flight test data.

  20. FACTORS INFLUENCING TOTAL DIETARY EXPOSURES OF YOUNG CHILDREN

    EPA Science Inventory

    A deterministic model was developed to identify the critical input parameters needed to assess dietary intakes of young children. The model was used as a framework for understanding the important factors in data collection and data analysis. Factors incorporated into the model i...

  1. Using an ensemble smoother to evaluate parameter uncertainty of an integrated hydrological model of Yanqi basin

    NASA Astrophysics Data System (ADS)

    Li, Ning; McLaughlin, Dennis; Kinzelbach, Wolfgang; Li, WenPeng; Dong, XinGuang

    2015-10-01

    Model uncertainty needs to be quantified to provide objective assessments of the reliability of model predictions and of the risk associated with management decisions that rely on these predictions. This is particularly true in water resource studies that depend on model-based assessments of alternative management strategies. In recent decades, Bayesian data assimilation methods have been widely used in hydrology to assess uncertain model parameters and predictions. In this case study, a particular data assimilation algorithm, the Ensemble Smoother with Multiple Data Assimilation (ESMDA) (Emerick and Reynolds, 2012), is used to derive posterior samples of uncertain model parameters and forecasts for a distributed hydrological model of Yanqi basin, China. This model is constructed using MIKE SHE/MIKE 11 software, which provides for coupling between surface and subsurface processes (DHI, 2011a-d). The random samples in the posterior parameter ensemble are obtained by using measurements to update 50 prior parameter samples generated with a Latin Hypercube Sampling (LHS) procedure. The posterior forecast samples are obtained from model runs that use the corresponding posterior parameter samples. Two iterative sample update methods are considered: one based on a perturbed-observation Kalman filter update and one based on a square-root Kalman filter update. These alternatives give nearly the same results and converge in only two iterations. The uncertain parameters considered include hydraulic conductivities, drainage and river leakage factors, van Genuchten soil property parameters, and dispersion coefficients. The results show that the uncertainty in many of the parameters is reduced during the smoother updating process, reflecting information obtained from the observations. Some of the parameters are insensitive and do not benefit from measurement information. The correlation coefficients among certain parameters increase in each iteration, although they generally stay below 0.50.
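
    The ES-MDA update of Emerick and Reynolds repeats a perturbed-observation Kalman-type update Na times with the observation-error covariance inflated by coefficients alpha_i satisfying sum_i 1/alpha_i = 1. A minimal sketch with our own variable names; the forward model is a placeholder for the MIKE SHE run:

```python
import numpy as np

def esmda(forward, M, d_obs, C_e, n_assim=4, seed=0):
    """Ensemble Smoother with Multiple Data Assimilation.

    M: (n_ens, n_par) prior parameter ensemble; forward maps one
    parameter vector to predicted data. Each pass applies a
    perturbed-observation Kalman update with error covariance
    inflated by alpha, where sum(1/alpha) = 1 over the passes.
    """
    rng = np.random.default_rng(seed)
    alphas = [float(n_assim)] * n_assim          # common choice: equal alphas
    for alpha in alphas:
        D = np.array([forward(m) for m in M])    # (n_ens, n_obs) predictions
        dM = M - M.mean(axis=0)
        dD = D - D.mean(axis=0)
        C_md = dM.T @ dD / (len(M) - 1)          # cross-covariance
        C_dd = dD.T @ dD / (len(M) - 1)          # prediction covariance
        K = C_md @ np.linalg.inv(C_dd + alpha * C_e)
        obs = d_obs + np.sqrt(alpha) * rng.multivariate_normal(
            np.zeros(len(d_obs)), C_e, size=len(M))
        M = M + (obs - D) @ K.T                  # update every ensemble member
    return M
```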

  2. Modelling efforts needed to advance herpes simplex virus (HSV) vaccine development: Key findings from the World Health Organization Consultation on HSV Vaccine Impact Modelling.

    PubMed

    Gottlieb, Sami L; Giersing, Birgitte; Boily, Marie-Claude; Chesson, Harrell; Looker, Katharine J; Schiffer, Joshua; Spicknall, Ian; Hutubessy, Raymond; Broutet, Nathalie

    2017-06-21

    Development of a vaccine against herpes simplex virus (HSV) is an important goal for global sexual and reproductive health. In order to more precisely define the health and economic burden of HSV infection and the theoretical impact and cost-effectiveness of an HSV vaccine, in 2015 the World Health Organization convened an expert consultation meeting on HSV vaccine impact modelling. The experts reviewed existing model-based estimates and dynamic models of HSV infection to outline critical future modelling needs to inform development of a comprehensive business case and preferred product characteristics for an HSV vaccine. This article summarizes key findings and discussions from the meeting on modelling needs related to HSV burden, costs, and vaccine impact, essential data needs to carry out those models, and important model components and parameters.

  3. Tuning Parameters in Heuristics by Using Design of Experiments Methods

    NASA Technical Reports Server (NTRS)

    Arin, Arif; Rabadi, Ghaith; Unal, Resit

    2010-01-01

    With the growing complexity of today's large-scale problems, it has become more difficult to find optimal solutions using exact mathematical methods. The need to find near-optimal solutions in an acceptable time frame requires heuristic approaches. In many cases, however, heuristics have several parameters that need to be "tuned" before they can reach good results. The problem then turns into finding the best parameter setting for the heuristics to solve the problems efficiently and in a timely manner. The One-Factor-At-a-Time (OFAT) approach to parameter tuning neglects the interactions between parameters. Design of Experiments (DOE) tools can instead be employed to tune the parameters more effectively. In this paper, we seek the best parameter setting for a Genetic Algorithm (GA) to solve the single machine total weighted tardiness problem, in which n jobs must be scheduled on a single machine without preemption and the objective is to minimize the total weighted tardiness. Benchmark instances for the problem are available in the literature. To fine-tune the GA parameters in the most efficient way, we compare multiple DOE models, including 2-level (2^k) full factorial design, orthogonal array design, central composite design, D-optimal design and signal-to-noise (S/N) ratios. In each DOE method, a mathematical model is created using regression analysis and solved to obtain the best parameter setting. In verification runs using the tuned parameter settings, optimal solutions for multiple instances were found efficiently.
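
    For the simplest of these designs, a 2-level full factorial enumerates all 2^k combinations of low/high levels; each row is then one tuning experiment whose response feeds the regression model. A sketch with placeholder GA parameter levels, not the paper's actual values:

```python
from itertools import product

# Low/high levels for three GA parameters (placeholder values).
levels = {
    "pop_size":    (50, 200),
    "crossover_p": (0.6, 0.9),
    "mutation_p":  (0.01, 0.1),
}

# 2^k full factorial: every combination of low/high levels.
design = [dict(zip(levels, combo)) for combo in product(*levels.values())]

for run in design:   # 2^3 = 8 runs for k = 3 factors
    print(run)       # each run would be one GA tuning experiment
```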

  4. Mathematical Model to estimate the wind power using four-parameter Burr distribution

    NASA Astrophysics Data System (ADS)

    Liu, Sanming; Wang, Zhijie; Pan, Zhaoxu

    2018-03-01

    When the true probability distribution of wind speed at a given site needs to be described, the four-parameter Burr distribution is often more suitable than other common distributions. This paper introduces its important properties and characteristics. The application of the four-parameter Burr distribution to wind speed prediction is also discussed, and an expression for the probability distribution of wind turbine output power is derived.
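
    For reference, one common four-parameter (Burr XII) form, with shape parameters c and k, scale lambda and location gamma, is shown below; the notation is ours, and the paper's parameterization may differ:

```latex
F(v) = 1 - \left[1 + \left(\frac{v-\gamma}{\lambda}\right)^{c}\right]^{-k},
\qquad
f(v) = \frac{c\,k}{\lambda}\left(\frac{v-\gamma}{\lambda}\right)^{c-1}
       \left[1 + \left(\frac{v-\gamma}{\lambda}\right)^{c}\right]^{-k-1},
\qquad v > \gamma .
```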

  5. A global sensitivity analysis approach for morphogenesis models.

    PubMed

    Boas, Sonja E M; Navarro Jimenez, Maria I; Merks, Roeland M H; Blom, Joke G

    2015-11-21

    Morphogenesis is a developmental process in which cells organize into shapes and patterns. Complex, non-linear and multi-factorial models with images as output are commonly used to study morphogenesis. It is difficult to understand the relation between the uncertainty in the input and the output of such 'black-box' models, giving rise to the need for sensitivity analysis tools. In this paper, we introduce a workflow for a global sensitivity analysis approach to study the impact of single parameters and the interactions between them on the output of morphogenesis models. To demonstrate the workflow, we used a published, well-studied model of vascular morphogenesis. The parameters of this cellular Potts model (CPM) represent cell properties and behaviors that drive the mechanisms of angiogenic sprouting. The global sensitivity analysis correctly identified the dominant parameters in the model, consistent with previous studies. Additionally, the analysis provided information on the relative impact of single parameters and of interactions between them. This is very relevant because interactions of parameters impede the experimental verification of the predicted effect of single parameters. The parameter interactions, although of low impact, also provided new insights into the mechanisms of in silico sprouting. Finally, the analysis indicated that the model could be reduced by one parameter. We propose global sensitivity analysis as an alternative approach to study the mechanisms of morphogenesis. Comparison of the ranking of the impact of the model parameters to knowledge derived from experimental data and from manipulation experiments can help to falsify models and to find the operating mechanisms in morphogenesis. The workflow is applicable to all 'black-box' models, including high-throughput in vitro models in which output measures are affected by a set of experimental perturbations.

  6. Uncertainty quantification in LES of channel flow

    DOE PAGES

    Safta, Cosmin; Blaylock, Myra; Templeton, Jeremy; ...

    2016-07-12

    Here we present a Bayesian framework for estimating joint densities of large eddy simulation (LES) sub-grid scale model parameters based on canonical forced isotropic turbulence direct numerical simulation (DNS) data. The framework accounts for noise in the independent variables, and we present alternative formulations for accounting for discrepancies between model and data. To generate probability densities for flow characteristics, posterior densities for sub-grid scale model parameters are propagated forward through LES of channel flow and compared with DNS data. Synthesis of the calibration and prediction results demonstrates that the model parameters have an explicit filter-width dependence and are highly correlated. Discrepancies between DNS and calibrated LES results point to additional model-form inadequacies that need to be accounted for.

  7. Bayesian Parameter Inference and Model Selection by Population Annealing in Systems Biology

    PubMed Central

    Murakami, Yohei

    2014-01-01

    Parameter inference and model selection are very important for mathematical modeling in systems biology. Bayesian statistics can be used to conduct both parameter inference and model selection. In particular, the framework named approximate Bayesian computation is often used for parameter inference and model selection in systems biology. However, Monte Carlo methods need to be used to compute Bayesian posterior distributions. In addition, the posterior distributions of parameters are sometimes almost uniform or very similar to their prior distributions. In such cases, it is difficult to choose one specific parameter value with high credibility as the representative value of the distribution. To overcome these problems, we introduced one of the population Monte Carlo algorithms, population annealing. Although population annealing is usually used in statistical mechanics, we showed that it can be used to compute Bayesian posterior distributions in the approximate Bayesian computation framework. To deal with the unidentifiability of representative parameter values, we proposed to run the simulations with a parameter ensemble sampled from the posterior distribution, named the "posterior parameter ensemble". We showed that population annealing is an efficient and convenient algorithm for generating a posterior parameter ensemble. We also showed that simulations with the posterior parameter ensemble can not only reproduce the data used for parameter inference but also capture and predict data that were not used for parameter inference. Lastly, we introduced the marginal likelihood in the approximate Bayesian computation framework for Bayesian model selection. We showed that population annealing enables us to compute the marginal likelihood in the approximate Bayesian computation framework and to conduct model selection based on the Bayes factor. PMID:25089832

  8. Comparing an annual and daily time-step model for predicting field-scale P loss

    USDA-ARS?s Scientific Manuscript database

    Several models with varying degrees of complexity are available for describing P movement through the landscape. The complexity of these models is dependent on the amount of data required by the model, the number of model parameters needed to be estimated, the theoretical rigor of the governing equa...

  9. A software tool to assess uncertainty in transient-storage model parameters using Monte Carlo simulations

    USGS Publications Warehouse

    Ward, Adam S.; Kelleher, Christa A.; Mason, Seth J. K.; Wagener, Thorsten; McIntyre, Neil; McGlynn, Brian L.; Runkel, Robert L.; Payn, Robert A.

    2017-01-01

    Researchers and practitioners alike often need to understand and characterize how water and solutes move through a stream in terms of the relative importance of in-stream and near-stream storage and transport processes. In-channel and subsurface storage processes are highly variable in space and time and difficult to measure. Storage estimates are commonly obtained using transient-storage models (TSMs) of the experimentally obtained solute-tracer test data. The TSM equations represent key transport and storage processes with a suite of numerical parameters. Parameter values are estimated via inverse modeling, in which parameter values are iteratively changed until model simulations closely match observed solute-tracer data. Several investigators have shown that TSM parameter estimates can be highly uncertain. When this is the case, parameter values cannot be used reliably to interpret stream-reach functioning. However, authors of most TSM studies do not evaluate or report parameter certainty. Here, we present a software tool linked to the One-dimensional Transport with Inflow and Storage (OTIS) model that enables researchers to conduct uncertainty analyses via Monte-Carlo parameter sampling and to visualize uncertainty and sensitivity results. We demonstrate application of our tool to 2 case studies and compare our results to output obtained from more traditional implementation of the OTIS model. We conclude by suggesting best practices for transient-storage modeling and recommend that future applications of TSMs include assessments of parameter certainty to support comparisons and more reliable interpretations of transport processes.

  10. Quality of traffic flow on urban arterial streets and its relationship with safety.

    PubMed

    Dixit, Vinayak V; Pande, Anurag; Abdel-Aty, Mohamed; Das, Abhishek; Radwan, Essam

    2011-09-01

    The two-fluid model for vehicular traffic flow describes the traffic on arterials as a mix of stopped and running vehicles and gives the relationship between the vehicles' running speed and the fraction of running vehicles. The two parameters of the model essentially represent 'free flow' travel time and the level of interaction among vehicles, and may be used to evaluate urban roadway networks and urban corridors with partially limited access. These parameters are influenced not only by the roadway characteristics but also by behavioral aspects of the driver population, e.g., aggressiveness. Two-fluid models are estimated for eight arterial corridors in Orlando, FL for this study. The parameters of the two-fluid model were used to evaluate corridor-level operations and the correlations of these parameters with rates of crashes of different types and severities. Significant correlations were found between two-fluid parameters and rear-end and angle crash rates. The rate of severe crashes was also found to be significantly correlated with the model parameter signifying inter-vehicle interactions. While there is a need for further analysis, the findings suggest that the two-fluid model parameters may have potential as surrogate measures for traffic safety on urban arterial streets.
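
    For context, the two parameters referred to are usually written via the Herman-Prigogine two-fluid relations, in which the running time per unit distance T_r depends on the total trip time per unit distance T through the free-flow travel time T_m and the interaction exponent n (our rendering of the standard formulation):

```latex
T = T_r + T_s, \qquad T_r = T_m^{\frac{1}{n+1}}\, T^{\frac{n}{n+1}},
```

    where T_s is the stop time per unit distance; larger n indicates stronger interaction among vehicles.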

  11. Modeling Complex Equilibria in ITC Experiments: Thermodynamic Parameters Estimation for a Three Binding Site Model

    PubMed Central

    Le, Vu H.; Buscaglia, Robert; Chaires, Jonathan B.; Lewis, Edwin A.

    2013-01-01

    Isothermal Titration Calorimetry, ITC, is a powerful technique that can be used to estimate a complete set of thermodynamic parameters (e.g. Keq (or ΔG), ΔH, ΔS, and n) for a ligand binding interaction described by a thermodynamic model. Thermodynamic models are constructed by combining equilibrium constant, mass balance, and charge balance equations for the system under study. Commercial ITC instruments are supplied with software that includes a number of simple interaction models, for example one binding site, two binding sites, sequential sites, and n independent binding sites. More complex models, for example three or more binding sites, one site with multiple binding mechanisms, linked equilibria, or equilibria involving macromolecular conformational selection through ligand binding, need to be developed on a case-by-case basis by the ITC user. In this paper we provide an algorithm (and a link to our MATLAB program) for the non-linear regression analysis of a multiple binding site model with up to four overlapping binding equilibria. Error analysis demonstrates that fitting ITC data for multiple parameters (e.g. up to nine parameters in the three binding site model) yields thermodynamic parameters with acceptable accuracy. PMID:23262283

  12. A seasonal Bartlett-Lewis Rectangular Pulse model

    NASA Astrophysics Data System (ADS)

    Ritschel, Christoph; Agbéko Kpogo-Nuwoklo, Komlan; Rust, Henning; Ulbrich, Uwe; Névir, Peter

    2016-04-01

    Precipitation time series with a high temporal resolution are needed as input for several hydrological applications, e.g. river runoff or sewer system models. As adequate observational data sets are often unavailable, simulated precipitation series are used instead. Poisson-cluster models are commonly applied to generate such series, and it has been shown that this class of stochastic precipitation models reproduces important characteristics of observed rainfall well. For the gauge-based case study presented here, the Bartlett-Lewis rectangular pulse model (BLRPM) was chosen. Because certain model parameters vary with season in a midlatitude moderate climate, due to different rainfall mechanisms dominating in winter and summer, model parameters are typically estimated separately for individual seasons or individual months. Here, we suggest a simultaneous parameter estimation for the whole year under the assumption that the seasonal variation of the parameters can be described with harmonic functions. We use an observational precipitation series from Berlin with a high temporal resolution to exemplify the approach. We estimate BLRPM parameters with and without this seasonal extension and compare the results in terms of model performance and robustness of the estimation.
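
    A first-order harmonic description of the seasonal cycle of a BLRPM parameter theta_j on day t of the year could take the form below (our notation, sketching the general idea); three coefficients per parameter then describe the whole year instead of twelve monthly estimates:

```latex
\theta_j(t) = a_{j,0} + a_{j,1}\cos\!\left(\frac{2\pi t}{365.25}\right)
            + b_{j,1}\sin\!\left(\frac{2\pi t}{365.25}\right).
```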

  13. Dynamical compensation and structural identifiability of biological models: Analysis, implications, and reconciliation

    PubMed Central

    2017-01-01

    The concept of dynamical compensation has been recently introduced to describe the ability of a biological system to keep its output dynamics unchanged in the face of varying parameters. However, the original definition of dynamical compensation amounts to a lack of structural identifiability. This is relevant if model parameters need to be estimated, as is often the case in biological modelling. Care should be taken when using an unidentifiable model to extract biological insight: the estimated values of structurally unidentifiable parameters are meaningless, and model predictions about unmeasured state variables can be wrong. Taking this into account, we explore alternative definitions of dynamical compensation that do not necessarily imply structural unidentifiability. Accordingly, we show different ways in which a model can be made identifiable while exhibiting dynamical compensation. Our analyses enable the use of the new concept of dynamical compensation in the context of parameter identification, and reconcile it with the desirable property of structural identifiability. PMID:29186132

  14. A new methodology based on sensitivity analysis to simplify the recalibration of functional-structural plant models in new conditions.

    PubMed

    Mathieu, Amélie; Vidal, Tiphaine; Jullien, Alexandra; Wu, QiongLi; Chambon, Camille; Bayol, Benoit; Cournède, Paul-Henry

    2018-06-19

    Functional-structural plant models (FSPMs) explicitly describe the interactions between plants and their environment at organ to plant scale. However, the high level of description of the structure or model mechanisms makes this type of model very complex and hard to calibrate. A two-step methodology to facilitate the calibration process is proposed here. First, a global sensitivity analysis method was applied to the calibration loss function. It provided first-order and total-order sensitivity indices that allow parameters to be ranked by importance in order to select the most influential ones. Second, the Akaike information criterion (AIC) was used to quantify the model's quality of fit after calibration with different combinations of selected parameters. The model with the lowest AIC gives the best combination of parameters to select. This methodology was validated by calibrating the model on an independent data set (same cultivar, another year) with the parameters selected in the second step. All the parameters were set to their nominal value; only the most influential ones were re-estimated. Sensitivity analysis applied to the calibration loss function is a relevant method to highlight the most significant parameters in the estimation process. For the studied winter oilseed rape model, 11 out of 26 estimated parameters were selected. Then, the model could be recalibrated for a different data set by re-estimating only three parameters selected with the model selection method. Fitting only a small number of parameters dramatically increases the efficiency of recalibration, increases the robustness of the model and helps identify the principal sources of variation in varying environmental conditions. This innovative method still needs to be more widely validated but already suggests interesting avenues for improving the calibration of FSPMs.
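
    The AIC referred to trades goodness of fit against the number p of estimated parameters,

```latex
\mathrm{AIC} = 2p - 2\ln\hat{L},
```

    where \hat{L} is the maximized likelihood; the parameter combination with the lowest AIC is retained.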

  15. Comparative analysis of tree classification models for detecting fusarium oxysporum f. sp cubense (TR4) based on multi soil sensor parameters

    NASA Astrophysics Data System (ADS)

    Estuar, Maria Regina Justina; Victorino, John Noel; Coronel, Andrei; Co, Jerelyn; Tiausas, Francis; Señires, Chiara Veronica

    2017-09-01

    Use of wireless sensor networks and smartphone integration to monitor environmental parameters surrounding plantations is made possible by readily available and affordable sensors. Providing low-cost monitoring devices would be beneficial, especially to small farm owners, in a developing country like the Philippines, where agriculture covers a significant amount of the labor market. This study discusses the integration of wireless soil sensor devices and smartphones to create an application that uses multidimensional analysis to detect the presence or absence of plant disease. Specifically, soil sensors are designed to collect soil quality parameters in a sink node, from which the smartphone collects data via Bluetooth. Given these, there is a need to develop a classification model on the mobile phone that reports the infection status of the soil. Though tree classification is the most appropriate approach for continuous parameter-based datasets, there is a need to determine whether tree models will yield coherent results. Soil sensor data residing on the phone are modeled using several variations of decision tree, namely: decision tree (DT), best-fit (BF) decision tree, functional tree (FT), Naive Bayes (NB) decision tree, J48, J48graft and LAD tree, where the decision tree approach considers all sensor nodes as one. Results show that there are significant differences among soil sensor parameters, indicating variances in scores between the infected and uninfected sites. Furthermore, analysis of variance of the accuracy, recall, precision and F1 scores shows homogeneity among the NBTree, J48graft and J48 tree classification models.
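
    The Weka-style tree learners listed above (J48 and relatives) have close analogues in other libraries; a minimal sketch with scikit-learn's CART classifier, using synthetic stand-in soil readings rather than the study's sensor data:

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
# Stand-in data: rows are soil readings (e.g. moisture, pH, temperature);
# labels mark infected (1) versus uninfected (0) sites.
X = rng.normal(size=(200, 3))
y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=200) > 0).astype(int)

clf = DecisionTreeClassifier(max_depth=4, random_state=0)
scores = cross_val_score(clf, X, y, cv=5, scoring="f1")  # F1, as in the study
print(scores.mean())
```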

  16. Exploring theory space with Monte Carlo reweighting

    DOE PAGES

    Gainer, James S.; Lykken, Joseph; Matchev, Konstantin T.; ...

    2014-10-13

    Theories of new physics often involve a large number of unknown parameters which need to be scanned. Additionally, a putative signal in a particular channel may be due to a variety of distinct models of new physics. This makes experimental attempts to constrain the parameter space of motivated new physics models with a high degree of generality quite challenging. We describe how the reweighting of events may allow this challenge to be met, as fully simulated Monte Carlo samples generated for arbitrary benchmark models can be effectively re-used. Specifically, we suggest procedures that allow more efficient collaboration between theorists and experimentalists in exploring large theory parameter spaces in a rigorous way at the LHC.
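
    Matrix-element reweighting of this kind assigns each fully simulated event a weight given by the ratio of squared matrix elements evaluated at the new and original model points (schematic, in our notation):

```latex
w(\Phi) = \frac{\left|\mathcal{M}_{\mathrm{new}}(\Phi)\right|^{2}}
               {\left|\mathcal{M}_{\mathrm{old}}(\Phi)\right|^{2}},
```

    where \Phi denotes the event kinematics, so one simulated sample can be re-used across the theory parameter space.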

  17. Least-Squares Self-Calibration of Imaging Array Data

    NASA Technical Reports Server (NTRS)

    Arendt, R. G.; Moseley, S. H.; Fixsen, D. J.

    2004-01-01

    When arrays are used to collect multiple appropriately-dithered images of the same region of sky, the resulting data set can be calibrated using a least-squares minimization procedure that determines the optimal fit between the data and a model of that data. The model parameters include the desired sky intensities as well as instrument parameters such as pixel-to-pixel gains and offsets. The least-squares solution simultaneously provides the formal error estimates for the model parameters. With a suitable observing strategy, the need for separate calibration observations is reduced or eliminated. We show examples of this calibration technique applied to HST NICMOS observations of the Hubble Deep Fields and simulated SIRTF IRAC observations.
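
    Schematically, such a self-calibration models each datum as gain times sky plus offset and minimizes one global least-squares objective over all unknowns at once (our notation, a sketch of the general formulation):

```latex
D_i = G_{p(i)}\,S_{s(i)} + F_{p(i)} + \varepsilon_i,
\qquad
\chi^{2} = \sum_i \left[D_i - G_{p(i)}\,S_{s(i)} - F_{p(i)}\right]^{2},
```

    where p(i) is the detector pixel and s(i) the sky position sampled by datum i; setting the derivatives of \chi^{2} with respect to the gains, sky intensities, and offsets to zero yields coupled linear equations that are solved iteratively.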

  18. On approaches to analyze the sensitivity of simulated hydrologic fluxes to model parameters in the community land model

    DOE PAGES

    Bao, Jie; Hou, Zhangshuan; Huang, Maoyi; ...

    2015-12-04

    Here, effective sensitivity analysis approaches are needed to identify important parameters or factors and their uncertainties in complex Earth system models composed of multi-phase, multi-component phenomena and multiple biogeophysical-biogeochemical processes. In this study, the impacts of 10 hydrologic parameters in the Community Land Model on simulations of runoff and latent heat flux are evaluated using data from a watershed. Different metrics, including residual statistics, the Nash-Sutcliffe coefficient, and log mean square error, are used as alternative measures of the deviations between the simulated and field-observed values. Four sensitivity analysis (SA) approaches are investigated: analysis of variance based on the generalized linear model, generalized cross validation based on the multivariate adaptive regression splines model, standardized regression coefficients based on a linear regression model, and analysis of variance based on support vector machines. Results suggest that these approaches show consistent measurement of the impacts of major hydrologic parameters on the response variables, but with differences in the relative contributions, particularly for the secondary parameters. The convergence behaviors of the SA with respect to the number of sampling points are also examined with different combinations of input parameter sets and output response variables and their alternative metrics. This study helps identify the optimal SA approach, provides guidance for the calibration of Community Land Model parameters to improve the model simulations of land surface fluxes, and approximates the magnitudes to be adjusted in the parameter values during parametric model optimization.

  19. Toward Scientific Numerical Modeling

    NASA Technical Reports Server (NTRS)

    Kleb, Bil

    2007-01-01

    Ultimately, scientific numerical models need quantified output uncertainties so that modeling can evolve to better match reality. Documenting model input uncertainties and verifying that numerical models are translated into code correctly, however, are necessary first steps toward that goal. Without known input parameter uncertainties, model sensitivities are all one can determine, and without code verification, output uncertainties are simply not reliable. To address these two shortcomings, two proposals are offered: (1) an unobtrusive mechanism to document input parameter uncertainties in situ and (2) an adaptation of the Scientific Method to numerical model development and deployment. Because these two steps require changes in the computational simulation community to bear fruit, they are presented in terms of the Beckhard-Harris-Gleicher change model.

  20. Parameter Balancing in Kinetic Models of Cell Metabolism†

    PubMed Central

    2010-01-01

    Kinetic modeling of metabolic pathways has become a major field of systems biology. It combines structural information about metabolic pathways with quantitative enzymatic rate laws. Some of the kinetic constants needed for a model can be collected from the ever-growing literature and public web resources, but they are often incomplete, incompatible, or simply not available. We address this lack of information by parameter balancing, a method to complete given sets of kinetic constants. Based on Bayesian parameter estimation, it exploits the thermodynamic dependencies among different biochemical quantities to guess realistic model parameters from available kinetic data. Our algorithm accounts for varying measurement conditions in the input data (pH value and temperature). It can process kinetic constants and state-dependent quantities such as metabolite concentrations or chemical potentials, and it uses prior distributions and data augmentation to keep the estimated quantities within plausible ranges. An online service and free software for parameter balancing with models provided in SBML format (Systems Biology Markup Language) are accessible at www.semanticsbml.org. We demonstrate its practical use with a small model of the phosphofructokinase reaction and discuss its possible applications and limitations. In the future, parameter balancing could become an important routine step in the kinetic modeling of large metabolic networks. PMID:21038890

  1. A New Energy-Critical Plane Damage Parameter for Multiaxial Fatigue Life Prediction of Turbine Blades.

    PubMed

    Yu, Zheng-Yong; Zhu, Shun-Peng; Liu, Qiang; Liu, Yunhan

    2017-05-08

    As one of the fracture-critical components of an aircraft engine, accurate life prediction of the turbine blade to disk attachment is significant for ensuring engine structural integrity and reliability. Fatigue failure of a turbine blade often occurs under multiaxial cyclic loading at high temperature. In this paper, considering different failure types, a new energy-critical plane damage parameter is proposed for multiaxial fatigue life prediction, and no extra fitted material constants are needed for practical applications. Moreover, three multiaxial models with maximum damage parameters on the critical plane are evaluated under tension-compression and tension-torsion loadings. Experimental data for GH4169 under proportional and non-proportional fatigue loadings and a case study of a turbine disk-blade contact system are introduced for model validation. Results show that model predictions by the Wang-Brown (WB) and Fatemi-Socie (FS) models with maximum damage parameters are conservative and acceptable. For the turbine disk-blade contact system, both the proposed damage parameter and the Smith-Watson-Topper (SWT) model show reasonably acceptable correlations with the field number of flight cycles. However, life estimations of the turbine blade reveal that the definition of the maximum damage parameter is not reasonable for the WB model but is effective for both the FS and SWT models.

  2. A New Energy-Critical Plane Damage Parameter for Multiaxial Fatigue Life Prediction of Turbine Blades

    PubMed Central

    Yu, Zheng-Yong; Zhu, Shun-Peng; Liu, Qiang; Liu, Yunhan

    2017-01-01

    As one of the fracture-critical components of an aircraft engine, accurate life prediction of the turbine blade to disk attachment is significant for ensuring engine structural integrity and reliability. Fatigue failure of a turbine blade often occurs under multiaxial cyclic loading at high temperature. In this paper, considering different failure types, a new energy-critical plane damage parameter is proposed for multiaxial fatigue life prediction, and no extra fitted material constants are needed for practical applications. Moreover, three multiaxial models with maximum damage parameters on the critical plane are evaluated under tension-compression and tension-torsion loadings. Experimental data for GH4169 under proportional and non-proportional fatigue loadings and a case study of a turbine disk-blade contact system are introduced for model validation. Results show that model predictions by the Wang-Brown (WB) and Fatemi-Socie (FS) models with maximum damage parameters are conservative and acceptable. For the turbine disk-blade contact system, both the proposed damage parameter and the Smith-Watson-Topper (SWT) model show reasonably acceptable correlations with the field number of flight cycles. However, life estimations of the turbine blade reveal that the definition of the maximum damage parameter is not reasonable for the WB model but is effective for both the FS and SWT models. PMID:28772873

  3. Estimating system parameters for solvent-water and plant cuticle-water using quantum chemically estimated Abraham solute parameters.

    PubMed

    Liang, Yuzhen; Torralba-Sanchez, Tifany L; Di Toro, Dominic M

    2018-04-18

    Polyparameter Linear Free Energy Relationships (pp-LFERs) using Abraham system parameters have many useful applications. However, developing the Abraham system parameters depends on the availability and quality of the Abraham solute parameters. Using Quantum Chemically estimated Abraham solute Parameters (QCAP) is shown to produce pp-LFERs that have lower root mean square errors (RMSEs) of prediction for solvent-water partition coefficients than parameters estimated using other presently available methods. pp-LFER system parameters are estimated for solvent-water systems, plant cuticle-water systems, and novel compounds using QCAP solute parameters and experimental partition coefficients. Refitting the system parameters improves the calculation accuracy and eliminates bias. Refitted models for solvent-water partition coefficients using QCAP solute parameters give better results (RMSE = 0.278 to 0.506 log units for 24 systems) than those based on ABSOLV (0.326 to 0.618) and QSPR (0.294 to 0.700) solute parameters. For munition constituents and munition-like compounds not included in the calibration of the refitted model, QCAP solute parameters produce pp-LFER models with much lower RMSEs for solvent-water partition coefficients (RMSE = 0.734 and 0.664 for original and refitted model, respectively) than ABSOLV (4.46 and 5.98) and QSPR (2.838 and 2.723). Refitting the plant cuticle-water pp-LFER including munition constituents using QCAP solute parameters also results in lower RMSE (RMSE = 0.386) than that using ABSOLV (0.778) and QSPR (0.512) solute parameters. Therefore, for fitting a model in situations for which experimental data exist and system parameters can be re-estimated, or for which system parameters do not exist and need to be developed, QCAP is the quantum chemical method of choice.
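
    For reference, a pp-LFER predicts a partition coefficient from the five Abraham solute descriptors (E, S, A, B, V) via log K = c + eE + sS + aA + bB + vV; a minimal sketch of refitting system parameters by least squares, with hypothetical solute data:

      import numpy as np

      # Rows: hypothetical solutes with descriptors [E, S, A, B, V].
      X = np.array([[0.61, 0.52, 0.00, 0.48, 0.71],
                    [0.80, 0.60, 0.26, 0.41, 0.92],
                    [0.37, 0.45, 0.33, 0.56, 0.59],
                    [1.02, 0.88, 0.00, 0.20, 1.18],
                    [0.52, 0.38, 0.12, 0.30, 0.85],
                    [0.29, 0.90, 0.59, 0.40, 0.51],
                    [0.73, 0.65, 0.00, 0.64, 1.04],
                    [0.94, 0.40, 0.10, 0.12, 1.31]])
      logK = np.array([2.13, 1.54, 0.95, 3.32, 2.44, 0.40, 1.86, 3.90])

      A = np.column_stack([np.ones(len(X)), X])   # prepend the intercept c
      coef, *_ = np.linalg.lstsq(A, logK, rcond=None)

      rmse = np.sqrt(np.mean((A @ coef - logK) ** 2))
      print(coef, rmse)    # fitted system parameters [c, e, s, a, b, v]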

  3. Herding, minority game, market clearing and efficient markets in a simple spin model framework

    NASA Astrophysics Data System (ADS)

    Kristoufek, Ladislav; Vosvrda, Miloslav

    2018-01-01

    We present a novel approach to the financial Ising model. Most studies utilize the model to find settings that generate returns closely mimicking financial stylized facts such as fat tails, volatility clustering, and persistence. We tackle the model's utility from the other side and look for the combination of parameters that yields the return dynamics of an efficient market in the sense of the efficient market hypothesis. Working with the Ising model, we are able to present readily interpretable results, as the model is based on only two parameters. Apart from presenting the results of our simulation study, we offer a new interpretation of the Ising model parameters via inverse temperature and entropy. We show that market frictions (up to a certain level) and herding behavior of the market participants do not work against market efficiency; on the contrary, they are needed for the markets to be efficient.
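
    A toy illustration of the general setup, assuming a generic Metropolis-updated Ising lattice whose spins are buy/sell decisions and whose magnetization change serves as the return; this is a minimal sketch, not the authors' exact specification, and all constants are hypothetical:

      import numpy as np

      rng = np.random.default_rng(0)
      N, J, beta, sweeps = 16, 1.0, 0.8, 300   # lattice size, coupling, inverse temperature
      spins = rng.choice([-1, 1], size=(N, N)) # +1 = buy, -1 = sell

      def sweep(s):
          # One Metropolis sweep over random sites.
          for _ in range(N * N):
              i, j = rng.integers(N, size=2)
              nb = s[(i+1) % N, j] + s[(i-1) % N, j] + s[i, (j+1) % N] + s[i, (j-1) % N]
              dE = 2.0 * J * s[i, j] * nb
              if dE <= 0 or rng.random() < np.exp(-beta * dE):
                  s[i, j] *= -1

      returns, m_prev = [], spins.mean()
      for _ in range(sweeps):
          sweep(spins)
          m = spins.mean()
          returns.append(m - m_prev)   # return proxy: change in net demand
          m_prev = m

      returns = np.array(returns)
      print(returns.std(), np.abs(returns).mean())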

  4. Refining Reproductive Parameters for Modelling Sustainability and Extinction in Hunted Primate Populations in the Amazon

    PubMed Central

    Bowler, Mark; Anderson, Matt; Montes, Daniel; Pérez, Pedro; Mayor, Pedro

    2014-01-01

    Primates are frequently hunted in Amazonia. Assessing the sustainability of hunting is essential to conservation planning. The most-used sustainability model, the ‘Production Model’, and more recent spatial models, rely on basic reproductive parameters for accuracy. These parameters are often crudely estimated. To date, parameters used for the Amazon’s most-hunted primate, the woolly monkey (Lagothrix spp.), come from captive populations in the 1960s, when captive births were rare. Furthermore, woolly monkeys have since been split into five species. We provide reproductive parameters calculated by examining the reproductive organs of female Poeppig’s woolly monkeys (Lagothrix poeppigii), collected by hunters as part of their normal subsistence activity. Production was 0.48–0.54 young per female per year, and the interbirth interval was 22.3 to 25.2 months, similar to parameters from captive populations. However, breeding was seasonal, which imposes limits on the maximum reproductive rate attainable. We recommend the use of spatial models over the Production Model, since they are less sensitive to error in estimated reproductive rates. Further refinements to reproductive parameters are needed for most primate taxa. Methods like ours verify the suitability of captive reproductive rates for sustainability analysis and population modelling for populations under differing conditions of hunting pressure and seasonality. Without such research, population modelling is based largely on guesswork. PMID:24714614

  5. Active subspace uncertainty quantification for a polydomain ferroelectric phase-field model

    NASA Astrophysics Data System (ADS)

    Leon, Lider S.; Smith, Ralph C.; Miles, Paul; Oates, William S.

    2018-03-01

    Quantum-informed ferroelectric phase-field models capable of predicting material behavior are necessary for facilitating the development and production of many adaptive structures and intelligent systems. Uncertainty is present in these models, given the quantum scale at which calculations take place. A first analysis is to determine how the uncertainty in the response can be attributed to the uncertainty in the model inputs or parameters. A second analysis is to identify active subspaces within the original parameter space, which quantify the directions in which the model response varies most dominantly, thus reducing sampling effort and computational cost. In this investigation, we identify an active subspace for a polydomain ferroelectric phase-field model. Using the active variables as our independent variables, we then construct a surrogate model and perform Bayesian inference. Once we quantify the uncertainties in the active variables, we obtain uncertainties for the original parameters via an inverse mapping. The analysis provides insight into how active subspace methodologies can be used to reduce the computational power needed to perform Bayesian inference on model parameters informed by experimental or simulated data.
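
    A minimal sketch of the basic active subspace construction (average the outer products of sampled gradients, then eigendecompose and keep the dominant directions); the response function here is hypothetical:

      import numpy as np

      rng = np.random.default_rng(1)

      def grad_f(x):
          # Gradient of a hypothetical model response f(x) = exp(w @ x);
          # in practice this comes from adjoints or finite differences.
          w = np.array([0.9, 0.4, 0.05, 0.01])
          return np.exp(w @ x) * w

      X = rng.uniform(-1, 1, size=(500, 4))   # samples in the input space
      G = np.array([grad_f(x) for x in X])

      C = G.T @ G / len(G)                    # C = E[grad f grad f^T]
      eigval, eigvec = np.linalg.eigh(C)
      order = np.argsort(eigval)[::-1]
      eigval, eigvec = eigval[order], eigvec[:, order]

      k = 1                                   # large spectral gap -> 1D subspace
      W1 = eigvec[:, :k]                      # active directions
      y = X @ W1                              # active variables for a surrogate
      print(eigval, W1.ravel())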

  6. Application of tire dynamics to aircraft landing gear design analysis

    NASA Technical Reports Server (NTRS)

    Black, R. J.

    1983-01-01

    The tire plays a key part in many analyses used for the design of aircraft landing gear. Examples include structural design of wheels, landing gear shimmy, brake whirl, chatter and squeal, complex combination of chatter and shimmy on main landing gear (MLG) systems, anti-skid performance, gear walk, and rough terrain loads and performance. Tire parameters needed in the various analyses are discussed. Two tire models are discussed for shimmy analysis, the modified Moreland approach and the von Schlippe-Dietrich approach. It is shown that the Moreland model can be derived from the von Schlippe-Dietrich model by certain approximations. The remaining analysis areas are discussed in general terms and the tire parameters needed for each are identified. Accurate tire data allows more accurate design analysis and the correct prediction of the dynamic performance of aircraft landing gear.

  7. Mathematical modeling of synthetic unit hydrograph case study: Citarum watershed

    NASA Astrophysics Data System (ADS)

    Islahuddin, Muhammad; Sukrainingtyas, Adiska L. A.; Kusuma, M. Syahril B.; Soewono, Edy

    2015-09-01

    Deriving the unit hydrograph is very important in analyzing a watershed's hydrologic response to a rainfall event. In most cases, the hourly streamflow measurements needed to derive a unit hydrograph are not available. Hence, one needs methods for deriving unit hydrographs for ungauged watersheds. The methods that have evolved are based on theoretical or empirical formulas relating hydrograph peak discharge and timing to watershed characteristics; these are usually referred to as synthetic unit hydrographs. In this paper, a gamma probability density function and its variant are used as mathematical approximations of a unit hydrograph for the Citarum Watershed. The model is adjusted to real field conditions by translation and scaling. Optimal parameters are determined using the Particle Swarm Optimization method with a weighted objective function. With these models, a synthetic unit hydrograph can be developed and hydrologic parameters can be well predicted.
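
    A minimal sketch of a gamma-shaped synthetic unit hydrograph and its convolution with an excess-rainfall hyetograph; the shape, scale, lag, and rainfall values are hypothetical:

      import numpy as np
      from scipy.stats import gamma

      dt = 1.0                                   # time step (h)
      t = np.arange(0, 48, dt)

      # Gamma-PDF unit hydrograph, scaled and translated to fit the watershed.
      n, k, t0, A = 3.5, 2.0, 1.0, 120.0         # shape, scale (h), lag (h), area factor
      uh = A * gamma.pdf(t - t0, a=n, scale=k)   # discharge per unit excess rainfall

      rain = np.zeros_like(t)                    # excess rainfall hyetograph (mm/h)
      rain[2:5] = [4.0, 9.0, 3.0]

      q = np.convolve(rain, uh)[:len(t)] * dt    # direct runoff hydrograph
      print(t[np.argmax(q)], q.max())            # time to peak and peak discharge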

  8. Interdisciplinary Modeling and Dynamics of Archipelago Straits

    DTIC Science & Technology

    2009-01-01

    modeling, tidal modeling and multi-dynamics nested domains and non-hydrostatic modeling. WORK COMPLETED: Realistic Multiscale Simulations, Real-time... six state variables (chlorophyll, nitrate, ammonium, detritus, phytoplankton, and zooplankton) were needed to initialize simulations. Using biological... parameters from literature, climatology from World Ocean Atlas data for nitrate and chlorophyll profiles extracted from satellite data, a first

  9. Mixed H2/H∞-Based Fusion Estimation for Energy-Limited Multi-Sensors in Wearable Body Networks

    PubMed Central

    Li, Chao; Zhang, Zhenjiang; Chao, Han-Chieh

    2017-01-01

    In wireless sensor networks, sensor nodes collect large amounts of data in each time period. If all of the data were transmitted to a Fusion Center (FC), the sensor nodes' power would run out rapidly. On the other hand, the data also need filtering to remove noise. Therefore, an efficient fusion estimation model that can save the energy of the sensor nodes while maintaining high accuracy is needed. This paper proposes a novel mixed H2/H∞-based energy-efficient fusion estimation model (MHEEFE) for energy-limited Wearable Body Networks. In the proposed model, the communication cost is first reduced efficiently while keeping the estimation accuracy. Then, the parameters of the quantization method are discussed and confirmed by an optimization method with some prior knowledge. In addition, calculation methods for important parameters are investigated, which make the final estimates more stable. Finally, an iteration-based weight-calculation algorithm is presented, which improves the fault tolerance of the final estimate. In the simulation, the impacts of some pivotal parameters are discussed. Meanwhile, compared with other related models, the MHEEFE shows better performance in accuracy, energy-efficiency, and fault tolerance. PMID:29280950

  10. Application of PBPK modelling in drug discovery and development at Pfizer.

    PubMed

    Jones, Hannah M; Dickins, Maurice; Youdim, Kuresh; Gosset, James R; Attkins, Neil J; Hay, Tanya L; Gurrell, Ian K; Logan, Y Raj; Bungay, Peter J; Jones, Barry C; Gardner, Iain B

    2012-01-01

    Early prediction of human pharmacokinetics (PK) and drug-drug interactions (DDI) in drug discovery and development allows for more informed decision making. Physiologically based pharmacokinetic (PBPK) modelling can be used to answer a number of questions throughout the process of drug discovery and development and is thus becoming a very popular tool. PBPK models provide the opportunity to integrate key input parameters from different sources to not only estimate PK parameters and plasma concentration-time profiles, but also to gain mechanistic insight into compound properties. Using examples from the literature and our own company, we have shown how PBPK techniques can be utilized throughout the stages of drug discovery and development to increase efficiency, reduce the need for animal studies, replace clinical trials, and increase PK understanding. Given the mechanistic nature of these models, the future use of PBPK modelling in drug discovery and development is promising; however, some limitations need to be addressed to realize its application and utility more broadly.

  11. Method of Individual Forecasting of Technical State of Logging Machines

    NASA Astrophysics Data System (ADS)

    Kozlov, V. G.; Gulevsky, V. A.; Skrypnikov, A. V.; Logoyda, V. S.; Menzhulova, A. S.

    2018-03-01

    Developing a model that evaluates the possibility of failure requires knowledge of how the technical-condition parameters of machines change during use. Studying these regularities requires stochastic models that take into account the physics of the degradation of the machines' structural elements, the technology of their production, the stochastic properties of the technical-state parameters, and the conditions and modes of operation.

  12. Large-Scale Aerosol Modeling and Analysis

    DTIC Science & Technology

    2009-09-30

    Modeling of Burning Emissions (FLAMBE) project, and other related parameters. Our plans to embed NAAPS inside NOGAPS may need to be put on hold... AOD, FLAMBE and FAROP at FNMOC are supported by 6.4 funding from PMW-120 for “Large-scale Atmospheric Models”, “Small-scale Atmospheric Models

  13. A hybrid optimization approach to the estimation of distributed parameters in two-dimensional confined aquifers

    USGS Publications Warehouse

    Heidari, M.; Ranjithan, S.R.

    1998-01-01

    In using non-linear optimization techniques for estimation of parameters in a distributed ground water model, the initial values of the parameters and prior information about them play important roles. In this paper, the genetic algorithm (GA) is combined with the truncated-Newton search technique to estimate groundwater parameters for a confined steady-state ground water model. Use of prior information about the parameters is shown to be important in estimating correct or near-correct values of parameters on a regional scale. The amount of prior information needed for an accurate solution is estimated by evaluation of the sensitivity of the performance function to the parameters. For the example presented here, it is experimentally demonstrated that only one piece of prior information of the least sensitive parameter is sufficient to arrive at the global or near-global optimum solution. For hydraulic head data with measurement errors, the error in the estimation of parameters increases as the standard deviation of the errors increases. Results from our experiments show that, in general, the accuracy of the estimated parameters depends on the level of noise in the hydraulic head data and the initial values used in the truncated-Newton search technique.
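
    A minimal sketch of the hybrid idea, with SciPy's differential evolution standing in for the GA and a hypothetical misfit function in place of the groundwater model:

      import numpy as np
      from scipy.optimize import differential_evolution, minimize

      def misfit(p):
          # Hypothetical performance function: squared error between simulated
          # and observed heads; a real study would run the groundwater model here.
          target = np.array([3.0, -1.5, 0.7])
          return float(np.sum((p - target) ** 2) + 0.1 * np.sum(np.sin(5 * p) ** 2))

      bounds = [(-5, 5)] * 3

      # Stage 1: global evolutionary search (stands in for the GA).
      ga = differential_evolution(misfit, bounds, seed=0, maxiter=50, tol=1e-6)

      # Stage 2: local truncated-Newton refinement from the best individual.
      tn = minimize(misfit, ga.x, method="TNC", bounds=bounds)
      print(ga.x, tn.x, tn.fun)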

  14. Communications network design and costing model users manual

    NASA Technical Reports Server (NTRS)

    Logan, K. P.; Somes, S. S.; Clark, C. A.

    1983-01-01

    The information and procedures needed to exercise the communications network design and costing model for performing network analysis are presented. Specific procedures are included for executing the model on the NASA Lewis Research Center IBM 3033 computer. The concepts, functions, and data bases relating to the model are described. Model parameters and their format specifications for running the model are detailed.

  15. Assessment of uncertainties of the models used in thermal-hydraulic computer codes

    NASA Astrophysics Data System (ADS)

    Gricay, A. S.; Migrov, Yu. A.

    2015-09-01

    The article addresses the problem of determining the statistical characteristics of variable parameters (the variation range and distribution law) when analyzing the uncertainty and sensitivity of calculation results to uncertainty in input data. A comparative analysis of modern approaches to uncertainty in input data is presented. The need to develop an alternative method for estimating the uncertainty of model parameters used in thermal-hydraulic computer codes, in particular in the closing correlations of the loop thermal hydraulics block, is shown. Such a method should involve a minimal degree of subjectivity and must be based on objective quantitative assessment criteria. The method includes three sequential stages: selecting experimental data satisfying the specified criteria, identifying the key closing correlation using a sensitivity analysis, and carrying out case calculations followed by statistical processing of the results. By using the method, one can estimate the uncertainty range of a variable parameter and establish its distribution law in that range, provided that the experimental information is sufficiently representative. Practical application of the method is demonstrated using the example of estimating the uncertainty of a parameter appearing in the model describing transition to post-burnout heat transfer that is used in the thermal-hydraulic computer code KORSAR. The study revealed the need to narrow the previously established uncertainty range of this parameter and to replace the uniform distribution law in that range with a Gaussian distribution law. The proposed method can be applied to different thermal-hydraulic computer codes. In some cases, application of the method can make it possible to achieve a smaller degree of conservatism in the expert estimates of uncertainties pertinent to the model parameters used in computer codes.

  16. Improving flood forecasting capability of physically based distributed hydrological model by parameter optimization

    NASA Astrophysics Data System (ADS)

    Chen, Y.; Li, J.; Xu, H.

    2015-10-01

    Physically based distributed hydrological models discretize the terrain of the whole catchment into a number of grid cells at fine resolution, assimilate different terrain data and precipitation to different cells, and are regarded as having the potential to improve the simulation and prediction of catchment hydrological processes. In the early stage, physically based distributed hydrological models were assumed to derive model parameters from terrain properties directly, so there was no need to calibrate model parameters; unfortunately, the uncertainty associated with this parameter derivation is very high, which has impeded their application in flood forecasting, so parameter optimization may also be necessary. There are two main purposes for this study: the first is to propose a parameter optimization method for physically based distributed hydrological models in catchment flood forecasting using the PSO algorithm, to test its competence, and to improve its performance; the second is to explore the possibility of improving the capability of physically based distributed hydrological models in catchment flood forecasting by parameter optimization. In this paper, based on the scalar concept, a general framework for parameter optimization of PBDHMs for catchment flood forecasting is first proposed that could be used for all PBDHMs. Then, with the Liuxihe model as the study model, which is a physically based distributed hydrological model proposed for catchment flood forecasting, an improved Particle Swarm Optimization (PSO) algorithm is developed for parameter optimization of the Liuxihe model in catchment flood forecasting; the improvements include adopting the linearly decreasing inertia weight strategy to change the inertia weight and the arccosine function strategy to adjust the acceleration coefficients. This method has been tested in two catchments of different sizes in southern China, and the results show that the improved PSO algorithm can be used for Liuxihe model parameter optimization effectively and can largely improve the model's capability in catchment flood forecasting, thus proving that parameter optimization is necessary to improve the flood forecasting capability of physically based distributed hydrological models. It was also found that the appropriate particle number and maximum evolution number of the PSO algorithm for Liuxihe model catchment flood forecasting are 20 and 30, respectively.
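
    A minimal sketch of PSO with the linearly decreasing inertia weight mentioned above (the arccosine schedule for the acceleration coefficients is omitted, and the objective and constants are hypothetical):

      import numpy as np

      rng = np.random.default_rng(0)

      def objective(p):
          # Hypothetical calibration misfit; a real study would run the
          # hydrological model and score simulated against observed flow.
          return np.sum((p - np.array([0.3, -1.2, 2.5])) ** 2, axis=-1)

      n_particles, n_dim, n_iter = 20, 3, 30
      w_max, w_min, c1, c2 = 0.9, 0.4, 2.0, 2.0
      lo, hi = -5.0, 5.0

      x = rng.uniform(lo, hi, (n_particles, n_dim))
      v = np.zeros_like(x)
      pbest, pbest_val = x.copy(), objective(x)
      g = pbest[np.argmin(pbest_val)]

      for it in range(n_iter):
          w = w_max - (w_max - w_min) * it / (n_iter - 1)   # linear decrease
          r1, r2 = rng.random(x.shape), rng.random(x.shape)
          v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
          x = np.clip(x + v, lo, hi)
          val = objective(x)
          better = val < pbest_val
          pbest[better], pbest_val[better] = x[better], val[better]
          g = pbest[np.argmin(pbest_val)]

      print(g, pbest_val.min())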

  17. Adaptive control based on retrospective cost optimization

    NASA Technical Reports Server (NTRS)

    Bernstein, Dennis S. (Inventor); Santillo, Mario A. (Inventor)

    2012-01-01

    A discrete-time adaptive control law for stabilization, command following, and disturbance rejection that is effective for systems that are unstable, MIMO, and/or nonminimum phase. The adaptive control algorithm includes guidelines concerning the modeling information needed for implementation. This information includes the relative degree, the first nonzero Markov parameter, and the nonminimum-phase zeros. Except when the plant has nonminimum-phase zeros whose absolute value is less than the plant's spectral radius, the required zero information can be approximated by a sufficient number of Markov parameters. No additional information about the poles or zeros need be known. Numerical examples are presented to illustrate the algorithm's effectiveness in handling systems with errors in the required modeling data, unknown latency, sensor noise, and saturation.
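
    For context, the Markov parameters mentioned above are the impulse-response coefficients H_0 = D and H_k = C A^(k-1) B of the plant; a minimal sketch for a hypothetical discrete-time state-space model:

      import numpy as np

      # Hypothetical plant: x[k+1] = A x[k] + B u[k], y[k] = C x[k] + D u[k].
      A = np.array([[0.9, 0.2],
                    [0.0, 0.7]])
      B = np.array([[0.0],
                    [1.0]])
      C = np.array([[1.0, 0.0]])
      D = np.array([[0.0]])

      def markov_parameters(A, B, C, D, n):
          # H_0 = D, H_k = C A^(k-1) B for k >= 1.
          H = [D]
          Ak = np.eye(A.shape[0])
          for _ in range(1, n):
              H.append(C @ Ak @ B)
              Ak = Ak @ A
          return H

      for k, Hk in enumerate(markov_parameters(A, B, C, D, 5)):
          print(k, Hk.ravel())   # here H_2 is the first nonzero Markov parameter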

  18. Comment on "Evaluation of a physically based quasi-linear and a conceptually based nonlinear Muskingum methods" [J. Hydrol., 546, 437-449, 10.1016/j.jhydrol.2017.01.025]

    NASA Astrophysics Data System (ADS)

    Barati, Reza

    2017-07-01

    Perumal et al. (2017) compared the performance of the variable parameter McCarthy-Muskingum (VPMM) model of Perumal and Price (2013) and the nonlinear Muskingum (NLM) model of Gill (1978) using hypothetical inflow hydrographs in an artificial channel. As input, the first model needs the initial condition, the upstream boundary condition, Manning's roughness coefficient, the length of the routing reach, the cross-sections of the river reach, and the bed slope, while the latter requires the initial condition, the upstream boundary condition, and the hydrologic parameters (three parameters that can be calibrated using flood hydrographs of the upstream and downstream sections). The VPMM model was examined with available Manning's roughness values, whereas the NLM model was tested in both calibration and validation steps. As a final conclusion, Perumal et al. (2017) claimed that the NLM model should be retired from the literature on the Muskingum model. While the authors' intention is laudable, this comment examines some important issues in the subject matter of the original study.
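
    For reference, Gill's nonlinear Muskingum model stores S = K[xI + (1 - x)O]^m and routes flow through the continuity equation dS/dt = I - O; a minimal explicit-routing sketch with hypothetical parameter values:

      import numpy as np

      K, x, m = 0.8, 0.25, 1.8             # hypothetical calibrated NLM parameters
      dt = 0.1                             # time step (h)
      t = np.arange(0, 24, dt)
      inflow = 20 + 80 * np.exp(-((t - 6.0) / 2.5) ** 2)   # synthetic flood wave

      outflow = np.empty_like(inflow)
      outflow[0] = inflow[0]               # initial condition: steady state
      S = K * (x * inflow[0] + (1 - x) * outflow[0]) ** m  # initial storage

      for i in range(len(t) - 1):
          S += dt * (inflow[i] - outflow[i])               # continuity: dS/dt = I - O
          outflow[i + 1] = ((S / K) ** (1 / m) - x * inflow[i + 1]) / (1 - x)

      print(inflow.max(), outflow.max())   # attenuation of the flood peak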

  19. Atomic Radius and Charge Parameter Uncertainty in Biomolecular Solvation Energy Calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Xiu; Lei, Huan; Gao, Peiyuan

    Atomic radii and charges are two major parameters used in implicit solvent electrostatics and energy calculations. The optimization problem for charges and radii is under-determined, leading to uncertainty in the values of these parameters and in the results of solvation energy calculations using these parameters. This paper presents a method for quantifying this uncertainty in solvation energies using surrogate models based on generalized polynomial chaos (gPC) expansions. There are relatively few atom types used to specify radii parameters in implicit solvation calculations; therefore, surrogate models for these low-dimensional spaces could be constructed using least-squares fitting. However, there are many more types of atomic charges; therefore, construction of surrogate models for the charge parameter space required compressed sensing combined with an iterative rotation method to enhance problem sparsity. We present results for the uncertainty in small molecule solvation energies based on these approaches. Additionally, we explore the correlation between uncertainties due to radii and charges, which motivates the need for future work in uncertainty quantification methods for high-dimensional parameter spaces.
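
    A minimal sketch of a one-dimensional gPC surrogate, using probabilists' Hermite polynomials in a standard-normal parameter and least-squares coefficient fitting; the response function is hypothetical:

      import math
      import numpy as np
      from numpy.polynomial.hermite_e import hermevander

      rng = np.random.default_rng(0)

      def model(z):
          # Hypothetical solvation-energy response to a normalized parameter z.
          return -75.0 + 3.0 * z + 0.8 * z**2 - 0.1 * z**3

      deg = 5
      z = rng.standard_normal(400)        # samples of the uncertain parameter
      y = model(z)

      Phi = hermevander(z, deg)           # He_0..He_5 basis evaluated at samples
      coef, *_ = np.linalg.lstsq(Phi, y, rcond=None)

      # Moments follow from orthogonality: E[f] = c0, Var[f] = sum_{k>=1} k! c_k^2.
      mean = coef[0]
      var = sum(math.factorial(k) * coef[k] ** 2 for k in range(1, deg + 1))
      print(mean, var, y.mean(), y.var())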

  1. Literature review of outcome parameters used in studies of Geriatric Fracture Centers.

    PubMed

    Liem, I S L; Kammerlander, C; Suhm, N; Kates, S L; Blauth, M

    2014-02-01

    A variety of multidisciplinary treatment models have been described to improve outcomes after osteoporotic hip fractures. There is a tendency toward better outcomes after implementation of the most sophisticated model, with shared leadership between orthopedic surgeons and geriatricians: the Geriatric Fracture Center. The purpose of this review is to evaluate the use of outcome parameters in the published literature on Geriatric Fracture Center evaluation studies. A literature search was performed using Medline and the Cochrane Library to identify Geriatric Fracture Center evaluation studies, and the outcome parameters used in the included studies were evaluated. A total of 16 outcome parameters were used in 11 studies to evaluate patient outcome in 8 different Geriatric Fracture Centers. Two of these outcome parameters are patient-reported outcome measures and 14 are objective measures. In-hospital mortality, length of stay, time to surgery, place of residence, and complication rate are the most frequently used outcome parameters. The patient-reported outcomes included activities of daily living and mobility scores. There is a need for generally agreed upon outcome measures to facilitate comparison of different care models.

  2. Sensitivity analysis of pulse pileup model parameter in photon counting detectors

    NASA Astrophysics Data System (ADS)

    Shunhavanich, Picha; Pelc, Norbert J.

    2017-03-01

    Photon counting detectors (PCDs) may provide several benefits over energy-integrating detectors (EIDs), including spectral information for tissue characterization and the elimination of electronic noise. PCDs, however, suffer from pulse pileup, which distorts the detected spectrum and degrades the accuracy of material decomposition. Several analytical models have been proposed to address this problem. The performance of these models depends on the assumptions used, including the estimated pulse shape, whose parameter values could differ from the actual physical ones. As the incident flux increases and the corrections become more significant, the accuracy of the needed parameter values becomes more crucial. In this work, the sensitivity to model parameter accuracy is analyzed for the pileup model of Taguchi et al. The spectra distorted by pileup at different count rates are simulated using either the model or Monte Carlo simulations, and the basis material thicknesses are estimated by minimizing the negative log-likelihood with Poisson or multivariate Gaussian distributions. From the simulation results, we find that the accuracy of the deadtime, the height of the pulse's negative tail, and the timing of the end of the pulse are more important than most other parameters, and they matter more with increasing count rate. This result can help facilitate further work on parameter calibration.
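
    For intuition, the textbook deadtime models relate the true count rate n to the recorded rate m; a minimal sketch comparing the paralyzable and non-paralyzable cases for a hypothetical deadtime:

      import numpy as np

      tau = 20e-9                       # hypothetical detector deadtime (s)
      n = np.logspace(5, 8, 7)          # true photon rates (counts/s)

      m_nonpar = n / (1.0 + n * tau)    # non-paralyzable: losses saturate
      m_par = n * np.exp(-n * tau)      # paralyzable: throughput peaks at n = 1/tau

      for ni, a, b in zip(n, m_nonpar, m_par):
          print(f"{ni:9.3g}  {a:9.3g}  {b:9.3g}")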

  3. Validation of a mathematical model of the bovine estrous cycle for cows with different estrous cycle characteristics.

    PubMed

    Boer, H M T; Butler, S T; Stötzel, C; Te Pas, M F W; Veerkamp, R F; Woelders, H

    2017-11-01

    A recently developed mechanistic mathematical model of the bovine estrous cycle was parameterized to fit empirical data sets collected during one estrous cycle of 31 individual cows, with the main objective of further validating the model. The a priori criteria for validation were (1) that the resulting model can simulate the measured data correctly (i.e., goodness of fit), and (2) that this is achieved without needing extreme, probably non-physiological parameter values. We used a least squares optimization procedure to identify parameter configurations for the mathematical model to fit the empirical in vivo measurements of follicle and corpus luteum sizes, and the plasma concentrations of progesterone, estradiol, FSH and LH for each cow. The model was capable of accommodating normal variation in estrous cycle characteristics of individual cows. With the parameter sets estimated for the individual cows, the model behavior changed for 21 cows, with improved fit of the simulated output curves for 18 of these 21 cows. Moreover, the number of follicular waves was predicted correctly for 18 of the 25 two-wave and three-wave cows, without extreme parameter value changes. Estimation of specific parameters confirmed results of previous model simulations indicating that parameters involved in luteolytic signaling are very important for regulation of general estrous cycle characteristics, and are likely responsible for differences in estrous cycle characteristics between cows.

  4. Automatic Calibration of a Semi-Distributed Hydrologic Model Using Particle Swarm Optimization

    NASA Astrophysics Data System (ADS)

    Bekele, E. G.; Nicklow, J. W.

    2005-12-01

    Hydrologic simulation models need to be calibrated and validated before they are used for operational predictions. Spatially-distributed hydrologic models generally have a large number of parameters to capture the various physical characteristics of a hydrologic system. Manual calibration of such models is a very tedious and daunting task, and its success depends on the subjective assessment of the particular modeler, which includes knowledge of the basic approaches and interactions in the model. In order to alleviate these shortcomings, an automatic calibration model, which employs an evolutionary optimization technique known as the Particle Swarm Optimizer (PSO) for parameter estimation, is developed. PSO is a heuristic search algorithm inspired by the social behavior of bird flocking and fish schooling. The newly developed calibration model is integrated with the U.S. Department of Agriculture's Soil and Water Assessment Tool (SWAT). SWAT is a physically-based, semi-distributed hydrologic model that was developed to predict the long-term impacts of land management practices on water, sediment, and agricultural chemical yields in large complex watersheds with varying soils, land use, and management conditions. SWAT was calibrated for streamflow and sediment concentration. The calibration process involves parameter specification, whereby sensitive model parameters are identified, and parameter estimation. In order to reduce the number of parameters to be calibrated, parameterization was performed. The methodology is applied to a demonstration watershed known as Big Creek, located in southern Illinois. The application results show the effectiveness of the approach, and model predictions are significantly improved.

  5. High temperature superconductors applications in telecommunications

    NASA Technical Reports Server (NTRS)

    Kumar, A. Anil; Li, Jiang; Zhang, Ming Fang

    1995-01-01

    The purpose of this paper is twofold: (1) to discuss high temperature superconductors with specific reference to their employment in telecommunications applications; and (2) to discuss a few of the limitations of the normally employed two-fluid model. While the debate on the actual usage of high temperature superconductors in the design of electronic and telecommunications devices - obvious advantages versus practical difficulties - needs to be settled in the near future, it is of great interest to investigate the parameters and the assumptions that will be employed in such designs. This paper deals with the issue of providing the microwave design engineer with performance data for such superconducting waveguides. The values of conductivity and surface resistance, which are the primary determining factors of waveguide performance, are computed based on the two-fluid model. A comparison between two models - a theoretical one in terms of microscopic parameters (termed Model A) and an experimental fit in terms of macroscopic parameters (termed Model B) - shows the limitations and the resulting ambiguities of the two-fluid model at high frequencies and at temperatures close to the transition temperature. The validity of the two-fluid model is then discussed. Our preliminary results show that the electrical transport description in the normal and superconducting phases as formulated in the two-fluid model needs to be modified to incorporate the new and special features of high temperature superconductors. Parameters describing the waveguide performance - conductivity, surface resistance and attenuation constant - will be computed. Potential applications in communications networks and large scale integrated circuits will be discussed. Some of the ongoing work will be reported. In particular, a brief proposal is made to investigate the effects of electromagnetic interference and the concomitant notion of electromagnetic compatibility (EMI/EMC) of high-Tc superconductors.
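
    A minimal sketch of the Gorter-Casimir two-fluid estimate of surface resistance (valid well below the gap frequency and for a penetration depth much smaller than the normal skin depth); the material values are hypothetical:

      import numpy as np

      mu0 = 4e-7 * np.pi
      f = 10e9                          # operating frequency (Hz)
      omega = 2 * np.pi * f

      Tc = 92.0                         # hypothetical transition temperature (K)
      lam0 = 150e-9                     # penetration depth at T = 0 (m)
      sigma_n = 1e6                     # normal-state conductivity near Tc (S/m)

      def surface_resistance(T):
          t4 = (T / Tc) ** 4            # normal-fluid fraction in the two-fluid model
          lam = lam0 / np.sqrt(1.0 - t4)            # temperature-dependent depth
          sigma1 = sigma_n * t4                      # normal-fluid conductivity
          return 0.5 * sigma1 * mu0**2 * omega**2 * lam**3   # Rs (ohm)

      for T in (20.0, 50.0, 77.0, 88.0):
          print(T, surface_resistance(T))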

  6. Reviewing the evidence to inform the population of cost-effectiveness models within health technology assessments.

    PubMed

    Kaltenthaler, Eva; Tappenden, Paul; Paisley, Suzy

    2013-01-01

    Health technology assessments (HTAs) typically require the development of a cost-effectiveness model, which necessitates the identification, selection, and use of other types of information beyond clinical effectiveness evidence to populate the model parameters. The reviewing activity associated with model development should be transparent and reproducible but can result in a tension between being both timely and systematic. Little procedural guidance exists in this area. The purpose of this article was to provide guidance, informed by focus groups, on what might constitute a systematic and transparent approach to reviewing information to populate model parameters. A focus group series was held with HTA experts in the United Kingdom including systematic reviewers, information specialists, and health economic modelers to explore these issues. Framework analysis was used to analyze the qualitative data elicited during focus groups. Suggestions included the use of rapid reviewing methods and the need to consider the trade-off between relevance and quality. The need for transparency in the reporting of review methods was emphasized. It was suggested that additional attention should be given to the reporting of parameters deemed to be more important to the model or where the preferred decision regarding the choice of evidence is equivocal. These recommendations form part of a Technical Support Document produced for the National Institute for Health and Clinical Excellence Decision Support Unit in the United Kingdom. It is intended that these recommendations will help to ensure a more systematic, transparent, and reproducible process for the review of model parameters within HTA.

  7. Need of tetraiodothyronine supplemental therapy in pregnant women

    NASA Astrophysics Data System (ADS)

    Stoian, Dana; Craciunescu, Mihalea; Timar, Romulus; Schiller, Adalbert; Pater, Liana; Craina, Marius

    2013-10-01

    Thyroid hormones are essential for fetal development. In normal pregnancy, maternal thyroid function adjusts by itself, a mechanism that is deficient in women with pre-existing thyroid disease. The study group comprised 120 women of reproductive age with known thyroid disease whose pregnancies were followed up to delivery. Thyroid ultrasound parameters and functional parameters were followed up during the nine months of gestation. The study proposes a mathematical model for predicting the need for, and the amount of, tetraiodothyronine treatment in pregnant women with pre-existing thyroid disease.

  8. Update on mathematical modeling research to support the development of automated insulin delivery systems.

    PubMed

    Steil, Garry M; Hipszer, Brian; Reifman, Jaques

    2010-05-01

    One year after its initial meeting, the Glycemia Modeling Working Group reconvened during the 2009 Diabetes Technology Meeting in San Francisco, CA. The discussion, involving 39 scientists, again focused on the need for individual investigators to have access to the clinical data required to develop and refine models of glucose metabolism, the need to understand the differences among the distinct models and control algorithms, and the significance of day-to-day subject variability. The key conclusion was that model-based comparisons of different control algorithms, or the models themselves, are limited by the inability to access individual model-patient parameters. It was widely agreed that these parameters, as opposed to the average parameters that are typically reported, are necessary to perform such comparisons. However, the prevailing view was that, if investigators were to make the parameters available, it would limit their ability (and that of their institution) to benefit from the invested work in developing their models. A general agreement was reached regarding the importance of each model having an insulin pharmacokinetic/pharmacodynamic profile that is not different from profiles reported in the literature (88% of the respondents agreed that the model should have similar curves or be analyzed separately) and the importance of capturing intraday variance in insulin sensitivity (91% of the respondents indicated that this could result in changes in fasting glucose of ≥15%, with 52% of the respondents believing that the variability could effect changes of ≥30%). Seventy-six percent of the participants indicated that high-fat meals were thought to effect changes in other model parameters in addition to gastric emptying. There was also widespread consensus as to how a closed-loop controller should respond to day-to-day changes in model parameters (with 76% of the participants indicating that fasting glucose should be within 15% of target, with 30% of the participants believing that it should be at target). The group was evenly divided as to whether the glucose sensor per se continues to be the major obstacle in achieving closed-loop control. Finally, virtually all participants agreed that a future two-day workshop should be organized to compare, contrast, and understand the differences among the different models and control algorithms.

  9. Identification of Synchronous Machine Stability-Parameters: An On-Line Time-Domain Approach.

    NASA Astrophysics Data System (ADS)

    Le, Loc Xuan

    1987-09-01

    A time-domain modeling approach is described which enables the stability-study parameters of the synchronous machine to be determined directly from input-output data measured at the terminals of the machine operating under normal conditions. The transient responses due to system perturbations are used to identify the parameters of the equivalent circuit models. The described models are verified by comparing their responses with the machine responses generated from the transient stability models of a small three-generator multi-bus power system and of a single -machine infinite-bus power network. The least-squares method is used for the solution of the model parameters. As a precaution against ill-conditioned problems, the singular value decomposition (SVD) is employed for its inherent numerical stability. In order to identify the equivalent-circuit parameters uniquely, the solution of a linear optimization problem with non-linear constraints is required. Here, the SVD appears to offer a simple solution to this otherwise difficult problem. Furthermore, the SVD yields solutions with small bias and, therefore, physically meaningful parameters even in the presence of noise in the data. The question concerning the need for a more advanced model of the synchronous machine which describes subtransient and even sub-subtransient behavior is dealt with sensibly by the concept of condition number. The concept provides a quantitative measure for determining whether such an advanced model is indeed necessary. Finally, the recursive SVD algorithm is described for real-time parameter identification and tracking of slowly time-variant parameters. The algorithm is applied to identify the dynamic equivalent power system model.
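
    A minimal sketch of SVD-based least squares with a condition-number check; the regressor matrix stands in for hypothetical measured input-output data:

      import numpy as np

      rng = np.random.default_rng(0)

      # Hypothetical regression y = Phi @ theta + noise, theta = machine parameters.
      theta_true = np.array([1.5, -0.8, 0.3])
      Phi = rng.normal(size=(200, 3))
      Phi[:, 2] = Phi[:, 1] + 1e-4 * rng.normal(size=200)   # nearly collinear columns
      y = Phi @ theta_true + 0.01 * rng.normal(size=200)

      U, s, Vt = np.linalg.svd(Phi, full_matrices=False)
      cond = s[0] / s[-1]      # a large value flags an ill-conditioned problem
      print("condition number:", cond)

      # Truncate tiny singular values for a numerically stable pseudoinverse.
      tol = 1e-8 * s[0]
      s_inv = np.where(s > tol, 1.0 / s, 0.0)
      theta = Vt.T @ (s_inv * (U.T @ y))
      print(theta)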

  10. Modelling Fourier regression for time series data - a case study: modelling inflation in the foods sector in Indonesia

    NASA Astrophysics Data System (ADS)

    Prahutama, Alan; Suparti; Wahyu Utami, Tiani

    2018-03-01

    Regression analysis models the relationship between response variables and predictor variables. The parametric approach to regression is very strict in its assumptions, whereas a nonparametric regression model needs no assumption about the model form. Time series data are observations of a variable recorded over time, so to model time series data by regression one must first determine the response and predictor variables. The response variable is the value at time t (yt), while the predictor variables are significant lags. In nonparametric regression modeling, one developing approach is to use a Fourier series. One advantage of the nonparametric Fourier series approach is its ability to handle data with trigonometric (periodic) patterns. Modeling with a Fourier series requires the parameter K, the number of series terms; K can be determined with the Generalized Cross Validation method. In modeling inflation for the transportation, communication, and financial services sector, the Fourier series yields an optimal K of 120 parameters with an R-squared of 99%, whereas multiple linear regression yields an R-squared of 90%.
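
    A minimal sketch of Fourier-series regression with generalized cross-validation over K, on synthetic data:

      import numpy as np

      rng = np.random.default_rng(0)
      n = 240
      x = np.linspace(0, 4 * np.pi, n)     # scaled lagged predictor
      y = 0.5 * x + np.sin(3 * x) + 0.2 * rng.normal(size=n)

      def design(x, K):
          cols = [np.ones_like(x), x]      # intercept + linear trend
          for k in range(1, K + 1):
              cols += [np.cos(k * x), np.sin(k * x)]
          return np.column_stack(cols)

      def gcv(K):
          X = design(x, K)
          beta, *_ = np.linalg.lstsq(X, y, rcond=None)
          resid = y - X @ beta
          p = X.shape[1]                   # effective number of parameters
          return np.mean(resid**2) / (1.0 - p / len(y)) ** 2

      best_K = min(range(1, 21), key=gcv)
      print(best_K, gcv(best_K))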

  11. Uncertainty quantification and global sensitivity analysis of the Los Alamos sea ice model

    NASA Astrophysics Data System (ADS)

    Urrego-Blanco, Jorge R.; Urban, Nathan M.; Hunke, Elizabeth C.; Turner, Adrian K.; Jeffery, Nicole

    2016-04-01

    Changes in the high-latitude climate system have the potential to affect global climate through feedbacks with the atmosphere and connections with midlatitudes. Sea ice and climate models used to understand these changes have uncertainties that need to be characterized and quantified. We present a quantitative way to assess uncertainty in complex computer models, which is a new approach in the analysis of sea ice models. We characterize parametric uncertainty in the Los Alamos sea ice model (CICE) in a standalone configuration and quantify the sensitivity of sea ice area, extent, and volume with respect to uncertainty in 39 individual model parameters. Unlike common sensitivity analyses conducted in previous studies where parameters are varied one at a time, this study uses a global variance-based approach in which Sobol' sequences are used to efficiently sample the full 39-dimensional parameter space. We implement a fast emulator of the sea ice model whose predictions of sea ice extent, area, and volume are used to compute the Sobol' sensitivity indices of the 39 parameters. Main effects and interactions among the most influential parameters are also estimated by a nonparametric regression technique based on generalized additive models. A ranking based on the sensitivity indices indicates that model predictions are most sensitive to snow parameters such as snow conductivity and grain size, and the drainage of melt ponds. It is recommended that research be prioritized toward more accurately determining these most influential parameter values by observational studies or by improving parameterizations in the sea ice model.
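
    A minimal sketch of variance-based Sobol' analysis, assuming the third-party SALib package and a stand-in function in place of the sea ice emulator:

      import numpy as np
      from SALib.sample import saltelli
      from SALib.analyze import sobol

      problem = {
          "num_vars": 3,
          "names": ["snow_conductivity", "snow_grain_size", "pond_drainage"],
          "bounds": [[0.1, 0.5], [50.0, 500.0], [0.0, 1.0]],
      }

      def emulator(X):
          # Stand-in for the fast emulator of sea ice volume.
          return X[:, 0] * 2.0 + np.sin(X[:, 1] / 100.0) + 0.5 * X[:, 0] * X[:, 2]

      X = saltelli.sample(problem, 1024)    # N * (2D + 2) samples
      Y = emulator(X)
      Si = sobol.analyze(problem, Y)
      print(Si["S1"])                       # first-order indices
      print(Si["ST"])                       # total-order indices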

  12. ERM model analysis for adaptation to hydrological model errors

    NASA Astrophysics Data System (ADS)

    Baymani-Nezhad, M.; Han, D.

    2018-05-01

    Hydrological conditions change continuously, and these changes introduce errors into flood forecasting models that lead to unrealistic results. To overcome these difficulties, a concept called model updating has been proposed in hydrological studies. Real-time model updating is one of the challenging processes in the hydrological sciences and has not been entirely solved, due to a lack of knowledge about the future state of the catchment under study. Basically, in the flood forecasting process, errors propagated from the rainfall-runoff model are considered the main source of uncertainty in the forecasting model. Hence, to handle these errors, several methods have been proposed to update rainfall-runoff models, such as parameter updating, model state updating, and correction of input data. The current study investigates the ability of rainfall-runoff model parameters to cope with three types of error, timing, shape, and volume, which are the common errors in hydrological modelling. The new lumped model, the ERM model, was selected for this study to evaluate whether its parameters can be used in model updating to cope with the stated errors. Investigation of ten events shows that the ERM model parameters can be updated to cope with the errors without the need to recalibrate the model.

  13. Parameter extraction with neural networks

    NASA Astrophysics Data System (ADS)

    Cazzanti, Luca; Khan, Mumit; Cerrina, Franco

    1998-06-01

    In semiconductor processing, the modeling of the process is becoming more and more important. While the ultimate goal is that of developing a set of tools for designing a complete process (Technology CAD), it is also necessary to have modules to simulate the various technologies and, in particular, to optimize specific steps. This need is particularly acute in lithography, where the continuous decrease in CD (critical dimension) forces the technologies to operate near their limits. In the development of a 'model' for a physical process, we face several levels of challenges. First, it is necessary to develop a 'physical model,' i.e., a rational description of the process itself on the basis of known physical laws. Second, we need an 'algorithmic model' to represent in a virtual environment the behavior of the 'physical model.' After a 'complete' model has been developed and verified, it becomes possible to do performance analysis. In many cases the input parameters are poorly known or not accessible directly to experiment. It would be extremely useful to obtain the values of these 'hidden' parameters from experimental results by comparing model to data. This problem is particularly severe, because the complexity and costs associated with semiconductor processing make a simple 'trial-and-error' approach infeasible and cost-inefficient. Even when computer models of the process already exist, obtaining data through simulations may be time consuming. Neural networks (NN) are powerful computational tools to predict the behavior of a system from an existing data set. They are able to adaptively 'learn' input/output mappings and to act as universal function approximators. In this paper we use artificial neural networks to build a mapping from the input parameters of the process to output parameters which are indicative of the performance of the process. Once the NN has been 'trained,' it is also possible to observe the process 'in reverse,' and to extract the values of the inputs which yield outputs with desired characteristics. Using this method, we can extract optimum values for the parameters and determine the process latitude very quickly.

  14. User's manual for BRI-STARS (BRIdge Stream Tube model for Alluvial River Simulation)

    DOT National Transportation Integrated Search

    1998-07-01

    There is a need for a generalized water and sediment-routing computer model for solving complicated river engineering problems with limited data and resources. This program should have the following capabilities: to compute hydraulic parameters for o...

  15. A Note on Item-Restscore Association in Rasch Models

    ERIC Educational Resources Information Center

    Kreiner, Svend

    2011-01-01

    To rule out the need for a two-parameter item response theory (IRT) model during item analysis by Rasch models, it is important to check the Rasch model's assumption that all items have the same item discrimination. Biserial and polyserial correlation coefficients measuring the association between items and restscores are often used in an informal…

  16. Enhancing model prediction reliability through improved soil representation and constrained model auto calibration - A paired watershed study

    USDA-ARS?s Scientific Manuscript database

    Process based and distributed watershed models possess a large number of parameters that are not directly measured in field and need to be calibrated through matching modeled in-stream fluxes with monitored data. Recently, there have been waves of concern about the reliability of this common practic...

  17. Modeling actual evapotranspiration with routine meteorological variables in the data-scarce region of the Tibetan Plateau: Comparisons and implications

    NASA Astrophysics Data System (ADS)

    Ma, Ning; Zhang, Yinsheng; Xu, Chong-Yu; Szilagyi, Jozsef

    2015-08-01

    Quantitative estimation of actual evapotranspiration (ETa) by in situ measurements and mathematical modeling is a fundamental task for physical understanding of ETa as well as the feedback mechanisms between land and the ambient atmosphere. However, acquisition of ETa information in the Tibetan Plateau (TP) has been greatly impeded by the extremely sparse ground observation network in the region. Approaches for estimating ETa solely from routine meteorological variables are therefore important for investigating spatiotemporal variations of ETa in the data-scarce region of the TP. Motivated by this need, the complementary relationship (CR) and Penman-Monteith approaches were evaluated against in situ measurements of ETa on a daily basis in an alpine steppe region of the TP. The former includes the Nonlinear Complementary Relationship (Nonlinear-CR) as well as the Complementary Relationship Areal Evapotranspiration (CRAE) models, while the latter involves the Katerji-Perrier and the Todorovic models. Results indicate that the Nonlinear-CR, CRAE, and Katerji-Perrier models are all capable of efficiently simulating daily ETa, provided their parameter values are appropriately calibrated. The Katerji-Perrier model performed best since its site-specific parameters take the soil water status into account. The Nonlinear-CR model also performed well, with the advantage of not requiring the user to choose between a symmetric and asymmetric CR. The CRAE model, even with a relatively low Nash-Sutcliffe efficiency (NSE) value, is also an acceptable approach in this data-scarce region as it does not need information on wind speed and ground surface conditions. In contrast, application of the Todorovic model was found to be inappropriate in the dry regions of the TP due to its significant overestimation of ETa, as it neglects the effect of water stress on the bulk surface resistance. Sensitivity analysis of the parameter values demonstrated the relative importance of each parameter in the corresponding model. Overall, the Nonlinear-CR model is recommended in the absence of measured ETa for local calibration of the model parameter values.

  18. On the robust optimization to the uncertain vaccination strategy problem

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chaerani, D., E-mail: d.chaerani@unpad.ac.id; Anggriani, N.; Firdaniza

    2014-02-21

    In order to prevent an epidemic of infectious disease, the vaccination coverage needs to be minimized while the basic reproduction number is maintained below 1. This means that as we make the vaccination coverage as small as possible, we still need to prevent an epidemic among the small number of people who already are infected. In this paper, we discuss the case of vaccination strategy in terms of minimizing vaccination coverage when the basic reproduction number is assumed to be an uncertain parameter that lies between 0 and 1. We refer to the linear optimization model for vaccination strategy proposed by Becker and Starrzak (see [2]). Assuming that parameter uncertainty is involved, Tanner et al. (see [9]) propose an optimal solution of the problem using stochastic programming. In this paper we discuss an alternative way of optimizing the uncertain vaccination strategy using Robust Optimization (see [3]). In this approach we assume that the parameter uncertainty lies within an ellipsoidal uncertainty set, such that we can claim that the obtained result will be achieved by a polynomial time algorithm (as guaranteed by the RO methodology). The robust counterpart model is presented.

  19. Sensitivity Analysis of Biome-Bgc Model for Dry Tropical Forests of Vindhyan Highlands, India

    NASA Astrophysics Data System (ADS)

    Kumar, M.; Raghubanshi, A. S.

    2011-08-01

    The process-based model BIOME-BGC was run for sensitivity analysis to see the effect of ecophysiological parameters on the net primary production (NPP) of a dry tropical forest of India. The sensitivity test reveals that the forest NPP was highly sensitive to the following ecophysiological parameters: canopy light extinction coefficient (k), canopy average specific leaf area (SLA), new stem C : new leaf C (SC:LC), maximum stomatal conductance (gs,max), C:N of fine roots (C:Nfr), all-sided to projected leaf area ratio, and canopy water interception coefficient (Wint). Therefore, these parameters need more precision and attention during estimation and observation in field studies.

  1. Bayesian approach to analyzing holograms of colloidal particles.

    PubMed

    Dimiduk, Thomas G; Manoharan, Vinothan N

    2016-10-17

    We demonstrate a Bayesian approach to tracking and characterizing colloidal particles from in-line digital holograms. We model the formation of the hologram using Lorenz-Mie theory. We then use a tempered Markov-chain Monte Carlo method to sample the posterior probability distributions of the model parameters: particle position, size, and refractive index. Compared to least-squares fitting, our approach allows us to more easily incorporate prior information about the parameters and to obtain more accurate uncertainties, which are critical for both particle tracking and characterization experiments. Our approach also eliminates the need to supply accurate initial guesses for the parameters, so it requires little tuning.
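
    A minimal sketch of the kind of posterior sampling described here, using a plain random-walk Metropolis sampler and a made-up Gaussian log-posterior standing in for the Lorenz-Mie forward model (the paper uses a tempered MCMC variant):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def log_posterior(theta):
        # Stand-in for the real hologram model: a made-up Gaussian centred on
        # (position, radius, refractive index) = (0.0, 0.5, 1.59) plays the role
        # of the Lorenz-Mie likelihood times the prior.
        target = np.array([0.0, 0.5, 1.59])
        return -0.5 * np.sum(((theta - target) / 0.05) ** 2)

    def metropolis(log_post, theta0, n_steps=5000, step=0.02):
        """Plain random-walk Metropolis; the paper's tempering is omitted here."""
        theta = np.asarray(theta0, dtype=float)
        samples = np.empty((n_steps, theta.size))
        lp = log_post(theta)
        for i in range(n_steps):
            proposal = theta + step * rng.standard_normal(theta.size)
            lp_prop = log_post(proposal)
            if np.log(rng.random()) < lp_prop - lp:   # accept/reject step
                theta, lp = proposal, lp_prop
            samples[i] = theta
        return samples

    chain = metropolis(log_posterior, [0.1, 0.4, 1.5])
    print(chain[2500:].mean(axis=0))   # posterior means after burn-in
    ```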

  2. Critical asset and portfolio risk analysis: an all-hazards framework.

    PubMed

    Ayyub, Bilal M; McGill, William L; Kaminskiy, Mark

    2007-08-01

    This article develops a quantitative all-hazards framework for critical asset and portfolio risk analysis (CAPRA) that considers both natural and human-caused hazards. Following a discussion on the nature of security threats, the need for actionable risk assessments, and the distinction between asset and portfolio-level analysis, a general formula for all-hazards risk analysis is obtained that resembles the traditional model based on the notional product of consequence, vulnerability, and threat, though with clear meanings assigned to each parameter. Furthermore, a simple portfolio consequence model is presented that yields first-order estimates of interdependency effects following a successful attack on an asset. Moreover, depending on the needs of the decisions being made and available analytical resources, values for the parameters in this model can be obtained at a high level or through detailed systems analysis. Several illustrative examples of the CAPRA methodology are provided.
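
    A minimal sketch of the notional risk product described above; the function and numbers are illustrative, not CAPRA's actual notation:

    ```python
    def asset_risk(threat: float, vulnerability: float, consequence: float) -> float:
        """Expected loss as P(attack) x P(success | attack) x loss given success."""
        return threat * vulnerability * consequence

    # Hypothetical asset: 5% annual attack likelihood, 40% chance of success, $10M loss.
    print(asset_risk(0.05, 0.40, 10e6))   # -> 200000.0 expected annual loss
    ```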

  3. Automated palpation for breast tissue discrimination based on viscoelastic biomechanical properties.

    PubMed

    Tsukune, Mariko; Kobayashi, Yo; Miyashita, Tomoyuki; Fujie, Masakatsu G

    2015-05-01

    Accurate, noninvasive methods are sought for breast tumor detection and diagnosis. In particular, a need for noninvasive techniques that measure both the nonlinear elastic and viscoelastic properties of breast tissue has been identified. For diagnostic purposes, it is important to select a nonlinear viscoelastic model with a small number of parameters that correlate highly with histological structure. However, the combination of conventional viscoelastic models with nonlinear elastic models requires a large number of parameters. A nonlinear viscoelastic model of breast tissue based on a simple equation with few parameters was therefore developed and tested. The nonlinear viscoelastic properties of soft tissues in porcine breast were measured experimentally using fresh ex vivo samples. Robotic palpation was used for measurements employed in a finite element model. These measurements were used to calculate nonlinear viscoelastic parameters for fat, fibroglandular breast parenchyma and muscle. The ability of these parameters to distinguish the tissue types was evaluated in a two-step statistical analysis that included Holm's pairwise test. The discrimination error rate of a set of parameters was evaluated by the Mahalanobis distance. Ex vivo testing in porcine breast revealed significant differences in the nonlinear viscoelastic parameters among combinations of the three tissue types, and the discrimination error rate was low for all tested combinations. Although tissue discrimination was not achieved using only a single nonlinear viscoelastic parameter, a set of four nonlinear viscoelastic parameters was able to reliably and accurately discriminate fat, breast fibroglandular tissue and muscle.
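
    A minimal sketch of tissue discrimination by Mahalanobis distance, using scipy and synthetic stand-ins for the four viscoelastic parameters (the paper's measured values are not reproduced here):

    ```python
    import numpy as np
    from scipy.spatial.distance import mahalanobis

    # Hypothetical 4-parameter feature vectors for two tissue classes.
    fat = np.random.default_rng(1).normal([1.0, 0.5, 2.0, 0.1], 0.05, size=(30, 4))
    muscle = np.random.default_rng(2).normal([1.4, 0.8, 2.6, 0.3], 0.05, size=(30, 4))

    # Pooled within-class covariance, then the distance between class means.
    pooled_cov = np.cov(np.vstack([fat - fat.mean(0), muscle - muscle.mean(0)]).T)
    VI = np.linalg.inv(pooled_cov)
    d = mahalanobis(fat.mean(0), muscle.mean(0), VI)
    print(f"Mahalanobis distance between class means: {d:.2f}")
    ```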

  4. Multi objective optimization model for minimizing production cost and environmental impact in CNC turning process

    NASA Astrophysics Data System (ADS)

    Widhiarso, Wahyu; Rosyidi, Cucuk Nur

    2018-02-01

    Minimizing production cost in a manufacturing company will increase its profit. The cutting parameters affect total processing time, which in turn affects the production cost of the machining process; they also affect the environment. An optimization model is therefore needed to determine the optimum cutting parameters. In this paper, we develop a multi-objective optimization model to minimize the production cost and the environmental impact in the CNC turning process. Cutting speed and feed rate serve as the decision variables. Constraints considered are cutting speed, feed rate, cutting force, output power, and surface roughness. The environmental impact is converted from the environmental burden using eco-indicator 99. A numerical example is given to show the implementation of the model, solved using OptQuest of the Oracle Crystal Ball software. The optimization results indicate that the model can be used to optimize the cutting parameters to minimize both the production cost and the environmental impact.
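
    A minimal weighted-sum sketch of the two-objective optimization, with made-up cost and impact surrogates in place of the paper's machining models (the study itself solves its model with OptQuest):

    ```python
    import numpy as np
    from scipy.optimize import minimize

    def production_cost(v, f):
        return 100.0 / (v * f) + 0.01 * v        # processing-time term plus tool-wear term

    def environmental_impact(v, f):
        return 0.002 * v ** 2 / f                # energy-driven eco-indicator surrogate

    def objective(x, w=0.5):
        v, f = x                                 # cutting speed (m/min), feed rate (mm/rev)
        return w * production_cost(v, f) + (1 - w) * environmental_impact(v, f)

    result = minimize(objective, x0=[100.0, 0.2],
                      bounds=[(50.0, 300.0), (0.05, 0.5)])   # hypothetical speed/feed limits
    print(result.x)
    ```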

  5. Investigating the Impact of Item Parameter Drift for Item Response Theory Models with Mixture Distributions.

    PubMed

    Park, Yoon Soo; Lee, Young-Sun; Xing, Kuan

    2016-01-01

    This study investigates the impact of item parameter drift (IPD) on parameter and ability estimation when the underlying measurement model fits a mixture distribution, thereby violating the item invariance property of unidimensional item response theory (IRT) models. An empirical study was conducted to demonstrate the occurrence of both IPD and an underlying mixture distribution using real-world data. Twenty-one trended anchor items from the 1999, 2003, and 2007 administrations of Trends in International Mathematics and Science Study (TIMSS) were analyzed using unidimensional and mixture IRT models. TIMSS treats trended anchor items as invariant over testing administrations and uses pre-calibrated item parameters based on unidimensional IRT. However, empirical results showed evidence of two latent subgroups with IPD. Results also showed changes in the distribution of examinee ability between latent classes over the three administrations. A simulation study was conducted to examine the impact of IPD on the estimation of ability and item parameters, when data have underlying mixture distributions. Simulations used data generated from a mixture IRT model and estimated using unidimensional IRT. Results showed that data reflecting IPD using mixture IRT model led to IPD in the unidimensional IRT model. Changes in the distribution of examinee ability also affected item parameters. Moreover, drift with respect to item discrimination and distribution of examinee ability affected estimates of examinee ability. These findings demonstrate the need to caution and evaluate IPD using a mixture IRT framework to understand its effects on item parameters and examinee ability.
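
    A minimal sketch of how item parameter drift shows up in a two-parameter logistic (2PL) IRT model; the item parameters below are hypothetical:

    ```python
    import numpy as np

    def irt_2pl(theta, a, b):
        """Two-parameter logistic IRT: probability of a correct response."""
        return 1.0 / (1.0 + np.exp(-a * (theta - b)))

    theta = 0.5                       # examinee ability
    # A hypothetical anchor item calibrated at a=1.2, b=0.0 whose difficulty
    # drifts to b=0.4 in a later administration: the same ability now yields
    # a lower probability, so pre-calibrated parameters mis-score examinees.
    print(irt_2pl(theta, 1.2, 0.0))   # ~0.65 before drift
    print(irt_2pl(theta, 1.2, 0.4))   # ~0.53 after difficulty drift
    ```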

  6. Investigating the Impact of Item Parameter Drift for Item Response Theory Models with Mixture Distributions

    PubMed Central

    Park, Yoon Soo; Lee, Young-Sun; Xing, Kuan

    2016-01-01

    This study investigates the impact of item parameter drift (IPD) on parameter and ability estimation when the underlying measurement model fits a mixture distribution, thereby violating the item invariance property of unidimensional item response theory (IRT) models. An empirical study was conducted to demonstrate the occurrence of both IPD and an underlying mixture distribution using real-world data. Twenty-one trended anchor items from the 1999, 2003, and 2007 administrations of Trends in International Mathematics and Science Study (TIMSS) were analyzed using unidimensional and mixture IRT models. TIMSS treats trended anchor items as invariant over testing administrations and uses pre-calibrated item parameters based on unidimensional IRT. However, empirical results showed evidence of two latent subgroups with IPD. Results also showed changes in the distribution of examinee ability between latent classes over the three administrations. A simulation study was conducted to examine the impact of IPD on the estimation of ability and item parameters, when data have underlying mixture distributions. Simulations used data generated from a mixture IRT model and estimated using unidimensional IRT. Results showed that data reflecting IPD using mixture IRT model led to IPD in the unidimensional IRT model. Changes in the distribution of examinee ability also affected item parameters. Moreover, drift with respect to item discrimination and distribution of examinee ability affected estimates of examinee ability. These findings demonstrate the need to caution and evaluate IPD using a mixture IRT framework to understand its effects on item parameters and examinee ability. PMID:26941699

  7. Multiple Kernel Learning with Data Augmentation

    DTIC Science & Technology

    2016-11-22

    model, we further show how to make inference and learn parameters in the following section. Note that we still need hyper-parameters (i.e., κ0, θ0, µ0...parameter α and sparsity tuning parameter β, these hyper-parameters are not sensitive to data. As in the BEMKL method, we fix the values of these... hyper-parameters for all datasets. [Plate-diagram residue; the recoverable priors are α ∼ G(κ0, θ0), β ∼ N(µ0, σ0²), wmf, λmf | α, β given by Equation (8), and yn | xn, wmf.]

  8. Uncertainty Quantification and Sensitivity Analysis in the CICE v5.1 Sea Ice Model

    NASA Astrophysics Data System (ADS)

    Urrego-Blanco, J. R.; Urban, N. M.

    2015-12-01

    Changes in the high latitude climate system have the potential to affect global climate through feedbacks with the atmosphere and connections with mid latitudes. Sea ice and climate models used to understand these changes have uncertainties that need to be characterized and quantified. In this work we characterize parametric uncertainty in the Los Alamos Sea Ice model (CICE) and quantify the sensitivity of sea ice area, extent and volume with respect to uncertainty in about 40 individual model parameters. Unlike common sensitivity analyses conducted in previous studies where parameters are varied one-at-a-time, this study uses a global variance-based approach in which Sobol sequences are used to efficiently sample the full 40-dimensional parameter space. This approach requires a very large number of model evaluations, which are expensive to run. A more computationally efficient approach is implemented by training and cross-validating a surrogate (emulator) of the sea ice model with model output from 400 model runs. The emulator is used to make predictions of sea ice extent, area, and volume at several model configurations, which are then used to compute the Sobol sensitivity indices of the 40 parameters. A ranking based on the sensitivity indices indicates that model output is most sensitive to snow parameters such as conductivity and grain size, and the drainage of melt ponds. The main effects and interactions among the most influential parameters are also estimated by a non-parametric regression technique based on generalized additive models. It is recommended that research be prioritized toward more accurately determining the values of these most influential parameters through observational studies or by improving existing parameterizations in the sea ice model.
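
    A minimal sketch of the Sobol' workflow described here, using the SALib library with three stand-in parameters and a toy function in place of the trained sea ice emulator:

    ```python
    import numpy as np
    from SALib.sample import saltelli
    from SALib.analyze import sobol

    # Three stand-in parameters; the CICE study spans ~40, which only changes num_vars.
    problem = {
        "num_vars": 3,
        "names": ["snow_conductivity", "snow_grain_size", "melt_pond_drainage"],
        "bounds": [[0.2, 0.5], [0.1, 2.0], [0.0, 1.0]],
    }

    X = saltelli.sample(problem, 1024)   # Sobol' sequence sampling of the space

    def emulator(x):
        # Toy surrogate standing in for the trained sea ice emulator.
        return 3.0 * x[0] + x[1] ** 2 + 0.5 * x[0] * x[2]

    Y = np.apply_along_axis(emulator, 1, X)
    Si = sobol.analyze(problem, Y)
    print(dict(zip(problem["names"], Si["S1"])))   # first-order sensitivity indices
    ```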

  9. Estimation of anisotropy parameters in organic-rich shale: Rock physics forward modeling approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Herawati, Ida, E-mail: ida.herawati@students.itb.ac.id; Winardhi, Sonny; Priyono, Awali

    Anisotropy analysis becomes an important step in processing and interpretation of seismic data. One of the most important steps in anisotropy analysis is anisotropy parameter estimation, which can be based on well data, core data or seismic data. With seismic data, anisotropy parameter calculation is generally based on velocity moveout analysis; however, the accuracy depends on data quality, available offset, and velocity moveout picking. Anisotropy estimation using seismic data is needed to obtain wide coverage of the anisotropy of a particular layer. In an anisotropic reservoir, analysis of anisotropy parameters also helps us better understand the reservoir characteristics. Anisotropy parameters, especially ε, are related to rock property and lithology determination. The current research aims to estimate anisotropy parameters from seismic data and integrate well data, with a case study in a potential shale gas reservoir. Due to the complexity of organic-rich shale reservoirs, an extensive study crossing several disciplines is needed to understand the reservoir. Shale itself has intrinsic anisotropy caused by the lamination of its constituent minerals. In order to link rock physics with seismic response, it is necessary to build a forward model of organic-rich shale. This paper focuses on the relationships between reservoir properties such as clay content, porosity and total organic content, and anisotropy. Organic content, which defines the prospectivity of shale gas, can be considered as solid background, solid inclusion, or both. The forward modeling results show that the presence of organic matter increases anisotropy in shale. The relationships between total organic content and other seismic properties such as acoustic impedance and Vp/Vs are also presented.

  10. Uncertainty quantification and global sensitivity analysis of the Los Alamos sea ice model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Urrego-Blanco, Jorge Rolando; Urban, Nathan Mark; Hunke, Elizabeth Clare

    Changes in the high-latitude climate system have the potential to affect global climate through feedbacks with the atmosphere and connections with midlatitudes. Sea ice and climate models used to understand these changes have uncertainties that need to be characterized and quantified. We present a quantitative way to assess uncertainty in complex computer models, which is a new approach in the analysis of sea ice models. We characterize parametric uncertainty in the Los Alamos sea ice model (CICE) in a standalone configuration and quantify the sensitivity of sea ice area, extent, and volume with respect to uncertainty in 39 individual model parameters. Unlike common sensitivity analyses conducted in previous studies where parameters are varied one at a time, this study uses a global variance-based approach in which Sobol' sequences are used to efficiently sample the full 39-dimensional parameter space. We implement a fast emulator of the sea ice model whose predictions of sea ice extent, area, and volume are used to compute the Sobol' sensitivity indices of the 39 parameters. Main effects and interactions among the most influential parameters are also estimated by a nonparametric regression technique based on generalized additive models. A ranking based on the sensitivity indices indicates that model predictions are most sensitive to snow parameters such as snow conductivity and grain size, and the drainage of melt ponds. Lastly, it is recommended that research be prioritized toward more accurately determining these most influential parameter values by observational studies or by improving parameterizations in the sea ice model.

  11. Statistical Parameter Study of the Time Interval Distribution for Nonparalyzable, Paralyzable, and Hybrid Dead Time Models

    NASA Astrophysics Data System (ADS)

    Syam, Nur Syamsi; Maeng, Seongjin; Kim, Myo Gwang; Lim, Soo Yeon; Lee, Sang Hoon

    2018-05-01

    A large dead time of a Geiger-Mueller (GM) detector may cause a large count loss in radiation measurements and consequently may distort the Poisson statistics of radiation events into a new distribution with different statistical parameters. Therefore, the variance, skewness, and excess kurtosis of the time interval distribution, as functions of the observed count rate, were studied for the well-known nonparalyzable, paralyzable, and nonparalyzable-paralyzable hybrid dead time models of a Geiger-Mueller detector using Monte Carlo simulation (GMSIM). These parameters were then compared with the statistical parameters of a perfect detector to observe the change in the distribution. The results show that the behaviors of the statistical parameters differ among the three dead time models. The skewness and excess kurtosis of the nonparalyzable model are equal or very close to those of the perfect detector (≅2 for skewness and ≅6 for excess kurtosis), while the statistical parameters of the paralyzable and hybrid models attain minimum values around the maximum observed count rate. The different trends of the three models resulting from the GMSIM simulation can be used to distinguish the dead time behavior of a GM counter, i.e., whether the counter is best described by the nonparalyzable, paralyzable, or hybrid model. In a future study, these statistical parameters need to be analyzed further to determine whether they can be used to estimate the dead time of each model, particularly for the paralyzable and hybrid models.
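
    A minimal Monte Carlo sketch in the spirit of GMSIM: Poisson arrivals are filtered through nonparalyzable and paralyzable dead times, and the skewness of the recorded time-interval distribution is compared with the exponential reference value of 2 (the rate and dead time are made up):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def observed_intervals(true_rate, dead_time, n_events=200_000, paralyzable=False):
        """Return intervals between *recorded* counts after applying a dead time."""
        arrivals = np.cumsum(rng.exponential(1.0 / true_rate, n_events))
        recorded = []
        blocked_until = -np.inf
        for t in arrivals:
            if t >= blocked_until:
                recorded.append(t)
                blocked_until = t + dead_time
            elif paralyzable:
                blocked_until = t + dead_time   # each arrival extends the dead period
        return np.diff(recorded)

    for par in (False, True):
        iv = observed_intervals(true_rate=1e4, dead_time=1e-4, paralyzable=par)
        skew = np.mean(((iv - iv.mean()) / iv.std()) ** 3)
        print(f"paralyzable={par}: skewness={skew:.2f} (exponential reference: 2)")
    ```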

  12. Uncertainty quantification and global sensitivity analysis of the Los Alamos sea ice model

    DOE PAGES

    Urrego-Blanco, Jorge Rolando; Urban, Nathan Mark; Hunke, Elizabeth Clare; ...

    2016-04-01

    Changes in the high-latitude climate system have the potential to affect global climate through feedbacks with the atmosphere and connections with midlatitudes. Sea ice and climate models used to understand these changes have uncertainties that need to be characterized and quantified. We present a quantitative way to assess uncertainty in complex computer models, which is a new approach in the analysis of sea ice models. We characterize parametric uncertainty in the Los Alamos sea ice model (CICE) in a standalone configuration and quantify the sensitivity of sea ice area, extent, and volume with respect to uncertainty in 39 individual model parameters. Unlike common sensitivity analyses conducted in previous studies where parameters are varied one at a time, this study uses a global variance-based approach in which Sobol' sequences are used to efficiently sample the full 39-dimensional parameter space. We implement a fast emulator of the sea ice model whose predictions of sea ice extent, area, and volume are used to compute the Sobol' sensitivity indices of the 39 parameters. Main effects and interactions among the most influential parameters are also estimated by a nonparametric regression technique based on generalized additive models. A ranking based on the sensitivity indices indicates that model predictions are most sensitive to snow parameters such as snow conductivity and grain size, and the drainage of melt ponds. Lastly, it is recommended that research be prioritized toward more accurately determining these most influential parameter values by observational studies or by improving parameterizations in the sea ice model.

  13. THE TEMPORAL AND SPECTRAL CHARACTERISTICS OF 'FAST RISE AND EXPONENTIAL DECAY' GAMMA-RAY BURST PULSES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Peng, Z. Y.; Ma, L.; Yin, Y.

    2010-08-01

    In this paper, we have analyzed the temporal and spectral behavior of 52 fast rise and exponential decay (FRED) pulses in 48 long-duration gamma-ray bursts (GRBs) observed by the CGRO/BATSE, using a pulse model with two shape parameters and the Band model with three shape parameters, respectively. It is found that these FRED pulses are distinguished both temporally and spectrally from those in the long-lag pulses. In contrast to the long-lag pulses, only one parameter pair indicates an evident correlation among the five parameters, which suggests that at least four parameters are needed to model burst temporal and spectral behavior. In addition, our studies reveal that these FRED pulses have the following correlated properties: (1) long-duration pulses have harder spectra and are less luminous than short-duration pulses and (2) the more asymmetric the pulses are, the steeper are the evolutionary curves of the peak energy (E_p) in the νf_ν spectrum within the pulse decay phase. Our statistical results give some constraints on the current GRB models.

  14. The role of updraft velocity in temporal variability of cloud hydrometeor number

    NASA Astrophysics Data System (ADS)

    Sullivan, Sylvia; Nenes, Athanasios; Lee, Dong Min; Oreopoulos, Lazaros

    2016-04-01

    Significant effort has been dedicated to incorporating direct aerosol-cloud links, through parameterization of liquid droplet activation and ice crystal nucleation, within climate models. This accomplishment has generated the need to understand which parameters affecting hydrometeor formation drive its variability in coupled climate simulations, as such understanding provides the basis for optimal parameter estimation as well as robust comparison with data and other models. Sensitivity analysis alone does not address this issue, given that the importance of each parameter for hydrometeor formation depends on both its variance and its sensitivity. To address this, we develop and use a series of attribution metrics defined with adjoint sensitivities to attribute the temporal variability in droplet and crystal number to important aerosol and dynamical parameters. This attribution analysis is done both for the NASA Global Modeling and Assimilation Office Goddard Earth Observing System Model, Version 5 (GEOS) and the National Center for Atmospheric Research Community Atmosphere Model Version 5.1 (CAM). Within the GEOS simulation, up to 48% of the temporal variability in output ice crystal number and 61% in droplet number can be attributed to input updraft velocity fluctuations, while for the CAM simulation, updraft fluctuations explain as much as 89% of the ice crystal number variability. These results suggest that vertical velocity is a very important (or dominant) driver of hydrometeor variability in both model frameworks. Yet observations of vertical velocity are seldom available (or used) to evaluate the vertical velocities in simulations; this contrasts strikingly with the amount and quality of data available for aerosol-related parameters. Consequently, there is a strong need for retrievals or measurements of vertical velocity to address this important knowledge gap, which requires a significant investment and effort by the atmospheric community. The attribution metrics can also be instrumental for understanding the sources of differences between models used for aerosol-cloud-climate interaction studies.

  15. The impact of temporal sampling resolution on parameter inference for biological transport models.

    PubMed

    Harrison, Jonathan U; Baker, Ruth E

    2018-06-25

    Imaging data has become an essential tool to explore key biological questions at various scales, for example the motile behaviour of bacteria or the transport of mRNA, and it has the potential to transform our understanding of important transport mechanisms. Often these imaging studies require us to compare biological species or mutants, and to do this we need to quantitatively characterise their behaviour. Mathematical models offer a quantitative description of a system that enables us to perform this comparison, but to relate mechanistic mathematical models to imaging data, we need to estimate their parameters. In this work we study how collecting data at different temporal resolutions impacts our ability to infer parameters of biological transport models; performing exact inference for simple velocity jump process models in a Bayesian framework. The question of how best to choose the frequency with which data is collected is prominent in a host of studies because the majority of imaging technologies place constraints on the frequency with which images can be taken, and the discrete nature of observations can introduce errors into parameter estimates. In this work, we mitigate such errors by formulating the velocity jump process model within a hidden states framework. This allows us to obtain estimates of the reorientation rate and noise amplitude for noisy observations of a simple velocity jump process. We demonstrate the sensitivity of these estimates to temporal variations in the sampling resolution and extent of measurement noise. We use our methodology to provide experimental guidelines for researchers aiming to characterise motile behaviour that can be described by a velocity jump process. In particular, we consider how experimental constraints resulting in a trade-off between temporal sampling resolution and observation noise may affect parameter estimates. Finally, we demonstrate the robustness of our methodology to model misspecification, and then apply our inference framework to a dataset that was generated with the aim of understanding the localization of RNA-protein complexes.
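
    A minimal sketch of the data-generating side of this setup: a one-dimensional velocity jump process observed at a fixed sampling resolution with additive measurement noise (all parameter values are hypothetical):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def simulate_velocity_jump(reorientation_rate, speed, t_end, dt_sample, noise_sd):
        """1-D velocity jump process sampled at discrete times with observation noise."""
        times = np.arange(0.0, t_end, dt_sample)
        x, v, t = 0.0, speed, 0.0
        next_jump = rng.exponential(1.0 / reorientation_rate)
        positions = []
        for t_obs in times:
            while next_jump < t_obs:             # advance through reorientation events
                x += v * (next_jump - t)
                t = next_jump
                v = -v                           # reverse direction at each jump
                next_jump += rng.exponential(1.0 / reorientation_rate)
            x += v * (t_obs - t)
            t = t_obs
            positions.append(x)
        return times, np.array(positions) + rng.normal(0.0, noise_sd, len(times))

    times, observed = simulate_velocity_jump(reorientation_rate=0.5, speed=1.0,
                                             t_end=60.0, dt_sample=1.0, noise_sd=0.1)
    print(observed[:5])
    ```

    Coarsening dt_sample while holding noise_sd fixed reproduces the trade-off the paper studies: fewer observed reversals per frame against less noise-dominated displacement estimates.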

  16. Non-robust dynamic inferences from macroeconometric models: Bifurcation stratification of confidence regions

    NASA Astrophysics Data System (ADS)

    Barnett, William A.; Duzhak, Evgeniya Aleksandrovna

    2008-06-01

    Grandmont [J.M. Grandmont, On endogenous competitive business cycles, Econometrica 53 (1985) 995-1045] found that the parameter space of the most classical dynamic models is stratified into an infinite number of subsets supporting an infinite number of different kinds of dynamics, from monotonic stability at one extreme to chaos at the other extreme, and with many forms of multiperiodic dynamics in between. The econometric implications of Grandmont’s findings are particularly important, if bifurcation boundaries cross the confidence regions surrounding parameter estimates in policy-relevant models. Stratification of a confidence region into bifurcated subsets seriously damages robustness of dynamical inferences. Recently, interest in policy in some circles has moved to New-Keynesian models. As a result, in this paper we explore bifurcation within the class of New-Keynesian models. We develop the econometric theory needed to locate bifurcation boundaries in log-linearized New-Keynesian models with Taylor policy rules or inflation-targeting policy rules. Central results needed in this research are our theorems on the existence and location of Hopf bifurcation boundaries in each of the cases that we consider.

  17. Geophysical technique for mineral exploration and discrimination based on electromagnetic methods and associated systems

    DOEpatents

    Zhdanov, Michael S. [Salt Lake City, UT]

    2008-01-29

    Mineral exploration needs a reliable method to distinguish between uneconomic mineral deposits and economic mineralization. A method and system includes a geophysical technique for subsurface material characterization, mineral exploration and mineral discrimination. The technique introduced in this invention detects induced polarization effects in electromagnetic data and uses remote geophysical observations to determine the parameters of an effective conductivity relaxation model using a composite analytical multi-phase model of the rock formations. The conductivity relaxation model and analytical model can be used to determine parameters related by analytical expressions to the physical characteristics of the microstructure of the rocks and minerals. These parameters are ultimately used for the discrimination of different components in underground formations, and in this way provide an ability to distinguish between uneconomic mineral deposits and zones of economic mineralization using geophysical remote sensing technology.

  18. Preliminary Spreadsheet of Eruption Source Parameters for Volcanoes of the World

    USGS Publications Warehouse

    Mastin, Larry G.; Guffanti, Marianne; Ewert, John W.; Spiegel, Jessica

    2009-01-01

    Volcanic eruptions that spew tephra into the atmosphere pose a hazard to jet aircraft. For this reason, the International Civil Aviation Organization (ICAO) has designated nine Volcanic Ash Advisory Centers (VAACs) around the world whose purpose is to track ash clouds from eruptions and notify aircraft so that they may avoid these ash clouds. During eruptions, VAACs and their collaborators run volcanic-ash-transport-and-dispersion (VATD) models that forecast the location and movement of ash clouds. These models require as input parameters the plume height H, the mass-eruption rate, the duration D, the erupted volume V (in cubic kilometers of bubble-free or 'dense rock equivalent' [DRE] magma), and the mass fraction of erupted tephra with a particle size smaller than 63 µm (m63). Some parameters, such as mass-eruption rate and mass fraction of fine debris, are not obtainable by direct observation; others, such as plume height or duration, are obtainable from observations but may be unavailable in the early hours of an eruption when VATD models are being initiated. For this reason, ash-cloud modelers need to have at their disposal source parameters for a particular volcano that are based on its recent eruptive history and represent the most likely anticipated eruption. They also need source parameters that encompass the range of uncertainty in eruption size or characteristics. In spring of 2007, a workshop was held at the U.S. Geological Survey (USGS) Cascades Volcano Observatory to derive a protocol for assigning eruption source parameters to ash-cloud models during eruptions. The protocol derived from this effort was published by Mastin and others (in press), along with a world map displaying the assigned eruption type for each of the world's volcanoes. Their report, however, did not include the assigned eruption types in tabular form. Therefore, this Open-File Report presents that table in the form of an Excel spreadsheet. These assignments are preliminary and will be modified to follow upcoming recommendations by the volcanological and aviation communities.

  19. Extended Kalman Filter framework for forecasting shoreline evolution

    USGS Publications Warehouse

    Long, Joseph; Plant, Nathaniel G.

    2012-01-01

    A shoreline change model incorporating both long- and short-term evolution is integrated into a data assimilation framework that uses sparse observations to generate an updated forecast of shoreline position and to estimate unobserved geophysical variables and model parameters. Application of the assimilation algorithm provides quantitative statistical estimates of combined model-data forecast uncertainty which is crucial for developing hazard vulnerability assessments, evaluation of prediction skill, and identifying future data collection needs. Significant attention is given to the estimation of four non-observable parameter values and separating two scales of shoreline evolution using only one observable morphological quantity (i.e. shoreline position).

  20. Historical HIV incidence modelling in regional subgroups: use of flexible discrete models with penalized splines based on prior curves.

    PubMed

    Greenland, S

    1996-03-15

    This paper presents an approach to back-projection (back-calculation) of human immunodeficiency virus (HIV) person-year infection rates in regional subgroups, based on combining a log-linear model for subgroup differences with a penalized spline model for trends. The penalized spline approach allows flexible trend estimation but requires far fewer parameters than fully non-parametric smoothers, thus saving parameters that can be used in estimating subgroup effects. Use of a reasonable prior curve to construct the penalty function minimizes the degree of smoothing needed beyond model specification. The approach is illustrated in an application to acquired immunodeficiency syndrome (AIDS) surveillance data from Los Angeles County.

  1. Model of Numerical Spatial Classification for Sustainable Agriculture in Badung Regency and Denpasar City, Indonesia

    NASA Astrophysics Data System (ADS)

    Trigunasih, N. M.; Lanya, I.; Subadiyasa, N. N.; Hutauruk, J.

    2018-02-01

    The growing number and activity of the population greatly affect the utilization of land resources. Land needed for human activities continues to grow while the availability of land is limited, so land use changes, leading to land degradation and the conversion of agricultural land to non-agricultural uses. The objectives of this research are: (1) to determine the parameters of a spatial numerical classification of sustainable food agriculture in Badung Regency and Denpasar City; (2) to project the food balance in Badung Regency and Denpasar City in 2020, 2030, 2040, and 2050; (3) to specify the function of the spatial numerical classification in building a zonation model of sustainable agricultural land in Badung Regency and Denpasar City; and (4) to determine the appropriate model, in space and time, for protecting sustainable agricultural land in Badung Regency and Denpasar City. The quantitative methods used in this research include surveys, soil analysis, spatial database development, geoprocessing analysis (overlay and proximity analysis), interpolation of raster digital elevation model data, and visualization (cartography). Qualitative methods consisted of literature studies and interviews. A total of 11 parameters were observed in Badung Regency and 9 in Denpasar City; fewer parameters are needed in the urban area (Denpasar) than in the rural area (Badung). Weighting and scoring of the numerical classification parameters yielded analysis results based on the standard deviation and mean of the population data. The numerical classification produced five models, divided into three zones (sustainable, buffer, and converted) in Denpasar and Badung. The population curve of the parameter analysis in Denpasar was normal, whereas in Badung it was not; modeling was therefore carried out for the whole region in Denpasar but district by district in Badung. The relationship between the models and the projected role of land in the food balance is viewed in Badung in terms of sustainable land area, whereas in Denpasar it is viewed through linkages with the green open spaces in the Denpasar spatial plan for 2011-2031.

  2. Gaussian copula as a likelihood function for environmental models

    NASA Astrophysics Data System (ADS)

    Wani, O.; Espadas, G.; Cecinati, F.; Rieckermann, J.

    2017-12-01

    Parameter estimation of environmental models always comes with uncertainty. To formally quantify this parametric uncertainty, a likelihood function needs to be formulated, defined as the probability of the observations given fixed values of the parameter set. A likelihood function allows us to infer parameter values from observations using Bayes' theorem. The challenge is to formulate a likelihood function that reliably describes the error-generating processes which lead to the observed monitoring data, such as rainfall and runoff. If the likelihood function is not representative of the error statistics, the parameter inference will give biased parameter values. Several uncertainty estimation methods currently in use employ Gaussian processes as a likelihood function because of their favourable analytical properties. A Box-Cox transformation is suggested to deal with non-symmetric and heteroscedastic errors, e.g. for flow data, which are typically more uncertain at high flows than in periods with low flows. The problem with transformations is that the results are conditional on hyper-parameters, for which it is difficult to formulate the analyst's belief a priori. In an attempt to address this problem, in this research work we suggest learning the nature of the error distribution from the errors made by the model in "past" forecasts. We use a Gaussian copula to generate semiparametric error distributions. 1) We show that this copula can then be used as a likelihood function to infer parameters, breaking away from the practice of using multivariate normal distributions. 2) Based on the results from a didactic rainfall-runoff prediction example, we demonstrate that the copula captures the predictive uncertainty of the model. 3) Finally, we find that the properties of autocorrelation and heteroscedasticity of the errors are captured well by the copula, eliminating the need to use transforms. In summary, our findings suggest that copulas are an interesting departure from the usage of fully parametric distributions as likelihood functions, and they could help us better capture the statistical properties of errors and make more reliable predictions.
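
    A minimal sketch of a Gaussian copula used as a likelihood: marginal probability transforms feed standard-normal scores into the copula density, here with a skew-normal marginal and an AR(1)-like correlation as stand-ins for the empirically learned error structure:

    ```python
    import numpy as np
    from scipy import stats

    def gaussian_copula_loglik(errors, marginal, corr):
        """Log-likelihood of an error vector under a Gaussian copula, assuming
        identical marginals (a scipy frozen distribution) for simplicity."""
        u = marginal.cdf(errors)            # probability integral transform
        z = stats.norm.ppf(u)               # map to standard-normal scores
        corr_inv = np.linalg.inv(corr)
        _, logdet = np.linalg.slogdet(corr)
        copula_term = -0.5 * logdet - 0.5 * z @ (corr_inv - np.eye(len(z))) @ z
        return copula_term + np.sum(marginal.logpdf(errors))

    # Hypothetical residuals; rho controls the AR(1)-like autocorrelation.
    residuals = np.array([0.3, -0.1, 0.4, 0.2])
    rho = 0.6
    corr = rho ** np.abs(np.subtract.outer(np.arange(4), np.arange(4)))
    print(gaussian_copula_loglik(residuals, stats.skewnorm(a=2.0, scale=0.3), corr))
    ```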

  3. Refined Dummy Atom Model of Mg(2+) by Simple Parameter Screening Strategy with Revised Experimental Solvation Free Energy.

    PubMed

    Jiang, Yang; Zhang, Haiyang; Feng, Wei; Tan, Tianwei

    2015-12-28

    Metal ions play an important role in the catalysis of metalloenzymes. To investigate metalloenzymes via molecular modeling, a set of accurate force field parameters for metal ions is highly imperative. To extend its application range and improve the performance, the dummy atom model of metal ions was refined through a simple parameter screening strategy using the Mg(2+) ion as an example. Using the AMBER ff03 force field with the TIP3P model, the refined model accurately reproduced the experimental geometric and thermodynamic properties of Mg(2+). Compared with point charge models and previous dummy atom models, the refined dummy atom model yields an enhanced performance for producing reliable ATP/GTP-Mg(2+)-protein conformations in three metalloenzyme systems with single or double metal centers. Similar to other unbounded models, the refined model failed to reproduce the Mg-Mg distance and favored a monodentate binding of carboxylate groups, and these drawbacks needed to be considered with care. The outperformance of the refined model is mainly attributed to the use of a revised (more accurate) experimental solvation free energy and a suitable free energy correction protocol. This work provides a parameter screening strategy that can be readily applied to refine the dummy atom models for metal ions.

  4. A fully-stochasticized, age-structured population model for population viability analysis of fish: Lower Missouri River endangered pallid sturgeon example

    USGS Publications Warehouse

    Wildhaber, Mark L.; Albers, Janice; Green, Nicholas; Moran, Edward H.

    2017-01-01

    We develop a fully-stochasticized, age-structured population model suitable for population viability analysis (PVA) of fish and demonstrate its use with the endangered pallid sturgeon (Scaphirhynchus albus) of the Lower Missouri River as an example. The model incorporates three levels of variance: parameter variance (uncertainty about the value of a parameter itself) applied at the iteration level, temporal variance (uncertainty caused by random environmental fluctuations over time) applied at the time-step level, and implicit individual variance (uncertainty caused by differences between individuals) applied within the time-step level. We found that population dynamics were most sensitive to survival rates, particularly age-2+ survival, and to fecundity-at-length. The inclusion of variance (unpartitioned or partitioned), stocking, or both generally decreased the influence of individual parameters on population growth rate. The partitioning of variance into parameter and temporal components had a strong influence on the importance of individual parameters, uncertainty of model predictions, and quasiextinction risk (i.e., pallid sturgeon population size falling below 50 age-1+ individuals). Our findings show that appropriately applying variance in PVA is important when evaluating the relative importance of parameters, and reinforce the need for better and more precise estimates of crucial life-history parameters for pallid sturgeon.

  5. Assessment Study of the State of the Art in Adaptive Control and its Applications to Aircraft Control

    NASA Technical Reports Server (NTRS)

    Kaufman, Howard

    1998-01-01

    Many papers relevant to reconfigurable flight control have appeared over the past fifteen years. In general these have addressed theoretical issues, simulation experiments, and in some cases, actual flight tests. Results indicate that reconfiguration of flight controls is certainly feasible for a wide class of failures. However many of the proposed procedures, although quite attractive, need further analytical and experimental study for meaningful validation. Many procedures assume the availability of failure detection and identification logic that will supply, sufficiently quickly, the dynamics corresponding to the failed aircraft. This in general implies that the failure detection and fault identification logic must have access to all anticipated faults and the corresponding dynamical equations of motion. Unless some sort of explicit on-line parameter identification is included, the computational demands could be excessive. This suggests the need for some form of adaptive control, either by itself as the primary procedure for control reconfiguration or in conjunction with the failure detection logic. If explicit or indirect adaptive control is used, then it is important that the identified models be such that the corresponding computed controls deliver adequate performance for the actual aircraft. Unknown changes in trim should be modelled, and parameter identification needs to be adequately insensitive to noise while remaining capable of tracking abrupt changes. If, however, both failure detection and system parameter identification turn out to be too time-consuming in an emergency situation, then the concepts of direct adaptive control should be considered. If direct model reference adaptive control is to be used (on a linear model) with stability assurances, then a positive real or passivity condition needs to be satisfied for all possible configurations. This condition is often satisfied with a feedforward compensator around the plant. This compensator must be robustly designed so that the compensated plant satisfies the required positive real conditions over all expected parameter values. Furthermore, with the feedforward only around the plant, a nonzero (but bounded) error will exist in steady state between the plant and model outputs. This error can be removed by placing the compensator in the reference model as well. Design of such a compensator should not be too difficult a problem, since for flight control it is generally possible to feed back all the system states.

  6. Automated system for generation of soil moisture products for agricultural drought assessment

    NASA Astrophysics Data System (ADS)

    Raja Shekhar, S. S.; Chandrasekar, K.; Sesha Sai, M. V. R.; Diwakar, P. G.; Dadhwal, V. K.

    2014-11-01

    Drought is a frequently occurring disaster affecting the lives of millions of people across the world every year. Several parameters, indices and models are being used globally for drought forecasting and early warning, and for monitoring drought prevalence, persistence and severity. Since drought is a complex phenomenon, a large number of parameters/indices need to be evaluated to sufficiently address the problem. It is a challenge to generate input parameters from different sources like space-based data, ground data and collateral data in short intervals of time, where there may be limitations in processing power, availability of domain expertise, and specialized models and tools. In this study, an effort has been made to automate the derivation of one of the important parameters in drought studies, viz. soil moisture. The soil water balance bucket model is widely used to arrive at soil moisture products, popular for its sensitivity to soil conditions and rainfall parameters. This model has been encoded into a "Fish-Bone" architecture using COM technologies and open source libraries for best possible automation, to fulfill the need for a standard procedure of preparing input parameters and processing routines. The main aim of the system is to provide an operational environment for generation of soil moisture products, freeing users to concentrate on further enhancements and implementation of these parameters in related areas of research without re-discovering the established models. The architecture relies mainly on available open source libraries for GIS and raster I/O operations on different file formats, to ensure that the products can be widely distributed without the burden of any commercial dependencies. Further, the system can be automated to the extent of user-free operation if required, with inbuilt chain processing for generation of products every day at specified intervals. The operational software has inbuilt capabilities to automatically download requisite input parameters like rainfall and potential evapotranspiration (PET) from the respective servers. It can import file formats like .grd, .hdf, .img, and generic binary, perform geometric correction, and re-project the files to the native projection system. The software takes into account the weather, crop and soil parameters to run the designed soil water balance model. It also has additional features like time compositing of outputs to generate weekly and fortnightly profiles for further analysis, and a highly customizable tool to generate "Area Favorable for Crop Sowing" maps from the daily soil moisture. A whole-India analysis now takes a mere 20 seconds to generate soil moisture products, compared with about one hour per day using commercial software.
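
    A minimal sketch of a daily soil-water-balance bucket of the general kind automated here; the capacity, forcing, and ET scaling below are made up, not the operational model's exact formulation:

    ```python
    import numpy as np

    def bucket_soil_moisture(rain, pet, capacity=150.0, s0=75.0):
        """Daily bucket: storage gains rainfall, loses actual ET (PET scaled by
        relative wetness) and spills runoff above capacity. Units: mm."""
        s = s0
        storage = []
        for p, e in zip(rain, pet):
            aet = e * (s / capacity)         # actual ET limited by relative wetness
            s = s + p - aet
            s = min(max(s, 0.0), capacity)   # excess above capacity leaves as runoff
            storage.append(s)
        return np.array(storage)

    rain = np.array([0.0, 12.0, 3.0, 0.0, 25.0])   # mm/day, made-up forcing
    pet = np.array([5.0, 4.0, 5.5, 6.0, 4.5])
    print(bucket_soil_moisture(rain, pet))
    ```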

  7. Real-time individualization of the unified model of performance.

    PubMed

    Liu, Jianbo; Ramakrishnan, Sridhar; Laxminarayan, Srinivas; Balkin, Thomas J; Reifman, Jaques

    2017-12-01

    Existing mathematical models for predicting neurobehavioural performance are not suited for mobile computing platforms because they cannot adapt model parameters automatically in real time to reflect individual differences in the effects of sleep loss. We used an extended Kalman filter to develop a computationally efficient algorithm that continually adapts the parameters of the recently developed Unified Model of Performance (UMP) to an individual. The algorithm accomplishes this in real time as new performance data for the individual become available. We assessed the algorithm's performance by simulating real-time model individualization for 18 subjects subjected to 64 h of total sleep deprivation (TSD) and 7 days of chronic sleep restriction (CSR) with 3 h of time in bed per night, using psychomotor vigilance task (PVT) data collected every 2 h during wakefulness. This UMP individualization process produced parameter estimates that progressively approached the solution produced by a post-hoc fitting of model parameters using all data. The minimum number of PVT measurements needed to individualize the model parameters depended upon the type of sleep-loss challenge, with ~30 required for TSD and ~70 for CSR. However, model individualization depended upon the overall duration of data collection, yielding increasingly accurate model parameters with greater number of days. Interestingly, reducing the PVT sampling frequency by a factor of two did not notably hamper model individualization. The proposed algorithm facilitates real-time learning of an individual's trait-like responses to sleep loss and enables the development of individualized performance prediction models for use in a mobile computing platform. © 2017 European Sleep Research Society.
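
    A minimal sketch of one extended-Kalman-filter step that adapts a scalar model parameter as each new performance observation arrives; the measurement model below is hypothetical and is not the UMP:

    ```python
    import numpy as np

    def ekf_parameter_update(p, P, y, h, h_jac, q=1e-4, r=0.25):
        """One EKF step adapting a scalar parameter p (variance P) to observation y."""
        P = P + q                        # predict: parameter treated as a random walk
        H = h_jac(p)                     # linearize the measurement model
        K = P * H / (H * P * H + r)      # Kalman gain
        p = p + K * (y - h(p))           # correct with the innovation
        P = (1.0 - K * H) * P
        return p, P

    # Hypothetical measurement model: predicted performance lapses grow
    # exponentially with a subject-specific fatigue-rate parameter.
    h = lambda p: np.exp(2.0 * p)
    h_jac = lambda p: 2.0 * np.exp(2.0 * p)

    p, P = 0.2, 1.0                      # group-average prior for the parameter
    for y in [1.9, 2.3, 2.8, 3.1]:       # made-up observations every 2 h
        p, P = ekf_parameter_update(p, P, y, h, h_jac)
    print(p, P)                          # estimate tightens as data accumulate
    ```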

  8. Sediment Acoustics: Wideband Model, Reflection Loss and Ambient Noise Inversion

    DTIC Science & Technology

    2010-01-01

    DISTRIBUTION STATEMENT A. Approved for public release; distribution is unlimited. Sediment acoustics: wideband model, reflection loss and... Physically sound models of acoustic interaction with the ocean floor including penetration, reflection and scattering in support of MCM and ASW needs... OBJECTIVES: (1) Consolidation of the BIC08 model of sediment acoustics, its verification in a variety of sediment types, parameter reduction and

  9. Agrochemical fate models applied in agricultural areas from Colombia

    NASA Astrophysics Data System (ADS)

    Garcia-Santos, Glenda; Yang, Jing; Andreoli, Romano; Binder, Claudia

    2010-05-01

    The misapplication of pesticides in mainly agricultural catchments can lead to severe problems for humans and the environment, especially in developing countries, where overuse of agrochemicals is common and water quality monitoring at local and regional levels is incipient or lacking; models are therefore needed for decision making and identification of hot spots. However, the complexity of the water cycle contrasts strongly with the scarce data availability, limiting the number of analyses, techniques, and models available to researchers. There is therefore a strong need for model simplification that keeps model complexity appropriate while still representing the processes. We have developed a new model, Westpa-Pest, to improve water quality management of an agricultural catchment located in the highlands of Colombia. Westpa-Pest is based on the fully distributed hydrologic model Wetspa and a pesticide fate module. We applied a multi-criteria analysis for model selection under the conditions and data availability found in the region and compared the outcome with the newly developed Westpa-Pest model. Furthermore, both models were empirically calibrated and validated. The following questions were addressed: i) what are the strengths and weaknesses of the models? ii) which are the most sensitive parameters of each model? iii) what happens with uncertainties in soil parameters? and iv) how sensitive are the transfer coefficients?

  10. Objective calibration of regional climate models

    NASA Astrophysics Data System (ADS)

    Bellprat, O.; Kotlarski, S.; Lüthi, D.; SchäR, C.

    2012-12-01

    Climate models are subject to high parametric uncertainty induced by poorly confined model parameters of parameterized physical processes. Uncertain model parameters are typically calibrated in order to increase the agreement of the model with available observations. The common practice is to adjust uncertain model parameters manually, often referred to as expert tuning, which lacks objectivity and transparency in the use of observations. These shortcomings often haze model inter-comparisons and hinder the implementation of new model parameterizations. Methods that would allow model parameters to be calibrated systematically are unfortunately often not applicable to state-of-the-art climate models, due to the computational constraints imposed by the high dimensionality and non-linearity of the problem. Here we present an approach to objectively calibrate a regional climate model, using reanalysis-driven simulations and building upon a quadratic metamodel presented by Neelin et al. (2010) that serves as a computationally cheap surrogate of the model. Five model parameters originating from different parameterizations are selected for the optimization according to their influence on the model performance. The metamodel accurately estimates spatial averages of 2 m temperature, precipitation and total cloud cover, with an uncertainty of similar magnitude to the internal variability of the regional climate model. The non-linearities of the parameter perturbations are well captured, such that only a limited number of 20-50 simulations are needed to estimate optimal parameter settings. Parameter interactions are small, which allows the number of simulations to be reduced further. In comparison to an ensemble of the same model which has undergone expert tuning, the calibration yields similar optimal model configurations, while leading to an additional reduction of the model error. The performance range captured is much wider than that sampled with the expert-tuned ensemble, and the presented methodology is effective and objective. It is argued that objective calibration is an attractive tool and could become standard procedure after introducing new model implementations, or after a spatial transfer of a regional climate model. Objective calibration of parameterizations with regional models could also serve as a strategy toward improving parameterization packages of global climate models.
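
    A minimal sketch of the quadratic-metamodel idea: fit linear, squared, and interaction terms to a few dozen model runs, then search the cheap surrogate. The five-parameter toy score below stands in for the regional climate model:

    ```python
    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.preprocessing import PolynomialFeatures

    rng = np.random.default_rng(0)

    # 40 hypothetical model runs over 5 normalized parameters (the study used 20-50).
    X = rng.uniform(-1.0, 1.0, size=(40, 5))

    def model_error(x):
        # Toy stand-in for the RCM performance score at a parameter setting.
        return 1.0 + 0.8 * x[0] - 0.5 * x[2] + 0.6 * x[0] ** 2 + 0.2 * x[1] * x[3]

    y = np.apply_along_axis(model_error, 1, X)

    # Quadratic metamodel: linear, squared, and interaction terms.
    quad = PolynomialFeatures(degree=2, include_bias=False)
    surrogate = LinearRegression().fit(quad.fit_transform(X), y)

    # The cheap surrogate can now be searched for the optimal parameter setting.
    candidates = rng.uniform(-1.0, 1.0, size=(100_000, 5))
    scores = surrogate.predict(quad.transform(candidates))
    print(candidates[np.argmin(scores)])
    ```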

  11. Model of optical phantoms thermal response upon irradiation with 975 nm dermatological laser

    NASA Astrophysics Data System (ADS)

    Wróbel, M. S.; Bashkatov, A. N.; Yakunin, A. N.; Avetisyan, Yu. A.; Genina, E. A.; Galla, S.; Sekowska, A.; Truchanowicz, D.; Cenian, A.; Jedrzejewska-Szczerska, M.; Tuchin, V. V.

    2018-04-01

    We have developed a numerical model describing the optical and thermal behavior of optical tissue phantoms upon laser irradiation. According to our previous studies, the phantoms can be used as substitutes for real skin from the optical as well as the thermal point of view. However, their thermal parameters are not entirely similar to those of real tissues, so a mathematical model describing the thermal and optical response of such materials is needed. This will provide correction factors, which would be invaluable for translating measurements on skin phantoms to real tissues and would give a good representation of a real application. Here, we present a model based on the data from the optical phantoms fabricated and measured in our previous preliminary study. The discrepancy between the modeling and the thermal measurements depends on the lack of accurate knowledge of the material's thermal properties and of some exact parameters of the laser beam. Those parameters were varied in the simulation to provide an overview of the possible parameter ranges and the magnitude of the thermal response.

  12. Distributed parameter modelling of flexible spacecraft: Where's the beef?

    NASA Technical Reports Server (NTRS)

    Hyland, D. C.

    1994-01-01

    This presentation discusses various misgivings concerning the directions and productivity of Distributed Parameter System (DPS) theory as applied to spacecraft vibration control. We try to show the need for greater cross-fertilization between DPS theorists and spacecraft control designers. We recommend a shift in research directions toward exploration of asymptotic frequency response characteristics of critical importance to control designers.

  13. Parameter estimation of multivariate multiple regression model using bayesian with non-informative Jeffreys’ prior distribution

    NASA Astrophysics Data System (ADS)

    Saputro, D. R. S.; Amalia, F.; Widyaningsih, P.; Affan, R. C.

    2018-05-01

    The Bayesian method can be used to estimate the parameters of the multivariate multiple regression model. It involves two distributions: the prior and the posterior. The posterior distribution is influenced by the choice of prior distribution. Jeffreys' prior is a non-informative prior distribution, used when information about the parameters is not available. The non-informative Jeffreys' prior is combined with the sample information to yield the posterior distribution, which is then used to estimate the parameters. The purpose of this research is to estimate the parameters of the multivariate regression model using the Bayesian method with the non-informative Jeffreys' prior. Based on the results and discussion, the estimates of β and Σ are obtained from the expected values of the marginal posterior distributions, which are multivariate normal for β and inverse Wishart for Σ. However, calculating these expected values involves integrals of functions whose values are difficult to determine analytically. Therefore, an approach is needed that generates random samples according to the posterior distribution of each parameter, using the Markov chain Monte Carlo (MCMC) Gibbs sampling algorithm.
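
    A minimal sketch of one common Gibbs scheme for this posterior under the Jeffreys prior: Σ is drawn from its inverse-Wishart conditional and B from its matrix-normal conditional (the data are simulated for illustration):

    ```python
    import numpy as np
    from scipy.stats import invwishart

    rng = np.random.default_rng(0)

    # Hypothetical data: n observations, p predictors, m responses.
    n, p, m = 100, 3, 2
    X = rng.normal(size=(n, p))
    B_true = np.array([[1.0, -0.5], [0.5, 0.2], [-1.0, 0.8]])
    Y = X @ B_true + rng.normal(scale=0.3, size=(n, m))

    XtX_inv = np.linalg.inv(X.T @ X)
    B_hat = XtX_inv @ X.T @ Y                    # OLS estimate, the conditional mean
    A = np.linalg.cholesky(XtX_inv)

    B = B_hat.copy()
    draws = []
    for _ in range(2000):
        resid = Y - X @ B
        # Sigma | B, Y ~ inverse Wishart with scale equal to the residual SSP matrix.
        Sigma = invwishart.rvs(df=n, scale=resid.T @ resid, random_state=rng)
        L = np.linalg.cholesky(Sigma)
        # B | Sigma, Y ~ matrix normal: row covariance (X'X)^-1, column covariance Sigma.
        B = B_hat + A @ rng.normal(size=(p, m)) @ L.T
        draws.append(B)

    print(np.mean(draws[500:], axis=0))          # posterior mean of B after burn-in
    ```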

  14. FITPOP, a heuristic simulation model of population dynamics and genetics with special reference to fisheries

    USGS Publications Warehouse

    McKenna, James E.

    2000-01-01

    Although perceiving genetic differences and their effects on fish population dynamics is difficult, simulation models offer a means to explore and illustrate these effects. I partitioned the intrinsic rate of increase parameter of a simple logistic-competition model into three components, allowing specification of the effects of relative differences in fitness and mortality, as well as the finite rate of increase. This model was placed into an interactive, stochastic environment to allow easy manipulation of model parameters (FITPOP). Simulation results illustrated the effects of subtle differences in genetic and population parameters on total population size, overall fitness, and the sensitivity of the system to variability. Several consequences of mixing genetically distinct populations were illustrated. For example, behaviors such as depression of population size after initial introgression and extirpation of native stocks due to continuous stocking of genetically inferior fish were reproduced. It was also shown that carrying capacity relative to the amount of stocking had an important influence on population dynamics. Uncertainty associated with parameter estimates reduced confidence in model projections. The FITPOP model provided a simple tool to explore population dynamics, which may assist in formulating management strategies and identifying research needs.
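
    The partitioning described above can be illustrated with a minimal two-stock sketch; the specific functional form (growth scaled by relative fitness, with a separate mortality term) and all parameter values are hypothetical stand-ins for FITPOP's actual parameterization.

      import numpy as np

      def simulate(n_steps=200, K=1000.0, dt=0.1, seed=0):
          # Logistic-competition dynamics with the intrinsic rate of increase
          # split into finite rate (r), relative fitness (f) and mortality (m).
          rng = np.random.default_rng(seed)
          N = np.array([500.0, 50.0])             # native stock, stocked fish
          r = np.array([0.8, 0.8])                # finite rates of increase
          f = np.array([1.0, 0.9])                # stocked fish assumed less fit
          m = np.array([0.10, 0.15])              # mortality components
          a = np.array([[1.0, 0.8], [0.8, 1.0]])  # competition coefficients
          history = [N.copy()]
          for _ in range(n_steps):
              growth = r * f * N * (1.0 - (a @ N) / K) - m * N
              noise = 1.0 + 0.02 * rng.standard_normal(2)   # stochastic environment
              N = np.maximum(N + dt * growth * noise, 0.0)
              history.append(N.copy())
          return np.array(history)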

  15. W3MAMCAT: a world wide web based tool for mammillary and catenary compartmental modeling and expert system distinguishability.

    PubMed

    Russell, Solomon; Distefano, Joseph J

    2006-07-01

    W3MAMCAT is a new web-based and interactive system for building and quantifying the parameters or parameter ranges of n-compartment mammillary and catenary model structures, with input and output in the first compartment, from unstructured multiexponential (sum-of-n-exponentials) models. It handles unidentifiable as well as identifiable models and, as such, provides finite parameter interval solutions for unidentifiable models, whereas direct parameter search programs typically do not. It also tutorially develops the theory of model distinguishability for same-order mammillary versus catenary models, as did its desktop application predecessor MAMCAT+. This includes expert system analysis for distinguishing mammillary from catenary structures, given input and output in similarly numbered compartments. W3MAMCAT provides for universal deployment via the internet and enhanced application error checking. It uses supported Microsoft technologies to form an extensible application framework for maintaining a stable and easily updatable application. Most importantly, anybody, anywhere, is welcome to access it using Internet Explorer 6.0 over the internet for their teaching or research needs. It is available on the Biocybernetics Laboratory website at UCLA: www.biocyb.cs.ucla.edu.

  16. A sparse representation of gravitational waves from precessing compact binaries

    NASA Astrophysics Data System (ADS)

    Blackman, Jonathan; Szilagyi, Bela; Galley, Chad; Tiglio, Manuel

    2014-03-01

    With the advanced generation of gravitational wave detectors coming online in the near future, there is a need for accurate models of gravitational waveforms emitted by binary neutron stars and/or black holes. Post-Newtonian approximations work well for the early inspiral, and there are models covering the late inspiral as well as merger and ringdown for the non-precessing case. While numerical relativity simulations have no difficulty with precession and can now provide accurate waveforms for a broad range of parameters, covering the 7-dimensional precessing parameter space with ~10^7 simulations is not feasible. There is still hope, as reduced order modelling techniques have been highly successful in reducing the impact of the curse of dimensionality for lower dimensional cases. We construct a reduced basis of Post-Newtonian waveforms for the full parameter space with mass ratios up to 10 and spins up to 0.9, and find that for the last 100 orbits only ~50 waveforms are needed. The huge compression relies heavily on a reparametrization which seeks to reduce the non-linearity of the waveforms. We also show that the addition of merger and ringdown only mildly increases the size of the basis.
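
    The greedy construction of such a reduced basis can be sketched in a few lines; this toy version assumes the training waveforms are rows of a complex array sampled on a common grid, and it is not the authors' pipeline.

      import numpy as np

      def greedy_reduced_basis(W, tol=1e-6):
          # W: (n_waveforms, n_samples) complex array of training waveforms.
          norms = np.linalg.norm(W, axis=1)
          k = int(np.argmax(norms))
          basis = [W[k] / norms[k]]
          selected = [k]
          while True:
              B = np.array(basis)                     # orthonormal rows
              proj = (W @ B.conj().T) @ B             # projection onto current basis
              err = np.linalg.norm(W - proj, axis=1)  # greedy error indicator
              k = int(np.argmax(err))
              if err[k] < tol:
                  return np.array(basis), selected
              v = W[k] - proj[k]                      # Gram-Schmidt step
              basis.append(v / np.linalg.norm(v))
              selected.append(k)

    The size of the returned basis, rather than the size of the training set, is what sets the cost of downstream surrogate evaluations.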

  17. Bayesian Modeling of Exposure and Airflow Using Two-Zone Models

    PubMed Central

    Zhang, Yufen; Banerjee, Sudipto; Yang, Rui; Lungu, Claudiu; Ramachandran, Gurumurthy

    2009-01-01

    Mathematical modeling is being increasingly used as a means for assessing occupational exposures. However, predicting exposure in real settings is constrained by lack of quantitative knowledge of exposure determinants. Validation of models in occupational settings is, therefore, a challenge. Not only do the model parameters need to be known, the models also need to predict the output with some degree of accuracy. In this paper, a Bayesian statistical framework is used for estimating model parameters and exposure concentrations for a two-zone model. The model predicts concentrations in a zone near the source and far away from the source as functions of the toluene generation rate, air ventilation rate through the chamber, and the airflow between near and far fields. The framework combines prior or expert information on the physical model along with the observed data. The framework is applied to simulated data as well as data obtained from the experiments conducted in a chamber. Toluene vapors are generated from a source under different conditions of airflow direction, the presence of a mannequin, and simulated body heat of the mannequin. The Bayesian framework accounts for uncertainty in measurement as well as in the unknown rate of airflow between the near and far fields. The results show that estimates of the interzonal airflow are always close to the estimated equilibrium solutions, which implies that the method works efficiently. The predictions of near-field concentration for both the simulated and real data show nice concordance with the true values, indicating that the two-zone model assumptions agree with the reality to a large extent and the model is suitable for predicting the contaminant concentration. Comparison of the estimated model and its margin of error with the experimental data thus enables validation of the physical model assumptions. The approach illustrates how exposure models and information on model parameters together with the knowledge of uncertainty and variability in these quantities can be used to not only provide better estimates of model outputs but also model parameters. PMID:19403840
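
    For reference, the deterministic near-field/far-field mass balance underlying the statistical treatment can be integrated directly; the standard two-zone equations are used below with illustrative parameter values, not those of the chamber experiments.

      import numpy as np
      from scipy.integrate import solve_ivp

      def two_zone(t, C, G, Q, beta, V_N, V_F):
          # C[0]: near-field concentration; C[1]: far-field concentration.
          # G: generation rate, Q: ventilation rate, beta: interzonal airflow.
          C_N, C_F = C
          dC_N = (G + beta * (C_F - C_N)) / V_N
          dC_F = (beta * (C_N - C_F) - Q * C_F) / V_F
          return [dC_N, dC_F]

      # Illustrative run: 60 minutes from clean-air initial conditions.
      sol = solve_ivp(two_zone, (0.0, 60.0), [0.0, 0.0],
                      args=(100.0, 2.0, 1.0, 1.0, 20.0))
      C_near_eq, C_far_eq = sol.y[:, -1]   # the equilibrium the estimates approach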

  18. Intelligent inversion method for pre-stack seismic big data based on MapReduce

    NASA Astrophysics Data System (ADS)

    Yan, Xuesong; Zhu, Zhixin; Wu, Qinghua

    2018-01-01

    Seismic exploration is a method of oil exploration that uses seismic information: by inverting the seismic data, useful information about the reservoir parameters can be obtained for effective exploration. Pre-stack data are characterised by a large volume of data and abundant information, and their inversion yields rich information about the reservoir parameters. Owing to the sheer amount of pre-stack seismic data, existing single-machine environments cannot meet the computational needs of such volumes, so a fast and efficient method for the inversion of pre-stack seismic data is urgently needed. Optimisation of the elastic parameters with a genetic algorithm easily falls into a local optimum, which results in a poor inversion effect, especially for the density. Therefore, an intelligent optimisation algorithm is proposed in this paper and used for the elastic-parameter inversion of pre-stack seismic data. The algorithm improves the population initialisation strategy by using the Gardner formula, as well as the genetic operators, and the improved algorithm obtains better inversion results in a model test with logging data. All of the elastic parameters obtained by inversion fit the logging curves of the theoretical model well, which effectively improves the inversion precision of the density. The algorithm was implemented with a MapReduce model to solve the seismic big-data inversion problem. The experimental results show that the parallel model can effectively reduce the running time of the algorithm.
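
    The Gardner-formula population initialisation mentioned above can be sketched as follows; the coefficients 0.31 and 0.25 are the standard Gardner values (Vp in m/s, density in g/cm^3), while the jitter level is an assumed tuning choice.

      import numpy as np

      def gardner_init(vp, n_pop=50, jitter=0.05, seed=0):
          # Seed density individuals near Gardner's relation rho = 0.31 * Vp**0.25,
          # instead of sampling the density search space uniformly.
          rng = np.random.default_rng(seed)
          rho0 = 0.31 * np.asarray(vp) ** 0.25
          return rho0 * (1.0 + jitter * rng.standard_normal((n_pop, rho0.size)))

    Starting the population near a physically plausible velocity-density trend is what narrows the search and helps the density inversion in particular.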

  19. The need for control of magnetic parameters for energy efficient performance of magnetic tunnel junctions

    NASA Astrophysics Data System (ADS)

    Farhat, I. A. H.; Gale, E.; Alpha, C.; Isakovic, A. F.

    2017-07-01

    Optimizing the energy performance of magnetic tunnel junctions (MTJs) is key to embedding spin-transfer-torque random access memory (STT-RAM) in low-power circuits. Because of the complex interdependencies among the parameters and variables that determine the device operating energy, it is important to analyse the parameters that most effectively control MTJ power. The impact of the threshold current density J_co on the energy, and the impact of the anisotropy field H_K on J_co, are studied analytically, following expressions that stem from the Landau-Lifshitz-Gilbert-Slonczewski (LLGS-STT) model. In addition, the impact of other magnetic material parameters, such as the saturation magnetization M_s, and of geometric parameters, such as the free-layer thickness t_free and λ, is discussed. A device modelling study was conducted to analyse the impact at the circuit level, and nano-magnetism simulations based on the NMAG package were conducted to analyse the impact of controlling H_K on the switching dynamics of the film.

  20. Optimisation of lateral car dynamics taking into account parameter uncertainties

    NASA Astrophysics Data System (ADS)

    Busch, Jochen; Bestle, Dieter

    2014-02-01

    Simulation studies on an active all-wheel-steering car show that disturbances of vehicle parameters have a strong influence on lateral car dynamics. This motivates the need for a design that is robust against such parameter uncertainties. A specific parametrisation is established, combining deterministic, velocity-dependent steering control parameters with partly uncertain, velocity-independent vehicle parameters for simultaneous use in a numerical optimisation process. Model-based objectives are formulated and summarised in a multi-objective optimisation problem, where especially the lateral steady-state behaviour is improved by an adaptation strategy based on measurable uncertainties. The normally distributed uncertainties are generated by optimal Latin hypercube sampling, and a response-surface-based strategy helps to cut down time-consuming model evaluations, which makes it possible to use a genetic optimisation algorithm. Optimisation results are discussed in different criterion spaces, and the achieved improvements confirm the validity of the proposed procedure.

  1. Fracture simulation of restored teeth using a continuum damage mechanics failure model.

    PubMed

    Li, Haiyan; Li, Jianying; Zou, Zhenmin; Fok, Alex Siu-Lun

    2011-07-01

    The aim of this paper is to validate the use of a finite-element (FE) based continuum damage mechanics (CDM) failure model to simulate the debonding and fracture of restored teeth. Fracture testing of plastic model teeth, with or without a standard Class-II MOD (mesial-occlusal-distal) restoration, was carried out to investigate their fracture behavior. In parallel, 2D FE models of the teeth are constructed and analyzed using the commercial FE software ABAQUS. A CDM failure model, implemented into ABAQUS via the user element subroutine (UEL), is used to simulate the debonding and/or final fracture of the model teeth under a compressive load. The material parameters needed for the CDM model to simulate fracture are obtained through separate mechanical tests. The predicted results are then compared with the experimental data of the fracture tests to validate the failure model. The failure processes of the intact and restored model teeth are successfully reproduced by the simulation. However, the fracture parameters obtained from testing small specimens need to be adjusted to account for the size effect. The results indicate that the CDM model is a viable model for the prediction of debonding and fracture in dental restorations.

  2. Optimal experimental design for parameter estimation of a cell signaling model.

    PubMed

    Bandara, Samuel; Schlöder, Johannes P; Eils, Roland; Bock, Hans Georg; Meyer, Tobias

    2009-11-01

    Differential equation models that describe the dynamic changes of biochemical signaling states are important tools to understand cellular behavior. An essential task in building such representations is to infer the affinities, rate constants, and other parameters of a model from actual measurement data. However, intuitive measurement protocols often fail to generate data that restrict the range of possible parameter values. Here we utilized a numerical method to iteratively design optimal live-cell fluorescence microscopy experiments in order to reveal pharmacological and kinetic parameters of a phosphatidylinositol 3,4,5-trisphosphate (PIP(3)) second messenger signaling process that is deregulated in many tumors. The experimental approach included the activation of endogenous phosphoinositide 3-kinase (PI3K) by chemically induced recruitment of a regulatory peptide, reversible inhibition of PI3K using a kinase inhibitor, and monitoring of the PI3K-mediated production of PIP(3) lipids using the pleckstrin homology (PH) domain of Akt. We found that an intuitively planned and established experimental protocol did not yield data from which relevant parameters could be inferred. Starting from a set of poorly defined model parameters derived from the intuitively planned experiment, we calculated concentration-time profiles for both the inducing and the inhibitory compound that would minimize the predicted uncertainty of parameter estimates. Two cycles of optimization and experimentation were sufficient to narrowly confine the model parameters, with the mean variance of estimates dropping more than sixty-fold. Thus, optimal experimental design proved to be a powerful strategy to minimize the number of experiments needed to infer biological parameters from a cell signaling assay.
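
    The core idea, choosing inputs that minimize the predicted uncertainty of the estimates, can be sketched with a D-optimality criterion on the sensitivity matrix; this generic, brute-force version is a stand-in for the authors' concentration-profile optimization, and the exhaustive search is only feasible for small candidate sets.

      import numpy as np
      from itertools import combinations

      def d_optimality(S):
          # log det of the Fisher information S'S; larger means tighter estimates.
          sign, logdet = np.linalg.slogdet(S.T @ S)
          return logdet if sign > 0 else -np.inf

      def best_design(S_all, k):
          # Pick the k design points whose sensitivity rows maximize D-optimality.
          best, best_val = None, -np.inf
          for rows in combinations(range(S_all.shape[0]), k):
              val = d_optimality(S_all[list(rows)])
              if val > best_val:
                  best, best_val = rows, val
          return best, best_val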

  3. Mapping (dis)agreement in hydrologic projections

    NASA Astrophysics Data System (ADS)

    Melsen, Lieke A.; Addor, Nans; Mizukami, Naoki; Newman, Andrew J.; Torfs, Paul J. J. F.; Clark, Martyn P.; Uijlenhoet, Remko; Teuling, Adriaan J.

    2018-03-01

    Hydrologic projections are of vital socio-economic importance. However, they are also prone to uncertainty. In order to establish a meaningful range of storylines to support water managers in decision making, we need to reveal the relevant sources of uncertainty. Here, we systematically and extensively investigate uncertainty in hydrologic projections for 605 basins throughout the contiguous US. We show that in the majority of the basins, the sign of change in average annual runoff and discharge timing for the period 2070-2100 compared to 1985-2008 differs among combinations of climate models, hydrologic models, and parameters. Mapping the results revealed that different sources of uncertainty dominate in different regions. Hydrologic model induced uncertainty in the sign of change in mean runoff was related to snow processes and aridity, whereas uncertainty in both mean runoff and discharge timing induced by the climate models was related to disagreement among the models regarding the change in precipitation. Overall, disagreement on the sign of change was more widespread for the mean runoff than for the discharge timing. The results demonstrate the need to define a wide range of quantitative hydrologic storylines, including parameter, hydrologic model, and climate model forcing uncertainty, to support water resource planning.

  4. A Driving Behaviour Model of Electrical Wheelchair Users

    PubMed Central

    Hamam, Y.; Djouani, K.; Daachi, B.; Steyn, N.

    2016-01-01

    In spite of the availability of powered wheelchairs, some users still experience steering challenges and manoeuvring difficulties that limit their capacity to navigate effectively. For such users, steering support and assistive systems may be very necessary. For the assistance to be appreciated, the assistive control needs to adapt to the user's steering behaviour. This paper contributes to wheelchair steering improvement by modelling the steering behaviour of powered wheelchair users for integration into the control system. More precisely, the modelling is based on the improved Directed Potential Field (DPF) method for trajectory planning. The method facilitates the formulation of a simple behaviour model that is also linear in its parameters. To obtain steering data for parameter identification, seven individuals drove the wheelchair in different virtual worlds on the augmented platform. The obtained data allowed the estimation of user parameters with the ordinary least squares method, with satisfactory regression analysis results. PMID:27148362
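
    Because the model is linear in its parameters, the identification step reduces to ordinary least squares on a regressor matrix built from the recorded driving data; Phi and y below are placeholders for the DPF regressors and the observed steering outputs.

      import numpy as np

      def identify_parameters(Phi, y):
          # Solve y ~ Phi @ theta in the least-squares sense and report R^2.
          theta, *_ = np.linalg.lstsq(Phi, y, rcond=None)
          residual = y - Phi @ theta
          r2 = 1.0 - (residual @ residual) / ((y - y.mean()) @ (y - y.mean()))
          return theta, r2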

  5. Orbit control of a stratospheric satellite with parameter uncertainties

    NASA Astrophysics Data System (ADS)

    Xu, Ming; Huo, Wei

    2016-12-01

    When a stratospheric satellite is carried by the prevailing winds in the stratosphere, its cross-track displacement needs to be controlled to maintain constant-latitude orbital flight. To design the orbit control system, a 6 degree-of-freedom (DOF) model of the satellite is established based on the second Lagrangian formulation. It is proven that input/output feedback linearization theory cannot be directly applied to orbit control with this model; thus, three subsystem models are deduced from the 6-DOF model to develop a sequential nonlinear control strategy. The control strategy includes an adaptive controller for the balloon-tether subsystem with uncertain balloon parameters, a PD controller based on feedback linearization for the tether-sail subsystem, and a sliding mode controller for the sail-rudder subsystem with uncertain sail parameters. Simulation studies demonstrate that the proposed control strategy is robust to uncertainties and satisfies the high-precision requirements for the orbital flight of the satellite.

  6. A generalized procedure for the prediction of multicomponent adsorption equilibria

    DOE PAGES

    Ladshaw, Austin; Yiacoumi, Sotira; Tsouris, Costas

    2015-04-07

    Prediction of multicomponent adsorption equilibria has been investigated for several decades. While there are theories available to predict the adsorption behavior of ideal mixtures, there are few purely predictive theories to account for nonidealities in real systems. Most models available for dealing with nonidealities contain interaction parameters that must be obtained through correlation with binary-mixture data. However, as the number of components in a system grows, the number of parameters needed to be obtained increases exponentially. Here, a generalized procedure is proposed, as an extension of the predictive real adsorbed solution theory, for determining the parameters of any activity model, for any number of components, without correlation. This procedure is then combined with the adsorbed solution theory to predict the adsorption behavior of mixtures. As this method can be applied to any isotherm model and any activity model, it is referred to as the generalized predictive adsorbed solution theory.

  7. Classification framework for partially observed dynamical systems

    NASA Astrophysics Data System (ADS)

    Shen, Yuan; Tino, Peter; Tsaneva-Atanasova, Krasimira

    2017-04-01

    We present a general framework for classifying partially observed dynamical systems based on the idea of learning in the model space. In contrast to the existing approaches using point estimates of model parameters to represent individual data items, we employ posterior distributions over model parameters, thus taking into account in a principled manner the uncertainty due to both the generative (observational and/or dynamic noise) and observation (sampling in time) processes. We evaluate the framework on two test beds: a biological pathway model and a stochastic double-well system. Crucially, we show that the classification performance is not impaired when the model structure used for inferring posterior distributions is much more simple than the observation-generating model structure, provided the reduced-complexity inferential model structure captures the essential characteristics needed for the given classification task.

  8. Characterizing white matter tissue in large strain via asymmetric indentation and inverse finite element modeling.

    PubMed

    Feng, Yuan; Lee, Chung-Hao; Sun, Lining; Ji, Songbai; Zhao, Xuefeng

    2017-01-01

    Characterizing the mechanical properties of white matter is important to understand and model brain development and injury. With embedded aligned axonal fibers, white matter is typically modeled as a transversely isotropic material. However, most studies characterize the white matter tissue using models with a single anisotropic invariant or in a small-strain regime. In this study, we combined a single experimental procedure - asymmetric indentation - with inverse finite element (FE) modeling to estimate the nearly incompressible transversely isotropic material parameters of white matter. A minimal form comprising three parameters was employed to simulate indentation responses in the large-strain regime. The parameters were estimated using a global optimization procedure based on a genetic algorithm (GA). Experimental data from two indentation configurations of porcine white matter, parallel and perpendicular to the axonal fiber direction, were utilized to estimate model parameters. Results in this study confirmed a strong mechanical anisotropy of white matter in large strain. Further, our results suggested that both indentation configurations are needed to estimate the parameters with sufficient accuracy, and that the indenter-sample friction is important. Finally, we also showed that the estimated parameters were consistent with those previously obtained via a trial-and-error forward FE method in the small-strain regime. These findings are useful in modeling and parameterization of white matter, especially under large deformation, and demonstrate the potential of the proposed asymmetric indentation technique to characterize other soft biological tissues with transversely isotropic properties.

  9. Automatic Black-Box Model Order Reduction using Radial Basis Functions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stephanson, M B; Lee, J F; White, D A

    Finite element methods have long made use of model order reduction (MOR), particularly in the context of fast frequency sweeps. In this paper, we discuss a black-box MOR technique, applicable to many solution methods and not restricted only to spectral responses. We also discuss automated methods for generating a reduced-order model that meets a given error tolerance. Numerical examples demonstrate the effectiveness and wide applicability of the method. With the advent of improved computing hardware and numerous fast solution techniques, the field of computational electromagnetics has progressed rapidly in terms of the size and complexity of problems that can be solved. Numerous applications, however, require the solution of a problem for many different configurations, including optimization, parameter exploration, and uncertainty quantification, where the parameters that may be changed include frequency, material properties, geometric dimensions, etc. In such cases, thousands of solutions may be needed, so solve times of even a few minutes can be burdensome. Model order reduction may alleviate this difficulty by creating a small model that can be evaluated quickly. Many MOR techniques have been applied to electromagnetic problems over the past few decades, particularly in the context of fast frequency sweeps. Recent works have extended these methods to allow more than one parameter and to allow the parameters to represent material and geometric properties. There are still limitations with these methods, however. First, they almost always assume that the finite element method is used to solve the problem, so that the system matrix is a known function of the parameters. Second, although some authors have presented adaptive methods (e.g., [2]), the order of the model is often determined before the MOR process begins, with little insight into what order is actually needed to reach the desired accuracy. Finally, it is not clear how to efficiently extend most methods to the multiparameter case. This paper addresses the above shortcomings by developing a method that uses a black-box approach to the solution method, is adaptive, and is easily extensible to many parameters.
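
    A minimal sketch of an adaptive black-box surrogate in this spirit, interpolating solver outputs over the parameter space with radial basis functions and adding the candidate point where two successive reduced models disagree most; here `solve` stands in for the expensive full-order solver, and the model-difference stopping rule is an assumption rather than the paper's error estimator.

      import numpy as np
      from scipy.interpolate import RBFInterpolator

      def adaptive_rbf_rom(solve, candidates, n_init=4, tol=1e-3, seed=0):
          # candidates: (n, d) parameter points (frequency, material, geometry, ...).
          rng = np.random.default_rng(seed)
          idx = list(rng.choice(len(candidates), n_init, replace=False))
          X, Y = candidates[idx], np.array([solve(x) for x in candidates[idx]])
          prev = None
          while True:
              rom = RBFInterpolator(X, Y)
              pred = rom(candidates)
              if prev is not None:
                  gap = np.linalg.norm(pred - prev, axis=-1)  # change since last model
                  gap[idx] = 0.0                              # already sampled
                  k = int(np.argmax(gap))
                  if gap[k] < tol:
                      return rom, idx
              else:
                  k = int(rng.choice([i for i in range(len(candidates)) if i not in idx]))
              prev = pred
              idx.append(k)
              X = np.vstack([X, candidates[k]])
              Y = np.vstack([Y, solve(candidates[k])])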

  10. Comparison of particle-tracking and lumped-parameter age-distribution models for evaluating vulnerability of production wells to contamination

    USGS Publications Warehouse

    Eberts, S.M.; Böhlke, J.K.; Kauffman, L.J.; Jurgens, B.C.

    2012-01-01

    Environmental age tracers have been used in various ways to help assess vulnerability of drinking-water production wells to contamination. The most appropriate approach will depend on the information that is available and that which is desired. To understand how the well will respond to changing nonpoint-source contaminant inputs at the water table, some representation of the distribution of groundwater ages in the well is needed. Such information for production wells is sparse and difficult to obtain, especially in areas lacking detailed field studies. In this study, age distributions derived from detailed groundwater-flow models with advective particle tracking were compared with those generated from lumped-parameter models to examine conditions in which estimates from simpler, less resource-intensive lumped-parameter models could be used in place of estimates from particle-tracking models. In each of four contrasting hydrogeologic settings in the USA, particle-tracking and lumped-parameter models yielded roughly similar age distributions and largely indistinguishable contaminant trends when based on similar conceptual models and calibrated to similar tracer data. Although model calibrations and predictions were variably affected by tracer limitations and conceptual ambiguities, results illustrated the importance of full age distributions, rather than apparent tracer ages or model mean ages, for trend analysis and forecasting.
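
    A lumped-parameter model of this kind amounts to convolving the input history at the water table with an assumed age distribution; the sketch below uses the exponential (well-mixed) model, one of several standard choices, with hypothetical inputs.

      import numpy as np

      def well_concentration(c_in, times, mean_age):
          # C(t) = integral of c_in(t - tau) * g(tau) dtau on a uniform time grid.
          dt = times[1] - times[0]
          tau = np.arange(times.size) * dt
          g = np.exp(-tau / mean_age) / mean_age     # exponential age distribution
          return np.convolve(c_in, g)[: times.size] * dt

      times = np.arange(0.0, 100.0, 1.0)             # years
      c_in = np.where(times > 20.0, 1.0, 0.0)        # step input of a nonpoint-source tracer
      c_well = well_concentration(c_in, times, mean_age=15.0)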

  11. Probabilistic calibration of the SPITFIRE fire spread model using Earth observation data

    NASA Astrophysics Data System (ADS)

    Gomez-Dans, Jose; Wooster, Martin; Lewis, Philip; Spessa, Allan

    2010-05-01

    There is great interest in understanding how fire affects vegetation distribution and dynamics in the context of global vegetation modelling. A way to include these effects is through the development of embedded fire spread models. However, fire is a complex phenomenon and thus difficult to model. Statistical models based on fire return intervals or fire danger indices need large amounts of data for calibration and are often prisoners of the epoch they were calibrated to. Mechanistic models, such as SPITFIRE, try to model the complete fire phenomenon based on simple physical rules, making these models mostly independent of calibration data. However, the processes expressed in models such as SPITFIRE require many parameters. These parametrisations often rely on site-specific experiments, and in some other cases parameters cannot be measured directly. Additionally, in many cases, changes in temporal and/or spatial resolution result in parameters becoming effective. To address the difficulties with parametrisation and the often-used fitting methodologies, we propose using a probabilistic framework to calibrate some areas of the SPITFIRE fire spread model. We calibrate the model against Earth Observation (EO) data, a global and ever-expanding source of relevant data. We develop a methodology that incorporates the limitations of the EO data and reasonable prior values for the parameters, and that results in distributions of parameters, which can be used to infer the uncertainty due to parameter estimates. Additionally, the covariance structure of parameters and observations is also derived, which can help inform data-gathering efforts and model development, respectively. For this work, we focus on Southern African savannas, an important ecosystem for fire studies, and one with a good amount of EO data relevant to fire studies. As calibration datasets, we use burned area data, estimated numbers of fires, and vegetation moisture dynamics.

  12. SBSI: an extensible distributed software infrastructure for parameter estimation in systems biology.

    PubMed

    Adams, Richard; Clark, Allan; Yamaguchi, Azusa; Hanlon, Neil; Tsorman, Nikos; Ali, Shakir; Lebedeva, Galina; Goltsov, Alexey; Sorokin, Anatoly; Akman, Ozgur E; Troein, Carl; Millar, Andrew J; Goryanin, Igor; Gilmore, Stephen

    2013-03-01

    Complex computational experiments in Systems Biology, such as fitting model parameters to experimental data, can be challenging to perform. Not only do they frequently require a high level of computational power, but the software needed to run the experiment needs to be usable by scientists with varying levels of computational expertise, and modellers need to be able to obtain up-to-date experimental data resources easily. We have developed a software suite, the Systems Biology Software Infrastructure (SBSI), to facilitate the parameter-fitting process. SBSI is a modular software suite composed of three major components: SBSINumerics, a high-performance library containing parallelized algorithms for performing parameter fitting; SBSIDispatcher, a middleware application to track experiments and submit jobs to back-end servers; and SBSIVisual, an extensible client application used to configure optimization experiments and view results. Furthermore, we have created a plugin infrastructure to enable project-specific modules to be easily installed. Plugin developers can take advantage of the existing user-interface and application framework to customize SBSI for their own uses, facilitated by SBSI's use of standard data formats. All SBSI binaries and source-code are freely available from http://sourceforge.net/projects/sbsi under an Apache 2 open-source license. The server-side SBSINumerics runs on any Unix-based operating system; both SBSIVisual and SBSIDispatcher are written in Java and are platform independent, allowing use on Windows, Linux and Mac OS X. The SBSI project website at http://www.sbsi.ed.ac.uk provides documentation and tutorials.

  13. Modelling of Local Necking and Fracture in Aluminium Alloys

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Achani, D.; Eriksson, M.; Hopperstad, O. S.

    2007-05-17

    Non-linear finite element simulations are extensively used in forming and crashworthiness studies of automotive components and structures in which fracture needs to be controlled. For thin-walled ductile materials, the fracture-related phenomena that must be properly represented are thinning instability, ductile fracture, and through-thickness shear instability. Proper representation of the fracture process relies on the accuracy of the constitutive and fracture models and their parameters, which need to be calibrated through well-defined experiments. The present study focuses on local necking and fracture, which is of high industrial importance, and uses a phenomenological criterion for modelling fracture in aluminium alloys. As an accurate description of plastic anisotropy is important, advanced phenomenological constitutive equations based on the yield criterion YLD2000/YLD2003 are used. Uniaxial tensile tests and disc compression tests are performed for identification of the constitutive model parameters. Ductile fracture is described by the Cockcroft-Latham fracture criterion, and an in-plane shear test is performed to identify the fracture parameter. The reason is that in a well-designed in-plane shear test no thinning instability should occur, so it gives more direct information about the phenomenon of ductile fracture. Numerical simulations have been performed using a user-defined material model implemented in the general-purpose non-linear FE code LS-DYNA. The applicability of the model is demonstrated by correlating the predicted and experimental responses in the in-plane shear tests and additional plane strain tension tests.
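
    The Cockcroft-Latham criterion used here accumulates only the tensile part of the major principal stress over the equivalent plastic strain; a minimal sketch, where the stress and strain histories would come from the FE solution and the critical value W_c from the in-plane shear test.

      import numpy as np

      def cockcroft_latham(sigma1, eps_eq, W_c):
          # W = integral of max(sigma_1, 0) d(eps_eq); fracture once W >= W_c.
          s = np.maximum(np.asarray(sigma1), 0.0)
          W = np.sum(0.5 * (s[1:] + s[:-1]) * np.diff(eps_eq))  # trapezoidal rule
          return W, W >= W_c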

  14. A generalized multi-dimensional mathematical model for charging and discharging processes in a supercapacitor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Allu, Srikanth; Velamur Asokan, Badri; Shelton, William A

    A generalized three-dimensional computational model based on a unified formulation of the electrode-electrolyte-electrode system of an electric double-layer supercapacitor has been developed. The model accounts for charge transport across the solid-liquid system. The formulation, based on a volume-averaging process, is a widely used concept for multiphase flow equations ([28], [36]) and is analogous to the porous media theory typically employed for electrochemical systems [22], [39], [12]. This formulation is extended to the electrochemical equations for a supercapacitor in a consistent fashion, which allows for a single-domain approach with no need for explicit interfacial boundary conditions as previously employed ([38]). In this model it is easy to introduce spatio-temporal variations and anisotropies of physical properties, and it is also conducive to introducing upscaled parameters from lower length-scale simulations and experiments. Owing to the irregular geometric configurations, including the porous electrode, the charge transport and subsequent performance characteristics of the supercapacitor can be easily captured in higher dimensions. A generalized model of this nature also provides insight into the applicability of 1D models ([38]) and into where multidimensional effects need to be considered. In addition, a simple sensitivity analysis on key input parameters is performed in order to ascertain the dependence of the charge and discharge processes on these parameters. Finally, we demonstrate how this new formulation can be applied to non-planar supercapacitors.

  15. Improving flood forecasting capability of physically based distributed hydrological models by parameter optimization

    NASA Astrophysics Data System (ADS)

    Chen, Y.; Li, J.; Xu, H.

    2016-01-01

    Physically based distributed hydrological models (hereafter referred to as PBDHMs) divide the terrain of the whole catchment into a number of grid cells at fine resolution and assimilate different terrain data and precipitation to different cells. They are regarded as having the potential to improve catchment hydrological process simulation and prediction capability. In the early stage, physically based distributed hydrological models were assumed to derive their parameters directly from terrain properties, so that no parameter calibration was needed. Unfortunately, the uncertainties associated with this parameter derivation are very high, which has limited their application in flood forecasting, so parameter optimization may still be necessary. This study has two main purposes: the first is to propose a parameter optimization method for physically based distributed hydrological models in catchment flood forecasting using the particle swarm optimization (PSO) algorithm, to test its competence, and to improve its performance; the second is to explore the possibility of improving the flood forecasting capability of physically based distributed hydrological models through parameter optimization. In this paper, based on the scalar concept, a general framework for parameter optimization of PBDHMs for catchment flood forecasting is first proposed that could be used for all PBDHMs. Then, with the Liuxihe model as the study model, which is a physically based distributed hydrological model proposed for catchment flood forecasting, an improved PSO algorithm is developed for parameter optimization of the Liuxihe model in catchment flood forecasting. The improvements include adoption of a linearly decreasing inertia weight strategy and an arccosine function strategy for adjusting the acceleration coefficients. The method has been tested in two catchments of different sizes in southern China, and the results show that the improved PSO algorithm can be used effectively for Liuxihe model parameter optimization and can substantially improve the model's capability in catchment flood forecasting, thus showing that parameter optimization is necessary to improve the flood forecasting capability of physically based distributed hydrological models. It was also found that the appropriate particle number and maximum evolution number of the PSO algorithm for Liuxihe model catchment flood forecasting are 20 and 30, respectively.
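
    A sketch of the PSO loop with the linearly decreasing inertia weight follows; the arccosine adaptation of the acceleration coefficients is replaced here by fixed values, so this is a simplified stand-in rather than the paper's exact scheme. The swarm size and iteration count follow the values reported above.

      import numpy as np

      def pso_minimize(f, lb, ub, n_particles=20, n_iter=30, w_max=0.9, w_min=0.4, seed=0):
          rng = np.random.default_rng(seed)
          lb, ub = np.asarray(lb, float), np.asarray(ub, float)
          x = rng.uniform(lb, ub, (n_particles, lb.size))
          v = np.zeros_like(x)
          pbest, pval = x.copy(), np.array([f(p) for p in x])
          g, gval = pbest[np.argmin(pval)].copy(), pval.min()
          for it in range(n_iter):
              # Linearly decreasing inertia weight.
              w = w_max - (w_max - w_min) * it / max(n_iter - 1, 1)
              c1 = c2 = 2.0   # fixed here; the paper adapts these with an arccosine schedule
              r1, r2 = rng.random((2, n_particles, lb.size))
              v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
              x = np.clip(x + v, lb, ub)
              val = np.array([f(p) for p in x])
              better = val < pval
              pbest[better], pval[better] = x[better], val[better]
              if pval.min() < gval:
                  g, gval = pbest[np.argmin(pval)].copy(), pval.min()
          return g, gval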

  16. Aircraft Engine Thrust Estimator Design Based on GSA-LSSVM

    NASA Astrophysics Data System (ADS)

    Sheng, Hanlin; Zhang, Tianhong

    2017-08-01

    Given the need for a highly precise and reliable thrust estimator to achieve direct thrust control of an aircraft engine, a GSA-LSSVM-based thrust estimator design is proposed. It builds on support vector regression (SVR), in the form of the least squares support vector machine (LSSVM), together with a new optimization algorithm, the gravitational search algorithm (GSA), performing integrated modelling and parameter optimization. The results show that, compared to the particle swarm optimization (PSO) algorithm, GSA finds the unknown optimization parameters better and gives the developed model better prediction and generalization ability. The model can better predict aircraft engine thrust and thus fulfils the need for direct thrust control of aircraft engines.

  17. Design of Experiments for the Thermal Characterization of Metallic Foam

    NASA Technical Reports Server (NTRS)

    Crittenden, Paul E.; Cole, Kevin D.

    2003-01-01

    Metallic foams are being investigated for possible use in the thermal protection systems of reusable launch vehicles. As a result, the performance of these materials needs to be characterized over a wide range of temperatures and pressures. In this paper, a radiation/conduction model is presented for heat transfer in metallic foams. Candidates for the optimal transient experiment to determine the intrinsic properties of the model are found by two methods. First, an optimality criterion is used to find a single heating event from which all of the parameters can be estimated. Second, a pair of heating events is used, in which one heating event is optimal for finding the parameters related to conduction while the other is optimal for finding the parameters associated with radiation. Simulated data containing random noise were analyzed to determine the parameters using both methods. In all cases the parameter estimates could be improved by analyzing a larger data record than suggested by the optimality criterion.

  18. Parameter estimation and order selection for an empirical model of VO2 on-kinetics.

    PubMed

    Alata, O; Bernard, O

    2007-04-27

    In humans, VO2 on-kinetics are noisy numerical signals that reflect the pulmonary oxygen exchange kinetics at the onset of exercise. They are empirically modelled as a sum of an offset and delayed exponentials. The number of delayed exponentials, i.e. the order of the model, is commonly supposed to be 1 for low-intensity exercises and 2 for high-intensity exercises. As no ground truth has ever been provided to validate these postulates, physiologists still need statistical methods to verify their hypotheses about the number of exponentials of the VO2 on-kinetics, especially in the case of high-intensity exercises. Our objectives are, first, to develop accurate methods for estimating the parameters of the model at a fixed order and, second, to propose statistical tests for selecting the appropriate order. In this paper, we provide, on simulated data, the performance of simulated annealing for estimating the model parameters and the performance of information criteria for selecting the order. The simulated data are generated with both single-exponential and double-exponential models and corrupted by additive white Gaussian noise. Performance is reported at various signal-to-noise ratios (SNRs). For parameter estimation, the results show that confidence in the estimated parameters improves as the SNR of the fitted response increases. For model selection, the results show that information criteria are well-suited statistical criteria for selecting the number of exponentials.
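
    One way to realize the order test is to fit both candidate orders and compare an information criterion; the sketch below uses scipy's curve_fit in place of the paper's simulated annealing, and the starting values are illustrative.

      import numpy as np
      from scipy.optimize import curve_fit

      def vo2_model(t, A0, *p):
          # Offset plus delayed exponentials, in groups (amplitude, time constant, delay).
          y = np.full_like(t, A0, dtype=float)
          for A, tau, d in zip(p[0::3], p[1::3], p[2::3]):
              y += np.where(t > d, A * (1.0 - np.exp(-(t - d) / tau)), 0.0)
          return y

      def aic(y, y_hat, n_params):
          # Akaike information criterion for Gaussian residuals (up to a constant).
          rss = np.sum((y - y_hat) ** 2)
          return y.size * np.log(rss / y.size) + 2 * n_params

      # p1, _ = curve_fit(vo2_model, t, y, p0=[300, 1000, 30, 10])              # order 1
      # p2, _ = curve_fit(vo2_model, t, y, p0=[300, 800, 25, 10, 300, 60, 90])  # order 2
      # Keep the order with the smaller aic(y, vo2_model(t, *p), p.size).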

  19. Sequential updating of multimodal hydrogeologic parameter fields using localization and clustering techniques

    NASA Astrophysics Data System (ADS)

    Sun, Alexander Y.; Morris, Alan P.; Mohanty, Sitakanta

    2009-07-01

    Estimated parameter distributions in groundwater models may contain significant uncertainties because of data insufficiency. Therefore, adaptive uncertainty reduction strategies are needed to continuously improve model accuracy by fusing new observations. In recent years, various ensemble Kalman filters have been introduced as viable tools for updating high-dimensional model parameters. However, their usefulness is largely limited by the inherent assumption of Gaussian error statistics. Hydraulic conductivity distributions in alluvial aquifers, for example, are usually non-Gaussian as a result of complex depositional and diagenetic processes. In this study, we combine an ensemble Kalman filter with grid-based localization and Gaussian mixture model (GMM) clustering techniques for updating high-dimensional, multimodal parameter distributions via dynamic data assimilation. We introduce innovative strategies (e.g., block updating and dimension reduction) to effectively reduce the computational costs associated with these modified ensemble Kalman filter schemes. The developed data assimilation schemes are demonstrated numerically for identifying the multimodal heterogeneous hydraulic conductivity distributions in a binary facies alluvial aquifer. Our results show that localization and GMM clustering are very promising techniques for assimilating high-dimensional, multimodal parameter distributions, and they outperform the corresponding global ensemble Kalman filter analysis scheme in all scenarios considered.
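
    For orientation, the basic stochastic ensemble Kalman filter analysis step that these schemes build on, before localization and GMM clustering are added, can be sketched as follows.

      import numpy as np

      def enkf_update(X, H, y_obs, R, rng):
          # X: (n_state, n_ens) ensemble of parameters/states.
          # H: (n_obs, n_state) observation operator; R: (n_obs, n_obs) error covariance.
          n_ens = X.shape[1]
          A = X - X.mean(axis=1, keepdims=True)
          HA = H @ A
          P_yy = HA @ HA.T / (n_ens - 1) + R                   # innovation covariance
          K = (A @ HA.T / (n_ens - 1)) @ np.linalg.inv(P_yy)   # Kalman gain
          # Perturbed observations keep the analysis ensemble spread consistent.
          Y = y_obs[:, None] + rng.multivariate_normal(np.zeros(y_obs.size), R, n_ens).T
          return X + K @ (Y - H @ X)

    Localization would taper the gain with distance, and the GMM step would apply an update of this form cluster by cluster; both refinements are the subject of the paper and are not reproduced here.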

  20. Greedy Sampling and Incremental Surrogate Model-Based Tailoring of Aeroservoelastic Model Database for Flexible Aircraft

    NASA Technical Reports Server (NTRS)

    Wang, Yi; Pant, Kapil; Brenner, Martin J.; Ouellette, Jeffrey A.

    2018-01-01

    This paper presents a data analysis and modeling framework to tailor and develop a linear parameter-varying (LPV) aeroservoelastic (ASE) model database for flexible aircraft in a broad 2D flight parameter space. The Kriging surrogate model is constructed using ASE models at a fraction of the grid points within the original model database, and the ASE model at any flight condition can then be obtained simply through surrogate model interpolation. A greedy sampling algorithm is developed to select, as the next sample point, the one carrying the worst relative error between the surrogate model prediction and the benchmark model in the frequency domain among all input-output channels. The process is iterated to incrementally improve surrogate model accuracy until a pre-determined tolerance or iteration budget is met. The methodology is applied to the ASE model database of a flexible aircraft currently being tested at NASA/AFRC for flutter suppression and gust load alleviation. Our studies indicate that the proposed method can reduce the number of models in the original database by 67%. Even so, the ASE models obtained through Kriging interpolation match the models in the original database constructed directly from the physics-based tool, with the worst relative error far below 1%. The interpolated ASE model exhibits continuously varying gains along a set of prescribed flight conditions. More importantly, the selected grid points are distributed non-uniformly in the parameter space, a) capturing the distinctly different dynamic behavior and its dependence on flight parameters, and b) reiterating the need for, and utility of, adaptive space sampling techniques for ASE model database compaction. The present framework is directly extensible to high-dimensional flight parameter spaces and can be used to guide ASE model development, model order reduction, robust control synthesis, and novel vehicle design for flexible aircraft.
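
    The greedy loop can be sketched compactly with scikit-learn's Gaussian process regressor standing in for the Kriging surrogate; the paper's channel-wise frequency-domain error metric is collapsed here to a vector norm over flattened model coefficients, so this is an illustrative simplification rather than the authors' implementation.

      import numpy as np
      from sklearn.gaussian_process import GaussianProcessRegressor

      def tailor_database(params, models, tol=0.01, n_init=4, seed=0):
          # params: (n, d) flight conditions; models: (n, k) flattened ASE coefficients.
          rng = np.random.default_rng(seed)
          idx = list(rng.choice(len(params), n_init, replace=False))
          while True:
              gp = GaussianProcessRegressor().fit(params[idx], models[idx])
              pred = gp.predict(params)
              err = (np.linalg.norm(pred - models, axis=1)
                     / (np.linalg.norm(models, axis=1) + 1e-12))  # relative error
              err[idx] = 0.0                                      # already in the basis
              k = int(np.argmax(err))
              if err[k] < tol:
                  return gp, idx                                  # tolerance met
              idx.append(k)                                       # add worst-error point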

  1. Informing soil models using pedotransfer functions: challenges and perspectives

    NASA Astrophysics Data System (ADS)

    Pachepsky, Yakov; Romano, Nunzio

    2015-04-01

    Pedotransfer functions (PTFs) are empirical relationships between parameters of soil models and more easily obtainable data on soil properties. PTFs have become an indispensable tool in modeling soil processes. As alternatives to direct measurements, they bridge the data we have and the data we need by using soil survey and monitoring data to enable modeling for real-world applications. Pedotransfer is extensively used in soil models addressing the most pressing environmental issues. The following list of current issues faced by PTF development is an attempt to provoke a discussion.
    1. As more intricate biogeochemical processes are being modeled, development of PTFs for the parameters of those processes becomes essential.
    2. Since the equations expressing PTF relationships are essentially unknown, there has been a trend to employ highly nonlinear models, e.g. neural networks, which in theory are flexible enough to simulate any dependence. This, however, comes with the penalty of a large number of coefficients that are difficult to estimate reliably. A preliminary classification applied to PTF inputs, with PTF development for each of the resulting groups, may provide simple, transparent, and more reliable pedotransfer equations.
    3. The multiplicity of models, i.e. the presence of several models producing the same output variables, is commonly found in soil modeling and is a typical feature of the PTF research field. However, PTF intercomparisons are lagging behind PTF development. This is aggravated by the fact that the coefficients of PTFs based on machine-learning methods are usually not reported.
    4. The existence of PTFs is the result of some soil processes. Using models of those processes to generate PTFs and, more generally, developing physics-based PTFs remains to be explored.
    5. Estimating the variability of soil model parameters becomes increasingly important as newer modeling technologies, such as data assimilation, ensemble modeling, and model abstraction, become progressively more popular. Variability PTFs rely on the spatio-temporal dynamics of soil variables, which opens new sources of PTF inputs stemming from technological advances such as monitoring networks, remote and proximal sensing, and omics.
    6. Burgeoning PTF development has so far not addressed several persistent regional knowledge gaps. Remarkably little effort has been put into PTF development for saline soils, calcareous and gypsiferous soils, peat soils, paddy soils, soils with well-expressed shrink-swell behavior, and soils affected by freeze-thaw cycles.
    7. Soils from tropical regions are quite often treated as a pseudo-entity to which a single PTF can be applied. This assumption will no longer be needed as more regional data are accumulated and analyzed.
    8. Other advances in regional PTFs will be possible thanks to large databases of region-specific, useful PTF inputs such as moisture equivalent, laser diffractometry data, or soil specific surface.
    9. Most flux models in soils, be it for water, solutes, gas, or heat, involve parameters that are scale-dependent. Including scale dependencies in PTFs will be critical to improving PTF usability.
    10. Another scale-related matter is pedotransfer for coarse-scale soil modeling, for example in weather or climate models. Soil hydraulic parameters in these models cannot be measured, and the efficiency of the pedotransfer can be evaluated only in terms of its utility. There is a pressing need to determine combinations of pedotransfer and upscaling procedures that can lead to the derivation of suitable coarse-scale soil model parameters.
    11. A spatially coarse scale often implies coarse temporal support, which may lead to including other environmental variables, such as topographic, weather, and management attributes, in PTFs.
    12. Some PTF inputs are time- or space-dependent, and yet little is known about whether the spatial or temporal structure of PTF outputs is properly predicted from such inputs.
    13. Further exploration is needed to use PTFs as a source of hypotheses on, and insights into, relationships between soil processes and soil composition, as well as between soil structure and soil functioning.
    PTFs are empirical relationships, and their accuracy outside the database used for their development is essentially unknown. Therefore, they should never be considered an ultimate source of parameters in soil modeling. Rather, they strive to provide a balance between accuracy and availability. The primary role of PTFs is to assist in modeling for screening and comparative purposes, establishing ranges and/or probability distributions of model parameters, and creating realistic synthetic soil datasets and scenarios. Developing and improving PTFs will remain the mainstream way of packaging data and knowledge for applications of soil modeling.

  2. Optimizing Muscle Parameters in Musculoskeletal Modeling Using Monte Carlo Simulations

    NASA Technical Reports Server (NTRS)

    Hanson, Andrea; Reed, Erik; Cavanagh, Peter

    2011-01-01

    Astronauts assigned to long-duration missions experience bone and muscle atrophy in the lower limbs. The use of musculoskeletal simulation software has become a useful tool for modeling joint and muscle forces during human activity in reduced gravity, as access to direct experimentation is limited. Knowledge of muscle and joint loads can better inform the design of exercise protocols and exercise countermeasure equipment. In this study, the LifeModeler(TM) (San Clemente, CA) biomechanics simulation software was used to model a squat exercise. The initial model using default parameters yielded physiologically reasonable hip-joint forces. However, no activation was predicted in some large muscles, such as the rectus femoris, that have been shown to be active during 1-g performance of the activity. Parametric testing was conducted using Monte Carlo methods and combinatorial reduction to find a muscle parameter set that more closely matched physiologically observed activation patterns during the squat exercise. Peak hip-joint force using the default parameters was 2.96 times body weight (BW) and increased to 3.21 BW in an optimized, feature-selected test case. The rectus femoris was predicted to peak at 60.1% activation following muscle recruitment optimization, compared to 19.2% activation with default parameters. These results indicate the critical role that muscle parameters play in joint force estimation and the need for exploration of the solution space to achieve physiologically realistic muscle activation.

  3. An open, object-based modeling approach for simulating subsurface heterogeneity

    NASA Astrophysics Data System (ADS)

    Bennett, J.; Ross, M.; Haslauer, C. P.; Cirpka, O. A.

    2017-12-01

    Characterization of subsurface heterogeneity with respect to hydraulic and geochemical properties is critical in hydrogeology as their spatial distribution controls groundwater flow and solute transport. Many approaches of characterizing subsurface heterogeneity do not account for well-established geological concepts about the deposition of the aquifer materials; those that do (i.e. process-based methods) often require forcing parameters that are difficult to derive from site observations. We have developed a new method for simulating subsurface heterogeneity that honors concepts of sequence stratigraphy, resolves fine-scale heterogeneity and anisotropy of distributed parameters, and resembles observed sedimentary deposits. The method implements a multi-scale hierarchical facies modeling framework based on architectural element analysis, with larger features composed of smaller sub-units. The Hydrogeological Virtual Reality simulator (HYVR) simulates distributed parameter models using an object-based approach. Input parameters are derived from observations of stratigraphic morphology in sequence type-sections. Simulation outputs can be used for generic simulations of groundwater flow and solute transport, and for the generation of three-dimensional training images needed in applications of multiple-point geostatistics. The HYVR algorithm is flexible and easy to customize. The algorithm was written in the open-source programming language Python, and is intended to form a code base for hydrogeological researchers, as well as a platform that can be further developed to suit investigators' individual needs. This presentation will encompass the conceptual background and computational methods of the HYVR algorithm, the derivation of input parameters from site characterization, and the results of groundwater flow and solute transport simulations in different depositional settings.

  4. Model Parameter Estimation Experiment (MOPEX): An overview of science strategy and major results from the second and third workshops

    USGS Publications Warehouse

    Duan, Q.; Schaake, J.; Andreassian, V.; Franks, S.; Goteti, G.; Gupta, H.V.; Gusev, Y.M.; Habets, F.; Hall, A.; Hay, L.; Hogue, T.; Huang, M.; Leavesley, G.; Liang, X.; Nasonova, O.N.; Noilhan, J.; Oudin, L.; Sorooshian, S.; Wagener, T.; Wood, E.F.

    2006-01-01

    The Model Parameter Estimation Experiment (MOPEX) is an international project aimed at developing enhanced techniques for the a priori estimation of parameters in hydrologic models and in land surface parameterization schemes of atmospheric models. The MOPEX science strategy involves three major steps: data preparation, a priori parameter estimation methodology development, and demonstration of parameter transferability. A comprehensive MOPEX database has been developed that contains historical hydrometeorological data and land surface characteristics data for many hydrologic basins in the United States (US) and in other countries. This database is being continuously expanded to include more basins in all parts of the world. A number of international MOPEX workshops have been convened to bring together interested hydrologists and land surface modelers from all over world to exchange knowledge and experience in developing a priori parameter estimation techniques. This paper describes the results from the second and third MOPEX workshops. The specific objective of these workshops is to examine the state of a priori parameter estimation techniques and how they can be potentially improved with observations from well-monitored hydrologic basins. Participants of the second and third MOPEX workshops were provided with data from 12 basins in the southeastern US and were asked to carry out a series of numerical experiments using a priori parameters as well as calibrated parameters developed for their respective hydrologic models. Different modeling groups carried out all the required experiments independently using eight different models, and the results from these models have been assembled for analysis in this paper. This paper presents an overview of the MOPEX experiment and its design. The main experimental results are analyzed. A key finding is that existing a priori parameter estimation procedures are problematic and need improvement. Significant improvement of these procedures may be achieved through model calibration of well-monitored hydrologic basins. This paper concludes with a discussion of the lessons learned, and points out further work and future strategy.

  5. Development of response models for the Earth Radiation Budget Experiment (ERBE) sensors. Part 1: Dynamic models and computer simulations for the ERBE nonscanner, scanner and solar monitor sensors

    NASA Technical Reports Server (NTRS)

    Halyo, Nesim; Choi, Sang H.; Chrisman, Dan A., Jr.; Samms, Richard W.

    1987-01-01

    Dynamic models and computer simulations were developed for the radiometric sensors utilized in the Earth Radiation Budget Experiment (ERBE). The models were developed to understand performance, improve measurement accuracy by updating model parameters and provide the constants needed for the count conversion algorithms. Model simulations were compared with the sensor's actual responses demonstrated in the ground and inflight calibrations. The models consider thermal and radiative exchange effects, surface specularity, spectral dependence of a filter, radiative interactions among an enclosure's nodes, partial specular and diffuse enclosure surface characteristics and steady-state and transient sensor responses. Relatively few sensor nodes were chosen for the models since there is an accuracy tradeoff between increasing the number of nodes and approximating parameters such as the sensor's size, material properties, geometry, and enclosure surface characteristics. Given that the temperature gradients within a node and between nodes are small enough, approximating with only a few nodes does not jeopardize the accuracy required to perform the parameter estimates and error analyses.

  6. Toward automatic time-series forecasting using neural networks.

    PubMed

    Yan, Weizhong

    2012-07-01

    Over the past few decades, application of artificial neural networks (ANN) to time-series forecasting (TSF) has been growing rapidly due to several unique features of ANN models. However, to date, a consistent ANN performance over different studies has not been achieved. Many factors contribute to the inconsistency in the performance of neural network models. One such factor is that ANN modeling involves determining a large number of design parameters, and the current design practice is essentially heuristic and ad hoc; it does not exploit the full potential of neural networks. Systematic ANN modeling processes and strategies for TSF are, therefore, greatly needed. Motivated by this need, this paper attempts to develop an automatic ANN modeling scheme. It is based on the generalized regression neural network (GRNN), a special type of neural network. By taking advantage of several GRNN properties (i.e., a single design parameter and fast learning) and by incorporating several design strategies (e.g., fusing multiple GRNNs), we have been able to make the proposed modeling scheme effective for modeling large-scale business time series. The initial model was entered into the NN3 time-series competition. It was awarded the best prediction on the reduced dataset among approximately 60 different models submitted by scholars worldwide.
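
    As a rough sketch of the kernel-weighted prediction at the heart of a GRNN (toy data, not the authors' NN3 system; the bandwidth sigma is the single design parameter mentioned above, and "training" amounts to storing the data, which is why GRNN learning is fast):

      import numpy as np

      def grnn_predict(X_train, y_train, X_query, sigma=0.5):
          # GRNN prediction: a Gaussian-kernel-weighted average of the
          # stored training targets
          d2 = ((X_query[:, None, :] - X_train[None, :, :]) ** 2).sum(axis=-1)
          w = np.exp(-d2 / (2.0 * sigma ** 2))
          return (w @ y_train) / w.sum(axis=1)

      # one-step-ahead forecast from lagged inputs (illustrative series)
      rng = np.random.default_rng(0)
      series = np.sin(np.linspace(0, 20, 200)) + 0.05 * rng.standard_normal(200)
      lags = 4
      X = np.stack([series[i:i + lags] for i in range(len(series) - lags)])
      y = series[lags:]
      print(grnn_predict(X[:-1], y[:-1], X[-1:]), y[-1])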

  7. Interactive model evaluation tool based on IPython notebook

    NASA Astrophysics Data System (ADS)

    Balemans, Sophie; Van Hoey, Stijn; Nopens, Ingmar; Seuntjes, Piet

    2015-04-01

    In hydrological modelling, some kind of parameter optimization is mostly performed. This can be the selection of a single best parameter set, a split into behavioural and non-behavioural parameter sets based on a selected threshold, or a posterior parameter distribution derived with a formal Bayesian approach. The selection of the criterion to measure the goodness of fit (likelihood or any objective function) is an essential step in all of these methodologies and will affect the final selected parameter subset. Moreover, the discriminative power of the objective function also depends on the time period used. In practice, the optimization process is an iterative procedure. As such, in the course of the modelling process, an increasing number of simulations is performed. However, the information carried by these simulation outputs is not always fully exploited. In this respect, we developed and present an interactive environment that enables the user to intuitively evaluate the model performance. The aim is to explore the parameter space graphically and to visualize the impact of the selected objective function on model behaviour. First, a set of model simulation results is loaded along with the corresponding parameter sets and a data set of the same variable as the model outcome (mostly discharge). The ranges of the loaded parameter sets define the parameter space. The user selects which two parameters to visualise. Furthermore, an objective function and a time period of interest need to be selected. Based on this information, a two-dimensional parameter response surface is created: a scatter plot of the parameter combinations with a color scale corresponding to the goodness of fit of each parameter combination. Finally, a slider is available to change the color mapping of the points: it provides a threshold to exclude non-behavioural parameter sets, and the color scale is only attributed to the remaining parameter sets. As such, by interactively changing the settings and interpreting the graph, the user gains insight into the model structural behaviour. Moreover, a more deliberate choice of objective function and periods of high information content can be identified. The environment is written in an IPython notebook and uses the interactive functions provided by the IPython community. As such, the power of the IPython notebook as a development environment for scientific computing is illustrated (Shen, 2014).
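
    A minimal non-interactive sketch of the described response-surface view (hypothetical parameter names and toy data; in the notebook the threshold would be bound to an ipywidgets slider rather than passed as an argument):

      import numpy as np
      import matplotlib.pyplot as plt

      def response_surface(p1, p2, obj, threshold):
          # scatter all sampled parameter combinations; grey out the
          # non-behavioural sets (objective above the threshold) and map
          # the colour scale over the behavioural ones only
          ok = obj <= threshold
          plt.scatter(p1[~ok], p2[~ok], c="lightgrey", s=12)
          sc = plt.scatter(p1[ok], p2[ok], c=obj[ok], cmap="viridis", s=12)
          plt.colorbar(sc, label="objective function")
          plt.xlabel("parameter 1"); plt.ylabel("parameter 2")
          plt.show()

      rng = np.random.default_rng(0)
      p1, p2 = rng.uniform(0, 1, 500), rng.uniform(0, 1, 500)
      obj = (p1 - 0.4) ** 2 + (p2 - 0.6) ** 2      # toy goodness-of-fit
      response_surface(p1, p2, obj, threshold=0.1)
      # in the notebook: from ipywidgets import interact;
      # interact(lambda t: response_surface(p1, p2, obj, t), t=(0.0, 0.5, 0.01))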

  8. A mechanistic modeling and data assimilation framework for Mojave Desert ecohydrology

    USGS Publications Warehouse

    Ng, Gene-Hua Crystal.; Bedford, David; Miller, David

    2014-01-01

    This study demonstrates and addresses challenges in coupled ecohydrological modeling in deserts, which arise due to unique plant adaptations, marginal growing conditions, slow net primary production rates, and highly variable rainfall. We consider model uncertainty from both structural and parameter errors and present a mechanistic model for the shrub Larrea tridentata (creosote bush) under conditions found in the Mojave National Preserve in southeastern California (USA). Desert-specific plant and soil features are incorporated into the CLM-CN model by Oleson et al. (2010). We then develop a data assimilation framework using the ensemble Kalman filter (EnKF) to estimate model parameters based on soil moisture and leaf-area index observations. A new implementation procedure, the “multisite loop EnKF,” tackles parameter estimation difficulties found to affect desert ecohydrological applications. Specifically, the procedure iterates through data from various observation sites to alleviate adverse filter impacts from non-Gaussianity in small desert vegetation state values. It also readjusts inconsistent parameters and states through a model spin-up step that accounts for longer dynamical time scales due to infrequent rainfall in deserts. Observation error variance inflation may also be needed to help prevent divergence of estimates from true values. Synthetic test results highlight the importance of adequate observations for reducing model uncertainty, which can be achieved through data quality or quantity.
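
    For readers unfamiliar with the EnKF analysis step at the core of such a framework, a generic stochastic-EnKF parameter update looks roughly like this (a sketch, not the paper's multisite loop, which additionally iterates the update over observation sites with a model spin-up step in between):

      import numpy as np

      def enkf_parameter_update(params, predictions, obs, obs_err_var, inflation=1.0):
          # params:      (n_ens, n_par) parameter ensemble
          # predictions: (n_ens, n_obs) model-predicted observations per member
          # obs:         (n_obs,) observations; obs_err_var may be inflated
          #              to help prevent filter divergence, as noted above
          n_ens = params.shape[0]
          r = obs_err_var * inflation
          p_anom = params - params.mean(axis=0)
          h_anom = predictions - predictions.mean(axis=0)
          cov_ph = p_anom.T @ h_anom / (n_ens - 1)          # cross-covariance
          cov_hh = h_anom.T @ h_anom / (n_ens - 1) + r * np.eye(obs.size)
          gain = cov_ph @ np.linalg.inv(cov_hh)             # Kalman gain
          noise = np.random.default_rng(0).standard_normal((n_ens, obs.size))
          perturbed_obs = obs + np.sqrt(r) * noise
          return params + (perturbed_obs - predictions) @ gain.T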

  9. Uncertainties of flood frequency estimation approaches based on continuous simulation using data resampling

    NASA Astrophysics Data System (ADS)

    Arnaud, Patrick; Cantet, Philippe; Odry, Jean

    2017-11-01

    Flood frequency analyses (FFAs) are needed for flood risk management. Many methods exist, ranging from classical purely statistical approaches to more complex approaches based on process simulation. The results of these methods are associated with uncertainties that are sometimes difficult to estimate due to the complexity of the approaches or the number of parameters, especially for process simulation. This is the case for the simulation-based FFA approach called SHYREG presented in this paper, in which a rainfall generator is coupled with a simple rainfall-runoff model in an attempt to estimate the uncertainties due to the estimation of the seven parameters needed to estimate flood frequencies. The six parameters of the rainfall generator are mean values, so their theoretical distribution is known and can be used to estimate the generator uncertainties. In contrast, the theoretical distribution of the single hydrological model parameter is unknown; consequently, a bootstrap method is applied to estimate the calibration uncertainties. The propagation of uncertainty from the rainfall generator to the hydrological model is also taken into account. This method is applied to 1112 basins throughout France. Uncertainties coming from the SHYREG method and from purely statistical approaches are compared, and the results are discussed according to the length of the recorded observations, basin size and basin location. Uncertainties of the SHYREG method decrease as the basin size increases or as the length of the recorded flow increases. Moreover, the results show that the confidence intervals of the SHYREG method are relatively small despite the complexity of the method and the number of parameters (seven). This is due to the stability of the parameters, and it takes into account the dependence between the uncertainties of the rainfall generator and the hydrological calibration. Indeed, the uncertainties on the flow quantiles are of the same order of magnitude as those associated with the use of a statistical law with two parameters (here the generalised extreme value Type I distribution) and clearly lower than those associated with the use of a three-parameter law (here the generalised extreme value Type II distribution). For extreme flood quantiles, the uncertainties are mostly due to the rainfall generator because of the progressive saturation of the hydrological model.
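
    The bootstrap step for the single hydrological parameter can be sketched as follows (a naive pairs bootstrap with a hypothetical one-parameter runoff model; the paper does not specify its resampling scheme, and a block bootstrap would better preserve temporal correlation):

      import numpy as np
      from scipy.optimize import minimize_scalar

      def model(rain, p):
          # hypothetical one-parameter rainfall-runoff model (illustration only)
          return p * rain

      def calibrate(rain, flow):
          res = minimize_scalar(lambda p: np.sum((model(rain, p) - flow) ** 2),
                                bounds=(0.01, 1.0), method="bounded")
          return res.x

      rng = np.random.default_rng(0)
      rain = rng.gamma(2.0, 5.0, 365)
      flow = model(rain, 0.3) + rng.normal(0.0, 0.5, 365)   # synthetic "observations"

      boot = []
      for _ in range(500):
          idx = rng.integers(0, 365, 365)                   # resample pairs
          boot.append(calibrate(rain[idx], flow[idx]))
      print(np.percentile(boot, [5, 50, 95]))               # calibration uncertainty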

  10. Describing dengue epidemics: Insights from simple mechanistic models

    NASA Astrophysics Data System (ADS)

    Aguiar, Maíra; Stollenwerk, Nico; Kooi, Bob W.

    2012-09-01

    We present a set of nested models to be applied to dengue fever epidemiology. We perform a qualitative study in order to show how much complexity we really need to add into epidemiological models to be able to describe the fluctuations observed in empirical dengue hemorrhagic fever incidence data, offering a promising perspective on the inference of parameter values from dengue case notifications.

  11. NEIGHBORHOOD SCALE AIR QUALITY MODELING IN HOUSTON USING URBAN CANOPY PARAMETERS IN MM5 AND CMAQ WITH IMPROVED CHARACTERIZATION OF MESOSCALE LAKE-LAND BREEZE CIRCULATION

    EPA Science Inventory

    Advanced capability of air quality simulation models towards accurate performance at finer scales will be needed for such models to serve as tools for performing exposure and risk assessments in urban areas. It is recognized that the impact of urban features such as street and t...

  12. Gap model development, validation, and application to succession of secondary subtropical dry forests of Puerto Rico

    Treesearch

    Jennifer A. Holm; H.H. Shugart; Skip J. Van Bloem; G.R. Larocque

    2012-01-01

    Because of human pressures, the need to understand and predict the long-term dynamics and development of subtropical dry forests is urgent. Through modifications to the ZELIG simulation model, including the development of species- and site-specific parameters and internal modifications, the capability to model and predict forest change within the 4500-ha Guanica State...

  13. Determining optimal parameters in magnetic spacecraft stabilization via attitude feedback

    NASA Astrophysics Data System (ADS)

    Bruni, Renato; Celani, Fabio

    2016-10-01

    The attitude control of a spacecraft using magnetorquers can be achieved by a feedback control law which has four design parameters. However, the practical determination of appropriate values for these parameters is a critical open issue. We propose here an innovative systematic approach for finding these values: they should be those that minimize the convergence time to the desired attitude. This is a particularly difficult optimization problem, for several reasons: 1) the convergence time cannot be expressed in analytical form as a function of the parameters and initial conditions; 2) the design parameters may range over very wide intervals; 3) the convergence time also depends on the initial conditions of the spacecraft, which are not known in advance. To overcome these difficulties, we present a solution approach based on derivative-free optimization. These algorithms do not require an analytical expression of the objective function: they only need to evaluate it at a number of points. We also propose a fast probing technique to identify which regions of the search space have to be explored densely. Finally, we formulate a min-max model to find robust parameters, namely design parameters that minimize the convergence time under the worst initial conditions. Results are very promising.
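
    A toy version of the min-max formulation with a derivative-free solver (scipy's Nelder-Mead standing in for the paper's algorithm, a two-gain analytic stand-in for the attitude simulation, and assumed initial-condition samples):

      import numpy as np
      from scipy.optimize import minimize

      def convergence_time(k, x0):
          # analytic stand-in for the closed-loop attitude simulation; the
          # real objective is a simulation whose settling time has no
          # closed-form expression in the design parameters
          rate = max(k[0], 1e-3) / (1.0 + 0.1 * (k[0] ** 2 + k[1] ** 2))
          return np.log(abs(x0) / 1e-3) / rate      # time to settle to 1e-3

      def worst_case(k, x0_samples):
          # inner maximisation of the min-max model: worst settling time
          # over sampled initial conditions, which are unknown in advance
          return max(convergence_time(k, x0) for x0 in x0_samples)

      x0_samples = [0.5, 1.0, 2.0, 5.0]             # assumed initial states
      res = minimize(worst_case, x0=np.array([1.0, 1.0]),
                     args=(x0_samples,), method="Nelder-Mead")  # derivative-free
      print(res.x, res.fun)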

  14. An inverse problem for a mathematical model of aquaponic agriculture

    NASA Astrophysics Data System (ADS)

    Bobak, Carly; Kunze, Herb

    2017-01-01

    Aquaponic agriculture is a sustainable ecosystem that relies on a symbiotic relationship between fish and macrophytes. While the practice has been growing in popularity, relatively few mathematical models exist that study the system processes. In this paper, we present a system of ODEs which aims to mathematically model the population and concentration dynamics present in an aquaponic environment. Values of the parameters in the system are estimated from the literature so that simulated results can be presented to illustrate the nature of the solutions to the system. As well, a brief sensitivity analysis is performed in order to identify redundant parameters and highlight those which may need more reliable estimates. Specifically, an inverse problem with manufactured data for fish and plants is presented to demonstrate the ability of the collage theorem to recover parameter estimates.

  15. Estimation of the Reactive Flow Model Parameters for an Ammonium Nitrate-Based Emulsion Explosive Using Genetic Algorithms

    NASA Astrophysics Data System (ADS)

    Ribeiro, J. B.; Silva, C.; Mendes, R.

    2010-10-01

    A real coded genetic algorithm methodology that has been developed for the estimation of the parameters of the reaction rate equation of the Lee-Tarver reactive flow model is described in detail. This methodology seeks, in a single optimization procedure, using only one experimental result and without the need for any starting solution, the 15 parameters of the reaction rate equation that fit the numerical results to the experimental ones. Mass averaging and the plate-gap model have been used for the determination of the shock data used in the unreacted explosive JWL equation of state (EOS) assessment, and the thermochemical code THOR retrieved the data used in the detonation products' JWL EOS assessments. The developed methodology was applied to the estimation of the referred parameters for an ammonium nitrate-based emulsion explosive using poly(methyl methacrylate) (PMMA)-embedded manganin gauge pressure-time data. The obtained parameters allow a reasonably good description of the experimental data and show some peculiarities arising from the intrinsic nature of this kind of composite explosive.
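
    A compact real-coded GA of the general kind described (illustrative operators; the paper's selection, crossover and mutation schemes may differ, and the true objective would measure the misfit between simulated and measured gauge pressure-time histories):

      import numpy as np

      def real_coded_ga(objective, bounds, pop_size=60, n_gen=200,
                        p_cross=0.9, p_mut=0.1, rng=None):
          # minimal real-coded GA: tournament selection, blend crossover,
          # Gaussian mutation, one-to-one replacement of worse individuals
          rng = rng or np.random.default_rng()
          lo, hi = bounds[:, 0], bounds[:, 1]
          pop = rng.uniform(lo, hi, size=(pop_size, len(lo)))
          fit = np.array([objective(p) for p in pop])
          for _ in range(n_gen):
              i, j = rng.integers(0, pop_size, (2, pop_size))
              parents = np.where((fit[i] < fit[j])[:, None], pop[i], pop[j])
              mates = np.roll(parents, 1, axis=0)
              alpha = rng.uniform(-0.1, 1.1, parents.shape)   # blend crossover
              children = np.where(rng.random((pop_size, 1)) < p_cross,
                                  alpha * parents + (1 - alpha) * mates, parents)
              mut = rng.random(children.shape) < p_mut        # Gaussian mutation
              step = 0.05 * (hi - lo) * rng.standard_normal(children.shape)
              children = np.clip(np.where(mut, children + step, children), lo, hi)
              child_fit = np.array([objective(p) for p in children])
              better = child_fit < fit
              pop[better], fit[better] = children[better], child_fit[better]
          return pop[fit.argmin()], fit.min()

      bounds = np.array([[0.0, 10.0]] * 15)       # e.g. 15 reaction-rate parameters
      best, err = real_coded_ga(lambda p: float(np.sum((p - 3.0) ** 2)), bounds)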

  16. Parameter Stability of the Functional–Structural Plant Model GREENLAB as Affected by Variation within Populations, among Seasons and among Growth Stages

    PubMed Central

    Ma, Yuntao; Li, Baoguo; Zhan, Zhigang; Guo, Yan; Luquet, Delphine; de Reffye, Philippe; Dingkuhn, Michael

    2007-01-01

    Background and Aims It is increasingly accepted that crop models, if they are to simulate genotype-specific behaviour accurately, should simulate the morphogenetic process generating plant architecture. A functional–structural plant model, GREENLAB, was previously presented and validated for maize. The model is based on a recursive mathematical process, with parameters whose values cannot be measured directly and need to be optimized statistically. This study aims at evaluating the stability of GREENLAB parameters in response to three types of phenotype variability: (1) among individuals from a common population; (2) among populations subjected to different environments (seasons); and (3) among different development stages of the same plants. Methods Five field experiments were conducted in the course of 4 years on irrigated fields near Beijing, China. Detailed observations were conducted throughout the seasons on the dimensions and fresh biomass of all above-ground plant organs for each metamer. Growth stage-specific target files were assembled from the data for GREENLAB parameter optimization. Optimization was conducted for specific developmental stages or the entire growth cycle, for individual plants (replicates), and for different seasons. Parameter stability was evaluated by comparing the coefficient of variation (CV) of the parameters with that of the observed phenotype for the different sources of variability. A reduced data set was developed for easier model parameterization using one season, and validated for the four other seasons. Key Results and Conclusions The analysis of parameter stability among plants sharing the same environment and among populations grown in different environments indicated that the model explains some of the inter-seasonal variability of phenotype (parameters varied less than the phenotype itself), but not inter-plant variability (parameter and phenotype variability were similar). Parameter variability among developmental stages was small, indicating that parameter values were largely development-stage independent. The authors suggest that the high level of parameter stability observed in GREENLAB can be used to conduct comparisons among genotypes and, ultimately, genetic analyses. PMID:17158141

  17. A parallel calibration utility for WRF-Hydro on high performance computers

    NASA Astrophysics Data System (ADS)

    Wang, J.; Wang, C.; Kotamarthi, V. R.

    2017-12-01

    A successful modeling of complex hydrological processes comprises establishing an integrated hydrological model which simulates the hydrological processes in each water regime, calibrating and validating the model performance based on observation data, and estimating the uncertainties from different sources, especially those associated with parameters. Such a model system requires large computing resources and often has to be run on High Performance Computers (HPC). The recently developed WRF-Hydro modeling system provides a significant advancement in the capability to simulate regional water cycles more completely. The WRF-Hydro model has a large range of parameters, such as those in the input table files (GENPARM.TBL, SOILPARM.TBL and CHANPARM.TBL) and several distributed scaling factors such as OVROUGHRTFAC. These parameters affect the behavior and outputs of the model and thus may need to be calibrated against observations in order to obtain a good modeling performance. A parameter calibration tool for automated calibration and uncertainty estimation of the WRF-Hydro model can provide significant convenience for the modeling community. In this study, we developed a customized tool using the parallel version of the model-independent parameter estimation and uncertainty analysis tool PEST, enabling it to run on HPCs with the PBS and SLURM workload managers and job schedulers. We also developed a series of PEST input file templates specifically for WRF-Hydro model calibration and uncertainty analysis. Here we present a flood case study that occurred in April 2013 over the Midwest. The sensitivity and uncertainties are analyzed using the customized PEST tool we developed.

  18. MODFLOW-2000, the U.S. Geological Survey Modular Ground-Water Model -Documentation of the Hydrogeologic-Unit Flow (HUF) Package

    USGS Publications Warehouse

    Anderman, E.R.; Hill, M.C.

    2000-01-01

    This report documents the Hydrogeologic-Unit Flow (HUF) Package for the groundwater modeling computer program MODFLOW-2000. The HUF Package is an alternative internal flow package that allows the vertical geometry of the system hydrogeology to be defined explicitly within the model using hydrogeologic units that can differ from the model layers. The HUF Package works with all the processes of MODFLOW-2000. For the Ground-Water Flow Process, the HUF Package calculates effective hydraulic properties for the model layers based on the hydraulic properties of the hydrogeologic units, which are defined by the user using parameters. The hydraulic properties are used to calculate the conductance coefficients and other terms needed to solve the ground-water flow equation. The sensitivity of the model to the parameters defined within the HUF Package input file can be calculated using the Sensitivity Process, using observations defined with the Observation Process. Optimal values of the parameters can be estimated using the Parameter-Estimation Process. The HUF Package is nearly identical to the Layer-Property Flow (LPF) Package, the major difference being the definition of the vertical geometry of the system hydrogeology. Use of the HUF Package is illustrated in two test cases, which also serve to verify the performance of the package by showing that the Parameter-Estimation Process produces the true parameter values when exact observations are used.

  19. Mesoscopic modeling and parameter estimation of a lithium-ion battery based on LiFePO4/graphite

    NASA Astrophysics Data System (ADS)

    Jokar, Ali; Désilets, Martin; Lacroix, Marcel; Zaghib, Karim

    2018-03-01

    A novel numerical model for simulating the behavior of lithium-ion batteries based on LiFePO4 (LFP)/graphite is presented. The model is based on the modified Single Particle Model (SPM) coupled to a mesoscopic approach for the LFP electrode. The model comprises one representative spherical particle as the graphite electrode, and N LFP units as the positive electrode. All the SPM equations are retained to model the negative electrode performance. The mesoscopic model rests on non-equilibrium thermodynamic conditions and uses a non-monotonic open circuit potential for each unit. A parameter estimation study is also carried out to identify all the parameters needed for the model. The unknown parameters are the solid diffusion coefficient of the negative electrode (Ds,n), the reaction-rate constant of the negative electrode (Kn), the negative and positive electrode porosities (εn & εp), the initial State-Of-Charge of the negative electrode (SOCn,0), the initial partial composition of the LFP units (yk,0), the minimum and maximum resistance of the LFP units (Rmin & Rmax), and the solution resistance (Rcell). The results show that the mesoscopic model can successfully simulate the electrochemical behavior of lithium-ion batteries at low and high charge/discharge rates. The model also describes adequately the lithiation/delithiation of the LFP particles; however, it is computationally expensive compared to macro-based models.

  20. Detection of Rossby Waves in Multi-Parameters in Multi-Mission Satellite Observations and HYCOM Simulations in the Indian Ocean

    NASA Technical Reports Server (NTRS)

    Subrahmanyam, Bulusu; Heffner, David M.; Cromwell, David; Shriver, Jay F.

    2009-01-01

    Rossby waves are difficult to detect with in situ methods. However, as we show in this paper, they can be clearly identified in multiple parameters from multi-mission satellite observations of sea surface height (SSH), sea surface temperature (SST) and ocean color observations of chlorophyll-a (chl-a), as well as in 1/12-deg global HYbrid Coordinate Ocean Model (HYCOM) simulations of SSH, SST and sea surface salinity (SSS) in the Indian Ocean. While the surface structure of Rossby waves can be elucidated from comparisons of the signal in different sea surface parameters, models are needed to gain direct information about how these waves affect the ocean at depth. The first three baroclinic modes of the Rossby waves are inferred using the Fast Fourier Transform (FFT) and the two-dimensional Radon Transform (2D RT). At many latitudes the first and second baroclinic mode Rossby wave phase speeds are identified from both the satellite observations and the model parameters.
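
    The phase-speed extraction can be illustrated with a 2D FFT of a longitude-time (Hovmöller) array (synthetic data, not the authors' processing; a Radon transform of the same array is an alternative route to the same slope):

      import numpy as np

      def phase_speed(hovmoller, dx_km, dt_days):
          # dominant propagation speed from the 2-D Fourier spectrum of a
          # time-by-longitude array
          nt, nx = hovmoller.shape
          spec = np.abs(np.fft.fft2(hovmoller - hovmoller.mean())) ** 2
          spec[0, :] = 0.0
          spec[:, 0] = 0.0                             # drop stationary parts
          half = spec[: nt // 2]                       # positive frequencies
          it, ix = np.unravel_index(np.argmax(half), half.shape)
          freq = np.fft.fftfreq(nt, d=dt_days)[it]     # cycles per day
          wavenum = np.fft.fftfreq(nx, d=dx_km)[ix]    # cycles per km
          return -freq / wavenum                       # km/day; negative => westward

      x = np.arange(0.0, 6000.0, 100.0)                # km, eastward
      t = np.arange(0.0, 600.0, 1.0)[:, None]          # days
      eta = np.cos(2 * np.pi * (x / 3000.0 + 5.0 * t / 3000.0))  # 5 km/day westward
      print(phase_speed(eta, dx_km=100.0, dt_days=1.0))          # ~ -5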

  1. Optomechanical design software for segmented mirrors

    NASA Astrophysics Data System (ADS)

    Marrero, Juan

    2016-08-01

    The software package presented in this paper, still under development, was born to help analyzing the influence of the many parameters involved in the design of a large segmented mirror telescope. In summary, it is a set of tools which were added to a common framework as they were needed. Great emphasis has been made on the graphical presentation, as scientific visualization nowadays cannot be conceived without the use of a helpful 3d environment, showing the analyzed system as close to reality as possible. Use of third party software packages is limited to ANSYS, which should be available in the system only if the FEM results are needed. Among the various functionalities of the software, the next ones are worth mentioning here: automatic 3d model construction of a segmented mirror from a set of parameters, geometric ray tracing, automatic 3d model construction of a telescope structure around the defined mirrors from a set of parameters, segmented mirror human access assessment, analysis of integration tolerances, assessment of segments collision, structural deformation under gravity and thermal variation, mirror support system analysis including warping harness mechanisms, etc.

  2. The evolution of process-based hydrologic models: historical challenges and the collective quest for physical realism

    NASA Astrophysics Data System (ADS)

    Clark, Martyn P.; Bierkens, Marc F. P.; Samaniego, Luis; Woods, Ross A.; Uijlenhoet, Remko; Bennett, Katrina E.; Pauwels, Valentijn R. N.; Cai, Xitian; Wood, Andrew W.; Peters-Lidard, Christa D.

    2017-07-01

    The diversity in hydrologic models has historically led to great controversy on the correct approach to process-based hydrologic modeling, with debates centered on the adequacy of process parameterizations, data limitations and uncertainty, and computational constraints on model analysis. In this paper, we revisit key modeling challenges on requirements to (1) define suitable model equations, (2) define adequate model parameters, and (3) cope with limitations in computing power. We outline the historical modeling challenges, provide examples of modeling advances that address these challenges, and define outstanding research needs. We illustrate how modeling advances have been made by groups using models of different type and complexity, and we argue for the need to more effectively use our diversity of modeling approaches in order to advance our collective quest for physically realistic hydrologic models.

  3. The evolution of process-based hydrologic models: historical challenges and the collective quest for physical realism

    NASA Astrophysics Data System (ADS)

    Clark, M. P.; Nijssen, B.; Wood, A.; Mizukami, N.; Newman, A. J.

    2017-12-01

    The diversity in hydrologic models has historically led to great controversy on the "correct" approach to process-based hydrologic modeling, with debates centered on the adequacy of process parameterizations, data limitations and uncertainty, and computational constraints on model analysis. In this paper, we revisit key modeling challenges on requirements to (1) define suitable model equations, (2) define adequate model parameters, and (3) cope with limitations in computing power. We outline the historical modeling challenges, provide examples of modeling advances that address these challenges, and define outstanding research needs. We illustrate how modeling advances have been made by groups using models of different type and complexity, and we argue for the need to more effectively use our diversity of modeling approaches in order to advance our collective quest for physically realistic hydrologic models.

  4. Inference of reactive transport model parameters using a Bayesian multivariate approach

    NASA Astrophysics Data System (ADS)

    Carniato, Luca; Schoups, Gerrit; van de Giesen, Nick

    2014-08-01

    Parameter estimation of subsurface transport models from multispecies data requires the definition of an objective function that includes different types of measurements. Common approaches are weighted least squares (WLS), where weights are specified a priori for each measurement, and weighted least squares with weight estimation (WLS(we)), where weights are estimated from the data together with the parameters. In this study, we formulate the parameter estimation task as a multivariate Bayesian inference problem. The WLS and WLS(we) methods are special cases in this framework, corresponding to specific prior assumptions about the residual covariance matrix. The Bayesian perspective allows for generalizations to cases where residual correlation is important and for efficient inference by analytically integrating out the variances (weights) and selected covariances from the joint posterior. Specifically, the WLS and WLS(we) methods are compared to a multivariate (MV) approach that accounts for specific residual correlations without the need for explicit estimation of the error parameters. When applied to inference of reactive transport model parameters from column-scale data on dissolved species concentrations, the following results were obtained: (1) accounting for residual correlation between species provides more accurate parameter estimation for high residual correlation levels, whereas its influence on predictive uncertainty is negligible; (2) integrating out the (co)variances leads to an efficient estimation of the full joint posterior with a reduced computational effort compared to the WLS(we) method; and (3) in the presence of model structural errors, none of the methods is able to identify the correct parameter values.
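
    The analytic integration the authors exploit has a simple form in the standard case of independent Gaussian errors with a Jeffreys prior on each species' variance; a sketch of the resulting concentrated log-posterior (up to an additive constant, and ignoring the covariance terms the paper also handles):

      import numpy as np

      def log_posterior(residuals_by_species):
          # with independent Gaussian errors and a Jeffreys prior on each
          # species' variance, the variances integrate out analytically:
          #     p(theta | y)  propto  prod_i  SSR_i ** (-n_i / 2)
          logp = 0.0
          for r in residuals_by_species:
              n, ssr = len(r), float(np.dot(r, r))
              logp -= 0.5 * n * np.log(ssr)
          return logp

      # usage: logp = log_posterior([obs_i - sim_i(theta) for each species i])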

  5. [A preliminary study on dental-manpower forecasting model of Miyun County in Beijing].

    PubMed

    Huang, H; Wang, H; Yang, S

    1999-01-01

    To explore a dental-manpower forecasting model for rural regions of China and provide a reference for Chinese dental-manpower research, rural Miyun County in Beijing was chosen as a sample. Following the need-based and demand-weighted forecasting method, the WHO-CH protocol model and the corresponding JWG-6-M package developed by the authors were used to calculate the present and future need and demand for dental manpower in Miyun County. Further predictions were calculated for the effects of four modeling parameters on the demand for dental manpower. The present need and demand for oral care personnel in Miyun were 114.5 and 29.1, respectively. At present, Miyun has 43 oral care providers, who can satisfy the demand but not the need. The change in oral health demand had a major effect on the manpower forecast. Dental-manpower planning should consider the need as a prime factor but must be modified by the demand. It was suggested that the corresponding factors of oral care personnel need to be discussed further.

  6. Directions for computational mechanics in automotive crashworthiness

    NASA Technical Reports Server (NTRS)

    Bennett, James A.; Khalil, T. B.

    1993-01-01

    The automotive industry has used computational methods for crashworthiness since the early 1970's. These methods have ranged from simple lumped parameter models to full finite element models. The emergence of the full finite element models in the mid 1980's has significantly altered the research direction. However, there remains a need for both simple, rapid modeling methods and complex detailed methods. Some directions for continuing research are discussed.

  7. Directions for computational mechanics in automotive crashworthiness

    NASA Astrophysics Data System (ADS)

    Bennett, James A.; Khalil, T. B.

    1993-08-01

    The automotive industry has used computational methods for crashworthiness since the early 1970's. These methods have ranged from simple lumped parameter models to full finite element models. The emergence of the full finite element models in the mid 1980's has significantly altered the research direction. However, there remains a need for both simple, rapid modeling methods and complex detailed methods. Some directions for continuing research are discussed.

  8. Combined structures-controls optimization of lattice trusses

    NASA Technical Reports Server (NTRS)

    Balakrishnan, A. V.

    1991-01-01

    The role that distributed parameter models can play in CSI is demonstrated, in particular in combined structures-controls optimization problems of importance in preliminary design. Closed form solutions can be obtained for performance criteria such as rms attitude error, making possible analytical solutions of the optimization problem. This is in contrast to the need for numerical computer solution involving the inversion of large matrices in traditional finite element model (FEM) use. Another advantage of the analytic solution is that it can provide much needed insight into phenomena that can otherwise be obscured or difficult to discern from numerical computer results. As a compromise in level of complexity between a toy lab model and a real space structure, the lattice truss used in the EPS (Earth Pointing Satellite) was chosen. The optimization problem chosen is a generic one: minimizing the structure mass subject to a specified stability margin and to a specified upper bound on the rms attitude error, using a co-located controller and sensors. A standard FEM treating each bar as a truss element is used, while the continuum model is an anisotropic Timoshenko beam model. Performance criteria are derived for each model; for the distributed parameter model, explicit closed form solutions were obtained. Numerical results obtained by the two models show complete agreement.

  9. Study on Material Parameters Identification of Brain Tissue Considering Uncertainty of Friction Coefficient

    NASA Astrophysics Data System (ADS)

    Guan, Fengjiao; Zhang, Guanjun; Liu, Jie; Wang, Shujing; Luo, Xu; Zhu, Feng

    2017-10-01

    Accurate material parameters are critical for constructing high-biofidelity finite element (FE) models. However, it is hard to obtain brain tissue parameters accurately because of the effects of irregular geometry and uncertain boundary conditions. Considering the complexity of material tests and the uncertainty of the friction coefficient, a computational inverse method for viscoelastic material parameter identification of brain tissue is presented based on the interval analysis method. Firstly, intervals are used to quantify the friction coefficient in the boundary condition. The inverse problem of material parameter identification under an uncertain friction coefficient is then transformed into two types of deterministic inverse problem. Finally, an intelligent optimization algorithm is used to solve the two types of deterministic inverse problems quickly and accurately, and the range of material parameters can be easily acquired without the need for a variety of samples. The efficiency and convergence of this method are demonstrated by identifying the material parameters of the thalamus. The proposed method provides a potentially effective tool for building high-biofidelity human finite element models in the study of traffic accident injury.

  10. Robust design of configurations and parameters of adaptable products

    NASA Astrophysics Data System (ADS)

    Zhang, Jian; Chen, Yongliang; Xue, Deyi; Gu, Peihua

    2014-03-01

    An adaptable product can satisfy different customer requirements by changing its configuration and parameter values during the operation stage. Design of adaptable products aims at reducing the environmental impact through the replacement of multiple different products with single adaptable ones. Due to the complex architecture, multiple functional requirements, and changes of product configurations and parameter values in operation, the impact of uncertainties on the functional performance measures needs to be considered in the design of adaptable products. In this paper, a robust design approach is introduced to identify the optimal design configuration and parameters of an adaptable product whose functional performance measures are the least sensitive to uncertainties. An adaptable product in this paper is modeled by both configurations and parameters. At the configuration level, methods to model different product configuration candidates in design and different product configuration states in operation to satisfy design requirements are introduced. At the parameter level, four types of product/operating parameters and relations among these parameters are discussed. A two-level optimization approach is developed to identify the optimal design configuration and its parameter values for the adaptable product. A case study is implemented to illustrate the effectiveness of the newly developed robust adaptable design method.

  11. Investigating the Metallicity–Mixing-length Relation

    NASA Astrophysics Data System (ADS)

    Viani, Lucas S.; Basu, Sarbani; Joel Ong J., M.; Bonaca, Ana; Chaplin, William J.

    2018-05-01

    Stellar models typically use the mixing-length approximation as a way to implement convection in a simplified manner. While conventionally the value of the mixing-length parameter, α, used is the solar-calibrated value, many studies have shown that other values of α are needed to properly model stars. This uncertainty in the value of the mixing-length parameter is a major source of error in stellar models and isochrones. Using asteroseismic data, we determine the value of the mixing-length parameter required to properly model a set of about 450 stars ranging in log g, T_eff, and [Fe/H]. The relationship between the value of α required and the properties of the star is then investigated. For Eddington-atmosphere, non-diffusion models, we find that the value of α can be approximated by a linear model of the form α/α_⊙ = 5.426 - 0.101 log(g) - 1.071 log(T_eff) + 0.437 [Fe/H]. This process is repeated using a variety of model physics, as well as compared with previous studies and results from 3D convective simulations.
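
    The fitted relation is straightforward to evaluate; for roughly solar parameters it returns a ratio near unity, as expected:

      import numpy as np

      def mixing_length_ratio(logg, teff, feh):
          # alpha/alpha_sun = 5.426 - 0.101 log(g) - 1.071 log(Teff) + 0.437 [Fe/H]
          # (the paper's Eddington-atmosphere, non-diffusion fit)
          return 5.426 - 0.101 * logg - 1.071 * np.log10(teff) + 0.437 * feh

      print(mixing_length_ratio(logg=4.44, teff=5772.0, feh=0.0))  # ~0.95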

  12. Dynamic characterization and modeling of potting materials for electronics assemblies

    NASA Astrophysics Data System (ADS)

    Joshi, Vasant S.; Lee, Gilbert F.; Santiago, Jaime R.

    2017-01-01

    Prediction of the survivability of encapsulated electronic components subject to impact relies on accurate modeling, which in turn needs both static and dynamic characterization of the individual electronic components and the encapsulation material to generate reliable material parameters for a robust material model. The current focus is on potting materials to mitigate high-rate loading on impact. In this effort, difficulty arises in capturing one of the critical features characteristic of the loading environment in a high velocity impact: multiple loading events coupled with multi-axial stress states. Hence, potting materials need to be characterized well to understand their damping capacity at different frequencies and strain rates. An encapsulation scheme to protect electronic boards consists of multiple layers of filled as well as unfilled polymeric materials like Sylgard 184 and Trigger bond Epoxy # 20-3001. The experiments conducted for characterization of the materials used a Split Hopkinson Pressure Bar (SHPB) and a dynamic material analyzer (DMA). For a material which behaves in an ideal manner, a master curve can be fitted to the Williams-Landel-Ferry (WLF) model. To verify the applicability of the WLF model, a new temperature-time shift (TTS) macro was written to compare the idealized temperature shift factor with the experimental incremental shift factor. Deviations can be readily observed by comparison of the experimental data with the model fit, to determine whether the model parameters reflect the actual material behavior. Similarly, another macro written for obtaining Ogden model parameters from Hopkinson bar tests can readily indicate deviations from experimental high-strain-rate data. Experimental results for different materials used for mitigating impact, and ways to combine data from the DMA and Hopkinson bar together with modeling refinements, are presented.
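
    The WLF shift factor used in such TTS master-curve construction has a standard closed form; a sketch with the commonly quoted "universal" constants (fitted values for a specific potting compound would differ):

      def wlf_shift_factor(T, T_ref, C1=17.44, C2=51.6):
          # Williams-Landel-Ferry time-temperature shift factor:
          #     log10(a_T) = -C1 (T - T_ref) / (C2 + T - T_ref)
          # C1, C2 defaults are the oft-quoted values for T_ref = Tg
          return 10.0 ** (-C1 * (T - T_ref) / (C2 + (T - T_ref)))

      # a master curve plots data measured at T against the reduced
      # frequency omega * wlf_shift_factor(T, T_ref)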

  13. Numerical relativity waveform surrogate model for generically precessing binary black hole mergers

    NASA Astrophysics Data System (ADS)

    Blackman, Jonathan; Field, Scott E.; Scheel, Mark A.; Galley, Chad R.; Ott, Christian D.; Boyle, Michael; Kidder, Lawrence E.; Pfeiffer, Harald P.; Szilágyi, Béla

    2017-07-01

    A generic, noneccentric binary black hole (BBH) system emits gravitational waves (GWs) that are completely described by seven intrinsic parameters: the black hole spin vectors and the ratio of their masses. Simulating a BBH coalescence by solving Einstein's equations numerically is computationally expensive, requiring days to months of computing resources for a single set of parameter values. Since theoretical predictions of the GWs are often needed for many different source parameters, a fast and accurate model is essential. We present the first surrogate model for GWs from the coalescence of BBHs including all seven dimensions of the intrinsic noneccentric parameter space. The surrogate model, which we call NRSur7dq2, is built from the results of 744 numerical relativity simulations. NRSur7dq2 covers spin magnitudes up to 0.8 and mass ratios up to 2, includes all ℓ≤4 modes, begins about 20 orbits before merger, and can be evaluated in ~50 ms. We find the largest NRSur7dq2 errors to be comparable to the largest errors in the numerical relativity simulations, and more than an order of magnitude smaller than the errors of other waveform models. Our model, and more broadly the methods developed here, will enable studies that were not previously possible when using highly accurate waveforms, such as parameter inference and tests of general relativity with GW observations.

  14. Compressed Sensing for Metrics Development

    NASA Astrophysics Data System (ADS)

    McGraw, R. L.; Giangrande, S. E.; Liu, Y.

    2012-12-01

    Models by their very nature tend to be sparse in the sense that they are designed, with a few optimally selected key parameters, to provide simple yet faithful representations of a complex observational dataset or computer simulation output. This paper seeks to apply methods from compressed sensing (CS), a new area of applied mathematics currently undergoing a very rapid development (see for example Candes et al., 2006), to FASTER needs for new approaches to model evaluation and metrics development. The CS approach will be illustrated for a time series generated using a few-parameter (i.e. sparse) model. A seemingly incomplete set of measurements, taken at just a few random sampling times, is then used to recover the hidden model parameters. Remarkably, there is a sharp transition in the number of required measurements, beyond which both the model parameters and the time series are recovered exactly. Applications to data compression, data sampling/collection strategies, and to the development of metrics for model evaluation by comparison with observation (e.g. evaluation of model predictions of cloud fraction using cloud radar observations) are presented and discussed in the context of the CS approach. Cited reference: Candes, E. J., Romberg, J., and Tao, T. (2006), Robust uncertainty principles: Exact signal reconstruction from highly incomplete frequency information, IEEE Transactions on Information Theory, 52, 489-509.
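
    A self-contained toy version of the described recovery, using an l1-regularised (Lasso) fit of a sparse cosine model to a few random time samples (not the authors' code; recovery of the exact support is probabilistic and depends on the random draw):

      import numpy as np
      from sklearn.linear_model import Lasso

      rng = np.random.default_rng(1)
      n, k, m = 256, 4, 60                   # signal length, sparsity, samples

      freqs = np.sort(rng.choice(np.arange(1, n // 2), size=k, replace=False))
      t = np.arange(n)
      signal = sum(np.cos(2 * np.pi * f * t / n) for f in freqs)

      idx = np.sort(rng.choice(n, size=m, replace=False))   # few random times
      A = np.cos(2 * np.pi * np.outer(idx, np.arange(1, n // 2)) / n)
      coef = Lasso(alpha=0.05, max_iter=100_000).fit(A, signal[idx]).coef_

      print(freqs)                                    # the hidden modes...
      print(1 + np.flatnonzero(np.abs(coef) > 0.3))   # ...and the recovered ones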

  15. Flexible parameter-sparse global temperature time profiles that stabilise at 1.5 and 2.0 °C

    NASA Astrophysics Data System (ADS)

    Huntingford, Chris; Yang, Hui; Harper, Anna; Cox, Peter M.; Gedney, Nicola; Burke, Eleanor J.; Lowe, Jason A.; Hayman, Garry; Collins, William J.; Smith, Stephen M.; Comyn-Platt, Edward

    2017-07-01

    The meeting of the United Nations Framework Convention on Climate Change (UNFCCC) in December 2015 committed the parties to the convention to hold the rise in global average temperature to well below 2.0 °C above pre-industrial levels. It also committed the parties to pursue efforts to limit warming to 1.5 °C. This leads to two key questions. First, what extent of emissions reduction will achieve either target? Second, what is the benefit of the reduced climate impacts from keeping warming at or below 1.5 °C? To provide answers, climate model simulations need to follow trajectories consistent with these global temperature limits. It is useful to operate models in an inverse mode to make model-specific estimates of greenhouse gas (GHG) concentration pathways consistent with the prescribed temperature profiles. Further inversion derives related emissions pathways for these concentrations. For this to happen, and to enable climate research centres to compare GHG concentration and emission estimates, common temperature trajectory scenarios are required. Here we define algebraic curves that asymptote to a stabilised limit, while also matching the magnitude and gradient of recent warming levels. The curves are deliberately parameter-sparse, needing the prescription of just two parameters plus the final temperature. Yet despite this simplicity, they can allow for temperature overshoot and for generational changes, in which more of the effort to decelerate warming is left to future generations. The curves capture temperature profiles from the existing Representative Concentration Pathway (RCP2.6) scenario projections by a range of different Earth system models (ESMs), whose warming amounts are towards the lower end of those that society is discussing.

  16. Modeling for the optimal biodegradation of toxic wastewater in a discontinuous reactor.

    PubMed

    Betancur, Manuel J; Moreno-Andrade, Iván; Moreno, Jaime A; Buitrón, Germán; Dochain, Denis

    2008-06-01

    The degradation of toxic compounds in Sequencing Batch Reactors (SBRs) poses inhibition problems. Time Optimal Control (TOC) methods may be used to avoid such inhibition, thus exploiting the maximum capabilities of this class of reactors. Online measurements of biomass and substrate, however, are usually unavailable for wastewater applications, so TOC must rely on related variables such as dissolved oxygen and volume. Although the standard mathematical model describing the reaction phase of SBRs is good enough to explain their general behavior in uncontrolled batch mode, more detail is needed to model the dynamics when the reactor operates near the maximum degradation rate zone, as it does under TOC. In this paper two improvements to the model are suggested: including the sensor delay effects and modifying the classical Haldane curve in a piecewise manner. These modifications offer a good tradeoff between accuracy and added model complexity. Additionally, a new way to look at the Haldane K-parameters (μ_o, K_I, K_S) is described: the S-parameters (μ*, S*, S_m). These parameters have a clear physical meaning and, unlike the K-parameters, allow for a statistical treatment to find a single model that fits data from multiple experiments.
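
    The mapping from the Haldane K-parameters to two of the S-parameters follows from setting dμ/dS = 0 (the paper's definition of the third parameter, S_m, is not reproduced here):

      import numpy as np

      def haldane(S, mu0, KS, KI):
          # Haldane growth rate with substrate inhibition
          return mu0 * S / (KS + S + S ** 2 / KI)

      def s_parameters(mu0, KS, KI):
          # S_star maximises the rate; mu_star is that maximum rate
          S_star = np.sqrt(KS * KI)
          mu_star = mu0 / (1.0 + 2.0 * np.sqrt(KS / KI))
          return mu_star, S_star

      mu_star, S_star = s_parameters(mu0=0.5, KS=20.0, KI=200.0)
      assert np.isclose(haldane(S_star, 0.5, 20.0, 200.0), mu_star)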

  17. Anomalous solute transport in saturated porous media: Relating transport model parameters to electrical and nuclear magnetic resonance properties

    USGS Publications Warehouse

    Swanson, Ryan D; Binley, Andrew; Keating, Kristina; France, Samantha; Osterman, Gordon; Day-Lewis, Frederick D.; Singha, Kamini

    2015-01-01

    The advection-dispersion equation (ADE) fails to describe commonly observed non-Fickian solute transport in saturated porous media, necessitating the use of other models such as the dual-domain mass-transfer (DDMT) model. DDMT model parameters are commonly calibrated via curve fitting, providing little insight into the relation between effective parameters and physical properties of the medium. There is a clear need for material characterization techniques that can provide insight into the geometry and connectedness of pore spaces related to transport model parameters. Here, we consider proton nuclear magnetic resonance (NMR), direct-current (DC) resistivity, and complex conductivity (CC) measurements for this purpose, and assess these methods using glass beads as a control and two different samples of the zeolite clinoptilolite, a material that demonstrates non-Fickian transport due to intragranular porosity. We estimate DDMT parameters via calibration of a transport model to column-scale solute tracer tests, and compare the NMR, DC resistivity, and CC results, which reveal that grain size alone does not control transport properties and measured geophysical parameters; rather, the volume and arrangement of the pore space play important roles. NMR cannot provide estimates of more-mobile and less-mobile pore volumes in the absence of tracer tests because these estimates depend critically on the selection of a material-dependent and flow-dependent cutoff time. Increased electrical connectedness from DC resistivity measurements is associated with greater mobile pore space determined from transport model calibration. CC was hypothesized to be related to the length scales of mass transfer, but the CC response was found to be unrelated to DDMT parameters.

  18. Sensitivity analysis of a sediment dynamics model applied in a Mediterranean river basin: global change and management implications.

    PubMed

    Sánchez-Canales, M; López-Benito, A; Acuña, V; Ziv, G; Hamel, P; Chaplin-Kramer, R; Elorza, F J

    2015-01-01

    Climate change and land-use change are major factors influencing sediment dynamics. Models can be used to better understand sediment production and retention by the landscape, although their interpretation is limited by large uncertainties, including model parameter uncertainties. The uncertainties related to parameter selection may be significant and need to be quantified to improve model interpretation for watershed management. In this study, we performed a sensitivity analysis of the InVEST (Integrated Valuation of Environmental Services and Tradeoffs) sediment retention model in order to determine which model parameters had the greatest influence on model outputs, and therefore require special attention during calibration. The estimation of the sediment loads in this model is based on the Universal Soil Loss Equation (USLE). The sensitivity analysis was performed in the Llobregat basin (NE Iberian Peninsula) for exported and retained sediment, which support two different ecosystem service benefits (avoided reservoir sedimentation and improved water quality). Our analysis identified the model parameters related to the natural environment as the most influential for sediment export and retention. Accordingly, small changes in variables such as the magnitude and frequency of extreme rainfall events could cause major changes in sediment dynamics, demonstrating the sensitivity of these dynamics to climate change in Mediterranean basins. Parameters directly related to human activities and decisions (such as the cover management factor, C) were also influential, especially for exported sediment. The importance of these human-related parameters in the sediment export process suggests that mitigation measures have the potential to at least partially ameliorate climate-change-driven changes in sediment export. Copyright © 2014 Elsevier B.V. All rights reserved.
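
    For reference, the USLE at the core of the InVEST sediment model is a simple product of factors, which is what makes single-factor sensitivities easy to read off (illustrative values only):

      def usle_soil_loss(R, K, LS, C, P):
          # Universal Soil Loss Equation: A = R * K * LS * C * P
          # R rainfall-runoff erosivity, K soil erodibility, LS slope
          # length-steepness, C cover management, P support practice
          return R * K * LS * C * P

      baseline = usle_soil_loss(R=1500.0, K=0.3, LS=1.2, C=0.2, P=1.0)
      mitigated = usle_soil_loss(R=1500.0, K=0.3, LS=1.2, C=0.1, P=1.0)
      print(baseline, mitigated)   # halving C halves the predicted loss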

  19. Understanding Climate Uncertainty with an Ocean Focus

    NASA Astrophysics Data System (ADS)

    Tokmakian, R. T.

    2009-12-01

    Uncertainty in climate simulations arises from various aspects of the end-to-end process of modeling the Earth's climate. First, there is uncertainty from the structure of the climate model components (e.g. ocean/ice/atmosphere). Even the most complex models are deficient, not only in the complexity of the processes they represent, but in which processes are included in a particular model. Next, uncertainties arise from the inherent error in the initial and boundary conditions of a simulation. Initial conditions describe the state of the weather or climate at the beginning of the simulation and typically come from observations. Finally, there is the uncertainty associated with the values of parameters in the model. These parameters may represent physical constants or effects, such as ocean mixing, or non-physical aspects of modeling and computation. The uncertainty in these input parameters propagates through the non-linear model to give uncertainty in the outputs. The models in 2020 will no doubt be better than today's models, but they will still be imperfect, and development of uncertainty analysis technology is a critical aspect of understanding model realism and prediction capability. Smith [2002] and Cox and Stephenson [2007] discuss the need for methods to quantify the uncertainties within complicated systems so that limitations or weaknesses of the climate model can be understood. In making climate predictions, we need to have available both the most reliable model or simulation and methods to quantify the reliability of a simulation. If quantitative uncertainty questions of the internal model dynamics are to be answered with complex simulations such as AOGCMs, then the only known path forward is based on model ensembles that characterize behavior with alternative parameter settings [e.g. Rougier, 2007]. The relevance and feasibility of using "Statistical Analysis of Computer Code Output" (SACCO) methods for examining uncertainty in ocean circulation due to parameter specification will be described, and early results using the ocean/ice components of the CCSM climate model in a designed experiment framework will be shown. Cox, P. and D. Stephenson, Climate Change: A Changing Climate for Prediction, 2007, Science 317 (5835), 207, DOI: 10.1126/science.1145956. Rougier, J. C., 2007: Probabilistic Inference for Future Climate Using an Ensemble of Climate Model Evaluations, Climatic Change, 81, 247-264. Smith, L., 2002, What might we learn from climate forecasts? Proc. Nat'l Academy of Sciences, Vol. 99, suppl. 1, 2487-2492, doi:10.1073/pnas.012580599.

  20. Distribution-centric 3-parameter thermodynamic models of partition gas chromatography.

    PubMed

    Blumberg, Leonid M

    2017-03-31

    If both parameters (the entropy, ΔS, and the enthalpy, ΔH) of the classic van't Hoff model of the dependence of distribution coefficients (K) of analytes on temperature (T) are treated as temperature-independent constants, then the accuracy of the model is known to be insufficient for the needed accuracy of retention time prediction. A more accurate 3-parameter Clarke-Glew model offers a way to treat ΔS and ΔH as functions, ΔS(T) and ΔH(T), of T. A known T-centric construction of these functions is based on relating them to the reference values (ΔS_ref and ΔH_ref) corresponding to a predetermined reference temperature (T_ref). Choosing a single T_ref for all analytes in a complex sample or in a large database might lead to practically irrelevant values of ΔS_ref and ΔH_ref for those analytes that have too small or too large retention factors at T_ref. Breaking all analytes into several subsets, each with its own T_ref, leads to discontinuities in the analyte parameters. These problems are avoided in the K-centric modeling, where ΔS(T) and ΔH(T) and other analyte parameters are described in relation to their values corresponding to a predetermined reference distribution coefficient (K_ref), the same for all analytes. In this report, the mathematics of the K-centric modeling are described and the properties of several types of K-centric parameters are discussed. It is shown that the earlier introduced characteristic parameters of the analyte-column interaction (the characteristic temperature, T_char, and the characteristic thermal constant, θ_char) are a special, chromatographically convenient case of the K-centric parameters. Transformations of T-centric parameters into K-centric ones and vice versa, as well as transformations of one set of K-centric parameters into another set and vice versa, are described. Copyright © 2017 Elsevier B.V. All rights reserved.
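
    A sketch contrasting the 2-parameter van't Hoff model with a T-centric 3-parameter Clarke-Glew form (standard thermodynamic expressions, not the paper's notation; the paper's K-centric variant re-anchors the same expansion at a common reference distribution coefficient rather than a common T_ref):

      import numpy as np

      R = 8.314  # J/(mol K)

      def lnK_vant_hoff(T, dH, dS):
          # classic 2-parameter model with T-independent dH and dS:
          #     ln K = -dH/(R*T) + dS/R
          return -dH / (R * T) + dS / R

      def lnK_clarke_glew(T, T_ref, lnK_ref, dH_ref, dCp):
          # 3-parameter form anchored at T_ref; a constant heat capacity
          # dCp makes dH(T) = dH_ref + dCp*(T - T_ref) linear in T
          return (lnK_ref
                  + dH_ref / R * (1.0 / T_ref - 1.0 / T)
                  + dCp / R * (T_ref / T - 1.0 + np.log(T / T_ref)))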

  1. surrkick: Black-hole kicks from numerical-relativity surrogate models

    NASA Astrophysics Data System (ADS)

    Gerosa, Davide; Hébert, François; Stein, Leo C.

    2018-04-01

    surrkick quickly and reliably extracts the recoils imparted to generic, precessing black-hole binaries. It uses a numerical-relativity surrogate model to obtain the gravitational waveform for a given set of binary parameters, and from this waveform directly integrates the gravitational-wave linear momentum flux. This entirely bypasses the need for the fitting formulae that are typically used to model black-hole recoils in astrophysical contexts.
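
    A minimal sketch of the final integration step (not surrkick's actual code or API): once a linear-momentum flux time series is in hand, the net recoil follows from simple quadrature; the flux profile below is synthetic.

    ```python
    import numpy as np

    # Synthetic momentum-flux time series, standing in for the flux computed
    # from a surrogate waveform (units arbitrary).
    t = np.linspace(-100.0, 100.0, 4001)
    flux = np.exp(-(t / 10.0) ** 2) * np.sin(t) ** 2   # hypothetical dP/dt

    # Net radiated linear momentum via trapezoidal quadrature; the kick is
    # the recoil of the remnant of mass M.
    P = np.sum(0.5 * (flux[1:] + flux[:-1]) * np.diff(t))
    M = 1.0
    print("kick velocity:", -P / M)
    ```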

  2. Modeling of microporous silicon betaelectric converter with 63Ni plating in GEANT4 toolkit*

    NASA Astrophysics Data System (ADS)

    Zelenkov, P. V.; Sidorov, V. G.; Lelekov, E. T.; Khoroshko, A. Y.; Bogdanov, S. V.; Lelekov, A. T.

    2016-04-01

    A model of the electron-hole pair generation-rate distribution in the semiconductor is needed to optimize the parameters of a microporous silicon betaelectric converter, which uses 63Ni isotope radiation. Using Monte Carlo methods in the GEANT4 software with ultra-low-energy electron physics models, this distribution in silicon was calculated and approximated with an exponential function. The optimal pore configuration was estimated.
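
    The exponential approximation mentioned above amounts to a two-parameter curve fit; a sketch with a synthetic depth profile standing in for the GEANT4 output:

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    rng = np.random.default_rng(1)

    # Hypothetical depth profile of the pair-generation rate.
    depth = np.linspace(0.0, 5.0, 50)                       # micrometres
    rate = 1e4 * np.exp(-depth / 1.2) * (1 + 0.05 * rng.normal(size=50))

    def gen_rate(x, g0, L):
        # Exponential approximation of the generation-rate distribution.
        return g0 * np.exp(-x / L)

    (g0, L), _ = curve_fit(gen_rate, depth, rate, p0=(1e4, 1.0))
    print(f"g0 = {g0:.3g}, attenuation length L = {L:.3g} um")
    ```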

  3. Advanced approach to the analysis of a series of in-situ nuclear forward scattering experiments

    NASA Astrophysics Data System (ADS)

    Vrba, Vlastimil; Procházka, Vít; Smrčka, David; Miglierini, Marcel

    2017-03-01

    This study introduces a sequential fitting procedure as a specific approach to nuclear forward scattering (NFS) data evaluation. The principles and usage of this advanced evaluation method are described in detail, and its utilization is demonstrated on NFS in-situ investigations of fast processes. Such experiments frequently consist of hundreds of time spectra which need to be evaluated. The introduced procedure allows the analysis of these experiments and significantly decreases the time needed for the data evaluation. The key contributions of the study are the sequential use of the output fitting parameters of a previous data set as the input parameters for the next data set, and the model-suitability crosscheck option of applying the procedure in ascending and descending directions through the data sets. The described fitting methodology is beneficial for checking model validity and the reliability of the obtained results.
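
    The core of such a sequential procedure can be sketched in a few lines; the single-exponential spectrum model and the data below are placeholders, not the actual NFS model:

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def spectrum_model(t, amplitude, rate):
        # Placeholder model for one time spectrum.
        return amplitude * np.exp(-rate * t)

    rng = np.random.default_rng(2)
    t = np.linspace(0.1, 10.0, 200)
    # Hypothetical series of time spectra from an in-situ experiment.
    datasets = [spectrum_model(t, 10.0, 0.5 + 0.01 * k)
                + 0.1 * rng.normal(size=t.size) for k in range(100)]

    params = (1.0, 1.0)   # crude initial guess, needed for the first set only
    fits = []
    for data in datasets:
        # Key idea: the output parameters of the previous data set seed the
        # fit of the next one.
        params, _ = curve_fit(spectrum_model, t, data, p0=params)
        fits.append(params)
    # Re-running the loop over datasets[::-1] gives the descending-direction
    # crosscheck of model suitability described above.
    ```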

  4. Auto-tuning for NMR probe using LabVIEW

    NASA Astrophysics Data System (ADS)

    Quen, Carmen; Pham, Stephanie; Bernal, Oscar

    2014-03-01

    The typical manual NMR-tuning method is not suitable for broadband spectra spanning linewidths of several megahertz. Among the main problems encountered during manual tuning are pulse-power reproducibility, baselines, and transmission-line reflections, to name a few. We present the design of an auto-tuning system using the graphical programming language LabVIEW to minimize these problems. The program uses a simplified model of the NMR probe conditions near perfect tuning to mimic the tuning process and predict the capacitor-shaft positions needed to achieve the desired impedance. The tuning capacitors of the probe are controlled by stepper motors through a LabVIEW/computer interface. Our program calculates the effective capacitance needed to tune the probe and provides control parameters to advance the motors in the right direction. The impedance reading of a network analyzer can be used to correct the model parameters in real time for feedback control.
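
    A schematic of the feedback idea in Python (the actual system is a LabVIEW implementation): a simplified reflection model near perfect tuning tells the controller which way to step the motor; all constants are hypothetical.

    ```python
    import numpy as np

    def reflection(C, C_opt=12.0, width=0.5):
        # Simplified stand-in for the probe model near perfect tuning:
        # reflected power is minimal when the capacitance hits C_opt (pF).
        return 1.0 - np.exp(-((C - C_opt) / width) ** 2)

    C = 10.0          # current capacitor position, pF
    dC = 0.01         # capacitance change used to probe the slope
    for _ in range(200):
        # The local slope of the reflection tells the controller which
        # direction to advance the stepper motor.
        slope = (reflection(C + dC) - reflection(C - dC)) / (2 * dC)
        C -= 0.05 * np.sign(slope)
        if reflection(C) < 0.01:
            break
    print("tuned capacitance:", C)
    ```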

  5. Accounting for uncertainty in model-based prevalence estimation: paratuberculosis control in dairy herds.

    PubMed

    Davidson, Ross S; McKendrick, Iain J; Wood, Joanna C; Marion, Glenn; Greig, Alistair; Stevenson, Karen; Sharp, Michael; Hutchings, Michael R

    2012-09-10

    A common approach to the application of epidemiological models is to determine a single (point estimate) parameterisation using the information available in the literature. However, in many cases there is considerable uncertainty about parameter values, reflecting both the incomplete nature of current knowledge and natural variation, for example between farms. Furthermore model outcomes may be highly sensitive to different parameter values. Paratuberculosis is an infection for which many of the key parameter values are poorly understood and highly variable, and for such infections there is a need to develop and apply statistical techniques which make maximal use of available data. A technique based on Latin hypercube sampling combined with a novel reweighting method was developed which enables parameter uncertainty and variability to be incorporated into a model-based framework for estimation of prevalence. The method was evaluated by applying it to a simulation of paratuberculosis in dairy herds which combines a continuous time stochastic algorithm with model features such as within herd variability in disease development and shedding, which have not been previously explored in paratuberculosis models. Generated sample parameter combinations were assigned a weight, determined by quantifying the model's resultant ability to reproduce prevalence data. Once these weights are generated the model can be used to evaluate other scenarios such as control options. To illustrate the utility of this approach these reweighted model outputs were used to compare standard test and cull control strategies both individually and in combination with simple husbandry practices that aim to reduce infection rates. The technique developed has been shown to be applicable to a complex model incorporating realistic control options. For models where parameters are not well known or subject to significant variability, the reweighting scheme allowed estimated distributions of parameter values to be combined with additional sources of information, such as that available from prevalence distributions, resulting in outputs which implicitly handle variation and uncertainty. This methodology allows for more robust predictions from modelling approaches by allowing for parameter uncertainty and combining different sources of information, and is thus expected to be useful in application to a large number of disease systems.
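
    A minimal sketch of the sampling-and-reweighting idea, with a trivial stand-in for the stochastic within-herd simulation and an assumed observed prevalence:

    ```python
    import numpy as np
    from scipy.stats import qmc

    def herd_model(beta, gamma):
        # Hypothetical stand-in for the within-herd simulation: returns a
        # simulated prevalence for one parameter combination.
        return beta / (beta + gamma)

    # Latin hypercube sample of the two uncertain parameters.
    sample = qmc.LatinHypercube(d=2, seed=3).random(n=500)
    scaled = qmc.scale(sample, [0.01, 0.01], [1.0, 1.0])
    prev = np.array([herd_model(b, g) for b, g in scaled])

    # Weight each combination by its ability to reproduce an observed
    # prevalence (assumed here to be 0.2 with s.d. 0.05).
    weights = np.exp(-0.5 * ((prev - 0.2) / 0.05) ** 2)
    weights /= weights.sum()
    print("weighted mean prevalence:", np.sum(weights * prev))
    ```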

  6. Micromechanical investigation of sand migration in gas hydrate-bearing sediments

    NASA Astrophysics Data System (ADS)

    Uchida, S.; Klar, A.; Cohen, E.

    2017-12-01

    Past field gas-production tests from hydrate-bearing sediments have indicated that sand migration is an important phenomenon that needs to be considered for successful long-term gas production. The authors previously developed a continuum-based analytical thermo-hydro-mechanical sand migration model that can be applied to predict wellbore responses during gas production. However, the parameters involved in the model still need to be calibrated and studied thoroughly, and it remains a challenge to conduct well-defined laboratory experiments of sand migration, especially in hydrate-bearing sediments. Taking advantage of the micromechanical modelling capability of the discrete element method (DEM), this work presents a first step towards quantifying one of the model parameters, which governs stress reduction due to grain detachment. Grains represented by DEM particles are randomly removed from an isotropically loaded DEM specimen, and statistical analyses reveal a linear proportionality between the normalized volume of detached solids and the normalized stress reduction. DEM specimens with different porosities (different packing densities) are also considered, and statistical analyses show a clear transition between loose-sand and dense-sand behavior, characterized by the relative density.
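
    The proportionality constant itself reduces to a one-parameter regression through the origin; a sketch on synthetic DEM-style output:

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    # Hypothetical DEM outputs: normalized volume of detached solids vs.
    # normalized stress reduction for repeated random grain removals.
    v_detached = rng.uniform(0.0, 0.05, size=200)
    stress_drop = 1.8 * v_detached + 0.002 * rng.normal(size=200)

    # Least-squares slope through the origin is the proportionality
    # constant that the sand-migration model parameter would take.
    slope = np.sum(v_detached * stress_drop) / np.sum(v_detached ** 2)
    print(f"proportionality constant: {slope:.3f}")
    ```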

  7. Simulation of a Radio-Frequency Photogun for the Generation of Ultrashort Beams

    NASA Astrophysics Data System (ADS)

    Nikiforov, D. A.; Levichev, A. E.; Barnyakov, A. M.; Andrianov, A. V.; Samoilov, S. L.

    2018-04-01

    A radio-frequency photogun for the generation of ultrashort electron beams to be used in fast electron diffractoscopy, wakefield acceleration experiments, and the design of accelerating structures of the millimeter range is modeled. The beam parameters at the photogun output needed for each type of experiment are determined. The general outline of the photogun is given, its electrodynamic parameters are calculated, and the accelerating field distribution is obtained. The particle dynamics is analyzed in the context of the required output beam parameters. The optimal initial beam characteristics and field amplitudes are chosen. A conclusion is made regarding the obtained beam parameters.

  8. Uncertainty in BMP evaluation and optimization for watershed management

    NASA Astrophysics Data System (ADS)

    Chaubey, I.; Cibin, R.; Sudheer, K.; Her, Y.

    2012-12-01

    Use of computer simulation models has increased substantially for making watershed management decisions and developing strategies for water quality improvement. These models are often used to evaluate the potential benefits of various best management practices (BMPs) for reducing losses of pollutants from source areas into receiving waterbodies. Similarly, simulation models are increasingly used to optimize the selection and placement of best management practices under single (maximization of crop production or minimization of pollutant transport) and multiple objective functions. One limitation of the currently available assessment and optimization approaches is that the BMP strategies are considered deterministic. Uncertainties in input data (e.g. measured precipitation, streamflow, sediment, nutrient and pesticide losses, land use) and model parameters may result in considerable uncertainty in watershed response under various BMP options. We have developed and evaluated options to include uncertainty in BMP evaluation and optimization for watershed management. We have also applied these methods to evaluate uncertainty in ecosystem services from mixed land use watersheds. In this presentation, we will discuss methods to quantify uncertainties in BMP assessment and optimization solutions due to uncertainties in model inputs and parameters. We used a watershed model (Soil and Water Assessment Tool, or SWAT) to simulate the hydrology and water quality in a mixed land use watershed located in the Midwest USA. The SWAT model was also used to represent the various BMPs needed in the watershed to improve water quality. SWAT model parameters, land use change parameters, and climate change parameters were considered uncertain. It was observed that model parameters, land use, and climate changes resulted in considerable uncertainties in BMP performance in reducing P, N, and sediment loads. In addition, climate change scenarios also affected uncertainties in SWAT-simulated crop yields. Considerable uncertainties in the net cost and the water quality improvements resulted from uncertainties in land use, climate change, and model parameter values.

  9. On Finding and Using Identifiable Parameter Combinations in Nonlinear Dynamic Systems Biology Models and COMBOS: A Novel Web Implementation

    PubMed Central

    DiStefano, Joseph

    2014-01-01

    Parameter identifiability problems can plague biomodelers when they reach the quantification stage of development, even for relatively simple models. Structural identifiability (SI) is the primary question, usually understood as knowing which of P unknown biomodel parameters p_1, …, p_i, …, p_P are - and which are not - quantifiable in principle from particular input-output (I-O) biodata. It is not widely appreciated that the same database also can provide quantitative information about the structurally unidentifiable (not quantifiable) subset, in the form of explicit algebraic relationships among the unidentifiable p_i. Importantly, this is a first step toward finding what else is needed to quantify particular unidentifiable parameters of interest from new I-O experiments. We further develop, implement and exemplify novel algorithms that address and solve the SI problem for a practical class of ordinary differential equation (ODE) systems biology models, as a user-friendly and universally accessible web application (app), COMBOS. Users provide the structural ODE and output measurement models in one of two standard forms to a remote server via their web browser. COMBOS provides a list of uniquely and non-uniquely SI model parameters, and - importantly - the combinations of parameters not individually SI. If non-uniquely SI, it also provides the maximum number of different solutions, with important practical implications. The behind-the-scenes symbolic differential-algebra algorithms are based on computing Gröbner bases of model attributes established after some algebraic transformations, using the computer-algebra system Maxima. COMBOS was developed for facile instructional and research use as well as modeling. We use it in the classroom to illustrate SI analysis, and have simplified complex models of tumor suppressor p53 and hormone regulation based on explicit computation of parameter combinations. It is illustrated and validated here for models of moderate complexity, with and without initial conditions. Built-in examples include unidentifiable 2- to 4-compartment and HIV dynamics models. PMID:25350289
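
    A toy illustration of the structural-identifiability question (COMBOS itself runs differential-algebra computations in Maxima; the sympy sketch below only mimics the flavour): in the model y(t) = exp(-(p1 + p2) t), only the combination p1 + p2 is identifiable.

    ```python
    import sympy as sp

    p1, p2, q1, q2, t = sp.symbols('p1 p2 q1 q2 t', positive=True)

    # Output model: y depends on p1 and p2 only through their sum.
    y = sp.exp(-(p1 + p2) * t)

    # Every series coefficient of the output involves p1, p2 only via their
    # sum, so the pair (p1, p2) is not individually identifiable.
    print(sp.series(y, t, 0, 3))

    # A Groebner basis of the "equal outputs" condition shows that the data
    # pin down exactly the combination p1 + p2 and nothing finer.
    print(sp.groebner([p1 + p2 - (q1 + q2)], p1, p2, order='lex'))
    ```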

  10. Characterization and Modeling of Indium Gallium Antimonide Avalanche Photodiode and of Indium Gallium Arsenide Two-band Detector

    NASA Technical Reports Server (NTRS)

    2006-01-01

    A model of the optical properties of Al(x)Ga(1-x)As(y)Sb(1-y) and In(x)Ga(1-x)As(y)Sb(1-y) is presented, including the refractive, extinction, absorption and reflection coefficients in terms of the optical dielectric function of the materials. Energy levels and model parameters for each binary compound are interpolated to obtain the needed ternaries and quaternaries for various compositions. Bowing parameters are considered in the interpolation scheme to take into account the deviation of the calculated ternary and quaternary values from experimental data due to lattice disorders. The inclusion of temperature effects is currently being considered.
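
    The interpolation with a bowing correction has a standard quadratic form; a sketch with illustrative numbers (not the values used in the report):

    ```python
    def ternary_parameter(p_A, p_B, x, bowing=0.0):
        """Interpolate a parameter of the ternary A(x)B(1-x) from its binary
        endpoint values, with a bowing correction for the deviation from
        linearity caused by lattice disorder."""
        return x * p_A + (1.0 - x) * p_B - bowing * x * (1.0 - x)

    # Illustrative band-gap numbers in eV.
    Eg_InAs, Eg_GaAs, b = 0.35, 1.42, 0.48
    print(ternary_parameter(Eg_InAs, Eg_GaAs, x=0.53, bowing=b))
    ```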

  11. Application of latent variable model in Rosenberg self-esteem scale.

    PubMed

    Leung, Shing-On; Wu, Hui-Ping

    2013-01-01

    Latent variable models (LVM) are applied to the Rosenberg Self-Esteem Scale (RSES). Parameter estimation automatically gives negative signs, hence no recoding is necessary for negatively scored items. Bad items can be located through parameter estimates, item characteristic curves and other measures. Two factors are extracted, one on self-esteem and the other on the tendency to take moderate views, with the latter not often covered in previous studies. A goodness-of-fit measure based on two-way margins is used, but more work is needed. Results show that the scaling provided by models with a more formal statistical grounding correlates highly with the conventional method, which may provide justification for the usual practice.
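
    As a rough analogue of the two-factor extraction (the study fits a latent variable model for categorical items; the Gaussian factor analysis below is only a stand-in):

    ```python
    import numpy as np
    from sklearn.decomposition import FactorAnalysis

    rng = np.random.default_rng(5)

    # Synthetic 10-item responses: factor 1 drives all items, factor 2 adds
    # a weaker common tendency (e.g. taking moderate views).
    n = 500
    f1, f2 = rng.normal(size=(2, n))
    items = np.column_stack(
        [f1 + 0.5 * f2 + 0.3 * rng.normal(size=n) for _ in range(10)])

    fa = FactorAnalysis(n_components=2, random_state=0).fit(items)
    print(fa.components_.round(2))   # loadings; signs come out automatically
    ```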

  12. Nonextensivity at the Circum-Pacific subduction zones-Preliminary studies

    NASA Astrophysics Data System (ADS)

    Scherrer, T. M.; França, G. S.; Silva, R.; de Freitas, D. B.; Vilar, C. S.

    2015-05-01

    Following the fragment-asperity interaction model introduced by Sotolongo-Costa and Posadas (2004) and revised by Silva et al. (2006), we try to explain the nonextensive effect in the context of the asperity model designed by Lay and Kanamori (1981). To address this issue, we used data from the NEIC catalog for the decade 2001-2010 to investigate the so-called Circum-Pacific subduction zones. We propose a geophysical explanation for the nonextensive parameter q. The results need further investigation; however, evidence of a correlation between the nonextensive parameter and the asperity model is shown, i.e., we show that the q-value is higher for areas with larger asperities and stronger coupling.
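
    The nonextensive parameter q is typically obtained by fitting a Tsallis q-exponential to the normalized cumulative magnitude distribution; a sketch on synthetic data (the full fragment-asperity functional form is not reproduced here):

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def q_exp(x, a, q):
        # Tsallis q-exponential exp_q(-a x); tends to exp(-a x) as q -> 1.
        return np.maximum(1.0 + (q - 1.0) * a * x, 1e-12) ** (1.0 / (1.0 - q))

    rng = np.random.default_rng(6)
    m = np.linspace(4.0, 8.0, 40)                 # magnitudes
    frac = q_exp(m - 4.0, 1.1, 1.65) * (1 + 0.02 * rng.normal(size=40))

    (a, q), _ = curve_fit(q_exp, m - 4.0, frac, p0=(1.0, 1.5))
    print(f"fitted nonextensive parameter q = {q:.3f}")
    ```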

  13. Framework for Uncertainty Assessment - Hanford Site-Wide Groundwater Flow and Transport Modeling

    NASA Astrophysics Data System (ADS)

    Bergeron, M. P.; Cole, C. R.; Murray, C. J.; Thorne, P. D.; Wurstner, S. K.

    2002-05-01

    Pacific Northwest National Laboratory is developing and implementing an uncertainty estimation methodology for use in future site assessments that addresses parameter uncertainty as well as uncertainties related to the groundwater conceptual model. The long-term goals of the effort are the development and implementation of an uncertainty estimation methodology for use in future assessments and analyses being made with the Hanford site-wide groundwater model. The basic approach in the framework developed for uncertainty assessment consists of: 1) Alternate conceptual model (ACM) identification, to identify and document the major features and assumptions of each conceptual model; the process must also include a periodic review of the existing and proposed new conceptual models as data or understanding become available. 2) ACM development of each identified conceptual model through inverse modeling with historical site data. 3) ACM evaluation, to identify which of the conceptual models are plausible and should be included in any subsequent uncertainty assessments. 4) ACM uncertainty assessments, carried out only for those ACMs determined to be plausible through comparison with historical observations and model structure identification measures. The parameter uncertainty assessment process generally involves: a) model complexity optimization, to identify the important or relevant parameters for the uncertainty analysis; b) characterization of parameter uncertainty, to develop the pdfs for the important uncertain parameters, including identification of any correlations among parameters; c) propagation of uncertainty, to propagate parameter uncertainties (e.g., by first-order second-moment methods if applicable, or by a Monte Carlo approach) through the model to determine the uncertainty in the model predictions of interest. 5) Estimation of combined ACM and scenario uncertainty by a double sum, with each component of the inner sum (an individual CCDF) representing the parameter uncertainty associated with a particular scenario and ACM, and the outer sum enumerating the various plausible ACM and scenario combinations, in order to represent the combined estimate of uncertainty (a family of CCDFs). A final important part of the framework is the identification, enumeration, and documentation of all assumptions: those made during conceptual model development, those required by the mathematical model, those required by the numerical model, those made during the spatial and temporal discretization process, those needed to assign the statistical model and associated parameters that describe the uncertainty in the relevant input parameters, and finally those required by the propagation method. Pacific Northwest National Laboratory is operated for the U.S. Department of Energy under Contract DE-AC06-76RL01830.

  14. Conditional parametric models for storm sewer runoff

    NASA Astrophysics Data System (ADS)

    Jonsdottir, H.; Nielsen, H. Aa; Madsen, H.; Eliasson, J.; Palsson, O. P.; Nielsen, M. K.

    2007-05-01

    The method of conditional parametric modeling is introduced for flow prediction in a sewage system. It is a well-known fact that in hydrological modeling the response (runoff) to input (precipitation) varies depending on soil moisture and several other factors. Consequently, nonlinear input-output models are needed. The model formulation described in this paper is similar to the traditional linear models like finite impulse response (FIR) and autoregressive exogenous (ARX) models, except that the parameters vary as a function of some external variables. The parameter variation is modeled by local lines, using kernels for local linear regression. As such, the method might be referred to as a nearest-neighbor method. The results achieved in this study were compared to results from the conventional linear methods, FIR and ARX. The increase in the coefficient of determination is substantial. Furthermore, the new approach conserves the mass balance better. Hence this new approach looks promising for various hydrological models and analyses.
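
    The kernel-weighted local-line estimate at a query point can be written down directly; a minimal sketch with a Gaussian kernel and hypothetical data:

    ```python
    import numpy as np

    def local_linear(x_query, x, y, bandwidth):
        """Conditional-parametric estimate at x_query: fit a local line to
        (x, y) with Gaussian kernel weights centred on x_query."""
        w = np.exp(-0.5 * ((x - x_query) / bandwidth) ** 2)
        X = np.column_stack([np.ones_like(x), x - x_query])
        beta = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * y))
        return beta[0]   # local intercept = estimate at x_query

    rng = np.random.default_rng(7)
    x = rng.uniform(0, 10, 300)
    y = np.sin(x) + 0.2 * rng.normal(size=300)    # nonlinear input-output map
    print(local_linear(5.0, x, y, bandwidth=0.8), np.sin(5.0))
    ```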

  15. Atmospheric parameters, spectral indexes and their relation to CPV spectral performance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Núñez, Rubén, E-mail: ruben.nunez@ies-def.upm.es; Antón, Ignacio; Askins, Steve

    2014-09-26

    Air mass and atmospheric components (basically aerosol optical depth (AOD) and precipitable water (PW)) define the absorption of the sunlight arriving at Earth. Radiative models such as SMARTS or MODTRAN use these parameters to generate an equivalent spectrum. However, complex and expensive instruments (such as the AERONET network devices) are needed to obtain AOD and PW. On the other hand, the use of isotype cells is a convenient way to characterize a site spectrally for CPV, considering that they provide the photocurrents of the different internal subcells individually. By crossing data from an AERONET station and a Tri-band Spectroheliometer, a model that correlates Spectral Mismatch Ratios and atmospheric parameters is proposed. Considering the number of stations in the AERONET network, this model may be used to estimate the spectral influence on the energy performance of CPV systems close to all the stations worldwide.

  16. Calibration and compensation method of three-axis geomagnetic sensor based on pre-processing total least square iteration

    NASA Astrophysics Data System (ADS)

    Zhou, Y.; Zhang, X.; Xiao, W.

    2018-04-01

    Since the geomagnetic sensor is susceptible to interference, a pre-processing total least squares iteration method is proposed for calibration compensation. First, the error model of the geomagnetic sensor is analyzed and a correction model is proposed; the characteristics of the model are then analyzed and converted into nine parameters. The geomagnetic data are processed by the Hilbert-Huang transform (HHT) to improve the signal-to-noise ratio, and the nine parameters are calculated using a combination of the Newton iteration method and least squares estimation. A sifter algorithm is used to filter the initial value of the iteration to ensure that the initial error is as small as possible. The experimental results show that this method needs no additional equipment or devices, can continuously update the calibration parameters, and compensates the geomagnetic sensor error better than the two-step estimation method.
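
    A simplified flavour of the parameter-solving step (6 parameters rather than the paper's 9, and ordinary rather than total least squares):

    ```python
    import numpy as np

    def calibrate(mag):
        """Fit per-axis scale factors and hard-iron offsets in one linear
        least-squares step on the quadric
        a*x^2 + b*y^2 + c*z^2 + d*x + e*y + f*z = 1.
        The 9-parameter model in the paper additionally handles cross-axis
        coupling."""
        x, y, z = mag.T
        A = np.column_stack([x**2, y**2, z**2, x, y, z])
        a, b, c, d, e, f = np.linalg.lstsq(A, np.ones(len(mag)), rcond=None)[0]
        offset = np.array([-d / (2 * a), -e / (2 * b), -f / (2 * c)])
        scale = np.sqrt(np.array([1 / a, 1 / b, 1 / c]))  # relative scales
        return offset, scale

    rng = np.random.default_rng(8)
    u = rng.normal(size=(1000, 3))
    u /= np.linalg.norm(u, axis=1, keepdims=True)
    raw = u * [1.2, 0.9, 1.05] + [0.3, -0.1, 0.2]   # distorted field readings
    print(calibrate(raw))
    ```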

  17. Sensitivity derivatives for advanced CFD algorithm and viscous modelling parameters via automatic differentiation

    NASA Technical Reports Server (NTRS)

    Green, Lawrence L.; Newman, Perry A.; Haigler, Kara J.

    1993-01-01

    The computational technique of automatic differentiation (AD) is applied to a three-dimensional thin-layer Navier-Stokes multigrid flow solver to assess the feasibility and computational impact of obtaining exact sensitivity derivatives typical of those needed for sensitivity analyses. Calculations are performed for an ONERA M6 wing in transonic flow with both the Baldwin-Lomax and Johnson-King turbulence models. The wing lift, drag, and pitching moment coefficients are differentiated with respect to two different groups of input parameters. The first group consists of the second- and fourth-order damping coefficients of the computational algorithm, whereas the second group consists of two parameters in the viscous turbulent flow physics modelling. Results obtained via AD are compared, for both accuracy and computational efficiency, with results obtained with divided differences (DD). The AD results are accurate, extremely simple to obtain, and show a significant computational advantage over those obtained by DD for some cases.
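
    The AD-versus-divided-differences comparison is easy to reproduce on a toy function; a JAX sketch in which the "flow solver" is a smooth hypothetical stand-in:

    ```python
    import jax
    import jax.numpy as jnp

    def lift_coefficient(params):
        # Hypothetical smooth stand-in for the solver output as a function
        # of the two damping coefficients.
        k2, k4 = params
        return jnp.sin(k2) * jnp.exp(-k4) + k2 * k4 ** 2

    params = jnp.array([0.5, 0.2])

    # Exact sensitivity derivatives via automatic differentiation.
    grad_ad = jax.grad(lift_coefficient)(params)

    # Central divided-difference approximation for comparison.
    eps = 1e-4
    grad_dd = jnp.array([
        (lift_coefficient(params.at[i].add(eps))
         - lift_coefficient(params.at[i].add(-eps))) / (2 * eps)
        for i in range(2)])
    print(grad_ad, grad_dd)
    ```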

  18. Multi-objective optimization model of CNC machining to minimize processing time and environmental impact

    NASA Astrophysics Data System (ADS)

    Hamada, Aulia; Rosyidi, Cucuk Nur; Jauhari, Wakhid Ahmad

    2017-11-01

    Minimizing processing time in a production system can increase the efficiency of a manufacturing company. Processing time is influenced by the application of modern technology and by the machining parameters. Modern technology can be applied through CNC machining; one machining process that can be performed on a CNC machine is turning. However, the machining parameters affect not only the processing time but also the environmental impact. Hence, an optimization model is needed to optimize the machining parameters to minimize both the processing time and the environmental impact. This research developed a multi-objective optimization model to minimize the processing time and environmental impact in a CNC turning process, yielding optimal decision variables of cutting speed and feed rate. Environmental impact is converted from environmental burden through the use of Eco-indicator 99. The model was solved using the OptQuest optimization software from Oracle Crystal Ball.
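
    One common way to pose such a bi-objective problem is a weighted sum over the two decision variables; a sketch with hypothetical surrogate objectives (the paper's actual machining models and the OptQuest solver are not reproduced):

    ```python
    import numpy as np
    from scipy.optimize import minimize

    def processing_time(v, f):
        # Illustrative surrogate: time falls as cutting speed v and feed
        # rate f rise (hypothetical constants).
        return 100.0 / (v * f)

    def environmental_impact(v, f):
        # Illustrative eco-indicator surrogate: impact grows with energy
        # use at high speed (hypothetical constants).
        return 0.002 * v ** 1.5 + 0.5 / f

    def weighted_sum(x, w=0.5):
        v, f = x
        return w * processing_time(v, f) + (1 - w) * environmental_impact(v, f)

    res = minimize(weighted_sum, x0=[100.0, 0.2],
                   bounds=[(50.0, 300.0), (0.05, 0.5)])
    print(res.x)   # cutting speed and feed rate for this weighting
    ```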

  19. An evaluation of the predictive capabilities of CTRW and MRMT

    NASA Astrophysics Data System (ADS)

    Fiori, Aldo; Zarlenga, Antonio; Gotovac, Hrvoje; Jankovic, Igor; Cvetkovic, Vladimir; Dagan, Gedeon

    2016-04-01

    The prediction capability of two approximate models of non-Fickian transport in highly heterogeneous aquifers is checked by comparison with accurate numerical simulations, for mean uniform flow of velocity U. The two models considered are the MRMT (Multi-Rate Mass Transfer) and CTRW (Continuous Time Random Walk) models. Both circumvent the need to solve the flow and transport equations by using proxy models, which provide the BTC μ(x,t) depending on a vector a of 5 unknown parameters. Although underlain by different conceptualisations, the two models have a similar mathematical structure. The proponents of the models suggest using field transport experiments at a small scale to calibrate a, toward predicting transport at larger scales. The strategy was tested with the aid of accurate numerical simulations in two and three dimensions from the literature. First, the 5 parameter values were calibrated using the simulated μ at a control plane close to the injection plane, and these same parameters were subsequently used to predict μ at 10 further control planes. It is found that the two methods perform equally well, though the parameter identification is nonunique, with a large set of parameters providing similar fits. Also, errors in the determination of the mean Eulerian velocity may lead to significant shifts in the predicted BTC. It is found that the simulated BTCs satisfy Markovianity: they can be found as n-fold convolutions of a "kernel", in line with the models' main assumption.
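
    The Markovianity check described above is a direct numerical exercise; a sketch with a hypothetical kernel BTC:

    ```python
    import numpy as np

    dt = 0.01
    t = np.arange(0.0, 60.0, dt)

    # "Kernel" BTC between adjacent control planes (hypothetical
    # heavy-tailed shape standing in for a calibrated CTRW/MRMT proxy).
    kernel = t ** 0.5 * np.exp(-t)
    kernel /= kernel.sum() * dt

    # BTC at control plane n as the n-fold convolution of the kernel.
    btc = kernel.copy()
    for _ in range(9):                         # propagate to the 10th plane
        btc = np.convolve(btc, kernel)[:t.size] * dt
    print("mass recovered at plane 10:", btc.sum() * dt)
    ```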

  20. Designing novel cellulase systems through agent-based modeling and global sensitivity analysis.

    PubMed

    Apte, Advait A; Senger, Ryan S; Fong, Stephen S

    2014-01-01

    Experimental techniques allow engineering of biological systems to modify functionality; however, there still remains a need to develop tools to prioritize targets for modification. In this study, agent-based modeling (ABM) was used to build stochastic models of complexed and non-complexed cellulose hydrolysis, including enzymatic mechanisms for endoglucanase, exoglucanase, and β-glucosidase activity. Modeling results were consistent with experimental observations of higher efficiency in complexed systems than non-complexed systems and established relationships between specific cellulolytic mechanisms and overall efficiency. Global sensitivity analysis (GSA) of model results identified key parameters for improving overall cellulose hydrolysis efficiency including: (1) the cellulase half-life, (2) the exoglucanase activity, and (3) the cellulase composition. Overall, the following parameters were found to significantly influence cellulose consumption in a consolidated bioprocess (CBP): (1) the glucose uptake rate of the culture, (2) the bacterial cell concentration, and (3) the nature of the cellulase enzyme system (complexed or non-complexed). Broadly, these results demonstrate the utility of combining modeling and sensitivity analysis to identify key parameters and/or targets for experimental improvement.

  1. Logic-based models in systems biology: a predictive and parameter-free network analysis method†

    PubMed Central

    Wynn, Michelle L.; Consul, Nikita; Merajver, Sofia D.

    2012-01-01

    Highly complex molecular networks, which play fundamental roles in almost all cellular processes, are known to be dysregulated in a number of diseases, most notably in cancer. As a consequence, there is a critical need to develop practical methodologies for constructing and analysing molecular networks at a systems level. Mathematical models built with continuous differential equations are an ideal methodology because they can provide a detailed picture of a network’s dynamics. To be predictive, however, differential equation models require that numerous parameters be known a priori and this information is almost never available. An alternative dynamical approach is the use of discrete logic-based models that can provide a good approximation of the qualitative behaviour of a biochemical system without the burden of a large parameter space. Despite their advantages, there remains significant resistance to the use of logic-based models in biology. Here, we address some common concerns and provide a brief tutorial on the use of logic-based models, which we motivate with biological examples. PMID:23072820
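
    A minimal, parameter-free logic-based model in the spirit described above (the three-node rules are invented for illustration):

    ```python
    import itertools

    def update(state):
        # Hypothetical three-node motif: input A held fixed, B activated
        # by A and repressed by C, C activated by B.
        A, B, C = state
        return (A, A and not C, B)

    # Parameter-free analysis: iterate every initial state to its attractor
    # under synchronous updating.
    for state in itertools.product([False, True], repeat=3):
        trajectory = [state]
        while trajectory[-1] not in trajectory[:-1]:
            trajectory.append(update(trajectory[-1]))
        print(state, "->", trajectory[-1])
    ```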

  3. Modelling hen harrier dynamics to inform human-wildlife conflict resolution: a spatially-realistic, individual-based approach.

    PubMed

    Heinonen, Johannes P M; Palmer, Stephen C F; Redpath, Steve M; Travis, Justin M J

    2014-01-01

    Individual-based models have gained popularity in ecology, and enable simultaneous incorporation of spatial explicitness and population dynamic processes to understand spatio-temporal patterns of populations. We introduce an individual-based model for understanding and predicting spatial hen harrier (Circus cyaneus) population dynamics in Great Britain. The model uses a landscape with habitat, prey and game management indices. The hen harrier population was initialised according to empirical census estimates for 1988/89 and simulated until 2030, and predictions for 1998, 2004 and 2010 were compared to empirical census estimates for respective years. The model produced a good qualitative match to overall trends between 1989 and 2010. Parameter explorations revealed relatively high elasticity in particular to demographic parameters such as juvenile male mortality. This highlights the need for robust parameter estimates from empirical research. There are clearly challenges for replication of real-world population trends, but this model provides a useful tool for increasing understanding of drivers of hen harrier dynamics and focusing research efforts in order to inform conflict management decisions.

  5. A wideband channel model for land mobile satellite systems

    NASA Technical Reports Server (NTRS)

    Jahn, Axel; Buonomo, Sergio; Sforza, Mario; Lutz, Erich

    1995-01-01

    A wideband channel model for Land Mobile Satellite (LMS) services is presented which characterizes the time-varying transmission channel between a satellite and a mobile user terminal. The channel model's statistical parameters are the results of fitting procedures applied to measured data. The data used for fitting have a time resolution of 33 ns, corresponding to a bandwidth of 30 MHz. Thus, the model is capable of characterizing the channel behaviour for a wide range of services, e.g., voice transmission, digital audio broadcasting (DAB), and spread-spectrum modulation schemes. The model is presented for different environments and scenarios. It is derived for a quasi-mobile user with a hand-held terminal in two different environments: rural and urban. The parameters needed for the description are (a) the number of echoes, (b) the distribution of the echo power, and (c) the distribution of the echo delay. It is shown that the direct path follows a Rician distribution whereas the reflected paths are Rayleigh/lognormal distributed. The parameters are given for an elevation angle of 25 deg.
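
    A sketch of how one channel realization could be drawn from the three listed ingredients; the numeric parameter values are illustrative, not the fitted values from the measurement campaign:

    ```python
    import numpy as np

    rng = np.random.default_rng(11)

    def channel_realization(K_dB=10.0, n_echoes=5, mean_delay_ns=100.0):
        """Rician direct path plus Rayleigh echoes with exponentially
        distributed delays (all parameter values hypothetical)."""
        K = 10 ** (K_dB / 10)                    # Rice factor
        direct = (np.sqrt(K / (K + 1))
                  + (rng.normal() + 1j * rng.normal()) / np.sqrt(2 * (K + 1)))
        delays = rng.exponential(mean_delay_ns, n_echoes)
        echoes = ((rng.normal(size=n_echoes) + 1j * rng.normal(size=n_echoes))
                  / np.sqrt(2) * np.exp(-delays / 200.0))
        return direct, delays, echoes

    print(channel_realization())
    ```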

  6. Monthly hydroclimatology of the continental United States

    NASA Astrophysics Data System (ADS)

    Petersen, Thomas; Devineni, Naresh; Sankarasubramanian, A.

    2018-04-01

    Physical/semi-empirical models that do not require any calibration are urgently needed for estimating hydrological fluxes at ungauged sites. We develop semi-empirical models for estimating the mean and variance of monthly streamflow based on a Taylor series approximation of a lumped, physically based water balance model. The proposed models require the mean and variance of monthly precipitation and potential evapotranspiration, the co-variability of precipitation and potential evapotranspiration, and regionally calibrated parameters for catchment retention sensitivity, atmospheric moisture uptake sensitivity, groundwater partitioning, and maximum soil moisture holding capacity. Estimates of the mean and variance of monthly streamflow from the semi-empirical equations are compared with observed estimates for 1373 catchments in the continental United States. Analyses show that the proposed models explain the spatial variability in monthly moments for basins at lower elevations. A regionalization of parameters for each water resources region shows good agreement between observed and model-estimated moments during January, February, March and April for the mean, and in all months except May and June for the variance. Thus, the proposed relationships could be employed for understanding and estimating the monthly hydroclimatology of ungauged basins using regional parameters.
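
    The first-order Taylor (delta-method) approximation of the streamflow moments can be sketched directly; the water-balance function and the input moments below are hypothetical:

    ```python
    import numpy as np

    def f(P, PE):
        # Hypothetical stand-in for the lumped water-balance model Q = f(P, PE).
        return P * (1.0 - PE / (P + PE))

    mu_P, var_P = 100.0, 400.0            # assumed precipitation moments (mm)
    mu_PE, var_PE = 60.0, 100.0           # assumed PET moments (mm)
    cov = 0.2 * np.sqrt(var_P * var_PE)   # assumed P-PE co-variability

    h = 1e-4                              # numerical partial derivatives
    dP = (f(mu_P + h, mu_PE) - f(mu_P - h, mu_PE)) / (2 * h)
    dPE = (f(mu_P, mu_PE + h) - f(mu_P, mu_PE - h)) / (2 * h)

    mean_Q = f(mu_P, mu_PE)
    var_Q = dP**2 * var_P + dPE**2 * var_PE + 2 * dP * dPE * cov
    print(mean_Q, var_Q)
    ```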

  7. Tracer kinetics of forearm endothelial function: comparison of an empirical method and a quantitative modeling technique.

    PubMed

    Zhao, Xueli; Arsenault, Andre; Lavoie, Kim L; Meloche, Bernard; Bacon, Simon L

    2007-01-01

    Forearm Endothelial Function (FEF) is a marker that has been shown to discriminate patients with cardiovascular disease (CVD). FEF has been assessed using several parameters: the Rate of Uptake Ratio (RUR), EWUR (Elbow-to-Wrist Uptake Ratio) and EWRUR (Elbow-to-Wrist Relative Uptake Ratio). However, the modeling of FEF requires more robust methods. The present study was designed to compare an empirical method with quantitative modeling techniques to better estimate the physiological parameters and understand the complex dynamic processes. The fitted time-activity curves of the forearms, estimating blood and muscle components, were assessed using both an empirical method and a two-compartment model. Correlational analyses suggested good agreement between the methods for RUR (r=.90) and EWUR (r=.79), but not EWRUR (r=.34); however, Bland-Altman plots found poor agreement between the methods for all 3 parameters. These results indicate a large discrepancy between the empirical and computational methods for FEF. Further work is needed to establish the physiological and mathematical validity of the two modeling methods.
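
    The Bland-Altman comparison reduces to the bias and 95% limits of agreement of the paired differences; a sketch on synthetic RUR values:

    ```python
    import numpy as np

    def bland_altman(a, b):
        """Agreement between two methods: mean difference (bias) and the
        95% limits of agreement."""
        diff = a - b
        bias = diff.mean()
        half_width = 1.96 * diff.std(ddof=1)
        return bias, bias - half_width, bias + half_width

    rng = np.random.default_rng(9)
    rur_empirical = rng.normal(1.5, 0.3, 40)            # hypothetical values
    rur_model = rur_empirical + rng.normal(0.1, 0.25, 40)
    print(bland_altman(rur_empirical, rur_model))
    ```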

  8. Modeling of Processing-Induced Pore Morphology in an Additively-Manufactured Ti-6Al-4V Alloy

    PubMed Central

    Kabir, Mohammad Rizviul; Richter, Henning

    2017-01-01

    A selective laser melting (SLM)-based, additively-manufactured Ti-6Al-4V alloy is prone to the accumulation of undesirable defects during layer-by-layer material build-up. Defects in the form of complex-shaped pores are one of the critical issues that need to be considered during the processing of this alloy. Depending on the process parameters, pores with concave or convex boundaries may occur. To exploit the full potential of additively-manufactured Ti-6Al-4V, the interdependency between the process parameters, pore morphology, and resultant mechanical properties, needs to be understood. By incorporating morphological details into numerical models for micromechanical analyses, an in-depth understanding of how these pores interact with the Ti-6Al-4V microstructure can be gained. However, available models for pore analysis lack a realistic description of both the Ti-6Al-4V grain microstructure, and the pore geometry. To overcome this, we propose a comprehensive approach for modeling and discretizing pores with complex geometry, situated in a polycrystalline microstructure. In this approach, the polycrystalline microstructure is modeled by means of Voronoi tessellations, and the complex pore geometry is approximated by strategically combining overlapping spheres of varied sizes. The proposed approach provides an elegant way to model the microstructure of SLM-processed Ti-6Al-4V containing pores or crack-like voids, and makes it possible to investigate the relationship between process parameters, pore morphology, and resultant mechanical properties in a finite-element-based simulation framework. PMID:28772504

  10. An extended plasma model for Saturn

    NASA Technical Reports Server (NTRS)

    Richardson, John D.

    1995-01-01

    The Saturn magnetosphere model of Richardson and Sittler (1990) is extended to include the outer magnetosphere. The inner magnetospheric portion of this model is updated based on a recent reanalysis of the plasma data near the Voyager 2 ring plane crossing. The result is an axially symmetric model of the plasma parameters which is designed to provide accurate input for models needing either in situ or line-of-sight data and to be a useful tool for Cassini planning.

  11. Characterizing Uncertainty and Variability in PBPK Models ...

    EPA Pesticide Factsheets

    Mode-of-action based risk and safety assessments can rely upon tissue dosimetry estimates in animals and humans obtained from physiologically-based pharmacokinetic (PBPK) modeling. However, risk assessment also increasingly requires characterization of uncertainty and variability; such characterization for PBPK model predictions represents a continuing challenge to both modelers and users. Current practices show significant progress in specifying deterministic biological models and the non-deterministic (often statistical) models, estimating their parameters using diverse data sets from multiple sources, and using them to make predictions and characterize uncertainty and variability. The International Workshop on Uncertainty and Variability in PBPK Models, held Oct 31-Nov 2, 2006, sought to identify the state-of-the-science in this area and recommend priorities for research and changes in practice and implementation. For the short term, these include: (1) multidisciplinary teams to integrate deterministic and non-deterministic/statistical models; (2) broader use of sensitivity analyses, including for structural and global (rather than local) parameter changes; and (3) enhanced transparency and reproducibility through more complete documentation of the model structure(s) and parameter values, the results of sensitivity and other analyses, and supporting, discrepant, or excluded data. Longer-term needs include: (1) theoretic and practical methodological impro

  12. A preliminary evaluation of the relationship between bioconcentration and hydrophobicity for surfactants

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tolls, J.; Sijm, D.T.H.M.

    1995-10-01

    A statistical analysis was done of the relationship between hydrophobicity and bioconcentration parameters (uptake and elimination rate constants and bioconcentration factor) predicted by the diffusive mass-transfer (DMT) concept of bioconcentration developed previously. The authors employed polychlorinated biphenyls and benzenes (PCB/zs) as model compounds and the octanol/water partition coefficient as hydrophobicity parameter. They conclude that the model is consistent with the data. Subsequently, they applied the DMT concept to a set of preliminary bioconcentration data for surfactants using the critical micelle concentration (CMC) as hydrophobicity parameter. The obtained relationships qualitatively agree with the DMT concept, indicating that hydrophobicity is of great influence on surfactant bioconcentration. Finally, they investigated the hydrophobicity-bioconcentration relationships of surfactants and PCB/zs using aqueous solubility as common hydrophobicity parameter and found the relationships between the bioconcentration parameters and hydrophobicity to agree with the DMT concept. These findings are based on total radiolabel data. Therefore, they need to be confirmed using compound-specific surfactant bioconcentration data.

  13. Relative effects of survival and reproduction on the population dynamics of emperor geese

    USGS Publications Warehouse

    Schmutz, Joel A.; Rockwell, Robert F.; Petersen, Margaret R.

    1997-01-01

    Populations of emperor geese (Chen canagica) in Alaska declined sometime between the mid-1960s and the mid-1980s and have increased little since. To promote recovery of this species to former levels, managers need to know how much their perturbations of survival and/or reproduction would affect population growth rate (λ). We constructed an individual-based population model to evaluate the relative effect of altering mean values of various survival and reproductive parameters on λ and fall age structure (AS, defined as the proportion of juv), assuming additive rather than compensatory relations among parameters. Altering survival of adults had markedly greater relative effects on λ than did equally proportionate changes in either juvenile survival or reproductive parameters. We found the opposite pattern for relative effects on AS. Due to concerns about bias in the initial parameter estimates used in our model, we used 5 additional sets of parameter estimates with this model structure. We found that estimates of survival based on aerial survey data gathered each fall resulted in models that corresponded more closely to independent estimates of λ than did models that used mark-recapture estimates of survival. This disparity suggests that mark-recapture estimates of survival are biased low. To further explore how parameter estimates affected estimates of λ, we used values of survival and reproduction found in other goose species, and we examined the effect of an hypothesized correlation between an individual's clutch size and the subsequent survival of her young. The rank order of parameters in their relative effects on λ was consistent for all 6 parameter sets we examined. The observed variation in relative effects on λ among the 6 parameter sets is indicative of how relative effects on λ may vary among goose populations. With this knowledge of the relative effects of survival and reproductive parameters on λ, managers can make more informed decisions about which parameters to influence through management or to target for future study.
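
    A schematic of the perturbation comparison, using a two-stage matrix model as a stand-in for the individual-based model (vital rates are hypothetical):

    ```python
    import numpy as np

    def growth_rate(s_j, s_a, F):
        # Juvenile/adult matrix model: lambda is the dominant eigenvalue.
        A = np.array([[0.0, F],
                      [s_j, s_a]])
        return np.max(np.real(np.linalg.eigvals(A)))

    base = dict(s_j=0.5, s_a=0.85, F=0.6)   # hypothetical vital rates
    lam0 = growth_rate(**base)
    for name in base:
        bumped = dict(base, **{name: base[name] * 1.10})
        effect = (growth_rate(**bumped) - lam0) / lam0 / 0.10
        print(f"proportional effect on lambda of +10% {name}: {effect:.2f}")
    ```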

  15. The evolution of process-based hydrologic models: historical challenges and the collective quest for physical realism

    DOE PAGES

    Clark, Martyn P.; Bierkens, Marc F. P.; Samaniego, Luis; ...

    2017-07-11

    The diversity in hydrologic models has historically led to great controversy on the correct approach to process-based hydrologic modeling, with debates centered on the adequacy of process parameterizations, data limitations and uncertainty, and computational constraints on model analysis. Here, we revisit key modeling challenges on requirements to (1) define suitable model equations, (2) define adequate model parameters, and (3) cope with limitations in computing power. We outline the historical modeling challenges, provide examples of modeling advances that address these challenges, and define outstanding research needs. We also illustrate how modeling advances have been made by groups using models of different type and complexity, and we argue for the need to more effectively use our diversity of modeling approaches in order to advance our collective quest for physically realistic hydrologic models.

  16. A Systematic Approach for Identifying Level-1 Error Covariance Structures in Latent Growth Modeling

    ERIC Educational Resources Information Center

    Ding, Cherng G.; Jane, Ten-Der; Wu, Chiu-Hui; Lin, Hang-Rung; Shen, Chih-Kang

    2017-01-01

    It has been pointed out in the literature that misspecification of the level-1 error covariance structure in latent growth modeling (LGM) has detrimental impacts on the inferences about growth parameters. Since correct covariance structure is difficult to specify by theory, the identification needs to rely on a specification search, which,…

  17. Quantitative interpretations of Visible-NIR reflectance spectra of blood.

    PubMed

    Serebrennikova, Yulia M; Smith, Jennifer M; Huffman, Debra E; Leparc, German F; García-Rubio, Luis H

    2008-10-27

    This paper illustrates the implementation of a new theoretical model for rapid quantitative analysis of the Vis-NIR diffuse reflectance spectra of blood cultures. The new model is based on photon diffusion theory and Mie scattering theory, formulated to account for multiple scattering populations and absorptive components. This study stresses the significance of a thorough solution of the scattering and absorption problem in order to accurately resolve the optically relevant parameters of blood culture components. With the advantages of being calibration-free and computationally fast, the new model has two basic requirements. First, wavelength-dependent refractive indices of the basic chemical constituents of blood culture components are needed. Second, multi-wavelength measurements are required, or at least measurements at a number of characteristic wavelengths equal to the degrees of freedom (i.e., the number of optically relevant parameters) of the blood culture system. The blood culture analysis model was tested with a large number of diffuse reflectance spectra of blood culture samples characterized by an extensive range of the relevant parameters.

  18. Model selection and Bayesian inference for high-resolution seabed reflection inversion.

    PubMed

    Dettmer, Jan; Dosso, Stan E; Holland, Charles W

    2009-02-01

    This paper applies Bayesian inference, including model selection and posterior parameter inference, to inversion of seabed reflection data to resolve sediment structure at a spatial scale below the pulse length of the acoustic source. A practical approach to model selection is used, employing the Bayesian information criterion to decide on the number of sediment layers needed to sufficiently fit the data while satisfying parsimony to avoid overparametrization. Posterior parameter inference is carried out using an efficient Metropolis-Hastings algorithm for high-dimensional models, and results are presented as marginal-probability depth distributions for sound velocity, density, and attenuation. The approach is applied to plane-wave reflection-coefficient inversion of single-bounce data collected on the Malta Plateau, Mediterranean Sea, which indicate complex fine structure close to the water-sediment interface. This fine structure is resolved in the geoacoustic inversion results in terms of four layers within the upper meter of sediments. The inversion results are in good agreement with parameter estimates from a gravity core taken at the experiment site.
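
    The Bayesian-information-criterion trade-off can be sketched directly; the residual values and the 4-parameters-per-layer bookkeeping below are assumed for illustration:

    ```python
    import numpy as np

    def bic(rss, n_data, n_params):
        """BIC for Gaussian residuals: balances misfit against the number
        of layer parameters."""
        return n_data * np.log(rss / n_data) + n_params * np.log(n_data)

    # Hypothetical residual sums of squares from inversions with an
    # increasing number of sediment layers.
    n_data = 400
    for layers, rss in [(1, 9.0), (2, 4.1), (3, 2.2), (4, 2.0), (5, 1.97)]:
        print(layers, bic(rss, n_data, 4 * layers))   # minimum picks the model
    ```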

  19. Is our medical school socially accountable? The case of Faculty of Medicine, Suez Canal University.

    PubMed

    Hosny, Somaya; Ghaly, Mona; Boelen, Charles

    2015-04-01

    Faculty of Medicine, Suez Canal University (FOM/SCU) was established as a community-oriented school with innovative educational strategies. Social accountability represents the commitment of a medical school towards the community it serves. To assess FOM/SCU's compliance with social accountability using the "Conceptualization, Production, Usability" (CPU) model. FOM/SCU's practice was reviewed against the CPU model parameters. The CPU consists of three domains, 11 sections and 31 parameters. Data were collected through unstructured interviews with the main stakeholders and document review from 2005 to 2013. FOM/SCU shows general compliance with the three domains of the CPU. Very good compliance was shown for the "P" domain of the model through FOM/SCU's innovative educational system, students and faculty members. More work is needed on the "C" and "U" domains. FOM/SCU complies with many parameters of the CPU model; however, more work should be done to comply with some items in the C and U domains so that FOM/SCU can be recognized as a proactive, socially accountable school.

  20. Combined Molecular Dynamics Simulation-Molecular-Thermodynamic Theory Framework for Predicting Surface Tensions.

    PubMed

    Sresht, Vishnu; Lewandowski, Eric P; Blankschtein, Daniel; Jusufi, Arben

    2017-08-22

    A molecular modeling approach is presented with a focus on quantitative predictions of the surface tension of aqueous surfactant solutions. The approach combines classical Molecular Dynamics (MD) simulations with a molecular-thermodynamic theory (MTT) [Y. J. Nikas, S. Puvvada, D. Blankschtein, Langmuir 1992, 8, 2680]. The MD component is used to calculate thermodynamic and molecular parameters that are needed in the MTT model to determine the surface tension isotherm. The MD/MTT approach provides the important link between the surfactant bulk concentration, the experimental control parameter, and the surfactant surface concentration, the MD control parameter. We demonstrate the capability of the MD/MTT modeling approach on nonionic alkyl polyethylene glycol surfactants at the air-water interface and observe reasonable agreement of the predicted surface tensions and the experimental surface tension data over a wide range of surfactant concentrations below the critical micelle concentration. Our modeling approach can be extended to ionic surfactants and their mixtures with both ionic and nonionic surfactants at liquid-liquid interfaces.
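
    A standard MTT-style isotherm (Langmuir-Szyszkowski) illustrates the kind of relation that the MD-derived parameters feed into; the constants below are illustrative, not the paper's values:

    ```python
    import numpy as np

    R = 8.314           # J/(mol K)

    def surface_tension(c, gamma0=72.0, Gamma_max=3e-6, a=1e-3, T=298.15):
        """Langmuir-Szyszkowski isotherm, gamma in mN/m:
        gamma(c) = gamma0 - R*T*Gamma_max*ln(1 + c/a), with Gamma_max the
        maximum surface excess (mol/m^2) and a a concentration scale."""
        return gamma0 - R * T * Gamma_max * np.log1p(c / a) * 1e3

    for c in (1e-5, 1e-4, 1e-3, 1e-2):    # bulk concentrations, mol/L
        print(c, surface_tension(c))
    ```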

  1. Parameterization of aquatic ecosystem functioning and its natural variation: Hierarchical Bayesian modelling of plankton food web dynamics

    NASA Astrophysics Data System (ADS)

    Norros, Veera; Laine, Marko; Lignell, Risto; Thingstad, Frede

    2017-10-01

    Methods for extracting empirically and theoretically sound parameter values are urgently needed in aquatic ecosystem modelling to describe key flows and their variation in the system. Here, we compare three Bayesian formulations for mechanistic model parameterization that differ in their assumptions about the variation in parameter values between various datasets: 1) global analysis - no variation, 2) separate analysis - independent variation and 3) hierarchical analysis - variation arising from a shared distribution defined by hyperparameters. We tested these methods, using computer-generated and empirical data, coupled with simplified and reasonably realistic plankton food web models, respectively. While all methods were adequate, the simulated example demonstrated that a well-designed hierarchical analysis can result in the most accurate and precise parameter estimates and predictions, due to its ability to combine information across datasets. However, our results also highlighted sensitivity to hyperparameter prior distributions as an important caveat of hierarchical analysis. In the more complex empirical example, hierarchical analysis was able to combine precise identification of parameter values with reasonably good predictive performance, although the ranking of the methods was less straightforward. We conclude that hierarchical Bayesian analysis is a promising tool for identifying key ecosystem-functioning parameters and their variation from empirical datasets.
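
    The three formulations compared here differ only in how the dataset-level parameters are tied together. Below is a minimal sketch of the hierarchical variant, assuming the PyMC library and toy data (all names and numbers are illustrative, not from the paper); the global variant would replace `theta` with a single shared parameter, and the separate variant would give each dataset an independent prior with no shared hyperparameters.

```python
import numpy as np
import pymc as pm

rng = np.random.default_rng(1)
K = 4                                             # number of datasets
true_theta = rng.normal(0.6, 0.1, K)              # per-dataset "true" values
idx = np.repeat(np.arange(K), 20)                 # 20 observations per dataset
y = rng.normal(true_theta[idx], 0.05)             # noisy observations

with pm.Model() as hierarchical:
    mu = pm.Normal("mu", 0.5, 1.0)                # hyperparameter: shared mean
    tau = pm.HalfNormal("tau", 0.5)               # hyperparameter: between-dataset spread
    theta = pm.Normal("theta", mu, tau, shape=K)  # dataset-level parameters
    pm.Normal("obs", mu=theta[idx], sigma=0.05, observed=y)
    idata = pm.sample(1000, tune=1000, chains=2)  # posterior for mu, tau, theta
```

    Note the caveat from the abstract: the priors placed on `mu` and `tau` here are exactly the hyperparameter priors the authors flag as an important sensitivity.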

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mondy, Lisa Ann; Rao, Rekha Ranjana; Shelden, Bion

    We are developing computational models to elucidate the expansion and dynamic filling process of a polyurethane foam, PMDI. The polyurethane of interest is chemically blown, where carbon dioxide is produced via the reaction of water, the blowing agent, and isocyanate. The isocyanate also reacts with polyol in a competing reaction, which produces the polymer. Here we detail the experiments needed to populate a processing model and provide parameters for the model based on these experiments. The model entails solving the conservation equations, including the equations of motion, an energy balance, and two rate equations for the polymerization and foaming reactions, following a simplified mathematical formalism that decouples these two reactions. Parameters for the polymerization kinetics model are reported based on infrared spectrophotometry. Parameters describing the gas generating reaction are reported based on measurements of volume, temperature and pressure evolution with time. A foam rheology model is proposed and parameters determined through steady-shear and oscillatory tests. Heat of reaction and heat capacity are determined through differential scanning calorimetry. Thermal conductivity of the foam as a function of density is measured using a transient method based on the theory of the transient plane source technique. Finally, density variations of the resulting solid foam in several simple geometries are directly measured by sectioning and sampling mass, as well as through x-ray computed tomography. These density measurements will be useful for model validation once the complete model is implemented in an engineering code.
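
    The decoupled two-reaction formalism lends itself to a compact sketch: two independent rate equations, one for the gelling (polymerization) reaction and one for the blowing (gas-generating) reaction, integrated here with SciPy. The rate constants and reaction orders below are placeholders for illustration; the report determines its actual kinetic parameters from IR spectrophotometry and volume/temperature/pressure measurements.

```python
import numpy as np
from scipy.integrate import solve_ivp

k_gel, n_gel = 0.15, 1.5     # hypothetical polymerization rate constant / order
k_blow, n_blow = 0.25, 1.0   # hypothetical blowing-reaction rate constant / order

def rates(t, x):
    a_gel, a_blow = x        # extents of reaction, each running from 0 to 1
    return [k_gel * (1.0 - a_gel) ** n_gel,
            k_blow * (1.0 - a_blow) ** n_blow]

sol = solve_ivp(rates, (0.0, 60.0), [0.0, 0.0], dense_output=True)
t = np.linspace(0.0, 60.0, 7)
print(np.round(sol.sol(t).T, 3))   # columns: gel and blow conversions over time
```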

  3. Temporal rainfall estimation using input data reduction and model inversion

    NASA Astrophysics Data System (ADS)

    Wright, A. J.; Vrugt, J. A.; Walker, J. P.; Pauwels, V. R. N.

    2016-12-01

    Floods are devastating natural hazards. To provide accurate, precise and timely flood forecasts, there is a need to understand the uncertainties associated with temporal rainfall and model parameters. The estimation of temporal rainfall and model parameter distributions from streamflow observations in complex dynamic catchments adds skill to current areal rainfall estimation methods, allows the uncertainty of rainfall input to be considered when estimating model parameters, and provides the ability to estimate rainfall for poorly gauged catchments. Current methods to estimate temporal rainfall distributions from streamflow are unable to adequately explain and invert complex non-linear hydrologic systems. This study uses the Discrete Wavelet Transform (DWT) to reduce rainfall dimensionality for the catchment of Warwick, Queensland, Australia. The reduction of rainfall to DWT coefficients allows the input rainfall time series to be estimated simultaneously with the model parameters. The estimation process is conducted using multi-chain Markov chain Monte Carlo simulation with the DREAM(ZS) algorithm. The use of a likelihood function that considers both rainfall and streamflow error allows model parameter and temporal rainfall distributions to be estimated. Estimation of the wavelet approximation coefficients of lower-order decomposition structures yielded the most realistic temporal rainfall distributions. These rainfall estimates were all able to simulate streamflow superior to the results of a traditional calibration approach. It is shown that the choice of wavelet has a considerable impact on the robustness of the inversion. The results demonstrate that streamflow data contain sufficient information to estimate temporal rainfall and model parameter distributions. That a range of rainfall time series can simulate streamflow better than a traditional calibration approach is a demonstration of equifinality. The use of a likelihood function that considers both rainfall and streamflow error, combined with the use of the DWT as a data reduction technique, allows the joint inference of hydrologic model parameters along with rainfall.
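
    The dimensionality-reduction step can be illustrated with the PyWavelets library: decompose a rainfall series, keep only the low-order approximation coefficients, and reconstruct. In the study it is these few retained coefficients, rather than every time step, that are sampled jointly with the model parameters by the MCMC search. The series, wavelet, and decomposition level below are illustrative choices only.

```python
import numpy as np
import pywt

rng = np.random.default_rng(0)
rain = rng.gamma(0.3, 2.0, 256)       # stand-in rainfall series (mm per step)

wavelet, level = "db4", 4             # the wavelet choice matters (see abstract)
coeffs = pywt.wavedec(rain, wavelet, level=level)

# Keep the approximation coefficients, zero out all detail coefficients.
reduced = [coeffs[0]] + [np.zeros_like(c) for c in coeffs[1:]]
rain_hat = pywt.waverec(reduced, wavelet)

print(f"{rain.size} time steps -> {coeffs[0].size} estimable coefficients")
```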

  4. VARIATIONS IN SEASONAL PATTERNS OF GASTROINTESTINAL INFECTIONS ALONG A RIVER

    EPA Science Inventory

    Epidemiologic analysis of waterborne diseases typically considers socio-economic, demographic, and pathogen-specific characteristics. However, hydrological parameters may need to be considered as well. Fate and transport models of watersheds have demonstrated impairment due to li...

  5. Nondimensional parameter for conformal grinding: combining machine and process parameters

    NASA Astrophysics Data System (ADS)

    Funkenbusch, Paul D.; Takahashi, Toshio; Gracewski, Sheryl M.; Ruckman, Jeffrey L.

    1999-11-01

    Conformal grinding of optical materials with CNC (Computer Numerical Control) machining equipment can be used to achieve precise control over complex part configurations. However, complications can arise due to the need to fabricate complex geometrical shapes at reasonable production rates. For example, high machine stiffness is essential, but the need to grind 'inside' small or highly concave surfaces may require use of tooling with less than ideal stiffness characteristics. If grinding generates loads sufficient for significant tool deflection, the programmed removal depth will not be achieved. Moreover, since grinding load is a function of the volumetric removal rate, the amount of load deflection can vary with location on the part, potentially producing complex figure errors. In addition to machine/tool stiffness and removal rate, load generation is a function of the process parameters. For example, by reducing the feed rate of the tool into the part, both the load and the resultant deflection/removal error can be decreased. However, this must be balanced against the need for part throughput. In this paper, a simple model which permits combination of machine stiffness and process parameters into a single non-dimensional parameter is adapted for a conformal grinding geometry. Errors in removal can be minimized by maintaining this parameter above a critical value. Moreover, since the value of this parameter depends on the local part geometry, it can be used to optimize process settings during grinding. For example, it may be used to guide adjustment of the feed rate as a function of location on the part to eliminate figure errors while minimizing the total grinding time required.

  6. Gaps in knowledge and data driving uncertainty in models of photosynthesis.

    PubMed

    Dietze, Michael C

    2014-02-01

    Regional and global models of the terrestrial biosphere depend critically on models of photosynthesis when predicting impacts of global change. This paper focuses on identifying the primary data needs of these models, what scales drive uncertainty, and how to improve measurements. Overall, there is a need for an open, cross-discipline database on leaf-level photosynthesis in general, and response curves in particular. The parameters in photosynthetic models are not constant through time, space, or canopy position, but there is a need for a better understanding of whether relationships with drivers, such as leaf nitrogen, are themselves scale dependent. Across time scales, as ecosystem models become more sophisticated in their representations of succession, they need to be able to approximate sunfleck responses to capture understory growth and survival. At both high and low latitudes, photosynthetic data are inadequate in general, and there is a particular need to better understand thermal acclimation. Simple models of acclimation suggest that shifts in optimal temperature are important. However, there is little advantage to synoptic-scale responses, and circadian rhythms may be more beneficial than acclimation over shorter timescales. At high latitudes, there is a need for a better understanding of low-temperature photosynthetic limits, while at low latitudes the need is for a better understanding of phosphorus limitations on photosynthesis. In terms of sampling, measuring multivariate photosynthetic response surfaces is potentially more efficient and more accurate than measuring traditional univariate response curves. Finally, there is a need for greater community involvement in model validation and model-data synthesis.

  7. Model structures amplify uncertainty in predicted soil carbon responses to climate change.

    PubMed

    Shi, Zheng; Crowell, Sean; Luo, Yiqi; Moore, Berrien

    2018-06-04

    Large model uncertainty in projected future soil carbon (C) dynamics has been well documented. However, our understanding of the sources of this uncertainty is limited. Here we quantify the uncertainties arising from model parameters, structures and their interactions, and how those uncertainties propagate through different models to projections of future soil carbon stocks. Both the vertically resolved model and the microbial explicit model project much greater uncertainties to climate change than the conventional soil C model, with both positive and negative C-climate feedbacks, whereas the conventional model consistently predicts positive soil C-climate feedback. Our findings suggest that diverse model structures are necessary to increase confidence in soil C projection. However, the larger uncertainty in the complex models also suggests that we need to strike a balance between model complexity and the need to include diverse model structures in order to forecast soil C dynamics with high confidence and low uncertainty.

  8. Critical elements on fitting the Bayesian multivariate Poisson Lognormal model

    NASA Astrophysics Data System (ADS)

    Zamzuri, Zamira Hasanah binti

    2015-10-01

    Motivated by a problem of fitting multivariate models to traffic accident data, a detailed discussion of the Multivariate Poisson Lognormal (MPL) model is presented. This paper reveals three critical elements in fitting the MPL model: the setting of initial estimates, the hyperparameters and the tuning parameters. These issues have not been highlighted in the literature. Based on the simulation studies conducted, we show that when the Univariate Poisson Model (UPM) estimates are used as starting values, at least 20,000 iterations are needed to obtain reliable final estimates. We also illustrate the sensitivity of a specific hyperparameter which, if not given extra attention, may affect the final estimates. The last issue concerns the tuning parameters, which depend on the acceptance rate. Finally, a heuristic algorithm to fit the MPL model is presented. This acts as a guide to ensure that the model works satisfactorily for any given data set.

  9. Effective Hubbard model for Helium atoms adsorbed on a graphite

    NASA Astrophysics Data System (ADS)

    Motoyama, Yuichi; Masaki-Kato, Akiko; Kawashima, Naoki

    Helium atoms adsorbed on graphite form a two-dimensional strongly correlated quantum system that has been an attractive subject of research for a long time. A helium atom feels a Lennard-Jones-like potential (the Aziz potential) from the other helium atoms and a corrugated potential from the graphite. Therefore, this system may be described by a hardcore Bose-Hubbard model with nearest-neighbor repulsion on the triangular lattice, which is the dual lattice of the honeycomb lattice formed by the carbon atoms. A Hubbard model is easier to simulate than the original problem in continuous space, but we need to know the parameters of the effective model, the hopping constant t and the interaction V. In this presentation, we will present an estimation of the model parameters from ab initio quantum Monte Carlo calculations in continuous space, in addition to results of quantum Monte Carlo simulation for the obtained discrete model.

  10. A new multistage groundwater transport inverse method: presentation, evaluation, and implications

    USGS Publications Warehouse

    Anderman, Evan R.; Hill, Mary C.

    1999-01-01

    More computationally efficient methods of using concentration data are needed to estimate groundwater flow and transport parameters. This work introduces and evaluates a three‐stage nonlinear‐regression‐based iterative procedure in which trial advective‐front locations link decoupled flow and transport models. Method accuracy and efficiency are evaluated by comparing results to those obtained when flow‐ and transport‐model parameters are estimated simultaneously. The new method is evaluated as conclusively as possible by using a simple test case that includes distinct flow and transport parameters, but does not include any approximations that are problem dependent. The test case is analytical; the only flow parameter is a constant velocity, and the transport parameters are longitudinal and transverse dispersivity. Any difficulties detected using the new method in this ideal situation are likely to be exacerbated in practical problems. Monte‐Carlo analysis of observation error ensures that no specific error realization obscures the results. Results indicate that, while this, and probably other, multistage methods do not always produce optimal parameter estimates, the computational advantage may make them useful in some circumstances, perhaps as a precursor to using a simultaneous method.

  11. Simultaneous estimation of local-scale and flow path-scale dual-domain mass transfer parameters using geoelectrical monitoring

    USGS Publications Warehouse

    Briggs, Martin A.; Day-Lewis, Frederick D.; Ong, John B.; Curtis, Gary P.; Lane, John W.

    2013-01-01

    Anomalous solute transport, modeled as rate-limited mass transfer, has an observable geoelectrical signature that can be exploited to infer the controlling parameters. Previous experiments indicate the combination of time-lapse geoelectrical and fluid conductivity measurements collected during ionic tracer experiments provides valuable insight into the exchange of solute between mobile and immobile porosity. Here, we use geoelectrical measurements to monitor tracer experiments at a former uranium mill tailings site in Naturita, Colorado. We use nonlinear regression to calibrate dual-domain mass transfer solute-transport models to field data. This method differs from previous approaches by calibrating the model simultaneously to observed fluid conductivity and geoelectrical tracer signals using two parameter scales: effective parameters for the flow path upgradient of the monitoring point and the parameters local to the monitoring point. We use regression statistics to rigorously evaluate the information content and sensitivity of fluid conductivity and geophysical data, demonstrating multiple scales of mass transfer parameters can simultaneously be estimated. Our results show, for the first time, field-scale spatial variability of mass transfer parameters (i.e., exchange-rate coefficient, porosity) between local and upgradient effective parameters; hence our approach provides insight into spatial variability and scaling behavior. Additional synthetic modeling is used to evaluate the scope of applicability of our approach, indicating greater range than earlier work using temporal moments and a Lagrangian-based Damköhler number. The introduced Eulerian-based Damköhler is useful for estimating tracer injection duration needed to evaluate mass transfer exchange rates that range over several orders of magnitude.

  12. Estimation of Staphylococcus aureus growth parameters from turbidity data: characterization of strain variation and comparison of methods.

    PubMed

    Lindqvist, R

    2006-07-01

    Turbidity methods offer possibilities for generating the data required for addressing microorganism variability in risk modeling, given that the results of these methods correspond to those of viable count methods. The objectives of this study were to identify the best approach for determining growth parameters based on turbidity data and use of a Bioscreen instrument, and to characterize variability in growth parameters of 34 Staphylococcus aureus strains of different biotypes isolated from broiler carcasses. Growth parameters were estimated by fitting primary growth models to turbidity growth curves or to detection times of serially diluted cultures, either directly or by using an analysis of variance (ANOVA) approach. The maximum specific growth rates in chicken broth at 17 degrees C estimated by time-to-detection methods were in good agreement with viable count estimates, whereas growth models (exponential and Richards) underestimated growth rates. Time-to-detection methods were selected for strain characterization. The variation of growth parameters among strains was best described by either the logistic or the lognormal distribution, but definitive conclusions require a larger data set. The distribution of the physiological state parameter ranged from 0.01 to 0.92 and was not significantly different from a normal distribution. Strain variability was important, and the coefficient of variation of growth parameters was up to six times larger among strains than within strains. We suggest applying a time-to-detection (ANOVA) approach using turbidity measurements for convenient and accurate estimation of growth parameters. The results emphasize the need to consider the implications of strain variability for predictive modeling and risk assessment.
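
    The time-to-detection idea is simple enough to sketch: for serially diluted cultures, the detection time grows linearly with the logarithm of the initial count, and the maximum specific growth rate is the negative reciprocal of that slope. The counts and detection times below are invented for illustration, not data from the study.

```python
import numpy as np

dilution_steps = np.arange(6)                         # ten-fold serial dilutions
ln_n0 = np.log(1e6) - dilution_steps * np.log(10.0)   # ln(initial count) per well
ttd = np.array([5.1, 7.4, 9.6, 12.0, 14.2, 16.5])     # detection times (h)

# t_d = (ln N_det - ln N_0) / mu_max, so slope of t_d vs ln N_0 is -1/mu_max.
slope, _ = np.polyfit(ln_n0, ttd, 1)
mu_max = -1.0 / slope
print(f"maximum specific growth rate ~ {mu_max:.2f} 1/h")
```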

  13. The Model of Gas Supply Capacity Simulation In Regional Energy Security Framework: Policy Studies PT. X Cirebon Area

    NASA Astrophysics Data System (ADS)

    Nuryadin; Ronny Rahman Nitibaskara, Tb; Herdiansyah, Herdis; Sari, Ravita

    2017-10-01

    The need for energy increases every year. Unavailability of energy causes economic losses and weakens energy security. To secure the availability of gas supply in the future, planning is crucially needed. It is therefore necessary to take a systems approach, so that the process of gas distribution runs properly. In this research, the system dynamics method is used to determine how much supply capacity is needed up to 2050, with demand parameters for the industrial, household and commercial sectors. The model shows that PT. X's Cirebon area will be unable to meet the needs of its gas customers by 2031; under the business-as-usual scenario, the gas fulfillment ratio holds only until 2027. With the implementation of the national energy policy, namely the use of NRE, as a government intervention in the model, PT. X's Cirebon area can still supply the gas needs of its customers up to 2035.
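
    A system dynamics model of this kind reduces, at its core, to stocks and flows stepped through time. The sketch below uses entirely hypothetical demand levels, growth rates, and supply capacity to show how a fulfillment ratio flags the first shortfall year; the actual study calibrates these quantities for the PT. X Cirebon area.

```python
demand = {"industry": 60.0, "household": 25.0, "commercial": 15.0}  # units/day
growth = {"industry": 0.05, "household": 0.03, "commercial": 0.04}  # per year
supply = 140.0                                                      # capacity

for year in range(2017, 2051):
    total = sum(demand.values())
    if supply / total < 1.0:                 # fulfillment ratio drops below 1
        print(f"first shortfall year: {year} (ratio {supply / total:.2f})")
        break
    demand = {s: d * (1.0 + growth[s]) for s, d in demand.items()}
```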

  14. Method for selection of optimal road safety composite index with examples from DEA and TOPSIS method.

    PubMed

    Rosić, Miroslav; Pešić, Dalibor; Kukić, Dragoslav; Antić, Boris; Božović, Milan

    2017-01-01

    The concept of a composite road safety index is a popular and relatively new concept among road safety experts around the world. As there is a constant need for comparison among different units (countries, municipalities, roads, etc.), an adequate method must be chosen so that the comparison is fair to all compared units. Comparisons using one specific indicator (a parameter describing safety or unsafety) can end up with totally different rankings of the compared units, which makes it quite complicated for a decision maker to determine the "real best performers". The need for a composite road safety index is becoming dominant, since road safety is a complex system for which more and more indicators are constantly being developed. Among the wide variety of models and developed composite indexes, a decision maker can face an even bigger dilemma than choosing one adequate risk measure. As DEA and TOPSIS are well-known mathematical models that have recently been increasingly used for risk evaluation in road safety, we used efficiencies (composite indexes) obtained by different DEA- and TOPSIS-based models to present the PROMETHEE-RS model for selecting the optimal composite index. The method for selecting the optimal composite index is based on three parameters (average correlation, average rank variation and average cluster variation) inserted into the PROMETHEE MCDM method in order to choose the optimal one. The model is tested by comparing 27 police departments in Serbia. Copyright © 2016 Elsevier Ltd. All rights reserved.
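
    Of the methods named above, TOPSIS is the most compact to illustrate: alternatives are ranked by relative closeness to an ideal solution and distance from an anti-ideal one. The sketch below is a generic TOPSIS implementation with invented indicator values for four units; it is not the PROMETHEE-RS selection procedure itself, which layers the three comparison parameters on top of scores like these.

```python
import numpy as np

def topsis(X, weights, benefit):
    """Rank alternatives (rows of X) by closeness to the ideal solution."""
    Z = X / np.linalg.norm(X, axis=0)            # vector-normalize each criterion
    V = Z * weights                              # apply criterion weights
    ideal = np.where(benefit, V.max(0), V.min(0))
    anti = np.where(benefit, V.min(0), V.max(0))
    d_pos = np.linalg.norm(V - ideal, axis=1)
    d_neg = np.linalg.norm(V - anti, axis=1)
    return d_neg / (d_pos + d_neg)               # higher score = better unit

# Hypothetical units: fatality rate (lower is better), belt-use rate (higher).
X = np.array([[7.2, 0.81], [5.9, 0.88], [9.4, 0.70], [6.5, 0.90]])
scores = topsis(X, np.array([0.6, 0.4]), np.array([False, True]))
print(scores.argsort()[::-1] + 1)                # ranking, best unit first
```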

  15. Performance Evaluation and Parameter Identification on DROID III

    NASA Technical Reports Server (NTRS)

    Plumb, Julianna J.

    2011-01-01

    The DROID III project consisted of two main parts. The former, performance evaluation, focused on the performance characteristics of the aircraft, such as lift-to-drag ratio, thrust required for level flight, and rate of climb. The latter, parameter identification, focused on finding the aerodynamic coefficients for the aircraft using a system that creates a mathematical model to match the flight data of doublet maneuvers and the aircraft's response. Both portions of the project called for flight testing, and that data is now available on account of this project. The conclusion of the project is that the performance evaluation data is well within desired standards but could be improved with a thrust model, and that parameter identification is still in need of more data processing but seems to produce reasonable results thus far.

  16. An eigensystem realization algorithm using data correlations (ERA/DC) for modal parameter identification

    NASA Technical Reports Server (NTRS)

    Juang, Jer-Nan; Cooper, J. E.; Wright, J. R.

    1987-01-01

    A modification to the Eigensystem Realization Algorithm (ERA) for modal parameter identification is presented in this paper. The ERA minimum order realization approach using singular value decomposition is combined with the philosophy of the Correlation Fit method in state space form such that response data correlations rather than actual response values are used for modal parameter identification. This new method, the ERA using data correlations (ERA/DC), reduces bias errors due to noise corruption significantly without the need for model overspecification. This method is tested using simulated five-degree-of-freedom system responses corrupted by measurement noise. It is found for this case that, when model overspecification is permitted and a minimum order solution obtained via singular value truncation, the results from the two methods are of similar quality.
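
    For orientation, here is a bare-bones single-input, single-output ERA in Python, without the data-correlation modification: build two shifted Hankel matrices from the Markov (impulse-response) parameters, truncate the SVD at the chosen order, and read modal parameters off the eigenvalues of the realized state matrix. ERA/DC would assemble the Hankel blocks from response-data correlations instead of raw response values. The synthetic 5 Hz mode is an illustration only.

```python
import numpy as np

def era(markov, order, rows=20, cols=20):
    """Minimum-order realization (A, B, C); markov[k] = y(k+1), impulse response."""
    H0 = np.array([[markov[i + j] for j in range(cols)] for i in range(rows)])
    H1 = np.array([[markov[i + j + 1] for j in range(cols)] for i in range(rows)])
    U, s, Vt = np.linalg.svd(H0, full_matrices=False)
    U, s, Vt = U[:, :order], s[:order], Vt[:order]     # singular value truncation
    s_isqrt, s_sqrt = np.diag(s ** -0.5), np.diag(s ** 0.5)
    A = s_isqrt @ U.T @ H1 @ Vt.T @ s_isqrt
    return A, (s_sqrt @ Vt)[:, :1], (U @ s_sqrt)[:1, :]

# Synthetic impulse response of one lightly damped 5 Hz mode (hypothetical).
dt, wn, zeta = 0.01, 2 * np.pi * 5.0, 0.02
k = np.arange(1, 200)
y = np.exp(-zeta * wn * k * dt) * np.sin(wn * np.sqrt(1 - zeta**2) * k * dt)

A, _, _ = era(y, order=2)
s_ct = np.log(np.linalg.eigvals(A)) / dt      # continuous-time poles
print(abs(s_ct.imag[0]) / (2 * np.pi))        # identified frequency, ~5 Hz
```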

  17. Method for Predicting and Optimizing System Parameters for Electrospinning System

    NASA Technical Reports Server (NTRS)

    Wincheski, Russell A. (Inventor)

    2011-01-01

    An electrospinning system using a spinneret and a counter electrode is first operated for a fixed amount of time at known system and operational parameters to generate a fiber mat having a measured fiber mat width associated therewith. Next, acceleration of the fiberizable material at the spinneret is modeled to determine values of mass, drag, and surface tension associated with the fiberizable material at the spinneret output. The model is then applied in an inversion process to generate predicted values of an electric charge at the spinneret output and an electric field between the spinneret and electrode required to fabricate a selected fiber mat design. The electric charge and electric field are indicative of design values for system and operational parameters needed to fabricate the selected fiber mat design.

  18. No Future in the Past? The role of initial topography on landform evolution model predictions

    NASA Astrophysics Data System (ADS)

    Hancock, G. R.; Coulthard, T. J.; Lowry, J.

    2014-12-01

    Our understanding of earth surface processes is based on long-term empirical understanding, short-term field measurements, and numerical models. In particular, numerical landscape evolution models (LEMs) have been developed with the capability to capture a range of surface processes (erosion and deposition), tectonics, and near-surface or critical-zone processes (i.e. pedogenesis). These models have a range of applications, from understanding surface and whole-of-landscape dynamics through to more applied situations such as degraded-site rehabilitation. LEMs are now at the stage of development where, if calibrated, they can provide some level of reliability. However, these models are largely calibrated with parameters determined from present surface conditions, which are the product of much longer-term geology-soil-climate-vegetation interactions. Here, we assess the effect of the initial landscape dimensions and associated error, as well as parameterisation, for a potential post-mining landform design. The results demonstrate that subtle surface changes in the initial DEM, as well as parameterisation, can have a large impact on landscape behaviour, erosion depth and sediment discharge. For example, the predicted sediment output from LEMs is shown to be highly variable even with very subtle changes in initial surface conditions. This has two important implications: decadal-timescale field data are needed to (a) better parameterise the models and (b) evaluate their predictions. We question, firstly, how a LEM using parameters derived from field plots can be employed to examine long-term landscape evolution. Secondly, the potential range of outcomes is examined based on estimated temporal parameter change, and thirdly, the need for more detailed and rigorous field data for calibration and validation of these models is discussed.

  19. Linked Sensitivity Analysis, Calibration, and Uncertainty Analysis Using a System Dynamics Model for Stroke Comparative Effectiveness Research.

    PubMed

    Tian, Yuan; Hassmiller Lich, Kristen; Osgood, Nathaniel D; Eom, Kirsten; Matchar, David B

    2016-11-01

    As health services researchers and decision makers tackle more difficult problems using simulation models, the number of parameters and the corresponding degree of uncertainty have increased. This often results in reduced confidence in such complex models to guide decision making. To demonstrate a systematic approach of linked sensitivity analysis, calibration, and uncertainty analysis to improve confidence in complex models. Four techniques were integrated and applied to a System Dynamics stroke model of US veterans, which was developed to inform systemwide intervention and research planning: Morris method (sensitivity analysis), multistart Powell hill-climbing algorithm and generalized likelihood uncertainty estimation (calibration), and Monte Carlo simulation (uncertainty analysis). Of 60 uncertain parameters, sensitivity analysis identified 29 needing calibration, 7 that did not need calibration but significantly influenced key stroke outcomes, and 24 not influential to calibration or stroke outcomes that were fixed at their best guess values. One thousand alternative well-calibrated baselines were obtained to reflect calibration uncertainty and brought into uncertainty analysis. The initial stroke incidence rate among veterans was identified as the most influential uncertain parameter, for which further data should be collected. That said, accounting for current uncertainty, the analysis of 15 distinct prevention and treatment interventions provided a robust conclusion that hypertension control for all veterans would yield the largest gain in quality-adjusted life years. For complex health care models, a mixed approach was applied to examine the uncertainty surrounding key stroke outcomes and the robustness of conclusions. We demonstrate that this rigorous approach can be practical and advocate for such analysis to promote understanding of the limits of certainty in applying models to current decisions and to guide future data collection. © The Author(s) 2016.
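
    As one concrete illustration of the screening step, the Morris method is available in the SALib Python library. Everything below is a stand-in: the four parameter names and the toy outcome function are invented, since the actual stroke model has 60 uncertain parameters. Parameters whose mu_star falls below a chosen threshold would be fixed at best-guess values, as in the study.

```python
import numpy as np
from SALib.sample import morris as morris_sample
from SALib.analyze import morris as morris_analyze

problem = {
    "num_vars": 4,
    "names": ["incidence", "case_fatality", "htn_control", "rehab_uptake"],
    "bounds": [[0.5, 1.5]] * 4,
}

def outcome(x):                      # stand-in for one full simulation run
    return x[0] ** 2 + 0.5 * x[1] + 0.1 * x[2] * x[3]

X = morris_sample.sample(problem, N=100, num_levels=4)   # Morris trajectories
Y = np.apply_along_axis(outcome, 1, X)
Si = morris_analyze.analyze(problem, X, Y, num_levels=4)
print(dict(zip(Si["names"], np.round(Si["mu_star"], 3))))  # screening metric
```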

  20. A calibration protocol of a one-dimensional moving bed bioreactor (MBBR) dynamic model for nitrogen removal.

    PubMed

    Barry, U; Choubert, J-M; Canler, J-P; Héduit, A; Robin, L; Lessard, P

    2012-01-01

    This work suggests a procedure to correctly calibrate the parameters of a one-dimensional MBBR dynamic model in nitrification treatment. The study deals with the MBBR configuration with two reactors in series, one for carbon treatment and the other for nitrogen treatment. Because of the influence of the first reactor on the second one, the approach needs a specific calibration strategy. Firstly, a comparison between measured values and simulated ones obtained with default parameters has been carried out. Simulated values of filtered COD, NH(4)-N and dissolved oxygen are underestimated and nitrates are overestimated compared with observed data. Thus, nitrifying rate and oxygen transfer into the biofilm are overvalued. Secondly, a sensitivity analysis was carried out for parameters and for COD fractionation. It revealed three classes of sensitive parameters: physical, diffusional and kinetic. Then a calibration protocol of the MBBR dynamic model was proposed. It was successfully tested on data recorded at a pilot-scale plant and a calibrated set of values was obtained for four parameters: the maximum biofilm thickness, the detachment rate, the maximum autotrophic growth rate and the oxygen transfer rate.

  1. Constraining the symmetry energy with heavy-ion collisions and Bayesian analysis

    NASA Astrophysics Data System (ADS)

    Tsang, C. Y.; Jhang, G.; Morfouace, P.; Lynch, W. G.; Tsang, M. B.; HiRA Collaboration

    2017-09-01

    To extract constraints on symmetry energy terms in the nuclear Equation of State (EoS), data from heavy-ion reactions are often compared to calculations from transport models. As multiple model input parameters are needed in the transport model, multi-parameter analysis is necessary to understand the relationships, especially if strong correlations exist among the parameters. In this talk, I will discuss how four symmetry energy parameters, the symmetry energy (S0) and slope (L) at saturation density as well as the nucleon scalar effective mass (ms*) and the nucleon effective mass splitting (FI), are obtained by comparing transport model results with experimental data such as isospin diffusion and n/p spectral ratios using the MADAI Bayesian analysis software. The probability of each parameter having a certain value given the experimental data can be calculated with Bayes' theorem by Markov chain Monte Carlo integration. Results using single and double ratios of neutron and proton spectra from 124Sn+124Sn and 112Sn+112Sn collisions at 120 MeV/u, as well as isospin diffusion from Sn+Sn isotopes at 50 and 35 MeV/u, will be presented. This research is supported by the National Science Foundation under Grant No. PHY-1565546.

  2. Estimation of Time-Varying Pilot Model Parameters

    NASA Technical Reports Server (NTRS)

    Zaal, Peter M. T.; Sweet, Barbara T.

    2011-01-01

    Human control behavior is rarely completely stationary over time due to fatigue or loss of attention. In addition, there are many control tasks for which human operators need to adapt their control strategy to vehicle dynamics that vary in time. In previous studies on the identification of time-varying pilot control behavior wavelets were used to estimate the time-varying frequency response functions. However, the estimation of time-varying pilot model parameters was not considered. Estimating these parameters can be a valuable tool for the quantification of different aspects of human time-varying manual control. This paper presents two methods for the estimation of time-varying pilot model parameters, a two-step method using wavelets and a windowed maximum likelihood estimation method. The methods are evaluated using simulations of a closed-loop control task with time-varying pilot equalization and vehicle dynamics. Simulations are performed with and without remnant. Both methods give accurate results when no pilot remnant is present. The wavelet transform is very sensitive to measurement noise, resulting in inaccurate parameter estimates when considerable pilot remnant is present. Maximum likelihood estimation is less sensitive to pilot remnant, but cannot detect fast changes in pilot control behavior.
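
    The windowed idea can be previewed with ordinary least squares rather than full maximum likelihood: slide a window along the record and re-estimate the parameter inside each window. The single slowly varying "pilot gain" below is a toy stand-in for the multi-parameter pilot models treated in the paper.

```python
import numpy as np

rng = np.random.default_rng(3)
n, win = 2000, 200
u = rng.normal(size=n)                    # input signal (e.g., tracking error)
gain = np.linspace(1.0, 3.0, n)           # slowly varying "pilot gain"
y = gain * u + 0.1 * rng.normal(size=n)   # measured control output + remnant

est = [float(np.linalg.lstsq(u[i:i + win, None], y[i:i + win], rcond=None)[0][0])
       for i in range(0, n - win, win // 2)]   # 50%-overlapping windows
print(np.round(est, 2))                   # estimates sweep roughly from 1 to 3
```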

  3. The Easy Way of Finding Parameters in IBM (EWofFP-IBM)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Turkan, Nureddin

    E2/M1 multipole mixing ratios of even-even nuclei in the transitional region can be calculated as soon as the B(E2) and B(M1) values are obtained by using the PHINT and/or NP-BOS codes. Correct energy calculations must be obtained to produce such calculations, and correct parameter values are needed to calculate the energies. The logic of the codes is based on the mathematical and physical statements describing the interacting boson model (IBM), which is one of the models of nuclear structure physics. Here, the big problem is to find the best-fitted parameter values of the model. So, by using the Easy Way of Finding Parameters in IBM (EWofFP-IBM), the best parameter values of the IBM Hamiltonian for {sup 102-110}Pd and {sup 102-110}Ru isotopes were first obtained and then the energies were calculated. In the end, it was seen that the calculated results are in good agreement with the experimental ones. In addition, the energy values obtained by using the EWofFP-IBM were found to be clearly better than the previous theoretical data.

  4. Helicopter mathematical models and control law development for handling qualities research

    NASA Technical Reports Server (NTRS)

    Chen, Robert T. N.; Lebacqz, J. Victor; Aiken, Edwin W.; Tischler, Mark B.

    1988-01-01

    Progress made in joint NASA/Army research concerning rotorcraft flight-dynamics modeling, design methodologies for rotorcraft flight-control laws, and rotorcraft parameter identification is reviewed. Research into these interactive disciplines is needed to develop the analytical tools necessary to conduct flying qualities investigations using both the ground-based and in-flight simulators, and to permit an efficient means of performing flight test evaluation of rotorcraft flying qualities for specification compliance. The need for the research is particularly acute for rotorcraft because of their mathematical complexity, high order dynamic characteristics, and demanding mission requirements. The research in rotorcraft flight-dynamics modeling is pursued along two general directions: generic nonlinear models and nonlinear models for specific rotorcraft. In addition, linear models are generated that extend their utilization from 1-g flight to high-g maneuvers and expand their frequency range of validity for the design analysis of high-gain flight control systems. A variety of methods ranging from classical frequency-domain approaches to modern time-domain control methodology that are used in the design of rotorcraft flight control laws is reviewed. Also reviewed is a study conducted to investigate the design details associated with high-gain, digital flight control systems for combat rotorcraft. Parameter identification techniques developed for rotorcraft applications are reviewed.

  5. Numerical simulation of heat transfer and phase change during freezing of potatoes with different shapes at the presence or absence of ultrasound irradiation

    NASA Astrophysics Data System (ADS)

    Kiani, Hossein; Sun, Da-Wen

    2018-03-01

    As novel processes such as ultrasound-assisted heat transfer emerge, new models and simulations are needed to describe them. In this paper, a numerical model was developed to study the freezing process of potatoes. Different thermal conductivity models were investigated, and the effect of sonication on convective fluid-to-particle heat transfer was evaluated. Potato spheres and sticks were the geometries studied, and the effect of different processing parameters on the results was examined. The numerical model successfully predicted the ultrasound-assisted freezing of various shapes in comparison with experimental data. The model was sensitive to variation in processing parameters (sound intensity, duty cycle, shape, etc.) and could accurately simulate the freezing process. Among the thermal conductivity correlations studied, the de Vries and Maxwell models gave the closest estimates. The maximum temperature difference was obtained for the series equation, which underestimated the thermal conductivity. Both numerical and experimental data confirmed that an optimum combination of intensity and duty cycle is needed to reduce the freezing time, since increasing the intensity increases the heat transfer rate and the heating due to sonication simultaneously, and these act against each other.

  6. On the generation of climate model ensembles

    NASA Astrophysics Data System (ADS)

    Haughton, Ned; Abramowitz, Gab; Pitman, Andy; Phipps, Steven J.

    2014-10-01

    Climate model ensembles are used to estimate uncertainty in future projections, typically by interpreting the ensemble distribution for a particular variable probabilistically. There are, however, different ways to produce climate model ensembles that yield different results, and therefore different probabilities for a future change in a variable. Perhaps equally importantly, there are different approaches to interpreting the ensemble distribution that lead to different conclusions. Here we use a reduced-resolution climate system model to compare three common ways to generate ensembles: initial conditions perturbation, physical parameter perturbation, and structural changes. Despite these three approaches conceptually representing very different categories of uncertainty within a modelling system, when comparing simulations to observations of surface air temperature they can be very difficult to separate. Using the twentieth century CMIP5 ensemble for comparison, we show that initial conditions ensembles, in theory representing internal variability, significantly underestimate observed variance. Structural ensembles, perhaps less surprisingly, exhibit over-dispersion in simulated variance. We argue that future climate model ensembles may need to include parameter or structural perturbation members in addition to perturbed initial conditions members to ensure that they sample uncertainty due to internal variability more completely. We note that where ensembles are over- or under-dispersive, such as for the CMIP5 ensemble, estimates of uncertainty need to be treated with care.

  7. Parametric sensitivity analysis of an agro-economic model of management of irrigation water

    NASA Astrophysics Data System (ADS)

    El Ouadi, Ihssan; Ouazar, Driss; El Menyari, Younesse

    2015-04-01

    The current work aims to build an analysis and decision-support tool for policy options concerning the optimal allocation of water resources, while allowing a better reflection on the issue of valuation of water by the agricultural sector in particular. Thus, a model disaggregated by farm type was developed for the rural town of Ait Ben Yacoub, located in eastern Morocco. This model integrates economic, agronomic and hydraulic data and simulates the agricultural gross margin across this area under changes in public policy and climatic conditions, taking into account the competition for collective resources. To identify the model input parameters that influence the model results, a parametric sensitivity analysis was performed with the "One-Factor-At-A-Time" approach within the "Screening Designs" method. Preliminary results of this analysis show that, among the 10 parameters analyzed, 6 significantly affect the objective function of the model; in order of influence, they are: i) coefficient of crop yield response to water, ii) average daily weight gain of livestock, iii) rate of livestock reproduction, iv) maximum crop yield, v) supply of irrigation water and vi) precipitation. These 6 parameters register sensitivity indexes ranging between 0.22 and 1.28. These results indicate high uncertainties in these parameters, which can dramatically skew the results of the model, and the need to pay particular attention to their estimates. Keywords: water, agriculture, modeling, optimal allocation, parametric sensitivity analysis, Screening Designs, One-Factor-At-A-Time, agricultural policy, climate change.
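
    The One-Factor-At-A-Time step itself is straightforward to sketch: perturb each input from the baseline in turn and record the normalized change in the objective. The stand-in gross-margin function and the +10% perturbation below are illustrative only; the study's sensitivity indexes (0.22 to 1.28) come from its own agro-economic model.

```python
import numpy as np

def gross_margin(p):                      # hypothetical stand-in objective
    yield_resp, daily_gain, repro, max_yield, water, precip = p
    return 100 * yield_resp * max_yield * np.sqrt(water + precip) \
        + 40 * daily_gain * repro

names = ["yield_resp", "daily_gain", "repro", "max_yield", "water", "precip"]
base = np.ones(6)
f0 = gross_margin(base)

for i, name in enumerate(names):
    p = base.copy()
    p[i] *= 1.10                          # perturb one factor at a time by +10%
    s = ((gross_margin(p) - f0) / f0) / 0.10   # normalized sensitivity index
    print(f"{name:>10s}: {s:+.2f}")
```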

  8. Bayesian Regression of Thermodynamic Models of Redox Active Materials

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Johnston, Katherine

    Finding a suitable functional redox material is a critical challenge to achieving scalable, economically viable technologies for storing concentrated solar energy in the form of a defected oxide. Demonstrating effectiveness for thermal storage or solar fuel is largely accomplished by using a thermodynamic model derived from experimental data. The purpose of this project is to test the accuracy of our regression model on representative data sets. Determining the accuracy of the model includes parameter fitting the model to the data, comparing the model using different numbers of parameters, and analyzing the entropy and enthalpy calculated from the model. Three data sets were considered in this project: two demonstrating materials for solar fuels by water splitting and one demonstrating a material for thermal storage. Using Bayesian inference and Markov Chain Monte Carlo (MCMC), parameter estimation was performed on the three data sets. Good results were achieved, except for some deviations at the edges of the data input ranges. The evidence values were then calculated in a variety of ways and used to compare models with different numbers of parameters. It was believed that at least one of the parameters was unnecessary, and comparing evidence values demonstrated that the parameter was needed on one data set and not significantly helpful on another. The entropy was calculated by taking the derivative in one variable and integrating over another, and its uncertainty was calculated by evaluating the entropy over multiple MCMC samples. Afterwards, all the parts were written up as a tutorial for the Uncertainty Quantification Toolkit (UQTk).

  9. SU-E-T-405: Evaluation of the Raystation Electron Monte Carlo Algorithm for Varian Linear Accelerators

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sansourekidou, P; Allen, C

    2015-06-15

    Purpose: To evaluate the Raystation v4.51 Electron Monte Carlo algorithm for Varian Trilogy, IX and 2100 series linear accelerators and commission it for clinical use. Methods: Seventy-two water scans and forty air scans were acquired with a water tank in the form of profiles and depth doses, as requested by the vendor. Data were imported into the Rayphysics beam modeling module. The energy spectrum was modeled using seven parameters, contamination photons using five parameters, and the source phase space using six parameters. Calculations were performed in clinical version 4.51, and percent depth dose curves and profiles were extracted and compared to the water tank measurements. Sensitivity tests were performed for all parameters. Grid size and particle histories were evaluated per energy for statistical-uncertainty performance. Results: Model accuracy for air profiles is poor in the shoulder and penumbra regions. However, model accuracy for water scans is acceptable: all energies and cones are within 2%/2mm for 90% of the points evaluated. Source phase space parameters have a cumulative effect. To achieve distributions with a satisfactory smoothness level, a 0.1 cm grid and 3,000,000 particle histories were used for commissioning calculations. Calculation time was approximately 3 hours per energy. Conclusion: Raystation Electron Monte Carlo is acceptable for clinical use for the Varian accelerators listed. Results are inferior to Elekta Electron Monte Carlo modeling. Known issues were reported to Raysearch and will be resolved in upcoming releases. Auto-modeling is limited to open-cone depth dose curves and needs expansion.

  10. Building a USGS National Crustal Model: Theoretical foundation, inputs, and calibration for the Western United States

    NASA Astrophysics Data System (ADS)

    Shah, A. K.; Boyd, O. S.; Sowers, T.; Thompson, E.

    2017-12-01

    Seismic hazard assessments depend on an accurate prediction of ground motion, which in turn depends on a base knowledge of three-dimensional variations in density, seismic velocity, and attenuation. We are building a National Crustal Model (NCM) using a physical theoretical foundation, 3-D geologic model, and measured data for calibration. An initial version of the NCM for the western U.S. is planned to be available in mid-2018 and for the remainder of the U.S. in 2019. The theoretical foundation of the NCM couples Biot-Gassmann theory for the porous composite with mineral physics calculations for the solid mineral matrix. The 3-D geologic model is defined through integration of results from a range of previous studies including maps of surficial porosity, surface and subsurface lithology, and the depths to bedrock and crystalline basement or seismic equivalent. The depths to bedrock and basement are estimated using well, seismic, and gravity data; in many cases these data are compiled by combining previous studies. Two parameters controlling how porosity changes with depth are assumed to be a function of lithology and calibrated using measured shear- and compressional-wave velocity and density profiles. Uncertainties in parameters derived from the model increase with depth and are dependent on the quantity and quality of input data sets. An interface to the model provides parameters needed for ground motion prediction equations in the Western U.S., including, for example, the time-averaged shear-wave velocity in the upper 30 meters (VS30) and the depths to 1.0 and 2.5 km/s shear-wave speeds (Z1.0 and Z2.5), which have a very rough correlation to the depths to bedrock and basement, as well as interpolated 3D models for use with various Urban Hazard Mapping strategies. We compare parameters needed for ground motion prediction equations including VS30, Z1.0, and Z2.5 between those derived from existing models, for example, 3-D velocity models for southern California available from the Southern California Earthquake Center, and those derived from the NCM and assess their ability to reduce the variance of observed ground motions.
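
    Two of the ground-motion parameters named above have simple definitions worth making concrete: VS30 is the time-averaged shear-wave velocity over the top 30 m (30 m divided by the vertical shear-wave travel time), and Z1.0 is the depth at which the shear-wave speed first reaches 1.0 km/s. The layered profile in the sketch below is hypothetical.

```python
def vs30(thickness_m, vs_mps):
    """Time-averaged shear-wave velocity over the top 30 m of a layered profile."""
    depth = travel_time = 0.0
    for h, v in zip(thickness_m, vs_mps):
        h_used = min(h, 30.0 - depth)   # clip the last layer at 30 m depth
        travel_time += h_used / v
        depth += h_used
        if depth >= 30.0:
            break
    return 30.0 / travel_time

def depth_to_speed(thickness_m, vs_mps, v_target):
    """Depth (m) at which the shear-wave speed first reaches v_target (m/s)."""
    depth = 0.0
    for h, v in zip(thickness_m, vs_mps):
        if v >= v_target:
            return depth
        depth += h
    return depth

h = [5.0, 10.0, 25.0, 400.0]           # hypothetical layer thicknesses (m)
v = [180.0, 350.0, 600.0, 1100.0]      # hypothetical shear velocities (m/s)
print(round(vs30(h, v)), "m/s;  Z1.0 =", depth_to_speed(h, v, 1000.0), "m")
```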

  11. Analyzing Strategic Business Rules through Simulation Modeling

    NASA Astrophysics Data System (ADS)

    Orta, Elena; Ruiz, Mercedes; Toro, Miguel

    Service Oriented Architecture (SOA) holds promise for business agility, since it allows business processes to change to meet new customer demands or market needs without causing a cascade of changes in the underlying IT systems. Business rules are the instrument chosen to help business and IT collaborate. In this paper, we propose the use of simulation models to model and simulate strategic business rules that are then disaggregated at different levels of an SOA architecture. Our proposal is aimed at helping find a good configuration for strategic business objectives and IT parameters. The paper includes a case study where a simulation model is built to support business decision-making in a context where finding a good configuration for different business parameters and performance is too complex to analyze by trial and error.

  12. Effect of Bearing Housings on Centrifugal Pump Rotor Dynamics

    NASA Astrophysics Data System (ADS)

    Yashchenko, A. S.; Rudenko, A. A.; Simonovskiy, V. I.; Kozlov, O. M.

    2017-08-01

    The article deals with the effect of a bearing housing on the rotor dynamics of a barrel-casing centrifugal boiler feed pump rotor. The calculation of the rotor model including the bearing housing has been performed by the method of initial parameters. The calculation of a rotor solid model including the bearing housing has been performed by the finite element method. Results of both calculations highlight the need to include bearing housings in dynamic analyses of the pump rotor. The calculation performed with modern software packages is more time-consuming; at the same time, it is preferred because a graphic editor is employed for creating the numerical model. When many variants of design parameters need to be examined, programs for beam modeling should be used.

  13. Web-Based Model Visualization Tools to Aid in Model Optimization and Uncertainty Analysis

    NASA Astrophysics Data System (ADS)

    Alder, J.; van Griensven, A.; Meixner, T.

    2003-12-01

    Individuals applying hydrologic models need quick, easy-to-use visualization tools to assess and understand model performance. We present here the Interactive Hydrologic Modeling (IHM) visualization toolbox. IHM utilizes high-speed Internet access, the portability of the web and the increasing power of modern computers to provide an online toolbox for quick and easy visualization of model results. This visualization interface allows for the interpretation and analysis of Monte Carlo and batch model simulation results. Oftentimes a given project will generate several thousand or even hundreds of thousands of simulations. This large number of simulations creates a challenge for post-simulation analysis. IHM's goal is to solve this problem by loading all of the data into a database with a web interface that can dynamically generate graphs for the user according to their needs. IHM currently supports: a global sample statistics table (e.g. sum of squares error, sum of absolute differences, etc.), top-ten simulation tables and graphs, graphs of an individual simulation using time-step data, objective-based dotty plots, threshold-based parameter cumulative distribution function graphs (as used in the regional sensitivity analysis of Spear and Hornberger) and 2D error-surface graphs of the parameter space. IHM suits everything from the simplest bucket model to the largest set of Monte Carlo simulations with a multi-dimensional parameter and model output space. By using a web interface, IHM offers the user complete flexibility: they can be anywhere in the world using any operating system. IHM can be a time- and money-saving alternative to producing graphs or conducting analyses that may not be informative, or to purchasing expensive proprietary software. IHM is a simple, free method of interpreting and analyzing batch model results, and is suitable for novice to expert hydrologic modelers.

  14. Inverse modeling of rainfall infiltration with a dual permeability approach using different matrix-fracture coupling variants.

    NASA Astrophysics Data System (ADS)

    Blöcher, Johanna; Kuraz, Michal

    2017-04-01

    In this contribution we propose implementations of the dual permeability model with different inter-domain exchange descriptions, together with metaheuristic optimization algorithms for parameter identification and mesh optimization. We compare variants of the coupling term with different numbers of parameters to test whether a reduction of parameters is feasible. This can reduce parameter uncertainty in inverse modeling, but also allows for different conceptual models of the domain and matrix coupling. The different variants of the dual permeability model are implemented in 1D and 2D in the open-source library DRUtES, written in Fortran 2003/2008. For parameter identification we use adaptations of particle swarm optimization (PSO) and teaching-learning-based optimization (TLBO), which are population-based metaheuristics with different learning strategies. These are high-level stochastic search algorithms that do not require gradient information or a convex search space. Despite increasing computing power and parallel processing, an overly fine mesh is not feasible for parameter identification. This creates the need to find a mesh that optimizes both accuracy and simulation time. We use a bi-objective PSO algorithm to generate a Pareto front of optimal meshes to account for both objectives. The dual permeability model and the optimization algorithms were tested on virtual data and field TDR sensor readings. The TDR sensor readings showed a very steep increase during rapid rainfall events and a subsequent steep decrease. This was theorized to be an effect of artificial macroporous envelopes surrounding the TDR sensors, creating an anomalous region with distinct local soil hydraulic properties. One of our objectives is to test how well the dual permeability model can describe this infiltration behavior and which coupling term is most suitable.
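
    A global-best PSO of the kind adapted here fits in a few lines of Python. The sketch below is generic, not the authors' adaptation, and the toy misfit merely stands in for the DRUtES objective (e.g., squared residuals between simulated and observed TDR water contents); the three "parameters" and their bounds are invented.

```python
import numpy as np

def pso(f, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Global-best particle swarm minimization of f over box bounds."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds[:, 0], bounds[:, 1]
    x = rng.uniform(lo, hi, (n_particles, len(lo)))   # positions
    v = np.zeros_like(x)                              # velocities
    pbest, pval = x.copy(), np.apply_along_axis(f, 1, x)
    g = pbest[pval.argmin()].copy()                   # global best position
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        fx = np.apply_along_axis(f, 1, x)
        better = fx < pval
        pbest[better], pval[better] = x[better], fx[better]
        g = pbest[pval.argmin()].copy()
    return g, pval.min()

misfit = lambda p: float(np.sum((p - np.array([0.35, 1e-3, 2.2])) ** 2))
best, err = pso(misfit, np.array([[0.2, 0.5], [1e-5, 1e-2], [1.1, 3.0]]))
print(np.round(best, 4), f"{err:.2e}")
```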

  15. Feasibility of employing model-based optimization of pulse amplitude and electrode distance for effective tumor electropermeabilization.

    PubMed

    Sel, Davorka; Lebar, Alenka Macek; Miklavcic, Damijan

    2007-05-01

    In electrochemotherapy (ECT), electropermeabilization parameters (pulse amplitude, electrode setup) need to be customized in order to expose the whole tumor to electric field intensities above the permeabilizing threshold and thus achieve effective ECT. In this paper, we present a model-based optimization approach toward determination of optimal electropermeabilization parameters for effective ECT. The optimization is carried out by minimizing the difference between the permeabilization threshold and the electric field intensities computed by a finite element model in selected points of the tumor. We examined the feasibility of model-based optimization of electropermeabilization parameters on a model geometry generated from computed tomography images, representing brain tissue with a tumor. The continuous parameter subject to optimization was the pulse amplitude; the distance between electrode pairs was optimized as a discrete parameter. The optimization also considered the pulse-generator constraints on voltage and current. During optimization the two constraints were reached, preventing the exposure of the entire volume of the tumor to electric field intensities above the permeabilizing threshold. However, despite the fact that with the particular needle-array holder and pulse generator the entire volume of the tumor was not permeabilized, the maximal extent of permeabilization for the particular case (electrodes, tissue) was determined with the proposed approach. The model-based optimization approach could also be used for electro-gene transfer, where electric field intensities should be distributed between the permeabilizing threshold and the irreversible threshold, the latter causing tissue necrosis. This can be obtained by adding constraints on the maximum electric field intensity in the optimization procedure.

  16. Optimization of multi-environment trials for genomic selection based on crop models.

    PubMed

    Rincent, R; Kuhn, E; Monod, H; Oury, F-X; Rousset, M; Allard, V; Le Gouis, J

    2017-08-01

    We propose a statistical criterion to optimize multi-environment trials to predict genotype × environment interactions more efficiently, by combining crop growth models and genomic selection models. Genotype × environment interactions (GEI) are common in plant multi-environment trials (METs). In this context, models developed for genomic selection (GS), which refers to the use of genome-wide information for predicting breeding values of selection candidates, need to be adapted. One promising way to increase prediction accuracy in various environments is to combine ecophysiological and genetic modelling through crop growth models (CGM) incorporating genetic parameters. The efficiency of this approach relies on the quality of the parameter estimates, which depends on the environments composing the MET used for calibration. The objective of this study was to determine a method to optimize the set of environments composing the MET for estimating genetic parameters in this context. A criterion called OptiMET was defined to this end and was evaluated on simulated and real data, with the example of wheat phenology. The MET defined with OptiMET allowed the genetic parameters to be estimated with lower error, leading to higher QTL detection power and higher prediction accuracies. The MET defined with OptiMET was on average more efficient, in terms of the quality of the parameter estimates, than a random MET composed of twice as many environments. OptiMET is thus a valuable tool to determine optimal experimental conditions to best exploit METs and the phenotyping tools that are currently being developed.

  17. Effect of damping and yielding on the seismic response of 3D steel buildings with PMRF.

    PubMed

    Reyes-Salazar, Alfredo; Haldar, Achintya; Rodelo-López, Ramon Eduardo; Bojórquez, Eden

    2014-01-01

    The effect of viscous damping and yielding on the reduction of the seismic responses of steel buildings, modeled as three-dimensional (3D) complex multidegree-of-freedom (MDOF) systems, is studied. The reduction produced by damping may be larger or smaller than that produced by yielding. This reduction can vary significantly from one structural idealization to another and is smaller for global than for local response parameters; among the local parameters it also depends on the particular response parameter considered. The uncertainty in the estimation is significantly larger for local response parameters and decreases as damping increases. The results show the limitations of the commonly used static equivalent lateral force procedure, in which local and global response parameters are reduced in the same proportion. It is concluded that estimating the effect of damping and yielding on the seismic response of steel buildings by using simplified models may be a very crude approximation. Moreover, the effect of yielding should be explicitly calculated by using complex 3D MDOF models instead of estimating it in terms of equivalent viscous damping. The findings of this paper are for the particular models used in the study; much more research is needed to reach more general conclusions. PMID:25097892

  18. Bridging the gap between theoretical ecology and real ecosystems: modeling invertebrate community composition in streams.

    PubMed

    Schuwirth, Nele; Reichert, Peter

    2013-02-01

    For the first time, we combine concepts of theoretical food web modeling, the metabolic theory of ecology, and ecological stoichiometry with the use of functional trait databases to predict the coexistence of invertebrate taxa in streams. We developed a mechanistic model that describes growth, death, and respiration of different taxa dependent on various environmental influence factors to estimate survival or extinction. Parameter and input uncertainty is propagated to model results. Such a model is needed to test our current quantitative understanding of ecosystem structure and function and to predict effects of anthropogenic impacts and restoration efforts. The model was tested using macroinvertebrate monitoring data from a catchment of the Swiss Plateau. Even without fitting model parameters, the model is able to represent key patterns of the coexistence structure of invertebrates at sites varying in external conditions (litter input, shading, water quality). This confirms the suitability of the model concept. More comprehensive testing and resulting model adaptations will further increase the predictive accuracy of the model.

  19. Model-based high-throughput design of ion exchange protein chromatography.

    PubMed

    Khalaf, Rushd; Heymann, Julia; LeSaout, Xavier; Monard, Florence; Costioli, Matteo; Morbidelli, Massimo

    2016-08-12

    This work describes the development of a model-based high-throughput design (MHD) tool for the operating space determination of a chromatographic cation-exchange protein purification process. Based on a previously developed thermodynamic mechanistic model, the MHD tool generates a large amount of system knowledge and thereby permits minimizing the required experimental workload. In particular, each new experiment is designed to generate information needed to help refine and improve the model. Unnecessary experiments that do not increase system knowledge are avoided. Instead of aspiring to a perfectly parameterized model, the goal of this design tool is to use early model parameter estimates to find interesting experimental spaces, and to refine the model parameter estimates with each new experiment until a satisfactory set of process parameters is found. The MHD tool is split into four sections: (1) prediction, high throughput experimentation using experiments in (2) diluted conditions and (3) robotic automated liquid handling workstations (robotic workstation), and (4) operating space determination and validation. (1) Protein and resin information, in conjunction with the thermodynamic model, is used to predict protein resin capacity. (2) The predicted model parameters are refined based on gradient experiments in diluted conditions. (3) Experiments on the robotic workstation are used to further refine the model parameters. (4) The refined model is used to determine operating parameter space that allows for satisfactory purification of the protein of interest on the HPLC scale. Each section of the MHD tool is used to define the adequate experimental procedures for the next section, thus avoiding any unnecessary experimental work. We used the MHD tool to design a polishing step for two proteins, a monoclonal antibody and a fusion protein, on two chromatographic resins, in order to demonstrate it has the ability to strongly accelerate the early phases of process development. Copyright © 2016 Elsevier B.V. All rights reserved.

  1. UCODE, a computer code for universal inverse modeling

    USGS Publications Warehouse

    Poeter, E.P.; Hill, M.C.

    1999-01-01

    This article presents the US Geological Survey computer program UCODE, which was developed in collaboration with the US Army Corps of Engineers Waterways Experiment Station and the International Ground Water Modeling Center of the Colorado School of Mines. UCODE performs inverse modeling, posed as a parameter-estimation problem, using nonlinear regression. Any application model or set of models can be used; the only requirement is that they have numerical (ASCII or text-only) input and output files and that the numbers in these files have sufficient significant digits. Application models can include preprocessors and postprocessors as well as models related to the processes of interest (physical, chemical and so on), making UCODE extremely powerful for model calibration. Estimated parameters can be defined flexibly with user-specified functions. Observations to be matched in the regression can be any quantity for which a simulated equivalent value can be produced; simulated equivalent values are calculated using values that appear in the application model output files and can be manipulated with additive and multiplicative functions, if necessary. Prior, or direct, information on estimated parameters can also be included in the regression. The nonlinear regression problem is solved by minimizing a weighted least-squares objective function with respect to the parameter values using a modified Gauss-Newton method. Sensitivities needed for the method are calculated approximately by forward or central differences, and problems and solutions related to this approximation are discussed. Statistics are calculated and printed for use in (1) diagnosing inadequate data or identifying parameters that probably cannot be estimated with the available data, (2) evaluating estimated parameter values, (3) evaluating the model representation of the actual processes and (4) quantifying the uncertainty of model simulated values. UCODE is intended for use on any computer operating system: it consists of algorithms programmed in Perl, a freeware language designed for text manipulation, and Fortran 90, which efficiently performs numerical calculations.
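
    As a schematic of the modified Gauss-Newton iteration described above, the Python sketch below minimizes a weighted least-squares objective with forward-difference sensitivities; simulate() stands in for the external application model that UCODE drives through its input and output files, and the simple step-halving is a crude substitute for UCODE's actual damping strategy.

        import numpy as np

        def gauss_newton(simulate, p, obs, w, n_iter=20, rel_step=1e-6):
            """Minimize sum(w * (obs - simulate(p))**2) over the parameters p."""
            p = np.asarray(p, dtype=float)
            for _ in range(n_iter):
                base = simulate(p)
                r = obs - base                                  # residuals
                J = np.empty((obs.size, p.size))
                for j in range(p.size):                         # forward differences
                    dp = np.zeros_like(p)
                    dp[j] = rel_step * max(1.0, abs(p[j]))
                    J[:, j] = (simulate(p + dp) - base) / dp[j]
                JW = J.T * w                                    # apply the weights
                step = np.linalg.solve(JW @ J, JW @ r)
                t = 1.0                                         # step-halving as crude damping
                while (np.sum(w * (obs - simulate(p + t * step))**2)
                       > np.sum(w * r**2)) and t > 1e-4:
                    t *= 0.5
                p = p + t * step
            return p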

  2. An Initial Non-Equilibrium Porous-Media Model for CFD Simulation of Stirling Regenerators

    NASA Technical Reports Server (NTRS)

    Tew, Roy C.; Simon, Terry; Gedeon, David; Ibrahim, Mounir; Rong, Wei

    2006-01-01

    The objective of this paper is to define empirical parameters for an initial thermal non-equilibrium porous-media model for use in Computational Fluid Dynamics (CFD) codes for simulation of Stirling regenerators. The two codes currently used at Glenn Research Center for Stirling modeling are Fluent and CFD-ACE. The codes' porous-media models are equilibrium models, which assume the solid matrix and the fluid are in thermal equilibrium. This is believed to be a poor assumption for Stirling regenerators; the 1-D regenerator models used in Stirling design employ non-equilibrium formulations and suggest that regenerator matrix and gas average temperatures can differ by several degrees at a given axial location and time during the cycle. Experimentally based information was used to define hydrodynamic dispersion, permeability, the inertial coefficient, fluid effective thermal conductivity, and the fluid-solid heat transfer coefficient. Solid effective thermal conductivity was also estimated. Determination of model parameters was based on planned use in a CFD model of Infinia's Stirling Technology Demonstration Converter (TDC), which uses a random-fiber regenerator matrix. Emphasis is on the use of available data to define the empirical parameters needed in a thermal non-equilibrium porous-media model for Stirling regenerator simulation. Such a model has not yet been implemented by the authors or their associates.

  3. Probabilistic modeling of percutaneous absorption for risk-based exposure assessments and transdermal drug delivery.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ho, Clifford Kuofei

    Chemical transport through human skin can play a significant role in human exposure to toxic chemicals in the workplace, as well as to chemical/biological warfare agents in the battlefield. The viability of transdermal drug delivery also relies on chemical transport processes through the skin. Models of percutaneous absorption are needed for risk-based exposure assessments and drug-delivery analyses, but previous mechanistic models have been largely deterministic. A probabilistic, transient, three-phase model of percutaneous absorption of chemicals has been developed to assess the relative importance of uncertain parameters and processes that may be important to risk-based assessments. Penetration routes through the skin that were modeled include the following: (1) intercellular diffusion through the multiphase stratum corneum; (2) aqueous-phase diffusion through sweat ducts; and (3) oil-phase diffusion through hair follicles. Uncertainty distributions were developed for the model parameters, and a Monte Carlo analysis was performed to simulate probability distributions of mass fluxes through each of the routes. Sensitivity analyses using stepwise linear regression were also performed to identify model parameters that were most important to the simulated mass fluxes at different times. This probabilistic analysis of percutaneous absorption (PAPA) method has been developed to improve risk-based exposure assessments and transdermal drug-delivery analyses, where parameters and processes can be highly uncertain.
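
    To illustrate the Monte Carlo step, the toy Python sketch below propagates hypothetical parameter distributions through a single steady-state route (flux = permeability × concentration); the distributions and values are invented for illustration and are not those of the report.

        import numpy as np

        rng = np.random.default_rng(0)
        n = 10_000

        log10_kp = rng.normal(-2.0, 0.5, n)        # log10 permeability, cm/h (hypothetical)
        conc = rng.uniform(0.5, 2.0, n)            # surface concentration, mg/cm^3
        area = rng.triangular(300, 360, 420, n)    # exposed skin area, cm^2

        flux = 10.0**log10_kp * conc               # steady-state flux, mg/cm^2/h
        dose_rate = flux * area                    # absorbed dose rate, mg/h

        print(f"median {np.median(dose_rate):.2f} mg/h, "
              f"95th percentile {np.percentile(dose_rate, 95):.2f} mg/h")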

  4. Variance-based Sensitivity Analysis of Large-scale Hydrological Model to Prepare an Ensemble-based SWOT-like Data Assimilation Experiments

    NASA Astrophysics Data System (ADS)

    Emery, C. M.; Biancamaria, S.; Boone, A. A.; Ricci, S. M.; Garambois, P. A.; Decharme, B.; Rochoux, M. C.

    2015-12-01

    Land Surface Models (LSM) coupled with River Routing schemes (RRM) are used in Global Climate Models (GCM) to simulate the continental part of the water cycle. They are key components of GCM as they provide boundary conditions to atmospheric and oceanic models. However, at global scale, errors arise mainly from simplified physics, atmospheric forcing, and input parameters. More particularly, those used in RRM, such as river width, depth and friction coefficients, are difficult to calibrate and are mostly derived from geomorphologic relationships, which may not always be realistic. In situ measurements are then used to calibrate these relationships and validate the model, but global in situ data are very sparse. Additionally, due to the lack of an existing global river geomorphology database and accurate forcing, models are run at coarse resolution. This is typically the case of the ISBA-TRIP model used in this study. A complementary alternative to in situ data are satellite observations. In this regard, the Surface Water and Ocean Topography (SWOT) satellite mission, jointly developed by NASA/CNES/CSA/UKSA and scheduled for launch around 2020, should be very valuable for calibrating RRM parameters. It will provide maps of water surface elevation for rivers wider than 100 meters over continental surfaces between 78°S and 78°N, as well as direct observation of river geomorphological parameters such as width and slope. Yet, before assimilating such data, it is necessary to analyze the RRM's temporal sensitivity to time-constant parameters. This study presents such an analysis over large river basins for the TRIP RRM. Model output uncertainty, represented by the unconditional variance, is decomposed into ordered contributions from each parameter. A time-dependent analysis then makes it possible to identify the parameters to which modeled water levels and discharge are most sensitive over a hydrological year. The results show that local parameters directly impact water levels, while discharge is more affected by parameters from the whole upstream drainage area. Understanding model output variance behavior will have a direct impact on the design and performance of the ensemble-based data assimilation platform, in which uncertainties are also modeled by variances. It will help to select RRM parameters for correction more objectively.
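
    The first-order part of such a variance decomposition can be approximated from a plain Monte Carlo sample; the Python sketch below estimates Var(E[Y|Xi])/Var(Y) by binning, with a toy algebraic response and hypothetical parameter names standing in for the TRIP routing model.

        import numpy as np

        def first_order_index(x, y, bins=20):
            """Crude estimate of the first-order sensitivity index Var(E[Y|X])/Var(Y)."""
            edges = np.quantile(x, np.linspace(0.0, 1.0, bins + 1))
            idx = np.clip(np.searchsorted(edges, x, side="right") - 1, 0, bins - 1)
            cond_means = np.array([y[idx == b].mean() for b in range(bins)])
            weights = np.array([(idx == b).mean() for b in range(bins)])
            return np.sum(weights * (cond_means - y.mean())**2) / y.var()

        rng = np.random.default_rng(1)
        width = rng.uniform(50.0, 500.0, 50_000)      # toy stand-ins for RRM parameters
        slope = rng.uniform(1e-5, 1e-3, 50_000)
        manning = rng.uniform(0.01, 0.05, 50_000)
        y = width**0.4 * slope**0.3 / manning         # toy discharge response

        for name, x in [("width", width), ("slope", slope), ("manning", manning)]:
            print(name, round(first_order_index(x, y), 3))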

  5. Developing a quality by design approach to model tablet dissolution testing: an industrial case study.

    PubMed

    Yekpe, Ketsia; Abatzoglou, Nicolas; Bataille, Bernard; Gosselin, Ryan; Sharkawi, Tahmer; Simard, Jean-Sébastien; Cournoyer, Antoine

    2018-07-01

    This study applied the concept of Quality by Design (QbD) to tablet dissolution. Its goal was to propose a quality control strategy to model dissolution testing of solid oral dose products according to International Conference on Harmonization guidelines. The methodology involved the following three steps: (1) a risk analysis to identify the material- and process-related parameters impacting the critical quality attributes of dissolution testing, (2) an experimental design to evaluate the influence of design factors (attributes and parameters selected by risk analysis) on dissolution testing, and (3) an investigation of the relationship between design factors and dissolution profiles. Results show that (a) in the case studied, the two parameters impacting dissolution kinetics are active pharmaceutical ingredient particle size distributions and tablet hardness and (b) these two parameters could be monitored with PAT tools to predict dissolution profiles. Moreover, based on the results obtained, modeling dissolution is possible. The practicality and effectiveness of the QbD approach were demonstrated through this industrial case study. Implementing such an approach systematically in industrial pharmaceutical production would reduce the need for tablet dissolution testing.

  6. Finding optimal vaccination strategies under parameter uncertainty using stochastic programming.

    PubMed

    Tanner, Matthew W; Sattenspiel, Lisa; Ntaimo, Lewis

    2008-10-01

    We present a stochastic programming framework for finding the optimal vaccination policy for controlling infectious disease epidemics under parameter uncertainty. Stochastic programming is a popular framework for including the effects of parameter uncertainty in a mathematical optimization model. The problem is initially formulated to find the minimum-cost vaccination policy under a chance constraint. The chance constraint requires that the probability that R* …

  7. Computational Electrocardiography: Revisiting Holter ECG Monitoring.

    PubMed

    Deserno, Thomas M; Marx, Nikolaus

    2016-08-05

    Since 1942, when Goldberger introduced the 12-lead electrocardiography (ECG), this diagnostic method has not been changed. After 70 years of technologic developments, we revisit Holter ECG from recording to understanding. A fundamental change is foreseen towards "computational ECG" (CECG), where continuous monitoring produces big data volumes that are impossible to inspect conventionally but require efficient computational methods. We draw parallels between CECG and computational biology, in particular with respect to computed tomography, computed radiology, and computed photography. From that, we identify the technology and methodology needed for CECG. Real-time transfer of raw data into meaningful parameters that are tracked over time will allow prediction of serious events, such as sudden cardiac death. Evolved from Holter's technology, portable smartphones with Bluetooth-connected textile-embedded sensors will capture noisy raw data (recording), process meaningful parameters over time (analysis), and transfer them to cloud services for sharing (handling), predicting serious events, and alarming (understanding). To make this happen, the following fields need more research: (i) signal processing, (ii) cycle decomposition, (iii) cycle normalization, (iv) cycle modeling, (v) clinical parameter computation, (vi) physiological modeling, and (vii) event prediction. We shall start immediately developing methodology for CECG analysis and understanding.

  8. A new approach to flow simulation using hybrid models

    NASA Astrophysics Data System (ADS)

    Solgi, Abazar; Zarei, Heidar; Nourani, Vahid; Bahmani, Ramin

    2017-11-01

    The necessity of flow prediction in rivers for proper management of water resources, and the need to determine inflow to dam reservoirs, design efficient flood warning systems, and so forth, have always led water researchers to look for models with fast response and low error. In recent years, the development of Artificial Neural Networks and wavelet theory, and the use of combinations of models, have helped researchers estimate river flow ever more accurately. In this study, daily and monthly scales were used for simulating the flow of the Gamasiyab River, Nahavand, Iran. The first simulation was done using two model types, ANN and ANFIS. Then, using wavelet theory to decompose the input signals of the parameters used, sub-signals were obtained and fed into the ANN and ANFIS to obtain the hybrid WANN and WANFIS models. In this study, in addition to precipitation and flow, temperature and evaporation were used as parameters to analyze their effects on the simulation. The results showed that using the wavelet transform improved the performance of the models on both the monthly and the daily scale. However, it had a greater effect on the monthly scale, and WANFIS was the best model.
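
    A sketch of the wavelet preprocessing step, assuming the PyWavelets package and a hypothetical file of daily flows: each sub-signal is reconstructed at full length and the sub-signals are stacked into the feature matrix fed to the ANN or ANFIS.

        import numpy as np
        import pywt  # PyWavelets, assumed available

        flow = np.loadtxt("gamasiyab_daily_flow.txt")       # hypothetical input file

        # Decompose the flow signal into one approximation and several detail levels
        coeffs = pywt.wavedec(flow, wavelet="db4", level=3)

        # Reconstruct each sub-signal at full length
        subsignals = []
        for i in range(len(coeffs)):
            kept = [c if j == i else np.zeros_like(c) for j, c in enumerate(coeffs)]
            subsignals.append(pywt.waverec(kept, wavelet="db4")[:len(flow)])

        X = np.column_stack(subsignals)   # inputs for the hybrid WANN/WANFIS model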

  9. Obtaining short-fiber orientation model parameters using non-lubricated squeeze flow

    NASA Astrophysics Data System (ADS)

    Lambert, Gregory; Wapperom, Peter; Baird, Donald

    2017-12-01

    Accurate models of fiber orientation dynamics during the processing of polymer-fiber composites are needed for the design work behind important automobile parts. All of the existing models utilize empirical parameters, but a standard method for obtaining them independent of processing does not exist. This study considers non-lubricated squeeze flow through a rectangular channel as a solution. A two-dimensional finite element method simulation of the kinematics and fiber orientation evolution along the centerline of a sample is developed as a first step toward a fully three-dimensional simulation. The model is used to fit to orientation data in a short-fiber-reinforced polymer composite after squeezing. Fiber orientation model parameters obtained in this study do not agree well with those obtained for the same material during startup of simple shear. This is attributed to the vastly different rates at which fibers orient during shearing and extensional flows. A stress model is also used to try to fit to experimental closure force data. Although the model can be tuned to the correct magnitude of the closure force, it does not fully recreate the transient behavior, which is attributed to the lack of any consideration for fiber-fiber interactions.

  10. The Rasch Model and Missing Data, with an Emphasis on Tailoring Test Items.

    ERIC Educational Resources Information Center

    de Gruijter, Dato N. M.

    Many applications of educational testing have a missing data aspect (MDA). This MDA is perhaps most pronounced in item banking, where each examinee responds to a different subtest of items from a large item pool and where both person and item parameter estimates are needed. The Rasch model is emphasized, and its non-parametric counterpart (the…

  11. Minimal models from W-constrained hierarchies via the Kontsevich-Miwa transform

    NASA Astrophysics Data System (ADS)

    Gato-Rivera, B.; Semikhatov, A. M.

    1992-08-01

    A direct relation between the conformal formalism for 2D quantum gravity and the W-constrained KP hierarchy is found, without the need to invoke intermediate matrix model technology. The Kontsevich-Miwa transform of the KP hierarchy is used to establish an identification between W constraints on the KP tau function and decoupling equations corresponding to Virasoro null vectors. The Kontsevich-Miwa transform maps the W(l)-constrained KP hierarchy to the (p′, p) minimal model, with the tau function being given by the correlator of a product of (dressed) (l, 1) [or (1, l)] operators, provided the Miwa parameter n_i and the free parameter (an abstract bc spin) present in the constraint are expressed through the ratio p′/p and the level l.

  12. Use of partial AUC to demonstrate bioequivalence of Zolpidem Tartrate Extended Release formulations.

    PubMed

    Lionberger, Robert A; Raw, Andre S; Kim, Stephanie H; Zhang, Xinyuan; Yu, Lawrence X

    2012-04-01

    FDA's bioequivalence recommendation for Zolpidem Tartrate Extended Release Tablets is the first to use partial AUC (pAUC) metrics for determining bioequivalence of modified-release dosage forms. Modeling and simulation studies were performed to aid in understanding the need for pAUC measures and also the proper pAUC truncation times. Deconvolution techniques, In Vitro/In Vivo Correlations, and the CAT (Compartmental Absorption and Transit) model were used to predict the PK profiles for zolpidem. Models were validated using in-house data submitted to the FDA. Using dissolution profiles expressed by the Weibull model as input for the CAT model, dissolution spaces were derived for simulated test formulations. The AUC(0-1.5) parameter was indicative of IR characteristics of early exposure and effectively distinguished among formulations that produced different pharmacodynamic effects. The AUC(1.5-t) parameter ensured equivalence with respect to the sustained release phase of Ambien CR. The variability of AUC(0-1.5) is higher than other PK parameters, but is reasonable for use in an equivalence test. In addition to the traditional PK parameters of AUCinf and Cmax, AUC(0-1.5) and AUC(1.5-t) are recommended to provide bioequivalence measures with respect to label indications for Ambien CR: onset of sleep and sleep maintenance.
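
    The pAUC metrics themselves are ordinary trapezoidal integrals of the concentration-time profile between the truncation times; a small Python sketch with an invented profile, for illustration only:

        import numpy as np

        def partial_auc(t, c, t_start, t_end):
            """Trapezoidal AUC of c(t) between t_start and t_end,
            interpolating linearly at the truncation times."""
            grid = np.union1d(t, [t_start, t_end])
            grid = grid[(grid >= t_start) & (grid <= t_end)]
            return np.trapz(np.interp(grid, t, c), grid)

        t = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 4.0, 6.0, 8.0, 12.0])           # h (made up)
        c = np.array([0.0, 45.0, 90.0, 110.0, 105.0, 80.0, 55.0, 35.0, 10.0])  # ng/mL

        auc_early = partial_auc(t, c, 0.0, 1.5)    # AUC(0-1.5): onset of sleep
        auc_late = partial_auc(t, c, 1.5, 12.0)    # AUC(1.5-t): sleep maintenance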

  13. Direct Retrieval of Exterior Orientation Parameters Using A 2-D Projective Transformation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Seedahmed, Gamal H.

    2006-09-01

    Direct solutions are very attractive because they obviate the need for initial approximations associated with non-linear solutions. The Direct Linear Transformation (DLT) establishes itself as a method of choice for direct solutions in photogrammetry and other fields. The use of the DLT with coplanar object space points leads to a rank deficient model. This rank deficient model leaves the DLT defined up to a 2-D projective transformation, which makes the direct retrieval of the exterior orientation parameters (EOPs) a non-trivial task. This paper presents a novel direct algorithm to retrieve the EOPs from the 2-D projective transformation. It is based on a direct relationship between the 2-D projective transformation and the collinearity model using homogeneous coordinates representation. This representation offers a direct matrix correspondence between the 2-D projective transformation parameters and the collinearity model parameters. This correspondence lends itself to a direct matrix factorization to retrieve the EOPs. An important step in the proposed algorithm is a normalization process that provides the actual link between the 2-D projective transformation and the collinearity model. This paper explains the theoretical basis of the proposed algorithm as well as the necessary steps for its practical implementation. In addition, numerical examples are provided to demonstrate its validity.

  14. Constraints on pulsed emission model for repeating FRB 121102

    NASA Astrophysics Data System (ADS)

    Kisaka, Shota; Enoto, Teruaki; Shibata, Shinpei

    2017-12-01

    Recent localization of the repeating fast radio burst (FRB) 121102 revealed the distance of its host galaxy and the luminosities of the bursts. We investigated constraints on the young neutron star (NS) model, under which (a) the FRB intrinsic luminosity is supported by the spin-down energy, and (b) the FRB duration is shorter than the NS rotation period. In the case of a circular cone emission geometry, conditions (a) and (b) determine the NS parameters within very small ranges, compared with those from condition (a) alone as discussed in previous works. Anisotropy of the pulsed emission does not affect the area of the allowed parameter region, by virtue of condition (b). The determined parameters are consistent with those independently limited by the properties of the possible persistent radio counterpart and the circumburst environment, such as surrounding materials. Since the NS in the allowed parameter region is older than the spin-down timescale, the hypothetical GRP (giant radio pulse)-like model predicts a rapid radio flux decay of ≲1 Jy within a few years as the spin-down luminosity decreases. Continuous monitoring will provide constraints on the young NS models. If no flux evolution is seen, we need to consider an alternative model, e.g., the magnetically powered flare.

  15. Model identification using stochastic differential equation grey-box models in diabetes.

    PubMed

    Duun-Henriksen, Anne Katrine; Schmidt, Signe; Røge, Rikke Meldgaard; Møller, Jonas Bech; Nørgaard, Kirsten; Jørgensen, John Bagterp; Madsen, Henrik

    2013-03-01

    The acceptance of virtual preclinical testing of control algorithms is growing and thus also the need for robust and reliable models. Models based on ordinary differential equations (ODEs) can rarely be validated with standard statistical tools. Stochastic differential equations (SDEs) offer the possibility of building models that can be validated statistically and that are capable of predicting not only a realistic trajectory, but also the uncertainty of the prediction. In an SDE, the prediction error is split into two noise terms. This separation ensures that the errors are uncorrelated and provides the possibility to pinpoint model deficiencies. An identifiable model of the glucoregulatory system in a type 1 diabetes mellitus (T1DM) patient is used as the basis for development of a stochastic-differential-equation-based grey-box model (SDE-GB). The parameters are estimated on clinical data from four T1DM patients. The optimal SDE-GB is determined from likelihood-ratio tests. Finally, parameter tracking is used to track the variation in the "time to peak of meal response" parameter. We found that the transformation of the ODE model into an SDE-GB resulted in a significant improvement in the prediction and uncorrelated errors. Tracking of the "peak time of meal absorption" parameter showed that the absorption rate varied according to meal type. This study shows the potential of using SDE-GBs in diabetes modeling. Improved model predictions were obtained due to the separation of the prediction error. SDE-GBs offer a solid framework for using statistical tools for model validation and model development. © 2013 Diabetes Technology Society.
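
    To illustrate the separation of system noise from measurement noise that characterizes an SDE grey-box model, the toy Euler-Maruyama simulation below uses a one-state model that is far simpler than the glucoregulatory model of the study; all names and constants are invented for illustration.

        import numpy as np

        def euler_maruyama(g0, ka, meal, sigma, dt, n_steps, rng):
            """Toy one-state model dG = (ka*meal(t) - G/tau) dt + sigma dW.
            The sigma dW term is the system noise an SDE adds to the ODE."""
            tau = 60.0                         # illustrative time constant, min
            g = np.empty(n_steps)
            g[0] = g0
            for k in range(1, n_steps):
                drift = ka * meal(k * dt) - g[k - 1] / tau
                g[k] = g[k - 1] + drift * dt + sigma * np.sqrt(dt) * rng.normal()
            return g

        rng = np.random.default_rng(2)
        meal = lambda t: np.exp(-t / 40.0)                  # hypothetical meal input
        path = euler_maruyama(5.0, 0.8, meal, 0.05, 1.0, 300, rng)
        obs = path + rng.normal(0.0, 0.1, path.size)        # separate measurement noise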

  16. Concepts, challenges, and successes in modeling thermodynamics of metabolism.

    PubMed

    Cannon, William R

    2014-01-01

    The modeling of the chemical reactions involved in metabolism is a daunting task. Ideally, the modeling of metabolism would use kinetic simulations, but these simulations require knowledge of the thousands of rate constants involved in the reactions. The measurement of rate constants is very labor intensive, and hence rate constants for most enzymatic reactions are not available. Consequently, constraint-based flux modeling has been the method of choice because it does not require the use of the rate constants of the law of mass action. However, this convenience also limits the predictive power of constraint-based approaches in that the law of mass action is used only as a constraint, making it difficult to predict metabolite levels or energy requirements of pathways. An alternative to both of these approaches is to model metabolism using simulations of states rather than simulations of reactions, in which the state is defined as the set of all metabolite counts or concentrations. While kinetic simulations model reactions based on the likelihood of the reaction derived from the law of mass action, states are modeled based on likelihood ratios of mass action. Both approaches provide information on the energy requirements of metabolic reactions and pathways. However, modeling states rather than reactions has the advantage that the parameters needed to model states (chemical potentials) are much easier to determine than the parameters needed to model reactions (rate constants). Herein, we discuss recent results, assumptions, and issues in using simulations of state to model metabolism.

  17. Relative mass distributions of neutron-rich thermally fissile nuclei within a statistical model

    NASA Astrophysics Data System (ADS)

    Kumar, Bharat; Kannan, M. T. Senthil; Balasubramaniam, M.; Agrawal, B. K.; Patra, S. K.

    2017-09-01

    We study the binary mass distribution for the recently predicted thermally fissile neutron-rich uranium and thorium nuclei using a statistical model. The level density parameters needed for the study are evaluated from the excitation energies of the temperature-dependent relativistic mean field formalism. The excitation energy and the level density parameter for a given temperature are employed in the convolution integral method to obtain the probability of the particular fragmentation. As representative cases, we present the results for the binary yields of 250U and 254Th. The relative yields are presented for three different temperatures: T =1 , 2, and 3 MeV.

  18. The statistics of primordial density fluctuations

    NASA Astrophysics Data System (ADS)

    Barrow, John D.; Coles, Peter

    1990-05-01

    The statistical properties of the density fluctuations produced by power-law inflation are investigated. It is found that, even if the fluctuations present in the scalar field driving the inflation are Gaussian, the resulting density perturbations need not be, due to stochastic variations in the Hubble parameter. All the moments of the density fluctuations are calculated, and it is argued that, for realistic parameter choices, the departures from Gaussian statistics are small and would have a negligible effect on the large-scale structure produced in the model. On the other hand, the model predicts a power spectrum with n ≠ 1, and this could be good news for large-scale structure.

  19. Coulomb matrix elements in multi-orbital Hubbard models.

    PubMed

    Bünemann, Jörg; Gebhard, Florian

    2017-04-26

    Coulomb matrix elements are needed in all studies in solid-state theory that are based on Hubbard-type multi-orbital models. Due to symmetries, the matrix elements are not independent. We determine a set of independent Coulomb parameters for a d-shell and an f-shell and all point groups with up to 16 elements (Oh, O, Td, Th, D6h, and D4h). Furthermore, we express all other matrix elements as a function of the independent Coulomb parameters. Apart from the solution of the general point-group problem we investigate in detail the spherical approximation and first-order corrections to the spherical approximation.

  20. Parameter Estimation of Spacecraft Fuel Slosh Model

    NASA Technical Reports Server (NTRS)

    Gangadharan, Sathya; Sudermann, James; Marlowe, Andrea; Njengam Charles

    2004-01-01

    Fuel slosh in the upper stages of a spinning spacecraft during launch has been a long-standing concern for the success of a space mission. Energy loss through the movement of the liquid fuel in the fuel tank affects the gyroscopic stability of the spacecraft and leads to nutation (wobble), which can cause devastating control issues. The rate at which nutation develops, defined by the Nutation Time Constant (NTC), can be tedious to calculate and largely inaccurate if done during the early stages of spacecraft design. Purely analytical means of predicting the influence of onboard liquids have generally failed. A strong need exists to identify and model the conditions of resonance between nutation motion and liquid modes and to understand the general characteristics of the liquid motion that causes the problem in spinning spacecraft. A 3-D computerized model of the fuel slosh that accounts for any resonant modes found in experimental testing will allow for increased accuracy in the overall modeling process. Development of a more accurate model of the fuel slosh currently lies in a more generalized 3-D computerized model incorporating masses, springs and dampers. Parameters describing the model include the inertia tensor of the fuel, spring constants, and damper coefficients. Refining and understanding the effects of these parameters allow for a more accurate simulation of fuel slosh. The current research focuses on developing models of different complexity and estimating the model parameters that will ultimately provide a more realistic prediction of the Nutation Time Constant obtained through simulation.
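
    As a toy version of the estimation problem, the sketch below fits the spring constant and damper coefficient of a single mass-spring-damper analogue to a simulated free-decay record with scipy.optimize.curve_fit; the real problem is multi-mass and 3-D, so this only shows the mechanics of the fit (all values hypothetical).

        import numpy as np
        from scipy.optimize import curve_fit

        def damped_response(t, k, c, m=1.0, x0=0.01):
            """Free decay of an underdamped 1-DOF mass-spring-damper."""
            wn = np.sqrt(k / m)                   # natural frequency
            zeta = c / (2.0 * np.sqrt(k * m))     # damping ratio (assumed < 1)
            wd = wn * np.sqrt(1.0 - zeta**2)      # damped frequency
            return x0 * np.exp(-zeta * wn * t) * np.cos(wd * t)

        t = np.linspace(0.0, 10.0, 500)
        noise = np.random.default_rng(3).normal(0.0, 5e-4, t.size)
        measured = damped_response(t, 25.0, 0.8) + noise      # synthetic "test data"

        (k_hat, c_hat), _ = curve_fit(damped_response, t, measured, p0=[10.0, 0.5])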

  1. Model-data integration to improve the LPJmL dynamic global vegetation model

    NASA Astrophysics Data System (ADS)

    Forkel, Matthias; Thonicke, Kirsten; Schaphoff, Sibyll; Thurner, Martin; von Bloh, Werner; Dorigo, Wouter; Carvalhais, Nuno

    2017-04-01

    Dynamic global vegetation models show large uncertainties regarding the development of the land carbon balance under future climate change conditions. This uncertainty is partly caused by differences in how vegetation carbon turnover is represented in global vegetation models. Model-data integration approaches might help to systematically assess and improve model performance and thus potentially reduce the uncertainty in terrestrial vegetation responses under future climate change. Here we present several applications of model-data integration with the LPJmL (Lund-Potsdam-Jena managed Lands) dynamic global vegetation model to systematically improve the representation of processes or to estimate model parameters. In a first application, we used global satellite-derived datasets of FAPAR (fraction of absorbed photosynthetic activity), albedo and gross primary production to estimate phenology- and productivity-related model parameters using a genetic optimization algorithm. Thereby we identified major limitations of the phenology module and implemented an alternative empirical phenology model. The new phenology module and optimized model parameters resulted in a better performance of LPJmL in representing global spatial patterns of biomass, tree cover, and the temporal dynamics of atmospheric CO2. In a second application, we therefore additionally used global datasets of biomass and land cover to estimate model parameters that control vegetation establishment and mortality. The results demonstrate the ability to improve simulations of vegetation dynamics but also highlight the need to improve the representation of mortality processes in dynamic global vegetation models. In a third application, we used multiple site-level observations of ecosystem carbon and water exchange, biomass and soil organic carbon to jointly estimate various model parameters that control ecosystem dynamics. This exercise demonstrates the strong influence of individual data streams on the simulated ecosystem dynamics, which consequently changed the development of ecosystem carbon stocks and fluxes under future climate and CO2 change. In summary, our results demonstrate the challenges and the potential of using model-data integration approaches to improve a dynamic global vegetation model.

  2. Numerical modeling techniques for flood analysis

    NASA Astrophysics Data System (ADS)

    Anees, Mohd Talha; Abdullah, K.; Nawawi, M. N. M.; Ab Rahman, Nik Norulaini Nik; Piah, Abd. Rahni Mt.; Zakaria, Nor Azazi; Syakir, M. I.; Mohd. Omar, A. K.

    2016-12-01

    Topographic and climatic changes are the main causes of abrupt flooding in tropical areas, and there is a need to determine the exact causes and effects of these changes. Numerical modeling techniques play a vital role in such studies because they use hydrological parameters that are strongly linked with topographic changes. In this review, some of the widely used models utilizing hydrological and river modeling parameters, and the estimation of these parameters in data-sparse regions, are discussed. Shortcomings of 1D and 2D numerical models and the possible improvements over these models through 3D modeling are also discussed. It is found that HEC-RAS and FLO 2D are the most economical and accurate models for river and floodplain flood analysis, respectively. Limitations of FLO 2D in floodplain modeling, mainly floodplain elevation differences and vertical roughness within grids, were identified; these can be improved through a 3D model. A 3D model was therefore found to be more suitable than 1D and 2D models in terms of vertical accuracy in grid cells. It was also found that 3D models have recently been developed for open channel flows but not for floodplains. Hence, it is suggested that a 3D floodplain model be developed by considering all the hydrological and high-resolution topographic parameter models discussed in this review, to enhance the determination of the causes and effects of flooding.

  3. Estimating rainfall time series and model parameter distributions using model data reduction and inversion techniques

    NASA Astrophysics Data System (ADS)

    Wright, Ashley J.; Walker, Jeffrey P.; Pauwels, Valentijn R. N.

    2017-08-01

    Floods are devastating natural hazards. To provide accurate, precise, and timely flood forecasts, there is a need to understand the uncertainties associated within an entire rainfall time series, even when rainfall was not observed. The estimation of an entire rainfall time series and model parameter distributions from streamflow observations in complex dynamic catchments adds skill to current areal rainfall estimation methods, allows for the uncertainty of entire rainfall input time series to be considered when estimating model parameters, and provides the ability to improve rainfall estimates from poorly gauged catchments. Current methods to estimate entire rainfall time series from streamflow records are unable to adequately invert complex nonlinear hydrologic systems. This study aims to explore the use of wavelets in the estimation of rainfall time series from streamflow records. Using the Discrete Wavelet Transform (DWT) to reduce rainfall dimensionality for the catchment of Warwick, Queensland, Australia, it is shown that model parameter distributions and an entire rainfall time series can be estimated. Including rainfall in the estimation process improves streamflow simulations by a factor of up to 1.78. This is achieved while estimating an entire rainfall time series, inclusive of days when none was observed. It is shown that the choice of wavelet can have a considerable impact on the robustness of the inversion. Combining the use of a likelihood function that considers rainfall and streamflow errors with the use of the DWT as a model data reduction technique allows the joint inference of hydrologic model parameters along with rainfall.

  4. Accounting for parameter uncertainty in the definition of parametric distributions used to describe individual patient variation in health economic models.

    PubMed

    Degeling, Koen; IJzerman, Maarten J; Koopman, Miriam; Koffijberg, Hendrik

    2017-12-15

    Parametric distributions based on individual patient data can be used to represent both stochastic and parameter uncertainty. Although general guidance is available on how parameter uncertainty should be accounted for in probabilistic sensitivity analysis, there is no comprehensive guidance on reflecting parameter uncertainty in the (correlated) parameters of distributions used to represent stochastic uncertainty in patient-level models. This study aims to provide this guidance by proposing appropriate methods and illustrating the impact of this uncertainty on modeling outcomes. Two approaches, (1) using non-parametric bootstrapping and (2) using multivariate Normal distributions, were applied in a simulation and a case study. The approaches were compared based on point estimates and distributions of time-to-event and health economic outcomes. To assess the impact of sample size on the uncertainty in these outcomes, sample size was varied in the simulation study and subgroup analyses were performed for the case study. Accounting for parameter uncertainty in distributions that reflect stochastic uncertainty substantially increased the uncertainty surrounding health economic outcomes, illustrated by larger confidence ellipses surrounding the cost-effectiveness point estimates and different cost-effectiveness acceptability curves. Although both approaches performed similarly for larger sample sizes (i.e., n = 500), the second approach was more sensitive to extreme values for small sample sizes (i.e., n = 25), yielding infeasible modeling outcomes. Modelers should be aware that parameter uncertainty in distributions used to describe stochastic uncertainty needs to be reflected in probabilistic sensitivity analysis, as it could substantially impact the total amount of uncertainty surrounding health economic outcomes. If feasible, the bootstrap approach is recommended to account for this uncertainty.
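
    A minimal sketch of the first approach, assuming Weibull-distributed time-to-event data: each bootstrap resample is refitted, so the resulting collection of (shape, scale) pairs carries the parameter uncertainty, including the correlation between the two parameters. The data here are simulated for illustration.

        import numpy as np
        from scipy.stats import weibull_min

        rng = np.random.default_rng(4)
        times = weibull_min.rvs(1.4, scale=20.0, size=25, random_state=rng)  # toy IPD

        boot_params = []
        for _ in range(1000):
            resample = rng.choice(times, size=times.size, replace=True)
            shape, _, scale = weibull_min.fit(resample, floc=0)   # refit each draw
            boot_params.append((shape, scale))
        boot_params = np.array(boot_params)

        # Each probabilistic sensitivity analysis iteration then samples patient-level
        # times from one bootstrapped (shape, scale) pair, so both parameter and
        # stochastic uncertainty are propagated.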

  5. Modeling uncertainty: quicksand for water temperature modeling

    USGS Publications Warehouse

    Bartholow, John M.

    2003-01-01

    Uncertainty has been a hot topic relative to science generally, and modeling specifically. Modeling uncertainty comes in various forms: measured data, limited model domain, model parameter estimation, model structure, sensitivity to inputs, modelers themselves, and users of the results. This paper will address important components of uncertainty in modeling water temperatures, and discuss several areas that need attention as the modeling community grapples with how to incorporate uncertainty into modeling without getting stuck in the quicksand that prevents constructive contributions to policy making. The material, and in particular the references, are meant to supplement the presentation given at this conference.

  6. Modification of the SHABERTH bearing code to incorporate RP-1 and a discussion of the traction model

    NASA Technical Reports Server (NTRS)

    Woods, Claudia M.

    1990-01-01

    Recently developed traction data for Rocket Propellant 1 (RP-1), a hydrocarbon fuel of the kerosene family, was used to develop the parameters needed by the bearing code SHABERTH in order to include RP-1 as a lubricant choice. The procedure for inputting data for a new lubricant choice is reviewed, and the theoretical fluid traction model is discussed. Comparisons are made between experimental traction data and those predicted by SHABERTH for RP-1. All data needed to modify SHABERTH for use with RP-1 as a lubricant are specified.

  7. Sensitivity and Uncertainty Analysis for Streamflow Prediction Using Different Objective Functions and Optimization Algorithms: San Joaquin California

    NASA Astrophysics Data System (ADS)

    Paul, M.; Negahban-Azar, M.

    2017-12-01

    Hydrologic models usually need to be calibrated against observed streamflow at the outlet of a particular drainage area through careful model calibration. However, a large number of parameters must be fitted in the model because field measurements of them are unavailable. It is therefore difficult to calibrate the model for a large number of potentially uncertain model parameters. This becomes even more challenging if the model is for a large watershed with multiple land uses and various geophysical characteristics. Sensitivity analysis (SA) can be used as a tool to identify the most sensitive model parameters, which affect the calibrated model performance. There are many different calibration and uncertainty analysis algorithms which can be run with different objective functions. By incorporating sensitive parameters in streamflow simulation, the effect of a suitable algorithm in improving model performance can be demonstrated with Soil and Water Assessment Tool (SWAT) modeling. In this study, SWAT was applied to the San Joaquin Watershed in California, covering 19704 km2, to calibrate the daily streamflow. Recently, severe water stress has been escalating in this watershed due to intensified climate variability, prolonged drought, and depleting groundwater for agricultural irrigation. It is therefore important to perform a proper uncertainty analysis, given the uncertainties inherent in hydrologic modeling, to predict the spatial and temporal variation of the hydrologic processes and to evaluate the impacts of different hydrologic variables. The purpose of this study was to evaluate the sensitivity and uncertainty of the calibrated parameters for predicting streamflow. To evaluate the sensitivity of the calibrated parameters, three different optimization algorithms (Sequential Uncertainty Fitting, SUFI-2; Generalized Likelihood Uncertainty Estimation, GLUE; and Parameter Solution, ParaSol) were used with four different objective functions (coefficient of determination, r2; Nash-Sutcliffe efficiency, NSE; percent bias, PBIAS; and Kling-Gupta efficiency, KGE). The preliminary results showed that using the SUFI-2 algorithm with the NSE and KGE objective functions improved the calibration significantly (e.g., R2 and NSE were found to be 0.52 and 0.47, respectively, for the daily streamflow calibration).
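
    For reference, the two objective functions that performed best are simple to compute from observed and simulated flows; a plain Python sketch of NSE and the 2009 formulation of KGE:

        import numpy as np

        def nse(obs, sim):
            """Nash-Sutcliffe efficiency: 1 - SSE / variance of the observations."""
            obs, sim = np.asarray(obs), np.asarray(sim)
            return 1.0 - np.sum((obs - sim)**2) / np.sum((obs - obs.mean())**2)

        def kge(obs, sim):
            """Kling-Gupta efficiency from correlation r, variability ratio alpha
            and bias ratio beta."""
            obs, sim = np.asarray(obs), np.asarray(sim)
            r = np.corrcoef(obs, sim)[0, 1]
            alpha = sim.std() / obs.std()
            beta = sim.mean() / obs.mean()
            return 1.0 - np.sqrt((r - 1.0)**2 + (alpha - 1.0)**2 + (beta - 1.0)**2)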

  8. Desorption kinetics of hydrophobic organic chemicals from sediment to water: a review of data and models.

    PubMed

    Birdwell, Justin; Cook, Robert L; Thibodeaux, Louis J

    2007-03-01

    Resuspension of contaminated sediment can lead to the release of toxic compounds to surface waters where they are more bioavailable and mobile. Because the timeframe of particle resettling during such events is shorter than that needed to reach equilibrium, a kinetic approach is required for modeling the release process. Due to the current inability of common theoretical approaches to predict site-specific release rates, empirical algorithms incorporating the phenomenological assumption of biphasic, or fast and slow, release dominate the descriptions of nonpolar organic chemical release in the literature. Two first-order rate constants and one fraction are sufficient to characterize practically all of the data sets studied. These rate constants were compared to theoretical model parameters and functionalities, including chemical properties of the contaminants and physical properties of the sorbents, to determine if the trends incorporated into the hindered diffusion model are consistent with the parameters used in curve fitting. The results did not correspond to the parameter dependence of the hindered diffusion model. No trend in desorption rate constants, for either fast or slow release, was observed to be dependent on K_OC or aqueous solubility for six and seven orders of magnitude, respectively. The same was observed for aqueous diffusivity and sediment fraction organic carbon. The distribution of kinetic rate constant values was approximately log-normal, ranging from 0.1 to 50 d^-1 for the fast release (average approximately 5 d^-1) and 0.0001 to 0.1 d^-1 for the slow release (average approximately 0.03 d^-1). The implications of these findings with regard to laboratory studies, theoretical desorption process mechanisms, and water quality modeling needs are presented and discussed.
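
    The biphasic empirical model referred to above, two first-order pools sharing one fraction, can be fitted directly to release data; a sketch with invented observations, using scipy.optimize.curve_fit:

        import numpy as np
        from scipy.optimize import curve_fit

        def biphasic_release(t, f_fast, k_fast, k_slow):
            """Fraction remaining on the sediment: fast and slow first-order pools."""
            return f_fast * np.exp(-k_fast * t) + (1.0 - f_fast) * np.exp(-k_slow * t)

        t_days = np.array([0.0, 0.25, 0.5, 1.0, 2.0, 5.0, 10.0, 30.0, 60.0, 120.0])
        frac = np.array([1.0, 0.55, 0.42, 0.35, 0.32, 0.30, 0.29, 0.27, 0.25, 0.22])

        p0 = [0.6, 5.0, 0.03]   # starting guesses near the reported averages
        (f_hat, kf_hat, ks_hat), _ = curve_fit(
            biphasic_release, t_days, frac, p0=p0,
            bounds=([0.0, 0.0, 0.0], [1.0, 50.0, 0.1]))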

  9. SBSI: an extensible distributed software infrastructure for parameter estimation in systems biology

    PubMed Central

    Adams, Richard; Clark, Allan; Yamaguchi, Azusa; Hanlon, Neil; Tsorman, Nikos; Ali, Shakir; Lebedeva, Galina; Goltsov, Alexey; Sorokin, Anatoly; Akman, Ozgur E.; Troein, Carl; Millar, Andrew J.; Goryanin, Igor; Gilmore, Stephen

    2013-01-01

    Summary: Complex computational experiments in Systems Biology, such as fitting model parameters to experimental data, can be challenging to perform. Not only do they frequently require a high level of computational power, but the software needed to run the experiment needs to be usable by scientists with varying levels of computational expertise, and modellers need to be able to obtain up-to-date experimental data resources easily. We have developed a software suite, the Systems Biology Software Infrastructure (SBSI), to facilitate the parameter-fitting process. SBSI is a modular software suite composed of three major components: SBSINumerics, a high-performance library containing parallelized algorithms for performing parameter fitting; SBSIDispatcher, a middleware application to track experiments and submit jobs to back-end servers; and SBSIVisual, an extensible client application used to configure optimization experiments and view results. Furthermore, we have created a plugin infrastructure to enable project-specific modules to be easily installed. Plugin developers can take advantage of the existing user-interface and application framework to customize SBSI for their own uses, facilitated by SBSI’s use of standard data formats. Availability and implementation: All SBSI binaries and source-code are freely available from http://sourceforge.net/projects/sbsi under an Apache 2 open-source license. The server-side SBSINumerics runs on any Unix-based operating system; both SBSIVisual and SBSIDispatcher are written in Java and are platform independent, allowing use on Windows, Linux and Mac OS X. The SBSI project website at http://www.sbsi.ed.ac.uk provides documentation and tutorials. Contact: stg@inf.ed.ac.uk Supplementary information: Supplementary data are available at Bioinformatics online. PMID:23329415

  10. A Review of United States Air Force and Department of Defense Aerospace Propulsion Needs

    DTIC Science & Technology

    2006-01-01

    [Acronym glossary fragment] … evolved expendable launch vehicle; EHF, extremely high frequency; EMA, electromechanical actuator; EMDP, engine model derivative program; EMTVA, … A key aspect of the model was which of the two methods was used—parameters of the system or propulsion variables produced in the design … models for turbopump analysis and design. In addition, the skills required to design a high-performance turbopump are very specialized and must be …

  11. Computational Difficulties in the Identification and Optimization of Control Systems.

    DTIC Science & Technology

    1980-01-01

    As more realistic models for resource management are developed, the need for efficient computational techniques for parameter … optimization (optimal control) in "state" models … This research was supported in part by the National Science Foundation under grant NSF-MCS 79-05774.

  12. Joint Model and Parameter Dimension Reduction for Bayesian Inversion Applied to an Ice Sheet Flow Problem

    NASA Astrophysics Data System (ADS)

    Ghattas, O.; Petra, N.; Cui, T.; Marzouk, Y.; Benjamin, P.; Willcox, K.

    2016-12-01

    Model-based projections of the dynamics of the polar ice sheets play a central role in anticipating future sea level rise. However, a number of mathematical and computational challenges place significant barriers on improving predictability of these models. One such challenge is caused by the unknown model parameters (e.g., in the basal boundary conditions) that must be inferred from heterogeneous observational data, leading to an ill-posed inverse problem and the need to quantify uncertainties in its solution. In this talk we discuss the problem of estimating the uncertainty in the solution of (large-scale) ice sheet inverse problems within the framework of Bayesian inference. Computing the general solution of the inverse problem--i.e., the posterior probability density--is intractable with current methods on today's computers, due to the expense of solving the forward model (3D full Stokes flow with nonlinear rheology) and the high dimensionality of the uncertain parameters (which are discretizations of the basal sliding coefficient field). To overcome these twin computational challenges, it is essential to exploit problem structure (e.g., sensitivity of the data to parameters, the smoothing property of the forward model, and correlations in the prior). To this end, we present a data-informed approach that identifies low-dimensional structure in both parameter space and the forward model state space. This approach exploits the fact that the observations inform only a low-dimensional parameter space and allows us to construct a parameter-reduced posterior. Sampling this parameter-reduced posterior still requires multiple evaluations of the forward problem, therefore we also aim to identify a low dimensional state space to reduce the computational cost. To this end, we apply a proper orthogonal decomposition (POD) approach to approximate the state using a low-dimensional manifold constructed using ``snapshots'' from the parameter reduced posterior, and the discrete empirical interpolation method (DEIM) to approximate the nonlinearity in the forward problem. We show that using only a limited number of forward solves, the resulting subspaces lead to an efficient method to explore the high-dimensional posterior.
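
    The state-space reduction rests on a standard POD construction: collect snapshots of forward solves, take an SVD, and keep the modes capturing most of the snapshot energy. A generic Python sketch, with a random matrix standing in for stored Stokes solves:

        import numpy as np

        def pod_basis(snapshots, energy=0.999):
            """POD basis from a snapshot matrix (one state per column); keep
            enough left singular vectors to capture the requested energy."""
            U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
            cum = np.cumsum(s**2) / np.sum(s**2)
            r = int(np.searchsorted(cum, energy)) + 1
            return U[:, :r]

        snapshots = np.random.default_rng(5).random((10_000, 50))  # placeholder states
        U_r = pod_basis(snapshots)
        q = U_r.T @ snapshots[:, 0]     # reduced coordinates: state ≈ U_r @ q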

  13. Pharmacokinetic-Pharmacodynamic Modeling in Pediatric Drug Development, and the Importance of Standardized Scaling of Clearance.

    PubMed

    Germovsek, Eva; Barker, Charlotte I S; Sharland, Mike; Standing, Joseph F

    2018-04-19

    Pharmacokinetic/pharmacodynamic (PKPD) modeling is important in the design and conduct of clinical pharmacology research in children. During drug development, PKPD modeling and simulation should underpin rational trial design and facilitate extrapolation to investigate efficacy and safety. The application of PKPD modeling to optimize dosing recommendations and therapeutic drug monitoring is also increasing, and PKPD model-based dose individualization will become a core feature of personalized medicine. Following extensive progress on pediatric PK modeling, a greater emphasis now needs to be placed on PD modeling to understand age-related changes in drug effects. This paper discusses the principles of PKPD modeling in the context of pediatric drug development, summarizing how important PK parameters, such as clearance (CL), are scaled with size and age, and highlights a standardized method for CL scaling in children. One standard scaling method would facilitate comparison of PK parameters across multiple studies, thus increasing the utility of existing PK models and facilitating optimal design of new studies.
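
    A minimal sketch of the standardized scaling approach the abstract advocates: clearance is scaled allometrically to a 70-kg reference adult with a fixed exponent of 0.75 and multiplied by a sigmoidal maturation function of postmenstrual age. The maturation constants below (TM50 and the Hill coefficient) are illustrative placeholders, not drug-specific estimates.

        # Allometric size scaling plus sigmoidal maturation (illustrative constants).
        def scaled_clearance(cl_adult_70kg, weight_kg, pma_weeks,
                             tm50_weeks=47.7, hill=3.4):
            size = (weight_kg / 70.0) ** 0.75                  # allometric size term
            maturation = pma_weeks**hill / (tm50_weeks**hill + pma_weeks**hill)
            return cl_adult_70kg * size * maturation

        # e.g. a term neonate (3.5 kg, PMA 40 weeks), drug with adult CL of 10 L/h
        print(scaled_clearance(10.0, 3.5, 40.0))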

  14. PyLDTk: Python toolkit for calculating stellar limb darkening profiles and model-specific coefficients for arbitrary filters

    NASA Astrophysics Data System (ADS)

    Parviainen, Hannu

    2015-10-01

    PyLDTk automates the calculation of custom stellar limb darkening (LD) profiles and model-specific limb darkening coefficients (LDC) using the library of PHOENIX-generated specific intensity spectra by Husser et al. (2013). It facilitates exoplanet transit light curve modeling, especially transmission spectroscopy where the modeling is carried out for custom narrow passbands. PyLDTk constructs model-specific priors on the limb darkening coefficients prior to the transit light curve modeling. It can also be directly integrated into the log posterior computation of any pre-existing transit modeling code with minimal modifications to constrain the LD model parameter space directly by the LD profile, allowing for the marginalization over the whole parameter space that can explain the profile without the need to approximate this constraint by a prior distribution. This is useful when using a high-order limb darkening model where the coefficients are often correlated, and the priors estimated from the tabulated values usually fail to include these correlations.
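
    A usage sketch, assuming the LDPSetCreator/BoxcarFilter interface described in PyLDTk's documentation; the stellar parameters and passband are invented for illustration, and the toolkit is expected to download the required PHOENIX spectra on first use.

        # Sketch of deriving quadratic-law LD coefficients with PyLDTk
        # (interface per the package documentation; values illustrative).
        from ldtk import LDPSetCreator, BoxcarFilter

        filters = [BoxcarFilter('g', 450, 550)]        # 450-550 nm passband
        sc = LDPSetCreator(teff=(5500, 100),           # (value, uncertainty)
                           logg=(4.5, 0.10),
                           z=(0.0, 0.05),
                           filters=filters)

        profiles = sc.create_profiles(nsamples=500)    # LD profiles from PHOENIX
        qc, qe = profiles.coeffs_qd(do_mc=False)       # quadratic-law coefficients
        print(qc, qe)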

  15. A possible formation channel for blue hook stars in globular cluster - II. Effects of metallicity, mass ratio, tidal enhancement efficiency and helium abundance

    NASA Astrophysics Data System (ADS)

    Lei, Zhenxin; Zhao, Gang; Zeng, Aihua; Shen, Lihua; Lan, Zhongjian; Jiang, Dengkai; Han, Zhanwen

    2016-12-01

    Employing a tidally enhanced stellar wind, we studied the effects of metallicity, the mass ratio of primary to secondary, tidal enhancement efficiency and helium abundance on the formation of blue hook (BHk) stars in binaries in globular clusters (GCs). A total of 28 sets of binary models combining different input parameters were studied. For each set of binary models, we present the range of initial orbital periods needed to produce BHk stars in binaries. All the binary models could produce BHk stars within different ranges of initial orbital periods. We also compared our results with observations in the Teff-log g diagram of the GCs NGC 2808 and ω Cen. Most of the BHk stars in these two GCs lie well within the region predicted by our theoretical models, especially when C/N-enhanced model atmospheres are considered. We found that the mass ratio of primary to secondary and the tidal enhancement efficiency have little effect on the formation of BHk stars in binaries, while metallicity and, especially, helium abundance play important roles. Specifically, as the helium abundance in the binary models increases, the range of initial orbital periods needed to produce BHk stars becomes markedly wider, regardless of the other input parameters adopted. Our results are discussed in relation to recent observations and other theoretical models.

  16. Applications of bioenergetics models to fish ecology and management: where do we go from here?

    USGS Publications Warehouse

    Hansen, Michael J.; Boisclair, Daniel; Brandt, Stephen B.; Hewett, Steven W.; Kitchell, James F.; Lucas, Martyn C.; Ney, John J.

    1993-01-01

    Papers and panel discussions given during a 1992 symposium on bioenergetics models are summarized. Bioenergetics models have been applied to a variety of research and management questions related to fish stocks, populations, food webs, and ecosystems. Applications include estimates of the intensity and dynamics of predator-prey interactions, nutrient cycling within aquatic food webs of varying trophic structure, and food requirements of single animals, whole populations, and communities of fishes. As tools in food web and ecosystem applications, bioenergetics models have been used to compare forage consumption by salmonid predators across the Laurentian Great Lakes for single populations and whole communities, and to estimate the growth potential of pelagic predators in Chesapeake Bay and Lake Ontario. Some critics say that bioenergetics models lack sufficient detail to produce reliable results in such field applications, whereas others say that the models are too complex to be useful tools for fishery managers. Nevertheless, bioenergetics models have achieved notable predictive successes. Improved estimates are needed for model parameters such as metabolic costs of activity, and more complete studies are needed of the bioenergetics of larval and juvenile fishes. Future research on bioenergetics should include laboratory and field measurements of key model parameters such as weight-dependent maximum consumption, respiration and activity, and thermal habitats actually occupied by fish. Future applications of bioenergetics models to fish populations also depend on accurate estimates of population sizes and survival rates.

  17. Characterization of human passive muscles for impact loads using genetic algorithm and inverse finite element methods.

    PubMed

    Chawla, A; Mukherjee, S; Karthikeyan, B

    2009-02-01

    The objective of this study is to identify the dynamic material properties of human passive muscle tissues at the strain rates relevant to automobile crashes. A novel methodology involving a genetic algorithm (GA) and the finite element method is implemented to estimate the material parameters by inverse mapping of the impact test data. Isolated unconfined impact tests at average strain rates ranging from 136 s(-1) to 262 s(-1) are performed on muscle tissues. Passive muscle tissues are modelled as an isotropic, linear viscoelastic material using the three-element Zener model available in the PAMCRASH(TM) explicit finite element software. In the GA-based identification process, fitness values are calculated by comparing the estimated finite element forces with the measured experimental forces. Linear viscoelastic material parameters (bulk modulus, short-term shear modulus and long-term shear modulus) are thus identified at strain rates of 136 s(-1), 183 s(-1) and 262 s(-1) for modelling muscles. The optimal parameters extracted in this study are comparable with parameters reported in the literature. Bulk modulus and short-term shear modulus are found to be more influential than long-term shear modulus in predicting the stress-strain response at the considered strain rates. Variations within the sets of parameters identified at different strain rates indicate the need for a new or improved material model capable of capturing the strain-rate dependency of passive muscle response with a single set of material parameters over a wide range of strain rates.
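
    The inverse identification loop can be sketched compactly if an analytic three-element Zener response stands in for the PAM-CRASH forward solve and a generic evolutionary optimizer (here scipy's differential evolution, not the authors' GA) searches the parameter space. All numbers below are illustrative, not the identified muscle properties.

        # Toy inverse identification: fit Zener (standard linear solid) parameters
        # to a synthetic "measured" force history at constant strain rate.
        import numpy as np
        from scipy.optimize import differential_evolution

        t = np.linspace(0.0, 0.05, 500)          # time [s]
        strain_rate = 150.0                       # roughly the tested regime [1/s]

        def zener_stress(params, t):
            g0, g_inf, tau = params               # short/long-term moduli, relax. time
            # hereditary integral for constant strain rate:
            # sigma(t) = eps_dot * int_0^t G(u) du, G(u) = g_inf + (g0-g_inf) e^{-u/tau}
            return strain_rate * (g_inf * t + (g0 - g_inf) * tau * (1.0 - np.exp(-t / tau)))

        measured = zener_stress((1.2e5, 4.0e4, 8e-3), t)
        measured += 0.02 * measured.max() * np.random.default_rng(2).standard_normal(t.size)

        def fitness(params):                      # GA-style fitness: response mismatch
            return np.sum((zener_stress(params, t) - measured) ** 2)

        bounds = [(1e4, 1e6), (1e3, 1e5), (1e-3, 5e-2)]
        result = differential_evolution(fitness, bounds, seed=0)
        print("identified (G0, Ginf, tau):", result.x)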

  18. Influence of model parameters on synthesized high-frequency strong-motion waveforms

    NASA Astrophysics Data System (ADS)

    Zadonina, Ekaterina; Caldeira, Bento; Bezzeghoud, Mourad; Borges, José F.

    2010-05-01

    Waveform modeling is an important and helpful instrument of modern seismology that may provide valuable information. However, synthesizing seismograms requires defining many parameters, each of which affects the final result differently. Such parameters include the design of the grid, the structure model, the source time functions, the source mechanism and the rupture velocity. Variations in these parameters may produce significantly different seismograms. We synthesize seismograms from a hypothetical earthquake and numerically estimate the influence of some of the parameters used. Firstly, we present the results for high-frequency near-fault waveforms obtained from a defined model by varying the tested parameters. Secondly, we present the results of a quantitative comparison of the contributions of certain parameters to the synthetic waveforms using misfit criteria. For the synthesis of waveforms we used the 2D/3D elastic finite-difference wave propagation code E3D [1], based on the elastodynamic formulation of the wave equation on a staggered grid. This code gave us the opportunity to perform all needed manipulations using a computer cluster. To assess the obtained results, we use misfit criteria [2] in which seismograms are compared in time-frequency and phase by applying a continuous wavelet transform to the seismic signal. [1] - Larsen, S. and C.A. Schultz (1995). ELAS3D: 2D/3D elastic finite-difference wave propagation code, Technical Report No. UCRL-MA-121792, 19 pp. [2] - Kristekova, M., Kristek, J., Moczo, P., Day, S.M., 2006. Misfit criteria for quantitative comparison of seismograms. Bull. Seism. Soc. Am. 96(5), 1836-1850.
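
    The envelope part of such time-frequency misfit criteria can be sketched with a plain Morlet continuous wavelet transform; the snippet below is a reduced, numpy-only stand-in for the full criteria of Kristekova et al. and uses synthetic signals.

        # Simplified time-frequency envelope misfit via a Morlet CWT (illustrative).
        import numpy as np

        def morlet_cwt(sig, dt, freqs, w0=6.0):
            n = sig.size
            sig_f = np.fft.fft(sig)
            omega = 2.0 * np.pi * np.fft.fftfreq(n, dt)
            out = np.empty((freqs.size, n), dtype=complex)
            for i, f in enumerate(freqs):
                s = w0 / (2.0 * np.pi * f)                 # scale for this frequency
                psi_f = np.exp(-0.5 * (s * omega - w0) ** 2) * (omega > 0)
                out[i] = np.fft.ifft(sig_f * np.conj(psi_f))
            return out

        def envelope_misfit(synthetic, reference, dt, freqs):
            w_s = np.abs(morlet_cwt(synthetic, dt, freqs))  # envelopes on the TF plane
            w_r = np.abs(morlet_cwt(reference, dt, freqs))
            return np.sqrt(np.sum((w_s - w_r) ** 2) / np.sum(w_r ** 2))

        dt = 0.01
        t = np.arange(0.0, 20.0, dt)
        ref = np.sin(2 * np.pi * 1.0 * t) * np.exp(-0.2 * t)   # reference seismogram
        syn = 0.9 * np.sin(2 * np.pi * 1.1 * t) * np.exp(-0.2 * t)
        print(envelope_misfit(syn, ref, dt, np.linspace(0.2, 5.0, 40)))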

  19. Nonstationarities in Catchment Response According to Basin and Rainfall Characteristics: Application to Korean Watershed

    NASA Astrophysics Data System (ADS)

    Kwon, Hyun-Han; Kim, Jin-Guk; Jung, Il-Won

    2015-04-01

    Applications of rainfall-runoff models to simulate rainfall-runoff processes have been successful in gauged watersheds. However, some issues remain that need further discussion. In particular, the quantitative representation of nonstationarity in basin response (e.g. concentration time, storage coefficient and roughness), along with its treatment in ungauged watersheds, needs to be studied. In this regard, this study aims to investigate nonstationarity in basin response so as to provide information useful for simulating runoff processes in ungauged watersheds. For this purpose, the HEC-1 rainfall-runoff model was utilized. In addition, this study combined the HEC-1 model with a Bayesian statistical model to estimate the uncertainty of the parameters, an approach called Bayesian HEC-1 (BHEC-1). The proposed rainfall-runoff model is applied to various catchments under various rainfall patterns to understand nonstationarities in catchment response. Further discussion of the nonstationarity in catchment response and a possible regionalization of the parameters for ungauged watersheds is provided. KEYWORDS: Nonstationarity, Catchment response, Uncertainty, Bayesian. Acknowledgement: This research was supported by a Grant (13SCIPA01) from the Smart Civil Infrastructure Research Program funded by the Ministry of Land, Infrastructure and Transport (MOLIT) of the Korean government and the Korea Agency for Infrastructure Technology Advancement (KAIA).

  20. Modeling High-Impact Weather and Climate: Lessons From a Tropical Cyclone Perspective

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Done, James; Holland, Greg; Bruyere, Cindy

    2013-10-19

    Although the societal impact of a weather event increases with the rarity of the event, our current ability to assess extreme events and their impacts is limited not only by rarity but also by current model fidelity and a lack of understanding of the underlying physical processes. This challenge is driving fresh approaches to assess high-impact weather and climate. Recent lessons learned in modeling high-impact weather and climate are presented using the case of tropical cyclones as an illustrative example. Through examples using the Nested Regional Climate Model to dynamically downscale large-scale climate data, the need to treat bias in the driving data is illustrated. Domain size, location, and resolution are also shown to be critical and should be guided by the need to include relevant regional climate physical processes, resolve key impact parameters, and accurately simulate the response to changes in external forcing. The notion of sufficient model resolution is introduced together with the added value of combining dynamical and statistical assessments to fill out the parent distribution of high-impact parameters. Finally, through the example of a tropical cyclone damage index, direct impact assessments are presented as powerful tools that distill complex datasets into concise statements on likely impact, and as highly effective communication devices.

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anderson, Katherine H.; Cutler, Dylan S.; Olis, Daniel R.

    REopt is a techno-economic decision support model used to optimize energy systems for buildings, campuses, communities, and microgrids. The primary application of the model is for optimizing the integration and operation of behind-the-meter energy assets. This report provides an overview of the model, including its capabilities and typical applications; inputs and outputs; economic calculations; technology descriptions; and model parameters, variables, and equations. The model is highly flexible, and is continually evolving to meet the needs of each analysis. Therefore, this report is not an exhaustive description of all capabilities, but rather a summary of the core components of the model.

  2. Testing the Performance and Accuracy of the RELXILL Model for the Relativistic X-Ray Reflection from Accretion Disks

    NASA Astrophysics Data System (ADS)

    Choudhury, Kishalay; García, Javier A.; Steiner, James F.; Bambi, Cosimo

    2017-12-01

    The reflection spectroscopic model RELXILL is commonly implemented in studying relativistic X-ray reflection from accretion disks around black holes. We present a systematic study of the model’s capability to constrain the dimensionless spin and ionization parameters from ∼6000 Nuclear Spectroscopic Telescope Array (NuSTAR) simulations of a bright X-ray source employing the lamp-post geometry. We employ high-count spectra to show the limitations in the model without being confused with limitations in signal-to-noise. We find that both parameters are well-recovered at 90% confidence with improving constraints at higher reflection fraction, high spin, and low source height. We test spectra across a broad range—first at 10^6–10^7 and then ∼10^5 total source counts across the effective 3–79 keV band of NuSTAR—and discover a strong dependence of the results on how fits are performed around the starting parameters, owing to the complexity of the model itself. A blind fit chosen over an approach that carries some estimates of the actual parameter values can lead to significantly worse recovery of model parameters. We further stress the importance of spanning the space of nonlinear-behaving parameters like log ξ carefully and thoroughly for the model to avoid misleading results. In light of selecting fitting procedures, we recall the necessity of paying attention to the choice of data binning and fit statistics used to test the goodness of fit by demonstrating the effect on the photon index Γ. We re-emphasize the need to account for the detector resolution while binning X-ray data and to use Poisson fit statistics while analyzing Poissonian data.

  3. Kernel learning at the first level of inference.

    PubMed

    Cawley, Gavin C; Talbot, Nicola L C

    2014-05-01

    Kernel learning methods, whether Bayesian or frequentist, typically involve multiple levels of inference, with the coefficients of the kernel expansion being determined at the first level and the kernel and regularisation parameters carefully tuned at the second level, a process known as model selection. Model selection for kernel machines is commonly performed via optimisation of a suitable model selection criterion, often based on cross-validation or theoretical performance bounds. However, if there are a large number of kernel parameters, as for instance in the case of automatic relevance determination (ARD), there is a substantial risk of over-fitting the model selection criterion, resulting in poor generalisation performance. In this paper we investigate the possibility of learning the kernel, for the Least-Squares Support Vector Machine (LS-SVM) classifier, at the first level of inference, i.e. parameter optimisation. The kernel parameters and the coefficients of the kernel expansion are jointly optimised at the first level of inference, minimising a training criterion with an additional regularisation term acting on the kernel parameters. The key advantage of this approach is that the values of only two regularisation parameters need be determined in model selection, substantially alleviating the problem of over-fitting the model selection criterion. The benefits of this approach are demonstrated using a suite of synthetic and real-world binary classification benchmark problems, where kernel learning at the first level of inference is shown to be statistically superior to the conventional approach, improves on our previous work (Cawley and Talbot, 2007) and is competitive with Multiple Kernel Learning approaches, but with reduced computational expense. Copyright © 2014 Elsevier Ltd. All rights reserved.

  4. UCODE_2005 and six other computer codes for universal sensitivity analysis, calibration, and uncertainty evaluation constructed using the JUPITER API

    USGS Publications Warehouse

    Poeter, Eileen E.; Hill, Mary C.; Banta, Edward R.; Mehl, Steffen; Christensen, Steen

    2006-01-01

    This report documents the computer codes UCODE_2005 and six post-processors. Together the codes can be used with existing process models to perform sensitivity analysis, data needs assessment, calibration, prediction, and uncertainty analysis. Any process model or set of models can be used; the only requirements are that models have numerical (ASCII or text only) input and output files, that the numbers in these files have sufficient significant digits, that all required models can be run from a single batch file or script, and that simulated values are continuous functions of the parameter values. Process models can include pre-processors and post-processors as well as one or more models related to the processes of interest (physical, chemical, and so on), making UCODE_2005 extremely powerful. An estimated parameter can be a quantity that appears in the input files of the process model(s), or a quantity used in an equation that produces a value that appears in the input files. In the latter situation, the equation is user-defined. UCODE_2005 can compare observations and simulated equivalents. The simulated equivalents can be any simulated value written in the process-model output files or can be calculated from simulated values with user-defined equations. The quantities can be model results, or dependent variables. For example, for ground-water models they can be heads, flows, concentrations, and so on. Prior, or direct, information on estimated parameters also can be considered. Statistics are calculated to quantify the comparison of observations and simulated equivalents, including a weighted least-squares objective function. In addition, data-exchange files are produced that facilitate graphical analysis. UCODE_2005 can be used fruitfully in model calibration through its sensitivity analysis capabilities and its ability to estimate parameter values that result in the best possible fit to the observations. Parameters are estimated using nonlinear regression: a weighted least-squares objective function is minimized with respect to the parameter values using a modified Gauss-Newton method or a double-dogleg technique. Sensitivities needed for the method can be read from files produced by process models that can calculate sensitivities, such as MODFLOW-2000, or can be calculated by UCODE_2005 using a more general, but less accurate, forward- or central-difference perturbation technique. Problems resulting from inaccurate sensitivities and solutions related to the perturbation techniques are discussed in the report. Statistics are calculated and printed for use in (1) diagnosing inadequate data and identifying parameters that probably cannot be estimated; (2) evaluating estimated parameter values; and (3) evaluating how well the model represents the simulated processes. Results from UCODE_2005 and codes RESIDUAL_ANALYSIS and RESIDUAL_ANALYSIS_ADV can be used to evaluate how accurately the model represents the processes it simulates. Results from LINEAR_UNCERTAINTY can be used to quantify the uncertainty of model simulated values if the model is sufficiently linear. Results from MODEL_LINEARITY and MODEL_LINEARITY_ADV can be used to evaluate model linearity and, thereby, the accuracy of the LINEAR_UNCERTAINTY results. UCODE_2005 can also be used to calculate nonlinear confidence and prediction intervals, which quantify the uncertainty of model simulated values when the model is not linear.
    CORFAC_PLUS can be used to produce factors that allow intervals to account for model intrinsic nonlinearity and small-scale variations in system characteristics that are not explicitly accounted for in the model or the observation weighting. The six post-processing programs are independent of UCODE_2005 and can use the results of other programs that produce the required data-exchange files. UCODE_2005 and the other six codes are intended for use on any computer operating system. The programs con
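
    The perturbation sensitivities at the core of this workflow are easy to illustrate. In the sketch below, a toy function stands in for the process model (it is not MODFLOW or UCODE_2005 itself); it computes forward-difference sensitivities and the dimensionless and composite scaled sensitivity statistics commonly used to judge which parameters the observations can inform.

        # Forward-difference sensitivities and scaled sensitivity statistics.
        import numpy as np

        def process_model(p):                     # hypothetical simulated equivalents
            k, rch = p
            return np.array([rch / k * 100.0, rch / k * 42.0, rch * 3.0])

        def scaled_sensitivities(model, p, weights, rel_step=0.01):
            y0 = model(p)
            dss = np.empty((y0.size, p.size))
            for j in range(p.size):
                dp = rel_step * p[j]
                pp = p.copy()
                pp[j] += dp
                dydp = (model(pp) - y0) / dp                  # forward difference
                dss[:, j] = dydp * p[j] * np.sqrt(weights)    # dimensionless scaled sens.
            css = np.sqrt(np.mean(dss**2, axis=0))            # composite scaled sens.
            return dss, css

        p = np.array([10.0, 0.3])                 # e.g. K and recharge (illustrative)
        dss, css = scaled_sensitivities(process_model, p, np.ones(3))
        print(css)   # larger values -> parameter better informed by the observations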

  5. In silico simulations of experimental protocols for cardiac modeling.

    PubMed

    Carro, Jesus; Rodriguez, Jose Felix; Pueyo, Esther

    2014-01-01

    A mathematical model of the action potential (AP) involves the sum of different transmembrane ionic currents and the balance of intracellular ionic concentrations. To each ionic current corresponds an equation involving several effects. A number of model parameters must be identified using specific experimental protocols in which the effects are considered independent. However, when model complexity grows, the interaction between effects becomes increasingly important. Therefore, model parameters identified by considering the different effects as independent might be misleading. In this work, a novel methodology consisting of performing in silico simulations of the experimental protocol and then comparing experimental and simulated outcomes is proposed for model parameter identification and validation. The potential of the methodology is demonstrated by validating voltage-dependent L-type calcium current (ICaL) inactivation in recently proposed human ventricular AP models with different formulations. Our results show large differences between ICaL inactivation as calculated from the model equation and ICaL inactivation from the in silico simulations, due to the interaction between effects and/or to the experimental protocol. Our results suggest that, when proposing any new model formulation, consistency between the formulation and the corresponding experimental data it aims to reproduce needs to be verified first, considering all involved factors.

  6. Inverse Monte Carlo method in a multilayered tissue model for diffuse reflectance spectroscopy

    NASA Astrophysics Data System (ADS)

    Fredriksson, Ingemar; Larsson, Marcus; Strömberg, Tomas

    2012-04-01

    Model-based data analysis of diffuse reflectance spectroscopy data enables the estimation of optical and structural tissue parameters. The aim of this study was to present an inverse Monte Carlo method based on spectra from two source-detector distances (0.4 and 1.2 mm), using a multilayered tissue model. The tissue model variables include geometrical properties, light scattering properties, tissue chromophores such as melanin and hemoglobin, oxygen saturation and average vessel diameter. The method utilizes a small set of presimulated Monte Carlo data for combinations of different levels of epidermal thickness and tissue scattering. The path length distributions in the different layers are stored, and the effect of the other parameters is added in post-processing. The accuracy of the method was evaluated using Monte Carlo simulations of tissue-like models containing discrete blood vessels, evaluating blood tissue fraction and oxygenation. It was also compared to a homogeneous model. The multilayer model performed better than the homogeneous model, and all tissue parameters significantly improved the spectral fitting. Recorded in vivo spectra were fitted well at both distances, which we previously found was not possible with a homogeneous model. No absolute intensity calibration is needed and the algorithm is fast enough for real-time processing.

  7. qPIPSA: Relating enzymatic kinetic parameters and interaction fields

    PubMed Central

    Gabdoulline, Razif R; Stein, Matthias; Wade, Rebecca C

    2007-01-01

    Background The simulation of metabolic networks in quantitative systems biology requires the assignment of enzymatic kinetic parameters. Experimentally determined values are often not available and therefore computational methods to estimate these parameters are needed. It is possible to use the three-dimensional structure of an enzyme to perform simulations of a reaction and derive kinetic parameters. However, this is computationally demanding and requires detailed knowledge of the enzyme mechanism. We have therefore sought to develop a general, simple and computationally efficient procedure to relate protein structural information to enzymatic kinetic parameters that allows consistency between the kinetic and structural information to be checked and estimation of kinetic constants for structurally and mechanistically similar enzymes. Results We describe qPIPSA: quantitative Protein Interaction Property Similarity Analysis. In this analysis, molecular interaction fields, for example, electrostatic potentials, are computed from the enzyme structures. Differences in molecular interaction fields between enzymes are then related to the ratios of their kinetic parameters. This procedure can be used to estimate unknown kinetic parameters when enzyme structural information is available and kinetic parameters have been measured for related enzymes or were obtained under different conditions. The detailed interaction of the enzyme with substrate or cofactors is not modeled and is assumed to be similar for all the proteins compared. The protein structure modeling protocol employed ensures that differences between models reflect genuine differences between the protein sequences, rather than random fluctuations in protein structure. Conclusion Provided that the experimental conditions and the protein structural models refer to the same protein state or conformation, correlations between interaction fields and kinetic parameters can be established for sets of related enzymes. Outliers may arise due to variation in the importance of different contributions to the kinetic parameters, such as protein stability and conformational changes. The qPIPSA approach can assist in the validation as well as estimation of kinetic parameters, and provide insights into enzyme mechanism. PMID:17919319

  8. Metal mixture modeling evaluation project: 2. Comparison of four modeling approaches.

    PubMed

    Farley, Kevin J; Meyer, Joseph S; Balistrieri, Laurie S; De Schamphelaere, Karel A C; Iwasaki, Yuichi; Janssen, Colin R; Kamo, Masashi; Lofts, Stephen; Mebane, Christopher A; Naito, Wataru; Ryan, Adam C; Santore, Robert C; Tipping, Edward

    2015-04-01

    As part of the Metal Mixture Modeling Evaluation (MMME) project, models were developed by the National Institute of Advanced Industrial Science and Technology (Japan), the US Geological Survey (USA), HDR|HydroQual (USA), and the Centre for Ecology and Hydrology (United Kingdom) to address the effects of metal mixtures on biological responses of aquatic organisms. A comparison of the 4 models, as they were presented at the MMME workshop in Brussels, Belgium (May 2012), is provided in the present study. Overall, the models were found to be similar in structure (free ion activities computed by the Windermere humic aqueous model [WHAM]; specific or nonspecific binding of metals/cations in or on the organism; specification of metal potency factors or toxicity response functions to relate metal accumulation to biological response). Major differences in modeling approaches are attributed to various modeling assumptions (e.g., single vs multiple types of binding sites on the organism) and specific calibration strategies that affected the selection of model parameters. The models provided a reasonable description of additive (or nearly additive) toxicity for a number of individual toxicity test results. Less-than-additive toxicity was more difficult to describe with the available models. Because of limitations in the available datasets and the strong interrelationships among the model parameters (binding constants, potency factors, toxicity response parameters), further evaluation of specific model assumptions and calibration strategies is needed. © 2014 SETAC.

  9. ACME Priority Metrics (A-PRIME)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Evans, Katherine J; Zender, Charlie; Van Roekel, Luke

    A-PRIME is a collection of scripts designed to provide Accelerated Climate Model for Energy (ACME) model developers and analysts with a variety of analyses needed to determine whether the model is producing the desired results, depending on the goals of the simulation. The software is based on csh scripts at the top level, enabling scientists to provide the input parameters. Within these scripts, csh calls code that performs the postprocessing of the raw data and creates plots for visual assessment.

  10. Development of analysis technique to predict the material behavior of blowing agent

    NASA Astrophysics Data System (ADS)

    Hwang, Ji Hoon; Lee, Seonggi; Hwang, So Young; Kim, Naksoo

    2014-11-01

    In order to numerically simulate the foaming behavior of a mastic sealer containing a blowing agent, foaming and driving-force models that incorporate the foaming characteristics are needed. An elastic stress model is also required to represent the material behavior of the co-existing liquid and cured-polymer phases. It is important to determine thermal properties such as thermal conductivity and specific heat, because foaming behavior is heavily influenced by temperature change. In this study, three models are proposed to explain the foaming process and the material behavior during and after the process. To obtain the material parameters in each model, the following experiments and corresponding numerical simulations were performed: a thermal test, a simple shear test and a foaming test. Error functions are defined as the differences between the experimental measurements and the numerical simulation results, and the parameters are then determined by minimizing these error functions. To ensure the validity of the obtained parameters, a confirmation simulation for each model is conducted using the determined parameters. Cross-verification is performed by measuring the foaming/shrinkage force; the cross-verification results tended to follow the experimental results. Interestingly, it was possible to estimate the micro-deformation occurring in an automobile roof surface by applying the proposed model to an oven-process analysis. The developed analysis technique will contribute to designs that minimize micro-deformation.

  11. Stress-Strain Characterization for Reversed Loading Path and Constitutive Modeling for AHSS Springback Predictions

    NASA Astrophysics Data System (ADS)

    Zhu, Hong; Huang, Mai; Sadagopan, Sriram; Yao, Hong

    2017-09-01

    With increasing vehicle fuel economy standards, automotive OEMs are widely using various AHSS grades, including DP, TRIP, CP and 3rd Gen AHSS, to reduce vehicle weight, owing to their good combination of strength and formability. As one of the enabling technologies for AHSS application, the requirement for accurate prediction of springback in cold-stamped AHSS parts has stimulated a large number of investigations over the past decade into reversed loading paths at large strains and the associated constitutive modeling. Given the spectrum of complex loading histories occurring in production stamping processes, many challenges remain in this field, including test data reliability, loading-path representability, constitutive model robustness and non-unique constitutive parameter identification. In this paper, various testing approaches and constitutive models are reviewed briefly, and a systematic methodology spanning stress-strain characterization and constitutive model parameter identification for material card generation is presented in order to support automotive OEMs' needs in virtual stamping. This systematic methodology features a tension-compression test at large strain with a robust anti-buckling device and concurrent friction force correction, properly selected loading paths to represent material behavior during different springback modes, and the 10-parameter Yoshida model with knowledge-based parameter identification through nonlinear optimization. Validation cases for lab AHSS parts are also discussed to check the applicability of this methodology.

  12. Robust time and frequency domain estimation methods in adaptive control

    NASA Technical Reports Server (NTRS)

    Lamaire, Richard Orville

    1987-01-01

    A robust identification method was developed for use in an adaptive control system. This estimator is called a robust estimator, since it is robust to the effects of both unmodeled dynamics and an unmeasurable disturbance. The development of the robust estimator was motivated by a need to provide guarantees in the identification part of an adaptive controller. To enable the design of a robust control system, a nominal model as well as a frequency-domain bounding function on the modeling uncertainty associated with this nominal model must be provided. Two estimation methods are presented for finding parameter estimates and, hence, a nominal model. One of these methods is based on the well-developed field of time-domain parameter estimation. The second method finds parameter estimates by a weighted least-squares fit to a frequency-domain estimated model. The frequency-domain estimator is shown to perform better, in general, than the time-domain parameter estimator. In addition, a methodology for finding a frequency-domain bounding function on the disturbance is used to compute a frequency-domain bounding function on the additive modeling error due to the effects of the disturbance and the use of finite-length data. The performance of the robust estimator in both open-loop and closed-loop situations is examined through simulations.

  13. Using evolutionary algorithms for fitting high-dimensional models to neuronal data.

    PubMed

    Svensson, Carl-Magnus; Coombes, Stephen; Peirce, Jonathan Westley

    2012-04-01

    In the study of neuroscience, and of complex biological systems in general, there is frequently a need to fit mathematical models with large numbers of parameters to highly complex datasets. Here we consider algorithms of two different classes, gradient following (GF) methods and evolutionary algorithms (EA), and examine their performance in fitting a 9-parameter model of a filter-based visual neuron to real data recorded from a sample of 107 neurons in macaque primary visual cortex (V1). Although the GF method converged very rapidly on a solution, it was highly susceptible to the effects of local minima in the error surface and produced relatively poor fits unless the initial estimates of the parameters were already very good. Conversely, although the EA required many more iterations of evaluating the model neuron's response to a series of stimuli, it ultimately found better solutions in nearly all cases and its performance was independent of the starting parameters of the model. Thus, although the fitting process was lengthy in terms of processing time, the relative lack of human intervention in the evolutionary algorithm, and its ability ultimately to generate model fits that could be trusted as being close to optimal, made it far superior in this particular application to the gradient following methods. This is likely to be the case in many further complex systems, as are often found in neuroscience.
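
    The contrast between the two algorithm classes can be reproduced on a toy multimodal fit. In the sketch below, which is not the authors' code, a gradient-based optimizer started from a poor initial guess typically lands in a local minimum of a sinusoid-fitting error surface, while a stock evolutionary optimizer searching the full bounds does not.

        # GF vs EA on a deliberately multimodal least-squares surface.
        import numpy as np
        from scipy.optimize import minimize, differential_evolution

        rng = np.random.default_rng(0)
        x = np.linspace(0, 4 * np.pi, 200)
        data = 1.5 * np.sin(3.0 * x + 0.7) + 0.2 * rng.standard_normal(x.size)

        def sse(p):                                   # amplitude, frequency, phase
            amp, freq, phase = p
            return np.sum((amp * np.sin(freq * x + phase) - data) ** 2)

        bounds = [(0.1, 5.0), (0.5, 6.0), (-np.pi, np.pi)]

        gf = minimize(sse, x0=[1.0, 1.0, 0.0], bounds=bounds)   # gradient following
        ea = differential_evolution(sse, bounds, seed=1)        # evolutionary search
        print("GF from a poor start:", gf.x, gf.fun)            # often a local minimum
        print("EA over the bounds:  ", ea.x, ea.fun)            # near the true optimum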

  14. Early prediction of intensive care unit-acquired weakness using easily available parameters: a prospective observational study.

    PubMed

    Wieske, Luuk; Witteveen, Esther; Verhamme, Camiel; Dettling-Ihnenfeldt, Daniela S; van der Schaaf, Marike; Schultz, Marcus J; van Schaik, Ivo N; Horn, Janneke

    2014-01-01

    An early diagnosis of Intensive Care Unit-acquired weakness (ICU-AW) using muscle strength assessment is not possible in most critically ill patients. We hypothesized that development of ICU-AW can be predicted reliably two days after ICU admission, using patient characteristics, early available clinical parameters, laboratory results and medication use as predictors. Newly admitted ICU patients mechanically ventilated ≥2 days were included in this prospective observational cohort study. Manual muscle strength was measured according to the Medical Research Council (MRC) scale, once patients were awake and attentive. ICU-AW was defined as an average MRC score <4. A prediction model was developed by selecting predictors from an a priori defined set of candidate predictors, based on known risk factors. Discriminative performance of the prediction model was evaluated, validated internally and compared to the APACHE IV and SOFA scores. Of 212 included patients, 103 developed ICU-AW. Highest lactate levels, treatment with any aminoglycoside in the first two days after admission and age were selected as predictors. The area under the receiver operating characteristic curve of the prediction model was 0.71 after internal validation. The new prediction model improved discrimination compared to the APACHE IV and the SOFA score. The new early prediction model for ICU-AW, using a set of 3 easily available parameters, has fair discriminative performance. This model needs external validation.
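
    Schematically, the modelling step amounts to a three-predictor logistic regression evaluated by the area under the ROC curve. The sketch below uses simulated data and invented coefficients, not the study's cohort.

        # Three-predictor logistic model with apparent AUC (synthetic data).
        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import roc_auc_score

        rng = np.random.default_rng(42)
        n = 212
        lactate = rng.gamma(2.0, 1.5, n)                 # mmol/L
        aminoglycoside = rng.integers(0, 2, n)           # given in first two days?
        age = rng.normal(62, 14, n)                      # years
        X = np.column_stack([lactate, aminoglycoside, age])

        logit = -5.0 + 0.5 * lactate + 0.8 * aminoglycoside + 0.04 * age
        y = rng.random(n) < 1.0 / (1.0 + np.exp(-logit)) # simulated ICU-AW outcome

        model = LogisticRegression().fit(X, y)
        auc = roc_auc_score(y, model.predict_proba(X)[:, 1])
        print(f"apparent AUC: {auc:.2f}")                # would need internal validation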

  15. Adaptation of an urban land surface model to a tropical suburban area: Offline evaluation, sensitivity analysis, and optimization of TEB/ISBA (SURFEX)

    NASA Astrophysics Data System (ADS)

    Harshan, Suraj

    The main objective of the present thesis is the improvement of the TEB/ISBA (SURFEX) urban land surface model (ULSM) through comprehensive evaluation, sensitivity analysis, and optimization experiments using energy balance, radiative, and air temperature data observed during 11 months at a tropical suburban site in Singapore. Overall the performance of the model is satisfactory, with a small underestimation of net radiation and an overestimation of sensible heat flux. Weaknesses in predicting the latent heat flux are apparent, with smaller model values during daytime, and the model also significantly underpredicts both the daytime peak and nighttime storage heat. Surface temperatures of all facets are generally overpredicted. Significant variation exists in the model behaviour between dry and wet seasons. The vegetation parametrization used in the model is inadequate to represent the moisture dynamics, producing unrealistically low latent heat fluxes during a particularly dry period. The comprehensive evaluation of the ULSM shows the need for accurate estimation of input parameter values for the present site. Since obtaining many of these parameters through empirical methods is not feasible, the present study employed a two-step approach aimed at providing information about the most sensitive parameters and an optimized parameter set from model calibration. Two well-established sensitivity analysis methods (global: Sobol; local: Morris) and a state-of-the-art multiobjective evolutionary algorithm (Borg) were employed for sensitivity analysis and parameter estimation. Experiments were carried out for three different weather periods. The analysis indicates that roof-related parameters are the most important ones in controlling the behaviour of the sensible heat flux and net radiation flux, with roof and road albedo as the most influential parameters. Soil moisture initialization parameters are important in controlling the latent heat flux. The built (town) fraction has a significant influence on all fluxes considered. Comparison between the Sobol and Morris methods shows similar sensitivities, indicating the robustness of the present analysis and that the Morris method can be employed as a computationally cheaper alternative to Sobol's method. The optimization and sensitivity experiments for the three periods (dry, wet and mixed) show a noticeable difference in parameter sensitivity and parameter convergence, indicating inadequacies in the model formulation. The existence of a significant proportion of less sensitive parameters may indicate an over-parametrized model. The Borg MOEA showed great promise in optimizing the input parameter set. The optimized model, modified using site-specific values for the thermal roughness length parametrization, shows improved performance for outgoing longwave radiation flux, overall surface temperature, heat storage flux and sensible heat flux.
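
    Both sensitivity methods are available in the open-source SALib package; a sketch assuming its documented interface is given below, with a toy flux function and invented parameter bounds standing in for TEB/ISBA and its surface parameters.

        # Sobol (global) and Morris (screening) sensitivity analysis with SALib.
        from SALib.sample import saltelli
        from SALib.sample import morris as morris_sampler
        from SALib.analyze import sobol
        from SALib.analyze import morris as morris_analyzer

        problem = {
            "num_vars": 3,
            "names": ["roof_albedo", "road_albedo", "town_fraction"],
            "bounds": [[0.05, 0.6], [0.05, 0.4], [0.3, 0.9]],
        }

        def model(X):                                 # toy stand-in for a SURFEX run
            return 400 * (1 - X[:, 0]) * X[:, 2] + 150 * (1 - X[:, 1]) * (1 - X[:, 2])

        X_sobol = saltelli.sample(problem, 1024)
        Si = sobol.analyze(problem, model(X_sobol))
        print(dict(zip(problem["names"], Si["ST"])))  # total-order indices

        X_morris = morris_sampler.sample(problem, 100, num_levels=4)
        Mi = morris_analyzer.analyze(problem, X_morris, model(X_morris), num_levels=4)
        print(dict(zip(problem["names"], Mi["mu_star"])))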

  16. Assessment of Spatial Transferability of Process-Based Hydrological Model Parameters in Two Neighboring Catchments in the Himalayan Region

    NASA Astrophysics Data System (ADS)

    Nepal, S.

    2016-12-01

    The spatial transferability of the model parameters of the process-oriented distributed J2000 hydrological model was investigated in two glaciated sub-catchments of the Koshi river basin in eastern Nepal. The basins had a high degree of similarity with respect to their static landscape features. The model was first calibrated (1986-1991) and validated (1992-1997) in the Dudh Koshi sub-catchment. The calibrated and validated model parameters were then transferred to the nearby Tamor catchment (2001-2009). A sensitivity and uncertainty analysis was carried out for both sub-catchments to discover the sensitivity range of the parameters in the two catchments. The model represented the overall hydrograph well in both sub-catchments, including baseflow and medium-range flows (rising and recession limbs). The efficiency results according to both the Nash-Sutcliffe coefficient and the coefficient of determination were above 0.84 in both cases. The sensitivity analysis showed that the same parameter was most sensitive for Nash-Sutcliffe (ENS) and Log Nash-Sutcliffe (LNS) efficiencies in both catchments. However, there were some differences in sensitivity to ENS and LNS for moderately and weakly sensitive parameters, although the majority (13 out of 16 for ENS and 16 out of 16 for LNS) had a sensitivity response in a similar range. A generalized likelihood uncertainty estimation (GLUE) analysis suggests that most of the time the observed runoff is within the parameter uncertainty range, although occasionally the values lie outside it, especially during flood peaks and more often in the Tamor. This may be due to the limited input data resulting from the small number of precipitation stations and the lack of representative stations in high-altitude areas, as well as to model structural uncertainty. The results indicate that transfer of the J2000 parameters to a neighboring catchment in the Himalayan region with similar physiographic landscape characteristics is viable. This suggests the possibility of applying the process-based J2000 model to ungauged catchments in the Himalayan region, which could provide important insights into hydrological system dynamics and much-needed information to support water resources planning and management.
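
    The two efficiency measures quoted above are commonly defined as follows; the log-transformed variant emphasizes low-flow (baseflow) performance. The streamflow values in the example are invented.

        # Nash-Sutcliffe efficiency and its log-transformed variant.
        import numpy as np

        def nse(obs, sim):
            obs, sim = np.asarray(obs, float), np.asarray(sim, float)
            return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

        def log_nse(obs, sim, eps=1e-6):
            return nse(np.log(np.asarray(obs, float) + eps),
                       np.log(np.asarray(sim, float) + eps))

        obs = [12.0, 30.0, 55.0, 41.0, 18.0, 9.0]   # observed runoff (illustrative)
        sim = [10.0, 33.0, 50.0, 44.0, 20.0, 8.0]   # simulated runoff
        print(nse(obs, sim), log_nse(obs, sim))     # 1.0 is a perfect fit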

  17. Testing alternative ground water models using cross-validation and other methods

    USGS Publications Warehouse

    Foglia, L.; Mehl, S.W.; Hill, M.C.; Perona, P.; Burlando, P.

    2007-01-01

    Many methods can be used to test alternative ground water models. Of concern in this work are methods able to (1) rank alternative models (also called model discrimination) and (2) identify observations important to parameter estimates and predictions (equivalent to the purpose served by some types of sensitivity analysis). Some of the measures investigated are computationally efficient; others are computationally demanding. The latter are generally needed to account for model nonlinearity. The efficient model discrimination methods investigated include the information criteria: the corrected Akaike information criterion, Bayesian information criterion, and generalized cross-validation. The efficient sensitivity analysis measures used are dimensionless scaled sensitivity (DSS), composite scaled sensitivity, and parameter correlation coefficient (PCC); the other statistics are DFBETAS, Cook's D, and observation-prediction statistic. Acronyms are explained in the introduction. Cross-validation (CV) is a computationally intensive nonlinear method that is used for both model discrimination and sensitivity analysis. The methods are tested using up to five alternative parsimoniously constructed models of the ground water system of the Maggia Valley in southern Switzerland. The alternative models differ in their representation of hydraulic conductivity. A new method for graphically representing CV and sensitivity analysis results for complex models is presented and used to evaluate the utility of the efficient statistics. The results indicate that for model selection, the information criteria produce similar results at much smaller computational cost than CV. For identifying important observations, the only obviously inferior linear measure is DSS; the poor performance was expected because DSS does not include the effects of parameter correlation and PCC reveals large parameter correlations. © 2007 National Ground Water Association.
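
    The information-criteria ranking is inexpensive to compute once each calibrated model's weighted sum of squared residuals is known. The sketch below uses the standard Gaussian-likelihood forms of AICc and BIC, with invented fit results for three alternative hydraulic-conductivity parameterizations.

        # Rank alternative models by AICc and BIC from weighted least-squares fits.
        import numpy as np

        def aicc(sswr, n, k):
            aic = n * np.log(sswr / n) + 2 * k
            return aic + 2 * k * (k + 1) / (n - k - 1)   # small-sample correction

        def bic(sswr, n, k):
            return n * np.log(sswr / n) + k * np.log(n)

        # invented (SSWR, #parameters) for three parameterizations, n observations
        models = {"uniform K": (52.1, 3), "zoned K": (38.7, 6), "interpolated K": (35.9, 11)}
        n = 60
        ranked = sorted(models.items(), key=lambda m: aicc(m[1][0], n, m[1][1]))
        for name, (sswr, k) in ranked:
            print(f"{name:16s} AICc={aicc(sswr, n, k):7.2f}  BIC={bic(sswr, n, k):7.2f}")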

  18. A theoretical-electron-density databank using a model of real and virtual spherical atoms.

    PubMed

    Nassour, Ayoub; Domagala, Slawomir; Guillot, Benoit; Leduc, Theo; Lecomte, Claude; Jelsch, Christian

    2017-08-01

    A database describing the electron density of common chemical groups using combinations of real and virtual spherical atoms is proposed, as an alternative to the multipolar atom modelling of the molecular charge density. Theoretical structure factors were computed from periodic density functional theory calculations on 38 crystal structures of small molecules and the charge density was subsequently refined using a density model based on real spherical atoms and additional dummy charges on the covalent bonds and on electron lone-pair sites. The electron-density parameters of real and dummy atoms present in a similar chemical environment were averaged on all the molecules studied to build a database of transferable spherical atoms. Compared with the now-popular databases of transferable multipolar parameters, the spherical charge modelling needs fewer parameters to describe the molecular electron density and can be more easily incorporated in molecular modelling software for the computation of electrostatic properties. The construction method of the database is described. In order to analyse to what extent this modelling method can be used to derive meaningful molecular properties, it has been applied to the urea molecule and to biotin/streptavidin, a protein/ligand complex.

  19. Technical Approach for Determining Key Parameters Needed for Modeling the Performance of Cast Stone for the Integrated Disposal Facility Performance Assessment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yabusaki, Steven B.; Serne, R. Jeffrey; Rockhold, Mark L.

    2015-03-30

    Washington River Protection Solutions (WRPS) and its contractors at Pacific Northwest National Laboratory (PNNL) and Savannah River National Laboratory (SRNL) are conducting a development program to develop/refine the cementitious waste form for the wastes treated at the ETF and to provide the data needed to support the IDF PA. This technical approach document is intended to provide guidance to the cementitious waste form development program with respect to the waste form characterization and testing information needed to support the IDF PA. At the time of the preparation of this technical approach document, the IDF PA effort is just getting started and the approach to analyze the performance of the cementitious waste form has not been determined. Therefore, this document looks at a number of different approaches for evaluating the waste form performance and describes the testing needed to provide data for each approach. Though the approach addresses a cementitious secondary aqueous waste form, it is applicable to other waste forms such as Cast Stone for supplemental immobilization of Hanford LAW. The performance of Cast Stone as a physical and chemical barrier to the release of contaminants of concern (COCs) from solidification of Hanford liquid low activity waste (LAW) and secondary wastes processed through the Effluent Treatment Facility (ETF) is of critical importance to the Hanford Integrated Disposal Facility (IDF) total system performance assessment (TSPA). The effectiveness of cementitious waste forms as a barrier to COC release is expected to evolve with time. PA modeling must therefore anticipate and address processes, properties, and conditions that alter the physical and chemical controls on COC transport in the cementitious waste forms over time. Most organizations responsible for disposal facility operation and their regulators support an iterative hierarchical safety/performance assessment approach with a general philosophy that modeling provides the critical link between the short-term understanding from laboratory and field tests, and the prediction of repository performance over repository time frames and scales. One common recommendation is that experiments be designed to permit the appropriate scaling in the models. There is a large contrast in the physical and chemical properties between the Cast Stone waste package and the IDF backfill and surrounding sediments. Cast Stone exhibits low permeability, high tortuosity, low carbonate, high pH, and low Eh whereas the backfill and native sediments have high permeability, low tortuosity, high carbonate, circumneutral pH, and high Eh. These contrasts have important implications for flow, transport, and reactions across the Cast Stone – backfill interface. Over time with transport across the interface and subsequent reactions, the sharp geochemical contrast will blur and there will be a range of spatially-distributed conditions. In general, COC mobility and transport will be sensitive to these geochemical variations, which also include physical changes in porosity and permeability from mineral reactions. Therefore, PA modeling must address processes, properties, and conditions that alter the physical and chemical controls on COC transport in the cementitious waste forms over time.
    Section 2 of this document reviews past Hanford PAs and SRS Saltstone PAs, which to date have mostly relied on the lumped parameter COC release conceptual models for TSPA predictions, and provides some details on the chosen values for the lumped parameters. Section 3 provides more details on the hierarchical modeling strategy and processes and mechanisms that control COC release. Section 4 summarizes and lists the key parameters for which numerical values are needed to perform PAs. Section 5 provides brief summaries of the methods used to measure the needed parameters and references to get more details.

  20. Dynamic mechanical characterization of poro-viscoelastic materials

    NASA Astrophysics Data System (ADS)

    Renault, Amelie

    Poro-viscoelastic materials are well modelled by the Biot-Allard equations. This model needs a number of geometrical parameters to describe the macroscopic geometry of the material, and elastic parameters to describe the elastic properties of the material skeleton. Several characterisation methods for the viscoelastic parameters of porous materials are studied in this thesis. Firstly, quasistatic and resonant characterization methods are described and analyzed. Secondly, a new inverse dynamic characterization of the same moduli is developed. The latter involves a two-layer metal-porous beam, which is excited at its center, and the input mobility is measured. The set-up is simplified compared to previous methods. The parameters are obtained via an inversion procedure based on the minimisation of a cost function comparing the measured and calculated frequency response functions (FRF). The calculation is done with a general laminate model. A parametric study identifies the optimal beam dimensions for maximum sensitivity of the inversion model. The advantage of using a code that does not take fluid-structure interactions into account is the low computation time; for most materials, the effect of this interaction on the elastic properties is negligible. Several materials are tested to demonstrate the performance of the method compared to the classical quasi-static approaches, and to establish its limitations and range of validity. Finally, conclusions about their use are given. Keywords: Elastic parameters, porous materials, anisotropy, vibration.

  1. Water Stress & Biomass Monitoring and SWAP Modeling of Irrigated Crops in Saratov Region of Russia

    NASA Astrophysics Data System (ADS)

    Zeyliger, Anatoly; Ermolaeva, Olga

    2016-04-01

    The development of modern irrigation technologies is balanced between the need to maximize production and the need to minimize water use, which provides for a harmonious interaction of irrigated systems with the surrounding environment. This requires an understanding of the complex interrelationships between the landscape and subsurface of irrigated and adjacent areas under present and future conditions, with the aim of minimizing the development of negative scenarios. In each irrigated area, a combination of specific factors and drivers must therefore be recognized and evaluated. Much can be gained by improving the efficiency of water applied for irrigation. Modern remote sensing (RS) monitoring technologies offer the opportunity to develop and implement effective irrigation control programs, making it possible today to increase the efficiency of irrigation water use. These technologies provide parameters with both the high temporal and the adequate spatial resolution needed to monitor agrohydrological parameters of irrigated agricultural crops. Combining these parameters with meteorological and biophysical parameters makes it possible to estimate crop water stress, defined as the ratio between actual (ETa) and potential (ETc) evapotranspiration. Aggregating actual values of crop water stress with biomass (yield) data predicted by an agrohydrological model, based on weather forecasting and scenarios of irrigation water application, may be used to indicate both rational timing and amounts of irrigation water allocation. This type of analysis, facilitating efficient water management, can easily be extended to irrigated areas by developing maps of water application efficiency, serving as an irrigation advice system for farmers in their fields and as a decision support tool for the authorities in large-perimeter irrigation management. This contribution aims to give an illustrative explanation of the practical application of combined data from agrohydrological modeling and ground- and space-based monitoring. To this end, results of analyzing water stress during the 2012 growing season and the yielded biomass of three types of crops (alfalfa, corn and soya) irrigated by sprinkling machines on the left bank of the Volga River in the Saratov Region of Russia are presented and analyzed. A combination of data received from satellites, a local meteorological station and farmers, as well as the SWAP model, was used. The data sets of monitored water deficit for each crop, averaged over the irrigation period, were analyzed by linear regression against the yielded biomass values. A subsequent analysis of the effectiveness of irrigation water application was done with the SWAP agrohydrological model.
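
    The core of the analysis reduces to a seasonal water-stress index built from the ETa/ETc ratio and a linear regression of yielded biomass on that index; the sketch below uses invented numbers, not the Saratov field data.

        # Seasonal water-deficit index (1 - ETa/ETc) regressed against biomass.
        import numpy as np
        from scipy.stats import linregress

        eta = np.array([[3.1, 4.0, 4.4, 3.0],     # daily ETa per field, mm (rows = fields)
                        [2.2, 2.9, 3.3, 2.1],
                        [3.6, 4.5, 4.9, 3.5]])
        etc = np.array([[3.8, 4.6, 5.0, 3.9],     # potential ETc for the same days
                        [3.8, 4.6, 5.0, 3.9],
                        [3.8, 4.6, 5.0, 3.9]])

        water_deficit = 1.0 - (eta / etc).mean(axis=1)   # season-averaged stress
        biomass = np.array([11.8, 7.4, 13.1])            # t/ha, per field (invented)

        fit = linregress(water_deficit, biomass)
        print(f"slope={fit.slope:.1f} t/ha per unit deficit, r^2={fit.rvalue**2:.2f}")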

  2. Application of a compressible flow solver and barotropic cavitation model for the evaluation of the suction head in a low specific speed centrifugal pump impeller channel

    NASA Astrophysics Data System (ADS)

    Limbach, P.; Müller, T.; Skoda, R.

    2015-12-01

    Commonly, incompressible flow solvers with VOF-type cavitation models are applied for the simulation of cavitation in centrifugal pumps. Since the source/sink terms of the void fraction transport equation are based on simplified bubble dynamics, empirical parameters may need to be adjusted to the particular pump operating point. In the present study a barotropic cavitation model, which is based solely on thermodynamic fluid properties and does not include any empirical parameters, is applied to a single flow channel of a pump impeller in combination with a time-explicit viscous compressible flow solver. The suction head curves (head drop) are compared to the results of an incompressible implicit standard industrial CFD tool and are predicted qualitatively correctly by the barotropic model.

  3. Metal Mixture Modeling Evaluation project: 2. Comparison of four modeling approaches

    USGS Publications Warehouse

    Farley, Kevin J.; Meyer, Joe; Balistrieri, Laurie S.; DeSchamphelaere, Karl; Iwasaki, Yuichi; Janssen, Colin; Kamo, Masashi; Lofts, Steve; Mebane, Christopher A.; Naito, Wataru; Ryan, Adam C.; Santore, Robert C.; Tipping, Edward

    2015-01-01

    As part of the Metal Mixture Modeling Evaluation (MMME) project, models were developed by the National Institute of Advanced Industrial Science and Technology (Japan), the U.S. Geological Survey (USA), HDR|HydroQual, Inc. (USA), and the Centre for Ecology and Hydrology (UK) to address the effects of metal mixtures on biological responses of aquatic organisms. A comparison of the 4 models, as they were presented at the MMME Workshop in Brussels, Belgium (May 2012), is provided herein. Overall, the models were found to be similar in structure (free ion activities computed by WHAM; specific or non-specific binding of metals/cations in or on the organism; specification of metal potency factors and/or toxicity response functions to relate metal accumulation to biological response). Major differences in modeling approaches are attributed to various modeling assumptions (e.g., single versus multiple types of binding site on the organism) and specific calibration strategies that affected the selection of model parameters. The models provided a reasonable description of additive (or nearly additive) toxicity for a number of individual toxicity test results. Less-than-additive toxicity was more difficult to describe with the available models. Because of limitations in the available datasets and the strong inter-relationships among the model parameters (log KM values, potency factors, toxicity response parameters), further evaluation of specific model assumptions and calibration strategies is needed.

  4. Probing Quark-Gluon-Plasma properties with a Bayesian model-to-data comparison

    NASA Astrophysics Data System (ADS)

    Cai, Tianji; Bernhard, Jonah; Ke, Weiyao; Bass, Steffen; Duke QCD Group Team

    2016-09-01

    Experiments at RHIC and LHC study a special state of matter called the Quark Gluon Plasma (QGP), where quarks and gluons roam freely, by colliding relativistic heavy ions. Given the transitory nature of the QGP, its properties can only be explored by comparing computational models of its formation and evolution to experimental data. The models fall, roughly speaking, under two categories: those solely using relativistic viscous hydrodynamics (pure hydro models) and those that in addition couple to a microscopic Boltzmann transport for the later evolution of the hadronic decay products (hybrid models). Each of these models has multiple parameters that encode the physical properties we want to probe and that need to be calibrated to experimental data, a task which is computationally expensive but necessary for knowledge extraction and determination of the models' quality. Our group has developed an analysis technique based on Bayesian statistics to perform the model calibration and to extract probability distributions for each model parameter. Following previous work that applied the technique to the hybrid model, we now perform a similar analysis on a pure-hydro model and display the posterior distributions for the same set of model parameters. We also develop a set of criteria to assess the quality of the two models with respect to their ability to describe current experimental data. Funded by Duke University Goldman Sachs Research Fellowship.
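
    A minimal, hand-rolled version of the Bayesian calibration step is sketched below: a random-walk Metropolis sampler recovers the posterior of a single toy parameter (standing in for, say, the specific shear viscosity) from synthetic observables. The real analysis emulates full model outputs over many parameters; every value here is invented.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def model(eta_s, grid):
        """Stand-in for an expensive hydro calculation: maps one model
        parameter to a vector of predicted observables."""
        return 2.0 * eta_s + 0.1 * grid

    grid = np.linspace(0.0, 1.0, 10)
    sigma = 0.02
    data = model(0.16, grid) + rng.normal(0.0, sigma, grid.size)  # synthetic

    def log_posterior(eta_s):
        if not 0.0 < eta_s < 1.0:          # flat prior on a physical range
            return -np.inf
        resid = data - model(eta_s, grid)
        return -0.5 * np.sum((resid / sigma) ** 2)

    # Random-walk Metropolis over the single parameter
    chain, current = [], 0.5
    lp = log_posterior(current)
    for _ in range(20000):
        prop = current + rng.normal(0.0, 0.02)
        lp_prop = log_posterior(prop)
        if np.log(rng.random()) < lp_prop - lp:   # accept/reject step
            current, lp = prop, lp_prop
        chain.append(current)

    posterior = np.array(chain[5000:])  # discard burn-in
    print(posterior.mean(), posterior.std())
    ```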

  5. One-power IC with MPPT design

    NASA Astrophysics Data System (ADS)

    Xu, Shengzhi; Chu, Ian; Zhao, Gengshen; Wang, Qingzhang

    2008-03-01

    When designing a photovoltaic power system, the engineer needs a prepared model of the PV cells to evaluate system response, performance, and stability; a DC model alone is not enough, and an accurate AC model plays a big role. This paper first discusses the AC model of PV cells, and the DC model is also briefly introduced. A PV controller example explains the steps of system simulation. Two equivalent circuit models are implemented in the mixed-signal language Verilog-A, a hardware description language that is easy to use and offers good speed and high accuracy. Both models include solar cell arrays, a buck switched-mode DC-DC converter, and the maximum power point tracking algorithm. The difference between them is that the solar cell in one model includes AC small-signal parameters, while in the other it does not. The simulation results are compared. This paper's work shows that AC parameters play a large role in a switch-mode PV power system, especially when the switching frequency is higher than 100 kHz.
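
    To make the MPPT step concrete, here is a minimal perturb-and-observe tracker running on a single-diode DC model of a PV array. The diode parameters and step sizes are invented, and perturb-and-observe is only one common MPPT algorithm; the paper does not specify which algorithm its Verilog-A models implement.

    ```python
    import numpy as np

    def pv_current(v, i_ph=5.0, i_0=1e-9, n_vt=1.2):
        """Single-diode DC model of a PV array (hypothetical values):
        I = Iph - I0 * (exp(V / (n * Vt)) - 1), with n*Vt folded into n_vt."""
        return i_ph - i_0 * (np.exp(v / n_vt) - 1.0)

    def perturb_and_observe(v0=10.0, dv=0.1, steps=400):
        """Classic P&O MPPT: nudge the operating voltage and keep the
        direction that increases output power."""
        v, p_prev, direction = v0, 0.0, 1.0
        for _ in range(steps):
            p = v * pv_current(v)
            if p < p_prev:          # power dropped: reverse the perturbation
                direction = -direction
            p_prev = p
            v += direction * dv
        return v, p_prev

    v_mpp, p_mpp = perturb_and_observe()
    print(f"MPP near V = {v_mpp:.2f} V, P = {p_mpp:.1f} W")
    ```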

  6. Technical note: Bayesian calibration of dynamic ruminant nutrition models.

    PubMed

    Reed, K F; Arhonditsis, G B; France, J; Kebreab, E

    2016-08-01

    Mechanistic models of ruminant digestion and metabolism have advanced our understanding of the processes underlying ruminant animal physiology. Deterministic modeling practices ignore the inherent variation within and among individual animals and thus have no way to assess how sources of error influence model outputs. We introduce Bayesian calibration of mathematical models to address the need for robust mechanistic modeling tools that can accommodate error analysis by remaining within the bounds of data-based parameter estimation. For the purpose of prediction, the Bayesian approach generates a posterior predictive distribution that represents the current estimate of the value of the response variable, taking into account both the uncertainty about the parameters and model residual variability. Predictions are expressed as probability distributions, thereby conveying significantly more information than point estimates in regard to uncertainty. Our study illustrates some of the technical advantages of Bayesian calibration and discusses the future perspectives in the context of animal nutrition modeling. Copyright © 2016 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
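
    The posterior predictive idea can be shown in a few lines: given posterior draws for the parameters of a toy linear intake-response model, predictions combine parameter uncertainty with residual variability. The posterior draws below are simulated stand-ins, not output of an actual calibration.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Pretend MCMC already produced posterior samples for a toy model
    # y = a * intake + b, plus a residual standard deviation sigma.
    post_a = rng.normal(0.8, 0.05, 4000)        # stand-in posterior draws
    post_b = rng.normal(2.0, 0.30, 4000)
    post_sigma = np.abs(rng.normal(0.5, 0.05, 4000))

    intake = 10.0  # new animal's intake at which a prediction is wanted

    # Posterior predictive: propagate parameter draws AND residual noise
    pred = post_a * intake + post_b + rng.normal(0.0, post_sigma)
    lo, hi = np.percentile(pred, [2.5, 97.5])
    print(f"posterior predictive 95% interval: [{lo:.2f}, {hi:.2f}]")
    ```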

  7. On the validity of the dispersion model of hepatic drug elimination when intravascular transit time densities are long-tailed.

    PubMed

    Weiss, M; Stedtler, C; Roberts, M S

    1997-09-01

    The dispersion model with mixed boundary conditions uses a single parameter, the dispersion number, to describe the hepatic elimination of xenobiotics and endogenous substances. An implicit a priori assumption of the model is that the transit time density of intravascular indicators is approximated by an inverse Gaussian distribution. This approximation is limited in that the model poorly describes the tail part of the hepatic outflow curves of vascular indicators. A sum of two inverse Gaussian functions is proposed as an alternative, more flexible empirical model for the transit time densities of vascular references. This model suggests that a more accurate description of the tail portion of vascular reference curves yields an elimination rate constant (or intrinsic clearance) which is 40% less than predicted by the dispersion model with mixed boundary conditions. The results emphasize the need to accurately describe outflow curves when using them as a basis for determining pharmacokinetic parameters with hepatic elimination models.
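
    For reference, the proposed empirical density is just a weighted sum of two inverse Gaussian functions. A sketch with invented parameter values (a fast main component plus a slower, more dispersed one for the long tail):

    ```python
    import numpy as np

    def inverse_gaussian_pdf(t, mu, lam):
        """Inverse Gaussian transit-time density with mean mu and
        shape parameter lam."""
        return np.sqrt(lam / (2.0 * np.pi * t**3)) * \
            np.exp(-lam * (t - mu) ** 2 / (2.0 * mu**2 * t))

    def two_ig_density(t, w, mu1, lam1, mu2, lam2):
        """Weighted sum of two inverse Gaussians: the slower, more
        dispersed component captures the tail of the outflow curve."""
        return w * inverse_gaussian_pdf(t, mu1, lam1) + \
            (1.0 - w) * inverse_gaussian_pdf(t, mu2, lam2)

    t = np.linspace(0.1, 60.0, 600)              # time after bolus, s
    f = two_ig_density(t, w=0.8, mu1=8.0, lam1=40.0, mu2=25.0, lam2=10.0)
    print((f * (t[1] - t[0])).sum())  # mass over the window, close to 1
    ```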

  8. Potential application of item-response theory to interpretation of medical codes in electronic patient records

    PubMed Central

    2011-01-01

    Background Electronic patient records are generally coded using extensive sets of codes but the significance of the utilisation of individual codes may be unclear. Item response theory (IRT) models are used to characterise the psychometric properties of items included in tests and questionnaires. This study asked whether the properties of medical codes in electronic patient records may be characterised through the application of item response theory models. Methods Data were provided by a cohort of 47,845 participants from 414 family practices in the UK General Practice Research Database (GPRD) with a first stroke between 1997 and 2006. Each eligible stroke code, out of a set of 202 OXMIS and Read codes, was coded as either recorded or not recorded for each participant. A two-parameter IRT model was fitted using marginal maximum likelihood estimation. Estimated parameters from the model were considered to characterise each code with respect to the latent trait of stroke diagnosis. The location parameter is referred to as a calibration parameter, while the slope parameter is referred to as a discrimination parameter. Results There were 79,874 stroke code occurrences available for analysis. Utilisation of codes varied between family practices with intraclass correlation coefficients of up to 0.25 for the most frequently used codes. IRT analyses were restricted to 110 Read codes. Calibration and discrimination parameters were estimated for 77 (70%) codes that were endorsed for 1,942 stroke patients. Parameters were not estimated for the remaining more frequently used codes. Discrimination parameter values ranged from 0.67 to 2.78, while calibration parameter values ranged from 4.47 to 11.58. The two-parameter model gave a better fit to the data than either the one- or three-parameter models. However, high chi-square values for about a fifth of the stroke codes were suggestive of poor item fit. Conclusion The application of item response theory models to coded electronic patient records might potentially contribute to identifying medical codes that offer poor discrimination or low calibration. This might indicate the need for improved coding sets or a requirement for improved clinical coding practice. However, in this study estimates were only obtained for a small proportion of participants and there was some evidence of poor model fit. There was also evidence of variation in the utilisation of codes between family practices raising the possibility that, in practice, properties of codes may vary for different coders. PMID:22176509
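
    The two-parameter logistic (2PL) model underlying this analysis is compact enough to state directly. The sketch below evaluates the endorsement probability for two hypothetical codes whose discrimination and calibration values are taken from the extremes of the ranges reported above; the pairing of the values is illustrative only.

    ```python
    import numpy as np

    def two_pl(theta, a, b):
        """Two-parameter logistic IRT model: probability that a code is
        recorded given latent trait theta, discrimination a, and
        calibration (location) b."""
        return 1.0 / (1.0 + np.exp(-a * (theta - b)))

    theta = np.linspace(-4.0, 12.0, 9)
    # Hypothetical codes: one discriminating and easily endorsed,
    # one weakly discriminating and rarely endorsed
    print(two_pl(theta, a=2.78, b=4.47))
    print(two_pl(theta, a=0.67, b=11.58))
    ```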

  9. Physiologically based pharmacokinetic modeling of a homologous series of barbiturates in the rat: a sensitivity analysis.

    PubMed

    Nestorov, I A; Aarons, L J; Rowland, M

    1997-08-01

    Sensitivity analysis studies the effects of the inherent variability and uncertainty in model parameters on the model outputs and may be a useful tool at all stages of the pharmacokinetic modeling process. The present study examined the sensitivity of a whole-body physiologically based pharmacokinetic (PBPK) model for the distribution kinetics of nine 5-n-alkyl-5-ethyl barbituric acids in arterial blood and 14 tissues (lung, liver, kidney, stomach, pancreas, spleen, gut, muscle, adipose, skin, bone, heart, brain, testes) after i.v. bolus administration to rats. The aims were to obtain new insights into the model used, to rank the model parameters involved according to their impact on the model outputs and to study the changes in the sensitivity induced by the increase in the lipophilicity of the homologues on ascending the series. Two approaches for sensitivity analysis have been implemented. The first, based on the Matrix Perturbation Theory, uses a sensitivity index defined as the normalized sensitivity of the 2-norm of the model compartmental matrix to perturbations in its entries. The second approach uses the traditional definition of the normalized sensitivity function as the relative change in a model state (a tissue concentration) corresponding to a relative change in a model parameter. Autosensitivity has been defined as sensitivity of a state to any of its parameters; cross-sensitivity as the sensitivity of a state to any other states' parameters. Using the two approaches, the sensitivity of representative tissue concentrations (lung, liver, kidney, stomach, gut, adipose, heart, and brain) to the following model parameters: tissue-to-unbound plasma partition coefficients, tissue blood flows, unbound renal and intrinsic hepatic clearance, permeability surface area product of the brain, have been analyzed. Both the tissues and the parameters were ranked according to their sensitivity and impact. The following general conclusions were drawn: (i) the overall sensitivity of the system to all parameters involved is small due to the weak connectivity of the system structure; (ii) the time course of both the auto- and cross-sensitivity functions for all tissues depends on the dynamics of the tissues themselves, e.g., the higher the perfusion of a tissue, the higher are both its cross-sensitivity to other tissues' parameters and the cross-sensitivities of other tissues to its parameters; and (iii) with a few exceptions, there is not a marked influence of the lipophilicity of the homologues on either the pattern or the values of the sensitivity functions. The estimates of the sensitivity and the subsequent tissue and parameter rankings may be extended to other drugs, sharing the same common structure of the whole body PBPK model, and having similar model parameters. Results show also that the computationally simple Matrix Perturbation Analysis should be used only when an initial idea about the sensitivity of a system is required. If comprehensive information regarding the sensitivity is needed, the numerically expensive Direct Sensitivity Analysis should be used.
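
    The second, traditional definition of the normalized sensitivity can be sketched with central finite differences on a toy flow-limited tissue model; the model and parameter values below are placeholders, not the paper's PBPK equations.

    ```python
    import numpy as np

    def tissue_conc(t, q, kp):
        """Toy one-compartment stand-in for a PBPK tissue concentration:
        flow-limited uptake with blood flow q and partition coefficient kp."""
        return kp * (1.0 - np.exp(-q * t / kp))

    def normalized_sensitivity(f, t, p, which, h=1e-4):
        """S(t) = (dC/dp) * (p / C): relative change in a state per
        relative change in a parameter, via central finite differences."""
        p_up, p_dn = list(p), list(p)
        p_up[which] *= (1.0 + h)
        p_dn[which] *= (1.0 - h)
        dcdp = (f(t, *p_up) - f(t, *p_dn)) / (2.0 * h * p[which])
        return dcdp * p[which] / f(t, *p)

    t = np.linspace(0.1, 10.0, 50)
    params = [1.5, 4.0]  # [q, kp], hypothetical values
    print(normalized_sensitivity(tissue_conc, t, params, which=1)[:5])
    ```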

  10. Beyond the Spin Model Approximation for Ramsey Spectroscopy

    DTIC Science & Technology

    2014-03-26

    Ramsey spectroscopy has become a powerful technique for probing...atomic systems without the need for ultralow temperatures. It is thus important to determine the parameter regime in which a pure interacting-spins picture

  11. Predicting fiber refractive index from a measured preform index profile

    NASA Astrophysics Data System (ADS)

    Kiiveri, P.; Koponen, J.; Harra, J.; Novotny, S.; Husu, H.; Ihalainen, H.; Kokki, T.; Aallos, V.; Kimmelma, O.; Paul, J.

    2018-02-01

    When producing fiber lasers and amplifiers, silica glass compositions consisting of three to six different materials are needed. Due to the varying needs of different applications, a substantial number of different glass compositions are used in the active fiber structures. Often it is not possible to find material parameters for theoretical models to estimate the thermal and mechanical properties of those glass compositions. This makes it challenging to accurately predict fiber core refractive index values, even if the preform index profile is measured. Usually the desired fiber refractive index value is achieved experimentally, which is expensive. To overcome this problem, we statistically analyzed the changes between the measured preform and fiber index values. We searched for correlations that would help to predict the Δn-value change from preform to fiber in a situation where we do not know the values of the glass material parameters that define the change. Our index change models were built using data collected from preforms and fibers made by the Direct Nanoparticle Deposition (DND) technology.

  12. The Hull Method for Selecting the Number of Common Factors

    ERIC Educational Resources Information Center

    Lorenzo-Seva, Urbano; Timmerman, Marieke E.; Kiers, Henk A. L.

    2011-01-01

    A common problem in exploratory factor analysis is how many factors need to be extracted from a particular data set. We propose a new method for selecting the number of major common factors: the Hull method, which aims to find a model with an optimal balance between model fit and number of parameters. We examine the performance of the method in an…

  13. Derivation of Continuum Models from An Agent-based Cancer Model: Optimization and Sensitivity Analysis.

    PubMed

    Voulgarelis, Dimitrios; Velayudhan, Ajoy; Smith, Frank

    2017-01-01

    Agent-based models provide a formidable tool for exploring the complex and emergent behaviour of biological systems and give accurate results, but with the drawback of needing a lot of computational power and time for subsequent analysis. On the other hand, equation-based models can more easily be used for complex analysis on a much shorter timescale. This paper formulates an ordinary differential equation (ODE) and stochastic differential equation (SDE) model to capture the behaviour of an existing agent-based model of tumour cell reprogramming, and applies it to the optimization of possible treatment as well as to dosage sensitivity analysis. For certain values of the parameter space a close match between the equation-based and agent-based models is achieved. The need for division of labour between the two approaches is explored. Copyright© Bentham Science Publishers; For any queries, please email at epub@benthamscience.org.
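
    As a flavour of why the equation-based counterpart is cheap to analyse, the sketch below integrates a toy logistic tumour ODE with a dose-dependent kill term; one run takes milliseconds, so dose sweeps for treatment optimization or sensitivity analysis are inexpensive. The equation and constants are invented, not the paper's model.

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    def tumour_ode(t, y, r, k, delta, dose):
        """Toy ODE counterpart of an agent-based tumour model: logistic
        growth of the cell count n, killed at a dose-dependent rate."""
        n = y[0]
        return [r * n * (1.0 - n / k) - delta * dose * n]

    sol = solve_ivp(tumour_ode, t_span=(0.0, 30.0), y0=[1e3],
                    args=(0.4, 1e6, 0.05, 2.0), dense_output=True)
    t = np.linspace(0.0, 30.0, 7)
    print(sol.sol(t)[0])  # cell counts; cheap to re-run across doses
    ```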

  14. Biomolecular Force Field Parameterization via Atoms-in-Molecule Electron Density Partitioning.

    PubMed

    Cole, Daniel J; Vilseck, Jonah Z; Tirado-Rives, Julian; Payne, Mike C; Jorgensen, William L

    2016-05-10

    Molecular mechanics force fields, which are commonly used in biomolecular modeling and computer-aided drug design, typically treat nonbonded interactions using a limited library of empirical parameters that are developed for small molecules. This approach does not account for polarization in larger molecules or proteins, and the parametrization process is labor-intensive. Using linear-scaling density functional theory and atoms-in-molecule electron density partitioning, environment-specific charges and Lennard-Jones parameters are derived directly from quantum mechanical calculations for use in biomolecular modeling of organic and biomolecular systems. The proposed methods significantly reduce the number of empirical parameters needed to construct molecular mechanics force fields, naturally include polarization effects in charge and Lennard-Jones parameters, and scale well to systems comprising thousands of atoms, including proteins. The feasibility and benefits of this approach are demonstrated by computing free energies of hydration, properties of pure liquids, and the relative binding free energies of indole and benzofuran to the L99A mutant of T4 lysozyme.
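
    For context, the derived quantities feed the standard nonbonded energy expression. A sketch of a single Coulomb-plus-Lennard-Jones pair interaction, with hypothetical AIM-derived values:

    ```python
    import numpy as np

    def pair_energy(r, q1, q2, sigma, epsilon):
        """Nonbonded pair energy with environment-specific charges and
        Lennard-Jones parameters (units and values are illustrative)."""
        coulomb = 332.06 * q1 * q2 / r          # kcal/mol, with e and Angstrom
        sr6 = (sigma / r) ** 6
        lj = 4.0 * epsilon * (sr6**2 - sr6)
        return coulomb + lj

    # Hypothetical parameters for one atom pair at 3.5 Angstrom separation
    print(pair_energy(r=3.5, q1=-0.30, q2=0.15, sigma=3.4, epsilon=0.10))
    ```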

  15. A combined reconstruction-classification method for diffuse optical tomography.

    PubMed

    Hiltunen, P; Prince, S J D; Arridge, S

    2009-11-07

    We present a combined classification and reconstruction algorithm for diffuse optical tomography (DOT). DOT is a nonlinear ill-posed inverse problem. Therefore, some regularization is needed. We present a mixture of Gaussians prior, which regularizes the DOT reconstruction step. During each iteration, the parameters of a mixture model are estimated. These associate each reconstructed pixel with one of several classes based on the current estimate of the optical parameters. This classification is exploited to form a new prior distribution to regularize the reconstruction step and update the optical parameters. The algorithm can be described as an iteration between an optimization scheme with zeroth-order variable mean and variance Tikhonov regularization and an expectation-maximization scheme for estimation of the model parameters. We describe the algorithm in a general Bayesian framework. Results from simulated test cases and phantom measurements show that the algorithm enhances the contrast of the reconstructed images with good spatial accuracy. The probabilistic classifications of each image contain only a few misclassified pixels.
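
    The alternation can be caricatured in a few lines: classify the current pixel estimates with a two-component Gaussian mixture, then pull each pixel toward its class mean as a stand-in for the Tikhonov-regularized reconstruction update (a real DOT step would of course also fit the measured boundary data). Everything below is an invented illustration.

    ```python
    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(2)

    # Hypothetical current estimate of an optical parameter per pixel
    mu_a = np.concatenate([rng.normal(0.01, 0.002, 500),   # background
                           rng.normal(0.05, 0.005, 100)])  # inclusion

    for _ in range(5):  # outline of the reconstruct/classify alternation
        # Classification step: two-class Gaussian mixture over pixel values
        gmm = GaussianMixture(n_components=2).fit(mu_a.reshape(-1, 1))
        labels = gmm.predict(mu_a.reshape(-1, 1))
        prior_mean = gmm.means_[labels, 0]
        # Reconstruction surrogate: zeroth-order Tikhonov pull toward the
        # class mean supplied by the current mixture estimate
        alpha = 0.3
        mu_a = (1.0 - alpha) * mu_a + alpha * prior_mean

    print(gmm.means_.ravel())  # estimated class means
    ```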

  16. Surveying implicit solvent models for estimating small molecule absolute hydration free energies

    PubMed Central

    Knight, Jennifer L.

    2011-01-01

    Implicit solvent models are powerful tools in accounting for the aqueous environment at a fraction of the computational expense of explicit solvent representations. Here, we compare the ability of common implicit solvent models (TC, OBC, OBC2, GBMV, GBMV2, GBSW, GBSW/MS, GBSW/MS2 and FACTS) to reproduce experimental absolute hydration free energies for a series of 499 small neutral molecules that are modeled using AMBER/GAFF parameters and AM1-BCC charges. Given optimized surface tension coefficients for scaling the surface area term in the nonpolar contribution, most implicit solvent models demonstrate reasonable agreement with extensive explicit solvent simulations (average difference 1.0-1.7 kcal/mol and R2=0.81-0.91) and with experimental hydration free energies (average unsigned errors=1.1-1.4 kcal/mol and R2=0.66-0.81). Chemical classes of compounds are identified that need further optimization of their ligand force field parameters and others that require improvement in the physical parameters of the implicit solvent models themselves. More sophisticated nonpolar models are also likely necessary to more effectively represent the underlying physics of solvation and take the quality of hydration free energies estimated from implicit solvent models to the next level. PMID:21735452

  17. Ensuring congruency in multiscale modeling: towards linking agent based and continuum biomechanical models of arterial adaptation.

    PubMed

    Hayenga, Heather N; Thorne, Bryan C; Peirce, Shayn M; Humphrey, Jay D

    2011-11-01

    There is a need to develop multiscale models of vascular adaptations to understand tissue-level manifestations of cellular level mechanisms. Continuum-based biomechanical models are well suited for relating blood pressures and flows to stress-mediated changes in geometry and properties, but less so for describing underlying mechanobiological processes. Discrete stochastic agent-based models are well suited for representing biological processes at a cellular level, but not for describing tissue-level mechanical changes. We present here a conceptually new approach to facilitate the coupling of continuum and agent-based models. Because of ubiquitous limitations in both the tissue- and cell-level data from which one derives constitutive relations for continuum models and rule-sets for agent-based models, we suggest that model verification should enforce congruency across scales. That is, multiscale model parameters initially determined from data sets representing different scales should be refined, when possible, to ensure that common outputs are consistent. Potential advantages of this approach are illustrated by comparing simulated aortic responses to a sustained increase in blood pressure predicted by continuum and agent-based models both before and after instituting a genetic algorithm to refine 16 objectively bounded model parameters. We show that congruency-based parameter refinement not only yielded increased consistency across scales, it also yielded predictions that are closer to in vivo observations.

  18. Generation of High Resolution Land Surface Parameters in the Community Land Model

    NASA Astrophysics Data System (ADS)

    Ke, Y.; Coleman, A. M.; Wigmosta, M. S.; Leung, L.; Huang, M.; Li, H.

    2010-12-01

    The Community Land Model (CLM) is the land surface model used for the Community Atmosphere Model (CAM) and the Community Climate System Model (CCSM). It examines the physical, chemical, and biological processes across a variety of spatial and temporal scales. Currently, efforts are being made to improve the spatial resolution of the CLM, in part, to represent finer scale hydrologic characteristics. Current land surface parameters of CLM4.0, in particular plant functional types (PFT) and leaf area index (LAI), are generated from MODIS and calculated at a 0.05 degree resolution. These MODIS-derived land surface parameters have also been aggregated to coarser resolutions (e.g., 0.5, 1.0 degrees). To evaluate the response of CLM across various spatial scales, higher spatial resolution land surface parameters need to be generated. In this study we examine the use of Landsat TM/ETM+ imagery and data fusion techniques for generating land surface parameters at a 1 km resolution within the Pacific Northwest United States. Land cover types and PFTs are classified based on Landsat multi-season spectral information, DEM, National Land Cover Database (NLCD) and the USDA-NASS Crop Data Layer (CDL). For each PFT, relationships between MOD15A2 high-quality LAI values, Landsat-based vegetation indices, climate variables, terrain, and laser-altimeter derived vegetation height are used to generate monthly LAI values at a 30 m resolution. The high-resolution PFT and LAI data are aggregated to create a 1 km model grid resolution. An evaluation and comparison of CLM land surface response at both fine and moderate scale is presented.
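
    The final aggregation step is simple block averaging. A sketch, with an invented 30 m LAI raster and a nominal 33 x 33 block standing in for a roughly 1 km cell:

    ```python
    import numpy as np

    def block_mean(field, factor):
        """Aggregate a fine-resolution raster (e.g. ~30 m LAI) to a coarser
        model grid by averaging non-overlapping factor x factor blocks."""
        ny, nx = field.shape
        ny, nx = ny - ny % factor, nx - nx % factor  # trim ragged edges
        trimmed = field[:ny, :nx]
        return trimmed.reshape(ny // factor, factor,
                               nx // factor, factor).mean(axis=(1, 3))

    lai_30m = np.random.default_rng(3).uniform(0.0, 6.0, size=(990, 990))
    lai_1km = block_mean(lai_30m, factor=33)  # 33 cells x 30 m ~ 1 km
    print(lai_1km.shape)
    ```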

  19. Land and Water Use Characteristics and Human Health Input Parameters for use in Environmental Dosimetry and Risk Assessments at the Savannah River Site. 2016 Update

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jannik, G. Tim; Hartman, Larry; Stagich, Brooke

    Operations at the Savannah River Site (SRS) result in releases of small amounts of radioactive materials to the atmosphere and to the Savannah River. For regulatory compliance purposes, potential offsite radiological doses are estimated annually using computer models that follow U.S. Nuclear Regulatory Commission (NRC) regulatory guides. Within the regulatory guides, default values are provided for many of the dose model parameters, but the use of applicant site-specific values is encouraged. Detailed surveys of land-use and water-use parameters were conducted in 1991 and 2010. They are being updated in this report. These parameters include local characteristics of meat, milk and vegetable production; river recreational activities; and meat, milk and vegetable consumption rates, as well as other human usage parameters required in the SRS dosimetry models. In addition, the preferred elemental bioaccumulation factors and transfer factors (to be used in human health exposure calculations at SRS) are documented. The intent of this report is to establish a standardized source for these parameters that is up to date with existing data, and that is maintained via review of future-issued national references (to evaluate the need for changes as new information is released). These reviews will continue to be added to this document by revision.

  20. Land and Water Use Characteristics and Human Health Input Parameters for use in Environmental Dosimetry and Risk Assessments at the Savannah River Site 2017 Update

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jannik, T.; Stagich, B.

    Operations at the Savannah River Site (SRS) result in releases of relatively small amounts of radioactive materials to the atmosphere and to the Savannah River. For regulatory compliance purposes, potential offsite radiological doses are estimated annually using computer models that follow U.S. Nuclear Regulatory Commission (NRC) regulatory guides. Within the regulatory guides, default values are provided for many of the dose model parameters, but the use of site-specific values is encouraged. Detailed surveys of land-use and water-use parameters were conducted in 1991, 2008, 2010, and 2016 and are being concurred with or updated in this report. These parameters include local characteristics of meat, milk, and vegetable production; river recreational activities; and meat, milk, and vegetable consumption rates, as well as other human usage parameters required in the SRS dosimetry models. In addition, the preferred elemental bioaccumulation factors and transfer factors (to be used in human health exposure calculations at SRS) are documented. The intent of this report is to establish a standardized source for these parameters that is up to date with existing data, and that is maintained via review of future-issued national references (to evaluate the need for changes as new information is released). These reviews will continue to be added to this document by revision.

  1. Visual Basic, Excel-based fish population modeling tool - The pallid sturgeon example

    USGS Publications Warehouse

    Moran, Edward H.; Wildhaber, Mark L.; Green, Nicholas S.; Albers, Janice L.

    2016-02-10

    The model presented in this report is a spreadsheet-based model using Visual Basic for Applications within Microsoft Excel (http://dx.doi.org/10.5066/F7057D0Z) prepared in cooperation with the U.S. Army Corps of Engineers and U.S. Fish and Wildlife Service. It uses the same model structure and, initially, parameters as used by Wildhaber and others (2015) for pallid sturgeon. The difference between the model structure used for this report and that used by Wildhaber and others (2015) is that variance is not partitioned. For the model of this report, all variance is applied at the iteration and time-step levels of the model. Wildhaber and others (2015) partition variance into parameter variance (uncertainty about the value of a parameter itself) applied at the iteration level and temporal variance (uncertainty caused by random environmental fluctuations with time) applied at the time-step level. They included implicit individual variance (uncertainty caused by differences between individuals) within the time-step level. The interface developed for the model of this report is designed to allow the user the flexibility to change population model structure and parameter values and uncertainty separately for every component of the model. This flexibility makes the modeling tool potentially applicable to any fish species; however, the flexibility inherent in this modeling tool makes it possible for the user to obtain spurious outputs. The value and reliability of the model outputs are only as good as the model inputs. Using this modeling tool with improper or inaccurate parameter values, or for species for which the structure of the model is inappropriate, could lead to untenable management decisions. By facilitating fish population modeling, this modeling tool allows the user to evaluate a range of management options and implications. The goal is to provide a user-friendly tool for developing fish population models useful to natural resource managers in informing their decision-making processes; however, as with all population models, caution is needed, and a full understanding of the limitations of a model and the veracity of user-supplied parameters should always be considered when using such model output in the management of any species.

  2. Simulation of the detonation process of an ammonium nitrate based emulsion explosive using the lee-tarver reactive flow model

    NASA Astrophysics Data System (ADS)

    Ribeiro, José B.; Silva, Cristóvão; Mendes, Ricardo; Plaksin, I.; Campos, Jose

    2012-03-01

    The use of emulsion explosives [EEx] for processing materials (compaction, welding and forming) requires the ability to perform detailed simulations of their detonation process [DP]. Detailed numerical simulations of the DP of this kind of explosive, characterized by a finite reaction zone thickness, are thought to be suitably performed using the Lee-Tarver reactive flow model. In this work a real-coded genetic algorithm methodology was used to estimate the 15 parameters of the reaction rate equation [RRE] of that model for a particular EEx. This methodology makes it possible, in a single optimization procedure, using only one experimental result and without the need for any starting solution, to seek the 15 parameters of the RRE that fit the numerical results to the experimental ones. Mass averaging and the Plate-Gap Model were used for the determination of the shock data used in the unreacted explosive JWL EoS assessment, and the thermochemical code THOR provided the data used in the detonation products JWL EoS assessment. The obtained parameters allow a reasonable description of the experimental data.
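
    A bare-bones real-coded genetic algorithm of the kind described is sketched below; the misfit function is a placeholder for the actual comparison between simulated and measured detonation records, and all GA settings are invented.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    def misfit(params):
        """Stand-in for the real cost: mismatch between a simulated and a
        measured detonation record for a candidate parameter vector."""
        target = np.linspace(0.2, 0.9, params.size)  # hypothetical optimum
        return np.sum((params - target) ** 2)

    n_par, pop_size, n_gen = 15, 60, 200     # 15 RRE parameters
    low, high = 0.0, 1.0                     # normalized parameter bounds
    pop = rng.uniform(low, high, (pop_size, n_par))

    for _ in range(n_gen):
        fitness = np.array([misfit(ind) for ind in pop])
        order = np.argsort(fitness)
        parents = pop[order[:pop_size // 2]]            # truncation selection
        # Real-coded blend crossover plus Gaussian mutation
        idx = rng.integers(0, len(parents), (pop_size, 2))
        w = rng.random((pop_size, n_par))
        children = w * parents[idx[:, 0]] + (1.0 - w) * parents[idx[:, 1]]
        children += rng.normal(0.0, 0.02, children.shape)
        pop = np.clip(children, low, high)
        pop[0] = parents[0]                             # elitism

    print(misfit(pop[0]))  # best misfit found
    ```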

  3. Using a cloud to replenish parched groundwater modeling efforts.

    PubMed

    Hunt, Randall J; Luchette, Joseph; Schreuder, Willem A; Rumbaugh, James O; Doherty, John; Tonkin, Matthew J; Rumbaugh, Douglas B

    2010-01-01

    Groundwater models can be improved by introduction of additional parameter flexibility and simultaneous use of soft-knowledge. However, these sophisticated approaches have high computational requirements. Cloud computing provides unprecedented access to computing power via the Internet to facilitate the use of these techniques. A modeler can create, launch, and terminate "virtual" computers as needed, paying by the hour, and save machine images for future use. Such cost-effective and flexible computing power empowers groundwater modelers to routinely perform model calibration and uncertainty analysis in ways not previously possible.

  4. Using a cloud to replenish parched groundwater modeling efforts

    USGS Publications Warehouse

    Hunt, Randall J.; Luchette, Joseph; Schreuder, Willem A.; Rumbaugh, James O.; Doherty, John; Tonkin, Matthew J.; Rumbaugh, Douglas B.

    2010-01-01

    Groundwater models can be improved by introduction of additional parameter flexibility and simultaneous use of soft-knowledge. However, these sophisticated approaches have high computational requirements. Cloud computing provides unprecedented access to computing power via the Internet to facilitate the use of these techniques. A modeler can create, launch, and terminate “virtual” computers as needed, paying by the hour, and save machine images for future use. Such cost-effective and flexible computing power empowers groundwater modelers to routinely perform model calibration and uncertainty analysis in ways not previously possible.

  5. Optimization of microphysics in the Unified Model, using the Micro-genetic algorithm.

    NASA Astrophysics Data System (ADS)

    Jang, J.; Lee, Y.; Lee, H.; Lee, J.; Joo, S.

    2016-12-01

    This study focuses on parameter optimization of the microphysics in the Unified Model (UM) using the micro-genetic algorithm (Micro-GA). Optimization of the UM microphysics is needed because the microphysics in a Numerical Weather Prediction (NWP) model is important for Quantitative Precipitation Forecasting (QPF). The Micro-GA searches for optimal parameters on the basis of a fitness function. Five target parameters are chosen: x1 and x2, related to the raindrop size distribution; the cloud-rain correlation coefficient; the surface droplet number; and the droplet taper height. The fitness function is based on skill scores, namely BIAS and the Critical Success Index (CSI). An interface between the UM and the Micro-GA is developed and applied to three precipitation cases in Korea: (i) heavy rainfall in the southern area caused by typhoon NAKRI, (ii) heavy rainfall in the Youngdong area, and (iii) heavy rainfall in the Seoul metropolitan area. When the optimized result is compared to the control result (using the UM default values, CNTL), the optimization leads to improvements in the precipitation forecast, especially for heavy rainfall at late forecast times. We also analyze the skill scores of the precipitation forecasts at various thresholds for CNTL, the optimized result, and experiments optimizing each of the five parameters individually. Generally, the improvement is maximized when all five optimized parameters are used simultaneously. Therefore, this study demonstrates the ability to improve Korean precipitation forecasts by optimizing the microphysics in the UM.
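
    The two skill scores driving the fitness function come from a standard 2x2 contingency table. A sketch with synthetic rainfall values (the threshold and data are invented):

    ```python
    import numpy as np

    def bias_and_csi(forecast, observed, threshold):
        """Categorical skill scores from a 2x2 contingency table:
        BIAS = (hits + false alarms) / (hits + misses),
        CSI  = hits / (hits + misses + false alarms)."""
        f = forecast >= threshold
        o = observed >= threshold
        hits = np.sum(f & o)
        false_alarms = np.sum(f & ~o)
        misses = np.sum(~f & o)
        bias = (hits + false_alarms) / (hits + misses)
        csi = hits / (hits + misses + false_alarms)
        return bias, csi

    rng = np.random.default_rng(5)
    fcst = rng.gamma(2.0, 5.0, 1000)            # hypothetical rainfall, mm
    obs = fcst + rng.normal(0.0, 4.0, 1000)     # noisy "observations"
    print(bias_and_csi(fcst, obs, threshold=10.0))
    ```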

  6. Optimization of GATE and PHITS Monte Carlo code parameters for uniform scanning proton beam based on simulation with FLUKA general-purpose code

    NASA Astrophysics Data System (ADS)

    Kurosu, Keita; Takashina, Masaaki; Koizumi, Masahiko; Das, Indra J.; Moskvin, Vadim P.

    2014-10-01

    Although the three general-purpose Monte Carlo (MC) simulation tools Geant4, FLUKA and PHITS have been used extensively, differences in their calculation results have been reported. The major causes are differences in the implementation of the physical models, the preset value of the ionization potential, and the definition of the maximum step size. In order to achieve artifact-free MC simulation, an optimized parameter list for each simulation system is required. Several authors have already proposed optimized lists, but those studies were performed with simple systems such as a water phantom alone. Since particle beams undergo transport, interaction and electromagnetic processes during beam delivery, establishing an optimized parameter list for the whole beam delivery system is of major importance. The purpose of this study was to determine the optimized parameter lists for GATE and PHITS using a computational model of a proton treatment nozzle. The simulation was performed with a broad scanning proton beam. The influences of the customizing parameters on the percentage depth dose (PDD) profile and the proton range were investigated by comparison with the results of FLUKA, and the optimal parameters were then determined. The PDD profile and the proton range obtained from our optimized parameter lists showed different characteristics from the results obtained with the simple system. This led to the conclusion that the physical model, particle transport mechanics and different geometry-based descriptions need accurate customization when planning computational experiments for artifact-free MC simulation.

  7. Inference of reaction rate parameters based on summary statistics from experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Khalil, Mohammad; Chowdhary, Kamaljit Singh; Safta, Cosmin

    Here, we present the results of an application of Bayesian inference and maximum entropy methods for the estimation of the joint probability density for the Arrhenius rate parameters of the rate coefficient of the H2/O2-mechanism chain branching reaction H + O2 → OH + O. Available published data is in the form of summary statistics in terms of nominal values and error bars of the rate coefficient of this reaction at a number of temperature values obtained from shock-tube experiments. Our approach relies on generating data, in this case OH concentration profiles, consistent with the given summary statistics, using Approximate Bayesian Computation methods and a Markov Chain Monte Carlo procedure. The approach permits the forward propagation of parametric uncertainty through the computational model in a manner that is consistent with the published statistics. A consensus joint posterior on the parameters is obtained by pooling the posterior parameter densities given each consistent data set. To expedite this process, we construct efficient surrogates for the OH concentration using a combination of Padé and polynomial approximants. These surrogate models adequately represent forward model observables and their dependence on input parameters and are computationally efficient to allow their use in the Bayesian inference procedure. We also utilize Gauss-Hermite quadrature with Gaussian proposal probability density functions for moment computation resulting in orders of magnitude speedup in data likelihood evaluation. Despite the strong non-linearity in the model, the consistent data sets all result in nearly Gaussian conditional parameter probability density functions. The technique also accounts for nuisance parameters in the form of Arrhenius parameters of other rate coefficients with prescribed uncertainty. The resulting pooled parameter probability density function is propagated through stoichiometric hydrogen-air auto-ignition computations to illustrate the need to account for correlation among the Arrhenius rate parameters of one reaction and across rate parameters of different reactions.

  8. Inference of reaction rate parameters based on summary statistics from experiments

    DOE PAGES

    Khalil, Mohammad; Chowdhary, Kamaljit Singh; Safta, Cosmin; ...

    2016-10-15

    Here, we present the results of an application of Bayesian inference and maximum entropy methods for the estimation of the joint probability density for the Arrhenius rate parameters of the rate coefficient of the H2/O2-mechanism chain branching reaction H + O2 → OH + O. Available published data is in the form of summary statistics in terms of nominal values and error bars of the rate coefficient of this reaction at a number of temperature values obtained from shock-tube experiments. Our approach relies on generating data, in this case OH concentration profiles, consistent with the given summary statistics, using Approximate Bayesian Computation methods and a Markov Chain Monte Carlo procedure. The approach permits the forward propagation of parametric uncertainty through the computational model in a manner that is consistent with the published statistics. A consensus joint posterior on the parameters is obtained by pooling the posterior parameter densities given each consistent data set. To expedite this process, we construct efficient surrogates for the OH concentration using a combination of Padé and polynomial approximants. These surrogate models adequately represent forward model observables and their dependence on input parameters and are computationally efficient to allow their use in the Bayesian inference procedure. We also utilize Gauss-Hermite quadrature with Gaussian proposal probability density functions for moment computation resulting in orders of magnitude speedup in data likelihood evaluation. Despite the strong non-linearity in the model, the consistent data sets all result in nearly Gaussian conditional parameter probability density functions. The technique also accounts for nuisance parameters in the form of Arrhenius parameters of other rate coefficients with prescribed uncertainty. The resulting pooled parameter probability density function is propagated through stoichiometric hydrogen-air auto-ignition computations to illustrate the need to account for correlation among the Arrhenius rate parameters of one reaction and across rate parameters of different reactions.
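
    As a small illustration of the final propagation step, the sketch below draws correlated (ln A, Ea) samples from a stand-in Gaussian posterior for one reaction and pushes them through the modified Arrhenius expression; the numbers are invented, not the paper's posterior.

    ```python
    import numpy as np

    rng = np.random.default_rng(6)
    R = 8.314  # J/(mol K)

    def arrhenius(T, A, n, Ea):
        """Modified Arrhenius rate coefficient k(T) = A * T**n * exp(-Ea/RT)."""
        return A * T**n * np.exp(-Ea / (R * T))

    # Hypothetical correlated posterior for [ln A, Ea] of one reaction
    mean = np.array([23.0, 70e3])
    cov = np.array([[0.04, 120.0],
                    [120.0, 4e6]])           # off-diagonal term = correlation
    draws = rng.multivariate_normal(mean, cov, 5000)

    T = 1500.0
    k = arrhenius(T, np.exp(draws[:, 0]), 0.0, draws[:, 1])
    print(np.median(k), np.percentile(k, [2.5, 97.5]))
    ```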

  9. A generalized Lyapunov theory for robust root clustering of linear state space models with real parameter uncertainty

    NASA Technical Reports Server (NTRS)

    Yedavalli, R. K.

    1992-01-01

    The problem of analyzing and designing controllers for linear systems subject to real parameter uncertainty is considered. An elegant, unified theory for robust eigenvalue placement is presented for a class of D-regions defined by algebraic inequalities by extending the nominal matrix root clustering theory of Gutman and Jury (1981) to linear uncertain time systems. The author presents explicit conditions for matrix root clustering for different D-regions and establishes the relationship between the eigenvalue migration range and the parameter range. The bounds are all obtained by one-shot computation in the matrix domain and do not need any frequency sweeping or parameter gridding. The method uses the generalized Lyapunov theory for getting the bounds.

  10. Interaction-induced effects on Bose-Hubbard parameters

    NASA Astrophysics Data System (ADS)

    Kremer, Mark; Sachdeva, Rashi; Benseny, Albert; Busch, Thomas

    2017-12-01

    We study the effects of repulsive on-site interactions on the broadening of the localized Wannier functions used for calculating the parameters to describe ultracold atoms in optical lattices. For this, we replace the common single-particle Wannier functions, which do not contain any information about the interactions, by two-particle Wannier functions obtained from an exact solution which takes the interactions into account. We then use these interaction-dependent basis functions to calculate the Bose-Hubbard model parameters, showing that they are substantially different both at low and high lattice depths from the ones calculated using single-particle Wannier functions. Our results suggest that density effects are not negligible for many parameter ranges and need to be taken into account in metrology experiments.

  11. FitSKIRT: genetic algorithms to automatically fit dusty galaxies with a Monte Carlo radiative transfer code

    NASA Astrophysics Data System (ADS)

    De Geyter, G.; Baes, M.; Fritz, J.; Camps, P.

    2013-02-01

    We present FitSKIRT, a method to efficiently fit radiative transfer models to UV/optical images of dusty galaxies. These images have the advantage of better spatial resolution compared to FIR/submm data. FitSKIRT uses the GAlib genetic algorithm library to optimize the output of the SKIRT Monte Carlo radiative transfer code. Genetic algorithms prove to be a valuable tool in handling the multi-dimensional search space as well as the noise induced by the random nature of the Monte Carlo radiative transfer code. FitSKIRT is tested on artificial images of a simulated edge-on spiral galaxy, where we gradually increase the number of fitted parameters. We find that we can recover all model parameters, even if all 11 model parameters are left unconstrained. Finally, we apply the FitSKIRT code to a V-band image of the edge-on spiral galaxy NGC 4013. This galaxy has been modeled previously by other authors using different combinations of radiative transfer codes and optimization methods. Given the different models and techniques and the complexity and degeneracies in the parameter space, we find reasonable agreement between the different models. We conclude that the FitSKIRT method allows comparison between different models and geometries in a quantitative manner and minimizes the need for human intervention and biasing. The high level of automation makes it an ideal tool for use on larger sets of observed data.

  12. Virtual Ionosonde Construction by using ITS and IRI-2012 models

    NASA Astrophysics Data System (ADS)

    Kabasakal, Mehmet; Toker, Cenk

    2016-07-01

    An ionosonde is a kind of radar used to examine several properties of the ionosphere, including the electron density and drift velocity. An ionosonde is an expensive device and its installation requires special expertise and a proper area clear of sources of radio interference. In order to overcome the difficulties of installing ionosonde hardware, the target of this study is to construct a virtual ionosonde based on communication channel models, where the model parameters are determined by ray tracing obtained with the PHaRLAP software and the International Reference Ionosphere (IRI-2012) model. Although narrowband high frequency (HF) communication models have been widely used to represent the behaviour of the radio channel, they are applicable to a limited set of actual propagation conditions, and wideband models are needed to better understand the HF channel. In 1997, the Institute for Telecommunication Sciences (ITS) developed a wideband HF ionospheric model, the so-called ITS model; however, it has some restrictions in real-life applications. The ITS model parameters are grouped into two parts: the deterministic and the stochastic parameters. The deterministic parameters are the delay time (τ_c) of each reflection path based on the penetration frequency (f_p), the height (h_0) of the maximum electron density, and the half thickness (σ) of the reflective layer. The stochastic parameters, the delay spread (σ_τ), delay rise time (σ_c), Doppler spread (σ_D) and Doppler shift (f_s), are used to calculate the impulse response of the channel. These parameters are generally difficult to obtain and are based on measured data which may not be available in all cases. In order to obtain these parameters, we propose to integrate the PHaRLAP ray tracing toolbox and the IRI-2012 model. When Total Electron Content (TEC) estimates obtained from GNSS measurements are input to IRI-2012, the model generates electron density profiles close to the actual profiles, which are used for ray tracing between the user-defined geographical coordinates. Then, the ITS model parameters are obtained from both the ray tracing and the IRI-2012 model. Finally, an ionosonde signal waveform is transmitted through the channel obtained from the ITS model to generate the ionogram. As an application, oblique sounding between two points is simulated with the ITS channel model. M-sequences, Barker sequences and complementary sequences are used as sounding waveforms. The effects of the channel on the oblique ionogram and the sounding waveform characteristics are also investigated.

  13. Transport properties and equation of state for HCNO mixtures in and beyond the warm dense matter regime

    DOE PAGES

    Ticknor, Christopher; Collins, Lee A.; Kress, Joel D.

    2015-08-04

    We present simulations of a four component mixture of HCNO with orbital free molecular dynamics (OFMD). These simulations were conducted for 5–200 eV with densities ranging between 0.184 and 36.8 g/cm3. We extract the equation of state from the simulations and compare to average atom models. We found that we only need to add a cold curve model to find excellent agreement. In addition, we studied mass transport properties. We present fits to the self-diffusion and shear viscosity that are able to reproduce the transport properties over the parameter range studied. We compare these OFMD results to models based on the Coulomb coupling parameter and one-component plasmas.

  14. A simulation model for predicting the temperature during the application of MR-guided focused ultrasound for stroke treatment using pulsed ultrasound

    NASA Astrophysics Data System (ADS)

    Hadjisavvas, V.; Damianou, C.

    2011-09-01

    In this paper a simulation model for predicting the temperature during the application of MR-guided focused ultrasound for stroke treatment using pulsed ultrasound is presented. A single-element spherically focused transducer of 5 cm diameter, focusing at 10 cm and operating at either 0.5 MHz or 1 MHz, was considered. The power field was estimated using the KZK model. The temperature was estimated using the bioheat equation. The goal was to extract the acoustic parameters (power, pulse duration, duty factor and pulse repetition frequency) that maintain a temperature increase of less than 1 °C during the application of a pulsed ultrasound protocol. It was found that the temperature change increases linearly with duty factor. The higher the power, the lower the duty factor needed to keep the temperature change to the safe limit of 1 °C. The higher the frequency, the lower the duty factor needed to keep the temperature change to the safe limit of 1 °C. Finally, the deeper the target, the higher the duty factor needed to keep the temperature change to the safe limit of 1 °C. The simulation model was tested in brain tissue during the application of pulsed ultrasound and the measured temperature was in close agreement with the simulated temperature. This simulation model is considered to be a very useful tool for providing acoustic parameters (frequency, power, duty factor, pulse repetition frequency) during the application of pulsed ultrasound at various depths in tissue so that a safe temperature is maintained during the treatment. This model could be tested soon during stroke clinical trials.

  15. Comparison of retention models for polymers 1. Poly(ethylene glycol)s.

    PubMed

    Bashir, Mubasher A; Radke, Wolfgang

    2006-10-27

    The suitability of three different retention models to predict the retention times of poly(ethylene glycol)s (PEGs) in gradient and isocratic chromatography was investigated. The models investigated were the linear solvent strength model (LSSM) and the quadratic solvent strength model (QSSM). In addition, a model describing the retention behaviour of polymers was extended to account for gradient elution (PM). It was found that all models are suited to properly predict gradient retention volumes provided the extraction of the analyte-specific parameters is performed from gradient experiments as well. The LSSM and QSSM in principle cannot describe retention behaviour under critical or SEC conditions. Since the PM is designed to cover all three modes of polymer chromatography, it is superior to the other models. However, the determination of the analyte-specific parameters, which are needed to calibrate the retention behaviour, strongly depends on a suitable selection of initial experiments. A useful strategy for a purposeful selection of these calibration experiments is proposed.

  16. Deformation analysis of polymers composites: rheological model involving time-based fractional derivative

    NASA Astrophysics Data System (ADS)

    Zhou, H. W.; Yi, H. Y.; Mishnaevsky, L.; Wang, R.; Duan, Z. Q.; Chen, Q.

    2017-05-01

    A modeling approach to the time-dependent properties of Glass Fiber Reinforced Polymer (GFRP) composites is of special interest for the quantitative description of their long-term behavior. An electronic creep machine is employed to investigate the time-dependent deformation of four specimens of dog-bone-shaped GFRP composites at various stress levels. A negative exponent function based on structural changes is introduced to describe the damage evolution of material properties during the creep test. Accordingly, a new creep constitutive equation, referred to as the fractional derivative Maxwell model, is suggested to characterize the time-dependent behavior of GFRP composites by replacing the Newtonian dashpot with the Abel dashpot in the classical Maxwell model. The analytic solution for the fractional derivative Maxwell model is given and the relative parameters are determined. The results estimated by the proposed fractional derivative Maxwell model are in good agreement with the experimental data. It is shown that the new creep constitutive model needs only a few parameters to represent various time-dependent behaviors.
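
    The creep response of a fractional-derivative Maxwell model reduces to a power law in time. A sketch of the resulting creep strain, assuming the standard Abel-dashpot compliance form and invented GFRP-like constants:

    ```python
    import numpy as np
    from scipy.special import gamma

    def creep_strain(t, sigma0, E, eta, alpha):
        """Creep of a fractional-derivative Maxwell model: replacing the
        Newtonian dashpot by an Abel dashpot gives the power-law compliance
        eps(t) = sigma0 * (1/E + t**alpha / (eta * Gamma(1 + alpha)))."""
        return sigma0 * (1.0 / E + t**alpha / (eta * gamma(1.0 + alpha)))

    t = np.linspace(0.0, 1e4, 6)    # time under constant load, s
    # Hypothetical GFRP-like constants; alpha in (0, 1) tunes the memory
    print(creep_strain(t, sigma0=40.0, E=2.0e4, eta=1.0e6, alpha=0.35))
    ```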

  17. SU-E-J-161: Inverse Problems for Optical Parameters in Laser Induced Thermal Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fahrenholtz, SJ; Stafford, RJ; Fuentes, DT

    Purpose: Magnetic resonance-guided laser-induced thermal therapy (MRgLITT) is investigated as a neurosurgical intervention for oncological applications throughout the body in active post-market studies. Real-time MR temperature imaging is used to monitor ablative thermal delivery in the clinic. Additionally, brain MRgLITT could improve through effective planning of laser fiber placement. Mathematical bioheat models have been extensively investigated but require reliable patient-specific physical parameter data, e.g. optical parameters. This abstract applies an inverse problem algorithm to characterize optical parameter data obtained from previous MRgLITT interventions. Methods: The implemented inverse problem has three primary components: a parameter-space search algorithm, a physics model, and training data. First, the parameter-space search algorithm uses a gradient-based quasi-Newton method to optimize the effective optical attenuation coefficient, μ_eff. A parameter reduction reduces the amount of optical parameter-space the algorithm must search. Second, the physics model is a simplified bioheat model for homogeneous tissue where closed-form Green's functions represent the exact solution. Third, the training data were temperature imaging data from 23 MRgLITT oncological brain ablations (980 nm wavelength) from seven different patients. Results: To three significant figures, the descriptive statistics for μ_eff were: mean 1470 m^-1, median 1360 m^-1, standard deviation 369 m^-1, minimum 933 m^-1 and maximum 2260 m^-1. The standard deviation normalized by the mean was 25.0%. The inverse problem took <30 minutes to optimize all 23 datasets. Conclusion: As expected, the inferred average is biased by the underlying physics model. However, the standard deviation normalized by the mean is smaller than literature values and indicates an increased precision in the characterization of the optical parameters needed to plan MRgLITT procedures. This investigation demonstrates the potential for the optimization and validation of more sophisticated bioheat models that incorporate the uncertainty of the data into the predictions, e.g. stochastic finite element methods.
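
    A toy version of the inversion is sketched below: a bounded quasi-Newton (L-BFGS-B) fit of μ_eff to synthetic radial temperature data, using a crude exponential-decay stand-in rather than the study's actual Green's-function bioheat solution. All values are invented apart from the 1470 m^-1 mean, which is reused here only to generate the synthetic data.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    r = np.linspace(0.002, 0.012, 30)      # radial distance from fiber, m

    def temperature_rise(mu_eff, power=10.0, k_t=0.5):
        """Hypothetical steady-state point-source stand-in:
        dT ~ P * exp(-mu_eff * r) / (4 * pi * k * r)."""
        return power * np.exp(-mu_eff * r) / (4.0 * np.pi * k_t * r)

    measured = temperature_rise(1470.0) + \
        np.random.default_rng(7).normal(0.0, 0.05, r.size)

    def objective(x):
        """Sum-of-squares misfit between model and 'measured' temperatures."""
        return np.sum((temperature_rise(x[0]) - measured) ** 2)

    fit = minimize(objective, x0=[1000.0], method="L-BFGS-B",
                   bounds=[(100.0, 5000.0)])
    print(fit.x[0])  # recovered mu_eff, 1/m
    ```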

  18. Transformation of Physical DVHs to Radiobiologically Equivalent Ones in Hypofractionated Radiotherapy Analyzing Dosimetric and Clinical Parameters: A Practical Approach for Routine Clinical Practice in Radiation Oncology

    PubMed Central

    Thrapsanioti, Zoi; Karanasiou, Irene; Platoni, Kalliopi; Efstathopoulos, Efstathios P.; Matsopoulos, George; Dilvoi, Maria; Patatoukas, George; Chaldeopoulos, Demetrios; Kelekis, Nikolaos; Kouloulias, Vassilis

    2013-01-01

    Purpose. The purpose of this study was to transform DVHs from physical to radiobiological ones as well as to evaluate their reliability by correlations of dosimetric and clinical parameters for 50 patients with prostate cancer and 50 patients with breast cancer, who were treated with Hypofractionated Radiotherapy. Methods and Materials. To achieve this transformation, we used both the linear-quadratic model (LQ model) and the Niemierko model. The outcome of radiobiological DVHs was correlated with acute toxicity score according to EORTC/RTOG criteria. Results. Concerning the prostate radiotherapy, there was a significant correlation between RTOG acute rectal toxicity and the D50 (P < 0.001) and V60 (P = 0.001) dosimetric parameters, calculated for α/β = 10 Gy. Moreover, concerning the breast radiotherapy, there was a significant correlation between RTOG skin toxicity and the V≥60 dosimetric parameter, calculated for both α/β = 2.3 Gy (P < 0.001) and α/β = 10 Gy (P < 0.001). Conclusions. Our proposed model seems reliable and user-friendly. Its reliability in terms of agreement with the presented acute radiation-induced toxicity was satisfactory. However, more patients are needed to extract safe conclusions. PMID:24348743
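
    As an illustration of the LQ-model step underlying such a transformation, each physical-DVH dose bin can be converted to the equivalent dose in 2-Gy fractions (EQD2); the helper below is a hypothetical sketch, not the authors' tool:

      def eqd2(total_dose_gy, dose_per_fraction_gy, alpha_beta_gy):
          # Equivalent dose in 2-Gy fractions from the linear-quadratic model:
          # EQD2 = D * (d + alpha/beta) / (2 + alpha/beta)
          return total_dose_gy * (dose_per_fraction_gy + alpha_beta_gy) / (2.0 + alpha_beta_gy)

      # Example: a DVH bin receiving 45 Gy at 3 Gy/fraction, evaluated with
      # alpha/beta = 10 Gy, maps to 48.75 Gy in 2-Gy fractions.
      print(eqd2(45.0, 3.0, alpha_beta_gy=10.0))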

  19. Evaluation of Different Dose-Response Models for High Hydrostatic Pressure Inactivation of Microorganisms

    PubMed Central

    2017-01-01

    Modeling of microbial inactivation by high hydrostatic pressure (HHP) requires a plot of the log microbial count or survival ratio versus time under a constant pressure and temperature. However, at low pressure and temperature values, very long holding times are needed to obtain measurable inactivation. Since holding time has a significant effect on the cost of HHP processing, it may be reasonable to fix the time at an appropriate value and quantify the inactivation with respect to pressure. Such a plot is called a dose-response curve, and it may be more useful than traditional inactivation modeling since short holding times with different pressure values can be selected and used for modeling HHP inactivation. For this purpose, 49 dose-response curves (with at least 4 log10 reduction, ≥5 data points including the atmospheric pressure value (P = 0.1 MPa), and holding time ≤10 min) for HHP inactivation of microorganisms obtained from published studies were fitted with four different models, namely the Discrete model, Shoulder model, Fermi equation, and Weibull model, and the pressure value needed for 5-log10 inactivation (P5) was calculated for each model. The Shoulder model and Fermi equation produced exactly the same parameter and P5 values, while the Discrete model produced similar or sometimes the exact same parameter values as the Fermi equation. The Weibull model produced the worst fit (the lowest adjusted determination coefficient (R²adj) and highest mean square error (MSE) values), while the Fermi equation had the best fit (the highest R²adj and lowest MSE values). The parameters of the models and the P5 values of each model can be useful for the further experimental design of HHP processing and for comparing the pressure resistance of different microorganisms. Further experiments can be done to verify the P5 values at given conditions. The procedure given in this study can also be extended to enzyme inactivation by HHP. PMID:28880255
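
    A sketch of how one such fit might look, assuming a Weibull-type dose-response parameterization log10 S = −(P/δ)^p (one of several parameterizations in the literature, not necessarily the paper's) and SciPy's curve_fit; P5 then follows in closed form:

      import numpy as np
      from scipy.optimize import curve_fit

      def weibull_log_survival(P, delta, p):
          # Weibull-type dose-response: log10 survival ratio vs. pressure (MPa).
          return -(P / delta) ** p

      # Hypothetical dose-response data: pressure (MPa) vs. log10(N/N0).
      P = np.array([0.1, 200.0, 300.0, 400.0, 500.0, 600.0])
      logS = np.array([0.0, -0.8, -2.1, -3.6, -5.2, -6.9])

      popt, pcov = curve_fit(weibull_log_survival, P, logS, p0=(400.0, 2.0))
      delta, p = popt

      # Pressure needed for a 5-log10 reduction: -(P5/delta)^p = -5.
      P5 = delta * 5.0 ** (1.0 / p)
      print(f"delta={delta:.1f} MPa, p={p:.2f}, P5={P5:.0f} MPa")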

  20. Proceedings of the Workshop on Computational Aspects in the Control of Flexible Systems, part 1

    NASA Technical Reports Server (NTRS)

    Taylor, Lawrence W., Jr. (Compiler)

    1989-01-01

    Control/Structures Integration program software needs, computer aided control engineering for flexible spacecraft, computer aided design, computational efficiency and capability, modeling and parameter estimation, and control synthesis and optimization software for flexible structures and robots are among the topics discussed.

  1. FIRST ORDER KINETIC GAS GENERATION MODEL PARAMETERS FOR WET LANDFILLS

    EPA Science Inventory

    Landfill gas is produced as a result of a sequence of physical, chemical, and biological processes occurring within an anaerobic landfill. Landfill operators, energy recovery project owners, regulators, and energy users need to be able to project the volume of gas produced and re...

  2. An Innovative Approach to Assess Quantity-Distance

    DTIC Science & Technology

    1992-08-01

    modeling of the geologic materials is the identification of the pertinent parameters which need to be accounted for (Bakhtar and DiBona, 1985; Bakhtar...Agency, Strategic Structures Division, Contract DNA 001-84-C-0435, Washington D.C., October 1986. Bakhtar, K. and G. DiBona. "Dynamic Loading

  3. Progress in remote sensing of global land surface heat fluxes and evaporations with a turbulent heat exchange parameterization method

    NASA Astrophysics Data System (ADS)

    Chen, Xuelong; Su, Bob

    2017-04-01

    Remote sensing provides an opportunity to observe the Earth's land surface at a much higher resolution than any GCM simulation. Due to the scarcity of information on land surface physical parameters, current GCMs still have large uncertainties in coupled land surface process modeling. One critical issue is the large number of parameters used in their land surface models. Remote sensing of land surface spectral information can therefore be used to provide information on these parameters, or be assimilated to decrease model uncertainties. Satellite imagers observe the Earth's land surface in optical, thermal, and microwave bands. Several basic land surface state variables (land surface temperature, canopy height, canopy leaf area index, soil moisture, etc.) have been produced with remote sensing techniques, which already help scientists understand land-atmosphere interaction more precisely. However, there are challenges in applying remote sensing variables to calculate global land-air heat and water exchange fluxes. Firstly, a global turbulent exchange parameterization scheme needs to be developed and verified, especially for the calculation of global momentum and heat roughness lengths from remote sensing information. Secondly, a strategy is needed to overcome the spatial-temporal gaps in remote sensing variables so that remote-sensing-based land surface fluxes become applicable for GCM model verification or comparison. A flux network data library (more than 200 flux towers) was collected to verify the designed method. Important progress in remote sensing of global land surface fluxes and evaporation will be presented, its benefits for GCM models will be discussed, and in-situ studies on the Tibetan Plateau and problems of land surface process simulation will also be addressed.

  4. An Energy Integrated Dispatching Strategy of Multi- energy Based on Energy Internet

    NASA Astrophysics Data System (ADS)

    Jin, Weixia; Han, Jun

    2018-01-01

    The energy internet is a new way of using energy: it achieves energy efficiency and low cost by scheduling a variety of different forms of energy. Particle Swarm Optimization (PSO) is an advanced algorithm with few parameters, high computational precision, and fast convergence. By tuning the parameters ω, c1, and c2, PSO can improve its convergence speed and calculation accuracy. The objective of the optimization model is the lowest fuel cost that can meet the electricity, heat, and cooling loads after all renewable energy has been absorbed. Because energy structure and prices differ between regions, the optimization strategy needs to be determined according to the algorithm and the model.
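
    A minimal sketch of the PSO update the abstract refers to, with inertia weight ω and acceleration coefficients c1 and c2; the objective function is a stand-in, not the paper's dispatch model:

      import numpy as np

      def pso(objective, dim, n_particles=30, iters=200,
              w=0.7, c1=1.5, c2=1.5, bounds=(-10.0, 10.0)):
          # Basic particle swarm optimizer: the velocity update combines
          # inertia (w), cognitive (c1), and social (c2) terms.
          rng = np.random.default_rng(0)
          lo, hi = bounds
          x = rng.uniform(lo, hi, (n_particles, dim))   # positions
          v = np.zeros((n_particles, dim))              # velocities
          pbest = x.copy()
          pbest_val = np.apply_along_axis(objective, 1, x)
          gbest = pbest[np.argmin(pbest_val)].copy()
          for _ in range(iters):
              r1, r2 = rng.random((2, n_particles, dim))
              v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
              x = np.clip(x + v, lo, hi)
              val = np.apply_along_axis(objective, 1, x)
              improved = val < pbest_val
              pbest[improved], pbest_val[improved] = x[improved], val[improved]
              gbest = pbest[np.argmin(pbest_val)].copy()
          return gbest, pbest_val.min()

      # Example: minimize a toy quadratic "fuel cost" over four dispatch variables.
      best_x, best_cost = pso(lambda z: np.sum((z - 3.0) ** 2), dim=4)
      print(best_x, best_cost)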

  5. Anisotropic cosmologies in warped DGP braneworld

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Heydari-Fard, Malihe

    2009-10-15

    The DGP braneworld scenario explains the accelerated expansion of the Universe via leakage of gravity to extra dimensions, without any need for dark energy. We study the behavior of homogeneous and anisotropic cosmologies on a warped DGP brane with a perfect fluid as the matter source. Taking a conformally flat bulk, we obtain the general solutions of the field equations in exact parametric form for Bianchi type I space-time with a pressureless fluid. Finally, the behavior of observationally important parameters like shear, anisotropy, and the deceleration parameter is considered in detail. We find that isotropization can proceed more slowly in the warped DGP model than in the generalized Randall-Sundrum II model.

  6. Electronic Structure and Transport in Magnetic Multilayers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

    2008-02-18

    ORNL assisted Seagate Recording Heads Operations in the development of CIP spin valves for application as read sensors in hard disk drives. Personnel at ORNL were W. H. Butler and Xiaoguang Zhang. Dr. Olle Heinonen from Seagate RHO also participated. ORNL provided codes and materials parameters that were used by Seagate to model CIP GMR in their heads. The objectives were to: (1) develop a linearized Boltzmann transport code for describing CIP GMR based on realistic models of the band structure and interfaces in materials in CIP spin valves in disk drive heads; (2) calculate the materials parameters needed as inputs to the Boltzmann code; and (3) transfer the technology to Seagate Recording Heads.

  7. Business model design for a wearable biofeedback system.

    PubMed

    Hidefjäll, Patrik; Titkova, Dina

    2015-01-01

    Wearable sensor technologies used to track daily activities have become successful in the consumer market. In order for wearable sensor technology to offer added value in the more challenging areas of stress-rehabilitation care and occupational health, stress-related biofeedback parameters need to be monitored and more elaborate business models are needed. To identify probable success factors for a wearable biofeedback system (Affective Health) in the two mentioned market segments in a Swedish setting, we conducted literature studies and interviews with relevant representatives. Data were collected and used first to describe the two market segments and then to define likely feasible business model designs, according to the Business Model Canvas framework. The needs of stakeholders were identified as inputs to business model design. Value propositions, a key building block of a business model, were defined for each segment. The value proposition for occupational health was defined as "A tool that can both identify employees at risk of stress-related disorders and reinforce healthy sustainable behavior" and for healthcare as "Providing therapists with objective data about the patient's emotional state and motivating patients to better engage in the treatment process".

  8. Global Sensitivity Analysis of Environmental Models: Convergence, Robustness and Validation

    NASA Astrophysics Data System (ADS)

    Sarrazin, Fanny; Pianosi, Francesca; Khorashadi Zadeh, Farkhondeh; Van Griensven, Ann; Wagener, Thorsten

    2015-04-01

    Global Sensitivity Analysis aims to characterize the impact that variations in model input factors (e.g. the parameters) have on the model output (e.g. simulated streamflow). In sampling-based Global Sensitivity Analysis, the sample size has to be chosen carefully in order to obtain reliable sensitivity estimates while spending computational resources efficiently. Furthermore, insensitive parameters are typically identified through the definition of a screening threshold: the theoretical value of their sensitivity index is zero, but in a sampling-based framework they regularly take non-zero values. However, little guidance is available for these two steps in environmental modelling. The objective of the present study is to support modellers in making appropriate choices, regarding both sample size and screening threshold, so that a robust sensitivity analysis can be implemented. We performed sensitivity analysis for the parameters of three hydrological models with increasing level of complexity (Hymod, HBV and SWAT), and tested three widely used sensitivity analysis methods (Elementary Effect Test or method of Morris, Regional Sensitivity Analysis, and Variance-Based Sensitivity Analysis). We defined criteria based on a bootstrap approach to assess three different types of convergence: the convergence of the value of the sensitivity indices, of the ranking (the ordering among the parameters) and of the screening (the identification of the insensitive parameters). We investigated the screening threshold through the definition of a validation procedure. The results showed that full convergence of the value of the sensitivity indices is not necessarily needed to rank or to screen the model input factors. Furthermore, typical values of the sample sizes that are reported in the literature can be well below the sample sizes that actually ensure convergence of ranking and screening.
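
    Not the paper's implementation, but a self-contained numpy sketch of one way to attach bootstrap confidence intervals to variance-based first-order indices, so that convergence of values, ranking, and screening can be checked as the sample size N grows:

      import numpy as np

      def first_order_sobol(f, k, N=4096, n_boot=200, seed=0):
          # Saltelli-style first-order indices with bootstrap CIs;
          # inputs are sampled uniformly on [0, 1]^k.
          rng = np.random.default_rng(seed)
          A = rng.random((N, k))
          B = rng.random((N, k))
          fA, fB = f(A), f(B)
          var = np.var(np.concatenate([fA, fB]))
          S, ci = [], []
          for i in range(k):
              ABi = A.copy()
              ABi[:, i] = B[:, i]          # A with column i taken from B
              fABi = f(ABi)
              elem = fB * (fABi - fA)      # elementary products (Saltelli 2010)
              S.append(elem.mean() / var)
              # Bootstrap over sample rows to assess estimator convergence.
              idx = rng.integers(0, N, (n_boot, N))
              boot = elem[idx].mean(axis=1) / var
              ci.append(np.percentile(boot, [2.5, 97.5]))
          return np.array(S), np.array(ci)

      # Example: Ishigami-like test function; wide bootstrap intervals signal
      # that a larger sample size N is needed for reliable ranking/screening.
      def model(X):
          x = -np.pi + 2 * np.pi * X
          return np.sin(x[:, 0]) + 7 * np.sin(x[:, 1]) ** 2 \
              + 0.1 * x[:, 2] ** 4 * np.sin(x[:, 0])

      S, ci = first_order_sobol(model, k=3)
      print(S, ci, sep="\n")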

  9. Changes of peritoneal transport parameters with time on dialysis: assessment with sequential peritoneal equilibration test.

    PubMed

    Waniewski, Jacek; Antosiewicz, Stefan; Baczynski, Daniel; Poleszczuk, Jan; Pietribiasi, Mauro; Lindholm, Bengt; Wankowicz, Zofia

    2017-10-27

    The sequential peritoneal equilibration test (sPET) is based on the consecutive performance of the peritoneal equilibration test (PET, 4-hour, glucose 2.27%) and the mini-PET (1-hour, glucose 3.86%), and the estimation of peritoneal transport parameters with the 2-pore model. It enables the assessment of the functional transport barrier for fluid and small solutes. The objective of this study was to check whether the estimated model parameters can serve as better and earlier indicators of changes in the peritoneal transport characteristics than directly measured transport indices that depend on several transport processes. Seventeen patients were examined using sPET twice, with an interval of about 8 months (230 ± 60 days). There was no difference between the observational parameters measured in the two examinations. The indices for solute transport, but not net UF, were well correlated between the examinations. Among the estimated parameters, a significant decrease between the two examinations was found only for hydraulic permeability (LpS) and osmotic conductance for glucose, whereas the other parameters remained unchanged. These fluid transport parameters did not correlate with D/P for creatinine, although the decrease in LpS values between the examinations was observed mostly in patients with low D/P for creatinine. We conclude that changes in fluid transport parameters (hydraulic permeability and osmotic conductance for glucose), as assessed by the pore model, may precede changes in small solute transport. The systematic assessment of fluid transport status needs specific clinical and mathematical tools besides the standard PET tests.

  10. Optimal Tuner Selection for Kalman-Filter-Based Aircraft Engine Performance Estimation

    NASA Technical Reports Server (NTRS)

    Simon, Donald L.; Garg, Sanjay

    2011-01-01

    An emerging approach in the field of aircraft engine controls and system health management is the inclusion of real-time, onboard models for the inflight estimation of engine performance variations. This technology, typically based on Kalman-filter concepts, enables the estimation of unmeasured engine performance parameters that can be directly utilized by controls, prognostics, and health-management applications. A challenge that complicates this practice is the fact that an aircraft engine's performance is affected by its level of degradation, generally described in terms of unmeasurable health parameters such as efficiencies and flow capacities related to each major engine module. Through Kalman-filter-based estimation techniques, the level of engine performance degradation can be estimated, given that there are at least as many sensors as health parameters to be estimated. However, in an aircraft engine, the number of sensors available is typically less than the number of health parameters, presenting an under-determined estimation problem. A common approach to address this shortcoming is to estimate a subset of the health parameters, referred to as model tuning parameters. The problem/objective is to optimally select the model tuning parameters to minimize Kalman-filter-based estimation error. A tuner selection technique has been developed that specifically addresses the under-determined estimation problem, where there are more unknown parameters than available sensor measurements. A systematic approach is applied to produce a model tuning parameter vector of appropriate dimension to enable estimation by a Kalman filter, while minimizing the estimation error in the parameters of interest. Tuning parameter selection is performed using a multi-variable iterative search routine that seeks to minimize the theoretical mean-squared estimation error of the Kalman filter. This approach can significantly reduce the error in onboard aircraft engine parameter estimation applications such as model-based diagnostics, controls, and life usage calculations. The advantage of the innovation is the significant reduction in estimation errors that it can provide relative to the conventional approach of selecting a subset of health parameters to serve as the model tuning parameter vector. Because this technique needs only to be performed during the system design process, it places no additional computation burden on the onboard Kalman filter implementation. The technique has been developed for aircraft engine onboard estimation applications, as this application typically presents an under-determined estimation problem. However, this generic technique could be applied to other industries using gas turbine engine technology.
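
    A toy illustration of the underlying subset-selection idea in a static setting (a deliberate simplification, not NASA's Kalman-filter algorithm): with fewer sensors than health parameters, search over tuner subsets and keep the one whose least-squares estimator yields the smallest mean-squared error on the full parameter vector.

      import numpy as np
      from itertools import combinations

      rng = np.random.default_rng(1)
      n_sensors, n_health = 4, 7   # under-determined: 7 unknowns, 4 sensors
      H = rng.normal(size=(n_sensors, n_health))   # sensor sensitivity matrix

      def tuner_mse(subset, n_trials=2000, noise=0.05):
          # MSE of estimating the full health vector while tuning only `subset`.
          pinv = np.linalg.pinv(H[:, list(subset)])
          err = 0.0
          for _ in range(n_trials):
              x = rng.normal(size=n_health)            # true health deviations
              y_meas = H @ x + noise * rng.normal(size=n_sensors)
              x_hat = np.zeros(n_health)
              x_hat[list(subset)] = pinv @ y_meas      # estimate tuners only
              err += np.mean((x_hat - x) ** 2)
          return err / n_trials

      best = min(combinations(range(n_health), n_sensors), key=tuner_mse)
      print("best tuner subset:", best)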

  11. A simple hyperbolic model for communication in parallel processing environments

    NASA Technical Reports Server (NTRS)

    Stoica, Ion; Sultan, Florin; Keyes, David

    1994-01-01

    We introduce a model for communication costs in parallel processing environments, called the 'hyperbolic model,' which generalizes two-parameter dedicated-link models in an analytically simple way. Dedicated interprocessor links parameterized by a latency and a transfer rate that are independent of load are assumed by many existing communication models; such models are unrealistic for workstation networks. The communication system is modeled as a directed communication graph in which terminal nodes represent the application processes that initiate the sending and receiving of the information and in which internal nodes, called communication blocks (CBs), reflect the layered structure of the underlying communication architecture. The direction of graph edges specifies the flow of the information carried through messages. Each CB is characterized by a two-parameter hyperbolic function of the message size that represents the service time needed for processing the message. The parameters are evaluated in the limits of very large and very small messages. Rules are given for reducing a communication graph consisting of many CBs to an equivalent two-parameter form, while maintaining an approximation for the service time that is exact in both large and small limits. The model is validated on a dedicated Ethernet network of workstations by experiments with communication subprograms arising in scientific applications, for which a tight fit of the model predictions with actual measurements of the communication and synchronization time between end processes is demonstrated. The model is then used to evaluate the performance of two simple parallel scientific applications from partial differential equations: domain decomposition and time-parallel multigrid. In an appropriate limit, we also show the compatibility of the hyperbolic model with the recently proposed LogP model.

  12. Sewer deterioration modeling with condition data lacking historical records.

    PubMed

    Egger, C; Scheidegger, A; Reichert, P; Maurer, M

    2013-11-01

    Accurate predictions of future conditions of sewer systems are needed for efficient rehabilitation planning. For this purpose, a range of sewer deterioration models has been proposed which can be improved by calibration with observed sewer condition data. However, if datasets lack historical records, calibration requires a combination of deterioration and sewer rehabilitation models, as the current state of the sewer network reflects the combined effect of both processes. Otherwise, physical sewer lifespans are overestimated as pipes in poor condition that were rehabilitated are no longer represented in the dataset. We therefore propose the combination of a sewer deterioration model with a simple rehabilitation model which can be calibrated with datasets lacking historical information. We use Bayesian inference for parameter estimation due to the limited information content of the data and limited identifiability of the model parameters. A sensitivity analysis gives an insight into the model's robustness against the uncertainty of the prior. The analysis reveals that the model results are principally sensitive to the means of the priors of specific model parameters, which should therefore be elicited with care. The importance sampling technique applied for the sensitivity analysis permitted efficient implementation for regional sensitivity analysis with reasonable computational outlay. Application of the combined model with both simulated and real data shows that it effectively compensates for the bias induced by a lack of historical data. Thus, the novel approach makes it possible to calibrate sewer pipe deterioration models even when historical condition records are lacking. Since at least some prior knowledge of the model parameters is available, the strength of Bayesian inference is particularly evident in the case of small datasets. Copyright © 2013 Elsevier Ltd. All rights reserved.

  13. Predictive modeling of transient storage and nutrient uptake: Implications for stream restoration

    USGS Publications Warehouse

    O'Connor, Ben L.; Hondzo, Miki; Harvey, Judson W.

    2010-01-01

    This study examined two key aspects of reactive transport modeling for stream restoration purposes: the accuracy of the nutrient spiraling and transient storage models for quantifying reach-scale nutrient uptake, and the ability to quantify transport parameters using measurements and scaling techniques in order to improve upon traditional conservative tracer fitting methods. Nitrate (NO3–) uptake rates inferred using the nutrient spiraling model underestimated the total NO3– mass loss by 82%, which was attributed to the exclusion of dispersion and transient storage. The transient storage model was more accurate with respect to the NO3– mass loss (±20%) and also demonstrated that uptake in the main channel was more significant than in storage zones. Conservative tracer fitting was unable to produce transport parameter estimates for a riffle-pool transition of the study reach, while forward modeling of solute transport using measured/scaled transport parameters matched conservative tracer breakthrough curves for all reaches. Additionally, solute exchange between the main channel and embayment surface storage zones was quantified using first-order theory. These results demonstrate that it is vital to account for transient storage in quantifying nutrient uptake, and the continued development of measurement/scaling techniques is needed for reactive transport modeling of streams with complex hydraulic and geomorphic conditions.
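
    For context, a common written form of the transient storage model used in such studies (OTIS-type equations in the standard notation, which may differ from the authors'): the main-channel and storage-zone solute balances are

      \frac{\partial C}{\partial t} = -\frac{Q}{A}\frac{\partial C}{\partial x} + \frac{1}{A}\frac{\partial}{\partial x}\left(A D \frac{\partial C}{\partial x}\right) + \alpha\,(C_S - C) - \lambda C

      \frac{dC_S}{dt} = \alpha\,\frac{A}{A_S}\,(C - C_S) - \lambda_S C_S

    where Q is discharge, A and A_S are main-channel and storage-zone cross-sectional areas, D is dispersion, α is the storage exchange coefficient, and λ, λ_S are first-order uptake rates; the finding that main-channel uptake dominates corresponds to the λC term outweighing the storage-zone term.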

  14. Simulation-based sensitivity analysis for non-ignorably missing data.

    PubMed

    Yin, Peng; Shi, Jian Q

    2017-01-01

    Sensitivity analysis is popular in dealing with missing data problems, particularly for non-ignorable missingness, where a full-likelihood method cannot be adopted. It analyses how sensitively the conclusions (output) may depend on assumptions or parameters (input) about the missing data, i.e. the missing data mechanism. We refer to models subject to this uncertainty as sensitivity models. To make conventional sensitivity analysis more useful in practice, we need to define simple and interpretable statistical quantities to assess the sensitivity models and make evidence-based analyses. We propose a novel approach in this paper to investigate the plausibility of each missing data mechanism assumption, by comparing the simulated datasets from various MNAR models with the observed data non-parametrically, using K-nearest-neighbour distances. Some asymptotic theory is also provided. A key step of this method is to plug in a plausibility evaluation system for each sensitivity parameter, to select plausible values and reject unlikely ones, instead of considering all proposed values of the sensitivity parameters as in the conventional sensitivity analysis method. The method is generic and has been applied successfully to several specific models in this paper, including a meta-analysis model with publication bias, analysis of incomplete longitudinal data, and mean estimation with non-ignorable missing data.
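
    A rough sketch of the K-nearest-neighbour comparison step, using scikit-learn; the simulator here is a hypothetical stand-in, and in practice the simulated datasets would come from the candidate MNAR mechanisms:

      import numpy as np
      from sklearn.neighbors import NearestNeighbors

      def knn_discrepancy(observed, simulated, k=5):
          # Average distance from each observed point to its k nearest
          # neighbours among the simulated data: smaller = more plausible.
          nn = NearestNeighbors(n_neighbors=k).fit(simulated)
          dist, _ = nn.kneighbors(observed)
          return dist.mean()

      rng = np.random.default_rng(0)
      observed = rng.normal(0.5, 1.0, size=(200, 1))   # stand-in observed data

      # Score candidate sensitivity-parameter values by simulating under each
      # hypothetical mechanism; the value near 0.5 should score lowest here.
      for delta in [0.0, 0.5, 1.0]:
          simulated = rng.normal(delta, 1.0, size=(2000, 1))
          print(delta, knn_discrepancy(observed, simulated))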

  15. A Simplified Model of Moisture Transport in Hydrophilic Porous Media With Applications to Pharmaceutical Tablets.

    PubMed

    Klinzing, Gerard R; Zavaliangos, Antonios

    2016-08-01

    This work establishes a predictive model that explicitly recognizes microstructural parameters in the description of the overall mass uptake and local gradients of moisture in tablets. Model equations were formulated based on local tablet geometry to describe the transient uptake of moisture. An analytical solution to a simplified set of model equations was derived to predict the overall mass uptake and moisture gradients within the tablets. The analytical solution takes into account individual diffusion mechanisms at different scales of porosity as well as diffusion into the solid phase. The time constant of mass uptake was found to be a function of several key material properties, such as tablet relative density, pore tortuosity, and equilibrium moisture content of the material. The predictions of the model are in excellent agreement with experimental results for microcrystalline cellulose tablets without the need for parameter fitting. The model presented provides a new method to analyze the transient uptake of moisture into hydrophilic materials with knowledge of only a few fundamental material and microstructural parameters. In addition, the model allows for quick and insightful predictions of moisture diffusion for a variety of practical applications, including pharmaceutical tablets, porous polymer systems, and cementitious materials. Copyright © 2016 American Pharmacists Association®. Published by Elsevier Inc. All rights reserved.

  16. Mathematics as a Conduit for Translational Research in Post-Traumatic Osteoarthritis

    PubMed Central

    Ayati, Bruce P.; Kapitanov, Georgi I.; Coleman, Mitchell C.; Anderson, Donald D.; Martin, James A.

    2016-01-01

    Biomathematical models offer a powerful method of clarifying complex temporal interactions and the relationships among multiple variables in a system. We present a coupled in silico biomathematical model of articular cartilage degeneration in response to impact and/or aberrant loading such as would be associated with injury to an articular joint. The model incorporates fundamental biological and mechanical information obtained from explant and small animal studies to predict post-traumatic osteoarthritis (PTOA) progression, with an eye toward eventual application in human patients. In this sense, we refer to the mathematics as a “conduit of translation”. The new in silico framework presented in this paper involves a biomathematical model for the cellular and biochemical response to strains computed using finite element analysis. The model predicts qualitative responses presently, utilizing system parameter values largely taken from the literature. To contribute to accurate predictions, models need to be accurately parameterized with values that are based on solid science. We discuss a parameter identification protocol that will enable us to make increasingly accurate predictions of PTOA progression using additional data from smaller scale explant and small animal assays as they become available. By distilling the data from the explant and animal assays into parameters for biomathematical models, mathematics can translate experimental data to clinically relevant knowledge. PMID:27653021

  17. Development of EOS data for granular material like sand by using micromodels

    NASA Astrophysics Data System (ADS)

    Larcher, M.; Gebbeken, N.

    2012-08-01

    Detonations in soil can occur for several reasons, e.g. land mines or bombs from the Second World War. Soil is also often used as a protective barrier. In all cases the behaviour of soil loaded by shock waves is important. The simulation of shock-wave-loaded soil using hydrocodes like AUTODYN needs a failure model as well as an equation of state (EOS). The parameters for these models are often not known. The popular material law for sand from Laine and Sandvik [1], for example, is a first approximation, but it can only be used for dry sand with a certain grain grading. Porosity, grain grading, and humidity have a strong influence on the material behaviour of cohesive soils. Micromechanical models can be used to derive the material behaviour of granular materials. EOS data can be obtained by numerically loading micromechanically modelled grains and measuring the density under a certain pressure in the finite element model. The influence of porosity, grain grading, and humidity can be easily investigated. EOS data are determined in this work for cohesive soils depending on these parameters.

  18. Effects of land cover, topography, and built structure on seasonal water quality at multiple spatial scales.

    PubMed

    Pratt, Bethany; Chang, Heejun

    2012-03-30

    The relationships among land cover, topography, built structure, and stream water quality in the Portland Metro region of Oregon and Clark County, Washington, USA, are analyzed using ordinary least squares (OLS) and geographically weighted regression (GWR) models. Two scales of analysis, a sectional watershed and a buffer, offered a local and a global investigation of the sources of stream pollutants. Model accuracy, measured by R² values, fluctuated according to the scale, season, and regression method used. While most wet-season water quality parameters are associated with urban land covers, most dry-season water quality parameters are related to topographic features such as elevation and slope. GWR models, which take into consideration local relations of spatial autocorrelation, had stronger results than OLS regression models. In the multiple regression models, sectioned watershed results were consistently better than sectioned buffer results, except for the dry-season pH and stream temperature parameters. This suggests that while riparian land cover does have an effect on water quality, a wider contributing area needs to be included in order to account for distant sources of pollutants. Copyright © 2012 Elsevier B.V. All rights reserved.

  19. Optimization of kinetic parameters for the degradation of plasmid DNA in rat plasma

    NASA Astrophysics Data System (ADS)

    Chaudhry, Q. A.

    2014-12-01

    Biotechnology is a rapidly growing area of research in the pharmaceutical sciences, and the pharmacokinetics of plasmid DNA (pDNA) is an important topic within it. It has been observed that gene delivery faces many obstacles in the transport of pDNA toward its target sites. The topoforms of pDNA are termed supercoiled (S-C), open circular (O-C), and linear (L); a kinetic model for them is presented in this paper. The kinetic model gives rise to a system of ordinary differential equations (ODEs), for which an exact solution has been found. The kinetic parameters responsible for the degradation of the supercoiled form and the formation of the open circular and linear topoforms are of great significance, not only in vitro but also for the modeling of further processes, and therefore need to be addressed in detail. For this purpose, global optimization techniques were adopted to find the optimal parameters for the model. The model results obtained with the optimal parameters were compared against the measured data, showing good agreement.
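
    One plausible reading of such a kinetic scheme — a hypothetical sequential first-order chain S-C → O-C → L with rate constants k1, k2, k3, not necessarily the paper's exact model — fitted with a global optimizer as the abstract describes:

      import numpy as np
      from scipy.integrate import solve_ivp
      from scipy.optimize import differential_evolution

      def topoform_odes(t, y, k1, k2, k3):
          # Hypothetical chain: supercoiled -> open circular -> linear -> loss.
          sc, oc, lin = y
          return [-k1 * sc, k1 * sc - k2 * oc, k2 * oc - k3 * lin]

      t_obs = np.linspace(0, 60, 13)   # minutes

      def simulate(k):
          sol = solve_ivp(topoform_odes, (0, 60), [1.0, 0.0, 0.0],
                          t_eval=t_obs, args=tuple(k))
          return sol.y

      # Synthetic "measured" fractions standing in for plasma data.
      y_obs = simulate([0.10, 0.05, 0.02]) \
          + 0.01 * np.random.default_rng(0).normal(size=(3, 13))

      def loss(k):
          return np.sum((simulate(k) - y_obs) ** 2)

      # Global optimization of the kinetic parameters, as in the abstract.
      res = differential_evolution(loss, bounds=[(1e-4, 1.0)] * 3, seed=0)
      print("estimated k1, k2, k3:", res.x)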

  20. Sequential peritoneal equilibration test: a new method for assessment and modelling of peritoneal transport.

    PubMed

    Galach, Magda; Antosiewicz, Stefan; Baczynski, Daniel; Wankowicz, Zofia; Waniewski, Jacek

    2013-02-01

    Despite the many peritoneal tests that have been proposed, there is still a need for a simple and reliable approach for deriving detailed information about peritoneal membrane characteristics, especially those related to fluid transport. The sequential peritoneal equilibration test (sPET), which comprises a PET (glucose 2.27%, 4 h) followed by a miniPET (glucose 3.86%, 1 h), was performed in 27 stable continuous ambulatory peritoneal dialysis patients. Ultrafiltration volumes, glucose absorption, the ratio of concentration in dialysis fluid to concentration in plasma (D/P), the sodium dip (Dip D/P Sodium), the free water fraction (FWF60), and the ultrafiltration passing through small pores at 60 min (UFSP60) were calculated using clinical data. Peritoneal transport parameters were estimated using the three-pore model (3p model) and clinical data. Osmotic conductance for glucose was calculated from the parameters of the model. D/P creatinine correlated with diffusive mass transport parameters for all considered solutes, but not with fluid transport characteristics. Hydraulic permeability (LpS) correlated with net ultrafiltration from the miniPET, UFSP60, FWF60, and the sodium dip. The fraction of ultrasmall pores correlated with FWF60 and the sodium dip. The sequential PET described and interpreted the mechanisms of ultrafiltration and solute transport. Fluid transport parameters from the 3p model were independent of the PET D/P creatinine but correlated with fluid transport characteristics from the PET and miniPET.

  1. Constraining ecosystem model with adaptive Metropolis algorithm using boreal forest site eddy covariance measurements

    NASA Astrophysics Data System (ADS)

    Mäkelä, Jarmo; Susiluoto, Jouni; Markkanen, Tiina; Aurela, Mika; Järvinen, Heikki; Mammarella, Ivan; Hagemann, Stefan; Aalto, Tuula

    2016-12-01

    We examined parameter optimisation in the JSBACH (Kaminski et al., 2013; Knorr and Kattge, 2005; Reick et al., 2013) ecosystem model, applied to two boreal forest sites (Hyytiälä and Sodankylä) in Finland. We identified and tested key parameters in soil hydrology and forest water and carbon-exchange-related formulations, and optimised them using the adaptive Metropolis (AM) algorithm for Hyytiälä with a 5-year calibration period (2000-2004) followed by a 4-year validation period (2005-2008). Sodankylä acted as an independent validation site, where optimisations were not made. The tuning provided estimates for the full distribution of possible parameters, along with information about correlation, sensitivity and identifiability. Some parameters were correlated with each other due to a phenomenological connection between carbon uptake and water stress, or other connections due to the set-up of the model formulations. The latter holds especially for vegetation phenology parameters. The least identifiable parameters include phenology parameters, parameters connecting relative humidity and soil dryness, and the field capacity of the skin reservoir. These soil parameters were masked by the large contribution from vegetation transpiration. In addition to leaf area index and the maximum carboxylation rate, the most effective parameters adjusting the gross primary production (GPP) and evapotranspiration (ET) fluxes in seasonal tuning were related to soil wilting point, drainage and moisture stress imposed on vegetation. For daily and half-hourly tunings the most important parameters were the ratio of leaf-internal to external CO2 concentration and the parameter connecting relative humidity and soil dryness. Effectively, the seasonal tuning transferred water from soil moisture into ET, and the daily and half-hourly tunings reversed this process. The seasonal tuning improved the month-to-month development of GPP and ET, and produced the most stable estimates of water use efficiency. Compared to the seasonal tuning, the daily tuning is worse on the seasonal scale. However, the daily parametrisation reproduced the observations for the average diurnal cycle best, except for GPP in the Sodankylä validation period, where half-hourly tuned parameters were better. In general, the daily tuning provided the largest reduction in model-data mismatch. The model's response to drought was unaffected by our parametrisations, and further studies are needed into enhancing the drought response in JSBACH.
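
    A compact sketch of the adaptive Metropolis idea referred to here, in which the Gaussian proposal covariance is adapted from the accumulated chain history (after Haario et al., 2001); the log-posterior below is a toy stand-in for the JSBACH model-data mismatch:

      import numpy as np

      def adaptive_metropolis(log_post, x0, n_iter=20000, adapt_start=500, eps=1e-8):
          # Adaptive Metropolis: proposal covariance = rescaled chain covariance.
          rng = np.random.default_rng(0)
          d = len(x0)
          sd = 2.4 ** 2 / d                     # scaling of Haario et al. (2001)
          chain = np.empty((n_iter, d))
          x = np.asarray(x0, dtype=float)
          lp = log_post(x)
          cov = np.eye(d) * 0.1
          for t in range(n_iter):
              if t > adapt_start:               # adapt from accumulated history
                  cov = sd * (np.cov(chain[:t].T) + eps * np.eye(d))
              prop = rng.multivariate_normal(x, cov)
              lp_prop = log_post(prop)
              if np.log(rng.random()) < lp_prop - lp:
                  x, lp = prop, lp_prop
              chain[t] = x
          return chain

      # Toy target: correlated 2-D Gaussian posterior over two model parameters.
      prec = np.linalg.inv(np.array([[1.0, 0.8], [0.8, 1.0]]))
      chain = adaptive_metropolis(lambda z: -0.5 * z @ prec @ z, x0=[2.0, -2.0])
      print(chain[5000:].mean(axis=0), np.cov(chain[5000:].T))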

  2. Physical Uncertainty Bounds (PUB)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vaughan, Diane Elizabeth; Preston, Dean L.

    2015-03-19

    This paper introduces and motivates the need for a new methodology for determining upper bounds on the uncertainties in simulations of engineered systems due to limited fidelity in the composite continuum-level physics models needed to simulate the systems. We show that traditional uncertainty quantification methods provide, at best, a lower bound on this uncertainty. We propose to obtain bounds on the simulation uncertainties by first determining bounds on the physical quantities or processes relevant to system performance. By bounding these physics processes, as opposed to carrying out statistical analyses of the parameter sets of specific physics models or simply switching out the available physics models, one can obtain upper bounds on the uncertainties in simulated quantities of interest.

  3. Grounding, bonding and shielding for safety and signal interference control

    NASA Technical Reports Server (NTRS)

    Forsyth, T. J.; Bautista, AL

    1990-01-01

    Aircraft models and other aerodynamic tests are conducted at the NASA Ames Research Center National Full Scale Aerodynamics Complex (NFAC). The models, tested in NFAC's wind tunnels, are sometimes heavily instrumented and are connected to a data acquisition system. Besides recording data for evaluation, certain critical information must be monitored to be sure the model is within operational limits. The signals for these parameters are for the most part low-level signals that require good instrumentation amplification. These amplifiers need to be grounded and shielded for common mode rejection and noise reduction. The instrumentation also needs to be grounded to prevent electrical shock hazards. The purpose of this paper is to present an understanding of the principles and purpose of grounding, bonding, and shielding.

  4. Normalized inverse characterization of sound absorbing rigid porous media.

    PubMed

    Zieliński, Tomasz G

    2015-06-01

    This paper presents a methodology for the inverse characterization of sound absorbing rigid porous media, based on standard measurements of the surface acoustic impedance of a porous sample. The model parameters need to be normalized to have a robust identification procedure which fits the model-predicted impedance curves with the measured ones. Such a normalization provides a substitute set of dimensionless (normalized) parameters unambiguously related to the original model parameters. Moreover, two scaling frequencies are introduced, however, they are not additional parameters and for different, yet reasonable, assumptions of their values, the identification procedure should eventually lead to the same solution. The proposed identification technique uses measured and computed impedance curves for a porous sample not only in the standard configuration, that is, set to the rigid termination piston in an impedance tube, but also with air gaps of known thicknesses between the sample and the piston. Therefore, all necessary analytical formulas for sound propagation in double-layered media are provided. The methodology is illustrated by one numerical test and by two examples based on the experimental measurements of the acoustic impedance and absorption of porous ceramic samples of different thicknesses and a sample of polyurethane foam.

  5. Measurement of the PPN parameter γ by testing the geometry of near-Earth space

    NASA Astrophysics Data System (ADS)

    Luo, Jie; Tian, Yuan; Wang, Dian-Hong; Qin, Cheng-Gang; Shao, Cheng-Gang

    2016-06-01

    The Beyond Einstein Advanced Coherent Optical Network (BEACON) mission was designed to achieve an accuracy of 10⁻⁹ in measuring the Eddington parameter γ, which is perhaps the most fundamental Parameterized Post-Newtonian parameter. However, this ideal accuracy was estimated simply as the ratio of the measurement accuracy of the inter-spacecraft distances to the magnitude of the departure from Euclidean geometry. Based on the BEACON concept, we construct a measurement model to estimate the parameter γ with the least squares method. The influences of measurement noise and the out-of-plane error on the estimation accuracy are evaluated based on a white noise model. Though the BEACON mission does not require expensive drag-free systems and avoids physical dynamical models of the spacecraft, the relatively low accuracy of the initial inter-spacecraft distances poses a great challenge, reducing the estimation accuracy by about two orders of magnitude. Thus the noise requirements may need to be more stringent in the design in order to achieve the target accuracy, as demonstrated in this work. Accordingly, we give limits on the power spectral density of both noise sources required for an accuracy of 10⁻⁹.

  6. Reference tissue modeling with parameter coupling: application to a study of SERT binding in HIV

    NASA Astrophysics Data System (ADS)

    Endres, Christopher J.; Hammoud, Dima A.; Pomper, Martin G.

    2011-04-01

    When applicable, it is generally preferred to evaluate positron emission tomography (PET) studies using a reference tissue-based approach as that avoids the need for invasive arterial blood sampling. However, most reference tissue methods have been shown to have a bias that is dependent on the level of tracer binding, and the variability of parameter estimates may be substantially affected by noise level. In a study of serotonin transporter (SERT) binding in HIV dementia, it was determined that applying parameter coupling to the simplified reference tissue model (SRTM) reduced the variability of parameter estimates and yielded the strongest between-group significant differences in SERT binding. The use of parameter coupling makes the application of SRTM more consistent with conventional blood input models and reduces the total number of fitted parameters, thus should yield more robust parameter estimates. Here, we provide a detailed evaluation of the application of parameter constraint and parameter coupling to [11C]DASB PET studies. Five quantitative methods, including three methods that constrain the reference tissue clearance (kr2) to a common value across regions were applied to the clinical and simulated data to compare measurement of the tracer binding potential (BPND). Compared with standard SRTM, either coupling of kr2 across regions or constraining kr2 to a first-pass estimate improved the sensitivity of SRTM to measuring a significant difference in BPND between patients and controls. Parameter coupling was particularly effective in reducing the variance of parameter estimates, which was less than 50% of the variance obtained with standard SRTM. A linear approach was also improved when constraining kr2 to a first-pass estimate, although the SRTM-based methods yielded stronger significant differences when applied to the clinical study. This work shows that parameter coupling reduces the variance of parameter estimates and may better discriminate between-group differences in specific binding.

  7. Harnessing the theoretical foundations of the exponential and beta-Poisson dose-response models to quantify parameter uncertainty using Markov Chain Monte Carlo.

    PubMed

    Schmidt, Philip J; Pintar, Katarina D M; Fazil, Aamir M; Topp, Edward

    2013-09-01

    Dose-response models are the essential link between exposure assessment and computed risk values in quantitative microbial risk assessment, yet the uncertainty that is inherent to computed risks because the dose-response model parameters are estimated using limited epidemiological data is rarely quantified. Second-order risk characterization approaches incorporating uncertainty in dose-response model parameters can provide more complete information to decisionmakers by separating variability and uncertainty to quantify the uncertainty in computed risks. Therefore, the objective of this work is to develop procedures to sample from posterior distributions describing uncertainty in the parameters of exponential and beta-Poisson dose-response models using Bayes's theorem and Markov Chain Monte Carlo (in OpenBUGS). The theoretical origins of the beta-Poisson dose-response model are used to identify a decomposed version of the model that enables Bayesian analysis without the need to evaluate Kummer confluent hypergeometric functions. Herein, it is also established that the beta distribution in the beta-Poisson dose-response model cannot address variation among individual pathogens, criteria to validate use of the conventional approximation to the beta-Poisson model are proposed, and simple algorithms to evaluate actual beta-Poisson probabilities of infection are investigated. The developed MCMC procedures are applied to analysis of a case study data set, and it is demonstrated that an important region of the posterior distribution of the beta-Poisson dose-response model parameters is attributable to the absence of low-dose data. This region includes beta-Poisson models for which the conventional approximation is especially invalid and in which many beta distributions have an extreme shape with questionable plausibility. © Her Majesty the Queen in Right of Canada 2013. Reproduced with the permission of the Minister of the Public Health Agency of Canada.
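
    For reference, the dose-response forms in question, written as they commonly appear in the quantitative microbial risk assessment literature (standard forms, not copied from this paper): the exponential model, the exact beta-Poisson model, whose evaluation involves the Kummer confluent hypergeometric function that the paper's decomposition avoids, and the conventional beta-Poisson approximation:

      P_{\mathrm{exp}}(d) = 1 - e^{-rd}

      P_{\beta\mathrm{P}}(d) = 1 - {}_{1}F_{1}(\alpha;\ \alpha+\beta;\ -d)

      P_{\beta\mathrm{P}}(d) \approx 1 - \left(1 + \frac{d}{\beta}\right)^{-\alpha}

    with the approximation conventionally regarded as valid when β ≫ 1 and α ≪ β — which connects to the abstract's point that part of the posterior lies where the approximation is especially invalid.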

  8. NetMOD Version 2.0 Parameters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Merchant, Bion J.

    2015-08-01

    NetMOD (Network Monitoring for Optimal Detection) is a Java-based software package for conducting simulation of seismic, hydroacoustic and infrasonic networks. Network simulations have long been used to study network resilience to station outages and to determine where additional stations are needed to reduce monitoring thresholds. NetMOD makes use of geophysical models to determine the source characteristics, signal attenuation along the path between the source and station, and the performance and noise properties of the station. These geophysical models are combined to simulate the relative amplitudes of signal and noise that are observed at each of the stations. From these signal-to-noise ratios (SNR), the probability of detection can be computed given a detection threshold. This document describes the parameters that are used to configure the NetMOD tool and the input and output parameters that make up the simulation definitions.

  9. Limited memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) method for the parameter estimation on geographically weighted ordinal logistic regression model (GWOLR)

    NASA Astrophysics Data System (ADS)

    Saputro, Dewi Retno Sari; Widyaningsih, Purnami

    2017-08-01

    In general, the parameter estimation of the GWOLR model uses the maximum likelihood method, but this yields a system of nonlinear equations that is difficult to solve exactly, so an approximate solution is needed. There are two popular numerical approaches: Newton's method and quasi-Newton (QN) methods. Newton's method requires substantial computation time since it involves the Jacobian matrix (derivatives). QN methods overcome this drawback by replacing derivative computation with direct function evaluations. The QN approach uses a Hessian-matrix approximation, such as the Davidon-Fletcher-Powell (DFP) formula. The Broyden-Fletcher-Goldfarb-Shanno (BFGS) method is a QN method that, like DFP, maintains a positive definite Hessian approximation. Because the BFGS method requires large memory when executing the program, another algorithm is needed to decrease memory usage, namely limited-memory BFGS (L-BFGS). The purpose of this research is to assess the efficiency of the L-BFGS method in the iterative and recursive computation of the Hessian matrix and its inverse for GWOLR parameter estimation. In reference to the research findings, we found that the BFGS and L-BFGS methods involve arithmetic operation counts of O(n²) and O(nm), respectively.
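
    For illustration, SciPy exposes a limited-memory BFGS implementation; a minimal sketch fitting an ordinary (non-geographically-weighted) binary logistic regression by maximum likelihood, as a simplified stand-in for the GWOLR likelihood:

      import numpy as np
      from scipy.optimize import minimize

      rng = np.random.default_rng(0)
      X = rng.normal(size=(500, 3))
      beta_true = np.array([1.0, -2.0, 0.5])
      y = (rng.random(500) < 1.0 / (1.0 + np.exp(-X @ beta_true))).astype(float)

      def neg_log_likelihood(beta):
          # Negative log-likelihood of the binary logistic model.
          eta = X @ beta
          return np.sum(np.log1p(np.exp(eta)) - y * eta)

      # L-BFGS: the inverse Hessian is approximated from a few stored updates,
      # keeping memory at O(nm) instead of O(n^2) for the full BFGS matrix.
      res = minimize(neg_log_likelihood, x0=np.zeros(3), method="L-BFGS-B")
      print(res.x)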

  10. Large-watershed flood simulation and forecasting based on different-resolution distributed hydrological model

    NASA Astrophysics Data System (ADS)

    Li, J.

    2017-12-01

    Large-watershed flood simulation and forecasting is very important for the application of a distributed hydrological model, and a key challenge is the effect of the model's spatial resolution on its performance and accuracy. To examine this effect, the distributed hydrological model—the Liuxihe model—was built at resolutions of 1000 m × 1000 m, 600 m × 600 m, 500 m × 500 m, 400 m × 400 m, and 200 m × 200 m, in order to find the best resolution for large-watershed flood simulation and forecasting. This study sets up a physically based distributed hydrological model for flood forecasting of the Liujiang River basin in south China. Terrain data (digital elevation model, DEM), soil type, and land use type are downloaded freely from the web. The model parameters are optimized using an improved Particle Swarm Optimization (PSO) algorithm; parameter optimization reduces the uncertainty that exists when model parameters are derived physically. Model resolutions from 200 m × 200 m to 1000 m × 1000 m were tested for modeling Liujiang River basin floods with the Liuxihe model. The best spatial resolution for flood simulation and forecasting is 200 m × 200 m, and model performance and accuracy worsen as the resolution coarsens. At 1000 m × 1000 m the flood simulation and forecasting results are the worst, and the river channel network derived at this resolution differs from the actual one. To keep the model at an acceptable performance, a minimum spatial resolution is needed; the suggested threshold resolution for modeling Liujiang River basin floods is a 500 m × 500 m grid cell, but a 200 m × 200 m grid cell is recommended in this study to keep the model at its best performance.

  11. An assessment of bird habitat quality using population growth rates

    USGS Publications Warehouse

    Knutson, M.G.; Powell, L.A.; Hines, R.K.; Friberg, M.A.; Niemi, G.J.

    2006-01-01

    Survival and reproduction directly affect population growth rate (lambda), making lambda a fundamental parameter for assessing habitat quality. We used field data, literature review, and a computer simulation to predict annual productivity and lambda for several species of landbirds breeding in floodplain and upland forests in the Midwestern United States. We monitored 1735 nests of 27 species; 760 nests were in the uplands and 975 were in the floodplain. Each type of forest habitat (upland and floodplain) was a source habitat for some species. Despite a relatively low proportion of regional forest cover, the majority of species had stable or increasing populations in all or some habitats, including six species of conservation concern. In our search for a simple analog for lambda, we found that only adult apparent survival, juvenile survival, and annual productivity were correlated with lambda; daily nest survival and relative abundance estimated from point counts were not. Survival and annual productivity are among the most costly demographic parameters to measure, and there does not seem to be a low-cost alternative. In addition, our literature search revealed that the demographic parameters needed to model annual productivity and lambda were unavailable for several species. More collective effort across North America is needed to fill the gaps in our knowledge of the demographic parameters necessary to model both annual productivity and lambda. Managers can use habitat-specific predictions of annual productivity to compare habitat quality among species and habitats for purposes of evaluating management plans.

  12. Implications of AM for the Navy Supply Chain

    DTIC Science & Technology

    2016-12-01

    Cornell University and Queens University of Canada. He is the co-chair of the America Makes Working Group for Additive Manufacturing Qualification and...strategic deployment of additive manufacturing (AM) machines throughout the supply chain, coupled with the right business model, is an imperative need in...Table 1. Additive Manufacturing Business Model Factors to develop a standard BCA template, taking into consideration the parameters in Table 1

  13. Cheetah: Starspot modeling code

    NASA Astrophysics Data System (ADS)

    Walkowicz, Lucianne; Thomas, Michael; Finkestein, Adam

    2014-12-01

    Cheetah models starspots in photometric data (lightcurves) by calculating the modulation of a light curve due to starspots. The main parameters of the program are the linear and quadratic limb darkening coefficients, stellar inclination, spot locations and sizes, and the intensity ratio of the spots to the stellar photosphere. Cheetah uses uniform spot contrast and the minimum number of spots needed to produce a good fit and ignores bright regions for the sake of simplicity.

  14. A method of hidden Markov model optimization for use with geophysical data sets

    NASA Technical Reports Server (NTRS)

    Granat, R. A.

    2003-01-01

    Geophysics research has been faced with a growing need for automated techniques with which to process large quantities of data. A successful tool must meet a number of requirements: it should be consistent, require minimal parameter tuning, and produce scientifically meaningful results in reasonable time. We introduce a hidden Markov model (HMM)-based method for analysis of geophysical data sets that attempts to address these issues.
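
    As a sketch of the general approach — here using the third-party hmmlearn package rather than the authors' code — fitting a Gaussian HMM to a series and decoding hidden regimes might look like:

      import numpy as np
      from hmmlearn import hmm

      # Synthetic stand-in for a geophysical series: two noise regimes.
      rng = np.random.default_rng(0)
      quiet = rng.normal(0.0, 0.5, size=(300, 1))
      active = rng.normal(2.0, 1.5, size=(200, 1))
      series = np.vstack([quiet, active, quiet])

      # Fit a 2-state Gaussian HMM and decode the most likely state sequence.
      model = hmm.GaussianHMM(n_components=2, covariance_type="full", n_iter=100)
      model.fit(series)
      states = model.predict(series)
      print(np.bincount(states))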

  15. Life Prediction Methodologies for Aerospace Materials Annual Report, 2003

    DTIC Science & Technology

    2003-06-01

    peening parameters are obtained using a simplified model [Cao, et al.]. The solutions ultimately will need to be fine-tuned by simulating the...clamping stress and applied axial stress, identified from prior work [Hutson, et al.]. Accumulated damage on some samples was characterized using...defined using single fiber creep data [Wilson, et al.]. A two-level Mori-Tanaka model [Mori and Tanaka] has been used to define the effective

  16. An Observation-Driven Agent-Based Modeling and Analysis Framework for C. elegans Embryogenesis.

    PubMed

    Wang, Zi; Ramsey, Benjamin J; Wang, Dali; Wong, Kwai; Li, Husheng; Wang, Eric; Bao, Zhirong

    2016-01-01

    With cutting-edge live microscopy and image analysis, biologists can now systematically track individual cells in complex tissues and quantify cellular behavior over extended time windows. Computational approaches that utilize the systematic and quantitative data are needed to understand how cells interact in vivo to give rise to the different cell types and 3D morphology of tissues. An agent-based, minimum descriptive modeling and analysis framework is presented in this paper to study C. elegans embryogenesis. The framework is designed to incorporate the large amounts of experimental observations on cellular behavior and reserve data structures/interfaces that allow regulatory mechanisms to be added as more insights are gained. Observed cellular behaviors are organized into lineage identity, timing and direction of cell division, and path of cell movement. The framework also includes global parameters such as the eggshell and a clock. Division and movement behaviors are driven by statistical models of the observations. Data structures/interfaces are reserved for gene list, cell-cell interaction, cell fate and landscape, and other global parameters until the descriptive model is replaced by a regulatory mechanism. This approach provides a framework to handle the ongoing experiments of single-cell analysis of complex tissues where mechanistic insights lag data collection and need to be validated on complex observations.

  17. Identification of the dominant hydrological process and appropriate model structure of a karst catchment through stepwise simplification of a complex conceptual model

    NASA Astrophysics Data System (ADS)

    Chang, Yong; Wu, Jichun; Jiang, Guanghui; Kang, Zhiqiang

    2017-05-01

    Conceptual models often suffer from over-parameterization because the data available for calibration are limited. This leads to parameter non-uniqueness and equifinality, which can introduce considerable uncertainty into simulation results. Identifying the model structure that the available data can actually support remains a major challenge in hydrological research. In this paper, we adopt a multi-model framework to identify the dominant hydrological process and appropriate model structure of a karst spring located in Guilin city, China. For this catchment, the spring discharge is the only data available for model calibration. The framework starts with a relatively complex conceptual model based on the perception of the catchment, which is then simplified into several alternative models by gradually removing model components. A multi-objective approach is used to compare the performance of these models, and regional sensitivity analysis (RSA) is used to investigate parameter identifiability. The results show that this karst spring is mainly controlled by two different hydrological processes, one of which is threshold-driven, consistent with the fieldwork investigation. However, the model structure appropriate for simulating the spring discharge is much simpler than the actual aquifer structure and hydrological processes understood from the fieldwork: a simple linear reservoir with two different outlets is enough to simulate the spring discharge, and the detailed runoff processes in the catchment need not be represented in the conceptual model. A more complex model would require additional data beyond discharge to avoid serious deterioration of model predictions.

  18. The cultural implications of growth: Modeling nonlinear interaction of trait selection and population dynamics

    NASA Astrophysics Data System (ADS)

    Antoci, Angelo; Galeotti, Marcello; Russu, Paolo; Luigi Sacco, Pier

    2018-05-01

    In this paper, we study a nonlinear model of the interaction between trait selection and population dynamics, building on previous work of Ghirlanda et al. [Theor. Popul. Biol. 77, 181-188 (2010)] and Antoci et al. [Commun. Nonlinear Sci. Numer. Simul. 58, 92-106 (2018)]. We establish some basic properties of the model dynamics and present some simulations of the fine-grained structure of alternative dynamic regimes for chosen combinations of parameters. The role of the parameters that govern the reinforcement/corruption of maladaptive vs. adaptive traits is of special importance in determining the model's dynamic evolution. The main implication of this result is the need to pay special attention to the structural forces that may favor the emergence and consolidation of maladaptive traits in contemporary socio-economies, as it is the case, for example, for the stimulation of dysfunctional consumption habits and lifestyles in the pursuit of short-term profits.

  19. A Note on Recurring Misconceptions When Fitting Nonlinear Mixed Models.

    PubMed

    Harring, Jeffrey R; Blozis, Shelley A

    2016-01-01

    Nonlinear mixed-effects (NLME) models are used when analyzing continuous repeated measures data taken on each of a number of individuals where the focus is on characteristics of complex, nonlinear individual change. Challenges with fitting NLME models and interpreting analytic results have been well documented in the statistical literature. However, parameter estimates as well as fitted functions from NLME analyses in recent articles have been misinterpreted, suggesting the need for clarification of these issues before these misconceptions become fact. These misconceptions arise from the choice of popular estimation algorithms, namely the first-order linearization method (FO) and Gaussian-Hermite quadrature (GHQ) methods, and from how these choices necessarily lead to population-average (PA) or subject-specific (SS) interpretations of model parameters, respectively. These estimation approaches also affect the fitted function for the typical individual and the lack of fit of individuals' predicted trajectories.

  20. The cultural implications of growth: Modeling nonlinear interaction of trait selection and population dynamics.

    PubMed

    Antoci, Angelo; Galeotti, Marcello; Russu, Paolo; Luigi Sacco, Pier

    2018-05-01

    In this paper, we study a nonlinear model of the interaction between trait selection and population dynamics, building on previous work of Ghirlanda et al. [Theor. Popul. Biol. 77, 181-188 (2010)] and Antoci et al. [Commun. Nonlinear Sci. Numer. Simul. 58, 92-106 (2018)]. We establish some basic properties of the model dynamics and present some simulations of the fine-grained structure of alternative dynamic regimes for chosen combinations of parameters. The role of the parameters that govern the reinforcement/corruption of maladaptive vs. adaptive traits is of special importance in determining the model's dynamic evolution. The main implication of this result is the need to pay special attention to the structural forces that may favor the emergence and consolidation of maladaptive traits in contemporary socio-economies, as it is the case, for example, for the stimulation of dysfunctional consumption habits and lifestyles in the pursuit of short-term profits.

  1. An improved computer model for prediction of axial gas turbine performance losses

    NASA Technical Reports Server (NTRS)

    Jenkins, R. M.

    1984-01-01

    The calculation model performs a rapid preliminary pitchline optimization of axial gas turbine annular flowpath geometry, as well as an initial estimate of blade profile shapes, given only a minimum of thermodynamic cycle requirements. No geometric parameters need be specified. The following preliminary design data are determined: (1) the optimum flowpath geometry, within mechanical stress limits; (2) initial estimates of cascade blade shapes; and (3) predictions of expected turbine performance. The model uses an inverse calculation technique whereby blade profiles are generated by designing channels to yield a specified velocity distribution on the two walls. Velocity distributions are then used to calculate the cascade loss parameters. Calculated blade shapes are used primarily to determine whether the assumed velocity loadings are physically realistic. Model verification is accomplished by comparison of predicted turbine geometry and performance with an array of seven NASA single-stage axial gas turbine configurations.

  2. Variational prediction of the mechanical behavior of shape memory alloys based on thermal experiments

    NASA Astrophysics Data System (ADS)

    Junker, Philipp; Jaeger, Stefanie; Kastner, Oliver; Eggeler, Gunther; Hackl, Klaus

    2015-07-01

    In this work, we present simulations of shape memory alloys which serve as first examples demonstrating the predictive character of energy-based material models. We begin with a theoretical approach for the derivation of the caloric parts of the Helmholtz free energy. Afterwards, experimental results for DSC measurements are presented. Then, we recall a micromechanical model based on the principle of the minimum of the dissipation potential for the simulation of polycrystalline shape memory alloys. The previously determined caloric parts of the Helmholtz free energy close the set of model parameters without the need for parameter fitting. All quantities are derived directly from experiments. Finally, we compare finite element results for tension tests to experimental data and show that the model identified by thermal measurements can predict mechanically induced phase transformations and thus rationalize global material behavior without any further assumptions.

  3. Remote sensing-aided systems for snow quantification, evapotranspiration estimation, and their application in hydrologic models

    NASA Technical Reports Server (NTRS)

    Khorram, S.

    1977-01-01

    The design of general remote sensing-aided methodologies was studied to provide estimates of several important inputs to water yield forecast models. These input parameters are snow area extent, snow water content, and evapotranspiration. The study area is the Feather River Watershed (780,000 hectares) in Northern California. The general approach involved a stepwise sequence of identification of the required information, sample design, measurement/estimation, and evaluation of results. All the relevant and available information types needed in the estimation process were defined. These include Landsat, meteorological satellite, and aircraft imagery, topographic and geologic data, ground truth data, and climatic data from ground stations. A cost-effective multistage sampling approach was employed in quantification of all the required parameters. Physical and statistical models for both snow quantification and evapotranspiration estimation were developed. These models use information obtained from aerial and ground data through an appropriate statistical sampling design.

  4. Dynamic parameter identification of robot arms with servo-controlled electrical motors

    NASA Astrophysics Data System (ADS)

    Jiang, Zhao-Hui; Senda, Hiroshi

    2005-12-01

    This paper addresses the issue of dynamic parameter identification of robot manipulators with servo-controlled electrical motors. An assumption is made that all kinematic parameters, such as link lengths, are known, and only dynamic parameters containing mass, moment of inertia, and their functions need to be identified. First, we derive the dynamics of the robot arm in a form linear in the unknown dynamic parameters, taking the dynamic characteristics of the motor and servo unit into consideration. Then, we implement the parameter identification approach to identify the unknown parameters for each link separately. A pseudo-inverse matrix is used in the formulation of the parameter identification, and the optimal solution is guaranteed in the least-squares sense. A Direct Drive (DD) SCARA-type industrial robot arm, AdeptOne, is used as an application example of the parameter identification. Simulations and experiments for both open-loop and closed-loop control are carried out. Comparison of the results confirms the correctness and usefulness of the parameter identification and the derived dynamic model.
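
    Since the dynamics are linear in the unknown parameters, the identification reduces to stacking regressor equations tau = Y(q, dq, ddq) theta over many sampled trajectory points and solving for theta by least squares. A minimal sketch of this standard formulation, assuming the regressor matrices are already available (all names are illustrative):

        import numpy as np

        def identify_dynamic_parameters(Y_blocks, tau_blocks):
            """Least-squares estimate of the dynamic parameter vector theta.

            Y_blocks:   list of (m, p) regressor matrices, one per sample time
            tau_blocks: list of length-m joint torque vectors at the same times
            """
            Y = np.vstack(Y_blocks)              # stack into an over-determined system
            tau = np.concatenate(tau_blocks)
            theta, *_ = np.linalg.lstsq(Y, tau, rcond=None)  # pseudo-inverse solution
            return theta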

  5. Dynamical modeling and multi-experiment fitting with PottersWheel

    PubMed Central

    Maiwald, Thomas; Timmer, Jens

    2008-01-01

    Motivation: Modelers in Systems Biology need a flexible framework that allows them to easily create new dynamic models, investigate their properties and fit several experimental datasets simultaneously. Multi-experiment fitting is a powerful approach to estimate parameter values, to check the validity of a given model, and to discriminate competing model hypotheses. It requires high-performance integration of ordinary differential equations and robust optimization. Results: We here present the comprehensive modeling framework PottersWheel (PW) including novel functionalities to satisfy these requirements, with strong emphasis on the inverse problem, i.e. data-based modeling of partially observed and noisy systems like signal transduction pathways and metabolic networks. PW is designed as a MATLAB toolbox and includes numerous user interfaces. Deterministic and stochastic optimization routines are combined by fitting in logarithmic parameter space, allowing for robust parameter calibration. Model investigation includes statistical tests for model-data compliance, model discrimination, identifiability analysis and calculation of Hessian- and Monte-Carlo-based parameter confidence limits. A rich application programming interface is available for customization within the user's own MATLAB code. Within an extensive performance analysis, we identified and significantly improved an integrator-optimizer pair, which decreases the fitting duration for a realistic benchmark model by a factor of over 3000 compared to MATLAB with the optimization toolbox. Availability: PottersWheel is freely available for academic usage at http://www.PottersWheel.de/. The website contains a detailed documentation and introductory videos. The program has been intensively used since 2005 on Windows, Linux and Macintosh computers and does not require special MATLAB toolboxes. Contact: maiwald@fdm.uni-freiburg.de Supplementary information: Supplementary data are available at Bioinformatics online. PMID:18614583
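
    The log-parameter-space trick mentioned above is easy to illustrate: the optimizer works on u = log(p), so parameters stay positive and orders-of-magnitude scale differences are compressed. This is a generic Python sketch of the idea, not PottersWheel's MATLAB implementation:

        import numpy as np
        from scipy.optimize import minimize

        def fit_in_log_space(residuals, p0):
            """Minimize sum of squared residuals over log-transformed parameters."""
            u0 = np.log(np.asarray(p0, dtype=float))
            cost = lambda u: float(np.sum(residuals(np.exp(u)) ** 2))
            result = minimize(cost, u0, method="Nelder-Mead")
            return np.exp(result.x)              # back-transform to parameter space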

  6. Forecasting the need for medical specialists in Spain: application of a system dynamics model

    PubMed Central

    2010-01-01

    Background Spain has gone from a surplus to a shortage of medical doctors in very few years. Medium- and long-term planning for health professionals has become a high priority for health authorities. Methods We created a supply and demand/need simulation model for 43 medical specialties using system dynamics. The model includes demographic, education and labour market variables. Several scenarios were defined. Variables controllable by health planners can be set as parameters to simulate different scenarios. The model calculates the supply and the deficit or surplus. Experts set the ratio of specialists needed per 1000 inhabitants with a Delphi method. Results In the scenario of the baseline model with moderate population growth, the deficit of medical specialists will grow from 2% at present (2800 specialists) to 14.3% in 2025 (almost 21 000). The specialties with the greatest medium-term shortages are Anesthesiology, Orthopedic and Traumatic Surgery, Pediatric Surgery, Plastic Aesthetic and Reparatory Surgery, Family and Community Medicine, Pediatrics, Radiology, and Urology. Conclusions The model suggests the need to increase the number of students admitted to medical school. Training itineraries should be redesigned to facilitate mobility among specialties. In the meantime, the short-term supply gap is being filled by the immigration of physicians from the new member states of the European Union and from Latin America. PMID:21034458

  7. Data-driven strategies for robust forecast of continuous glucose monitoring time-series.

    PubMed

    Fiorini, Samuele; Martini, Chiara; Malpassi, Davide; Cordera, Renzo; Maggi, Davide; Verri, Alessandro; Barla, Annalisa

    2017-07-01

    Over the past decade, continuous glucose monitoring (CGM) has proven to be a very resourceful tool for diabetes management. To date, CGM devices are employed for both retrospective and online applications. Their use makes it possible to better describe the patient's pathology and to achieve better control of the patient's glycemia. The analysis of CGM sensor data makes it possible to observe a wide range of metrics, such as the glycemic variability during the day or the amount of time spent below or above certain glycemic thresholds. However, due to the high variability of glycemic signals among sensors and individuals, CGM data analysis is a non-trivial task. Standard signal filtering solutions fall short when appropriate model personalization is not applied. State-of-the-art data-driven strategies for online CGM forecasting rely upon the use of recursive filters: each time a new sample is collected, such models need to adjust their parameters in order to predict the next glycemic level. In this paper we aim at demonstrating that the problem of online CGM forecasting can be successfully tackled by personalized machine learning models that do not need to recursively update their parameters.
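
    As a schematic of the non-recursive, personalized approach, one can train a regression model once on a patient's own history of lagged CGM samples and then predict ahead without further parameter updates. The sketch below uses ridge regression as a stand-in for the paper's models; the sampling interval, lag, and horizon values are assumptions:

        import numpy as np
        from sklearn.linear_model import Ridge

        def lagged_dataset(glucose, lags=12, horizon=6):
            """Build (X, y) from a CGM series: 12 lags -> value 6 steps ahead
            (60 min of history, 30 min horizon at a hypothetical 5-min sampling)."""
            X, y = [], []
            for i in range(lags, len(glucose) - horizon):
                X.append(glucose[i - lags:i])
                y.append(glucose[i + horizon])
            return np.array(X), np.array(y)

        # Fit once on this patient's history; no recursive updating afterwards:
        # X, y = lagged_dataset(patient_cgm_series)
        # model = Ridge(alpha=1.0).fit(X, y)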

  8. Model predictions of ocular injury from 1315-nm laser light

    NASA Astrophysics Data System (ADS)

    Polhamus, Garrett D.; Zuclich, Joseph A.; Cain, Clarence P.; Thomas, Robert J.; Foltz, Michael

    2003-06-01

    With the advent of future weapons systems that employ high-energy lasers, the 1315 nm wavelength will present a new laser safety hazard to the armed forces. Experiments in non-human primates using this wavelength have demonstrated a range of ocular injuries, including corneal, lenticular and retinal lesions, as a function of pulse duration and spot size at the cornea. To improve our understanding of these phenomena, there is a need for a mathematical model that properly predicts these injuries and their dependence on appropriate exposure parameters. This paper describes the use of a finite difference model of laser thermal injury in the cornea and retina. The model was originally developed for use with shorter-wavelength laser irradiation and, as such, requires estimation of several key parameters used in the computations. The predictions from the model are compared to the experimental data, and conclusions are drawn regarding the ability of the model to properly follow the published observations at this wavelength.

  9. An approach to and web-based tool for infectious disease outbreak intervention analysis

    NASA Astrophysics Data System (ADS)

    Daughton, Ashlynn R.; Generous, Nicholas; Priedhorsky, Reid; Deshpande, Alina

    2017-04-01

    Infectious diseases are a leading cause of death globally. Decisions surrounding how to control an infectious disease outbreak currently rely on a subjective process involving surveillance and expert opinion. However, there are many situations where neither may be available. Modeling can fill gaps in the decision-making process by using available data to provide quantitative estimates of outbreak trajectories. Effective reduction of the spread of infectious diseases can be achieved through collaboration between the modeling community and the public health policy community. However, such collaboration is rare, resulting in a lack of models that meet the needs of the public health community. Here we show a Susceptible-Infectious-Recovered (SIR) model modified to include control measures that accepts parameter ranges, rather than parameter point estimates, and includes a web user interface for broad adoption. We apply the model to three diseases, measles, norovirus and influenza, to show the feasibility of its use and describe a research agenda to further promote interactions between decision makers and the modeling community.
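
    A minimal sketch of such a model: a standard SIR system in which a control measure cuts transmission by some efficacy once interventions begin, with beta and gamma drawn from ranges rather than fixed. All values here are illustrative assumptions, not the tool's defaults:

        import numpy as np
        from scipy.integrate import odeint

        def sir_with_control(y, t, beta, gamma, t_intervene, efficacy):
            """SIR derivatives; transmission drops by `efficacy` after t_intervene."""
            S, I, R = y
            b = beta * (1.0 - efficacy) if t >= t_intervene else beta
            return [-b * S * I, b * S * I - gamma * I, gamma * I]

        t = np.linspace(0.0, 120.0, 400)
        rng = np.random.default_rng(0)
        trajectories = []
        for _ in range(100):                       # sample the parameter ranges
            beta = rng.uniform(0.25, 0.35)
            gamma = rng.uniform(0.08, 0.12)
            trajectories.append(odeint(sir_with_control, [0.999, 0.001, 0.0], t,
                                       args=(beta, gamma, 30.0, 0.4)))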

  10. Polynomic nonlinear dynamical systems - A residual sensitivity method for model reduction

    NASA Technical Reports Server (NTRS)

    Yurkovich, S.; Bugajski, D.; Sain, M.

    1985-01-01

    The motivation for using polynomic combinations of system states and inputs to model nonlinear dynamical systems is founded upon the classical theories of analysis and function representation. A feature of such representations is the need to make available all possible monomials in these variables, up to the degree specified, so as to provide for the description of widely varying functions within a broad class. For a particular application, however, certain monomials may be quite superfluous. This paper examines the possibility of removing monomials from the model in accordance with the level of sensitivity displayed by the residuals to their absence. Critical in these studies is the effect of system input excitation, and of discarding monomial terms, upon the model parameter set. Therefore, model reduction is approached iteratively, with inputs redesigned at each iteration to ensure sufficient excitation of the remaining monomials for parameter approximation. Examples are reported to illustrate the performance of such model reduction approaches.
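
    The residual-sensitivity idea can be illustrated with a toy pruning loop: fit the full monomial library, then repeatedly drop the monomial whose removal increases the residual least, stopping when the fit degrades appreciably. This is a simplified stand-in for the iterative, input-redesigning procedure described above:

        import numpy as np

        def lstsq_residual(A, y):
            """Sum of squared residuals of the least-squares fit of y on A."""
            coef, *_ = np.linalg.lstsq(A, y, rcond=None)
            return float(np.sum((A @ coef - y) ** 2))

        def prune_monomials(Phi, y, tol=1e-3):
            """Phi: (N, M) matrix of monomial columns; drop insensitive columns."""
            keep = list(range(Phi.shape[1]))
            base = lstsq_residual(Phi[:, keep], y)
            while len(keep) > 1:
                trials = [lstsq_residual(Phi[:, [k for k in keep if k != j]], y)
                          for j in keep]
                j = int(np.argmin(trials))
                if trials[j] - base > tol:     # removal now hurts the fit: stop
                    break
                base = trials[j]
                keep.pop(j)                    # residual barely notices: drop it
            return keep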

  11. Alternatives to the fish early life-stage test: Developing a conceptual model for early fish development

    EPA Science Inventory

    Chronic fish toxicity is a key parameter for hazard classification and environmental risk assessment of chemicals, and the OECD 210 fish early life-stage (FELS) test is the primary guideline test used for various international regulatory programs. There exists a need to develop ...

  12. Alternative ways of using field-based estimates to calibrate ecosystem models and their implications for carbon cycle studies

    USGS Publications Warehouse

    He, Yujie; Zhuang, Qianlai; McGuire, David; Liu, Yaling; Chen, Min

    2013-01-01

    Model-data fusion is a process in which field observations are used to constrain model parameters. How observations are used to constrain parameters has a direct impact on the carbon cycle dynamics simulated by ecosystem models. In this study, we present an evaluation of several options for the use of observations in modeling regional carbon dynamics and explore the implications of those options. We calibrated the Terrestrial Ecosystem Model on a hierarchy of three vegetation classification levels for the Alaskan boreal forest: species level, plant-functional-type level (PFT level), and biome level, and we examined the differences in simulated carbon dynamics. Species-specific field-based estimates were directly used to parameterize the model for species-level simulations, while weighted averages based on species percent cover were used to generate estimates for PFT- and biome-level model parameterization. We found that calibrated key ecosystem process parameters differed substantially among species and overlapped for species that are categorized into different PFTs. Our analysis of parameter sets suggests that the PFT-level parameterizations primarily reflected the dominant species and that functional information on some species was lost from the PFT-level parameterizations. The biome-level parameterization was primarily representative of the needleleaf PFT and lost information on broadleaf species or PFT function. Our results indicate that PFT-level simulations may be potentially representative of the performance of species-level simulations, while biome-level simulations may result in biased estimates. Improved theoretical and empirical justifications for grouping species into PFTs or biomes are needed to adequately represent the dynamics of ecosystem functioning and structure.

  13. Estimating skin blood saturation by selecting a subset of hyperspectral imaging data

    NASA Astrophysics Data System (ADS)

    Ewerlöf, Maria; Salerud, E. Göran; Strömberg, Tomas; Larsson, Marcus

    2015-03-01

    Skin blood haemoglobin saturation (s_b) can be estimated with hyperspectral imaging using the wavelength (λ) range of 450-700 nm, where haemoglobin absorption displays distinct spectral characteristics. Depending on the image size and photon transport algorithm, computations may be demanding. Therefore, this work aims to evaluate subsets with a reduced number of wavelengths for s_b estimation. White Monte Carlo simulations are performed using a two-layered tissue model with discrete values for epidermal thickness (t_epi) and the reduced scattering coefficient (μ's), mimicking an imaging setup. A detected intensity look-up table is calculated for a range of model parameter values relevant to human skin, adding absorption effects in the post-processing. Skin model parameters, including absorbers, are: μ's(λ), t_epi, haemoglobin saturation (s_b), tissue fraction blood (f_blood) and tissue fraction melanin (f_mel). The skin model paired with the look-up table allows spectra to be calculated swiftly. Three inverse models with varying numbers of free parameters are evaluated: A(s_b, f_blood), B(s_b, f_blood, f_mel) and C (all parameters free). Fourteen wavelength candidates are selected by analysing the maximal spectral sensitivity to s_b while minimizing the sensitivity to f_blood. All possible combinations of these candidates with three, four and 14 wavelengths, as well as the full spectral range, are evaluated for estimating s_b for 1000 randomly generated evaluation spectra. The results show that the simplified models A and B estimated s_b accurately using four wavelengths (mean error 2.2% for model B). If the number of wavelengths was increased, the model complexity needed to be increased to avoid poor estimations.

  14. Estimation of real-time runway surface contamination using flight data recorder parameters

    NASA Astrophysics Data System (ADS)

    Curry, Donovan

    Within this research effort, the development of an analytic process for friction coefficient estimation is presented. Under static equilibrium, the sum of forces and moments acting on the aircraft, in the aircraft body coordinate system, while on the ground at any instant is equal to zero. Under this premise the longitudinal, lateral and normal forces due to landing are calculated, along with the individual deceleration components existent when an aircraft comes to a rest during ground roll. In order to validate this hypothesis, a six-degree-of-freedom aircraft model was created and landing tests were simulated on different surfaces. The simulated aircraft model includes a high-fidelity aerodynamic model, thrust model, landing gear model, friction model and antiskid model. Three main surfaces were defined in the friction model: dry, wet and snow/ice. Only the parameters recorded by an FDR are used directly from the aircraft model; all others are estimated or known a priori. The estimation of unknown parameters is also presented in this research effort. With all needed parameters, a comparison and validation with simulated and estimated data, under different runway conditions, is performed. Finally, this report presents results of a sensitivity analysis in order to provide a measure of reliability of the analytic estimation process. Linear and non-linear sensitivity analyses have been performed in order to quantify the level of uncertainty implicit in modeling estimated parameters and how they can affect the calculation of the instantaneous coefficient of friction. Using the approach of force and moment equilibrium about the CG at landing to reconstruct the instantaneous coefficient of friction appears to give a reasonably accurate estimate when compared to the simulated friction coefficient. This is also true when white noise is added to the FDR and estimated parameters and when crosswind is introduced to the simulation. After the linear analysis, the results show the minimum frequency at which the algorithm still provides moderately accurate data is 2 Hz. In addition, the linear analysis shows that with estimated parameters increased and decreased up to 25% at random, high-priority parameters have to be accurate to within at least +/-5% to have an effect of less than 1% change in the average coefficient of friction. Non-linear analysis results show that the algorithm can be considered reasonably accurate for all simulated cases when inaccuracies in the estimated parameters vary randomly and simultaneously up to +/-27%. At worst case, the maximum percentage change in average coefficient of friction is less than 10% for all surfaces.
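
    In its simplest longitudinal form, the force-balance idea reduces to solving the ground-roll equilibrium for the friction force and dividing by the normal load. The sketch below is a deliberately simplified toy (single lumped thrust, drag and lift terms; no moment balance or per-gear breakdown as in the full six-degree-of-freedom model), and all names are illustrative:

        def instantaneous_mu(mass, ax, drag, thrust, lift, g=9.81):
            """Toy longitudinal balance during ground roll.

            ax: measured longitudinal acceleration (negative while braking)
            drag, thrust, lift: aerodynamic drag, residual thrust, lift (N)
            Returns the friction coefficient implied by the remaining force.
            """
            normal_load = mass * g - lift                 # weight minus lift
            friction_force = thrust - drag - mass * ax    # from m*ax = T - D - F_fric
            return friction_force / normal_load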

  15. Efficient calibration for imperfect computer models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tuo, Rui; Wu, C. F. Jeff

    Many computer models contain unknown parameters which need to be estimated using physical observations. The calibration method based on Gaussian process models may, however, lead to unreasonable estimates for imperfect computer models. In this work, we extend previous work on this problem to calibration problems with stochastic physical data. We propose a novel method, called L2 calibration, and show its semiparametric efficiency. The conventional method of ordinary least squares is also studied; theoretical analysis shows that it is consistent but not efficient. Numerical examples show that the proposed method outperforms the existing ones.

  16. Density matrix Monte Carlo modeling of quantum cascade lasers

    NASA Astrophysics Data System (ADS)

    Jirauschek, Christian

    2017-10-01

    By including elements of the density matrix formalism, the semiclassical ensemble Monte Carlo method for carrier transport is extended to incorporate incoherent tunneling, known to play an important role in quantum cascade lasers (QCLs). In particular, this effect dominates electron transport across thick injection barriers, which are frequently used in terahertz QCL designs. A self-consistent model for quantum mechanical dephasing is implemented, eliminating the need for empirical simulation parameters. Our modeling approach is validated against available experimental data for different types of terahertz QCL designs.

  17. Key Parameters for Operator Diagnosis of BWR Plant Condition during a Severe Accident

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Clayton, Dwight A.; Poore, III, Willis P.

    2015-01-01

    The objective of this research is to examine the key information needed from nuclear power plant instrumentation to guide severe accident management and mitigation for boiling water reactor (BWR) designs (specifically, a BWR/4-Mark I), estimate the environmental conditions that the instrumentation will experience during a severe accident, and identify potential gaps in existing instrumentation that may require further research and development. This report notes the key parameters that instrumentation needs to measure to help operators respond to severe accidents. A follow-up report will assess severe accident environmental conditions as estimated by severe accident simulation model analysis for a specific US BWR/4-Mark I plant for those instrumentation systems considered most important for accident management purposes.

  18. Correcting Inadequate Model Snow Process Descriptions Dramatically Improves Mountain Hydrology Simulations

    NASA Astrophysics Data System (ADS)

    Pomeroy, J. W.; Fang, X.

    2014-12-01

    The vast effort in hydrology devoted to parameter calibration as a means to improve model performance assumes that the models concerned are not fundamentally wrong. By focussing on finding optimal parameter sets and ascribing poor model performance to parameter or data uncertainty, these efforts may fail to consider the need to improve models with more intelligent descriptions of hydrological processes. To test this hypothesis, a flexible physically based hydrological model including a full suite of snow hydrology processes as well as warm season, hillslope and groundwater hydrology was applied to Marmot Creek Research Basin, Canadian Rocky Mountains, where excellent driving meteorology and basin biophysical descriptions exist. Model parameters were set from values found in the basin or from similar environments; no parameters were calibrated. The model was tested against snow surveys and streamflow observations. The model used algorithms that describe snow redistribution, sublimation and forest canopy effects on snowmelt and evaporative processes that are rarely implemented in hydrological models. To investigate the contribution of these processes to model predictive capability, the model was "falsified" by deleting parameterisations for forest canopy snow mass and energy, blowing snow, intercepted rain evaporation, and sublimation. Model falsification by ignoring forest canopy processes contributed to a large increase in SWE errors for forested portions of the research basin, with RMSE increasing from 19 to 55 mm and mean bias (MB) increasing from 0.004 to 0.62. In the alpine tundra portion, removing blowing snow processes resulted in an increase in model SWE MB from 0.04 to 2.55 on north-facing slopes and from -0.006 to -0.48 on south-facing slopes. Eliminating these algorithms degraded streamflow prediction, with the Nash-Sutcliffe efficiency dropping from 0.58 to 0.22 and MB increasing from 0.01 to 0.09. These results show dramatic model improvements from including snow redistribution and melt processes associated with wind transport and forest canopies. As most hydrological models do not currently include these processes, it is suggested that modellers first improve the realism of model structures before trying to optimise what are inherently inadequate simulations of hydrology.

  19. Wake Vortex Inverse Model User's Guide

    NASA Technical Reports Server (NTRS)

    Lai, David; Delisi, Donald

    2008-01-01

    NorthWest Research Associates (NWRA) has developed an inverse model for inverting landing aircraft vortex data. The data used for the inversion are the time evolution of the lateral transport position and vertical position of both the port and starboard vortices. The inverse model performs iterative forward model runs using various estimates of vortex parameters, vertical crosswind profiles, and vortex circulation as a function of wake age. Forward model predictions of lateral transport and altitude are then compared with the observed data. Differences between the data and model predictions guide the choice of vortex parameter values, crosswind profile and circulation evolution in the next iteration. Iterations are performed until a user-defined criterion is satisfied. Currently, the inverse model is set to stop when the improvement in the rms deviation between the data and model predictions is less than 1 percent for two consecutive iterations. The forward model used in this inverse model is a modified version of the Shear-APA model. A detailed description of this forward model, the inverse model, and its validation are presented in a different report (Lai, Mellman, Robins, and Delisi, 2007). This document is a User's Guide for the Wake Vortex Inverse Model. Section 2 presents an overview of the inverse model program. Execution of the inverse model is described in Section 3. When executing the inverse model, a user is requested to provide the name of an input file which contains the inverse model parameters, the various datasets, and directories needed for the inversion. A detailed description of the list of parameters in the inversion input file is presented in Section 4. A user has an option to save the inversion results of each lidar track in a mat-file (a condensed data file in Matlab format). These saved mat-files can be used for post-inversion analysis. A description of the contents of the saved files is given in Section 5. An example of an inversion input file, with preferred parameter values, is given in Appendix A. An example of the plot generated at a normal completion of the inversion is shown in Appendix B.
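
    The iteration logic described above (run the forward model, compare rms misfit, stop after two consecutive sub-1% improvements) has roughly this general shape; `forward_model` and `propose_update` are hypothetical stand-ins for the Shear-APA run and the parameter-adjustment step:

        import numpy as np

        def invert(observed, params0, forward_model, propose_update, max_iter=200):
            """Generic iterate-forward-model inversion loop (illustrative)."""
            params, prev_rms, small_gains = params0, None, 0
            for _ in range(max_iter):
                predicted = forward_model(params)
                rms = float(np.sqrt(np.mean((predicted - observed) ** 2)))
                if prev_rms is not None and (prev_rms - rms) / prev_rms < 0.01:
                    small_gains += 1
                    if small_gains >= 2:       # < 1% gain twice in a row: stop
                        break
                else:
                    small_gains = 0
                prev_rms = rms
                params = propose_update(params, observed, predicted)
            return params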

  20. Kinetic Modeling of Sunflower Grain Filling and Fatty Acid Biosynthesis

    PubMed Central

    Durruty, Ignacio; Aguirrezábal, Luis A. N.; Echarte, María M.

    2016-01-01

    Grain growth and oil biosynthesis are complex processes that involve various enzymes placed in different sub-cellular compartments of the grain. In order to understand the mechanisms controlling grain weight and composition, we need mathematical models capable of simulating the dynamic behavior of the main components of the grain during the grain filling stage. In this paper, we present a non-structured mechanistic kinetic model developed for sunflower grains. The model was first calibrated for sunflower hybrid ACA855. The calibrated model was able to predict the theoretical amount of carbohydrate equivalents allocated to the grain, grain growth and the dynamics of the oil and non-oil fraction, while considering maintenance requirements and leaf senescence. Incorporating into the model the serial-parallel nature of fatty acid biosynthesis permitted a good representation of the kinetics of palmitic, stearic, oleic, and linoleic acids production. A sensitivity analysis showed that the relative influence of input parameters changed along grain development. Grain growth was mostly affected by the specific growth parameter (μ′) while fatty acid composition strongly depended on their own maximum specific rate parameters. The model was successfully applied to two additional hybrids (MG2 and DK3820). The proposed model can be the first building block toward the development of a more sophisticated model, capable of predicting the effects of environmental conditions on grain weight and composition, in a comprehensive and quantitative way. PMID:27242809

  1. Using field observations to inform thermal hydrology models of permafrost dynamics with ATS (v0.83)

    DOE PAGES

    Atchley, Adam L.; Painter, Scott L.; Harp, Dylan R.; ...

    2015-09-01

    Climate change is profoundly transforming the carbon-rich Arctic tundra landscape, potentially moving it from a carbon sink to a carbon source by increasing the thickness of soil that thaws on a seasonal basis. However, the modeling capability and precise parameterizations of the physical characteristics needed to estimate projected active layer thickness (ALT) are limited in Earth system models (ESMs). In particular, discrepancies in spatial scale between field measurements and Earth system models challenge validation and parameterization of hydrothermal models. A recently developed surface-subsurface model for permafrost thermal hydrology, the Advanced Terrestrial Simulator (ATS), is used in combination with field measurements to achieve the goals of constructing a process-rich model based on plausible parameters and to identify fine-scale controls of ALT in ice-wedge polygon tundra in Barrow, Alaska. An iterative model refinement procedure that cycles between borehole temperature and snow cover measurements and simulations functions to evaluate and parameterize different model processes necessary to simulate freeze-thaw processes and ALT formation. After model refinement and calibration, reasonable matches between simulated and measured soil temperatures are obtained, with the largest errors occurring during early summer above ice wedges (e.g., troughs). The results suggest that properly constructed and calibrated one-dimensional thermal hydrology models have the potential to provide reasonable representation of the subsurface thermal response and can be used to infer model input parameters and process representations. The models for soil thermal conductivity and snow distribution were found to be the most sensitive process representations. However, information on lateral flow and snowpack evolution might be needed to constrain model representations of surface hydrology and snow depth.

  2. Survey of simulation methods for modeling pulsed sieve-plate extraction columns

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Burkhart, L.

    1979-03-01

    The report first briefly considers the use of liquid-liquid extraction in nuclear fuel reprocessing and then describes the operation of the pulse column. Currently available simulation models of the column are reviewed, followed by an analysis of the information presently available from which the necessary parameters can be obtained for use in a model of the column. Finally, overall conclusions are given regarding the information needed to develop an accurate model of the column for materials accountability in fuel reprocessing plants. 156 references.

  3. Efficient calibration for imperfect computer models

    DOE PAGES

    Tuo, Rui; Wu, C. F. Jeff

    2015-12-01

    Many computer models contain unknown parameters which need to be estimated using physical observations. The calibration method based on Gaussian process models may, however, lead to unreasonable estimates for imperfect computer models. In this work, we extend previous work on this problem to calibration problems with stochastic physical data. We propose a novel method, called L2 calibration, and show its semiparametric efficiency. The conventional method of ordinary least squares is also studied; theoretical analysis shows that it is consistent but not efficient. Numerical examples show that the proposed method outperforms the existing ones.

  4. NTCP modelling of lung toxicity after SBRT comparing the universal survival curve and the linear quadratic model for fractionation correction.

    PubMed

    Wennberg, Berit M; Baumann, Pia; Gagliardi, Giovanna; Nyman, Jan; Drugge, Ninni; Hoyer, Morten; Traberg, Anders; Nilsson, Kristina; Morhed, Elisabeth; Ekberg, Lars; Wittgren, Lena; Lund, Jo-Åsmund; Levin, Nina; Sederholm, Christer; Lewensohn, Rolf; Lax, Ingmar

    2011-05-01

    In SBRT of lung tumours, no established relationship between dose-volume parameters and the incidence of lung toxicity has been found. The aim of this study is to compare the LQ model and the universal survival curve (USC) for calculating biologically equivalent doses in SBRT, to see if this improves knowledge of this relationship. Toxicity data on radiation pneumonitis grade 2 or more (RP2+) from 57 patients were used; 10.5% were diagnosed with RP2+. The lung DVHs were corrected for fractionation (LQ and USC) and analysed with the Lyman-Kutcher-Burman (LKB) model. In the LQ correction α/β = 3 Gy was used, and the USC parameters used were: α/β = 3 Gy, D(0) = 1.0 Gy, [Formula: see text] = 10, α = 0.206 Gy(-1) and d(T) = 5.8 Gy. In order to understand the relative contribution of different dose levels to the calculated NTCP, the concept of fractional NTCP was used. This may give insight into the question of whether "high doses to small volumes" or "low doses to large volumes" are most important for lung toxicity. NTCP analysis with the LKB model using parameters m = 0.4, D(50) = 30 Gy resulted in a volume dependence parameter (n) of n = 0.87 with LQ correction and n = 0.71 with USC correction. Using parameters m = 0.3, D(50) = 20 Gy, n = 0.93 with LQ correction and n = 0.83 with USC correction. In SBRT of lung tumours, NTCP modelling of lung toxicity comparing models (LQ, USC) for fractionation correction shows that low doses contribute less and high doses more to the NTCP when the USC model is used. Comparing NTCP modelling of SBRT data and data from breast cancer, lung cancer and whole lung irradiation implies that the response of the lung is treatment specific. More data are, however, needed in order to have more reliable modelling.
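
    The LQ fractionation correction applied to the DVH doses is the standard equivalent-dose-in-2-Gy-fractions formula: with α/β = 3 Gy as in the study, a dose D delivered in fractions of size d maps to EQD2 = D·(d + α/β)/(2 + α/β). A one-line helper for illustration:

        def eqd2(total_dose, dose_per_fraction, alpha_beta=3.0):
            """LQ-equivalent dose in 2-Gy fractions (all doses in Gy)."""
            return total_dose * (dose_per_fraction + alpha_beta) / (2.0 + alpha_beta)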

  5. Investigating the effects of the fixed and varying dispersion parameters of Poisson-gamma models on empirical Bayes estimates.

    PubMed

    Lord, Dominique; Park, Peter Young-Jin

    2008-07-01

    Traditionally, transportation safety analysts have used the empirical Bayes (EB) method to improve the estimate of the long-term mean of individual sites; to correct for the regression-to-the-mean (RTM) bias in before-after studies; and to identify hotspots or high-risk locations. The EB method combines two different sources of information: (1) the expected number of crashes estimated via crash prediction models, and (2) the observed number of crashes at individual sites. Crash prediction models have traditionally been estimated using a negative binomial (NB) (or Poisson-gamma) modeling framework due to the over-dispersion commonly found in crash data. A weight factor is used to assign the relative influence of each source of information on the EB estimate. This factor is estimated using the mean and variance functions of the NB model. Recent work has shown that the dispersion parameter can depend upon the covariates of NB models, especially for traffic flow-only models, and can vary across time periods, so there is a need to determine how these models may affect EB estimates. The objectives of this study are to examine how commonly used functional forms as well as fixed and time-varying dispersion parameters affect the EB estimates. To accomplish the study objectives, several traffic flow-only crash prediction models were estimated using a sample of rural three-legged intersections located in California. Two types of aggregated and time-specific models were produced: (1) the traditional NB model with a fixed dispersion parameter and (2) the generalized NB model (GNB) with a time-varying dispersion parameter, which is also dependent upon the covariates of the model. Several statistical methods were used to compare the fitting performance of the various functional forms. The results of the study show that the selection of the functional form of NB models has an important effect on EB estimates in terms of estimated values, weight factors, and dispersion parameters. Time-specific models with a varying dispersion parameter provide better statistical performance in terms of goodness-of-fit (GOF) than aggregated multi-year models. Furthermore, the identification of hazardous sites using the EB method can be significantly affected when a GNB model with a time-varying dispersion parameter is used. Thus, erroneously selecting a functional form may lead to selecting the wrong sites for treatment. The study concludes that transportation safety analysts should not automatically use an existing functional form for modeling motor vehicle crashes without conducting rigorous analyses to estimate the most appropriate functional form linking crashes with traffic flow.
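
    The weight factor at the heart of the EB method follows directly from the NB mean-variance relationship: with model prediction mu and dispersion parameter phi (so that Var = mu + mu²/phi), the weight on the model is w = phi/(phi + mu), and a covariate- or time-dependent dispersion simply makes phi site- or period-specific. A minimal sketch:

        def eb_estimate(mu, y, phi):
            """Empirical Bayes blend of model prediction mu and observed count y.

            phi: NB (inverse) dispersion parameter; may itself vary by site/time.
            """
            w = phi / (phi + mu)               # weight on the crash prediction model
            return w * mu + (1.0 - w) * y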

  6. Impact of implementation choices on quantitative predictions of cell-based computational models

    NASA Astrophysics Data System (ADS)

    Kursawe, Jochen; Baker, Ruth E.; Fletcher, Alexander G.

    2017-09-01

    'Cell-based' models provide a powerful computational tool for studying the mechanisms underlying the growth and dynamics of biological tissues in health and disease. An increasing amount of quantitative data with cellular resolution has paved the way for the quantitative parameterisation and validation of such models. However, the numerical implementation of cell-based models remains challenging, and little work has been done to understand to what extent implementation choices may influence model predictions. Here, we consider the numerical implementation of a popular class of cell-based models called vertex models, which are often used to study epithelial tissues. In two-dimensional vertex models, a tissue is approximated as a tessellation of polygons and the vertices of these polygons move due to mechanical forces originating from the cells. Such models have been used extensively to study the mechanical regulation of tissue topology in the literature. Here, we analyse how the model predictions may be affected by numerical parameters, such as the size of the time step, and non-physical model parameters, such as length thresholds for cell rearrangement. We find that vertex positions and summary statistics are sensitive to several of these implementation parameters. For example, the predicted tissue size decreases with decreasing cell cycle durations, and cell rearrangement may be suppressed by large time steps. These findings are counter-intuitive and illustrate that model predictions need to be thoroughly analysed and implementation details carefully considered when applying cell-based computational models in a quantitative setting.

  7. A review of international pharmacy-based minor ailment services and proposed service design model.

    PubMed

    Aly, Mariyam; García-Cárdenas, Victoria; Williams, Kylie; Benrimoj, Shalom I

    2018-01-05

    The need to consider sustainable healthcare solutions is essential. An innovative strategy used to promote minor ailment care is the utilisation of community pharmacists to deliver minor ailment services (MASs). Promoting higher levels of self-care can potentially reduce the strain on existing resources. To explore the features of international MASs, including their similarities and differences, and to consider the essential elements of a MAS design model, a grey literature search strategy was completed in June 2017 to comply with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses standard. This included (1) Google/Yahoo! search engines, (2) targeted websites, and (3) contact with commissioning organisations. Executive summaries, tables of contents and title pages of documents were reviewed. Key characteristics of MASs were extracted and a MAS model was developed. A total of 147 publications were included in the review. Key service elements identified included eligibility, accessibility, staff involvement, and reimbursement systems. Several factors need to be considered when designing a MAS model, including contextualisation of the MAS to the market. Stakeholder engagement, service planning, governance, implementation and review have emerged as key aspects of a design model. MASs differ in their structural parameters. Consideration of these parameters is necessary when devising MAS aims and assessing outcomes to promote the sustainability and success of the service.

  8. A new Bayesian Earthquake Analysis Tool (BEAT)

    NASA Astrophysics Data System (ADS)

    Vasyura-Bathke, Hannes; Dutta, Rishabh; Jónsson, Sigurjón; Mai, Martin

    2017-04-01

    Modern earthquake source estimation studies increasingly use non-linear optimization strategies to estimate kinematic rupture parameters, often considering geodetic and seismic data jointly. However, the optimization process is complex and consists of several steps that need to be followed in the earthquake parameter estimation procedure. These include pre-describing or modeling the fault geometry, calculating the Green's functions (often assuming a layered elastic half-space), and estimating the distributed final slip and possibly other kinematic source parameters. Recently, Bayesian inference has become popular for estimating posterior distributions of earthquake source model parameters given measured/estimated/assumed data and model uncertainties. For instance, some research groups consider uncertainties of the layered medium and propagate these to the source parameter uncertainties. Other groups make use of informative priors to reduce the model parameter space. In addition, innovative sampling algorithms have been developed that efficiently explore the often high-dimensional parameter spaces. Compared to earlier studies, these improvements have resulted in overall more robust source model parameter estimates that include uncertainties. However, the computational demands of these methods are high and estimation codes are rarely distributed along with the published results. Even if codes are made available, it is often difficult to assemble them into a single optimization framework as they are typically coded in different programming languages. Therefore, further progress and future applications of these methods/codes are hampered, while reproducibility and validation of results have become essentially impossible. In the spirit of providing open-access and modular codes to facilitate progress and reproducible research in earthquake source estimation, we undertook the effort of producing BEAT, a Python package that comprises all the above-mentioned features in one single programming environment. The package is built on top of the pyrocko seismological toolbox (www.pyrocko.org) and makes use of the pymc3 module for Bayesian statistical model fitting. BEAT is an open-source package (https://github.com/hvasbath/beat) and we encourage and solicit contributions to the project. In this contribution, we present our strategy for developing BEAT, show application examples, and discuss future developments.

  9. Contribution of the International Reference Ionosphere to the progress of the ionospheric representation

    NASA Astrophysics Data System (ADS)

    Bilitza, Dieter

    2017-04-01

    The International Reference Ionosphere (IRI), a joint project of the Committee on Space Research (COSPAR) and the International Union of Radio Science (URSI), is a data-based reference model for the ionosphere, and since 2014 it has also been recognized as the ISO (International Organization for Standardization) standard for the ionosphere. The model is a synthesis of most of the available and reliable observations of ionospheric parameters, combining ground and space measurements. This presentation reviews the steady progress toward a more and more accurate representation of the ionospheric plasma parameters accomplished during the last decade of IRI model improvements. Understandably, a data-based model is only as good as the data foundation on which it is built. We will discuss areas where we are in need of more data to obtain a more solid and continuous data foundation in space and time. We will also take a look at still-existing discrepancies between simultaneous measurements of the same parameter with different measurement techniques and discuss the approach taken in the IRI model to deal with these conflicts. In conclusion, we will provide an outlook on development activities that may result in significant future improvements of the accurate representation of the ionosphere in the IRI model.

  10. Analytical Computation of Effective Grid Parameters for the Finite-Difference Seismic Waveform Modeling With the PREM, IASP91, SP6, and AK135

    NASA Astrophysics Data System (ADS)

    Toyokuni, G.; Takenaka, H.

    2007-12-01

    We propose a method to obtain effective grid parameters for the finite-difference (FD) method with standard Earth models using analytical means. In spite of the broad use of the heterogeneous FD formulation for seismic waveform modeling, accurate treatment of material discontinuities inside grid cells has been a serious problem for many years. One possible way to solve this problem is to introduce effective grid elastic moduli and densities (effective parameters) calculated by volume harmonic averaging of elastic moduli and volume arithmetic averaging of density in grid cells. This scheme enables us to put a material discontinuity at an arbitrary position within the spatial grid. Most methods used today for synthetic seismogram calculation rely on standard Earth models, such as the PREM, IASP91, SP6, and AK135, represented as functions of normalized radius. For FD computation of seismic waveforms with such models, we first need accurate treatment of material discontinuities in radius. This study provides a numerical scheme for analytical calculation of the effective parameters for arbitrary spatial grids in the radial direction for these four major standard Earth models, making the best use of their functional features. The scheme obtains the integral volume averages analytically through partial fraction decompositions (PFDs) and integral formulae. We have developed a FORTRAN subroutine to perform the computations, which is open for use in a large variety of FD schemes ranging from 1-D to 3-D, with conventional and staggered grids. In the presentation, we show numerical examples displaying the accuracy of the FD synthetics simulated with the analytical effective parameters.
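
    The averaging rules themselves are simple; the contribution described above is evaluating them analytically for the models' polynomial functions via PFDs. As a numerical stand-in for those closed-form integrals, the cell averages for a finely sampled radial grid cell look like this:

        import numpy as np

        def effective_modulus(modulus_samples):
            """Volume-harmonic average of an elastic modulus over a grid cell."""
            m = np.asarray(modulus_samples, dtype=float)
            return 1.0 / np.mean(1.0 / m)

        def effective_density(density_samples):
            """Volume-arithmetic average of density over a grid cell."""
            return float(np.mean(density_samples))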

  11. Understanding controls of hydrologic processes across two monolithological catchments using model-data integration

    NASA Astrophysics Data System (ADS)

    Xiao, D.; Shi, Y.; Li, L.

    2016-12-01

    Field measurements are important for understanding the fluxes of water, energy, sediment, and solute in the Critical Zone; however, they are expensive in time, money, and labor. This study aims to assess the model predictability of hydrological processes in a watershed using information from another intensively measured watershed. We compare two watersheds of different lithology using national datasets, field measurements, and the physics-based model Flux-PIHM. We focus on two monolithological, forested watersheds under the same climate in the Shale Hills Susquehanna CZO in central Pennsylvania: the shale-based Shale Hills (SSH, 0.08 km2) and the sandstone-based Garner Run (GR, 1.34 km2). We first tested the transferability of calibration coefficients from SSH to GR. We found that, without any calibration, the model successfully predicts seasonal average soil moisture and discharge, which shows the advantage of a physics-based model; however, it cannot precisely capture some peaks or the summer runoff. The model reproduces the GR field data better after calibrating the soil hydrology parameters. In particular, the percentage of sand turns out to be a critical parameter in reproducing the data. With sandstone being the dominant lithology, GR has a much higher sand percentage than SSH (48.02% vs. 29.01%), leading to higher hydraulic conductivity, lower overall water storage capacity, and in general lower soil moisture. This is consistent with area-averaged soil moisture observations using the cosmic-ray soil moisture observing system (COSMOS) at the two sites. This work indicates that some parameters, including evapotranspiration parameters, are transferable due to similar climatic and land cover conditions. However, the key parameters that control soil moisture, including the sand percentage, need to be recalibrated, reflecting the key role of soil hydrological properties.

  12. Force-directed visualization for conceptual data models

    NASA Astrophysics Data System (ADS)

    Battigaglia, Andrew; Sutter, Noah

    2017-03-01

    Conceptual data models are increasingly stored in eXtensible Markup Language (XML) format because of its portability between different systems and the ability of databases to use it for storing data. However, when capturing business or design needs, an organized graphical format is preferred, both to facilitate communication and to elicit as much input as possible from users and subject-matter experts. Existing methods for this conversion are not specific enough to capture all the needs of conceptual data modeling and cannot handle large numbers of relationships between entities. This paper describes a modeling solution that renders conceptual data models stored in XML as well-organized, structured diagrams. A force layout with several tunable parameters is applied to the diagram to produce entity relationships that are both compact and easy to traverse.
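
    The abstract does not give the layout equations, but a force layout of this kind typically alternates pairwise repulsion with attraction along relationship edges, in the style of Fruchterman and Reingold. A minimal sketch, with all names and constants illustrative:

    ```python
    import numpy as np

    def force_layout_step(pos, edges, k=1.0, step=0.05):
        """One iteration of a Fruchterman-Reingold-style force layout.

        pos: (n, 2) node positions; edges: (i, j) index pairs for entity
        relationships; k: ideal edge length (repulsion ~ k^2/d,
        attraction ~ d^2/k).
        """
        n = len(pos)
        disp = np.zeros_like(pos)
        # Repulsion between all node pairs keeps the diagram spread out.
        for i in range(n):
            delta = pos[i] - pos
            dist = np.maximum(np.linalg.norm(delta, axis=1), 1e-9)
            disp[i] += ((delta / dist[:, None]) * (k**2 / dist)[:, None]).sum(axis=0)
        # Attraction along edges pulls related entities together.
        for i, j in edges:
            delta = pos[i] - pos[j]
            dist = max(np.linalg.norm(delta), 1e-9)
            pull = (delta / dist) * (dist**2 / k)
            disp[i] -= pull
            disp[j] += pull
        # Move each node a fixed "temperature" step along its net force.
        norms = np.maximum(np.linalg.norm(disp, axis=1, keepdims=True), 1e-9)
        return pos + step * disp / norms
    ```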

  13. Evaluating alternate models to estimate genetic parameters of calving traits in United Kingdom Holstein-Friesian dairy cattle.

    PubMed

    Eaglen, Sophie A E; Coffey, Mike P; Woolliams, John A; Wall, Eileen

    2012-07-28

    The focus in dairy cattle breeding is gradually shifting from production to functional traits, and genetic parameters of calving traits are estimated more frequently. However, across countries, various statistical models are used to estimate these parameters. This study evaluates different models for calving ease and stillbirth in United Kingdom Holstein-Friesian cattle. Data from first and later parity records were used. Genetic parameters for calving ease, stillbirth and gestation length were estimated using the restricted maximum likelihood method, considering different models, i.e. sire (-maternal grandsire), animal, univariate and bivariate models. Gestation length was fitted as a correlated indicator trait and, for all three traits, genetic correlations between first and later parities were estimated. Potential bias in estimates was avoided by acknowledging a possible environmental direct-maternal covariance. The total heritable variance was estimated for each trait to discuss its theoretical importance and practical value. Prediction error variances and accuracies were calculated to compare the models. On average, direct and maternal heritabilities for calving traits were low, except for direct gestation length. Calving ease in first parity had a significant and negative direct-maternal genetic correlation. Gestation length was maternally correlated to stillbirth in first parity and directly correlated to calving ease in later parities. Multi-trait models had a slightly greater predictive ability than univariate models, especially for the lowly heritable traits. The computation time needed for sire (-maternal grandsire) models was much shorter than for animal models, with only small differences in accuracy. The sire (-maternal grandsire) model was robust when additional genetic components were estimated, while the equivalent animal model had difficulty reaching convergence. For the evaluation of calving traits, multi-trait models show a slight advantage over univariate models. Extended sire (-maternal grandsire) models are more practical and robust than animal models. The estimated genetic parameters for calving traits of UK Holstein cattle are consistent with the literature. Calculating an aggregate estimated breeding value including direct and maternal values should encourage breeders to consider both direct and maternal effects in selection decisions.
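
    For readers unfamiliar with the quantities involved: once REML has produced variance components, the direct and maternal heritabilities and the direct-maternal genetic correlation follow directly. A worked toy example; the numbers below are invented for illustration, not the paper's estimates.

    ```python
    import math

    # Invented variance components for a direct-maternal model.
    var_direct = 0.04      # direct additive genetic variance
    var_maternal = 0.02    # maternal additive genetic variance
    cov_dm = -0.01         # direct-maternal genetic covariance
    var_residual = 0.95    # residual variance

    # Phenotypic variance under a simple direct-maternal decomposition.
    var_phenotypic = var_direct + var_maternal + cov_dm + var_residual

    h2_direct = var_direct / var_phenotypic        # direct heritability
    h2_maternal = var_maternal / var_phenotypic    # maternal heritability
    r_dm = cov_dm / math.sqrt(var_direct * var_maternal)  # genetic correlation

    print(f"h2_d={h2_direct:.3f}, h2_m={h2_maternal:.3f}, r_dm={r_dm:.2f}")
    # Low heritabilities and a negative direct-maternal correlation,
    # qualitatively matching the pattern reported for calving ease.
    ```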

  14. Evaluating alternate models to estimate genetic parameters of calving traits in United Kingdom Holstein-Friesian dairy cattle

    PubMed Central

    2012-01-01

    Background The focus in dairy cattle breeding is gradually shifting from production to functional traits, and genetic parameters of calving traits are estimated more frequently. However, across countries, various statistical models are used to estimate these parameters. This study evaluates different models for calving ease and stillbirth in United Kingdom Holstein-Friesian cattle. Methods Data from first and later parity records were used. Genetic parameters for calving ease, stillbirth and gestation length were estimated using the restricted maximum likelihood method, considering different models, i.e. sire (-maternal grandsire), animal, univariate and bivariate models. Gestation length was fitted as a correlated indicator trait and, for all three traits, genetic correlations between first and later parities were estimated. Potential bias in estimates was avoided by acknowledging a possible environmental direct-maternal covariance. The total heritable variance was estimated for each trait to discuss its theoretical importance and practical value. Prediction error variances and accuracies were calculated to compare the models. Results and discussion On average, direct and maternal heritabilities for calving traits were low, except for direct gestation length. Calving ease in first parity had a significant and negative direct-maternal genetic correlation. Gestation length was maternally correlated to stillbirth in first parity and directly correlated to calving ease in later parities. Multi-trait models had a slightly greater predictive ability than univariate models, especially for the lowly heritable traits. The computation time needed for sire (-maternal grandsire) models was much shorter than for animal models, with only small differences in accuracy. The sire (-maternal grandsire) model was robust when additional genetic components were estimated, while the equivalent animal model had difficulty reaching convergence. Conclusions For the evaluation of calving traits, multi-trait models show a slight advantage over univariate models. Extended sire (-maternal grandsire) models are more practical and robust than animal models. The estimated genetic parameters for calving traits of UK Holstein cattle are consistent with the literature. Calculating an aggregate estimated breeding value including direct and maternal values should encourage breeders to consider both direct and maternal effects in selection decisions. PMID:22839757

  15. The ASMEx snow slab experiment: snow microwave radiative transfer (SMRT) model evaluation

    NASA Astrophysics Data System (ADS)

    Sandells, Melody; Löwe, Henning; Picard, Ghislain; Dumont, Marie; Essery, Richard; Floury, Nicolas; Kontu, Anna; Lemmetyinen, Juha; Maslanka, William; Mätzler, Christian; Morin, Samuel; Wiesmann, Andreas

    2017-04-01

    A major uncertainty in snow microwave modelling to date has been the treatment of the snow microstructure. Although observations of microstructural parameters such as the optical grain diameter, specific surface area and correlation length have improved drastically over the last few years, scale factors have been needed to derive the parameters used in microwave emission models from these observations. Previous work has shown that a major difference between electromagnetic models of scattering coefficients is due to the specific snow microstructure models used. The snow microwave radiative transfer model (SMRT) is a new model developed to advance understanding of the role of microstructure and to isolate the different assumptions in existing microwave models that collectively hinder the interpretation of model intercomparison studies. SMRT is implemented in Python and is modular, which allows switching between different representations in its various components. Here, the role of microstructure is examined with the Improved Born Approximation electromagnetic model. The model is evaluated against scattering and absorption coefficients derived from radiometer measurements of snow slabs taken as part of the Arctic Snow Microstructure Experiment (ASMEx), which took place in Sodankylä, Finland, over two seasons. Microtomography observations of slab samples were used to determine parameters for five microstructure models: spherical, exponential, sticky hard sphere, Teubner-Strey and Gaussian random field. SMRT brightness temperature simulations are also compared with radiometric observations of the snow slabs over a reflector plate and an absorber substrate. Agreement between simulations and observations is generally good, except for slabs that are highly anisotropic.
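
    The modularity described here is visible in SMRT's public Python interface, where the microstructure representation is a single argument. A minimal sketch based on the model's documented API; the layer values are illustrative, not the ASMEx slab properties.

    ```python
    from smrt import make_snowpack, make_model, sensor_list

    # One-layer slab with an exponential microstructure; changing
    # microstructure_model (e.g. to "sticky_hard_spheres") swaps the
    # representation without touching the rest of the model.
    snowpack = make_snowpack(thickness=[0.2], microstructure_model="exponential",
                             density=[320.0], temperature=[265.0],
                             corr_length=[8e-5])

    model = make_model("iba", "dort")        # Improved Born Approximation + DORT
    sensor = sensor_list.passive(37e9, 55)   # 37 GHz radiometer, 55 deg incidence
    result = model.run(sensor, snowpack)
    print(result.TbV(), result.TbH())        # simulated brightness temperatures
    ```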

  16. Inner Radiation Belt Dynamics and Climatology

    NASA Astrophysics Data System (ADS)

    Guild, T. B.; O'Brien, P. P.; Looper, M. D.

    2012-12-01

    We present preliminary results of inner belt proton data assimilation using an augmented version of the Selesnick et al. Inner Zone Model (SIZM). By varying modeled physics parameters and solar particle injection parameters to generate a large ensemble of inner belt states, then optimizing the ensemble-member weights against inner belt observations from SAMPEX/PET at LEO and HEO/DOS at high altitude, we obtain the best-fit state of the inner belt. We need to fully sample the range of solar proton injection sources among the ensemble members to ensure reasonable agreement between the model ensemble and observations. Once this is accomplished, we find the method is fairly robust. We will demonstrate the data assimilation by presenting an extended interval of solar proton injections and losses, illustrating how these short-term dynamics dominate long-term inner belt climatology.
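
    One simple realization of the weighting step is a constrained least-squares fit of precomputed ensemble-member outputs to the observations. The sketch below uses non-negative least squares as our illustration; the actual estimator used in the study may differ.

    ```python
    import numpy as np
    from scipy.optimize import nnls

    def fit_ensemble_weights(ensemble, observations):
        """ensemble: (n_members, n_obs) simulated observables per run;
        observations: (n_obs,) measured values. Returns normalized
        non-negative weights and the residual norm of the fit."""
        weights, residual = nnls(ensemble.T, np.asarray(observations, float))
        total = weights.sum()
        return (weights / total if total > 0 else weights), residual
    ```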

  17. 3D tomographic reconstruction using geometrical models

    NASA Astrophysics Data System (ADS)

    Battle, Xavier L.; Cunningham, Gregory S.; Hanson, Kenneth M.

    1997-04-01

    We address the issue of reconstructing an object of constant interior density in the context of 3D tomography where there is prior knowledge about the unknown shape. We explore the direct estimation of the parameters of a chosen geometrical model from a set of radiographic measurements, rather than performing operations (segmentation for example) on a reconstructed volume. The inverse problem is posed in the Bayesian framework. A triangulated surface describes the unknown shape and the reconstruction is computed with a maximum a posteriori (MAP) estimate. The adjoint differentiation technique computes the derivatives needed for the optimization of the model parameters. We demonstrate the usefulness of the approach and emphasize the techniques of designing forward and adjoint codes. We use the system response of the University of Arizona Fast SPECT imager to illustrate this method by reconstructing the shape of a heart phantom.
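
    Schematically, the MAP estimate minimizes the negative log-posterior over the shape parameters, with the data-misfit gradient supplied by the adjoint of the forward projector. A hedged sketch of that loop; all callables here are placeholders, not the authors' code.

    ```python
    import numpy as np

    def map_estimate(theta0, forward, adjoint_grad, prior_grad, data,
                     lr=1e-2, n_iter=200):
        """Gradient descent on -log p(theta | data).

        forward(theta)              -> simulated radiographic projections
        adjoint_grad(theta, resid)  -> gradient of the data-misfit term,
                                       as adjoint differentiation provides
        prior_grad(theta)           -> gradient of the -log prior
        """
        theta = np.array(theta0, dtype=float)
        for _ in range(n_iter):
            residual = forward(theta) - data
            theta -= lr * (adjoint_grad(theta, residual) + prior_grad(theta))
        return theta
    ```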

  18. Emulating a System Dynamics Model with Agent-Based Models: A Methodological Case Study in Simulation of Diabetes Progression

    DOE PAGES

    Schryver, Jack; Nutaro, James; Shankar, Mallikarjun

    2015-10-30

    An agent-based simulation model hierarchy emulating disease states and behaviors critical to the progression of type 2 diabetes was designed and implemented in the DEVS framework. The models are translations of basic elements of an established system dynamics model of diabetes. In this hierarchy, the system dynamics model, which mimics diabetes progression over an aggregated U.S. population, was disaggregated and reconstructed bottom-up at the individual (agent) level. Four levels of model complexity were defined in order to systematically evaluate which parameters are needed to mimic the outputs of the system dynamics model. The four estimated models attempted to replicate the stock counts representing disease states in the system dynamics model, while estimating the impacts of an elderliness factor, an obesity factor, and health-related behavioral parameters. Health-related behavior was modeled as a simple realization of the Theory of Planned Behavior: a joint function of individual attitude and the diffusion of social norms spreading over each agent's social network. Although the most complex agent-based simulation model contained 31 adjustable parameters, all models were considerably less complex than the system dynamics model, which required numerous time-series inputs to make its predictions. All three elaborations of the baseline model provided significantly improved fits to the output of the system dynamics model. The performances of the baseline agent-based model and its extensions illustrate a promising approach for translating complex system dynamics models into agent-based alternatives that are both conceptually simpler and capable of capturing the main effects of complex local agent-agent interactions.
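
    A toy version of that behavioral rule: each agent's intention is a weighted combination of its own attitude and the mean attitude on its social network, and behavior is a random draw against that intention. The weights and functional form below are invented for illustration.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def update_behaviors(attitude, adjacency, w_attitude=0.6, w_norm=0.4):
        """attitude: (n,) values in [0, 1]; adjacency: (n, n) 0/1 network.
        Returns a 0/1 behavior realization per agent."""
        degree = np.maximum(adjacency.sum(axis=1), 1)   # avoid divide-by-zero
        social_norm = adjacency @ attitude / degree     # mean neighbor attitude
        intention = w_attitude * attitude + w_norm * social_norm
        return (rng.random(attitude.size) < intention).astype(int)
    ```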

  19. Emulating a System Dynamics Model with Agent-Based Models: A Methodological Case Study in Simulation of Diabetes Progression

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schryver, Jack; Nutaro, James; Shankar, Mallikarjun

    An agent-based simulation model hierarchy emulating disease states and behaviors critical to the progression of type 2 diabetes was designed and implemented in the DEVS framework. The models are translations of basic elements of an established system dynamics model of diabetes. In this hierarchy, the system dynamics model, which mimics diabetes progression over an aggregated U.S. population, was disaggregated and reconstructed bottom-up at the individual (agent) level. Four levels of model complexity were defined in order to systematically evaluate which parameters are needed to mimic the outputs of the system dynamics model. The four estimated models attempted to replicate the stock counts representing disease states in the system dynamics model, while estimating the impacts of an elderliness factor, an obesity factor, and health-related behavioral parameters. Health-related behavior was modeled as a simple realization of the Theory of Planned Behavior: a joint function of individual attitude and the diffusion of social norms spreading over each agent's social network. Although the most complex agent-based simulation model contained 31 adjustable parameters, all models were considerably less complex than the system dynamics model, which required numerous time-series inputs to make its predictions. All three elaborations of the baseline model provided significantly improved fits to the output of the system dynamics model. The performances of the baseline agent-based model and its extensions illustrate a promising approach for translating complex system dynamics models into agent-based alternatives that are both conceptually simpler and capable of capturing the main effects of complex local agent-agent interactions.

  20. Toward improved simulation of river operations through integration with a hydrologic model

    USGS Publications Warehouse

    Morway, Eric D.; Niswonger, Richard G.; Triana, Enrique

    2016-01-01

    Advanced modeling tools are needed for informed water resources planning and management. Two classes of modeling tools are often used to this end: (1) distributed-parameter hydrologic models for quantifying supply, and (2) river-operation models for sorting out demands under rule-based systems such as the prior-appropriation doctrine. Within each of these two broad classes there are many software tools that excel at simulating the processes specific to their discipline, but that have historically over-simplified, or at worst completely neglected, aspects of the other. As a result, water managers who rely on river-operation models for administering water resources need improved tools for representing spatially and temporally varying groundwater resources in conjunctive-use systems. A new tool is described that improves the representation of groundwater/surface-water (GW-SW) interaction within a river-operations modeling context and, in so doing, advances the evaluation of the system-wide hydrologic consequences of new or altered management regimes.
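
    The GW-SW exchange that such couplings must represent is commonly written as a conductance relation between aquifer head and stream stage. The MODFLOW-style formulation below is a generic sketch; the abstract does not specify the tool's actual equations.

    ```python
    def streambed_flux(head_aquifer, stage_stream, conductance):
        """Stream-aquifer exchange, positive when water leaves the aquifer.

        conductance ~ K_streambed * bed_length * bed_width / bed_thickness
        (units of area per time), so the flux has units of volume per time.
        """
        return conductance * (head_aquifer - stage_stream)
    ```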
