Science.gov

Sample records for additional model parameters

  1. Parameter Estimation for a Physically-Based Model Using Multi-Objective Approach Constrained With Additional Internal States

    NASA Astrophysics Data System (ADS)

    Zhang, G.; Fenicia, F.; Savenije, H. H.

    2007-12-01

    Therefore, in addition to the use of stream flow measurements for model calibration and uncertainty analysis, groundwater table gauging data were also used to help constrain the parameter space. In model parameter identification, objective functions in favor of both high flows and low flows were employed to optimize the model performance. Results of this study show that parameters for subsurface processes are better identifiable than those for surface processes. This work also demonstrates that MOSCEM-UA is an efficient tool in parameter identification, giving more insight into the structural behavior of the model.

  2. Separating response-execution bias from decision bias: arguments for an additional parameter in Ratcliff's diffusion model.

    PubMed

    Voss, Andreas; Voss, Jochen; Klauer, Karl Christoph

    2010-11-01

    Diffusion model data analysis permits the disentangling of different processes underlying the effects of experimental manipulations. Estimates can be provided for the speed of information accumulation, for the amount of information used to draw conclusions, and for a decision bias. One parameter describes the duration of non-decisional processes, including the duration of motor-response execution. In the default diffusion model, it is implicitly assumed that both responses are executed with the same speed. In some applications of the diffusion model, this assumption will be violated, which will lead to biased parameter estimates. Consequently, we suggest accounting explicitly for differences in the speed of response execution for both responses. Results from a simulation study illustrate that parameter estimates from the default model are biased if the speed of response execution differs between responses. A second simulation study shows that large trial numbers (N > 1,000) are needed to detect whether differences in observed response times are driven by differences in response-execution speed. PMID:20030967

  3. An Additional Approach to Model Current Followers and Amplifiers with Electronically Controllable Parameters from Commercially Available ICs

    NASA Astrophysics Data System (ADS)

    Sotner, R.; Kartci, A.; Jerabek, J.; Herencsar, N.; Dostal, T.; Vrba, K.

    2012-12-01

    Several behavioral models of current active elements for experimental purposes are introduced in this paper. These models are based on commercially available devices. They are suitable for experimental tests of current- and mixed-mode filters, oscillators, and other circuits (employing current-mode active elements) frequently used in analog signal processing, without the need for on-chip fabrication of a dedicated active element. Several methods of electronic control of intrinsic resistance in the proposed behavioral models are discussed. All predictions and theoretical assumptions are supported by simulations and experiments. This contribution points to a cheaper and more effective route to preliminary laboratory tests that avoids expensive on-chip fabrication of special active elements.

  4. Functional Generalized Additive Models.

    PubMed

    McLean, Mathew W; Hooker, Giles; Staicu, Ana-Maria; Scheipl, Fabian; Ruppert, David

    2014-01-01

    We introduce the functional generalized additive model (FGAM), a novel regression model for association studies between a scalar response and a functional predictor. We model the link-transformed mean response as the integral with respect to t of F{X(t), t} where F(·,·) is an unknown regression function and X(t) is a functional covariate. Rather than having an additive model in a finite number of principal components as in Müller and Yao (2008), our model incorporates the functional predictor directly and thus our model can be viewed as the natural functional extension of generalized additive models. We estimate F(·,·) using tensor-product B-splines with roughness penalties. A pointwise quantile transformation of the functional predictor is also considered to ensure each tensor-product B-spline has observed data on its support. The methods are evaluated using simulated data and their predictive performance is compared with other competing scalar-on-function regression alternatives. We illustrate the usefulness of our approach through an application to brain tractography, where X(t) is a signal from diffusion tensor imaging at position, t, along a tract in the brain. In one example, the response is disease-status (case or control) and in a second example, it is the score on a cognitive test. R code for performing the simulations and fitting the FGAM can be found in supplemental materials available online. PMID:24729671
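
    For reference, the model described above can be written compactly; the display below simply restates the abstract's verbal definition (the intercept term and the integration domain T are notational assumptions).

```latex
% FGAM: scalar response Y_i, functional covariate X_i(t), link function g,
% unknown bivariate surface F estimated with tensor-product B-splines.
g\{\mathrm{E}(Y_i)\} \;=\; \theta_0 \;+\; \int_{\mathcal{T}} F\{X_i(t),\, t\}\, dt
```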

  5. Numerical modeling of heat-transfer and the influence of process parameters on tailoring the grain morphology of IN718 in electron beam additive manufacturing

    DOE PAGES

    Raghavan, Narendran; Dehoff, Ryan; Pannala, Sreekanth; Simunovic, Srdjan; Kirka, Michael; Turner, John; Carlson, Neil; Babu, Sudarsanam S.

    2016-04-26

    The fabrication of 3-D parts from CAD models by additive manufacturing (AM) is a disruptive technology that is transforming the metal manufacturing industry. The correlation between solidification microstructure and mechanical properties has been well understood in the casting and welding processes over the years. This paper focuses on extending these principles to additive manufacturing to understand the transient phenomena of repeated melting and solidification during the electron beam powder melting process, in order to achieve site-specific microstructure control within a fabricated component. In this paper, we have developed a novel melt scan strategy for electron beam melting of a nickel-base superalloy (Inconel 718) and also analyzed 3-D heat transfer conditions using a parallel numerical solidification code (Truchas) developed at Los Alamos National Laboratory. The spatial and temporal variations of temperature gradient (G) and growth velocity (R) at the liquid-solid interface of the melt pool were calculated as a function of electron beam parameters. By manipulating the relative number of voxels that lie in the columnar or equiaxed region, the crystallographic texture of the components can be controlled to an extent. The analysis of the parameters provided optimum processing conditions that will result in a columnar-to-equiaxed transition (CET) during solidification. Furthermore, the results from the numerical simulations were validated by experimental processing and characterization, thereby proving the potential of the additive manufacturing process to achieve site-specific crystallographic texture control within a fabricated component.
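
    A minimal sketch of the kind of post-processing described above: given two successive temperature snapshots from a heat-transfer simulation, estimate the thermal gradient G and growth velocity R near the liquidus isotherm and classify solidifying voxels as columnar or equiaxed. The liquidus value, threshold, and array layout are illustrative assumptions, not values from the paper or the Truchas code.

```python
import numpy as np

def g_and_r(T, T_prev, dx, dt, T_liquidus=1609.0):
    """Estimate temperature gradient G (K/m) and growth velocity R (m/s) near the
    liquidus isotherm from two successive temperature snapshots on a 3-D voxel grid.
    T, T_prev: temperature arrays (K); dx: voxel size (m); dt: time step (s).
    The liquidus value here is a placeholder, not the IN718 value used in the paper."""
    gz, gy, gx = np.gradient(T, dx)                  # spatial gradient components
    G = np.sqrt(gx**2 + gy**2 + gz**2)               # gradient magnitude
    cooling = (T_prev - T) / dt                      # cooling rate dT/dt
    R = np.divide(cooling, G, out=np.zeros_like(G), where=G > 0)   # R = (dT/dt) / G
    front = (T_prev >= T_liquidus) & (T < T_liquidus)  # voxels crossing the liquidus
    return G, R, front

def classify_cet(G, R, front, k_cet=1.0e6):
    """Toy columnar/equiaxed classification: high G/R favours columnar growth, low G/R
    favours equiaxed growth. k_cet is an illustrative threshold, not a material constant."""
    ratio = np.full_like(G, np.inf)
    np.divide(G, R, out=ratio, where=R > 0)
    columnar = front & (ratio >= k_cet)
    equiaxed = front & (ratio < k_cet)
    return columnar, equiaxed
```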

  6. Mixed additive models

    NASA Astrophysics Data System (ADS)

    Carvalho, Francisco; Covas, Ricardo

    2016-06-01

    We consider mixed models $y = \sum_{i=0}^{w} X_i \beta_i$ with $V(y) = \sum_{i=1}^{w} \theta_i M_i$, where $M_i = X_i X_i^\top$, $i = 1, \dots, w$, and $\mu = X_0 \beta_0$. For these we estimate the variance components $\theta_1, \dots, \theta_w$, as well as estimable vectors, through the decomposition of the initial model into sub-models $y(h)$, $h \in \Gamma$, with $V(y(h)) = \gamma(h) I_{g(h)}$, $h \in \Gamma$. Moreover, we consider $L$ extensions of these models, i.e., $\mathring{y} = L y + \epsilon$, where $L = D(1_{n_1}, \dots, 1_{n_w})$ and $\epsilon$, independent of $y$, has null mean vector and variance-covariance matrix $\theta_{w+1} I_n$, with $n = \sum_{i=1}^{w} n_i$.

  7. Additive Manufacturing of Single-Crystal Superalloy CMSX-4 Through Scanning Laser Epitaxy: Computational Modeling, Experimental Process Development, and Process Parameter Optimization

    NASA Astrophysics Data System (ADS)

    Basak, Amrita; Acharya, Ranadip; Das, Suman

    2016-06-01

    This paper focuses on additive manufacturing (AM) of single-crystal (SX) nickel-based superalloy CMSX-4 through scanning laser epitaxy (SLE). SLE, a powder bed fusion-based AM process, was explored for the purpose of producing crack-free, dense deposits of CMSX-4 on top of similar-chemistry investment-cast substrates. Optical microscopy and scanning electron microscopy (SEM) investigations revealed the presence of dendritic microstructures that consisted of fine γ' precipitates within the γ matrix in the deposit region. Computational fluid dynamics (CFD)-based process modeling, statistical design of experiments (DoE), and microstructural characterization techniques were combined to produce metallurgically bonded single-crystal deposits of more than 500 μm height in a single pass along the entire length of the substrate. A customized quantitative-metallography-based image analysis technique was employed for automatic extraction of various deposit quality metrics from the digital cross-sectional micrographs. The processing parameters were varied, and optimal processing windows were identified to obtain good quality deposits. The results reported here represent one of the few successes obtained in producing single-crystal epitaxial deposits through a powder bed fusion-based metal AM process and thus demonstrate the potential of SLE to repair and manufacture single-crystal hot section components of gas turbine systems from nickel-based superalloy powders.

  9. Computational Process Modeling for Additive Manufacturing

    NASA Technical Reports Server (NTRS)

    Bagg, Stacey; Zhang, Wei

    2014-01-01

    Computational Process and Material Modeling of Powder Bed additive manufacturing of IN 718. Optimize material build parameters with reduced time and cost through modeling. Increase understanding of build properties. Increase reliability of builds. Decrease time to adoption of process for critical hardware. Potential to decrease post-build heat treatments. Conduct single-track and coupon builds at various build parameters. Record build parameter information and QM Meltpool data. Refine Applied Optimization powder bed AM process model using data. Report thermal modeling results. Conduct metallography of build samples. Calibrate STK models using metallography findings. Run STK models using AO thermal profiles and report STK modeling results. Validate modeling with additional build. Photodiode Intensity measurements highly linear with power input. Melt Pool Intensity highly correlated to Melt Pool Size. Melt Pool size and intensity increase with power. Applied Optimization will use data to develop powder bed additive manufacturing process model.

  10. Asymmetric Conjugate Addition of Benzofuran-2-ones to Alkyl 2-Phthalimidoacrylates: Modeling Structure-Stereoselectivity Relationships with Steric and Electronic Parameters.

    PubMed

    Yang, Chen; Zhang, En-Ge; Li, Xin; Cheng, Jin-Pei

    2016-05-23

    A highly predictive model to correlate the steric and electronic parameters of tertiary amine thiourea catalysts with the stereoselectivity of Michael reactions of 3-substituted benzofuranones and alkyl 2-phthalimidoacrylates is described. As predicted, new 3,5-bis(trifluoromethyl)benzyl- and methyl-substituted tertiary amine thioureas turned out to be highly suitable catalysts for this reaction and enabled the synthesis of enantioenriched α-amino acid derivatives with 1,3-nonadjacent stereogenic centers. PMID:27080558
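
    The abstract describes correlating catalyst steric and electronic descriptors with stereoselectivity. A generic sketch of such a structure-selectivity regression is shown below; the descriptor names, numerical values, and linear functional form are illustrative assumptions, not the published model or data.

```python
import numpy as np

# Illustrative catalyst descriptors: [steric parameter, electronic parameter]
# and selectivities expressed as ddG‡ = -RT ln(er) in kcal/mol (placeholder values).
X = np.array([
    [1.24, -0.12],
    [1.70,  0.05],
    [2.05,  0.43],
    [1.52,  0.20],
])
ddG = np.array([0.9, 1.4, 2.1, 1.3])

# Fit ddG‡ ≈ c0 + c1*steric + c2*electronic by ordinary least squares.
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, ddG, rcond=None)

# Predict the selectivity expected for a new, hypothetical catalyst.
new_cat = np.array([1.0, 1.85, 0.30])
print("coefficients:", coef, "predicted ddG‡:", new_cat @ coef)
```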

  11. Parameter estimation for transformer modeling

    NASA Astrophysics Data System (ADS)

    Cho, Sung Don

    are simulated and compared with an earlier-developed BCTRAN-based model. Black start energization cases are also simulated as a means of model evaluation and compared with actual event records. The simulated results using the model developed here are reasonable and more accurate than those of the BCTRAN-based model. Simulation accuracy is dependent on the accuracy of the equipment model and its parameters. This work is significant in that it advances existing parameter estimation methods in cases where the available data and measurements are incomplete. The accuracy of EMTP simulation for power systems including three-phase autotransformers is thus enhanced. Theoretical results obtained from this work provide a sound foundation for development of transformer parameter estimation methods using engineering optimization. In addition, it should be possible to refine which information and measurement data are necessary for complete duality-based transformer models. To further refine and develop the models and transformer parameter estimation methods developed here, iterative full-scale laboratory tests using high-voltage, high-power three-phase transformers would be helpful.

  12. Linking Item Response Model Parameters.

    PubMed

    van der Linden, Wim J; Barrett, Michelle D

    2016-09-01

    With a few exceptions, the problem of linking item response model parameters from different item calibrations has been conceptualized as an instance of the problem of test equating scores on different test forms. This paper argues, however, that the use of item response models does not require any test score equating. Instead, it involves the necessity of parameter linking due to a fundamental problem inherent in the formal nature of these models: their general lack of identifiability. More specifically, item response model parameters need to be linked to adjust for the different effects of the identifiability restrictions used in separate item calibrations. Our main theorems characterize the formal nature of these linking functions for monotone, continuous response models, derive their specific shapes for different parameterizations of the 3PL model, and show how to identify them from the parameter values of the common items or persons in different linking designs. PMID:26155754
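
    The linking problem described above can be illustrated with the classical mean/sigma common-item procedure for the 3PL model; this is a standard textbook approach offered only as a sketch, not the specific linking functions derived in the paper.

```python
import numpy as np

def mean_sigma_linking(b_old, b_new):
    """Classical mean/sigma linking constants from common-item difficulty estimates
    obtained in two separate 3PL calibrations.
    Returns (A, B) such that theta_new = A*theta_old + B on the common scale."""
    b_old, b_new = np.asarray(b_old, float), np.asarray(b_new, float)
    A = b_new.std(ddof=1) / b_old.std(ddof=1)
    B = b_new.mean() - A * b_old.mean()
    return A, B

def link_3pl(a, b, c, A, B):
    """Transform 3PL item parameters onto the new scale:
    discrimination a -> a/A, difficulty b -> A*b + B, guessing c unchanged."""
    return a / A, A * b + B, c

# Illustrative common-item difficulties from two calibrations (placeholder values).
A, B = mean_sigma_linking([-1.2, 0.1, 0.8], [-0.9, 0.4, 1.2])
print(link_3pl(1.1, 0.5, 0.2, A, B))
```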

  13. Parameter extraction and transistor models

    NASA Technical Reports Server (NTRS)

    Rykken, Charles; Meiser, Verena; Turner, Greg; Wang, QI

    1985-01-01

    Using specified mathematical models of the MOSFET device, the optimal values of the model-dependent parameters were extracted from data provided by the Jet Propulsion Laboratory (JPL). Three MOSFET models, all one-dimensional, were used. One of the models took into account diffusion (as well as convection) currents. The sensitivity of the models was assessed for variations of the parameters from their optimal values. Lines of future inquiry are suggested on the basis of the behavior of the devices, of the limitations of the proposed models, and of the complexity of the required numerical investigations.

  14. Estimation of pharmacokinetic model parameters.

    PubMed

    Timcenko, A; Reich, D L; Trunfio, G

    1995-01-01

    This paper addresses the problem of estimating the depth of anesthesia in clinical practice where many drugs are used in combination. The aim of the project is to use pharmacokinetically-derived data to predict episodes of light anesthesia. The weighted linear combination of anesthetic drug concentrations was computed using a stochastic pharmacokinetic model. The clinical definition of light anesthesia was based on the hemodynamic consequences of autonomic nervous system responses to surgical stimuli. A rule-based expert system was used to review anesthesia records to determine instances of light anesthesia using hemodynamic criteria. It was assumed that light anesthesia was a direct consequence of the weighted linear combination of drug concentrations in the patient's body that decreased below a certain threshold. We augmented traditional two-compartment models with a stochastic component of anesthetics' concentrations to compensate for interpatient pharmacokinetic and pharmacodynamic variability. A cohort of 532 clinical anesthesia cases was examined and parameters of two-compartment pharmacokinetic models for 6 intravenously administered anesthetic drugs (fentanyl, thiopental, morphine, propofol, midazolam, ketamine) were estimated, as well as the parameters for 2 inhalational anesthetics (N2O and isoflurane). These parameters were then prospectively applied to 22 cases that were not used for parameter estimation, and the predictive ability of the pharmacokinetic model was determined. The goal of the study is the development of a pharmacokinetic model that will be useful in predicting light anesthesia in the clinically relevant circumstance where many drugs are used concurrently. PMID:8563327
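
    A minimal sketch of the machinery described above: simulate central-compartment concentrations with a standard two-compartment model, form a weighted linear combination across drugs, and flag times when it falls below a threshold as the "light anesthesia" surrogate. All rate constants, volumes, weights, and the threshold are illustrative assumptions, not the fitted values from the study.

```python
import numpy as np

def two_compartment(doses, t, k10, k12, k21, V1):
    """Central-compartment concentration for IV bolus doses under a standard
    two-compartment model, integrated with a simple Euler scheme.
    doses: list of (time, amount); t: uniform time grid (min). Values are illustrative."""
    conc = np.zeros_like(t)
    dt = t[1] - t[0]
    a1 = a2 = 0.0                      # drug amounts in central / peripheral compartments
    for i, ti in enumerate(t):
        a1 += sum(amt for (td, amt) in doses if abs(td - ti) < dt / 2)  # bolus input
        da1 = (-(k10 + k12) * a1 + k21 * a2) * dt
        da2 = (k12 * a1 - k21 * a2) * dt
        a1, a2 = a1 + da1, a2 + da2
        conc[i] = a1 / V1              # concentration in the central compartment
    return conc

# Weighted combination of two drugs' predicted concentrations (weights are placeholders).
t = np.arange(0, 180, 0.5)
c_fentanyl = two_compartment([(0, 100.0)], t, k10=0.05, k12=0.3, k21=0.1, V1=12.0)
c_propofol = two_compartment([(0, 140.0), (60, 70.0)], t, k10=0.1, k12=0.4, k21=0.06, V1=16.0)
depth_index = 1.0 * c_fentanyl + 0.4 * c_propofol
light = depth_index < 2.0              # illustrative threshold for "light anesthesia"
```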

  15. Additional Investigations of Ice Shape Sensitivity to Parameter Variations

    NASA Technical Reports Server (NTRS)

    Miller, Dean R.; Potapczuk, Mark G.; Langhals, Tammy J.

    2006-01-01

    A second parameter sensitivity study was conducted at the NASA Glenn Research Center's Icing Research Tunnel (IRT) using a 36 in. chord (0.91 m) NACA-0012 airfoil. The objective of this work was to further investigate the feasibility of using ice shape feature changes to define requirements for the simulation and measurement of SLD and appendix C icing conditions. A previous study concluded that it was feasible to use changes in ice shape features (e.g., ice horn angle, ice horn thickness, and ice shape mass) to detect relatively small variations in icing spray condition parameters (LWC, MVD, and temperature). The subject of this current investigation extends the scope of this previous work, by also examining the effect of icing tunnel spray-bar parameter variations (water pressure, air pressure) on ice shape feature changes. The approach was to vary spray-bar water pressure and air pressure, and then evaluate the effects of these parameter changes on the resulting ice shapes. This paper will provide a description of the experimental method, present selected experimental results, and conclude with an evaluation of these results.

  16. Parameter identification in continuum models

    NASA Technical Reports Server (NTRS)

    Banks, H. T.; Crowley, J. M.

    1983-01-01

    Approximation techniques for use in numerical schemes for estimating spatially varying coefficients in continuum models such as those for Euler-Bernoulli beams are discussed. The techniques are based on quintic spline state approximations and cubic spline parameter approximations. Both theoretical and numerical results are presented.

  17. Robust parameter estimation method for bilinear model

    NASA Astrophysics Data System (ADS)

    Ismail, Mohd Isfahani; Ali, Hazlina; Yahaya, Sharipah Soaad S.

    2015-12-01

    This paper proposes a method of parameter estimation for the bilinear model, specifically the BL(1,0,1,1) model, without and with the presence of an additive outlier (AO). In this study, the parameters of the BL(1,0,1,1) model are estimated using the nonlinear least squares (LS) method and also through robust approaches. The LS method employs the Newton-Raphson (NR) iterative procedure to estimate the parameters of the bilinear model, but LS estimates can be adversely affected by the occurrence of outliers. As a solution, this study proposes robust approaches for dealing with outliers, specifically AO, in the BL(1,0,1,1) model. For the robust estimation method, we propose modifying the NR procedure with robust scale estimators. We introduce two robust scale estimators, namely the median absolute deviation (MADn) and Tn, in the linear autoregressive model AR(1), which are adequate and suitable for the bilinear BL(1,0,1,1) model. The estimated parameter value of the AR(1) model is used as an initial value when estimating the parameters of the BL(1,0,1,1) model. The performance of the LS and robust estimation methods in estimating the coefficients of the BL(1,0,1,1) model is investigated through a simulation study, and is assessed in terms of bias. Numerical results show that the robust estimation method performs better than the LS method in estimating the parameters, both without and with the presence of AO.
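
    For illustration only, the sketch below shows how the MADn scale estimator mentioned above is typically computed and how such a robust scale can replace the usual residual standard deviation inside an iterative fit. The simple hard-trimmed AR(1) reweighting shown here is a stand-in used to obtain an initial value, not the modified Newton-Raphson procedure of the paper.

```python
import numpy as np

def madn(x):
    """Normalised median absolute deviation (MADn), a robust scale estimator;
    the 1.4826 factor makes it consistent for the standard deviation under normality."""
    x = np.asarray(x, float)
    return 1.4826 * np.median(np.abs(x - np.median(x)))

def robust_ar1(y, n_iter=20, c=2.5):
    """Toy robust AR(1) fit: residuals larger than c * MADn are discarded
    (hard trimming for brevity) before re-estimating the AR coefficient."""
    y = np.asarray(y, float)
    phi = 0.0
    for _ in range(n_iter):
        resid = y[1:] - phi * y[:-1]
        s = madn(resid)
        w = (np.abs(resid) <= c * s).astype(float)   # 0/1 weights from the robust scale
        phi = np.sum(w * y[1:] * y[:-1]) / np.sum(w * y[:-1] ** 2)
    return phi
```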

  18. Computational Process Modeling for Additive Manufacturing (OSU)

    NASA Technical Reports Server (NTRS)

    Bagg, Stacey; Zhang, Wei

    2015-01-01

    Powder-Bed Additive Manufacturing (AM) through Direct Metal Laser Sintering (DMLS) or Selective Laser Melting (SLM) is being used by NASA and the Aerospace industry to "print" parts that traditionally are very complex, high cost, or long schedule lead items. The process spreads a thin layer of metal powder over a build platform, then melts the powder in a series of welds in a desired shape. The next layer of powder is applied, and the process is repeated until layer-by-layer, a very complex part can be built. This reduces cost and schedule by eliminating very complex tooling and processes traditionally used in aerospace component manufacturing. To use the process to print end-use items, NASA seeks to understand SLM material well enough to develop a method of qualifying parts for space flight operation. Traditionally, a new material process takes many years and high investment to generate statistical databases and experiential knowledge, but computational modeling can truncate the schedule and cost -many experiments can be run quickly in a model, which would take years and a high material cost to run empirically. This project seeks to optimize material build parameters with reduced time and cost through modeling.

  19. Self-calibration of terrestrial laser scanners: selection of the best geometric additional parameters

    NASA Astrophysics Data System (ADS)

    Lerma, J. L.; García-San-Miguel, D.

    2014-05-01

    Systematic errors are present in laser scanning system observations due to manufacturer imperfections, wearing over time, vibrations, changing environmental conditions and, last but not least, involuntary hits. To achieve maximum quality and rigorous measurements from terrestrial laser scanners, a least squares estimation of additional calibration parameters can be used to model the a priori unknown systematic errors and therefore improve output observations. The selection of the right set of additional parameters is not trivial and requires laborious statistical analysis. Based on this requirement, this article presents an approach to determine the set of additional parameters which provides the best mathematical solution based on a dimensionless quality index. The best set of additional parameters is the one which yields the optimum (i.e., minimum) value of the quality index for the group of observables, exterior orientation parameters, and reference points. Calibration performance is tested using both a phase shift continuous wave scanner, FARO PHOTON 880, and a pulse-based time-of-flight system, Leica HDS3000. The improvement achieved after the geometric calibration is 30% for the former and 70% for the latter.

  20. Criteria for deviation from predictions by the concentration addition model.

    PubMed

    Takeshita, Jun-Ichi; Seki, Masanori; Kamo, Masashi

    2016-07-01

    Loewe's additivity (concentration addition) is a well-known model for predicting the toxic effects of chemical mixtures under the additivity assumption of toxicity. However, from the perspective of chemical risk assessment and/or management, it is important to identify chemicals whose toxicities are additive when present concurrently, that is, it should be established whether there are chemical mixtures to which the concentration addition predictive model can be applied. The objective of the present study was to develop criteria for judging test results that deviated from the predictions by the concentration addition chemical mixture model. These criteria were based on the confidence interval of the concentration addition model's prediction and on estimation of errors of the predicted concentration-effect curves by toxicity tests after exposure to single chemicals. A log-logit model with 2 parameters was assumed for the concentration-effect curve of each individual chemical. These parameters were determined by the maximum-likelihood method, and the criteria were defined using the variances and the covariance of the parameters. In addition, the criteria were applied to a toxicity test of a binary mixture of p-n-nonylphenol and p-n-octylphenol using the Japanese killifish, medaka (Oryzias latipes). Consequently, the concentration addition model using confidence interval was capable of predicting the test results at any level, and no reason for rejecting the concentration addition was found. Environ Toxicol Chem 2016;35:1806-1814. © 2015 SETAC. PMID:26660330
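
    A small sketch of the concentration-addition prediction referenced above, combined with the 2-parameter log-logit concentration-effect curves described in the abstract: the mixture concentration producing effect level x follows from the components' individual ECx values. The parameter values below are placeholders, and the confidence-interval machinery that constitutes the paper's actual contribution is not included.

```python
import numpy as np

def ec_x(alpha, beta, x):
    """Effect concentration ECx of a single chemical for a 2-parameter
    log-logit curve  effect(c) = 1 / (1 + exp(-(alpha + beta*log10(c))))."""
    return 10 ** ((np.log(x / (1 - x)) - alpha) / beta)

def ca_mixture_ec(params, fractions, x):
    """Concentration-addition (Loewe additivity) prediction of the total mixture
    concentration giving effect level x, for components present at fixed
    concentration fractions: ECx_mix = 1 / sum_i( p_i / ECx_i )."""
    ecs = np.array([ec_x(a, b, x) for (a, b) in params])
    fractions = np.asarray(fractions, float)
    return 1.0 / np.sum(fractions / ecs)

# Illustrative log-logit parameters for two chemicals (placeholders, not fitted values).
params = [(-4.0, 2.0), (-3.2, 1.6)]
print(ca_mixture_ec(params, fractions=[0.5, 0.5], x=0.5))  # predicted mixture EC50
```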

  1. Parameter Estimation for Viscoplastic Material Modeling

    NASA Technical Reports Server (NTRS)

    Saleeb, Atef F.; Gendy, Atef S.; Wilt, Thomas E.

    1997-01-01

    A key ingredient in the design of engineering components and structures under general thermomechanical loading is the use of mathematical constitutive models (e.g. in finite element analysis) capable of accurate representation of short and long term stress/deformation responses. In addition to the ever-increasing complexity of recent viscoplastic models of this type, they often also require a large number of material constants to describe a host of (anticipated) physical phenomena and complicated deformation mechanisms. In turn, the experimental characterization of these material parameters constitutes the major factor in the successful and effective utilization of any given constitutive model; i.e., the problem of constitutive parameter estimation from experimental measurements.

  2. Additive-multiplicative rates model for recurrent events.

    PubMed

    Liu, Yanyan; Wu, Yuanshan; Cai, Jianwen; Zhou, Haibo

    2010-07-01

    Recurrent events are frequently encountered in biomedical studies. Evaluating the covariate effects on the marginal recurrent event rate is of practical interest. There are mainly two types of rate models for recurrent event data: the multiplicative rates model and the additive rates model. We consider a more flexible additive-multiplicative rates model for analysis of recurrent event data, wherein some covariate effects are additive while others are multiplicative. We formulate estimating equations for estimating the regression parameters. The estimators for these regression parameters are shown to be consistent and asymptotically normally distributed under appropriate regularity conditions. Moreover, the estimator of the baseline mean function is proposed and its large sample properties are investigated. We also conduct simulation studies to evaluate the finite sample behavior of the proposed estimators. A medical study of patients with cystic fibrosis who suffered from recurrent pulmonary exacerbations is provided for illustration of the proposed method. PMID:20229314

  3. Parameter estimation for distributed parameter models of complex, flexible structures

    NASA Technical Reports Server (NTRS)

    Taylor, Lawrence W., Jr.

    1991-01-01

    Distributed parameter modeling of structural dynamics has been limited to simple spacecraft configurations because of the difficulty of handling several distributed parameter systems linked at their boundaries. Although other computer software is able to generate such models of complex, flexible spacecraft, it is unfortunately not suitable for parameter estimation. Because of this limitation the computer software PDEMOD is being developed for the express purposes of modeling, control system analysis, parameter estimation and structure optimization. PDEMOD is capable of modeling complex, flexible spacecraft which consist of a three-dimensional network of flexible beams and rigid bodies. Each beam has bending (Bernoulli-Euler or Timoshenko) in two directions, torsion, and elongation degrees of freedom. The rigid bodies can be attached to the beam ends at any angle or body location. PDEMOD is also capable of performing parameter estimation based on matching experimental modal frequencies and static deflection test data. The underlying formulation and the results of using this approach for test data of the Mini-MAST truss will be discussed. The resulting accuracy of the parameter estimates when using such limited data can impact significantly the instrumentation requirements for on-orbit tests.

  4. Moose models with vanishing S parameter

    SciTech Connect

    Casalbuoni, R.; De Curtis, S.; Dominici, D.

    2004-09-01

    In the linear moose framework, which naturally emerges in deconstruction models, we show that there is a unique solution for the vanishing of the S parameter at the lowest order in the weak interactions. We consider an effective gauge theory based on K SU(2) gauge groups, K+1 chiral fields, and electroweak groups SU(2)_L and U(1)_Y at the ends of the chain of the moose. S vanishes when a link in the moose chain is cut. As a consequence one has to introduce a dynamical nonlocal field connecting the two ends of the moose. Then the model acquires an additional custodial symmetry which protects this result. We examine also the possibility of a strong suppression of S through an exponential behavior of the link couplings as suggested by the Randall-Sundrum metric.

  5. Parameter Estimation of Partial Differential Equation Models

    PubMed Central

    Xun, Xiaolei; Cao, Jiguo; Mallick, Bani; Carroll, Raymond J.; Maity, Arnab

    2013-01-01

    Partial differential equation (PDE) models are commonly used to model complex dynamic systems in applied sciences such as biology and finance. The forms of these PDE models are usually proposed by experts based on their prior knowledge and understanding of the dynamic system. Parameters in PDE models often have interesting scientific interpretations, but their values are often unknown, and need to be estimated from measurements of the dynamic system in the presence of measurement errors. Most PDEs used in practice have no analytic solutions, and can only be solved with numerical methods. Currently, methods for estimating PDE parameters require repeatedly solving PDEs numerically under thousands of candidate parameter values, and thus the computational load is high. In this article, we propose two methods to estimate parameters in PDE models: a parameter cascading method and a Bayesian approach. In both methods, the underlying dynamic process modeled with the PDE model is represented via basis function expansion. For the parameter cascading method, we develop two nested levels of optimization to estimate the PDE parameters. For the Bayesian method, we develop a joint model for data and the PDE, and develop a novel hierarchical model allowing us to employ Markov chain Monte Carlo (MCMC) techniques to make posterior inference. Simulation studies show that the Bayesian method and parameter cascading method are comparable, and both outperform other available methods in terms of estimation accuracy. The two methods are demonstrated by estimating parameters in a PDE model from LIDAR data. PMID:24363476

  6. Parameter Estimation of Partial Differential Equation Models.

    PubMed

    Xun, Xiaolei; Cao, Jiguo; Mallick, Bani; Carroll, Raymond J; Maity, Arnab

    2013-01-01

    Partial differential equation (PDE) models are commonly used to model complex dynamic systems in applied sciences such as biology and finance. The forms of these PDE models are usually proposed by experts based on their prior knowledge and understanding of the dynamic system. Parameters in PDE models often have interesting scientific interpretations, but their values are often unknown, and need to be estimated from measurements of the dynamic system in the presence of measurement errors. Most PDEs used in practice have no analytic solutions, and can only be solved with numerical methods. Currently, methods for estimating PDE parameters require repeatedly solving PDEs numerically under thousands of candidate parameter values, and thus the computational load is high. In this article, we propose two methods to estimate parameters in PDE models: a parameter cascading method and a Bayesian approach. In both methods, the underlying dynamic process modeled with the PDE model is represented via basis function expansion. For the parameter cascading method, we develop two nested levels of optimization to estimate the PDE parameters. For the Bayesian method, we develop a joint model for data and the PDE, and develop a novel hierarchical model allowing us to employ Markov chain Monte Carlo (MCMC) techniques to make posterior inference. Simulation studies show that the Bayesian method and parameter cascading method are comparable, and both outperform other available methods in terms of estimation accuracy. The two methods are demonstrated by estimating parameters in a PDE model from LIDAR data. PMID:24363476
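
    A much simplified illustration of PDE parameter estimation in the spirit of the basis-expansion idea above: instead of repeatedly solving the PDE, differentiate the (smoothed) observations and estimate the parameter by least squares on the PDE residual, shown for a heat equation u_t = θ u_xx with a made-up θ. This is a toy alternative, not the parameter cascading or Bayesian method of the paper, and raw finite differences stand in for the basis-function smoother.

```python
import numpy as np

# Simulate noisy observations of u_t = theta * u_xx (theta unknown to the fit).
theta_true = 0.7
x = np.linspace(0, np.pi, 60)
t = np.linspace(0, 0.5, 40)
X, T = np.meshgrid(x, t, indexing="ij")
u_clean = np.exp(-theta_true * T) * np.sin(X)      # analytic solution used as ground truth
u_obs = u_clean + 0.01 * np.random.default_rng(0).normal(size=u_clean.shape)

# Differentiate the observations; in practice one would first represent u with a
# basis-function expansion / smoother, as in the paper, before differentiating.
u_t = np.gradient(u_obs, t, axis=1)
u_xx = np.gradient(np.gradient(u_obs, x, axis=0), x, axis=0)

# Least-squares estimate of theta from the PDE residual u_t - theta * u_xx = 0.
interior = (slice(2, -2), slice(2, -2))            # drop boundary points with poor gradients
theta_hat = np.sum(u_xx[interior] * u_t[interior]) / np.sum(u_xx[interior] ** 2)
print(theta_true, theta_hat)
```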

  7. Transferability and additivity of dihedral parameters in polarizable and nonpolarizable empirical force fields.

    PubMed

    Zgarbová, Marie; Rosnik, Andreana M; Luque, F Javier; Curutchet, Carles; Jurečka, Petr

    2015-09-30

    Recent advances in polarizable force fields have revealed that major reparameterization is necessary when the polarization energy is treated explicitly. This study is focused on the torsional parameters, which are crucial for the accurate description of conformational equilibria in biomolecules. In particular, attention is paid to the influence of polarization on the (i) transferability of dihedral terms between molecules, (ii) transferability between different environments, and (iii) additivity of dihedral energies. To this end, three polarizable force fields based on the induced point dipole model designed for use in AMBER are tested, including two recent ff02 reparameterizations. Attention is paid to the contributions due to short range interactions (1-2, 1-3, and 1-4) within the four atoms defining the dihedral angle. The results show that when short range 1-2 and 1-3 polarization interactions are omitted, as for instance in ff02, the 1-4 polarization contribution is rather small and unlikely to improve the description of the torsional energy. Conversely, when screened 1-2 and 1-3 interactions are included, the polarization contribution is sizeable and shows potential to improve the transferability of parameters between different molecules and environments as well as the additivity of dihedral terms. However, to reproduce intramolecular polarization effects accurately, further fine-tuning of the short range damping of polarization is necessary. PMID:26224547
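
    For context, the dihedral term whose parameters are discussed above typically takes the standard AMBER-type Fourier form shown below (generic notation; the specific force fields tested may use additional or modified terms).

```latex
% Standard AMBER-type torsional energy term for dihedral angle \phi:
E_{\mathrm{dih}}(\phi) \;=\; \sum_{n} \frac{V_n}{2}\,\bigl[\,1 + \cos(n\phi - \gamma_n)\,\bigr]
```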

  8. Understanding Parameter Invariance in Unidimensional IRT Models

    ERIC Educational Resources Information Center

    Rupp, Andre A.; Zumbo, Bruno D.

    2006-01-01

    One theoretical feature that makes item response theory (IRT) models those of choice for many psychometric data analysts is parameter invariance, the equality of item and examinee parameters from different examinee populations or measurement conditions. In this article, using the well-known fact that item and examinee parameters are identical only…

  9. Additional field verification of convective scaling for the lateral dispersion parameter

    SciTech Connect

    Sakiyama, S.K.; Davis, P.A.

    1988-07-01

    The results of a series of diffusion trials over the heterogeneous surface of the Canadian Precambrian Shield provide additional support for the convective scaling of the lateral dispersion parameter. The data indicate that under convective conditions, the lateral dispersion parameter can be scaled with the convective velocity scale and the mixing depth. 10 references.
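
    In generic form, the convective scaling referred to above non-dimensionalizes the lateral dispersion parameter with the mixing depth and the convective velocity scale; the particular empirical function F fitted by the authors is not reproduced here.

```latex
% Generic convective scaling of the lateral dispersion parameter \sigma_y:
\frac{\sigma_y}{z_i} \;=\; F(X), \qquad X \;=\; \frac{w_* \, x}{\bar{u}\, z_i},
```
    where z_i is the mixing depth, w_* the convective velocity scale, x the downwind distance, and ū the mean transport wind speed.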

  10. Model parameter updating using Bayesian networks

    SciTech Connect

    Treml, C. A.; Ross, Timothy J.

    2004-01-01

    This paper outlines a model parameter updating technique for a new method of model validation using a modified model reference adaptive control (MRAC) framework with Bayesian Networks (BNs). The model parameter updating within this method is generic in the sense that the model/simulation to be validated is treated as a black box. It must have updateable parameters to which its outputs are sensitive, and those outputs must have metrics that can be compared to that of the model reference, i.e., experimental data. Furthermore, no assumptions are made about the statistics of the model parameter uncertainty, only upper and lower bounds need to be specified. This method is designed for situations where a model is not intended to predict a complete point-by-point time domain description of the item/system behavior; rather, there are specific points, features, or events of interest that need to be predicted. These specific points are compared to the model reference derived from actual experimental data. The logic for updating the model parameters to match the model reference is formed via a BN. The nodes of this BN consist of updateable model input parameters and the specific output values or features of interest. Each time the model is executed, the input/output pairs are used to adapt the conditional probabilities of the BN. Each iteration further refines the inferred model parameters to produce the desired model output. After parameter updating is complete and model inputs are inferred, reliabilities for the model output are supplied. Finally, this method is applied to a simulation of a resonance control cooling system for a prototype coupled cavity linac. The results are compared to experimental data.

  11. Global Model Analysis by Parameter Space Partitioning

    ERIC Educational Resources Information Center

    Pitt, Mark A.; Kim, Woojae; Navarro, Daniel J.; Myung, Jay I.

    2006-01-01

    To model behavior, scientists need to know how models behave. This means learning what other behaviors a model can produce besides the one generated by participants in an experiment. This is a difficult problem because of the complexity of psychological models (e.g., their many parameters) and because the behavioral precision of models (e.g.,…

  12. RECURSIVE PARAMETER ESTIMATION OF HYDROLOGIC MODELS

    EPA Science Inventory

    Proposed is a nonlinear filtering approach to recursive parameter estimation of conceptual watershed response models in state-space form. The conceptual model state is augmented by the vector of free parameters, which are to be estimated from input-output data, and the extended Kalman filter…
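
    A toy illustration of the state-augmentation idea above: the storage of a single linear reservoir and its recession parameter are stacked into one state vector and updated jointly with an extended Kalman filter from observed discharge. The reservoir model, noise matrices, and variable names are illustrative assumptions, not the EPA model.

```python
import numpy as np

def ekf_step(z, P, rain, q_obs, Q_proc, R_obs, dt=1.0):
    """One extended-Kalman-filter step for an augmented state z = [storage S, parameter k]
    of a toy linear-reservoir model  S' = S + (rain - k*S)*dt,  discharge Q = k*S.
    The parameter k is carried as an extra (random-walk) state and updated from q_obs."""
    S, k = z
    # --- prediction ---
    z_pred = np.array([S + (rain - k * S) * dt, k])
    F = np.array([[1.0 - k * dt, -S * dt],        # Jacobian of the state transition
                  [0.0,           1.0]])
    P_pred = F @ P @ F.T + Q_proc
    # --- update with observed discharge ---
    H = np.array([[z_pred[1], z_pred[0]]])        # Jacobian of Q = k*S w.r.t. [S, k]
    y = q_obs - z_pred[1] * z_pred[0]             # innovation
    Sy = H @ P_pred @ H.T + R_obs
    K = P_pred @ H.T / Sy                         # Kalman gain
    z_new = z_pred + (K * y).ravel()
    P_new = (np.eye(2) - K @ H) @ P_pred
    return z_new, P_new
```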

  13. Network Reconstruction Using Nonparametric Additive ODE Models

    PubMed Central

    Henderson, James; Michailidis, George

    2014-01-01

    Network representations of biological systems are widespread and reconstructing unknown networks from data is a focal problem for computational biologists. For example, the series of biochemical reactions in a metabolic pathway can be represented as a network, with nodes corresponding to metabolites and edges linking reactants to products. In a different context, regulatory relationships among genes are commonly represented as directed networks with edges pointing from influential genes to their targets. Reconstructing such networks from data is a challenging problem receiving much attention in the literature. There is a particular need for approaches tailored to time-series data and not reliant on direct intervention experiments, as the former are often more readily available. In this paper, we introduce an approach to reconstructing directed networks based on dynamic systems models. Our approach generalizes commonly used ODE models based on linear or nonlinear dynamics by extending the functional class for the functions involved from parametric to nonparametric models. Concomitantly we limit the complexity by imposing an additive structure on the estimated slope functions. Thus the submodel associated with each node is a sum of univariate functions. These univariate component functions form the basis for a novel coupling metric that we define in order to quantify the strength of proposed relationships and hence rank potential edges. We show the utility of the method by reconstructing networks using simulated data from computational models for the glycolytic pathway of Lactococcus lactis and a gene network regulating the pluripotency of mouse embryonic stem cells. For purposes of comparison, we also assess reconstruction performance using gene networks from the DREAM challenges. We compare our method to those that similarly rely on dynamic systems models and use the results to attempt to disentangle the distinct roles of linearity, sparsity, and derivative
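
    The additive structure described above can be written as follows (notation ours); each univariate component f_jk is estimated nonparametrically, and its magnitude feeds the coupling metric used to rank candidate edges.

```latex
% Nonparametric additive ODE model for node j of a p-node network:
\frac{dx_j(t)}{dt} \;=\; \sum_{k=1}^{p} f_{jk}\bigl(x_k(t)\bigr), \qquad j = 1, \dots, p
```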

  14. CREATION OF THE MODEL ADDITIONAL PROTOCOL

    SciTech Connect

    Houck, F.; Rosenthal, M.; Wulf, N.

    2010-05-25

    In 1991, the international nuclear nonproliferation community was dismayed to discover that the implementation of safeguards by the International Atomic Energy Agency (IAEA) under its NPT INFCIRC/153 safeguards agreement with Iraq had failed to detect Iraq's nuclear weapon program. It was now clear that ensuring that states were fulfilling their obligations under the NPT would require not just detecting diversion but also the ability to detect undeclared materials and activities. To achieve this, the IAEA initiated what would turn out to be a five-year effort to reappraise the NPT safeguards system. The effort engaged the IAEA and its Member States and led to agreement in 1997 on a new safeguards agreement, the Model Protocol Additional to the Agreement(s) between States and the International Atomic Energy Agency for the Application of Safeguards. The Model Protocol makes explicit that one IAEA goal is to provide assurance of the absence of undeclared nuclear material and activities. The Model Protocol requires an expanded declaration that identifies a State's nuclear potential, empowers the IAEA to raise questions about the correctness and completeness of the State's declaration, and, if needed, allows IAEA access to locations. The information required and the locations available for access are much broader than those provided for under INFCIRC/153. The negotiation was completed in quite a short time because it started with a relatively complete draft of an agreement prepared by the IAEA Secretariat. This paper describes how the Model Protocol was constructed and reviews key decisions that were made both during the five-year period and in the actual negotiation.

  15. Rheological parameters of dough with inulin addition and its effect on bread quality

    NASA Astrophysics Data System (ADS)

    Bojnanska, T.; Tokar, M.; Vollmannova, A.

    2015-04-01

    The rheological properties of enriched flour prepared with an addition of inulin were studied. The addition of inulin changed the rheological parameters of the recorded curve. Additions of 10% and more significantly extended the development time, and the farinogram showed two consistency peaks, which is a non-standard shape. With increasing inulin addition, resistance to deformation grows and the dough becomes difficult to process; above 15% addition the dough becomes short and unsuitable for making bread. Bread volume, the most important parameter, decreased significantly with inulin addition. Our results suggest a level of 5% inulin to produce a functional bread of high sensory acceptance and a level of 10% inulin to produce a bread of satisfactory sensory acceptance. Bread with a level of inulin above 10% was unsatisfactory.

  16. Detecting contaminated birthdates using generalized additive models

    PubMed Central

    2014-01-01

    Background: Erroneous patient birthdates are common in health databases. Detection of these errors usually involves manual verification, which can be resource intensive and impractical. By identifying a frequent manifestation of birthdate errors, this paper presents a principled and statistically driven procedure to identify erroneous patient birthdates. Results: Generalized additive models (GAM) enabled explicit incorporation of known demographic trends and birth patterns. With false positive rates controlled, the method identified birthdate contamination with high accuracy. In the health data set used, of the 58 actual incorrect birthdates manually identified by the domain expert, the GAM-based method identified 51, with 8 false positives (resulting in a positive predictive value of 86.4% (51/59) and a false negative rate of 12.1% (7/58)). These results outperformed linear time-series models. Conclusions: The GAM-based method is an effective approach to identify systemic birthdate errors, a common data quality issue in both clinical and administrative databases, with high accuracy. PMID:24923281
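
    A simplified stand-in for the GAM idea above: model daily birth counts with a Poisson GLM whose smooth seasonal trend is approximated by a Fourier basis, then flag dates with implausibly large Pearson residuals. It assumes statsmodels is available; the basis, threshold, and data layout are illustrative, not the published model.

```python
import numpy as np
import statsmodels.api as sm

def flag_suspect_birthdates(day_of_year, counts, n_harmonics=3, z_cut=4.0):
    """Fit counts ~ Poisson(smooth seasonal trend), with the smooth term approximated
    by a Fourier basis, and flag days with large Pearson residuals as candidate
    contaminated birthdates. The harmonic count and threshold are illustrative."""
    d = np.asarray(day_of_year, dtype=float)
    cols = [np.ones_like(d)]
    for h in range(1, n_harmonics + 1):
        cols += [np.sin(2 * np.pi * h * d / 365.25), np.cos(2 * np.pi * h * d / 365.25)]
    X = np.column_stack(cols)
    fit = sm.GLM(counts, X, family=sm.families.Poisson()).fit()
    mu = fit.fittedvalues                       # fitted mean counts
    pearson = (counts - mu) / np.sqrt(mu)       # Pearson residuals
    return pearson > z_cut                      # e.g. heaping on Jan 1 shows a large excess
```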

  17. Lightning Climatology with a Generalized Additive Model

    NASA Astrophysics Data System (ADS)

    Simon, Thorsten; Mayr, Georg; Umlauf, Nikolaus; Zeileis, Achim

    2016-04-01

    This study presents a lightning climatology on a 1 km x 1 km grid estimated via generalized additive models (GAM). GAMs provide a framework to account for non-linear effects in time and space and for non-linear spatio-temporal interaction terms simultaneously. The degree of smoothness of the non-linear effects is selected automatically in our approach. Furthermore, the influence of topography is captured in the model by including a non-linear term. To illustrate our approach we use lightning data from the ALDIS network and selected a region in Southeastern Austria, where complex terrain extends from 200 to 3800 m a.s.l. and summertime lightning activity is high compared to other parts of the Eastern Alps. The temporal effect in the GAM shows a rapid increase in lightning activity in early July and a slow decay in activity afterwards. The estimated spatial effect is not very smooth and requires approximately 225 effective degrees of freedom. It reveals that lightning is more likely in the eastern and southern parts of the region of interest. This spatial effect only accounts for variability not already explained by the topography. The topography effect shows lightning to be more likely at higher altitudes. The effect describing the spatio-temporal interactions takes approximately 200 degrees of freedom and reveals local deviations from the climatology.

  18. Parameter Invariance in the Rasch Model.

    ERIC Educational Resources Information Center

    Davison, Mark L.; Chen, Tsuey-Hwa

    This paper explores a logistic regression procedure for estimating item parameters in the Rasch model and testing the hypothesis of item parameter invariance across several groups/populations. Rather than using item responses directly, the procedure relies on "pseudo-paired comparisons" (PC) statistics defined over all possible pairs of items.…

  19. Exploiting intrinsic fluctuations to identify model parameters.

    PubMed

    Zimmer, Christoph; Sahle, Sven; Pahle, Jürgen

    2015-04-01

    Parameterisation of kinetic models plays a central role in computational systems biology. Besides the lack of experimental data of high enough quality, some of the biggest challenges here are identification issues. Model parameters can be structurally non-identifiable because of functional relationships. Noise in measured data is usually considered to be a nuisance for parameter estimation. However, it turns out that intrinsic fluctuations in particle numbers can make parameters identifiable that were previously non-identifiable. The authors present a method to identify model parameters that are structurally non-identifiable in a deterministic framework. The method takes time course recordings of biochemical systems in steady state or transient state as input. Often a functional relationship between parameters presents itself by a one-dimensional manifold in parameter space containing parameter sets of optimal goodness. Although the system's behaviour cannot be distinguished on this manifold in a deterministic framework it might be distinguishable in a stochastic modelling framework. Their method exploits this by using an objective function that includes a measure for fluctuations in particle numbers. They show on three example models, immigration-death, gene expression and Epo-EpoReceptor interaction, that this resolves the non-identifiability even in the case of measurement noise with known amplitude. The method is applied to partially observed recordings of biochemical systems with measurement noise. It is simple to implement and it is usually very fast to compute. This optimisation can be realised in a classical or Bayesian fashion. PMID:26672148

  20. Models and parameters for environmental radiological assessments

    SciTech Connect

    Miller, C W

    1984-01-01

    This book presents a unified compilation of models and parameters appropriate for assessing the impact of radioactive discharges to the environment. Models examined include those developed for the prediction of atmospheric and hydrologic transport and deposition, for terrestrial and aquatic food-chain bioaccumulation, and for internal and external dosimetry. Chapters have been entered separately into the database. (ACR)

  1. Analysis of Modeling Parameters on Threaded Screws.

    SciTech Connect

    Vigil, Miquela S.; Brake, Matthew Robert; Vangoethem, Douglas

    2015-06-01

    Assembled mechanical systems often contain a large number of bolted connections. These bolted connections (joints) are integral aspects of the load path for structural dynamics and, consequently, are paramount for calculating a structure's stiffness and energy dissipation properties. However, analysts have not found the optimal method to appropriately model these bolted joints. The complexity of the screw geometry causes issues when generating a mesh of the model. This paper explores different approaches to model a screw-substrate connection. Model parameters such as mesh continuity, node alignment, wedge angles, and thread-to-body element size ratios are examined. The results of this study will give analysts a better understanding of the influences of these parameters and will aid in finding the optimal method to model bolted connections.

  2. Parameter Estimation of Spacecraft Fuel Slosh Model

    NASA Technical Reports Server (NTRS)

    Gangadharan, Sathya; Sudermann, James; Marlowe, Andrea; Njengam Charles

    2004-01-01

    Fuel slosh in the upper stages of a spinning spacecraft during launch has been a long-standing concern for the success of a space mission. Energy loss through the movement of the liquid fuel in the fuel tank affects the gyroscopic stability of the spacecraft and leads to nutation (wobble), which can cause devastating control issues. The rate at which nutation develops (defined by the Nutation Time Constant, NTC) can be tedious to calculate and largely inaccurate if done during the early stages of spacecraft design. Pure analytical means of predicting the influence of onboard liquids have generally failed. A strong need exists to identify and model the conditions of resonance between nutation motion and liquid modes and to understand the general characteristics of the liquid motion that causes the problem in spinning spacecraft. A 3-D computerized model of the fuel slosh that accounts for any resonant modes found in the experimental testing will allow for increased accuracy in the overall modeling process. Development of a more accurate model of the fuel slosh currently lies in a more generalized 3-D computerized model incorporating masses, springs and dampers. Parameters describing the model include the inertia tensor of the fuel, spring constants, and damper coefficients. Refinement and understanding the effects of these parameters allow for a more accurate simulation of fuel slosh. The current research will focus on developing models of different complexity and estimating the model parameters that will ultimately provide a more realistic prediction of the Nutation Time Constant obtained through simulation.

  3. Additives

    NASA Technical Reports Server (NTRS)

    Smalheer, C. V.

    1973-01-01

    The chemistry of lubricant additives is discussed to show what the additives are chemically and what functions they perform in the lubrication of various kinds of equipment. Current theories regarding the mode of action of lubricant additives are presented. The additive groups discussed include the following: (1) detergents and dispersants, (2) corrosion inhibitors, (3) antioxidants, (4) viscosity index improvers, (5) pour point depressants, and (6) antifouling agents.

  4. Uncertainty in dual permeability model parameters for structured soils

    PubMed Central

    Arora, B.; Mohanty, B. P.; McGuire, J. T.

    2013-01-01

    Successful application of dual permeability models (DPM) to predict contaminant transport is contingent upon measured or inversely estimated soil hydraulic and solute transport parameters. The difficulty of uniquely identifying parameters for the additional macropore and matrix-macropore interface regions, and the question of what experimental data are required for DPM, have not been resolved to date. Therefore, this study quantifies uncertainty in dual permeability model parameters of experimental soil columns with different macropore distributions (single macropore, and low- and high-density multiple macropores). Uncertainty evaluation is conducted using adaptive Markov chain Monte Carlo (AMCMC) and conventional Metropolis-Hastings (MH) algorithms while assuming 10 out of 17 parameters to be uncertain or random. Results indicate that AMCMC resolves parameter correlations and exhibits fast convergence for all DPM parameters, while MH displays large posterior correlations for various parameters. This study demonstrates that the choice of parameter sampling algorithm is paramount in obtaining unique DPM parameters when information on covariance structure is lacking, or else additional information on parameter correlations must be supplied to resolve the problem of equifinality of DPM parameters. This study also highlights the placement and significance of the matrix-macropore interface in flow experiments of soil columns with different macropore densities. Histograms for certain soil hydraulic parameters display tri-modal characteristics, implying that macropores are drained first, followed by the interface region and then by pores of the matrix domain in drainage experiments. Results indicate that the hydraulic properties and behavior of the matrix-macropore interface are not only a function of the saturated hydraulic conductivity of the macropore-matrix interface (Ksa) and macropore tortuosity (lf) but also of other parameters of the matrix and macropore domains. PMID:24478531
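
    A generic random-walk Metropolis-Hastings sketch of the kind compared with the adaptive variant in the study. The dual-permeability forward model and likelihood are treated as an opaque log_post function supplied by the user, and the bounds, proposal scale, and iteration count are placeholders.

```python
import numpy as np

def metropolis_hastings(log_post, theta0, bounds, n_iter=20000, step=0.05, seed=0):
    """Random-walk Metropolis-Hastings over bounded parameters.
    log_post(theta) must return the (unnormalised) log posterior, e.g. built from a
    dual-permeability forward model and observed breakthrough data (not shown here).
    bounds: (lower, upper) arrays; proposals outside the bounds are rejected."""
    rng = np.random.default_rng(seed)
    lower, upper = map(np.asarray, bounds)
    theta = np.asarray(theta0, dtype=float)
    lp = log_post(theta)
    chain = np.empty((n_iter, theta.size))
    for i in range(n_iter):
        # proposal scaled to each parameter's prior range
        prop = theta + step * (upper - lower) * rng.normal(size=theta.size)
        if np.all(prop >= lower) and np.all(prop <= upper):
            lp_prop = log_post(prop)
            if np.log(rng.uniform()) < lp_prop - lp:   # accept/reject step
                theta, lp = prop, lp_prop
        chain[i] = theta
    return chain
```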

  5. Blind estimation of compartmental model parameters.

    PubMed

    Di Bella, E V; Clackdoyle, R; Gullberg, G T

    1999-03-01

    Computation of physiologically relevant kinetic parameters from dynamic PET or SPECT imaging requires knowledge of the blood input function. This work is concerned with developing methods to accurately estimate these kinetic parameters blindly; that is, without use of a directly measured blood input function. Instead, only measurements of the output functions--the tissue time-activity curves--are used. The blind estimation method employed here minimizes a set of cross-relation equations, from which the blood term has been factored out, to determine compartmental model parameters. The method was tested with simulated data appropriate for dynamic SPECT cardiac perfusion imaging with 99mTc-teboroxime and for dynamic PET cerebral blood flow imaging with 15O water. The simulations did not model the tomographic process. Noise levels typical of the respective modalities were employed. From three to eight different regions were simulated, each with different time-activity curves. The time-activity curve (24 or 70 time points) for each region was simulated with a compartment model. The simulation used a biexponential blood input function and washin rates between 0.2 and 1.3 min(-1) and washout rates between 0.2 and 1.0 min(-1). The system of equations was solved numerically and included constraints to bound the range of possible solutions. From the cardiac simulations, washin was determined to within a scale factor of the true washin parameters with less than 6% bias and 12% variability. 99mTc-teboroxime washout results had less than 5% bias, but variability ranged from 14% to 43%. The cerebral blood flow washin parameters were determined with less than 5% bias and 4% variability. The washout parameters were determined with less than 4% bias, but had 15-30% variability. Since washin is often the parameter of most use in clinical studies, the blind estimation approach may eliminate the current necessity of measuring the input function when performing certain dynamic studies

  6. Distributed parameter modeling of repeated truss structures

    NASA Technical Reports Server (NTRS)

    Wang, Han-Ching

    1994-01-01

    A new approach to find homogeneous models for beam-like repeated flexible structures is proposed which conceptually involves two steps. The first step involves the approximation of the 3-D non-homogeneous model by a 1-D periodic beam model. The structure is modeled as a 3-D non-homogeneous continuum. The displacement field is approximated by a Taylor series expansion. Then, the cross sectional mass and stiffness matrices are obtained by energy equivalence using their additive properties. Due to the repeated nature of the flexible bodies, the mass and stiffness matrices are also periodic. This procedure is systematic and requires less dynamics detail. The second step involves the homogenization from a 1-D periodic beam model to a 1-D homogeneous beam model. The periodic beam model is homogenized into an equivalent homogeneous beam model using the additive property of compliance along the generic axis. The major departure from previous approaches in the literature is using compliance instead of stiffness in homogenization. An obvious justification is that the stiffness is additive at each cross section but not along the generic axis. The homogenized model preserves many properties of the original periodic model.
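    The compliance-versus-stiffness point can be made explicit with a schematic relation (notation introduced here for illustration, not taken from the paper): at a fixed cross section the stiffnesses of parallel members add, whereas along the beam axis it is the compliances of the repeated cells that add in series.

    ```latex
    \underbrace{(EA)_{\mathrm{cs}} = \sum_m E_m A_m}_{\text{stiffness adds at a cross section}}
    \qquad\qquad
    \underbrace{\frac{1}{(EI)_{\mathrm{eff}}} = \frac{1}{L}\sum_{k=1}^{N}\frac{\ell_k}{(EI)_k}}_{\text{compliance adds along the axis, cells of length } \ell_k}
    ```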

  7. Modeling techniques for gaining additional urban space

    NASA Astrophysics Data System (ADS)

    Thunig, Holger; Naumann, Simone; Siegmund, Alexander

    2009-09-01

    One of the major accompaniments of globalization is the rapid growth of urban areas. Urban sprawl is the main environmental problem affecting cities of different characteristics across continents. Various reasons for the increase in urban sprawl in the last 10 to 30 years have been proposed [1], and often depend on the socio-economic situation of cities. The quantitative reduction and the sustainable handling of land should be performed by inner urban development instead of expanding urban regions. Following the principle "spare the urban fringe, develop the inner suburbs first" requires differentiated tools allowing for quantitative and qualitative appraisals of current building potentials. Using high spatial resolution remote sensing data within an object-based approach enables the detection of potential areas, while GIS data provide information for the quantitative valuation. This paper presents techniques for modeling the urban environment and opportunities for utilizing the retrieved information for urban planners and their special needs.

  8. Testing Linear Models for Ability Parameters in Item Response Models

    ERIC Educational Resources Information Center

    Glas, Cees A. W.; Hendrawan, Irene

    2005-01-01

    Methods for testing hypotheses concerning the regression parameters in linear models for the latent person parameters in item response models are presented. Three tests are outlined: A likelihood ratio test, a Lagrange multiplier test and a Wald test. The tests are derived in a marginal maximum likelihood framework. They are explicitly formulated…

  9. Scaling Performance Assessments: A Comparison of One-Parameter and Two-Parameter Partial Credit Models.

    ERIC Educational Resources Information Center

    Fitzpatrick, Anne R.; And Others

    1996-01-01

    One-parameter (1PPC) and two-parameter partial credit (2PPC) models were compared using real and simulated data with constructed response items present. Results suggest that the more flexible three-parameter logistic-2PPC model combination produces better model fit than the combination of the one-parameter logistic and the 1PPC models. (SLD)

  10. Modelling spin Hamiltonian parameters of molecular nanomagnets.

    PubMed

    Gupta, Tulika; Rajaraman, Gopalan

    2016-07-12

    Molecular nanomagnets encompass a wide range of coordination complexes possessing several potential applications. A formidable challenge in realizing these potential applications lies in controlling the magnetic properties of these clusters. Microscopic spin Hamiltonian (SH) parameters describe the magnetic properties of these clusters, and viable ways to control these SH parameters are highly desirable. Computational tools play a proactive role in this area, where SH parameters such as isotropic exchange interaction (J), anisotropic exchange interaction (Jx, Jy, Jz), double exchange interaction (B), zero-field splitting parameters (D, E) and g-tensors can be computed reliably using X-ray structures. In this feature article, we have attempted to provide a holistic view of the modelling of these SH parameters of molecular magnets. The determination of J includes various classes of molecules, from di- and polynuclear Mn complexes to the {3d-Gd}, {Gd-Gd} and {Gd-2p} classes of complexes. The estimation of anisotropic exchange coupling includes the exchange between an isotropic metal ion and an orbitally degenerate 3d/4d/5d metal ion. The double-exchange section contains some illustrative examples of mixed-valence systems, and the section on the estimation of zfs parameters covers some mononuclear transition metal complexes possessing very large axial zfs parameters. The section on the computation of g-anisotropy exclusively covers studies on mononuclear Dy(III) and Er(III) single-ion magnets. The examples depicted in this article clearly illustrate that computational tools not only aid in interpreting and rationalizing the observed magnetic properties but possess the potential to predict new-generation MNMs. PMID:27366794

  11. Molecular-orbital coefficients for dinuclear polymethyne dyes in the effective additive parameter method

    SciTech Connect

    Dyadyusha, G.G.; Ushomirskii, M.M.

    1986-09-01

    A method previously proposed for determining the energy structure of a polymethyne dye with any terminal groups is used in considering formulas for the molecular-orbital coefficients and the differences in the distribution on the atoms in the polymethyne chain for localized and delocalized energy levels, as well as the accuracy in calculating the molecular-orbital coefficients by means of a finite number of effective additive parameters. It is found that the localized states are important to the electron-density distribution on the chain atoms characteristic of the polymethyne dyes.

  12. Test models for improving filtering with model errors through stochastic parameter estimation

    SciTech Connect

    Gershgorin, B.; Harlim, J.; Majda, A. J.

    2010-01-01

    The filtering skill for turbulent signals from nature is often limited by model errors created by utilizing an imperfect model for filtering. Updating the parameters in the imperfect model through stochastic parameter estimation is one way to increase filtering skill and model performance. Here a suite of stringent test models for filtering with stochastic parameter estimation is developed based on the Stochastic Parameterization Extended Kalman Filter (SPEKF). These new SPEKF-algorithms systematically correct both multiplicative and additive biases and involve exact formulas for propagating the mean and covariance including the parameters in the test model. A comprehensive study is presented of robust parameter regimes for increasing filtering skill through stochastic parameter estimation for turbulent signals as the observation time and observation noise are varied and even when the forcing is incorrectly specified. The results here provide useful guidelines for filtering turbulent signals in more complex systems with significant model errors.
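    A stripped-down illustration of the underlying idea, estimating an uncertain additive bias by augmenting the state vector of a standard Kalman filter, is sketched below. The scalar linear model and all numerical values are assumptions for illustration; the actual SPEKF algorithms use exact statistics for stochastically varying multiplicative and additive terms rather than this simple augmentation.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Truth: damped scalar signal with an unknown additive forcing bias.
    a_true, bias_true, q, r = 0.9, 0.5, 0.05, 0.2
    n = 200
    x = np.zeros(n)
    for k in range(1, n):
        x[k] = a_true * x[k - 1] + bias_true + np.sqrt(q) * rng.standard_normal()
    y = x + np.sqrt(r) * rng.standard_normal(n)

    # Augmented state z = [x, bias]; the bias is modeled as (nearly) constant.
    F = np.array([[a_true, 1.0], [0.0, 1.0]])
    H = np.array([[1.0, 0.0]])
    Q = np.diag([q, 1e-4])          # small process noise lets the bias adapt slowly
    R = np.array([[r]])

    z = np.array([0.0, 0.0])        # initial guess: no bias
    P = np.eye(2)
    for k in range(n):
        # predict
        z = F @ z
        P = F @ P @ F.T + Q
        # update
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        z = z + (K @ (y[k] - H @ z)).ravel()
        P = (np.eye(2) - K @ H) @ P

    print("estimated bias:", z[1], "true bias:", bias_true)
    ```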

  13. Constant-parameter capture-recapture models

    USGS Publications Warehouse

    Brownie, C.; Hines, J.E.; Nichols, J.D.

    1986-01-01

    Jolly (1982, Biometrics 38, 301-321) presented modifications of the Jolly-Seber model for capture-recapture data, which assume constant survival and/or capture rates. Where appropriate, because of the reduced number of parameters, these models lead to more efficient estimators than the Jolly-Seber model. The tests to compare models given by Jolly do not make complete use of the data, and we present here the appropriate modifications, and also indicate how to carry out goodness-of-fit tests which utilize individual capture history information. We also describe analogous models for the case where young and adult animals are tagged. The availability of computer programs to perform the analysis is noted, and examples are given using output from these programs.

  14. Sensitivity analysis of geometric errors in additive manufacturing medical models.

    PubMed

    Pinto, Jose Miguel; Arrieta, Cristobal; Andia, Marcelo E; Uribe, Sergio; Ramos-Grez, Jorge; Vargas, Alex; Irarrazaval, Pablo; Tejos, Cristian

    2015-03-01

    Additive manufacturing (AM) models are used in medical applications for surgical planning, prosthesis design and teaching. For these applications, the accuracy of the AM models is essential. Unfortunately, this accuracy is compromised due to errors introduced by each of the building steps: image acquisition, segmentation, triangulation, printing and infiltration. However, the contribution of each step to the final error remains unclear. We performed a sensitivity analysis comparing errors obtained from a reference with those obtained modifying parameters of each building step. Our analysis considered global indexes to evaluate the overall error, and local indexes to show how this error is distributed along the surface of the AM models. Our results show that the standard building process tends to overestimate the AM models, i.e. models are larger than the original structures. They also show that the triangulation resolution and the segmentation threshold are critical factors, and that the errors are concentrated at regions with high curvatures. Errors could be reduced choosing better triangulation and printing resolutions, but there is an important need for modifying some of the standard building processes, particularly the segmentation algorithms. PMID:25649961

  15. Additional deleterious effects of alcohol consumption on sperm parameters and DNA integrity in diabetic mice.

    PubMed

    Pourentezari, M; Talebi, A R; Mangoli, E; Anvari, M; Rahimipour, M

    2016-06-01

    The aim of this study was to survey the impact of alcohol consumption on sperm parameters and DNA integrity in experimentally induced diabetic mice. A total of 32 adult male mice were divided into four groups: mice of group 1 served as controls fed a basal diet, group 2 received streptozotocin (STZ) (200 mg kg(-1), single dose, intraperitoneal) and basal diet, group 3 received alcohol (10 mg kg(-1), water soluble) and basal diet, and group 4 received STZ and alcohol for 35 days. The cauda epididymidis of each mouse was dissected and placed in 1 ml of pre-warmed Ham's F10 culture medium for 30 min. The swim-out spermatozoa were analysed for count, motility, morphology and viability. Sperm chromatin quality was evaluated with aniline blue, toluidine blue, acridine orange and chromomycin A3 staining. The results showed that all sperm parameters differed significantly (P < 0.05); when sperm chromatin was assessed with cytochemical tests, there were also significant differences (P < 0.001) between the groups. According to our results, alcohol and diabetes can cause abnormalities in sperm parameters and chromatin quality. In addition, alcohol consumption in diabetic mice can intensify sperm chromatin/DNA damage. PMID:26358836

  16. The transport exponent in percolation models with additional loops

    NASA Astrophysics Data System (ADS)

    Babalievski, F.

    1994-10-01

    Several percolation models with additional loops were studied. The transport exponents for these models were estimated numerically by means of a transfer-matrix approach. It was found that the transport exponent has a drastically changed value for some of the models. This result supports some previous numerical studies on the vibrational properties of similar models (with additional loops).

  17. Observation model and parameter partials for the JPL VLBI parameter estimation software MASTERFIT-1987

    NASA Technical Reports Server (NTRS)

    Sovers, O. J.; Fanselow, J. L.

    1987-01-01

    This report is a revision of the document of the same title (1986), dated August 1, which it supersedes. Model changes during 1986 and 1987 included corrections for antenna feed rotation, refraction in modelling antenna axis offsets, and an option to employ improved values of the semiannual and annual nutation amplitudes. Partial derivatives of the observables with respect to an additional parameter (surface temperature) are now available. New versions of two figures representing the geometric delay are incorporated. The expressions for the partial derivatives with respect to the nutation parameters have been corrected to include contributions from the dependence of UT1 on nutation. The authors hope to publish revisions of this document in the future, as modeling improvements warrant.

  18. Observation model and parameter partials for the JPL VLBI parameter estimation software MASTERFIT-1987

    NASA Astrophysics Data System (ADS)

    Sovers, O. J.; Fanselow, J. L.

    1987-12-01

    This report is a revision of the document of the same title (1986), dated August 1, which it supersedes. Model changes during 1986 and 1987 included corrections for antenna feed rotation, refraction in modelling antenna axis offsets, and an option to employ improved values of the semiannual and annual nutation amplitudes. Partial derivatives of the observables with respect to an additional parameter (surface temperature) are now available. New versions of two figures representing the geometric delay are incorporated. The expressions for the partial derivatives with respect to the nutation parameters have been corrected to include contributions from the dependence of UT1 on nutation. The authors hope to publish revisions of this document in the future, as modeling improvements warrant.

  19. Radar altimeter waveform modeled parameter recovery. [SEASAT-1 data

    NASA Technical Reports Server (NTRS)

    1981-01-01

    Satellite-borne radar altimeters include waveform sampling gates providing point samples of the transmitted radar pulse after its scattering from the ocean's surface. Averages of the waveform sampler data can be fitted by varying parameters in a model mean return waveform. The theoretical waveform model used is described, as well as the general iterative nonlinear least squares procedure used to obtain estimates of parameters characterizing the modeled waveform for SEASAT-1 data. The six waveform parameters recovered by the fitting procedure are: (1) amplitude; (2) time origin, or track point; (3) ocean surface rms roughness; (4) noise baseline; (5) ocean surface skewness; and (6) altitude or off-nadir angle. Additional practical processing considerations are addressed and FORTRAN source listings for the subroutines used in the waveform fitting are included. While the description is for the SEASAT-1 altimeter waveform data analysis, the work can easily be generalized and extended to other radar altimeter systems.
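    A hedged sketch of the fitting step can be written with an off-the-shelf nonlinear least squares routine in place of the report's FORTRAN implementation. The simplified Brown-like return below fits amplitude, track point, a rise width related to rms roughness, a trailing-edge decay, and a noise baseline; the skewness and off-nadir attitude terms of the full model are omitted, and all numerical values are illustrative.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit
    from scipy.special import erf

    def waveform(t, amp, t0, sigma, decay, baseline):
        """Simplified mean return: error-function rise plus exponential trailing edge."""
        rise = 0.5 * (1.0 + erf((t - t0) / (np.sqrt(2.0) * sigma)))
        return baseline + amp * rise * np.exp(-decay * np.clip(t - t0, 0.0, None))

    t = np.arange(60.0)                       # gate index (illustrative)
    true = (1.0, 20.0, 2.5, 0.01, 0.05)
    rng = np.random.default_rng(3)
    samples = waveform(t, *true) + 0.02 * rng.standard_normal(t.size)

    popt, pcov = curve_fit(waveform, t, samples, p0=(0.8, 18.0, 2.0, 0.02, 0.0))
    print(dict(zip(["amp", "t0", "sigma", "decay", "baseline"], popt)))
    ```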

  20. Macroscopic singlet oxygen model incorporating photobleaching as an input parameter

    NASA Astrophysics Data System (ADS)

    Kim, Michele M.; Finlay, Jarod C.; Zhu, Timothy C.

    2015-03-01

    A macroscopic singlet oxygen model for photodynamic therapy (PDT) has been used extensively to calculate the reacted singlet oxygen concentration for various photosensitizers. The four photophysical parameters (ξ, σ, β, δ) and threshold singlet oxygen dose ([1O2]r,sh) can be found for various drugs and drug-light intervals using a fitting algorithm. The input parameters for this model include the fluence, photosensitizer concentration, optical properties, and necrosis radius. An additional input variable of photobleaching was implemented in this study to optimize the results. Photobleaching was measured by using the pre-PDT and post-PDT sensitizer concentrations. Using the RIF model of murine fibrosarcoma, mice were treated with a linear source with fluence rates from 12 - 150 mW/cm and total fluences from 24 - 135 J/cm. The two main drugs investigated were benzoporphyrin derivative monoacid ring A (BPD) and 2-[1-hexyloxyethyl]-2-devinyl pyropheophorbide-a (HPPH). Previously published photophysical parameters were fine-tuned and verified using photobleaching as the additional fitting parameter. Furthermore, photobleaching can be used as an indicator of the robustness of the model for the particular mouse experiment by comparing the experimental and model-calculated photobleaching ratio.

  1. Assessment of Model Parameters Interdependency of a Conceptual Rainfall-Runoff Model

    NASA Astrophysics Data System (ADS)

    Das, T.; Bárdossy, A.; Zehe, E.

    2006-12-01

    Conceptual rainfall-runoff models are widely used tools in hydrology. Contrary to more complex physically-based distributed models, the required input data are readily available for most applications in the world. In addition to their modest data requirement, conceptual models are usually simple and relatively easy to apply. However, for partly or fully conceptual models, some parameters cannot be considered as physically measured quantities and thus have to be estimated on the basis of the available data and information. Given the range of input data, it is often not possible to find one unique parameter set, i.e. a number of parameter sets can lead to similar model results (known as equifinality). Nevertheless, the parameter sets which lead to equally good model performance may have interesting internal structures. The issue of equifinality and the internal structure of acceptable parameter sets was investigated in this paper using two examples. The first example uses a simple two-parameter sediment transport model in a river. A large number of parameter pairs was generated randomly. The results indicated that both parameters can be taken from a wide interval of possible values and still lead to satisfactory model performance. However, a well-structured set is obtained if the parameters are investigated as pairs. The second example uses the parameters of the modified HBV conceptual rainfall-runoff model. One hundred independent calibration runs for the HBV model were carried out. These runs were done using the automatic calibration procedure based on the simulated optimization algorithm, each run using a different, randomly selected initial seed value for the calibration algorithm. No explicit dependence between the parameters was considered. The results demonstrated that the parameters of rainfall-runoff models often cannot be identified as individual values. A large set of possible parameters can lead to a similar model
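    The pairwise structure of equally good parameter sets is easy to reproduce with a toy Monte Carlo experiment. The rating-type two-parameter model, value ranges, and acceptance threshold below are illustrative assumptions, not the sediment transport model of the study: many (a, b) pairs fit the synthetic data almost equally well, but the accepted pairs are strongly correlated rather than arbitrary.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    # Synthetic "observations" from q_s = a * q**b over a narrow discharge range.
    q = np.linspace(5.0, 15.0, 30)
    a_true, b_true = 0.02, 1.6
    qs_obs = a_true * q ** b_true * (1.0 + 0.05 * rng.standard_normal(q.size))

    # Monte Carlo sampling of parameter pairs.
    a = rng.uniform(0.001, 0.1, 20000)
    b = rng.uniform(1.0, 2.5, 20000)
    rmse = np.sqrt(((a[:, None] * q ** b[:, None] - qs_obs) ** 2).mean(axis=1))

    behavioural = rmse < 1.1 * rmse.min()     # "equally good" parameter sets
    print("accepted pairs:", behavioural.sum())
    print("corr(a, b) among accepted:", np.corrcoef(a[behavioural], b[behavioural])[0, 1])
    ```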

  2. Parameter optimization in S-system models

    PubMed Central

    Vilela, Marco; Chou, I-Chun; Vinga, Susana; Vasconcelos, Ana Tereza R; Voit, Eberhard O; Almeida, Jonas S

    2008-01-01

    Background The inverse problem of identifying the topology of biological networks from their time series responses is a cornerstone challenge in systems biology. We tackle this challenge here through the parameterization of S-system models. It was previously shown that parameter identification can be performed as an optimization based on the decoupling of the differential S-system equations, which results in a set of algebraic equations. Results A novel parameterization solution is proposed for the identification of S-system models from time series when no information about the network topology is known. The method is based on eigenvector optimization of a matrix formed from multiple regression equations of the linearized decoupled S-system. Furthermore, the algorithm is extended to the optimization of network topologies with constraints on metabolites and fluxes. These constraints rejoin the system in cases where it had been fragmented by decoupling. We demonstrate with synthetic time series why the algorithm can be expected to converge in most cases. Conclusion A procedure was developed that facilitates automated reverse engineering tasks for biological networks using S-systems. The proposed method of eigenvector optimization constitutes an advancement over S-system parameter identification from time series using a recent method called Alternating Regression. The proposed method overcomes convergence issues encountered in alternate regression by identifying nonlinear constraints that restrict the search space to computationally feasible solutions. Because the parameter identification is still performed for each metabolite separately, the modularity and linear time characteristics of the alternating regression method are preserved. Simulation studies illustrate how the proposed algorithm identifies the correct network topology out of a collection of models which all fit the dynamical time series essentially equally well. PMID:18416837
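    For readers unfamiliar with the S-system form, a minimal two-variable example (illustrative rate constants and kinetic orders, unrelated to the paper's networks) shows the power-law production and degradation terms whose exponents encode the network topology that the parameterization method seeks to recover.

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    # S-system: dX_i/dt = alpha_i * prod_j X_j**g_ij - beta_i * prod_j X_j**h_ij
    alpha = np.array([2.0, 1.5])
    beta = np.array([1.2, 1.0])
    G = np.array([[0.0, -0.8],      # production of X1 is repressed by X2
                  [0.5,  0.0]])     # production of X2 is activated by X1
    H = np.array([[0.6, 0.0],
                  [0.0, 0.7]])

    def s_system(t, x):
        prod = alpha * np.prod(x ** G, axis=1)
        deg = beta * np.prod(x ** H, axis=1)
        return prod - deg

    sol = solve_ivp(s_system, (0.0, 20.0), [0.5, 0.5], dense_output=True)
    print("steady-state estimate:", sol.y[:, -1])
    ```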

  3. Modeling Chinese ionospheric layer parameters based on EOF analysis

    NASA Astrophysics Data System (ADS)

    Yu, You; Wan, Weixing

    2016-04-01

    Using observations from 24 ionosondes in and around China during the 20th solar cycle, an assimilative model is constructed to map the ionospheric layer parameters (foF2, hmF2, M(3000)F2, and foE) over China based on empirical orthogonal function (EOF) analysis. First, we decompose the background maps from the International Reference Ionosphere model 2007 (IRI-07) into different EOF modes. The obtained EOF modes consist of two factors: the EOF patterns and the corresponding EOF amplitudes. These two factors individually reflect the spatial distributions (e.g., the latitudinal dependence such as the equatorial ionization anomaly structure and the longitude structure with east-west difference) and temporal variations on different time scales (e.g., solar cycle, annual, semiannual, and diurnal variations) of the layer parameters. Then, the EOF patterns and long-term observations of ionosondes are assimilated to get the observed EOF amplitudes, which are further used to construct the Chinese Ionospheric Maps (CIMs) of the layer parameters. In contrast with the IRI-07 model, the mapped CIMs successfully capture the inherent temporal and spatial variations of the ionospheric layer parameters. Finally, comparison of the modeled (EOF and IRI-07 model) and observed values reveals that the EOF model reproduces the observations with smaller root-mean-square errors and higher linear correlation coefficients. In addition, the IRI discrepancy at low latitudes, especially for foF2, is effectively removed by the EOF model.
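    The decomposition step can be sketched with a plain SVD of a space-time matrix (the synthetic field below stands in for gridded foF2 maps; the spatial patterns and time scales are illustrative): the right singular vectors play the role of the EOF patterns, and the scaled left singular vectors play the role of the EOF amplitudes that are subsequently constrained by ionosonde observations.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    ntime, nspace = 240, 50
    t = np.arange(ntime)
    pattern1 = np.sin(np.linspace(0, np.pi, nspace))        # broad latitudinal mode
    pattern2 = np.cos(np.linspace(0, 3 * np.pi, nspace))    # finer structure
    field = (np.outer(8 + 2 * np.sin(2 * np.pi * t / 120), pattern1)
             + np.outer(np.sin(2 * np.pi * t / 12), pattern2)
             + 0.3 * rng.standard_normal((ntime, nspace)))

    mean = field.mean(axis=0)
    U, s, Vt = np.linalg.svd(field - mean, full_matrices=False)
    eof_patterns = Vt            # spatial EOF patterns (rows)
    amplitudes = U * s           # time-dependent EOF amplitudes

    # Reconstruct the field with the first two modes and report the RMS error.
    recon = mean + amplitudes[:, :2] @ eof_patterns[:2]
    print("rms error, 2 modes:", np.sqrt(((field - recon) ** 2).mean()))
    ```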

  4. Multiscale and Multiphysics Modeling of Additive Manufacturing of Advanced Materials

    NASA Technical Reports Server (NTRS)

    Liou, Frank; Newkirk, Joseph; Fan, Zhiqiang; Sparks, Todd; Chen, Xueyang; Fletcher, Kenneth; Zhang, Jingwei; Zhang, Yunlu; Kumar, Kannan Suresh; Karnati, Sreekar

    2015-01-01

    The objective of this proposed project is to research and develop a prediction tool for advanced additive manufacturing (AAM) processes for advanced materials and develop experimental methods to provide fundamental properties and establish validation data. Aircraft structures and engines demand materials that are stronger, usable at much higher temperatures, provide less acoustic transmission, and enable more aeroelastic tailoring than those currently used. Significant improvements in properties can only be achieved by processing the materials under nonequilibrium conditions, such as AAM processes. AAM processes encompass a class of processes that use a focused heat source to create a melt pool on a substrate. Examples include Electron Beam Freeform Fabrication and Direct Metal Deposition. These types of additive processes enable fabrication of parts directly from CAD drawings. To achieve the desired material properties and geometries of the final structure, assessing the impact of process parameters and predicting optimized conditions with numerical modeling as an effective prediction tool is necessary. The processing targets are multiple and span different spatial scales, and the associated physical phenomena are multiphysics and multiscale in nature. In this project, the research work has been developed to model AAM processes in a multiscale and multiphysics approach. A macroscale model was developed to investigate the residual stresses and distortion in AAM processes. A sequentially coupled, thermomechanical, finite element model was developed and validated experimentally. The results showed the temperature distribution, residual stress, and deformation within the formed deposits and substrates. A mesoscale model was developed to include heat transfer, phase change with mushy zone, incompressible free surface flow, solute redistribution, and surface tension. Because of the excessive computing time needed, a parallel computing approach was also tested. In addition

  5. Model parameters for simulation of physiological lipids.

    PubMed

    Hills, Ronald D; McGlinchey, Nicholas

    2016-05-01

    Coarse grain simulation of proteins in their physiological membrane environment can offer insight across timescales, but requires a comprehensive force field. Parameters are explored for multicomponent bilayers composed of unsaturated lipids DOPC and DOPE, mixed-chain saturation POPC and POPE, and anionic lipids found in bacteria: POPG and cardiolipin. A nonbond representation obtained from multiscale force matching is adapted for these lipids and combined with an improved bonding description of cholesterol. Equilibrating the area per lipid yields robust bilayer simulations and properties for common lipid mixtures with the exception of pure DOPE, which has a known tendency to form nonlamellar phase. The models maintain consistency with an existing lipid-protein interaction model, making the force field of general utility for studying membrane proteins in physiologically representative bilayers. © 2016 The Authors. Journal of Computational Chemistry Published by Wiley Periodicals, Inc. PMID:26864972

  6. The effect of additional design parameters on the LQR based design of a control/structural system

    NASA Technical Reports Server (NTRS)

    Bainum, Peter M.; Xu, Jianke

    1990-01-01

    A multiobjective cost function that includes a form of the standard LQR regulator cost and its partial variation with respect to the additional design parameters is presented in connection with the design of an orbiting control/structural system. Simple models of uniform solid and tubular beams are demonstrated with two typical additional payload masses, i.e., symmetrically and asymmetrically distributed with respect to the center of the beam. By considering the transient response of the pitch angle and the free-free beam deformations in the orbital plane, the optimal outer diameter of the beam and the feedback control can be determined by numerical analysis with this multicriteria approach. It is concluded that the multicriteria design approach should give better results from both the structural designer's and the control designer's standpoints.

  7. Modelling of snow avalanche dynamics: influence of model parameters

    NASA Astrophysics Data System (ADS)

    Bozhinskiy, A. N.

    The three-parameter hydraulic model of snow avalanche dynamics including the coefficients of dry and turbulent friction and the coefficient of new-snow-mass entrainment was investigated. The 'Domestic' avalanche site in Elbrus region, Caucasus, Russia, was chosen as the model avalanche range. According to the model, the fixed avalanche run-out can be achieved with various combinations of model parameters. At the fixed value of the coefficient of entrainment me, we have a curve on a plane of the coefficients of dry and turbulent friction. It was found that the family of curves (me is a parameter) are crossed at the single point. The value of the coefficient of turbulent friction at the cross-point remained practically constant for the maximum and average avalanche run-outs. The conclusions obtained are confirmed by the results of modelling for six arbitrarily chosen avalanche sites: three in the Khibiny mountains, Kola Peninsula, Russia, two in the Elbrus region and one idealized site with an exponential longitudinal profile. The dependences of run-out on the coefficient of dry friction are constructed for all the investigated avalanche sites. The results are important for the statistical simulation of avalanche dynamics since they suggest the possibility of using only one random model parameter, namely, the coefficient of dry friction, in the model. The histograms and distribution functions of the coefficient of dry friction are constructed and presented for avalanche sites Nos 22 and 43 (Khibiny mountains) and 'Domestic', with the available series of field data.

  8. Radiation processing of thermoplastic starch by blending aromatic additives: Effect of blend composition and radiation parameters

    NASA Astrophysics Data System (ADS)

    Khandal, Dhriti; Mikus, Pierre-Yves; Dole, Patrice; Coqueret, Xavier

    2013-03-01

    This paper reports on the effects of electron beam (EB) irradiation on poly α-1,4-glucose oligomers (maltodextrins) in the presence of water and of various aromatic additives, as model blends for gaining a better understanding, at the molecular level, of the modifications occurring in amorphous starch-lignin blends submitted to ionizing irradiation for improving the properties of this type of bio-based thermoplastic material. A series of aromatic compounds, namely p-methoxy benzyl alcohol, benzene dimethanol, cinnamyl alcohol and some related carboxylic acids, namely cinnamic acid, coumaric acid, and ferulic acid, was thus studied for assessing the ability of each additive to counteract chain scission of the polysaccharide and induce interchain covalent linkages. Gel formation in EB-irradiated blends comprising maltodextrin was shown to be dependent on three main factors: the type of aromatic additive, the presence of glycerol, and the irradiation dose. The chain scission versus grafting phenomenon as a function of blend composition and dose was studied using Size Exclusion Chromatography by determining the changes in molecular weight distribution (MWD) from Refractive Index (RI) chromatograms and the presence of aromatic grafts onto the maltodextrin chains from UV chromatograms. The occurrence of crosslinking was quantified by gel fraction measurements, allowing for ranking the cross-linking efficiency of the additives. When applying the method to destructurized starch blends, gel formation was also shown to be strongly affected by the moisture content of the sample submitted to irradiation. The results demonstrate the possibility of tuning the reactivity of tailored blends to minimize chain degradation and control the degree of cross-linking.

  9. Observation model and parameter partials for the JPL VLBI parameter estimation software MODEST, 1996

    NASA Astrophysics Data System (ADS)

    Sovers, O. J.; Jacobs, Christopher S.

    1996-08-01

    The current theoretical model of radio interferometric delays and delay rates observed in very long baseline interferometry experiments is discussed in detail. Modeling the time delay consists of a number of steps. First, the locations of the observing stations are expressed in an Earth fixed coordinate frame at the time that the incoming wave front reaches the reference station. These station coordinates are modified by Earth-fixed effects, such as tides and tectonic motion. Next, a transformation to a celestial coordinate system moving with the Earth accounts for the Earth's precession and nutation in inertial space. A relativistic transformation then brings these coordinates into a frame centered at the center of mass of the Solar System. The time delays are calculated in this Solar System Barycentric frame, including corrections to account for the extended structure of the source and the gravitational delay of the signal. Finally, the delay is transformed back to the celestial geocentric frame, and corrected for additional delays of the signal by components of the Earth's atmosphere. Partial derivatives of the observables with respect to numerous parameters entering the model components are also given. This report is a revision of the document Observation Model and Parameter Partials for the JPL VLBI Parameter Estimation Software "MODEST" -1994 dated August 1994. It supersedes that document and its five previous versions (1983, 1985, 1986, 1987, and 1991). Numerous portions of the Very Long Baseline Interferometry (VLBI) model were improved in MODEST from 1994 to 1996. For various aspects of the geometric delay, improved expressions for the geodetic latitude and station altitude are now used, along with more recent values of the Earth's radius and rotation rate. The equation of equinoxes can now be selected to be the IERS-92 expression, plus its 1997 extension. Models for the tidal response of the Earth orientation now include Dickman's revision (UT1S) of Yoder et

  10. Parameter identification in dynamical models of anaerobic waste water treatment.

    PubMed

    Müller, T G; Noykova, N; Gyllenberg, M; Timmer, J

    2002-01-01

    Biochemical reactions can often be formulated mathematically as ordinary differential equations. In the process of modeling, the main questions that arise are concerned with structural identifiability, parameter estimation and practical identifiability. To clarify these questions and the methods how to solve them, we analyze two different second order models for anaerobic waste water treatment processes using two data sets obtained from different experimental setups. In both experiments only biogas production rate was measured which complicates the analysis considerably. We show that proving structural identifiability of the mathematical models with currently used methods fails. Therefore, we introduce a new, general method based on the asymptotic behavior of the maximum likelihood estimator to show local structural identifiability. For parameter estimation we use the multiple shooting approach which is described. Additionally we show that the Hessian matrix approach to compute confidence intervals fails in our examples while a method based on Monte Carlo Simulation works well. PMID:11965253

  11. Generating Effective Models and Parameters for RNA Genetic Circuits.

    PubMed

    Hu, Chelsea Y; Varner, Jeffrey D; Lucks, Julius B

    2015-08-21

    RNA genetic circuitry is emerging as a powerful tool to control gene expression. However, little work has been done to create a theoretical foundation for RNA circuit design. A prerequisite to this is a quantitative modeling framework that accurately describes the dynamics of RNA circuits. In this work, we develop an ordinary differential equation model of transcriptional RNA genetic circuitry, using an RNA cascade as a test case. We show that parameter sensitivity analysis can be used to design a set of four simple experiments that can be performed in parallel using rapid cell-free transcription-translation (TX-TL) reactions to determine the 13 parameters of the model. The resulting model accurately recapitulates the dynamic behavior of the cascade, and can be easily extended to predict the function of new cascade variants that utilize new elements with limited additional characterization experiments. Interestingly, we show that inconsistencies between model predictions and experiments led to the model-guided discovery of a previously unknown maturation step required for RNA regulator function. We also determine circuit parameters in two different batches of TX-TL, and show that batch-to-batch variation can be attributed to differences in parameters that are directly related to the concentrations of core gene expression machinery. We anticipate the RNA circuit models developed here will inform the creation of computer aided genetic circuit design tools that can incorporate the growing number of RNA regulators, and that the parametrization method will find use in determining functional parameters of a broad array of natural and synthetic regulatory systems. PMID:26046393
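    A skeletal ODE sketch of a single transcriptional RNA repression step conveys the kind of model being parameterized. The rate constants and the Hill-type repression term below are hypothetical; the published cascade model has 13 parameters and includes an RNA maturation step not represented here.

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    k_tx_reg, k_tx_rep = 5.0, 10.0   # transcription rates (nM/min), placeholder values
    k_deg = 0.2                      # RNA degradation rate (1/min)
    K_rep, n_hill = 20.0, 2.0        # repression threshold and cooperativity

    def cascade(t, y):
        regulator, reporter = y
        d_reg = k_tx_reg - k_deg * regulator
        d_rep = k_tx_rep / (1.0 + (regulator / K_rep) ** n_hill) - k_deg * reporter
        return [d_reg, d_rep]

    sol = solve_ivp(cascade, (0.0, 60.0), [0.0, 0.0], t_eval=np.linspace(0, 60, 121))
    print("reporter RNA at 60 min:", sol.y[1, -1])
    ```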

  12. Multiscale modeling of failure in composites under model parameter uncertainty

    NASA Astrophysics Data System (ADS)

    Bogdanor, Michael J.; Oskay, Caglar; Clay, Stephen B.

    2015-09-01

    This manuscript presents a multiscale stochastic failure modeling approach for fiber reinforced composites. A homogenization based reduced-order multiscale computational model is employed to predict the progressive damage accumulation and failure in the composite. Uncertainty in the composite response is modeled at the scale of the microstructure by considering the constituent material (i.e., matrix and fiber) parameters governing the evolution of damage as random variables. Through the use of the multiscale model, randomness at the constituent scale is propagated to the scale of the composite laminate. The probability distributions of the underlying material parameters are calibrated from unidirectional composite experiments using a Bayesian statistical approach. The calibrated multiscale model is exercised to predict the ultimate tensile strength of quasi-isotropic open-hole composite specimens at various loading rates. The effect of random spatial distribution of constituent material properties on the composite response is investigated.

  13. Fixing the c Parameter in the Three-Parameter Logistic Model

    ERIC Educational Resources Information Center

    Han, Kyung T.

    2012-01-01

    For several decades, the "three-parameter logistic model" (3PLM) has been the dominant choice for practitioners in the field of educational measurement for modeling examinees' response data from multiple-choice (MC) items. Past studies, however, have pointed out that the c-parameter of 3PLM should not be interpreted as a guessing parameter. This…

  14. How much additional model complexity do the use of catchment hydrological signatures, additional data and expert knowledge warrant?

    NASA Astrophysics Data System (ADS)

    Hrachowitz, M.; Fovet, O.; RUIZ, L.; Gascuel-odoux, C.; Savenije, H.

    2013-12-01

    In the frequent absence of sufficient suitable data to constrain hydrological models, it is not uncommon to represent catchments at a range of scales by lumped model set-ups. Although process heterogeneity can average out on the catchment scale to generate simple catchment-integrated responses whose general flow features can frequently be reproduced by lumped models, these models often fail to reproduce details of the flow pattern and catchment-internal dynamics, such as groundwater level changes, to a sufficient degree, resulting in considerable predictive uncertainty. Traditionally, models are constrained by only one or two objective functions, which does not warrant more than a handful of parameters to avoid elevated predictive uncertainty, thereby preventing more complex model set-ups accounting for increased process heterogeneity. In this study it was tested how much additional process heterogeneity is warranted in models when optimizing the model calibration strategy, using additional data and expert knowledge. Long-term time series of flow and groundwater levels for small nested experimental catchments in French Brittany with considerable differences in geology, topography and flow regime were used in this study to test which degree of model process heterogeneity is warranted with increased availability of information. In a first step, as a benchmark, the system was treated as one lumped entity and the model was trained based only on its ability to reproduce the hydrograph. Although it was found that the overall modelled flow generally reflects the observed flow response quite well, the internal system dynamics could not be reproduced. In further steps the complexity of this model was gradually increased, first by adding a separate riparian reservoir to the lumped set-up and then by a semi-distributed set-up, allowing for independent, parallel model structures, representing the contrasting nested catchments. Although calibration performance increased

  15. Improving the transferability of hydrological model parameters under changing conditions

    NASA Astrophysics Data System (ADS)

    Huang, Yingchun; Bárdossy, András

    2014-05-01

    Hydrological models are widely utilized to describe catchment behaviors with observed hydro-meteorological data. Hydrological processes may be considered non-stationary under changing climate and land use conditions. An applicable hydrological model should be able to capture the essential features of the target catchment and therefore be transferable to different conditions. At present, many model applications based on stationarity assumptions are not sufficient for predicting further changes or time variability. The aim of this study is to explore new model calibration methods in order to improve the transferability of model parameters. To cope with the instability of model parameters calibrated on catchments in non-stationary conditions, we investigate the idea of simultaneous calibration on streamflow records from periods with dissimilar climate characteristics. In addition, a weather-based weighting function is implemented to adjust the calibration period to future trends. For regions with limited data and ungauged basins, common calibration was applied using information from similar catchments. Results show that model performance and parameter transferability can be considerably improved via common calibration. This model calibration approach will be used to enhance regional water management and flood forecasting capabilities.

  16. Empirical flow parameters : a tool for hydraulic model validity

    USGS Publications Warehouse

    Asquith, William H.; Burley, Thomas E.; Cleveland, Theodore G.

    2013-01-01

    The objectives of this project were (1) To determine and present from existing data in Texas, relations between observed stream flow, topographic slope, mean section velocity, and other hydraulic factors, to produce charts such as Figure 1 and to produce empirical distributions of the various flow parameters to provide a methodology to "check if model results are way off!"; (2) To produce a statistical regional tool to estimate mean velocity or other selected parameters for storm flows or other conditional discharges at ungauged locations (most bridge crossings) in Texas to provide a secondary way to compare such values to a conventional hydraulic modeling approach; and (3) To present ancillary values such as Froude number, stream power, Rosgen channel classification, sinuosity, and other selected characteristics (readily determinable from existing data) to provide additional information to engineers concerned with the hydraulic-soil-foundation component of transportation infrastructure.

  17. Support vector machine to predict diesel engine performance and emission parameters fueled with nano-particles additive to diesel fuel

    NASA Astrophysics Data System (ADS)

    Ghanbari, M.; Najafi, G.; Ghobadian, B.; Mamat, R.; Noor, M. M.; Moosavian, A.

    2015-12-01

    This paper studies the use of an adaptive Support Vector Machine (SVM) to predict the performance parameters and exhaust emissions of a diesel engine operating on nano-diesel blended fuels. In order to predict the engine parameters, the whole experimental data set was randomly divided into training and testing data. For SVM modelling, different values of the radial basis function (RBF) kernel width and penalty parameter (C) were considered and the optimum values were then found. The results demonstrate that SVM is capable of predicting the diesel engine performance and emissions. In the experimental step, carbon nano tubes (CNT) (40, 80 and 120 ppm) and nano silver particles (40, 80 and 120 ppm) with nanostructure were prepared and added as additives to the diesel fuel. A six-cylinder, four-stroke diesel engine was fuelled with these new blended fuels and operated at different engine speeds. Experimental test results indicated that adding nano particles to diesel fuel increased engine power and torque output. For nano-diesel it was found that the brake specific fuel consumption (bsfc) decreased compared to the neat diesel fuel. The results showed that with increasing nano particle concentration (from 40 ppm to 120 ppm) in diesel fuel, CO2 emission increased. CO emission with nano-particle fuels was significantly lower compared to pure diesel fuel. UHC emission decreased with the silver nano-diesel blended fuel, while it increased with fuels containing CNT nano particles. The trend of NOx emission was the inverse of that of UHC: with nano particles added to the blended fuels, NOx increased compared to the neat diesel fuel. The tests revealed that silver and CNT nano particles can be used as additives in diesel fuel to improve complete combustion of the fuel and reduce the exhaust emissions significantly.
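    A minimal sketch of the modelling step with scikit-learn shows the RBF-kernel SVM regression and the search over kernel width (gamma) and penalty parameter (C) described in the record; the feature columns and synthetic response below are placeholders, not the experimental data set.

    ```python
    import numpy as np
    from sklearn.svm import SVR
    from sklearn.preprocessing import StandardScaler
    from sklearn.pipeline import make_pipeline
    from sklearn.model_selection import GridSearchCV, train_test_split

    # Placeholder features: engine speed (rpm) and nanoparticle dose (ppm);
    # placeholder target: brake power. Real inputs would come from the engine tests.
    rng = np.random.default_rng(5)
    speed = rng.uniform(1200, 2400, 120)
    dose = rng.choice([0, 40, 80, 120], size=120).astype(float)
    power = 20.0 + 0.01 * speed + 0.02 * dose + rng.normal(0.0, 0.5, 120)
    X = np.column_stack([speed, dose])

    X_tr, X_te, y_tr, y_te = train_test_split(X, power, test_size=0.3, random_state=0)

    pipe = make_pipeline(StandardScaler(), SVR(kernel="rbf"))
    grid = GridSearchCV(pipe, param_grid={"svr__C": [1, 10, 100],
                                          "svr__gamma": [0.01, 0.1, 1.0]}, cv=5)
    grid.fit(X_tr, y_tr)
    print("best params:", grid.best_params_)
    print("test R^2:", grid.score(X_te, y_te))
    ```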

  18. Investigation of land use effects on Nash model parameters

    NASA Astrophysics Data System (ADS)

    Niazi, Faegheh; Fakheri Fard, Ahmad; Nourani, Vahid; Goodrich, David; Gupta, Hoshin

    2015-04-01

    the Nash model is more sensitive to the K. In addition, there is a wider range of parameter values yielding acceptable efficiency in the urban sub-watershed than in the natural one. This might be due to less uncertainty in urban watersheds, where runoff to rainfall ratios are much larger than in the natural sub-watershed. The uncertainty in rainfall observations (noise) is therefore a much smaller percentage of the runoff (signal).

  19. Integrating microbial diversity in soil carbon dynamic models parameters

    NASA Astrophysics Data System (ADS)

    Louis, Benjamin; Menasseri-Aubry, Safya; Leterme, Philippe; Maron, Pierre-Alain; Viaud, Valérie

    2015-04-01

    Faced with the numerous concerns about soil carbon dynamics, a large number of carbon dynamics models have been developed during the last century. These models are mainly in the form of deterministic compartment models with carbon fluxes between compartments represented by ordinary differential equations. Nowadays, many of them consider the microbial biomass as a compartment of the soil organic matter (carbon quantity). But the amount of microbial carbon is rarely used in the differential equations of the models as a limiting factor. Additionally, microbial diversity and community composition are mostly missing, although advances in soil microbial analytical methods during the past two decades have shown that these characteristics also play a significant role in soil carbon dynamics. As soil microorganisms are essential drivers of soil carbon dynamics, the question of explicitly integrating their role has become a key issue in the development of soil carbon dynamics models. Some interesting attempts can be found and are dominated by the incorporation of several compartments of different groups of microbial biomass in terms of functional traits and/or biogeochemical compositions to integrate microbial diversity. However, these models are basically heuristic models in the sense that they are used to test hypotheses through simulations. They have rarely been confronted with real data and thus cannot be used to predict realistic situations. The objective of this work was to empirically integrate microbial diversity in a simple model of carbon dynamics through statistical modelling of the model parameters. This work is based on available experimental results coming from a French National Research Agency program called DIMIMOS. Briefly, 13C-labelled wheat residue has been incorporated into soils with different pedological characteristics and land use history. Then, the soils have been incubated for 104 days and labelled and non-labelled CO2 fluxes have been measured at ten

  20. Complex Modelling Scheme Of An Additive Manufacturing Centre

    NASA Astrophysics Data System (ADS)

    Popescu, Liliana Georgeta

    2015-09-01

    This paper presents a modelling scheme supporting the development of an additive manufacturing research centre model and its processes. This modelling is performed using IDEF0, the resulting process model representing the basic processes required in developing such a centre in any university. While the activities presented in this study are those recommended in general, changes may occur in specific existing situations in a research centre.

  1. Improving a regional model using reduced complexity and parameter estimation

    USGS Publications Warehouse

    Kelson, Victor A.; Hunt, Randall J.; Haitjema, Henk M.

    2002-01-01

    The availability of powerful desktop computers and graphical user interfaces for ground water flow models makes possible the construction of ever more complex models. A proposed copper-zinc sulfide mine in northern Wisconsin offers a unique case in which the same hydrologic system has been modeled using a variety of techniques covering a wide range of sophistication and complexity. Early in the permitting process, simple numerical models were used to evaluate the necessary amount of water to be pumped from the mine, reductions in streamflow, and the drawdowns in the regional aquifer. More complex models have subsequently been used in an attempt to refine the predictions. Even after so much modeling effort, questions regarding the accuracy and reliability of the predictions remain. We have performed a new analysis of the proposed mine using the two-dimensional analytic element code GFLOW coupled with the nonlinear parameter estimation code UCODE. The new model is parsimonious, containing fewer than 10 parameters, and covers a region several times larger in areal extent than any of the previous models. The model demonstrates the suitability of analytic element codes for use with parameter estimation codes. The simplified model results are similar to the more complex models; predicted mine inflows and UCODE-derived 95% confidence intervals are consistent with the previous predictions. More important, the large areal extent of the model allowed us to examine hydrological features not included in the previous models, resulting in new insights about the effects that far-field boundary conditions can have on near-field model calibration and parameterization. In this case, the addition of surface water runoff into a lake in the headwaters of a stream while holding recharge constant moved a regional ground watershed divide and resulted in some of the added water being captured by the adjoining basin. Finally, a simple analytical solution was used to clarify the GFLOW model

  2. Transfer function modeling of damping mechanisms in distributed parameter models

    NASA Technical Reports Server (NTRS)

    Slater, J. C.; Inman, D. J.

    1994-01-01

    This work formulates a method for the modeling of material damping characteristics in distributed parameter models which may be easily applied to models such as rod, plate, and beam equations. The general linear boundary value vibration equation is modified to incorporate hysteresis effects represented by complex stiffness using the transfer function approach proposed by Golla and Hughes. The governing characteristic equations are decoupled through separation of variables yielding solutions similar to those of undamped classical theory, allowing solution of the steady state as well as transient response. Example problems and solutions are provided demonstrating the similarity of the solutions to those of the classical theories and transient responses of nonviscous systems.

  3. Some aspects of application of the two parameter SEU model

    SciTech Connect

    Miroshkin, V.V.; Tverskoy, M.G.

    1995-12-01

    The influence of the projectile type, pion production in nucleon-nucleon interactions inside the nucleus, and the direction of beam incidence on the SEU cross section for the INTEL 2164A microcircuit is investigated in the framework of the two-parameter model. Model parameters for recently investigated devices are reported. Optimum proton energies for the determination of model parameters are proposed.

  4. Comprehensive European dietary exposure model (CEDEM) for food additives.

    PubMed

    Tennant, David R

    2016-05-01

    European methods for assessing dietary exposures to nutrients, additives and other substances in food are limited by the availability of detailed food consumption data for all member states. A proposed comprehensive European dietary exposure model (CEDEM) applies summary data published by the European Food Safety Authority (EFSA) in a deterministic model based on an algorithm from the EFSA intake method for food additives. The proposed approach can predict estimates of food additive exposure provided in previous EFSA scientific opinions that were based on the full European food consumption database. PMID:26987377

  5. Estimation of Time-Varying Pilot Model Parameters

    NASA Technical Reports Server (NTRS)

    Zaal, Peter M. T.; Sweet, Barbara T.

    2011-01-01

    Human control behavior is rarely completely stationary over time due to fatigue or loss of attention. In addition, there are many control tasks for which human operators need to adapt their control strategy to vehicle dynamics that vary in time. In previous studies on the identification of time-varying pilot control behavior, wavelets were used to estimate the time-varying frequency response functions. However, the estimation of time-varying pilot model parameters was not considered. Estimating these parameters can be a valuable tool for the quantification of different aspects of human time-varying manual control. This paper presents two methods for the estimation of time-varying pilot model parameters, a two-step method using wavelets and a windowed maximum likelihood estimation method. The methods are evaluated using simulations of a closed-loop control task with time-varying pilot equalization and vehicle dynamics. Simulations are performed with and without remnant. Both methods give accurate results when no pilot remnant is present. The wavelet transform is very sensitive to measurement noise, resulting in inaccurate parameter estimates when considerable pilot remnant is present. Maximum likelihood estimation is less sensitive to pilot remnant, but cannot detect fast changes in pilot control behavior.

  6. Optimal welding parameters for very high power ultrasonic additive manufacturing of smart structures with aluminum 6061 matrix

    NASA Astrophysics Data System (ADS)

    Wolcott, Paul J.; Hehr, Adam; Dapino, Marcelo J.

    2014-03-01

    Ultrasonic additive manufacturing (UAM) is a recent solid state manufacturing process that combines additive joining of thin metal tapes with subtractive milling operations to generate near net shape metallic parts. Due to the minimal heating during the process, UAM is a proven method of embedding Ni-Ti, Fe-Ga, and PVDF to create active metal matrix composites. Recently, advances in the UAM process utilizing 9 kW very high power (VHP) welding have improved bonding properties, enabling joining of high strength materials previously unweldable with 1 kW low power UAM. Consequently, a design of experiments study was conducted to optimize welding conditions for aluminum 6061 components. This understanding is critical in the design of UAM parts containing smart materials. Build parameters, including weld force, weld speed, amplitude, and temperature, were varied based on a Taguchi experimental design matrix and tested for mechanical strength. Optimal weld parameters were identified with statistical methods including a generalized linear model for analysis of variance (ANOVA), mean effects plots, and interaction effects plots.
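    A small sketch of the analysis style named in the record, fitting a main-effects linear model over a factorial design and producing an ANOVA table plus a mean-effects summary, is shown below with statsmodels; the factor levels and strength values are synthetic placeholders, not the study's measurements.

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf
    from statsmodels.stats.anova import anova_lm

    rng = np.random.default_rng(11)
    rows = []
    for force in (3000, 5000):            # normal force, N (placeholder levels)
        for speed in (25, 40):            # weld speed, mm/s (placeholder levels)
            for amplitude in (25, 32):    # oscillation amplitude, um (placeholder levels)
                for _ in range(3):        # replicates
                    strength = (40.0 + 0.002 * force + 0.1 * amplitude
                                - 0.05 * speed + rng.normal(0.0, 1.0))
                    rows.append(dict(force=force, speed=speed,
                                     amplitude=amplitude, strength=strength))
    df = pd.DataFrame(rows)

    # Main-effects linear model, ANOVA table, and a mean-effects summary.
    model = smf.ols("strength ~ C(force) + C(speed) + C(amplitude)", data=df).fit()
    print(anova_lm(model, typ=2))
    print(df.groupby("amplitude")["strength"].mean())
    ```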

  7. Sample Size and Item Parameter Estimation Precision When Utilizing the One-Parameter "Rasch" Model

    ERIC Educational Resources Information Center

    Custer, Michael

    2015-01-01

    This study examines the relationship between sample size and item parameter estimation precision when utilizing the one-parameter model. Item parameter estimates are examined relative to "true" values by evaluating the decline in root mean squared deviation (RMSD) and the number of outliers as sample size increases. This occurs across…

  8. Accelerated Nucleation Due to Trace Additives: A Fluctuating Coverage Model.

    PubMed

    Poon, Geoffrey G; Peters, Baron

    2016-03-01

    We develop a theory to account for variable coverage of trace additives that lower the interfacial free energy for nucleation. The free energy landscape is based on classical nucleation theory and a statistical mechanical model for Langmuir adsorption. Dynamics are modeled by diffusion-controlled attachment and detachment of solutes and adsorbing additives. We compare the mechanism and kinetics from a mean-field model, a projection of the dynamics and free energy surface onto nucleus size, and a full two-dimensional calculation using Kramers-Langer-Berezhkovskii-Szabo theory. The fluctuating coverage model predicts rates more accurately than mean-field models of the same process primarily because it more accurately estimates the potential of mean force along the size coordinate. PMID:26485064

  9. Model atmospheres and fundamental stellar parameters

    NASA Astrophysics Data System (ADS)

    Plez, B.

    2013-11-01

    I start by illustrating the need for precise and accurate fundamental stellar parameters through three examples: lithium abundances in metal-poor stars, the derivation of stellar ages from isochrones, and the chemical composition of planet-hosting stars. I present widely used methods (infrared flux method, spectroscopy) for the determination of T_{eff} and log g. I comment upon difficulties encountered with the determination of stellar parameters of red supergiant stars, and I discuss the impact of non-LTE and 3D hydrodynamical effects.

  10. Parameter Estimates in Differential Equation Models for Chemical Kinetics

    ERIC Educational Resources Information Center

    Winkel, Brian

    2011-01-01

    We discuss the need for devoting time in differential equations courses to modelling and the completion of the modelling process with efforts to estimate the parameters in the models using data. We estimate the parameters present in several differential equation models of chemical reactions of order n, where n = 0, 1, 2, and apply more general…
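
    As a minimal illustration of the fitting step discussed above, the sketch below recovers the rate constant of a second-order decay dC/dt = -kC^n (with n = 2) from synthetic noisy concentration data; the same pattern applies to other reaction orders. The rate constant, noise level, and sampling times are invented.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import curve_fit

# Sketch: estimate the rate constant k of dC/dt = -k*C**n from noisy data.
def simulate(t, k, n=2, C0=1.0):
    """Integrate the rate law and return concentrations at the sample times."""
    sol = solve_ivp(lambda _, C: -k * C**n, (t[0], t[-1]), [C0], t_eval=t)
    return sol.y[0]

t_data = np.linspace(0, 10, 25)
rng = np.random.default_rng(2)
C_data = simulate(t_data, k=0.35) + rng.normal(0, 0.01, t_data.size)  # synthetic "experiment"

k_hat, k_cov = curve_fit(lambda t, k: simulate(t, k), t_data, C_data, p0=[0.1])
print(f"estimated k = {k_hat[0]:.3f} (true 0.35), std = {np.sqrt(k_cov[0, 0]):.3f}")
```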

  11. Additional information on heavy quark parameters from charged lepton forward-backward asymmetry

    NASA Astrophysics Data System (ADS)

    Turczyk, Sascha

    2016-04-01

    The determination of |V_cb| using inclusive and exclusive (semi-)leptonic decays exhibits a long-standing tension of varying O(3σ) significance. For the inclusive determination the decay rate is expanded in 1/m_b using the heavy quark expansion, and from moments of physical observables the higher order heavy quark parameters are extracted from experimental data in order to assess |V_cb| from the normalisation. The drawbacks are high correlations, both theoretical and experimental, among these observables. We will scrutinise the inclusive determination in order to add a new and less correlated observable. This observable is related to the decay angle of the charged lepton and can help to constrain the important heavy quark parameters in a new way. It may validate the current seemingly stable extraction of |V_cb| from inclusive decays or hint at possible issues, and may even be sensitive to New Physics operators.

  12. Effect of a phytogenic feed additive on performance, ovarian morphology, serum lipid parameters and egg sensory quality in laying hen

    PubMed Central

    Saki, Ali Asghar; Aliarabi, Hassan; Hosseini Siyar, Sayed Ali; Salari, Jalal; Hashemi, Mahdi

    2014-01-01

    The present study was conducted to evaluate the effects of dietary inclusion of 4, 8 and 12 g kg-1 of a phytogenic feed additive mixture on performance, egg quality, ovary parameters, serum biochemical parameters and yolk trimethylamine level in laying hens. The results of the experiment showed that egg weight was increased by supplementation of 12 g kg-1 feed additive, whereas egg production, feed intake and feed conversion ratio (FCR) were not significantly affected. There were no significant differences in egg quality parameters with supplementation of the phytogenic feed additive, whereas yolk trimethylamine level decreased as the feed additive level increased. The sensory evaluation parameters did not differ significantly. No significant differences were found in serum cholesterol and triglyceride levels between the treatments, but low- and high-density lipoprotein levels were significantly increased. The number of small follicles and ovary weight were significantly increased by supplementation of 12 g kg-1 feed additive. Overall, dietary supplementation of the polyherbal additive increased egg weight, improved ovary characteristics and decreased yolk trimethylamine level. PMID:25610580

  13. Order-parameter model for unstable multilane traffic flow

    NASA Astrophysics Data System (ADS)

    Lubashevsky, Ihor A.; Mahnke, Reinhard

    2000-11-01

    We discuss a phenomenological approach to the description of unstable vehicle motion on multilane highways that explains in a simple way the observed sequence of the "free flow ↔ synchronized mode ↔ jam" phase transitions as well as the hysteresis in these transitions. We introduce a variable called an order parameter that accounts for possible correlations in the vehicle motion at different lanes. So, it is principally due to the "many-body" effects in the car interaction in contrast to such variables as the mean car density and velocity being actually the zeroth and first moments of the "one-particle" distribution function. Therefore, we regard the order parameter as an additional independent state variable of traffic flow. We assume that these correlations are due to a small group of "fast" drivers and by taking into account the general properties of the driver behavior we formulate a governing equation for the order parameter. In this context we analyze the instability of homogeneous traffic flow that manifested itself in the above-mentioned phase transitions and gave rise to the hysteresis in both of them. Besides, the jam is characterized by the vehicle flows at different lanes which are independent of one another. We specify a certain simplified model in order to study the general features of the car cluster self-formation under the "free flow ↔ synchronized motion" phase transition. In particular, we show that the main local parameters of the developed cluster are determined by the state characteristics of vehicle motion only.

  14. Effect of argon addition on plasma parameters and dust charging in hydrogen plasma

    SciTech Connect

    Kakati, B. Kausik, S. S.; Saikia, B. K.; Bandyopadhyay, M.; Saxena, Y. C.

    2014-10-28

    Experimental results on the effect of adding argon gas to hydrogen plasma in a multi-cusp dusty plasma device are reported. Addition of argon modifies plasma density, electron temperature, degree of hydrogen dissociation, dust current as well as dust charge. From the dust charging profile, it is observed that the dust current and dust charge decrease significantly as the argon flow-rate fraction in the hydrogen plasma is increased up to 40%. Beyond 40% argon flow rate, the changes in dust current and dust charge are insignificant. The results show that the addition of argon to hydrogen plasma in a dusty plasma device can be used as a tool to control the dust charging in a low pressure dusty plasma.

  15. Physiological Parameters Database for PBPK Modeling (External Review Draft)

    EPA Science Inventory

    EPA released for public comment a physiological parameters database (created using Microsoft ACCESS) intended to be used in PBPK modeling. The database contains physiological parameter values for humans from early childhood through senescence. It also contains similar data for an...

  16. DINA Model and Parameter Estimation: A Didactic

    ERIC Educational Resources Information Center

    de la Torre, Jimmy

    2009-01-01

    Cognitive and skills diagnosis models are psychometric models that have immense potential to provide rich information relevant for instruction and learning. However, wider applications of these models have been hampered by their novelty and the lack of commercially available software that can be used to analyze data from this psychometric…

  17. Modeling Errors in Daily Precipitation Measurements: Additive or Multiplicative?

    NASA Technical Reports Server (NTRS)

    Tian, Yudong; Huffman, George J.; Adler, Robert F.; Tang, Ling; Sapiano, Matthew; Maggioni, Viviana; Wu, Huan

    2013-01-01

    The definition and quantification of uncertainty depend on the error model used. For uncertainties in precipitation measurements, two types of error models have been widely adopted: the additive error model and the multiplicative error model. This leads to incompatible specifications of uncertainties and impedes intercomparison and application. In this letter, we assess the suitability of both models for satellite-based daily precipitation measurements in an effort to clarify the uncertainty representation. Three criteria were employed to evaluate the applicability of either model: (1) better separation of the systematic and random errors; (2) applicability to the large range of variability in daily precipitation; and (3) better predictive skills. It is found that the multiplicative error model is a much better choice under all three criteria. It extracted the systematic errors more cleanly, was more consistent with the large variability of precipitation measurements, and produced superior predictions of the error characteristics. The additive error model had several weaknesses, such as nonconstant variance resulting from systematic errors leaking into random errors, and a lack of prediction capability. Therefore, the multiplicative error model is the better choice.
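
    A small numerical illustration of the distinction drawn above: with synthetic precipitation "truth" and a multiplicative error structure, additive residuals grow with rain rate while log-space (multiplicative) residuals remain roughly homoscedastic. The distributions and parameters are invented and not taken from the letter.

```python
import numpy as np

# Sketch contrasting the two error models for a satellite estimate y of a
# "true" daily precipitation x:
#   additive:        y = x + e,           e ~ N(mu, sigma^2)
#   multiplicative:  y = x * exp(e)  <=>  log y = log x + e   (x, y > 0)
rng = np.random.default_rng(3)
x = rng.gamma(shape=0.7, scale=8.0, size=5000) + 0.1   # skewed "truth" (mm/day)
y = x * np.exp(rng.normal(-0.1, 0.4, x.size))          # data with multiplicative error

add_resid = y - x                    # residuals under the additive model
mult_resid = np.log(y) - np.log(x)   # residuals under the multiplicative model
for name, r in [("additive", add_resid), ("multiplicative", mult_resid)]:
    lo, hi = x < np.median(x), x >= np.median(x)
    print(f"{name:14s} resid std: light rain {r[lo].std():5.2f}, heavy rain {r[hi].std():5.2f}")
```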

  18. Parameter recovery and model selection in mixed Rasch models.

    PubMed

    Preinerstorfer, David; Formann, Anton K

    2012-05-01

    This study examines the precision of conditional maximum likelihood estimates and the quality of model selection methods based on information criteria (AIC and BIC) in mixed Rasch models. The design of the Monte Carlo simulation study included four test lengths (10, 15, 25, 40), three sample sizes (500, 1000, 2500), two simulated mixture conditions (one and two groups), and population homogeneity (equally sized subgroups) or heterogeneity (one subgroup three times larger than the other). The results show that both increasing sample size and increasing number of items lead to higher accuracy; medium-range parameters were estimated more precisely than extreme ones; and the accuracy was higher in homogeneous populations. The minimum-BIC method leads to almost perfect results and is more reliable than AIC-based model selection. The results are compared to findings by Li, Cohen, Kim, and Cho (2009) and practical guidelines are provided. PMID:21675964

  19. Modelling the behaviour of additives in gun barrels

    NASA Astrophysics Data System (ADS)

    Rhodes, N.; Ludwig, J. C.

    1986-01-01

    A mathematical model which predicts the flow and heat transfer in a gun barrel is described. The model is transient and two-dimensional, and equations are solved for the velocities and enthalpies of a gas phase, which arises from the combustion of the propellant and cartridge case, for particle additives released from the case, and for the volume fractions of the gas and particles. Closure of the equations is obtained using a two-equation turbulence model. Preliminary calculations are described in which the proportion of particle additives in the cartridge case was altered. The model gives a good prediction of the ballistic performance and the gas-to-wall heat transfer. However, the expected magnitude of the reduction in heat transfer when particles are present is not predicted. The predictions of gas flow invalidate some of the assumptions made regarding case and propellant behavior during combustion, and further work is required to investigate these effects and other possible interactions, both chemical and physical, between gas and particles.

  20. Modeling the cardiovascular system using a nonlinear additive autoregressive model with exogenous input

    NASA Astrophysics Data System (ADS)

    Riedl, M.; Suhrbier, A.; Malberg, H.; Penzel, T.; Bretthauer, G.; Kurths, J.; Wessel, N.

    2008-07-01

    The parameters of heart rate variability and blood pressure variability have proved to be useful analytical tools in cardiovascular physics and medicine. Model-based analysis of these variabilities additionally leads to new prognostic information about mechanisms behind regulations in the cardiovascular system. In this paper, we analyze the complex interaction between heart rate, systolic blood pressure, and respiration by nonparametrically fitted nonlinear additive autoregressive models with external inputs. To this end, we consider measurements of healthy persons and patients suffering from obstructive sleep apnea syndrome (OSAS), with and without hypertension. It is shown that the proposed nonlinear models are capable of describing short-term fluctuations in heart rate as well as systolic blood pressure significantly better than similar linear ones, which confirms the assumption of nonlinearly controlled heart rate and blood pressure. Furthermore, the comparison of the nonlinear and linear approaches reveals that the heart rate and blood pressure variability in healthy subjects is caused by a higher level of noise as well as nonlinearity than in patients suffering from OSAS. The residue analysis points at a further source of heart rate and blood pressure variability in healthy subjects, in addition to heart rate, systolic blood pressure, and respiration. Comparison of the nonlinear models within and among the different groups of subjects suggests the ability to discriminate the cohorts, which could lead to a stratification of hypertension risk in OSAS patients.
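
    The sketch below is a crude stand-in for the modeling approach described above: an additive autoregressive model for heart rate with systolic blood pressure and respiration as inputs. Where the paper fits the one-dimensional functions nonparametrically, this sketch uses cubic polynomial bases so the additive fit reduces to ordinary least squares; all signals and coefficients are synthetic.

```python
import numpy as np

# Stand-in for a nonlinear additive autoregressive model with exogenous input:
# HR[k] = f1(HR[k-1]) + f2(SBP[k-1]) + f3(RESP[k-1]) + noise, with each f_i a
# cubic polynomial. Signals are simulated, not clinical recordings.
rng = np.random.default_rng(4)
N = 2000
resp = np.sin(2 * np.pi * 0.25 * np.arange(N))           # exogenous respiration signal
hr, sbp = np.zeros(N), np.zeros(N)
hr[0], sbp[0] = 60.0, 120.0
for k in range(1, N):
    hr[k] = (0.8 * hr[k - 1] + 12.0 + 0.002 * (sbp[k - 1] - 120) ** 2
             - 2.0 * resp[k - 1] + rng.normal(0, 0.5))
    sbp[k] = 0.9 * sbp[k - 1] + 12.0 - 0.05 * (hr[k - 1] - 60) + rng.normal(0, 1.0)

def poly_basis(x, deg=3):
    """Centered/scaled polynomial basis for one additive component."""
    xc = (x - x.mean()) / x.std()
    return np.column_stack([xc ** d for d in range(1, deg + 1)])

X = np.column_stack([np.ones(N - 1),
                     poly_basis(hr[:-1]), poly_basis(sbp[:-1]), poly_basis(resp[:-1])])
y = hr[1:]
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
print("additive-model RMSE:", np.sqrt(np.mean((y - X @ coef) ** 2)))

# A purely linear AR model with the same inputs, for comparison.
X_lin = np.column_stack([np.ones(N - 1), hr[:-1], sbp[:-1], resp[:-1]])
coef_lin, *_ = np.linalg.lstsq(X_lin, y, rcond=None)
print("linear-model   RMSE:", np.sqrt(np.mean((y - X_lin @ coef_lin) ** 2)))
```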

  1. An Additional Symmetry in the Weinberg-Salam Model

    SciTech Connect

    Bakker, B.L.G.; Veselov, A.I.; Zubkov, M.A.

    2005-06-01

    An additional Z{sub 6} symmetry hidden in the fermion and Higgs sectors of the Standard Model has been found recently. It has a singular nature and is connected to the centers of the SU(3) and SU(2) subgroups of the gauge group. A lattice regularization of the Standard Model was constructed that possesses this symmetry. In this paper, we report our results on the numerical simulation of its electroweak sector.

  2. Hydrological modeling in alpine catchments: sensing the critical parameters towards an efficient model calibration.

    PubMed

    Achleitner, S; Rinderer, M; Kirnbauer, R

    2009-01-01

    For the Tyrolean part of the river Inn, a hybrid model for flood forecasting has been set up and is currently in its test phase. The system comprises a hydraulic 1D model for the river Inn, the hydrological rainfall-runoff model HQsim, and the snow- and ice-melt model SES, which simulate the runoff from non-glaciated and glaciated tributary catchments, respectively. Within this paper the focus is put on the hydrological modeling of the 49 connected non-glaciated catchments realized with the software HQsim. In the course of model calibration, the identification of the most sensitive parameters is important for an efficient calibration procedure. The indicators used for explaining the parameter sensitivities were chosen specifically for the purpose of flood forecasting. Finally, five model parameters could be identified as being sensitive for calibration when aiming for a model that is well calibrated for flood conditions. In addition, two parameters were identified which are sensitive in situations where the snow line plays an important role. PMID:19759453

  3. Modeling uranium transport in acidic contaminated groundwater with base addition

    SciTech Connect

    Zhang, Fan; Luo, Wensui; Parker, Jack C.; Brooks, Scott C; Watson, David B; Jardine, Philip; Gu, Baohua

    2011-01-01

    This study investigates reactive transport modeling in a column of uranium(VI)-contaminated sediments with base additions in the circulating influent. The groundwater and sediment exhibit oxic conditions with low pH, high concentrations of NO{sub 3}{sup -}, SO{sub 4}{sup 2-}, U and various metal cations. Preliminary batch experiments indicate that additions of strong base induce rapid immobilization of U for this material. In the column experiment that is the focus of the present study, effluent groundwater was titrated with NaOH solution in an inflow reservoir before reinjection to gradually increase the solution pH in the column. An equilibrium hydrolysis, precipitation and ion exchange reaction model developed through simulation of the preliminary batch titration experiments predicted faster reduction of aqueous Al than observed in the column experiment. The model was therefore modified to consider reaction kinetics for the precipitation and dissolution processes which are the major mechanism for Al immobilization. The combined kinetic and equilibrium reaction model adequately described variations in pH, aqueous concentrations of metal cations (Al, Ca, Mg, Sr, Mn, Ni, Co), sulfate and U(VI). The experimental and modeling results indicate that U(VI) can be effectively sequestered with controlled base addition due to sorption by slowly precipitated Al with pH-dependent surface charge. The model may prove useful to predict field-scale U(VI) sequestration and remediation effectiveness.

  4. Using Set Model for Learning Addition of Integers

    ERIC Educational Resources Information Center

    Lestari, Umi Puji; Putri, Ratu Ilma Indra; Hartono, Yusuf

    2015-01-01

    This study aims to investigate how set model can help students' understanding of addition of integers in fourth grade. The study has been carried out to 23 students and a teacher of IVC SD Iba Palembang in January 2015. This study is a design research that also promotes PMRI as the underlying design context and activity. Results showed that the…

  5. Estimation Methods for One-Parameter Testlet Models

    ERIC Educational Resources Information Center

    Jiao, Hong; Wang, Shudong; He, Wei

    2013-01-01

    This study demonstrated the equivalence between the Rasch testlet model and the three-level one-parameter testlet model and explored the Markov Chain Monte Carlo (MCMC) method for model parameter estimation in WINBUGS. The estimation accuracy from the MCMC method was compared with those from the marginalized maximum likelihood estimation (MMLE)…

  6. Equating Parameter Estimates from the Generalized Graded Unfolding Model.

    ERIC Educational Resources Information Center

    Roberts, James S.

    Three common methods for equating parameter estimates from binary item response theory models are extended to the generalized graded unfolding model (GGUM). The GGUM is an item response model in which single-peaked, nonmonotonic expected value functions are implemented for polytomous responses. GGUM parameter estimates are equated using extended…

  7. Seamless continental-domain hydrologic model parameter estimations with Multi-Scale Parameter Regionalization

    NASA Astrophysics Data System (ADS)

    Mizukami, Naoki; Clark, Martyn; Newman, Andrew; Wood, Andy

    2016-04-01

    Estimation of spatially distributed parameters is one of the biggest challenges in hydrologic modeling over a large spatial domain. This problem arises from methodological challenges such as the transfer of calibrated parameters to ungauged locations. Consequently, many current large scale hydrologic assessments rely on spatially inconsistent parameter fields showing patchwork patterns resulting from individual basin calibration or spatially constant parameters resulting from the adoption of default or a-priori estimates. In this study we apply the Multi-scale Parameter Regionalization (MPR) framework (Samaniego et al., 2010) to generate spatially continuous and optimized parameter fields for the Variable Infiltration Capacity (VIC) model over the contiguous United States (CONUS). The MPR method uses transfer functions that relate geophysical attributes (e.g., soil) to model parameters (e.g., parameters that describe the storage and transmission of water) at the native resolution of the geophysical attribute data and then scale to the model spatial resolution with several scaling functions, e.g., arithmetic mean, harmonic mean, and geometric mean. Model parameter adjustments are made by calibrating the parameters of the transfer function rather than the model parameters themselves. In this presentation, we first discuss conceptual challenges in a "model agnostic" continental-domain application of the MPR approach. We describe development of transfer functions for the soil parameters, and discuss challenges associated with extending MPR for VIC to multiple models. Next, we discuss the "computational shortcut" of headwater basin calibration where we estimate the parameters for only 500 headwater basins rather than conducting simulations for every grid box across the entire domain. We first performed individual basin calibration to obtain a benchmark of the maximum achievable performance in each basin, and examined their transferability to the other basins. We then
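
    The core MPR idea, calibrating transfer-function coefficients rather than gridded parameter values and then upscaling with a chosen operator, can be sketched as follows. The clay-fraction field, the transfer function, and its coefficients are hypothetical placeholders, not the functions developed in the study.

```python
import numpy as np

# Sketch of Multi-scale Parameter Regionalization: a transfer function maps a
# fine-resolution attribute to a model parameter; the parameter is then
# upscaled to the model grid. Only (a, b) would be calibrated.
rng = np.random.default_rng(5)
clay = rng.uniform(0.05, 0.55, size=(120, 120))   # fine-grid attribute (fraction)

def transfer(clay, a=0.8, b=1.5):
    """Hypothetical transfer function: porosity-like parameter from clay fraction."""
    return a * clay ** 0.5 + 0.1 * b

fine_param = transfer(clay)

def upscale(field, block, how="arithmetic"):
    """Aggregate a fine-resolution field to coarse blocks with a scaling operator."""
    h, w = field.shape
    blocks = field.reshape(h // block, block, w // block, block).swapaxes(1, 2)
    blocks = blocks.reshape(h // block, w // block, -1)
    if how == "arithmetic":
        return blocks.mean(axis=-1)
    if how == "geometric":
        return np.exp(np.log(blocks).mean(axis=-1))
    if how == "harmonic":
        return 1.0 / (1.0 / blocks).mean(axis=-1)
    raise ValueError(how)

for how in ("arithmetic", "geometric", "harmonic"):
    coarse = upscale(fine_param, block=12, how=how)
    print(f"{how:10s} upscaled parameter: mean={coarse.mean():.3f}")
```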

  8. STREAM PRODUCTIVITY ANALYSIS WITH DORM (DISSOLVED OXYGEN ROUTING MODEL) - 2: PARAMETER ESTIMATION AND SENSITIVITY

    EPA Science Inventory

    The dissolved oxygen routing model DORM, which determines productivity and respiration of a stream biological community, requires, in addition to stream geometry and stream flow, parameter values for reaeration coefficients and temperature and dissolved oxygen (DO) limitations on ...

  9. Parameter estimation and error analysis in environmental modeling and computation

    NASA Technical Reports Server (NTRS)

    Kalmaz, E. E.

    1986-01-01

    A method for the estimation of parameters and error analysis in the development of nonlinear modeling for environmental impact assessment studies is presented. The modular computer program can interactively fit different nonlinear models to the same set of data, dynamically changing the error structure associated with observed values. Parameter estimation techniques and sequential estimation algorithms employed in parameter identification and model selection are first discussed. Then, least-square parameter estimation procedures are formulated, utilizing differential or integrated equations, and are used to define a model for association of error with experimentally observed data.

  10. Fitting additive hazards models for case-cohort studies: a multiple imputation approach.

    PubMed

    Jung, Jinhyouk; Harel, Ofer; Kang, Sangwook

    2016-07-30

    In this paper, we consider fitting semiparametric additive hazards models for case-cohort studies using a multiple imputation approach. In a case-cohort study, main exposure variables are measured only on some selected subjects, but other covariates are often available for the whole cohort. We consider this as a special case of a missing covariate by design. We propose to employ a popular incomplete data method, multiple imputation, for estimation of the regression parameters in additive hazards models. For imputation models, an imputation modeling procedure based on a rejection sampling is developed. A simple imputation modeling that can naturally be applied to a general missing-at-random situation is also considered and compared with the rejection sampling method via extensive simulation studies. In addition, a misspecification aspect in imputation modeling is investigated. The proposed procedures are illustrated using a cancer data example. Copyright © 2015 John Wiley & Sons, Ltd. PMID:26194861

  11. Estimating soil water retention using soil component additivity model

    NASA Astrophysics Data System (ADS)

    Zeiliger, A.; Ermolaeva, O.; Semenov, V.

    2009-04-01

    Soil water retention is a major soil hydraulic property that governs soil functioning in ecosystems and greatly affects soil management. Data on soil water retention are used in research and applications in hydrology, agronomy, meteorology, ecology, environmental protection, and many other soil-related fields. Soil organic matter content and composition affect both soil structure and adsorption properties; therefore, water retention may be affected by changes in soil organic matter that occur because of both climate change and modifications of management practices. Thus, effects of organic matter on soil water retention should be understood and quantified. Measurement of soil water retention is relatively time-consuming and becomes impractical when soil hydrologic estimates are needed for large areas. One approach to soil water retention estimation from readily available data is based on the hypothesis that soil water retention may be estimated as an additive function obtained by summing up the water retention of pore subspaces associated with soil textural and/or structural components and organic matter. The additivity model was tested with 550 soil samples from the international database UNSODA and 2667 soil samples from the European database HYPRES, covering all textural soil classes of the USDA soil texture classification. The root mean square errors (RMSEs) of the volumetric water content estimates for UNSODA vary from 0.021 m3m-3 for coarse sandy loam to 0.075 m3m-3 for sandy clay. The obtained RMSEs are at the lower end of the RMSE range for regression-based water retention estimates found in the literature. Including retention estimates of organic matter significantly improved the RMSEs. The attained accuracy warrants testing the 'additivity' model with additional soil data and improving this model to accommodate various types of soil structure. Keywords: soil water retention, soil components, additive model, soil texture, organic matter.

  12. Automatic Parameters Identification of Groundwater Model using Expert System

    NASA Astrophysics Data System (ADS)

    Tsai, P. J.; Chen, Y.; Chang, L.

    2011-12-01

    Conventionally, parameter identification for groundwater models can be classified into manual identification and automatic identification using optimization methods. The parameter search in manual identification requires heavy interaction with the modeler; therefore, the identified parameter values are interpretable by the modeler. However, the manual method is a complicated and time-consuming task and requires groundwater modeling practice and parameter identification experience. Optimization-based identification is more efficient and convenient than the manual approach. Nevertheless, the parameter search in the optimization approach cannot directly involve the modeler, who can only examine the final results. Moreover, because of the simplification of the optimization model, the parameter values obtained by optimization-based identification may not be feasible in reality. In light of the previous discussion, this study integrates a rule-based expert system and a groundwater simulation model, MODFLOW 2000, to develop an automatic groundwater parameter identification system. The hydraulic conductivity and specific yield are the parameters to be calibrated in the system. Since the parameter values are searched automatically according to rules specified by the modeler, the procedure is efficient and the identified parameter values are more interpretable than those from an optimization-based approach. Besides, since the rules are easy to modify and extend, the system is flexible and can accumulate expert experience. Several hypothetical cases were used to examine the system's validity and capability. The results show a good agreement between the identified and given parameter values and also demonstrate a great potential for extending the system to a fully functional and practical field application system.

  13. Summary of the DREAM8 Parameter Estimation Challenge: Toward Parameter Identification for Whole-Cell Models.

    PubMed

    Karr, Jonathan R; Williams, Alex H; Zucker, Jeremy D; Raue, Andreas; Steiert, Bernhard; Timmer, Jens; Kreutz, Clemens; Wilkinson, Simon; Allgood, Brandon A; Bot, Brian M; Hoff, Bruce R; Kellen, Michael R; Covert, Markus W; Stolovitzky, Gustavo A; Meyer, Pablo

    2015-05-01

    Whole-cell models that explicitly represent all cellular components at the molecular level have the potential to predict phenotype from genotype. However, even for simple bacteria, whole-cell models will contain thousands of parameters, many of which are poorly characterized or unknown. New algorithms are needed to estimate these parameters and enable researchers to build increasingly comprehensive models. We organized the Dialogue for Reverse Engineering Assessments and Methods (DREAM) 8 Whole-Cell Parameter Estimation Challenge to develop new parameter estimation algorithms for whole-cell models. We asked participants to identify a subset of parameters of a whole-cell model given the model's structure and in silico "experimental" data. Here we describe the challenge, the best performing methods, and new insights into the identifiability of whole-cell models. We also describe several valuable lessons we learned toward improving future challenges. Going forward, we believe that collaborative efforts supported by inexpensive cloud computing have the potential to solve whole-cell model parameter estimation. PMID:26020786

  14. Parameter estimation of hydrologic models using data assimilation

    NASA Astrophysics Data System (ADS)

    Kaheil, Y. H.

    2005-12-01

    The uncertainties associated with the modeling of hydrologic systems sometimes demand that data should be incorporated in an on-line fashion in order to understand the behavior of the system. This paper presents a Bayesian strategy to estimate parameters for hydrologic models in an iterative mode. The paper presents a modified technique called localized Bayesian recursive estimation (LoBaRE) that efficiently identifies the optimum parameter region, avoiding convergence to a single best parameter set. The LoBaRE methodology is tested for parameter estimation for two different types of models: a support vector machine (SVM) model for predicting soil moisture, and the Sacramento Soil Moisture Accounting (SAC-SMA) model for estimating streamflow. The SAC-SMA model has 13 parameters that must be determined. The SVM model has three parameters. Bayesian inference is used to estimate the best parameter set in an iterative fashion. This is done by narrowing the sampling space by imposing uncertainty bounds on the posterior best parameter set and/or updating the "parent" bounds based on their fitness. The new approach results in fast convergence towards the optimal parameter set using minimum training/calibration data and evaluation of fewer parameter sets. The efficacy of the localized methodology is also compared with the previously used Bayesian recursive estimation (BaRE) algorithm.

  15. Distributed parameter modeling for the control of flexible spacecraft

    NASA Technical Reports Server (NTRS)

    Taylor, Lawrence W., Jr.

    1990-01-01

    The use of finite element models (FEMs) of spacecraft structural dynamics is common practice, but it has a number of shortcomings. Distributed-parameter models offer an alternative, but present both advantages and difficulties. First, the model order does not have to be reduced prior to the inclusion of control system dynamics. This advantage eliminates the risk involved with model 'order reduction'. Second, distributed-parameter models inherently involve fewer parameters, thereby enabling more accurate parameter estimation using experimental data. Third, it is possible to include the damping in the basic model, thereby increasing the accuracy of the structural damping. The difficulty in generating distributed-parameter models of complex spacecraft configurations has been greatly alleviated by the use of PDEMOD, BUNVIS-RG, or DISTEL. PDEMOD is being developed for simultaneously modeling structural dynamics and control system dynamics.

  16. Isolating parameter sensitivity in reach scale transient storage modeling

    NASA Astrophysics Data System (ADS)

    Schmadel, Noah M.; Neilson, Bethany T.; Heavilin, Justin E.; Wörman, Anders

    2016-03-01

    Parameter sensitivity analyses, although necessary to assess identifiability, may not lead to an increased understanding or accurate representation of transient storage processes when associated parameter sensitivities are muted. Reducing the number of uncertain calibration parameters through field-based measurements may allow for more realistic representations and improved predictive capabilities of reach scale stream solute transport. Using a two-zone transient storage model, we examined the spatial detail necessary to set parameters describing hydraulic characteristics and isolate the sensitivity of the parameters associated with transient storage processes. We represented uncertain parameter distributions as triangular fuzzy numbers and used closed form statistical moment solutions to express parameter sensitivity thus avoiding copious model simulations. These solutions also allowed for the direct incorporation of different levels of spatial information regarding hydraulic characteristics. To establish a baseline for comparison, we performed a sensitivity analysis considering all model parameters as uncertain. Next, we set hydraulic parameters as the reach averages, leaving the transient storage parameters as uncertain, and repeated the analysis. Lastly, we incorporated high resolution hydraulic information assessed from aerial imagery to examine whether more spatial detail was necessary to isolate the sensitivity of transient storage parameters. We found that a reach-average hydraulic representation, as opposed to using detailed spatial information, was sufficient to highlight transient storage parameter sensitivity and provide more information regarding the potential identifiability of these parameters.

  17. Accuracy of Parameter Estimation in Gibbs Sampling under the Two-Parameter Logistic Model.

    ERIC Educational Resources Information Center

    Kim, Seock-Ho; Cohen, Allan S.

    The accuracy of Gibbs sampling, a Markov chain Monte Carlo procedure, was considered for estimation of item and ability parameters under the two-parameter logistic model. Memory test data were analyzed to illustrate the Gibbs sampling procedure. Simulated data sets were analyzed using Gibbs sampling and the marginal Bayesian method. The marginal…

  18. The Effect of Nondeterministic Parameters on Shock-Associated Noise Prediction Modeling

    NASA Technical Reports Server (NTRS)

    Dahl, Milo D.; Khavaran, Abbas

    2010-01-01

    Engineering applications for aircraft noise prediction contain models of physical phenomena that enable solutions to be computed quickly. These models contain parameters whose uncertainty is not accounted for in the solution. To include uncertainty in the solution, nondeterministic computational methods are applied. Using prediction models for supersonic jet broadband shock-associated noise, fixed model parameters are replaced by probability distributions to illustrate one of these methods. The results show the impact of using nondeterministic parameters both on estimating the model output uncertainty and on the model spectral level prediction. In addition, a global sensitivity analysis is used to determine the influence of the model parameters on the output, and to identify the parameters with the least influence on model output.
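
    A minimal sketch of the nondeterministic approach described above: fixed parameters are replaced by probability distributions, propagated by Monte Carlo sampling, and a crude correlation-based sensitivity measure ranks their influence. The surrogate "model", the distributions, and the sensitivity measure are all assumptions for illustration, not the shock-associated-noise model itself.

```python
import numpy as np

# Sketch: propagate parameter distributions through a surrogate model by
# Monte Carlo sampling, then rank parameter influence with a crude measure.
rng = np.random.default_rng(6)

def noise_model(p1, p2, p3):
    """Hypothetical spectral-level surrogate (dB) with three uncertain parameters."""
    return 10 * np.log10(p1 ** 2 + 0.5 * p2) + 3.0 * np.sin(p3)

n = 20000
p1 = rng.normal(2.0, 0.2, n)          # e.g., shock-cell strength factor (assumed)
p2 = rng.lognormal(0.0, 0.3, n)       # e.g., turbulence length-scale factor (assumed)
p3 = rng.uniform(0.0, 0.6, n)         # e.g., convection-velocity ratio (assumed)
out = noise_model(p1, p2, p3)

print(f"output mean {out.mean():.2f} dB, 95% interval "
      f"[{np.percentile(out, 2.5):.2f}, {np.percentile(out, 97.5):.2f}] dB")

# Crude global sensitivity: squared correlation of each parameter with the output.
for name, p in [("p1", p1), ("p2", p2), ("p3", p3)]:
    r = np.corrcoef(p, out)[0, 1]
    print(f"sensitivity of output to {name}: R^2 = {r ** 2:.2f}")
```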

  19. Summary of the DREAM8 Parameter Estimation Challenge: Toward Parameter Identification for Whole-Cell Models

    PubMed Central

    Karr, Jonathan R.; Williams, Alex H.; Zucker, Jeremy D.; Raue, Andreas; Steiert, Bernhard; Timmer, Jens; Kreutz, Clemens; Wilkinson, Simon; Allgood, Brandon A.; Bot, Brian M.; Hoff, Bruce R.; Kellen, Michael R.; Covert, Markus W.; Stolovitzky, Gustavo A.; Meyer, Pablo

    2015-01-01

    Whole-cell models that explicitly represent all cellular components at the molecular level have the potential to predict phenotype from genotype. However, even for simple bacteria, whole-cell models will contain thousands of parameters, many of which are poorly characterized or unknown. New algorithms are needed to estimate these parameters and enable researchers to build increasingly comprehensive models. We organized the Dialogue for Reverse Engineering Assessments and Methods (DREAM) 8 Whole-Cell Parameter Estimation Challenge to develop new parameter estimation algorithms for whole-cell models. We asked participants to identify a subset of parameters of a whole-cell model given the model’s structure and in silico “experimental” data. Here we describe the challenge, the best performing methods, and new insights into the identifiability of whole-cell models. We also describe several valuable lessons we learned toward improving future challenges. Going forward, we believe that collaborative efforts supported by inexpensive cloud computing have the potential to solve whole-cell model parameter estimation. PMID:26020786

  20. Additions to Mars Global Reference Atmospheric Model (Mars-GRAM)

    NASA Technical Reports Server (NTRS)

    Justus, C. G.

    1991-01-01

    Three major additions or modifications were made to the Mars Global Reference Atmospheric Model (Mars-GRAM): (1) in addition to the interactive version, a new batch version is available, which uses NAMELIST input, and is completely modular, so that the main driver program can easily be replaced by any calling program, such as a trajectory simulation program; (2) both the interactive and batch versions now have an option for treating local-scale dust storm effects, rather than just the global-scale dust storms in the original Mars-GRAM; and (3) the Zurek wave perturbation model was added, to simulate the effects of tidal perturbations, in addition to the random (mountain wave) perturbation model of the original Mars-GRAM. A minor modification has also been made which allows heights to go below local terrain height and return realistic pressure, density, and temperature (not the surface values) as returned by the original Mars-GRAM. This feature will allow simulations of Mars rover paths which might go into local valley areas which lie below the average height of the present, rather coarse-resolution, terrain height data used by Mars-GRAM. Sample input and output of both the interactive and batch version of Mars-GRAM are presented.

  1. Additions to Mars Global Reference Atmospheric Model (MARS-GRAM)

    NASA Technical Reports Server (NTRS)

    Justus, C. G.; James, Bonnie

    1992-01-01

    Three major additions or modifications were made to the Mars Global Reference Atmospheric Model (Mars-GRAM): (1) in addition to the interactive version, a new batch version is available, which uses NAMELIST input, and is completely modular, so that the main driver program can easily be replaced by any calling program, such as a trajectory simulation program; (2) both the interactive and batch versions now have an option for treating local-scale dust storm effects, rather than just the global-scale dust storms in the original Mars-GRAM; and (3) the Zurek wave perturbation model was added, to simulate the effects of tidal perturbations, in addition to the random (mountain wave) perturbation model of the original Mars-GRAM. A minor modification was also made which allows heights to go 'below' local terrain height and return 'realistic' pressure, density, and temperature, and not the surface values, as returned by the original Mars-GRAM. This feature will allow simulations of Mars rover paths which might go into local 'valley' areas which lie below the average height of the present, rather coarse-resolution, terrain height data used by Mars-GRAM. Sample input and output of both the interactive and batch versions of Mars-GRAM are presented.

  2. Computationally inexpensive identification of noninformative model parameters by sequential screening

    NASA Astrophysics Data System (ADS)

    Mai, Juliane; Cuntz, Matthias; Zink, Matthias; Thober, Stephan; Kumar, Rohini; Schäfer, David; Schrön, Martin; Craven, John; Rakovec, Oldrich; Spieler, Diana; Prykhodko, Vladyslav; Dalmasso, Giovanni; Musuuza, Jude; Langenberg, Ben; Attinger, Sabine; Samaniego, Luis

    2016-04-01

    Environmental models tend to require increasing computational time and resources as physical process descriptions are improved or new descriptions are incorporated. Many-query applications such as sensitivity analysis or model calibration usually require a large number of model evaluations leading to high computational demand. This often limits the feasibility of rigorous analyses. Here we present a fully automated sequential screening method that selects only informative parameters for a given model output. The method requires a number of model evaluations that is approximately 10 times the number of model parameters. It was tested using the mesoscale hydrologic model mHM in three hydrologically unique European river catchments. It identified around 20 informative parameters out of 52, with different informative parameters in each catchment. The screening method was evaluated with subsequent analyses using all 52 as well as only the informative parameters. Subsequent Sobol's global sensitivity analysis led to almost identical results yet required 40% fewer model evaluations after screening. mHM was calibrated with all and with only informative parameters in the three catchments. Model performances for daily discharge were equally high in both cases with Nash-Sutcliffe efficiencies above 0.82. Calibration using only the informative parameters needed just one third of the number of model evaluations. The universality of the sequential screening method was demonstrated using several general test functions from the literature. We therefore recommend the use of the computationally inexpensive sequential screening method prior to rigorous analyses on complex environmental models.

  3. Computationally inexpensive identification of noninformative model parameters by sequential screening

    NASA Astrophysics Data System (ADS)

    Cuntz, Matthias; Mai, Juliane; Zink, Matthias; Thober, Stephan; Kumar, Rohini; Schäfer, David; Schrön, Martin; Craven, John; Rakovec, Oldrich; Spieler, Diana; Prykhodko, Vladyslav; Dalmasso, Giovanni; Musuuza, Jude; Langenberg, Ben; Attinger, Sabine; Samaniego, Luis

    2015-08-01

    Environmental models tend to require increasing computational time and resources as physical process descriptions are improved or new descriptions are incorporated. Many-query applications such as sensitivity analysis or model calibration usually require a large number of model evaluations leading to high computational demand. This often limits the feasibility of rigorous analyses. Here we present a fully automated sequential screening method that selects only informative parameters for a given model output. The method requires a number of model evaluations that is approximately 10 times the number of model parameters. It was tested using the mesoscale hydrologic model mHM in three hydrologically unique European river catchments. It identified around 20 informative parameters out of 52, with different informative parameters in each catchment. The screening method was evaluated with subsequent analyses using all 52 as well as only the informative parameters. Subsequent Sobol's global sensitivity analysis led to almost identical results yet required 40% fewer model evaluations after screening. mHM was calibrated with all and with only informative parameters in the three catchments. Model performances for daily discharge were equally high in both cases with Nash-Sutcliffe efficiencies above 0.82. Calibration using only the informative parameters needed just one third of the number of model evaluations. The universality of the sequential screening method was demonstrated using several general test functions from the literature. We therefore recommend the use of the computationally inexpensive sequential screening method prior to rigorous analyses on complex environmental models.

  4. Agricultural and Environmental Input Parameters for the Biosphere Model

    SciTech Connect

    Kaylie Rasmuson; Kurt Rautenstrauch

    2003-06-20

    This analysis is one of nine technical reports that support the Environmental Radiation Model for Yucca Mountain Nevada (ERMYN) biosphere model. It documents input parameters for the biosphere model, and supports the use of the model to develop Biosphere Dose Conversion Factors (BDCF). The biosphere model is one of a series of process models supporting the Total System Performance Assessment (TSPA) for the repository at Yucca Mountain. The ERMYN provides the TSPA with the capability to perform dose assessments. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows the interrelationships between the major activities and their products (the analysis and model reports) that were planned in the biosphere Technical Work Plan (TWP, BSC 2003a). It should be noted that some documents identified in Figure 1-1 may be under development and therefore not available at the time this document is issued. The "Biosphere Model Report" (BSC 2003b) describes the ERMYN and its input parameters. This analysis report, ANL-MGR-MD-000006, "Agricultural and Environmental Input Parameters for the Biosphere Model", is one of the five reports that develop input parameters for the biosphere model. This report defines and justifies values for twelve parameters required in the biosphere model. These parameters are related to use of contaminated groundwater to grow crops. The parameter values recommended in this report are used in the soil, plant, and carbon-14 submodels of the ERMYN.

  5. A simulation of water pollution model parameter estimation

    NASA Technical Reports Server (NTRS)

    Kibler, J. F.

    1976-01-01

    A parameter estimation procedure for a water pollution transport model is elaborated. A two-dimensional instantaneous-release shear-diffusion model serves as representative of a simple transport process. Pollution concentration levels are arrived at via modeling of a remote-sensing system. The remote-sensed data are simulated by adding Gaussian noise to the concentration level values generated via the transport model. Model parameters are estimated from the simulated data using a least-squares batch processor. Resolution, sensor array size, and number and location of sensor readings can be found from the accuracies of the parameter estimates.
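
    The batch least-squares step can be sketched as follows, with a plain 2-D Gaussian puff (instantaneous release, constant diffusivities) standing in for the shear-diffusion model and Gaussian noise standing in for the simulated remote sensor. The release mass, diffusivities, grid, and noise level are invented.

```python
import numpy as np
from scipy.optimize import least_squares

# Sketch: add noise to a transport-model concentration field to mimic
# remote-sensed data, then recover the model parameters by least squares.
def puff(params, x, y, t=3600.0):
    """2-D Gaussian puff from an instantaneous release (stand-in transport model)."""
    M, Dx, Dy = params                        # released mass, diffusivities
    return (M / (4 * np.pi * t * np.sqrt(Dx * Dy))
            * np.exp(-x ** 2 / (4 * Dx * t) - y ** 2 / (4 * Dy * t)))

rng = np.random.default_rng(7)
xg, yg = np.meshgrid(np.linspace(-500, 500, 21), np.linspace(-500, 500, 21))
true = np.array([1.0e3, 5.0, 2.0])            # "truth" used to generate the data
data = puff(true, xg, yg) + rng.normal(0, 1e-4, xg.shape)   # simulated sensor noise

fit = least_squares(lambda p: (puff(p, xg, yg) - data).ravel(),
                    x0=[5.0e2, 1.0, 1.0], bounds=(1e-6, np.inf))
print("estimated [M, Dx, Dy]:", np.round(fit.x, 2), " true:", true)
```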

  6. Understanding Rasch Measurement: The Rasch Model, Additive Conjoint Measurement, and New Models of Probabilistic Measurement Theory.

    ERIC Educational Resources Information Center

    Karabatsos, George

    2001-01-01

    Describes similarities and differences between additive conjoint measurement and the Rasch model, and formalizes some new nonparametric item response models that are, in a sense, probabilistic measurement theory models. Applies these new models to published and simulated data. (SLD)

  7. On Interpreting the Parameters for Any Item Response Model

    ERIC Educational Resources Information Center

    Thissen, David

    2009-01-01

    Maris and Bechger's article is an exercise in technical virtuosity and provides much to be learned by students of psychometrics. In this commentary, the author begins with making two observations. The first is that the title, "On Interpreting the Model Parameters for the Three Parameter Logistic Model," belies the generality of parts of Maris and…

  8. Exploring the interdependencies between parameters in a material model.

    SciTech Connect

    Silling, Stewart Andrew; Fermen-Coker, Muge

    2014-01-01

    A method is investigated to reduce the number of numerical parameters in a material model for a solid. The basis of the method is to detect interdependencies between parameters within a class of materials of interest. The method is demonstrated for a set of material property data for iron and steel using the Johnson-Cook plasticity model.

  9. Backbone additivity in the transfer model of protein solvation

    PubMed Central

    Hu, Char Y; Kokubo, Hironori; Lynch, Gillian C; Bolen, D Wayne; Pettitt, B Montgomery

    2010-01-01

    The transfer model implying additivity of the peptide backbone free energy of transfer is computationally tested. Molecular dynamics simulations are used to determine the extent of change in transfer free energy (ΔGtr) with increase in chain length of oligoglycine with capped end groups. Solvation free energies of oligoglycine models of varying lengths in pure water and in the osmolyte solutions, 2M urea and 2M trimethylamine N-oxide (TMAO), were calculated from simulations of all atom models, and ΔGtr values for peptide backbone transfer from water to the osmolyte solutions were determined. The results show that the transfer free energies change linearly with increasing chain length, demonstrating the principle of additivity, and provide values in reasonable agreement with experiment. The peptide backbone transfer free energy contributions arise from van der Waals interactions in the case of transfer to urea, but from electrostatics on transfer to TMAO solution. The simulations used here allow for the calculation of the solvation and transfer free energy of longer oligoglycine models to be evaluated than is currently possible through experiment. The peptide backbone unit computed transfer free energy of −54 cal/mol/M compares quite favorably with −43 cal/mol/M determined experimentally. PMID:20306490

  10. Backbone Additivity in the Transfer Model of Protein Solvation

    SciTech Connect

    Hu, Char Y.; Kokubo, Hironori; Lynch, Gillian C.; Bolen, D Wayne; Pettitt, Bernard M.

    2010-05-01

    The transfer model implying additivity of the peptide backbone free energy of transfer is computationally tested. Molecular dynamics simulations are used to determine the extent of change in transfer free energy (ΔGtr) with increase in chain length of oligoglycine with capped end groups. Solvation free energies of oligoglycine models of varying lengths in pure water and in the osmolyte solutions, 2M urea and 2M trimethylamine N-oxide (TMAO), were calculated from simulations of all atom models, and ΔGtr values for peptide backbone transfer from water to the osmolyte solutions were determined. The results show that the transfer free energies change linearly with increasing chain length, demonstrating the principle of additivity, and provide values in reasonable agreement with experiment. The peptide backbone transfer free energy contributions arise from van der Waals interactions in the case of transfer to urea, but from electrostatics on transfer to TMAO solution. The simulations used here allow for the calculation of the solvation and transfer free energy of longer oligoglycine models to be evaluated than is currently possible through experiment. The peptide backbone unit computed transfer free energy of –54 cal/mol/M compares quite favorably with –43 cal/mol/M determined experimentally.

  11. Brownian motion model with stochastic parameters for asset prices

    NASA Astrophysics Data System (ADS)

    Ching, Soo Huei; Hin, Pooi Ah

    2013-09-01

    The Brownian motion model may not be a completely realistic model for asset prices because in real asset prices the drift μ and volatility σ may change over time. Here we consider a model in which the parameter x = (μ, σ) is such that its value x(t + Δt) at a short time Δt ahead of the present time t depends on the value of the asset price at time t + Δt as well as the present parameter value x(t) and m-1 other parameter values before time t via a conditional distribution. Malaysian stock prices are used to compare the performance of the Brownian motion model with fixed parameters with that of the model with stochastic parameters.
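
    The contrast between fixed and time-varying parameters can be illustrated with the short simulation below. The conditional-distribution update of x(t + Δt) used in the paper is replaced by a simple prescribed drift and volatility path, so this is only a qualitative stand-in; all numbers are invented.

```python
import numpy as np

# Sketch: geometric Brownian motion with fixed (mu, sigma) versus a path
# whose parameters drift over time.
rng = np.random.default_rng(8)
n, dt, S0 = 1000, 1.0 / 252.0, 100.0
t = np.arange(n) * dt

mu_fixed, sigma_fixed = 0.08, 0.20
mu_t = 0.08 + 0.10 * np.sin(2 * np.pi * t / t[-1])   # prescribed time-varying drift
sigma_t = 0.20 + 0.10 * (t / t[-1])                  # prescribed rising volatility

def gbm_path(mu, sigma):
    """Simulate one price path from per-step drift and volatility arrays."""
    z = rng.standard_normal(n)
    log_ret = (mu - 0.5 * sigma ** 2) * dt + sigma * np.sqrt(dt) * z
    return S0 * np.exp(np.cumsum(log_ret))

fixed_path = gbm_path(np.full(n, mu_fixed), np.full(n, sigma_fixed))
varying_path = gbm_path(mu_t, sigma_t)
print("final price, fixed parameters:     ", round(fixed_path[-1], 2))
print("final price, stochastic parameters:", round(varying_path[-1], 2))
print("realized ann. vol, fixed:  ", round(np.std(np.diff(np.log(fixed_path))) / np.sqrt(dt), 3))
print("realized ann. vol, varying:", round(np.std(np.diff(np.log(varying_path))) / np.sqrt(dt), 3))
```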

  12. Estimation of Accumulation Parameters for Urban Runoff Quality Modeling

    NASA Astrophysics Data System (ADS)

    Alley, William M.; Smith, Peter E.

    1981-12-01

    Many recently developed watershed models utilize accumulation and washoff equations to simulate the quality of runoff from urban impervious areas. These models often have been calibrated by trial and error and with little understanding of model sensitivity to the various parameters. Methodologies for estimating best-fit values of the washoff parameters commonly used in these models have been presented previously. In this paper, parameter identification techniques for estimating the accumulation parameters from measured runoff quality data are presented along with a sensitivity analysis of the parameters. Results from application of the techniques and the sensitivity analysis suggest a need for data quantifying the magnitude and identifying the shape of constituent accumulation curves. An exponential accumulation curve is shown to be more general than the linear accumulation curves used in most urban runoff quality models. When determining accumulation rates, attention needs to be given to the effects of residual amounts of constituents remaining after the previous period of storm runoff or street sweeping.
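
    For illustration, the two buildup forms mentioned above can be written as B(t) = a·t (linear) and B(t) = B_max(1 − e^(−kt)) (exponential). The sketch below tabulates both for a few antecedent dry periods; the coefficients are invented and not taken from the paper.

```python
import numpy as np

# Sketch of the two accumulation (buildup) curves: the exponential form
# approaches an upper limit B_max, while the linear form grows without bound.
def linear_buildup(t_dry, rate=2.0):
    """Load (kg/curb-km) after t_dry dry days, linear accumulation (assumed rate)."""
    return rate * t_dry

def exponential_buildup(t_dry, b_max=20.0, k=0.25):
    """Load approaching b_max with first-order rate k (1/day), both assumed."""
    return b_max * (1.0 - np.exp(-k * t_dry))

for t_dry in (1, 3, 7, 14, 30):
    print(f"{t_dry:2d} dry days: linear {linear_buildup(t_dry):6.1f}  "
          f"exponential {exponential_buildup(t_dry):6.1f}")
```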

  13. Standard model parameters and the search for new physics

    SciTech Connect

    Marciano, W.J.

    1988-04-01

    In these lectures, my aim is to present an up-to-date status report on the standard model and some key tests of electroweak unification. Within that context, I also discuss how and where hints of new physics may emerge. To accomplish those goals, I have organized my presentation as follows: I discuss the standard model parameters with particular emphasis on the gauge coupling constants and vector boson masses. Examples of new physics appendages are also briefly commented on. In addition, because these lectures are intended for students and thus somewhat pedagogical, I have included an appendix on dimensional regularization and a simple computational example that employs that technique. Next, I focus on weak charged current phenomenology. Precision tests of the standard model are described and up-to-date values for the Cabibbo-Kobayashi-Maskawa (CKM) mixing matrix parameters are presented. Constraints implied by those tests for a 4th generation, supersymmetry, extra Z′ bosons, and compositeness are also discussed. I discuss weak neutral current phenomenology and the extraction of sin²θ_W from experiment. The results presented there are based on a recently completed global analysis of all existing data. I have chosen to concentrate that discussion on radiative corrections, the effect of a heavy top quark mass, and implications for grand unified theories (GUTs). The potential for further experimental progress is also commented on. I depart from the narrowest version of the standard model and discuss effects of neutrino masses and mixings. I have chosen to concentrate on oscillations, the Mikheyev-Smirnov-Wolfenstein (MSW) effect, and electromagnetic properties of neutrinos. On the latter topic, I will describe some recent work on resonant spin-flavor precession. Finally, I conclude with a prospectus on hopes for the future. 76 refs.

  14. An automatic and effective parameter optimization method for model tuning

    NASA Astrophysics Data System (ADS)

    Zhang, T.; Li, L.; Lin, Y.; Xue, W.; Xie, F.; Xu, H.; Huang, X.

    2015-11-01

    Physical parameterizations in general circulation models (GCMs), having various uncertain parameters, greatly impact model performance and model climate sensitivity. Traditional manual and empirical tuning of these parameters is time-consuming and ineffective. In this study, a "three-step" methodology is proposed to automatically and effectively obtain the optimum combination of some key parameters in cloud and convective parameterizations according to a comprehensive set of objective evaluation metrics. Different from traditional optimization methods, two extra steps, one determining the model's sensitivity to the parameters and the other choosing the optimum initial values for those sensitive parameters, are introduced before the downhill simplex method. This new method reduces the number of parameters to be tuned and accelerates the convergence of the downhill simplex method. Atmospheric GCM simulation results show that the optimum combination of these parameters determined using this method improves the model's overall performance by 9%. The proposed methodology and software framework can be easily applied to other GCMs to speed up the model development process, especially regarding unavoidable comprehensive parameter tuning during the model development stage.
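
    A toy version of the "screen, then tune" workflow described above is sketched below: a one-at-a-time perturbation identifies the parameters the evaluation metric is sensitive to, and only those are passed to a downhill simplex (Nelder-Mead) search. The quadratic "skill" function and the thresholds are placeholders for a real model-evaluation metric.

```python
import numpy as np
from scipy.optimize import minimize

# Sketch: screen for sensitive parameters, then tune only those with Nelder-Mead.
def skill(p):
    """Hypothetical evaluation metric (lower is better) over four parameters."""
    return (3.0 * (p[0] - 1.2) ** 2 + 2.0 * (p[1] + 0.5) ** 2
            + 1e-4 * p[2] ** 2 + 1e-4 * p[3] ** 2)      # p[2], p[3] barely matter

p0 = np.zeros(4)
base = skill(p0)
# One-at-a-time perturbation to measure sensitivity of the metric.
sens = np.array([abs(skill(p0 + 0.5 * np.eye(4)[i]) - base) for i in range(4)])
sensitive = np.where(sens > 0.05 * sens.max())[0]
print("sensitive parameter indices:", sensitive)

def reduced(q):
    """Evaluate the metric varying only the sensitive parameters."""
    p = p0.copy()
    p[sensitive] = q
    return skill(p)

result = minimize(reduced, x0=p0[sensitive], method="Nelder-Mead")
print("optimized sensitive parameters:", np.round(result.x, 3))
```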

  15. Additional Research Needs to Support the GENII Biosphere Models

    SciTech Connect

    Napier, Bruce A.; Snyder, Sandra F.; Arimescu, Carmen

    2013-11-30

    In the course of evaluating the current parameter needs for the GENII Version 2 code (Snyder et al. 2013), areas of possible improvement for both the data and the underlying models have been identified. As the data review was implemented, PNNL staff identified areas where the models can be improved both to accommodate the locally significant pathways identified and also to incorporate newer models. The areas are general data needs for the existing models and improved formulations for the pathway models. It is recommended that priorities be set by NRC staff to guide selection of the most useful improvements in a cost-effective manner. Suggestions are made based on relatively easy and inexpensive changes, and longer-term more costly studies. In the short term, there are several improved model formulations that could be applied to the GENII suite of codes to make them more generally useful:
    • Implementation of the separation of the translocation and weathering processes
    • Implementation of an improved model for carbon-14 from non-atmospheric sources
    • Implementation of radon exposure pathway models
    • Development of a KML processor for the output report generator module, so that data calculated on a grid could be superimposed upon digital maps for easier presentation and display
    • Implementation of marine mammal models (manatees, seals, walrus, whales, etc.)
    Data needs in the longer term require extensive (and potentially expensive) research. Before picking any one radionuclide or food type, NRC staff should perform an in-house review of current and anticipated environmental analyses to select “dominant” radionuclides of interest to allow setting of cost-effective priorities for radionuclide- and pathway-specific research. These include:
    • soil-to-plant uptake studies for oranges and other citrus fruits, and
    • development of models for evaluation of radionuclide concentration in highly-processed foods such as oils and sugars.
    Finally, renewed

  16. Unscented Kalman filter with parameter identifiability analysis for the estimation of multiple parameters in kinetic models.

    PubMed

    Baker, Syed Murtuza; Poskar, C Hart; Junker, Björn H

    2011-01-01

    In systems biology, experimentally measured parameters are not always available, necessitating the use of computationally based parameter estimation. In order to rely on estimated parameters, it is critical to first determine which parameters can be estimated for a given model and measurement set. This is done with parameter identifiability analysis. A kinetic model of the sucrose accumulation in the sugar cane culm tissue developed by Rohwer et al. was taken as a test case model. What differentiates this approach is the integration of an orthogonal-based local identifiability method into the unscented Kalman filter (UKF), rather than using the more common observability-based method which has inherent limitations. It also introduces a variable step size based on the system uncertainty of the UKF during the sensitivity calculation. This method identified 10 out of 12 parameters as identifiable. These ten parameters were estimated using the UKF, which was run 97 times. Throughout the repetitions the UKF proved to be more consistent than the estimation algorithms used for comparison. PMID:21989173
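
    A rough sketch of an orthogonal, sensitivity-based identifiability ranking of the kind referred to above, applied here to a toy two-exponential model rather than the Rohwer kinetic model; the selection loop (pick the column of the sensitivity matrix with the largest norm, project it out of the remaining columns, repeat) follows the general orthogonal method, while the model, parameter values and stopping threshold are invented for illustration.

        import numpy as np

        def model(t, p):
            # Toy observable: sum of two exponential decays (stand-in for a kinetic model).
            a1, k1, a2, k2 = p
            return a1 * np.exp(-k1 * t) + a2 * np.exp(-k2 * t)

        def sensitivity_matrix(t, p, rel_step=1e-5):
            """Finite-difference sensitivities d(output)/d(parameter), scaled by the parameter value."""
            base = model(t, p)
            S = np.empty((t.size, len(p)))
            for j, pj in enumerate(p):
                dp = rel_step * max(abs(pj), 1e-12)
                pp = np.array(p, dtype=float)
                pp[j] += dp
                S[:, j] = (model(t, pp) - base) / dp * pj   # relative (scaled) sensitivity
            return S

        def orthogonal_ranking(S, tol=0.05):
            """Rank parameters; stop when the largest residual column norm drops below
            tol times the largest original column norm."""
            cutoff = tol * np.linalg.norm(S, axis=0).max()
            remaining = list(range(S.shape[1]))
            residual = S.copy()
            ranked = []
            while remaining:
                norms = np.linalg.norm(residual[:, remaining], axis=0)
                if norms.max() < cutoff:
                    break
                best = remaining[int(np.argmax(norms))]
                ranked.append(best)
                remaining.remove(best)
                q = residual[:, best] / np.linalg.norm(residual[:, best])
                for j in remaining:                      # project the selected direction out
                    residual[:, j] -= q * (q @ residual[:, j])
            return ranked, remaining                     # identifiable, non-identifiable

        t = np.linspace(0.0, 10.0, 50)
        p_nominal = [2.0, 0.8, 1.0, 0.75]                # k1 and k2 nearly equal -> correlated
        identifiable, unidentifiable = orthogonal_ranking(sensitivity_matrix(t, p_nominal))
        print("identifiable parameter indices  :", identifiable)
        print("unidentifiable parameter indices:", unidentifiable)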

  17. Relationship between Cole-Cole model parameters and spectral decomposition parameters derived from SIP data

    NASA Astrophysics Data System (ADS)

    Weigand, M.; Kemna, A.

    2016-06-01

    Spectral induced polarization (SIP) data are commonly analysed using phenomenological models. Among these models the Cole-Cole (CC) model is the most popular choice to describe the strength and frequency dependence of distinct polarization peaks in the data. More flexibility regarding the shape of the spectrum is provided by decomposition schemes. Here the spectral response is decomposed into individual responses of a chosen elementary relaxation model, mathematically acting as kernel in the involved integral, based on a broad range of relaxation times. A frequently used kernel function is the Debye model, but also the CC model with some other a priori specified frequency dispersion (e.g. Warburg model) has been proposed as kernel in the decomposition. The different decomposition approaches in use, also including conductivity and resistivity formulations, pose the question to what degree the integral spectral parameters typically derived from the obtained relaxation time distribution are biased by the approach itself. Based on synthetic SIP data sampled from an ideal CC response, we here investigate how the two most important integral output parameters deviate from the corresponding CC input parameters. We find that the total chargeability may be underestimated by up to 80 per cent and the mean relaxation time may be off by up to three orders of magnitude relative to the original values, depending on the frequency dispersion of the analysed spectrum and the proximity of its peak to the frequency range limits considered in the decomposition. We conclude that a quantitative comparison of SIP parameters across different studies, or the adoption of parameter relationships from other studies, for example when transferring laboratory results to the field, is only possible on the basis of a consistent spectral analysis procedure. This is particularly important when comparing effective CC parameters with spectral parameters derived from decomposition results.
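
    A compact sketch of the two ingredients discussed above: a synthetic spectrum sampled from an ideal Cole-Cole response, and a Debye decomposition of that spectrum on a fixed relaxation-time grid from which the total chargeability and the mean relaxation time are computed; the frequency range, relaxation-time grid and use of plain non-negative least squares (no regularization) are simplifications chosen for illustration.

        import numpy as np
        from scipy.optimize import nnls

        def cole_cole(f, rho0=100.0, m=0.1, tau=0.01, c=0.5):
            """Complex resistivity of a single Cole-Cole dispersion."""
            iwt = (1j * 2 * np.pi * f * tau) ** c
            return rho0 * (1.0 - m * (1.0 - 1.0 / (1.0 + iwt)))

        # Synthetic "measured" spectrum sampled from an ideal Cole-Cole response
        f = np.logspace(-2, 3, 60)                       # Hz
        rho = cole_cole(f)

        # Debye decomposition: rho(w) = rho0 * (1 - sum_k m_k * iw*tau_k / (1 + iw*tau_k))
        tau_grid = np.logspace(-5, 2, 80)
        iwt_grid = 1j * 2 * np.pi * f[:, None] * tau_grid
        kernel = iwt_grid / (1.0 + iwt_grid)
        rho0_est = abs(rho[0])                           # low-frequency limit as DC estimate

        # Stack real and imaginary parts so the non-negative least-squares fit is real-valued
        A = np.vstack([kernel.real, kernel.imag])
        b = np.concatenate([1.0 - rho.real / rho0_est, -rho.imag / rho0_est])
        m_k, _ = nnls(A, b)

        m_total = m_k.sum()                                           # total chargeability
        tau_mean = np.exp(np.sum(m_k * np.log(tau_grid)) / m_total)   # log-weighted mean relaxation time
        print(f"recovered total chargeability : {m_total:.3f}  (Cole-Cole input 0.100)")
        print(f"recovered mean relaxation time: {tau_mean:.2e} s  (Cole-Cole input 1.0e-02 s)")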

  18. Addition Table of Colours: Additive and Subtractive Mixtures Described Using a Single Reasoning Model

    ERIC Educational Resources Information Center

    Mota, A. R.; Lopes dos Santos, J. M. B.

    2014-01-01

    Students' misconceptions concerning colour phenomena and the apparent complexity of the underlying concepts--due to the different domains of knowledge involved--make its teaching very difficult. We have developed and tested a teaching device, the addition table of colours (ATC), that encompasses additive and subtractive mixtures in a single…

  19. Tube-Load Model Parameter Estimation for Monitoring Arterial Hemodynamics

    PubMed Central

    Zhang, Guanqun; Hahn, Jin-Oh; Mukkamala, Ramakrishna

    2011-01-01

    A useful model of the arterial system is the uniform, lossless tube with parametric load. This tube-load model is able to account for wave propagation and reflection (unlike lumped-parameter models such as the Windkessel) while being defined by only a few parameters (unlike comprehensive distributed-parameter models). As a result, the parameters may be readily estimated by accurate fitting of the model to available arterial pressure and flow waveforms so as to permit improved monitoring of arterial hemodynamics. In this paper, we review tube-load model parameter estimation techniques that have appeared in the literature for monitoring wave reflection, large artery compliance, pulse transit time, and central aortic pressure. We begin by motivating the use of the tube-load model for parameter estimation. We then describe the tube-load model, its assumptions and validity, and approaches for estimating its parameters. We next summarize the various techniques and their experimental results while highlighting their advantages over conventional techniques. We conclude the review by suggesting future research directions and describing potential applications. PMID:22053157

  20. Superposition-additive approach: thermodynamic parameters of clusterization of monosubstituted alkanes at the air/water interface.

    PubMed

    Vysotsky, Yu B; Belyaeva, E A; Fomina, E S; Fainerman, V B; Aksenenko, E V; Vollhardt, D; Miller, R

    2011-12-21

    The applicability of the superposition-additive approach for the calculation of the thermodynamic parameters of formation and atomization of conjugate systems, their dipole electric polarisabilities, molecular diamagnetic susceptibilities, π-electron circular currents, as well as for the estimation of the thermodynamic parameters of substituted alkanes, was demonstrated earlier. Now the applicability of the superposition-additive approach for the description of clusterization of fatty alcohols, thioalcohols, amines, carboxylic acids at the air/water interface is studied. Two superposition-additive schemes are used that ensure the maximum superimposition of the graphs of the considered molecular structures including the intermolecular CH-HC interactions within the clusters. The thermodynamic parameters of clusterization are calculated for dimers, trimers and tetramers. The calculations are based on the values of enthalpy, entropy and Gibbs' energy of clusterization calculated earlier using the semiempirical quantum chemical PM3 method. It is shown that the proposed approach is capable of reproducing the previously calculated values with sufficient accuracy. PMID:22042000

  1. Extraction of exposure modeling parameters of thick resist

    NASA Astrophysics Data System (ADS)

    Liu, Chi; Du, Jinglei; Liu, Shijie; Duan, Xi; Luo, Boliang; Zhu, Jianhua; Guo, Yongkang; Du, Chunlei

    2004-12-01

    Experimental and theoretical analysis indicates that many nonlinear factors existing in the exposure process of thick resist can markedly affect the PAC concentration distribution in the resist. These effects should therefore be fully considered in the exposure model of thick resist, and exposure parameters should not be treated as constants because there exists a certain relationship between the parameters and the resist thickness. In this paper, an enhanced Dill model for the exposure process of thick resist is presented, and the experimental setup for measuring exposure parameters of thick resist is developed. We measure the intensity transmittance curve of thick resist AZ4562 under different processing conditions, and extract the corresponding exposure parameters based on the experiment results and the calculations from the beam propagation matrix of the resist films. With these modified modeling parameters and the enhanced Dill model, simulation of the thick-resist exposure process can be effectively developed in the future.
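
    A small finite-difference sketch of the standard Dill exposure model (dI/dz = -I(A·M + B), dM/dt = -C·I·M) that the enhanced model described above builds on; the A, B, C values, resist thickness and dose are placeholders, and the thickness dependence of the parameters discussed in the paper is not included.

        import numpy as np

        # Dill exposure parameters (illustrative values, not measured AZ4562 data)
        A, B, C = 0.8, 0.05, 0.012        # A, B in 1/um; C in cm^2/mJ
        thickness = 20.0                  # resist thickness, um
        I0 = 10.0                         # incident intensity, mW/cm^2
        dose = 300.0                      # total exposure dose, mJ/cm^2

        nz, nt = 200, 300
        dz = thickness / nz
        dt = (dose / I0) / nt             # exposure time divided into nt steps

        M = np.ones(nz)                   # normalized PAC concentration M(z), initially 1
        for _ in range(nt):
            # Beer-Lambert attenuation with bleachable (A*M) and constant (B) absorption
            I = np.empty(nz)
            I[0] = I0
            for k in range(1, nz):
                I[k] = I[k - 1] * np.exp(-(A * M[k - 1] + B) * dz)
            # First-order photochemical decay of the PAC: dM/dt = -C * I * M
            M *= np.exp(-C * I * dt)

        print("PAC remaining at the resist top   :", round(float(M[0]), 3))
        print("PAC remaining at the resist bottom:", round(float(M[-1]), 3))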

  2. Additive Manufacturing of Medical Models--Applications in Rhinology.

    PubMed

    Raos, Pero; Klapan, Ivica; Galeta, Tomislav

    2015-09-01

    In this paper we introduce guidelines and suggestions for the use of 3D image-processing software in head pathology diagnostics, together with procedures for obtaining physical medical models by additive manufacturing/rapid prototyping techniques, with the aims of improving surgical performance, maximizing its safety and speeding the postoperative recovery of patients. This approach has been verified in two case reports. In the treatment we used intelligent classifier schemes for abnormal patterns, using a computer-based system for 3D-virtual and endoscopic assistance in rhinology, with appropriate visualization of anatomy and pathology within the nose, paranasal sinuses, and skull base area. PMID:26898064

  3. Estimation of Kalman filter model parameters from an ensemble of tests

    NASA Technical Reports Server (NTRS)

    Gibbs, B. P.; Haley, D. R.; Levine, W.; Porter, D. W.; Vahlberg, C. J.

    1980-01-01

    A methodology for estimating initial mean and covariance parameters in a Kalman filter model from an ensemble of nonidentical tests is presented. In addition, the problem of estimating time constants and process noise levels is addressed. Practical problems such as developing and validating inertial instrument error models from laboratory test data or developing error models of individual phases of a test are generally considered.

  4. Software reliability: Additional investigations into modeling with replicated experiments

    NASA Technical Reports Server (NTRS)

    Nagel, P. M.; Schotz, F. M.; Skirvan, J. A.

    1984-01-01

    The effects of programmer experience level, different program usage distributions, and programming languages are explored. All these factors affect performance, and some tentative relational hypotheses are presented. An analytic framework for replicated and non-replicated (traditional) software experiments is presented. A method of obtaining an upper bound on the error rate of the next error is proposed. The method was validated empirically by comparing forecasts with actual data. In all 14 cases the bound exceeded the observed parameter, albeit somewhat conservatively. Two other forecasting methods are proposed and compared to observed results. Although it is demonstrated within this framework that stages are neither independent nor exponentially distributed, empirical estimates show that the exponential assumption is nearly valid for all but the extreme tails of the distribution. Except for the dependence in the stage probabilities, Cox's model approximates to a degree what is being observed.

  5. Identification of parameters of discrete-continuous models

    SciTech Connect

    Cekus, Dawid; Warys, Pawel

    2015-03-10

    In the paper, the parameters of a discrete-continuous model have been identified on the basis of experimental investigations and the formulation of an optimization problem. The discrete-continuous model represents a cantilever stepped Timoshenko beam. The mathematical model has been formulated and solved according to the Lagrange multiplier formalism. Optimization has been based on a genetic algorithm. The stages of the presented procedure make the identification of arbitrary parameters of discrete-continuous systems possible.

  6. Inverse estimation of parameters for an estuarine eutrophication model

    SciTech Connect

    Shen, J.; Kuo, A.Y.

    1996-11-01

    An inverse model of an estuarine eutrophication model with eight state variables is developed. It provides a framework to estimate parameter values of the eutrophication model by assimilation of concentration data of these state variables. The inverse model using the variational technique in conjunction with a vertical two-dimensional eutrophication model is general enough to be applicable to aid model calibration. The formulation is illustrated by conducting a series of numerical experiments for the tidal Rappahannock River, a western shore tributary of the Chesapeake Bay. The numerical experiments of short-period model simulations with different hypothetical data sets and long-period model simulations with limited hypothetical data sets demonstrated that the inverse model can be satisfactorily used to estimate parameter values of the eutrophication model. The experiments also showed that the inverse model is useful to address some important questions, such as uniqueness of the parameter estimation and data requirements for model calibration. Because of the complexity of the eutrophication system, degradation of the speed of convergence may occur. Two major factors causing this degradation are cross effects among parameters and the multiple scales involved in the parameter system.

  7. Multiscale Modeling of Powder Bed–Based Additive Manufacturing

    NASA Astrophysics Data System (ADS)

    Markl, Matthias; Körner, Carolin

    2016-07-01

    Powder bed fusion processes are additive manufacturing technologies that are expected to induce the third industrial revolution. Components are built up layer by layer in a powder bed by selectively melting confined areas, according to sliced 3D model data. This technique allows for manufacturing of highly complex geometries hardly machinable with conventional technologies. However, the underlying physical phenomena are sparsely understood and difficult to observe during processing. Therefore, an intensive and expensive trial-and-error principle is applied to produce components with the desired dimensional accuracy, material characteristics, and mechanical properties. This review presents numerical modeling approaches on multiple length scales and timescales to describe different aspects of powder bed fusion processes. In combination with tailored experiments, the numerical results enlarge the process understanding of the underlying physical mechanisms and support the development of suitable process strategies and component topologies.

  8. Incorporation of shuttle CCT parameters in computer simulation models

    NASA Technical Reports Server (NTRS)

    Huntsberger, Terry

    1990-01-01

    Computer simulations of shuttle missions have become increasingly important during recent years. The complexity of mission planning for satellite launch and repair operations which usually involve EVA has led to the need for accurate visibility and access studies. The PLAID modeling package used in the Man-Systems Division at Johnson currently has the necessary capabilities for such studies. In addition, the modeling package is used for spatial location and orientation of shuttle components for film overlay studies such as the current investigation of the hydrogen leaks found in the shuttle flight. However, there are a number of differences between the simulation studies and actual mission viewing. These include image blur caused by the finite resolution of the CCT monitors in the shuttle and signal noise from the video tubes of the cameras. During the course of this investigation the shuttle CCT camera and monitor parameters are incorporated into the existing PLAID framework. These parameters are specific for certain camera/lens combinations and the SNR characteristics of these combinations are included in the noise models. The monitor resolution is incorporated using a Gaussian spread function such as that found in the screen phosphors in the shuttle monitors. Another difference between the traditional PLAID generated images and actual mission viewing lies in the lack of shadows and reflections of light from surfaces. Ray tracing of the scene explicitly includes the lighting and material characteristics of surfaces. The results of some preliminary studies using ray tracing techniques for the image generation process combined with the camera and monitor effects are also reported.
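
    A brief sketch of how monitor blur and camera noise of the kind described above might be layered onto a synthetic rendered frame, using a Gaussian spread function for the phosphor blur and additive Gaussian noise scaled to a target signal-to-noise ratio; the blur width and SNR value are illustrative, not the actual shuttle CCT characteristics.

        import numpy as np
        from scipy.ndimage import gaussian_filter

        rng = np.random.default_rng(0)

        # Synthetic "rendered" frame: a bright square on a dark background
        frame = np.zeros((256, 256))
        frame[96:160, 96:160] = 1.0

        # Monitor resolution modeled as a Gaussian spread function (phosphor blur)
        sigma_pixels = 2.0
        blurred = gaussian_filter(frame, sigma=sigma_pixels)

        # Camera video-tube noise expressed through a target signal-to-noise ratio
        target_snr_db = 30.0
        signal_power = np.mean(blurred ** 2)
        noise_power = signal_power / (10.0 ** (target_snr_db / 10.0))
        noisy = blurred + rng.normal(0.0, np.sqrt(noise_power), blurred.shape)

        print("blur sigma (pixels):", sigma_pixels)
        print("achieved SNR (dB)  :", round(10 * np.log10(signal_power / np.var(noisy - blurred)), 1))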

  9. Analysis of Hydrogeologic Conceptual Model and Parameter Uncertainty

    SciTech Connect

    Meyer, Philip D.; Nicholson, Thomas J.; Mishra, Srikanta

    2003-06-24

    A systematic methodology for assessing hydrogeologic conceptual model, parameter, and scenario uncertainties is being developed to support technical reviews of environmental assessments related to decommissioning of nuclear facilities. The first major task being undertaken is to produce a coupled parameter and conceptual model uncertainty assessment methodology. This task is based on previous studies that have primarily dealt individually with these two types of uncertainties. Conceptual model uncertainty analysis is based on the existence of alternative conceptual models that are generated using a set of clearly stated guidelines targeted at the needs of NRC staff. Parameter uncertainty analysis makes use of generic site characterization data as well as site-specific characterization and monitoring data to evaluate parameter uncertainty in each of the alternative conceptual models. Propagation of parameter uncertainty will be carried out through implementation of a general stochastic model of groundwater flow and transport in the saturated and unsaturated zones. Evaluation of prediction uncertainty will make use of Bayesian model averaging and visualization of model results. The goal of this study is to develop a practical tool to quantify uncertainties in the conceptual model and parameters identified in performance assessments.

  10. Computationally Inexpensive Identification of Non-Informative Model Parameters

    NASA Astrophysics Data System (ADS)

    Mai, J.; Cuntz, M.; Kumar, R.; Zink, M.; Samaniego, L. E.; Schaefer, D.; Thober, S.; Rakovec, O.; Musuuza, J. L.; Craven, J. R.; Spieler, D.; Schrön, M.; Prykhodko, V.; Dalmasso, G.; Langenberg, B.; Attinger, S.

    2014-12-01

    Sensitivity analysis is used, for example, to identify parameters which induce the largest variability in model output and are thus informative during calibration. Variance-based techniques are employed for this purpose, which unfortunately require a large number of model evaluations and are thus impractical for complex environmental models. We therefore developed a computationally inexpensive screening method, based on Elementary Effects, that automatically separates informative and non-informative model parameters. The method was tested using the mesoscale hydrologic model (mHM) with 52 parameters. The model was applied in three European catchments with different hydrological characteristics, i.e. Neckar (Germany), Sava (Slovenia), and Guadalquivir (Spain). The method identified the same informative parameters as the standard Sobol method but with less than 1% of model runs. In Germany and Slovenia, 22 of 52 parameters were informative mostly in the formulations of evapotranspiration, interflow and percolation. In Spain 19 of 52 parameters were informative with an increased importance of soil parameters. We showed further that Sobol' indexes calculated for the subset of informative parameters are practically the same as Sobol' indexes before the screening but the number of model runs was reduced by more than 50%. The model mHM was then calibrated twice in the three test catchments. First, all 52 parameters were taken into account, and then only the informative parameters were calibrated while all others were kept fixed. The Nash-Sutcliffe efficiencies were 0.87 and 0.83 in Germany, 0.89 and 0.88 in Slovenia, and 0.86 and 0.85 in Spain, respectively. This minor loss of at most 4% in model performance comes along with a substantial decrease of at least 65% in model evaluations. In summary, we propose an efficient screening method to identify non-informative model parameters that can be discarded during further applications. We have shown that sensitivity
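
    An abbreviated sketch of the Elementary Effects (Morris) screening idea underlying the method above, with a small analytic test function standing in for mHM; the trajectory construction is the basic random version (no optimized trajectories), and the number of trajectories, levels and the informative/non-informative threshold are illustrative only.

        import numpy as np

        rng = np.random.default_rng(42)

        def model(x):
            # Toy model: two influential inputs, one weakly influential, one inactive
            return 3.0 * x[0] + 2.0 * x[1] ** 2 + 0.1 * np.sin(2 * np.pi * x[2]) + 0.0 * x[3]

        def elementary_effects(model, n_params, n_traj=20, levels=4):
            delta = levels / (2.0 * (levels - 1))
            grid = np.linspace(0.0, 1.0 - delta, levels // 2)   # admissible starting levels
            effects = [[] for _ in range(n_params)]
            for _ in range(n_traj):
                x = rng.choice(grid, size=n_params)             # random starting grid point
                y = model(x)
                for i in rng.permutation(n_params):             # change one factor at a time
                    x_new = x.copy()
                    x_new[i] += delta
                    y_new = model(x_new)
                    effects[i].append((y_new - y) / delta)
                    x, y = x_new, y_new
            mu_star = np.array([np.mean(np.abs(e)) for e in effects])
            sigma = np.array([np.std(e) for e in effects])
            return mu_star, sigma

        mu_star, sigma = elementary_effects(model, n_params=4)
        for i, (m, s) in enumerate(zip(mu_star, sigma)):
            flag = "informative" if m > 0.1 * mu_star.max() else "non-informative"
            print(f"parameter {i}: mu* = {m:5.2f}, sigma = {s:5.2f} -> {flag}")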

  11. Effect of Operating Parameters and Chemical Additives on Crystal Habit and Specific Cake Resistance of Zinc Hydroxide Precipitates

    SciTech Connect

    Alwin, Jennifer Louise

    1999-08-01

    The effect of process parameters and chemical additives on the specific cake resistance of zinc hydroxide precipitates was investigated. The ability of a slurry to be filtered is dependent upon the particle habit of the solid and the particle habit is influenced by certain process variables. The process variables studied include neutralization temperature, agitation type, and alkalinity source used for neutralization. Several commercially available chemical additives advertised to aid in solid/liquid separation were also examined in conjunction with hydroxide precipitation. A statistical analysis revealed that the neutralization temperature and the source of alkalinity were statistically significant in influencing the specific cake resistance of zinc hydroxide precipitates in this study. The type of agitation did not significantly affect the specific cake resistance of zinc hydroxide precipitates. The use of chemical additives in conjunction with hydroxide precipitation had a favorable effect on the filterability. The morphology of the hydroxide precipitates was analyzed using scanning electron microscopy.

  12. An automatic and effective parameter optimization method for model tuning

    NASA Astrophysics Data System (ADS)

    Zhang, T.; Li, L.; Lin, Y.; Xue, W.; Xie, F.; Xu, H.; Huang, X.

    2015-05-01

    Physical parameterizations in General Circulation Models (GCMs), having various uncertain parameters, greatly impact model performance and model climate sensitivity. Traditional manual and empirical tuning of these parameters is time-consuming and ineffective. In this study, a "three-step" methodology is proposed to automatically and effectively obtain the optimum combination of some key parameters in cloud and convective parameterizations according to a comprehensive set of objective evaluation metrics. Different from the traditional optimization methods, two extra steps, one determining parameter sensitivity and the other choosing the optimum initial values of the sensitive parameters, are introduced before the downhill simplex method to reduce the computational cost and improve the tuning performance. Atmospheric GCM simulation results show that the optimum combination of these parameters determined using this method is able to improve the model's overall performance by 9%. The proposed methodology and software framework can be easily applied to other GCMs to speed up the model development process, especially regarding unavoidable comprehensive parameter tuning during the model development stage.

  13. Modeling of permanent magnets: Interpretation of parameters obtained from the Jiles-Atherton hysteresis model

    NASA Astrophysics Data System (ADS)

    Lewis, L. H.; Gao, J.; Jiles, D. C.; Welch, D. O.

    1996-04-01

    The Jiles-Atherton theory is based on considerations of the dependence of energy dissipation within a magnetic material resulting from changes in its magnetization. The algorithm based on the theory yields five computed model parameters, MS, a, α, k, and c, which represent the saturation magnetization, the effective domain density, the mean exchange coupling between the effective domains, the flexibility of domain walls and energy-dissipative features in the microstructure, respectively. Model parameters were calculated from the algorithm and linked with the physical attributes of a set of three related melt-quenched permanent magnets based on the Nd2Fe14B composition. Measured magnetic parameters were used as inputs into the model to reproduce the experimental hysteresis curves. The results show that two of the calculated parameters, the saturation magnetization MS and the effective coercivity k, agree well with their directly determined analogs. The calculated a and α parameters provide support for the concept of increased intergranular exchange coupling upon die upsetting, and decreased intergranular exchange coupling with the addition of gallium.
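
    An illustrative integration of the Jiles-Atherton equations behind the five parameters named above (MS, a, α, k and c), sweeping the applied field to trace the initial curve and one descending branch; the numerical values are generic textbook-style placeholders, not the Nd2Fe14B parameters of the study.

        import numpy as np

        # Illustrative Jiles-Atherton parameters (not the melt-quenched magnet values)
        Ms, a, alpha, k, c = 1.6e6, 1100.0, 1.6e-3, 400.0, 0.2

        def m_anhysteretic(he):
            """Langevin anhysteretic magnetization, with a series expansion near zero."""
            x = he / a
            if abs(x) < 1e-4:
                return Ms * x / 3.0
            return Ms * (1.0 / np.tanh(x) - 1.0 / x)

        def sweep(h_values):
            """Integrate dM/dH along a prescribed field path (explicit Euler)."""
            M = M_irr = 0.0
            out = []
            for i in range(1, len(h_values)):
                H, dH = h_values[i], h_values[i] - h_values[i - 1]
                delta = 1.0 if dH > 0 else -1.0
                Man = m_anhysteretic(H + alpha * M)
                if (Man - M_irr) * delta < 0:       # Jiles' correction: no irreversible change
                    dMirr_dH = 0.0                  # against the direction of the field sweep
                else:
                    denom = k * delta - alpha * (Man - M_irr)
                    dMirr_dH = (Man - M_irr) / denom if abs(denom) > 1e-12 else 0.0
                M_irr += dMirr_dH * dH
                M = c * Man + (1.0 - c) * M_irr
                out.append((H, M))
            return out

        # Initial magnetization curve followed by a descending branch
        H_path = np.concatenate([np.linspace(0.0, 5e3, 500), np.linspace(5e3, -5e3, 1000)])
        loop = sweep(H_path)
        descending = loop[500:]
        H_c = abs(min(descending, key=lambda hm: abs(hm[1]))[0])
        print(f"approximate coercive field from the descending branch: {H_c:.0f} A/m")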

  14. Estimation of Graded Response Model Parameters Using MULTILOG.

    ERIC Educational Resources Information Center

    Baker, Frank B.

    1997-01-01

    Describes an idiosyncrasy of the MULTILOG (D. Thissen, 1991) parameter estimation process discovered during a simulation study involving the graded response model. A misordering reflected in boundary function location parameter estimates resulted in a large negative contribution to the true score followed by a large positive contribution. These…

  15. Parameter variability estimation using stochastic response surface model updating

    NASA Astrophysics Data System (ADS)

    Fang, Sheng-En; Zhang, Qiu-Hu; Ren, Wei-Xin

    2014-12-01

    From a practical point of view, uncertainties existing in structural parameters and measurements must be handled in order to provide reliable structural condition evaluations. In such cases, deterministic model updating loses its practicability, and a stochastic updating procedure should be employed to seek the statistical properties of parameters and responses. To date this topic has not been well investigated, on account of its greater theoretical complexity and the difficulty of solving the inverse problem once uncertainty analyses are involved. For this reason, this paper develops a stochastic model updating method for parameter variability estimation. Uncertain parameters and responses are correlated through stochastic response surface models, which are actually explicit polynomial chaos expansions based on Hermite polynomials. Then by establishing a stochastic inverse problem, parameter means and standard deviations are updated in a separate and successive way. For the purposes of problem simplification and optimization efficiency, in each updating iteration stochastic response surface models are reconstructed to avoid the construction and analysis of sensitivity matrices. Meanwhile, in the interest of investigating the effects of parameter variability on responses, a parameter sensitivity analysis method has been developed based on the derivation of polynomial chaos expansions. Lastly, the feasibility and reliability of the proposed methods have been validated using a numerical beam and then a set of nominally identical metal plates. After comparing with a perturbation method, it is found that the proposed method can estimate parameter variability with satisfactory accuracy, and the complexity of the inverse problem can be greatly reduced, resulting in cost-efficient optimization.
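
    A minimal illustration of the kind of Hermite polynomial chaos expansion mentioned above: a scalar response depending on a single standard-normal parameter is fitted with probabilists' Hermite polynomials by least squares, and the response mean and variance are then read directly off the coefficients; the response function, expansion order and sample size are invented for the example.

        import numpy as np
        from numpy.polynomial import hermite_e
        from math import factorial

        rng = np.random.default_rng(1)

        def response(xi):
            # Placeholder "structural response" driven by one standard-normal parameter
            return 2.0 + 0.5 * xi + 0.3 * xi ** 2

        # Sample the germ, evaluate the model, and fit the chaos coefficients by least squares
        order = 4
        xi = rng.standard_normal(500)
        y = response(xi)
        Psi = hermite_e.hermevander(xi, order)            # columns: He_0 ... He_order
        coeffs, *_ = np.linalg.lstsq(Psi, y, rcond=None)

        # Statistics follow from orthogonality of He_k under the Gaussian weight:
        # E[He_j He_k] = k! * delta_jk
        mean = coeffs[0]
        variance = sum(coeffs[k] ** 2 * factorial(k) for k in range(1, order + 1))
        print(f"PCE mean     : {mean:.3f}  (analytic 2.3)")
        print(f"PCE variance : {variance:.3f}  (analytic 0.43)")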

  16. Complexity, parameter sensitivity and parameter transferability in the modelling of floodplain inundation

    NASA Astrophysics Data System (ADS)

    Bates, P. D.; Neal, J. C.; Fewtrell, T. J.

    2012-12-01

    In this paper we consider two related questions. First, we address the issue of how much physical complexity is necessary in a model in order to simulate floodplain inundation to within validation data error. This is achieved through development of a single code/multiple physics hydraulic model (LISFLOOD-FP) where different degrees of complexity can be switched on or off. Different configurations of this code are applied to four benchmark test cases, and compared to the results of a number of industry standard models. Second, we address the issue of how parameter sensitivity and transferability change with increasing complexity using numerical experiments with models of different physical and geometric intricacy. Hydraulic models are a good example system with which to address such generic modelling questions as: (1) they have a strong physical basis; (2) there is only one set of equations to solve; (3) they require only topography and boundary conditions as input data; and (4) they typically require only a single free parameter, namely boundary friction. In terms of complexity required we show that for the problem of sub-critical floodplain inundation a number of codes of different dimensionality and resolution can be found to fit uncertain model validation data equally well, and that in this situation Occam's razor emerges as a useful logic to guide model selection. We also find that model skill usually improves more rapidly with increases in model spatial resolution than with increases in physical complexity, and that standard approaches to testing hydraulic models against laboratory data or analytical solutions may fail to identify this important fact. Lastly, we find that in benchmark testing studies significant differences can exist between codes with identical numerical solution techniques as a result of auxiliary choices regarding the specifics of model implementation that are frequently unreported by code developers. As a consequence, making sound

  17. Retrospective forecast of ETAS model with daily parameters estimate

    NASA Astrophysics Data System (ADS)

    Falcone, Giuseppe; Murru, Maura; Console, Rodolfo; Marzocchi, Warner; Zhuang, Jiancang

    2016-04-01

    We present a retrospective ETAS (Epidemic Type Aftershock Sequence) model based on the daily updating of free parameters during the background, learning and test phases of a seismic sequence. The idea was born after the 2011 Tohoku-Oki earthquake. The CSEP (Collaboratory for the Study of Earthquake Predictability) Center in Japan provided an appropriate testing benchmark for the five submitted 1-day models. Of all the models, only one was able to successfully predict the number of events that really happened. This result was verified using both the real-time and the revised catalogs. The main cause of the failure was the underestimation of the forecast number of events, due to the model parameters being kept fixed during the test. Moreover, the absence in the learning catalog of an event of magnitude comparable to the mainshock (M9.0), which drastically changed the seismicity in the area, made the learning parameters unsuitable for describing the real seismicity. As an example of this methodological development we show the evolution of the model parameters during the last two strong seismic sequences in Italy: the 2009 L'Aquila and the 2012 Reggio Emilia episodes. The performance of the model with daily updated parameters is compared with that of the same model in which the parameters remain fixed during the test period.
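
    A bare-bones sketch of the temporal ETAS conditional intensity that such daily re-estimation would update, λ(t) = μ + Σ K·exp(α(Mi − Mc))/(t − ti + c)^p summed over past events; the mini-catalog and the parameter values (μ, K, α, c, p) are invented, and the magnitude term follows the common Ogata formulation rather than the authors' specific implementation.

        import numpy as np

        def etas_intensity(t, catalog, mu, K, alpha, c, p, m_c):
            """Temporal ETAS conditional intensity at time t (days), given past events.

            catalog: list of (event_time_days, magnitude) pairs.
            """
            rate = mu
            for t_i, m_i in catalog:
                if t_i < t:
                    rate += K * np.exp(alpha * (m_i - m_c)) / (t - t_i + c) ** p
            return rate

        def expected_daily_counts(catalog, params, days):
            """Crude forecast: average the intensity over 24 sub-intervals of each day."""
            grid = np.linspace(0.0, 1.0, 25)[:-1] + 0.5 / 24.0
            return [sum(etas_intensity(d + g, catalog, **params) for g in grid) / 24.0
                    for d in days]

        # Invented mini-catalog: (time in days, magnitude); a mainshock at day 10
        catalog = [(2.1, 3.2), (5.4, 3.0), (10.0, 5.8), (10.3, 4.1), (11.2, 3.7)]
        params = dict(mu=0.2, K=0.05, alpha=1.6, c=0.01, p=1.1, m_c=3.0)

        for day, n in zip(range(11, 15), expected_daily_counts(catalog, params, range(11, 15))):
            print(f"day {day}: expected events/day = {n:.2f}")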

  18. Practical identifiability of biokinetic parameters of a model describing two-step nitrification in biofilms.

    PubMed

    Brockmann, D; Rosenwinkel, K-H; Morgenroth, E

    2008-10-15

    Parameter estimation and model calibration are key problems in the application of biofilm models in engineering practice, where a large number of model parameters need to be determined usually based on experimental data with only limited information content. In this article, identifiability of biokinetic parameters of a biofilm model describing two-step nitrification was evaluated based solely on bulk phase measurements of ammonium, nitrite, and nitrate. In addition to evaluating the impact of experimental conditions and available measurements, the influence of mass transport limitation within the biofilm and the initial parameter values on identifiability of biokinetic parameters was evaluated. Selection of parameters for identifiability analysis was based on global mean sensitivities while parameter identifiability was analyzed using local sensitivity functions. At most, four of the six most sensitive biokinetic parameters were identifiable from results of batch experiments at bulk phase dissolved oxygen concentrations of 0.8 or 5 mg O2/L. High linear dependences between the parameters of the subsets (KO2,AOB, muAOB) and (KO2,NOB, muNOB) resulted in reduced identifiability. Mass transport limitation within the biofilm did not influence the number of identifiable parameters but, in fact, decreased collinearity between parameters, especially for parameters that are otherwise correlated (e.g., muAOB and KO2,AOB, or muNOB and KO2,NOB). The choice of the initial parameter values had a significant impact on the identifiability of two parameter subsets, both including the parameters muAOB and KO2,AOB. Parameter subsets that did not include the subsets muAOB and KO2,AOB or muNOB and KO2,NOB were clearly identifiable independently of the choice of the initial parameter values. PMID:18512262

  19. Parameter Estimates in Differential Equation Models for Population Growth

    ERIC Educational Resources Information Center

    Winkel, Brian J.

    2011-01-01

    We estimate the parameters present in several differential equation models of population growth, specifically logistic growth models and two-species competition models. We discuss student-evolved strategies and offer "Mathematica" code for a gradient search approach. We use historical (1930s) data from microbial studies of the Russian biologist,…
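
    A short example, in Python rather than the Mathematica of the article, of estimating the logistic-growth parameters r and K from noisy observations by nonlinear least squares; the synthetic counts stand in for the historical microbial data referenced above.

        import numpy as np
        from scipy.optimize import curve_fit

        def logistic(t, r, K, N0=2.0):
            """Closed-form solution of dN/dt = r N (1 - N/K) with N(0) = N0."""
            return K / (1.0 + (K - N0) / N0 * np.exp(-r * t))

        # Synthetic "observed" population counts (stand-in for the historical data)
        rng = np.random.default_rng(3)
        t_obs = np.linspace(0, 24, 13)
        N_obs = logistic(t_obs, r=0.45, K=660.0) * (1 + 0.05 * rng.standard_normal(t_obs.size))

        # p0 has two entries, so only r and K are fitted; N0 stays at its default
        (r_hat, K_hat), cov = curve_fit(logistic, t_obs, N_obs, p0=[0.1, 500.0])
        r_err, K_err = np.sqrt(np.diag(cov))
        print(f"r = {r_hat:.3f} +/- {r_err:.3f}   K = {K_hat:.1f} +/- {K_err:.1f}")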

  20. Hunting for hydrogen: random structure searching and prediction of NMR parameters of hydrous wadsleyite

    PubMed Central

    Moran, Robert F.; McKay, David; Pickard, Chris J.; Berry, Andrew J.; Griffin, John M.

    2016-01-01

    The structural chemistry of materials containing low levels of nonstoichiometric hydrogen is difficult to determine, and producing structural models is challenging where hydrogen has no fixed crystallographic site. Here we demonstrate a computational approach employing ab initio random structure searching (AIRSS) to generate a series of candidate structures for hydrous wadsleyite (β-Mg2SiO4 with 1.6 wt% H2O), a high-pressure mineral proposed as a repository for water in the Earth's transition zone. Aligning with previous experimental work, we solely consider models with Mg3 (over Mg1, Mg2 or Si) vacancies. We adapt the AIRSS method by starting with anhydrous wadsleyite, removing a single Mg2+ and randomly placing two H+ in a unit cell model, generating 819 candidate structures. 103 geometries were then subjected to more accurate optimisation under periodic DFT. Using this approach, we find the most favourable hydration mechanism involves protonation of two O1 sites around the Mg3 vacancy. The formation of silanol groups on O3 or O4 sites (with loss of stable O1–H hydroxyls) coincides with an increase in total enthalpy. Importantly, the approach we employ allows observables such as NMR parameters to be computed for each structure. We consider hydrous wadsleyite (∼1.6 wt%) to be dominated by protonated O1 sites, with O3/O4–H silanol groups present as defects, a model that maps well onto experimental studies at higher levels of hydration (J. M. Griffin et al., Chem. Sci., 2013, 4, 1523). The AIRSS approach adopted herein provides the crucial link between atomic-scale structure and experimental studies. PMID:27020937

  1. Optimal parameter and uncertainty estimation of a land surface model: Sensitivity to parameter ranges and model complexities

    NASA Astrophysics Data System (ADS)

    Xia, Youlong; Yang, Zong-Liang; Stoffa, Paul L.; Sen, Mrinal K.

    2005-01-01

    Most previous land-surface model calibration studies have defined global ranges for their parameters to search for optimal parameter sets. Little work has been conducted to study the impacts of realistic versus global ranges as well as model complexities on the calibration and uncertainty estimates. The primary purpose of this paper is to investigate these impacts by applying Bayesian Stochastic Inversion (BSI) to the Chameleon Surface Model (CHASM). The CHASM was designed to explore the general aspects of land-surface energy balance representation within a common modeling framework that can be run from a simple energy balance formulation to a complex mosaic-type structure. The BSI is an uncertainty estimation technique based on Bayes' theorem, importance sampling, and very fast simulated annealing. The model forcing data and surface flux data were collected at seven sites representing a wide range of climate and vegetation conditions. For each site, four experiments were performed with simple and complex CHASM formulations as well as realistic and global parameter ranges. Twenty-eight experiments were conducted and 50 000 parameter sets were used for each run. The results show that the use of global and realistic ranges gives similar simulations for both modes for most sites, but the global ranges tend to produce some unreasonable optimal parameter values. Comparison of simple and complex modes shows that the simple mode has more parameters with unreasonable optimal values. The choice of parameter ranges and model complexity has significant impacts on frequency distribution of parameters, marginal posterior probability density functions, and estimates of uncertainty of simulated sensible and latent heat fluxes. Comparison between model complexity and parameter ranges shows that the former has more significant impacts on parameter and uncertainty estimations.

  2. [Critique of the additive model of the randomized controlled trial].

    PubMed

    Boussageon, Rémy; Gueyffier, François; Bejan-Angoulvant, Theodora; Felden-Dominiak, Géraldine

    2008-01-01

    Randomized, double-blind, placebo-controlled clinical trials are currently the best way to demonstrate the clinical effectiveness of drugs. Its methodology relies on the method of difference (John Stuart Mill), through which the observed difference between two groups (drug vs placebo) can be attributed to the pharmacological effect of the drug being tested. However, this additive model can be questioned in the event of statistical interactions between the pharmacological and the placebo effects. Evidence in different domains has shown that the placebo effect can influence the effect of the active principle. This article evaluates the methodological, clinical and epistemological consequences of this phenomenon. Topics treated include extrapolating results, accounting for heterogeneous results, demonstrating the existence of several factors in the placebo effect, the necessity to take these factors into account for given symptoms or pathologies, as well as the problem of the "specific" effect. PMID:18387273

  3. Determination of stellar parameters using binary system models

    NASA Astrophysics Data System (ADS)

    Blay, Georgina; Lovekin, Catherine

    2015-12-01

    Stellar parameters can be constrained more tightly with binary systems than can typically be done with single stars. We used a freely available binary fitting code to determine the best-fitting parameters of a collection of potential eclipsing binary systems observed with the Kepler satellite. These model fits constrain the mass ratio, radius ratio, surface brightness ratio, and the orbital inclination of both stars in the binary system. The frequencies of any stellar pulsations can then be determined and used to constrain asteroseismic models.

  4. Agricultural and Environmental Input Parameters for the Biosphere Model

    SciTech Connect

    K. Rasmuson; K. Rautenstrauch

    2004-09-14

    This analysis is one of 10 technical reports that support the Environmental Radiation Model for Yucca Mountain Nevada (ERMYN) (i.e., the biosphere model). It documents development of agricultural and environmental input parameters for the biosphere model, and supports the use of the model to develop biosphere dose conversion factors (BDCFs). The biosphere model is one of a series of process models supporting the total system performance assessment (TSPA) for the repository at Yucca Mountain. The ERMYN provides the TSPA with the capability to perform dose assessments. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows the interrelationships between the major activities and their products (the analysis and model reports) that were planned in ''Technical Work Plan for Biosphere Modeling and Expert Support'' (BSC 2004 [DIRS 169573]). The ''Biosphere Model Report'' (BSC 2004 [DIRS 169460]) describes the ERMYN and its input parameters.

  5. Parameters and pitfalls to consider in the conduct of food additive research, Carrageenan as a case study.

    PubMed

    Weiner, Myra L

    2016-01-01

    This paper provides guidance on the conduct of new in vivo and in vitro studies on high molecular weight food additives, with carrageenan, the widely used food additive, as a case study. It is important to understand the physical/chemical properties and to verify the identity/purity, molecular weight and homogeneity/stability of the additive in the vehicle for oral delivery. The strong binding of CGN to protein in rodent chow or infant formula results in no gastrointestinal tract exposure to free CGN. It is recommended that doses of high Mw non-caloric, non-nutritive additives not exceed 5% by weight of total solid diet to avoid potential nutritional effects. Addition of some high Mw additives at high concentrations to liquid nutritional supplements increases viscosity and may affect palatability, caloric intake and body weight gain. In in vitro studies, the use of well-characterized, relevant cell types and the appropriate composition of the culture media are necessary for proper conduct and interpretation. CGN is bound to media protein and not freely accessible to cells in vitro. Interpretation of new studies on food additives should consider the interaction of food additives with the vehicle components and the appropriateness of the animal or cell model and dose-response. PMID:26615870

  6. Constitutive modeling of ascending thoracic aortic aneurysms using microstructural parameters.

    PubMed

    Pasta, Salvatore; Phillippi, Julie A; Tsamis, Alkiviadis; D'Amore, Antonio; Raffa, Giuseppe M; Pilato, Michele; Scardulla, Cesare; Watkins, Simon C; Wagner, William R; Gleason, Thomas G; Vorp, David A

    2016-02-01

    Ascending thoracic aortic aneurysm (ATAA) has been associated with diminished biomechanical strength and disruption in the collagen fiber microarchitecture. Additionally, the congenital bicuspid aortic valve (BAV) leads to a distinct extracellular matrix structure that may be related to ATAA development at an earlier age than degenerative aneurysms arising in patients with the morphologically normal tricuspid aortic valve (TAV). The purpose of this study was to model the fiber-reinforced mechanical response of ATAA specimens from patients with either BAV or TAV. This was achieved by combining image-analysis-derived parameters of collagen fiber dispersion and alignment with tensile testing data. Then, numerical simulations were performed to assess the role of the anisotropic constitutive formulation on the wall stress distribution of the aneurysmal aorta. Results indicate that both BAV ATAA and TAV ATAA have altered collagen fiber architecture in the medial plane of experimentally-dissected aortic tissues when compared to normal ascending aortic specimens. The study findings highlight that differences in the collagen fiber distribution mostly influence the resulting wall stress distribution rather than the peak stress. We conclude that fiber-reinforced constitutive modeling that takes into account the collagen fiber defect inherent to the aneurysmal ascending aorta is paramount for accurate finite element predictions and ultimately for biomechanical-based indicators to reliably distinguish the more from the less 'malignant' ATAAs. PMID:26669606
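
    A small numerical sketch of a fiber-reinforced strain-energy function of the Gasser-Ogden-Holzapfel type commonly used for this kind of dispersion-aware modeling (the abstract does not name the specific form used), evaluated for incompressible uniaxial stretch along the mean fiber direction; the material constants and dispersion values are purely illustrative.

        import numpy as np

        def goh_energy(stretch, mu, k1, k2, kappa):
            """Strain energy (per unit volume) of a GOH-type material under incompressible
            uniaxial stretch applied along the mean collagen-fiber direction.

            kappa is the fiber dispersion parameter: 0 = perfectly aligned, 1/3 = isotropic.
            """
            lam = stretch
            I1 = lam ** 2 + 2.0 / lam          # first invariant, F = diag(lam, lam^-1/2, lam^-1/2)
            I4 = lam ** 2                      # squared stretch along the fiber direction
            E = kappa * (I1 - 3.0) + (1.0 - 3.0 * kappa) * (I4 - 1.0)
            psi_iso = 0.5 * mu * (I1 - 3.0)
            psi_fib = (k1 / (2.0 * k2)) * (np.exp(k2 * E ** 2) - 1.0) if E > 0 else 0.0
            return psi_iso + psi_fib

        # Compare a tightly aligned and a strongly dispersed fiber architecture
        for kappa in (0.05, 0.25):
            psi = [goh_energy(l, mu=20.0, k1=200.0, k2=5.0, kappa=kappa) for l in (1.05, 1.10, 1.15)]
            print(f"kappa = {kappa:.2f}: energy at 5/10/15% stretch = "
                  + ", ".join(f"{p:.1f}" for p in psi) + " kPa")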

  7. Combined Estimation of Hydrogeologic Conceptual Model and Parameter Uncertainty

    SciTech Connect

    Meyer, Philip D.; Ye, Ming; Neuman, Shlomo P.; Cantrell, Kirk J.

    2004-03-01

    The objective of the research described in this report is the development and application of a methodology for comprehensively assessing the hydrogeologic uncertainties involved in dose assessment, including uncertainties associated with conceptual models, parameters, and scenarios. This report describes and applies a statistical method to quantitatively estimate the combined uncertainty in model predictions arising from conceptual model and parameter uncertainties. The method relies on model averaging to combine the predictions of a set of alternative models. Implementation is driven by the available data. When there is minimal site-specific data the method can be carried out with prior parameter estimates based on generic data and subjective prior model probabilities. For sites with observations of system behavior (and optionally data characterizing model parameters), the method uses model calibration to update the prior parameter estimates and model probabilities based on the correspondence between model predictions and site observations. The set of model alternatives can contain both simplified and complex models, with the requirement that all models be based on the same set of data. The method was applied to the geostatistical modeling of air permeability at a fractured rock site. Seven alternative variogram models of log air permeability were considered to represent data from single-hole pneumatic injection tests in six boreholes at the site. Unbiased maximum likelihood estimates of variogram and drift parameters were obtained for each model. Standard information criteria provided an ambiguous ranking of the models, which would not justify selecting one of them and discarding all others as is commonly done in practice. Instead, some of the models were eliminated based on their negligibly small updated probabilities and the rest were used to project the measured log permeabilities by kriging onto a rock volume containing the six boreholes. These four

  8. Accuracy of Aerodynamic Model Parameters Estimated from Flight Test Data

    NASA Technical Reports Server (NTRS)

    Morelli, Eugene A.; Klein, Vladislav

    1997-01-01

    An important part of building mathematical models based on measured data is calculating the accuracy associated with statistical estimates of the model parameters. Indeed, without some idea of this accuracy, the parameter estimates themselves have limited value. An expression is developed for computing quantitatively correct parameter accuracy measures for maximum likelihood parameter estimates when the output residuals are colored. This result is important because experience in analyzing flight test data reveals that the output residuals from maximum likelihood estimation are almost always colored. The calculations involved can be appended to conventional maximum likelihood estimation algorithms. Monte Carlo simulation runs were used to show that parameter accuracy measures from the new technique accurately reflect the quality of the parameter estimates from maximum likelihood estimation without the need for correction factors or frequency domain analysis of the output residuals. The technique was applied to flight test data from repeated maneuvers flown on the F-18 High Alpha Research Vehicle. As in the simulated cases, parameter accuracy measures from the new technique were in agreement with the scatter in the parameter estimates from repeated maneuvers, whereas conventional parameter accuracy measures were optimistic.

  9. Parameter Estimation and Model Selection in Computational Biology

    PubMed Central

    Lillacci, Gabriele; Khammash, Mustafa

    2010-01-01

    A central challenge in computational modeling of biological systems is the determination of the model parameters. Typically, only a fraction of the parameters (such as kinetic rate constants) are experimentally measured, while the rest are often fitted. The fitting process is usually based on experimental time course measurements of observables, which are used to assign parameter values that minimize some measure of the error between these measurements and the corresponding model prediction. The measurements, which can come from immunoblotting assays, fluorescent markers, etc., tend to be very noisy and taken at a limited number of time points. In this work we present a new approach to the problem of parameter selection of biological models. We show how one can use a dynamic recursive estimator, known as the extended Kalman filter, to arrive at estimates of the model parameters. The proposed method proceeds as follows. First, we use a variation of the Kalman filter that is particularly well suited to biological applications to obtain a first guess for the unknown parameters. Second, we employ an a posteriori identifiability test to check the reliability of the estimates. Finally, we solve an optimization problem to refine the first guess in case it is not accurate enough. The final estimates are guaranteed to be statistically consistent with the measurements. Furthermore, we show how the same tools can be used to discriminate among alternate models of the same biological process. We demonstrate these ideas by applying our methods to two examples, namely a model of the heat shock response in E. coli, and a model of a synthetic gene regulation system. The methods presented are quite general and may be applied to a wide class of biological systems where noisy measurements are used for parameter estimation or model selection. PMID:20221262

  10. A model of the holographic principle: Randomness and additional dimension

    NASA Astrophysics Data System (ADS)

    Boyarsky, Abraham; Góra, Paweł; Proppe, Harald

    2010-01-01

    In recent years an idea has emerged that a system in a 3-dimensional space can be described from an information point of view by a system on its 2-dimensional boundary. This mysterious correspondence is called the Holographic Principle and has had profound effects in string theory and our perception of space-time. In this note we describe a purely mathematical model of the Holographic Principle using ideas from nonlinear dynamical systems theory. We show that a random map on the surface S of a 3-dimensional open ball B has a natural counterpart in B, and the two maps acting in different dimensional spaces have the same entropy. We can reverse this construction if we start with a special 3-dimensional map in B called a skew product. The key idea is to use the randomness, as imbedded in the parameter of the 2-dimensional random map, to define a third dimension. The main result shows that if we start with an arbitrary dynamical system in B with entropy E we can construct a random map on S whose entropy is arbitrarily close to E.

  11. SPOTting Model Parameters Using a Ready-Made Python Package

    PubMed Central

    Houska, Tobias; Kraft, Philipp; Chamorro-Chavez, Alejandro; Breuer, Lutz

    2015-01-01

    The choice of a specific parameter estimation method is often driven more by its availability than by its performance. We developed SPOTPY (Statistical Parameter Optimization Tool), an open source python package containing a comprehensive set of methods typically used to calibrate, analyze and optimize parameters for a wide range of ecological models. SPOTPY currently contains eight widely used algorithms, 11 objective functions, and can sample from eight parameter distributions. SPOTPY has a model-independent structure and can be run in parallel from the workstation to large computation clusters using the Message Passing Interface (MPI). We tested SPOTPY in five different case studies to parameterize the Rosenbrock, Griewank and Ackley functions, a one-dimensional physically based soil moisture routine, where we searched for parameters of the van Genuchten-Mualem function, and a calibration of a biogeochemistry model with different objective functions. The case studies reveal that the implemented SPOTPY methods can be used for any model with just a minimal amount of code for maximal power of parameter optimization. They further show the benefit of having one package at hand that includes a number of well-performing parameter search methods, since not every case study can be solved sufficiently with every algorithm or every objective function. PMID:26680783

  12. SPOTting Model Parameters Using a Ready-Made Python Package.

    PubMed

    Houska, Tobias; Kraft, Philipp; Chamorro-Chavez, Alejandro; Breuer, Lutz

    2015-01-01

    The choice of a specific parameter estimation method is often driven more by its availability than by its performance. We developed SPOTPY (Statistical Parameter Optimization Tool), an open source python package containing a comprehensive set of methods typically used to calibrate, analyze and optimize parameters for a wide range of ecological models. SPOTPY currently contains eight widely used algorithms, 11 objective functions, and can sample from eight parameter distributions. SPOTPY has a model-independent structure and can be run in parallel from the workstation to large computation clusters using the Message Passing Interface (MPI). We tested SPOTPY in five different case studies to parameterize the Rosenbrock, Griewank and Ackley functions, a one-dimensional physically based soil moisture routine, where we searched for parameters of the van Genuchten-Mualem function, and a calibration of a biogeochemistry model with different objective functions. The case studies reveal that the implemented SPOTPY methods can be used for any model with just a minimal amount of code for maximal power of parameter optimization. They further show the benefit of having one package at hand that includes a number of well-performing parameter search methods, since not every case study can be solved sufficiently with every algorithm or every objective function. PMID:26680783

  13. Analysis of error-prone survival data under additive hazards models: measurement error effects and adjustments.

    PubMed

    Yan, Ying; Yi, Grace Y

    2016-07-01

    Covariate measurement error occurs commonly in survival analysis. Under the proportional hazards model, measurement error effects have been well studied, and various inference methods have been developed to correct for error effects under such a model. In contrast, error-contaminated survival data under the additive hazards model have received relatively less attention. In this paper, we investigate this problem by exploring measurement error effects on parameter estimation and the change of the hazard function. New insights into measurement error effects are revealed, as opposed to well-documented results for the Cox proportional hazards model. We propose a class of bias correction estimators that embraces certain existing estimators as special cases. In addition, we exploit the regression calibration method to reduce measurement error effects. Theoretical results for the developed methods are established, and numerical assessments are conducted to illustrate the finite sample performance of our methods. PMID:26328545

  14. An Effective Parameter Screening Strategy for High Dimensional Watershed Models

    NASA Astrophysics Data System (ADS)

    Khare, Y. P.; Martinez, C. J.; Munoz-Carpena, R.

    2014-12-01

    Watershed simulation models can assess the impacts of natural and anthropogenic disturbances on natural systems. These models have become important tools for tackling a range of water resources problems through their implementation in the formulation and evaluation of Best Management Practices, Total Maximum Daily Loads, and Basin Management Action Plans. For accurate applications of watershed models they need to be thoroughly evaluated through global uncertainty and sensitivity analyses (UA/SA). However, due to the high dimensionality of these models such evaluation becomes extremely time- and resource-consuming. Parameter screening, the qualitative separation of important parameters, has been suggested as an essential step before applying rigorous evaluation techniques such as the Sobol' and Fourier Amplitude Sensitivity Test (FAST) methods in the UA/SA framework. The method of elementary effects (EE) (Morris, 1991) is one of the most widely used screening methodologies. Some of the common parameter sampling strategies for EE, e.g. Optimized Trajectories [OT] (Campolongo et al., 2007) and Modified Optimized Trajectories [MOT] (Ruano et al., 2012), suffer from inconsistencies in the generated parameter distributions, infeasible sample generation time, etc. In this work, we have formulated a new parameter sampling strategy - Sampling for Uniformity (SU) - for parameter screening which is based on the principles of the uniformity of the generated parameter distributions and the spread of the parameter sample. A rigorous multi-criteria evaluation (time, distribution, spread and screening efficiency) of OT, MOT, and SU indicated that SU is superior to other sampling strategies. Comparison of the EE-based parameter importance rankings with those of Sobol' helped to quantify the qualitativeness of the EE parameter screening approach, reinforcing the fact that one should use EE only to reduce the resource burden required by FAST/Sobol' analyses but not to replace it.
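
    A stripped-down sketch of the elementary effects (EE) idea follows: one-at-a-time perturbations from random base points, summarized by the mean absolute effect and its standard deviation. The OT, MOT, and SU strategies discussed above differ in how the base points and perturbations are generated; the naive sampling below is only a placeholder.

```python
# Simplified elementary-effects (Morris) screening sketch: one-at-a-time
# perturbations from random base points. Sampling strategies such as OT, MOT,
# or the SU scheme discussed above would replace the naive base-point sampling.
import numpy as np

rng = np.random.default_rng(0)

def model(x):
    """Placeholder model; replace with a watershed model wrapper."""
    return 3.0 * x[0] + x[1] ** 2 + 0.1 * x[2]

k = 3            # number of parameters (assumed scaled to [0, 1])
r = 20           # number of trajectories / base points
delta = 0.1      # perturbation size

effects = np.zeros((r, k))
for j in range(r):
    base = rng.uniform(0.0, 1.0 - delta, size=k)
    f0 = model(base)
    for i in range(k):
        xp = base.copy()
        xp[i] += delta
        effects[j, i] = (model(xp) - f0) / delta   # elementary effect of parameter i

mu_star = np.abs(effects).mean(axis=0)   # importance (mean absolute effect)
sigma = effects.std(axis=0)              # interaction / nonlinearity indicator
print("mu* :", mu_star)
print("sigma:", sigma)
```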

  15. Identification of patient specific parameters for a minimal cardiac model.

    PubMed

    Hann, C E; Chase, J G; Shaw, G M; Smith, B W

    2004-01-01

    A minimal cardiac model has been developed which accurately captures the essential dynamics of the cardiovascular system (CVS). This paper develops an integral based parameter identification method for fast and accurate identification of patient specific parameters for this minimal model. The integral method is implemented using a single chamber model to prove the concept, and turns a previously nonlinear and nonconvex optimization problem into a linear and convex problem. The method can be readily extended to the full minimal cardiac model and enables rapid identification of model parameters to match a particular patient condition in clinical real time (3-5 minutes). This information can then be used to assist medical staff in understanding, diagnosis and treatment selection. PMID:17271801
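
    The integral-based trick can be illustrated on a toy single-state system: integrating the governing equation between measurement times yields equations that are linear in the unknown coefficients, so they can be solved by ordinary least squares. This is a hypothetical illustration, not the authors' cardiac model equations.

```python
# Toy illustration of integral-based parameter identification: for
# dP/dt = -a*P + b*u(t), integrating between samples gives equations that are
# linear in (a, b), so a nonlinear fit becomes ordinary least squares.
import numpy as np

rng = np.random.default_rng(1)
a_true, b_true = 0.8, 2.0

t = np.linspace(0.0, 10.0, 201)
u = np.sin(0.5 * t) + 1.0

# Simulate the "measured" state with simple Euler integration plus noise.
P = np.zeros_like(t)
for k in range(1, len(t)):
    dt = t[k] - t[k - 1]
    P[k] = P[k - 1] + dt * (-a_true * P[k - 1] + b_true * u[k - 1])
P_meas = P + rng.normal(0.0, 0.01, size=P.shape)

# Integral equations over windows [t_i, t_j]: P(t_j) - P(t_i) = -a*int(P) + b*int(u)
def cumtrap(y, x):
    return np.concatenate(([0.0], np.cumsum(0.5 * (y[1:] + y[:-1]) * np.diff(x))))

IP, IU = cumtrap(P_meas, t), cumtrap(u, t)
i, j = np.arange(0, len(t) - 20), np.arange(20, len(t))   # sliding windows
A = np.column_stack([-(IP[j] - IP[i]), IU[j] - IU[i]])
y = P_meas[j] - P_meas[i]

a_hat, b_hat = np.linalg.lstsq(A, y, rcond=None)[0]
print(f"identified a~{a_hat:.3f}, b~{b_hat:.3f} (true 0.8, 2.0)")
```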

  16. Parameter Identification in a Tuberculosis Model for Cameroon

    PubMed Central

    Moualeu-Ngangue, Dany Pascal; Röblitz, Susanna; Ehrig, Rainald; Deuflhard, Peter

    2015-01-01

    A deterministic model of tuberculosis in Cameroon is designed and analyzed with respect to its transmission dynamics. The model includes lack of access to treatment and weak diagnosis capacity as well as both frequency- and density-dependent transmissions. It is shown that the model is mathematically well-posed and epidemiologically reasonable. Solutions are non-negative and bounded whenever the initial values are non-negative. A sensitivity analysis of model parameters is performed and the most sensitive ones are identified by means of a state-of-the-art Gauss-Newton method. In particular, parameters representing the proportion of individuals having access to medical facilities are seen to have a large impact on the dynamics of the disease. The model predicts that a gradual increase of these parameters could significantly reduce the disease burden on the population within the next 15 years. PMID:25874885

  17. Application of physical parameter identification to finite element models

    NASA Technical Reports Server (NTRS)

    Bronowicki, Allen J.; Lukich, Michael S.; Kuritz, Steven P.

    1986-01-01

    A time domain technique for matching response predictions of a structural dynamic model to test measurements is developed. Significance is attached to prior estimates of physical model parameters and to experimental data. The Bayesian estimation procedure allows confidence levels in predicted physical and modal parameters to be obtained. Structural optimization procedures are employed to minimize an error functional with physical model parameters describing the finite element model as design variables. The number of complete FEM analyses is reduced using approximation concepts, including the recently developed convoluted Taylor series approach. The error function is represented in closed form by converting free decay test data to a time series model using Prony's method. The technique is demonstrated on simulated response of a simple truss structure.

  18. Parameter identification in a tuberculosis model for Cameroon.

    PubMed

    Moualeu-Ngangue, Dany Pascal; Röblitz, Susanna; Ehrig, Rainald; Deuflhard, Peter

    2015-01-01

    A deterministic model of tuberculosis in Cameroon is designed and analyzed with respect to its transmission dynamics. The model includes lack of access to treatment and weak diagnosis capacity as well as both frequency- and density-dependent transmissions. It is shown that the model is mathematically well-posed and epidemiologically reasonable. Solutions are non-negative and bounded whenever the initial values are non-negative. A sensitivity analysis of model parameters is performed and the most sensitive ones are identified by means of a state-of-the-art Gauss-Newton method. In particular, parameters representing the proportion of individuals having access to medical facilities are seen to have a large impact on the dynamics of the disease. The model predicts that a gradual increase of these parameters could significantly reduce the disease burden on the population within the next 15 years. PMID:25874885

  19. Minor hysteresis loops model based on exponential parameters scaling of the modified Jiles-Atherton model

    NASA Astrophysics Data System (ADS)

    Hamimid, M.; Mimoune, S. M.; Feliachi, M.

    2012-07-01

    In the present work, the minor hysteresis loops model based on parameters scaling of the modified Jiles-Atherton model is evaluated by using judicious expressions. These expressions give the minor hysteresis loops parameters as a function of the major hysteresis loop ones. They have an exponential form and are obtained by parameter identification using the stochastic optimization method “simulated annealing”. The three parameters that most influence the data fitting are the pinning parameter k, the mean field parameter α, and the parameter a, which characterizes the shape of the anhysteretic magnetization curve. To validate this model, calculated minor hysteresis loops are compared with measured ones and good agreement is obtained.

  20. Parameter Sensitivity Evaluation of the CLM-Crop model

    NASA Astrophysics Data System (ADS)

    Drewniak, B. A.; Zeng, X.; Mametjanov, A.; Anitescu, M.; Norris, B.; Kotamarthi, V. R.

    2011-12-01

    In order to improve carbon cycling within Earth System Models, crop representation for corn, spring wheat, and soybean species has been incorporated into the latest version of the Community Land Model (CLM), the land surface model in the Community Earth System Model. As a means to evaluate and improve the CLM-Crop model, we will determine the sensitivity of carbon fluxes (such as GPP and NEE), yields, and soil organic matter to various crop parameters. The sensitivity analysis will perform small perturbations over a range of values for each parameter on individual grid sites, for comparison with AmeriFlux data, as well as globally so crop model parameters can be improved. Over 20 parameters have been identified for evaluation in this study, including carbon-nitrogen ratios for leaves, stems, roots, and organs; fertilizer applications; growing degree days for each growth stage; and more. Results from this study will be presented to give a better understanding of the sensitivity of the various parameters used to represent crops, which will help improve overall model performance and aid in determining the future influences climate change will have on cropland ecosystems.

  1. Observation model and parameter partials for the JPL VLBI parameter estimation software MODEST, 1994

    NASA Technical Reports Server (NTRS)

    Sovers, O. J.; Jacobs, C. S.

    1994-01-01

    This report is a revision of the document Observation Model and Parameter Partials for the JPL VLBI Parameter Estimation Software 'MODEST'---1991, dated August 1, 1991. It supersedes that document and its four previous versions (1983, 1985, 1986, and 1987). A number of aspects of the very long baseline interferometry (VLBI) model were improved from 1991 to 1994. Treatment of tidal effects is extended to model the effects of ocean tides on universal time and polar motion (UTPM), including a default model for nearly diurnal and semidiurnal ocean tidal UTPM variations, and partial derivatives for all (solid and ocean) tidal UTPM amplitudes. The time-honored 'K(sub 1) correction' for solid earth tides has been extended to include analogous frequency-dependent response of five tidal components. Partials of ocean loading amplitudes are now supplied. The Zhu-Mathews-Oceans-Anisotropy (ZMOA) 1990-2 and Kinoshita-Souchay models of nutation are now two of the modeling choices to replace the increasingly inadequate 1980 International Astronomical Union (IAU) nutation series. A rudimentary model of antenna thermal expansion is provided. Two more troposphere mapping functions have been added to the repertoire. Finally, corrections among VLBI observations via the model of Treuhaft and Lanyi improve modeling of the dynamic troposphere. A number of minor misprints in Rev. 4 have been corrected.

  2. Kinetic modeling of molecular motors: pause model and parameter determination from single-molecule experiments

    NASA Astrophysics Data System (ADS)

    Morin, José A.; Ibarra, Borja; Cao, Francisco J.

    2016-05-01

    Single-molecule manipulation experiments of molecular motors provide essential information about the rate and conformational changes of the steps of the reaction located along the manipulation coordinate. This information is not always sufficient to define a particular kinetic cycle. Recent single-molecule experiments with optical tweezers showed that the DNA unwinding activity of a Phi29 DNA polymerase mutant presents a complex pause behavior, which includes short and long pauses. Here we show that different kinetic models, considering different connections between the active and the pause states, can explain the experimental pause behavior. Both the two independent pause model and the two connected pause model are able to describe the pause behavior of a mutated Phi29 DNA polymerase observed in an optical tweezers single-molecule experiment. For the two independent pause model all parameters are fixed by the observed data, while for the more general two connected pause model there is a range of values of the parameters compatible with the observed data (which can be expressed in terms of two of the rates and their force dependencies). This general model includes models with indirect entry and exit to the long-pause state, and also models with cycling in both directions. Additionally, assuming that detailed balance is verified, which forbids cycling, this reduces the ranges of the values of the parameters (which can then be expressed in terms of one rate and its force dependency). The resulting model interpolates between the independent pause model and the indirect entry and exit to the long-pause state model.

  3. A distributed parameter wire model for transient electrical discharges

    NASA Astrophysics Data System (ADS)

    Maier, William B., II; Kadish, A.; Sutherland, C. D.; Robiscoe, R. T.

    1990-06-01

    This paper presents a three-dimensional model developed for freely propagating electrical discharges, such as lightning and punch-through arcs. In this model, charge transport is described by a nonlinear differential equation containing two phenomenological parameters characteristic of the medium and the arc; the electromagnetic field is described by Maxwell's equations. Using this model, a cylindrically symmetric small-diameter discharge is analyzed. It is shown that the model predicts discharge properties consistent with experimentally known phenomena.

  4. Parameter Estimation for Groundwater Models under Uncertain Irrigation Data.

    PubMed

    Demissie, Yonas; Valocchi, Albert; Cai, Ximing; Brozovic, Nicholas; Senay, Gabriel; Gebremichael, Mekonnen

    2015-01-01

    The success of modeling groundwater is strongly influenced by the accuracy of the model parameters that are used to characterize the subsurface system. However, the presence of uncertainty and possibly bias in groundwater model source/sink terms may lead to biased estimates of model parameters and model predictions when the standard regression-based inverse modeling techniques are used. This study first quantifies the levels of bias in groundwater model parameters and predictions due to the presence of errors in irrigation data. Then, a new inverse modeling technique called input uncertainty weighted least-squares (IUWLS) is presented for unbiased estimation of the parameters when pumping and other source/sink data are uncertain. The approach uses the concept of generalized least-squares method with the weight of the objective function depending on the level of pumping uncertainty and iteratively adjusted during the parameter optimization process. We have conducted both analytical and numerical experiments, using irrigation pumping data from the Republican River Basin in Nebraska, to evaluate the performance of ordinary least-squares (OLS) and IUWLS calibration methods under different levels of uncertainty of irrigation data and calibration conditions. The result from the OLS method shows the presence of statistically significant (p < 0.05) bias in estimated parameters and model predictions that persist despite calibrating the models to different calibration data and sample sizes. However, by directly accounting for the irrigation pumping uncertainties during the calibration procedures, the proposed IUWLS is able to minimize the bias effectively without adding significant computational burden to the calibration processes. PMID:25040235
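
    The core idea of down-weighting observations affected by uncertain pumping can be sketched with a toy weighted least-squares example; the actual IUWLS method adjusts the weights iteratively during calibration, which is not reproduced here, and all values below are illustrative.

```python
# Toy contrast of OLS versus input-uncertainty-weighted least squares: observations
# made under uncertain pumping are down-weighted in proportion to that uncertainty.
import numpy as np

rng = np.random.default_rng(2)
n = 200
x = rng.uniform(0.0, 10.0, n)                 # e.g. a model sensitivity / covariate
pump_sigma = rng.uniform(0.1, 3.0, n)         # per-observation pumping uncertainty
beta_true = 1.5

# Head "observations": error grows with pumping uncertainty (plus measurement noise).
y = beta_true * x + rng.normal(0.0, pump_sigma) + rng.normal(0.0, 0.2, n)

X = x[:, None]
beta_ols = np.linalg.lstsq(X, y, rcond=None)[0][0]

w = 1.0 / (0.2 ** 2 + pump_sigma ** 2)        # weights from total error variance
beta_wls = np.sum(w * x * y) / np.sum(w * x * x)

print(f"OLS estimate  : {beta_ols:.3f}")
print(f"Weighted (GLS): {beta_wls:.3f}  (true {beta_true})")
```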

  5. Failure of the addition of fresh seminal plasma to cryopreserved-thawed sperm to improve semen parameters.

    PubMed

    Check, D J; Check, M L; Bollendorf, A; Check, J H

    1993-01-01

    Previous data has shown that subnormal motility in some semen specimens can be improved by the addition of fresh human seminal plasma (HSP). However, if the HSP was first frozen the motility-enhancing factor was lost. We hypothesized that some of the reduction in sperm motility of cryopreserved-thawed sperm may be related to damage of the "motility-enhancing factor" of HSP. This study evaluated whether the addition of fresh HSP could improve the motility of frozen-thawed sperm. Each frozen-thawed specimen was evaluated for motile density and hypoosmotic swelling and then divided into two aliquots. Equal volumes of HSP, human tubal fluid (HTF), and control media were added and the semen parameters were reevaluated. The mean scores for motile density and percent motility did not change compared with baseline thawed volumes with either HSP or HTF additives. There were some isolated cases that did improve with either HSP (21%) or HTF (14%). Future studies are needed to determine whether this improvement is coincidental or consistent, and to determine whether at least some individuals can benefit from the addition of fresh HSP to frozen-thawed sperm. PMID:8215691

  6. Material parameter computation for multi-layered vocal fold models

    PubMed Central

    Schmidt, Bastian; Stingl, Michael; Leugering, Günter; Berry, David A.; Döllinger, Michael

    2011-01-01

    Today, the prevention and treatment of voice disorders is an ever-increasing health concern. Since many occupations rely on verbal communication, vocal health is necessary just to maintain one’s livelihood. Commonly applied models to study vocal fold vibrations and air flow distributions are self-sustained physical models of the larynx composed of artificial silicone vocal folds. Choosing appropriate mechanical parameters for these vocal fold models while considering simplifications due to manufacturing restrictions is difficult but crucial for achieving realistic behavior. In the present work, a combination of experimental and numerical approaches to compute material parameters for synthetic vocal fold models is presented. The material parameters are derived from deformation behaviors of excised human larynges. The resulting deformations are used as reference displacements for a tracking functional to be optimized. Material optimization was applied to three-dimensional vocal fold models based on isotropic and transverse-isotropic material laws, considering both a layered model with homogeneous material properties on each layer and an inhomogeneous model. The best results were obtained with a transversal-isotropic, inhomogeneous (i.e., not producible) model. For the homogeneous model (three layers), the transversal-isotropic material parameters were also computed for each layer, yielding deformations similar to the measured human vocal fold deformations. PMID:21476672

  7. Parameters of cosmological models and recent astronomical observations

    SciTech Connect

    Sharov, G.S.; Vorontsova, E.G. E-mail: elenavor@inbox.ru

    2014-10-01

    For different gravitational models we consider limitations on their parameters coming from recent observational data for type Ia supernovae, baryon acoustic oscillations, and from 34 data points for the Hubble parameter H(z) depending on redshift. We calculate parameters of 3 models describing accelerated expansion of the universe: the ΛCDM model, the model with generalized Chaplygin gas (GCG) and the multidimensional model of I. Pahwa, D. Choudhury and T.R. Seshadri. In particular, for the ΛCDM model 1σ estimates of parameters are: H{sub 0}=70.262±0.319 km s{sup -1} Mpc{sup -1}, Ω{sub m}=0.276{sub -0.008}{sup +0.009}, Ω{sub Λ}=0.769±0.029, Ω{sub k}=-0.045±0.032. The GCG model under the restriction α ≥ 0 reduces to the ΛCDM model. Predictions of the multidimensional model essentially depend on 3 data points for H(z) with z≥2.3.

  8. An IRT Model with a Parameter-Driven Process for Change

    ERIC Educational Resources Information Center

    Rijmen, Frank; De Boeck, Paul; van der Maas, Han L. J.

    2005-01-01

    An IRT model with a parameter-driven process for change is proposed. Quantitative differences between persons are taken into account by a continuous latent variable, as in common IRT models. In addition, qualitative inter-individual differences and auto-dependencies are accounted for by assuming within-subject variability with respect to the…

  9. Environmental Transport Input Parameters for the Biosphere Model

    SciTech Connect

    M. Wasiolek

    2004-09-10

    This analysis report is one of the technical reports documenting the Environmental Radiation Model for Yucca Mountain, Nevada (ERMYN), a biosphere model supporting the total system performance assessment for the license application (TSPA-LA) for the geologic repository at Yucca Mountain. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows relationships among the reports developed for biosphere modeling and biosphere abstraction products for the TSPA-LA, as identified in the ''Technical Work Plan for Biosphere Modeling and Expert Support'' (BSC 2004 [DIRS 169573]) (TWP). This figure provides an understanding of how this report contributes to biosphere modeling in support of the license application (LA). This report is one of the five reports that develop input parameter values for the biosphere model. The ''Biosphere Model Report'' (BSC 2004 [DIRS 169460]) describes the conceptual model and the mathematical model. The input parameter reports, shown to the right of the Biosphere Model Report in Figure 1-1, contain detailed descriptions of the model input parameters. The output of this report is used as direct input in the ''Nominal Performance Biosphere Dose Conversion Factor Analysis'' and in the ''Disruptive Event Biosphere Dose Conversion Factor Analysis'' that calculate the values of biosphere dose conversion factors (BDCFs) for the groundwater and volcanic ash exposure scenarios, respectively. The purpose of this analysis was to develop biosphere model parameter values related to radionuclide transport and accumulation in the environment. These parameters support calculations of radionuclide concentrations in the environmental media (e.g., soil, crops, animal products, and air) resulting from a given radionuclide concentration at the source of contamination (i.e., either in groundwater or in volcanic ash). The analysis was performed in accordance with the TWP (BSC 2004 [DIRS 169573]).

  10. Inhalation Exposure Input Parameters for the Biosphere Model

    SciTech Connect

    K. Rautenstrauch

    2004-09-10

    This analysis is one of 10 reports that support the Environmental Radiation Model for Yucca Mountain, Nevada (ERMYN) biosphere model. The ''Biosphere Model Report'' (BSC 2004 [DIRS 169460]) describes in detail the conceptual model as well as the mathematical model and its input parameters. This report documents development of input parameters for the biosphere model that are related to atmospheric mass loading and supports the use of the model to develop biosphere dose conversion factors (BDCFs). The biosphere model is one of a series of process models supporting the total system performance assessment (TSPA) for a Yucca Mountain repository. Inhalation Exposure Input Parameters for the Biosphere Model is one of five reports that develop input parameters for the biosphere model. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows the interrelationships among the products (i.e., analysis and model reports) developed for biosphere modeling, and the plan for development of the biosphere abstraction products for TSPA, as identified in the Technical Work Plan for Biosphere Modeling and Expert Support (BSC 2004 [DIRS 169573]). This analysis report defines and justifies values of mass loading for the biosphere model. Mass loading is the total mass concentration of resuspended particles (e.g., dust, ash) in a volume of air. Mass loading values are used in the air submodel of ERMYN to calculate concentrations of radionuclides in air inhaled by a receptor and concentrations in air surrounding crops. Concentrations in air to which the receptor is exposed are then used in the inhalation submodel to calculate the dose contribution to the receptor from inhalation of contaminated airborne particles. Concentrations in air surrounding plants are used in the plant submodel to calculate the concentrations of radionuclides in foodstuffs contributed from uptake by foliar interception.

  11. Simultaneous parameter estimation and contaminant source characterization for coupled groundwater flow and contaminant transport modelling

    USGS Publications Warehouse

    Wagner, B.J.

    1992-01-01

    Parameter estimation and contaminant source characterization are key steps in the development of a coupled groundwater flow and contaminant transport simulation model. Here a methodology for simultaneous model parameter estimation and source characterization is presented. The parameter estimation/source characterization inverse model combines groundwater flow and contaminant transport simulation with non-linear maximum likelihood estimation to determine optimal estimates of the unknown model parameters and source characteristics based on measurements of hydraulic head and contaminant concentration. First-order uncertainty analysis provides a means for assessing the reliability of the maximum likelihood estimates and evaluating the accuracy and reliability of the flow and transport model predictions. A series of hypothetical examples is presented to demonstrate the ability of the inverse model to solve the combined parameter estimation/source characterization inverse problem. Hydraulic conductivities, effective porosity, longitudinal and transverse dispersivities, boundary flux, and contaminant flux at the source are estimated for a two-dimensional groundwater system. In addition, characterization of the history of contaminant disposal or location of the contaminant source is demonstrated. Finally, the problem of estimating the statistical parameters that describe the errors associated with the head and concentration data is addressed. A stage-wise estimation procedure is used to jointly estimate these statistical parameters along with the unknown model parameters and source characteristics. © 1992.

  12. Parameter uncertainty analysis of a biokinetic model of caesium.

    PubMed

    Li, W B; Klein, W; Blanchardon, E; Puncher, M; Leggett, R W; Oeh, U; Breustedt, B; Noßke, D; Lopez, M A

    2015-01-01

    Parameter uncertainties for the biokinetic model of caesium (Cs) developed by Leggett et al. were inventoried and evaluated. The methods of parameter uncertainty analysis were used to assess the uncertainties of model predictions with the assumptions of model parameter uncertainties and distributions. Furthermore, the importance of individual model parameters was assessed by means of sensitivity analysis. The calculated uncertainties of model predictions were compared with human data of Cs measured in blood and in the whole body. It was found that propagating the derived uncertainties in model parameter values reproduced the range of bioassay data observed in human subjects at different times after intake. The maximum ranges, expressed as uncertainty factors (UFs) (defined as the square root of the ratio between the 97.5th and 2.5th percentiles) of blood clearance, whole-body retention and urinary excretion of Cs predicted at earlier times after intake were, respectively: 1.5, 1.0 and 2.5 at the first day; 1.8, 1.1 and 2.4 at Day 10 and 1.8, 2.0 and 1.8 at Day 100; for late times (1000 d) after intake, the UFs increased to 43, 24 and 31, respectively. The transfer rates between kidneys and blood, between muscle and blood, and from kidneys to urinary bladder content are the model parameters most influential for the blood clearance and the whole-body retention of Cs. For the urinary excretion, the transfer rates from urinary bladder content to urine and from kidneys to urinary bladder content have the greatest impact. The implication and effect on the estimated equivalent and effective doses of the larger uncertainty of 43 in whole-body retention at later times, say, after Day 500, will be explored in subsequent work in the framework of EURADOS. PMID:24743755
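
    The uncertainty factor used above has a direct computational analogue; a minimal sketch, assuming Monte Carlo samples of a predicted quantity are available, is:

```python
# Uncertainty factor as defined in the abstract: the square root of the ratio
# between the 97.5th and 2.5th percentiles of the predicted quantity.
import numpy as np

def uncertainty_factor(samples):
    p_low, p_high = np.percentile(samples, [2.5, 97.5])
    return np.sqrt(p_high / p_low)

# Example: hypothetical Monte Carlo predictions of whole-body retention at one time point.
rng = np.random.default_rng(3)
retention_samples = rng.lognormal(mean=np.log(0.5), sigma=0.3, size=10_000)
print(f"UF = {uncertainty_factor(retention_samples):.2f}")
```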

  13. Environmental Transport Input Parameters for the Biosphere Model

    SciTech Connect

    M. A. Wasiolek

    2003-06-27

    This analysis report is one of the technical reports documenting the Environmental Radiation Model for Yucca Mountain Nevada (ERMYN), a biosphere model supporting the total system performance assessment (TSPA) for the geologic repository at Yucca Mountain. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows relationships among the reports developed for biosphere modeling and biosphere abstraction products for the TSPA, as identified in the ''Technical Work Plan: for Biosphere Modeling and Expert Support'' (TWP) (BSC 2003 [163602]). Some documents in Figure 1-1 may be under development and not available when this report is issued. This figure provides an understanding of how this report contributes to biosphere modeling in support of the license application (LA), but access to the listed documents is not required to understand the contents of this report. This report is one of the reports that develops input parameter values for the biosphere model. The ''Biosphere Model Report'' (BSC 2003 [160699]) describes the conceptual model, the mathematical model, and the input parameters. The purpose of this analysis is to develop biosphere model parameter values related to radionuclide transport and accumulation in the environment. These parameters support calculations of radionuclide concentrations in the environmental media (e.g., soil, crops, animal products, and air) resulting from a given radionuclide concentration at the source of contamination (i.e., either in groundwater or volcanic ash). The analysis was performed in accordance with the TWP (BSC 2003 [163602]). This analysis develops values of parameters associated with many features, events, and processes (FEPs) applicable to the reference biosphere (DTN: M00303SEPFEPS2.000 [162452]), which are addressed in the biosphere model (BSC 2003 [160699]). The treatment of these FEPs is described in BSC (2003 [160699], Section 6.2). Parameter values

  14. Structural and parameter uncertainty in Bayesian cost-effectiveness models

    PubMed Central

    Jackson, Christopher H; Sharples, Linda D; Thompson, Simon G

    2010-01-01

    Health economic decision models are subject to various forms of uncertainty, including uncertainty about the parameters of the model and about the model structure. These uncertainties can be handled within a Bayesian framework, which also allows evidence from previous studies to be combined with the data. As an example, we consider a Markov model for assessing the cost-effectiveness of implantable cardioverter defibrillators. Using Markov chain Monte Carlo posterior simulation, uncertainty about the parameters of the model is formally incorporated in the estimates of expected cost and effectiveness. We extend these methods to include uncertainty about the choice between plausible model structures. This is accounted for by averaging the posterior distributions from the competing models using weights that are derived from the pseudo-marginal-likelihood and the deviance information criterion, which are measures of expected predictive utility. We also show how these cost-effectiveness calculations can be performed efficiently in the widely used software WinBUGS. PMID:20383261
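
    One common way to turn an information criterion into model-averaging weights is an Akaike-style exponential weighting of DIC differences; the sketch below uses hypothetical numbers and ignores the pseudo-marginal-likelihood component the authors also employ, so it is only an illustration of the averaging step.

```python
# Illustrative model-averaging sketch: convert DIC values into normalized weights
# (smaller DIC = higher weight, Akaike-style) and average posterior summaries of
# expected cost and effectiveness across competing model structures.
import numpy as np

dic = np.array([412.3, 415.8, 410.9])                  # hypothetical DICs for 3 structures
expected_cost = np.array([10_500.0, 9_800.0, 11_200.0])
expected_qaly = np.array([6.1, 5.9, 6.3])

w = np.exp(-0.5 * (dic - dic.min()))
w /= w.sum()

print("weights:", np.round(w, 3))
print("model-averaged cost:", np.round(np.sum(w * expected_cost), 1))
print("model-averaged QALY:", np.round(np.sum(w * expected_qaly), 3))
```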

  15. Global-scale regionalization of hydrologic model parameters

    NASA Astrophysics Data System (ADS)

    Beck, Hylke; van Dijk, Albert; de Roo, Ad; Miralles, Diego; Schellekens, Jaap; McVicar, Tim; Bruijnzeel, Sampurno

    2016-04-01

    Current state-of-the-art models typically applied at continental to global scales (hereafter called macro-scale) tend to use a priori parameters, resulting in suboptimal streamflow (Q) simulation. For the first time, a scheme for regionalization of model parameters at the global scale was developed. We used data from a diverse set of 1787 small-to-medium sized catchments (10-10,000 km^2) and the simple conceptual HBV model to set up and test the scheme. Each catchment was calibrated against observed daily Q, after which 674 catchments with high calibration and validation scores, and thus presumably good-quality observed Q and forcing data, were selected to serve as donor catchments. The calibrated parameter sets for the donors were subsequently transferred to 0.5° grid cells with similar climatic and physiographic characteristics, resulting in parameter maps for HBV with global coverage. For each grid cell, we used the ten most similar donor catchments, rather than the single most similar donor, and averaged the resulting simulated Q, which enhanced model performance. The 1113 catchments not used as donors were used to independently evaluate the scheme. The regionalized parameters outperformed spatially-uniform (i.e., averaged calibrated) parameters for 79% of the evaluation catchments. Substantial improvements were evident for all major Köppen-Geiger climate types and even for evaluation catchments >5000 km distance from the donors. The median improvement was about half of the performance increase achieved through calibration. HBV using regionalized parameters outperformed nine state-of-the-art macro-scale models, suggesting these might also benefit from the new regionalization scheme. The produced HBV parameter maps including ancillary data are available via http://water.jrc.ec.europa.eu/HBV/.

  16. Global-scale regionalization of hydrologic model parameters

    NASA Astrophysics Data System (ADS)

    Beck, Hylke E.; van Dijk, Albert I. J. M.; de Roo, Ad; Miralles, Diego G.; McVicar, Tim R.; Schellekens, Jaap; Bruijnzeel, L. Adrian

    2016-05-01

    Current state-of-the-art models typically applied at continental to global scales (hereafter called macroscale) tend to use a priori parameters, resulting in suboptimal streamflow (Q) simulation. For the first time, a scheme for regionalization of model parameters at the global scale was developed. We used data from a diverse set of 1787 small-to-medium sized catchments (10-10,000 km2) and the simple conceptual HBV model to set up and test the scheme. Each catchment was calibrated against observed daily Q, after which 674 catchments with high calibration and validation scores, and thus presumably good-quality observed Q and forcing data, were selected to serve as donor catchments. The calibrated parameter sets for the donors were subsequently transferred to 0.5° grid cells with similar climatic and physiographic characteristics, resulting in parameter maps for HBV with global coverage. For each grid cell, we used the 10 most similar donor catchments, rather than the single most similar donor, and averaged the resulting simulated Q, which enhanced model performance. The 1113 catchments not used as donors were used to independently evaluate the scheme. The regionalized parameters outperformed spatially uniform (i.e., averaged calibrated) parameters for 79% of the evaluation catchments. Substantial improvements were evident for all major Köppen-Geiger climate types and even for evaluation catchments > 5000 km distant from the donors. The median improvement was about half of the performance increase achieved through calibration. HBV with regionalized parameters outperformed nine state-of-the-art macroscale models, suggesting these might also benefit from the new regionalization scheme. The produced HBV parameter maps including ancillary data are available via www.gloh2o.org.
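
    The donor-transfer step can be sketched as follows: for a target grid cell, select the ten most similar donor catchments in a standardized attribute space, run the model with each donor's calibrated parameters, and average the simulated streamflow. The attribute set, similarity metric, and model stub below are placeholders, not the study's exact choices.

```python
# Sketch of the donor-based regionalization idea: for each target grid cell, find the
# 10 most similar donor catchments in (normalized) climate/physiography space, run the
# model with each donor's calibrated parameter set, and average the simulated Q.
import numpy as np

rng = np.random.default_rng(4)
n_donors, n_attr, n_days = 674, 5, 365

donor_attrs = rng.normal(size=(n_donors, n_attr))      # standardized catchment attributes
cell_attrs = rng.normal(size=n_attr)                   # attributes of one 0.5 degree grid cell

def simulate_q(donor_index):
    """Placeholder for running HBV with the donor's calibrated parameter set."""
    return rng.gamma(shape=2.0, scale=1.0 + 0.01 * donor_index, size=n_days)

dist = np.linalg.norm(donor_attrs - cell_attrs, axis=1)
best10 = np.argsort(dist)[:10]
q_ensemble = np.stack([simulate_q(d) for d in best10])
q_cell = q_ensemble.mean(axis=0)                       # averaged simulated streamflow
print("ten donors:", best10, "mean Q:", q_cell.mean().round(3))
```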

  17. Climate Model Sensitivity to Moist Convection Parameter Perturbations

    NASA Astrophysics Data System (ADS)

    Bernstein, D. N.; Neelin, J. D.

    2014-12-01

    The perturbed physics ensemble in this study examines the impact of poorly constrained parameters in representations of subgrid-scale processes affecting moist convection using the National Center for Atmospheric Research fully coupled ocean-atmosphere Community Earth System Model. An ensemble of historical period simulations and an ensemble of end-of-the-century simulations under the Representative Concentration Pathway 8.5 scenario for global warming quantify some of the implications of parameter uncertainty for simulation of precipitation processes in current climate and in projections of climate change. Regional precipitation patterns prove highly sensitive to the parameter perturbations, especially in the tropics. In the historical period, nonlinear parameter response with local changes exceeding 3 mm/day is noted. Over the full range of parameters, the error with respect to observations is within the range typical of different climate models used in the Coupled Model Intercomparison Project phase 5 (CMIP5). For parameter perturbations within this range, differences in the end of century projections for global warming precipitation change also regionally exceed 3 mm/day, also comparable to differences in global warming predictions among the CMIP5 models. These results suggest that improving constraints within moist convective parameterized processes based on better assessment against observations in the historical period will be required to reduce the range of uncertainty in regional projections of precipitation change.

  18. Multiplicity Control in Structural Equation Modeling: Incorporating Parameter Dependencies

    ERIC Educational Resources Information Center

    Smith, Carrie E.; Cribbie, Robert A.

    2013-01-01

    When structural equation modeling (SEM) analyses are conducted, significance tests for all important model relationships (parameters including factor loadings, covariances, etc.) are typically conducted at a specified nominal Type I error rate ([alpha]). Despite the fact that many significance tests are often conducted in SEM, rarely is…

  19. Estimability of Parameters in the Generalized Graded Unfolding Model.

    ERIC Educational Resources Information Center

    Roberts, James S.; Donoghue, John R.; Laughlin, James E.

    The generalized graded unfolding model (GGUM) (J. Roberts, J. Donoghue, and J. Laughlin, 1998) is an item response theory model designed to analyze binary or graded responses that are based on a proximity relation. The purpose of this study was to assess conditions under which item parameter estimation accuracy increases or decreases, with special…

  20. Separability of Item and Person Parameters in Response Time Models.

    ERIC Educational Resources Information Center

    Van Breukelen, Gerard J. P.

    1997-01-01

    Discusses two forms of separability of item and person parameters in the context of response time models. The first is "separate sufficiency," and the second is "ranking independence." For each form a theorem stating sufficient conditions is proved. The two forms are shown to include several cases of models from psychometric and biometric…

  1. Global Characterization of Model Parameter Space Using Information Topology

    NASA Astrophysics Data System (ADS)

    Transtrum, Mark

    A generic parameterized model is a mapping between parameters and data and is naturally interpreted as a prediction manifold embedded in data space. In this interpretation, known as Information Geometry, the Fisher Information Matrix (FIM) is a Riemannian metric that measures the identifiability of the model parameters. Varying the experimental conditions (e.g., times at which measurements are made) alters both the FIM and the geometric properties of the model. However, several global features of the model manifold (e.g., edges and corners) are invariant to changes in experimental conditions as long as the FIM is not singular. Invariance of these features to changing experimental conditions generates an ''Information Topology'' that globally characterizes a model's parameter space and reflects the underlying physical principles from which the model was derived. Understanding a model's information topology can give insights into the emergent physics that controls a system's collective behavior, identify reduced models and describe the relationship among them, and determine which parameter combinations will be difficult to identify for various experimental conditions.
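
    The FIM underlying this picture can be approximated numerically from a model's prediction Jacobian; a minimal sketch, assuming unit observation noise and a toy two-exponential model, is given below. Widely separated eigenvalues indicate "sloppy", poorly identifiable parameter combinations.

```python
# Finite-difference sketch of the Fisher Information Matrix for a parameterized model:
# J[i, j] = d(prediction_i)/d(theta_j), FIM = J^T J (unit observation noise assumed).
import numpy as np

t_obs = np.linspace(0.0, 5.0, 30)

def predictions(theta):
    """Toy two-exponential model; replace with the model of interest."""
    a, b = theta
    return np.exp(-a * t_obs) + np.exp(-b * t_obs)

def fisher_information(theta, eps=1e-6):
    theta = np.asarray(theta, dtype=float)
    f0 = predictions(theta)
    J = np.zeros((f0.size, theta.size))
    for j in range(theta.size):
        tp = theta.copy()
        tp[j] += eps
        J[:, j] = (predictions(tp) - f0) / eps
    return J.T @ J

fim = fisher_information([1.0, 1.05])
eigvals = np.linalg.eigvalsh(fim)
print("FIM eigenvalues:", eigvals)   # widely separated eigenvalues indicate sloppiness
```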

  2. Atmospheric turbulence parameters for modeling wind turbine dynamics

    NASA Technical Reports Server (NTRS)

    Holley, W. E.; Thresher, R. W.

    1982-01-01

    A model which can be used to predict the response of wind turbines to atmospheric turbulence is given. The model was developed using linearized aerodynamics for a three-bladed rotor and accounts for three turbulent velocity components as well as velocity gradients across the rotor disk. Typical response power spectral densities are shown. The system response depends critically on three wind and turbulence parameters, and models are presented to predict desired response statistics. An equation error method, which can be used to estimate the required parameters from field data, is also presented.

  3. Distributed activation energy model parameters of some Turkish coals

    SciTech Connect

    Gunes, M.; Gunes, S.K.

    2008-07-01

    A multi-reaction model based on distributed activation energy has been applied to some Turkish coals. The kinetic parameters of distributed activation energy model were calculated via computer program developed for this purpose. It was observed that the values of mean of activation energy distribution vary between 218 and 248 kJ/mol, and the values of standard deviation of activation energy distribution vary between 32 and 70 kJ/mol. The correlations between kinetic parameters of the distributed activation energy model and certain properties of coal have been investigated.

  4. Maximum likelihood estimation for distributed parameter models of flexible spacecraft

    NASA Technical Reports Server (NTRS)

    Taylor, L. W., Jr.; Williams, J. L.

    1989-01-01

    A distributed-parameter model of the NASA Solar Array Flight Experiment spacecraft structure is constructed on the basis of measurement data and analyzed to generate a priori estimates of modal frequencies and mode shapes. A Newton-Raphson maximum-likelihood algorithm is applied to determine the unknown parameters, using a truncated model for the estimation and the full model for the computation of the higher modes. Numerical results are presented in a series of graphs and briefly discussed, and the significant improvement in computation speed obtained by parallel implementation of the method on a supercomputer is noted.

  5. Uncertainty Analysis and Parameter Estimation For Nearshore Hydrodynamic Models

    NASA Astrophysics Data System (ADS)

    Ardani, S.; Kaihatu, J. M.

    2012-12-01

    Numerical models represent deterministic approaches used for the relevant physical processes in the nearshore. Complexity of the physics of the model and uncertainty involved in the model inputs compel us to apply a stochastic approach to analyze the robustness of the model. The Bayesian inverse problem is one powerful way to estimate the important input model parameters (determined by a priori sensitivity analysis) and can be used for uncertainty analysis of the outputs. Bayesian techniques can be used to find the range of most probable parameters based on the probability of the observed data and the residual errors. In this study, the effect of input data involving lateral (Neumann) boundary conditions, bathymetry and off-shore wave conditions on nearshore numerical models is considered. Monte Carlo simulation is applied to a deterministic numerical model (the Delft3D modeling suite for coupled waves and flow) for the resulting uncertainty analysis of the outputs (wave height, flow velocity, mean sea level, etc.). Uncertainty analysis of outputs is performed by random sampling from the input probability distribution functions and running the model as required until convergence to consistent results is achieved. The case study used in this analysis is the Duck94 experiment, which was conducted at the U.S. Army Field Research Facility at Duck, North Carolina, USA in the fall of 1994. The joint probability of model parameters relevant for the Duck94 experiments will be found using the Bayesian approach. We will further show that, by using Bayesian techniques to estimate the optimized model parameters as inputs and applying them for uncertainty analysis, we can obtain more consistent results than using the prior information for input data, which means that the variation of the uncertain parameter will be decreased and the probability of the observed data will improve as well. Keywords: Monte Carlo Simulation, Delft3D, uncertainty analysis, Bayesian techniques

  6. Effects of anodizing parameters and heat treatment on nanotopographical features, bioactivity, and cell culture response of additively manufactured porous titanium.

    PubMed

    Amin Yavari, S; Chai, Y C; Böttger, A J; Wauthle, R; Schrooten, J; Weinans, H; Zadpoor, A A

    2015-06-01

    Anodizing could be used for bio-functionalization of the surfaces of titanium alloys. In this study, we use anodizing for creating nanotubes on the surface of porous titanium alloy bone substitutes manufactured using selective laser melting. Different sets of anodizing parameters (voltage: 10 or 20 V; anodizing time: 30 min to 3 h) are used for anodizing porous titanium structures that were later heat treated at 500°C. The nanotopographical features are examined using electron microscopy while the bioactivity of anodized surfaces is measured using immersion tests in simulated body fluid (SBF). Moreover, the effects of anodizing and heat treatment on the performance of one representative anodized porous titanium structure are evaluated using in vitro cell culture assays with human periosteum-derived cells (hPDCs). It has been shown that while anodizing with different anodizing parameters results in very different nanotopographical features, i.e. nanotubes in the range of 20 to 55 nm, anodized surfaces have limited apatite-forming ability regardless of the applied anodizing parameters. The results of in vitro cell culture show that both anodizing, and thus generation of regular nanotopographical features, and heat treatment improve the cell culture response of porous titanium. In particular, cell proliferation measured using metabolic activity and DNA content was improved for anodized and heat-treated as well as for anodized but not heat-treated specimens. Heat treatment additionally improved the cell attachment of porous titanium surfaces and upregulated expression of osteogenic markers. Anodized but not heat-treated specimens showed some limited signs of upregulated expression of osteogenic markers. In conclusion, while varying the anodizing parameters creates different nanotube structures, it does not improve the apatite-forming ability of porous titanium. However, both anodizing and heat treatment at 500°C improve the cell culture response of porous titanium. PMID

  7. Improvement of Continuous Hydrologic Models and HMS SMA Parameters Reduction

    NASA Astrophysics Data System (ADS)

    Rezaeian Zadeh, Mehdi; Zia Hosseinipour, E.; Abghari, Hirad; Nikian, Ashkan; Shaeri Karimi, Sara; Moradzadeh Azar, Foad

    2010-05-01

    Hydrological models can help us to predict stream flows and associated runoff volumes of rainfall events within a watershed. There are many different reasons why we need to model the rainfall-runoff processes for a watershed. However, the main reason is the limitation of hydrological measurement techniques and the costs of data collection at a fine scale. Generally, we are not able to measure all that we would like to know about a given hydrological system. This is particularly the case for ungauged catchments. Since the ultimate aim of prediction using models is to improve decision-making about a hydrological problem, having a robust and efficient modeling tool becomes an important factor. Among several hydrologic modeling approaches, continuous simulation has the best predictions because it can model dry and wet conditions during a long-term period. Continuous hydrologic models, unlike event-based models, account for a watershed's soil moisture balance over a long-term period and are suitable for simulating daily, monthly, and seasonal streamflows. In this paper, we describe a soil moisture accounting (SMA) algorithm added to the hydrologic modeling system (HEC-HMS) computer program. As is well known in the hydrologic modeling community, one of the ways of improving a model's utility is the reduction of input parameters. The enhanced model developed in this study is applied to the Khosrow Shirin Watershed, located in the north-west part of Fars Province in Iran, a data-limited watershed. The HMS SMA algorithm divides the potential path of rainfall onto a watershed into five zones. The results showed that the output of HMS SMA is insensitive to the variation of many parameters, such as soil storage and soil percolation rate. The study's objective is to remove insensitive parameters from the model input using multi-objective sensitivity analysis. Keywords: Continuous Hydrologic Modeling, HMS SMA, Multi-objective sensitivity analysis, SMA Parameters

  8. Performance and Probabilistic Verification of Regional Parameter Estimates for Conceptual Rainfall-runoff Models

    NASA Astrophysics Data System (ADS)

    Franz, K.; Hogue, T.; Barco, J.

    2007-12-01

    Identification of appropriate parameter sets for simulation of streamflow in ungauged basins has become a significant challenge for both operational and research hydrologists. This is especially difficult in the case of conceptual models, when model parameters typically must be "calibrated" or adjusted to match streamflow conditions in specific systems (i.e. some of the parameters are not directly observable). This paper addresses the performance and uncertainty associated with transferring conceptual rainfall-runoff model parameters between basins within large-scale ecoregions. We use the National Weather Service's (NWS) operational hydrologic model, the SACramento Soil Moisture Accounting (SAC-SMA) model. A Multi-Step Automatic Calibration Scheme (MACS), using the Shuffle Complex Evolution (SCE), is used to optimize SAC-SMA parameters for a group of watersheds with extensive hydrologic records from the Model Parameter Estimation Experiment (MOPEX) database. We then explore "hydroclimatic" relationships between basins to facilitate regionalization of parameters for an established ecoregion in the southeastern United States. The impact of regionalized parameters is evaluated via standard model performance statistics as well as through generation of hindcasts and probabilistic verification procedures to evaluate streamflow forecast skill. Preliminary results show climatology ("climate neighbor") to be a better indicator of transferability than physical similarities or proximity ("nearest neighbor"). The mean and median of all the parameters within the ecoregion are the poorest choice for the ungauged basin. The choice of regionalized parameter set affected the skill of the ensemble streamflow hindcasts, however, all parameter sets show little skill in forecasts after five weeks (i.e. climatology is as good an indicator of future streamflows). In addition, the optimum parameter set changed seasonally, with the "nearest neighbor" showing the highest skill in the

  9. Soil-related Input Parameters for the Biosphere Model

    SciTech Connect

    A. J. Smith

    2003-07-02

    This analysis is one of the technical reports containing documentation of the Environmental Radiation Model for Yucca Mountain Nevada (ERMYN), a biosphere model supporting the Total System Performance Assessment (TSPA) for the geologic repository at Yucca Mountain. The biosphere model is one of a series of process models supporting the Total System Performance Assessment (TSPA) for the Yucca Mountain repository. A graphical representation of the documentation hierarchy for the ERMYN biosphere model is presented in Figure 1-1. This figure shows the interrelationships among the products (i.e., analysis and model reports) developed for biosphere modeling, and the plan for development of the biosphere abstraction products for TSPA, as identified in the ''Technical Work Plan: for Biosphere Modeling and Expert Support'' (BSC 2003 [163602]). It should be noted that some documents identified in Figure 1-1 may be under development at the time this report is issued and therefore not available. This figure is included to provide an understanding of how this analysis report contributes to biosphere modeling in support of the license application, and is not intended to imply that access to the listed documents is required to understand the contents of this report. This report, ''Soil Related Input Parameters for the Biosphere Model'', is one of the five analysis reports that develop input parameters for use in the ERMYN model. This report is the source documentation for the six biosphere parameters identified in Table 1-1. ''The Biosphere Model Report'' (BSC 2003 [160699]) describes in detail the conceptual model as well as the mathematical model and its input parameters. The purpose of this analysis was to develop the biosphere model parameters needed to evaluate doses from pathways associated with the accumulation and depletion of radionuclides in the soil. These parameters support the calculation of radionuclide concentrations in soil from on-going irrigation and ash

  10. Bayesian methods for characterizing unknown parameters of material models

    DOE PAGESBeta

    Emery, J. M.; Grigoriu, M. D.; Field Jr., R. V.

    2016-02-04

    A Bayesian framework is developed for characterizing the unknown parameters of probabilistic models for material properties. In this framework, the unknown parameters are viewed as random and described by their posterior distributions obtained from prior information and measurements of quantities of interest that are observable and depend on the unknown parameters. The proposed Bayesian method is applied to characterize an unknown spatial correlation of the conductivity field in the definition of a stochastic transport equation and to solve this equation by Monte Carlo simulation and stochastic reduced order models (SROMs). As a result, the Bayesian method is also employed to characterize unknown parameters of material properties for laser welds from measurements of peak forces sustained by these welds.

  11. Automatic Determination of the Conic Coronal Mass Ejection Model Parameters

    NASA Technical Reports Server (NTRS)

    Pulkkinen, A.; Oates, T.; Taktakishvili, A.

    2009-01-01

    Characterization of the three-dimensional structure of solar transients using incomplete plane of sky data is a difficult problem whose solutions have potential for societal benefit in terms of space weather applications. In this paper transients are characterized in three dimensions by means of the conic coronal mass ejection (CME) approximation. A novel method for the automatic determination of cone model parameters from observed halo CMEs is introduced. The method uses both standard image processing techniques to extract the CME mass from white-light coronagraph images and a novel inversion routine providing the final cone parameters. A bootstrap technique is used to provide model parameter distributions. When combined with heliospheric modeling, the cone model parameter distributions will provide direct means for ensemble predictions of transient propagation in the heliosphere. An initial validation of the automatic method is carried out by comparison to manually determined cone model parameters. It is shown using 14 halo CME events that there is reasonable agreement, especially between the heliocentric locations of the cones derived with the two methods. It is argued that both the heliocentric locations and the opening half-angles of the automatically determined cones may be more realistic than those obtained from the manual analysis.

  12. Parameter Estimation for Single Diode Models of Photovoltaic Modules

    SciTech Connect

    Hansen, Clifford

    2015-03-01

    Many popular models for photovoltaic system performance employ a single diode model to compute the I-V curve for a module or string of modules at given irradiance and temperature conditions. A single diode model requires a number of parameters to be estimated from measured I-V curves. Many available parameter estimation methods use only short circuit, open circuit and maximum power points for a single I-V curve at standard test conditions, together with temperature coefficients determined separately for individual cells. In contrast, module testing frequently records I-V curves over a wide range of irradiance and temperature conditions which, when available, should also be used to parameterize the performance model. We present a parameter estimation method that makes use of a full range of available I-V curves. We verify the accuracy of the method by recovering known parameter values from simulated I-V curves. We validate the method by estimating model parameters for a module using outdoor test data and predicting the outdoor performance of the module.
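
    The single diode model referred to here is the implicit relation I = IL - I0[exp((V + I·Rs)/(n·Ns·Vth)) - 1] - (V + I·Rs)/Rsh; the sketch below solves it by simple fixed-point iteration with illustrative parameter values, not values from the report.

```python
# Single-diode model sketch: solve the implicit I-V relation by fixed-point
# iteration at a given voltage. Parameter values are illustrative only.
import numpy as np

def single_diode_current(V, IL=8.0, I0=1e-9, Rs=0.3, Rsh=300.0, n=1.3, Ns=60, T=298.15):
    k, q = 1.380649e-23, 1.602176634e-19
    Vth = n * Ns * k * T / q                  # modified thermal voltage for the module
    I = IL                                    # initial guess
    for _ in range(200):                      # fixed-point iteration on the implicit equation
        I = IL - I0 * (np.exp((V + I * Rs) / Vth) - 1.0) - (V + I * Rs) / Rsh
    return I

for V in (0.0, 20.0, 30.0, 35.0):
    print(f"V = {V:5.1f} V  ->  I = {single_diode_current(V):6.3f} A")
```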

  13. NB-PLC channel modelling with cyclostationary noise addition & OFDM implementation for smart grid

    NASA Astrophysics Data System (ADS)

    Thomas, Togis; Gupta, K. K.

    2016-03-01

    Power line communication (PLC) technology can be a viable solution for future ubiquitous networks because it provides a cheaper alternative to other wired technologies currently used for communication. In the smart grid, PLC is used to support low-rate communication on the low voltage (LV) distribution network. In this paper, we propose a channel model for narrowband (NB) PLC in the frequency range 5 kHz to 500 kHz by using ABCD parameters with cyclostationary noise addition. The behaviour of the channel was studied by adding an 11 kV/230 V transformer and by varying the load and its location. Bit error rate (BER) versus signal-to-noise ratio (SNR) was plotted for the proposed model by employing OFDM. Our simulation results based on the proposed channel model show an acceptable performance in terms of bit error rate versus signal-to-noise ratio, which enables the communication required for smart grid applications.
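
    For illustration only, the following sketch cascades ABCD matrices of uniform line sections and a shunt branch into a voltage transfer function over the 5-500 kHz band; the per-metre line constants, section lengths, branch impedance and terminations are invented placeholders, not values from the paper.

```python
import numpy as np

def line_abcd(f, length, R=0.5, L=1.0e-6, G=1.0e-9, C=1.0e-10):
    """ABCD matrix of a uniform line section; per-metre constants R, L, G, C are placeholders."""
    w = 2 * np.pi * f
    z = R + 1j * w * L                    # series impedance per metre
    y = G + 1j * w * C                    # shunt admittance per metre
    gamma = np.sqrt(z * y)                # propagation constant
    z0 = np.sqrt(z / y)                   # characteristic impedance
    gl = gamma * length
    return np.array([[np.cosh(gl), z0 * np.sinh(gl)],
                     [np.sinh(gl) / z0, np.cosh(gl)]])

def shunt_abcd(z_branch):
    """ABCD matrix of a shunt branch (a tapped load, or a transformer seen as an impedance)."""
    return np.array([[1.0, 0.0], [1.0 / z_branch, 1.0]])

def channel_response(f, zs=50.0, zl=50.0):
    """Cascade: 200 m of line, a shunt load branch, 100 m of line (topology is illustrative)."""
    m = line_abcd(f, 200.0) @ shunt_abcd(30.0 + 1j * 2 * np.pi * f * 1e-5) @ line_abcd(f, 100.0)
    a, b, c, d = m.ravel()
    return zl / (a * zl + b + c * zs * zl + d * zs)

freqs = np.linspace(5e3, 500e3, 200)              # NB-PLC band considered in the paper
H = np.array([channel_response(f) for f in freqs])
gain_db = 20 * np.log10(np.abs(H))
print("channel gain range: %.1f dB to %.1f dB" % (gain_db.min(), gain_db.max()))
```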

  14. Genomic prediction of growth in pigs based on a model including additive and dominance effects.

    PubMed

    Lopes, M S; Bastiaansen, J W M; Janss, L; Knol, E F; Bovenhuis, H

    2016-06-01

    Independent of whether prediction is based on pedigree or genomic information, the focus of animal breeders has been on additive genetic effects or 'breeding values'. However, when predicting phenotypes rather than breeding values of an animal, models that account for both additive and dominance effects might be more accurate. Our aim with this study was to compare the accuracy of predicting phenotypes using a model that accounts for only additive effects (MA) and a model that accounts for both additive and dominance effects simultaneously (MAD). Lifetime daily gain (DG) was evaluated in three pig populations (1424 Pietrain, 2023 Landrace, and 2157 Large White). Animals were genotyped using the Illumina SNP60K Beadchip and assigned to either a training data set to estimate the genetic parameters and SNP effects, or to a validation data set to assess the prediction accuracy. Models MA and MAD applied random regression on SNP genotypes and were implemented in the program Bayz. The additive heritability of DG across the three populations and the two models was very similar at approximately 0.26. The proportion of phenotypic variance explained by dominance effects ranged from 0.04 (Large White) to 0.11 (Pietrain), indicating that importance of dominance might be breed-specific. Prediction accuracies were higher when predicting phenotypes using total genetic values (sum of breeding values and dominance deviations) from the MAD model compared to using breeding values from both MA and MAD models. The highest increase in accuracy (from 0.195 to 0.222) was observed in the Pietrain, and the lowest in Large White (from 0.354 to 0.359). Predicting phenotypes using total genetic values instead of breeding values in purebred data improved prediction accuracy and reduced the bias of genomic predictions. Additional benefit of the method is expected when applied to predict crossbred phenotypes, where dominance levels are expected to be higher. PMID:26676611

  15. Role of anaerobic fungi in wheat straw degradation and effects of plant feed additives on rumen fermentation parameters in vitro.

    PubMed

    Dagar, S S; Singh, N; Goel, N; Kumar, S; Puniya, A K

    2015-01-01

    In the present study, rumen microbial groups, i.e. total rumen microbes (TRM), total anaerobic fungi (TAF), avicel enriched bacteria (AEB) and neutral detergent fibre enriched bacteria (NEB) were evaluated for wheat straw (WS) degradability and different fermentation parameters in vitro. Highest WS degradation was shown for TRM, followed by TAF, NEB and least by AEB. Similar patterns were observed with total gas production and short chain fatty acid profiles. Overall, TAF emerged as the most potent individual microbial group. In order to enhance the fibrolytic and rumen fermentation potential of TAF, we evaluated 18 plant feed additives in vitro. Among these, six plant additives namely Albizia lebbeck, Alstonia scholaris, Bacopa monnieri, Lawsonia inermis, Psidium guajava and Terminalia arjuna considerably improved WS degradation by TAF. Further evaluation showed A. lebbeck as best feed additive. The study revealed that TAF plays a significant role in WS degradation and their fibrolytic activities can be improved by inclusion of A. lebbeck in fermentation medium. Further studies are warranted to elucidate its active constituents, effect on fungal population and in vivo potential in animal system. PMID:25391347

  16. Simultaneous estimation of parameters in the bivariate Emax model.

    PubMed

    Magnusdottir, Bergrun T; Nyquist, Hans

    2015-12-10

    In this paper, we explore inference in multi-response, nonlinear models. By multi-response, we mean models with m > 1 response variables and accordingly m relations. Each parameter/explanatory variable may appear in one or more of the relations. We study a system estimation approach for simultaneous computation and inference of the model and (co)variance parameters. For illustration, we fit a bivariate Emax model to diabetes dose-response data. Further, the bivariate Emax model is used in a simulation study that compares the system estimation approach to equation-by-equation estimation. We conclude that overall, the system estimation approach performs better for the bivariate Emax model when there are dependencies among relations. The stronger the dependencies, the more we gain in precision by using system estimation rather than equation-by-equation estimation. PMID:26190048
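
    A minimal sketch of the joint estimation idea, under simplifying assumptions: two Emax relations, E(d) = E0 + Emax*d/(ED50 + d), are fitted simultaneously by stacking their residuals into a single least-squares problem on synthetic data. The paper's system approach additionally estimates the (co)variance parameters and exploits cross-equation dependence, which this ordinary least-squares sketch omits.

```python
import numpy as np
from scipy.optimize import least_squares

def emax(dose, e0, emax_, ed50):
    """Standard Emax dose-response relation."""
    return e0 + emax_ * dose / (ed50 + dose)

def stacked_residuals(theta, dose, y1, y2):
    """Residuals of both responses stacked into one vector, so both relations
    enter a single simultaneous minimisation."""
    e0_1, emax_1, ed50_1, e0_2, emax_2, ed50_2 = theta
    r1 = y1 - emax(dose, e0_1, emax_1, ed50_1)
    r2 = y2 - emax(dose, e0_2, emax_2, ed50_2)
    return np.concatenate([r1, r2])

rng = np.random.default_rng(42)
dose = np.repeat([0, 5, 25, 50, 100, 150], 8).astype(float)
# Two dependent responses generated from known Emax curves (purely illustrative endpoints)
shared = rng.normal(0, 0.4, size=dose.size)          # shared noise induces dependence
y1 = emax(dose, 1.0, -1.5, 30.0) + shared + rng.normal(0, 0.3, dose.size)
y2 = emax(dose, 7.5, -2.0, 40.0) + shared + rng.normal(0, 0.3, dose.size)

theta0 = np.array([0.5, -1.0, 20.0, 7.0, -1.0, 20.0])
fit = least_squares(stacked_residuals, theta0, args=(dose, y1, y2))
print("estimates (E0, Emax, ED50 for each response):", np.round(fit.x, 2))
```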

  17. Parameter selection and testing the soil water model SOIL

    NASA Astrophysics Data System (ADS)

    McGechan, M. B.; Graham, R.; Vinten, A. J. A.; Douglas, J. T.; Hooda, P. S.

    1997-08-01

    The soil water and heat simulation model SOIL was tested for its suitability to study the processes of transport of water in soil. Required parameters, particularly soil hydraulic parameters, were determined by field and laboratory tests for some common soil types and for soils subjected to contrasting treatments of long-term grassland and tilled land under cereal crops. Outputs from simulations were shown to be in reasonable agreement with independently measured field drain outflows and soil water content histories.

  18. Application of physical parameter identification to finite-element models

    NASA Technical Reports Server (NTRS)

    Bronowicki, Allen J.; Lukich, Michael S.; Kuritz, Steven P.

    1987-01-01

    The time domain parameter identification method described previously is applied to TRW's Large Space Structure Truss Experiment. Only control sensors and actuators are employed in the test procedure. The fit of the linear structural model to the test data is improved by more than an order of magnitude using a physically reasonable parameter set. The electro-magnetic control actuators are found to contribute significant damping due to a combination of eddy current and back electro-motive force (EMF) effects. Uncertainties in both estimated physical parameters and modal behavior variables are given.

  19. Estimation of nonlinear pilot model parameters including time delay.

    NASA Technical Reports Server (NTRS)

    Schiess, J. R.; Roland, V. R.; Wells, W. R.

    1972-01-01

    Investigation of the feasibility of using a Kalman filter estimator for the identification of unknown parameters in nonlinear dynamic systems with a time delay. The problem considered is the application of estimation theory to determine the parameters of a family of pilot models containing delayed states. In particular, the pilot-plant dynamics are described by differential-difference equations of the retarded type. The pilot delay, included as one of the unknown parameters to be determined, is kept in pure form as opposed to the Pade approximations generally used for these systems. Problem areas associated with processing real pilot response data are included in the discussion.

  20. Accuracy in Parameter Estimation for Targeted Effects in Structural Equation Modeling: Sample Size Planning for Narrow Confidence Intervals

    ERIC Educational Resources Information Center

    Lai, Keke; Kelley, Ken

    2011-01-01

    In addition to evaluating a structural equation model (SEM) as a whole, often the model parameters are of interest and confidence intervals for those parameters are formed. Given a model with a good overall fit, it is entirely possible for the targeted effects of interest to have very wide confidence intervals, thus giving little information about…

  1. Parameter Estimation for a crop model: separate and joint calibration of soil and plant parameters

    NASA Astrophysics Data System (ADS)

    Hildebrandt, A.; Jackisch, C.; Luis, S.

    2008-12-01

    Vegetation plays a major role in both the atmospheric and terrestrial water cycles. A great deal of vegetation cover in the developed world consists of agriculturally used land (i.e. 44% of the territory of the EU). Therefore, crop models have become increasingly prominent for studying the impact of Global Change both on economic welfare and on the influence of vegetation on climate and its feedbacks with hydrological processes. By doing so, it is implied that crop models properly reflect the soil water balance and the vertical exchange with the atmosphere. Although crop models can be incorporated in Surface Vegetation Atmosphere Transfer Schemes for that purpose, their main focus has traditionally not been on predicting water and energy fluxes, but yield. In this research we use data from two lysimeters in Brandis (Saxony, Germany), which have been planted with the crops of the surrounding farm, to test the capability of the crop model in SWAP. The lysimeters contain different natural soil cores, leading to substantially different yield. This experiment gives the opportunity to test whether the crop model is portable - that is, whether a calibrated crop can be moved between different locations. When using the default parameters for the respective environment, the model reproduces the difference in yield and LAI for the different lysimeters neither quantitatively nor qualitatively. The separate calibration of soil and plant parameters performed poorly compared to the joint calibration of plant and soil parameters. This suggests that the model is not portable, but needs to be calibrated for individual locations, based on measurements or expert knowledge.

  2. Control of the SCOLE configuration using distributed parameter models

    NASA Technical Reports Server (NTRS)

    Hsiao, Min-Hung; Huang, Jen-Kuang

    1994-01-01

    A continuum model for the SCOLE configuration has been derived using transfer matrices. Controller designs for distributed parameter systems have been analyzed. Pole-assignment controller design is considered easy to implement, but stability is not guaranteed. An explicit transfer function of dynamic controllers has been obtained, and no model reduction is required before the controller is realized. One specific LQG controller for continuum models has been derived, but other optimal controllers for more general performance criteria need to be studied.

  3. Parameter fitting for piano sound synthesis by physical modeling

    NASA Astrophysics Data System (ADS)

    Bensa, Julien; Gipouloux, Olivier; Kronland-Martinet, Richard

    2005-07-01

    A difficult issue in the synthesis of piano tones by physical models is to choose the values of the parameters governing the hammer-string model. In fact, these parameters are hard to estimate from static measurements, causing the synthesis sounds to be unrealistic. An original approach that estimates the parameters of a piano model, from the measurement of the string vibration, by minimizing a perceptual criterion is proposed. The minimization process that was used is a combination of a gradient method and a simulated annealing algorithm, in order to avoid convergence problems in case of multiple local minima. The criterion, based on the tristimulus concept, takes into account the spectral energy density in three bands, each allowing particular parameters to be estimated. The optimization process has been run on signals measured on an experimental setup. The parameters thus estimated provided a better sound quality than the one obtained using a global energetic criterion. Both the sound's attack and its brightness were better preserved. This quality gain was obtained for parameter values very close to the initial ones, showing that only slight deviations are necessary to make synthetic sounds closer to the real ones.

  4. Model parameter estimation with data assimilation and MCMC in small and large scale models

    NASA Astrophysics Data System (ADS)

    Susiluoto, Jouni; Hakkarainen, Janne

    2014-05-01

    Climate models, in general, have non-linear responses to changing environmental forcing. Many of the participating processes contain, partly for computational reasons, simplifications, that is, parametrizations of physical phenomena. Due to the lack of complete information, and thus the mismatch between the model world and the real world, the parametrizations are not measurable but rather approximations of the properties of some abstract, simplified processes. Hence they cannot be tuned directly with observations. We investigate how MCMC, using an objective function constructed from the extended Kalman filter, helps us understand what the studied parameter posterior PDFs look like. This is done at different levels: using the Lorenz96 model as a testbed and then exporting the methods to a full-blown climate model, ECHAM5. Additionally, the limitations of the method are discussed.
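
    A generic random-walk Metropolis sketch illustrates how such parameter posterior PDFs are explored. This is not the Lorenz96/ECHAM5 setup: a plain Gaussian misfit on a toy two-parameter model stands in for the EKF-based objective, and all names and numbers are placeholders.

```python
import numpy as np

def log_posterior(theta, obs, model, sigma=0.5):
    """Gaussian log-likelihood of model(theta) against observations plus a flat prior
    inside bounds; in the paper the misfit would come from an EKF-based cost instead."""
    if np.any(theta < 0.0) or np.any(theta > 10.0):
        return -np.inf
    resid = obs - model(theta)
    return -0.5 * np.sum(resid ** 2) / sigma ** 2

def metropolis(log_post, theta0, n_steps=20000, step=0.2, rng=None):
    """Plain random-walk Metropolis sampler."""
    rng = np.random.default_rng(rng)
    chain = np.empty((n_steps, theta0.size))
    theta, lp = theta0.copy(), log_post(theta0)
    for k in range(n_steps):
        prop = theta + rng.normal(0.0, step, size=theta.size)   # random-walk proposal
        lp_prop = log_post(prop)
        if np.log(rng.uniform()) < lp_prop - lp:                # Metropolis acceptance
            theta, lp = prop, lp_prop
        chain[k] = theta
    return chain

# Toy "parametrized model": two tunable parameters mapped to a handful of outputs
toy_model = lambda th: np.array([th[0] + th[1], th[0] * th[1], th[0] - 0.5 * th[1]])
obs = toy_model(np.array([3.0, 1.5])) + np.random.default_rng(0).normal(0, 0.5, 3)

chain = metropolis(lambda th: log_posterior(th, obs, toy_model), np.array([5.0, 5.0]), rng=1)
burn = chain[5000:]
print("posterior mean:", burn.mean(axis=0), " posterior std:", burn.std(axis=0))
```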

  5. Integrated reservoir characterization: Improvement in heterogeneities stochastic modelling by integration of additional external constraints

    SciTech Connect

    Doligez, B.; Eschard, R.; Geffroy, F.

    1997-08-01

    The classical approach to constructing reservoir models is to start with a fine scale geological model which is informed with petrophysical properties. Scaling-up techniques then yield a reservoir model which is compatible with the fluid flow simulators. Geostatistical modelling techniques are widely used to build the geological models before scaling-up. These methods provide equiprobable images of the area under investigation, which honor the well data and whose variability is the same as the variability computed from the data. At an appraisal phase, when few data are available, or when the wells are insufficient to describe all the heterogeneities and the behavior of the field, additional constraints are needed to obtain a more realistic geological model. For example, seismic data or stratigraphic models can provide average reservoir information with an excellent areal coverage, but with a poor vertical resolution. New advances in modelling techniques now allow this type of additional external information to be integrated in order to constrain the simulations. In particular, 2D or 3D seismic-derived information grids, or sand-shale ratio maps coming from stratigraphic models, can be used as external drifts to compute the geological image of the reservoir at the fine scale. Examples are presented to illustrate the use of these new tools, their impact on the final reservoir model, and their sensitivity to some key parameters.

  6. Quantifying the parameters of Prusiner's heterodimer model for prion replication

    NASA Astrophysics Data System (ADS)

    Li, Z. R.; Liu, G. R.; Mi, D.

    2005-02-01

    A novel approach for the determination of parameters in prion replication kinetics is developed based on Prusiner's heterodimer model. It is proposed to employ a simple 2D HP lattice model and a two-state transition theory to determine kinetic parameters that play the key role in the prion replication process. The simulation results reveal the most important facts observed in the prion diseases, including the long incubation time, rapid death following symptom manifestation, the effect of inoculation size, different mechanisms of the familial and infectious prion diseases, etc. Extensive simulation with various thermodynamic parameters shows that the Prusiner's heterodimer model is applicable, and the putative protein X plays a critical role in replication of the prion disease.

  7. Percolation model with an additional source of disorder.

    PubMed

    Kundu, Sumanta; Manna, S S

    2016-06-01

    The ranges of transmission of the mobiles in a mobile ad hoc network are not uniform in reality. They are affected by the temperature fluctuation in air, obstruction due to the solid objects, even the humidity difference in the environment, etc. How the varying range of transmission of the individual active elements affects the global connectivity in the network may be an important practical question to ask. Here a model of percolation phenomena, with an additional source of disorder, is introduced for a theoretical understanding of this problem. As in ordinary percolation, sites of a square lattice are occupied randomly with probability p. Each occupied site is then assigned a circular disk of random value R for its radius. A bond is defined to be occupied if and only if the radii R_{1} and R_{2} of the disks centered at the ends satisfy a certain predefined condition. In a very general formulation, one divides the R_{1}-R_{2} plane into two regions by an arbitrary closed curve. One defines a point within one region as representing an occupied bond; otherwise it is a vacant bond. The study of three different rules under this general formulation indicates that the percolation threshold always varies continuously. This threshold has two limiting values, one is p_{c}(sq), the percolation threshold for the ordinary site percolation on the square lattice, and the other is unity. The approach of the percolation threshold to its limiting values are characterized by two exponents. In a special case, all lattice sites are occupied by disks of random radii R∈{0,R_{0}} and a percolation transition is observed with R_{0} as the control variable, similar to the site occupation probability. PMID:27415234

  8. Percolation model with an additional source of disorder

    NASA Astrophysics Data System (ADS)

    Kundu, Sumanta; Manna, S. S.

    2016-06-01

    The ranges of transmission of the mobiles in a mobile ad hoc network are not uniform in reality. They are affected by the temperature fluctuation in air, obstruction due to the solid objects, even the humidity difference in the environment, etc. How the varying range of transmission of the individual active elements affects the global connectivity in the network may be an important practical question to ask. Here a model of percolation phenomena, with an additional source of disorder, is introduced for a theoretical understanding of this problem. As in ordinary percolation, sites of a square lattice are occupied randomly with probability p . Each occupied site is then assigned a circular disk of random value R for its radius. A bond is defined to be occupied if and only if the radii R1 and R2 of the disks centered at the ends satisfy a certain predefined condition. In a very general formulation, one divides the R1-R2 plane into two regions by an arbitrary closed curve. One defines a point within one region as representing an occupied bond; otherwise it is a vacant bond. The study of three different rules under this general formulation indicates that the percolation threshold always varies continuously. This threshold has two limiting values, one is pc(sq) , the percolation threshold for the ordinary site percolation on the square lattice, and the other is unity. The approach of the percolation threshold to its limiting values are characterized by two exponents. In a special case, all lattice sites are occupied by disks of random radii R ∈{0 ,R0} and a percolation transition is observed with R0 as the control variable, similar to the site occupation probability.
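
    A minimal sketch of the construction described in the two records above, assuming the illustrative bond rule R1 + R2 >= 1 (in units of the lattice constant) as the "predefined condition", all sites occupied, and radii uniform in [0, R0] as in the special case; a union-find structure tests whether an open cluster spans the lattice as R0 is varied.

```python
import numpy as np

def percolates(L, p, R0, rng):
    """Square lattice: each occupied site carries a random radius R in [0, R0];
    a bond between occupied neighbours is open when R1 + R2 >= 1 (one illustrative
    rule). Returns True if an open cluster connects the top and bottom rows."""
    occupied = rng.random((L, L)) < p
    radius = rng.uniform(0.0, R0, size=(L, L))

    parent = np.arange(L * L)
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path halving
            i = parent[i]
        return i
    def union(i, j):
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj

    for x in range(L):
        for y in range(L):
            if not occupied[x, y]:
                continue
            for dx, dy in ((1, 0), (0, 1)):            # right and down neighbours
                nx, ny = x + dx, y + dy
                if nx < L and ny < L and occupied[nx, ny]:
                    if radius[x, y] + radius[nx, ny] >= 1.0:
                        union(x * L + y, nx * L + ny)

    top = {find(y) for y in range(L) if occupied[0, y]}
    bottom = {find((L - 1) * L + y) for y in range(L) if occupied[L - 1, y]}
    return len(top & bottom) > 0

rng = np.random.default_rng(3)
for R0 in (0.6, 0.8, 1.0, 1.4):                        # R0 as the control variable, p = 1
    frac = np.mean([percolates(64, 1.0, R0, rng) for _ in range(20)])
    print(f"R0 = {R0}: spanning fraction over 20 samples = {frac:.2f}")
```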

  9. Synchronous Generator Model Parameter Estimation Based on Noisy Dynamic Waveforms

    NASA Astrophysics Data System (ADS)

    Berhausen, Sebastian; Paszek, Stefan

    2016-01-01

    In recent years, system failures have occurred in many power systems all over the world, resulting in a loss of power supply to a large number of customers. To minimize the risk of power failures, it is necessary to perform multivariate investigations, including simulations, of power system operating conditions. To conduct reliable simulations, an up-to-date base of parameters of the models of generating units, containing the models of synchronous generators, is necessary. In the paper, a method is presented for parameter estimation of a synchronous generator nonlinear model based on the analysis of selected transient waveforms caused by introducing a disturbance (in the form of a pseudorandom signal) in the generator voltage regulation channel. The parameter estimation was performed by minimizing the objective function defined as the mean square error of the deviations between the measured waveforms and the waveforms calculated from the generator mathematical model. A hybrid algorithm was used for the minimization of the objective function. The paper also describes the filter system used for filtering the noisy measurement waveforms. The calculation results for the model of a 44 kW synchronous generator installed on a laboratory stand of the Institute of Electrical Engineering and Computer Science of the Silesian University of Technology are also given. The presented estimation method can be successfully applied to parameter estimation of different models of high-power synchronous generators operating in a power system.

  10. SPOTting model parameters using a ready-made Python package

    NASA Astrophysics Data System (ADS)

    Houska, Tobias; Kraft, Philipp; Breuer, Lutz

    2015-04-01

    The selection and parameterization of reliable process descriptions in ecological modelling is driven by several uncertainties. The procedure is highly dependent on various criteria, like the algorithm used, the likelihood function selected and the definition of the prior parameter distributions. A wide variety of tools have been developed in the past decades to optimize parameters. Some of the tools are closed source. Due to this, the choice of a specific parameter estimation method is sometimes more dependent on its availability than on its performance. A toolbox with a large set of methods can support users in deciding about the most suitable method. Further, it enables testing and comparing different methods. We developed SPOT (Statistical Parameter Optimization Tool), an open source Python package containing a comprehensive set of modules to analyze and optimize parameters of (environmental) models. SPOT comes along with a selected set of algorithms for parameter optimization and uncertainty analyses (Monte Carlo, MC; Latin Hypercube Sampling, LHS; Maximum Likelihood, MLE; Markov Chain Monte Carlo, MCMC; Shuffled Complex Evolution, SCE-UA; Differential Evolution Markov Chain, DE-MCZ), together with several likelihood functions (Bias, (log-) Nash-Sutcliffe model efficiency, Correlation Coefficient, Coefficient of Determination, Covariance, (Decomposed-, Relative-, Root-) Mean Squared Error, Mean Absolute Error, Agreement Index) and prior distributions (Binomial, Chi-Square, Dirichlet, Exponential, Laplace, (log-, multivariate-) Normal, Pareto, Poisson, Cauchy, Uniform, Weibull) to sample from. The model-independent structure makes it suitable to analyze a wide range of applications. We apply all algorithms of the SPOT package in three different case studies. Firstly, we investigate the response of the Rosenbrock function, where the MLE algorithm shows its strengths. Secondly, we study the Griewank function, which has a challenging response surface for

  11. Parameter uncertainty and interaction in complex environmental models

    NASA Astrophysics Data System (ADS)

    Spear, Robert C.; Grieb, Thomas M.; Shang, Nong

    1994-11-01

    Recently developed models for the estimation of risks arising from the release of toxic chemicals from hazardous waste sites are inherently complex both structurally and parametrically. To better understand the impact of uncertainty and interaction in the high-dimensional parameter spaces of these models, the set of procedures termed regional sensitivity analysis has been extended and applied to the groundwater pathway of the MMSOILS model. The extension consists of a tree-structured density estimation technique which allows the characterization of complex interaction in that portion of the parameter space which gives rise to successful simulation. Results show that the parameter space can be partitioned into small, densely populated regions and relatively large, sparsely populated regions. From the high-density regions one can identify the important or controlling parameters as well as the interaction between parameters in different local areas of the space. This new tool can provide guidance in the analysis and interpretation of site-specific application of these complex models.

  12. Optimizing Muscle Parameters in Musculoskeletal Modeling Using Monte Carlo Simulations

    NASA Technical Reports Server (NTRS)

    Hanson, Andrea; Reed, Erik; Cavanagh, Peter

    2011-01-01

    Astronauts assigned to long-duration missions experience bone and muscle atrophy in the lower limbs. The use of musculoskeletal simulation software has become a useful tool for modeling joint and muscle forces during human activity in reduced gravity as access to direct experimentation is limited. Knowledge of muscle and joint loads can better inform the design of exercise protocols and exercise countermeasure equipment. In this study, the LifeModeler(TM) (San Clemente, CA) biomechanics simulation software was used to model a squat exercise. The initial model using default parameters yielded physiologically reasonable hip-joint forces. However, no activation was predicted in some large muscles such as rectus femoris, which have been shown to be active in 1-g performance of the activity. Parametric testing was conducted using Monte Carlo methods and combinatorial reduction to find a muscle parameter set that more closely matched physiologically observed activation patterns during the squat exercise. Peak hip joint force using the default parameters was 2.96 times body weight (BW) and increased to 3.21 BW in an optimized, feature-selected test case. The rectus femoris was predicted to peak at 60.1% activation following muscle recruitment optimization, compared to 19.2% activation with default parameters. These results indicate the critical role that muscle parameters play in joint force estimation and the need for exploration of the solution space to achieve physiologically realistic muscle activation.

  13. [Parameter uncertainty analysis for urban rainfall runoff modelling].

    PubMed

    Huang, Jin-Liang; Lin, Jie; Du, Peng-Fei

    2012-07-01

    An urban watershed in Xiamen was selected to perform the parameter uncertainty analysis for urban stormwater runoff modeling in terms of identification and sensitivity analysis based on storm water management model (SWMM) using Monte-Carlo sampling and regionalized sensitivity analysis (RSA) algorithm. Results show that Dstore-Imperv, Dstore-Perv and Curve Number (CN) are the identifiable parameters with larger K-S values in hydrological and hydraulic module, and the rank of K-S values in hydrological and hydraulic module is Dstore-Imperv > CN > Dstore-Perv > N-Perv > conductivity > Con-Mann > N-Imperv. With regards to water quality module, the parameters in exponent washoff model including Coefficient and Exponent and the Max. Buildup parameter of saturation buildup model in three land cover types are the identifiable parameters with the larger K-S values. In comparison, the K-S value of rate constant in three landuse/cover types is smaller than that of Max. Buildup, Coefficient and Exponent. PMID:23002595
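
    The regionalized sensitivity analysis step can be sketched generically: sample parameters by Monte Carlo, split the runs into behavioural and non-behavioural by an error threshold, and rank parameters by the two-sample K-S statistic between the two groups. The toy "model", parameter ranges and threshold below are placeholders, not SWMM or the Xiamen catchment.

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(7)
n = 5000
# Monte Carlo sample of three illustrative parameters (uniform priors)
params = {
    "Dstore_Imperv": rng.uniform(0.0, 5.0, n),
    "Dstore_Perv":   rng.uniform(0.0, 10.0, n),
    "CN":            rng.uniform(40.0, 98.0, n),
}

# Toy runoff "model" and error measure standing in for SWMM and its objective function
obs_peak = 60.0
sim_peak = (0.8 * params["CN"] - 2.0 * params["Dstore_Imperv"]
            - 0.3 * params["Dstore_Perv"] + rng.normal(0, 3.0, n))
error = np.abs(sim_peak - obs_peak)

behavioural = error < np.quantile(error, 0.2)      # best 20% of runs kept as behavioural

# RSA: a parameter is influential if its behavioural and non-behavioural
# marginal distributions differ, measured by the two-sample K-S statistic
for name, values in params.items():
    result = ks_2samp(values[behavioural], values[~behavioural])
    print(f"{name:>14s}  K-S = {result.statistic:.3f}")
```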

  14. The effects of additive gases (Ar, N2, H2, Cl2, O2) on HCl plasma parameters and composition

    NASA Astrophysics Data System (ADS)

    Efremov, A.; Yudina, A.; Davlyatshina, A.; Murin, D.; Svetsov, V.

    2013-01-01

    The direct current (dc) glow discharge plasma parameters and active species kinetics in HCl-X (X = Ar, N2, H2, Cl2, O2) mixtures were studied using both plasma diagnostics (Langmuir probes) and modeling. The 0-dimensional self-consistent steady-state model included the simultaneous solution of the Boltzmann kinetic equation, the equations of chemical kinetics for neutral and charged particles, the plasma conductivity equation and the quasi-neutrality conditions for the volume densities of charged particles as well as for their fluxes to the reactor walls. Data on the steady-state electron energy distribution function, electron gas characteristics (mean energy, drift rate and transport coefficients), volume-averaged densities of plasma active species and their fluxes to the reactor walls were obtained as functions of gas mixing ratio and gas pressure at fixed discharge current.

  15. Hematological parameters in Polish mixed breed rabbits with addition of meat breed blood in the annual cycle.

    PubMed

    Tokarz-Deptuła, B; Niedźwiedzka-Rystwej, P; Adamiak, M; Hukowska-Szematowicz, B; Trzeciak-Ryczek, A; Deptuła, W

    2015-01-01

    In this paper we studied haematological values, such as haemoglobin concentration, haematocrit value, thrombocytes, and leucocytes (lymphocytes, neutrophils, basophils, eosinophils and monocytes), in the peripheral blood of Polish mixed-breed rabbits with an addition of meat breed blood, in order to obtain reference values which have until now not been available for these animals. In studying these indices we took into consideration the impact of the season (spring, summer, autumn, winter) and the sex of the animals. The studies have shown a high impact of the season of the year in these rabbits, but only in spring and summer. Moreover, we observed that sex has only a moderate impact on the studied values of haematological parameters in these rabbits. According to our knowledge, this is the first paper on haematological values in this widely used group of rabbits, so they may serve as reference values. PMID:26812808

  16. Atomic modeling of cryo-electron microscopy reconstructions – Joint refinement of model and imaging parameters

    PubMed Central

    Chapman, Michael S.; Trzynka, Andrew; Chapman, Brynmor K.

    2013-01-01

    When refining the fit of component atomic structures into electron microscopic reconstructions, use of a resolution-dependent atomic density function makes it possible to jointly optimize the atomic model and imaging parameters of the microscope. Atomic density is calculated by one-dimensional Fourier transform of atomic form factors convoluted with a microscope envelope correction and a low-pass filter, allowing refinement of imaging parameters such as resolution, by optimizing the agreement of calculated and experimental maps. A similar approach allows refinement of atomic displacement parameters, providing indications of molecular flexibility even at low resolution. A modest improvement in atomic coordinates is possible following optimization of these additional parameters. Methods have been implemented in a Python program that can be used in stand-alone mode for rigid-group refinement, or embedded in other optimizers for flexible refinement with stereochemical restraints. The approach is demonstrated with refinements of virus and chaperonin structures at resolutions of 9 through 4.5 Å, representing regimes where rigid-group and fully flexible parameterizations are appropriate. Through comparisons to known crystal structures, flexible fitting by RSRef is shown to be an improvement relative to other methods and to generate models with all-atom rms accuracies of 1.5–2.5 Å at resolutions of 4.5–6 Å. PMID:23376441

  17. Atomic modeling of cryo-electron microscopy reconstructions--joint refinement of model and imaging parameters.

    PubMed

    Chapman, Michael S; Trzynka, Andrew; Chapman, Brynmor K

    2013-04-01

    When refining the fit of component atomic structures into electron microscopic reconstructions, use of a resolution-dependent atomic density function makes it possible to jointly optimize the atomic model and imaging parameters of the microscope. Atomic density is calculated by one-dimensional Fourier transform of atomic form factors convoluted with a microscope envelope correction and a low-pass filter, allowing refinement of imaging parameters such as resolution, by optimizing the agreement of calculated and experimental maps. A similar approach allows refinement of atomic displacement parameters, providing indications of molecular flexibility even at low resolution. A modest improvement in atomic coordinates is possible following optimization of these additional parameters. Methods have been implemented in a Python program that can be used in stand-alone mode for rigid-group refinement, or embedded in other optimizers for flexible refinement with stereochemical restraints. The approach is demonstrated with refinements of virus and chaperonin structures at resolutions of 9 through 4.5 Å, representing regimes where rigid-group and fully flexible parameterizations are appropriate. Through comparisons to known crystal structures, flexible fitting by RSRef is shown to be an improvement relative to other methods and to generate models with all-atom rms accuracies of 1.5-2.5 Å at resolutions of 4.5-6 Å. PMID:23376441

  18. The influence of non-solvent addition on the independent and dependent parameters in roller electrospinning of polyurethane.

    PubMed

    Cengiz-Callioglu, Funda; Jirsak, Oldrich; Dayik, Mehmet

    2013-07-01

    This paper discusses the effects of 1,1,2,2-tetrachloroethylene (TCE) non-solvent addition on the independent parameters (electrical conductivity, dielectric constant, surface tension and the rheological properties of the solution, etc.) and the dependent parameters (number of Taylor cones per square meter (NTC/m2), spinning performance for one Taylor cone (SP/TC), total spinning performance (SP), and fiber properties such as diameter, diameter uniformity and non-fibrous area) in roller electrospinning of polyurethane (PU). The same process parameters (voltage, distance of the electrodes, humidity, etc.) were applied for all solutions during the spinning process. According to the results, the effect of the TCE non-solvent concentration on the dielectric constant, surface tension, rheological properties of the solution and also on the spinning performance was statistically significant. Besides these results, the TCE non-solvent concentration affects the quality of the fibers and the nanoweb structure. Generally, high fiber density, a low non-fibrous percentage and uniform nanofibers were obtained from the fiber morphology analyses. PMID:23901497

  19. A Bayesian approach to parameter estimation in HIV dynamical models.

    PubMed

    Putter, H; Heisterkamp, S H; Lange, J M A; de Wolf, F

    2002-08-15

    In the context of a mathematical model describing HIV infection, we discuss a Bayesian modelling approach to a non-linear random effects estimation problem. The model and the data exhibit a number of features that make the use of an ordinary non-linear mixed effects model intractable: (i) the data are from two compartments fitted simultaneously against the implicit numerical solution of a system of ordinary differential equations; (ii) data from one compartment are subject to censoring; (iii) random effects for one variable are assumed to be from a beta distribution. We show how the Bayesian framework can be exploited by incorporating prior knowledge on some of the parameters, and by combining the posterior distributions of the parameters to obtain estimates of quantities of interest that follow from the postulated model. PMID:12210633

  20. Estimation of the parameters of ETAS models by Simulated Annealing

    PubMed Central

    Lombardi, Anna Maria

    2015-01-01

    This paper proposes a new algorithm to estimate the maximum likelihood parameters of an Epidemic Type Aftershock Sequences (ETAS) model. It is based on Simulated Annealing, a versatile method that solves problems of global optimization and ensures convergence to a global optimum. The procedure is tested on both simulated and real catalogs. The main conclusion is that the method performs poorly as the size of the catalog decreases because the effect of the correlation of the ETAS parameters is more significant. These results give new insights into the ETAS model and the efficiency of the maximum-likelihood method within this context. PMID:25673036

  1. Climate change decision-making: Model & parameter uncertainties explored

    SciTech Connect

    Dowlatabadi, H.; Kandlikar, M.; Linville, C.

    1995-12-31

    A critical aspect of climate change decision-making is the uncertainty in current understanding of the socioeconomic, climatic and biogeochemical processes involved. Decision-making processes are much better informed if these uncertainties are characterized and their implications understood. Quantitative analysis of these uncertainties serves to inform decision makers about the likely outcome of policy initiatives and helps set priorities for research so that outcome ambiguities faced by the decision-makers are reduced. A family of integrated assessment models of climate change has been developed at Carnegie Mellon. These models are distinguished from other integrated assessment efforts in that they were designed from the outset to characterize and propagate parameter, model, value, and decision-rule uncertainties. The most recent of these models is ICAM 2.1. This model includes representation of the processes of demographics, economic activity, emissions, atmospheric chemistry, climate and sea level change, impacts from these changes, policies for emissions mitigation, and adaptation to change. The model has over 800 objects, of which about one half are used to represent uncertainty. In this paper we show that, when considering parameter uncertainties, the relative contribution of climatic uncertainties is most important, followed by uncertainties in damage calculations, economic uncertainties and direct aerosol forcing uncertainties. When considering model structure uncertainties we find that the choice of policy is often dominated by the choice of model structure rather than by parameter uncertainties.

  2. Determination of modeling parameters for power IGBTs under pulsed power conditions

    SciTech Connect

    Dale, Gregory E; Van Gordon, Jim A; Kovaleski, Scott D

    2010-01-01

    While the power insulated gate bipolar transistor (IGBT) is used in many applications, it is not well characterized under pulsed power conditions. This makes the IGBT difficult to model for solid state pulsed power applications. The Oziemkiewicz implementation of the Hefner model is utilized to simulate IGBTs in some circuit simulation software packages. However, the seventeen parameters necessary for the Oziemkiewicz implementation must be known for the conditions under which the device will be operating. Using both experimental and simulated data with a least squares curve fitting technique, the parameters necessary to model a given IGBT can be determined. This paper presents two sets of these seventeen parameters that correspond to two different models of power IGBTs. Specifically, these parameters correspond to voltages up to 3.5 kV, currents up to 750 A, and pulse widths up to 10 µs. Additionally, comparisons of the experimental and simulated data are presented.

  3. Inhalation Exposure Input Parameters for the Biosphere Model

    SciTech Connect

    M. Wasiolek

    2006-06-05

    This analysis is one of the technical reports that support the Environmental Radiation Model for Yucca Mountain, Nevada (ERMYN), referred to in this report as the biosphere model. ''Biosphere Model Report'' (BSC 2004 [DIRS 169460]) describes in detail the conceptual model as well as the mathematical model and its input parameters. This report documents development of input parameters for the biosphere model that are related to atmospheric mass loading and supports the use of the model to develop biosphere dose conversion factors (BDCFs). The biosphere model is one of a series of process models supporting the total system performance assessment (TSPA) for a Yucca Mountain repository. ''Inhalation Exposure Input Parameters for the Biosphere Model'' is one of five reports that develop input parameters for the biosphere model. A graphical representation of the documentation hierarchy for the biosphere model is presented in Figure 1-1 (based on BSC 2006 [DIRS 176938]). This figure shows the interrelationships among the products (i.e., analysis and model reports) developed for biosphere modeling and how this analysis report contributes to biosphere modeling. This analysis report defines and justifies values of atmospheric mass loading for the biosphere model. Mass loading is the total mass concentration of resuspended particles (e.g., dust, ash) in a volume of air. Mass loading values are used in the air submodel of the biosphere model to calculate concentrations of radionuclides in air inhaled by a receptor and concentrations in air surrounding crops. Concentrations in air to which the receptor is exposed are then used in the inhalation submodel to calculate the dose contribution to the receptor from inhalation of contaminated airborne particles. Concentrations in air surrounding plants are used in the plant submodel to calculate the concentrations of radionuclides in foodstuffs contributed from uptake by foliar interception. This report is concerned primarily with the

  4. Force Field Independent Metal Parameters Using a Nonbonded Dummy Model

    PubMed Central

    2014-01-01

    The cationic dummy atom approach provides a powerful nonbonded description for a range of alkaline-earth and transition-metal centers, capturing both structural and electrostatic effects. In this work we refine existing literature parameters for octahedrally coordinated Mn2+, Zn2+, Mg2+, and Ca2+, as well as providing new parameters for Ni2+, Co2+, and Fe2+. In all the cases, we are able to reproduce both M2+–O distances and experimental solvation free energies, which has not been achieved to date for transition metals using any other model. The parameters have also been tested using two different water models and show consistent performance. Therefore, our parameters are easily transferable to any force field that describes nonbonded interactions using Coulomb and Lennard-Jones potentials. Finally, we demonstrate the stability of our parameters in both the human and Escherichia coli variants of the enzyme glyoxalase I as showcase systems, as both enzymes are active with a range of transition metals. The parameters presented in this work provide a valuable resource for the molecular simulation community, as they extend the range of metal ions that can be studied using classical approaches, while also providing a starting point for subsequent parametrization of new metal centers. PMID:24670003

  5. Force field independent metal parameters using a nonbonded dummy model.

    PubMed

    Duarte, Fernanda; Bauer, Paul; Barrozo, Alexandre; Amrein, Beat Anton; Purg, Miha; Aqvist, Johan; Kamerlin, Shina Caroline Lynn

    2014-04-24

    The cationic dummy atom approach provides a powerful nonbonded description for a range of alkaline-earth and transition-metal centers, capturing both structural and electrostatic effects. In this work we refine existing literature parameters for octahedrally coordinated Mn(2+), Zn(2+), Mg(2+), and Ca(2+), as well as providing new parameters for Ni(2+), Co(2+), and Fe(2+). In all the cases, we are able to reproduce both M(2+)-O distances and experimental solvation free energies, which has not been achieved to date for transition metals using any other model. The parameters have also been tested using two different water models and show consistent performance. Therefore, our parameters are easily transferable to any force field that describes nonbonded interactions using Coulomb and Lennard-Jones potentials. Finally, we demonstrate the stability of our parameters in both the human and Escherichia coli variants of the enzyme glyoxalase I as showcase systems, as both enzymes are active with a range of transition metals. The parameters presented in this work provide a valuable resource for the molecular simulation community, as they extend the range of metal ions that can be studied using classical approaches, while also providing a starting point for subsequent parametrization of new metal centers. PMID:24670003

  6. Global parameter estimation for thermodynamic models of transcriptional regulation.

    PubMed

    Suleimenov, Yerzhan; Ay, Ahmet; Samee, Md Abul Hassan; Dresch, Jacqueline M; Sinha, Saurabh; Arnosti, David N

    2013-07-15

    Deciphering the mechanisms involved in gene regulation holds the key to understanding the control of central biological processes, including human disease, population variation, and the evolution of morphological innovations. New experimental techniques including whole genome sequencing and transcriptome analysis have enabled comprehensive modeling approaches to study gene regulation. In many cases, it is useful to be able to assign biological significance to the inferred model parameters, but such interpretation should take into account features that affect these parameters, including model construction and sensitivity, the type of fitness calculation, and the effectiveness of parameter estimation. This last point is often neglected, as estimation methods are often selected for historical reasons or for computational ease. Here, we compare the performance of two parameter estimation techniques broadly representative of local and global approaches, namely, a quasi-Newton/Nelder-Mead simplex (QN/NMS) method and a covariance matrix adaptation-evolutionary strategy (CMA-ES) method. The estimation methods were applied to a set of thermodynamic models of gene transcription applied to regulatory elements active in the Drosophila embryo. Measuring overall fit, the global CMA-ES method performed significantly better than the local QN/NMS method on high quality data sets, but this difference was negligible on lower quality data sets with increased noise or on data sets simplified by stringent thresholding. Our results suggest that the choice of parameter estimation technique for evaluation of gene expression models depends on the quality of the data, the nature of the models, and the aims of the modeling effort. PMID:23726942

  7. Considerations for parameter optimization and sensitivity in climate models.

    PubMed

    Neelin, J David; Bracco, Annalisa; Luo, Hao; McWilliams, James C; Meyerson, Joyce E

    2010-12-14

    Climate models exhibit high sensitivity in some respects, such as for differences in predicted precipitation changes under global warming. Despite successful large-scale simulations, regional climatology features prove difficult to constrain toward observations, with challenges including high-dimensionality, computationally expensive simulations, and ambiguity in the choice of objective function. In an atmospheric General Circulation Model forced by observed sea surface temperature or coupled to a mixed-layer ocean, many climatic variables yield rms-error objective functions that vary smoothly through the feasible parameter range. This smoothness occurs despite nonlinearity strong enough to reverse the curvature of the objective function in some parameters, and to imply limitations on multimodel ensemble means as an estimator of global warming precipitation changes. Low-order polynomial fits to the model output spatial fields as a function of parameter (quadratic in model field, fourth-order in objective function) yield surprisingly successful metamodels for many quantities and facilitate a multiobjective optimization approach. Tradeoffs arise as optima for different variables occur at different parameter values, but with agreement in certain directions. Optima often occur at the limit of the feasible parameter range, identifying key parameterization aspects warranting attention--here the interaction of convection with free tropospheric water vapor. Analytic results for spatial fields of leading contributions to the optimization help to visualize tradeoffs at a regional level, e.g., how mismatches between sensitivity and error spatial fields yield regional error under minimization of global objective functions. The approach is sufficiently simple to guide parameter choices and to aid intercomparison of sensitivity properties among climate models. PMID:21115841
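
    A toy sketch of the metamodel idea described above: a low-order polynomial is fitted to an objective function sampled at a handful of parameter values, and the cheap surrogate is then minimized. The "expensive" objective here is an invented stand-in for a GCM rms-error curve in one convection parameter; nothing below comes from the paper itself.

```python
import numpy as np

def expensive_objective(alpha):
    """Stand-in for an rms-error objective obtained from a GCM run at parameter alpha."""
    return 1.0 + 4.0 * (alpha - 0.35) ** 2 - 0.5 * (alpha - 0.35) ** 3

alphas = np.linspace(0.1, 0.9, 7)          # a handful of "simulations" across the feasible range
costs = np.array([expensive_objective(a) for a in alphas])

# Fourth-order polynomial metamodel of the objective (the paper uses quadratic fits
# for the output fields and fourth-order fits for the objective function)
coeffs = np.polyfit(alphas, costs, deg=4)
surrogate = np.poly1d(coeffs)

fine = np.linspace(alphas.min(), alphas.max(), 1001)
alpha_opt = fine[np.argmin(surrogate(fine))]
print(f"surrogate optimum at alpha = {alpha_opt:.3f}, "
      f"predicted cost {surrogate(alpha_opt):.3f}, true cost {expensive_objective(alpha_opt):.3f}")
```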

  8. [Influence Additional Cognitive Tasks on EEG Beta Rhythm Parameters during Forming and Testing Set to Perception of the Facial Expression].

    PubMed

    Yakovenko, I A; Cheremushkin, E A; Kozlov, M K

    2015-01-01

    Changes in beta rhythm parameters under working memory load, produced by extending the interstimulus interval between the target and triggering stimuli to 16 s, were investigated in 70 healthy adults in two series of experiments with a set to facial expression. In the second series, to strengthen the load, an additional cognitive task was introduced at the middle of this interval in the form of Go/NoGo conditioning stimuli (circles of blue or green color). Data analysis was carried out by means of a continuous wavelet transformation based on the "mother" complex Morlet wavelet in the range of 1-35 Hz. Beta rhythm power was characterized by the mean level and the maxima of the wavelet-transformation coefficient (WLC), together with the latent periods of the maxima. Introduction of the additional cognitive task into the pause between the target and triggering stimuli led to a substantial increase in the absolute values of the mean level of the beta rhythm WLC and in the relative sizes of the beta rhythm WLC maxima. In the series of experiments without the conditioning stimulus, subjects with a large number of mistakes (from 6 to 40), i.e. a rigid set, were characterized at the forming stage by higher values of the mean level of the beta rhythm WLC than subjects with a small number of mistakes (up to 5), i.e. a plastic set. Introduction of the conditioning stimuli led to a smoothing of these intergroup differences throughout the experiment. PMID:26601500

  9. Parameter space of the Rulkov chaotic neuron model

    NASA Astrophysics Data System (ADS)

    Wang, Caixia; Cao, Hongjun

    2014-06-01

    The parameter space of the two-dimensional Rulkov chaotic neuron model is investigated using qualitative analysis, co-dimension 2 bifurcation analysis, the center manifold theorem, and normal forms. The goal is to clarify analytically the different dynamics and firing regimes of a single neuron in a two-dimensional parameter space. Our research demonstrates that the origin of the very rich nonlinear dynamics and complex biological firing regimes lies in the different domains and their boundary curves in the two-dimensional parameter plane. We present the parameter domains of fixed points, the saddle-node bifurcation, the supercritical/subcritical Neimark-Sacker bifurcation, stability conditions of non-hyperbolic fixed points and quasiperiodic solutions. Based on these parameter domains, it is easy to determine which kinds of firing regimes the Rulkov chaotic neuron model can produce, as well as their transition mechanisms. These results are very useful for building up a large-scale neuron network with different biological functional roles and cognitive activities, especially in establishing specific neuron network models of neurological diseases.
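
    The model itself is compact: the sketch below iterates the two-dimensional chaotic Rulkov map for a few (alpha, sigma) pairs and counts upward crossings of x = 0 as a crude firing indicator. The specific parameter values are illustrative choices, not taken from the paper.

```python
import numpy as np

def rulkov(alpha, sigma, mu=0.001, n=20000, x0=-1.0, y0=-3.0):
    """Iterate the two-dimensional chaotic Rulkov map:
        x_{n+1} = alpha / (1 + x_n**2) + y_n      (fast, membrane-potential-like variable)
        y_{n+1} = y_n - mu * (x_n - sigma)        (slow variable)"""
    x = np.empty(n)
    y = np.empty(n)
    x[0], y[0] = x0, y0
    for k in range(n - 1):
        x[k + 1] = alpha / (1.0 + x[k] ** 2) + y[k]
        y[k + 1] = y[k] - mu * (x[k] - sigma)
    return x, y

# Illustrative points in the (alpha, sigma) parameter plane
for alpha, sigma in [(2.0, -1.0), (4.5, -1.0), (6.0, 0.1)]:
    x, _ = rulkov(alpha, sigma)
    crossings = np.sum((x[1:] > 0.0) & (x[:-1] <= 0.0))   # crude spike count
    print(f"alpha = {alpha}, sigma = {sigma}: {crossings} upward crossings of x = 0")
```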

  10. A strategy for "constraint-based" parameter specification for environmental models

    NASA Astrophysics Data System (ADS)

    Gharari, S.; Shafiei, M.; Hrachowitz, M.; Fenicia, F.; Gupta, H. V.; Savenije, H. H. G.

    2013-12-01

    Many environmental systems models, such as conceptual rainfall-runoff models, rely on model calibration for parameter identification. For this, an observed output time series (such as runoff) is needed, but frequently not available. Here, we explore another way to constrain the parameter values of semi-distributed conceptual models, based on two types of restrictions derived from prior (or expert) knowledge. The first, called "parameter constraints", restricts the solution space based on realistic relationships that must hold between the different parameters of the model, while the second, called "process constraints", requires that additional realism relationships between the fluxes and state variables be satisfied. Specifically, we propose a strategy for finding parameter sets that simultaneously satisfy all such constraints, based on stepwise sampling of the parameter space. Such parameter sets have the desirable property of being consistent with the modeler's intuition of how the catchment functions, and can (if necessary) serve as prior information for further investigations by reducing the prior uncertainties associated with both calibration and prediction.
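
    A minimal sketch of the sampling-and-rejection idea, using an invented two-reservoir toy model: candidate parameter sets are kept only if they satisfy both the inter-parameter constraints and a crude process constraint on simulated fluxes. Every parameter name, range and relation below is hypothetical and stands in for catchment-specific expert knowledge.

```python
import numpy as np

rng = np.random.default_rng(11)

def sample_parameters():
    """Draw one candidate set for a hypothetical two-reservoir conceptual model."""
    return {
        "s_max_unsat": rng.uniform(10.0, 500.0),    # unsaturated storage capacity [mm]
        "s_max_riparian": rng.uniform(1.0, 200.0),  # riparian-zone storage capacity [mm]
        "k_fast": rng.uniform(0.01, 1.0),           # fast-reservoir recession [1/d]
        "k_slow": rng.uniform(0.001, 0.5),          # slow-reservoir recession [1/d]
    }

def parameter_constraints_ok(p):
    """Relations that must hold between the parameters themselves (expert knowledge)."""
    return p["k_fast"] > p["k_slow"] and p["s_max_unsat"] > p["s_max_riparian"]

def process_constraints_ok(p):
    """Relations between simulated fluxes; a crude annual water-balance check stands in
    for a full model run here."""
    precip, pet = 800.0, 500.0                      # mm/yr, illustrative forcing
    et = min(pet, 0.9 * precip * p["s_max_unsat"] / (p["s_max_unsat"] + 100.0))
    baseflow_fraction = p["k_slow"] / (p["k_slow"] + p["k_fast"])
    return 0.0 < et < precip and 0.1 < baseflow_fraction < 0.6

accepted = []
while len(accepted) < 100:
    p = sample_parameters()
    if parameter_constraints_ok(p) and process_constraints_ok(p):
        accepted.append(p)

print("accepted", len(accepted), "constraint-consistent parameter sets")
print("example set:", {k: round(v, 3) for k, v in accepted[0].items()})
```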

  11. Modelling Biophysical Parameters of Maize Using Landsat 8 Time Series

    NASA Astrophysics Data System (ADS)

    Dahms, Thorsten; Seissiger, Sylvia; Conrad, Christopher; Borg, Erik

    2016-06-01

    Open and free access to multi-frequent high-resolution data (e.g. Sentinel-2) will fortify agricultural applications based on satellite data. The temporal and spatial resolution of these remote sensing datasets directly affects the applicability of remote sensing methods, for instance a robust retrieval of biophysical parameters over the entire growing season with very high geometric resolution. In this study we use machine learning methods to predict biophysical parameters, namely the fraction of absorbed photosynthetic radiation (FPAR), the leaf area index (LAI) and the chlorophyll content, from high resolution remote sensing. 30 Landsat 8 OLI scenes were available in our study region in Mecklenburg-Western Pomerania, Germany. In-situ data were collected weekly to bi-weekly on 18 maize plots throughout the summer season 2015. The study aims at an optimized prediction of biophysical parameters and the identification of the best explaining spectral bands and vegetation indices. For this purpose, we used the entire in-situ dataset from 24.03.2015 to 15.10.2015. Random forests and conditional inference forests were used because of their explicit strong exploratory and predictive character. Variable importance measures allowed for analysing the relation between the biophysical parameters and the spectral response, and the performance of the two approaches over the plant stock evolvement. Classical random forest regression outperformed conditional inference forests, in particular when modelling the biophysical parameters over the entire growing period. For example, modelling biophysical parameters of maize for the entire vegetation period using random forests yielded: FPAR: R² = 0.85; RMSE = 0.11; LAI: R² = 0.64; RMSE = 0.9 and chlorophyll content (SPAD): R² = 0.80; RMSE = 4.9. Our results demonstrate the great potential in using machine-learning methods for the interpretation of long-term multi-frequent remote sensing datasets to model
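
    A hedged sketch of the random forest regression step using scikit-learn on synthetic band reflectances and a synthetic LAI; the data below are placeholders, not the study's Landsat 8 scenes or in-situ maize measurements.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score, mean_squared_error

rng = np.random.default_rng(5)
n = 600
# Synthetic stand-ins for Landsat 8 surface reflectance bands (green, red, NIR, SWIR1)
green = rng.uniform(0.02, 0.15, n)
red   = rng.uniform(0.02, 0.20, n)
nir   = rng.uniform(0.15, 0.55, n)
swir1 = rng.uniform(0.05, 0.30, n)
ndvi  = (nir - red) / (nir + red)

# Synthetic LAI loosely tied to NDVI, as a placeholder for the in-situ measurements
lai = 6.0 * ndvi ** 1.5 + rng.normal(0.0, 0.3, n)

X = np.column_stack([green, red, nir, swir1, ndvi])
X_train, X_test, y_train, y_test = train_test_split(X, lai, test_size=0.3, random_state=0)

rf = RandomForestRegressor(n_estimators=500, random_state=0)
rf.fit(X_train, y_train)
pred = rf.predict(X_test)

print("R^2 :", round(r2_score(y_test, pred), 3))
print("RMSE:", round(float(np.sqrt(mean_squared_error(y_test, pred))), 3))
print("feature importances (green, red, nir, swir1, ndvi):",
      np.round(rf.feature_importances_, 3))
```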

  12. Revised Parameters for the AMOEBA Polarizable Atomic Multipole Water Model.

    PubMed

    Laury, Marie L; Wang, Lee-Ping; Pande, Vijay S; Head-Gordon, Teresa; Ponder, Jay W

    2015-07-23

    A set of improved parameters for the AMOEBA polarizable atomic multipole water model is developed. An automated procedure, ForceBalance, is used to adjust model parameters to enforce agreement with ab initio-derived results for water clusters and experimental data for a variety of liquid phase properties across a broad temperature range. The values reported here for the new AMOEBA14 water model represent a substantial improvement over the previous AMOEBA03 model. The AMOEBA14 model accurately predicts the temperature of maximum density and qualitatively matches the experimental density curve across temperatures from 249 to 373 K. Excellent agreement is observed for the AMOEBA14 model in comparison to experimental properties as a function of temperature, including the second virial coefficient, enthalpy of vaporization, isothermal compressibility, thermal expansion coefficient, and dielectric constant. The viscosity, self-diffusion constant, and surface tension are also well reproduced. In comparison to high-level ab initio results for clusters of 2-20 water molecules, the AMOEBA14 model yields results similar to AMOEBA03 and the direct polarization iAMOEBA models. With advances in computing power, calibration data, and optimization techniques, we recommend the use of the AMOEBA14 water model for future studies employing a polarizable water model. PMID:25683601

  13. A Hamiltonian Model of Generator With AVR and PSS Parameters*

    NASA Astrophysics Data System (ADS)

    Qian, Jing.; Zeng, Yun.; Zhang, Lixiang.; Xu, Tianmao.

    Taking a typical thyristor excitation system including the automatic voltage regulator (AVR) and the power system stabilizer (PSS) as an example, the supply rates of the AVR and PSS branches are selected as the energy function of the controller and added to the Hamiltonian function of the generator to form the total energy function. By a proper transformation, the standard form of the Hamiltonian model of the generator including AVR and PSS is derived. The structure and damping matrices of the model include the characteristic parameters of the AVR and PSS, which provides a foundation for studying the interaction mechanisms between the AVR, the PSS and the generator. Finally, the structural relationships and interactions of the system model are studied; the results show that the structural and damping characteristics reflected by the model are consistent with the practical system.

  14. Prediction of interest rate using CKLS model with stochastic parameters

    SciTech Connect

    Ying, Khor Chia; Hin, Pooi Ah

    2014-06-19

    The Chan, Karolyi, Longstaff and Sanders (CKLS) model is a popular one-factor model for describing spot interest rates. In this paper, the four parameters in the CKLS model are regarded as stochastic. The parameter vector φ^(j) of four parameters at the (j+n)-th time point is estimated by the j-th window, which is defined as the set consisting of the observed interest rates at the j′-th time points where j ≤ j′ ≤ j+n. To model the variation of φ^(j), we assume that φ^(j) depends on φ^(j−m), φ^(j−m+1), …, φ^(j−1) and the interest rate r_{j+n} at the (j+n)-th time point via a four-dimensional conditional distribution which is derived from a [4(m+1)+1]-dimensional power-normal distribution. Treating the (j+n)-th time point as the present time point, we find a prediction interval for the future value r_{j+n+1} of the interest rate at the next time point when the value r_{j+n} of the interest rate is given. From the above four-dimensional conditional distribution, we also find a prediction interval for the future interest rate r_{j+n+d} at the next d-th (d ≥ 2) time point. The prediction intervals based on the CKLS model with stochastic parameters are found to cover the observed future interest rates better than those based on the model with fixed parameters.

  15. Prediction of interest rate using CKLS model with stochastic parameters

    NASA Astrophysics Data System (ADS)

    Ying, Khor Chia; Hin, Pooi Ah

    2014-06-01

    The Chan, Karolyi, Longstaff and Sanders (CKLS) model is a popular one-factor model for describing spot interest rates. In this paper, the four parameters in the CKLS model are regarded as stochastic. The parameter vector φ^(j) of four parameters at the (j+n)-th time point is estimated by the j-th window, which is defined as the set consisting of the observed interest rates at the j'-th time points where j ≤ j' ≤ j+n. To model the variation of φ^(j), we assume that φ^(j) depends on φ^(j-m), φ^(j-m+1), …, φ^(j-1) and the interest rate r_{j+n} at the (j+n)-th time point via a four-dimensional conditional distribution which is derived from a [4(m+1)+1]-dimensional power-normal distribution. Treating the (j+n)-th time point as the present time point, we find a prediction interval for the future value r_{j+n+1} of the interest rate at the next time point when the value r_{j+n} of the interest rate is given. From the above four-dimensional conditional distribution, we also find a prediction interval for the future interest rate r_{j+n+d} at the next d-th (d ≥ 2) time point. The prediction intervals based on the CKLS model with stochastic parameters are found to cover the observed future interest rates better than those based on the model with fixed parameters.
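
    As a minimal sketch of the dynamics the two records above start from (not the papers' windowed power-normal procedure), the following simulates one Euler-Maruyama step of the CKLS short-rate SDE dr = (alpha + beta*r) dt + sigma*r^gamma dW and reports an empirical prediction interval for the next rate; all parameter values are illustrative assumptions.

      # Euler-Maruyama step of the CKLS SDE and an empirical prediction interval.
      # Parameter values are assumed for illustration only.
      import numpy as np

      alpha, beta, sigma, gamma = 0.02, -0.2, 0.1, 0.7     # assumed CKLS parameters
      dt, r0, n_paths = 1.0 / 252, 0.05, 10_000

      rng = np.random.default_rng(1)
      dW = rng.normal(0.0, np.sqrt(dt), n_paths)
      r1 = r0 + (alpha + beta * r0) * dt + sigma * r0**gamma * dW   # one Euler step
      lo, hi = np.percentile(r1, [2.5, 97.5])
      print(f"95% prediction interval for the next rate: [{lo:.4f}, {hi:.4f}]")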

  16. Atmosphere models and the determination of stellar parameters

    NASA Astrophysics Data System (ADS)

    Martins, F.

    2014-11-01

    We present the basic concepts necessary to build atmosphere models for any type of star. We then illustrate how atmosphere models can be used to determine stellar parameters. We focus on the effects of line-blanketing for hot stars, and on non-LTE and three-dimensional effects for cool stars. We illustrate the impact of these effects on the determination of the ages of stars from the HR diagram.

  17. Parabolic problems with parameters arising in evolution model for phytoremediation

    NASA Astrophysics Data System (ADS)

    Sahmurova, Aida; Shakhmurov, Veli

    2012-12-01

    Over the past few decades, efforts have been made to clean up sites polluted by heavy metals such as chromium. One of the newer, innovative methods for removing metals from soil is phytoremediation, which uses plants to extract metals from the soil through their roots. This work develops a system of differential equations with parameters to model the plant-metal interaction of phytoremediation (see [1]).

  18. Long wave infrared polarimetric model: theory, measurements and parameters

    NASA Astrophysics Data System (ADS)

    Wellems, David; Ortega, Steve; Bowers, David; Boger, Jim; Fetrow, Matthew

    2006-10-01

    Material parameters, which include the complex index of refraction (n,k) and surface roughness, are needed to determine passive long wave infrared (LWIR) polarimetric radiance. A single-scatter microfacet bi-directional reflectance distribution function (BRDF) is central to the energy-conserving (EC) model which determines emitted and reflected polarized surface radiance. Model predictions are compared to LWIR polarimetric data. An ellipsometry approach is described for finding an effective complex index of refraction, or (n,k), averaged over the 8.5-9.5 µm wavelength range. The reflected S3/S2 ratios, where S2 and S3 are components of the Stokes vector (Born and Wolf 1975 Principles of Optics (London: Pergamon) p 30), are used to determine (n,k). An imaging polarimeter with a rotating retarder is utilized to measure the Stokes vector. Effective (n,k) and two EC optical roughness parameters are presented for roughened glass and several unprepared, typical outdoor materials including metals and paints. A two-parameter slope distribution function is introduced which is more flexible in modelling the source-reflected intensity profiles, or BRDF data, than one-parameter Cauchy or Gaussian distributions (Jordan et al 1996 Appl. Opt. 35 3585-90; Priest and Meier 2002 Opt. Eng. 41 992). The glass results show that the (n,k) needed to model polarimetric emission and scatter differ from that for a smooth surface and that surface roughness reduces the degree of linear polarization.

  19. A Fully Conditional Estimation Procedure for Rasch Model Parameters.

    ERIC Educational Resources Information Center

    Choppin, Bruce

    A strategy for overcoming problems with the Rasch model's inability to handle missing data involves a pairwise algorithm which manipulates the data matrix to separate out the information needed for the estimation of item difficulty parameters in a test. The method of estimation compares two or three items at a time, separating out the ability…

  20. Inhalation Exposure Input Parameters for the Biosphere Model

    SciTech Connect

    M. A. Wasiolek

    2003-09-24

    This analysis is one of the nine reports that support the Environmental Radiation Model for Yucca Mountain Nevada (ERMYN) biosphere model. The ''Biosphere Model Report'' (BSC 2003a) describes in detail the conceptual model as well as the mathematical model and its input parameters. This report documents a set of input parameters for the biosphere model, and supports the use of the model to develop biosphere dose conversion factors (BDCFs). The biosphere model is one of a series of process models supporting the Total System Performance Assessment (TSPA) for a Yucca Mountain repository. This report, ''Inhalation Exposure Input Parameters for the Biosphere Model'', is one of the five reports that develop input parameters for the biosphere model. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows the interrelationships among the products (i.e., analysis and model reports) developed for biosphere modeling, and the plan for development of the biosphere abstraction products for TSPA, as identified in the ''Technical Work Plan: for Biosphere Modeling and Expert Support'' (BSC 2003b). It should be noted that some documents identified in Figure 1-1 may be under development at the time this report is issued and therefore not available at that time. This figure is included to provide an understanding of how this analysis report contributes to biosphere modeling in support of the license application, and is not intended to imply that access to the listed documents is required to understand the contents of this analysis report. This analysis report defines and justifies values of mass loading, which is the total mass concentration of resuspended particles (e.g., dust, ash) in a volume of air. Measurements of mass loading are used in the air submodel of ERMYN to calculate concentrations of radionuclides in air surrounding crops and concentrations in air inhaled by a receptor. Concentrations in air to which the

  1. Assessment of structural model and parameter uncertainty with a multi-model system for soil water balance models

    NASA Astrophysics Data System (ADS)

    Michalik, Thomas; Multsch, Sebastian; Frede, Hans-Georg; Breuer, Lutz

    2016-04-01

    Water for agriculture is strongly limited in arid and semi-arid regions and is often of low quality in terms of salinity. The application of saline waters for irrigation increases the salt load in the rooting zone and has to be managed by leaching to maintain a healthy soil, i.e. salts have to be washed out by additional irrigation. Dynamic simulation models are helpful tools for calculating root zone water fluxes and soil salinity content in order to investigate best management practices. However, there is little information on structural and parameter uncertainty for simulations regarding the water and salt balance of saline irrigation. Hence, we established a multi-model system with four different models (AquaCrop, RZWQM, SWAP, Hydrus1D/UNSATCHEM) to analyze structural and parameter uncertainty using the Generalized Likelihood Uncertainty Estimation (GLUE) method. Hydrus1D/UNSATCHEM and SWAP were set up with multiple sets of different implemented functions (e.g. matric and osmotic stress for root water uptake), which results in a broad range of different model structures. The simulations were evaluated against observations of soil water and salinity content. The posterior distribution of the GLUE analysis gives behavioral parameter sets and reveals uncertainty intervals for the parameters. Throughout all of the model sets, most parameters accounting for the soil water balance show a low uncertainty; only one or two out of five to six parameters in each model set display a high uncertainty (e.g. the pore-size distribution index in SWAP and Hydrus1D/UNSATCHEM). The differences between the models and model setups reveal the structural uncertainty. The highest structural uncertainty is observed for deep percolation fluxes between the model sets of Hydrus1D/UNSATCHEM (~200 mm) and RZWQM (~500 mm), which are more than twice as high for the latter. The model sets also show a high variation in uncertainty intervals for deep percolation, with an interquartile range (IQR) of
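
    The GLUE idea can be sketched in a few lines (with a toy drainage model and illustrative priors, not the study's soil water balance models): sample parameter sets, keep the "behavioral" ones whose likelihood measure exceeds a threshold, and read parameter uncertainty intervals from the retained sets.

      # GLUE-style sketch with a toy soil-water simulator (illustrative only).
      import numpy as np

      rng = np.random.default_rng(2)

      def toy_model(k, n, t):
          """Stand-in simulator: exponential drainage with shape parameter n."""
          return np.exp(-k * t) ** n           # note: only the product k*n is identifiable

      t = np.linspace(0.0, 10.0, 25)
      obs = toy_model(0.3, 1.2, t) + rng.normal(0, 0.02, t.size)       # synthetic observations

      samples = rng.uniform([0.05, 0.5], [1.0, 2.0], size=(5000, 2))   # (k, n) priors
      nse = np.array([1 - np.sum((toy_model(k, n, t) - obs) ** 2) /
                          np.sum((obs - obs.mean()) ** 2) for k, n in samples])
      behavioral = samples[nse > 0.8]          # behavioral sets above the NSE threshold
      print("behavioral sets:", len(behavioral))
      print("k 95% interval:", np.percentile(behavioral[:, 0], [2.5, 97.5]))
      print("n 95% interval:", np.percentile(behavioral[:, 1], [2.5, 97.5]))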

  2. High speed parameter estimation for a homogenized energy model

    NASA Astrophysics Data System (ADS)

    Ernstberger, Jon M.

    Industrial, commercial, military, biomedical, and civilian uses of smart materials are increasingly investigated for high-performance applications. These compounds couple applied field or thermal energy to mechanical forces that are generated within the material. The devices utilizing these compounds are often much smaller than their traditional counterparts and provide greater design capabilities and energy efficiency. The relations that couple field and mechanical energies are often hysteretic and nonlinear. To accurately control devices employing these compounds, models must quantify these effects. Further, since these compounds exhibit environment-dependent behavior, the models must be robust for accurate actuator quantification. In this dissertation, we investigate the construction of models that characterize these internal mechanisms, which manifest themselves in material deformation in a hysteretic fashion. Results of previously presented model formulations are given. New techniques for generating model components are presented which reduce the computational load of parameter estimation. The use of various deterministic and stochastic search algorithms for parameter estimation is discussed, with the strengths and weaknesses of each examined. New end-user graphical tools for properly initiating the parameter estimation are also presented. Finally, results from model fits to data from ferroelectric (e.g., lead zirconate titanate, PZT) and ferromagnetic (e.g., Terfenol-D) materials are presented.

  3. Testing Departure from Additivity in Tukey’s Model using Shrinkage: Application to a Longitudinal Setting

    PubMed Central

    Ko, Yi-An; Mukherjee, Bhramar; Smith, Jennifer A.; Park, Sung Kyun; Kardia, Sharon L.R.; Allison, Matthew A.; Vokonas, Pantel S.; Chen, Jinbo; Diez-Roux, Ana V.

    2014-01-01

    While there has been extensive research developing gene-environment interaction (GEI) methods in case-control studies, little attention has been given to sparse and efficient modeling of GEI in longitudinal studies. In a two-way table for GEI with rows and columns as categorical variables, a conventional saturated interaction model involves estimation of a specific parameter for each cell, with constraints ensuring identifiability. The estimates are unbiased but are potentially inefficient because the number of parameters to be estimated can grow quickly with increasing categories of row/column factors. On the other hand, Tukey’s one degree of freedom (df) model for non-additivity treats the interaction term as a scaled product of row and column main effects. Due to the parsimonious form of interaction, the interaction estimate leads to enhanced efficiency and the corresponding test could lead to increased power. Unfortunately, Tukey’s model gives biased estimates and low power if the model is misspecified. When screening multiple GEIs where each genetic and environmental marker may exhibit a distinct interaction pattern, a robust estimator for interaction is important for GEI detection. We propose a shrinkage estimator for interaction effects that combines estimates from both Tukey’s and saturated interaction models and use the corresponding Wald test for testing interaction in a longitudinal setting. The proposed estimator is robust to misspecification of interaction structure. We illustrate the proposed methods using two longitudinal studies — the Normative Aging Study and the Multi-Ethnic Study of Atherosclerosis. PMID:25112650
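
    For readers unfamiliar with the classical building block, the sketch below computes Tukey's one-degree-of-freedom test for non-additivity in a complete two-way table with one observation per cell; the data are invented, and this is not the paper's longitudinal shrinkage estimator, only the standard test it generalizes.

      # Tukey's 1-df non-additivity test for a complete two-way table (toy data).
      import numpy as np
      from scipy import stats

      y = np.array([[50., 57., 64.],       # rows = levels of one factor
                    [52., 60., 71.],       # cols = levels of the other factor
                    [55., 66., 80.],
                    [54., 63., 77.]])
      r, c = y.shape
      a = y.mean(axis=1) - y.mean()        # row main effects
      b = y.mean(axis=0) - y.mean()        # column main effects

      ss_nonadd = (a @ y @ b) ** 2 / (np.sum(a**2) * np.sum(b**2))   # 1-df interaction SS
      resid = y - y.mean() - a[:, None] - b[None, :]
      df_err = (r - 1) * (c - 1) - 1
      F = ss_nonadd / ((np.sum(resid**2) - ss_nonadd) / df_err)
      print("F =", round(F, 2), " p =", round(1 - stats.f.cdf(F, 1, df_err), 4))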

  4. Squares of different sizes: effect of geographical projection on model parameter estimates in species distribution modeling.

    PubMed

    Budic, Lara; Didenko, Gregor; Dormann, Carsten F

    2016-01-01

    In species distribution analyses, environmental predictors and distribution data for large spatial extents are often available in long-lat format, such as degree raster grids. Long-lat projections suffer from unequal cell sizes, as a degree of longitude decreases in length from approximately 110 km at the equator to 0 km at the poles. Here we investigate whether long-lat and equal-area projections yield similar model parameter estimates, or result in a consistent bias. We analyzed the environmental effects on the distribution of 12 ungulate species with a northern distribution, as models for these species should display the strongest effect of projectional distortion. Additionally, we chose four species with entirely continental distributions to investigate the effect of incomplete cell coverage at the coast. We expected that including model weights proportional to the actual cell area should compensate for the observed bias in model coefficients, and similarly that using the land coverage of a cell should decrease the bias for species with coastal distributions. As anticipated, model coefficients differed between long-lat and equal-area projections. The progressively smaller and more numerous cells at higher latitudes influenced the importance of parameters in the models, increased the sample size for the northernmost parts of species ranges, and reduced the subcell variability of those areas. However, this bias could be largely removed by weighting long-lat cells by the area they cover, and marginally by correcting for land coverage. Overall we found little effect of using long-lat rather than equal-area projections in our analysis. The fitted relationship between environmental parameters and occurrence probability differed only very little between the two projection types. We still recommend using equal-area projections to avoid possible bias. More importantly, our results suggest that the cell area and the proportion of a cell covered by land should be
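
    The area weighting mentioned above can be implemented in a few lines; the sketch below (an assumed implementation, not the authors' code) computes the relative true area of long-lat cells so they can be passed as observation weights to the regression.

      # Relative area of a long-lat grid cell centred at a given latitude
      # (equator cell = 1); used as a weight to de-emphasize polar cells.
      import numpy as np

      def cell_area_weight(lat_deg, cell_size_deg=1.0):
          half = np.radians(cell_size_deg / 2.0)
          lat = np.radians(lat_deg)
          return (np.sin(lat + half) - np.sin(lat - half)) / (2.0 * np.sin(half))

      lats = np.array([0.0, 30.0, 60.0, 80.0])
      print(dict(zip(lats, np.round(cell_area_weight(lats), 3))))
      # roughly {0: 1.0, 30: 0.866, 60: 0.5, 80: 0.174}, i.e. proportional to cos(latitude)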

  5. Parameter Calibration of Mini-LEO Hill Slope Model

    NASA Astrophysics Data System (ADS)

    Siegel, H.

    2015-12-01

    The mini-LEO hill slope, located at Biosphere 2, is a small-scale catchment model that is used to study the ways landscapes change in response to biological, chemical, and hydrological processes. Previous experiments have shown that soil heterogeneity can develop as a result of groundwater flow, changing the characteristics of the landscape. To determine whether or not flow has caused heterogeneity within the mini-LEO hill slope, numerical models were used to simulate the observed seepage flow, water table height, and storativity. To begin, a numerical model of the hill slope was created using CATchment Hydrology (CATHY). The model was then brought to an initial steady state by applying a rainfall event of 5 mm/day for 180 days. Then a specific rainfall experiment of alternating intensities was applied to the model. Next, a parameter calibration was conducted to fit the model to the observed data by changing soil parameters individually. The parameters of the best-fitting calibration were taken to be the most representative of those present within the mini-LEO hill slope. Our model concluded that heterogeneities had indeed arisen as a result of the rainfall event, resulting in a lower hydraulic conductivity downslope. The lower hydraulic conductivity downslope in turn caused increased storage of water and a decrease in seepage flow compared to homogeneous models. This shows that the hydraulic processes acting within a landscape can change the very characteristics of the landscape itself, namely the permeability and conductivity of the soil. In the future, results from the excavation of soil in mini-LEO can be compared to the model's results to improve the model and validate its findings.

  6. Soil-Related Input Parameters for the Biosphere Model

    SciTech Connect

    A. J. Smith

    2004-09-09

    This report presents one of the analyses that support the Environmental Radiation Model for Yucca Mountain Nevada (ERMYN). The ''Biosphere Model Report'' (BSC 2004 [DIRS 169460]) describes the details of the conceptual model as well as the mathematical model and the required input parameters. The biosphere model is one of a series of process models supporting the postclosure Total System Performance Assessment (TSPA) for the Yucca Mountain repository. A schematic representation of the documentation flow for the Biosphere input to TSPA is presented in Figure 1-1. This figure shows the evolutionary relationships among the products (i.e., analysis and model reports) developed for biosphere modeling, and the biosphere abstraction products for TSPA, as identified in the ''Technical Work Plan for Biosphere Modeling and Expert Support'' (TWP) (BSC 2004 [DIRS 169573]). This figure is included to provide an understanding of how this analysis report contributes to biosphere modeling in support of the license application, and is not intended to imply that access to the listed documents is required to understand the contents of this report. This report, ''Soil-Related Input Parameters for the Biosphere Model'', is one of the five analysis reports that develop input parameters for use in the ERMYN model. This report is the source documentation for the six biosphere parameters identified in Table 1-1. The purpose of this analysis was to develop the biosphere model parameters associated with the accumulation and depletion of radionuclides in the soil. These parameters support the calculation of radionuclide concentrations in soil from on-going irrigation or ash deposition and, as a direct consequence, radionuclide concentration in other environmental media that are affected by radionuclide concentrations in soil. The analysis was performed in accordance with the TWP (BSC 2004 [DIRS 169573]) where the governing procedure was defined as AP-SIII.9Q, ''Scientific Analyses''. This

  7. Space geodetic techniques for global modeling of ionospheric peak parameters

    NASA Astrophysics Data System (ADS)

    Alizadeh, M. Mahdi; Schuh, Harald; Schmidt, Michael

    The rapid development of new technological systems for navigation, telecommunication, and space missions which transmit signals through the Earth's upper atmosphere - the ionosphere - makes precise, reliable and near real-time models of ionospheric parameters ever more crucial. In recent decades, space geodetic techniques have become a capable tool for measuring ionospheric parameters in terms of Total Electron Content (TEC) or the electron density. Among these systems, current space geodetic techniques, such as Global Navigation Satellite Systems (GNSS), Low Earth Orbiting (LEO) satellites, satellite altimetry missions, and others, have found several applications in a broad range of commercial and scientific fields. This paper aims at the development of a three-dimensional integrated model of the ionosphere, by using various space geodetic techniques and applying a combination procedure for the computation of a global model of electron density. In order to model the ionosphere in 3D, the electron density is represented as a function of the maximum electron density (NmF2) and its corresponding height (hmF2). NmF2 and hmF2 are then modeled in longitude, latitude, and height using two sets of spherical harmonic expansions with degree and order 15. To perform the estimation, GNSS input data are simulated in such a way that the true positions of the satellites are detected and used, but the STEC values are obtained through a simulation procedure using the IGS VTEC maps. After simulating the input data, the a priori values required for the estimation procedure are calculated using the IRI-2012 model and by applying the ray-tracing technique. The estimated results are compared with F2-peak parameters derived from the IRI model to assess the least-squares estimation procedure; moreover, to validate the developed maps, the results are compared with the raw F2-peak parameters derived from the Formosat-3/COSMIC data.

  8. Parameter identifiability of cardiac ionic models using a novel CellML least squares optimization tool.

    PubMed

    Hui, Ben B B; Dokos, Socrates; Lovell, Nigel H

    2007-01-01

    Published models of excitable cells can be used to fit a range of action potential experimental data. CellML is a well-defined standard for publishing and exchanging such models, but currently there is a lack of software that utilizes CellML for parameter analysis. In this paper, we introduce a Java-based utility capable of performing model simulation, identifiability analysis, and parameter optimization of ionic cardiac cell models written in CellML. Identifiability analysis was performed on seven CellML models. Parameter identifiability was consistently improved by using the compensatory membrane current, as opposed to the membrane voltage, as the residual, as well as through the introduction of an additional stimulus set used in the fitting process. PMID:18003205

  9. Realistic uncertainties on Hapke model parameters from photometric measurement

    NASA Astrophysics Data System (ADS)

    Schmidt, Frédéric; Fernando, Jennifer

    2015-11-01

    The single particle phase function describes the manner in which an average element of a granular material diffuses light in angular space, usually with two parameters: the asymmetry parameter b, describing the width of the scattering lobe, and the backscattering fraction c, describing the main direction of the scattering lobe. Hapke proposed a convenient and widely used analytical model to describe the spectro-photometry of granular materials. Using a compilation of published data, Hapke (Hapke, B. [2012]. Icarus 221, 1079-1083) recently studied the relationship of b and c for natural examples and proposed the hockey stick relation (excluding b > 0.5 and c > 0.5). For the moment, there is no theoretical explanation for this relationship. One goal of this article is to study a possible bias due to the retrieval method. We develop here an innovative Bayesian inversion method in order to study in detail the uncertainties of the retrieved parameters. On Emission Phase Function (EPF) data, we demonstrate that the uncertainties of the retrieved parameters follow the same hockey stick relation, suggesting that this relation is due to the fact that b and c are coupled parameters in the Hapke model rather than a natural phenomenon. Nevertheless, the data used in the Hapke (Hapke, B. [2012]. Icarus 221, 1079-1083) compilation generally are full Bidirectional Reflectance Distribution Function (BRDF) data that are shown not to be subject to this artifact. Moreover, the Bayesian method is a good tool for testing whether the sampling geometry is sufficient to constrain the parameters (single scattering albedo, surface roughness, b, c, opposition effect). We performed sensitivity tests by mimicking various surface scattering properties and various single image-like/disk-resolved image, EPF-like and BRDF-like geometric sampling conditions. The second goal of this article is to estimate the favorable geometric conditions for an accurate estimation of photometric parameters in order to provide
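
    For illustration, a double Henyey-Greenstein phase function parameterized by an asymmetry parameter b and a backscattering fraction c can be written as below; sign and weighting conventions differ between authors, so this is one common form rather than necessarily the exact one used in the study.

      # One common double Henyey-Greenstein form with parameters (b, c); illustrative only.
      import numpy as np

      def double_hg(g_deg, b, c):
          """Phase function at phase angle g (degrees)."""
          cg = np.cos(np.radians(g_deg))
          back = (1 - b**2) / (1 - 2 * b * cg + b**2) ** 1.5   # lobe peaking at g = 0
          forw = (1 - b**2) / (1 + 2 * b * cg + b**2) ** 1.5   # lobe peaking at g = 180
          return c * back + (1 - c) * forw

      g = np.linspace(0, 180, 7)
      print(np.round(double_hg(g, b=0.3, c=0.6), 3))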

  10. Model and parameter uncertainty in IDF relationships under climate change

    NASA Astrophysics Data System (ADS)

    Chandra, Rupa; Saha, Ujjwal; Mujumdar, P. P.

    2015-05-01

    Quantifying the distributional behavior of extreme events is crucial in hydrologic design. Intensity-Duration-Frequency (IDF) relationships are used extensively in engineering, especially in urban hydrology, to obtain the return level of an extreme rainfall event for a specified return period and duration. Major sources of uncertainty in IDF relationships are the insufficient quantity and quality of data, leading to parameter uncertainty in the distribution fitted to the data, and the uncertainty that results from using multiple GCMs. It is important to study these uncertainties and propagate them into the future for an accurate assessment of future return levels. The objective of this study is to quantify the uncertainties arising from the parameters of the distribution fitted to the data and from the multiple GCM models using a Bayesian approach. The posterior distribution of the parameters is obtained from Bayes' rule, and the parameters are transformed to obtain return levels for a specified return period. A Markov Chain Monte Carlo (MCMC) method using the Metropolis-Hastings algorithm is used to obtain the posterior distribution of the parameters. Twenty-six CMIP5 GCMs along with four RCP scenarios are considered for studying the effects of climate change and to obtain projected IDF relationships for the case study of Bangalore city in India. GCM uncertainty due to the use of multiple GCMs is treated using the Reliability Ensemble Averaging (REA) technique along with the parameter uncertainty. Scale invariance theory is employed for obtaining short-duration return levels from daily data. It is observed that the uncertainty in short-duration rainfall return levels is high compared to longer durations. Further, it is observed that the parameter uncertainty is large compared to the model uncertainty.
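
    The Bayesian machinery described above can be sketched with a simple Metropolis-Hastings sampler; the example below uses a Gumbel distribution for annual maxima, flat priors and synthetic data purely for illustration, so the distribution choice and settings are assumptions rather than the study's configuration.

      # Metropolis-Hastings posterior for Gumbel parameters and a 100-year return level.
      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(3)
      annual_max = stats.gumbel_r.rvs(loc=60.0, scale=15.0, size=40, random_state=42)

      def log_post(theta):
          mu, beta = theta
          if beta <= 0:
              return -np.inf                                   # flat prior with beta > 0
          return stats.gumbel_r.logpdf(annual_max, loc=mu, scale=beta).sum()

      theta = np.array([annual_max.mean(), annual_max.std()])
      lp, chain = log_post(theta), []
      for _ in range(20_000):
          prop = theta + rng.normal(0, [1.0, 0.5])             # random-walk proposal
          lp_prop = log_post(prop)
          if np.log(rng.uniform()) < lp_prop - lp:
              theta, lp = prop, lp_prop
          chain.append(theta)
      chain = np.array(chain[5_000:])                          # discard burn-in

      T = 100                                                  # return period (years)
      z = chain[:, 0] - chain[:, 1] * np.log(-np.log(1 - 1 / T))
      print("100-yr return level, 2.5/50/97.5 %:", np.percentile(z, [2.5, 50, 97.5]).round(1))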

  11. A state parameter-based model for static recrystallization interacting with precipitation

    NASA Astrophysics Data System (ADS)

    Buken, Heinrich; Sherstnev, Pavel; Kozeschnik, Ernst

    2016-03-01

    In the present work, we develop a state parameter-based model for the treatment of simultaneous precipitation and recrystallization based on a single-parameter representation of the total dislocation density and a multi-particle multi-component framework for precipitation kinetics. In contrast to conventional approaches, the interaction of particles with recrystallization is described with a non-zero grain boundary mobility even for the case where the Zener pressure exceeds the driving pressure for recrystallization. The model successfully reproduces the experimentally observed particle-induced recrystallization stasis and subsequent continuation in micro-alloyed steel with a single consistent set of input parameters. In addition, as a state parameter-based approach, our model naturally supports introspection into the physical mechanisms governing the competing recrystallization and recovery processes.

  12. Inverse parameter determination in the development of an optimized lithium iron phosphate - Graphite battery discharge model

    NASA Astrophysics Data System (ADS)

    Maheshwari, Arpit; Dumitrescu, Mihaela Aneta; Destro, Matteo; Santarelli, Massimo

    2016-03-01

    Battery models are riddled with incongruous values of the parameters considered for validation. In this work, a thermally coupled electrochemical model of a pouch cell is developed, and discharge tests on a LiFePO4 pouch cell at different discharge rates are used to optimize the LiFePO4 battery model by determining parameters for which there is no consensus in the literature. Parameter determination, selection and comparison with literature values are discussed. The electrochemical model is a P2D model, while the thermal model considers heat transfer in 3D. It is seen that even with no phase change considered for the LiFePO4 electrode, the model is able to simulate the discharge curves over a wide range of discharge rates with a single set of parameters, provided the radius of the LiFePO4 electrode particles is made dependent on the discharge rate. The approach of using a current-dependent radius is shown to be equivalent to using a current-dependent diffusion coefficient. Both these modelling approaches are a representation of the particle size distribution in the electrode. Additionally, the model has been thermally validated, which increases the confidence in the selected parameter values.

  13. A constraint-based search algorithm for parameter identification of environmental models

    NASA Astrophysics Data System (ADS)

    Gharari, S.; Shafiei, M.; Hrachowitz, M.; Kumar, R.; Fenicia, F.; Gupta, H. V.; Savenije, H. H. G.

    2014-12-01

    Many environmental systems models, such as conceptual rainfall-runoff models, rely on model calibration for parameter identification. For this, an observed output time series (such as runoff) is needed, but frequently not available (e.g., when making predictions in ungauged basins). In this study, we provide an alternative approach for parameter identification using constraints based on two types of restrictions derived from prior (or expert) knowledge. The first, called parameter constraints, restricts the solution space based on realistic relationships that must hold between the different model parameters, while the second, called process constraints, requires that additional realism relationships between the fluxes and state variables be satisfied. Specifically, we propose a search algorithm for finding parameter sets that simultaneously satisfy such constraints, based on stepwise sampling of the parameter space. Such parameter sets have the desirable property of being consistent with the modeler's intuition of how the catchment functions, and can (if necessary) serve as prior information for further investigations by reducing the prior uncertainties associated with both calibration and prediction.
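
    The general idea can be illustrated with simple rejection sampling; the bucket model, parameter names and constraints below are hypothetical stand-ins, not the authors' algorithm, which samples the parameter space stepwise.

      # Keep only candidate parameter sets satisfying parameter and process constraints.
      import numpy as np

      rng = np.random.default_rng(4)

      def simulate(params):
          """Toy two-bucket model returning mean fast and slow outflow (stand-in fluxes)."""
          k_fast, k_slow, split = params
          return split * k_fast, (1 - split) * k_slow

      accepted = []
      for _ in range(20_000):
          p = rng.uniform([0.01, 0.001, 0.0], [1.0, 0.1, 1.0])   # (k_fast, k_slow, split)
          if p[0] <= p[1]:                 # parameter constraint: fast store drains faster
              continue
          q_fast, q_slow = simulate(p)
          if q_fast <= q_slow:             # process constraint on simulated fluxes
              continue
          accepted.append(p)
      print(f"{len(accepted)} consistent parameter sets out of 20000 candidates")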

  14. Mass balance model parameter transferability on a tropical glacier

    NASA Astrophysics Data System (ADS)

    Gurgiser, Wolfgang; Mölg, Thomas; Nicholson, Lindsey; Kaser, Georg

    2013-04-01

    The mass balance and melt water production of glaciers are of particular interest in the Peruvian Andes, where glacier melt water has markedly increased water supply during the pronounced dry seasons in recent decades. However, the melt water contribution from glaciers is projected to decrease, with appreciable negative impacts on the local society within the coming decades. Understanding mass balance processes on tropical glaciers is a prerequisite for modeling present and future glacier runoff. As a first step towards this aim we applied a process-based surface mass balance model in order to calculate observed ablation at two stakes in the ablation zone of Shallap Glacier (4800 m a.s.l., 9°S) in the Cordillera Blanca, Peru. Under the tropical climate, the snow line migrates very frequently across most of the ablation zone all year round, causing large temporal and spatial variations of glacier surface conditions and related ablation. Consequently, pronounced differences between the two chosen stakes and the two years were observed. Hourly records of temperature, humidity, wind speed, shortwave incoming radiation, and precipitation are available from an automatic weather station (AWS) on the moraine near the glacier for the hydrological years 2006/07 and 2007/08, while stake readings are available at intervals of 14 to 64 days. To optimize model parameters, we used 1000 model simulations in which the most sensitive model parameters were varied randomly within their physically meaningful ranges. The modeled surface height change was evaluated against the two stake locations in the lower ablation zone (SH11, 4760 m) and in the upper ablation zone (SH22, 4816 m), respectively. The optimal parameter set for each point achieved good model skill, but if we transfer the best parameter combination from one stake site to the other, model errors increase significantly. The same happens if we optimize the model parameters for each year individually and transfer
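
    The random-search calibration described above reduces to a short loop; the surrogate ablation model, parameter ranges and stake data below are invented for illustration and do not represent the actual mass-balance model.

      # 1000 random parameter sets, scored by RMSE against stake observations (toy setup).
      import numpy as np

      rng = np.random.default_rng(5)
      t_obs = np.array([14., 45., 80., 120.])           # days of stake readings
      h_obs = np.array([-0.3, -1.1, -2.0, -3.2])        # observed surface lowering (m)

      def toy_ablation(albedo, ddf, t):
          """Stand-in model: lowering grows with time, reduced by higher albedo."""
          return -ddf * (1.0 - albedo) * t / 10.0

      best_rmse, best_p = np.inf, None
      for _ in range(1000):
          albedo, ddf = rng.uniform([0.1, 0.2], [0.5, 1.5])
          rmse = np.sqrt(np.mean((toy_ablation(albedo, ddf, t_obs) - h_obs) ** 2))
          if rmse < best_rmse:
              best_rmse, best_p = rmse, (albedo, ddf)
      print("best RMSE %.3f m with (albedo, ddf) = %s" % (best_rmse, np.round(best_p, 3)))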

  15. Using Generalized Additive Models to Analyze Single-Case Designs

    ERIC Educational Resources Information Center

    Shadish, William; Sullivan, Kristynn

    2013-01-01

    Many analyses for single-case designs (SCDs)--including nearly all the effect size indicators-- currently assume no trend in the data. Regression and multilevel models allow for trend, but usually test only linear trend and have no principled way of knowing if higher order trends should be represented in the model. This paper shows how Generalized…

  16. Estimating demographic parameters using hidden process dynamic models.

    PubMed

    Gimenez, Olivier; Lebreton, Jean-Dominique; Gaillard, Jean-Michel; Choquet, Rémi; Pradel, Roger

    2012-12-01

    Structured population models are widely used in plant and animal demographic studies to assess population dynamics. In matrix population models, populations are described with discrete classes of individuals (age, life history stage or size). To calibrate these models, longitudinal data are collected at the individual level to estimate demographic parameters. However, several sources of uncertainty can complicate parameter estimation, such as imperfect detection of individuals inherent to monitoring in the wild and uncertainty in assigning a state to an individual. Here, we show how recent statistical models can help overcome these issues. We focus on hidden process models that run two time series in parallel, one capturing the dynamics of the true states and the other consisting of observations arising from these underlying possibly unknown states. In a first case study, we illustrate hidden Markov models with an example of how to accommodate state uncertainty using Frequentist theory and maximum likelihood estimation. In a second case study, we illustrate state-space models with an example of how to estimate lifetime reproductive success despite imperfect detection, using a Bayesian framework and Markov Chain Monte Carlo simulation. Hidden process models are a promising tool as they allow population biologists to cope with process variation while simultaneously accounting for observation error. PMID:22373775
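
    The "two time series in parallel" idea can be made concrete with a tiny capture-recapture-style hidden Markov example (the survival and detection probabilities below are assumed values, not estimates from the paper): a true alive/dead process evolves with survival probability phi, each occasion yields a detection with probability p only if the animal is alive, and the forward algorithm gives the likelihood of an observed capture history.

      # Forward algorithm for a 2-state (alive/dead) HMM with imperfect detection.
      import numpy as np

      phi, p = 0.8, 0.6                        # assumed survival and detection probabilities
      trans = np.array([[phi, 1 - phi],        # state 0 = alive, state 1 = dead
                        [0.0, 1.0]])
      emit = np.array([[1 - p, p],             # P(obs = 0 or 1 | alive)
                       [1.0, 0.0]])            # dead animals are never detected

      history = [1, 0, 1, 0, 0]                # capture history after first release
      alpha = np.array([1.0, 0.0])             # released alive at occasion 0
      for y in history:
          alpha = (alpha @ trans) * emit[:, y] # predict the true state, then weight by data
      print("likelihood of this capture history:", alpha.sum())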

  17. Comparison of Cone Model Parameters for Halo Coronal Mass Ejections

    NASA Astrophysics Data System (ADS)

    Na, Hyeonock; Moon, Y.-J.; Jang, Soojeong; Lee, Kyoung-Sun; Kim, Hae-Yeon

    2013-11-01

    Halo coronal mass ejections (HCMEs) are a major cause of geomagnetic storms, hence their three-dimensional structures are important for space weather. We compare three cone models: an elliptical-cone model, an ice-cream-cone model, and an asymmetric-cone model. These models allow us to determine three-dimensional parameters of HCMEs such as radial speed, angular width, and the angle [ γ] between sky plane and cone axis. We compare these parameters obtained from three models using 62 HCMEs observed by SOHO/LASCO from 2001 to 2002. Then we obtain the root-mean-square (RMS) error between the highest measured projection speeds and their calculated projection speeds from the cone models. As a result, we find that the radial speeds obtained from the models are well correlated with one another ( R > 0.8). The correlation coefficients between angular widths range from 0.1 to 0.48 and those between γ-values range from -0.08 to 0.47, which is much smaller than expected. The reason may be the different assumptions and methods. The RMS errors between the highest measured projection speeds and the highest estimated projection speeds of the elliptical-cone model, the ice-cream-cone model, and the asymmetric-cone model are 376 km s-1, 169 km s-1, and 152 km s-1. We obtain the correlation coefficients between the location from the models and the flare location ( R > 0.45). Finally, we discuss strengths and weaknesses of these models in terms of space-weather application.

  18. Multiple beam interference model for measuring parameters of a capillary.

    PubMed

    Xu, Qiwei; Tian, Wenjing; You, Zhihong; Xiao, Jinghua

    2015-08-01

    A multiple beam interference model based on the ray tracing method and interference theory is built to analyze the interference patterns of a capillary tube filled with a liquid. The relations between the angular widths of the interference fringes and the parameters of both the capillary and the liquid are derived. Based on these relations, an approach is proposed to simultaneously determine four parameters of the capillary system, i.e., the inner and outer radii of the capillary and the refractive indices of the liquid and of the wall material. PMID:26368114

  19. Inversion of canopy reflectance models for estimation of vegetation parameters

    NASA Technical Reports Server (NTRS)

    Goel, Narendra S.

    1987-01-01

    One of the keys to successful remote sensing of vegetation is to be able to estimate important agronomic parameters like leaf area index (LAI) and biomass (BM) from the bidirectional canopy reflectance (CR) data obtained by a space-shuttle or satellite borne sensor. One approach for such an estimation is through inversion of CR models which relate these parameters to CR. The feasibility of this approach was shown. The overall objective of the research carried out was to address heretofore uninvestigated but important fundamental issues, develop the inversion technique further, and delineate its strengths and limitations.

  20. Order parameter in complex dipolar structures: Microscopic modeling

    NASA Astrophysics Data System (ADS)

    Prosandeev, S.; Bellaiche, L.

    2008-02-01

    Microscopic models have been used to reveal the existence of an order parameter that is associated with many complex dipolar structures in magnets and ferroelectrics. This order parameter involves a double cross product of the local dipoles with their positions. It provides a measure of subtle microscopic features, such as the helicity of the two domains inherent to onion states, curvature of the dipolar pattern in flower states, or characteristics of sets of vortices with opposite chirality (e.g., distance between the vortex centers and/or the magnitude of their local dipoles).

  1. Considering Measurement Model Parameter Errors in Static and Dynamic Systems

    NASA Astrophysics Data System (ADS)

    Woodbury, Drew P.; Majji, Manoranjan; Junkins, John L.

    2011-07-01

    In static systems, state values are estimated using traditional least squares techniques based on a redundant set of measurements. Inaccuracies in measurement model parameter estimates can lead to significant errors in the state estimates. This paper describes a technique that considers these parameters in a modified least squares framework. It is also shown that this framework leads to the minimum variance solution. Both batch and sequential (recursive) least squares methods are described. One static system and one dynamic system are used as examples to show the benefits of the consider least squares methodology.

  2. Parameter estimation for a nonlinear control-oriented tokamak profile evolution model

    NASA Astrophysics Data System (ADS)

    Geelen, P.; Felici, F.; Merle, A.; Sauter, O.

    2015-12-01

    A control-oriented tokamak profile evolution model is crucial for the development and testing of control schemes for a fusion plasma. The RAPTOR (RApid Plasma Transport simulatOR) code was developed with this aim in mind (Felici 2011 Nucl. Fusion 51 083052). The performance of the control system strongly depends on the quality of the control-oriented model predictions. In RAPTOR a semi-empirical transport model is used, instead of a first-principles physics model, to describe the electron heat diffusivity χ_e in view of computational speed. The structure of the empirical model is given by the physics knowledge, and only some unknown physics of χ_e, which is more complicated and less well understood, is captured in its model parameters. Additionally, time-averaged sawtooth behavior is modeled by an ad hoc addition to the neoclassical conductivity σ_∥ and electron heat diffusivity. As a result, RAPTOR contains parameters that need to be estimated for a tokamak plasma to make reliable predictions. In this paper a generic parameter estimation method, based on nonlinear least-squares theory, was developed to estimate these model parameters. For the TCV tokamak, interpretative transport simulations that used measured T_e profiles were performed and it was shown that the developed method is capable of finding the model parameters such that RAPTOR's predictions agree within ten percent with the simulated q profile and twenty percent with the measured T_e profile. The newly developed model-parameter estimation procedure now results in a better description of a fusion plasma and allows for a less ad hoc and more automated method to implement RAPTOR on a variety of tokamaks.

  3. Additive Manufacturing of Anatomical Models from Computed Tomography Scan Data.

    PubMed

    Gür, Y

    2014-12-01

    The purpose of the study presented here was to investigate the manufacturability of human anatomical models from Computed Tomography (CT) scan data via a 3D desktop printer which uses fused deposition modelling (FDM) technology. First, Digital Imaging and Communications in Medicine (DICOM) CT scan data were converted to 3D Standard Triangle Language (STL) format by using the InVesalius digital imaging program. Once this STL file is obtained, a 3D physical version of the anatomical model can be fabricated by a desktop 3D FDM printer. As a case study, a patient's skull CT scan data were considered, and a tangible version of the skull was manufactured with a 3D FDM desktop printer. During the 3D printing process, the skull was built using acrylonitrile-butadiene-styrene (ABS) co-polymer plastic. The printed model showed that 3D FDM printing technology is able to fabricate anatomical models with high accuracy. As a result, the skull model can be used for preoperative surgical planning, medical training activities, and implant design and simulation, demonstrating the potential of FDM technology in the medical field. It will also improve communication between medical staff and patients. The current results indicate that a 3D desktop printer which uses FDM technology can be used to obtain accurate anatomical models. PMID:26336695

  4. Pursuing parameters for critical-density dark matter models

    NASA Astrophysics Data System (ADS)

    Liddle, Andrew R.; Lyth, David H.; Schaefer, R. K.; Shafi, Q.; Viana, Pedro T. P.

    1996-07-01

    We present an extensive comparison of models of structure formation with observations, based on linear and quasi-linear theory. We assume a critical matter density, and study both cold dark matter models and cold plus hot dark matter models. We explore a wide range of parameters, by varying the fraction of hot dark matter Ων, the Hubble parameter h and the spectral index of density perturbations n, and allowing for the possibility of gravitational waves from inflation influencing large-angle microwave background anisotropies. New calculations are made of the transfer functions describing the linear power spectrum, with special emphasis on improving the accuracy on short scales where there are strong constraints. For assessing early object formation, the transfer functions are explicitly evaluated at the appropriate redshift. The observations considered are the four-year COBE observations of microwave background anisotropies, peculiar velocity flows, the galaxy correlation function, and the abundances of galaxy clusters, quasars and damped Lyman alpha systems. Each observation is interpreted in terms of the power spectrum filtered by a top-hat window function. We find that there remains a viable region of parameter space for critical-density models when all the dark matter is cold, though h must be less than 0.5 before any fit is found and n significantly below unity is preferred. Once a hot dark matter component is invoked, a wide parameter space is acceptable, including n > 1. The allowed region is characterized by Ων ≲ 0.35 and 0.60 ≲ n ≲ 1.25, at 95 per cent confidence on at least one piece of data. There is no useful lower bound on h, and for curious combinations of the other parameters it is possible to fit the data with h as high as 0.65.

  5. Multi-objective parameter optimization of common land model using adaptive surrogate modelling

    NASA Astrophysics Data System (ADS)

    Gong, W.; Duan, Q.; Li, J.; Wang, C.; Di, Z.; Dai, Y.; Ye, A.; Miao, C.

    2014-06-01

    Parameter specification usually has a significant influence on the performance of land surface models (LSMs). However, estimating the parameters properly is a challenging task for the following reasons: (1) LSMs usually have too many adjustable parameters (20-100 or even more), leading to the curse of dimensionality in the parameter input space; (2) LSMs usually have many output variables involving the water/energy/carbon cycles, so that calibrating LSMs is actually a multi-objective optimization problem; (3) regional LSMs are expensive to run, while conventional multi-objective optimization methods need a huge number of model runs (typically 10^5-10^6). This makes parameter optimization computationally prohibitive. An uncertainty quantification framework was developed to meet the aforementioned challenges: (1) use parameter screening to reduce the number of adjustable parameters; (2) use surrogate models to emulate the response of the dynamic model to the variation of adjustable parameters; (3) use an adaptive strategy to promote the efficiency of surrogate-modeling-based optimization; (4) use a weighting function to transform the multi-objective optimization into a single-objective optimization. In this study, we demonstrate the uncertainty quantification framework on a single-column case study of a land surface model - the Common Land Model (CoLM) - and evaluate the effectiveness and efficiency of the proposed framework. The results indicate that this framework can achieve an optimal parameter set using only 411 model runs in total, and is worth extending to other large, complex dynamic models, such as regional land surface models, atmospheric models and climate models.
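
    The adaptive-surrogate loop can be sketched as follows; the cheap quadratic stand-in for the expensive land-surface model, the random-forest emulator and all settings are assumptions for illustration, not the study's framework.

      # Fit a cheap emulator to evaluated points, then spend the next expensive run
      # where the emulator predicts the best (lowest) objective.
      import numpy as np
      from sklearn.ensemble import RandomForestRegressor

      rng = np.random.default_rng(6)
      expensive = lambda x: np.sum((x - 0.3) ** 2, axis=-1)     # stand-in weighted objective

      X = rng.uniform(0, 1, size=(20, 5))                       # initial design, 5 parameters
      y = expensive(X)
      for _ in range(10):                                       # adaptive refinement loop
          surrogate = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
          cand = rng.uniform(0, 1, size=(5000, 5))
          x_new = cand[np.argmin(surrogate.predict(cand))]      # most promising candidate
          X, y = np.vstack([X, x_new]), np.append(y, expensive(x_new))
      print("best objective after %d expensive runs: %.4f" % (len(y), y.min()))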

  6. Variations in environmental tritium doses due to meteorological data averaging and uncertainties in pathway model parameters

    SciTech Connect

    Kock, A.

    1996-05-01

    The objectives of this research are: (1) to calculate and compare off-site doses from atmospheric tritium releases at the Savannah River Site using monthly versus 5-year meteorological data and annual source terms, including additional seasonal and site-specific parameters not included in present annual assessments; and (2) to calculate the range of the above dose estimates based on distributions of model parameters given by uncertainty estimates found in the literature. Consideration will be given to the sensitivity of parameters given in former studies.

  7. Sonochemical degradation of the pharmaceutical fluoxetine: Effect of parameters, organic and inorganic additives and combination with a biological system.

    PubMed

    Serna-Galvis, Efraím A; Silva-Agredo, Javier; Giraldo-Aguirre, Ana L; Torres-Palma, Ricardo A

    2015-08-15

    Fluoxetine (FLX), one of the most widely used antidepressants in the world, is an emerging pollutant found in natural waters that causes disrupting effects on the endocrine systems of some aquatic species. This work explores the total elimination of FLX by sonochemical treatment coupled to a biological system. The biological process acting alone was shown to be unable to remove the pollutant, even under favourable conditions of pH and temperature. However, sonochemical treatment (600 kHz) was shown to be able to remove the pharmaceutical. Several parameters were evaluated for the ultrasound application: the applied power (20-60 W), the dissolved gas (air, Ar and He), the pH (3-11) and the initial concentration of fluoxetine (2.9-162.0 μmol L-1). Additionally, the presence of organic (1-hexanol and 2-propanol) and inorganic (Fe2+) compounds in the water matrix and the degradation of FLX in a natural mineral water were evaluated. The sonochemical treatment readily eliminates FLX, following Langmuir-type kinetics. After 360 min of ultrasonic irradiation, 15% mineralization was achieved. Analysis of the biodegradability provided evidence that the sonochemical process transforms the pollutant into biodegradable substances, which can then be mineralized in a subsequent biological treatment. PMID:25912531

  8. Nano-Fe as feed additive improves the hematological and immunological parameters of fish, Labeo rohita H.

    NASA Astrophysics Data System (ADS)

    Behera, T.; Swain, P.; Rangacharulu, P. V.; Samanta, M.

    2014-08-01

    An experiment was conducted to compare the effects of iron oxide nanoparticles (T1) and ferrous sulfate (T2) on the Indian major carp, Labeo rohita H. There were significant differences (P < 0.05) in the final weight of T1 and T2 compared with the control. Survival rates were not affected by the dietary treatments. Fish fed a basal diet (control) showed lower (P < 0.05) iron content in muscle compared to T1 and T2. Furthermore, the highest value (P < 0.05) of iron content was observed in T1. In addition, RBC and hemoglobin levels were significantly higher in T1 as compared to the other treated groups. Different innate immune parameters, such as respiratory burst activity, bactericidal activity and myeloperoxidase activity, were higher for the nano-Fe-treated diet (T1) than for the other iron source (T2) and the control at 30 days post-feeding. Moreover, nano-Fe appeared to be more effective (P < 0.05) than ferrous sulfate in increasing muscle iron and hemoglobin contents. Dietary administration of nano-Fe did not cause any oxidative damage, but improved antioxidant enzymatic activities (SOD and GSH levels) irrespective of the different iron sources in the basal diet.

  9. Addition of Diffusion Model to MELCOR and Comparison with Data

    SciTech Connect

    Brad Merrill; Richard Moore; Chang Oh

    2004-06-01

    A chemical diffusion model was incorporated into the thermal-hydraulics package of the MELCOR Severe Accident code (Reference 1) for analyzing air ingress events for a very high temperature gas-cooled reactor.

  10. Modelling dissimilarity: generalizing ultrametric and additive tree representations.

    PubMed

    Hubert, L; Arabie, P; Meulman, J

    2001-05-01

    Methods for the hierarchical clustering of an object set produce a sequence of nested partitions such that object classes within each successive partition are constructed from the union of object classes present at the previous level. Any such sequence of nested partitions can in turn be characterized by an ultrametric. An approach to generalizing an (ultrametric) representation is proposed in which the nested character of the partition sequence is relaxed and replaced by the weaker requirement that the classes within each partition contain objects consecutive with respect to a fixed ordering of the objects. A method for fitting such a structure to a given proximity matrix is discussed, along with several alternative strategies for graphical representation. Using this same ultrametric extension, additive tree representations can also be generalized by replacing the ultrametric component in the decomposition of an additive tree (into an ultrametric and a centroid metric). A common numerical illustration is developed and maintained throughout the paper. PMID:11393895
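
    As a brief illustration of the ultrametric that the paper takes as its starting point, the cophenetic distances of a hierarchical clustering form exactly such a structure; the five toy objects below are assumptions for the example.

      # Ultrametric (cophenetic) distances induced by average-linkage clustering.
      import numpy as np
      from scipy.cluster.hierarchy import linkage, cophenet
      from scipy.spatial.distance import pdist, squareform

      X = np.array([[0.0], [0.2], [1.0], [1.1], [3.0]])   # five objects on a line
      d = pdist(X)
      Z = linkage(d, method="average")
      ultra = squareform(cophenet(Z, d)[1])                # ultrametric distance matrix
      print(np.round(ultra, 2))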

  11. A robust method for the joint estimation of yield coefficients and kinetic parameters in bioprocess models.

    PubMed

    Vastemans, V; Rooman, M; Bogaerts, Ph

    2009-01-01

    Bioprocess model structures that require nonlinear parameter estimation, and thus initialization values, are often subject to poor identification performance because of the uncertainty in those initialization values. Under some conditions on the model structure, it is possible to partially circumvent this problem by an appropriate decoupling of the linear part of the model from the nonlinear part. This article provides a procedure to be followed when these structural conditions are not satisfied. An original method is presented for decoupling two sets of parameters, namely the kinetic parameters from the maximum growth, production and decay rates and the yield coefficients. It has the advantage of requiring initialization only of the first subset of parameters. In comparison with a classical nonlinear estimation procedure, in which all the parameters are freed, the results show enhanced robustness of model identification with regard to parameter initialization errors. This is illustrated by means of three simulation case studies: a fed-batch Human Embryo Kidney cell cultivation process using a macroscopic reaction scheme description, a process of cyclodextrin-glucanotransferase production by Bacillus circulans, and a process of simultaneous starch saccharification and glucose fermentation to lactic acid by Lactobacillus delbrückii, the latter two based on a Luedeking-Piret model structure. Additionally, the perspectives of the presented procedure in the context of systematic bioprocess modeling are promising. PMID:19455623

  12. Linear parameter varying battery model identification using subspace methods

    NASA Astrophysics Data System (ADS)

    Hu, Y.; Yurkovich, S.

    2011-03-01

    The advent of hybrid and plug-in hybrid electric vehicles has created a demand for more precise battery pack management systems (BMS). Among methods used to design various components of a BMS, such as state-of-charge (SoC) estimators, model based approaches offer a good balance between accuracy, calibration effort and implementability. Because models used for these approaches are typically low in order and complexity, the traditional approach is to identify linear (or slightly nonlinear) models that are scheduled based on operating conditions. These models, formally known as linear parameter varying (LPV) models, tend to be difficult to identify because they contain a large amount of coefficients that require calibration. Consequently, the model identification process can be very laborious and time-intensive. This paper describes a comprehensive identification algorithm that uses linear-algebra-based subspace methods to identify a parameter varying state variable model that can describe the input-to-output dynamics of a battery under various operating conditions. Compared with previous methods, this approach is much faster and provides the user with information on the order of the system without placing an a priori structure on the system matrices. The entire process and various nuances are demonstrated using data collected from a lithium ion battery, and the focus is on applications for energy storage in automotive applications.

  13. Unrealistic parameter estimates in inverse modelling: A problem or a benefit for model calibration?

    USGS Publications Warehouse

    Poeter, E.P.; Hill, M.C.

    1996-01-01

    Estimation of unrealistic parameter values by inverse modelling is useful for constructed model discrimination. This utility is demonstrated using the three-dimensional, groundwater flow inverse model MODFLOWP to estimate parameters in a simple synthetic model where the true conditions and character of the errors are completely known. When a poorly constructed model is used, unreasonable parameter values are obtained even when using error free observations and true initial parameter values. This apparent problem is actually a benefit because it differentiates accurately and inaccurately constructed models. The problems seem obvious for a synthetic problem in which the truth is known, but are obscure when working with field data. Situations in which unrealistic parameter estimates indicate constructed model problems are illustrated in applications of inverse modelling to three field sites and to complex synthetic test cases in which it is shown that prediction accuracy also suffers when constructed models are inaccurate.

  14. Neural Models: An Option to Estimate Seismic Parameters of Accelerograms

    NASA Astrophysics Data System (ADS)

    Alcántara, L.; García, S.; Ovando-Shelley, E.; Macías, M. A.

    2014-12-01

    Seismic instrumentation for recording strong earthquakes in Mexico goes back to the 1960s, owing to the activities carried out by the Institute of Engineering at Universidad Nacional Autónoma de México. However, it was after the large earthquake of September 19, 1985 (M=8.1) that the seismic instrumentation project assumed great importance. Currently, strong ground motion networks have been installed for monitoring seismic activity, mainly along the Mexican subduction zone and in Mexico City. Nevertheless, other major regions and cities that can be affected by strong earthquakes have not yet begun a seismic instrumentation program, or such a program is still in development. Because of this situation, several relevant earthquakes (e.g., Huajuapan de León, Oct 24, 1980, M=7.1; Tehuacán, Jun 15, 1999, M=7; and Puerto Escondido, Sep 30, 1999, M=7.5) were not recorded properly in cities such as Puebla and Oaxaca, which were damaged during those earthquakes. Fortunately, good maintenance of the seismic network has permitted the recording of a substantial number of small events in those cities. In this research we therefore present a methodology based on neural networks to estimate significant duration and, in some cases, the response spectra for such seismic events. The neural model developed predicts significant duration in terms of magnitude, epicentral distance, focal depth and soil characterization; for the response spectra, a vector of spectral accelerations is used. For training the model we selected a set of accelerogram records obtained from the small events recorded by the strong motion instruments installed in the cities of Puebla and Oaxaca. The final results show that neural networks, as a soft computing tool using a multi-layer feed-forward architecture, provide good estimates of the target parameters and have good predictive capacity for strong ground motion duration and response spectra.
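
    A hedged sketch of the kind of feed-forward regression described above, using entirely synthetic records and hypothetical predictor ranges rather than the Puebla/Oaxaca accelerograms:

        import numpy as np
        from sklearn.neural_network import MLPRegressor
        from sklearn.preprocessing import StandardScaler
        from sklearn.pipeline import make_pipeline

        rng = np.random.default_rng(2)
        n = 500
        M = rng.uniform(4.0, 8.0, n)          # magnitude
        R = rng.uniform(20, 400, n)           # epicentral distance, km
        H = rng.uniform(5, 120, n)            # focal depth, km
        site = rng.integers(0, 3, n)          # crude site-class index
        duration = 2.0 * M + 0.05 * R + 0.02 * H + 3.0 * site + rng.normal(0, 2, n)  # toy target, seconds

        X = np.column_stack([M, R, H, site])
        model = make_pipeline(StandardScaler(),
                              MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=2000, random_state=0))
        model.fit(X, duration)
        print("predicted duration for M6.5 at 100 km, 30 km deep, site 1:",
              model.predict([[6.5, 100, 30, 1]])[0])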

  15. Dynamic imaging model and parameter optimization for a star tracker.

    PubMed

    Yan, Jinyun; Jiang, Jie; Zhang, Guangjun

    2016-03-21

    Under dynamic conditions, star spots move across the image plane of a star tracker and form a smeared star image. This smearing effect increases errors in star position estimation and degrades attitude accuracy. First, an analytical energy distribution model of a smeared star spot is established based on a line segment spread function because the dynamic imaging process of a star tracker is equivalent to the static imaging process of linear light sources. The proposed model, which has a clear physical meaning, explicitly reflects the key parameters of the imaging process, including incident flux, exposure time, velocity of a star spot in an image plane, and Gaussian radius. Furthermore, an analytical expression of the centroiding error of the smeared star spot is derived using the proposed model. An accurate and comprehensive evaluation of centroiding accuracy is obtained based on the expression. Moreover, analytical solutions of the optimal parameters are derived to achieve the best performance in centroid estimation. Finally, we perform numerical simulations and a night sky experiment to validate the correctness of the dynamic imaging model, the centroiding error expression, and the optimal parameters. PMID:27136791

  16. Modeling crash spatial heterogeneity: random parameter versus geographically weighting.

    PubMed

    Xu, Pengpeng; Huang, Helai

    2015-02-01

    The widely adopted techniques for regional crash modeling include the negative binomial model (NB) and the Bayesian negative binomial model with conditional autoregressive prior (CAR). The outputs from both models consist of a set of fixed global parameter estimates. However, the impacts of predictor variables on crash counts might not be stationary over space. This study quantitatively investigated this spatial heterogeneity in regional safety modeling using two advanced approaches, i.e., the random parameter negative binomial model (RPNB) and the semi-parametric geographically weighted Poisson regression model (S-GWPR). Based on a 3-year data set from the county of Hillsborough, Florida, results revealed that (1) both RPNB and S-GWPR successfully capture the spatially varying relationship, but the two methods yield notably different sets of results; (2) the S-GWPR performs best, with the highest value of Rd^2 as well as the lowest mean absolute deviance and Akaike information criterion measures, whereas the RPNB is comparable to the CAR and in some cases provides less accurate predictions; (3) a moderately significant spatial correlation is found in the residuals of RPNB and NB, implying their inadequacy in accounting for the spatial correlation existing across adjacent zones. As crash data are typically collected with reference to a location dimension, it is desirable to first use the geographical component to explore explicitly spatial aspects of the crash data (i.e., the spatial heterogeneity, or the spatially structured varying relationships), and then to address unobserved heterogeneity with non-spatial or fuzzy techniques. The S-GWPR is shown to be more appropriate for regional crash modeling, as it outperforms the global models in capturing the spatial heterogeneity occurring in the relationships being modeled and, compared with the non-spatial model, it is capable of accounting for the spatial correlation in crash data. PMID:25460087
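
    For orientation, the fixed-parameter negative binomial baseline that the RPNB and S-GWPR generalize can be sketched as follows, using synthetic zone-level data and hypothetical predictors (the study's Hillsborough County data are not reproduced here):

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(3)
        n_zones = 200
        vmt = rng.uniform(1, 50, n_zones)            # vehicle-miles travelled, a hypothetical exposure measure
        density = rng.uniform(0.1, 10, n_zones)      # intersection density, hypothetical
        mu = np.exp(0.5 + 0.03 * vmt + 0.1 * density)
        crashes = rng.poisson(rng.gamma(2.0, mu / 2.0))  # overdispersed zone-level counts

        X = sm.add_constant(np.column_stack([vmt, density]))
        nb = sm.GLM(crashes, X, family=sm.families.NegativeBinomial(alpha=0.5)).fit()
        print(nb.params)                             # one fixed global coefficient per predictor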

  17. Modeling association among demographic parameters in analysis of open population capture-recapture data

    USGS Publications Warehouse

    Link, W.A.; Barker, R.J.

    2005-01-01

    We present a hierarchical extension of the Cormack-Jolly-Seber (CJS) model for open population capture-recapture data. In addition to recaptures of marked animals, we model first captures of animals and losses on capture. The parameter set includes capture probabilities, survival rates, and birth rates. The survival rates and birth rates are treated as a random sample from a bivariate distribution, thus the model explicitly incorporates correlation in these demographic rates. A key feature of the model is that the likelihood function, which includes a CJS model factor, is expressed entirely in terms of identifiable parameters; losses on capture can be factored out of the model. Since the computational complexity of classical likelihood methods is prohibitive, we use Markov chain Monte Carlo in a Bayesian analysis. We describe an efficient candidate-generation scheme for Metropolis-Hastings sampling of CJS models and extensions. The procedure is illustrated using mark-recapture data for the moth Gonodontis bidentata.

  18. Dependency of parameter values of a crop model on the spatial scale of simulation

    NASA Astrophysics Data System (ADS)

    Iizumi, Toshichika; Tanaka, Yukiko; Sakurai, Gen; Ishigooka, Yasushi; Yokozawa, Masayuki

    2014-09-01

    Reliable regional-scale representation of crop growth and yields has been increasingly important in earth system modeling for the simulation of atmosphere-vegetation-soil interactions in managed ecosystems. While the parameter values in many crop models are location specific or cultivar specific, the validity of such values for regional simulation is in question. We present the scale dependency of likely parameter values that are related to the responses of growth rate and yield to temperature, using the paddy rice model applied to Japan as an example. For all regions, values of the two parameters that determine the degree of yield response to low temperature (the base temperature for calculating cooling degree days and the curvature factor of spikelet sterility caused by low temperature) appeared to change relative to the grid interval. Two additional parameters (the air temperature at which the developmental rate is half of the maximum rate at the optimum temperature and the value of developmental index at which point the crop becomes sensitive to the photoperiod) showed scale dependency in a limited region, whereas the remaining three parameters that determine the phenological characteristics of a rice cultivar and the technological level show no clear scale dependency. These results indicate the importance of using appropriate parameter values for the spatial scale at which a crop model operates. We recommend avoiding the use of location-specific or cultivar-specific parameter values for regional crop simulation, unless a rationale is presented suggesting these values are insensitive to spatial scale.

  19. HOM study and parameter calculation of the TESLA cavity model

    NASA Astrophysics Data System (ADS)

    Zeng, Ri-Hua; Schuh, Marcel; Gerigk, Frank; Wegner, Rolf; Pan, Wei-Min; Wang, Guang-Wei; Liu, Rong

    2010-01-01

    The Superconducting Proton Linac (SPL) is the project for a superconducting, high-current H− accelerator at CERN. To find dangerous higher order modes (HOMs) in the SPL superconducting cavities, simulation and analysis of the cavity model using simulation tools are necessary. The existing TESLA 9-cell cavity geometry data have been used for the initial construction of the models in HFSS. Monopole, dipole and quadrupole modes have been obtained by applying different symmetry boundaries on various cavity models. The HFSS scripting language was used to create scripts that automatically calculate the parameters of the modes in these cavity models (these scripts can also be applied to other cavities with different cell numbers and geometric structures). The results calculated automatically are then compared with the values given in the TESLA paper. The optimized cavity model with the minimum error will be taken as the base for further simulation of the SPL cavities.

  20. Neural mass model parameter identification for MEG/EEG

    NASA Astrophysics Data System (ADS)

    Kybic, Jan; Faugeras, Olivier; Clerc, Maureen; Papadopoulo, Théo

    2007-03-01

    Electroencephalography (EEG) and magnetoencephalography (MEG) have excellent time resolution. However, the poor spatial resolution and small number of sensors do not permit the reconstruction of a general spatial activation pattern. Moreover, the low signal to noise ratio (SNR) makes accurate reconstruction of a time course also challenging. We therefore propose to use constrained reconstruction, modeling the relevant part of the brain using a neural mass model: a small number of zones are treated as entities, and neurons within a zone are assumed to be activated simultaneously. The location and spatial extent of the zones as well as the interzonal connection pattern can be determined from functional MRI (fMRI), diffusion tensor MRI (DTMRI), and other anatomical and brain mapping observation techniques. The observation model is linear; its deterministic part is known from EEG/MEG forward modeling, and the statistics of the stochastic part can be estimated. The dynamics of the neural model is described by a moderate number of parameters that can be estimated from the recorded EEG/MEG data. We explicitly model the long-distance communication delays. Our parameters have physiological meaning and their plausible range is known. Since the problem is highly nonlinear, a quasi-Newton optimization method with random sampling and automatic success evaluation is used. The actual connection topology can be identified from several possibilities. The method was tested on synthetic data as well as on true MEG somatosensory-evoked field (SEF) data.

  1. Concentration Addition, Independent Action and Generalized Concentration Addition Models for Mixture Effect Prediction of Sex Hormone Synthesis In Vitro

    PubMed Central

    Hadrup, Niels; Taxvig, Camilla; Pedersen, Mikael; Nellemann, Christine; Hass, Ulla; Vinggaard, Anne Marie

    2013-01-01

    Humans are concomitantly exposed to numerous chemicals. An infinite number of combinations and doses thereof can be imagined. For toxicological risk assessment the mathematical prediction of mixture effects, using knowledge on single chemicals, is therefore desirable. We investigated pros and cons of the concentration addition (CA), independent action (IA) and generalized concentration addition (GCA) models. First we measured effects of single chemicals and mixtures thereof on steroid synthesis in H295R cells. Then single chemical data were applied to the models; predictions of mixture effects were calculated and compared to the experimental mixture data. Mixture 1 contained environmental chemicals adjusted in ratio according to human exposure levels. Mixture 2 was a potency adjusted mixture containing five pesticides. Prediction of testosterone effects coincided with the experimental Mixture 1 data. In contrast, antagonism was observed for effects of Mixture 2 on this hormone. The mixtures contained chemicals exerting only limited maximal effects. This hampered prediction by the CA and IA models, whereas the GCA model could be used to predict a full dose response curve. Regarding effects on progesterone and estradiol, some chemicals were having stimulatory effects whereas others had inhibitory effects. The three models were not applicable in this situation and no predictions could be performed. Finally, the expected contributions of single chemicals to the mixture effects were calculated. Prochloraz was the predominant but not sole driver of the mixtures, suggesting that one chemical alone was not responsible for the mixture effects. In conclusion, the GCA model seemed to be superior to the CA and IA models for the prediction of testosterone effects. A situation with chemicals exerting opposing effects, for which the models could not be applied, was identified. In addition, the data indicate that in non-potency adjusted mixtures the effects cannot always be
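
    The CA and IA predictions compared in the study can be sketched as follows for hypothetical Hill-type dose-response curves (the parameters are placeholders, not the H295R estimates):

        import numpy as np

        def hill(c, ec50, top=1.0, n=1.0):
            """Fractional effect of a single chemical at concentration c."""
            return top * c**n / (ec50**n + c**n)

        ec50 = np.array([1.0, 5.0, 20.0])      # single-chemical EC50s (hypothetical)
        p = np.array([0.2, 0.3, 0.5])          # mixture ratio of the three chemicals

        # Concentration addition: mixture concentration producing a 50% effect
        ec50_mix_ca = 1.0 / np.sum(p / ec50)

        # Independent action: effect at a total mixture concentration c_mix
        def ia_effect(c_mix):
            return 1.0 - np.prod([1.0 - hill(p_i * c_mix, e) for p_i, e in zip(p, ec50)])

        print("CA-predicted mixture EC50:", ec50_mix_ca)
        print("IA-predicted effect at that concentration:", ia_effect(ec50_mix_ca))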

  2. Parameter discovery in stochastic biological models using simulated annealing and statistical model checking.

    PubMed

    Hussain, Faraz; Jha, Sumit K; Jha, Susmit; Langmead, Christopher J

    2014-01-01

    Stochastic models are increasingly used to study the behaviour of biochemical systems. While the structure of such models is often readily available from first principles, unknown quantitative features of the model are incorporated into the model as parameters. Algorithmic discovery of parameter values from experimentally observed facts remains a challenge for the computational systems biology community. We present a new parameter discovery algorithm that uses simulated annealing, sequential hypothesis testing, and statistical model checking to learn the parameters in a stochastic model. We apply our technique to a model of glucose and insulin metabolism used for in-silico validation of artificial pancreata and demonstrate its effectiveness by developing a parallel CUDA-based implementation for parameter synthesis in this model. PMID:24989866
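
    A schematic sketch of a simulated-annealing parameter search whose objective is evaluated by repeated stochastic simulation, in the spirit of the approach described above; the toy model, event definition and cooling schedule are assumptions, not the authors' algorithm:

        import numpy as np

        rng = np.random.default_rng(4)
        target_prob = 0.7                                 # observed frequency of some event (hypothetical)

        def simulate_event_prob(theta, n_runs=200):
            """Monte Carlo estimate of P(event) under parameter theta for a toy stochastic model."""
            return np.mean(rng.exponential(theta, n_runs) > 1.0)

        def energy(theta):
            return abs(simulate_event_prob(theta) - target_prob)

        theta, temp = 0.5, 1.0
        best = (energy(theta), theta)
        for step in range(300):
            cand = abs(theta + rng.normal(0, 0.2))
            d_e = energy(cand) - energy(theta)
            if d_e < 0 or rng.random() < np.exp(-d_e / temp):   # Metropolis acceptance rule
                theta = cand
            best = min(best, (energy(theta), theta))
            temp *= 0.99                                        # geometric cooling schedule
        print("best theta found:", best[1])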

  3. Additional interfacial force in lattice Boltzmann models for incompressible multiphase flows.

    PubMed

    Li, Q; Luo, K H; Gao, Y J; He, Y L

    2012-02-01

    The existing lattice Boltzmann models for incompressible multiphase flows are mostly constructed with two distribution functions: one is the order parameter distribution function, which is used to track the interface between different phases, and the other is the pressure distribution function for solving the velocity field. In this paper, it is shown that in these models the recovered momentum equation is inconsistent with the target one: an additional force is included in the recovered momentum equation. The additional force has the following features. First, it is proportional to the macroscopic velocity. Second, it is zero in every single-phase region but is nonzero in the interface. Therefore it can be interpreted as an interfacial force. To investigate the effects of the additional interfacial force, numerical simulations are carried out for the problem of Rayleigh-Taylor instability, droplet splashing on a thin liquid film, and the evolution of a falling droplet under gravity. Numerical results demonstrate that, with the increase of the velocity or the Reynolds number, the additional interfacial force will gradually have an important influence on the interface and affect the numerical accuracy. PMID:22463354

  4. Model for Assembly Line Re-Balancing Considering Additional Capacity and Outsourcing to Face Demand Fluctuations

    NASA Astrophysics Data System (ADS)

    Samadhi, TMAA; Sumihartati, Atin

    2016-02-01

    The most critical stage in the garment industry is the sewing process, because it generally consists of a number of operations and a large number of sewing machines for each operation. Therefore, it requires a balancing method that can assign tasks to work stations with balanced workloads. Many studies on assembly line balancing assume a new assembly line, but in reality, due to demand fluctuations and increases, re-balancing is needed. To cope with such fluctuating demand, additional capacity can be provided by investing in spare sewing machines and by paying for sewing services through outsourcing. This study develops an assembly line balancing (ALB) model for an existing line to cope with fluctuating demand. Capacity redesign is decided when fluctuating demand exceeds the available capacity, through a combination of investment in new machines and outsourcing, while minimizing the cost of idle capacity in the future. The objective of the model is to minimize the total cost of the assembly line, which consists of operating costs, machine cost, added-capacity cost, losses due to idle capacity and outsourcing costs. The model developed is based on an integer programming formulation. The model is tested on one year of demand data with an existing fleet of 41 sewing machines. The result shows that a maximum additional capacity of up to 76 machines is required when there is an increase of 60% over the average demand, with equal cost parameters.

  5. Analysis and Modeling of soil hydrology under different soil additives in artificial runoff plots

    NASA Astrophysics Data System (ADS)

    Ruidisch, M.; Arnhold, S.; Kettering, J.; Huwe, B.; Kuzyakov, Y.; Ok, Y.; Tenhunen, J. D.

    2009-12-01

    The impact of monsoon events during June and July in the Korean project region, the Haean Basin, located in the northeastern part of South Korea, plays a key role in erosion, leaching and the risk of groundwater pollution by agrochemicals. The project therefore investigates the main hydrological processes in agricultural soils under field and laboratory conditions on different scales (plot, hillslope and catchment). Soil hydrological parameters were analysed for different soil additives, which are known to prevent soil erosion and nutrient loss as well as to increase water infiltration, aggregate stability and soil fertility. Hence, synthetic water-soluble polyacrylamides (PAM), biochar (black carbon mixed with organic fertilizer), and a combination of PAM and biochar were applied in runoff plots at three agricultural field sites. Additionally, a control subplot was set up without any additives. The field sites were selected in areas with similar hillslope gradients and with emphasis on the dominant land management form of dryland farming in Haean, which is characterised by row planting and row covering by foil. Hydrological parameters such as saturated hydraulic conductivity, matric potential and water content were analysed by infiltration experiments, continuous tensiometer measurements, time domain reflectometry and pressure plates to identify characteristic water retention curves of each horizon. Weather data were observed by three weather stations next to the runoff plots. The measured data also provide the input for modeling water transport in the unsaturated zone of the runoff plots with HYDRUS 1D/2D/3D and SWAT (Soil & Water Assessment Tool).

  6. Automated parameter estimation for biological models using Bayesian statistical model checking

    PubMed Central

    2015-01-01

    Background Probabilistic models have gained widespread acceptance in the systems biology community as a useful way to represent complex biological systems. Such models are developed using existing knowledge of the structure and dynamics of the system, experimental observations, and inferences drawn from statistical analysis of empirical data. A key bottleneck in building such models is that some system variables cannot be measured experimentally. These variables are incorporated into the model as numerical parameters. Determining values of these parameters that justify existing experiments and provide reliable predictions when model simulations are performed is a key research problem. Domain experts usually estimate the values of these parameters by fitting the model to experimental data. Model fitting is usually expressed as an optimization problem that requires minimizing a cost-function which measures some notion of distance between the model and the data. This optimization problem is often solved by combining local and global search methods that tend to perform well for the specific application domain. When some prior information about parameters is available, methods such as Bayesian inference are commonly used for parameter learning. Choosing the appropriate parameter search technique requires detailed domain knowledge and insight into the underlying system. Results Using an agent-based model of the dynamics of acute inflammation, we demonstrate a novel parameter estimation algorithm by discovering the amount and schedule of doses of bacterial lipopolysaccharide that guarantee a set of observed clinical outcomes with high probability. We synthesized values of twenty-eight unknown parameters such that the parameterized model instantiated with these parameter values satisfies four specifications describing the dynamic behavior of the model. Conclusions We have developed a new algorithmic technique for discovering parameters in complex stochastic models of

  7. The addition of algebraic turbulence modeling to program LAURA

    NASA Technical Reports Server (NTRS)

    Cheatwood, F. Mcneil; Thompson, R. A.

    1993-01-01

    The Langley Aerothermodynamic Upwind Relaxation Algorithm (LAURA) is modified to allow the calculation of turbulent flows. This is accomplished using the Cebeci-Smith and Baldwin-Lomax eddy-viscosity models in conjunction with the thin-layer Navier-Stokes options of the program. Turbulent calculations can be performed for both perfect-gas and equilibrium flows. However, a requirement of the models is that the flow be attached. It is seen that for slender bodies, adequate resolution of the boundary-layer gradients may require more cells in the normal direction than a laminar solution, even when grid stretching is employed. Results for axisymmetric and three-dimensional flows are presented. Comparison with experimental data and other numerical results reveal generally good agreement, except in the regions of detached flow.

  8. Water quality modelling for ephemeral rivers: Model development and parameter assessment

    NASA Astrophysics Data System (ADS)

    Mannina, Giorgio; Viviani, Gaspare

    2010-11-01

    River water quality models can be valuable tools for the assessment and management of receiving water body quality. However, such water quality models require accurate model calibration in order to specify model parameters. Reliable model calibration requires an extensive array of water quality data that are generally rare and resource-intensive, both economically and in terms of human resources, to collect. In the case of small rivers, such data are scarce due to the fact that these rivers are generally considered too insignificant, from a practical and economic viewpoint, to justify the investment of such considerable time and resources. As a consequence, the literature contains very few studies on the water quality modelling for small rivers, and such studies as have been published are fairly limited in scope. In this paper, a simplified river water quality model is presented. The model is an extension of the Streeter-Phelps model and takes into account the physico-chemical and biological processes most relevant to modelling the quality of receiving water bodies (i.e., degradation of dissolved carbonaceous substances, ammonium oxidation, algal uptake and denitrification, dissolved oxygen balance, including depletion by degradation processes and supply by physical reaeration and photosynthetic production). The model has been applied to an Italian case study, the Oreto river (IT), which has been the object of an Italian research project aimed at assessing the river's water quality. For this reason, several monitoring campaigns have been previously carried out in order to collect water quantity and quality data on this river system. In particular, twelve river cross sections were monitored, and both flow and water quality data were collected for each cross section. The results of the calibrated model show satisfactory agreement with the measured data and results reveal important differences between the parameters used to model small rivers as compared to
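
    For reference, the classical Streeter-Phelps dissolved-oxygen sag on which the extended model builds can be sketched as follows; the rate constants and initial conditions are illustrative placeholders, not Oreto river values:

        import numpy as np

        def do_deficit(t, L0=10.0, D0=1.0, kd=0.35, ka=0.7):
            """Oxygen deficit (mg/L) at travel time t (days) downstream of a BOD load L0."""
            return kd * L0 / (ka - kd) * (np.exp(-kd * t) - np.exp(-ka * t)) + D0 * np.exp(-ka * t)

        t = np.linspace(0, 10, 11)
        print(np.round(do_deficit(t), 2))   # deficit rises to a critical point, then the river recovers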

  9. Modeling and Extraction of Parasitic Thermal Conductance and Intrinsic Model Parameters of Thermoelectric Modules

    NASA Astrophysics Data System (ADS)

    Sim, Minseob; Park, Hyunbin; Kim, Shiho

    2015-11-01

    We have presented both a model and a method for extracting parasitic thermal conductance as well as intrinsic device parameters of a thermoelectric module based on information readily available in vendor datasheets. An equivalent circuit model that is compatible with circuit simulators is derived, followed by a methodology for extracting both intrinsic and parasitic model parameters. For the first time, the effective thermal resistance of the ceramic and copper interconnect layers of the thermoelectric module is extracted using only parameters listed in vendor datasheets. Under experimental conditions, including varying electric current, the parameters extracted from the model accurately reproduce the performance of commercial thermoelectric modules.

  10. Modeling shortest path selection of the ant Linepithema humile using psychophysical theory and realistic parameter values.

    PubMed

    von Thienen, Wolfhard; Metzler, Dirk; Witte, Volker

    2015-05-01

    The emergence of self-organizing behavior in ants has been modeled in various theoretical approaches in the past decades. One model explains experimental observations in which Argentine ants (Linepithema humile) selected the shorter of two alternative paths from their nest to a food source (shortest path experiments). This model serves as an important example for the emergence of collective behavior and self-organization in biological systems. In addition, it inspired the development of computer algorithms for optimization problems called ant colony optimization (ACO). In the model, a choice function describing how ants react to different pheromone concentrations is fundamental. However, the parameters of the choice function were not deduced experimentally but freely adapted so that the model fitted the observations of the shortest path experiments. Thus, important knowledge was lacking about crucial model assumptions. A recent study on the Argentine ant provided this information by measuring the response of the ants to varying pheromone concentrations. In said study, the above mentioned choice function was fitted to the experimental data and its parameters were deduced. In addition, a psychometric function was fitted to the data and its parameters deduced. Based on these findings, it is possible to test the shortest path model by applying realistic parameter values. Here we present the results of such tests using Monte Carlo simulations of shortest path experiments with Argentine ants. We compare the choice function and the psychometric function, both with parameter values deduced from the above-mentioned experiments. Our results show that by applying the psychometric function, the shortest path experiments can be explained satisfactorily by the model. The study represents the first example of how psychophysical theory can be used to understand and model collective foraging behavior of ants based on trail pheromones. These findings may be important for other
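
    A minimal sketch of the classic pheromone choice function referred to above, together with a toy branch-choice simulation; the parameter values k and n, and the reinforcement rule, are placeholders, not the values fitted in the cited experiments:

        import numpy as np

        def choice_prob(c_a, c_b, k=6.0, n=2.0):
            """Probability of choosing branch A given pheromone concentrations c_a and c_b."""
            return (k + c_a) ** n / ((k + c_a) ** n + (k + c_b) ** n)

        rng = np.random.default_rng(5)
        c_short, c_long = 0.0, 0.0
        for ant in range(1000):
            if rng.random() < choice_prob(c_short, c_long):
                c_short += 2.0        # toy assumption: the shorter branch is reinforced more strongly
            else:
                c_long += 1.0
        print("final share of pheromone on the short branch:",
              round(c_short / (c_short + c_long), 2))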

  11. Bias, precision, and parameter redundancy in complex multistate models with unobservable states.

    PubMed

    Bailey, Larissa L; Converse, Sarah J; Kendall, William L

    2010-06-01

    Multistate mark-recapture models with unobservable states can yield unbiased estimators of survival probabilities in the presence of temporary emigration (i.e., in cases where some individuals are temporarily unavailable for capture). In addition, these models permit the estimation of transition probabilities between states, which may themselves be of interest; for example, when only breeding animals are available for capture. However, parameter redundancy is frequently a problem in these models, yielding biased parameter estimates and influencing model selection. Using numerical methods, we examine complex multistate mark-recapture models involving two observable and two unobservable states. This model structure was motivated by two different biological systems: one involving island-nesting albatross, and another involving pond-breeding amphibians. We found that, while many models are theoretically identifiable given appropriate constraints, obtaining accurate and precise parameter estimates in practice can be difficult. Practitioners should consider ways to increase detection probabilities or adopt robust design sampling in order to improve the properties of estimates obtained from these models. We suggest that investigators interested in using these models explore both theoretical identifiability and possible near-singularity for likely parameter values using a combination of available methods. PMID:20583702

  12. An improved swarm optimization for parameter estimation and biological model selection.

    PubMed

    Abdullah, Afnizanfaizal; Deris, Safaai; Mohamad, Mohd Saberi; Anwar, Sohail

    2013-01-01

    One of the key aspects of computational systems biology is the investigation on the dynamic biological processes within cells. Computational models are often required to elucidate the mechanisms and principles driving the processes because of the nonlinearity and complexity. The models usually incorporate a set of parameters that signify the physical properties of the actual biological systems. In most cases, these parameters are estimated by fitting the model outputs with the corresponding experimental data. However, this is a challenging task because the available experimental data are frequently noisy and incomplete. In this paper, a new hybrid optimization method is proposed to estimate these parameters from the noisy and incomplete experimental data. The proposed method, called Swarm-based Chemical Reaction Optimization, integrates the evolutionary searching strategy employed by the Chemical Reaction Optimization, into the neighbouring searching strategy of the Firefly Algorithm method. The effectiveness of the method was evaluated using a simulated nonlinear model and two biological models: synthetic transcriptional oscillators, and extracellular protease production models. The results showed that the accuracy and computational speed of the proposed method were better than the existing Differential Evolution, Firefly Algorithm and Chemical Reaction Optimization methods. The reliability of the estimated parameters was statistically validated, which suggests that the model outputs produced by these parameters were valid even when noisy and incomplete experimental data were used. Additionally, Akaike Information Criterion was employed to evaluate the model selection, which highlighted the capability of the proposed method in choosing a plausible model based on the experimental data. In conclusion, this paper presents the effectiveness of the proposed method for parameter estimation and model selection problems using noisy and incomplete experimental data. This

  13. An Improved Swarm Optimization for Parameter Estimation and Biological Model Selection

    PubMed Central

    Abdullah, Afnizanfaizal; Deris, Safaai; Mohamad, Mohd Saberi; Anwar, Sohail

    2013-01-01

    One of the key aspects of computational systems biology is the investigation on the dynamic biological processes within cells. Computational models are often required to elucidate the mechanisms and principles driving the processes because of the nonlinearity and complexity. The models usually incorporate a set of parameters that signify the physical properties of the actual biological systems. In most cases, these parameters are estimated by fitting the model outputs with the corresponding experimental data. However, this is a challenging task because the available experimental data are frequently noisy and incomplete. In this paper, a new hybrid optimization method is proposed to estimate these parameters from the noisy and incomplete experimental data. The proposed method, called Swarm-based Chemical Reaction Optimization, integrates the evolutionary searching strategy employed by the Chemical Reaction Optimization, into the neighbouring searching strategy of the Firefly Algorithm method. The effectiveness of the method was evaluated using a simulated nonlinear model and two biological models: synthetic transcriptional oscillators, and extracellular protease production models. The results showed that the accuracy and computational speed of the proposed method were better than the existing Differential Evolution, Firefly Algorithm and Chemical Reaction Optimization methods. The reliability of the estimated parameters was statistically validated, which suggests that the model outputs produced by these parameters were valid even when noisy and incomplete experimental data were used. Additionally, Akaike Information Criterion was employed to evaluate the model selection, which highlighted the capability of the proposed method in choosing a plausible model based on the experimental data. In conclusion, this paper presents the effectiveness of the proposed method for parameter estimation and model selection problems using noisy and incomplete experimental data. This

  14. Determination of structure parameters in molecular tunnelling ionisation model

    NASA Astrophysics Data System (ADS)

    Wang, Jun-Ping; Zhao, Song-Feng; Zhang, Cai-Rong; Li, Wei; Zhou, Xiao-Xin

    2014-04-01

    We extracted the accurate structure parameters in a molecular tunnelling ionisation model (the so-called MO-ADK model) for 23 selected linear molecules including some inner orbitals. The molecular wave functions with the correct asymptotic behaviour are obtained by solving the time-independent Schrödinger equation with B-spline functions and molecular potentials numerically constructed using the modified Leeuwen-Baerends (LBα) model. We show that the orientation-dependent ionisation rate reflects the shape of the ionising orbitals in general. The influences of the Stark shifts of the energy levels on the orientation-dependent ionisation rates of the polar molecules are studied. We also examine the angle-dependent ionisation rates (or probabilities) based on the MO-ADK model by comparing with the molecular strong-field approximation calculations and with recent experimental measurements.

  15. Parameter Estimation in a Delay Differential Model of ENSO

    NASA Astrophysics Data System (ADS)

    Roux, J.; Gerchinovitz, S.; Ghil, M.

    2009-04-01

    In this talk, we present very generic statistical methods to perform parameter estimation in a delay differential equation (DDE) model. Our reference DDE is the toy model of El Niño/Southern Oscillation introduced by Ghil, Zaliapin and Thompson (2008). We first recall some properties of this model in comparison with other models, together with basic results in Functional Differential Equation theory. We then briefly describe two statistical estimation procedures (the very classic Ordinary Least Squares estimator computed via simulated annealing, and a new two-stage method based on nonparametric regression using the Nadaraya-Watson kernel). We finally comment on the numerical tests we performed on simulated noisy data. These results encourage further application of this kind of method to more complex (and more realistic) models of ENSO, to other problems in the Geosciences or to other fields.
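
    A hedged sketch of simulating a delayed-oscillator toy model of the general form dh/dt = -a*tanh(kappa*h(t - tau)) + b*cos(2*pi*t) with a fixed-step Euler scheme and a history buffer; the coefficients and the constant initial history are illustrative assumptions, not those of the cited model:

        import numpy as np

        a, kappa, b, tau = 1.0, 2.0, 1.0, 0.5
        dt, t_end = 0.01, 20.0
        n_steps = int(t_end / dt)
        lag = int(tau / dt)

        h = np.zeros(n_steps + 1)
        h[0] = 0.1                                     # assumed constant initial history
        for i in range(n_steps):
            h_delayed = h[i - lag] if i >= lag else 0.1
            t = i * dt
            h[i + 1] = h[i] + dt * (-a * np.tanh(kappa * h_delayed) + b * np.cos(2 * np.pi * t))

        print("last few values of h:", np.round(h[-5:], 3))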

  16. Parameter and uncertainty estimation for mechanistic, spatially explicit epidemiological models

    NASA Astrophysics Data System (ADS)

    Finger, Flavio; Schaefli, Bettina; Bertuzzo, Enrico; Mari, Lorenzo; Rinaldo, Andrea

    2014-05-01

    Epidemiological models can be a crucially important tool for decision-making during disease outbreaks. The range of possible applications spans from real-time forecasting and allocation of health-care resources to testing alternative intervention mechanisms such as vaccines, antibiotics or the improvement of sanitary conditions. Our spatially explicit, mechanistic models for cholera epidemics have been successfully applied to several epidemics including the one that struck Haiti in late 2010 and is still ongoing. Calibration and parameter estimation of such models represents a major challenge because of properties unusual in traditional geoscientific domains such as hydrology. Firstly, the epidemiological data available might be subject to high uncertainties due to error-prone diagnosis as well as manual (and possibly incomplete) data collection. Secondly, long-term time-series of epidemiological data are often unavailable. Finally, the spatially explicit character of the models requires the comparison of several time-series of model outputs with their real-world counterparts, which calls for an appropriate weighting scheme. It follows that the usual assumption of a homoscedastic Gaussian error distribution, used in combination with classical calibration techniques based on Markov chain Monte Carlo algorithms, is likely to be violated, whereas the construction of an appropriate formal likelihood function seems close to impossible. Alternative calibration methods, which allow for accurate estimation of total model uncertainty, particularly regarding the envisaged use of the models for decision-making, are thus needed. Here we present the most recent developments regarding methods for parameter and uncertainty estimation to be used with our mechanistic, spatially explicit models for cholera epidemics, based on informal measures of goodness of fit.

  17. Accelerated gravitational wave parameter estimation with reduced order modeling.

    PubMed

    Canizares, Priscilla; Field, Scott E; Gair, Jonathan; Raymond, Vivien; Smith, Rory; Tiglio, Manuel

    2015-02-20

    Inferring the astrophysical parameters of coalescing compact binaries is a key science goal of the upcoming advanced LIGO-Virgo gravitational-wave detector network and, more generally, gravitational-wave astronomy. However, current approaches to parameter estimation for these detectors require computationally expensive algorithms. Therefore, there is a pressing need for new, fast, and accurate Bayesian inference techniques. In this Letter, we demonstrate that a reduced order modeling approach enables rapid parameter estimation to be performed. By implementing a reduced order quadrature scheme within the LIGO Algorithm Library, we show that Bayesian inference on the 9-dimensional parameter space of nonspinning binary neutron star inspirals can be sped up by a factor of ∼30 for the early advanced detectors' configurations (with sensitivities down to around 40 Hz) and ∼70 for sensitivities down to around 20 Hz. This speedup will increase to about 150 as the detectors improve their low-frequency limit to 10 Hz, reducing to hours analyses which could otherwise take months to complete. Although these results focus on interferometric gravitational wave detectors, the techniques are broadly applicable to any experiment where fast Bayesian analysis is desirable. PMID:25763948

  18. Accelerated Gravitational Wave Parameter Estimation with Reduced Order Modeling

    NASA Astrophysics Data System (ADS)

    Canizares, Priscilla; Field, Scott E.; Gair, Jonathan; Raymond, Vivien; Smith, Rory; Tiglio, Manuel

    2015-02-01

    Inferring the astrophysical parameters of coalescing compact binaries is a key science goal of the upcoming advanced LIGO-Virgo gravitational-wave detector network and, more generally, gravitational-wave astronomy. However, current approaches to parameter estimation for these detectors require computationally expensive algorithms. Therefore, there is a pressing need for new, fast, and accurate Bayesian inference techniques. In this Letter, we demonstrate that a reduced order modeling approach enables rapid parameter estimation to be performed. By implementing a reduced order quadrature scheme within the LIGO Algorithm Library, we show that Bayesian inference on the 9-dimensional parameter space of nonspinning binary neutron star inspirals can be sped up by a factor of ˜30 for the early advanced detectors' configurations (with sensitivities down to around 40 Hz) and ˜70 for sensitivities down to around 20 Hz. This speedup will increase to about 150 as the detectors improve their low-frequency limit to 10 Hz, reducing to hours analyses which could otherwise take months to complete. Although these results focus on interferometric gravitational wave detectors, the techniques are broadly applicable to any experiment where fast Bayesian analysis is desirable.

  19. Computational approaches to parameter estimation and model selection in immunology

    NASA Astrophysics Data System (ADS)

    Baker, C. T. H.; Bocharov, G. A.; Ford, J. M.; Lumb, P. M.; Norton, S. J.; Paul, C. A. H.; Junt, T.; Krebs, P.; Ludewig, B.

    2005-12-01

    One of the significant challenges in biomathematics (and other areas of science) is to formulate meaningful mathematical models. Our problem is to decide on a parametrized model which is, in some sense, most likely to represent the information in a set of observed data. In this paper, we illustrate the computational implementation of an information-theoretic approach (associated with a maximum likelihood treatment) to modelling in immunology. The approach is illustrated by modelling LCMV infection using a family of models based on systems of ordinary differential and delay differential equations. The models (which use parameters that have a scientific interpretation) are chosen to fit data arising from experimental studies of virus-cytotoxic T lymphocyte kinetics; the parametrized models that result are arranged in a hierarchy by the computation of Akaike indices. The practical illustration is used to convey more general insight. Because the mathematical equations that comprise the models are solved numerically, the accuracy in the computation has a bearing on the outcome, and we address this and other practical details in our discussion.
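
    The Akaike ranking step can be sketched in a few lines; the log-likelihoods and parameter counts below are placeholders, not values from the LCMV study:

        import numpy as np

        models = {"ODE, 4 params": (-120.3, 4),    # (maximized log-likelihood, number of parameters), hypothetical
                  "DDE, 5 params": (-115.8, 5),
                  "DDE, 7 params": (-114.9, 7)}

        aic = {name: 2 * k - 2 * ll for name, (ll, k) in models.items()}
        best = min(aic.values())
        for name in sorted(aic, key=aic.get):
            print(f"{name}: AIC = {aic[name]:.1f}, delta-AIC = {aic[name] - best:.1f}")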

  20. Modeling heart rate regulation--part II: parameter identification and analysis.

    PubMed

    Fowler, K R; Gray, G A; Olufsen, M S

    2008-06-01

    In part I of this study we introduced a 17-parameter model that can predict heart rate regulation during postural change from sitting to standing. In this subsequent study, we focus on the 17 model parameters needed to adequately represent the observed heart rate response. In part I and in previous work (Olufsen et al. 2006), we estimated the 17 model parameters by minimizing the least squares error between computed and measured values of the heart rate using the Nelder-Mead method (a simplex algorithm). In this study, we compare the Nelder-Mead optimization method to two sampling methods: the implicit filtering method and a genetic algorithm. We show that these off-the-shelf optimization methods can work in conjunction with the heart rate model and provide reasonable parameter estimates with little algorithm tuning. In addition, we make use of the thousands of points sampled by the optimizers in the course of the minimization to perform an overall analysis of the model itself. Our findings show that the resulting least-squares problem has multiple local minima and that the nonlinear least-squares error can vary over two orders of magnitude due to the complex interaction between the model parameters, even when provided with reasonable bound constraints. PMID:18172764
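
    A toy illustration (three-parameter exponential recovery model, synthetic data) of minimizing a least-squares error with Nelder-Mead and, for comparison, with a global sampling method; this mirrors the kind of comparison described above but is not the 17-parameter heart rate model:

        import numpy as np
        from scipy.optimize import minimize, differential_evolution

        rng = np.random.default_rng(6)
        t = np.linspace(0, 10, 200)
        hr_obs = 70 + 15 * np.exp(-0.8 * t) + rng.normal(0, 1.0, t.size)   # synthetic "heart rate" record

        def model(p, t):
            a, b, c = p                      # baseline, transient amplitude, recovery rate
            return a + b * np.exp(-c * t)

        def sse(p):
            return np.sum((model(p, t) - hr_obs) ** 2)

        local = minimize(sse, x0=[60, 5, 0.1], method="Nelder-Mead")
        global_ = differential_evolution(sse, bounds=[(40, 100), (0, 50), (0.01, 5)], seed=0)
        print("Nelder-Mead estimate:  ", np.round(local.x, 2))
        print("global-search estimate:", np.round(global_.x, 2))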

  1. Parameter estimation and uncertainty quantification in a biogeochemical model using optimal experimental design methods

    NASA Astrophysics Data System (ADS)

    Reimer, Joscha; Piwonski, Jaroslaw; Slawig, Thomas

    2016-04-01

    The statistical significance of any model-data comparison strongly depends on the quality of the used data and the criterion used to measure the model-to-data misfit. The statistical properties (such as mean values, variances and covariances) of the data should be taken into account by choosing a criterion as, e.g., ordinary, weighted or generalized least squares. Moreover, the criterion can be restricted to regions or model quantities which are of special interest. This choice influences the quality of the model output (also for quantities that are not measured) and the results of a parameter estimation or optimization process. We have estimated the parameters of a three-dimensional and time-dependent marine biogeochemical model describing the phosphorus cycle in the ocean. For this purpose, we have developed a statistical model for measurements of phosphate and dissolved organic phosphorus. This statistical model includes variances and correlations varying with time and location of the measurements. We compared the obtained estimations of model output and parameters for different criteria. Another question is whether (and which) further measurements would increase the model's quality at all. Using experimental design criteria, the information content of measurements can be quantified. This may refer to the uncertainty in unknown model parameters as well as the uncertainty regarding which model is closer to reality. By (another) optimization, optimal measurement properties such as locations, time instants and quantities to be measured can be identified. We have optimized such properties for additional measurements for the parameter estimation of the marine biogeochemical model. For this purpose, we have quantified the uncertainty in the optimal model parameters and the model output itself regarding the uncertainty in the measurement data using the (Fisher) information matrix. Furthermore, we have calculated the uncertainty reduction by additional measurements depending on time
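
    A hedged sketch of the experimental-design step for a toy exponential-decay model: candidate additional measurement times are ranked by the gain in the D-criterion (log-determinant of the Fisher information matrix), under an assumed constant measurement noise; the model, times and noise level are illustrative only:

        import numpy as np

        theta = np.array([2.0, 0.5])                       # current parameter estimate (amplitude, rate)
        existing_times = np.array([1.0, 2.0, 4.0])
        candidates = np.array([0.5, 3.0, 6.0, 10.0])
        sigma = 0.1                                        # assumed measurement standard deviation

        def jacobian(times):
            a, k = theta
            # model y = a * exp(-k t); columns are sensitivities w.r.t. a and k
            return np.column_stack([np.exp(-k * times), -a * times * np.exp(-k * times)])

        def d_criterion(times):
            J = jacobian(times) / sigma
            return np.linalg.slogdet(J.T @ J)[1]           # log det of the Fisher information matrix

        base = d_criterion(existing_times)
        for t_new in candidates:
            gain = d_criterion(np.append(existing_times, t_new)) - base
            print(f"adding a measurement at t={t_new}: information gain {gain:.2f}")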

  2. Modelling rock-avalanche induced impact waves: Sensitivity of the model chains to model parameters

    NASA Astrophysics Data System (ADS)

    Schaub, Yvonne; Huggel, Christian

    2014-05-01

    New lakes are forming in high-mountain areas all over the world due to glacier recession. Often they will be located below steep, destabilized flanks and are therefore exposed to impacts from rock-/ice-avalanches. Several events worldwide are known, where an outburst flood has been triggered by such an impact. In regions such as in the European Alps or in the Cordillera Blanca in Peru, where valley bottoms are densely populated, these far-travelling, high-magnitude events can result in major disasters. Usually natural hazards are assessed as single hazardous processes, for the above mentioned reasons, however, development of assessment and reproduction methods of the hazardous process chain for the purpose of hazard map generation have to be brought forward. A combination of physical process models have already been suggested and illustrated by means of lake outburst in the Cordillera Blanca, Peru, where on April 11th 2010 an ice-avalanche of approx. 300'000m3 triggered an impact wave, which overtopped the 22m freeboard of the rock-dam for 5 meters and caused and outburst flood which travelled 23 km to the city of Carhuaz. We here present a study, where we assessed the sensitivity of the model chain from ice-avalanche and impact wave to single parameters considering rock-/ice-avalanche modeling by RAMMS and impact wave modeling by IBER. Assumptions on the initial rock-/ice-avalanche volume, calibration of the friction parameters in RAMMS and assumptions on erosion considered in RAMMS were parameters tested regarding their influence on overtopping parameters that are crucial for outburst flood modeling. Further the transformation of the RAMMS-output (flow height and flow velocities on the shoreline of the lake) into an inflow-hydrograph for IBER was also considered a possible source of uncertainties. Overtopping time, volume, and wave height as much as mean and maximum discharge were considered decisive parameters for the outburst flood modeling and were therewith

  3. Anisotropic effects on constitutive model parameters of aluminum alloys

    NASA Astrophysics Data System (ADS)

    Brar, Nachhatter S.; Joshi, Vasant S.

    2012-03-01

    Simulation of low-velocity impact on structures or high-velocity penetration of armor materials relies heavily on constitutive material models. Model constants are determined from tension, compression or torsion stress-strain data at low and high strain rates and at different temperatures. These model constants are required input to computer codes (LS-DYNA, DYNA3D or SPH) to accurately simulate fragment impact on structural components made of high-strength 7075-T651 aluminum alloy. Johnson-Cook model constants determined for Al7075-T651 bar material failed to correctly simulate penetration into 1-inch-thick Al7075-T651 plates. When simulations go well beyond minor parameter tweaking and experimental results show drastically different behavior, it becomes important to determine constitutive parameters from the actual material used in the impact/penetration experiments. To investigate anisotropic effects on the yield/flow stress of this alloy, quasi-static and high-strain-rate tensile tests were performed on specimens fabricated in the longitudinal "L", transverse "T", and thickness "TH" directions of the 1-inch-thick Al7075 plate. Flow stresses at strain rates of ~1/s and ~1100/s in the thickness and transverse directions are lower than in the longitudinal direction, while the flow stress in the bar was comparable to the flow stress in the longitudinal direction of the plate. Fracture strain data from notched tensile specimens fabricated in the L, T, and thickness directions of the 1-inch-thick plate are used to derive fracture constants.
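
    For reference, the Johnson-Cook flow-stress form referred to above can be sketched as follows; the constants are placeholders of plausible magnitude, not the values determined for Al7075-T651:

        import numpy as np

        def johnson_cook(eps_p, eps_dot, T, A=520e6, B=477e6, n=0.52, C=0.025, m=1.6,
                         eps_dot0=1.0, T_room=293.0, T_melt=893.0):
            """Flow stress (Pa) as a function of plastic strain, strain rate (1/s) and temperature (K)."""
            T_star = (T - T_room) / (T_melt - T_room)          # homologous temperature
            return (A + B * eps_p**n) * (1 + C * np.log(eps_dot / eps_dot0)) * (1 - T_star**m)

        # flow stress at 10% plastic strain, 1100/s, 350 K (illustrative values only)
        print(f"{johnson_cook(0.1, 1100.0, 350.0) / 1e6:.0f} MPa")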

  4. Hydrological Modelling and Parameter Identification for Green Roof

    NASA Astrophysics Data System (ADS)

    Lo, W.; Tung, C.

    2012-12-01

    Green roofs, multilayered systems covered by plants, can be used to replace traditional concrete roofs as one of various measures to mitigate increasing stormwater runoff in the urban environment. Moreover, facing the high uncertainty of climate change, conventional engineering measures may prove inadequate as adaptation, whereas green roofs are low-regret and flexible, and thus particularly important and suitable. The related technology has been developed for several years, and research evaluating the stormwater reduction performance of green roofs is flourishing. Many European countries, cities in the U.S., and other local governments incorporate green roofs into stormwater control policy. Therefore, in terms of stormwater management, it is necessary to develop a robust hydrologic model to quantify the efficacy of green roofs over different designs and environmental conditions. In this research, a physically based hydrologic model is proposed to simulate the water flow process in the green roof system. In particular, the model adopts the concept of water balance, a relatively simple and intuitive idea. The research also compares two methods for the surface water balance calculation: one based on the Green-Ampt equation, and the other on the SCS curve number method. A green roof experiment is designed to collect weather data and water discharge. The proposed model is then verified against these observed data; furthermore, the parameters used in the model are calibrated to find appropriate values for green roof hydrologic simulation. This research thus proposes a simple physically based hydrologic model and the measures to determine its parameters.
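
    The SCS curve-number option mentioned above can be sketched in a few lines; the curve number here is a placeholder, not a calibrated green roof value:

        def scs_runoff(P_mm, CN=86.0):
            """Direct runoff depth (mm) for a storm depth P_mm using the SCS curve-number method."""
            S = 25400.0 / CN - 254.0          # potential maximum retention, mm
            Ia = 0.2 * S                      # initial abstraction
            if P_mm <= Ia:
                return 0.0
            return (P_mm - Ia) ** 2 / (P_mm - Ia + S)

        for storm in (10.0, 25.0, 50.0):
            print(f"storm {storm} mm -> runoff {scs_runoff(storm):.1f} mm")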

  5. Maximum profile likelihood estimation of differential equation parameters through model based smoothing state estimates.

    PubMed

    Campbell, D A; Chkrebtii, O

    2013-12-01

    Statistical inference for biochemical models often faces a variety of characteristic challenges. In this paper we examine state and parameter estimation for the JAK-STAT intracellular signalling mechanism, which exemplifies the implementation intricacies common in many biochemical inference problems. We introduce an extension to the Generalized Smoothing approach for estimating delay differential equation models, addressing selection of complexity parameters, choice of the basis system, and appropriate optimization strategies. Motivated by the JAK-STAT system, we further extend the generalized smoothing approach to consider a nonlinear observation process with additional unknown parameters, and highlight how the approach handles unobserved states and unevenly spaced observations. The methodology developed is generally applicable to problems of estimation for differential equation models with delays, unobserved states, nonlinear observation processes, and partially observed histories. PMID:23579098

  6. Parameter estimation for models of ligninolytic and cellulolytic enzyme kinetics

    SciTech Connect

    Wang, Gangsheng; Post, Wilfred M; Mayes, Melanie; Frerichs, Joshua T; Jagadamma, Sindhu

    2012-01-01

    While soil enzymes have been explicitly included in the soil organic carbon (SOC) decomposition models, there is a serious lack of suitable data for model parameterization. This study provides well-documented enzymatic parameters for application in enzyme-driven SOC decomposition models from a compilation and analysis of published measurements. In particular, we developed appropriate kinetic parameters for five typical ligninolytic and cellulolytic enzymes (β-glucosidase, cellobiohydrolase, endo-glucanase, peroxidase, and phenol oxidase). The kinetic parameters included the maximum specific enzyme activity (Vmax) and half-saturation constant (Km) in the Michaelis-Menten equation. The activation energy (Ea) and the pH optimum and sensitivity (pHopt and pHsen) were also analyzed. pHsen was estimated by fitting an exponential-quadratic function. The Vmax values, often presented in different units under various measurement conditions, were converted into the same units at a reference temperature (20 °C) and pHopt. Major conclusions are: (i) Both Vmax and Km were log-normally distributed, with no significant difference in Vmax exhibited between enzymes originating from bacteria or fungi. (ii) No significant difference in Vmax was found between cellulases and ligninases; however, there was a significant difference in Km between them. (iii) Ligninases had higher Ea values and lower pHopt than cellulases; the average ratio of pHsen to pHopt ranged from 0.3 to 0.4 for the five enzymes, which means that an increase or decrease of 1.1-1.7 pH units from pHopt would reduce Vmax by 50%. (iv) Our analysis indicated that the Vmax values from lab measurements with purified enzymes were 1-2 orders of magnitude higher than those for use in SOC decomposition models under field conditions.
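
    Fitting the Michaelis-Menten parameters Vmax and Km discussed above can be sketched with nonlinear least squares on synthetic activity data (the substrate range and parameter values are illustrative, not values from the compilation):

        import numpy as np
        from scipy.optimize import curve_fit

        def michaelis_menten(S, Vmax, Km):
            """Reaction rate as a function of substrate concentration S."""
            return Vmax * S / (Km + S)

        rng = np.random.default_rng(7)
        S = np.array([0.1, 0.25, 0.5, 1.0, 2.0, 5.0, 10.0, 20.0])          # substrate, mM (hypothetical)
        v = michaelis_menten(S, Vmax=12.0, Km=1.5) + rng.normal(0, 0.3, S.size)

        (Vmax_hat, Km_hat), _ = curve_fit(michaelis_menten, S, v, p0=[5.0, 1.0])
        print(f"Vmax = {Vmax_hat:.2f}, Km = {Km_hat:.2f}")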

  7. Additional Developments in Atmosphere Revitalization Modeling and Simulation

    NASA Technical Reports Server (NTRS)

    Coker, Robert F.; Knox, James C.; Cummings, Ramona; Brooks, Thomas; Schunk, Richard G.

    2013-01-01

    NASA's Advanced Exploration Systems (AES) program is developing prototype systems, demonstrating key capabilities, and validating operational concepts for future human missions beyond Earth orbit. These forays beyond the confines of earth's gravity will place unprecedented demands on launch systems. They must launch the supplies needed to sustain a crew over longer periods for exploration missions beyond earth's moon. Thus all spacecraft systems, including those for the separation of metabolic carbon dioxide and water from a crewed vehicle, must be minimized with respect to mass, power, and volume. Emphasis is also placed on system robustness both to minimize replacement parts and ensure crew safety when a quick return to earth is not possible. Current efforts are focused on improving the current state-of-the-art systems utilizing fixed beds of sorbent pellets by evaluating structured sorbents, seeking more robust pelletized sorbents, and examining alternate bed configurations to improve system efficiency and reliability. These development efforts combine testing of sub-scale systems and multi-physics computer simulations to evaluate candidate approaches, select the best performing options, and optimize the configuration of the selected approach. This paper describes the continuing development of atmosphere revitalization models and simulations in support of the Atmosphere Revitalization Recovery and Environmental Monitoring (ARREM)

  8. Additional Developments in Atmosphere Revitalization Modeling and Simulation

    NASA Technical Reports Server (NTRS)

    Coker, Robert F.; Knox, James C.; Cummings, Ramona; Brooks, Thomas; Schunk, Richard G.; Gomez, Carlos

    2013-01-01

    NASA's Advanced Exploration Systems (AES) program is developing prototype systems, demonstrating key capabilities, and validating operational concepts for future human missions beyond Earth orbit. These forays beyond the confines of earth's gravity will place unprecedented demands on launch systems. They must launch the supplies needed to sustain a crew over longer periods for exploration missions beyond earth's moon. Thus all spacecraft systems, including those for the separation of metabolic carbon dioxide and water from a crewed vehicle, must be minimized with respect to mass, power, and volume. Emphasis is also placed on system robustness both to minimize replacement parts and ensure crew safety when a quick return to earth is not possible. Current efforts are focused on improving the current state-of-the-art systems utilizing fixed beds of sorbent pellets by evaluating structured sorbents, seeking more robust pelletized sorbents, and examining alternate bed configurations to improve system efficiency and reliability. These development efforts combine testing of sub-scale systems and multi-physics computer simulations to evaluate candidate approaches, select the best performing options, and optimize the configuration of the selected approach. This paper describes the continuing development of atmosphere revitalization models and simulations in support of the Atmosphere Revitalization Recovery and Environmental Monitoring (ARREM) project within the AES program.

  9. Sensitivity Analysis of Parameters in Linear-Quadratic Radiobiologic Modeling

    SciTech Connect

    Fowler, Jack F.

    2009-04-01

    Purpose: Radiobiologic modeling is increasingly used to estimate the effects of altered treatment plans, especially for dose escalation. The present article shows how much the linear-quadratic (LQ) calculated biologically equivalent dose (BED) varies when individual parameters of the LQ formula are varied by ±20% and by 1%. Methods: Equivalent total doses (EQD2, i.e., normalized total doses [NTD] in 2-Gy fractions) for tumor control, acute mucosal reactions, and late complications were calculated using the linear-quadratic formula with overall time: BED = nd(1 + d/[α/β]) − (ln 2)(T − Tk)/(αTp), where BED = total dose × relative effectiveness, with total dose = nd and RE = 1 + d/[α/β]. Each of the five biologic parameters in turn was altered by ±10%, the altered EQD2s tabulated, and the difference finally divided by 20. EQD2 or NTD is obtained by dividing BED by the RE for 2-Gy fractions, using the appropriate α/β ratio. Results: Variations in tumor and acute mucosal EQD ranged from 0.1% to 0.45% per 1% change in each parameter for conventional schedules, the largest variation being caused by overall time. Variations in 'late' EQD were 0.4% to 0.6% per 1% change in the only biologic parameter, the α/β ratio. For stereotactic body radiotherapy schedules, variations were larger, up to 0.6% to 0.9% for tumor and 1.6% to 1.9% for late, per 1% change in parameter. Conclusions: Robustness similar to that of equivalent uniform dose (EUD) occurs, for the same reasons. Total dose, dose per fraction, and dose rate cause the major effects, as is well known.
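
    A worked example of the BED and EQD2 formulas quoted in this abstract (the schedule and parameter values below are illustrative, not taken from the paper):

```python
import math

def bed(n, d, ab, overall_time=None, t_k=0.0, alpha=None, t_p=None):
    """Biologically effective dose: BED = n*d*(1 + d/(a/b)) - ln2*(T - Tk)/(alpha*Tp).

    The time-factor term is applied only when an overall treatment time T is given.
    """
    value = n * d * (1.0 + d / ab)
    if overall_time is not None:
        value -= math.log(2.0) * (overall_time - t_k) / (alpha * t_p)
    return value

def eqd2(bed_value, ab):
    """Equivalent dose in 2-Gy fractions: EQD2 = BED / RE(2 Gy) = BED / (1 + 2/(a/b))."""
    return bed_value / (1.0 + 2.0 / ab)

# Illustrative schedule: 30 x 2 Gy, alpha/beta = 10 Gy, T = 40 d, Tk = 21 d,
# alpha = 0.35 /Gy, Tp = 3 d (assumed values for demonstration only)
b = bed(n=30, d=2.0, ab=10.0, overall_time=40.0, t_k=21.0, alpha=0.35, t_p=3.0)
print(round(b, 1), round(eqd2(b, 10.0), 1))
```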

  10. Parameter Estimation for a Model of Space-Time Rainfall

    NASA Astrophysics Data System (ADS)

    Smith, James A.; Karr, Alan F.

    1985-08-01

    In this paper, parameter estimation procedures, based on data from a network of rainfall gages, are developed for a class of space-time rainfall models. The models, which are designed to represent the spatial distribution of daily rainfall, have three components, one that governs the temporal occurrence of storms, a second that distributes rain cells spatially for a given storm, and a third that determines the rainfall pattern within a rain cell. Maximum likelihood and method of moments procedures are developed. We illustrate that limitations on model structure are imposed by restricting data sources to rain gage networks. The estimation procedures are applied to a 240-mi² (621 km²) catchment in the Potomac River basin.

  11. Modelling of some parameters from thermoelectric power plants

    NASA Astrophysics Data System (ADS)

    Popa, G. N.; Diniş, C. M.; Deaconu, S. I.; Maksay, Şt; Popa, I.

    2016-02-01

    Paper proposing new mathematical models for the main electrical parameters (active power P, reactive power Q of power supplies) and technological (mass flow rate of steam M from boiler and dust emission E from the output of precipitator) from a thermoelectric power plants using industrial plate-type electrostatic precipitators with three sections used in electrical power plants. The mathematical models were used experimental results taken from industrial facility, from boiler and plate-type electrostatic precipitators with three sections, and has used the least squares method for their determination. The modelling has been used equations of degree 1, 2 and 3. The equations were determined between dust emission depending on active power of power supplies and mass flow rate of steam from boiler, and, also, depending on reactive power of power supplies and mass flow rate of steam from boiler. These equations can be used to control the process from electrostatic precipitators.

  12. Anisotropic Effects on Constitutive Model Parameters of Aluminum Alloys

    NASA Astrophysics Data System (ADS)

    Brar, Nachhatter; Joshi, Vasant

    2011-06-01

    Simulation of low velocity impact on structures or high velocity penetration in armor materials heavily rely on constitutive material models. The model constants are required input to computer codes (LS-DYNA, DYNA3D or SPH) to accurately simulate fragment impact on structural components made of high strength 7075-T651 aluminum alloys. Johnson-Cook model constants determined for Al7075-T651 alloy bar material failed to simulate correctly the penetration into 1' thick Al-7075-T651plates. When simulations go well beyond minor parameter tweaking and experimental results are drastically different it is important to determine constitutive parameters from the actual material used in impact/penetration experiments. To investigate anisotropic effects on the yield/flow stress of this alloy we performed quasi-static and high strain rate tensile tests on specimens fabricated in the longitudinal, transverse, and thickness directions of 1' thick Al7075-T651 plate. Flow stresses at a strain rate of ~1100/s in the longitudinal and transverse direction are similar around 670MPa and decreases to 620 MPa in the thickness direction. These data are lower than the flow stress of 760 MPa measured in Al7075-T651 bar stock.

  13. Bayesian parameter estimation for stochastic models of biological cell migration

    NASA Astrophysics Data System (ADS)

    Dieterich, Peter; Preuss, Roland

    2013-08-01

    Cell migration plays an essential role under many physiological and patho-physiological conditions. It is of major importance during embryonic development and wound healing. In contrast, it also generates negative effects during inflammation processes, the transmigration of tumors or the formation of metastases. Thus, a reliable quantification and characterization of cell paths could give insight into the dynamics of these processes. Typically stochastic models are applied where parameters are extracted by fitting models to the so-called mean square displacement of the observed cell group. We show that this approach has several disadvantages and problems. Therefore, we propose a simple procedure directly relying on the positions of the cell's trajectory and the covariance matrix of the positions. It is shown that the covariance is identical with the spatial aging correlation function for the supposed linear Gaussian models of Brownian motion with drift and fractional Brownian motion. The technique is applied and illustrated with simulated data showing a reliable parameter estimation from single cell paths.
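
    For reference, the position covariances on which such a fit can rely have standard closed forms for the two processes named above; these are textbook results (not expressions taken from the paper), written with a generalized diffusion coefficient D and Hurst exponent H:

```latex
% Position covariance used in place of the mean square displacement
\begin{align}
  \operatorname{Cov}\bigl[x(t),x(s)\bigr] &= 2D\,\min(t,s)
     && \text{Brownian motion with drift } vt,\\
  \operatorname{Cov}\bigl[x(t),x(s)\bigr] &= D\left(t^{2H} + s^{2H} - |t-s|^{2H}\right)
     && \text{fractional Brownian motion.}
\end{align}
```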

  14. Transferability of regional permafrost disturbance susceptibility modelling using generalized linear and generalized additive models

    NASA Astrophysics Data System (ADS)

    Rudy, Ashley C. A.; Lamoureux, Scott F.; Treitz, Paul; van Ewijk, Karin Y.

    2016-07-01

    To effectively assess and mitigate risk of permafrost disturbance, disturbance-prone areas can be predicted through the application of susceptibility models. In this study we developed regional susceptibility models for permafrost disturbances using a field disturbance inventory to test the transferability of the model to a broader region in the Canadian High Arctic. Resulting maps of susceptibility were then used to explore the effect of terrain variables on the occurrence of disturbances within this region. To account for a large range of landscape characteristics, the model was calibrated using two locations: Sabine Peninsula, Melville Island, NU, and Fosheim Peninsula, Ellesmere Island, NU. Spatial patterns of disturbance were predicted with a generalized linear model (GLM) and generalized additive model (GAM), each calibrated using disturbed and randomized undisturbed locations from both sites and GIS-derived terrain predictor variables including slope, potential incoming solar radiation, wetness index, topographic position index, elevation, and distance to water. Each model was validated for the Sabine and Fosheim Peninsulas using independent data sets, while the transferability of the model to an independent site was assessed at Cape Bounty, Melville Island, NU. The regional GLM and GAM validated well for both calibration sites (Sabine and Fosheim) with the area under the receiver operating curves (AUROC) > 0.79. Both models were applied directly to Cape Bounty without calibration and validated equally with AUROCs of 0.76; however, each model predicted disturbed and undisturbed samples differently. Additionally, the sensitivity of the transferred model was assessed using data sets with different sample sizes. Results indicated that models based on larger sample sizes transferred more consistently and captured the variability within the terrain attributes in the respective study areas. Terrain attributes associated with the initiation of disturbances were

  15. Estimation of the lag time in a subsequent monomer addition model for fibril elongation.

    PubMed

    Shoffner, Suzanne K; Schnell, Santiago

    2016-08-01

    Fibrillogenesis, the production or development of protein fibers, has been linked to protein folding diseases. The progress curve of fibrils or aggregates typically takes on a sigmoidal shape with a lag phase, a rapid growth phase, and a final plateau regime. The study of the lag phase and the estimation of its critical timescale provide insight into the factors regulating the fibrillation process. However, methods to estimate a quantitative expression for the lag time rely on empirical expressions, which cannot connect the lag time to kinetic parameters associated with the reaction mechanisms of protein fibrillation. Here we introduce an approach for the estimation of the lag time using the governing rate equations of the elementary reactions of a subsequent monomer addition model for protein fibrillation as a case study. We show that the lag time is given by the sum of the critical timescales for each fibril intermediate in the subsequent monomer addition mechanism and therefore reveals causal connectivity between intermediate species. Furthermore, we find that single-molecule assays of protein fibrillation can exhibit a lag phase without a nucleation process, while dyes and extrinsic fluorescent probe bulk assays of protein fibrillation do not exhibit an observable lag phase during template-dependent elongation. Our approach could be valuable for investigating the effects of intrinsic and extrinsic factors to the protein fibrillation reaction mechanism and provides physicochemical insights into parameters regulating the lag phase. PMID:27250246

  16. Statefinder Parameters for Different Dark Energy Models with Variable G Correction in Kaluza-Klein Cosmology

    NASA Astrophysics Data System (ADS)

    Chakraborty, Shuvendu; Debnath, Ujjal; Jamil, Mubasher; Myrzakulov, Ratbay

    2012-07-01

    In this work, we have calculated the deceleration parameter, statefinder parameters and EoS parameters for different dark energy models with variable G correction in homogeneous, isotropic and non-flat universe for Kaluza-Klein Cosmology. The statefinder parameters have been obtained in terms of some observable parameters like dimensionless density parameter, EoS parameter and Hubble parameter for holographic dark energy, new agegraphic dark energy and generalized Chaplygin gas models.
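
    For reference, the deceleration parameter and statefinder pair referenced above have the standard cosmographic definitions in terms of the scale factor a(t) and Hubble parameter H = ȧ/a (standard definitions, independent of the specific dark energy model or the Kaluza-Klein setting):

```latex
% Deceleration parameter q and statefinder pair {r, s}
\begin{align}
  q &= -\frac{\ddot a}{a H^{2}}, &
  r &= \frac{\dddot a}{a H^{3}}, &
  s &= \frac{r - 1}{3\left(q - \tfrac{1}{2}\right)}.
\end{align}
```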

  17. Microbial Communities Model Parameter Calculation for TSPA/SR

    SciTech Connect

    D. Jolley

    2001-07-16

    This calculation has several purposes. First, the calculation reduces the information contained in "Committed Materials in Repository Drifts" (BSC 2001a) to usable parameters required as input to MING V1.0 (CRWMS M&O 1998, CSCI 30018 V1.0) for calculation of the effects of potential in-drift microbial communities as part of the microbial communities model. The calculation is intended to replace the parameters found in Attachment II of the current In-Drift Microbial Communities Model revision (CRWMS M&O 2000c), with the exception of Section 11-5.3. Second, this calculation provides the information necessary to supersede DTN: M09909SPAMING1.003 and replace it with a new qualified dataset (see Table 6.2-1). The purpose of this calculation is to create the revised qualified parameter input for MING that will allow ΔG (Gibbs free energy) to be corrected for long-term changes to the temperature of the near-field environment. Calculated herein are the quadratic or second-order regression relationships that are used in the energy-limiting calculations of potential growth of microbial communities in the in-drift geochemical environment. Third, the calculation performs an impact review of a new DTN: M00012MAJIONIS.000 that is intended to replace the currently cited DTN: GS9809083 12322.008 for water chemistry data used in the current "In-Drift Microbial Communities Model" revision (CRWMS M&O 2000c). Finally, the calculation updates the material lifetimes reported in Table 32 in Section 6.5.2.3 of the "In-Drift Microbial Communities" AMR (CRWMS M&O 2000c) based on the inputs reported in BSC (2001a). Changes include adding newly specified materials and updating old materials information that has changed.

  18. Variational estimation of process parameters in a simplified atmospheric general circulation model

    NASA Astrophysics Data System (ADS)

    Lv, Guokun; Koehl, Armin; Stammer, Detlef

    2016-04-01

    Parameterizations are used to simulate the effects of unresolved sub-grid-scale processes in current state-of-the-art climate models. The values of the process parameters, which determine the model's climatology, are usually adjusted manually to reduce the difference between the model mean state and the observed climatology. This process requires detailed knowledge of the model and its parameterizations. In this work, a variational method was used to estimate process parameters in the Planet Simulator (PlaSim). The adjoint code was generated using automatic differentiation of the source code. Some hydrological processes were switched off to remove the influence of zero-order discontinuities. In addition, the nonlinearity of the model limits the feasible assimilation window to about 1 day, which is too short to tune the model's climatology. To extend the feasible assimilation window, nudging terms for all state variables were added to the model's equations, which essentially suppress all unstable directions. In identical twin experiments, we found that the feasible assimilation window could be extended to over 1 year and accurate parameters could be retrieved. Although the nudging terms translate into a damping of the adjoint variables and therefore tend to erase the information of the data over time, assimilating climatological information is shown to provide sufficient information on the parameters. Moreover, the mechanism of this regularization is discussed.

  19. Parameter optimization in differential geometry based solvation models.

    PubMed

    Wang, Bao; Wei, G W

    2015-10-01

    Differential geometry (DG) based solvation models are a new class of variational implicit solvent approaches that are able to avoid unphysical solvent-solute boundary definitions and associated geometric singularities, and dynamically couple polar and non-polar interactions in a self-consistent framework. Our earlier study indicates that the DG based non-polar solvation model outperforms other methods in non-polar solvation energy predictions. However, the DG based full solvation model has not shown its superiority in solvation analysis, due to its difficulty in parametrization, which must ensure the stability of the solution of strongly coupled nonlinear Laplace-Beltrami and Poisson-Boltzmann equations. In this work, we introduce new parameter learning algorithms based on perturbation and convex optimization theories to stabilize the numerical solution and thus achieve an optimal parametrization of the DG based solvation models. An interesting feature of the present DG based solvation model is that it provides accurate solvation free energy predictions for both polar and non-polar molecules in a unified formulation. Extensive numerical experiments demonstrate that the present DG based solvation model delivers some of the most accurate predictions of the solvation free energies for a large number of molecules. PMID:26450304

  20. Important Scaling Parameters for Testing Model-Scale Helicopter Rotors

    NASA Technical Reports Server (NTRS)

    Singleton, Jeffrey D.; Yeager, William T., Jr.

    1998-01-01

    An investigation into the effects of aerodynamic and aeroelastic scaling parameters on model-scale helicopter rotors has been conducted in the NASA Langley Transonic Dynamics Tunnel. The effect of varying Reynolds number, blade Lock number, and structural elasticity on rotor performance has been studied, and the performance results are discussed herein for two different rotor blade sets at two rotor advance ratios. One set of rotor blades was rigid and the other set was dynamically scaled to be representative of a main rotor design for a utility-class helicopter. The investigation was conducted over a range of test-medium densities, which permits the acquisition of data for several Reynolds and Lock number combinations.
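
    For reference, the blade Lock number mentioned here has the standard rotorcraft definition below (standard notation, not taken from the paper); because it is proportional to the air density ρ, varying the test-medium density changes both the Reynolds number and the Lock number:

```latex
% Blade Lock number: ratio of aerodynamic to inertial flapping moments
% (rho = test-medium density, a = blade-section lift-curve slope,
%  c = blade chord, R = rotor radius, I_beta = blade flapping inertia)
\gamma = \frac{\rho\, a\, c\, R^{4}}{I_{\beta}}
```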

  1. Analysis of sensitivity of simulated recharge to selected parameters for seven watersheds modeled using the precipitation-runoff modeling system

    USGS Publications Warehouse

    Ely, D. Matthew

    2006-01-01

    routing parameter. Although the primary objective of this study was to identify, by geographic region, the importance of the parameter value to the simulation of ground-water recharge, the secondary objectives proved valuable for future modeling efforts. The value of a rigorous sensitivity analysis can (1) make the calibration process more efficient, (2) guide additional data collection, (3) identify model limitations, and (4) explain simulated results.

  2. Uncertainty from synergistic effects of multiple parameters in the Johnson and Ettinger (1991) vapor intrusion model

    NASA Astrophysics Data System (ADS)

    Tillman, Fred D.; Weaver, James W.

    Migration of volatile chemicals from the subsurface into overlying buildings is known as vapor intrusion (VI). Under certain circumstances, people living in homes above contaminated soil or ground water may be exposed to harmful levels of these vapors. VI is a particularly difficult pathway to assess, as challenges exist in delineating subsurface contributions to measured indoor-air concentrations as well as in adequate characterization of subsurface parameters necessary to calibrate a predictive flow and transport model. Often, a screening-level model is employed to determine if a potential indoor inhalation exposure pathway exists and, if such a pathway is complete, whether long-term exposure increases the occupants' risk for cancer or other toxic effects to an unacceptable level. A popular screening-level algorithm currently in wide use in the United States, Canada and the UK for making such determinations is the "Johnson and Ettinger" (J&E) model. Concern exists over using the J&E model for deciding whether or not further action is necessary at sites as many parameters are not routinely measured (or are un-measurable). Many screening decisions are then made based on simulations using "best estimate" look-up parameter values. While research exists on the sensitivity of the J&E model to individual parameter uncertainty, little published information is available on the combined effects of multiple uncertain parameters and their effect on screening decisions. This paper presents results of multiple-parameter uncertainty analyses using the J&E model to evaluate risk to humans from VI. Software was developed to produce automated uncertainty analyses of the model. Results indicate an increase in predicted cancer risk from multiple-parameter uncertainty by nearly a factor of 10 compared with single-parameter uncertainty. Additionally, a positive skew in model response to variation of some parameters was noted for both single and multiple parameter uncertainty analyses
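
    The multiple-parameter analysis described here can be illustrated generically with Monte Carlo propagation of several uncertain inputs through a screening model. The function and parameter distributions below are placeholders for illustration only; they are not the Johnson and Ettinger model or its inputs.

```python
import numpy as np

rng = np.random.default_rng(42)

def screening_model(soil_perm, source_conc, crack_ratio):
    """Placeholder screening function standing in for a vapor-intrusion model."""
    attenuation = 1e-4 * (soil_perm / 1e-12) ** 0.5 * crack_ratio / 1e-3
    return source_conc * attenuation  # predicted indoor-air concentration

n = 10_000
samples = screening_model(
    soil_perm=rng.lognormal(mean=np.log(1e-12), sigma=1.0, size=n),
    source_conc=rng.normal(100.0, 20.0, size=n).clip(min=0.0),
    crack_ratio=rng.uniform(5e-4, 5e-3, size=n),
)
print("median:", np.median(samples), "95th percentile:", np.percentile(samples, 95))
```

    Varying all inputs jointly, as above, typically yields wider output percentiles than varying one parameter at a time, which is the effect the abstract reports.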

  3. Insight into model mechanisms through automatic parameter fitting: a new methodological framework for model development

    PubMed Central

    2014-01-01

    Background Striking a balance between the degree of model complexity and parameter identifiability, while still producing biologically feasible simulations using modelling is a major challenge in computational biology. While these two elements of model development are closely coupled, parameter fitting from measured data and analysis of model mechanisms have traditionally been performed separately and sequentially. This process produces potential mismatches between model and data complexities that can compromise the ability of computational frameworks to reveal mechanistic insights or predict new behaviour. In this study we address this issue by presenting a generic framework for combined model parameterisation, comparison of model alternatives and analysis of model mechanisms. Results The presented methodology is based on a combination of multivariate metamodelling (statistical approximation of the input–output relationships of deterministic models) and a systematic zooming into biologically feasible regions of the parameter space by iterative generation of new experimental designs and look-up of simulations in the proximity of the measured data. The parameter fitting pipeline includes an implicit sensitivity analysis and analysis of parameter identifiability, making it suitable for testing hypotheses for model reduction. Using this approach, under-constrained model parameters, as well as the coupling between parameters within the model are identified. The methodology is demonstrated by refitting the parameters of a published model of cardiac cellular mechanics using a combination of measured data and synthetic data from an alternative model of the same system. Using this approach, reduced models with simplified expressions for the tropomyosin/crossbridge kinetics were found by identification of model components that can be omitted without affecting the fit to the parameterising data. Our analysis revealed that model parameters could be constrained to a standard

  4. System parameters for erythropoiesis control model: Comparison of normal values in human and mouse model

    NASA Technical Reports Server (NTRS)

    1979-01-01

    The computer model for erythropoietic control was adapted to the mouse system by altering system parameters originally given for the human to those which more realistically represent the mouse. Parameter values were obtained from a variety of literature sources. Using the mouse model, the mouse was studied as a potential experimental model for spaceflight. Simulation studies of dehydration and hypoxia were performed. A comparison of system parameters for the mouse and human models is presented. Aside from the obvious differences expected in fluid volumes, blood flows and metabolic rates, larger differences were observed in the following: erythrocyte life span, erythropoietin half-life, and normal arterial pO2.

  5. A surrogate-model-based identification of fractional viscoelastic constitutive parameters

    NASA Astrophysics Data System (ADS)

    Zhang, Guoqing; Yang, Haitian; Xu, Yongsheng

    2015-02-01

    In order to reduce the computational expense, a Kriging surrogate model is developed as an approximation of a numerical model based on FEM (finite element method) and FDM (finite difference method) to solve direct fractional viscoelastic problems and then is combined with a gridding-partition-based continuous ant colony algorithm to identify constitutive parameters of fractional viscoelastic materials. Three kinds of modeling strategies are presented to generate the Kriging surrogate model, that is, global modeling, piecewise modeling, and reduced modeling. Two numerical examples are given to illustrate the proposed approach in terms of computing accuracy and expense. The utilization of Kriging surrogate model not only can provide a sufficient computing accuracy, but also can significantly reduce the computational cost in solving inverse fractional viscoelastic problems. In addition, regional inhomogeneity and impact of noisy data are taken into account.
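
    A minimal sketch of the surrogate idea described here: build a Kriging (Gaussian-process) emulator from a small number of expensive forward-model runs, then query it cheaply inside an identification loop. It uses scikit-learn's GaussianProcessRegressor as one common Kriging implementation; the forward model and parameter ranges are placeholders, not the authors' FEM/FDM solver.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

def expensive_forward_model(params):
    """Stand-in for a fractional viscoelastic solver: params -> scalar response."""
    alpha, e_modulus = params
    return e_modulus * np.exp(-0.5 * alpha) + 0.1 * alpha ** 2

# Design of experiments: a small set of full model evaluations
rng = np.random.default_rng(0)
X_train = rng.uniform([0.1, 1.0], [1.0, 10.0], size=(40, 2))
y_train = np.array([expensive_forward_model(p) for p in X_train])

kriging = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(length_scale=[0.3, 2.0]),
                                   normalize_y=True)
kriging.fit(X_train, y_train)

# Cheap surrogate predictions (with uncertainty) for candidate parameter sets
X_query = np.array([[0.5, 5.0], [0.9, 2.0]])
mean, std = kriging.predict(X_query, return_std=True)
print(mean, std)
```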

  6. A Lumped Parameter Model for Feedback Studies in Tokamaks

    NASA Astrophysics Data System (ADS)

    Chance, M. S.; Chu, M. S.; Okabayashi, M.; Glasser, A. H.

    2004-11-01

    A lumped circuit model of the feedback stabilization studies in tokamaks is calculated. This work parallels the formulation by Boozer^a, is analogous to the studies done on axisymmetric modes^b, and generalizes the cylindrical model^c. The lumped circuit parameters are derived from the DCON derived eigenfunctions of the plasma, the resistive shell and the feedback coils. The inductances are calculated using the VACUUM code which is designed to calculate the responses between the various elements in the feedback system. The results are compared with the normal mode^d and the system identification^e approaches. ^aA.H. Boozer, Phys. Plasmas 5, 3350 (1998). ^b E.A. Lazarus et al., Nucl. Fusion 30, 111 (1990). ^c M. Okabayashi et al., Nucl. Fusion 38, 1607 (1998). ^dM.S. Chu et al., Nucl. Fusion 43, 441 (2003). ^eY.Q. Liu et al., Phys. Plasmas 7, 3681 (2000).

  7. Breakdown parameter for kinetic modeling of multiscale gas flows.

    PubMed

    Meng, Jianping; Dongari, Nishanth; Reese, Jason M; Zhang, Yonghao

    2014-06-01

    Multiscale methods built purely on the kinetic theory of gases provide information about the molecular velocity distribution function. It is therefore both important and feasible to establish new breakdown parameters for assessing the appropriateness of a fluid description at the continuum level by utilizing kinetic information rather than macroscopic flow quantities alone. We propose a new kinetic criterion to indirectly assess the errors introduced by a continuum-level description of the gas flow. The analysis, which includes numerical demonstrations, focuses on the validity of the Navier-Stokes-Fourier equations and corresponding kinetic models and reveals that the new criterion can consistently indicate the validity of continuum-level modeling in both low-speed and high-speed flows at different Knudsen numbers. PMID:25019910

  8. A novel criterion for determination of material model parameters

    NASA Astrophysics Data System (ADS)

    Andrade-Campos, A.; de-Carvalho, R.; Valente, R. A. F.

    2011-05-01

    Parameter identification problems have emerged due to the increasing demand for precision in the numerical results obtained by Finite Element Method (FEM) software. High result precision can only be obtained with reliable input data and robust numerical techniques. The determination of parameters should always be performed by confronting numerical and experimental results and minimizing the difference between them. However, the success of this task depends on the specification of the cost/objective function, defined as the difference between the experimental and the numerical results. Recently, various objective functions have been formulated to assess the errors between the experimental and computed data (Lin et al., 2002; Cao and Lin, 2008; among others). The objective functions should be able to lead the optimisation process efficiently. An ideal objective function should have the following properties: (i) all the experimental data points on a curve and all experimental curves should have an equal opportunity to be optimised; and (ii) different units and/or the number of curves in each sub-objective should not affect the overall performance of the fitting. These two criteria should be achieved without manually choosing the weighting factors. However, for some non-analytical problems, this is very difficult in practice. Null values of the experimental or numerical data also make the task difficult. In this work, a novel objective function for constitutive model parameter identification is presented. It is a generalization of the work of Cao and Lin, and it is suitable for all kinds of constitutive models and mechanical tests, including cyclic tests and Bauschinger tests with null values.
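
    A sketch of a scale-invariant least-squares objective in the spirit described above: every point on a curve and every curve counts equally, and a small offset guards against null experimental values. This is a generic illustration, not the criterion proposed in the paper.

```python
import numpy as np

def objective(experimental_curves, simulated_curves, eps=1e-8):
    """Dimensionless misfit averaged first over points, then over curves.

    experimental_curves / simulated_curves: lists of 1-D arrays of matching lengths.
    """
    per_curve = []
    for exp, sim in zip(experimental_curves, simulated_curves):
        scale = np.maximum(np.abs(exp), eps)        # guards against zero data points
        per_curve.append(np.mean(((sim - exp) / scale) ** 2))
    return float(np.mean(per_curve))                # every curve counts equally

# Two "tests" with different units and lengths contribute comparably
exp1, sim1 = np.array([100.0, 200.0, 300.0]), np.array([110.0, 195.0, 310.0])
exp2, sim2 = np.array([0.0, 0.5]), np.array([0.02, 0.48])
print(objective([exp1, exp2], [sim1, sim2]))
```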

  9. Model structure and parameter identification in soil carbon models using incubation data

    NASA Astrophysics Data System (ADS)

    Sierra, Carlos

    2015-04-01

    Models of soil organic matter dynamics play an important role in integrating different sources of information and help to predict the future behavior of carbon stocks and fluxes in soils. In particular, compartment-based models have proved successful at integrating data from laboratory and field experiments to estimate the range of cycling rates of organic matter found in different soils. Complex models with particular mechanisms for the stabilization and destabilization of organic matter usually include a larger number of parameters than simpler models that omit detailed mechanisms. This poses a challenge for parameterizing complex models. Depending on the type of data available, the estimation of parameters in complex models may lead to identifiability problems, i.e., different combinations of parameters giving equally good predictions with respect to the observed data. In this contribution, I explore the problem of identifiability in soil organic matter models, pointing out combinations of empirical data and model structure that can minimize identifiability issues. In particular, I show how common datasets from incubation experiments can uniquely identify only a small number of parameters in simple models. Isotopic data and soil fractionations can help to reduce identifiability issues, but only to a limited extent. In medium-complexity models that include stabilization and destabilization mechanisms, only 4 to 5 parameters may be uniquely identified even when a full set of respiration fluxes, stocks, fractions, and isotopic data is integrated to inform parameter estimation.
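
    A minimal two-pool compartment model of the kind discussed above, with two decomposition rates and one transfer coefficient; the rates are illustrative assumptions. Such a sketch also shows why a respiration flux alone often cannot identify all rate and transfer parameters uniquely, since different combinations can produce similar efflux curves.

```python
import numpy as np
from scipy.integrate import solve_ivp

def two_pool(t, c, k_fast, k_slow, transfer):
    """dC/dt for a fast and a slow pool; a fraction of fast-pool losses feeds the slow pool."""
    c_fast, c_slow = c
    return [-k_fast * c_fast,
            transfer * k_fast * c_fast - k_slow * c_slow]

def respiration(t, c, k_fast, k_slow, transfer):
    """CO2 efflux: decomposed carbon that is not transferred between pools."""
    c_fast, c_slow = c
    return (1.0 - transfer) * k_fast * c_fast + k_slow * c_slow

params = dict(k_fast=0.05, k_slow=0.002, transfer=0.3)   # illustrative rates (1/day)
sol = solve_ivp(two_pool, t_span=(0.0, 365.0), y0=[10.0, 40.0],
                args=tuple(params.values()), t_eval=np.linspace(0.0, 365.0, 50))
resp = respiration(sol.t, sol.y, **params)
print(resp[:5])
```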

  10. The order parameter model of liquids and glasses with applications to dielectric relaxation

    NASA Astrophysics Data System (ADS)

    Lesikar, Arnold V.; Moynihan, Cornelius T.

    1980-08-01

    The order parameter model is generalized to describe systems whose equilibrium states depend on other intensive variables, e.g., electric field E, in addition to temperature T and pressure P. The set of order parameters required to specify the state of a liquid or glass is shown to form an abstract Euclidean vector space in the vicinity of a particular equilibrium state. The results of relaxational experiments are then connected with geometric relations in this space of order parameters. Thermodynamic stability requires that certain angles in this space have real values. This leads to thermodynamic stability conditions (TSC's), which include the well known Prigogine-Defay condition for systems with intensive variables T and P and analogs of it for systems with other sets of intensive variables. The order parameter model is applied to dielectric relaxation, and its predictions are tested against available data. It is shown that the addition of E as an intensive variable requires at least one more order parameter to specify the state of the system than the number needed when T and P are the only intensive variables.

  11. Automated optimization of water-water interaction parameters for a coarse-grained model.

    PubMed

    Fogarty, Joseph C; Chiu, See-Wing; Kirby, Peter; Jakobsson, Eric; Pandit, Sagar A

    2014-02-13

    We have developed an automated parameter optimization software framework (ParOpt) that implements the Nelder-Mead simplex algorithm and applied it to a coarse-grained polarizable water model. The model employs a tabulated, modified Morse potential with decoupled short- and long-range interactions incorporating four water molecules per interaction site. Polarizability is introduced by the addition of a harmonic angle term defined among three charged points within each bead. The target function for parameter optimization was based on the experimental density, surface tension, electric field permittivity, and diffusion coefficient. The model was validated by comparison of statistical quantities with experimental observation. We found very good performance of the optimization procedure and good agreement of the model with experiment. PMID:24460506
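
    The Nelder-Mead loop at the heart of such a framework can be sketched with SciPy; the "model" and target properties below are placeholders standing in for a molecular-dynamics run and the experimental observables, not the coarse-grained water model itself.

```python
import numpy as np
from scipy.optimize import minimize

TARGETS = {"density": 997.0, "surface_tension": 72.0, "diffusion": 2.3}  # illustrative targets

def run_model(params):
    """Placeholder for a simulation returning bulk properties for given force-field parameters."""
    well_depth, eq_distance = params
    return {"density": 900.0 + 20.0 * well_depth,
            "surface_tension": 50.0 + 5.0 * well_depth * eq_distance,
            "diffusion": 4.0 - 0.3 * eq_distance}

def cost(params):
    """Sum of squared relative errors against the target properties."""
    props = run_model(params)
    return sum(((props[k] - v) / v) ** 2 for k, v in TARGETS.items())

result = minimize(cost, x0=np.array([4.0, 5.0]), method="Nelder-Mead",
                  options={"xatol": 1e-4, "fatol": 1e-6, "maxiter": 500})
print(result.x, result.fun)
```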

  12. Automated Optimization of Water–Water Interaction Parameters for a Coarse-Grained Model

    PubMed Central

    2015-01-01

    We have developed an automated parameter optimization software framework (ParOpt) that implements the Nelder–Mead simplex algorithm and applied it to a coarse-grained polarizable water model. The model employs a tabulated, modified Morse potential with decoupled short- and long-range interactions incorporating four water molecules per interaction site. Polarizability is introduced by the addition of a harmonic angle term defined among three charged points within each bead. The target function for parameter optimization was based on the experimental density, surface tension, electric field permittivity, and diffusion coefficient. The model was validated by comparison of statistical quantities with experimental observation. We found very good performance of the optimization procedure and good agreement of the model with experiment. PMID:24460506

  13. Assessing uncertainty in model parameters based on sparse and noisy experimental data.

    PubMed

    Hiroi, Noriko; Swat, Maciej; Funahashi, Akira

    2014-01-01

    When performing parametric identification of mathematical models of biological events, experimental data are rarely sufficient to estimate the target behaviors produced by complex non-linear systems. We performed parameter fitting to a cell cycle model with experimental data as an in silico experiment. We calibrated model parameters with the generalized least squares method with randomized initial values and checked the local and global sensitivity of the model. Sensitivity analyses showed that parameter optimization reduced sensitivity except for parameters related to the metabolism of the transcription factors c-Myc and E2F, which are required to overcome the restriction point (R-point). We performed bifurcation analyses with the optimized parameters and found that the bimodality was lost. This result suggests that accumulation of c-Myc and E2F induced dysfunction of the R-point. We performed a second parameter optimization based on the results of the sensitivity analyses and incorporating additional information derived from recent in vivo data. This optimization restored the bimodal characteristics of the model, with a narrower range of hysteresis than the original. This result suggests that the optimized model can more easily go through the R-point and come back to the gap phase after having overcome it. Two-parameter space analyses showed that the metabolism of c-Myc is transformed such that it allows bimodal cell behavior with weak growth factor stimuli. This result is compatible with the character of the cell line used in our experiments. At the same time, Rb, an inhibitor of E2F, allows bimodal cell behavior with only a limited range of stimuli when it is activated, but with a wider range of stimuli when it is inactive. These results provide two insights: biologically, the two transcription factors play an essential role in enabling malignant cells to overcome the R-point with weaker growth factor stimuli; and theoretically, sparse time-course data can be used to change a model to a biologically expected state

  14. Variational methods to estimate terrestrial ecosystem model parameters

    NASA Astrophysics Data System (ADS)

    Delahaies, Sylvain; Roulstone, Ian

    2016-04-01

    Carbon is at the basis of the chemistry of life. Its ubiquity in the Earth system is the result of complex recycling processes. Present in the atmosphere in the form of carbon dioxide, it is absorbed by marine and terrestrial ecosystems and stored within living biomass and decaying organic matter. Soil chemistry and a non-negligible amount of time then transform the dead matter into fossil fuels. Throughout this cycle, carbon dioxide is released into the atmosphere through respiration and the combustion of fossil fuels. Model-data fusion techniques allow us to combine our understanding of these complex processes with an ever-growing amount of observational data to help improve models and predictions. The data assimilation linked ecosystem carbon (DALEC) model is a simple box model simulating the carbon budget allocation of terrestrial ecosystems. Over the last decade several studies have demonstrated the relative merit of various inverse modelling strategies (MCMC, EnKF, 4DVAR) for estimating model parameters and initial carbon stocks for DALEC and for quantifying the uncertainty in the predictions. Despite its simplicity, DALEC represents the basic processes at the heart of more sophisticated models of the carbon cycle. Using adjoint-based methods we study inverse problems for DALEC with various data streams (8-day MODIS LAI, monthly MODIS LAI, NEE). The framework of constrained optimization allows us to incorporate ecological common sense into the variational framework. We use resolution matrices to study the nature of the inverse problems and to obtain data importance and information content for the different types of data. We study how varying the time step affects the solutions, and we show how "spin up" naturally improves the conditioning of the inverse problems.

  15. Adaptive neuro-fuzzy inference system (ANFIS) to predict CI engine parameters fueled with nano-particles additive to diesel fuel

    NASA Astrophysics Data System (ADS)

    Ghanbari, M.; Najafi, G.; Ghobadian, B.; Mamat, R.; Noor, M. M.; Moosavian, A.

    2015-12-01

    This paper studies the use of an adaptive neuro-fuzzy inference system (ANFIS) to predict the performance parameters and exhaust emissions of a diesel engine operating on nano-diesel blended fuels. In order to predict the engine parameters, the experimental data were randomly divided into training and testing sets. For ANFIS modelling, a Gaussian curve membership function (gaussmf) and 200 training epochs (iterations) were found to be the optimum choices for the training process. The results demonstrate that ANFIS is capable of predicting the diesel engine performance and emissions. In the experimental step, carbon nanotubes (CNT) (40, 80 and 120 ppm) and silver nanoparticles (40, 80 and 120 ppm) were prepared and added as additives to the diesel fuel. A six-cylinder, four-stroke diesel engine was fuelled with these blended fuels and operated at different engine speeds. Experimental test results indicated that adding nanoparticles to diesel fuel increased the engine power and torque output. For nano-diesel, the brake specific fuel consumption (bsfc) was lower than for neat diesel fuel. The results showed that as the nanoparticle concentration in the diesel fuel increased (from 40 ppm to 120 ppm), CO2 emission increased. CO emission with the nanoparticle-blended fuels was significantly lower than with pure diesel fuel. UHC emission decreased with the silver nano-diesel blends but increased with the fuels containing CNT nanoparticles. The trend in NOx emission was the inverse of that for UHC: adding nanoparticles to the blended fuels increased NOx compared with neat diesel fuel. The tests revealed that silver and CNT nanoparticles can be used as additives in diesel fuel to improve combustion and reduce exhaust emissions significantly.

  16. Establishing a connection between hydrologic model parameters and physical catchment signatures for improved hierarchical Bayesian modeling in ungauged catchments

    NASA Astrophysics Data System (ADS)

    Marshall, L. A.; Weber, K.; Smith, T. J.; Greenwood, M. C.; Sharma, A.

    2012-12-01

    In an effort to improve hydrologic analysis in areas with limited data, hydrologists often seek to link catchments where little to no data collection occurs to catchments that are gauged. Various metrics and methods have been proposed to identify such relationships, in the hope that "surrogate" catchments might provide information for those catchments that are hydrologically similar. In this study we present a statistical analysis of over 150 catchments located in southeast Australia to examine the relationship between a hydrological model and certain catchment metrics. A conceptual rainfall-runoff model is optimized for each of the catchments and hierarchical clustering is performed to link catchments based on their calibrated model parameters. Clustering has been used in recent hydrologic studies but catchments are often clustered based on physical characteristics alone. Usually there is little evidence to suggest that such "surrogate" data approaches provide sufficiently similar model predictions. Beginning with model parameters and working backwards, we hope to establish if there is a relationship between the model parameters and physical characteristics for improved model predictions in the ungauged catchment. To analyze relationships, permutational multivariate analysis of variance tests are used that suggest which hydrologic metrics are most appropriate for discriminating between calibrated catchment clusters. Additional analysis is performed to determine which cluster pairs show significant differences for various metrics. We further examine the extent to which these results may be insightful for a hierarchical Bayesian modeling approach that is aimed at generating model predictions at an ungauged site. The method, known as Bayes Empirical Bayes (BEB) works to pool information from similar catchments to generate informed probability distributions for each model parameter at a data-limited catchment of interest. We demonstrate the effect of selecting
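
    A minimal illustration of the clustering step described here: catchments are grouped by their calibrated parameter vectors using SciPy's hierarchical clustering. The catchment parameters below are synthetic placeholders, not the Australian data set used in the study.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.stats import zscore

rng = np.random.default_rng(1)
# Rows: catchments; columns: calibrated rainfall-runoff parameters (synthetic)
params = np.vstack([rng.normal([1.0, 50.0, 0.2], [0.1, 5.0, 0.05], size=(75, 3)),
                    rng.normal([2.0, 80.0, 0.6], [0.2, 8.0, 0.10], size=(75, 3))])

z = zscore(params, axis=0)                 # standardize so no parameter dominates the distance
tree = linkage(z, method="ward")           # agglomerative clustering on Euclidean distance
labels = fcluster(tree, t=2, criterion="maxclust")
print(np.bincount(labels))                 # number of catchments per cluster
```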

  17. Analysing DNA structural parameters using a mesoscopic model

    NASA Astrophysics Data System (ADS)

    Amarante, Tauanne D.; Weber, Gerald

    2014-03-01

    The Peyrard-Bishop model is a mesoscopic approximation for modelling DNA and RNA molecules. Several variants of this model exist, from 3D Hamiltonians including torsional angles to simpler 2D versions. Currently, we are able to parametrize the 2D variants of the model, which allows us to extract important information about the molecule. For example, with this technique we were recently able to obtain the hydrogen bonds of RNA from melting temperatures, which previously were obtainable only from NMR measurements. Here, we take the 3D torsional Hamiltonian and set the angles to zero. Curiously, in doing this we do not recover the traditional 2D Hamiltonians. Instead, we obtain a different 2D Hamiltonian which now includes a base-pair step distance, commonly known as rise. A detailed knowledge of the rise distance is important as it determines the overall length of the DNA molecule. This 2D Hamiltonian provides us with the exciting prospect of obtaining DNA structural parameters from melting temperatures. Our results for the rise distance at low salt concentration are in good qualitative agreement with several published x-ray measurements. We also found an important dependence of the rise distance on salt concentration. In contrast to our previous calculations, the elastic constants now show little dependence on salt concentration, which appears to be closer to what is seen experimentally in DNA flexibility experiments.
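
    For context, the planar Peyrard-Bishop Hamiltonian from which such variants descend can be written as below (standard form with a Morse on-site potential and harmonic stacking; the modified 2D Hamiltonian with an explicit rise coordinate derived in the paper is not reproduced here):

```latex
% Planar Peyrard-Bishop Hamiltonian for base-pair stretchings y_n
% (D, a: Morse potential depth and width; k: harmonic stacking constant; m: reduced mass)
H = \sum_{n} \left[ \frac{p_n^{2}}{2m}
      + D\left(e^{-a y_n} - 1\right)^{2}
      + \frac{k}{2}\left(y_n - y_{n-1}\right)^{2} \right]
```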

  18. The S-parameter in holographic technicolor models

    NASA Astrophysics Data System (ADS)

    Agashe, Kaustubh; Csáki, Csaba; Reece, Matthew; Grojean, Christophe

    2007-12-01

    We study the S parameter, considering especially its sign, in models of electroweak symmetry breaking (EWSB) in extra dimensions, with fermions localized near the UV brane. Such models are conjectured to be dual to 4D strong dynamics triggering EWSB. The motivation for such a study is that a negative value of S can significantly ameliorate the constraints from electroweak precision data on these models, allowing lower mass scales (TeV or below) for the new particles and leading to easier discovery at the LHC. We first extend an earlier proof of S>0 for EWSB by boundary conditions in arbitrary metric to the case of general kinetic functions for the gauge fields or arbitrary kinetic mixing. We then consider EWSB in the bulk by a Higgs VEV showing that S is positive for arbitrary metric and Higgs profile, assuming that the effects from higher-dimensional operators in the 5D theory are sub-leading and can therefore be neglected. For the specific case of AdS5 with a power law Higgs profile, we also show that S ~ +O(1), including effects of possible kinetic mixing from higher-dimensional operator (of NDA size) in the 5D theory. Therefore, our work strongly suggests that S is positive in calculable models in extra dimensions.

  19. Hierarchical parameter identification in models of respiratory mechanics.

    PubMed

    Schranz, C; Knöbel, C; Kretschmer, J; Zhao, Z; Möller, K

    2011-11-01

    Potential harmful effects of ventilation therapy could be reduced by model-based predictions of the effects of ventilator settings to the patient. To obtain optimal predictions, the model has to be individualized based on patients' data. Given a nonlinear model, the result of parameter identification using iterative numerical methods depends on initial estimates. In this work, a feasible hierarchical identification process is proposed and compared to the commonly implemented direct approach with randomized initial values. The hierarchical approach is exemplarily illustrated by identifying the viscoelastic model (VEM) of respiratory mechanics, whose a priori identifiability was proven. To demonstrate its advantages over the direct approach, two different data sources were employed. First, correctness of the approach was shown with simulation data providing controllable conditions. Second, the clinical potential was evaluated under realistic conditions using clinical data from 13 acute respiratory distress syndrome (ARDS) patients. Simulation data revealed that the success rate of the direct approach exponentially decreases with increasing deviation of the initial estimates while the hierarchical approach always obtained the correct solution. The average computing time using clinical data for the direct approach equals 4.77 s (SD  =  1.32) and 2.41 s (SD  =  0.01) for the hierarchical approach. These investigations demonstrate that a hierarchical approach may be beneficial with respect to robustness and efficiency using simulated and clinical data. PMID:21880567

  20. StepGLUE, an Effective Method for Parameter Estimation and Uncertainty Analysis of Geochemical Models with Wide Parameter Dimensionality

    NASA Astrophysics Data System (ADS)

    Sharifi, A.; Kalin, L.; Hantush, M. M.

    2013-12-01

    The Generalized Likelihood Uncertainty Estimation (GLUE) method is a practical tool for evaluating parameter uncertainty and distinguishing behavioral parameter sets, which are deemed acceptable in reproducing the observed behavior of a system, from non-behavioral sets. When a conventional GLUE methodology is applied to a complex geochemical model, depending on the type of observed constituent used for model verification, parameters affecting more than one process might end up having different behavioral distributions. To overcome this problem, we propose a Stepwise GLUE procedure (StepGLUE), which can better identify the behavioral distributions of the model parameters regardless of the data being used for model verification. The StepGLUE method uses a three-step approach for identifying parameter behavioral domains that produce optimal results for all model constituents. In step 0, model parameters are divided into two groups: group A, consisting of parameters exclusive to a single constituent (e.g. denitrification rate, which affects only the nitrate pool), and group B, consisting of parameters affecting more than one constituent (e.g. nitrification rate, which involves both ammonia- and nitrate-related processes). In step 1, for each constituent (such as nitrate), we identify the most sensitive parameters through the Kolmogorov-Smirnov (KS) test, using a single-constituent likelihood measure. If any of the parameters listed in group A end up being sensitive, new parameter values are generated for them according to their behavioral distributions. This process is necessary to avoid carrying over parameter uncertainty of one constituent to other constituents in the model. At the end of step 1, a new series of Monte Carlo simulations is performed with the modified parameters. In step 2 (the parameter-oriented phase), we re-evaluate constituent sensitivity to all model parameters using the KS test; however, this time the focus is on parameters in group B. For each of the group B parameters that show up in
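
    A sketch of the basic GLUE step that StepGLUE builds on: Monte Carlo sampling of parameter sets, an informal likelihood measure (here Nash-Sutcliffe efficiency), and retention of behavioral sets above a threshold. The model, parameter ranges, and threshold are placeholders, not the geochemical model from the study.

```python
import numpy as np

rng = np.random.default_rng(7)

def model(params, forcing):
    """Placeholder simulator: params -> simulated time series."""
    rate, capacity = params
    return capacity * (1.0 - np.exp(-rate * forcing))

def nash_sutcliffe(obs, sim):
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

forcing = np.linspace(0.0, 10.0, 60)
observed = model((0.4, 5.0), forcing) + rng.normal(0.0, 0.2, forcing.size)

# Monte Carlo sampling and behavioral selection
candidates = rng.uniform([0.05, 1.0], [1.0, 10.0], size=(5000, 2))
likelihoods = np.array([nash_sutcliffe(observed, model(p, forcing)) for p in candidates])
behavioral = candidates[likelihoods > 0.7]            # threshold is an illustrative choice
print(len(behavioral), behavioral.mean(axis=0))
```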

  1. Multi-objective parameter optimization of common land model using adaptive surrogate modeling

    NASA Astrophysics Data System (ADS)

    Gong, W.; Duan, Q.; Li, J.; Wang, C.; Di, Z.; Dai, Y.; Ye, A.; Miao, C.

    2015-05-01

    Parameter specification usually has a significant influence on the performance of land surface models (LSMs). However, estimating the parameters properly is a challenging task for the following reasons: (1) LSMs usually have too many adjustable parameters (20 to 100 or even more), leading to the curse of dimensionality in the parameter input space; (2) LSMs usually have many output variables involving water/energy/carbon cycles, so that calibrating LSMs is actually a multi-objective optimization problem; (3) regional LSMs are expensive to run, while conventional multi-objective optimization methods need a large number of model runs (typically ~10^5-10^6), which makes parameter optimization computationally prohibitive. An uncertainty quantification framework was developed to meet these challenges, which includes the following steps: (1) parameter screening to reduce the number of adjustable parameters; (2) surrogate models to emulate the responses of dynamic models to the variation of adjustable parameters; (3) an adaptive strategy to improve the efficiency of surrogate-modeling-based optimization; and (4) a weighting function to transform the multi-objective optimization into a single-objective optimization. In this study, we demonstrate the uncertainty quantification framework on a single-column application of an LSM - the Common Land Model (CoLM) - and evaluate the effectiveness and efficiency of the proposed framework. The results indicate that this framework can obtain optimal parameters effectively and efficiently. Moreover, this result implies the possibility of calibrating other large complex dynamic models, such as regional-scale LSMs, atmospheric models and climate models.

  2. Impact of parameter uncertainty on carbon sequestration modeling

    NASA Astrophysics Data System (ADS)

    Bandilla, K.; Celia, M. A.

    2013-12-01

    Geologic carbon sequestration through injection of supercritical carbon dioxide (CO2) into the subsurface is one option to reduce anthropogenic CO2 emissions. Widespread industrial-scale deployment, on the order of giga-tonnes of CO2 injected per year, will be necessary for carbon sequestration to make a significant contribution to solving the CO2 problem. Deep saline formations are suitable targets for CO2 sequestration due to their large storage capacity, high injectivity, and favorable pressure and temperature regimes. Due to the large areal extent of saline formations, and the need to inject very large amounts of CO2, multiple sequestration operations are likely to be developed in the same formation. The injection-induced migration of both CO2 and resident formation fluids (brine) needs to be predicted to determine the feasibility of industrial-scale deployment of carbon sequestration. Due to the large spatial scale of the domain, many of the modeling parameters (e.g., permeability) will be highly uncertain. In this presentation we discuss a sensitivity analysis of both the pressure response and CO2 plume migration to variations of model parameters such as permeability, compressibility and temperature. The impact of uncertainty in the stratigraphic succession is also explored. The sensitivity analysis is conducted using a numerical vertically-integrated modeling approach. The Illinois Basin, USA is selected as the test site for this study, due to its large storage capacity and large number of stationary CO2 sources. As there is currently only one active CO2 injection operation in the Illinois Basin, a hypothetical injection scenario is used, where CO2 is injected at the locations of large CO2 emitters related to electricity generation, ethanol production and hydrocarbon refinement. The Area of Review (AoR) is chosen as the comparison metric, as it includes both the CO2 plume size and pressure response.

  3. Bayesian parameter inference for empirical stochastic models of paleoclimatic records with dating uncertainty

    NASA Astrophysics Data System (ADS)

    Boers, Niklas; Goswami, Bedartha; Chekroun, Mickael; Svensson, Anders; Rousseau, Denis-Didier; Ghil, Michael

    2016-04-01

    In the recent past, empirical stochastic models have been successfully applied to model a wide range of climatic phenomena [1,2]. In addition to enhancing our understanding of the geophysical systems under consideration, multilayer stochastic models (MSMs) have been shown to be solidly grounded in the Mori-Zwanzig formalism of statistical physics [3]. They are also well-suited for predictive purposes, e.g., for the El Niño Southern Oscillation [4] and the Madden-Julian Oscillation [5]. In general, these models are trained on a given time series under consideration, and then assumed to reproduce certain dynamical properties of the underlying natural system. Most existing approaches are based on least-squares fitting to determine optimal model parameters, which does not allow for an uncertainty estimation of these parameters. This approach significantly limits the degree to which dynamical characteristics of the time series can be safely inferred from the model. Here, we are specifically interested in fitting low-dimensional stochastic models to time series obtained from paleoclimatic proxy records, such as the oxygen isotope ratio and dust concentration of the NGRIP record [6]. The time series derived from these records exhibit substantial dating uncertainties, in addition to the proxy measurement errors. In particular, for time series of this kind, it is crucial to obtain uncertainty estimates for the final model parameters. Following [7], we first propose a statistical procedure to shift dating uncertainties from the time axis to the proxy axis of layer-counted paleoclimatic records. Thereafter, we show how Maximum Likelihood Estimation in combination with Markov Chain Monte Carlo parameter sampling can be employed to translate all uncertainties present in the original proxy time series to uncertainties of the parameter estimates of the stochastic model. We compare time series simulated by the empirical model to the original time series in terms of standard
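
    As a hedged illustration of the parameter-sampling step, the sketch below fits a simple AR(1) stochastic model x_{t+1} = a*x_t + sigma*eps_t to a synthetic proxy-like series with Metropolis-Hastings, returning posterior spreads rather than a single least-squares point estimate. The series, priors and proposal scales are invented; a real application would also fold in the proxy and dating uncertainties discussed above.

        # Metropolis-Hastings sampling of (a, sigma) for an AR(1) model (illustrative sketch).
        import numpy as np

        rng = np.random.default_rng(1)
        x = rng.standard_normal(500).cumsum() * 0.05      # synthetic stand-in for a proxy series

        def log_like(a, sigma):
            if not (-1.0 < a < 1.0) or sigma <= 0.0:
                return -np.inf
            resid = x[1:] - a * x[:-1]
            return -0.5 * np.sum(resid ** 2 / sigma ** 2 + np.log(2.0 * np.pi * sigma ** 2))

        samples, theta = [], np.array([0.5, 0.1])
        lp = log_like(*theta)
        for _ in range(20000):
            prop = theta + rng.normal(scale=[0.02, 0.005])
            lp_prop = log_like(*prop)
            if np.log(rng.uniform()) < lp_prop - lp:      # flat priors assumed
                theta, lp = prop, lp_prop
            samples.append(theta.copy())

        samples = np.array(samples[5000:])                # discard burn-in
        print("posterior mean and std:", samples.mean(axis=0), samples.std(axis=0))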

  4. System identification of finite element modeling parameters using experimental spatial dynamic modeling

    NASA Astrophysics Data System (ADS)

    Lindholm, Brian E.; West, Robert L.

    1994-09-01

    A design-parameter-based methodology for updating finite element models based on the results of experimental dynamics tests is presented. In the proposed method, analyst-selected design parameters are updated with the objective of making realistic changes to a finite element model that will enable the model to more accurately predict the behavior of the structure. This process of 'reconciling' the finite element model with experimental data seeks to bring uncertainty in design parameters into the formulation for realistic updates of the model parameters. The reconciliation process becomes a problem of system identification. Since the finite element model is a spatial model, high-spatial-density measurement of the structure's operating shape by the scanning laser-Doppler vibrometer is highly desirable. The reconciliation process updates the selected design parameters by solving a non-linear least-squares problem in which the differences between laser-based velocity measurements and analytically derived structural velocity fields are minimized over the entire structure. In the formulation, design or model parameters with the greatest uncertainty are identified first, retaining statistical qualification on the estimates. This method lends itself to cross-validation of the model over the entire structure as well as at several frequencies of interest or over a frequency range. Model order analysis can also be performed within the process to ensure that the correct model is identified. The experimental velocity field is obtained by sinusoidally exciting the test structure at a given frequency and acquiring steady-state velocity data with a scanning laser-Doppler vibrometer. Conceptually, the laser-based measurements are samples of the structure's operating-shape velocity field. The finite element formulation used to generate the analytical steady-state velocity field is derived using either a dynamic stiffness finite element formulation or a static stiffness
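
    A hedged sketch of the reconciliation step is given below: the differences between a measured velocity field and a model-predicted field are minimized over the design parameters with a nonlinear least-squares solver, and an approximate parameter covariance is recovered from the Jacobian. The forward model predict_velocity_field is a hypothetical two-parameter stand-in, not the dynamic-stiffness finite element formulation used in the paper.

        # Reconciliation posed as a nonlinear least-squares problem (illustrative sketch).
        import numpy as np
        from scipy.optimize import least_squares

        scan_points = np.linspace(0.0, 1.0, 200)              # laser scan locations

        def predict_velocity_field(params, pts):
            stiffness, damping = params                        # hypothetical design parameters
            return np.sin(np.pi * pts) / np.sqrt((stiffness - 4.0) ** 2 + damping ** 2)

        true = np.array([4.2, 0.3])
        measured = predict_velocity_field(true, scan_points)
        measured = measured + np.random.default_rng(2).normal(0.0, 0.01, measured.size)

        def residuals(params):
            # Differences between predicted and measured velocities over the whole structure.
            return predict_velocity_field(params, scan_points) - measured

        fit = least_squares(residuals, x0=[5.0, 0.5], bounds=([1.0, 0.0], [10.0, 2.0]))

        # Approximate parameter covariance (statistical qualification of the estimates).
        J = fit.jac
        cov = np.linalg.inv(J.T @ J) * np.var(fit.fun)
        print("estimated parameters:", fit.x, "std:", np.sqrt(np.diag(cov)))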

  5. Parameter estimation of multiple item response profile model.

    PubMed

    Cho, Sun-Joo; Partchev, Ivailo; De Boeck, Paul

    2012-11-01

    Multiple item response profile (MIRP) models are models with crossed fixed and random effects. At least one between-person factor is crossed with at least one within-person factor, and the persons nested within the levels of the between-person factor are crossed with the items within levels of the within-person factor. Maximum likelihood estimation (MLE) of models for binary data with crossed random effects is challenging. This is because the marginal likelihood does not have a closed form, so that MLE requires numerical or Monte Carlo integration. In addition, the multidimensional structure of MIRPs makes the estimation complex. In this paper, three different estimation methods to meet these challenges are described: the Laplace approximation to the integrand; hierarchical Bayesian analysis, a simulation-based method; and an alternating imputation posterior with adaptive quadrature as the approximation to the integral. In addition, this paper discusses the advantages and disadvantages of these three estimation methods for MIRPs. The three algorithms are compared in a real data application and a simulation study was also done to compare their behaviour. PMID:22070786

  6. Symbolic-numeric estimation of parameters in biochemical models by quantifier elimination.

    PubMed

    Anai, Hirokazu; Orii, Shigeo; Horimoto, Katsuhisa

    2006-10-01

    The sequencing of complete genomes allows analyses of the interactions between various biological molecules on a genomic scale, which prompted us to simulate the global behaviors of biological phenomena on the molecular level. One of the basic mathematical problems in such simulations is parameter optimization in kinetic models of complex dynamics, and many estimation methods have been designed. We introduce a new approach to estimate the parameters in biological kinetic models by quantifier elimination (QE), in combination with numerical simulation methods. The estimation method was applied to a model for the inhibition kinetics of HIV proteinase with ten parameters and nine variables, and attained a goodness of fit to 300 points of observed data of the same magnitude as that obtained by previous estimation methods, remarkably by using only one or two data points. Furthermore, the utilization of QE demonstrated the feasibility of the present method for elucidating the behavior of the parameters and the variables in the analyzed model. Therefore, the present symbolic-numeric method is a powerful approach to reveal the fundamental mechanisms of kinetic models, in addition to being a computational engine. PMID:17099943

  7. Determination of remodeling parameters for a strain-adaptive finite element model of the distal ulna.

    PubMed

    Neuert, Mark A C; Dunning, Cynthia E

    2013-09-01

    Strain energy-based adaptive material models are used to predict bone resorption resulting from stress shielding induced by prosthetic joint implants. Generally, such models are governed by two key parameters: a homeostatic strain-energy state (K) and a threshold deviation from this state required to initiate bone reformation (s). A refinement procedure has been performed to estimate these parameters in the femur and glenoid; this study investigates the specific influences of these parameters on resulting density distributions in the distal ulna. A finite element model of a human ulna was created using micro-computed tomography (µCT) data, initialized to a homogeneous density distribution, and subjected to approximate in vivo loading. Values for K and s were tested, and the resulting steady-state density distribution was compared with values derived from µCT images. The sensitivity of these parameters to initial conditions was examined by altering the initial homogeneous density value. The refined model parameters selected were then applied to six additional human ulnae to determine their performance across individuals. Model accuracy using the refined parameters was found to be comparable with that found in previous studies of the glenoid and femur, and gross bone structures, such as the cortical shell and medullary canal, were reproduced. The model was found to be insensitive to initial conditions; however, a fair degree of variation was observed between the six specimens. This work represents an important contribution to the study of changes in load transfer in the distal ulna following the implantation of commercial orthopedic implants. PMID:23804949
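
    A toy sketch of the strain-energy-based update rule is given below: local density increases when the strain energy per unit mass exceeds the homeostatic set point K by more than the dead-zone fraction s, decreases when it falls below it, and is left unchanged inside the dead zone. The strain-energy field, rate constant and bounds are invented; a real implementation would recompute the FE solution at every iteration.

        # Strain-adaptive density update with set point K and dead-zone width s (toy sketch).
        import numpy as np

        K, s, rate, dt = 0.004, 0.5, 1.0, 1.0          # set point (J/g), +/-50% dead zone (assumed)
        rho = np.full(1000, 0.8)                       # initial homogeneous density (g/cm^3)
        rng = np.random.default_rng(3)

        for _ in range(200):
            U = rng.uniform(0.001, 0.008, rho.size)    # stand-in for FE strain energy density
            stimulus = U / rho                         # strain energy per unit mass
            drho = np.zeros_like(rho)
            high = stimulus > K * (1.0 + s)            # overloaded: densify
            low = stimulus < K * (1.0 - s)             # underloaded: resorb
            drho[high] = rate * (stimulus[high] - K * (1.0 + s))
            drho[low] = rate * (stimulus[low] - K * (1.0 - s))
            rho = np.clip(rho + dt * drho, 0.01, 1.8)  # bound between marrow and cortical bone

        print("steady-state density range:", rho.min(), rho.max())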

  8. Modeling parameter extraction for DNQ-novolak thick film resists

    NASA Astrophysics Data System (ADS)

    Henderson, Clifford L.; Scheer, Steven A.; Tsiartas, Pavlos C.; Rathsack, Benjamen M.; Sagan, John P.; Dammel, Ralph R.; Erdmann, Andreas; Willson, C. Grant

    1998-06-01

    Optical lithography with special thick film DNQ-novolac photoresists has been practiced for many years to fabricate microstructures that require feature heights ranging from several to hundreds of microns, such as thin film magnetic heads. It is common in these thick film photoresist systems to observe interesting non-uniform profiles with narrow regions near the top surface of the film that transition into broader and more concave shapes near the bottom of the resist profile. A number of explanations have been proposed for these observations, including the formation of 'dry skins' at the resist surface and the presence of solvent gradients in the film which serve to modify the local development rate of the photoresist. There have been few detailed experimental studies of the development behavior of thick film resists. This has been due in part to the difficulty of studying these films with conventional dissolution rate monitors (DRMs). In general, this lack of experimental data, along with other factors, has made simulation and modeling of thick film resist performance difficult. As applications such as thin film head manufacturing drive to smaller features with higher aspect ratios, the need for accurate thick film simulation capability continues to grow. A new multi-wavelength DRM tool has been constructed and used in conjunction with a resist bleaching tool and rigorous parameter extraction techniques to establish exposure and development parameters for two thick film resists, AZ™ 4330-RS and AZ™ 9200. Simulations based on these parameters show good agreement with resist profiles for these two resists.

  9. Inverse modeling of hydrologic parameters using surface flux and runoff observations in the Community Land Model

    NASA Astrophysics Data System (ADS)

    Sun, Y.; Hou, Z.; Huang, M.; Tian, F.; Leung, L. R.

    2013-04-01

    This study demonstrates the possibility of inverting hydrologic parameters using surface flux and runoff observations in version 4 of the Community Land Model (CLM4). Previous studies showed that surface flux and runoff calculations are sensitive to major hydrologic parameters in CLM4 over different watersheds, and illustrated the necessity and possibility of parameter calibration. Two inversion strategies, the deterministic least-square fitting and stochastic Markov-Chain Monte-Carlo (MCMC) Bayesian inversion approaches, are evaluated by applying them to CLM4 at selected sites. The unknowns to be estimated include surface and subsurface runoff generation parameters and vadose zone soil water parameters. We find that using model parameters calibrated by the least-square fitting provides little improvement in the model simulations, but the sampling-based stochastic inversion approaches are consistent - as more information comes in, the predictive intervals of the calibrated parameters become narrower and the misfits between the calculated and observed responses decrease. In general, parameters that are identified to be significant through sensitivity analyses and statistical tests are better calibrated than those with weak or nonlinear impacts on flux or runoff observations. Temporal resolution of observations has larger impacts on the results of inverse modeling using heat flux data than runoff data. Soil and vegetation cover have important impacts on parameter sensitivities, leading to different patterns of posterior distributions of parameters at different sites. Overall, the MCMC-Bayesian inversion approach effectively and reliably improves the simulation of CLM under different climates and environmental conditions. Bayesian model averaging of the posterior estimates with different reference acceptance probabilities can smooth the posterior distribution and provide more reliable parameter estimates, but at the expense of wider uncertainty bounds.

  10. Inverse Modeling of Hydrologic Parameters Using Surface Flux and Runoff Observations in the Community Land Model

    SciTech Connect

    Sun, Yu; Hou, Zhangshuan; Huang, Maoyi; Tian, Fuqiang; Leung, Lai-Yung R.

    2013-12-10

    This study demonstrates the possibility of inverting hydrologic parameters using surface flux and runoff observations in version 4 of the Community Land Model (CLM4). Previous studies showed that surface flux and runoff calculations are sensitive to major hydrologic parameters in CLM4 over different watersheds, and illustrated the necessity and possibility of parameter calibration. Two inversion strategies, the deterministic least-square fitting and stochastic Markov-Chain Monte-Carlo (MCMC) Bayesian inversion approaches, are evaluated by applying them to CLM4 at selected sites. The unknowns to be estimated include surface and subsurface runoff generation parameters and vadose zone soil water parameters. We find that using model parameters calibrated by the least-square fitting provides little improvement in the model simulations, but the sampling-based stochastic inversion approaches are consistent - as more information comes in, the predictive intervals of the calibrated parameters become narrower and the misfits between the calculated and observed responses decrease. In general, parameters that are identified to be significant through sensitivity analyses and statistical tests are better calibrated than those with weak or nonlinear impacts on flux or runoff observations. Temporal resolution of observations has larger impacts on the results of inverse modeling using heat flux data than runoff data. Soil and vegetation cover have important impacts on parameter sensitivities, leading to different patterns of posterior distributions of parameters at different sites. Overall, the MCMC-Bayesian inversion approach effectively and reliably improves the simulation of CLM under different climates and environmental conditions. Bayesian model averaging of the posterior estimates with different reference acceptance probabilities can smooth the posterior distribution and provide more reliable parameter estimates, but at the expense of wider uncertainty bounds.

  11. Modeling soil detachment capacity by rill flow using hydraulic parameters

    NASA Astrophysics Data System (ADS)

    Wang, Dongdong; Wang, Zhanli; Shen, Nan; Chen, Hao

    2016-04-01

    The relationship between soil detachment capacity (Dc) by rill flow and hydraulic parameters (e.g., flow velocity, shear stress, unit stream power, stream power, and unit energy) at low flow rates is investigated to establish an accurate experimental model. Experiments are conducted using a 4 × 0.1 m rill hydraulic flume with a constant artificial roughness on the flume bed. The flow rates range from 0.22 × 10^-3 m^2 s^-1 to 0.67 × 10^-3 m^2 s^-1, and the slope gradients vary from 15.8% to 38.4%. Regression analysis indicates that the Dc by rill flow can be predicted using linear equations of flow velocity, stream power, unit stream power, and unit energy. Dc by rill flow that is fitted to shear stress can be predicted with a power function equation. Predictions based on flow velocity, unit energy, and stream power are powerful, but those based on shear stress, and especially on unit stream power, are relatively poor. The prediction based on flow velocity provides the best estimates of Dc by rill flow because of the simplicity and availability of its measurements. Owing to error in measuring flow velocity at low flow rates, the predictive abilities of Dc by rill flow using all hydraulic parameters are relatively lower in this study compared with the results of previous research. The accuracy of flow velocity measurements should be improved in future research.
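
    The two regression forms reported above can be reproduced with a few lines of fitting code; the sketch below uses synthetic data and shows a linear fit of Dc to stream power and a power-function fit to shear stress. The data and coefficients are illustrative only, not the flume measurements of the study.

        # Linear and power-law fits for detachment capacity (illustrative sketch).
        import numpy as np
        from scipy.optimize import curve_fit
        from scipy.stats import linregress

        rng = np.random.default_rng(4)
        stream_power = rng.uniform(0.1, 1.5, 60)               # synthetic, W m^-2
        shear_stress = rng.uniform(1.0, 12.0, 60)              # synthetic, Pa
        Dc = 0.8 * stream_power + 0.02 * shear_stress ** 1.4 + rng.normal(0.0, 0.05, 60)

        lin = linregress(stream_power, Dc)                     # Dc = a*omega + b
        print(f"linear fit: Dc = {lin.slope:.3f}*omega + {lin.intercept:.3f}, r^2 = {lin.rvalue**2:.2f}")

        power_law = lambda tau, c, d: c * tau ** d             # Dc = c*tau^d
        (c, d), _ = curve_fit(power_law, shear_stress, Dc, p0=[0.1, 1.0])
        print(f"power fit: Dc = {c:.3f}*tau^{d:.2f}")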

  12. Parameter Estimation and Model Validation of Nonlinear Dynamical Networks

    SciTech Connect

    Abarbanel, Henry; Gill, Philip

    2015-03-31

    In the performance period of this work under a DOE contract, the co-PIs, Philip Gill and Henry Abarbanel, developed new methods for statistical data assimilation for problems of DOE interest, including geophysical and biological problems. This included numerical optimization algorithms for variational principles and new parallel processing Monte Carlo routines for performing the path integrals of statistical data assimilation. These results have been summarized in the monograph: “Predicting the Future: Completing Models of Observed Complex Systems” by Henry Abarbanel, published by Springer-Verlag in June 2013. Additional results and details have appeared in the peer-reviewed literature.

  13. Parameters-related uncertainty in modeling sugar cane yield with an agro-Land Surface Model

    NASA Astrophysics Data System (ADS)

    Valade, A.; Ciais, P.; Vuichard, N.; Viovy, N.; Ruget, F.; Gabrielle, B.

    2012-12-01

    Agro-Land Surface Models (agro-LSM) have been developed from the coupling of specific crop models and large-scale generic vegetation models. They aim at accounting for the spatial distribution and variability of energy, water and carbon fluxes within soil-vegetation-atmosphere continuum with a particular emphasis on how crop phenology and agricultural management practice influence the turbulent fluxes exchanged with the atmosphere, and the underlying water and carbon pools. A part of the uncertainty in these models is related to the many parameters included in the models' equations. In this study, we quantify the parameter-based uncertainty in the simulation of sugar cane biomass production with the agro-LSM ORCHIDEE-STICS on a multi-regional approach with data from sites in Australia, La Reunion and Brazil. First, the main source of uncertainty for the output variables NPP, GPP, and sensible heat flux (SH) is determined through a screening of the main parameters of the model on a multi-site basis leading to the selection of a subset of most sensitive parameters causing most of the uncertainty. In a second step, a sensitivity analysis is carried out on the parameters selected from the screening analysis at a regional scale. For this, a Monte-Carlo sampling method associated with the calculation of Partial Ranked Correlation Coefficients is used. First, we quantify the sensitivity of the output variables to individual input parameters on a regional scale for two regions of intensive sugar cane cultivation in Australia and Brazil. Then, we quantify the overall uncertainty in the simulation's outputs propagated from the uncertainty in the input parameters. Seven parameters are identified by the screening procedure as driving most of the uncertainty in the agro-LSM ORCHIDEE-STICS model output at all sites. These parameters control photosynthesis (optimal temperature of photosynthesis, optimal carboxylation rate), radiation interception (extinction coefficient), root
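
    A hedged sketch of the Monte-Carlo screening with partial rank correlation coefficients (PRCC) follows. The "model" is a toy stand-in for an output such as NPP, and the parameter names are illustrative; only the PRCC bookkeeping itself (rank-transform, regress out the other inputs, correlate the residuals) is the point of the example.

        # Monte-Carlo sampling plus PRCC sensitivity ranking (illustrative sketch).
        import numpy as np
        from scipy.stats import rankdata

        rng = np.random.default_rng(5)
        names = ["T_opt", "Vcmax_opt", "extinction_coef", "root_depth"]
        X = rng.uniform(0.0, 1.0, size=(2000, len(names)))     # sampled parameter sets
        y = 2.0 * X[:, 0] + 0.5 * X[:, 1] ** 2 - 1.5 * X[:, 2] + rng.normal(0.0, 0.1, 2000)

        def prcc(X, y):
            Xr = np.column_stack([rankdata(X[:, j]) for j in range(X.shape[1])])
            yr = rankdata(y)
            coefs = []
            for j in range(X.shape[1]):
                others = np.column_stack([np.ones(len(yr)), np.delete(Xr, j, axis=1)])
                beta_x, *_ = np.linalg.lstsq(others, Xr[:, j], rcond=None)
                beta_y, *_ = np.linalg.lstsq(others, yr, rcond=None)
                rx, ry = Xr[:, j] - others @ beta_x, yr - others @ beta_y
                coefs.append(np.corrcoef(rx, ry)[0, 1])
            return np.array(coefs)

        for name, coef in zip(names, prcc(X, y)):
            print(f"{name:18s} PRCC = {coef:+.2f}")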

  14. Extracting Structure Parameters of Dimers for Molecular Tunneling Ionization Model

    NASA Astrophysics Data System (ADS)

    Song-Feng, Zhao; Fang, Huang; Guo-Li, Wang; Xiao-Xin, Zhou

    2016-03-01

    We determine structure parameters of the highest occupied molecular orbital (HOMO) of 27 dimers for the molecular tunneling ionization (so called MO-ADK) model of Tong et al. [Phys. Rev. A 66 (2002) 033402]. The molecular wave functions with correct asymptotic behavior are obtained by solving the time-independent Schrödinger equation with B-spline functions and molecular potentials which are numerically created using the density functional theory. We examine the alignment-dependent tunneling ionization probabilities from MO-ADK model for several molecules by comparing with the molecular strong-field approximation (MO-SFA) calculations. We show the molecular Perelomov–Popov–Terent'ev (MO-PPT) can successfully give the laser wavelength dependence of ionization rates (or probabilities). Based on the MO-PPT model, two diatomic molecules having valence orbital with antibonding systems (i.e., Cl2, Ne2) show strong ionization suppression when compared with their corresponding closest companion atoms. Supported by National Natural Science Foundation of China under Grant Nos. 11164025, 11264036, 11465016, 11364038, the Specialized Research Fund for the Doctoral Program of Higher Education of China under Grant No. 20116203120001, and the Basic Scientific Research Foundation for Institution of Higher Learning of Gansu Province

  15. Extracting Structure Parameters of Dimers for Molecular Tunneling Ionization Model

    NASA Astrophysics Data System (ADS)

    Zhao, Song-Feng; Huang, Fang; Wang, Guo-Li; Zhou, Xiao-Xin

    2016-03-01

    We determine structure parameters of the highest occupied molecular orbital (HOMO) of 27 dimers for the molecular tunneling ionization (so called MO-ADK) model of Tong et al. [Phys. Rev. A 66 (2002) 033402]. The molecular wave functions with correct asymptotic behavior are obtained by solving the time-independent Schrödinger equation with B-spline functions and molecular potentials which are numerically created using the density functional theory. We examine the alignment-dependent tunneling ionization probabilities from MO-ADK model for several molecules by comparing with the molecular strong-field approximation (MO-SFA) calculations. We show the molecular Perelomov-Popov-Terent'ev (MO-PPT) can successfully give the laser wavelength dependence of ionization rates (or probabilities). Based on the MO-PPT model, two diatomic molecules having valence orbital with antibonding systems (i.e., Cl2, Ne2) show strong ionization suppression when compared with their corresponding closest companion atoms. Supported by National Natural Science Foundation of China under Grant Nos. 11164025, 11264036, 11465016, 11364038, the Specialized Research Fund for the Doctoral Program of Higher Education of China under Grant No. 20116203120001, and the Basic Scientific Research Foundation for Institution of Higher Learning of Gansu Province

  16. Modeling Network Intrusion Detection System Using Feature Selection and Parameters Optimization

    NASA Astrophysics Data System (ADS)

    Kim, Dong Seong; Park, Jong Sou

    Previous approaches for modeling Intrusion Detection Systems (IDS) have been twofold: improving detection model(s) in terms of (i) feature selection of audit data through wrapper and filter methods and (ii) parameter optimization of the detection model design, based on classification, clustering algorithms, etc. In this paper, we present three approaches to model IDS in the context of feature selection and parameter optimization: First, we present Fusion of Genetic Algorithm (GA) and Support Vector Machines (SVM) (FuGAS), which employs combinations of GA and SVM through genetic operation and is capable of building an optimal detection model with only selected important features and optimal parameter values. Second, we present Correlation-based Hybrid Feature Selection (CoHyFS), which utilizes a filter method in conjunction with GA for feature selection in order to reduce long training time. Third, we present Simultaneous Intrinsic Model Identification (SIMI), which adopts Random Forest (RF) and shows better intrusion detection rates and feature selection results, along with no additional computational overheads. We show the experimental results and analysis of the three approaches on the KDD 1999 intrusion detection datasets.

  17. Sound propagation and absorption in foam - A distributed parameter model.

    NASA Technical Reports Server (NTRS)

    Manson, L.; Lieberman, S.

    1971-01-01

    Liquid-base foams are highly effective sound absorbers. A better understanding of the mechanisms of sound absorption in foams was sought by exploration of a mathematical model of bubble pulsation and coupling and the development of a distributed-parameter mechanical analog. A solution by electric-circuit analogy was thus obtained and transmission-line theory was used to relate the physical properties of the foams to the characteristic impedance and propagation constants of the analog transmission line. Comparison of measured physical properties of the foam with values obtained from measured acoustic impedance and propagation constants and the transmission-line theory showed good agreement. We may therefore conclude that the sound propagation and absorption mechanisms in foam are accurately described by the resonant response of individual bubbles coupled to neighboring bubbles.

  18. Verification Techniques for Parameter Selection and Bayesian Model Calibration Presented for an HIV Model

    NASA Astrophysics Data System (ADS)

    Wentworth, Mami Tonoe

    Uncertainty quantification plays an important role when making predictive estimates of model responses. In this context, uncertainty quantification is defined as quantifying and reducing uncertainties, and the objective is to quantify uncertainties in parameters, models and measurements, and propagate the uncertainties through the model, so that one can make a predictive estimate with quantified uncertainties. Two of the aspects of uncertainty quantification that must be performed prior to propagating uncertainties are model calibration and parameter selection. There are several efficient techniques for these processes; however, the accuracy of these methods is often not verified. This is the motivation for our work, and in this dissertation, we present and illustrate verification frameworks for model calibration and parameter selection in the context of biological and physical models. First, HIV models, developed and improved by [2, 3, 8], describe the viral infection dynamics of HIV disease. These are also used to make predictive estimates of viral loads and T-cell counts and to construct an optimal control for drug therapy. Estimating input parameters is an essential step prior to uncertainty quantification. However, not all the parameters are identifiable, implying that they cannot be uniquely determined by the observations. These unidentifiable parameters can be partially removed by performing parameter selection, a process in which parameters that have minimal impact on the model response are determined. We provide verification techniques for Bayesian model calibration and parameter selection for an HIV model. As an example of a physical model, we employ a heat model with experimental measurements presented in [10]. A steady-state heat model represents prototypical behavior for the heat conduction and diffusion processes involved in a thermal-hydraulic model, which is a part of nuclear reactor models. We employ this simple heat model to illustrate verification

  19. Adapting isostatic microbial growth parameters into non-isostatic models for use in dynamic ecosystems

    NASA Astrophysics Data System (ADS)

    Spangler, J.; Schulz, C. J.; Childers, G. W.

    2009-12-01

    Modeling microbial respiration and growth is an important tool for understanding many geochemical systems. The estimation of growth parameters relies on fitting experimental data to a selected model, such as the Monod equation or some variation, most often under batch or continuous culture conditions. While continuous culture conditions can be analogous to some natural environments, this is often not the case. More often, microorganisms are subject to fluctuating temperature, substrate concentrations, pH, water activity, and inhibitory compounds, to name a few. Microbial growth estimation under non-isothermal conditions has been possible through the use of numerical solutions and has seen use in the field of food microbiology. In this study, numerical solutions were used to extend growth models to non-isostatic conditions using momentary growth rate estimates. Using a model organism common in wastewater (Paracoccus denitrificans), growth and respiration rate parameters were estimated under varying static conditions (temperature, pH, electron donor/acceptor concentrations) and used to construct a non-isostatic growth model. After construction of the model, additional experiments were conducted to validate the model. These non-isostatic models hold the potential for allowing the prediction of cell biomass and respiration rates under a diverse array of conditions. By not restricting models to constant environmental conditions, the general applicability of the model can be greatly improved.
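
    A hedged sketch of the momentary-growth-rate idea follows: Monod kinetics are integrated through time while the maximum growth rate is re-evaluated at each instant from a fluctuating temperature, rather than held at the value estimated under static conditions. The cardinal temperatures, yield and kinetic constants are assumed values for illustration only.

        # Monod growth under a fluctuating temperature via momentary rate evaluation (sketch).
        import numpy as np
        from scipy.integrate import solve_ivp

        mu_opt, Ks, Y = 0.45, 2.0, 0.4            # 1/h, mg/L, biomass yield (assumed)
        T_opt, T_min, T_max = 30.0, 5.0, 42.0     # cardinal temperatures (assumed)

        def mu_max(T):
            # Simple cardinal-temperature correction of the statically estimated mu_opt.
            if T <= T_min or T >= T_max:
                return 0.0
            return mu_opt * (T - T_min) * (T_max - T) / ((T_opt - T_min) * (T_max - T_opt))

        def temperature(t):
            return 25.0 + 8.0 * np.sin(2.0 * np.pi * t / 24.0)   # diurnal cycle, deg C

        def rhs(t, state):
            X, S = state                                          # biomass, substrate
            mu = mu_max(temperature(t)) * S / (Ks + S)            # momentary growth rate
            return [mu * X, -mu * X / Y]

        sol = solve_ivp(rhs, (0.0, 72.0), [0.05, 50.0], max_step=0.25)
        print("final biomass and substrate:", sol.y[:, -1])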

  20. Small-signal model parameter extraction for AlGaN/GaN HEMT

    NASA Astrophysics Data System (ADS)

    Le, Yu; Yingkui, Zheng; Sheng, Zhang; Lei, Pang; Ke, Wei; Xiaohua, Ma

    2016-03-01

    A new 22-element small signal equivalent circuit model for the AlGaN/GaN high electron mobility transistor (HEMT) is presented. Compared with the traditional equivalent circuit model, the gate forward and breakdown conductances (Ggsf and Ggdf) are introduced into the new model to characterize the gate leakage current. Additionally, for the new gate-connected field plate and the source-connected field plate of the device, an improved method for extracting the parasitic capacitances is proposed, which can be applied to the small-signal extraction for an asymmetric device. To verify the model, S-parameters are obtained from the modeling and measurements. The good agreement between the measured and the simulated results indicates that this model is accurate, stable and comparatively clear in physical significance.

  1. Evaluation of structural equation mixture models: Parameter estimates and correct class assignment

    PubMed Central

    Tueller, Stephen; Lubke, Gitta

    2009-01-01

    Structural Equation Mixture Models (SEMMs) are latent class models that permit the estimation of a structural equation model within each class. Fitting SEMMs is illustrated using data from one wave of the Notre Dame Longitudinal Study of Aging. Based on the model used in the illustration, SEMM parameter estimation and correct class assignment are investigated in a large-scale simulation study. Design factors of the simulation study are (im)balanced class proportions, (im)balanced factor variances, sample size, and class separation. We compare the fit of models with correct and misspecified within-class structural relations. In addition, we investigate the potential to fit SEMMs with binary indicators. The structure of within-class distributions can be recovered under a wide variety of conditions, indicating the general potential and flexibility of SEMMs to test complex within-class models. Correct class assignment is limited. PMID:20582328

  2. Use of generalised additive models to categorise continuous variables in clinical prediction

    PubMed Central

    2013-01-01

    Background: In medical practice many, essentially continuous, clinical parameters tend to be categorised by physicians for ease of decision-making. Indeed, categorisation is a common practice both in medical research and in the development of clinical prediction rules, particularly where the ensuing models are to be applied in daily clinical practice to support clinicians in the decision-making process. Since the number of categories into which a continuous predictor must be categorised depends partly on the relationship between the predictor and the outcome, the need for more than two categories must be borne in mind. Methods: We propose a categorisation methodology for clinical-prediction models, using Generalised Additive Models (GAMs) with P-spline smoothers to determine the relationship between the continuous predictor and the outcome. The proposed method consists of creating at least one average-risk category along with high- and low-risk categories based on the GAM smooth function. We applied this methodology to a prospective cohort of patients with exacerbated chronic obstructive pulmonary disease. The predictors selected were respiratory rate and partial pressure of carbon dioxide in the blood (PCO2), and the response variable was poor evolution. An additive logistic regression model was used to show the relationship between the covariates and the dichotomous response variable. The proposed categorisation was compared to the continuous predictor as the best option, using the AIC and AUC evaluation parameters. The sample was divided into derivation (60%) and validation (40%) samples. The first was used to obtain the cut points while the second was used to validate the proposed methodology. Results: The three-category proposal for the respiratory rate was ≤20; (20,24]; >24, for which the following values were obtained: AIC=314.5 and AUC=0.638. The respective values for the continuous predictor were AIC=317.1 and AUC=0.634, with no statistically
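
    A hedged sketch of the categorisation idea follows, assuming the pygam package is available: a logistic GAM with spline smoothers is fitted, the partial effect of one continuous predictor is inspected, and candidate cut points are placed where the smooth crosses its average level. The simulated data, column order and cut-point rule are illustrative, not the cohort or exact procedure of the paper.

        # GAM-based cut-point search for a continuous predictor (illustrative sketch, assumes pygam).
        import numpy as np
        from pygam import LogisticGAM, s

        rng = np.random.default_rng(6)
        resp_rate = rng.uniform(10.0, 40.0, 800)
        pco2 = rng.uniform(30.0, 80.0, 800)
        logit = 0.004 * (resp_rate - 22.0) ** 2 + 0.03 * (pco2 - 45.0) - 2.0
        y = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))       # simulated poor-evolution outcome

        gam = LogisticGAM(s(0) + s(1)).fit(np.column_stack([resp_rate, pco2]), y)

        grid = gam.generate_X_grid(term=0)
        effect = gam.partial_dependence(term=0, X=grid)
        # Candidate cut points: where the smooth crosses its mean (average-risk level).
        cuts = grid[np.where(np.diff(np.sign(effect - effect.mean())))[0], 0]
        print("candidate cut points for respiratory rate:", np.round(cuts, 1))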

  3. Modeling Self-Ionized Plasma Wakefield Acceleration for Afterburner Parameters Using QuickPIC

    SciTech Connect

    Zhou, M.; Clayton, C.E.; Decyk, V.K.; Huang, C.; Johnson, D.K.; Joshi, C.; Lu, W.; Mori, W.B.; Tsung, F.S.; Deng, S.; Katsouleas, T.; Muggli, P.; Oz, E.; Decker, F.-J.; Iverson, R.; O'Connel, C.; Walz, D.; /SLAC

    2006-01-25

    For the parameters envisaged in possible afterburner stages [1] of a plasma wakefield accelerator (PWFA), the self-fields of the particle beam can be intense enough to tunnel ionize some neutral gases. Tunnel ionization has been investigated as a way for the beam itself to create the plasma, and the wakes generated may differ from those generated in pre-ionized plasmas [2,3]. However, it is not practical to model the whole stage of PWFA with afterburner parameters using the models described in [2] and [3]. Here we describe the addition of a tunnel ionization package using the ADK model into QuickPIC, a highly efficient quasi-static particle in cell (PIC) code which can model a PWFA with afterburner parameters. Comparison between results from OSIRIS (a full PIC code with ionization) and from QuickPIC with the ionization package shows good agreement. Preliminary results using parameters relevant to the E164X experiment and the upcoming E167 experiment at SLAC are shown.

  4. Inverse Modeling of Hydrologic Parameters Using Surface Flux and Streamflow Observations in the Community Land Model

    NASA Astrophysics Data System (ADS)

    Sun, Y.; Hou, Z.; Huang, M.; Tian, F.; Leung, L.

    2012-12-01

    This study aims at demonstrating the possibility of calibrating hydrologic parameters using surface flux and streamflow observations in version 4 of the Community Land Model (CLM4). Previously we showed that surface flux and streamflow calculations are sensitive to several key hydrologic parameters in CLM4, and discussed the necessity and possibility of parameter calibration. In this study, we evaluate the performance of several different inversion strategies, including least-square fitting, quasi-Monte-Carlo (QMC) sampling-based Bayesian updating, and a Markov-Chain Monte-Carlo (MCMC) Bayesian inversion approach. The parameters to be calibrated include the surface and subsurface runoff generation parameters and vadose zone soil water parameters. We discuss the effects of surface flux and streamflow observations on the inversion results and compare their consistency and reliability using both monthly and daily observations at various flux tower and MOPEX sites. We find that the sampling-based stochastic inversion approaches behaved consistently - as more information comes in, the predictive intervals of the calibrated parameters as well as the misfits between the calculated and observed responses decrease. In general, the parameters that are identified to be significant through sensitivity analyses and statistical tests are better calibrated than those with weak or nonlinear impacts on flux or streamflow observations. We also evaluated the possibility of probabilistic model averaging for more consistent parameter estimation.

  5. Are subject-specific musculoskeletal models robust to the uncertainties in parameter identification?

    PubMed

    Valente, Giordano; Pitto, Lorenzo; Testi, Debora; Seth, Ajay; Delp, Scott L; Stagni, Rita; Viceconti, Marco; Taddei, Fulvia

    2014-01-01

    Subject-specific musculoskeletal modeling can be applied to study musculoskeletal disorders, allowing inclusion of personalized anatomy and properties. Independent of the tools used for model creation, there are unavoidable uncertainties associated with parameter identification, whose effect on model predictions is still not fully understood. The aim of the present study was to analyze the sensitivity of subject-specific model predictions (i.e., joint angles, joint moments, muscle and joint contact forces) during walking to the uncertainties in the identification of body landmark positions, maximum muscle tension and musculotendon geometry. To this aim, we created an MRI-based musculoskeletal model of the lower limbs, defined as a 7-segment, 10-degree-of-freedom articulated linkage, actuated by 84 musculotendon units. We then performed a Monte-Carlo probabilistic analysis perturbing model parameters according to their uncertainty, and solving a typical inverse dynamics and static optimization problem using 500 models that included the different sets of perturbed variable values. Model creation and gait simulations were performed by using freely available software that we developed to standardize the process of model creation, integrate with OpenSim and create probabilistic simulations of movement. The uncertainties in input variables had a moderate effect on model predictions, as muscle and joint contact forces showed maximum standard deviation of 0.3 times body-weight and maximum range of 2.1 times body-weight. In addition, the output variables significantly correlated with few input variables (up to 7 out of 312) across the gait cycle, including the geometry definition of larger muscles and the maximum muscle tension in limited gait portions. Although we found subject-specific models not markedly sensitive to parameter identification, researchers should be aware of the model precision in relation to the intended application. In fact, force predictions could be

  6. Are Subject-Specific Musculoskeletal Models Robust to the Uncertainties in Parameter Identification?

    PubMed Central

    Valente, Giordano; Pitto, Lorenzo; Testi, Debora; Seth, Ajay; Delp, Scott L.; Stagni, Rita; Viceconti, Marco; Taddei, Fulvia

    2014-01-01

    Subject-specific musculoskeletal modeling can be applied to study musculoskeletal disorders, allowing inclusion of personalized anatomy and properties. Independent of the tools used for model creation, there are unavoidable uncertainties associated with parameter identification, whose effect on model predictions is still not fully understood. The aim of the present study was to analyze the sensitivity of subject-specific model predictions (i.e., joint angles, joint moments, muscle and joint contact forces) during walking to the uncertainties in the identification of body landmark positions, maximum muscle tension and musculotendon geometry. To this aim, we created an MRI-based musculoskeletal model of the lower limbs, defined as a 7-segment, 10-degree-of-freedom articulated linkage, actuated by 84 musculotendon units. We then performed a Monte-Carlo probabilistic analysis perturbing model parameters according to their uncertainty, and solving a typical inverse dynamics and static optimization problem using 500 models that included the different sets of perturbed variable values. Model creation and gait simulations were performed by using freely available software that we developed to standardize the process of model creation, integrate with OpenSim and create probabilistic simulations of movement. The uncertainties in input variables had a moderate effect on model predictions, as muscle and joint contact forces showed maximum standard deviation of 0.3 times body-weight and maximum range of 2.1 times body-weight. In addition, the output variables significantly correlated with few input variables (up to 7 out of 312) across the gait cycle, including the geometry definition of larger muscles and the maximum muscle tension in limited gait portions. Although we found subject-specific models not markedly sensitive to parameter identification, researchers should be aware of the model precision in relation to the intended application. In fact, force predictions could be

  7. Significance of settling model structures and parameter subsets in modelling WWTPs under wet-weather flow and filamentous bulking conditions.

    PubMed

    Ramin, Elham; Sin, Gürkan; Mikkelsen, Peter Steen; Plósz, Benedek Gy

    2014-10-15

    Current research focuses on predicting and mitigating the impacts of high hydraulic loadings on centralized wastewater treatment plants (WWTPs) under wet-weather conditions. The maximum permissible inflow to WWTPs depends not only on the settleability of activated sludge in secondary settling tanks (SSTs) but also on the hydraulic behaviour of SSTs. The present study investigates the impacts of ideal and non-ideal flow (dry and wet weather) and settling (good settling and bulking) boundary conditions on the sensitivity of WWTP model outputs to uncertainties intrinsic to the one-dimensional (1-D) SST model structures and parameters. We identify the critical sources of uncertainty in WWTP models through global sensitivity analysis (GSA) using the Benchmark simulation model No. 1 in combination with first- and second-order 1-D SST models. The results obtained illustrate that the contribution of settling parameters to the total variance of the key WWTP process outputs significantly depends on the influent flow and settling conditions. The magnitude of the impact is found to vary, depending on which type of 1-D SST model is used. Therefore, we identify and recommend potential parameter subsets for WWTP model calibration, and propose optimal choice of 1-D SST models under different flow and settling boundary conditions. Additionally, the hydraulic parameters in the second-order SST model are found significant under dynamic wet-weather flow conditions. These results highlight the importance of developing a more mechanistic based flow-dependent hydraulic sub-model in second-order 1-D SST models in the future. PMID:25003213
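
    As a hedged illustration of how the variance contribution of settling parameters can be quantified, the sketch below runs a Saltelli sample and Sobol decomposition, assuming the SALib package, on a toy stand-in for a settler output. The parameter names, ranges and response function are invented and do not reproduce the Benchmark simulation model or the 1-D SST models discussed above.

        # Variance-based global sensitivity analysis with Saltelli/Sobol (sketch, assumes SALib).
        import numpy as np
        from SALib.sample import saltelli
        from SALib.analyze import sobol

        problem = {
            "num_vars": 3,
            "names": ["settling_velocity", "hindered_settling_coef", "dispersion_coef"],
            "bounds": [[100.0, 500.0], [0.3, 0.7], [0.0, 2.0]],
        }

        X = saltelli.sample(problem, 1024)                      # N*(2*D+2) parameter sets

        def toy_effluent_solids(p):
            v0, rh, disp = p
            return 50.0 / v0 + 5.0 * rh ** 2 + 0.5 * disp * rh  # arbitrary surrogate response

        Y = np.apply_along_axis(toy_effluent_solids, 1, X)
        Si = sobol.analyze(problem, Y)
        for name, s1, st in zip(problem["names"], Si["S1"], Si["ST"]):
            print(f"{name:24s} first-order = {s1:.2f}  total = {st:.2f}")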

  8. Bayesian Model Comparison and Parameter Inference in Systems Biology Using Nested Sampling

    PubMed Central

    Pullen, Nick; Morris, Richard J.

    2014-01-01

    Inferring parameters for models of biological processes is a current challenge in systems biology, as is the related problem of comparing competing models that explain the data. In this work we apply Skilling's nested sampling to address both of these problems. Nested sampling is a Bayesian method for exploring parameter space that transforms a multi-dimensional integral to a 1D integration over likelihood space. This approach focusses on the computation of the marginal likelihood or evidence. The ratio of evidences of different models leads to the Bayes factor, which can be used for model comparison. We demonstrate how nested sampling can be used to reverse-engineer a system's behaviour whilst accounting for the uncertainty in the results. The effect of missing initial conditions of the variables as well as unknown parameters is investigated. We show how the evidence and the model ranking can change as a function of the available data. Furthermore, the addition of data from extra variables of the system can deliver more information for model comparison than increasing the data from one variable, thus providing a basis for experimental design. PMID:24523891
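
    The sketch below illustrates evidence-based model comparison with nested sampling, assuming the dynesty package: two toy models (a fixed-rate and a free-rate exponential decay) are fitted to the same synthetic data and ranked by the log Bayes factor computed from their evidences. The data, priors and models are invented and are not the systems-biology examples of the paper.

        # Nested-sampling evidences and a Bayes factor (illustrative sketch, assumes dynesty).
        import numpy as np
        import dynesty

        rng = np.random.default_rng(7)
        t = np.linspace(0.0, 10.0, 50)
        data = 2.0 * np.exp(-0.3 * t) + rng.normal(0.0, 0.1, t.size)

        def loglike_1(theta):                 # model 1: amplitude free, decay rate fixed
            a, = theta
            return -0.5 * np.sum((data - a * np.exp(-0.3 * t)) ** 2 / 0.1 ** 2)

        def loglike_2(theta):                 # model 2: amplitude and decay rate free
            a, k = theta
            return -0.5 * np.sum((data - a * np.exp(-k * t)) ** 2 / 0.1 ** 2)

        prior_1 = lambda u: 5.0 * u           # uniform priors mapped from the unit cube
        prior_2 = lambda u: np.array([5.0 * u[0], u[1]])

        logz = []
        for loglike, prior, ndim in [(loglike_1, prior_1, 1), (loglike_2, prior_2, 2)]:
            sampler = dynesty.NestedSampler(loglike, prior, ndim)
            sampler.run_nested(print_progress=False)
            logz.append(sampler.results.logz[-1])

        print("log Bayes factor (model 2 vs model 1):", logz[1] - logz[0])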

  9. A novel parameter for predicting arterial fusion and ablation in finite element models

    NASA Astrophysics Data System (ADS)

    Fankell, Douglas; Kramer, Eric; Taylor, Kenneth; Ferguson, Virginia; Rentschler, Mark E.

    2015-03-01

    Tissue fusion devices apply heat and pressure to ligate or ablate blood vessels during surgery. Although this process is widely used, a predictive finite element (FE) model incorporating both structural mechanics and heat transfer has not been developed, limiting improvements to empirical evidence. This work presents the development of a novel damage parameter, which incorporates stress, water content and temperature, and demonstrates its application in a FE model. A FE model, using the Holzapfel-Gasser-Ogden strain energy function to represent the structural mechanics and equations developed by Cezo to model water content and heat transfer, was created to simulate the fusion or ablation of a porcine splenic artery. Using state variables, the stresses, temperature and water content are recorded and combined to create a single parameter at each integration point. The parameter is then compared to a critical value (determined through experiments). If the critical value is reached, the element loses all strength. If the value is not reached, no change occurs. Little experimental data exists for validation, but the resulting stresses, temperatures and water content fall within ranges predicted by prior work. Due to the lack of published data, additional experimental studies are being conducted to rigorously validate and accurately determine the critical value. Ultimately, a novel method for demonstrating tissue damage and fusion in a FE model is presented, providing the first step towards in-depth FE models simulating fusion and ablation of arteries.

  10. Bayesian model comparison and parameter inference in systems biology using nested sampling.

    PubMed

    Pullen, Nick; Morris, Richard J

    2014-01-01

    Inferring parameters for models of biological processes is a current challenge in systems biology, as is the related problem of comparing competing models that explain the data. In this work we apply Skilling's nested sampling to address both of these problems. Nested sampling is a Bayesian method for exploring parameter space that transforms a multi-dimensional integral to a 1D integration over likelihood space. This approach focuses on the computation of the marginal likelihood or evidence. The ratio of evidences of different models leads to the Bayes factor, which can be used for model comparison. We demonstrate how nested sampling can be used to reverse-engineer a system's behaviour whilst accounting for the uncertainty in the results. The effect of missing initial conditions of the variables as well as unknown parameters is investigated. We show how the evidence and the model ranking can change as a function of the available data. Furthermore, the addition of data from extra variables of the system can deliver more information for model comparison than increasing the data from one variable, thus providing a basis for experimental design. PMID:24523891

  11. Parameter Estimation and Parameterization Uncertainty Using Bayesian Model Averaging

    NASA Astrophysics Data System (ADS)

    Tsai, F. T.; Li, X.

    2007-12-01

    This study proposes Bayesian model averaging (BMA) to address parameter estimation uncertainty arising from non-uniqueness in parameterization methods. BMA provides a means of incorporating multiple parameterization methods for prediction through the law of total probability, with which an ensemble average of hydraulic conductivity distribution is obtained. Estimation uncertainty is described by the BMA variances, which contain variances within and between parameterization methods. BMA shows that considering more parameterization methods tends to increase estimation uncertainty and that estimation uncertainty is always underestimated when a single parameterization method is used. Two major problems in applying BMA to hydraulic conductivity estimation using a groundwater inverse method are discussed in the study. The first problem is the use of posterior probabilities in BMA, which tends to single out one best method and discard other good methods. This problem arises from Occam's window, which only accepts models in a very narrow range. We propose a variance window to replace Occam's window to cope with this problem. The second problem is the use of the Kashyap information criterion (KIC), which makes BMA tend to prefer highly uncertain parameterization methods due to the consideration of the Fisher information matrix. We found that the Bayesian information criterion (BIC) is a good approximation to KIC and is able to avoid controversial results. We applied BMA to hydraulic conductivity estimation in the 1,500-foot sand aquifer in East Baton Rouge Parish, Louisiana.
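
    The weighting and variance bookkeeping behind BMA can be sketched in a few lines; the example below uses BIC-approximated model weights and splits the total BMA variance into within-method and between-method parts. The per-method estimates, variances and BIC values are invented numbers for illustration only.

        # BIC-approximated BMA weights and within/between variance split (illustrative sketch).
        import numpy as np

        # Hypothetical results from three parameterization methods at one location:
        K_mean = np.array([12.0, 15.0, 9.5])      # estimated hydraulic conductivity (m/d)
        K_var = np.array([4.0, 6.5, 3.0])         # within-method estimation variance
        bic = np.array([210.3, 208.9, 214.1])     # BIC of each parameterization method

        w = np.exp(-0.5 * (bic - bic.min()))
        w = w / w.sum()                           # posterior model weights (BIC approximation)

        K_bma = w @ K_mean                        # ensemble (BMA) estimate
        within = w @ K_var                        # average within-method variance
        between = w @ (K_mean - K_bma) ** 2       # spread between methods
        print(f"BMA estimate = {K_bma:.2f}, variance = {within + between:.2f} "
              f"(within {within:.2f} + between {between:.2f})")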

  12. Dynamic parameters in models of atmospheric vortex structures

    NASA Astrophysics Data System (ADS)

    Dobryshman, E. M.; Kochina, V. G.; Letunova, T. A.

    2013-09-01

    Vortex simulation and the computation of fields of dynamic parameters of vortex structures (velocity, rotor velocity, and helicity) are carried out with the use of exact hydrodynamic equations in a cylindrical coordinate system. Components of centripetal and Coriolis accelerations are taken into account in the initial equations. Internal and external solutions are defined. Internal solutions ignore the disturbances of the pressure field, but they are considered in external solutions. The simulation is carried out so that the effect of accounting for spatial coordinates on the structure of the above fields is pronounced. It is shown that the initial kinetic energy of rotating motion transforms into the kinetic energy of radial and vertical velocity components in models with centripetal acceleration. In models with Coriolis acceleration, the Rossby effect is clearly pronounced. The method of an "inverse problem" is used for finding external solutions, i.e., reconstruction of the pressure field at specified velocity components. Computations have shown that tangential components mainly contribute to the velocity and helicity vortex moduli at the initial stage.

  13. FEM numerical model study of electrosurgical dispersive electrode design parameters.

    PubMed

    Pearce, John A

    2015-08-01

    Electrosurgical dispersive electrodes must safely carry the surgical current in monopolar procedures, such as those used in cutting, coagulation and radio frequency ablation (RFA). Of these, RFA represents the most stringent design constraint since ablation currents are often more than 1 to 2 A rms (continuous) for several minutes depending on the size of the lesion desired and local heat transfer conditions at the applicator electrode. This stands in contrast to standard surgical activations, which are intermittent, and usually less than 1 A rms, but for several seconds at a time. Dispersive electrode temperature rise is also critically determined by the sub-surface skin anatomy, thicknesses of the subcutaneous and supra-muscular fat, etc. Currently, we lack fundamental engineering design criteria that provide an estimating framework for preliminary designs of these electrodes. The lack of a fundamental design framework means that a large number of experiments must be conducted in order to establish a reasonable design. Previously, an attempt to correlate maximum temperatures in experimental work with the average current density-time product failed to yield a good match. This paper develops and applies a new measure of an electrode stress parameter that correlates well with both the previous experimental data and with numerical models of other electrode shapes. The finite element method (FEM) model work was calibrated against experimental RF lesions in porcine skin to establish the fundamental principle underlying dispersive electrode performance. The results can be used in preliminary electrode design calculations, experiment series design and performance evaluation. PMID:26736814

  14. Mechanical models for insect locomotion: stability and parameter studies

    NASA Astrophysics Data System (ADS)

    Schmitt, John; Holmes, Philip

    2001-08-01

    We extend the analysis of simple models for the dynamics of insect locomotion in the horizontal plane, developed in [Biol. Cybern. 83 (6) (2000) 501] and applied to cockroach running in [Biol. Cybern. 83 (6) (2000) 517]. The models consist of a rigid body with a pair of effective legs (each representing the insect’s support tripod) placed intermittently in ground contact. The forces generated may be prescribed as functions of time, or developed by compression of a passive leg spring. We find periodic gaits in both cases, and show that prescribed (sinusoidal) forces always produce unstable gaits, unless they are allowed to rotate with the body during stride, in which case a (small) range of physically unrealistic stable gaits does exist. Stability is much more robust in the passive spring case, in which angular momentum transfer at touchdown/liftoff can result in convergence to asymptotically straight motions with bounded yaw, fore-aft and lateral velocity oscillations. Using a non-dimensional formulation of the equations of motion, we also develop exact and approximate scaling relations that permit derivation of gait characteristics for a range of leg stiffnesses, lengths, touchdown angles, body masses and inertias, from a single gait family computed at ‘standard’ parameter values.

  15. Sensitivity of numerical dispersion modeling to explosive source parameters

    SciTech Connect

    Baskett, R.L. ); Cederwall, R.T. )

    1991-02-13

    The calculation of downwind concentrations from non-traditional sources, such as explosions, provides unique challenges to dispersion models. The US Department of Energy has assigned the Atmospheric Release Advisory Capability (ARAC) at the Lawrence Livermore National Laboratory (LLNL) the task of estimating the impact of accidental radiological releases to the atmosphere anywhere in the world. Our experience includes responses to over 25 incidents in the past 16 years, and about 150 exercises a year. Examples of responses to explosive accidents include the 1980 Titan 2 missile fuel explosion near Damascus, Arkansas and the hydrogen gas explosion in the 1986 Chernobyl nuclear power plant accident. Based on judgment and experience, we frequently estimate the source geometry and the amount of toxic material aerosolized as well as its particle size distribution. To expedite our real-time response, we developed some automated algorithms and default assumptions about several potential sources. It is useful to know how well these algorithms perform against real-world measurements and how sensitive our dispersion model is to the potential range of input values. In this paper we present the algorithms we use to simulate explosive events, compare these methods with limited field data measurements, and analyze their sensitivity to input parameters. 14 refs., 7 figs., 2 tabs.

  16. Regionalization of Parameters of the Continuous Rainfall-Runoff model Based on Bayesian Generalized Linear Model

    NASA Astrophysics Data System (ADS)

    Kim, Tae-Jeong; Kim, Ki-Young; Shin, Dong-Hoon; Kwon, Hyun-Han

    2015-04-01

    It has been widely acknowledged that the appropriate simulation of natural streamflow at ungauged sites is one of the fundamental challenges facing the hydrology community. In particular, the key to reliable runoff simulation in ungauged basins is a reliable rainfall-runoff model and its parameter estimation. In general, parameter estimation in rainfall-runoff models is a complex issue due to insufficient hydrologic data. This study aims to regionalize the parameters of the continuous rainfall-runoff model in conjunction with Bayesian statistical techniques to facilitate uncertainty analysis. First, this study uses the Bayesian Markov Chain Monte Carlo scheme for the Sacramento rainfall-runoff model, which has been widely used around the world. The Sacramento model is calibrated against daily runoff observations, thirteen parameters of the model are optimized, and posterior distributions for each parameter are derived. Second, we applied a Bayesian generalized linear regression model to the set of calibrated parameters and basin characteristics (e.g. area and slope) to obtain functional relationships between pairs of variables. The proposed model was validated in two gauged watersheds in accordance with efficiency criteria such as the Nash-Sutcliffe efficiency, coefficient of efficiency, index of agreement and coefficient of correlation. Future work will focus on uncertainty analysis to fully incorporate propagation of the uncertainty into the regionalization framework. KEYWORDS: Ungauged, Parameter, Sacramento, Generalized linear model, Regionalization Acknowledgement This research was supported by a Grant (13SCIPA01) from the Smart Civil Infrastructure Research Program funded by the Ministry of Land, Infrastructure and Transport (MOLIT) of the Korean government and the Korea Agency for Infrastructure Technology Advancement (KAIA).
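
    A hedged sketch of the regionalization step follows: one calibrated rainfall-runoff parameter is regressed on basin attributes with a conjugate Gaussian Bayesian linear regression, so the posterior and the predictive distribution at an ungauged basin are available in closed form. The gauged-basin data, attribute set and prior variances are synthetic and purely illustrative, not the Sacramento calibration results of the study.

        # Bayesian linear regression from basin attributes to a calibrated parameter (sketch).
        import numpy as np

        rng = np.random.default_rng(8)
        n_basins = 40
        area = rng.uniform(50.0, 2000.0, n_basins)      # km^2 (synthetic)
        slope = rng.uniform(0.01, 0.3, n_basins)        # dimensionless (synthetic)
        param = 0.4 * np.log(area) + 3.0 * slope + rng.normal(0.0, 0.2, n_basins)

        X = np.column_stack([np.ones(n_basins), np.log(area), slope])
        sigma2, tau2 = 0.05, 10.0                        # noise and prior variances (assumed)

        # Posterior of the regression coefficients: N(mu, Sigma) under the conjugate model.
        Sigma = np.linalg.inv(X.T @ X / sigma2 + np.eye(3) / tau2)
        mu = Sigma @ X.T @ param / sigma2

        # Predictive distribution of the parameter at an ungauged basin.
        x_new = np.array([1.0, np.log(600.0), 0.12])
        pred_mean = x_new @ mu
        pred_var = sigma2 + x_new @ Sigma @ x_new
        print(f"regionalized parameter: {pred_mean:.2f} +/- {np.sqrt(pred_var):.2f}")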

  17. Adaptation of the pore diffusion model to describe multi-addition batch uptake high-throughput screening experiments.

    PubMed

    Traylor, Steven J; Xu, Xuankuo; Li, Yi; Jin, Mi; Li, Zheng Jian

    2014-11-14

    Equilibrium isotherm and kinetic mass transfer measurements are critical to mechanistic modeling of binding and elution behavior within a chromatographic column. However, traditional methods of measuring these parameters are impractically time- and labor-intensive. While advances in high-throughput robotic liquid handling systems have created time- and labor-saving methods of performing kinetic and equilibrium measurements of proteins on chromatographic resins in a 96-well plate format, these techniques continue to be limited by physical constraints on protein addition, incubation and separation times; the available concentration of protein stocks and process pools; and practical constraints on resin and fluid volumes in the 96-well format. In this study, a novel technique for measuring protein uptake kinetics (multi-addition batch uptake) has been developed to address some of these limitations during high-throughput batch uptake kinetic measurements. This technique uses sequential additions of protein stock to chromatographic resin in a 96-well plate and the subsequent removal of each addition by centrifugation or vacuum separation. The pore diffusion model was adapted here to model multi-addition batch uptake and was tested and compared with traditional batch uptake measurements of an Fc-fusion protein on an anion exchange resin. Acceptable agreement between the two techniques is achieved for the two solution conditions investigated here. In addition, a sensitivity analysis of the model to the physical inputs is presented and the advantages and limitations of the multi-addition batch uptake technique are explored. PMID:25311484
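
    The sketch below mimics the multi-addition protocol with a deliberately simplified lumped-rate (linear driving force) kinetic model and a Langmuir isotherm rather than the full pore diffusion model; the rate constant, isotherm constants, volumes, and addition concentrations are all illustrative assumptions, not values from the study.

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    # Illustrative constants (not fitted values)
    K_LANG, Q_MAX = 0.5, 80.0      # Langmuir constant (mL/mg), capacity (mg/mL resin)
    K_LDF = 0.05                   # lumped mass-transfer rate (1/s)
    V_LIQ, V_RESIN = 0.3, 0.05     # liquid and resin volumes per well (mL)

    def q_eq(c):
        """Langmuir equilibrium resin load as a function of liquid concentration."""
        return Q_MAX * K_LANG * c / (1.0 + K_LANG * c)

    def rhs(t, y):
        c, q = y
        dq = K_LDF * (q_eq(c) - q)        # linear driving force uptake
        dc = -dq * V_RESIN / V_LIQ        # liquid-phase mass balance
        return [dc, dq]

    q = 0.0
    for addition in range(4):             # four sequential protein additions
        c0 = 2.0                          # fresh stock concentration (mg/mL)
        sol = solve_ivp(rhs, (0.0, 600.0), [c0, q], max_step=5.0)
        c_end, q = sol.y[0, -1], sol.y[1, -1]   # supernatant removed, resin load retained
        print(f"addition {addition + 1}: supernatant {c_end:.3f} mg/mL, load {q:.2f} mg/mL")
    ```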

  18. A NEW VARIANCE ESTIMATOR FOR PARAMETERS OF SEMI-PARAMETRIC GENERALIZED ADDITIVE MODELS. (R829213)

    EPA Science Inventory

    The perspectives, information and conclusions conveyed in research project abstracts, progress reports, final reports, journal abstracts and journal publications convey the viewpoints of the principal investigator and may not represent the views and policies of ORD and EPA. Concl...

  19. Validating Mechanistic Sorption Model Parameters and Processes for Reactive Transport in Alluvium

    SciTech Connect

    Zavarin, M; Roberts, S K; Rose, T P; Phinney, D L

    2002-05-02

    The laboratory batch and flow-through experiments presented in this report provide a basis for validating the mechanistic surface complexation and ion exchange model we use in our hydrologic source term (HST) simulations. Batch sorption experiments were used to examine the effect of solution composition on sorption. Flow-through experiments provided for an analysis of the transport behavior of sorbing elements and tracers which includes dispersion and fluid accessibility effects. Analysis of downstream flow-through column fluids allowed for evaluation of weakly-sorbing element transport. Secondary Ion Mass Spectrometry (SIMS) analysis of the core after completion of the flow-through experiments permitted the evaluation of transport of strongly sorbing elements. A comparison between these data and model predictions provides additional constraints to our model and improves our confidence in near-field HST model parameters. In general, cesium, strontium, samarium, europium, neptunium, and uranium behavior could be accurately predicted using our mechanistic approach but only after some adjustment was made to the model parameters. The required adjustments included a reduction in strontium affinity for smectite, an increase in cesium affinity for smectite and illite, a reduction in iron oxide and calcite reactive surface area, and a change in clinoptilolite reaction constants to reflect a more recently published set of data. In general, these adjustments are justifiable because they fall within a range consistent with our understanding of the parameter uncertainties. These modeling results suggest that the uncertainty in the sorption model parameters must be accounted for to validate the mechanistic approach. The uncertainties in predicting the sorptive behavior of U-1a and UE-5n alluvium also suggest that these uncertainties must be propagated to near-field HST and large-scale corrective action unit (CAU) models.

  20. Simultaneous model discrimination and parameter estimation in dynamic models of cellular systems

    PubMed Central

    2013-01-01

    Background Model development is a key task in systems biology, which typically starts from an initial model candidate and, through an iterative cycle of hypothesis-driven model modifications, leads to new experimentation and subsequent model identification steps. The final product of this cycle is a satisfactory refined model of the biological phenomena under study. During such iterative model development, researchers frequently propose a set of model candidates from which the best alternative must be selected. Here we consider this problem of model selection and formulate it as a simultaneous model selection and parameter identification problem. More precisely, we consider a general mixed-integer nonlinear programming (MINLP) formulation for model selection and identification, with emphasis on dynamic models consisting of sets of either ODEs (ordinary differential equations) or DAEs (differential algebraic equations). Results We solved the MINLP formulation for model selection and identification using an algorithm based on Scatter Search (SS). We illustrate the capabilities and efficiency of the proposed strategy with a case study considering the KdpD/KdpE system regulating potassium homeostasis in Escherichia coli. The proposed approach resulted in a final model that presents a better fit to the in silico generated experimental data. Conclusions The presented MINLP-based optimization approach for nested-model selection and identification is a powerful methodology for model development in systems biology. This strategy can be used to perform model selection and parameter estimation in one single step, thus greatly reducing the number of experiments and computations of traditional modeling approaches. PMID:23938131
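
    A toy version of the simultaneous selection-and-identification idea is sketched below: each candidate ODE model is fitted to the data by nonlinear least squares and the candidates are then compared by AIC, so structure and parameters are chosen in one pass. The candidate models, synthetic data, and selection criterion are illustrative assumptions; the paper itself solves a full MINLP with a Scatter Search metaheuristic.

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp
    from scipy.optimize import least_squares

    t_obs = np.linspace(0.0, 10.0, 21)
    y_obs = 1.0 / (1.0 + 9.0 * np.exp(-0.8 * t_obs))   # synthetic "experimental" data

    def simulate(rhs, params):
        sol = solve_ivp(lambda t, y: rhs(y, params), (0.0, 10.0), [0.1], t_eval=t_obs)
        return sol.y[0]

    # Two rival model structures (candidate hypotheses)
    candidates = {
        "exponential": (lambda y, p: [p[0] * y[0]], [0.5]),
        "logistic":    (lambda y, p: [p[0] * y[0] * (1.0 - y[0] / p[1])], [0.5, 1.5]),
    }

    results = {}
    for name, (rhs, p0) in candidates.items():
        fit = least_squares(lambda p: simulate(rhs, p) - y_obs, p0)
        n, k = len(y_obs), len(p0)
        aic = n * np.log(np.sum(fit.fun ** 2) / n) + 2 * k   # simple AIC for Gaussian errors
        results[name] = (aic, fit.x)

    best = min(results, key=lambda m: results[m][0])
    print("selected model:", best, "parameters:", results[best][1])
    ```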

  1. Mass-based hygroscopicity parameter interaction model and measurement of atmospheric aerosol water uptake

    NASA Astrophysics Data System (ADS)

    Mikhailov, E.; Vlasenko, S.; Rose, D.; Pöschl, U.

    2013-01-01

    In this study we derive and apply a mass-based hygroscopicity parameter interaction model for efficient description of concentration-dependent water uptake by atmospheric aerosol particles with complex chemical composition. The model approach builds on the single hygroscopicity parameter model of Petters and Kreidenweis (2007). We introduce an observable mass-based hygroscopicity parameter κm which can be deconvoluted into a dilute hygroscopicity parameter (κm0) and additional self- and cross-interaction parameters describing non-ideal solution behavior and concentration dependencies of single- and multi-component systems. For reference aerosol samples of sodium chloride and ammonium sulfate, the κm-interaction model (KIM) captures the experimentally observed concentration and humidity dependence of the hygroscopicity parameter and is in good agreement with an accurate reference model based on the Pitzer ion-interaction approach (Aerosol Inorganic Model, AIM). Experimental results for pure organic particles (malonic acid, levoglucosan) and for mixed organic-inorganic particles (malonic acid - ammonium sulfate) are also well reproduced by KIM, taking into account apparent or equilibrium solubilities for stepwise or gradual deliquescence and efflorescence transitions. The mixed organic-inorganic particles as well as atmospheric aerosol samples exhibit three distinctly different regimes of hygroscopicity: (I) a quasi-eutonic deliquescence & efflorescence regime at low humidity, where substances are only partly dissolved and also exist in a non-dissolved phase; (II) a gradual deliquescence & efflorescence regime at intermediate humidity, where different solutes undergo gradual dissolution or solidification in the aqueous phase; and (III) a dilute regime at high humidity, where the solutes are fully dissolved approaching their dilute hygroscopicity. For atmospheric aerosol samples collected from boreal rural air and from pristine tropical rainforest air (secondary
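
    The sketch below evaluates particle water uptake from a mass-based hygroscopicity parameter, assuming the mass-based analogue of the single-parameter relation (water mass per dry mass ≈ κm·aw/(1−aw)) and a simple polynomial concentration dependence standing in for the KIM self- and cross-interaction terms. The coefficients are illustrative placeholders, not fitted KIM values, and the fixed-point solution is only a sketch of the self-consistent concentration dependence.

    ```python
    import numpy as np

    # Illustrative parameters (not fitted KIM coefficients)
    KAPPA_M0 = 0.45          # dilute mass-based hygroscopicity parameter
    A1, A2 = -0.30, 0.10     # stand-in self-interaction coefficients

    def kappa_m(x_solute):
        """Concentration-dependent hygroscopicity, polynomial in solute mass fraction."""
        return KAPPA_M0 + A1 * x_solute + A2 * x_solute ** 2

    def water_uptake(aw, n_iter=100):
        """Mass of water per unit dry mass at water activity aw (fixed-point iteration)."""
        mu = 1.0                               # initial guess: equal water and dry mass
        for _ in range(n_iter):
            x_solute = 1.0 / (1.0 + mu)        # dry-solute mass fraction of the droplet
            mu = kappa_m(x_solute) * aw / (1.0 - aw)
        return mu

    for aw in (0.5, 0.8, 0.95):
        mu = water_uptake(aw)
        print(f"aw = {aw:.2f}: water/dry mass = {mu:.2f}, mass growth factor = {1.0 + mu:.2f}")
    ```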

  2. An Adaptive Sequential Design for Model Discrimination and Parameter Estimation in Non-Linear Nested Models

    SciTech Connect

    Tommasi, C.; May, C.

    2010-09-30

    The DKL-optimality criterion has been recently proposed for the dual problem of model discrimination and parameter estimation, for the case of two rival models. A sequential version of the DKL-optimality criterion is herein proposed in order to discriminate and efficiently estimate more than two nested non-linear models. Our sequential method is inspired by the procedure of Biswas and Chaudhuri (2002), which, however, is useful only in the setting of nested linear models.

  3. Impact of the hard-coded parameters on the hydrologic fluxes of the land surface model Noah-MP

    NASA Astrophysics Data System (ADS)

    Cuntz, Matthias; Mai, Juliane; Samaniego, Luis; Clark, Martyn; Wulfmeyer, Volker; Attinger, Sabine; Thober, Stephan

    2016-04-01

    Land surface models incorporate a large number of processes, described by physical, chemical and empirical equations. The process descriptions contain a number of parameters that can be soil or plant type dependent and are typically read from tabulated input files. Land surface models may have, however, process descriptions that contain fixed, hard-coded numbers in the computer code, which are not identified as model parameters. Here we searched for hard-coded parameters in the computer code of the land surface model Noah with multiple process options (Noah-MP) to assess the importance of the fixed values on restricting the model's agility during parameter estimation. We found 139 hard-coded values in all Noah-MP process options, which are mostly spatially constant values. This is in addition to the 71 standard parameters of Noah-MP, which mostly get distributed spatially by given vegetation and soil input maps. We performed a Sobol' global sensitivity analysis of Noah-MP to variations of the standard and hard-coded parameters for a specific set of process options. 42 standard parameters and 75 hard-coded parameters were active with the chosen process options. The sensitivities of the hydrologic output fluxes latent heat and total runoff as well as their component fluxes were evaluated. These sensitivities were evaluated at twelve catchments of the Eastern United States with very different hydro-meteorological regimes. Noah-MP's hydrologic output fluxes are sensitive to two thirds of its standard parameters. The most sensitive parameter is, however, a hard-coded value in the formulation of soil surface resistance for evaporation, which proved to be oversensitive in other land surface models as well. Surface runoff is sensitive to almost all hard-coded parameters of the snow processes and the meteorological inputs. These parameter sensitivities diminish in total runoff. Assessing these parameters in model calibration would require detailed snow observations or the
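
    A compact sketch of a Sobol' analysis of this kind is shown below, using the SALib package with a stand-in model function in place of Noah-MP (running the actual land surface model requires its Fortran code and meteorological forcing); the parameter names, bounds, and surrogate flux are placeholders, not Noah-MP quantities.

    ```python
    import numpy as np
    from SALib.sample import saltelli
    from SALib.analyze import sobol

    # Placeholder problem definition: three stand-ins for standard/hard-coded parameters
    problem = {
        "num_vars": 3,
        "names": ["soil_surface_resistance", "snow_albedo", "hydraulic_conductivity"],
        "bounds": [[10.0, 500.0], [0.4, 0.9], [1e-7, 1e-4]],
    }

    def surrogate_flux(x):
        """Stand-in for a Noah-MP output flux (e.g., latent heat) -- not the real model."""
        r_surf, albedo, k_sat = x
        return 200.0 / (1.0 + r_surf / 100.0) * (1.0 - albedo) + 1e5 * k_sat

    X = saltelli.sample(problem, 1024)               # Saltelli sampling for Sobol' indices
    Y = np.apply_along_axis(surrogate_flux, 1, X)
    Si = sobol.analyze(problem, Y)
    print(dict(zip(problem["names"], Si["ST"])))     # total-effect index per parameter
    ```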

  4. Optimization of hydrological parameters of a distributed runoff model based on multiple flood events

    NASA Astrophysics Data System (ADS)

    Miyamoto, Mamoru; Matsumoto, Kazuhiro; Tsuda, Morimasa; Yamakage, Yuzuru; Iwami, Yoichi; Anai, Hirokazu

    2015-04-01

    The error sources of flood forecasting by a runoff model commonly include input data, model structures, and parameter settings. This study focused on a calibration procedure to minimize errors due to parameter settings. Although many studies have been done on hydrological parameter optimization, they are mostly about individual optimization cases applying a specific optimization technique to a specific flood. Consequently, it is difficult to determine the most appropriate parameter set to make forecasts on future floods, because optimized parameter sets vary by flood type. Thus, this study aimed to develop a comprehensive method for optimizing hydrological parameters of a distributed runoff model for future flood forecasting. A distributed runoff model, PWRI-DHM, was applied to the Gokase River basin of 1,820km2 in Japan in this study. The model with gridded two-layer tanks for the entire target river basin includes hydrological parameters, such as hydraulic conductivity, surface roughness and runoff coefficient, which are set according to land-use and soil-type distributions. Global data sets, e.g., Global Map and DSMW (Digital Soil Map of the World), were employed as input data such as elevation, land use and soil type. Thirteen optimization algorithms such as GA, PSO and DEA were carefully selected from seventy-four open-source algorithms available for public use. These algorithms were used with three error assessment functions to calibrate the parameters of the model to each of fifteen past floods in the predetermined search range. Fifteen optimized parameter sets corresponding to the fifteen past floods were determined by selecting the best sets from the calibration results in terms of reproducible accuracy. This process helped eliminate bias due to type of optimization algorithms. Although the calibration results of each parameter were widely distributed in the search range, statistical significance was found in comparisons between the optimized parameters
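
    The calibration loop itself can be outlined in a few lines: for each flood event, a global optimizer searches the parameter space to minimize an error function between simulated and observed hydrographs. The sketch below uses SciPy's differential evolution (one of many possible algorithms) and a hypothetical `run_model(params, event)` wrapper standing in for PWRI-DHM; the bounds, toy rainfall-runoff response, and RMSE objective are illustrative assumptions.

    ```python
    import numpy as np
    from scipy.optimize import differential_evolution

    def run_model(params, event):
        """Hypothetical wrapper around the distributed runoff model for one flood event.

        Returns a simulated discharge series; a toy linear-reservoir response is used
        here purely so the sketch runs end to end.
        """
        k, c = params
        q, out = 0.0, []
        for r in event["rainfall"]:
            q = q + c * r - k * q
            out.append(q)
        return np.array(out)

    def rmse(params, event):
        sim = run_model(params, event)
        return float(np.sqrt(np.mean((sim - event["observed"]) ** 2)))

    event = {"rainfall": np.random.gamma(2.0, 2.0, 48),
             "observed": np.random.gamma(2.0, 5.0, 48)}    # placeholder event data

    result = differential_evolution(rmse, bounds=[(0.01, 0.9), (0.1, 5.0)],
                                    args=(event,), maxiter=50, seed=1)
    print("optimized parameters:", result.x, "RMSE:", result.fun)
    ```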

  5. Updated parameters and expanded simulation options for a model of the auditory periphery.

    PubMed

    Zilany, Muhammad S A; Bruce, Ian C; Carney, Laurel H

    2014-01-01

    A phenomenological model of the auditory periphery in cats was previously developed by Zilany and colleagues [J. Acoust. Soc. Am. 126, 2390-2412 (2009)] to examine the detailed transformation of acoustic signals into the auditory-nerve representation. In this paper, a few issues arising from the responses of the previous version have been addressed. The parameters of the synapse model have been readjusted to better simulate reported physiological discharge rates at saturation for higher characteristic frequencies [Liberman, J. Acoust. Soc. Am. 63, 442-455 (1978)]. This modification also corrects the responses of higher-characteristic frequency (CF) model fibers to low-frequency tones that were erroneously much higher than the responses of low-CF model fibers in the previous version. In addition, an analytical method has been implemented to compute the mean discharge rate and variance from the model's synapse output that takes into account the effects of absolute refractoriness. PMID:24437768

  6. Modelling suspended-sediment propagation and related heavy metal contamination in floodplains: a parameter sensitivity analysis

    NASA Astrophysics Data System (ADS)

    Hostache, R.; Hissler, C.; Matgen, P.; Guignard, C.; Bates, P.

    2014-09-01

    Fine sediments represent an important vector of pollutant diffusion in rivers. When deposited in floodplains and riverbeds, they can be responsible for soil pollution. In this context, this paper proposes a modelling exercise aimed at predicting transport and diffusion of fine sediments and dissolved pollutants. The model is based upon the Telemac hydro-informatic system (dynamic coupling of Telemac-2D and Sisyphe). As empirical and semiempirical parameters need to be calibrated for such a modelling exercise, a sensitivity analysis is proposed. An innovative point in this study is the assessment of the usefulness of dissolved trace metal contamination information for model calibration. Moreover, for supporting the modelling exercise, an extensive database was set up during two flood events. It includes water surface elevation records, discharge measurements and geochemistry data such as time series of dissolved/particulate contaminants and suspended-sediment concentrations. The most sensitive parameters were found to be the hydraulic friction coefficients and the sediment particle settling velocity in water. It was also found that model calibration did not benefit from dissolved trace metal contamination information. Using the two monitored hydrological events as calibration and validation, it was found that the model is able to satisfactorily predict suspended-sediment and dissolved-pollutant transport in the river channel. In addition, a qualitative comparison between simulated sediment deposition in the floodplain and a soil contamination map shows that the preferential zones for deposition identified by the model are realistic.

  7. Approaches in highly parameterized inversion - PEST++, a Parameter ESTimation code optimized for large environmental models

    USGS Publications Warehouse

    Welter, David E.; Doherty, John E.; Hunt, Randall J.; Muffels, Christopher T.; Tonkin, Matthew J.; Schreuder, Willem A.

    2012-01-01

    An object-oriented parameter estimation code was developed to incorporate benefits of object-oriented programming techniques for solving large parameter estimation modeling problems. The code is written in C++ and is a formulation and expansion of the algorithms included in PEST, a widely used parameter estimation code written in Fortran. The new code is called PEST++ and is designed to lower the barriers of entry for users and developers while providing efficient algorithms that can accommodate large, highly parameterized problems. This effort has focused on (1) implementing the most popular features of PEST in a fashion that is easy for novice or experienced modelers to use and (2) creating a software design that is easy to extend; that is, this effort provides a documented object-oriented framework designed from the ground up to be modular and extensible. In addition, all PEST++ source code and its associated libraries, as well as the general run manager source code, have been integrated in the Microsoft Visual Studio® 2010 integrated development environment. The PEST++ code is designed to provide a foundation for an open-source development environment capable of producing robust and efficient parameter estimation tools for the environmental modeling community into the future.

  8. Additive Manufacturing Modeling and Simulation A Literature Review for Electron Beam Free Form Fabrication

    NASA Technical Reports Server (NTRS)

    Seufzer, William J.

    2014-01-01

    Additive manufacturing is coming into industrial use and has several desirable attributes. Control of the deposition remains a complex challenge, and so this literature review was initiated to capture current modeling efforts in the field of additive manufacturing. This paper summarizes about 10 years of modeling and simulation related to both welding and additive manufacturing. The goals were to learn who is doing what in modeling and simulation, to summarize various approaches taken to create models, and to identify research gaps. Later sections in the report summarize implications for closed-loop-control of the process, implications for local research efforts, and implications for local modeling efforts.

  9. Recommended direct simulation Monte Carlo collision model parameters for modeling ionized air transport processes

    NASA Astrophysics Data System (ADS)

    Swaminathan-Gopalan, Krishnan; Stephani, Kelly A.

    2016-02-01

    A systematic approach for calibrating the direct simulation Monte Carlo (DSMC) collision model parameters to achieve consistency in the transport processes is presented. The DSMC collision cross section model parameters are calibrated for high temperature atmospheric conditions by matching the collision integrals from DSMC against ab initio based collision integrals that are currently employed in the Langley Aerothermodynamic Upwind Relaxation Algorithm (LAURA) and Data Parallel Line Relaxation (DPLR) high temperature computational fluid dynamics solvers. The DSMC parameter values are computed for the widely used Variable Hard Sphere (VHS) and the Variable Soft Sphere (VSS) models using the collision-specific pairing approach. The recommended best-fit VHS/VSS parameter values are provided over a temperature range of 1000-20 000 K for a thirteen-species ionized air mixture. Use of the VSS model is necessary to achieve consistency in transport processes of ionized gases. The agreement of the VSS model transport properties with the transport properties as determined by the ab initio collision integral fits was found to be within 6% in the entire temperature range, regardless of the composition of the mixture. The recommended model parameter values can be readily applied to any gas mixture involving binary collisional interactions between the chemical species presented for the specified temperature range.

  10. Accurate analytical method for the extraction of solar cell model parameters

    NASA Astrophysics Data System (ADS)

    Phang, J. C. H.; Chan, D. S. H.; Phillips, J. R.

    1984-05-01

    Single diode solar cell model parameters are rapidly extracted from experimental data by means of the analytical expressions derived here. The parameter values obtained have less than 5 percent error for most solar cells, as demonstrated by extracting model parameters for two cells of differing quality and comparing them with parameters extracted by means of the iterative method.
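
    For reference, the single-diode model the extraction targets is the implicit relation I = Iph − I0[exp((V + I·Rs)/(n·Vt)) − 1] − (V + I·Rs)/Rsh. The sketch below simply evaluates that relation numerically for assumed parameter values; it does not reproduce the paper's analytical extraction expressions.

    ```python
    import numpy as np
    from scipy.optimize import brentq

    # Assumed single-diode parameters (illustrative, not extracted values)
    I_PH, I_0 = 3.0, 1e-9       # photocurrent, diode saturation current (A)
    R_S, R_SH = 0.05, 200.0     # series and shunt resistance (ohm)
    N, V_T = 1.3, 0.02585       # ideality factor, thermal voltage near 300 K (V)

    def current(v):
        """Solve the implicit single-diode equation for the terminal current at voltage v."""
        def f(i):
            return (I_PH - I_0 * (np.exp((v + i * R_S) / (N * V_T)) - 1.0)
                    - (v + i * R_S) / R_SH - i)
        return brentq(f, -1.0, I_PH + 1.0)

    for v in (0.0, 0.4, 0.55):
        print(f"V = {v:.2f} V -> I = {current(v):.3f} A")
    ```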

  11. Observation model and parameter partials for the JPL geodetic GPS modeling software GPSOMC

    NASA Technical Reports Server (NTRS)

    Sovers, O. J.; Border, J. S.

    1988-01-01

    The physical models employed in GPSOMC, the modeling module of the GIPSY software system developed at JPL for analysis of geodetic Global Positioning Satellite (GPS) measurements, are described. Details of the various contributions to range and phase observables are given, as well as the partial derivatives of the observed quantities with respect to model parameters. A glossary of parameters is provided to enable persons doing data analysis to identify quantities in the current report with their counterparts in the computer programs. There are no basic model revisions, with the exception of an improved ocean loading model and some new options for handling clock parametrization. Misprints that were discovered have been corrected. Further revisions include modeling improvements and assurances that the model description is in accord with the current software.

  12. Model Parameter Variability for Enhanced Anaerobic Bioremediation of DNAPL Source Zones

    NASA Astrophysics Data System (ADS)

    Mao, X.; Gerhard, J. I.; Barry, D. A.

    2005-12-01

    The objective of the Source Area Bioremediation (SABRE) project, an international collaboration of twelve companies, two government agencies and three research institutions, is to evaluate the performance of enhanced anaerobic bioremediation for the treatment of chlorinated ethene source areas containing dense, non-aqueous phase liquids (DNAPL). This 4-year, $5.7 million research effort focuses on a pilot-scale demonstration of enhanced bioremediation at a trichloroethene (TCE) DNAPL field site in the United Kingdom, and includes a significant program of laboratory and modelling studies. Prior to field implementation, a large-scale, multi-laboratory microcosm study was performed to determine the optimal system properties to support dehalogenation of TCE in site soil and groundwater. This statistically based suite of experiments measured the influence of key variables (electron donor, nutrient addition, bioaugmentation, TCE concentration and sulphate concentration) in promoting the reductive dechlorination of TCE to ethene. In addition, a comprehensive biogeochemical numerical model was developed for simulating the anaerobic dehalogenation of chlorinated ethenes. An appropriate (reduced) version of this model was combined with a parameter estimation method based on fitting of the experimental results. Each of over 150 individual microcosm calibrations involved matching predicted and observed time-varying concentrations of all chlorinated compounds. This study focuses on an analysis of this suite of fitted model parameter values. This includes determining the statistical correlation between parameters typically employed in standard Michaelis-Menten type rate descriptions (e.g., maximum dechlorination rates, half-saturation constants) and the key experimental variables. The analysis provides insight into the degree to which aqueous phase TCE and cis-DCE inhibit dechlorination of less-chlorinated compounds. Overall, this work provides a database of the numerical
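
    The rate descriptions referred to can be illustrated with a short simulation of sequential reductive dechlorination (TCE → cis-DCE → VC → ethene) using Michaelis-Menten kinetics with competitive inhibition of the downstream steps by the parent compounds. All rate and inhibition constants below are illustrative placeholders, not the fitted microcosm values.

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    # Illustrative kinetic constants (umol/L basis), not fitted SABRE values
    VMAX = {"TCE": 2.0, "cDCE": 1.2, "VC": 0.6}   # maximum dechlorination rates
    KS   = {"TCE": 5.0, "cDCE": 4.0, "VC": 8.0}   # half-saturation constants
    KI_TCE, KI_CDCE = 3.0, 6.0                    # competitive inhibition constants

    def rates(t, y):
        tce, cdce, vc, eth = y
        r_tce  = VMAX["TCE"]  * tce  / (KS["TCE"] + tce)
        # TCE competitively inhibits the cDCE step; TCE and cDCE inhibit the VC step
        r_cdce = VMAX["cDCE"] * cdce / (KS["cDCE"] * (1 + tce / KI_TCE) + cdce)
        r_vc   = VMAX["VC"]   * vc   / (KS["VC"] * (1 + tce / KI_TCE + cdce / KI_CDCE) + vc)
        return [-r_tce, r_tce - r_cdce, r_cdce - r_vc, r_vc]

    sol = solve_ivp(rates, (0.0, 60.0), [50.0, 0.0, 0.0, 0.0], max_step=0.5)
    print("final concentrations (TCE, cDCE, VC, ethene):", np.round(sol.y[:, -1], 2))
    ```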

  13. Computerized Adaptive Testing: A Comparison of the Nominal Response Model and the Three Parameter Logistic Model.

    ERIC Educational Resources Information Center

    DeAyala, R. J.; Koch, William R.

    A nominal response model-based computerized adaptive testing procedure (nominal CAT) was implemented using simulated data. Ability estimates from the nominal CAT were compared to those from a CAT based upon the three-parameter logistic model (3PL CAT). Furthermore, estimates from both CAT procedures were compared with the known true abilities used…

  14. Rock thermal conductivity as key parameter for geothermal numerical models

    NASA Astrophysics Data System (ADS)

    Di Sipio, Eloisa; Chiesa, Sergio; Destro, Elisa; Galgaro, Antonio; Giaretta, Aurelio; Gola, Gianluca; Manzella, Adele

    2013-04-01

    Geothermal energy applications are undergoing rapid development. However, there are still several challenges in the successful exploitation of geothermal energy resources. In particular, a special effort is required to characterize the thermal properties of the ground along with the implementation of efficient thermal energy transfer technologies. This paper focuses on understanding the quantitative contribution that geosciences can receive from the characterization of rock thermal conductivity. The thermal conductivity of materials is one of the main input parameters in geothermal modeling since it directly controls the steady state temperature field. An evaluation of this thermal property is required in several fields, such as Thermo-Hydro-Mechanical multiphysics analysis of frozen soils, designing ground source heat pump plants, modeling the structure of deep geothermal reservoirs, and assessing the geothermal potential of the subsoil. The aim of this study is to provide original rock thermal conductivity values useful for the evaluation of both low and high enthalpy resources at regional or local scale. To overcome the existing lack of thermal conductivity data of sedimentary, igneous and metamorphic rocks, a series of laboratory measurements has been performed on several samples, collected in outcrop, representative of the main lithologies of the regions included in the VIGOR Project (southern Italy). Thermal properties tests were carried out both in dry and wet conditions, using a C-Therm TCi device, operating following the Modified Transient Plane Source method. Measurements were made at standard laboratory conditions on samples both water saturated and dehydrated with a fan-forced drying oven at 70 °C for 24 hr, for preserving the mineral assemblage and preventing the change of effective porosity. Subsequently, the samples have been stored in an air-conditioned room while bulk density, solid volume and porosity were detected. The measured thermal conductivity

  15. Model-Based Material Parameter Estimation for Terahertz Reflection Spectroscopy

    NASA Astrophysics Data System (ADS)

    Kniffin, Gabriel Paul

    Many materials such as drugs and explosives have characteristic spectral signatures in the terahertz (THz) band. These unique signatures imply great promise for spectral detection and classification using THz radiation. While such spectral features are most easily observed in transmission, real-life imaging systems will need to identify materials of interest from reflection measurements, often in non-ideal geometries. One important, yet commonly overlooked source of signal corruption is the etalon effect -- interference phenomena caused by multiple reflections from dielectric layers of packaging and clothing likely to be concealing materials of interest in real-life scenarios. This thesis focuses on the development and implementation of a model-based material parameter estimation technique, primarily for use in reflection spectroscopy, that takes the influence of the etalon effect into account. The technique is adapted from techniques developed for transmission spectroscopy of thin samples and is demonstrated using measured data taken at the Northwest Electromagnetic Research Laboratory (NEAR-Lab) at Portland State University. Further tests are conducted, demonstrating the technique's robustness against measurement noise and common sources of error.

  16. Observation model and parameter partials for the JPL geodetic (GPS) modeling software 'GPSOMC'

    NASA Technical Reports Server (NTRS)

    Sovers, O. J.

    1990-01-01

    The physical models employed in GPSOMC, the modeling module of the GIPSY software system developed at JPL for analysis of geodetic Global Positioning Satellite (GPS) measurements are described. Details of the various contributions to range and phase observables are given, as well as the partial derivatives of the observed quantities with respect to model parameters. A glossary of parameters is provided to enable persons doing data analysis to identify quantities with their counterparts in the computer programs. The present version is the second revision of the original document which it supersedes. The modeling is expanded to provide the option of using Cartesian station coordinates; parameters for the time rates of change of universal time and polar motion are also introduced.

  17. Discussion of skill improvement in marine ecosystem dynamic models based on parameter optimization and skill assessment

    NASA Astrophysics Data System (ADS)

    Shen, Chengcheng; Shi, Honghua; Liu, Yongzhi; Li, Fen; Ding, Dewen

    2015-12-01

    Marine ecosystem dynamic models (MEDMs) are important tools for the simulation and prediction of marine ecosystems. This article summarizes the methods and strategies used for the improvement and assessment of MEDM skill, and it attempts to establish a technical framework to inspire further ideas concerning MEDM skill improvement. The skill of MEDMs can be improved by parameter optimization (PO), which is an important step in model calibration. An efficient approach to solve the problem of PO constrained by MEDMs is the global treatment of both sensitivity analysis and PO. Model validation is an essential step following PO, which validates the efficiency of model calibration by analyzing and estimating the goodness-of-fit of the optimized model. Additionally, by focusing on the degree of impact of various factors on model skill, model uncertainty analysis can supply model users with a quantitative assessment of model confidence. Research on MEDMs is ongoing; however, improvement in model skill still lacks global treatments and its assessment is not integrated. Thus, the predictive performance of MEDMs is not strong and model uncertainties lack quantitative descriptions, limiting their application. Therefore, a large number of case studies concerning model skill should be performed to promote the development of a scientific and normative technical framework for the improvement of MEDM skill.

  18. Discussion of skill improvement in marine ecosystem dynamic models based on parameter optimization and skill assessment

    NASA Astrophysics Data System (ADS)

    Shen, Chengcheng; Shi, Honghua; Liu, Yongzhi; Li, Fen; Ding, Dewen

    2016-07-01

    Marine ecosystem dynamic models (MEDMs) are important tools for the simulation and prediction of marine ecosystems. This article summarizes the methods and strategies used for the improvement and assessment of MEDM skill, and it attempts to establish a technical framework to inspire further ideas concerning MEDM skill improvement. The skill of MEDMs can be improved by parameter optimization (PO), which is an important step in model calibration. An efficient approach to solve the problem of PO constrained by MEDMs is the global treatment of both sensitivity analysis and PO. Model validation is an essential step following PO, which validates the efficiency of model calibration by analyzing and estimating the goodness-of-fit of the optimized model. Additionally, by focusing on the degree of impact of various factors on model skill, model uncertainty analysis can supply model users with a quantitative assessment of model confidence. Research on MEDMs is ongoing; however, improvement in model skill still lacks global treatments and its assessment is not integrated. Thus, the predictive performance of MEDMs is not strong and model uncertainties lack quantitative descriptions, limiting their application. Therefore, a large number of case studies concerning model skill should be performed to promote the development of a scientific and normative technical framework for the improvement of MEDM skill.

  19. Multiprocessing and Correction Algorithm of 3D-models for Additive Manufacturing

    NASA Astrophysics Data System (ADS)

    Anamova, R. R.; Zelenov, S. V.; Kuprikov, M. U.; Ripetskiy, A. V.

    2016-07-01

    This article addresses matters related to additive manufacturing preparation. A layer-by-layer model presentation was developed on the basis of a routing method. Methods for correction of errors in the layer-by-layer model presentation were developed. A multiprocessing algorithm for forming an additive manufacturing batch file was realized.

  20. Validation analysis of probabilistic models of dietary exposure to food additives.

    PubMed

    Gilsenan, M B; Thompson, R L; Lambe, J; Gibney, M J

    2003-10-01

    The validity of a range of simple conceptual models designed specifically for the estimation of food additive intakes using probabilistic analysis was assessed. Modelled intake estimates that fell below traditional conservative point estimates of intake and above 'true' additive intakes (calculated from a reference database at brand level) were considered to be in a valid region. Models were developed for 10 food additives by combining food intake data, the probability of an additive being present in a food group and additive concentration data. Food intake and additive concentration data were entered as raw data or as a lognormal distribution, and the probability of an additive being present was entered based on the per cent brands or the per cent eating occasions within a food group that contained an additive. Since the three model components assumed two possible modes of input, the validity of eight (2³) model combinations was assessed. All model inputs were derived from the reference database. An iterative approach was employed in which the validity of individual model components was assessed first, followed by validation of full conceptual models. While the distribution of intake estimates from models fell below conservative intakes, which assume that the additive is present at maximum permitted levels (MPLs) in all foods in which it is permitted, intake estimates were not consistently above 'true' intakes. These analyses indicate the need for more complex models for the estimation of food additive intakes using probabilistic analysis. Such models should incorporate information on market share and/or brand loyalty. PMID:14555358
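
    The model structure described (food intake × probability of additive presence × additive concentration) lends itself to a short Monte Carlo sketch like the one below; all distributions and percentages are illustrative placeholders, not values from the reference database.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    N = 100_000                                     # simulated consumer-days

    # One food group, illustrative inputs
    intake_g = rng.lognormal(mean=4.0, sigma=0.6, size=N)        # food intake (g/day)
    present = rng.random(N) < 0.35                               # additive present in 35% of brands
    conc_mg_per_g = rng.lognormal(mean=-1.5, sigma=0.4, size=N)  # additive concentration

    additive_intake = intake_g * present * conc_mg_per_g         # mg/day

    print("mean intake:", additive_intake.mean())
    print("97.5th percentile:", np.percentile(additive_intake, 97.5))
    ```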

  1. Parameter Choice and Constraint in Hydrologic Models for Evaluating Land Use Change

    NASA Astrophysics Data System (ADS)

    Jackson, C. R.

    2011-12-01

    Hydrologic models are used to answer questions, from simple, "what is the expected 100-year peak flow for a basin?", to complex, "how will land use change alter flow pathways, flow time series, and water chemistry?" Appropriate model structure and complexity depend on the questions being addressed. Numerous studies of simple transfer models for converting climate signals into streamflows suggest that only three or four parameters are needed. The conceptual corollary to such models is a single hillslope bucket with storage, evapotranspiration, fast flow, and slow flow. While having the benefit of low uncertainty, such models are ill-suited to addressing land use questions. Land use questions require models that can simulate effects of changes in vegetation, alterations of soil characteristics, and resulting changes in flow pathways. For example, minimum goals for a hydrologic model evaluating bioenergy feedstock production might include: 1) calculate Horton overland flow based on surface conductivities and saturated surface flow based on relative moisture content in the topsoils, 2) allow reinfiltration of Horton overland flow created by bare soils, compacted soils, and pavement (roads, logging roads, skid trails, landings), 3) account for root zone depth and LAI in transpiration calculations, 4) allow mixing of hillslope flows in the riparian aquifer, 5) allow separate simulation of the riparian soils and vegetation and upslope soils and vegetation, 6) incorporate important aspects of topography and stratigraphy, and 7) estimate residence times in different flow paths. How many parameters are needed for such a model, and what information besides streamflow can be collected to constrain the parameters? Additional information that can be used for evaluating and testing watershed models includes in-situ conductivity measurements, soil porosity, soil moisture dynamics, shallow perched groundwater behavior, interflow occurrence, groundwater behavior, regional ET estimates

  2. Study on the effect of hydrogen addition on the variation of plasma parameters of argon-oxygen magnetron glow discharge for synthesis of TiO2 films

    NASA Astrophysics Data System (ADS)

    Saikia, Partha; Saikia, Bipul Kumar; Bhuyan, Heman

    2016-04-01

    We report the effect of hydrogen addition on plasma parameters of argon-oxygen magnetron glow discharge plasma in the synthesis of H-doped TiO2 films. The parameters of the hydrogen-added Ar/O2 plasma influence the properties and the structural phases of the deposited TiO2 film. Therefore, the variation of plasma parameters such as electron temperature (Te), electron density (ne), ion density (ni), degree of ionization of Ar and degree of dissociation of H2 as a function of hydrogen content in the discharge is studied. Langmuir probe and Optical emission spectroscopy are used to characterize the plasma. On the basis of the different reactions in the gas phase of the magnetron discharge, the variation of plasma parameters and sputtering rate are explained. It is observed that the electron and heavy ion density decline with gradual addition of hydrogen in the discharge. Hydrogen addition significantly changes the degree of ionization of Ar which influences the structural phases of the TiO2 film.

  3. Roughness parameter optimization using Land Parameter Retrieval Model and Soil Moisture Deficit: Implementation using SMOS brightness temperatures

    NASA Astrophysics Data System (ADS)

    Srivastava, Prashant K.; O'Neill, Peggy; Han, Dawei; Rico-Ramirez, Miguel A.; Petropoulos, George P.; Islam, Tanvir; Gupta, Manika

    2015-04-01

    Roughness parameterization is necessary for nearly all soil moisture retrieval algorithms such as single or dual channel algorithms, L-band Microwave Emission of Biosphere (LMEB), Land Parameter Retrieval Model (LPRM), etc. At present, roughness parameters can be obtained either by field experiments, although obtaining field measurements all over the globe is nearly impossible, or by using a land cover-based look up table, which is not always accurate everywhere for individual fields. From a catalogue of models available in the technical literature domain, the LPRM model was used here because of its robust nature and applicability to a wide range of frequencies. LPRM needs several parameters for soil moisture retrieval -- in particular, roughness parameters (h and Q) are important for calculating reflectivity. In this study, the h and Q parameters are optimized using the soil moisture deficit (SMD) estimated from the probability distributed model (PDM) and Soil Moisture and Ocean Salinity (SMOS) brightness temperatures following the Levenberg-Marquardt (LM) algorithm over the Brue catchment in southwest England, UK. The catchment is predominantly pasture land with moderate topography. The PDM-based SMD is used as it is calibrated and validated using locally available ground-based information, suitable for large scale areas such as catchments. The optimal h and Q parameters are determined by maximizing the correlation between SMD and LPRM retrieved soil moisture. After optimization the values of h and Q have been found to be 0.32 and 0.15, respectively. For testing the usefulness of the estimated roughness parameters, a separate set of SMOS datasets are taken into account for soil moisture retrieval using the LPRM model and optimized roughness parameters. The overall analysis indicates a satisfactory result when compared against the SMD information. This work provides quantitative values of roughness parameters suitable for large scale applications. The

  4. Observation model and parameter partials for the JPL VLBI parameter estimation software MODEST/1991

    NASA Technical Reports Server (NTRS)

    Sovers, O. J.

    1991-01-01

    A revision is presented of MASTERFIT-1987, which it supersedes. Changes during 1988 to 1991 included introduction of the octupole component of solid Earth tides, the NUVEL tectonic motion model, partial derivatives for the precession constant and source position rates, the option to correct for source structure, a refined model for antenna offsets, modeling the unique antenna at Richmond, FL, improved nutation series due to Zhu, Groten, and Reigber, and reintroduction of the old (Woolard) nutation series for simulation purposes. Text describing the relativistic transformations and gravitational contributions to the delay model was also revised in order to reflect the computer code more faithfully.

  5. Multi-Variable Model-Based Parameter Estimation Model for Antenna Radiation Pattern Prediction

    NASA Technical Reports Server (NTRS)

    Deshpande, Manohar D.; Cravey, Robin L.

    2002-01-01

    A new procedure is presented to develop multi-variable model-based parameter estimation (MBPE) model to predict far field intensity of antenna. By performing MBPE model development procedure on a single variable at a time, the present method requires solution of smaller size matrices. The utility of the present method is demonstrated by determining far field intensity due to a dipole antenna over a frequency range of 100-1000 MHz and elevation angle range of 0-90 degrees.

  6. Parameters for the AMBER force field for the molecular mechanics modeling of the cobalt corrinoids

    NASA Astrophysics Data System (ADS)

    Marques, H. M.; Ngoma, B.; Egan, T. J.; Brown, K. L.

    2001-04-01

    Additional parameters for the AMBER force field have been developed for the molecular mechanics modeling of the cobalt corrinoids. Parameter development was based on a statistical analysis of the reported structures of these compounds. The resulting force field reproduces bond lengths, bond angles, and torsional angles within 0.01 Å, 0.8°, and 4.0° of the mean crystallographic values, respectively. Parameters for the Co-C bond length and the Co-C-C bond angle for modeling the alkylcobalamins were developed by modeling six alkylcobalamins. The validity of the force field was tested by comparing the results obtained with known experimental features of the structures of the cobalt corrinoids as well as with the results from their modeling using a parameter set for the MM2 force field that has been previously developed and extensively tested. The AMBER force field reproduces the structures of the cobalt corrinoids as well as the MM2 force field, although it tends to underestimate the corrin fold angle, the angle between mean planes through the corrin atoms in the northern and southern half of the molecules, respectively. The force field was applied to a study of the structures of 5'-deoxy-5'-(3-isoadenosyl)cobalamin, 2',5'-dideoxy-5'-adenosylcobalamin and 2',3',5'-trideoxy-5'-adenosylcobalamin. This expansion of the standard AMBER force field provides a force field that can be used for modeling the structures of the B12-dependent proteins, the structures of some of which are now beginning to emerge. This was verified in a preliminary modeling of the coenzyme B12 binding site of methylmalonyl coenzyme A mutase.

  7. Complete regional waveform modeling to estimate seismic velocity structure and source parameters for CTBT monitoring

    SciTech Connect

    Bredbeck, T; Rodgers, A; Walter, W

    1999-07-23

    The velocity structures and source parameters estimated by waveform modeling provide valuable information for CTBT monitoring. The inferred crustal and uppermost mantle structures advance understanding of tectonics and guides regionalization for event location and identification efforts. Estimation of source parameters such as seismic moment, depth and mechanism (whether earthquake, explosion or collapse) is crucial to event identification. In this paper we briefly outline some of the waveform modeling research for CTBT monitoring performed in the last year. In the future we will estimate structure for new regions by modeling waveforms of large well-observed events along additional paths. Of particular interest will be the estimation of velocity structure in aseismic regions such as most of Africa and the Former Soviet Union. Our previous work on aseismic regions in the Middle East, north Africa and south Asia give us confidence to proceed with our current methods. Using the inferred velocity models we plan to estimate source parameters for smaller events. It is especially important to obtain seismic moments of earthquakes for use in applying the Magnitude-Distance Amplitude Correction (MDAC; Taylor et al., 1999) to regional body-wave amplitudes for discrimination and calibrating the coda-based magnitude scales.

  8. A Note on the Item Information Function of the Four-Parameter Logistic Model

    ERIC Educational Resources Information Center

    Magis, David

    2013-01-01

    This article focuses on four-parameter logistic (4PL) model as an extension of the usual three-parameter logistic (3PL) model with an upper asymptote possibly different from 1. For a given item with fixed item parameters, Lord derived the value of the latent ability level that maximizes the item information function under the 3PL model. The…
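
    For context, the 4PL item response function and the standard Fisher information for a dichotomous item can be written down directly; the sketch below evaluates both for one set of illustrative item parameters (a, b, c, d are not values from the article).

    ```python
    import numpy as np

    def p_4pl(theta, a, b, c, d):
        """4PL probability of a correct response: lower asymptote c, upper asymptote d."""
        return c + (d - c) / (1.0 + np.exp(-a * (theta - b)))

    def info_4pl(theta, a, b, c, d):
        """Item information I(theta) = P'(theta)^2 / (P(theta) * (1 - P(theta)))."""
        p = p_4pl(theta, a, b, c, d)
        e = np.exp(-a * (theta - b))
        dp = (d - c) * a * e / (1.0 + e) ** 2     # derivative of the 4PL curve
        return dp ** 2 / (p * (1.0 - p))

    theta = np.linspace(-4.0, 4.0, 161)
    info = info_4pl(theta, a=1.5, b=0.0, c=0.2, d=0.95)
    print("ability at maximum information:", theta[np.argmax(info)])
    ```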

  9. Assessing parameter importance of the Common Land Model based on qualitative and quantitative sensitivity analysis

    NASA Astrophysics Data System (ADS)

    Li, J.; Duan, Q. Y.; Gong, W.; Ye, A.; Dai, Y.; Miao, C.; Di, Z.; Tong, C.; Sun, Y.

    2013-08-01

    Proper specification of model parameters is critical to the performance of land surface models (LSMs). Due to high dimensionality and parameter interaction, estimating parameters of an LSM is a challenging task. Sensitivity analysis (SA) is a tool that can screen out the most influential parameters on model outputs. In this study, we conducted parameter screening for six output fluxes for the Common Land Model: sensible heat, latent heat, upward longwave radiation, net radiation, soil temperature and soil moisture. A total of 40 adjustable parameters were considered. Five qualitative SA methods, including local, sum-of-trees, multivariate adaptive regression splines, delta test and Morris methods, were compared. The proper sampling design and sufficient sample size necessary to effectively screen out the sensitive parameters were examined. We found that there are 2-8 sensitive parameters, depending on the output type, and about 400 samples are adequate to reliably identify the most sensitive parameters. We also employed a revised Sobol' sensitivity method to quantify the importance of all parameters. The total effects of the parameters were used to assess the contribution of each parameter to the total variances of the model outputs. The results confirmed that global SA methods can generally identify the most sensitive parameters effectively, while local SA methods result in type I errors (i.e., sensitive parameters labeled as insensitive) or type II errors (i.e., insensitive parameters labeled as sensitive). Finally, we evaluated and confirmed the screening results for their consistency with the physical interpretation of the model parameters.

  10. Assessing parameter importance of the Common Land Model based on qualitative and quantitative sensitivity analysis

    NASA Astrophysics Data System (ADS)

    Li, J. D.; Duan, Q. Y.; Gong, W.; Ye, A. Z.; Dai, Y. J.; Miao, C. Y.; Di, Z. H.; Tong, C.; Sun, Y. W.

    2013-02-01

    Proper specification of model parameters is critical to the performance of land surface models (LSMs). Due to high dimensionality and parameter interaction, estimating parameters of an LSM is a challenging task. Sensitivity analysis (SA) is a tool that can screen out the most influential parameters on model outputs. In this study, we conducted parameter screening for six output fluxes for the Common Land Model: sensible heat, latent heat, upward longwave radiation, net radiation, soil temperature and soil moisture. A total of 40 adjustable parameters were considered. Five qualitative SA methods, including local, sum-of-trees, multivariate adaptive regression splines, delta test and Morris methods, were compared. The proper sampling design and sufficient sample size necessary to effectively screen out the sensitive parameters were examined. We found that there are 2-8 sensitive parameters, depending on the output type, and about 400 samples are adequate to reliably identify the most sensitive parameters. We also employed a revised Sobol' sensitivity method to quantify the importance of all parameters. The total effects of the parameters were used to assess the contribution of each parameter to the total variances of the model outputs. The results confirmed that global SA methods can generally identify the most sensitive parameters effectively, while local SA methods result in type I errors (i.e. sensitive parameters labeled as insensitive) or type II errors (i.e. insensitive parameters labeled as sensitive). Finally, we evaluated and confirmed the screening results for their consistency with the physical interpretation of the model parameters.

  11. State and model error estimation for distributed parameter systems. [in large space structure control

    NASA Technical Reports Server (NTRS)

    Rodriguez, G.

    1979-01-01

    In-flight estimation of large structure model errors in order to detect inevitable deficiencies in large structure controller/estimator models is discussed. Such an estimation process is particularly applicable in the area of shape control system design required to maintain a prescribed static structural shape and, in addition, suppress dynamic disturbances due to the vehicle vibrational modes. The paper outlines a solution to the problem of static shape estimation where the vehicle shape must be reconstructed from a set of measurements discretely located throughout the structure. The estimation process is based on the principle of least squares, which inherently contains the definition and explicit computation of model error estimates that are optimal in some sense. Consequently, a solution is provided for the problem of estimation of static model errors (e.g., external loads). A generalized formulation applicable to distributed parameter systems is first worked out and then applied to a one-dimensional beam-like structural configuration.

  12. Spatiotemporal and random parameter panel data models of traffic crash fatalities in Vietnam.

    PubMed

    Truong, Long T; Kieu, Le-Minh; Vu, Tuan A

    2016-09-01

    This paper investigates factors associated with traffic crash fatalities in 63 provinces of Vietnam during the period from 2012 to 2014. Random effect negative binomial (RENB) and random parameter negative binomial (RPNB) panel data models are adopted to consider spatial heterogeneity across provinces. In addition, a spatiotemporal model with conditional autoregressive priors (ST-CAR) is utilised to account for spatiotemporal autocorrelation in the data. The statistical comparison indicates the ST-CAR model outperforms the RENB and RPNB models. Estimation results provide several significant findings. For example, traffic crash fatalities tend to be higher in provinces with greater numbers of level crossings. Passenger distance travelled and road lengths are also positively associated with fatalities. However, hospital densities are negatively associated with fatalities. The safety impact of the national highway 1A, the main transport corridor of the country, is also highlighted. PMID:27294863
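
    A stripped-down version of such a count model can be fitted with standard tools; the sketch below uses a negative binomial GLM from statsmodels on synthetic province-level data with covariates echoing those in the abstract. The variable names and data are placeholders, and the spatial, random-parameter, and spatiotemporal structure of the RENB, RPNB, and ST-CAR models is not reproduced here.

    ```python
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(42)
    n = 63                                                   # provinces

    level_crossings = rng.poisson(30, n)
    log_distance = np.log(rng.lognormal(8.0, 0.5, n))        # passenger distance travelled
    hospital_density = rng.uniform(0.5, 3.0, n)

    # Synthetic counts generated with signs that mirror the reported effects
    mu = np.exp(1.0 + 0.02 * level_crossings + 0.3 * log_distance - 0.4 * hospital_density)
    fatalities = rng.poisson(mu)

    X = sm.add_constant(np.column_stack([level_crossings, log_distance, hospital_density]))
    model = sm.GLM(fatalities, X, family=sm.families.NegativeBinomial(alpha=0.5))
    fit = model.fit()
    print(fit.params)       # intercept and coefficient estimates
    ```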

  13. Fundamental M-dwarf parameters from high-resolution spectra using PHOENIX ACES models. I. Parameter accuracy and benchmark stars

    NASA Astrophysics Data System (ADS)

    Passegger, V. M.; Wende-von Berg, S.; Reiners, A.

    2016-03-01

    M-dwarf stars are the most numerous stars in the Universe; they span a wide range in mass and are the focus of ongoing and planned exoplanet surveys. To investigate and understand their physical nature, detailed spectral information and accurate stellar models are needed. We use a new synthetic atmosphere model generation and compare model spectra to observations. To test the model accuracy, we compared the models to four benchmark stars with atmospheric parameters for which independent information from interferometric radius measurements is available. We used χ2-based methods to determine parameters from high-resolution spectroscopic observations. Our synthetic spectra are based on the new PHOENIX grid that uses the ACES description for the equation of state. This is a model generation expected to be especially suitable for the low-temperature atmospheres. We identified suitable spectral tracers of atmospheric parameters and determined the uncertainties in Teff, log g, and [Fe/H] resulting from degeneracies between parameters and from shortcomings of the model atmospheres. The inherent uncertainties we find are σTeff = 35 K, σlog g = 0.14, and σ[Fe/H] = 0.11. The new model spectra achieve a reliable match to our observed data; our results for Teff and log g are consistent with literature values to within 1σ. However, metallicities reported from earlier photometric and spectroscopic calibrations in some cases disagree with our results by more than 3σ. A possible explanation are systematic errors in earlier metallicity determinations that were based on insufficient descriptions of the cool atmospheres. At this point, however, we cannot definitely identify the reason for this discrepancy, but our analysis indicates that there is a large uncertainty in the accuracy of M-dwarf parameter estimates. Based on observations carried out with UVES at ESO VLT.
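
    The χ²-based parameter determination can be illustrated with a toy grid search: compute χ² between an observed spectrum and each synthetic spectrum in a (Teff, log g) grid and take the minimum. The spectra below are crude synthetic placeholders, not PHOENIX ACES models, and the grid steps are illustrative.

    ```python
    import numpy as np

    wave = np.linspace(8000.0, 8800.0, 400)          # wavelength grid (Angstrom)

    def toy_spectrum(teff, logg):
        """Placeholder synthetic spectrum: a pseudo-continuum with one temperature- and
        gravity-sensitive absorption line (NOT a PHOENIX ACES model)."""
        depth = 0.6 * (4000.0 / teff)
        width = 1.5 + 0.5 * (logg - 4.5)
        return 1.0 - depth * np.exp(-0.5 * ((wave - 8400.0) / width) ** 2)

    # "Observed" spectrum: a noisy realization of one grid point
    rng = np.random.default_rng(3)
    sigma = 0.005
    obs = toy_spectrum(3400.0, 4.9) + rng.normal(0.0, sigma, wave.size)

    teff_grid = np.arange(3000.0, 4001.0, 100.0)
    logg_grid = np.arange(4.5, 5.51, 0.1)
    chi2 = np.array([[np.sum(((obs - toy_spectrum(t, g)) / sigma) ** 2)
                      for g in logg_grid] for t in teff_grid])

    i, j = np.unravel_index(np.argmin(chi2), chi2.shape)
    print("best-fit Teff, log g:", teff_grid[i], round(logg_grid[j], 1))
    ```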

  14. Strengthen forensic entomology in court--the need for data exploration and the validation of a generalised additive mixed model.

    PubMed

    Baqué, Michèle; Amendt, Jens

    2013-01-01

    Developmental data of juvenile blow flies (Diptera: Calliphoridae) are typically used to calculate the age of immature stages found on or around a corpse and thus to estimate a minimum post-mortem interval (PMI(min)). However, many of those data sets do not take into account that immature blow flies grow in a non-linear fashion. Linear models do not provide sufficient reliability for age estimates and may even lead to an erroneous determination of the PMI(min). According to the Daubert standard and the need for improvements in forensic science, new statistical tools like smoothing methods and mixed models allow the modelling of non-linear relationships and expand the field of statistical analyses. The present study introduces the background and application of these statistical techniques by analysing a model which describes the development of the forensically important blow fly Calliphora vicina at different temperatures. The comparison of three statistical methods (linear regression, generalised additive modelling and generalised additive mixed modelling) clearly demonstrates that only the latter provided regression parameters that reflect the data adequately. We focus explicitly on both the exploration of the data--to assure their quality and to show the importance of checking it carefully prior to conducting the statistical tests--and the validation of the resulting models. Hence, we present a common method for evaluating and testing forensic entomological data sets by using, for the first time, generalised additive mixed models. PMID:22370995

  15. Sum of ranking differences (SRD) to ensemble multivariate calibration model merits for tuning parameter selection and comparing calibration methods.

    PubMed

    Kalivas, John H; Héberger, Károly; Andries, Erik

    2015-04-15

    Most multivariate calibration methods require selection of tuning parameters, such as partial least squares (PLS) or the Tikhonov regularization variant ridge regression (RR). Tuning parameter values determine the direction and magnitude of respective model vectors thereby setting the resultant prediction abilities of the model vectors. Simultaneously, tuning parameter values establish the corresponding bias/variance and the underlying selectivity/sensitivity tradeoffs. Selection of the final tuning parameter is often accomplished through some form of cross-validation and the resultant root mean square error of cross-validation (RMSECV) values are evaluated. However, selection of a "good" tuning parameter with this one model evaluation merit is almost impossible. Including additional model merits assists tuning parameter selection to provide better balanced models as well as allowing for a reasonable comparison between calibration methods. Using multiple merits requires decisions to be made on how to combine and weight the merits into an information criterion. An abundance of options are possible. Presented in this paper is the sum of ranking differences (SRD) to ensemble a collection of model evaluation merits varying across tuning parameters. It is shown that the SRD consensus ranking of model tuning parameters allows automatic selection of the final model, or a collection of models if so desired. Essentially, the user's preference for the degree of balance between bias and variance ultimately decides the merits used in SRD and hence, the tuning parameter values ranked lowest by SRD for automatic selection. The SRD process is also shown to allow simultaneous comparison of different calibration methods for a particular data set in conjunction with tuning parameter selection. Because SRD evaluates consistency across multiple merits, decisions on how to combine and weight merits are avoided. To demonstrate the utility of SRD, a near infrared spectral data set and a
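
    A bare-bones sketch of the SRD idea on invented numbers is given below: candidates (e.g. tuning parameter values) are compared to a reference column by summing absolute rank differences, and the candidate with the smallest SRD is preferred. The full procedure in the paper also validates SRD values against a random-ranking distribution and may use a different reference; none of that is shown here.

      import numpy as np
      from scipy.stats import rankdata

      # Generic SRD sketch: rows are criteria (merits), columns are candidates (e.g. tuning
      # parameter values); the reference is taken as the row-wise mean.  All numbers invented.
      rng = np.random.default_rng(2)
      merits = rng.random((8, 5))                  # 8 merits x 5 candidate tuning parameters

      reference = merits.mean(axis=1)
      ref_ranks = rankdata(reference)

      srd = np.array([np.abs(rankdata(merits[:, j]) - ref_ranks).sum()
                      for j in range(merits.shape[1])])

      best = np.argmin(srd)
      print("SRD per candidate:", srd, "-> candidate with lowest SRD:", best)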

  16. Mathematical models use varying parameter strategies to represent paralyzed muscle force properties: a sensitivity analysis

    PubMed Central

    Frey Law, Laura A; Shields, Richard K

    2005-01-01

    Background Mathematical muscle models may be useful for the determination of appropriate musculoskeletal stresses that will safely maintain the integrity of muscle and bone following spinal cord injury. Several models have been proposed to represent paralyzed muscle, but there have not been any systematic comparisons of modelling approaches to better understand the relationships between model parameters and muscle contractile properties. This sensitivity analysis of simulated muscle forces using three currently available mathematical models provides insight into the differences in modelling strategies as well as any direct parameter associations with simulated muscle force properties. Methods Three mathematical muscle models were compared: a traditional linear model with 3 parameters and two contemporary nonlinear models each with 6 parameters. Simulated muscle forces were calculated for two stimulation patterns (constant frequency and initial doublet trains) at three frequencies (5, 10, and 20 Hz). A sensitivity analysis of each model was performed by altering a single parameter through a range of 8 values, while the remaining parameters were kept at baseline values. Specific simulated force characteristics were determined for each stimulation pattern and each parameter increment. Significant parameter influences for each simulated force property were determined using ANOVA and Tukey's follow-up tests (α ≤ 0.05), and compared to previously reported parameter definitions. Results Each of the linear model's 3 parameters most clearly influences either simulated force magnitude or speed properties, consistent with previous parameter definitions. The nonlinear models' parameters displayed greater redundancy between force magnitude and speed properties. Further, previous parameter definitions for one of the nonlinear models were consistently supported, while the other was only partially supported by this analysis. Conclusion These three mathematical models use
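
    The following toy sketch (not any of the three published muscle models) shows the one-parameter-at-a-time scheme described above: each parameter of an invented force transient is swept over 8 values while the others stay at baseline, and a summary property is recorded.

      import numpy as np

      # Generic one-parameter-at-a-time sensitivity sketch: vary each parameter over 8 values
      # while the others stay at baseline, and record a summary output (here, the peak of a
      # toy force transient).  All values and the model form are invented placeholders.
      def toy_force(t, A, tau_rise, tau_fall):
          return A * (1 - np.exp(-t / tau_rise)) * np.exp(-t / tau_fall)

      t = np.linspace(0, 1.0, 200)
      baseline = {"A": 100.0, "tau_rise": 0.05, "tau_fall": 0.3}

      for name in baseline:
          peaks = []
          for value in np.linspace(0.5 * baseline[name], 1.5 * baseline[name], 8):
              p = dict(baseline, **{name: value})
              peaks.append(toy_force(t, **p).max())
          print(f"{name}: peak force ranges {min(peaks):.1f} .. {max(peaks):.1f}")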

  17. Stochastic modelling of daily rainfall in Nigeria: intra-annual variation of model parameters

    NASA Astrophysics Data System (ADS)

    Jimoh, O. D.; Webster, P.

    1999-09-01

    A Markov model of order 1 may be used to describe the occurrence of wet and dry days in Nigeria. Such models feature two parameter sets; P01 to characterise the probability of a wet day following a dry day and P11 to characterise the probability of a wet day following a wet day. The model parameter sets, when estimated from historical records, are characterised by a distinctive seasonal behaviour. However, the comparison of this seasonal behaviour between rainfall stations is hampered by the noise reflecting the high variability of parameters on successive days. The first part of this article is concerned with methods for smoothing these inherently noisy parameter sets. Smoothing has been approached using Fourier series, averaging techniques, or a combination thereof. It has been found that different methods generally perform well with respect to estimation of the average number of wet events and the frequency duration curves of wet and dry events. Parameterisation of the P01 parameter set is more successful than the P11 in view of the relatively small number of wet events lasting two or more days. The second part of the article is concerned with describing the regional variation in smoothed parameter sets. There is a systematic variation in the P01 parameter set as one moves northwards. In contrast, there is limited regional variation in the P11 set. Although this regional variation in P01 appears to be related to the gradual movement of the Inter Tropical Convergence Zone, the contrasting behaviour of the two parameter sets is difficult to explain on physical grounds.
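
    A minimal sketch of estimating the two transition probabilities from a binary wet/dry series is shown below, on data simulated from known probabilities purely as a placeholder; in the study these estimates are formed for each calendar day across years and then smoothed, e.g. with a low-order Fourier series, which is not reproduced here.

      import numpy as np

      # Estimating first-order Markov parameters from a binary wet/dry sequence:
      # P01 = P(wet | previous day dry), P11 = P(wet | previous day wet).
      rng = np.random.default_rng(3)
      p01_true, p11_true = 0.25, 0.60
      wet = np.zeros(3650, dtype=int)
      for i in range(1, wet.size):
          p = p11_true if wet[i - 1] == 1 else p01_true
          wet[i] = int(rng.random() < p)

      prev, curr = wet[:-1], wet[1:]
      p01_hat = curr[prev == 0].mean()
      p11_hat = curr[prev == 1].mean()
      print(f"estimated P01 = {p01_hat:.3f}, P11 = {p11_hat:.3f}")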

  18. Kinetic analysis of HO{sub 2} addition to ethylene, propene, and isobutene, and thermochemical parameters of alkyl hydroperoxides and hydroperoxide alkyl radicals

    SciTech Connect

    Chen, C.J.; Bozzelli, J.W.

    2000-06-01

    Thermochemical kinetic analysis of the reactions of HO2 radical addition to the primary, secondary, and tertiary carbon-carbon double bonds of ethylene, propene, and isobutene is performed using canonical transition state theory (TST). Thermochemical properties of reactants, alkyl hydroperoxides (ROOH), hydroperoxy alkyl radicals (R-OOH), and transition states (TSs) are determined by ab initio and density functional calculations. Enthalpies of formation (ΔH°f,298) of product radicals (R-OOH) are determined using isodesmic reactions with group balance at the MP4(full)/6-31G(d,p)//MP2(full)/6-31G(d), MP2(full)/6-31G(d), complete basis set model chemistry (CBS-q with MP2(full)/6-31G(d) and B3LYP/6-31G(d) optimized geometries), and density functional (B3LYP/6-31G(d) and B3LYP/6-311+G(3df,2p)//B3LYP/6-31G(d)) levels of calculation. ΔH°f,298 of the TSs are obtained from the ΔH°f,298 of the reactants plus the energy differences between reactants and TSs. Entropy (S°298) and heat capacity (Cp(T), 300 ≤ T/K ≤ 1,500) contributions from vibrations, translation, and external rotation are calculated using the rigid-rotor-harmonic-oscillator approximation based on geometric parameters and vibrational frequencies obtained at the MP2(full)/6-31G(d) and B3LYP/6-31G(d) levels of theory. Selected potential barriers of internal rotation for hydroperoxy alkyl radicals and TSs are calculated at the MP2(full)/6-31G(d) and CBS-Q//MP2(full)/6-31G(d) levels. Contributions of hindered rotors to S°298 and Cp(T) are calculated by the method of Pitzer and Gwinn and by summation over the energy levels obtained by direct diagonalization of the Hamiltonian matrix of hindered internal rotations when the potential barriers of internal rotation are available. Calculated rate constants obtained at the CBS-q//MP2(full)/6-31G(d) and CBS-q//B3LYP/6-31G(d) levels of theory show similar trends with experimental data: HO2 radical

  19. An investigation of the material and model parameters for a constitutive model for MSMAs

    NASA Astrophysics Data System (ADS)

    Dikes, Jason; Feigenbaum, Heidi; Ciocanel, Constantin

    2015-04-01

    A two dimensional constitutive model capable of predicting the magneto-mechanical response of a magnetic shape memory alloy (MSMA) has been developed and calibrated using a zero field-variable stress test [1]. This calibration approach is easy to perform and facilitates a faster evaluation of the three calibration constants required by the model (vs. five calibration constants required by previous models [2,3]). The calibration constants generated with this approach facilitate good model predictions of constant field-variable stress tests, for a wide range of loading conditions [1]. However, the same calibration constants yield less accurate model predictions for constant stress-variable field tests. Deployment of a separate calibration method for this type of loading, using a varying field-zero stress calibration test, also did not lead to improved model predictions of this loading case. As a result, a sensitivity analysis was performed on most model and material parameters to identify which of them may influence model predictions the most, in both types of loading conditions. The sensitivity analysis revealed that changing most of these parameters did not improve model predictions for all loading types. Only the anisotropy coefficient was found to significantly improve field-controlled model predictions while slightly worsening model predictions for stress-controlled cases. This suggests that either the value of the anisotropy coefficient (which is provided by the manufacturer) is not accurate, or that the model is missing features associated with the magnetic energy of the material.

  20. An introduction to modeling longitudinal data with generalized additive models: applications to single-case designs.

    PubMed

    Sullivan, Kristynn J; Shadish, William R; Steiner, Peter M

    2015-03-01

    Single-case designs (SCDs) are short time series that assess intervention effects by measuring units repeatedly over time in both the presence and absence of treatment. This article introduces a statistical technique for analyzing SCD data that has not been much used in psychological and educational research: generalized additive models (GAMs). In parametric regression, the researcher must choose a functional form to impose on the data, for example, that trend over time is linear. GAMs reverse this process by letting the data inform the choice of functional form. In this article we review the problem that trend poses in SCDs, discuss how current SCD analytic methods approach trend, describe GAMs as a possible solution, suggest a GAM model testing procedure for examining the presence of trend in SCDs, present a small simulation to show the statistical properties of GAMs, and illustrate the procedure on 3 examples of different lengths. Results suggest that GAMs may be very useful both as a form of sensitivity analysis for checking the plausibility of assumptions about trend and as a primary data analysis strategy for testing treatment effects. We conclude with a discussion of some problems with GAMs and some future directions for research on the application of GAMs to SCDs. PMID:24885341

  1. Inferring model structural deficits by analyzing temporal dynamics of model performance and parameter sensitivity

    NASA Astrophysics Data System (ADS)

    Reusser, D. E.; Zehe, E.

    2011-07-01

    In this paper we investigate the use of hydrological models as learning tools to help improve our understanding of the hydrological functioning of a catchment. With the model as a hypothetical conceptualization of how dominant hydrological processes contribute to catchment-scale response, we investigate three questions: (1) During which periods does the model (not) reproduce observed quantities and dynamics? (2) What is the nature of the error during times of bad model performance? (3) Which model components are responsible for this error? To investigate these questions, we combine a method for detecting repeating patterns of typical differences between model and observations (time series of grouped errors, TIGER) with a method for identifying the active model components during each simulation time step based on parameter sensitivity (temporal dynamics of parameter sensitivities, TEDPAS). The approach generates a time series of occurrence of dominant error types and time series of parameter sensitivities. A synoptic discussion of these time series highlights deficiencies in the assumptions about the functioning of the catchment. The approach is demonstrated for the Weisseritz headwater catchment in the eastern Ore Mountains. Our results indicate that the WaSiM-ETH complex grid-based model is not a sufficient working hypothesis for the functioning of the Weisseritz catchment and point toward future steps that can help improve our understanding of the catchment.

  2. Dependence of red edge on eddy viscosity model parameters

    NASA Technical Reports Server (NTRS)

    Deupree, R. G.; Cole, P. W.

    1980-01-01

    The dependence of the red edge location on the two fundamental free parameters in the eddy viscosity treatment was extensively studied. It is found that the convective flux is rather insensitive to any reasonable or allowed value of the two free parameters of the treatment. This must be due in part to the fact that the convective flux is determined more by the properties of the hydrogen ionization region than by differences in convective structure. The changes in the effective temperature of the red edge of the RR Lyrae gap resulting from these parameter variations are quite small (approximately 150 K). This is true both because the parameter variation causes only small changes and because large changes in the convective flux are required to produce any significant change in red edge location. The possible changes found are substantially less than the approximately 600 K required to change the predicted helium abundance mass fraction from 0.3 to 0.2.

  3. Statistical inference for the additive hazards model under outcome-dependent sampling

    PubMed Central

    Yu, Jichang; Liu, Yanyan; Sandler, Dale P.; Zhou, Haibo

    2015-01-01

    Cost-effective study design and proper inference procedures for data from such designs are always of particular interest to study investigators. In this article, we propose a biased sampling scheme, an outcome-dependent sampling (ODS) design for survival data with right censoring under the additive hazards model. We develop a weighted pseudo-score estimator for the regression parameters for the proposed design and derive the asymptotic properties of the proposed estimator. We also provide some suggestions for using the proposed method by evaluating the relative efficiency of the proposed method against a simple random sampling design and derive the optimal allocation of the subsamples for the proposed design. Simulation studies show that the proposed ODS design is more powerful than other existing designs and the proposed estimator is more efficient than other estimators. We apply our method to analyze a cancer study conducted at NIEHS, the Cancer Incidence and Mortality of Uranium Miners Study, to study the risk of cancer associated with radon exposure. PMID:26379363
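
    For reference, the additive hazards model referred to above is conventionally written (our notation, a standard textbook form rather than anything specific to this paper) as

      \lambda(t \mid Z) = \lambda_0(t) + \beta^{\top} Z(t),

    where \lambda_0(t) is an unspecified baseline hazard function and \beta is the vector of regression parameters, estimated here through the weighted pseudo-score approach.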

  4. In vivo characterization of two additional Leishmania donovani strains using the murine and hamster model.

    PubMed

    Kauffmann, F; Dumetz, F; Hendrickx, S; Muraille, E; Dujardin, J-C; Maes, L; Magez, S; De Trez, C

    2016-05-01

    Leishmania donovani is a protozoan parasite causing the neglected tropical disease visceral leishmaniasis. One difficulty in studying the immunopathology of L. donovani infection is the limited adaptability of the strains to experimental mammalian hosts. Our knowledge about L. donovani infections relies on a restricted number of East African strains (LV9, 1S). Isolated from patients in the 1960s, these strains were described extensively in mice and Syrian hamsters and have consequently become 'reference' laboratory strains. L. donovani strains from the Indian subcontinent display distinct clinical features compared to East African strains. Some reports describing the in vivo immunopathology of strains from the Indian subcontinent exist. This study comprises a comprehensive immunopathological characterization upon infection with two additional strains, the Ethiopian L. donovani L82 strain and the Nepalese L. donovani BPK282 strain, in both Syrian hamsters and C57BL/6 mice. Parameters including parasitaemia levels, weight loss, hepatosplenomegaly and alterations in the cellular composition of the spleen and liver showed that the L82 strain generated an overall more virulent infection compared to the BPK282 strain. Altogether, both L. donovani strains are suitable and interesting for subsequent in vivo investigation of visceral leishmaniasis in the Syrian hamster and the C57BL/6 mouse model. PMID:27012562

  5. Ecosystem Modeling of College Drinking: Parameter Estimation and Comparing Models to Data*

    PubMed Central

    Ackleh, Azmy S.; Fitzpatrick, Ben G.; Scribner, Richard; Simonsen, Neal; Thibodeaux, Jeremy J.

    2009-01-01

    Recently we developed a model composed of five impulsive differential equations that describes the changes in drinking patterns (that persist at epidemic level) amongst college students. Many of the model parameters cannot be measured directly from data; thus, an inverse problem approach, which chooses the set of parameters that results in the “best” model to data fit, is crucial for using this model as a predictive tool. The purpose of this paper is to present the procedure and results of an unconventional approach to parameter estimation that we developed after more common approaches were unsuccessful for our specific problem. The results show that our model provides a good fit to survey data for 32 campuses. Using these parameter estimates, we examined the effect of two hypothetical intervention policies: 1) reducing environmental wetness, and 2) penalizing students who are caught drinking. The results suggest that reducing campus wetness may be a very effective way of reducing heavy episodic (binge) drinking on a college campus, while a policy that penalizes students who drink is not nearly as effective. PMID:20161275

  6. The Role of a Steepness Parameter in the Exponential Stability of a Model Problem. Numerical Aspects

    NASA Astrophysics Data System (ADS)

    Todorovic, N.

    2011-06-01

    The Nekhoroshev theorem considers quasi integrable Hamiltonians, providing stability of actions over exponentially long times. One of the hypotheses required by the theorem is a mathematical condition called steepness. Nekhoroshev conjectured that different steepness properties should imply numerically observable differences in the stability times. After a recent study on this problem (Guzzo et al. 2011, Todorovic et al. 2011) we show some additional numerical results on the change of resonances and the diffusion laws produced by the increasing effect of steepness. The experiments are performed on a 4-dimensional steep symplectic map designed in a way that a parameter smoothly regulates the steepness properties in the model.

  7. A pharmacometric case study regarding the sensitivity of structural model parameter estimation to error in patient reported dosing times.

    PubMed

    Knights, Jonathan; Rohatagi, Shashank

    2015-12-01

    Although there is a body of literature focused on minimizing the effect of dosing inaccuracies on pharmacokinetic (PK) parameter estimation, most of the work centers on missing doses. No attempt has been made to specifically characterize the effect of error in reported dosing times. Additionally, existing work has largely dealt with cases in which the compound of interest is dosed at an interval no less than its terminal half-life. This work provides a case study investigating how error in patient reported dosing times might affect the accuracy of structural model parameter estimation under sparse sampling conditions when the dosing interval is less than the terminal half-life of the compound, and the underlying kinetics are monoexponential. Additional effects due to noncompliance with dosing events are not explored and it is assumed that the structural model and reasonable initial estimates of the model parameters are known. Under the conditions of our simulations, with structural model CV % ranging from ~20 to 60 %, parameter estimation inaccuracy derived from error in reported dosing times was largely controlled around 10 % on average. Given that no observed dosing was included in the design and sparse sampling was utilized, we believe these error results represent a practical ceiling given the variability and parameter estimates for the one-compartment model. The findings suggest additional investigations may be of interest and are noteworthy given the inability of current PK software platforms to accommodate error in dosing times. PMID:26209956
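
    As a hedged illustration of the underlying superposition model (not the simulation design of the paper), the sketch below perturbs reported dosing times of a monoexponential one-compartment profile and records how much the model-predicted concentration at a single sparse sample shifts; all dose, volume, and half-life values are invented.

      import numpy as np

      # Toy illustration: superposition of monoexponential doses, with reported dosing times
      # perturbed by random error, and the resulting relative concentration error at a single
      # sparse sampling time.  All parameter values are invented placeholders.
      rng = np.random.default_rng(4)
      dose, V, k = 100.0, 50.0, np.log(2) / 24.0        # half-life 24 h, dosed every 12 h
      true_times = np.arange(0, 7 * 24, 12.0)           # one week of q12h dosing
      t_sample = true_times[-1] + 6.0                   # sparse sample 6 h after last dose

      def conc(t, dose_times):
          elapsed = t - dose_times[dose_times <= t]
          return (dose / V) * np.exp(-k * elapsed).sum()

      true_c = conc(t_sample, true_times)
      errors = []
      for _ in range(1000):
          reported = true_times + rng.normal(0, 1.0, true_times.size)   # 1 h SD reporting error
          errors.append(100 * (conc(t_sample, reported) - true_c) / true_c)
      print(f"concentration error from misreported times: mean {np.mean(errors):.2f}%, "
            f"SD {np.std(errors):.2f}%")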

  8. Structural modelling and control design under incomplete parameter information: The maximum-entropy approach

    NASA Technical Reports Server (NTRS)

    Hyland, D. C.

    1983-01-01

    A stochastic structural control model is described. In contrast to the customary deterministic model, the stochastic minimum data/maximum entropy model directly incorporates the least possible a priori parameter information. The approach is to adopt this model as the basic design model, thus incorporating the effects of parameter uncertainty at a fundamental level, and design mean-square optimal controls (that is, choose the control law to minimize the average of a quadratic performance index over the parameter ensemble).

  9. Parameter sensitivity and uncertainty analysis for a storm surge and wave model

    NASA Astrophysics Data System (ADS)

    Bastidas, L. A.; Knighton, J.; Kline, S. W.

    2015-10-01

    Development and simulation of synthetic hurricane tracks is a common methodology used to estimate hurricane hazards in the absence of empirical coastal surge and wave observations. Such methods typically rely on numerical models to translate stochastically generated hurricane wind and pressure forcing into coastal surge and wave estimates. The model output uncertainty associated with selection of appropriate model parameters must therefore be addressed. The computational overburden of probabilistic surge hazard estimates is exacerbated by the high dimensionality of numerical surge and wave models. We present a model parameter sensitivity analysis of the Delft3D model for the simulation of hazards posed by Hurricane Bob (1991) utilizing three theoretical wind distributions (NWS23, modified Rankine, and Holland). The sensitive model parameters (of eleven total considered) include wind drag, the depth-induced breaking γB, and the bottom roughness. Several parameters show no sensitivity (threshold depth, eddy viscosity, wave triad parameters and depth-induced breaking αB) and can therefore be excluded to reduce the computational overburden of probabilistic surge hazard estimates. The sensitive model parameters also demonstrate strong interactions between parameters and a non-linear model response. While model outputs showed sensitivity to several parameters, the ability of these parameters to act as tuning parameters for calibration is somewhat limited as proper model calibration is strongly reliant on accurate wind and pressure forcing data. A comparison of the model performance with forcings from the different wind models is also presented.

  10. [Study on the automatic parameters identification of water pipe network model].

    PubMed

    Jia, Hai-Feng; Zhao, Qi-Feng

    2010-01-01

    Based on an analysis of the problems encountered in developing and applying water pipe network models, automatic identification of model parameters is identified as a key bottleneck for applying such models in water supply enterprises. A methodology for automatic parameter identification of water pipe network models based on GIS and SCADA databases is proposed. The core algorithm is then studied: RSA (Regionalized Sensitivity Analysis) is used for automatic recognition of sensitive parameters, MCS (Monte-Carlo Sampling) is used for automatic identification of parameter values, and the detailed technical route based on RSA and MCS is presented. A module for automatic parameter identification of water pipe network models is developed. Finally, a typical water pipe network is selected as a case study of automatic model parameter identification, and satisfactory results are achieved. PMID:20329520

  11. Detonation of highly dilute porous explosives; II: Influence of inert additives on the structure of the front, the parameters, and the reaction time

    SciTech Connect

    Shvedov, K.K.; Aniskin, A.I.; Dremin, A.N.; Il'in, A.N.

    1982-06-01

    For the detonation of porous explosives with inert additives, as for the detonation of individual porous explosives, the basic postulates and conclusions of the modern gasdynamic theory of detonation are valid. The influence of solid, refractory inert additives on the decomposition mechanism of porous explosives depends on the individual properties of the explosives and mainly on the dispersity of the additives. With the elimination of pronounced heating of the additives in mixtures with TNT, a certain positive influence on the appearance of decomposition sources and the total reaction time is observed. In cases with hexogen, no such influence is observed, which is evidently the result of physical inhomogeneity of the porous structure of the charge and the sufficiently high detonation pressures of the mixtures. The basic influence of inert additives on the critical diameter, front structure, detonation parameters, and reaction time of porous explosives is exerted through processes of energy absorption in the reaction region, and factors leading to energy losses may lead to ambiguity of the detonation conditions in a system with specified chemical potential energy. The state of the additive in the reaction region must be taken into account for a reliable theoretical description of the detonation conditions of porous explosives with a large content of inert additives.

  12. Estimation of Ecosystem Parameters of the Community Land Model with DREAM: Evaluation of the Potential for Upscaling Net Ecosystem Exchange

    NASA Astrophysics Data System (ADS)

    Hendricks Franssen, H. J.; Post, H.; Vrugt, J. A.; Fox, A. M.; Baatz, R.; Kumbhar, P.; Vereecken, H.

    2015-12-01

    Estimation of net ecosystem exchange (NEE) by land surface models is strongly affected by uncertain ecosystem parameters and initial conditions. A possible approach is the estimation of plant functional type (PFT) specific parameters for sites with measurement data like NEE and application of the parameters at other sites with the same PFT and no measurements. This upscaling strategy was evaluated in this work for sites in Germany and France. Ecosystem parameters and initial conditions were estimated with NEE time series of one year in length, or a time series of only one season. The DREAM(zs) algorithm was used for the estimation of parameters and initial conditions. DREAM(zs) is not limited to Gaussian distributions and can condition to large time series of measurement data simultaneously. DREAM(zs) was used in combination with the Community Land Model (CLM) v4.5. Parameter estimates were evaluated by model predictions at the same site for an independent verification period. In addition, the parameter estimates were evaluated at other, independent sites situated >500 km away with the same PFT. The main conclusions are: i) simulations with estimated parameters better reproduced the NEE measurement data in the verification periods, including the annual NEE sum (23% improvement), the annual NEE cycle and the average diurnal NEE course (error reduction by a factor of 1.6); ii) estimated parameters based on seasonal NEE data outperformed estimated parameters based on yearly data; iii) in addition, those seasonal parameters were often also significantly different from their yearly equivalents; iv) estimated parameters were significantly different if initial conditions were estimated together with the parameters. We conclude that estimated PFT-specific parameters improve land surface model predictions significantly at independent verification sites and for independent verification periods, so that their potential for upscaling is demonstrated. However, simulation results also indicate

  13. A Paradox between IRT Invariance and Model-Data Fit When Utilizing the One-Parameter and Three-Parameter Models

    ERIC Educational Resources Information Center

    Custer, Michael; Sharairi, Sid; Yamazaki, Kenji; Signatur, Diane; Swift, David; Frey, Sharon

    2008-01-01

    The present study compared item and ability invariance as well as model-data fit between the one-parameter (1PL) and three-parameter (3PL) Item Response Theory (IRT) models utilizing real data across five grades; second through sixth as well as simulated data at second, fourth and sixth grade. At each grade, the 1PL and 3PL IRT models were run…
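
    For context, the item response functions being compared are (our notation, standard IRT forms rather than anything taken from this report): the three-parameter logistic model

      P(X_{ij} = 1 \mid \theta_i) = c_j + (1 - c_j)\,\frac{1}{1 + \exp[-a_j(\theta_i - b_j)]},

    with discrimination a_j, difficulty b_j, and pseudo-guessing c_j, while the one-parameter model constrains a_j to a common value and fixes c_j = 0.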

  14. Exploring the role of model parameters and regularization procedures in the thermodynamics of the PNJL model

    SciTech Connect

    Ruivo, M. C.; Costa, P.; Sousa, C. A. de; Hansen, H.

    2010-08-05

    The equation of state and the critical behavior around the critical end point are studied in the framework of the Polyakov-Nambu-Jona-Lasinio model. We prove that a convenient choice of the model parameters is crucial to get the correct description of isentropic trajectories. The physical relevance of the effects of the regularization procedure is insured by the agreement with general thermodynamic requirements. The results are compared with simple thermodynamic expectations and lattice data.

  15. Improved landslide-tsunami prediction: Effects of block model parameters and slide model

    NASA Astrophysics Data System (ADS)

    Heller, Valentin; Spinneken, Johannes

    2013-03-01

    Subaerial landslide-tsunamis and impulse waves are caused by mass movements impacting into a water body, and the hazards they pose have to be reliably assessed. Empirical equations developed with physical Froude model studies can be an efficient method for such predictions. The present study improves this methodology and addresses two significant shortcomings in detail for the first time: these are the effect of three commonly ignored block model parameters and whether the slide is represented by a rigid block or a deformable granular material. A total of 144 block slide tests were conducted in a wave flume under systematic variation of three important block model parameters, the slide Froude number, the relative slide thickness, and the relative slide mass. Empirical equations for the maximum wave amplitude, height, and period as well as their evolution with propagation distance are derived. For most wave parameters, remarkably small data scatter is achieved. The combined influence of the three block model parameters affects the wave amplitude and wave height by up to a factor of two. The newly derived equations for block slides are then related to published equations for granular slides. This comparison reveals that block slides do not necessarily generate larger waves than granular slides, as often argued in the technical literature. In fact, it is shown that they may also generate significantly smaller waves. The new findings can readily be integrated in existing hazard assessment methodologies, and they explain a large part of the discrepancy between previously published data.

  16. STATISTICAL METHODOLOGY FOR ESTIMATING PARAMETERS IN PBPK/PD MODELS

    EPA Science Inventory

    PBPK/PD models are large dynamic models that predict tissue concentration and biological effects of a toxicant. Before PBPK/PD models can be used in risk assessments in the arena of toxicological hypothesis testing, models allow the consequences of alternative mechanistic hypothes...

  17. Parameter identification for the electrical modeling of semiconductor bridges.

    SciTech Connect

    Gray, Genetha Anne

    2005-03-01

    Semiconductor bridges (SCBs) are commonly used as initiators for explosive and pyrotechnic devices. Their advantages include reduced voltage and energy requirements and exceptional safety features. Moreover, the design of systems which implement SCBs can be expedited using electrical simulation software. Successful use of this software requires that certain parameters be correctly chosen. In this paper, we explain how these parameters can be identified using optimization. We describe the problem focusing on the application of a direct optimization method for its solution, and present some numerical results.
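
    A generic sketch of derivative-free parameter identification is given below; Nelder-Mead from SciPy stands in for whichever direct optimization method was actually applied, and the two-parameter exponential "model" and its data are placeholders, not an SCB electrical model.

      import numpy as np
      from scipy.optimize import minimize

      # Fit two model parameters so that a simulated response matches measurements,
      # using a derivative-free ("direct") optimizer on the sum-of-squares misfit.
      t = np.linspace(0, 1e-3, 50)
      true_p = np.array([2.0, 3000.0])
      measured = true_p[0] * (1 - np.exp(-true_p[1] * t)) \
                 + np.random.default_rng(5).normal(0, 0.02, t.size)

      def misfit(p):
          simulated = p[0] * (1 - np.exp(-p[1] * t))
          return np.sum((simulated - measured) ** 2)

      result = minimize(misfit, x0=[1.0, 1000.0], method="Nelder-Mead")
      print("identified parameters:", result.x)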

  18. Multiobjective adaptive surrogate modeling-based optimization for parameter estimation of large, complex geophysical models

    NASA Astrophysics Data System (ADS)

    Gong, Wei; Duan, Qingyun; Li, Jianduo; Wang, Chen; Di, Zhenhua; Ye, Aizhong; Miao, Chiyuan; Dai, Yongjiu

    2016-03-01

    Parameter specification is an important source of uncertainty in large, complex geophysical models. These models generally have multiple model outputs that require multiobjective optimization algorithms. Although such algorithms have long been available, they usually require a large number of model runs and are therefore computationally expensive for large, complex dynamic models. In this paper, a multiobjective adaptive surrogate modeling-based optimization (MO-ASMO) algorithm is introduced that aims to reduce computational cost while maintaining optimization effectiveness. Geophysical dynamic models usually have a prior parameterization scheme derived from the physical processes involved, and our goal is to improve all of the objectives by parameter calibration. In this study, we developed a method for directing the search processes toward the region that can improve all of the objectives simultaneously. We tested the MO-ASMO algorithm against NSGA-II and SUMO with 13 test functions and a land surface model - the Common Land Model (CoLM). The results demonstrated the effectiveness and efficiency of MO-ASMO.

  19. Cognitive Models of Risky Choice: Parameter Stability and Predictive Accuracy of Prospect Theory

    ERIC Educational Resources Information Center

    Glockner, Andreas; Pachur, Thorsten

    2012-01-01

    In the behavioral sciences, a popular approach to describe and predict behavior is cognitive modeling with adjustable parameters (i.e., which can be fitted to data). Modeling with adjustable parameters allows, among other things, measuring differences between people. At the same time, parameter estimation also bears the risk of overfitting. Are…
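
    For context, the commonly fitted cumulative prospect theory specification (our notation; the study may use variants) has value and probability-weighting functions

      v(x) = x^{\alpha} \text{ for } x \ge 0, \qquad v(x) = -\lambda(-x)^{\beta} \text{ for } x < 0,
      w(p) = \frac{p^{\gamma}}{\left( p^{\gamma} + (1-p)^{\gamma} \right)^{1/\gamma}},

    so the adjustable parameters are the curvature exponents \alpha and \beta, the loss-aversion coefficient \lambda, and the weighting parameter \gamma.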

  20. Thermal Stability of Nanocrystalline Alloys by Solute Additions and A Thermodynamic Modeling

    NASA Astrophysics Data System (ADS)

    Saber, Mostafa

    and α → γ phase transformation in Fe-Ni-Zr alloys. In addition to the experimental study of thermal stabilization of nanocrystalline Fe-Cr-Zr or Fe-Ni-Zr alloys, the thesis presented here developed a new predictive model, applicable to strongly segregating solutes, for thermodynamic stabilization of binary alloys. This model can serve as a benchmark for selecting solutes and evaluating their possible contribution to stabilization. Following a regular solution model, both the chemical and elastic strain energy contributions are combined to obtain the mixing enthalpy. The total Gibbs free energy of mixing is then minimized with respect to simultaneous variations in the grain boundary volume fraction and the solute concentration in the grain boundary and the grain interior. The Lagrange multiplier method was used to obtain numerical solutions. Applications are given for the temperature dependence of the grain size and the grain boundary solute excess for selected binary systems where experimental results imply that thermodynamic stabilization could be operative. This thesis also extends the binary model to a new model for thermodynamic stabilization of ternary nanocrystalline alloys. It is applicable to strongly segregating size-misfit solutes and uses input data available in the literature. In the same manner as the binary model, this model is based on a regular solution approach such that the chemical and elastic strain energy contributions are incorporated into the mixing enthalpy ΔHmix, and the mixing entropy ΔSmix is obtained using the ideal solution approximation. The Gibbs mixing free energy ΔGmix is then minimized with respect to simultaneous variations in grain growth and solute segregation parameters. The Lagrange multiplier method is similarly used to obtain numerical solutions for the minimum ΔGmix. The temperature dependence of the nanocrystalline grain size and interfacial solute excess can be obtained for selected ternary systems. As

  1. Lumped Parameter Modeling for Rapid Vibration Response Prototyping and Test Correlation for Electronic Units

    NASA Technical Reports Server (NTRS)

    Van Dyke, Michael B.

    2013-01-01

    Present preliminary work using lumped parameter models to approximate dynamic response of electronic units to random vibration; Derive a general N-DOF model for application to electronic units; Illustrate parametric influence of model parameters; Implication of coupled dynamics for unit/board design; Demonstrate use of model to infer printed wiring board (PWB) dynamics from external chassis test measurement.

  2. Parameter optimization method for the water quality dynamic model based on data-driven theory.

    PubMed

    Liang, Shuxiu; Han, Songlin; Sun, Zhaochen

    2015-09-15

    Parameter optimization is important for developing a water quality dynamic model. In this study, we applied data-driven method to select and optimize parameters for a complex three-dimensional water quality model. First, a data-driven model was developed to train the response relationship between phytoplankton and environmental factors based on the measured data. Second, an eight-variable water quality dynamic model was established and coupled to a physical model. Parameter sensitivity analysis was investigated by changing parameter values individually in an assigned range. The above results served as guidelines for the control parameter selection and the simulated result verification. Finally, using the data-driven model to approximate the computational water quality model, we employed the Particle Swarm Optimization (PSO) algorithm to optimize the control parameters. The optimization routines and results were analyzed and discussed based on the establishment of the water quality model in Xiangshan Bay (XSB). PMID:26277602
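
    For orientation, a minimal particle swarm optimizer is sketched below on a stand-in objective; in the application above the objective would be the misfit of the (data-driven surrogate of the) water quality model, and the swarm settings here are generic textbook defaults rather than the values used in the study.

      import numpy as np

      # Minimal PSO sketch on the sphere function as a placeholder objective.
      rng = np.random.default_rng(6)

      def objective(x):
          return np.sum(x ** 2, axis=1)

      n_particles, n_dim, n_iter = 30, 4, 200
      w, c1, c2 = 0.7, 1.5, 1.5                      # inertia and acceleration coefficients

      x = rng.uniform(-5, 5, (n_particles, n_dim))
      v = np.zeros_like(x)
      pbest, pbest_f = x.copy(), objective(x)
      gbest = pbest[np.argmin(pbest_f)].copy()

      for _ in range(n_iter):
          r1, r2 = rng.random((2, n_particles, n_dim))
          v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
          x = x + v
          f = objective(x)
          improved = f < pbest_f
          pbest[improved], pbest_f[improved] = x[improved], f[improved]
          gbest = pbest[np.argmin(pbest_f)].copy()

      print("best point:", gbest, "objective:", objective(gbest[None, :])[0])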

  3. Generalized Continental Scale Hydrologic Model Parameter Estimates: Application to a VIC model implementation for the Contiguous United States (CONUS)

    NASA Astrophysics Data System (ADS)

    Mizukami, N.; Clark, M. P.; Nijssen, B.; Sampson, K. M.; Newman, A. J.; Samaniego, L. E.

    2014-12-01

    Parameter estimation is one of the biggest challenges in hydrologic modeling, particularly over large spatial scales. Model uncertainty as a result of parameter values can be as large as that from other sources such as the choice of hydrologic model or the choice of model forcing data. Thus far, parameter estimation has been performed in an inconsistent manner across the model domain, e.g., using patchy calibration or spatially constant parameters. This can produce artifacts in the spatial variability of model outputs, e.g., discontinuity of simulated hydrologic fields, difficulty with spatially consistent parameter adjustments, and so on. We implement a framework that is suitable for use across multiple model physics options to map between geophysical attributes (i.e., soil, vegetation) and model parameters that describe the storage and transmission of water and energy. Specifically, we apply the transfer functions that transform geophysical attributes into model parameters and apply these transfer functions at the native resolution of the geophysical attribute data rather than at the resolution of the model application. The model parameters are then aggregated to the spatial scale of the model simulation with several scaling functions - arithmetic mean, harmonic mean, geometric mean. Model parameter adjustments are made by calibrating the parameters of the transfer function rather than the model parameters themselves.We demonstrate this general parameter estimation approach using a continental scale VIC implementation at a 12km resolution. The VIC soil parameters were generated by a set of transfer functions developed with nation-wide STATSGO soil data. The VIC model with new soil parameters is forced with Maurer et al. 2002 climate dataset (1979-2008) and the simulation results are compared with the previous simulations with parameters used in past studies as well as observed streamflows at selected basins.

  4. Impact of Temporal Data Resolution on Parameter Inference and Model Identification in Conceptual Hydrological Modeling: Insights from an Experimental Catchment

    NASA Astrophysics Data System (ADS)

    Fenicia, F.; Kavetski, D.; Clark, M.

    2010-12-01

    A major issue in hydrological and broader environmental modeling is the uncertainty in the observed data, in particular, the effects of sparse data sampling and averaging to temporal and spatial scales that well exceed those of many hydrological dynamics of interest. This study presents quantitative and qualitative insights into the time scale dependencies of hydrological parameters, predictions and their uncertainties, and examines the impact of the time resolution of the calibration data on the identifiable system complexity. Data from an experimental basin (Weierbach, Luxembourg) is used to analyze four conceptual models of varying complexity, over time scales of 30 min to 3 days, using several combinations of numerical implementations and inference equations. Large spurious time scale trends arise in the parameter estimates when unreliable time stepping approximations are employed, and/or when the heteroscedasticity of the model residual errors is ignored. Conversely, the use of robust numerics and more adequate (albeit still imperfect) likelihood functions markedly stabilizes the time scale dependencies and improves the identifiability of increasingly complex model structures. Parameters describing slowflow remained essentially constant over the range of sub-hourly to daily scales considered here, while parameters describing quickflow converged towards increasingly precise and stable estimates as the data resolution approached the characteristic time scale of these faster processes. These results are consistent with theoretical expectations based on numerical error analysis and data-averaging considerations. Additional diagnostics confirmed the improved ability of the more complex models to reproduce distinct signatures in the observed data. More broadly, this study provides insights into the information content of data and, through robust numerical and statistical techniques, furthers the utilization of dense-resolution data and experimental insights to

  5. Impact of temporal data resolution on parameter inference and model identification in conceptual hydrological modeling: Insights from an experimental catchment

    NASA Astrophysics Data System (ADS)

    Kavetski, Dmitri; Fenicia, Fabrizio; Clark, Martyn P.

    2011-05-01

    This study presents quantitative and qualitative insights into the time scale dependencies of hydrological parameters, predictions and their uncertainties, and examines the impact of the time resolution of the calibration data on the identifiable system complexity. Data from an experimental basin (Weierbach, Luxembourg) is used to analyze four conceptual models of varying complexity, over time scales of 30 min to 3 days, using several combinations of numerical implementations and inference equations. Large spurious time scale trends arise in the parameter estimates when unreliable time-stepping approximations are employed and/or when the heteroscedasticity of the model residual errors is ignored. Conversely, the use of robust numerics and more adequate (albeit still clearly imperfect) likelihood functions markedly stabilizes and, in many cases, reduces the time scale dependencies and improves the identifiability of increasingly complex model structures. Parameters describing slow flow remained essentially constant over the range of subhourly to daily scales considered here, while parameters describing quick flow converged toward increasingly precise and stable estimates as the data resolution approached the characteristic time scale of these faster processes. These results are consistent with theoretical expectations based on numerical error analysis and data-averaging considerations. Additional diagnostics confirmed the improved ability of the more complex models to reproduce distinct signatures in the observed data. More broadly, this study provides insights into the information content of hydrological data and, by advocating careful attention to robust numericostatistical analysis and stringent process-oriented diagnostics, furthers the utilization of dense-resolution data and experimental insights to advance hypothesis-based hydrological modeling at the catchment scale.

  6. Using Spreadsheets to Discover Meaning for Parameters in Nonlinear Models

    ERIC Educational Resources Information Center

    Green, Kris H.

    2008-01-01

    This paper explores the use of spreadsheets to develop an exploratory environment where mathematics students can develop their own understanding of the parameters of commonly encountered families of functions: linear, logarithmic, exponential and power. The key to this understanding involves opening up the definition of rate of change from the…

  7. Constructing Approximate Confidence Intervals for Parameters with Structural Equation Models

    ERIC Educational Resources Information Center

    Cheung, Mike W. -L.

    2009-01-01

    Confidence intervals (CIs) for parameters are usually constructed based on the estimated standard errors. These are known as Wald CIs. This article argues that likelihood-based CIs (CIs based on likelihood ratio statistics) are often preferred to Wald CIs. It shows how the likelihood-based CIs and the Wald CIs for many statistics and psychometric…

  8. Distributed parameter modelling of flexible spacecraft: Where's the beef?

    NASA Technical Reports Server (NTRS)

    Hyland, D. C.

    1994-01-01

    This presentation discusses various misgivings concerning the directions and productivity of Distributed Parameter System (DPS) theory as applied to spacecraft vibration control. We try to show the need for greater cross-fertilization between DPS theorists and spacecraft control designers. We recommend a shift in research directions toward exploration of asymptotic frequency response characteristics of critical importance to control designers.

  9. SMA actuators for vibration control and experimental determination of model parameters dependent on ambient airflow velocity

    NASA Astrophysics Data System (ADS)

    Suzuki, Y.

    2016-05-01

    This article demonstrates the practical applicability of a method of modelling shape memory alloys (SMAs) as actuators. For this study, a pair of SMA wires was installed in an antagonistic manner to form an actuator, and a linear differential equation that describes the behaviour of the actuator’s generated force relative to its input voltage was derived for the limited range below the austenite onset temperature. In this range, hysteresis need not be considered, and the proposed SMA actuator can therefore be practically applied in linear control systems, which is significant because large deformations accompanied by hysteresis do not necessarily occur in most vibration control cases. When specific values of the parameters used in the differential equation were identified experimentally, it became clear that one of the parameters was dependent on ambient airflow velocity. The values of this dependent parameter were obtained using an additional SMA wire as a sensor. In these experiments, while the airflow distribution around the SMA wires was varied by changing the rotational speed of the fans in the wind tunnels, an input voltage was conveyed to the SMA actuator circuit, and the generated force was measured. In this way, the parameter dependent on airflow velocity was estimated in real time, and it was validated that the calculated force was consistent with the measured one.

  10. Individual based modeling and parameter estimation for a Lotka-Volterra system.

    PubMed

    Waniewski, J; Jedruch, W

    1999-03-15

    The stochastic component, inevitable in biological systems, makes estimation of the model parameters from a single sequence of measurements problematic, despite complete knowledge of the system. We studied the problem of parameter estimation using individual-based computer simulations of a 'Lotka-Volterra world'. Two kinds (species) of particles--X (prey) and Y (predators)--moved on a sphere according to deterministic rules and at the collision (interaction) of X and Y the particle X was changed to a new particle Y. Birth of prey and death of predators were simulated by addition of X and removal of Y, respectively, according to exponential probability distributions. With this arrangement of the system, the numbers of particles of each kind might be described by the Lotka-Volterra equations. The simulations of the system with a low number of individuals (200-400 particles on average) showed unstable oscillations of the population size. In some simulation runs one of the species became extinct. Nevertheless, the oscillations had some generic properties (e.g. the mean oscillation period within a simulation run, the mean ratio of the amplitudes of consecutive maxima of X and Y numbers, etc.) characteristic for the solutions of the Lotka-Volterra equations. This observation made it possible to estimate the four parameters of the Lotka-Volterra model with high accuracy and good precision. The estimation was performed using the integral form of the Lotka-Volterra equations and two-parameter linear regression for each oscillation cycle separately. We conclude that in spite of the irregular time course of the number of individuals in each population due to the stochastic intraspecies component, the generic features of the simulated system evolution can provide enough information for quantitative estimation of the system parameters. PMID:10194922
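
    The integral-form regression mentioned above can be sketched as follows (invented rate constants, noise-free trajectories rather than individual-based counts): integrating d ln X/dt = a - bY over time makes ln X linear in the two unknowns a and b, which are then recovered by ordinary least squares.

      import numpy as np
      from scipy.integrate import solve_ivp

      # dX/dt = a*X - b*X*Y and dY/dt = c*X*Y - d*Y, so over any interval
      #   ln X(t2) - ln X(t1) = a*(t2 - t1) - b*Integral(Y dt),
      # which is linear in (a, b) and can be fitted by least squares.
      a, b, c, d = 1.0, 0.02, 0.01, 0.8

      sol = solve_ivp(lambda t, z: [a*z[0] - b*z[0]*z[1], c*z[0]*z[1] - d*z[1]],
                      (0, 20), [60, 30], dense_output=True, max_step=0.01)
      t = np.linspace(0, 20, 400)
      X, Y = sol.sol(t)

      dt = np.diff(t)
      int_Y = np.cumsum(0.5 * (Y[1:] + Y[:-1]) * dt)      # trapezoidal Integral(Y dt) from t[0]
      design = np.column_stack([t[1:] - t[0], -int_Y])
      a_hat, b_hat = np.linalg.lstsq(design, np.log(X[1:]) - np.log(X[0]), rcond=None)[0]
      print(f"recovered a = {a_hat:.3f} (true {a}), b = {b_hat:.4f} (true {b})")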

  11. Factor analysis models for structuring covariance matrices of additive genetic effects: a Bayesian implementation

    PubMed Central

    de los Campos, Gustavo; Gianola, Daniel

    2007-01-01

    Multivariate linear models are increasingly important in quantitative genetics. In high dimensional specifications, factor analysis (FA) may provide an avenue for structuring (co)variance matrices, thus reducing the number of parameters needed for describing (co)dispersion. We describe how FA can be used to model genetic effects in the context of a multivariate linear mixed model. An orthogonal common factor structure is used to model genetic effects under a Gaussian assumption, so that the marginal likelihood is multivariate normal with a structured genetic (co)variance matrix. Under standard prior assumptions, all fully conditional distributions have closed form, and samples from the joint posterior distribution can be obtained via Gibbs sampling. The model and the algorithm developed for its Bayesian implementation were used to describe five repeated records of milk yield in dairy cattle, and a one-common-factor FA model was compared with a standard multiple-trait model. The Bayesian Information Criterion favored the FA model. PMID:17897592

  12. Using Dirichlet Priors to Improve Model Parameter Plausibility

    ERIC Educational Resources Information Center

    Rai, Dovan; Gong, Yue; Beck, Joseph E.

    2009-01-01

    Student modeling is a widely used approach to make inference about a student's attributes like knowledge, learning,