Science.gov

Sample records for additional model parameters

  1. Separating response-execution bias from decision bias: arguments for an additional parameter in Ratcliff's diffusion model.

    PubMed

    Voss, Andreas; Voss, Jochen; Klauer, Karl Christoph

    2010-11-01

    Diffusion model data analysis permits the disentangling of different processes underlying the effects of experimental manipulations. Estimates can be provided for the speed of information accumulation, for the amount of information used to draw conclusions, and for a decision bias. One parameter describes the duration of non-decisional processes including the duration of motor-response execution. In the default diffusion model, it is implicitly assumed that both responses are executed with the same speed. In some applications of the diffusion model, this assumption will be violated. This will lead to biased parameter estimates. Consequently, we suggest accounting explicitly for differences in the speed of response execution for both responses. Results from a simulation study illustrate that parameter estimates from the default model are biased if the speed of response execution differs between responses. A second simulation study shows that large trial numbers (N>1,000) are needed to detect whether differences in response-execution times are based on different execution times.
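
    A minimal simulation sketch of this idea, assuming a basic Wiener diffusion with arbitrary illustrative parameter values; the response-specific non-decision times t0_upper and t0_lower below are our own names for the added parameter, not the authors' notation.

      import numpy as np

      def simulate_trial(v=0.3, a=1.0, z=0.5, s=1.0, dt=1e-3,
                         t0_upper=0.30, t0_lower=0.45, rng=None):
          """Simulate one diffusion-model trial with response-specific non-decision times.

          v: drift rate, a: boundary separation, z: relative starting point (0..1),
          s: diffusion coefficient. t0_upper / t0_lower are illustrative values for the
          duration of non-decisional processes attached to the upper / lower response.
          """
          if rng is None:
              rng = np.random.default_rng()
          x = z * a            # absolute starting position of the accumulator
          t = 0.0
          while 0.0 < x < a:   # accumulate evidence until a boundary is crossed
              x += v * dt + s * np.sqrt(dt) * rng.standard_normal()
              t += dt
          if x >= a:
              return t + t0_upper, "upper"
          return t + t0_lower, "lower"

      rng = np.random.default_rng(1)
      trials = [simulate_trial(rng=rng) for _ in range(2000)]
      upper = [rt for rt, resp in trials if resp == "upper"]
      lower = [rt for rt, resp in trials if resp == "lower"]
      print(f"mean RT upper: {np.mean(upper):.3f} s, mean RT lower: {np.mean(lower):.3f} s")

    Fitting the default model (a single non-decision time) to data generated this way is what produces the biased estimates described in the first simulation study.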

  2. An Additional Approach to Model Current Followers and Amplifiers with Electronically Controllable Parameters from Commercially Available ICs

    NASA Astrophysics Data System (ADS)

    Sotner, R.; Kartci, A.; Jerabek, J.; Herencsar, N.; Dostal, T.; Vrba, K.

    2012-12-01

    Several behavioral models of current active elements for experimental purposes are introduced in this paper. These models are based on commercially available devices. They are suitable for experimental tests of current- and mixed-mode filters, oscillators, and other circuits (employing current-mode active elements) frequently used in analog signal processing, without the necessity of on-chip fabrication of the proper active element. Several methods of electronic control of intrinsic resistance in the proposed behavioral models are discussed. All predictions and theoretical assumptions are supported by simulations and experiments. This contribution offers a cheaper and more effective route to preliminary laboratory tests, avoiding expensive on-chip fabrication of special active elements.

  3. Functional Generalized Additive Models.

    PubMed

    McLean, Mathew W; Hooker, Giles; Staicu, Ana-Maria; Scheipl, Fabian; Ruppert, David

    2014-01-01

    We introduce the functional generalized additive model (FGAM), a novel regression model for association studies between a scalar response and a functional predictor. We model the link-transformed mean response as the integral with respect to t of F{X(t), t} where F(·,·) is an unknown regression function and X(t) is a functional covariate. Rather than having an additive model in a finite number of principal components as in Müller and Yao (2008), our model incorporates the functional predictor directly and thus our model can be viewed as the natural functional extension of generalized additive models. We estimate F(·,·) using tensor-product B-splines with roughness penalties. A pointwise quantile transformation of the functional predictor is also considered to ensure each tensor-product B-spline has observed data on its support. The methods are evaluated using simulated data and their predictive performance is compared with other competing scalar-on-function regression alternatives. We illustrate the usefulness of our approach through an application to brain tractography, where X(t) is a signal from diffusion tensor imaging at position, t, along a tract in the brain. In one example, the response is disease-status (case or control) and in a second example, it is the score on a cognitive test. R code for performing the simulations and fitting the FGAM can be found in supplemental materials available online.
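
    To make the model structure above concrete, the FGAM can be written as below; the tensor-product B-spline representation is sketched in our own notation (the basis sizes K_x and K_t are illustrative, not taken from the paper).

        g\{E(Y_i \mid X_i)\} = \theta_0 + \int_{\mathcal{T}} F\{X_i(t), t\}\,dt,
        \qquad
        F(x, t) \approx \sum_{j=1}^{K_x} \sum_{k=1}^{K_t} \theta_{jk}\, B_j(x)\, B_k(t),

    where g is the link function and the coefficients \theta_{jk} are estimated under roughness penalties.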

  4. Numerical modeling of heat-transfer and the influence of process parameters on tailoring the grain morphology of IN718 in electron beam additive manufacturing

    DOE PAGES

    Raghavan, Narendran; Dehoff, Ryan; Pannala, Sreekanth; Simunovic, Srdjan; Kirka, Michael; Turner, John; Carlson, Neil; Babu, Sudarsanam S.

    2016-04-26

    The fabrication of 3-D parts from CAD models by additive manufacturing (AM) is a disruptive technology that is transforming the metal manufacturing industry. The correlation between solidification microstructure and mechanical properties has been well understood in the casting and welding processes over the years. This paper focuses on extending these principles to additive manufacturing to understand the transient phenomena of repeated melting and solidification during the electron beam powder melting process, with the goal of achieving site-specific microstructure control within a fabricated component. In this paper, we have developed a novel melt scan strategy for electron beam melting of a nickel-base superalloy (Inconel 718) and also analyzed 3-D heat transfer conditions using a parallel numerical solidification code (Truchas) developed at Los Alamos National Laboratory. The spatial and temporal variations of temperature gradient (G) and growth velocity (R) at the liquid-solid interface of the melt pool were calculated as a function of electron beam parameters. By manipulating the relative number of voxels that lie in the columnar or equiaxed region, the crystallographic texture of the components can be controlled to an extent. The analysis of the parameters provided optimum processing conditions that will result in columnar to equiaxed transition (CET) during the solidification. Furthermore, the results from the numerical simulations were validated by experimental processing and characterization, thereby proving the potential of the additive manufacturing process to achieve site-specific crystallographic texture control within a fabricated component.
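
    A minimal sketch of the kind of voxel-level post-processing described above: classifying computed (G, R) pairs as columnar or equiaxed. The threshold form and its value are placeholders for illustration only, not the alloy-specific CET criterion used in the paper.

      import numpy as np

      def classify_voxels(G, R, threshold=1.0e7):
          """Classify solidification morphology voxel by voxel from G and R fields.

          G: temperature gradient [K/m], R: growth (solidification) velocity [m/s],
          both arrays of the same shape. Real CET criteria are typically of the form
          G**n / R compared against alloy-specific constants; the simple G/R ratio
          and the threshold used here are placeholders.
          """
          ratio = G / np.maximum(R, 1e-12)
          return np.where(ratio > threshold, "columnar", "equiaxed")

      # Synthetic G and R fields standing in for the simulation output
      rng = np.random.default_rng(0)
      G = rng.uniform(1e4, 1e6, size=(50, 50))    # K/m
      R = rng.uniform(1e-3, 1e-1, size=(50, 50))  # m/s
      labels = classify_voxels(G, R)
      print(f"fraction of columnar voxels: {np.mean(labels == 'columnar'):.2f}")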

  5. Additive Manufacturing of Single-Crystal Superalloy CMSX-4 Through Scanning Laser Epitaxy: Computational Modeling, Experimental Process Development, and Process Parameter Optimization

    NASA Astrophysics Data System (ADS)

    Basak, Amrita; Acharya, Ranadip; Das, Suman

    2016-08-01

    This paper focuses on additive manufacturing (AM) of single-crystal (SX) nickel-based superalloy CMSX-4 through scanning laser epitaxy (SLE). SLE, a powder bed fusion-based AM process was explored for the purpose of producing crack-free, dense deposits of CMSX-4 on top of similar chemistry investment-cast substrates. Optical microscopy and scanning electron microscopy (SEM) investigations revealed the presence of dendritic microstructures that consisted of fine γ' precipitates within the γ matrix in the deposit region. Computational fluid dynamics (CFD)-based process modeling, statistical design of experiments (DoE), and microstructural characterization techniques were combined to produce metallurgically bonded single-crystal deposits of more than 500 μm height in a single pass along the entire length of the substrate. A customized quantitative metallography based image analysis technique was employed for automatic extraction of various deposit quality metrics from the digital cross-sectional micrographs. The processing parameters were varied, and optimal processing windows were identified to obtain good quality deposits. The results reported here represent one of the few successes obtained in producing single-crystal epitaxial deposits through a powder bed fusion-based metal AM process and thus demonstrate the potential of SLE to repair and manufacture single-crystal hot section components of gas turbine systems from nickel-based superalloy powders.

  6. Mixed additive models

    NASA Astrophysics Data System (ADS)

    Carvalho, Francisco; Covas, Ricardo

    2016-06-01

    We consider mixed models $y=\sum_{i=0}^{w} X_i\beta_i$ with $V(y)=\sum_{i=1}^{w}\theta_i M_i$, where $M_i=X_i X_i^\top$, $i=1,\dots,w$, and $\mu=X_0\beta_0$. For these we will estimate the variance components $\theta_1,\dots,\theta_w$, as well as estimable vectors, through the decomposition of the initial model into sub-models $y(h)$, $h\in\Gamma$, with $V(y(h))=\gamma(h)I_{g(h)}$, $h\in\Gamma$. Moreover, we will consider L extensions of these models, i.e., $\mathring{y}=Ly+\epsilon$, where $L=D(1_{n_1},\dots,1_{n_w})$ and $\epsilon$, independent of $y$, has null mean vector and variance-covariance matrix $\theta_{w+1}I_n$, where $n=\sum_{i=1}^{w} n_i$.

  7. Computational Process Modeling for Additive Manufacturing

    NASA Technical Reports Server (NTRS)

    Bagg, Stacey; Zhang, Wei

    2014-01-01

    Computational Process and Material Modeling of Powder Bed additive manufacturing of IN 718. Optimize material build parameters with reduced time and cost through modeling. Increase understanding of build properties. Increase reliability of builds. Decrease time to adoption of process for critical hardware. Potential to decrease post-build heat treatments. Conduct single-track and coupon builds at various build parameters. Record build parameter information and QM Meltpool data. Refine Applied Optimization powder bed AM process model using data. Report thermal modeling results. Conduct metallography of build samples. Calibrate STK models using metallography findings. Run STK models using AO thermal profiles and report STK modeling results. Validate modeling with additional build. Photodiode Intensity measurements highly linear with power input. Melt Pool Intensity highly correlated to Melt Pool Size. Melt Pool size and intensity increase with power. Applied Optimization will use data to develop powder bed additive manufacturing process model.

  8. Parameter estimation for transformer modeling

    NASA Astrophysics Data System (ADS)

    Cho, Sung Don

    are simulated and compared with an earlier-developed BCTRAN-based model. Black start energization cases are also simulated as a means of model evaluation and compared with actual event records. The simulated results using the model developed here are reasonable and more correct than those of the BCTRAN-based model. Simulation accuracy is dependent on the accuracy of the equipment model and its parameters. This work is significant in that it advances existing parameter estimation methods in cases where the available data and measurements are incomplete. The accuracy of EMTP simulation for power systems including three-phase autotransformers is thus enhanced. Theoretical results obtained from this work provide a sound foundation for development of transformer parameter estimation methods using engineering optimization. In addition, it should be possible to refine which information and measurement data are necessary for complete duality-based transformer models. To further refine and develop the models and transformer parameter estimation methods developed here, iterative full-scale laboratory tests using high-voltage and high-power three-phase transformer would be helpful.

  9. Parameter uncertainty for ASP models

    SciTech Connect

    Knudsen, J.K.; Smith, C.L.

    1995-10-01

    The steps involved in incorporating parameter uncertainty into the Nuclear Regulatory Commission (NRC) accident sequence precursor (ASP) models are covered in this paper. Three different uncertainty distributions (i.e., lognormal, beta, gamma) were evaluated to determine the most appropriate distribution. From the evaluation, it was determined that the lognormal distribution will be used for the ASP models' uncertainty parameters. Selection of the uncertainty parameters for the basic events is also discussed. This paper covers the process of determining uncertainty parameters for the supercomponent basic events (i.e., basic events that comprise more than one component, each of which can have more than one failure mode) that are utilized in the ASP models. Once this is completed, the ASP model is ready to be utilized to propagate parameter uncertainty for event assessments.
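
    A minimal Monte Carlo sketch of the propagation step, assuming lognormal uncertainty on basic-event probabilities; the two-pump supercomponent, the medians, and the error factors below are purely hypothetical and not taken from the ASP models.

      import numpy as np

      rng = np.random.default_rng(42)
      n_samples = 100_000

      def sample_lognormal(median, error_factor, size):
          """Sample a lognormal given its median and a 95th/50th percentile error factor."""
          mu = np.log(median)
          sigma = np.log(error_factor) / 1.645     # EF = exp(1.645 * sigma)
          return rng.lognormal(mean=mu, sigma=sigma, size=size)

      # Hypothetical supercomponent that fails only if both (independent) trains fail
      pump_a = sample_lognormal(1e-3, 5.0, n_samples)
      pump_b = sample_lognormal(1e-3, 5.0, n_samples)
      supercomponent = pump_a * pump_b             # AND gate, independence assumed

      print("mean failure probability:", supercomponent.mean())
      print("5th-95th percentile:", np.percentile(supercomponent, [5, 95]))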

  10. Parameter identification in continuum models

    NASA Technical Reports Server (NTRS)

    Banks, H. T.; Crowley, J. M.

    1983-01-01

    Approximation techniques for use in numerical schemes for estimating spatially varying coefficients in continuum models such as those for Euler-Bernoulli beams are discussed. The techniques are based on quintic spline state approximations and cubic spline parameter approximations. Both theoretical and numerical results are presented. Previously announced in STAR as N83-28934

  11. Parameter identification in continuum models

    NASA Technical Reports Server (NTRS)

    Banks, H. T.; Crowley, J. M.

    1983-01-01

    Approximation techniques for use in numerical schemes for estimating spatially varying coefficients in continuum models such as those for Euler-Bernoulli beams are discussed. The techniques are based on quintic spline state approximations and cubic spline parameter approximations. Both theoretical and numerical results are presented.

  12. Additional Investigations of Ice Shape Sensitivity to Parameter Variations

    NASA Technical Reports Server (NTRS)

    Miller, Dean R.; Potapczuk, Mark G.; Langhals, Tammy J.

    2006-01-01

    A second parameter sensitivity study was conducted at the NASA Glenn Research Center's Icing Research Tunnel (IRT) using a 36 in. chord (0.91 m) NACA-0012 airfoil. The objective of this work was to further investigate the feasibility of using ice shape feature changes to define requirements for the simulation and measurement of SLD and appendix C icing conditions. A previous study concluded that it was feasible to use changes in ice shape features (e.g., ice horn angle, ice horn thickness, and ice shape mass) to detect relatively small variations in icing spray condition parameters (LWC, MVD, and temperature). The subject of this current investigation extends the scope of this previous work, by also examining the effect of icing tunnel spray-bar parameter variations (water pressure, air pressure) on ice shape feature changes. The approach was to vary spray-bar water pressure and air pressure, and then evaluate the effects of these parameter changes on the resulting ice shapes. This paper will provide a description of the experimental method, present selected experimental results, and conclude with an evaluation of these results.

  13. Computational Process Modeling for Additive Manufacturing (OSU)

    NASA Technical Reports Server (NTRS)

    Bagg, Stacey; Zhang, Wei

    2015-01-01

    Powder-Bed Additive Manufacturing (AM) through Direct Metal Laser Sintering (DMLS) or Selective Laser Melting (SLM) is being used by NASA and the Aerospace industry to "print" parts that traditionally are very complex, high cost, or long schedule lead items. The process spreads a thin layer of metal powder over a build platform, then melts the powder in a series of welds in a desired shape. The next layer of powder is applied, and the process is repeated until layer-by-layer, a very complex part can be built. This reduces cost and schedule by eliminating very complex tooling and processes traditionally used in aerospace component manufacturing. To use the process to print end-use items, NASA seeks to understand SLM material well enough to develop a method of qualifying parts for space flight operation. Traditionally, a new material process takes many years and high investment to generate statistical databases and experiential knowledge, but computational modeling can truncate the schedule and cost -many experiments can be run quickly in a model, which would take years and a high material cost to run empirically. This project seeks to optimize material build parameters with reduced time and cost through modeling.

  14. Criteria for deviation from predictions by the concentration addition model.

    PubMed

    Takeshita, Jun-Ichi; Seki, Masanori; Kamo, Masashi

    2016-07-01

    Loewe's additivity (concentration addition) is a well-known model for predicting the toxic effects of chemical mixtures under the additivity assumption of toxicity. However, from the perspective of chemical risk assessment and/or management, it is important to identify chemicals whose toxicities are additive when present concurrently, that is, it should be established whether there are chemical mixtures to which the concentration addition predictive model can be applied. The objective of the present study was to develop criteria for judging test results that deviated from the predictions by the concentration addition chemical mixture model. These criteria were based on the confidence interval of the concentration addition model's prediction and on estimation of errors of the predicted concentration-effect curves by toxicity tests after exposure to single chemicals. A log-logit model with 2 parameters was assumed for the concentration-effect curve of each individual chemical. These parameters were determined by the maximum-likelihood method, and the criteria were defined using the variances and the covariance of the parameters. In addition, the criteria were applied to a toxicity test of a binary mixture of p-n-nonylphenol and p-n-octylphenol using the Japanese killifish, medaka (Oryzias latipes). Consequently, the concentration addition model using confidence interval was capable of predicting the test results at any level, and no reason for rejecting the concentration addition was found. Environ Toxicol Chem 2016;35:1806-1814. © 2015 SETAC. PMID:26660330
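
    For reference, the two building blocks referred to above take the standard forms below (our notation; alpha_i and beta_i are the two log-logit parameters per chemical):

        E_i(c) = \frac{1}{1 + \exp\{-(\alpha_i + \beta_i \log_{10} c)\}},
        \qquad
        \sum_{i=1}^{n} \frac{c_i}{EC_{x,i}} = 1,

    where E_i(c) is the effect of chemical i alone at concentration c and EC_{x,i} is the concentration of chemical i alone that produces effect level x; a mixture (c_1, ..., c_n) satisfying the second equation is predicted, under concentration addition, to produce effect x.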

  15. Testing Nested Additive, Multiplicative, and General Multitrait-Multimethod Models.

    ERIC Educational Resources Information Center

    Coenders, Germa; Saris, Willem E.

    2000-01-01

    Provides alternatives to the definitions of additive and multiplicative method effects in multitrait-multimethod data given by D. Campbell and E. O'Connell (1967). The alternative definitions can be formulated by means of constraints in the parameters of the correlated uniqueness model (H. Marsh, 1989). (SLD)

  16. Parameter Estimation of Partial Differential Equation Models.

    PubMed

    Xun, Xiaolei; Cao, Jiguo; Mallick, Bani; Carroll, Raymond J; Maity, Arnab

    2013-01-01

    Partial differential equation (PDE) models are commonly used to model complex dynamic systems in applied sciences such as biology and finance. The forms of these PDE models are usually proposed by experts based on their prior knowledge and understanding of the dynamic system. Parameters in PDE models often have interesting scientific interpretations, but their values are often unknown, and need to be estimated from the measurements of the dynamic system in the presence of measurement errors. Most PDEs used in practice have no analytic solutions, and can only be solved with numerical methods. Currently, methods for estimating PDE parameters require repeatedly solving PDEs numerically under thousands of candidate parameter values, and thus the computational load is high. In this article, we propose two methods to estimate parameters in PDE models: a parameter cascading method and a Bayesian approach. In both methods, the underlying dynamic process modeled with the PDE model is represented via basis function expansion. For the parameter cascading method, we develop two nested levels of optimization to estimate the PDE parameters. For the Bayesian method, we develop a joint model for data and the PDE, and develop a novel hierarchical model allowing us to employ Markov chain Monte Carlo (MCMC) techniques to make posterior inference. Simulation studies show that the Bayesian method and parameter cascading method are comparable, and both outperform other available methods in terms of estimation accuracy. The two methods are demonstrated by estimating parameters in a PDE model from LIDAR data. PMID:24363476
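
    For context, the conventional approach whose cost the authors aim to reduce can be sketched as follows: repeatedly solve the PDE numerically for candidate parameter values and minimize the misfit. The heat-equation example, grid sizes, and noise level below are illustrative only, not from the paper.

      import numpy as np
      from scipy.optimize import minimize_scalar

      def solve_heat_equation(D, nx=50, nt=1000, L=1.0, T=0.1):
          """Explicit finite-difference solution of u_t = D * u_xx with u = 0 at both ends."""
          dx, dt = L / (nx - 1), T / nt
          x = np.linspace(0.0, L, nx)
          u = np.sin(np.pi * x)                    # initial condition
          r = D * dt / dx**2                       # explicit scheme is stable for r <= 0.5
          for _ in range(nt):
              u[1:-1] = u[1:-1] + r * (u[2:] - 2.0 * u[1:-1] + u[:-2])
          return u

      # Synthetic "measurements" generated with a true diffusivity of 0.8 plus noise
      rng = np.random.default_rng(0)
      data = solve_heat_equation(0.8) + 0.01 * rng.standard_normal(50)

      def misfit(D):
          # Each evaluation requires a full numerical solution of the PDE
          return np.sum((solve_heat_equation(D) - data) ** 2)

      result = minimize_scalar(misfit, bounds=(0.1, 2.0), method="bounded")
      print(f"estimated diffusivity: {result.x:.3f}")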

  17. Moose models with vanishing S parameter

    SciTech Connect

    Casalbuoni, R.; De Curtis, S.; Dominici, D.

    2004-09-01

    In the linear moose framework, which naturally emerges in deconstruction models, we show that there is a unique solution for the vanishing of the S parameter at the lowest order in the weak interactions. We consider an effective gauge theory based on K SU(2) gauge groups, K+1 chiral fields, and electroweak groups SU(2)_L and U(1)_Y at the ends of the chain of the moose. S vanishes when a link in the moose chain is cut. As a consequence one has to introduce a dynamical nonlocal field connecting the two ends of the moose. Then the model acquires an additional custodial symmetry which protects this result. We also examine the possibility of a strong suppression of S through an exponential behavior of the link couplings as suggested by the Randall-Sundrum metric.

  18. Understanding Parameter Invariance in Unidimensional IRT Models

    ERIC Educational Resources Information Center

    Rupp, Andre A.; Zumbo, Bruno D.

    2006-01-01

    One theoretical feature that makes item response theory (IRT) models those of choice for many psychometric data analysts is parameter invariance, the equality of item and examinee parameters from different examinee populations or measurement conditions. In this article, using the well-known fact that item and examinee parameters are identical only…

  19. Transferability and additivity of dihedral parameters in polarizable and nonpolarizable empirical force fields.

    PubMed

    Zgarbová, Marie; Rosnik, Andreana M; Luque, F Javier; Curutchet, Carles; Jurečka, Petr

    2015-09-30

    Recent advances in polarizable force fields have revealed that major reparameterization is necessary when the polarization energy is treated explicitly. This study is focused on the torsional parameters, which are crucial for the accurate description of conformational equilibria in biomolecules. In particular, attention is paid to the influence of polarization on the (i) transferability of dihedral terms between molecules, (ii) transferability between different environments, and (iii) additivity of dihedral energies. To this end, three polarizable force fields based on the induced point dipole model designed for use in AMBER are tested, including two recent ff02 reparameterizations. Attention is paid to the contributions due to short range interactions (1-2, 1-3, and 1-4) within the four atoms defining the dihedral angle. The results show that when short range 1-2 and 1-3 polarization interactions are omitted, as for instance in ff02, the 1-4 polarization contribution is rather small and unlikely to improve the description of the torsional energy. Conversely, when screened 1-2 and 1-3 interactions are included, the polarization contribution is sizeable and shows potential to improve the transferability of parameters between different molecules and environments as well as the additivity of dihedral terms. However, to reproduce intramolecular polarization effects accurately, further fine-tuning of the short range damping of polarization is necessary.

  20. Transferability and additivity of dihedral parameters in polarizable and nonpolarizable empirical force fields.

    PubMed

    Zgarbová, Marie; Rosnik, Andreana M; Luque, F Javier; Curutchet, Carles; Jurečka, Petr

    2015-09-30

    Recent advances in polarizable force fields have revealed that major reparameterization is necessary when the polarization energy is treated explicitly. This study is focused on the torsional parameters, which are crucial for the accurate description of conformational equilibria in biomolecules. In particular, attention is paid to the influence of polarization on the (i) transferability of dihedral terms between molecules, (ii) transferability between different environments, and (iii) additivity of dihedral energies. To this end, three polarizable force fields based on the induced point dipole model designed for use in AMBER are tested, including two recent ff02 reparameterizations. Attention is paid to the contributions due to short range interactions (1-2, 1-3, and 1-4) within the four atoms defining the dihedral angle. The results show that when short range 1-2 and 1-3 polarization interactions are omitted, as for instance in ff02, the 1-4 polarization contribution is rather small and unlikely to improve the description of the torsional energy. Conversely, when screened 1-2 and 1-3 interactions are included, the polarization contribution is sizeable and shows potential to improve the transferability of parameters between different molecules and environments as well as the additivity of dihedral terms. However, to reproduce intramolecular polarization effects accurately, further fine-tuning of the short range damping of polarization is necessary. PMID:26224547

  1. Model parameter updating using Bayesian networks

    SciTech Connect

    Treml, C. A.; Ross, Timothy J.

    2004-01-01

    This paper outlines a model parameter updating technique for a new method of model validation using a modified model reference adaptive control (MRAC) framework with Bayesian Networks (BNs). The model parameter updating within this method is generic in the sense that the model/simulation to be validated is treated as a black box. It must have updateable parameters to which its outputs are sensitive, and those outputs must have metrics that can be compared to that of the model reference, i.e., experimental data. Furthermore, no assumptions are made about the statistics of the model parameter uncertainty, only upper and lower bounds need to be specified. This method is designed for situations where a model is not intended to predict a complete point-by-point time domain description of the item/system behavior; rather, there are specific points, features, or events of interest that need to be predicted. These specific points are compared to the model reference derived from actual experimental data. The logic for updating the model parameters to match the model reference is formed via a BN. The nodes of this BN consist of updateable model input parameters and the specific output values or features of interest. Each time the model is executed, the input/output pairs are used to adapt the conditional probabilities of the BN. Each iteration further refines the inferred model parameters to produce the desired model output. After parameter updating is complete and model inputs are inferred, reliabilities for the model output are supplied. Finally, this method is applied to a simulation of a resonance control cooling system for a prototype coupled cavity linac. The results are compared to experimental data.

  2. Global Model Analysis by Parameter Space Partitioning

    ERIC Educational Resources Information Center

    Pitt, Mark A.; Kim, Woojae; Navarro, Daniel J.; Myung, Jay I.

    2006-01-01

    To model behavior, scientists need to know how models behave. This means learning what other behaviors a model can produce besides the one generated by participants in an experiment. This is a difficult problem because of the complexity of psychological models (e.g., their many parameters) and because the behavioral precision of models (e.g.,…

  3. A new approach to NMR chemical shift additivity parameters using simultaneous linear equation method.

    PubMed

    Shahab, Yosif A; Khalil, Rabah A

    2006-10-01

    A new approach to NMR chemical shift additivity parameters using the simultaneous linear equation method has been introduced. Three general nitrogen-15 NMR chemical shift additivity parameters with physical significance for aliphatic amines in methanol and cyclohexane and their hydrochlorides in methanol have been derived. A characteristic feature of these additivity parameters is that each individual equation can be applied to both open-chain and rigid systems. The factors that influence the (15)N chemical shift of these substances have been determined. A new method for evaluating conformational equilibria at nitrogen in these compounds using the derived additivity parameters has been developed. Conformational analyses of these substances have been worked out. In general, the results indicate that there are four factors affecting the (15)N chemical shift of aliphatic amines: the paramagnetic term (p-character), lone pair-proton interactions, proton-proton interactions, and symmetry of alkyl substituents and molecular association.
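
    The simultaneous-linear-equation idea can be illustrated with a toy least-squares system in which each observed shift is written as a base value plus additivity increments for the substituents present; the substituent columns and shift values below are invented for illustration and are not the parameters derived in the paper.

      import numpy as np

      # Design matrix columns: [base shift, alpha-substituent increment, beta-substituent increment]
      # Each row encodes one hypothetical amine by how many alpha and beta substituents it carries.
      A = np.array([
          [1, 0, 0],
          [1, 1, 0],
          [1, 2, 0],
          [1, 1, 1],
          [1, 2, 1],
      ], dtype=float)

      # Hypothetical observed 15N chemical shifts (ppm) for the five compounds
      delta_obs = np.array([0.0, 12.1, 23.8, 20.0, 31.6])

      params, _, _, _ = np.linalg.lstsq(A, delta_obs, rcond=None)
      base, alpha_inc, beta_inc = params
      print(f"base = {base:.1f} ppm, alpha increment = {alpha_inc:.1f} ppm, "
            f"beta increment = {beta_inc:.1f} ppm")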

  4. Additional field verification of convective scaling for the lateral dispersion parameter

    SciTech Connect

    Sakiyama, S.K.; Davis, P.A.

    1988-07-01

    The results of a series of diffusion trials over the heterogeneous surface of the Canadian Precambrian Shield provide additional support for the convective scaling of the lateral dispersion parameter. The data indicate that under convective conditions, the lateral dispersion parameter can be scaled with the convective velocity scale and the mixing depth. 10 references.

  5. On Interpreting the Model Parameters for the Three Parameter Logistic Model

    ERIC Educational Resources Information Center

    Maris, Gunter; Bechger, Timo

    2009-01-01

    This paper addresses two problems relating to the interpretability of the model parameters in the three parameter logistic model. First, it is shown that if the values of the discrimination parameters are all the same, the remaining parameters are nonidentifiable in a nontrivial way that involves not only ability and item difficulty, but also the…

  6. Network reconstruction using nonparametric additive ODE models.

    PubMed

    Henderson, James; Michailidis, George

    2014-01-01

    Network representations of biological systems are widespread and reconstructing unknown networks from data is a focal problem for computational biologists. For example, the series of biochemical reactions in a metabolic pathway can be represented as a network, with nodes corresponding to metabolites and edges linking reactants to products. In a different context, regulatory relationships among genes are commonly represented as directed networks with edges pointing from influential genes to their targets. Reconstructing such networks from data is a challenging problem receiving much attention in the literature. There is a particular need for approaches tailored to time-series data and not reliant on direct intervention experiments, as the former are often more readily available. In this paper, we introduce an approach to reconstructing directed networks based on dynamic systems models. Our approach generalizes commonly used ODE models based on linear or nonlinear dynamics by extending the functional class for the functions involved from parametric to nonparametric models. Concomitantly we limit the complexity by imposing an additive structure on the estimated slope functions. Thus the submodel associated with each node is a sum of univariate functions. These univariate component functions form the basis for a novel coupling metric that we define in order to quantify the strength of proposed relationships and hence rank potential edges. We show the utility of the method by reconstructing networks using simulated data from computational models for the glycolytic pathway of Lactocaccus Lactis and a gene network regulating the pluripotency of mouse embryonic stem cells. For purposes of comparison, we also assess reconstruction performance using gene networks from the DREAM challenges. We compare our method to those that similarly rely on dynamic systems models and use the results to attempt to disentangle the distinct roles of linearity, sparsity, and derivative
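
    Written compactly (our notation), the additive ODE structure described in this record is

        \frac{dx_j(t)}{dt} = \sum_{k=1}^{p} f_{jk}\{x_k(t)\}, \qquad j = 1, \dots, p,

    where each f_{jk} is an unknown univariate function estimated nonparametrically; an edge from node k to node j is proposed when the estimated f_{jk} is judged sufficiently far from zero by the coupling metric.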

  7. Network Reconstruction Using Nonparametric Additive ODE Models

    PubMed Central

    Henderson, James; Michailidis, George

    2014-01-01

    Network representations of biological systems are widespread and reconstructing unknown networks from data is a focal problem for computational biologists. For example, the series of biochemical reactions in a metabolic pathway can be represented as a network, with nodes corresponding to metabolites and edges linking reactants to products. In a different context, regulatory relationships among genes are commonly represented as directed networks with edges pointing from influential genes to their targets. Reconstructing such networks from data is a challenging problem receiving much attention in the literature. There is a particular need for approaches tailored to time-series data and not reliant on direct intervention experiments, as the former are often more readily available. In this paper, we introduce an approach to reconstructing directed networks based on dynamic systems models. Our approach generalizes commonly used ODE models based on linear or nonlinear dynamics by extending the functional class for the functions involved from parametric to nonparametric models. Concomitantly we limit the complexity by imposing an additive structure on the estimated slope functions. Thus the submodel associated with each node is a sum of univariate functions. These univariate component functions form the basis for a novel coupling metric that we define in order to quantify the strength of proposed relationships and hence rank potential edges. We show the utility of the method by reconstructing networks using simulated data from computational models for the glycolytic pathway of Lactocaccus Lactis and a gene network regulating the pluripotency of mouse embryonic stem cells. For purposes of comparison, we also assess reconstruction performance using gene networks from the DREAM challenges. We compare our method to those that similarly rely on dynamic systems models and use the results to attempt to disentangle the distinct roles of linearity, sparsity, and derivative

  8. Rheological parameters of dough with inulin addition and its effect on bread quality

    NASA Astrophysics Data System (ADS)

    Bojnanska, T.; Tokar, M.; Vollmannova, A.

    2015-04-01

    The rheological properties of enriched flour prepared with an addition of inulin were studied. The addition of inulin changed the rheological parameters of the recorded curve. Additions of 10% and more significantly extended the development time, and the farinogram showed two consistency peaks, which is a non-standard shape. With increasing inulin addition, resistance to deformation grows and the dough becomes difficult to process; additions over 15% make the dough short and unsuitable for making bread. Bread volume, the most important parameter, decreased significantly with inulin addition. Our results suggest a level of 5% inulin to produce a functional bread of high sensory acceptance, and a level of 10% inulin to produce a bread of satisfactory sensory acceptance. Bread with a level of inulin over 10% was unsatisfactory.

  9. CREATION OF THE MODEL ADDITIONAL PROTOCOL

    SciTech Connect

    Houck, F.; Rosenthal, M.; Wulf, N.

    2010-05-25

    In 1991, the international nuclear nonproliferation community was dismayed to discover that the implementation of safeguards by the International Atomic Energy Agency (IAEA) under its NPT INFCIRC/153 safeguards agreement with Iraq had failed to detect Iraq's nuclear weapon program. It was now clear that ensuring that states were fulfilling their obligations under the NPT would require not just detecting diversion but also the ability to detect undeclared materials and activities. To achieve this, the IAEA initiated what would turn out to be a five-year effort to reappraise the NPT safeguards system. The effort engaged the IAEA and its Member States and led to agreement in 1997 on a new safeguards agreement, the Model Protocol Additional to the Agreement(s) between States and the International Atomic Energy Agency for the Application of Safeguards. The Model Protocol makes explicit that one IAEA goal is to provide assurance of the absence of undeclared nuclear material and activities. The Model Protocol requires an expanded declaration that identifies a State's nuclear potential, empowers the IAEA to raise questions about the correctness and completeness of the State's declaration, and, if needed, allows IAEA access to locations. The information required and the locations available for access are much broader than those provided for under INFCIRC/153. The negotiation was completed in quite a short time because it started with a relatively complete draft of an agreement prepared by the IAEA Secretariat. This paper describes how the Model Protocol was constructed and reviews key decisions that were made both during the five-year period and in the actual negotiation.

  10. Electroacoustics modeling of piezoelectric welders for ultrasonic additive manufacturing processes

    NASA Astrophysics Data System (ADS)

    Hehr, Adam; Dapino, Marcelo J.

    2016-04-01

    Ultrasonic additive manufacturing (UAM) is a recent 3D metal printing technology which utilizes ultrasonic vibrations from high power piezoelectric transducers to additively weld similar and dissimilar metal foils. CNC machining is used intermittently with welding to create internal channels, embed temperature-sensitive components, sensors, and materials, and to net-shape parts. Structural dynamics of the welder and work piece influence the performance of the welder and part quality. To understand the impact of structural dynamics on UAM, a linear time-invariant model is used to relate the system inputs of shear force and electric current to the system outputs of welder velocity and voltage. Frequency response measurements are combined with in-situ operating measurements of the welder to identify model parameters and to verify model assumptions. The proposed LTI model can enhance process consistency and performance, and guide the development of improved quality monitoring and control strategies.
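
    As a generic illustration of such a two-input/two-output LTI description (our notation, not necessarily the authors'), in the frequency domain:

        \begin{bmatrix} v(s) \\ V(s) \end{bmatrix} =
        \begin{bmatrix} G_{11}(s) & G_{12}(s) \\ G_{21}(s) & G_{22}(s) \end{bmatrix}
        \begin{bmatrix} F(s) \\ I(s) \end{bmatrix},

    where F is the interface shear force, I the transducer current, v the welder velocity, and V the transducer voltage; the four transfer functions G_{ij}(s) would be identified from the frequency response and in-situ operating measurements.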

  11. Detecting contaminated birthdates using generalized additive models

    PubMed Central

    2014-01-01

    Background: Erroneous patient birthdates are common in health databases. Detection of these errors usually involves manual verification, which can be resource intensive and impractical. By identifying a frequent manifestation of birthdate errors, this paper presents a principled and statistically driven procedure to identify erroneous patient birthdates. Results: Generalized additive models (GAM) enabled explicit incorporation of known demographic trends and birth patterns. With false positive rates controlled, the method identified birthdate contamination with high accuracy. In the health data set used, of the 58 actual incorrect birthdates manually identified by the domain expert, the GAM-based method identified 51, with 8 false positives (resulting in a positive predictive value of 86.0% (51/59) and a false negative rate of 12.0% (7/58)). These results outperformed linear time-series models. Conclusions: The GAM-based method is an effective approach to identify systemic birthdate errors, a common data quality issue in both clinical and administrative databases, with high accuracy. PMID:24923281
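
    One possible implementation sketch of the flagging logic, assuming the pygam library and synthetic daily birthdate counts; the paper does not specify its software, and the spike location and residual threshold below are arbitrary.

      import numpy as np
      from pygam import PoissonGAM, s

      # Synthetic daily counts of recorded birthdates: a smooth seasonal trend plus one
      # injected spike mimicking a default/placeholder birthdate entered by registrars.
      rng = np.random.default_rng(0)
      days = np.arange(365 * 10)                          # day index over ten birth years
      trend = 50 + 10 * np.sin(2 * np.pi * days / 365.25)
      counts = rng.poisson(trend)
      counts[200] += 400                                  # injected contamination

      X = days.reshape(-1, 1)
      gam = PoissonGAM(s(0, n_splines=40)).fit(X, counts)
      expected = gam.predict(X)

      # Flag days whose Pearson residual is implausibly large
      pearson = (counts - expected) / np.sqrt(expected)
      suspect_days = days[pearson > 6.0]                  # threshold chosen for illustration
      print("suspect day indices:", suspect_days)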

  12. Parameters and error of a theoretical model

    SciTech Connect

    Moeller, P.; Nix, J.R.; Swiatecki, W.

    1986-09-01

    We propose a definition for the error of a theoretical model of the type whose parameters are determined from adjustment to experimental data. By applying a standard statistical method, the maximum-likelihood method, we derive expressions for both the parameters of the theoretical model and its error. We investigate the derived equations by solving them for simulated experimental and theoretical quantities generated by use of random number generators. 2 refs., 4 tabs.

  13. Models and parameters for environmental radiological assessments

    SciTech Connect

    Miller, C W

    1984-01-01

    This book presents a unified compilation of models and parameters appropriate for assessing the impact of radioactive discharges to the environment. Models examined include those developed for the prediction of atmospheric and hydrologic transport and deposition, for terrestrial and aquatic food-chain bioaccumulation, and for internal and external dosimetry. Chapters have been entered separately into the data base. (ACR)

  14. Analysis of Modeling Parameters on Threaded Screws.

    SciTech Connect

    Vigil, Miquela S.; Brake, Matthew Robert; Vangoethem, Douglas

    2015-06-01

    Assembled mechanical systems often contain a large number of bolted connections. These bolted connections (joints) are integral aspects of the load path for structural dynamics and, consequently, are paramount for calculating a structure's stiffness and energy dissipation properties. However, analysts have not found the optimal method to model these bolted joints appropriately. The complexity of the screw geometry causes issues when generating a mesh of the model. This paper will explore different approaches to model a screw-substrate connection. Model parameters such as mesh continuity, node alignment, wedge angles, and thread-to-body element size ratios are examined. The results of this study will give analysts a better understanding of the influences of these parameters and will aid in finding the optimal method to model bolted connections.

  15. Uncertainty in dual permeability model parameters for structured soils.

    PubMed

    Arora, B; Mohanty, B P; McGuire, J T

    2012-01-01

    Successful application of dual permeability models (DPM) to predict contaminant transport is contingent upon measured or inversely estimated soil hydraulic and solute transport parameters. The difficulty of uniquely identifying parameters for the additional macropore and matrix-macropore interface regions, and of establishing the requisite experimental data for DPM, has not been resolved to date. Therefore, this study quantifies uncertainty in dual permeability model parameters of experimental soil columns with different macropore distributions (single macropore, and low- and high-density multiple macropores). Uncertainty evaluation is conducted using adaptive Markov chain Monte Carlo (AMCMC) and conventional Metropolis-Hastings (MH) algorithms while assuming 10 out of 17 parameters to be uncertain or random. Results indicate that AMCMC resolves parameter correlations and exhibits fast convergence for all DPM parameters while MH displays large posterior correlations for various parameters. This study demonstrates that the choice of parameter sampling algorithms is paramount in obtaining unique DPM parameters when information on covariance structure is lacking, or else additional information on parameter correlations must be supplied to resolve the problem of equifinality of DPM parameters. This study also highlights the placement and significance of the matrix-macropore interface in flow experiments on soil columns with different macropore densities. Histograms for certain soil hydraulic parameters display tri-modal characteristics, implying that macropores are drained first, followed by the interface region and then by pores of the matrix domain in drainage experiments. Results indicate that the hydraulic properties and behavior of the matrix-macropore interface are not only a function of the saturated hydraulic conductivity of the macropore-matrix interface (Ksa) and macropore tortuosity (lf) but also of other parameters of the matrix and macropore domains.
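
    A minimal random-walk Metropolis-Hastings sketch of the kind of posterior sampling discussed above, for a single hypothetical parameter; the exponential forward model and noise level stand in for the dual permeability model and are not from the study.

      import numpy as np

      rng = np.random.default_rng(3)

      def forward_model(k, t):
          """Stand-in forward model: exponential breakthrough controlled by one parameter."""
          return 1.0 - np.exp(-k * t)

      # Synthetic observations from a "true" parameter value of 0.5
      t_obs = np.linspace(0.5, 10.0, 20)
      obs = forward_model(0.5, t_obs) + 0.02 * rng.standard_normal(t_obs.size)

      def log_posterior(k):
          if k <= 0.0:
              return -np.inf                              # flat prior restricted to k > 0
          resid = obs - forward_model(k, t_obs)
          return -0.5 * np.sum((resid / 0.02) ** 2)       # Gaussian likelihood

      # Random-walk Metropolis-Hastings
      n_iter, step = 20_000, 0.05
      chain = np.empty(n_iter)
      k_cur, logp_cur = 0.2, log_posterior(0.2)
      for i in range(n_iter):
          k_prop = k_cur + step * rng.standard_normal()
          logp_prop = log_posterior(k_prop)
          if np.log(rng.uniform()) < logp_prop - logp_cur:
              k_cur, logp_cur = k_prop, logp_prop
          chain[i] = k_cur

      posterior = chain[5000:]                            # discard burn-in
      print(f"posterior mean {posterior.mean():.3f}, "
            f"95% interval {np.percentile(posterior, [2.5, 97.5])}")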

  16. Bioelectrical impedance modelling of gentamicin pharmacokinetic parameters.

    PubMed

    Zarowitz, B J; Pilla, A M; Peterson, E L

    1989-10-01

    1. Bioelectrical impedance analysis was used to develop descriptive models of gentamicin pharmacokinetic parameters in 30 adult in-patients receiving therapy with gentamicin. 2. Serial blood samples obtained from each subject at steady state were analyzed and used to derive gentamicin pharmacokinetic parameters. 3. Multiple regression equations were developed for clearance, elimination rate constant and volume of distribution at steady state and were all statistically significant at P less than 0.05. 4. Clinical validation of this innovative technique is warranted before clinical use is recommended.

  17. Additives

    NASA Technical Reports Server (NTRS)

    Smalheer, C. V.

    1973-01-01

    The chemistry of lubricant additives is discussed to show what the additives are chemically and what functions they perform in the lubrication of various kinds of equipment. Current theories regarding the mode of action of lubricant additives are presented. The additive groups discussed include the following: (1) detergents and dispersants, (2) corrosion inhibitors, (3) antioxidants, (4) viscosity index improvers, (5) pour point depressants, and (6) antifouling agents.

  18. Testing Linear Models for Ability Parameters in Item Response Models

    ERIC Educational Resources Information Center

    Glas, Cees A. W.; Hendrawan, Irene

    2005-01-01

    Methods for testing hypotheses concerning the regression parameters in linear models for the latent person parameters in item response models are presented. Three tests are outlined: A likelihood ratio test, a Lagrange multiplier test and a Wald test. The tests are derived in a marginal maximum likelihood framework. They are explicitly formulated…

  19. Modelling affect in terms of speech parameters.

    PubMed

    Stassen, H H

    1988-01-01

    It is well known that the human voice contains important information about the affective state of a speaker at a nonverbal level. Accordingly, we started an extensive investigation which aims at modelling intraindividual changes of the global affective state over time, as this state is reflected by the human voice, and can be inferred from measurable speech parameters. For the purpose of this investigation, a speech-recording procedure was designed which is especially suited to reveal intraindividual changes of voice patterns over time since each person serves as his or her own reference. On the other hand, the chosen experimental setup is less suited to classify patients in the sense of a traditional diagnostic scheme. In order to find an appropriate mathematical model on the basis of speech parameters, a calibration study with 190 healthy subjects was carried out which enabled us to investigate each parameter for its reproducibility, sensitivity and specificity. In particular, this calibration study yielded the information of how to draw the line between 'normal' fluctuations and 'significant' intraindividual changes over time. All speech parameters under discussion turned out to be sufficiently stable over time, whereas, in regard to their sensitivity to form and content of text, significant differences showed up. In a second step, a pilot study with 6 depressive patients was carried out in order to investigate the specificity of voice parameters with regard to psychopathology. It turned out that the registration procedure is realizable even if patients are considerably handicapped by their illness. However, no consistent correlations could be revealed between single speech parameters and psychopathological rating scales.(ABSTRACT TRUNCATED AT 250 WORDS)

  20. Modelling spin Hamiltonian parameters of molecular nanomagnets.

    PubMed

    Gupta, Tulika; Rajaraman, Gopalan

    2016-07-12

    Molecular nanomagnets encompass a wide range of coordination complexes possessing several potential applications. A formidable challenge in realizing these potential applications lies in controlling the magnetic properties of these clusters. Microscopic spin Hamiltonian (SH) parameters describe the magnetic properties of these clusters, and viable ways to control these SH parameters are highly desirable. Computational tools play a proactive role in this area, where SH parameters such as the isotropic exchange interaction (J), anisotropic exchange interaction (Jx, Jy, Jz), double exchange interaction (B), zero-field splitting parameters (D, E) and g-tensors can be computed reliably using X-ray structures. In this feature article, we have attempted to provide a holistic view of the modelling of these SH parameters of molecular magnets. The determination of J covers various classes of molecules, from di- and polynuclear Mn complexes to the {3d-Gd}, {Gd-Gd} and {Gd-2p} classes of complexes. The estimation of anisotropic exchange coupling includes the exchange between an isotropic metal ion and an orbitally degenerate 3d/4d/5d metal ion. The double-exchange section contains some illustrative examples of mixed valence systems, and the section on the estimation of zfs parameters covers some mononuclear transition metal complexes possessing very large axial zfs parameters. The section on the computation of g-anisotropy exclusively covers studies on mononuclear Dy(III) and Er(III) single-ion magnets. The examples depicted in this article clearly illustrate that computational tools not only aid in interpreting and rationalizing the observed magnetic properties but possess the potential to predict new generation MNMs. PMID:27366794

  1. Additive interaction in survival analysis: use of the additive hazards model.

    PubMed

    Rod, Naja Hulvej; Lange, Theis; Andersen, Ingelise; Marott, Jacob Louis; Diderichsen, Finn

    2012-09-01

    It is a widely held belief in public health and clinical decision-making that interventions or preventive strategies should be aimed at patients or population subgroups where most cases could potentially be prevented. To identify such subgroups, deviation from additivity of absolute effects is the relevant measure of interest. Multiplicative survival models, such as the Cox proportional hazards model, are often used to estimate the association between exposure and risk of disease in prospective studies. In Cox models, deviations from additivity have usually been assessed by surrogate measures of additive interaction derived from multiplicative models-an approach that is both counter-intuitive and sometimes invalid. This paper presents a straightforward and intuitive way of assessing deviation from additivity of effects in survival analysis by use of the additive hazards model. The model directly estimates the absolute size of the deviation from additivity and provides confidence intervals. In addition, the model can accommodate both continuous and categorical exposures and models both exposures and potential confounders on the same underlying scale. To illustrate the approach, we present an empirical example of interaction between education and smoking on risk of lung cancer. We argue that deviations from additivity of effects are important for public health interventions and clinical decision-making, and such estimations should be encouraged in prospective studies on health. A detailed implementation guide of the additive hazards model is provided in the appendix.
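
    In the notation of a generic additive (Aalen-type) hazards model, written here for illustration with two binary exposures, the deviation from additivity is carried by a product term:

        \lambda(t \mid X_1, X_2) = \lambda_0(t) + \beta_1 X_1 + \beta_2 X_2 + \beta_3 X_1 X_2,

    where \beta_3 estimates, on the absolute hazard-difference scale, how much the joint effect of the two exposures departs from the sum of their separate effects, and its confidence interval follows directly from the model fit.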

  2. Intrinsic viscosity and conformational parameters of xanthan in aqueous solutions: salt addition effect.

    PubMed

    Brunchi, Cristina-Eliza; Morariu, Simona; Bercea, Maria

    2014-10-01

    The intrinsic viscosity and conformational parameters of xanthan in aqueous solutions were investigated at 25°C as a function of salt nature (NaCl and KCl) and concentration (up to 3×10(-1) mol/L). The viscometric parameters were evaluated by applying semi-empirical equations proposed by Rao and Wolf. The results show that the new model proposed by Wolf provides accurate intrinsic viscosity values comparable with those obtained by using traditional methods. The experimental data were modeled with the Boltzmann sigmoidal equation. The stiffness parameter, hydrodynamic volume and viscometric expansion factor were determined and discussed. With increasing salt concentration, the hydrodynamic volume and the viscometric expansion factor decrease and the critical overlap concentration increases, reaching limiting values above a given salt concentration. The high Huggins constant values suggest the existence of aggregates at salt concentrations above 5×10(-2) and 3×10(-3) mol/L for NaCl and KCl, respectively. The stiffness parameter determined by the Smidsrød and Haug method was 5.45×10(-3), indicating a rigid conformation for the xanthan macromolecules in solution.

  3. Constant-parameter capture-recapture models

    USGS Publications Warehouse

    Brownie, C.; Hines, J.E.; Nichols, J.D.

    1986-01-01

    Jolly (1982, Biometrics 38, 301-321) presented modifications of the Jolly-Seber model for capture-recapture data, which assume constant survival and/or capture rates. Where appropriate, because of the reduced number of parameters, these models lead to more efficient estimators than the Jolly-Seber model. The tests to compare models given by Jolly do not make complete use of the data, and we present here the appropriate modifications, and also indicate how to carry out goodness-of-fit tests which utilize individual capture history information. We also describe analogous models for the case where young and adult animals are tagged. The availability of computer programs to perform the analysis is noted, and examples are given using output from these programs.

  4. Test models for improving filtering with model errors through stochastic parameter estimation

    SciTech Connect

    Gershgorin, B.; Harlim, J.; Majda, A.J.

    2010-01-01

    The filtering skill for turbulent signals from nature is often limited by model errors created by utilizing an imperfect model for filtering. Updating the parameters in the imperfect model through stochastic parameter estimation is one way to increase filtering skill and model performance. Here a suite of stringent test models for filtering with stochastic parameter estimation is developed based on the Stochastic Parameterization Extended Kalman Filter (SPEKF). These new SPEKF-algorithms systematically correct both multiplicative and additive biases and involve exact formulas for propagating the mean and covariance including the parameters in the test model. A comprehensive study is presented of robust parameter regimes for increasing filtering skill through stochastic parameter estimation for turbulent signals as the observation time and observation noise are varied and even when the forcing is incorrectly specified. The results here provide useful guidelines for filtering turbulent signals in more complex systems with significant model errors.
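
    SPEKF uses exact statistics for a particular stochastic test model; as a generic illustration of correcting an additive model bias through parameter estimation, the sketch below simply augments a standard Kalman filter's state with the unknown bias. The scalar system and all numbers are illustrative, not the SPEKF test models of the paper.

      import numpy as np

      rng = np.random.default_rng(7)

      # True system: x_{k+1} = a * x_k + b + w_k, observed as y_k = x_k + v_k.
      # The filter treats the additive bias b as unknown and estimates it by
      # augmenting the state vector: z = [x, b], with b modeled as a random walk.
      a_true, b_true = 0.9, 0.5
      x, ys = 0.0, []
      for _ in range(300):
          x = a_true * x + b_true + 0.1 * rng.standard_normal()
          ys.append(x + 0.2 * rng.standard_normal())

      F = np.array([[a_true, 1.0],
                    [0.0,    1.0]])          # transition for the augmented state [x, b]
      H = np.array([[1.0, 0.0]])             # only x is observed
      Q = np.diag([0.1**2, 1e-4])            # small noise on b lets its estimate adapt
      R = np.array([[0.2**2]])

      z, P = np.zeros(2), np.eye(2)          # start with a bias estimate of zero
      for y in ys:
          z = F @ z                          # predict
          P = F @ P @ F.T + Q
          S = H @ P @ H.T + R                # update
          K = P @ H.T @ np.linalg.inv(S)
          z = z + K @ (np.array([y]) - H @ z)
          P = (np.eye(2) - K @ H) @ P

      print(f"estimated additive bias: {z[1]:.3f} (true value {b_true})")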

  5. Modeling techniques for gaining additional urban space

    NASA Astrophysics Data System (ADS)

    Thunig, Holger; Naumann, Simone; Siegmund, Alexander

    2009-09-01

    One of the major accompaniments of globalization is the rapid growth of urban areas. Urban sprawl is the main environmental problem affecting cities of different characteristics across continents. Various reasons for the increase in urban sprawl in the last 10 to 30 years have been proposed [1], and these often depend on the socio-economic situation of cities. The quantitative reduction and sustainable handling of land should be achieved by inner urban development instead of expanding urban regions. Following the principle "spare the urban fringe, develop the inner suburbs first" requires differentiated tools allowing for quantitative and qualitative appraisals of current building potentials. Using high spatial resolution remote sensing data within an object-based approach enables the detection of potential areas, while GIS data provides information for the quantitative valuation. This paper presents techniques for modeling the urban environment and discusses how urban planners can use the retrieved information for their specific needs.

  6. Kane model parameters and stochastic spin current

    NASA Astrophysics Data System (ADS)

    Chowdhury, Debashree

    2015-11-01

    The spin current and spin conductivity are computed through a thermally driven stochastic process. By evaluating the Kramers equation and with the help of the $\vec{k}\cdot\vec{p}$ method, we have studied the spin Hall scenario. Due to the thermal assistance, the Kane model parameters get modified, which consequently modulates the spin-orbit coupling (SOC). This modified SOC causes the spin current to change by a finite amount.

  7. Sensitivity analysis of geometric errors in additive manufacturing medical models.

    PubMed

    Pinto, Jose Miguel; Arrieta, Cristobal; Andia, Marcelo E; Uribe, Sergio; Ramos-Grez, Jorge; Vargas, Alex; Irarrazaval, Pablo; Tejos, Cristian

    2015-03-01

    Additive manufacturing (AM) models are used in medical applications for surgical planning, prosthesis design and teaching. For these applications, the accuracy of the AM models is essential. Unfortunately, this accuracy is compromised due to errors introduced by each of the building steps: image acquisition, segmentation, triangulation, printing and infiltration. However, the contribution of each step to the final error remains unclear. We performed a sensitivity analysis comparing errors obtained from a reference with those obtained modifying parameters of each building step. Our analysis considered global indexes to evaluate the overall error, and local indexes to show how this error is distributed along the surface of the AM models. Our results show that the standard building process tends to overestimate the AM models, i.e. models are larger than the original structures. They also show that the triangulation resolution and the segmentation threshold are critical factors, and that the errors are concentrated at regions with high curvatures. Errors could be reduced choosing better triangulation and printing resolutions, but there is an important need for modifying some of the standard building processes, particularly the segmentation algorithms.

  8. Parameter estimation, model reduction and quantum filtering

    NASA Astrophysics Data System (ADS)

    Chase, Bradley A.

    This thesis explores the topics of parameter estimation and model reduction in the context of quantum filtering. The last is a mathematically rigorous formulation of continuous quantum measurement, in which a stream of auxiliary quantum systems is used to infer the state of a target quantum system. Fundamental quantum uncertainties appear as noise which corrupts the probe observations and therefore must be filtered in order to extract information about the target system. This is analogous to the classical filtering problem in which techniques of inference are used to process noisy observations of a system in order to estimate its state. Given the clear similarities between the two filtering problems, I devote the beginning of this thesis to a review of classical and quantum probability theory, stochastic calculus and filtering. This allows for a mathematically rigorous and technically adroit presentation of the quantum filtering problem and solution. Given this foundation, I next consider the related problem of quantum parameter estimation, in which one seeks to infer the strength of a parameter that drives the evolution of a probe quantum system. By embedding this problem in the state estimation problem solved by the quantum filter, I present the optimal Bayesian estimator for a parameter when given continuous measurements of the probe system to which it couples. For cases when the probe takes on a finite number of values, I review a set of sufficient conditions for asymptotic convergence of the estimator. For a continuous-valued parameter, I present a computational method called quantum particle filtering for practical estimation of the parameter. Using these methods, I then study the particular problem of atomic magnetometry and review an experimental method for potentially reducing the uncertainty in the estimate of the magnetic field beyond the standard quantum limit. The technique involves double-passing a probe laser field through the atomic system, giving
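
    The quantum particle filter itself is not reproduced here; the hedged sketch below shows the analogous classical construction, a bootstrap particle filter in which each particle carries its own value of an unknown drift parameter for a noisily observed probe, and all numbers are illustrative.

        import numpy as np

        # Classical stand-in for the parameter-estimation filter: a probe state x
        # drifts at an unknown rate theta and is observed in noise.  Each particle
        # carries its own (x, theta) pair; resampling concentrates the ensemble on
        # parameter values consistent with the measurement record.
        rng = np.random.default_rng(1)
        dt, q, r = 0.1, 0.05, 0.2
        theta_true = 0.8
        n_particles, n_steps = 2000, 200

        x_true = 0.0
        x_p = np.zeros(n_particles)
        theta_p = rng.uniform(-2.0, 2.0, n_particles)   # prior over the parameter
        w = np.full(n_particles, 1.0 / n_particles)

        for _ in range(n_steps):
            x_true += theta_true * dt + np.sqrt(q * dt) * rng.standard_normal()
            y = x_true + np.sqrt(r) * rng.standard_normal()

            # propagate particles, weight by the observation likelihood
            x_p += theta_p * dt + np.sqrt(q * dt) * rng.standard_normal(n_particles)
            w *= np.exp(-0.5 * (y - x_p) ** 2 / r)
            w /= w.sum()

            # multinomial resampling when the effective sample size collapses
            if 1.0 / np.sum(w ** 2) < n_particles / 2:
                idx = rng.choice(n_particles, n_particles, p=w)
                x_p, theta_p = x_p[idx], theta_p[idx]
                w = np.full(n_particles, 1.0 / n_particles)

        print("posterior mean of theta:", np.sum(w * theta_p))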

  9. Additional deleterious effects of alcohol consumption on sperm parameters and DNA integrity in diabetic mice.

    PubMed

    Pourentezari, M; Talebi, A R; Mangoli, E; Anvari, M; Rahimipour, M

    2016-06-01

    The aim of this study was to survey the impact of alcohol consumption on sperm parameters and DNA integrity in experimentally induced diabetic mice. A total of 32 adult male mice were divided into four groups: mice of group 1 served as controls fed on a basal diet, group 2 received streptozotocin (STZ) (200 mg kg-1, single dose, intraperitoneal) and the basal diet, group 3 received alcohol (10 mg kg-1, water soluble) and the basal diet, and group 4 received STZ and alcohol for 35 days. The cauda epididymidis of each mouse was dissected and placed in 1 ml of pre-warmed Ham's F10 culture medium for 30 min. The swim-out spermatozoa were analysed for count, motility, morphology and viability. Sperm chromatin quality was evaluated with aniline blue, toluidine blue, acridine orange and chromomycin A3 staining. The results showed significant differences in all sperm parameters (P < 0.05); when sperm chromatin was assessed with the cytochemical tests, there were also significant differences (P < 0.001) between the groups. According to our results, alcohol and diabetes can cause abnormalities in sperm parameters and chromatin quality. In addition, alcohol consumption in diabetic mice can intensify sperm chromatin/DNA damage. PMID:26358836

  10. Observation model and parameter partials for the JPL VLBI parameter estimation software MASTERFIT-1987

    NASA Technical Reports Server (NTRS)

    Sovers, O. J.; Fanselow, J. L.

    1987-01-01

    This report is a revision of the document of the same title (1986), dated August 1, which it supersedes. Model changes during 1986 and 1987 included corrections for antenna feed rotation, refraction in modelling antenna axis offsets, and an option to employ improved values of the semiannual and annual nutation amplitudes. Partial derivatives of the observables with respect to an additional parameter (surface temperature) are now available. New versions of two figures representing the geometric delay are incorporated. The expressions for the partial derivatives with respect to the nutation parameters have been corrected to include contributions from the dependence of UT1 on nutation. The authors hope to publish revisions of this document in the future, as modeling improvements warrant.

  11. Identification of Neurofuzzy models using GTLS parameter estimation.

    PubMed

    Jakubek, Stefan; Hametner, Christoph

    2009-10-01

    In this paper, nonlinear system identification utilizing generalized total least squares (GTLS) methodologies in neurofuzzy systems is addressed. The problem involved with the estimation of the local model parameters of neurofuzzy networks is the presence of noise in measured data. When some or all input channels are subject to noise, the GTLS algorithm yields consistent parameter estimates. In addition to the estimation of the parameters, the main challenge in the design of these local model networks is the determination of the region of validity for the local models. The method presented in this paper is based on an expectation-maximization algorithm that uses a residual from the GTLS parameter estimation for proper partitioning. The performance of the resulting nonlinear model with local parameters estimated by weighted GTLS is a product both of the parameter estimation itself and the associated residual used for the partitioning process. The applicability and benefits of the proposed algorithm are demonstrated by means of illustrative examples and an automotive application. PMID:19336320

  12. Radar altimeter waveform modeled parameter recovery. [SEASAT-1 data

    NASA Technical Reports Server (NTRS)

    1981-01-01

    Satellite-borne radar altimeters include waveform sampling gates providing point samples of the transmitted radar pulse after its scattering from the ocean's surface. Averages of the waveform sampler data can be fitted by varying parameters in a model mean return waveform. The theoretical waveform model used is described, as well as the general iterative nonlinear least squares procedure used to obtain estimates of the parameters characterizing the modeled waveform for SEASAT-1 data. The six waveform parameters recovered by the fitting procedure are: (1) amplitude; (2) time origin, or track point; (3) ocean surface rms roughness; (4) noise baseline; (5) ocean surface skewness; and (6) attitude, or off-nadir, angle. Additional practical processing considerations are addressed, and FORTRAN source listings for the subroutines used in the waveform fitting are included. While the description is for the SEASAT-1 altimeter waveform data analysis, the work can easily be generalized and extended to other radar altimeter systems.
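
    A hedged illustration of the fitting step: the sketch below fits a simplified Brown-type return waveform (error-function leading edge, covering four of the six parameters) to synthetic gate samples with an iterative nonlinear least-squares routine; the actual SEASAT-1 waveform model and the report's FORTRAN subroutines are more complete.

        import numpy as np
        from scipy.optimize import least_squares
        from scipy.special import erf

        def waveform(t, amp, t0, sigma, baseline):
            # Simplified Brown-type mean return: a noise floor plus an
            # error-function leading edge whose rise time reflects sea-surface
            # roughness (skewness and off-nadir terms of the full model omitted).
            return baseline + 0.5 * amp * (1.0 + erf((t - t0) / (np.sqrt(2) * sigma)))

        # synthetic averaged waveform samples standing in for gate data
        t = np.linspace(-20.0, 40.0, 60)             # gate times, arbitrary units
        rng = np.random.default_rng(2)
        samples = waveform(t, 1.0, 5.0, 3.0, 0.05) + 0.02 * rng.standard_normal(t.size)

        def residuals(p):
            amp, t0, sigma, baseline = p
            return waveform(t, amp, t0, sigma, baseline) - samples

        fit = least_squares(residuals, x0=[0.5, 0.0, 1.0, 0.0])
        print("amplitude, track point, rise time, baseline:", fit.x)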

  13. Parameter optimization in S-system models

    PubMed Central

    Vilela, Marco; Chou, I-Chun; Vinga, Susana; Vasconcelos, Ana Tereza R; Voit, Eberhard O; Almeida, Jonas S

    2008-01-01

    Background: The inverse problem of identifying the topology of biological networks from their time series responses is a cornerstone challenge in systems biology. We tackle this challenge here through the parameterization of S-system models. It was previously shown that parameter identification can be performed as an optimization based on the decoupling of the differential S-system equations, which results in a set of algebraic equations. Results: A novel parameterization solution is proposed for the identification of S-system models from time series when no information about the network topology is known. The method is based on eigenvector optimization of a matrix formed from multiple regression equations of the linearized decoupled S-system. Furthermore, the algorithm is extended to the optimization of network topologies with constraints on metabolites and fluxes. These constraints rejoin the system in cases where it had been fragmented by decoupling. We demonstrate with synthetic time series why the algorithm can be expected to converge in most cases. Conclusion: A procedure was developed that facilitates automated reverse engineering tasks for biological networks using S-systems. The proposed method of eigenvector optimization constitutes an advancement over S-system parameter identification from time series using a recent method called Alternating Regression. The proposed method overcomes convergence issues encountered in Alternating Regression by identifying nonlinear constraints that restrict the search space to computationally feasible solutions. Because the parameter identification is still performed for each metabolite separately, the modularity and linear time characteristics of the Alternating Regression method are preserved. Simulation studies illustrate how the proposed algorithm identifies the correct network topology out of a collection of models which all fit the dynamical time series essentially equally well. PMID:18416837
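
    The eigenvector optimization itself is not reproduced here; the sketch below only illustrates the decoupling idea on one S-system equation, fitting the production term by log-linear regression once slopes are estimated and the degradation term is held fixed (one half-step of an alternating scheme); all values are synthetic.

        import numpy as np

        # One decoupled S-system equation:
        #   dX1/dt = alpha * X1**g11 * X2**g12  -  beta * X1**h11
        # With slopes estimated from the time series and the degradation term
        # fixed, the production term is log-linear in its parameters and can be
        # fitted by ordinary regression.
        rng = np.random.default_rng(3)
        n = 200
        X1 = rng.uniform(0.5, 2.0, n)
        X2 = rng.uniform(0.5, 2.0, n)
        alpha, g11, g12, beta, h11 = 2.0, 0.5, -0.3, 1.0, 0.8
        slopes = alpha * X1**g11 * X2**g12 - beta * X1**h11   # "measured" dX1/dt

        production = slopes + beta * X1**h11                  # degradation assumed known
        A = np.column_stack([np.ones(n), np.log(X1), np.log(X2)])
        coef, *_ = np.linalg.lstsq(A, np.log(production), rcond=None)
        print("alpha, g11, g12 estimates:", np.exp(coef[0]), coef[1], coef[2])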

  14. System for Predicting Pitzer Ion-Interaction Model Parameters

    NASA Astrophysics Data System (ADS)

    Schreiber, D. R.; Obias, T.

    2002-12-01

    Pitzer's Ion-Interaction Model has been widely utilized for the prediction of non-ideal solution behavior. The Pitzer model does an excellent job of predicting the solubility of minerals over a wide range of conditions for natural water systems. While Pitzer's equations have been successful in modeling systems for which parameters are available, there are still some systems that cannot be modeled because parameters are not available for all of the salts of interest. For example, there is little to no data for aluminum salts, yet in acidified natural waters aluminum may be present at significant concentrations. In addition, aluminum chemistry will also be important in the remediation of acidified High-level waste. Given the quantity of work involved in generating the needed parameters, it would be advantageous to be able to predict Pitzer parameters for salt systems for which no data are available. Recently we began work on modeling High-level waste systems where Pitzer parameters are not available for some of the constituents of interest. We will discuss a set of relations we have developed for the prediction of Pitzer's binary ion-interaction parameters. In the binary parameter case, we reformulated Pitzer's equations by replacing the parameters β(0), β(1), β(2), and C with expressions in ionic radii. Equations have been developed for salts of a particular anion with cations of similar charge. For example, there is a single equation for the 1:1 chloride salts. Relations for acids were developed separately. We have also developed a separate set of equations for all salts of a particular charge type, independent of the anion. While the latter set of equations is of lesser predictive value, it can be used in cases where we do not have a relation for a particular anion. Since any system used to predict parameters would result in a loss of accuracy, experimentally determined parameters should be used when available. The ability of parameters derived from our model
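
    The predictive relations in ionic radii are not given in this record, so the sketch below only shows where predicted binary parameters would enter, by evaluating the standard Pitzer osmotic-coefficient expression for a 1:1 electrolyte; the Debye-Hückel slope and the NaCl-like parameter values are illustrative.

        import numpy as np

        def osmotic_coefficient_11(m, beta0, beta1, Cphi, A_phi=0.392, b=1.2, alpha=2.0):
            # Standard binary Pitzer expression for a 1:1 electrolyte at 25 C;
            # predicted (or tabulated) beta0, beta1 and Cphi plug in directly.
            I = m                              # ionic strength equals molality for 1:1
            sqrtI = np.sqrt(I)
            f_phi = -A_phi * sqrtI / (1.0 + b * sqrtI)
            B_phi = beta0 + beta1 * np.exp(-alpha * sqrtI)
            return 1.0 + f_phi + m * B_phi + m**2 * Cphi

        # illustrative values close to the literature parameters for NaCl
        print(osmotic_coefficient_11(m=1.0, beta0=0.0765, beta1=0.2664, Cphi=0.00127))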

  15. Model parameters for simulation of physiological lipids

    PubMed Central

    McGlinchey, Nicholas

    2016-01-01

    Coarse grain simulation of proteins in their physiological membrane environment can offer insight across timescales, but requires a comprehensive force field. Parameters are explored for multicomponent bilayers composed of unsaturated lipids DOPC and DOPE, mixed‐chain saturation POPC and POPE, and anionic lipids found in bacteria: POPG and cardiolipin. A nonbond representation obtained from multiscale force matching is adapted for these lipids and combined with an improved bonding description of cholesterol. Equilibrating the area per lipid yields robust bilayer simulations and properties for common lipid mixtures with the exception of pure DOPE, which has a known tendency to form nonlamellar phase. The models maintain consistency with an existing lipid–protein interaction model, making the force field of general utility for studying membrane proteins in physiologically representative bilayers. © 2016 The Authors. Journal of Computational Chemistry Published by Wiley Periodicals, Inc. PMID:26864972

  16. Multiscale and Multiphysics Modeling of Additive Manufacturing of Advanced Materials

    NASA Technical Reports Server (NTRS)

    Liou, Frank; Newkirk, Joseph; Fan, Zhiqiang; Sparks, Todd; Chen, Xueyang; Fletcher, Kenneth; Zhang, Jingwei; Zhang, Yunlu; Kumar, Kannan Suresh; Karnati, Sreekar

    2015-01-01

    The objective of this proposed project is to research and develop a prediction tool for advanced additive manufacturing (AAM) processes for advanced materials and to develop experimental methods to provide fundamental properties and establish validation data. Aircraft structures and engines demand materials that are stronger, usable at much higher temperatures, provide less acoustic transmission, and enable more aeroelastic tailoring than those currently used. Significant improvements in properties can only be achieved by processing the materials under nonequilibrium conditions, such as AAM processes. AAM processes encompass a class of processes that use a focused heat source to create a melt pool on a substrate. Examples include Electron Beam Freeform Fabrication and Direct Metal Deposition. These types of additive processes enable fabrication of parts directly from CAD drawings. To achieve the desired material properties and geometries of the final structure, it is necessary to assess the impact of process parameters and to predict optimized conditions with numerical modeling as an effective prediction tool. The processing targets are multiple and at different spatial scales, and the associated physical phenomena are multiphysics and multiscale in nature. In this project, the research work has been developed to model AAM processes in a multiscale and multiphysics approach. A macroscale model was developed to investigate the residual stresses and distortion in AAM processes. A sequentially coupled, thermomechanical, finite element model was developed and validated experimentally. The results showed the temperature distribution, residual stress, and deformation within the formed deposits and substrates. A mesoscale model was developed to include heat transfer, phase change with a mushy zone, incompressible free surface flow, solute redistribution, and surface tension. Because of the excessive computing time needed, a parallel computing approach was also tested. In addition

  17. Additive functions in boolean models of gene regulatory network modules.

    PubMed

    Darabos, Christian; Di Cunto, Ferdinando; Tomassini, Marco; Moore, Jason H; Provero, Paolo; Giacobini, Mario

    2011-01-01

    Gene-on-gene regulations are key components of every living organism. Dynamical abstract models of genetic regulatory networks help explain the genome's evolvability and robustness. These properties can be attributed to the structural topology of the graph formed by genes, as vertices, and regulatory interactions, as edges. Moreover, the actual gene interaction of each gene is believed to play a key role in the stability of the structure. With advances in biology, some effort was deployed to develop update functions in Boolean models that include recent knowledge. We combine real-life gene interaction networks with novel update functions in a Boolean model. We use two sub-networks of biological organisms, the yeast cell-cycle and the mouse embryonic stem cell, as topological support for our system. On these structures, we substitute the original random update functions by a novel threshold-based dynamic function in which the promoting and repressing effect of each interaction is considered. We use a third real-life regulatory network, along with its inferred Boolean update functions, to validate the proposed update function. Results of this validation hint at increased biological plausibility of the threshold-based function. To investigate the dynamical behavior of this new model, we visualized the phase transition between order and chaos into the critical regime using Derrida plots. We complement the qualitative nature of Derrida plots with an alternative measure, the criticality distance, that also allows discrimination between regimes in a quantitative way. Simulations on both real-life genetic regulatory networks show that there exists a set of parameters that allows the systems to operate in the critical region. This new model includes experimentally derived biological information and recent discoveries, which makes it potentially useful to guide experimental research. The update function confers additional realism to the model, while reducing the complexity
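
    A minimal sketch of a threshold-based synchronous update of the kind described (promoting edges weighted +1, repressing edges -1, ties keeping the previous state); the paper's exact conventions and thresholds may differ.

        import numpy as np

        def threshold_update(state, W, theta=0):
            # Synchronous threshold-based Boolean update: a gene switches on when
            # the signed sum of its active regulators exceeds theta, off when it
            # falls below, and keeps its previous value on a tie.
            signal = W @ state
            new_state = state.copy()
            new_state[signal > theta] = 1
            new_state[signal < theta] = 0
            return new_state

        # toy 3-gene module: gene2 is activated by gene0 and repressed by gene1
        W = np.array([[0,  0, 0],
                      [0,  0, 0],
                      [1, -1, 0]])
        state = np.array([1, 0, 1])
        print(threshold_update(state, W))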

  18. Additive Functions in Boolean Models of Gene Regulatory Network Modules

    PubMed Central

    Darabos, Christian; Di Cunto, Ferdinando; Tomassini, Marco; Moore, Jason H.; Provero, Paolo; Giacobini, Mario

    2011-01-01

    Gene-on-gene regulations are key components of every living organism. Dynamical abstract models of genetic regulatory networks help explain the genome's evolvability and robustness. These properties can be attributed to the structural topology of the graph formed by genes, as vertices, and regulatory interactions, as edges. Moreover, the actual gene interaction of each gene is believed to play a key role in the stability of the structure. With advances in biology, some effort was deployed to develop update functions in Boolean models that include recent knowledge. We combine real-life gene interaction networks with novel update functions in a Boolean model. We use two sub-networks of biological organisms, the yeast cell-cycle and the mouse embryonic stem cell, as topological support for our system. On these structures, we substitute the original random update functions by a novel threshold-based dynamic function in which the promoting and repressing effect of each interaction is considered. We use a third real-life regulatory network, along with its inferred Boolean update functions, to validate the proposed update function. Results of this validation hint at increased biological plausibility of the threshold-based function. To investigate the dynamical behavior of this new model, we visualized the phase transition between order and chaos into the critical regime using Derrida plots. We complement the qualitative nature of Derrida plots with an alternative measure, the criticality distance, that also allows discrimination between regimes in a quantitative way. Simulations on both real-life genetic regulatory networks show that there exists a set of parameters that allows the systems to operate in the critical region. This new model includes experimentally derived biological information and recent discoveries, which makes it potentially useful to guide experimental research. The update function confers additional realism to the model, while reducing the complexity

  19. Revised digestive parameter estimates for the Molly cow model.

    PubMed

    Hanigan, M D; Appuhamy, J A D R N; Gregorini, P

    2013-06-01

    The Molly cow model represents nutrient digestion and metabolism based on a mechanistic representation of the key biological elements. Digestive parameters were derived ad hoc from literature observations or were assumed. Preliminary work determined that several of these parameters did not represent the true relationships. The current work was undertaken to derive ruminal and postruminal digestive parameters and to use a meta-approach to assess the effects of interactions among nutrients and identify areas of model weakness. Model predictions were compared with a database of literature observations containing 233 treatment means. Mean square prediction errors were assessed to characterize model performance. Ruminal pH prediction equations had substantial mean bias, which caused problems in fiber digestion and microbial growth predictions. The pH prediction equation was reparameterized simultaneously with the several ruminal and postruminal digestion parameters, resulting in more realistic parameter estimates for ruminal fiber digestion, and moderate reductions in prediction errors for pH, neutral detergent fiber, acid detergent fiber, and microbial N outflow from the rumen; and postruminal digestion of neutral detergent fiber, acid detergent fiber, and protein. Prediction errors are still large for ruminal ammonia and outflow of starch from the rumen. The gain in microbial efficiency associated with fat feeding was found to be more than twice the original estimate, but in contrast to prior assumptions, fat feeding did not exert negative effects on fiber and protein degradation in the rumen. Microbial responses to ruminal ammonia concentrations were half saturated at 0.2 mM versus the original estimate of 1.2 mM. Residuals analyses indicated that additional progress could be made in predicting microbial N outflow, volatile fatty acid production and concentrations, and cycling of N between blood and the rumen. These additional corrections should lead to an even more

  20. Multiscale modeling of failure in composites under model parameter uncertainty

    NASA Astrophysics Data System (ADS)

    Bogdanor, Michael J.; Oskay, Caglar; Clay, Stephen B.

    2015-09-01

    This manuscript presents a multiscale stochastic failure modeling approach for fiber reinforced composites. A homogenization based reduced-order multiscale computational model is employed to predict the progressive damage accumulation and failure in the composite. Uncertainty in the composite response is modeled at the scale of the microstructure by considering the constituent material (i.e., matrix and fiber) parameters governing the evolution of damage as random variables. Through the use of the multiscale model, randomness at the constituent scale is propagated to the scale of the composite laminate. The probability distributions of the underlying material parameters are calibrated from unidirectional composite experiments using a Bayesian statistical approach. The calibrated multiscale model is exercised to predict the ultimate tensile strength of quasi-isotropic open-hole composite specimens at various loading rates. The effect of random spatial distribution of constituent material properties on the composite response is investigated.

  1. Radiation processing of thermoplastic starch by blending aromatic additives: Effect of blend composition and radiation parameters

    NASA Astrophysics Data System (ADS)

    Khandal, Dhriti; Mikus, Pierre-Yves; Dole, Patrice; Coqueret, Xavier

    2013-03-01

    This paper reports on the effects of electron beam (EB) irradiation on poly α-1,4-glucose oligomers (maltodextrins) in the presence of water and of various aromatic additives, as model blends for gaining a better understanding, at the molecular level, of the modifications occurring in amorphous starch-lignin blends submitted to ionizing irradiation in order to improve the properties of this type of bio-based thermoplastic material. A series of aromatic compounds, namely p-methoxybenzyl alcohol, benzene dimethanol and cinnamyl alcohol, together with some related carboxylic acids, namely cinnamic acid, coumaric acid, and ferulic acid, was studied to assess the ability of each additive to counteract chain scission of the polysaccharide and to induce interchain covalent linkages. Gel formation in EB-irradiated blends comprising maltodextrin was shown to depend on three main factors: the type of aromatic additive, the presence of glycerol, and the irradiation dose. Chain scission versus grafting as a function of blend composition and dose was studied using Size Exclusion Chromatography, by determining the changes in molecular weight distribution (MWD) from Refractive Index (RI) chromatograms and the presence of aromatic grafts on the maltodextrin chains from UV chromatograms. The occurrence of crosslinking was quantified by gel fraction measurements, allowing the cross-linking efficiency of the additives to be ranked. When the method was applied to destructurized starch blends, gel formation was also shown to be strongly affected by the moisture content of the sample submitted to irradiation. The results demonstrate the possibility of tuning the reactivity of tailored blends to minimize chain degradation and control the degree of cross-linking.

  2. Fixing the c Parameter in the Three-Parameter Logistic Model

    ERIC Educational Resources Information Center

    Han, Kyung T.

    2012-01-01

    For several decades, the "three-parameter logistic model" (3PLM) has been the dominant choice for practitioners in the field of educational measurement for modeling examinees' response data from multiple-choice (MC) items. Past studies, however, have pointed out that the c-parameter of 3PLM should not be interpreted as a guessing parameter. This…

  3. Empirical flow parameters : a tool for hydraulic model validity

    USGS Publications Warehouse

    Asquith, William H.; Burley, Thomas E.; Cleveland, Theodore G.

    2013-01-01

    The objectives of this project were (1) To determine and present from existing data in Texas, relations between observed stream flow, topographic slope, mean section velocity, and other hydraulic factors, to produce charts such as Figure 1 and to produce empirical distributions of the various flow parameters to provide a methodology to "check if model results are way off!"; (2) To produce a statistical regional tool to estimate mean velocity or other selected parameters for storm flows or other conditional discharges at ungauged locations (most bridge crossings) in Texas to provide a secondary way to compare such values to a conventional hydraulic modeling approach. (3) To present ancillary values such as Froude number, stream power, Rosgen channel classification, sinuosity, and other selected characteristics (readily determinable from existing data) to provide additional information to engineers concerned with the hydraulic-soil-foundation component of transportation infrastructure.

  4. Integrating microbial diversity in soil carbon dynamic models parameters

    NASA Astrophysics Data System (ADS)

    Louis, Benjamin; Menasseri-Aubry, Safya; Leterme, Philippe; Maron, Pierre-Alain; Viaud, Valérie

    2015-04-01

    Faced with the numerous concerns about soil carbon dynamics, a large number of carbon dynamic models have been developed during the last century. These models are mainly deterministic compartment models, with carbon fluxes between compartments represented by ordinary differential equations. Nowadays, many of them consider the microbial biomass as a compartment of the soil organic matter (carbon quantity). But the amount of microbial carbon is rarely used in the differential equations of the models as a limiting factor. Additionally, microbial diversity and community composition are mostly missing, although advances in soil microbial analytical methods during the two past decades have shown that these characteristics also play a significant role in soil carbon dynamics. As soil microorganisms are essential drivers of soil carbon dynamics, the question of explicitly integrating their role has become a key issue in the development of soil carbon dynamic models. Some interesting attempts can be found; they are dominated by the incorporation of several compartments for different groups of microbial biomass, defined in terms of functional traits and/or biogeochemical compositions, to integrate microbial diversity. However, these models are basically heuristic models in the sense that they are used to test hypotheses through simulations. They have rarely been confronted with real data and thus cannot be used to predict realistic situations. The objective of this work was to empirically integrate microbial diversity into a simple model of carbon dynamics through statistical modelling of the model parameters. This work is based on available experimental results coming from a French National Research Agency program called DIMIMOS. Briefly, 13C-labelled wheat residue was incorporated into soils with different pedological characteristics and land use history. The soils were then incubated for 104 days, and labelled and non-labelled CO2 fluxes were measured at ten

  5. Hyperbolic value addition and general models of animal choice.

    PubMed

    Mazur, J E

    2001-01-01

    Three mathematical models of choice--the contextual-choice model (R. Grace, 1994), delay-reduction theory (N. Squires & E. Fantino, 1971), and a new model called the hyperbolic value-added model--were compared in their ability to predict the results from a wide variety of experiments with animal subjects. When supplied with 2 or 3 free parameters, all 3 models made fairly accurate predictions for a large set of experiments that used concurrent-chain procedures. One advantage of the hyperbolic value-added model is that it is derived from a simpler model that makes accurate predictions for many experiments using discrete-trial adjusting-delay procedures. Some results favor the hyperbolic value-added model and delay-reduction theory over the contextual-choice model, but more data are needed from choice situations for which the models make distinctly different predictions.
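
    For reference, the core hyperbolic discounting equation underlying the value-added model, V = A/(1 + KD), can be evaluated directly; the concurrent-chains extension with its additional free parameters is not sketched here, and the discounting constant below is purely illustrative.

        def hyperbolic_value(amount, delay, k=0.2):
            # Mazur's hyperbolic discounting equation V = A / (1 + k*D), the
            # building block of the hyperbolic value-added model; k is an
            # illustrative discounting parameter.
            return amount / (1.0 + k * delay)

        # a small-immediate versus large-delayed choice
        v_small = hyperbolic_value(amount=2.0, delay=1.0)
        v_large = hyperbolic_value(amount=6.0, delay=10.0)
        print(v_small, v_large, "prefer large" if v_large > v_small else "prefer small")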

  6. How much additional model complexity do the use of catchment hydrological signatures, additional data and expert knowledge warrant?

    NASA Astrophysics Data System (ADS)

    Hrachowitz, M.; Fovet, O.; RUIZ, L.; Gascuel-odoux, C.; Savenije, H.

    2013-12-01

    In the frequent absence of sufficient suitable data to constrain hydrological models, it is not uncommon to represent catchments at a range of scales by lumped model set-ups. Although process heterogeneity can average out at the catchment scale to generate simple catchment-integrated responses whose general flow features can frequently be reproduced by lumped models, these models often fail to reproduce details of the flow pattern and catchment-internal dynamics, such as groundwater level changes, to a sufficient degree, resulting in considerable predictive uncertainty. Traditionally, models are constrained by only one or two objective functions, which does not warrant more than a handful of parameters if elevated predictive uncertainty is to be avoided, thereby preventing more complex model set-ups that account for increased process heterogeneity. In this study we tested how much additional process heterogeneity is warranted in models when the model calibration strategy is optimized using additional data and expert knowledge. Long-term time series of flow and groundwater levels for small nested experimental catchments in French Brittany, with considerable differences in geology, topography and flow regime, were used to test which degree of model process heterogeneity is warranted with increased availability of information. In a first step, as a benchmark, the system was treated as one lumped entity and the model was trained based only on its ability to reproduce the hydrograph. Although it was found that the overall modelled flow generally reflects the observed flow response quite well, the internal system dynamics could not be reproduced. In further steps the complexity of this model was gradually increased, first by adding a separate riparian reservoir to the lumped set-up and then by a semi-distributed set-up, allowing for independent, parallel model structures representing the contrasting nested catchments. Although calibration performance increased

  7. Transfer function modeling of damping mechanisms in distributed parameter models

    NASA Technical Reports Server (NTRS)

    Slater, J. C.; Inman, D. J.

    1994-01-01

    This work formulates a method for the modeling of material damping characteristics in distributed parameter models which may be easily applied to models such as rod, plate, and beam equations. The general linear boundary value vibration equation is modified to incorporate hysteresis effects represented by complex stiffness using the transfer function approach proposed by Golla and Hughes. The governing characteristic equations are decoupled through separation of variables yielding solutions similar to those of undamped classical theory, allowing solution of the steady state as well as transient response. Example problems and solutions are provided demonstrating the similarity of the solutions to those of the classical theories and transient responses of nonviscous systems.

  8. Support vector machine to predict diesel engine performance and emission parameters fueled with nano-particles additive to diesel fuel

    NASA Astrophysics Data System (ADS)

    Ghanbari, M.; Najafi, G.; Ghobadian, B.; Mamat, R.; Noor, M. M.; Moosavian, A.

    2015-12-01

    This paper studies the use of an adaptive Support Vector Machine (SVM) to predict the performance parameters and exhaust emissions of a diesel engine operating on nano-diesel blended fuels. In order to predict the engine parameters, the whole experimental data set was randomly divided into training and testing data. For SVM modelling, different values of the radial basis function (RBF) kernel width and the penalty parameter (C) were considered and the optimum values were then found. The results demonstrate that SVM is capable of predicting the diesel engine performance and emissions. In the experimental step, carbon nanotubes (CNT) (40, 80 and 120 ppm) and nano silver particles (40, 80 and 120 ppm) with nanostructure were prepared and added as additives to the diesel fuel. A six-cylinder, four-stroke diesel engine was fuelled with these new blended fuels and operated at different engine speeds. Experimental test results indicated that adding nano particles to diesel fuel increased diesel engine power and torque output. For nano-diesel it was found that the brake specific fuel consumption (bsfc) decreased compared to the neat diesel fuel. The results proved that with an increase of nano particle concentration (from 40 ppm to 120 ppm) in diesel fuel, CO2 emission increased. CO emission with the nano-particle fuels was significantly lower than with pure diesel fuel. UHC emission decreased with the silver nano-diesel blended fuel, while it increased with the fuels containing CNT nano particles. The trend of NOx emission was the inverse of that of UHC emission. With the addition of nano particles to the blended fuels, NOx increased compared to the neat diesel fuel. The tests revealed that silver and CNT nano particles can be used as additives in diesel fuel to improve complete combustion of the fuel and reduce the exhaust emissions significantly.
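
    A hedged sketch of this kind of SVM regression workflow with scikit-learn: an RBF-kernel SVR whose kernel width (gamma) and penalty (C) are tuned by cross-validated grid search on a synthetic stand-in for the engine data; the feature layout and all numbers are hypothetical.

        import numpy as np
        from sklearn.model_selection import GridSearchCV, train_test_split
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import SVR

        # Hypothetical layout: rows are test points, columns are nano-particle
        # dose (ppm) and engine speed (rpm); the target is brake power.
        rng = np.random.default_rng(4)
        X = np.column_stack([rng.choice([40, 80, 120], 200),
                             rng.uniform(1000, 3000, 200)])
        y = 20 + 0.02 * X[:, 0] + 0.005 * X[:, 1] + rng.standard_normal(200)

        X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

        # search over the RBF kernel width (gamma) and penalty parameter C,
        # mirroring the tuning described in the abstract
        model = make_pipeline(StandardScaler(), SVR(kernel="rbf"))
        grid = GridSearchCV(model,
                            {"svr__C": [1, 10, 100], "svr__gamma": [0.01, 0.1, 1.0]},
                            cv=5)
        grid.fit(X_train, y_train)
        print("best parameters:", grid.best_params_, "CV R2:", grid.best_score_)
        print("held-out R2:", grid.score(X_test, y_test))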

  9. Improving a regional model using reduced complexity and parameter estimation

    USGS Publications Warehouse

    Kelson, Victor A.; Hunt, Randall J.; Haitjema, Henk M.

    2002-01-01

    The availability of powerful desktop computers and graphical user interfaces for ground water flow models makes possible the construction of ever more complex models. A proposed copper-zinc sulfide mine in northern Wisconsin offers a unique case in which the same hydrologic system has been modeled using a variety of techniques covering a wide range of sophistication and complexity. Early in the permitting process, simple numerical models were used to evaluate the necessary amount of water to be pumped from the mine, reductions in streamflow, and the drawdowns in the regional aquifer. More complex models have subsequently been used in an attempt to refine the predictions. Even after so much modeling effort, questions regarding the accuracy and reliability of the predictions remain. We have performed a new analysis of the proposed mine using the two-dimensional analytic element code GFLOW coupled with the nonlinear parameter estimation code UCODE. The new model is parsimonious, containing fewer than 10 parameters, and covers a region several times larger in areal extent than any of the previous models. The model demonstrates the suitability of analytic element codes for use with parameter estimation codes. The simplified model results are similar to the more complex models; predicted mine inflows and UCODE-derived 95% confidence intervals are consistent with the previous predictions. More important, the large areal extent of the model allowed us to examine hydrological features not included in the previous models, resulting in new insights about the effects that far-field boundary conditions can have on near-field model calibration and parameterization. In this case, the addition of surface water runoff into a lake in the headwaters of a stream while holding recharge constant moved a regional ground watershed divide and resulted in some of the added water being captured by the adjoining basin. Finally, a simple analytical solution was used to clarify the GFLOW model

  10. Estimation of Time-Varying Pilot Model Parameters

    NASA Technical Reports Server (NTRS)

    Zaal, Peter M. T.; Sweet, Barbara T.

    2011-01-01

    Human control behavior is rarely completely stationary over time due to fatigue or loss of attention. In addition, there are many control tasks for which human operators need to adapt their control strategy to vehicle dynamics that vary in time. In previous studies on the identification of time-varying pilot control behavior, wavelets were used to estimate the time-varying frequency response functions. However, the estimation of time-varying pilot model parameters was not considered. Estimating these parameters can be a valuable tool for the quantification of different aspects of human time-varying manual control. This paper presents two methods for the estimation of time-varying pilot model parameters, a two-step method using wavelets and a windowed maximum likelihood estimation method. The methods are evaluated using simulations of a closed-loop control task with time-varying pilot equalization and vehicle dynamics. Simulations are performed with and without remnant. Both methods give accurate results when no pilot remnant is present. The wavelet transform is very sensitive to measurement noise, resulting in inaccurate parameter estimates when considerable pilot remnant is present. Maximum likelihood estimation is less sensitive to pilot remnant, but cannot detect fast changes in pilot control behavior.
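
    As a much-simplified stand-in for these estimation methods, the sketch below tracks a slowly drifting pilot gain with windowed least squares on synthetic error and response signals; the paper's wavelet and windowed maximum likelihood schemes estimate full pilot models, not a single gain.

        import numpy as np

        # Windowed least-squares stand-in for time-varying parameter estimation:
        # the "pilot" output is u_k = K_k * e_k + remnant, with a gain K_k that
        # drifts during the run.  Refitting over a sliding window tracks the drift.
        rng = np.random.default_rng(5)
        n = 2000
        e = rng.standard_normal(n)                      # tracking-error signal
        K_true = np.linspace(1.0, 2.5, n)               # slowly varying pilot gain
        u = K_true * e + 0.1 * rng.standard_normal(n)   # pilot output with remnant

        window = 200
        K_hat = np.full(n, np.nan)
        for k in range(window, n):
            ew, uw = e[k - window:k], u[k - window:k]
            K_hat[k] = np.dot(ew, uw) / np.dot(ew, ew)  # least-squares gain in window

        print("gain estimate near the end:", K_hat[-1], "true:", K_true[-1])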

  11. Comprehensive European dietary exposure model (CEDEM) for food additives.

    PubMed

    Tennant, David R

    2016-05-01

    European methods for assessing dietary exposures to nutrients, additives and other substances in food are limited by the availability of detailed food consumption data for all member states. A proposed comprehensive European dietary exposure model (CEDEM) applies summary data published by the European Food Safety Authority (EFSA) in a deterministic model based on an algorithm from the EFSA intake method for food additives. The proposed approach can predict estimates of food additive exposure provided in previous EFSA scientific opinions that were based on the full European food consumption database.
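
    The EFSA algorithm itself is not reproduced in this record; the sketch below shows only the generic deterministic form such additive-exposure models take, summing consumption times use level over food categories and dividing by body weight, with hypothetical numbers.

        # Generic deterministic additive-exposure calculation (hypothetical numbers):
        # exposure (mg/kg bw/day) = sum over food categories of
        #   consumption (g/day) * use level (mg/kg food) / 1000 / body weight (kg)
        foods = {
            "soft drinks": {"consumption_g_per_day": 250.0, "use_level_mg_per_kg": 300.0},
            "desserts":    {"consumption_g_per_day": 120.0, "use_level_mg_per_kg": 150.0},
        }
        body_weight_kg = 70.0

        exposure = sum(f["consumption_g_per_day"] * f["use_level_mg_per_kg"] / 1000.0
                       for f in foods.values()) / body_weight_kg
        print(f"estimated exposure: {exposure:.2f} mg/kg bw/day")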

  12. Optimal welding parameters for very high power ultrasonic additive manufacturing of smart structures with aluminum 6061 matrix

    NASA Astrophysics Data System (ADS)

    Wolcott, Paul J.; Hehr, Adam; Dapino, Marcelo J.

    2014-03-01

    Ultrasonic additive manufacturing (UAM) is a recent solid state manufacturing process that combines additive joining of thin metal tapes with subtractive milling operations to generate near net shape metallic parts. Due to the minimal heating during the process, UAM is a proven method of embedding Ni-Ti, Fe-Ga, and PVDF to create active metal matrix composites. Recently, advances in the UAM process utilizing 9 kW very high power (VHP) welding have improved bonding properties, enabling joining of high strength materials previously unweldable with 1 kW low power UAM. Consequently, a design of experiments study was conducted to optimize welding conditions for aluminum 6061 components. This understanding is critical in the design of UAM parts containing smart materials. Build parameters, including weld force, weld speed, amplitude, and temperature, were varied based on a Taguchi experimental design matrix and tested for mechanical strength. Optimal weld parameters were identified with statistical methods including a generalized linear model for analysis of variance (ANOVA), mean effects plots, and interaction effects plots.
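
    A hedged sketch of this style of analysis: a replicated L9-type array with four coded factors, a synthetic strength response, and a main-effects ANOVA fitted as a linear model in statsmodels; the study's actual factor levels and responses are not reproduced.

        import numpy as np
        import pandas as pd
        import statsmodels.api as sm
        from statsmodels.formula.api import ols

        # Hypothetical weld-strength data on a replicated L9-style array
        # (levels coded 1-3; the response is synthetic, for illustration only).
        rng = np.random.default_rng(6)
        l9 = pd.DataFrame({
            "force":       [1, 1, 1, 2, 2, 2, 3, 3, 3],
            "speed":       [1, 2, 3, 1, 2, 3, 1, 2, 3],
            "amplitude":   [1, 2, 3, 2, 3, 1, 3, 1, 2],
            "temperature": [1, 2, 3, 3, 1, 2, 2, 3, 1],
        })
        df = pd.concat([l9, l9], ignore_index=True)          # two replicates
        df["strength"] = (50 + 2.0 * df["force"] + 1.5 * df["speed"]
                          + rng.normal(scale=1.0, size=len(df)))

        # main-effects ANOVA via a linear model with categorical factors
        model = ols("strength ~ C(force) + C(speed) + C(amplitude) + C(temperature)",
                    data=df).fit()
        print(sm.stats.anova_lm(model, typ=2))

        # mean-effects table for one factor (the analogue of a main-effects plot)
        print(df.groupby("force")["strength"].mean())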

  13. Numerical model for thermal parameters in optical materials

    NASA Astrophysics Data System (ADS)

    Sato, Yoichi; Taira, Takunori

    2016-04-01

    Thermal parameters of optical materials, such as the thermal conductivity, thermal expansion, and temperature coefficient of refractive index, play a decisive role in the thermal design inside laser cavities. Therefore, numerical values of these parameters, including their temperature dependence, are quite important for developing high-intensity laser oscillators in which optical materials generate excessive heat across the mode volumes of both the lasing output and the optical pumping. We have already proposed a novel model of thermal conductivity in various optical materials. Thermal conductivity is the product of the isovolumic specific heat and the thermal diffusivity, and independent modeling of these two quantities is required from the viewpoint of clarifying their physical meaning. Our numerical model for thermal conductivity requires one material parameter for the specific heat and two parameters for the thermal diffusivity in the calculation for each optical material. In this work we report thermal conductivities of various optical materials such as Y3Al5O12 (YAG), YVO4 (YVO), GdVO4 (GVO), stoichiometric and congruent LiTaO3, synthetic quartz, YAG ceramics and Y2O3 ceramics. The dependence on Nd3+ doping in the laser gain media YAG, YVO and GVO is also studied. This dependence can be described by only three additional parameters. The temperature dependence of the thermal expansion and of the temperature coefficient of refractive index for YAG, YVO, and GVO is also included in this work for convenience. We think our numerical model is quite useful not only for thermal analysis in laser cavities or optical waveguides but also for the evaluation of physical properties in various transparent materials.

  14. Parameter uncertainty in biochemical models described by ordinary differential equations.

    PubMed

    Vanlier, J; Tiemann, C A; Hilbers, P A J; van Riel, N A W

    2013-12-01

    Improved mechanistic understanding of biochemical networks is one of the driving ambitions of Systems Biology. Computational modeling allows the integration of various sources of experimental data in order to put this conceptual understanding to the test in a quantitative manner. The aim of computational modeling is to obtain both predictive and explanatory models for complex phenomena, hereby providing useful approximations of reality with varying levels of detail. As the complexity required to describe different systems increases, so does the need for determining how well such predictions can be made. Despite efforts to make tools for uncertainty analysis available to the field, these methods have not yet found widespread use in the field of Systems Biology. Additionally, the suitability of the different methods strongly depends on the problem and system under investigation. This review provides an introduction to some of the techniques available as well as an overview of the state-of-the-art methods for parameter uncertainty analysis.

  15. Parameter Estimates in Differential Equation Models for Chemical Kinetics

    ERIC Educational Resources Information Center

    Winkel, Brian

    2011-01-01

    We discuss the need for devoting time in differential equations courses to modelling and the completion of the modelling process with efforts to estimate the parameters in the models using data. We estimate the parameters present in several differential equation models of chemical reactions of order n, where n = 0, 1, 2, and apply more general…

  16. Order-parameter model for unstable multilane traffic flow

    NASA Astrophysics Data System (ADS)

    Lubashevsky, Ihor A.; Mahnke, Reinhard

    2000-11-01

    We discuss a phenomenological approach to the description of unstable vehicle motion on multilane highways that explains in a simple way the observed sequence of the "free flow <--> synchronized mode <--> jam" phase transitions as well as the hysteresis in these transitions. We introduce a variable called an order parameter that accounts for possible correlations in the vehicle motion at different lanes. So, it is principally due to the "many-body" effects in the car interaction in contrast to such variables as the mean car density and velocity being actually the zeroth and first moments of the "one-particle" distribution function. Therefore, we regard the order parameter as an additional independent state variable of traffic flow. We assume that these correlations are due to a small group of "fast" drivers and by taking into account the general properties of the driver behavior we formulate a governing equation for the order parameter. In this context we analyze the instability of homogeneous traffic flow that manifested itself in the above-mentioned phase transitions and gave rise to the hysteresis in both of them. Besides, the jam is characterized by the vehicle flows at different lanes which are independent of one another. We specify a certain simplified model in order to study the general features of the car cluster self-formation under the "free flow <--> synchronized motion" phase transition. In particular, we show that the main local parameters of the developed cluster are determined by the state characteristics of vehicle motion only.

  17. [Temperature dependence of parameters of plant photosynthesis models: a review].

    PubMed

    Borjigidai, Almaz; Yu, Gui-Rui

    2013-12-01

    This paper reviewed the progress on temperature response models of plant photosynthesis. Mechanisms involved in changes in the photosynthesis-temperature curve were discussed based on four parameters: intercellular CO2 concentration, the activation energy of the maximum rate of RuBP (ribulose-1,5-bisphosphate) carboxylation (Vcmax), the activation energy of the rate of RuBP regeneration (Jmax), and the ratio of Jmax to Vcmax. All species increased the activation energy of Vcmax with increasing growth temperature, while the other parameters changed but differed among species, suggesting that the activation energy of Vcmax might be the most important parameter for the temperature response of plant photosynthesis. In addition, research problems and prospects were proposed. It is necessary to combine photosynthesis models at the foliage and community levels, and to investigate the mechanisms of plant responses to global change in terms of leaf area, solar radiation, canopy structure, canopy microclimate and photosynthetic capacity. This would benefit the understanding and quantitative assessment of plant growth, the carbon balance of communities and the primary productivity of ecosystems.
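
    For orientation, the activation-energy parameters discussed here usually enter through the standard Arrhenius response normalized to a reference temperature; the sketch below evaluates that form with an illustrative Ha value for Vcmax (the peaked form with deactivation is omitted).

        import numpy as np

        R = 8.314  # J mol-1 K-1

        def arrhenius(T_kelvin, Ha, T_ref=298.15):
            # Standard Arrhenius temperature response used for Vcmax and Jmax,
            # normalized to 1 at the reference temperature; Ha is the activation
            # energy in J mol-1.
            return np.exp(Ha * (T_kelvin - T_ref) / (T_ref * R * T_kelvin))

        # illustrative activation energy of roughly 65 kJ mol-1 for Vcmax
        for T_c in (15.0, 25.0, 35.0):
            print(T_c, arrhenius(T_c + 273.15, Ha=65000.0))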

  18. Hydrological modeling in alpine catchments: sensing the critical parameters towards an efficient model calibration.

    PubMed

    Achleitner, S; Rinderer, M; Kirnbauer, R

    2009-01-01

    For the Tyrolean part of the river Inn, a hybrid model for flood forecasting has been set up and is currently in its test phase. The system comprises a hydraulic 1D model of the river Inn and the hydrological models HQsim (rainfall-runoff-discharge model) and the snow and ice melt model SES, which model the rainfall runoff from non-glaciated and glaciated tributary catchments, respectively. Within this paper the focus is put on the hydrological modeling of the 49 connected non-glaciated catchments, realized with the software HQsim. In the course of model calibration, the identification of the most sensitive parameters is important, aiming at an efficient calibration procedure. The indicators used for explaining the parameter sensitivities were chosen specifically for the purpose of flood forecasting. Finally, five model parameters could be identified as being sensitive for model calibration when aiming for a model well calibrated for flood conditions. In addition, two parameters were identified which are sensitive in situations where the snow line plays an important role.

  19. Hydrological modeling in alpine catchments: sensing the critical parameters towards an efficient model calibration.

    PubMed

    Achleitner, S; Rinderer, M; Kirnbauer, R

    2009-01-01

    For the Tyrolean part of the river Inn, a hybrid model for flood forecasting has been set up and is currently in its test phase. The system comprises a hydraulic 1D model of the river Inn and the hydrological models HQsim (rainfall-runoff-discharge model) and the snow and ice melt model SES, which model the rainfall runoff from non-glaciated and glaciated tributary catchments, respectively. Within this paper the focus is put on the hydrological modeling of the 49 connected non-glaciated catchments, realized with the software HQsim. In the course of model calibration, the identification of the most sensitive parameters is important, aiming at an efficient calibration procedure. The indicators used for explaining the parameter sensitivities were chosen specifically for the purpose of flood forecasting. Finally, five model parameters could be identified as being sensitive for model calibration when aiming for a model well calibrated for flood conditions. In addition, two parameters were identified which are sensitive in situations where the snow line plays an important role. PMID:19759453

  20. Effect of a phytogenic feed additive on performance, ovarian morphology, serum lipid parameters and egg sensory quality in laying hen

    PubMed Central

    Saki, Ali Asghar; Aliarabi, Hassan; Hosseini Siyar, Sayed Ali; Salari, Jalal; Hashemi, Mahdi

    2014-01-01

    This study was conducted to evaluate the effects of dietary inclusion of 4, 8 and 12 g kg-1 of a phytogenic feed additive mixture on performance, egg quality, ovary parameters, serum biochemical parameters and yolk trimethylamine level in laying hens. The results of the experiment showed that egg weight was increased by supplementation with 12 g kg-1 of the feed additive, whereas egg production, feed intake and feed conversion ratio (FCR) were not significantly affected. There were no significant differences in egg quality parameters with supplementation of the phytogenic feed additive, whereas the yolk trimethylamine level decreased as the feed additive level increased. The sensory evaluation parameters did not differ significantly. No significant differences were found in serum cholesterol and triglyceride levels between the treatments, but low- and high-density lipoprotein were significantly increased. The number of small follicles and the ovary weight were significantly increased by supplementation with 12 g kg-1 of the feed additive. Overall, dietary supplementation with the polyherbal additive increased egg weight, improved ovary characteristics and decreased the yolk trimethylamine level. PMID:25610580

  1. Effect of argon addition on plasma parameters and dust charging in hydrogen plasma

    SciTech Connect

    Kakati, B.; Kausik, S. S.; Saikia, B. K.; Bandyopadhyay, M.; Saxena, Y. C.

    2014-10-28

    Experimental results on the effect of adding argon gas to hydrogen plasma in a multi-cusp dusty plasma device are reported. The addition of argon modifies the plasma density, electron temperature, degree of hydrogen dissociation, dust current and dust charge. From the dust charging profile, it is observed that the dust current and dust charge decrease significantly as argon is added, up to 40% of the flow rate in the hydrogen plasma; beyond 40% argon flow rate, the changes in dust current and dust charge are insignificant. The results show that the addition of argon to hydrogen plasma in a dusty plasma device can be used as a tool to control dust charging in a low-pressure dusty plasma.

  2. Parameter redundancy in discrete state‐space and integrated models

    PubMed Central

    McCrea, Rachel S.

    2016-01-01

    Discrete state‐space models are used in ecology to describe the dynamics of wild animal populations, with parameters, such as the probability of survival, being of ecological interest. For a particular parametrization of a model it is not always clear which parameters can be estimated. This inability to estimate all parameters is known as parameter redundancy; such a model is also described as nonidentifiable. In this paper we develop methods that can be used to detect parameter redundancy in discrete state‐space models. An exhaustive summary is a combination of parameters that fully specify a model. To use general methods for detecting parameter redundancy a suitable exhaustive summary is required. This paper proposes two methods for the derivation of an exhaustive summary for discrete state‐space models using discrete analogues of methods for continuous state‐space models. We also demonstrate that combining multiple data sets, through the use of an integrated population model, may result in a model in which all parameters are estimable, even though models fitted to the separate data sets may be parameter redundant. PMID:27362826

  3. Parameter redundancy in discrete state-space and integrated models.

    PubMed

    Cole, Diana J; McCrea, Rachel S

    2016-09-01

    Discrete state-space models are used in ecology to describe the dynamics of wild animal populations, with parameters, such as the probability of survival, being of ecological interest. For a particular parametrization of a model it is not always clear which parameters can be estimated. This inability to estimate all parameters is known as parameter redundancy; such a model is also described as nonidentifiable. In this paper we develop methods that can be used to detect parameter redundancy in discrete state-space models. An exhaustive summary is a combination of parameters that fully specify a model. To use general methods for detecting parameter redundancy a suitable exhaustive summary is required. This paper proposes two methods for the derivation of an exhaustive summary for discrete state-space models using discrete analogues of methods for continuous state-space models. We also demonstrate that combining multiple data sets, through the use of an integrated population model, may result in a model in which all parameters are estimable, even though models fitted to the separate data sets may be parameter redundant.
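
    The symbolic rank test behind such methods can be sketched briefly: form the Jacobian of an exhaustive summary with respect to the parameters and compare its rank with the number of parameters. The toy example below, in which only a product of two parameters appears, is deliberately redundant; real exhaustive summaries are model-specific.

        import sympy as sp

        # Toy exhaustive summary in which only the product phi*p appears, so the
        # two parameters cannot be estimated separately.
        phi, p = sp.symbols("phi p", positive=True)
        kappa = sp.Matrix([phi * p, (phi * p) ** 2])   # exhaustive-summary terms

        D = kappa.jacobian([phi, p])
        print("rank:", D.rank(), "of", 2)              # rank 1 < 2 -> parameter redundant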

  4. Estimation Methods for One-Parameter Testlet Models

    ERIC Educational Resources Information Center

    Jiao, Hong; Wang, Shudong; He, Wei

    2013-01-01

    This study demonstrated the equivalence between the Rasch testlet model and the three-level one-parameter testlet model and explored the Markov Chain Monte Carlo (MCMC) method for model parameter estimation in WINBUGS. The estimation accuracy from the MCMC method was compared with those from the marginalized maximum likelihood estimation (MMLE)…

  5. Seamless continental-domain hydrologic model parameter estimations with Multi-Scale Parameter Regionalization

    NASA Astrophysics Data System (ADS)

    Mizukami, Naoki; Clark, Martyn; Newman, Andrew; Wood, Andy

    2016-04-01

    Estimation of spatially distributed parameters is one of the biggest challenges in hydrologic modeling over a large spatial domain. This problem arises from methodological challenges such as the transfer of calibrated parameters to ungauged locations. Consequently, many current large-scale hydrologic assessments rely on spatially inconsistent parameter fields showing patchwork patterns resulting from individual basin calibration, or on spatially constant parameters resulting from the adoption of default or a-priori estimates. In this study we apply the Multi-scale Parameter Regionalization (MPR) framework (Samaniego et al., 2010) to generate spatially continuous and optimized parameter fields for the Variable Infiltration Capacity (VIC) model over the contiguous United States (CONUS). The MPR method uses transfer functions that relate geophysical attributes (e.g., soil) to model parameters (e.g., parameters that describe the storage and transmission of water) at the native resolution of the geophysical attribute data and then scales them to the model spatial resolution with one of several scaling functions, e.g., the arithmetic mean, harmonic mean, or geometric mean. Model parameter adjustments are made by calibrating the parameters of the transfer function rather than the model parameters themselves. In this presentation, we first discuss conceptual challenges in a "model agnostic" continental-domain application of the MPR approach. We describe development of transfer functions for the soil parameters, and discuss challenges associated with extending MPR for VIC to multiple models. Next, we discuss the "computational shortcut" of headwater basin calibration where we estimate the parameters for only 500 headwater basins rather than conducting simulations for every grid box across the entire domain. We first performed individual basin calibration to obtain a benchmark of the maximum achievable performance in each basin, and examined their transferability to the other basins. We then
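
    A minimal MPR-flavored sketch: a transfer function maps a fine-resolution attribute to a parameter field, which is then upscaled to the model grid with a chosen operator; the functional form, coefficients, and attribute below are illustrative, not those used for VIC.

        import numpy as np

        # The coefficients a and b are the quantities that would be calibrated,
        # not the parameter values themselves (functional form is illustrative).
        def transfer_function(clay_fraction, a, b):
            return a + b * clay_fraction          # e.g. a storage-capacity parameter

        def upscale(fine_field, block, how="harmonic"):
            blocks = fine_field.reshape(-1, block)
            if how == "harmonic":
                return block / np.sum(1.0 / blocks, axis=1)
            if how == "geometric":
                return np.exp(np.mean(np.log(blocks), axis=1))
            return np.mean(blocks, axis=1)        # arithmetic

        rng = np.random.default_rng(7)
        clay = rng.uniform(0.05, 0.45, 64)        # fine-resolution attribute, 8 cells per model cell
        fine_param = transfer_function(clay, a=0.1, b=2.0)
        print(upscale(fine_param, block=8, how="harmonic"))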

  6. Modeling the cardiovascular system using a nonlinear additive autoregressive model with exogenous input

    NASA Astrophysics Data System (ADS)

    Riedl, M.; Suhrbier, A.; Malberg, H.; Penzel, T.; Bretthauer, G.; Kurths, J.; Wessel, N.

    2008-07-01

    The parameters of heart rate variability and blood pressure variability have proved to be useful analytical tools in cardiovascular physics and medicine. Model-based analysis of these variabilities additionally leads to new prognostic information about mechanisms behind regulations in the cardiovascular system. In this paper, we analyze the complex interaction between heart rate, systolic blood pressure, and respiration by nonparametrically fitted nonlinear additive autoregressive models with external inputs. To this end, we consider measurements of healthy persons and of patients suffering from obstructive sleep apnea syndrome (OSAS), with and without hypertension. It is shown that the proposed nonlinear models are capable of describing short-term fluctuations in heart rate as well as systolic blood pressure significantly better than similar linear ones, which confirms the assumption of nonlinearly controlled heart rate and blood pressure. Furthermore, the comparison of the nonlinear and linear approaches reveals that the heart rate and blood pressure variability in healthy subjects is caused by higher levels of noise and nonlinearity than in patients suffering from OSAS. The residue analysis points to a further source of heart rate and blood pressure variability in healthy subjects, in addition to heart rate, systolic blood pressure, and respiration. Comparison of the nonlinear models within and among the different groups of subjects suggests an ability to discriminate between the cohorts, which could lead to a stratification of hypertension risk in OSAS patients.
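
    A crude, hedged stand-in for this model class is sketched below: each lagged term of heart rate, blood pressure, and respiration enters additively through its own nonlinear basis (simple polynomials here, where the paper uses nonparametric fits). The signals and the generating relations are synthetic and purely illustrative.

    ```python
    # Crude stand-in for a nonlinear additive autoregressive model with exogenous
    # input: each lagged term enters through its own nonlinear basis (cubic
    # polynomials here; the paper uses nonparametric fits).  Signals are synthetic.
    import numpy as np

    def basis(x, degree=3):
        return np.column_stack([x ** d for d in range(1, degree + 1)])

    def fit_naarx(hr, sbp, resp, lag=2):
        """Predict heart rate from lagged heart rate (AR part) plus lagged
        systolic blood pressure and respiration (exogenous inputs)."""
        rows = []
        for t in range(lag, len(hr)):
            cols = [basis(np.array([s[t - k]])) for s in (hr, sbp, resp)
                    for k in range(1, lag + 1)]
            rows.append(np.hstack(cols).ravel())
        X = np.hstack([np.ones((len(rows), 1)), np.vstack(rows)])   # intercept
        y = hr[lag:]
        coef, *_ = np.linalg.lstsq(X, y, rcond=None)
        return coef, X @ coef                                        # fitted values

    rng = np.random.default_rng(1)
    n = 500
    resp = np.sin(2 * np.pi * 0.05 * np.arange(n))                   # slow respiration
    sbp = 120 + 5 * resp + rng.normal(0, 1, n)
    hr = 60 + 0.05 * (sbp - 120) ** 2 + rng.normal(0, 0.5, n)
    coef, fitted = fit_naarx(hr, sbp, resp)
    print(np.corrcoef(hr[2:], fitted)[0, 1])                         # in-sample fit
    ```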

  7. Adjoint method for estimating Jiles-Atherton hysteresis model parameters

    NASA Astrophysics Data System (ADS)

    Zaman, Mohammad Asif; Hansen, Paul C.; Neustock, Lars T.; Padhy, Punnag; Hesselink, Lambertus

    2016-09-01

    A computationally efficient method for identifying the parameters of the Jiles-Atherton hysteresis model is presented. Adjoint analysis is used in conjunction with an accelerated gradient descent optimization algorithm. The proposed method is used to estimate the Jiles-Atherton model parameters of two different materials. The obtained results are found to be in good agreement with the reported values. Compared with existing methods of model parameter estimation, the proposed method is found to be computationally efficient and fast-converging.
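
    As a hedged illustration of gradient-based parameter identification (not the paper's adjoint machinery), the sketch below fits the two parameters of the anhysteretic Langevin curve that appears inside the Jiles-Atherton model; a generic gradient-based optimizer with numerical gradients stands in for the accelerated gradient descent and adjoint gradients, and the data are synthetic.

    ```python
    # Sketch of gradient-based identification of two parameters (Ms, a) of the
    # anhysteretic Langevin curve used inside the Jiles-Atherton model.  L-BFGS-B
    # with numerical gradients stands in for the paper's accelerated gradient
    # descent with adjoint gradients; the measurement is synthetic.
    import numpy as np
    from scipy.optimize import minimize

    def m_anhysteretic(H, Ms, a):
        x = H / a
        return Ms * (1.0 / np.tanh(x) - 1.0 / x)

    H = np.linspace(50.0, 5000.0, 200)                 # applied field, A/m
    m_obs = m_anhysteretic(H, 1.6e6, 1100.0)           # synthetic "measurement"

    def loss(params):
        Ms, a = params
        return np.mean((m_anhysteretic(H, Ms, a) - m_obs) ** 2)

    result = minimize(loss, x0=[1.0e6, 800.0], method="L-BFGS-B",
                      bounds=[(1e5, 1e7), (10.0, 1e4)])
    print(result.x)                                    # should approach [1.6e6, 1100.0]
    ```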

  8. Modeling Errors in Daily Precipitation Measurements: Additive or Multiplicative?

    NASA Technical Reports Server (NTRS)

    Tian, Yudong; Huffman, George J.; Adler, Robert F.; Tang, Ling; Sapiano, Matthew; Maggioni, Viviana; Wu, Huan

    2013-01-01

    The definition and quantification of uncertainty depend on the error model used. For uncertainties in precipitation measurements, two types of error models have been widely adopted: the additive error model and the multiplicative error model. This leads to incompatible specifications of uncertainties and impedes intercomparison and application. In this letter, we assess the suitability of both models for satellite-based daily precipitation measurements in an effort to clarify the uncertainty representation. Three criteria were employed to evaluate the applicability of either model: (1) better separation of the systematic and random errors; (2) applicability to the large range of variability in daily precipitation; and (3) better predictive skills. It is found that the multiplicative error model is a much better choice under all three criteria. It extracted the systematic errors more cleanly, was more consistent with the large variability of precipitation measurements, and produced superior predictions of the error characteristics. The additive error model had several weaknesses, such as non-constant variance resulting from systematic errors leaking into random errors, and a lack of predictive capability. Therefore, the multiplicative error model is a better choice.
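
    For orientation, the two error models being compared are commonly written as follows, with X the reference value, Y the measurement, A and B systematic terms, and ε a zero-mean random term (a generic statement, not a verbatim reproduction of the letter's equations); the multiplicative model becomes additive, and is fitted, in log space:

    ```latex
    \text{Additive:}\quad Y = X + A + \varepsilon
    \qquad\qquad
    \text{Multiplicative:}\quad Y = X\,e^{B+\varepsilon}
    \;\Longleftrightarrow\; \ln Y = \ln X + B + \varepsilon
    ```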

  9. Multi-criteria parameter estimation for the Unified Land Model

    NASA Astrophysics Data System (ADS)

    Livneh, B.; Lettenmaier, D. P.

    2012-08-01

    We describe a parameter estimation framework for the Unified Land Model (ULM) that utilizes multiple independent data sets over the continental United States. These include a satellite-based evapotranspiration (ET) product based on MODerate resolution Imaging Spectroradiometer (MODIS) and Geostationary Operational Environmental Satellites (GOES) imagery, an atmospheric-water balance based ET estimate that utilizes North American Regional Reanalysis (NARR) atmospheric fields, terrestrial water storage content (TWSC) data from the Gravity Recovery and Climate Experiment (GRACE), and streamflow (Q) primarily from the United States Geological Survey (USGS) stream gauges. The study domain includes 10 large-scale (≥10⁵ km²) river basins and 250 smaller-scale (<10⁴ km²) tributary basins. ULM, which is essentially a merger of the Noah Land Surface Model and Sacramento Soil Moisture Accounting Model, is the basis for these experiments. Calibrations were made using each of the data sets individually, in addition to combinations of multiple criteria, with multi-criteria skill scores computed for all cases. At large scales, calibration to Q resulted in the best overall performance, whereas certain combinations of ET and TWSC calibrations led to large errors in other criteria. At small scales, about one-third of the basins had their highest Q performance from multi-criteria calibrations (to Q and ET), suggesting that traditional calibration to Q may benefit from supplementing observed Q with remote sensing estimates of ET. Model streamflow errors using optimized parameters were mostly due to over- (under-) estimation of low (high) flows. Overall, uncertainties in remote-sensing data proved to be a limiting factor in the utility of multi-criteria parameter estimation.

  10. Multi-criteria parameter estimation for the unified land model

    NASA Astrophysics Data System (ADS)

    Livneh, B.; Lettenmaier, D. P.

    2012-04-01

    We describe a parameter estimation framework for the Unified Land Model (ULM) that utilizes multiple independent data sets over the continental United States. These include a satellite-based evapotranspiration (ET) product based on MODerate resolution Imaging Spectroradiometer (MODIS) and Geostationary Operational Environmental Satellites (GOES) imagery, an atmospheric-water balance based ET estimate that utilizes North American Regional Reanalysis (NARR) atmospheric fields, terrestrial water storage content (TWSC) data from the Gravity Recovery and Climate Experiment (GRACE), and streamflow (Q) primarily from the United States Geological Survey (USGS) stream gauges. The study domain includes 10 large-scale (≥10⁵ km²) river basins and 250 smaller-scale (<10⁴ km²) tributary basins. ULM, which is essentially a merger of the Noah Land Surface Model and Sacramento Soil Moisture Accounting Model, is the basis for these experiments. Calibrations were made using each of the criteria individually, in addition to combinations of multiple criteria, with multi-criteria skill scores computed for all cases. At large scales, calibration to Q resulted in the best overall performance, whereas certain combinations of ET and TWSC calibrations led to large errors in other criteria. At small scales, about one-third of the basins had their highest Q performance from multi-criteria calibrations (to Q and ET), suggesting that traditional calibration to Q may benefit from supplementing observed Q with remote sensing estimates of ET. Model streamflow errors using optimized parameters were mostly due to over- (under-) estimation of low (high) flows. Overall, uncertainties in remote-sensing data proved to be a limiting factor in the utility of multi-criteria parameter estimation.

  11. An Additional Symmetry in the Weinberg-Salam Model

    SciTech Connect

    Bakker, B.L.G.; Veselov, A.I.; Zubkov, M.A.

    2005-06-01

    An additional Z₆ symmetry hidden in the fermion and Higgs sectors of the Standard Model has been found recently. It has a singular nature and is connected to the centers of the SU(3) and SU(2) subgroups of the gauge group. A lattice regularization of the Standard Model was constructed that possesses this symmetry. In this paper, we report our results on the numerical simulation of its electroweak sector.

  12. Modeling uranium transport in acidic contaminated groundwater with base addition

    SciTech Connect

    Zhang, Fan; Luo, Wensui; Parker, Jack C.; Brooks, Scott C; Watson, David B; Jardine, Philip; Gu, Baohua

    2011-01-01

    This study investigates reactive transport modeling in a column of uranium(VI)-contaminated sediments with base additions in the circulating influent. The groundwater and sediment exhibit oxic conditions with low pH, high concentrations of NO₃⁻, SO₄²⁻, U and various metal cations. Preliminary batch experiments indicate that additions of strong base induce rapid immobilization of U for this material. In the column experiment that is the focus of the present study, effluent groundwater was titrated with NaOH solution in an inflow reservoir before reinjection to gradually increase the solution pH in the column. An equilibrium hydrolysis, precipitation and ion exchange reaction model developed through simulation of the preliminary batch titration experiments predicted faster reduction of aqueous Al than observed in the column experiment. The model was therefore modified to consider reaction kinetics for the precipitation and dissolution processes which are the major mechanism for Al immobilization. The combined kinetic and equilibrium reaction model adequately described variations in pH, aqueous concentrations of metal cations (Al, Ca, Mg, Sr, Mn, Ni, Co), sulfate and U(VI). The experimental and modeling results indicate that U(VI) can be effectively sequestered with controlled base addition due to sorption by slowly precipitated Al with pH-dependent surface charge. The model may prove useful to predict field-scale U(VI) sequestration and remediation effectiveness.

  13. Modeling uranium transport in acidic contaminated groundwater with base addition.

    PubMed

    Zhang, Fan; Luo, Wensui; Parker, Jack C; Brooks, Scott C; Watson, David B; Jardine, Philip M; Gu, Baohua

    2011-06-15

    This study investigates reactive transport modeling in a column of uranium(VI)-contaminated sediments with base additions in the circulating influent. The groundwater and sediment exhibit oxic conditions with low pH, high concentrations of NO₃⁻, SO₄²⁻, U and various metal cations. Preliminary batch experiments indicate that additions of strong base induce rapid immobilization of U for this material. In the column experiment that is the focus of the present study, effluent groundwater was titrated with NaOH solution in an inflow reservoir before reinjection to gradually increase the solution pH in the column. An equilibrium hydrolysis, precipitation and ion exchange reaction model developed through simulation of the preliminary batch titration experiments predicted faster reduction of aqueous Al than observed in the column experiment. The model was therefore modified to consider reaction kinetics for the precipitation and dissolution processes which are the major mechanism for Al immobilization. The combined kinetic and equilibrium reaction model adequately described variations in pH, aqueous concentrations of metal cations (Al, Ca, Mg, Sr, Mn, Ni, Co), sulfate and U(VI). The experimental and modeling results indicate that U(VI) can be effectively sequestered with controlled base addition due to sorption by slowly precipitated Al with pH-dependent surface charge. The model may prove useful to predict field-scale U(VI) sequestration and remediation effectiveness.

  14. Model-Based MR Parameter Mapping with Sparsity Constraints: Parameter Estimation and Performance Bounds

    PubMed Central

    Zhao, Bo; Lam, Fan; Liang, Zhi-Pei

    2014-01-01

    MR parameter mapping (e.g., T1 mapping, T2 mapping, T2∗ mapping) is a valuable tool for tissue characterization. However, its practical utility has been limited due to long data acquisition times. This paper addresses this problem with a new model-based parameter mapping method. The proposed method utilizes a formulation that integrates the explicit signal model with sparsity constraints on the model parameters, enabling direct estimation of the parameters of interest from highly undersampled, noisy k-space data. An efficient greedy-pursuit algorithm is described to solve the resulting constrained parameter estimation problem. Estimation-theoretic bounds are also derived to analyze the benefits of incorporating sparsity constraints and benchmark the performance of the proposed method. The theoretical properties and empirical performance of the proposed method are illustrated in a T2 mapping application example using computer simulations. PMID:24833520

  15. Bayesian analysis of inflation: Parameter estimation for single field models

    SciTech Connect

    Mortonson, Michael J.; Peiris, Hiranya V.; Easther, Richard

    2011-02-15

    Future astrophysical data sets promise to strengthen constraints on models of inflation, and extracting these constraints requires methods and tools commensurate with the quality of the data. In this paper we describe ModeCode, a new, publicly available code that computes the primordial scalar and tensor power spectra for single-field inflationary models. ModeCode solves the inflationary mode equations numerically, avoiding the slow roll approximation. It is interfaced with CAMB and CosmoMC to compute cosmic microwave background angular power spectra and perform likelihood analysis and parameter estimation. ModeCode is easily extendable to additional models of inflation, and future updates will include Bayesian model comparison. Errors from ModeCode contribute negligibly to the error budget for analyses of data from Planck or other next generation experiments. We constrain representative single-field models (φⁿ with n = 2/3, 1, 2, and 4, natural inflation, and 'hilltop' inflation) using current data, and provide forecasts for Planck. From current data, we obtain weak but nontrivial limits on the post-inflationary physics, which is a significant source of uncertainty in the predictions of inflationary models, while we find that Planck will dramatically improve these constraints. In particular, Planck will link the inflationary dynamics with the post-inflationary growth of the horizon, and thus begin to probe the 'primordial dark ages' between TeV and grand unified theory scale energies.

  16. Non-additive model for specific heat of electrons

    NASA Astrophysics Data System (ADS)

    Anselmo, D. H. A. L.; Vasconcelos, M. S.; Silva, R.; Mello, V. D.

    2016-10-01

    By using the non-additive Tsallis entropy S_q we demonstrate numerically that one-dimensional quasicrystals, whose energy spectra are multifractal Cantor sets, are characterized by an entropic parameter, and we calculate the corresponding electronic specific heat. In our method we consider energy spectra calculated using the one-dimensional tight-binding Schrödinger equation, with the bands (or levels) scaled onto the [0, 1] interval. The Tsallis formalism is applied to the energy spectra of Fibonacci and double-period one-dimensional quasiperiodic lattices. We analytically obtain an expression for the specific heat that we consider more appropriate for calculating this quantity in those quasiperiodic structures.
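
    For reference, the non-additive entropy referred to here is the standard Tsallis form with entropic index q; the Boltzmann-Gibbs-Shannon entropy is recovered as q → 1, and the pseudo-additivity rule in the last expression, for independent subsystems A and B, is what makes S_q non-additive:

    ```latex
    S_q = k_B\,\frac{1-\sum_i p_i^{\,q}}{q-1},
    \qquad
    \lim_{q\to 1} S_q = -k_B\sum_i p_i \ln p_i,
    \qquad
    S_q(A{+}B) = S_q(A) + S_q(B) + \frac{1-q}{k_B}\,S_q(A)\,S_q(B)
    ```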

  17. Using Set Model for Learning Addition of Integers

    ERIC Educational Resources Information Center

    Lestari, Umi Puji; Putri, Ratu Ilma Indra; Hartono, Yusuf

    2015-01-01

    This study aims to investigate how the set model can help fourth-grade students' understanding of the addition of integers. The study has been carried out with 23 students and a teacher of IVC SD Iba Palembang in January 2015. This study is a design research that also promotes PMRI as the underlying design context and activity. Results showed that the…

  18. Effects of additional food in a delayed predator-prey model.

    PubMed

    Sahoo, Banshidhar; Poria, Swarup

    2015-03-01

    We examine the effects of supplying additional food to the predator in a gestation-delay-induced predator-prey system with habitat complexity. In our model, additional food works in favor of predator growth and reduces the predator's attack rate on prey, so the predator population can be controlled by supplying additional food. Taking the time delay as the bifurcation parameter, the stability of the coexisting equilibrium point is analyzed. Hopf bifurcation analysis is done with respect to the time delay in the presence of additional food. The direction of the Hopf bifurcations and the stability of the bifurcated periodic solutions are determined by applying the normal form theory and the center manifold theorem. The qualitative dynamical behavior of the model is simulated using experimental parameter values. It is observed that fluctuations of the population size can be controlled either by supplying additional food suitably or by increasing the degree of habitat complexity. It is pointed out that a Hopf bifurcation occurs in the system when the delay crosses a critical value, and that this critical value depends strongly on the quality and quantity of the supplied additional food. Therefore, the variation of the predator population significantly affects the dynamics of the model. Model results are compared with experimental results and the biological implications of the analytical findings are discussed in the conclusion section.
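
    The simulation sketch below is a hypothetical illustration of this kind of study: a generic Holling type II predator-prey system with a gestation delay and an additional-food term, stepped with a simple Euler scheme so that the delayed state can be read from the stored history. The equations and parameter values are placeholders, not the paper's model.

    ```python
    # Illustrative simulation of a predator-prey system with a gestation delay
    # and additional food supplied to the predator.  All functional forms and
    # parameter values are generic placeholders, not the paper's equations.
    import numpy as np

    def simulate(tau, A=0.4, T=400.0, dt=0.01):
        """tau: gestation delay; A: quantity of additional food."""
        r, K, c, b, e, d = 1.0, 5.0, 0.8, 1.0, 0.6, 0.4
        n = int(T / dt)
        lag = int(tau / dt)
        x = np.full(n, 1.0)          # prey (constant initial history)
        y = np.full(n, 0.5)          # predator
        for t in range(max(lag, 1), n - 1):
            f_now = c * x[t] / (b + x[t] + A)                # attack rate, reduced by A
            f_lag = c * x[t - lag] / (b + x[t - lag] + A)
            gain = e * (f_lag + A / (b + x[t - lag] + A))    # prey plus additional food
            x[t + 1] = x[t] + dt * (r * x[t] * (1 - x[t] / K) - f_now * y[t])
            y[t + 1] = y[t] + dt * (gain * y[t - lag] - d * y[t])
        return x, y

    # Sweeping the delay probes when sustained oscillations set in, i.e. where
    # a Hopf bifurcation of the kind discussed above occurs for this toy model.
    for tau in (0.5, 3.0):
        x, y = simulate(tau)
        print(tau, float(x[-2000:].std()), float(y[-2000:].std()))
    ```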

  19. Summary of the DREAM8 Parameter Estimation Challenge: Toward Parameter Identification for Whole-Cell Models

    PubMed Central

    Karr, Jonathan R.; Williams, Alex H.; Zucker, Jeremy D.; Raue, Andreas; Steiert, Bernhard; Timmer, Jens; Kreutz, Clemens; Wilkinson, Simon; Allgood, Brandon A.; Bot, Brian M.; Hoff, Bruce R.; Kellen, Michael R.; Covert, Markus W.; Stolovitzky, Gustavo A.; Meyer, Pablo

    2015-01-01

    Whole-cell models that explicitly represent all cellular components at the molecular level have the potential to predict phenotype from genotype. However, even for simple bacteria, whole-cell models will contain thousands of parameters, many of which are poorly characterized or unknown. New algorithms are needed to estimate these parameters and enable researchers to build increasingly comprehensive models. We organized the Dialogue for Reverse Engineering Assessments and Methods (DREAM) 8 Whole-Cell Parameter Estimation Challenge to develop new parameter estimation algorithms for whole-cell models. We asked participants to identify a subset of parameters of a whole-cell model given the model’s structure and in silico “experimental” data. Here we describe the challenge, the best performing methods, and new insights into the identifiability of whole-cell models. We also describe several valuable lessons we learned toward improving future challenges. Going forward, we believe that collaborative efforts supported by inexpensive cloud computing have the potential to solve whole-cell model parameter estimation. PMID:26020786

  20. Parameter estimation in deformable models using Markov chain Monte Carlo

    NASA Astrophysics Data System (ADS)

    Chalana, Vikram; Haynor, David R.; Sampson, Paul D.; Kim, Yongmin

    1997-04-01

    Deformable models have gained much popularity recently for many applications in medical imaging, such as image segmentation, image reconstruction, and image registration. Such models are very powerful because various kinds of information can be integrated together in an elegant statistical framework. Each such piece of information is typically associated with a user-defined parameter. The values of these parameters can have a significant effect on the results generated using these models. Despite the popularity of deformable models for various applications, not much attention has been paid to the estimation of these parameters. In this paper we describe systematic methods for the automatic estimation of these deformable model parameters. These methods are derived by posing the deformable models as a Bayesian inference problem. Our parameter estimation methods use Markov chain Monte Carlo methods for generating samples from highly complex probability distributions.
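
    A minimal sketch of the Markov chain Monte Carlo ingredient is given below: a generic random-walk Metropolis sampler drawing a single positive parameter (for example, a smoothness weight) from a posterior. The log-posterior used here is a hypothetical stand-in; in an application it would score a contour's internal energy together with its fit to the image.

    ```python
    # Generic random-walk Metropolis sampler of the kind used to draw deformable
    # model parameters from a complex posterior.  The log-posterior below is a
    # hypothetical stand-in for an image/contour energy.
    import numpy as np

    def log_posterior(theta):
        # Hypothetical skewed, single-mode density over a positive weight.
        if theta <= 0:
            return -np.inf
        return -0.5 * ((np.log(theta) - 1.0) / 0.5) ** 2 - np.log(theta)

    def metropolis(log_p, theta0, n_samples=20000, step=0.5, seed=0):
        rng = np.random.default_rng(seed)
        samples = np.empty(n_samples)
        theta, lp = theta0, log_p(theta0)
        for i in range(n_samples):
            proposal = theta + step * rng.normal()
            lp_prop = log_p(proposal)
            if np.log(rng.uniform()) < lp_prop - lp:     # accept / reject
                theta, lp = proposal, lp_prop
            samples[i] = theta
        return samples

    draws = metropolis(log_posterior, theta0=1.0)[5000:]   # discard burn-in
    print(draws.mean(), np.quantile(draws, [0.05, 0.95]))
    ```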

  1. Agricultural and Environmental Input Parameters for the Biosphere Model

    SciTech Connect

    Kaylie Rasmuson; Kurt Rautenstrauch

    2003-06-20

    This analysis is one of nine technical reports that support the Environmental Radiation Model for Yucca Mountain Nevada (ERMYN) biosphere model. It documents input parameters for the biosphere model, and supports the use of the model to develop Biosphere Dose Conversion Factors (BDCF). The biosphere model is one of a series of process models supporting the Total System Performance Assessment (TSPA) for the repository at Yucca Mountain. The ERMYN provides the TSPA with the capability to perform dose assessments. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows the interrelationships between the major activities and their products (the analysis and model reports) that were planned in the biosphere Technical Work Plan (TWP, BSC 2003a). It should be noted that some documents identified in Figure 1-1 may be under development and therefore not available at the time this document is issued. The 'Biosphere Model Report' (BSC 2003b) describes the ERMYN and its input parameters. This analysis report, ANL-MGR-MD-000006, 'Agricultural and Environmental Input Parameters for the Biosphere Model', is one of the five reports that develop input parameters for the biosphere model. This report defines and justifies values for twelve parameters required in the biosphere model. These parameters are related to use of contaminated groundwater to grow crops. The parameter values recommended in this report are used in the soil, plant, and carbon-14 submodels of the ERMYN.

  2. PET-Specific Parameters and Radiotracers in Theoretical Tumour Modelling

    PubMed Central

    Marcu, Loredana G.; Bezak, Eva

    2015-01-01

    The innovation of computational techniques serves as an important step toward optimized, patient-specific management of cancer. In particular, in silico simulation of tumour growth and treatment response may eventually yield accurate information on disease progression, enhance the quality of cancer treatment, and explain why certain therapies are effective where others are not. In silico modelling is demonstrated to considerably benefit from information obtainable with PET and PET/CT. In particular, models have successfully integrated tumour glucose metabolism, cell proliferation, and cell oxygenation from multiple tracers in order to simulate tumour behaviour. With the development of novel radiotracers to image additional tumour phenomena, such as pH and gene expression, the value of PET and PET/CT data for use in tumour models will continue to grow. In this work, the use of PET and PET/CT information in in silico tumour models is reviewed. The various parameters that can be obtained using PET and PET/CT are detailed, as well as the radiotracers that may be used for this purpose, their utility, and limitations. The biophysical measures used to quantify PET and PET/CT data are also described. Finally, a list of in silico models that incorporate PET and/or PET/CT data is provided and reviewed. PMID:25788973

  3. On retrial queueing model with fuzzy parameters

    NASA Astrophysics Data System (ADS)

    Ke, Jau-Chuan; Huang, Hsin-I.; Lin, Chuen-Horng

    2007-01-01

    This work constructs the membership functions of the system characteristics of a retrial queueing model with fuzzy customer arrival, retrial and service rates. The α-cut approach is used to transform a fuzzy retrial queue into a family of conventional crisp retrial queues in this context. By means of the membership functions of the system characteristics, a set of parametric non-linear programs is developed to describe the family of crisp retrial queues. A numerical example is solved successfully to illustrate the validity of the proposed approach. Because the system characteristics are expressed and governed by the membership functions, more information is provided for use by management. By extending this model to the fuzzy environment, the retrial queue is represented more accurately and the analytic results are more useful for system designers and practitioners.

  4. A simulation of water pollution model parameter estimation

    NASA Technical Reports Server (NTRS)

    Kibler, J. F.

    1976-01-01

    A parameter estimation procedure for a water pollution transport model is elaborated. A two-dimensional instantaneous-release shear-diffusion model serves as representative of a simple transport process. Pollution concentration levels are obtained via modeling of a remote-sensing system. The remote-sensed data are simulated by adding Gaussian noise to the concentration level values generated via the transport model. Model parameters are estimated from the simulated data using a least-squares batch processor. The required resolution, sensor array size, and number and location of sensor readings can be determined from the accuracies of the parameter estimates.
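
    The simulate-then-estimate loop described here can be sketched as follows, under simplifying assumptions: a plain two-dimensional instantaneous-release diffusion model (the shear term is omitted) generates concentrations, Gaussian sensor noise is added, and the parameters are recovered by least squares. All numbers are invented for illustration.

    ```python
    # Sketch: synthesize concentrations from a simple 2-D instantaneous-release
    # diffusion model, corrupt them with Gaussian sensor noise, and recover the
    # parameters by least squares.  Shear is omitted and all values are invented.
    import numpy as np
    from scipy.optimize import curve_fit

    def concentration(xy, M, Kx, Ky, t=3600.0):
        x, y = xy
        return (M / (4 * np.pi * t * np.sqrt(Kx * Ky))
                * np.exp(-x ** 2 / (4 * Kx * t) - y ** 2 / (4 * Ky * t)))

    # "Remote-sensed" data: model output plus Gaussian noise on a sensor grid.
    rng = np.random.default_rng(0)
    gx, gy = np.meshgrid(np.linspace(-300, 300, 30), np.linspace(-300, 300, 30))
    xy = (gx.ravel(), gy.ravel())
    true = (5.0e6, 1.2, 0.6)                            # release mass, Kx, Ky
    c_obs = concentration(xy, *true) + rng.normal(0, 2.0, gx.size)

    popt, pcov = curve_fit(concentration, xy, c_obs, p0=(1.0e6, 1.0, 1.0),
                           bounds=([1e3, 1e-3, 1e-3], [1e9, 10.0, 10.0]))
    print(popt)                                         # parameter estimates
    print(np.sqrt(np.diag(pcov)))                       # their standard errors
    ```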

  5. A Logical Difficulty of the Parameter Setting Model.

    ERIC Educational Resources Information Center

    Sasaki, Yoshinori

    1990-01-01

    Seeks to prove that the parameter setting model (PSM) of Chomsky's Universal Grammar theory contains an internal contradiction when it is seriously taken to model the internal state of language learners. (six references) (JL)

  6. Determining extreme parameter correlation in ground water models.

    USGS Publications Warehouse

    Hill, M.C.; Osterby, O.

    2003-01-01

    In ground water flow system models with hydraulic-head observations but without significant imposed or observed flows, extreme parameter correlation generally exists. As a result, hydraulic conductivity and recharge parameters cannot be uniquely estimated. In complicated problems, such correlation can go undetected even by experienced modelers. Extreme parameter correlation can be detected using parameter correlation coefficients, but their utility depends on the presence of sufficient, but not excessive, numerical imprecision of the sensitivities, such as round-off error. This work investigates the information that can be obtained from parameter correlation coefficients in the presence of different levels of numerical imprecision, and compares it to the information provided by an alternative method called the singular value decomposition (SVD). Results suggest that (1) calculated correlation coefficients with absolute values that round to 1.00 were good indicators of extreme parameter correlation, but smaller values were not necessarily good indicators of lack of correlation and resulting unique parameter estimates; (2) the SVD may be more difficult to interpret than parameter correlation coefficients, but it required sensitivities that were one to two significant digits less accurate than those that required using parameter correlation coefficients; and (3) both the SVD and parameter correlation coefficients identified extremely correlated parameters better when the parameters were more equally sensitive. When the statistical measures fail, parameter correlation can be identified only by the tedious process of executing regression using different sets of starting values, or, in some circumstances, through graphs of the objective function.
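
    The two diagnostics compared above can both be computed from a sensitivity (Jacobian) matrix of simulated heads with respect to the parameters; the sketch below uses a synthetic Jacobian that deliberately makes the log-conductivity and log-recharge sensitivities nearly opposite, the classic head-only correlation problem, so it is an illustration rather than a groundwater model run.

    ```python
    # Parameter correlation coefficients and the SVD, both from a synthetic
    # Jacobian in which the ln K and ln R columns are almost exactly opposite.
    import numpy as np

    rng = np.random.default_rng(0)
    dK = rng.normal(size=50)                                 # head sensitivities to ln K
    J = np.column_stack([dK,
                         -dK + 1e-6 * rng.normal(size=50),   # ln R, nearly -ln K
                         rng.normal(size=50)])               # an unrelated parameter

    # Parameter correlation coefficients from the approximate covariance matrix.
    cov = np.linalg.inv(J.T @ J)
    d = np.sqrt(np.diag(cov))
    print(np.round(cov / np.outer(d, d), 3))   # |corr(K, R)| rounds to 1.00

    # SVD of the Jacobian: a near-zero singular value flags the same problem and
    # the corresponding right singular vector names the parameters involved.
    U, s, Vt = np.linalg.svd(J, full_matrices=False)
    print(np.round(s / s.max(), 8))
    print(np.round(Vt[-1], 3))                 # direction the data cannot resolve
    ```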

  7. On Interpreting the Parameters for Any Item Response Model

    ERIC Educational Resources Information Center

    Thissen, David

    2009-01-01

    Maris and Bechger's article is an exercise in technical virtuosity and provides much to be learned by students of psychometrics. In this commentary, the author begins with making two observations. The first is that the title, "On Interpreting the Model Parameters for the Three Parameter Logistic Model," belies the generality of parts of Maris and…

  8. Exploring the interdependencies between parameters in a material model.

    SciTech Connect

    Silling, Stewart Andrew; Fermen-Coker, Muge

    2014-01-01

    A method is investigated to reduce the number of numerical parameters in a material model for a solid. The basis of the method is to detect interdependencies between parameters within a class of materials of interest. The method is demonstrated for a set of material property data for iron and steel using the Johnson-Cook plasticity model.

  9. Influences of parameter uncertainties within the ICRP-66 respiratory tract model: a parameter sensitivity analysis.

    PubMed

    Huston, Thomas E; Farfán, Eduardo B; Bolch, W Emmett; Bolch, Wesley E

    2003-11-01

    An important aspect in model uncertainty analysis is the evaluation of input parameter sensitivities with respect to model outcomes. In previous publications, parameter uncertainties were examined for the ICRP-66 respiratory tract model. The studies were aided by the development and use of a computer code LUDUC (Lung Dose Uncertainty Code) which allows probability density functions to be specified for all ICRP-66 model input parameters. These density functions are sampled using Latin hypercube techniques with values subsequently propagated through the ICRP-66 model. In the present study, LUDUC has been used to perform a detailed parameter sensitivity analysis of the ICRP-66 model using input parameter density functions specified in previously published articles. The results suggest that most of the variability in the dose to a given target region is explained by only a few input parameters. For example, for particle diameters between 0.1 and 50 µm, about 50% of the variability in the total lung dose (weighted sum of target tissue doses) for ²³⁹PuO₂ is due to variability in the dose to the alveolar-interstitial (AI) region. In turn, almost 90% of the variability in the dose to the AI region is attributable to uncertainties in only four parameters in the model: the ventilation rate, the AI deposition fraction, the clearance rate constant for slow-phase absorption of deposited material to the blood, and the clearance rate constant for particle transport from the AI2 to the bb1 compartment. A general conclusion is that many input parameters do not significantly influence variability in final doses. As a result, future research can focus on improving density functions for those input variables that contribute the most to variability in final dose values. PMID:14571988
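
    A sketch of this style of workflow is given below: Latin hypercube sampling of uncertain inputs, propagation through a dose calculation, and a rank-correlation measure of each input's contribution to output variability. The parameter names echo the four parameters listed above, but the ranges and the toy_dose() function are invented stand-ins, not ICRP-66 quantities.

    ```python
    # Latin hypercube sensitivity sketch: sample inputs, propagate through a
    # stand-in dose model, rank inputs by correlation with the output.
    import numpy as np
    from scipy.stats import qmc, spearmanr

    names = ["ventilation_rate", "AI_deposition_fraction",
             "slow_absorption_rate", "AI2_to_bb1_rate"]
    l_bounds = [0.5, 0.05, 1e-5, 1e-5]          # invented ranges
    u_bounds = [2.0, 0.30, 1e-3, 1e-3]

    sampler = qmc.LatinHypercube(d=len(names), seed=7)
    X = qmc.scale(sampler.random(n=2000), l_bounds, u_bounds)

    def toy_dose(samples):
        """Stand-in for the ICRP-66 dose calculation (not the real model)."""
        vent, dep, slow_abs, transport = samples.T
        return vent * dep / (slow_abs + transport + 1e-4)

    dose = toy_dose(X)
    for i, name in enumerate(names):
        rho, _ = spearmanr(X[:, i], dose)
        print(f"{name:25s} rank correlation with dose: {rho:+.2f}")
    ```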

  10. NWP model forecast skill optimization via closure parameter variations

    NASA Astrophysics Data System (ADS)

    Järvinen, H.; Ollinaho, P.; Laine, M.; Solonen, A.; Haario, H.

    2012-04-01

    We present results of a novel approach to tune the predictive skill of numerical weather prediction (NWP) models. These models contain tunable parameters which appear in parameterization schemes of sub-grid-scale physical processes. The current practice is to specify the numerical parameter values manually, based on expert knowledge. We recently developed a concept and method (QJRMS 2011) for on-line estimation of the NWP model parameters via closure parameter variations. The method, called EPPES ("Ensemble prediction and parameter estimation system"), utilizes the ensemble prediction infrastructure for parameter estimation in a very cost-effective way: practically no new computations are introduced. The approach provides an algorithmic decision making tool for model parameter optimization in operational NWP. In EPPES, statistical inference about the NWP model tunable parameters is made by (i) generating an ensemble of predictions so that each member uses different model parameter values, drawn from a proposal distribution, and (ii) feeding back the relative merits of the parameter values to the proposal distribution, based on evaluation of a suitable likelihood function against verifying observations. In this presentation, the method is first illustrated in low-order numerical tests using a stochastic version of the Lorenz-95 model which effectively emulates the principal features of ensemble prediction systems. The EPPES method correctly detects the unknown and wrongly specified parameter values, and leads to an improved forecast skill. Second, results with an ensemble prediction system emulator, based on the ECHAM5 atmospheric GCM, show that the model tuning capability of EPPES scales up to realistic models and ensemble prediction systems. Finally, preliminary results of EPPES in the context of the ECMWF forecasting system are presented.

  11. Estimating classification images with generalized linear and additive models.

    PubMed

    Knoblauch, Kenneth; Maloney, Laurence T

    2008-12-22

    Conventional approaches to modeling classification image data can be described in terms of a standard linear model (LM). We show how the problem can be characterized as a Generalized Linear Model (GLM) with a Bernoulli distribution. We demonstrate via simulation that this approach is more accurate in estimating the underlying template in the absence of internal noise. With increasing internal noise, however, the advantage of the GLM over the LM decreases and GLM is no more accurate than LM. We then introduce the Generalized Additive Model (GAM), an extension of GLM that can be used to estimate smooth classification images adaptively. We show that this approach is more robust to the presence of internal noise, and finally, we demonstrate that GAM is readily adapted to estimation of higher order (nonlinear) classification images and to testing their significance.
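
    A minimal sketch of the GLM variant is shown below: the observer's binary responses are regressed on the trial-by-trial noise fields with a Bernoulli (logistic) GLM, and the fitted coefficients form the classification image. The data come from a hypothetical linear-template observer with internal noise; the template and all sizes are invented for illustration.

    ```python
    # Bernoulli-GLM classification image from simulated trials of a hypothetical
    # linear-template observer with internal noise.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n_trials, n_pixels = 4000, 64
    template = np.sin(np.linspace(0, np.pi, n_pixels))        # internal template
    noise = rng.normal(0, 1, (n_trials, n_pixels))            # external noise fields

    decision = noise @ template + rng.normal(0, 2, n_trials)  # plus internal noise
    responses = (decision > 0).astype(int)

    X = sm.add_constant(noise)
    fit = sm.GLM(responses, X, family=sm.families.Binomial()).fit()
    classification_image = fit.params[1:]                     # drop the intercept
    print(np.corrcoef(classification_image, template)[0, 1])  # template recovery
    ```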

  12. Biological parameters for lung cancer in mathematical models of carcinogenesis.

    PubMed

    Jacob, P; Jacob, V

    2003-01-01

    Applications of the two-step model of carcinogenesis with clonal expansion (TSCE) to lung cancer data are reviewed, including those on atomic bomb survivors from Hiroshima and Nagasaki, British doctors, Colorado Plateau miners and Chinese tin miners. Different sets of identifiable model parameters are used in the literature. The parameter set which could be determined with the lowest uncertainty consists of the net proliferation rate gamma of intermediate cells, the hazard h_55 at an intermediate age and the hazard h(∞) at an asymptotically large age. Also, the values of these three parameters obtained in the various studies are more consistent than other identifiable combinations of the biological parameters. Based on representative results for these three parameters, implications for the biological parameters in the TSCE model are derived. PMID:14579892

  13. The cluster-galaxy cross spectrum. An additional probe of cosmological and halo parameters

    NASA Astrophysics Data System (ADS)

    Hütsi, G.; Lahav, O.

    2008-12-01

    Context: There are several wide field galaxy and cluster surveys planned for the near future, e.g. BOSS, WFMOS, ADEPT, Hetdex, SPT, eROSITA. In the simplest approach, one would analyze these independently, thus neglecting the extra information provided by the cluster-galaxy cross pairs. Aims: In this paper we have focused on the possible synergy between these surveys by investigating the amount of information encoded in the cross pairs. Methods: We present a model for the cluster-galaxy cross spectrum within the halo model framework. To assess the gain in performance due to inclusion of the cluster-galaxy cross pairs, we carry out a Fisher matrix analysis for a BOSS-like galaxy redshift survey targeting luminous red galaxies and a hypothetical mass-limited cluster redshift survey with a lower mass threshold of 1.7 × 10¹⁴ h⁻¹ M⊙ over the same volume. Results: On small scales, a cluster-galaxy cross spectrum directly probes the density profile of the halos, instead of the density profile convolved with itself, as is the case for the galaxy power spectrum. Due to this different behavior, adding information from the cross pairs helps to tighten constraints on the halo occupation distribution (e.g. a factor of ~2 compression of the error ellipses on the m_glow-α plane) and offers an alternative mechanism compared with techniques that directly fit halo density profiles. By inclusion of the cross pairs, a factor of ~2 stronger constraints are obtained for σ_8, while the improvement for the dark energy figure-of-merit is somewhat weaker: an increase by a factor of 1.4. We have also written down the formalism for the case when only photometric redshifts are available for both the clusters and the galaxies. For the analysis of the photometric surveys the inclusion of the cluster-galaxy cross pairs might be very beneficial since the photo-z errors for the clusters are usually significantly smaller than for the typical galaxies.

  14. Additions to Mars Global Reference Atmospheric Model (MARS-GRAM)

    NASA Technical Reports Server (NTRS)

    Justus, C. G.; James, Bonnie

    1992-01-01

    Three major additions or modifications were made to the Mars Global Reference Atmospheric Model (Mars-GRAM): (1) in addition to the interactive version, a new batch version is available, which uses NAMELIST input, and is completely modular, so that the main driver program can easily be replaced by any calling program, such as a trajectory simulation program; (2) both the interactive and batch versions now have an option for treating local-scale dust storm effects, rather than just the global-scale dust storms in the original Mars-GRAM; and (3) the Zurek wave perturbation model was added, to simulate the effects of tidal perturbations, in addition to the random (mountain wave) perturbation model of the original Mars-GRAM. A minor modification was also made which allows heights to go 'below' local terrain height and return 'realistic' pressure, density, and temperature, and not the surface values, as returned by the original Mars-GRAM. This feature will allow simulations of Mars rover paths which might go into local 'valley' areas which lie below the average height of the present, rather coarse-resolution, terrain height data used by Mars-GRAM. Sample input and output of both the interactive and batch versions of Mars-GRAM are presented.

  15. Additions to Mars Global Reference Atmospheric Model (Mars-GRAM)

    NASA Technical Reports Server (NTRS)

    Justus, C. G.

    1991-01-01

    Three major additions or modifications were made to the Mars Global Reference Atmospheric Model (Mars-GRAM): (1) in addition to the interactive version, a new batch version is available, which uses NAMELIST input, and is completely modular, so that the main driver program can easily be replaced by any calling program, such as a trajectory simulation program; (2) both the interactive and batch versions now have an option for treating local-scale dust storm effects, rather than just the global-scale dust storms in the original Mars-GRAM; and (3) the Zurek wave perturbation model was added, to simulate the effects of tidal perturbations, in addition to the random (mountain wave) perturbation model of the original Mars-GRAM. A minor modification has also been made which allows heights to go below local terrain height and return realistic pressure, density, and temperature (not the surface values) as returned by the original Mars-GRAM. This feature will allow simulations of Mars rover paths which might go into local valley areas which lie below the average height of the present, rather coarse-resolution, terrain height data used by Mars-GRAM. Sample input and output of both the interactive and batch version of Mars-GRAM are presented.

  16. Unscented Kalman filter with parameter identifiability analysis for the estimation of multiple parameters in kinetic models.

    PubMed

    Baker, Syed Murtuza; Poskar, C Hart; Junker, Björn H

    2011-01-01

    In systems biology, experimentally measured parameters are not always available, necessitating the use of computationally based parameter estimation. In order to rely on estimated parameters, it is critical to first determine which parameters can be estimated for a given model and measurement set. This is done with parameter identifiability analysis. A kinetic model of the sucrose accumulation in the sugar cane culm tissue developed by Rohwer et al. was taken as a test case model. What differentiates this approach is the integration of an orthogonal-based local identifiability method into the unscented Kalman filter (UKF), rather than using the more common observability-based method which has inherent limitations. It also introduces a variable step size based on the system uncertainty of the UKF during the sensitivity calculation. This method identified 10 out of 12 parameters as identifiable. These ten parameters were estimated using the UKF, which was run 97 times. Throughout the repetitions the UKF proved to be more consistent than the estimation algorithms used for comparison. PMID:21989173

  17. Relationship between Cole-Cole model parameters and spectral decomposition parameters derived from SIP data

    NASA Astrophysics Data System (ADS)

    Weigand, M.; Kemna, A.

    2016-06-01

    Spectral induced polarization (SIP) data are commonly analysed using phenomenological models. Among these models the Cole-Cole (CC) model is the most popular choice to describe the strength and frequency dependence of distinct polarization peaks in the data. More flexibility regarding the shape of the spectrum is provided by decomposition schemes. Here the spectral response is decomposed into individual responses of a chosen elementary relaxation model, mathematically acting as kernel in the involved integral, based on a broad range of relaxation times. A frequently used kernel function is the Debye model, but the CC model with some other, a priori specified frequency dispersion (e.g. the Warburg model) has also been proposed as kernel in the decomposition. The different decomposition approaches in use, also including conductivity and resistivity formulations, raise the question of the degree to which the integral spectral parameters typically derived from the obtained relaxation time distribution are biased by the approach itself. Based on synthetic SIP data sampled from an ideal CC response, we here investigate how the two most important integral output parameters deviate from the corresponding CC input parameters. We find that the total chargeability may be underestimated by up to 80 per cent and the mean relaxation time may be off by up to three orders of magnitude relative to the original values, depending on the frequency dispersion of the analysed spectrum and the proximity of its peak to the frequency range limits considered in the decomposition. We conclude that a quantitative comparison of SIP parameters across different studies, or the adoption of parameter relationships from other studies, for example when transferring laboratory results to the field, is only possible on the basis of a consistent spectral analysis procedure. This is particularly important when comparing effective CC parameters with spectral parameters derived from decomposition results.
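
    For orientation, a resistivity-form CC model and a Debye decomposition of the kind discussed above can be written as follows (Pelton-style notation assumed here: ρ₀ the DC resistivity, m the chargeability, τ the relaxation time, c the frequency dispersion, with c = 1 giving the Debye kernel):

    ```latex
    \rho(\omega) = \rho_0\!\left[1 - m\!\left(1 - \frac{1}{1 + (\mathrm{i}\omega\tau)^{c}}\right)\right],
    \qquad
    \rho(\omega) = \rho_0\!\left[1 - \sum_{k} m_k\!\left(1 - \frac{1}{1 + \mathrm{i}\omega\tau_k}\right)\right]
    ```

    The integral output parameters referred to above are then typically the total chargeability m_tot = Σ_k m_k and a log-weighted mean relaxation time τ_mean = exp(Σ_k m_k ln τ_k / Σ_k m_k).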

  18. Testing parameters in structural equation modeling: every "one" matters.

    PubMed

    Gonzalez, R; Griffin, D

    2001-09-01

    A problem with standard errors estimated by many structural equation modeling programs is described. In such programs, a parameter's standard error is sensitive to how the model is identified (i.e., how scale is set). Alternative but equivalent ways to identify a model may yield different standard errors, and hence different Z tests for a parameter, even though the identifications produce the same overall model fit. This lack of invariance due to model identification creates the possibility that different analysts may reach different conclusions about a parameter's significance level even though they test equivalent models on the same data. The authors suggest that parameters be tested for statistical significance through the likelihood ratio test, which is invariant to the identification choice. PMID:11570231

  19. Extraction of exposure modeling parameters of thick resist

    NASA Astrophysics Data System (ADS)

    Liu, Chi; Du, Jinglei; Liu, Shijie; Duan, Xi; Luo, Boliang; Zhu, Jianhua; Guo, Yongkang; Du, Chunlei

    2004-12-01

    Experimental and theoretical analysis indicates that many nonlinear factors existing in the exposure process of thick resist can markedly affect the PAC concentration distribution in the resist. These effects should therefore be fully considered in the exposure model of thick resist, and the exposure parameters should not be treated as constants, because a definite relationship exists between the parameters and the resist thickness. In this paper, an enhanced Dill model for the exposure process of thick resist is presented, and the experimental setup for measuring the exposure parameters of thick resist is developed. We measure the intensity transmittance curve of the thick resist AZ4562 under different processing conditions, and extract the corresponding exposure parameters based on the experimental results and calculations from the beam propagation matrix of the resist films. With these modified modeling parameters and the enhanced Dill model, simulations of the thick-resist exposure process can be developed effectively in the future.
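
    The enhanced model builds on the standard Dill (ABC) exposure equations, dI/dz = -I(A·M + B) and dM/dt = -C·I·M. A minimal finite-difference sketch of those base equations is given below; the A, B, C values and the dose are placeholders, not measured AZ4562 parameters.

    ```python
    # Minimal finite-difference sketch of the standard Dill exposure equations,
    # dI/dz = -I(A*M + B) and dM/dt = -C*I*M.  Parameter values are placeholders.
    import numpy as np

    def dill_exposure(A=0.55, B=0.045, C=0.016, I0=1.0,
                      thickness_um=20.0, dose_mJ=400.0, nz=400, nt=400):
        dz = thickness_um / nz
        dt = dose_mJ / I0 / nt                          # expose in equal time steps
        M = np.ones(nz)                                 # unexposed PAC everywhere
        for _ in range(nt):
            absorb = A * M + B                          # bleachable + constant absorption
            I = I0 * np.exp(-np.cumsum(absorb) * dz)    # Beer-Lambert attenuation
            M *= np.exp(-C * I * dt)                    # local PAC destruction
        return M

    M_final = dill_exposure()
    print(M_final[0], M_final[-1])   # PAC remaining at the resist top vs. bottom
    ```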

  20. Backbone Additivity in the Transfer Model of Protein Solvation

    SciTech Connect

    Hu, Char Y.; Kokubo, Hironori; Lynch, Gillian C.; Bolen, D Wayne; Pettitt, Bernard M.

    2010-05-01

    The transfer model implying additivity of the peptide backbone free energy of transfer is computationally tested. Molecular dynamics simulations are used to determine the extent of change in transfer free energy (ΔG_tr) with increase in chain length of oligoglycine with capped end groups. Solvation free energies of oligoglycine models of varying lengths in pure water and in the osmolyte solutions, 2M urea and 2M trimethylamine N-oxide (TMAO), were calculated from simulations of all atom models, and ΔG_tr values for peptide backbone transfer from water to the osmolyte solutions were determined. The results show that the transfer free energies change linearly with increasing chain length, demonstrating the principle of additivity, and provide values in reasonable agreement with experiment. The peptide backbone transfer free energy contributions arise from van der Waals interactions in the case of transfer to urea, but from electrostatics on transfer to TMAO solution. The simulations used here allow the solvation and transfer free energies of longer oligoglycine models to be evaluated than is currently possible through experiment. The computed transfer free energy of the peptide backbone unit, –54 cal/mol/M, compares quite favorably with the –43 cal/mol/M determined experimentally.

  1. Identification of parameters of discrete-continuous models

    SciTech Connect

    Cekus, Dawid Warys, Pawel

    2015-03-10

    In this paper, the parameters of a discrete-continuous model are identified on the basis of experimental investigations and the formulation of an optimization problem. The discrete-continuous model represents a cantilever stepped Timoshenko beam. The mathematical model has been formulated and solved according to the Lagrange multiplier formalism. The optimization is based on a genetic algorithm. The presented procedure makes it possible to identify arbitrary parameters of discrete-continuous systems.

  2. Estimating parameters for generalized mass action models with connectivity information

    PubMed Central

    Ko, Chih-Lung; Voit, Eberhard O; Wang, Feng-Sheng

    2009-01-01

    Background Determining the parameters of a mathematical model from quantitative measurements is the main bottleneck of modelling biological systems. Parameter values can be estimated from steady-state data or from dynamic data. The nature of suitable data for these two types of estimation is rather different. For instance, estimations of parameter values in pathway models, such as kinetic orders, rate constants, flux control coefficients or elasticities, from steady-state data are generally based on experiments that measure how a biochemical system responds to small perturbations around the steady state. In contrast, parameter estimation from dynamic data requires time series measurements for all dependent variables. Almost no literature has so far discussed the combined use of both steady-state and transient data for estimating parameter values of biochemical systems. Results In this study we introduce a constrained optimization method for estimating parameter values of biochemical pathway models using steady-state information and transient measurements. The constraints are derived from the flux connectivity relationships of the system at the steady state. Two case studies demonstrate the estimation results with and without flux connectivity constraints. The unconstrained optimal estimates from dynamic data may fit the experiments well, but they do not necessarily maintain the connectivity relationships. As a consequence, individual fluxes may be misrepresented, which may cause problems in later extrapolations. By contrast, the constrained estimation accounting for flux connectivity information reduces this misrepresentation and thereby yields improved model parameters. Conclusion The method combines transient metabolic profiles and steady-state information and leads to the formulation of an inverse parameter estimation task as a constrained optimization problem. Parameter estimation and model selection are simultaneously carried out on the constrained

  3. Inverse estimation of parameters for an estuarine eutrophication model

    SciTech Connect

    Shen, J.; Kuo, A.Y.

    1996-11-01

    An inverse model of an estuarine eutrophication model with eight state variables is developed. It provides a framework to estimate parameter values of the eutrophication model by assimilation of concentration data of these state variables. The inverse model, which uses the variational technique in conjunction with a vertical two-dimensional eutrophication model, is general enough to be applicable as an aid to model calibration. The formulation is illustrated by conducting a series of numerical experiments for the tidal Rappahannock River, a western shore tributary of the Chesapeake Bay. The numerical experiments of short-period model simulations with different hypothetical data sets and long-period model simulations with limited hypothetical data sets demonstrated that the inverse model can be satisfactorily used to estimate parameter values of the eutrophication model. The experiments also showed that the inverse model is useful to address some important questions, such as uniqueness of the parameter estimation and data requirements for model calibration. Because of the complexity of the eutrophication system, degradation of the speed of convergence may occur. Two major factors causing this degradation are cross-effects among parameters and the multiple scales involved in the parameter system.

  4. Estimation of Kalman filter model parameters from an ensemble of tests

    NASA Technical Reports Server (NTRS)

    Gibbs, B. P.; Haley, D. R.; Levine, W.; Porter, D. W.; Vahlberg, C. J.

    1980-01-01

    A methodology for estimating initial mean and covariance parameters in a Kalman filter model from an ensemble of nonidentical tests is presented. In addition, the problem of estimating time constants and process noise levels is addressed. Practical problems such as developing and validating inertial instrument error models from laboratory test data or developing error models of individual phases of a test are generally considered.

  5. Incorporation of shuttle CCT parameters in computer simulation models

    NASA Technical Reports Server (NTRS)

    Huntsberger, Terry

    1990-01-01

    Computer simulations of shuttle missions have become increasingly important during recent years. The complexity of mission planning for satellite launch and repair operations which usually involve EVA has led to the need for accurate visibility and access studies. The PLAID modeling package used in the Man-Systems Division at Johnson currently has the necessary capabilities for such studies. In addition, the modeling package is used for spatial location and orientation of shuttle components for film overlay studies such as the current investigation of the hydrogen leaks found in the shuttle flight. However, there are a number of differences between the simulation studies and actual mission viewing. These include image blur caused by the finite resolution of the CCT monitors in the shuttle and signal noise from the video tubes of the cameras. During the course of this investigation the shuttle CCT camera and monitor parameters are incorporated into the existing PLAID framework. These parameters are specific for certain camera/lens combinations and the SNR characteristics of these combinations are included in the noise models. The monitor resolution is incorporated using a Gaussian spread function such as that found in the screen phosphors in the shuttle monitors. Another difference between the traditional PLAID generated images and actual mission viewing lies in the lack of shadows and reflections of light from surfaces. Ray tracing of the scene explicitly includes the lighting and material characteristics of surfaces. The results of some preliminary studies using ray tracing techniques for the image generation process combined with the camera and monitor effects are also reported.

  6. Additional Research Needs to Support the GENII Biosphere Models

    SciTech Connect

    Napier, Bruce A.; Snyder, Sandra F.; Arimescu, Carmen

    2013-11-30

    In the course of evaluating the current parameter needs for the GENII Version 2 code (Snyder et al. 2013), areas of possible improvement for both the data and the underlying models have been identified. As the data review was implemented, PNNL staff identified areas where the models can be improved both to accommodate the locally significant pathways identified and also to incorporate newer models. The areas are general data needs for the existing models and improved formulations for the pathway models. It is recommended that priorities be set by NRC staff to guide selection of the most useful improvements in a cost-effective manner. Suggestions are made based on relatively easy and inexpensive changes, and longer-term more costly studies. In the short term, there are several improved model formulations that could be applied to the GENII suite of codes to make them more generally useful:
    • Implementation of the separation of the translocation and weathering processes
    • Implementation of an improved model for carbon-14 from non-atmospheric sources
    • Implementation of radon exposure pathways models
    • Development of a KML processor for the output report generator module, so that data calculated on a grid could be superimposed upon digital maps for easier presentation and display
    • Implementation of marine mammal models (manatees, seals, walrus, whales, etc.).
    Data needs in the longer term require extensive (and potentially expensive) research. Before picking any one radionuclide or food type, NRC staff should perform an in-house review of current and anticipated environmental analyses to select “dominant” radionuclides of interest to allow setting of cost-effective priorities for radionuclide- and pathway-specific research. These include:
    • soil-to-plant uptake studies for oranges and other citrus fruits, and
    • development of models for evaluation of radionuclide concentration in highly-processed foods such as oils and sugars.
    Finally, renewed

  7. Model and Parameter Discretization Impacts on Estimated ASR Recovery Efficiency

    NASA Astrophysics Data System (ADS)

    Forghani, A.; Peralta, R. C.

    2015-12-01

    We contrast the computed recovery efficiency of one Aquifer Storage and Recovery (ASR) well across several modeling situations. Test situations differ in the employed finite-difference grid discretization, hydraulic conductivity, and storativity. We employ a 7-layer regional groundwater model calibrated for Salt Lake Valley. Since the regional model grid is too coarse for ASR analysis, we prepare two local models with significantly finer discretization that are capable of analyzing ASR recovery efficiency. Some of the addressed situations employ parameters interpolated from the coarse valley model. Other situations employ parameters derived from nearby well logs or pumping tests. The intent of the evaluations and subsequent sensitivity analysis is to show how significantly the employed discretization and aquifer parameters affect estimated recovery efficiency. Most previous studies of ASR recovery efficiency consider only hypothetical, uniformly specified boundary heads and gradients, and assume homogeneous aquifer parameters. The well is part of the Jordan Valley Water Conservancy District (JVWCD) ASR system, which lies within Salt Lake Valley.

  8. Addition Table of Colours: Additive and Subtractive Mixtures Described Using a Single Reasoning Model

    ERIC Educational Resources Information Center

    Mota, A. R.; Lopes dos Santos, J. M. B.

    2014-01-01

    Students' misconceptions concerning colour phenomena and the apparent complexity of the underlying concepts--due to the different domains of knowledge involved--make this topic very difficult to teach. We have developed and tested a teaching device, the addition table of colours (ATC), that encompasses additive and subtractive mixtures in a single…

  9. Parameter identifiability of power-law biochemical system models.

    PubMed

    Srinath, Sridharan; Gunawan, Rudiyanto

    2010-09-01

    Mathematical modeling has become an integral component of biotechnology, in which these models are frequently used to design and optimize bioprocesses. Canonical models, like power-laws within the Biochemical Systems Theory, offer numerous mathematical and numerical advantages, including built-in flexibility to simulate general nonlinear behavior. The construction of such models relies on the estimation of unknown case-specific model parameters by way of experimental data fitting, also known as inverse modeling. Despite the large number of publications on this topic, this task remains the bottleneck in canonical modeling of biochemical systems. This paper focuses on the question of identifiability of power-law models from dynamic data, that is, whether the parameter values can be uniquely and accurately identified from time-series data. Existing and newly developed parameter identifiability methods were applied to two power-law models of biochemical systems, and the results pointed to the lack of parametric identifiability as the root cause of the difficulty faced in the inverse modeling. Despite the focus on power-law models, the analyses and conclusions are extendable to other canonical models, and the issue of parameter identifiability is expected to be a common problem in biochemical system modeling. PMID:20197073
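
    The paper's specific identifiability methods are not reproduced here. As a generic sketch of how practical identifiability of a small power-law model can be probed, the code below simulates a toy S-system-style model, builds finite-difference sensitivities of the time-series observations, and inspects the eigenvalues of the resulting Fisher information matrix; the model, parameter values, and sampling times are assumptions for illustration only.

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    # Toy power-law (S-system style) model: dX1/dt = a*X2^g - b*X1^h, dX2/dt = c - d*X2
    def rhs(t, x, p):
        a, b, g, h, c, d = p
        x1, x2 = x
        return [a * x2**g - b * x1**h, c - d * x2]

    def simulate(p, t_obs, x0=(0.5, 1.0)):
        sol = solve_ivp(rhs, (0.0, t_obs[-1]), x0, t_eval=t_obs, args=(p,), rtol=1e-8)
        return sol.y.ravel()          # stack both state time series into one vector

    p_nom = np.array([1.0, 0.8, 0.5, 1.2, 0.6, 0.4])
    t_obs = np.linspace(0.1, 10.0, 25)

    # Finite-difference sensitivities of the observations w.r.t. each parameter
    y0 = simulate(p_nom, t_obs)
    S = np.empty((y0.size, p_nom.size))
    for j in range(p_nom.size):
        dp = np.zeros_like(p_nom)
        dp[j] = 1e-5 * max(abs(p_nom[j]), 1e-3)
        S[:, j] = (simulate(p_nom + dp, t_obs) - y0) / dp[j]

    # Fisher information matrix (unit measurement variance assumed)
    FIM = S.T @ S
    eigvals = np.linalg.eigvalsh(FIM)
    print("FIM eigenvalues:", eigvals)
    print("condition number:", eigvals[-1] / max(eigvals[0], 1e-300))
    # Near-zero eigenvalues or a huge condition number flag practically
    # non-identifiable parameter combinations.
    ```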

  10. Software reliability: Additional investigations into modeling with replicated experiments

    NASA Technical Reports Server (NTRS)

    Nagel, P. M.; Schotz, F. M.; Skirvan, J. A.

    1984-01-01

    The effects of programmer experience level, different program usage distributions, and programming languages are explored. All these factors affect performance, and some tentative relational hypotheses are presented. An analytic framework for replicated and non-replicated (traditional) software experiments is presented. A method of obtaining an upper bound on the error rate of the next error is proposed. The method was validated empirically by comparing forecasts with actual data. In all 14 cases the bound exceeded the observed parameter, albeit somewhat conservatively. Two other forecasting methods are proposed and compared to observed results. Although it is demonstrated within this framework that stages are neither independent nor exponentially distributed, empirical estimates show that the exponential assumption is nearly valid for all but the extreme tails of the distribution. Except for the dependence in the stage probabilities, Cox's model approximates to a degree what is being observed.

  11. A Study of Additive Noise Model for Robust Speech Recognition

    NASA Astrophysics Data System (ADS)

    Awatade, Manisha H.

    2011-12-01

    A model of how speech amplitude spectra are affected by additive noise is studied. Acoustic features are extracted based on the noise-robust parts of speech spectra without losing discriminative information. Two existing non-linear processing methods, harmonic demodulation and spectral peak-to-valley ratio locking, are designed to minimize the mismatch between clean and noisy speech features. Previously studied methods, including peak isolation [1], do not require noise estimation and are effective in dealing with both stationary and non-stationary noise.
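
    Harmonic demodulation and peak-to-valley ratio locking are the paper's methods and are not reproduced here. The sketch below only illustrates the underlying additive-noise assumption numerically: for uncorrelated signals, the noisy power spectrum is approximately the sum of the clean-speech and noise power spectra, and spectral peaks retain a comparatively high local SNR. The synthetic signal and parameter values are assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    fs, n = 8000, 1024                      # sample rate (Hz) and frame length

    t = np.arange(n) / fs
    clean = 0.6 * np.sin(2 * np.pi * 300 * t) + 0.3 * np.sin(2 * np.pi * 900 * t)
    noise = 0.2 * rng.standard_normal(n)    # additive, uncorrelated with the speech
    noisy = clean + noise

    def power_spectrum(x):
        return np.abs(np.fft.rfft(x * np.hanning(len(x)))) ** 2

    P_clean, P_noise, P_noisy = map(power_spectrum, (clean, noise, noisy))

    # For uncorrelated signals, E[|Y|^2] = |X|^2 + E[|N|^2]; the cross-term only
    # adds zero-mean fluctuation, so the sum is a good frame-level approximation.
    err = np.mean(np.abs(P_noisy - (P_clean + P_noise))) / np.mean(P_noisy)
    print(f"relative deviation from the additive model: {err:.2%}")

    # Spectral peaks (harmonics) keep a high local SNR even in noise, which is why
    # peak-dominated features are comparatively robust:
    peak_bins = P_clean > 10 * np.median(P_clean)
    print("mean peak-bin SNR (dB):",
          10 * np.log10(P_clean[peak_bins].sum() / P_noise[peak_bins].sum()))
    ```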

  12. Additive Manufacturing of Medical Models--Applications in Rhinology.

    PubMed

    Raos, Pero; Klapan, Ivica; Galeta, Tomislav

    2015-09-01

    In this paper we introduce guidelines and suggestions for the use of 3D image processing software in head pathology diagnostics, and procedures for obtaining a physical medical model by additive manufacturing/rapid prototyping techniques, bearing in mind the improvement of surgical performance, its maximum safety, and faster postoperative recovery of patients. This approach has been verified in two case reports. In the treatment we used intelligent classifier schemes for abnormal patterns using a computer-based system for 3D-virtual and endoscopic assistance in rhinology, with appropriate visualization of anatomy and pathology within the nose, paranasal sinuses, and skull base area.

  13. Estimating winter wheat phenological parameters: Implications for crop modeling

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Crop parameters, such as the timing of developmental events, are critical for accurate simulation results in crop simulation models, yet uncertainty often exists in determining the parameters. Factors contributing to the uncertainty include: a) sources of variation within a plant (i.e., within diffe...

  14. Complexity, parameter sensitivity and parameter transferability in the modelling of floodplain inundation

    NASA Astrophysics Data System (ADS)

    Bates, P. D.; Neal, J. C.; Fewtrell, T. J.

    2012-12-01

    In this paper we consider two related questions. First, we address the issue of how much physical complexity is necessary in a model in order to simulate floodplain inundation to within validation data error. This is achieved through development of a single-code/multiple-physics hydraulic model (LISFLOOD-FP) where different degrees of complexity can be switched on or off. Different configurations of this code are applied to four benchmark test cases and compared to the results of a number of industry-standard models. Second, we address the issue of how parameter sensitivity and transferability change with increasing complexity, using numerical experiments with models of different physical and geometric intricacy. Hydraulic models are a good example system with which to address such generic modelling questions because: (1) they have a strong physical basis; (2) there is only one set of equations to solve; (3) they require only topography and boundary conditions as input data; and (4) they typically require only a single free parameter, namely boundary friction. In terms of the complexity required, we show that for the problem of sub-critical floodplain inundation a number of codes of different dimensionality and resolution can be found to fit uncertain model validation data equally well, and that in this situation Occam's razor emerges as a useful logic to guide model selection. We also find that model skill usually improves more rapidly with increases in model spatial resolution than with increases in physical complexity, and that standard approaches to testing hydraulic models against laboratory data or analytical solutions may fail to identify this important fact. Lastly, we find that in benchmark testing studies significant differences can exist between codes with identical numerical solution techniques as a result of auxiliary choices regarding the specifics of model implementation that are frequently unreported by code developers. As a consequence, making sound
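
    A common way to score "fit to validation data" in floodplain benchmarking is a binary wet/dry measure, often written F = A/(A + B + C), where A is the correctly predicted wet area, B the overpredicted area and C the underpredicted area. The sketch below is a generic illustration of that statistic (not LISFLOOD-FP code); the maps are illustrative.

    ```python
    import numpy as np

    def flood_fit_statistic(modelled_wet, observed_wet):
        """F = A / (A + B + C) for binary wet/dry maps (1 = wet, 0 = dry).

        A: cells wet in both model and observation
        B: cells wet in the model only (overprediction)
        C: cells wet in the observation only (underprediction)
        """
        m = np.asarray(modelled_wet, dtype=bool)
        o = np.asarray(observed_wet, dtype=bool)
        A = np.sum(m & o)
        B = np.sum(m & ~o)
        C = np.sum(~m & o)
        return A / float(A + B + C)

    # Illustrative 4x4 inundation maps
    obs = np.array([[0,0,1,1],[0,1,1,1],[0,1,1,0],[0,0,0,0]])
    mod = np.array([[0,1,1,1],[0,1,1,1],[0,0,1,0],[0,0,0,0]])
    print(f"F = {flood_fit_statistic(mod, obs):.2f}")
    ```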

  15. Fundamentals, accuracy and input parameters of frost heave prediction models

    NASA Astrophysics Data System (ADS)

    Schellekens, Fons Jozef

    In this thesis, the frost heave knowledge of physical geographers and soil physicists, a detailed description of the frost heave process, methods to determine soil parameters, and analysis of the spatial variability of these soil parameters are connected to the expertise of civil engineers and mathematicians in the (computer) modelling of the process. A description is given of observations of frost heave in laboratory experiments and in the field. Frost heave modelling is made accessible by a detailed description of the main principles of frost heave modelling in a language which can be understood by persons who do not have a thorough mathematical background. Two examples of practical one-dimensional frost heave prediction models are described: a model developed by Wang (1994) and a model developed by Nixon (1991). Advantages, limitations and some improvements of these models are described. It is suggested that conventional frost heave prediction using estimated extreme input parameters may be improved by using locally measured input parameters. The importance of accurate input parameters in frost heave prediction models is demonstrated in a case study using the frost heave models developed by Wang and Nixon. Methods to determine the input parameters are discussed, concluding with a suite of methods, some of which are new, to determine the input parameters of frost heave prediction models from very basic grain size parameters. The spatial variability of the required input parameters is analysed using data obtained along the Norman Wells-Zama oil pipeline at Norman Wells, NWT, located in the transition between discontinuous and continuous permafrost regions at the northern end of Canada's northernmost oil pipeline. A method based on spatial variability analysis of the input parameters in frost heave models is suggested to optimize the improvement that arises from adequate sampling, while minimizing the costs of obtaining field data. A series of frost heave

  16. Multiscale Modeling of Powder Bed-Based Additive Manufacturing

    NASA Astrophysics Data System (ADS)

    Markl, Matthias; Körner, Carolin

    2016-07-01

    Powder bed fusion processes are additive manufacturing technologies that are expected to induce the third industrial revolution. Components are built up layer by layer in a powder bed by selectively melting confined areas, according to sliced 3D model data. This technique allows for manufacturing of highly complex geometries hardly machinable with conventional technologies. However, the underlying physical phenomena are sparsely understood and difficult to observe during processing. Therefore, an intensive and expensive trial-and-error principle is applied to produce components with the desired dimensional accuracy, material characteristics, and mechanical properties. This review presents numerical modeling approaches on multiple length scales and timescales to describe different aspects of powder bed fusion processes. In combination with tailored experiments, the numerical results enlarge the process understanding of the underlying physical mechanisms and support the development of suitable process strategies and component topologies.

  18. Retrospective forecast of ETAS model with daily parameters estimate

    NASA Astrophysics Data System (ADS)

    Falcone, Giuseppe; Murru, Maura; Console, Rodolfo; Marzocchi, Warner; Zhuang, Jiancang

    2016-04-01

    We present a retrospective ETAS (Epidemic Type Aftershock Sequence) model based on the daily updating of free parameters during the background, the learning, and the test phases of a seismic sequence. The idea was born after the 2011 Tohoku-Oki earthquake. The CSEP (Collaboratory for the Study of Earthquake Predictability) Center in Japan provided an appropriate testing benchmark for the five 1-day submitted models. Of all the models, only one was able to successfully predict the number of events that actually occurred. This result was verified using both the real-time and the revised catalogs. The main cause of the failure was the underestimation of the number of forecast events, due to the model parameters being kept fixed during the test. Moreover, the absence in the learning catalog of an event of magnitude comparable to the mainshock (M9.0), which drastically changed the seismicity in the area, made the learning parameters unsuitable for describing the real seismicity. As an example of this methodological development, we show the evolution of the model parameters during the last two strong seismic sequences in Italy: the 2009 L'Aquila and the 2012 Reggio Emilia episodes. The achievement of the model with daily updated parameters is compared with that of the same model where the parameters remain fixed during the test time.
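
    For readers unfamiliar with ETAS, one common parameterization of its conditional intensity is λ(t) = μ + Σ_{t_i<t} K·exp(α(M_i − M_c))·(t − t_i + c)^(−p). The sketch below evaluates that intensity for a tiny illustrative catalogue; the parameter values shown are placeholders (these are exactly the quantities the authors re-estimate daily), and this is not the authors' code.

    ```python
    import numpy as np

    def etas_intensity(t, event_times, event_mags, mu, K, alpha, c, p, m_c):
        """ETAS conditional intensity at time t (events/day).

        lambda(t) = mu + sum over past events i of
                    K * exp(alpha * (M_i - m_c)) * (t - t_i + c) ** (-p)
        """
        past = event_times < t
        dt = t - event_times[past]
        return mu + np.sum(K * np.exp(alpha * (event_mags[past] - m_c)) * (dt + c) ** (-p))

    # Illustrative catalogue: times in days, magnitudes
    times = np.array([0.0, 0.3, 1.2, 2.5, 2.6])
    mags  = np.array([5.8, 4.1, 4.5, 6.3, 4.0])

    # Illustrative parameter values (these are what would be re-estimated daily)
    params = dict(mu=0.2, K=0.05, alpha=1.6, c=0.01, p=1.1, m_c=3.0)

    for t in (1.0, 3.0, 10.0):
        print(f"lambda({t:4.1f} d) = {etas_intensity(t, times, mags, **params):8.3f} events/day")
    ```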

  19. Parameter Estimates in Differential Equation Models for Population Growth

    ERIC Educational Resources Information Center

    Winkel, Brian J.

    2011-01-01

    We estimate the parameters present in several differential equation models of population growth, specifically logistic growth models and two-species competition models. We discuss student-evolved strategies and offer "Mathematica" code for a gradient search approach. We use historical (1930s) data from microbial studies of the Russian biologist,…
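
    The article offers Mathematica code for a gradient search; as a stand-in in this document's examples language, the sketch below fits the logistic model N(t) = K / (1 + ((K − N0)/N0)·exp(−rt)) to synthetic data (not the historical 1930s microbial data) with scipy's least-squares routine.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def logistic(t, r, K, N0):
        """Solution of dN/dt = r N (1 - N/K) with N(0) = N0."""
        return K / (1.0 + (K - N0) / N0 * np.exp(-r * t))

    # Synthetic "observations" generated from known parameters plus noise
    rng = np.random.default_rng(1)
    t_obs = np.linspace(0, 24, 13)
    N_obs = logistic(t_obs, r=0.35, K=660.0, N0=10.0) + rng.normal(0, 15, t_obs.size)

    # Least-squares fit; p0 is the initial guess a gradient search would start from
    popt, pcov = curve_fit(logistic, t_obs, N_obs, p0=[0.2, 500.0, 5.0])
    perr = np.sqrt(np.diag(pcov))
    for name, val, err in zip(("r", "K", "N0"), popt, perr):
        print(f"{name} = {val:8.3f} +/- {err:.3f}")
    ```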

  20. NEFDS contamination model parameter estimation of powder contaminated surfaces

    NASA Astrophysics Data System (ADS)

    Gibbs, Timothy J.; Messinger, David W.

    2016-05-01

    Hyperspectral signatures of powder-contaminated surfaces are challenging to characterize due to intimate mixing between materials. Most radiometric models have difficulty recreating these signatures due to non-linear interactions between particles with different physical properties. The Nonconventional Exploitation Factors Data System (NEFDS) Contamination Model is capable of recreating longwave hyperspectral signatures at any contamination mixture amount, but only for a limited selection of materials currently in the database. A method has been developed to invert the NEFDS model and perform parameter estimation on emissivity measurements from a variety of powdered materials on substrates. This model was chosen for its potential to accurately determine contamination coverage density as a parameter in the inverted model. Emissivity data were measured using a Designs and Prototypes Fourier transform infrared spectrometer (model 102) for different levels of contamination. Temperature-emissivity separation was performed to convert the data from measured radiance to estimated surface emissivity. Emissivity curves were then input into the inverted model and parameters were estimated for each spectral curve. A comparison of measured data with extrapolated model emissivity curves using estimated parameter values assessed the performance of the inverted NEFDS contamination model. This paper will present the initial results of the experimental campaign and the estimated surface coverage parameters.

  1. Effect of Operating Parameters and Chemical Additives on Crystal Habit and Specific Cake Resistance of Zinc Hydroxide Precipitates

    SciTech Connect

    Alwin, Jennifer Louise

    1999-08-01

    The effect of process parameters and chemical additives on the specific cake resistance of zinc hydroxide precipitates was investigated. The ability of a slurry to be filtered is dependent upon the particle habit of the solid, and the particle habit is influenced by certain process variables. The process variables studied include neutralization temperature, agitation type, and the alkalinity source used for neutralization. Several commercially available chemical additives advertised to aid in solid/liquid separation were also examined in conjunction with hydroxide precipitation. A statistical analysis revealed that the neutralization temperature and the source of alkalinity were statistically significant in influencing the specific cake resistance of zinc hydroxide precipitates in this study. The type of agitation did not significantly affect the specific cake resistance of zinc hydroxide precipitates. The use of chemical additives in conjunction with hydroxide precipitation had a favorable effect on filterability. The morphology of the hydroxide precipitates was analyzed using scanning electron microscopy.
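
    The study's data are not reproduced here; the sketch below only shows the standard constant-pressure filtration analysis usually used to obtain specific cake resistance, in which t/V plotted against V is linear with slope μαc/(2ΔP·A²), so α follows from a linear fit. All numerical values are illustrative assumptions.

    ```python
    import numpy as np

    # Illustrative constant-pressure filtration data
    t = np.array([  30.,   75.,  140.,  220.,  320.,  440.])   # s
    V = np.array([0.001, 0.002, 0.003, 0.004, 0.005, 0.006])   # m^3 of filtrate

    dP  = 1.0e5      # applied pressure drop, Pa
    A   = 0.01       # filter area, m^2
    mu  = 1.0e-3     # filtrate viscosity, Pa*s
    c   = 5.0        # dry cake mass per unit filtrate volume, kg/m^3

    # Constant-pressure filtration: t/V = (mu*alpha*c / (2*dP*A^2)) * V + mu*Rm/(dP*A)
    slope, intercept = np.polyfit(V, t / V, 1)

    alpha = slope * 2.0 * dP * A**2 / (mu * c)        # specific cake resistance, m/kg
    Rm    = intercept * dP * A / mu                   # filter-medium resistance, 1/m
    print(f"alpha = {alpha:.3e} m/kg, Rm = {Rm:.3e} 1/m")
    ```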

  2. WATEQ3 geochemical model: thermodynamic data for several additional solids

    SciTech Connect

    Krupka, K.M.; Jenne, E.A.

    1982-09-01

    Geochemical models such as WATEQ3 can be used to model the concentrations of water-soluble pollutants that may result from the disposal of nuclear waste and retorted oil shale. However, for a model to competently deal with these water-soluble pollutants, an adequate thermodynamic data base must be provided that includes elements identified as important in modeling these pollutants. To this end, several minerals and related solid phases were identified that were absent from the thermodynamic data base of WATEQ3. In this study, the thermodynamic data for the identified solids were compiled and selected from several published tabulations of thermodynamic data. For these solids, an accepted Gibbs free energy of formation, ΔG°f,298, was selected for each solid phase based on the recentness of the tabulated data and on considerations of internal consistency with respect to both the published tabulations and the existing data in WATEQ3. For those solids not included in these published tabulations, Gibbs free energies of formation were calculated from published solubility data (e.g., lepidocrocite), or were estimated (e.g., nontronite) using a free-energy summation method described by Mattigod and Sposito (1978). The accepted or estimated free energies were then combined with internally consistent, ancillary thermodynamic data to calculate equilibrium constants for the hydrolysis reactions of these minerals and related solid phases. Including these values in the WATEQ3 data base increased the competency of this geochemical model in applications associated with the disposal of nuclear waste and retorted oil shale. Additional minerals and related solid phases that need to be added to the solubility submodel will be identified as modeling applications continue in these two programs.
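
    The free-energy summation step described above reduces to two standard relations: ΔG°r = Σ ν·ΔG°f,298 over products and reactants, and log K = −ΔG°r / (ln(10)·R·T). The sketch below applies them to a generic hydroxide dissolution reaction; the ΔG°f values are placeholders, not WATEQ3 database entries.

    ```python
    import math

    R = 8.314462          # J mol^-1 K^-1
    T = 298.15            # K

    def log_k(delta_g_f, stoich):
        """log10 K for a reaction from standard Gibbs free energies of formation.

        delta_g_f : dict of species -> ΔG°f,298 in kJ/mol
        stoich    : dict of species -> stoichiometric coefficient
                    (products positive, reactants negative)
        """
        dG_r = sum(nu * delta_g_f[sp] for sp, nu in stoich.items())     # kJ/mol
        return -dG_r * 1000.0 / (math.log(10.0) * R * T)

    # Placeholder example: dissolution of a generic hydroxide, M(OH)2(s) = M++ + 2 OH-
    dGf = {"M(OH)2(s)": -900.0, "M++": -455.0, "OH-": -157.2}           # kJ/mol (illustrative)
    reaction = {"M(OH)2(s)": -1, "M++": +1, "OH-": +2}
    print(f"log K = {log_k(dGf, reaction):.2f}")
    ```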

  3. Agricultural and Environmental Input Parameters for the Biosphere Model

    SciTech Connect

    K. Rasmuson; K. Rautenstrauch

    2004-09-14

    This analysis is one of 10 technical reports that support the Environmental Radiation Model for Yucca Mountain Nevada (ERMYN) (i.e., the biosphere model). It documents development of agricultural and environmental input parameters for the biosphere model, and supports the use of the model to develop biosphere dose conversion factors (BDCFs). The biosphere model is one of a series of process models supporting the total system performance assessment (TSPA) for the repository at Yucca Mountain. The ERMYN provides the TSPA with the capability to perform dose assessments. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows the interrelationships between the major activities and their products (the analysis and model reports) that were planned in ''Technical Work Plan for Biosphere Modeling and Expert Support'' (BSC 2004 [DIRS 169573]). The ''Biosphere Model Report'' (BSC 2004 [DIRS 169460]) describes the ERMYN and its input parameters.

  4. Combined Estimation of Hydrogeologic Conceptual Model and Parameter Uncertainty

    SciTech Connect

    Meyer, Philip D.; Ye, Ming; Neuman, Shlomo P.; Cantrell, Kirk J.

    2004-03-01

    The objective of the research described in this report is the development and application of a methodology for comprehensively assessing the hydrogeologic uncertainties involved in dose assessment, including uncertainties associated with conceptual models, parameters, and scenarios. This report describes and applies a statistical method to quantitatively estimate the combined uncertainty in model predictions arising from conceptual model and parameter uncertainties. The method relies on model averaging to combine the predictions of a set of alternative models. Implementation is driven by the available data. When there is minimal site-specific data the method can be carried out with prior parameter estimates based on generic data and subjective prior model probabilities. For sites with observations of system behavior (and optionally data characterizing model parameters), the method uses model calibration to update the prior parameter estimates and model probabilities based on the correspondence between model predictions and site observations. The set of model alternatives can contain both simplified and complex models, with the requirement that all models be based on the same set of data. The method was applied to the geostatistical modeling of air permeability at a fractured rock site. Seven alternative variogram models of log air permeability were considered to represent data from single-hole pneumatic injection tests in six boreholes at the site. Unbiased maximum likelihood estimates of variogram and drift parameters were obtained for each model. Standard information criteria provided an ambiguous ranking of the models, which would not justify selecting one of them and discarding all others as is commonly done in practice. Instead, some of the models were eliminated based on their negligibly small updated probabilities and the rest were used to project the measured log permeabilities by kriging onto a rock volume containing the six boreholes. These four
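
    One standard way (not necessarily the exact procedure of this report) to turn information-criterion values from calibrated alternative models into posterior model probabilities is p(Mk|D) ∝ exp(−ΔICk/2)·p(Mk); the weights are then used to average predictions and to quantify between-model variance. The IC values, priors, and "predictions" below are illustrative assumptions.

    ```python
    import numpy as np

    def posterior_model_weights(ic_values, prior_probs=None):
        """Posterior model probabilities from information criteria (e.g., KIC/BIC).

        p(Mk | D) is proportional to exp(-0.5 * (ICk - IC_min)) * p(Mk)
        """
        ic = np.asarray(ic_values, dtype=float)
        prior = np.ones_like(ic) if prior_probs is None else np.asarray(prior_probs, float)
        w = np.exp(-0.5 * (ic - ic.min())) * prior
        return w / w.sum()

    # Illustrative IC values for seven alternative variogram models
    ic = [412.3, 413.1, 418.9, 425.4, 412.8, 431.0, 440.2]
    weights = posterior_model_weights(ic)
    print("model weights:", np.round(weights, 3))

    # Model-averaged prediction and between-model variance at one location
    preds = np.array([-12.1, -12.4, -11.8, -12.9, -12.2, -13.3, -11.5])   # log-permeability
    mean = np.sum(weights * preds)
    var_between = np.sum(weights * (preds - mean) ** 2)
    print(f"averaged prediction = {mean:.2f}, between-model variance = {var_between:.3f}")
    ```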

  5. Accuracy of Aerodynamic Model Parameters Estimated from Flight Test Data

    NASA Technical Reports Server (NTRS)

    Morelli, Eugene A.; Klein, Vladislav

    1997-01-01

    An important part of building mathematical models based on measured data is calculating the accuracy associated with statistical estimates of the model parameters. Indeed, without some idea of this accuracy, the parameter estimates themselves have limited value. An expression is developed for computing quantitatively correct parameter accuracy measures for maximum likelihood parameter estimates when the output residuals are colored. This result is important because experience in analyzing flight test data reveals that the output residuals from maximum likelihood estimation are almost always colored. The calculations involved can be appended to conventional maximum likelihood estimation algorithms. Monte Carlo simulation runs were used to show that parameter accuracy measures from the new technique accurately reflect the quality of the parameter estimates from maximum likelihood estimation without the need for correction factors or frequency domain analysis of the output residuals. The technique was applied to flight test data from repeated maneuvers flown on the F-18 High Alpha Research Vehicle. As in the simulated cases, parameter accuracy measures from the new technique were in agreement with the scatter in the parameter estimates from repeated maneuvers, whereas conventional parameter accuracy measures were optimistic.

  6. Hunting for hydrogen: random structure searching and prediction of NMR parameters of hydrous wadsleyite

    PubMed Central

    Moran, Robert F.; McKay, David; Pickard, Chris J.; Berry, Andrew J.; Griffin, John M.

    2016-01-01

    The structural chemistry of materials containing low levels of nonstoichiometric hydrogen is difficult to determine, and producing structural models is challenging where hydrogen has no fixed crystallographic site. Here we demonstrate a computational approach employing ab initio random structure searching (AIRSS) to generate a series of candidate structures for hydrous wadsleyite (β-Mg2SiO4 with 1.6 wt% H2O), a high-pressure mineral proposed as a repository for water in the Earth's transition zone. Aligning with previous experimental work, we solely consider models with Mg3 (over Mg1, Mg2 or Si) vacancies. We adapt the AIRSS method by starting with anhydrous wadsleyite, removing a single Mg2+ and randomly placing two H+ in a unit cell model, generating 819 candidate structures. 103 geometries were then subjected to more accurate optimisation under periodic DFT. Using this approach, we find the most favourable hydration mechanism involves protonation of two O1 sites around the Mg3 vacancy. The formation of silanol groups on O3 or O4 sites (with loss of stable O1–H hydroxyls) coincides with an increase in total enthalpy. Importantly, the approach we employ allows observables such as NMR parameters to be computed for each structure. We consider hydrous wadsleyite (∼1.6 wt%) to be dominated by protonated O1 sites, with O3/O4–H silanol groups present as defects, a model that maps well onto experimental studies at higher levels of hydration (J. M. Griffin et al., Chem. Sci., 2013, 4, 1523). The AIRSS approach adopted herein provides the crucial link between atomic-scale structure and experimental studies. PMID:27020937

  7. Numerical modeling of piezoelectric transducers using physical parameters.

    PubMed

    Cappon, Hans; Keesman, Karel J

    2012-05-01

    Design of ultrasonic equipment is frequently facilitated with numerical models. These numerical models, however, need a calibration step, because usually not all characteristics of the materials used are known. Characterization of material properties combined with numerical simulations and experimental data can be used to acquire valid estimates of the material parameters. In our design application, a finite element (FE) model of an ultrasonic particle separator, driven by an ultrasonic transducer in thickness mode, is required. A limited set of material parameters for the piezoelectric transducer were obtained from the manufacturer, thus preserving prior physical knowledge to a large extent. The remaining unknown parameters were estimated from impedance analysis with a simple experimental setup combined with a numerical optimization routine using 2-D and 3-D FE models. Thus, a full set of physically interpretable material parameters was obtained for our specific purpose. The approach provides adequate accuracy of the estimates of the material parameters, near 1%. These parameter estimates will subsequently be applied in future design simulations, without the need to go through an entire series of characterization experiments. Finally, a sensitivity study showed that small variations of 1% in the main parameters caused changes near 1% in the eigenfrequency, but changes up to 7% in the admittance peak, thus influencing the efficiency of the system. Temperature will already cause these small variations in response; thus, a frequency control unit is required when actually manufacturing an efficient ultrasonic separation system.

  8. SPOTting Model Parameters Using a Ready-Made Python Package

    PubMed Central

    Houska, Tobias; Kraft, Philipp; Chamorro-Chavez, Alejandro; Breuer, Lutz

    2015-01-01

    The choice of a specific parameter estimation method is often driven more by its availability than by its performance. We developed SPOTPY (Statistical Parameter Optimization Tool), an open-source Python package containing a comprehensive set of methods typically used to calibrate, analyze and optimize parameters for a wide range of ecological models. SPOTPY currently contains eight widely used algorithms and 11 objective functions, and can sample from eight parameter distributions. SPOTPY has a model-independent structure and can be run in parallel from the workstation to large computation clusters using the Message Passing Interface (MPI). We tested SPOTPY in five different case studies: to parameterize the Rosenbrock, Griewank and Ackley functions; a one-dimensional physically based soil moisture routine, where we searched for parameters of the van Genuchten-Mualem function; and a calibration of a biogeochemistry model with different objective functions. The case studies reveal that the implemented SPOTPY methods can be used for any model with just a minimal amount of code for maximal power of parameter optimization. They further show the benefit of having one package at hand that includes a number of well-performing parameter search methods, since not every case study can be solved sufficiently with every algorithm or every objective function. PMID:26680783

  9. An Effective Parameter Screening Strategy for High Dimensional Watershed Models

    NASA Astrophysics Data System (ADS)

    Khare, Y. P.; Martinez, C. J.; Munoz-Carpena, R.

    2014-12-01

    Watershed simulation models can assess the impacts of natural and anthropogenic disturbances on natural systems. These models have become important tools for tackling a range of water resources problems through their implementation in the formulation and evaluation of Best Management Practices, Total Maximum Daily Loads, and Basin Management Action Plans. For accurate applications of watershed models they need to be thoroughly evaluated through global uncertainty and sensitivity analyses (UA/SA). However, due to the high dimensionality of these models such evaluation becomes extremely time- and resource-consuming. Parameter screening, the qualitative separation of important parameters, has been suggested as an essential step before applying rigorous evaluation techniques such as the Sobol' and Fourier Amplitude Sensitivity Test (FAST) methods in the UA/SA framework. The method of elementary effects (EE) (Morris, 1991) is one of the most widely used screening methodologies. Some of the common parameter sampling strategies for EE, e.g. Optimized Trajectories [OT] (Campolongo et al., 2007) and Modified Optimized Trajectories [MOT] (Ruano et al., 2012), suffer from inconsistencies in the generated parameter distributions, infeasible sample generation time, etc. In this work, we have formulated a new parameter sampling strategy - Sampling for Uniformity (SU) - for parameter screening which is based on the principles of the uniformity of the generated parameter distributions and the spread of the parameter sample. A rigorous multi-criteria evaluation (time, distribution, spread and screening efficiency) of OT, MOT, and SU indicated that SU is superior to other sampling strategies. Comparison of the EE-based parameter importance rankings with those of Sobol' helped to quantify the qualitativeness of the EE parameter screening approach, reinforcing the fact that one should use EE only to reduce the resource burden required by FAST/Sobol' analyses but not to replace it.
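
    The record names the method of elementary effects (Morris, 1991) as the screening step. The sketch below is a simplified, generic implementation of that screening (random trajectories on the unit hypercube, signed elementary effects, and the usual μ* and σ summaries); it is not the SU/OT/MOT sampling of the abstract, and the toy model and settings are assumptions.

    ```python
    import numpy as np

    def morris_screening(model, k, r=20, p=4, rng=None):
        """Simplified Morris elementary-effects screening on the unit hypercube.

        model : callable accepting a length-k array in [0, 1]^k
        k     : number of parameters
        r     : number of trajectories
        p     : number of grid levels; step size delta = p / (2*(p-1))
        Returns (mu_star, sigma), each of length k.
        """
        rng = np.random.default_rng() if rng is None else rng
        delta = p / (2.0 * (p - 1))
        grid = np.arange(p) / (p - 1)
        start_levels = grid[grid <= 1.0 - delta]       # so x_i + delta stays in [0, 1]

        ee = np.empty((r, k))
        for t in range(r):
            x = rng.choice(start_levels, size=k)
            y = model(x)
            for i in rng.permutation(k):               # perturb one factor at a time
                x_new = x.copy()
                x_new[i] += delta
                y_new = model(x_new)
                ee[t, i] = (y_new - y) / delta
                x, y = x_new, y_new

        mu_star = np.abs(ee).mean(axis=0)
        sigma = ee.std(axis=0, ddof=1)
        return mu_star, sigma

    # Toy response: one dominant factor, two interacting factors, one near-inert factor
    def toy_model(x):
        return 5.0 * x[0] + 2.0 * x[1] * x[2] + 0.01 * x[3]

    mu_star, sigma = morris_screening(toy_model, k=4, r=50, rng=np.random.default_rng(2))
    for i, (m, s) in enumerate(zip(mu_star, sigma)):
        print(f"x{i}: mu* = {m:.3f}, sigma = {s:.3f}")
    ```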

  10. Generalized additive modeling with implicit variable selection by likelihood-based boosting.

    PubMed

    Tutz, Gerhard; Binder, Harald

    2006-12-01

    The use of generalized additive models in statistical data analysis suffers from the restriction to few explanatory variables and the problems of selection of smoothing parameters. Generalized additive model boosting circumvents these problems by means of stagewise fitting of weak learners. A fitting procedure is derived which works for all simple exponential family distributions, including binomial, Poisson, and normal response variables. The procedure combines the selection of variables and the determination of the appropriate amount of smoothing. Penalized regression splines and the newly introduced penalized stumps are considered as weak learners. Estimates of standard deviations and stopping criteria, which are notorious problems in iterative procedures, are based on an approximate hat matrix. The method is shown to be a strong competitor to common procedures for the fitting of generalized additive models. In particular, in high-dimensional settings with many nuisance predictor variables it performs very well. PMID:17156269
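
    The paper's likelihood-based boosting with penalized splines and penalized stumps is not reproduced here. As a bare-bones analogue, the sketch below fits an additive model for a Gaussian response by componentwise L2 boosting with plain (unpenalized) stumps, selecting the best covariate at each iteration; the selection counts illustrate the implicit variable selection the abstract refers to. Data, learning rate, and iteration count are assumptions.

    ```python
    import numpy as np

    def stump_fit(x, r):
        """Best single-split stump for residuals r on covariate x."""
        best = (np.inf, None, r.mean(), r.mean())
        for s in np.quantile(x, np.linspace(0.1, 0.9, 17)):
            left, right = r[x <= s], r[x > s]
            if len(left) == 0 or len(right) == 0:
                continue
            sse = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
            if sse < best[0]:
                best = (sse, s, left.mean(), right.mean())
        return best     # (sse, split, left value, right value)

    def boost_additive(X, y, n_iter=300, nu=0.1):
        """Componentwise L2 boosting with stumps; returns fitted values and
        how often each covariate was selected (implicit variable selection)."""
        n, p = X.shape
        fit = np.full(n, y.mean())
        selected = np.zeros(p, dtype=int)
        for _ in range(n_iter):
            r = y - fit                              # negative gradient for squared loss
            sse, j_best, stump = np.inf, None, None
            for j in range(p):
                cand = stump_fit(X[:, j], r)
                if cand[0] < sse:
                    sse, j_best, stump = cand[0], j, cand
            _, s, vl, vr = stump
            fit += nu * np.where(X[:, j_best] <= s, vl, vr)   # shrunken update
            selected[j_best] += 1
        return fit, selected

    rng = np.random.default_rng(3)
    n, p = 300, 6                                   # 2 informative + 4 nuisance covariates
    X = rng.uniform(-2, 2, (n, p))
    y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2 + rng.normal(0, 0.3, n)

    fit, selected = boost_additive(X, y)
    print("selection counts per covariate:", selected)
    ```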

  11. The influence of non-solvent addition on the independent and dependent parameters in roller electrospinning of polyurethane.

    PubMed

    Cengiz-Callioglu, Funda; Jirsak, Oldrich; Dayik, Mehmet

    2013-07-01

    This paper discusses the effects of 1,1,2,2 tetrachlorethylen (TCE) non-solvent addition on the independent parameters (electrical conductivity, dielectric constant, surface tension, the rheological properties of the solution, etc.) and dependent parameters (number of Taylor cones per square meter (NTC/m2), spinning performance per Taylor cone (SP/TC), total spinning performance (SP), and fiber properties such as diameter, diameter uniformity, and non-fibrous area) in roller electrospinning of polyurethane (PU). The same process parameters (voltage, distance of the electrodes, humidity, etc.) were applied for all solutions during the spinning process. According to the results, the effect of TCE non-solvent concentration on the dielectric constant, surface tension, rheological properties of the solution and also on spinning performance was statistically significant. Besides these results, the TCE non-solvent concentration affects the quality of the fibers and the nanoweb structure. In general, high fiber density, a low non-fibrous percentage and uniform nanofibers were obtained from the fiber morphology analyses.

  12. Application of physical parameter identification to finite element models

    NASA Technical Reports Server (NTRS)

    Bronowicki, Allen J.; Lukich, Michael S.; Kuritz, Steven P.

    1986-01-01

    A time-domain technique for matching response predictions of a structural dynamic model to test measurements is developed. Significance is attached to prior estimates of physical model parameters and to experimental data. The Bayesian estimation procedure allows confidence levels in predicted physical and modal parameters to be obtained. Structural optimization procedures are employed to minimize an error functional, with the physical model parameters describing the finite element model as design variables. The number of complete FEM analyses is reduced using approximation concepts, including the recently developed convoluted Taylor series approach. The error function is represented in closed form by converting free-decay test data to a time-series model using Prony's method. The technique is demonstrated on the simulated response of a simple truss structure.
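
    Prony's method, mentioned above for converting free-decay data into a time-series model, can be sketched generically as: fit a linear prediction model, take the roots of the prediction polynomial as discrete poles, and recover amplitudes by a Vandermonde least-squares fit. The implementation below is a generic illustration under those steps (not the authors' code); the synthetic two-mode signal and model order are assumptions.

    ```python
    import numpy as np

    def prony(x, dt, order):
        """Fit x[n] ~ sum_i c_i * exp(s_i * n * dt) to uniformly sampled free-decay data.

        Returns complex amplitudes c_i and continuous-time poles s_i (damping + j*omega).
        """
        x = np.asarray(x, dtype=float)
        N, p = len(x), order
        # 1) Linear prediction: x[n] = -a1*x[n-1] - ... - ap*x[n-p]
        A = np.column_stack([x[p - k - 1:N - k - 1] for k in range(p)])
        a = np.linalg.lstsq(A, -x[p:], rcond=None)[0]
        # 2) Roots of the prediction polynomial are the discrete poles z_i = exp(s_i*dt)
        z = np.roots(np.concatenate(([1.0], a)))
        s = np.log(z) / dt
        # 3) Amplitudes from a Vandermonde least-squares fit x[n] ~ sum_i c_i * z_i**n
        V = np.vander(z, N, increasing=True).T
        c = np.linalg.lstsq(V, x.astype(complex), rcond=None)[0]
        return c, s

    # Synthetic free decay: two modes at 4 Hz and 11 Hz with light damping
    dt = 0.005
    n = np.arange(400)
    x = (1.0 * np.exp(-0.8 * n * dt) * np.cos(2 * np.pi * 4 * n * dt)
         + 0.4 * np.exp(-1.5 * n * dt) * np.cos(2 * np.pi * 11 * n * dt))

    c, s = prony(x, dt, order=4)        # 2 modes -> 4 complex-conjugate poles
    freqs = np.abs(s.imag) / (2 * np.pi)
    print("identified frequencies (Hz):", np.round(np.sort(freqs), 2))
    print("identified damping (1/s):   ", np.round(np.sort(-s.real), 2))
    ```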

  13. Parameter Identification in a Tuberculosis Model for Cameroon

    PubMed Central

    Moualeu-Ngangue, Dany Pascal; Röblitz, Susanna; Ehrig, Rainald; Deuflhard, Peter

    2015-01-01

    A deterministic model of tuberculosis in Cameroon is designed and analyzed with respect to its transmission dynamics. The model includes lack of access to treatment and weak diagnosis capacity as well as both frequency- and density-dependent transmissions. It is shown that the model is mathematically well-posed and epidemiologically reasonable. Solutions are non-negative and bounded whenever the initial values are non-negative. A sensitivity analysis of model parameters is performed and the most sensitive ones are identified by means of a state-of-the-art Gauss-Newton method. In particular, parameters representing the proportion of individuals having access to medical facilities are seen to have a large impact on the dynamics of the disease. The model predicts that a gradual increase of these parameters could significantly reduce the disease burden on the population within the next 15 years. PMID:25874885

  14. Regionalization parameters of conceptual rainfall-runoff model

    NASA Astrophysics Data System (ADS)

    Osuch, M.

    2003-04-01

    The main goal of this study was to develop techniques for the a priori estimation of hydrological model parameters. The conceptual hydrological model CLIRUN was applied to around 50 catchments in Poland, ranging in size from 1,000 to 100,000 km2. The model was calibrated for a number of gauged catchments with different catchment characteristics. The model parameters were related to different climatic and physical catchment characteristics (topography, land use, vegetation and soil type). The relationships were tested by comparing observed and simulated runoff series from gauged catchments that were not used in the calibration. The model performance using regional parameters was promising for most of the calibration and validation catchments.

  15. Parameter Sensitivity Evaluation of the CLM-Crop model

    NASA Astrophysics Data System (ADS)

    Drewniak, B. A.; Zeng, X.; Mametjanov, A.; Anitescu, M.; Norris, B.; Kotamarthi, V. R.

    2011-12-01

    In order to improve carbon cycling within Earth System Models, crop representations for corn, spring wheat, and soybean have been incorporated into the latest version of the Community Land Model (CLM), the land surface model in the Community Earth System Model. As a means to evaluate and improve the CLM-Crop model, we will determine the sensitivity of carbon fluxes (such as GPP and NEE), yields, and soil organic matter to various crop parameters. The sensitivity analysis will perform small perturbations over a range of values for each parameter on individual grid sites, for comparison with AmeriFlux data, as well as globally, so crop model parameters can be improved. Over 20 parameters have been identified for evaluation in this study, including carbon-nitrogen ratios for leaves, stems, roots, and organs; fertilizer applications; growing degree days for each growth stage; and more. Results from this study will give a better understanding of the sensitivity of the model to the various parameters used to represent crops, which will help improve overall model performance and aid in determining the future influence of climate change on cropland ecosystems.

  16. Observation model and parameter partials for the JPL VLBI parameter estimation software MODEST, 1994

    NASA Technical Reports Server (NTRS)

    Sovers, O. J.; Jacobs, C. S.

    1994-01-01

    This report is a revision of the document Observation Model and Parameter Partials for the JPL VLBI Parameter Estimation Software 'MODEST'---1991, dated August 1, 1991. It supersedes that document and its four previous versions (1983, 1985, 1986, and 1987). A number of aspects of the very long baseline interferometry (VLBI) model were improved from 1991 to 1994. Treatment of tidal effects is extended to model the effects of ocean tides on universal time and polar motion (UTPM), including a default model for nearly diurnal and semidiurnal ocean tidal UTPM variations, and partial derivatives for all (solid and ocean) tidal UTPM amplitudes. The time-honored 'K1 correction' for solid earth tides has been extended to include the analogous frequency-dependent response of five tidal components. Partials of ocean loading amplitudes are now supplied. The Zhu-Mathews-Oceans-Anisotropy (ZMOA) 1990-2 and Kinoshita-Souchay models of nutation are now two of the modeling choices to replace the increasingly inadequate 1980 International Astronomical Union (IAU) nutation series. A rudimentary model of antenna thermal expansion is provided. Two more troposphere mapping functions have been added to the repertoire. Finally, correlations among VLBI observations via the model of Treuhaft and Lanyi improve modeling of the dynamic troposphere. A number of minor misprints in Rev. 4 have been corrected.

  17. Parameter Estimation for Groundwater Models under Uncertain Irrigation Data.

    PubMed

    Demissie, Yonas; Valocchi, Albert; Cai, Ximing; Brozovic, Nicholas; Senay, Gabriel; Gebremichael, Mekonnen

    2015-01-01

    The success of modeling groundwater is strongly influenced by the accuracy of the model parameters that are used to characterize the subsurface system. However, the presence of uncertainty and possibly bias in groundwater model source/sink terms may lead to biased estimates of model parameters and model predictions when the standard regression-based inverse modeling techniques are used. This study first quantifies the levels of bias in groundwater model parameters and predictions due to the presence of errors in irrigation data. Then, a new inverse modeling technique called input uncertainty weighted least-squares (IUWLS) is presented for unbiased estimation of the parameters when pumping and other source/sink data are uncertain. The approach uses the concept of generalized least-squares method with the weight of the objective function depending on the level of pumping uncertainty and iteratively adjusted during the parameter optimization process. We have conducted both analytical and numerical experiments, using irrigation pumping data from the Republican River Basin in Nebraska, to evaluate the performance of ordinary least-squares (OLS) and IUWLS calibration methods under different levels of uncertainty of irrigation data and calibration conditions. The result from the OLS method shows the presence of statistically significant (p < 0.05) bias in estimated parameters and model predictions that persist despite calibrating the models to different calibration data and sample sizes. However, by directly accounting for the irrigation pumping uncertainties during the calibration procedures, the proposed IUWLS is able to minimize the bias effectively without adding significant computational burden to the calibration processes.

  19. Parameters and pitfalls to consider in the conduct of food additive research, Carrageenan as a case study.

    PubMed

    Weiner, Myra L

    2016-01-01

    This paper provides guidance on the conduct of new in vivo and in vitro studies on high molecular weight food additives, with carrageenan, the widely used food additive, as a case study. It is important to understand the physical/chemical properties and to verify the identity/purity, molecular weight and homogeneity/stability of the additive in the vehicle for oral delivery. The strong binding of CGN to protein in rodent chow or infant formula results in no gastrointestinal tract exposure to free CGN. It is recommended that doses of high Mw non-caloric, non-nutritive additives not exceed 5% by weight of total solid diet to avoid potential nutritional effects. Addition of some high Mw additives at high concentrations to liquid nutritional supplements increases viscosity and may affect palatability, caloric intake and body weight gain. In in vitro studies, the use of well-characterized, relevant cell types and the appropriate composition of the culture media are necessary for proper conduct and interpretation. CGN is bound to media protein and not freely accessible to cells in vitro. Interpretation of new studies on food additives should consider the interaction of food additives with the vehicle components and the appropriateness of the animal or cell model and dose-response.

  1. Parameters of cosmological models and recent astronomical observations

    SciTech Connect

    Sharov, G.S.; Vorontsova, E.G. E-mail: elenavor@inbox.ru

    2014-10-01

    For different gravitational models we consider limitations on their parameters coming from recent observational data for type Ia supernovae, baryon acoustic oscillations, and from 34 data points for the Hubble parameter H(z) depending on redshift. We calculate parameters of 3 models describing accelerated expansion of the universe: the ΛCDM model, the model with generalized Chaplygin gas (GCG) and the multidimensional model of I. Pahwa, D. Choudhury and T.R. Seshadri. In particular, for the ΛCDM model the 1σ estimates of parameters are: H0 = 70.262 ± 0.319 km s^-1 Mpc^-1, Ωm = 0.276 (+0.009/-0.008), ΩΛ = 0.769 ± 0.029, Ωk = -0.045 ± 0.032. The GCG model under the restriction α ≥ 0 is reduced to the ΛCDM model. Predictions of the multidimensional model essentially depend on 3 data points for H(z) with z ≥ 2.3.
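
    As a minimal illustration of the kind of H(z) fitting described above, the sketch below evaluates H(z) = H0·sqrt(Ωm(1+z)^3 + Ωk(1+z)^2 + ΩΛ) and minimizes a χ² over a coarse (H0, Ωm) grid with Ωk fixed to zero for brevity; the handful of data points are illustrative placeholders, not the 34-point compilation used in the paper, and no supernova or BAO terms are included.

    ```python
    import numpy as np

    def H_lcdm(z, H0, Om, Ok):
        """Lambda-CDM expansion rate; Omega_Lambda = 1 - Om - Ok (flatness not assumed)."""
        OL = 1.0 - Om - Ok
        return H0 * np.sqrt(Om * (1 + z) ** 3 + Ok * (1 + z) ** 2 + OL)

    def chi2(params, z, H_obs, sigma):
        H0, Om, Ok = params
        return np.sum(((H_obs - H_lcdm(z, H0, Om, Ok)) / sigma) ** 2)

    # Illustrative H(z) measurements (km/s/Mpc), not the 34-point compilation
    z     = np.array([0.1,  0.4,  0.9,   1.3,   1.75,  2.34])
    H_obs = np.array([69.0, 82.0, 104.0, 128.0, 158.0, 222.0])
    sigma = np.array([12.0, 8.0,  13.0,  17.0,  14.0,  7.0])

    # Brute-force grid over (H0, Om) with Ok fixed to 0 for simplicity
    H0_grid = np.linspace(60, 80, 81)
    Om_grid = np.linspace(0.1, 0.5, 81)
    chi2_grid = np.array([[chi2((h0, om, 0.0), z, H_obs, sigma) for om in Om_grid]
                          for h0 in H0_grid])
    i, j = np.unravel_index(chi2_grid.argmin(), chi2_grid.shape)
    print(f"best fit: H0 = {H0_grid[i]:.1f} km/s/Mpc, Om = {Om_grid[j]:.2f}, "
          f"chi2_min = {chi2_grid[i, j]:.1f}")
    ```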

  2. Material parameter computation for multi-layered vocal fold models.

    PubMed

    Schmidt, Bastian; Stingl, Michael; Leugering, Günter; Berry, David A; Döllinger, Michael

    2011-04-01

    Today, the prevention and treatment of voice disorders is an ever-increasing health concern. Since many occupations rely on verbal communication, vocal health is necessary just to maintain one's livelihood. Commonly applied models to study vocal fold vibrations and air flow distributions are self-sustained physical models of the larynx composed of artificial silicone vocal folds. Choosing appropriate mechanical parameters for these vocal fold models while considering simplifications due to manufacturing restrictions is difficult but crucial for achieving realistic behavior. In the present work, a combination of experimental and numerical approaches to compute material parameters for synthetic vocal fold models is presented. The material parameters are derived from deformation behaviors of excised human larynges. The resulting deformations are used as reference displacements for a tracking functional to be optimized. Material optimization was applied to three-dimensional vocal fold models based on isotropic and transverse-isotropic material laws, considering both a layered model with homogeneous material properties on each layer and an inhomogeneous model. The best results were exhibited by a transverse-isotropic, inhomogeneous (i.e., not producible) model. For the homogeneous model (three layers), the transverse-isotropic material parameters were also computed for each layer, yielding deformations similar to the measured human vocal fold deformations.

  3. Kinetic modeling of molecular motors: pause model and parameter determination from single-molecule experiments

    NASA Astrophysics Data System (ADS)

    Morin, José A.; Ibarra, Borja; Cao, Francisco J.

    2016-05-01

    Single-molecule manipulation experiments of molecular motors provide essential information about the rate and conformational changes of the steps of the reaction located along the manipulation coordinate. This information is not always sufficient to define a particular kinetic cycle. Recent single-molecule experiments with optical tweezers showed that the DNA unwinding activity of a Phi29 DNA polymerase mutant presents a complex pause behavior, which includes short and long pauses. Here we show that different kinetic models, considering different connections between the active and the pause states, can explain the experimental pause behavior. Both the two-independent-pause model and the two-connected-pause model are able to describe the pause behavior of a mutated Phi29 DNA polymerase observed in an optical tweezers single-molecule experiment. For the two-independent-pause model all parameters are fixed by the observed data, while for the more general two-connected-pause model there is a range of parameter values compatible with the observed data (which can be expressed in terms of two of the rates and their force dependencies). This general model includes models with indirect entry and exit to the long-pause state, and also models with cycling in both directions. Additionally, assuming that detailed balance is verified, which forbids cycling, the ranges of the parameter values are reduced (and can then be expressed in terms of one rate and its force dependency). The resulting model interpolates between the independent-pause model and the model with indirect entry and exit to the long-pause state.

  4. [Critical of the additive model of the randomized controlled trial].

    PubMed

    Boussageon, Rémy; Gueyffier, François; Bejan-Angoulvant, Theodora; Felden-Dominiak, Géraldine

    2008-01-01

    Randomized, double-blind, placebo-controlled clinical trials are currently the best way to demonstrate the clinical effectiveness of drugs. Its methodology relies on the method of difference (John Stuart Mill), through which the observed difference between two groups (drug vs placebo) can be attributed to the pharmacological effect of the drug being tested. However, this additive model can be questioned in the event of statistical interactions between the pharmacological and the placebo effects. Evidence in different domains has shown that the placebo effect can influence the effect of the active principle. This article evaluates the methodological, clinical and epistemological consequences of this phenomenon. Topics treated include extrapolating results, accounting for heterogeneous results, demonstrating the existence of several factors in the placebo effect, the necessity to take these factors into account for given symptoms or pathologies, as well as the problem of the "specific" effect.

  5. Environmental Transport Input Parameters for the Biosphere Model

    SciTech Connect

    M. Wasiolek

    2004-09-10

    This analysis report is one of the technical reports documenting the Environmental Radiation Model for Yucca Mountain, Nevada (ERMYN), a biosphere model supporting the total system performance assessment for the license application (TSPA-LA) for the geologic repository at Yucca Mountain. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows relationships among the reports developed for biosphere modeling and biosphere abstraction products for the TSPA-LA, as identified in the ''Technical Work Plan for Biosphere Modeling and Expert Support'' (BSC 2004 [DIRS 169573]) (TWP). This figure provides an understanding of how this report contributes to biosphere modeling in support of the license application (LA). This report is one of the five reports that develop input parameter values for the biosphere model. The ''Biosphere Model Report'' (BSC 2004 [DIRS 169460]) describes the conceptual model and the mathematical model. The input parameter reports, shown to the right of the Biosphere Model Report in Figure 1-1, contain detailed description of the model input parameters. The output of this report is used as direct input in the ''Nominal Performance Biosphere Dose Conversion Factor Analysis'' and in the ''Disruptive Event Biosphere Dose Conversion Factor Analysis'' that calculate the values of biosphere dose conversion factors (BDCFs) for the groundwater and volcanic ash exposure scenarios, respectively. The purpose of this analysis was to develop biosphere model parameter values related to radionuclide transport and accumulation in the environment. These parameters support calculations of radionuclide concentrations in the environmental media (e.g., soil, crops, animal products, and air) resulting from a given radionuclide concentration at the source of contamination (i.e., either in groundwater or in volcanic ash). The analysis was performed in accordance with the TWP (BSC 2004 [DIRS 169573]).

  6. Spatial Variability and Interpolation of Stochastic Weather Simulation Model Parameters.

    NASA Astrophysics Data System (ADS)

    Johnson, Gregory L.; Daly, Christopher; Taylor, George H.; Hanson, Clayton L.

    2000-06-01

    The spatial variability of 58 precipitation and temperature parameters from the `generation of weather elements for multiple applications' (GEM) weather generator has been investigated over a region of significant complexity in topography and climate. GEM parameters were derived for 80 climate stations in southern Idaho and southeastern Oregon. A technique was developed and used to determine the GEM parameters from high-elevation snowpack telemetry stations that report precipitation in nonstandard 2.5-mm (versus 0.25 mm) increments. Important dependencies were noted between most of these parameters and elevation (both domainwide and local), location, and other factors. The `parameter-elevation regressions on independent slopes model' (PRISM) spatial modeling system was used to develop approximate 4-km gridded data fields of each of these parameters. A feature was developed in PRISM that models temperatures above and below mean inversions differently. Examples of the spatial fields derived from this study and a discussion of the applications of these spatial parameter fields are included.

  7. Inhalation Exposure Input Parameters for the Biosphere Model

    SciTech Connect

    K. Rautenstrauch

    2004-09-10

    This analysis is one of 10 reports that support the Environmental Radiation Model for Yucca Mountain, Nevada (ERMYN) biosphere model. The ''Biosphere Model Report'' (BSC 2004 [DIRS 169460]) describes in detail the conceptual model as well as the mathematical model and its input parameters. This report documents development of input parameters for the biosphere model that are related to atmospheric mass loading and supports the use of the model to develop biosphere dose conversion factors (BDCFs). The biosphere model is one of a series of process models supporting the total system performance assessment (TSPA) for a Yucca Mountain repository. Inhalation Exposure Input Parameters for the Biosphere Model is one of five reports that develop input parameters for the biosphere model. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows the interrelationships among the products (i.e., analysis and model reports) developed for biosphere modeling, and the plan for development of the biosphere abstraction products for TSPA, as identified in the Technical Work Plan for Biosphere Modeling and Expert Support (BSC 2004 [DIRS 169573]). This analysis report defines and justifies values of mass loading for the biosphere model. Mass loading is the total mass concentration of resuspended particles (e.g., dust, ash) in a volume of air. Mass loading values are used in the air submodel of ERMYN to calculate concentrations of radionuclides in air inhaled by a receptor and concentrations in air surrounding crops. Concentrations in air to which the receptor is exposed are then used in the inhalation submodel to calculate the dose contribution to the receptor from inhalation of contaminated airborne particles. Concentrations in air surrounding plants are used in the plant submodel to calculate the concentrations of radionuclides in foodstuffs contributed from uptake by foliar interception.

  8. Parameter uncertainty analysis of a biokinetic model of caesium.

    PubMed

    Li, W B; Klein, W; Blanchardon, E; Puncher, M; Leggett, R W; Oeh, U; Breustedt, B; Noßke, D; Lopez, M A

    2015-01-01

    Parameter uncertainties for the biokinetic model of caesium (Cs) developed by Leggett et al. were inventoried and evaluated. Methods of parameter uncertainty analysis were used to assess the uncertainties of model predictions under the assumed uncertainties and distributions of the model parameters. Furthermore, the importance of individual model parameters was assessed by means of sensitivity analysis. The calculated uncertainties of model predictions were compared with human data of Cs measured in blood and in the whole body. It was found that propagating the derived uncertainties in model parameter values reproduced the range of bioassay data observed in human subjects at different times after intake. The maximum ranges, expressed as uncertainty factors (UFs) (defined as the square root of the ratio of the 97.5th to the 2.5th percentile) of blood clearance, whole-body retention and urinary excretion of Cs predicted at early times after intake were, respectively, 1.5, 1.0 and 2.5 on the first day; 1.8, 1.1 and 2.4 at Day 10; and 1.8, 2.0 and 1.8 at Day 100. For late times (1000 d) after intake, the UFs increased to 43, 24 and 31, respectively. The transfer rates between kidneys and blood, between muscle and blood, and from kidneys to the urinary bladder content are the parameters most influential on the blood clearance and whole-body retention of Cs. For urinary excretion, the transfer rates from the urinary bladder content to urine and from kidneys to the urinary bladder content have the greatest impact. The implications of the larger uncertainty factor of 43 in whole-body retention at later times (e.g., after Day 500) for the estimated equivalent and effective doses will be explored in subsequent work within the framework of EURADOS.
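
    The uncertainty factor used above, the square root of the ratio of the 97.5th to the 2.5th percentile of the model predictions, is straightforward to compute from Monte Carlo output. The sketch below is a minimal illustration in Python; the single-exponential retention function and the lognormal parameter distribution are assumptions made purely for demonstration and are not the Leggett biokinetic model.

      import numpy as np

      rng = np.random.default_rng(42)

      # Hypothetical stand-in for a biokinetic prediction: whole-body retention
      # R(t) = exp(-lambda * t), with an uncertain biological rate constant.
      def retention(t, lam):
          return np.exp(-lam * t)

      # Assumed (illustrative) parameter uncertainty: lognormal rate constant
      lam_samples = rng.lognormal(mean=np.log(0.007), sigma=0.3, size=10_000)

      for t in (1, 10, 100, 1000):  # days after intake
          pred = retention(t, lam_samples)
          p2_5, p97_5 = np.percentile(pred, [2.5, 97.5])
          uf = np.sqrt(p97_5 / p2_5)  # uncertainty factor as defined in the abstract
          print(f"day {t:4d}: UF = {uf:.2f}")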

  9. Simultaneous parameter estimation and contaminant source characterization for coupled groundwater flow and contaminant transport modelling

    USGS Publications Warehouse

    Wagner, B.J.

    1992-01-01

    Parameter estimation and contaminant source characterization are key steps in the development of a coupled groundwater flow and contaminant transport simulation model. Here a methodology for simultaneous model parameter estimation and source characterization is presented. The parameter estimation/source characterization inverse model combines groundwater flow and contaminant transport simulation with non-linear maximum likelihood estimation to determine optimal estimates of the unknown model parameters and source characteristics based on measurements of hydraulic head and contaminant concentration. First-order uncertainty analysis provides a means for assessing the reliability of the maximum likelihood estimates and evaluating the accuracy and reliability of the flow and transport model predictions. A series of hypothetical examples is presented to demonstrate the ability of the inverse model to solve the combined parameter estimation/source characterization inverse problem. Hydraulic conductivities, effective porosity, longitudinal and transverse dispersivities, boundary flux, and contaminant flux at the source are estimated for a two-dimensional groundwater system. In addition, characterization of the history of contaminant disposal or location of the contaminant source is demonstrated. Finally, the problem of estimating the statistical parameters that describe the errors associated with the head and concentration data is addressed. A stage-wise estimation procedure is used to jointly estimate these statistical parameters along with the unknown model parameters and source characteristics. © 1992.

  10. Environmental Transport Input Parameters for the Biosphere Model

    SciTech Connect

    M. A. Wasiolek

    2003-06-27

    This analysis report is one of the technical reports documenting the Environmental Radiation Model for Yucca Mountain Nevada (ERMYN), a biosphere model supporting the total system performance assessment (TSPA) for the geologic repository at Yucca Mountain. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows relationships among the reports developed for biosphere modeling and biosphere abstraction products for the TSPA, as identified in the ''Technical Work Plan: for Biosphere Modeling and Expert Support'' (TWP) (BSC 2003 [163602]). Some documents in Figure 1-1 may be under development and not available when this report is issued. This figure provides an understanding of how this report contributes to biosphere modeling in support of the license application (LA), but access to the listed documents is not required to understand the contents of this report. This report is one of the reports that develop input parameter values for the biosphere model. The ''Biosphere Model Report'' (BSC 2003 [160699]) describes the conceptual model, the mathematical model, and the input parameters. The purpose of this analysis is to develop biosphere model parameter values related to radionuclide transport and accumulation in the environment. These parameters support calculations of radionuclide concentrations in the environmental media (e.g., soil, crops, animal products, and air) resulting from a given radionuclide concentration at the source of contamination (i.e., either in groundwater or volcanic ash). The analysis was performed in accordance with the TWP (BSC 2003 [163602]). This analysis develops values of parameters associated with many features, events, and processes (FEPs) applicable to the reference biosphere (DTN: M00303SEPFEPS2.000 [162452]), which are addressed in the biosphere model (BSC 2003 [160699]). The treatment of these FEPs is described in BSC (2003 [160699], Section 6.2). Parameter values

  11. Global-scale regionalization of hydrologic model parameters

    NASA Astrophysics Data System (ADS)

    Beck, Hylke E.; van Dijk, Albert I. J. M.; de Roo, Ad; Miralles, Diego G.; McVicar, Tim R.; Schellekens, Jaap; Bruijnzeel, L. Adrian

    2016-05-01

    Current state-of-the-art models typically applied at continental to global scales (hereafter called macroscale) tend to use a priori parameters, resulting in suboptimal streamflow (Q) simulation. For the first time, a scheme for regionalization of model parameters at the global scale was developed. We used data from a diverse set of 1787 small-to-medium sized catchments (10-10,000 km2) and the simple conceptual HBV model to set up and test the scheme. Each catchment was calibrated against observed daily Q, after which 674 catchments with high calibration and validation scores, and thus presumably good-quality observed Q and forcing data, were selected to serve as donor catchments. The calibrated parameter sets for the donors were subsequently transferred to 0.5° grid cells with similar climatic and physiographic characteristics, resulting in parameter maps for HBV with global coverage. For each grid cell, we used the 10 most similar donor catchments, rather than the single most similar donor, and averaged the resulting simulated Q, which enhanced model performance. The 1113 catchments not used as donors were used to independently evaluate the scheme. The regionalized parameters outperformed spatially uniform (i.e., averaged calibrated) parameters for 79% of the evaluation catchments. Substantial improvements were evident for all major Köppen-Geiger climate types and even for evaluation catchments > 5000 km distant from the donors. The median improvement was about half of the performance increase achieved through calibration. HBV with regionalized parameters outperformed nine state-of-the-art macroscale models, suggesting these might also benefit from the new regionalization scheme. The produced HBV parameter maps including ancillary data are available via www.gloh2o.org.
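
    The essence of the regionalization scheme described above, transferring calibrated parameter sets from the most climatically and physiographically similar donor catchments to each grid cell and averaging the resulting simulated Q, can be sketched in a few lines. The sketch below is only an illustration of that idea: the Euclidean similarity measure over standardized attributes follows the spirit of the abstract, but the attribute values, parameter ranges, and the linear-reservoir stand-in for HBV are invented for demonstration.

      import numpy as np

      def donor_parameters(cell_attrs, donor_attrs, donor_params, n_donors=10):
          """Return the calibrated parameter sets of the n_donors most similar donors.

          cell_attrs   : (n_attrs,) standardized climate/physiographic attributes of a grid cell
          donor_attrs  : (n_total, n_attrs) standardized attributes of donor catchments
          donor_params : (n_total, n_params) calibrated parameter sets of the donors
          """
          dist = np.linalg.norm(donor_attrs - cell_attrs, axis=1)   # similarity = Euclidean distance
          nearest = np.argsort(dist)[:n_donors]
          return donor_params[nearest]

      def hbv_placeholder(params, forcing):
          """Stand-in for an HBV run; returns a simulated daily streamflow series."""
          # Purely illustrative: a single linear reservoir driven by precipitation.
          k, = params[:1]
          q = np.zeros_like(forcing)
          storage = 0.0
          for i, p in enumerate(forcing):
              storage += p
              q[i] = k * storage
              storage -= q[i]
          return q

      # As in the abstract, the simulated Q from the selected donors is averaged,
      # rather than averaging the parameters themselves.
      rng = np.random.default_rng(0)
      forcing = rng.gamma(2.0, 2.0, size=365)                  # hypothetical daily precipitation
      donor_attrs = rng.normal(size=(674, 5))                  # 674 donor catchments, 5 attributes
      donor_params = rng.uniform(0.05, 0.5, size=(674, 1))     # one placeholder parameter each
      cell_attrs = rng.normal(size=5)

      selected = donor_parameters(cell_attrs, donor_attrs, donor_params)
      q_ensemble = np.array([hbv_placeholder(p, forcing) for p in selected])
      q_cell = q_ensemble.mean(axis=0)                         # ensemble-averaged simulated Q
      print(q_cell[:5])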

  12. Global-scale regionalization of hydrologic model parameters

    NASA Astrophysics Data System (ADS)

    Beck, Hylke; van Dijk, Albert; de Roo, Ad; Miralles, Diego; Schellekens, Jaap; McVicar, Tim; Bruijnzeel, Sampurno

    2016-04-01

    Current state-of-the-art models typically applied at continental to global scales (hereafter called macro-scale) tend to use a priori parameters, resulting in suboptimal streamflow (Q) simulation. For the first time, a scheme for regionalization of model parameters at the global scale was developed. We used data from a diverse set of 1787 small-to-medium sized catchments (10-10,000 km2) and the simple conceptual HBV model to set up and test the scheme. Each catchment was calibrated against observed daily Q, after which 674 catchments with high calibration and validation scores, and thus presumably good-quality observed Q and forcing data, were selected to serve as donor catchments. The calibrated parameter sets for the donors were subsequently transferred to 0.5° grid cells with similar climatic and physiographic characteristics, resulting in parameter maps for HBV with global coverage. For each grid cell, we used the ten most similar donor catchments, rather than the single most similar donor, and averaged the resulting simulated Q, which enhanced model performance. The 1113 catchments not used as donors were used to independently evaluate the scheme. The regionalized parameters outperformed spatially-uniform (i.e., averaged calibrated) parameters for 79% of the evaluation catchments. Substantial improvements were evident for all major Köppen-Geiger climate types and even for evaluation catchments >5000 km distant from the donors. The median improvement was about half of the performance increase achieved through calibration. HBV using regionalized parameters outperformed nine state-of-the-art macro-scale models, suggesting these might also benefit from the new regionalization scheme. The produced HBV parameter maps including ancillary data are available via http://water.jrc.ec.europa.eu/HBV/.

  13. Dynamic Factor Analysis Models with Time-Varying Parameters

    ERIC Educational Resources Information Center

    Chow, Sy-Miin; Zu, Jiyun; Shifren, Kim; Zhang, Guangjian

    2011-01-01

    Dynamic factor analysis models with time-varying parameters offer a valuable tool for evaluating multivariate time series data with time-varying dynamics and/or measurement properties. We use the Dynamic Model of Activation proposed by Zautra and colleagues (Zautra, Potter, & Reich, 1997) as a motivating example to construct a dynamic factor model…

  14. Separability of Item and Person Parameters in Response Time Models.

    ERIC Educational Resources Information Center

    Van Breukelen, Gerard J. P.

    1997-01-01

    Discusses two forms of separability of item and person parameters in the context of response time models. The first is "separate sufficiency," and the second is "ranking independence." For each form a theorem stating sufficient conditions is proved. The two forms are shown to include several cases of models from psychometric and biometric…

  15. Atmospheric turbulence parameters for modeling wind turbine dynamics

    NASA Technical Reports Server (NTRS)

    Holley, W. E.; Thresher, R. W.

    1982-01-01

    A model which can be used to predict the response of wind turbines to atmospheric turbulence is given. The model was developed using linearized aerodynamics for a three-bladed rotor and accounts for three turbulent velocity components as well as velocity gradients across the rotor disk. Typical response power spectral densities are shown. The system response depends critically on three wind and turbulence parameters, and models are presented to predict desired response statistics. An equation error method, which can be used to estimate the required parameters from field data, is also presented.

  16. Uncertainty Analysis and Parameter Estimation For Nearshore Hydrodynamic Models

    NASA Astrophysics Data System (ADS)

    Ardani, S.; Kaihatu, J. M.

    2012-12-01

    Numerical models represent deterministic approaches used for the relevant physical processes in the nearshore. The complexity of the physics of the model and the uncertainty involved in the model inputs compel us to apply a stochastic approach to analyze the robustness of the model. The Bayesian inverse problem is one powerful way to estimate the important input model parameters (determined by a priori sensitivity analysis) and can be used for uncertainty analysis of the outputs. Bayesian techniques can be used to find the range of most probable parameters based on the probability of the observed data and the residual errors. In this study, the effect of input data involving lateral (Neumann) boundary conditions, bathymetry and off-shore wave conditions on nearshore numerical models is considered. Monte Carlo simulation is applied to a deterministic numerical model (the Delft3D modeling suite for coupled waves and flow) for the resulting uncertainty analysis of the outputs (wave height, flow velocity, mean sea level, etc.). Uncertainty analysis of the outputs is performed by random sampling from the input probability distribution functions and running the model as many times as required until convergence to consistent results is achieved. The case study used in this analysis is the Duck94 experiment, which was conducted at the U.S. Army Field Research Facility at Duck, North Carolina, USA in the fall of 1994. The joint probability of model parameters relevant for the Duck94 experiments will be found using the Bayesian approach. We will further show that, by using Bayesian techniques to estimate optimized model parameters as inputs and applying them in the uncertainty analysis, we can obtain more consistent results than when using only prior information for the input data: the variance of the uncertain parameters decreases and the probability of the observed data improves as well. Keywords: Monte Carlo Simulation, Delft3D, uncertainty analysis, Bayesian techniques

  17. Bayesian methods for characterizing unknown parameters of material models

    DOE PAGES

    Emery, J. M.; Grigoriu, M. D.; Field Jr., R. V.

    2016-02-04

    A Bayesian framework is developed for characterizing the unknown parameters of probabilistic models for material properties. In this framework, the unknown parameters are viewed as random and described by their posterior distributions obtained from prior information and measurements of quantities of interest that are observable and depend on the unknown parameters. The proposed Bayesian method is applied to characterize an unknown spatial correlation of the conductivity field in the definition of a stochastic transport equation and to solve this equation by Monte Carlo simulation and stochastic reduced order models (SROMs). As a result, the Bayesian method is also employed to characterize unknown parameters of material properties for laser welds from measurements of peak forces sustained by these welds.
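
    As a generic illustration of Bayesian characterization of an unknown material-model parameter (not the SROM-based workflow of the report), the sketch below evaluates a posterior on a grid by combining an assumed Gaussian prior with a Gaussian measurement likelihood for a placeholder forward model. All numerical values are assumptions for demonstration.

      import numpy as np

      rng = np.random.default_rng(1)

      # Unknown parameter: e.g., a correlation length of a conductivity field (hypothetical).
      theta_true = 2.5
      noise_sd = 0.4

      # Forward model mapping the parameter to an observable quantity of interest
      # (placeholder for a transport simulation).
      def forward(theta):
          return np.sqrt(theta) + 0.1 * theta

      # Synthetic measurements of the observable
      y_obs = forward(theta_true) + rng.normal(0.0, noise_sd, size=8)

      # Grid-based posterior: prior x likelihood, normalized over the grid
      theta_grid = np.linspace(0.1, 6.0, 600)
      dtheta = theta_grid[1] - theta_grid[0]
      prior = np.exp(-0.5 * ((theta_grid - 3.0) / 1.5) ** 2)            # assumed Gaussian prior
      log_like = np.array([
          -0.5 * np.sum((y_obs - forward(t)) ** 2) / noise_sd**2 for t in theta_grid
      ])
      post = prior * np.exp(log_like - log_like.max())
      post /= post.sum() * dtheta

      print("posterior mean:", np.sum(theta_grid * post) * dtheta)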

  18. Ultrasonic degradation of polymers: effect of operating parameters and intensification using additives for carboxymethyl cellulose (CMC) and polyvinyl alcohol (PVA).

    PubMed

    Mohod, Ashish V; Gogate, Parag R

    2011-05-01

    Use of ultrasound can yield polymer degradation as reflected by a significant reduction in the intrinsic viscosity or the molecular weight. The ultrasonic degradation of two water soluble polymers, viz. carboxymethyl cellulose (CMC) and polyvinyl alcohol (PVA), has been studied in the present work. The effect of different operating parameters such as the time of irradiation, the immersion depth of the horn and the solution concentration has been investigated initially using laboratory scale operation, followed by intensification studies using different additives such as air, sodium chloride and surfactant. The effect of the scale of operation has been investigated with experiments in the available reactors of different capacities, with the objective of recommending a suitable configuration for large-scale operation. The experimental results show that the viscosity of the polymer solution decreased with an increase in the ultrasonic irradiation time and approached a limiting value. Use of additives such as air, sodium chloride and surfactant helps in increasing the extent of viscosity reduction. At higher-frequency operation the viscosity reduction has been found to be negligible, possibly because of the smaller contribution of the physical effects. The viscosity reduction in the case of the ultrasonic horn has been observed to be greater than that in the other, larger-capacity reactors. A kinetic analysis of the polymer degradation process has also been performed. The present work has enabled us to understand the role of the different operating parameters in deciding the extent of viscosity reduction in polymer systems and also the controlling effects of low-frequency, high-power ultrasound with experiments on different scales of operation.

  19. Soil-related Input Parameters for the Biosphere Model

    SciTech Connect

    A. J. Smith

    2003-07-02

    This analysis is one of the technical reports containing documentation of the Environmental Radiation Model for Yucca Mountain Nevada (ERMYN), a biosphere model supporting the Total System Performance Assessment (TSPA) for the geologic repository at Yucca Mountain. The biosphere model is one of a series of process models supporting the Total System Performance Assessment (TSPA) for the Yucca Mountain repository. A graphical representation of the documentation hierarchy for the ERMYN biosphere model is presented in Figure 1-1. This figure shows the interrelationships among the products (i.e., analysis and model reports) developed for biosphere modeling, and the plan for development of the biosphere abstraction products for TSPA, as identified in the ''Technical Work Plan: for Biosphere Modeling and Expert Support'' (BSC 2003 [163602]). It should be noted that some documents identified in Figure 1-1 may be under development at the time this report is issued and therefore not available. This figure is included to provide an understanding of how this analysis report contributes to biosphere modeling in support of the license application, and is not intended to imply that access to the listed documents is required to understand the contents of this report. This report, ''Soil Related Input Parameters for the Biosphere Model'', is one of the five analysis reports that develop input parameters for use in the ERMYN model. This report is the source documentation for the six biosphere parameters identified in Table 1-1. ''The Biosphere Model Report'' (BSC 2003 [160699]) describes in detail the conceptual model as well as the mathematical model and its input parameters. The purpose of this analysis was to develop the biosphere model parameters needed to evaluate doses from pathways associated with the accumulation and depletion of radionuclides in the soil. These parameters support the calculation of radionuclide concentrations in soil from on-going irrigation and ash

  20. Parameter Estimation for Single Diode Models of Photovoltaic Modules

    SciTech Connect

    Hansen, Clifford

    2015-03-01

    Many popular models for photovoltaic system performance employ a single diode model to compute the I-V curve for a module or string of modules at given irradiance and temperature conditions. A single diode model requires a number of parameters to be estimated from measured I-V curves. Many available parameter estimation methods use only short-circuit, open-circuit and maximum power points for a single I-V curve at standard test conditions together with temperature coefficients determined separately for individual cells. In contrast, module testing frequently records I-V curves over a wide range of irradiance and temperature conditions which, when available, should also be used to parameterize the performance model. We present a parameter estimation method that makes use of a full range of available I-V curves. We verify the accuracy of the method by recovering known parameter values from simulated I-V curves. We validate the method by estimating model parameters for a module using outdoor test data and predicting the outdoor performance of the module.
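
    To make the single diode model concrete, the sketch below evaluates an I-V curve by solving the implicit single-diode equation, I = Iph - I0*[exp((V + I*Rs)/(n*Vt)) - 1] - (V + I*Rs)/Rsh, with a bracketed root finder. The five parameter values are illustrative single-cell numbers, not results from the report, and root finding is just one common way to evaluate the model (an explicit Lambert-W form is another).

      import numpy as np
      from scipy.optimize import brentq

      # Illustrative single-cell parameters (assumed, not from the report)
      I_ph, I_0, n, R_s, R_sh = 5.0, 1e-9, 1.3, 0.01, 100.0
      V_t = 0.02585  # thermal voltage at ~300 K, in volts

      def diode_current(V):
          """Solve the implicit single-diode equation for current I at voltage V."""
          def f(I):
              return I_ph - I_0 * np.expm1((V + I * R_s) / (n * V_t)) - (V + I * R_s) / R_sh - I
          return brentq(f, -1.0, I_ph + 1.0)

      V_oc_est = n * V_t * np.log(I_ph / I_0 + 1.0)       # rough open-circuit voltage
      V = np.linspace(0.0, V_oc_est, 200)
      I = np.array([diode_current(v) for v in V])
      P = V * I

      print(f"Isc ~ {I[0]:.3f} A, Voc ~ {V_oc_est:.3f} V, Pmax ~ {P.max():.3f} W")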

  1. Modeling smectic layers in confined geometries: order parameter and defects.

    PubMed

    Pevnyi, Mykhailo Y; Selinger, Jonathan V; Sluckin, Timothy J

    2014-09-01

    We identify problems with the standard complex order parameter formalism for smectic-A (SmA) liquid crystals and discuss possible alternative descriptions of smectic order. In particular, we suggest an approach based on the real smectic density variation rather than a complex order parameter. This approach gives reasonable numerical results for the smectic layer configuration and director field in sample geometries and can be used to model smectic liquid crystals under nanoscale confinement for technological applications.

  2. Parameter selection and testing the soil water model SOIL

    NASA Astrophysics Data System (ADS)

    McGechan, M. B.; Graham, R.; Vinten, A. J. A.; Douglas, J. T.; Hooda, P. S.

    1997-08-01

    The soil water and heat simulation model SOIL was tested for its suitability to study the processes of transport of water in soil. Required parameters, particularly soil hydraulic parameters, were determined by field and laboratory tests for some common soil types and for soils subjected to contrasting treatments of long-term grassland and tilled land under cereal crops. Outputs from simulations were shown to be in reasonable agreement with independently measured field drain outflows and soil water content histories.

  3. Simultaneous estimation of parameters in the bivariate Emax model.

    PubMed

    Magnusdottir, Bergrun T; Nyquist, Hans

    2015-12-10

    In this paper, we explore inference in multi-response, nonlinear models. By multi-response, we mean models with m > 1 response variables and accordingly m relations. Each parameter/explanatory variable may appear in one or more of the relations. We study a system estimation approach for simultaneous computation and inference of the model and (co)variance parameters. For illustration, we fit a bivariate Emax model to diabetes dose-response data. Further, the bivariate Emax model is used in a simulation study that compares the system estimation approach to equation-by-equation estimation. We conclude that overall, the system estimation approach performs better for the bivariate Emax model when there are dependencies among relations. The stronger the dependencies, the more we gain in precision by using system estimation rather than equation-by-equation estimation.
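
    The system (joint) estimation idea above can be sketched with a simple bivariate Emax example: residuals from both response relations are stacked and minimized together, so that shared parameters are informed by both responses. The Emax form E = E0 + Emax*D/(ED50 + D), the shared ED50, and the simulated dose-response data below are assumptions made only for illustration.

      import numpy as np
      from scipy.optimize import least_squares

      rng = np.random.default_rng(7)

      def emax(dose, e0, emax_, ed50):
          return e0 + emax_ * dose / (ed50 + dose)

      # Simulated bivariate dose-response data (illustrative only); the two
      # responses are assumed to share a common ED50.
      dose = np.array([0.0, 1.0, 2.5, 5.0, 10.0, 20.0, 40.0])
      true = dict(e0_1=1.0, emax_1=8.0, e0_2=0.5, emax_2=4.0, ed50=6.0)
      y1 = emax(dose, true["e0_1"], true["emax_1"], true["ed50"]) + rng.normal(0, 0.3, dose.size)
      y2 = emax(dose, true["e0_2"], true["emax_2"], true["ed50"]) + rng.normal(0, 0.2, dose.size)

      def stacked_residuals(theta):
          e0_1, emax_1, e0_2, emax_2, ed50 = theta
          r1 = y1 - emax(dose, e0_1, emax_1, ed50)
          r2 = y2 - emax(dose, e0_2, emax_2, ed50)
          return np.concatenate([r1, r2])   # system estimation: both relations at once

      fit = least_squares(stacked_residuals, x0=[0.5, 5.0, 0.5, 2.0, 3.0])
      print("estimated parameters:", np.round(fit.x, 2))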

  4. Inelastic properties of magnetorheological composites: II. Model, identification of parameters

    NASA Astrophysics Data System (ADS)

    Kaleta, Jerzy; Lewandowski, Daniel; Zietek, Grazyna

    2007-10-01

    As a result of a two-part research project the inelastic properties of a selected group of magnetorheological composites in cyclic shear conditions have been identified. In the first part the fabrication of the composites, their structure, the control-measurement setup, the test methods and the experimental results were described. In the second part (presented here), the experimental data are used to construct a constitutive model and identify it. A four-parameter model of an elastic/viscoplastic body was adopted for description. The model coefficients were made dependent on magnetic field strength H. The model was analysed and procedures for its identification were designed. Two-phase identification of the model parameters was carried out. The model has been shown to be valid in a frequency range above 5 Hz.

  5. Application of physical parameter identification to finite-element models

    NASA Technical Reports Server (NTRS)

    Bronowicki, Allen J.; Lukich, Michael S.; Kuritz, Steven P.

    1987-01-01

    The time domain parameter identification method described previously is applied to TRW's Large Space Structure Truss Experiment. Only control sensors and actuators are employed in the test procedure. The fit of the linear structural model to the test data is improved by more than an order of magnitude using a physically reasonable parameter set. The electro-magnetic control actuators are found to contribute significant damping due to a combination of eddy current and back electro-motive force (EMF) effects. Uncertainties in both estimated physical parameters and modal behavior variables are given.

  6. Estimation of nonlinear pilot model parameters including time delay.

    NASA Technical Reports Server (NTRS)

    Schiess, J. R.; Roland, V. R.; Wells, W. R.

    1972-01-01

    Investigation of the feasibility of using a Kalman filter estimator for the identification of unknown parameters in nonlinear dynamic systems with a time delay. The problem considered is the application of estimation theory to determine the parameters of a family of pilot models containing delayed states. In particular, the pilot-plant dynamics are described by differential-difference equations of the retarded type. The pilot delay, included as one of the unknown parameters to be determined, is kept in pure form as opposed to the Pade approximations generally used for these systems. Problem areas associated with processing real pilot response data are included in the discussion.

  7. Effects of anodizing parameters and heat treatment on nanotopographical features, bioactivity, and cell culture response of additively manufactured porous titanium.

    PubMed

    Amin Yavari, S; Chai, Y C; Böttger, A J; Wauthle, R; Schrooten, J; Weinans, H; Zadpoor, A A

    2015-06-01

    Anodizing could be used for bio-functionalization of the surfaces of titanium alloys. In this study, we use anodizing for creating nanotubes on the surface of porous titanium alloy bone substitutes manufactured using selective laser melting. Different sets of anodizing parameters (voltage: 10 or 20 V; anodizing time: 30 min to 3 h) are used for anodizing porous titanium structures that were later heat treated at 500°C. The nanotopographical features are examined using electron microscopy, while the bioactivity of anodized surfaces is measured using immersion tests in simulated body fluid (SBF). Moreover, the effects of anodizing and heat treatment on the performance of one representative anodized porous titanium structure are evaluated using in vitro cell culture assays using human periosteum-derived cells (hPDCs). It has been shown that while anodizing with different anodizing parameters results in very different nanotopographical features, i.e. nanotubes in the range of 20 to 55 nm, anodized surfaces have limited apatite-forming ability regardless of the applied anodizing parameters. The results of in vitro cell culture show that both anodizing, and thus generation of regular nanotopographical features, and heat treatment improve the cell culture response of porous titanium. In particular, cell proliferation measured using metabolic activity and DNA content was improved for anodized and heat-treated as well as for anodized but not heat-treated specimens. Heat treatment additionally improved the cell attachment of porous titanium surfaces and upregulated expression of osteogenic markers. Anodized but not heat-treated specimens showed some limited signs of upregulated expression of osteogenic markers. In conclusion, while varying the anodizing parameters creates different nanotube structures, it does not improve the apatite-forming ability of porous titanium. However, both anodizing and heat treatment at 500°C improve the cell culture response of porous titanium.

  9. Parameter fitting for piano sound synthesis by physical modeling

    NASA Astrophysics Data System (ADS)

    Bensa, Julien; Gipouloux, Olivier; Kronland-Martinet, Richard

    2005-07-01

    A difficult issue in the synthesis of piano tones by physical models is to choose the values of the parameters governing the hammer-string model. In fact, these parameters are hard to estimate from static measurements, causing the synthesis sounds to be unrealistic. An original approach that estimates the parameters of a piano model, from the measurement of the string vibration, by minimizing a perceptual criterion is proposed. The minimization process that was used is a combination of a gradient method and a simulated annealing algorithm, in order to avoid convergence problems in case of multiple local minima. The criterion, based on the tristimulus concept, takes into account the spectral energy density in three bands, each allowing particular parameters to be estimated. The optimization process has been run on signals measured on an experimental setup. The parameters thus estimated provided a better sound quality than the one obtained using a global energetic criterion. Both the sound's attack and its brightness were better preserved. This quality gain was obtained for parameter values very close to the initial ones, showing that only slight deviations are necessary to make synthetic sounds closer to the real ones.

  10. Advanced parameter retrievals for metamaterial slabs using an inhomogeneous model

    NASA Astrophysics Data System (ADS)

    Li Hou, Ling; Chin, Jessie Yao; Yang, Xin Mi; Lin, Xian Qi; Liu, Ruopeng; Xu, Fu Yong; Cui, Tie Jun

    2008-03-01

    The S-parameter retrieval has proved to be an efficient approach to obtain electromagnetic parameters of metamaterials from reflection and transmission coefficients, where a slab of metamaterial with finite thickness is regarded as a homogeneous medium slab with the same thickness [D. R. Smith and S. Schultz, Phys. Rev. B 65, 195104 (2002)]. However, metamaterial structures composed of subwavelength unit cells are different from homogeneous materials, and the conventional retrieval method is, under certain circumstances, not accurate enough. In this paper, we propose an advanced parameter retrieval method for metamaterial slabs using an inhomogeneous model. Due to the coupling effects of unit cells in a metamaterial slab, the roles of edge and inner cells in the slab are different. Hence, the corresponding equivalent medium parameters are different, which results in the inhomogeneous property of the metamaterial slab. We propose the retrievals of medium parameters for edge and inner cells from S parameters by considering two- and three-cell metamaterial slabs, respectively. Then we set up an inhomogeneous three-layer model for arbitrary metamaterial slabs, which is much more accurate than the conventional homogeneous model. Numerical simulations verify the above conclusions.

  11. Control of the SCOLE configuration using distributed parameter models

    NASA Technical Reports Server (NTRS)

    Hsiao, Min-Hung; Huang, Jen-Kuang

    1994-01-01

    A continuum model for the SCOLE configuration has been derived using transfer matrices. Controller designs for distributed parameter systems have been analyzed. Pole-assignment controller design is considered easy to implement but stability is not guaranteed. An explicit transfer function of dynamic controllers has been obtained and no model reduction is required before the controller is realized. One specific LQG controller for continuum models had been derived, but other optimal controllers for more general performances need to be studied.

  12. Accuracy in Parameter Estimation for Targeted Effects in Structural Equation Modeling: Sample Size Planning for Narrow Confidence Intervals

    ERIC Educational Resources Information Center

    Lai, Keke; Kelley, Ken

    2011-01-01

    In addition to evaluating a structural equation model (SEM) as a whole, often the model parameters are of interest and confidence intervals for those parameters are formed. Given a model with a good overall fit, it is entirely possible for the targeted effects of interest to have very wide confidence intervals, thus giving little information about…

  13. Quantifying the parameters of Prusiner's heterodimer model for prion replication

    NASA Astrophysics Data System (ADS)

    Li, Z. R.; Liu, G. R.; Mi, D.

    2005-02-01

    A novel approach for the determination of parameters in prion replication kinetics is developed based on Prusiner's heterodimer model. It is proposed to employ a simple 2D HP lattice model and a two-state transition theory to determine the kinetic parameters that play the key role in the prion replication process. The simulation results reproduce the most important features observed in prion diseases, including the long incubation time, rapid death following symptom manifestation, the effect of inoculation size, the different mechanisms of the familial and infectious prion diseases, etc. Extensive simulation with various thermodynamic parameters shows that Prusiner's heterodimer model is applicable and that the putative protein X plays a critical role in prion replication.

  14. Utilizing Soize's Approach to Identify Parameter and Model Uncertainties

    SciTech Connect

    Bonney, Matthew S.; Brake, Matthew Robert

    2014-10-01

    Quantifying uncertainty in model parameters is a challenging task for analysts. Soize has derived a method that is able to characterize both model and parameter uncertainty independently. This method is explained with the assumption that some experimental data is available, and is divided into seven steps. Monte Carlo analyses are performed to select the optimal dispersion variable to match the experimental data. Along with the nominal approach, an alternative distribution can be used along with corrections that can be utilized to expand the scope of this method. This method is one of a very few methods that can quantify uncertainty in the model form independently of the input parameters. Two examples are provided to illustrate the methodology, and example code is provided in the Appendix.

  15. Bayesian parameter inference and model selection by population annealing in systems biology.

    PubMed

    Murakami, Yohei

    2014-01-01

    Parameter inference and model selection are very important for mathematical modeling in systems biology. Bayesian statistics can be used to conduct both parameter inference and model selection. In particular, the framework named approximate Bayesian computation is often used for parameter inference and model selection in systems biology. However, Monte Carlo methods need to be used to compute Bayesian posterior distributions. In addition, the posterior distributions of parameters are sometimes almost uniform or very similar to their prior distributions. In such cases, it is difficult to choose one specific value of a parameter with high credibility as the representative value of the distribution. To overcome these problems, we introduced one of the population Monte Carlo algorithms, population annealing. Although population annealing is usually used in statistical mechanics, we showed that population annealing can be used to compute Bayesian posterior distributions in the approximate Bayesian computation framework. To deal with the non-identifiability of the representative values of parameters, we proposed to run the simulations with the parameter ensemble sampled from the posterior distribution, named the "posterior parameter ensemble". We showed that population annealing is an efficient and convenient algorithm for generating a posterior parameter ensemble. We also showed that simulations with the posterior parameter ensemble can not only reproduce the data used for parameter inference but also capture and predict data that were not used for parameter inference. Lastly, we introduced the marginal likelihood in the approximate Bayesian computation framework for Bayesian model selection. We showed that population annealing enables us to compute the marginal likelihood in the approximate Bayesian computation framework and conduct model selection depending on the Bayes factor.
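
    For readers unfamiliar with the approximate Bayesian computation framework mentioned above, the sketch below shows a basic ABC rejection sampler on a toy problem. This is deliberately the simplest ABC variant, not population annealing, and the exponential toy model, prior, summary statistic, and tolerance are all illustrative assumptions.

      import numpy as np

      rng = np.random.default_rng(3)

      # Toy system: observe the mean of data generated with an unknown rate parameter k.
      k_true = 0.8
      data_obs = rng.exponential(1.0 / k_true, size=50)
      summary_obs = data_obs.mean()

      def simulate(k, size=50):
          return rng.exponential(1.0 / k, size=size)

      # Basic ABC rejection: sample from the prior, keep parameters whose simulated
      # summary statistic falls within a tolerance of the observed one.
      n_draws, tol = 20_000, 0.05
      k_prior = rng.uniform(0.1, 3.0, size=n_draws)
      accepted = [k for k in k_prior if abs(simulate(k).mean() - summary_obs) < tol]

      posterior = np.array(accepted)
      print(f"accepted {posterior.size} draws; posterior mean ~ {posterior.mean():.2f}")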

  16. SPOTting model parameters using a ready-made Python package

    NASA Astrophysics Data System (ADS)

    Houska, Tobias; Kraft, Philipp; Breuer, Lutz

    2015-04-01

    The selection and parameterization of reliable process descriptions in ecological modelling is driven by several uncertainties. The procedure is highly dependent on various criteria, like the algorithm used, the likelihood function selected and the definition of the prior parameter distributions. A wide variety of tools have been developed in the past decades to optimize parameters. Some of the tools are closed source. Due to this, the choice for a specific parameter estimation method is sometimes more dependent on its availability than on its performance. A toolbox with a large set of methods can support users in deciding about the most suitable method. Further, it enables testing and comparing different methods. We developed SPOT (Statistical Parameter Optimization Tool), an open source Python package containing a comprehensive set of modules to analyze and optimize parameters of (environmental) models. SPOT comes along with a selected set of algorithms for parameter optimization and uncertainty analyses (Monte Carlo, MC; Latin Hypercube Sampling, LHS; Maximum Likelihood Estimation, MLE; Markov Chain Monte Carlo, MCMC; Shuffled Complex Evolution, SCE-UA; Differential Evolution Markov Chain, DE-MCZ), together with several likelihood functions (Bias, (log-) Nash-Sutcliffe model efficiency, Correlation Coefficient, Coefficient of Determination, Covariance, (Decomposed-, Relative-, Root-) Mean Squared Error, Mean Absolute Error, Agreement Index) and prior distributions (Binomial, Chi-Square, Dirichlet, Exponential, Laplace, (log-, multivariate-) Normal, Pareto, Poisson, Cauchy, Uniform, Weibull) to sample from. The model-independent structure makes it suitable for analyzing a wide range of applications. We apply all algorithms of the SPOT package in three different case studies. Firstly, we investigate the response of the Rosenbrock function, where the MLE algorithm shows its strengths. Secondly, we study the Griewank function, which has a challenging response surface for
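
    As a generic illustration of the kind of sampling-based parameter estimation such a toolbox automates (this is not the SPOT API), the sketch below draws Latin Hypercube samples over a two-parameter space and ranks them by the Nash-Sutcliffe efficiency of a toy model against synthetic observations. The toy model, parameter bounds, and noise level are assumptions for demonstration.

      import numpy as np
      from scipy.stats import qmc

      rng = np.random.default_rng(11)

      def toy_model(a, b, x):
          """Placeholder 'environmental model' with two parameters."""
          return a * np.sin(x) + b * x

      def nse(sim, obs):
          """Nash-Sutcliffe model efficiency."""
          return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

      # Synthetic observations from assumed 'true' parameters plus noise
      x = np.linspace(0.0, 10.0, 100)
      obs = toy_model(2.0, 0.5, x) + rng.normal(0.0, 0.3, x.size)

      # Latin Hypercube sampling of the parameter space
      bounds_low, bounds_high = [0.0, 0.0], [5.0, 2.0]
      samples = qmc.scale(qmc.LatinHypercube(d=2, seed=11).random(n=500),
                          bounds_low, bounds_high)

      scores = np.array([nse(toy_model(a, b, x), obs) for a, b in samples])
      best = samples[np.argmax(scores)]
      print(f"best NSE = {scores.max():.3f} at a = {best[0]:.2f}, b = {best[1]:.2f}")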

  17. Synchronous Generator Model Parameter Estimation Based on Noisy Dynamic Waveforms

    NASA Astrophysics Data System (ADS)

    Berhausen, Sebastian; Paszek, Stefan

    2016-01-01

    In recent years, system failures have occurred in many power systems all over the world, resulting in a loss of power supply to a large number of customers. To minimize the risk of power failures, it is necessary to perform multivariate investigations, including simulations, of power system operating conditions. Reliable simulations require a current base of parameters for the models of generating units, including the models of synchronous generators. This paper presents a method for parameter estimation of a nonlinear synchronous generator model based on the analysis of selected transient waveforms caused by introducing a disturbance (in the form of a pseudorandom signal) in the generator voltage regulation channel. The parameter estimation was performed by minimizing an objective function defined as the mean square error of the deviations between the measured waveforms and the waveforms calculated from the generator mathematical model. A hybrid algorithm was used for the minimization of the objective function. A filter system used for filtering the noisy measurement waveforms is also described. Calculation results are given for the model of a 44 kW synchronous generator installed on a laboratory stand of the Institute of Electrical Engineering and Computer Science of the Silesian University of Technology. The presented estimation method can be successfully applied to parameter estimation of different models of high-power synchronous generators operating in a power system.

  18. [Parameter uncertainty analysis for urban rainfall runoff modelling].

    PubMed

    Huang, Jin-Liang; Lin, Jie; Du, Peng-Fei

    2012-07-01

    An urban watershed in Xiamen was selected for parameter uncertainty analysis of urban stormwater runoff modeling, in terms of parameter identifiability and sensitivity, based on the storm water management model (SWMM) using Monte Carlo sampling and the regionalized sensitivity analysis (RSA) algorithm. Results show that Dstore-Imperv, Dstore-Perv and Curve Number (CN) are the identifiable parameters with the larger K-S values in the hydrological and hydraulic modules, with the K-S values ranked as Dstore-Imperv > CN > Dstore-Perv > N-Perv > conductivity > Con-Mann > N-Imperv. With regard to the water quality module, the Coefficient and Exponent parameters of the exponential washoff model and the Max. Buildup parameter of the saturation buildup model in the three land cover types are the identifiable parameters with the larger K-S values. In comparison, the K-S value of the rate constant in the three land use/cover types is smaller than those of Max. Buildup, Coefficient and Exponent.
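
    Regionalized sensitivity analysis of the kind applied above splits the Monte Carlo parameter samples into behavioural and non-behavioural sets and uses the Kolmogorov-Smirnov (K-S) statistic between the two sets as the sensitivity measure for each parameter. The sketch below illustrates the procedure with a toy two-parameter model; the model, parameter ranges, and acceptance threshold are assumptions and do not represent SWMM.

      import numpy as np
      from scipy.stats import ks_2samp

      rng = np.random.default_rng(5)

      def toy_runoff_model(depression_storage, roughness):
          """Placeholder runoff metric; only the first parameter really matters here."""
          return 10.0 - 2.0 * depression_storage + 0.1 * roughness + rng.normal(0.0, 0.2)

      # Monte Carlo sampling of the parameter space (illustrative ranges)
      n = 5000
      params = {
          "depression_storage": rng.uniform(0.0, 5.0, n),
          "roughness": rng.uniform(0.01, 0.5, n),
      }
      output = np.array([toy_runoff_model(d, r)
                         for d, r in zip(params["depression_storage"], params["roughness"])])

      # Behavioural = simulations that meet an (assumed) acceptance criterion
      behavioural = output > 5.0

      # RSA: K-S distance between behavioural and non-behavioural parameter samples;
      # larger values indicate more identifiable / sensitive parameters.
      for name, values in params.items():
          stat, _ = ks_2samp(values[behavioural], values[~behavioural])
          print(f"{name:20s} K-S = {stat:.3f}")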

  19. Optimizing Muscle Parameters in Musculoskeletal Modeling Using Monte Carlo Simulations

    NASA Technical Reports Server (NTRS)

    Hanson, Andrea; Reed, Erik; Cavanagh, Peter

    2011-01-01

    Astronauts assigned to long-duration missions experience bone and muscle atrophy in the lower limbs. The use of musculoskeletal simulation software has become a useful tool for modeling joint and muscle forces during human activity in reduced gravity as access to direct experimentation is limited. Knowledge of muscle and joint loads can better inform the design of exercise protocols and exercise countermeasure equipment. In this study, the LifeModeler(TM) (San Clemente, CA) biomechanics simulation software was used to model a squat exercise. The initial model using default parameters yielded physiologically reasonable hip-joint forces. However, no activation was predicted in some large muscles such as rectus femoris, which have been shown to be active in 1-g performance of the activity. Parametric testing was conducted using Monte Carlo methods and combinatorial reduction to find a muscle parameter set that more closely matched physiologically observed activation patterns during the squat exercise. Peak hip joint force using the default parameters was 2.96 times body weight (BW) and increased to 3.21 BW in an optimized, feature-selected test case. The rectus femoris was predicted to peak at 60.1% activation following muscle recruitment optimization, compared to 19.2% activation with default parameters. These results indicate the critical role that muscle parameters play in joint force estimation and the need for exploration of the solution space to achieve physiologically realistic muscle activation.

  20. A dynamic growth model of Dunaliella salina: parameter identification and profile likelihood analysis.

    PubMed

    Fachet, Melanie; Flassig, Robert J; Rihko-Struckmann, Liisa; Sundmacher, Kai

    2014-12-01

    In this work, a photoautotrophic growth model incorporating light and nutrient effects on growth and pigmentation of Dunaliella salina was formulated. The model equations were taken from literature and modified according to the experimental setup with special emphasis on model reduction. The proposed model has been evaluated with experimental data of D. salina cultivated in a flat-plate photobioreactor under stressed and non-stressed conditions. Simulation results show that the model can represent the experimental data accurately. The identifiability of the model parameters was studied using the profile likelihood method. This analysis revealed that three model parameters are practically non-identifiable. However, some of these non-identifiabilities can be resolved by model reduction and additional measurements. As a conclusion, our results suggest that the proposed model equations result in a predictive growth model for D. salina.

  1. Atomic modeling of cryo-electron microscopy reconstructions--joint refinement of model and imaging parameters.

    PubMed

    Chapman, Michael S; Trzynka, Andrew; Chapman, Brynmor K

    2013-04-01

    When refining the fit of component atomic structures into electron microscopic reconstructions, use of a resolution-dependent atomic density function makes it possible to jointly optimize the atomic model and imaging parameters of the microscope. Atomic density is calculated by one-dimensional Fourier transform of atomic form factors convoluted with a microscope envelope correction and a low-pass filter, allowing refinement of imaging parameters such as resolution, by optimizing the agreement of calculated and experimental maps. A similar approach allows refinement of atomic displacement parameters, providing indications of molecular flexibility even at low resolution. A modest improvement in atomic coordinates is possible following optimization of these additional parameters. Methods have been implemented in a Python program that can be used in stand-alone mode for rigid-group refinement, or embedded in other optimizers for flexible refinement with stereochemical restraints. The approach is demonstrated with refinements of virus and chaperonin structures at resolutions of 9 through 4.5 Å, representing regimes where rigid-group and fully flexible parameterizations are appropriate. Through comparisons to known crystal structures, flexible fitting by RSRef is shown to be an improvement relative to other methods and to generate models with all-atom rms accuracies of 1.5-2.5 Å at resolutions of 4.5-6 Å.

  2. Calculating slope and ED50 of additive dose-response curves, and application of these tabulated parameter values.

    PubMed

    Pöch, G; Pancheva, S N

    1995-06-01

    Comparing dose-response curves (DRCs) of a compound A in the absence and presence of a fixed dose of an antagonist B is standard in pharmacology and toxicology. When B qualitatively resembles A in its action, it is often useful to construct theoretical DRCs of additive and independent combinations. Theoretical curves are calculated from experimental values by the program ALLFIT, which uses the four-parameter logistic equation. Theoretical additive DRCs are obtained by using the respective values for slope and ED50, taken from the tables presented here, which were compiled on the basis of the slope of the DRC of A alone (0.6-14) and of the effect of B alone (1-75%). These tables are unnecessary for the construction of theoretical curves if A acts by an independent mechanism, giving values for slope and ED50 identical to those of the DRC of A alone. Experimental DRCs of antiviral and other effects (the latter taken from data in the literature) are compared with theoretical curves by an F-test analysis provided by ALLFIT. The method can be used successfully for the construction of theoretical curves for additive and independent DRCs and for comparison with experimental curves. This comparison may help clarify the mode of interaction of A with B. PMID:7640393
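
    For reference, the four-parameter logistic equation used by ALLFIT can be written, in one common parameterization, as E(D) = Emin + (Emax - Emin)/(1 + (ED50/D)^slope). The sketch below fits this equation to a hypothetical dose-response data set with SciPy as an illustrative stand-in for ALLFIT; the simulated data and starting values are assumptions.

      import numpy as np
      from scipy.optimize import curve_fit

      rng = np.random.default_rng(9)

      def four_pl(dose, e0, emax, ed50, slope):
          """Four-parameter logistic dose-response curve."""
          return e0 + (emax - e0) / (1.0 + (ed50 / dose) ** slope)

      # Hypothetical dose-response data for compound A alone (illustrative only)
      dose = np.array([0.1, 0.3, 1.0, 3.0, 10.0, 30.0, 100.0])
      effect = four_pl(dose, 2.0, 95.0, 8.0, 1.2) + rng.normal(0.0, 3.0, dose.size)

      p0 = [0.0, 100.0, 5.0, 1.0]                      # starting values (assumed)
      popt, pcov = curve_fit(four_pl, dose, effect, p0=p0)
      e0, emax, ed50, slope = popt
      print(f"slope = {slope:.2f}, ED50 = {ed50:.2f}")
      print("standard errors:", np.round(np.sqrt(np.diag(pcov)), 2))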

  3. Estimation of the parameters of ETAS models by Simulated Annealing.

    PubMed

    Lombardi, Anna Maria

    2015-02-12

    This paper proposes a new algorithm to estimate the maximum likelihood parameters of an Epidemic Type Aftershock Sequences (ETAS) model. It is based on Simulated Annealing, a versatile method that solves problems of global optimization and ensures convergence to a global optimum. The procedure is tested on both simulated and real catalogs. The main conclusion is that the method performs poorly as the size of the catalog decreases because the effect of the correlation of the ETAS parameters is more significant. These results give new insights into the ETAS model and the efficiency of the maximum-likelihood method within this context.

  4. Estimation of the parameters of ETAS models by Simulated Annealing

    PubMed Central

    Lombardi, Anna Maria

    2015-01-01

    This paper proposes a new algorithm to estimate the maximum likelihood parameters of an Epidemic Type Aftershock Sequences (ETAS) model. It is based on Simulated Annealing, a versatile method that solves problems of global optimization and ensures convergence to a global optimum. The procedure is tested on both simulated and real catalogs. The main conclusion is that the method performs poorly as the size of the catalog decreases because the effect of the correlation of the ETAS parameters is more significant. These results give new insights into the ETAS model and the efficiency of the maximum-likelihood method within this context. PMID:25673036
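
    To give a flavour of simulated-annealing-based maximum likelihood estimation in a setting far simpler than ETAS, the sketch below minimizes a negative log-likelihood with SciPy's dual_annealing, a generalized simulated annealing routine. The gamma-distributed toy data and the parameter bounds are illustrative assumptions; the ETAS likelihood itself is considerably more involved.

      import numpy as np
      from scipy.optimize import dual_annealing
      from scipy.stats import gamma

      rng = np.random.default_rng(13)

      # Synthetic data from an assumed 'true' gamma model (shape=2.0, scale=1.5)
      data = rng.gamma(shape=2.0, scale=1.5, size=300)

      def neg_log_likelihood(theta):
          shape, scale = theta
          return -np.sum(gamma.logpdf(data, a=shape, scale=scale))

      # Simulated annealing over box bounds for (shape, scale)
      bounds = [(0.1, 10.0), (0.1, 10.0)]
      result = dual_annealing(neg_log_likelihood, bounds, seed=13)
      print("ML estimates (shape, scale):", np.round(result.x, 3))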

  5. Extrinsic parameter extraction and RF modelling of CMOS

    NASA Astrophysics Data System (ADS)

    Alam, M. S.; Armstrong, G. A.

    2004-05-01

    An analytical approach for CMOS parameter extraction that includes the effect of parasitic resistances is presented. The method is based on a small-signal equivalent circuit valid in all regions of operation and uniquely extracts the extrinsic resistances, which can be used to extend the industry-standard BSIM3v3 MOSFET model for radio frequency applications. The model was verified through frequency-domain measurements of S-parameters and direct time-domain measurements at 2.4 GHz in a large-signal, non-linear mode of operation.

  6. NB-PLC channel modelling with cyclostationary noise addition & OFDM implementation for smart grid

    NASA Astrophysics Data System (ADS)

    Thomas, Togis; Gupta, K. K.

    2016-03-01

    Power line communication (PLC) technology can be a viable solution for future ubiquitous networks because it provides a cheaper alternative to other wired technologies currently used for communication. In the smart grid, PLC is used to support low-rate communication on the low voltage (LV) distribution network. In this paper, we propose a channel model for narrowband (NB) PLC in the frequency range 5 kHz to 500 kHz using ABCD parameters with cyclostationary noise addition. The behaviour of the channel was studied by adding an 11 kV/230 V transformer and by varying the load and its location. Bit error rate (BER) versus signal-to-noise ratio (SNR) was plotted for the proposed model by employing OFDM. Our simulation results based on the proposed channel model show acceptable performance in terms of BER versus SNR, which enables the communication required for smart grid applications.
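
    The ABCD (transmission) parameter approach mentioned above lends itself to a compact sketch: each line section and each shunt branch is a 2x2 ABCD matrix, the end-to-end matrix is their ordered product, and the voltage transfer function follows from the source and load impedances as H = ZL / (A*ZL + B + ZS*(C*ZL + D)). The line constants, network layout, and impedances below are assumed illustrative values, not a calibrated NB-PLC channel.

      import numpy as np

      def line_abcd(gamma, z_c, length):
          """ABCD matrix of a uniform transmission-line section."""
          gl = gamma * length
          return np.array([[np.cosh(gl), z_c * np.sinh(gl)],
                           [np.sinh(gl) / z_c, np.cosh(gl)]])

      def shunt_abcd(z_shunt):
          """ABCD matrix of a shunt impedance (e.g., a tapped load)."""
          return np.array([[1.0, 0.0], [1.0 / z_shunt, 1.0]])

      def transfer_function(abcd, z_source, z_load):
          """Voltage transfer V_load / V_source for a two-port between Zs and Zl."""
          A, B = abcd[0]
          C, D = abcd[1]
          return z_load / (A * z_load + B + z_source * (C * z_load + D))

      # Illustrative (assumed) line constants, layout, and terminations
      freqs = np.linspace(5e3, 500e3, 200)
      z_source, z_load = 50.0, 10.0
      H = []
      for f in freqs:
          w = 2.0 * np.pi * f
          gamma = 3e-6 * np.sqrt(w) + 1j * w / 2e8   # assumed attenuation and phase constants
          z_c = 50.0                                  # assumed characteristic impedance (ohms)
          abcd = line_abcd(gamma, z_c, 200.0) @ shunt_abcd(30.0) @ line_abcd(gamma, z_c, 150.0)
          H.append(transfer_function(abcd, z_source, z_load))

      H = np.array(H)
      gain_db = 20.0 * np.log10(np.abs(H))
      print("channel gain range (dB):", round(gain_db.min(), 1), "to", round(gain_db.max(), 1))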

  7. Optimization of Parameter Selection for Partial Least Squares Model Development

    NASA Astrophysics Data System (ADS)

    Zhao, Na; Wu, Zhi-Sheng; Zhang, Qiao; Shi, Xin-Yuan; Ma, Qun; Qiao, Yan-Jiang

    2015-07-01

    In multivariate calibration using a spectral dataset, it is difficult to optimize the nonsystematic parameters of a quantitative model, i.e., the spectral pretreatment, the latent factors and the variable selection. In this study, we describe a novel and systematic approach that uses a processing trajectory to select three parameters: the spectral pretreatment, variable importance in the projection (VIP) for variable selection, and the number of latent factors in the Partial Least Squares (PLS) model. The root mean square error of calibration (RMSEC), the root mean square error of prediction (RMSEP), the ratio of standard error of prediction to standard deviation (RPD), and the determination coefficients of calibration (Rcal2) and validation (Rpre2) were simultaneously assessed to select the best modeling path. We used three different near-infrared (NIR) datasets, which illustrated that there was more than one modeling path that ensures good modeling. The PLS model parameters are optimized step-by-step, and the robust approach described here demonstrates better efficiency than previously published methods.
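
    One of the parameters discussed above, the number of latent factors in the PLS model, is commonly chosen by cross-validated error. The sketch below illustrates that single step with scikit-learn on synthetic spectra; the data, candidate factor range, and RMSE criterion are assumptions for illustration and do not reproduce the full pretreatment/VIP/latent-factor trajectory of the paper.

      import numpy as np
      from sklearn.cross_decomposition import PLSRegression
      from sklearn.model_selection import cross_val_predict

      rng = np.random.default_rng(21)

      # Synthetic 'spectra': 120 samples x 200 wavelengths, with the response driven
      # by a few latent directions plus noise (illustrative only).
      n_samples, n_wavelengths = 120, 200
      latent = rng.normal(size=(n_samples, 3))
      loadings = rng.normal(size=(3, n_wavelengths))
      X = latent @ loadings + rng.normal(scale=0.1, size=(n_samples, n_wavelengths))
      y = latent @ np.array([2.0, -1.0, 0.5]) + rng.normal(scale=0.2, size=n_samples)

      # Choose the number of latent factors by cross-validated RMSE
      rmse = {}
      for n_comp in range(1, 11):
          pls = PLSRegression(n_components=n_comp)
          y_cv = cross_val_predict(pls, X, y, cv=5).ravel()
          rmse[n_comp] = np.sqrt(np.mean((y - y_cv) ** 2))

      best = min(rmse, key=rmse.get)
      print("cross-validated RMSE by latent factors:",
            {k: round(v, 3) for k, v in rmse.items()})
      print("selected number of latent factors:", best)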

  8. Climate change decision-making: Model & parameter uncertainties explored

    SciTech Connect

    Dowlatabadi, H.; Kandlikar, M.; Linville, C.

    1995-12-31

    A critical aspect of climate change decision-making is the uncertainty in current understanding of the socioeconomic, climatic and biogeochemical processes involved. Decision-making processes are much better informed if these uncertainties are characterized and their implications understood. Quantitative analysis of these uncertainties serves to inform decision-makers about the likely outcome of policy initiatives, and helps set priorities for research so that the outcome ambiguities faced by the decision-makers are reduced. A family of integrated assessment models of climate change has been developed at Carnegie Mellon. These models are distinguished from other integrated assessment efforts in that they were designed from the outset to characterize and propagate parameter, model, value, and decision-rule uncertainties. The most recent of these models is ICAM 2.1. This model includes representation of the processes of demographics, economic activity, emissions, atmospheric chemistry, climate and sea level change, impacts from these changes, and policies for emissions mitigation and adaptation to change. The model has over 800 objects, of which about one half are used to represent uncertainty. In this paper we show that, when considering parameter uncertainties, the relative contribution of climatic uncertainties is most important, followed by uncertainties in damage calculations, economic uncertainties and direct aerosol forcing uncertainties. When considering model structure uncertainties, we find that the choice of policy is often dominated by the model structure choice rather than by parameter uncertainties.

  9. Determination of modeling parameters for power IGBTs under pulsed power conditions

    SciTech Connect

    Dale, Gregory E; Van Gordon, Jim A; Kovaleski, Scott D

    2010-01-01

    While the power insulated gate bipolar transistor (IGBT) is used in many applications, it is not well characterized under pulsed power conditions. This makes the IGBT difficult to model for solid state pulsed power applications. The Oziemkiewicz implementation of the Hefner model is utilized to simulate IGBTs in some circuit simulation software packages. However, the seventeen parameters necessary for the Oziemkiewicz implementation must be known for the conditions under which the device will be operating. Using both experimental and simulated data with a least squares curve fitting technique, the parameters necessary to model a given IGBT can be determined. This paper presents two sets of these seventeen parameters that correspond to two different models of power IGBTs. Specifically, these parameters correspond to voltages up to 3.5 kV, currents up to 750 A, and pulse widths up to 10 µs. Additionally, comparisons of the experimental and simulated data are presented.

  10. Force Field Independent Metal Parameters Using a Nonbonded Dummy Model

    PubMed Central

    2014-01-01

    The cationic dummy atom approach provides a powerful nonbonded description for a range of alkaline-earth and transition-metal centers, capturing both structural and electrostatic effects. In this work we refine existing literature parameters for octahedrally coordinated Mn2+, Zn2+, Mg2+, and Ca2+, as well as providing new parameters for Ni2+, Co2+, and Fe2+. In all cases, we are able to reproduce both M2+–O distances and experimental solvation free energies, which has not been achieved to date for transition metals using any other model. The parameters have also been tested using two different water models and show consistent performance. Therefore, our parameters are easily transferable to any force field that describes nonbonded interactions using Coulomb and Lennard-Jones potentials. Finally, we demonstrate the stability of our parameters in both the human and Escherichia coli variants of the enzyme glyoxalase I as showcase systems, as both enzymes are active with a range of transition metals. The parameters presented in this work provide a valuable resource for the molecular simulation community, as they extend the range of metal ions that can be studied using classical approaches, while also providing a starting point for subsequent parametrization of new metal centers. PMID:24670003
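
    For readers unfamiliar with the functional form being parametrized, the sketch below evaluates a nonbonded pair energy of the Coulomb plus 12-6 Lennard-Jones type; the charges, epsilon and rmin values are illustrative placeholders, not the published dummy-model parameters.

    ```python
    # Sketch: nonbonded pair energy of the form the parameters target, i.e. a
    # Coulomb term plus a 12-6 Lennard-Jones term. The charges, epsilon and rmin
    # below are illustrative placeholders, not the published dummy-model values.
    import numpy as np

    COULOMB_CONST = 332.0637   # kcal*angstrom/(mol*e^2), a common MD convention

    def nonbonded_energy(r, q1, q2, epsilon, rmin):
        """Pair energy in kcal/mol at separation r (angstrom)."""
        lj = epsilon * ((rmin / r) ** 12 - 2.0 * (rmin / r) ** 6)
        coulomb = COULOMB_CONST * q1 * q2 / r
        return lj + coulomb

    # e.g. a dummy-site / water-oxygen pair scanned over distance (placeholder parameters)
    for r in np.arange(1.8, 3.01, 0.3):
        print(round(r, 2), nonbonded_energy(r, q1=0.5, q2=-0.834, epsilon=0.05, rmin=2.2))
    ```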

  11. Inhalation Exposure Input Parameters for the Biosphere Model

    SciTech Connect

    M. Wasiolek

    2006-06-05

    This analysis is one of the technical reports that support the Environmental Radiation Model for Yucca Mountain, Nevada (ERMYN), referred to in this report as the biosphere model. ''Biosphere Model Report'' (BSC 2004 [DIRS 169460]) describes in detail the conceptual model as well as the mathematical model and its input parameters. This report documents development of input parameters for the biosphere model that are related to atmospheric mass loading and supports the use of the model to develop biosphere dose conversion factors (BDCFs). The biosphere model is one of a series of process models supporting the total system performance assessment (TSPA) for a Yucca Mountain repository. ''Inhalation Exposure Input Parameters for the Biosphere Model'' is one of five reports that develop input parameters for the biosphere model. A graphical representation of the documentation hierarchy for the biosphere model is presented in Figure 1-1 (based on BSC 2006 [DIRS 176938]). This figure shows the interrelationships among the products (i.e., analysis and model reports) developed for biosphere modeling and how this analysis report contributes to biosphere modeling. This analysis report defines and justifies values of atmospheric mass loading for the biosphere model. Mass loading is the total mass concentration of resuspended particles (e.g., dust, ash) in a volume of air. Mass loading values are used in the air submodel of the biosphere model to calculate concentrations of radionuclides in air inhaled by a receptor and concentrations in air surrounding crops. Concentrations in air to which the receptor is exposed are then used in the inhalation submodel to calculate the dose contribution to the receptor from inhalation of contaminated airborne particles. Concentrations in air surrounding plants are used in the plant submodel to calculate the concentrations of radionuclides in foodstuffs contributed from uptake by foliar interception. This report is concerned primarily with the

  12. Considerations for parameter optimization and sensitivity in climate models.

    PubMed

    Neelin, J David; Bracco, Annalisa; Luo, Hao; McWilliams, James C; Meyerson, Joyce E

    2010-12-14

    Climate models exhibit high sensitivity in some respects, such as for differences in predicted precipitation changes under global warming. Despite successful large-scale simulations, regional climatology features prove difficult to constrain toward observations, with challenges including high-dimensionality, computationally expensive simulations, and ambiguity in the choice of objective function. In an atmospheric General Circulation Model forced by observed sea surface temperature or coupled to a mixed-layer ocean, many climatic variables yield rms-error objective functions that vary smoothly through the feasible parameter range. This smoothness occurs despite nonlinearity strong enough to reverse the curvature of the objective function in some parameters, and to imply limitations on multimodel ensemble means as an estimator of global warming precipitation changes. Low-order polynomial fits to the model output spatial fields as a function of parameter (quadratic in model field, fourth-order in objective function) yield surprisingly successful metamodels for many quantities and facilitate a multiobjective optimization approach. Tradeoffs arise as optima for different variables occur at different parameter values, but with agreement in certain directions. Optima often occur at the limit of the feasible parameter range, identifying key parameterization aspects warranting attention--here the interaction of convection with free tropospheric water vapor. Analytic results for spatial fields of leading contributions to the optimization help to visualize tradeoffs at a regional level, e.g., how mismatches between sensitivity and error spatial fields yield regional error under minimization of global objective functions. The approach is sufficiently simple to guide parameter choices and to aid intercomparison of sensitivity properties among climate models. PMID:21115841
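
    The metamodel idea can be sketched as follows, assuming a handful of precomputed (parameter value, model field, rms-error objective) triples: fit the field as a quadratic and the objective as a quartic in the parameter, then optimize on the cheap polynomial surrogate instead of rerunning the climate model. The numbers are illustrative.

    ```python
    # Sketch: low-order polynomial metamodels of climate-model output versus one
    # parameter -- quadratic for a model field, fourth order for the rms-error
    # objective -- fitted to a handful of precomputed simulations (values illustrative).
    import numpy as np

    theta = np.array([0.1, 0.3, 0.5, 0.7, 0.9])       # sampled parameter values
    field = np.array([2.1, 2.6, 2.8, 2.7, 2.3])       # e.g. a regional-mean model field
    objective = np.array([1.9, 0.8, 0.4, 0.7, 1.6])   # rms error against observations

    field_meta = np.polynomial.Polynomial.fit(theta, field, deg=2)
    obj_meta = np.polynomial.Polynomial.fit(theta, objective, deg=4)

    # cheap optimization on the metamodel instead of rerunning the climate model
    grid = np.linspace(theta.min(), theta.max(), 1001)
    theta_opt = grid[np.argmin(obj_meta(grid))]
    print(theta_opt, field_meta(theta_opt))
    ```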

  13. Considerations for parameter optimization and sensitivity in climate models

    PubMed Central

    Neelin, J. David; Bracco, Annalisa; Luo, Hao; McWilliams, James C.; Meyerson, Joyce E.

    2010-01-01

    Climate models exhibit high sensitivity in some respects, such as for differences in predicted precipitation changes under global warming. Despite successful large-scale simulations, regional climatology features prove difficult to constrain toward observations, with challenges including high-dimensionality, computationally expensive simulations, and ambiguity in the choice of objective function. In an atmospheric General Circulation Model forced by observed sea surface temperature or coupled to a mixed-layer ocean, many climatic variables yield rms-error objective functions that vary smoothly through the feasible parameter range. This smoothness occurs despite nonlinearity strong enough to reverse the curvature of the objective function in some parameters, and to imply limitations on multimodel ensemble means as an estimator of global warming precipitation changes. Low-order polynomial fits to the model output spatial fields as a function of parameter (quadratic in model field, fourth-order in objective function) yield surprisingly successful metamodels for many quantities and facilitate a multiobjective optimization approach. Tradeoffs arise as optima for different variables occur at different parameter values, but with agreement in certain directions. Optima often occur at the limit of the feasible parameter range, identifying key parameterization aspects warranting attention—here the interaction of convection with free tropospheric water vapor. Analytic results for spatial fields of leading contributions to the optimization help to visualize tradeoffs at a regional level, e.g., how mismatches between sensitivity and error spatial fields yield regional error under minimization of global objective functions. The approach is sufficiently simple to guide parameter choices and to aid intercomparison of sensitivity properties among climate models. PMID:21115841

  14. Improving weather predictability by including land-surface model parameter uncertainty

    NASA Astrophysics Data System (ADS)

    Orth, Rene; Dutra, Emanuel; Pappenberger, Florian

    2016-04-01

    The land surface forms an important component of Earth system models and interacts nonlinearly with other parts such as the ocean and atmosphere. To capture the complex and heterogeneous hydrology of the land surface, land surface models include a large number of parameters that affect the coupling to other components of the Earth system model. Focusing on ECMWF's land-surface model HTESSEL, we present in this study a comprehensive parameter sensitivity evaluation using multiple observational datasets in Europe. We select 6 poorly constrained effective parameters (surface runoff effective depth, skin conductivity, minimum stomatal resistance, maximum interception, soil moisture stress function shape, total soil depth) and explore the sensitivity of model outputs such as soil moisture, evapotranspiration and runoff to these parameters using uncoupled simulations and coupled seasonal forecasts. Additionally, we investigate the possibility of constructing ensembles from multiple land-surface parameter sets. In the uncoupled runs we find that minimum stomatal resistance and total soil depth have the most influence on model performance. Forecast skill scores are moreover sensitive to the same parameters as HTESSEL performance in the uncoupled analysis. We demonstrate the robustness of our findings by comparing multiple best-performing parameter sets and multiple randomly chosen parameter sets. We find better temperature and precipitation forecast skill with the best-performing parameter perturbations, demonstrating that model performance carries over from the uncoupled (and hence less computationally demanding) to the coupled setting. Finally, we construct ensemble forecasts from ensemble members derived with different best-performing parameterizations of HTESSEL. This incorporation of parameter uncertainty in the ensemble generation yields an increase in forecast skill, even beyond the skill of the default system. Orth, R., E. Dutra, and F. Pappenberger, 2016: Improving weather predictability by

  15. Parameter space for a dissipative Fermi-Ulam model

    NASA Astrophysics Data System (ADS)

    Oliveira, Diego F. M.; Leonel, Edson D.

    2011-12-01

    The parameter space for a dissipative bouncing ball model under the effect of inelastic collisions is studied. The system is described using a two-dimensional nonlinear area-contracting map. The introduction of dissipation destroys the mixed structure of phase space of the non-dissipative case, leading to the existence of a chaotic attractor and attracting fixed points, which may coexist for certain ranges of control parameters. We have computed the average velocity for the parameter space and made a connection with the parameter space based on the maximum Lyapunov exponent. For both cases, we found an infinite family of self-similar structures of shrimp shape, which correspond to the periodic attractors embedded in a large region that corresponds to the chaotic motion.

  16. Important observations and parameters for a salt water intrusion model

    USGS Publications Warehouse

    Shoemaker, W.B.

    2004-01-01

    Sensitivity analysis with a density-dependent ground water flow simulator can provide insight and understanding of salt water intrusion calibration problems far beyond what is possible through intuitive analysis alone. Five simple experimental simulations presented here demonstrate this point. Results show that dispersivity is a very important parameter for reproducing a steady-state distribution of hydraulic head, salinity, and flow in the transition zone between fresh water and salt water in a coastal aquifer system. When estimating dispersivity, the following conclusions can be drawn about the data types and locations considered. (1) The "toe" of the transition zone is the most effective location for hydraulic head and salinity observations. (2) Areas near the coastline where submarine ground water discharge occurs are the most effective locations for flow observations. (3) Salinity observations are more effective than hydraulic head observations. (4) The importance of flow observations aligned perpendicular to the shoreline varies dramatically depending on distance seaward from the shoreline. Extreme parameter correlation can prohibit unique estimation of permeability parameters such as hydraulic conductivity and flow parameters such as recharge in a density-dependent ground water flow model when using hydraulic head and salinity observations. Adding flow observations perpendicular to the shoreline in areas where ground water is exchanged with the ocean body can reduce the correlation, potentially resulting in unique estimates of these parameter values. Results are expected to be directly applicable to many complex situations, and have implications for model development whether or not formal optimization methods are used in model calibration.

  17. Important observations and parameters for a salt water intrusion model.

    PubMed

    Shoemaker, W Barclay

    2004-01-01

    Sensitivity analysis with a density-dependent ground water flow simulator can provide insight and understanding of salt water intrusion calibration problems far beyond what is possible through intuitive analysis alone. Five simple experimental simulations presented here demonstrate this point. Results show that dispersivity is a very important parameter for reproducing a steady-state distribution of hydraulic head, salinity, and flow in the transition zone between fresh water and salt water in a coastal aquifer system. When estimating dispersivity, the following conclusions can be drawn about the data types and locations considered. (1) The "toe" of the transition zone is the most effective location for hydraulic head and salinity observations. (2) Areas near the coastline where submarine ground water discharge occurs are the most effective locations for flow observations. (3) Salinity observations are more effective than hydraulic head observations. (4) The importance of flow observations aligned perpendicular to the shoreline varies dramatically depending on distance seaward from the shoreline. Extreme parameter correlation can prohibit unique estimation of permeability parameters such as hydraulic conductivity and flow parameters such as recharge in a density-dependent ground water flow model when using hydraulic head and salinity observations. Adding flow observations perpendicular to the shoreline in areas where ground water is exchanged with the ocean body can reduce the correlation, potentially resulting in unique estimates of these parameter values. Results are expected to be directly applicable to many complex situations, and have implications for model development whether or not formal optimization methods are used in model calibration.

  18. Hematological parameters in Polish mixed breed rabbits with addition of meat breed blood in the annual cycle.

    PubMed

    Tokarz-Deptuła, B; Niedźwiedzka-Rystwej, P; Adamiak, M; Hukowska-Szematowicz, B; Trzeciak-Ryczek, A; Deptuła, W

    2015-01-01

    In this paper we studied haematological values, such as haemoglobin concentration, haematocrit value, thrombocytes and leucocytes (lymphocytes, neutrophils, basophils, eosinophils and monocytes), in the peripheral blood of Polish mixed-breed rabbits with an addition of meat-breed blood, in order to obtain reference values which have until now not been available for these animals. In studying these indices we took into consideration the impact of the season (spring, summer, autumn, winter) and the sex of the animals. The studies showed a strong impact of the season of the year in these rabbits, but only in spring and summer. Moreover, we observed that sex had only a moderate impact on the studied haematological parameters in these rabbits. To our knowledge, this is the first paper on haematological values in this widely used group of rabbits, so the results may serve as reference values. PMID:26812808

  19. Integrated reservoir characterization: Improvement in heterogeneities stochastic modelling by integration of additional external constraints

    SciTech Connect

    Doligez, B.; Eschard, R.; Geffroy, F.

    1997-08-01

    The classical approach to constructing reservoir models is to start with a fine-scale geological model which is informed with petrophysical properties. Scaling-up techniques then allow one to obtain a reservoir model which is compatible with fluid flow simulators. Geostatistical modelling techniques are widely used to build the geological models before scaling-up. These methods provide equiprobable images of the area under investigation, which honor the well data and whose variability is the same as the variability computed from the data. At an appraisal phase, when few data are available, or when the wells are insufficient to describe all the heterogeneities and the behavior of the field, additional constraints are needed to obtain a more realistic geological model. For example, seismic data or stratigraphic models can provide average reservoir information with excellent areal coverage but poor vertical resolution. New advances in modelling techniques now make it possible to integrate this type of additional external information in order to constrain the simulations. In particular, 2D or 3D seismic-derived information grids, or sand-shale ratio maps coming from stratigraphic models, can be used as external drifts to compute the geological image of the reservoir at the fine scale. Examples are presented to illustrate the use of these new tools, their impact on the final reservoir model, and their sensitivity to some key parameters.

  20. Estimation of growth parameters using a nonlinear mixed Gompertz model.

    PubMed

    Wang, Z; Zuidhof, M J

    2004-06-01

    In order to maximize the utility of simulation models for decision making, accurate estimation of growth parameters and associated variances is crucial. A mixed Gompertz growth model was used to account for between-bird variation and heterogeneous variance. The mixed model had several advantages over the fixed effects model. The mixed model partitioned BW variation into between- and within-bird variation, and the covariance structure assumed with the random effect accounted for part of the BW correlation across ages in the same individual. The amount of residual variance decreased by over 55% with the mixed model. The mixed model reduced estimation biases that resulted from selective sampling. For analysis of longitudinal growth data, the mixed effects growth model is recommended.
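
    A sketch of the fixed-effects part of such a fit is shown below, using an assumed Gompertz parameterization and illustrative body-weight data; the mixed-model step (random bird effects and heterogeneous variance) would require a dedicated nonlinear mixed-effects routine and is not reproduced here.

    ```python
    # Sketch: fixed-effects Gompertz fit to body-weight data with curve_fit. A mixed
    # model would additionally place random effects on e.g. the asymptote to
    # partition between- and within-bird variance; data below are illustrative.
    import numpy as np
    from scipy.optimize import curve_fit

    def gompertz(t, w_max, b, k):
        """w_max: mature weight, b: shape constant, k: maturation rate (per day)."""
        return w_max * np.exp(-b * np.exp(-k * t))

    age = np.array([0.0, 7.0, 14.0, 21.0, 28.0, 35.0, 42.0])    # days
    bw = np.array([0.042, 0.16, 0.43, 0.85, 1.35, 1.85, 2.35])  # kg (illustrative)

    params, cov = curve_fit(gompertz, age, bw, p0=[3.0, 4.0, 0.05])
    print(dict(zip(["w_max", "b", "k"], params.round(3))))
    ```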

  1. Modelling Biophysical Parameters of Maize Using Landsat 8 Time Series

    NASA Astrophysics Data System (ADS)

    Dahms, Thorsten; Seissiger, Sylvia; Conrad, Christopher; Borg, Erik

    2016-06-01

    Open and free access to multi-frequent high-resolution data (e.g. Sentinel-2) will fortify agricultural applications based on satellite data. The temporal and spatial resolution of these remote sensing datasets directly affects the applicability of remote sensing methods, for instance the robust retrieval of biophysical parameters over the entire growing season at very high geometric resolution. In this study we use machine learning methods to predict biophysical parameters, namely the fraction of absorbed photosynthetically active radiation (FPAR), the leaf area index (LAI) and the chlorophyll content, from high resolution remote sensing. 30 Landsat 8 OLI scenes were available for our study region in Mecklenburg-Western Pomerania, Germany. In-situ data were collected weekly to bi-weekly on 18 maize plots throughout the summer season of 2015. The study aims at an optimized prediction of biophysical parameters and the identification of the spectral bands and vegetation indices with the greatest explanatory power. For this purpose, we used the entire in-situ dataset from 24.03.2015 to 15.10.2015. Random forests and conditional inference forests were used because of their strong exploratory and predictive character. Variable importance measures allowed analysing the relation between the biophysical parameters and the spectral response, and the performance of the two approaches over the development of the plant stock. Classical random forest regression outperformed conditional inference forests, in particular when modelling the biophysical parameters over the entire growing period. For example, modelling biophysical parameters of maize for the entire vegetation period using random forests yielded: FPAR: R² = 0.85, RMSE = 0.11; LAI: R² = 0.64, RMSE = 0.9; and chlorophyll content (SPAD): R² = 0.80, RMSE = 4.9. Our results demonstrate the great potential in using machine-learning methods for the interpretation of long-term multi-frequent remote sensing datasets to model
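
    The core workflow can be sketched as below: train a random forest on per-plot band reflectances against in-situ LAI and inspect the variable importances. The band values and target are synthetic placeholders for the Landsat/field dataset described above.

    ```python
    # Sketch: predict a biophysical parameter (here LAI) from Landsat 8 OLI band
    # reflectances with a random forest and rank band importance. The reflectances
    # and LAI target are synthetic placeholders for the field dataset.
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    bands = ["blue", "green", "red", "nir", "swir1", "swir2"]
    X = rng.uniform(0.0, 0.6, size=(200, len(bands)))
    lai = 6.0 * X[:, 3] / (X[:, 3] + X[:, 2]) + rng.normal(0.0, 0.2, 200)   # synthetic LAI

    rf = RandomForestRegressor(n_estimators=500, random_state=0)
    print("R2 (5-fold CV):", cross_val_score(rf, X, lai, cv=5, scoring="r2").mean())

    rf.fit(X, lai)
    for name, imp in sorted(zip(bands, rf.feature_importances_), key=lambda b: -b[1]):
        print(f"{name:6s} {imp:.3f}")   # variable importance, cf. the band ranking above
    ```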

  2. Percolation model with an additional source of disorder

    NASA Astrophysics Data System (ADS)

    Kundu, Sumanta; Manna, S. S.

    2016-06-01

    The ranges of transmission of the mobiles in a mobile ad hoc network are not uniform in reality. They are affected by temperature fluctuations in the air, obstruction due to solid objects, even humidity differences in the environment, etc. How the varying range of transmission of the individual active elements affects the global connectivity in the network may be an important practical question to ask. Here a model of percolation phenomena, with an additional source of disorder, is introduced for a theoretical understanding of this problem. As in ordinary percolation, sites of a square lattice are occupied randomly with probability p. Each occupied site is then assigned a circular disk of random value R for its radius. A bond is defined to be occupied if and only if the radii R1 and R2 of the disks centered at its ends satisfy a certain predefined condition. In a very general formulation, one divides the R1-R2 plane into two regions by an arbitrary closed curve. One defines a point within one region as representing an occupied bond; otherwise it is a vacant bond. The study of three different rules under this general formulation indicates that the percolation threshold always varies continuously. This threshold has two limiting values, one is pc(sq), the percolation threshold for the ordinary site percolation on the square lattice, and the other is unity. The approach of the percolation threshold to its limiting values is characterized by two exponents. In a special case, all lattice sites are occupied by disks of random radii R ∈ {0, R0} and a percolation transition is observed with R0 as the control variable, similar to the site occupation probability.
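
    A compact simulation of this kind of model is sketched below, using one illustrative bond rule (disks overlap, R1 + R2 >= 1) rather than the specific rules studied in the paper, and a union-find check for a spanning cluster.

    ```python
    # Sketch of the disordered percolation model: occupy sites of an L x L square
    # lattice with probability p, give each occupied site a random disk radius, and
    # occupy a bond only if the end radii satisfy a condition -- here the
    # illustrative overlap rule R1 + R2 >= 1 rather than the paper's specific rules.
    import numpy as np

    def find(parent, i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]      # path compression
            i = parent[i]
        return i

    def union(parent, i, j):
        ri, rj = find(parent, i), find(parent, j)
        if ri != rj:
            parent[rj] = ri

    def spans(L, p, r_max, rng):
        occ = rng.random((L, L)) < p
        radius = rng.uniform(0.0, r_max, (L, L))
        parent = list(range(L * L))
        for x in range(L):
            for y in range(L):
                if not occ[x, y]:
                    continue
                for dx, dy in ((1, 0), (0, 1)):                 # right and down neighbours
                    nx, ny = x + dx, y + dy
                    if nx < L and ny < L and occ[nx, ny] \
                            and radius[x, y] + radius[nx, ny] >= 1.0:
                        union(parent, x * L + y, nx * L + ny)
        top = {find(parent, y) for y in range(L) if occ[0, y]}
        bottom = {find(parent, (L - 1) * L + y) for y in range(L) if occ[L - 1, y]}
        return bool(top & bottom)                               # vertical spanning cluster?

    rng = np.random.default_rng(1)
    for p in (0.55, 0.65, 0.75, 0.85):
        frac = np.mean([spans(64, p, r_max=1.5, rng=rng) for _ in range(20)])
        print(p, frac)          # spanning probability rises through a shifted threshold
    ```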

  3. Percolation model with an additional source of disorder.

    PubMed

    Kundu, Sumanta; Manna, S S

    2016-06-01

    The ranges of transmission of the mobiles in a mobile ad hoc network are not uniform in reality. They are affected by temperature fluctuations in the air, obstruction due to solid objects, even humidity differences in the environment, etc. How the varying range of transmission of the individual active elements affects the global connectivity in the network may be an important practical question to ask. Here a model of percolation phenomena, with an additional source of disorder, is introduced for a theoretical understanding of this problem. As in ordinary percolation, sites of a square lattice are occupied randomly with probability p. Each occupied site is then assigned a circular disk of random value R for its radius. A bond is defined to be occupied if and only if the radii R_{1} and R_{2} of the disks centered at its ends satisfy a certain predefined condition. In a very general formulation, one divides the R_{1}-R_{2} plane into two regions by an arbitrary closed curve. One defines a point within one region as representing an occupied bond; otherwise it is a vacant bond. The study of three different rules under this general formulation indicates that the percolation threshold always varies continuously. This threshold has two limiting values, one is p_{c}(sq), the percolation threshold for the ordinary site percolation on the square lattice, and the other is unity. The approach of the percolation threshold to its limiting values is characterized by two exponents. In a special case, all lattice sites are occupied by disks of random radii R∈{0,R_{0}} and a percolation transition is observed with R_{0} as the control variable, similar to the site occupation probability.

  5. Revised Parameters for the AMOEBA Polarizable Atomic Multipole Water Model

    PubMed Central

    Pande, Vijay S.; Head-Gordon, Teresa; Ponder, Jay W.

    2016-01-01

    A set of improved parameters for the AMOEBA polarizable atomic multipole water model is developed. The protocol uses an automated procedure, ForceBalance, to adjust model parameters to enforce agreement with ab initio-derived results for water clusters and experimentally obtained data for a variety of liquid phase properties across a broad temperature range. The values reported here for the new AMOEBA14 water model represent a substantial improvement over the previous AMOEBA03 model. The new AMOEBA14 water model accurately predicts the temperature of maximum density and qualitatively matches the experimental density curve across temperatures ranging from 249 K to 373 K. Excellent agreement is observed for the AMOEBA14 model in comparison to a variety of experimental properties as a function of temperature, including the 2nd virial coefficient, enthalpy of vaporization, isothermal compressibility, thermal expansion coefficient and dielectric constant. The viscosity, self-diffusion constant and surface tension are also well reproduced. In comparison to high-level ab initio results for clusters of 2 to 20 water molecules, the AMOEBA14 model yields results similar to the AMOEBA03 and the direct polarization iAMOEBA models. With advances in computing power, calibration data, and optimization techniques, we recommend the use of the AMOEBA14 water model for future studies employing a polarizable water model. PMID:25683601

  6. Revised Parameters for the AMOEBA Polarizable Atomic Multipole Water Model.

    PubMed

    Laury, Marie L; Wang, Lee-Ping; Pande, Vijay S; Head-Gordon, Teresa; Ponder, Jay W

    2015-07-23

    A set of improved parameters for the AMOEBA polarizable atomic multipole water model is developed. An automated procedure, ForceBalance, is used to adjust model parameters to enforce agreement with ab initio-derived results for water clusters and experimental data for a variety of liquid phase properties across a broad temperature range. The values reported here for the new AMOEBA14 water model represent a substantial improvement over the previous AMOEBA03 model. The AMOEBA14 model accurately predicts the temperature of maximum density and qualitatively matches the experimental density curve across temperatures from 249 to 373 K. Excellent agreement is observed for the AMOEBA14 model in comparison to experimental properties as a function of temperature, including the second virial coefficient, enthalpy of vaporization, isothermal compressibility, thermal expansion coefficient, and dielectric constant. The viscosity, self-diffusion constant, and surface tension are also well reproduced. In comparison to high-level ab initio results for clusters of 2-20 water molecules, the AMOEBA14 model yields results similar to AMOEBA03 and the direct polarization iAMOEBA models. With advances in computing power, calibration data, and optimization techniques, we recommend the use of the AMOEBA14 water model for future studies employing a polarizable water model.

  7. Prediction of interest rate using CKLS model with stochastic parameters

    SciTech Connect

    Ying, Khor Chia; Hin, Pooi Ah

    2014-06-19

    The Chan, Karolyi, Longstaff and Sanders (CKLS) model is a popular one-factor model for describing spot interest rates. In this paper, the four parameters of the CKLS model are regarded as stochastic. The parameter vector φ^{(j)} of four parameters at the (j+n)-th time point is estimated from the j-th window, which is defined as the set consisting of the observed interest rates at the j′-th time points where j ≤ j′ ≤ j+n. To model the variation of φ^{(j)}, we assume that φ^{(j)} depends on φ^{(j−m)}, φ^{(j−m+1)}, …, φ^{(j−1)} and on the interest rate r_{j+n} at the (j+n)-th time point via a four-dimensional conditional distribution which is derived from a [4(m+1)+1]-dimensional power-normal distribution. Treating the (j+n)-th time point as the present time point, we find a prediction interval for the future value r_{j+n+1} of the interest rate at the next time point when the value r_{j+n} of the interest rate is given. From the above four-dimensional conditional distribution, we also find a prediction interval for the future interest rate r_{j+n+d} at the next d-th (d ≥ 2) time point. The prediction intervals based on the CKLS model with stochastic parameters are found to cover the observed future interest rates better than those based on the model with fixed parameters.
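
    For orientation, the underlying CKLS dynamics dr = (alpha + beta*r) dt + sigma*r^gamma dW can be simulated as below with one fixed, illustrative parameter set; the paper's stochastic-parameter windows and power-normal prediction intervals are not reproduced here.

    ```python
    # Sketch: Euler-Maruyama simulation of the CKLS short-rate model
    #   dr_t = (alpha + beta * r_t) dt + sigma * r_t**gamma dW_t
    # with one fixed, illustrative parameter set. The paper instead lets the four
    # parameters evolve stochastically and builds its prediction intervals from a
    # power-normal conditional distribution, which is not reproduced here.
    import numpy as np

    def simulate_ckls(r0, alpha, beta, sigma, gamma, dt, n_steps, n_paths, rng):
        r = np.full(n_paths, r0, dtype=float)
        paths = [r.copy()]
        for _ in range(n_steps):
            dw = rng.normal(0.0, np.sqrt(dt), n_paths)
            r = r + (alpha + beta * r) * dt + sigma * np.maximum(r, 0.0) ** gamma * dw
            paths.append(r.copy())
        return np.array(paths)

    rng = np.random.default_rng(0)
    paths = simulate_ckls(r0=0.05, alpha=0.02, beta=-0.4, sigma=0.3, gamma=1.5,
                          dt=1.0 / 252.0, n_steps=252, n_paths=10000, rng=rng)

    # empirical 95% band for the rate one year ahead, cf. the paper's prediction intervals
    print(np.percentile(paths[-1], [2.5, 50.0, 97.5]))
    ```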

  8. Nilsson parameters κ and μ in relativistic mean field models

    NASA Astrophysics Data System (ADS)

    Sulaksono, A.; Mart, T.; Bahri, C.

    2005-03-01

    Nilsson parameters κ and μ have been studied in the framework of relativistic mean field (RMF) models. They are used to investigate the reason why RMF models give a relatively good prediction of the spin-orbit splitting but fail to reproduce the placement of the states with different orbital angular momenta. Instead of the relatively small effective mass M*, the independence of M* from the angular momentum l is found to be the reason.

  9. Atmosphere models and the determination of stellar parameters

    NASA Astrophysics Data System (ADS)

    Martins, F.

    2014-11-01

    We present the basic concepts necessary to build atmosphere models for any type of star. We then illustrate how atmosphere models can be used to determine stellar parameters. We focus on the effects of line-blanketing for hot stars, and on non-LTE and three dimensional effects for cool stars. We illustrate the impact of these effects on the determination of the ages of stars from the HR diagram.

  10. Consistency of Rasch Model Parameter Estimation: A Simulation Study.

    ERIC Educational Resources Information Center

    van den Wollenberg, Arnold L.; And Others

    1988-01-01

    The unconditional--simultaneous--maximum likelihood (UML) estimation procedure for the one-parameter logistic model produces biased estimators. The UML method is inconsistent and is not a good alternative to the conditional maximum likelihood method, at least with small numbers of items. The minimum chi-square estimation procedure produces unbiased…

  11. Parabolic problems with parameters arising in an evolution model for phytoremediation

    NASA Astrophysics Data System (ADS)

    Sahmurova, Aida; Shakhmurov, Veli

    2012-12-01

    Over the past few decades, efforts have been made to clean up sites polluted by heavy metals such as chromium. One of the new, innovative methods of removing metals from soil is phytoremediation, which uses plants to pull metals from the soil through their roots. This work develops a system of differential equations with parameters to model the plant-metal interaction of phytoremediation (see [1]).

  12. [Comparison of optimum, RSA and GLUE methods in parameter identification of a nonlinear environmental model].

    PubMed

    Deng, Yixiang; Wang, Qi; Lai, Siyun; Chen, Jining

    2003-11-01

    Parameter identification plays a key role in environmental model application. The optimization method is one of the earliest and most widely used approaches. However, because the parameters obtained by optimization may not fully fit the observations, there is a risk that the errors are amplified at the decision-making stage. With this deficiency in mind, the RSA and GLUE algorithms search for feasible parameters not only at the optimum but also in its neighborhood. The difference between RSA and GLUE is that RSA accepts all the estimated parameter sets equally as candidates for application, while GLUE retains the differences among the parameter sets as measured by likelihood. In addition to parameter identification, both RSA and GLUE are efficient tools for global sensitivity analysis.
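
    A toy sketch of the shared RSA/GLUE mechanics, with an assumed exponential-decay model standing in for the environmental model and an informal likelihood measure: sample from uniform priors, keep the behavioural sets, and (for GLUE) weight them by likelihood.

    ```python
    # Toy sketch of RSA/GLUE: sample parameter sets from uniform priors, score each
    # against observations with an informal likelihood, keep the "behavioural" sets
    # and, for GLUE, weight them by likelihood. model() is a hypothetical stand-in.
    import numpy as np

    def model(params, x):
        a, b = params
        return a * np.exp(-b * x)            # hypothetical environmental model

    def weighted_quantile(values, weights, qs):
        order = np.argsort(values)
        cum = np.cumsum(weights[order])
        return np.interp(qs, cum / cum[-1], values[order])

    rng = np.random.default_rng(0)
    x = np.linspace(0.0, 10.0, 25)
    obs = model((2.0, 0.3), x) + rng.normal(0.0, 0.05, x.size)   # synthetic observations

    samples = rng.uniform([0.0, 0.0], [5.0, 1.0], size=(20000, 2))     # uniform priors
    sse = np.array([np.sum((model(p, x) - obs) ** 2) for p in samples])
    likelihood = np.exp(-sse / sse.min())        # one common informal likelihood measure

    behavioural = likelihood > 0.3               # subjective threshold (RSA keep/reject)
    weights = likelihood[behavioural] / likelihood[behavioural].sum()   # GLUE weights

    for j, name in enumerate(("a", "b")):
        lo, hi = weighted_quantile(samples[behavioural, j], weights, [0.05, 0.95])
        print(f"{name}: 90% GLUE uncertainty bounds {lo:.2f} to {hi:.2f}")
    ```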

  13. Inhalation Exposure Input Parameters for the Biosphere Model

    SciTech Connect

    M. A. Wasiolek

    2003-09-24

    This analysis is one of the nine reports that support the Environmental Radiation Model for Yucca Mountain Nevada (ERMYN) biosphere model. The ''Biosphere Model Report'' (BSC 2003a) describes in detail the conceptual model as well as the mathematical model and its input parameters. This report documents a set of input parameters for the biosphere model, and supports the use of the model to develop biosphere dose conversion factors (BDCFs). The biosphere model is one of a series of process models supporting the Total System Performance Assessment (TSPA) for a Yucca Mountain repository. This report, ''Inhalation Exposure Input Parameters for the Biosphere Model'', is one of the five reports that develop input parameters for the biosphere model. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows the interrelationships among the products (i.e., analysis and model reports) developed for biosphere modeling, and the plan for development of the biosphere abstraction products for TSPA, as identified in the ''Technical Work Plan: for Biosphere Modeling and Expert Support'' (BSC 2003b). It should be noted that some documents identified in Figure 1-1 may be under development at the time this report is issued and therefore not available at that time. This figure is included to provide an understanding of how this analysis report contributes to biosphere modeling in support of the license application, and is not intended to imply that access to the listed documents is required to understand the contents of this analysis report. This analysis report defines and justifies values of mass loading, which is the total mass concentration of resuspended particles (e.g., dust, ash) in a volume of air. Measurements of mass loading are used in the air submodel of ERMYN to calculate concentrations of radionuclides in air surrounding crops and concentrations in air inhaled by a receptor. Concentrations in air to which the

  14. Assessment of structural model and parameter uncertainty with a multi-model system for soil water balance models

    NASA Astrophysics Data System (ADS)

    Michalik, Thomas; Multsch, Sebastian; Frede, Hans-Georg; Breuer, Lutz

    2016-04-01

    Water for agriculture is strongly limited in arid and semi-arid regions and is often of low quality in terms of salinity. The application of saline waters for irrigation increases the salt load in the rooting zone and has to be managed by leaching to maintain a healthy soil, i.e. washing out salts by additional irrigation. Dynamic simulation models are helpful tools to calculate the root-zone water fluxes and soil salinity content in order to investigate best management practices. However, there is little information on structural and parameter uncertainty for simulations of the water and salt balance under saline irrigation. Hence, we established a multi-model system with four different models (AquaCrop, RZWQM, SWAP, Hydrus1D/UNSATCHEM) to analyze structural and parameter uncertainty using the Generalized Likelihood Uncertainty Estimation (GLUE) method. Hydrus1D/UNSATCHEM and SWAP were set up with multiple sets of different implemented functions (e.g. matric and osmotic stress for root water uptake), which results in a broad range of different model structures. The simulations were evaluated against soil water and salinity content observations. The posterior distribution of the GLUE analysis gives behavioral parameter sets and reveals uncertainty intervals for the parameters. Throughout all of the model sets, most parameters accounting for the soil water balance show a low uncertainty; only one or two out of five to six parameters in each model set display a high uncertainty (e.g. the pore-size distribution index in SWAP and Hydrus1D/UNSATCHEM). The differences between the models and model setups reveal the structural uncertainty. The highest structural uncertainty is observed for deep percolation fluxes between the model sets of Hydrus1D/UNSATCHEM (~200 mm) and RZWQM (~500 mm), with the latter more than twice as high. The model sets also show a high variation in uncertainty intervals for deep percolation, with an interquartile range (IQR) of

  15. [Influence of Additional Cognitive Tasks on EEG Beta Rhythm Parameters during Forming and Testing of a Set to Perception of Facial Expression].

    PubMed

    Yakovenko, I A; Cheremushkin, E A; Kozlov, M K

    2015-01-01

    Changes in beta rhythm parameters under working-memory load, produced by extending the interstimulus interval between the target and triggering stimuli to 16 s, were investigated in 70 healthy adults in two series of experiments with a set to a facial expression. In the second series, to strengthen the load, an additional cognitive task was introduced at the middle of this interval in the form of Go/NoGo conditioning stimuli (circles of blue or green color). Data were analysed by continuous wavelet transform based on the complex Morlet mother wavelet in the range of 1-35 Hz. Beta rhythm power was characterized by the mean level and the maxima of the wavelet-transform coefficient (WLC), and by the latent periods of those maxima. Introducing the additional cognitive task into the pause between the target and triggering stimuli led to a substantial increase in the absolute values of the mean level of beta rhythm WLC and in the relative sizes of the WLC maxima. In the series of experiments without the conditioning stimulus, subjects with a large number of mistakes (6 to 40), i.e. a rigid set, were characterized at the set-forming stage by higher mean levels of beta rhythm WLC than subjects with a small number of mistakes (up to 5), i.e. a plastic set. Introduction of the conditioning stimuli smoothed out these intergroup differences throughout the experiment. PMID:26601500
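
    The wavelet step can be sketched as follows for a single epoch, using a hand-rolled complex Morlet transform over 1-35 Hz; the sampling rate, epoch and normalisation details are assumptions rather than the authors' exact pipeline.

    ```python
    # Sketch: continuous wavelet transform of one EEG epoch with a complex Morlet
    # mother wavelet over 1-35 Hz, then the mean beta-band (13-30 Hz) power and the
    # latency of its maximum, i.e. WLC-style measures. The sampling rate, epoch and
    # normalisation details are assumptions, not the authors' exact pipeline.
    import numpy as np

    def morlet_cwt(x, fs, freqs, w0=6.0):
        """Complex Morlet CWT; rows are analysis frequencies (Hz), columns are time."""
        out = np.empty((len(freqs), len(x)), dtype=complex)
        for k, f in enumerate(freqs):
            s = w0 / (2.0 * np.pi * f)                      # wavelet scale in seconds
            tt = np.arange(-4.0 * s, 4.0 * s, 1.0 / fs)     # wavelet support
            psi = np.pi ** -0.25 * np.exp(1j * w0 * tt / s - (tt / s) ** 2 / 2.0)
            psi /= np.sqrt(s * fs)                          # rough energy normalisation
            out[k] = np.convolve(x, psi, mode="same")
        return out

    fs = 250.0                                              # sampling rate, Hz (assumed)
    t = np.arange(0.0, 10.0, 1.0 / fs)                      # one 10-s epoch
    eeg = np.random.default_rng(0).normal(0.0, 1.0, t.size) # placeholder for real EEG

    freqs = np.arange(1.0, 35.5, 0.5)
    power = np.abs(morlet_cwt(eeg, fs, freqs)) ** 2         # wavelet power, freq x time
    beta_power = power[(freqs >= 13) & (freqs <= 30)].mean(axis=0)

    print("mean beta power:", beta_power.mean())
    print("latency of beta maximum (s):", t[np.argmax(beta_power)])
    ```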

  16. Squares of different sizes: effect of geographical projection on model parameter estimates in species distribution modeling.

    PubMed

    Budic, Lara; Didenko, Gregor; Dormann, Carsten F

    2016-01-01

    In species distribution analyses, environmental predictors and distribution data for large spatial extents are often available in long-lat format, such as degree raster grids. Long-lat projections suffer from unequal cell sizes, as a degree of longitude decreases in length from approximately 110 km at the equator to 0 km at the poles. Here we investigate whether long-lat and equal-area projections yield similar model parameter estimates, or result in a consistent bias. We analyzed the environmental effects on the distribution of 12 ungulate species with a northern distribution, as models for these species should display the strongest effect of projectional distortion. Additionally we chose four species with entirely continental distributions to investigate the effect of incomplete cell coverage at the coast. We expected that including model weights proportional to the actual cell area should compensate for the observed bias in model coefficients, and similarly that using land coverage of a cell should decrease bias in species with coastal distribution. As anticipated, model coefficients were different between long-lat and equal-area projections. Having progressively smaller cells, and a higher number of them, with increasing latitude influenced the importance of parameters in models, increased the sample size for the northernmost parts of species ranges, and reduced the subcell variability of those areas. However, this bias could be largely removed by weighting long-lat cells by the area they cover, and marginally by correcting for land coverage. Overall we found little effect of using long-lat rather than equal-area projections in our analysis. The fitted relationship between environmental parameters and occurrence probability differed only very little between the two projection types. We still recommend using equal-area projections to avoid possible bias. More importantly, our results suggest that the cell area and the proportion of a cell covered by land should be
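
    The area-weighting remedy the authors recommend can be sketched by weighting each long-lat cell by cos(latitude) when fitting the distribution model; the logistic model and synthetic data below are illustrative, not the ungulate dataset.

    ```python
    # Sketch: compensate for unequal long-lat cell areas by weighting each cell by
    # cos(latitude), i.e. proportionally to its true area, when fitting a simple
    # occurrence model. The covariate and presence data are synthetic placeholders.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    lat = rng.uniform(40.0, 80.0, 5000)                          # northern cells
    temp = 25.0 - 0.6 * lat + rng.normal(0.0, 2.0, lat.size)     # illustrative covariate
    presence = (rng.random(lat.size) < 1.0 / (1.0 + np.exp(-(temp + 15.0)))).astype(int)

    X = temp.reshape(-1, 1)
    unweighted = LogisticRegression().fit(X, presence)
    area_w = np.cos(np.deg2rad(lat))                             # relative cell area
    weighted = LogisticRegression().fit(X, presence, sample_weight=area_w)

    print("coefficient without area weights:", unweighted.coef_.ravel())
    print("coefficient with area weights:   ", weighted.coef_.ravel())
    ```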

  18. Parameter Calibration of Mini-LEO Hill Slope Model

    NASA Astrophysics Data System (ADS)

    Siegel, H.

    2015-12-01

    The mini-LEO hill slope, located at Biosphere 2, is a small-scale catchment model that is used to study the ways landscapes change in response to biological, chemical, and hydrological processes. Previous experiments have shown that soil heterogeneity can develop as a result of groundwater flow, changing the characteristics of the landscape. To determine whether or not flow has caused heterogeneity within the mini-LEO hill slope, numerical models were used to simulate the observed seepage flow, water table height, and storativity. To begin, a numerical model of the hill slope was created using CATchment Hydrology (CATHY). The model was brought to an initial steady state by applying a rainfall rate of 5 mm/day for 180 days. A specific rainfall experiment of alternating intensities was then applied to the model. Next, a parameter calibration was conducted to fit the model to the observed data by changing soil parameters individually. The parameters of the best-fitting calibration were taken to be the most representative of those present within the mini-LEO hill slope. Our model indicated that heterogeneities had indeed arisen as a result of the rainfall, resulting in a lower hydraulic conductivity downslope. The lower hydraulic conductivity downslope in turn caused increased water storage and a decrease in seepage flow compared with homogeneous models. This shows that the hydraulic processes acting within a landscape can change the characteristics of the landscape itself, namely the permeability and conductivity of the soil. In the future, results from the excavation of soil in mini-LEO can be compared with the model's results to improve the model and validate its findings.

  19. Electro-optical parameters of bond polarizability model for aluminosilicates.

    PubMed

    Smirnov, Konstantin S; Bougeard, Daniel; Tandon, Poonam

    2006-04-01

    Electro-optical parameters (EOPs) of the bond polarizability model (BPM) for aluminosilicate structures were derived from quantum-chemical DFT calculations of molecular models. The tensor of molecular polarizability and the derivatives of the tensor with respect to the bond length are well reproduced with the BPM, and the EOPs obtained are in fair agreement with available experimental data. The parameters derived were found to be transferable to larger molecules. This finding suggests that the procedure used can be applied to systems with partially ionic chemical bonds. The transferability of the parameters to periodic systems was tested in a molecular dynamics simulation of the polarized Raman spectra of alpha-quartz. It appeared that the molecular Si-O bond EOPs failed to reproduce the intensity of peaks in the spectra. This limitation is due to the large values of the longitudinal components of the bond polarizability and its derivative found in the molecular calculations as compared with those obtained from periodic DFT calculations of crystalline silica polymorphs by Umari et al. (Phys. Rev. B 2001, 63, 094305). It is supposed that the electric field of the solid is responsible for the difference in the parameters. Nevertheless, the EOPs obtained can be used as an initial set of parameters for calculations of polarizability-related characteristics of relevant systems in the framework of the BPM.

  20. Soil-Related Input Parameters for the Biosphere Model

    SciTech Connect

    A. J. Smith

    2004-09-09

    This report presents one of the analyses that support the Environmental Radiation Model for Yucca Mountain Nevada (ERMYN). The ''Biosphere Model Report'' (BSC 2004 [DIRS 169460]) describes the details of the conceptual model as well as the mathematical model and the required input parameters. The biosphere model is one of a series of process models supporting the postclosure Total System Performance Assessment (TSPA) for the Yucca Mountain repository. A schematic representation of the documentation flow for the Biosphere input to TSPA is presented in Figure 1-1. This figure shows the evolutionary relationships among the products (i.e., analysis and model reports) developed for biosphere modeling, and the biosphere abstraction products for TSPA, as identified in the ''Technical Work Plan for Biosphere Modeling and Expert Support'' (TWP) (BSC 2004 [DIRS 169573]). This figure is included to provide an understanding of how this analysis report contributes to biosphere modeling in support of the license application, and is not intended to imply that access to the listed documents is required to understand the contents of this report. This report, ''Soil-Related Input Parameters for the Biosphere Model'', is one of the five analysis reports that develop input parameters for use in the ERMYN model. This report is the source documentation for the six biosphere parameters identified in Table 1-1. The purpose of this analysis was to develop the biosphere model parameters associated with the accumulation and depletion of radionuclides in the soil. These parameters support the calculation of radionuclide concentrations in soil from on-going irrigation or ash deposition and, as a direct consequence, radionuclide concentration in other environmental media that are affected by radionuclide concentrations in soil. The analysis was performed in accordance with the TWP (BSC 2004 [DIRS 169573]) where the governing procedure was defined as AP-SIII.9Q, ''Scientific Analyses''. This

  1. Telescoping strategies for improved parameter estimation of environmental simulation models

    NASA Astrophysics Data System (ADS)

    Matott, L. Shawn; Hymiak, Beth; Reslink, Camden; Baxter, Christine; Aziz, Shirmin

    2013-10-01

    The parameters of environmental simulation models are often inferred by minimizing differences between simulated output and observed data. Heuristic global search algorithms are a popular choice for performing minimization but many algorithms yield lackluster results when computational budgets are restricted, as is often required in practice. One way for improving performance is to limit the search domain by reducing upper and lower parameter bounds. While such range reduction is typically done prior to optimization, this study examined strategies for contracting parameter bounds during optimization. Numerical experiments evaluated a set of novel “telescoping” strategies that work in conjunction with a given optimizer to scale parameter bounds in accordance with the remaining computational budget. Various telescoping functions were considered, including a linear scaling of the bounds, and four nonlinear scaling functions that more aggressively reduce parameter bounds either early or late in the optimization. Several heuristic optimizers were integrated with the selected telescoping strategies and applied to numerous optimization test functions as well as calibration problems involving four environmental simulation models. The test suite ranged from simple 2-parameter surfaces to complex 100-parameter landscapes, facilitating robust comparisons of the selected optimizers across a variety of restrictive computational budgets. All telescoping strategies generally improved the performance of the selected optimizers, relative to baseline experiments that used no bounds reduction. Performance improvements varied but were as high as 38% for a real-coded genetic algorithm (RGA), 21% for shuffled complex evolution (SCE), 16% for simulated annealing (SA), 8% for particle swarm optimization (PSO), and 7% for dynamically dimensioned search (DDS). Inter-algorithm comparisons suggest that the SCE and DDS algorithms delivered the best overall performance. SCE appears well
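
    A linear telescoping strategy can be sketched as a wrapper that shrinks the sampling bounds around the incumbent best as the budget is spent; plain random search stands in here for the heuristic optimizers (RGA, SCE, SA, PSO, DDS) evaluated in the study.

    ```python
    # Sketch of a linear "telescoping" strategy: as the computational budget is
    # spent, contract the parameter bounds around the current best solution before
    # each batch of new candidate evaluations. The inner optimizer here is plain
    # random search, standing in for the heuristic algorithms compared in the paper.
    import numpy as np

    def telescoping_search(objective, lower, upper, budget=2000, batch=50, seed=0):
        rng = np.random.default_rng(seed)
        lower, upper = np.asarray(lower, float), np.asarray(upper, float)
        best_x = rng.uniform(lower, upper)
        best_f = objective(best_x)
        used = 1
        while used < budget:
            frac = 1.0 - used / budget                    # linear scaling of the bounds
            half_width = 0.5 * frac * (upper - lower)
            lo = np.clip(best_x - half_width, lower, upper)
            hi = np.clip(best_x + half_width, lower, upper)
            for x in rng.uniform(lo, hi, size=(batch, lower.size)):
                f = objective(x)
                if f < best_f:
                    best_x, best_f = x, f
            used += batch
        return best_x, best_f

    # e.g. a 10-parameter Rosenbrock surface as a stand-in for a calibration problem
    rosen = lambda x: float(np.sum(100.0 * (x[1:] - x[:-1] ** 2) ** 2 + (1.0 - x[:-1]) ** 2))
    print(telescoping_search(rosen, [-5.0] * 10, [5.0] * 10))
    ```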

  2. Realistic uncertainties on Hapke model parameters from photometric measurement

    NASA Astrophysics Data System (ADS)

    Schmidt, Frédéric; Fernando, Jennifer

    2015-11-01

    The single particle phase function describes the manner in which an average element of a granular material diffuses light in the angular space, usually with two parameters: the asymmetry parameter b describing the width of the scattering lobe and the backscattering fraction c describing the main direction of the scattering lobe. Hapke proposed a convenient and widely used analytical model to describe the spectro-photometry of granular materials. Using a compilation of the published data, Hapke (Hapke, B. [2012]. Icarus 221, 1079-1083) recently studied the relationship of b and c for natural examples and proposed the hockey stick relation (excluding b > 0.5 and c > 0.5). For the moment, there is no theoretical explanation for this relationship. One goal of this article is to study a possible bias due to the retrieval method. We develop here an innovative Bayesian inversion method in order to study in detail the uncertainties of the retrieved parameters. On Emission Phase Function (EPF) data, we demonstrate that the uncertainties of the retrieved parameters follow the same hockey stick relation, suggesting that this relation is due to the fact that b and c are coupled parameters in the Hapke model rather than a natural phenomenon. Nevertheless, the data used in the Hapke (Hapke, B. [2012]. Icarus 221, 1079-1083) compilation generally are full Bidirectional Reflectance Distribution Function (BRDF) measurements, which are shown not to be subject to this artifact. Moreover, the Bayesian method is a good tool to test whether the sampling geometry is sufficient to constrain the parameters (single scattering albedo, surface roughness, b, c, opposition effect). We performed sensitivity tests by mimicking various surface scattering properties and various single image-like/disk resolved image, EPF-like and BRDF-like geometric sampling conditions. The second goal of this article is to estimate the favorable geometric conditions for an accurate estimation of photometric parameters in order to provide

  3. Modeling and Bayesian parameter estimation for shape memory alloy bending actuators

    NASA Astrophysics Data System (ADS)

    Crews, John H.; Smith, Ralph C.

    2012-04-01

    In this paper, we employ a homogenized energy model (HEM) for shape memory alloy (SMA) bending actuators. Additionally, we utilize a Bayesian method for quantifying parameter uncertainty. The system consists of a SMA wire attached to a flexible beam. As the actuator is heated, the beam bends, providing endoscopic motion. The model parameters are fit to experimental data using an ordinary least-squares approach. The uncertainty in the fit model parameters is then quantified using Markov Chain Monte Carlo (MCMC) methods. The MCMC algorithm provides bounds on the parameters, which will ultimately be used in robust control algorithms. One purpose of the paper is to test the feasibility of the Random Walk Metropolis algorithm, the MCMC method used here.
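
    The Random Walk Metropolis step can be sketched as below for a toy deflection model standing in for the SMA actuator model; the proposal width, prior and data are illustrative assumptions.

    ```python
    # Sketch of Random Walk Metropolis for quantifying parameter uncertainty after
    # a least-squares fit: propose Gaussian steps around the current parameters and
    # accept with the Metropolis ratio of a Gaussian likelihood. The tip-deflection
    # model and data are toy stand-ins for the SMA actuator model.
    import numpy as np

    def model(theta, u):
        a, b = theta
        return a * (1.0 - np.exp(-b * u))            # hypothetical tip deflection vs input

    rng = np.random.default_rng(0)
    u = np.linspace(0.0, 5.0, 40)
    y = model((2.0, 0.8), u) + rng.normal(0.0, 0.05, u.size)
    sigma2 = 0.05 ** 2

    def log_post(theta):
        if np.any(np.asarray(theta) <= 0.0):          # flat prior on positive parameters
            return -np.inf
        return -0.5 * np.sum((y - model(theta, u)) ** 2) / sigma2

    theta = np.array([1.5, 1.0])                      # e.g. the least-squares estimate
    lp = log_post(theta)
    chain = []
    for _ in range(20000):
        prop = theta + rng.normal(0.0, 0.02, 2)       # random-walk proposal
        lp_prop = log_post(prop)
        if np.log(rng.random()) < lp_prop - lp:       # Metropolis acceptance
            theta, lp = prop, lp_prop
        chain.append(theta.copy())

    chain = np.array(chain[5000:])                    # discard burn-in
    print("95% credible intervals:", np.percentile(chain, [2.5, 97.5], axis=0).T)
    ```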

  4. A constraint-based search algorithm for parameter identification of environmental models

    NASA Astrophysics Data System (ADS)

    Gharari, S.; Shafiei, M.; Hrachowitz, M.; Kumar, R.; Fenicia, F.; Gupta, H. V.; Savenije, H. H. G.

    2014-12-01

    Many environmental systems models, such as conceptual rainfall-runoff models, rely on model calibration for parameter identification. For this, an observed output time series (such as runoff) is needed but is frequently not available (e.g., when making predictions in ungauged basins). In this study, we provide an alternative approach for parameter identification using constraints based on two types of restrictions derived from prior (or expert) knowledge. The first, called parameter constraints, restricts the solution space based on realistic relationships that must hold between the different model parameters, while the second, called process constraints, requires that additional realism relationships between the fluxes and state variables be satisfied. Specifically, we propose a search algorithm for finding parameter sets that simultaneously satisfy such constraints, based on stepwise sampling of the parameter space. Such parameter sets have the desirable property of being consistent with the modeler's intuition of how the catchment functions, and can (if necessary) serve as prior information for further investigations by reducing the prior uncertainties associated with both calibration and prediction.

  5. Inverse parameter determination in the development of an optimized lithium iron phosphate - Graphite battery discharge model

    NASA Astrophysics Data System (ADS)

    Maheshwari, Arpit; Dumitrescu, Mihaela Aneta; Destro, Matteo; Santarelli, Massimo

    2016-03-01

    Battery models are riddled with incongruous values of the parameters considered for validation. In this work, a thermally coupled electrochemical model of a pouch cell is developed, and discharge tests on a LiFePO4 pouch cell at different discharge rates are used to optimize the LiFePO4 battery model by determining parameters for which there is no consensus in the literature. A discussion of parameter determination, selection and comparison with literature values is provided. The electrochemical model is a P2D model, while the thermal model considers heat transfer in 3D. It is seen that, even with no phase change considered for the LiFePO4 electrode, the model is able to simulate the discharge curves over a wide range of discharge rates with a single set of parameters, provided that the radius of the LiFePO4 electrode particles is allowed to depend on the discharge rate. The approach of using a current-dependent radius is shown to be equivalent to using a current-dependent diffusion coefficient. Both of these modelling approaches are a representation of the particle size distribution in the electrode. Additionally, the model has been thermally validated, which increases the confidence level in the selection of parameter values.

  6. Mass balance model parameter transferability on a tropical glacier

    NASA Astrophysics Data System (ADS)

    Gurgiser, Wolfgang; Mölg, Thomas; Nicholson, Lindsey; Kaser, Georg

    2013-04-01

    The mass balance and melt water production of glaciers is of particular interest in the Peruvian Andes, where glacier melt water has markedly increased water supply during the pronounced dry seasons in recent decades. However, the melt water contribution from glaciers is projected to decrease, with appreciable negative impacts on the local society within the coming decades. Understanding mass balance processes on tropical glaciers is a prerequisite for modeling present and future glacier runoff. As a first step towards this aim, we applied a process-based surface mass balance model in order to calculate observed ablation at two stakes in the ablation zone of Shallap Glacier (4800 m a.s.l., 9°S) in the Cordillera Blanca, Peru. Under the tropical climate, the snow line migrates very frequently across most of the ablation zone all year round, causing large temporal and spatial variations in glacier surface conditions and related ablation. Consequently, pronounced differences between the two chosen stakes and the two years were observed. Hourly records of temperature, humidity, wind speed, shortwave incoming radiation, and precipitation are available from an automatic weather station (AWS) on the moraine near the glacier for the hydrological years 2006/07 and 2007/08, while stake readings are available at intervals of 14 to 64 days. To optimize model parameters, we used 1000 model simulations in which the most sensitive model parameters were varied randomly within their physically meaningful ranges. The modeled surface height change was evaluated against the stake readings in the lower ablation zone (SH11, 4760 m) and in the upper ablation zone (SH22, 4816 m), respectively. The optimal parameter set for each point achieved good model skill, but if we transfer the best parameter combination from one stake site to the other, model errors increase significantly. The same happens if we optimize the model parameters for each year individually and transfer
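
    A minimal sketch of the random-sampling calibration strategy described above (1000 draws within physically meaningful ranges, scored against observations) is given below; the mass-balance model, forcing and parameter ranges are placeholders, not those of the study.

```python
import numpy as np

rng = np.random.default_rng(42)

def surface_height_change(params, forcing):
    """Hypothetical stand-in for the process-based mass-balance model."""
    melt_factor, albedo = params
    return -melt_factor * np.maximum(forcing, 0.0) * (1.0 - albedo)

forcing = rng.random(365) * 10.0   # stand-in for daily energy input
observed = surface_height_change((0.02, 0.4), forcing) + rng.normal(0, 0.05, 365)

# 1000 random draws within assumed ranges; keep the set with lowest RMSE
ranges = np.array([[0.005, 0.05], [0.2, 0.6]])
best, best_rmse = None, np.inf
for _ in range(1000):
    p = ranges[:, 0] + rng.random(2) * (ranges[:, 1] - ranges[:, 0])
    rmse = np.sqrt(np.mean((surface_height_change(p, forcing) - observed) ** 2))
    if rmse < best_rmse:
        best, best_rmse = p, rmse
print(best, best_rmse)
```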

  7. Estimating demographic parameters using hidden process dynamic models.

    PubMed

    Gimenez, Olivier; Lebreton, Jean-Dominique; Gaillard, Jean-Michel; Choquet, Rémi; Pradel, Roger

    2012-12-01

    Structured population models are widely used in plant and animal demographic studies to assess population dynamics. In matrix population models, populations are described with discrete classes of individuals (age, life history stage or size). To calibrate these models, longitudinal data are collected at the individual level to estimate demographic parameters. However, several sources of uncertainty can complicate parameter estimation, such as imperfect detection of individuals inherent to monitoring in the wild and uncertainty in assigning a state to an individual. Here, we show how recent statistical models can help overcome these issues. We focus on hidden process models that run two time series in parallel, one capturing the dynamics of the true states and the other consisting of observations arising from these underlying possibly unknown states. In a first case study, we illustrate hidden Markov models with an example of how to accommodate state uncertainty using Frequentist theory and maximum likelihood estimation. In a second case study, we illustrate state-space models with an example of how to estimate lifetime reproductive success despite imperfect detection, using a Bayesian framework and Markov Chain Monte Carlo simulation. Hidden process models are a promising tool as they allow population biologists to cope with process variation while simultaneously accounting for observation error. PMID:22373775

  8. Multiple beam interference model for measuring parameters of a capillary.

    PubMed

    Xu, Qiwei; Tian, Wenjing; You, Zhihong; Xiao, Jinghua

    2015-08-01

    A multiple beam interference model based on the ray tracing method and interference theory is built to analyze the interference patterns of a capillary tube filled with a liquid. The relations between the angular widths of the interference fringes and the parameters of both the capillary and the liquid are derived. Based on these relations, an approach is proposed to simultaneously determine four parameters of the capillary, i.e., the inner and outer radii of the capillary and the refractive indices of the liquid and the wall material. PMID:26368114

  9. Inversion of canopy reflectance models for estimation of vegetation parameters

    NASA Technical Reports Server (NTRS)

    Goel, Narendra S.

    1987-01-01

    One of the keys to successful remote sensing of vegetation is to be able to estimate important agronomic parameters like leaf area index (LAI) and biomass (BM) from the bidirectional canopy reflectance (CR) data obtained by a space-shuttle or satellite borne sensor. One approach for such an estimation is through inversion of CR models which relate these parameters to CR. The feasibility of this approach was shown. The overall objective of the research carried out was to address heretofore uninvestigated but important fundamental issues, develop the inversion technique further, and delineate its strengths and limitations.

  10. Multiple beam interference model for measuring parameters of a capillary.

    PubMed

    Xu, Qiwei; Tian, Wenjing; You, Zhihong; Xiao, Jinghua

    2015-08-01

    A multiple beam interference model based on the ray tracing method and interference theory is built to analyze the interference patterns of a capillary tube filled with a liquid. The relations between the angular widths of the interference fringes and the parameters of both the capillary and the liquid are derived. Based on these relations, an approach is proposed to simultaneously determine four parameters of the capillary, i.e., the inner and outer radii of the capillary and the refractive indices of the liquid and the wall material.

  11. Comparison of Cone Model Parameters for Halo Coronal Mass Ejections

    NASA Astrophysics Data System (ADS)

    Na, Hyeonock; Moon, Y.-J.; Jang, Soojeong; Lee, Kyoung-Sun; Kim, Hae-Yeon

    2013-11-01

    Halo coronal mass ejections (HCMEs) are a major cause of geomagnetic storms, hence their three-dimensional structures are important for space weather. We compare three cone models: an elliptical-cone model, an ice-cream-cone model, and an asymmetric-cone model. These models allow us to determine three-dimensional parameters of HCMEs such as radial speed, angular width, and the angle [γ] between the sky plane and the cone axis. We compare these parameters obtained from the three models using 62 HCMEs observed by SOHO/LASCO from 2001 to 2002. Then we obtain the root-mean-square (RMS) error between the highest measured projection speeds and their calculated projection speeds from the cone models. As a result, we find that the radial speeds obtained from the models are well correlated with one another (R > 0.8). The correlation coefficients between angular widths range from 0.1 to 0.48 and those between γ-values range from -0.08 to 0.47, which is much smaller than expected. The reason may be the different assumptions and methods. The RMS errors between the highest measured projection speeds and the highest estimated projection speeds of the elliptical-cone model, the ice-cream-cone model, and the asymmetric-cone model are 376 km s⁻¹, 169 km s⁻¹, and 152 km s⁻¹, respectively. We obtain correlation coefficients between the locations from the models and the flare locations of R > 0.45. Finally, we discuss the strengths and weaknesses of these models in terms of space-weather application.

  12. Parameter estimation for a nonlinear control-oriented tokamak profile evolution model

    NASA Astrophysics Data System (ADS)

    Geelen, P.; Felici, F.; Merle, A.; Sauter, O.

    2015-12-01

    A control-oriented tokamak profile evolution model is crucial for the development and testing of control schemes for a fusion plasma. The RAPTOR (RApid Plasma Transport simulatOR) code was developed with this aim in mind (Felici 2011 Nucl. Fusion 51 083052). The performance of the control system strongly depends on the quality of the control-oriented model predictions. In RAPTOR a semi-empirical transport model is used, instead of a first-principles physics model, to describe the electron heat diffusivity χ_e in view of computational speed. The structure of the empirical model is given by physics knowledge, and only some unknown physics of χ_e, which is more complicated and less well understood, is captured in its model parameters. Additionally, time-averaged sawtooth behavior is modeled by an ad hoc addition to the neoclassical conductivity σ_∥ and the electron heat diffusivity. As a result, RAPTOR contains parameters that need to be estimated for a tokamak plasma to make reliable predictions. In this paper a generic parameter estimation method, based on nonlinear least-squares theory, was developed to estimate these model parameters. For the TCV tokamak, interpretative transport simulations that used measured T_e profiles were performed, and it was shown that the developed method is capable of finding the model parameters such that RAPTOR's predictions agree within ten percent with the simulated q profile and within twenty percent with the measured T_e profile. The newly developed model-parameter estimation procedure now results in a better description of a fusion plasma and allows for a less ad hoc and more automated method to implement RAPTOR on a variety of tokamaks.
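
    As an illustration of the nonlinear least-squares step, the sketch below fits a hypothetical profile model with scipy.optimize.least_squares; in the actual procedure the residuals would compare RAPTOR-predicted q and T_e profiles with the interpretative-simulation and measured profiles.

```python
import numpy as np
from scipy.optimize import least_squares

# Hypothetical stand-in for a profile prediction; the real case would call
# the transport code to predict q and T_e profiles for a parameter vector.
rho = np.linspace(0.0, 1.0, 40)

def predict_profile(theta, rho):
    a, b, c = theta
    return a + b * rho + c * rho**2

measured = predict_profile([1.0, -0.5, 2.0], rho) \
    + 0.02 * np.random.default_rng(0).standard_normal(rho.size)

def residuals(theta):
    # residuals could additionally be weighted by measurement uncertainties
    return predict_profile(theta, rho) - measured

fit = least_squares(residuals, x0=[0.5, 0.0, 1.0])
print(fit.x, np.sqrt(np.mean(fit.fun**2)))
```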

  13. Testing Departure from Additivity in Tukey’s Model using Shrinkage: Application to a Longitudinal Setting

    PubMed Central

    Ko, Yi-An; Mukherjee, Bhramar; Smith, Jennifer A.; Park, Sung Kyun; Kardia, Sharon L.R.; Allison, Matthew A.; Vokonas, Pantel S.; Chen, Jinbo; Diez-Roux, Ana V.

    2014-01-01

    While there has been extensive research developing gene-environment interaction (GEI) methods in case-control studies, little attention has been given to sparse and efficient modeling of GEI in longitudinal studies. In a two-way table for GEI with rows and columns as categorical variables, a conventional saturated interaction model involves estimation of a specific parameter for each cell, with constraints ensuring identifiability. The estimates are unbiased but are potentially inefficient because the number of parameters to be estimated can grow quickly with increasing categories of row/column factors. On the other hand, Tukey’s one degree of freedom (df) model for non-additivity treats the interaction term as a scaled product of row and column main effects. Due to the parsimonious form of interaction, the interaction estimate leads to enhanced efficiency and the corresponding test could lead to increased power. Unfortunately, Tukey’s model gives biased estimates and low power if the model is misspecified. When screening multiple GEIs where each genetic and environmental marker may exhibit a distinct interaction pattern, a robust estimator for interaction is important for GEI detection. We propose a shrinkage estimator for interaction effects that combines estimates from both Tukey’s and saturated interaction models and use the corresponding Wald test for testing interaction in a longitudinal setting. The proposed estimator is robust to misspecification of interaction structure. We illustrate the proposed methods using two longitudinal studies — the Normative Aging Study and the Multi-Ethnic Study of Atherosclerosis. PMID:25112650
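
    For orientation, the display below writes out the saturated and Tukey one-degree-of-freedom interaction models and the kind of weighted combination a shrinkage estimator uses; the notation is illustrative and does not reproduce the authors' longitudinal formulation.

```latex
% Saturated two-way interaction model versus Tukey's one-df form
\[
\text{saturated: } y_{ij} = \mu + \alpha_i + \beta_j + \gamma_{ij} + \varepsilon_{ij},
\qquad
\text{Tukey: } y_{ij} = \mu + \alpha_i + \beta_j + \lambda\,\alpha_i\beta_j + \varepsilon_{ij}.
\]
% A shrinkage estimator combines the two interaction estimates with a
% data-driven weight w (illustrative form):
\[
\hat{\gamma}^{\text{shrink}}_{ij} = w\,\hat{\lambda}\,\hat{\alpha}_i\hat{\beta}_j
  + (1-w)\,\hat{\gamma}^{\text{sat}}_{ij}, \qquad 0 \le w \le 1 .
\]
```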

  14. Variations in environmental tritium doses due to meteorological data averaging and uncertainties in pathway model parameters

    SciTech Connect

    Kock, A.

    1996-05-01

    The objectives of this research are: (1) to calculate and compare off site doses from atmospheric tritium releases at the Savannah River Site using monthly versus 5 year meteorological data and annual source terms, including additional seasonal and site specific parameters not included in present annual assessments; and (2) to calculate the range of the above dose estimates based on distributions in model parameters given by uncertainty estimates found in the literature. Consideration will be given to the sensitivity of parameters given in former studies.

  15. Unrealistic parameter estimates in inverse modelling: A problem or a benefit for model calibration?

    USGS Publications Warehouse

    Poeter, E.P.; Hill, M.C.

    1996-01-01

    Estimation of unrealistic parameter values by inverse modelling is useful for constructed model discrimination. This utility is demonstrated using the three-dimensional, groundwater flow inverse model MODFLOWP to estimate parameters in a simple synthetic model where the true conditions and character of the errors are completely known. When a poorly constructed model is used, unreasonable parameter values are obtained even when using error free observations and true initial parameter values. This apparent problem is actually a benefit because it differentiates accurately and inaccurately constructed models. The problems seem obvious for a synthetic problem in which the truth is known, but are obscure when working with field data. Situations in which unrealistic parameter estimates indicate constructed model problems are illustrated in applications of inverse modelling to three field sites and to complex synthetic test cases in which it is shown that prediction accuracy also suffers when constructed models are inaccurate.

  16. Dynamic imaging model and parameter optimization for a star tracker.

    PubMed

    Yan, Jinyun; Jiang, Jie; Zhang, Guangjun

    2016-03-21

    Under dynamic conditions, star spots move across the image plane of a star tracker and form a smeared star image. This smearing effect increases errors in star position estimation and degrades attitude accuracy. First, an analytical energy distribution model of a smeared star spot is established based on a line segment spread function because the dynamic imaging process of a star tracker is equivalent to the static imaging process of linear light sources. The proposed model, which has a clear physical meaning, explicitly reflects the key parameters of the imaging process, including incident flux, exposure time, velocity of a star spot in an image plane, and Gaussian radius. Furthermore, an analytical expression of the centroiding error of the smeared star spot is derived using the proposed model. An accurate and comprehensive evaluation of centroiding accuracy is obtained based on the expression. Moreover, analytical solutions of the optimal parameters are derived to achieve the best performance in centroid estimation. Finally, we perform numerical simulations and a night sky experiment to validate the correctness of the dynamic imaging model, the centroiding error expression, and the optimal parameters.
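
    The centroiding step that the record analyses can be illustrated with a simple intensity-weighted centroid over a pixel window containing a smeared spot; the Gaussian-segment spot below is a hypothetical stand-in for the line-segment spread function of the paper.

```python
import numpy as np

def weighted_centroid(window):
    """Intensity-weighted centroid of a pixel window containing a (possibly
    smeared) star spot; the centroiding error analysed above is the deviation
    of this estimate from the reference position on the spot trajectory."""
    ys, xs = np.indices(window.shape)
    total = window.sum()
    return (xs * window).sum() / total, (ys * window).sum() / total

# Hypothetical smeared spot: a Gaussian swept along a short motion segment
y, x = np.mgrid[0:15, 0:15]
spot = sum(np.exp(-(((x - (5 + d)) ** 2 + (y - 7) ** 2) / (2 * 1.5 ** 2)))
           for d in np.linspace(0, 4, 9))
print(weighted_centroid(spot))   # roughly the midpoint of the motion segment
```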

  17. Modeling crash spatial heterogeneity: random parameter versus geographically weighting.

    PubMed

    Xu, Pengpeng; Huang, Helai

    2015-02-01

    The widely adopted techniques for regional crash modeling include the negative binomial model (NB) and the Bayesian negative binomial model with conditional autoregressive prior (CAR). The outputs from both models consist of a set of fixed global parameter estimates. However, the impacts of predictor variables on crash counts might not be stationary over space. This study quantitatively investigated this spatial heterogeneity in regional safety modeling using two advanced approaches, i.e., the random parameter negative binomial model (RPNB) and the semi-parametric geographically weighted Poisson regression model (S-GWPR). Based on a 3-year data set from the county of Hillsborough, Florida, results revealed that (1) both RPNB and S-GWPR successfully capture the spatially varying relationships, but the two methods yield notably different sets of results; (2) the S-GWPR performs best, with the highest value of R_d² as well as the lowest mean absolute deviance and Akaike information criterion measures, whereas the RPNB is comparable to the CAR and in some cases provides less accurate predictions; (3) a moderately significant spatial correlation is found in the residuals of RPNB and NB, implying their inadequacy in accounting for the spatial correlation existing across adjacent zones. As crash data are typically collected with reference to a location dimension, it is desirable to first use the geographical component to explicitly explore the spatial aspects of the crash data (i.e., the spatial heterogeneity, or the spatially structured varying relationships), and then to address the unobserved heterogeneity with non-spatial or fuzzy techniques. The S-GWPR is shown to be more appropriate for regional crash modeling, as the method outperforms the global models in capturing the spatial heterogeneity occurring in the relationships being modeled and, compared with the non-spatial model, is capable of accounting for the spatial correlation in crash data.

  18. Modeling crash spatial heterogeneity: random parameter versus geographically weighting.

    PubMed

    Xu, Pengpeng; Huang, Helai

    2015-02-01

    The widely adopted techniques for regional crash modeling include the negative binomial model (NB) and the Bayesian negative binomial model with conditional autoregressive prior (CAR). The outputs from both models consist of a set of fixed global parameter estimates. However, the impacts of predictor variables on crash counts might not be stationary over space. This study quantitatively investigated this spatial heterogeneity in regional safety modeling using two advanced approaches, i.e., the random parameter negative binomial model (RPNB) and the semi-parametric geographically weighted Poisson regression model (S-GWPR). Based on a 3-year data set from the county of Hillsborough, Florida, results revealed that (1) both RPNB and S-GWPR successfully capture the spatially varying relationships, but the two methods yield notably different sets of results; (2) the S-GWPR performs best, with the highest value of R_d² as well as the lowest mean absolute deviance and Akaike information criterion measures, whereas the RPNB is comparable to the CAR and in some cases provides less accurate predictions; (3) a moderately significant spatial correlation is found in the residuals of RPNB and NB, implying their inadequacy in accounting for the spatial correlation existing across adjacent zones. As crash data are typically collected with reference to a location dimension, it is desirable to first use the geographical component to explicitly explore the spatial aspects of the crash data (i.e., the spatial heterogeneity, or the spatially structured varying relationships), and then to address the unobserved heterogeneity with non-spatial or fuzzy techniques. The S-GWPR is shown to be more appropriate for regional crash modeling, as the method outperforms the global models in capturing the spatial heterogeneity occurring in the relationships being modeled and, compared with the non-spatial model, is capable of accounting for the spatial correlation in crash data. PMID:25460087

  19. Important observations and parameters for a salt water intrusion model.

    PubMed

    Shoemaker, W Barclay

    2004-01-01

    Sensitivity analysis with a density-dependent ground water flow simulator can provide insight and understanding of salt water intrusion calibration problems far beyond what is possible through intuitive analysis alone. Five simple experimental simulations presented here demonstrate this point. Results show that dispersivity is a very important parameter for reproducing a steady-state distribution of hydraulic head, salinity, and flow in the transition zone between fresh water and salt water in a coastal aquifer system. When estimating dispersivity, the following conclusions can be drawn about the data types and locations considered. (1) The "toe" of the transition zone is the most effective location for hydraulic head and salinity observations. (2) Areas near the coastline where submarine ground water discharge occurs are the most effective locations for flow observations. (3) Salinity observations are more effective than hydraulic head observations. (4) The importance of flow observations aligned perpendicular to the shoreline varies dramatically depending on distance seaward from the shoreline. Extreme parameter correlation can prohibit unique estimation of permeability parameters such as hydraulic conductivity and flow parameters such as recharge in a density-dependent ground water flow model when using hydraulic head and salinity observations. Adding flow observations perpendicular to the shoreline in areas where ground water is exchanged with the ocean body can reduce the correlation, potentially resulting in unique estimates of these parameter values. Results are expected to be directly applicable to many complex situations, and have implications for model development whether or not formal optimization methods are used in model calibration. PMID:15584297

  20. Modeling association among demographic parameters in analysis of open population capture-recapture data

    USGS Publications Warehouse

    Link, W.A.; Barker, R.J.

    2005-01-01

    We present a hierarchical extension of the Cormack-Jolly-Seber (CJS) model for open population capture-recapture data. In addition to recaptures of marked animals, we model first captures of animals and losses on capture. The parameter set includes capture probabilities, survival rates, and birth rates. The survival rates and birth rates are treated as a random sample from a bivariate distribution; thus the model explicitly incorporates correlation in these demographic rates. A key feature of the model is that the likelihood function, which includes a CJS model factor, is expressed entirely in terms of identifiable parameters; losses on capture can be factored out of the model. Since the computational complexity of classical likelihood methods is prohibitive, we use Markov chain Monte Carlo in a Bayesian analysis. We describe an efficient candidate-generation scheme for Metropolis-Hastings sampling of CJS models and extensions. The procedure is illustrated using mark-recapture data for the moth Gonodontis bidentata.

  1. Dependency of parameter values of a crop model on the spatial scale of simulation

    NASA Astrophysics Data System (ADS)

    Iizumi, Toshichika; Tanaka, Yukiko; Sakurai, Gen; Ishigooka, Yasushi; Yokozawa, Masayuki

    2014-09-01

    Reliable regional-scale representation of crop growth and yields has been increasingly important in earth system modeling for the simulation of atmosphere-vegetation-soil interactions in managed ecosystems. While the parameter values in many crop models are location specific or cultivar specific, the validity of such values for regional simulation is in question. We present the scale dependency of likely parameter values that are related to the responses of growth rate and yield to temperature, using the paddy rice model applied to Japan as an example. For all regions, values of the two parameters that determine the degree of yield response to low temperature (the base temperature for calculating cooling degree days and the curvature factor of spikelet sterility caused by low temperature) appeared to change relative to the grid interval. Two additional parameters (the air temperature at which the developmental rate is half of the maximum rate at the optimum temperature and the value of developmental index at which point the crop becomes sensitive to the photoperiod) showed scale dependency in a limited region, whereas the remaining three parameters that determine the phenological characteristics of a rice cultivar and the technological level show no clear scale dependency. These results indicate the importance of using appropriate parameter values for the spatial scale at which a crop model operates. We recommend avoiding the use of location-specific or cultivar-specific parameter values for regional crop simulation, unless a rationale is presented suggesting these values are insensitive to spatial scale.

  2. Using Generalized Additive Models to Analyze Single-Case Designs

    ERIC Educational Resources Information Center

    Shadish, William; Sullivan, Kristynn

    2013-01-01

    Many analyses for single-case designs (SCDs)--including nearly all the effect size indicators-- currently assume no trend in the data. Regression and multilevel models allow for trend, but usually test only linear trend and have no principled way of knowing if higher order trends should be represented in the model. This paper shows how Generalized…

  3. Nano-Fe as feed additive improves the hematological and immunological parameters of fish, Labeo rohita H.

    NASA Astrophysics Data System (ADS)

    Behera, T.; Swain, P.; Rangacharulu, P. V.; Samanta, M.

    2014-08-01

    An experiment was conducted to compare the effects of iron oxide nanoparticles (T1) and ferrous sulfate (T2) on Indian major carp, Labeo rohita H. There were significant differences (P < 0.05) in the final weight of T1 and T2 compared with the control. Survival rates were not affected by the dietary treatments. Fish fed a basal diet (control) showed lower (P < 0.05) iron content in muscle compared to T1 and T2. Furthermore, the highest value (P < 0.05) of iron content was observed in T1. In addition, RBCs and hemoglobin levels were significantly higher in T1 as compared to other treated groups. Different innate immune parameters such as respiratory burst activity, bactericidal activity and myeloperoxidase activity were higher in the nano-Fe-treated diet (T1) as compared to the other iron source (T2) and control in 30 days post-feeding. Moreover, nano-Fe appeared to be more effective (P < 0.05) than ferrous sulfate in increasing muscle iron and hemoglobin contents. Dietary administration of nano-Fe did not cause any oxidative damage, but improved antioxidant enzymatic activities (SOD and GSH level) irrespective of different iron sources in the basal diet.

  4. Sonochemical degradation of the pharmaceutical fluoxetine: Effect of parameters, organic and inorganic additives and combination with a biological system.

    PubMed

    Serna-Galvis, Efraím A; Silva-Agredo, Javier; Giraldo-Aguirre, Ana L; Torres-Palma, Ricardo A

    2015-08-15

    Fluoxetine (FLX), one of the most widely used antidepressants in the world, is an emergent pollutant found in natural waters that causes disrupting effects on the endocrine systems of some aquatic species. This work explores the total elimination of FLX by sonochemical treatment coupled to a biological system. The biological process acting alone was shown to be unable to remove the pollutant, even under favourable conditions of pH and temperature. However, sonochemical treatment (600 kHz) was shown to be able to remove the pharmaceutical. Several parameters were evaluated for the ultrasound application: the applied power (20-60 W), dissolved gas (air, Ar and He), pH (3-11) and initial concentration of fluoxetine (2.9-162.0 μmol L⁻¹). Additionally, the presence of organic (1-hexanol and 2-propanol) and inorganic (Fe²⁺) compounds in the water matrix and the degradation of FLX in a natural mineral water were evaluated. The sonochemical treatment readily eliminates FLX, following Langmuir-type kinetics. After 360 min of ultrasonic irradiation, 15% mineralization was achieved. Analysis of the biodegradability provided evidence that the sonochemical process transforms the pollutant into biodegradable substances, which can then be mineralized in a subsequent biological treatment.

  5. Parameter discovery in stochastic biological models using simulated annealing and statistical model checking.

    PubMed

    Hussain, Faraz; Jha, Sumit K; Jha, Susmit; Langmead, Christopher J

    2014-01-01

    Stochastic models are increasingly used to study the behaviour of biochemical systems. While the structure of such models is often readily available from first principles, unknown quantitative features of the model are incorporated into the model as parameters. Algorithmic discovery of parameter values from experimentally observed facts remains a challenge for the computational systems biology community. We present a new parameter discovery algorithm that uses simulated annealing, sequential hypothesis testing, and statistical model checking to learn the parameters in a stochastic model. We apply our technique to a model of glucose and insulin metabolism used for in-silico validation of artificial pancreata and demonstrate its effectiveness by developing parallel CUDA-based implementation for parameter synthesis in this model. PMID:24989866
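
    A bare-bones simulated-annealing loop of the kind named in the record is sketched below; the quadratic cost function is a placeholder, whereas in the paper the cost would come from statistical model checking of the stochastic model (e.g., an estimated probability of violating the checked property).

```python
import numpy as np

def simulated_annealing(cost, x0, bounds, n_iter=2000, t0=1.0, seed=0):
    """Bare-bones simulated annealing over a box-bounded parameter space."""
    rng = np.random.default_rng(seed)
    x, c = np.array(x0, float), cost(x0)
    best_x, best_c = x.copy(), c
    lo, hi = bounds[:, 0], bounds[:, 1]
    for i in range(n_iter):
        temp = t0 * (1.0 - i / n_iter) + 1e-6                   # linear cooling schedule
        cand = np.clip(x + 0.1 * (hi - lo) * rng.standard_normal(x.size), lo, hi)
        cc = cost(cand)
        if cc < c or rng.uniform() < np.exp((c - cc) / temp):   # Metropolis-style acceptance
            x, c = cand, cc
            if c < best_c:
                best_x, best_c = x.copy(), c
    return best_x, best_c

# Hypothetical cost surface with a single minimum at (0.3, 0.7)
cost = lambda p: (p[0] - 0.3) ** 2 + (p[1] - 0.7) ** 2
print(simulated_annealing(cost, [0.5, 0.5], np.array([[0.0, 1.0], [0.0, 1.0]])))
```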

  6. Additive Manufacturing of Anatomical Models from Computed Tomography Scan Data.

    PubMed

    Gür, Y

    2014-12-01

    The purpose of the study presented here was to investigate the manufacturability of human anatomical models from Computed Tomography (CT) scan data via a 3D desktop printer which uses fused deposition modelling (FDM) technology. First, Digital Imaging and Communications in Medicine (DICOM) CT scan data were converted to 3D Standard Triangle Language (STL) format by using the InVesalius digital imaging program. Once this STL file is obtained, a 3D physical version of the anatomical model can be fabricated by a desktop 3D FDM printer. As a case study, a patient's skull CT scan data was considered, and a tangible version of the skull was manufactured by a 3D FDM desktop printer. During the 3D printing process, the skull was built using acrylonitrile-butadiene-styrene (ABS) co-polymer plastic. The printed model showed that the 3D FDM printing technology is able to fabricate anatomical models with high accuracy. As a result, the skull model can be used for preoperative surgical planning, medical training activities, and implant design and simulation, demonstrating the potential of FDM technology in the medical field. It will also improve communication between medical staff and patients. The current result indicates that a 3D desktop printer which uses FDM technology can be used to obtain accurate anatomical models.

  7. Additive Manufacturing of Anatomical Models from Computed Tomography Scan Data.

    PubMed

    Gür, Y

    2014-12-01

    The purpose of the study presented here was to investigate the manufacturability of human anatomical models from Computed Tomography (CT) scan data via a 3D desktop printer which uses fused deposition modelling (FDM) technology. First, Digital Imaging and Communications in Medicine (DICOM) CT scan data were converted to 3D Standard Triangle Language (STL) format by using the InVesalius digital imaging program. Once this STL file is obtained, a 3D physical version of the anatomical model can be fabricated by a desktop 3D FDM printer. As a case study, a patient's skull CT scan data was considered, and a tangible version of the skull was manufactured by a 3D FDM desktop printer. During the 3D printing process, the skull was built using acrylonitrile-butadiene-styrene (ABS) co-polymer plastic. The printed model showed that the 3D FDM printing technology is able to fabricate anatomical models with high accuracy. As a result, the skull model can be used for preoperative surgical planning, medical training activities, and implant design and simulation, demonstrating the potential of FDM technology in the medical field. It will also improve communication between medical staff and patients. The current result indicates that a 3D desktop printer which uses FDM technology can be used to obtain accurate anatomical models. PMID:26336695

  8. Automated parameter estimation for biological models using Bayesian statistical model checking

    PubMed Central

    2015-01-01

    Background Probabilistic models have gained widespread acceptance in the systems biology community as a useful way to represent complex biological systems. Such models are developed using existing knowledge of the structure and dynamics of the system, experimental observations, and inferences drawn from statistical analysis of empirical data. A key bottleneck in building such models is that some system variables cannot be measured experimentally. These variables are incorporated into the model as numerical parameters. Determining values of these parameters that justify existing experiments and provide reliable predictions when model simulations are performed is a key research problem. Domain experts usually estimate the values of these parameters by fitting the model to experimental data. Model fitting is usually expressed as an optimization problem that requires minimizing a cost-function which measures some notion of distance between the model and the data. This optimization problem is often solved by combining local and global search methods that tend to perform well for the specific application domain. When some prior information about parameters is available, methods such as Bayesian inference are commonly used for parameter learning. Choosing the appropriate parameter search technique requires detailed domain knowledge and insight into the underlying system. Results Using an agent-based model of the dynamics of acute inflammation, we demonstrate a novel parameter estimation algorithm by discovering the amount and schedule of doses of bacterial lipopolysaccharide that guarantee a set of observed clinical outcomes with high probability. We synthesized values of twenty-eight unknown parameters such that the parameterized model instantiated with these parameter values satisfies four specifications describing the dynamic behavior of the model. Conclusions We have developed a new algorithmic technique for discovering parameters in complex stochastic models of

  9. A generalized additive model for the spatial distribution of snowpack in the Spanish Pyrenees

    NASA Astrophysics Data System (ADS)

    López-Moreno, J. I.; Nogués-Bravo, D.

    2005-10-01

    A generalized additive model (GAM) was used to model the spatial distribution of snow depth in the central Spanish Pyrenees. Statistically significant non-linear relationships were found between distinct location and topographical variables and the average depth of the April snowpack at 76 snow poles from 1985 to 2000. The joint effect of the predictor variables explained more than 73% of the variance of the dependent variable. The performance of the model was assessed by applying a number of quantitative approaches to the residuals from a cross-validation test. The relatively low estimated errors and the possibility of understanding the processes that control snow accumulation, through the response curves of each independent variable, indicate that GAMs may be a useful tool for interpolating local snow depth or other climate parameters.
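
    A compact way to reproduce the flavour of such a GAM fit in Python is sketched below, assuming the third-party pyGAM package is available; the predictors, data and smooth terms are hypothetical and do not correspond to the study's snow-pole data set.

```python
import numpy as np
from pygam import LinearGAM, s   # third-party package, assumed installed

rng = np.random.default_rng(0)

# Hypothetical predictors for 76 snow poles: elevation (m), slope (deg), easting (km)
X = np.column_stack([rng.uniform(1500, 3000, 76),
                     rng.uniform(0, 40, 76),
                     rng.uniform(0, 50, 76)])
snow_depth = 0.002 * X[:, 0] - 0.01 * X[:, 1] + rng.normal(0, 0.3, 76)

# One smooth (spline) term per predictor; the joint effect is additive,
# and the fitted partial response curves can be inspected afterwards.
gam = LinearGAM(s(0) + s(1) + s(2)).fit(X, snow_depth)
gam.summary()
```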

  10. Addition of Diffusion Model to MELCOR and Comparison with Data

    SciTech Connect

    Brad Merrill; Richard Moore; Chang Oh

    2004-06-01

    A chemical diffusion model was incorporated into the thermal-hydraulics package of the MELCOR Severe Accident code (Reference 1) for analyzing air ingress events for a very high temperature gas-cooled reactor.

  11. Modelling spatial-temporal and coordinative parameters in swimming.

    PubMed

    Seifert, L; Chollet, D

    2009-07-01

    This study modelled the changes in spatial-temporal and coordinative parameters through race paces in the four swimming strokes. The arm and leg phases in simultaneous strokes (butterfly and breaststroke) and the inter-arm phases in alternating strokes (crawl and backstroke) were identified by video analysis to calculate the time gaps between propulsive phases. The relationships among velocity, stroke rate, stroke length and coordination were modelled by polynomial regression. Twelve elite male swimmers swam at four race paces. Quadratic regression modelled the changes in spatial-temporal and coordinative parameters with velocity increases for all four strokes. First, the quadratic regression between coordination and velocity showed changes common to all four strokes. Notably, the time gaps between the key points defining the beginning and end of the stroke phases decreased with increases in velocity, which led to decreases in glide times and increases in the continuity between propulsive phases. Conjointly, the quadratic regression among stroke rate, stroke length and velocity was similar to the changes in coordination, suggesting that these parameters may influence coordination. The main practical application for coaches and scientists is that ineffective time gaps can be distinguished from those that simply reflect an individual swimmer's profile by monitoring the glide times within a stroke cycle. In the case of ineffective time gaps, targeted training could improve the swimmer's management of glide time. PMID:18547862
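
    The quadratic-regression modelling referred to above amounts to a second-order polynomial fit per parameter; a minimal sketch with hypothetical velocity and coordination values follows.

```python
import numpy as np

# Hypothetical per-swimmer data: velocity (m/s) at four race paces and an
# inter-arm coordination index (%)
velocity = np.array([1.35, 1.45, 1.55, 1.70])
coordination = np.array([-8.0, -5.5, -2.0, 3.5])

# Quadratic (second-order polynomial) fit, as used for each stroke parameter
coeffs = np.polyfit(velocity, coordination, deg=2)
model = np.poly1d(coeffs)
print(coeffs)          # a, b, c of a*v**2 + b*v + c
print(model(1.60))     # predicted coordination at an intermediate pace
```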

  12. Modeling and Extraction of Parasitic Thermal Conductance and Intrinsic Model Parameters of Thermoelectric Modules

    NASA Astrophysics Data System (ADS)

    Sim, Minseob; Park, Hyunbin; Kim, Shiho

    2015-11-01

    We have presented both a model and a method for extracting the parasitic thermal conductance as well as the intrinsic device parameters of a thermoelectric module based on information readily available in vendor datasheets. An equivalent circuit model that is compatible with circuit simulators is derived, followed by a methodology for extracting both intrinsic and parasitic model parameters. For the first time, the effective thermal resistance of the ceramic and copper interconnect layers of the thermoelectric module is extracted using only parameters listed in vendor datasheets. Under experimental conditions, including varying electric current, the parameters extracted from the model accurately reproduce the performance of commercial thermoelectric modules.
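
    For context, the lumped relations commonly used in such equivalent-circuit thermoelectric models are sketched below (module Seebeck coefficient S, electrical resistance R, thermal conductance K); parasitic ceramic and copper layers would enter as additional thermal resistances in series. The numerical values are hypothetical, and this is not the extraction procedure of the record.

```python
# Standard lumped thermoelectric-module relations (forward model only).
def tem_operating_point(S, R, K, I, T_h, T_c):
    Q_c = S * I * T_c - 0.5 * I**2 * R - K * (T_h - T_c)   # heat pumped at cold side
    V = S * (T_h - T_c) + I * R                            # terminal voltage
    Q_h = Q_c + V * I                                      # heat rejected at hot side
    return Q_c, V, Q_h

# Hypothetical module parameters and operating point
print(tem_operating_point(S=0.05, R=2.0, K=0.5, I=3.0, T_h=300.0, T_c=280.0))
```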

  13. Nonlocal Order Parameters for the 1D Hubbard Model

    NASA Astrophysics Data System (ADS)

    Montorsi, Arianna; Roncaglia, Marco

    2012-12-01

    We characterize the Mott-insulator and Luther-Emery phases of the 1D Hubbard model through correlators that measure the parity of spin and charge strings along the chain. These nonlocal quantities order in the corresponding gapped phases and vanish at the critical point U_c = 0, thus configuring as hidden order parameters. The Mott insulator consists of bound doublon-holon pairs, which in the Luther-Emery phase turn into electron pairs with opposite spins, both unbinding at U_c. The behavior of the parity correlators is captured by an effective free spinless fermion model.

  14. Water quality modelling for ephemeral rivers: Model development and parameter assessment

    NASA Astrophysics Data System (ADS)

    Mannina, Giorgio; Viviani, Gaspare

    2010-11-01

    River water quality models can be valuable tools for the assessment and management of receiving water body quality. However, such water quality models require accurate model calibration in order to specify model parameters. Reliable model calibration requires an extensive array of water quality data that are generally rare and resource-intensive, both economically and in terms of human resources, to collect. In the case of small rivers, such data are scarce because these rivers are generally considered too insignificant, from a practical and economic viewpoint, to justify the investment of such considerable time and resources. As a consequence, the literature contains very few studies on water quality modelling for small rivers, and such studies as have been published are fairly limited in scope. In this paper, a simplified river water quality model is presented. The model is an extension of the Streeter-Phelps model and takes into account the physico-chemical and biological processes most relevant to modelling the quality of receiving water bodies (i.e., degradation of dissolved carbonaceous substances, ammonium oxidation, algal uptake and denitrification, and the dissolved oxygen balance, including depletion by degradation processes and supply by physical reaeration and photosynthetic production). The model has been applied to an Italian case study, the Oreto river (IT), which has been the object of an Italian research project aimed at assessing the river's water quality. For this reason, several monitoring campaigns had previously been carried out to collect water quantity and quality data on this river system. In particular, twelve river cross sections were monitored, and both flow and water quality data were collected for each cross section. The results of the calibrated model show satisfactory agreement with the measured data, and the results reveal important differences between the parameters used to model small rivers as compared to
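
    The Streeter-Phelps core that the record's model extends can be written as two coupled ODEs for carbonaceous BOD and oxygen deficit; a minimal sketch with hypothetical rate constants follows (the full model adds ammonium oxidation, algal uptake, denitrification and photosynthetic terms).

```python
import numpy as np
from scipy.integrate import solve_ivp

# Classic Streeter-Phelps core: L = carbonaceous BOD, D = dissolved-oxygen deficit.
k_d, k_a = 0.35, 0.8   # day^-1, hypothetical deoxygenation and reaeration rates
L0, D0 = 10.0, 1.0     # mg/L, initial BOD and oxygen deficit

def rhs(t, y):
    L, D = y
    return [-k_d * L,             # first-order decay of carbonaceous BOD
            k_d * L - k_a * D]    # deficit grows with decay, recovers by reaeration

sol = solve_ivp(rhs, (0.0, 10.0), [L0, D0], t_eval=np.linspace(0, 10, 11))
print(sol.y[1])                   # oxygen-deficit "sag" along travel time (days)
```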

  15. Modeling shortest path selection of the ant Linepithema humile using psychophysical theory and realistic parameter values.

    PubMed

    von Thienen, Wolfhard; Metzler, Dirk; Witte, Volker

    2015-05-01

    The emergence of self-organizing behavior in ants has been modeled in various theoretical approaches in the past decades. One model explains experimental observations in which Argentine ants (Linepithema humile) selected the shorter of two alternative paths from their nest to a food source (shortest path experiments). This model serves as an important example for the emergence of collective behavior and self-organization in biological systems. In addition, it inspired the development of computer algorithms for optimization problems called ant colony optimization (ACO). In the model, a choice function describing how ants react to different pheromone concentrations is fundamental. However, the parameters of the choice function were not deduced experimentally but freely adapted so that the model fitted the observations of the shortest path experiments. Thus, important knowledge was lacking about crucial model assumptions. A recent study on the Argentine ant provided this information by measuring the response of the ants to varying pheromone concentrations. In said study, the above mentioned choice function was fitted to the experimental data and its parameters were deduced. In addition, a psychometric function was fitted to the data and its parameters deduced. Based on these findings, it is possible to test the shortest path model by applying realistic parameter values. Here we present the results of such tests using Monte Carlo simulations of shortest path experiments with Argentine ants. We compare the choice function and the psychometric function, both with parameter values deduced from the above-mentioned experiments. Our results show that by applying the psychometric function, the shortest path experiments can be explained satisfactorily by the model. The study represents the first example of how psychophysical theory can be used to understand and model collective foraging behavior of ants based on trail pheromones. These findings may be important for other
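
    A Deneubourg-type choice function of the kind referred to above, embedded in a crude Monte Carlo of a two-branch experiment, is sketched below; the parameter values, the doubled reinforcement of the short branch and the omission of pheromone evaporation are simplifying assumptions for illustration only.

```python
import numpy as np

def choice_probability(c_a, c_b, k=6.0, n=2.0):
    """Deneubourg-type choice function: probability of choosing branch A given
    pheromone concentrations c_a and c_b. The values of k and n here are
    illustrative; the study above deduces such parameters (and a psychometric
    alternative) from response experiments."""
    return (k + c_a) ** n / ((k + c_a) ** n + (k + c_b) ** n)

rng = np.random.default_rng(0)
c_short, c_long = 0.0, 0.0
deposit, ants = 1.0, 1000
for _ in range(ants):
    if rng.uniform() < choice_probability(c_short, c_long):
        c_short += 2 * deposit   # crude stand-in: shorter branch reinforced faster
    else:
        c_long += deposit
print(c_short / (c_short + c_long))   # fraction of pheromone on the short branch
```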

  16. An Improved Swarm Optimization for Parameter Estimation and Biological Model Selection

    PubMed Central

    Abdullah, Afnizanfaizal; Deris, Safaai; Mohamad, Mohd Saberi; Anwar, Sohail

    2013-01-01

    One of the key aspects of computational systems biology is the investigation on the dynamic biological processes within cells. Computational models are often required to elucidate the mechanisms and principles driving the processes because of the nonlinearity and complexity. The models usually incorporate a set of parameters that signify the physical properties of the actual biological systems. In most cases, these parameters are estimated by fitting the model outputs with the corresponding experimental data. However, this is a challenging task because the available experimental data are frequently noisy and incomplete. In this paper, a new hybrid optimization method is proposed to estimate these parameters from the noisy and incomplete experimental data. The proposed method, called Swarm-based Chemical Reaction Optimization, integrates the evolutionary searching strategy employed by the Chemical Reaction Optimization, into the neighbouring searching strategy of the Firefly Algorithm method. The effectiveness of the method was evaluated using a simulated nonlinear model and two biological models: synthetic transcriptional oscillators, and extracellular protease production models. The results showed that the accuracy and computational speed of the proposed method were better than the existing Differential Evolution, Firefly Algorithm and Chemical Reaction Optimization methods. The reliability of the estimated parameters was statistically validated, which suggests that the model outputs produced by these parameters were valid even when noisy and incomplete experimental data were used. Additionally, Akaike Information Criterion was employed to evaluate the model selection, which highlighted the capability of the proposed method in choosing a plausible model based on the experimental data. In conclusion, this paper presents the effectiveness of the proposed method for parameter estimation and model selection problems using noisy and incomplete experimental data. This

  17. Parameter and uncertainty estimation for mechanistic, spatially explicit epidemiological models

    NASA Astrophysics Data System (ADS)

    Finger, Flavio; Schaefli, Bettina; Bertuzzo, Enrico; Mari, Lorenzo; Rinaldo, Andrea

    2014-05-01

    Epidemiological models can be a crucially important tool for decision-making during disease outbreaks. The range of possible applications spans from real-time forecasting and allocation of health-care resources to testing alternative intervention mechanisms such as vaccines, antibiotics or the improvement of sanitary conditions. Our spatially explicit, mechanistic models for cholera epidemics have been successfully applied to several epidemics, including the one that struck Haiti in late 2010 and is still ongoing. Calibration and parameter estimation of such models represent a major challenge because of properties unusual in traditional geoscientific domains such as hydrology. Firstly, the epidemiological data available might be subject to high uncertainties due to error-prone diagnosis as well as manual (and possibly incomplete) data collection. Secondly, long-term time series of epidemiological data are often unavailable. Finally, the spatially explicit character of the models requires the comparison of several time series of model outputs with their real-world counterparts, which calls for an appropriate weighting scheme. It follows that the usual assumption of a homoscedastic Gaussian error distribution, used in combination with classical calibration techniques based on Markov chain Monte Carlo algorithms, is likely to be violated, whereas the construction of an appropriate formal likelihood function seems close to impossible. Alternative calibration methods, which allow for accurate estimation of total model uncertainty, particularly regarding the envisaged use of the models for decision-making, are thus needed. Here we present the most recent developments regarding methods for parameter and uncertainty estimation to be used with our mechanistic, spatially explicit models for cholera epidemics, based on informal measures of goodness of fit.

  18. Model for Assembly Line Re-Balancing Considering Additional Capacity and Outsourcing to Face Demand Fluctuations

    NASA Astrophysics Data System (ADS)

    Samadhi, TMAA; Sumihartati, Atin

    2016-02-01

    The most critical stage in a garment industry is the sewing process, because it generally consists of a number of operations and a large number of sewing machines for each operation. Therefore, it requires a balancing method that can assign tasks to workstations with balanced workloads. Many studies on assembly line balancing assume a new assembly line, but in reality, due to demand fluctuations and demand increases, re-balancing is needed. To cope with such fluctuating demand changes, additional capacity can be provided by investing in spare sewing machines and by paying for sewing services through outsourcing. This study develops an assembly line balancing (ALB) model for an existing line to cope with fluctuating demand. Capacity redesign is decided upon if the fluctuating demand exceeds the available capacity, through a combination of investment in new machines and outsourcing, while minimizing the cost of idle capacity in the future. The objective of the model is to minimize the total cost of the assembly line, which consists of operating costs, machine costs, added-capacity costs, losses due to idle capacity and outsourcing costs. The model developed is based on an integer programming formulation. The model is tested on a set of one year of demand data with an existing number of sewing machines of 41 units. The result shows that an additional maximum capacity of up to 76 machines is required when there is an increase of 60% over the average demand, for equal cost parameters.

  19. Accelerated gravitational wave parameter estimation with reduced order modeling.

    PubMed

    Canizares, Priscilla; Field, Scott E; Gair, Jonathan; Raymond, Vivien; Smith, Rory; Tiglio, Manuel

    2015-02-20

    Inferring the astrophysical parameters of coalescing compact binaries is a key science goal of the upcoming advanced LIGO-Virgo gravitational-wave detector network and, more generally, gravitational-wave astronomy. However, current approaches to parameter estimation for these detectors require computationally expensive algorithms. Therefore, there is a pressing need for new, fast, and accurate Bayesian inference techniques. In this Letter, we demonstrate that a reduced order modeling approach enables rapid parameter estimation to be performed. By implementing a reduced order quadrature scheme within the LIGO Algorithm Library, we show that Bayesian inference on the 9-dimensional parameter space of nonspinning binary neutron star inspirals can be sped up by a factor of ∼30 for the early advanced detectors' configurations (with sensitivities down to around 40 Hz) and ∼70 for sensitivities down to around 20 Hz. This speedup will increase to about 150 as the detectors improve their low-frequency limit to 10 Hz, reducing to hours analyses which could otherwise take months to complete. Although these results focus on interferometric gravitational wave detectors, the techniques are broadly applicable to any experiment where fast Bayesian analysis is desirable. PMID:25763948

  20. Analysis and Modeling of soil hydrology under different soil additives in artificial runoff plots

    NASA Astrophysics Data System (ADS)

    Ruidisch, M.; Arnhold, S.; Kettering, J.; Huwe, B.; Kuzyakov, Y.; Ok, Y.; Tenhunen, J. D.

    2009-12-01

    The impact of monsoon events during June and July in the Korean project region, the Haean Basin, which is located in the northeastern part of South Korea, plays a key role in erosion, leaching and groundwater pollution risk by agrochemicals. Therefore, the project investigates the main hydrological processes in agricultural soils under field and laboratory conditions on different scales (plot, hillslope and catchment). Soil hydrological parameters were analysed depending on different soil additives, which are known to prevent soil erosion and nutrient loss as well as to increase water infiltration, aggregate stability and soil fertility. Hence, synthetic water-soluble polyacrylamide (PAM), biochar (black carbon mixed with organic fertilizer), and both PAM and biochar together were applied in runoff plots at three agricultural field sites. Additionally, a control subplot was set up without any additives. The field sites were selected in areas with similar hillslope gradients and with emphasis on the dominant land management form of dryland farming in Haean, which is characterised by row planting and row covering by foil. Hydrological parameters such as saturated hydraulic conductivity, matrix potential and water content were analysed by infiltration experiments, continuous tensiometer measurements, time domain reflectometry as well as pressure plates to identify characteristic water retention curves for each horizon. Weather data were recorded by three weather stations next to the runoff plots. The measured data also provide the input data for modeling water transport in the unsaturated zone of the runoff plots with HYDRUS 1D/2D/3D and SWAT (Soil & Water Assessment Tool).

  1. Computational approaches to parameter estimation and model selection in immunology

    NASA Astrophysics Data System (ADS)

    Baker, C. T. H.; Bocharov, G. A.; Ford, J. M.; Lumb, P. M.; Norton, S. J.; Paul, C. A. H.; Junt, T.; Krebs, P.; Ludewig, B.

    2005-12-01

    One of the significant challenges in biomathematics (and other areas of science) is to formulate meaningful mathematical models. Our problem is to decide on a parametrized model which is, in some sense, most likely to represent the information in a set of observed data. In this paper, we illustrate the computational implementation of an information-theoretic approach (associated with a maximum likelihood treatment) to modelling in immunology. The approach is illustrated by modelling LCMV infection using a family of models based on systems of ordinary differential and delay differential equations. The models (which use parameters that have a scientific interpretation) are chosen to fit data arising from experimental studies of virus-cytotoxic T lymphocyte kinetics; the parametrized models that result are arranged in a hierarchy by the computation of Akaike indices. The practical illustration is used to convey more general insight. Because the mathematical equations that comprise the models are solved numerically, the accuracy of the computation has a bearing on the outcome, and we address this and other practical details in our discussion.
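
    Ranking a model hierarchy by Akaike indices reduces to computing AIC = 2k - 2 log L for each fitted model and comparing the differences; a small sketch with hypothetical maximized log-likelihoods follows.

```python
import numpy as np

def aic(log_likelihood, n_params):
    """Akaike information criterion; lower values indicate a better trade-off
    between goodness of fit and model complexity."""
    return 2.0 * n_params - 2.0 * log_likelihood

# Hypothetical maximized log-likelihoods for a hierarchy of ODE/DDE models
candidates = {"model_A (4 params)": (-120.3, 4),
              "model_B (6 params)": (-117.9, 6),
              "model_C (9 params)": (-117.1, 9)}

scores = {name: aic(ll, k) for name, (ll, k) in candidates.items()}
best = min(scores, key=scores.get)
for name, score in sorted(scores.items(), key=lambda kv: kv[1]):
    print(f"{name}: AIC = {score:.1f}, dAIC = {score - scores[best]:.1f}")
```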

  2. How many parameters does a quark mass matrix model need

    SciTech Connect

    Koide, Y. )

    1990-11-01

    An investigation independent of matrix form is made of how many parameters, which characterize the difference between up- and down-quark mass matrices, are, at least, required from the present data on quark masses and mixings. From a general study of the model with hierarchical three-step mass generations described by the three parameters α_q, β_q, and γ_q (|α_q| ≫ |β_q| ≫ |γ_q|; q = u, d), it is pointed out that the model with β_u/β_d = γ_u/γ_d (i.e., with two independent parameters α_q and β_q) is ruled out.

  3. Parameter estimation and uncertainty quantification in a biogeochemical model using optimal experimental design methods

    NASA Astrophysics Data System (ADS)

    Reimer, Joscha; Piwonski, Jaroslaw; Slawig, Thomas

    2016-04-01

    The statistical significance of any model-data comparison strongly depends on the quality of the data used and on the criterion used to measure the model-to-data misfit. The statistical properties (such as mean values, variances and covariances) of the data should be taken into account by choosing a criterion such as ordinary, weighted or generalized least squares. Moreover, the criterion can be restricted to regions or model quantities which are of special interest. This choice influences the quality of the model output (also for quantities that are not measured) and the results of a parameter estimation or optimization process. We have estimated the parameters of a three-dimensional and time-dependent marine biogeochemical model describing the phosphorus cycle in the ocean. For this purpose, we have developed a statistical model for measurements of phosphate and dissolved organic phosphorus. This statistical model includes variances and correlations varying with the time and location of the measurements. We compared the obtained estimates of model output and parameters for different criteria. Another question is whether (and which) further measurements would increase the model's quality at all. Using experimental design criteria, the information content of measurements can be quantified. This may refer to the uncertainty in unknown model parameters as well as the uncertainty regarding which model is closer to reality. By (another) optimization, optimal measurement properties such as locations, time instants and quantities to be measured can be identified. We have optimized such properties for additional measurements for the parameter estimation of the marine biogeochemical model. For this purpose, we have quantified the uncertainty in the optimal model parameters and the model output itself regarding the uncertainty in the measurement data using the (Fisher) information matrix. Furthermore, we have calculated the uncertainty reduction by additional measurements depending on time
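
    As a generic illustration of the Fisher-information step mentioned above (a stand-in with random sensitivities, not the authors' biogeochemical model), the information matrix of a candidate measurement set yields an approximate parameter covariance and a D-optimality score for comparing designs.

    ```python
    # Hedged sketch: Fisher-information-based parameter uncertainty and a D-optimality
    # score for a hypothetical set of candidate measurements.
    import numpy as np

    def fisher_information(J, R):
        """J: (n_obs, n_params) sensitivity matrix; R: (n_obs, n_obs) measurement covariance."""
        return J.T @ np.linalg.inv(R) @ J

    rng = np.random.default_rng(0)
    J = rng.normal(size=(30, 4))             # sensitivities of 30 candidate measurements to 4 parameters
    R = np.diag(rng.uniform(0.5, 2.0, 30))   # measurement error variances

    F = fisher_information(J, R)
    param_cov = np.linalg.inv(F)             # approximate parameter covariance (Cramer-Rao bound)
    d_optimality = np.linalg.slogdet(F)[1]   # larger log-determinant -> more informative design
    print(np.sqrt(np.diag(param_cov)).round(3), round(d_optimality, 2))
    ```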

  4. Parameter and Process Significance in Mechanistic Modeling of Cellulose Hydrolysis

    NASA Astrophysics Data System (ADS)

    Rotter, B.; Barry, A.; Gerhard, J.; Small, J.; Tahar, B.

    2005-12-01

    The rate of cellulose hydrolysis, and of associated microbial processes, is important in determining the stability of landfills and their potential impact on the environment, as well as associated time scales. To permit further exploration in this field, a process-based model of cellulose hydrolysis was developed. The model, which is relevant to both landfill and anaerobic digesters, includes a novel approach to biomass transfer between a cellulose-bound biofilm and biomass in the surrounding liquid. Model results highlight the significance of the bacterial colonization of cellulose particles by attachment through contact in solution. Simulations revealed that enhanced colonization, and therefore cellulose degradation, was associated with reduced cellulose particle size, higher biomass populations in solution, and increased cellulose-binding ability of the biomass. A sensitivity analysis of the system parameters revealed different sensitivities to model parameters for a typical landfill scenario versus that for an anaerobic digester. The results indicate that relative surface area of cellulose and proximity of hydrolyzing bacteria are key factors determining the cellulose degradation rate.

  5. A method of estimating optimal catchment model parameters

    NASA Astrophysics Data System (ADS)

    Ibrahim, Yaacob; Liong, Shie-Yui

    1993-09-01

    A review of a calibration method developed earlier (Ibrahim and Liong, 1992) is presented. The method generates optimal values for single events. It entails randomizing the calibration parameters over bounds such that a system response under consideration is bounded. Within the bounds, which are narrow and generated automatically, explicit response surface representation of the response is obtained using experimental design techniques and regression analysis. The optimal values are obtained by searching on the response surface for a point at which the predicted response is equal to the measured response and the value of the joint probability density function at that point in a transformed space is the highest. The method is demonstrated on a catchment in Singapore. The issue of global optimal values is addressed by applying the method on wider bounds. The results indicate that the optimal values arising from the narrow set of bounds are, indeed, global. Improvements which are designed to achieve comparably accurate estimates but with less expense are introduced. A linear response surface model is used. Two approximations of the model are studied. The first is to fit the model using data points generated from simple Monte Carlo simulation; the second is to approximate the model by a Taylor series expansion. Very good results are obtained from both approximations. Two methods of obtaining a single estimate from the individual event's estimates of the parameters are presented. The simulated and measured hydrographs of four verification storms using these estimates compare quite well.

  6. Estimating recharge rates with analytic element models and parameter estimation

    USGS Publications Warehouse

    Dripps, W.R.; Hunt, R.J.; Anderson, M.P.

    2006-01-01

    Quantifying the spatial and temporal distribution of recharge is usually a prerequisite for effective ground water flow modeling. In this study, an analytic element (AE) code (GFLOW) was used with a nonlinear parameter estimation code (UCODE) to quantify the spatial and temporal distribution of recharge using measured base flows as calibration targets. The ease and flexibility of AE model construction and evaluation make this approach well suited for recharge estimation. An AE flow model of an undeveloped watershed in northern Wisconsin was optimized to match median annual base flows at four stream gages for 1996 to 2000 to demonstrate the approach. Initial optimizations that assumed a constant distributed recharge rate provided good matches (within 5%) to most of the annual base flow estimates, but discrepancies of >12% at certain gages suggested that a single value of recharge for the entire watershed is inappropriate. Subsequent optimizations that allowed for spatially distributed recharge zones based on the distribution of vegetation types improved the fit and confirmed that vegetation can influence spatial recharge variability in this watershed. Temporally, the annual recharge values varied >2.5-fold between 1996 and 2000 during which there was an observed 1.7-fold difference in annual precipitation, underscoring the influence of nonclimatic factors on interannual recharge variability for regional flow modeling. The final recharge values compared favorably with more labor-intensive field measurements of recharge and results from studies, supporting the utility of using linked AE-parameter estimation codes for recharge estimation. Copyright © 2005 The Author(s).

  7. Optimal vibration control of curved beams using distributed parameter models

    NASA Astrophysics Data System (ADS)

    Liu, Fushou; Jin, Dongping; Wen, Hao

    2016-12-01

    The design of a linear quadratic optimal controller using the spectral factorization method is studied for vibration suppression of curved beam structures modeled as distributed parameter models. The equations of motion for active control of the in-plane vibration of a curved beam are developed first, considering its shear deformation and rotary inertia, and then the state space model of the curved beam is established directly using the partial differential equations of motion. The functional gains for the distributed parameter model of the curved beam are calculated by extending the spectral factorization method. Moreover, the response of the closed-loop control system is derived explicitly in the frequency domain. Finally, the suppression of the vibration at the free end of a cantilevered curved beam by a point control moment is studied through numerical case studies, in which the benefit of the presented method is shown by comparison with a constant-gain velocity feedback control law, and the performance of the presented method in avoiding control spillover is demonstrated.

  8. The addition of algebraic turbulence modeling to program LAURA

    NASA Astrophysics Data System (ADS)

    Cheatwood, F. Mcneil; Thompson, R. A.

    1993-04-01

    The Langley Aerothermodynamic Upwind Relaxation Algorithm (LAURA) is modified to allow the calculation of turbulent flows. This is accomplished using the Cebeci-Smith and Baldwin-Lomax eddy-viscosity models in conjunction with the thin-layer Navier-Stokes options of the program. Turbulent calculations can be performed for both perfect-gas and equilibrium flows. However, a requirement of the models is that the flow be attached. It is seen that for slender bodies, adequate resolution of the boundary-layer gradients may require more cells in the normal direction than for a laminar solution, even when grid stretching is employed. Results for axisymmetric and three-dimensional flows are presented. Comparison with experimental data and other numerical results reveals generally good agreement, except in the regions of detached flow.

  9. Effect of the addition of conventional additives and whey proteins concentrates on technological parameters, physicochemical properties, microstructure and sensory attributes of sous vide cooked beef muscles.

    PubMed

    Szerman, N; Gonzalez, C B; Sancho, A M; Grigioni, G; Carduza, F; Vaudagna, S R

    2012-03-01

    Beef muscles submitted to four enhancement treatments (1.88% whey protein concentrate (WPC)+1.25% sodium chloride (NaCl); 1.88% modified whey protein concentrate (MWPC)+1.25%NaCl; 0.25% sodium tripolyphosphate (STPP)+1.25%NaCl; 1.25%NaCl) and a control treatment (non-injected muscles) were sous vide cooked. Muscles with STPP+NaCl presented a significantly higher total yield (106.5%) in comparison to those with WPC/MWPC+NaCl (94.7% and 92.9%, respectively), NaCl alone (84.8%) or controls (72.1%). Muscles with STPP+NaCl presented significantly lower shear force values than control ones; also, WPC/MWPC+NaCl added muscles presented similar values than those from the other treatments. After cooking, muscles with STPP+NaCl or WPC/MWPC+NaCl depicted compacted and uniform microstructures. Muscles with STPP+NaCl showed a pink colour, meanwhile other treatment muscles presented colours between pinkish-grey and grey-brown. STPP+NaCl added samples presented the highest values of global tenderness and juiciness. The addition of STPP+NaCl had a better performance than WPC/MWPC+NaCl. However, the addition of WPC/MWPC+NaCl improved total yield in comparison to NaCl added or control ones. PMID:22112522

  11. Parameter estimation for models of ligninolytic and cellulolytic enzyme kinetics

    SciTech Connect

    Wang, Gangsheng; Post, Wilfred M; Mayes, Melanie; Frerichs, Joshua T; Jagadamma, Sindhu

    2012-01-01

    While soil enzymes have been explicitly included in soil organic carbon (SOC) decomposition models, there is a serious lack of suitable data for model parameterization. This study provides well-documented enzymatic parameters for application in enzyme-driven SOC decomposition models from a compilation and analysis of published measurements. In particular, we developed appropriate kinetic parameters for five typical ligninolytic and cellulolytic enzymes (β-glucosidase, cellobiohydrolase, endo-glucanase, peroxidase, and phenol oxidase). The kinetic parameters included the maximum specific enzyme activity (Vmax) and half-saturation constant (Km) in the Michaelis-Menten equation. The activation energy (Ea) and the pH optimum and sensitivity (pHopt and pHsen) were also analyzed. pHsen was estimated by fitting an exponential-quadratic function. The Vmax values, often presented in different units under various measurement conditions, were converted into the same units at a reference temperature (20 °C) and pHopt. Major conclusions are: (i) Both Vmax and Km were log-normal distributed, with no significant difference in Vmax exhibited between enzymes originating from bacteria or fungi. (ii) No significant difference in Vmax was found between cellulases and ligninases; however, there was a significant difference in Km between them. (iii) Ligninases had higher Ea values and lower pHopt than cellulases; the average ratio of pHsen to pHopt ranged from 0.3 to 0.4 for the five enzymes, which means that an increase or decrease of 1.1 to 1.7 pH units from pHopt would reduce Vmax by 50%. (iv) Our analysis indicated that the Vmax values from lab measurements with purified enzymes were 1 to 2 orders of magnitude higher than those for use in SOC decomposition models under field conditions.
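
    For orientation, the quantities listed above combine roughly as sketched below; the exponential-quadratic pH term is one plausible reading of the description, and all numbers are illustrative rather than values from the compilation.

    ```python
    # Illustrative sketch of Michaelis-Menten kinetics with Arrhenius temperature scaling
    # and an assumed exponential-quadratic pH modifier (functional forms may differ from
    # those used by the authors; all parameter values are placeholders).
    import numpy as np

    R_GAS = 8.314  # J mol-1 K-1

    def michaelis_menten(S, Vmax, Km):
        return Vmax * S / (Km + S)

    def arrhenius_scaling(Vmax_ref, Ea, T, T_ref=293.15):
        """Scale Vmax from the 20 C reference temperature to temperature T (K)."""
        return Vmax_ref * np.exp(-Ea / R_GAS * (1.0 / T - 1.0 / T_ref))

    def ph_modifier(pH, pH_opt, pH_sen):
        """Exponential-quadratic reduction away from the pH optimum (equals 1 at pH_opt)."""
        return np.exp(-((pH - pH_opt) / pH_sen) ** 2)

    # Example: a hypothetical cellulase at 25 C and pH 5.0
    Vmax = arrhenius_scaling(Vmax_ref=10.0, Ea=45e3, T=298.15)
    rate = michaelis_menten(S=2.0, Vmax=Vmax, Km=0.5) * ph_modifier(5.0, pH_opt=5.5, pH_sen=1.8)
    print(round(rate, 3))
    ```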

  12. Sensitivity Analysis of Parameters in Linear-Quadratic Radiobiologic Modeling

    SciTech Connect

    Fowler, Jack F.

    2009-04-01

    Purpose: Radiobiologic modeling is increasingly used to estimate the effects of altered treatment plans, especially for dose escalation. The present article shows how much the linear-quadratic (LQ) calculated biologically equivalent dose (BED) varies when individual parameters of the LQ formula are varied by ±20% and by 1%. Methods: Equivalent total doses (EQD2, i.e., normalized total doses (NTD) in 2-Gy fractions) for tumor control, acute mucosal reactions, and late complications were calculated using the linear-quadratic formula with overall time: BED = nd(1 + d/[α/β]) - log_e 2 (T - Tk)/(αTp), where nd(1 + d/[α/β]) is total dose x relative effectiveness (RE = 1 + d/[α/β]). Each of the five biologic parameters in turn was altered by ±10%, and the altered EQD2s tabulated; the difference was finally divided by 20. EQD2 or NTD is obtained by dividing BED by the RE for 2-Gy fractions, using the appropriate α/β ratio. Results: Variations in tumor and acute mucosal EQD ranged from 0.1% to 0.45% per 1% change in each parameter for conventional schedules, the largest variation being caused by overall time. Variations in 'late' EQD were 0.4% to 0.6% per 1% change in the only biologic parameter, the α/β ratio. For stereotactic body radiotherapy schedules, variations were larger, up to 0.6% to 0.9% for tumor and 1.6% to 1.9% for late, per 1% change in parameter. Conclusions: Robustness occurs similar to that of equivalent uniform dose (EUD), for the same reasons. Total dose, dose per fraction, and dose rate cause their major effects, as is well known.
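
    A minimal sketch of the BED/EQD2 arithmetic described above, including the ±10%-then-divide-by-20 sensitivity step; the schedule and parameter values are illustrative, not those tabulated in the paper.

    ```python
    # Sketch of the LQ BED/EQD2 calculation with an overall-time term and a simple
    # per-1% sensitivity estimate for the alpha/beta ratio (illustrative values only).
    import numpy as np

    def bed(n, d, ab, T, Tk, Tp, alpha=0.35):
        """LQ BED with the overall-time (repopulation) correction."""
        repop = np.log(2) * max(T - Tk, 0.0) / (alpha * Tp)
        return n * d * (1.0 + d / ab) - repop

    def eqd2(bed_value, ab):
        """Equivalent dose in 2-Gy fractions: BED divided by the RE for d = 2 Gy."""
        return bed_value / (1.0 + 2.0 / ab)

    # Conventional 35 x 2 Gy schedule over 47 days, tumour alpha/beta = 10 Gy (placeholder)
    base = eqd2(bed(n=35, d=2.0, ab=10.0, T=47, Tk=21, Tp=3.0), ab=10.0)

    # Per-1% sensitivity to alpha/beta: alter by +/-10% and divide the difference by 20
    hi = eqd2(bed(35, 2.0, 10.0 * 1.1, 47, 21, 3.0), 10.0 * 1.1)
    lo = eqd2(bed(35, 2.0, 10.0 * 0.9, 47, 21, 3.0), 10.0 * 0.9)
    print(round(base, 1), "Gy;", round(abs(hi - lo) / 20.0 / base * 100, 3), "% per 1% change")
    ```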

  13. Simulation-based parameter estimation for complex models: a breast cancer natural history modelling illustration.

    PubMed

    Chia, Yen Lin; Salzman, Peter; Plevritis, Sylvia K; Glynn, Peter W

    2004-12-01

    Simulation-based parameter estimation offers a powerful means of estimating parameters in complex stochastic models. We illustrate the application of these ideas in the setting of a natural history model for breast cancer. Our model assumes that the tumor growth process follows a geometric Brownian motion; parameters are estimated from the SEER registry. Our discussion focuses on the use of simulation for computing the maximum likelihood estimator for this class of models. The analysis shows that simulation provides a straightforward means of computing such estimators for models of substantial complexity.
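
    As a rough illustration of the simulation-based likelihood idea (synthetic data standing in for the SEER registry, a kernel density estimate of simulated outcomes, and a plain grid search rather than whatever optimizer the authors used):

    ```python
    # Toy sketch of simulation-based likelihood evaluation for geometric-Brownian-motion
    # tumour growth; the data, grid and KDE-based likelihood are all stand-ins.
    import numpy as np
    from scipy.stats import gaussian_kde

    rng = np.random.default_rng(1)

    def simulate_log_volume(mu, sigma, t, n_sims, v0=1.0):
        """Log tumour volume at time t under GBM with drift mu and volatility sigma."""
        return np.log(v0) + (mu - 0.5 * sigma**2) * t + sigma * np.sqrt(t) * rng.normal(size=n_sims)

    # "Observed" log volumes at a common detection time (synthetic stand-in for registry data)
    obs = simulate_log_volume(mu=0.8, sigma=0.4, t=2.0, n_sims=200)

    def simulated_log_likelihood(mu, sigma, t=2.0, n_sims=5000):
        sims = simulate_log_volume(mu, sigma, t, n_sims)
        return np.sum(np.log(gaussian_kde(sims)(obs) + 1e-12))

    grid = [(m, s) for m in np.linspace(0.4, 1.2, 9) for s in np.linspace(0.2, 0.8, 7)]
    mu_hat, sigma_hat = max(grid, key=lambda p: simulated_log_likelihood(*p))
    print(mu_hat, sigma_hat)
    ```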

  14. Modelling of some parameters from thermoelectric power plants

    NASA Astrophysics Data System (ADS)

    Popa, G. N.; Diniş, C. M.; Deaconu, S. I.; Maksay, Şt; Popa, I.

    2016-02-01

    This paper proposes new mathematical models for the main electrical parameters (active power P and reactive power Q of the power supplies) and technological parameters (mass flow rate of steam M from the boiler and dust emission E at the output of the precipitator) of a thermoelectric power plant using industrial plate-type electrostatic precipitators with three sections. The mathematical models were built from experimental results taken from an industrial facility (the boiler and the plate-type electrostatic precipitators with three sections), and the least squares method was used for their determination. The modelling used equations of degree 1, 2 and 3. The equations express dust emission as a function of the active power of the power supplies and the mass flow rate of steam from the boiler, and also as a function of the reactive power of the power supplies and the mass flow rate of steam from the boiler. These equations can be used to control the process in the electrostatic precipitators.
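
    A sketch of the least-squares step described above, fitting degree-1, 2 and 3 polynomial models of dust emission E in terms of active power P and steam mass flow rate M; the data are synthetic placeholders, not plant measurements.

    ```python
    # Hedged sketch: ordinary least squares fits of polynomial models E(P, M) of
    # increasing degree, with synthetic data standing in for the plant records.
    import numpy as np

    rng = np.random.default_rng(2)
    P = rng.uniform(20, 80, 50)       # active power of the precipitator power supplies (kW)
    M = rng.uniform(100, 300, 50)     # steam mass flow rate from the boiler (t/h)
    E = 5.0 + 0.08 * P + 0.02 * M + rng.normal(0, 0.5, 50)   # dust emission (mg/m3)

    def design_matrix(P, M, degree):
        cols = [np.ones_like(P)]
        for d in range(1, degree + 1):
            for i in range(d + 1):
                cols.append(P ** (d - i) * M ** i)   # all monomials of total degree d
        return np.column_stack(cols)

    for degree in (1, 2, 3):
        coeffs, *_ = np.linalg.lstsq(design_matrix(P, M, degree), E, rcond=None)
        print(degree, coeffs.round(4))
    ```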

  15. Anisotropic Effects on Constitutive Model Parameters of Aluminum Alloys

    NASA Astrophysics Data System (ADS)

    Brar, Nachhatter; Joshi, Vasant

    2011-06-01

    Simulation of low velocity impact on structures or high velocity penetration in armor materials relies heavily on constitutive material models. The model constants are required input to computer codes (LS-DYNA, DYNA3D or SPH) to accurately simulate fragment impact on structural components made of high-strength 7075-T651 aluminum alloy. Johnson-Cook model constants determined for Al 7075-T651 alloy bar material failed to correctly simulate the penetration into 1-inch-thick Al 7075-T651 plates. When simulations go well beyond minor parameter tweaking and the experimental results are drastically different, it is important to determine the constitutive parameters from the actual material used in the impact/penetration experiments. To investigate anisotropic effects on the yield/flow stress of this alloy we performed quasi-static and high strain rate tensile tests on specimens fabricated in the longitudinal, transverse, and thickness directions of the 1-inch-thick Al 7075-T651 plate. Flow stresses at a strain rate of ~1100/s in the longitudinal and transverse directions are similar, around 670 MPa, and decrease to 620 MPa in the thickness direction. These data are lower than the flow stress of 760 MPa measured in Al 7075-T651 bar stock.
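
    For reference, the Johnson-Cook flow stress referred to above has the standard multiplicative form sketched below; the constants in the example are rough literature-style placeholders, not the calibrated Al 7075-T651 values from this work.

    ```python
    # Johnson-Cook flow stress: (A + B*eps^n) * (1 + C*ln(strain-rate ratio)) * (1 - T*^m).
    # All constants below are illustrative placeholders.
    import numpy as np

    def johnson_cook_stress(eps_p, eps_dot, T, A, B, n, C, m,
                            eps_dot_ref=1.0, T_room=293.0, T_melt=750.0):
        """Flow stress (MPa) vs plastic strain, strain rate (1/s) and temperature (K)."""
        strain_term = A + B * eps_p ** n
        rate_term = 1.0 + C * np.log(max(eps_dot / eps_dot_ref, 1e-12))
        T_star = np.clip((T - T_room) / (T_melt - T_room), 0.0, 1.0)
        thermal_term = 1.0 - T_star ** m
        return strain_term * rate_term * thermal_term

    # Placeholder constants, roughly in the range reported for 7075-T651 in the literature
    print(johnson_cook_stress(eps_p=0.05, eps_dot=1100.0, T=293.0,
                              A=520.0, B=477.0, n=0.52, C=0.025, m=1.6))
    ```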

  16. Microbial Communities Model Parameter Calculation for TSPA/SR

    SciTech Connect

    D. Jolley

    2001-07-16

    This calculation has several purposes. First, the calculation reduces the information contained in "Committed Materials in Repository Drifts" (BSC 2001a) to useable parameters required as input to MING V1.0 (CRWMS M&O 1998, CSCI 30018 V1.0) for calculation of the effects of potential in-drift microbial communities as part of the microbial communities model. The calculation is intended to replace the parameters found in Attachment II of the current In-Drift Microbial Communities Model revision (CRWMS M&O 2000c), with the exception of Section 11-5.3. Second, this calculation provides the information necessary to supersede the following DTN: M09909SPAMING1.003 and replace it with a new qualified dataset (see Table 6.2-1). The purpose of this calculation is to create the revised qualified parameter input for MING that will allow ΔG (Gibbs free energy) to be corrected for long-term changes to the temperature of the near-field environment. Calculated herein are the quadratic or second-order regression relationships that are used in the energy-limiting calculations of the potential growth of microbial communities in the in-drift geochemical environment. Third, the calculation performs an impact review of a new DTN: M00012MAJIONIS.000 that is intended to replace the currently cited DTN: GS9809083 12322.008 for water chemistry data used in the current "In-Drift Microbial Communities Model" revision (CRWMS M&O 2000c). Finally, the calculation updates the material lifetimes reported in Table 32 in Section 6.5.2.3 of the "In-Drift Microbial Communities" AMR (CRWMS M&O 2000c) based on the inputs reported in BSC (2001a). Changes include adding new specified materials and updating old materials information that has changed.

  17. Parameter optimization in differential geometry based solvation models

    PubMed Central

    Wang, Bao; Wei, G. W.

    2015-01-01

    Differential geometry (DG) based solvation models are a new class of variational implicit solvent approaches that are able to avoid unphysical solvent-solute boundary definitions and associated geometric singularities, and dynamically couple polar and non-polar interactions in a self-consistent framework. Our earlier study indicates that DG based non-polar solvation model outperforms other methods in non-polar solvation energy predictions. However, the DG based full solvation model has not shown its superiority in solvation analysis, due to its difficulty in parametrization, which must ensure the stability of the solution of strongly coupled nonlinear Laplace-Beltrami and Poisson-Boltzmann equations. In this work, we introduce new parameter learning algorithms based on perturbation and convex optimization theories to stabilize the numerical solution and thus achieve an optimal parametrization of the DG based solvation models. An interesting feature of the present DG based solvation model is that it provides accurate solvation free energy predictions for both polar and non-polar molecules in a unified formulation. Extensive numerical experiment demonstrates that the present DG based solvation model delivers some of the most accurate predictions of the solvation free energies for a large number of molecules. PMID:26450304

  19. Expanding the model: anisotropic displacement parameters in protein structure refinement.

    PubMed

    Merritt, E A

    1999-06-01

    Recent technological improvements in crystallographic data collection have led to a surge in the number of protein structures being determined at atomic or near-atomic resolution. At this resolution, structural models can be expanded to include anisotropic displacement parameters (ADPs) for individual atoms. New protocols and new tools are needed to refine, analyze and validate such models optimally. One such tool, PARVATI, has been used to examine all protein structures (peptide chains >50 residues) for which expanded models including ADPs are available from the Protein Data Bank. The distribution of anisotropy within each of these refined models is broadly similar across the entire set of structures, with a mean anisotropy A in the range 0.4-0.5. This is a significant departure from a purely isotropic model and explains why the inclusion of ADPs yields a substantial improvement in the crystallographic residuals R and Rfree. The observed distribution of anisotropy may prove useful in the validation of very high resolution structures. A more complete understanding of this distribution may also allow the development of improved protein structural models, even at lower resolution.
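
    For context, the anisotropy A quoted above is commonly defined as the ratio of the smallest to the largest eigenvalue of an atom's 3x3 displacement tensor U, so A = 1 for a perfectly isotropic atom; a minimal sketch with an illustrative tensor:

    ```python
    # Sketch of the ADP anisotropy measure: ratio of the smallest to the largest
    # eigenvalue of the symmetric 3x3 tensor U (values below are illustrative).
    import numpy as np

    def anisotropy(U):
        """U: symmetric 3x3 anisotropic displacement tensor for one atom (Angstrom^2)."""
        eigvals = np.linalg.eigvalsh(U)   # ascending order
        return eigvals[0] / eigvals[-1]

    U_example = np.array([[0.020, 0.003, 0.001],
                          [0.003, 0.035, 0.004],
                          [0.001, 0.004, 0.050]])
    print(round(anisotropy(U_example), 2))
    ```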

  20. Variational estimation of process parameters in a simplified atmospheric general circulation model

    NASA Astrophysics Data System (ADS)

    Lv, Guokun; Koehl, Armin; Stammer, Detlef

    2016-04-01

    Parameterizations are used to simulate the effects of unresolved sub-grid-scale processes in current state-of-the-art climate models. The values of the process parameters, which determine the model's climatology, are usually adjusted manually to reduce the difference between the model mean state and the observed climatology. This process requires detailed knowledge of the model and its parameterizations. In this work, a variational method was used to estimate process parameters in the Planet Simulator (PlaSim). The adjoint code was generated using automatic differentiation of the source code. Some hydrological processes were switched off to remove the influence of zero-order discontinuities. In addition, the nonlinearity of the model limits the feasible assimilation window to about 1 day, which is too short to tune the model's climatology. To extend the feasible assimilation window, nudging terms for all state variables were added to the model's equations, which essentially suppress all unstable directions. In identical twin experiments, we found that the feasible assimilation window could be extended to over 1 year and accurate parameters could be retrieved. Although the nudging terms translate into a damping of the adjoint variables and therefore tend to erase the information of the data over time, assimilating climatological information is shown to provide sufficient information on the parameters. Moreover, the mechanism of this regularization is discussed.

  1. Important Scaling Parameters for Testing Model-Scale Helicopter Rotors

    NASA Technical Reports Server (NTRS)

    Singleton, Jeffrey D.; Yeager, William T., Jr.

    1998-01-01

    An investigation into the effects of aerodynamic and aeroelastic scaling parameters on model-scale helicopter rotors has been conducted in the NASA Langley Transonic Dynamics Tunnel. The effect of varying Reynolds number, blade Lock number, and structural elasticity on rotor performance has been studied, and the performance results are discussed herein for two different rotor blade sets at two rotor advance ratios. One set of rotor blades was rigid and the other set of blades was dynamically scaled to be representative of a main rotor design for a utility-class helicopter. The investigation was conducted at several test-medium densities, which permits the acquisition of data for several Reynolds and Lock number combinations.

  2. Analysis of sensitivity of simulated recharge to selected parameters for seven watersheds modeled using the precipitation-runoff modeling system

    USGS Publications Warehouse

    Ely, D. Matthew

    2006-01-01

    routing parameter. Although the primary objective of this study was to identify, by geographic region, the importance of the parameter value to the simulation of ground-water recharge, the secondary objectives proved valuable for future modeling efforts. The value of a rigorous sensitivity analysis can (1) make the calibration process more efficient, (2) guide additional data collection, (3) identify model limitations, and (4) explain simulated results.

  3. Additional Developments in Atmosphere Revitalization Modeling and Simulation

    NASA Technical Reports Server (NTRS)

    Coker, Robert F.; Knox, James C.; Cummings, Ramona; Brooks, Thomas; Schunk, Richard G.; Gomez, Carlos

    2013-01-01

    NASA's Advanced Exploration Systems (AES) program is developing prototype systems, demonstrating key capabilities, and validating operational concepts for future human missions beyond Earth orbit. These forays beyond the confines of earth's gravity will place unprecedented demands on launch systems. They must launch the supplies needed to sustain a crew over longer periods for exploration missions beyond earth's moon. Thus all spacecraft systems, including those for the separation of metabolic carbon dioxide and water from a crewed vehicle, must be minimized with respect to mass, power, and volume. Emphasis is also placed on system robustness both to minimize replacement parts and ensure crew safety when a quick return to earth is not possible. Current efforts are focused on improving the current state-of-the-art systems utilizing fixed beds of sorbent pellets by evaluating structured sorbents, seeking more robust pelletized sorbents, and examining alternate bed configurations to improve system efficiency and reliability. These development efforts combine testing of sub-scale systems and multi-physics computer simulations to evaluate candidate approaches, select the best performing options, and optimize the configuration of the selected approach. This paper describes the continuing development of atmosphere revitalization models and simulations in support of the Atmosphere Revitalization Recovery and Environmental Monitoring (ARREM) project within the AES program.

  5. Insight into model mechanisms through automatic parameter fitting: a new methodological framework for model development

    PubMed Central

    2014-01-01

    Background Striking a balance between the degree of model complexity and parameter identifiability, while still producing biologically feasible simulations, is a major challenge in computational biology. While these two elements of model development are closely coupled, parameter fitting from measured data and analysis of model mechanisms have traditionally been performed separately and sequentially. This process produces potential mismatches between model and data complexities that can compromise the ability of computational frameworks to reveal mechanistic insights or predict new behaviour. In this study we address this issue by presenting a generic framework for combined model parameterisation, comparison of model alternatives and analysis of model mechanisms. Results The presented methodology is based on a combination of multivariate metamodelling (statistical approximation of the input–output relationships of deterministic models) and a systematic zooming into biologically feasible regions of the parameter space by iterative generation of new experimental designs and look-up of simulations in the proximity of the measured data. The parameter fitting pipeline includes an implicit sensitivity analysis and analysis of parameter identifiability, making it suitable for testing hypotheses for model reduction. Using this approach, under-constrained model parameters, as well as the coupling between parameters within the model, are identified. The methodology is demonstrated by refitting the parameters of a published model of cardiac cellular mechanics using a combination of measured data and synthetic data from an alternative model of the same system. Using this approach, reduced models with simplified expressions for the tropomyosin/crossbridge kinetics were found by identification of model components that can be omitted without affecting the fit to the parameterising data. Our analysis revealed that model parameters could be constrained to a standard

  6. System parameters for erythropoiesis control model: Comparison of normal values in human and mouse model

    NASA Technical Reports Server (NTRS)

    1979-01-01

    The computer model for erythropoietic control was adapted to the mouse system by altering system parameters originally given for the human to those which more realistically represent the mouse. Parameter values were obtained from a variety of literature sources. Using the mouse model, the mouse was studied as a potential experimental model for spaceflight. Simulation studies of dehydration and hypoxia were performed. A comparison of system parameters for the mouse and human models is presented. Aside from the obvious differences expected in fluid volumes, blood flows and metabolic rates, larger differences were observed in the following: erythrocyte life span, erythropoietin half-life, and normal arterial pO2.

  7. A Lumped Parameter Model for Feedback Studies in Tokamaks

    NASA Astrophysics Data System (ADS)

    Chance, M. S.; Chu, M. S.; Okabayashi, M.; Glasser, A. H.

    2004-11-01

    A lumped circuit model for feedback stabilization studies in tokamaks is presented. This work parallels the formulation by Boozer^a, is analogous to the studies done on axisymmetric modes^b, and generalizes the cylindrical model^c. The lumped circuit parameters are derived from the DCON-derived eigenfunctions of the plasma, the resistive shell and the feedback coils. The inductances are calculated using the VACUUM code, which is designed to calculate the responses between the various elements in the feedback system. The results are compared with the normal mode^d and the system identification^e approaches. ^aA.H. Boozer, Phys. Plasmas 5, 3350 (1998). ^b E.A. Lazarus et al., Nucl. Fusion 30, 111 (1990). ^c M. Okabayashi et al., Nucl. Fusion 38, 1607 (1998). ^dM.S. Chu et al., Nucl. Fusion 43, 441 (2003). ^eY.Q. Liu et al., Phys. Plasmas 7, 3681 (2000).

  8. Transferability of regional permafrost disturbance susceptibility modelling using generalized linear and generalized additive models

    NASA Astrophysics Data System (ADS)

    Rudy, Ashley C. A.; Lamoureux, Scott F.; Treitz, Paul; van Ewijk, Karin Y.

    2016-07-01

    To effectively assess and mitigate the risk of permafrost disturbance, disturbance-prone areas can be predicted through the application of susceptibility models. In this study we developed regional susceptibility models for permafrost disturbances using a field disturbance inventory to test the transferability of the model to a broader region in the Canadian High Arctic. Resulting maps of susceptibility were then used to explore the effect of terrain variables on the occurrence of disturbances within this region. To account for a large range of landscape characteristics, the model was calibrated using two locations: Sabine Peninsula, Melville Island, NU, and Fosheim Peninsula, Ellesmere Island, NU. Spatial patterns of disturbance were predicted with a generalized linear model (GLM) and a generalized additive model (GAM), each calibrated using disturbed and randomized undisturbed locations from both locations and GIS-derived terrain predictor variables including slope, potential incoming solar radiation, wetness index, topographic position index, elevation, and distance to water. Each model was validated for the Sabine and Fosheim Peninsulas using independent data sets, while the transferability of the model to an independent site was assessed at Cape Bounty, Melville Island, NU. The regional GLM and GAM validated well for both calibration sites (Sabine and Fosheim), with areas under the receiver operating characteristic curve (AUROC) > 0.79. Both models were applied directly to Cape Bounty without calibration and validated equally, with AUROCs of 0.76; however, each model predicted disturbed and undisturbed samples differently. Additionally, the sensitivity of the transferred model was assessed using data sets with different sample sizes. Results indicated that models based on larger sample sizes transferred more consistently and captured the variability within the terrain attributes in the respective study areas. Terrain attributes associated with the initiation of disturbances were
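
    A hedged sketch of the GLM half of this workflow with synthetic terrain attributes (the GAM, the real predictor set and the spatial sampling scheme are omitted): a binomial GLM with a logistic link is fitted to presence/absence data and scored with the AUROC.

    ```python
    # Sketch: binomial GLM for disturbance susceptibility and AUROC validation,
    # using synthetic stand-ins for the GIS-derived terrain attributes.
    import numpy as np
    import statsmodels.api as sm
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(3)
    n = 400
    X = np.column_stack([
        rng.uniform(0, 25, n),    # slope (degrees)
        rng.uniform(0, 1, n),     # potential incoming solar radiation (scaled)
        rng.uniform(2, 14, n),    # wetness index
    ])
    logit = -3.0 + 0.12 * X[:, 0] + 1.5 * X[:, 1] + 0.1 * X[:, 2]
    y = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))   # disturbed (1) / undisturbed (0)

    train, test = slice(0, 300), slice(300, None)
    glm = sm.GLM(y[train], sm.add_constant(X[train]), family=sm.families.Binomial()).fit()
    pred = glm.predict(sm.add_constant(X[test]))
    print("AUROC:", round(roc_auc_score(y[test], pred), 2))
    ```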

  9. Multiple-step model-experiment matching allows precise definition of dynamical leg parameters in human running.

    PubMed

    Ludwig, C; Grimmer, S; Seyfarth, A; Maus, H-M

    2012-09-21

    The spring-loaded inverted pendulum (SLIP) model is a well-established model for describing bouncy gaits like human running. The notion of spring-like leg behavior has led many researchers to compute the corresponding parameters, predominantly stiffness, in various experimental setups and in various ways. However, different methods yield different results, making the comparison between studies difficult. Further, a model simulation with experimentally obtained leg parameters typically results in comparatively large differences between model and experimental center of mass trajectories. Here, we pursue the opposite approach, which is to calculate model parameters that allow reproduction of an experimental sequence of steps. In addition, to capture energy fluctuations, an extension of the SLIP (ESLIP) is required and presented. The excellent match of the models with the experiment validates the description of human running by the SLIP with the obtained parameters, which we hence call dynamical leg parameters.
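
    For readers unfamiliar with the SLIP, its stance phase reduces to a point mass on a massless prismatic spring with the foot fixed at the origin; the sketch below integrates that phase until the leg returns to its rest length (take-off). Parameters and the touchdown state are illustrative, not fitted values from this study.

    ```python
    # Minimal SLIP stance-phase integration (planar, conservative, foot at the origin).
    import numpy as np
    from scipy.integrate import solve_ivp

    m, k, L0, g = 80.0, 20e3, 1.0, 9.81      # body mass (kg), leg stiffness (N/m), rest leg length (m)

    def stance_dynamics(t, state):
        x, y, vx, vy = state                  # CoM position relative to the foot, and velocity
        L = np.hypot(x, y)
        a = k * (L0 - L) / m                  # spring acceleration along the leg axis
        return [vx, vy, a * x / L, a * y / L - g]

    def takeoff(t, state):                    # leg back at rest length -> end of stance
        return np.hypot(state[0], state[1]) - L0
    takeoff.terminal = True
    takeoff.direction = 1

    # Touchdown: CoM behind the foot, moving forward and slightly downward (illustrative)
    sol = solve_ivp(stance_dynamics, (0.0, 1.0), [-0.2, 0.97, 3.0, -0.5],
                    events=takeoff, max_step=1e-3)
    print(round(sol.t[-1], 3), sol.y[:, -1].round(3))   # stance duration and take-off state
    ```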

  10. CHARMM additive all-atom force field for carbohydrate derivatives and its utility in polysaccharide and carbohydrate-protein modeling

    PubMed Central

    Guvench, Olgun; Mallajosyula, Sairam S.; Raman, E. Prabhu; Hatcher, Elizabeth; Vanommeslaeghe, Kenno; Foster, Theresa J.; Jamison, Francis W.; MacKerell, Alexander D.

    2011-01-01

    Monosaccharide derivatives such as xylose, fucose, N-acetylglucosamine (GlcNAc), N-acetylgalactosamine (GalNAc), glucuronic acid, iduronic acid, and N-acetylneuraminic acid (Neu5Ac) are important components of eukaryotic glycans. The present work details the development of force-field parameters for these monosaccharides and their covalent connections to proteins via O-linkages to serine or threonine sidechains and via N-linkages to asparagine sidechains. The force field development protocol was designed to explicitly yield parameters that are compatible with the existing CHARMM additive force field for proteins, nucleic acids, lipids, carbohydrates, and small molecules. Therefore, when combined with previously developed parameters for pyranose and furanose monosaccharides, for glycosidic linkages between monosaccharides, and for proteins, the present set of parameters enables the molecular simulation of a wide variety of biologically-important molecules such as complex carbohydrates and glycoproteins. Parametrization included fitting to quantum mechanical (QM) geometries and conformational energies of model compounds, as well as to QM pair interaction energies and distances of model compounds with water. Parameters were validated in the context of crystals of relevant monosaccharides, as well as NMR and/or X-ray crystallographic data on larger systems including oligomeric hyaluronan, sialyl Lewis X, O- and N-linked glycopeptides, and a lectin:sucrose complex. As the validated parameters are an extension of the CHARMM all-atom additive biomolecular force field, they further broaden the types of heterogeneous systems accessible with a consistently-developed force-field model. PMID:22125473

  11. Computationally Efficient Algorithms for Parameter Estimation and Uncertainty Propagation in Numerical Models of Groundwater Flow

    NASA Astrophysics Data System (ADS)

    Townley, Lloyd R.; Wilson, John L.

    1985-12-01

    Finite difference and finite element methods are frequently used to study aquifer flow; however, additional analysis is required when model parameters, and hence predicted heads, are uncertain. Computational algorithms are presented for steady and transient models in which aquifer storage coefficients, transmissivities, distributed inputs, and boundary values may all be simultaneously uncertain. Innovative aspects of these algorithms include a new form of generalized boundary condition; a concise discrete derivation of the adjoint problem for transient models with variable time steps; an efficient technique for calculating the approximate second derivative during line searches in weighted least squares estimation; and a new efficient first-order second-moment algorithm for calculating the covariance of predicted heads due to a large number of uncertain parameter values. The techniques are presented in matrix form, and their efficiency depends on the structure of sparse matrices which occur repeatedly throughout the calculations. Details of matrix structures are provided for a two-dimensional linear triangular finite element model.
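
    The first-order second-moment step amounts to propagating a parameter covariance through the head-sensitivity (Jacobian) matrix; a generic numpy sketch with random placeholders for the sensitivities and parameter variances:

    ```python
    # First-order second-moment (FOSM) propagation: Cov(heads) ~= J Cp J^T,
    # with placeholder sensitivities and parameter variances.
    import numpy as np

    rng = np.random.default_rng(4)
    n_heads, n_params = 50, 6
    J = rng.normal(size=(n_heads, n_params))          # d(head)/d(parameter) sensitivities
    C_p = np.diag(rng.uniform(0.01, 0.1, n_params))   # parameter covariance (e.g., log-T variances)

    C_h = J @ C_p @ J.T                               # first-order covariance of predicted heads
    print(np.sqrt(np.diag(C_h))[:5].round(3))         # standard deviations of the first few heads
    ```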

  12. Relations between fractional-order model parameters and lung pathology in chronic obstructive pulmonary disease.

    PubMed

    Ionescu, Clara M; De Keyser, Robin

    2009-04-01

    In this study, changes in respiratory mechanics from healthy and chronic obstructive pulmonary disease (COPD) diagnosed patients are observed from identified fractional-order (FO) model parameters. The noninvasive forced oscillation technique is employed for lung function testing. Parameters on tissue damping and elastance are analyzed with respect to lung pathology and additional indexes developed from the identified model. The observations show that the proposed model may be used to detect changes in respiratory mechanics and offers a clear-cut separation between the healthy and COPD subject groups. Our conclusion is that an FO model is able to capture changes in viscoelasticity of the soft tissue in lungs with disease. Apart from this, nonlinear effects present in the measured signals were observed and analyzed via signal processing techniques and led to supporting evidence in relation to the expected phenomena from lung pathology in healthy and COPD patients.

  13. Variational methods to estimate terrestrial ecosystem model parameters

    NASA Astrophysics Data System (ADS)

    Delahaies, Sylvain; Roulstone, Ian

    2016-04-01

    Carbon is at the basis of the chemistry of life. Its ubiquity in the Earth system is the result of complex recycling processes. Present in the atmosphere in the form of carbon dioxide, it is absorbed by marine and terrestrial ecosystems and stored within living biomass and decaying organic matter. Then soil chemistry and a non-negligible amount of time transform the dead matter into fossil fuels. Throughout this cycle, carbon dioxide is released into the atmosphere through respiration and the combustion of fossil fuels. Model-data fusion techniques allow us to combine our understanding of these complex processes with an ever-growing amount of observational data to help improve models and predictions. The data assimilation linked ecosystem carbon (DALEC) model is a simple box model simulating the carbon budget allocation for terrestrial ecosystems. Over the last decade several studies have demonstrated the relative merit of various inverse modelling strategies (MCMC, EnKF, 4DVAR) to estimate model parameters and initial carbon stocks for DALEC and to quantify the uncertainty in the predictions. Despite its simplicity, DALEC represents the basic processes at the heart of more sophisticated models of the carbon cycle. Using adjoint-based methods we study inverse problems for DALEC with various data streams (8-day MODIS LAI, monthly MODIS LAI, NEE). The framework of constrained optimization allows us to incorporate ecological common sense into the variational framework. We use resolution matrices to study the nature of the inverse problems and to obtain data importance and information content for the different types of data. We study how varying the time step affects the solutions, and we show how "spin up" naturally improves the conditioning of the inverse problems.
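
    A hedged sketch of the kind of variational cost function involved (prior/background term plus observation misfit), with a toy one-output model standing in for DALEC and made-up error covariances; an adjoint-based implementation would supply gradients instead of the numerical ones used by the optimizer here.

    ```python
    # Toy variational parameter estimation: J(p) = 0.5*(p-pb)'B^-1(p-pb) + 0.5*(y(p)-obs)'R^-1(y(p)-obs)
    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(5)
    t = np.arange(0, 365, 8.0)                          # e.g., an 8-day observation interval

    def toy_model(params, t):
        """Stand-in for a DALEC-like output stream (e.g., LAI) driven by two parameters."""
        amp, decay = params
        return amp * np.exp(-decay * t / 365.0) * (1 + 0.3 * np.sin(2 * np.pi * t / 365.0))

    truth = np.array([3.0, 0.5])
    obs = toy_model(truth, t) + rng.normal(0, 0.2, t.size)

    prior, B = np.array([2.0, 1.0]), np.diag([1.0, 0.5])   # prior mean and covariance
    R = 0.2**2 * np.eye(t.size)                            # observation error covariance

    def cost(params):
        dp = params - prior
        dy = toy_model(params, t) - obs
        return 0.5 * dp @ np.linalg.solve(B, dp) + 0.5 * dy @ np.linalg.solve(R, dy)

    print(minimize(cost, prior, method="L-BFGS-B", bounds=[(0.1, 10), (0.01, 5)]).x)
    ```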

  14. Pressure pulsation in roller pumps: a validated lumped parameter model.

    PubMed

    Moscato, Francesco; Colacino, Francesco M; Arabia, Maurizio; Danieli, Guido A

    2008-11-01

    During open-heart surgery, roller pumps are often used to maintain the circulation of blood through the patient's body. They present numerous key features, but they suffer from several limitations: (a) they normally deliver uncontrolled pulsatile inlet and outlet pressure; (b) blood damage appears to be greater than that encountered with centrifugal pumps. A lumped parameter mathematical model of a roller pump (Sarns 7000, Terumo CVS, Ann Arbor, MI, USA) was developed to dynamically simulate pressures at the pump inlet and outlet in order to clarify the uncontrolled pulsation mechanism. Inlet and outlet pressures obtained by the mathematical model have been compared with those measured in various operating conditions: different roller rotating speeds, different tube occlusion rates, and different clamping degrees at the pump inlet and outlet. Model results agree with the measured pressure waveforms, whose oscillations are generated by the tube compression/release mechanism during the rollers' engaging and disengaging phases. The Average Euclidean Error (AEE) was 20 mm Hg and 33 mm Hg for inlet and outlet pressure estimates, respectively. The normalized AEE never exceeded 0.16. The developed model can be exploited for designing roller pumps with improved performance aimed at reducing the undesired pressure pulsation.

  15. Analysing DNA structural parameters using a mesoscopic model

    NASA Astrophysics Data System (ADS)

    Amarante, Tauanne D.; Weber, Gerald

    2014-03-01

    The Peyrard-Bishop model is a mesoscopic approximation for modelling DNA and RNA molecules. Several variants of this model exist, from 3D Hamiltonians including torsional angles to simpler 2D versions. Currently, we are able to parametrize the 2D variants of the model, which allows us to extract important information about the molecule. For example, with this technique we were recently able to obtain the hydrogen bonds of RNA from melting temperatures, which previously were obtainable only from NMR measurements. Here, we take the 3D torsional Hamiltonian and set the angles to zero. Curiously, in doing this we do not recover the traditional 2D Hamiltonians. Instead, we obtain a different 2D Hamiltonian which now includes a base pair step distance, commonly known as rise. A detailed knowledge of the rise distance is important as it determines the overall length of the DNA molecule. This 2D Hamiltonian provides us with the exciting prospect of obtaining DNA structural parameters from melting temperatures. Our results for the rise distance at low salt concentration are in good qualitative agreement with those from several published X-ray measurements. We also found an important dependence of the rise distance on salt concentration. In contrast to our previous calculations, the elastic constants now show little dependence on salt concentration, which appears to be closer to what is seen experimentally in DNA flexibility experiments.

  16. Multi-objective parameter optimization of common land model using adaptive surrogate modeling

    NASA Astrophysics Data System (ADS)

    Gong, W.; Duan, Q.; Li, J.; Wang, C.; Di, Z.; Dai, Y.; Ye, A.; Miao, C.

    2015-05-01

    Parameter specification usually has a significant influence on the performance of land surface models (LSMs). However, estimating the parameters properly is a challenging task due to the following reasons: (1) LSMs usually have too many adjustable parameters (20 to 100 or even more), leading to the curse of dimensionality in the parameter input space; (2) LSMs usually have many output variables involving water/energy/carbon cycles, so that calibrating LSMs is actually a multi-objective optimization problem; (3) regional LSMs are expensive to run, while conventional multi-objective optimization methods need a large number of model runs (typically ~10^5-10^6). This makes parameter optimization computationally prohibitive. An uncertainty quantification framework was developed to meet the aforementioned challenges, which includes the following steps: (1) using parameter screening to reduce the number of adjustable parameters, (2) using surrogate models to emulate the responses of dynamic models to the variation of adjustable parameters, (3) using an adaptive strategy to improve the efficiency of surrogate-modeling-based optimization; (4) using a weighting function to transform multi-objective optimization into single-objective optimization. In this study, we demonstrate the uncertainty quantification framework on a single-column application of an LSM, the Common Land Model (CoLM), and evaluate the effectiveness and efficiency of the proposed framework. The results indicate that this framework can efficiently achieve optimal parameters in a more effective way. Moreover, this result implies the possibility of calibrating other large complex dynamic models, such as regional-scale LSMs, atmospheric models and climate models.

  17. Establishing a connection between hydrologic model parameters and physical catchment signatures for improved hierarchical Bayesian modeling in ungauged catchments

    NASA Astrophysics Data System (ADS)

    Marshall, L. A.; Weber, K.; Smith, T. J.; Greenwood, M. C.; Sharma, A.

    2012-12-01

    In an effort to improve hydrologic analysis in areas with limited data, hydrologists often seek to link catchments where little to no data collection occurs to catchments that are gauged. Various metrics and methods have been proposed to identify such relationships, in the hope that "surrogate" catchments might provide information for those catchments that are hydrologically similar. In this study we present a statistical analysis of over 150 catchments located in southeast Australia to examine the relationship between a hydrological model and certain catchment metrics. A conceptual rainfall-runoff model is optimized for each of the catchments and hierarchical clustering is performed to link catchments based on their calibrated model parameters. Clustering has been used in recent hydrologic studies but catchments are often clustered based on physical characteristics alone. Usually there is little evidence to suggest that such "surrogate" data approaches provide sufficiently similar model predictions. Beginning with model parameters and working backwards, we hope to establish if there is a relationship between the model parameters and physical characteristics for improved model predictions in the ungauged catchment. To analyze relationships, permutational multivariate analysis of variance tests are used that suggest which hydrologic metrics are most appropriate for discriminating between calibrated catchment clusters. Additional analysis is performed to determine which cluster pairs show significant differences for various metrics. We further examine the extent to which these results may be insightful for a hierarchical Bayesian modeling approach that is aimed at generating model predictions at an ungauged site. The method, known as Bayes Empirical Bayes (BEB) works to pool information from similar catchments to generate informed probability distributions for each model parameter at a data-limited catchment of interest. We demonstrate the effect of selecting

  18. Impact of parameter uncertainty on carbon sequestration modeling

    NASA Astrophysics Data System (ADS)

    Bandilla, K.; Celia, M. A.

    2013-12-01

    Geologic carbon sequestration through injection of supercritical carbon dioxide (CO2) into the subsurface is one option to reduce anthropogenic CO2 emissions. Widespread industrial-scale deployment, on the order of giga-tonnes of CO2 injected per year, will be necessary for carbon sequestration to make a significant contribution to solving the CO2 problem. Deep saline formations are suitable targets for CO2 sequestration due to their large storage capacity, high injectivity, and favorable pressure and temperature regimes. Due to the large areal extent of saline formations, and the need to inject very large amounts of CO2, multiple sequestration operations are likely to be developed in the same formation. The injection-induced migration of both CO2 and resident formation fluids (brine) needs to be predicted to determine the feasibility of industrial-scale deployment of carbon sequestration. Due to the large spatial scale of the domain, many of the modeling parameters (e.g., permeability) will be highly uncertain. In this presentation we discuss a sensitivity analysis of both the pressure response and CO2 plume migration to variations of model parameters such as permeability, compressibility and temperature. The impact of uncertainty in the stratigraphic succession is also explored. The sensitivity analysis is conducted using a numerical vertically-integrated modeling approach. The Illinois Basin, USA is selected as the test site for this study, due to its large storage capacity and large number of stationary CO2 sources. As there is currently only one active CO2 injection operation in the Illinois Basin, a hypothetical injection scenario is used, where CO2 is injected at the locations of large CO2 emitters related to electricity generation, ethanol production and hydrocarbon refinement. The Area of Review (AoR) is chosen as the comparison metric, as it includes both the CO2 plume size and the pressure response.

  19. ORBSIM- ESTIMATING GEOPHYSICAL MODEL PARAMETERS FROM PLANETARY GRAVITY DATA

    NASA Technical Reports Server (NTRS)

    Sjogren, W. L.

    1994-01-01

    The ORBSIM program was developed for the accurate extraction of geophysical model parameters from Doppler radio tracking data acquired from orbiting planetary spacecraft. The model of the proposed planetary structure is used in a numerical integration of the spacecraft along simulated trajectories around the primary body. Using line of sight (LOS) Doppler residuals, ORBSIM applies fast and efficient modelling and optimization procedures which avoid the traditional complex dynamic reduction of data. ORBSIM produces quantitative geophysical results such as size, depth, and mass. ORBSIM has been used extensively to investigate topographic features on the Moon, Mars, and Venus. The program has proven particularly suitable for modelling gravitational anomalies and mascons. The basic observable for spacecraft-based gravity data is the Doppler frequency shift of a transponded radio signal. The time derivative of this signal carries information regarding the gravity field acting on the spacecraft in the LOS direction (the LOS direction being the path between the spacecraft and the receiving station, either Earth or another satellite). There are many dynamic factors taken into account: earth rotation, solar radiation, acceleration from planetary bodies, tracking station time and location adjustments, etc. The actual trajectories of the spacecraft are simulated using least squares fits to conic motion. The theoretical Doppler readings from the simulated orbits are compared to actual Doppler observations and another least squares adjustment is made. ORBSIM has three modes of operation: trajectory simulation, optimization, and gravity modelling. In all cases, an initial gravity model of curved and/or flat disks, harmonics, and/or a force table is required as input. ORBSIM is written in FORTRAN 77 for batch execution and has been implemented on a DEC VAX 11/780 computer operating under VMS. This program was released in 1985.

  20. Bayesian parameter inference for empirical stochastic models of paleoclimatic records with dating uncertainty

    NASA Astrophysics Data System (ADS)

    Boers, Niklas; Goswami, Bedartha; Chekroun, Mickael; Svensson, Anders; Rousseau, Denis-Didier; Ghil, Michael

    2016-04-01

    In the recent past, empirical stochastic models have been successfully applied to model a wide range of climatic phenomena [1,2]. In addition to enhancing our understanding of the geophysical systems under consideration, multilayer stochastic models (MSMs) have been shown to be solidly grounded in the Mori-Zwanzig formalism of statistical physics [3]. They are also well-suited for predictive purposes, e.g., for the El Niño Southern Oscillation [4] and the Madden-Julian Oscillation [5]. In general, these models are trained on a given time series under consideration, and then assumed to reproduce certain dynamical properties of the underlying natural system. Most existing approaches are based on least-squares fitting to determine optimal model parameters, which does not allow for an uncertainty estimation of these parameters. This approach significantly limits the degree to which dynamical characteristics of the time series can be safely inferred from the model. Here, we are specifically interested in fitting low-dimensional stochastic models to time series obtained from paleoclimatic proxy records, such as the oxygen isotope ratio and dust concentration of the NGRIP record [6]. The time series derived from these records exhibit substantial dating uncertainties, in addition to the proxy measurement errors. In particular, for time series of this kind, it is crucial to obtain uncertainty estimates for the final model parameters. Following [7], we first propose a statistical procedure to shift dating uncertainties from the time axis to the proxy axis of layer-counted paleoclimatic records. Thereafter, we show how Maximum Likelihood Estimation in combination with Markov Chain Monte Carlo parameter sampling can be employed to translate all uncertainties present in the original proxy time series to uncertainties of the parameter estimates of the stochastic model. We compare time series simulated by the empirical model to the original time series in terms of standard
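
    The Bayesian step described above can be illustrated with a minimal Metropolis sampler. The sketch assumes a simple AR(1) stand-in for the empirical stochastic model and assumes that the dating uncertainty has already been translated into per-sample proxy uncertainties, which are then folded into the innovation variance (a deliberate simplification); none of this is the authors' multilayer-model code, and all values are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)

def log_like(theta, x, meas_sd):
    # AR(1): x[t+1] = phi*x[t] + eps, eps ~ N(0, sigma^2); per-sample
    # measurement error folded into the innovation variance (simplification)
    phi, log_sigma = theta
    var = np.exp(log_sigma) ** 2 + meas_sd[1:] ** 2
    resid = x[1:] - phi * x[:-1]
    return -0.5 * np.sum(resid ** 2 / var + np.log(2 * np.pi * var))

def metropolis(x, meas_sd, n_iter=20000, step=(0.02, 0.05)):
    theta = np.array([0.5, 0.0])                 # start: phi = 0.5, sigma = 1
    ll = log_like(theta, x, meas_sd)
    chain = []
    for _ in range(n_iter):
        prop = theta + np.asarray(step) * rng.standard_normal(2)
        if abs(prop[0]) < 1.0:                   # flat prior on stationary AR(1)
            ll_prop = log_like(prop, x, meas_sd)
            if np.log(rng.uniform()) < ll_prop - ll:
                theta, ll = prop, ll_prop
        chain.append(theta.copy())               # keep current state if rejected
    return np.array(chain)

# synthetic "proxy" series with hypothetical per-sample uncertainties
x = np.empty(500)
x[0] = 0.0
for t in range(499):
    x[t + 1] = 0.7 * x[t] + 0.3 * rng.standard_normal()
sd = np.full(500, 0.05)
chain = metropolis(x, sd)
print("posterior phi: %.2f +/- %.2f" % (chain[5000:, 0].mean(), chain[5000:, 0].std()))
```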

  1. Modeling parameter extraction for DNQ-novolak thick film resists

    NASA Astrophysics Data System (ADS)

    Henderson, Clifford L.; Scheer, Steven A.; Tsiartas, Pavlos C.; Rathsack, Benjamen M.; Sagan, John P.; Dammel, Ralph R.; Erdmann, Andreas; Willson, C. Grant

    1998-06-01

    Optical lithography with special thick film DNQ-novolac photoresists has been practiced for many years to fabricate microstructures that require feature heights ranging from several to hundreds of microns such as thin film magnetic heads. It is common in these thick film photoresist systems to observe interesting non-uniform profiles with narrow regions near the top surface of the film that transition into broader and more concave shapes near the bottom of the resist profile. A number of explanations have been proposed for these various observations including the formation of `dry skins' at the resist surface and the presence of solvent gradients in the film which serve to modify the local development rate of the photoresist. There have been few detailed experimental studies of the development behavior of thick film resists. This has been due in part to the difficulty in studying these films with conventional dissolution rate monitors (DRMs). In general, this lack of experimental data along with other factors has made simulation and modeling of thick film resist performance difficult. As applications such as thin film head manufacturing drive to smaller features with higher aspect ratios, the need for accurate thick film simulation capability continues to grow. A new multi-wavelength DRM tool has been constructed and used in conjunction with a resist bleaching tool and rigorous parameter extraction techniques to establish exposure and development parameters for two thick film resists, AZ™ 4330-RS and AZ™ 9200. Simulations based on these parameters show good agreement with resist profiles for these two resists.

  2. Inverse modeling of hydrologic parameters using surface flux and runoff observations in the Community Land Model

    NASA Astrophysics Data System (ADS)

    Sun, Y.; Hou, Z.; Huang, M.; Tian, F.; Leung, L. R.

    2013-04-01

    This study demonstrates the possibility of inverting hydrologic parameters using surface flux and runoff observations in version 4 of the Community Land Model (CLM4). Previous studies showed that surface flux and runoff calculations are sensitive to major hydrologic parameters in CLM4 over different watersheds, and illustrated the necessity and possibility of parameter calibration. Two inversion strategies, the deterministic least-square fitting and stochastic Markov-Chain Monte-Carlo (MCMC) Bayesian inversion approaches, are evaluated by applying them to CLM4 at selected sites. The unknowns to be estimated include surface and subsurface runoff generation parameters and vadose zone soil water parameters. We find that using model parameters calibrated by the least-square fitting provides little improvement in the model simulations but the sampling-based stochastic inversion approaches are consistent - as more information comes in, the predictive intervals of the calibrated parameters become narrower and the misfits between the calculated and observed responses decrease. In general, parameters that are identified to be significant through sensitivity analyses and statistical tests are better calibrated than those with weak or nonlinear impacts on flux or runoff observations. Temporal resolution of observations has larger impacts on the results of inverse modeling using heat flux data than runoff data. Soil and vegetation cover have important impacts on parameter sensitivities, leading to different patterns of posterior distributions of parameters at different sites. Overall, the MCMC-Bayesian inversion approach effectively and reliably improves the simulation of CLM under different climates and environmental conditions. Bayesian model averaging of the posterior estimates with different reference acceptance probabilities can smooth the posterior distribution and provide more reliable parameter estimates, but at the expense of wider uncertainty bounds.
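
    The deterministic least-square fitting strategy mentioned above can be illustrated with a generic misfit minimization. The toy runoff "model", forcing and observations below are placeholders standing in for a CLM4 simulation, which obviously cannot be embedded here; the parameter names and values are hypothetical.

```python
import numpy as np
from scipy.optimize import least_squares

def toy_runoff_model(params, forcing):
    """Stand-in for a land-surface simulation: two runoff-generation
    parameters map precipitation forcing to runoff (purely illustrative)."""
    fmax, qdm = params
    return qdm * np.exp(-fmax * forcing) * forcing

rng = np.random.default_rng(1)
forcing = rng.uniform(0.0, 20.0, 200)                    # synthetic precipitation
truth = toy_runoff_model([0.15, 0.8], forcing)
obs = truth + 0.05 * rng.standard_normal(forcing.size)   # "observed" runoff

def residuals(params):
    return toy_runoff_model(params, forcing) - obs

fit = least_squares(residuals, x0=[0.05, 0.5], bounds=([0.0, 0.0], [1.0, 5.0]))
print("calibrated parameters:", fit.x)
```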

  3. Measuring morphological parameters of the pelvic floor for finite element modelling purposes.

    PubMed

    Janda, Stepán; van der Helm, Frans C T; de Blok, Sjoerd B

    2003-06-01

    The goal of this study was to obtain a complete data set needed for studying the complex biomechanical behaviour of the pelvic floor muscles using a computer model based on the finite element (FE) theory. The model should be able to predict the effect of surgical interventions and give insight into the function of pelvic floor muscles. Because there was a lack of any information concerning morphological parameters of the pelvic floor muscle structures, we performed an experimental measurement to uncover those morphological parameters. Geometric parameters as well as muscle parameters of the pelvic floor muscles were measured on an embalmed female cadaver. A three-dimensional (3D) geometric data set of the pelvic floor including muscle fibre directions was obtained using a palpator device. A 3D surface model based on the experimental data, needed for mathematical modelling of the pelvic floor, was created. For all parts of the diaphragma pelvis, the optimal muscle fibre length was determined by laser diffraction measurements of the sarcomere length. In addition, other muscle parameters such as physiological cross-sectional area and total muscle fibre length were determined. Apart from these measurements we obtained a data set of the pelvic floor structures based on nuclear magnetic resonance imaging (MRI) on the same cadaver specimen. The purpose of this experiment was to discover the relationship between the MRI morphology and geometrical parameters obtained from the previous measurements. The produced data set is not only important for biomechanical modelling of the pelvic floor muscles, but it also describes the geometry of muscle fibres and is useful for functional analysis of the pelvic floor in general. By the use of many reference landmarks all these morphologic data concerning fibre directions and optimal fibre length can be morphed to the geometrical data based on segmentation from MRI scans. These data can be directly used as an input for building a

  4. Symbolic-numeric estimation of parameters in biochemical models by quantifier elimination.

    PubMed

    Anai, Hirokazu; Orii, Shigeo; Horimoto, Katsuhisa

    2006-10-01

    The sequencing of complete genomes allows analyses of the interactions between various biological molecules on a genomic scale, which prompted us to simulate the global behaviors of biological phenomena on the molecular level. One of the basic mathematical problems in the simulation is the parameter optimization in the kinetic model for complex dynamics, and many estimation methods have been designed. We introduce a new approach to estimate the parameters in biological kinetic models by quantifier elimination (QE), in combination with numerical simulation methods. The estimation method was applied to a model for the inhibition kinetics of HIV proteinase with ten parameters and nine variables, and attained a goodness of fit to 300 points of observed data of the same magnitude as that obtained by previous estimation methods, remarkably while using only one or two data points. Furthermore, the utilization of QE demonstrated the feasibility of the present method for elucidating the behavior of the parameters and the variables in the analyzed model. Therefore, the present symbolic-numeric method is a powerful approach to reveal the fundamental mechanisms of kinetic models, in addition to being a computational engine. PMID:17099943

  5. Application of high-resolution, remotely sensed data for transient storage modeling parameter estimation

    NASA Astrophysics Data System (ADS)

    Bingham, Q. G.; Neilson, B. T.; Neale, C. M. U.; Cardenas, M. B.

    2012-08-01

    This paper presents a method that uses high-resolution multispectral and thermal infrared imagery from airborne remote sensing for estimating two model parameters within the two-zone in-stream temperature and solute (TZTS) model. Previous TZTS modeling efforts have provided accurate in-stream temperature predictions; however, model parameter ranges resulting from the multiobjective calibrations were quite large. In addition to the data types previously required to populate and calibrate the TZTS model, high-resolution, remotely sensed thermal infrared (TIR) and near-infrared, red, and green (multispectral) band imagery were collected to help estimate two previously calibrated parameters: (1) average total channel width (BTOT) and (2) the fraction of the channel comprising surface transient storage zones (β). Multispectral imagery in combination with the TIR imagery provided high-resolution estimates of BTOT. In-stream temperature distributions provided by the TIR imagery enabled the calculation of temperature thresholds at which main channel temperatures could be delineated from surface transient storage, permitting the estimation of β. It was found that an increase in the resolution and frequency at which BTOT and β were physically estimated resulted in similar objective functions in the main channel and transient storage zones, but the uncertainty associated with the estimated parameters decreased.
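
    A minimal sketch of the image-based estimation of BTOT and β, assuming one cross-channel transect of TIR pixel temperatures, a wetted-pixel mask derived from the multispectral bands, and a temperature threshold separating the main channel from surface transient storage; the pixel size, temperatures and threshold value are hypothetical.

```python
import numpy as np

def channel_metrics(tir_kelvin, water_mask, pixel_size_m, t_threshold):
    """Estimate total channel width (BTOT) and the surface transient-storage
    fraction (beta) for one stream cross section of imagery.

    tir_kelvin  : 1-D array of surface temperatures across the transect
    water_mask  : boolean array marking wetted (in-channel) pixels
    t_threshold : temperature separating main channel from surface storage
    """
    btot = water_mask.sum() * pixel_size_m                     # total channel width
    storage = water_mask & (tir_kelvin > t_threshold)          # warmer, slow-moving zones
    beta = storage.sum() / max(water_mask.sum(), 1)            # storage fraction
    return btot, beta

# hypothetical 0.5 m pixels across one transect
temps = np.array([291.2, 291.4, 292.8, 293.1, 291.3, 291.2, 293.0])
mask = np.array([True, True, True, True, True, True, True])
print(channel_metrics(temps, mask, 0.5, 292.5))
```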

  6. Improving filtering and prediction of spatially extended turbulent systems with model errors through stochastic parameter estimation

    SciTech Connect

    Gershgorin, B.; Harlim, J.; Majda, A.J.

    2010-01-01

    The filtering and predictive skill for turbulent signals is often limited by the lack of information about the true dynamics of the system and by our inability to resolve the assumed dynamics with sufficiently high resolution using the current computing power. The standard approach is to use a simple yet rich family of constant parameters to account for model errors through parameterization. This approach can have significant skill by fitting the parameters to some statistical feature of the true signal; however in the context of real-time prediction, such a strategy performs poorly when intermittent transitions to instability occur. Alternatively, we need a set of dynamic parameters. One strategy for estimating parameters on the fly is a stochastic parameter estimation through partial observations of the true signal. In this paper, we extend our newly developed stochastic parameter estimation strategy, the Stochastic Parameterization Extended Kalman Filter (SPEKF), to filtering sparsely observed spatially extended turbulent systems which exhibit abrupt stability transition from time to time despite a stable average behavior. For our primary numerical example, we consider a turbulent system of externally forced barotropic Rossby waves with instability introduced through intermittent negative damping. We find high filtering skill of SPEKF applied to this toy model even in the case of very sparse observations (with only 15 out of the 105 grid points observed) and with unspecified external forcing and damping. Additive and multiplicative bias corrections are used to learn the unknown features of the true dynamics from observations. We also present a comprehensive study of predictive skill in the one-mode context including the robustness toward variation of stochastic parameters, imperfect initial conditions and finite ensemble effect. Furthermore, the proposed stochastic parameter estimation scheme applied to the same spatially extended Rossby wave system demonstrates

  7. Adaptive neuro-fuzzy inference system (ANFIS) to predict CI engine parameters fueled with nano-particles additive to diesel fuel

    NASA Astrophysics Data System (ADS)

    Ghanbari, M.; Najafi, G.; Ghobadian, B.; Mamat, R.; Noor, M. M.; Moosavian, A.

    2015-12-01

    This paper studies the use of an adaptive neuro-fuzzy inference system (ANFIS) to predict the performance parameters and exhaust emissions of a diesel engine operating on nano-diesel blended fuels. In order to predict the engine parameters, the whole experimental data set was randomly divided into training and testing data. For ANFIS modelling, a Gaussian curve membership function (gaussmf) and 200 training epochs (iterations) were found to be the optimum choices for the training process. The results demonstrate that ANFIS is capable of predicting the diesel engine performance and emissions. In the experimental step, carbon nanotubes (CNT) (40, 80 and 120 ppm) and silver nanoparticles (40, 80 and 120 ppm) were prepared and added as additives to the diesel fuel. A six-cylinder, four-stroke diesel engine was fuelled with these new blended fuels and operated at different engine speeds. Experimental test results indicated that adding nanoparticles to diesel fuel increased diesel engine power and torque output. For nano-diesel it was found that the brake specific fuel consumption (bsfc) decreased compared with neat diesel fuel. The results showed that as the nanoparticle concentration in diesel fuel increased from 40 ppm to 120 ppm, CO2 emission increased. CO emission from diesel fuel with nanoparticles was significantly lower than from pure diesel fuel. UHC emission decreased with the silver nano-diesel blended fuel, while it increased with fuels containing CNT nanoparticles. The trend of NOx emission was the inverse of that of UHC emission: with nanoparticles added to the blended fuels, NOx increased compared with neat diesel fuel. The tests revealed that silver and CNT nanoparticles can be used as additives in diesel fuel to improve combustion and reduce exhaust emissions significantly.

  8. Modeling soil detachment capacity by rill flow using hydraulic parameters

    NASA Astrophysics Data System (ADS)

    Wang, Dongdong; Wang, Zhanli; Shen, Nan; Chen, Hao

    2016-04-01

    The relationship between soil detachment capacity (Dc) by rill flow and hydraulic parameters (e.g., flow velocity, shear stress, unit stream power, stream power, and unit energy) at low flow rates is investigated to establish an accurate experimental model. Experiments are conducted using a 4 × 0.1 m rill hydraulic flume with a constant artificial roughness on the flume bed. The flow rates range from 0.22 × 10⁻³ m² s⁻¹ to 0.67 × 10⁻³ m² s⁻¹, and the slope gradients vary from 15.8% to 38.4%. Regression analysis indicates that the Dc by rill flow can be predicted using the linear equations of flow velocity, stream power, unit stream power, and unit energy. Dc by rill flow that is fitted to shear stress can be predicted with a power function equation. Predictions based on flow velocity, unit energy, and stream power are powerful, but those based on shear stress, especially on unit stream power, are relatively poor. The prediction based on flow velocity provides the best estimates of Dc by rill flow because of the simplicity and availability of its measurements. Owing to error in measuring flow velocity at low flow rates, the predictive abilities of Dc by rill flow using all hydraulic parameters are relatively lower in this study compared with the results of previous research. The measuring accuracy of experiments for flow velocity should be improved in future research.
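
    The two regression forms described above (a linear equation in flow velocity and a power function in shear stress) can be fitted with a few lines of curve fitting; the observations below are invented for illustration and are not the flume data.

```python
import numpy as np
from scipy.optimize import curve_fit

def linear(v, a, b):          # Dc = a*V + b   (flow-velocity form)
    return a * v + b

def power(tau, k, n):         # Dc = k*tau**n  (shear-stress form)
    return k * tau ** n

# hypothetical flume observations: velocity (m/s), shear stress (Pa), Dc (kg m^-2 s^-1)
v = np.array([0.32, 0.41, 0.48, 0.55, 0.63, 0.71])
tau = np.array([1.9, 2.6, 3.4, 4.1, 5.0, 6.2])
dc = np.array([0.011, 0.019, 0.027, 0.034, 0.044, 0.055])

(pa, pb), _ = curve_fit(linear, v, dc)
(pk, pn), _ = curve_fit(power, tau, dc, p0=[0.01, 1.0])
print(f"Dc = {pa:.3f}*V + {pb:.3f};  Dc = {pk:.4f}*tau^{pn:.2f}")
```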

  9. Parameters-related uncertainty in modeling sugar cane yield with an agro-Land Surface Model

    NASA Astrophysics Data System (ADS)

    Valade, A.; Ciais, P.; Vuichard, N.; Viovy, N.; Ruget, F.; Gabrielle, B.

    2012-12-01

    Agro-Land Surface Models (agro-LSM) have been developed from the coupling of specific crop models and large-scale generic vegetation models. They aim at accounting for the spatial distribution and variability of energy, water and carbon fluxes within the soil-vegetation-atmosphere continuum with a particular emphasis on how crop phenology and agricultural management practice influence the turbulent fluxes exchanged with the atmosphere, and the underlying water and carbon pools. A part of the uncertainty in these models is related to the many parameters included in the models' equations. In this study, we quantify the parameter-based uncertainty in the simulation of sugar cane biomass production with the agro-LSM ORCHIDEE-STICS using a multi-regional approach with data from sites in Australia, La Reunion and Brazil. First, the main source of uncertainty for the output variables NPP, GPP, and sensible heat flux (SH) is determined through a screening of the main parameters of the model on a multi-site basis leading to the selection of a subset of most sensitive parameters causing most of the uncertainty. In a second step, a sensitivity analysis is carried out on the parameters selected from the screening analysis at a regional scale. For this, a Monte-Carlo sampling method associated with the calculation of Partial Ranked Correlation Coefficients is used. First, we quantify the sensitivity of the output variables to individual input parameters on a regional scale for two regions of intensive sugar cane cultivation in Australia and Brazil. Then, we quantify the overall uncertainty in the simulation's outputs propagated from the uncertainty in the input parameters. Seven parameters are identified by the screening procedure as driving most of the uncertainty in the agro-LSM ORCHIDEE-STICS model output at all sites. These parameters control photosynthesis (optimal temperature of photosynthesis, optimal carboxylation rate), radiation interception (extinction coefficient), root
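
    Partial ranked correlation coefficients of the kind mentioned above can be computed directly from a Monte-Carlo parameter sample and the corresponding model outputs. The sketch below is a generic implementation on synthetic data, not ORCHIDEE-STICS output.

```python
import numpy as np
from scipy.stats import rankdata

def prcc(X, y):
    """Partial rank correlation coefficient of each column of X with output y."""
    R = np.column_stack([rankdata(X[:, j]) for j in range(X.shape[1])])
    ry = rankdata(y)
    out = []
    for j in range(R.shape[1]):
        others = np.column_stack([np.ones(len(ry)), np.delete(R, j, axis=1)])
        # residuals after removing the (linear, rank-space) effect of the other parameters
        res_x = R[:, j] - others @ np.linalg.lstsq(others, R[:, j], rcond=None)[0]
        res_y = ry - others @ np.linalg.lstsq(others, ry, rcond=None)[0]
        out.append(np.corrcoef(res_x, res_y)[0, 1])
    return np.array(out)

rng = np.random.default_rng(2)
X = rng.uniform(size=(500, 3))                                      # Monte-Carlo parameter sample
y = 2 * X[:, 0] - 0.5 * X[:, 1] + 0.05 * rng.standard_normal(500)   # synthetic model output
print(prcc(X, y))                                                   # strong, moderate, ~zero
```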

  10. State and parameter estimation of a neural mass model from electrophysiological signals during the status epilepticus.

    PubMed

    López-Cuevas, Armando; Castillo-Toledo, Bernardino; Medina-Ceja, Laura; Ventura-Mejía, Consuelo

    2015-06-01

    Status epilepticus is an emergency condition in patients with prolonged seizure or recurrent seizures without full recovery between them. The pathophysiological mechanisms of status epilepticus are not well established. With this in mind, we use a computational modeling approach combined with in vivo electrophysiological data obtained from an experimental model of status epilepticus to infer about changes that may lead to a seizure. Special emphasis is placed on analyzing parameter changes during or after pilocarpine administration. A cubature Kalman filter is utilized to estimate parameters and states of the model in real time from the observed electrophysiological signals. It was observed that during basal activity (before pilocarpine administration) the parameters presented a standard deviation below 30% of the mean value, while during SE activity the parameters presented variations larger than 200% of the mean value with respect to the basal state. The ratio of excitation to inhibition increased during SE activity by 80% with respect to the transition state and reached its lowest value during cessation. In addition, a progression between low and fast inhibitions before or during this condition was found. This method can be implemented in real time, which is particularly important for the design of stimulation devices that attempt to stop seizures. These changes in the parameters analyzed during seizure activity can lead to better understanding of the mechanisms of epilepsy and to improved treatments.

  11. Maximum likelihood identification of aircraft parameters with unsteady aerodynamic modelling

    NASA Technical Reports Server (NTRS)

    Keskar, D. A.; Wells, W. R.

    1979-01-01

    A simplified aerodynamic force model based on the physical principle of Prandtl's lifting line theory and trailing vortex concept has been developed to account for unsteady aerodynamic effects in aircraft dynamics. Longitudinal equations of motion have been modified to include these effects. The presence of convolution integrals in the modified equations of motion led to a frequency domain analysis utilizing Fourier transforms. This reduces the integro-differential equations to relatively simple algebraic equations, thereby reducing computation time significantly. A parameter extraction program based on the maximum likelihood estimation technique is developed in the frequency domain. The extraction algorithm contains a new scheme for obtaining sensitivity functions by using numerical differentiation. The paper concludes with examples using computer-generated and real flight data.
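
    The numerical-differentiation scheme for sensitivity functions mentioned above can be sketched generically with central differences applied to a frequency-domain model. The first-order transfer function below is a stand-in, not the aircraft model of the paper.

```python
import numpy as np

def model_response(theta, omega):
    """Toy frequency-domain model: H(i*omega) = K / (1 + i*omega*tau), theta = (K, tau)."""
    K, tau = theta
    return K / (1.0 + 1j * omega * tau)

def sensitivities(theta, omega, rel_step=1e-4):
    """Central-difference sensitivities dH/dtheta_j, used in place of
    analytic gradients inside a maximum-likelihood estimator."""
    theta = np.asarray(theta, dtype=float)
    S = []
    for j in range(theta.size):
        h = rel_step * max(abs(theta[j]), 1.0)
        tp, tm = theta.copy(), theta.copy()
        tp[j] += h
        tm[j] -= h
        S.append((model_response(tp, omega) - model_response(tm, omega)) / (2 * h))
    return np.array(S)   # shape (n_params, n_frequencies)

omega = np.linspace(0.1, 10.0, 50)
print(sensitivities([2.0, 0.5], omega).shape)
```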

  12. Sound propagation and absorption in foam - A distributed parameter model.

    NASA Technical Reports Server (NTRS)

    Manson, L.; Lieberman, S.

    1971-01-01

    Liquid-base foams are highly effective sound absorbers. A better understanding of the mechanisms of sound absorption in foams was sought by exploration of a mathematical model of bubble pulsation and coupling and the development of a distributed-parameter mechanical analog. A solution by electric-circuit analogy was thus obtained and transmission-line theory was used to relate the physical properties of the foams to the characteristic impedance and propagation constants of the analog transmission line. Comparison of measured physical properties of the foam with values obtained from measured acoustic impedance and propagation constants and the transmission-line theory showed good agreement. We may therefore conclude that the sound propagation and absorption mechanisms in foam are accurately described by the resonant response of individual bubbles coupled to neighboring bubbles.
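
    The transmission-line relations used in this analogy, linking the distributed per-unit-length parameters to the characteristic impedance and propagation constant, are easy to evaluate directly. The numerical values below are hypothetical and do not correspond to any measured foam.

```python
import numpy as np

def line_constants(R, L, G, C, freq_hz):
    """Characteristic impedance Z0 and propagation constant gamma of a
    distributed-parameter (transmission-line) analog:
        Z0    = sqrt((R + i*w*L) / (G + i*w*C))
        gamma = sqrt((R + i*w*L) * (G + i*w*C))   (= alpha + i*beta)
    """
    w = 2 * np.pi * freq_hz
    zs = R + 1j * w * L      # series impedance per unit length
    yp = G + 1j * w * C      # shunt admittance per unit length
    return np.sqrt(zs / yp), np.sqrt(zs * yp)

# hypothetical per-unit-length values standing in for a foam's mechanical analog
Z0, gamma = line_constants(R=5.0, L=1.2e-3, G=1e-4, C=8e-7, freq_hz=2000.0)
print("attenuation alpha =", gamma.real, "Np/m;  |Z0| =", abs(Z0))
```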

  13. Temporal adaptability and the inverse relationship to sensitivity: a parameter identification model.

    PubMed

    Langley, Keith

    2005-01-01

    Following a prolonged period of visual adaptation to a temporally modulated sinusoidal luminance pattern, the threshold contrast of a similar visual pattern is elevated. The adaptive elevation in threshold contrast is selective for spatial frequency, may saturate at low adaptor contrast, and increases as a function of the spatio-temporal frequency of the adapting signal. A model for signal extraction that is capable of explaining these threshold contrast effects of adaptation is proposed. Contrast adaptation in the model is explained by the identification of the parameters of an environmental model: the autocorrelation function of the visualized signal. The proposed model predicts that the adaptability of threshold contrast is governed by unpredicted signal variations present in the visual signal, and thus represents an internal adjustment by the visual system that takes into account these unpredicted signal variations given the additional possibility for signal corruption by additive noise.

  14. Parameter Estimation and Model Validation of Nonlinear Dynamical Networks

    SciTech Connect

    Abarbanel, Henry; Gill, Philip

    2015-03-31

    In the performance period of this work under a DOE contract, the co-PIs, Philip Gill and Henry Abarbanel, developed new methods for statistical data assimilation for problems of DOE interest, including geophysical and biological problems. This included numerical optimization algorithms for variational principles and new parallel-processing Monte Carlo routines for performing the path integrals of statistical data assimilation. These results have been summarized in the monograph: “Predicting the Future: Completing Models of Observed Complex Systems” by Henry Abarbanel, published by Springer-Verlag in June 2013. Additional results and details have appeared in the peer-reviewed literature.

  15. Effect of addition of a probiotic micro-organism to broiler diet on intestinal mucosal architecture and electrophysiological parameters.

    PubMed

    Awad, W A; Ghareeb, K; Böhm, J

    2010-08-01

    Probiotics might be one of the solutions to reduce the effects of the recent ban on antimicrobial growth promoters in feed. However, the mode of action of probiotics is still not fully understood. Therefore, evaluating probiotics (microbial feed additives) is essential. Thus the objective of this work was to investigate the efficacy of a new microbial feed additive (Lactobacillus salivarius and Lactobacillus reuteri) in broiler nutrition. The body weight (BW) and average daily weight gain were relatively increased by the dietary inclusion of Lactobacillus sp. in broiler diets. Furthermore, the Lactobacillus feed additive influenced the histomorphological measurements of small intestinal villi. The addition of Lactobacillus sp. increased (p < 0.05) the villus height (VH)/crypt depth ratio, and the VH was numerically increased in the duodenum. The duodenal crypt depth remained unaffected (p > 0.05), while the ileal crypt depth was decreased by dietary supplementation of Lactobacillus sp. compared with the control. At the end of the feeding period, the basal and glucose-stimulated short-circuit current (Isc) and electrical tissue conductivity were measured in the isolated gut mucosa to characterize the electrical properties of the gut. The addition of glucose on the mucosal side in the Ussing chamber produced a significant increase (p = 0.001) in Isc in both the jejunum and colon relative to the basal values in the Lactobacillus probiotic group. This increase in Isc in the jejunum of the probiotic group was about twice the basal value, while in the control group it was about half of the basal value. In addition, the ΔIsc after glucose addition to the large intestine was greater than the ΔIsc in the small intestine in both the control and probiotic groups. Moreover, in both the jejunum and colon, the increase in Isc for birds fed Lactobacillus was higher than in their control counterparts (p ≤ 0.1). This result suggests that the addition of

  16. Verification Techniques for Parameter Selection and Bayesian Model Calibration Presented for an HIV Model

    NASA Astrophysics Data System (ADS)

    Wentworth, Mami Tonoe

    Uncertainty quantification plays an important role when making predictive estimates of model responses. In this context, uncertainty quantification is defined as quantifying and reducing uncertainties, and the objective is to quantify uncertainties in parameters, models and measurements, and propagate the uncertainties through the model, so that one can make a predictive estimate with quantified uncertainties. Two of the aspects of uncertainty quantification that must be performed prior to propagating uncertainties are model calibration and parameter selection. There are several efficient techniques for these processes; however, the accuracy of these methods is often not verified. This is the motivation for our work, and in this dissertation, we present and illustrate verification frameworks for model calibration and parameter selection in the context of biological and physical models. First, HIV models, developed and improved by [2, 3, 8], describe the viral infection dynamics of an HIV disease. These are also used to make predictive estimates of viral loads and T-cell counts and to construct an optimal control for drug therapy. Estimating input parameters is an essential step prior to uncertainty quantification. However, not all the parameters are identifiable, implying that they cannot be uniquely determined by the observations. These unidentifiable parameters can be partially removed by performing parameter selection, a process in which parameters that have minimal impacts on the model response are determined. We provide verification techniques for Bayesian model calibration and parameter selection for an HIV model. As an example of a physical model, we employ a heat model with experimental measurements presented in [10]. A steady-state heat model represents a prototypical behavior for the heat conduction and diffusion processes involved in a thermal-hydraulic model, which is a part of nuclear reactor models. We employ this simple heat model to illustrate verification

  17. Prediction of Parameters Distribution of Upward Boiling Two-Phase Flow With Two-Fluid Models

    SciTech Connect

    Yao, Wei; Morel, Christophe

    2002-07-01

    In this paper, a multidimensional two-fluid model with additional turbulence k-ε equations is used to predict the two-phase parameters distribution in Freon R12 boiling flow. The 3D module of the CATHARE code is used for numerical calculation. The DEBORA experiment has been chosen to evaluate our models. The radial profiles of the outlet parameters were measured by means of an optical probe. The comparison of the radial profiles of void fraction, liquid temperature, gas velocity and volumetric interfacial area at the end of the heated section shows that the multidimensional two-fluid model with proper constitutive relations can yield reasonably predicted results in boiling conditions. Sensitivity tests show that the turbulent dispersion force, which involves the void fraction gradient, plays an important role in determining the void fraction distribution; and the turbulence eddy viscosity is a significant factor to influence the liquid temperature distribution. (authors)

  18. Adapting isostatic microbial growth parameters into non-isostatic models for use in dynamic ecosystems

    NASA Astrophysics Data System (ADS)

    Spangler, J.; Schulz, C. J.; Childers, G. W.

    2009-12-01

    Modeling microbial respiration and growth is an important tool for understanding many geochemical systems. The estimation of growth parameters relies on fitting experimental data to a selected model, such as the Monod equation or some variation, most often under batch or continuous culture conditions. While continuous culture conditions can be analogous to some natural environments, this is often not the case. More often, microorganisms are subject to fluctuating temperature, substrate concentrations, pH, water activity, and inhibitory compounds, to name a few. Microbial growth estimation under non-isothermal conditions has been possible through use of numerical solutions and has seen use in the field of food microbiology. In this study, numerical solutions were used to extend growth models to more non-isostatic conditions using momentary growth-rate estimates. Using a model organism common in wastewater (Paracoccus denitrificans), growth and respiration rate parameters were estimated under varying static conditions (temperature, pH, electron donor/acceptor concentrations) and used to construct a non-isostatic growth model. After construction of the model, additional experiments were conducted to validate the model. These non-isostatic models hold the potential for allowing the prediction of cell biomass and respiration rates under a diverse array of conditions. By not restricting models to constant environmental conditions, the general applicability of the model can be greatly improved.
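
    The momentary-growth-rate idea described above amounts to integrating Monod kinetics while re-evaluating the rate parameters from the instantaneous environmental conditions. The sketch below does this for a time-varying temperature only; the kinetic constants, the Gaussian temperature response and the diel temperature forcing are all hypothetical.

```python
import numpy as np
from scipy.integrate import solve_ivp

# parameters estimated under static conditions (hypothetical values)
MU_OPT, K_S, Y_XS = 0.45, 0.08, 0.5     # 1/h, g/L, g biomass per g substrate
T_OPT, T_WIDTH = 303.0, 8.0             # K

def mu_max(T):
    """Temperature correction of the maximum specific growth rate
    (a simple Gaussian response; the real dependence would itself be fitted)."""
    return MU_OPT * np.exp(-((T - T_OPT) / T_WIDTH) ** 2)

def temperature(t):
    return 298.0 + 6.0 * np.sin(2 * np.pi * t / 24.0)   # diel cycle, hypothetical

def rhs(t, y):
    X, S = y
    mu = mu_max(temperature(t)) * S / (K_S + S)          # momentary Monod rate
    return [mu * X, -mu * X / Y_XS]                      # biomass growth, substrate use

sol = solve_ivp(rhs, (0.0, 72.0), [0.01, 5.0], max_step=0.1)
print("final biomass (g/L):", sol.y[0, -1])
```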

  19. Small-signal model parameter extraction for AlGaN/GaN HEMT

    NASA Astrophysics Data System (ADS)

    Le, Yu; Yingkui, Zheng; Sheng, Zhang; Lei, Pang; Ke, Wei; Xiaohua, Ma

    2016-03-01

    A new 22-element small signal equivalent circuit model for the AlGaN/GaN high electron mobility transistor (HEMT) is presented. Compared with the traditional equivalent circuit model, the gate forward and breakdown conductances (Ggsf and Ggdf) are introduced into the new model to characterize the gate leakage current. Additionally, for the new gate-connected field plate and the source-connected field plate of the device, an improved method for extracting the parasitic capacitances is proposed, which can be applied to the small-signal extraction for an asymmetric device. To verify the model, S-parameters are obtained from the modeling and measurements. The good agreement between the measured and the simulated results indicates that this model is accurate, stable and comparatively clear in physical significance.

  20. Inverse Modeling of Hydrologic Parameters Using Surface Flux and Streamflow Observations in the Community Land Model

    NASA Astrophysics Data System (ADS)

    Sun, Y.; Hou, Z.; Huang, M.; Tian, F.; Leung, L.

    2012-12-01

    This study aims at demonstrating the possibility of calibrating hydrologic parameters using surface flux and streamflow observations in version 4 of the Community Land Model (CLM4). Previously we showed that surface flux and streamflow calculations are sensitive to several key hydrologic parameters in CLM4, and discussed the necessity and possibility of parameter calibration. In this study, we evaluate the performance of several different inversion strategies, including least-square fitting, quasi Monte-Carlo (QMC) sampling based Bayesian updating, and a Markov-Chain Monte-Carlo (MCMC) Bayesian inversion approach. The parameters to be calibrated include the surface and subsurface runoff generation parameters and vadose zone soil water parameters. We discuss the effects of surface flux and streamflow observations on the inversion results and compare their consistency and reliability using both monthly and daily observations at various flux tower and MOPEX sites. We find that the sampling-based stochastic inversion approaches behave consistently - as more information comes in, the predictive intervals of the calibrated parameters as well as the misfits between the calculated and observed responses decrease. In general, the parameters that are identified to be significant through sensitivity analyses and statistical tests are better calibrated than those with weak or nonlinear impacts on flux or streamflow observations. We also evaluated the possibility of probabilistic model averaging for more consistent parameter estimation.

  1. Modeling Self-Ionized Plasma Wakefield Acceleration for Afterburner Parameters Using QuickPIC

    SciTech Connect

    Zhou, M.; Clayton, C.E.; Decyk, V.K.; Huang, C.; Johnson, D.K.; Joshi, C.; Lu, W.; Mori, W.B.; Tsung, F.S.; Deng, S.; Katsouleas, T.; Muggli, P.; Oz, E.; Decker, F.-J.; Iverson, R.; O'Connel, C.; Walz, D.

    2006-01-25

    For the parameters envisaged in possible afterburner stages[1] of a plasma wakefield accelerator (PWFA), the self-fields of the particle beam can be intense enough to tunnel ionize some neutral gases. Tunnel ionization has been investigated as a way for the beam itself to create the plasma, and the wakes generated may differ from those generated in pre-ionized plasmas[2],[3]. However, it is not practical to model the whole stage of PWFA with afterburner parameters using the models described in [2] and [3]. Here we describe the addition of a tunnel ionization package using the ADK model into QuickPIC, a highly efficient quasi-static particle in cell (PIC) code which can model a PWFA with afterburner parameters. Comparison between results from OSIRIS (a full PIC code with ionization) and from QuickPIC with the ionization package shows good agreement. Preliminary results using parameters relevant to the E164X experiment and the upcoming E167 experiment at SLAC are shown.

  2. Physical property parameter set for modeling ICPP aqueous wastes with ASPEN electrolyte NRTL model

    SciTech Connect

    Schindler, R.E.

    1996-09-01

    The aqueous waste evaporators at the Idaho Chemical Processing Plant (ICPP) are being modeled using ASPEN software. The ASPEN software calculates chemical and vapor-liquid equilibria with activity coefficients calculated using the electrolyte Non-Random Two Liquid (NRTL) model for local excess Gibbs free energies of interactions between ions and molecules in solution. The use of the electrolyte NRTL model requires the determination of empirical parameters for the excess Gibbs free energies of the interactions between species in solution. This report covers the development of a set of parameters, from literature data, for the use of the electrolyte NRTL model with the major solutes in the ICPP aqueous wastes.

  3. Sensitivity of numerical dispersion modeling to explosive source parameters

    SciTech Connect

    Baskett, R.L.; Cederwall, R.T.

    1991-02-13

    The calculation of downwind concentrations from non-traditional sources, such as explosions, provides unique challenges to dispersion models. The US Department of Energy has assigned the Atmospheric Release Advisory Capability (ARAC) at the Lawrence Livermore National Laboratory (LLNL) the task of estimating the impact of accidental radiological releases to the atmosphere anywhere in the world. Our experience includes responses to over 25 incidents in the past 16 years, and about 150 exercises a year. Examples of responses to explosive accidents include the 1980 Titan 2 missile fuel explosion near Damascus, Arkansas and the hydrogen gas explosion in the 1986 Chernobyl nuclear power plant accident. Based on judgment and experience, we frequently estimate the source geometry and the amount of toxic material aerosolized as well as its particle size distribution. To expedite our real-time response, we developed some automated algorithms and default assumptions about several potential sources. It is useful to know how well these algorithms perform against real-world measurements and how sensitive our dispersion model is to the potential range of input values. In this paper we present the algorithms we use to simulate explosive events, compare these methods with limited field data measurements, and analyze their sensitivity to input parameters. 14 refs., 7 figs., 2 tabs.

  4. Parameter Estimation and Parameterization Uncertainty Using Bayesian Model Averaging

    NASA Astrophysics Data System (ADS)

    Tsai, F. T.; Li, X.

    2007-12-01

    This study proposes Bayesian model averaging (BMA) to address parameter estimation uncertainty arising from non-uniqueness in parameterization methods. BMA provides a means of incorporating multiple parameterization methods for prediction through the law of total probability, with which an ensemble average of the hydraulic conductivity distribution is obtained. Estimation uncertainty is described by the BMA variances, which contain variances within and between parameterization methods. BMA shows that considering more parameterization methods tends to increase estimation uncertainty and that estimation uncertainty is always underestimated when a single parameterization method is used. Two major problems in applying BMA to hydraulic conductivity estimation using a groundwater inverse method are discussed in this study. The first problem is the use of posterior probabilities in BMA, which tends to single out one best method and discard other good methods. This problem arises from Occam's window, which only accepts models in a very narrow range. We propose a variance window to replace Occam's window to cope with this problem. The second problem is the use of the Kashyap information criterion (KIC), which makes BMA tend to prefer highly uncertain parameterization methods because it includes the Fisher information matrix. We found that the Bayesian information criterion (BIC) is a good approximation to KIC and is able to avoid controversial results. We applied BMA to hydraulic conductivity estimation in the 1,500-foot sand aquifer in East Baton Rouge Parish, Louisiana.
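
    The BMA bookkeeping described above (information-criterion weights plus within- and between-method variances) can be written in a few lines. The sketch below uses BIC-based weights, in line with the recommendation in the abstract; the estimates, variances and BIC values are invented for illustration.

```python
import numpy as np

def bma_combine(means, variances, bic):
    """Combine estimates from several parameterization methods with Bayesian
    model averaging: weights from BIC (proportional to exp(-0.5*dBIC)); total
    variance = within-method + between-method components."""
    means, variances, bic = map(np.asarray, (means, variances, bic))
    w = np.exp(-0.5 * (bic - bic.min()))
    w /= w.sum()
    mean = np.sum(w * means)
    within = np.sum(w * variances)
    between = np.sum(w * (means - mean) ** 2)
    return mean, within + between, w

# hypothetical log10-K estimates at one location from three parameterization methods
m, v, b = [-4.1, -3.8, -4.3], [0.04, 0.09, 0.06], [120.0, 123.5, 121.2]
print(bma_combine(m, v, b))
```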

  5. Fundamental parameters of pulsating stars from atmospheric models

    NASA Astrophysics Data System (ADS)

    Barcza, S.

    2006-12-01

    A purely photometric method is reviewed to determine the distance, mass, equilibrium temperature, and luminosity of pulsating stars by using model atmospheres and hydrodynamics. T Sex is given as an example: on the basis of Kurucz atmospheric models and UBVRI (in both Johnson and Kron-Cousins systems) data, the variation of angular diameter, effective temperature, and surface gravity is derived as a function of phase, and a mass M = (0.76 ± 0.09) M⊙, a distance d = 530 ± 67 pc, Rmax = 2.99 R⊙, Rmin = 2.87 R⊙, and a magnitude-averaged visual absolute brightness ⟨MV⟩ = 1.17 ± 0.26 mag are found. During a pulsation cycle, four standstills of the atmosphere are pointed out, indicating the occurrence of two shocks in the atmosphere. The derived equilibrium temperature Teq = 7781 K and luminosity (28.3 ± 8.8) L⊙ locate T Sex on the blue edge of the instability strip in a theoretical Hertzsprung-Russell diagram. The differences between the physical parameters from this study and those of Liu & Janes (1990) are discussed.

  6. Mechanical models for insect locomotion: stability and parameter studies

    NASA Astrophysics Data System (ADS)

    Schmitt, John; Holmes, Philip

    2001-08-01

    We extend the analysis of simple models for the dynamics of insect locomotion in the horizontal plane, developed in [Biol. Cybern. 83 (6) (2000) 501] and applied to cockroach running in [Biol. Cybern. 83 (6) (2000) 517]. The models consist of a rigid body with a pair of effective legs (each representing the insect’s support tripod) placed intermittently in ground contact. The forces generated may be prescribed as functions of time, or developed by compression of a passive leg spring. We find periodic gaits in both cases, and show that prescribed (sinusoidal) forces always produce unstable gaits, unless they are allowed to rotate with the body during stride, in which case a (small) range of physically unrealistic stable gaits does exist. Stability is much more robust in the passive spring case, in which angular momentum transfer at touchdown/liftoff can result in convergence to asymptotically straight motions with bounded yaw, fore-aft and lateral velocity oscillations. Using a non-dimensional formulation of the equations of motion, we also develop exact and approximate scaling relations that permit derivation of gait characteristics for a range of leg stiffnesses, lengths, touchdown angles, body masses and inertias, from a single gait family computed at ‘standard’ parameter values.

  7. Significance of settling model structures and parameter subsets in modelling WWTPs under wet-weather flow and filamentous bulking conditions.

    PubMed

    Ramin, Elham; Sin, Gürkan; Mikkelsen, Peter Steen; Plósz, Benedek Gy

    2014-10-15

    Current research focuses on predicting and mitigating the impacts of high hydraulic loadings on centralized wastewater treatment plants (WWTPs) under wet-weather conditions. The maximum permissible inflow to WWTPs depends not only on the settleability of activated sludge in secondary settling tanks (SSTs) but also on the hydraulic behaviour of SSTs. The present study investigates the impacts of ideal and non-ideal flow (dry and wet weather) and settling (good settling and bulking) boundary conditions on the sensitivity of WWTP model outputs to uncertainties intrinsic to the one-dimensional (1-D) SST model structures and parameters. We identify the critical sources of uncertainty in WWTP models through global sensitivity analysis (GSA) using the Benchmark simulation model No. 1 in combination with first- and second-order 1-D SST models. The results obtained illustrate that the contribution of settling parameters to the total variance of the key WWTP process outputs significantly depends on the influent flow and settling conditions. The magnitude of the impact is found to vary, depending on which type of 1-D SST model is used. Therefore, we identify and recommend potential parameter subsets for WWTP model calibration, and propose optimal choice of 1-D SST models under different flow and settling boundary conditions. Additionally, the hydraulic parameters in the second-order SST model are found significant under dynamic wet-weather flow conditions. These results highlight the importance of developing a more mechanistic based flow-dependent hydraulic sub-model in second-order 1-D SST models in the future.

  8. Bayesian Model Comparison and Parameter Inference in Systems Biology Using Nested Sampling

    PubMed Central

    Pullen, Nick; Morris, Richard J.

    2014-01-01

    Inferring parameters for models of biological processes is a current challenge in systems biology, as is the related problem of comparing competing models that explain the data. In this work we apply Skilling's nested sampling to address both of these problems. Nested sampling is a Bayesian method for exploring parameter space that transforms a multi-dimensional integral to a 1D integration over likelihood space. This approach focusses on the computation of the marginal likelihood or evidence. The ratio of evidences of different models leads to the Bayes factor, which can be used for model comparison. We demonstrate how nested sampling can be used to reverse-engineer a system's behaviour whilst accounting for the uncertainty in the results. The effect of missing initial conditions of the variables as well as unknown parameters is investigated. We show how the evidence and the model ranking can change as a function of the available data. Furthermore, the addition of data from extra variables of the system can deliver more information for model comparison than increasing the data from one variable, thus providing a basis for experimental design. PMID:24523891

  9. Bayesian model comparison and parameter inference in systems biology using nested sampling.

    PubMed

    Pullen, Nick; Morris, Richard J

    2014-01-01

    Inferring parameters for models of biological processes is a current challenge in systems biology, as is the related problem of comparing competing models that explain the data. In this work we apply Skilling's nested sampling to address both of these problems. Nested sampling is a Bayesian method for exploring parameter space that transforms a multi-dimensional integral to a 1D integration over likelihood space. This approach focuses on the computation of the marginal likelihood or evidence. The ratio of evidences of different models leads to the Bayes factor, which can be used for model comparison. We demonstrate how nested sampling can be used to reverse-engineer a system's behaviour whilst accounting for the uncertainty in the results. The effect of missing initial conditions of the variables as well as unknown parameters is investigated. We show how the evidence and the model ranking can change as a function of the available data. Furthermore, the addition of data from extra variables of the system can deliver more information for model comparison than increasing the data from one variable, thus providing a basis for experimental design. PMID:24523891
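
    The two records above describe nested sampling as a way of computing the evidence (marginal likelihood), whose ratio between models gives the Bayes factor. A toy, self-contained version on a one-dimensional problem with a uniform prior is sketched below; it replaces dead points by plain rejection sampling from the prior, which is acceptable only for toy problems, and it neglects the small terminal correction from the remaining live points.

```python
import numpy as np

rng = np.random.default_rng(3)

def log_like(theta):
    """Toy likelihood: N(0.3, 0.05) density; prior is U(0, 1), so log Z is ~0."""
    return -0.5 * ((theta - 0.3) / 0.05) ** 2 - np.log(0.05 * np.sqrt(2 * np.pi))

def nested_sampling(n_live=100, n_iter=800):
    live = rng.uniform(size=n_live)
    live_ll = log_like(live)
    log_z = -np.inf
    for i in range(1, n_iter + 1):
        worst = int(np.argmin(live_ll))
        log_x = -i / n_live                                  # expected log prior volume
        log_w = live_ll[worst] + log_x + np.log(np.expm1(1.0 / n_live))
        log_z = np.logaddexp(log_z, log_w)                   # accumulate evidence
        while True:                                          # rejection-sample a replacement
            cand = rng.uniform()                             # from the prior, subject to
            if log_like(cand) > live_ll[worst]:              # the likelihood constraint
                live[worst], live_ll[worst] = cand, log_like(cand)
                break
    return log_z

print("estimated log evidence:", nested_sampling())          # analytic value is ~0
```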

  10. A novel parameter for predicting arterial fusion and ablation in finite element models

    NASA Astrophysics Data System (ADS)

    Fankell, Douglas; Kramer, Eric; Taylor, Kenneth; Ferguson, Virginia; Rentschler, Mark E.

    2015-03-01

    Tissue fusion devices apply heat and pressure to ligate or ablate blood vessels during surgery. Although this process is widely used, a predictive finite element (FE) model incorporating both structural mechanics and heat transfer has not been developed, limiting improvements to empirical evidence. This work presents the development of a novel damage parameter, which incorporates stress, water content and temperature, and demonstrates its application in a FE model. A FE model, using the Holzapfel-Gasser-Ogden strain energy function to represent the structural mechanics and equations developed by Cezo to model water content and heat transfer, was created to simulate the fusion or ablation of a porcine splenic artery. Using state variables, the stresses, temperature and water content are recorded and combined to create a single parameter at each integration point. The parameter is then compared to a critical value (determined through experiments). If the critical value is reached, the element loses all strength. If the value is not reached, no change occurs. Little experimental data exists for validation, but the resulting stresses, temperatures and water content fall within ranges predicted by prior work. Due to the lack of published data, additional experimental studies are being conducted to rigorously validate and accurately determine the critical value. Ultimately, a novel method for demonstrating tissue damage and fusion in a FE model is presented, providing the first step towards in-depth FE models simulating fusion and ablation of arteries.

  11. Estimation and Inference in Generalized Additive Coefficient Models for Nonlinear Interactions with High-Dimensional Covariates

    PubMed Central

    Ma, Shujie; Carroll, Raymond J.; Liang, Hua; Xu, Shizhong

    2015-01-01

    In the low-dimensional case, the generalized additive coefficient model (GACM) proposed by Xue and Yang [Statist. Sinica 16 (2006) 1423–1446] has been demonstrated to be a powerful tool for studying nonlinear interaction effects of variables. In this paper, we propose estimation and inference procedures for the GACM when the dimension of the variables is high. Specifically, we propose a groupwise penalization based procedure to distinguish significant covariates for the “large p small n” setting. The procedure is shown to be consistent for model structure identification. Further, we construct simultaneous confidence bands for the coefficient functions in the selected model based on a refined two-step spline estimator. We also discuss how to choose the tuning parameters. To estimate the standard deviation of the functional estimator, we adopt the smoothed bootstrap method. We conduct simulation experiments to evaluate the numerical performance of the proposed methods and analyze an obesity data set from a genome-wide association study as an illustration. PMID:26412908

  12. Sensitivity of injection costs to input petrophysical parameters in numerical geologic carbon sequestration models

    SciTech Connect

    Cheng, C. L.; Gragg, M. J.; Perfect, E.; White, Mark D.; Lemiszki, P. J.; McKay, L. D.

    2013-08-24

    Numerical simulations are widely used in feasibility studies for geologic carbon sequestration. Accurate estimates of petrophysical parameters are needed as inputs for these simulations. However, relatively few experimental values are available for CO2-brine systems. Hence, a sensitivity analysis was performed using the STOMP numerical code for supercritical CO2 injected into a model confined deep saline aquifer. The intrinsic permeability, porosity, pore compressibility, and capillary pressure-saturation/relative permeability parameters (residual liquid saturation, residual gas saturation, and van Genuchten alpha and m values) were varied independently. Their influence on CO2 injection rates and costs were determined and the parameters were ranked based on normalized coefficients of variation. The simulations resulted in differences of up to tens of millions of dollars over the life of the project (i.e., the time taken to inject 10.8 million metric tons of CO2). The two most influential parameters were the intrinsic permeability and the van Genuchten m value. Two other parameters, the residual gas saturation and the residual liquid saturation, ranked above the porosity. These results highlight the need for accurate estimates of capillary pressure-saturation/relative permeability parameters for geologic carbon sequestration simulations in addition to measurements of porosity and intrinsic permeability.
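
    The ranking step described above, normalized coefficients of variation of the simulated cost under one-at-a-time parameter variation, is simple to reproduce. The cost values below are made up for illustration and are not STOMP results.

```python
import numpy as np

def rank_by_cv(cost_by_param):
    """Rank parameters by the coefficient of variation (std/mean) of the
    simulated injection cost obtained while varying each parameter alone."""
    cv = {p: np.std(c) / np.mean(c) for p, c in cost_by_param.items()}
    total = sum(cv.values())
    ranked = sorted(cv.items(), key=lambda kv: kv[1], reverse=True)
    return [(p, v / total) for p, v in ranked]      # normalized CVs

# hypothetical costs (M$) from simulation runs varying one parameter at a time
costs = {
    "intrinsic permeability": [41, 58, 75, 96],
    "van Genuchten m":        [52, 60, 70, 81],
    "porosity":               [62, 64, 66, 68],
}
for name, score in rank_by_cv(costs):
    print(f"{name:24s} {score:.2f}")
```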

  13. Impact of the hard-coded parameters on the hydrologic fluxes of the land surface model Noah-MP

    NASA Astrophysics Data System (ADS)

    Cuntz, Matthias; Mai, Juliane; Samaniego, Luis; Clark, Martyn; Wulfmeyer, Volker; Attinger, Sabine; Thober, Stephan

    2016-04-01

    Land surface models incorporate a large number of processes, described by physical, chemical and empirical equations. The process descriptions contain a number of parameters that can be soil or plant type dependent and are typically read from tabulated input files. Land surface models may have, however, process descriptions that contain fixed, hard-coded numbers in the computer code, which are not identified as model parameters. Here we searched for hard-coded parameters in the computer code of the land surface model Noah with multiple process options (Noah-MP) to assess the importance of the fixed values on restricting the model's agility during parameter estimation. We found 139 hard-coded values in all Noah-MP process options, which are mostly spatially constant values. This is in addition to the 71 standard parameters of Noah-MP, which mostly get distributed spatially by given vegetation and soil input maps. We performed a Sobol' global sensitivity analysis of Noah-MP to variations of the standard and hard-coded parameters for a specific set of process options. 42 standard parameters and 75 hard-coded parameters were active with the chosen process options. The sensitivities of the hydrologic output fluxes latent heat and total runoff as well as their component fluxes were evaluated. These sensitivities were evaluated at twelve catchments of the Eastern United States with very different hydro-meteorological regimes. Noah-MP's hydrologic output fluxes are sensitive to two thirds of its standard parameters. The most sensitive parameter is, however, a hard-coded value in the formulation of soil surface resistance for evaporation, which proved to be oversensitive in other land surface models as well. Surface runoff is sensitive to almost all hard-coded parameters of the snow processes and the meteorological inputs. These parameter sensitivities diminish in total runoff. Assessing these parameters in model calibration would require detailed snow observations or the
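
    A variance-based (Sobol') analysis of the kind described above can be set up in a few lines once the parameters and their ranges are listed. The sketch below assumes the third-party SALib package and uses a cheap surrogate function in place of Noah-MP, with hypothetical parameter names and bounds.

```python
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

# Three stand-in parameters; in the study these would be Noah-MP's standard
# and hard-coded parameters with physically motivated ranges.
problem = {
    "num_vars": 3,
    "names": ["soil_resistance_coef", "snow_albedo_decay", "runoff_exponent"],
    "bounds": [[0.5, 2.0], [0.1, 1.0], [1.0, 4.0]],
}

def surrogate_latent_heat(x):
    """Cheap stand-in for one model output (e.g. latent heat flux)."""
    return 80.0 / x[:, 0] + 15.0 * x[:, 1] + 5.0 * x[:, 0] * x[:, 2]

X = saltelli.sample(problem, 1024)      # N*(2D+2) model evaluations
Y = surrogate_latent_heat(X)
Si = sobol.analyze(problem, Y)
for name, s1, st in zip(problem["names"], Si["S1"], Si["ST"]):
    print(f"{name:22s} S1={s1:5.2f}  ST={st:5.2f}")
```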

  14. An Adaptive Sequential Design for Model Discrimination and Parameter Estimation in Non-Linear Nested Models

    SciTech Connect

    Tommasi, C.; May, C.

    2010-09-30

    The DKL-optimality criterion has been recently proposed for the dual problem of model discrimination and parameter estimation, for the case of two rival models. A sequential version of the DKL-optimality criterion is herein proposed in order to discriminate and efficiently estimate more than two nested non-linear models. Our sequential method is inspired by the procedure of Biswas and Chaudhuri (2002), which is however useful only in the setup of nested linear models.

  15. Simultaneous model discrimination and parameter estimation in dynamic models of cellular systems

    PubMed Central

    2013-01-01

    Background: Model development is a key task in systems biology, which typically starts from an initial model candidate and, involving an iterative cycle of hypotheses-driven model modifications, leads to new experimentation and subsequent model identification steps. The final product of this cycle is a satisfactory refined model of the biological phenomena under study. During such iterative model development, researchers frequently propose a set of model candidates from which the best alternative must be selected. Here we consider this problem of model selection and formulate it as a simultaneous model selection and parameter identification problem. More precisely, we consider a general mixed-integer nonlinear programming (MINLP) formulation for model selection and identification, with emphasis on dynamic models consisting of sets of either ODEs (ordinary differential equations) or DAEs (differential algebraic equations). Results: We solved the MINLP formulation for model selection and identification using an algorithm based on Scatter Search (SS). We illustrate the capabilities and efficiency of the proposed strategy with a case study considering the KdpD/KdpE system regulating potassium homeostasis in Escherichia coli. The proposed approach resulted in a final model that presents a better fit to the in silico generated experimental data. Conclusions: The presented MINLP-based optimization approach for nested-model selection and identification is a powerful methodology for model development in systems biology. This strategy can be used to perform model selection and parameter estimation in one single step, thus greatly reducing the number of experiments and computations of traditional modeling approaches. PMID:23938131

  16. Use of generalised additive models to categorise continuous variables in clinical prediction

    PubMed Central

    2013-01-01

    Background In medical practice many, essentially continuous, clinical parameters tend to be categorised by physicians for ease of decision-making. Indeed, categorisation is a common practice both in medical research and in the development of clinical prediction rules, particularly where the ensuing models are to be applied in daily clinical practice to support clinicians in the decision-making process. Since the number of categories into which a continuous predictor must be categorised depends partly on the relationship between the predictor and the outcome, the need for more than two categories must be borne in mind. Methods We propose a categorisation methodology for clinical-prediction models, using Generalised Additive Models (GAMs) with P-spline smoothers to determine the relationship between the continuous predictor and the outcome. The proposed method consists of creating at least one average-risk category along with high- and low-risk categories based on the GAM smooth function. We applied this methodology to a prospective cohort of patients with exacerbated chronic obstructive pulmonary disease. The predictors selected were respiratory rate and partial pressure of carbon dioxide in the blood (PCO2), and the response variable was poor evolution. An additive logistic regression model was used to show the relationship between the covariates and the dichotomous response variable. The proposed categorisation was compared to the continuous predictor as the best option, using the AIC and AUC evaluation parameters. The sample was divided into a derivation (60%) and validation (40%) samples. The first was used to obtain the cut points while the second was used to validate the proposed methodology. Results The three-category proposal for the respiratory rate was ≤ 20;(20,24];> 24, for which the following values were obtained: AIC=314.5 and AUC=0.638. The respective values for the continuous predictor were AIC=317.1 and AUC=0.634, with no statistically
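
    A hedged sketch of the categorisation idea follows: fit a smooth logistic risk curve for the continuous predictor and place cut points where the smooth crosses the overall event rate, giving low-, average- and high-risk categories. Here unpenalised B-splines from scikit-learn stand in for the P-spline GAM of the record, and the respiratory-rate data are simulated, not the COPD cohort.

    ```python
    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import SplineTransformer
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    resp_rate = rng.uniform(10, 40, 500)                          # hypothetical predictor
    p_true = 1 / (1 + np.exp(-(0.015 * (resp_rate - 22) ** 2 - 1.5)))
    poor_evolution = rng.binomial(1, p_true)                      # hypothetical binary outcome

    # Smooth logistic fit: B-spline basis + logistic regression (stand-in for a P-spline GAM)
    smooth_fit = make_pipeline(SplineTransformer(n_knots=8, degree=3),
                               LogisticRegression(max_iter=1000))
    smooth_fit.fit(resp_rate.reshape(-1, 1), poor_evolution)

    # Candidate cut points: where the smooth risk curve crosses the overall event rate
    grid = np.linspace(resp_rate.min(), resp_rate.max(), 500)
    risk = smooth_fit.predict_proba(grid.reshape(-1, 1))[:, 1]
    crossings = grid[np.where(np.diff(np.sign(risk - poor_evolution.mean())) != 0)]
    print("candidate cut points:", np.round(crossings, 1))
    ```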

  17. A NEW VARIANCE ESTIMATOR FOR PARAMETERS OF SEMI-PARAMETRIC GENERALIZED ADDITIVE MODELS. (R829213)

    EPA Science Inventory

    The perspectives, information and conclusions conveyed in research project abstracts, progress reports, final reports, journal abstracts and journal publications convey the viewpoints of the principal investigator and may not represent the views and policies of ORD and EPA. Concl...

  18. Recommended direct simulation Monte Carlo collision model parameters for modeling ionized air transport processes

    NASA Astrophysics Data System (ADS)

    Swaminathan-Gopalan, Krishnan; Stephani, Kelly A.

    2016-02-01

    A systematic approach for calibrating the direct simulation Monte Carlo (DSMC) collision model parameters to achieve consistency in the transport processes is presented. The DSMC collision cross section model parameters are calibrated for high temperature atmospheric conditions by matching the collision integrals from DSMC against ab initio based collision integrals that are currently employed in the Langley Aerothermodynamic Upwind Relaxation Algorithm (LAURA) and Data Parallel Line Relaxation (DPLR) high temperature computational fluid dynamics solvers. The DSMC parameter values are computed for the widely used Variable Hard Sphere (VHS) and the Variable Soft Sphere (VSS) models using the collision-specific pairing approach. The recommended best-fit VHS/VSS parameter values are provided over a temperature range of 1000-20 000 K for a thirteen-species ionized air mixture. Use of the VSS model is necessary to achieve consistency in transport processes of ionized gases. The agreement of the VSS model transport properties with the transport properties as determined by the ab initio collision integral fits was found to be within 6% in the entire temperature range, regardless of the composition of the mixture. The recommended model parameter values can be readily applied to any gas mixture involving binary collisional interactions between the chemical species presented for the specified temperature range.

  19. Implementation of intrinsic lumped parameter modeling into computational fluid dynamics studies of cardiopulmonary bypass.

    PubMed

    Kaufmann, Tim A S; Neidlin, Michael; Büsen, Martin; Sonntag, Simon J; Steinseifer, Ulrich

    2014-02-01

    Stroke and cerebral hypoxia are among the main complications during cardiopulmonary bypass (CPB). The two main reasons for these complications are the cannula jet, due to altered flow conditions and the sandblast effect, and impaired cerebral autoregulation which often occurs in the elderly. The effect of autoregulation has so far mainly been modeled using lumped parameter modeling, while Computational Fluid Dynamics (CFD) has been applied to analyze flow conditions during CPB. In this study, we combine both modeling techniques to analyze the effect of lumped parameter modeling on blood flow during CPB. Additionally, cerebral autoregulation is implemented using the Baroreflex, which adapts the cerebrovascular resistance and compliance based on the cerebral perfusion pressure. The results show that while a combination of CFD and lumped parameter modeling without autoregulation delivers feasible results for physiological flow conditions, it overestimates the loss of cerebral blood flow during CPB. This is counteracted by the Baroreflex, which restores the cerebral blood flow to native levels. However, the cerebral blood flow during CPB is typically reduced by 10-20% in the clinic. This indicates that either the Baroreflex is not fully functional during CPB, or that the target value for the Baroreflex is not a full native cerebral blood flow, but the plateau phase of cerebral autoregulation, which starts at approximately 80% of native flow.
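
    The coupling idea can be illustrated in isolation with a stand-alone lumped model: a two-element Windkessel for the cerebral circulation plus a first-order, baroreflex-like feedback that adapts the distal resistance toward a target flow. The sketch below is only schematic; the resistances, compliance, target flow and pump pressure are illustrative values, not the CFD-coupled model of the record.

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    C = 0.02          # compliance (illustrative units)
    R_PROX = 10.0     # proximal (cannula/arterial) resistance
    R_DIST0 = 70.0    # baseline distal cerebrovascular resistance
    Q_TARGET = 0.75   # target cerebral flow
    TAU = 5.0         # autoregulation time constant [s]

    def pump_pressure(t):
        """Hypothetical, weakly pulsatile CPB pump pressure."""
        return 60.0 + 5.0 * np.sin(2 * np.pi * t)

    def rhs(t, y):
        p, r = y                                    # distal pressure, adaptive distal resistance
        q_in = (pump_pressure(t) - p) / R_PROX
        q_out = p / r                               # cerebral flow
        dp = (q_in - q_out) / C
        dr = -(Q_TARGET - q_out) * r / TAU          # low flow -> dilate, high flow -> constrict
        return [dp, dr]

    sol = solve_ivp(rhs, (0.0, 60.0), [50.0, R_DIST0], max_step=0.05)
    print("cerebral flow at t = 60 s:", sol.y[0, -1] / sol.y[1, -1])
    ```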

  20. Approaches in highly parameterized inversion - PEST++, a Parameter ESTimation code optimized for large environmental models

    USGS Publications Warehouse

    Welter, David E.; Doherty, John E.; Hunt, Randall J.; Muffels, Christopher T.; Tonkin, Matthew J.; Schreuder, Willem A.

    2012-01-01

    An object-oriented parameter estimation code was developed to incorporate benefits of object-oriented programming techniques for solving large parameter estimation modeling problems. The code is written in C++ and is a formulation and expansion of the algorithms included in PEST, a widely used parameter estimation code written in Fortran. The new code is called PEST++ and is designed to lower the barriers of entry for users and developers while providing efficient algorithms that can accommodate large, highly parameterized problems. This effort has focused on (1) implementing the most popular features of PEST in a fashion that is easy for novice or experienced modelers to use and (2) creating a software design that is easy to extend; that is, this effort provides a documented object-oriented framework designed from the ground up to be modular and extensible. In addition, all PEST++ source code and its associated libraries, as well as the general run manager source code, have been integrated in the Microsoft Visual Studio® 2010 integrated development environment. The PEST++ code is designed to provide a foundation for an open-source development environment capable of producing robust and efficient parameter estimation tools for the environmental modeling community into the future.

  1. Modeling the influence of building and HVAC system parameters on radon levels in a large building

    SciTech Connect

    Gu, L.; Swami, M.V.; Anello, M.T.

    1996-12-31

    Heating, ventilating, and air-conditioning (HVAC) operation can play a crucial role in controlling indoor radon concentrations. Both the introduction of fresh air and the pressurization of an indoor space have been considered as means of reducing indoor radon concentration levels. Using an in-house computer model, this paper examines the impact of configurational and operational parameters of a building and HVAC system on indoor radon concentrations in a large building. To achieve this, a three-phase approach was followed. First, testing and diagnostics of the building were carried out to obtain airflow, pressure differential, and leakage data. Second, using the data from the first phase and from additional experiments involving controlled pressurization and depressurization tests of the building, the model was calibrated to characterize the cracks in the slab. Last, the model was used to parametrically examine the various factors that influence radon concentration in a reference building. The influence of several parameters, including return leak, outdoor air to exhaust air ratio (OA/EA), building tightness, ventilation rates, variable-air-volume (VAV) box operation, plenum-to-plenum leaks, and forced vs. suction outdoor air, was examined. Based on the results of the parametric analysis, inferences are drawn about the influence of the parameters on indoor radon concentration, and the efficacy of radon mitigation strategies is examined. Results indicate that building ventilation rate and the OA/EA ratio are significant parameters that affect indoor radon concentration.

  2. Observation model and parameter partials for the JPL geodetic GPS modeling software GPSOMC

    NASA Technical Reports Server (NTRS)

    Sovers, O. J.; Border, J. S.

    1988-01-01

    The physical models employed in GPSOMC and the modeling module of the GIPSY software system developed at JPL for analysis of geodetic Global Positioning Satellite (GPS) measurements are described. Details of the various contributions to range and phase observables are given, as well as the partial derivatives of the observed quantities with respect to model parameters. A glossary of parameters is provided to enable persons doing data analysis to identify quantities in the current report with their counterparts in the computer programs. There are no basic model revisions, with the exceptions of an improved ocean loading model and some new options for handling clock parametrization. Such misprints as were discovered were corrected. Further revisions include modeling improvements and assurances that the model description is in accord with the current software.

  3. Adaptation of the pore diffusion model to describe multi-addition batch uptake high-throughput screening experiments.

    PubMed

    Traylor, Steven J; Xu, Xuankuo; Li, Yi; Jin, Mi; Li, Zheng Jian

    2014-11-14

    Equilibrium isotherm and kinetic mass transfer measurements are critical to mechanistic modeling of binding and elution behavior within a chromatographic column. However, traditional methods of measuring these parameters are impractically time- and labor-intensive. While advances in high-throughput robotic liquid handling systems have created time and labor-saving methods of performing kinetic and equilibrium measurements of proteins on chromatographic resins in a 96-well plate format, these techniques continue to be limited by physical constraints on protein addition, incubation and separation times; the available concentration of protein stocks and process pools; and practical constraints on resin and fluid volumes in the 96-well format. In this study, a novel technique for measuring protein uptake kinetics (multi-addition batch uptake) has been developed to address some of these limitations during high-throughput batch uptake kinetic measurements. This technique uses sequential additions of protein stock to chromatographic resin in a 96-well plate and the subsequent removal of each addition by centrifugation or vacuum separation. The pore diffusion model was adapted here to model multi-addition batch uptake and was tested and compared with traditional batch uptake measurements of uptake of an Fc-fusion protein on an anion exchange resin. Acceptable agreement between the two techniques is achieved for the two solution conditions investigated here. In addition, a sensitivity analysis of the model to the physical inputs is presented and the advantages and limitations of the multi-addition batch uptake technique are explored.

  4. Generalized Concentration Addition Modeling Predicts Mixture Effects of Environmental PPARγ Agonists.

    PubMed

    Watt, James; Webster, Thomas F; Schlezinger, Jennifer J

    2016-09-01

    The vast array of potential environmental toxicant combinations necessitates the development of efficient strategies for predicting toxic effects of mixtures. Current practices emphasize the use of concentration addition to predict joint effects of endocrine disrupting chemicals in coexposures. Generalized concentration addition (GCA) is one such method for predicting joint effects of coexposures to chemicals and has the advantage of allowing for mixture components to have differences in efficacy (ie, dose-response curve maxima). Peroxisome proliferator-activated receptor gamma (PPARγ) is a nuclear receptor that plays a central role in regulating lipid homeostasis, insulin sensitivity, and bone quality and is the target of an increasing number of environmental toxicants. Here, we tested the applicability of GCA in predicting mixture effects of therapeutic (rosiglitazone and nonthiazolidinedione partial agonist) and environmental PPARγ ligands (phthalate compounds identified using EPA's ToxCast database). Transcriptional activation of human PPARγ1 by individual compounds and mixtures was assessed using a peroxisome proliferator response element-driven luciferase reporter. Using individual dose-response parameters and GCA, we generated predictions of PPARγ activation by the mixtures, and we compared these predictions with the empirical data. At high concentrations, GCA provided a better estimation of the experimental response compared with 3 alternative models: toxic equivalency factor, effect summation and independent action. These alternatives provided reasonable fits to the data at low concentrations in this system. These experiments support the implementation of GCA in mixtures analysis with endocrine disrupting compounds and establish PPARγ as an important target for further studies of chemical mixtures.
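
    For Hill-type dose-response curves with unit slope, the GCA prediction can be written as the solution E of sum_i c_i / f_i^{-1}(E) = 1, where f_i^{-1} is the inverse of each individual dose-response function and is allowed to go negative above a partial agonist's maximum. The sketch below solves this equation numerically for a hypothetical two-compound mixture; the efficacies and EC50 values are made up, not the rosiglitazone/phthalate data of the record.

    ```python
    import numpy as np
    from scipy.optimize import brentq

    # Hill-type (slope 1) curves f_i(c) = alpha_i * c / (K_i + c); hypothetical parameters
    ALPHAS = np.array([1.0, 0.4])    # maximal effects (full agonist, partial agonist)
    EC50S = np.array([0.2, 1.5])

    def gca_effect(conc):
        """Generalized concentration addition: solve sum_i c_i / f_i^{-1}(E) = 1 for E,
        with f_i^{-1}(E) = E*K_i / (alpha_i - E)."""
        conc = np.asarray(conc, dtype=float)
        if not conc.any():
            return 0.0
        g = lambda e: np.sum(conc * (ALPHAS - e) / (e * EC50S)) - 1.0
        return brentq(g, 1e-12, ALPHAS.max() * (1.0 - 1e-9))   # g is monotone decreasing in e

    for c in [0.01, 0.1, 1.0, 10.0]:
        print(f"equimolar mixture at {c}: predicted effect {gca_effect([c, c]):.3f}")
    ```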

  5. Model inversion by parameter fit using NN emulating the forward model: evaluation of indirect measurements.

    PubMed

    Schiller, Helmut

    2007-05-01

    The use of inverse models to derive parameters of interest from measurements is widespread in science and technology. The operational use of many inverse models became feasible only through emulation of the inverse model with a neural net (NN). This paper shows how NNs can be used to improve inversion accuracy by minimizing the sum of squared errors. The procedure is very fast because it takes advantage of the Jacobian, which is a byproduct of the NN calculation. An example from remote sensing is shown. It is also possible to take a non-diagonal covariance matrix of the measurements into account in order to derive the covariance matrix of the retrieved parameters.
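
    A minimal sketch of the idea follows: when the forward model is emulated by a network, its Jacobian is available in closed form, so the measurement can be inverted by Gauss-Newton iterations on the sum of squared residuals. The tiny fixed-weight one-hidden-layer "emulator" below is a stand-in for a trained NN, and the damping constant is an assumption added for numerical robustness.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    W1, b1 = rng.normal(size=(8, 2)), rng.normal(size=8)   # fixed random weights stand in
    W2, b2 = rng.normal(size=(3, 8)), rng.normal(size=3)   # for a trained forward emulator

    def forward(x):
        h = np.tanh(W1 @ x + b1)
        return W2 @ h + b2

    def jacobian(x):
        h = np.tanh(W1 @ x + b1)
        return W2 @ np.diag(1.0 - h ** 2) @ W1              # dy/dx via the chain rule

    x_true = np.array([0.3, -0.7])
    y_obs = forward(x_true) + rng.normal(0.0, 0.01, 3)      # synthetic "measurement"

    # Gauss-Newton iterations on the sum of squared residuals, with a small
    # Levenberg-style damping term for robustness
    x = np.zeros(2)
    for _ in range(30):
        r = forward(x) - y_obs
        J = jacobian(x)
        x = x - np.linalg.solve(J.T @ J + 1e-3 * np.eye(2), J.T @ r)
    print("retrieved:", np.round(x, 3), "true:", x_true)
    ```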

  6. Geomagnetically induced currents in Uruguay: Sensitivity to modelling parameters

    NASA Astrophysics Data System (ADS)

    Caraballo, R.

    2016-11-01

    According to traditional wisdom, geomagnetically induced currents (GIC) should occur only rarely at mid-to-low latitudes, but in recent decades a growing number of reports have addressed their effects on high-voltage (HV) power grids at these latitudes. The growing trend to interconnect national power grids to meet regional integration objectives may lead to an increase in the size of present energy transmission networks, forming a sort of super-grid at continental scale. Such a broad and heterogeneous super-grid can be exposed to the effects of large GIC if appropriate mitigation actions are not taken. In this study, we present GIC estimates for the Uruguayan HV power grid during severe magnetic storm conditions. The GIC intensities are strongly dependent on the rate of variation of the geomagnetic field, the conductivity of the ground, and the power grid's resistances and configuration. Calculated GIC are analysed as functions of these parameters. The results show reasonable agreement with data measured in Brazil and Argentina, thus confirming the reliability of the model. The expansion of the grid leads to a strong increase in GIC intensities at almost all substations. The power grid's response to changes in ground conductivity and resistances shows similar results to a lesser extent. This leads us to consider GIC a non-negligible phenomenon in South America. Consequently, GIC must be taken into account in mid-to-low latitude power grids as well.
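
    A hedged sketch of the usual calculation chain is given below: the plane-wave method converts a geomagnetic field time series into a surface geoelectric field through the frequency-domain surface impedance of a uniform half-space, and the GIC at a node is then a linear combination of the field components with network-dependent coefficients. The conductivity, the coefficients a and b, and the synthetic storm signal are all placeholders, not the Uruguayan grid model.

    ```python
    import numpy as np

    MU0 = 4e-7 * np.pi
    SIGMA = 1e-3                 # uniform half-space conductivity [S/m] (hypothetical)
    A, B = 70.0, -25.0           # network coefficients [A per V/km] for one node (hypothetical)

    def geoelectric_field(b_series, dt):
        """Plane-wave method: E(w) = Z(w) * B(w) / mu0 with Z(w) = sqrt(i*w*mu0/sigma)."""
        bf = np.fft.rfft(b_series)
        w = 2.0 * np.pi * np.fft.rfftfreq(b_series.size, dt)
        z = np.sqrt(1j * w * MU0 / SIGMA)
        z[0] = 0.0                                   # drop the DC component
        return np.fft.irfft(z * bf / MU0, b_series.size)

    # Synthetic 1-minute Bx/By series [T] standing in for storm-time magnetometer data
    dt = 60.0
    t = np.arange(0.0, 6 * 3600.0, dt)
    rng = np.random.default_rng(7)
    bx = 200e-9 * np.sin(2 * np.pi * t / 1800) + 20e-9 * rng.standard_normal(t.size)
    by = 150e-9 * np.cos(2 * np.pi * t / 2400) + 20e-9 * rng.standard_normal(t.size)

    ex = geoelectric_field(by, dt)        # E_x is driven by B_y, E_y by -B_x
    ey = -geoelectric_field(bx, dt)
    gic = A * ex * 1e3 + B * ey * 1e3     # V/m -> V/km before applying the coefficients
    print("peak |GIC| [A]:", round(np.abs(gic).max(), 1))
    ```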

  7. Observation model and parameter partials for the JPL geodetic (GPS) modeling software 'GPSOMC'

    NASA Technical Reports Server (NTRS)

    Sovers, O. J.

    1990-01-01

    The physical models employed in GPSOMC, the modeling module of the GIPSY software system developed at JPL for analysis of geodetic Global Positioning Satellite (GPS) measurements are described. Details of the various contributions to range and phase observables are given, as well as the partial derivatives of the observed quantities with respect to model parameters. A glossary of parameters is provided to enable persons doing data analysis to identify quantities with their counterparts in the computer programs. The present version is the second revision of the original document which it supersedes. The modeling is expanded to provide the option of using Cartesian station coordinates; parameters for the time rates of change of universal time and polar motion are also introduced.

  8. Growth and parameters of microflora in intestinal and faecal samples of piglets due to application of a phytogenic feed additive.

    PubMed

    Muhl, A; Liebert, F

    2007-10-01

    A commercial phytogenic feed additive (PFA), containing the fructopolysaccharide inulin, an essential oil mix (carvacrol, thymol), chestnut meal (tannins) and cellulose powder as carrier substance, was examined for effects on growth and faecal and intestinal microflora of piglets. Two experiments (35 days) were conducted, each with 40 male castrated weaned piglets. In experiment 1, graded levels of the PFA were supplied (A1: control; B1: 0.05% PFA; C1: 0.1% PFA; D1: 0.15% PFA) in diets based on wheat, barley, soybean meal and fish meal with lysine as the limiting amino acid. In experiment 2, a similar diet with 0.1% of the PFA (A2: control; B2: 0.1% PFA; C2: +0.35% lysine; D2: 0.1% PFA + 0.35% lysine) and lysine supplementation was utilized. During experiment 1, no significant effect of the PFA on growth, feed intake and feed conversion rate was observed (p > 0.05). Lysine supplementation in experiment 2 improved growth performance significantly, but no significant effect of the PFA was detected. Microbial counts in faeces (aerobes, Gram negatives, anaerobes and lactobacilli) during the first and fifth week did not indicate any significant PFA effect (p > 0.05). In addition, microflora in intestinal samples was not significantly modified by supplementing the PFA (p > 0.05). Lysine supplementation indicated lysine as limiting amino acid in the basal diet, but did not influence the microbial counts in faeces and small intestine respectively.

  9. [Immunobiological blood parameters in rabbits after addition to the diet suspensions of chlorella, sodium sulfate, citrate and chromium chloride].

    PubMed

    Lesyk, Ia V; Fedoruk, R S; Dolaĭchuk, O P

    2013-01-01

    We studied the content of glycoproteins and their individual carbohydrate components, the phagocytic activity of neutrophils, the phagocytic index and phagocyte number, the lysozyme and bactericidal activity of the serum, and the concentration of circulating immune complexes and middle-mass molecules in the blood of rabbits following the addition of a chlorella suspension, sodium sulfate, chromium citrate and chromium chloride to the diet. The studies were conducted on rabbits weighing 3.7-3.9 kg whose diet was altered from the first day of life to 118 days of age. Rabbits were divided into five groups: one control and four experimental groups. We found that in the blood of rabbits of the experimental groups that received sodium sulphate, chromium chloride and chromium citrate, the content of glycoproteins and their carbohydrate components was significantly higher during the 118 days of the study compared with the control group. Feeding rabbits the mineral supplements produced differences, compared with the control, in the parameters of nonspecific resistance in the blood over the study period, which were more pronounced in the first two months of life.

  10. Spatially Distributed Estimation of Mesoscale Water Balance Model Parameters using Hydrological Soil Maps

    NASA Astrophysics Data System (ADS)

    Gronz, O.; Casper, M. C.; Gemmar, P.

    2009-04-01

    In mesoscale water balance models, the relevant hydrological processes in runoff generation are simulated in an abstract way. One aspect of this abstraction is the grouping of areas into model elements, each of which is simulated individually, resulting in a set of model elements. A single element might be homogeneous with respect to a certain characteristic, e.g. land use, but heterogeneous with respect to a different feature, e.g. slope. Due to this abstraction and grouping, the processes cannot be described in detail by physical laws, and thus parameters to be calibrated will occur in the model's assumptions. Typically, the same value is used for all elements of a catchment, mainly due to the sheer number of possible parameter value configurations. Thus, the spatial distribution of the occurrence of processes and their specific strength, which can be observed in the real catchment, will not be represented by the model. The model might rather represent the mean behavior. As a result, the distribution of water in the model might not match the real system. This strongly limits the applicability of the model and increases the complexity of calibration. To support a spatially distributed parameterization of a model, new sources of information need to be incorporated. One way of incorporating additional information is the use of hydrological soil maps, which are available today. They indicate the potentially dominant runoff processes such as Horton overland flow, subsurface flow, deep percolation etc. These maps are generated, e.g., by artificial neural networks using various sources such as geological maps, digital terrain models and characteristics derived from them, land use maps etc. An interdisciplinary project has started to integrate these maps into the calibration process. The main aim is to represent in the model the spatial distribution shown by the map. An initial idea is to find parameter prototypes for each of the runoff processes. These parameter

  11. Discussion of skill improvement in marine ecosystem dynamic models based on parameter optimization and skill assessment

    NASA Astrophysics Data System (ADS)

    Shen, Chengcheng; Shi, Honghua; Liu, Yongzhi; Li, Fen; Ding, Dewen

    2016-07-01

    Marine ecosystem dynamic models (MEDMs) are important tools for the simulation and prediction of marine ecosystems. This article summarizes the methods and strategies used for the improvement and assessment of MEDM skill, and it attempts to establish a technical framework to inspire further ideas concerning MEDM skill improvement. The skill of MEDMs can be improved by parameter optimization (PO), which is an important step in model calibration. An efficient approach to solve the problem of PO constrained by MEDMs is the global treatment of both sensitivity analysis and PO. Model validation is an essential step following PO, which validates the efficiency of model calibration by analyzing and estimating the goodness-of-fit of the optimized model. Additionally, by focusing on the degree of impact of various factors on model skill, model uncertainty analysis can supply model users with a quantitative assessment of model confidence. Research on MEDMs is ongoing; however, improvement in model skill still lacks global treatments and its assessment is not integrated. Thus, the predictive performance of MEDMs is not strong and model uncertainties lack quantitative descriptions, limiting their application. Therefore, a large number of case studies concerning model skill should be performed to promote the development of a scientific and normative technical framework for the improvement of MEDM skill.

  12. Roughness parameter optimization using Land Parameter Retrieval Model and Soil Moisture Deficit: Implementation using SMOS brightness temperatures

    NASA Astrophysics Data System (ADS)

    Srivastava, Prashant K.; O'Neill, Peggy; Han, Dawei; Rico-Ramirez, Miguel A.; Petropoulos, George P.; Islam, Tanvir; Gupta, Manika

    2015-04-01

    Roughness parameterization is necessary for nearly all soil moisture retrieval algorithms such as single or dual channel algorithms, L-band Microwave Emission of Biosphere (LMEB), Land Parameter Retrieval Model (LPRM), etc. At present, roughness parameters can be obtained either by field experiments, although obtaining field measurements all over the globe is nearly impossible, or by using a land cover-based look up table, which is not always accurate everywhere for individual fields. From a catalogue of models available in the technical literature domain, the LPRM model was used here because of its robust nature and applicability to a wide range of frequencies. LPRM needs several parameters for soil moisture retrieval -- in particular, roughness parameters (h and Q) are important for calculating reflectivity. In this study, the h and Q parameters are optimized using the soil moisture deficit (SMD) estimated from the probability distributed model (PDM) and Soil Moisture and Ocean Salinity (SMOS) brightness temperatures following the Levenberg-Marquardt (LM) algorithm over the Brue catchment, Southwest of England, U.K.. The catchment is predominantly a pasture land with moderate topography. The PDM-based SMD is used as it is calibrated and validated using locally available ground-based information, suitable for large scale areas such as catchments. The optimal h and Q parameters are determined by maximizing the correlation between SMD and LPRM retrieved soil moisture. After optimization the values of h and Q have been found to be 0.32 and 0.15, respectively. For testing the usefulness of the estimated roughness parameters, a separate set of SMOS datasets are taken into account for soil moisture retrieval using the LPRM model and optimized roughness parameters. The overall analysis indicates a satisfactory result when compared against the SMD information. This work provides quantitative values of roughness parameters suitable for large scale applications. The

  13. Parameter Choice and Constraint in Hydrologic Models for Evaluating Land Use Change

    NASA Astrophysics Data System (ADS)

    Jackson, C. R.

    2011-12-01

    Hydrologic models are used to answer questions, from simple, "what is the expected 100-year peak flow for a basin?", to complex, "how will land use change alter flow pathways, flow time series, and water chemistry?" Appropriate model structure and complexity depend on the questions being addressed. Numerous studies of simple transfer models for converting climate signals into streamflows suggest that only three or four parameters are needed. The conceptual corollary to such models is a single hillslope bucket with storage, evapotranspiration, fast flow, and slow flow. While having the benefit of low uncertainty, such models are ill-suited to addressing land use questions. Land use questions require models that can simulate effects of changes in vegetation, alterations of soil characteristics, and resulting changes in flow pathways. For example, minimum goals for a hydrologic model evaluating bioenergy feedstock production might include: 1) calculate Horton overland flow based on surface conductivities and saturated surface flow based on relative moisture content in the topsoils, 2) allow reinfiltration of Horton overland flow created by bare soils, compacted soils, and pavement (roads, logging roads, skid trails, landings), 3) account for root zone depth and LAI in transpiration calculations, 4) allow mixing of hillslope flows in the riparian aquifer, 5) allow separate simulation of the riparian soils and vegetation and upslope soils and vegetation, 6) incorporate important aspects of topography and stratigraphy, and 7) estimate residence times in different flow paths. How many parameters are needed for such a model, and what information beside streamflow can be collected to constrain the parameters? Additional information that can be used for evaluating and testing watershed models are in-situ conductivity measurements, soil porosity, soil moisture dynamics, shallow perched groundwater behavior, interflow occurrence, groundwater behavior, regional ET estimates

  14. Observation model and parameter partials for the JPL VLBI parameter estimation software MODEST/1991

    NASA Technical Reports Server (NTRS)

    Sovers, O. J.

    1991-01-01

    A revision is presented of MASTERFIT-1987, which it supersedes. Changes during 1988 to 1991 included introduction of the octupole component of solid Earth tides, the NUVEL tectonic motion model, partial derivatives for the precession constant and source position rates, the option to correct for source structure, a refined model for antenna offsets, modeling the unique antenna at Richmond, FL, improved nutation series due to Zhu, Groten, and Reigber, and reintroduction of the old (Woolard) nutation series for simulation purposes. Text describing the relativistic transformations and gravitational contributions to the delay model was also revised in order to reflect the computer code more faithfully.

  15. Multi-Variable Model-Based Parameter Estimation Model for Antenna Radiation Pattern Prediction

    NASA Technical Reports Server (NTRS)

    Deshpande, Manohar D.; Cravey, Robin L.

    2002-01-01

    A new procedure is presented to develop multi-variable model-based parameter estimation (MBPE) model to predict far field intensity of antenna. By performing MBPE model development procedure on a single variable at a time, the present method requires solution of smaller size matrices. The utility of the present method is demonstrated by determining far field intensity due to a dipole antenna over a frequency range of 100-1000 MHz and elevation angle range of 0-90 degrees.

  16. A Note on the Item Information Function of the Four-Parameter Logistic Model

    ERIC Educational Resources Information Center

    Magis, David

    2013-01-01

    This article focuses on the four-parameter logistic (4PL) model as an extension of the usual three-parameter logistic (3PL) model, with an upper asymptote possibly different from 1. For a given item with fixed item parameters, Lord derived the value of the latent ability level that maximizes the item information function under the 3PL model. The…
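
    Since the 4PL response function is P(theta) = c + (d - c) / (1 + exp(-a(theta - b))), its information function follows from the generic identity I(theta) = P'(theta)^2 / (P(theta)(1 - P(theta))), with P'(theta) = a(P - c)(d - P)/(d - c). The sketch below evaluates this and locates its maximum on a grid; the item parameters are hypothetical, and the closed-form maximizer derived in the article is not reproduced here.

    ```python
    import numpy as np

    A, B, C, D = 1.2, 0.5, 0.15, 0.92          # hypothetical 4PL item parameters

    def prob(theta):
        """4PL item response function with lower asymptote C and upper asymptote D."""
        return C + (D - C) / (1.0 + np.exp(-A * (theta - B)))

    def info(theta):
        """Item information I(theta) = P'(theta)^2 / (P(theta) * (1 - P(theta)))."""
        p = prob(theta)
        dp = A * (p - C) * (D - p) / (D - C)    # derivative of the 4PL curve
        return dp ** 2 / (p * (1.0 - p))

    theta = np.linspace(-4.0, 4.0, 2001)
    i_vals = info(theta)
    print("theta maximizing information:", round(theta[np.argmax(i_vals)], 3))
    print("maximum information:", round(i_vals.max(), 4))
    ```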

  17. Additive Manufacturing Modeling and Simulation A Literature Review for Electron Beam Free Form Fabrication

    NASA Technical Reports Server (NTRS)

    Seufzer, William J.

    2014-01-01

    Additive manufacturing is coming into industrial use and has several desirable attributes. Control of the deposition remains a complex challenge, and so this literature review was initiated to capture current modeling efforts in the field of additive manufacturing. This paper summarizes about 10 years of modeling and simulation related to both welding and additive manufacturing. The goals were to learn who is doing what in modeling and simulation, to summarize various approaches taken to create models, and to identify research gaps. Later sections in the report summarize implications for closed-loop-control of the process, implications for local research efforts, and implications for local modeling efforts.

  18. Complete regional waveform modeling to estimate seismic velocity structure and source parameters for CTBT monitoring

    SciTech Connect

    Bredbeck, T; Rodgers, A; Walter, W

    1999-07-23

    The velocity structures and source parameters estimated by waveform modeling provide valuable information for CTBT monitoring. The inferred crustal and uppermost mantle structures advance understanding of tectonics and guide regionalization for event location and identification efforts. Estimation of source parameters such as seismic moment, depth and mechanism (whether earthquake, explosion or collapse) is crucial to event identification. In this paper we briefly outline some of the waveform modeling research for CTBT monitoring performed in the last year. In the future we will estimate structure for new regions by modeling waveforms of large, well-observed events along additional paths. Of particular interest will be the estimation of velocity structure in aseismic regions such as most of Africa and the Former Soviet Union. Our previous work on aseismic regions in the Middle East, North Africa and South Asia gives us confidence to proceed with our current methods. Using the inferred velocity models we plan to estimate source parameters for smaller events. It is especially important to obtain seismic moments of earthquakes for use in applying the Magnitude-Distance Amplitude Correction (MDAC; Taylor et al., 1999) to regional body-wave amplitudes for discrimination and calibrating the coda-based magnitude scales.

  19. Fundamental M-dwarf parameters from high-resolution spectra using PHOENIX ACES models. I. Parameter accuracy and benchmark stars

    NASA Astrophysics Data System (ADS)

    Passegger, V. M.; Wende-von Berg, S.; Reiners, A.

    2016-03-01

    M-dwarf stars are the most numerous stars in the Universe; they span a wide range in mass and are the focus of ongoing and planned exoplanet surveys. To investigate and understand their physical nature, detailed spectral information and accurate stellar models are needed. We use a new synthetic atmosphere model generation and compare model spectra to observations. To test the model accuracy, we compared the models to four benchmark stars with atmospheric parameters for which independent information from interferometric radius measurements is available. We used χ2-based methods to determine parameters from high-resolution spectroscopic observations. Our synthetic spectra are based on the new PHOENIX grid that uses the ACES description for the equation of state. This is a model generation expected to be especially suitable for low-temperature atmospheres. We identified suitable spectral tracers of atmospheric parameters and determined the uncertainties in Teff, log g, and [Fe/H] resulting from degeneracies between parameters and from shortcomings of the model atmospheres. The inherent uncertainties we find are σTeff = 35 K, σlog g = 0.14, and σ[Fe/H] = 0.11. The new model spectra achieve a reliable match to our observed data; our results for Teff and log g are consistent with literature values to within 1σ. However, metallicities reported from earlier photometric and spectroscopic calibrations in some cases disagree with our results by more than 3σ. A possible explanation is systematic errors in earlier metallicity determinations that were based on insufficient descriptions of the cool atmospheres. At this point, however, we cannot definitively identify the reason for this discrepancy, but our analysis indicates that there is a large uncertainty in the accuracy of M-dwarf parameter estimates. Based on observations carried out with UVES at ESO VLT.

  20. Spatiotemporal and random parameter panel data models of traffic crash fatalities in Vietnam.

    PubMed

    Truong, Long T; Kieu, Le-Minh; Vu, Tuan A

    2016-09-01

    This paper investigates factors associated with traffic crash fatalities in 63 provinces of Vietnam during the period from 2012 to 2014. Random effect negative binomial (RENB) and random parameter negative binomial (RPNB) panel data models are adopted to consider spatial heterogeneity across provinces. In addition, a spatiotemporal model with conditional autoregressive priors (ST-CAR) is utilised to account for spatiotemporal autocorrelation in the data. The statistical comparison indicates the ST-CAR model outperforms the RENB and RPNB models. Estimation results provide several significant findings. For example, traffic crash fatalities tend to be higher in provinces with greater numbers of level crossings. Passenger distance travelled and road lengths are also positively associated with fatalities. However, hospital densities are negatively associated with fatalities. The safety impact of the national highway 1A, the main transport corridor of the country, is also highlighted. PMID:27294863

  1. Study on the effect of hydrogen addition on the variation of plasma parameters of argon-oxygen magnetron glow discharge for synthesis of TiO2 films

    NASA Astrophysics Data System (ADS)

    Saikia, Partha; Saikia, Bipul Kumar; Bhuyan, Heman

    2016-04-01

    We report the effect of hydrogen addition on the plasma parameters of an argon-oxygen magnetron glow discharge plasma used in the synthesis of H-doped TiO2 films. The parameters of the hydrogen-added Ar/O2 plasma influence the properties and the structural phases of the deposited TiO2 film. Therefore, the variation of plasma parameters such as electron temperature (Te), electron density (ne), ion density (ni), degree of ionization of Ar and degree of dissociation of H2 as a function of hydrogen content in the discharge is studied. A Langmuir probe and optical emission spectroscopy are used to characterize the plasma. On the basis of the different reactions in the gas phase of the magnetron discharge, the variations of the plasma parameters and the sputtering rate are explained. It is observed that the electron and heavy-ion densities decline with the gradual addition of hydrogen to the discharge. Hydrogen addition significantly changes the degree of ionization of Ar, which influences the structural phases of the TiO2 film.

  2. Stochastic modelling of daily rainfall in Nigeria: intra-annual variation of model parameters

    NASA Astrophysics Data System (ADS)

    Jimoh, O. D.; Webster, P.

    1999-09-01

    A Markov model of order 1 may be used to describe the occurrence of wet and dry days in Nigeria. Such models feature two parameter sets; P01 to characterise the probability of a wet day following a dry day and P11 to characterise the probability of a wet day following a wet day. The model parameter sets, when estimated from historical records, are characterised by a distinctive seasonal behaviour. However, the comparison of this seasonal behaviour between rainfall stations is hampered by the noise reflecting the high variability of parameters on successive days. The first part of this article is concerned with methods for smoothing these inherently noisy parameter sets. Smoothing has been approached using Fourier series, averaging techniques, or a combination thereof. It has been found that different methods generally perform well with respect to estimation of the average number of wet events and the frequency duration curves of wet and dry events. Parameterisation of the P01 parameter set is more successful than the P11 in view of the relatively small number of wet events lasting two or more days. The second part of the article is concerned with describing the regional variation in smoothed parameter sets. There is a systematic variation in the P01 parameter set as one moves northwards. In contrast, there is limited regional variation in the P11 set. Although this regional variation in P01 appears to be related to the gradual movement of the Inter Tropical Convergence Zone, the contrasting behaviour of the two parameter sets is difficult to explain on physical grounds.
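
    A hedged sketch of the two steps described above, estimating calendar-day transition probabilities and then smoothing them, is given below; the synthetic wet/dry series and the two-harmonic Fourier fit are placeholders for the Nigerian station records and the smoothing variants compared in the record.

    ```python
    import numpy as np

    # Synthetic daily wet/dry series (1 = wet) with a seasonal wet-day probability
    rng = np.random.default_rng(11)
    days = np.arange(20 * 365)
    doy = days % 365
    wet = rng.random(days.size) < (0.25 + 0.2 * np.sin(2 * np.pi * (doy - 120) / 365))

    # Raw calendar-day estimates of P01 (wet after dry) and P11 (wet after wet)
    p01_raw, p11_raw = np.full(365, np.nan), np.full(365, np.nan)
    for d in range(365):
        idx = np.where(doy[1:] == d)[0] + 1
        prev, curr = wet[idx - 1], wet[idx]
        if (~prev).sum():
            p01_raw[d] = curr[~prev].mean()
        if prev.sum():
            p11_raw[d] = curr[prev].mean()

    def fourier_smooth(p, n_harmonics=2):
        """Least-squares fit of a low-order Fourier series to the noisy daily estimates."""
        d = np.arange(365)
        cols = [np.ones(365)]
        for k in range(1, n_harmonics + 1):
            cols += [np.cos(2 * np.pi * k * d / 365), np.sin(2 * np.pi * k * d / 365)]
        X = np.column_stack(cols)
        ok = ~np.isnan(p)
        coef, *_ = np.linalg.lstsq(X[ok], p[ok], rcond=None)
        return X @ coef

    p01, p11 = fourier_smooth(p01_raw), fourier_smooth(p11_raw)
    print("mean smoothed P01, P11:", round(p01.mean(), 3), round(p11.mean(), 3))
    ```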

  3. Dynamic hydrologic modeling using the zero-parameter Budyko model with instantaneous dryness index

    NASA Astrophysics Data System (ADS)

    Biswal, Basudev

    2016-09-01

    Long-term partitioning of hydrologic quantities is achieved by using the zero-parameter Budyko model, which defines a dryness index. However, this approach is not suitable for dynamic partitioning, particularly at diminishing timescales, and therefore a universally applicable zero-parameter model remains elusive. Here an instantaneous dryness index is proposed which enables dynamic hydrologic modeling using the Budyko model. By introducing a "decay function" that characterizes the effects of antecedent rainfall and solar energy on the dryness state of a basin at a given time, I propose the concept of an instantaneous dryness index and use the Budyko function to perform continuous hydrologic partitioning. Using the same decay function, I then obtain discharge time series from the effective rainfall time series. The model is evaluated using data from 63 U.S. Geological Survey basins. Results indicate the possibility of using the proposed framework as an alternative platform for prediction in ungauged basins.
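
    The partitioning step itself can be sketched with a classical zero-parameter Budyko curve and a simple exponentially weighted dryness index. The specific decay function is the contribution of the record above and is not reproduced here, so the exponential weighting, the crude routing and the synthetic forcing below are placeholders.

    ```python
    import numpy as np

    def budyko(phi):
        """A classical zero-parameter Budyko curve: E/P = [phi*tanh(1/phi)*(1-exp(-phi))]**0.5."""
        return np.sqrt(phi * np.tanh(1.0 / phi) * (1.0 - np.exp(-phi)))

    def instantaneous_dryness(pet, rain, tau=30.0):
        """Exponentially weighted antecedent PET over antecedent rainfall (placeholder decay)."""
        w = np.exp(-np.arange(5 * int(tau)) / tau)
        w /= w.sum()
        pet_a = np.convolve(pet, w)[: pet.size]
        rain_a = np.convolve(rain, w)[: rain.size]
        return pet_a / np.maximum(rain_a, 1e-6)

    # Synthetic daily forcing standing in for a USGS basin record
    rng = np.random.default_rng(5)
    days = np.arange(3 * 365)
    pet = 3.0 + 2.0 * np.sin(2 * np.pi * days / 365)                        # mm/day
    rain = rng.gamma(0.3, 12.0, days.size) * (rng.random(days.size) < 0.3)  # mm/day

    phi_t = instantaneous_dryness(pet, rain)
    effective_rain = (1.0 - budyko(phi_t)) * rain                           # instantaneous partitioning
    runoff = np.convolve(effective_rain, np.full(10, 0.1))[: rain.size]     # crude 10-day routing
    print("long-term runoff ratio:", round(runoff.sum() / rain.sum(), 3))
    ```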

  4. Multiprocessing and Correction Algorithm of 3D-models for Additive Manufacturing

    NASA Astrophysics Data System (ADS)

    Anamova, R. R.; Zelenov, S. V.; Kuprikov, M. U.; Ripetskiy, A. V.

    2016-07-01

    This article addresses matters related to additive manufacturing preparation. A layer-by-layer model representation was developed on the basis of a routing method. Methods for correcting errors in the layer-by-layer model representation were developed. A multiprocessing algorithm for forming an additive manufacturing batch file was implemented.

  5. Ecosystem Modeling of College Drinking: Parameter Estimation and Comparing Models to Data*

    PubMed Central

    Ackleh, Azmy S.; Fitzpatrick, Ben G.; Scribner, Richard; Simonsen, Neal; Thibodeaux, Jeremy J.

    2009-01-01

    Recently we developed a model composed of five impulsive differential equations that describes the changes in drinking patterns (that persist at epidemic level) amongst college students. Many of the model parameters cannot be measured directly from data; thus, an inverse problem approach, which chooses the set of parameters that results in the “best” model to data fit, is crucial for using this model as a predictive tool. The purpose of this paper is to present the procedure and results of an unconventional approach to parameter estimation that we developed after more common approaches were unsuccessful for our specific problem. The results show that our model provides a good fit to survey data for 32 campuses. Using these parameter estimates, we examined the effect of two hypothetical intervention policies: 1) reducing environmental wetness, and 2) penalizing students who are caught drinking. The results suggest that reducing campus wetness may be a very effective way of reducing heavy episodic (binge) drinking on a college campus, while a policy that penalizes students who drink is not nearly as effective. PMID:20161275

  6. Strengthen forensic entomology in court--the need for data exploration and the validation of a generalised additive mixed model.

    PubMed

    Baqué, Michèle; Amendt, Jens

    2013-01-01

    Developmental data of juvenile blow flies (Diptera: Calliphoridae) are typically used to calculate the age of immature stages found on or around a corpse and thus to estimate a minimum post-mortem interval (PMI(min)). However, many of those data sets do not take into account that immature blow flies grow in a non-linear fashion. Linear models do not provide sufficiently reliable age estimates and may even lead to an erroneous determination of the PMI(min). In line with the Daubert standard and the need for improvements in forensic science, newer statistical tools such as smoothing methods and mixed models allow the modelling of non-linear relationships and expand the range of available statistical analyses. The present study introduces the background and application of these statistical techniques by analysing a model that describes the development of the forensically important blow fly Calliphora vicina at different temperatures. The comparison of three statistical methods (linear regression, generalised additive modelling and generalised additive mixed modelling) clearly demonstrates that only the last provided regression parameters that reflect the data adequately. We focus explicitly on both the exploration of the data--to assure their quality and to show the importance of checking them carefully prior to conducting the statistical tests--and the validation of the resulting models. Hence, we present a common method for evaluating and testing forensic entomological data sets by using, for the first time, generalised additive mixed models.

  7. Structural modelling and control design under incomplete parameter information: The maximum-entropy approach

    NASA Technical Reports Server (NTRS)

    Hyland, D. C.

    1983-01-01

    A stochastic structural control model is described. In contrast to the customary deterministic model, the stochastic minimum data/maximum entropy model directly incorporates the least possible a priori parameter information. The approach is to adopt this model as the basic design model, thus incorporating the effects of parameter uncertainty at a fundamental level, and design mean-square optimal controls (that is, choose the control law to minimize the average of a quadratic performance index over the parameter ensemble).

  8. Long wave atmospheric noise model, phase 1. Volume 2: Mode parameters

    NASA Astrophysics Data System (ADS)

    Warber, Chris R.

    1989-04-01

    The full wave propagation code is used to calculate waveguide mode parameters in spread debris environments in order to develop a long wave atmospheric noise model. The parameters are stored for retrieval whenever the model is exercised. Because the noise-model data encompass parameters of all significant modes for a wide range of ground conductivities, frequencies, and nuclear environment intensities, graphs of those parameters are presented in this volume in handbook format.

  9. The Role of a Steepness Parameter in the Exponential Stability of a Model Problem. Numerical Aspects

    NASA Astrophysics Data System (ADS)

    Todorovic, N.

    2011-06-01

    The Nekhoroshev theorem considers quasi-integrable Hamiltonians, providing stability of the actions over exponentially long times. One of the hypotheses required by the theorem is a mathematical condition called steepness. Nekhoroshev conjectured that different steepness properties should imply numerically observable differences in the stability times. Following a recent study of this problem (Guzzo et al. 2011, Todorovic et al. 2011), we show some additional numerical results on the change of resonances and the diffusion laws produced by the increasing effect of steepness. The experiments are performed on a 4-dimensional steep symplectic map designed so that a parameter smoothly regulates the steepness properties of the model.

  10. A stochastic optimization model under modeling uncertainty and parameter certainty for groundwater remediation design--part I. Model development.

    PubMed

    He, L; Huang, G H; Lu, H W

    2010-04-15

    Solving groundwater remediation optimization problems based on proxy simulators can usually yield optimal solutions differing from the "true" ones of the problem. This study presents a new stochastic optimization model under modeling uncertainty and parameter certainty (SOMUM) and the associated solution method for simultaneously addressing modeling uncertainty associated with simulator residuals and optimizing groundwater remediation processes. This is a new attempt different from the previous modeling efforts. The previous ones focused on addressing uncertainty in physical parameters (i.e. soil porosity) while this one aims to deal with uncertainty in mathematical simulator (arising from model residuals). Compared to the existing modeling approaches (i.e. only parameter uncertainty is considered), the model has the advantages of providing mean-variance analysis for contaminant concentrations, mitigating the effects of modeling uncertainties on optimal remediation strategies, offering confidence level of optimal remediation strategies to system designers, and reducing computational cost in optimization processes.

  11. Difference-based ridge-type estimator of parameters in restricted partial linear model with correlated errors.

    PubMed

    Wu, Jibo

    2016-01-01

    In this article, a generalized difference-based ridge estimator is proposed for the vector parameter in a partial linear model when the errors are dependent. It is supposed that some additional linear constraints may hold on the whole parameter space. Its mean-squared error matrix is compared with that of the generalized restricted difference-based estimator. Finally, the performance of the new estimator is illustrated by a simulation study and a numerical example.
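
    The differencing idea can be sketched as follows for the unrestricted case: ordering the data by the nonparametric covariate and taking first differences removes the smooth component, after which a ridge estimator is applied to the differenced design. The generalized, restricted estimator of the record additionally uses a general differencing matrix, linear restrictions and a covariance structure for dependent errors, none of which are reproduced in this hypothetical example.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    n, p = 200, 4
    t = np.sort(rng.random(n))                   # covariate of the nonparametric part (ordered)
    X = rng.normal(size=(n, p))
    beta = np.array([1.5, -2.0, 0.0, 0.7])
    y = X @ beta + np.sin(4 * np.pi * t) + rng.normal(0.0, 0.3, n)

    # First-difference matrix D removes the smooth component: D y ~ (D X) beta + noise
    D = np.eye(n - 1, n, k=1) - np.eye(n - 1, n)
    Xd, yd = D @ X, D @ y

    k = 0.5                                      # ridge parameter (would normally be tuned)
    beta_ridge = np.linalg.solve(Xd.T @ Xd + k * np.eye(p), Xd.T @ yd)
    print("difference-based ridge estimate:", np.round(beta_ridge, 2))
    print("true beta:                      ", beta)
    ```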

  12. Parameter sensitivity and uncertainty analysis for a storm surge and wave model

    NASA Astrophysics Data System (ADS)

    Bastidas, Luis A.; Knighton, James; Kline, Shaun W.

    2016-09-01

    Development and simulation of synthetic hurricane tracks is a common methodology used to estimate hurricane hazards in the absence of empirical coastal surge and wave observations. Such methods typically rely on numerical models to translate stochastically generated hurricane wind and pressure forcing into coastal surge and wave estimates. The model output uncertainty associated with selection of appropriate model parameters must therefore be addressed. The computational overburden of probabilistic surge hazard estimates is exacerbated by the high dimensionality of numerical surge and wave models. We present a model parameter sensitivity analysis of the Delft3D model for the simulation of hazards posed by Hurricane Bob (1991) utilizing three theoretical wind distributions (NWS23, modified Rankine, and Holland). The sensitive model parameters (of 11 total considered) include wind drag, the depth-induced breaking γB, and the bottom roughness. Several parameters show no sensitivity (threshold depth, eddy viscosity, wave triad parameters, and depth-induced breaking αB) and can therefore be excluded to reduce the computational overburden of probabilistic surge hazard estimates. The sensitive model parameters also demonstrate a large number of interactions between parameters and a nonlinear model response. While model outputs showed sensitivity to several parameters, the ability of these parameters to act as tuning parameters for calibration is somewhat limited as proper model calibration is strongly reliant on accurate wind and pressure forcing data. A comparison of the model performance with forcings from the different wind models is also presented.

  13. Parameter sensitivity and uncertainty analysis for a storm surge and wave model

    NASA Astrophysics Data System (ADS)

    Bastidas, L. A.; Knighton, J.; Kline, S. W.

    2015-10-01

    Development and simulation of synthetic hurricane tracks is a common methodology used to estimate hurricane hazards in the absence of empirical coastal surge and wave observations. Such methods typically rely on numerical models to translate stochastically generated hurricane wind and pressure forcing into coastal surge and wave estimates. The model output uncertainty associated with selection of appropriate model parameters must therefore be addressed. The computational overburden of probabilistic surge hazard estimates is exacerbated by the high dimensionality of numerical surge and wave models. We present a model parameter sensitivity analysis of the Delft3D model for the simulation of hazards posed by Hurricane Bob (1991) utilizing three theoretical wind distributions (NWS23, modified Rankine, and Holland). The sensitive model parameters (of eleven total considered) include wind drag, the depth-induced breaking γB, and the bottom roughness. Several parameters show no sensitivity (threshold depth, eddy viscosity, wave triad parameters and depth-induced breaking αB) and can therefore be excluded to reduce the computational overburden of probabilistic surge hazard estimates. The sensitive model parameters also demonstrate a large amount of interactions between parameters and a non-linear model response. While model outputs showed sensitivity to several parameters, the ability of these parameters to act as tuning parameters for calibration is somewhat limited as proper model calibration is strongly reliant on accurate wind and pressure forcing data. A comparison of the model performance with forcings from the different wind models is also presented.

  14. The Effects on Parameter Estimation of Correlated Abilities Using a Two-Dimensional, Two-Parameter Logistic Item Response Model.

    ERIC Educational Resources Information Center

    Batley, Rose-Marie; Boss, Marvin W.

    The effects of correlated dimensions on parameter estimation were assessed, using a two-dimensional item response theory model. Past research has shown the inadequacies of the unidimensional analysis of multidimensional item response data. However, few studies have reported multidimensional analysis of multidimensional data, and, in those using…

  15. [Study on the automatic parameters identification of water pipe network model].

    PubMed

    Jia, Hai-Feng; Zhao, Qi-Feng

    2010-01-01

    Based on an analysis of the problems in the development and application of water pipe network models, the automatic identification of model parameters is regarded as a key bottleneck for applying such models in water supply enterprises. A methodology for the automatic identification of water pipe network model parameters based on GIS and SCADA databases is proposed. The core algorithm is then studied: RSA (Regionalized Sensitivity Analysis) is used for automatic recognition of sensitive parameters, MCS (Monte-Carlo Sampling) is used for automatic identification of the parameters, and the detailed technical route based on RSA and MCS is presented. A module for automatic parameter identification of water pipe network models is developed. Finally, a typical water pipe network is selected as a case, the case study of automatic model parameter identification is conducted, and satisfactory results are achieved. PMID:20329520
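
    A hedged sketch of the RSA/MCS combination described above: Monte Carlo samples of the parameter space are split into behavioural and non-behavioural sets by an error threshold, and parameters are ranked by the Kolmogorov-Smirnov distance between the two groups. The toy head-loss function, parameter names and observed value below are invented stand-ins for an actual pipe-network hydraulic model fed by GIS and SCADA data.

    ```python
    import numpy as np
    from scipy.stats import ks_2samp

    def simulated_head(params):
        """Toy stand-in for a pipe-network simulation (not a real hydraulic solver)."""
        c1, c2, demand = params
        return 45.0 - 0.8 * demand ** 1.85 * (100.0 / c1) ** 1.85 - 0.3 * (100.0 / c2) ** 1.85

    OBSERVED_HEAD = 42.0
    NAMES = ["roughness_main", "roughness_branch", "demand_factor"]
    BOUNDS = np.array([[60.0, 140.0], [60.0, 140.0], [0.8, 1.2]])

    # MCS: Monte-Carlo sampling of the parameter space
    rng = np.random.default_rng(9)
    samples = BOUNDS[:, 0] + (BOUNDS[:, 1] - BOUNDS[:, 0]) * rng.random((5000, 3))
    errors = np.array([abs(simulated_head(s) - OBSERVED_HEAD) for s in samples])

    # RSA: behavioural vs non-behavioural split, then KS distance per parameter
    behavioural = errors < np.quantile(errors, 0.2)
    for j, name in enumerate(NAMES):
        ks = ks_2samp(samples[behavioural, j], samples[~behavioural, j]).statistic
        print(f"{name:18s} KS = {ks:.2f}")
    ```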

  16. Estimation of Ecosystem Parameters of the Community Land Model with DREAM: Evaluation of the Potential for Upscaling Net Ecosystem Exchange

    NASA Astrophysics Data System (ADS)

    Hendricks Franssen, H. J.; Post, H.; Vrugt, J. A.; Fox, A. M.; Baatz, R.; Kumbhar, P.; Vereecken, H.

    2015-12-01

    Estimation of net ecosystem exchange (NEE) by land surface models is strongly affected by uncertain ecosystem parameters and initial conditions. A possible approach is the estimation of plant functional type (PFT) specific parameters for sites with measurement data such as NEE and application of the parameters at other sites with the same PFT and no measurements. This upscaling strategy was evaluated in this work for sites in Germany and France. Ecosystem parameters and initial conditions were estimated with NEE time series of one year in length, or time series of only one season. The DREAM(zs) algorithm was used for the estimation of parameters and initial conditions. DREAM(zs) is not limited to Gaussian distributions and can condition on large time series of measurement data simultaneously. DREAM(zs) was used in combination with the Community Land Model (CLM) v4.5. Parameter estimates were evaluated by model predictions at the same site for an independent verification period. In addition, the parameter estimates were evaluated at other, independent sites situated >500 km away with the same PFT. The main conclusions are: i) simulations with estimated parameters better reproduced the NEE measurement data in the verification periods, including the annual NEE sum (23% improvement), annual NEE cycle and average diurnal NEE course (error reduction by a factor of 1.6); ii) estimated parameters based on seasonal NEE data outperformed estimated parameters based on yearly data; iii) in addition, those seasonal parameters were often also significantly different from their yearly equivalents; iv) estimated parameters were significantly different if initial conditions were estimated together with the parameters. We conclude that estimated PFT-specific parameters improve land surface model predictions significantly at independent verification sites and for independent verification periods, so that their potential for upscaling is demonstrated. However, simulation results also indicate

  17. A Paradox between IRT Invariance and Model-Data Fit When Utilizing the One-Parameter and Three-Parameter Models

    ERIC Educational Resources Information Center

    Custer, Michael; Sharairi, Sid; Yamazaki, Kenji; Signatur, Diane; Swift, David; Frey, Sharon

    2008-01-01

    The present study compared item and ability invariance as well as model-data fit between the one-parameter (1PL) and three-parameter (3PL) Item Response Theory (IRT) models utilizing real data across five grades; second through sixth as well as simulated data at second, fourth and sixth grade. At each grade, the 1PL and 3PL IRT models were run…

  18. Automatic parameter extraction technique for gate leakage current modeling in double gate MOSFET

    NASA Astrophysics Data System (ADS)

    Darbandy, Ghader; Gneiting, Thomas; Alius, Heidrun; Alvarado, Joaquín; Cerdeira, Antonio; Iñiguez, Benjamin

    2013-11-01

    Direct Tunneling (DT) and Trap Assisted Tunneling (TAT) gate leakage current parameters have been extracted and verified using an automatic parameter extraction approach. The industry-standard IC-CAP package is used to extract our leakage current model parameters. The model is coded in Verilog-A, and the comparison between the model and measured data allows us to obtain the model parameter values and the parameter correlations/relations. The model and parameter extraction techniques have been used to study the impact of the parameters on the gate leakage current based on the extracted parameter values. It is shown that the gate leakage current depends more strongly on the interfacial barrier height than on the barrier height of the dielectric layer. Much the same holds for the carrier effective masses in the interfacial layer and the dielectric layer. The comparison between the simulated results and available measured gate leakage current transistor characteristics of Trigate MOSFETs shows good agreement.
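    The extraction workflow in the record relies on IC-CAP and a Verilog-A model, neither of which is reproduced here. As a hedged stand-in, the snippet below fits a generic Fowler-Nordheim-like tunnelling expression to synthetic gate-current data and reads the parameter correlation off the fit covariance, which is the kind of information the record uses to compare barrier heights and effective masses.

```python
import numpy as np
from scipy.optimize import curve_fit

# Illustrative only: a generic tunnelling expression, I = a * Vg^2 * exp(-b / Vg),
# is fitted in log space to synthetic gate-current data. The actual model in the
# record is coded in Verilog-A and extracted with IC-CAP; the expression and the
# parameter names here are placeholders.
rng = np.random.default_rng(2)
vg = np.linspace(0.2, 1.2, 40)                  # gate voltage sweep (V)
true_a, true_b = 1e-6, 3.0
ig = true_a * vg ** 2 * np.exp(-true_b / vg) * rng.lognormal(0.0, 0.05, vg.size)

def log_gate_current(v, log_a, b):
    return log_a + 2.0 * np.log(v) - b / v

popt, pcov = curve_fit(log_gate_current, vg, np.log(ig), p0=[np.log(1e-7), 1.0])
log_a_hat, b_hat = popt
perr = np.sqrt(np.diag(pcov))
corr_ab = pcov[0, 1] / (perr[0] * perr[1])      # parameter correlation, as discussed in the record
print(f"a = {np.exp(log_a_hat):.2e}, b = {b_hat:.2f}, corr(log_a, b) = {corr_ab:.2f}")
```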

  19. Correction of biased climate simulated by biased physics through parameter estimation in an intermediate coupled model

    NASA Astrophysics Data System (ADS)

    Zhang, Xuefeng; Zhang, Shaoqing; Liu, Zhengyu; Wu, Xinrong; Han, Guijun

    2016-09-01

    Imperfect physical parameterization schemes are an important source of model bias in a coupled model and adversely impact the performance of model simulation. With a coupled ocean-atmosphere-land model of intermediate complexity, the impact of imperfect parameter estimation on model simulation with biased physics has been studied. Here, the biased physics is induced by using different outgoing longwave radiation schemes in the assimilation and "truth" models. To mitigate model bias, the parameters employed in the biased longwave radiation scheme are optimized using three different methods: least-squares parameter fitting (LSPF), single-valued parameter estimation and geography-dependent parameter optimization (GPO), the last two of which belong to the coupled model parameter estimation (CMPE) method. While the traditional LSPF method is able to improve the performance of coupled model simulations, the optimized parameter values from the CMPE, which uses the coupled model dynamics to project observational information onto the parameters, further reduce the bias of the simulated climate arising from biased physics. Further, parameters estimated by the GPO method can properly capture the climate-scale signal to improve the simulation of climate variability. These results suggest that the physical parameter estimation via the CMPE scheme is an effective approach to restrain the model climate drift during decadal climate predictions using coupled general circulation models.

  20. In vivo characterization of two additional Leishmania donovani strains using the murine and hamster model.

    PubMed

    Kauffmann, F; Dumetz, F; Hendrickx, S; Muraille, E; Dujardin, J-C; Maes, L; Magez, S; De Trez, C

    2016-05-01

    Leishmania donovani is a protozoan parasite causing the neglected tropical disease visceral leishmaniasis. One difficulty in studying the immunopathology of L. donovani infection is the limited adaptability of the strains to experimental mammalian hosts. Our knowledge about L. donovani infections relies on a restricted number of East African strains (LV9, 1S). Isolated from patients in the 1960s, these strains were described extensively in mice and Syrian hamsters and have consequently become 'reference' laboratory strains. L. donovani strains from the Indian subcontinent display distinct clinical features compared to East African strains. Some reports describing the in vivo immunopathology of strains from the Indian subcontinent exist. This study comprises a comprehensive immunopathological characterization of infection with two additional strains, the Ethiopian L. donovani L82 strain and the Nepalese L. donovani BPK282 strain, in both Syrian hamsters and C57BL/6 mice. Parameters including parasitaemia levels, weight loss, hepatosplenomegaly and alterations in the cellular composition of the spleen and liver showed that the L82 strain generated an overall more virulent infection than the BPK282 strain. Altogether, both L. donovani strains are suitable and interesting for subsequent in vivo investigation of visceral leishmaniasis in the Syrian hamster and the C57BL/6 mouse model. PMID:27012562

  1. Elongated Quantum Dots of Ge on Si Growth Kinetics Modeling with Respect to the Additional Energy of Edges

    NASA Astrophysics Data System (ADS)

    Lozovoy, K. A.; Pishchagin, A. A.; Kokhanenko, A. P.; Voitsekhovskii, A. V.

    2016-08-01

    In this paper, a mathematical model for calculating the parameters of self-organised quantum dots (QDs) of Ge on Si grown by molecular beam epitaxy (MBE) is refined. The formation energies of pyramidal and wedge-like clusters were calculated, taking into account the contributions of surface energy, additional edge energy, elastic strain relaxation, and the decrease in the attraction of atoms to the substrate. Using the well-known model based on a generalization of classical nucleation theory, it was shown that elongated islands emerge later than pyramidal clusters. Calculations of the QD surface density and of the size distribution function were performed for wedge-like clusters with different length-to-width ratios. When the additional contribution of edge formation energy is taken into account, there is no special island geometry for which the surface density and average island size reach extrema, in contrast to the prediction of the earlier model that neglected the edge energy.

  2. [Error structure and additivity of individual tree biomass model for four natural conifer species in Northeast China].

    PubMed

    Dong, Li-hu; Li, Feng-ri; Song, Yu-wen

    2015-03-01

    Based on biomass data from 276 sample trees of Pinus koraiensis, Abies nephrolepis, Picea koraiensis and Larix gmelinii, mono-element and dual-element additive systems of biomass equations were developed for the four conifer species. The model error structure (additive vs. multiplicative) of the allometric equation was evaluated using likelihood analysis, while nonlinear seemingly unrelated regression was used to estimate the parameters of the additive system of biomass equations. The results indicated that the assumption of a multiplicative error structure was strongly supported for the biomass equations of the total and of the tree components for the four conifer species. Thus, the additive system of log-transformed biomass equations was developed. The adjusted coefficient of determination (Ra²) of the additive system of biomass equations for the four conifer species was 0.85-0.99, the mean relative error was between -7.7% and 5.5%, and the mean absolute relative error was less than 30.5%. Adding total tree height to the additive systems of biomass equations significantly improved model fitting performance and prediction precision, and the biomass equations for total, aboveground and stem biomass performed better than those for root, branch, foliage and crown biomass. The precision of each biomass equation in the additive system varied from 77.0% to 99.7%, with a mean value of 92.3%, which would be suitable for predicting the biomass of the four natural conifer species.
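    A minimal sketch of the log-transformed allometric form underlying such biomass systems, ln(B) = a + b ln(D) + c ln(H), fitted by ordinary least squares on synthetic data. The cross-equation additivity constraint, which the record enforces with nonlinear seemingly unrelated regression, is not reproduced; the coefficients and the back-transformation correction are illustrative only.

```python
import numpy as np

# Fit ln(B) = a + b*ln(D) + c*ln(H) by OLS on synthetic trees; D is diameter at
# breast height (cm), H is total height (m), B is component biomass (kg).
rng = np.random.default_rng(3)
n = 200
D = rng.uniform(8, 45, n)
H = rng.uniform(6, 30, n)
true = dict(a=-2.5, b=2.1, c=0.9)                       # made-up "true" coefficients
B = np.exp(true["a"]) * D ** true["b"] * H ** true["c"] * rng.lognormal(0, 0.15, n)

X = np.column_stack([np.ones(n), np.log(D), np.log(H)])
coef, *_ = np.linalg.lstsq(X, np.log(B), rcond=None)
a_hat, b_hat, c_hat = coef
print(f"a={a_hat:.2f}, b={b_hat:.2f}, c={c_hat:.2f} (true: -2.5, 2.1, 0.9)")

# Back-transform with the usual lognormal bias correction exp(sigma^2 / 2)
resid = np.log(B) - X @ coef
cf = np.exp(resid.var(ddof=3) / 2.0)
B_pred = cf * np.exp(X @ coef)
print("mean relative error (%):", round(100 * np.mean((B_pred - B) / B), 1))
```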

  3. A stochastic optimization model under modeling uncertainty and parameter certainty for groundwater remediation design: part II. Model application.

    PubMed

    He, L; Huang, G H; Lu, H W

    2010-04-15

    A new stochastic optimization model under modeling uncertainty (SOMUM) and parameter certainty is applied to a practical site located in western Canada. Various groundwater remediation strategies under different significance levels are obtained from the SOMUM model. The impact of modeling uncertainty (proxy-simulator residuals) on optimal remediation strategies is compared to that of parameter uncertainty (arising from physical properties). The results show that the increased remediation cost for mitigating the impact of modeling uncertainty would be higher than that from models where the coefficient of variation of the input parameters is approximately 40%. This provides new evidence that the modeling uncertainty in proxy-simulator residuals can hardly be ignored; there is thus a need to investigate and mitigate the impact of such uncertainties on groundwater remediation design. This work should be helpful for lowering the risk of system failure due to potential environmental-standard violations when determining optimal groundwater remediation strategies.

  4. Generalized Concentration Addition Modeling Predicts Mixture Effects of Environmental PPARγ Agonists.

    PubMed

    Watt, James; Webster, Thomas F; Schlezinger, Jennifer J

    2016-09-01

    The vast array of potential environmental toxicant combinations necessitates the development of efficient strategies for predicting toxic effects of mixtures. Current practices emphasize the use of concentration addition to predict joint effects of endocrine disrupting chemicals in coexposures. Generalized concentration addition (GCA) is one such method for predicting joint effects of coexposures to chemicals and has the advantage of allowing for mixture components to have differences in efficacy (ie, dose-response curve maxima). Peroxisome proliferator-activated receptor gamma (PPARγ) is a nuclear receptor that plays a central role in regulating lipid homeostasis, insulin sensitivity, and bone quality and is the target of an increasing number of environmental toxicants. Here, we tested the applicability of GCA in predicting mixture effects of therapeutic (rosiglitazone and nonthiazolidinedione partial agonist) and environmental PPARγ ligands (phthalate compounds identified using EPA's ToxCast database). Transcriptional activation of human PPARγ1 by individual compounds and mixtures was assessed using a peroxisome proliferator response element-driven luciferase reporter. Using individual dose-response parameters and GCA, we generated predictions of PPARγ activation by the mixtures, and we compared these predictions with the empirical data. At high concentrations, GCA provided a better estimation of the experimental response compared with 3 alternative models: toxic equivalency factor, effect summation and independent action. These alternatives provided reasonable fits to the data at low concentrations in this system. These experiments support the implementation of GCA in mixtures analysis with endocrine disrupting compounds and establish PPARγ as an important target for further studies of chemical mixtures. PMID:27255385
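    For components described by Hill functions with slope 1, GCA admits a simple closed form that accommodates partial agonists; the snippet below evaluates it for a hypothetical two-component mixture. The potencies, efficacies and concentrations are made-up values, not the fitted PPARγ parameters from the study.

```python
import numpy as np

# Generalized concentration addition (GCA) for components with Hill-slope-1
# dose-response curves, f_i(c) = alpha_i * c / (EC50_i + c). For that special
# case the mixture response has the closed form
#   E_mix = sum(alpha_i * c_i / EC50_i) / (1 + sum(c_i / EC50_i)),
# which, unlike plain concentration addition, allows differing maxima (alpha_i).
def gca_effect(conc, alpha, ec50):
    conc, alpha, ec50 = map(np.asarray, (conc, alpha, ec50))
    num = np.sum(alpha * conc / ec50)
    den = 1.0 + np.sum(conc / ec50)
    return num / den

# full agonist (reference compound) + weak partial agonist; values are illustrative
alpha = [1.0, 0.3]          # maximal responses (fraction of full activation)
ec50 = [0.1, 5.0]           # EC50s in arbitrary concentration units
mixture = [0.05, 10.0]      # concentrations of the two components

print("predicted mixture response:", round(gca_effect(mixture, alpha, ec50), 3))
```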

  5. Cognitive Models of Risky Choice: Parameter Stability and Predictive Accuracy of Prospect Theory

    ERIC Educational Resources Information Center

    Glockner, Andreas; Pachur, Thorsten

    2012-01-01

    In the behavioral sciences, a popular approach to describe and predict behavior is cognitive modeling with adjustable parameters (i.e., which can be fitted to data). Modeling with adjustable parameters allows, among other things, measuring differences between people. At the same time, parameter estimation also bears the risk of overfitting. Are…

  6. Modeling Nonlinear Adsorption to Carbon with a Single Chemical Parameter: A Lognormal Langmuir Isotherm.

    PubMed

    Davis, Craig Warren; Di Toro, Dominic M

    2015-07-01

    Predictive models for linear sorption of solutes onto various media, e.g., soil organic carbon, are well-established; however, methods for predicting parameters for nonlinear isotherm models, e.g., Freundlich and Langmuir models, are not. Predicting nonlinear partition coefficients is complicated by the number of model parameters to fit n isotherms (e.g., Freundlich (2n) or Polanyi-Manes (3n)). The purpose of this paper is to present a nonlinear adsorption model with only one chemically specific parameter. To accomplish this, several simplifications to a log-normal Langmuir (LNL) isotherm model with 3n parameters were explored. A single sorbate-specific binding constant (the median Langmuir binding constant) and two global sorbent parameters (the total site density and the standard deviation of the Langmuir binding constant) were employed. This single-solute specific (ss-LNL) model (2 + n parameters) was demonstrated to fit adsorption data as well as the 2n parameter Freundlich model. The LNL isotherm model is fit to four data sets composed of various chemicals sorbed to graphite, charcoal, and activated carbon. The RMS errors for the 3-, 2-, and 1-chemical specific parameter models were 0.066, 0.068, 0.069, and 0.113, respectively. The median logarithmic parameter standard errors for the four models were 1.070, 0.4537, 0.382, and 0.201, respectively. Further, the single-parameter model was the only model for which there were no standard errors of estimated parameters greater than a factor of 3 (0.50 log units). The surprising result is that very little decrease in RMSE occurs when two of the three parameters, σκ and qmax, are sorbate independent. However, the large standard errors present in the other models are significantly reduced. This remarkable simplification yields the single sorbate-specific parameter (ss-LNL) model. PMID:26035092
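    A numerical sketch of the lognormal Langmuir idea: the isotherm is a Langmuir term averaged over a lognormal distribution of binding constants, evaluated here by Gauss-Hermite quadrature. The parameter values (q_max, median binding constant, sigma) are placeholders, not the fitted values reported in the abstract.

```python
import numpy as np

# Lognormal Langmuir (LNL) isotherm:
#   q(C) = q_max * E_K[ K*C / (1 + K*C) ],  with log10(K) ~ Normal(log10(K_med), sigma_K)
# The expectation over log10(K) is computed with Gauss-Hermite quadrature.
def lnl_isotherm(C, q_max, log10_K_med, sigma_K, n_nodes=64):
    x, w = np.polynomial.hermite_e.hermegauss(n_nodes)   # nodes/weights for weight exp(-x^2/2)
    K = 10.0 ** (log10_K_med + sigma_K * x)
    C = np.atleast_1d(C)[:, None]
    theta = K * C / (1.0 + K * C)                         # local Langmuir coverage at each node
    return q_max * (theta @ w) / np.sqrt(2 * np.pi)       # normalize to a proper expectation

C = np.logspace(-3, 2, 6)                                 # aqueous concentrations (illustrative)
q = lnl_isotherm(C, q_max=200.0, log10_K_med=0.0, sigma_K=1.5)
for c, qi in zip(C, q):
    print(f"C = {c:8.3f}   q = {qi:7.2f}")
```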

  7. Modeling Nonlinear Adsorption to Carbon with a Single Chemical Parameter: A Lognormal Langmuir Isotherm.

    PubMed

    Davis, Craig Warren; Di Toro, Dominic M

    2015-07-01

    Predictive models for linear sorption of solutes onto various media, e.g., soil organic carbon, are well-established; however, methods for predicting parameters for nonlinear isotherm models, e.g., Freundlich and Langmuir models, are not. Predicting nonlinear partition coefficients is complicated by the number of model parameters to fit n isotherms (e.g., Freundlich (2n) or Polanyi-Manes (3n)). The purpose of this paper is to present a nonlinear adsorption model with only one chemically specific parameter. To accomplish this, several simplifications to a log-normal Langmuir (LNL) isotherm model with 3n parameters were explored. A single sorbate-specific binding constant (the median Langmuir binding constant) and two global sorbent parameters (the total site density and the standard deviation of the Langmuir binding constant) were employed. This single-solute specific (ss-LNL) model (2 + n parameters) was demonstrated to fit adsorption data as well as the 2n parameter Freundlich model. The LNL isotherm model is fit to four data sets composed of various chemicals sorbed to graphite, charcoal, and activated carbon. The RMS errors for the 3-, 2-, and 1-chemical specific parameter models were 0.066, 0.068, 0.069, and 0.113, respectively. The median logarithmic parameter standard errors for the four models were 1.070, 0.4537, 0.382, and 0.201, respectively. Further, the single-parameter model was the only model for which there were no standard errors of estimated parameters greater than a factor of 3 (0.50 log units). The surprising result is that very little decrease in RMSE occurs when two of the three parameters, σκ and qmax, are sorbate independent. However, the large standard errors present in the other models are significantly reduced. This remarkable simplification yields the single sorbate-specific parameter (ss-LNL) model.

  8. Lumped Parameter Modeling for Rapid Vibration Response Prototyping and Test Correlation for Electronic Units

    NASA Technical Reports Server (NTRS)

    Van Dyke, Michael B.

    2013-01-01

    This presentation: presents preliminary work using lumped-parameter models to approximate the dynamic response of electronic units to random vibration; derives a general N-DOF model for application to electronic units; illustrates the parametric influence of model parameters; discusses the implications of coupled dynamics for unit/board design; and demonstrates use of the model to infer printed wiring board (PWB) dynamics from external chassis test measurements.
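    A two-degree-of-freedom version of such a lumped-parameter model (a chassis mass and a printed wiring board mass coupled by springs) is sketched below with invented masses and stiffnesses; it only illustrates how coupled natural frequencies fall out of the mass and stiffness matrices, not the N-DOF formulation of the presentation.

```python
import numpy as np

# 2-DOF lumped-parameter sketch: chassis mass and PWB mass coupled by springs.
# All values are illustrative, not from any real electronic unit.
m_chassis, m_pwb = 2.0, 0.15          # kg
k_mount, k_board = 4.0e5, 6.0e4       # N/m

M = np.diag([m_chassis, m_pwb])
K = np.array([[k_mount + k_board, -k_board],
              [-k_board,           k_board]])

# Undamped natural frequencies from the eigenproblem K x = w^2 M x
eigvals = np.linalg.eigvals(np.linalg.solve(M, K))
fn = np.sqrt(np.sort(eigvals.real)) / (2 * np.pi)
print("coupled natural frequencies (Hz):", np.round(fn, 1))

# Varying k_board shows how PWB dynamics shift as chassis coupling changes,
# the kind of parametric study a lumped model is meant to support.
```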

  9. A new genetic fuzzy system approach for parameter estimation of ARIMA model

    NASA Astrophysics Data System (ADS)

    Hassan, Saima; Jaafar, Jafreezal; Belhaouari, Brahim S.; Khosravi, Abbas

    2012-09-01

    The Autoregressive Integrated Moving Average (ARIMA) model is the most powerful and practical time series model for forecasting. Parameter estimation is the most crucial part of ARIMA modeling: inaccurately estimated parameters lead to bias and unacceptable forecasting results. Parameter optimization can therefore be adopted in order to increase demand forecasting accuracy. A combination of a fuzzy system and a genetic algorithm is proposed in this paper as a parameter estimation approach for ARIMA. The new approach optimizes the parameters by tuning the fuzzy membership functions with a genetic algorithm. The proposed hybrid model of ARIMA and the genetic fuzzy system will yield acceptable forecasting results.

  10. Constructing Approximate Confidence Intervals for Parameters with Structural Equation Models

    ERIC Educational Resources Information Center

    Cheung, Mike W. -L.

    2009-01-01

    Confidence intervals (CIs) for parameters are usually constructed based on the estimated standard errors. These are known as Wald CIs. This article argues that likelihood-based CIs (CIs based on likelihood ratio statistics) are often preferred to Wald CIs. It shows how the likelihood-based CIs and the Wald CIs for many statistics and psychometric…

  11. Distributed parameter modelling of flexible spacecraft: Where's the beef?

    NASA Technical Reports Server (NTRS)

    Hyland, D. C.

    1994-01-01

    This presentation discusses various misgivings concerning the directions and productivity of Distributed Parameter System (DPS) theory as applied to spacecraft vibration control. We try to show the need for greater cross-fertilization between DPS theorists and spacecraft control designers. We recommend a shift in research directions toward exploration of asymptotic frequency response characteristics of critical importance to control designers.

  12. Cognitive models of risky choice: parameter stability and predictive accuracy of prospect theory.

    PubMed

    Glöckner, Andreas; Pachur, Thorsten

    2012-04-01

    In the behavioral sciences, a popular approach to describe and predict behavior is cognitive modeling with adjustable parameters (i.e., which can be fitted to data). Modeling with adjustable parameters allows, among other things, measuring differences between people. At the same time, parameter estimation also bears the risk of overfitting. Are individual differences as measured by model parameters stable enough to improve the ability to predict behavior as compared to modeling without adjustable parameters? We examined this issue in cumulative prospect theory (CPT), arguably the most widely used framework to model decisions under risk. Specifically, we examined (a) the temporal stability of CPT's parameters; and (b) how well different implementations of CPT, varying in the number of adjustable parameters, predict individual choice relative to models with no adjustable parameters (such as CPT with fixed parameters, expected value theory, and various heuristics). We presented participants with risky choice problems and fitted CPT to each individual's choices in two separate sessions (which were 1 week apart). All parameters were correlated across time, in particular when using a simple implementation of CPT. CPT allowing for individual variability in parameter values predicted individual choice better than CPT with fixed parameters, expected value theory, and the heuristics. CPT's parameters thus seem to pick up stable individual differences that need to be considered when predicting risky choice.
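    For readers unfamiliar with CPT's mechanics, the snippet below evaluates a two-outcome gamble using the commonly cited Tversky-Kahneman parameter values (alpha, beta, lambda, gamma); for brevity it uses a single weighting function for gains and losses. In the study these parameters are instead fitted to each participant's choices.

```python
# Cumulative prospect theory (CPT) valuation of a simple two-outcome gamble
# (one gain, one loss). Parameter values are standard textbook estimates and
# serve only as illustration.
def value(x, alpha=0.88, beta=0.88, lam=2.25):
    """S-shaped value function: concave for gains, convex and steeper for losses."""
    return x ** alpha if x >= 0 else -lam * (-x) ** beta

def weight(p, gamma=0.61):
    """Inverse-S probability weighting (same gamma used for gains and losses here)."""
    return p ** gamma / (p ** gamma + (1 - p) ** gamma) ** (1 / gamma)

def cpt_value(gain, p_gain, loss, p_loss):
    # with a single gain and a single loss, cumulative weighting reduces to
    # weighting each outcome's own probability
    return weight(p_gain) * value(gain) + weight(p_loss) * value(loss)

gamble_a = cpt_value(gain=100, p_gain=0.5, loss=-50, p_loss=0.5)
gamble_b = cpt_value(gain=30, p_gain=1.0, loss=0, p_loss=0.0)
print(f"CPT value A = {gamble_a:.1f}, B = {gamble_b:.1f} -> choose",
      "A" if gamble_a > gamble_b else "B")
```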

  13. SMA actuators for vibration control and experimental determination of model parameters dependent on ambient airflow velocity

    NASA Astrophysics Data System (ADS)

    Suzuki, Y.

    2016-05-01

    This article demonstrates the practical applicability of a method of modelling shape memory alloys (SMAs) as actuators. For this study, a pair of SMA wires was installed in an antagonistic manner to form an actuator, and a linear differential equation that describes the behaviour of the actuator’s generated force relative to its input voltage was derived for the limited range below the austenite onset temperature. In this range, hysteresis need not be considered, and the proposed SMA actuator can therefore be practically applied in linear control systems, which is significant because large deformations accompanied by hysteresis do not necessarily occur in most vibration control cases. When specific values of the parameters used in the differential equation were identified experimentally, it became clear that one of the parameters was dependent on ambient airflow velocity. The values of this dependent parameter were obtained using an additional SMA wire as a sensor. In these experiments, while the airflow distribution around the SMA wires was varied by changing the rotational speed of the fans in the wind tunnels, an input voltage was conveyed to the SMA actuator circuit, and the generated force was measured. In this way, the parameter dependent on airflow velocity was estimated in real time, and it was validated that the calculated force was consistent with the measured one.

  14. Impact of kinetic parameters on heat transfer modeling for a pultrusion process

    NASA Astrophysics Data System (ADS)

    Gorthala, R.; Roux, J. A.; Vaughan, J. G.; Donti, R. P.; Hassouneh, A.

    An examination is conducted of pultrusion heat-transfer model predictions for various resin chemical-kinetics parameters, whose values affect the model's heat-transfer results and predictions. Attention is given to the applicability of DSC kinetic parameters to resin cure modeling by comparing the predicted product cure temperature profiles and resin degree-of-cure values with pultrusion experiment results obtained for both carbon and glass reinforcements, different pull speeds and fiber volumes, and various die temperature profiles.

  15. Individual based modeling and parameter estimation for a Lotka-Volterra system.

    PubMed

    Waniewski, J; Jedruch, W

    1999-03-15

    The stochastic component that is inevitable in biological systems makes estimation of model parameters from a single sequence of measurements problematic, despite complete knowledge of the system. We studied the problem of parameter estimation using individual-based computer simulations of a 'Lotka-Volterra world'. Two kinds (species) of particles--X (preys) and Y (predators)--moved on a sphere according to deterministic rules, and at a collision (interaction) of X and Y the particle X was changed to a new particle Y. Birth of preys and death of predators were simulated by addition of X and removal of Y, respectively, according to exponential probability distributions. With this arrangement of the system, the numbers of particles of each kind might be described by the Lotka-Volterra equations. Simulations of the system with a low number of individuals (200-400 particles on average) showed unstable oscillations of the population size. In some simulation runs one of the species became extinct. Nevertheless, the oscillations had some generic properties (e.g. the mean oscillation period in one simulation run, the mean ratio of the amplitudes of consecutive maxima of X and Y numbers, etc.) characteristic of the solutions of the Lotka-Volterra equations. This observation made it possible to estimate the four parameters of the Lotka-Volterra model with high accuracy and good precision. The estimation was performed using the integral form of the Lotka-Volterra equations and two-parameter linear regression for each oscillation cycle separately. We conclude that in spite of the irregular time course of the number of individuals in each population due to the stochastic intraspecies component, the generic features of the simulated system evolution can provide enough information for quantitative estimation of the system parameters.
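    The integral-form estimation idea can be sketched in a few lines: since d(ln X)/dt = a - bY and d(ln Y)/dt = -c + dX, regressing changes in ln X (or ln Y) on elapsed time and the cumulative integral of the other species recovers the parameter pairs. The example below applies this to a clean deterministic simulation rather than the noisy individual-based runs of the study.

```python
import numpy as np
from scipy.integrate import solve_ivp, cumulative_trapezoid

# Simulate a Lotka-Volterra system, then recover (a, b) and (c, d) from the
# integral form by two-parameter linear regression (no intercept).
a, b, c, d = 1.0, 0.02, 0.8, 0.01

def lv(t, z):
    x, y = z
    return [a * x - b * x * y, -c * y + d * x * y]

t_eval = np.linspace(0, 30, 3001)
sol = solve_ivp(lv, (0, 30), [60.0, 30.0], t_eval=t_eval, rtol=1e-8)
x, y, t = sol.y[0], sol.y[1], sol.t

def estimate(ln_u, v, t):
    """Regress ln u(t) - ln u(0) on [t - t0, cumulative integral of v]."""
    dln = ln_u[1:] - ln_u[0]
    dt = t[1:] - t[0]
    int_v = cumulative_trapezoid(v, t)          # integral of v from t0 to each t
    A = np.column_stack([dt, int_v])
    coef, *_ = np.linalg.lstsq(A, dln, rcond=None)
    return coef

k1, k2 = estimate(np.log(x), y, t)              # d(ln x) = a*dt - b*int(y)
k3, k4 = estimate(np.log(y), x, t)              # d(ln y) = -c*dt + d*int(x)
print(f"a={k1:.3f}  b={-k2:.4f}  c={-k3:.3f}  d={k4:.4f}  (true: 1.0, 0.02, 0.8, 0.01)")
```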

  16. Individual based modeling and parameter estimation for a Lotka-Volterra system.

    PubMed

    Waniewski, J; Jedruch, W

    1999-03-15

    The stochastic component that is inevitable in biological systems makes estimation of model parameters from a single sequence of measurements problematic, despite complete knowledge of the system. We studied the problem of parameter estimation using individual-based computer simulations of a 'Lotka-Volterra world'. Two kinds (species) of particles--X (preys) and Y (predators)--moved on a sphere according to deterministic rules, and at a collision (interaction) of X and Y the particle X was changed to a new particle Y. Birth of preys and death of predators were simulated by addition of X and removal of Y, respectively, according to exponential probability distributions. With this arrangement of the system, the numbers of particles of each kind might be described by the Lotka-Volterra equations. Simulations of the system with a low number of individuals (200-400 particles on average) showed unstable oscillations of the population size. In some simulation runs one of the species became extinct. Nevertheless, the oscillations had some generic properties (e.g. the mean oscillation period in one simulation run, the mean ratio of the amplitudes of consecutive maxima of X and Y numbers, etc.) characteristic of the solutions of the Lotka-Volterra equations. This observation made it possible to estimate the four parameters of the Lotka-Volterra model with high accuracy and good precision. The estimation was performed using the integral form of the Lotka-Volterra equations and two-parameter linear regression for each oscillation cycle separately. We conclude that in spite of the irregular time course of the number of individuals in each population due to the stochastic intraspecies component, the generic features of the simulated system evolution can provide enough information for quantitative estimation of the system parameters. PMID:10194922

  17. An analytic solution to the Monod-Wyman-Changeux model and all parameters in this model.

    PubMed Central

    Zhou, G; Ho, P S; van Holde, K E

    1989-01-01

    Starting from the Monod-Wyman-Changeux (MWC) model (Monod, J., J. Wyman, and J. P. Changeux. 1965. J. Mol. Biol. 12:88-118), we obtain an analytical expression for the slope of the Hill plot at any ligand concentration. Furthermore, we derive an equation satisfied by the ligand concentration at the position of maximum slope. From these results, we derive a set of formulas which allow determination of the parameters of the MWC model (kR, C, and L) from the value of the Hill coefficient, nH, the ligand concentration at the position of maximum slope ([A]0), and the value of ν/(n-ν) at this point. We then outline procedures for utilizing these equations to provide a "best fit" of the MWC model to the experimental data, and to obtain a refined set of the parameters. Finally, we demonstrate the applicability of the technique by analysis of oxygen binding data for Octopus hemocyanin. PMID:2713440
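    The quantities named in the abstract are easy to reproduce numerically: the sketch below evaluates the MWC fractional saturation for illustrative values of n, L, c and kR, builds the Hill plot ln(ν/(n-ν)) versus ln[A], and reads off the maximum slope by finite differences (the record derives that slope analytically).

```python
import numpy as np

# MWC fractional saturation for an n-site protein:
#   Ybar = [L*c*a*(1+c*a)^(n-1) + a*(1+a)^(n-1)] / [L*(1+c*a)^n + (1+a)^n],  a = [A]/kR
# Parameter values are illustrative only.
n, L, c, kR = 4, 1e4, 0.01, 1.0

A = np.logspace(-3, 3, 2000)          # free ligand concentration
a = A / kR
num = L * c * a * (1 + c * a) ** (n - 1) + a * (1 + a) ** (n - 1)
den = L * (1 + c * a) ** n + (1 + a) ** n
Ybar = num / den                      # fractional saturation, nu/n

hill_y = np.log(Ybar / (1 - Ybar))    # ln(nu/(n-nu)), since Ybar = nu/n
hill_x = np.log(A)
slope = np.gradient(hill_y, hill_x)   # local Hill slope by finite differences

i = np.argmax(slope)
print(f"n_H (maximum Hill slope) = {slope[i]:.2f} at [A]0 = {A[i]:.3g}")
```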

  18. Using Dirichlet Priors to Improve Model Parameter Plausibility

    ERIC Educational Resources Information Center

    Rai, Dovan; Gong, Yue; Beck, Joseph E.

    2009-01-01

    Student modeling is a widely used approach to make inference about a student's attributes like knowledge, learning, etc. If we wish to use these models to analyze and better understand student learning there are two problems. First, a model's ability to predict student performance is at best weakly related to the accuracy of any one of its…

  19. Prediction models for solitary pulmonary nodules based on curvelet textural features and clinical parameters.

    PubMed

    Wang, Jing-Jing; Wu, Hai-Feng; Sun, Tao; Li, Xia; Wang, Wei; Tao, Li-Xin; Huo, Da; Lv, Ping-Xin; He, Wen; Guo, Xiu-Hua

    2013-01-01

    Lung cancer, one of the leading causes of cancer-related deaths, usually appears as solitary pulmonary nodules (SPNs) which are hard to diagnose using the naked eye. In this paper, curvelet-based textural features and clinical parameters are used with three prediction models [a multilevel model, a least absolute shrinkage and selection operator (LASSO) regression method, and a support vector machine (SVM)] to improve the diagnosis of benign and malignant SPNs. Dimensionality reduction of the original curvelet-based textural features was achieved using principal component analysis. In addition, non-conditional logistical regression was used to find clinical predictors among demographic parameters and morphological features. The results showed that, combined with 11 clinical predictors, the accuracy rates using 12 principal components were higher than those using the original curvelet-based textural features. To evaluate the models, 10-fold cross validation and back substitution were applied. The results obtained, respectively, were 0.8549 and 0.9221 for the LASSO method, 0.9443 and 0.9831 for SVM, and 0.8722 and 0.9722 for the multilevel model. All in all, it was found that using curvelet-based textural features after dimensionality reduction and using clinical predictors, the highest accuracy rate was achieved with SVM. The method may be used as an auxiliary tool to differentiate between benign and malignant SPNs in CT images. PMID:24289618
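    A minimal sklearn-style sketch of the pipeline described (PCA on texture features, concatenation with clinical predictors, SVM classification under 10-fold cross-validation), using a synthetic data set in place of the curvelet features and clinical variables; it is illustrative only and, unlike a careful analysis, fits the PCA outside the cross-validation loop.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-ins: 60 "texture" features and 11 "clinical" predictors.
X_texture, y = make_classification(n_samples=300, n_features=60,
                                   n_informative=15, random_state=0)
X_clinical = np.random.default_rng(0).normal(size=(300, 11))

# Dimensionality reduction of the texture features to 12 principal components,
# then concatenation with the clinical predictors (as in the record).
pca = PCA(n_components=12)
X = np.hstack([pca.fit_transform(X_texture), X_clinical])

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
scores = cross_val_score(clf, X, y, cv=10)       # 10-fold cross-validation
print("10-fold CV accuracy: %.3f +/- %.3f" % (scores.mean(), scores.std()))
```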

  20. Parameter identification and calibration of the Xin'anjiang model using the surrogate modeling approach

    NASA Astrophysics Data System (ADS)

    Ye, Yan; Song, Xiaomeng; Zhang, Jianyun; Kong, Fanzhe; Ma, Guangwen

    2014-06-01

    Practical experience has demonstrated that single objective functions, no matter how carefully chosen, prove to be inadequate in providing proper measurements for all of the characteristics of the observed data. One strategy to circumvent this problem is to define multiple fitting criteria that measure different aspects of system behavior, and to use multi-criteria optimization to identify non-dominated optimal solutions. Unfortunately, these analyses require running the original simulation model thousands of times and thus demand prohibitively large computational budgets. As a result, surrogate models have been used in combination with a variety of multi-objective optimization algorithms to approximate the true Pareto front within a limited number of evaluations of the original model. In this study, multi-objective optimization based on surrogate modeling (multivariate adaptive regression splines, MARS) is proposed for a conceptual rainfall-runoff model (Xin'anjiang model, XAJ). Taking the Yanduhe basin of the Three Gorges region in the upper reaches of the Yangtze River in China as a case study, three evaluation criteria were selected to quantify the goodness-of-fit of observations against values calculated by the simulation model: the Nash-Sutcliffe efficiency coefficient and the relative errors of peak flow and runoff volume (REPF and RERV). The efficacy of this method is demonstrated on the calibration of the XAJ model. Compared to the single-objective optimization results, the multi-objective optimization method was able to infer the most probable parameter set. The results also demonstrate that the use of surrogate modeling enables much more efficient optimization: the total computational cost is reduced by about 92.5% compared to optimization without surrogate modeling. The results obtained with the proposed method support the feasibility of applying parameter optimization to computationally intensive simulation
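    For reference, the three calibration criteria named above can be written out directly for a pair of observed and simulated hydrographs; the arrays below are toy values, and the definitions follow the usual forms of the Nash-Sutcliffe efficiency and the relative errors.

```python
import numpy as np

# The three calibration criteria used in the record, evaluated on toy hydrographs.
def nse(obs, sim):
    obs, sim = np.asarray(obs), np.asarray(sim)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def rel_error_peak_flow(obs, sim):
    return (np.max(sim) - np.max(obs)) / np.max(obs)

def rel_error_runoff_volume(obs, sim):
    return (np.sum(sim) - np.sum(obs)) / np.sum(obs)

obs = np.array([5.0, 12.0, 40.0, 75.0, 52.0, 30.0, 18.0, 10.0])   # observed flows
sim = np.array([6.0, 15.0, 37.0, 68.0, 55.0, 33.0, 16.0, 9.0])    # simulated flows
print("NSE  =", round(nse(obs, sim), 3))
print("REPF =", round(rel_error_peak_flow(obs, sim), 3))
print("RERV =", round(rel_error_runoff_volume(obs, sim), 3))
```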

  1. Macroscopic control parameter for avalanche models for bursty transport

    SciTech Connect

    Chapman, S. C.; Rowlands, G.; Watkins, N. W.

    2009-01-15

    Similarity analysis is used to identify the control parameter R_A for the subset of avalanching systems that can exhibit self-organized criticality (SOC). This parameter expresses the ratio of driving to dissipation. The transition to SOC, when the number of excited degrees of freedom is maximal, is found to occur when R_A → 0. This is in the opposite sense to (Kolmogorov) turbulence, thus identifying a deep distinction between turbulence and SOC and suggesting an observable property that could distinguish them. A corollary of this similarity analysis is that SOC phenomenology, that is, power law scaling of avalanches, can persist for finite R_A with the same R_A → 0 exponent if the system supports a sufficiently large range of lengthscales, necessary for SOC to be a candidate for physical (finite R_A) systems.

  2. Parameter Uncertainty for Aircraft Aerodynamic Modeling using Recursive Least Squares

    NASA Technical Reports Server (NTRS)

    Grauer, Jared A.; Morelli, Eugene A.

    2016-01-01

    A real-time method was demonstrated for determining accurate uncertainty levels of stability and control derivatives estimated using recursive least squares and time-domain data. The method uses a recursive formulation of the residual autocorrelation to account for colored residuals, which are routinely encountered in aircraft parameter estimation and change the predicted uncertainties. Simulation data and flight test data for a subscale jet transport aircraft were used to demonstrate the approach. Results showed that the corrected uncertainties matched the observed scatter in the parameter estimates, and did so more accurately than conventional uncertainty estimates that assume white residuals. Only small differences were observed between batch estimates and recursive estimates at the end of the maneuver. It was also demonstrated that the autocorrelation could be reduced to a small number of lags to minimize computation and memory storage requirements without significantly degrading the accuracy of predicted uncertainty levels.
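    A bare-bones recursive least squares loop of the kind referred to in the record is sketched below on synthetic data with two "derivatives"; the coloured-residual (autocorrelation) correction that is the paper's contribution is not reproduced, so the printed uncertainties correspond to the conventional white-residual assumption.

```python
import numpy as np

# Recursive least squares (RLS) for a linear model y = X @ theta + noise,
# updated one sample at a time. Data and the two "derivatives" are synthetic.
rng = np.random.default_rng(4)
n_samples, n_params = 500, 2
theta_true = np.array([-0.8, 2.5])                 # e.g. damping and control derivatives
X = rng.normal(size=(n_samples, n_params))
y = X @ theta_true + rng.normal(0.0, 0.1, n_samples)

theta = np.zeros(n_params)
P = np.eye(n_params) * 1e3                         # large initial covariance (weak prior)
for x_k, y_k in zip(X, y):
    x_k = x_k.reshape(-1, 1)
    gain = P @ x_k / (1.0 + x_k.T @ P @ x_k)       # Kalman-style gain
    theta = theta + gain.ravel() * (y_k - x_k.ravel() @ theta)
    P = P - gain @ x_k.T @ P

sigma2 = np.var(y - X @ theta)
param_std = np.sqrt(np.diag(P) * sigma2)           # naive (white-residual) uncertainties
print("estimates:", np.round(theta, 3), " +/-", np.round(param_std, 4))
```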

  3. Estimates of genetic parameters for growth traits in Brahman cattle using random regression and multitrait models.

    PubMed

    Bertipaglia, T S; Carreño, L O D; Aspilcueta-Borquis, R R; Boligon, A A; Farah, M M; Gomes, F J; Machado, C H C; Rey, F S B; da Fonseca, R

    2015-08-01

    Random regression models (RRM) and multitrait models (MTM) were used to estimate genetic parameters for growth traits in Brazilian Brahman cattle and to compare the estimated breeding values obtained by these 2 methodologies. For RRM, 78,641 weight records taken between 60 and 550 d of age from 16,204 cattle were analyzed, and for MTM, the analysis consisted of 17,385 weight records taken at the same ages from 12,925 cattle. All models included the fixed effect of contemporary group; the additive genetic, maternal genetic, and animal permanent environmental effects; and the quadratic effect of age at calving (AAC) as a covariate. For RRM, the AAC was nested in the animal's age class. The best RRM considered cubic polynomials and residual variance heterogeneity (5 levels). For MTM, the weights were adjusted to standard ages. Additive heritability estimates ranged from 0.42 to 0.75 for RRM and from 0.44 to 0.72 for MTM at 60, 120, 205, 365, and 550 d of age. The maximum maternal heritability estimate (0.08) was at 140 d for RRM, but for MTM, it was highest at weaning (0.09). The magnitude of the genetic correlations was generally moderate to high. The RRM adequately modeled changes in variance or covariance with age and, provided there is a sufficient number of samples, increased accuracy in the estimation of the genetic parameters can be expected. Correlations of bull classifications differed between the two methods at all the ages evaluated, especially at high selection intensities, which could affect the response to selection. PMID:26440161

  4. Parameter Estimation for Differential Equation Models Using a Framework of Measurement Error in Regression Models

    PubMed Central

    Liang, Hua

    2008-01-01

    Differential equation (DE) models are widely used in many scientific fields that include engineering, physics and biomedical sciences. The so-called “forward problem”, the problem of simulations and predictions of state variables for given parameter values in the DE models, has been extensively studied by mathematicians, physicists, engineers and other scientists. However, the “inverse problem”, the problem of parameter estimation based on the measurements of output variables, has not been well explored using modern statistical methods, although some least squares-based approaches have been proposed and studied. In this paper, we propose parameter estimation methods for ordinary differential equation models (ODE) based on the local smoothing approach and a pseudo-least squares (PsLS) principle under a framework of measurement error in regression models. The asymptotic properties of the proposed PsLS estimator are established. We also compare the PsLS method to the corresponding SIMEX method and evaluate their finite sample performances via simulation studies. We illustrate the proposed approach using an application example from an HIV dynamic study. PMID:19956350

  5. The influence of dispersing additive on the paraffin crystallization in model systems

    NASA Astrophysics Data System (ADS)

    Gorshkov, A. M.; Tien Thang, Pham; Shishmina, L. V.; Chekantseva, L. V.

    2015-11-01

    This work investigates the influence of a dispersing additive on paraffin crystallization in model systems. A new method for determining the paraffin saturation point of transparent solutions, based on the phenomenon of light scattering, is proposed. A linear relationship between the critical micelle concentration of the additive and the quantity of paraffin in solution was obtained. The influence of the model system composition on paraffin crystallization was also studied.

  6. Target Rotations and Assessing the Impact of Model Violations on the Parameters of Unidimensional Item Response Theory Models

    ERIC Educational Resources Information Center

    Reise, Steven; Moore, Tyler; Maydeu-Olivares, Alberto

    2011-01-01

    Reise, Cook, and Moore proposed a "comparison modeling" approach to assess the distortion in item parameter estimates when a unidimensional item response theory (IRT) model is imposed on multidimensional data. Central to their approach is the comparison of item slope parameter estimates from a unidimensional IRT model (a restricted model), with…

  7. Ramsay-Curve Item Response Theory for the Three-Parameter Logistic Item Response Model

    ERIC Educational Resources Information Center

    Woods, Carol M.

    2008-01-01

    In Ramsay-curve item response theory (RC-IRT), the latent variable distribution is estimated simultaneously with the item parameters of a unidimensional item response model using marginal maximum likelihood estimation. This study evaluates RC-IRT for the three-parameter logistic (3PL) model with comparisons to the normal model and to the empirical…

  8. Identification of the 1PL Model with Guessing Parameter: Parametric and Semi-Parametric Results

    ERIC Educational Resources Information Center

    San Martin, Ernesto; Rolin, Jean-Marie; Castro, Luis M.

    2013-01-01

    In this paper, we study the identification of a particular case of the 3PL model, namely when the discrimination parameters are all constant and equal to 1. We term this model, 1PL-G model. The identification analysis is performed under three different specifications. The first specification considers the abilities as unknown parameters. It is…

  9. A General Approach for Specifying Informative Prior Distributions for PBPK Model Parameters

    EPA Science Inventory

    Characterization of uncertainty in model predictions is receiving more interest as more models are being used in applications that are critical to human health. For models in which parameters reflect biological characteristics, it is often possible to provide estimates of paramet...

  10. Solute transport modeling using morphological parameters of step-pool reaches

    NASA Astrophysics Data System (ADS)

    Jiménez, Mario A.; Wohl, Ellen

    2013-03-01

    Step-pool systems have been widely studied during the past few years, resulting in enhanced knowledge of mechanisms for sediment transport, energy dissipation and patterns of self-organization. We use rhodamine tracer data collected in nine step-pool reaches during high, intermediate and low flows to explore scaling of solute transport processes. Using the scaling patterns found, we propose an extension of the Aggregated Dead Zone (ADZ) approach for solute transport modeling based on the morphological features of step-pool units and their corresponding inherent variability within a stream reach. In addition to discharge, the reach-average bankfull width, mean step height, and the ratio of pool length to step-to-step length can be used as explanatory variables for the dispersion process within the studied reaches. These variables appeared to be sufficient for estimating ADZ model parameters and simulating solute transport in predictive mode for applications in reaches lacking tracer data.
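    The ADZ formulation reduces, in discrete time, to a first-order lag plus an advective delay. The sketch below runs that recursion for a tracer pulse with hand-picked travel time and dispersive fraction; in the record those parameters are instead predicted from step-pool morphology (bankfull width, step height, pool/step-length ratio).

```python
import numpy as np

# Discrete-time Aggregated Dead Zone (ADZ) model: first-order lag plus an
# advective time delay, driven by an upstream tracer pulse. Parameter values
# are illustrative, not derived from reach morphology.
dt = 1.0                      # time step (s)
travel_time = 60.0            # total reach travel time (s)
dispersive_fraction = 0.35    # fraction of travel time spent in "dead zones"

T = dispersive_fraction * travel_time          # ADZ residence time
delay = int(round((travel_time - T) / dt))     # advective time delay in steps
a = np.exp(-dt / T)
b = 1.0 - a                                    # unity steady-state gain

n = 400
c_in = np.zeros(n); c_in[10:20] = 1.0          # upstream rhodamine pulse
c_out = np.zeros(n)
for k in range(1, n):
    upstream = c_in[k - delay] if k >= delay else 0.0
    c_out[k] = a * c_out[k - 1] + b * upstream

print("downstream peak at t =", np.argmax(c_out) * dt, "s, peak =", round(c_out.max(), 3))
```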

  11. Averaging Models: Parameters Estimation with the R-Average Procedure

    ERIC Educational Resources Information Center

    Vidotto, G.; Massidda, D.; Noventa, S.

    2010-01-01

    The Functional Measurement approach, proposed within the theoretical framework of Information Integration Theory (Anderson, 1981, 1982), can be a useful multi-attribute analysis tool. Compared to the majority of statistical models, the averaging model can account for interaction effects without adding complexity. The R-Average method (Vidotto &…

  12. Estimation of MIMIC Model Parameters with Multilevel Data

    ERIC Educational Resources Information Center

    Finch, W. Holmes; French, Brian F.

    2011-01-01

    The purpose of this simulation study was to assess the performance of latent variable models that take into account the complex sampling mechanism that often underlies data used in educational, psychological, and other social science research. Analyses were conducted using the multiple indicator multiple cause (MIMIC) model, which is a flexible…

  13. Relating Data and Models to Characterize Parameter and Prediction Uncertainty

    EPA Science Inventory

    Applying PBPK models in risk analysis requires that we realistically assess the uncertainty of relevant model predictions in as quantitative a way as possible. The reality of human variability may add a confusing feature to the overall uncertainty assessment, as uncertainty and v...

  14. Parameter Variability and Distributional Assumptions in the Diffusion Model

    ERIC Educational Resources Information Center

    Ratcliff, Roger

    2013-01-01

    If the diffusion model (Ratcliff & McKoon, 2008) is to account for the relative speeds of correct responses and errors, it is necessary that the components of processing identified by the model vary across the trials of a task. In standard applications, the rate at which information is accumulated by the diffusion process is assumed to be normally…

  15. Model parameter uncertainty analysis for an annual field-scale P loss model

    NASA Astrophysics Data System (ADS)

    Bolster, Carl H.; Vadas, Peter A.; Boykin, Debbie

    2016-08-01

    Phosphorus (P) fate and transport models are important tools for developing and evaluating conservation practices aimed at reducing P losses from agricultural fields. Because all models are simplifications of complex systems, there will exist an inherent amount of uncertainty associated with their predictions. It is therefore important that efforts be directed at identifying, quantifying, and communicating the different sources of model uncertainties. In this study, we conducted an uncertainty analysis with the Annual P Loss Estimator (APLE) model. Our analysis included calculating parameter uncertainties and confidence and prediction intervals for five internal regression equations in APLE. We also estimated uncertainties of the model input variables based on values reported in the literature. We then predicted P loss for a suite of fields under different management and climatic conditions while accounting for uncertainties in the model parameters and inputs and compared the relative contributions of these two sources of uncertainty to the overall uncertainty associated with predictions of P loss. Both the overall magnitude of the prediction uncertainties and the relative contributions of the two sources of uncertainty varied depending on management practices and field characteristics. This was due to differences in the number of model input variables and the uncertainties in the regression equations associated with each P loss pathway. Inspection of the uncertainties in the five regression equations brought attention to a previously unrecognized limitation with the equation used to partition surface-applied fertilizer P between leaching and runoff losses. As a result, an alternate equation was identified that provided similar predictions with much less uncertainty. Our results demonstrate how a thorough uncertainty and model residual analysis can be used to identify limitations with a model. Such insight can then be used to guide future data collection and model

  16. Parameter Estimation In Ensemble Data Assimilation To Characterize Model Errors In Surface-Layer Schemes Over Complex Terrain

    NASA Astrophysics Data System (ADS)

    Hacker, Joshua; Lee, Jared; Lei, Lili

    2014-05-01

    Numerical weather prediction (NWP) models have deficiencies in surface and boundary layer parameterizations, which may be particularly acute over complex terrain. Structural and physical model deficiencies are often poorly understood, and can be difficult to identify. Uncertain model parameters can lead to one class of model deficiencies when they are mis-specified. By augmenting the model state variables with parameters, data assimilation can be used to estimate the parameter distributions as long as the forecasts of observed variables are linearly dependent on the parameters. Reduced forecast (background) error shows that the parameter is accounting for some component of model error. Ensemble data assimilation has the favorable characteristic of providing ensemble-mean parameter estimates, eliminating some noise in the estimates when additional constraints on the error dynamics are unknown. This study focuses on coupling the Weather Research and Forecasting (WRF) NWP model with the Data Assimilation Research Testbed (DART) to estimate the Zilitinkevich parameter (CZIL). CZIL controls the thermal 'roughness length' for a given momentum roughness, thereby controlling heat and moisture fluxes through the surface layer by specifying the (unobservable) aerodynamic surface temperature. Month-long data assimilation experiments with 96 ensemble members, and grid spacing down to 3.3 km, provide a data set for interpreting parametric model errors in complex terrain. Experiments are during fall 2012 over the western U.S., and radiosonde, aircraft, satellite wind, surface, and mesonet observations are assimilated every 3 hours. One ensemble has a globally constant value of CZIL=0.1 (the WRF default value), while a second ensemble allows CZIL to vary over the range [0.01, 0.99], with distributions updated via the assimilation. Results show that the CZIL estimates do vary in time and space. Most often, forecasts are more skillful with the updated parameter values, compared to the
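    The state-augmentation idea can be illustrated with a toy ensemble filter: append the uncertain parameter to the state, and let the ensemble covariance between the parameter and the observed variable drive its update. The surrogate "surface-layer" model, the parameter bounds and the observation errors below are all invented; the real experiments use WRF coupled to DART.

```python
import numpy as np

# Toy parameter estimation by state augmentation in an ensemble filter: an
# uncertain parameter (standing in for CZIL) is updated from its ensemble
# covariance with an observed quantity. The one-line "model" is a surrogate.
rng = np.random.default_rng(5)
n_ens = 96
czil_true, obs_err = 0.4, 0.3

def forecast_t2m(czil, forcing):
    """Hypothetical surface-layer response: 2 m temperature depends on CZIL."""
    return 15.0 + 5.0 * forcing - 8.0 * czil

czil = rng.uniform(0.01, 0.99, n_ens)              # prior parameter ensemble
for cycle in range(20):                             # assimilation cycles
    forcing = rng.normal(1.0, 0.2)
    hx = forecast_t2m(czil, forcing) + rng.normal(0, 0.1, n_ens)   # model noise
    y_obs = forecast_t2m(czil_true, forcing) + rng.normal(0, obs_err)

    # Kalman update of the augmented parameter from ensemble statistics
    cov_py = np.cov(czil, hx)[0, 1]
    var_y = np.var(hx, ddof=1) + obs_err ** 2
    gain = cov_py / var_y
    czil = czil + gain * (y_obs + rng.normal(0, obs_err, n_ens) - hx)  # perturbed obs
    czil = np.clip(czil, 0.01, 0.99)

print(f"posterior CZIL mean = {czil.mean():.2f} (true {czil_true})")
```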

  17. Simultaneous Parameters Identifiability and Estimation of an E. coli Metabolic Network Model

    PubMed Central

    Alberton, André Luís; Di Maggio, Jimena Andrea; Estrada, Vanina Gisela; Díaz, María Soledad; Secchi, Argimiro Resende

    2015-01-01

    This work proposes a procedure for simultaneous parameter identifiability analysis and estimation in metabolic networks, in order to overcome the difficulties associated with a lack of experimental data and a large number of parameters, a common scenario in the modeling of such systems. As a case study, the complex real problem of parameter identifiability in the Escherichia coli K-12 W3110 dynamic model was investigated; the model is composed of 18 ordinary differential equations and 35 kinetic rates, containing 125 parameters. With the procedure, the model fit was improved for most of the measured metabolites, with 58 parameters estimated, including 5 unknown initial conditions. The results indicate that the simultaneous parameter identifiability and estimation approach in metabolic networks is appealing, since fitting the model to most of the measured metabolites was possible even when important measurements of intracellular metabolites and good initial parameter estimates were not available. PMID:25654103

  18. Simultaneous parameters identifiability and estimation of an E. coli metabolic network model.

    PubMed

    Pontes Freitas Alberton, Kese; Alberton, André Luís; Di Maggio, Jimena Andrea; Estrada, Vanina Gisela; Díaz, María Soledad; Secchi, Argimiro Resende

    2015-01-01

    This work proposes a procedure for simultaneous parameter identifiability analysis and estimation in metabolic networks, in order to overcome the difficulties associated with a lack of experimental data and a large number of parameters, a common scenario in the modeling of such systems. As a case study, the complex real problem of parameter identifiability in the Escherichia coli K-12 W3110 dynamic model was investigated; the model is composed of 18 ordinary differential equations and 35 kinetic rates, containing 125 parameters. With the procedure, the model fit was improved for most of the measured metabolites, with 58 parameters estimated, including 5 unknown initial conditions. The results indicate that the simultaneous parameter identifiability and estimation approach in metabolic networks is appealing, since fitting the model to most of the measured metabolites was possible even when important measurements of intracellular metabolites and good initial parameter estimates were not available. PMID:25654103

  19. Network Scale Modeling of Lymph Transport and Its Effective Pumping Parameters.

    PubMed

    Jamalian, Samira; Davis, Michael J; Zawieja, David C; Moore, James E

    2016-01-01

    The lymphatic system is an open-ended network of vessels that run in parallel to the blood circulation system. These vessels are present in almost all of the tissues of the body to remove excess fluid. Similar to blood vessels, lymphatic vessels are found in branched arrangements. Due to the complexity of experiments on lymphatic networks and the difficulty of controlling the important functional parameters in these setups, computational modeling becomes an effective and essential means of understanding lymphatic network pumping dynamics. Here we aimed to determine the effect of pumping coordination in branched network structures on the regulation of lymph flow. Lymphatic vessel networks were created by building upon our previous lumped-parameter model of lymphangions in series. In our network model, each vessel is itself divided into multiple lymphangions by lymphatic valves that help maintain forward flow. Vessel junctions are modeled by equating the pressures and balancing mass flows. Our results demonstrated that a 1.5 s rest period between contractions optimizes the flow rate. A time delay between contractions of lymphangions at the junction of branches provided an advantage over synchronous pumping, but additional time delays within individual vessels only increased the flow rate for adverse pressure differences greater than 10.5 cmH2O. Additionally, we quantified the pumping capability of the system under increasing levels of steady transmural pressure and outflow pressure for different network sizes. We observed that peak flow rates normally occurred under transmural pressures between 2 to 4 cmH2O (for multiple pressure differences and network sizes). Networks with 10 lymphangions per vessel had the highest pumping capability under a wide range of adverse pressure differences. For favorable pressure differences, pumping was more efficient with fewer lymphangions. These findings are valuable for translating experimental measurements from the single lymphangion

  20. Personalization of models with many model parameters: an efficient sensitivity analysis approach.

    PubMed

    Donders, W P; Huberts, W; van de Vosse, F N; Delhaas, T

    2015-10-01

    Uncertainty quantification and global sensitivity analysis are indispensable for patient-specific applications of models that enhance diagnosis or aid decision-making. Variance-based sensitivity analysis methods, which apportion each fraction of the output uncertainty (variance) to the effects of individual input parameters or their interactions, are considered the gold standard. The variance portions are called the Sobol sensitivity indices and can be estimated by a Monte Carlo (MC) approach (e.g., Saltelli's method [1]) or by employing a metamodel (e.g., the (generalized) polynomial chaos expansion (gPCE) [2, 3]). All these methods require a large number of model evaluations when estimating the Sobol sensitivity indices for models with many parameters [4]. To reduce the computational cost, we introduce a two-step approach. In the first step, a subset of important parameters is identified for each output of interest using the screening method of Morris [5]. In the second step, a quantitative variance-based sensitivity analysis is performed using gPCE. Efficient sampling strategies are introduced to minimize the number of model runs required to obtain the sensitivity indices for models considering multiple outputs. The approach is tested using a model that was developed for predicting post-operative flows after creation of a vascular access for renal failure patients. We compare the sensitivity indices obtained with the novel two-step approach with those obtained from a reference analysis that applies Saltelli's MC method. The two-step approach was found to yield accurate estimates of the sensitivity indices at two orders of magnitude lower computational cost. PMID:26017545
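    The first (screening) step of the two-step approach can be sketched with hand-rolled Morris elementary effects on a toy four-parameter model; the mu* and sigma summaries then indicate which inputs deserve the more expensive variance-based (gPCE) analysis. The model, ranges and trajectory settings are illustrative assumptions, not the vascular-access model of the record.

```python
import numpy as np

# Morris elementary-effects screening on a toy model with four inputs on [0, 1];
# only x1 and x3 genuinely matter, which the mu* ranking should reveal.
rng = np.random.default_rng(6)

def model(p):
    x1, x2, x3, x4 = p
    return 3.0 * x1 + 0.1 * x2 + 2.0 * x1 * x3 + 0.01 * x4

n_params, n_trajectories, delta = 4, 50, 0.2
effects = [[] for _ in range(n_params)]

for _ in range(n_trajectories):
    base = rng.uniform(0.0, 1.0 - delta, n_params)       # trajectory start point
    y_base = model(base)
    for j in rng.permutation(n_params):                   # perturb one factor at a time
        stepped = base.copy()
        stepped[j] += delta
        y_step = model(stepped)
        effects[j].append((y_step - y_base) / delta)       # elementary effect of factor j
        base, y_base = stepped, y_step                     # continue along the trajectory

for j in range(n_params):
    ee = np.asarray(effects[j])
    print(f"x{j + 1}: mu* = {np.mean(np.abs(ee)):.2f}   sigma = {np.std(ee):.2f}")
```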

  1. A review of distributed parameter groundwater management modeling methods.

    USGS Publications Warehouse

    Gorelick, S.M.

    1983-01-01

    Models which solve the governing groundwater flow or solute transport equations in conjunction with optimization techniques, such as linear and quadratic programming, are powerful aquifer management tools. Groundwater management models fall in two general categories: hydraulics or policy evaluation and water allocation. Experience from the few real world applications of groundwater optimization-management techniques is summarized. Classified separately are methods for groundwater quality management aimed at optimal waste disposal in the subsurface. This classification is composed of steady state and transient management models that determine disposal patterns in such a way that water quality is protected at supply locations. -from Author

  2. On an Additive Semigraphoid Model for Statistical Networks With Application to Pathway Analysis

    PubMed Central

    Li, Bing; Chun, Hyonho; Zhao, Hongyu

    2014-01-01

    We introduce a nonparametric method for estimating non-gaussian graphical models based on a new statistical relation called additive conditional independence, which is a three-way relation among random vectors that resembles the logical structure of conditional independence. Additive conditional independence allows us to use one-dimensional kernel regardless of the dimension of the graph, which not only avoids the curse of dimensionality but also simplifies computation. It also gives rise to a parallel structure to the gaussian graphical model that replaces the precision matrix by an additive precision operator. The estimators derived from additive conditional independence cover the recently introduced nonparanormal graphical model as a special case, but outperform it when the gaussian copula assumption is violated. We compare the new method with existing ones by simulations and in genetic pathway analysis. PMID:26401064

  3. On the Influence of Material Parameters in a Complex Material Model for Powder Compaction

    NASA Astrophysics Data System (ADS)

    Staf, Hjalmar; Lindskog, Per; Andersson, Daniel C.; Larsson, Per-Lennart

    2016-10-01

    Parameters in a complex material model for powder compaction, based on a continuum mechanics approach, are evaluated using real insert geometries. The parameter sensitivity with respect to density and stress after compaction, pertinent to a wide range of geometries, is studied in order to investigate completeness and limitations of the material model. Finite element simulations with varied material parameters are used to build surrogate models for the sensitivity study. The conclusion from this analysis is that a simplification of the material model is relevant, especially for simple insert geometries. Parameters linked to anisotropy and the plastic strain evolution angle have a small impact on the final result.

  5. From global to local: exploring the relationship between parameters and behaviors in models of electrical excitability.

    PubMed

    Fletcher, Patrick; Bertram, Richard; Tabak, Joel

    2016-06-01

    Models of electrical activity in excitable cells involve nonlinear interactions between many ionic currents. Changing parameters in these models can produce a variety of activity patterns with sometimes unexpected effects. Furthermore, introducing new currents will have different effects depending on the initial parameter set. In this study we combined global sampling of parameter space and local analysis of representative parameter sets in a pituitary cell model to understand the effects of adding K(+) conductances, which mediate some effects of hormone action on these cells. Global sampling ensured that the effects of introducing K(+) conductances were captured across a wide variety of contexts of model parameters. For each type of K(+) conductance we determined the types of behavioral transition that it evoked. Some transitions were counterintuitive, and may have been missed without the use of global sampling. In general, the wide range of transitions that occurred when the same current was applied to the model cell at different locations in parameter space highlights the challenge of making accurate model predictions in light of cell-to-cell heterogeneity. Finally, we used bifurcation analysis and fast/slow analysis to investigate why specific transitions occur in representative individual models. This approach relies on the use of a graphics processing unit (GPU) to quickly map parameter space to model behavior and identify parameter sets for further analysis. Acceleration with modern low-cost GPUs is particularly well suited to exploring the moderate-sized (5-20) parameter spaces of excitable cell and signaling models.

  6. When the Optimal Is Not the Best: Parameter Estimation in Complex Biological Models

    PubMed Central

    Fernández Slezak, Diego; Suárez, Cecilia; Cecchi, Guillermo A.; Marshall, Guillermo; Stolovitzky, Gustavo

    2010-01-01

    Background The vast computational resources that became available during the past decade enabled the development and simulation of increasingly complex mathematical models of cancer growth. These models typically involve many free parameters whose determination is a substantial obstacle to model development. Direct measurement of biochemical parameters in vivo is often difficult and sometimes impracticable, while fitting them under data-poor conditions may result in biologically implausible values. Results We discuss different methodological approaches to estimate parameters in complex biological models. We make use of the high computational power of the Blue Gene technology to perform an extensive study of the parameter space in a model of avascular tumor growth. We explicitly show that the landscape of the cost function used to optimize the model to the data has a very rugged surface in parameter space. This cost function has many local minima with unrealistic solutions, including the global minimum corresponding to the best fit. Conclusions The case studied in this paper shows one example in which model parameters that optimally fit the data are not necessarily the best ones from a biological point of view. To avoid force-fitting a model to a dataset, we propose that the best model parameters should be found by choosing, among suboptimal parameters, those that match criteria other than the ones used to fit the model. We also conclude that the model, data and optimization approach form a new complex system and point to the need of a theory that addresses this problem more generally. PMID:21049094

  7. Relaxation oscillation model of hemodynamic parameters in the cerebral vessels

    NASA Astrophysics Data System (ADS)

    Cherevko, A. A.; Mikhaylova, A. V.; Chupakhin, A. P.; Ufimtseva, I. V.; Krivoshapkin, A. L.; Orlov, K. Yu

    2016-06-01

    Simulating blood flow under both normal and pathological conditions is an extremely complex problem of great current interest, both from the point of view of fundamental hydrodynamics and for medical applications. This paper proposes a Van der Pol - Duffing nonlinear oscillator model describing relaxation oscillations of blood flow in the cerebral vessels. The model is based on patient-specific clinical flow data obtained during neurosurgical operations at the Meshalkin Novosibirsk Research Institute of Circulation Pathology. The stability of the model is demonstrated through variations of the initial data and coefficients. It is universal and describes pressure and velocity fluctuations in different cerebral vessels (arteries, veins, sinuses), as well as in a laboratory model of a carotid bifurcation. The derived equation describes the rheology of the composite "blood stream - elastic vessel wall - gelatinous brain environment" system and represents the state equation of this complex medium.
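
    A minimal sketch of a generic Van der Pol - Duffing oscillator of the kind referred to above, assuming the common form x'' - mu*(1 - x^2)*x' + a*x + b*x^3 = F*cos(w*t); the coefficients below are illustrative placeholders, not the patient-specific values fitted in the study.

```python
# Illustrative integration of a forced Van der Pol-Duffing oscillator with SciPy.
import numpy as np
from scipy.integrate import solve_ivp

def vdp_duffing(t, y, mu=1.0, a=1.0, b=0.5, F=0.3, w=1.2):
    x, v = y
    # x' = v ; v' = mu*(1 - x^2)*v - a*x - b*x^3 + F*cos(w*t)
    return [v, mu * (1.0 - x ** 2) * v - a * x - b * x ** 3 + F * np.cos(w * t)]

sol = solve_ivp(vdp_duffing, (0.0, 100.0), [0.1, 0.0], max_step=0.01)
print(sol.y[0, -5:])   # tail of the simulated oscillation (stand-in for a pressure/velocity trace)
```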

  8. Tradeoffs among watershed model calibration targets for parameter estimation

    EPA Science Inventory

    Hydrologic models are commonly calibrated by optimizing a single objective function target to compare simulated and observed flows, although individual targets are influenced by specific flow modes. Nash-Sutcliffe efficiency (NSE) emphasizes flood peaks in evaluating simulation f...

  9. A practical method to assess model sensitivity and parameter uncertainty in C cycle models

    NASA Astrophysics Data System (ADS)

    Delahaies, Sylvain; Roulstone, Ian; Nichols, Nancy

    2015-04-01

    The carbon cycle combines multiple spatial and temporal scales, from minutes to hours for the chemical processes occurring in plant cells to several hundreds of years for the exchange between the atmosphere and the deep ocean and finally to millennia for the formation of fossil fuels. Together with our knowledge of the transformation processes involved in the carbon cycle, many Earth Observation systems are now available to help improve models and predictions using inverse modelling techniques. A generic inverse problem consists of finding an n-dimensional state vector x such that h(x) = y, for a given N-dimensional observation vector y, including random noise, and a given model h. The problem is well posed if the following three conditions hold: 1) there exists a solution, 2) the solution is unique and 3) the solution depends continuously on the input data. If at least one of these conditions is violated the problem is said to be ill-posed. Since the inverse problem is often ill-posed, a regularization method is required to replace the original problem with a well posed problem, and then a solution strategy amounts to 1) constructing a solution x, 2) assessing the validity of the solution, 3) characterizing its uncertainty. The data assimilation linked ecosystem carbon (DALEC) model is a simple box model simulating the carbon budget allocation for terrestrial ecosystems. Intercomparison experiments have demonstrated the relative merit of various inverse modelling strategies (MCMC, ENKF) to estimate model parameters and initial carbon stocks for DALEC using eddy covariance measurements of net ecosystem exchange of CO2 and leaf area index observations. Most results agreed on the fact that parameters and initial stocks directly related to fast processes were best estimated with narrow confidence intervals, whereas those related to slow processes were poorly estimated with very large uncertainties. While other studies have tried to overcome this difficulty by adding complementary
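
    As a hedged illustration of the regularization step described above, the sketch below replaces an ill-posed linear inverse problem h(x) = y with a Tikhonov-regularized least-squares problem; the matrix H, the data y and the regularization weight are made-up examples, not quantities from the DALEC experiments.

```python
# Tikhonov regularization of an underdetermined linear inverse problem.
import numpy as np

rng = np.random.default_rng(1)
n, N = 20, 15                                   # more unknowns than observations -> ill-posed
H = rng.normal(size=(N, n))
x_true = rng.normal(size=n)
y = H @ x_true + 0.01 * rng.normal(size=N)      # noisy observations

lam = 1e-2                                       # regularization weight (assumed, tune in practice)
x_reg = np.linalg.solve(H.T @ H + lam * np.eye(n), H.T @ y)  # regularized normal equations
print(np.linalg.norm(H @ x_reg - y))             # residual of the regularized solution
```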

  10. Testing a Gender Additive Model: The Role of Body Image in Adolescent Depression

    ERIC Educational Resources Information Center

    Bearman, Sarah Kate; Stice, Eric

    2008-01-01

    Despite consistent evidence that adolescent girls are at greater risk of developing depression than adolescent boys, risk factor models that account for this difference have been elusive. The objective of this research was to examine risk factors proposed by the "gender additive" model of depression that attempts to partially explain the increased…

  11. Measures of GCM Performance as Functions of Model Parameters Affecting Clouds and Radiation

    NASA Astrophysics Data System (ADS)

    Jackson, C.; Mu, Q.; Sen, M.; Stoffa, P.

    2002-05-01

    This abstract is one of three related presentations at this meeting dealing with several issues surrounding optimal parameter and uncertainty estimation of model predictions of climate. Uncertainty in model predictions of climate depends in part on the uncertainty produced by model approximations or parameterizations of unresolved physics. Evaluating these uncertainties is computationally expensive because one needs to evaluate how arbitrary choices for any given combination of model parameters affect model performance. Because the computational effort grows exponentially with the number of parameters being investigated, it is important to choose parameters carefully. Evaluating whether a parameter is worth investigating depends on two considerations: 1) do reasonable choices of parameter values produce a large range in model response relative to observational uncertainty? and 2) does the model response depend non-linearly on various combinations of model parameters? We have decided to narrow our attention to selecting parameters that affect clouds and radiation, as it is likely that these parameters will dominate uncertainties in model predictions of future climate. We present preliminary results of ~20 to 30 AMIPII style climate model integrations using NCAR's CCM3.10 that show model performance as functions of individual parameters controlling 1) critical relative humidity for cloud formation (RHMIN), and 2) boundary layer critical Richardson number (RICR). We also explore various definitions of model performance that include some or all observational data sources (surface air temperature and pressure, meridional and zonal winds, clouds, long and short-wave cloud forcings, etc.) and evaluate in a few select cases whether the model's response depends non-linearly on the parameter values we have selected.
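
    One possible "model performance" measure of the kind discussed above is sketched below: a single cost that averages mean-square errors over several observed fields, each normalized by an assumed observational uncertainty. The field names, arrays and sigmas are placeholders, and this is only one of many definitions the abstract alludes to.

```python
# Hedged sketch of an aggregate skill score over multiple observed fields.
import numpy as np

def aggregate_skill(sim: dict, obs: dict, sigma: dict) -> float:
    # sim/obs map field names (e.g. "T_surf", "u_wind") to arrays on a common grid;
    # sigma holds an assumed observational uncertainty per field
    costs = [np.mean(((sim[k] - obs[k]) / sigma[k]) ** 2) for k in obs]
    return float(np.mean(costs))

grid = np.ones((10, 10))
print(aggregate_skill({"T": grid + 0.5}, {"T": grid}, {"T": np.full_like(grid, 1.0)}))  # 0.25
```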

  12. Parameter Optimization for the Gaussian Model of Folded Proteins

    NASA Astrophysics Data System (ADS)

    Erman, Burak; Erkip, Albert

    2000-03-01

    Recently, we proposed an analytical model of protein folding (B. Erman, K. A. Dill, J. Chem. Phys, 112, 000, 2000) and showed that this model successfully approximates the known minimum energy configurations of two-dimensional HP chains. All attractions (covalent and non-covalent) as well as repulsions were treated as if the monomer units interacted with each other through linear spring forces. Since the governing potential of the linear springs is derived from a Gaussian potential, the model is called the "Gaussian Model". The predicted conformations from the model for the hexamer and various 9mer sequences all lie on the square lattice, although the model does not contain information about the lattice structure. Results of predictions for chains with 20 or more monomers also agreed well with corresponding known minimum energy lattice structures. However, these predicted conformations did not lie exactly on the square lattice. In the present work, we treat the specific problem of optimizing the potentials (the strengths of the spring constants) so that the predictions are in better agreement with the known minimum energy structures.

  13. Simultaneous estimation of model parameters and diffuse pollution sources for river water quality modeling.

    PubMed

    Jun, K S; Kang, J W; Lee, K S

    2007-01-01

    Diffuse pollution sources along a stream reach are very difficult to both monitor and estimate. In this paper, a systematic method using an optimal estimation algorithm is presented for simultaneous estimation of diffuse pollution and model parameters in a stream water quality model. It was applied with the QUAL2E model to the South Han River in South Korea for optimal estimation of kinetic constants and diffuse loads along the river. Initial calibration results for kinetic constants selected from a sensitivity analysis reveal that diffuse source inputs for nitrogen and phosphorus are essential to satisfy the system mass balance. Diffuse loads for total nitrogen and total phosphorus were estimated by solving the expanded inverse problem. Comparison of kinetic constants estimated simultaneously with diffuse sources to those estimated without diffuse loads suggests that diffuse sources must be included in the optimization not only for their own estimation but also for adequate estimation of the model parameters. Application of the optimization method to river water quality modeling is discussed in terms of the sensitivity coefficient matrix structure.

  14. Hydrologic Modeling and Parameter Estimation under Data Scarcity for Java Island, Indonesia

    NASA Astrophysics Data System (ADS)

    Yanto, M.; Livneh, B.; Rajagopalan, B.; Kasprzyk, J. R.

    2015-12-01

    The Indonesian island of Java is routinely subjected to intense flooding, drought and related natural hazards, resulting in severe social and economic impacts. Although an improved understanding of the island's hydrology would help mitigate these risks, data scarcity issues make the modeling challenging. To this end, we developed a hydrological representation of Java using the Variable Infiltration Capacity (VIC) model, to simulate the hydrologic processes of several watersheds across the island. We measured the model performance using Nash-Sutcliffe Efficiency (NSE) at a monthly time step. Data scarcity and quality issues for precipitation and streamflow warranted the application of a quality control procedure to ensure data consistency among watersheds, resulting in 7 usable watersheds. To optimize the model performance, the calibration parameters were estimated using the Borg Multi Objective Evolutionary Algorithm (Borg MOEA), which offers efficient searching of the parameter space, adaptive population sizing and a local-optima escape facility. The result shows that calibration performance is best (NSE ~ 0.6 - 0.9) in the eastern part of the domain and moderate (NSE ~ 0.3 - 0.5) in the western part of the island. The validation results are lower (NSE ~ 0.1 - 0.5) and (NSE ~ 0.1 - 0.4) in the east and west, respectively. We surmise that the presence of outliers and stark differences in the climate between calibration and validation periods in the western watersheds are responsible for low NSE in this region. In addition, we found that approximately 70% of total errors were contributed by less than 20% of total data. The spatial variability of model performance suggests the influence of both topographical and hydroclimatic controls on the hydrological processes. Most watersheds in the eastern part perform better in the wet season, and vice versa for the western part. This modeling framework is one of the first attempts at comprehensively simulating the hydrology in this maritime, tropical
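
    A minimal sketch of the Nash-Sutcliffe Efficiency used above to score simulated flows; variable names are illustrative and the VIC/Borg MOEA workflow itself is not reproduced.

```python
# Nash-Sutcliffe Efficiency: 1 is a perfect fit, 0 means no better than the observed mean.
import numpy as np

def nse(sim, obs):
    sim, obs = np.asarray(sim, float), np.asarray(obs, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

print(nse([1.1, 2.0, 3.2], [1.0, 2.1, 3.0]))   # close to 1 for this near-perfect toy fit
```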

  15. Ecohydrological model parameter selection for stream health evaluation.

    PubMed

    Woznicki, Sean A; Nejadhashemi, A Pouyan; Ross, Dennis M; Zhang, Zhen; Wang, Lizhu; Esfahanian, Abdol-Hossein

    2015-04-01

    Variable selection is a critical step in development of empirical stream health prediction models. This study develops a framework for selecting important in-stream variables to predict four measures of biological integrity: total number of Ephemeroptera, Plecoptera, and Trichoptera (EPT) taxa, family index of biotic integrity (FIBI), Hilsenhoff biotic integrity (HBI), and fish index of biotic integrity (IBI). Over 200 flow regime and water quality variables were calculated using the Hydrologic Index Tool (HIT) and Soil and Water Assessment Tool (SWAT). Streams of the River Raisin watershed in Michigan were grouped using the Strahler stream classification system (orders 1-3 and orders 4-6), k-means clustering technique (two clusters: C1 and C2), and all streams (one grouping). For each grouping, variable selection was performed using Bayesian variable selection, principal component analysis, and Spearman's rank correlation. Following selection of best variable sets, models were developed to predict the measures of biological integrity using adaptive neuro-fuzzy inference systems (ANFIS), a technique well-suited to complex, nonlinear ecological problems. Multiple unique variable sets were identified, all of which differed by selection method and stream grouping. Final best models were mostly built using the Bayesian variable selection method. The most effective stream grouping method varied by health measure, although k-means clustering and grouping by stream order were always superior to models built without grouping. Commonly selected variables were related to streamflow magnitude, rate of change, and seasonal nitrate concentration. Each best model was effective in simulating stream health observations, with EPT taxa validation R2 ranging from 0.67 to 0.92, FIBI ranging from 0.49 to 0.85, HBI from 0.56 to 0.75, and fish IBI at 0.99 for all best models. The comprehensive variable selection and modeling process proposed here is a robust method that extends our

  17. Measuring the basic parameters of neutron stars using model atmospheres

    NASA Astrophysics Data System (ADS)

    Suleimanov, V. F.; Poutanen, J.; Klochkov, D.; Werner, K.

    2016-02-01

    Model spectra of neutron star atmospheres are nowadays widely used to fit the observed thermal X-ray spectra of neutron stars. This fitting is the key element in the method of the neutron star radius determination. Here, we present the basic assumptions used for the neutron star atmosphere modeling as well as the main qualitative features of the stellar atmospheres leading to the deviations of the emergent model spectrum from blackbody. We describe the properties of two of our model atmosphere grids: i) pure carbon atmospheres for relatively cool neutron stars (1-4 MK) and ii) hot atmospheres with Compton scattering taken into account. The results obtained by applying these grids to model the X-ray spectra of the central compact object in supernova remnant HESS 1731-347, and two X-ray bursting neutron stars in low-mass X-ray binaries, 4U 1724-307 and 4U 1608-52, are presented. Possible systematic uncertainties associated with the obtained neutron star radii are discussed.

  18. Canyon building ventilation system dynamic model -- Parameters and validation

    SciTech Connect

    Moncrief, B.R. ); Chen, F.F.K. )

    1993-01-01

    Plant system simulation crosses many disciplines. At the core is the mimic of key components in the form of mathematical "models." These component models are functionally integrated to represent the plant. With today's low cost high capacity computers, the whole plant can be truly and effectively reproduced in a computer model. Dynamic simulation has its roots in "single loop" design, which is still a common objective in the employment of simulation. The other common objectives are the ability to preview plant operation, to anticipate problem areas, and to test the impact of design options. As plant system complexity increases and our ability to simulate the entire plant grows, the objective to optimize plant system design becomes practical. This shift in objectives from problem avoidance to total optimization by far offers the most rewarding potential. Even a small reduction in bulk materials and space can sufficiently justify the application of this technology. Furthermore, to realize an optimal plant starts from a tight and disciplined design. We believe the assurance required to execute such a design strategy can partly be derived from a plant model. This paper reports on the application of a dynamic model to evaluate the capacity of an existing production plant ventilation system. This study met the practical objectives of capacity evaluation under present and future conditions, and under normal and accidental situations. More importantly, the description of this application, in its methods and its utility, aims to validate the technology of dynamic simulation in the environment of plant system design and safe operation.

  19. Cell death, perfusion and electrical parameters are critical in models of hepatic radiofrequency ablation

    PubMed Central

    Hall, Sheldon K.; Ooi, Ean H.; Payne, Stephen J.

    2015-01-01

    Purpose: A sensitivity analysis has been performed on a mathematical model of radiofrequency ablation (RFA) in the liver. The purpose of this is to identify the most important parameters in the model, defined as those that produce the largest changes in the prediction. This is important in understanding the role of uncertainty and when comparing the model predictions to experimental data. Materials and methods: The Morris method was chosen to perform the sensitivity analysis because it is ideal for models with many parameters or that take a significant length of time to obtain solutions. A comprehensive literature review was performed to obtain ranges over which the model parameters are expected to vary, crucial input information. Results: The most important parameters in predicting the ablation zone size in our model of RFA are those representing the blood perfusion, electrical conductivity and the cell death model. The size of the 50 °C isotherm is sensitive to the electrical properties of tissue while the heat source is active, and to the thermal parameters during cooling. Conclusions: The parameter ranges chosen for the sensitivity analysis are believed to represent all that is currently known about their values in combination. The Morris method is able to compute global parameter sensitivities taking into account the interaction of all parameters, something that has not been done before. Research is needed to better understand the uncertainties in the cell death, electrical conductivity and perfusion models, but the other parameters are only of second order, providing a significant simplification. PMID:26000972
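
    A simplified, hedged sketch of the Morris screening idea referred to above: one-at-a-time elementary effects averaged over random base points (a radial design rather than the full trajectory design), with the RFA model replaced by a toy function and all ranges and sample counts assumed.

```python
# Elementary-effects (Morris-style) screening over a toy 3-parameter model.
import numpy as np

def morris_screening(model, bounds, r=50, delta=0.1, seed=0):
    rng = np.random.default_rng(seed)
    bounds = np.asarray(bounds, float)
    d = len(bounds)
    ee = np.empty((r, d))
    for k in range(r):
        base = rng.uniform(0.0, 1.0 - delta, size=d)                 # base point in the unit cube
        f0 = model(bounds[:, 0] + base * (bounds[:, 1] - bounds[:, 0]))
        for i in range(d):
            step = base.copy()
            step[i] += delta                                         # perturb one factor at a time
            fi = model(bounds[:, 0] + step * (bounds[:, 1] - bounds[:, 0]))
            ee[k, i] = (fi - f0) / delta
    # mu* flags overall importance, sigma flags nonlinearity/interactions
    return np.abs(ee).mean(axis=0), ee.std(axis=0)

mu_star, sigma = morris_screening(lambda p: p[0] ** 2 + 0.1 * p[1] + p[0] * p[2],
                                  bounds=[[0, 1], [0, 1], [0, 1]])
print(mu_star.round(2), sigma.round(2))
```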

  20. Hydrological model parameters identification in a coastal nested catchment in Mersin province (SE Turkey)

    NASA Astrophysics Data System (ADS)

    Yıldırım, Ümit; Jomaa, Seifeddine; Güler, Cüneyt; Rode, Michael

    2016-04-01

    It is known that the coastal Mediterranean region is facing a serious problem of water resources exploitation due to rapid demographic, socio-economic, land use and climate changes. Hydrological modeling has proven to be an efficient tool for better water resources prediction and management. In this study, the HYdrological Predictions for the Environment (HYPE) model was set up on the nested coastal Sorgun catchment in Turkey (449 km2). This catchment is located in the eastern part of the Mersin province and is characterized by extremely varied topography, land use, and population density under semi-arid Mediterranean climate conditions. First, the model was calibrated at the catchment outlet (Sarilar) for the period 2003-2006. Second, the model was validated temporally for the period 2009-2013 at daily and monthly time intervals. In addition, the model performance was tested spatially using an internal station (B. Sorgun, 269 km2) located in the headwater region. Results showed that the HYPE model could reproduce the measured daily discharge well (Nash-Sutcliffe Efficiency (NSE) of 0.78 and 0.68 for the calibration and validation periods, respectively). At the monthly time step, the model performs better than at the daily time interval (NSE of 0.92 and 0.83 for the calibration and validation periods, respectively). The model could represent the water balance relatively well at daily and monthly time steps, with the lowest PBIAS (percentage bias) being -4.19% and -3.53% for daily and monthly time intervals, respectively (considering the whole period). Results revealed, however, that the agreement between the predicted and measured discharge was reduced when the best-optimized model parameters from the Sarilar gauging station (catchment outlet) were used at the B. Sorgun station (internal station). This reduced model transferability to the internal station can be explained by the clear changes in terms of land use, soil type and precipitation rate in the

  1. NASA Workshop on Distributed Parameter Modeling and Control of Flexible Aerospace Systems

    NASA Technical Reports Server (NTRS)

    Marks, Virginia B. (Compiler); Keckler, Claude R. (Compiler)

    1994-01-01

    Although significant advances have been made in modeling and controlling flexible systems, there remains a need for improvements in model accuracy and in control performance. The finite element models of flexible systems are unduly complex and are almost intractable to optimum parameter estimation for refinement using experimental data. Distributed parameter or continuum modeling offers some advantages and some challenges in both modeling and control. Continuum models often result in a significantly reduced number of model parameters, thereby enabling optimum parameter estimation. The dynamic equations of motion of continuum models provide the advantage of allowing the embedding of the control system dynamics, thus forming a complete set of system dynamics. There is also increased insight provided by the continuum model approach.

  2. Development of wavelet-ANN models to predict water quality parameters in Hilo Bay, Pacific Ocean.

    PubMed

    Alizadeh, Mohamad Javad; Kavianpour, Mohamad Reza

    2015-09-15

    The main objective of this study is to apply artificial neural network (ANN) and wavelet-neural network (WNN) models for predicting a variety of ocean water quality parameters. In this regard, several water quality parameters in Hilo Bay, Pacific Ocean, are taken into consideration. Different combinations of water quality parameters are applied as input variables to predict daily values of salinity, temperature and DO as well as hourly values of DO. The results demonstrate that the WNN models are superior to the ANN models. Also, the hourly models developed for DO prediction outperform the daily models of DO. For the daily models, the most accurate model has R equal to 0.96, while for the hourly model it reaches up to 0.98. Overall, the results show the ability of the model to monitor the ocean parameters under conditions of missing data, or when regular measurement and monitoring are impossible.

  3. Multi-objective global sensitivity analysis of the WRF model parameters

    NASA Astrophysics Data System (ADS)

    Quan, Jiping; Di, Zhenhua; Duan, Qingyun; Gong, Wei; Wang, Chen

    2015-04-01

    Tuning model parameters to match model simulations with observations can be an effective way to enhance the performance of numerical weather prediction (NWP) models such as the Weather Research and Forecasting (WRF) model. However, this is a very complicated process as a typical NWP model involves many model parameters and many output variables. One must take a multi-objective approach to ensure all of the major simulated model outputs are satisfactory. This talk presents the results of an investigation of multi-objective parameter sensitivity analysis of the WRF model to different model outputs, including conventional surface meteorological variables such as precipitation, surface temperature, humidity and wind speed, as well as atmospheric variables such as total precipitable water, cloud cover, boundary layer height and outgoing longwave radiation at the top of the atmosphere. The goal of this study is to identify the most important parameters that affect the predictive skill of short-range meteorological forecasts by the WRF model. The study was performed over the Greater Beijing Region of China. A total of 23 adjustable parameters from seven different physical parameterization schemes were considered. Using a multi-objective global sensitivity analysis method, we examined the WRF model parameter sensitivities to the 5-day simulations of the aforementioned model outputs. The results show that parameter sensitivities vary with different model outputs. But three to four of the parameters are shown to be sensitive to all model outputs considered. The sensitivity results from this research can be the basis for future model parameter optimization of the WRF model.

  4. Generation of pareto optimal ensembles of calibrated parameter sets for climate models.

    SciTech Connect

    Dalbey, Keith R.; Levy, Michael Nathan

    2010-12-01

    Climate models have a large number of inputs and outputs. In addition, diverse parameter sets can match observations similarly well. These factors make calibrating the models difficult. But as the Earth enters a new climate regime, parameter sets may cease to match observations. History matching is necessary but not sufficient for good predictions. We seek a 'Pareto optimal' ensemble of calibrated parameter sets for the CCSM climate model, in which no individual criterion can be improved without worsening another. One Multi Objective Genetic Algorithm (MOGA) optimization typically requires thousands of simulations but produces an ensemble of Pareto optimal solutions. Our simulation budget of 500-1000 runs allows us to perform the MOGA optimization once, but with far fewer evaluations than normal. We devised an analytic test problem to aid in the selection of MOGA settings. The test problem's Pareto set is the surface of a 6-dimensional hypersphere with radius 1 centered at the origin, or rather the portion of it in the [0,1] octant. We also explore starting MOGA from a space-filling Latin Hypercube sample design, specifically Binning Optimal Symmetric Latin Hypercube Sampling (BOSLHS), instead of Monte Carlo (MC). We compare the Pareto sets based on: their number of points, N (larger is better); their RMS distance, d, to the ensemble's center (0.5553 is optimal); their average radius, {mu}(r) (1 is optimal); and their radius standard deviation, {sigma}(r) (0 is optimal). The estimated distributions for these metrics when starting from MC and BOSLHS are shown in Figs. 1 and 2.
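
    A hedged sketch of the Pareto-set quality metrics listed above, evaluated for the 6-D hypersphere test problem; the points below are random stand-ins for a Pareto front, not actual MOGA output.

```python
# Compute N, RMS distance to the ensemble center, mean radius and radius spread.
import numpy as np

def pareto_metrics(points):
    points = np.asarray(points, float)
    n = len(points)                                                   # N: number of Pareto points
    center = points.mean(axis=0)
    d_rms = np.sqrt(np.mean(np.sum((points - center) ** 2, axis=1)))  # RMS distance to center
    r = np.linalg.norm(points, axis=1)                                # radius of each point from the origin
    return n, d_rms, r.mean(), r.std()                                # ideal on the unit sphere: mu(r)=1, sigma(r)=0

rng = np.random.default_rng(0)
p = np.abs(rng.normal(size=(200, 6)))            # stand-in points on the unit-sphere octant
p /= np.linalg.norm(p, axis=1, keepdims=True)
print(pareto_metrics(p))
```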

  5. Bayesian Student Modeling and the Problem of Parameter Specification.

    ERIC Educational Resources Information Center

    Millan, Eva; Agosta, John Mark; Perez de la Cruz, Jose Luis

    2001-01-01

    Discusses intelligent tutoring systems and the application of Bayesian networks to student modeling. Considers reasons for not using Bayesian networks, including the computational complexity of the algorithms and the difficulty of knowledge acquisition, and proposes an approach to simplify knowledge acquisition that applies causal independence to…

  6. Genetic Parameters for Milk Yield and Lactation Persistency Using Random Regression Models in Girolando Cattle

    PubMed Central

    Canaza-Cayo, Ali William; Lopes, Paulo Sávio; da Silva, Marcos Vinicius Gualberto Barbosa; de Almeida Torres, Robledo; Martins, Marta Fonseca; Arbex, Wagner Antonio; Cobuci, Jaime Araujo

    2015-01-01

    A total of 32,817 test-day milk yield (TDMY) records of the first lactation of 4,056 Girolando cows, daughters of 276 sires, collected from 118 herds between 2000 and 2011, were utilized to estimate the genetic parameters for TDMY via random regression models (RRM) using Legendre polynomial functions whose orders varied from 3 to 5. In addition, nine measures of persistency in milk yield (PSi) and the genetic trend of 305-day milk yield (305MY) were evaluated. The fit quality criteria used indicated the RRM employing Legendre polynomials of orders 3 and 5 for fitting the additive genetic and permanent environment effects, respectively, as the best model. The heritability and genetic correlation for TDMY throughout the lactation, obtained with the best model, varied from 0.18 to 0.23 and from −0.03 to 1.00, respectively. The heritability and genetic correlation for persistency and 305MY varied from 0.10 to 0.33 and from −0.98 to 1.00, respectively. The use of PS7 would be the most suitable option for the evaluation of Girolando cattle. The estimated breeding values for 305MY of sires and cows showed significant and positive genetic trends. Thus, the use of selection indices would be indicated in the genetic evaluation of Girolando cattle for both traits. PMID:26323397
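
    An illustrative sketch of the Legendre-polynomial covariates on which random regression models of this kind are built: days in milk are rescaled to [-1, 1] and the first k Legendre polynomials are evaluated. The lactation interval and the use of unnormalized (rather than the commonly used normalized) polynomials are assumptions for the example.

```python
# Legendre covariate matrix for a test-day random regression model.
import numpy as np
from numpy.polynomial import legendre

def legendre_covariates(dim, k, t_min=5, t_max=305):
    """Return an (n, k) matrix of Legendre covariates for days-in-milk `dim`."""
    x = -1.0 + 2.0 * (np.asarray(dim, float) - t_min) / (t_max - t_min)   # map DIM to [-1, 1]
    # column j holds L_j(x), obtained by evaluating with the j-th unit coefficient vector
    return np.column_stack([legendre.legval(x, np.eye(k)[j]) for j in range(k)])

print(legendre_covariates([5, 155, 305], k=3))
```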

  7. Mechanisms and modeling of the effects of additives on the nitrogen oxides emission

    NASA Technical Reports Server (NTRS)

    Kundu, Krishna P.; Nguyen, H. Lee; Kang, M. Paul

    1991-01-01

    A theoretical study on the emission of the oxides of nitrogen in the combustion of hydrocarbons is presented. The current understanding of the mechanisms and the rate parameters for gas-phase reactions was used to calculate the NO(x) emission. The possible effects of different chemical species on thermal NO(x) over a long time scale were discussed. The mixing of these additives at various stages of combustion was considered and NO(x) concentrations were calculated; effects of temperature were also considered. Chemicals such as hydrocarbons, H2, CH3OH, NH3, and other nitrogen species were chosen as additives in this discussion. Results of these calculations can be used to evaluate the effects of these additives on NO(x) emission in industrial combustion systems.

  9. Microscopy and Spectroscopy Techniques to Guide Parameters for Modeling Mineral Dust Aerosol Particles

    NASA Astrophysics Data System (ADS)

    Veghte, D. P.; Moore, J. E.; Jensen, L.; Freedman, M. A.

    2013-12-01

    Mineral dust aerosol particles are the second largest emission by mass into the atmosphere and contribute to the largest uncertainty in radiative forcing. Due to the variation in size, composition, and shape, caused by physical and chemical processing, uncertainty exists as to whether mineral dust causes a net warming or cooling effect. We have used Cavity Ring-Down Aerosol Extinction Spectroscopy (CRD-AES), Scanning Electron Microscopy (SEM), and Transmission Electron Microscopy (TEM) to measure extinction cross sections and morphologies of size-selected, non-absorbing and absorbing mineral dust aerosol particles. We have found that microscopy is essential for characterizing the polydispersity of the size selection of non-spherical particles. Through the combined use of CRD-AES, microscopy, and computation (Mie theory and the Discrete Dipole Approximation), we have determined the effect of shape on the optical properties of additional species including clay minerals, quartz, and hematite in the sub-micron regime. Our results have shown that calcite can be treated as polydisperse spheres while quartz and hematite need additional modeling parameters to account for their irregularity. Size selection of clay minerals cannot be performed due to their irregular shape, but microscopy techniques can be used to better quantify the particle aspect ratio. Our results demonstrate a new method that can be used to extend cavity ring-down spectroscopy for the measurement of the optical properties of non-spherical particles. This characterization will lead to better aerosol extinction parameters for modeling aerosol optical properties in climate models and satellite retrieval algorithms.

  10. Parameter identification of the SWAT model on the BANI catchment (West Africa) under limited data condition

    NASA Astrophysics Data System (ADS)

    Chaibou Begou, Jamilatou; Jomaa, Seifeddine; Benabdallah, Sihem; Rode, Michael

    2015-04-01

    Due to climate change, drier conditions have prevailed in West Africa since the seventies, with important consequences for water resources. In order to identify and implement management strategies for adaptation to climate change in the water sector, it is crucial to improve our physical understanding of the evolution of water resources in the region. To this end, hydrologic modelling is an appropriate tool for flow predictions under changing climate and land use conditions. In this study, the applicability and performance of the recent version of the Soil and Water Assessment Tool (SWAT2012) model were tested on the Bani catchment in West Africa under limited data conditions. Model parameter identification was also tested using one-site and multi-site calibration approaches. The Bani is located in the upper part of the Niger River and drains an area of about 101,000 km2 at the outlet of Douna. The climate is tropical, humid to semi-arid from the South to the North, with an average annual rainfall of 1050 mm (period 1981-2000). Global datasets were used for the model setup, such as the USGS hydrosheds DEM, USGS LCI GlobCov2009 and the FAO Digital Soil Map of the World. Daily measured rainfall from nine rain gauges and maximum and minimum temperature from five weather stations covering the period 1981-1997 were used for model setup. Sensitivity analysis, calibration and validation were performed within SWATCUP using the GLUE procedure, first at the Douna station (one-site calibration), then at three additional internal stations, Bougouni, Pankourou and Kouoro1 (multi-site calibration). Model parameters were calibrated at a daily time step for the period 1983-1992, then validated for the period 1993-1997. A period of two years (1981-1982) was used for model warm-up. Results of the one-site calibration showed model performance of 0.76 and 0.79 for the Nash-Sutcliffe efficiency (NS) and the correlation coefficient (R2), respectively. While for the validation period the performance

  11. In vivo assessment of an industrial waste product as a feed additive in dairy cows: Effects of larch (Larix decidua L.) sawdust on blood parameters and milk composition.

    PubMed

    Tedesco, D; Garavaglia, L; Spagnuolo, M S; Pferschy-Wenzig, E M; Bauer, R; Franz, C

    2015-12-01

    When larch (Larix spp.) is processed in the wood industry, the sawdust is currently disposed of as waste or used as combustible material, even though it is rich in biologically active compounds. In this study the effect of larch sawdust supplementation on blood parameters as well as milk composition was examined in healthy mid-lactating dairy cows. Twenty-four multiparous Italian Friesian dairy cows were assigned to groups receiving either 300 g/day/cow of larch sawdust or a control diet, and treatments were continued for a 20 day period. Milk parameters were unaffected by treatment. A lower plasma total protein concentration was observed and can be attributed to a decrease in globulin concentration. A lower plasma urea concentration was also detected in the larch group. Moreover, biomarkers of liver function were influenced by the treatment. Total bilirubin was lower in larch-treated animals, and cholesterol tended to be lower. In addition, an interaction between day and treatment was observed for very low density lipoprotein. The concentration of other parameters, including reactive oxygen metabolites, superoxide dismutase, glutathione peroxidase and nitrotyrosine, did not differ between treatments. The observed benefits, together with the good palatability, make larch sawdust a promising candidate for the development of beneficial feed supplements for livestock. Further studies will be useful, particularly to evaluate its efficacy in different health conditions. PMID:26526868

  12. Automatic parameter extraction techniques in IC-CAP for a compact double gate MOSFET model

    NASA Astrophysics Data System (ADS)

    Darbandy, Ghader; Gneiting, Thomas; Alius, Heidrun; Alvarado, Joaquín; Cerdeira, Antonio; Iñiguez, Benjamin

    2013-05-01

    In this paper, automatic parameter extraction techniques of Agilent's IC-CAP modeling package are presented to extract our explicit compact model parameters. This model is developed based on a surface potential model and coded in Verilog-A. The model has been adapted to Trigate MOSFETs, includes short channel effects (SCEs) and allows accurate simulations of the device characteristics. The parameter extraction routines provide an effective way to extract the model parameters. The techniques minimize the discrepancy and error between the simulation results and the available experimental data for more accurate parameter values and reliable circuit simulation. Behavior of the second derivative of the drain current is also verified and proves to be accurate and continuous through the different operating regimes. The results show good agreement with measured transistor characteristics under different conditions and through all operating regimes.
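
    A hedged sketch of the extraction principle described above: model parameters are tuned to minimize the discrepancy between simulated and measured characteristics. The square-law "drain current" below is a toy stand-in, not the surface-potential compact model or the IC-CAP routines themselves.

```python
# Toy parameter extraction by least-squares fitting of a drain-current-like model.
import numpy as np
from scipy.optimize import curve_fit

def id_model(vgs, k, vth):
    # simple saturation-region current, used only as an illustrative target
    return k * np.clip(vgs - vth, 0.0, None) ** 2

vgs = np.linspace(0.0, 1.5, 30)
measured = id_model(vgs, 2e-4, 0.45) + 1e-6 * np.random.default_rng(0).normal(size=vgs.size)

popt, _ = curve_fit(id_model, vgs, measured, p0=[1e-4, 0.3])
print("extracted k, Vth:", popt)
```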

  13. Model Sensitivity to Parameters in the Simple 1-D Land-Atmosphere Model

    NASA Astrophysics Data System (ADS)

    Liang, C.; Van Ogtrop, F.; Willem, V.

    2012-04-01

    Large-scale effects are generally more important to the regional climate than local effects, such as land cover. However, there is rarely any comparison of the two types of effects due to the complexity of the land-atmosphere system and the difficulties in controlling different climate drivers. Here we look into this matter from a model perspective. The modified simple 1-D land-atmosphere model based on D'Andrea (2006) and Baudena (2008) is used to investigate the relative sensitivity of climate variables (air temperature and precipitation) to the external forcing and local forcing. The model has two properties: firstly, it is an equilibrium model and secondly, it requires a small set of parameters. Therefore, this model is suitable for sensitivity analysis in which the effect of change in one factor can be isolated. In this study, we perform sensitivity analysis on the effects of four parameters. External forcing is represented by solar radiation (100 - 800 W m-2) and moisture influx (0 - 1 mm hr-1) to the region. Local forcing is represented by the initial leaf area index (LAI, 0 - 10) and the initial soil wetness (0.13 - 0.63). A normalized index is used to assess the sensitivity of the model outputs to the parameters. The index is defined as SI = (dmax - dmin) / (Dmean · r), where dmax and dmin represent the local extremes, Dmean is the mean value for the whole domain, and r is the proportion of the whole domain from which the local extremes are taken. Precipitation and air temperature output both responded nonlinearly to the tested parameters. Precipitation is resistant to changes when parameters are near the lower end of their value ranges until a threshold is hit. On the other hand, temperature is more sensitive to the low parameter values than the high parameter values. Hence, precipitation is suppressed and temperature remains high due to lack of vegetation cover, or low soil moisture, or negligible moisture influx from outside the region. Both precipitation and
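
    A direct, hedged transcription of the sensitivity index defined above, SI = (dmax - dmin) / (Dmean · r), under the assumption that the local extremes are taken over a sub-range of the sampled parameter domain supplied by the user.

```python
# Normalized sensitivity index over a sampled model output.
import numpy as np

def sensitivity_index(local_output, domain_output):
    local = np.asarray(local_output, float)       # outputs over the sub-range holding the local extremes
    domain = np.asarray(domain_output, float)     # outputs over the whole parameter domain
    r = local.size / domain.size                  # proportion of the domain covered by the sub-range
    return (local.max() - local.min()) / (domain.mean() * r)

domain = np.linspace(1.0, 2.0, 100) ** 2          # toy output curve over a parameter sweep
print(sensitivity_index(domain[40:60], domain))
```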

  14. Effect of whey protein concentrate and sodium chloride addition plus tumbling procedures on technological parameters, physical properties and visual appearance of sous vide cooked beef.

    PubMed

    Szerman, N; Gonzalez, C B; Sancho, A M; Grigioni, G; Carduza, F; Vaudagna, S R

    2007-07-01

    Beef muscles cooked by the sous vide system were evaluated for the effects of pre-injection tumbling, brine addition and post-injection tumbling on technological parameters, physical properties, visual appearance and tissue microstructure. The muscles were injected at 120% (over original weight) with a brine formulated to give a concentration of 3.5% whey protein concentrate and 0.7% sodium chloride on an injected raw product basis. Pre-injection tumbling did not affect most of the evaluated parameters. Brine addition reduced significantly the cooking and total weight losses. Total weight loss was 7.2% for injected muscles, and significantly higher (28.2%) for non-injected ones. Brine incorporation increased pH and reduced shear force values of cooked muscles. Extended post-injection tumbling (5 rpm, 10 h) improved brine distribution and visual appearance, and also diminished the shear force values of cooked muscles. However, this treatment increased the weight losses of post-injection tumbling and cooking-pasteurization stages. PMID:22060988

  16. A new sewage exfiltration model--parameters and calibration.

    PubMed

    Karpf, Christian; Krebs, Peter

    2011-01-01

    Exfiltration of waste water from sewer systems represents a potential danger for the soil and the aquifer. Common models used to describe the exfiltration process are based on the law of Darcy, extended by a more or less detailed consideration of the expansion of leaks, the characteristics of the soil and the colmation layer. But, due to the complexity of the exfiltration process, the calibration of these models involves significant uncertainty. In this paper, a new exfiltration approach is introduced, which implements the dynamics of the clogging process and the structural conditions near sewer leaks. The calibration is realised according to experimental studies and the analysis of groundwater infiltration into sewers. Furthermore, exfiltration rates and the sensitivity of the approach are estimated and evaluated, respectively, by Monte-Carlo simulations.
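
    A minimal sketch of the Darcy-type leak model that the paper extends: flow through a leak of area A across a colmation layer of thickness d and conductivity k, driven by the head difference dh. All numerical values below are illustrative placeholders, and the paper's clogging dynamics are not reproduced.

```python
# Darcy's law applied to a single sewer leak: Q = k * A * dh / d  [m^3/s].
def exfiltration_rate(k_c, a_leak, dh, d_c):
    """k_c: colmation-layer hydraulic conductivity [m/s]; a_leak: leak area [m^2];
    dh: head difference across the layer [m]; d_c: layer thickness [m]."""
    return k_c * a_leak * dh / d_c

print(exfiltration_rate(k_c=1e-6, a_leak=0.01, dh=0.3, d_c=0.02))  # ~1.5e-7 m^3/s
```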

  17. Rapid tsunami models and earthquake source parameters: Far-field and local applications

    USGS Publications Warehouse

    Geist, E.L.

    2005-01-01

    Rapid tsunami models have recently been developed to forecast far-field tsunami amplitudes from initial earthquake information (magnitude and hypocenter). Earthquake source parameters that directly affect tsunami generation as used in rapid tsunami models are examined, with particular attention to local versus far-field application of those models. First, the validity of the assumption that the focal mechanism and type of faulting for tsunamigenic earthquakes is similar in a given region can be evaluated by measuring the seismic consistency of past events. Second, the assumption that slip occurs uniformly over an area of rupture will most often underestimate the amplitude and leading-wave steepness of the local tsunami. Third, sometimes large-magnitude earthquakes will exhibit a high degree of spatial heterogeneity such that tsunami sources will be composed of distinct sub-events that can cause constructive and destructive interference in the wavefield away from the source. Using a stochastic source model, it is demonstrated that local tsunami amplitudes vary by as much as a factor of two or more, depending on the local bathymetry. If other earthquake source parameters such as focal depth or shear modulus are varied in addition to the slip distribution patterns, even greater uncertainty in local tsunami amplitude is expected for earthquakes of similar magnitude. Because of the short amount of time available to issue local warnings and because of the high degree of uncertainty associated with local, model-based forecasts as suggested by this study, direct wave height observations and a strong public education and preparedness program are critical for those regions near suspected tsunami sources.

  18. Binary system parameters and the hibernation model of cataclysmic variables

    SciTech Connect

    Livio, M.; Shara, M.M.

    1987-08-01

    The hibernation model, in which nova systems spend most of the time between eruptions in a state of low mass transfer rate, is examined. The binary systems more likely to undergo hibernation are determined. The predictions of the hibernation scenario are shown to be consistent with available observational data. It is shown how the hibernation scenario provides links between classical novae, dwarf novae, and novalike variables, all of which represent different stages in the cyclic evolution of the same systems. 72 references.

  19. Bifurcations, chaos, and sensitivity to parameter variations in the Sato cardiac cell model

    NASA Astrophysics Data System (ADS)

    Otte, Stefan; Berg, Sebastian; Luther, Stefan; Parlitz, Ulrich

    2016-08-01

    The dynamics of a detailed ionic cardiac cell model proposed by Sato et al. (2009) is investigated in terms of periodic and chaotic action potentials, bifurcation scenarios, and coexistence of attractors. Starting from the model's standard parameter values, bifurcation diagrams are computed to evaluate the model's robustness with respect to (small) parameter changes. While for some parameters the dynamics turns out to be practically independent of their values, even minor changes of other parameters have a very strong impact and cause qualitative changes due to bifurcations or transitions to coexisting attractors. Implications of this lack of robustness are discussed.

  20. Predicting vulnerability to sleep deprivation using diffusion model parameters.

    PubMed

    Patanaik, Amiya; Zagorodnov, Vitali; Kwoh, Chee Keong; Chee, Michael W L

    2014-10-01

    We used diffusion modelling to predict vulnerability to decline in psychomotor vigilance task (PVT) performance following a night of total sleep deprivation (SD). A total of 135 healthy young adults (69 women, age = 21.9 ± 1.7 years) participated in several within-subject cross-over design studies that incorporated the PVT. Participants were classified as vulnerable (lower tertile) or non-vulnerable