NASA Astrophysics Data System (ADS)
Sotner, R.; Kartci, A.; Jerabek, J.; Herencsar, N.; Dostal, T.; Vrba, K.
2012-12-01
Several behavioral models of current active elements for experimental purposes are introduced in this paper. These models are based on commercially available devices. They are suitable for experimental tests of current- and mixed-mode filters, oscillators, and other circuits (employing current-mode active elements) frequently used in analog signal processing, without the need for on-chip fabrication of a dedicated active element. Several methods of electronic control of intrinsic resistance in the proposed behavioral models are discussed. All predictions and theoretical assumptions are supported by simulations and experiments. This contribution offers a cheaper and more effective route to preliminary laboratory tests than expensive on-chip fabrication of special active elements.
Yin, Junming; Chen, Xi; Xing, Eric P.
2016-01-01
We consider the problem of sparse variable selection in nonparametric additive models, with the prior knowledge of the structure among the covariates to encourage those variables within a group to be selected jointly. Previous works either study the group sparsity in the parametric setting (e.g., group lasso), or address the problem in the nonparametric setting without exploiting the structural information (e.g., sparse additive models). In this paper, we present a new method, called group sparse additive models (GroupSpAM), which can handle group sparsity in additive models. We generalize the ℓ1/ℓ2 norm to Hilbert spaces as the sparsity-inducing penalty in GroupSpAM. Moreover, we derive a novel thresholding condition for identifying the functional sparsity at the group level, and propose an efficient block coordinate descent algorithm for constructing the estimate. We demonstrate by simulation that GroupSpAM substantially outperforms the competing methods in terms of support recovery and prediction accuracy in additive models, and also conduct a comparative experiment on a real breast cancer dataset.
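The group-level thresholding that GroupSpAM generalizes to Hilbert spaces reduces, in the parametric (group lasso) case, to a block soft-thresholding operation. A minimal sketch in Python, with illustrative values only, not the authors' functional version:

```python
import numpy as np

def group_soft_threshold(v, lam):
    """Proximal operator of the l1/l2 (group lasso) penalty:
    shrinks the whole coefficient group v toward zero, and zeroes
    it out entirely when its Euclidean norm falls below lam."""
    norm = np.linalg.norm(v)
    if norm <= lam:
        return np.zeros_like(v)
    return (1.0 - lam / norm) * v

# A group with small norm is removed entirely...
small = group_soft_threshold(np.array([0.1, -0.2]), lam=0.5)
# ...while a strong group is only shrunk toward zero.
big = group_soft_threshold(np.array([3.0, 4.0]), lam=0.5)
```

An entire group is zeroed out exactly when its norm falls below the penalty level, which is the group-level analogue of the functional sparsity condition derived in the paper.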
Functional Generalized Additive Models.
McLean, Mathew W; Hooker, Giles; Staicu, Ana-Maria; Scheipl, Fabian; Ruppert, David
2014-01-01
We introduce the functional generalized additive model (FGAM), a novel regression model for association studies between a scalar response and a functional predictor. We model the link-transformed mean response as the integral with respect to t of F{X(t), t} where F(·,·) is an unknown regression function and X(t) is a functional covariate. Rather than having an additive model in a finite number of principal components as in Müller and Yao (2008), our model incorporates the functional predictor directly and thus our model can be viewed as the natural functional extension of generalized additive models. We estimate F(·,·) using tensor-product B-splines with roughness penalties. A pointwise quantile transformation of the functional predictor is also considered to ensure each tensor-product B-spline has observed data on its support. The methods are evaluated using simulated data and their predictive performance is compared with other competing scalar-on-function regression alternatives. We illustrate the usefulness of our approach through an application to brain tractography, where X(t) is a signal from diffusion tensor imaging at position, t, along a tract in the brain. In one example, the response is disease-status (case or control) and in a second example, it is the score on a cognitive test. R code for performing the simulations and fitting the FGAM can be found in supplemental materials available online.
Petersen, Ashley; Witten, Daniela; Simon, Noah
2016-01-01
We consider the problem of predicting an outcome variable using p covariates that are measured on n independent observations, in a setting in which additive, flexible, and interpretable fits are desired. We propose the fused lasso additive model (FLAM), in which each additive function is estimated to be piecewise constant with a small number of adaptively-chosen knots. FLAM is the solution to a convex optimization problem, for which a simple algorithm with guaranteed convergence to a global optimum is provided. FLAM is shown to be consistent in high dimensions, and an unbiased estimator of its degrees of freedom is proposed. We evaluate the performance of FLAM in a simulation study and on two data sets. Supplemental materials are available online, and the R package flam is available on CRAN. PMID:28239246
Raghavan, Narendran; Dehoff, Ryan; Pannala, Sreekanth; Simunovic, Srdjan; Kirka, Michael; Turner, John; Carlson, Neil; Babu, Sudarsanam S.
2016-04-26
The fabrication of 3-D parts from CAD models by additive manufacturing (AM) is a disruptive technology that is transforming the metal manufacturing industry. The correlation between solidification microstructure and mechanical properties has been well understood in the casting and welding processes over the years. This paper focuses on extending these principles to additive manufacturing to understand the transient phenomena of repeated melting and solidification during the electron beam powder melting process to achieve site-specific microstructure control within a fabricated component. In this paper, we have developed a novel melt scan strategy for electron beam melting of nickel-base superalloy (Inconel 718) and also analyzed 3-D heat transfer conditions using a parallel numerical solidification code (Truchas) developed at Los Alamos National Laboratory. The spatial and temporal variations of temperature gradient (G) and growth velocity (R) at the liquid-solid interface of the melt pool were calculated as a function of electron beam parameters. By manipulating the relative number of voxels that lie in the columnar or equiaxed region, the crystallographic texture of the components can be controlled to an extent. The analysis of the parameters provided optimum processing conditions that will result in a columnar to equiaxed transition (CET) during the solidification. Furthermore, the results from the numerical simulations were validated by experimental processing and characterization, thereby proving the potential of the additive manufacturing process to achieve site-specific crystallographic texture control within a fabricated component.
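The voxel bookkeeping described above can be sketched as a classification on local (G, R) values. A common CET heuristic compares the ratio G/R to a critical value; the threshold form and every number below are illustrative assumptions, not values from the paper:

```python
import numpy as np

# Illustrative only: classify melt-pool voxels as columnar or equiaxed
# from local thermal gradient G (K/m) and growth velocity R (m/s).
# The criterion G/R > C_CRIT is a common columnar-to-equiaxed heuristic;
# C_CRIT here is an arbitrary placeholder, not a value from the paper.
C_CRIT = 1.0e7  # hypothetical critical G/R ratio, K*s/m^2

rng = np.random.default_rng(0)
G = rng.uniform(1e5, 1e7, size=1000)    # simulated thermal gradients
R = rng.uniform(1e-3, 1e-1, size=1000)  # simulated growth velocities

columnar = (G / R) > C_CRIT
equiaxed_fraction = 1.0 - columnar.mean()
```

Sweeping beam parameters shifts the (G, R) distribution and hence the columnar/equiaxed voxel balance, which is the texture-control lever the paper exploits.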
Computational Process Modeling for Additive Manufacturing
NASA Technical Reports Server (NTRS)
Bagg, Stacey; Zhang, Wei
2014-01-01
Computational process and material modeling of powder-bed additive manufacturing of IN 718. Objectives: optimize material build parameters with reduced time and cost through modeling; increase understanding of build properties; increase reliability of builds; decrease time to adoption of the process for critical hardware; potentially decrease post-build heat treatments. Approach: conduct single-track and coupon builds at various build parameters; record build-parameter information and QM Meltpool data; refine the Applied Optimization powder-bed AM process model using these data; report thermal modeling results; conduct metallography of build samples; calibrate STK models using the metallography findings; run STK models using AO thermal profiles and report STK modeling results; validate the modeling with an additional build. Findings to date: photodiode intensity measurements are highly linear with power input; melt-pool intensity is highly correlated with melt-pool size; melt-pool size and intensity increase with power. Applied Optimization will use these data to develop a powder-bed additive manufacturing process model.
Parameter Identifiability of Fundamental Pharmacodynamic Models
Janzén, David L. I.; Bergenholm, Linnéa; Jirstrand, Mats; Parkinson, Joanna; Yates, James; Evans, Neil D.; Chappell, Michael J.
2016-01-01
Issues of parameter identifiability of routinely used pharmacodynamic models are considered in this paper. The structural identifiability of 16 commonly applied pharmacodynamic model structures was analyzed analytically, using the input-output approach. Both fixed-effects versions (non-population, no between-subject variability) and mixed-effects versions (population, including between-subject variability) of each model structure were analyzed. All models were found to be structurally globally identifiable provided that either one of two particular parameters is fixed. Furthermore, an example was constructed to illustrate the importance of sufficient data quality and to show that structural identifiability is a prerequisite, but not a guarantee, for successful parameter estimation and practical parameter identifiability. This analysis was performed by generating artificial data of varying quality from a structurally identifiable model with known true parameter values, followed by re-estimation of the parameter values. In addition, to show the benefit of including structural identifiability as part of model development, a case study was performed applying an unidentifiable model to real experimental data. This case study shows how performing such an analysis prior to parameter estimation can improve the parameter estimation process and model performance. Finally, an unidentifiable model was fitted to simulated data using multiple initial parameter values, resulting in highly different estimated uncertainties. This example shows that although the standard errors of the parameter estimates often indicate a structural identifiability issue, reasonably “good” standard errors may sometimes mask unidentifiability issues. PMID:27994553
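The core structural-identifiability issue can be seen in a deliberately trivial example (not one of the 16 pharmacodynamic models analyzed in the paper): when a model depends on two parameters only through their product, any pair with the correct product fits the data perfectly until one of the two parameters is fixed.

```python
import numpy as np

# Minimal non-identifiability demonstration: in y = a*b*x only the
# product a*b is structurally identifiable, so different (a, b)
# pairs are indistinguishable from the data.
x = np.linspace(0.0, 10.0, 50)
y = 2.0 * 3.0 * x  # data generated with true product a*b = 6

def sse(a, b):
    """Sum of squared errors of the model a*b*x against the data."""
    return float(np.sum((y - a * b * x) ** 2))

fit1 = sse(2.0, 3.0)  # the generating pair fits perfectly...
fit2 = sse(1.0, 6.0)  # ...and so does any pair with a*b = 6
```

Fixing either a or b (the analogue of the paper's condition of fixing one of two particular parameters) restores a unique solution.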
Parameter extraction and transistor models
NASA Technical Reports Server (NTRS)
Rykken, Charles; Meiser, Verena; Turner, Greg; Wang, QI
1985-01-01
Using specified mathematical models of the MOSFET device, the optimal values of the model-dependent parameters were extracted from data provided by the Jet Propulsion Laboratory (JPL). Three MOSFET models, all one-dimensional, were used. One of the models took into account diffusion (as well as convection) currents. The sensitivity of the models was assessed for variations of the parameters from their optimal values. Lines of future inquiry are suggested on the basis of the behavior of the devices, the limitations of the proposed models, and the complexity of the required numerical investigations.
Photovoltaic module parameters acquisition model
NASA Astrophysics Data System (ADS)
Cibira, Gabriel; Koščová, Marcela
2014-09-01
This paper presents basic procedures for photovoltaic (PV) module parameter acquisition using MATLAB and Simulink modelling. In the first step, a theoretical MATLAB/Simulink model is set up to calculate the I-V and P-V characteristics of a PV module based on its equivalent electrical circuit. A limited I-V data string is then obtained from the examined PV module using standard measurement equipment under standard irradiation and temperature conditions and stored in a MATLAB data matrix as a reference model. Next, the theoretical model is optimized to match the reference model and to learn its basic parameter relations over the sparse data matrix. Finally, the PV module parameters can be acquired under different, realistic irradiation and temperature conditions as well as series resistance. Besides calculating the output power characteristics and efficiency of a PV module or system, the proposed model is validated by computing its statistical deviation from the reference model.
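The equivalent-circuit calculation in the first step typically rests on the implicit single-diode equation. A minimal Python sketch (all parameter values are illustrative assumptions, not from the paper) solves it with Newton iteration:

```python
import numpy as np

# Single-diode equivalent-circuit equation for a PV module:
#   I = Iph - I0*(exp((V + I*Rs)/(n*Vt)) - 1) - (V + I*Rs)/Rsh
# It is implicit in I, so we solve it by Newton's method.
Iph, I0 = 8.0, 1e-9        # photocurrent (A), saturation current (A)
Rs, Rsh = 0.2, 300.0       # series and shunt resistances (ohm)
n, Vt = 1.3, 0.0259 * 60   # ideality factor; thermal voltage x 60 cells

def module_current(V, iters=100):
    """Newton iteration for the module current at terminal voltage V."""
    I = Iph  # photocurrent is a good initial guess
    for _ in range(iters):
        e = np.exp((V + I * Rs) / (n * Vt))
        f = Iph - I0 * (e - 1.0) - (V + I * Rs) / Rsh - I
        df = -I0 * e * Rs / (n * Vt) - Rs / Rsh - 1.0
        I -= f / df
    return I

I_sc = module_current(0.0)  # short-circuit current, slightly below Iph
```

Sweeping V through this solver yields the I-V and P-V curves that the MATLAB/Simulink model produces, to be matched against the measured reference data.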
NASA Astrophysics Data System (ADS)
Rosa, Benoit; Brient, Antoine; Samper, Serge; Hascoët, Jean-Yves
2016-12-01
Mastering the surface finish of additive laser manufacturing is a real challenge and would allow functional surfaces to be obtained without finishing. Direct Metal Deposition (DMD) surfaces are composed of directional and chaotic textures that are directly linked to the process principles. The aim of this work is to obtain target surface topographies by mastering the operating process parameters. Based on experimental investigation, the influence of the operating parameters on the surface finish has been modeled. Topography parameters and multi-scale analysis have been used to characterize the DMD surfaces. This study also proposes a methodology to characterize the DMD chaotic texture through topography filtering and 3D image processing. In parallel, a new parameter is proposed: the density of particles (Dp). Finally, this study proposes a regression model relating the process parameters to the density-of-particles parameter.
Additional Investigations of Ice Shape Sensitivity to Parameter Variations
NASA Technical Reports Server (NTRS)
Miller, Dean R.; Potapczuk, Mark G.; Langhals, Tammy J.
2006-01-01
A second parameter sensitivity study was conducted at the NASA Glenn Research Center's Icing Research Tunnel (IRT) using a 36 in. chord (0.91 m) NACA-0012 airfoil. The objective of this work was to further investigate the feasibility of using ice shape feature changes to define requirements for the simulation and measurement of SLD and appendix C icing conditions. A previous study concluded that it was feasible to use changes in ice shape features (e.g., ice horn angle, ice horn thickness, and ice shape mass) to detect relatively small variations in icing spray condition parameters (LWC, MVD, and temperature). The subject of this current investigation extends the scope of this previous work, by also examining the effect of icing tunnel spray-bar parameter variations (water pressure, air pressure) on ice shape feature changes. The approach was to vary spray-bar water pressure and air pressure, and then evaluate the effects of these parameter changes on the resulting ice shapes. This paper will provide a description of the experimental method, present selected experimental results, and conclude with an evaluation of these results.
Identification of driver model parameters.
Reński, A
2001-01-01
The paper presents a driver model, which can be used in a computer simulation of a curved ride of a car. The identification of the driver parameters consisted of comparing the results of computer calculations, obtained for the driver-vehicle-environment model with different driver data sets, with test results of the double lane-change manoeuvre (Standard No. ISO/TR 3888:1975, International Organization for Standardization [ISO], 1975) and the wind gust manoeuvre. The optimisation method makes it possible to choose, for each real driver, a set of driver model parameters for which the differences between test and calculation results are smallest. The presented driver model can be used in investigating the driver-vehicle control system, which makes it possible to adapt the car construction to the psychophysical characteristics of a driver.
Parameter identification in continuum models
NASA Technical Reports Server (NTRS)
Banks, H. T.; Crowley, J. M.
1983-01-01
Approximation techniques for use in numerical schemes for estimating spatially varying coefficients in continuum models such as those for Euler-Bernoulli beams are discussed. The techniques are based on quintic spline state approximations and cubic spline parameter approximations. Both theoretical and numerical results are presented.
Computational Process Modeling for Additive Manufacturing (OSU)
NASA Technical Reports Server (NTRS)
Bagg, Stacey; Zhang, Wei
2015-01-01
Powder-Bed Additive Manufacturing (AM) through Direct Metal Laser Sintering (DMLS) or Selective Laser Melting (SLM) is being used by NASA and the aerospace industry to "print" parts that traditionally are very complex, high cost, or long schedule lead items. The process spreads a thin layer of metal powder over a build platform, then melts the powder in a series of welds in a desired shape. The next layer of powder is applied, and the process is repeated until, layer by layer, a very complex part can be built. This reduces cost and schedule by eliminating very complex tooling and processes traditionally used in aerospace component manufacturing. To use the process to print end-use items, NASA seeks to understand SLM material well enough to develop a method of qualifying parts for space flight operation. Traditionally, a new material process takes many years and high investment to generate statistical databases and experiential knowledge, but computational modeling can truncate the schedule and cost: many experiments can be run quickly in a model that would take years and a high material cost to run empirically. This project seeks to optimize material build parameters with reduced time and cost through modeling.
NASA Astrophysics Data System (ADS)
García-San-Miguel, D.; Lerma, J. L.
2013-05-01
The use of terrestrial laser scanning systems is steadily increasing in many fields of engineering, geoscience and architecture, notably for fast data acquisition, 3-D modeling and mapping. Similarly to other precision instruments, these systems provide measurements with implicit systematic errors. Systematic errors are physically corrected by manufacturers before delivery and only sporadically afterwards. The approach presented herein treats the raw observables acquired by a laser scanner with additional parameters, a set of geometric calibration parameters that model the systematic error of the instrument, to achieve the most accurate point cloud outputs, improving the eventual workflow through less filtering, better registration and better 3D modeling. This paper presents a fully automatic strategy to geometrically calibrate terrestrial laser scanning datasets. The strategy is tested with multiple scans taken by a FARO FOCUS 3D, a phase-based terrestrial laser scanner. A calibration with local parameters for the datasets is undertaken to improve the raw observables, and a weighted mathematical index is proposed to select the most significant set of additional parameters. The improvements achieved are presented, highlighting the necessity of correcting the terrestrial laser scanner before handling multiple data sets.
Parameter Estimation of Partial Differential Equation Models.
Xun, Xiaolei; Cao, Jiguo; Mallick, Bani; Carroll, Raymond J; Maity, Arnab
2013-01-01
Partial differential equation (PDE) models are commonly used to model complex dynamic systems in applied sciences such as biology and finance. The forms of these PDE models are usually proposed by experts based on their prior knowledge and understanding of the dynamic system. Parameters in PDE models often have interesting scientific interpretations, but their values are often unknown and need to be estimated from measurements of the dynamic system in the presence of measurement errors. Most PDEs used in practice have no analytic solutions, and can only be solved with numerical methods. Currently, methods for estimating PDE parameters require repeatedly solving PDEs numerically under thousands of candidate parameter values, and thus the computational load is high. In this article, we propose two methods to estimate parameters in PDE models: a parameter cascading method and a Bayesian approach. In both methods, the underlying dynamic process modeled with the PDE model is represented via basis function expansion. For the parameter cascading method, we develop two nested levels of optimization to estimate the PDE parameters. For the Bayesian method, we develop a joint model for data and the PDE, and develop a novel hierarchical model allowing us to employ Markov chain Monte Carlo (MCMC) techniques to make posterior inference. Simulation studies show that the Bayesian method and parameter cascading method are comparable, and both outperform other available methods in terms of estimation accuracy. The two methods are demonstrated by estimating parameters in a PDE model from LIDAR data.
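The computational burden the article addresses can be made concrete with a toy version of the baseline "solve the PDE repeatedly" approach: recovering the diffusivity D in the heat equation u_t = D u_xx by numerically solving the PDE for many candidate values of D. All settings below are illustrative, not the article's LIDAR model:

```python
import numpy as np

def solve_heat(D, nx=50, nt=800, L=1.0, T=0.1):
    """Explicit finite-difference solution of u_t = D*u_xx on [0, L]
    with u = 0 at both ends and a sine initial condition; returns the
    field at final time T."""
    dx, dt = L / (nx - 1), T / nt
    x = np.linspace(0.0, L, nx)
    u = np.sin(np.pi * x)
    for _ in range(nt):
        u[1:-1] += D * dt / dx**2 * (u[2:] - 2 * u[1:-1] + u[:-2])
    return u

# Synthetic noisy observations from a known true diffusivity.
rng = np.random.default_rng(1)
D_true = 0.5
data = solve_heat(D_true) + rng.normal(0.0, 1e-3, 50)

# Grid search: one full PDE solve per candidate parameter value,
# which is exactly the cost the article's methods aim to avoid.
candidates = np.linspace(0.1, 1.0, 91)
errors = [np.sum((solve_heat(D) - data) ** 2) for D in candidates]
D_hat = candidates[int(np.argmin(errors))]
```

Each candidate requires a full numerical solve; the parameter cascading and Bayesian methods replace these repeated solves with a basis-function representation of the solution.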
Moose models with vanishing S parameter
Casalbuoni, R.; De Curtis, S.; Dominici, D.
2004-09-01
In the linear moose framework, which naturally emerges in deconstruction models, we show that there is a unique solution for the vanishing of the S parameter at the lowest order in the weak interactions. We consider an effective gauge theory based on K SU(2) gauge groups, K+1 chiral fields, and electroweak groups SU(2)_L and U(1)_Y at the ends of the chain of the moose. S vanishes when a link in the moose chain is cut. As a consequence one has to introduce a dynamical nonlocal field connecting the two ends of the moose. Then the model acquires an additional custodial symmetry which protects this result. We examine also the possibility of a strong suppression of S through an exponential behavior of the link couplings, as suggested by the Randall-Sundrum metric.
Understanding Parameter Invariance in Unidimensional IRT Models
ERIC Educational Resources Information Center
Rupp, Andre A.; Zumbo, Bruno D.
2006-01-01
One theoretical feature that makes item response theory (IRT) models those of choice for many psychometric data analysts is parameter invariance, the equality of item and examinee parameters from different examinee populations or measurement conditions. In this article, using the well-known fact that item and examinee parameters are identical only…
Shahab, Yosif A; Khalil, Rabah A
2006-10-01
A new approach to NMR chemical shift additivity parameters using the simultaneous linear equation method has been introduced. Three general nitrogen-15 NMR chemical shift additivity parameter sets with physical significance have been derived for aliphatic amines in methanol and cyclohexane and for their hydrochlorides in methanol. A characteristic feature of these additivity parameters is that each individual equation can be applied to both open-chain and rigid systems. The factors that influence the (15)N chemical shift of these substances have been determined. A new method for evaluating conformational equilibria at nitrogen in these compounds using the derived additivity parameters has been developed, and conformational analyses of these substances have been worked out. In general, the results indicate that there are four factors affecting the (15)N chemical shift of aliphatic amines: the paramagnetic term (p-character), lone pair-proton interactions, proton-proton interactions, and the symmetry of alkyl substituents together with molecular association.
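The simultaneous-linear-equation idea amounts to solving an overdetermined linear system whose unknowns are the additivity increments. A schematic sketch with made-up shift values and a made-up substituent parameterization, not the measured 15N data:

```python
import numpy as np

# Each compound's shift is modeled as a base value plus substituent
# counts times additivity increments; stacking compounds gives a
# linear system solved by least squares.
# Design-matrix columns: [1, n_alpha, n_beta] (hypothetical scheme).
A = np.array([
    [1, 1, 0],
    [1, 2, 0],
    [1, 1, 1],
    [1, 2, 1],
    [1, 3, 0],
])
shifts = np.array([5.0, 13.0, 8.0, 16.0, 21.0])  # hypothetical ppm values

params, *_ = np.linalg.lstsq(A, shifts, rcond=None)
base, alpha_inc, beta_inc = params
```

With more compounds than unknowns, the least-squares solution gives increments that can be reapplied, equation by equation, to both open-chain and rigid systems.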
Additional field verification of convective scaling for the lateral dispersion parameter
Sakiyama, S.K.; Davis, P.A.
1988-07-01
The results of a series of diffusion trials over the heterogeneous surface of the Canadian Precambrian Shield provide additional support for the convective scaling of the lateral dispersion parameter. The data indicate that under convective conditions, the lateral dispersion parameter can be scaled with the convective velocity scale and the mixing depth.
Screening parameters for the relativistic hydrogenic model
NASA Astrophysics Data System (ADS)
Lanzini, Fernando; Di Rocco, Héctor O.
2015-12-01
We present a Relativistic Screened Hydrogenic Model (RSHM) in which the screening parameters depend on the variables (n, l, j) and the parameters (Z, N). These screening parameters were derived theoretically in a neat form, with no use of experimental values or numerical values from self-consistent codes. The results of the model compare favorably with those obtained by using more sophisticated approaches. For the interested reader, a copy of our code can be requested from the corresponding author.
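One common way to build a screened relativistic hydrogenic level, shown here only as a hedged sketch, is to evaluate the point-nucleus Dirac energy at an effective charge Z_eff = Z - sigma. The paper's contribution is the theoretical derivation of the (n, l, j, Z, N)-dependent screenings themselves, which this placeholder does not reproduce:

```python
import math

ALPHA = 1.0 / 137.035999  # fine-structure constant
MC2_EV = 510998.95        # electron rest energy in eV

def dirac_energy_ev(z_eff, n, j):
    """Point-nucleus Dirac binding energy (negative, in eV) for
    principal quantum number n and total angular momentum j,
    evaluated at an effective (screened) nuclear charge z_eff."""
    kappa_abs = j + 0.5
    gamma = math.sqrt(kappa_abs**2 - (z_eff * ALPHA) ** 2)
    ratio = z_eff * ALPHA / (n - kappa_abs + gamma)
    return MC2_EV * (1.0 / math.sqrt(1.0 + ratio**2) - 1.0)

# Sanity check with no screening: the hydrogen 1s level (n=1, j=1/2)
# should come out near -13.6 eV.
E_1s = dirac_energy_ev(1.0, 1, 0.5)
```

In a screened model one would replace the argument 1.0 by Z - sigma(n, l, j, Z, N) with the screenings the paper derives.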
Network Reconstruction Using Nonparametric Additive ODE Models
Henderson, James; Michailidis, George
2014-01-01
Network representations of biological systems are widespread and reconstructing unknown networks from data is a focal problem for computational biologists. For example, the series of biochemical reactions in a metabolic pathway can be represented as a network, with nodes corresponding to metabolites and edges linking reactants to products. In a different context, regulatory relationships among genes are commonly represented as directed networks with edges pointing from influential genes to their targets. Reconstructing such networks from data is a challenging problem receiving much attention in the literature. There is a particular need for approaches tailored to time-series data and not reliant on direct intervention experiments, as the former are often more readily available. In this paper, we introduce an approach to reconstructing directed networks based on dynamic systems models. Our approach generalizes commonly used ODE models based on linear or nonlinear dynamics by extending the functional class for the functions involved from parametric to nonparametric models. Concomitantly, we limit the complexity by imposing an additive structure on the estimated slope functions. Thus the submodel associated with each node is a sum of univariate functions. These univariate component functions form the basis for a novel coupling metric that we define in order to quantify the strength of proposed relationships and hence rank potential edges. We show the utility of the method by reconstructing networks using simulated data from computational models for the glycolytic pathway of Lactococcus lactis and a gene network regulating the pluripotency of mouse embryonic stem cells. For purposes of comparison, we also assess reconstruction performance using gene networks from the DREAM challenges. We compare our method to those that similarly rely on dynamic systems models and use the results to attempt to disentangle the distinct roles of linearity, sparsity, and derivative
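The edge-scoring idea can be sketched on a toy two-driver system: fit each node's derivative as a sum of univariate functions of the candidate parents (a polynomial basis stands in here for the paper's nonparametric smoothers) and rank a candidate edge by the size of its fitted component:

```python
import numpy as np

# Toy system: dx2/dt = -x1, so there is a real edge 1 -> 2,
# while x3 is pure noise and should receive a small score.
rng = np.random.default_rng(2)
t = np.linspace(0.0, 10.0, 500)
x1 = np.sin(t)
x2 = np.cos(t)
x3 = rng.normal(0.0, 1.0, t.size)

dx2 = np.gradient(x2, t)  # numerical derivative of the target node

def basis(v):
    """Univariate cubic polynomial basis (a stand-in for splines)."""
    return np.column_stack([v, v**2, v**3])

# Additive model: dx2/dt ~ f1(x1) + f3(x3), fitted jointly.
X = np.hstack([basis(x1), basis(x3)])
coef, *_ = np.linalg.lstsq(X, dx2, rcond=None)

# Score each candidate edge by the norm of its fitted component.
score_1_to_2 = np.linalg.norm(X[:, :3] @ coef[:3])
score_3_to_2 = np.linalg.norm(X[:, 3:] @ coef[3:])
```

Ranking all candidate edges by such component norms is the spirit of the paper's coupling metric, with penalized splines and model selection in place of this raw least-squares fit.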
CREATION OF THE MODEL ADDITIONAL PROTOCOL
Houck, F.; Rosenthal, M.; Wulf, N.
2010-05-25
In 1991, the international nuclear nonproliferation community was dismayed to discover that the implementation of safeguards by the International Atomic Energy Agency (IAEA) under its NPT INFCIRC/153 safeguards agreement with Iraq had failed to detect Iraq's nuclear weapon program. It was now clear that ensuring that states were fulfilling their obligations under the NPT would require not just detecting diversion but also the ability to detect undeclared materials and activities. To achieve this, the IAEA initiated what would turn out to be a five-year effort to reappraise the NPT safeguards system. The effort engaged the IAEA and its Member States and led to agreement in 1997 on a new safeguards agreement, the Model Protocol Additional to the Agreement(s) between States and the International Atomic Energy Agency for the Application of Safeguards. The Model Protocol makes explicit that one IAEA goal is to provide assurance of the absence of undeclared nuclear material and activities. The Model Protocol requires an expanded declaration that identifies a State's nuclear potential, empowers the IAEA to raise questions about the correctness and completeness of the State's declaration, and, if needed, allows IAEA access to locations. The information required and the locations available for access are much broader than those provided for under INFCIRC/153. The negotiation was completed in quite a short time because it started with a relatively complete draft of an agreement prepared by the IAEA Secretariat. This paper describes how the Model Protocol was constructed and reviews key decisions that were made both during the five-year period and in the actual negotiation.
Rheological parameters of dough with inulin addition and its effect on bread quality
NASA Astrophysics Data System (ADS)
Bojnanska, T.; Tokar, M.; Vollmannova, A.
2015-04-01
The rheological properties of enriched flour prepared with an addition of inulin were studied. The addition of inulin changed the rheological parameters of the recorded curve. Additions of 10% and more significantly extended the development time, and the farinogram showed two consistency peaks, which is a non-standard shape. With increasing inulin addition, resistance to deformation grows and the dough becomes difficult to process; above 15% addition the dough becomes short and unsuitable for making bread. Bread volume, the most important parameter, decreased significantly with inulin addition. Our results suggest that a level of 5% inulin produces a functional bread of high sensory acceptance and a level of 10% inulin produces a bread of satisfactory sensory acceptance. Bread with more than 10% inulin was unsatisfactory.
On Interpreting the Model Parameters for the Three Parameter Logistic Model
ERIC Educational Resources Information Center
Maris, Gunter; Bechger, Timo
2009-01-01
This paper addresses two problems relating to the interpretability of the model parameters in the three-parameter logistic model. First, it is shown that if the values of the discrimination parameters are all the same, the remaining parameters are nonidentifiable in a nontrivial way that involves not only ability and item difficulty, but also the…
Electroacoustics modeling of piezoelectric welders for ultrasonic additive manufacturing processes
NASA Astrophysics Data System (ADS)
Hehr, Adam; Dapino, Marcelo J.
2016-04-01
Ultrasonic additive manufacturing (UAM) is a recent 3D metal printing technology which utilizes ultrasonic vibrations from high power piezoelectric transducers to additively weld similar and dissimilar metal foils. CNC machining is used intermittently with welding to create internal channels, to embed temperature-sensitive components, sensors, and materials, and to net shape parts. Structural dynamics of the welder and work piece influence the performance of the welder and part quality. To understand the impact of structural dynamics on UAM, a linear time-invariant (LTI) model is used to relate the system inputs of shear force and electric current to the system outputs of welder velocity and voltage. Frequency response measurements are combined with in-situ operating measurements of the welder to identify model parameters and to verify model assumptions. The proposed LTI model can enhance process consistency and performance, and guide the development of improved quality monitoring and control strategies.
Variable deceleration parameter and dark energy models
NASA Astrophysics Data System (ADS)
Bishi, Binaya K.
2016-03-01
This paper deals with the Bianchi type-III dark energy model and the equation of state parameter in a first class of f(R,T) gravity. Here, R and T represent the Ricci scalar and the trace of the energy momentum tensor, respectively. The exact solutions of the modified field equations are obtained by using (i) a linear relation between the expansion scalar and the shear scalar, (ii) a linear relation between the state parameter and the skewness parameter, and (iii) a variable deceleration parameter. To obtain physically plausible cosmological models, the variable deceleration parameter with a suitable substitution leads to a scale factor of the form a(t) = [sinh(αt)]^{1/n}, where α and n > 0 are arbitrary constants. It is observed that our models are accelerating for 0 < n < 1, while for n > 1 they exhibit a transition phase from deceleration to acceleration. Further, we have discussed the physical properties of the models.
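The quoted behaviour follows directly from the scale factor: for a(t) = [sinh(αt)]^{1/n} the deceleration parameter q = -a·a''/(a')^2 works out analytically to q(t) = n/cosh^2(αt) - 1, so q tends to n - 1 at early times and to -1 at late times, giving acceleration throughout for 0 < n < 1 and a deceleration-to-acceleration transition for n > 1. A quick numerical cross-check (α and n chosen arbitrarily):

```python
import numpy as np

alpha, n = 0.8, 2.0  # n > 1: expect a deceleration-to-acceleration transition
t = np.linspace(0.5, 10.0, 2000)
a = np.sinh(alpha * t) ** (1.0 / n)

# Deceleration parameter from finite differences of the scale factor...
da = np.gradient(a, t)
dda = np.gradient(da, t)
q_numeric = -a * dda / da**2

# ...compared against the closed form q(t) = n/cosh^2(alpha*t) - 1.
q_closed = n / np.cosh(alpha * t) ** 2 - 1.0
```

The closed form starts positive (deceleration) for n = 2 and approaches -1 (de Sitter-like acceleration) at late times, matching the abstract's claim.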
Generalised additive modelling approach to the fermentation process of glutamate.
Liu, Chun-Bo; Li, Yun; Pan, Feng; Shi, Zhong-Ping
2011-03-01
In this work, generalised additive models (GAMs) were used for the first time to model the fermentation of glutamate (Glu). It was found that three fermentation parameters - fermentation time (T), dissolved oxygen (DO) and oxygen uptake rate (OUR) - could capture 97% of the variance in Glu production through a GAM model calibrated using online data from 15 fermentation experiments. This model was applied to investigate the individual and combined effects of T, DO and OUR on the production of Glu. Conditions to optimize the fermentation process were proposed based on a simulation study with this model. Results suggested that Glu production can reach a high level by controlling DO and OUR at the proposed optimal levels during the fermentation process. The GAM approach therefore provides an alternative way to model and optimize the fermentation process of Glu.
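The additive structure behind a GAM - the response modeled as a sum of smooth univariate functions of each covariate - can be sketched as follows. This is an illustrative reconstruction on synthetic data, with a polynomial basis standing in for spline smoothers; the covariates T and DO and the response are hypothetical, not the authors' fermentation data.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
T = rng.uniform(0, 30, n)      # hypothetical fermentation time (h)
DO = rng.uniform(0, 1, n)      # hypothetical dissolved oxygen fraction

# Synthetic "glutamate" response: a sum of smooth univariate effects plus noise.
y = 0.5 * np.sqrt(T) + 2.0 * DO * (1 - DO) + rng.normal(0, 0.05, n)

def basis(x, degree=3):
    """Simple polynomial basis for one covariate (a stand-in for splines)."""
    x = (x - x.mean()) / x.std()
    return np.column_stack([x ** d for d in range(1, degree + 1)])

# Additive design: intercept plus separate smooth terms for each covariate.
X = np.column_stack([np.ones(n), basis(T), basis(DO)])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ coef
r2 = 1 - resid.var() / y.var()
print(f"R^2 = {r2:.3f}")
```

Because each covariate enters through its own one-dimensional smooth, the fitted terms can be plotted and interpreted individually, which is what makes the individual-effect analysis in the abstract possible.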
Exploiting intrinsic fluctuations to identify model parameters.
Zimmer, Christoph; Sahle, Sven; Pahle, Jürgen
2015-04-01
Parameterisation of kinetic models plays a central role in computational systems biology. Besides the lack of experimental data of sufficiently high quality, some of the biggest challenges here are identifiability issues. Model parameters can be structurally non-identifiable because of functional relationships. Noise in measured data is usually considered a nuisance for parameter estimation. However, it turns out that intrinsic fluctuations in particle numbers can make parameters identifiable that were previously non-identifiable. The authors present a method to identify model parameters that are structurally non-identifiable in a deterministic framework. The method takes time course recordings of biochemical systems in steady state or transient state as input. Often a functional relationship between parameters presents itself as a one-dimensional manifold in parameter space containing parameter sets of optimal goodness. Although the system's behaviour cannot be distinguished on this manifold in a deterministic framework, it might be distinguishable in a stochastic modelling framework. Their method exploits this by using an objective function that includes a measure of fluctuations in particle numbers. They show on three example models, immigration-death, gene expression and Epo-EpoReceptor interaction, that this resolves the non-identifiability even in the case of measurement noise with known amplitude. The method is applied to partially observed recordings of biochemical systems with measurement noise. It is simple to implement and usually very fast to compute. The optimisation can be realised in a classical or Bayesian fashion.
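The immigration-death model mentioned above illustrates the idea: the deterministic steady state fixes only the ratio of the immigration rate k to the death rate g, while the timescale of the intrinsic fluctuations carries independent information about g. The sketch below (assumed rates; an autocorrelation-based estimate rather than the authors' objective function) simulates the process exactly with the Gillespie algorithm:

```python
import random
import math

random.seed(1)

def gillespie_immigration_death(k, g, t_end, dt_record):
    """Exact stochastic simulation: immigration at rate k, death at rate g*n."""
    t, n = 0.0, int(k / g)            # start near the steady-state mean k/g
    states, next_rec = [], 0.0
    while t < t_end:
        total = k + g * n
        t += random.expovariate(total)          # time to next reaction
        while next_rec <= t and next_rec < t_end:
            states.append(n)                    # state held since the last jump
            next_rec += dt_record
        if random.random() < k / total:
            n += 1                              # immigration event
        else:
            n -= 1                              # death event
    return states

k, g = 50.0, 2.0                      # hypothetical rates; mean should be k/g = 25
x = gillespie_immigration_death(k, g, t_end=2000.0, dt_record=0.1)
mean = sum(x) / len(x)

# The lag-1 autocorrelation decays as exp(-g*dt): it carries information about g
# that the deterministic steady state (which only fixes k/g) cannot provide.
dt = 0.1
xm = [v - mean for v in x]
ac1 = sum(a * b for a, b in zip(xm, xm[1:])) / sum(a * a for a in xm)
g_est = -math.log(ac1) / dt
print(f"mean ~ {mean:.1f} (k/g = {k/g}), g estimated from fluctuations ~ {g_est:.2f}")
```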
Modeling pattern in collections of parameters
Link, W.A.
1999-01-01
Wildlife management is increasingly guided by analyses of large and complex datasets. The description of such datasets often requires a large number of parameters, among which certain patterns might be discernible. For example, one may consider a long-term study producing estimates of annual survival rates; of interest is whether these rates have declined through time. Several statistical methods exist for examining pattern in collections of parameters. Here, I argue for the superiority of 'random effects models', in which parameters are regarded as random variables with distributions governed by 'hyperparameters' describing the patterns of interest. Unfortunately, implementation of random effects models is sometimes difficult. Ultrastructural models, in which the postulated pattern is built into the parameter structure of the original data analysis, are approximations to random effects models. However, this approximation is not completely satisfactory: failure to account for natural variation among parameters can lead to overstatement of the evidence for pattern among parameters. I describe quasi-likelihood methods that can be used to improve the approximation of random effects models by ultrastructural models.
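A minimal sketch of the random-effects idea, on synthetic data with method-of-moments hyperparameter estimates (not the quasi-likelihood methods of the paper): annual survival estimates are shrunk toward a hyperparameter mean, with noisier years shrunk more.

```python
import random

random.seed(42)

# Hypothetical annual survival estimates with known sampling variances v_i.
true_mean, true_tau2 = 0.6, 0.004          # hyperparameters: mean, between-year variance
years = 20
truth = [random.gauss(true_mean, true_tau2 ** 0.5) for _ in range(years)]
v = [0.002 + 0.004 * random.random() for _ in range(years)]   # sampling variances
est = [random.gauss(s, vi ** 0.5) for s, vi in zip(truth, v)]

# Method-of-moments hyperparameter estimates: subtract the average sampling
# variance from the observed spread to get the between-year variance.
m = sum(est) / years
s2 = sum((e - m) ** 2 for e in est) / (years - 1)
tau2 = max(s2 - sum(v) / years, 0.0)

# Shrink each annual estimate toward the overall mean; noisier years shrink more.
shrunk = [(tau2 * e + vi * m) / (tau2 + vi) for e, vi in zip(est, v)]

err_raw = sum((e - t) ** 2 for e, t in zip(est, truth)) / years
err_shr = sum((e - t) ** 2 for e, t in zip(shrunk, truth)) / years
print(f"MSE raw = {err_raw:.5f}, MSE shrunk = {err_shr:.5f}")
```

Ignoring the between-year variation (as an ultrastructural approximation would) corresponds to taking tau2 = 0, which collapses every year onto the fitted pattern and overstates the evidence for it.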
Delineating parameter unidentifiabilities in complex models
NASA Astrophysics Data System (ADS)
Raman, Dhruva V.; Anderson, James; Papachristodoulou, Antonis
2017-03-01
Scientists use mathematical modeling as a tool for understanding and predicting the properties of complex physical systems. In highly parametrized models there often exist relationships between parameters over which model predictions are identical, or nearly identical. These are known as structural or practical unidentifiabilities, respectively. They are hard to diagnose and make reliable parameter estimation from data impossible. They furthermore imply the existence of an underlying model simplification. We describe a scalable method for detecting unidentifiabilities, as well as the functional relations defining them, for generic models. This allows for model simplification, and for appreciating which parameters (or functions thereof) cannot be estimated from data. Our algorithm can identify features such as redundant mechanisms and fast time-scale subsystems, as well as the regimes in parameter space over which such approximations are valid. We base our algorithm on a quantification of regional parametric sensitivity that we call 'multiscale sloppiness'. Traditionally, the link between parametric sensitivity and the conditioning of the parameter estimation problem is made locally, through the Fisher information matrix. This is valid in the regime of infinitesimal measurement uncertainty. We demonstrate the duality between multiscale sloppiness and the geometry of the confidence regions surrounding parameter estimates made where measurement uncertainty is non-negligible. Further theoretical relationships are provided linking multiscale sloppiness to the likelihood-ratio test. From this, we show that a local sensitivity analysis (as typically done) is insufficient for determining the reliability of parameter estimation, even for simple (non)linear systems. Our algorithm provides a tractable alternative. We finally apply our methods to a large-scale, benchmark systems biology model of nuclear factor (NF)-κB, uncovering unidentifiabilities.
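The local, Fisher-information version of this analysis - which the authors argue is insufficient in general, but which illustrates how a structurally unidentifiable direction shows up - can be sketched on a toy exponential-decay model in which only the sum of two rate constants is constrained by the data:

```python
import numpy as np

# Toy model with a structural unidentifiability: y(t; k1, k2) = exp(-(k1 + k2) t),
# so only the combination k1 + k2 affects the output (hypothetical example).
def model(theta, t):
    k1, k2 = theta
    return np.exp(-(k1 + k2) * t)

def fisher_information(theta, t, eps=1e-6):
    """FIM via finite-difference output sensitivities, unit measurement noise."""
    J = np.empty((len(t), len(theta)))
    for i in range(len(theta)):
        dp = np.array(theta, float); dm = dp.copy()
        dp[i] += eps; dm[i] -= eps
        J[:, i] = (model(dp, t) - model(dm, t)) / (2 * eps)
    return J.T @ J

t = np.linspace(0, 5, 50)
F = fisher_information([0.4, 0.6], t)
eigvals, eigvecs = np.linalg.eigh(F)
print("FIM eigenvalues:", eigvals)
# The eigenvector of the (near-)zero eigenvalue points along the unidentifiable
# direction k1 - k2; the orthogonal combination k1 + k2 is well determined.
```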
Systematic parameter inference in stochastic mesoscopic modeling
NASA Astrophysics Data System (ADS)
Lei, Huan; Yang, Xiu; Li, Zhen; Karniadakis, George Em
2017-02-01
We propose a method to efficiently determine the optimal coarse-grained force field in mesoscopic stochastic simulations of Newtonian fluid and polymer melt systems modeled by dissipative particle dynamics (DPD) and energy conserving dissipative particle dynamics (eDPD). The response surfaces of various target properties (viscosity, diffusivity, pressure, etc.) with respect to model parameters are constructed based on the generalized polynomial chaos (gPC) expansion using simulation results on sampling points (e.g., individual parameter sets). To alleviate the computational cost of evaluating the target properties, we employ the compressive sensing method to compute the coefficients of the dominant gPC terms, given the prior knowledge that the coefficients are "sparse". The proposed method shows accuracy comparable to the standard probabilistic collocation method (PCM) while imposing a much weaker restriction on the number of simulation samples, especially for systems with high-dimensional parameter spaces. Full access to the response surfaces within the confidence range enables us to infer the optimal force parameters given desirable values of the target properties at the macroscopic scale. Moreover, it enables us to investigate the intrinsic relationships between the model parameters, identify possible degeneracies in the parameter space, and optimize the model by eliminating redundancies. The proposed method provides an efficient alternative approach for constructing mesoscopic models by inferring model parameters to recover target properties of the physical system (e.g., from experimental measurements), where the force field parameters and formulation cannot be derived from the microscopic level in a straightforward way.
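The compressive-sensing step - recovering a sparse coefficient vector from fewer samples than basis terms - can be sketched with iterative soft-thresholding (ISTA) on a synthetic l1-regularized least squares problem; the dimensions, sparsity pattern, and regularization weight below are illustrative, not those of the paper:

```python
import numpy as np

rng = np.random.default_rng(3)

# Sparse gPC-style surrogate: only a few of many basis coefficients are nonzero,
# recovered from fewer samples than coefficients.
n_samples, n_basis = 40, 100
A = rng.normal(size=(n_samples, n_basis)) / np.sqrt(n_samples)
c_true = np.zeros(n_basis)
c_true[[3, 17, 58]] = [1.5, -2.0, 0.8]
y = A @ c_true

def ista(A, y, lam=0.01, iters=5000):
    """Iterative soft-thresholding for min_c 0.5*||A c - y||^2 + lam*||c||_1."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    c = np.zeros(A.shape[1])
    for _ in range(iters):
        g = c - (A.T @ (A @ c - y)) / L    # gradient step on the quadratic part
        c = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # soft threshold
    return c

c_hat = ista(A, y)
support = np.flatnonzero(np.abs(c_hat) > 0.1)
print("recovered support:", support)
```

With 40 samples and 100 basis terms an ordinary least squares fit would be underdetermined; the l1 penalty selects the sparse solution.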
Models and parameters for environmental radiological assessments
Miller, C W
1984-01-01
This book presents a unified compilation of models and parameters appropriate for assessing the impact of radioactive discharges to the environment. Models examined include those developed for the prediction of atmospheric and hydrologic transport and deposition, for terrestrial and aquatic food-chain bioaccumulation, and for internal and external dosimetry. Chapters have been entered separately into the data base. (ACR)
Estimation of Model Parameters for Steerable Needles
Park, Wooram; Reed, Kyle B.; Okamura, Allison M.; Chirikjian, Gregory S.
2010-01-01
Flexible needles with bevel tips are being developed as useful tools for minimally invasive surgery and percutaneous therapy. When such a needle is inserted into soft tissue, it bends due to the asymmetric geometry of the bevel tip. This insertion with bending is not completely repeatable. We characterize the deviations in needle tip pose (position and orientation) by performing repeated needle insertions into artificial tissue. The base of the needle is pushed at a constant speed without rotating, and the covariance of the distribution of the needle tip pose is computed from experimental data. We develop closed-form equations to describe how the covariance varies with different model parameters, and we estimate the model parameters by matching the closed-form covariance to the experimentally obtained covariance. In this work, we use a needle model modified from a previously developed model with two noise parameters. The modified needle model uses three noise parameters to better capture the stochastic behavior of needle insertion, and reduces the covariance error from 26.1% to 6.55%. PMID:21643451
Analysis of Modeling Parameters on Threaded Screws.
Vigil, Miquela S.; Brake, Matthew Robert; Vangoethem, Douglas
2015-06-01
Assembled mechanical systems often contain a large number of bolted connections. These bolted connections (joints) are integral aspects of the load path for structural dynamics and, consequently, are paramount for calculating a structure's stiffness and energy dissipation properties. However, analysts have not found the optimal method to appropriately model these bolted joints, and the complexity of the screw geometry causes issues when generating a mesh of the model. This paper explores different approaches to model a screw-substrate connection. Model parameters such as mesh continuity, node alignment, wedge angles, and thread-to-body element size ratios are examined. The results of this study will give analysts a better understanding of the influence of these parameters and will aid in finding the optimal method to model bolted connections.
Parameter identification and modeling of longitudinal aerodynamics
NASA Technical Reports Server (NTRS)
Aksteter, J. W.; Parks, E. K.; Bach, R. E., Jr.
1995-01-01
Using a comprehensive flight test database and a parameter identification software program produced at NASA Ames Research Center, a math model of the longitudinal aerodynamics of the Harrier aircraft was formulated. The identification program employed the equation error method using multiple linear regression to estimate the nonlinear parameters. The formulated math model structure adhered closely to aerodynamic and stability/control theory, particularly with regard to compressibility and dynamic manoeuvring. Validation was accomplished by using a three degree-of-freedom nonlinear flight simulator with pilot inputs from flight test data. The simulation models agreed quite well with the measured states. It is important to note that the flight test data used for the validation of the model was not used in the model identification.
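The equation-error method reduces, per output equation, to multiple linear regression of a measured aerodynamic coefficient on the flight states. A minimal sketch with synthetic data and hypothetical pitching-moment derivatives (not Harrier values):

```python
import numpy as np

rng = np.random.default_rng(7)

# Equation-error sketch: regress a "measured" pitching-moment coefficient Cm on
# flight states to recover stability and control derivatives.
n = 500
alpha = rng.uniform(-0.1, 0.3, n)        # angle of attack (rad)
q = rng.normal(0, 0.05, n)               # normalized pitch rate
de = rng.uniform(-0.2, 0.2, n)           # elevator deflection (rad)

# Hypothetical "true" derivatives used to generate the data.
Cm0, Cma, Cmq, Cmde = 0.02, -0.8, -12.0, -1.1
Cm = Cm0 + Cma * alpha + Cmq * q + Cmde * de + rng.normal(0, 0.002, n)

# Multiple linear regression: each column of X multiplies one derivative.
X = np.column_stack([np.ones(n), alpha, q, de])
theta, *_ = np.linalg.lstsq(X, Cm, rcond=None)
print("estimated [Cm0, Cma, Cmq, Cmde]:", np.round(theta, 3))
```

As in the abstract, validation would use a separate data set not seen by the regression.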
Parameter Estimation of Spacecraft Fuel Slosh Model
NASA Technical Reports Server (NTRS)
Gangadharan, Sathya; Sudermann, James; Marlowe, Andrea; Njengam Charles
2004-01-01
Fuel slosh in the upper stages of a spinning spacecraft during launch has been a long-standing concern for the success of a space mission. Energy loss through the movement of the liquid fuel in the fuel tank affects the gyroscopic stability of the spacecraft and leads to nutation (wobble), which can cause devastating control issues. The rate at which nutation develops, defined by the Nutation Time Constant (NTC), can be tedious to calculate and largely inaccurate if done during the early stages of spacecraft design. Pure analytical means of predicting the influence of onboard liquids have generally failed. A strong need exists to identify and model the conditions of resonance between nutation motion and liquid modes and to understand the general characteristics of the liquid motion that causes the problem in spinning spacecraft. A 3-D computerized model of the fuel slosh that accounts for any resonant modes found in the experimental testing will allow for increased accuracy in the overall modeling process. Development of a more accurate model of the fuel slosh currently lies in a more generalized 3-D computerized model incorporating masses, springs and dampers. Parameters describing the model include the inertia tensor of the fuel, spring constants, and damper coefficients. Refining these parameters and understanding their effects allows for a more accurate simulation of fuel slosh. The current research focuses on developing models of different complexity and estimating the model parameters that will ultimately provide a more realistic prediction of the Nutation Time Constant obtained through simulation.
NASA Technical Reports Server (NTRS)
Smalheer, C. V.
1973-01-01
The chemistry of lubricant additives is discussed to show what the additives are chemically and what functions they perform in the lubrication of various kinds of equipment. Current theories regarding the mode of action of lubricant additives are presented. The additive groups discussed include the following: (1) detergents and dispersants, (2) corrosion inhibitors, (3) antioxidants, (4) viscosity index improvers, (5) pour point depressants, and (6) antifouling agents.
Uncertainty in dual permeability model parameters for structured soils
NASA Astrophysics Data System (ADS)
Arora, B.; Mohanty, B. P.; McGuire, J. T.
2012-01-01
Successful application of dual permeability models (DPM) to predict contaminant transport is contingent upon measured or inversely estimated soil hydraulic and solute transport parameters. The difficulty in unique identification of parameters for the additional macropore and matrix-macropore interface regions, and knowledge about the requisite experimental data for DPM, has not been resolved to date. Therefore, this study quantifies uncertainty in dual permeability model parameters of experimental soil columns with different macropore distributions (single macropore, and low- and high-density multiple macropores). Uncertainty evaluation is conducted using adaptive Markov chain Monte Carlo (AMCMC) and conventional Metropolis-Hastings (MH) algorithms while assuming 10 out of 17 parameters to be uncertain or random. Results indicate that AMCMC resolves parameter correlations and exhibits fast convergence for all DPM parameters, while MH displays large posterior correlations for various parameters. This study demonstrates that the choice of parameter sampling algorithm is paramount in obtaining unique DPM parameters when information on covariance structure is lacking; otherwise, additional information on parameter correlations must be supplied to resolve the problem of equifinality of DPM parameters. This study also highlights the placement and significance of the matrix-macropore interface in flow experiments on soil columns with different macropore densities. Histograms for certain soil hydraulic parameters display tri-modal characteristics, implying that macropores are drained first, followed by the interface region and then by pores of the matrix domain in drainage experiments. Results indicate that the hydraulic properties and behavior of the matrix-macropore interface are a function not only of the saturated hydraulic conductivity of the macropore-matrix interface (Ksa) and macropore tortuosity (lf) but also of other parameters of the matrix and macropore domains.
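The baseline Metropolis-Hastings step used in such studies can be sketched as follows, with a Gaussian stand-in posterior for a single hypothetical soil parameter; an adaptive (AMCMC) variant would additionally tune the proposal scale from the accumulated chain:

```python
import random
import math

random.seed(0)

# Stand-in log-posterior for one hypothetical parameter (e.g., a log-conductivity):
# a N(1.2, 0.3^2) target, up to an additive constant.
def log_post(theta):
    return -0.5 * ((theta - 1.2) / 0.3) ** 2

def metropolis_hastings(n_iter=20000, step=0.5, theta0=0.0):
    theta, lp = theta0, log_post(theta0)
    chain = []
    for _ in range(n_iter):
        prop = theta + random.gauss(0, step)    # symmetric random-walk proposal
        lp_prop = log_post(prop)
        if math.log(random.random()) < lp_prop - lp:
            theta, lp = prop, lp_prop           # accept; otherwise keep current
        chain.append(theta)
    return chain

chain = metropolis_hastings()
burn = chain[5000:]                             # discard burn-in
mean = sum(burn) / len(burn)
sd = (sum((t - mean) ** 2 for t in burn) / len(burn)) ** 0.5
print(f"posterior mean ~ {mean:.2f}, sd ~ {sd:.2f}")
```

With many correlated parameters, a fixed proposal mixes poorly, which is the practical motivation for the adaptive scheme favored in the abstract.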
Field measurements and neural network modeling of water quality parameters
NASA Astrophysics Data System (ADS)
Qishlaqi, Afishin; Kordian, Sediqeh; Parsaie, Abbas
2017-01-01
Rivers are one of the main resources supplying water for agricultural, industrial, and urban use; therefore, continuous monitoring of their quality is necessary. Recently, artificial neural networks have been proposed as a powerful tool for modeling and predicting water quality parameters in natural streams. In this paper, a multilayer perceptron (MLP) neural network model was developed to predict water quality parameters of the Tireh River, located in the southwest of Iran. The TDS, EC, pH, HCO3, Cl, Na, SO4, Mg, and Ca, as the main water quality parameters, were measured and predicted using the MLP model. The architecture of the proposed MLP model included two hidden layers, with eight and six neurons in the first and second hidden layers, respectively. The tangent sigmoid and pure-line functions were selected as transfer functions for the neurons in the hidden and output layers, respectively. The results showed that the MLP model has suitable performance for predicting water quality parameters of the Tireh River. For assessing the performance of the MLP model in water quality prediction along the studied area, another 14 stations were considered by the authors in addition to the existing sampling stations. Evaluation of the developed MLP model in mapping the relations between the water quality parameters along the studied area showed suitable accuracy; the minimum correlation between the results of the MLP model and the measured data was 0.85.
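A scaled-down sketch of such an MLP - one hidden layer of tangent-sigmoid neurons and a linear output, trained by full-batch gradient descent on synthetic data - is shown below. It is not the authors' two-hidden-layer network or their river data; the two predictors and the target are stand-ins.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic regression problem: two stand-in quality predictors, one target.
X = rng.uniform(-1, 1, size=(300, 2))
y = (np.tanh(1.5 * X[:, 0]) - 0.5 * X[:, 1] ** 2)[:, None]

W1 = rng.normal(0, 0.5, (2, 8)); b1 = np.zeros(8)   # hidden tansig layer
W2 = rng.normal(0, 0.5, (8, 1)); b2 = np.zeros(1)   # linear ("pure-line") output
lr = 0.05

for _ in range(5000):
    H = np.tanh(X @ W1 + b1)            # hidden layer (tangent sigmoid)
    out = H @ W2 + b2                   # output layer (linear)
    err = out - y
    dW2 = H.T @ err / len(X); db2 = err.mean(0)
    dH = (err @ W2.T) * (1 - H ** 2)    # backprop through tanh
    dW1 = X.T @ dH / len(X); db1 = dH.mean(0)
    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1

mse = float(((np.tanh(X @ W1 + b1) @ W2 + b2 - y) ** 2).mean())
print(f"training MSE = {mse:.4f}")
```

In practice one would hold out validation stations, as the authors do, to check that the fitted mapping generalizes along the river.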
Distributed parameter modeling of repeated truss structures
NASA Technical Reports Server (NTRS)
Wang, Han-Ching
1994-01-01
A new approach to find homogeneous models for beam-like repeated flexible structures is proposed which conceptually involves two steps. The first step is the approximation of the 3-D non-homogeneous model by a 1-D periodic beam model. The structure is modeled as a 3-D non-homogeneous continuum, and the displacement field is approximated by a Taylor series expansion. Then, the cross-sectional mass and stiffness matrices are obtained by energy equivalence using their additive properties. Due to the repeated nature of the flexible bodies, the mass and stiffness matrices are also periodic. This procedure is systematic and requires less dynamics detail. The second step is the homogenization of the 1-D periodic beam model into a 1-D homogeneous beam model. The periodic beam model is homogenized into an equivalent homogeneous beam model using the additive property of compliance along the generic axis. The major departure from previous approaches in the literature is the use of compliance instead of stiffness in homogenization. An obvious justification is that stiffness is additive at each cross section but not along the generic axis. The homogenized model preserves many properties of the original periodic model.
Effects of model deficiencies on parameter estimation
NASA Technical Reports Server (NTRS)
Hasselman, T. K.
1988-01-01
Reliable structural dynamic models will be required as a basis for deriving the reduced-order plant models used in control systems for large space structures. Ground vibration testing and model verification will play an important role in the development of these models; however, fundamental differences between the space environment and earth environment, as well as variations in structural properties due to as-built conditions, will make on-orbit identification essential. The efficiency, and perhaps even the success, of on-orbit identification will depend on having a valid model of the structure. It is envisioned that the identification process will primarily involve parametric methods. Given a correct model, a variety of estimation algorithms may be used to estimate parameter values. This paper explores the effects of modeling errors and model deficiencies on parameter estimation by reviewing previous case histories. The effects depend at least to some extent on the estimation algorithm being used. Bayesian estimation was used in the case histories presented here. It is therefore conceivable that the behavior of an estimation algorithm might be useful in detecting and possibly even diagnosing deficiencies. In practice, the task is complicated by the presence of systematic errors in experimental procedures and data processing and in the use of the estimation procedures themselves.
Benton, David
2013-01-01
The criteria used to establish dietary reference values are discussed, and it is suggested that too often the "need" they aim to satisfy is at best vaguely specified. The proposition is considered that if we aim to establish optimal nutrition, we will gain from considering psychological in addition to physiological parameters. The brain is by a considerable extent the most complex and metabolically active organ in the body. As such, it would be predicted that the first signs of minor subclinical deficiencies will be disruption of the functioning of the brain. The output of the brain is the product of countless millions of biochemical processes, such that if enzyme activity is only a few percentage points less than maximum, a cumulative influence would result. A series of studies of micronutrient supplementation in well-designed trials was reviewed. In meta-analyses, the cognitive functioning of children and the mood and memory of adults have been shown to respond to multivitamin/mineral supplementation. Given the concerns that have been expressed about negative responses to high levels of micronutrients, the implications are discussed of the finding that psychological functioning may benefit from intakes greater than those currently recommended.
Test models for improving filtering with model errors through stochastic parameter estimation
Gershgorin, B.; Harlim, J.; Majda, A. J.
2010-01-01
The filtering skill for turbulent signals from nature is often limited by model errors created by utilizing an imperfect model for filtering. Updating the parameters in the imperfect model through stochastic parameter estimation is one way to increase filtering skill and model performance. Here a suite of stringent test models for filtering with stochastic parameter estimation is developed based on the Stochastic Parameterization Extended Kalman Filter (SPEKF). These new SPEKF-algorithms systematically correct both multiplicative and additive biases and involve exact formulas for propagating the mean and covariance including the parameters in the test model. A comprehensive study is presented of robust parameter regimes for increasing filtering skill through stochastic parameter estimation for turbulent signals as the observation time and observation noise are varied and even when the forcing is incorrectly specified. The results here provide useful guidelines for filtering turbulent signals in more complex systems with significant model errors.
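The core idea of correcting an additive bias through state augmentation can be sketched with a scalar linear Kalman filter: an unknown forcing b is appended to the state with trivial dynamics and estimated from observations of x alone. This is a simplified linear stand-in, not the SPEKF equations, and all numbers are hypothetical.

```python
import random

random.seed(2)

# Truth: x_{k+1} = a x_k + b + w, observed as y_k = x_k + v; b is unknown.
a, b_true, q, r = 0.9, 2.0, 0.04, 0.25
x, ys = 0.0, []
for _ in range(2000):
    x = a * x + b_true + random.gauss(0, q ** 0.5)
    ys.append(x + random.gauss(0, r ** 0.5))

# Augmented state z = (x, b): z_{k+1} = [[a, 1], [0, 1]] z_k + noise, y = z[0] + noise.
z = [0.0, 0.0]
P = [[10.0, 0.0], [0.0, 10.0]]
for y in ys:
    # Predict: z <- F z, P <- F P F^T + Q, with Q = diag(q, 0).
    z = [a * z[0] + z[1], z[1]]
    P = [[a * a * P[0][0] + 2 * a * P[0][1] + P[1][1] + q,
          a * P[0][1] + P[1][1]],
         [a * P[0][1] + P[1][1], P[1][1]]]
    # Update with the observation of x (H = [1, 0]).
    s = P[0][0] + r
    k0, k1 = P[0][0] / s, P[0][1] / s
    innov = y - z[0]
    z = [z[0] + k0 * innov, z[1] + k1 * innov]
    P = [[(1 - k0) * P[0][0], (1 - k0) * P[0][1]],
         [P[0][1] - k1 * P[0][0], P[1][1] - k1 * P[0][1]]]

print(f"estimated bias b ~ {z[1]:.2f} (true {b_true})")
```

SPEKF extends this idea to multiplicative as well as additive corrections, with exact formulas for propagating the augmented mean and covariance.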
NASA Astrophysics Data System (ADS)
Sultanov, Albert H.; Gayfulin, Renat R.; Vinogradova, Irina L.
2008-04-01
Fiber optic telecommunication systems with duplex data transmission over a single fiber require reflection minimization. Reflections may be strong enough to deactivate the system through misoperation of the conventional alarm, and the system cannot automatically resolve the collision, so manual operator control is required. In this paper we propose a technical solution to this problem based on an additional analysis subsystem, implemented on the installed Ufa-city fiber optic CTV system "Crystal". Experience from its maintenance and the results of investigations of the fault tolerance parameters are presented.
Reliability of parameter estimation in respirometric models.
Checchi, Nicola; Marsili-Libelli, Stefano
2005-09-01
When modelling a biochemical system, the fact that model parameters cannot be estimated exactly motivates the definition of tests for detecting unreliable estimates and for designing better experiments. The method applied in this paper is a further development of Marsili-Libelli et al. [2003. Confidence regions of estimated parameters for ecological systems. Ecol. Model. 165, 127-146.] and is based on the confidence regions computed with the Fisher or the Hessian matrix. It detects the influence of the curvature, which represents the distortion of the model response due to its nonlinear structure. If the test is passed, then the estimation can be considered reliable, in the sense that the optimisation search has reached a point on the error surface where the effect of nonlinearities is negligible. The test is used here for an assessment of respirometric model calibration, i.e. checking the experimental design and estimation reliability, with an application to real-life data in the ASM context. Only dissolved oxygen measurements have been considered, because this is a very popular experimental set-up in wastewater modelling. The estimation of a two-step nitrification model using batch respirometric data is considered, showing that the initial amount of ammonium-N and the number of data points play a crucial role in obtaining reliable estimates. From this basic application other results are derived, such as the estimation of the combined yield factor and of the second-step parameters, based on a modified kinetics and a specific nitrite experiment. Finally, guidelines for designing reliable experiments are provided.
Constant-parameter capture-recapture models
Brownie, C.; Hines, J.E.; Nichols, J.D.
1986-01-01
Jolly (1982, Biometrics 38, 301-321) presented modifications of the Jolly-Seber model for capture-recapture data, which assume constant survival and/or capture rates. Where appropriate, because of the reduced number of parameters, these models lead to more efficient estimators than the Jolly-Seber model. The tests to compare models given by Jolly do not make complete use of the data, and we present here the appropriate modifications, and also indicate how to carry out goodness-of-fit tests which utilize individual capture history information. We also describe analogous models for the case where young and adult animals are tagged. The availability of computer programs to perform the analysis is noted, and examples are given using output from these programs.
Sensitivity analysis of geometric errors in additive manufacturing medical models.
Pinto, Jose Miguel; Arrieta, Cristobal; Andia, Marcelo E; Uribe, Sergio; Ramos-Grez, Jorge; Vargas, Alex; Irarrazaval, Pablo; Tejos, Cristian
2015-03-01
Additive manufacturing (AM) models are used in medical applications for surgical planning, prosthesis design and teaching. For these applications, the accuracy of the AM models is essential. Unfortunately, this accuracy is compromised due to errors introduced by each of the building steps: image acquisition, segmentation, triangulation, printing and infiltration. However, the contribution of each step to the final error remains unclear. We performed a sensitivity analysis comparing errors obtained from a reference with those obtained modifying parameters of each building step. Our analysis considered global indexes to evaluate the overall error, and local indexes to show how this error is distributed along the surface of the AM models. Our results show that the standard building process tends to overestimate the AM models, i.e. models are larger than the original structures. They also show that the triangulation resolution and the segmentation threshold are critical factors, and that the errors are concentrated at regions with high curvatures. Errors could be reduced choosing better triangulation and printing resolutions, but there is an important need for modifying some of the standard building processes, particularly the segmentation algorithms.
CHAMP: Changepoint Detection Using Approximate Model Parameters
2014-06-01
detecting changes in the parameters and models that generate observed data. Commonly cited examples include detecting changes in stock market behavior [4] ... experimentally verified using artificially generated data and are compared to those of Fearnhead and Liu [5]. 2 Related work Hidden Markov Models (HMMs) are ... largely the de facto tool of choice when analyzing time series data, but the standard HMM formulation has several undesirable properties. The number of
NASA Technical Reports Server (NTRS)
Sovers, O. J.; Fanselow, J. L.
1987-01-01
This report is a revision of the document of the same title, dated August 1, 1986, which it supersedes. Model changes during 1986 and 1987 included corrections for antenna feed rotation, refraction in modelling antenna axis offsets, and an option to employ improved values of the semiannual and annual nutation amplitudes. Partial derivatives of the observables with respect to an additional parameter (surface temperature) are now available. New versions of two figures representing the geometric delay are incorporated. The expressions for the partial derivatives with respect to the nutation parameters have been corrected to include contributions from the dependence of UT1 on nutation. The authors hope to publish revisions of this document in the future, as modeling improvements warrant.
Identification of Neurofuzzy models using GTLS parameter estimation.
Jakubek, Stefan; Hametner, Christoph
2009-10-01
In this paper, nonlinear system identification utilizing generalized total least squares (GTLS) methodologies in neurofuzzy systems is addressed. The problem involved with the estimation of the local model parameters of neurofuzzy networks is the presence of noise in measured data. When some or all input channels are subject to noise, the GTLS algorithm yields consistent parameter estimates. In addition to the estimation of the parameters, the main challenge in the design of these local model networks is the determination of the region of validity for the local models. The method presented in this paper is based on an expectation-maximization algorithm that uses a residual from the GTLS parameter estimation for proper partitioning. The performance of the resulting nonlinear model with local parameters estimated by weighted GTLS is a product both of the parameter estimation itself and the associated residual used for the partitioning process. The applicability and benefits of the proposed algorithm are demonstrated by means of illustrative examples and an automotive application.
Radar altimeter waveform modeled parameter recovery [SEASAT-1 data]
NASA Technical Reports Server (NTRS)
1981-01-01
Satellite-borne radar altimeters include waveform sampling gates providing point samples of the transmitted radar pulse after its scattering from the ocean's surface. Averages of the waveform sampler data can be fitted by varying parameters in a model mean return waveform. The theoretical waveform model used is described, as well as the general iterative nonlinear least squares procedure used to obtain estimates of the parameters characterizing the modeled waveform for SEASAT-1 data. The six waveform parameters recovered by the fitting procedure are: (1) amplitude; (2) time origin, or track point; (3) ocean surface rms roughness; (4) noise baseline; (5) ocean surface skewness; and (6) attitude, or off-nadir angle. Additional practical processing considerations are addressed, and FORTRAN source listings for the subroutines used in the waveform fitting are included. While the description is for the SEASAT-1 altimeter waveform data analysis, the work can easily be generalized and extended to other radar altimeter systems.
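An iterative nonlinear least squares (Gauss-Newton) fit of this kind can be sketched on a simplified return model whose leading edge is an error-function ramp: the amplitude sets the plateau, the track point sets the midpoint, and the rms roughness sets the rise width. All numbers below are illustrative, and only three of the six parameters are fitted.

```python
import math
import numpy as np

rng = np.random.default_rng(5)

def waveform(t, theta):
    """Idealized mean return leading edge: an error-function ramp."""
    A, t0, sig = theta
    z = (t - t0) / (math.sqrt(2) * sig)
    return 0.5 * A * (1 + np.array([math.erf(v) for v in z]))

t = np.arange(-10, 10.5, 0.5)                # waveform gate times (arbitrary units)
theta_true = np.array([2.0, 1.0, 1.5])       # amplitude, track point, rms roughness
data = waveform(t, theta_true) + rng.normal(0, 0.02, len(t))

theta = np.array([1.0, 0.0, 1.0])            # rough initial guess
for _ in range(20):                          # Gauss-Newton iterations
    r = waveform(t, theta) - data
    J = np.empty((len(t), 3))
    for i in range(3):                       # Jacobian by central differences
        dp, dm = theta.copy(), theta.copy()
        dp[i] += 1e-6; dm[i] -= 1e-6
        J[:, i] = (waveform(t, dp) - waveform(t, dm)) / 2e-6
    step, *_ = np.linalg.lstsq(J, -r, rcond=None)   # linearized LS update
    theta = theta + step
    theta[2] = abs(theta[2])                 # simple safeguard: keep width positive

print("estimated [A, t0, sigma]:", np.round(theta, 3))
```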
Macroscopic singlet oxygen model incorporating photobleaching as an input parameter
NASA Astrophysics Data System (ADS)
Kim, Michele M.; Finlay, Jarod C.; Zhu, Timothy C.
2015-03-01
A macroscopic singlet oxygen model for photodynamic therapy (PDT) has been used extensively to calculate the reacted singlet oxygen concentration for various photosensitizers. The four photophysical parameters (ξ, σ, β, δ) and threshold singlet oxygen dose ([1O2]r,sh) can be found for various drugs and drug-light intervals using a fitting algorithm. The input parameters for this model include the fluence, photosensitizer concentration, optical properties, and necrosis radius. An additional input variable of photobleaching was implemented in this study to optimize the results. Photobleaching was measured by using the pre-PDT and post-PDT sensitizer concentrations. Using the RIF model of murine fibrosarcoma, mice were treated with a linear source with fluence rates from 12 - 150 mW/cm and total fluences from 24 - 135 J/cm. The two main drugs investigated were benzoporphyrin derivative monoacid ring A (BPD) and 2-[1-hexyloxyethyl]-2-devinyl pyropheophorbide-a (HPPH). Previously published photophysical parameters were fine-tuned and verified using photobleaching as the additional fitting parameter. Furthermore, photobleaching can be used as an indicator of the robustness of the model for the particular mouse experiment by comparing the experimental and model-calculated photobleaching ratio.
Estimation of dynamic stability parameters from drop model flight tests
NASA Technical Reports Server (NTRS)
Chambers, J. R.; Iliff, K. W.
1981-01-01
A recent NASA application of a remotely-piloted drop model to studies of the high angle-of-attack and spinning characteristics of a fighter configuration has provided an opportunity to evaluate and develop parameter estimation methods for the complex aerodynamic environment associated with high angles of attack. The paper discusses the overall drop model operation including descriptions of the model, instrumentation, launch and recovery operations, piloting concept, and parameter identification methods used. Static and dynamic stability derivatives were obtained for an angle-of-attack range from -20 deg to 53 deg. The results of the study indicated that the variations of the estimates with angle of attack were consistent for most of the static derivatives, and the effects of configuration modifications to the model (such as nose strakes) were apparent in the static derivative estimates. The dynamic derivatives exhibited greater uncertainty levels than the static derivatives, possibly due to nonlinear aerodynamics, model response characteristics, or additional derivatives.
Multiscale and Multiphysics Modeling of Additive Manufacturing of Advanced Materials
NASA Technical Reports Server (NTRS)
Liou, Frank; Newkirk, Joseph; Fan, Zhiqiang; Sparks, Todd; Chen, Xueyang; Fletcher, Kenneth; Zhang, Jingwei; Zhang, Yunlu; Kumar, Kannan Suresh; Karnati, Sreekar
2015-01-01
The objective of this proposed project is to research and develop a prediction tool for advanced additive manufacturing (AAM) processes for advanced materials and develop experimental methods to provide fundamental properties and establish validation data. Aircraft structures and engines demand materials that are stronger, useable at much higher temperatures, provide less acoustic transmission, and enable more aeroelastic tailoring than those currently used. Significant improvements in properties can only be achieved by processing the materials under nonequilibrium conditions, such as AAM processes. AAM processes encompass a class of processes that use a focused heat source to create a melt pool on a substrate. Examples include Electron Beam Freeform Fabrication and Direct Metal Deposition. These types of additive processes enable fabrication of parts directly from CAD drawings. To achieve the desired material properties and geometries of the final structure, assessing the impact of process parameters and predicting optimized conditions with numerical modeling as an effective prediction tool is necessary. The processing targets are multiple and span different spatial scales, and the associated physical phenomena are inherently multiphysics and multiscale. In this project, AAM processes were therefore modeled using a multiscale, multiphysics approach. A macroscale model was developed to investigate the residual stresses and distortion in AAM processes. A sequentially coupled, thermomechanical, finite element model was developed and validated experimentally. The results showed the temperature distribution, residual stress, and deformation within the formed deposits and substrates. A mesoscale model was developed to include heat transfer, phase change with mushy zone, incompressible free surface flow, solute redistribution, and surface tension. Because of the excessive computing time needed, a parallel computing approach was also tested. In addition
Additive Functions in Boolean Models of Gene Regulatory Network Modules
Darabos, Christian; Di Cunto, Ferdinando; Tomassini, Marco; Moore, Jason H.; Provero, Paolo; Giacobini, Mario
2011-01-01
Gene-on-gene regulations are key components of every living organism. Dynamical abstract models of genetic regulatory networks help explain the genome's evolvability and robustness. These properties can be attributed to the structural topology of the graph formed by genes, as vertices, and regulatory interactions, as edges. Moreover, the actual gene interaction of each gene is believed to play a key role in the stability of the structure. With advances in biology, efforts have been made to develop update functions in Boolean models that include recent knowledge. We combine real-life gene interaction networks with novel update functions in a Boolean model. We use two sub-networks of biological organisms, the yeast cell-cycle and the mouse embryonic stem cell, as topological support for our system. On these structures, we substitute the original random update functions with a novel threshold-based dynamic function in which the promoting and repressing effect of each interaction is considered. We use a third real-life regulatory network, along with its inferred Boolean update functions, to validate the proposed update function. Results of this validation hint at increased biological plausibility of the threshold-based function. To investigate the dynamical behavior of this new model, we visualized the phase transition between order and chaos into the critical regime using Derrida plots. We complement the qualitative nature of Derrida plots with an alternative measure, the criticality distance, which also allows regimes to be discriminated quantitatively. Simulations on both real-life genetic regulatory networks show that there exists a set of parameters that allows the systems to operate in the critical region. This new model includes experimentally derived biological information and recent discoveries, which makes it potentially useful to guide experimental research. The update function confers additional realism to the model, while reducing the complexity
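As an illustration of the kind of threshold-based update the abstract describes, here is a minimal sketch in Python; the three-gene wiring, signed unit weights, and tie-keeps-state rule are illustrative assumptions, not the paper's exact function:

```python
import numpy as np

def threshold_update(state, W):
    """Synchronous threshold update: gene i turns on when the signed sum
    of its active regulators is positive, turns off when it is negative,
    and keeps its current value on a tie (zero field)."""
    field = W @ state                 # W[i, j] = +1 activation, -1 repression
    new = state.copy()
    new[field > 0] = 1
    new[field < 0] = 0                # ties (field == 0) keep previous state
    return new

# Toy 3-gene module: gene 0 activates 1, gene 1 activates 2, gene 2 represses 0
W = np.array([[0, 0, -1],
              [1, 0, 0],
              [0, 1, 0]])
state = np.array([1, 0, 0])
for _ in range(4):                    # iterate the synchronous dynamics
    state = threshold_update(state, W)
```

On this toy wiring the trajectory settles on the fixed point [0, 1, 1] after a few synchronous steps.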
Model parameters for simulation of physiological lipids
McGlinchey, Nicholas
2016-01-01
Coarse grain simulation of proteins in their physiological membrane environment can offer insight across timescales, but requires a comprehensive force field. Parameters are explored for multicomponent bilayers composed of the unsaturated lipids DOPC and DOPE, the mixed-chain saturated lipids POPC and POPE, and anionic lipids found in bacteria: POPG and cardiolipin. A nonbond representation obtained from multiscale force matching is adapted for these lipids and combined with an improved bonding description of cholesterol. Equilibrating the area per lipid yields robust bilayer simulations and properties for common lipid mixtures, with the exception of pure DOPE, which has a known tendency to form a nonlamellar phase. The models maintain consistency with an existing lipid–protein interaction model, making the force field of general utility for studying membrane proteins in physiologically representative bilayers. © 2016 The Authors. Journal of Computational Chemistry Published by Wiley Periodicals, Inc. PMID:26864972
Influence of dispersing additive on asphaltenes aggregation in model system
NASA Astrophysics Data System (ADS)
Gorshkov, A. M.; Shishmina, L. V.; Tukhvatullina, A. Z.; Ismailov, Yu R.; Ges, G. A.
2016-09-01
This work investigates the influence of a dispersing additive on asphaltene aggregation in the asphaltenes-toluene-heptane model system by the photon correlation spectroscopy method. The experimental relationship between the onset point of asphaltenes and their concentration in toluene has been obtained. The influence of model system composition on asphaltene aggregation has been investigated. The aggregative and sedimentation stability of asphaltenes has been assessed both in the model system and in the system with the dispersing additive added.
NASA Astrophysics Data System (ADS)
Khandal, Dhriti; Mikus, Pierre-Yves; Dole, Patrice; Coqueret, Xavier
2013-03-01
This paper reports on the effects of electron beam (EB) irradiation on poly α-1,4-glucose oligomers (maltodextrins) in the presence of water and of various aromatic additives, as model blends for gaining a better understanding, at the molecular level, of the modifications occurring in amorphous starch-lignin blends submitted to ionizing irradiation for improving the properties of this type of bio-based thermoplastic material. A series of aromatic compounds, namely p-methoxy benzyl alcohol, benzene dimethanol, and cinnamyl alcohol, and some related carboxylic acids, namely cinnamic acid, coumaric acid, and ferulic acid, was studied to assess the ability of each additive to counteract chain scission of the polysaccharide and induce interchain covalent linkages. Gel formation in EB-irradiated blends comprising maltodextrin was shown to depend on three main factors: the type of aromatic additive, the presence of glycerol, and the irradiation dose. Chain scission versus grafting as a function of blend composition and dose was studied using Size Exclusion Chromatography, by determining the changes in molecular weight distribution (MWD) from Refractive Index (RI) chromatograms and the presence of aromatic grafts on the maltodextrin chains from UV chromatograms. The occurrence of crosslinking was quantified by gel fraction measurements, allowing the cross-linking efficiency of the additives to be ranked. When applying the method to destructurized starch blends, gel formation was also shown to be strongly affected by the moisture content of the sample submitted to irradiation. The results demonstrate the possibility of tuning the reactivity of tailored blends to minimize chain degradation and control the degree of cross-linking.
Effect of processing parameters and glycerin addition on the properties of Al foams
NASA Astrophysics Data System (ADS)
Gilani, Hossein; Jafari, Sajjad; Gholami, Roozbeh; Habibolahzadeh, Ali; Mirshahi, Mohammad
2012-04-01
Aluminum foam has been produced by the sintering and dissolution process using NaCl powder as a space holder. In this research, glycerin is used as a novel lubricant along with acetone. The effects of the processing parameters, including compacting pressure, sintering temperature (620, 640 and 650 °C), and the size and volume fraction of the space holder, on the physical and mechanical properties of the produced foams have been investigated. Due to segregation of the Al and NaCl powders at high compaction pressures, spalling of Al foams was observed. Adding small amounts of acetone and glycerin to the mixture, however, ensures homogeneity and prevents segregation of the dissimilar powders at varying pressures. Moreover, the addition of glycerin provides a more homogeneous stress distribution within the produced foams during mechanical testing, which in turn halts crack propagation. An alternative technique to remove the NaCl particles during the dissolution stage has also been proposed. The results showed that high quality foams were successfully produced under a compaction pressure range of 250-265 MPa and a sintering temperature of 650 °C.
Generating Effective Models and Parameters for RNA Genetic Circuits.
Hu, Chelsea Y; Varner, Jeffrey D; Lucks, Julius B
2015-08-21
RNA genetic circuitry is emerging as a powerful tool to control gene expression. However, little work has been done to create a theoretical foundation for RNA circuit design. A prerequisite to this is a quantitative modeling framework that accurately describes the dynamics of RNA circuits. In this work, we develop an ordinary differential equation model of transcriptional RNA genetic circuitry, using an RNA cascade as a test case. We show that parameter sensitivity analysis can be used to design a set of four simple experiments that can be performed in parallel using rapid cell-free transcription-translation (TX-TL) reactions to determine the 13 parameters of the model. The resulting model accurately recapitulates the dynamic behavior of the cascade, and can be easily extended to predict the function of new cascade variants that utilize new elements with limited additional characterization experiments. Interestingly, we show that inconsistencies between model predictions and experiments led to the model-guided discovery of a previously unknown maturation step required for RNA regulator function. We also determine circuit parameters in two different batches of TX-TL, and show that batch-to-batch variation can be attributed to differences in parameters that are directly related to the concentrations of core gene expression machinery. We anticipate the RNA circuit models developed here will inform the creation of computer aided genetic circuit design tools that can incorporate the growing number of RNA regulators, and that the parametrization method will find use in determining functional parameters of a broad array of natural and synthetic regulatory systems.
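An ordinary differential equation model of a transcriptional RNA cascade, in the spirit described above, can be sketched briefly; the two-species structure, Hill-type repression, and all rate constants below are illustrative assumptions (the paper's model has 13 parameters determined from TX-TL experiments, none of which are reproduced here):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical rates: production, first-order decay, repression threshold,
# and Hill cooperativity. Not the paper's fitted values.
beta_s, beta_g, delta = 2.0, 1.0, 0.1
K, n = 1.0, 2.0

def cascade(t, y):
    s, g = y                          # s: regulator RNA, g: downstream output
    ds = beta_s - delta * s           # constitutive production, linear decay
    dg = beta_g / (1.0 + (s / K) ** n) - delta * g   # s represses g
    return [ds, dg]

sol = solve_ivp(cascade, (0.0, 100.0), [0.0, 0.0], dense_output=True)
s_end, g_end = sol.y[:, -1]
```

The regulator approaches its steady state beta_s/delta while the repressed output is driven toward a low level, reproducing the qualitative cascade behavior.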
Multiscale modeling of failure in composites under model parameter uncertainty
NASA Astrophysics Data System (ADS)
Bogdanor, Michael J.; Oskay, Caglar; Clay, Stephen B.
2015-09-01
This manuscript presents a multiscale stochastic failure modeling approach for fiber reinforced composites. A homogenization based reduced-order multiscale computational model is employed to predict the progressive damage accumulation and failure in the composite. Uncertainty in the composite response is modeled at the scale of the microstructure by considering the constituent material (i.e., matrix and fiber) parameters governing the evolution of damage as random variables. Through the use of the multiscale model, randomness at the constituent scale is propagated to the scale of the composite laminate. The probability distributions of the underlying material parameters are calibrated from unidirectional composite experiments using a Bayesian statistical approach. The calibrated multiscale model is exercised to predict the ultimate tensile strength of quasi-isotropic open-hole composite specimens at various loading rates. The effect of random spatial distribution of constituent material properties on the composite response is investigated.
Fixing the c Parameter in the Three-Parameter Logistic Model
ERIC Educational Resources Information Center
Han, Kyung T.
2012-01-01
For several decades, the "three-parameter logistic model" (3PLM) has been the dominant choice for practitioners in the field of educational measurement for modeling examinees' response data from multiple-choice (MC) items. Past studies, however, have pointed out that the c-parameter of 3PLM should not be interpreted as a guessing parameter. This…
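For reference, the 3PLM item response function itself is compact; the sketch below uses the common D = 1.7 scaling convention. The lower asymptote c sets the floor probability for low-ability examinees, which is why reading it as a pure "guessing" probability is contested:

```python
import math

def p_correct(theta, a, b, c):
    """Three-parameter logistic item response function:
    lower asymptote c, discrimination a, difficulty b,
    with the conventional D = 1.7 scaling constant."""
    return c + (1.0 - c) / (1.0 + math.exp(-1.7 * a * (theta - b)))
```

At theta = b the probability is midway between c and 1, and as theta decreases it approaches c rather than zero.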
Hyperbolic value addition and general models of animal choice.
Mazur, J E
2001-01-01
Three mathematical models of choice--the contextual-choice model (R. Grace, 1994), delay-reduction theory (N. Squires & E. Fantino, 1971), and a new model called the hyperbolic value-added model--were compared in their ability to predict the results from a wide variety of experiments with animal subjects. When supplied with 2 or 3 free parameters, all 3 models made fairly accurate predictions for a large set of experiments that used concurrent-chain procedures. One advantage of the hyperbolic value-added model is that it is derived from a simpler model that makes accurate predictions for many experiments using discrete-trial adjusting-delay procedures. Some results favor the hyperbolic value-added model and delay-reduction theory over the contextual-choice model, but more data are needed from choice situations for which the models make distinctly different predictions.
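The hyperbolic value function underlying the hyperbolic value-added model is Mazur's V = A / (1 + kD), where A is reinforcer amount, D is delay, and k is a sensitivity parameter. A minimal sketch (the k value and the amounts are illustrative):

```python
def hyperbolic_value(amount, delay, k=0.2):
    """Mazur's hyperbolic delay discounting: V = A / (1 + k * D)."""
    return amount / (1.0 + k * delay)

# A smaller-sooner vs. larger-later choice: compare discounted values
v_soon = hyperbolic_value(2.0, delay=1.0)    # 2 units after 1 s
v_late = hyperbolic_value(6.0, delay=10.0)   # 6 units after 10 s
```

With these illustrative numbers the larger-later alternative retains the higher discounted value; the value-added model builds choice predictions from changes in such values between choice and terminal links.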
Estimating Regression Parameters in an Extended Proportional Odds Model
Chen, Ying Qing; Hu, Nan; Cheng, Su-Chun; Musoke, Philippa; Zhao, Lue Ping
2012-01-01
The proportional odds model may serve as a useful alternative to the Cox proportional hazards model to study association between covariates and their survival functions in medical studies. In this article, we study an extended proportional odds model that incorporates the so-called “external” time-varying covariates. In the extended model, regression parameters have a direct interpretation of comparing survival functions, without specifying the baseline survival odds function. Semiparametric and maximum likelihood estimation procedures are proposed to estimate the extended model. Our methods are demonstrated by Monte-Carlo simulations, and applied to a landmark randomized clinical trial of a short course Nevirapine (NVP) for mother-to-child transmission (MTCT) of human immunodeficiency virus type-1 (HIV-1). Additional application includes analysis of the well-known Veterans Administration (VA) Lung Cancer Trial. PMID:22904583
Investigation of RADTRAN Stop Model input parameters for truck stops
Griego, N.R.; Smith, J.D.; Neuhauser, K.S.
1996-03-01
RADTRAN is a computer code for estimating the risks and consequences associated with transport of radioactive materials (RAM). RADTRAN was developed and is maintained by Sandia National Laboratories for the US Department of Energy (DOE). For incident-free transportation, the dose to persons exposed while the shipment is stopped is frequently a major percentage of the overall dose. This dose is referred to as Stop Dose and is calculated by the Stop Model. Because Stop Dose is a significant portion of the overall dose associated with RAM transport, the values used as input for the Stop Model are important. Therefore, an investigation of typical values for RADTRAN Stop Model parameters at truck stops was performed. The resulting data from these investigations were analyzed to provide mean values, standard deviations, and histograms. The mean values can thus be used when an analyst does not have a basis for selecting other input values for the Stop Model. In addition, the histograms and their characteristics can be used to guide statistical sampling techniques to measure the sensitivity of the RADTRAN-calculated Stop Dose to uncertainties in the Stop Model input parameters. This paper discusses the details and presents the results of the investigation of Stop Model input parameters at truck stops.
NASA Astrophysics Data System (ADS)
Ghanbari, M.; Najafi, G.; Ghobadian, B.; Mamat, R.; Noor, M. M.; Moosavian, A.
2015-12-01
This paper studies the use of an adaptive Support Vector Machine (SVM) to predict the performance parameters and exhaust emissions of a diesel engine operating on nano-diesel blended fuels. In order to predict the engine parameters, the whole experimental data set was randomly divided into training and testing data. For SVM modelling, different values of the radial basis function (RBF) kernel width and of the penalty parameter (C) were considered and the optimum values were then found. The results demonstrate that SVM is capable of predicting the diesel engine performance and emissions. In the experimental step, carbon nanotubes (CNTs; 40, 80 and 120 ppm) and silver nanoparticles (40, 80 and 120 ppm) were prepared and added as additives to the diesel fuel. A six-cylinder, four-stroke diesel engine was fuelled with these new blended fuels and operated at different engine speeds. Experimental test results indicated that adding nanoparticles to diesel fuel increased the engine power and torque output. For nano-diesel, the brake specific fuel consumption (bsfc) was found to decrease compared to neat diesel fuel. The results showed that with an increase of nanoparticle concentration (from 40 ppm to 120 ppm) in the diesel fuel, CO2 emission increased. CO emission with the nanoparticle fuels was significantly lower than with pure diesel fuel. UHC emission decreased with the silver nano-diesel blended fuel, while it increased with the fuels containing CNTs. The trend of NOx emission was the inverse of that of UHC: with nanoparticles added to the blended fuels, NOx increased compared to neat diesel fuel. The tests revealed that silver and CNT nanoparticles can be used as additives in diesel fuel to improve the completeness of combustion and reduce the exhaust emissions significantly.
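The SVM tuning step described above (searching over the RBF kernel width and the penalty C) can be sketched with scikit-learn's SVR; the synthetic inputs below stand in for the engine measurements, which are not reproduced here, and the feature names are purely illustrative:

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import GridSearchCV

# Synthetic stand-in for the engine data: predict a torque-like output
# from engine speed (rpm) and nanoparticle dose (ppm).
rng = np.random.default_rng(1)
X = rng.uniform([1000.0, 0.0], [3000.0, 120.0], size=(80, 2))
y = 0.05 * X[:, 0] + 0.3 * X[:, 1] + rng.normal(0.0, 5.0, 80)

# Grid over the RBF kernel width (gamma) and penalty parameter (C),
# mirroring the parameter search the abstract describes.
search = GridSearchCV(SVR(kernel="rbf"),
                      {"C": [1, 10, 100, 1000],
                       "gamma": [1e-4, 1e-3, 1e-2]},
                      cv=5)
search.fit(X, y)
```

Cross-validation selects the (C, gamma) pair with the best held-out score, which is the "optimum values" step before evaluating on the testing split.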
Empirical flow parameters : a tool for hydraulic model validity
Asquith, William H.; Burley, Thomas E.; Cleveland, Theodore G.
2013-01-01
The objectives of this project were (1) to determine and present, from existing data in Texas, relations between observed stream flow, topographic slope, mean section velocity, and other hydraulic factors, to produce charts such as Figure 1 and to produce empirical distributions of the various flow parameters to provide a methodology to "check if model results are way off!"; (2) to produce a statistical regional tool to estimate mean velocity or other selected parameters for storm flows or other conditional discharges at ungauged locations (most bridge crossings) in Texas, to provide a secondary way to compare such values to a conventional hydraulic modeling approach; and (3) to present ancillary values such as Froude number, stream power, Rosgen channel classification, sinuosity, and other selected characteristics (readily determinable from existing data) to provide additional information to engineers concerned with the hydraulic-soil-foundation component of transportation infrastructure.
Investigation of land use effects on Nash model parameters
NASA Astrophysics Data System (ADS)
Niazi, Faegheh; Fakheri Fard, Ahmad; Nourani, Vahid; Goodrich, David; Gupta, Hoshin
2015-04-01
The Nash model is more sensitive to K. In addition, there is a wider range of parameter values over which the efficiency remains acceptable in the urban sub-watershed than in the natural one. This might be due to lower uncertainty in urban watersheds, where runoff-to-rainfall ratios are much larger than in the natural sub-watershed; the uncertainty in rainfall observations (noise) is therefore a much smaller percentage of the runoff (signal).
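For context, the Nash model's instantaneous unit hydrograph is a cascade of n equal linear reservoirs with storage coefficient K, i.e. a gamma density in time, which is the role of the K parameter the sensitivity discussion above refers to. A minimal sketch:

```python
import math

def nash_iuh(t, n, K):
    """Nash instantaneous unit hydrograph for a cascade of n equal
    linear reservoirs with storage coefficient K:
        u(t) = (t/K)^(n-1) * exp(-t/K) / (K * Gamma(n))
    which peaks at t = (n - 1) * K and integrates to 1."""
    return (t / K) ** (n - 1) * math.exp(-t / K) / (K * math.gamma(n))
```

Because the time-to-peak scales directly with K, perturbing K shifts the whole hydrograph, which is consistent with the model being more sensitive to K than to n.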
Integrating microbial diversity in soil carbon dynamic models parameters
NASA Astrophysics Data System (ADS)
Louis, Benjamin; Menasseri-Aubry, Safya; Leterme, Philippe; Maron, Pierre-Alain; Viaud, Valérie
2015-04-01
Faced with numerous concerns about soil carbon dynamics, a large number of carbon dynamics models have been developed over the last century. These models are mainly deterministic compartment models, with carbon fluxes between compartments represented by ordinary differential equations. Many of them now consider the microbial biomass as a compartment of the soil organic matter (carbon quantity), but the amount of microbial carbon is rarely used in the differential equations of the models as a limiting factor. Additionally, microbial diversity and community composition are mostly missing, although advances in soil microbial analytical methods over the two past decades have shown that these characteristics also play a significant role in soil carbon dynamics. As soil microorganisms are essential drivers of soil carbon dynamics, the question of explicitly integrating their role has become a key issue in model development. Some interesting attempts can be found, dominated by the incorporation of several compartments for different groups of microbial biomass, in terms of functional traits and/or biogeochemical compositions, to integrate microbial diversity. However, these models are basically heuristic, in the sense that they are used to test hypotheses through simulations; they have rarely been confronted with real data and thus cannot be used to predict realistic situations. The objective of this work was to empirically integrate microbial diversity in a simple model of carbon dynamics through statistical modelling of the model parameters. This work is based on available experimental results from a French National Research Agency program called DIMIMOS. Briefly, 13C-labelled wheat residue was incorporated into soils with different pedological characteristics and land use history. The soils were then incubated during 104 days and labelled and non-labelled CO2 fluxes have been measured at ten
Additive and subtractive scrambling in optional randomized response modeling.
Hussain, Zawar; Al-Sobhi, Mashail M; Al-Zahrani, Bander
2014-01-01
This article considers unbiased estimation of the mean, variance and sensitivity level of a sensitive variable via scrambled response modeling. In particular, we focus on estimation of the mean. The idea of using additive and subtractive scrambling has been suggested under a recent scrambled response model. Whether it is estimation of the mean, variance or sensitivity level, the proposed estimation scheme is shown to be relatively more efficient than that recent model. As far as estimation of the mean is concerned, the proposed estimators perform relatively better than the estimators based on recent additive scrambling models. Relative efficiency comparisons are also made in order to highlight the performance of the proposed estimators under the suggested scrambling technique.
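The additive scrambling idea can be sketched in a few lines: each respondent reports the sum of the sensitive value and a scrambling draw whose moments are publicly known, so the interviewer never observes the sensitive value itself, yet unbiased moment estimators exist. All distributions and moments below are illustrative, not those of the cited model:

```python
import numpy as np

rng = np.random.default_rng(42)

# True sensitive variable (never observed directly by the interviewer)
y = rng.normal(50.0, 10.0, size=20000)

# Additive scrambling: respondent reports z = y + s, where the scrambling
# variable s has publicly known mean and standard deviation.
mu_s, sd_s = 10.0, 4.0
s = rng.normal(mu_s, sd_s, size=y.size)
z = y + s

mean_hat = z.mean() - mu_s           # unbiased estimator of E[y]
var_hat = z.var(ddof=1) - sd_s**2    # moment estimator of Var(y)
```

Subtracting the known scrambling moments recovers the mean and variance of the sensitive variable from the scrambled reports alone.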
Complex Modelling Scheme Of An Additive Manufacturing Centre
NASA Astrophysics Data System (ADS)
Popescu, Liliana Georgeta
2015-09-01
This paper presents a modelling scheme sustaining the development of an additive manufacturing research centre model and its processes. This modelling is performed using IDEF0, the resulting process model representing the basic processes required in developing such a centre in any university. While the activities presented in this study are those recommended in general, changes may occur in specific existing situations in a research centre.
Improving a regional model using reduced complexity and parameter estimation
Kelson, Victor A.; Hunt, Randall J.; Haitjema, Henk M.
2002-01-01
The availability of powerful desktop computers and graphical user interfaces for ground water flow models makes possible the construction of ever more complex models. A proposed copper-zinc sulfide mine in northern Wisconsin offers a unique case in which the same hydrologic system has been modeled using a variety of techniques covering a wide range of sophistication and complexity. Early in the permitting process, simple numerical models were used to evaluate the necessary amount of water to be pumped from the mine, reductions in streamflow, and the drawdowns in the regional aquifer. More complex models have subsequently been used in an attempt to refine the predictions. Even after so much modeling effort, questions regarding the accuracy and reliability of the predictions remain. We have performed a new analysis of the proposed mine using the two-dimensional analytic element code GFLOW coupled with the nonlinear parameter estimation code UCODE. The new model is parsimonious, containing fewer than 10 parameters, and covers a region several times larger in areal extent than any of the previous models. The model demonstrates the suitability of analytic element codes for use with parameter estimation codes. The simplified model results are similar to the more complex models; predicted mine inflows and UCODE-derived 95% confidence intervals are consistent with the previous predictions. More important, the large areal extent of the model allowed us to examine hydrological features not included in the previous models, resulting in new insights about the effects that far-field boundary conditions can have on near-field model calibration and parameterization. In this case, the addition of surface water runoff into a lake in the headwaters of a stream while holding recharge constant moved a regional ground watershed divide and resulted in some of the added water being captured by the adjoining basin. Finally, a simple analytical solution was used to clarify the GFLOW model
Transfer function modeling of damping mechanisms in distributed parameter models
NASA Technical Reports Server (NTRS)
Slater, J. C.; Inman, D. J.
1994-01-01
This work formulates a method for the modeling of material damping characteristics in distributed parameter models which may be easily applied to models such as rod, plate, and beam equations. The general linear boundary value vibration equation is modified to incorporate hysteresis effects represented by complex stiffness using the transfer function approach proposed by Golla and Hughes. The governing characteristic equations are decoupled through separation of variables yielding solutions similar to those of undamped classical theory, allowing solution of the steady state as well as transient response. Example problems and solutions are provided demonstrating the similarity of the solutions to those of the classical theories and transient responses of nonviscous systems.
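The complex-stiffness representation referenced above can be written compactly. A generic sketch, not the paper's notation: in the frequency domain a hysteretically damped stiffness is K* = K(1 + iη) with loss factor η, and the Golla-Hughes-McTavish (GHM) approach replaces the constant stiffness with a Laplace-domain transfer function built from "mini-oscillator" terms, where the α_k, ζ̂_k, ω̂_k are fitted material parameters:

```latex
K^{*}(s) \;=\; K\left[\,1 \;+\; \sum_{k} \alpha_k\,
  \frac{s^{2} + 2\hat{\zeta}_k\hat{\omega}_k\, s}
       {s^{2} + 2\hat{\zeta}_k\hat{\omega}_k\, s + \hat{\omega}_k^{2}}\,\right]
```

Each mini-oscillator term vanishes at s = 0 (no static stiffness change) and contributes frequency-dependent dissipation, which is what allows the separated modal equations to retain the form of the undamped classical solutions.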
Comprehensive European dietary exposure model (CEDEM) for food additives.
Tennant, David R
2016-05-01
European methods for assessing dietary exposures to nutrients, additives and other substances in food are limited by the availability of detailed food consumption data for all member states. A proposed comprehensive European dietary exposure model (CEDEM) applies summary data published by the European Food Safety Authority (EFSA) in a deterministic model based on an algorithm from the EFSA intake method for food additives. The proposed approach can predict estimates of food additive exposure provided in previous EFSA scientific opinions that were based on the full European food consumption database.
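A deterministic additive-exposure calculation of this kind can be sketched as follows; the food categories, consumption figures and additive concentrations below are hypothetical illustrations, not values from CEDEM or the EFSA database:

```python
# Deterministic dietary-exposure sketch: exposure (mg/kg bw/day) is the sum
# over food categories of consumption (g/day) x additive concentration
# (mg/kg food), divided by body weight. All numbers here are invented.
def dietary_exposure(consumption_g_per_day, concentration_mg_per_kg, body_weight_kg):
    total_mg = sum(c * conc / 1000.0  # g food -> kg food
                   for c, conc in zip(consumption_g_per_day, concentration_mg_per_kg))
    return total_mg / body_weight_kg

# Example: two hypothetical food categories for a 60 kg adult
exposure = dietary_exposure([200.0, 50.0], [100.0, 500.0], 60.0)  # -> 0.75 mg/kg bw/day
```

Summing category-level products like this is what makes the model deterministic: it needs only summary consumption statistics per category rather than individual-level survey records.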
Estimation of Time-Varying Pilot Model Parameters
NASA Technical Reports Server (NTRS)
Zaal, Peter M. T.; Sweet, Barbara T.
2011-01-01
Human control behavior is rarely completely stationary over time due to fatigue or loss of attention. In addition, there are many control tasks for which human operators need to adapt their control strategy to vehicle dynamics that vary in time. In previous studies on the identification of time-varying pilot control behavior wavelets were used to estimate the time-varying frequency response functions. However, the estimation of time-varying pilot model parameters was not considered. Estimating these parameters can be a valuable tool for the quantification of different aspects of human time-varying manual control. This paper presents two methods for the estimation of time-varying pilot model parameters, a two-step method using wavelets and a windowed maximum likelihood estimation method. The methods are evaluated using simulations of a closed-loop control task with time-varying pilot equalization and vehicle dynamics. Simulations are performed with and without remnant. Both methods give accurate results when no pilot remnant is present. The wavelet transform is very sensitive to measurement noise, resulting in inaccurate parameter estimates when considerable pilot remnant is present. Maximum likelihood estimation is less sensitive to pilot remnant, but cannot detect fast changes in pilot control behavior.
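A minimal sketch of windowed estimation of a time-varying parameter, using plain least squares as a stand-in for the paper's windowed maximum likelihood method, on synthetic noise-free (remnant-free) data:

```python
import numpy as np

# A pilot-like gain k(t) drifts slowly; we recover it from input/output
# records by least squares over non-overlapping windows. Synthetic data only.
rng = np.random.default_rng(0)
n, win = 1000, 100
t = np.arange(n)
k_true = 1.0 + 0.001 * t            # slowly varying "pilot" parameter
u = rng.standard_normal(n)          # input signal
y = k_true * u                      # noise-free output (no remnant)

# Per-window least-squares gain estimate: k = <u, y> / <u, u>
k_hat = np.array([
    np.dot(u[i:i + win], y[i:i + win]) / np.dot(u[i:i + win], u[i:i + win])
    for i in range(0, n - win + 1, win)
])
```

The window length trades off tracking speed against noise sensitivity, which mirrors the trade-off the abstract describes: with remnant present, short windows (or wavelets) become noisy, while long windows miss fast changes in control behavior.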
Improvement of hydrological model calibration by selecting multiple parameter ranges
NASA Astrophysics Data System (ADS)
Wu, Qiaofeng; Liu, Shuguang; Cai, Yi; Li, Xinjian; Jiang, Yangming
2017-01-01
The parameters of hydrological models are usually calibrated to achieve good performance, owing to the highly non-linear nature of hydrological process modelling. However, calibration efficiency depends directly on the parameter ranges, and the selection of those ranges is affected by the probability distribution of parameter values, parameter sensitivity, and parameter correlation. A newly proposed method is employed to determine the optimal combination of multi-parameter ranges for improving the calibration of hydrological models. First, a probability distribution was specified for each model parameter based on genetic algorithm (GA) calibration. Then, several ranges were selected for each parameter according to the corresponding probability distribution, and the optimal range was determined by comparing the model results calibrated with the different selected ranges. Next, parameter correlation and sensitivity were evaluated by quantifying two indexes, RC(Y,X) and SE, which can be used, together with the negatively correlated parameters, to specify the optimal combination of ranges of all parameters for calibrating models. The investigation shows that the probability distribution of the calibrated values of any particular parameter in the Xinanjiang model approaches a normal or exponential distribution. The multi-parameter optimal range selection method is superior to the single-parameter one for calibrating hydrological models with multiple parameters. The combination of the optimal ranges of all parameters is not necessarily the optimum, inasmuch as some parameters have negative effects on others. The application of the proposed methodology gives rise to an increase of 0.01 in the minimum Nash-Sutcliffe efficiency (ENS) compared with that of the pure GA method. Raising the minimum ENS with little change in the maximum shrinks the range of possible solutions, which effectively reduces the uncertainty of the model performance.
Sample Size and Item Parameter Estimation Precision When Utilizing the One-Parameter "Rasch" Model
ERIC Educational Resources Information Center
Custer, Michael
2015-01-01
This study examines the relationship between sample size and item parameter estimation precision when utilizing the one-parameter model. Item parameter estimates are examined relative to "true" values by evaluating the decline in root mean squared deviation (RMSD) and the number of outliers as sample size increases. This occurs across…
Numerical model for thermal parameters in optical materials
NASA Astrophysics Data System (ADS)
Sato, Yoichi; Taira, Takunori
2016-04-01
Thermal parameters of optical materials, such as the thermal conductivity, thermal expansion, and temperature coefficient of refractive index, play a decisive role in the thermal design of laser cavities. Their numerical values, including temperature dependence, are therefore quite important for developing high-intensity laser oscillators in which optical materials generate excessive heat across the mode volumes of both the lasing output and the optical pumping. We have already proposed a novel model of thermal conductivity in various optical materials. Thermal conductivity is the product of the isovolumic specific heat and the thermal diffusivity, and independent modeling of these two quantities is required to clarify their physical meaning. Our numerical model for thermal conductivity requires one material parameter for the specific heat and two parameters for the thermal diffusivity of each optical material. In this work we report the thermal conductivities of various optical materials: Y3Al5O12 (YAG), YVO4 (YVO), GdVO4 (GVO), stoichiometric and congruent LiTaO3, synthetic quartz, YAG ceramics and Y2O3 ceramics. The dependence on Nd3+ doping of the laser gain media YAG, YVO and GVO is also studied; this dependence can be described by only three additional parameters. The temperature dependence of the thermal expansion and of the temperature coefficient of refractive index for YAG, YVO and GVO is also included for convenience. We believe our numerical model is useful not only for thermal analysis of laser cavities and optical waveguides but also for evaluating the physical properties of various transparent materials.
Parameter Estimates in Differential Equation Models for Chemical Kinetics
ERIC Educational Resources Information Center
Winkel, Brian
2011-01-01
We discuss the need for devoting time in differential equations courses to modelling and the completion of the modelling process with efforts to estimate the parameters in the models using data. We estimate the parameters present in several differential equation models of chemical reactions of order n, where n = 0, 1, 2, and apply more general…
Saki, Ali Asghar; Aliarabi, Hassan; Hosseini Siyar, Sayed Ali; Salari, Jalal; Hashemi, Mahdi
2014-01-01
The present study was conducted to evaluate the effects of dietary inclusion of 4, 8 and 12 g kg-1 of a phytogenic feed additive mixture on performance, egg quality, ovary parameters, serum biochemical parameters and yolk trimethylamine level in laying hens. The results show that egg weight was increased by supplementation with 12 g kg-1 of the feed additive, whereas egg production, feed intake and feed conversion ratio (FCR) were not significantly affected. There were no significant differences in egg quality parameters with supplementation of the phytogenic feed additive, whereas the yolk trimethylamine level decreased as the feed additive level increased. The sensory evaluation parameters did not differ significantly. No significant differences were found in serum cholesterol and triglyceride levels between the treatments, but low- and high-density lipoprotein were significantly increased. The number of small follicles and the ovary weight were significantly increased by supplementation with 12 g kg-1 of the feed additive. Overall, dietary supplementation with the polyherbal additive increased egg weight, improved ovary characteristics and decreased the yolk trimethylamine level. PMID:25610580
Order-parameter model for unstable multilane traffic flow
NASA Astrophysics Data System (ADS)
Lubashevsky, Ihor A.; Mahnke, Reinhard
2000-11-01
We discuss a phenomenological approach to the description of unstable vehicle motion on multilane highways that explains in a simple way the observed sequence of the ``free flow <--> synchronized mode <--> jam'' phase transitions as well as the hysteresis in these transitions. We introduce a variable called an order parameter that accounts for possible correlations in the vehicle motion at different lanes. So, it is principally due to the ``many-body'' effects in the car interaction in contrast to such variables as the mean car density and velocity being actually the zeroth and first moments of the ``one-particle'' distribution function. Therefore, we regard the order parameter as an additional independent state variable of traffic flow. We assume that these correlations are due to a small group of ``fast'' drivers and by taking into account the general properties of the driver behavior we formulate a governing equation for the order parameter. In this context we analyze the instability of homogeneous traffic flow that manifested itself in the above-mentioned phase transitions and gave rise to the hysteresis in both of them. Besides, the jam is characterized by the vehicle flows at different lanes which are independent of one another. We specify a certain simplified model in order to study the general features of the car cluster self-formation under the ``free flow <--> synchronized motion'' phase transition. In particular, we show that the main local parameters of the developed cluster are determined by the state characteristics of vehicle motion only.
Bayesian approach to decompression sickness model parameter estimation.
Howle, L E; Weber, P W; Nichols, J M
2017-03-01
We examine both maximum likelihood and Bayesian approaches for estimating probabilistic decompression sickness model parameters. Maximum likelihood estimation treats parameters as fixed values and determines the best estimate through repeated trials, whereas the Bayesian approach treats parameters as random variables and determines the parameter probability distributions. We would ultimately like to know the probability that a parameter lies in a certain range rather than simply make statements about the repeatability of our estimator. Although both represent powerful methods of inference, for models with complex or multi-peaked likelihoods, maximum likelihood parameter estimates can prove more difficult to interpret than the estimates of the parameter distributions provided by the Bayesian approach. For models of decompression sickness, we show that while these two estimation methods are complementary, the credible intervals generated by the Bayesian approach are more naturally suited to quantifying uncertainty in the model parameters.
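The contrast between the two estimators can be illustrated on a toy one-parameter Bernoulli risk model, a hypothetical stand-in for a probabilistic decompression sickness model; the counts below are invented:

```python
import numpy as np

# Toy contrast: estimate a risk probability p from k "DCS" cases in n exposures.
k, n = 7, 100

# Maximum likelihood: p is a fixed value, and we get a single point estimate.
p_mle = k / n

# Bayesian: p is a random variable. Grid posterior under a flat prior,
# then a 95% credible interval read off the posterior CDF.
p = np.linspace(1e-6, 1 - 1e-6, 10_000)
log_post = k * np.log(p) + (n - k) * np.log(1 - p)   # Bernoulli log-likelihood
post = np.exp(log_post - log_post.max())
post /= post.sum()
cdf = np.cumsum(post)
lo, hi = p[np.searchsorted(cdf, 0.025)], p[np.searchsorted(cdf, 0.975)]
```

The interval [lo, hi] directly answers "with what probability does p lie in this range", which is the kind of statement the abstract argues the Bayesian approach provides more naturally than repeated-trial statements about an estimator.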
Effect of argon addition on plasma parameters and dust charging in hydrogen plasma
Kakati, B.; Kausik, S. S.; Saikia, B. K.; Bandyopadhyay, M.; Saxena, Y. C.
2014-10-28
Experimental results on the effect of adding argon gas to hydrogen plasma in a multi-cusp dusty plasma device are reported. The addition of argon modifies the plasma density, electron temperature, degree of hydrogen dissociation, dust current and dust charge. From the dust charging profile, it is observed that the dust current and dust charge decrease significantly up to a 40% argon fraction of the flow rate in hydrogen plasma, but beyond 40% the changes in dust current and dust charge are insignificant. The results show that the addition of argon to hydrogen plasma in a dusty plasma device can be used as a tool to control dust charging in a low-pressure dusty plasma.
Modeling Errors in Daily Precipitation Measurements: Additive or Multiplicative?
NASA Technical Reports Server (NTRS)
Tian, Yudong; Huffman, George J.; Adler, Robert F.; Tang, Ling; Sapiano, Matthew; Maggioni, Viviana; Wu, Huan
2013-01-01
The definition and quantification of uncertainty depend on the error model used. For uncertainties in precipitation measurements, two types of error models have been widely adopted: the additive error model and the multiplicative error model. This leads to incompatible specifications of uncertainties and impedes intercomparison and application. In this letter, we assess the suitability of both models for satellite-based daily precipitation measurements in an effort to clarify the uncertainty representation. Three criteria were employed to evaluate the applicability of either model: (1) better separation of the systematic and random errors; (2) applicability to the large range of variability in daily precipitation; and (3) better predictive skill. It is found that the multiplicative error model is a much better choice under all three criteria. It extracted the systematic errors more cleanly, was more consistent with the large variability of precipitation measurements, and produced superior predictions of the error characteristics. The additive error model had several weaknesses, such as nonconstant variance resulting from systematic errors leaking into random errors, and a lack of predictive capability. Therefore, the multiplicative error model is the better choice.
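The distinction can be demonstrated on synthetic data: when errors are truly multiplicative, additive-model residuals grow with the rain amount while multiplicative-model (log-space) residuals do not. A sketch with invented gamma-distributed "truth", not the satellite products analyzed in the letter:

```python
import numpy as np

# Additive model:        obs = truth + e           -> residual obs - truth
# Multiplicative model:  obs = truth * exp(e)      -> residual log(obs / truth)
rng = np.random.default_rng(1)
truth = rng.gamma(shape=0.5, scale=10.0, size=5000) + 0.1   # mm/day, > 0
eps = rng.normal(0.0, 0.3, size=truth.size)
obs = truth * np.exp(eps)            # synthetic measurements, multiplicative error

add_resid = obs - truth              # additive-model residual
mul_resid = np.log(obs / truth)      # multiplicative-model residual

# Compare residual spread for heavy vs light rain days:
heavy = truth > np.median(truth)
add_ratio = add_resid[heavy].std() / add_resid[~heavy].std()   # grows with rain
mul_ratio = mul_resid[heavy].std() / mul_resid[~heavy].std()   # roughly constant
```

The non-constant variance of `add_resid` is exactly the weakness the letter attributes to the additive model, while the stable `mul_resid` spread reflects why the multiplicative model suits the large dynamic range of daily precipitation.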
NASA Astrophysics Data System (ADS)
Riedl, M.; Suhrbier, A.; Malberg, H.; Penzel, T.; Bretthauer, G.; Kurths, J.; Wessel, N.
2008-07-01
The parameters of heart rate variability and blood pressure variability have proved to be useful analytical tools in cardiovascular physics and medicine. Model-based analysis of these variabilities additionally leads to new prognostic information about mechanisms behind regulations in the cardiovascular system. In this paper, we analyze the complex interaction between heart rate, systolic blood pressure, and respiration by nonparametric fitted nonlinear additive autoregressive models with external inputs. Therefore, we consider measurements of healthy persons and patients suffering from obstructive sleep apnea syndrome (OSAS), with and without hypertension. It is shown that the proposed nonlinear models are capable of describing short-term fluctuations in heart rate as well as systolic blood pressure significantly better than similar linear ones, which confirms the assumption of nonlinear controlled heart rate and blood pressure. Furthermore, the comparison of the nonlinear and linear approaches reveals that the heart rate and blood pressure variability in healthy subjects is caused by a higher level of noise as well as nonlinearity than in patients suffering from OSAS. The residue analysis points at a further source of heart rate and blood pressure variability in healthy subjects, in addition to heart rate, systolic blood pressure, and respiration. Comparison of the nonlinear models within and among the different groups of subjects suggests the ability to discriminate the cohorts that could lead to a stratification of hypertension risk in OSAS patients.
Constraint of fault parameters inferred from nonplanar fault modeling
NASA Astrophysics Data System (ADS)
Aochi, Hideo; Madariaga, Raul; Fukuyama, Eiichi
2003-02-01
We study the distribution of initial stress and frictional parameters for the 28 June 1992 Landers, California, earthquake through dynamic rupture simulation along a nonplanar fault system. We find that the observational evidence of large slip near the ground surface requires large nonzero cohesive forces in the depth-dependent friction law; this is the only way that stress can accumulate and be released at shallow depths. We then study the variation of frictional parameters along the strike of the fault. For this purpose we mapped into our segmented fault model the initial stress heterogeneity inverted by Peyrat et al. [2001] using a planar fault model. Simulations with this initial stress field improved the overall fit of the rupture process to that inferred from kinematic inversions, and also improved the fit to the ground motion observed in Southern California. In order to obtain this fit, we had to introduce additional variations of the frictional parameters along the fault, the most important being a weak Kickapoo fault and a strong Johnson Valley fault.
[Temperature dependence of parameters of plant photosynthesis models: a review].
Borjigidai, Almaz; Yu, Gui-Rui
2013-12-01
This paper reviews progress on temperature response models of plant photosynthesis. Mechanisms behind changes in the photosynthesis-temperature curve are discussed in terms of four parameters: the intercellular CO2 concentration, the activation energy of the maximum rate of RuBP (ribulose-1,5-bisphosphate) carboxylation (Vcmax), the activation energy of the rate of RuBP regeneration (Jmax), and the ratio of Jmax to Vcmax. All species increased the activation energy of Vcmax with increasing growth temperature, while the other parameters changed in ways that differed among species, suggesting that the activation energy of Vcmax may be the most important parameter for the temperature response of plant photosynthesis. In addition, open research problems and prospects are outlined. It is necessary to combine photosynthesis models at the foliage and community levels, and to investigate the mechanisms of plant response to global change in terms of leaf area, solar radiation, canopy structure, canopy microclimate and photosynthetic capacity. This would benefit the understanding and quantitative assessment of plant growth, the carbon balance of communities and the primary productivity of ecosystems.
An Additional Symmetry in the Weinberg-Salam Model
Bakker, B.L.G.; Veselov, A.I.; Zubkov, M.A.
2005-06-01
An additional Z6 symmetry hidden in the fermion and Higgs sectors of the Standard Model was found recently. It has a singular nature and is connected to the centers of the SU(3) and SU(2) subgroups of the gauge group. A lattice regularization of the Standard Model that possesses this symmetry was constructed. In this paper, we report our results on the numerical simulation of its electroweak sector.
Modeling uranium transport in acidic contaminated groundwater with base addition.
Zhang, Fan; Luo, Wensui; Parker, Jack C; Brooks, Scott C; Watson, David B; Jardine, Philip M; Gu, Baohua
2011-06-15
This study investigates reactive transport modeling in a column of uranium(VI)-contaminated sediments with base additions in the circulating influent. The groundwater and sediment exhibit oxic conditions with low pH, high concentrations of NO(3)(-), SO(4)(2-), U and various metal cations. Preliminary batch experiments indicate that additions of strong base induce rapid immobilization of U for this material. In the column experiment that is the focus of the present study, effluent groundwater was titrated with NaOH solution in an inflow reservoir before reinjection to gradually increase the solution pH in the column. An equilibrium hydrolysis, precipitation and ion exchange reaction model developed through simulation of the preliminary batch titration experiments predicted faster reduction of aqueous Al than observed in the column experiment. The model was therefore modified to consider reaction kinetics for the precipitation and dissolution processes which are the major mechanism for Al immobilization. The combined kinetic and equilibrium reaction model adequately described variations in pH, aqueous concentrations of metal cations (Al, Ca, Mg, Sr, Mn, Ni, Co), sulfate and U(VI). The experimental and modeling results indicate that U(VI) can be effectively sequestered with controlled base addition due to sorption by slowly precipitated Al with pH-dependent surface charge. The model may prove useful to predict field-scale U(VI) sequestration and remediation effectiveness.
Non-additive model for specific heat of electrons
NASA Astrophysics Data System (ADS)
Anselmo, D. H. A. L.; Vasconcelos, M. S.; Silva, R.; Mello, V. D.
2016-10-01
Using the non-additive Tsallis entropy, we demonstrate numerically that one-dimensional quasicrystals, whose energy spectra are multifractal Cantor sets, are characterized by an entropic parameter, and we calculate the electronic specific heat considering a non-additive entropy Sq. In our method we consider an energy spectrum calculated using the one-dimensional tight-binding Schrödinger equation, with the bands (or levels) scaled onto the [0, 1] interval. The Tsallis formalism is applied to the energy spectra of Fibonacci and double-period one-dimensional quasiperiodic lattices. We analytically obtain an expression for the specific heat that we consider more appropriate for calculating this quantity in these quasiperiodic structures.
Effects of additional food in a delayed predator-prey model.
Sahoo, Banshidhar; Poria, Swarup
2015-03-01
We examine the effects of supplying additional food to the predator in a gestation-delay-induced predator-prey system with habitat complexity. Additional food works in favor of predator growth in our model, and its presence reduces the predatory attack rate on prey; by supplying additional food, the predator population can be controlled. Taking the time delay as the bifurcation parameter, the stability of the coexisting equilibrium point is analyzed. Hopf bifurcation analysis is carried out with respect to the time delay in the presence of additional food. The direction of the Hopf bifurcations and the stability of the bifurcated periodic solutions are determined by applying normal form theory and the center manifold theorem. The qualitative dynamical behavior of the model is simulated using experimental parameter values. It is observed that fluctuations of the population size can be controlled either by supplying additional food suitably or by increasing the degree of habitat complexity. Hopf bifurcation occurs in the system when the delay crosses a critical value, and this critical value depends strongly on the quality and quantity of the supplied additional food. Therefore, the variation of the predator population significantly affects the dynamics of the model. Model results are compared with experimental results, and the biological implications of the analytical findings are discussed in the conclusion.
Estimation of Parameters in Latent Class Models with Constraints on the Parameters.
1986-06-01
Parameter redundancy in discrete state‐space and integrated models
McCrea, Rachel S.
2016-01-01
Discrete state‐space models are used in ecology to describe the dynamics of wild animal populations, with parameters, such as the probability of survival, being of ecological interest. For a particular parametrization of a model it is not always clear which parameters can be estimated. This inability to estimate all parameters is known as parameter redundancy or a model is described as nonidentifiable. In this paper we develop methods that can be used to detect parameter redundancy in discrete state‐space models. An exhaustive summary is a combination of parameters that fully specify a model. To use general methods for detecting parameter redundancy a suitable exhaustive summary is required. This paper proposes two methods for the derivation of an exhaustive summary for discrete state‐space models using discrete analogues of methods for continuous state‐space models. We also demonstrate that combining multiple data sets, through the use of an integrated population model, may result in a model in which all parameters are estimable, even though models fitted to the separate data sets may be parameter redundant. PMID:27362826
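The rank-of-Jacobian idea behind such redundancy checks can be sketched numerically; the two toy exhaustive summaries below (survival phi and detection p combined multiplicatively) are illustrative examples, not the models from the paper:

```python
import numpy as np

# A model is parameter redundant when the Jacobian of an exhaustive summary
# with respect to the parameters has rank below the number of parameters.
def jacobian_rank(kappa, theta0, h=1e-6):
    theta0 = np.asarray(theta0, dtype=float)
    k0 = np.asarray(kappa(theta0), dtype=float)
    J = np.empty((k0.size, theta0.size))
    for j in range(theta0.size):
        tp = theta0.copy()
        tp[j] += h
        J[:, j] = (np.asarray(kappa(tp), dtype=float) - k0) / h  # forward difference
    return np.linalg.matrix_rank(J, tol=1e-4)

# phi and p enter only as a product: redundant (rank 1 for 2 parameters).
kappa_red = lambda th: [th[0] * th[1], (th[0] * th[1]) ** 2]
# phi also appears alone: both parameters estimable (full rank 2).
kappa_ok = lambda th: [th[0] * th[1], th[0]]

r_red = jacobian_rank(kappa_red, [0.8, 0.3])   # 1 -> parameter redundant
r_ok = jacobian_rank(kappa_ok, [0.8, 0.3])     # 2 -> all parameters estimable
```

This is the numerical analogue of the symbolic derivative-matrix method; `kappa_ok` also mirrors the paper's point about integrated models, where adding data that constrains a parameter separately restores full rank.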
Mullier, François; Lainey, Elodie; Fenneteau, Odile; Da Costa, Lydie; Schillinger, Françoise; Bailly, Nicolas; Cornet, Yvan; Chatelain, Christian; Dogne, Jean-Michel; Chatelain, Bernard
2011-07-01
Hereditary spherocytosis (HS) is characterised by weakened vertical linkages between the membrane skeleton and the red blood cell's lipid bilayer, leading to the release of microparticles. All the reference tests suffer from specific limitations. The aim of this study was to develop an easy-to-use diagnostic tool for screening for hereditary spherocytosis based on routinely acquired haematological parameters such as the percentage of microcytes, the percentage of hypochromic cells, the reticulocyte count, and the percentage of immature reticulocytes. The levels of haemoglobin, mean cell volume, mean corpuscular haemoglobin concentration, reticulocytes (Ret), immature reticulocyte fraction (IRF), hypochromic erythrocytes (Hypo-He) and microcytic erythrocytes (MicroR) were determined on EDTA samples on Sysmex instruments from a cohort of 45 confirmed HS cases. The HS group was then compared with haemolytic disorders, microcytic anaemia, healthy individuals and routine samples (n = 1,488). HS is characterised by a high Ret count without an equally elevated IRF. All 45 HS cases had Ret > 80,000/μl and Ret (10(9)/L)/IRF (%) greater than 7.7 (rule 1). Trait and mild HS had a Ret/IRF ratio greater than 19. Moderate and severe HS had increased MicroR and MicroR/Hypo-He (rule 2). The combination of both rules gave a positive predictive value and a negative predictive value of 75% and 100%, respectively (n = 1,488), which is much greater than for single parameters or existing rules. This simple and fast diagnostic method could be used as an excellent screening tool for HS. It is also valid for mild HS, neonates and ABO incompatibilities, and overcomes the lack of sensitivity of electrophoresis in ankyrin deficiencies.
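Rule 1 can be encoded directly from the thresholds quoted in the abstract; the patient record below is hypothetical, and the MicroR/Hypo-He cut-off for rule 2 is illustrative only (the abstract gives no numeric threshold for it):

```python
# Screening sketch from routine haematology parameters.
def hs_rules(ret_per_ul, irf_pct, micro_r_pct, hypo_he_pct):
    ret_1e9_per_l = ret_per_ul / 1000.0               # /µl -> 10^9/L
    # Rule 1 (thresholds from the abstract): high Ret, high Ret/IRF ratio.
    rule1 = ret_per_ul > 80_000 and ret_1e9_per_l / irf_pct > 7.7
    # Rule 2 (illustrative cut-off): microcytes elevated relative to
    # hypochromic cells, as seen in moderate/severe HS.
    rule2 = micro_r_pct / max(hypo_he_pct, 1e-9) > 1.0
    return rule1, rule2

# Hypothetical sample: Ret = 250,000/µl, IRF = 10%, MicroR = 4%, Hypo-He = 1%
r1, r2 = hs_rules(ret_per_ul=250_000, irf_pct=10.0, micro_r_pct=4.0, hypo_he_pct=1.0)
```

Encoding the rules this way makes the screening reproducible on any analyzer export that reports these four parameters.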
Tweaking Model Parameters: Manual Adjustment and Self Calibration
NASA Astrophysics Data System (ADS)
Schulz, B.; Tuffs, R. J.; Laureijs, R. J.; Lu, N.; Peschke, S. B.; Gabriel, C.; Khan, I.
2002-12-01
The reduction of P32 data is not always straight forward and the application of the transient model needs tight control by the user. This paper describes how to access the model parameters within the P32Tools software and how to work with the "Inspect signals per pixel" panel, in order to explore the parameter space and improve the model fit.
Estimation Methods for One-Parameter Testlet Models
ERIC Educational Resources Information Center
Jiao, Hong; Wang, Shudong; He, Wei
2013-01-01
This study demonstrated the equivalence between the Rasch testlet model and the three-level one-parameter testlet model and explored the Markov Chain Monte Carlo (MCMC) method for model parameter estimation in WINBUGS. The estimation accuracy from the MCMC method was compared with those from the marginalized maximum likelihood estimation (MMLE)…
Equating Parameter Estimates from the Generalized Graded Unfolding Model.
ERIC Educational Resources Information Center
Roberts, James S.
Three common methods for equating parameter estimates from binary item response theory models are extended to the generalized graded unfolding model (GGUM). The GGUM is an item response model in which single-peaked, nonmonotonic expected value functions are implemented for polytomous responses. GGUM parameter estimates are equated using extended…
Validation of transport models using additive flux minimization technique
NASA Astrophysics Data System (ADS)
Pankin, A. Y.; Kruger, S. E.; Groebner, R. J.; Hakim, A.; Kritz, A. H.; Rafiq, T.
2013-10-01
A new additive flux minimization technique is proposed for carrying out the verification and validation (V&V) of anomalous transport models. In this approach, the plasma profiles are computed in time dependent predictive simulations in which an additional effective diffusivity is varied. The goal is to obtain an optimal match between the computed and experimental profile. This new technique has several advantages over traditional V&V methods for transport models in tokamaks and takes advantage of uncertainty quantification methods developed by the applied math community. As a demonstration of its efficiency, the technique is applied to the hypothesis that the paleoclassical density transport dominates in the plasma edge region in DIII-D tokamak discharges. A simplified version of the paleoclassical model that utilizes the Spitzer resistivity for the parallel neoclassical resistivity and neglects the trapped particle effects is tested in this paper. It is shown that a contribution to density transport, in addition to the paleoclassical density transport, is needed in order to describe the experimental profiles. It is found that more additional diffusivity is needed at the top of the H-mode pedestal, and almost no additional diffusivity is needed at the pedestal bottom. The implementation of this V&V technique uses the FACETS::Core transport solver and the DAKOTA toolkit for design optimization and uncertainty quantification. The FACETS::Core solver is used for advancing the plasma density profiles. The DAKOTA toolkit is used for the optimization of plasma profiles and the computation of the additional diffusivity that is required for the predicted density profile to match the experimental profile.
The lattice parameters of alpha iron with binary additions of all the transition metals, except technetium, have been accurately determined on solid...samples. No direct correlation with solute size is observed, but an effect of electron configuration is noted. The solubility limits of alpha iron with
Parameter estimation of hydrologic models using data assimilation
NASA Astrophysics Data System (ADS)
Kaheil, Y. H.
2005-12-01
The uncertainties associated with the modeling of hydrologic systems sometimes demand that data be incorporated in an on-line fashion in order to understand the behavior of the system. This paper presents a Bayesian strategy to estimate parameters for hydrologic models in an iterative mode. It describes a modified technique called localized Bayesian recursive estimation (LoBaRE) that efficiently identifies the optimum parameter region, avoiding convergence to a single best parameter set. The LoBaRE methodology is tested for parameter estimation for two different types of models: a support vector machine (SVM) model for predicting soil moisture, and the Sacramento Soil Moisture Accounting (SAC-SMA) model for estimating streamflow. The SAC-SMA model has 13 parameters that must be determined; the SVM model has three. Bayesian inference is used to estimate the best parameter set in an iterative fashion by narrowing the sampling space, imposing uncertainty bounds on the posterior best parameter set and/or updating the "parent" bounds based on their fitness. The new approach results in fast convergence towards the optimal parameter set using minimal training/calibration data and evaluation of fewer parameter sets. The efficacy of the localized methodology is also compared with that of the previously used Bayesian recursive estimation (BaRE) algorithm.
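The narrowing of parameter bounds by fitness can be sketched for a toy one-parameter model; this is a generic re-implementation in the spirit of LoBaRE, not the authors' code:

```python
import numpy as np

# Iteratively shrink parameter bounds toward the best-fitting region:
# sample within bounds, score each sample, keep the fittest subset,
# and set the new bounds to that subset's range. Toy model: y = a * x.
rng = np.random.default_rng(2)
x = np.linspace(0.0, 1.0, 50)
y_obs = 2.5 * x                     # synthetic "observations", true a = 2.5

lo, hi = 0.0, 10.0                  # initial (wide) parameter bounds
for _ in range(5):
    a = rng.uniform(lo, hi, size=200)                   # sample within bounds
    sse = ((a[:, None] * x - y_obs) ** 2).sum(axis=1)   # fitness of each sample
    best = a[np.argsort(sse)[:20]]                      # retain fittest 10%
    lo, hi = best.min(), best.max()                     # narrow the bounds
```

Keeping a whole retained region, rather than a single best sample, is what distinguishes this bound-narrowing style of search from collapsing onto one parameter set too early.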
Isolating parameter sensitivity in reach scale transient storage modeling
NASA Astrophysics Data System (ADS)
Schmadel, Noah M.; Neilson, Bethany T.; Heavilin, Justin E.; Wörman, Anders
2016-03-01
Parameter sensitivity analyses, although necessary to assess identifiability, may not lead to an increased understanding or accurate representation of transient storage processes when associated parameter sensitivities are muted. Reducing the number of uncertain calibration parameters through field-based measurements may allow for more realistic representations and improved predictive capabilities of reach scale stream solute transport. Using a two-zone transient storage model, we examined the spatial detail necessary to set parameters describing hydraulic characteristics and isolate the sensitivity of the parameters associated with transient storage processes. We represented uncertain parameter distributions as triangular fuzzy numbers and used closed form statistical moment solutions to express parameter sensitivity, thus avoiding copious model simulations. These solutions also allowed for the direct incorporation of different levels of spatial information regarding hydraulic characteristics. To establish a baseline for comparison, we performed a sensitivity analysis considering all model parameters as uncertain. Next, we set hydraulic parameters as the reach averages, leaving the transient storage parameters as uncertain, and repeated the analysis. Lastly, we incorporated high-resolution hydraulic information assessed from aerial imagery to examine whether more spatial detail was necessary to isolate the sensitivity of transient storage parameters. We found that a reach-average hydraulic representation, as opposed to using detailed spatial information, was sufficient to highlight transient storage parameter sensitivity and provide more information regarding the potential identifiability of these parameters.
The Effect of Nondeterministic Parameters on Shock-Associated Noise Prediction Modeling
NASA Technical Reports Server (NTRS)
Dahl, Milo D.; Khavaran, Abbas
2010-01-01
Engineering applications for aircraft noise prediction contain models for physical phenomenon that enable solutions to be computed quickly. These models contain parameters that have an uncertainty not accounted for in the solution. To include uncertainty in the solution, nondeterministic computational methods are applied. Using prediction models for supersonic jet broadband shock-associated noise, fixed model parameters are replaced by probability distributions to illustrate one of these methods. The results show the impact of using nondeterministic parameters both on estimating the model output uncertainty and on the model spectral level prediction. In addition, a global sensitivity analysis is used to determine the influence of the model parameters on the output, and to identify the parameters with the least influence on model output.
Accuracy of Parameter Estimation in Gibbs Sampling under the Two-Parameter Logistic Model.
ERIC Educational Resources Information Center
Kim, Seock-Ho; Cohen, Allan S.
The accuracy of Gibbs sampling, a Markov chain Monte Carlo procedure, was considered for estimation of item and ability parameters under the two-parameter logistic model. Memory test data were analyzed to illustrate the Gibbs sampling procedure. Simulated data sets were analyzed using Gibbs sampling and the marginal Bayesian method. The marginal…
Karr, Jonathan R.; Williams, Alex H.; Zucker, Jeremy D.; Raue, Andreas; Steiert, Bernhard; Timmer, Jens; Kreutz, Clemens; Wilkinson, Simon; Allgood, Brandon A.; Bot, Brian M.; Hoff, Bruce R.; Kellen, Michael R.; Covert, Markus W.; Stolovitzky, Gustavo A.; Meyer, Pablo
2015-01-01
Whole-cell models that explicitly represent all cellular components at the molecular level have the potential to predict phenotype from genotype. However, even for simple bacteria, whole-cell models will contain thousands of parameters, many of which are poorly characterized or unknown. New algorithms are needed to estimate these parameters and enable researchers to build increasingly comprehensive models. We organized the Dialogue for Reverse Engineering Assessments and Methods (DREAM) 8 Whole-Cell Parameter Estimation Challenge to develop new parameter estimation algorithms for whole-cell models. We asked participants to identify a subset of parameters of a whole-cell model given the model’s structure and in silico “experimental” data. Here we describe the challenge, the best performing methods, and new insights into the identifiability of whole-cell models. We also describe several valuable lessons we learned toward improving future challenges. Going forward, we believe that collaborative efforts supported by inexpensive cloud computing have the potential to solve whole-cell model parameter estimation. PMID:26020786
Dynamics in the Parameter Space of a Neuron Model
NASA Astrophysics Data System (ADS)
Paulo, C. Rech
2012-06-01
Some two-dimensional parameter-space diagrams are numerically obtained by considering the largest Lyapunov exponent for a four-dimensional thirteen-parameter Hindmarsh-Rose neuron model. Several different parameter planes are considered, and it is shown that depending on the combination of parameters, a typical scenario can be preserved: for some choice of two parameters, the parameter plane presents a comb-shaped chaotic region embedded in a large periodic region. It is also shown that there exist regions close to these comb-shaped chaotic regions, separated by the comb teeth, organizing themselves in period-adding bifurcation cascades.
Statistical Parameters for Describing Model Accuracy
1989-03-20
mean and the standard deviation, approximately characterizes the accuracy of the model, since the width of the confidence interval whose center is at...Using a modified version of Chebyshev’s inequality, a similar result is obtained for the upper bound of the confidence interval width for any
A simulation of water pollution model parameter estimation
NASA Technical Reports Server (NTRS)
Kibler, J. F.
1976-01-01
A parameter estimation procedure for a water pollution transport model is elaborated. A two-dimensional instantaneous-release shear-diffusion model serves as a representative of a simple transport process. Pollution concentration levels are arrived at via modeling of a remote-sensing system. The remotely sensed data are simulated by adding Gaussian noise to the concentration level values generated via the transport model. Model parameters are estimated from the simulated data using a least-squares batch processor. Resolution, sensor array size, and number and location of sensor readings can be found from the accuracies of the parameter estimates.
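The simulate-then-estimate loop described here can be sketched with a 1-D instantaneous-release diffusion solution standing in for the paper's 2-D shear-diffusion model. The release mass is assumed known, and the noise level and grid search are illustrative, not the paper's batch processor:

```python
import math
import random

random.seed(0)

def conc(x, t, M, D):
    """1-D instantaneous-release diffusion solution (a stand-in for the
    2-D shear-diffusion model in the abstract)."""
    return M / math.sqrt(4 * math.pi * D * t) * math.exp(-x * x / (4 * D * t))

M_true, D_true = 100.0, 2.0
# Simulated remote-sensing readings: model output plus Gaussian noise
readings = [(x, t, conc(x, t, M_true, D_true) + random.gauss(0, 0.1))
            for t in (1.0, 2.0, 4.0) for x in (-4.0, -2.0, 0.0, 2.0, 4.0)]

def sse(D):
    """Sum of squared residuals over the whole batch of readings."""
    return sum((conc(x, t, M_true, D) - c) ** 2 for x, t, c in readings)

# Batch least squares over a parameter grid (a crude stand-in for the
# least-squares batch processor mentioned in the abstract)
D_hat = min((0.1 * k for k in range(5, 60)), key=sse)
```

Repeating the experiment for different sensor layouts and noise levels is exactly how the accuracy of the parameter estimate can be traded against resolution and sensor array size.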
A Logical Difficulty of the Parameter Setting Model.
ERIC Educational Resources Information Center
Sasaki, Yoshinori
1990-01-01
Seeks to prove that the parameter setting model (PSM) of Chomsky's Universal Grammar theory contains an internal contradiction when it is seriously taken to model the internal state of language learners. (six references) (JL)
Determining extreme parameter correlation in ground water models.
Hill, M.C.; Osterby, O.
2003-01-01
In ground water flow system models with hydraulic-head observations but without significant imposed or observed flows, extreme parameter correlation generally exists. As a result, hydraulic conductivity and recharge parameters cannot be uniquely estimated. In complicated problems, such correlation can go undetected even by experienced modelers. Extreme parameter correlation can be detected using parameter correlation coefficients, but their utility depends on the presence of sufficient, but not excessive, numerical imprecision of the sensitivities, such as round-off error. This work investigates the information that can be obtained from parameter correlation coefficients in the presence of different levels of numerical imprecision, and compares it to the information provided by an alternative method called the singular value decomposition (SVD). Results suggest that (1) calculated correlation coefficients with absolute values that round to 1.00 were good indicators of extreme parameter correlation, but smaller values were not necessarily good indicators of lack of correlation and resulting unique parameter estimates; (2) the SVD may be more difficult to interpret than parameter correlation coefficients, but it could work with sensitivities that were one to two significant digits less accurate than those required when using parameter correlation coefficients; and (3) both the SVD and parameter correlation coefficients identified extremely correlated parameters better when the parameters were more equally sensitive. When the statistical measures fail, parameter correlation can be identified only by the tedious process of executing regression using different sets of starting values, or, in some circumstances, through graphs of the objective function.
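The correlation-coefficient diagnostic can be illustrated with a toy Jacobian in which the conductivity and recharge sensitivities are (nearly) proportional, as in the head-only case described above. All values are hypothetical, and small random perturbations mimic numerical imprecision in finite-difference sensitivities:

```python
import math
import random

random.seed(3)

# Head sensitivities for a toy aquifer where h = (R / K) * g(x): the two
# Jacobian columns are proportional, so K and R are extremely correlated.
K, R = 10.0, 0.5
g = [1.0, 2.5, 4.0, 5.5, 7.0]                      # spatial factors g(x_i)
dh_dR = [gi / K + random.gauss(0, 1e-6) for gi in g]
dh_dK = [-R * gi / K**2 + random.gauss(0, 1e-6) for gi in g]

# Normal matrix J^T J and its inverse -> parameter covariance (up to sigma^2)
a = sum(s * s for s in dh_dR)
b = sum(s * t for s, t in zip(dh_dR, dh_dK))
c = sum(s * s for s in dh_dK)
det = a * c - b * b
cov = [[c / det, -b / det], [-b / det, a / det]]
corr = cov[0][1] / math.sqrt(cov[0][0] * cov[1][1])
```

With head-only data the computed `corr` has absolute value that rounds to 1.00, the paper's indicator of extreme correlation; adding a flow observation would break the proportionality of the columns and pull the coefficient away from 1.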
Exploring the interdependencies between parameters in a material model.
Silling, Stewart Andrew; Fermen-Coker, Muge
2014-01-01
A method is investigated to reduce the number of numerical parameters in a material model for a solid. The basis of the method is to detect interdependencies between parameters within a class of materials of interest. The method is demonstrated for a set of material property data for iron and steel using the Johnson-Cook plasticity model.
On Interpreting the Parameters for Any Item Response Model
ERIC Educational Resources Information Center
Thissen, David
2009-01-01
Maris and Bechger's article is an exercise in technical virtuosity and provides much to be learned by students of psychometrics. In this commentary, the author begins with making two observations. The first is that the title, "On Interpreting the Model Parameters for the Three Parameter Logistic Model," belies the generality of parts of Maris and…
Solar Parameters for Modeling the Interplanetary Background
NASA Astrophysics Data System (ADS)
Bzowski, Maciej; Sokół, Justyna M.; Tokumaru, Munetoshi; Fujiki, Kenichi; Quémerais, Eric; Lallement, Rosine; Ferron, Stéphane; Bochsler, Peter; McComas, David J.
The goal of the working group on cross-calibration of past and present ultraviolet (UV) datasets of the International Space Science Institute (ISSI) in Bern, Switzerland, was to establish a photometric cross-calibration of various UV and extreme ultraviolet (EUV) heliospheric observations. Realization of this goal required a credible and up-to-date model of the spatial distribution of neutral interstellar hydrogen in the heliosphere, and to that end, a credible model of the radiation pressure and ionization processes was needed. This chapter describes the latter part of the project: the solar factors responsible for shaping the distribution of neutral interstellar H in the heliosphere. In this paper we present the solar Lyman-α flux and discuss the solar Lyman-α resonant radiation pressure force acting on neutral H atoms in the heliosphere. We also discuss solar EUV radiation and the resulting photoionization of heliospheric hydrogen, along with their evolution in time and the still hypothetical variation with heliolatitude. Furthermore, the solar wind and its evolution with solar activity are presented, mostly in the context of charge exchange ionization of heliospheric neutral hydrogen, and dynamic pressure variations. Electron-impact ionization of neutral heliospheric hydrogen and its variation with time, heliolatitude, and solar distance is also discussed. After a review of the state of the art in all of those topics, we proceed to present an interim model of the solar wind and the other solar factors based on up-to-date in situ and remote sensing observations. This model was used by Izmodenov et al. (2013, this volume) to calculate the distribution of heliospheric hydrogen, which in turn was the basis for intercalibrating the heliospheric UV and EUV measurements discussed in Quémerais et al. (2013, this volume). Results of this joint effort will also be used to improve the model of the solar wind evolution, which will be an invaluable asset in interpretation of
Product versus additive threshold models for analysis of reproduction outcomes in animal genetics.
David, I; Bodin, L; Gianola, D; Legarra, A; Manfredi, E; Robert-Granié, C
2009-08-01
The phenotypic observation of some reproduction traits (e.g., insemination success, interval from lambing to insemination) is the result of environmental and genetic factors acting on 2 individuals: the male and female involved in a mating couple. In animal genetics, the main approach (called additive model) proposed for studying such traits assumes that the phenotype is linked to a purely additive combination, either on the observed scale for continuous traits or on some underlying scale for discrete traits, of environmental and genetic effects affecting the 2 individuals. Statistical models proposed for studying human fecundability generally consider reproduction outcomes as the product of hypothetical unobservable variables. Taking inspiration from these works, we propose a model (product threshold model) for studying a binary reproduction trait that supposes that the observed phenotype is the product of 2 unobserved phenotypes, 1 for each individual. We developed a Gibbs sampling algorithm for fitting a Bayesian product threshold model including additive genetic effects and showed by simulation that it is feasible and that it provides good estimates of the parameters. We showed that fitting an additive threshold model to data that are simulated under a product threshold model provides biased estimates, especially for individuals with high breeding values. A main advantage of the product threshold model is that, in contrast to the additive model, it provides distinct estimates of fixed effects affecting each of the 2 unobserved phenotypes.
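The product model's central assumption, that the observed binary outcome is the product of two unobserved threshold phenotypes, can be illustrated by simulation. The liability means and the zero threshold below are arbitrary, and this sketch deliberately omits the genetic effects and the Gibbs sampler:

```python
import random

random.seed(7)

def success_prob(n=200_000, mu_m=0.8, mu_f=0.3):
    """Simulate a binary reproduction outcome under the product model:
    the mating succeeds only if BOTH unobserved liabilities exceed 0."""
    hits = 0
    for _ in range(n):
        z_male = random.gauss(mu_m, 1.0) > 0.0     # male latent phenotype
        z_female = random.gauss(mu_f, 1.0) > 0.0   # female latent phenotype
        hits += int(z_male and z_female)           # observed = product
    return hits / n

p = success_prob()
```

Under this model the success probability factorizes as Phi(mu_m) * Phi(mu_f), which is what an additive threshold model on a single combined liability cannot reproduce exactly, and why fitting the additive model to product-model data biases the estimates for individuals with high breeding values.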
Additional Research Needs to Support the GENII Biosphere Models
Napier, Bruce A.; Snyder, Sandra F.; Arimescu, Carmen
2013-11-30
In the course of evaluating the current parameter needs for the GENII Version 2 code (Snyder et al. 2013), areas of possible improvement for both the data and the underlying models have been identified. As the data review was implemented, PNNL staff identified areas where the models can be improved both to accommodate the locally significant pathways identified and also to incorporate newer models. The areas are general data needs for the existing models and improved formulations for the pathway models. It is recommended that priorities be set by NRC staff to guide selection of the most useful improvements in a cost-effective manner. Suggestions are made based on relatively easy and inexpensive changes, and longer-term more costly studies. In the short term, there are several improved model formulations that could be applied to the GENII suite of codes to make them more generally useful:
• Implementation of the separation of the translocation and weathering processes
• Implementation of an improved model for carbon-14 from non-atmospheric sources
• Implementation of radon exposure pathways models
• Development of a KML processor for the output report generator module, so that data calculated on a grid could be superimposed upon digital maps for easier presentation and display
• Implementation of marine mammal models (manatees, seals, walrus, whales, etc.)
Data needs in the longer term require extensive (and potentially expensive) research. Before picking any one radionuclide or food type, NRC staff should perform an in-house review of current and anticipated environmental analyses to select "dominant" radionuclides of interest to allow setting of cost-effective priorities for radionuclide- and pathway-specific research. These include:
• soil-to-plant uptake studies for oranges and other citrus fruits, and
• development of models for evaluation of radionuclide concentration in highly-processed foods such as oils and sugars.
Finally, renewed
Identification of hydrological model parameter variation using ensemble Kalman filter
NASA Astrophysics Data System (ADS)
Deng, Chao; Liu, Pan; Guo, Shenglian; Li, Zejun; Wang, Dingbao
2016-12-01
Hydrological model parameters play an important role in a model's predictive ability. In a stationary context, parameters of hydrological models are treated as constants; however, model parameters may vary with time under climate change and anthropogenic activities. The technique of ensemble Kalman filter (EnKF) is proposed to identify the temporal variation of parameters for a two-parameter monthly water balance model (TWBM) by assimilating the runoff observations. Through a synthetic experiment, the proposed method is evaluated with time-invariant (i.e., constant) parameters and different types of parameter variations, including trend, abrupt change and periodicity. Various levels of observation uncertainty are designed to examine the performance of the EnKF. The results show that the EnKF can successfully capture the temporal variations of the model parameters. The application to the Wudinghe basin shows that the water storage capacity (SC) of the TWBM model has an apparent increasing trend during the period from 1958 to 2000. The identified temporal variation of SC is explained by land use and land cover changes due to soil and water conservation measures. In contrast, the application to the Tongtianhe basin shows that the estimated SC has no significant variation during the simulation period of 1982-2013, corresponding to the relatively stationary catchment properties. The evapotranspiration parameter (C) has temporal variations while no obvious change patterns exist. The proposed method provides an effective tool for quantifying the temporal variations of the model parameters, thereby improving the accuracy and reliability of model simulations and forecasts.
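The parameter-tracking idea can be sketched with a scalar EnKF in which the drifting parameter is itself treated as the (augmented) state. The toy linear runoff model, noise levels, and random-walk inflation below are illustrative assumptions, not the TWBM setup:

```python
import random

random.seed(2)

N = 100                                            # ensemble size
ens = [random.gauss(0.0, 1.0) for _ in range(N)]   # initial parameter ensemble
estimates, truths = [], []
for t in range(1, 201):
    a_true = 0.5 + 0.002 * t                # true parameter: slow linear trend
    u = 1.0 + (t % 5)                       # known model input (e.g. rainfall)
    y_obs = a_true * u + random.gauss(0, 0.1)      # runoff observation
    # Forecast: random-walk evolution keeps the parameter ensemble spread out
    ens = [a + random.gauss(0, 0.02) for a in ens]
    # Analysis: Kalman update with gain built from the ensemble statistics
    preds = [a * u for a in ens]
    a_bar = sum(ens) / N
    p_bar = sum(preds) / N
    cov_ap = sum((a - a_bar) * (p - p_bar) for a, p in zip(ens, preds)) / (N - 1)
    var_p = sum((p - p_bar) ** 2 for p in preds) / (N - 1)
    gain = cov_ap / (var_p + 0.1 ** 2)
    ens = [a + gain * (y_obs + random.gauss(0, 0.1) - a * u) for a in ens]
    estimates.append(sum(ens) / N)
    truths.append(a_true)
```

The ensemble mean follows the drifting parameter, which is the mechanism the paper exploits to detect trends such as the increasing water storage capacity of the Wudinghe basin.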
ERIC Educational Resources Information Center
Mota, A. R.; Lopes dos Santos, J. M. B.
2014-01-01
Students' misconceptions concerning colour phenomena and the apparent complexity of the underlying concepts--due to the different domains of knowledge involved--make its teaching very difficult. We have developed and tested a teaching device, the addition table of colours (ATC), that encompasses additive and subtractive mixtures in a single…
NASA Astrophysics Data System (ADS)
Weigand, M.; Kemna, A.
2016-06-01
Spectral induced polarization (SIP) data are commonly analysed using phenomenological models. Among these models the Cole-Cole (CC) model is the most popular choice to describe the strength and frequency dependence of distinct polarization peaks in the data. More flexibility regarding the shape of the spectrum is provided by decomposition schemes. Here the spectral response is decomposed into individual responses of a chosen elementary relaxation model, mathematically acting as the kernel in the involved integral, based on a broad range of relaxation times. A frequently used kernel function is the Debye model, but the CC model with some other a priori specified frequency dispersion (e.g. the Warburg model) has also been proposed as the kernel in the decomposition. The different decomposition approaches in use, also including conductivity and resistivity formulations, raise the question of the degree to which the integral spectral parameters typically derived from the obtained relaxation time distribution are biased by the approach itself. Based on synthetic SIP data sampled from an ideal CC response, we here investigate how the two most important integral output parameters deviate from the corresponding CC input parameters. We find that the total chargeability may be underestimated by up to 80 per cent and the mean relaxation time may be off by up to three orders of magnitude relative to the original values, depending on the frequency dispersion of the analysed spectrum and the proximity of its peak to the frequency range limits considered in the decomposition. We conclude that a quantitative comparison of SIP parameters across different studies, or the adoption of parameter relationships from other studies, for example when transferring laboratory results to the field, is only possible on the basis of a consistent spectral analysis procedure. This is particularly important when comparing effective CC parameters with spectral parameters derived from decomposition results.
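The two integral output parameters discussed here, the total chargeability and a (log-weighted) mean relaxation time, are computed from the relaxation-time distribution that the decomposition returns. A minimal sketch, with purely illustrative chargeability values rather than any fitted spectrum:

```python
import math

# A hypothetical Debye decomposition result: chargeabilities m_k on a
# logarithmic relaxation-time grid tau_k (values are illustrative only).
taus = [10.0 ** e for e in (-4, -3, -2, -1, 0)]
m = [0.01, 0.05, 0.12, 0.05, 0.01]

# Integral spectral parameters commonly reported from such decompositions:
m_total = sum(m)                                   # total chargeability
log_tau_mean = sum(mk * math.log10(tk)
                   for mk, tk in zip(m, taus)) / m_total
tau_mean = 10.0 ** log_tau_mean                    # log-weighted mean relax. time
```

Because both quantities are weighted sums over the recovered distribution, any bias the decomposition kernel or the frequency-range limits impose on that distribution propagates directly into them, which is the effect the paper quantifies.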
Universally sloppy parameter sensitivities in systems biology models.
Gutenkunst, Ryan N; Waterfall, Joshua J; Casey, Fergal P; Brown, Kevin S; Myers, Christopher R; Sethna, James P
2007-10-01
Quantitative computational models play an increasingly important role in modern biology. Such models typically involve many free parameters, and assigning their values is often a substantial obstacle to model development. Directly measuring in vivo biochemical parameters is difficult, and collectively fitting them to other experimental data often yields large parameter uncertainties. Nevertheless, in earlier work we showed in a growth-factor-signaling model that collective fitting could yield well-constrained predictions, even when it left individual parameters very poorly constrained. We also showed that the model had a "sloppy" spectrum of parameter sensitivities, with eigenvalues roughly evenly distributed over many decades. Here we use a collection of models from the literature to test whether such sloppy spectra are common in systems biology. Strikingly, we find that every model we examine has a sloppy spectrum of sensitivities. We also test several consequences of this sloppiness for building predictive models. In particular, sloppiness suggests that collective fits to even large amounts of ideal time-series data will often leave many parameters poorly constrained. Tests over our model collection are consistent with this suggestion. This difficulty with collective fits may seem to argue for direct parameter measurements, but sloppiness also implies that such measurements must be formidably precise and complete to usefully constrain many model predictions. We confirm this implication in our growth-factor-signaling model. Our results suggest that sloppy sensitivity spectra are universal in systems biology models. The prevalence of sloppiness highlights the power of collective fits and suggests that modelers should focus on predictions rather than on parameters.
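A minimal illustration of a sloppy sensitivity spectrum, using a two-exponential toy model rather than any of the systems-biology models examined in the paper: the eigenvalues of the Fisher information matrix J^T J spread over orders of magnitude because the two rate parameters are nearly interchangeable.

```python
import math

# Fisher-information (J^T J) eigenvalues for a two-exponential model
# y(t) = exp(-k1 t) + exp(-k2 t) with nearby rates: a classic sloppy fit.
k1, k2 = 1.0, 1.1
ts = [0.1 * i for i in range(1, 51)]
J = [(-t * math.exp(-k1 * t), -t * math.exp(-k2 * t)) for t in ts]

a = sum(r[0] * r[0] for r in J)
b = sum(r[0] * r[1] for r in J)
c = sum(r[1] * r[1] for r in J)

# Eigenvalues of the symmetric 2x2 matrix [[a, b], [b, c]]
tr, det = a + c, a * c - b * b
disc = math.sqrt(tr * tr - 4 * det)
eig_hi, eig_lo = (tr + disc) / 2, (tr - disc) / 2
decades = math.log10(eig_hi / eig_lo)   # spread of the sensitivity spectrum
```

The stiff direction (eig_hi, roughly the sum of the rates) is well constrained by data, while the sloppy direction (eig_lo, roughly their difference) is orders of magnitude softer, which is why collective fits pin down predictions without pinning down individual parameters.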
NASA Astrophysics Data System (ADS)
Homsher-Ritosa, Caryn Nicole
TNR_Tor was determined by finding the intersection point of two linear regressions fit to the data. The TNR_Tor values were compared with measured TNR values from double-hit compression tests and with predicted values using empirical equations from the literature. Light optical micrographs and electron backscatter diffraction scans were examined for samples quenched from just above and just below the experimentally determined values of TNR_Tor for the high Nb, low Ti, and commercially produced 10V45 alloys to help verify the prior austenite grain morphology. For all processing conditions, the low Nb alloy was the least effective in increasing TNR_Tor and the high additions of Ti were the most effective at increasing TNR_Tor. The additions of V were not significantly effective in altering TNR_Tor and it is believed the Nb overpowered any influence the V additions may have had on TNR_Tor. An increase in strain or in strain rate decreased TNR_Tor. The TNR values measured from multistep hot torsion testing were lower than the TNR values measured from double-hit compression tests. The use of the mean flow stress versus inverse temperature curve to determine TNR_Tor does not correlate to the microstructural meaning of TNR (i.e., no recrystallization). The transition from completely recrystallized grains to less than complete recrystallization is not properly modeled by the intersection of two linear regions and is more gradual than the mechanical test implies. From the microstructural analysis of the 10V45 steel, there is evidence of recrystallization at temperatures 200 °C below the measured TNR_Tor. The slope change on the mean flow stress versus inverse temperature curves is believed to be due, in part, to accumulated strain as well as to refinement of continuously recrystallized grains causing a Hall-Petch-type strength increase.
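The TNR_Tor construction, intersecting two linear regressions fit to the mean flow stress versus inverse temperature data, can be sketched as follows. The data points below are fabricated for illustration and are not the alloys' measurements:

```python
def ols(pts):
    """Ordinary least squares slope and intercept for (x, y) pairs."""
    n = len(pts)
    sx = sum(x for x, _ in pts)
    sy = sum(y for _, y in pts)
    sxx = sum(x * x for x, _ in pts)
    sxy = sum(x * y for x, y in pts)
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    return slope, (sy - slope * sx) / n

# Hypothetical mean flow stress (MPa) vs inverse temperature (1000/T, 1/K):
# two linear regimes whose slopes change at the no-recrystallization point.
low_T = [(1.00, 150.0), (1.02, 156.0), (1.04, 162.0)]   # below T_NR: steep
high_T = [(0.90, 120.0), (0.93, 123.0), (0.96, 126.0)]  # above T_NR: shallow

(m1, b1), (m2, b2) = ols(low_T), ols(high_T)
x_star = (b2 - b1) / (m1 - m2)          # intersection in 1000/T
T_nr = 1000.0 / x_star                  # the T_NR estimate, in kelvin
```

As the abstract notes, the sharpness of this intersection is a property of the mechanical test, not of the microstructure: a gradual loss of recrystallization bends the curve smoothly, so the fitted break point can sit well above temperatures at which recrystallization still occurs.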
The definition of hydrologic model parameters using remote sensing techniques
NASA Technical Reports Server (NTRS)
Ragan, R. M.; Salomonson, V. V.
1978-01-01
The reported investigation is concerned with the use of Landsat remote sensing to define input parameters for an array of hydrologic models which are used to synthesize streamflow and water quality parameters in the planning or management process. The ground truth sampling and problems involved in translating the remotely sensed information into hydrologic model parameters are discussed. Questions related to the modification of existing models for compatibility with remote sensing capabilities are also examined. It is shown that the input parameters of many models are presently overdefined in terms of the sensitivity and accuracy of the model. When this overdefinition is recognized, many of the models currently considered incompatible with remote sensing capabilities can be modified for use with sensors of rather low resolution.
State and parameter estimation for canonic models of neural oscillators.
Tyukin, Ivan; Steur, Erik; Nijmeijer, Henk; Fairhurst, David; Song, Inseon; Semyanov, Alexey; Van Leeuwen, Cees
2010-06-01
We consider the problem of how to recover the state and parameter values of typical model neurons, such as Hindmarsh-Rose, FitzHugh-Nagumo, Morris-Lecar, from in-vitro measurements of membrane potentials. In control theory, in terms of observer design, model neurons qualify as locally observable. However, unlike most models traditionally addressed in control theory, no parameter-independent diffeomorphism exists, such that the original model equations can be transformed into adaptive canonic observer form. For a large class of model neurons, however, state and parameter reconstruction is possible nevertheless. We propose a method which, subject to mild conditions on the richness of the measured signal, allows model parameters and state variables to be reconstructed up to an equivalence class.
Evaluation of the storage function model parameter characteristics
NASA Astrophysics Data System (ADS)
Sugiyama, Hironobu; Kadoya, Mutsumi; Nagai, Akihiro; Lansey, Kevin
1997-04-01
The storage function hydrograph model is one of the most commonly used models for flood runoff analysis in Japan. This paper studies the generality of the approach and its application to Japanese basins. Through a comparison of the basic equations for the models, the storage function model parameters, K, P, and T1, are shown to be related to the terms, k and p, in the kinematic wave model. This analysis showed that P and p are identical and K and T1 can be related to k, the basin area and its land use. To apply the storage function model throughout Japan, regional parameter relationships for K and T1 were developed for different land-use conditions using data from 22 watersheds and 91 flood events. These relationships combine the kinematic wave parameters with general topographic information using Hack's Law. The sensitivity of the parameters and their physical significance are also described.
Extraction of exposure modeling parameters of thick resist
NASA Astrophysics Data System (ADS)
Liu, Chi; Du, Jinglei; Liu, Shijie; Duan, Xi; Luo, Boliang; Zhu, Jianhua; Guo, Yongkang; Du, Chunlei
2004-12-01
Experimental and theoretical analysis indicates that many nonlinear factors existing in the exposure process of thick resist can remarkably affect the PAC concentration distribution in the resist. These effects should therefore be fully considered in the exposure model of thick resist, and exposure parameters should not be treated as constants, because a certain relationship exists between the parameters and the resist thickness. In this paper, an enhanced Dill model for the exposure process of thick resist is presented, and an experimental setup for measuring the exposure parameters of thick resist is developed. We measure the intensity transmittance curve of thick resist AZ4562 under different processing conditions, and extract the corresponding exposure parameters based on the experimental results and the calculations from the beam propagation matrix of the resist films. With these modified modeling parameters and the enhanced Dill model, simulation of the thick-resist exposure process can be effectively developed in the future.
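The classical Dill exposure equations that such models build on, dI/dz = -I(A·M + B) and dM/dt = -I·C·M, can be integrated with a simple forward scheme to reproduce a transmittance-versus-time curve like the one measured here. The parameter values below are illustrative placeholders, not the extracted AZ4562 parameters:

```python
import math

# Dill exposure parameters (illustrative values, not measured AZ4562 data)
A, B, C = 0.8, 0.05, 0.012        # 1/um, 1/um, cm^2/mJ
L, nz = 10.0, 200                 # resist thickness (um), depth steps
dz = L / nz
I0, dt, steps = 25.0, 0.05, 400   # incident intensity (mW/cm^2), time step (s)

M = [1.0] * nz                    # normalized PAC concentration M(z)
transmittance = []
for _ in range(steps):
    # Propagate intensity down through the resist (Beer-Lambert law with
    # bleachable absorption A*M plus constant absorption B)
    I, Iz = I0, []
    for j in range(nz):
        Iz.append(I)
        I *= math.exp(-(A * M[j] + B) * dz)
    transmittance.append(I / I0)
    # Bleach the PAC: dM/dt = -I * C * M
    for j in range(nz):
        M[j] *= math.exp(-Iz[j] * C * dt)
```

The transmittance rises monotonically from exp(-(A+B)·L) toward the fully bleached limit exp(-B·L); fitting a measured curve of this shape is how the A, B, C parameters are extracted, and the thickness dependence reported in the paper amounts to these parameters varying with L.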
Software reliability: Additional investigations into modeling with replicated experiments
NASA Technical Reports Server (NTRS)
Nagel, P. M.; Schotz, F. M.; Skirvan, J. A.
1984-01-01
The effects of programmer experience level, different program usage distributions, and programming languages are explored. All these factors affect performance, and some tentative relational hypotheses are presented. An analytic framework for replicated and non-replicated (traditional) software experiments is presented. A method of obtaining an upper bound on the error rate of the next error is proposed. The method was validated empirically by comparing forecasts with actual data. In all 14 cases the bound exceeded the observed parameter, albeit somewhat conservatively. Two other forecasting methods are proposed and compared to observed results. Although it is demonstrated within this framework that stages are neither independent nor exponentially distributed, empirical estimates show that the exponential assumption is nearly valid for all but the extreme tails of the distribution. Except for the dependence in the stage probabilities, Cox's model approximates to a degree what is being observed.
Additive Manufacturing of Medical Models--Applications in Rhinology.
Raos, Pero; Klapan, Ivica; Galeta, Tomislav
2015-09-01
In this paper we introduce guidelines and suggestions for the use of 3D image-processing software in head pathology diagnostics, and procedures for obtaining physical medical models by additive manufacturing/rapid prototyping techniques, with a view to improved surgical performance, maximum safety, and faster postoperative recovery of patients. This approach has been verified in two case reports. In the treatment we used intelligent classifier-schemes for abnormal patterns using a computer-based system for 3D-virtual and endoscopic assistance in rhinology, with appropriate visualization of anatomy and pathology within the nose, paranasal sinuses, and skull base area.
Estimation of Kalman filter model parameters from an ensemble of tests
NASA Technical Reports Server (NTRS)
Gibbs, B. P.; Haley, D. R.; Levine, W.; Porter, D. W.; Vahlberg, C. J.
1980-01-01
A methodology for estimating initial mean and covariance parameters in a Kalman filter model from an ensemble of nonidentical tests is presented. In addition, the problem of estimating time constants and process noise levels is addressed. Practical problems such as developing and validating inertial instrument error models from laboratory test data or developing error models of individual phases of a test are generally considered.
Inverse estimation of parameters for an estuarine eutrophication model
Shen, J.; Kuo, A.Y.
1996-11-01
An inverse model of an estuarine eutrophication model with eight state variables is developed. It provides a framework to estimate parameter values of the eutrophication model by assimilation of concentration data of these state variables. The inverse model, using the variational technique in conjunction with a vertical two-dimensional eutrophication model, is general enough to be applicable to aid model calibration. The formulation is illustrated by conducting a series of numerical experiments for the tidal Rappahannock River, a western shore tributary of the Chesapeake Bay. The numerical experiments of short-period model simulations with different hypothetical data sets and long-period model simulations with limited hypothetical data sets demonstrated that the inverse model can be satisfactorily used to estimate parameter values of the eutrophication model. The experiments also showed that the inverse model is useful for addressing some important questions, such as uniqueness of the parameter estimation and data requirements for model calibration. Because of the complexity of the eutrophication system, degradation of the convergence speed may occur. Two major factors which cause this degradation are cross effects among parameters and the multiple scales involved in the parameter system.
Multiscale Modeling of Powder Bed-Based Additive Manufacturing
NASA Astrophysics Data System (ADS)
Markl, Matthias; Körner, Carolin
2016-07-01
Powder bed fusion processes are additive manufacturing technologies that are expected to induce the third industrial revolution. Components are built up layer by layer in a powder bed by selectively melting confined areas according to sliced 3D model data. This technique allows the manufacture of highly complex geometries that are hardly machinable with conventional technologies. However, the underlying physical phenomena are only sparsely understood and difficult to observe during processing. Therefore, an intensive and expensive trial-and-error principle is applied to produce components with the desired dimensional accuracy, material characteristics, and mechanical properties. This review presents numerical modeling approaches on multiple length scales and timescales that describe different aspects of powder bed fusion processes. In combination with tailored experiments, the numerical results enlarge the understanding of the underlying physical mechanisms and support the development of suitable process strategies and component topologies.
Incorporation of shuttle CCT parameters in computer simulation models
NASA Technical Reports Server (NTRS)
Huntsberger, Terry
1990-01-01
Computer simulations of shuttle missions have become increasingly important during recent years. The complexity of mission planning for satellite launch and repair operations, which usually involve EVA, has led to the need for accurate visibility and access studies. The PLAID modeling package used in the Man-Systems Division at Johnson currently has the necessary capabilities for such studies. In addition, the modeling package is used for spatial location and orientation of shuttle components for film overlay studies, such as the current investigation of the hydrogen leaks found in the shuttle flight. However, there are a number of differences between the simulation studies and actual mission viewing. These include image blur caused by the finite resolution of the CCT monitors in the shuttle and signal noise from the video tubes of the cameras. During the course of this investigation, the shuttle CCT camera and monitor parameters were incorporated into the existing PLAID framework. These parameters are specific to certain camera/lens combinations, and the SNR characteristics of these combinations are included in the noise models. The monitor resolution is incorporated using a Gaussian spread function such as that found in the screen phosphors of the shuttle monitors. Another difference between the traditional PLAID-generated images and actual mission viewing lies in the lack of shadows and reflections of light from surfaces. Ray tracing of the scene explicitly includes the lighting and material characteristics of surfaces. The results of some preliminary studies using ray tracing techniques for the image generation process, combined with the camera and monitor effects, are also reported.
Technical Work Plan for: Additional Multiscale Thermohydrologic Modeling
B. Kirstein
2006-08-24
The primary objective of Revision 04 of the MSTHM report is to provide TSPA with revised repository-wide MSTHM analyses that incorporate updated percolation flux distributions, revised hydrologic properties, updated IEDs, and information pertaining to the emplacement of transport, aging, and disposal (TAD) canisters. The updated design information is primarily related to the incorporation of TAD canisters, but also includes updates related to superseded IEDs describing emplacement drift cross-sectional geometry and layout. The intended use of the results of Revision 04 of the MSTHM report, as described in this TWP, is to predict the evolution of TH conditions (temperature, relative humidity, liquid-phase saturation, and liquid-phase flux) at specified locations within emplacement drifts and in the adjoining near-field host rock along all emplacement drifts throughout the repository. This information directly supports the TSPA for the nominal and seismic scenarios. The revised repository-wide analyses are required to incorporate updated parameters and design information and to extend those analyses out to 1,000,000 years. Note that the previous MSTHM analyses reported in Revision 03 of Multiscale Thermohydrologic Model (BSC 2005 [DIRS 173944]) only extend out to 20,000 years. The updated parameters are the percolation flux distributions, including incorporation of post-10,000-year distributions, and updated calibrated hydrologic property values for the host-rock units. The applied calibrated hydrologic properties will be an updated version of those available in Calibrated Properties Model (BSC 2004 [DIRS 169857]). These updated properties will be documented in an Appendix of Revision 03 of UZ Flow Models and Submodels (BSC 2004 [DIRS 169861]). The updated calibrated properties are applied because they represent the latest available information. The reasonableness of applying the updated calibrated properties to the prediction of near-field in-drift TH conditions
WATEQ3 geochemical model: thermodynamic data for several additional solids
Krupka, K.M.; Jenne, E.A.
1982-09-01
Geochemical models such as WATEQ3 can be used to model the concentrations of water-soluble pollutants that may result from the disposal of nuclear waste and retorted oil shale. However, for a model to competently deal with these water-soluble pollutants, an adequate thermodynamic data base must be provided that includes elements identified as important in modeling these pollutants. To this end, several minerals and related solid phases were identified that were absent from the thermodynamic data base of WATEQ3. In this study, the thermodynamic data for the identified solids were compiled and selected from several published tabulations of thermodynamic data. For these solids, an accepted Gibbs free energy of formation, ΔG°f,298, was selected for each solid phase based on the recentness of the tabulated data and on considerations of internal consistency with respect to both the published tabulations and the existing data in WATEQ3. For those solids not included in these published tabulations, Gibbs free energies of formation were calculated from published solubility data (e.g., lepidocrocite), or were estimated (e.g., nontronite) using a free-energy summation method described by Mattigod and Sposito (1978). The accepted or estimated free energies were then combined with internally consistent, ancillary thermodynamic data to calculate equilibrium constants for the hydrolysis reactions of these minerals and related solid phases. Including these values in the WATEQ3 data base increased the competency of this geochemical model in applications associated with the disposal of nuclear waste and retorted oil shale. Additional minerals and related solid phases that need to be added to the solubility submodel will be identified as modeling applications continue in these two programs.
Model and Parameter Discretization Impacts on Estimated ASR Recovery Efficiency
NASA Astrophysics Data System (ADS)
Forghani, A.; Peralta, R. C.
2015-12-01
We contrast the computed recovery efficiency of one Aquifer Storage and Recovery (ASR) well under several modeling situations. Test situations differ in the employed finite difference grid discretization, hydraulic conductivity, and storativity. We employ a 7-layer regional groundwater model calibrated for Salt Lake Valley. Because the regional model grid is too coarse for ASR analysis, we prepare two local models with significantly finer discretization capable of analyzing ASR recovery efficiency. Some of the addressed situations employ parameters interpolated from the coarse valley model; others employ parameters derived from nearby well logs or pumping tests. The intent of the evaluations and subsequent sensitivity analysis is to show how significantly the employed discretization and aquifer parameters affect estimated recovery efficiency. Most previous studies evaluating ASR recovery efficiency consider only hypothetical uniform specified boundary heads and gradients, assuming homogeneous aquifer parameters. The well is part of the Jordan Valley Water Conservancy District (JVWCD) ASR system, which lies within Salt Lake Valley.
Computationally Inexpensive Identification of Non-Informative Model Parameters
NASA Astrophysics Data System (ADS)
Mai, J.; Cuntz, M.; Kumar, R.; Zink, M.; Samaniego, L. E.; Schaefer, D.; Thober, S.; Rakovec, O.; Musuuza, J. L.; Craven, J. R.; Spieler, D.; Schrön, M.; Prykhodko, V.; Dalmasso, G.; Langenberg, B.; Attinger, S.
2014-12-01
Sensitivity analysis is used, for example, to identify parameters which induce the largest variability in model output and are thus informative during calibration. Variance-based techniques are employed for this purpose, but they unfortunately require a large number of model evaluations and are thus infeasible for complex environmental models. We therefore developed a computationally inexpensive screening method, based on Elementary Effects, that automatically separates informative and non-informative model parameters. The method was tested using the mesoscale hydrologic model (mHM) with 52 parameters. The model was applied in three European catchments with different hydrological characteristics, i.e., the Neckar (Germany), Sava (Slovenia), and Guadalquivir (Spain). The method identified the same informative parameters as the standard Sobol' method but with less than 1% of the model runs. In Germany and Slovenia, 22 of the 52 parameters were informative, mostly in the formulations of evapotranspiration, interflow and percolation. In Spain, 19 of the 52 parameters were informative, with an increased importance of soil parameters. We further showed that Sobol' indices calculated for the subset of informative parameters are practically the same as Sobol' indices before the screening, while the number of model runs was reduced by more than 50%. The model mHM was then calibrated twice in the three test catchments: first taking all 52 parameters into account, and then calibrating only the informative parameters while all others were kept fixed. The Nash-Sutcliffe efficiencies were 0.87 and 0.83 in Germany, 0.89 and 0.88 in Slovenia, and 0.86 and 0.85 in Spain, respectively. This minor loss of at most 4% in model performance comes along with a substantial decrease of at least 65% in model evaluations. In summary, we propose an efficient screening method to identify non-informative model parameters that can be discarded during further applications. We have shown that sensitivity
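The screening principle can be sketched with a simplified one-at-a-time variant of the Elementary Effects method; the toy model below stands in for mHM, and all names and settings are illustrative:

```python
# Hedged sketch of Elementary Effects screening: perturb one parameter at a
# time from random base points and rank parameters by the mean absolute
# effect (mu*). A full Morris design uses trajectories; this radial
# one-at-a-time form keeps the idea visible in a few lines.
import random

def elementary_effects(model, n_params, n_base=50, delta=0.1, seed=1):
    rng = random.Random(seed)
    effects = [[] for _ in range(n_params)]
    for _ in range(n_base):
        x = [rng.random() * (1 - delta) for _ in range(n_params)]
        base = model(x)
        for i in range(n_params):
            xp = list(x)
            xp[i] += delta
            effects[i].append(abs(model(xp) - base) / delta)
    # mu*: mean absolute elementary effect per parameter
    return [sum(e) / len(e) for e in effects]

# toy model: output depends strongly on x0, weakly on x1, not at all on x2
mu_star = elementary_effects(lambda x: 5 * x[0] + 0.1 * x[1], 3)
```

Parameters with mu* near zero are the non-informative ones that can be fixed before an expensive variance-based analysis or calibration.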
A dimensionless parameter model for arc welding processes
Fuerschbach, P.W.
1994-12-31
A dimensionless parameter model previously developed for CO2 laser beam welding has been shown to be applicable to GTAW and PAW autogenous arc welding processes. The model facilitates estimates of weld size, power, and speed based on knowledge of the material's thermal properties. The dimensionless parameters can also be used to estimate the melting efficiency, which eases development of weld schedules with lower heat input to the weldment. The mathematical relationship between the dimensionless parameters in the model has been shown to be dependent on the heat flow geometry in the weldment.
Estimation of the input parameters in the Feller neuronal model
NASA Astrophysics Data System (ADS)
Ditlevsen, Susanne; Lansky, Petr
2006-06-01
The stochastic Feller neuronal model is studied, and estimators of the model input parameters, depending on the firing regime of the process, are derived. Closed expressions for the first two moments of functionals of the first-passage time (FPT) through a constant boundary in the suprathreshold regime are derived and used to calculate moment estimators. In the subthreshold regime, the exponentiality of the FPT is utilized to characterize the input parameters. The methods are illustrated on simulated data. Finally, approximations of the first-passage-time moments are suggested, and biological interpretations and comparisons of the parameters in the Feller and the Ornstein-Uhlenbeck models are discussed.
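For intuition, a Feller-type (square-root diffusion) membrane-potential trajectory can be simulated with an Euler-Maruyama scheme. The drift/diffusion form and all parameter values below are illustrative; the paper's estimators are not reproduced:

```python
# Hedged sketch: Euler-Maruyama simulation of a Feller-type process
# dV = (mu - V/tau) dt + sigma * sqrt(V) dW, a common square-root-diffusion
# form of the model. Parameter values are invented for illustration.
import math
import random

def simulate_feller(mu=1.5, tau=1.0, sigma=0.3, v0=0.5, dt=1e-3,
                    steps=2000, seed=7):
    rng = random.Random(seed)
    v, path = v0, [v0]
    for _ in range(steps):
        dw = rng.gauss(0.0, math.sqrt(dt))          # Brownian increment
        v += (mu - v / tau) * dt + sigma * math.sqrt(max(v, 0.0)) * dw
        path.append(v)
    return path

path = simulate_feller()
```

Trajectories like this, observed either continuously or through threshold crossings, are the raw material to which the moment and exponentiality-based estimators would be applied.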
Alwin, Jennifer Louise
1999-08-01
The effect of process parameters and chemical additives on the specific cake resistance of zinc hydroxide precipitates was investigated. The ability of a slurry to be filtered is dependent upon the particle habit of the solid and the particle habit is influenced by certain process variables. The process variables studied include neutralization temperature, agitation type, and alkalinity source used for neutralization. Several commercially available chemical additives advertised to aid in solid/liquid separation were also examined in conjunction with hydroxide precipitation. A statistical analysis revealed that the neutralization temperature and the source of alkalinity were statistically significant in influencing the specific cake resistance of zinc hydroxide precipitates in this study. The type of agitation did not significantly affect the specific cake resistance of zinc hydroxide precipitates. The use of chemical additives in conjunction with hydroxide precipitation had a favorable effect on the filterability. The morphology of the hydroxide precipitates was analyzed using scanning electron microscopy.
Quantifying Key Climate Parameter Uncertainties Using an Earth System Model with a Dynamic 3D Ocean
NASA Astrophysics Data System (ADS)
Olson, R.; Sriver, R. L.; Goes, M. P.; Urban, N.; Matthews, D.; Haran, M.; Keller, K.
2011-12-01
Climate projections hinge critically on uncertain climate model parameters such as climate sensitivity, vertical ocean diffusivity and anthropogenic sulfate aerosol forcings. Climate sensitivity is defined as the equilibrium global mean temperature response to a doubling of atmospheric CO2 concentrations. Vertical ocean diffusivity parameterizes sub-grid scale ocean vertical mixing processes. These parameters are typically estimated using Intermediate Complexity Earth System Models (EMICs) that lack a full 3D representation of the oceans, thereby neglecting the effects of mixing on ocean dynamics and meridional overturning. We improve on these studies by employing an EMIC with a dynamic 3D ocean model to estimate these parameters. We carry out historical climate simulations with the University of Victoria Earth System Climate Model (UVic ESCM) varying parameters that affect climate sensitivity, vertical ocean mixing, and effects of anthropogenic sulfate aerosols. We use a Bayesian approach whereby the likelihood of each parameter combination depends on how well the model simulates surface air temperature and upper ocean heat content. We use a Gaussian process emulator to interpolate the model output to an arbitrary parameter setting. We use a Markov chain Monte Carlo (MCMC) method to estimate the posterior probability distribution function (pdf) of these parameters. We explore the sensitivity of the results to prior assumptions about the parameters. In addition, we estimate the relative skill of different observations to constrain the parameters. We quantify the uncertainty in parameter estimates stemming from climate variability, model and observational errors. We explore the sensitivity of key decision-relevant climate projections to these parameters. We find that climate sensitivity and vertical ocean diffusivity estimates are consistent with previously published results. The climate sensitivity pdf is strongly affected by the prior assumptions, and by the scaling
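The Bayesian machinery described above can be sketched with a toy random-walk Metropolis sampler for a single parameter, where a trivial identity function stands in for the emulated model and a Gaussian likelihood stands in for the data misfit. Everything here is an illustrative stand-in, not the UVic ESCM setup:

```python
# Hedged sketch of MCMC parameter estimation: random-walk Metropolis sampling
# of one "climate sensitivity"-like parameter under a Gaussian likelihood.
import math
import random

def log_likelihood(theta, obs, sigma=0.5):
    # emulator stand-in: the "model output" is just theta itself
    return -0.5 * ((obs - theta) / sigma) ** 2

def metropolis(obs, n=20000, step=0.5, seed=3):
    rng = random.Random(seed)
    theta, samples = 0.0, []
    ll = log_likelihood(theta, obs)
    for _ in range(n):
        prop = theta + rng.gauss(0.0, step)        # propose a move
        llp = log_likelihood(prop, obs)
        if math.log(rng.random()) < llp - ll:      # accept/reject
            theta, ll = prop, llp
        samples.append(theta)
    return samples

samples = metropolis(obs=3.0)
post_mean = sum(samples[5000:]) / len(samples[5000:])   # discard burn-in
```

In the study's actual workflow, the cheap log-likelihood call is replaced by a Gaussian process emulator trained on full model runs, which is what makes MCMC affordable.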
Estimating winter wheat phenological parameters: Implications for crop modeling
Technology Transfer Automated Retrieval System (TEKTRAN)
Crop parameters, such as the timing of developmental events, are critical for accurate simulation results in crop simulation models, yet uncertainty often exists in determining the parameters. Factors contributing to the uncertainty include: a) sources of variation within a plant (i.e., within diffe...
Retrospective forecast of ETAS model with daily parameters estimate
NASA Astrophysics Data System (ADS)
Falcone, Giuseppe; Murru, Maura; Console, Rodolfo; Marzocchi, Warner; Zhuang, Jiancang
2016-04-01
We present a retrospective ETAS (Epidemic Type Aftershock Sequence) model based on daily updating of the free parameters during the background, learning and test phases of a seismic sequence. The idea was born after the 2011 Tohoku-Oki earthquake. The CSEP (Collaboratory for the Study of Earthquake Predictability) Center in Japan provided an appropriate testing benchmark for the five 1-day submitted models. Of all the models, only one was able to successfully predict the number of events that actually occurred. This result was verified using both the real-time and the revised catalogs. The main cause of the failure was underestimation of the forecasted events, because the model parameters were kept fixed during the test. Moreover, the absence in the learning catalog of an event of magnitude comparable to the mainshock (M9.0), which drastically changed the seismicity in the area, made the learning parameters unsuitable for describing the real seismicity. As an example of this methodological development, we show the evolution of the model parameters during the last two strong seismic sequences in Italy: the 2009 L'Aquila and the 2012 Reggio Emilia episodes. The performance of the model with daily updated parameters is compared with that of the same model with parameters kept fixed during the test period.
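The ETAS conditional intensity being re-fit each day has a standard form: a background rate plus magnitude-scaled Omori-law contributions from past events. The sketch below uses a base-10 productivity term (an e^(alpha*(m-m0)) form is equally common); parameter values and the two-event catalog are invented:

```python
# Hedged sketch of the ETAS conditional intensity:
#   lambda(t) = mu + sum_i K * 10**(alpha*(m_i - m0)) * (t - t_i + c)**(-p)
# Values of mu, K, alpha, c, p and the catalog are illustrative only; in the
# method above they would be re-estimated daily from the growing catalog.
def etas_intensity(t, catalog, mu=0.2, K=0.05, alpha=0.8, c=0.01, p=1.1,
                   m0=3.0):
    rate = mu
    for t_i, m_i in catalog:
        if t_i < t:  # only earlier events contribute
            rate += K * 10 ** (alpha * (m_i - m0)) * (t - t_i + c) ** (-p)
    return rate

catalog = [(0.0, 5.0), (1.0, 4.2)]   # (time in days, magnitude), made up
r = etas_intensity(2.0, catalog)
```

Daily updating amounts to re-maximizing the likelihood of this intensity over (mu, K, alpha, c, p) each time new events are appended to the catalog.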
Analysis of the Second Model Parameter Estimation Experiment Workshop Results
NASA Astrophysics Data System (ADS)
Duan, Q.; Schaake, J.; Koren, V.; Mitchell, K.; Lohmann, D.
2002-05-01
The goal of the Model Parameter Estimation Experiment (MOPEX) is to investigate techniques for a priori parameter estimation for land surface parameterization schemes of atmospheric models and for hydrologic models. A comprehensive database has been developed which contains historical hydrometeorologic time series data and land surface characteristics data for 435 basins in the United States and many international basins. A number of international MOPEX workshops have been convened or planned for MOPEX participants to share their parameter estimation experience. The Second International MOPEX Workshop was held in Tucson, Arizona, April 8-10, 2002. This paper presents the MOPEX goals/objectives and science strategy. Results from our participation in developing and testing the a priori parameter estimation procedures for the National Weather Service (NWS) Sacramento Soil Moisture Accounting (SAC-SMA) model, the Simple Water Balance (SWB) model, and the National Centers for Environmental Prediction (NCEP) NOAH Land Surface Model (NOAH LSM) are highlighted. The test results include model simulations using both a priori parameters and calibrated parameters for the 12 basins selected for the Tucson MOPEX Workshop.
Effect of Noise in the Three-Parameter Logistic Model.
ERIC Educational Resources Information Center
Samejima, Fumiko
In a preceding research report, ONR/RR-82-1 (Information Loss Caused by Noise in Models for Dichotomous Items), observations were made on the effect of noise accommodated in different types of models on the dichotomous response level. In the present paper, focus is put upon the three-parameter logistic model, which is widely used among…
Parameter Estimates in Differential Equation Models for Population Growth
ERIC Educational Resources Information Center
Winkel, Brian J.
2011-01-01
We estimate the parameters present in several differential equation models of population growth, specifically logistic growth models and two-species competition models. We discuss student-evolved strategies and offer "Mathematica" code for a gradient search approach. We use historical (1930s) data from microbial studies of the Russian biologist,…
Lumped Parameter Model (LPM) for Light-Duty Vehicles
EPA’s Lumped Parameter Model (LPM) is a free desktop computer application that estimates the effectiveness (CO2 reduction) of various technology combinations or “packages,” in a manner that accounts for synergies between technologies.
Online parameter estimation for surgical needle steering model.
Yan, Kai Guo; Podder, Tarun; Xiao, Di; Yu, Yan; Liu, Tien-I; Ling, Keck Voon; Ng, Wan Sing
2006-01-01
Estimation of system parameters, given noisy input/output data, is a major field in control and signal processing. Many different estimation methods have been proposed in recent years. Among these, Extended Kalman Filtering (EKF) is very useful for estimating the parameters of a nonlinear and time-varying system. Moreover, it can remove the effects of noise to achieve significantly improved results. Our task here is to estimate the coefficients in a spring-beam-damper needle steering model. This kind of spring-damper model has been adopted by many researchers in studying tissue deformation. One difficulty in using such a model is estimating the spring and damper coefficients. Here, we propose an online parameter estimator using EKF to solve this problem. The detailed design is presented in this paper. Computer simulations and physical experiments have revealed that the estimator can estimate the parameters accurately with fast convergence speed and improve the model efficacy.
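The standard trick behind such EKF parameter estimators is to append the unknown coefficient to the state vector and let the filter estimate both jointly. The sketch below does this for a scalar damped system standing in for the spring-beam-damper model; the system, noise levels, and all values are illustrative:

```python
# Hedged sketch of EKF-based online parameter estimation: the unknown damping
# coefficient a is part of the augmented state s = [position, a].
import random

def mm(A, B):
    """Product of two 2x2 matrices."""
    return [[A[0][0] * B[0][0] + A[0][1] * B[1][0],
             A[0][0] * B[0][1] + A[0][1] * B[1][1]],
            [A[1][0] * B[0][0] + A[1][1] * B[1][0],
             A[1][0] * B[0][1] + A[1][1] * B[1][1]]]

def ekf_estimate_decay(true_a=0.5, dt=0.05, steps=300, r=1e-4, seed=2):
    rng = random.Random(seed)
    # simulate the "real" damped system and noisy position measurements
    x, zs = 1.0, []
    for _ in range(steps):
        x -= true_a * x * dt
        zs.append(x + rng.gauss(0.0, r ** 0.5))
    # EKF with augmented state s = [position, damping coefficient a]
    s = [1.0, 0.1]                        # deliberately poor initial guess
    P = [[0.1, 0.0], [0.0, 1.0]]
    for z in zs:
        # predict: x' = x - a*x*dt, a' = a; F is the Jacobian of that map
        F = [[1.0 - s[1] * dt, -s[0] * dt], [0.0, 1.0]]
        s = [s[0] - s[1] * s[0] * dt, s[1]]
        Ft = [[F[0][0], F[1][0]], [F[0][1], F[1][1]]]
        P = mm(mm(F, P), Ft)
        P[1][1] += 1e-6                   # small process noise keeps a adaptable
        # update with measurement z = x + noise, i.e. H = [1, 0]
        S = P[0][0] + r
        K = [P[0][0] / S, P[1][0] / S]
        innov = z - s[0]
        s = [s[0] + K[0] * innov, s[1] + K[1] * innov]
        P = [[(1 - K[0]) * P[0][0], (1 - K[0]) * P[0][1]],
             [P[1][0] - K[1] * P[0][0], P[1][1] - K[1] * P[0][1]]]
    return s[1]

a_hat = ekf_estimate_decay()
```

The needle-steering estimator follows the same pattern with a higher-dimensional state holding the spring and damper coefficients.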
Moran, Robert F.; McKay, David; Pickard, Chris J.; Berry, Andrew J.; Griffin, John M.
2016-01-01
The structural chemistry of materials containing low levels of nonstoichiometric hydrogen is difficult to determine, and producing structural models is challenging where hydrogen has no fixed crystallographic site. Here we demonstrate a computational approach employing ab initio random structure searching (AIRSS) to generate a series of candidate structures for hydrous wadsleyite (β-Mg2SiO4 with 1.6 wt% H2O), a high-pressure mineral proposed as a repository for water in the Earth's transition zone. Aligning with previous experimental work, we consider only models with Mg3 (over Mg1, Mg2 or Si) vacancies. We adapt the AIRSS method by starting with anhydrous wadsleyite, removing a single Mg2+ and randomly placing two H+ in a unit cell model, generating 819 candidate structures. Of these, 103 geometries were then subjected to more accurate optimisation under periodic DFT. Using this approach, we find the most favourable hydration mechanism involves protonation of two O1 sites around the Mg3 vacancy. The formation of silanol groups on O3 or O4 sites (with loss of stable O1–H hydroxyls) coincides with an increase in total enthalpy. Importantly, the approach we employ allows observables such as NMR parameters to be computed for each structure. We consider hydrous wadsleyite (∼1.6 wt%) to be dominated by protonated O1 sites, with O3/O4–H silanol groups present as defects, a model that maps well onto experimental studies at higher levels of hydration (J. M. Griffin et al., Chem. Sci., 2013, 4, 1523). The AIRSS approach adopted herein provides the crucial link between atomic-scale structure and experimental studies. PMID:27020937
Estimation of propensity scores using generalized additive models.
Woo, Mi-Ja; Reiter, Jerome P; Karr, Alan F
2008-08-30
Propensity score matching is often used in observational studies to create treatment and control groups with similar distributions of observed covariates. Typically, propensity scores are estimated using logistic regressions that assume linearity between the logistic link and the predictors. We evaluate the use of generalized additive models (GAMs) for estimating propensity scores. We compare logistic regressions and GAMs in terms of balancing covariates using simulation studies with artificial and genuine data. We find that, when the distributions of covariates in the treatment and control groups overlap sufficiently, using GAMs can improve overall covariate balance, especially for higher-order moments of distributions. When the distributions in the two groups overlap insufficiently, GAM more clearly reveals this fact than logistic regression does. We also demonstrate via simulation that matching with GAMs can result in larger reductions in bias when estimating treatment effects than matching with logistic regression.
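The baseline the paper improves on, a linear-link logistic propensity model, can be sketched with plain stochastic gradient ascent (a spline-based GAM is omitted here to keep the example dependency-free). The synthetic data and all settings are illustrative:

```python
# Hedged sketch: propensity scores from a plain logistic regression, the
# linear-link baseline that GAMs generalize. Data are synthetic.
import math
import random

def fit_logistic(X, t, lr=0.1, epochs=300):
    w = [0.0] * (len(X[0]) + 1)            # intercept + one weight per covariate
    for _ in range(epochs):
        for xi, ti in zip(X, t):
            z = w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi))
            p = 1 / (1 + math.exp(-z))
            err = ti - p                   # gradient of the log-likelihood
            w[0] += lr * err
            for j, xj in enumerate(xi):
                w[j + 1] += lr * err * xj
    return w

def propensity(w, xi):
    """Estimated probability of treatment given covariates xi."""
    z = w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi))
    return 1 / (1 + math.exp(-z))

rng = random.Random(0)
X = [[rng.gauss(0, 1)] for _ in range(400)]
# treatment is more likely for larger covariate values (true slope = 2)
t = [1 if rng.random() < 1 / (1 + math.exp(-2 * x[0])) else 0 for x in X]
w = fit_logistic(X, t)
```

A GAM replaces the linear term w1*x with a smooth function f(x) estimated from the data, which is what improves balance when the true treatment-assignment mechanism is nonlinear.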
[Critique of the additive model of the randomized controlled trial].
Boussageon, Rémy; Gueyffier, François; Bejan-Angoulvant, Theodora; Felden-Dominiak, Géraldine
2008-01-01
Randomized, double-blind, placebo-controlled clinical trials are currently the best way to demonstrate the clinical effectiveness of drugs. Their methodology relies on the method of difference (John Stuart Mill), through which the observed difference between two groups (drug vs placebo) can be attributed to the pharmacological effect of the drug being tested. However, this additive model can be questioned in the event of statistical interactions between the pharmacological and the placebo effects. Evidence in different domains has shown that the placebo effect can influence the effect of the active principle. This article evaluates the methodological, clinical and epistemological consequences of this phenomenon. Topics treated include extrapolating results, accounting for heterogeneous results, demonstrating the existence of several factors in the placebo effect, the necessity to take these factors into account for given symptoms or pathologies, as well as the problem of the "specific" effect.
NASA Astrophysics Data System (ADS)
Xia, Youlong; Yang, Zong-Liang; Stoffa, Paul L.; Sen, Mrinal K.
2005-01-01
Most previous land-surface model calibration studies have defined global ranges for their parameters to search for optimal parameter sets. Little work has been conducted to study the impacts of realistic versus global ranges as well as model complexities on the calibration and uncertainty estimates. The primary purpose of this paper is to investigate these impacts by employing Bayesian Stochastic Inversion (BSI) to the Chameleon Surface Model (CHASM). The CHASM was designed to explore the general aspects of land-surface energy balance representation within a common modeling framework that can be run from a simple energy balance formulation to a complex mosaic type structure. The BSI is an uncertainty estimation technique based on Bayes theorem, importance sampling, and very fast simulated annealing. The model forcing data and surface flux data were collected at seven sites representing a wide range of climate and vegetation conditions. For each site, four experiments were performed with simple and complex CHASM formulations as well as realistic and global parameter ranges. Twenty-eight experiments were conducted and 50,000 parameter sets were used for each run. The results show that the use of global and realistic ranges gives similar simulations for both modes for most sites, but the global ranges tend to produce some unreasonable optimal parameter values. Comparison of simple and complex modes shows that the simple mode has more parameters with unreasonable optimal values. Use of parameter ranges and model complexities has significant impacts on frequency distribution of parameters, marginal posterior probability density functions, and estimates of uncertainty of simulated sensible and latent heat fluxes. Comparison between model complexity and parameter ranges shows that the former has more significant impacts on parameter and uncertainty estimations.
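The optimization core behind BSI-style sampling can be sketched as simulated annealing over a bounded parameter range, accepting uphill moves with a temperature-controlled probability. The quadratic "misfit" and all settings below are illustrative stand-ins for a real flux-misfit function:

```python
# Hedged sketch of simulated annealing over one bounded parameter.
import math
import random

def anneal(cost, lo, hi, n=5000, t0=1.0, seed=4):
    rng = random.Random(seed)
    x = best = rng.uniform(lo, hi)
    cx = cbest = cost(x)
    for k in range(1, n + 1):
        temp = t0 / k                                  # fast cooling schedule
        cand = min(hi, max(lo, x + rng.gauss(0.0, 0.1 * (hi - lo))))
        cc = cost(cand)
        # always accept improvements; accept worse moves with prob e^(-dC/T)
        if cc < cx or rng.random() < math.exp(-(cc - cx) / temp):
            x, cx = cand, cc
            if cx < cbest:
                best, cbest = x, cx
    return best

# toy "misfit" with its optimum at 0.7 inside a realistic range [0, 1]
best = anneal(lambda p: (p - 0.7) ** 2, 0.0, 1.0)
```

In BSI the accepted samples are additionally reweighted via importance sampling to approximate posterior distributions, rather than only reporting the single best parameter set.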
Agricultural and Environmental Input Parameters for the Biosphere Model
K. Rasmuson; K. Rautenstrauch
2004-09-14
This analysis is one of 10 technical reports that support the Environmental Radiation Model for Yucca Mountain Nevada (ERMYN) (i.e., the biosphere model). It documents development of agricultural and environmental input parameters for the biosphere model, and supports the use of the model to develop biosphere dose conversion factors (BDCFs). The biosphere model is one of a series of process models supporting the total system performance assessment (TSPA) for the repository at Yucca Mountain. The ERMYN provides the TSPA with the capability to perform dose assessments. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows the interrelationships between the major activities and their products (the analysis and model reports) that were planned in "Technical Work Plan for Biosphere Modeling and Expert Support" (BSC 2004 [DIRS 169573]). The "Biosphere Model Report" (BSC 2004 [DIRS 169460]) describes the ERMYN and its input parameters.
Combined Estimation of Hydrogeologic Conceptual Model and Parameter Uncertainty
Meyer, Philip D.; Ye, Ming; Neuman, Shlomo P.; Cantrell, Kirk J.
2004-03-01
The objective of the research described in this report is the development and application of a methodology for comprehensively assessing the hydrogeologic uncertainties involved in dose assessment, including uncertainties associated with conceptual models, parameters, and scenarios. This report describes and applies a statistical method to quantitatively estimate the combined uncertainty in model predictions arising from conceptual model and parameter uncertainties. The method relies on model averaging to combine the predictions of a set of alternative models. Implementation is driven by the available data. When there is minimal site-specific data the method can be carried out with prior parameter estimates based on generic data and subjective prior model probabilities. For sites with observations of system behavior (and optionally data characterizing model parameters), the method uses model calibration to update the prior parameter estimates and model probabilities based on the correspondence between model predictions and site observations. The set of model alternatives can contain both simplified and complex models, with the requirement that all models be based on the same set of data. The method was applied to the geostatistical modeling of air permeability at a fractured rock site. Seven alternative variogram models of log air permeability were considered to represent data from single-hole pneumatic injection tests in six boreholes at the site. Unbiased maximum likelihood estimates of variogram and drift parameters were obtained for each model. Standard information criteria provided an ambiguous ranking of the models, which would not justify selecting one of them and discarding all others as is commonly done in practice. Instead, some of the models were eliminated based on their negligibly small updated probabilities and the rest were used to project the measured log permeabilities by kriging onto a rock volume containing the six boreholes. These four
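The model-averaging step described above can be sketched by converting per-model information criteria into normalized weights and averaging the models' predictions. The criterion values and predictions below are invented for illustration:

```python
# Hedged sketch of information-criterion-based model averaging
# (Akaike-style weights: w_m proportional to exp(-0.5 * (IC_m - IC_min))).
import math

def ic_weights(ic_values):
    ic_min = min(ic_values)
    raw = [math.exp(-0.5 * (ic - ic_min)) for ic in ic_values]
    total = sum(raw)
    return [r / total for r in raw]

ics = [210.4, 212.1, 218.0]      # three alternative variogram models (made up)
w = ic_weights(ics)
preds = [1.9, 2.3, 3.1]          # each model's kriged log-permeability (made up)
averaged = sum(wi * pi for wi, pi in zip(w, preds))
```

Models whose updated weights are negligible can be dropped, as in the report, while the remainder contribute to the averaged prediction in proportion to their support from the data.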
Yan, Ying; Yi, Grace Y
2016-07-01
Covariate measurement error occurs commonly in survival analysis. Under the proportional hazards model, measurement error effects have been well studied, and various inference methods have been developed to correct for error effects under such a model. In contrast, error-contaminated survival data under the additive hazards model have received relatively less attention. In this paper, we investigate this problem by exploring measurement error effects on parameter estimation and the change of the hazard function. New insights of measurement error effects are revealed, as opposed to well-documented results for the Cox proportional hazards model. We propose a class of bias correction estimators that embraces certain existing estimators as special cases. In addition, we exploit the regression calibration method to reduce measurement error effects. Theoretical results for the developed methods are established, and numerical assessments are conducted to illustrate the finite sample performance of our methods.
Parameter Estimation and Model Selection in Computational Biology
Lillacci, Gabriele; Khammash, Mustafa
2010-01-01
A central challenge in computational modeling of biological systems is the determination of the model parameters. Typically, only a fraction of the parameters (such as kinetic rate constants) are experimentally measured, while the rest are often fitted. The fitting process is usually based on experimental time course measurements of observables, which are used to assign parameter values that minimize some measure of the error between these measurements and the corresponding model prediction. The measurements, which can come from immunoblotting assays, fluorescent markers, etc., tend to be very noisy and taken at a limited number of time points. In this work we present a new approach to the problem of parameter selection of biological models. We show how one can use a dynamic recursive estimator, known as extended Kalman filter, to arrive at estimates of the model parameters. The proposed method proceeds as follows. First, we use a variation of the Kalman filter that is particularly well suited to biological applications to obtain a first guess for the unknown parameters. Second, we employ an a posteriori identifiability test to check the reliability of the estimates. Finally, we solve an optimization problem to refine the first guess in case it should not be accurate enough. The final estimates are guaranteed to be statistically consistent with the measurements. Furthermore, we show how the same tools can be used to discriminate among alternate models of the same biological process. We demonstrate these ideas by applying our methods to two examples, namely a model of the heat shock response in E. coli, and a model of a synthetic gene regulation system. The methods presented are quite general and may be applied to a wide class of biological systems where noisy measurements are used for parameter estimation or model selection. PMID:20221262
SPOTting Model Parameters Using a Ready-Made Python Package.
Houska, Tobias; Kraft, Philipp; Chamorro-Chavez, Alejandro; Breuer, Lutz
2015-01-01
The choice of a specific parameter estimation method is often driven more by its availability than by its performance. We developed SPOTPY (Statistical Parameter Optimization Tool), an open-source Python package containing a comprehensive set of methods typically used to calibrate, analyze, and optimize parameters for a wide range of ecological models. SPOTPY currently contains eight widely used algorithms and 11 objective functions, and can sample from eight parameter distributions. SPOTPY has a model-independent structure and can be run in parallel, from a workstation to large computation clusters, using the Message Passing Interface (MPI). We tested SPOTPY in five case studies: parameterizing the Rosenbrock, Griewank, and Ackley functions; a one-dimensional physically based soil moisture routine, in which we searched for parameters of the van Genuchten-Mualem function; and the calibration of a biogeochemistry model with different objective functions. The case studies show that the implemented SPOTPY methods can be applied to any model with only a minimal amount of code, for maximal power of parameter optimization. They further show the benefit of having one package at hand that includes a number of well-performing parameter search methods, since not every case study can be solved satisfactorily with every algorithm or every objective function.
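As a rough sketch of the first case study, the loop below draws uniform parameter samples for the two-dimensional Rosenbrock function and keeps the best run. SPOTPY wraps exactly this sample-evaluate-store pattern behind a model-independent setup class; the sampler here is plain random search, an assumed stand-in rather than one of SPOTPY's eight algorithms:

```python
import random

# Rosenbrock function, used as a toy calibration target in the paper's
# first case study; its minimum value is 0 at (1, 1).
def rosenbrock(x, y):
    return (1 - x) ** 2 + 100 * (y - x ** 2) ** 2

# Minimal stand-in for a SPOTPY-style sampler: draw parameter sets from
# uniform priors, evaluate the objective function, keep the best run.
random.seed(42)
best = None
for _ in range(20000):
    x = random.uniform(-2.0, 2.0)
    y = random.uniform(-1.0, 3.0)
    obj = rosenbrock(x, y)
    if best is None or obj < best[0]:
        best = (obj, x, y)

print("best objective:", best[0], "at", best[1:])
```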
An Effective Parameter Screening Strategy for High Dimensional Watershed Models
NASA Astrophysics Data System (ADS)
Khare, Y. P.; Martinez, C. J.; Munoz-Carpena, R.
2014-12-01
Watershed simulation models can assess the impacts of natural and anthropogenic disturbances on natural systems. These models have become important tools for tackling a range of water resources problems through their implementation in the formulation and evaluation of Best Management Practices, Total Maximum Daily Loads, and Basin Management Action Plans. For accurate application, watershed models need to be thoroughly evaluated through global uncertainty and sensitivity analyses (UA/SA). However, due to the high dimensionality of these models, such evaluation becomes extremely time- and resource-consuming. Parameter screening, the qualitative separation of important parameters, has been suggested as an essential step before applying rigorous evaluation techniques, such as the Sobol' and Fourier Amplitude Sensitivity Test (FAST) methods, in the UA/SA framework. The method of elementary effects (EE) (Morris, 1991) is one of the most widely used screening methodologies. Some of the common parameter sampling strategies for EE, e.g., Optimized Trajectories [OT] (Campolongo et al., 2007) and Modified Optimized Trajectories [MOT] (Ruano et al., 2012), suffer from inconsistencies in the generated parameter distributions, infeasible sample generation time, etc. In this work, we have formulated a new parameter sampling strategy for parameter screening - Sampling for Uniformity (SU) - based on the principles of uniformity of the generated parameter distributions and spread of the parameter sample. A rigorous multi-criteria evaluation (time, distribution, spread, and screening efficiency) of OT, MOT, and SU indicated that SU is superior to the other sampling strategies. Comparison of the EE-based parameter importance rankings with those of Sobol' helped to quantify the qualitative nature of the EE parameter screening approach, reinforcing the fact that EE should be used only to reduce the resource burden required by FAST/Sobol' analyses, not to replace them.
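The elementary-effects screening step can be illustrated in a few lines; the three-parameter test function below is a hypothetical stand-in (one dominant, one moderate, one nearly inactive parameter), not a watershed model:

```python
import random

# Hypothetical 3-parameter model for illustration: x1 dominates, x3 is
# nearly inactive.  The method of elementary effects (Morris, 1991)
# perturbs one parameter at a time along randomized trajectories and
# ranks parameters by the mean absolute effect mu*.
def model(x):
    return 4.0 * x[0] + 2.0 * x[1] + 0.05 * x[2] + x[0] * x[1]

def elementary_effects(f, k, r, delta=0.25, seed=0):
    rng = random.Random(seed)
    effects = [[] for _ in range(k)]
    for _ in range(r):                       # r randomized trajectories
        x = [rng.uniform(0.0, 1.0 - delta) for _ in range(k)]
        base = f(x)
        for i in rng.sample(range(k), k):    # one-at-a-time moves
            x[i] += delta
            fx = f(x)
            effects[i].append((fx - base) / delta)
            base = fx
    # mu* = mean absolute elementary effect per parameter
    return [sum(abs(e) for e in es) / len(es) for es in effects]

mu_star = elementary_effects(model, k=3, r=50)
ranking = sorted(range(3), key=lambda i: -mu_star[i])
print("mu*:", [round(m, 2) for m in mu_star], "ranking:", ranking)
```

Screening then simply discards the parameters with small mu* before the expensive FAST/Sobol' analysis; the sampling strategies compared in the paper (OT, MOT, SU) differ in how the trajectory base points are chosen.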
Parameter Transferability Across Spatial and Temporal Resolutions in Hydrological Modelling
NASA Astrophysics Data System (ADS)
Melsen, L. A.; Teuling, R.; Torfs, P. J.; Zappa, M.; Mizukami, N.; Clark, M. P.; Uijlenhoet, R.
2015-12-01
Improvements in computational power and data availability have provided new opportunities for hydrological modeling. The increased complexity of hydrological models, however, also leads to time-consuming optimization procedures. Moreover, observations are still required to calibrate the model. Both to decrease the calculation time of the optimization and to be able to apply the model in poorly gauged basins, many studies have focused on the transferability of parameters. We adopted a probabilistic approach to systematically investigate parameter transferability across both temporal and spatial resolution. A Variable Infiltration Capacity model for the Thur basin (1703 km², Switzerland) was set up and run at four different spatial resolutions (1x1 km, 5x5 km, 10x10 km, lumped) and three different temporal resolutions (hourly, daily, monthly). Three objective functions were used to evaluate the model: Kling-Gupta Efficiency (KGE(Q)), Nash-Sutcliffe Efficiency (NSE(Q)), and NSE(logQ). We used a Hierarchical Latin Hypercube Sample (Vorechovsky, 2014) to efficiently sample the most sensitive parameters. The model was run 3150 times, and the best 1% of the runs was selected as behavioral. The overlap in selected behavioral sets for different spatial and temporal resolutions was used as an indicator of parameter transferability. There was a large overlap in selected sets for the different spatial resolutions, implying that parameters were to a large extent transferable across spatial resolutions. The temporal resolution, however, had a larger impact on the parameters; it significantly affected the parameter distributions for at least four out of seven parameters. The parameter values for the monthly time step were found to be substantially different from those for the daily and hourly time steps. This suggests that the output of models calibrated on a monthly time step cannot be interpreted or analysed on an hourly or daily time step. It was also shown that the selected objective
Improved input parameters for diffusion models of skin absorption.
Hansen, Steffi; Lehr, Claus-Michael; Schaefer, Ulrich F
2013-02-01
Using a diffusion model to predict skin absorption requires accurate estimates of input parameters on model geometry, affinity, and transport characteristics. This review summarizes methods to obtain input parameters for diffusion models of skin absorption, focusing on partition and diffusion coefficients. These include experimental methods, extrapolation approaches, and correlations that relate partition and diffusion coefficients to tabulated physico-chemical solute properties. Exhaustive databases on lipid-water and corneocyte protein-water partition coefficients are presented and analyzed to provide improved approximations for estimating these coefficients. The most commonly used estimates of lipid and corneocyte diffusion coefficients are also reviewed. To improve modeling of skin absorption in the future, diffusion models should include the vertical stratum corneum heterogeneity, slow equilibration processes, absorption from complex non-aqueous formulations, and an improved representation of dermal absorption processes. This will require input parameters for which no suitable estimates are yet available.
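For a homogeneous-membrane diffusion model, the partition coefficient K, diffusion coefficient D, and path length h reviewed here combine into a permeability coefficient and a lag time; the numbers below are illustrative assumptions, not recommended values from the review:

```python
# Steady-state Fickian picture behind the input parameters discussed in the
# review: a membrane of thickness h, partition coefficient K (membrane/vehicle)
# and diffusion coefficient D.  All numbers are illustrative assumptions.
h = 15e-4          # stratum corneum thickness, cm (~15 um)
K = 0.8            # lipid-water partition coefficient (dimensionless)
D = 1e-9           # diffusion coefficient, cm^2/s
C_vehicle = 1.0    # solute concentration in the vehicle, mg/cm^3

kp = K * D / h                 # permeability coefficient, cm/s
J_ss = kp * C_vehicle          # steady-state flux, mg/(cm^2 s)
t_lag = h ** 2 / (6.0 * D)     # diffusion lag time, s

print(f"kp = {kp:.3e} cm/s, steady-state flux = {J_ss:.3e} mg/cm^2/s")
print(f"lag time = {t_lag / 3600:.2f} h")
```

This is why the review concentrates on K and D: in the simplest model every prediction reduces to these combinations of the input parameters.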
A six-parameter Iwan model and its application
NASA Astrophysics Data System (ADS)
Li, Yikun; Hao, Zhiming
2016-02-01
The Iwan model is a practical tool for describing the constitutive behavior of joints. In this paper, a six-parameter Iwan model based on a truncated power-law distribution with two Dirac delta functions is proposed, which gives a more comprehensive description of joints than previous Iwan models. Its analytical expressions, including the backbone curve, unloading curves, and energy dissipation, are deduced. Parameter identification procedures and the discretization method are also provided. A model application based on Segalman et al.'s experimental work on bolted joints is carried out. The effects of different numbers of Jenkins elements on the simulation are discussed. The results indicate that the six-parameter Iwan model can accurately reproduce the experimental behavior of joints.
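The discretization into Jenkins elements can be sketched as follows; the slider-strength distribution here is a simple uniform ladder for illustration, not the paper's identified six-parameter density:

```python
# Discretization of an Iwan model as N parallel Jenkins elements, as used in
# the paper to reproduce joint backbone curves.  Stiffness and slider
# strengths below are illustrative assumptions.
N = 200
k = 1.0e6                                            # stiffness per element, N/m
strengths = [10.0 * (i + 1) / N for i in range(N)]   # slider strengths, N

def backbone_force(u):
    """Joint force under monotonic loading to displacement u (m):
    each Jenkins element carries k*u until its slider yields at f_i."""
    return sum(min(k * u, f_i) for f_i in strengths)

# Small displacements: nearly linear (few sliders have slipped).
# Large displacements: force saturates at sum(strengths), i.e. full macro-slip.
for u in (1e-6, 5e-6, 2e-5):
    print(f"u = {u:.1e} m -> F = {backbone_force(u):.1f} N")
```

Increasing N refines the approximation of the continuous slider-strength density; the paper studies exactly this trade-off between the number of Jenkins elements and accuracy.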
Behaviour of the cosmological model with variable deceleration parameter
NASA Astrophysics Data System (ADS)
Tiwari, R. K.; Beesham, A.; Shukla, B. K.
2016-12-01
We consider the Bianchi type-VI0 massive string universe with decaying cosmological constant Λ. To solve Einstein's field equations, we assume that the shear scalar is proportional to the expansion scalar and that the deceleration parameter q is a linear function of the Hubble parameter H, i.e., q = α + βH, which yields the scale factor a = exp((1/β)√(2βt + k₁)). The model expands exponentially with cosmic time t. The value of the cosmological constant Λ is small and positive. We also discuss the physical parameters as well as the jerk parameter j, which predicts that the universe in this model originates as in the ΛCDM model.
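As a quick numerical sanity check of the quoted solution (with the standard definitions H = ȧ/a and q = -aä/ȧ²), the finite-difference computation below confirms that q is indeed a linear function of H for this scale factor; β and k₁ are arbitrary illustrative values:

```python
import math

# Check that a(t) = exp((1/beta) * sqrt(2*beta*t + k1)) makes the
# deceleration parameter q = -a*a''/a'^2 a linear function of the Hubble
# parameter H = a'/a, as stated in the abstract.
beta, k1 = 0.5, 1.0

def a(t):
    return math.exp(math.sqrt(2.0 * beta * t + k1) / beta)

def H_and_q(t, h=1e-4):
    a0, ap, am = a(t), a(t + h), a(t - h)
    da = (ap - am) / (2.0 * h)            # central difference for a'(t)
    dda = (ap - 2.0 * a0 + am) / h ** 2   # central difference for a''(t)
    return da / a0, -a0 * dda / da ** 2

# q - beta*H should come out constant (the intercept alpha of q = alpha + beta*H).
for t in (0.5, 2.0, 5.0):
    H, q = H_and_q(t)
    print(f"t={t}: H={H:.4f}, q={q:.4f}, q - beta*H = {q - beta * H:.4f}")
```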
NASA Technical Reports Server (NTRS)
Sovers, O. J.; Jacobs, C. S.
1994-01-01
This report is a revision of the document Observation Model and Parameter Partials for the JPL VLBI Parameter Estimation Software 'MODEST'---1991, dated August 1, 1991. It supersedes that document and its four previous versions (1983, 1985, 1986, and 1987). A number of aspects of the very long baseline interferometry (VLBI) model were improved from 1991 to 1994. Treatment of tidal effects is extended to model the effects of ocean tides on universal time and polar motion (UTPM), including a default model for nearly diurnal and semidiurnal ocean tidal UTPM variations, and partial derivatives for all (solid and ocean) tidal UTPM amplitudes. The time-honored 'K1 correction' for solid earth tides has been extended to include the analogous frequency-dependent response of five tidal components. Partials of ocean loading amplitudes are now supplied. The Zhu-Mathews-Oceans-Anisotropy (ZMOA) 1990-2 and Kinoshita-Souchay models of nutation are now two of the modeling choices to replace the increasingly inadequate 1980 International Astronomical Union (IAU) nutation series. A rudimentary model of antenna thermal expansion is provided. Two more troposphere mapping functions have been added to the repertoire. Finally, correlations among VLBI observations via the model of Treuhaft and Lanyi improve modeling of the dynamic troposphere. A number of minor misprints in Rev. 4 have been corrected.
NASA Astrophysics Data System (ADS)
Morin, José A.; Ibarra, Borja; Cao, Francisco J.
2016-05-01
Single-molecule manipulation experiments of molecular motors provide essential information about the rate and conformational changes of the steps of the reaction located along the manipulation coordinate. This information is not always sufficient to define a particular kinetic cycle. Recent single-molecule experiments with optical tweezers showed that the DNA unwinding activity of a Phi29 DNA polymerase mutant presents a complex pause behavior, which includes short and long pauses. Here we show that different kinetic models, considering different connections between the active and the pause states, can explain the experimental pause behavior. Both the two independent pause model and the two connected pause model are able to describe the pause behavior of a mutated Phi29 DNA polymerase observed in an optical tweezers single-molecule experiment. For the two independent pause model, all parameters are fixed by the observed data, while for the more general two connected pause model there is a range of parameter values compatible with the observed data (which can be expressed in terms of two of the rates and their force dependencies). This general model includes models with indirect entry and exit to the long-pause state, and also models with cycling in both directions. Additionally, assuming that detailed balance is verified, which forbids cycling, the ranges of the parameter values are reduced (and can then be expressed in terms of one rate and its force dependency). The resulting model interpolates between the independent pause model and the model with indirect entry and exit to the long-pause state.
Parameter Estimation for Groundwater Models under Uncertain Irrigation Data.
Demissie, Yonas; Valocchi, Albert; Cai, Ximing; Brozovic, Nicholas; Senay, Gabriel; Gebremichael, Mekonnen
2015-01-01
The success of modeling groundwater is strongly influenced by the accuracy of the model parameters that are used to characterize the subsurface system. However, the presence of uncertainty and possibly bias in groundwater model source/sink terms may lead to biased estimates of model parameters and model predictions when the standard regression-based inverse modeling techniques are used. This study first quantifies the levels of bias in groundwater model parameters and predictions due to the presence of errors in irrigation data. Then, a new inverse modeling technique called input uncertainty weighted least-squares (IUWLS) is presented for unbiased estimation of the parameters when pumping and other source/sink data are uncertain. The approach uses the concept of generalized least-squares method with the weight of the objective function depending on the level of pumping uncertainty and iteratively adjusted during the parameter optimization process. We have conducted both analytical and numerical experiments, using irrigation pumping data from the Republican River Basin in Nebraska, to evaluate the performance of ordinary least-squares (OLS) and IUWLS calibration methods under different levels of uncertainty of irrigation data and calibration conditions. The result from the OLS method shows the presence of statistically significant (p < 0.05) bias in estimated parameters and model predictions that persist despite calibrating the models to different calibration data and sample sizes. However, by directly accounting for the irrigation pumping uncertainties during the calibration procedures, the proposed IUWLS is able to minimize the bias effectively without adding significant computational burden to the calibration processes.
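The core idea of weighting the least-squares objective by input uncertainty can be sketched on a toy regression; the model, noise levels, and weights below are illustrative assumptions, not the paper's groundwater setup:

```python
import random

# Toy illustration of the idea behind IUWLS: when some observations are
# driven by highly uncertain source/sink inputs, weighting the least-squares
# objective by that input uncertainty reduces the error of the estimate.
random.seed(7)
theta_true = 2.0

def fit(weighted):
    """Fit the slope of y = theta*x from 40 noisy points; the second half
    of the data has 20x larger noise (stand-in for uncertain pumping)."""
    num = den = 0.0
    for i in range(40):
        x = 1.0 + 9.0 * (i % 20) / 19.0
        sigma = 0.1 if i < 20 else 2.0          # second half: uncertain inputs
        y = theta_true * x + random.gauss(0.0, sigma)
        w = 1.0 / sigma ** 2 if weighted else 1.0
        num += w * x * y
        den += w * x * x
    return num / den

# Compare mean absolute estimation error over replicates.
mae_ols = sum(abs(fit(False) - theta_true) for _ in range(200)) / 200
mae_wls = sum(abs(fit(True) - theta_true) for _ in range(200)) / 200
print(f"OLS error: {mae_ols:.4f}, uncertainty-weighted error: {mae_wls:.4f}")
```

The paper's IUWLS goes further by adjusting the weights iteratively during calibration as the pumping-uncertainty level is refined, but the benefit comes from the same generalized least-squares mechanism shown here.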
Parameters of cosmological models and recent astronomical observations
Sharov, G. S.; Vorontsova, E. G.
2014-10-01
For different gravitational models we consider limitations on their parameters coming from recent observational data for type Ia supernovae, baryon acoustic oscillations, and from 34 data points for the Hubble parameter H(z) depending on redshift. We calculate parameters of 3 models describing the accelerated expansion of the universe: the ΛCDM model, the model with generalized Chaplygin gas (GCG), and the multidimensional model of I. Pahwa, D. Choudhury and T.R. Seshadri. In particular, for the ΛCDM model the 1σ estimates of parameters are: H0 = 70.262 ± 0.319 km s⁻¹ Mpc⁻¹, Ωm = 0.276+0.009−0.008, ΩΛ = 0.769 ± 0.029, Ωk = −0.045 ± 0.032. The GCG model under the restriction α ≥ 0 reduces to the ΛCDM model. Predictions of the multidimensional model depend essentially on 3 data points for H(z) with z ≥ 2.3.
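The quoted ΛCDM parameters can be plugged straight into the Friedmann relation H(z) = H0·sqrt(Ωm(1+z)³ + Ωk(1+z)² + ΩΛ) to reproduce the fitted expansion history (central values only; uncertainties dropped):

```python
import math

# Friedmann relation evaluated at the abstract's best-fit LambdaCDM values.
H0, Om, OL, Ok = 70.262, 0.276, 0.769, -0.045   # km/s/Mpc and density parameters
# Note Om + OL + Ok = 1.000: the density parameters sum to unity by definition.

def H(z):
    return H0 * math.sqrt(Om * (1 + z) ** 3 + Ok * (1 + z) ** 2 + OL)

for z in (0.0, 0.5, 1.0, 2.3):
    print(f"H({z}) = {H(z):.1f} km/s/Mpc")
```

z = 2.3 is included deliberately: the abstract notes that the high-redshift H(z) points at z ≥ 2.3 are what drive the multidimensional model's predictions.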
2012-01-01
Background: Bronchodilator response in patients with asthma is evaluated based on the post-bronchodilator increase in forced expiratory volume in one second (FEV1) and forced vital capacity (FVC). However, the need for additional parameters, mainly among patients with severe asthma, has already been demonstrated. Methods: The aim of this study was to evaluate the usefulness of vital capacity (VC) and inspiratory capacity (IC) for evaluating bronchodilator response in asthma patients with persistent airflow obstruction. The 43 asthma patients enrolled in the study were stratified into moderate or severe airflow obstruction groups based on baseline FEV1. All patients performed a 6-minute walk test (6MWT) before and after the bronchodilator (BD). A bipolar visual analogue scale (VAS) post-BD was used to assess clinical effect. The correlation between VC and IC and clinical response, determined by the VAS and the 6MWT, was investigated. Results: Patients in the severe group presented: 1) a greater bronchodilator response in VC (48% vs 15%, p = 0.02), 2) a significant correlation between VC variation and the reduction in air trapping (Rs = 0.70; p < 0.01), 3) a significant agreement between VC and VAS score (kappa = 0.57; p < 0.01). There was no correlation between IC and the reduction in air trapping or clinical data. Conclusions: VC may be a useful additional parameter for evaluating bronchodilator response in asthma patients with severe airflow obstruction.
Environmental Transport Input Parameters for the Biosphere Model
M. Wasiolek
2004-09-10
This analysis report is one of the technical reports documenting the Environmental Radiation Model for Yucca Mountain, Nevada (ERMYN), a biosphere model supporting the total system performance assessment for the license application (TSPA-LA) for the geologic repository at Yucca Mountain. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows relationships among the reports developed for biosphere modeling and biosphere abstraction products for the TSPA-LA, as identified in the ''Technical Work Plan for Biosphere Modeling and Expert Support'' (BSC 2004 [DIRS 169573]) (TWP). This figure provides an understanding of how this report contributes to biosphere modeling in support of the license application (LA). This report is one of the five reports that develop input parameter values for the biosphere model. The ''Biosphere Model Report'' (BSC 2004 [DIRS 169460]) describes the conceptual model and the mathematical model. The input parameter reports, shown to the right of the Biosphere Model Report in Figure 1-1, contain detailed description of the model input parameters. The output of this report is used as direct input in the ''Nominal Performance Biosphere Dose Conversion Factor Analysis'' and in the ''Disruptive Event Biosphere Dose Conversion Factor Analysis'' that calculate the values of biosphere dose conversion factors (BDCFs) for the groundwater and volcanic ash exposure scenarios, respectively. The purpose of this analysis was to develop biosphere model parameter values related to radionuclide transport and accumulation in the environment. These parameters support calculations of radionuclide concentrations in the environmental media (e.g., soil, crops, animal products, and air) resulting from a given radionuclide concentration at the source of contamination (i.e., either in groundwater or in volcanic ash). The analysis was performed in accordance with the TWP (BSC 2004 [DIRS 169573]).
Inhalation Exposure Input Parameters for the Biosphere Model
K. Rautenstrauch
2004-09-10
This analysis is one of 10 reports that support the Environmental Radiation Model for Yucca Mountain, Nevada (ERMYN) biosphere model. The ''Biosphere Model Report'' (BSC 2004 [DIRS 169460]) describes in detail the conceptual model as well as the mathematical model and its input parameters. This report documents development of input parameters for the biosphere model that are related to atmospheric mass loading and supports the use of the model to develop biosphere dose conversion factors (BDCFs). The biosphere model is one of a series of process models supporting the total system performance assessment (TSPA) for a Yucca Mountain repository. Inhalation Exposure Input Parameters for the Biosphere Model is one of five reports that develop input parameters for the biosphere model. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows the interrelationships among the products (i.e., analysis and model reports) developed for biosphere modeling, and the plan for development of the biosphere abstraction products for TSPA, as identified in the Technical Work Plan for Biosphere Modeling and Expert Support (BSC 2004 [DIRS 169573]). This analysis report defines and justifies values of mass loading for the biosphere model. Mass loading is the total mass concentration of resuspended particles (e.g., dust, ash) in a volume of air. Mass loading values are used in the air submodel of ERMYN to calculate concentrations of radionuclides in air inhaled by a receptor and concentrations in air surrounding crops. Concentrations in air to which the receptor is exposed are then used in the inhalation submodel to calculate the dose contribution to the receptor from inhalation of contaminated airborne particles. Concentrations in air surrounding plants are used in the plant submodel to calculate the concentrations of radionuclides in foodstuffs contributed from uptake by foliar interception.
Parameter uncertainty analysis of a biokinetic model of caesium
Li, W. B.; Klein, W.; Blanchardon, Eric; Puncher, M; Leggett, Richard Wayne; Oeh, U.; Breustedt, B.; Nosske, Dietmar; Lopez, M.
2014-04-17
Parameter uncertainties for the biokinetic model of caesium (Cs) developed by Leggett et al. were inventoried and evaluated. The methods of parameter uncertainty analysis were used to assess the uncertainties of model predictions under assumed model parameter uncertainties and distributions. Furthermore, the importance of individual model parameters was assessed by means of sensitivity analysis. The calculated uncertainties of model predictions were compared with human data of Cs measured in blood and in the whole body. It was found that propagating the derived uncertainties in model parameter values reproduced the range of bioassay data observed in human subjects at different times after intake. The maximum ranges, expressed as uncertainty factors (UFs) (defined as the square root of the ratio between the 97.5th and 2.5th percentiles) of blood clearance, whole-body retention, and urinary excretion of Cs predicted at early times after intake were, respectively: 1.5, 1.0 and 2.5 on the first day; 1.8, 1.1 and 2.4 at Day 10; and 1.8, 2.0 and 1.8 at Day 100. For late times (1000 d) after intake, the UFs increased to 43, 24 and 31, respectively. The model parameters of transfer rates between kidneys and blood, and between muscle and blood, and the rate of transfer from kidneys to urinary bladder content are most influential for the blood clearance and the whole-body retention of Cs. For the urinary excretion, the transfer rates from urinary bladder content to urine and from kidneys to urinary bladder content have the greatest impact. The implication and effect of the larger uncertainty factor of 43 in whole-body retention at later times (after Day 500) on the estimated equivalent and effective doses will be explored in subsequent work in the framework of EURADOS.
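The paper's uncertainty factor is simply UF = sqrt(97.5th percentile / 2.5th percentile); the snippet below computes it for a synthetic lognormal spread of predictions (an assumed distribution used only to illustrate the definition, not the Cs model output):

```python
import math
import random

# UF = sqrt(p97.5 / p2.5) of a predicted quantity, illustrated on synthetic
# lognormal "retention" predictions with an assumed spread sigma = 0.8.
random.seed(3)
samples = sorted(math.exp(random.gauss(0.0, 0.8)) for _ in range(10000))

def percentile(sorted_xs, p):
    idx = int(round(p / 100.0 * (len(sorted_xs) - 1)))
    return sorted_xs[idx]

uf = math.sqrt(percentile(samples, 97.5) / percentile(samples, 2.5))
# For a lognormal with sigma = 0.8, the exact value is exp(1.96 * sigma).
print(f"UF = {uf:.2f} (theory: {math.exp(1.96 * 0.8):.2f})")
```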
Environmental Transport Input Parameters for the Biosphere Model
M. A. Wasiolek
2003-06-27
This analysis report is one of the technical reports documenting the Environmental Radiation Model for Yucca Mountain, Nevada (ERMYN), a biosphere model supporting the total system performance assessment (TSPA) for the geologic repository at Yucca Mountain. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows relationships among the reports developed for biosphere modeling and biosphere abstraction products for the TSPA, as identified in the ''Technical Work Plan for Biosphere Modeling and Expert Support'' (TWP) (BSC 2003 [163602]). Some documents in Figure 1-1 may be under development and not available when this report is issued. This figure provides an understanding of how this report contributes to biosphere modeling in support of the license application (LA), but access to the listed documents is not required to understand the contents of this report. This report is one of the reports that develop input parameter values for the biosphere model. The ''Biosphere Model Report'' (BSC 2003 [160699]) describes the conceptual model, the mathematical model, and the input parameters. The purpose of this analysis is to develop biosphere model parameter values related to radionuclide transport and accumulation in the environment. These parameters support calculations of radionuclide concentrations in the environmental media (e.g., soil, crops, animal products, and air) resulting from a given radionuclide concentration at the source of contamination (i.e., either in groundwater or volcanic ash). The analysis was performed in accordance with the TWP (BSC 2003 [163602]). This analysis develops values of parameters associated with many features, events, and processes (FEPs) applicable to the reference biosphere (DTN: M00303SEPFEPS2.000 [162452]), which are addressed in the biosphere model (BSC 2003 [160699]). The treatment of these FEPs is described in BSC (2003 [160699], Section 6.2). Parameter values
Amin Yavari, S; Chai, Y C; Böttger, A J; Wauthle, R; Schrooten, J; Weinans, H; Zadpoor, A A
2015-06-01
Anodizing can be used for bio-functionalization of the surfaces of titanium alloys. In this study, we use anodizing to create nanotubes on the surface of porous titanium alloy bone substitutes manufactured using selective laser melting. Different sets of anodizing parameters (voltage: 10 or 20 V; anodizing time: 30 min to 3 h) are used for anodizing porous titanium structures that were later heat treated at 500°C. The nanotopographical features are examined using electron microscopy, while the bioactivity of the anodized surfaces is measured using immersion tests in simulated body fluid (SBF). Moreover, the effects of anodizing and heat treatment on the performance of one representative anodized porous titanium structure are evaluated using in vitro cell culture assays with human periosteum-derived cells (hPDCs). It is shown that while anodizing with different parameters results in very different nanotopographical features, i.e., nanotubes in the range of 20 to 55 nm, the anodized surfaces have limited apatite-forming ability regardless of the applied anodizing parameters. The results of in vitro cell culture show that both anodizing, and thus the generation of regular nanotopographical features, and heat treatment improve the cell culture response of porous titanium. In particular, cell proliferation, measured using metabolic activity and DNA content, was improved for anodized and heat-treated as well as for anodized but not heat-treated specimens. Heat treatment additionally improved the cell attachment of porous titanium surfaces and upregulated the expression of osteogenic markers. Anodized but not heat-treated specimens showed some limited signs of upregulated expression of osteogenic markers. In conclusion, while varying the anodizing parameters creates different nanotube structures, it does not improve the apatite-forming ability of porous titanium. However, both anodizing and heat treatment at 500°C improve the cell culture response of porous titanium.
Hubble Expansion Parameter in a New Model of Dark Energy
NASA Astrophysics Data System (ADS)
Saadat, Hassan
2012-01-01
In this study, we consider a new model of dark energy based on a Taylor expansion of its density and calculate the Hubble expansion parameter for various parameterizations of the equation of state. This model is useful for probing a possible evolution of the dark energy component in comparison with current observational data.
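As a rough illustration of how a Hubble expansion parameter follows from a dark-energy equation-of-state parameterization, the sketch below uses the common CPL form w(z) = w0 + wa·z/(1+z), for which the dark-energy density evolution has a closed form. This is a generic textbook construction, not the Taylor-expansion model of the paper; all parameter values are illustrative.

```python
import numpy as np

def hubble(z, H0=70.0, Om=0.3, w0=-1.0, wa=0.0):
    """H(z) in a flat universe for the CPL parameterization w(z) = w0 + wa*z/(1+z)."""
    Ode = 1.0 - Om  # flatness assumed
    # Closed form of exp(3 * int_0^z (1 + w(z'))/(1 + z') dz') for CPL:
    rho = (1.0 + z) ** (3.0 * (1.0 + w0 + wa)) * np.exp(-3.0 * wa * z / (1.0 + z))
    return H0 * np.sqrt(Om * (1.0 + z) ** 3 + Ode * rho)

# With w0 = -1, wa = 0 this reduces to flat LambdaCDM.
print(hubble(0.0), hubble(1.0))
```

For w0 = -1, wa = 0 the dark-energy term is constant, so H(1) = H0·sqrt(0.3·8 + 0.7); other (w0, wa) choices shift H(z) and can be compared against observational data.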
Separability of Item and Person Parameters in Response Time Models.
ERIC Educational Resources Information Center
Van Breukelen, Gerard J. P.
1997-01-01
Discusses two forms of separability of item and person parameters in the context of response time models. The first is "separate sufficiency," and the second is "ranking independence." For each form a theorem stating sufficient conditions is proved. The two forms are shown to include several cases of models from psychometric…
Multiple Model Parameter Adaptive Control for In-Flight Simulation.
1988-03-01
dynamics of an aircraft. The plant is controllable by a proportional-plus-integral (PI) control law. This section describes two methods of calculating... adaptive model-following PI control law [20-24]. The control law bases its control gains upon the parameters of a linear difference equation model which
Dynamic Factor Analysis Models with Time-Varying Parameters
ERIC Educational Resources Information Center
Chow, Sy-Miin; Zu, Jiyun; Shifren, Kim; Zhang, Guangjian
2011-01-01
Dynamic factor analysis models with time-varying parameters offer a valuable tool for evaluating multivariate time series data with time-varying dynamics and/or measurement properties. We use the Dynamic Model of Activation proposed by Zautra and colleagues (Zautra, Potter, & Reich, 1997) as a motivating example to construct a dynamic factor…
Uncertainty Analysis and Parameter Estimation For Nearshore Hydrodynamic Models
NASA Astrophysics Data System (ADS)
Ardani, S.; Kaihatu, J. M.
2012-12-01
Numerical models represent deterministic approaches used for the relevant physical processes in the nearshore. The complexity of the physics of the model and the uncertainty involved in the model inputs compel us to apply a stochastic approach to analyze the robustness of the model. The Bayesian inverse problem is one powerful way to estimate the important input model parameters (determined by a priori sensitivity analysis) and can be used for uncertainty analysis of the outputs. Bayesian techniques can be used to find the range of most probable parameters based on the probability of the observed data and the residual errors. In this study, the effect of input data involving lateral (Neumann) boundary conditions, bathymetry and offshore wave conditions on nearshore numerical models is considered. Monte Carlo simulation is applied to a deterministic numerical model (the Delft3D modeling suite for coupled waves and flow) for the resulting uncertainty analysis of the outputs (wave height, flow velocity, mean sea level, etc.). Uncertainty analysis of outputs is performed by random sampling from the input probability distribution functions and running the model as required until convergence to consistent results is achieved. The case study used in this analysis is the Duck94 experiment, which was conducted at the U.S. Army Field Research Facility at Duck, North Carolina, USA in the fall of 1994. The joint probability of model parameters relevant for the Duck94 experiments will be found using the Bayesian approach. We will further show that, by using Bayesian techniques to estimate the optimized model parameters as inputs and applying them for uncertainty analysis, we can obtain more consistent results than using the prior information for the input data, meaning that the variation of the uncertain parameters will be decreased and the probability of the observed data will improve as well. Keywords: Monte Carlo Simulation, Delft3D, uncertainty analysis, Bayesian techniques
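The Monte Carlo uncertainty-propagation step described above (sample uncertain inputs from their distributions, run the deterministic model, summarize output spread) can be sketched as follows. A trivial algebraic stand-in replaces the expensive Delft3D run; the shoaling-like relation, the input distributions, and all numbers are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for an expensive deterministic model (e.g. a wave solver):
# a hypothetical relation mapping offshore wave height and a local depth
# (bathymetry) parameter to a nearshore wave height.
def model(h_offshore, depth):
    return h_offshore * (depth / 10.0) ** -0.25

# Sample uncertain inputs from their assumed probability distributions...
h_off = rng.normal(1.5, 0.2, size=5000)    # offshore wave height [m]
depth = rng.uniform(8.0, 12.0, size=5000)  # local depth [m]

# ...propagate them through the model and summarize output uncertainty.
h_near = model(h_off, depth)
print(h_near.mean(), h_near.std())
```

In the real workflow each "model" call is a full simulation, so the sample count is governed by a convergence check on the output statistics rather than fixed in advance.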
Addition of a 5/cm Spectral Resolution Band Model Option to LOWTRAN5.
1980-10-01
(Report-form boilerplate garbled in extraction; the recoverable content indicates: (1) the addition of a 5/cm spectral resolution band model option to LOWTRAN5; (2) the addition of temperature-dependent molecular absorption coefficients; and (3) the use of a multi-parameter Lorentz band model. The report also compares LOWTRAN5 and LOWTRAN5(IMOD) to measurements and documents modifications to LOWTRAN5.)
NB-PLC channel modelling with cyclostationary noise addition & OFDM implementation for smart grid
NASA Astrophysics Data System (ADS)
Thomas, Togis; Gupta, K. K.
2016-03-01
Power line communication (PLC) technology can be a viable solution for future ubiquitous networks because it provides a cheaper alternative to other wired technologies currently used for communication. In the smart grid, PLC is used to support low-rate communication on the low voltage (LV) distribution network. In this paper, we propose a channel model of narrowband (NB) PLC in the frequency range 5 kHz to 500 kHz using ABCD parameters with cyclostationary noise addition. The behaviour of the channel was studied by the addition of an 11 kV/230 V transformer and by varying the load and the load location. Bit error rate (BER) versus signal-to-noise ratio (SNR) was plotted for the proposed model by employing OFDM. Our simulation results based on the proposed channel model show acceptable performance in terms of bit error rate versus signal-to-noise ratio, which enables the communication required for smart grid applications.
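The ABCD-parameter approach mentioned above models the line as a cascade of two-port sections whose transmission matrices multiply. A minimal sketch, with illustrative per-metre RLGC values (not measured power-line parameters) and a hypothetical 50-ohm load:

```python
import numpy as np

# ABCD (transmission) matrix of a uniform line segment from per-metre
# RLGC parameters; values here are illustrative only.
def line_abcd(length, f, R=0.1, L=1e-6, G=1e-9, C=1e-10):
    w = 2 * np.pi * f
    Z = R + 1j * w * L              # series impedance per metre
    Y = G + 1j * w * C              # shunt admittance per metre
    g = np.sqrt(Z * Y)              # propagation constant
    Zc = np.sqrt(Z / Y)             # characteristic impedance
    return np.array([[np.cosh(g * length), Zc * np.sinh(g * length)],
                     [np.sinh(g * length) / Zc, np.cosh(g * length)]])

f = 100e3  # 100 kHz, inside the 5-500 kHz NB-PLC band
# Cascade two segments by matrix multiplication.
T = line_abcd(200.0, f) @ line_abcd(100.0, f)
A, B = T[0]
ZL = 50.0                           # load impedance (illustrative)
H = ZL / (A * ZL + B)               # source-to-load voltage transfer function
print(abs(H))
```

A useful sanity check on the cascade is that two segments of 200 m and 100 m multiply to exactly the ABCD matrix of a single 300 m segment, and that the matrix determinant is 1 (reciprocity). Transformers and tapped loads enter the same way, as additional ABCD factors in the product.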
Bayesian methods for characterizing unknown parameters of material models
Emery, J. M.; Grigoriu, M. D.; Field Jr., R. V.
2016-02-04
A Bayesian framework is developed for characterizing the unknown parameters of probabilistic models for material properties. In this framework, the unknown parameters are viewed as random and described by their posterior distributions obtained from prior information and measurements of quantities of interest that are observable and depend on the unknown parameters. The proposed Bayesian method is applied to characterize an unknown spatial correlation of the conductivity field in the definition of a stochastic transport equation and to solve this equation by Monte Carlo simulation and stochastic reduced order models (SROMs). As a result, the Bayesian method is also employed to characterize unknown parameters of material properties for laser welds from measurements of peak forces sustained by these welds.
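The core Bayesian viewpoint in the abstract above (an unknown parameter treated as random, its prior updated by noisy measurements of an observable) can be illustrated with the simplest conjugate case. The normal-normal model and all numbers below are purely for illustration, not the paper's transport or weld problems.

```python
import numpy as np

# Unknown scalar material parameter with a normal prior, observed through
# measurements corrupted by normal noise of known variance. The posterior
# is again normal, with precision (inverse variance) adding up.
prior_mean, prior_var = 10.0, 4.0     # prior belief about the parameter
noise_var = 1.0                       # measurement-noise variance
data = np.array([11.2, 10.8, 11.5])   # hypothetical measurements

n = len(data)
post_var = 1.0 / (1.0 / prior_var + n / noise_var)
post_mean = post_var * (prior_mean / prior_var + data.sum() / noise_var)
print(post_mean, post_var)
```

Each measurement tightens the posterior (post_var < prior_var) and pulls the mean toward the data; in the non-conjugate settings of the abstract the same update is computed numerically, e.g. by Monte Carlo sampling.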
Soil-related Input Parameters for the Biosphere Model
A. J. Smith
2003-07-02
This analysis is one of the technical reports containing documentation of the Environmental Radiation Model for Yucca Mountain Nevada (ERMYN), a biosphere model supporting the Total System Performance Assessment (TSPA) for the geologic repository at Yucca Mountain. The biosphere model is one of a series of process models supporting the Total System Performance Assessment (TSPA) for the Yucca Mountain repository. A graphical representation of the documentation hierarchy for the ERMYN biosphere model is presented in Figure 1-1. This figure shows the interrelationships among the products (i.e., analysis and model reports) developed for biosphere modeling, and the plan for development of the biosphere abstraction products for TSPA, as identified in the ''Technical Work Plan: for Biosphere Modeling and Expert Support'' (BSC 2003 [163602]). It should be noted that some documents identified in Figure 1-1 may be under development at the time this report is issued and therefore not available. This figure is included to provide an understanding of how this analysis report contributes to biosphere modeling in support of the license application, and is not intended to imply that access to the listed documents is required to understand the contents of this report. This report, ''Soil Related Input Parameters for the Biosphere Model'', is one of the five analysis reports that develop input parameters for use in the ERMYN model. This report is the source documentation for the six biosphere parameters identified in Table 1-1. ''The Biosphere Model Report'' (BSC 2003 [160699]) describes in detail the conceptual model as well as the mathematical model and its input parameters. The purpose of this analysis was to develop the biosphere model parameters needed to evaluate doses from pathways associated with the accumulation and depletion of radionuclides in the soil. These parameters support the calculation of radionuclide concentrations in soil from on-going irrigation and ash
Automatic Determination of the Conic Coronal Mass Ejection Model Parameters
NASA Technical Reports Server (NTRS)
Pulkkinen, A.; Oates, T.; Taktakishvili, A.
2009-01-01
Characterization of the three-dimensional structure of solar transients using incomplete plane-of-sky data is a difficult problem whose solutions have potential for societal benefit in terms of space weather applications. In this paper transients are characterized in three dimensions by means of the conic coronal mass ejection (CME) approximation. A novel method for the automatic determination of cone model parameters from observed halo CMEs is introduced. The method uses both standard image processing techniques to extract the CME mass from white-light coronagraph images and a novel inversion routine providing the final cone parameters. A bootstrap technique is used to provide model parameter distributions. When combined with heliospheric modeling, the cone model parameter distributions will provide direct means for ensemble predictions of transient propagation in the heliosphere. An initial validation of the automatic method is carried out by comparison to manually determined cone model parameters. It is shown using 14 halo CME events that there is reasonable agreement, especially between the heliocentric locations of the cones derived with the two methods. It is argued that both the heliocentric locations and the opening half-angles of the automatically determined cones may be more realistic than those obtained from the manual analysis.
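The bootstrap step mentioned above (resample the data with replacement, refit, and read parameter uncertainty off the spread of refits) can be sketched generically. Here a simple mean of hypothetical per-frame half-angle estimates stands in for the full inversion routine; the data are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical per-frame estimates of a cone half-angle [degrees];
# in the real method each "fit" would be the cone-model inversion.
angles = rng.normal(40.0, 5.0, size=30)

# Bootstrap: refit on resampled data many times, collect the estimates.
boot = np.array([rng.choice(angles, size=len(angles), replace=True).mean()
                 for _ in range(2000)])
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"half-angle: {angles.mean():.1f} deg, 95% CI [{lo:.1f}, {hi:.1f}]")
```

The resulting distribution (rather than a single point estimate) is what feeds ensemble predictions of transient propagation downstream.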
Parameter Estimation for Single Diode Models of Photovoltaic Modules
Hansen, Clifford
2015-03-01
Many popular models for photovoltaic system performance employ a single diode model to compute the I-V curve for a module or string of modules at given irradiance and temperature conditions. A single diode model requires a number of parameters to be estimated from measured I-V curves. Many available parameter estimation methods use only short circuit, open circuit and maximum power points for a single I-V curve at standard test conditions together with temperature coefficients determined separately for individual cells. In contrast, module testing frequently records I-V curves over a wide range of irradiance and temperature conditions which, when available, should also be used to parameterize the performance model. We present a parameter estimation method that makes use of a full range of available I-V curves. We verify the accuracy of the method by recovering known parameter values from simulated I-V curves. We validate the method by estimating model parameters for a module using outdoor test data and predicting the outdoor performance of the module.
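For reference, the standard five-parameter single-diode equation the abstract refers to is implicit in the current I, so computing a point on the I-V curve already requires a root solve. The sketch below does this by bisection; the parameter values are illustrative, not fitted to any real module.

```python
import math

# Single-diode model: I = IL - I0*(exp((V + I*Rs)/(n*Ns*Vth)) - 1) - (V + I*Rs)/Rsh
# IL: photocurrent, I0: saturation current, Rs/Rsh: series/shunt resistance,
# n: ideality factor, Ns: cells in series, Vth: thermal voltage at 25 C.
IL, I0, Rs, Rsh, n = 8.0, 1e-9, 0.3, 300.0, 1.2
Ns, Vth = 60, 0.02569

def current(V):
    """Solve the implicit diode equation for I at a given voltage (bisection)."""
    f = lambda I: IL - I0 * (math.exp((V + I * Rs) / (n * Ns * Vth)) - 1.0) \
                     - (V + I * Rs) / Rsh - I
    lo, hi = -1.0, IL + 1.0      # f is strictly decreasing in I on this bracket
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if f(mid) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

Isc = current(0.0)  # short-circuit current, close to IL here
print(Isc)
```

Parameter estimation then amounts to choosing (IL, I0, Rs, Rsh, n) so that curves like this match measured I-V data across irradiance and temperature conditions.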
The parameter landscape of a mammalian circadian clock model
NASA Astrophysics Data System (ADS)
Jolley, Craig; Ueda, Hiroki
2013-03-01
In mammals, an intricate system of feedback loops enables autonomous, robust oscillations synchronized with the daily light/dark cycle. Based on recent experimental evidence, we have developed a simplified dynamical model and parameterized it by compiling experimental data on the amplitude, phase, and average baseline of clock gene oscillations. Rather than identifying a single "optimal" parameter set, we used Monte Carlo sampling to explore the fitting landscape. The resulting ensemble of model parameter sets is highly anisotropic, with very large variances along some (non-trivial) linear combinations of parameters and very small variances along others. This suggests that our model exhibits "sloppy" features that have previously been identified in various multi-parameter fitting problems. We will discuss the implications of this model fitting behavior for the reliability of both individual parameter estimates and systems-level predictions of oscillator characteristics, as well as the impact of experimental constraints. The results of this study are likely to be important both for improved understanding of the mammalian circadian oscillator and as a test case for more general questions about the features of systems biology models.
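The "sloppy" anisotropy described above is usually diagnosed from the eigenvalue spectrum of the parameter-ensemble covariance: stiff directions (tiny variance) and sloppy directions (huge variance) differ by orders of magnitude. A synthetic two-parameter illustration, with the ensemble fabricated for the purpose rather than drawn from any clock model:

```python
import numpy as np

rng = np.random.default_rng(2)

# Build a hypothetical ensemble of acceptable parameter sets: one tightly
# constrained combination (stiff) and one poorly constrained one (sloppy),
# mixed into the observable parameters by a rotation so that neither axis
# is itself stiff or sloppy.
stiff = rng.normal(0.0, 0.01, size=(1000, 1))
sloppy = rng.normal(0.0, 10.0, size=(1000, 1))
theta = np.pi / 6
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
params = np.hstack([stiff, sloppy]) @ R.T

# Eigenvalues of the ensemble covariance reveal the anisotropy.
evals = np.linalg.eigvalsh(np.cov(params.T))
print(evals)
```

Individual parameter variances here look moderate, while the eigen-spectrum spans many orders of magnitude, which is why single-parameter error bars can badly misrepresent what the data actually constrain.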
Inducible mouse models illuminate parameters influencing epigenetic inheritance.
Wan, Mimi; Gu, Honggang; Wang, Jingxue; Huang, Haichang; Zhao, Jiugang; Kaundal, Ravinder K; Yu, Ming; Kushwaha, Ritu; Chaiyachati, Barbara H; Deerhake, Elizabeth; Chi, Tian
2013-02-01
Environmental factors can stably perturb the epigenome of exposed individuals and even that of their offspring, but the pleiotropic effects of these factors have posed a challenge for understanding the determinants of mitotic or transgenerational inheritance of the epigenetic perturbation. To tackle this problem, we manipulated the epigenetic states of various target genes using a tetracycline-dependent transcription factor. Remarkably, transient manipulation at appropriate times during embryogenesis led to aberrant epigenetic modifications in the ensuing adults regardless of the modification patterns, target gene sequences or locations, and despite lineage-specific epigenetic programming that could reverse the epigenetic perturbation, thus revealing extraordinary malleability of the fetal epigenome, which has implications for 'metastable epialleles'. However, strong transgenerational inheritance of these perturbations was observed only at transgenes integrated at the Col1a1 locus, where both activating and repressive chromatin modifications were heritable for multiple generations; such a locus is unprecedented. Thus, in our inducible animal models, mitotic inheritance of epigenetic perturbation seems critically dependent on the timing of the perturbation, whereas transgenerational inheritance additionally depends on the location of the perturbation. In contrast, other parameters examined, particularly the chromatin modification pattern and DNA sequence, appear irrelevant.
Evaluation of Personnel Parameters in Software Cost Estimating Models
2007-11-02
(Garbled excerpt; recoverable content: with Analyst Capability (ACAP) at 1.42, all other parameters would be set to the nominal value of one; the effort multiplier will be a fixed value if the model uses linear... Table 8, "COSTAR Trials For Multiplier Calculation", covers runs over the ACAP, PCAP, PCON, APEX, PLEX, and LTEX effort multipliers. Table 9, "COCOMO II Personnel Parameters Effort Multipliers", gives for Analyst Capability (ACAP) the values 1.42 (lowest), 1.00 (nominal), and 0.71 (highest).)
Dagar, S S; Singh, N; Goel, N; Kumar, S; Puniya, A K
2015-01-01
In the present study, rumen microbial groups, i.e. total rumen microbes (TRM), total anaerobic fungi (TAF), avicel-enriched bacteria (AEB) and neutral detergent fibre-enriched bacteria (NEB), were evaluated for wheat straw (WS) degradability and different fermentation parameters in vitro. The highest WS degradation was shown for TRM, followed by TAF and NEB, and the least by AEB. Similar patterns were observed for total gas production and short chain fatty acid profiles. Overall, TAF emerged as the most potent individual microbial group. In order to enhance the fibrolytic and rumen fermentation potential of TAF, we evaluated 18 plant feed additives in vitro. Among these, six plant additives, namely Albizia lebbeck, Alstonia scholaris, Bacopa monnieri, Lawsonia inermis, Psidium guajava and Terminalia arjuna, considerably improved WS degradation by TAF. Further evaluation showed A. lebbeck to be the best feed additive. The study revealed that TAF plays a significant role in WS degradation and that their fibrolytic activities can be improved by inclusion of A. lebbeck in the fermentation medium. Further studies are warranted to elucidate its active constituents, its effect on the fungal population and its in vivo potential in the animal system.
ERIC Educational Resources Information Center
Lai, Keke; Kelley, Ken
2011-01-01
In addition to evaluating a structural equation model (SEM) as a whole, often the model parameters are of interest and confidence intervals for those parameters are formed. Given a model with a good overall fit, it is entirely possible for the targeted effects of interest to have very wide confidence intervals, thus giving little information about…
Application of physical parameter identification to finite-element models
NASA Technical Reports Server (NTRS)
Bronowicki, Allen J.; Lukich, Michael S.; Kuritz, Steven P.
1987-01-01
The time domain parameter identification method described previously is applied to TRW's Large Space Structure Truss Experiment. Only control sensors and actuators are employed in the test procedure. The fit of the linear structural model to the test data is improved by more than an order of magnitude using a physically reasonable parameter set. The electro-magnetic control actuators are found to contribute significant damping due to a combination of eddy current and back electro-motive force (EMF) effects. Uncertainties in both estimated physical parameters and modal behavior variables are given.
Parameter identifiability and estimation of HIV/AIDS dynamic models.
Wu, Hulin; Zhu, Haihong; Miao, Hongyu; Perelson, Alan S
2008-04-01
We use a technique from engineering (Xia and Moog, in IEEE Trans. Autom. Contr. 48(2):330-336, 2003; Jeffrey and Xia, in Tan, W.Y., Wu, H. (Eds.), Deterministic and Stochastic Models of AIDS Epidemics and HIV Infections with Intervention, 2005) to investigate the algebraic identifiability of a popular three-dimensional HIV/AIDS dynamic model containing six unknown parameters. We find that not all six parameters in the model can be identified if only the viral load is measured, instead only four parameters and the product of two parameters (N and lambda) are identifiable. We introduce the concepts of an identification function and an identification equation and propose the multiple time point (MTP) method to form the identification function which is an alternative to the previously developed higher-order derivative (HOD) method (Xia and Moog, in IEEE Trans. Autom. Contr. 48(2):330-336, 2003; Jeffrey and Xia, in Tan, W.Y., Wu, H. (Eds.), Deterministic and Stochastic Models of AIDS Epidemics and HIV Infections with Intervention, 2005). We show that the newly proposed MTP method has advantages over the HOD method in the practical implementation. We also discuss the effect of the initial values of state variables on the identifiability of unknown parameters. We conclude that the initial values of output (observable) variables are part of the data that can be used to estimate the unknown parameters, but the identifiability of unknown parameters is not affected by these initial values if the exact initial values are measured with error. These noisy initial values only increase the estimation error of the unknown parameters. However, having the initial values of the latent (unobservable) state variables exactly known may help to identify more parameters. In order to validate the identifiability results, simulation studies are performed to estimate the unknown parameters and initial values from simulated noisy data. We also apply the proposed methods to a clinical data set
Parameter fitting for piano sound synthesis by physical modeling
NASA Astrophysics Data System (ADS)
Bensa, Julien; Gipouloux, Olivier; Kronland-Martinet, Richard
2005-07-01
A difficult issue in the synthesis of piano tones by physical models is choosing the values of the parameters governing the hammer-string model. In fact, these parameters are hard to estimate from static measurements, causing the synthesized sounds to be unrealistic. An original approach that estimates the parameters of a piano model, from the measurement of the string vibration, by minimizing a perceptual criterion is proposed. The minimization process used is a combination of a gradient method and a simulated annealing algorithm, in order to avoid convergence problems in the case of multiple local minima. The criterion, based on the tristimulus concept, takes into account the spectral energy density in three bands, each allowing particular parameters to be estimated. The optimization process was run on signals measured on an experimental setup. The parameters thus estimated provided a better sound quality than the one obtained using a global energetic criterion. Both the sound's attack and its brightness were better preserved. This quality gain was obtained for parameter values very close to the initial ones, showing that only slight deviations are necessary to make synthetic sounds closer to the real ones.
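A bare-bones simulated-annealing loop of the kind combined with a gradient method above can be sketched on a deliberately multimodal cost function. The cost, starting point, step size, and cooling schedule are all illustrative stand-ins for the paper's perceptual criterion and parameter space.

```python
import math
import random

random.seed(42)

# Toy multimodal cost: a quadratic bowl with superimposed ripples, so a
# plain gradient descent can get trapped in a local minimum.
def cost(x):
    return (x - 2.0) ** 2 + 1.5 * math.sin(5.0 * x) ** 2

x = 10.0        # deliberately poor starting point
T = 1.0         # initial temperature
best = x
for step in range(20000):
    cand = x + random.gauss(0.0, 0.5)       # random proposal
    d = cost(cand) - cost(x)
    # Always accept improvements; accept uphill moves with prob exp(-d/T),
    # which lets the search escape local minima while T is still high.
    if d < 0 or random.random() < math.exp(-d / T):
        x = cand
        if cost(x) < cost(best):
            best = x
    T = max(1e-3, T * 0.9995)               # geometric cooling with a floor

print(best, cost(best))
```

In a hybrid scheme like the paper's, a gradient step would refine each accepted point, with annealing supplying the global exploration.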
Control of the SCOLE configuration using distributed parameter models
NASA Astrophysics Data System (ADS)
Hsiao, Min-Hung; Huang, Jen-Kuang
1994-06-01
A continuum model for the SCOLE configuration has been derived using transfer matrices. Controller designs for distributed parameter systems have been analyzed. Pole-assignment controller design is considered easy to implement, but stability is not guaranteed. An explicit transfer function of dynamic controllers has been obtained, and no model reduction is required before the controller is realized. One specific LQG controller for continuum models has been derived, but other optimal controllers for more general performance criteria need to be studied.
Doligez, B.; Eschard, R.; Geffroy, F.
1997-08-01
The classical approach to constructing reservoir models is to start with a fine-scale geological model which is informed with petrophysical properties. Scaling-up techniques then yield a reservoir model which is compatible with fluid flow simulators. Geostatistical modelling techniques are widely used to build the geological models before scaling-up. These methods provide equiprobable images of the area under investigation, which honor the well data and whose variability is the same as the variability computed from the data. At an appraisal phase, when few data are available, or when the wells are insufficient to describe all the heterogeneities and the behavior of the field, additional constraints are needed to obtain a more realistic geological model. For example, seismic data or stratigraphic models can provide average reservoir information with an excellent areal coverage, but with a poor vertical resolution. New advances in modelling techniques now allow this type of additional external information to be integrated in order to constrain the simulations. In particular, 2D or 3D seismic-derived information grids, or sand-shale ratio maps coming from stratigraphic models, can be used as external drifts to compute the geological image of the reservoir at the fine scale. Examples are presented to illustrate the use of these new tools, their impact on the final reservoir model, and their sensitivity to some key parameters.
Forest Productivity for Soft Calibration of Soil Parameters in Eco-hydrologic Modeling
NASA Astrophysics Data System (ADS)
Garcia, E.; Tague, C.
2014-12-01
Calibration of soil drainage parameters in hydrologic models is typically achieved using statistics based on streamflow. Models that couple hydrology with ecosystem carbon and nutrient cycling also calculate estimates of carbon and nutrient stores and fluxes. Particularly in water-limited environments, these estimates will be sensitive to soil drainage parameters. We investigate the use of estimates of annual net primary productivity (annNPP) as an additional data source for soil parameter calibration. We combine literature-based estimates of annNPP with streamflow statistics to calibrate for soil parameters in three Western U.S. watersheds using a coupled eco-hydrology model. We show that for all sites, estimates of annNPP vary significantly across soil parameters selected solely using streamflow calibration. In all watersheds streamflow metrics select soil parameters that yield a range of annNPP estimates that can exceed literature-derived bounds for annNPP by 58-77%. Only 1-10% of the original soil parameter sets met both annNPP and streamflow criteria - a substantial reduction when compared to the percentage of acceptable parameter sets selected using annNPP or streamflow separately. Similarly, streamflow performance varies substantially across soil parameters selected based solely on annNPP criteria. Results show that annNPP in combination with streamflow-based metrics can better constrain soil parameters, although the usefulness varies across watersheds.
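The soft-calibration idea above reduces, at its core, to filtering candidate parameter sets through two acceptance criteria at once and comparing how much each criterion alone constrains the space. A toy sketch, where the "model", the skill score, the NPP relation, and all thresholds are hypothetical stand-ins for the eco-hydrology model:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical candidate soil-parameter sets (e.g. drainage parameters).
n = 2000
params = rng.uniform(0.0, 1.0, size=(n, 2))

# Toy surrogates: a streamflow skill score and an annual NPP estimate,
# each depending on a different parameter for illustration.
nse = 1.0 - 2.0 * np.abs(params[:, 0] - 0.5)   # stand-in Nash-Sutcliffe score
npp = 400.0 + 600.0 * params[:, 1]             # stand-in annNPP [gC/m2/yr]

flow_ok = nse > 0.7                            # streamflow criterion
npp_ok = (npp > 500.0) & (npp < 800.0)         # literature-derived NPP bounds
both = flow_ok & npp_ok                        # joint "soft calibration"
print(flow_ok.mean(), npp_ok.mean(), both.mean())
```

The joint acceptance fraction is necessarily no larger than either single-criterion fraction, mirroring the substantial reduction in acceptable parameter sets reported above.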
Bayesian parameter inference and model selection by population annealing in systems biology.
Murakami, Yohei
2014-01-01
Parameter inference and model selection are very important for mathematical modeling in systems biology. Bayesian statistics can be used to conduct both parameter inference and model selection. In particular, the framework named approximate Bayesian computation is often used for parameter inference and model selection in systems biology. However, Monte Carlo methods need to be used to compute Bayesian posterior distributions. In addition, the posterior distributions of parameters are sometimes almost uniform or very similar to their prior distributions. In such cases, it is difficult to choose one specific value of a parameter with high credibility as the representative value of the distribution. To overcome these problems, we introduced one of the population Monte Carlo algorithms, population annealing. Although population annealing is usually used in statistical mechanics, we showed that it can be used to compute Bayesian posterior distributions in the approximate Bayesian computation framework. To deal with the un-identifiability of the representative values of parameters, we proposed to run the simulations with the parameter ensemble sampled from the posterior distribution, named the "posterior parameter ensemble". We showed that population annealing is an efficient and convenient algorithm for generating a posterior parameter ensemble. We also showed that simulations with the posterior parameter ensemble can not only reproduce the data used for parameter inference, but also capture and predict data that were not used for parameter inference. Lastly, we introduced the marginal likelihood in the approximate Bayesian computation framework for Bayesian model selection. We showed that population annealing enables us to compute the marginal likelihood in the approximate Bayesian computation framework and conduct model selection depending on the Bayes factor.
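The approximate Bayesian computation (ABC) framework the abstract builds on can be illustrated with its simplest variant, rejection ABC: draw a parameter from the prior, simulate, and accept when a summary-statistic distance falls below a tolerance. Population annealing is a more efficient sequential refinement of the same idea. The toy model, prior, and tolerance below are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy problem: data ~ Normal(mu, 1) with unknown mu; the "simulator" is
# the likelihood we pretend we cannot evaluate.
data = rng.normal(2.0, 1.0, size=50)

def simulate(mu):
    return rng.normal(mu, 1.0, size=len(data))

accepted = []
while len(accepted) < 500:
    mu = rng.uniform(-5.0, 5.0)                 # draw from the prior
    sim = simulate(mu)
    # Accept when the summary statistic (here the mean) is close enough.
    if abs(sim.mean() - data.mean()) < 0.1:
        accepted.append(mu)

post = np.array(accepted)
print(post.mean(), post.std())
```

The accepted values form an approximate posterior sample, i.e. exactly the kind of "posterior parameter ensemble" the abstract proposes to propagate through the model instead of a single representative value.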
Percolation model with an additional source of disorder
NASA Astrophysics Data System (ADS)
Kundu, Sumanta; Manna, S. S.
2016-06-01
The ranges of transmission of the mobiles in a mobile ad hoc network are not uniform in reality. They are affected by temperature fluctuations in air, obstruction due to solid objects, even humidity differences in the environment, etc. How the varying range of transmission of the individual active elements affects the global connectivity in the network may be an important practical question to ask. Here a model of percolation phenomena, with an additional source of disorder, is introduced for a theoretical understanding of this problem. As in ordinary percolation, sites of a square lattice are occupied randomly with probability p. Each occupied site is then assigned a circular disk of random value R for its radius. A bond is defined to be occupied if and only if the radii R1 and R2 of the disks centered at the ends satisfy a certain predefined condition. In a very general formulation, one divides the R1-R2 plane into two regions by an arbitrary closed curve. One defines a point within one region as representing an occupied bond; otherwise it is a vacant bond. The study of three different rules under this general formulation indicates that the percolation threshold always varies continuously. This threshold has two limiting values: one is pc(sq), the percolation threshold for ordinary site percolation on the square lattice, and the other is unity. The approach of the percolation threshold to its limiting values is characterized by two exponents. In a special case, all lattice sites are occupied by disks of random radii R ∈ {0, R0}, and a percolation transition is observed with R0 as the control variable, similar to the site occupation probability.
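A minimal simulation of this kind of model can be sketched as follows. One plausible bond rule is chosen for illustration (the two disks touch, R1 + R2 >= 1 in units of the lattice spacing); the abstract's general formulation allows any condition on (R1, R2). Spanning from the top row to the bottom row is detected with union-find.

```python
import random

def spans(L, p, R0, seed):
    """Site percolation on an L x L square lattice with random disk radii
    R ~ Uniform(0, R0); a bond is occupied when the two disks touch."""
    rng = random.Random(seed)
    occupied = [rng.random() < p for _ in range(L * L)]
    radius = [rng.uniform(0.0, R0) for _ in range(L * L)]

    parent = list(range(L * L))
    def find(i):                      # union-find with path halving
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for y in range(L):
        for x in range(L):
            i = y * L + x
            if not occupied[i]:
                continue
            neighbours = []
            if x + 1 < L:
                neighbours.append(i + 1)
            if y + 1 < L:
                neighbours.append(i + L)
            for j in neighbours:
                # Bond rule: disks at the two ends touch or overlap.
                if occupied[j] and radius[i] + radius[j] >= 1.0:
                    parent[find(i)] = find(j)

    top = {find(i) for i in range(L) if occupied[i]}
    bottom = {find(i) for i in range(L * (L - 1), L * L) if occupied[i]}
    return bool(top & bottom)

print(spans(64, 0.75, 1.5, 0))
```

Sweeping p (or R0) and recording the spanning fraction over many seeds locates the percolation threshold for a given bond rule.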
QCD-inspired determination of NJL model parameters
NASA Astrophysics Data System (ADS)
Springer, Paul; Braun, Jens; Rechenberger, Stefan; Rennecke, Fabian
2017-03-01
The QCD phase diagram at finite temperature and density has attracted considerable interest over many decades now, not least because of its relevance for a better understanding of heavy-ion collision experiments. Models provide some insight into the QCD phase structure but usually rely on various parameters. Based on renormalization group arguments, we discuss how the parameters of QCD low-energy models can be determined from the fundamental theory of the strong interaction. We particularly focus on a determination of the temperature dependence of these parameters in this work and comment on the effect of a finite quark chemical potential. We present first results and argue that our findings can be used to improve the predictive power of future model calculations.
Utilizing Soize's Approach to Identify Parameter and Model Uncertainties
Bonney, Matthew S.; Brake, Matthew Robert
2014-10-01
Quantifying uncertainty in model parameters is a challenging task for analysts. Soize has derived a method that is able to characterize both model and parameter uncertainty independently. This method is explained under the assumption that some experimental data are available, and is divided into seven steps. Monte Carlo analyses are performed to select the optimal dispersion variable to match the experimental data. Along with the nominal approach, an alternative distribution can be used, together with corrections that expand the scope of this method. This is one of very few methods that can quantify uncertainty in the model form independently of the input parameters. Two examples are provided to illustrate the methodology, and example code is provided in the Appendix.
Fachet, Melanie; Flassig, Robert J; Rihko-Struckmann, Liisa; Sundmacher, Kai
2014-12-01
In this work, a photoautotrophic growth model incorporating light and nutrient effects on growth and pigmentation of Dunaliella salina was formulated. The model equations were taken from literature and modified according to the experimental setup with special emphasis on model reduction. The proposed model has been evaluated with experimental data of D. salina cultivated in a flat-plate photobioreactor under stressed and non-stressed conditions. Simulation results show that the model can represent the experimental data accurately. The identifiability of the model parameters was studied using the profile likelihood method. This analysis revealed that three model parameters are practically non-identifiable. However, some of these non-identifiabilities can be resolved by model reduction and additional measurements. As a conclusion, our results suggest that the proposed model equations result in a predictive growth model for D. salina.
SPOTting model parameters using a ready-made Python package
NASA Astrophysics Data System (ADS)
Houska, Tobias; Kraft, Philipp; Breuer, Lutz
2015-04-01
The selection and parameterization of reliable process descriptions in ecological modelling is affected by several uncertainties. The procedure is highly dependent on various criteria, such as the algorithm used, the likelihood function selected and the definition of the prior parameter distributions. A wide variety of tools have been developed in the past decades to optimize parameters. Some of these tools are closed source; as a result, the choice of a specific parameter estimation method is sometimes driven more by its availability than by its performance. A toolbox with a large set of methods can support users in deciding on the most suitable method, and further enables them to test and compare different methods. We developed SPOT (Statistical Parameter Optimization Tool), an open-source Python package containing a comprehensive set of modules to analyze and optimize parameters of (environmental) models. SPOT comes with a selected set of algorithms for parameter optimization and uncertainty analyses (Monte Carlo, MC; Latin Hypercube Sampling, LHS; Maximum Likelihood Estimation, MLE; Markov Chain Monte Carlo, MCMC; Shuffled Complex Evolution, SCE-UA; Differential Evolution Markov Chain, DE-MCZ), together with several likelihood functions (Bias, (log-) Nash-Sutcliffe model efficiency, Correlation Coefficient, Coefficient of Determination, Covariance, (Decomposed-, Relative-, Root-) Mean Squared Error, Mean Absolute Error, Agreement Index) and prior distributions (Binomial, Chi-Square, Dirichlet, Exponential, Laplace, (log-, multivariate-) Normal, Pareto, Poisson, Cauchy, Uniform, Weibull) to sample from. The model-independent structure makes it suitable for analyzing a wide range of applications. We apply all algorithms of the SPOT package in three case studies. Firstly, we investigate the response of the Rosenbrock function, where the MLE algorithm shows its strengths. Secondly, we study the Griewank function, which has a challenging response surface for
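As a plain illustration of the sampling idea behind such toolboxes (this is generic code, not the SPOT API), a Monte Carlo search can draw parameter sets from uniform priors and rank them by a likelihood function such as the Nash-Sutcliffe efficiency:

```python
import math
import random

def nash_sutcliffe(obs, sim):
    """Nash-Sutcliffe model efficiency: 1 - SSE / variance of obs."""
    mean_obs = sum(obs) / len(obs)
    sse = sum((o - s) ** 2 for o, s in zip(obs, sim))
    var = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - sse / var

def monte_carlo(model, obs, priors, n=1000, seed=0):
    """Draw parameter sets from uniform priors and keep the one with
    the highest Nash-Sutcliffe efficiency."""
    rng = random.Random(seed)
    best = (-math.inf, None)
    for _ in range(n):
        theta = [rng.uniform(lo, hi) for lo, hi in priors]
        eff = nash_sutcliffe(obs, model(theta))
        if eff > best[0]:
            best = (eff, theta)
    return best

# Toy "model": a line y = a*x + b sampled at x = 0..9.
xs = list(range(10))
model = lambda th: [th[0] * x + th[1] for x in xs]
obs = [2.0 * x + 1.0 for x in xs]
eff, theta = monte_carlo(model, obs, priors=[(0, 5), (0, 5)])
```

LHS, MCMC or SCE-UA differ mainly in how `theta` is proposed; the objective-function interface stays the same, which is what makes a model-independent toolbox possible.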
Dynamic Factor Analysis Models With Time-Varying Parameters.
Chow, Sy-Miin; Zu, Jiyun; Shifren, Kim; Zhang, Guangjian
2011-04-11
Dynamic factor analysis models with time-varying parameters offer a valuable tool for evaluating multivariate time series data with time-varying dynamics and/or measurement properties. We use the Dynamic Model of Activation proposed by Zautra and colleagues (Zautra, Potter, & Reich, 1997) as a motivating example to construct a dynamic factor model with vector autoregressive relations and time-varying cross-regression parameters at the factor level. Using techniques drawn from the state-space literature, the model was fitted to a set of daily affect data (over 71 days) from 10 participants who had been diagnosed with Parkinson's disease. Our empirical results lend partial support and some potential refinement to the Dynamic Model of Activation with regard to how the time dependencies between positive and negative affects change over time. A simulation study is conducted to examine the performance of the proposed techniques when (a) changes in the time-varying parameters are represented using the true model of change, (b) supposedly time-invariant parameters are represented as time-varying, and
Synchronous Generator Model Parameter Estimation Based on Noisy Dynamic Waveforms
NASA Astrophysics Data System (ADS)
Berhausen, Sebastian; Paszek, Stefan
2016-01-01
In recent years, system failures have occurred in many power systems all over the world, resulting in a loss of power supply to large numbers of recipients. To minimize the risk of power failures, it is necessary to perform multivariate investigations, including simulations, of power system operating conditions. Reliable simulations require a current base of parameters for the models of generating units, including the models of synchronous generators. This paper presents a method for parameter estimation of a nonlinear synchronous generator model based on the analysis of selected transient waveforms caused by introducing a disturbance (in the form of a pseudorandom signal) into the generator voltage regulation channel. The parameters were estimated by minimizing an objective function defined as the mean square error of the deviations between the measured waveforms and the waveforms calculated from the generator mathematical model. A hybrid algorithm was used to minimize the objective function. The paper also describes a filter system used for filtering the noisy measurement waveforms, and gives calculation results for the model of a 44 kW synchronous generator installed on a laboratory stand of the Institute of Electrical Engineering and Computer Science of the Silesian University of Technology. The presented estimation method can be successfully applied to parameter estimation of different models of high-power synchronous generators operating in a power system.
[Parameter uncertainty analysis for urban rainfall runoff modelling].
Huang, Jin-Liang; Lin, Jie; Du, Peng-Fei
2012-07-01
An urban watershed in Xiamen was selected to perform parameter uncertainty analysis for urban stormwater runoff modelling, in terms of parameter identification and sensitivity analysis, based on the storm water management model (SWMM) using Monte Carlo sampling and the regionalized sensitivity analysis (RSA) algorithm. Results show that Dstore-Imperv, Dstore-Perv and Curve Number (CN) are the identifiable parameters with larger K-S values in the hydrological and hydraulic modules, the ranking of K-S values being Dstore-Imperv > CN > Dstore-Perv > N-Perv > conductivity > Con-Mann > N-Imperv. With regard to the water quality module, the parameters of the exponential washoff model (Coefficient and Exponent) and the Max. Buildup parameter of the saturation buildup model in three land cover types are the identifiable parameters with the larger K-S values. In comparison, the K-S value of the rate constant in the three land use/cover types is smaller than those of Max. Buildup, Coefficient and Exponent.
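The RSA procedure referred to above can be sketched as follows (a generic illustration, not the authors' code): parameter samples are split into behavioural and non-behavioural sets by an error threshold, and the Kolmogorov-Smirnov distance between the two sets measures identifiability.

```python
import random

def ks_statistic(a, b):
    """Two-sample Kolmogorov-Smirnov distance: maximum gap between
    the empirical CDFs of samples a and b."""
    a, b = sorted(a), sorted(b)
    grid = sorted(set(a) | set(b))
    cdf = lambda s, x: sum(1 for v in s if v <= x) / len(s)
    return max(abs(cdf(a, x) - cdf(b, x)) for x in grid)

def rsa(samples, errors, threshold):
    """Regionalized sensitivity analysis: split parameter samples into
    behavioural (error <= threshold) and non-behavioural sets and
    return the K-S distance between the two sets."""
    behav = [s for s, e in zip(samples, errors) if e <= threshold]
    non = [s for s, e in zip(samples, errors) if e > threshold]
    return ks_statistic(behav, non)

rng = random.Random(0)
samples = [rng.uniform(0.0, 1.0) for _ in range(500)]
# A parameter the model error clearly depends on ...
ks_identifiable = rsa(samples, [abs(s - 0.3) for s in samples], 0.1)
# ... versus one with no influence on the error.
ks_noise = rsa(samples, [rng.uniform(0.0, 1.0) for _ in samples], 0.5)
```

A parameter the error depends on yields a large K-S value; a parameter with no influence yields a small one, matching the ranking logic used in the abstract.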
Optimizing Muscle Parameters in Musculoskeletal Modeling Using Monte Carlo Simulations
NASA Technical Reports Server (NTRS)
Hanson, Andrea; Reed, Erik; Cavanagh, Peter
2011-01-01
Astronauts assigned to long-duration missions experience bone and muscle atrophy in the lower limbs. The use of musculoskeletal simulation software has become a useful tool for modeling joint and muscle forces during human activity in reduced gravity as access to direct experimentation is limited. Knowledge of muscle and joint loads can better inform the design of exercise protocols and exercise countermeasure equipment. In this study, the LifeModeler(TM) (San Clemente, CA) biomechanics simulation software was used to model a squat exercise. The initial model using default parameters yielded physiologically reasonable hip-joint forces. However, no activation was predicted in some large muscles such as rectus femoris, which have been shown to be active in 1-g performance of the activity. Parametric testing was conducted using Monte Carlo methods and combinatorial reduction to find a muscle parameter set that more closely matched physiologically observed activation patterns during the squat exercise. Peak hip joint force using the default parameters was 2.96 times body weight (BW) and increased to 3.21 BW in an optimized, feature-selected test case. The rectus femoris was predicted to peak at 60.1% activation following muscle recruitment optimization, compared to 19.2% activation with default parameters. These results indicate the critical role that muscle parameters play in joint force estimation and the need for exploration of the solution space to achieve physiologically realistic muscle activation.
Tokarz-Deptuła, B; Niedźwiedzka-Rystwej, P; Adamiak, M; Hukowska-Szematowicz, B; Trzeciak-Ryczek, A; Deptuła, W
2015-01-01
In this paper we studied haematological values, i.e. haemoglobin concentration, haematocrit value, thrombocytes, and leucocytes (lymphocytes, neutrophils, basophils, eosinophils and monocytes), in the peripheral blood of Polish mixed-breed rabbits with an addition of meat-breed blood, in order to obtain reference values that have until now not been available for these animals. In studying these indices we took into consideration the impact of the season (spring, summer, autumn, winter) and the sex of the animals. The studies showed a high impact of the season of the year in these rabbits, but only in spring and summer. Moreover, we observed that sex had only a moderate impact on the studied haematological parameters in these rabbits. To our knowledge, this is the first paper on haematological values in this widely used group of rabbits, so they may serve as reference values.
Chapman, Michael S.; Trzynka, Andrew; Chapman, Brynmor K.
2013-01-01
When refining the fit of component atomic structures into electron microscopic reconstructions, use of a resolution-dependent atomic density function makes it possible to jointly optimize the atomic model and imaging parameters of the microscope. Atomic density is calculated by one-dimensional Fourier transform of atomic form factors convoluted with a microscope envelope correction and a low-pass filter, allowing refinement of imaging parameters such as resolution, by optimizing the agreement of calculated and experimental maps. A similar approach allows refinement of atomic displacement parameters, providing indications of molecular flexibility even at low resolution. A modest improvement in atomic coordinates is possible following optimization of these additional parameters. Methods have been implemented in a Python program that can be used in stand-alone mode for rigid-group refinement, or embedded in other optimizers for flexible refinement with stereochemical restraints. The approach is demonstrated with refinements of virus and chaperonin structures at resolutions of 9 through 4.5 Å, representing regimes where rigid-group and fully flexible parameterizations are appropriate. Through comparisons to known crystal structures, flexible fitting by RSRef is shown to be an improvement relative to other methods and to generate models with all-atom rms accuracies of 1.5–2.5 Å at resolutions of 4.5–6 Å. PMID:23376441
NASA Astrophysics Data System (ADS)
Marton, F. C.
2001-12-01
The thermal, mineralogical, and buoyancy structures of thermal-kinetic models of subducting slabs are highly dependent upon a number of parameters, especially if the metastable persistence of olivine in the transition zone is investigated. The choice of starting thermal model for the lithosphere, whether a cooling halfspace (HS) or plate model, can have a significant effect, resulting in metastable wedges of olivine that differ in size by up to two to three times for high values of the thermal parameter (φ). Moreover, as φ is the product of the age of the lithosphere at the trench, the convergence rate, and the dip angle, slabs with similar φ can show great variations in structure as these constituents change. This is especially true for old lithosphere, as the lithosphere continually cools and thickens with age in HS models, whereas plate models, with parameters from Parsons and Sclater [1977] (PS) or Stein and Stein [1992] (GDH1), achieve a thermal steady state and constant thickness in about 70 My. In addition, the latent heats (q) of the phase transformations of the Mg2SiO4 polymorphs can also have significant effects in the slabs. Including q feedback in the models raises the temperature and reduces the extent of metastable olivine, causing the sizes of the metastable wedges to vary by factors of up to two. The effects of the choice of thermal model, inclusion or non-inclusion of q feedback, and variations in the constituents of φ are investigated for several model slabs.
Estimation of the parameters of ETAS models by Simulated Annealing.
Lombardi, Anna Maria
2015-02-12
This paper proposes a new algorithm to estimate the maximum likelihood parameters of an Epidemic Type Aftershock Sequences (ETAS) model. It is based on Simulated Annealing, a versatile method that solves problems of global optimization and ensures convergence to a global optimum. The procedure is tested on both simulated and real catalogs. The main conclusion is that the method performs poorly as the size of the catalog decreases because the effect of the correlation of the ETAS parameters is more significant. These results give new insights into the ETAS model and the efficiency of the maximum-likelihood method within this context.
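The core of Simulated Annealing is short enough to sketch (a generic minimiser with a toy objective standing in for the ETAS negative log-likelihood; the step size and cooling schedule are illustrative choices):

```python
import math
import random

def simulated_annealing(f, x0, step=0.5, t0=1.0, cooling=0.995,
                        n_iter=5000, seed=0):
    """Generic Simulated Annealing minimiser: accept uphill moves with
    probability exp(-delta/T) and cool T geometrically, so the search
    can escape local minima on its way to a global one."""
    rng = random.Random(seed)
    x, fx = list(x0), f(x0)
    best, fbest = list(x), fx
    temp = t0
    for _ in range(n_iter):
        cand = [xi + rng.uniform(-step, step) for xi in x]
        fc = f(cand)
        if fc < fx or rng.random() < math.exp(-(fc - fx) / temp):
            x, fx = cand, fc
            if fx < fbest:
                best, fbest = list(x), fx
        temp *= cooling
    return best, fbest

# Stand-in objective (a real run would minimise the ETAS negative
# log-likelihood over its parameters): a bowl with minimum at (1, 2).
f = lambda v: (v[0] - 1.0) ** 2 + (v[1] - 2.0) ** 2
best, fbest = simulated_annealing(f, [5.0, 5.0])
```

The early high-temperature phase is what gives the method its robustness to local optima; the paper's observation about small catalogs concerns the objective itself (correlated ETAS parameters flatten the likelihood surface), not the optimiser.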
Determination of modeling parameters for power IGBTs under pulsed power conditions
Dale, Gregory E; Van Gordon, Jim A; Kovaleski, Scott D
2010-01-01
While the power insulated gate bipolar transistor (IGBT) is used in many applications, it is not well characterized under pulsed power conditions. This makes the IGBT difficult to model for solid state pulsed power applications. The Oziemkiewicz implementation of the Hefner model is utilized to simulate IGBTs in some circuit simulation software packages. However, the seventeen parameters necessary for the Oziemkiewicz implementation must be known for the conditions under which the device will be operating. Using both experimental and simulated data with a least squares curve fitting technique, the parameters necessary to model a given IGBT can be determined. This paper presents two sets of these seventeen parameters that correspond to two different models of power IGBTs. Specifically, these parameters correspond to voltages up to 3.5 kV, currents up to 750 A, and pulse widths up to 10 µs. Additionally, comparisons of the experimental and simulated data are presented.
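The curve-fitting step can be illustrated with a toy waveform (the exponential decay, grid values and parameter names here are hypothetical stand-ins, not the Hefner model's seventeen parameters):

```python
import math

def sum_squared_error(params, model, t, measured):
    """Objective: mean square error between measured waveform samples
    and the waveform computed from the device model."""
    sim = [model(params, ti) for ti in t]
    return sum((m - s) ** 2 for m, s in zip(measured, sim)) / len(t)

def grid_fit(model, t, measured, grid):
    """Brute-force least-squares fit over a coarse parameter grid,
    standing in for the curve-fitting step described above."""
    return min(grid, key=lambda p: sum_squared_error(p, model, t, measured))

# Toy decay waveform i(t) = I0 * exp(-t / tau) as a stand-in device model.
model = lambda p, ti: p[0] * math.exp(-ti / p[1])
t = [i * 0.1 for i in range(50)]
measured = [300.0 * math.exp(-ti / 2.0) for ti in t]
grid = [(I0, tau) for I0 in (100.0, 200.0, 300.0) for tau in (1.0, 2.0, 3.0)]
best = grid_fit(model, t, measured, grid)
```

A production fit would use a gradient-based or simplex least-squares routine over all seventeen parameters; the objective-function shape is the same.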
Climate change decision-making: Model & parameter uncertainties explored
Dowlatabadi, H.; Kandlikar, M.; Linville, C.
1995-12-31
A critical aspect of climate change decision-making is uncertainty in the current understanding of the socioeconomic, climatic and biogeochemical processes involved. Decision-making processes are much better informed if these uncertainties are characterized and their implications understood. Quantitative analysis of these uncertainties serves to inform decision makers about the likely outcome of policy initiatives and helps set priorities for research so that the outcome ambiguities faced by decision-makers are reduced. A family of integrated assessment models of climate change has been developed at Carnegie Mellon. These models are distinguished from other integrated assessment efforts in that they were designed from the outset to characterize and propagate parameter, model, value, and decision-rule uncertainties. The most recent of these models is ICAM 2.1. This model includes representation of the processes of demographics, economic activity, emissions, atmospheric chemistry, climate and sea level change, impacts from these changes, and policies for emissions mitigation and adaptation to change. The model has over 800 objects, of which about one half are used to represent uncertainty. In this paper we show that, when considering parameter uncertainties, the relative contribution of climatic uncertainties is most important, followed by uncertainties in damage calculations, economic uncertainties and direct aerosol forcing uncertainties. When considering model structure uncertainties, we find that the choice of policy is often dominated by the choice of model structure rather than by parameter uncertainties.
Improving weather predictability by including land-surface model parameter uncertainty
NASA Astrophysics Data System (ADS)
Orth, Rene; Dutra, Emanuel; Pappenberger, Florian
2016-04-01
The land surface forms an important component of Earth system models and interacts nonlinearly with other parts such as the ocean and atmosphere. To capture the complex and heterogeneous hydrology of the land surface, land surface models include a large number of parameters impacting the coupling to other components of the Earth system model. Focusing on ECMWF's land-surface model HTESSEL, we present in this study a comprehensive parameter sensitivity evaluation using multiple observational datasets in Europe. We select six poorly constrained effective parameters (surface runoff effective depth, skin conductivity, minimum stomatal resistance, maximum interception, soil moisture stress function shape, total soil depth) and explore the sensitivity of model outputs such as soil moisture, evapotranspiration and runoff to these parameters using uncoupled simulations and coupled seasonal forecasts. Additionally, we investigate the possibility of constructing ensembles from the multiple land surface parameters. In the uncoupled runs we find that minimum stomatal resistance and total soil depth have the most influence on model performance. Forecast skill scores are moreover sensitive to the same parameters as HTESSEL performance in the uncoupled analysis. We demonstrate the robustness of our findings by comparing multiple best-performing parameter sets and multiple randomly chosen parameter sets. We find better temperature and precipitation forecast skill with the best-performing parameter perturbations, demonstrating representativeness of model performance across uncoupled (and hence less computationally demanding) and coupled settings. Finally, we construct ensemble forecasts from ensemble members derived with different best-performing parameterizations of HTESSEL. This incorporation of parameter uncertainty in the ensemble generation yields an increase in forecast skill, even beyond the skill of the default system.
Inhalation Exposure Input Parameters for the Biosphere Model
M. Wasiolek
2006-06-05
This analysis is one of the technical reports that support the Environmental Radiation Model for Yucca Mountain, Nevada (ERMYN), referred to in this report as the biosphere model. ''Biosphere Model Report'' (BSC 2004 [DIRS 169460]) describes in detail the conceptual model as well as the mathematical model and its input parameters. This report documents development of input parameters for the biosphere model that are related to atmospheric mass loading and supports the use of the model to develop biosphere dose conversion factors (BDCFs). The biosphere model is one of a series of process models supporting the total system performance assessment (TSPA) for a Yucca Mountain repository. ''Inhalation Exposure Input Parameters for the Biosphere Model'' is one of five reports that develop input parameters for the biosphere model. A graphical representation of the documentation hierarchy for the biosphere model is presented in Figure 1-1 (based on BSC 2006 [DIRS 176938]). This figure shows the interrelationships among the products (i.e., analysis and model reports) developed for biosphere modeling and how this analysis report contributes to biosphere modeling. This analysis report defines and justifies values of atmospheric mass loading for the biosphere model. Mass loading is the total mass concentration of resuspended particles (e.g., dust, ash) in a volume of air. Mass loading values are used in the air submodel of the biosphere model to calculate concentrations of radionuclides in air inhaled by a receptor and concentrations in air surrounding crops. Concentrations in air to which the receptor is exposed are then used in the inhalation submodel to calculate the dose contribution to the receptor from inhalation of contaminated airborne particles. Concentrations in air surrounding plants are used in the plant submodel to calculate the concentrations of radionuclides in foodstuffs contributed from uptake by foliar interception. This report is concerned primarily with the
Yakovenko, I A; Cheremushkin, E A; Kozlov, M K
2015-01-01
Changes in beta-rhythm parameters under working-memory load, produced by extending the interstimulus interval between the target and triggering stimuli to 16 s, were investigated in 70 healthy adults in two series of experiments with a set to a facial expression. In the second series, to strengthen the load, an additional cognitive task was introduced at the middle of this interval in the form of Go/NoGo conditioning stimuli (circles of blue or green colour). The data were analysed by means of continuous wavelet transformation based on the complex Morlet mother wavelet in the range of 1-35 Hz. Beta-rhythm power was characterized by the mean level, the maxima of the wavelet-transformation coefficient (WLC), and the latent periods of the maxima. Introduction of the additional cognitive task into the pause between the target and triggering stimuli led to a substantial increase in the absolute values of the mean level of beta-rhythm WLC and in the relative sizes of the WLC maxima. In the series of experiments without the conditioning stimulus, subjects with a large number of mistakes (6 to 40), i.e. a rigid set, were characterized at the forming stage by higher values of the mean level of beta-rhythm WLC than subjects with a small number of mistakes (up to 5), i.e. a plastic set. Introduction of the conditioning stimuli smoothed out the intergroup differences throughout the experiment.
Important observations and parameters for a salt water intrusion model
Shoemaker, W.B.
2004-01-01
Sensitivity analysis with a density-dependent ground water flow simulator can provide insight and understanding of salt water intrusion calibration problems far beyond what is possible through intuitive analysis alone. Five simple experimental simulations presented here demonstrate this point. Results show that dispersivity is a very important parameter for reproducing a steady-state distribution of hydraulic head, salinity, and flow in the transition zone between fresh water and salt water in a coastal aquifer system. When estimating dispersivity, the following conclusions can be drawn about the data types and locations considered. (1) The "toe" of the transition zone is the most effective location for hydraulic head and salinity observations. (2) Areas near the coastline where submarine ground water discharge occurs are the most effective locations for flow observations. (3) Salinity observations are more effective than hydraulic head observations. (4) The importance of flow observations aligned perpendicular to the shoreline varies dramatically depending on distance seaward from the shoreline. Extreme parameter correlation can prohibit unique estimation of permeability parameters such as hydraulic conductivity and flow parameters such as recharge in a density-dependent ground water flow model when using hydraulic head and salinity observations. Adding flow observations perpendicular to the shoreline in areas where ground water is exchanged with the ocean body can reduce the correlation, potentially resulting in unique estimates of these parameter values. Results are expected to be directly applicable to many complex situations, and have implications for model development whether or not formal optimization methods are used in model calibration.
Estimation of growth parameters using a nonlinear mixed Gompertz model.
Wang, Z; Zuidhof, M J
2004-06-01
In order to maximize the utility of simulation models for decision making, accurate estimation of growth parameters and associated variances is crucial. A mixed Gompertz growth model was used to account for between-bird variation and heterogeneous variance. The mixed model had several advantages over the fixed effects model. The mixed model partitioned BW variation into between- and within-bird variation, and the covariance structure assumed with the random effect accounted for part of the BW correlation across ages in the same individual. The amount of residual variance decreased by over 55% with the mixed model. The mixed model reduced estimation biases that resulted from selective sampling. For analysis of longitudinal growth data, the mixed effects growth model is recommended.
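A minimal sketch of the mixed Gompertz view (all parameter values are hypothetical, chosen only for illustration): the fixed effects b and c are shared across birds, while each bird receives a random deviation on the asymptotic weight, which is the between-bird random effect.

```python
import math
import random

def gompertz(t, w_max, b, c):
    """Gompertz growth curve: BW(t) = w_max * exp(-b * exp(-c * t))."""
    return w_max * math.exp(-b * math.exp(-c * t))

def simulate_flock(n_birds, ages, w_max=4000.0, b=4.0, c=0.04,
                   sd_bird=200.0, seed=0):
    """Mixed-model view: each bird gets a Gaussian random deviation on
    the asymptotic weight (between-bird variation); the fixed-effect
    parameters b and c are shared by all birds."""
    rng = random.Random(seed)
    flock = []
    for _ in range(n_birds):
        u = rng.gauss(0.0, sd_bird)  # random effect for this bird
        flock.append([gompertz(t, w_max + u, b, c) for t in ages])
    return flock

ages = list(range(0, 71, 7))  # weekly body weights, day 0 to 70
flock = simulate_flock(10, ages)
```

Fitting such a model partitions body-weight variation into a between-bird component (sd_bird) and residual within-bird variance, which is the decomposition the study above credits with the large drop in residual variance.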
Modelling Biophysical Parameters of Maize Using Landsat 8 Time Series
NASA Astrophysics Data System (ADS)
Dahms, Thorsten; Seissiger, Sylvia; Conrad, Christopher; Borg, Erik
2016-06-01
Open and free access to multi-frequent high-resolution data (e.g. Sentinel-2) will fortify agricultural applications based on satellite data. The temporal and spatial resolution of these remote sensing datasets directly affects the applicability of remote sensing methods, for instance a robust retrieval of biophysical parameters over the entire growing season at very high geometric resolution. In this study we use machine learning methods to predict biophysical parameters, namely the fraction of absorbed photosynthetic radiation (FPAR), the leaf area index (LAI) and the chlorophyll content, from high resolution remote sensing. 30 Landsat 8 OLI scenes were available in our study region in Mecklenburg-Western Pomerania, Germany. In-situ data were collected weekly to bi-weekly on 18 maize plots throughout the summer season 2015. The study aims at an optimized prediction of biophysical parameters and the identification of the best-explaining spectral bands and vegetation indices. For this purpose, we used the entire in-situ dataset from 24.03.2015 to 15.10.2015. Random forests and conditional inference forests were used because of their explicitly strong exploratory and predictive character. Variable importance measures allowed for analysing the relation between the biophysical parameters and the spectral response, and the performance of the two approaches over the plant stock evolvement. Classical random forest regression outperformed conditional inference forests, in particular when modelling the biophysical parameters over the entire growing period. For example, modelling biophysical parameters of maize for the entire vegetation period using random forests yielded: FPAR: R² = 0.85; RMSE = 0.11; LAI: R² = 0.64; RMSE = 0.9 and chlorophyll content (SPAD): R² = 0.80; RMSE = 4.9. Our results demonstrate the great potential in using machine-learning methods for the interpretation of long-term multi-frequent remote sensing datasets to model
Revised Parameters for the AMOEBA Polarizable Atomic Multipole Water Model
Pande, Vijay S.; Head-Gordon, Teresa; Ponder, Jay W.
2016-01-01
A set of improved parameters for the AMOEBA polarizable atomic multipole water model is developed. The protocol uses an automated procedure, ForceBalance, to adjust model parameters to enforce agreement with ab initio-derived results for water clusters and experimentally obtained data for a variety of liquid phase properties across a broad temperature range. The values reported here for the new AMOEBA14 water model represent a substantial improvement over the previous AMOEBA03 model. The new AMOEBA14 water model accurately predicts the temperature of maximum density and qualitatively matches the experimental density curve across temperatures ranging from 249 K to 373 K. Excellent agreement is observed for the AMOEBA14 model in comparison to a variety of experimental properties as a function of temperature, including the 2nd virial coefficient, enthalpy of vaporization, isothermal compressibility, thermal expansion coefficient and dielectric constant. The viscosity, self-diffusion constant and surface tension are also well reproduced. In comparison to high-level ab initio results for clusters of 2 to 20 water molecules, the AMOEBA14 model yields results similar to the AMOEBA03 and the direct polarization iAMOEBA models. With advances in computing power, calibration data, and optimization techniques, we recommend the use of the AMOEBA14 water model for future studies employing a polarizable water model. PMID:25683601
Revised Parameters for the AMOEBA Polarizable Atomic Multipole Water Model.
Laury, Marie L; Wang, Lee-Ping; Pande, Vijay S; Head-Gordon, Teresa; Ponder, Jay W
2015-07-23
A set of improved parameters for the AMOEBA polarizable atomic multipole water model is developed. An automated procedure, ForceBalance, is used to adjust model parameters to enforce agreement with ab initio-derived results for water clusters and experimental data for a variety of liquid phase properties across a broad temperature range. The values reported here for the new AMOEBA14 water model represent a substantial improvement over the previous AMOEBA03 model. The AMOEBA14 model accurately predicts the temperature of maximum density and qualitatively matches the experimental density curve across temperatures from 249 to 373 K. Excellent agreement is observed for the AMOEBA14 model in comparison to experimental properties as a function of temperature, including the second virial coefficient, enthalpy of vaporization, isothermal compressibility, thermal expansion coefficient, and dielectric constant. The viscosity, self-diffusion constant, and surface tension are also well reproduced. In comparison to high-level ab initio results for clusters of 2-20 water molecules, the AMOEBA14 model yields results similar to AMOEBA03 and the direct polarization iAMOEBA models. With advances in computing power, calibration data, and optimization techniques, we recommend the use of the AMOEBA14 water model for future studies employing a polarizable water model.
A Hamiltonian Model of Generator With AVR and PSS Parameters*
NASA Astrophysics Data System (ADS)
Qian, Jing.; Zeng, Yun.; Zhang, Lixiang.; Xu, Tianmao.
Taking a typical thyristor excitation system including the automatic voltage regulator (AVR) and the power system stabilizer (PSS) as an example, the supply rates of the AVR and PSS branches are selected as the energy function of the controller, and this is added to the Hamiltonian function of the generator to compose the total energy function. By a proper transformation, the standard form of the Hamiltonian model of the generator including AVR and PSS is derived. The structure matrix and damping matrix of the model include the characteristic parameters of the AVR and PSS, which provides a foundation for studying the interaction mechanism among the parameters of the AVR, the PSS, and the generator. Finally, the structural relationships and interactions of the system model are studied; the results show that the structural and damping characteristics reflected by the model are consistent with the practical system.
Prediction of interest rate using CKLS model with stochastic parameters
Ying, Khor Chia; Hin, Pooi Ah
2014-06-19
The Chan, Karolyi, Longstaff and Sanders (CKLS) model is a popular one-factor model for describing spot interest rates. In this paper, the four parameters in the CKLS model are regarded as stochastic. The parameter vector φ^(j) of four parameters at the (j+n)-th time point is estimated from the j-th window, defined as the set consisting of the observed interest rates at the j′-th time points where j ≤ j′ ≤ j+n. To model the variation of φ^(j), we assume that φ^(j) depends on φ^(j−m), φ^(j−m+1), …, φ^(j−1) and the interest rate r_(j+n) at the (j+n)-th time point via a four-dimensional conditional distribution which is derived from a [4(m+1)+1]-dimensional power-normal distribution. Treating the (j+n)-th time point as the present time point, we find a prediction interval for the future value r_(j+n+1) of the interest rate at the next time point when the value r_(j+n) of the interest rate is given. From the above four-dimensional conditional distribution, we also find a prediction interval for the future interest rate r_(j+n+d) at the d-th (d ≥ 2) time point ahead. The prediction intervals based on the CKLS model with stochastic parameters are found to have better ability to cover the observed future interest rates when compared with those based on the model with fixed parameters.
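For readers unfamiliar with the CKLS dynamics dr = (α + βr)dt + σ r^γ dW, a short Euler-Maruyama simulation sketch follows; the parameter values are hypothetical illustrations, not estimates from this paper:

```python
import random

def simulate_ckls(r0, alpha, beta, sigma, gamma, dt, n, seed=1):
    """Euler-Maruyama discretization of the CKLS short-rate SDE
    dr = (alpha + beta*r) dt + sigma * r**gamma dW."""
    rng = random.Random(seed)
    rates = [r0]
    r = r0
    for _ in range(n):
        dw = rng.gauss(0.0, 1.0) * dt ** 0.5
        # max(r, 0) keeps the diffusion term r**gamma well defined.
        r = r + (alpha + beta * r) * dt + sigma * max(r, 0.0) ** gamma * dw
        rates.append(r)
    return rates

# Hypothetical parameters: long-run level 0.05, daily steps over one year.
path = simulate_ckls(r0=0.05, alpha=0.01, beta=-0.2, sigma=0.1, gamma=0.5,
                     dt=1.0 / 252, n=252)
print(len(path), round(path[-1], 4))
```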
Systematic Parameter Estimation of a Density-Dependent Groundwater-Flow and Solute-Transport Model
NASA Astrophysics Data System (ADS)
Stanko, Z.; Nishikawa, T.; Traum, J. A.
2013-12-01
A SEAWAT-based flow and transport model of seawater intrusion that utilizes dual-domain porosity was developed for the Santa Barbara groundwater basin in southern California. Model calibration can be difficult when simulating flow and transport in large-scale hydrologic systems with extensive heterogeneity. To facilitate calibration, the hydrogeologic properties in this model are based on the fractions of coarse- and fine-grained sediment interpolated from drillers' logs. This approach prevents over-parameterization by assigning one set of parameters to coarse material and another set to fine material. Estimated parameters include boundary conditions (such as areal recharge and surface-water seepage), hydraulic conductivities, dispersivities, and the mass-transfer rate. As a result, the model has 44 parameters that were estimated by using the parameter-estimation software PEST, which uses the Gauss-Marquardt-Levenberg algorithm along with various features such as singular value decomposition to improve calibration efficiency. The model is calibrated by using 36 years of observed water-level and chloride-concentration measurements, as well as first-order changes in head and concentration. Prior information on hydraulic properties is also provided to PEST as additional observations. The calibration objective is to minimize the squared sum of weighted residuals. In addition, observation sensitivities are investigated to effectively calibrate the model. An iterative parameter-estimation procedure is used to dynamically calibrate the steady-state and transient simulation models. The resulting head and concentration states from the steady-state model provide the initial conditions for the transient model. The transient calibration provides updated parameter values for the next steady-state simulation. This process repeats until a reasonable fit is obtained. Preliminary results from the systematic calibration process indicate that tuning PEST by using a set of synthesized
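The calibration objective named above, the squared sum of weighted residuals, can be sketched as follows; the observation values and weights are hypothetical, chosen only to show how weights put head and concentration observations on a comparable scale:

```python
def weighted_ssr(observed, simulated, weights):
    """PEST-style calibration objective: the sum of squared weighted residuals,
    phi = sum_i (w_i * (obs_i - sim_i))**2."""
    return sum((w * (o - s)) ** 2 for o, s, w in zip(observed, simulated, weights))

# Hypothetical heads (m) and chloride concentrations (mg/L); small weights on
# the concentrations keep either group from dominating the objective.
obs = [12.3, 11.8, 250.0, 310.0]
sim = [12.0, 12.1, 240.0, 330.0]
w = [1.0, 1.0, 0.01, 0.01]
print(round(weighted_ssr(obs, sim, w), 4))  # 0.23
```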
A Simple Model for Ion Solvation with Non Additive Cores
1993-04-15
constant of the solvent, ε_s; the permittivity of free space, ε_0; r_i, the radius of the ion; and N_0, the Avogadro constant. δ_s is equal to r_s/λ, where r_s is…the radius of the solvent (the effective radius), and the MSA polarization parameter, λ, is calculated from the dielectric constant of the pure solvent…This expression may be simplified considerably when one considers the range of values typical for λ. For water, whose dielectric constant is
Assessing Parameter Identifiability in Phylogenetic Models Using Data Cloning
Ponciano, José Miguel; Burleigh, J. Gordon; Braun, Edward L.; Taper, Mark L.
2012-01-01
The success of model-based methods in phylogenetics has motivated much research aimed at generating new, biologically informative models. This new computer-intensive approach to phylogenetics demands validation studies and sound measures of performance. To date there has been little practical guidance available as to when and why the parameters in a particular model can be identified reliably. Here, we illustrate how Data Cloning (DC), a recently developed methodology to compute the maximum likelihood estimates along with their asymptotic variance, can be used to diagnose structural parameter nonidentifiability (NI) and distinguish it from other parameter estimability problems, including when parameters are structurally identifiable, but are not estimable in a given data set (INE), and when parameters are identifiable, and estimable, but only weakly so (WE). The application of the DC theorem uses well-known and widely used Bayesian computational techniques. With the DC approach, practitioners can use Bayesian phylogenetics software to diagnose nonidentifiability. Theoreticians and practitioners alike now have a powerful, yet simple tool to detect nonidentifiability while investigating complex modeling scenarios, where getting closed-form expressions in a probabilistic study is complicated. Furthermore, here we also show how DC can be used as a tool to examine and eliminate the influence of the priors, in particular if the process of prior elicitation is not straightforward. Finally, when applied to phylogenetic inference, DC can be used to study at least two important statistical questions: assessing identifiability of discrete parameters, like the tree topology, and developing efficient sampling methods for computationally expensive posterior densities. PMID:22649181
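The diagnostic idea behind DC, that cloning the data K times shrinks the posterior variance of an identifiable parameter roughly like 1/K, can be illustrated with a toy conjugate-normal model; this stands in for, and is far simpler than, the phylogenetic setting:

```python
def posterior_variance(n, sigma2, tau2, K):
    """Posterior variance of a normal mean with known data variance sigma2 and
    prior variance tau2, when the n data points are cloned K times.
    Cloning multiplies the likelihood contribution to the precision by K."""
    return 1.0 / (1.0 / tau2 + K * n / sigma2)

n, sigma2, tau2 = 20, 4.0, 100.0
v1 = posterior_variance(n, sigma2, tau2, K=1)
v8 = posterior_variance(n, sigma2, tau2, K=8)
# For an identifiable parameter the posterior variance shrinks like 1/K,
# approaching the asymptotic ML variance sigma2 / (K * n); for a structurally
# nonidentifiable parameter it would plateau instead.
print(round(v1 / v8, 3))  # close to K = 8
```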
Ko, Yi-An; Mukherjee, Bhramar; Smith, Jennifer A; Park, Sung Kyun; Kardia, Sharon L R; Allison, Matthew A; Vokonas, Pantel S; Chen, Jinbo; Diez-Roux, Ana V
2014-12-20
While there has been extensive research developing gene-environment interaction (GEI) methods in case-control studies, little attention has been given to sparse and efficient modeling of GEI in longitudinal studies. In a two-way table for GEI with rows and columns as categorical variables, a conventional saturated interaction model involves estimation of a specific parameter for each cell, with constraints ensuring identifiability. The estimates are unbiased but are potentially inefficient because the number of parameters to be estimated can grow quickly with increasing categories of row/column factors. On the other hand, Tukey's one-degree-of-freedom model for non-additivity treats the interaction term as a scaled product of row and column main effects. Because of the parsimonious form of interaction, the interaction estimate leads to enhanced efficiency, and the corresponding test could lead to increased power. Unfortunately, Tukey's model gives biased estimates and low power if the model is misspecified. When screening multiple GEIs where each genetic and environmental marker may exhibit a distinct interaction pattern, a robust estimator for interaction is important for GEI detection. We propose a shrinkage estimator for interaction effects that combines estimates from both Tukey's and saturated interaction models and use the corresponding Wald test for testing interaction in a longitudinal setting. The proposed estimator is robust to misspecification of interaction structure. We illustrate the proposed methods using two longitudinal studies: the Normative Aging Study and the Multi-Ethnic Study of Atherosclerosis.
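As a toy illustration of Tukey's one-degree-of-freedom model (not the proposed shrinkage estimator itself), the non-additivity parameter can be estimated from a two-way table as follows; the table values are synthetic, constructed from known effects:

```python
def tukey_gamma(table):
    """One-degree-of-freedom estimate of Tukey's non-additivity parameter in
    y_ij ~ mu + a_i + b_j + gamma * a_i * b_j, using estimated main effects."""
    rows, cols = len(table), len(table[0])
    mu = sum(sum(r) for r in table) / (rows * cols)
    a = [sum(table[i]) / cols - mu for i in range(rows)]
    b = [sum(table[i][j] for i in range(rows)) / rows - mu for j in range(cols)]
    num = sum(a[i] * b[j] * table[i][j] for i in range(rows) for j in range(cols))
    den = sum(x * x for x in a) * sum(x * x for x in b)
    return num / den

# Synthetic 3x3 table built with mu=10, a=(-1,0,1), b=(-2,0,2), gamma=0.5:
gamma = 0.5
a, b = [-1, 0, 1], [-2, 0, 2]
table = [[10 + ai + bj + gamma * ai * bj for bj in b] for ai in a]
print(round(tukey_gamma(table), 6))  # recovers gamma = 0.5
```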
Constraint on Seesaw Model Parameters with Electroweak Vacuum Stability
NASA Astrophysics Data System (ADS)
Okane, H.; Morozumi, T.
2017-03-01
Within the standard model, the electroweak vacuum is metastable. We study how heavy right-handed neutrinos in the seesaw model affect this stability through their loop contributions to the Higgs potential. Requiring that the lifetime of the electroweak vacuum be longer than the age of the Universe, we obtain constraints on parameters such as the right-handed neutrino masses and the strength of the Yukawa couplings.
Estimation of dynamic stability parameters from drop model flight tests
NASA Technical Reports Server (NTRS)
Chambers, J. R.; Iliff, K. W.
1981-01-01
The overall remotely piloted drop model operation is discussed, including model descriptions, instrumentation, launch and recovery operations, the piloting concept, and parameter identification methods. Static and dynamic stability derivatives were obtained for an angle-of-attack range from -20 deg to 53 deg. The variations of the estimates with angle of attack are consistent for most of the static derivatives, and the effects of configuration modifications to the model were apparent in the static derivative estimates.
Parabolic problems with parameters arising in evolution model for phytoremediation
NASA Astrophysics Data System (ADS)
Sahmurova, Aida; Shakhmurov, Veli
2012-12-01
Over the past few decades, efforts have been made to clean sites polluted by heavy metals such as chromium. One of the new innovative methods of eradicating metals from soil is phytoremediation, which uses plants to pull metals from the soil through their roots. This work develops a system of differential equations with parameters to model the plant-metal interaction of phytoremediation (see [1]).
Modeling and simulation of HTS cables for scattering parameter analysis
NASA Astrophysics Data System (ADS)
Bang, Su Sik; Lee, Geon Seok; Kwon, Gu-Young; Lee, Yeong Ho; Chang, Seung Jin; Lee, Chun-Kwon; Sohn, Songho; Park, Kijun; Shin, Yong-June
2016-11-01
Most modeling and simulation of high-temperature superconducting (HTS) cables is inadequate for high-frequency analysis, since the simulations focus on the fundamental frequency of the power grid and therefore do not reflect transient characteristics. However, high-frequency analysis is an essential process in studying HTS cable transients for protection and diagnosis of the cables. Thus, this paper proposes a new approach to the modeling and simulation of HTS cables that derives the scattering parameters (S-parameters), an effective high-frequency analysis tool, for transient wave propagation characteristics in the high-frequency range. A parameter sweeping method is used to validate the simulation results against the measured data given by a network analyzer (NA). This paper also presents the effects of the cable-to-NA connector in order to minimize the error between the simulated and the measured data under ambient and superconductive conditions. Based on the proposed modeling and simulation technique, S-parameters of long-distance HTS cables can be accurately derived over a wide frequency range. The results of the proposed modeling and simulation can yield the characteristics of the HTS cables and will contribute to their analysis.
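As background on the quantity being derived: for a simple one-port termination, S11 reduces to the familiar reflection-coefficient formula. The sketch below uses hypothetical load impedances and is not the cable model from this paper:

```python
import math

def s11(z_load, z0=50.0):
    """Reflection coefficient S11 of a one-port terminated in z_load,
    referenced to the system impedance z0."""
    return (z_load - z0) / (z_load + z0)

def return_loss_db(gamma):
    """Return loss in dB corresponding to a reflection coefficient."""
    return -20.0 * math.log10(abs(gamma))

print(abs(s11(50.0)))                 # matched load: no reflection
print(round(abs(s11(40 + 30j)), 4))   # mismatched complex load: |S11| = 1/3
print(round(return_loss_db(s11(40 + 30j)), 2))  # 9.54 dB return loss
```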
Left-right-symmetric model parameters: Updated bounds
Polak, J.; Zralek, M.
1992-11-01
Using the available updated experimental data, including the latest results from the CERN e+e- collider LEP and improved parity-violation results, we find new constraints on the parameters of the left-right-symmetric model in the case of light right-handed neutrinos.
Consistency of Rasch Model Parameter Estimation: A Simulation Study.
ERIC Educational Resources Information Center
van den Wollenberg, Arnold L.; And Others
1988-01-01
The unconditional (simultaneous) maximum likelihood (UML) estimation procedure for the one-parameter logistic model produces biased estimators. The UML method is inconsistent and is not a good alternative to the conditional maximum likelihood method, at least with small numbers of items. The minimum chi-square estimation procedure produces unbiased…
NASA Astrophysics Data System (ADS)
Michalik, Thomas; Multsch, Sebastian; Frede, Hans-Georg; Breuer, Lutz
2016-04-01
Water for agriculture is strongly limited in arid and semi-arid regions and is often of low quality in terms of salinity. The application of saline water for irrigation increases the salt load in the rooting zone and has to be managed by leaching to maintain a healthy soil, i.e. to wash out salts by additional irrigation. Dynamic simulation models are helpful tools for calculating the root-zone water fluxes and soil salinity content in order to investigate best management practices. However, there is little information on structural and parameter uncertainty for simulations of the water and salt balance under saline irrigation. Hence, we established a multi-model system with four different models (AquaCrop, RZWQM, SWAP, Hydrus1D/UNSATCHEM) to analyze the structural and parameter uncertainty by using the Generalized Likelihood Uncertainty Estimation (GLUE) method. Hydrus1D/UNSATCHEM and SWAP were set up with multiple sets of different implemented functions (e.g. matric and osmotic stress for root water uptake), which results in a broad range of different model structures. The simulations were evaluated against soil water and salinity content observations. The posterior distribution of the GLUE analysis gives behavioral parameter sets and reveals intervals for parameter uncertainty. Throughout all of the model sets, most parameters accounting for the soil water balance show a low uncertainty; only one or two out of five to six parameters in each model set display a high uncertainty (e.g. the pore-size distribution index in SWAP and Hydrus1D/UNSATCHEM). The differences between the models and model setups reveal the structural uncertainty. The highest structural uncertainty is observed for deep percolation fluxes between the model sets of Hydrus1D/UNSATCHEM (~200 mm) and RZWQM (~500 mm), which are more than twice as high for the latter. The model sets also show a high variation in uncertainty intervals for deep percolation, with an interquartile range (IQR) of
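A minimal sketch of the GLUE loop described above, using a toy recession model in place of the soil-salinity models and the Nash-Sutcliffe efficiency as the informal likelihood measure (all names and values below are hypothetical):

```python
import math
import random

def glue(observed, simulate, sample_prior, n_samples=5000, threshold=0.6, seed=0):
    """Minimal GLUE loop: Monte Carlo sampling from the prior, an informal
    likelihood score per parameter set, and retention of 'behavioral' sets
    whose score exceeds the threshold."""
    rng = random.Random(seed)
    mean_obs = sum(observed) / len(observed)
    sst = sum((o - mean_obs) ** 2 for o in observed)
    behavioral = []
    for _ in range(n_samples):
        theta = sample_prior(rng)
        sim = simulate(theta)
        sse = sum((o - s) ** 2 for o, s in zip(observed, sim))
        nse = 1.0 - sse / sst  # Nash-Sutcliffe efficiency as the likelihood measure
        if nse >= threshold:
            behavioral.append((nse, theta))
    return behavioral

# Toy model: exponential recession y = a * exp(-b t), 'true' a=1.0, b=0.3.
times = list(range(10))
observed = [1.0 * math.exp(-0.3 * t) for t in times]
sets = glue(
    observed,
    simulate=lambda th: [th[0] * math.exp(-th[1] * t) for t in times],
    sample_prior=lambda rng: (rng.uniform(0.5, 1.5), rng.uniform(0.1, 0.5)),
)
print("behavioral sets:", len(sets))
```

Percentile bounds over the behavioral simulations would then give the uncertainty intervals reported in studies like this one.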
Parameter Perturbations with the GFDL Model: Smoothness and Uncertainty
NASA Astrophysics Data System (ADS)
Zamboni, L.; Jacob, R. L.; Neelin, J.; Kotamarthi, V. R.; Held, I.; Zhao, M.; Williams, T. J.; McWilliams, J. C.; Moore, T. L.; Wilde, M.; Nangia, N.
2013-12-01
We found that smoothness characterizes the response of global precipitation to perturbations of 6 parameters related to cloud physics and circulation in 50-year AMIP simulations performed with the GFDL model at 1x1 degree resolution. Specifically, the AGCM depends quadratically on the parameters (Fig. 1a). Linearization of the derivative of a cost function (the globally averaged squared difference between model and observations; here illustrated for the entrainment rate) up to at least 2nd order around the standard case (eo=10) proves necessary for optimization purposes to correctly predict where the optimum value lies (Fig. 1b), and reflects the relevance of the nonlinearity of the response. The linearization also provides indications about desirable changes in the parameter values for regional optimization, which may be locally different from those for the global average. The uncertainty of precipitation varies from -9 to 6% of the model's standard version and is highest for the ice fall speed in stratiform clouds and the entrainment in convective clouds, which are the parameters with the widest range of possible values (Fig. 2). The smooth behavior and the quantified measure of sensitivity we report here are the backbone for the design of computationally effective multi-parameter perturbations and model optimization, which ultimately improve the reliability of AGCM simulations. Figure 1 caption: Smoothness and optimum parameter value for the entrainment rate. a) Root mean squared error and fits based on values eo=[8,16] and extrapolated over eo=[4,6]; b) derivative of the cost function computed at different levels of precision in the linearization (blue, green and black lines) and numerically using 1) the quadratic fit in the expression of the cost function (red line) and 2) only AGCM output (pink line). Note that the linearization determines the correct value of the minimum without using any information about the model's output at that point: the quadratic fit is based on data
Budic, Lara; Didenko, Gregor; Dormann, Carsten F
2016-01-01
In species distribution analyses, environmental predictors and distribution data for large spatial extents are often available in long-lat format, such as degree raster grids. Long-lat projections suffer from unequal cell sizes, as a degree of longitude decreases in length from approximately 110 km at the equator to 0 km at the poles. Here we investigate whether long-lat and equal-area projections yield similar model parameter estimates, or result in a consistent bias. We analyzed the environmental effects on the distribution of 12 ungulate species with a northern distribution, as models for these species should display the strongest effect of projectional distortion. Additionally we chose four species with entirely continental distributions to investigate the effect of incomplete cell coverage at the coast. We expected that including model weights proportional to the actual cell area should compensate for the observed bias in model coefficients, and similarly that using the land coverage of a cell should decrease bias in species with coastal distributions. As anticipated, model coefficients differed between long-lat and equal-area projections. Having progressively smaller and more numerous cells with increasing latitude influenced the importance of parameters in models, increased the sample size for the northernmost parts of species ranges, and reduced the subcell variability of those areas. However, this bias could be largely removed by weighting long-lat cells by the area they cover, and marginally by correcting for land coverage. Overall we found little effect of using long-lat rather than equal-area projections in our analysis. The fitted relationship between environmental parameters and occurrence probability differed only very little between the two projection types. We still recommend using equal-area projections to avoid possible bias. More importantly, our results suggest that the cell area and the proportion of a cell covered by land should be
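The area weighting discussed above can be approximated by making each long-lat cell's weight proportional to the cosine of its central latitude (a sketch that ignores the small variation of cell height with latitude):

```python
import math

def cell_area_weight(lat_deg):
    """Relative area of a long-lat grid cell centered at lat_deg:
    proportional to the cosine of latitude, since a degree of longitude
    shrinks toward the poles."""
    return math.cos(math.radians(lat_deg))

# High-latitude cells should carry proportionally less weight in model fitting.
for lat in (0, 30, 60):
    print(lat, round(cell_area_weight(lat), 3))  # 1.0, 0.866, 0.5
```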
Soil-Related Input Parameters for the Biosphere Model
A. J. Smith
2004-09-09
This report presents one of the analyses that support the Environmental Radiation Model for Yucca Mountain Nevada (ERMYN). The ''Biosphere Model Report'' (BSC 2004 [DIRS 169460]) describes the details of the conceptual model as well as the mathematical model and the required input parameters. The biosphere model is one of a series of process models supporting the postclosure Total System Performance Assessment (TSPA) for the Yucca Mountain repository. A schematic representation of the documentation flow for the Biosphere input to TSPA is presented in Figure 1-1. This figure shows the evolutionary relationships among the products (i.e., analysis and model reports) developed for biosphere modeling, and the biosphere abstraction products for TSPA, as identified in the ''Technical Work Plan for Biosphere Modeling and Expert Support'' (TWP) (BSC 2004 [DIRS 169573]). This figure is included to provide an understanding of how this analysis report contributes to biosphere modeling in support of the license application, and is not intended to imply that access to the listed documents is required to understand the contents of this report. This report, ''Soil-Related Input Parameters for the Biosphere Model'', is one of the five analysis reports that develop input parameters for use in the ERMYN model. This report is the source documentation for the six biosphere parameters identified in Table 1-1. The purpose of this analysis was to develop the biosphere model parameters associated with the accumulation and depletion of radionuclides in the soil. These parameters support the calculation of radionuclide concentrations in soil from on-going irrigation or ash deposition and, as a direct consequence, radionuclide concentration in other environmental media that are affected by radionuclide concentrations in soil. The analysis was performed in accordance with the TWP (BSC 2004 [DIRS 169573]) where the governing procedure was defined as AP-SIII.9Q, ''Scientific Analyses''. This
Chen, B.C.J.; Hull, J.R.; Seitz, M.G.; Sha, W.T.; Shah, V.L.; Soo, S.L.
1984-07-01
Computer model simulation is required to evaluate the performance of proposed or future high-level radioactive waste geological repositories. However, the accuracy of a model in predicting the real situation depends on how well the values of the transport properties are prescribed as input parameters. Knowledge of transport parameters is therefore essential. We have modeled ANL's Experiment Analog Program which was designed to simulate long-term radwaste migration process by groundwater flowing through a high-level radioactive waste repository. Using this model and experimental measurements, we have evaluated neptunium (actinide) deposition velocity and analyzed the complex phenomena of simultaneous deposition, erosion, and reentrainment of bentonite when groundwater is flowing through a narrow crack in a basalt rock. The present modeling demonstrates that we can obtain the values of transport parameters, as added information without any additional cost, from the available measurements of laboratory analog experiments. 8 figures, 3 tables.
A state parameter-based model for static recrystallization interacting with precipitation
NASA Astrophysics Data System (ADS)
Buken, Heinrich; Sherstnev, Pavel; Kozeschnik, Ernst
2016-03-01
In the present work, we develop a state parameter-based model for the treatment of simultaneous precipitation and recrystallization based on a single-parameter representation of the total dislocation density and a multi-particle multi-component framework for precipitation kinetics. In contrast to conventional approaches, the interaction of particles with recrystallization is described with a non-zero grain boundary mobility even for the case where the Zener pressure exceeds the driving pressure for recrystallization. The model successfully reproduces the experimentally observed particle-induced recrystallization stasis and subsequent continuation in micro-alloyed steel with a single consistent set of input parameters. In addition, as a state parameter-based approach, our model naturally supports introspection into the physical mechanisms governing the competing recrystallization and recovery processes.
Realistic uncertainties on Hapke model parameters from photometric measurement
NASA Astrophysics Data System (ADS)
Schmidt, Frédéric; Fernando, Jennifer
2015-11-01
The single particle phase function describes the manner in which an average element of a granular material diffuses light in the angular space, usually with two parameters: the asymmetry parameter b describing the width of the scattering lobe and the backscattering fraction c describing the main direction of the scattering lobe. Hapke proposed a convenient and widely used analytical model to describe the spectro-photometry of granular materials. Using a compilation of the published data, Hapke (Hapke, B. [2012]. Icarus 221, 1079-1083) recently studied the relationship of b and c for natural examples and proposed the hockey stick relation (excluding b > 0.5 and c > 0.5). For the moment, there is no theoretical explanation for this relationship. One goal of this article is to study a possible bias due to the retrieval method. We develop here an innovative Bayesian inversion method in order to study in detail the uncertainties of the retrieved parameters. On Emission Phase Function (EPF) data, we demonstrate that the uncertainties of the retrieved parameters follow the same hockey stick relation, suggesting that this relation is due to the fact that b and c are coupled parameters in the Hapke model, rather than to a natural phenomenon. Nevertheless, the data used in the Hapke (Hapke, B. [2012]. Icarus 221, 1079-1083) compilation generally are full Bidirectional Reflectance Distribution Function (BRDF) measurements, which are shown not to be subject to this artifact. Moreover, the Bayesian method is a good tool to test whether the sampling geometry is sufficient to constrain the parameters (single scattering albedo, surface roughness, b, c, opposition effect). We performed sensitivity tests by mimicking various surface scattering properties and various single image-like/disk-resolved image, EPF-like and BRDF-like geometric sampling conditions. The second goal of this article is to estimate the favorable geometric conditions for an accurate estimation of photometric parameters in order to provide
Using Generalized Additive Models to Analyze Single-Case Designs
ERIC Educational Resources Information Center
Shadish, William; Sullivan, Kristynn
2013-01-01
Many analyses for single-case designs (SCDs)--including nearly all the effect size indicators-- currently assume no trend in the data. Regression and multilevel models allow for trend, but usually test only linear trend and have no principled way of knowing if higher order trends should be represented in the model. This paper shows how Generalized…
Marginal regression approach for additive hazards models with clustered current status data.
Su, Pei-Fang; Chi, Yunchan
2014-01-15
Current status data arise naturally from tumorigenicity experiments, epidemiology studies, biomedicine, econometrics, and demography and sociology studies. Moreover, clustered current status data may occur with animals from the same litter in tumorigenicity experiments or with subjects from the same family in epidemiology studies. Because the only information extracted from current status data is whether the survival times are before or after the monitoring or censoring times, the nonparametric maximum likelihood estimator of the survival function converges at a rate of n^(1/3) to a complicated limiting distribution. Hence, semiparametric regression models such as the additive hazards model have been extended for independent current status data to derive test statistics, whose distributions converge at a rate of n^(1/2), for testing the regression parameters. However, a straightforward application of these statistical methods to clustered current status data is not appropriate because intracluster correlation needs to be taken into account. Therefore, this paper proposes two estimating functions for estimating the parameters in the additive hazards model for clustered current status data. The comparative results from simulation studies are presented, and the application of the proposed estimating functions to one real data set is illustrated.
Optimising muscle parameters in musculoskeletal models using Monte Carlo simulation.
Reed, Erik B; Hanson, Andrea M; Cavanagh, Peter R
2015-01-01
The use of musculoskeletal simulation software has become a useful tool for modelling joint and muscle forces during human activity, including in reduced gravity because direct experimentation is difficult. Knowledge of muscle and joint loads can better inform the design of exercise protocols and exercise countermeasure equipment. In this study, the LifeModeler™ (San Clemente, CA, USA) biomechanics simulation software was used to model a squat exercise. The initial model using default parameters yielded physiologically reasonable hip-joint forces but no activation was predicted in some large muscles such as rectus femoris, which have been shown to be active in 1-g performance of the activity. Parametric testing was conducted using Monte Carlo methods and combinatorial reduction to find a muscle parameter set that more closely matched physiologically observed activation patterns during the squat exercise. The rectus femoris was predicted to peak at 60.1% activation in the same test case compared to 19.2% activation using default parameters. These results indicate the critical role that muscle parameters play in joint force estimation and the need for exploration of the solution space to achieve physiologically realistic muscle activation.
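A bare-bones version of the Monte Carlo parameter search described above; the objective and bounds below are hypothetical placeholders, not LifeModeler quantities:

```python
import random

def monte_carlo_search(objective, bounds, n_samples=2000, seed=42):
    """Random-sampling parameter search: draw parameter sets uniformly within
    bounds and keep the set that minimizes the objective."""
    rng = random.Random(seed)
    best_params, best_score = None, float("inf")
    for _ in range(n_samples):
        params = [rng.uniform(lo, hi) for lo, hi in bounds]
        score = objective(params)
        if score < best_score:
            best_params, best_score = params, score
    return best_params, best_score

# Hypothetical objective: squared distance of two normalized 'muscle
# parameters' from target activation-matching values (0.6, 0.2).
objective = lambda p: (p[0] - 0.6) ** 2 + (p[1] - 0.2) ** 2
params, score = monte_carlo_search(objective, bounds=[(0.0, 1.0), (0.0, 1.0)])
print(round(score, 4))
```

A combinatorial reduction step, as in the study, would then refine the search around the best-scoring region.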
Prediction of mortality rates using a model with stochastic parameters
NASA Astrophysics Data System (ADS)
Tan, Chon Sern; Pooi, Ah Hin
2016-10-01
Prediction of future mortality rates is crucial to insurance companies because they face longevity risks while providing retirement benefits to a population whose life expectancy is increasing. In the past literature, a time series model based on the multivariate power-normal distribution has been applied to mortality data from the United States for the years 1933 to 2000 to forecast mortality rates for the years 2001 to 2010. In this paper, a more dynamic approach based on the multivariate time series is proposed in which the model uses stochastic parameters that vary with time. The resulting prediction intervals obtained using the model with stochastic parameters perform better: apart from covering the observed future mortality rates well, they also tend to have distinctly shorter interval lengths.
Spherical Harmonics Functions Modelling of Meteorological Parameters in PWV Estimation
NASA Astrophysics Data System (ADS)
Deniz, Ilke; Mekik, Cetin; Gurbuz, Gokhan
2016-08-01
The aim of this study is to derive temperature, pressure and humidity observations using spherical harmonics modelling and to interpolate them for the derivation of precipitable water vapor (PWV) at TUSAGA-Active stations in the test area encompassing 38.0°-42.0° northern latitudes and 28.0°-34.0° eastern longitudes of Turkey. In conclusion, the meteorological parameters computed by using GNSS observations for the study area have been modelled with a precision of ±1.74 K in temperature, ±0.95 hPa in pressure and ±14.88% in humidity. Considering studies on the interpolation of meteorological parameters, the precision of the temperature and pressure models provides adequate solutions. This study was funded by the Scientific and Technological Research Council of Turkey (TUBITAK) (The Estimation of Atmospheric Water Vapour with GPS Project, Project No: 112Y350).
Comparison of Cone Model Parameters for Halo Coronal Mass Ejections
NASA Astrophysics Data System (ADS)
Na, Hyeonock; Moon, Y.-J.; Jang, Soojeong; Lee, Kyoung-Sun; Kim, Hae-Yeon
2013-11-01
Halo coronal mass ejections (HCMEs) are a major cause of geomagnetic storms, hence their three-dimensional structures are important for space weather. We compare three cone models: an elliptical-cone model, an ice-cream-cone model, and an asymmetric-cone model. These models allow us to determine three-dimensional parameters of HCMEs such as radial speed, angular width, and the angle γ between the sky plane and the cone axis. We compare the parameters obtained from the three models using 62 HCMEs observed by SOHO/LASCO from 2001 to 2002. We then compute the root-mean-square (RMS) error between the highest measured projection speeds and the projection speeds calculated from the cone models. As a result, we find that the radial speeds obtained from the models are well correlated with one another (R > 0.8). The correlation coefficients between angular widths range from 0.1 to 0.48, and those between γ-values range from -0.08 to 0.47, much smaller than expected; the reason may lie in the models' different assumptions and methods. The RMS errors between the highest measured projection speeds and the highest estimated projection speeds of the elliptical-cone model, the ice-cream-cone model, and the asymmetric-cone model are 376 km s^-1, 169 km s^-1, and 152 km s^-1, respectively. We obtain correlation coefficients between the locations derived from the models and the flare location (R > 0.45). Finally, we discuss the strengths and weaknesses of these models in terms of space-weather applications.
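The model comparison above rests on two simple statistics: the RMS error between measured and reconstructed projection speeds, and the linear correlation between parameter sets from different models. A minimal sketch, with illustrative speeds rather than values from the 62-event sample:

```python
import math

def rms_error(measured, modeled):
    """RMS difference between measured projection speeds and the speeds
    a cone model reconstructs (used here to rank the models)."""
    n = len(measured)
    return math.sqrt(sum((m - c) ** 2 for m, c in zip(measured, modeled)) / n)

def pearson_r(x, y):
    """Linear correlation coefficient between two parameter sets
    (e.g. radial speeds from two different cone models)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# illustrative speeds in km/s, not values from the study
observed = [1000.0, 1200.0, 800.0, 1500.0]
reconstructed = [900.0, 1300.0, 850.0, 1400.0]
err = rms_error(observed, reconstructed)
```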
Parameter identification for a suction-dependent plasticity model
NASA Astrophysics Data System (ADS)
Simoni, L.; Schrefler, B. A.
2001-03-01
In this paper, the deterministic parameter identification procedure proposed in a companion paper is applied to suction-dependent elasto-plasticity problems. A mathematical model for this type of problem is first presented; it is then applied to parameter identification using laboratory data. In a second example, the identification procedure is applied to the exploitation of a gas reservoir. The effects of the extraction of underground fluids appear during and after quite long periods of time and strongly influence the decision whether or not to exploit the natural resource. Identification procedures can be very useful tools for reliable long-term predictions.
Inversion of canopy reflectance models for estimation of vegetation parameters
NASA Technical Reports Server (NTRS)
Goel, Narendra S.
1987-01-01
One of the keys to successful remote sensing of vegetation is the ability to estimate important agronomic parameters such as leaf area index (LAI) and biomass (BM) from the bidirectional canopy reflectance (CR) data obtained by a space-shuttle- or satellite-borne sensor. One approach to such estimation is through inversion of CR models which relate these parameters to CR. The feasibility of this approach has been shown. The overall objective of the research carried out was to address heretofore uninvestigated but important fundamental issues, develop the inversion technique further, and delineate its strengths and limitations.
Additive Manufacturing of Anatomical Models from Computed Tomography Scan Data.
Gür, Y
2014-12-01
The purpose of the study presented here was to investigate the manufacturability of human anatomical models from Computed Tomography (CT) scan data via a 3D desktop printer which uses fused deposition modelling (FDM) technology. First, Digital Imaging and Communications in Medicine (DICOM) CT scan data were converted to 3D Standard Triangle Language (STL) format by using the InVesalius digital imaging program. Once this STL file is obtained, a 3D physical version of the anatomical model can be fabricated by a desktop 3D FDM printer. As a case study, a patient's skull CT scan data were considered, and a tangible version of the skull was manufactured by a 3D FDM desktop printer. During the 3D printing process, the skull was built using acrylonitrile-butadiene-styrene (ABS) co-polymer plastic. The printed model showed that 3D FDM printing technology is able to fabricate anatomical models with high accuracy. As a result, the skull model can be used for preoperative surgical planning, medical training activities, and implant design and simulation, showing the potential of FDM technology in the medical field. It will also improve communication between medical staff and patients. The current results indicate that a 3D desktop printer which uses FDM technology can be used to obtain accurate anatomical models.
Parameter estimation for a nonlinear control-oriented tokamak profile evolution model
NASA Astrophysics Data System (ADS)
Geelen, P.; Felici, F.; Merle, A.; Sauter, O.
2015-12-01
A control-oriented tokamak profile evolution model is crucial for the development and testing of control schemes for a fusion plasma. The RAPTOR (RApid Plasma Transport simulatOR) code was developed with this aim in mind (Felici 2011 Nucl. Fusion 51 083052). The performance of the control system strongly depends on the quality of the control-oriented model predictions. In RAPTOR a semi-empirical transport model is used, instead of a first-principles physics model, to describe the electron heat diffusivity χ_e in view of computational speed. The structure of the empirical model is given by physics knowledge, and only some unknown physics of χ_e, which is more complicated and less well understood, is captured in its model parameters. Additionally, time-averaged sawtooth behavior is modeled by an ad hoc addition to the neoclassical conductivity σ_∥ and the electron heat diffusivity. As a result, RAPTOR contains parameters that need to be estimated for a tokamak plasma to make reliable predictions. In this paper a generic parameter estimation method, based on nonlinear least-squares theory, was developed to estimate these model parameters. For the TCV tokamak, interpretative transport simulations that used measured T_e profiles were performed, and it was shown that the developed method is capable of finding model parameters such that RAPTOR's predictions agree within ten percent with the simulated q profile and within twenty percent with the measured T_e profile. The newly developed model-parameter estimation procedure now results in a better description of a fusion plasma and allows for a less ad hoc and more automated method to implement RAPTOR on a variety of tokamaks.
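The nonlinear least-squares fitting idea can be illustrated with a Gauss-Newton iteration on a toy two-parameter profile model. The exponential model below is an illustrative stand-in, not RAPTOR's transport model, and the data are synthetic:

```python
import math

def fit_gauss_newton(xs, ys, theta0, n_iter=30, h=1e-7):
    """Gauss-Newton least-squares fit of a two-parameter model
    y = a * exp(-b * x) to profile data, with a numerical Jacobian."""
    a, b = theta0
    def model(a, b, x):
        return a * math.exp(-b * x)
    for _ in range(n_iter):
        r = [model(a, b, x) - y for x, y in zip(xs, ys)]
        # finite-difference Jacobian columns dr/da and dr/db
        ja = [(model(a + h, b, x) - model(a, b, x)) / h for x in xs]
        jb = [(model(a, b + h, x) - model(a, b, x)) / h for x in xs]
        # normal equations (J^T J) delta = -J^T r, solved for 2 unknowns
        a11 = sum(v * v for v in ja)
        a12 = sum(u * v for u, v in zip(ja, jb))
        a22 = sum(v * v for v in jb)
        g1 = -sum(u * v for u, v in zip(ja, r))
        g2 = -sum(u * v for u, v in zip(jb, r))
        det = a11 * a22 - a12 * a12
        a += (g1 * a22 - g2 * a12) / det
        b += (g2 * a11 - g1 * a12) / det
    return a, b

# synthetic "measured" profile generated from a = 3.0, b = 0.5
xs = [0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0]
ys = [3.0 * math.exp(-0.5 * x) for x in xs]
a_hat, b_hat = fit_gauss_newton(xs, ys, theta0=(2.5, 0.6))
```

A production implementation would add step damping (Levenberg-Marquardt style) and parameter bounds; the bare Gauss-Newton step suffices here because the toy data are noise-free.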
Enhancing debris flow modeling parameters integrating Bayesian networks
NASA Astrophysics Data System (ADS)
Graf, C.; Stoffel, M.; Grêt-Regamey, A.
2009-04-01
Applied debris-flow modeling requires suitably constrained input parameter sets. Depending on the model used, a series of parameters must be defined before running it. Normally, the data base describing the event, the initiation conditions, the flow behavior, the deposition process, and above all the potential range of possible debris-flow events in a given torrent is limited. There are only a few places in the world where we can fortunately find valuable data sets describing the event history of debris-flow channels, delivering information on the spatial and temporal distribution of former flow paths and deposition zones. Tree-ring records in combination with detailed geomorphic mapping, for instance, provide such data sets over a long time span. Considering the significant loss potential associated with debris-flow disasters, it is crucial that decisions made in regard to hazard mitigation are based on a consistent assessment of the risks. This in turn necessitates a proper assessment of the uncertainties involved in the modeling of debris-flow frequencies and intensities, the possible run-out extent, as well as the estimation of the damage potential. In this study, we link a Bayesian network to a Geographic Information System in order to assess debris-flow risk. We identify the major sources of uncertainty and show the potential of Bayesian inference techniques to improve the debris-flow model. We model the flow paths and deposition zones of a highly active debris-flow channel in the Swiss Alps using the numerical 2-D model RAMMS. Because uncertainties in run-out areas cause large changes in risk estimations, we use flow-path and deposition-zone information from reconstructed debris-flow events derived from dendrogeomorphological analysis covering more than 400 years to update the input parameters of the RAMMS model. The probabilistic model, which consistently incorporates this available information, can serve as a basis for spatial risk assessment.
Kock, A.
1996-05-01
The objectives of this research are: (1) to calculate and compare off-site doses from atmospheric tritium releases at the Savannah River Site using monthly versus 5-year meteorological data and annual source terms, including additional seasonal and site-specific parameters not included in present annual assessments; and (2) to calculate the range of the above dose estimates based on distributions in model parameters given by uncertainty estimates found in the literature. Consideration will be given to the sensitivity of parameters given in former studies.
Serna-Galvis, Efraím A; Silva-Agredo, Javier; Giraldo-Aguirre, Ana L; Torres-Palma, Ricardo A
2015-08-15
Fluoxetine (FLX), one of the most widely used antidepressants in the world, is an emergent pollutant found in natural waters that causes disrupting effects on the endocrine systems of some aquatic species. This work explores the total elimination of FLX by sonochemical treatment coupled to a biological system. The biological process acting alone was shown to be unable to remove the pollutant, even under favourable conditions of pH and temperature. However, sonochemical treatment (600 kHz) was shown to be able to remove the pharmaceutical. Several parameters were evaluated for the ultrasound application: the applied power (20-60 W), dissolved gas (air, Ar and He), pH (3-11) and initial concentration of fluoxetine (2.9-162.0 μmol L^-1). Additionally, the presence of organic (1-hexanol and 2-propanol) and inorganic (Fe^2+) compounds in the water matrix and the degradation of FLX in a natural mineral water were evaluated. The sonochemical treatment readily eliminates FLX following Langmuir-type kinetics. After 360 min of ultrasonic irradiation, 15% mineralization was achieved. Analysis of the biodegradability provided evidence that the sonochemical process transforms the pollutant into biodegradable substances, which can then be mineralized in a subsequent biological treatment.
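Langmuir-type degradation kinetics can be sketched by integrating the heterogeneous rate law numerically. The rate constants below are illustrative assumptions, not values fitted in the study:

```python
def langmuir_degradation(c0, k, K, dt=0.1, t_end=360.0):
    """Forward-Euler integration of the Langmuir-type rate law
    dC/dt = -k*K*C / (1 + K*C): near zeroth-order at high C
    (surface saturation), first-order with constant k*K at low C.
    Time in minutes; k and K are illustrative."""
    c, t = c0, 0.0
    history = [(t, c)]
    while t < t_end - 1e-9:
        c = max(0.0, c - dt * k * K * c / (1.0 + K * c))
        t += dt
        history.append((t, c))
    return history

# initial concentration near the top of the studied range (illustrative units)
history = langmuir_degradation(c0=100.0, k=1.0, K=0.02)
```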
Modeling of additive manufacturing processes for metals: Challenges and opportunities
Francois, Marianne M.; Sun, Amy; King, Wayne E.; ...
2017-01-09
Here, with the technology being developed to manufacture metallic parts using increasingly advanced additive manufacturing processes, a new era has opened up for designing novel structural materials, from designing shapes and complex geometries to controlling the microstructure (alloy composition and morphology). The material properties used within specific structural components are also designable in order to meet specific performance requirements that are not imaginable with traditional metal forming and machining (subtractive) techniques.
Unrealistic parameter estimates in inverse modelling: A problem or a benefit for model calibration?
Poeter, E.P.; Hill, M.C.
1996-01-01
Estimation of unrealistic parameter values by inverse modelling is useful for constructed model discrimination. This utility is demonstrated using the three-dimensional, groundwater flow inverse model MODFLOWP to estimate parameters in a simple synthetic model where the true conditions and character of the errors are completely known. When a poorly constructed model is used, unreasonable parameter values are obtained even when using error free observations and true initial parameter values. This apparent problem is actually a benefit because it differentiates accurately and inaccurately constructed models. The problems seem obvious for a synthetic problem in which the truth is known, but are obscure when working with field data. Situations in which unrealistic parameter estimates indicate constructed model problems are illustrated in applications of inverse modelling to three field sites and to complex synthetic test cases in which it is shown that prediction accuracy also suffers when constructed models are inaccurate.
Neural Models: An Option to Estimate Seismic Parameters of Accelerograms
NASA Astrophysics Data System (ADS)
Alcántara, L.; García, S.; Ovando-Shelley, E.; Macías, M. A.
2014-12-01
Seismic instrumentation for recording strong earthquakes in Mexico dates back to the 1960s, owing to the activities carried out by the Institute of Engineering at Universidad Nacional Autónoma de México. However, it was after the big earthquake of September 19, 1985 (M=8.1) that the seismic instrumentation project assumed great importance. Currently, strong ground motion networks have been installed for monitoring seismic activity, mainly along the Mexican subduction zone and in Mexico City. Nevertheless, there are other major regions and cities that can be affected by strong earthquakes and have not yet begun their seismic instrumentation programs, or whose programs are still in development. Because of this situation, some relevant earthquakes (e.g. Huajuapan de León, Oct 24, 1980, M=7.1; Tehuacán, Jun 15, 1999, M=7; and Puerto Escondido, Sep 30, 1999, M=7.5) have not been registered properly in some cities, such as Puebla and Oaxaca, which were damaged during those earthquakes. Fortunately, the good maintenance work carried out on the seismic network has permitted the recording of an important number of small events in those cities. In this research, we therefore present a methodology based on the use of neural networks to estimate significant duration and, in some cases, the response spectra for those seismic events. The neural model developed predicts significant duration in terms of magnitude, epicentral distance, focal depth and soil characterization. Additionally, for response spectra we used a vector of spectral accelerations. For training the model we selected a set of accelerogram records obtained from the small events recorded by the strong motion instruments installed in the cities of Puebla and Oaxaca. The final results show that neural networks, as a soft-computing tool using a multi-layer feed-forward architecture, provide good estimates of the target parameters, and that they also have a good predictive capacity for strong ground motion duration and response spectra.
Addition of a Hydrological Cycle to the EPIC Jupiter Model
NASA Astrophysics Data System (ADS)
Dowling, T. E.; Palotai, C. J.
2002-09-01
We present a progress report on the development of the EPIC atmospheric model to include clouds, moist convection, and precipitation. Two major goals are: i) to study the influence that convective water clouds have on Jupiter's jets and vortices, such as those to the northwest of the Great Red Spot, and ii) to predict ammonia-cloud evolution for direct comparison to visual images (instead of relying on surrogates for clouds like potential vorticity). Data structures in the model are now set up to handle the vapor, liquid, and solid phases of the most common chemical species in planetary atmospheres. We have adapted the Prather conservation of second-order moments advection scheme to the model, which yields high accuracy for dealing with cloud edges. In collaboration with computer scientists H. Dietz and T. Mattox at the U. Kentucky, we have built a dedicated 40-node parallel computer that achieves 34 Gflops (double precision) at 74 cents per Mflop, and have updated the EPIC-model code to use cache-aware memory layouts and other modern optimizations. The latest test-case results of cloud evolution in the model will be presented. This research is funded by NASA's Planetary Atmospheres and EPSCoR programs.
Link, William A.; Barker, Richard J.
2005-01-01
We present a hierarchical extension of the Cormack–Jolly–Seber (CJS) model for open population capture–recapture data. In addition to recaptures of marked animals, we model first captures of animals and losses on capture. The parameter set includes capture probabilities, survival rates, and birth rates. The survival rates and birth rates are treated as a random sample from a bivariate distribution, thus the model explicitly incorporates correlation in these demographic rates. A key feature of the model is that the likelihood function, which includes a CJS model factor, is expressed entirely in terms of identifiable parameters; losses on capture can be factored out of the model. Since the computational complexity of classical likelihood methods is prohibitive, we use Markov chain Monte Carlo in a Bayesian analysis. We describe an efficient candidate-generation scheme for Metropolis–Hastings sampling of CJS models and extensions. The procedure is illustrated using mark-recapture data for the moth Gonodontis bidentata.
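The Metropolis-Hastings sampling mentioned above can be illustrated on a deliberately reduced problem: a single survival probability with a binomial likelihood and a uniform prior. The full hierarchical CJS likelihood is far richer; this sketch only shows the random-walk candidate-generation mechanics, and the counts are illustrative.

```python
import math
import random

def log_lik(phi, n, survived):
    """Binomial log-likelihood for a survival probability phi."""
    if not 0.0 < phi < 1.0:
        return float("-inf")
    return survived * math.log(phi) + (n - survived) * math.log(1.0 - phi)

def metropolis_hastings(n, survived, n_steps=20000, step=0.05, seed=7):
    """Random-walk Metropolis sampling of phi under a uniform prior.
    A symmetric Gaussian proposal makes the Hastings ratio reduce to
    the likelihood ratio."""
    rng = random.Random(seed)
    phi, samples = 0.5, []
    for _ in range(n_steps):
        cand = phi + rng.gauss(0.0, step)
        log_ratio = log_lik(cand, n, survived) - log_lik(phi, n, survived)
        if math.log(rng.random() + 1e-300) < log_ratio:
            phi = cand
        samples.append(phi)
    return samples

# 70 of 100 marked animals survive (illustrative counts)
samples = metropolis_hastings(n=100, survived=70)
posterior_mean = sum(samples[10000:]) / 10000.0
```

Discarding the first half of the chain as burn-in, the posterior mean lands near the analytic Beta(71, 31) mean of about 0.70.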
Dynamic imaging model and parameter optimization for a star tracker.
Yan, Jinyun; Jiang, Jie; Zhang, Guangjun
2016-03-21
Under dynamic conditions, star spots move across the image plane of a star tracker and form a smeared star image. This smearing effect increases errors in star position estimation and degrades attitude accuracy. First, an analytical energy distribution model of a smeared star spot is established based on a line segment spread function because the dynamic imaging process of a star tracker is equivalent to the static imaging process of linear light sources. The proposed model, which has a clear physical meaning, explicitly reflects the key parameters of the imaging process, including incident flux, exposure time, velocity of a star spot in an image plane, and Gaussian radius. Furthermore, an analytical expression of the centroiding error of the smeared star spot is derived using the proposed model. An accurate and comprehensive evaluation of centroiding accuracy is obtained based on the expression. Moreover, analytical solutions of the optimal parameters are derived to achieve the best performance in centroid estimation. Finally, we perform numerical simulations and a night sky experiment to validate the correctness of the dynamic imaging model, the centroiding error expression, and the optimal parameters.
Dependency of parameter values of a crop model on the spatial scale of simulation
NASA Astrophysics Data System (ADS)
Iizumi, Toshichika; Tanaka, Yukiko; Sakurai, Gen; Ishigooka, Yasushi; Yokozawa, Masayuki
2014-09-01
Reliable regional-scale representation of crop growth and yields has been increasingly important in earth system modeling for the simulation of atmosphere-vegetation-soil interactions in managed ecosystems. While the parameter values in many crop models are location specific or cultivar specific, the validity of such values for regional simulation is in question. We present the scale dependency of likely parameter values that are related to the responses of growth rate and yield to temperature, using the paddy rice model applied to Japan as an example. For all regions, values of the two parameters that determine the degree of yield response to low temperature (the base temperature for calculating cooling degree days and the curvature factor of spikelet sterility caused by low temperature) appeared to change relative to the grid interval. Two additional parameters (the air temperature at which the developmental rate is half of the maximum rate at the optimum temperature and the value of developmental index at which point the crop becomes sensitive to the photoperiod) showed scale dependency in a limited region, whereas the remaining three parameters that determine the phenological characteristics of a rice cultivar and the technological level show no clear scale dependency. These results indicate the importance of using appropriate parameter values for the spatial scale at which a crop model operates. We recommend avoiding the use of location-specific or cultivar-specific parameter values for regional crop simulation, unless a rationale is presented suggesting these values are insensitive to spatial scale.
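The cooling-degree-days quantity whose base temperature showed scale dependency can be written down directly. The accumulation-below-base form used here is one common definition for rice cool-damage indices, and the temperatures are illustrative:

```python
def cooling_degree_days(daily_mean_temps, base_temp):
    """Accumulated temperature deficit below a base temperature.
    The base temperature is the parameter whose likely value was
    found to change with the grid interval."""
    return sum(max(0.0, base_temp - t) for t in daily_mean_temps)

# four illustrative daily mean temperatures (deg C) against a 20 C base
cdd = cooling_degree_days([18.0, 20.0, 22.0, 17.5], base_temp=20.0)
```

Because spatial averaging smooths out cold extremes, the same physical threshold corresponds to different effective base temperatures at different grid intervals, which is one intuition for the scale dependency reported above.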
Modeling crash spatial heterogeneity: random parameter versus geographically weighting.
Xu, Pengpeng; Huang, Helai
2015-02-01
The widely adopted techniques for regional crash modeling include the negative binomial model (NB) and the Bayesian negative binomial model with conditional autoregressive prior (CAR). The outputs from both models consist of a set of fixed global parameter estimates. However, the impacts of predictor variables on crash counts might not be stationary over space. This study quantitatively investigated this spatial heterogeneity in regional safety modeling using two advanced approaches, i.e., the random parameter negative binomial model (RPNB) and the semi-parametric geographically weighted Poisson regression model (S-GWPR). Based on a 3-year data set from the county of Hillsborough, Florida, results revealed that (1) both RPNB and S-GWPR successfully capture the spatially varying relationship, but the two methods yield notably different sets of results; (2) the S-GWPR performs best, with the highest value of R_d^2 as well as the lowest mean absolute deviance and Akaike information criterion measures, whereas the RPNB is comparable to the CAR and in some cases provides less accurate predictions; (3) a moderately significant spatial correlation is found in the residuals of RPNB and NB, implying their inadequacy in accounting for the spatial correlation that exists across adjacent zones. As crash data are typically collected with reference to a location dimension, it is desirable to first make use of the geographical component to explore explicitly spatial aspects of the crash data (i.e., the spatial heterogeneity, or the spatially structured varying relationships), and then to address the unobserved heterogeneity with non-spatial or fuzzy techniques. The S-GWPR is proven to be more appropriate for regional crash modeling, as the method outperforms the global models in capturing the spatial heterogeneity occurring in the relationships that are modeled and, compared with the non-spatial model, it is capable of accounting for the spatial correlation in crash data.
Tradeoffs among watershed model calibration targets for parameter estimation
NASA Astrophysics Data System (ADS)
Price, Katie; Purucker, S. Thomas; Kraemer, Stephen R.; Babendreier, Justin E.
2012-10-01
Hydrologic models are commonly calibrated by optimizing a single objective function target to compare simulated and observed flows, although individual targets are influenced by specific flow modes. Nash-Sutcliffe efficiency (NSE) emphasizes flood peaks in evaluating simulation fit, while modified Nash-Sutcliffe efficiency (MNS) emphasizes lower flows, and the ratio of the simulated to observed standard deviations (RSD) prioritizes flow variability. We investigated tradeoffs of calibrating streamflow on three standard objective functions (NSE, MNS, and RSD), as well as a multiobjective function aggregating these three targets to simultaneously address a range of flow conditions, for calibration of the Soil and Water Assessment Tool (SWAT) daily streamflow simulations in two watersheds. A suite of objective functions was explored to select a minimally redundant set of metrics addressing a range of flow characteristics. After each pass of 2001 simulations, an iterative informal likelihood procedure was used to subset parameter ranges. The ranges from each best-fit simulation set were used for model validation. Values for optimized parameters vary among calibrations using different objective functions, which underscores the importance of linking modeling objectives to calibration target selection. The simulation set approach yielded validated models of similar quality as seen with a single best-fit parameter set, with the added benefit of uncertainty estimations. Our approach represents a novel compromise between equifinality-based approaches and Pareto optimization. Combining the simulation set approach with the multiobjective function was demonstrated to be a practicable and flexible approach for model calibration, which can be readily modified to suit modeling goals, and is not model or location specific.
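The three calibration targets have compact formulas. The modified NSE below uses the absolute-value form, one common variant; since the abstract does not spell out its exact expression, that choice is an assumption, and the flow values are illustrative:

```python
import math

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 - SSE / variance of observations.
    Squared errors make it emphasise fit to flood peaks."""
    mean_obs = sum(obs) / len(obs)
    sse = sum((o - s) ** 2 for o, s in zip(obs, sim))
    var = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - sse / var

def mns(obs, sim):
    """Modified NSE, absolute-value form: down-weights peaks and so
    gives low flows relatively more influence (assumed variant)."""
    mean_obs = sum(obs) / len(obs)
    sae = sum(abs(o - s) for o, s in zip(obs, sim))
    sad = sum(abs(o - mean_obs) for o in obs)
    return 1.0 - sae / sad

def rsd(obs, sim):
    """Ratio of simulated to observed standard deviation (variability)."""
    def sd(v):
        m = sum(v) / len(v)
        return math.sqrt(sum((x - m) ** 2 for x in v) / len(v))
    return sd(sim) / sd(obs)

obs = [1.0, 3.0, 2.0, 5.0, 4.0]   # observed daily flows (illustrative)
sim = [1.2, 2.7, 2.2, 4.6, 4.1]   # simulated flows
scores = (nse(obs, sim), mns(obs, sim), rsd(obs, sim))
```

A multiobjective aggregate in the spirit of the paper could then combine, e.g., (1 - NSE), (1 - MNS), and |1 - RSD| into a single minimisation target.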
Generalized Additive Models, Cubic Splines and Penalized Likelihood.
1987-05-22
in case-control studies). All models in the table include dummy variables to account for the matching. The first 3 lines of the table indicate that OA...Assoc. Breslow, N. and Day, N. (1980). Statistical Methods in Cancer Research, Volume 1: The Analysis of Case-Control Studies. International Agency
NASA Astrophysics Data System (ADS)
Samadhi, TMAA; Sumihartati, Atin
2016-02-01
The most critical stage in a garment industry is the sewing process, because it generally consists of a number of operations and a large number of sewing machines for each operation. Therefore, it requires a balancing method that can assign tasks to work stations with balanced workloads. Many studies on assembly line balancing assume a new assembly line, but in reality, due to demand fluctuations and increases, re-balancing is needed. To cope with those fluctuating demand changes, additional capacity can be obtained by investing in spare sewing machines and by paying for sewing services through outsourcing. This study develops an assembly line balancing (ALB) model for an existing line to cope with fluctuating demand changes. Capacity redesign is decided upon if the fluctuating demand exceeds the available capacity, through a combination of investment in new machines and outsourcing, while minimizing the cost of idle capacity in the future. The objective of the model is to minimize the total cost of the assembly line, which consists of operating costs, machine costs, capacity-addition costs, losses due to idle capacity, and outsourcing costs. The model developed is based on an integer programming formulation. The model is tested on a set of data for one year of demand with an existing number of sewing machines of 41 units. The results show that an additional maximum capacity of up to 76 machine units is required when there is an increase of 60% over the average demand, at equal cost parameters.
Hadrup, Niels; Taxvig, Camilla; Pedersen, Mikael; Nellemann, Christine; Hass, Ulla; Vinggaard, Anne Marie
2013-01-01
Humans are concomitantly exposed to numerous chemicals. An infinite number of combinations and doses thereof can be imagined. For toxicological risk assessment, the mathematical prediction of mixture effects using knowledge of single chemicals is therefore desirable. We investigated the pros and cons of the concentration addition (CA), independent action (IA) and generalized concentration addition (GCA) models. First, we measured the effects of single chemicals and mixtures thereof on steroid synthesis in H295R cells. Then single-chemical data were applied to the models; predictions of mixture effects were calculated and compared to the experimental mixture data. Mixture 1 contained environmental chemicals adjusted in ratio according to human exposure levels. Mixture 2 was a potency-adjusted mixture containing five pesticides. The prediction of testosterone effects coincided with the experimental Mixture 1 data. In contrast, antagonism was observed for the effects of Mixture 2 on this hormone. The mixtures contained chemicals exerting only limited maximal effects. This hampered prediction by the CA and IA models, whereas the GCA model could be used to predict a full dose-response curve. Regarding effects on progesterone and estradiol, some chemicals had stimulatory effects whereas others had inhibitory effects. The three models were not applicable in this situation and no predictions could be performed. Finally, the expected contributions of single chemicals to the mixture effects were calculated. Prochloraz was the predominant but not sole driver of the mixtures, suggesting that one chemical alone was not responsible for the mixture effects. In conclusion, the GCA model seemed to be superior to the CA and IA models for the prediction of testosterone effects. A situation with chemicals exerting opposing effects, for which the models could not be applied, was identified. In addition, the data indicate that in non-potency-adjusted mixtures the effects cannot always be predicted.
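The CA and IA predictions have compact closed forms at the level of toxic units and effect probabilities. A minimal sketch with illustrative EC50s and single-chemical effects (the GCA generalisation, which also handles partial agonists, is not shown):

```python
def ca_mixture_ec50(fractions, ec50s):
    """Concentration addition: the mixture EC50 follows from summing
    toxic units, EC50_mix = 1 / sum(p_i / EC50_i), where p_i is the
    fraction of chemical i in the mixture."""
    if abs(sum(fractions) - 1.0) > 1e-9:
        raise ValueError("mixture fractions must sum to 1")
    return 1.0 / sum(p / e for p, e in zip(fractions, ec50s))

def ia_mixture_effect(effects):
    """Independent action: combined fractional effect
    E_mix = 1 - prod(1 - E_i) for independently acting chemicals."""
    prod = 1.0
    for e in effects:
        prod *= 1.0 - e
    return 1.0 - prod

mix_ec50 = ca_mixture_ec50([0.5, 0.5], [1.0, 3.0])   # illustrative EC50s
combined = ia_mixture_effect([0.5, 0.5])             # illustrative effects
```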
Hussain, Faraz; Jha, Sumit K; Jha, Susmit; Langmead, Christopher J
2014-01-01
Stochastic models are increasingly used to study the behaviour of biochemical systems. While the structure of such models is often readily available from first principles, unknown quantitative features of the model are incorporated into the model as parameters. Algorithmic discovery of parameter values from experimentally observed facts remains a challenge for the computational systems biology community. We present a new parameter discovery algorithm that uses simulated annealing, sequential hypothesis testing, and statistical model checking to learn the parameters in a stochastic model. We apply our technique to a model of glucose and insulin metabolism used for in-silico validation of artificial pancreata and demonstrate its effectiveness by developing parallel CUDA-based implementation for parameter synthesis in this model.
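A minimal simulated-annealing parameter search in the spirit of the algorithm can be sketched as follows. The loss function, cooling schedule, and target value are all illustrative; the actual method additionally couples annealing with sequential hypothesis testing and statistical model checking:

```python
import math
import random

def simulated_annealing(loss, x0, n_steps=5000, t0=1.0, seed=3):
    """Simulated-annealing minimisation of a scalar loss. Uphill moves
    are accepted with probability exp(-delta/T); the 1/i cooling
    schedule and Gaussian proposal are illustrative choices."""
    rng = random.Random(seed)
    x, fx = x0, loss(x0)
    best, fbest = x, fx
    for i in range(1, n_steps + 1):
        t = t0 / i
        cand = x + rng.gauss(0.0, 0.1)
        fc = loss(cand)
        if fc < fx or rng.random() < math.exp(-(fc - fx) / t):
            x, fx = cand, fc
            if fc < fbest:
                best, fbest = cand, fc
    return best

# toy discrepancy: squared distance of a rate parameter from the value
# (2.5, assumed here) that reproduces the observed behaviour; a real run
# would score simulated trajectories against experimental facts instead
best = simulated_annealing(lambda k: (k - 2.5) ** 2, x0=0.0)
```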
Analysis and Modeling of soil hydrology under different soil additives in artificial runoff plots
NASA Astrophysics Data System (ADS)
Ruidisch, M.; Arnhold, S.; Kettering, J.; Huwe, B.; Kuzyakov, Y.; Ok, Y.; Tenhunen, J. D.
2009-12-01
The impact of monsoon events during June and July in the Korean project region, the Haean Basin in the northeastern part of South Korea, plays a key role for erosion, leaching and groundwater pollution risk by agrochemicals. Therefore, the project investigates the main hydrological processes in agricultural soils under field and laboratory conditions on different scales (plot, hillslope and catchment). Soil hydrological parameters were analysed depending on different soil additives, which are known to prevent soil erosion and nutrient loss as well as to increase water infiltration, aggregate stability and soil fertility. Hence, synthetic water-soluble polyacrylamides (PAM), biochar (black carbon mixed with organic fertilizer), and a combination of both PAM and biochar were applied in runoff plots at three agricultural field sites. Additionally, a subplot without any additives was set up as a control. The field sites were selected in areas with similar hillslope gradients and with emphasis on the dominant land management form of dryland farming in Haean, which is characterised by row planting and row covering by foil. Hydrological parameters like saturated hydraulic conductivity, matric potential and water content were analysed by infiltration experiments, continuous tensiometer measurements and time domain reflectometry, as well as pressure plates to identify the characteristic water retention curve of each horizon. Weather data were observed by three weather stations next to the runoff plots. The measured data also provide the input data for modeling water transport in the unsaturated zone of the runoff plots with HYDRUS 1D/2D/3D and SWAT (Soil & Water Assessment Tool).
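Water retention curves of the kind identified from pressure-plate data are commonly parameterized with the van Genuchten (1980) model, which is also what HYDRUS uses internally. A sketch with illustrative parameter values (not the study's fitted values):

```python
def van_genuchten_theta(h, theta_r, theta_s, alpha, n):
    """Volumetric water content at matric head h (negative when
    unsaturated) under the van Genuchten retention model, with the
    usual Mualem constraint m = 1 - 1/n."""
    if h >= 0.0:                    # at or above saturation
        return theta_s
    m = 1.0 - 1.0 / n
    se = (1.0 + (alpha * abs(h)) ** n) ** (-m)   # effective saturation
    return theta_r + (theta_s - theta_r) * se
```

Fitting theta_r, theta_s, alpha and n per horizon to the measured retention points yields the soil hydraulic input HYDRUS 1D/2D/3D expects.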
Zhou, Liming; Yang, Yuxing; Yuan, Shiying
2006-02-01
A new algorithm, the coordinate-transform iterative optimization method based on a least-squares curve-fitting model, is presented. The algorithm is used for extracting the bio-impedance model parameters. It is superior to other methods in that it converges faster and achieves higher calculation precision. The model parameters, such as Ri, Re, Cm and alpha, can thus be extracted rapidly and accurately. With the aim of lowering power consumption, decreasing cost and improving the price-to-performance ratio, a practical bio-impedance measurement system with two CPUs has been built. The preliminary results show that the intracellular resistance Ri increased markedly with increasing working load during sitting, which reflects the ischemic change of the lower limbs.
Neural mass model parameter identification for MEG/EEG
NASA Astrophysics Data System (ADS)
Kybic, Jan; Faugeras, Olivier; Clerc, Maureen; Papadopoulo, Théo
2007-03-01
Electroencephalography (EEG) and magnetoencephalography (MEG) have excellent time resolution. However, the poor spatial resolution and small number of sensors do not permit reconstruction of a general spatial activation pattern. Moreover, the low signal-to-noise ratio (SNR) makes accurate reconstruction of a time course also challenging. We therefore propose to use constrained reconstruction, modeling the relevant part of the brain using a neural mass model: there is a small number of zones that are considered as entities, and neurons within a zone are assumed to be activated simultaneously. The location and spatial extent of the zones as well as the interzonal connection pattern can be determined from functional MRI (fMRI), diffusion tensor MRI (DTMRI), and other anatomical and brain mapping observation techniques. The observation model is linear; its deterministic part is known from EEG/MEG forward modeling, and the statistics of the stochastic part can be estimated. The dynamics of the neural model are described by a moderate number of parameters that can be estimated from the recorded EEG/MEG data. We explicitly model the long-distance communication delays. Our parameters have physiological meaning and their plausible range is known. Since the problem is highly nonlinear, a quasi-Newton optimization method with random sampling and automatic success evaluation is used. The actual connection topology can be identified from several possibilities. The method was tested on synthetic data as well as on real MEG somatosensory-evoked field (SEF) data.
Complex Parameter Landscape for a Complex Neuron Model
Achard, Pablo; De Schutter, Erik
2006-01-01
The electrical activity of a neuron is strongly dependent on the ionic channels present in its membrane. Modifying the maximal conductances from these channels can have a dramatic impact on neuron behavior. But the effect of such modifications can also be cancelled out by compensatory mechanisms among different channels. We used an evolution strategy with a fitness function based on phase-plane analysis to obtain 20 very different computational models of the cerebellar Purkinje cell. All these models produced very similar outputs to current injections, including tiny details of the complex firing pattern. These models were not completely isolated in the parameter space, but neither did they belong to a large continuum of good models that would exist if weak compensations between channels were sufficient. The parameter landscape of good models can best be described as a set of loosely connected hyperplanes. Our method is efficient in finding good models in this complex landscape. Unraveling the landscape is an important step towards the understanding of functional homeostasis of neurons. PMID:16848639
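An evolution strategy of the kind used to search this parameter landscape can be sketched in a few lines. This is a generic (1, lambda) sketch with a toy fitness, not the authors' phase-plane fitness function or their exact strategy variant; sigma, lam and the generation count are illustrative.

```python
import random

def evolution_strategy(fitness, init, sigma=0.3, lam=20,
                       generations=150, seed=1):
    """(1, lambda) evolution strategy with Gaussian mutation; `fitness`
    is minimised. In the paper this would compare phase-plane
    trajectories of a candidate Purkinje-cell model to the target."""
    rng = random.Random(seed)
    parent = list(init)
    best, fbest = list(parent), fitness(parent)
    for _ in range(generations):
        # Sample lam mutated offspring around the current parent.
        offspring = [[p + rng.gauss(0.0, sigma) for p in parent]
                     for _ in range(lam)]
        offspring.sort(key=fitness)
        parent = offspring[0]            # comma selection: best offspring
        f = fitness(parent)
        if f < fbest:                    # keep the best-ever solution
            best, fbest = list(parent), f
    return best, fbest
```

Running many independent seeds and keeping all good-fitness survivors is one way to expose the "loosely connected hyperplanes" structure described above.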
The definition of input parameters for modelling of energetic subsystems
NASA Astrophysics Data System (ADS)
Ptacek, M.
2013-06-01
This paper is a short review and basic description of mathematical models of renewable energy sources, which constitute the individual investigated subsystems of a system created in Matlab/Simulink. It describes the physical and mathematical relationships of photovoltaic and wind energy sources that are often connected to distribution networks. Fuel cell technology is much less commonly connected to distribution networks, but it could be promising in the near future. Therefore, the paper presents a new dynamic model of a low-temperature fuel cell subsystem and defines its main input parameters. Finally, the main graphic results achieved for the suggested parameters and for all the individual subsystems mentioned above are shown.
Auxiliary Parameter MCMC for Exponential Random Graph Models
NASA Astrophysics Data System (ADS)
Byshkin, Maksym; Stivala, Alex; Mira, Antonietta; Krause, Rolf; Robins, Garry; Lomi, Alessandro
2016-11-01
Exponential random graph models (ERGMs) are a well-established family of statistical models for analyzing social networks. Computational complexity has so far limited the appeal of ERGMs for the analysis of large social networks. Efficient computational methods are highly desirable in order to extend the empirical scope of ERGMs. In this paper we report results of a research project on the development of snowball sampling methods for ERGMs. We propose an auxiliary parameter Markov chain Monte Carlo (MCMC) algorithm for sampling from the relevant probability distributions. The method is designed to decrease the number of allowed network states without worsening the mixing of the Markov chains, and suggests a new approach for the development of MCMC samplers for ERGMs. We demonstrate the method on both simulated and actual (empirical) network data and show that it reduces CPU time for parameter estimation by an order of magnitude compared to current MCMC methods.
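For readers unfamiliar with ERGM sampling, the baseline the paper improves on is an edge-toggle Metropolis chain. A minimal sketch for the simplest possible ERGM, P(x) proportional to exp(theta * edges(x)); this is an illustration of the standard sampler, not the auxiliary-parameter method proposed above, and real ERGMs add further change statistics (stars, triangles) to `delta`.

```python
import math, random

def ergm_edge_mcmc(n_nodes, theta, n_steps, seed=0):
    """Metropolis sampler for the edge-count-only ERGM; each step
    proposes toggling one dyad and accepts with probability
    min(1, exp(theta * delta)), delta being the change in edge count.
    Returns the final graph density."""
    rng = random.Random(seed)
    dyads = [(i, j) for i in range(n_nodes) for j in range(i + 1, n_nodes)]
    edges = set()
    for _ in range(n_steps):
        d = rng.choice(dyads)
        delta = -1 if d in edges else 1
        if rng.random() < min(1.0, math.exp(theta * delta)):
            edges ^= {d}                 # toggle the proposed dyad
    return len(edges) / len(dyads)
```

For this degenerate model the dyads are independent Bernoulli(exp(theta)/(1+exp(theta))) at stationarity, which makes the sampler easy to check.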
Automated parameter estimation for biological models using Bayesian statistical model checking
2015-01-01
Background Probabilistic models have gained widespread acceptance in the systems biology community as a useful way to represent complex biological systems. Such models are developed using existing knowledge of the structure and dynamics of the system, experimental observations, and inferences drawn from statistical analysis of empirical data. A key bottleneck in building such models is that some system variables cannot be measured experimentally. These variables are incorporated into the model as numerical parameters. Determining values of these parameters that justify existing experiments and provide reliable predictions when model simulations are performed is a key research problem. Domain experts usually estimate the values of these parameters by fitting the model to experimental data. Model fitting is usually expressed as an optimization problem that requires minimizing a cost-function which measures some notion of distance between the model and the data. This optimization problem is often solved by combining local and global search methods that tend to perform well for the specific application domain. When some prior information about parameters is available, methods such as Bayesian inference are commonly used for parameter learning. Choosing the appropriate parameter search technique requires detailed domain knowledge and insight into the underlying system. Results Using an agent-based model of the dynamics of acute inflammation, we demonstrate a novel parameter estimation algorithm by discovering the amount and schedule of doses of bacterial lipopolysaccharide that guarantee a set of observed clinical outcomes with high probability. We synthesized values of twenty-eight unknown parameters such that the parameterized model instantiated with these parameter values satisfies four specifications describing the dynamic behavior of the model. Conclusions We have developed a new algorithmic technique for discovering parameters in complex stochastic models of
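A core primitive in statistical model checking is sequential hypothesis testing over stochastic simulations. The paper uses a Bayesian variant; as an illustrative stand-in, here is the classical Wald sequential probability ratio test (SPRT), where `sample()` represents one model simulation returning whether the property held.

```python
import math, random

def sprt(sample, p0, p1, alpha=0.01, beta=0.01, max_samples=100000):
    """Wald's SPRT for H0: p = p0 versus H1: p = p1 (p1 < p0), where p
    is the probability that one stochastic simulation satisfies the
    property. Returns the accepted hypothesis and samples used."""
    lo = math.log(beta / (1.0 - alpha))      # accept H0 at or below this
    hi = math.log((1.0 - beta) / alpha)      # accept H1 at or above this
    llr = 0.0
    for n in range(1, max_samples + 1):
        if sample():
            llr += math.log(p1 / p0)
        else:
            llr += math.log((1.0 - p1) / (1.0 - p0))
        if llr <= lo:
            return "H0", n
        if llr >= hi:
            return "H1", n
    return "inconclusive", max_samples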
A lumped parameter model of the polymer electrolyte fuel cell
NASA Astrophysics Data System (ADS)
Chu, Keonyup; Ryu, Junghwan; Sunwoo, Myoungho
A model of a polymer electrolyte fuel cell (PEFC) is developed that captures dynamic behaviour for control purposes. The model is mathematically simple, but accounts for the essential phenomena that define PEFC performance. In particular, performance depends principally on humidity, temperature and gas pressure in the fuel cell system. To accurately simulate PEFC operation, the effects of water transport, hydration in the membrane, temperature, and mass transport in the fuel cell system are simultaneously coupled in the model. The PEFC model addresses three physically distinctive fuel cell components, namely the anode channel, the cathode channel, and the membrane electrode assembly (MEA). The laws of mass and energy conservation are applied to describe each physical component as a control volume. In addition, the MEA model includes a steady-state electrochemical model, which consists of membrane hydration and stack voltage models.
Incorporating Model Parameter Uncertainty into Prostate IMRT Treatment Planning
2005-04-01
Report documentation page (OCR-garbled); recoverable details: distribution unlimited; the views, opinions and/or findings contained in the report are those of the author(s); award DAMD17-03-1-0019; author David Y. Yang, Ph.D.; performing organization Stanford University, Stanford, California 94305-5401.
Can ligand addition to soil enhance Cd phytoextraction? A mechanistic model study.
Lin, Zhongbing; Schneider, André; Nguyen, Christophe; Sterckeman, Thibault
2014-11-01
Phytoextraction is a potential method for cleaning Cd-polluted soils. Ligand addition to soil is expected to enhance Cd phytoextraction. However, experimental results show that this addition has contradictory effects on plant Cd uptake. A mechanistic model simulating the reaction kinetics (adsorption on solid phase, complexation in solution), transport (convection, diffusion) and root absorption (symplastic, apoplastic) of Cd and its complexes in soil was developed. This was used to calculate plant Cd uptake with and without ligand addition in a great number of combinations of soil, ligand and plant characteristics, varying the parameters within defined domains. Ligand addition generally strongly reduced hydrated Cd (Cd(2+)) concentration in soil solution through Cd complexation. Dissociation of Cd complex ([Formula: see text]) could not compensate for this reduction, which greatly lowered Cd(2+) symplastic uptake by roots. The apoplastic uptake of [Formula: see text] was not sufficient to compensate for the decrease in symplastic uptake. This explained why in the majority of the cases, ligand addition resulted in the reduction of the simulated Cd phytoextraction. A few results showed an enhanced phytoextraction in very particular conditions (strong plant transpiration with high apoplastic Cd uptake capacity), but this enhancement was very limited, making chelant-enhanced phytoextraction poorly efficient for Cd.
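The central complexation step of such a mechanistic model can be illustrated with a 1:1 Cd-ligand equilibrium. A sketch, not the paper's full reaction-transport model: given total Cd, total ligand and the stability constant K with [CdL] = K[Cd][L], the free Cd2+ concentration follows from the two mass balances, solved here by bisection.

```python
def free_cd(cd_total, lig_total, k_stab, iters=200):
    """Free Cd2+ concentration after adding a 1:1 ligand L with
    stability constant k_stab; mol/L units, activity effects ignored."""
    def implied_cd_total(c):
        # At free Cd = c, the free ligand satisfies l = lig_total/(1 + K c),
        # so total Cd implied by c is [Cd] + [CdL] = c (1 + K l).
        l_free = lig_total / (1.0 + k_stab * c)
        return c * (1.0 + k_stab * l_free)
    lo, hi = 0.0, cd_total          # implied_cd_total is increasing in c
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if implied_cd_total(mid) < cd_total:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

The strong-ligand case shows the effect discussed above: an excess of a high-affinity ligand collapses the free Cd2+ pool available for symplastic root uptake.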
NASA Astrophysics Data System (ADS)
Sim, Minseob; Park, Hyunbin; Kim, Shiho
2015-11-01
We present both a model and a method for extracting the parasitic thermal conductance as well as the intrinsic device parameters of a thermoelectric module, based on information readily available in vendor datasheets. An equivalent circuit model that is compatible with circuit simulators is derived, followed by a methodology for extracting both intrinsic and parasitic model parameters. For the first time, the effective thermal resistance of the ceramic and copper interconnect layers of the thermoelectric module is extracted using only parameters listed in vendor datasheets. Under experimental conditions, including varying electric current, the parameters extracted from the model accurately reproduce the performance of commercial thermoelectric modules.
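The intrinsic-parameter part of such an extraction is commonly done with closed-form expressions in the datasheet maxima (Vmax, Imax, dTmax at hot-side temperature Th). A sketch of that commonly used ideal-module extraction, which ignores the parasitic ceramic/interconnect layers that the paper additionally identifies; the sample numbers below resemble a generic 127-couple module, not a specific device.

```python
def tec_intrinsic_params(v_max, i_max, dt_max, t_hot):
    """Ideal-module Seebeck coefficient S, electrical resistance R and
    thermal conductance K from datasheet maxima (temperatures in K)."""
    s = v_max / t_hot
    r = v_max * (t_hot - dt_max) / (i_max * t_hot)
    k = v_max * i_max * (t_hot - dt_max) / (2.0 * t_hot * dt_max)
    z = s * s / (r * k)   # figure of merit; equals 2*dt_max/(t_hot-dt_max)**2
    return s, r, k, z
```

Subtracting the parasitic thermal resistance, as done in the paper, then corrects these ideal values toward the physical pellet properties.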
Szerman, N; Gonzalez, C B; Sancho, A M; Grigioni, G; Carduza, F; Vaudagna, S R
2012-03-01
Beef muscles submitted to four enhancement treatments (1.88% whey protein concentrate (WPC)+1.25% sodium chloride (NaCl); 1.88% modified whey protein concentrate (MWPC)+1.25% NaCl; 0.25% sodium tripolyphosphate (STPP)+1.25% NaCl; 1.25% NaCl) and a control treatment (non-injected muscles) were sous vide cooked. Muscles with STPP+NaCl presented a significantly higher total yield (106.5%) in comparison to those with WPC/MWPC+NaCl (94.7% and 92.9%, respectively), NaCl alone (84.8%) or controls (72.1%). Muscles with STPP+NaCl presented significantly lower shear force values than control ones, while WPC/MWPC+NaCl-added muscles presented values similar to those from the other treatments. After cooking, muscles with STPP+NaCl or WPC/MWPC+NaCl showed compacted and uniform microstructures. Muscles with STPP+NaCl showed a pink colour, whereas muscles from the other treatments presented colours between pinkish-grey and grey-brown. STPP+NaCl-added samples presented the highest values of global tenderness and juiciness. The addition of STPP+NaCl performed better than WPC/MWPC+NaCl. However, the addition of WPC/MWPC+NaCl improved total yield in comparison to NaCl-added or control muscles.
Nonlocal order parameters for the 1D Hubbard model.
Montorsi, Arianna; Roncaglia, Marco
2012-12-07
We characterize the Mott-insulator and Luther-Emery phases of the 1D Hubbard model through correlators that measure the parity of spin and charge strings along the chain. These nonlocal quantities order in the corresponding gapped phases and vanish at the critical point U(c)=0, thus configuring as hidden order parameters. The Mott insulator consists of bound doublon-holon pairs, which in the Luther-Emery phase turn into electron pairs with opposite spins, both unbinding at U(c). The behavior of the parity correlators is captured by an effective free spinless fermion model.
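Up to normalization conventions (this is a sketch following the general string-correlator literature, not a verbatim transcription of the paper's definitions), the parity correlators have the form

```latex
C_P^{(\nu)}(r) \;=\; \Big\langle \prod_{j=1}^{r} e^{\,i\pi S_j^{(\nu)}} \Big\rangle ,
\qquad
S_j^{(c)} = n_{j\uparrow} + n_{j\downarrow} - 1 ,
\qquad
S_j^{(s)} = n_{j\uparrow} - n_{j\downarrow} ,
```

where a nonzero limit of $C_P^{(c)}$ (respectively $C_P^{(s)}$) as $r \to \infty$ signals the charge-gapped (spin-gapped) phase, and both vanish at $U_c = 0$.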
Bayesian Estimation in the One-Parameter Latent Trait Model.
1980-03-01
Report front matter (OCR-garbled); recoverable details: University of Massachusetts Amherst, Laboratory of Psychometric and Evaluative Research; authors Hariharan Swaminathan and Janice A. Gifford; keywords: latent trait theory, Bayesian estimation. The abstract is truncated after "When several".
Estimation of kinetic model parameters in fluorescence optical diffusion tomography.
Milstein, Adam B; Webb, Kevin J; Bouman, Charles A
2005-07-01
We present a technique for reconstructing the spatially dependent dynamics of a fluorescent contrast agent in turbid media. The dynamic behavior is described by linear and nonlinear parameters of a compartmental model or some other model with a deterministic functional form. The method extends our previous work in fluorescence optical diffusion tomography by parametrically reconstructing the time-dependent fluorescent yield. The reconstruction uses a Bayesian framework and parametric iterative coordinate descent optimization, which is closely related to Gauss-Seidel methods. We demonstrate the method with a simulation study.
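The iterative coordinate descent (ICD) optimization mentioned above, with its Gauss-Seidel flavour, is easiest to see on a plain least-squares problem. A generic sketch (not the paper's parametric tomographic cost function): each update exactly minimises the cost along one coordinate while keeping a running residual.

```python
def icd_least_squares(a, b, n_sweeps=200):
    """Minimise ||A x - b||^2 by cyclic exact coordinate descent.
    `a` is a list of rows, `b` a list; returns the solution vector."""
    m, n = len(a), len(a[0])
    x = [0.0] * n
    r = list(b)                               # running residual b - A x
    for _ in range(n_sweeps):
        for j in range(n):
            col = [a[i][j] for i in range(m)]
            # Exact 1D minimiser along coordinate j given the residual.
            step = sum(c * ri for c, ri in zip(col, r)) / sum(c * c for c in col)
            x[j] += step
            for i in range(m):
                r[i] -= step * col[i]
    return x
```

In the tomographic setting each "coordinate" is a voxel's kinetic parameter and the residual update is what keeps per-update cost low.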
Systematic parameter estimation for PEM fuel cell models
NASA Astrophysics Data System (ADS)
Carnes, Brian; Djilali, Ned
The problem of parameter estimation is considered for the case of mathematical models for polymer electrolyte membrane fuel cells (PEMFCs). An algorithm for nonlinear least squares constrained by partial differential equations is defined and applied to estimate effective membrane conductivity, exchange current densities and oxygen diffusion coefficients in a one-dimensional PEMFC model for transport in the principal direction of current flow. Experimental polarization curves are fitted for conventional and low current density PEMFCs. Use of adaptive mesh refinement is demonstrated to increase the computational efficiency.
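Fitting a polarization curve can be illustrated with a simplified empirical model in place of the paper's PDE-constrained formulation. A hedged sketch: with V = E0 - b*ln(i) - R*i (activation plus ohmic losses), the parameters are linear in the unknowns and recoverable by ordinary least squares; b relates to the exchange current density and R to membrane conductivity, but the mapping to the paper's physical parameters is only schematic.

```python
import numpy as np

def fit_polarization(currents, voltages):
    """Least-squares fit of V = E0 - b*ln(i) - R*i to polarization data."""
    i = np.asarray(currents, dtype=float)
    v = np.asarray(voltages, dtype=float)
    design = np.column_stack([np.ones_like(i), -np.log(i), -i])
    (e0, b, r), *_ = np.linalg.lstsq(design, v, rcond=None)
    return e0, b, r
```

The PDE-constrained version replaces this algebraic model with the transport equations, but the outer nonlinear least-squares structure is the same.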
Parameter and uncertainty estimation for mechanistic, spatially explicit epidemiological models
NASA Astrophysics Data System (ADS)
Finger, Flavio; Schaefli, Bettina; Bertuzzo, Enrico; Mari, Lorenzo; Rinaldo, Andrea
2014-05-01
Epidemiological models can be a crucially important tool for decision-making during disease outbreaks. The range of possible applications spans from real-time forecasting and allocation of health-care resources to testing alternative intervention mechanisms such as vaccines, antibiotics or the improvement of sanitary conditions. Our spatially explicit, mechanistic models for cholera epidemics have been successfully applied to several epidemics, including the one that struck Haiti in late 2010 and is still ongoing. Calibration and parameter estimation of such models represent a major challenge because of properties unusual in traditional geoscientific domains such as hydrology. Firstly, the epidemiological data available might be subject to high uncertainties due to error-prone diagnosis as well as manual (and possibly incomplete) data collection. Secondly, long-term time-series of epidemiological data are often unavailable. Finally, the spatially explicit character of the models requires the comparison of several time-series of model outputs with their real-world counterparts, which calls for an appropriate weighting scheme. It follows that the usual assumption of a homoscedastic Gaussian error distribution, used in combination with classical calibration techniques based on Markov chain Monte Carlo algorithms, is likely to be violated, whereas the construction of an appropriate formal likelihood function seems close to impossible. Alternative calibration methods, which allow for accurate estimation of total model uncertainty, particularly regarding the envisaged use of the models for decision-making, are thus needed. Here we present the most recent developments regarding methods for parameter and uncertainty estimation to be used with our mechanistic, spatially explicit models for cholera epidemics, based on informal measures of goodness of fit.
Zhang, Xiaoying; Liu, Chongxuan; Hu, Bill X.; Hu, Qinhong
2015-09-28
This study statistically analyzed a grain-size based additivity model that has been proposed to scale reaction rates and parameters from laboratory to field. The additivity model assumed that reaction properties in a sediment, including surface area, reactive site concentration, reaction rate, and extent, can be predicted from the field-scale grain size distribution by linearly adding the reaction properties of the individual grain size fractions. This study focused on the statistical analysis of the additivity model with respect to reaction rate constants, using multi-rate uranyl (U(VI)) surface complexation reactions in a contaminated sediment as an example. Experimental data of rate-limited U(VI) desorption in a stirred flow-cell reactor were used to estimate the statistical properties of the multi-rate parameters for individual grain size fractions. The statistical properties of the rate constants for the individual grain size fractions were then used to analyze the statistical properties of the additivity model to predict rate-limited U(VI) desorption in the composite sediment, and to evaluate the relative importance of individual grain size fractions to the overall U(VI) desorption. The results indicated that the additivity model provided a good prediction of the U(VI) desorption in the composite sediment. However, the rate constants were not directly scalable using the additivity model, and U(VI) desorption in the individual grain size fractions has to be simulated in order to apply the additivity model. An approximate additivity model for directly scaling rate constants was subsequently proposed and evaluated. The results showed that the approximate model provided a good prediction of the experimental results within statistical uncertainty. This study also found that a gravel size fraction (2-8 mm), which is often ignored in modeling U(VI) sorption and desorption, is statistically significant to the U(VI) desorption in the sediment.
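The additivity idea itself is a mass-weighted sum over grain-size fractions. A deliberately simplified sketch, using single first-order desorption per fraction instead of the paper's multi-rate surface complexation model (all numbers illustrative):

```python
import math

def composite_desorption(t, fractions):
    """Additivity model: U(VI) desorbed from the composite sediment by
    time t is the mass-weighted sum over grain-size classes, each
    given here as (mass_fraction, sorbed_conc, rate_const)."""
    total = 0.0
    for f_mass, s0, k in fractions:
        total += f_mass * s0 * (1.0 - math.exp(-k * t))
    return total
```

The paper's point that rate constants are not directly scalable corresponds to the fact that this sum of exponentials is not itself a single exponential with an averaged k.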
Accelerated gravitational wave parameter estimation with reduced order modeling.
Canizares, Priscilla; Field, Scott E; Gair, Jonathan; Raymond, Vivien; Smith, Rory; Tiglio, Manuel
2015-02-20
Inferring the astrophysical parameters of coalescing compact binaries is a key science goal of the upcoming advanced LIGO-Virgo gravitational-wave detector network and, more generally, gravitational-wave astronomy. However, current approaches to parameter estimation for these detectors require computationally expensive algorithms. Therefore, there is a pressing need for new, fast, and accurate Bayesian inference techniques. In this Letter, we demonstrate that a reduced order modeling approach enables rapid parameter estimation to be performed. By implementing a reduced order quadrature scheme within the LIGO Algorithm Library, we show that Bayesian inference on the 9-dimensional parameter space of nonspinning binary neutron star inspirals can be sped up by a factor of ∼30 for the early advanced detectors' configurations (with sensitivities down to around 40 Hz) and ∼70 for sensitivities down to around 20 Hz. This speedup will increase to about 150 as the detectors improve their low-frequency limit to 10 Hz, reducing to hours analyses which could otherwise take months to complete. Although these results focus on interferometric gravitational wave detectors, the techniques are broadly applicable to any experiment where fast Bayesian analysis is desirable.
Impact of GNSS Orbit Modeling on Reference Frame Parameters
NASA Astrophysics Data System (ADS)
Arnold, Daniel; Meindl, Michael; Lutz, Simon; Steigenberger, Peter; Beutler, Gerhard; Dach, Rolf; Schaer, Stefan; Prange, Lars; Sosnica, Krzysztof; Jäggi, Adrian
2015-04-01
The Center for Orbit Determination in Europe (CODE) contributes with a re-processing solution covering the years 1994 to 2013 (IGS repro2 effort) to the next ITRF release. The measurements to the GLONASS satellites are included since January 2002 in a rigorously combined solution. Around the year 2008 the network of combined GPS/GLONASS tracking stations became truly global. Since December 2011, 24 GLONASS satellites are active in their nominal positions. Since then the re-processing series shows - as the CODE operational solution - spurious signals in geophysical parameters, in particular in the Earth Rotation Parameters (ERPs) and in the estimated geocenter coordinates. These signals grew creepingly with the increasing influence of GLONASS. The problems could be attributed to deficiencies of the Empirical CODE Orbit Model (ECOM) for the GLONASS satellites. Based on the GPS-only, GLONASS-only, and combined GPS/GLONASS observations of recent years we study the impact of different orbit parameterizations on geodynamically relevant parameters, namely on ERPs, geocenter coordinates, and station coordinates. We also assess the quality of the GNSS orbits by measuring the orbit misclosures at the day boundaries and by validating the orbits using satellite laser ranging observations. We present an updated ECOM, which substantially reduces spurious signals in the estimated parameters in 1-day and in 3-day solutions.
ERIC Educational Resources Information Center
Samejima, Fumiko
Item analysis data fitting the normal ogive model were simulated in order to investigate the problems encountered when applying the three-parameter logistic model. Binary item tests containing 10 and 35 items were created, and Monte Carlo methods simulated the responses of 2,000 and 500 examinees. Item parameters were obtained using Logist 5.…
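The three-parameter logistic (3PL) model under investigation has a standard closed form. A sketch of the item response function, with the conventional scaling constant D = 1.7 that brings the logistic close to the normal ogive (the parameter values in the check are illustrative):

```python
import math

def p_3pl(theta, a, b, c, d=1.7):
    """3PL item response function: probability that an examinee of
    ability theta answers correctly an item with discrimination a,
    difficulty b and guessing (lower asymptote) c."""
    return c + (1.0 - c) / (1.0 + math.exp(-d * a * (theta - b)))
```

At theta = b the probability is exactly midway between the guessing floor c and 1, i.e. c + (1 - c)/2, which is a convenient sanity check when calibrating items.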
NASA Astrophysics Data System (ADS)
Reimer, Joscha; Piwonski, Jaroslaw; Slawig, Thomas
2016-04-01
The statistical significance of any model-data comparison strongly depends on the quality of the used data and the criterion used to measure the model-to-data misfit. The statistical properties (such as mean values, variances and covariances) of the data should be taken into account by choosing a criterion as, e.g., ordinary, weighted or generalized least squares. Moreover, the criterion can be restricted onto regions or model quantities which are of special interest. This choice influences the quality of the model output (also for not measured quantities) and the results of a parameter estimation or optimization process. We have estimated the parameters of a three-dimensional and time-dependent marine biogeochemical model describing the phosphorus cycle in the ocean. For this purpose, we have developed a statistical model for measurements of phosphate and dissolved organic phosphorus. This statistical model includes variances and correlations varying with time and location of the measurements. We compared the obtained estimations of model output and parameters for different criteria. Another question is whether (and which) further measurements would increase the model's quality at all. Using experimental design criteria, the information content of measurements can be quantified. This may refer to the uncertainty in unknown model parameters as well as the uncertainty regarding which model is closer to reality. By (another) optimization, optimal measurement properties such as locations, time instants and quantities to be measured can be identified. We have optimized such properties for additional measurement for the parameter estimation of the marine biogeochemical model. For this purpose, we have quantified the uncertainty in the optimal model parameters and the model output itself regarding the uncertainty in the measurement data using the (Fisher) information matrix. Furthermore, we have calculated the uncertainty reduction by additional measurements depending on time
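The Fisher-information machinery used above can be sketched in its linearised form. A generic sketch (the model Jacobian and noise covariance here are placeholders, not quantities from the biogeochemical model): F = J^T C^-1 J, and appending rows to J, i.e. adding measurements, can only increase F and hence shrink the parameter covariance F^-1.

```python
import numpy as np

def parameter_covariance(jacobian, noise_cov):
    """Linearised parameter covariance from the Fisher information
    matrix F = J^T C^-1 J, where J holds the sensitivities of each
    measurement to each parameter and C the measurement noise covariance."""
    j = np.asarray(jacobian, dtype=float)
    c_inv = np.linalg.inv(np.asarray(noise_cov, dtype=float))
    fisher = j.T @ c_inv @ j
    return np.linalg.inv(fisher)
```

Optimal experimental design then amounts to choosing the candidate rows (measurement locations, times, quantities) that minimise some scalar functional of this covariance, e.g. its trace or determinant.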
Modelling rock-avalanche induced impact waves: Sensitivity of the model chains to model parameters
NASA Astrophysics Data System (ADS)
Schaub, Yvonne; Huggel, Christian
2014-05-01
New lakes are forming in high-mountain areas all over the world due to glacier recession. Often they are located below steep, destabilized flanks and are therefore exposed to impacts from rock-/ice-avalanches. Several events are known worldwide where an outburst flood has been triggered by such an impact. In regions where valley bottoms are densely populated, such as the European Alps or the Cordillera Blanca in Peru, these far-travelling, high-magnitude events can result in major disasters. Usually natural hazards are assessed as single hazardous processes; for the above-mentioned reasons, however, the development of methods for assessing and reproducing the hazardous process chain for the purpose of hazard map generation has to be advanced. A combination of physical process models has already been suggested and illustrated by means of the lake outburst in the Cordillera Blanca, Peru, where on April 11th, 2010 an ice-avalanche of approx. 300'000 m3 triggered an impact wave that overtopped the 22 m freeboard of the rock dam by 5 meters and caused an outburst flood which travelled 23 km to the city of Carhuaz. We here present a study in which we assessed the sensitivity of the model chain (rock-/ice-avalanche to impact wave) to single parameters, considering rock-/ice-avalanche modeling by RAMMS and impact wave modeling by IBER. Assumptions on the initial rock-/ice-avalanche volume, calibration of the friction parameters in RAMMS, and assumptions on erosion considered in RAMMS were the parameters tested regarding their influence on the overtopping parameters that are crucial for outburst flood modeling. Further, the transformation of the RAMMS output (flow height and flow velocity at the shoreline of the lake) into an inflow hydrograph for IBER was also considered a possible source of uncertainty. Overtopping time, volume and wave height, as well as mean and maximum discharge, were considered decisive parameters for the outburst flood modeling and were therewith
Computational approaches to parameter estimation and model selection in immunology
NASA Astrophysics Data System (ADS)
Baker, C. T. H.; Bocharov, G. A.; Ford, J. M.; Lumb, P. M.; Norton, S. J.; Paul, C. A. H.; Junt, T.; Krebs, P.; Ludewig, B.
2005-12-01
One of the significant challenges in biomathematics (and other areas of science) is to formulate meaningful mathematical models. Our problem is to decide on a parametrized model which is, in some sense, most likely to represent the information in a set of observed data. In this paper, we illustrate the computational implementation of an information-theoretic approach (associated with a maximum likelihood treatment) to modelling in immunology. The approach is illustrated by modelling LCMV infection using a family of models based on systems of ordinary differential and delay differential equations. The models (which use parameters that have a scientific interpretation) are chosen to fit data arising from experimental studies of virus-cytotoxic T lymphocyte kinetics; the parametrized models that result are arranged in a hierarchy by the computation of Akaike indices. The practical illustration is used to convey more general insight. Because the mathematical equations that comprise the models are solved numerically, the accuracy of the computation has a bearing on the outcome, and we address this and other practical details in our discussion.
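The ranking of parametrized models by Akaike indices can be sketched as follows; the model names, log-likelihood values, and parameter counts are hypothetical, and the Akaike-weight formulation is one standard choice rather than necessarily the one used by the authors.

```python
import math

def aic(log_likelihood, n_params):
    """Akaike information criterion: lower is better."""
    return 2 * n_params - 2 * log_likelihood

def rank_models(fits):
    """Rank candidate models by AIC and report Akaike weights.

    fits: dict name -> (maximized log-likelihood, number of parameters).
    """
    scores = {name: aic(ll, k) for name, (ll, k) in fits.items()}
    best = min(scores.values())
    # Akaike weights: relative plausibility of each model given the data
    rel = {name: math.exp(-0.5 * (s - best)) for name, s in scores.items()}
    total = sum(rel.values())
    weights = {name: r / total for name, r in rel.items()}
    ranking = sorted(scores, key=scores.get)
    return ranking, scores, weights

# Hypothetical fits: an ODE model vs. a DDE model with one extra parameter
ranking, scores, weights = rank_models({
    "ODE": (-120.4, 5),
    "DDE": (-115.1, 6),
})
print(ranking[0])  # name of the model with the lowest AIC
```

The extra parameter of the DDE model is penalized by +2 in AIC, but here its better fit more than compensates, so it heads the hierarchy.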
Campbell, D A; Chkrebtii, O
2013-12-01
Statistical inference for biochemical models often faces a variety of characteristic challenges. In this paper we examine state and parameter estimation for the JAK-STAT intracellular signalling mechanism, which exemplifies the implementation intricacies common in many biochemical inference problems. We introduce an extension to the Generalized Smoothing approach for estimating delay differential equation models, addressing selection of complexity parameters, choice of the basis system, and appropriate optimization strategies. Motivated by the JAK-STAT system, we further extend the generalized smoothing approach to consider a nonlinear observation process with additional unknown parameters, and highlight how the approach handles unobserved states and unevenly spaced observations. The methodology developed is generally applicable to problems of estimation for differential equation models with delays, unobserved states, nonlinear observation processes, and partially observed histories.
Estimating recharge rates with analytic element models and parameter estimation
Dripps, W.R.; Hunt, R.J.; Anderson, M.P.
2006-01-01
Quantifying the spatial and temporal distribution of recharge is usually a prerequisite for effective ground water flow modeling. In this study, an analytic element (AE) code (GFLOW) was used with a nonlinear parameter estimation code (UCODE) to quantify the spatial and temporal distribution of recharge using measured base flows as calibration targets. The ease and flexibility of AE model construction and evaluation make this approach well suited for recharge estimation. An AE flow model of an undeveloped watershed in northern Wisconsin was optimized to match median annual base flows at four stream gages for 1996 to 2000 to demonstrate the approach. Initial optimizations that assumed a constant distributed recharge rate provided good matches (within 5%) to most of the annual base flow estimates, but discrepancies of >12% at certain gages suggested that a single value of recharge for the entire watershed is inappropriate. Subsequent optimizations that allowed for spatially distributed recharge zones based on the distribution of vegetation types improved the fit and confirmed that vegetation can influence spatial recharge variability in this watershed. Temporally, the annual recharge values varied >2.5-fold between 1996 and 2000, a period during which there was an observed 1.7-fold difference in annual precipitation, underscoring the influence of nonclimatic factors on interannual recharge variability for regional flow modeling. The final recharge values compared favorably with more labor-intensive field measurements of recharge and with results from previous studies, supporting the utility of linked AE-parameter estimation codes for recharge estimation. Copyright © 2005 The Author(s).
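The linked AE-parameter-estimation idea can be illustrated with a toy least-squares recovery of zonal recharge rates from base flow targets; the zone areas, recharge values, and linear mixing model below are invented for illustration and stand in for the far richer GFLOW/UCODE coupling.

```python
import numpy as np

# Hypothetical contributing areas (km^2) of three vegetation-based recharge
# zones as seen by four stream gages; base flow at gage i is modeled as
# sum_j area[i, j] * recharge[j]
area = np.array([
    [12.0,  3.0,  1.0],
    [ 4.0, 10.0,  2.0],
    [ 2.0,  5.0,  9.0],
    [ 6.0,  6.0,  6.0],
])

# "True" zonal recharge rates (m/yr) used to synthesize base flow targets
true_recharge = np.array([0.25, 0.15, 0.30])
observed = area @ true_recharge

# Least-squares recovery of zonal recharge from the base flow targets,
# a bare-bones analogue of running UCODE around a GFLOW model
estimated, *_ = np.linalg.lstsq(area, observed, rcond=None)
print(np.round(estimated, 3))
```

With four gages and three zones the system is overdetermined, mirroring the study's use of more calibration targets than recharge zones.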
Parameter and Process Significance in Mechanistic Modeling of Cellulose Hydrolysis
NASA Astrophysics Data System (ADS)
Rotter, B.; Barry, A.; Gerhard, J.; Small, J.; Tahar, B.
2005-12-01
The rate of cellulose hydrolysis, and of associated microbial processes, is important in determining the stability of landfills and their potential impact on the environment, as well as associated time scales. To permit further exploration in this field, a process-based model of cellulose hydrolysis was developed. The model, which is relevant to both landfill and anaerobic digesters, includes a novel approach to biomass transfer between a cellulose-bound biofilm and biomass in the surrounding liquid. Model results highlight the significance of the bacterial colonization of cellulose particles by attachment through contact in solution. Simulations revealed that enhanced colonization, and therefore cellulose degradation, was associated with reduced cellulose particle size, higher biomass populations in solution, and increased cellulose-binding ability of the biomass. A sensitivity analysis of the system parameters revealed different sensitivities to model parameters for a typical landfill scenario versus that for an anaerobic digester. The results indicate that relative surface area of cellulose and proximity of hydrolyzing bacteria are key factors determining the cellulose degradation rate.
Anisotropic effects on constitutive model parameters of aluminum alloys
NASA Astrophysics Data System (ADS)
Brar, Nachhatter S.; Joshi, Vasant S.
2012-03-01
Simulation of low-velocity impact on structures or high-velocity penetration into armor materials relies heavily on constitutive material models. Model constants are determined from tension, compression, or torsion stress-strain data at low and high strain rates and at different temperatures. These model constants are required input to computer codes (LS-DYNA, DYNA3D, or SPH) to accurately simulate fragment impact on structural components made of high-strength 7075-T651 aluminum alloy. Johnson-Cook model constants determined for Al7075-T651 bar material failed to correctly simulate penetration into 1-inch-thick Al7075-T651 plates. When simulations go well beyond minor parameter tweaking and experimental results show drastically different behavior, it becomes important to determine constitutive parameters from the actual material used in the impact/penetration experiments. To investigate anisotropic effects on the yield/flow stress of this alloy, quasi-static and high-strain-rate tensile tests were performed on specimens fabricated in the longitudinal ("L"), transverse ("T"), and thickness ("TH") directions of the 1-inch-thick Al7075 plate. Flow stresses at strain rates of ~1/s and ~1100/s in the thickness and transverse directions are lower than in the longitudinal direction, while the flow stress in the bar was comparable to that in the longitudinal direction of the plate. Fracture strain data from notched tensile specimens fabricated in the L, T, and thickness directions of the 1-inch-thick plate are used to derive fracture constants.
Additional Developments in Atmosphere Revitalization Modeling and Simulation
NASA Technical Reports Server (NTRS)
Coker, Robert F.; Knox, James C.; Cummings, Ramona; Brooks, Thomas; Schunk, Richard G.; Gomez, Carlos
2013-01-01
NASA's Advanced Exploration Systems (AES) program is developing prototype systems, demonstrating key capabilities, and validating operational concepts for future human missions beyond Earth orbit. These forays beyond the confines of earth's gravity will place unprecedented demands on launch systems. They must launch the supplies needed to sustain a crew over longer periods for exploration missions beyond earth's moon. Thus all spacecraft systems, including those for the separation of metabolic carbon dioxide and water from a crewed vehicle, must be minimized with respect to mass, power, and volume. Emphasis is also placed on system robustness both to minimize replacement parts and ensure crew safety when a quick return to earth is not possible. Current efforts are focused on improving the current state-of-the-art systems utilizing fixed beds of sorbent pellets by evaluating structured sorbents, seeking more robust pelletized sorbents, and examining alternate bed configurations to improve system efficiency and reliability. These development efforts combine testing of sub-scale systems and multi-physics computer simulations to evaluate candidate approaches, select the best performing options, and optimize the configuration of the selected approach. This paper describes the continuing development of atmosphere revitalization models and simulations in support of the Atmosphere Revitalization Recovery and Environmental Monitoring (ARREM) project within the AES program.
Parameter estimation for models of ligninolytic and cellulolytic enzyme kinetics
Wang, Gangsheng; Post, Wilfred M; Mayes, Melanie; Frerichs, Joshua T; Jagadamma, Sindhu
2012-01-01
While soil enzymes have been explicitly included in soil organic carbon (SOC) decomposition models, there is a serious lack of suitable data for model parameterization. This study provides well-documented enzymatic parameters for application in enzyme-driven SOC decomposition models from a compilation and analysis of published measurements. In particular, we developed appropriate kinetic parameters for five typical ligninolytic and cellulolytic enzymes (β-glucosidase, cellobiohydrolase, endo-glucanase, peroxidase, and phenol oxidase). The kinetic parameters included the maximum specific enzyme activity (Vmax) and half-saturation constant (Km) in the Michaelis-Menten equation. The activation energy (Ea) and the pH optimum and sensitivity (pHopt and pHsen) were also analyzed. pHsen was estimated by fitting an exponential-quadratic function. The Vmax values, often presented in different units under various measurement conditions, were converted into the same units at a reference temperature (20 °C) and pHopt. Major conclusions are: (i) both Vmax and Km were log-normally distributed, with no significant difference in Vmax between enzymes originating from bacteria or fungi; (ii) no significant difference in Vmax was found between cellulases and ligninases, but there was a significant difference in Km between them; (iii) ligninases had higher Ea values and lower pHopt than cellulases; the average ratio of pHsen to pHopt ranged from 0.3 to 0.4 for the five enzymes, which means that an increase or decrease of 1.1-1.7 pH units from pHopt would reduce Vmax by 50%; (iv) our analysis indicated that Vmax values from laboratory measurements with purified enzymes were 1-2 orders of magnitude higher than those appropriate for use in SOC decomposition models under field conditions.
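A minimal sketch of the kinetic quantities discussed above, assuming a Gaussian-type form for the "exponential-quadratic" pH function (the paper's exact functional form may differ) and invented parameter values:

```python
import math

def mm_rate(S, vmax, km):
    """Michaelis-Menten rate v = Vmax * S / (Km + S)."""
    return vmax * S / (km + S)

def ph_factor(ph, ph_opt, ph_sen):
    """One plausible 'exponential-quadratic' pH modifier (an assumption):
    f = exp(-0.5 * ((pH - pHopt) / pHsen)**2), so f = 1 at pHopt."""
    return math.exp(-0.5 * ((ph - ph_opt) / ph_sen) ** 2)

# Hypothetical beta-glucosidase-like parameters at the 20 C reference
vmax, km = 100.0, 0.5          # e.g. umol g^-1 h^-1, mM (illustrative)
ph_opt, ph_sen = 5.0, 1.2

# Effective rate at substrate S = 2.0 mM and soil pH 6.4
v = mm_rate(2.0, vmax, km) * ph_factor(6.4, ph_opt, ph_sen)
# Half-saturation check: at S = Km the MM term is exactly Vmax / 2
print(round(mm_rate(km, vmax, km), 1))
```

With this form, Vmax is halved at a pH shift of pHsen * sqrt(2 ln 2) ≈ 1.18 * pHsen, which is consistent in magnitude with the 1.1-1.7 pH-unit shifts reported above.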
Input parameters for LEAP and analysis of the Model 22C data base
Stewart, L.; Goldstein, M.
1981-05-01
The input data for the Long-Term Energy Analysis Program (LEAP) employed by EIA for projections of long-term energy supply and demand in the US were studied and additional documentation provided. Particular emphasis has been placed on the LEAP Model 22C input data base, which was used in obtaining the output projections which appear in the 1978 Annual Report to Congress. Definitions, units, associated model parameters, and translation equations are given in detail. Many parameters were set to null values in Model 22C so as to turn off certain complexities in LEAP; these parameters are listed in Appendix B along with parameters having constant values across all activities. The values of the parameters for each activity are tabulated along with the source upon which each parameter is based - and appropriate comments provided, where available. The structure of the data base is briefly outlined and an attempt made to categorize the parameters according to the methods employed for estimating the numerical values. Due to incomplete documentation and/or lack of specific parameter definitions, few of the input values could be traced and uniquely interpreted using the information provided in the primary and secondary sources. Input parameter choices were noted which led to output projections which are somewhat suspect. Other data problems encountered are summarized. Some of the input data were corrected and a revised base case was constructed. The output projections for this revised case are compared with the Model 22C output for the year 2020, for the Transportation Sector. LEAP could be a very useful tool, especially so in the study of emerging technologies over long-time frames.
Modelling of some parameters from thermoelectric power plants
NASA Astrophysics Data System (ADS)
Popa, G. N.; Diniş, C. M.; Deaconu, S. I.; Maksay, Şt; Popa, I.
2016-02-01
This paper proposes new mathematical models for the main electrical parameters (active power P and reactive power Q of the power supplies) and technological parameters (mass flow rate of steam M from the boiler and dust emission E at the output of the precipitator) of a thermoelectric power plant using industrial plate-type electrostatic precipitators with three sections. The mathematical models were derived from experimental results taken from an industrial facility (the boiler and the three-section plate-type electrostatic precipitators) using the least squares method. The modelling used equations of degree 1, 2, and 3. Equations were determined for dust emission as a function of the active power of the power supplies and the mass flow rate of steam from the boiler, and also as a function of the reactive power of the power supplies and the mass flow rate of steam. These equations can be used to control the process in the electrostatic precipitators.
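The degree-1/2/3 least-squares fitting described above can be sketched with NumPy's polynomial tools; the operating points below are invented, not the plant measurements used in the paper:

```python
import numpy as np

# Hypothetical operating points: active power P of the precipitator power
# supplies (kW) and measured dust emission E (mg/Nm^3)
P = np.array([10., 20., 30., 40., 50., 60.])
E = np.array([95., 62., 45., 36., 30., 27.])

# Least-squares polynomial models of degree 1, 2, and 3, as in the paper
models = {deg: np.polyfit(P, E, deg) for deg in (1, 2, 3)}

def rss(coeffs):
    """Residual sum of squares of a fitted polynomial model."""
    return float(np.sum((np.polyval(coeffs, P) - E) ** 2))

# A higher-degree fit can only lower (or equal) the least-squares residual
for deg in (1, 2, 3):
    print(deg, round(rss(models[deg]), 2))
```

In practice the degree would be chosen by balancing residual reduction against overfitting of the control relation, not by residual alone.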
NASA Astrophysics Data System (ADS)
Rudy, Ashley C. A.; Lamoureux, Scott F.; Treitz, Paul; van Ewijk, Karin Y.
2016-07-01
To effectively assess and mitigate the risk of permafrost disturbance, disturbance-prone areas can be predicted through the application of susceptibility models. In this study we developed regional susceptibility models for permafrost disturbances using a field disturbance inventory to test the transferability of the model to a broader region in the Canadian High Arctic. Resulting maps of susceptibility were then used to explore the effect of terrain variables on the occurrence of disturbances within this region. To account for a large range of landscape characteristics, the model was calibrated using two locations: Sabine Peninsula, Melville Island, NU, and Fosheim Peninsula, Ellesmere Island, NU. Spatial patterns of disturbance were predicted with a generalized linear model (GLM) and a generalized additive model (GAM), each calibrated using disturbed and randomized undisturbed points from both sites and GIS-derived terrain predictor variables including slope, potential incoming solar radiation, wetness index, topographic position index, elevation, and distance to water. Each model was validated for the Sabine and Fosheim Peninsulas using independent data sets, while the transferability of the model to an independent site was assessed at Cape Bounty, Melville Island, NU. The regional GLM and GAM validated well for both calibration sites (Sabine and Fosheim), with areas under the receiver operating curve (AUROC) > 0.79. Both models were applied directly to Cape Bounty without calibration and validated equally well, with AUROCs of 0.76; however, each model predicted disturbed and undisturbed samples differently. Additionally, the sensitivity of the transferred model was assessed using data sets with different sample sizes. Results indicated that models based on larger sample sizes transferred more consistently and captured the variability within the terrain attributes in the respective study areas. Terrain attributes associated with the initiation of disturbances were
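A toy version of the GLM-plus-AUROC workflow, with a single synthetic terrain predictor and a hand-rolled logistic fit (the study used full GLM/GAM implementations with six predictors; everything below is illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic calibration data: one terrain predictor (e.g. slope), with
# disturbed sites (y = 1) tending to have higher values than undisturbed
x0 = rng.normal(0.0, 1.0, 200)   # undisturbed sites
x1 = rng.normal(1.5, 1.0, 200)   # disturbed sites
x = np.concatenate([x0, x1])
y = np.concatenate([np.zeros(200), np.ones(200)])

# Fit a logistic-regression GLM by plain gradient descent
w, b = 0.0, 0.0
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(w * x + b)))
    w -= 0.01 * np.mean((p - y) * x)
    b -= 0.01 * np.mean(p - y)

def auroc(scores, labels):
    """Area under the ROC curve via the rank-sum (Mann-Whitney) identity."""
    order = np.argsort(scores)
    ranks = np.empty_like(order, dtype=float)
    ranks[order] = np.arange(1, len(scores) + 1)
    n1 = labels.sum()
    n0 = len(labels) - n1
    return (ranks[labels == 1].sum() - n1 * (n1 + 1) / 2) / (n0 * n1)

score = auroc(w * x + b, y)
print(round(score, 2))
```

Validating on an independent site, as done at Cape Bounty, would simply mean computing `auroc` on predictor/label data the model never saw during fitting.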
NASA Astrophysics Data System (ADS)
Mattern, Jann Paul; Edwards, Christopher A.
2017-01-01
Parameter estimation is an important part of numerical modeling and often required when a coupled physical-biogeochemical ocean model is first deployed. However, 3-dimensional ocean model simulations are computationally expensive and models typically contain upwards of 10 parameters suitable for estimation. Hence, manual parameter tuning can be lengthy and cumbersome. Here, we present four easy to implement and flexible parameter estimation techniques and apply them to two 3-dimensional biogeochemical models of different complexities. Based on a Monte Carlo experiment, we first develop a cost function measuring the model-observation misfit based on multiple data types. The parameter estimation techniques are then applied and yield a substantial cost reduction over ∼ 100 simulations. Based on the outcome of multiple replicate experiments, they perform on average better than random, uninformed parameter search but performance declines when more than 40 parameters are estimated together. Our results emphasize the complex cost function structure for biogeochemical parameters and highlight dependencies between different parameters as well as different cost function formulations.
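A cost function aggregating misfit over multiple data types might look like the following sketch; the normalization by observation spread and the data values are assumptions for illustration, not the paper's exact formulation:

```python
import numpy as np

def misfit_cost(model, obs, weights):
    """Aggregate model-observation misfit over several data types.

    Each data type is normalized by its observation spread so that, e.g.,
    chlorophyll and nitrate contribute on comparable scales (an assumed,
    illustrative normalization).
    """
    total = 0.0
    for key in obs:
        scale = np.std(obs[key]) or 1.0
        total += weights[key] * np.mean(((model[key] - obs[key]) / scale) ** 2)
    return total / sum(weights.values())

# Invented observations for two data types
obs = {"chlorophyll": np.array([0.5, 1.2, 2.0]),
       "nitrate": np.array([8.0, 5.5, 3.0])}
perfect = {k: v.copy() for k, v in obs.items()}
biased = {"chlorophyll": obs["chlorophyll"] * 1.5,   # 50% overestimate
          "nitrate": obs["nitrate"] + 2.0}           # constant bias
w = {"chlorophyll": 1.0, "nitrate": 1.0}

print(misfit_cost(perfect, obs, w), misfit_cost(biased, obs, w) > 0)
```

A parameter estimation technique would then propose new parameter vectors, run the (expensive) model, and keep whichever vector lowers this scalar cost.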
Variational estimation of process parameters in a simplified atmospheric general circulation model
NASA Astrophysics Data System (ADS)
Lv, Guokun; Koehl, Armin; Stammer, Detlef
2016-04-01
Parameterizations are used to simulate the effects of unresolved sub-grid-scale processes in current state-of-the-art climate models. The values of the process parameters, which determine the model's climatology, are usually adjusted manually to reduce the difference between the model mean state and the observed climatology. This process requires detailed knowledge of the model and its parameterizations. In this work, a variational method was used to estimate process parameters in the Planet Simulator (PlaSim). The adjoint code was generated using automatic differentiation of the source code. Some hydrological processes were switched off to remove the influence of zero-order discontinuities. In addition, the nonlinearity of the model limits the feasible assimilation window to about 1 day, which is too short to tune the model's climatology. To extend the feasible assimilation window, nudging terms for all state variables were added to the model's equations, which essentially suppress all unstable directions. In identical twin experiments, we found that the feasible assimilation window could be extended to over 1 year and accurate parameters could be retrieved. Although the nudging terms translate into a damping of the adjoint variables and therefore tend to erase the information of the data over time, assimilating climatological information is shown to provide sufficient information on the parameters. Moreover, the mechanism of this regularization is discussed.
On the parameters of absorbing layers for shallow water models
NASA Astrophysics Data System (ADS)
Modave, Axel; Deleersnijder, Éric; Delhez, Éric J. M.
2010-02-01
Absorbing/sponge layers used as boundary conditions for ocean/marine models are examined in the context of the shallow water equations with the aim to minimize the reflection of outgoing waves at the boundary of the computational domain. The optimization of the absorption coefficient is not an issue in continuous models, for the reflection coefficient of outgoing waves can then be made as small as we please by increasing the absorption coefficient. The optimization of the parameters of absorbing layers is therefore a purely discrete problem. A balance must be found between the efficient damping of outgoing waves and the limited spatial resolution with which the resulting spatial gradients must be described. Using a one-dimensional model as a test case, the performances of various spatial distributions of the absorption coefficient are compared. Two shifted hyperbolic distributions of the absorption coefficient are derived from theoretical considerations for pure propagative and pure advective problems. These distributions show good performance. Their free parameter has a well-defined interpretation and can therefore be determined on a physical basis. The properties of the two shifted hyperbolas are illustrated using the classical two-dimensional problems of the collapse of a Gaussian-shaped mound of water and of its advection by a mean current. The good behavior of the resulting boundary scheme remains when a full non-linear dynamics is taken into account.
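A one-dimensional sketch of an absorbing layer with a shifted-hyperbola absorption coefficient; the specific hyperbolic form, layer width, and shift below are illustrative choices, not the paper's calibrated profiles:

```python
import numpy as np

# 1D advection u_t + c u_x = -sigma(x) u with an absorbing layer on the right.
# sigma follows a shifted hyperbola: it grows like c / (distance to the outer
# edge + shift), one plausible member of the family discussed in the paper.
c, L, n = 1.0, 1.0, 400
dx = L / n
dt = 0.5 * dx / c                 # CFL number 0.5 for the upwind scheme
x = np.linspace(0.0, L, n)

layer_start, shift = 0.7, 0.05
sigma = np.where(x > layer_start, c / (L - x + shift), 0.0)

u = np.exp(-((x - 0.2) / 0.05) ** 2)   # Gaussian pulse travelling rightward
incoming = u.max()
for _ in range(int(1.2 * L / (c * dt))):
    # First-order upwind advection plus local damping in the sponge layer
    u[1:] = u[1:] - c * dt / dx * (u[1:] - u[:-1]) - dt * sigma[1:] * u[1:]
    u[0] = 0.0                          # quiescent inflow boundary
print(u.max() / incoming)  # residual amplitude after crossing the layer
```

The discrete trade-off described above appears here directly: a larger `shift` keeps sigma smooth but damps less, while a smaller `shift` concentrates very steep gradients into a few grid cells.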
Anisotropic Effects on Constitutive Model Parameters of Aluminum Alloys
NASA Astrophysics Data System (ADS)
Brar, Nachhatter; Joshi, Vasant
2011-06-01
Simulation of low-velocity impact on structures or high-velocity penetration into armor materials relies heavily on constitutive material models. The model constants are required input to computer codes (LS-DYNA, DYNA3D, or SPH) to accurately simulate fragment impact on structural components made of high-strength 7075-T651 aluminum alloys. Johnson-Cook model constants determined for Al7075-T651 bar material failed to correctly simulate penetration into 1-inch-thick Al7075-T651 plates. When simulations go well beyond minor parameter tweaking and experimental results are drastically different, it is important to determine constitutive parameters from the actual material used in the impact/penetration experiments. To investigate anisotropic effects on the yield/flow stress of this alloy, we performed quasi-static and high-strain-rate tensile tests on specimens fabricated in the longitudinal, transverse, and thickness directions of the 1-inch-thick Al7075-T651 plate. Flow stresses at a strain rate of ~1100/s in the longitudinal and transverse directions are similar, around 670 MPa, and decrease to 620 MPa in the thickness direction. These values are lower than the flow stress of 760 MPa measured in Al7075-T651 bar stock.
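The Johnson-Cook flow stress used in such simulations has the standard form sigma = (A + B eps^n)(1 + C ln(eps_dot/eps_dot_ref))(1 - T*^m). The constants below are illustrative values of roughly the right magnitude for Al 7075-T651, not the fitted constants from these tests:

```python
import math

def johnson_cook_stress(strain, strain_rate, T, A, B, n, C, m,
                        T_room=293.0, T_melt=893.0, rate_ref=1.0):
    """Johnson-Cook flow stress:
    sigma = (A + B*eps^n) * (1 + C*ln(rate/rate_ref)) * (1 - T*^m),
    with homologous temperature T* = (T - T_room) / (T_melt - T_room)."""
    t_star = (T - T_room) / (T_melt - T_room)
    return (A + B * strain ** n) * (1.0 + C * math.log(strain_rate / rate_ref)) \
           * (1.0 - t_star ** m)

# Illustrative (not fitted) constants: A, B in MPa; n, C, m dimensionless
A, B, n, C, m = 520.0, 477.0, 0.52, 0.001, 1.0

# Rate sensitivity: stress at 1100/s vs the quasi-static reference at 1/s
quasi = johnson_cook_stress(0.05, 1.0, 293.0, A, B, n, C, m)
dynamic = johnson_cook_stress(0.05, 1100.0, 293.0, A, B, n, C, m)
print(round(quasi, 1), round(dynamic, 1))
```

Note that the base model is isotropic: a single constant set cannot reproduce direction-dependent flow stresses, which is why plate-direction-specific constants matter in the study above.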
Ely, D. Matthew
2006-01-01
routing parameter. Although the primary objective of this study was to identify, by geographic region, the importance of the parameter value to the simulation of ground-water recharge, the secondary objectives proved valuable for future modeling efforts. The value of a rigorous sensitivity analysis can (1) make the calibration process more efficient, (2) guide additional data collection, (3) identify model limitations, and (4) explain simulated results.
Alderman, Phillip D.; Stanfill, Bryan
2016-10-06
Recent international efforts have brought renewed emphasis on the comparison of different agricultural systems models. Thus far, analysis of model-ensemble simulated results has not clearly differentiated between ensemble prediction uncertainties due to model structural differences per se and those due to parameter value uncertainties. Additionally, despite increasing use of Bayesian parameter estimation approaches with field-scale crop models, inadequate attention has been given to the full posterior distributions for estimated parameters. The objectives of this study were to quantify the impact of parameter value uncertainty on prediction uncertainty for modeling spring wheat phenology using Bayesian analysis and to assess the relative contributions of model-structure-driven and parameter-value-driven uncertainty to overall prediction uncertainty. This study used a random walk Metropolis algorithm to estimate parameters for 30 spring wheat genotypes using nine phenology models based on multi-location trial data for days to heading and days to maturity. Across all cases, parameter-driven uncertainty accounted for between 19 and 52% of predictive uncertainty, while model-structure-driven uncertainty accounted for between 12 and 64%. Here, this study demonstrated the importance of quantifying both model-structure- and parameter-value-driven uncertainty when assessing overall prediction uncertainty in modeling spring wheat phenology. More generally, Bayesian parameter estimation provided a useful framework for quantifying and analyzing sources of prediction uncertainty.
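A random walk Metropolis sampler of the kind used in the study can be sketched for a single hypothetical parameter, the mean "days to heading" of one genotype, with invented data and a deliberately simple likelihood:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "days to heading" observations for one genotype (hypothetical)
data = rng.normal(60.0, 3.0, 40)

def log_post(theta):
    """Log-posterior for the mean with a flat prior and known sigma = 3."""
    return -0.5 * np.sum((data - theta) ** 2) / 3.0 ** 2

# Random walk Metropolis: propose theta' = theta + N(0, step), accept with
# probability min(1, exp(log_post(theta') - log_post(theta)))
theta, step, chain = 50.0, 1.0, []
for _ in range(5000):
    prop = theta + step * rng.normal()
    if np.log(rng.uniform()) < log_post(prop) - log_post(theta):
        theta = prop
    chain.append(theta)

posterior = np.array(chain[1000:])   # discard burn-in
print(round(posterior.mean(), 1), round(posterior.std(), 2))
```

The retained chain approximates the full posterior distribution, so its spread (not just its mean) is available, which is exactly the information the study argues has received inadequate attention.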
Parameter optimization in differential geometry based solvation models
Wang, Bao; Wei, G. W.
2015-01-01
Differential geometry (DG) based solvation models are a new class of variational implicit solvent approaches that are able to avoid unphysical solvent-solute boundary definitions and associated geometric singularities, and to dynamically couple polar and non-polar interactions in a self-consistent framework. Our earlier study indicates that the DG based non-polar solvation model outperforms other methods in non-polar solvation energy predictions. However, the DG based full solvation model has not shown its superiority in solvation analysis, due to its difficulty of parametrization, which must ensure the stability of the solution of the strongly coupled nonlinear Laplace-Beltrami and Poisson-Boltzmann equations. In this work, we introduce new parameter learning algorithms based on perturbation and convex optimization theories to stabilize the numerical solution and thus achieve an optimal parametrization of the DG based solvation models. An interesting feature of the present DG based solvation model is that it provides accurate solvation free energy predictions for both polar and non-polar molecules in a unified formulation. Extensive numerical experiments demonstrate that the present DG based solvation model delivers some of the most accurate predictions of the solvation free energies for a large number of molecules. PMID:26450304
NASA Astrophysics Data System (ADS)
Tillman, Fred D.; Weaver, James W.
Migration of volatile chemicals from the subsurface into overlying buildings is known as vapor intrusion (VI). Under certain circumstances, people living in homes above contaminated soil or ground water may be exposed to harmful levels of these vapors. VI is a particularly difficult pathway to assess, as challenges exist in delineating subsurface contributions to measured indoor-air concentrations as well as in adequate characterization of subsurface parameters necessary to calibrate a predictive flow and transport model. Often, a screening-level model is employed to determine if a potential indoor inhalation exposure pathway exists and, if such a pathway is complete, whether long-term exposure increases the occupants' risk for cancer or other toxic effects to an unacceptable level. A popular screening-level algorithm currently in wide use in the United States, Canada and the UK for making such determinations is the "Johnson and Ettinger" (J&E) model. Concern exists over using the J&E model for deciding whether or not further action is necessary at sites as many parameters are not routinely measured (or are un-measurable). Many screening decisions are then made based on simulations using "best estimate" look-up parameter values. While research exists on the sensitivity of the J&E model to individual parameter uncertainty, little published information is available on the combined effects of multiple uncertain parameters and their effect on screening decisions. This paper presents results of multiple-parameter uncertainty analyses using the J&E model to evaluate risk to humans from VI. Software was developed to produce automated uncertainty analyses of the model. Results indicate an increase in predicted cancer risk from multiple-parameter uncertainty by nearly a factor of 10 compared with single-parameter uncertainty. Additionally, a positive skew in model response to variation of some parameters was noted for both single and multiple parameter uncertainty analyses
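The qualitative finding, that joint variation of several uncertain inputs inflates upper-percentile risk well beyond single-parameter variation, can be reproduced with a toy multiplicative screening model (not the actual J&E equations; the input distributions and spreads are assumed):

```python
import numpy as np

rng = np.random.default_rng(2)
N = 100_000

# A toy screening-style model: predicted indoor-exposure risk proportional
# to source concentration * subsurface attenuation / air exchange
def risk(conc, atten, exchange):
    return 1e-6 * conc * atten / exchange

nominal = dict(conc=1.0, atten=1.0, exchange=1.0)

# Lognormal uncertainty on each input (assumed spreads, for illustration)
conc = rng.lognormal(0.0, 0.5, N)
atten = rng.lognormal(0.0, 0.5, N)
exch = rng.lognormal(0.0, 0.5, N)

single = risk(conc, nominal["atten"], nominal["exchange"])   # vary one input
multi = risk(conc, atten, exch)                              # vary all three

# Upper-percentile risk grows when all uncertainties act together
print(np.percentile(single, 95) / risk(**nominal),
      np.percentile(multi, 95) / risk(**nominal))
```

For independent lognormal inputs the log-variances add, so the 95th-percentile multiplier grows with each additional uncertain parameter, mirroring the near order-of-magnitude increase in predicted cancer risk reported above.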
Important Scaling Parameters for Testing Model-Scale Helicopter Rotors
NASA Technical Reports Server (NTRS)
Singleton, Jeffrey D.; Yeager, William T., Jr.
1998-01-01
An investigation into the effects of aerodynamic and aeroelastic scaling parameters on model-scale helicopter rotors has been conducted in the NASA Langley Transonic Dynamics Tunnel. The effect of varying Reynolds number, blade Lock number, and structural elasticity on rotor performance has been studied, and the performance results are discussed herein for two different rotor blade sets at two rotor advance ratios. One set of rotor blades was rigid and the other set was dynamically scaled to be representative of a main rotor design for a utility-class helicopter. The investigation was conducted in a heavy-gas test medium, in which the range of attainable densities permits the acquisition of data for several Reynolds and Lock number combinations.
NASA Astrophysics Data System (ADS)
Erdal, D.; Neuweiler, I.; Huisman, J. A.
2012-06-01
Estimates of effective parameters for unsaturated flow models are typically based on observations taken on length scales smaller than the modeling scale. This complicates parameter estimation for heterogeneous soil structures. In this paper we attempt to account for soil structure not present in the flow model by using so-called external error models, which correct for bias in the likelihood function of a parameter estimation algorithm. The performance of external error models is investigated using data from three virtual reality experiments and one real-world experiment. All experiments are multistep outflow and inflow experiments in columns packed with two sand types with different structures. First, effective parameters for equivalent homogeneous models for the different columns were estimated using soil moisture measurements taken at a few locations. This resulted in parameters that had a low predictive power for the averaged states of the soil moisture if the measurements did not adequately capture a representative elementary volume of the heterogeneous soil column. Second, parameter estimation was performed using error models that attempted to correct for bias introduced by soil structure not taken into account in the first estimation. Three different error models that required different amounts of prior knowledge about the heterogeneous structure were considered. The results showed that the introduction of an error model can help to obtain effective parameters with more predictive power with respect to the average soil water content in the system. This was especially true when the dynamic behavior of the flow process was analyzed.
NASA Astrophysics Data System (ADS)
Thyer, Mark; Kavetski, Dmitri; Evin, Guillaume; Kuczera, George; Renard, Ben; McInerney, David
2015-04-01
All scientific and statistical analysis, particularly in the natural sciences, is based on approximations and assumptions. For example, the calibration of hydrological models using approaches such as Nash-Sutcliffe efficiency and/or simple least squares (SLS) objective functions may appear to be 'assumption-free'. However, this is a naïve point of view, as SLS assumes that the model residuals (residuals = observed - predictions) are independent, homoscedastic and Gaussian. If these assumptions are poor, parameter inference and model predictions will be correspondingly poor. An essential step in model development is therefore to verify the assumptions and approximations made in the modeling process. Diagnostics play a key role in verifying modeling assumptions. An important advantage of the formal Bayesian approach is that the modeler is required to make the assumptions explicit; specialized diagnostics can then be developed and applied to test and verify them. This paper presents a suite of statistical and modeling diagnostics that can be used by environmental modelers to test their calibration assumptions and diagnose model deficiencies. Three major types of diagnostics are presented. (1) Residual diagnostics test whether the assumptions of the residual error model within the likelihood function are compatible with the data, including tests for statistical independence, homoscedasticity, unbiasedness, Gaussianity and any distributional assumptions. (2) Parameter uncertainty and MCMC diagnostics: an important part of Bayesian analysis is assessing parameter uncertainty, and Markov chain Monte Carlo (MCMC) methods are a powerful numerical tool for estimating these uncertainties. Diagnostics based on posterior parameter distributions can be used to assess parameter identifiability, interactions and correlations, which provides a very useful tool for detecting and remedying model deficiencies. In addition, numerical diagnostics are
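The residual checks named above (independence, homoscedasticity, unbiasedness, Gaussianity) can be sketched as a few summary statistics on the residual series. This is a minimal illustration, not the paper's diagnostic suite; the synthetic data are deliberately constructed to violate homoscedasticity.

```python
import math
import random
import statistics

def pearson(x, y):
    """Sample Pearson correlation of two equal-length sequences."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = math.sqrt(sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y))
    return num / den

def residual_diagnostics(observed, predicted):
    """Quick checks of the SLS assumptions on residuals = observed - predicted."""
    res = [o - p for o, p in zip(observed, predicted)]
    lag1 = pearson(res[:-1], res[1:])                    # ~0 if independent
    hetero = pearson([abs(r) for r in res], predicted)   # ~0 if homoscedastic
    bias = statistics.fmean(res)                          # ~0 if unbiased
    s = statistics.stdev(res)
    skew = statistics.fmean([((r - bias) / s) ** 3 for r in res])  # ~0 if symmetric
    return dict(lag1=lag1, hetero=hetero, bias=bias, skew=skew)

# Synthetic example: residual spread grows with the prediction magnitude,
# so the homoscedasticity check should flag a clear violation.
random.seed(1)
pred = [float(i) for i in range(1, 201)]
obs = [p + random.gauss(0, 0.05 * p) for p in pred]  # heteroscedastic errors
d = residual_diagnostics(obs, pred)
print(d)
```

A formal analysis would replace these summary statistics with proper hypothesis tests (e.g., on autocorrelation and distributional shape), but the structure of the check is the same.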
Comparison of PDT parameters for RIF and H460 tumor models during HPPH-mediated PDT.
Liu, Baochang; Kim, Michele M; Gallagher-Colombo, Shannon M; Busch, Theresa M; Zhu, Timothy C
2014-03-05
Singlet oxygen ((1)O2) is the major cytotoxic species producing PDT effects, but it is difficult to monitor in vivo due to its short lifetime in real biological environments. Mathematical models are then useful to calculate (1)O2 concentrations for PDT dosimetry. Our previously introduced macroscopic model has four PDT parameters: ξ, σ, β and g, describing the initial oxygen consumption rate, the ratio of photobleaching to reaction between (1)O2 and cellular targets, the ratio of triplet state (T) phosphorescence to reaction between T and oxygen ((3)O2), and the oxygen supply rate to tissue, respectively. In addition, the model calculates a fifth parameter, the threshold (1)O2 dose ([(1)O2]rx,sd). These PDT parameters have been investigated for HPPH using radiation-induced fibrosarcoma (RIF) tumors in an in-vivo C3H mouse model. In recent studies, we additionally investigated these parameters in human non-small cell lung carcinoma (H460) tumor xenografts, also using HPPH-mediated PDT. In-vivo studies are performed in nude female mice with H460 tumors grown intradermally on their right shoulders. HPPH (0.25 mg/kg) is injected i.v. at 24 hours prior to light delivery. Initial in vivo HPPH concentration is quantified via interstitial HPPH fluorescence measurements after correction for tissue optical properties. Light is delivered by a linear source at various light doses (12-50 J/cm) with powers ranging from 12 to 150 mW per cm length. The necrosis radius is quantified using ScanScope after tumor sectioning and hematoxylin and eosin (H&E) staining. The macroscopic optimization model is used to fit the results and generate the four PDT parameters. Initial results of the parameters for H460 tumors will be reported and compared with those for the RIF tumor.
NASA Technical Reports Server (NTRS)
1979-01-01
The computer model for erythropoietic control was adapted to the mouse system by altering system parameters originally given for the human to those which more realistically represent the mouse. Parameter values were obtained from a variety of literature sources. Using the mouse model, the mouse was studied as a potential experimental model for spaceflight. Simulation studies of dehydration and hypoxia were performed. A comparison of system parameters for the mouse and human models is presented. Aside from the obvious differences expected in fluid volumes, blood flows and metabolic rates, larger differences were observed in the following: erythrocyte life span, erythropoietin half-life, and normal arterial pO2.
Ludwig, C; Grimmer, S; Seyfarth, A; Maus, H-M
2012-09-21
The spring-loaded inverted pendulum (SLIP) model is a well established model for describing bouncy gaits like human running. The notion of spring-like leg behavior has led many researchers to compute the corresponding parameters, predominantly stiffness, in various experimental setups and in various ways. However, different methods yield different results, making the comparison between studies difficult. Further, a model simulation with experimentally obtained leg parameters typically results in comparatively large differences between model and experimental center of mass trajectories. Here, we pursue the opposite approach: calculating model parameters that allow reproduction of an experimental sequence of steps. In addition, to capture energy fluctuations, an extension of the SLIP (ESLIP) is required and presented. The excellent match of the models with the experiment validates the description of human running by the SLIP with the obtained parameters, which we hence call dynamical leg parameters.
Parameter estimation and analysis model selections in fluorescence correlation spectroscopy
NASA Astrophysics Data System (ADS)
Dong, Shiqing; Zhou, Jie; Ding, Xuemei; Wang, Yuhua; Xie, Shusen; Yang, Hongqin
2016-10-01
Fluorescence correlation spectroscopy (FCS) is a powerful technique that provides high temporal resolution for detecting the diffusion of biomolecules at extremely low concentrations. The accuracy of this approach primarily depends on the experimental conditions and the data analysis model. In this study, we set up a confocal-based FCS system and used a Rhodamine 6G solution to calibrate the system and obtain the related parameters. An experimental measurement was carried out on a one-component solution to evaluate the relationship between the number of molecules and concentration. The results showed that the FCS system we built was stable and valid. Finally, a two-component solution experiment was carried out to show the importance of analysis model selection. FCS is a promising method for single-molecule diffusion studies in living cells.
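The model-fitting step can be sketched with the standard one-component 3D diffusion autocorrelation, G(tau) = (1/N) * (1 + tau/tauD)^-1 * (1 + tau/(k^2 tauD))^-1/2, fitted to a synthetic curve by grid search. The structure parameter k, the ground-truth values and the grid are illustrative assumptions, not the study's actual calibration; a real analysis would use nonlinear least squares on measured correlation data.

```python
import math

K = 5.0  # assumed axial/lateral structure parameter (from calibration)

def g_model(tau, n, tau_d):
    """One-component 3D diffusion autocorrelation model."""
    return (1.0 / n) / ((1 + tau / tau_d) * math.sqrt(1 + tau / (K * K * tau_d)))

# Synthetic "measured" curve from known ground truth (illustrative units):
# 4 molecules in the focal volume, diffusion time 0.05 ms.
true_n, true_td = 4.0, 0.05
taus = [10 ** (i / 10 - 3) for i in range(0, 50)]  # log-spaced lag times
data = [g_model(t, true_n, true_td) for t in taus]

# Coarse grid search over (N, tauD); noiseless data so the true pair wins.
best = min(
    ((n / 10, td / 1000) for n in range(10, 100) for td in range(10, 200, 2)),
    key=lambda p: sum((g_model(t, *p) - d) ** 2 for t, d in zip(taus, data)),
)
print(best)  # → (4.0, 0.05)
```

With noisy data or a two-component model the fit becomes ill-conditioned, which is exactly why the abstract stresses analysis-model selection.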
Batstone, D J; Torrijos, M; Ruiz, C; Schmidt, J E
2004-01-01
The model structure in anaerobic digestion has been clarified following publication of the IWA Anaerobic Digestion Model No. 1 (ADM1). However, parameter values are not well known, and the uncertainty and variability in the given parameter values are almost unknown. Additionally, platforms for identification of parameters, namely continuous-flow laboratory digesters and batch tests, suffer from disadvantages such as long run times and difficulty in defining initial conditions, respectively. Anaerobic sequencing batch reactors (ASBRs) are sequenced into fill-react-settle-decant phases, and offer promising possibilities for estimation of parameters, as they are, by nature, dynamic in behaviour, and allow repeatable behaviour to establish initial conditions and evaluate parameters. In this study, we estimated parameters describing winery wastewater (most COD as ethanol) degradation using data from sequencing operation, and validated these parameters using unsequenced pulses of ethanol and acetate. The model used was the ADM1, with an extension for ethanol degradation. Parameter confidence spaces were found by non-linear, correlated analysis of the two main Monod parameters: the maximum uptake rate (k(m)) and the half-saturation concentration (K(S)). These parameters could be estimated together using only the measured acetate concentration (20 points per cycle). From interpolating the single-cycle acetate data to multiple cycles, we estimate that a practical "optimal" identifiability could be achieved after two cycles for the acetate parameters, and three cycles for the ethanol parameters. The parameters found performed well in the short term, and represented the pulses of acetate and ethanol (within 4 days of the winery-fed cycles) very well. The main discrepancy was poor prediction of pH dynamics, which could be due to an unidentified buffer with an overall influence the same as a weak base (possibly CaCO3). Based on this work, ASBR systems are effective for parameter
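The joint estimation of the two Monod parameters can be sketched as follows: simulate substrate uptake dS/dt = -k_m * X * S / (K_S + S), then search the (k_m, K_S) plane for the pair that best matches a measured concentration series. All numbers, the constant biomass X, and the grid are illustrative assumptions, not the ADM1 values; a real analysis would use nonlinear optimisation with correlated confidence regions.

```python
def simulate(km, ks, s0=1.0, x=0.5, dt=0.01, steps=400):
    """Euler integration of Monod uptake dS/dt = -km * X * S / (KS + S)."""
    s, out = s0, []
    for _ in range(steps):
        s = max(s - dt * km * x * s / (ks + s), 0.0)
        out.append(s)
    return out

true = (2.0, 0.3)          # hypothetical ground-truth (km, KS)
data = simulate(*true)     # stands in for measured acetate concentrations

# Joint grid search over (km, KS); with noiseless data the true pair wins,
# but in practice many near-optimal pairs exist (the km-KS correlation).
best = min(
    ((km / 10, ks / 100) for km in range(5, 50) for ks in range(5, 100, 5)),
    key=lambda p: sum((m - d) ** 2 for m, d in zip(simulate(*p), data)),
)
print(best)  # → (2.0, 0.3)
```

Plotting the objective over the grid would show the elongated valley of correlated (k_m, K_S) pairs that motivates the confidence-space analysis in the abstract.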
A novel criterion for determination of material model parameters
NASA Astrophysics Data System (ADS)
Andrade-Campos, A.; de-Carvalho, R.; Valente, R. A. F.
2011-05-01
Parameter identification problems have emerged due to the increasing demand for precision in the numerical results obtained by Finite Element Method (FEM) software. High result precision can only be obtained with reliable input data and robust numerical techniques. The determination of parameters should always be performed by confronting numerical and experimental results, leading to the minimum difference between them. However, the success of this task depends on the specification of the cost/objective function, defined as the difference between the experimental and the numerical results. Recently, various objective functions have been formulated to assess the errors between the experimental and computed data (Lin et al., 2002; Cao and Lin, 2008; among others). The objective functions should be able to efficiently lead the optimisation process. An ideal objective function should have the following properties: (i) all the experimental data points on the curve and all experimental curves should have equal opportunity to be optimised; and (ii) different units and/or the number of curves in each sub-objective should not affect the overall performance of the fitting. These two criteria should be achieved without manually choosing the weighting factors. However, for some non-analytical specific problems, this is very difficult in practice. Null experimental or numerical values also make the task difficult. In this work, a novel objective function for constitutive model parameter identification is presented. It is a generalization of the work of Cao and Lin, and it is suitable for all kinds of constitutive models and mechanical tests, including cyclic tests and Bauschinger tests with null values.
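The two criteria above can be illustrated with a dimensionless multi-curve objective: each residual is normalised by the curve's own scale and each curve by its number of points, so curves in different units contribute comparably without manual weights. This is a sketch in the spirit of such objectives, not the authors' exact formulation; the guard against a zero scale is one simple way to tolerate null-valued data.

```python
import math

def objective(curves):
    """curves: list of (experimental, simulated) point-list pairs.
    Each curve is normalised by its RMS experimental value and its
    point count, so units and curve lengths need no manual weights."""
    total = 0.0
    for exp, sim in curves:
        scale = math.sqrt(sum(e * e for e in exp) / len(exp)) or 1.0  # guard: null data
        total += sum(((s - e) / scale) ** 2 for e, s in zip(exp, sim)) / len(exp)
    return total / len(curves)

# Two "tests" in very different units (the strain curve is the stress
# curve scaled by 1e-5) contribute identically to the objective:
stress = ([200.0, 300.0, 350.0], [210.0, 290.0, 360.0])     # MPa
strain = ([0.002, 0.003, 0.0035], [0.0021, 0.0029, 0.0036])  # dimensionless
print(objective([stress]), objective([strain]))
```

Because the normalisation is per curve, rescaling a curve's units leaves its sub-objective unchanged, which is property (ii) in the abstract.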
Sampling schemes and parameter estimation for nonlinear Bernoulli-Gaussian sparse models
NASA Astrophysics Data System (ADS)
Boudineau, Mégane; Carfantan, Hervé; Bourguignon, Sébastien; Bazot, Michael
2016-06-01
We address the sparse approximation problem in the case where the data are approximated by the linear combination of a small number of elementary signals, each of these signals depending non-linearly on additional parameters. Sparsity is explicitly expressed through a Bernoulli-Gaussian hierarchical model in a Bayesian framework. Posterior mean estimates are computed using Markov Chain Monte-Carlo algorithms. We generalize the partially marginalized Gibbs sampler proposed in the linear case in [1], and build a hybrid Hastings-within-Gibbs algorithm in order to account for the nonlinear parameters. All model parameters are then estimated in an unsupervised procedure. The resulting method is evaluated on a sparse spectral analysis problem. It is shown to converge more efficiently than the classical joint estimation procedure, with only a slight increase of the computational cost per iteration, consequently reducing the global cost of the estimation procedure.
Iglesias, Juan Eugenio; Sabuncu, Mert Rory; Van Leemput, Koen
2012-01-01
Many successful segmentation algorithms are based on Bayesian models in which prior anatomical knowledge is combined with the available image information. However, these methods typically have many free parameters that are estimated to obtain point estimates only, whereas a faithful Bayesian analysis would also consider all possible alternate values these parameters may take. In this paper, we propose to incorporate the uncertainty of the free parameters in Bayesian segmentation models more accurately by using Monte Carlo sampling. We demonstrate our technique by sampling atlas warps in a recent method for hippocampal subfield segmentation, and show a significant improvement in an Alzheimer's disease classification task. As an additional benefit, the method also yields informative "error bars" on the segmentation results for each of the individual sub-structures.
Estimating input parameters from intracellular recordings in the Feller neuronal model
NASA Astrophysics Data System (ADS)
Bibbona, Enrico; Lansky, Petr; Sirovich, Roberta
2010-03-01
We study the estimation of the input parameters in a Feller neuronal model from a trajectory of the membrane potential sampled at discrete times. These input parameters are identified with the drift and the infinitesimal variance of the underlying stochastic diffusion process with multiplicative noise. The state space of the process is restricted from below by an inaccessible boundary. Further, the model is characterized by the presence of an absorbing threshold, the first hitting of which determines the length of each trajectory and which constrains the state space from above. We compare, both in the presence and in the absence of the absorbing threshold, the efficiency of different known estimators. In addition, we propose an estimator for the drift term, which is proved to be more efficient than the others, at least in the explored range of the parameters. The presence of the threshold makes the estimates of the drift term biased, and two methods to correct it are proposed.
Automated Optimization of Water–Water Interaction Parameters for a Coarse-Grained Model
2015-01-01
We have developed an automated parameter optimization software framework (ParOpt) that implements the Nelder-Mead simplex algorithm and applied it to a coarse-grained polarizable water model. The model employs a tabulated, modified Morse potential with decoupled short- and long-range interactions, incorporating four water molecules per interaction site. Polarizability is introduced by the addition of a harmonic angle term defined among three charged points within each bead. The target function for parameter optimization was based on the experimental density, surface tension, electric field permittivity, and diffusion coefficient. The model was validated by comparison of statistical quantities with experimental observation. We found very good performance of the optimization procedure and good agreement of the model with experiment. PMID:24460506
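The target-function idea can be sketched as a sum of squared relative deviations of model-predicted properties from experimental targets. The toy property predictor below is hypothetical (a real run would call the simulator), and a simple random search stands in here for the Nelder-Mead simplex used by ParOpt.

```python
import random

# Experimental targets (illustrative values in mixed units).
TARGETS = {"density": 997.0, "surface_tension": 72.0, "diffusion": 2.3}

def predict(eps, sigma):
    """Hypothetical stand-in property predictor; in the real workflow
    each evaluation would run a coarse-grained simulation."""
    return {
        "density": 1000.0 * sigma / (1.0 + 0.1 * eps),
        "surface_tension": 70.0 * eps * sigma,
        "diffusion": 2.5 / (eps * sigma),
    }

def target_function(params):
    """Sum of squared relative deviations from the experimental targets."""
    props = predict(*params)
    return sum(((props[k] - v) / v) ** 2 for k, v in TARGETS.items())

# Random search stands in for the Nelder-Mead simplex of the framework.
random.seed(2)
best = min(
    ((random.uniform(0.5, 1.5), random.uniform(0.5, 1.5)) for _ in range(5000)),
    key=target_function,
)
print(best, target_function(best))
```

Relative deviations make the mixed-unit targets (density, surface tension, diffusion) commensurable, which is why target functions of this shape are common in force-field parameterisation.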
Assessing uncertainty in model parameters based on sparse and noisy experimental data.
Hiroi, Noriko; Swat, Maciej; Funahashi, Akira
2014-01-01
To perform parametric identification of mathematical models of biological events, experimental data are rarely sufficient to estimate the target behaviors produced by complex non-linear systems. We performed parameter fitting to a cell cycle model with experimental data as an in silico experiment. We calibrated model parameters with the generalized least squares method with randomized initial values and checked the local and global sensitivity of the model. Sensitivity analyses showed that parameter optimization induced less sensitivity except for parameters related to the metabolism of the transcription factors c-Myc and E2F, which are required to overcome a restriction point (R-point). We performed bifurcation analyses with the optimized parameters and found that the bimodality was lost. This result suggests that accumulation of c-Myc and E2F induced dysfunction of the R-point. We performed a second parameter optimization based on the results of the sensitivity analyses and incorporating additional information derived from recent in vivo data. This optimization returned the bimodal characteristics of the model with a narrower range of hysteresis than the original. This result suggests that the optimized model can more easily go through the R-point and come back to the gap phase after having overcome it once. Two-parameter space analyses showed that the metabolism of c-Myc is transformed such that it can allow bimodal cell behavior with weak growth factor stimuli. This result is compatible with the character of the cell line used in our experiments. At the same time, Rb, an inhibitor of E2F, can allow bimodal cell behavior with only a limited range of stimuli when it is activated, but with a wider range of stimuli when it is inactive. These results provide two insights: biologically, the two transcription factors play an essential role in malignant cells in overcoming the R-point with weaker growth factor stimuli, and theoretically, sparse time-course data can be used to change a model to a biologically expected state.
NASA Astrophysics Data System (ADS)
Ghanbari, M.; Najafi, G.; Ghobadian, B.; Mamat, R.; Noor, M. M.; Moosavian, A.
2015-12-01
This paper studies the use of an adaptive neuro-fuzzy inference system (ANFIS) to predict the performance parameters and exhaust emissions of a diesel engine operating on nanodiesel blended fuels. In order to predict the engine parameters, the whole experimental data set was randomly divided into training and testing data. For ANFIS modelling, the Gaussian curve membership function (gaussmf) and 200 training epochs (iterations) were found to be optimum choices for the training process. The results demonstrate that ANFIS is capable of predicting the diesel engine performance and emissions. In the experimental step, carbon nanotubes (CNT) (40, 80 and 120 ppm) and nano silver particles (40, 80 and 120 ppm) were prepared and added as additives to the diesel fuel. A six-cylinder, four-stroke diesel engine was fuelled with these new blended fuels and operated at different engine speeds. Experimental test results indicated that adding nanoparticles to diesel fuel increased diesel engine power and torque output. For nano-diesel it was found that the brake specific fuel consumption (bsfc) decreased compared to the neat diesel fuel. The results showed that with an increase of nanoparticle concentration (from 40 ppm to 120 ppm) in diesel fuel, CO2 emission increased. CO emission with nanoparticle-blended diesel fuel was significantly lower compared to pure diesel fuel. UHC emission decreased with silver nano-diesel blended fuel, while it increased with fuels containing CNT nanoparticles. The trend of NOx emission was the inverse of that of UHC emission: with the addition of nanoparticles to the blended fuels, NOx increased compared to the neat diesel fuel. The tests revealed that silver and CNT nanoparticles can be used as additives in diesel fuel to improve combustion of the fuel and reduce the exhaust emissions significantly.
Variational methods to estimate terrestrial ecosystem model parameters
NASA Astrophysics Data System (ADS)
Delahaies, Sylvain; Roulstone, Ian
2016-04-01
Carbon is at the basis of the chemistry of life. Its ubiquity in the Earth system is the result of complex recycling processes. Present in the atmosphere in the form of carbon dioxide, it is absorbed by marine and terrestrial ecosystems and stored within living biomass and decaying organic matter. Then soil chemistry and a non-negligible amount of time transform the dead matter into fossil fuels. Throughout this cycle, carbon dioxide is released into the atmosphere through respiration and the combustion of fossil fuels. Model-data fusion techniques allow us to combine our understanding of these complex processes with an ever-growing amount of observational data to help improve models and predictions. The data assimilation linked ecosystem carbon (DALEC) model is a simple box model simulating the carbon budget allocation for terrestrial ecosystems. Over the last decade several studies have demonstrated the relative merit of various inverse modelling strategies (MCMC, EnKF, 4DVAR) to estimate model parameters and initial carbon stocks for DALEC and to quantify the uncertainty in the predictions. Despite its simplicity, DALEC represents the basic processes at the heart of more sophisticated models of the carbon cycle. Using adjoint-based methods we study inverse problems for DALEC with various data streams (8-day MODIS LAI, monthly MODIS LAI, NEE). The framework of constrained optimization allows us to incorporate ecological common sense into the variational framework. We use resolution matrices to study the nature of the inverse problems and to obtain data importance and information content for the different types of data. We study how varying the time step affects the solutions, and we show how "spin up" naturally improves the conditioning of the inverse problems.
A parameter model for dredge plume sediment source terms
NASA Astrophysics Data System (ADS)
Decrop, Boudewijn; De Mulder, Tom; Toorman, Erik; Sas, Marc
2017-01-01
, which is not available in all situations. For example, to allow correct representation of overflow plume dispersion in a real-time forecasting model, a fast assessment of the near-field behaviour is needed. For this reason, a semi-analytical parameter model has been developed that reproduces the near-field sediment dispersion obtained with the CFD model in a relatively accurate way. In this paper, this so-called grey-box model is presented.
NASA Astrophysics Data System (ADS)
Humbird, Kelli; Peterson, J. Luc; Brandon, Scott; Field, John; Nora, Ryan; Spears, Brian
2016-10-01
Next-generation supercomputer architecture and in-transit data analysis have been used to create a large collection of 2-D ICF capsule implosion simulations. The database includes metrics for approximately 60,000 implosions, with x-ray images and detailed physics parameters available for over 20,000 simulations. To map and explore this large database, surrogate models for numerous quantities of interest are built using supervised machine learning algorithms. Response surfaces constructed using the predictive capabilities of the surrogates allow for continuous exploration of parameter space without requiring additional simulations. High-performing regions of the input space are identified to guide the design of future experiments. In particular, a model for the yield built using a random forest regression algorithm has a cross-validation score of 94.3% and is consistently conservative for high yield predictions. The model is used to search for robust volumes of parameter space where high yields are expected, even given variations in other input parameters. Surrogates for additional quantities of interest relevant to ignition are used to further characterize the high yield regions. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344, Lawrence Livermore National Security, LLC. LLNL-ABS-697277.
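The surrogate workflow described above can be sketched: train a cheap regressor on (input, yield) pairs drawn from a simulation database, then cross-validate it before trusting its response surface. A k-nearest-neighbour regressor on a synthetic toy response stands in here for the random-forest model and the real ICF database.

```python
import random

random.seed(3)

def toy_yield(x):
    """Hypothetical smooth yield response over a 2-D input space."""
    return 1.0 / (1.0 + (x[0] - 0.6) ** 2 + (x[1] - 0.4) ** 2)

# Synthetic "simulation database": 400 random designs with their yields.
xs = [(random.random(), random.random()) for _ in range(400)]
data = [(x, toy_yield(x)) for x in xs]

def knn_predict(train, x, k=5):
    """Average the k nearest training yields (stand-in for the forest)."""
    near = sorted(train, key=lambda p: (p[0][0] - x[0]) ** 2 + (p[0][1] - x[1]) ** 2)[:k]
    return sum(y for _, y in near) / k

def cv_score(data, folds=5):
    """k-fold cross-validation score: 1 - SSE / total variance (R^2-like)."""
    n = len(data)
    mean_y = sum(y for _, y in data) / n
    sse = var = 0.0
    for f in range(folds):
        test = data[f::folds]
        train = [p for i, p in enumerate(data) if i % folds != f]
        for x, y in test:
            sse += (knn_predict(train, x) - y) ** 2
            var += (y - mean_y) ** 2
    return 1.0 - sse / var

print(round(cv_score(data), 3))
```

Once validated, the surrogate can be queried continuously, e.g. scanned on a fine grid to locate high-yield regions, without paying for additional simulations.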
Awad, W A; Ghareeb, K; Böhm, J
2010-08-01
Probiotics might be one of the solutions to reduce the effects of the recent ban on antimicrobial growth promoters in feed. However, the mode of action of probiotics is still not fully understood. Therefore, evaluating probiotics (microbial feed additives) is essential. Thus the objective of this work was to investigate the efficacy of a new microbial feed additive (Lactobacillus salivarius and Lactobacillus reuteri) in broiler nutrition. The body weight (BW) and average daily weight gain were increased by the dietary inclusion of Lactobacillus sp. in broiler diets. Furthermore, the Lactobacillus feed additive influenced the histomorphological measurements of the small intestinal villi. The addition of Lactobacillus sp. increased (p < 0.05) the villus height (VH)/crypt depth ratio, and the VH was numerically increased in the duodenum. The duodenal crypt depth remained unaffected (p > 0.05), while the ileal crypt depth was decreased by dietary supplementation of Lactobacillus sp. compared with the control. At the end of the feeding period, the basal and glucose-stimulated short-circuit current (Isc) and electrical tissue conductivity were measured in the isolated gut mucosa to characterize the electrical properties of the gut. The addition of glucose on the mucosal side in an Ussing chamber produced a significant increase (p = 0.001) in Isc in both jejunum and colon relative to the basal values in the Lactobacillus probiotic group. In the jejunum, this increase in Isc for the probiotic group was about twice the basal value, while in the control group it was about half the basal value. In addition, the DeltaIsc after glucose addition to the large intestine was greater than the DeltaIsc in the small intestine in both the control and probiotic groups. Moreover, in both jejunum and colon, the increase in Isc for birds fed Lactobacillus was higher than for their control counterparts (p ≤ 0.1). This result suggests that the addition of
NASA Astrophysics Data System (ADS)
Boers, Niklas; Goswami, Bedartha; Chekroun, Mickael; Svensson, Anders; Rousseau, Denis-Didier; Ghil, Michael
2016-04-01
In the recent past, empirical stochastic models have been successfully applied to model a wide range of climatic phenomena [1,2]. In addition to enhancing our understanding of the geophysical systems under consideration, multilayer stochastic models (MSMs) have been shown to be solidly grounded in the Mori-Zwanzig formalism of statistical physics [3]. They are also well-suited for predictive purposes, e.g., for the El Niño Southern Oscillation [4] and the Madden-Julian Oscillation [5]. In general, these models are trained on a given time series under consideration, and then assumed to reproduce certain dynamical properties of the underlying natural system. Most existing approaches are based on least-squares fitting to determine optimal model parameters, which does not allow for an uncertainty estimation of these parameters. This approach significantly limits the degree to which dynamical characteristics of the time series can be safely inferred from the model. Here, we are specifically interested in fitting low-dimensional stochastic models to time series obtained from paleoclimatic proxy records, such as the oxygen isotope ratio and dust concentration of the NGRIP record [6]. The time series derived from these records exhibit substantial dating uncertainties, in addition to the proxy measurement errors. In particular, for time series of this kind, it is crucial to obtain uncertainty estimates for the final model parameters. Following [7], we first propose a statistical procedure to shift dating uncertainties from the time axis to the proxy axis of layer-counted paleoclimatic records. Thereafter, we show how Maximum Likelihood Estimation in combination with Markov Chain Monte Carlo parameter sampling can be employed to translate all uncertainties present in the original proxy time series to uncertainties of the parameter estimates of the stochastic model. We compare time series simulated by the empirical model to the original time series in terms of standard
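The MLE-plus-MCMC step described above can be sketched on a toy AR(1) model x_t = a·x_{t-1} + e_t: a Metropolis random walk over the parameter a yields a posterior sample, and hence an uncertainty estimate rather than a single least-squares point fit. The model, seed, and chain settings are illustrative assumptions, far simpler than a multilayer stochastic model fitted to proxy records.

```python
import math
import random

# Synthetic AR(1) series with known parameter (stands in for a proxy record).
random.seed(4)
true_a = 0.7
x = [0.0]
for _ in range(500):
    x.append(true_a * x[-1] + random.gauss(0, 1.0))

def log_lik(a):
    """Gaussian conditional log-likelihood of the AR(1) parameter (unit noise)."""
    return -0.5 * sum((x[t] - a * x[t - 1]) ** 2 for t in range(1, len(x)))

# Metropolis random walk over a.
a, ll, chain = 0.0, None, []
ll = log_lik(a)
for _ in range(4000):
    prop = a + random.gauss(0, 0.05)
    llp = log_lik(prop)
    if math.log(random.random()) < llp - ll:  # accept/reject step
        a, ll = prop, llp
    chain.append(a)

post = chain[1000:]  # discard burn-in
mean_a = sum(post) / len(post)
sd_a = math.sqrt(sum((v - mean_a) ** 2 for v in post) / len(post))
print(round(mean_a, 2), round(sd_a, 3))
```

The posterior spread sd_a is the quantity a least-squares fit does not provide; in the setting above, dating and measurement uncertainties would additionally be propagated into this spread.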
ORBSIM- ESTIMATING GEOPHYSICAL MODEL PARAMETERS FROM PLANETARY GRAVITY DATA
NASA Technical Reports Server (NTRS)
Sjogren, W. L.
1994-01-01
The ORBSIM program was developed for the accurate extraction of geophysical model parameters from Doppler radio tracking data acquired from orbiting planetary spacecraft. The model of the proposed planetary structure is used in a numerical integration of the spacecraft along simulated trajectories around the primary body. Using line-of-sight (LOS) Doppler residuals, ORBSIM applies fast and efficient modelling and optimization procedures which avoid the traditional complex dynamic reduction of data. ORBSIM produces quantitative geophysical results such as size, depth, and mass. ORBSIM has been used extensively to investigate topographic features on the Moon, Mars, and Venus. The program has proven particularly suitable for modelling gravitational anomalies and mascons. The basic observable for spacecraft-based gravity data is the Doppler frequency shift of a transponded radio signal. The time derivative of this signal carries information regarding the gravity field acting on the spacecraft in the LOS direction (the LOS direction being the path between the spacecraft and the receiving station, either Earth or another satellite). There are many dynamic factors taken into account: earth rotation, solar radiation, acceleration from planetary bodies, tracking station time and location adjustments, etc. The actual trajectories of the spacecraft are simulated using least-squares fits to conic motion. The theoretical Doppler readings from the simulated orbits are compared to actual Doppler observations and another least-squares adjustment is made. ORBSIM has three modes of operation: trajectory simulation, optimization, and gravity modelling. In all cases, an initial gravity model of curved and/or flat disks, harmonics, and/or a force table is required input. ORBSIM is written in FORTRAN 77 for batch execution and has been implemented on a DEC VAX 11/780 computer operating under VMS. This program was released in 1985.
Measuring morphological parameters of the pelvic floor for finite element modelling purposes.
Janda, Stepán; van der Helm, Frans C T; de Blok, Sjoerd B
2003-06-01
The goal of this study was to obtain a complete data set needed for studying the complex biomechanical behaviour of the pelvic floor muscles using a computer model based on the finite element (FE) theory. The model should be able to predict the effect of surgical interventions and give insight into the function of pelvic floor muscles. Because there was a lack of any information concerning morphological parameters of the pelvic floor muscle structures, we performed an experimental measurement to uncover those morphological parameters. Geometric parameters as well as muscle parameters of the pelvic floor muscles were measured on an embalmed female cadaver. A three-dimensional (3D) geometric data set of the pelvic floor including muscle fibre directions was obtained using a palpator device. A 3D surface model based on the experimental data, needed for mathematical modelling of the pelvic floor, was created. For all parts of the diaphragma pelvis, the optimal muscle fibre length was determined by laser diffraction measurements of the sarcomere length. In addition, other muscle parameters such as physiological cross-sectional area and total muscle fibre length were determined. Apart from these measurements we obtained a data set of the pelvic floor structures based on nuclear magnetic resonance imaging (MRI) on the same cadaver specimen. The purpose of this experiment was to discover the relationship between the MRI morphology and geometrical parameters obtained from the previous measurements. The produced data set is not only important for biomechanical modelling of the pelvic floor muscles, but it also describes the geometry of muscle fibres and is useful for functional analysis of the pelvic floor in general. By the use of many reference landmarks all these morphologic data concerning fibre directions and optimal fibre length can be morphed to the geometrical data based on segmentation from MRI scans. These data can be directly used as an input for building a finite element model of the pelvic floor.
Neuert, Mark A C; Dunning, Cynthia E
2013-09-01
Strain energy-based adaptive material models are used to predict bone resorption resulting from stress shielding induced by prosthetic joint implants. Generally, such models are governed by two key parameters: a homeostatic strain-energy state (K) and a threshold deviation from this state required to initiate bone reformation (s). A refinement procedure has been performed to estimate these parameters in the femur and glenoid; this study investigates the specific influences of these parameters on resulting density distributions in the distal ulna. A finite element model of a human ulna was created using micro-computed tomography (µCT) data, initialized to a homogeneous density distribution, and subjected to approximate in vivo loading. Values for K and s were tested, and the resulting steady-state density distribution compared with values derived from µCT images. The sensitivity of these parameters to initial conditions was examined by altering the initial homogeneous density value. The refined model parameters selected were then applied to six additional human ulnae to determine their performance across individuals. Model accuracy using the refined parameters was found to be comparable with that found in previous studies of the glenoid and femur, and gross bone structures, such as the cortical shell and medullary canal, were reproduced. The model was found to be insensitive to initial conditions; however, a fair degree of variation was observed between the six specimens. This work represents an important contribution to the study of changes in load transfer in the distal ulna following the implementation of commercial orthopedic implants.
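The adaptive rule governed by K and s can be illustrated with a generic strain-energy remodeling update (in the spirit of Huiskes-type models; the rate constant, bounds, and values below are illustrative placeholders, not the study's calibrated parameters):

```python
def remodel_step(rho, sed, K, s, rate=1.0, dt=0.1,
                 rho_min=0.01, rho_max=1.8):
    """One explicit density update for a single element.

    rho: apparent density; sed: strain energy density, so the remodeling
    stimulus is sed/rho.  No adaptation occurs inside the 'lazy zone'
    K*(1 - s) .. K*(1 + s)."""
    stimulus = sed / rho
    if stimulus > K * (1 + s):          # overloading -> bone apposition
        rho += rate * (stimulus - K * (1 + s)) * dt
    elif stimulus < K * (1 - s):        # stress shielding -> resorption
        rho += rate * (stimulus - K * (1 - s)) * dt
    return min(max(rho, rho_min), rho_max)

# Iterating to steady state drives the stimulus back into the lazy zone:
# here rho converges toward sed / (K * (1 + s)).
rho = 0.5
for _ in range(5000):
    rho = remodel_step(rho, sed=0.02, K=0.02, s=0.1)
```

The steady-state density depends on both K and s, which is why the refined values of these two parameters shape the predicted density distribution.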
NASA Astrophysics Data System (ADS)
Bingham, Q. G.; Neilson, B. T.; Neale, C. M. U.; Cardenas, M. B.
2012-08-01
This paper presents a method that uses high-resolution multispectral and thermal infrared imagery from airborne remote sensing for estimating two model parameters within the two-zone in-stream temperature and solute (TZTS) model. Previous TZTS modeling efforts have provided accurate in-stream temperature predictions; however, model parameter ranges resulting from the multiobjective calibrations were quite large. In addition to the data types previously required to populate and calibrate the TZTS model, high-resolution, remotely sensed thermal infrared (TIR) and near-infrared, red, and green (multispectral) band imagery were collected to help estimate two previously calibrated parameters: (1) average total channel width (BTOT) and (2) the fraction of the channel comprising surface transient storage zones (β). Multispectral imagery in combination with the TIR imagery provided high-resolution estimates of BTOT. In-stream temperature distributions provided by the TIR imagery enabled the calculation of temperature thresholds at which main channel temperatures could be delineated from surface transient storage, permitting the estimation of β. It was found that an increase in the resolution and frequency at which BTOT and β were physically estimated resulted in similar objective functions in the main channel and transient storage zones, but the uncertainty associated with the estimated parameters decreased.
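Both estimates reduce to simple image statistics once the threshold is known. A toy sketch (a hypothetical helper, not the TZTS code, assuming storage zones are the warmer pixels):

```python
def estimate_channel_params(transect_widths_m, wetted_pixel_temps_c,
                            temp_threshold_c):
    """BTOT as the mean of channel-width transects; beta as the fraction
    of wetted pixels on the storage side of the temperature threshold
    (surface transient storage assumed warmer than the main channel)."""
    b_tot = sum(transect_widths_m) / len(transect_widths_m)
    n_storage = sum(1 for t in wetted_pixel_temps_c if t > temp_threshold_c)
    beta = n_storage / len(wetted_pixel_temps_c)
    return b_tot, beta

b_tot, beta = estimate_channel_params(
    [10.0, 12.0, 11.0],                  # multispectral width transects
    [14.0, 14.5, 18.0, 19.0, 14.2],      # TIR pixel temperatures
    16.0)                                # delineation threshold
```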
Gershgorin, B.; Harlim, J.; Majda, A.J.
2010-01-01
The filtering and predictive skill for turbulent signals is often limited by the lack of information about the true dynamics of the system and by our inability to resolve the assumed dynamics with sufficiently high resolution using the current computing power. The standard approach is to use a simple yet rich family of constant parameters to account for model errors through parameterization. This approach can have significant skill by fitting the parameters to some statistical feature of the true signal; however in the context of real-time prediction, such a strategy performs poorly when intermittent transitions to instability occur. Alternatively, we need a set of dynamic parameters. One strategy for estimating parameters on the fly is a stochastic parameter estimation through partial observations of the true signal. In this paper, we extend our newly developed stochastic parameter estimation strategy, the Stochastic Parameterization Extended Kalman Filter (SPEKF), to filtering sparsely observed spatially extended turbulent systems which exhibit abrupt stability transition from time to time despite a stable average behavior. For our primary numerical example, we consider a turbulent system of externally forced barotropic Rossby waves with instability introduced through intermittent negative damping. We find high filtering skill of SPEKF applied to this toy model even in the case of very sparse observations (with only 15 out of the 105 grid points observed) and with unspecified external forcing and damping. Additive and multiplicative bias corrections are used to learn the unknown features of the true dynamics from observations. We also present a comprehensive study of predictive skill in the one-mode context including the robustness toward variation of stochastic parameters, imperfect initial conditions and finite ensemble effect. Furthermore, the proposed stochastic parameter estimation scheme applied to the same spatially extended Rossby wave system demonstrates
ERIC Educational Resources Information Center
Gugel, John F.
A new method for estimating the parameters of the normal ogive three-parameter model for multiple-choice test items--the normalized direct (NDIR) procedure--is examined. The procedure is compared to a more commonly used estimation procedure, Lord's LOGIST, using computer simulations. The NDIR procedure uses the normalized (mid-percentile)…
ERIC Educational Resources Information Center
Gao, Furong; Chen, Lisue
2005-01-01
Through a large-scale simulation study, this article compares item parameter estimates obtained by the marginal maximum likelihood estimation (MMLE) and marginal Bayes modal estimation (MBME) procedures in the 3-parameter logistic model. The impact of different prior specifications on the MBME estimates is also investigated using carefully…
Sun, Yu; Hou, Zhangshuan; Huang, Maoyi; Tian, Fuqiang; Leung, Lai-Yung R.
2013-12-10
This study demonstrates the possibility of inverting hydrologic parameters using surface flux and runoff observations in version 4 of the Community Land Model (CLM4). Previous studies showed that surface flux and runoff calculations are sensitive to major hydrologic parameters in CLM4 over different watersheds, and illustrated the necessity and possibility of parameter calibration. Two inversion strategies, deterministic least-squares fitting and stochastic Markov-chain Monte Carlo (MCMC) Bayesian inversion, are evaluated by applying them to CLM4 at selected sites. The unknowns to be estimated include surface and subsurface runoff generation parameters and vadose zone soil water parameters. We find that using model parameters calibrated by least-squares fitting provides little improvement in the model simulations, but the sampling-based stochastic inversion approaches are consistent - as more information comes in, the predictive intervals of the calibrated parameters become narrower and the misfits between the calculated and observed responses decrease. In general, parameters that are identified to be significant through sensitivity analyses and statistical tests are better calibrated than those with weak or nonlinear impacts on flux or runoff observations. Temporal resolution of observations has larger impacts on the results of inverse modeling using heat flux data than runoff data. Soil and vegetation cover have important impacts on parameter sensitivities, leading to the different patterns of posterior distributions of parameters at different sites. Overall, the MCMC-Bayesian inversion approach effectively and reliably improves the simulation of CLM under different climates and environmental conditions. Bayesian model averaging of the posterior estimates with different reference acceptance probabilities can smooth the posterior distribution and provide more reliable parameter estimates, but at the expense of wider uncertainty bounds.
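The MCMC side of the comparison can be sketched with a scalar random-walk Metropolis sampler. Everything here is a toy stand-in (a linear rain-to-runoff "model" with one parameter k and a flat prior), not CLM4 or its calibration machinery:

```python
import math
import random

def metropolis(logpost, x0, n_samples, step=0.1, seed=1):
    """Random-walk Metropolis sampler for a 1-D posterior."""
    rng = random.Random(seed)
    x, lp = x0, logpost(x0)
    chain = []
    for _ in range(n_samples):
        x_prop = x + rng.gauss(0.0, step)
        lp_prop = logpost(x_prop)
        log_ratio = lp_prop - lp
        if log_ratio >= 0 or rng.random() < math.exp(log_ratio):
            x, lp = x_prop, lp_prop
        chain.append(x)
    return chain

# Toy forward model: runoff = k * rainfall, observed with noise sd 0.05;
# flat prior on k in [0, 2].
rain = [1.0, 2.0, 3.0]
obs = [0.52, 0.98, 1.55]        # generated with k near 0.5

def logpost(k):
    if not 0.0 <= k <= 2.0:
        return -math.inf
    return -sum((k * r - o) ** 2 for r, o in zip(rain, obs)) / (2 * 0.05 ** 2)

chain = metropolis(logpost, x0=1.0, n_samples=5000)
k_hat = sum(chain[1000:]) / len(chain[1000:])   # posterior mean after burn-in
```

The predictive-interval narrowing described in the abstract corresponds to the posterior tightening as more (rain, runoff) pairs enter `logpost`.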
Relevant parameters in models of cell division control
NASA Astrophysics Data System (ADS)
Grilli, Jacopo; Osella, Matteo; Kennard, Andrew S.; Lagomarsino, Marco Cosentino
2017-03-01
A recent burst of dynamic single-cell data makes it possible to characterize the stochastic dynamics of cell division control in bacteria. Different models were used to propose specific mechanisms, but the links between them are poorly explored. The lack of comparative studies makes it difficult to appreciate how well any particular mechanism is supported by the data. Here, we describe a simple and generic framework in which two common formalisms can be used interchangeably: (i) a continuous-time division process described by a hazard function and (ii) a discrete-time equation describing cell size across generations (where the unit of time is a cell cycle). In our framework, this second process is a discrete-time Langevin equation with simple physical analogues. By perturbative expansion around the mean initial size (or interdivision time), we show how this framework describes a wide range of division control mechanisms, including combinations of time and size control, as well as the constant added size mechanism recently found to capture several aspects of the cell division behavior of different bacteria. As we show by analytical estimates and numerical simulations, the available data are described precisely by the first-order approximation of this expansion, i.e., by a "linear response" regime for the correction of size fluctuations. Hence, a single dimensionless parameter defines the strength and action of the division control against cell-to-cell variability (quantified by a single "noise" parameter). However, the same strength of linear response may emerge from several mechanisms, which are distinguished only by higher-order terms in the perturbative expansion. Our analytical estimate of the sample size needed to distinguish between second-order effects shows that this value is close to but larger than the values of the current datasets. These results provide a unified framework for future studies and clarify the relevant parameters at play in the control of
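The discrete-time formalism is straightforward to simulate. A minimal sketch of the constant-added-size ("adder") mechanism with symmetric division, showing the linear-response relaxation of birth size toward its mean (parameter values are illustrative):

```python
import random

def simulate_adder(n_gen, delta=1.0, noise_sd=0.1, s0=1.0, seed=0):
    """Birth sizes across generations under the adder mechanism:
    each cycle adds delta plus Gaussian noise, then the cell halves,
    giving the discrete-time Langevin map
        s_{n+1} = s_n / 2 + delta / 2 + eta_n,
    which relaxes toward the fixed point s* = delta."""
    rng = random.Random(seed)
    s, births = s0, []
    for _ in range(n_gen):
        s = (s + delta + rng.gauss(0.0, noise_sd)) / 2.0
        births.append(s)
    return births

births = simulate_adder(20000)
mean_birth = sum(births) / len(births)
```

The coefficient 1/2 multiplying s_n is this mechanism's "strength of linear response"; other mechanisms (timer, sizer, mixed control) change that slope while leaving the form of the map unchanged, which is why first-order statistics alone cannot distinguish them.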
Extracting Structure Parameters of Dimers for Molecular Tunneling Ionization Model
NASA Astrophysics Data System (ADS)
Song-Feng, Zhao; Fang, Huang; Guo-Li, Wang; Xiao-Xin, Zhou
2016-03-01
We determine structure parameters of the highest occupied molecular orbital (HOMO) of 27 dimers for the molecular tunneling ionization (so called MO-ADK) model of Tong et al. [Phys. Rev. A 66 (2002) 033402]. The molecular wave functions with correct asymptotic behavior are obtained by solving the time-independent Schrödinger equation with B-spline functions and molecular potentials which are numerically created using the density functional theory. We examine the alignment-dependent tunneling ionization probabilities from MO-ADK model for several molecules by comparing with the molecular strong-field approximation (MO-SFA) calculations. We show the molecular Perelomov-Popov-Terent'ev (MO-PPT) can successfully give the laser wavelength dependence of ionization rates (or probabilities). Based on the MO-PPT model, two diatomic molecules having valence orbital with antibonding systems (i.e., Cl2, Ne2) show strong ionization suppression when compared with their corresponding closest companion atoms. Supported by National Natural Science Foundation of China under Grant Nos. 11164025, 11264036, 11465016, 11364038, the Specialized Research Fund for the Doctoral Program of Higher Education of China under Grant No. 20116203120001, and the Basic Scientific Research Foundation for Institution of Higher Learning of Gansu Province
Sound propagation and absorption in foam - A distributed parameter model.
NASA Technical Reports Server (NTRS)
Manson, L.; Lieberman, S.
1971-01-01
Liquid-based foams are highly effective sound absorbers. A better understanding of the mechanisms of sound absorption in foams was sought by exploration of a mathematical model of bubble pulsation and coupling and the development of a distributed-parameter mechanical analog. A solution by electric-circuit analogy was thus obtained and transmission-line theory was used to relate the physical properties of the foams to the characteristic impedance and propagation constants of the analog transmission line. Comparison of measured physical properties of the foam with values obtained from measured acoustic impedance and propagation constants and the transmission-line theory showed good agreement. We may therefore conclude that the sound propagation and absorption mechanisms in foam are accurately described by the resonant response of individual bubbles coupled to neighboring bubbles.
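The transmission-line relations invoked here are the standard distributed-parameter formulas; a short sketch with generic line constants (not the paper's measured foam values):

```python
import cmath
import math

def line_constants(R, L, G, C, freq_hz):
    """Characteristic impedance Z0 and propagation constant gamma of a
    distributed-parameter line with per-unit-length series impedance
    R + jwL and shunt admittance G + jwC:
        Z0    = sqrt((R + jwL) / (G + jwC))
        gamma = sqrt((R + jwL) * (G + jwC))"""
    w = 2.0 * math.pi * freq_hz
    series = complex(R, w * L)
    shunt = complex(G, w * C)
    return cmath.sqrt(series / shunt), cmath.sqrt(series * shunt)

# Lossless sanity check: Z0 = sqrt(L/C) is real, gamma purely imaginary.
z0, gamma = line_constants(R=0.0, L=1e-3, G=0.0, C=1e-9, freq_hz=1e4)
```

In the acoustic analogy, the loss terms R and G carry the absorption, so fitting measured impedance and propagation constants recovers the foam's physical properties.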
Parameter Estimation and Model Validation of Nonlinear Dynamical Networks
Abarbanel, Henry; Gill, Philip
2015-03-31
In the performance period of this work under a DOE contract, the co-PIs, Philip Gill and Henry Abarbanel, developed new methods for statistical data assimilation for problems of DOE interest, including geophysical and biological problems. This included numerical optimization algorithms for variational principles, and new parallel-processing Monte Carlo routines for performing the path integrals of statistical data assimilation. These results have been summarized in the monograph “Predicting the Future: Completing Models of Observed Complex Systems” by Henry Abarbanel, published by Springer-Verlag in June 2013. Additional results and details have appeared in the peer-reviewed literature.
NASA Astrophysics Data System (ADS)
Wentworth, Mami Tonoe
Uncertainty quantification plays an important role when making predictive estimates of model responses. In this context, uncertainty quantification is defined as quantifying and reducing uncertainties, and the objective is to quantify uncertainties in parameters, models, and measurements, and propagate the uncertainties through the model, so that one can make a predictive estimate with quantified uncertainties. Two of the aspects of uncertainty quantification that must be performed prior to propagating uncertainties are model calibration and parameter selection. There are several efficient techniques for these processes; however, the accuracy of these methods is often not verified. This is the motivation for our work, and in this dissertation, we present and illustrate verification frameworks for model calibration and parameter selection in the context of biological and physical models. First, HIV models, developed and improved by [2, 3, 8], describe the viral infection dynamics of an HIV disease. These are also used to make predictive estimates of viral loads and T-cell counts and to construct an optimal control for drug therapy. Estimating input parameters is an essential step prior to uncertainty quantification. However, not all the parameters are identifiable, implying that they cannot be uniquely determined by the observations. These unidentifiable parameters can be partially removed by performing parameter selection, a process in which parameters that have minimal impacts on the model response are determined. We provide verification techniques for Bayesian model calibration and parameter selection for an HIV model. As an example of a physical model, we employ a heat model with experimental measurements presented in [10]. A steady-state heat model represents a prototypical behavior for the heat conduction and diffusion processes involved in a thermal-hydraulic model, which is a part of nuclear reactor models. We employ this simple heat model to illustrate verification
NASA Astrophysics Data System (ADS)
Bayliss, Matthew B.; Sharon, Keren; Johnson, Traci
2015-03-01
We test the effects of varying the cosmological parameter values used in the strong lens modeling process for the six Hubble Frontier Field (HFF) galaxy clusters. The standard procedure for generating high-fidelity strong lens models includes careful consideration of uncertainties in the output models that result from varying model parameters within the bounds of available data constraints. It is not, however, common practice to account for the effects of cosmological parameter value uncertainties. The convention is to instead use a single fiducial “concordance cosmology” and generate lens models assuming zero uncertainty in cosmological parameter values. We find that the magnification maps of the individual HFF clusters vary significantly when lens models are computed using different cosmological parameter values taken from recent literature constraints from space- and ground-based experiments. Specifically, the magnification maps have average variances across the best-fit models computed using different cosmologies that are comparable in magnitude to—and as much as 2.5× larger than—the model-fitting uncertainties in each best-fit model. We also find that estimates of the mass profiles of the cluster cores themselves vary only slightly when different input cosmological parameters are used. We conclude that cosmological parameter uncertainty is a non-negligible source of uncertainty in lens model products for the HFF clusters and that it is important that current and future work that relies on precision strong-lensing models take care to account for this additional source of uncertainty.
Parameter estimates of a zero-dimensional ecosystem model applying the adjoint method
NASA Astrophysics Data System (ADS)
Schartau, Markus; Oschlies, Andreas; Willebrand, Jürgen
Assimilation experiments with data from the Bermuda Atlantic Time-series Study (BATS, 1989-1993) were performed with a simple mixed-layer ecosystem model of dissolved inorganic nitrogen (N), phytoplankton (P) and herbivorous zooplankton (H). Our aim is to optimize the biological model parameters, such that the misfits between model results and observations are minimized. The utilized assimilation method is the variational adjoint technique, starting from a wide range of initial parameter guesses. A twin experiment displayed two kinds of solutions, when Gaussian noise was added to the model-generated data. The expected solution refers to the global minimum of the model-data misfit function, whereas the other solution is biologically implausible and is associated with a local minimum. Experiments with real data showed either bottom-up or top-down controlled ecosystem dynamics, depending on the deep nutrient availability. To confine the solutions, an additional constraint on zooplankton biomass was added to the optimization procedure. This inclusion did not produce optimal model results that were consistent with observations. The modelled zooplankton biomass still exceeded the observations. From the model-data discrepancies systematic model errors could be determined, in particular when the chlorophyll concentration started to decline before primary production reached its maximum. A direct comparison of measured 14C-production data with modelled phytoplankton production rates is inadequate at BATS, at least when a constant carbon to nitrogen (C:N) ratio is assumed for data assimilation.
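The twin-experiment idea can be illustrated with a toy cost surface. This is a deliberately simplified stand-in (a one-parameter logistic phytoplankton model in place of the NPH ecosystem model, and a brute-force scan in place of the adjoint-driven descent):

```python
def model_P(mu, days=10, P0=0.1, dt=1.0):
    """Toy logistic phytoplankton trajectory, standing in for the
    mixed-layer NPH ecosystem model."""
    P, out = P0, []
    for _ in range(days):
        P += dt * mu * P * (1.0 - P)
        out.append(P)
    return out

def misfit(mu, obs):
    """Model-data cost function; the adjoint technique supplies its
    gradient in the real assimilation system."""
    return sum((m - o) ** 2 for m, o in zip(model_P(mu), obs))

# Twin experiment: generate 'observations' with a known growth rate,
# then scan the cost surface.  The global minimum recovers the truth;
# noisy data can additionally introduce spurious local minima, which is
# why a wide range of initial parameter guesses is needed.
obs = model_P(0.5)
mus = [0.1 * i for i in range(1, 16)]
best = min(mus, key=lambda m: misfit(m, obs))
```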
NASA Astrophysics Data System (ADS)
Spangler, J.; Schulz, C. J.; Childers, G. W.
2009-12-01
Modeling microbial respiration and growth is an important tool for understanding many geochemical systems. The estimation of growth parameters relies on fitting experimental data to a selected model, such as the Monod equation or some variation, most often under batch or continuous culture conditions. While continuous culture conditions can be analogous to some natural environments, this is often not the case. More often, microorganisms are subject to fluctuating temperature, substrate concentrations, pH, water activity, and inhibitory compounds, to name a few. Microbial growth estimation under non-isothermal conditions has been possible through the use of numerical solutions and has seen use in the field of food microbiology. In this study, numerical solutions were used to extend growth models to non-isostatic conditions using momentary growth rate estimates. Using a model organism common in wastewater (Paracoccus denitrificans), growth and respiration rate parameters were estimated under varying static conditions (temperature, pH, electron donor/acceptor concentrations) and used to construct a non-isostatic growth model. After construction of the model, additional experiments were conducted to validate the model. These non-isostatic models hold the potential for allowing the prediction of cell biomass and respiration rates under a diverse array of conditions. By not restricting models to constant environmental conditions, the general applicability of the model can be greatly improved.
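The numerical-solution approach can be sketched as a momentary-rate Euler integration of Monod kinetics under a time-varying temperature. The Gaussian temperature response and all parameter values below are illustrative placeholders, not the calibrated P. denitrificans parameters:

```python
import math

def mu_max(temp_c, mu_opt=0.8, temp_opt=30.0, width=8.0):
    """Hypothetical Gaussian temperature dependence of the maximum
    specific growth rate (1/h)."""
    return mu_opt * math.exp(-((temp_c - temp_opt) / width) ** 2)

def grow(temps_c, S0=10.0, X0=0.01, Ks=0.5, Y=0.4, dt=0.1):
    """Euler integration of Monod growth; the momentary rate is
    re-evaluated at every step from the current temperature and
    substrate concentration, so the temperature series need not be
    constant."""
    S, X = S0, X0
    for T in temps_c:
        mu = mu_max(T) * S / (Ks + S)   # momentary specific growth rate
        growth = dt * mu * X
        X += growth
        S = max(S - growth / Y, 0.0)    # substrate consumed per yield Y
    return S, X

# Constant optimal temperature: substrate is exhausted and biomass
# approaches X0 + Y * S0.
S_end, X_end = grow([30.0] * 2000)
```

Passing a fluctuating temperature list instead of a constant one is exactly the "non-isostatic" extension: the same integrator runs, only the momentary rate changes.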
Use of generalised additive models to categorise continuous variables in clinical prediction
2013-01-01
Background In medical practice many, essentially continuous, clinical parameters tend to be categorised by physicians for ease of decision-making. Indeed, categorisation is a common practice both in medical research and in the development of clinical prediction rules, particularly where the ensuing models are to be applied in daily clinical practice to support clinicians in the decision-making process. Since the number of categories into which a continuous predictor must be categorised depends partly on the relationship between the predictor and the outcome, the need for more than two categories must be borne in mind. Methods We propose a categorisation methodology for clinical-prediction models, using Generalised Additive Models (GAMs) with P-spline smoothers to determine the relationship between the continuous predictor and the outcome. The proposed method consists of creating at least one average-risk category along with high- and low-risk categories based on the GAM smooth function. We applied this methodology to a prospective cohort of patients with exacerbated chronic obstructive pulmonary disease. The predictors selected were respiratory rate and partial pressure of carbon dioxide in the blood (PCO2), and the response variable was poor evolution. An additive logistic regression model was used to show the relationship between the covariates and the dichotomous response variable. The proposed categorisation was compared to the continuous predictor as the best option, using the AIC and AUC evaluation parameters. The sample was divided into derivation (60%) and validation (40%) samples. The first was used to obtain the cut points while the second was used to validate the proposed methodology. Results The three-category proposal for the respiratory rate was ≤20; (20, 24]; >24, for which the following values were obtained: AIC=314.5 and AUC=0.638. The respective values for the continuous predictor were AIC=317.1 and AUC=0.634, with no statistically
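The cut-point logic reduces to locating where the centred GAM smooth leaves a band around average risk. A schematic stand-in (a linear "smooth" in place of the fitted P-spline, with the band half-width chosen so the cuts land on the paper's respiratory-rate categories):

```python
def cut_points(xs, smooth, delta):
    """Boundaries where a monotone centred smooth crosses the
    +/-delta band around zero (average risk)."""
    lows = [x for x, s in zip(xs, smooth) if s <= -delta]
    highs = [x for x, s in zip(xs, smooth) if s >= delta]
    return max(lows), min(highs)

def categorise(x, low_cut, high_cut):
    """Three-level categorisation: <=low_cut, (low_cut, high_cut], >high_cut."""
    if x <= low_cut:
        return "low"
    if x <= high_cut:
        return "average"
    return "high"

# Linear stand-in for the fitted smooth of respiratory rate.
xs = list(range(10, 35))
smooth = [0.25 * (x - 22) for x in xs]
low_cut, high_cut = cut_points(xs, smooth, delta=0.5)
cats = [categorise(rr, low_cut, high_cut) for rr in (16, 22, 30)]
```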
Small-signal model parameter extraction for AlGaN/GaN HEMT
NASA Astrophysics Data System (ADS)
Le, Yu; Yingkui, Zheng; Sheng, Zhang; Lei, Pang; Ke, Wei; Xiaohua, Ma
2016-03-01
A new 22-element small-signal equivalent circuit model for the AlGaN/GaN high electron mobility transistor (HEMT) is presented. Compared with the traditional equivalent circuit model, the gate forward and breakdown conductances (Ggsf and Ggdf) are introduced into the new model to characterize the gate leakage current. Additionally, for the new gate-connected field plate and the source-connected field plate of the device, an improved method for extracting the parasitic capacitances is proposed, which can be applied to the small-signal extraction for an asymmetric device. To verify the model, S-parameters are obtained from the modeling and measurements. The good agreement between the measured and the simulated results indicates that this model is accurate, stable and comparatively clear in its physical significance.
Chouteau, J; Lerat, J L; Testa, R; Moyen, B; Banks, S A
2007-01-01
Model-image registration techniques have been used extensively for the measurement of joint kinematics in vivo. These techniques typically utilize an explicit measurement of X-ray projection parameters (principal distance, principal point), which is easily done for prospective studies. However, there is vast opportunity to derive useful information from previously collected clinical radiographic films where the projection parameters are unknown. The purpose of this study was to determine variation in measured knee arthroplasty kinematics when the X-ray projection parameters were unknown, but bounded. Based on the clinical radiographic protocol, a nominal principal point was chosen and eight additional points +/-2 and +/-5 cm in the horizontal and vertical directions were defined. Tibiofemoral kinematics were determined for all nine projection parameter sets for a series of 10 lateral radiographs. In addition, the principal distance was varied +/-15 cm and tibiofemoral kinematics were determined for these two projection sets. Measured joint kinematics varied less than 0.6 degrees and 0.4 mm for +/-2 cm variations in principal point location, and 0.7 degrees and 0.6 mm for +/-5 cm variations in principal point location. Measured joint kinematics varied less than 0.6 degrees and 0.7 mm for +/-15 cm variations in principal distance. Variation in X-ray principal point and principal distance over clinically bounded ranges has a small effect on knee arthroplasty kinematics computed from model-image registration with high-quality clinical radiographs.
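The insensitivity result is easy to rationalise from the pinhole projection equations: shifting the principal point translates every image point by the same amount, a rigid offset that model-image registration largely absorbs. A minimal sketch (camera-frame coordinates in metres; a generic perspective model, not the study's registration software):

```python
def project(pt3d, principal_distance, principal_point):
    """Perspective projection of a camera-frame point (x, y, z) onto the
    image plane; z is depth along the optical axis."""
    x, y, z = pt3d
    u = principal_distance * x / z + principal_point[0]
    v = principal_distance * y / z + principal_point[1]
    return u, v

# A 2 cm horizontal shift of the principal point moves every projected
# point by exactly 2 cm, independent of the 3-D geometry.
u0, v0 = project((0.1, 0.05, 1.0), 1.0, (0.0, 0.0))
u1, v1 = project((0.1, 0.05, 1.0), 1.0, (0.02, 0.0))
```

Errors in the principal distance, by contrast, rescale the image about the principal point, which is why they mainly perturb the out-of-plane (depth) degrees of freedom.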
Ramin, Elham; Sin, Gürkan; Mikkelsen, Peter Steen; Plósz, Benedek Gy
2014-10-15
Current research focuses on predicting and mitigating the impacts of high hydraulic loadings on centralized wastewater treatment plants (WWTPs) under wet-weather conditions. The maximum permissible inflow to WWTPs depends not only on the settleability of activated sludge in secondary settling tanks (SSTs) but also on the hydraulic behaviour of SSTs. The present study investigates the impacts of ideal and non-ideal flow (dry and wet weather) and settling (good settling and bulking) boundary conditions on the sensitivity of WWTP model outputs to uncertainties intrinsic to the one-dimensional (1-D) SST model structures and parameters. We identify the critical sources of uncertainty in WWTP models through global sensitivity analysis (GSA) using the Benchmark simulation model No. 1 in combination with first- and second-order 1-D SST models. The results obtained illustrate that the contribution of settling parameters to the total variance of the key WWTP process outputs significantly depends on the influent flow and settling conditions. The magnitude of the impact is found to vary, depending on which type of 1-D SST model is used. Therefore, we identify and recommend potential parameter subsets for WWTP model calibration, and propose optimal choice of 1-D SST models under different flow and settling boundary conditions. Additionally, the hydraulic parameters in the second-order SST model are found significant under dynamic wet-weather flow conditions. These results highlight the importance of developing a more mechanistic based flow-dependent hydraulic sub-model in second-order 1-D SST models in the future.
Are Subject-Specific Musculoskeletal Models Robust to the Uncertainties in Parameter Identification?
Valente, Giordano; Pitto, Lorenzo; Testi, Debora; Seth, Ajay; Delp, Scott L.; Stagni, Rita; Viceconti, Marco; Taddei, Fulvia
2014-01-01
Subject-specific musculoskeletal modeling can be applied to study musculoskeletal disorders, allowing inclusion of personalized anatomy and properties. Independent of the tools used for model creation, there are unavoidable uncertainties associated with parameter identification, whose effect on model predictions is still not fully understood. The aim of the present study was to analyze the sensitivity of subject-specific model predictions (i.e., joint angles, joint moments, muscle and joint contact forces) during walking to the uncertainties in the identification of body landmark positions, maximum muscle tension and musculotendon geometry. To this aim, we created an MRI-based musculoskeletal model of the lower limbs, defined as a 7-segment, 10-degree-of-freedom articulated linkage, actuated by 84 musculotendon units. We then performed a Monte-Carlo probabilistic analysis perturbing model parameters according to their uncertainty, and solving a typical inverse dynamics and static optimization problem using 500 models that included the different sets of perturbed variable values. Model creation and gait simulations were performed by using freely available software that we developed to standardize the process of model creation, integrate with OpenSim and create probabilistic simulations of movement. The uncertainties in input variables had a moderate effect on model predictions, as muscle and joint contact forces showed maximum standard deviation of 0.3 times body-weight and maximum range of 2.1 times body-weight. In addition, the output variables significantly correlated with few input variables (up to 7 out of 312) across the gait cycle, including the geometry definition of larger muscles and the maximum muscle tension in limited gait portions. Although we found subject-specific models not markedly sensitive to parameter identification, researchers should be aware of the model precision in relation to the intended application. In fact, force predictions could be
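The probabilistic analysis follows the usual Monte-Carlo pattern: perturb each input parameter by its uncertainty and record the spread of the outputs. A compact generic sketch (the linear "force" function and its weights are placeholders, not the OpenSim musculoskeletal model):

```python
import random

def monte_carlo(model, nominal, sds, n=500, seed=42):
    """Perturb each input parameter with independent Gaussian noise and
    return the mean and standard deviation of the model output."""
    rng = random.Random(seed)
    outs = []
    for _ in range(n):
        p = [m + rng.gauss(0.0, s) for m, s in zip(nominal, sds)]
        outs.append(model(p))
    mean = sum(outs) / n
    var = sum((o - mean) ** 2 for o in outs) / (n - 1)
    return mean, var ** 0.5

# Toy stand-in for a joint contact force (in body-weights) driven by two
# parameters, e.g. maximum muscle tension and a landmark offset.
force = lambda p: 3.0 + 0.5 * p[0] + 0.1 * p[1]
mean, sd = monte_carlo(force, nominal=[1.0, 0.0], sds=[0.2, 0.5])
```

Correlating each output sample against each perturbed input (as done across the gait cycle in the study) then identifies which of the inputs actually drive the output spread.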
A novel parameter for predicting arterial fusion and ablation in finite element models
NASA Astrophysics Data System (ADS)
Fankell, Douglas; Kramer, Eric; Taylor, Kenneth; Ferguson, Virginia; Rentschler, Mark E.
2015-03-01
Tissue fusion devices apply heat and pressure to ligate or ablate blood vessels during surgery. Although this process is widely used, a predictive finite element (FE) model incorporating both structural mechanics and heat transfer has not been developed, limiting improvements to empirical evidence. This work presents the development of a novel damage parameter, which incorporates stress, water content and temperature, and demonstrates its application in a FE model. A FE model, using the Holzapfel-Gasser-Ogden strain energy function to represent the structural mechanics and equations developed by Cezo to model water content and heat transfer, was created to simulate the fusion or ablation of a porcine splenic artery. Using state variables, the stresses, temperature and water content are recorded and combined to create a single parameter at each integration point. The parameter is then compared to a critical value (determined through experiments). If the critical value is reached, the element loses all strength. If the value is not reached, no change occurs. Little experimental data exists for validation, but the resulting stresses, temperatures and water content fall within ranges predicted by prior work. Due to the lack of published data, additional experimental studies are being conducted to rigorously validate and accurately determine the critical value. Ultimately, a novel method for demonstrating tissue damage and fusion in a FE model is presented, providing the first step towards in-depth FE models simulating fusion and ablation of arteries.
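The state-variable combination can be sketched as a weighted scalar compared against a critical value at each integration point. The weights, the 60 °C onset, and the critical value below are illustrative placeholders, not the experimentally determined quantities:

```python
def damage(stress_mpa, temp_c, water_frac, crit=1.0,
           w_stress=0.002, w_temp=0.01, w_dry=1.5):
    """Hypothetical scalar damage parameter combining the three state
    variables tracked at each integration point.  Returns the parameter
    and whether the element should lose all strength (crit reached)."""
    d = (w_stress * stress_mpa
         + w_temp * max(temp_c - 60.0, 0.0)      # thermal damage above ~60 C
         + w_dry * max(0.7 - water_frac, 0.0))   # damage as tissue dries out
    return d, d >= crit

d, failed = damage(stress_mpa=100.0, temp_c=120.0, water_frac=0.3)
```

In the FE model the check runs once per integration point per load step; elements that cross the critical value are softened to near-zero stiffness, which is how fusion or ablation is represented.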
Physical property parameter set for modeling ICPP aqueous wastes with ASPEN electrolyte NRTL model
Schindler, R.E.
1996-09-01
The aqueous waste evaporators at the Idaho Chemical Processing Plant (ICPP) are being modeled using ASPEN software. The ASPEN software calculates chemical and vapor-liquid equilibria with activity coefficients calculated using the electrolyte Non-Random Two Liquid (NRTL) model for local excess Gibbs free energies of interactions between ions and molecules in solution. The use of the electrolyte NRTL model requires the determination of empirical parameters for the excess Gibbs free energies of the interactions between species in solution. This report covers the development, from literature data, of a set of parameters for use of the electrolyte NRTL model with the major solutes in the ICPP aqueous wastes.
Sensitivity of numerical dispersion modeling to explosive source parameters
Baskett, R.L.; Cederwall, R.T.
1991-02-13
The calculation of downwind concentrations from non-traditional sources, such as explosions, provides unique challenges to dispersion models. The US Department of Energy has assigned the Atmospheric Release Advisory Capability (ARAC) at the Lawrence Livermore National Laboratory (LLNL) the task of estimating the impact of accidental radiological releases to the atmosphere anywhere in the world. Our experience includes responses to over 25 incidents in the past 16 years, and about 150 exercises a year. Examples of responses to explosive accidents include the 1980 Titan 2 missile fuel explosion near Damascus, Arkansas and the hydrogen gas explosion in the 1986 Chernobyl nuclear power plant accident. Based on judgment and experience, we frequently estimate the source geometry and the amount of toxic material aerosolized as well as its particle size distribution. To expedite our real-time response, we developed some automated algorithms and default assumptions about several potential sources. It is useful to know how well these algorithms perform against real-world measurements and how sensitive our dispersion model is to the potential range of input values. In this paper we present the algorithms we use to simulate explosive events, compare these methods with limited field data measurements, and analyze their sensitivity to input parameters. 14 refs., 7 figs., 2 tabs.
Fundamental parameters of pulsating stars from atmospheric models
NASA Astrophysics Data System (ADS)
Barcza, S.
2006-12-01
A purely photometric method is reviewed to determine distance, mass, equilibrium temperature, and luminosity of pulsating stars by using model atmospheres and hydrodynamics. T Sex is given as an example: on the basis of Kurucz atmospheric models and UBVRI (in both Johnson and Kron-Cousins systems) data, the variation of angular diameter, effective temperature, and surface gravity is derived as a function of phase; a mass M = (0.76 ± 0.09) M⊙, distance d = 530 ± 67 pc, R_max = 2.99 R⊙, R_min = 2.87 R⊙, and magnitude-averaged visual absolute brightness <M_V> = 1.17 ± 0.26 mag are found. During a pulsation cycle, four standstills of the atmosphere are pointed out, indicating the occurrence of two shocks in the atmosphere. The derived equilibrium temperature T_eq = 7781 K and luminosity (28.3 ± 8.8) L⊙ locate T Sex on the blue edge of the instability strip in a theoretical Hertzsprung-Russell diagram. The differences between the physical parameters of this study and those of Liu & Janes (1990) are discussed.
Mechanical models for insect locomotion: stability and parameter studies
NASA Astrophysics Data System (ADS)
Schmitt, John; Holmes, Philip
2001-08-01
We extend the analysis of simple models for the dynamics of insect locomotion in the horizontal plane, developed in [Biol. Cybern. 83 (6) (2000) 501] and applied to cockroach running in [Biol. Cybern. 83 (6) (2000) 517]. The models consist of a rigid body with a pair of effective legs (each representing the insect’s support tripod) placed intermittently in ground contact. The forces generated may be prescribed as functions of time, or developed by compression of a passive leg spring. We find periodic gaits in both cases, and show that prescribed (sinusoidal) forces always produce unstable gaits, unless they are allowed to rotate with the body during stride, in which case a (small) range of physically unrealistic stable gaits does exist. Stability is much more robust in the passive spring case, in which angular momentum transfer at touchdown/liftoff can result in convergence to asymptotically straight motions with bounded yaw, fore-aft and lateral velocity oscillations. Using a non-dimensional formulation of the equations of motion, we also develop exact and approximate scaling relations that permit derivation of gait characteristics for a range of leg stiffnesses, lengths, touchdown angles, body masses and inertias, from a single gait family computed at ‘standard’ parameter values.
NASA Astrophysics Data System (ADS)
Rueter, Keiti; Novikov, Ivan
2016-09-01
Parameters of the nuclear density distribution of exotic nuclei with halo or skin structures can be determined from the experimentally measured interaction cross-section. In the presented work, to extract parameters for a halo and core, we compare experimental data on interaction cross-sections with reaction cross-sections calculated using expressions obtained in the Glauber Model and its optical approximation. These calculations are performed using a Markov Chain Monte Carlo algorithm. In addition, we discuss the accuracy of the Monte Carlo approach to calculating the interaction and reaction cross-sections. The dependence of the accuracy of the density parameters of various exotic nuclei on the ``quality'' of the random-number chains (here, ``quality'' is defined by the lag-1 autocorrelation time of a sequence of random numbers) is obtained for a Gaussian density distribution for the core and a Gaussian density distribution for the halo. KY NSF EPSCoR Research Scholars Program.
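The chain-quality measure mentioned at the end, lag-1 autocorrelation, is straightforward to compute; a minimal sketch:

```python
def lag1_autocorrelation(xs):
    """Lag-1 autocorrelation of a sequence: values near zero indicate a
    well-mixed random-number chain, values near +/-1 a poor one."""
    n = len(xs)
    mean = sum(xs) / n
    var = sum((x - mean) ** 2 for x in xs)
    cov = sum((xs[i] - mean) * (xs[i + 1] - mean) for i in range(n - 1))
    return cov / var
```

For example, a Markov chain taking tiny steps (nearly every proposal accepted) produces lag-1 autocorrelation near 1, which degrades the effective sample size behind the cross-section estimates.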
Extending the nonequilibrium square-gradient model with temperature-dependent influence parameters
NASA Astrophysics Data System (ADS)
Magnanelli, Elisa; Wilhelmsen, Øivind; Bedeaux, Dick; Kjelstrup, Signe
2014-09-01
Nonequilibrium interface phenomena play a key role in crystallization, hydrate formation, pipeline depressurization, and a multitude of other examples. Square gradient theory extended to the nonequilibrium domain is a powerful tool for understanding these processes. The theory gives an accurate prediction of surface tension at equilibrium only when temperature-dependent influence parameters are used. In this work we extend the nonequilibrium square gradient model to have temperature-dependent influence parameters. The extension leads to thermodynamic quantities which depend on temperature gradients. Remarkably, the Gibbs relation proposed in earlier work is still valid. Also for the extended framework, the "Gibbs surface" described by excess variables is found to be in local equilibrium. The temperature-dependent influence parameters give significantly different interface resistivities (˜9%-50%), due to changed density gradients and additional terms in the enthalpy. The presented framework facilitates a more accurate description of transport across interfaces with square gradient theory.
Estimation of parameters in a distributed precipitation-runoff model for Norway
NASA Astrophysics Data System (ADS)
Beldring, Stein; Engeland, Kolbjørn; Roald, Lars A.; Roar Sælthun, Nils; Voksø, Astrid
A distributed version of the HBV-model using 1 km2 grid cells and daily time step was used to simulate runoff from the entire land surface of Norway for the period 1961-1990. The model was sensitive to changes in small scale properties of the land surface and the climatic input data, through explicit representation of differences between model elements, and by implicit consideration of sub-grid variations in moisture status. A geographically transferable set of model parameters was determined by a multi-criteria calibration strategy, which simultaneously minimised the residuals between model simulated and observed runoff from 141 Norwegian catchments located in areas with different runoff regimes and landscape characteristics. Model discretisation units with identical landscape classification were assigned similar parameter values. Model performance was evaluated by simulating discharge from 43 independent catchments. Finally, a river routing procedure using a kinematic wave approximation to open channel flow was introduced in the model, and discharges from three additional catchments were calculated and compared with observations. The model was used to produce a map of average annual runoff for Norway for the period 1961-1990.
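The core of such a regional, multi-criteria calibration is an objective that sums residuals over all calibration catchments, so that one transferable parameter set is fitted jointly. A minimal sketch, in which `simulate` stands in for the distributed HBV run:

```python
def regional_objective(params, simulate, observed_by_catchment):
    """Sum of squared residuals between simulated and observed runoff
    over all calibration catchments; minimizing this jointly yields a
    geographically transferable parameter set."""
    total = 0.0
    for catchment, observed in observed_by_catchment.items():
        simulated = simulate(catchment, params)
        total += sum((s - o) ** 2 for s, o in zip(simulated, observed))
    return total

# Toy stand-in: every catchment's runoff equals the single parameter value
toy_simulate = lambda catchment, p: [p] * 3
observations = {"A": [1.0, 1.0, 1.0], "B": [2.0, 2.0, 2.0]}
cost_mid = regional_objective(1.5, toy_simulate, observations)
```

A compromise parameter value (here 1.5) scores better than one tuned to a single catchment, which is the essence of fitting 141 catchments simultaneously.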
NASA Astrophysics Data System (ADS)
Kettle, H.
2009-08-01
Biogeochemical models of the ocean carbon cycle are frequently validated by, or tuned to, satellite chlorophyll data. However, ocean carbon cycle models are required to accurately model the movement of carbon, not chlorophyll, and due to the high variability of the carbon-to-chlorophyll ratio in phytoplankton, chlorophyll is not a robust proxy for carbon. Using inherent optical property (IOP) inversion algorithms it is now possible to also derive the amount of light backscattered by the upper ocean (bb), which is related to the amount of particulate organic carbon (POC) present. Using empirical relationships between POC and bb, a 1-D marine biogeochemical model is used to simulate bb at 490 nm, thereby allowing the model to be compared with both remotely sensed chlorophyll and bb data. Here I investigate the possibility of using bb in conjunction with chlorophyll data to help constrain the parameters in a simple 1-D NPZD model. The parameters of the biogeochemical model are tuned with a genetic algorithm, so that the model is fitted either to chlorophyll data alone or to both chlorophyll and bb data at three sites in the Atlantic with very different characteristics. Several IOP algorithms are available for estimating bb, three of which are used here. The effect of the different bb datasets on the behaviour of the tuned model is examined to ascertain whether the uncertainty in bb is significant. The results show that the addition of bb data does not consistently alter the same model parameters at each site and in fact can lead to some parameters becoming less well constrained, implying there is still much work to be done on the mechanisms relating chlorophyll to POC and bb within the model. However, this study does indicate that including bb data has the potential to significantly affect the modelled mixed-layer detritus and that uncertainties in bb due to the different IOP algorithms are not particularly significant.
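The genetic-algorithm tuning step can be sketched generically. This elitist real-coded GA with blend crossover and Gaussian mutation is an illustration of the technique, not the algorithm actually used in the study:

```python
import random

def genetic_tune(cost, bounds, pop_size=30, generations=50, seed=1):
    """Minimal real-coded genetic algorithm: keep the better half of the
    population (elitism), breed children by averaging two elite parents
    plus Gaussian mutation, and return the best parameter vector found."""
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(generations):
        elite = sorted(pop, key=cost)[: pop_size // 2]
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)
            child = [min(max((x + y) / 2 + rng.gauss(0, 0.1 * (hi - lo)), lo), hi)
                     for x, y, (lo, hi) in zip(a, b, bounds)]
            children.append(child)
        pop = elite + children
    return min(pop, key=cost)

# Toy misfit surface standing in for the chlorophyll + bb data fit
best = genetic_tune(lambda p: (p[0] - 0.3) ** 2 + (p[1] - 0.7) ** 2,
                    bounds=[(0.0, 1.0), (0.0, 1.0)])
```

In the real application `cost` would run the 1-D NPZD model and return its misfit against the chlorophyll (and optionally bb) time series.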
Impact of the hard-coded parameters on the hydrologic fluxes of the land surface model Noah-MP
NASA Astrophysics Data System (ADS)
Cuntz, Matthias; Mai, Juliane; Samaniego, Luis; Clark, Martyn; Wulfmeyer, Volker; Attinger, Sabine; Thober, Stephan
2016-04-01
Land surface models incorporate a large number of processes, described by physical, chemical and empirical equations. The process descriptions contain a number of parameters that can be soil or plant type dependent and are typically read from tabulated input files. Land surface models may have, however, process descriptions that contain fixed, hard-coded numbers in the computer code, which are not identified as model parameters. Here we searched for hard-coded parameters in the computer code of the land surface model Noah with multiple process options (Noah-MP) to assess the importance of the fixed values on restricting the model's agility during parameter estimation. We found 139 hard-coded values in all Noah-MP process options, which are mostly spatially constant values. This is in addition to the 71 standard parameters of Noah-MP, which mostly get distributed spatially by given vegetation and soil input maps. We performed a Sobol' global sensitivity analysis of Noah-MP to variations of the standard and hard-coded parameters for a specific set of process options. 42 standard parameters and 75 hard-coded parameters were active with the chosen process options. The sensitivities of the hydrologic output fluxes latent heat and total runoff as well as their component fluxes were evaluated. These sensitivities were evaluated at twelve catchments of the Eastern United States with very different hydro-meteorological regimes. Noah-MP's hydrologic output fluxes are sensitive to two thirds of its standard parameters. The most sensitive parameter is, however, a hard-coded value in the formulation of soil surface resistance for evaporation, which proved to be oversensitive in other land surface models as well. Surface runoff is sensitive to almost all hard-coded parameters of the snow processes and the meteorological inputs. These parameter sensitivities diminish in total runoff. Assessing these parameters in model calibration would require detailed snow observations or the
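A Sobol' analysis of this kind rests on pick-and-freeze Monte-Carlo estimators. A compact sketch of the first-order index (Saltelli-style), with a toy two-parameter model standing in for Noah-MP:

```python
import random

def sobol_first_order(model, n_params, n_samples=2000, seed=0):
    """Estimate first-order Sobol' indices S_i = V_i / Var(y) with the
    pick-and-freeze estimator V_i ~ mean(y_B * (y_ABi - y_A)), where
    AB_i is sample matrix A with column i taken from B."""
    rng = random.Random(seed)
    A = [[rng.random() for _ in range(n_params)] for _ in range(n_samples)]
    B = [[rng.random() for _ in range(n_params)] for _ in range(n_samples)]
    yA = [model(row) for row in A]
    yB = [model(row) for row in B]
    mean = sum(yA) / n_samples
    var = sum((y - mean) ** 2 for y in yA) / n_samples
    indices = []
    for i in range(n_params):
        yABi = [model(a[:i] + [b[i]] + a[i + 1:]) for a, b in zip(A, B)]
        v_i = sum(yb * (yabi - ya)
                  for yb, yabi, ya in zip(yB, yABi, yA)) / n_samples
        indices.append(v_i / var)
    return indices

# Additive toy model y = x0 + 2*x1: exact indices are 0.2 and 0.8
s = sobol_first_order(lambda x: x[0] + 2.0 * x[1], n_params=2)
```

In the study's setting each `model` evaluation is a full Noah-MP run, and both standard and hard-coded parameters occupy columns of the sample matrices.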
A NEW VARIANCE ESTIMATOR FOR PARAMETERS OF SEMI-PARAMETRIC GENERALIZED ADDITIVE MODELS. (R829213)
The perspectives, information and conclusions conveyed in research project abstracts, progress reports, final reports, journal abstracts and journal publications convey the viewpoints of the principal investigator and may not represent the views and policies of ORD and EPA. Concl...
Cheng, C. L.; Gragg, M. J.; Perfect, E.; White, Mark D.; Lemiszki, P. J.; McKay, L. D.
2013-08-24
Numerical simulations are widely used in feasibility studies for geologic carbon sequestration. Accurate estimates of petrophysical parameters are needed as inputs for these simulations. However, relatively few experimental values are available for CO2-brine systems. Hence, a sensitivity analysis was performed using the STOMP numerical code for supercritical CO2 injected into a model confined deep saline aquifer. The intrinsic permeability, porosity, pore compressibility, and capillary pressure-saturation/relative permeability parameters (residual liquid saturation, residual gas saturation, and van Genuchten alpha and m values) were varied independently. Their influence on CO2 injection rates and costs were determined and the parameters were ranked based on normalized coefficients of variation. The simulations resulted in differences of up to tens of millions of dollars over the life of the project (i.e., the time taken to inject 10.8 million metric tons of CO2). The two most influential parameters were the intrinsic permeability and the van Genuchten m value. Two other parameters, the residual gas saturation and the residual liquid saturation, ranked above the porosity. These results highlight the need for accurate estimates of capillary pressure-saturation/relative permeability parameters for geologic carbon sequestration simulations in addition to measurements of porosity and intrinsic permeability.
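Ranking parameters by normalized coefficient of variation reduces to computing sd/mean of the output across each parameter's sweep. A sketch with made-up injection-cost numbers, not the study's data:

```python
import statistics

def rank_by_cv(outputs_by_param):
    """Rank input parameters by the coefficient of variation (sd/mean)
    of the model output observed while varying each parameter alone;
    most influential first."""
    cv = {name: statistics.stdev(vals) / statistics.mean(vals)
          for name, vals in outputs_by_param.items()}
    return sorted(cv, key=cv.get, reverse=True)

# Hypothetical cost outputs from independent parameter sweeps
ranking = rank_by_cv({
    "intrinsic_permeability": [10.0, 25.0, 60.0],
    "porosity": [30.0, 32.0, 34.0],
    "van_genuchten_m": [20.0, 35.0, 55.0],
})
```

A wide output spread relative to its mean (as for permeability here) marks a parameter whose measurement accuracy matters most for the simulation.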
Validating Mechanistic Sorption Model Parameters and Processes for Reactive Transport in Alluvium
Zavarin, M; Roberts, S K; Rose, T P; Phinney, D L
2002-05-02
The laboratory batch and flow-through experiments presented in this report provide a basis for validating the mechanistic surface complexation and ion exchange model we use in our hydrologic source term (HST) simulations. Batch sorption experiments were used to examine the effect of solution composition on sorption. Flow-through experiments provided for an analysis of the transport behavior of sorbing elements and tracers which includes dispersion and fluid accessibility effects. Analysis of downstream flow-through column fluids allowed for evaluation of weakly-sorbing element transport. Secondary Ion Mass Spectrometry (SIMS) analysis of the core after completion of the flow-through experiments permitted the evaluation of transport of strongly sorbing elements. A comparison between these data and model predictions provides additional constraints to our model and improves our confidence in near-field HST model parameters. In general, cesium, strontium, samarium, europium, neptunium, and uranium behavior could be accurately predicted using our mechanistic approach but only after some adjustment was made to the model parameters. The required adjustments included a reduction in strontium affinity for smectite, an increase in cesium affinity for smectite and illite, a reduction in iron oxide and calcite reactive surface area, and a change in clinoptilolite reaction constants to reflect a more recently published set of data. In general, these adjustments are justifiable because they fall within a range consistent with our understanding of the parameter uncertainties. These modeling results suggest that the uncertainty in the sorption model parameters must be accounted for to validate the mechanistic approach. The uncertainties in predicting the sorptive behavior of U-1a and UE-5n alluvium also suggest that these uncertainties must be propagated to nearfield HST and large-scale corrective action unit (CAU) models.
NASA Astrophysics Data System (ADS)
Garcia, Carlos; Hernandez, Teresa; Costa, Francisco
1992-11-01
The organic fraction of a municipal solid waste was added in different doses to an eroded loam soil with no vegetal cover. After three years, the changes in macronutrient content and the chemical-structural composition of its organic matter were studied. The addition of the organic fraction from a municipal solid waste had a positive effect on soil regeneration, the treated soils being covered with spontaneous vegetation from 1 yr onwards. An increase in electrical conductivity and a fall in pH were noted in the treated soils, as were increases in macronutrients, particularly N and available P, and in the different carbon fractions. Optical density measurements of the organic matter extracted with sodium pyrophosphate showed that the treated soils contained an organic matter with less condensed compounds and with a greater tendency to evolve than the control. A pyrolysis-gas chromatography study of the organic matter extracted with pyrophosphate showed large quantities of benzene in both the treated soils and the control; pyrrole was also relatively abundant, although this fragment decreased as the dose rose. Xylenes and pyridine were present in greater quantities in the control, and furfural in the treated soils. Three years after addition to the soil, the organic matter had a higher proportion of fragments derived from aromatic compounds and a smaller proportion derived from hydrocarbons. Similarity indices showed that, although the added and newly formed organic matter 3 yr after addition continued to differ from that of the original soil and to be more mineralizable, the transformations it had undergone made it more similar to the original organic matter of the soil than it was at the moment of being added.
Simultaneous model discrimination and parameter estimation in dynamic models of cellular systems
2013-01-01
Background Model development is a key task in systems biology, which typically starts from an initial model candidate and, involving an iterative cycle of hypothesis-driven model modifications, leads to new experimentation and subsequent model identification steps. The final product of this cycle is a satisfactory refined model of the biological phenomena under study. During such iterative model development, researchers frequently propose a set of model candidates from which the best alternative must be selected. Here we consider this problem of model selection and formulate it as a simultaneous model selection and parameter identification problem. More precisely, we consider a general mixed-integer nonlinear programming (MINLP) formulation for model selection and identification, with emphasis on dynamic models consisting of sets of either ODEs (ordinary differential equations) or DAEs (differential algebraic equations). Results We solved the MINLP formulation for model selection and identification using an algorithm based on Scatter Search (SS). We illustrate the capabilities and efficiency of the proposed strategy with a case study considering the KdpD/KdpE system regulating potassium homeostasis in Escherichia coli. The proposed approach resulted in a final model that presents a better fit to the in silico generated experimental data. Conclusions The presented MINLP-based optimization approach for nested-model selection and identification is a powerful methodology for model development in systems biology. This strategy can be used to perform model selection and parameter estimation in one single step, thus greatly reducing the number of experiments and computations of traditional modeling approaches. PMID:23938131
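The full MINLP-plus-Scatter-Search machinery is beyond a short sketch, but the underlying idea, scoring every candidate structure by its best-fit misfit plus a complexity penalty so that structure and parameters are chosen together, can be illustrated:

```python
def least_squares_fit(model, data, grid):
    """Brute-force 1-D parameter fit over a grid (illustrative only;
    the paper uses continuous NLP solvers inside the MINLP)."""
    best_p, best_cost = None, float("inf")
    for p in grid:
        cost = sum((model(x, p) - y) ** 2 for x, y in data)
        if cost < best_cost:
            best_p, best_cost = p, cost
    return best_p, best_cost

def select_model(candidates, data, grid, penalty=0.5):
    """Pick the candidate (name, model, n_params) with the lowest
    best-fit misfit plus complexity penalty."""
    scored = []
    for name, model, n_params in candidates:
        _, cost = least_squares_fit(model, data, grid)
        scored.append((cost + penalty * n_params, name))
    return min(scored)[1]

# Data generated by y = 2x; a linear candidate should beat a constant one
data = [(x, 2.0 * x) for x in range(5)]
grid = [i / 10 for i in range(41)]  # parameter values 0.0 .. 4.0
winner = select_model(
    [("linear", lambda x, p: p * x, 1),
     ("constant", lambda x, p: p, 1)],
    data, grid)
```

The MINLP formulation replaces this enumeration with binary structure variables optimized jointly with the continuous kinetic parameters, which is what makes selection and identification a single step.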
Waqas, Muhammad; Kim, Yoon-Ha; Khan, Abdul Latif; Shahzad, Raheem; Asaf, Sajjad; Hamayun, Muhammad; Kang, Sang-Mo; Khan, Muhammad Aaqil; Lee, In-Jung
2017-01-01
We studied the effects of hardwood-derived biochar (BC) and the phytohormone-producing endophyte Galactomyces geotrichum WLL1 in soybean (Glycine max (L.) Merr.) with respect to basic, macro- and micronutrient uptakes and assimilations, and their subsequent effects on the regulation of functional amino acids, isoflavones, fatty acid composition, total sugar contents, total phenolic contents, and 1,1-diphenyl-2-picrylhydrazyl (DPPH)-scavenging activity. The assimilation of basic nutrients such as nitrogen was up-regulated, leaving carbon, oxygen, and hydrogen unaffected in BC+G. geotrichum-treated soybean plants. In comparison, the uptakes of macro- and micronutrients fluctuated with the individual or co-application of BC and G. geotrichum in soybean plant organs and the rhizospheric substrate. Moreover, the same pattern was recorded for the regulation of functional amino acids, isoflavones, fatty acid composition, total sugar contents, total phenolic contents, and DPPH-scavenging activity. Collectively, these results showed that BC+G. geotrichum-treated soybean yielded better results than did the plants treated with individual applications. It was concluded that BC is an additional nutrient source, that G. geotrichum acts as a plant-biostimulating source, and that the effects of both are additive towards plant growth promotion. Strategies involving the incorporation of BC and endophytic symbiosis may help achieve eco-friendly agricultural production, thus reducing the excessive use of chemical agents. PMID:28124840
Waller, Niels G; Feuerstahler, Leah
2017-03-17
In this study, we explored item and person parameter recovery of the four-parameter model (4PM) in over 24,000 real, realistic, and idealized data sets. In the first analyses, we fit the 4PM and three alternative models to data from three Minnesota Multiphasic Personality Inventory-Adolescent form factor scales using Bayesian modal estimation (BME). Our results indicated that the 4PM fits these scales better than simpler Item Response Theory (IRT) models. Next, using the parameter estimates from these real data analyses, we estimated 4PM item parameters in 6,000 realistic data sets to establish minimum sample size requirements for accurate item and person parameter recovery. Using a factorial design that crossed discrete levels of item parameters, sample size, and test length, we also fit the 4PM to an additional 18,000 idealized data sets to extend our parameter recovery findings. Our combined results demonstrated that 4PM item parameters and parameter functions (e.g., item response functions) can be accurately estimated using BME in moderate to large samples (N ⩾ 5,000) and person parameters can be accurately estimated in smaller samples (N ⩾ 1,000). In the supplemental files, we report annotated [Formula: see text] code that shows how to estimate 4PM item and person parameters in [Formula: see text] (Chalmers, 2012).
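For reference, the 4PM extends the three-parameter logistic model with an upper asymptote d < 1 that models "slipping"; its item response function is standard:

```python
import math

def irf_4pm(theta, a, b, c, d):
    """Four-parameter logistic item response function:
    P(theta) = c + (d - c) / (1 + exp(-a * (theta - b)))
    a: discrimination, b: difficulty, c: lower asymptote (guessing),
    d: upper asymptote (slipping)."""
    return c + (d - c) / (1.0 + math.exp(-a * (theta - b)))
```

At theta = b the probability is exactly midway between the two asymptotes, (c + d) / 2, and the curve flattens toward c and d at the extremes of the latent trait.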
Lesyk, Ia V; Fedoruk, R S; Dolaĭchuk, O P
2013-01-01
We studied the content of glycoproteins and their individual carbohydrate components, the phagocytic activity of neutrophils, phagocytic index, phagocytic number, lysozyme and bactericidal activity of the serum, and the concentration of circulating immune complexes and middle-mass molecules in the blood of rabbits following administration of a chlorella suspension, sodium sulfate, chromium citrate and chromium chloride into the diet. The studies were conducted on rabbits weighing 3.7-3.9 kg, with the altered diet given from the first day of life to 118 days of age. Rabbits were divided into five groups: one control and four experimental groups. We found that in the blood of rabbits of the experimental groups that received sodium sulphate, chromium chloride and chromium citrate, the content of glycoproteins and their carbohydrate components was significantly higher during the 118 days of the study compared with the control group. Feeding rabbits with mineral supplements was reflected in differences from the control in parameters of nonspecific resistance in the blood over the study period, which were more pronounced in the first two months of life.
Response to selection in finite locus models with non-additive effects.
Esfandyari, Hadi; Henryon, Mark; Berg, Peer; Thomasen, Jorn Rind; Bijma, Piter; Sørensen, Anders Christian
2017-01-12
Under the finite-locus model in the absence of mutation, the additive genetic variance is expected to decrease when directional selection acts on a population, according to quantitative-genetic theory. However, some theoretical studies of selection suggest that the level of additive variance can be sustained or even increased when non-additive genetic effects are present. We tested the hypothesis that finite-locus models with both additive and non-additive genetic effects maintain more additive genetic variance (V_A) and realize larger medium- to long-term genetic gains than models with only additive effects when the trait under selection is subject to truncation selection. Four genetic models that included additive, dominance, and additive-by-additive epistatic effects were simulated. The simulated genome for individuals consisted of 25 chromosomes, each with a length of 1 Morgan. One hundred bi-allelic QTL, four on each chromosome, were considered. In each generation, 100 sires and 100 dams were mated, producing five progeny per mating. The population was selected for a single trait (h² = 0.1) for 100 discrete generations with selection on phenotype or BLUP-EBV. V_A decreased with directional truncation selection even in the presence of non-additive genetic effects. Non-additive effects influenced the long-term response to selection, and among the genetic models, additive gene action had the highest response to selection. In addition, in all genetic models, BLUP-EBV resulted in a greater fixation of favourable and unfavourable alleles and a higher response than phenotypic selection. In conclusion, for the schemes we simulated, the presence of non-additive genetic effects had little effect on changes in additive variance, and V_A decreased under directional selection.
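The kind of simulation described, a finite-locus model under phenotypic truncation selection, can be sketched in miniature (far smaller than the study's scheme, and purely additive, so it only illustrates the expected decline in genetic variance):

```python
import random

def truncation_selection_va(n_ind=200, n_loci=100, n_parents=40,
                            generations=20, env_sd=15.0, seed=2):
    """Toy finite-locus simulation: genotypic value is the sum of allele
    counts (0/1/2) over bi-allelic loci; the best individuals by
    phenotype are truncation-selected as parents each generation.
    Returns the genetic variance observed per generation."""
    rng = random.Random(seed)
    pop = [[rng.choice([0, 1, 2]) for _ in range(n_loci)] for _ in range(n_ind)]
    variances = []
    for _ in range(generations):
        genotypic = [sum(ind) for ind in pop]
        mean = sum(genotypic) / n_ind
        variances.append(sum((g - mean) ** 2 for g in genotypic) / n_ind)
        phenotypes = [g + rng.gauss(0.0, env_sd) for g in genotypic]
        ranked = sorted(range(n_ind), key=phenotypes.__getitem__, reverse=True)
        parents = [pop[i] for i in ranked[:n_parents]]
        offspring = []
        for _ in range(n_ind):
            sire, dam = rng.sample(parents, 2)
            # One allele transmitted per parent at each locus
            offspring.append([(rng.random() < s / 2) + (rng.random() < d / 2)
                              for s, d in zip(sire, dam)])
        pop = offspring
    return variances

va = truncation_selection_va()
```

Adding dominance and epistatic terms to the genotype-to-phenotype map would turn this into the non-additive models compared in the paper.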
NASA Astrophysics Data System (ADS)
Hostache, R.; Hissler, C.; Matgen, P.; Guignard, C.; Bates, P.
2014-09-01
Fine sediments represent an important vector of pollutant diffusion in rivers. When deposited in floodplains and riverbeds, they can be responsible for soil pollution. In this context, this paper proposes a modelling exercise aimed at predicting the transport and diffusion of fine sediments and dissolved pollutants. The model is based upon the Telemac hydro-informatic system (dynamical coupling Telemac-2D-Sysiphe). As empirical and semi-empirical parameters need to be calibrated for such a modelling exercise, a sensitivity analysis is proposed. An innovative point in this study is the assessment of the usefulness of dissolved trace metal contamination information for model calibration. Moreover, to support the modelling exercise, an extensive database was set up during two flood events. It includes water surface elevation records, discharge measurements and geochemistry data such as time series of dissolved/particulate contaminants and suspended-sediment concentrations. The most sensitive parameters were found to be the hydraulic friction coefficients and the sediment particle settling velocity in water. It was also found that model calibration did not benefit from dissolved trace metal contamination information. Using the two monitored hydrological events for calibration and validation, it was found that the model is able to satisfactorily predict suspended-sediment and dissolved-pollutant transport in the river channel. In addition, a qualitative comparison between simulated sediment deposition in the floodplain and a soil contamination map shows that the preferential deposition zones identified by the model are realistic.
Welter, David E.; Doherty, John E.; Hunt, Randall J.; Muffels, Christopher T.; Tonkin, Matthew J.; Schreuder, Willem A.
2012-01-01
An object-oriented parameter estimation code was developed to incorporate benefits of object-oriented programming techniques for solving large parameter estimation modeling problems. The code is written in C++ and is a formulation and expansion of the algorithms included in PEST, a widely used parameter estimation code written in Fortran. The new code is called PEST++ and is designed to lower the barriers of entry for users and developers while providing efficient algorithms that can accommodate large, highly parameterized problems. This effort has focused on (1) implementing the most popular features of PEST in a fashion that is easy for novice or experienced modelers to use and (2) creating a software design that is easy to extend; that is, this effort provides a documented object-oriented framework designed from the ground up to be modular and extensible. In addition, all PEST++ source code and its associated libraries, as well as the general run manager source code, have been integrated in the Microsoft Visual Studio® 2010 integrated development environment. The PEST++ code is designed to provide a foundation for an open-source development environment capable of producing robust and efficient parameter estimation tools for the environmental modeling community into the future.
NASA Technical Reports Server (NTRS)
Seufzer, William J.
2014-01-01
Additive manufacturing is coming into industrial use and has several desirable attributes. Control of the deposition remains a complex challenge, and so this literature review was initiated to capture current modeling efforts in the field of additive manufacturing. This paper summarizes about 10 years of modeling and simulation related to both welding and additive manufacturing. The goals were to learn who is doing what in modeling and simulation, to summarize various approaches taken to create models, and to identify research gaps. Later sections in the report summarize implications for closed-loop-control of the process, implications for local research efforts, and implications for local modeling efforts.
Swaminathan-Gopalan, Krishnan; Stephani, Kelly A.
2016-02-15
A systematic approach for calibrating the direct simulation Monte Carlo (DSMC) collision model parameters to achieve consistency in the transport processes is presented. The DSMC collision cross section model parameters are calibrated for high temperature atmospheric conditions by matching the collision integrals from DSMC against ab initio based collision integrals that are currently employed in the Langley Aerothermodynamic Upwind Relaxation Algorithm (LAURA) and Data Parallel Line Relaxation (DPLR) high temperature computational fluid dynamics solvers. The DSMC parameter values are computed for the widely used Variable Hard Sphere (VHS) and the Variable Soft Sphere (VSS) models using the collision-specific pairing approach. The recommended best-fit VHS/VSS parameter values are provided over a temperature range of 1000-20 000 K for a thirteen-species ionized air mixture. Use of the VSS model is necessary to achieve consistency in transport processes of ionized gases. The agreement of the VSS model transport properties with the transport properties as determined by the ab initio collision integral fits was found to be within 6% in the entire temperature range, regardless of the composition of the mixture. The recommended model parameter values can be readily applied to any gas mixture involving binary collisional interactions between the chemical species presented for the specified temperature range.
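Under Bird's VHS model the total cross section scales with relative translational energy through the viscosity-temperature exponent ω; a direct transcription of the standard form:

```python
import math

def vhs_total_cross_section(rel_speed, d_ref, t_ref, omega, reduced_mass):
    """Bird's VHS total cross section:
    sigma_T = pi * d_ref^2 * [2 k T_ref / (m_r g^2)]^(omega - 1/2)
              / Gamma(5/2 - omega),
    where g is the relative speed. omega = 1/2 recovers the hard-sphere
    limit sigma_T = pi * d_ref^2."""
    k_b = 1.380649e-23  # Boltzmann constant, J/K
    energy_ratio = 2.0 * k_b * t_ref / (reduced_mass * rel_speed ** 2)
    return (math.pi * d_ref ** 2
            * energy_ratio ** (omega - 0.5) / math.gamma(2.5 - omega))
```

The VSS model keeps this cross-section form but adds a scattering exponent that reshapes the post-collision deflection angle, which is why it is needed to reproduce diffusion-type transport (and hence ionized-gas transport) consistently, in line with the paper's recommendation.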
Observation model and parameter partials for the JPL geodetic GPS modeling software GPSOMC
NASA Technical Reports Server (NTRS)
Sovers, O. J.; Border, J. S.
1988-01-01
The physical models employed in GPSOMC and the modeling module of the GIPSY software system developed at JPL for analysis of geodetic Global Positioning Satellite (GPS) measurements are described. Details of the various contributions to range and phase observables are given, as well as the partial derivatives of the observed quantities with respect to model parameters. A glossary of parameters is provided to enable persons doing data analysis to identify quantities in the current report with their counterparts in the computer programs. There are no basic model revisions, with the exceptions of an improved ocean loading model and some new options for handling clock parametrization. Such misprints as were discovered were corrected. Further revisions include modeling improvements and assurances that the model description is in accord with the current software.
Parameter identification in a generalized time-harmonic Rayleigh damping model for elastography.
Van Houten, Elijah E W
2014-01-01
The identifiability of the two damping components of a Generalized Rayleigh Damping model is investigated through analysis of the continuum equilibrium equations as well as a simple spring-mass system. Generalized Rayleigh Damping provides a more diversified attenuation model than pure viscoelasticity, with two parameters to describe attenuation effects and account for the complex damping behavior found in biological tissue. For heterogeneous Rayleigh-damped materials, there is no equivalent viscoelastic system to describe the observed motions. For homogeneous systems, the inverse problem to determine the two Rayleigh Damping components is uniquely posed, in the sense that the inverse matrix for parameter identification is full rank, under certain conditions: when multi-frequency data are available or when both shear and dilatational wave propagation are taken into account. For the multi-frequency case, the frequency dependence of the elastic parameters adds a level of complexity to the reconstruction problem that must be addressed for reasonable solutions. For the dilatational wave case, the accuracy of compressional wave measurement in fluid-saturated soft tissues becomes an issue for qualitative parameter identification. These issues can be addressed with reasonable assumptions on the negligible damping levels of dilatational waves in soft tissue. In general, the parameters of a Generalized Rayleigh Damping model are identifiable for the elastography inverse problem, although under more complex conditions than for the simpler viscoelastic damping model. The value of this approach is the additional structural information provided by the Generalized Rayleigh Damping model, which can be linked to tissue composition as well as rheological interpretations.
NASA Astrophysics Data System (ADS)
Mikhailov, E.; Merkulov, V.; Vlasenko, S.; Rose, D.; Pöschl, U.
2011-11-01
In this study we derive and apply a mass-based hygroscopicity parameter interaction model for the efficient description of concentration-dependent water uptake by atmospheric aerosol particles. The model approach builds on the single hygroscopicity parameter model of Petters and Kreidenweis (2007). We introduce an observable mass-based hygroscopicity parameter κm, which can be deconvoluted into a dilute intrinsic hygroscopicity parameter (κm,∞) and additional self- and cross-interaction parameters describing non-ideal solution behavior and concentration dependencies of single- and multi-component systems. For sodium chloride, the κm-interaction model (KIM) captures the observed concentration and humidity dependence of the hygroscopicity parameter and is in good agreement with an accurate reference model based on the Pitzer ion-interaction approach (Aerosol Inorganic Model, AIM). For atmospheric aerosol samples collected from boreal rural air and from pristine tropical rainforest air (secondary organic aerosol) we present the first mass-based measurements of water uptake over a wide range of relative humidity (1-99%) obtained with a new filter-based differential hygroscopicity analyzer (FDHA) technique. By applying KIM to the measurement data we can distinguish three different regimes of hygroscopicity in the investigated aerosol samples: (I) a quasi-eutonic regime at low relative humidity (≲60% RH), where the solutes co-exist in an aqueous and a non-aqueous phase; (II) a gradually deliquescent regime at intermediate humidity (~60%-90% RH), where different solutes undergo gradual dissolution in the aqueous phase; and (III) a dilute regime at high humidity (≳90% RH), where the solutes are fully dissolved, approaching their dilute intrinsic hygroscopicity. The characteristic features of the three hygroscopicity regimes are similar for both samples, while the RH threshold values vary as expected for samples of different chemical composition. In each regime, the
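The single-parameter starting point of such models can be sketched with a mass-based growth relation. The functional form below (water mass / dry mass = κm · aw/(1 − aw)) is a simplified assumption for illustration only, not the full KIM with its self- and cross-interaction terms:

```python
def mass_growth_factor(kappa_m, aw):
    """Mass growth factor m/m_dry for a single mass-based hygroscopicity
    parameter, ASSUMING water mass / dry mass = kappa_m * aw / (1 - aw).
    Valid only for sub-saturated conditions (aw < 1)."""
    if not 0.0 <= aw < 1.0:
        raise ValueError("water activity must be in [0, 1)")
    return 1.0 + kappa_m * aw / (1.0 - aw)
```

Under this assumed form, water uptake grows steeply as the water activity approaches 1, which is the dilute regime (III) described above.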
Observation model and parameter partials for the JPL geodetic (GPS) modeling software 'GPSOMC'
NASA Technical Reports Server (NTRS)
Sovers, O. J.
1990-01-01
The physical models employed in GPSOMC, the modeling module of the GIPSY software system developed at JPL for analysis of geodetic Global Positioning Satellite (GPS) measurements are described. Details of the various contributions to range and phase observables are given, as well as the partial derivatives of the observed quantities with respect to model parameters. A glossary of parameters is provided to enable persons doing data analysis to identify quantities with their counterparts in the computer programs. The present version is the second revision of the original document which it supersedes. The modeling is expanded to provide the option of using Cartesian station coordinates; parameters for the time rates of change of universal time and polar motion are also introduced.
Farmer, Terry G.; Edgar, Thomas F.; Peppas, Nicholas A.
2011-01-01
Background The use of patient models describing the dynamics of glucose, insulin, and possibly other metabolic species associated with glucose regulation allows diabetes researchers to gain insights regarding novel therapies via simulation. However, such models are only useful when model parameters are effectively estimated with patient data. Methods The use of least squares to effectively estimate model parameters from simulation data was investigated by observing factors that influence the accuracy of estimates for the model parameters from a data set generated using a model with known parameters. An intravenous insulin pharmacokinetic model was used to generate the insulin response of a patient with type 1 diabetes mellitus to a series of step changes in the insulin infusion rate from an external insulin pump. The effects of using user-defined gradient and Hessian calculations on both parameter estimations and the 95% confidence limits of the estimated parameter sets were investigated. Results Estimations performed by either solver without user-supplied quantities were highly dependent on the initial guess of the parameter set, with relative confidence limits greater than ±100%. The use of user-defined quantities allowed the one-compartment model parameters to be effectively estimated. While the two-compartment model parameter estimation still depended on the initial parameter set specification, confidence limits were decreased, and all fits to simulation data were very good. Conclusions The use of user-defined gradients and Hessian matrices results in more accurate parameter estimations for insulin transport models. Improved estimation could result in more accurate simulations for use in glucose control system design. PMID:18260776
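The benefit of supplying analytic gradients can be sketched with a Gauss-Newton fit of a one-compartment decay model y = A·exp(−k·t), in which the Jacobian is written out by hand rather than approximated by finite differences. This is an illustrative stand-in under simplified assumptions, not the pharmacokinetic model or the commercial solvers used in the study:

```python
import math

def one_compartment(t, A, k):
    """Illustrative one-compartment response: y(t) = A * exp(-k * t)."""
    return A * math.exp(-k * t)

def fit_gauss_newton(ts, ys, A, k, iters=100):
    """Least-squares fit of (A, k) using a user-defined analytic Jacobian."""
    for _ in range(iters):
        resid, jac = [], []
        for t, y in zip(ts, ys):
            e = math.exp(-k * t)
            resid.append(y - A * e)
            jac.append((e, -A * t * e))  # (dy/dA, dy/dk), exact derivatives
        # solve the 2x2 normal equations (J^T J) delta = J^T r by hand
        jtj00 = sum(j[0] * j[0] for j in jac)
        jtj01 = sum(j[0] * j[1] for j in jac)
        jtj11 = sum(j[1] * j[1] for j in jac)
        g0 = sum(j[0] * r for j, r in zip(jac, resid))
        g1 = sum(j[1] * r for j, r in zip(jac, resid))
        det = jtj00 * jtj11 - jtj01 * jtj01
        A += (jtj11 * g0 - jtj01 * g1) / det
        k += (jtj00 * g1 - jtj01 * g0) / det
    return A, k
```

On noiseless synthetic data the exact Jacobian lets the iteration converge to the generating parameters, illustrating why user-defined gradients reduce dependence on the initial guess.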
Rock thermal conductivity as key parameter for geothermal numerical models
NASA Astrophysics Data System (ADS)
Di Sipio, Eloisa; Chiesa, Sergio; Destro, Elisa; Galgaro, Antonio; Giaretta, Aurelio; Gola, Gianluca; Manzella, Adele
2013-04-01
Geothermal energy applications are undergoing rapid development. However, there are still several challenges in the successful exploitation of geothermal energy resources. In particular, a special effort is required to characterize the thermal properties of the ground along with the implementation of efficient thermal energy transfer technologies. This paper focuses on understanding the quantitative contribution that geosciences can receive from the characterization of rock thermal conductivity. The thermal conductivity of materials is one of the main input parameters in geothermal modeling since it directly controls the steady-state temperature field. An evaluation of this thermal property is required in several fields, such as thermo-hydro-mechanical multiphysics analysis of frozen soils, designing ground-source heat pump plants, modeling the structure of deep geothermal reservoirs, and assessing the geothermal potential of the subsoil. The aim of this study is to provide original rock thermal conductivity values useful for the evaluation of both low- and high-enthalpy resources at regional or local scale. To overcome the existing lack of thermal conductivity data for sedimentary, igneous and metamorphic rocks, a series of laboratory measurements has been performed on several samples, collected in outcrop, representative of the main lithologies of the regions included in the VIGOR Project (southern Italy). Thermal property tests were carried out in both dry and wet conditions, using a C-Therm TCi device operating according to the Modified Transient Plane Source method. Measurements were made at standard laboratory conditions on samples both water-saturated and dehydrated in a fan-forced drying oven at 70 °C for 24 h, to preserve the mineral assemblage and prevent changes in effective porosity. Subsequently, the samples were stored in an air-conditioned room while bulk density, solid volume and porosity were measured. The measured thermal conductivity
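A common way to relate dry and water-saturated conductivity measurements is a porosity-weighted geometric-mean mixing model. The sketch below is a generic petrophysical relation given for illustration; it is not necessarily the model applied in the VIGOR study:

```python
def geometric_mean_conductivity(k_matrix, k_pore_fluid, porosity):
    """Effective thermal conductivity (W/(m*K)) of a porous rock by the
    geometric-mean mixing model: k_eff = k_matrix**(1-phi) * k_fluid**phi."""
    return k_matrix ** (1.0 - porosity) * k_pore_fluid ** porosity
```

With a rock matrix around 3 W/(m·K), filling 10% porosity with water (k ≈ 0.6) instead of air (k ≈ 0.026) noticeably raises the effective conductivity, which is why dry and saturated samples are measured separately.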
Geomagnetically induced currents in Uruguay: Sensitivity to modelling parameters
NASA Astrophysics Data System (ADS)
Caraballo, R.
2016-11-01
According to traditional wisdom, geomagnetically induced currents (GIC) should occur rarely at mid-to-low latitudes, but in recent decades a growing number of reports have addressed their effects on high-voltage (HV) power grids at these latitudes. The growing trend to interconnect national power grids to meet regional integration objectives may lead to an increase in the size of present energy transmission networks to form a sort of super-grid at continental scale. Such a broad and heterogeneous super-grid can be exposed to the effects of large GIC if appropriate mitigation actions are not taken. In the present study, we present GIC estimates for the Uruguayan HV power grid during severe magnetic storm conditions. GIC intensities are strongly dependent on the rate of variation of the geomagnetic field, the conductivity of the ground, and the power grid's resistances and configuration. Calculated GIC are analysed as functions of these parameters. The results show reasonable agreement with data measured in Brazil and Argentina, thus confirming the reliability of the model. The expansion of the grid leads to a strong increase in GIC intensities in almost all substations. The power grid's response to changes in ground conductivity and resistances shows similar, though smaller, effects. This leads us to consider GIC a non-negligible phenomenon in South America. Consequently, GIC must be taken into account in mid-to-low latitude power grids as well.
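The driving term of such GIC calculations can be illustrated with the standard line-integral relation: for a uniform horizontal geoelectric field, the geovoltage along a transmission line depends only on the line's end-to-end displacement. A minimal single-line sketch (full network effects, e.g. the Lehtinen-Pirjola matrix solution, are omitted):

```python
def line_gic(ex_v_per_km, ey_v_per_km, dx_km, dy_km, loop_resistance_ohm):
    """GIC (A) in an isolated grounded line under a uniform geoelectric
    field (V/km): I = (Ex*dx + Ey*dy) / R, where (dx, dy) is the
    end-to-end displacement of the line in the field's coordinate frame."""
    geovoltage = ex_v_per_km * dx_km + ey_v_per_km * dy_km
    return geovoltage / loop_resistance_ohm
```

For example, a 1 V/km field aligned with a 100 km line grounded through a 10 Ω loop drives 10 A, which shows directly why GIC scale with both grid geometry and resistances.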
NASA Astrophysics Data System (ADS)
Shen, Chengcheng; Shi, Honghua; Liu, Yongzhi; Li, Fen; Ding, Dewen
2016-07-01
Marine ecosystem dynamic models (MEDMs) are important tools for the simulation and prediction of marine ecosystems. This article summarizes the methods and strategies used for the improvement and assessment of MEDM skill, and it attempts to establish a technical framework to inspire further ideas concerning MEDM skill improvement. The skill of MEDMs can be improved by parameter optimization (PO), which is an important step in model calibration. An efficient approach to solve the problem of PO constrained by MEDMs is the global treatment of both sensitivity analysis and PO. Model validation is an essential step following PO, which validates the efficiency of model calibration by analyzing and estimating the goodness-of-fit of the optimized model. Additionally, by focusing on the degree of impact of various factors on model skill, model uncertainty analysis can supply model users with a quantitative assessment of model confidence. Research on MEDMs is ongoing; however, improvement in model skill still lacks global treatments and its assessment is not integrated. Thus, the predictive performance of MEDMs is not strong and model uncertainties lack quantitative descriptions, limiting their application. Therefore, a large number of case studies concerning model skill should be performed to promote the development of a scientific and normative technical framework for the improvement of MEDM skill.
Vaičiulytė-Funk, Lina; Šalomskienė, Joana; Alenčikienė, Gitana; Mieželienė, Aldona
2016-01-01
Summary Retardation of microbial spoilage of bread can be achieved by the use of spontaneous sourdough with antimicrobial activity. This study was undertaken to identify lactic acid bacteria naturally occurring in spontaneous sourdough and to use them to improve the quality and prolong the shelf life of rye, wheat, and mixed rye-wheat bread. Identification of isolates from the spontaneous sourdough by pyrosequencing showed that Lactobacillus reuteri was the dominant lactic acid bacterium. The isolates showed a wide range of antimicrobial activity and displayed synergistic activity against other lactobacilli, some lactococci and foodborne yeasts. The greatest benefit of the spontaneous sourdough was observed in the rye bread, which had the lowest crumb firmness of the final products, although the sensory results of the wheat and mixed rye-wheat bread did not differ statistically from the control bread. L. reuteri showed a high preserving capacity against fungi during storage. This may be due to bacteriocins and various fatty acids secreted into the growth medium, which were identified by agar well diffusion assay and gas chromatography. L. reuteri strains showing high antimicrobial activity have the potential to be used as a starter additive that could improve the safety and/or shelf life of bread. PMID:27956866
NASA Astrophysics Data System (ADS)
Reusser, D.; Zehe, E.
2009-04-01
The temporal dynamics of hydrological model performance gives insights into errors that cannot be obtained from global performance measures, which assign a single number to the fit of a simulated time series to an observed reference series. These errors can include errors in data, model parameters, or model structure. Dealing with a set of performance measures evaluated at high temporal resolution implies analyzing and interpreting a high-dimensional data set. We present a method for such a hydrological model performance assessment with high temporal resolution. Information about potentially relevant processes during periods of distinct model performance is obtained from parameter sensitivity analysis, also at high temporal resolution. We illustrate the combined approach of temporally resolved model performance and parameter sensitivity for a rainfall-runoff modeling case study. The headwater catchment of the Wilde Weisseritz in the eastern Ore Mountains is simulated with the conceptual model WaSiM-ETH. The proposed time-resolved performance assessment starts with the computation of a large set of classically used performance measures for a moving window. The key element of the approach is a data-reduction method based on self-organizing maps (SOMs) and cluster analysis to classify the high-dimensional performance matrix. Synthetic peak errors are used to interpret the resulting error classes. The temporally resolved sensitivity analysis is based on the FAST algorithm. The final outcome of the proposed method is a time series of the occurrence of dominant error types as well as a time series of the relative parameter sensitivity. For the two case studies analyzed here, six error types were identified. They show clear temporal patterns, which can lead to the identification of model structural errors. The parameter sensitivity helps to identify the relevant model parts.
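The first step of the proposed assessment, evaluating classical performance measures over a moving window, can be sketched with the Nash-Sutcliffe efficiency (one of many measures such an analysis would include):

```python
def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 - SSE / variance of the observations.
    NSE = 1 is a perfect fit; NSE < 0 is worse than the observed mean."""
    mean_obs = sum(obs) / len(obs)
    sse = sum((o - s) ** 2 for o, s in zip(obs, sim))
    var = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - sse / var

def moving_window_nse(obs, sim, window):
    """Time-resolved performance: NSE evaluated on each sliding window."""
    return [nse(obs[i:i + window], sim[i:i + window])
            for i in range(len(obs) - window + 1)]
```

The resulting series of window-wise scores is exactly the kind of high-dimensional performance matrix (one row per measure, one column per window) that the SOM/cluster step then classifies.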
Verschoor, Anja J; Vink, Jos P M; Vijver, Martina G
2012-10-01
Biotic ligand models for the calculation of water-type-specific no-effect concentrations are recognized as a major improvement in the risk assessment of metals in surface waters. Model complexity and data requirements, however, hamper regulatory implementation. To facilitate regulatory use, biotic ligand models (BLMs) for the calculation of Ni, Cu, and Zn HC5 values were simplified to linear equations with an acceptable level of accuracy, requiring a maximum of 3 measured water chemistry parameters. In single-parameter models, dissolved organic carbon (DOC) is the only significant parameter, with an accuracy of 72%-75% in predicting the HC5s computed by the full BLMs. In 2-parameter models, Mg, Ca, or pH is selected by stepwise multiple regression for the Ni, Cu, and Zn HC5, respectively, increasing the accuracy to 87%-94%. The accuracy is further increased, to 88%-97%, by the addition of a third parameter. Three-parameter models have DOC and pH in common; the third parameter is Mg, Ca, or Na for the HC5 of Ni, Cu, and Zn, respectively. Mechanisms of chemical speciation and competitive binding to the biotic ligand explain the selection of these parameters. User-defined requirements, such as the desired level of reliability and the availability of measured data, determine the selection of functions to predict HC5.
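Reducing a full BLM to a transparent linear equation amounts to an ordinary least-squares fit of (log-transformed) HC5 against the chosen water chemistry parameters. The sketch below fits the single-predictor DOC case; the data and coefficients are synthetic, for illustration only, whereas the paper derives its equations from full BLM output:

```python
import math

def fit_line(xs, ys):
    """Ordinary least squares y = a + b*x (closed form, one predictor)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b  # intercept, slope

# hypothetical training set: log10(DOC in mg/L) vs log10(HC5), exactly linear
log_doc = [math.log10(v) for v in (1.0, 2.0, 5.0, 10.0, 20.0)]
log_hc5 = [0.5 + 0.9 * x for x in log_doc]  # synthetic coefficients
a, b = fit_line(log_doc, log_hc5)
```

Extending this to the 2- and 3-parameter models means adding a pH or Mg/Ca/Na column and solving the corresponding multiple-regression normal equations.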
NASA Astrophysics Data System (ADS)
Saikia, Partha; Saikia, Bipul Kumar; Bhuyan, Heman
2016-04-01
We report the effect of hydrogen addition on the plasma parameters of an argon-oxygen magnetron glow discharge plasma used in the synthesis of H-doped TiO2 films. The parameters of the hydrogen-added Ar/O2 plasma influence the properties and the structural phases of the deposited TiO2 film. Therefore, the variation of plasma parameters such as electron temperature (Te), electron density (ne), ion density (ni), degree of ionization of Ar and degree of dissociation of H2 as a function of the hydrogen content in the discharge is studied. A Langmuir probe and optical emission spectroscopy are used to characterize the plasma. On the basis of the different reactions in the gas phase of the magnetron discharge, the variation of the plasma parameters and sputtering rate is explained. It is observed that the electron and heavy ion densities decline with the gradual addition of hydrogen to the discharge. Hydrogen addition significantly changes the degree of ionization of Ar, which influences the structural phases of the TiO2 film.
A Note on the Item Information Function of the Four-Parameter Logistic Model
ERIC Educational Resources Information Center
Magis, David
2013-01-01
This article focuses on four-parameter logistic (4PL) model as an extension of the usual three-parameter logistic (3PL) model with an upper asymptote possibly different from 1. For a given item with fixed item parameters, Lord derived the value of the latent ability level that maximizes the item information function under the 3PL model. The…
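For reference, the 4PL response function and the item information function can be written as below. The information expression is the form commonly stated for the 4PL (it reduces to the familiar a²P(1−P) when c = 0 and d = 1); the parameter values in the test are arbitrary:

```python
import math

def p_4pl(theta, a, b, c, d):
    """4PL probability of a correct response: lower asymptote c,
    upper asymptote d (possibly < 1), discrimination a, difficulty b."""
    return c + (d - c) / (1.0 + math.exp(-a * (theta - b)))

def info_4pl(theta, a, b, c, d):
    """Item information for the 4PL:
    I = a^2 (P-c)^2 (d-P)^2 / ((d-c)^2 P (1-P))."""
    p = p_4pl(theta, a, b, c, d)
    return a ** 2 * (p - c) ** 2 * (d - p) ** 2 / ((d - c) ** 2 * p * (1.0 - p))
```

Unlike the 3PL case, with d < 1 the ability value maximizing `info_4pl` generally shifts away from the simple closed form Lord derived, which is the point the article develops.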
Bredbeck, T; Rodgers, A; Walter, W
1999-07-23
The velocity structures and source parameters estimated by waveform modeling provide valuable information for CTBT monitoring. The inferred crustal and uppermost mantle structures advance understanding of tectonics and guide regionalization for event location and identification efforts. Estimation of source parameters such as seismic moment, depth and mechanism (whether earthquake, explosion or collapse) is crucial to event identification. In this paper we briefly outline some of the waveform modeling research for CTBT monitoring performed in the last year. In the future we will estimate structure for new regions by modeling waveforms of large well-observed events along additional paths. Of particular interest will be the estimation of velocity structure in aseismic regions such as most of Africa and the former Soviet Union. Our previous work on aseismic regions in the Middle East, north Africa and south Asia gives us confidence to proceed with our current methods. Using the inferred velocity models we plan to estimate source parameters for smaller events. It is especially important to obtain seismic moments of earthquakes for use in applying the Magnitude-Distance Amplitude Correction (MDAC; Taylor et al., 1999) to regional body-wave amplitudes for discrimination and for calibrating coda-based magnitude scales.
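Seismic moment estimates of the kind described feed directly into magnitude calibration through the standard moment-magnitude relation Mw = (2/3)(log10 M0 − 9.1), with M0 in N·m:

```python
import math

def moment_magnitude(m0_newton_metres):
    """Moment magnitude Mw from seismic moment M0 (N*m),
    using the standard relation Mw = (2/3) * (log10(M0) - 9.1)."""
    return (2.0 / 3.0) * (math.log10(m0_newton_metres) - 9.1)
```

For example, a moment of about 1.26×10^18 N·m corresponds to Mw 6.0; small regional events sit several orders of magnitude lower, which is why accurate moments are needed for MDAC-style amplitude corrections.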
Multi-Variable Model-Based Parameter Estimation Model for Antenna Radiation Pattern Prediction
NASA Technical Reports Server (NTRS)
Deshpande, Manohar D.; Cravey, Robin L.
2002-01-01
A new procedure is presented to develop a multi-variable model-based parameter estimation (MBPE) model to predict the far-field intensity of an antenna. By performing the MBPE model development procedure on a single variable at a time, the present method requires the solution of smaller matrices. The utility of the present method is demonstrated by determining the far-field intensity due to a dipole antenna over a frequency range of 100-1000 MHz and an elevation angle range of 0-90 degrees.
Observation model and parameter partials for the JPL VLBI parameter estimation software MODEST/1991
NASA Technical Reports Server (NTRS)
Sovers, O. J.
1991-01-01
A revision is presented of MASTERFIT-1987, which it supersedes. Changes during 1988 to 1991 included introduction of the octupole component of solid Earth tides, the NUVEL tectonic motion model, partial derivatives for the precession constant and source position rates, the option to correct for source structure, a refined model for antenna offsets, modeling the unique antenna at Richmond, FL, improved nutation series due to Zhu, Groten, and Reigber, and reintroduction of the old (Woolard) nutation series for simulation purposes. Text describing the relativistic transformations and gravitational contributions to the delay model was also revised in order to reflect the computer code more faithfully.
Mohammadzadeh-Aghdash, Hossein; Ezzati Nazhad Dolatabadi, Jafar; Dehghan, Parvin; Panahi-Azar, Vahid; Barzegar, Abolfazl
2017-08-01
Sodium acetate (SA) has been used as a highly effective protectant in the food industry, and the possible effect of this additive on binding to albumin should be taken into consideration. Therefore, for the first time, the mechanism of SA interaction with bovine serum albumin (BSA) has been investigated by multi-spectroscopic and molecular modeling methods under physiological conditions. Stern-Volmer fluorescence quenching analysis showed an increase in the fluorescence intensity of BSA upon increasing the amount of SA. The high affinity of SA for BSA was demonstrated by the binding constant value (1.09×10³ at 310 K). The thermodynamic parameters indicated that hydrophobic binding plays a main role in the binding of SA to albumin. Furthermore, the results of UV-vis spectra confirmed the interaction of this additive with BSA. In addition, a molecular modeling study demonstrated that the A binding sites of BSA play the main role in the interaction with acetate.
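The reported binding constant translates directly into a standard binding free energy through ΔG° = −RT ln K, one of the thermodynamic relations underlying such analyses:

```python
import math

R_GAS = 8.314  # gas constant, J/(mol*K)

def binding_free_energy(K, T):
    """Standard binding free energy (J/mol): dG = -R * T * ln(K)."""
    return -R_GAS * T * math.log(K)
```

For K = 1.09×10³ at 310 K this gives roughly −18 kJ/mol, i.e. a spontaneous but moderate-affinity interaction, consistent with the hydrophobic binding picture above.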
Baqué, Michèle; Amendt, Jens
2013-01-01
Developmental data of juvenile blow flies (Diptera: Calliphoridae) are typically used to calculate the age of immature stages found on or around a corpse and thus to estimate a minimum post-mortem interval (PMI(min)). However, many of those data sets do not take into account that immature blow flies grow in a non-linear fashion. Linear models do not provide sufficiently reliable age estimates and may even lead to an erroneous determination of the PMI(min). In light of the Daubert standard and the need for improvements in forensic science, new statistical tools such as smoothing methods and mixed models allow the modelling of non-linear relationships and expand the field of statistical analyses. The present study introduces the background and application of these statistical techniques by analysing a model which describes the development of the forensically important blow fly Calliphora vicina at different temperatures. The comparison of three statistical methods (linear regression, generalised additive modelling and generalised additive mixed modelling) clearly demonstrates that only the latter provided regression parameters that reflect the data adequately. We focus explicitly on both the exploration of the data--to assure their quality and to show the importance of checking it carefully prior to conducting the statistical tests--and the validation of the resulting models. Hence, we present a common method for evaluating and testing forensic entomological data sets by using, for the first time, generalised additive mixed models.
Spatiotemporal and random parameter panel data models of traffic crash fatalities in Vietnam.
Truong, Long T; Kieu, Le-Minh; Vu, Tuan A
2016-09-01
This paper investigates factors associated with traffic crash fatalities in 63 provinces of Vietnam during the period from 2012 to 2014. Random effect negative binomial (RENB) and random parameter negative binomial (RPNB) panel data models are adopted to consider spatial heterogeneity across provinces. In addition, a spatiotemporal model with conditional autoregressive priors (ST-CAR) is utilised to account for spatiotemporal autocorrelation in the data. The statistical comparison indicates the ST-CAR model outperforms the RENB and RPNB models. Estimation results provide several significant findings. For example, traffic crash fatalities tend to be higher in provinces with greater numbers of level crossings. Passenger distance travelled and road lengths are also positively associated with fatalities. However, hospital densities are negatively associated with fatalities. The safety impact of the national highway 1A, the main transport corridor of the country, is also highlighted.
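The negative binomial specification used in such crash-frequency models lets the variance exceed the mean. In the common NB2 parameterization (Var = μ + αμ²) the log-probability mass function can be written directly; this is a generic NB2 sketch, not the authors' panel or spatiotemporal estimators:

```python
import math

def nb2_logpmf(y, mu, alpha):
    """Log-probability of count y under NB2 with mean mu and dispersion
    alpha, so that the variance is mu + alpha * mu**2."""
    r = 1.0 / alpha  # size (shape) parameter
    return (math.lgamma(y + r) - math.lgamma(r) - math.lgamma(y + 1)
            + r * math.log(r / (r + mu)) + y * math.log(mu / (r + mu)))
```

Panel extensions such as RENB/RPNB then place random effects or random parameters on μ across provinces; the CAR prior of the ST-CAR model additionally ties neighbouring provinces together.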
Frey Law, Laura A; Shields, Richard K
2005-01-01
Background Mathematical muscle models may be useful for the determination of appropriate musculoskeletal stresses that will safely maintain the integrity of muscle and bone following spinal cord injury. Several models have been proposed to represent paralyzed muscle, but there have not been any systematic comparisons of modelling approaches to better understand the relationships between model parameters and muscle contractile properties. This sensitivity analysis of simulated muscle forces using three currently available mathematical models provides insight into the differences in modelling strategies as well as any direct parameter associations with simulated muscle force properties. Methods Three mathematical muscle models were compared: a traditional linear model with 3 parameters and two contemporary nonlinear models each with 6 parameters. Simulated muscle forces were calculated for two stimulation patterns (constant frequency and initial doublet trains) at three frequencies (5, 10, and 20 Hz). A sensitivity analysis of each model was performed by altering a single parameter through a range of 8 values, while the remaining parameters were kept at baseline values. Specific simulated force characteristics were determined for each stimulation pattern and each parameter increment. Significant parameter influences for each simulated force property were determined using ANOVA and Tukey's follow-up tests (α ≤ 0.05), and compared to previously reported parameter definitions. Results Each of the 3 linear model's parameters most clearly influence either simulated force magnitude or speed properties, consistent with previous parameter definitions. The nonlinear models' parameters displayed greater redundancy between force magnitude and speed properties. Further, previous parameter definitions for one of the nonlinear models were consistently supported, while the other was only partially supported by this analysis. Conclusion These three mathematical models use
Le, Vu H.; Buscaglia, Robert; Chaires, Jonathan B.; Lewis, Edwin A.
2013-01-01
Isothermal Titration Calorimetry, ITC, is a powerful technique that can be used to estimate a complete set of thermodynamic parameters (e.g. Keq (or ΔG), ΔH, ΔS, and n) for a ligand binding interaction described by a thermodynamic model. Thermodynamic models are constructed by combination of equilibrium constant, mass balance, and charge balance equations for the system under study. Commercial ITC instruments are supplied with software that includes a number of simple interaction models, for example one binding site, two binding sites, sequential sites, and n-independent binding sites. More complex models for example, three or more binding sites, one site with multiple binding mechanisms, linked equilibria, or equilibria involving macromolecular conformational selection through ligand binding need to be developed on a case by case basis by the ITC user. In this paper we provide an algorithm (and a link to our MATLAB program) for the non-linear regression analysis of a multiple binding site model with up to four overlapping binding equilibria. Error analysis demonstrates that fitting ITC data for multiple parameters (e.g. up to nine parameters in the three binding site model) yields thermodynamic parameters with acceptable accuracy. PMID:23262283
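The building block of all such models is the mass-balance solution for a single site: for 1:1 binding, the bound concentration solves a quadratic exactly. A minimal sketch (the paper's MATLAB program instead solves up to four coupled equilibria numerically inside the regression loop):

```python
import math

def bound_complex(K, m_total, l_total):
    """Equilibrium [ML] for M + L <-> ML with association constant K,
    from K*(Mt - ML)*(Lt - ML) = ML, i.e. the physical root of
    ML^2 - (Mt + Lt + 1/K)*ML + Mt*Lt = 0."""
    b = m_total + l_total + 1.0 / K
    return (b - math.sqrt(b * b - 4.0 * m_total * l_total)) / 2.0
```

In an ITC fit, the heat of each injection is proportional to the change in [ML] between titration points, and the regression adjusts K, ΔH, and n to match the measured heats.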
Chen, C.J.; Bozzelli, J.W.
2000-06-01
Thermochemical kinetic analysis for the reactions of HO₂ radical addition to the primary, secondary, and tertiary carbon-carbon double bonds of ethylene, propene, and isobutene is performed using canonical transition state theory (TST). Thermochemical properties of reactants, alkyl hydroperoxides (ROOH), hydroperoxy alkyl radicals (R-OOH), and transition states (TSs) are determined by ab initio and density functional calculations. Enthalpies of formation (ΔH°f,298) of product radicals (R-OOH) are determined using isodesmic reactions with group balance at MP4(full)/6-31G(d,p)//MP2(full)/6-31G(d), MP2(full)/6-31G(d), complete basis set model chemistry (CBS-q with MP2(full)/6-31G(d) and B3LYP/6-31G(d) optimized geometries), and density functional (B3LYP/6-31G(d) and B3LYP/6-311+G(3df,2p)//B3LYP/6-31G(d)) calculations. ΔH°f,298 of TSs are obtained from the ΔH°f,298 of reactants plus the energy differences between reactants and TSs. Contributions to entropies (S°298) and heat capacities (Cp(T), 300 ≤ T/K ≤ 1500) from vibrational, translational, and external rotational modes are calculated using the rigid-rotor harmonic-oscillator approximation based on geometric parameters and vibrational frequencies obtained at the MP2(full)/6-31G(d) and B3LYP/6-31G(d) levels of theory. Selected potential barriers of internal rotations for hydroperoxy alkyl radicals and TSs are calculated at the MP2(full)/6-31G(d) and CBS-Q//MP2(full)/6-31G(d) levels. Contributions of hindered rotors to S°298 and Cp(T) are calculated by the method of Pitzer and Gwinn and by summation over the energy levels obtained by direct diagonalization of the Hamiltonian matrix of hindered internal rotations when the potential barriers of internal rotations are available. Calculated rate constants obtained at the CBS-q//MP2(full)/6-31G(d) and CBS-q//B3LYP/6-31G(d) levels of theory show similar trends with experimental data: HO₂ radical
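The canonical TST rate constants referred to above follow the Eyring form k(T) = (kB·T/h)·exp(−ΔG‡/RT); a direct evaluation:

```python
import math

K_B = 1.380649e-23        # Boltzmann constant, J/K
H_PLANCK = 6.62607015e-34  # Planck constant, J*s
R_GAS = 8.314462618        # gas constant, J/(mol*K)

def eyring_rate(T, dG_activation):
    """Canonical TST (Eyring) rate constant for a unit transmission
    coefficient; dG_activation is the free energy of activation in J/mol."""
    return (K_B * T / H_PLANCK) * math.exp(-dG_activation / (R_GAS * T))
```

At 298 K the prefactor kB·T/h is about 6.2×10^12 s⁻¹, and each 50 kJ/mol of barrier suppresses the rate by many orders of magnitude, which is why the barrier heights computed above dominate the predicted kinetics.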
Sullivan, Kristynn J; Shadish, William R; Steiner, Peter M
2015-03-01
Single-case designs (SCDs) are short time series that assess intervention effects by measuring units repeatedly over time in both the presence and absence of treatment. This article introduces a statistical technique for analyzing SCD data that has not been much used in psychological and educational research: generalized additive models (GAMs). In parametric regression, the researcher must choose a functional form to impose on the data, for example, that trend over time is linear. GAMs reverse this process by letting the data inform the choice of functional form. In this article we review the problem that trend poses in SCDs, discuss how current SCD analytic methods approach trend, describe GAMs as a possible solution, suggest a GAM model testing procedure for examining the presence of trend in SCDs, present a small simulation to show the statistical properties of GAMs, and illustrate the procedure on 3 examples of different lengths. Results suggest that GAMs may be very useful both as a form of sensitivity analysis for checking the plausibility of assumptions about trend and as a primary data analysis strategy for testing treatment effects. We conclude with a discussion of some problems with GAMs and some future directions for research on the application of GAMs to SCDs.
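The core idea of a GAM, letting a smoother rather than a preset polynomial determine the trend shape, can be illustrated with a simple Nadaraya-Watson kernel smoother. Real GAM software uses penalized splines with automatic smoothness selection; this is only the underlying intuition:

```python
import math

def kernel_smooth(xs, ys, x0, bandwidth):
    """Gaussian-kernel estimate of E[y | x = x0] from scatter data:
    a weighted average with weights that decay away from x0."""
    weights = [math.exp(-0.5 * ((x - x0) / bandwidth) ** 2) for x in xs]
    return sum(w * y for w, y in zip(weights, ys)) / sum(weights)
```

Evaluating the smoother over the session index of an SCD yields a data-driven trend curve; comparing it against a straight-line fit is the spirit of the trend-testing procedure the article proposes.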
Dynamic hydrologic modeling using the zero-parameter Budyko model with instantaneous dryness index
NASA Astrophysics Data System (ADS)
Biswal, Basudev
2016-09-01
Long-term partitioning of hydrologic quantities is achieved by using the zero-parameter Budyko model, which defines a dryness index. However, this approach is not suitable for dynamic partitioning, particularly at diminishing timescales, and therefore a universally applicable zero-parameter model remains elusive. Here an instantaneous dryness index is proposed which enables dynamic hydrologic modeling using the Budyko model. By introducing a "decay function" that characterizes the effects of antecedent rainfall and solar energy on the dryness state of a basin at a given time, I propose the concept of an instantaneous dryness index and use the Budyko function to perform continuous hydrologic partitioning. Using the same decay function, I then obtain discharge time series from the effective rainfall time series. The model is evaluated by considering data from 63 U.S. Geological Survey basins. Results indicate the possibility of using the proposed framework as an alternative platform for prediction in ungauged basins.
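The partitioning idea can be sketched as follows. The zero-parameter Budyko curve is the standard one; the exponential decay weighting used here for the instantaneous dryness index is an assumption for illustration (the paper's actual decay function is not reproduced here).

```python
import numpy as np

def budyko(phi):
    """Zero-parameter Budyko curve: long-term evaporative fraction E/P as a
    function of the dryness index phi = PET/P."""
    return np.sqrt(phi * np.tanh(1.0 / phi) * (1.0 - np.exp(-phi)))

def instantaneous_dryness(pet, rain, k=0.9):
    """Assumed exponentially decaying memory of antecedent forcing: the most
    recent days dominate the basin's current dryness state."""
    w = k ** np.arange(len(pet))[::-1]      # older days weigh less
    return np.sum(w * pet) / np.sum(w * rain)

rng = np.random.default_rng(0)
pet = rng.uniform(2.0, 6.0, 60)    # daily potential evapotranspiration (mm)
rain = rng.uniform(1.0, 8.0, 60)   # daily rainfall (mm)
phi_t = instantaneous_dryness(pet, rain)
ef = budyko(phi_t)                 # instantaneous evaporative fraction
print(phi_t, ef)
```

Evaluating the Budyko function at a time-varying dryness index, rather than a long-term mean, is what turns the static curve into a continuous partitioning scheme.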
Statistical inference for the additive hazards model under outcome-dependent sampling.
Yu, Jichang; Liu, Yanyan; Sandler, Dale P; Zhou, Haibo
2015-09-01
Cost-effective study design and proper inference procedures for data from such designs are always of particular interest to study investigators. In this article, we propose a biased sampling scheme, an outcome-dependent sampling (ODS) design for survival data with right censoring under the additive hazards model. We develop a weighted pseudo-score estimator for the regression parameters for the proposed design and derive the asymptotic properties of the proposed estimator. We also provide some suggestions for using the proposed method by evaluating the relative efficiency of the proposed method against simple random sampling design and derive the optimal allocation of the subsamples for the proposed design. Simulation studies show that the proposed ODS design is more powerful than other existing designs and the proposed estimator is more efficient than other estimators. We apply our method to analyze a cancer study conducted at NIEHS, the Cancer Incidence and Mortality of Uranium Miners Study, to study the risk of radon exposure to cancer.
Regression analysis of mixed recurrent-event and panel-count data with additive rate models.
Zhu, Liang; Zhao, Hui; Sun, Jianguo; Leisenring, Wendy; Robison, Leslie L
2015-03-01
Event-history studies of recurrent events are often conducted in fields such as demography, epidemiology, medicine, and social sciences (Cook and Lawless, 2007, The Statistical Analysis of Recurrent Events. New York: Springer-Verlag; Zhao et al., 2011, Test 20, 1-42). For such analysis, two types of data have been extensively investigated: recurrent-event data and panel-count data. However, in practice, one may face a third type of data, mixed recurrent-event and panel-count data or mixed event-history data. Such data occur if some study subjects are monitored or observed continuously and thus provide recurrent-event data, while the others are observed only at discrete times and hence give only panel-count data. A more general situation is that each subject is observed continuously over certain time periods but only at discrete times over other time periods. There exists little literature on the analysis of such mixed data except that published by Zhu et al. (2013, Statistics in Medicine 32, 1954-1963). In this article, we consider the regression analysis of mixed data using the additive rate model and develop some estimating equation-based approaches to estimate the regression parameters of interest. Both finite sample and asymptotic properties of the resulting estimators are established, and the numerical studies suggest that the proposed methodology works well for practical situations. The approach is applied to a Childhood Cancer Survivor Study that motivated this study.
Exploring Factor Model Parameters across Continuous Variables with Local Structural Equation Models.
Hildebrandt, Andrea; Lüdtke, Oliver; Robitzsch, Alexander; Sommer, Christopher; Wilhelm, Oliver
2016-01-01
Using an empirical data set, we investigated variation in factor model parameters across a continuous moderator variable and demonstrated three modeling approaches: multiple-group mean and covariance structure (MGMCS) analyses, local structural equation modeling (LSEM), and moderated factor analysis (MFA). We focused on how to study variation in factor model parameters as a function of continuous variables such as age, socioeconomic status, ability levels, acculturation, and so forth. Specifically, we formalized the LSEM approach in detail as compared with previous work and investigated its statistical properties with an analytical derivation and a simulation study. We also provide code for the easy implementation of LSEM. The illustration of methods was based on cross-sectional cognitive ability data from individuals ranging in age from 4 to 23 years. Variations in factor loadings across age were examined with regard to the age differentiation hypothesis. LSEM and MFA converged with respect to the conclusions. When there was a broad age range within groups and varying relations between the indicator variables and the common factor across age, MGMCS produced distorted parameter estimates. We discuss the pros of LSEM compared with MFA and recommend using the two tools as complementary approaches for investigating moderation in factor model parameters.
Kelley, Ken; Rausch, Joseph R
2011-12-01
Longitudinal studies are necessary to examine individual change over time, with group status often being an important variable in explaining some individual differences in change. Although sample size planning for longitudinal studies has focused on statistical power, recent calls for effect sizes and their corresponding confidence intervals underscore the importance of obtaining sufficiently accurate estimates of group differences in change. We derived expressions that allow researchers to plan sample size to achieve the desired confidence interval width for group differences in change for orthogonal polynomial change parameters. The approaches developed provide the expected confidence interval width to be sufficiently narrow, with an extension that allows some specified degree of assurance (e.g., 99%) that the confidence interval will be sufficiently narrow. We make computer routines freely available, so that the methods developed can be used by researchers immediately.
Knights, Jonathan; Rohatagi, Shashank
2015-12-01
Although there is a body of literature focused on minimizing the effect of dosing inaccuracies on pharmacokinetic (PK) parameter estimation, most of the work centers on missing doses. No attempt has been made to specifically characterize the effect of error in reported dosing times. Additionally, existing work has largely dealt with cases in which the compound of interest is dosed at an interval no less than its terminal half-life. This work provides a case study investigating how error in patient reported dosing times might affect the accuracy of structural model parameter estimation under sparse sampling conditions when the dosing interval is less than the terminal half-life of the compound, and the underlying kinetics are monoexponential. Additional effects due to noncompliance with dosing events are not explored and it is assumed that the structural model and reasonable initial estimates of the model parameters are known. Under the conditions of our simulations, with structural model CV % ranging from ~20 to 60 %, parameter estimation inaccuracy derived from error in reported dosing times was largely controlled around 10 % on average. Given that no observed dosing was included in the design and sparse sampling was utilized, we believe these error results represent a practical ceiling given the variability and parameter estimates for the one-compartment model. The findings suggest additional investigations may be of interest and are noteworthy given the inability of current PK software platforms to accommodate error in dosing times.
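The simulation idea can be sketched for one-compartment, monoexponential kinetics under sparse sampling, with the dosing interval (12 h) shorter than the terminal half-life (24 h) as in the case study. The dose size, sampling times, and 1 h reporting-error scale below are illustrative assumptions, not the study's design.

```python
import numpy as np
from scipy.optimize import curve_fit

# q12h dosing, 24 h terminal half-life (interval < half-life); values assumed.
K_TRUE, V_TRUE, DOSE = np.log(2) / 24.0, 50.0, 100.0
DOSE_TIMES = np.arange(0.0, 96.0, 12.0)          # recorded dosing times (h)
SAMPLES = np.array([71.0, 83.0, 95.0])           # sparse sampling times (h)

def conc(t, k, V, dose_times):
    """Superposition of monoexponential decays from each prior dose."""
    t = np.atleast_1d(np.asarray(t, dtype=float))
    out = np.zeros_like(t)
    for td in dose_times:
        mask = t >= td
        out[mask] += (DOSE / V) * np.exp(-k * (t[mask] - td))
    return out

obs = conc(SAMPLES, K_TRUE, V_TRUE, DOSE_TIMES)  # noise-free "observations"

def fit(assumed_times):
    f = lambda t, k, V: conc(t, k, V, assumed_times)
    (k, V), _ = curve_fit(f, SAMPLES, obs, p0=[0.05, 40.0])
    return k, V

k_exact, _ = fit(DOSE_TIMES)                     # fit with the correct record
rng = np.random.default_rng(1)
k_err, _ = fit(DOSE_TIMES + rng.normal(0.0, 1.0, DOSE_TIMES.size))  # misreported times
print(k_exact, k_err)   # error in reported dosing times perturbs the estimate of k
```

Repeating the perturbed fit over many error draws and summarizing the spread of `k_err` is the Monte Carlo characterization the abstract describes.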
Adaptive Detection and Parameter Estimation for Multidimensional Signal Models
1989-04-19
expected value of the non-adaptive parameter array estimator directly from Equation (5-1), using the fact that .zP = dppH = d We obtain EbI = (e-H E eI 1...depend only on the dimensional parameters of the problem. We will derive these properties shortly, but first we wish to express the conditional pdf
He, L; Huang, G H; Lu, H W
2010-04-15
Solving groundwater remediation optimization problems based on proxy simulators can usually yield optimal solutions differing from the "true" ones of the problem. This study presents a new stochastic optimization model under modeling uncertainty and parameter certainty (SOMUM), and the associated solution method, for simultaneously addressing modeling uncertainty associated with simulator residuals and optimizing groundwater remediation processes. This is a new attempt distinct from previous modeling efforts, which focused on addressing uncertainty in physical parameters (e.g., soil porosity); this work instead deals with uncertainty in the mathematical simulator (arising from model residuals). Compared to existing modeling approaches (in which only parameter uncertainty is considered), the model has the advantages of providing mean-variance analysis for contaminant concentrations, mitigating the effects of modeling uncertainties on optimal remediation strategies, offering confidence levels of optimal remediation strategies to system designers, and reducing computational cost in optimization processes.
Parameter sensitivity and uncertainty analysis for a storm surge and wave model
NASA Astrophysics Data System (ADS)
Bastidas, Luis A.; Knighton, James; Kline, Shaun W.
2016-09-01
Development and simulation of synthetic hurricane tracks is a common methodology used to estimate hurricane hazards in the absence of empirical coastal surge and wave observations. Such methods typically rely on numerical models to translate stochastically generated hurricane wind and pressure forcing into coastal surge and wave estimates. The model output uncertainty associated with selection of appropriate model parameters must therefore be addressed. The computational overburden of probabilistic surge hazard estimates is exacerbated by the high dimensionality of numerical surge and wave models. We present a model parameter sensitivity analysis of the Delft3D model for the simulation of hazards posed by Hurricane Bob (1991) utilizing three theoretical wind distributions (NWS23, modified Rankine, and Holland). The sensitive model parameters (of 11 total considered) include wind drag, the depth-induced breaking γB, and the bottom roughness. Several parameters show no sensitivity (threshold depth, eddy viscosity, wave triad parameters, and depth-induced breaking αB) and can therefore be excluded to reduce the computational overburden of probabilistic surge hazard estimates. The sensitive model parameters also demonstrate a large number of interactions between parameters and a nonlinear model response. While model outputs showed sensitivity to several parameters, the ability of these parameters to act as tuning parameters for calibration is somewhat limited as proper model calibration is strongly reliant on accurate wind and pressure forcing data. A comparison of the model performance with forcings from the different wind models is also presented.
Parameter sensitivity and uncertainty analysis for a storm surge and wave model
NASA Astrophysics Data System (ADS)
Bastidas, L. A.; Knighton, J.; Kline, S. W.
2015-10-01
Development and simulation of synthetic hurricane tracks is a common methodology used to estimate hurricane hazards in the absence of empirical coastal surge and wave observations. Such methods typically rely on numerical models to translate stochastically generated hurricane wind and pressure forcing into coastal surge and wave estimates. The model output uncertainty associated with selection of appropriate model parameters must therefore be addressed. The computational overburden of probabilistic surge hazard estimates is exacerbated by the high dimensionality of numerical surge and wave models. We present a model parameter sensitivity analysis of the Delft3D model for the simulation of hazards posed by Hurricane Bob (1991) utilizing three theoretical wind distributions (NWS23, modified Rankine, and Holland). The sensitive model parameters (of eleven total considered) include wind drag, the depth-induced breaking γB, and the bottom roughness. Several parameters show no sensitivity (threshold depth, eddy viscosity, wave triad parameters and depth-induced breaking αB) and can therefore be excluded to reduce the computational overburden of probabilistic surge hazard estimates. The sensitive model parameters also demonstrate a large number of interactions between parameters and a non-linear model response. While model outputs showed sensitivity to several parameters, the ability of these parameters to act as tuning parameters for calibration is somewhat limited as proper model calibration is strongly reliant on accurate wind and pressure forcing data. A comparison of the model performance with forcings from the different wind models is also presented.
[Study on the automatic parameters identification of water pipe network model].
Jia, Hai-Feng; Zhao, Qi-Feng
2010-01-01
Based on an analysis of problems in the development and application of water pipe network models, automatic identification of model parameters is regarded as a key bottleneck for model application in water supply enterprises. A methodology for automatic parameter identification of water pipe network models based on GIS and SCADA databases is proposed. The core algorithms for automatic parameter identification are then studied: RSA (Regionalized Sensitivity Analysis) is used for automatic recognition of sensitive parameters, and MCS (Monte Carlo Sampling) is used for automatic identification of parameters; a detailed technical route based on RSA and MCS is presented. A module for automatic parameter identification of water pipe network models was developed. Finally, a typical water pipe network was selected as a case study on automatic model parameter identification, and satisfactory results were achieved.
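A compact illustration of the RSA-plus-Monte-Carlo route described above, using a hypothetical one-equation surrogate in place of a real hydraulic network model: parameters are sampled, runs are split into behavioral and non-behavioral sets by fit to an observation, and the gap between the two sets' parameter distributions flags sensitivity.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical surrogate for a pipe network: simulated head loss is strongly
# controlled by a Hazen-Williams-like roughness C and only weakly by a second
# (minor-loss) parameter. This stands in for a full network simulation.
def model(C, minor):
    return 100.0 / C ** 1.85 + 1e-4 * minor

C_TRUE, MINOR_TRUE = 120.0, 5.0
obs = model(C_TRUE, MINOR_TRUE)

# MCS step: Monte Carlo sampling of the parameter space
N = 5000
C = rng.uniform(80.0, 160.0, N)
minor = rng.uniform(0.0, 10.0, N)
err = np.abs(model(C, minor) - obs)
behavioral = err < np.quantile(err, 0.1)       # best 10% of samples

# RSA step: compare behavioral vs non-behavioral parameter distributions
def ks_distance(x, mask):
    """Max gap between the two empirical CDFs; a large gap flags a sensitive
    parameter whose value is constrained by the observations."""
    grid = np.sort(x)
    cdf = lambda s: np.searchsorted(np.sort(s), grid, side="right") / len(s)
    return float(np.max(np.abs(cdf(x[mask]) - cdf(x[~mask]))))

ks_C = ks_distance(C, behavioral)
ks_minor = ks_distance(minor, behavioral)
print(ks_C, ks_minor)   # roughness is flagged as sensitive, the minor term is not
```

In the paper's setting the behavioral samples for the sensitive parameters would then seed the identification step against SCADA measurements.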
NASA Astrophysics Data System (ADS)
Hendricks Franssen, H. J.; Post, H.; Vrugt, J. A.; Fox, A. M.; Baatz, R.; Kumbhar, P.; Vereecken, H.
2015-12-01
Estimation of net ecosystem exchange (NEE) by land surface models is strongly affected by uncertain ecosystem parameters and initial conditions. A possible approach is the estimation of plant functional type (PFT) specific parameters for sites with measurement data like NEE, and application of the parameters at other sites with the same PFT and no measurements. This upscaling strategy was evaluated in this work for sites in Germany and France. Ecosystem parameters and initial conditions were estimated with NEE time series of one year in length, or of only one season. The DREAM(zs) algorithm was used for the estimation of parameters and initial conditions. DREAM(zs) is not limited to Gaussian distributions and can condition on long time series of measurement data simultaneously. DREAM(zs) was used in combination with the Community Land Model (CLM) v4.5. Parameter estimates were evaluated by model predictions at the same site for an independent verification period. In addition, the parameter estimates were evaluated at other, independent sites situated >500 km away with the same PFT. The main conclusions are: i) simulations with estimated parameters better reproduced the NEE measurement data in the verification periods, including the annual NEE sum (23% improvement), annual NEE cycle, and average diurnal NEE course (error reduction by a factor of 1.6); ii) estimated parameters based on seasonal NEE data outperformed estimated parameters based on yearly data; iii) those seasonal parameters were often also significantly different from their yearly equivalents; iv) estimated parameters were significantly different if initial conditions were estimated together with the parameters. We conclude that estimated PFT-specific parameters improve land surface model predictions significantly at independent verification sites and for independent verification periods, so that their potential for upscaling is demonstrated. However, simulation results also indicate
Yamamoto, Toshiyuki; Hashiji, Junpei; Shankar, Venkataraman N
2008-07-01
Injury severities in traffic accidents are usually recorded on ordinal scales, and statistical models have been applied to investigate the effects of driver factors, vehicle characteristics, road geometrics and environmental conditions on injury severity. The unknown parameters in the models are in general estimated assuming random sampling from the population. Traffic accident data however suffer from underreporting effects, especially for lower injury severities. As a result, traffic accident data can be regarded as outcome-based samples with unknown population shares of the injury severities. An outcome-based sample is overrepresented by accidents of higher severities. As a result, outcome-based samples result in biased parameters which skew our inferences on the effect of key safety variables such as safety belt usage. The pseudo-likelihood function for the case with unknown population shares, which is the same as the conditional maximum likelihood for the case with known population shares, is applied in this study to examine the effects of severity underreporting on the parameter estimates. Sequential binary probit models and ordered-response probit models of injury severity are developed and compared in this study. Sequential binary probit models assume that the factors determining the severity change according to the level of the severity itself, while ordered-response probit models assume that the same factors correlate across all levels of severity. Estimation results suggest that the sequential binary probit models outperform the ordered-response probit models, and that the coefficient estimates for lap and shoulder belt use are biased if underreporting is not considered. Mean parameter bias due to underreporting can be significant. The findings show that underreporting on the outcome dimension may induce bias in inferences on a variety of factors. In particular, if underreporting is not accounted for, the marginal impacts of a variety of factors appear
ERIC Educational Resources Information Center
Custer, Michael; Sharairi, Sid; Yamazaki, Kenji; Signatur, Diane; Swift, David; Frey, Sharon
2008-01-01
The present study compared item and ability invariance as well as model-data fit between the one-parameter (1PL) and three-parameter (3PL) Item Response Theory (IRT) models utilizing real data across five grades; second through sixth as well as simulated data at second, fourth and sixth grade. At each grade, the 1PL and 3PL IRT models were run…
Limitations on the recovery of the true AGN variability parameters using damped random walk modeling
NASA Astrophysics Data System (ADS)
Kozłowski, Szymon
2017-01-01
Context. The damped random walk (DRW) stochastic process is nowadays frequently used to model aperiodic light curves of active galactic nuclei (AGNs). A number of correlations between the DRW model parameters, the signal decorrelation timescale and amplitude, and the physical AGN parameters, such as the black hole mass or luminosity, have been reported. Aims: We are interested in whether or not it is plausible to correctly measure the DRW parameters from a typical ground-based survey, and, in particular, in how accurate the recovered DRW parameters are compared to the input ones. Methods: By means of Monte Carlo simulations of AGN light curves, we studied the impact of the light curve length, the source magnitude (the photometric properties of a survey), cadence, and additional light (e.g., from a host galaxy) on the DRW model parameters. Results: The most significant finding is that currently existing surveys are going to return unconstrained DRW decorrelation timescales, because typical rest-frame data do not probe long enough timescales or the white noise part of the power spectral density for DRW. The experiment length must be at least ten times longer than the true DRW decorrelation timescale, presumably in the vicinity of one year, implying the need for AGN light curves spanning a minimum of 10 years (rest frame). The DRW timescales for sufficiently long light curves are typically weakly biased, and the exact bias depends on the fitting method and the priors used. The DRW amplitude is mostly affected by the photometric noise (the source magnitude or the signal-to-noise ratio), cadence, and the AGN host light. Conclusions: Because the DRW parameters appear to be incorrectly determined from typically existing data, the reported correlations of the DRW variability and physical AGN parameters from other works seem unlikely to be correct. In particular, the anti-correlation of the DRW decorrelation timescale with redshift is a manifestation of the
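The DRW referred to above is an Ornstein-Uhlenbeck process and can be simulated exactly, which is the kind of Monte Carlo experiment the paper performs. The sketch below, with an assumed timescale and amplitude, contrasts a baseline much longer than the true timescale, which pins down the decorrelation, with a short baseline that does not.

```python
import numpy as np

rng = np.random.default_rng(7)

def simulate_drw(n, dt, tau, sf_inf):
    """Exact discrete simulation of a damped random walk (an Ornstein-Uhlenbeck
    process) with decorrelation timescale tau and asymptotic structure-function
    amplitude sf_inf (stationary standard deviation sf_inf / sqrt(2))."""
    a = np.exp(-dt / tau)
    sigma = sf_inf / np.sqrt(2.0)
    x = np.empty(n)
    x[0] = rng.normal(0.0, sigma)
    shocks = rng.normal(0.0, sigma * np.sqrt(1.0 - a * a), n)
    for i in range(1, n):
        x[i] = a * x[i - 1] + shocks[i]
    return x

def acf_at(x, lag):
    """Sample autocorrelation at a single lag."""
    x = x - x.mean()
    return float(np.dot(x[:-lag], x[lag:]) / np.dot(x, x))

tau, sf_inf = 200.0, 0.2                           # assumed values (days, mag)
long_lc = simulate_drw(200_000, 1.0, tau, sf_inf)  # baseline = 1000 * tau
short_lc = long_lc[:600]                           # baseline = 3 * tau

# Only the long baseline pins the lag-tau autocorrelation near exp(-1);
# the short light curve gives a noisy, essentially unconstrained value.
print(acf_at(long_lc, 200), acf_at(short_lc, 200))
```

Repeating the short-baseline fit over many realizations reproduces the broad, biased timescale distributions the paper reports for survey-length data.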
NASA Astrophysics Data System (ADS)
Zhang, Xuefeng; Zhang, Shaoqing; Liu, Zhengyu; Wu, Xinrong; Han, Guijun
2016-09-01
Imperfect physical parameterization schemes are an important source of model bias in a coupled model and adversely impact the performance of model simulation. With a coupled ocean-atmosphere-land model of intermediate complexity, the impact of imperfect parameter estimation on model simulation with biased physics has been studied. Here, the biased physics is induced by using different outgoing longwave radiation schemes in the assimilation and "truth" models. To mitigate model bias, the parameters employed in the biased longwave radiation scheme are optimized using three different methods: least-squares parameter fitting (LSPF), single-valued parameter estimation and geography-dependent parameter optimization (GPO), the last two of which belong to the coupled model parameter estimation (CMPE) method. While the traditional LSPF method is able to improve the performance of coupled model simulations, the optimized parameter values from the CMPE, which uses the coupled model dynamics to project observational information onto the parameters, further reduce the bias of the simulated climate arising from biased physics. Further, parameters estimated by the GPO method can properly capture the climate-scale signal to improve the simulation of climate variability. These results suggest that the physical parameter estimation via the CMPE scheme is an effective approach to restrain the model climate drift during decadal climate predictions using coupled general circulation models.
Rinaldi, Antonio
2011-04-01
Traditional fiber bundle models (FBMs) have been an effective tool to understand brittle heterogeneous systems. However, fiber bundles in modern nano- and bioapplications demand a new generation of FBM capturing more complex deformation processes in addition to damage. In the context of loose bundle systems and with reference to time-independent plasticity and soft biomaterials, we formulate a generalized statistical model for ductile fracture and nonlinear elastic problems capable of handling more simultaneous deformation mechanisms by means of two order parameters (as opposed to one). As the first rational FBM for coupled damage problems, it may be the cornerstone for advanced statistical models of heterogeneous systems in nanoscience and materials design, especially to explore hierarchical and bio-inspired concepts in the arena of nanobiotechnology. Finally, applicative examples are provided for illustrative purposes, discussing issues in inverse analysis (i.e., nonlinear elastic polymer fibers and ductile Cu submicron bar arrays) and direct design (i.e., strength prediction).
A Primer on the 2- and 3-Parameter Item Response Theory Models.
ERIC Educational Resources Information Center
Thornton, Artist
Item response theory (IRT) is a useful and effective tool for item response measurement if used in the proper context. This paper discusses the sets of assumptions under which responses can be modeled while exploring the framework of the IRT models relative to response testing. The one parameter model, or one parameter logistic model, is perhaps…
NASA Technical Reports Server (NTRS)
Van Dyke, Michael B.
2013-01-01
Present preliminary work using lumped parameter models to approximate dynamic response of electronic units to random vibration; Derive a general N-DOF model for application to electronic units; Illustrate parametric influence of model parameters; Implication of coupled dynamics for unit/board design; Demonstrate use of model to infer printed wiring board (PWB) dynamics from external chassis test measurement.
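A minimal 2-DOF instance of the lumped-parameter idea outlined above, with hypothetical chassis and printed wiring board masses and stiffnesses (values are illustrative, not from the presentation): the coupled natural frequencies come from a small generalized eigenproblem.

```python
import numpy as np

# Hypothetical 2-DOF lumped-parameter unit: chassis mass m1 on mount stiffness
# k1, with the printed wiring board (m2) coupled to the chassis through k2.
m1, m2 = 2.0, 0.1            # kg
k1, k2 = 8.0e5, 2.0e4        # N/m

M = np.diag([m1, m2])
K = np.array([[k1 + k2, -k2],
              [-k2,      k2]])

# Undamped natural frequencies from the generalized eigenproblem K v = w^2 M v
w2 = np.sort(np.linalg.eigvals(np.linalg.solve(M, K)).real)
f_n = np.sqrt(w2) / (2.0 * np.pi)
print(f_n)   # Hz; lower mode is board-dominated, upper mode chassis-dominated
```

The same machinery extends to N degrees of freedom, and comparing the computed mode shapes against a chassis-mounted accelerometer is the sort of inference of PWB dynamics from external measurements the abstract mentions.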
Parameter identification for the electrical modeling of semiconductor bridges.
Gray, Genetha Anne
2005-03-01
Semiconductor bridges (SCBs) are commonly used as initiators for explosive and pyrotechnic devices. Their advantages include reduced voltage and energy requirements and exceptional safety features. Moreover, the design of systems which implement SCBs can be expedited using electrical simulation software. Successful use of this software requires that certain parameters be correctly chosen. In this paper, we explain how these parameters can be identified using optimization. We describe the problem focusing on the application of a direct optimization method for its solution, and present some numerical results.
Ranking vocal fold model parameters by their influence on modal frequencies
Cook, Douglas D.; Nauman, Eric; Mongeau, Luc
2009-01-01
The purpose of this study was to identify, using computational models, the vocal fold parameters which are most influential in determining the vibratory characteristics of the vocal folds. The sensitivities of vocal fold modal frequencies to variations in model parameters were used to determine the most influential parameters. A detailed finite element model of the human vocal fold was created. The model was defined by eight geometric and six material parameters. The model included transitional boundary regions to idealize the complex physiological structure of real human subjects. Parameters were simultaneously varied over ranges representative of actual human vocal folds. Three separate statistical analysis techniques were used to identify the most and least sensitive model parameters with respect to modal frequency. The results from all three methods consistently suggest that a set of five parameters are most influential in determining the vibratory characteristics of the vocal folds. PMID:19813811
NASA Astrophysics Data System (ADS)
Yu, Xiaolin; Zhang, Shaoqing; Lin, Xiaopei; Li, Mingkui
2017-03-01
The uncertainties in values of coupled model parameters are an important source of model bias that causes model climate drift. The values can be calibrated by a parameter estimation procedure that projects observational information onto model parameters. The signal-to-noise ratio of error covariance between the model state and the parameter being estimated directly determines whether the parameter estimation succeeds or not. With a conceptual climate model that couples the stochastic atmosphere and slow-varying ocean, this study examines the sensitivity of state-parameter covariance on the accuracy of estimated model states in different model components of a coupled system. Due to the interaction of multiple timescales, the fast-varying atmosphere with a chaotic nature is the major source of the inaccuracy of estimated state-parameter covariance. Thus, enhancing the estimation accuracy of atmospheric states is very important for the success of coupled model parameter estimation, especially for the parameters in the air-sea interaction processes. The impact of chaotic-to-periodic ratio in state variability on parameter estimation is also discussed. This simple model study provides a guideline when real observations are used to optimize model parameters in a coupled general circulation model for improving climate analysis and predictions.
NASA Astrophysics Data System (ADS)
Suzuki, Y.
2016-05-01
This article demonstrates the practical applicability of a method of modelling shape memory alloys (SMAs) as actuators. For this study, a pair of SMA wires was installed in an antagonistic manner to form an actuator, and a linear differential equation that describes the behaviour of the actuator’s generated force relative to its input voltage was derived for the limited range below the austenite onset temperature. In this range, hysteresis need not be considered, and the proposed SMA actuator can therefore be practically applied in linear control systems, which is significant because large deformations accompanied by hysteresis do not necessarily occur in most vibration control cases. When specific values of the parameters used in the differential equation were identified experimentally, it became clear that one of the parameters was dependent on ambient airflow velocity. The values of this dependent parameter were obtained using an additional SMA wire as a sensor. In these experiments, while the airflow distribution around the SMA wires was varied by changing the rotational speed of the fans in the wind tunnels, an input voltage was conveyed to the SMA actuator circuit, and the generated force was measured. In this way, the parameter dependent on airflow velocity was estimated in real time, and it was validated that the calculated force was consistent with the measured one.
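A sketch of the kind of first-order linear model described above, valid below the austenite onset temperature where hysteresis is neglected. The equation form, tau * dF/dt + F = c * v, and all parameter values are assumptions for illustration, with the gain c standing in for the airflow-dependent parameter; the article's actual differential equation may contain additional terms.

```python
import numpy as np

# Assumed first-order model of the antagonistic SMA actuator: the generated
# force F responds to input voltage v through an airflow-dependent gain c.
def simulate(v_in, tau=2.0, c=0.8, dt=0.001, t_end=20.0):
    n = int(t_end / dt)
    F = np.zeros(n)
    for i in range(1, n):                  # forward-Euler integration
        F[i] = F[i - 1] + dt * (c * v_in - F[i - 1]) / tau
    return F

F = simulate(v_in=5.0)
print(F[-1])   # settles near the steady-state force c * v_in = 4.0
```

Because the model is linear, identifying c online from a sensor wire, as the article does with its airflow-dependent parameter, keeps the actuator usable inside standard linear control loops.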
Distributed parameter modelling of flexible spacecraft: Where's the beef?
NASA Technical Reports Server (NTRS)
Hyland, D. C.
1994-01-01
This presentation discusses various misgivings concerning the directions and productivity of Distributed Parameter System (DPS) theory as applied to spacecraft vibration control. We try to show the need for greater cross-fertilization between DPS theorists and spacecraft control designers. We recommend a shift in research directions toward exploration of asymptotic frequency response characteristics of critical importance to control designers.
Constructing Approximate Confidence Intervals for Parameters with Structural Equation Models
ERIC Educational Resources Information Center
Cheung, Mike W. -L.
2009-01-01
Confidence intervals (CIs) for parameters are usually constructed based on the estimated standard errors. These are known as Wald CIs. This article argues that likelihood-based CIs (CIs based on likelihood ratio statistics) are often preferred to Wald CIs. It shows how the likelihood-based CIs and the Wald CIs for many statistics and psychometric…
NASA Astrophysics Data System (ADS)
Kim, Jang-Gyeong; Kwon, Hyun-Han; Kim, Dongkyun
2017-01-01
Poisson cluster stochastic rainfall generators (e.g., modified Bartlett-Lewis rectangular pulse, MBLRP) have been widely applied to generate synthetic sub-daily rainfall sequences. The MBLRP model reproduces the underlying distribution of the rainfall generating process. The existing optimization techniques are typically based on individual parameter estimates that treat each parameter as independent. However, parameter estimates sometimes compensate for the estimates of other parameters, which can cause high variability in the results if the covariance structure is not formally considered. Moreover, uncertainty associated with model parameters in the MBLRP rainfall generator is not usually addressed properly. Here, we develop a hierarchical Bayesian model (HBM)-based MBLRP model to jointly estimate parameters across weather stations and explicitly consider the covariance and uncertainty through a Bayesian framework. The model is tested using weather stations in South Korea. The HBM-based MBLRP model improves the identification of parameters with better reproduction of rainfall statistics at various temporal scales. Additionally, the spatial variability of the parameters across weather stations is substantially reduced compared to that of other methods.
Wang, Jing-Jing; Wu, Hai-Feng; Sun, Tao; Li, Xia; Wang, Wei; Tao, Li-Xin; Huo, Da; Lv, Ping-Xin; He, Wen; Guo, Xiu-Hua
2013-01-01
Lung cancer, one of the leading causes of cancer-related deaths, usually appears as solitary pulmonary nodules (SPNs), which are hard to diagnose with the naked eye. In this paper, curvelet-based textural features and clinical parameters are used with three prediction models [a multilevel model, a least absolute shrinkage and selection operator (LASSO) regression method, and a support vector machine (SVM)] to improve the diagnosis of benign and malignant SPNs. Dimensionality reduction of the original curvelet-based textural features was achieved using principal component analysis. In addition, non-conditional logistic regression was used to find clinical predictors among demographic parameters and morphological features. The results showed that, combined with 11 clinical predictors, the accuracy rates using 12 principal components were higher than those using the original curvelet-based textural features. To evaluate the models, 10-fold cross validation and back substitution were applied, giving accuracy rates of 0.8549 and 0.9221, respectively, for the LASSO method; 0.9443 and 0.9831 for SVM; and 0.8722 and 0.9722 for the multilevel model. Overall, the highest accuracy rate was achieved with SVM, using curvelet-based textural features after dimensionality reduction together with clinical predictors. The method may be used as an auxiliary tool to differentiate between benign and malignant SPNs in CT images.
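The dimensionality-reduction step described above can be sketched with a generic PCA implementation. Synthetic random data stands in for the curvelet features; the shapes and names are illustrative, not the paper's dataset.

```python
import numpy as np

def pca_reduce(X, n_components):
    """Project feature matrix X onto its leading principal components."""
    Xc = X - X.mean(axis=0)                    # center each feature
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    scores = Xc @ Vt[:n_components].T          # reduced representation
    explained = (S**2)[:n_components].sum() / (S**2).sum()
    return scores, explained

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 30))                 # stand-in for curvelet features
scores, explained = pca_reduce(X, 12)          # keep 12 principal components
```

The reduced `scores` matrix would then feed a classifier such as an SVM, as in the study.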
Two-loop corrections to the ρ parameter in Two-Higgs-Doublet models
NASA Astrophysics Data System (ADS)
Hessenberger, Stephan; Hollik, Wolfgang
2017-03-01
Models with two scalar doublets are among the simplest extensions of the Standard Model which fulfill the relation ρ = 1 at lowest order for the ρ parameter, as favored by experimental data for electroweak observables, which allow only small deviations from unity. Such small deviations Δρ originate exclusively from quantum effects, with special sensitivity to mass splittings between different isospin components of fermions and scalars. In this paper the dominant two-loop electroweak corrections to Δρ are calculated in the CP-conserving THDM, resulting from the top-Yukawa coupling and the self-couplings of the Higgs bosons in the gauge-less limit. The on-shell renormalization scheme is applied. With the assumption that one of the CP-even neutral scalars represents the scalar boson observed by the LHC experiments, with standard properties, the two-loop non-standard contributions to Δρ can be separated from the standard ones. These contributions are of particular interest since they increase with mass splittings between non-standard Higgs bosons and can be additionally enhanced by tan β and λ_5, an additional free coefficient of the Higgs potential, and can thus modify the one-loop result substantially. Numerical results are given for the dependence on the various non-standard parameters, and the influence on the calculation of electroweak precision observables is discussed.
Determination of the Parameter Sets for the Best Performance of IPS-driven ENLIL Model
NASA Astrophysics Data System (ADS)
Yun, Jongyeon; Choi, Kyu-Cheol; Yi, Jonghyuk; Kim, Jaehun; Odstrcil, Dusan
2016-12-01
The interplanetary scintillation-driven (IPS-driven) ENLIL model was jointly developed by the University of California, San Diego (UCSD) and the National Aeronautics and Space Administration/Goddard Space Flight Center (NASA/GSFC). The model has been in operation at the Korean Space Weather Center (KSWC) since 2014. The IPS-driven ENLIL model has a variety of ambient solar wind parameters, and the results of the model depend on the combination of these parameters. We have conducted research to determine the best combination of parameters to improve the performance of the IPS-driven ENLIL model. The model results for 1,440 combinations of input parameters were compared with Advanced Composition Explorer (ACE) observation data. In this way, the top 10 parameter sets showing the best performance were determined. Finally, the characteristics of these parameter sets were analyzed, and the application of the results to the IPS-driven ENLIL model was discussed.
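The exhaustive comparison of parameter combinations against observations can be sketched as a generic grid search. The toy model, parameter names, and values below are stand-ins for illustration, not ENLIL's actual inputs or the ACE data.

```python
import itertools, math

def rmse(model, obs):
    return math.sqrt(sum((m - o) ** 2 for m, o in zip(model, obs)) / len(obs))

def rank_parameter_sets(run_model, grids, obs, top_k=10):
    """Score every parameter combination against observations; keep the best."""
    scored = []
    for combo in itertools.product(*grids.values()):
        params = dict(zip(grids.keys(), combo))
        scored.append((rmse(run_model(params), obs), params))
    scored.sort(key=lambda s: s[0])            # lowest error first
    return scored[:top_k]

# Toy stand-in for a solar-wind model: speed = base + slope * t
def toy_model(p, n=24):
    return [p["base"] + p["slope"] * t for t in range(n)]

obs = [400 + 2.0 * t for t in range(24)]       # fake observation series
grids = {"base": [380, 400, 420], "slope": [1.0, 2.0, 3.0]}
best = rank_parameter_sets(toy_model, grids, obs, top_k=3)
```

With 1,440 real combinations, the same loop simply iterates over a larger `grids` dictionary; the ranking logic is unchanged.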
Using Dirichlet Priors to Improve Model Parameter Plausibility
ERIC Educational Resources Information Center
Rai, Dovan; Gong, Yue; Beck, Joseph E.
2009-01-01
Student modeling is a widely used approach to make inference about a student's attributes like knowledge, learning, etc. If we wish to use these models to analyze and better understand student learning there are two problems. First, a model's ability to predict student performance is at best weakly related to the accuracy of any one of its…
Determination of Experimental Fuel Rod Parameters using 3D Modelling of PCMI with MPS Defect
Casagranda, Albert; Spencer, Benjamin Whiting; Pastore, Giovanni; Novascone, Stephen Rhead; Hales, Jason Dean; Williamson, Richard L; Martineau, Richard Charles
2016-05-01
An in-reactor experiment is being designed in order to validate the pellet-cladding mechanical interaction (PCMI) behavior of the BISON fuel performance code. The experimental parameters for the test rod being placed in the Halden Research Reactor are being determined using BISON simulations. The 3D model includes a missing pellet surface (MPS) defect to generate large local cladding deformations, which should be measurable after typical burnup times. The BISON fuel performance code is being developed at Idaho National Laboratory (INL) and is built on the Multiphysics Object-Oriented Simulation Environment (MOOSE) framework. BISON supports both 2D and 3D finite elements and solves the fully coupled equations for solid mechanics, heat conduction and species diffusion. A number of fuel performance effects are included using models for swelling, densification, creep, relocation and fission gas production and release. In addition, the mechanical and thermal contact between the fuel and cladding is explicitly modelled using a master-slave based contact algorithm. In order to accurately predict PCMI effects, the BISON code includes the relevant physics involved and provides a scalable and robust solution procedure. The depth of the proposed MPS defect is being varied in the BISON model to establish an optimum value for the experiment. The experiment will be interrupted approximately every 6 months to measure cladding radial deformation and provide data to validate BISON. The complete rodlet (~20 discrete pellets) is being simulated using a 180° half-symmetry 3D model with MPS defects at two axial locations. In addition, annular pellets will be used at the top and bottom of the pellet stack to allow thermocouples within the rod to measure the fuel centerline temperature. Simulation results will be presented to illustrate the expected PCMI behavior and support the chosen experimental design parameters.
Analysis of Experimental Fuel Rod Parameters using 3D Modelling of PCMI with MPS Defect
Casagranda, Albert; Spencer, Benjamin Whiting; Pastore, Giovanni; Novascone, Stephen Rhead; Hales, Jason Dean; Williamson, Richard L; Martineau, Richard Charles
2016-06-01
An in-reactor experiment is being designed in order to validate the pellet-cladding mechanical interaction (PCMI) behavior of the BISON fuel performance code. The experimental parameters for the test rod being placed in the Halden Research Reactor are being determined using BISON simulations. The 3D model includes a missing pellet surface (MPS) defect to generate large local cladding deformations, which should be measurable after typical burnup times. The BISON fuel performance code is being developed at Idaho National Laboratory (INL) and is built on the Multiphysics Object-Oriented Simulation Environment (MOOSE) framework. BISON supports both 2D and 3D finite elements and solves the fully coupled equations for solid mechanics, heat conduction and species diffusion. A number of fuel performance effects are included using models for swelling, densification, creep, relocation and fission gas production and release. In addition, the mechanical and thermal contact between the fuel and cladding is explicitly modelled using a master-slave based contact algorithm. In order to accurately predict PCMI effects, the BISON code includes the relevant physics involved and provides a scalable and robust solution procedure. The depth of the proposed MPS defect is being varied in the BISON model to establish an optimum value for the experiment. The experiment will be interrupted approximately every 6 months to measure cladding radial deformation and provide data to validate BISON. The complete rodlet (~20 discrete pellets) is being simulated using a 180° half-symmetry 3D model with MPS defects at two axial locations. In addition, annular pellets will be used at the top and bottom of the pellet stack to allow thermocouples within the rod to measure the fuel centerline temperature. Simulation results will be presented to illustrate the expected PCMI behavior and support the chosen experimental design parameters.
Parameter Uncertainty for Aircraft Aerodynamic Modeling using Recursive Least Squares
NASA Technical Reports Server (NTRS)
Grauer, Jared A.; Morelli, Eugene A.
2016-01-01
A real-time method was demonstrated for determining accurate uncertainty levels of stability and control derivatives estimated using recursive least squares and time-domain data. The method uses a recursive formulation of the residual autocorrelation to account for colored residuals, which are routinely encountered in aircraft parameter estimation and change the predicted uncertainties. Simulation data and flight test data for a subscale jet transport aircraft were used to demonstrate the approach. Results showed that the corrected uncertainties matched the observed scatter in the parameter estimates, and did so more accurately than conventional uncertainty estimates that assume white residuals. Only small differences were observed between batch estimates and recursive estimates at the end of the maneuver. It was also demonstrated that the autocorrelation could be reduced to a small number of lags to minimize computation and memory storage requirements without significantly degrading the accuracy of predicted uncertainty levels.
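A minimal sketch of the recursive least squares core might look like the following. This shows only the standard RLS update, not the paper's residual-autocorrelation correction; the regressors and "true" parameter values are synthetic stand-ins.

```python
import numpy as np

def recursive_least_squares(X, y, lam=1.0):
    """Classic RLS: update parameter estimates one sample at a time."""
    n_params = X.shape[1]
    theta = np.zeros(n_params)
    P = np.eye(n_params) * 1e6            # large initial covariance
    for x, yi in zip(X, y):
        k = P @ x / (lam + x @ P @ x)     # gain vector
        theta = theta + k * (yi - x @ theta)
        P = (P - np.outer(k, x @ P)) / lam
    return theta, P

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 2))             # stand-in regressors
true_theta = np.array([0.8, -1.2])        # stand-ins for two derivatives
y = X @ true_theta + 0.01 * rng.normal(size=500)
theta, P = recursive_least_squares(X, y)
```

The covariance matrix `P` is what naive uncertainty estimates are read from; the paper's contribution is to correct those levels when the residuals are colored rather than white.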
Macroscopic control parameter for avalanche models for bursty transport
Chapman, S. C.; Rowlands, G.; Watkins, N. W.
2009-01-15
Similarity analysis is used to identify the control parameter R_A for the subset of avalanching systems that can exhibit self-organized criticality (SOC). This parameter expresses the ratio of driving to dissipation. The transition to SOC, when the number of excited degrees of freedom is maximal, is found to occur when R_A → 0. This is in the opposite sense to (Kolmogorov) turbulence, thus identifying a deep distinction between turbulence and SOC and suggesting an observable property that could distinguish them. A corollary of this similarity analysis is that SOC phenomenology, that is, power-law scaling of avalanches, can persist for finite R_A with the same R_A → 0 exponent if the system supports a sufficiently large range of lengthscales, necessary for SOC to be a candidate for physical (R_A finite) systems.
Testing a Gender Additive Model: The Role of Body Image in Adolescent Depression
ERIC Educational Resources Information Center
Bearman, Sarah Kate; Stice, Eric
2008-01-01
Despite consistent evidence that adolescent girls are at greater risk of developing depression than adolescent boys, risk factor models that account for this difference have been elusive. The objective of this research was to examine risk factors proposed by the "gender additive" model of depression that attempts to partially explain the increased…
An Integrated Tool for Estimation of Material Model Parameters (PREPRINT)
2010-04-01
On Lower Confidence for PCS in Truncated Location Parameter Models
1989-06-01
statistic for the parameter θ_i. The natural selection rule is to select the population yielding the largest X_i as the best population. Thus, a question ... group. Then, a reasonable question is: what kind of confidence statement can be made regarding the PCS? For this purpose, based on the above given data...
NASA Astrophysics Data System (ADS)
Ajami, N. K.; Duan, Q.; Sorooshian, S.
2005-12-01
To date, single conceptual hydrologic models have often been applied to interpret physical processes within a watershed. Nevertheless, hydrologic models, regardless of their sophistication and complexity, are simplified representations of a complex, spatially distributed, and highly nonlinear real-world system. Consequently, their hydrologic predictions contain considerable uncertainty from different sources, including hydrometeorological forcing inputs, boundary/initial conditions, model structure, and model parameters, all of which need to be accounted for. Thus far, efforts have gone into addressing these sources of uncertainty explicitly, making an implicit assumption that uncertainties from different sources are additive. Nevertheless, because of the nonlinear nature of hydrologic systems, it is not feasible to account for these uncertainties independently. Here we present the Integrated Bayesian Uncertainty Estimator (IBUNE), which accounts for total uncertainty from all major sources: forcing inputs, model structure, and model parameters. This algorithm explores a multi-model framework to tackle model structural uncertainty while using Bayesian rules to estimate parameter and input uncertainty within individual models. Three hydrologic models, including the SACramento Soil Moisture Accounting (SAC-SMA) model, the Hydrologic model (HYMOD), and the Simple Water Balance (SWB) model, were considered within the IBUNE framework for this study. The results, which are presented for the Leaf River Basin, MS, indicate that IBUNE gives a better quantification of uncertainty through the hydrological modeling process, therefore providing more reliable and less biased predictions with realistic uncertainty boundaries.
Liang, Hua; Wu, Hulin
2008-12-01
Differential equation (DE) models are widely used in many scientific fields that include engineering, physics and biomedical sciences. The so-called "forward problem", the problem of simulations and predictions of state variables for given parameter values in the DE models, has been extensively studied by mathematicians, physicists, engineers and other scientists. However, the "inverse problem", the problem of parameter estimation based on the measurements of output variables, has not been well explored using modern statistical methods, although some least squares-based approaches have been proposed and studied. In this paper, we propose parameter estimation methods for ordinary differential equation models (ODE) based on the local smoothing approach and a pseudo-least squares (PsLS) principle under a framework of measurement error in regression models. The asymptotic properties of the proposed PsLS estimator are established. We also compare the PsLS method to the corresponding SIMEX method and evaluate their finite sample performances via simulation studies. We illustrate the proposed approach using an application example from an HIV dynamic study.
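The smoothing-plus-least-squares idea behind such "inverse problem" estimators can be illustrated on the simplest ODE, dx/dt = -a·x. This is a sketch of the general two-step idea (smooth the noisy measurements, then regress the estimated derivative on the state), not the paper's actual PsLS estimator; the smoother and all values are illustrative.

```python
import numpy as np

def estimate_decay_rate(t, x_noisy, window=9):
    """Estimate a in dx/dt = -a*x from noisy observations of x(t)."""
    kernel = np.ones(window) / window
    x_s = np.convolve(x_noisy, kernel, mode="same")   # crude local smoother
    dxdt = np.gradient(x_s, t)                        # derivative estimate
    inner = slice(window, -window)                    # drop edge effects
    xs, ds = x_s[inner], dxdt[inner]
    # least squares for dx/dt = -a * x  ->  a = -<x, dx/dt> / <x, x>
    return -(xs @ ds) / (xs @ xs)

rng = np.random.default_rng(2)
t = np.linspace(0.0, 5.0, 501)
x_true = np.exp(-0.7 * t)                             # true a = 0.7
a_hat = estimate_decay_rate(t, x_true + 0.005 * rng.normal(size=t.size))
```

The appeal of this family of methods is that no repeated numerical ODE solving is needed, unlike nonlinear least squares fitted directly to trajectories.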
An original traffic additional emission model and numerical simulation on a signalized road
NASA Astrophysics Data System (ADS)
Zhu, Wen-Xing; Zhang, Jing-Yu
2017-02-01
Based on the VSP (Vehicle Specific Power) model, real traffic emissions were theoretically classified into two parts: basic emission and additional emission. An original additional-emission model was presented to calculate a vehicle's emissions due to signal control effects. A car-following model was developed and used to describe traffic behavior, including cruising, accelerating, decelerating and idling at a signalized intersection. Simulations were conducted for two situations: a single intersection and two adjacent intersections, each with its respective control policy. Results are in good agreement with the theoretical analysis. It is also shown that the additional emission model may be used to design signal control policies in modern traffic systems to mitigate serious environmental problems.
Yang, Huan; Meijer, Hil G. E.; Buitenweg, Jan R.; van Gils, Stephan A.
2016-01-01
Healthy or pathological states of nociceptive subsystems determine different stimulus-response relations measured from quantitative sensory testing. In turn, stimulus-response measurements may be used to assess these states. In a recently developed computational model, six model parameters characterize activation of nerve endings and spinal neurons. However, both model nonlinearity and the limited information in yes-no detection responses to electrocutaneous stimuli make it challenging to estimate the model parameters. Here, we address the question of whether and how one can overcome these difficulties for reliable parameter estimation. First, we fit the computational model to experimental stimulus-response pairs by maximizing the likelihood. To evaluate the balance between model fit and complexity, i.e., the number of model parameters, we evaluate the Bayesian Information Criterion. We find that the computational model is better than a conventional logistic model regarding this balance. Second, our theoretical analysis suggests varying the pulse width among applied stimuli as a necessary condition to prevent structural non-identifiability. In addition, the numerically implemented profile likelihood approach reveals both structural and practical non-identifiability. Our model-based approach with integration of psychophysical measurements can be useful for a reliable assessment of states of the nociceptive system. PMID:27994563
Anisotropic Effects on Constitutive Model Parameters of Aluminum Alloys
2012-01-01
high-strength 7075-T651 aluminum alloy. Johnson-Cook model constants determined for the Al7075-T651 alloy bar material failed to simulate correctly the penetration ... structural components made of high-strength 7075-T651 aluminum alloy. ... Keywords: rate sensitivity, Johnson-Cook, constitutive model. PACS: 62.20.Dc, 62.20.Fe, 62.50.+p, 83.60.La. INTRODUCTION: Aluminum 7075 alloys are...
Tests for Regression Parameters in Power Transformation Models.
1980-01-01
of estimating the correct scale and then performing the usual linear model F-test in this estimated scale. We explore situations in which this ... transformation model. In this model, a simple test consists of estimating the correct scale and then performing the usual linear model F-test in this estimated scale. ... (y1, y2) will be the least squares estimates in the estimated scale λ̂, and ... will be the least squares estimates calculated in the true but
Discrepancy in parameter constraints for LTB models using BAO and SNIa
NASA Astrophysics Data System (ADS)
Vargas, C. Z.; Falciano, F. T.; Reis, R. R. R.
2017-01-01
In the present work we constrain three different profiles of a Lemaître–Tolman–Bondi model using type Ia supernovae and baryon acoustic oscillation data. We use two distinct parameter estimation approaches, namely, the χ2 and the complete likelihood functional. It has been argued that these two approaches are not equivalent, and indeed our analysis shows a specific example of their departure. The combined analysis of BAO + SNIa offers a stringent test for these models. In addition, we improve common practice in the literature by carefully calibrating the supernovae in the appropriate inhomogeneous background dynamics. We address subtle issues in order to propagate the primordial BAO scale to the present epoch.
A General Approach for Specifying Informative Prior Distributions for PBPK Model Parameters
Characterization of uncertainty in model predictions is receiving more interest as more models are being used in applications that are critical to human health. For models in which parameters reflect biological characteristics, it is often possible to provide estimates of paramet...
Ramsay-Curve Item Response Theory for the Three-Parameter Logistic Item Response Model
ERIC Educational Resources Information Center
Woods, Carol M.
2008-01-01
In Ramsay-curve item response theory (RC-IRT), the latent variable distribution is estimated simultaneously with the item parameters of a unidimensional item response model using marginal maximum likelihood estimation. This study evaluates RC-IRT for the three-parameter logistic (3PL) model with comparisons to the normal model and to the empirical…
Identification of the 1PL Model with Guessing Parameter: Parametric and Semi-Parametric Results
ERIC Educational Resources Information Center
San Martin, Ernesto; Rolin, Jean-Marie; Castro, Luis M.
2013-01-01
In this paper, we study the identification of a particular case of the 3PL model, namely when the discrimination parameters are all constant and equal to 1. We term this model, 1PL-G model. The identification analysis is performed under three different specifications. The first specification considers the abilities as unknown parameters. It is…
NASA Astrophysics Data System (ADS)
Strounine, K.; Kravtsov, S.; Kondrashov, D.; Ghil, M.
2010-02-01
Low-frequency variability (LFV) of the atmosphere refers to its behavior on time scales of 10-100 days, longer than the life cycle of a mid-latitude cyclone but shorter than a season. This behavior is still poorly understood and hard to predict. The present study compares various model reduction strategies that help in deriving simplified models of LFV. Three distinct strategies are applied here to reduce a fairly realistic, high-dimensional, quasi-geostrophic, 3-level (QG3) atmospheric model to lower dimensions: (i) an empirical-dynamical method, which retains only a few components in the projection of the full QG3 model equations onto a specified basis, and finds the linear deterministic and the stochastic corrections empirically as in Selten (1995) [5]; (ii) a purely dynamics-based technique, employing the stochastic mode reduction strategy of Majda et al. (2001) [62]; and (iii) a purely empirical, multi-level regression procedure, which specifies the functional form of the reduced model and finds the model coefficients by multiple polynomial regression as in Kravtsov et al. (2005) [3]. The empirical-dynamical and dynamical reduced models were further improved by sequential parameter estimation and benchmarked against multi-level regression models; the extended Kalman filter was used for the parameter estimation. Overall, the reduced models perform better when more statistical information is used in the model construction. Thus, the purely empirical stochastic models with quadratic nonlinearity and additive noise reproduce very well the linear properties of the full QG3 model’s LFV, i.e. its autocorrelations and spectra, as well as the nonlinear properties, i.e. the persistent flow regimes that induce non-Gaussian features in the model’s probability density function. The empirical-dynamical models capture the basic statistical properties of the full model’s LFV, such as the variance and integral correlation time scales of the leading LFV modes, as well as
Use of eigendecomposition in a parameter sensitivity analysis of the Community Land Model
NASA Astrophysics Data System (ADS)
Göhler, M.; Mai, J.; Cuntz, M.
2013-06-01
This study explores the use of eigendecomposition in a sensitivity analysis of the Community Land Model CLM, revision 3.5, with respect to its parametrization. Latent heat, sensible heat, and photosynthesis are used as target variables. The eigendecomposition of a sensitivity matrix, containing numerically derived sensitivity measures, can be used to study parameter significance. Existing parameter ranking and selection methods are examined. Furthermore, a new parameter significance ranking index is proposed which works in concert with a newly proposed selection criterion. This methodology explicitly takes parameter covariations into account. The results are consistent and similar to the most elaborate method tested in this study, but the new method has fewer assumptions. The number of significant parameters depends on the degree of variation that a single parameter is allowed to generate in the cost function. The method declares two-thirds of the 66 parameters to be significant model parameters for an allowed change of 1%, and only 10 parameters for an allowed change of 10%, of the cost function. The sensible heat flux is shown to be the least sensitive model output in comparison with latent heat or photosynthesis. Parameters that determine maximum carboxylation and the slope of stomatal conductance are very sensitive for photosynthesis, whereas soil water parameters are significant for latent heat and C4 photosynthesis. It is concluded that the proposed procedure is parsimonious, can analyze sensitivities of more than one model output simultaneously, and helps to identify significant parameters while taking parameter interactions into account.
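The eigendecomposition of a sensitivity matrix can be sketched generically as follows. The ranking index below is a simple eigenvalue-weighted loading score, not the paper's proposed index, and the sensitivity matrix is synthetic.

```python
import numpy as np

def parameter_importance(S):
    """Rank parameters from a sensitivity matrix S (outputs x parameters).

    Eigendecompose S^T S and score each parameter by its squared loadings
    on the eigenvectors, weighted by the eigenvalues.
    """
    w, V = np.linalg.eigh(S.T @ S)    # ascending eigenvalues
    w, V = w[::-1], V[:, ::-1]        # largest first
    return (V**2 * w).sum(axis=1)     # eigenvalue-weighted loadings

rng = np.random.default_rng(3)
S = rng.normal(size=(50, 6))          # 50 outputs, 6 parameters
S[:, 0] *= 10.0                       # make parameter 0 dominant
scores = parameter_importance(S)      # parameter 0 scores highest
```

In a real application the columns of `S` would be numerical derivatives of the target variables (latent heat, sensible heat, photosynthesis) with respect to each model parameter.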
Estimate of influenza cases using generalized linear, additive and mixed models.
Oviedo, Manuel; Domínguez, Ángela; Pilar Muñoz, M
2015-01-01
We investigated the relationship between reported cases of influenza and several covariates in Catalonia (Spain). The covariates analyzed were population, age, date of report of influenza, and health region during 2010-2014, using data obtained from the SISAP program (Institut Catala de la Salut - Generalitat of Catalonia). Reported cases were related to the covariates in a descriptive analysis. Generalized Linear Models, Generalized Additive Models and Generalized Additive Mixed Models were used to estimate the evolution of the transmission of influenza. Additive models can estimate non-linear effects of the covariates by smooth functions, and mixed models can capture data dependence and variability in factor variables using correlation structures and random effects, respectively. The incidence rate of influenza was calculated as the incidence per 100 000 people. The mean rate was 13.75 (range 0-27.5) in the winter months (December, January, February) and 3.38 (range 0-12.57) in the remaining months. Statistical analysis showed that Generalized Additive Mixed Models were better adapted to the temporal evolution of influenza (serial correlation 0.59) than classical linear models.
Lumpy - an interactive Lumped Parameter Modeling code based on MS Access and MS Excel.
NASA Astrophysics Data System (ADS)
Suckow, A.
2012-04-01
Several tracers for dating groundwater (18O/2H, 3H, CFCs, SF6, 85Kr) need lumped parameter modelling (LPM) to convert measured values into ages. Other tracers (T/3He, 39Ar, 14C, 81Kr) allow the computation of apparent ages with a mathematical formula using radioactive decay, without defining the age mixture that any groundwater sample represents. Interpretation of the latter also profits significantly from LPM tools that allow forward modelling of input time series to measurable output values, assuming different age distributions and mixtures in the sample. This talk presents a lumped parameter modelling code, Lumpy, combining up to two LPMs in parallel. The code is standalone and freeware. It is based on MS Access and Access Basic (AB) and allows using any number of measurements for both input time series and output measurements, with any, not necessarily constant, time resolution. Several tracers, covering very different timescales (e.g., the combination of 18O, CFCs and 14C), can be modelled, displayed and fitted simultaneously. Lumpy allows, for each of the two parallel models, the choice of the following age distributions: Exponential Piston flow Model (EPM), Linear Piston flow Model (LPM), Dispersion Model (DM), Piston flow Model (PM) and Gamma Model (GM). Concerning input functions, Lumpy allows delaying the input (passage through the unsaturated zone), shifting it by a constant value (converting 18O data from a GNIP station to a different altitude), multiplying it by a constant value (geochemical reduction of initial 14C), and defining a constant input value prior to the input time series (pre-bomb tritium). Lumpy also allows underground tracer production (4He or 39Ar) and the computation of a daughter product (tritiugenic 3He), as well as partial loss of the daughter product (partial re-equilibration of 3He). These additional parameters and the input functions can be defined independently for the two sub-LPMs to represent two different recharge
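The core LPM operation, convolving an input time series with an assumed age distribution, can be sketched for an exponential transit-time distribution. This is an illustrative sketch of the general idea, not Lumpy's implementation; the input series and mean age `tau` are made up.

```python
import math

def lumped_output(inputs, tau, dt=1.0):
    """Mix past inputs using exponential age weights g(t) ~ exp(-t/tau)."""
    n = len(inputs)
    g = [math.exp(-i * dt / tau) for i in range(n)]   # age-distribution weights
    out = []
    for k in range(n):
        # output at step k is an age-weighted mixture of all past inputs
        acc = sum(inputs[k - i] * g[i] for i in range(k + 1))
        out.append(acc / sum(g[: k + 1]))
    return out

series = [10.0] * 5 + [20.0] * 45    # step change in the tracer input
mixed = lumped_output(series, tau=10.0)
```

The smoothing and lag of `mixed` relative to `series` is exactly why a direct decay formula is insufficient for tracers like 3H or CFCs: every sample is a mixture of ages, not a single age.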
Nonlinear model predictive control using parameter varying BP-ARX combination model
NASA Astrophysics Data System (ADS)
Yang, J.-F.; Xiao, L.-F.; Qian, J.-X.; Li, H.
2012-03-01
A novel back-propagation AutoRegressive with eXternal input (BP-ARX) combination model is constructed for model predictive control (MPC) of MIMO nonlinear systems whose steady-state relation between inputs and outputs can be obtained. The BP neural network represents the steady-state relation, and the ARX model represents the linear dynamic relation between the inputs and outputs of the nonlinear system. The BP-ARX model is a global model and is identified offline, while the parameters of the ARX model are rescaled online according to the BP neural network and operating data. Sequential quadratic programming is employed to solve the quadratic objective function online, and a shift coefficient is defined to constrain the effect time of the recursive least-squares algorithm. Thus, a parameter-varying nonlinear MPC (PVNMPC) algorithm is proposed that responds quickly to large changes in system set-points and shows good dynamic performance as the system outputs approach the set-points. Simulation results for a multivariable stirred tank and a multivariable pH neutralisation process illustrate the applicability of the proposed method, and the control performance of PVNMPC is also compared with that of a multivariable recursive generalised predictive controller.
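The ARX half of such a combination model, and the idea of rescaling its coefficients to match a steady-state gain, can be sketched as follows. This is a minimal single-input, single-output illustration: the constant `target_gain` stands in for the gain that the BP network would supply, and none of the coefficient values come from the paper:

```python
def arx_predict(y_hist, u_hist, a, b):
    """One-step-ahead prediction of an ARX model
    y(k) = -sum_i a[i]*y(k-1-i) + sum_j b[j]*u(k-1-j).
    y_hist and u_hist list the most recent values first."""
    return (-sum(ai * yi for ai, yi in zip(a, y_hist))
            + sum(bj * uj for bj, uj in zip(b, u_hist)))

def rescale_gain(a, b, target_gain):
    """Rescale the b coefficients so the ARX steady-state gain
    sum(b) / (1 + sum(a)) matches a gain supplied by the steady-state
    model -- a simplified stand-in for the BP-network-based online
    rescaling described in the abstract."""
    current = sum(b) / (1.0 + sum(a))
    return [bj * target_gain / current for bj in b]

# First-order example: y(k) = 0.5*y(k-1) + u(k-1)
y_next = arx_predict([2.0], [1.0], a=[-0.5], b=[1.0])
# Steady-state gain is 1/(1-0.5) = 2; rescale it to 4:
b_new = rescale_gain([-0.5], [1.0], target_gain=4.0)
```

Repeating the prediction with the rescaled coefficients is what makes the overall predictor parameter-varying while the model structure stays fixed.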
Model parameter uncertainty analysis for an annual field-scale P loss model
NASA Astrophysics Data System (ADS)
Bolster, Carl H.; Vadas, Peter A.; Boykin, Debbie
2016-08-01
Phosphorus (P) fate and transport models are important tools for developing and evaluating conservation practices aimed at reducing P losses from agricultural fields. Because all models are simplifications of complex systems, there will exist an inherent amount of uncertainty associated with their predictions. It is therefore important that efforts be directed at identifying, quantifying, and communicating the different sources of model uncertainty. In this study, we conducted an uncertainty analysis with the Annual P Loss Estimator (APLE) model. Our analysis included calculating parameter uncertainties and confidence and prediction intervals for five internal regression equations in APLE. We also estimated uncertainties of the model input variables based on values reported in the literature. We then predicted P loss for a suite of fields under different management and climatic conditions while accounting for uncertainties in the model parameters and inputs, and compared the relative contributions of these two sources of uncertainty to the overall uncertainty associated with predictions of P loss. Both the overall magnitude of the prediction uncertainties and the relative contributions of the two sources of uncertainty varied depending on management practices and field characteristics. This was due to differences in the number of model input variables and the uncertainties in the regression equations associated with each P loss pathway. Inspection of the uncertainties in the five regression equations brought attention to a previously unrecognized limitation of the equation used to partition surface-applied fertilizer P between leaching and runoff losses. As a result, an alternate equation was identified that provided similar predictions with much less uncertainty. Our results demonstrate how a thorough uncertainty and model residual analysis can be used to identify limitations with a model. Such insight can then be used to guide future data collection and model
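Propagating parameter and input uncertainty jointly through a regression-style loss equation is commonly done by Monte Carlo sampling. The sketch below uses a hypothetical linear P-loss equation and made-up means and standard deviations purely for illustration; it is not APLE's actual equation set or the paper's analysis:

```python
import random
import statistics

def p_loss(runoff_mm, soil_p, slope_coef, intercept):
    """Hypothetical linear P-loss regression (illustrative only)."""
    return intercept + slope_coef * runoff_mm * soil_p / 1000.0

def monte_carlo_p_loss(n=10000, seed=42):
    """Sample both a regression parameter and a model input from normal
    distributions and propagate them through the loss equation,
    returning the mean prediction and an approximate 95% interval."""
    rng = random.Random(seed)
    draws = []
    for _ in range(n):
        slope = rng.gauss(0.8, 0.1)      # parameter uncertainty
        runoff = rng.gauss(120.0, 20.0)  # input uncertainty
        draws.append(p_loss(runoff, soil_p=50.0,
                            slope_coef=slope, intercept=0.2))
    mean = statistics.fmean(draws)
    sd = statistics.stdev(draws)
    return mean, (mean - 1.96 * sd, mean + 1.96 * sd)

mean_loss, interval = monte_carlo_p_loss()
```

Fixing one source (e.g., sampling only the parameter while holding the input at its mean) and comparing interval widths is one simple way to attribute the overall uncertainty between the two sources, as the study does.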
Diffusion parameters of indium for silicon process modeling
NASA Astrophysics Data System (ADS)
Kizilyalli, I. C.; Rich, T. L.; Stevie, F. A.; Rafferty, C. S.
1996-11-01
The diffusion parameters of indium in silicon are investigated. Systematic diffusion experiments in dry oxidizing ambients at temperatures ranging from 800 to 1050 °C are conducted using silicon wafers implanted with indium. Secondary-ion-mass spectrometry (SIMS) is used to analyze the dopant distribution before and after heat treatment. The oxidation-enhanced diffusion parameter [R. B. Fair, in Semiconductor Materials and Process Technology Handbook, edited by G. E. McGuire (Noyes, Park Ridge, NJ, 1988); A. M. R. Lin, D. A. Antoniadis, and R. W. Dutton, J. Electrochem. Soc. Solid-State Sci. Technol. 128, 1131 (1981); D. A. Antoniadis and I. Moskowitz, J. Appl. Phys. 53, 9214 (1982)] and the segregation coefficient at the Si/SiO2 interface [R. B. Fair and J. C. C. Tsai, J. Electrochem. Soc. Solid-State Sci. Technol. 125, 2050 (1978)] (ratio of indium concentration in silicon to that in silicon dioxide) are extracted as a function of temperature using SIMS depth profiles and the silicon process simulator PROPHET [M. Pinto, D. M. Boulin, C. S. Rafferty, R. K. Smith, W. M. Coughran, I. C. Kizilyalli, and M. J. Thoma, in IEDM Technical Digest, 1992, p. 923]. It is observed that the segregation coefficient of indium at the Si/SiO2 interface is mIn≪1, similar to boron; however, unlike boron, the segregation coefficient of indium at the Si/SiO2 interface decreases with increasing temperature. Extraction results are summarized in analytical forms suitable for incorporation into other silicon process simulators. Finally, the validity of the extracted parameters is verified by comparing the simulated and measured SIMS profiles for an indium implanted buried-channel p-channel metal-oxide-semiconductor field-effect-transistor [I. C. Kizilyalli, F. A. Stevie, and J. D. Bude, IEEE Electron Device Lett. (1996)] process that involves a gate oxidation and various other thermal processes.
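Diffusion parameters of this kind are conventionally summarized in Arrhenius form for use in process simulators. The sketch below shows that form with placeholder values for the prefactor and activation energy; the actual extracted values for indium are in the paper, not here:

```python
import math

K_B = 8.617333262e-5  # Boltzmann constant in eV/K

def arrhenius(T_kelvin, d0, ea_ev):
    """Arrhenius form D(T) = D0 * exp(-Ea / (kB * T)) typically used to
    summarize a diffusivity for a process simulator. The d0 (cm^2/s)
    and ea_ev (eV) values passed below are placeholders."""
    return d0 * math.exp(-ea_ev / (K_B * T_kelvin))

# Diffusivity must increase monotonically over the studied 800-1050 C range:
d_800 = arrhenius(800.0 + 273.15, d0=1.0, ea_ev=3.5)
d_1050 = arrhenius(1050.0 + 273.15, d0=1.0, ea_ev=3.5)
```

The segregation coefficient reported in the abstract would be fitted with the same functional form, with the sign of the temperature trend (decreasing for indium, unlike boron) captured by the fitted exponent.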
Parameter Variability and Distributional Assumptions in the Diffusion Model
ERIC Educational Resources Information Center
Ratcliff, Roger
2013-01-01
If the diffusion model (Ratcliff & McKoon, 2008) is to account for the relative speeds of correct responses and errors, it is necessary that the components of processing identified by the model vary across the trials of a task. In standard applications, the rate at which information is accumulated by the diffusion process is assumed to be normally…
Averaging Models: Parameters Estimation with the R-Average Procedure
ERIC Educational Resources Information Center
Vidotto, G.; Massidda, D.; Noventa, S.
2010-01-01
The Functional Measurement approach, proposed within the theoretical framework of Information Integration Theory (Anderson, 1981, 1982), can be a useful multi-attribute analysis tool. Compared to the majority of statistical models, the averaging model can account for interaction effects without adding complexity. The R-Average method (Vidotto &…
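The averaging model at the center of this approach combines weighted scale values, optionally including an initial state. A minimal sketch of the standard formula from Information Integration Theory follows; the function name and example numbers are illustrative, not the R-Average procedure itself:

```python
def averaging_response(scale_values, weights, s0=0.0, w0=0.0):
    """Averaging model of Information Integration Theory:
    R = (w0*s0 + sum(w_i * s_i)) / (w0 + sum(w_i)),
    where (s0, w0) is an optional initial state."""
    num = w0 * s0 + sum(w * s for w, s in zip(weights, scale_values))
    den = w0 + sum(weights)
    return num / den

# With equal weights the response reduces to the arithmetic mean:
r_equal = averaging_response([2.0, 4.0], [1.0, 1.0])
# An initial state (s0=0, w0=1) pulls the response toward zero:
r_state = averaging_response([4.0], [1.0], s0=0.0, w0=1.0)
```

Because the denominator carries the weights, adding an attribute can lower the response, which is how the model produces interaction-like effects without extra parameters.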
Estimation of MIMIC Model Parameters with Multilevel Data
ERIC Educational Resources Information Center
Finch, W. Holmes; French, Brian F.
2011-01-01
The purpose of this simulation study was to assess the performance of latent variable models that take into account the complex sampling mechanism that often underlies data used in educational, psychological, and other social science research. Analyses were conducted using the multiple indicator multiple cause (MIMIC) model, which is a flexible…
Relating Data and Models to Characterize Parameter and Prediction Uncertainty
Applying PBPK models in risk analysis requires that we realistically assess the uncertainty of relevant model predictions in as quantitative a way as possible. The reality of human variability may add a confusing feature to the overall uncertainty assessment, as uncertainty and v...
Simultaneous Parameters Identifiability and Estimation of an E. coli Metabolic Network Model
Alberton, André Luís; Di Maggio, Jimena Andrea; Estrada, Vanina Gisela; Díaz, María Soledad; Secchi, Argimiro Resende
2015-01-01
This work proposes a procedure for simultaneous parameter identifiability analysis and estimation in metabolic networks, in order to overcome the difficulties associated with a lack of experimental data and large numbers of parameters, a common scenario in the modeling of such systems. As a case study, the complex real problem of parameter identifiability in the Escherichia coli K-12 W3110 dynamic model, composed of 18 ordinary differential equations and 35 kinetic rates and containing 125 parameters, was investigated. With the procedure, the model fit was improved for most of the measured metabolites, with 58 parameters estimated, including 5 unknown initial conditions. The results indicate that the simultaneous parameter identifiability and estimation approach is appealing for metabolic networks, since the model could be fitted to most of the measured metabolites even when important measurements of intracellular metabolites and good initial parameter estimates are not available. PMID:25654103
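One common building block of identifiability procedures is ranking parameters by how strongly they influence the model output, so that only an identifiable subset is estimated. The sketch below uses a crude finite-difference sensitivity on a toy model; it is a simplified stand-in for, not a reproduction of, the procedure in the paper:

```python
def rank_parameters(model, params, names, rel_step=0.01):
    """Rank parameters by a finite-difference sensitivity measure:
    the relative change in model output for a 1% perturbation of each
    parameter. Parameters with near-zero scores are candidates to be
    fixed rather than estimated."""
    base = model(params)
    scores = {}
    for i, name in enumerate(names):
        perturbed = list(params)
        perturbed[i] *= (1.0 + rel_step)
        scores[name] = abs(model(perturbed) - base) / (abs(base) * rel_step)
    return sorted(scores, key=scores.get, reverse=True)

# Toy model: strong dependence on k1, weak on k2, none on k3.
model = lambda p: p[0] ** 2 + 0.01 * p[1]
ranking = rank_parameters(model, [2.0, 3.0, 5.0], ["k1", "k2", "k3"])
```

In a realistic setting the "model output" would be the full vector of simulated metabolite trajectories compared against the measurements, and the cut-off between estimated and fixed parameters would be chosen from the score spectrum.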
Accurate Critical Parameters for the Modified Lennard-Jones Model
NASA Astrophysics Data System (ADS)
Okamoto, Kazuma; Fuchizaki, Kazuhiro
2017-03-01
The critical parameters of the modified Lennard-Jones system were examined. Isothermal-isochoric ensembles were generated by conducting molecular dynamics simulations of systems consisting of 6912, 8788, 10976, and 13500 particles. Equilibrium between the liquid and vapor phases was judged from the chemical potential of both phases upon establishing the coexistence envelope, from which the critical temperature and density were obtained by invoking renormalization group theory. Finite-size scaling finally enabled us to determine the critical temperature, pressure, and density as Tc = 1.0762(2), pc = 0.09394(17), and ρc = 0.331(3), respectively.
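Extracting a critical temperature from a coexistence envelope typically relies on the scaling law for the order parameter, ρ_l − ρ_v = B(Tc − T)^β. The sketch below linearizes that law with a fixed exponent and fits it by least squares to synthetic data; it is a minimal illustration of the idea, not the paper's renormalization-group or finite-size-scaling analysis:

```python
def estimate_tc(temps, rho_liq, rho_vap, beta=0.326):
    """Estimate Tc from coexistence densities by linearizing the
    scaling law rho_l - rho_v = B * (Tc - T)**beta:
    (rho_l - rho_v)**(1/beta) is linear in T and vanishes at Tc."""
    ys = [(rl - rv) ** (1.0 / beta) for rl, rv in zip(rho_liq, rho_vap)]
    n = len(temps)
    mean_t = sum(temps) / n
    mean_y = sum(ys) / n
    slope = (sum((t - mean_t) * (y - mean_y) for t, y in zip(temps, ys))
             / sum((t - mean_t) ** 2 for t in temps))
    intercept = mean_y - slope * mean_t
    return -intercept / slope  # temperature at which the difference vanishes

# Synthetic envelope generated with Tc = 1.076 (B = 1) should be recovered:
tc_true, beta = 1.076, 0.326
temps = [0.95, 1.00, 1.05]
dr = [(tc_true - t) ** beta for t in temps]
rho_l = [0.33 + d / 2 for d in dr]
rho_v = [0.33 - d / 2 for d in dr]
tc_est = estimate_tc(temps, rho_l, rho_v, beta=beta)
```

The critical density would follow from the law of rectilinear diameters, fitting (ρ_l + ρ_v)/2 linearly in T and evaluating it at the estimated Tc.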