Performance of Transit Model Fitting in Processing Four Years of Kepler Science Data
NASA Astrophysics Data System (ADS)
Li, Jie; Burke, Christopher J.; Jenkins, Jon Michael; Quintana, Elisa V.; Rowe, Jason; Seader, Shawn; Tenenbaum, Peter; Twicken, Joseph D.
2014-06-01
We present transit model fitting performance of the Kepler Science Operations Center (SOC) Pipeline in processing four years of science data, collected by the Kepler spacecraft from May 13, 2009 to May 12, 2013. Threshold Crossing Events (TCEs), which represent transiting planet detections, are generated by the Transiting Planet Search (TPS) component of the pipeline and subsequently processed in the Data Validation (DV) component. The transit model is used in DV to fit TCEs and derive parameters that are used in various diagnostic tests to validate planetary candidates. The standard transit model includes five fit parameters: transit epoch time (i.e., the central time of the first transit), orbital period, impact parameter, ratio of planet radius to star radius, and ratio of semi-major axis to star radius. In the latest Kepler SOC pipeline codebase, the light curve of the target for which a TCE is generated is initially fitted by a trapezoidal model with four parameters: transit epoch time, depth, duration, and ingress time. The trapezoidal model fit, implemented with repeated Levenberg-Marquardt minimization, provides a quick, high-fidelity assessment of the transit signal. The fit parameters of the trapezoidal model with the minimum chi-square metric are converted to set initial values for the fit parameters of the standard transit model. Additional parameters, such as the equilibrium temperature and effective stellar flux of the planet candidate, are derived from the fit parameters of the standard transit model to characterize pipeline candidates in the search for Earth-size planets in the Habitable Zone. The uncertainties of all derived parameters are updated in the latest codebase to account for the propagated errors of the fit parameters as well as the uncertainties in stellar parameters.
The results of the transit model fitting of the TCEs identified by the Kepler SOC Pipeline, including fitted and derived parameters, fit goodness metrics and diagnostic figures, are included in the DV report and one-page report summary, which are accessible by the science community at NASA Exoplanet Archive. Funding for the Kepler Mission has been provided by the NASA Science Mission Directorate.
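The trapezoidal prefit described in this abstract can be sketched on synthetic data. This is a hedged illustration, not SOC pipeline code: the symmetric trapezoid parametrization, the noise level, and the use of SciPy's generic least-squares routine in place of the pipeline's repeated Levenberg-Marquardt minimization are all assumptions.

```python
import numpy as np
from scipy.optimize import least_squares

def trapezoid(t, t0, depth, duration, ingress):
    """Trapezoidal transit: flat bottom of width duration - 2*ingress,
    linear ingress/egress ramps, unit out-of-transit flux."""
    x = np.abs(t - t0)
    half = duration / 2.0
    flat = half - ingress
    f = np.ones_like(t)
    f[x < flat] = 1.0 - depth
    ramp = (x >= flat) & (x < half)
    f[ramp] = 1.0 - depth * (half - x[ramp]) / ingress
    return f

# Synthetic light curve with white noise
rng = np.random.default_rng(0)
t = np.linspace(-0.5, 0.5, 2000)
true_params = (0.0, 0.01, 0.2, 0.04)        # t0, depth, duration, ingress
flux = trapezoid(t, *true_params) + rng.normal(0, 5e-4, t.size)

# Least-squares fit (Levenberg-Marquardt-style) from a nearby initial guess
res = least_squares(lambda p: trapezoid(t, *p) - flux,
                    x0=[0.01, 0.008, 0.25, 0.05])
t0_fit, depth_fit, dur_fit, ingress_fit = res.x
```

The fitted depth, duration, and ingress time would then seed the standard five-parameter transit model fit, as the abstract describes.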
Zhang, Z; Jewett, D L
1994-01-01
Due to model misspecification, currently used Dipole Source Localization (DSL) methods may contain Multiple-Generator Errors (MulGenErrs) when fitting simultaneously active dipoles. The size of the MulGenErr is a function of both the model used and the dipole parameters, including the dipoles' waveforms (time-varying magnitudes). For a given fitting model, by examining the variation of the MulGenErrs (or the fit parameters) under different waveforms for the same generating dipoles, the accuracy of the fitting model for this set of dipoles can be determined. This method of testing model misspecification can be applied to evoked potential maps even when the parameters of the generating dipoles are unknown. The dipole parameters fitted in a model should only be accepted if the model can be shown to be sufficiently accurate.
On the Complexity of Item Response Theory Models.
Bonifay, Wes; Cai, Li
2017-01-01
Complexity in item response theory (IRT) has traditionally been quantified by simply counting the number of freely estimated parameters in the model. However, complexity is also contingent upon the functional form of the model. We examined four popular IRT models-exploratory factor analytic, bifactor, DINA, and DINO-with different functional forms but the same number of free parameters. In comparison, a simpler (unidimensional 3PL) model was specified such that it had 1 more parameter than the previous models. All models were then evaluated according to the minimum description length principle. Specifically, each model was fit to 1,000 data sets that were randomly and uniformly sampled from the complete data space and then assessed using global and item-level fit and diagnostic measures. The findings revealed that the factor analytic and bifactor models possess a strong tendency to fit any possible data. The unidimensional 3PL model displayed minimal fitting propensity, despite the fact that it included an additional free parameter. The DINA and DINO models did not demonstrate a proclivity to fit any possible data, but they did fit well to distinct data patterns. Applied researchers and psychometricians should therefore consider functional form-and not goodness-of-fit alone-when selecting an IRT model.
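As a toy analogue of the fitting-propensity analysis (not an IRT example; the two models, the data distribution, and the grid-based optimizer are all assumptions for illustration), two models with the same number of free parameters can differ sharply in how much of the data space they can absorb:

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0.05, 1.0, 12)

def sse_linear(y):
    # best 2-parameter line a*x + b (closed form)
    A = np.column_stack([x, np.ones_like(x)])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return np.sum((A @ coef - y) ** 2)

def sse_sine(y):
    # best 2-parameter curve a*sin(b*x): grid over b, closed-form a
    best = np.inf
    for b in np.linspace(0.1, 100.0, 800):
        s = np.sin(b * x)
        a = s @ y / (s @ s)
        best = min(best, np.sum((a * s - y) ** 2))
    return best

# Fit both models to data sampled uniformly from the data space;
# the flexible functional form absorbs random "data" far better.
sse_lin, sse_sin = [], []
for _ in range(100):
    y = rng.uniform(-1, 1, x.size)
    sse_lin.append(sse_linear(y))
    sse_sin.append(sse_sine(y))
```

Both models spend two parameters, yet the oscillatory form systematically achieves lower error on arbitrary data, which is exactly the fitting propensity that parameter counting alone cannot see.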
Volume effects of late term normal tissue toxicity in prostate cancer radiotherapy
NASA Astrophysics Data System (ADS)
Bonta, Dacian Viorel
Modeling of volume effects for treatment toxicity is paramount for the optimization of radiation therapy. This thesis proposes a new model for calculating volume effects in gastro-intestinal and genito-urinary normal tissue complication probability (NTCP) following radiation therapy for prostate carcinoma. The radiobiological and pathological basis for this model and its relationship to other models are detailed. A review of radiobiological experiments and published clinical data identified salient features and specific properties a biologically adequate model has to conform to. The new model was fit to a set of actual clinical data. In order to verify the goodness of fit, two established NTCP models and a non-NTCP measure for complication risk were fitted to the same clinical data. The model parameters were fit by maximum likelihood estimation. Within the maximum likelihood framework, I estimated the parameter uncertainties for each complication prediction model. The quality of fit was determined using the Akaike Information Criterion. Based on the model that provided the best fit, I identified the volume effects for both types of toxicities. Computer-based bootstrap resampling of the original dataset was used to estimate the bias and variance of the fitted parameter values. Computer simulation was also used to estimate the population size that generates a specific uncertainty level (3%) in the value of the predicted complication probability. The same method was used to estimate the size of the patient population needed for an accurate choice of the model underlying the NTCP. The results indicate that, depending on the number of parameters of a specific NTCP model, 100 patients (for two-parameter models) or 500 patients (for three-parameter models) are needed for an accurate parameter fit. Correlation of complication occurrence in patients was also investigated.
The results suggest that complication outcomes are correlated in a patient, although the correlation coefficient is rather small.
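The bootstrap bias/variance step generalizes readily. Below is a minimal sketch using a hypothetical two-parameter logistic dose-response model on synthetic outcomes (the model form, cohort size, and parameter values are assumptions, not the thesis's NTCP models):

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)

def p_complication(dose, d50, k):
    # hypothetical logistic dose-response: d50 = dose at 50% risk, k = steepness
    return 1.0 / (1.0 + np.exp(-k * (dose - d50)))

# synthetic cohort: one dose and one binary complication outcome per patient
dose = rng.uniform(50, 90, 150)
outcome = rng.random(150) < p_complication(dose, 70.0, 0.15)

def fit(dose, outcome):
    # maximum likelihood fit of (d50, k) via the binomial log-likelihood
    def nll(theta):
        p = np.clip(p_complication(dose, *theta), 1e-9, 1 - 1e-9)
        return -np.sum(outcome * np.log(p) + (1 - outcome) * np.log(1 - p))
    return minimize(nll, x0=[65.0, 0.1], method="Nelder-Mead").x

theta_hat = fit(dose, outcome)

# nonparametric bootstrap: resample patients with replacement, refit, summarize
boot = []
for _ in range(100):
    idx = rng.integers(0, dose.size, dose.size)
    boot.append(fit(dose[idx], outcome[idx]))
boot = np.array(boot)
bias = boot.mean(axis=0) - theta_hat   # bootstrap estimate of bias
se = boot.std(axis=0)                  # bootstrap standard errors
```

Repeating this with different simulated cohort sizes is one way to estimate how many patients are needed to pin the parameters down to a target uncertainty.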
Qin, Qin; Huang, Alan J; Hua, Jun; Desmond, John E; Stevens, Robert D; van Zijl, Peter C M
2014-02-01
Measurement of the cerebral blood flow (CBF) with whole-brain coverage is challenging in terms of both acquisition and quantitative analysis. In order to fit arterial spin labeling-based perfusion kinetic curves, an empirical three-parameter model which characterizes the effective impulse response function (IRF) is introduced, which allows the determination of CBF, the arterial transit time (ATT) and T(1,eff). The accuracy and precision of the proposed model were compared with those of more complicated models with four or five parameters through Monte Carlo simulations. Pseudo-continuous arterial spin labeling images were acquired on a clinical 3-T scanner in 10 normal volunteers using a three-dimensional multi-shot gradient and spin echo scheme at multiple post-labeling delays to sample the kinetic curves. Voxel-wise fitting was performed using the three-parameter model and other models that contain two, four or five unknown parameters. For the two-parameter model, T(1,eff) values close to tissue and blood were assumed separately. Standard statistical analysis was conducted to compare these fitting models in various brain regions. The fitted results indicated that: (i) the estimated CBF values using the two-parameter model show appreciable dependence on the assumed T(1,eff) values; (ii) the proposed three-parameter model achieves the optimal balance between the goodness of fit and model complexity when compared among the models with explicit IRF fitting; (iii) both the two-parameter model using fixed blood T1 values for T(1,eff) and the three-parameter model provide reasonable fitting results. Using the proposed three-parameter model, the estimated CBF (46 ± 14 mL/100 g/min) and ATT (1.4 ± 0.3 s) values averaged from different brain regions are close to the literature reports; the estimated T(1,eff) values (1.9 ± 0.4 s) are higher than the tissue T1 values, possibly reflecting a contribution from the microvascular arterial blood compartment. 

Universally Sloppy Parameter Sensitivities in Systems Biology Models
Gutenkunst, Ryan N; Waterfall, Joshua J; Casey, Fergal P; Brown, Kevin S; Myers, Christopher R; Sethna, James P
2007-01-01
Quantitative computational models play an increasingly important role in modern biology. Such models typically involve many free parameters, and assigning their values is often a substantial obstacle to model development. Directly measuring in vivo biochemical parameters is difficult, and collectively fitting them to other experimental data often yields large parameter uncertainties. Nevertheless, in earlier work we showed in a growth-factor-signaling model that collective fitting could yield well-constrained predictions, even when it left individual parameters very poorly constrained. We also showed that the model had a “sloppy” spectrum of parameter sensitivities, with eigenvalues roughly evenly distributed over many decades. Here we use a collection of models from the literature to test whether such sloppy spectra are common in systems biology. Strikingly, we find that every model we examine has a sloppy spectrum of sensitivities. We also test several consequences of this sloppiness for building predictive models. In particular, sloppiness suggests that collective fits to even large amounts of ideal time-series data will often leave many parameters poorly constrained. Tests over our model collection are consistent with this suggestion. This difficulty with collective fits may seem to argue for direct parameter measurements, but sloppiness also implies that such measurements must be formidably precise and complete to usefully constrain many model predictions. We confirm this implication in our growth-factor-signaling model. Our results suggest that sloppy sensitivity spectra are universal in systems biology models. The prevalence of sloppiness highlights the power of collective fits and suggests that modelers should focus on predictions rather than on parameters. PMID:17922568
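A sloppy spectrum is easy to reproduce in miniature. The sketch below, assuming a simple sum-of-exponentials model rather than any of the paper's systems-biology models, computes the eigenvalues of the approximate Hessian J^T J of a least-squares cost and shows that they span several decades:

```python
import numpy as np

# model: y(t) = sum_i exp(-theta_i * t); sensitivities evaluated at one parameter point
t = np.linspace(0, 5, 100)
theta = np.array([0.3, 1.0, 3.0, 10.0])

# Jacobian of the model wrt each rate: d y / d theta_i = -t * exp(-theta_i * t)
J = np.stack([-t * np.exp(-th * t) for th in theta], axis=1)

# eigenvalues of the approximate Hessian J^T J, sorted descending
eig = np.linalg.eigvalsh(J.T @ J)[::-1]
decades = np.log10(eig[0] / eig[-1])
```

Even this four-parameter toy spreads its sensitivity eigenvalues over several orders of magnitude: some parameter combinations are tightly constrained by data, others barely at all.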
Kaur, Jaspreet; Nygren, Anders; Vigmond, Edward J
2014-01-01
Fitting parameter sets of non-linear equations in cardiac single-cell ionic models to reproduce experimental behavior is a time-consuming process. The standard procedure is to adjust maximum channel conductances in ionic models to reproduce action potentials (APs) recorded in isolated cells. However, vastly different sets of parameters can produce similar APs. Furthermore, even with an excellent AP match in the single-cell case, tissue behaviour may be very different. We hypothesize that this uncertainty can be reduced by additionally fitting membrane resistance (Rm). To investigate the importance of Rm, we developed a genetic algorithm approach which incorporated Rm data calculated at a few points in the cycle, in addition to AP morphology. Performance was compared to a genetic algorithm using only AP morphology data. The optimal parameter sets and goodness of fit as computed by the different methods were compared. First, we fit an ionic model to itself, starting from a random parameter set. Next, we fit the AP of one ionic model to that of another. Finally, we fit an ionic model to experimentally recorded rabbit action potentials. Adding the extra objective (Rm at a few voltages) to the AP fit led to much better convergence. Typically, a smaller MSE (mean square error, defined as the average of the squared error between the target AP and the fitted AP) was achieved in one fifth of the number of generations compared to using only AP data. Importantly, the variability in fit parameters was also greatly reduced, with many parameters showing an order-of-magnitude decrease in variability. Adding Rm to the objective function improves the robustness of fitting and better preserves tissue-level behavior, and should be incorporated.
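The effect of adding an Rm-like observable to the objective can be caricatured with a deliberately degenerate toy model (everything here is an assumption for illustration; no ionic model is involved, and SciPy's differential evolution stands in for the authors' genetic algorithm):

```python
import numpy as np
from scipy.optimize import differential_evolution

t = np.linspace(0, 5, 50)
g_true = np.array([1.0, 2.0])          # two toy "conductances"

def ap(g):
    # toy "action potential": depends only on g1 + g2, so AP-only fits are degenerate
    return (g[0] + g[1]) * np.exp(-t)

def rm(g):
    # toy "membrane resistance": a different combination that breaks the degeneracy
    return 1.0 / (g[0] + 2.0 * g[1])

ap_target, rm_target = ap(g_true), rm(g_true)

def fit(use_rm, seed):
    def cost(g):
        c = np.mean((ap(g) - ap_target) ** 2)
        if use_rm:
            c += (rm(g) - rm_target) ** 2
        return c
    return differential_evolution(cost, [(0.1, 5.0), (0.1, 5.0)],
                                  seed=seed, tol=1e-10, maxiter=300).x

# repeat each fit from several random starts and compare parameter spread
fits_ap = np.array([fit(False, s) for s in range(5)])
fits_both = np.array([fit(True, s) for s in range(5)])
spread_ap = fits_ap.std(axis=0)
spread_both = fits_both.std(axis=0)
```

With the AP-only objective, any point on the line g1 + g2 = 3 fits perfectly, so repeated runs scatter along it; adding the Rm term makes the optimum unique, mirroring the reduced parameter variability reported in the abstract.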
Outdoor ground impedance models.
Attenborough, Keith; Bashir, Imran; Taherzadeh, Shahram
2011-05-01
Many models for the acoustical properties of rigid-porous media require knowledge of parameter values that are not available for outdoor ground surfaces. The relationship used between tortuosity and porosity for stacked spheres results in five characteristic impedance models that require not more than two adjustable parameters. These models and hard-backed-layer versions are considered further through numerical fitting of 42 short range level difference spectra measured over various ground surfaces. For all but eight sites, slit-pore, phenomenological and variable porosity models yield lower fitting errors than those given by the widely used one-parameter semi-empirical model. Data for 12 of 26 grassland sites and for three beech wood sites are fitted better by hard-backed-layer models. Parameter values obtained by fitting slit-pore and phenomenological models to data for relatively low flow resistivity grounds, such as forest floors, porous asphalt, and gravel, are consistent with values that have been obtained non-acoustically. Three impedance models yield reasonable fits to a narrow band excess attenuation spectrum measured at short range over railway ballast but, if extended reaction is taken into account, the hard-backed-layer version of the slit-pore model gives the most reasonable parameter values.
NASA Technical Reports Server (NTRS)
Giver, Lawrence P.; Benner, D. C.; Tomasko, M. G.; Fink, U.; Kerola, D.
1990-01-01
Transmission measurements made on near-infrared laboratory methane spectra have previously been fit using a Malkmus band model. The laboratory spectra were obtained in three groups at temperatures averaging 112, 188, and 295 K; band model fitting was done separately for each temperature group. These band model parameters cannot be used directly in scattering atmosphere model computations, so an exponential sum model is being developed which includes pressure and temperature fitting parameters. The goal is to obtain model parameters by least square fits at 10/cm intervals from 3800 to 9100/cm. These results will be useful in the interpretation of current planetary spectra and also NIMS spectra of Jupiter anticipated from the Galileo mission.
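An exponential-sum fit of the kind described can be sketched as follows, assuming synthetic noiseless transmission data and a two-term sum (real band-model work uses more terms plus pressure and temperature dependence): transmission versus absorber amount u is approximated by T(u) ≈ Σ w_i exp(-k_i u) with w_i in (0, 1) summing to one.

```python
import numpy as np
from scipy.optimize import least_squares

# synthetic "measured" transmission vs absorber amount u
u = np.linspace(0, 10, 40)
T_meas = 0.6 * np.exp(-0.2 * u) + 0.4 * np.exp(-3.0 * u)

def residuals(p):
    # log-parametrize the absorption coefficients for positivity;
    # a logistic weight enforces w1 + w2 = 1 with w1 in (0, 1)
    k1, k2, a = np.exp(p[0]), np.exp(p[1]), p[2]
    w1 = 1.0 / (1.0 + np.exp(-a))
    model = w1 * np.exp(-k1 * u) + (1.0 - w1) * np.exp(-k2 * u)
    return model - T_meas

p = least_squares(residuals, x0=[np.log(0.3), np.log(2.0), 0.4]).x
k = np.sort(np.exp(p[:2]))
w1_fit = 1.0 / (1.0 + np.exp(-p[2]))
```

Unlike band-model parameters, the recovered (w_i, k_i) pairs behave like monochromatic absorption coefficients and so can be carried directly into multiple-scattering computations, which is the motivation stated in the abstract.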
Using the Modification Index and Standardized Expected Parameter Change for Model Modification
ERIC Educational Resources Information Center
Whittaker, Tiffany A.
2012-01-01
Model modification is oftentimes conducted after discovering a badly fitting structural equation model. During the modification process, the modification index (MI) and the standardized expected parameter change (SEPC) are 2 statistics that may be used to aid in the selection of parameters to add to a model to improve the fit. The purpose of this…
NASA Astrophysics Data System (ADS)
De Geyter, G.; Baes, M.; Fritz, J.; Camps, P.
2013-02-01
We present FitSKIRT, a method to efficiently fit radiative transfer models to UV/optical images of dusty galaxies. These images have the advantage of better spatial resolution compared to FIR/submm data. FitSKIRT uses the GAlib genetic algorithm library to optimize the output of the SKIRT Monte Carlo radiative transfer code. Genetic algorithms prove to be a valuable tool in handling the multi-dimensional search space as well as the noise induced by the random nature of the Monte Carlo radiative transfer code. FitSKIRT is tested on artificial images of a simulated edge-on spiral galaxy, where we gradually increase the number of fitted parameters. We find that we can recover all model parameters, even if all 11 model parameters are left unconstrained. Finally, we apply the FitSKIRT code to a V-band image of the edge-on spiral galaxy NGC 4013. This galaxy has been modeled previously by other authors using different combinations of radiative transfer codes and optimization methods. Given the different models and techniques and the complexity and degeneracies in the parameter space, we find reasonable agreement between the different models. We conclude that the FitSKIRT method allows comparison between different models and geometries in a quantitative manner and minimizes the need for human intervention and bias. The high level of automation makes it an ideal tool to use on larger sets of observed data.
Application of separable parameter space techniques to multi-tracer PET compartment modeling.
Zhang, Jeff L; Michael Morey, A; Kadrmas, Dan J
2016-02-07
Multi-tracer positron emission tomography (PET) can image two or more tracers in a single scan, characterizing multiple aspects of biological functions to provide new insights into many diseases. The technique uses dynamic imaging, resulting in time-activity curves that contain contributions from each tracer present. The process of separating and recovering separate images and/or imaging measures for each tracer requires the application of kinetic constraints, which are most commonly applied by fitting parallel compartment models for all tracers. Such multi-tracer compartment modeling presents challenging nonlinear fits in multiple dimensions. This work extends separable parameter space kinetic modeling techniques, previously developed for fitting single-tracer compartment models, to fitting multi-tracer compartment models. The multi-tracer compartment model solution equations were reformulated to maximally separate the linear and nonlinear aspects of the fitting problem, and separable least-squares techniques were applied to effectively reduce the dimensionality of the nonlinear fit. The benefits of the approach are then explored through a number of illustrative examples, including characterization of separable parameter space multi-tracer objective functions and demonstration of exhaustive search fits which guarantee the true global minimum to within arbitrary search precision. Iterative gradient-descent algorithms using Levenberg-Marquardt were also tested, demonstrating improved fitting speed and robustness as compared to corresponding fits using conventional model formulations. The proposed technique overcomes many of the challenges in fitting simultaneous multi-tracer PET compartment models.
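The separable least-squares (variable projection) idea can be shown in miniature: for a model that is linear in its amplitudes, the linear parameters are solved in closed form inside the nonlinear search, halving the dimensionality. Below is a sketch on an assumed two-exponential curve, not the paper's multi-tracer compartment models:

```python
import numpy as np
from scipy.optimize import minimize

t = np.linspace(0, 10, 60)
y = 2.0 * np.exp(-0.3 * t) + 1.0 * np.exp(-2.0 * t)   # synthetic time-activity curve

def projected_sse(log_rates):
    """For fixed nonlinear rates, solve the linear amplitudes exactly,
    reducing a 4-parameter fit to a 2-parameter nonlinear search."""
    B = np.exp(-np.exp(log_rates)[None, :] * t[:, None])   # basis, shape (60, 2)
    amps, *_ = np.linalg.lstsq(B, y, rcond=None)
    return np.sum((B @ amps - y) ** 2)

res = minimize(projected_sse, x0=np.log([0.5, 1.0]), method="Nelder-Mead")
rates = np.sort(np.exp(res.x))
```

Because the search space shrinks to the nonlinear rates alone, exhaustive grid searches over it (as the paper exploits) also become tractable.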
NASA Astrophysics Data System (ADS)
Adams, Matthew P.; Collier, Catherine J.; Uthicke, Sven; Ow, Yan X.; Langlois, Lucas; O'Brien, Katherine R.
2017-01-01
When several models can describe a biological process, the equation that best fits the data is typically considered the best. However, models are most useful when they also possess biologically-meaningful parameters. In particular, model parameters should be stable, physically interpretable, and transferable to other contexts, e.g. for direct indication of system state, or usage in other model types. As an example of implementing these recommended requirements for model parameters, we evaluated twelve published empirical models for temperature-dependent tropical seagrass photosynthesis, based on two criteria: (1) goodness of fit, and (2) how easily biologically-meaningful parameters can be obtained. All models were formulated in terms of parameters characterising the thermal optimum (Topt) for maximum photosynthetic rate (Pmax). These parameters indicate the upper thermal limits of seagrass photosynthetic capacity, and hence can be used to assess the vulnerability of seagrass to temperature change. Our study exemplifies an approach to model selection which optimises the usefulness of empirical models for both modellers and ecologists alike.
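The second criterion, directly obtainable biologically meaningful parameters, can be illustrated with one common functional form (a sketch with an assumed Gaussian temperature response and synthetic numbers, not one of the twelve reviewed models): writing the curve in terms of Topt and Pmax makes the quantities of interest fitted parameters, with uncertainties available directly from the fit rather than derived post hoc.

```python
import numpy as np
from scipy.optimize import curve_fit

def photosynthesis(T, p_max, t_opt, width):
    # Gaussian temperature response: p_max and t_opt appear as direct parameters
    return p_max * np.exp(-((T - t_opt) ** 2) / (2.0 * width ** 2))

# synthetic photosynthesis-vs-temperature measurements
rng = np.random.default_rng(3)
T = np.linspace(15, 40, 30)
P = photosynthesis(T, 10.0, 30.0, 5.0) + rng.normal(0, 0.3, T.size)

popt, pcov = curve_fit(photosynthesis, T, P, p0=[8.0, 28.0, 4.0])
p_max_fit, t_opt_fit, width_fit = popt
t_opt_se = np.sqrt(pcov[1, 1])   # uncertainty of the thermal optimum, read off directly
```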
NASA Astrophysics Data System (ADS)
Lee, Kyu Sang; Gill, Wonpyong
2017-11-01
The dynamic properties, such as the crossing time and time-dependence of the relative density of the four-state haploid coupled discrete-time mutation-selection model, were calculated with the assumption that μ_ij = μ_ji, where μ_ij denotes the mutation rate between the sequence elements i and j. The crossing time for s = 0 and r_23 = r_42 = 1 in the four-state model became saturated at a large fitness parameter when r_12 > 1, was scaled as a power law in the fitness parameter when r_12 = 1, and diverged when the fitness parameter approached the critical fitness parameter when r_12 < 1, where r_ij = μ_ij / μ_14.
Estimation of parameters of dose volume models and their confidence limits
NASA Astrophysics Data System (ADS)
van Luijk, P.; Delvigne, T. C.; Schilstra, C.; Schippers, J. M.
2003-07-01
Predictions of the normal-tissue complication probability (NTCP) for the ranking of treatment plans are based on fits of dose-volume models to clinical and/or experimental data. In the literature several different fit methods are used. In this work frequently used methods and techniques to fit NTCP models to dose response data for establishing dose-volume effects, are discussed. The techniques are tested for their usability with dose-volume data and NTCP models. Different methods to estimate the confidence intervals of the model parameters are part of this study. From a critical-volume (CV) model with biologically realistic parameters a primary dataset was generated, serving as the reference for this study and describable by the NTCP model. The CV model was fitted to this dataset. From the resulting parameters and the CV model, 1000 secondary datasets were generated by Monte Carlo simulation. All secondary datasets were fitted to obtain 1000 parameter sets of the CV model. Thus the 'real' spread in fit results due to statistical spreading in the data is obtained and has been compared with estimates of the confidence intervals obtained by different methods applied to the primary dataset. The confidence limits of the parameters of one dataset were estimated using the methods, employing the covariance matrix, the jackknife method and directly from the likelihood landscape. These results were compared with the spread of the parameters, obtained from the secondary parameter sets. For the estimation of confidence intervals on NTCP predictions, three methods were tested. Firstly, propagation of errors using the covariance matrix was used. Secondly, the meaning of the width of a bundle of curves that resulted from parameters that were within the one standard deviation region in the likelihood space was investigated. Thirdly, many parameter sets and their likelihood were used to create a likelihood-weighted probability distribution of the NTCP. 
It is concluded that for the type of dose response data used here, only a full likelihood analysis will produce reliable results. The often-used approximations, such as the use of the covariance matrix, produce inconsistent confidence limits on both the parameter sets and the resulting NTCP values.
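The contrast between covariance-matrix (Wald) intervals and full-likelihood intervals can be sketched on a toy binomial dose-response fit (the two-parameter logistic form and all numbers are assumptions, not the critical-volume model of this work):

```python
import numpy as np
from scipy.optimize import minimize, minimize_scalar

rng = np.random.default_rng(4)
dose = rng.uniform(40, 100, 80)
p_true = 1.0 / (1.0 + np.exp(-0.12 * (dose - 72.0)))
comp = rng.random(80) < p_true            # binary complication outcomes

def nll(d50, k):
    p = np.clip(1.0 / (1.0 + np.exp(-k * (dose - d50))), 1e-9, 1 - 1e-9)
    return -np.sum(comp * np.log(p) + (~comp) * np.log(1 - p))

fit = minimize(lambda th: nll(*th), x0=[70.0, 0.1], method="Nelder-Mead")
d50_hat, k_hat = fit.x

# covariance-matrix (Wald) interval: invert a finite-difference Hessian at the optimum
h = np.array([1e-2, 1e-3])
H = np.zeros((2, 2))
for i in range(2):
    for j in range(2):
        ei, ej = np.eye(2)[i] * h[i], np.eye(2)[j] * h[j]
        H[i, j] = (nll(*(fit.x + ei + ej)) - nll(*(fit.x + ei - ej))
                   - nll(*(fit.x - ei + ej)) + nll(*(fit.x - ei - ej))) / (4 * h[i] * h[j])
cov = np.linalg.inv(H)
wald = 1.96 * np.sqrt(cov[0, 0])          # half-width of the 95% Wald interval for d50

# profile likelihood: scan d50, re-optimize k at each point, threshold at chi2_1(0.95)/2
def profile(d50):
    return minimize_scalar(lambda k: nll(d50, k), bounds=(1e-3, 1.0),
                           method="bounded").fun

grid = np.linspace(d50_hat - 15, d50_hat + 15, 121)
prof = np.array([profile(g) for g in grid])
inside = grid[prof - fit.fun <= 1.92]
prof_lo, prof_hi = inside.min(), inside.max()
```

For well-behaved data the two intervals are similar; when the likelihood surface is asymmetric or the sample is small, the profile interval reveals the asymmetry that the covariance-matrix approximation hides.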
Comparing basal area growth models, consistency of parameters, and accuracy of prediction
J.J. Colbert; Michael Schuckers; Desta Fekedulegn
2002-01-01
We fit alternative sigmoid growth models to sample tree basal area historical data derived from increment cores and disks taken at breast height. We examine and compare the estimated parameters for these models across a range of sample sites. Models are rated on consistency of parameters and on their ability to fit growth data from four sites that are located across a...
Lutchen, K R
1990-08-01
A sensitivity analysis based on weighted least-squares regression is presented to evaluate alternative methods for fitting lumped-parameter models to respiratory impedance data. The goal is to maintain parameter accuracy simultaneously with practical experiment design. The analysis focuses on predicting parameter uncertainties using a linearized approximation for joint confidence regions. Applications are presented for four-element parallel and viscoelastic models fit to 0.125- to 4-Hz data and for a six-element model with separate tissue and airway properties fit to input and transfer impedance data from 2 to 64 Hz. The form of the criterion function was evaluated by comparing parameter uncertainties when data are fit as magnitude and phase, dynamic resistance and compliance, or real and imaginary parts of input impedance. With the proper choice of weighting, all three criterion variables are comparable. For the six-element model, parameter uncertainties were predicted when both input impedance and transfer impedance are acquired and fit simultaneously. A fit to both data sets from 4 to 64 Hz could reduce parameter estimate uncertainties considerably from those achievable by fitting either alone. For the four-element models, the use of an independent, but noisy, measure of static compliance was assessed as a constraint on model parameters. This may allow acceptable parameter uncertainties for a minimum frequency of 0.275-0.375 Hz rather than 0.125 Hz, reducing data acquisition requirements from a 16-s breath-holding period to one of 5.33 to 8 s. These results are approximations, and the impact of using the linearized approximation for the confidence regions is discussed.
Bayesian inference in an item response theory model with a generalized student t link function
NASA Astrophysics Data System (ADS)
Azevedo, Caio L. N.; Migon, Helio S.
2012-10-01
In this paper we introduce a new item response theory (IRT) model with a generalized Student t link function with unknown degrees of freedom (df), named the generalized t-link (GtL) IRT model. In this model we consider only the difficulty parameter in the item response function. The GtL model is an alternative to the two-parameter logit and probit models, since the degrees of freedom play a role similar to that of the discrimination parameter. However, the behavior of the GtL curves differs from that of the two-parameter models and the usual Student t link, since in GtL the curves obtained for different df can cross the probit curves at more than one latent trait level. The GtL model has properties similar to those of generalized linear mixed models, such as the existence of sufficient statistics and easy parameter interpretation. Also, many techniques of parameter estimation, model fit assessment and residual analysis developed for those models can be used for the GtL model. We develop fully Bayesian estimation and model fit assessment tools through a Metropolis-Hastings step within a Gibbs sampling algorithm. We also assess sensitivity to the choice of prior for the degrees of freedom. The simulation study indicates that the algorithm recovers all parameters properly. In addition, some Bayesian model fit assessment tools are considered. Finally, a real data set is analyzed using our approach and other usual models. The results indicate that our model fits the data better than the two-parameter models.
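As a rough illustration of the t-link idea, the item response function replaces the probit's normal CDF with a Student t CDF whose df controls tail heaviness, much as a discrimination parameter controls slope. A minimal sketch under stated assumptions (hypothetical names; crude numeric integration of the t density rather than a library CDF):

```python
import math

def t_pdf(x, df):
    # Student t probability density with df degrees of freedom
    c = math.gamma((df + 1) / 2) / (math.sqrt(df * math.pi) * math.gamma(df / 2))
    return c * (1 + x * x / df) ** (-(df + 1) / 2)

def t_cdf(x, df, lo=-40.0, n=4000):
    # crude trapezoidal integration of the t density (illustration only)
    h = (x - lo) / n
    s = 0.5 * (t_pdf(lo, df) + t_pdf(x, df))
    for i in range(1, n):
        s += t_pdf(lo + i * h, df)
    return s * h

def gtl_irf(theta, b, df):
    """GtL-style item response function: probability of a correct
    response for latent trait theta, difficulty b, and link df."""
    return t_cdf(theta - b, df)
```

Smaller df gives heavier tails and hence a shallower response curve away from the difficulty point, which is the sense in which df mimics a discrimination parameter.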
Event-scale power law recession analysis: quantifying methodological uncertainty
NASA Astrophysics Data System (ADS)
Dralle, David N.; Karst, Nathaniel J.; Charalampous, Kyriakos; Veenstra, Andrew; Thompson, Sally E.
2017-01-01
The study of single streamflow recession events is receiving increasing attention following the presentation of novel theoretical explanations for the emergence of power-law forms of the recession relationship and the drivers of its variability. Individually characterizing streamflow recessions often involves describing the similarities and differences between model parameters fitted to each recession time series. Significant methodological sensitivity has been identified in the fitting and parameterization of models that describe populations of many recessions, but the dependence of estimated model parameters on methodological choices has not been evaluated for event-by-event forms of analysis. Here, we use daily streamflow data from 16 catchments in northern California and southern Oregon to investigate how combinations of commonly used streamflow recession definitions and fitting techniques impact parameter estimates of a widely used power-law recession model. Results are relevant to watersheds that are relatively steep, forested, and rain-dominated. The highly seasonal Mediterranean climate of northern California and southern Oregon ensures that the study catchments explore a wide range of recession behaviors and wetness states, ideal for a sensitivity analysis.
In such catchments, we show the following: (i) methodological decisions, including ones that have received little attention in the literature, can impact parameter value estimates and model goodness of fit; (ii) the central tendencies of event-scale recession parameter probability distributions are largely robust to methodological choices, in the sense that differing methods rank catchments similarly according to the medians of these distributions; (iii) recession parameter distributions are method-dependent but roughly catchment-independent, such that changing the choices made about a particular method affects a given parameter in similar ways across most catchments; and (iv) the observed correlative relationship between the power-law recession scale parameter and catchment antecedent wetness varies depending on recession definition and fitting choices. In light of these results, we recommend a combination of four key methodological decisions to maximize the quality of fitted recession curves and to minimize bias in the related populations of fitted recession parameters.
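The event-scale power-law recession model in question is -dQ/dt = aQ^b. One common fitting choice, among the several the study compares, is ordinary least squares in log-log space with a finite-difference estimate of dQ/dt; a minimal sketch of that single choice (hypothetical names; other recession definitions and fitting techniques, which the study shows matter, are omitted):

```python
import math

def fit_recession(q):
    """Event-scale power-law recession fit: estimate (a, b) in
    -dQ/dt = a*Q**b by ordinary least squares in log-log space,
    with a forward difference approximating -dQ/dt for a daily series."""
    xs, ys = [], []
    for i in range(len(q) - 1):
        dqdt = q[i] - q[i + 1]               # -dQ/dt, one-day time step
        if dqdt > 0:
            xs.append(math.log(q[i]))
            ys.append(math.log(dqdt))
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    a = math.exp(my - b * mx)
    return a, b

# synthetic recession generated by -dQ/dt = 0.01*Q**2, Q(0) = 10
q = [10.0 / (1.0 + 0.1 * t) for t in range(40)]
a_hat, b_hat = fit_recession(q)
```

Even on noise-free synthetic data the forward difference slightly biases the recovered exponent low, a small example of the methodological sensitivity the abstract describes.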
Polynomials to model the growth of young bulls in performance tests.
Scalez, D C B; Fragomeni, B O; Passafaro, T L; Pereira, I G; Toral, F L B
2014-03-01
The use of polynomial functions to describe the average growth trajectory and covariance functions of Nellore and MA (21/32 Charolais + 11/32 Nellore) young bulls in performance tests was studied. The average growth trajectories and additive genetic and permanent environmental covariance functions were fit with Legendre (linear through quintic) and quadratic B-spline (with two to four intervals) polynomials. In general, the Legendre and quadratic B-spline models that included more covariance parameters provided a better fit to the data. When comparing models with the same number of parameters, the quadratic B-spline provided a better fit than the Legendre polynomials. The quadratic B-spline with four intervals provided the best fit for the Nellore and MA groups. The fitting of random regression models with different types of polynomials (Legendre polynomials or B-spline) affected neither the genetic parameter estimates nor the ranking of the Nellore young bulls. However, fitting different types of polynomials affected the genetic parameter estimates and the ranking of the MA young bulls. Parsimonious Legendre or quadratic B-spline models could be used for genetic evaluation of body weight of Nellore young bulls in performance tests, whereas these parsimonious models were less efficient for animals of the MA genetic group owing to limited data at the extreme ages.
Unifying distance-based goodness-of-fit indicators for hydrologic model assessment
NASA Astrophysics Data System (ADS)
Cheng, Qinbo; Reinhardt-Imjela, Christian; Chen, Xi; Schulte, Achim
2014-05-01
The goodness-of-fit indicator, i.e. the efficiency criterion, is very important for model calibration. However, current knowledge about goodness-of-fit indicators is largely empirical and lacks theoretical support. Based on likelihood theory, a unified distance-based goodness-of-fit indicator termed the BC-GED model is proposed, which uses the Box-Cox (BC) transformation to remove the heteroscedasticity of model errors and the generalized error distribution (GED) with zero mean to fit the distribution of the model errors after the BC transformation. The BC-GED model unifies all recent distance-based goodness-of-fit indicators, and reveals that the widely used mean square error (MSE) and mean absolute error (MAE) imply the statistical assumptions that the model errors follow a Gaussian distribution and a zero-mean Laplace distribution, respectively. Empirical knowledge about goodness-of-fit indicators can also be easily interpreted by the BC-GED model; e.g. the sensitivity to high flow of goodness-of-fit indicators with a large power of model errors results from the low probability of large model errors under the distribution assumed by those indicators. In order to assess the effect of the BC-GED model parameters (the BC transformation parameter λ and the GED kurtosis coefficient β, also termed the power of model errors) on hydrologic model calibration, six cases of the BC-GED model were applied in the Baocun watershed (East China) with the SWAT-WB-VSA model. Comparison of the inferred model parameters and model simulation results among the six indicators demonstrates that these indicators can be clearly separated into two classes by the GED kurtosis β: β > 1 and β ≤ 1.
SWAT-WB-VSA calibrated with the β > 1 class of distance-based goodness-of-fit indicators captures high flow very well but reproduces baseflow poorly, whereas calibration with the β ≤ 1 class reproduces baseflow very well. This is because, first, the larger the value of β, the greater the emphasis placed on high flow, and second, the derivative of the GED probability density function at zero is zero for β > 1 but discontinuous for β ≤ 1, and even infinite for β < 1, in which case maximum likelihood estimation drives the model errors toward zero as far as possible. The BC-GED approach, which estimates the BC-GED model parameters (λ and β) together with the hydrologic model parameters, is the best distance-based goodness-of-fit indicator, because the model validation using groundwater levels is very good and the model errors best fulfill the statistical assumptions. However, in some cases of model calibration with few observations, e.g. calibration of a single-event model, the MAE, i.e. the boundary indicator (β = 1) between the two classes, can replace the BC-GED to avoid estimating the BC-GED model parameters, because the model validation of the MAE is best in that setting.
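The unifying idea can be shown directly: with a fixed error scale, maximizing the zero-mean GED likelihood of Box-Cox-transformed errors amounts to minimizing the mean of |e|^β, so β = 2 recovers the MSE (Gaussian errors) and β = 1 the MAE (Laplace errors). A minimal sketch (hypothetical function names):

```python
import math

def box_cox(y, lam):
    """Box-Cox transform, used to remove heteroscedasticity of the
    model errors before the GED error model is applied."""
    return (y ** lam - 1.0) / lam if lam != 0 else math.log(y)

def ged_objective(errors, beta):
    """Distance implied by a zero-mean GED error model with fixed scale:
    the maximum-likelihood objective reduces to mean(|e|**beta);
    beta=2 is the MSE (Gaussian errors), beta=1 the MAE (Laplace)."""
    return sum(abs(e) ** beta for e in errors) / len(errors)
```

Larger β penalizes large errors more heavily, which is the formal version of the abstract's point that high-β indicators emphasize high flows.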
Application of genetic algorithm in modeling on-wafer inductors for up to 110 GHz
NASA Astrophysics Data System (ADS)
Liu, Nianhong; Fu, Jun; Liu, Hui; Cui, Wenpu; Liu, Zhihong; Liu, Linlin; Zhou, Wei; Wang, Quan; Guo, Ao
2018-05-01
In this work, the genetic algorithm has been introduced into parameter extraction for on-wafer inductors operating at up to 110 GHz for millimeter-wave applications, and nine independent parameters of the equivalent circuit model are optimized together. With the genetic algorithm, the model with optimized parameters gives better fitting accuracy than the preliminary parameters without optimization. In particular, the fitting accuracy of the Q value improves significantly after the optimization.
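A genetic algorithm for this kind of multi-parameter extraction can be sketched generically: a population of candidate parameter vectors evolves under selection, crossover, and mutation against a fitting-error objective. The sketch below is a toy real-coded GA minimizing a hypothetical two-parameter error surface, standing in for the nine-parameter equivalent-circuit fitting error (all names and settings are illustrative assumptions):

```python
import random

def genetic_minimize(err, bounds, pop=40, gens=60, seed=1):
    """Toy real-coded genetic algorithm: tournament selection, blend
    crossover, Gaussian mutation, and elitism, minimizing err(params)."""
    rng = random.Random(seed)
    P = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop)]
    for _ in range(gens):
        P.sort(key=err)
        nxt = P[:2]                              # elitism: keep the two best
        while len(nxt) < pop:
            a = min(rng.sample(P, 3), key=err)   # tournament selection
            b = min(rng.sample(P, 3), key=err)
            child = []
            for (lo, hi), xa, xb in zip(bounds, a, b):
                x = xa + rng.random() * (xb - xa)        # blend crossover
                x += rng.gauss(0.0, 0.02 * (hi - lo))    # Gaussian mutation
                child.append(min(hi, max(lo, x)))        # clip to bounds
            nxt.append(child)
        P = nxt
    return min(P, key=err)

# hypothetical stand-in for the circuit-model fitting error,
# with optimum at (1.0, 2.0)
err = lambda p: (p[0] - 1.0) ** 2 + (p[1] - 2.0) ** 2
best = genetic_minimize(err, [(0.0, 5.0), (0.0, 5.0)])
```

Because the GA only needs objective evaluations, the same loop applies whether the error is a toy quadratic or an S-parameter mismatch over frequency.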
A dual-process approach to exploring the role of delay discounting in obesity.
Price, Menna; Higgs, Suzanne; Maw, James; Lee, Michelle
2016-08-01
Delay discounting of financial rewards has been related to overeating and obesity. Neuropsychological evidence supports a dual-system account of both discounting and overeating behaviour, in which the degree of impulsive decision making is determined by the relative strength of reward desire and executive control. A dual-parameter model of discounting behaviour is consistent with this theory. In this study, the fit of the commonly used one-parameter model was compared to a new dual-parameter model for the first time in a sample of adults with a wide-ranging BMI. Delay discounting data from 79 participants (26 males) across a wide age (M = 28.44 years, SD = 8.81) and BMI range (M = 25.42, SD = 5.16) were analysed. A dual-parameter model (saturating-hyperbolic; Doya, 2008) was applied to the data and compared on model fit indices to the single-parameter model. Discounting was significantly greater in the overweight/obese participants using both models; however, the dual-parameter model showed a superior fit to the data (p < 0.0001), and its two parameters were shown to be related yet distinct measures, consistent with a dual-system account of inter-temporal choice behaviour and sensitive to differences between weight groups. Findings are discussed in terms of the impulsive reward and executive control systems that contribute to unhealthy food choice and within the context of obesity-related research. Copyright © 2016 Elsevier Inc. All rights reserved.
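For orientation, the standard one-parameter model is the hyperbolic discount function V = A/(1 + kD). The sketch below pairs it with an illustrative dual-parameter generalization, the Green-Myerson hyperboloid, used here only as a stand-in since the exact saturating-hyperbolic form of Doya (2008) is not reproduced in this abstract:

```python
def hyperbolic(amount, delay, k):
    """One-parameter hyperbolic discounting: V = A / (1 + k*D)."""
    return amount / (1.0 + k * delay)

def hyperboloid(amount, delay, k, s):
    """Illustrative dual-parameter form V = A / (1 + k*D)**s
    (Green-Myerson hyperboloid); s = 1 recovers the one-parameter
    model, so the second parameter adds curvature flexibility."""
    return amount / (1.0 + k * delay) ** s
```

The second parameter lets steepness near zero delay and the long-delay tail vary separately, which is the kind of flexibility a dual-system account predicts.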
Fitting Item Response Theory Models to Two Personality Inventories: Issues and Insights.
Chernyshenko, O S; Stark, S; Chan, K Y; Drasgow, F; Williams, B
2001-10-01
The present study compared the fit of several IRT models to two personality assessment instruments. Data from 13,059 individuals responding to the US-English version of the Fifth Edition of the Sixteen Personality Factor Questionnaire (16PF) and 1,770 individuals responding to Goldberg's 50 item Big Five Personality measure were analyzed. Various issues pertaining to the fit of the IRT models to personality data were considered. We examined two of the most popular parametric models designed for dichotomously scored items (i.e., the two- and three-parameter logistic models) and a parametric model for polytomous items (Samejima's graded response model). Also examined were Levine's nonparametric maximum likelihood formula scoring models for dichotomous and polytomous data, which were previously found to provide good fits to several cognitive ability tests (Drasgow, Levine, Tsien, Williams, & Mead, 1995). The two- and three-parameter logistic models fit some scales reasonably well but not others; the graded response model generally did not fit well. The nonparametric formula scoring models provided the best fit of the models considered. Several implications of these findings for personality measurement and personnel selection were described.
Quantitative Rheological Model Selection
NASA Astrophysics Data System (ADS)
Freund, Jonathan; Ewoldt, Randy
2014-11-01
The more parameters in a rheological model, the better it will reproduce available data, though this does not mean it is a better-justified model. Good fits are only part of model selection. We employ a Bayesian inference approach that quantifies model suitability by balancing closeness to the data against both the number of model parameters and their a priori uncertainty. The penalty depends upon the prior-to-calibration expectation of the viable range of values that model parameters might take, which we discuss as an essential aspect of the selection criterion. Models that are physically grounded are usually accompanied by tighter physical constraints on their respective parameters. The analysis reflects a basic principle: models grounded in physics can be expected to enjoy greater generality and perform better away from where they are calibrated. In contrast, purely empirical models can provide comparable fits, but the model selection framework penalizes their a priori uncertainty. We demonstrate the approach by selecting the best-justified number of modes in a multi-mode Maxwell description of PVA-Borax. We also quantify the relative merits of the Maxwell model against power-law fits and purely empirical fits for PVA-Borax, a viscoelastic liquid, and gluten.
Poroviscoelastic cartilage properties in the mouse from indentation.
Chiravarambath, Sidharth; Simha, Narendra K; Namani, Ravi; Lewis, Jack L
2009-01-01
A method for fitting parameters in a poroviscoelastic (PVE) model of articular cartilage in the mouse is presented. Indentation is performed using two different sized indenters, and these data are fitted using a PVE finite element program and a parameter extraction algorithm. Data from a smaller indenter, a 15 μm diameter flat-ended 60 deg cone, are first used to fit the viscoelastic (VE) parameters, on the basis that for this tip size the gel diffusion time (the approximate time constant of the poroelastic (PE) response) is of the order of 0.1 s, so that the PE response is negligible. These parameters are then used to fit the data from a second, 170 μm diameter flat-ended 60 deg cone for the PE parameters, using the VE parameters extracted from the data from the 15 μm tip. Data from tests on five different mouse tibial plateaus are presented and fitted. Parameter variation studies for the larger indenter show that in this case the VE and PE time responses overlap in time, necessitating the use of both models.
Using evolutionary algorithms for fitting high-dimensional models to neuronal data.
Svensson, Carl-Magnus; Coombes, Stephen; Peirce, Jonathan Westley
2012-04-01
In the study of neuroscience, and of complex biological systems in general, there is frequently a need to fit mathematical models with large numbers of parameters to highly complex datasets. Here we consider algorithms of two different classes, gradient following (GF) methods and evolutionary algorithms (EA), and examine their performance in fitting a 9-parameter model of a filter-based visual neuron to real data recorded from a sample of 107 neurons in macaque primary visual cortex (V1). Although the GF method converged very rapidly on a solution, it was highly susceptible to the effects of local minima in the error surface and produced relatively poor fits unless the initial estimates of the parameters were already very good. Conversely, although the EA required many more iterations of evaluating the model neuron's response to a series of stimuli, it ultimately found better solutions in nearly all cases and its performance was independent of the starting parameters of the model. Thus, although the fitting process was lengthy in terms of processing time, the relative lack of human intervention in the evolutionary algorithm, and its ability ultimately to generate model fits that could be trusted as being close to optimal, made it far superior to the gradient following methods in this particular application. This is likely to be the case in many further complex systems, as are often found in neuroscience.
Boer, H M T; Butler, S T; Stötzel, C; Te Pas, M F W; Veerkamp, R F; Woelders, H
2017-11-01
A recently developed mechanistic mathematical model of the bovine estrous cycle was parameterized to fit empirical data sets collected during one estrous cycle of 31 individual cows, with the main objective to further validate the model. The a priori criteria for validation were (1) the resulting model can simulate the measured data correctly (i.e. goodness of fit), and (2) this is achieved without needing extreme, probably non-physiological parameter values. We used a least squares optimization procedure to identify parameter configurations for the mathematical model to fit the empirical in vivo measurements of follicle and corpus luteum sizes, and the plasma concentrations of progesterone, estradiol, FSH and LH for each cow. The model was capable of accommodating normal variation in estrous cycle characteristics of individual cows. With the parameter sets estimated for the individual cows, the model behavior changed for 21 cows, with improved fit of the simulated output curves for 18 of these 21 cows. Moreover, the number of follicular waves was predicted correctly for 18 of the 25 two-wave and three-wave cows, without extreme parameter value changes. Estimation of specific parameters confirmed results of previous model simulations indicating that parameters involved in luteolytic signaling are very important for regulation of general estrous cycle characteristics, and are likely responsible for differences in estrous cycle characteristics between cows.
NASA Astrophysics Data System (ADS)
Choudhury, Kishalay; García, Javier A.; Steiner, James F.; Bambi, Cosimo
2017-12-01
The reflection spectroscopic model RELXILL is commonly implemented in studying relativistic X-ray reflection from accretion disks around black holes. We present a systematic study of the model's capability to constrain the dimensionless spin and ionization parameters from ∼6000 Nuclear Spectroscopic Telescope Array (NuSTAR) simulations of a bright X-ray source employing the lamp-post geometry. We employ high-count spectra so that limitations of the model are not confused with limitations in signal-to-noise. We find that both parameters are well recovered at 90% confidence, with improving constraints at higher reflection fraction, high spin, and low source height. We test spectra across a broad range, first at 10^6-10^7 and then at ∼10^5 total source counts across the effective 3-79 keV band of NuSTAR, and discover a strong dependence of the results on how fits are performed around the starting parameters, owing to the complexity of the model itself. A blind fit chosen over an approach that carries some estimates of the actual parameter values can lead to significantly worse recovery of model parameters. We further stress the importance of spanning the space of nonlinearly behaving parameters such as log ξ carefully and thoroughly to avoid misleading results. In light of selecting fitting procedures, we recall the necessity of paying attention to the choice of data binning and the fit statistic used to test goodness of fit, demonstrating the effect on the photon index Γ. We re-emphasize the need to account for the detector resolution when binning X-ray data and to use Poisson fit statistics when analyzing Poissonian data.
2014-01-01
Background Striking a balance between the degree of model complexity and parameter identifiability, while still producing biologically feasible simulations, is a major challenge in computational biology. While these two elements of model development are closely coupled, parameter fitting from measured data and analysis of model mechanisms have traditionally been performed separately and sequentially. This process produces potential mismatches between model and data complexities that can compromise the ability of computational frameworks to reveal mechanistic insights or predict new behaviour. In this study we address this issue by presenting a generic framework for combined model parameterisation, comparison of model alternatives and analysis of model mechanisms. Results The presented methodology is based on a combination of multivariate metamodelling (statistical approximation of the input–output relationships of deterministic models) and a systematic zooming into biologically feasible regions of the parameter space by iterative generation of new experimental designs and look-up of simulations in the proximity of the measured data. The parameter fitting pipeline includes an implicit sensitivity analysis and analysis of parameter identifiability, making it suitable for testing hypotheses for model reduction. Using this approach, under-constrained model parameters, as well as the coupling between parameters within the model, are identified. The methodology is demonstrated by refitting the parameters of a published model of cardiac cellular mechanics using a combination of measured data and synthetic data from an alternative model of the same system. Using this approach, reduced models with simplified expressions for the tropomyosin/crossbridge kinetics were found by identifying model components that can be omitted without affecting the fit to the parameterising data.
Our analysis revealed that model parameters could be constrained to a standard deviation of on average 15% of the mean values over the succeeding parameter sets. Conclusions Our results indicate that the presented approach is effective for comparing model alternatives and reducing models to the minimum complexity replicating measured data. We therefore believe that this approach has significant potential for reparameterising existing frameworks, for identification of redundant model components of large biophysical models and to increase their predictive capacity. PMID:24886522
Nonlinear Curve-Fitting Program
NASA Technical Reports Server (NTRS)
Everhart, Joel L.; Badavi, Forooz F.
1989-01-01
Nonlinear optimization algorithm helps in finding best-fit curve. Nonlinear Curve Fitting Program, NLINEAR, interactive curve-fitting routine based on description of quadratic expansion of χ² statistic. Utilizes nonlinear optimization algorithm calculating best statistically weighted values of parameters of fitting function such that χ² is minimized. Provides user with such statistical information as goodness of fit and estimated values of parameters producing highest degree of correlation between experimental data and mathematical model. Written in FORTRAN 77.
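The weighted χ² minimization NLINEAR performs can be illustrated with a small Levenberg-Marquardt-style loop, which damps the quadratic (Gauss-Newton) expansion of χ². This is a sketch of the general technique in Python, not the program's FORTRAN 77 implementation; the exponential model and all names are hypothetical:

```python
import math

def model(x, p):
    # toy fitting function: y = a * exp(b * x)  (hypothetical example)
    return p[0] * math.exp(p[1] * x)

def chi2(p, xs, ys, sigmas):
    # statistically weighted chi-square
    return sum(((y - model(x, p)) / s) ** 2 for x, y, s in zip(xs, ys, sigmas))

def lm_fit(xs, ys, sigmas, p, steps=100, lam=1e-3):
    """Levenberg-Marquardt: damped Gauss-Newton steps on the weighted
    chi-square surface; damping grows whenever a step is rejected."""
    c = chi2(p, xs, ys, sigmas)
    for _ in range(steps):
        J = [[math.exp(p[1] * x) / s, p[0] * x * math.exp(p[1] * x) / s]
             for x, s in zip(xs, sigmas)]
        r = [(y - model(x, p)) / s for x, y, s in zip(xs, ys, sigmas)]
        # damped normal equations (J^T J + lam*diag) dp = J^T r, 2x2 case
        a11 = sum(j[0] * j[0] for j in J) * (1 + lam)
        a22 = sum(j[1] * j[1] for j in J) * (1 + lam)
        a12 = sum(j[0] * j[1] for j in J)
        b1 = sum(j[0] * ri for j, ri in zip(J, r))
        b2 = sum(j[1] * ri for j, ri in zip(J, r))
        det = a11 * a22 - a12 * a12
        trial = [p[0] + (a22 * b1 - a12 * b2) / det,
                 p[1] + (a11 * b2 - a12 * b1) / det]
        ct = chi2(trial, xs, ys, sigmas)
        if ct < c:
            p, c, lam = trial, ct, lam * 0.5   # accept step, relax damping
        else:
            lam *= 10.0                        # reject step, increase damping
    return p, c

xs = [0.25 * i for i in range(9)]
ys = [3.0 * math.exp(0.7 * x) for x in xs]      # noise-free synthetic data
p_fit, chi2_min = lm_fit(xs, ys, [1.0] * len(xs), [2.5, 0.5])
```

The per-point sigmas carry the statistical weighting; the minimized χ² then doubles as the goodness-of-fit measure reported to the user.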
ERIC Educational Resources Information Center
Green, Samuel B.; Thompson, Marilyn S.; Poirier, Jennifer
1999-01-01
The use of Lagrange multiplier (LM) tests in specification searches and the efforts that involve the addition of extraneous parameters to models are discussed. Presented are a rationale and strategy for conducting specification searches in two stages that involve adding parameters to LM tests to maximize fit and then deleting parameters not needed…
ERIC Educational Resources Information Center
Kunina-Habenicht, Olga; Rupp, Andre A.; Wilhelm, Oliver
2012-01-01
Using a complex simulation study we investigated parameter recovery, classification accuracy, and performance of two item-fit statistics for correct and misspecified diagnostic classification models within a log-linear modeling framework. The basic manipulated test design factors included the number of respondents (1,000 vs. 10,000), attributes (3…
Atmospheric Properties Of T Dwarfs Inferred From Model Fits At Low Spectral Resolution
NASA Astrophysics Data System (ADS)
Giorla Godfrey, Paige A.; Rice, Emily L.; Filippazzo, Joseph C.; Douglas, Stephanie E.
2016-09-01
Brown dwarf spectral types (M, L, T, Y) correlate with spectral morphology and generally appear to correspond with decreasing mass and effective temperature (Teff). Model fits to observed spectra suggest, however, that spectral subclasses do not share this monotonic temperature correlation, indicating that secondary parameters (gravity, metallicity, dust) significantly influence spectral morphology. We seek to disentangle the fundamental parameters that underlie the spectral type sequence of the coolest fully populated spectral class of brown dwarfs using atmosphere models. We investigate the relationship between spectral type and best-fit model parameters for a sample of over 150 T dwarfs with low resolution (R ~75-100) near-infrared (~0.8-2.5 μm) SpeX Prism spectra. We use synthetic spectra from four model grids (Saumon & Marley 2008, Morley+ 2012, Saumon+ 2012, BT Settl 2013) and a Markov-Chain Monte Carlo (MCMC) analysis to determine robust best-fit parameters and their uncertainties. We compare the consistency of each model grid by performing our analysis on the full spectrum and also on individual wavelength bands (Y, J, H, K). We find more consistent results between the J-band and full-spectrum fits, and our best-fit spectral type-Teff results agree with the polynomial relationships of Stephens+ 2009 and Filippazzo+ 2015 based on bolometric luminosities. Our analysis constitutes the most extensive low resolution T dwarf model comparison to date and lays the foundation for interpretation of cool brown dwarf and exoplanet spectra.
Item Response Theory Modeling of the Philadelphia Naming Test.
Fergadiotis, Gerasimos; Kellough, Stacey; Hula, William D
2015-06-01
In this study, we investigated the fit of the Philadelphia Naming Test (PNT; Roach, Schwartz, Martin, Grewal, & Brecher, 1996) to an item-response-theory measurement model, estimated the precision of the resulting scores and item parameters, and provided a theoretical rationale for the interpretation of PNT overall scores by relating explanatory variables to item difficulty. This article describes the statistical model underlying the computer adaptive PNT presented in a companion article (Hula, Kellough, & Fergadiotis, 2015). Using archival data, we evaluated the fit of the PNT to 1- and 2-parameter logistic models and examined the precision of the resulting parameter estimates. We regressed the item difficulty estimates on three predictor variables: word length, age of acquisition, and contextual diversity. The 2-parameter logistic model demonstrated marginally better fit, but the fit of the 1-parameter logistic model was adequate. Precision was excellent for both person ability and item difficulty estimates. Word length, age of acquisition, and contextual diversity all independently contributed to variance in item difficulty. Item-response-theory methods can be productively used to analyze and quantify anomia severity in aphasia. Regression of item difficulty on lexical variables supported the validity of the PNT and interpretation of anomia severity scores in the context of current word-finding models.
Robust and fast nonlinear optimization of diffusion MRI microstructure models.
Harms, R L; Fritz, F J; Tobisch, A; Goebel, R; Roebroeck, A
2017-07-15
Advances in biophysical multi-compartment modeling for diffusion MRI (dMRI) have gained popularity because of greater specificity than DTI in relating the dMRI signal to underlying cellular microstructure. A large range of these diffusion microstructure models have been developed, and each of the popular models comes with its own, often different, optimization algorithm, noise model and initialization strategy to estimate its parameter maps. Since data fit, accuracy and precision are hard to verify, this creates additional challenges to comparability and generalization of results from diffusion microstructure models. In addition, nonlinear optimization is computationally expensive, leading to very long run times, which can be prohibitive in large group or population studies. In this technical note we investigate the performance of several optimization algorithms and initialization strategies over a few of the most popular diffusion microstructure models, including NODDI and CHARMED. We evaluate whether a single well-performing optimization approach exists that could be applied to many models and would equalize both run time and fit quality across them. All models, algorithms and strategies were implemented on the Graphics Processing Unit (GPU) to remove run time constraints, with which we achieve whole brain dataset fits in seconds to minutes. We then evaluated fit, accuracy, precision and run time for different models of differing complexity against three common optimization algorithms and three parameter initialization strategies. Variability of the achieved quality of fit in actual data was evaluated on ten subjects of each of two population studies with a different acquisition protocol. We find that optimization algorithms and multi-step optimization approaches have a considerable influence on performance and stability over subjects and over acquisition protocols.
The gradient-free Powell conjugate-direction algorithm was found to outperform other common algorithms in terms of run time, fit, accuracy and precision. Parameter initialization approaches were found to be relevant especially for more complex models, such as those involving several fiber orientations per voxel. For these, a fitting cascade initializing or fixing parameter values in a later optimization step from simpler models in an earlier optimization step further improved run time, fit, accuracy and precision compared to a single step fit. This establishes and makes available standards by which robust fit and accuracy can be achieved in shorter run times. This is especially relevant for the use of diffusion microstructure modeling in large group or population studies and in combining microstructure parameter maps with tractography results. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Pedretti, Daniele; Bianchi, Marco
2018-03-01
Breakthrough curves (BTCs) observed during tracer tests in highly heterogeneous aquifers display strong tailing. Power laws are popular models both for the empirical fitting of these curves and for the prediction of transport using upscaling models based on best-fit estimated parameters (e.g. the power law slope or exponent). The predictive capacity of power-law-based upscaling models can, however, be questioned owing to the difficulty of linking model parameters with the aquifers' physical properties. This work analyzes two aspects that can limit the use of power laws as effective predictive tools: (a) the implications of statistical subsampling, which often renders power laws indistinguishable from other heavily tailed distributions, such as the logarithmic (LOG); (b) the difficulty of reconciling fitting parameters obtained from models with different formulations, such as the presence of a late-time cutoff in the power law model. Two rigorous and systematic stochastic analyses, one based on benchmark distributions and the other on BTCs obtained from transport simulations, are considered. It is found that a power law model without cutoff (PL) results in best-fit exponents (αPL) falling in the range of typical experimental values reported in the literature (1.5 < αPL < 4). The PL exponent tends to lower values as the tailing becomes heavier. Strong fluctuations occur when the number of samples is limited, due to the effects of subsampling. On the other hand, when the power law model embeds a cutoff (PLCO), the best-fit exponent (αCO) is insensitive to the degree of tailing and to the effects of subsampling and tends to a constant αCO ≈ 1. In the PLCO model, the cutoff rate (λ) is the parameter that fully reproduces the persistence of the tailing and is shown to be inversely correlated with the LOG scale parameter (i.e. with the skewness of the distribution). 
The theoretical results are consistent with the fitting analysis of a tracer test performed during the MADE-5 experiment. It is shown that a simple mechanistic upscaling model based on the PLCO formulation is able to predict the ensemble of BTCs from the stochastic transport simulations without the need for any fitted parameters. The model embeds the constant αCO = 1 and relies on a stratified description of the transport mechanisms to estimate λ. The PL fails to reproduce the ensemble of BTCs at late time, while the LOG model provides results consistent with the PLCO model, though without a clear mechanistic link between physical properties and model parameters. It is concluded that, while all parametric models may work equally well (or equally poorly) for the empirical fitting of experimental BTC tails due to the effects of subsampling, the same is not true for predictive purposes. A careful selection of the proper heavily tailed models and corresponding parameters is required to ensure physically based transport predictions.
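The contrast between PL and PLCO fits can be sketched as follows; the synthetic tail, parameter values, and log-space fitting are assumptions for illustration, not the authors' actual procedure.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(1)
t = np.logspace(0, 3, 60)                        # late-time axis (arbitrary units)
# Synthetic BTC tail drawn from a truncated power law (illustrative)
c = t ** -1.0 * np.exp(-2e-3 * t) * rng.lognormal(0.0, 0.05, t.size)

def log_pl(t, log_a, alpha):
    # Power law without cutoff (PL), fit in log space
    return log_a - alpha * np.log(t)

def log_plco(t, log_a, alpha, lam):
    # Power law with late-time exponential cutoff (PLCO)
    return log_a - alpha * np.log(t) - lam * t

# Fitting in log space lets the tail carry weight
p_pl, _ = curve_fit(log_pl, t, np.log(c), p0=(0.0, 1.5))
p_co, _ = curve_fit(log_plco, t, np.log(c), p0=(0.0, 1.0, 1e-3))
alpha_pl, alpha_co, lam_hat = p_pl[1], p_co[1], p_co[2]
```

For data with a true cutoff, the pure PL fit absorbs the cutoff into a steeper exponent, while the PLCO fit recovers an exponent near 1 and attributes the tail persistence to λ, mirroring the behavior described above.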
Comparative Analyses of Creep Models of a Solid Propellant
NASA Astrophysics Data System (ADS)
Zhang, J. B.; Lu, B. J.; Gong, S. F.; Zhao, S. P.
2018-05-01
Creep experiments on samples of a solid propellant under five different stresses were carried out at 293.15 K and 323.15 K. To express the creep properties of this solid propellant, several viscoelastic models are considered: the three-parameter solid, three-parameter fluid, four-parameter solid, four-parameter fluid, and exponential models. On the basis of least-squares fitting, a nonlinear fitting procedure is used to estimate all model parameters at each stress level and to analyze the creep properties. The study shows that the four-parameter solid model best expresses the creep behavior of the propellant samples. However, the three-parameter solid and exponential models cannot reproduce the initial value of the creep process well, while the modified four-parameter models are found to agree well with the acceleration characteristics of the creep process.
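A minimal sketch of a nonlinear least-squares fit of a four-parameter creep model is shown below; the Burgers-type compliance form, parameter names, and synthetic data are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np
from scipy.optimize import curve_fit

def creep(t, eps0, rate, eps1, tau):
    # Burgers-type creep: instantaneous strain + viscous flow + delayed elasticity
    return eps0 + rate * t + eps1 * (1.0 - np.exp(-t / tau))

rng = np.random.default_rng(2)
t = np.linspace(0.0, 1000.0, 50)                 # time in s (assumed)
strain = creep(t, 0.010, 2.0e-6, 0.005, 150.0) + rng.normal(0.0, 1e-4, t.size)

# Nonlinear least-squares fit of the four parameters
popt, _ = curve_fit(creep, t, strain, p0=(0.005, 1.0e-6, 0.003, 100.0))
eps0_hat, rate_hat, eps1_hat, tau_hat = popt
```

The same call would be repeated per stress level and temperature, with each model variant compared via its residual sum of squares.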
Strategies for fitting nonlinear ecological models in R, AD Model Builder, and BUGS
Bolker, Benjamin M.; Gardner, Beth; Maunder, Mark; Berg, Casper W.; Brooks, Mollie; Comita, Liza; Crone, Elizabeth; Cubaynes, Sarah; Davies, Trevor; de Valpine, Perry; Ford, Jessica; Gimenez, Olivier; Kéry, Marc; Kim, Eun Jung; Lennert-Cody, Cleridy; Magnusson, Arni; Martell, Steve; Nash, John; Nielsen, Anders; Regetz, Jim; Skaug, Hans; Zipkin, Elise
2013-01-01
1. Ecologists often use nonlinear fitting techniques to estimate the parameters of complex ecological models, with attendant frustration. This paper compares three open-source model fitting tools and discusses general strategies for defining and fitting models. 2. R is convenient and (relatively) easy to learn, AD Model Builder is fast and robust but comes with a steep learning curve, while BUGS provides the greatest flexibility at the price of speed. 3. Our model-fitting suggestions range from general cultural advice (where possible, use the tools and models that are most common in your subfield) to specific suggestions about how to change the mathematical description of models to make them more amenable to parameter estimation. 4. A companion web site (https://groups.nceas.ucsb.edu/nonlinear-modeling/projects) presents detailed examples of application of the three tools to a variety of typical ecological estimation problems; each example links both to a detailed project report and to full source code and data.
Improving RNA nearest neighbor parameters for helices by going beyond the two-state model.
Spasic, Aleksandar; Berger, Kyle D; Chen, Jonathan L; Seetin, Matthew G; Turner, Douglas H; Mathews, David H
2018-06-01
RNA folding free energy change nearest neighbor parameters are widely used to predict folding stabilities of secondary structures. They were determined by linear regression to datasets of optical melting experiments on small model systems. Traditionally, the optical melting experiments are analyzed assuming a two-state model, i.e. a structure is either complete or denatured. Experimental evidence, however, shows that structures exist in an ensemble of conformations. Partition functions calculated with existing nearest neighbor parameters predict that secondary structures can be partially denatured, which also directly conflicts with the two-state model. Here, a new approach for determining RNA nearest neighbor parameters is presented. Available optical melting data for 34 Watson-Crick helices were fit directly to a partition function model that allows an ensemble of conformations. Fitting parameters were the enthalpy and entropy changes for helix initiation, terminal AU pairs, stacks of Watson-Crick pairs and disordered internal loops. The resulting set of nearest neighbor parameters shows a 38.5% improvement in the sum of residuals in fitting the experimental melting curves compared to the current literature set.
Clark, D Angus; Nuttall, Amy K; Bowles, Ryan P
2018-01-01
Latent change score models (LCS) are conceptually powerful tools for analyzing longitudinal data (McArdle & Hamagami, 2001). However, applications of these models typically include constraints on key parameters over time. Although practically useful, strict invariance over time in these parameters is unlikely in real data. This study investigates the robustness of LCS when invariance over time is incorrectly imposed on key change-related parameters. Monte Carlo simulation methods were used to explore the impact of misspecification on parameter estimation, predicted trajectories of change, and model fit in the dual change score model, the foundational LCS. When constraints were incorrectly applied, several parameters, most notably the slope (i.e., constant change) factor mean and autoproportion coefficient, were severely and consistently biased, as were regression paths to the slope factor when external predictors of change were included. Standard fit indices indicated that the misspecified models fit well, partly because mean level trajectories over time were accurately captured. Loosening constraints improved the accuracy of parameter estimates, but estimates were more unstable, and models frequently failed to converge. Results suggest that potentially common sources of misspecification in LCS can produce distorted impressions of developmental processes, and that identifying and rectifying the situation is a challenge.
Basal glycogenolysis in mouse skeletal muscle: in vitro model predicts in vivo fluxes
NASA Technical Reports Server (NTRS)
Lambeth, Melissa J.; Kushmerick, Martin J.; Marcinek, David J.; Conley, Kevin E.
2002-01-01
A previously published kinetic model of mammalian skeletal muscle glycogenolysis, built from in vitro parameters from the literature, was modified by substituting mouse-specific Vmax values. The model demonstrates that glycogen breakdown to lactate is under ATPase control. Our criterion for testing whether in vitro parameters could reproduce in vivo dynamics was the ability of the model to fit phosphocreatine (PCr) and inorganic phosphate (Pi) dynamic NMR data from ischemic basal mouse hindlimbs and to predict biochemically assayed lactate concentrations. Fitting was accomplished by optimizing four parameters--the ATPase rate coefficient, the fraction of activated glycogen phosphorylase, and the equilibrium constants of creatine kinase and adenylate kinase (due to the absence of pH in the model). The optimized parameter values were physiologically reasonable, the resulting model fit the [PCr] and [Pi] timecourses well, and the model predicted the final measured lactate concentration. This result demonstrates that additional features of in vivo enzyme binding are not necessary for a quantitative description of glycogenolytic dynamics.
Cosmological parameter extraction and biases from type Ia supernova magnitude evolution
NASA Astrophysics Data System (ADS)
Linden, S.; Virey, J.-M.; Tilquin, A.
2009-11-01
We study different one-parametric models of type Ia supernova magnitude evolution on cosmic time scales. Constraints on cosmological and supernova evolution parameters are obtained by combined fits to current data from supernovae, the cosmic microwave background, and baryonic acoustic oscillations. We find that the best-fit values imply supernova magnitude evolution such that high-redshift supernovae appear some percent brighter than would be expected in a standard cosmos with a dark energy component. However, the errors on the evolution parameters are of the same order, and the data are consistent with nonevolving magnitudes at the 1σ level, except in special cases. We simulate a future data scenario in which SN magnitude evolution is allowed for, and neglect the possibility of such an evolution in the fit. We find the fiducial models for which the wrong model assumption of nonevolving SN magnitude is not detectable, and for which biases on the fitted cosmological parameters are introduced at the same time. Of the cosmological parameters, the overall mass density ΩM has the strongest chance of being biased by the wrong model assumption. Whereas early-epoch models with a magnitude offset Δm ∝ z² turn out to be not too dangerous when neglected in the fitting procedure, late-epoch models with Δm ∝ √z have a high chance of undetectably biasing the fit results.
On the Usefulness of a Multilevel Logistic Regression Approach to Person-Fit Analysis
ERIC Educational Resources Information Center
Conijn, Judith M.; Emons, Wilco H. M.; van Assen, Marcel A. L. M.; Sijtsma, Klaas
2011-01-01
The logistic person response function (PRF) models the probability of a correct response as a function of the item locations. Reise (2000) proposed to use the slope parameter of the logistic PRF as a person-fit measure. He reformulated the logistic PRF model as a multilevel logistic regression model and estimated the PRF parameters from this…
The Beta-Geometric Model Applied to Fecundability in a Sample of Married Women
NASA Astrophysics Data System (ADS)
Adekanmbi, D. B.; Bamiduro, T. A.
2006-10-01
The time required to achieve pregnancy among married couples, termed fecundability, has been proposed to follow a beta-geometric distribution. The accuracy of the method used to estimate the parameters of the model has implications for its goodness of fit. In this study, the parameters of the model are estimated using the method of moments and a Newton-Raphson estimation procedure. The goodness of fit of the model was assessed using estimates from the two estimation methods, along with the asymptotic relative efficiency of the estimates. A noticeable improvement in the fit of the model to the data on time to conception was observed when the parameters were estimated by the Newton-Raphson procedure, thereby yielding reasonable expectations of fecundability for the married female population in the country.
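A hedged sketch of maximum-likelihood estimation for the beta-geometric model follows; SciPy's BFGS is used here as a stand-in for the paper's Newton-Raphson procedure, and the sample is simulated rather than drawn from the study's conception-time data.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import betaln

def betageom_loglik(t, a, b):
    # Beta-geometric: P(T = t) = B(a + 1, b + t - 1) / B(a, b), t = 1, 2, ...
    return betaln(a + 1.0, b + t - 1.0) - betaln(a, b)

rng = np.random.default_rng(3)
p = rng.beta(2.0, 4.0, size=500)     # heterogeneous per-cycle conception probabilities
t_obs = rng.geometric(p)             # observed cycles to conception

def nll(theta):
    a, b = np.exp(theta)             # log-parametrization keeps a, b > 0
    return -betageom_loglik(t_obs, a, b).sum()

# Newton-type likelihood maximization (BFGS as a stand-in for Newton-Raphson)
fit = minimize(nll, x0=np.log([1.0, 1.0]), method="BFGS")
a_hat, b_hat = np.exp(fit.x)
```

Method-of-moments estimates could serve as the starting point `x0`, which is roughly how the two procedures are combined in practice.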
A Simple Model for Fine Structure Transitions in Alkali-Metal Noble-Gas Collisions
2015-03-01
[List-of-figures excerpt: Effect of Scaling the VRG(R) Radial Coupling Fit Parameter, V0, for KHe, KNe, and KAr (Fig. 33); for RbHe, RbNe, and RbAr (Fig. 34); and for CsHe, CsNe, and CsAr (Fig. 35).]
ERIC Educational Resources Information Center
Tay, Louis; Ali, Usama S.; Drasgow, Fritz; Williams, Bruce
2011-01-01
This study investigated the relative model-data fit of an ideal point item response theory (IRT) model (the generalized graded unfolding model [GGUM]) and dominance IRT models (e.g., the two-parameter logistic model [2PLM] and Samejima's graded response model [GRM]) to simulated dichotomous and polytomous data generated from each of these models.…
NASA Astrophysics Data System (ADS)
Mead, A. J.; Peacock, J. A.; Heymans, C.; Joudaki, S.; Heavens, A. F.
2015-12-01
We present an optimized variant of the halo model, designed to produce accurate matter power spectra well into the non-linear regime for a wide range of cosmological models. To do this, we introduce physically motivated free parameters into the halo-model formalism and fit these to data from high-resolution N-body simulations. For a variety of Λ cold dark matter (ΛCDM) and wCDM models, the halo-model power is accurate to ≃ 5 per cent for k ≤ 10h Mpc-1 and z ≤ 2. An advantage of our new halo model is that it can be adapted to account for the effects of baryonic feedback on the power spectrum. We demonstrate this by fitting the halo model to power spectra from the OWLS (OverWhelmingly Large Simulations) hydrodynamical simulation suite via parameters that govern halo internal structure. We are able to fit all feedback models investigated at the 5 per cent level using only two free parameters, and we place limits on the range of these halo parameters for feedback models investigated by the OWLS simulations. Accurate predictions to high k are vital for weak-lensing surveys, and these halo parameters could be considered nuisance parameters to marginalize over in future analyses to mitigate uncertainty regarding the details of feedback. Finally, we investigate how lensing observables predicted by our model compare to those from simulations and from HALOFIT for a range of k-cuts and feedback models and quantify the angular scales at which these effects become important. Code to calculate power spectra from the model presented in this paper can be found at https://github.com/alexander-mead/hmcode.
Syndromes of collateral-reported psychopathology for ages 18-59 in 18 Societies
Ivanova, Masha Y.; Achenbach, Thomas M.; Rescorla, Leslie A.; Turner, Lori V.; Árnadóttir, Hervör Alma; Au, Alma; Caldas, J. Carlos; Chaalal, Nebia; Chen, Yi Chuen; da Rocha, Marina M.; Decoster, Jeroen; Fontaine, Johnny R.J.; Funabiki, Yasuko; Guðmundsson, Halldór S.; Kim, Young Ah; Leung, Patrick; Liu, Jianghong; Malykh, Sergey; Marković, Jasminka; Oh, Kyung Ja; Petot, Jean-Michel; Samaniego, Virginia C.; Silvares, Edwiges Ferreira de Mattos; Šimulionienė, Roma; Šobot, Valentina; Sokoli, Elvisa; Sun, Guiju; Talcott, Joel B.; Vázquez, Natalia; Zasępa, Ewa
2017-01-01
The purpose was to advance research and clinical methodology for assessing psychopathology by testing the international generalizability of an 8-syndrome model derived from collateral ratings of adult behavioral, emotional, social, and thought problems. Collateral informants rated 8,582 18–59-year-old residents of 18 societies on the Adult Behavior Checklist (ABCL). Confirmatory factor analyses tested the fit of the 8-syndrome model to ratings from each society. The primary model fit index (Root Mean Square Error of Approximation) showed good model fit for all societies, while secondary indices (Tucker Lewis Index, Comparative Fit Index) showed acceptable to good fit for 17 societies. Factor loadings were robust across societies and items. Of the 5,007 estimated parameters, 4 (0.08%) were outside the admissible parameter space, but 95% confidence intervals included the admissible space, indicating that the 4 deviant parameters could be due to sampling fluctuations. The findings are consistent with previous evidence for the generalizability of the 8-syndrome model in self-ratings from 29 societies, and support the 8-syndrome model for operationalizing phenotypes of adult psychopathology from multi-informant ratings in diverse societies. PMID:29399019
Radar altimeter waveform modeled parameter recovery. [SEASAT-1 data
NASA Technical Reports Server (NTRS)
1981-01-01
Satellite-borne radar altimeters include waveform sampling gates providing point samples of the transmitted radar pulse after its scattering from the ocean's surface. Averages of the waveform sampler data can be fitted by varying parameters in a model mean return waveform. The theoretical waveform model used is described, as well as the general iterative nonlinear least squares procedure used to obtain estimates of the parameters characterizing the modeled waveform for SEASAT-1 data. The six waveform parameters recovered by the fitting procedure are: (1) amplitude; (2) time origin, or track point; (3) ocean surface rms roughness; (4) noise baseline; (5) ocean surface skewness; and (6) attitude, or off-nadir angle. Additional practical processing considerations are addressed, and FORTRAN source listings for the subroutines used in the waveform fitting are included. While the description is for the SEASAT-1 altimeter waveform data analysis, the work can easily be generalized and extended to other radar altimeter systems.
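The iterative nonlinear least-squares recovery of waveform parameters can be sketched roughly as below; this simplified error-function leading-edge model covers only four of the six parameters (amplitude, track point, rms roughness, noise baseline), and the gate times and noise level are invented for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import erf

def waveform(t, amp, t0, sigma, noise):
    # Simplified mean-return model: noise floor plus an error-function
    # leading edge; amp = amplitude, t0 = track point, sigma = rms
    # roughness, noise = baseline
    return noise + 0.5 * amp * (1.0 + erf((t - t0) / (np.sqrt(2.0) * sigma)))

rng = np.random.default_rng(4)
gates = np.linspace(-20.0, 40.0, 64)             # gate times in ns (assumed)
samples = waveform(gates, 1.0, 5.0, 3.0, 0.05) + rng.normal(0.0, 0.01, gates.size)

# Iterative nonlinear least squares (Levenberg-Marquardt under the hood)
popt, pcov = curve_fit(waveform, gates, samples, p0=(0.8, 0.0, 2.0, 0.0))
amp_hat, t0_hat, sigma_hat, noise_hat = popt
```

The full SEASAT-1 model adds skewness and off-nadir-angle terms to this leading-edge shape, but the fitting loop has the same structure.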
Tuncer, Necibe; Gulbudak, Hayriye; Cannataro, Vincent L; Martcheva, Maia
2016-09-01
In this article, we discuss the structural and practical identifiability of a nested immuno-epidemiological model of arbovirus diseases, where host-vector transmission rate, host recovery, and disease-induced death rates are governed by the within-host immune system. We incorporate the newest ideas and the most up-to-date features of numerical methods to fit multi-scale models to multi-scale data. For an immunological model, we use Rift Valley Fever Virus (RVFV) time-series data obtained from livestock under laboratory experiments, and for an epidemiological model we incorporate a human compartment to the nested model and use the number of human RVFV cases reported by the CDC during the 2006-2007 Kenya outbreak. We show that the immunological model is not structurally identifiable for the measurements of time-series viremia concentrations in the host. Thus, we study the non-dimensionalized and scaled versions of the immunological model and prove that both are structurally globally identifiable. After fixing estimated parameter values for the immunological model derived from the scaled model, we develop a numerical method to fit observable RVFV epidemiological data to the nested model for the remaining parameter values of the multi-scale system. For the given (CDC) data set, Monte Carlo simulations indicate that only three parameters of the epidemiological model are practically identifiable when the immune model parameters are fixed. Alternatively, we fit the multi-scale data to the multi-scale model simultaneously. Monte Carlo simulations for the simultaneous fitting suggest that the parameters of the immunological model and the parameters of the immuno-epidemiological model are practically identifiable. We suggest that analytic approaches for studying the structural identifiability of nested models are a necessity, so that identifiable parameter combinations can be derived to reparameterize the nested model to obtain an identifiable one. 
This is a crucial step in developing multi-scale models which explain multi-scale data.
Apparent cosmic acceleration from Type Ia supernovae
NASA Astrophysics Data System (ADS)
Dam, Lawrence H.; Heinesen, Asta; Wiltshire, David L.
2017-11-01
Parameters that quantify the acceleration of cosmic expansion are conventionally determined within the standard Friedmann-Lemaître-Robertson-Walker (FLRW) model, which fixes spatial curvature to be homogeneous. Generic averages of Einstein's equations in inhomogeneous cosmology lead to models with non-rigidly evolving average spatial curvature, and different parametrizations of apparent cosmic acceleration. The timescape cosmology is a viable example of such a model without dark energy. Using the largest available supernova data set, the JLA catalogue, we find that the timescape model fits the luminosity distance-redshift data with a likelihood that is statistically indistinguishable from the standard spatially flat Λ cold dark matter cosmology by Bayesian comparison. In the timescape case cosmic acceleration is non-zero but has a marginal amplitude, with best-fitting apparent deceleration parameter, q_{0}=-0.043^{+0.004}_{-0.000}. Systematic issues regarding standardization of supernova light curves are analysed. Cuts of data at the statistical homogeneity scale affect light-curve parameter fits independent of cosmology. A cosmological model dependence of empirical changes to the mean colour parameter is also found. Irrespective of which model ultimately fits better, we argue that as a competitive model with a non-FLRW expansion history, the timescape model may prove a useful diagnostic tool for disentangling selection effects and astrophysical systematics from the underlying expansion history.
NASA Astrophysics Data System (ADS)
Kleiner, Isabelle; Hougen, Jon T.
2017-06-01
In this talk we report on our progress in trying to make the hybrid Hamiltonian competitive with the pure-tunneling Hamiltonian for treating large-amplitude motions in methylamine. A treatment using the pure-tunneling model has the advantages of: (i) requiring relatively little computer time, (ii) working with relatively uncorrelated fitting parameters, and (iii) yielding in the vast majority of cases fits to experimental measurement accuracy. These advantages are all illustrated in the work published this past year on a gigantic v_{t} = 1 data set for the torsional fundamental band in methylamine. A treatment using the hybrid model has the advantages of: (i) being able to carry out a global fit involving both v_{t} = 0 and v_{t} = 1 energy levels and (ii) working with fitting parameters that have a clearer physical interpretation. Unfortunately, a treatment using the hybrid model has the great disadvantage of requiring a highly correlated set of fitting parameters to achieve reasonable fitting accuracy, which complicates the search for a good set of molecular fitting parameters and a fit to experimental accuracy. At the time of writing this abstract, we have been able to carry out a fit with J up to 15 that includes all available infrared data in the v_{t} = 1-0 torsional fundamental band, all ground-state microwave data with K up to 10 and J up to 15, and about a hundred microwave lines within the v_{t} = 1 torsional state, achieving weighted root-mean-square (rms) deviations of about 1.4, 2.8, and 4.2 for these three categories of data. We will give an update of this situation at the meeting. I. Gulaczyk, M. Kreglewski, V.-M. Horneman, J. Mol. Spectrosc., in press (2017).
A GUI-based Tool for Bridging the Gap between Models and Process-Oriented Studies
NASA Astrophysics Data System (ADS)
Kornfeld, A.; Van der Tol, C.; Berry, J. A.
2014-12-01
Models used for simulation of photosynthesis and transpiration by canopies of terrestrial plants typically have subroutines such as STOMATA.F90, PHOSIB.F90 or BIOCHEM.m that solve for photosynthesis and associated processes. Key parameters such as the Vmax for Rubisco and temperature response parameters are required by these subroutines. These are often taken from the literature or determined by separate analysis of gas exchange experiments. It is useful to note, however, that such subroutines can be extracted and run as standalone models to simulate leaf responses collected in gas exchange experiments. Furthermore, there are excellent non-linear fitting tools that can be used to optimize the parameter values in these models to fit the observations. Ideally the Vmax fit in this way should be the same as that determined by a separate analysis, but it may not be, because of interactions with other kinetic constants and their temperature dependences in the full subroutine. We submit that it is more useful to fit the complete model to the calibration experiments than to fit disaggregated constants. We designed a graphical user interface (GUI) based tool that uses gas exchange photosynthesis data to directly estimate model parameters in the SCOPE (Soil Canopy Observation, Photochemistry and Energy fluxes) model and, at the same time, allows researchers to change parameters interactively to visualize how variation in model parameters affects predicted outcomes such as photosynthetic rates, electron transport, and chlorophyll fluorescence. We have also ported some of this functionality to an Excel spreadsheet, which could be used as a teaching tool to help integrate process-oriented and model-oriented studies.
Methods of comparing associative models and an application to retrospective revaluation.
Witnauer, James E; Hutchings, Ryan; Miller, Ralph R
2017-11-01
Contemporary theories of associative learning are increasingly complex, which necessitates the use of computational methods to reveal predictions of these models. We argue that comparisons across multiple models in terms of goodness of fit to empirical data from experiments often reveal more about the actual mechanisms of learning and behavior than do simulations of only a single model. Such comparisons are best made when the values of free parameters are discovered through some optimization procedure based on the specific data being fit (e.g., hill climbing), so that the comparisons hinge on the psychological mechanisms assumed by each model rather than being biased by using parameters that differ in quality across models with respect to the data being fit. Statistics like the Bayesian information criterion facilitate comparisons among models that have different numbers of free parameters. These issues are examined using retrospective revaluation data.
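A minimal sketch of the kind of BIC comparison described above, under the common Gaussian-error approximation; the two hypothetical "models" are represented only by their residual sums of squares and free-parameter counts, which are invented numbers for illustration.

```python
import numpy as np

def bic(n_obs, k_params, sse):
    # Gaussian-error BIC: n * ln(SSE / n) + k * ln(n); lower is better
    return n_obs * np.log(sse / n_obs) + k_params * np.log(n_obs)

# Hypothetical goodness-of-fit results for two associative models on the same data
n = 40
bic_simple = bic(n, 2, sse=3.1)     # 2 free parameters, slightly worse fit
bic_complex = bic(n, 5, sse=2.9)    # 5 free parameters, slightly better fit

# The parsimony penalty can favor the simpler model despite its larger SSE
preferred = "simple" if bic_simple < bic_complex else "complex"
```

This is how a criterion like BIC can reverse a raw goodness-of-fit ranking when the better-fitting model buys its fit with extra free parameters.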
NASA Astrophysics Data System (ADS)
Bowers, Peter; Rosowski, John J.
2018-05-01
An air-conduction circuit model that will serve as the basis for a model of bone-conduction hearing is developed for chinchilla. The lumped-element model is based on the classic Zwislocki model of the human middle ear. Model parameters are fit to various measurements of chinchilla middle-ear transfer functions and impedances. The model is in agreement with studies of the effects of middle-ear cavity holes in experiments that require access to the middle-ear air space.
Dickie, Ben R; Banerji, Anita; Kershaw, Lucy E; McPartlin, Andrew; Choudhury, Ananya; West, Catharine M; Rose, Chris J
2016-10-01
To improve the accuracy and precision of tracer kinetic model parameter estimates for use in dynamic contrast enhanced (DCE) MRI studies of solid tumors. Quantitative DCE-MRI requires an estimate of precontrast T1, which is obtained prior to fitting a tracer kinetic model. As T1 mapping and tracer kinetic signal models are both a function of precontrast T1, it was hypothesized that joint estimation would improve the accuracy and precision of both precontrast T1 and the tracer kinetic model parameters. The accuracy and/or precision of two-compartment exchange model (2CXM) parameters were evaluated for standard and joint fitting methods in well-controlled synthetic data and for 36 bladder cancer patients. The methods were compared under a number of experimental conditions. In synthetic data, joint estimation led to statistically significant improvements in the accuracy of estimated parameters in 30 of 42 conditions (improvements between 1.8% and 49%). Reduced accuracy was observed in 7 of the remaining 12 conditions. Significant improvements in precision were observed in 35 of 42 conditions (between 4.7% and 50%). In clinical data, significant improvements in precision were observed in 18 of 21 conditions (between 4.6% and 38%). Accuracy and precision of DCE-MRI parameter estimates are improved when signal models are fit jointly rather than sequentially. Magn Reson Med 76:1270-1281, 2016. © 2015 Wiley Periodicals, Inc.
A Quadriparametric Model to Describe the Diversity of Waves Applied to Hormonal Data.
Abdullah, Saman; Bouchard, Thomas; Klich, Amna; Leiva, Rene; Pyper, Cecilia; Genolini, Christophe; Subtil, Fabien; Iwaz, Jean; Ecochard, René
2018-05-01
Even in normally cycling women, hormone level shapes may vary widely between cycles and between women. Over decades, finding ways to characterize and compare cycle hormone waves has been difficult, and most solutions, in particular polynomials or splines, do not correspond to physiologically meaningful parameters. We present an original concept to characterize most hormone waves with only two parameters. The modelling attempt considered pregnanediol-3-alpha-glucuronide (PDG) and luteinising hormone (LH) levels in 266 cycles (with ultrasound-identified ovulation day) in 99 normally fertile women aged 18 to 45. The study searched for a convenient wave description process and carried out an extended search for the best-fitting density distribution. The highly flexible beta-binomial distribution offered the best fit for most hormone waves and required only two readily available and understandable wave parameters: location and scale. In bell-shaped waves (e.g., PDG curves), early peaks may be fitted with a low location parameter and a low scale parameter; plateau shapes are obtained with higher scale parameters. I-shaped, J-shaped, and U-shaped waves (sometimes the shapes of LH curves) may be fitted with a high scale parameter and, respectively, a low, high, or medium location parameter. These location and scale parameters will later be correlated with feminine physiological events. Our results demonstrate that, with unimodal waves, complex methods (e.g., functional mixed effects models using smoothing splines, second-order growth mixture models, or functional principal-component-based methods) may be avoided. The use, application, and, especially, result interpretation of four-parameter analyses might be advantageous within the context of feminine physiological events.
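A sketch of fitting a beta-binomial-shaped wave with location and scale parameters follows; the (loc, scale) → (a, b) mapping, cycle length, amplitude parameter, and synthetic data are illustrative assumptions, not necessarily the authors' formulation.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import betabinom

N_DAYS = 28                                      # cycle length in days (assumed)

def wave(day, amp, loc, scale):
    # Beta-binomial-shaped wave; loc in (0, 1) sets the peak position and
    # scale sets the spread (this (loc, scale) -> (a, b) mapping is illustrative)
    a = loc / scale
    b = (1.0 - loc) / scale
    return amp * betabinom.pmf(day, N_DAYS, a, b)

days = np.arange(N_DAYS + 1)
rng = np.random.default_rng(5)
obs = wave(days, 50.0, 0.7, 0.05) + rng.normal(0.0, 0.1, days.size)

# Bounded least squares keeps loc in (0, 1) and scale positive
popt, _ = curve_fit(wave, days, obs, p0=(30.0, 0.5, 0.1),
                    bounds=([1.0, 0.05, 0.005], [500.0, 0.95, 1.0]))
amp_hat, loc_hat, scale_hat = popt
```

A late, narrow peak such as a luteal-phase PDG rise maps to a high location and low scale, matching the qualitative descriptions above.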
NASA Astrophysics Data System (ADS)
da Silva, Ricardo Siqueira; Kumar, Lalit; Shabani, Farzin; Picanço, Marcelo Coutinho
2018-04-01
A sensitivity analysis can categorize the levels of influence of parameters on a model's output. Identifying the most influential parameters facilitates establishing the best values for model parameters, with useful implications for species modelling of crops and associated insect pests. The aim of this study was to quantify the response of species models through a CLIMEX sensitivity analysis. Using open-field Solanum lycopersicum and Neoleucinodes elegantalis distribution records and 17 fitting parameters, including growth and stress parameters, model performance was compared by altering one parameter value at a time against the best-fit parameter values. Parameters found to have a greater effect on the model results are termed "sensitive". Through the use of two species, we show that even when the Ecoclimatic Index changes markedly under upward or downward parameter value alterations, the effect on the species is dependent on the selection of suitability categories and regions of modelling. Two parameters were shown to have the greatest sensitivity, dependent on the suitability categories of each species in the study. The results enhance user understanding of which climatic factors had a greater impact on both species' distributions in our model, in terms of suitability categories and areas, when parameter values were perturbed by higher or lower values relative to the best-fit parameter values. Thus, sensitivity analyses have the potential to provide additional information for end users, in terms of improving management, by identifying the climatic variables to which the models are most sensitive.
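A one-at-a-time sensitivity loop of the kind used here can be sketched as follows; the stand-in response function, parameter names, and ±10% perturbation size are assumptions for illustration, not CLIMEX itself.

```python
import math

def ecoclimatic_index(params):
    # Stand-in response surface; the real CLIMEX Ecoclimatic Index
    # combines many growth and stress indices
    return 100.0 * math.exp(-((params["t_opt"] - 26.0) / 8.0) ** 2) * params["moisture"]

best_fit = {"t_opt": 25.0, "moisture": 0.8}      # hypothetical best-fit values
baseline = ecoclimatic_index(best_fit)

# One-at-a-time perturbation: vary each parameter by +/-10 percent and
# record the largest change in model output relative to the baseline
sensitivity = {}
for name in best_fit:
    deltas = []
    for factor in (0.9, 1.1):
        perturbed = dict(best_fit, **{name: best_fit[name] * factor})
        deltas.append(abs(ecoclimatic_index(perturbed) - baseline))
    sensitivity[name] = max(deltas)

most_sensitive = max(sensitivity, key=sensitivity.get)
```

Ranking `sensitivity` identifies the parameters whose perturbation most changes the output, which is the core of the "sensitive" classification described above.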
DIRT: The Dust InfraRed Toolbox
NASA Astrophysics Data System (ADS)
Pound, M. W.; Wolfire, M. G.; Mundy, L. G.; Teuben, P. J.; Lord, S.
We present DIRT, a Java applet geared toward modeling a variety of processes in envelopes of young and evolved stars. Users can automatically and efficiently search grids of pre-calculated models to fit their data. A large set of physical parameters and dust types are included in the model database, which contains over 500,000 models. The computing cluster for the database is described in the accompanying paper by Teuben et al. (2000). A typical user query will return about 50-100 models, which the user can then interactively filter as a function of 8 model parameters (e.g., extinction, size, flux, luminosity). A flexible, multi-dimensional plotter (Figure 1) allows users to view the models, rotate them, tag specific parameters with color or symbol size, and probe individual model points. For any given model, auxiliary plots such as dust grain properties, radial intensity profiles, and the flux as a function of wavelength and beamsize can be viewed. The user can fit observed data to several models simultaneously and see the results of the fit; the best fit is automatically selected for plotting. The URL for this project is http://dustem.astro.umd.edu.
Extension of the PC version of VEPFIT with input and output routines running under Windows
NASA Astrophysics Data System (ADS)
Schut, H.; van Veen, A.
1995-01-01
The fitting program VEPFIT has been extended with applications running under the Microsoft Windows environment, facilitating the input and output of the VEPFIT fitting module. We have exploited the Microsoft Windows graphical user interface by making use of dialog windows, scrollbars, command buttons, etc. The user communicates with the program simply by clicking and dragging with the mouse; keyboard actions are kept to a minimum. Upon changing one or more input parameters, the results of the modeling of the S-parameter and Ps fractions versus positron implantation energy are updated and displayed. This action can be considered the first step in the fitting procedure, upon which the user can decide to further adapt the input parameters or to forward them as initial values to the fitting routine. The modeling step has proven helpful for designing positron beam experiments.
Predicting responses from Rasch measures.
Linacre, John M
2010-01-01
There is a growing family of Rasch models for polytomous observations. Selecting a suitable model for an existing dataset, estimating its parameters and evaluating its fit is now routine. Problems arise when the model parameters are estimated from the current data but used to predict future data. In particular, ambiguities in the nature of the current data, or overfit of the model to the current dataset, may mean that better fit to the current data leads to worse fit to future data. The predictive power of several Rasch and Rasch-related models is discussed in the context of the Netflix Prize. Rasch-related models based on Singular Value Decomposition (SVD) and Boltzmann Machines are proposed.
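As a minimal sketch of the estimation-versus-prediction setting above: in the dichotomous Rasch model, predicting a future response amounts to evaluating a response probability at the fitted person ability and item difficulty. The values below are illustrative, not from the article.

```python
import math

# Dichotomous Rasch model sketch: P(correct) depends only on the
# difference between person ability (theta) and item difficulty (b).

def rasch_prob(theta, b):
    """Probability of a correct response under the Rasch model."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def expected_score(theta, difficulties):
    """Expected raw score of a person over a set of future items."""
    return sum(rasch_prob(theta, b) for b in difficulties)

p = rasch_prob(theta=1.0, b=0.0)            # able person, average item
future = expected_score(0.0, [-1.0, 0.0, 1.0])
```

Overfit of theta and b to the current responses propagates directly into these predicted probabilities, which is the prediction problem the article examines.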
NASA Astrophysics Data System (ADS)
Maghsoudi, Mastoureh; Bakar, Shaiful Anuar Abu
2017-05-01
In this paper, a recently proposed approach is applied to estimate the threshold parameter of a composite model. Several composite models from the Transformed Gamma and Inverse Transformed Gamma families are constructed based on this approach, and their parameters are estimated by the maximum likelihood method. These composite models are fitted to allocated loss adjustment expenses (ALAE). Of all the composite models studied, the composite Weibull-Inverse Transformed Gamma model proves to be a strong candidate, as it best fits the loss data. The final part applies backtesting to verify the validity of the VaR and CTE risk measures.
Soil mechanics: breaking ground.
Einav, Itai
2007-12-15
In soil mechanics, 'student's models' are classified as simple models that teach us elements of behaviour; an example is the Cam clay constitutive model of critical state soil mechanics (CSSM). 'Engineer's models' are models that elaborate the theory to fit more behavioural trends, usually by adding fitting parameters to the student's models. Can currently unexplained behavioural trends of soil be explained without adding fitting parameters to CSSM models, by developing alternative student's models based on modern theories? Here I apply an alternative theory to CSSM, called 'breakage mechanics', and develop a simple student's model for sand. Its unique and distinctive feature is the use of an energy balance equation that connects grain size reduction to consumption of energy, which enables us to predict how the grain size distribution (gsd) evolves, an unprecedented capability in constitutive modelling. With only four parameters, the model physically clarifies what CSSM cannot for sand: the dependency of yielding and critical state on the initial gsd and void ratio.
An MLE method for finding LKB NTCP model parameters using Monte Carlo uncertainty estimates
NASA Astrophysics Data System (ADS)
Carolan, Martin; Oborn, Brad; Foo, Kerwyn; Haworth, Annette; Gulliford, Sarah; Ebert, Martin
2014-03-01
The aims of this work were to establish a program to fit NTCP models to clinical data with multiple toxicity endpoints, to test the method using a realistic test dataset, to compare three methods for estimating confidence intervals for the fitted parameters and to characterise the speed and performance of the program.
Modeling and Bayesian parameter estimation for shape memory alloy bending actuators
NASA Astrophysics Data System (ADS)
Crews, John H.; Smith, Ralph C.
2012-04-01
In this paper, we employ a homogenized energy model (HEM) for shape memory alloy (SMA) bending actuators and utilize a Bayesian method for quantifying parameter uncertainty. The system consists of an SMA wire attached to a flexible beam. As the actuator is heated, the beam bends, providing endoscopic motion. The model parameters are fit to experimental data using an ordinary least-squares approach. The uncertainty in the fitted model parameters is then quantified using Markov chain Monte Carlo (MCMC) methods. The MCMC algorithm provides bounds on the parameters, which will ultimately be used in robust control algorithms. One purpose of the paper is to test the feasibility of the Random Walk Metropolis algorithm, the MCMC method used here.
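The Random Walk Metropolis step whose feasibility the paper tests can be sketched in one dimension. The target here is a toy standard-normal log-posterior standing in for the SMA parameter posterior; step size, sample count and seed are arbitrary choices, not values from the paper.

```python
import math
import random

# Random Walk Metropolis sketch: propose a Gaussian step, accept with
# probability min(1, posterior ratio), otherwise stay put.

def log_post(x):
    return -0.5 * x * x  # standard normal, up to an additive constant

def metropolis(n_samples, step=1.0, x0=0.0, seed=1):
    rng = random.Random(seed)
    x, lp = x0, log_post(x0)
    samples = []
    for _ in range(n_samples):
        prop = x + rng.gauss(0.0, step)        # symmetric random-walk proposal
        lp_prop = log_post(prop)
        if rng.random() < math.exp(min(0.0, lp_prop - lp)):
            x, lp = prop, lp_prop              # accept; otherwise keep x
        samples.append(x)
    return samples

samples = metropolis(50000)
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
```

The empirical quantiles of such chains are what provide the parameter bounds mentioned in the abstract.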
An Improved Statistical Solution for Global Seismicity by the HIST-ETAS Approach
NASA Astrophysics Data System (ADS)
Chu, A.; Ogata, Y.; Katsura, K.
2010-12-01
For long-term global seismic model fitting, recent work by Chu et al. (2010) applied the space-time ETAS model (Ogata 1998) to global data partitioned into tectonic zones based on geophysical characteristics (Bird 2003), and showed substantial improvement in model fit compared with a single overall global model. While the ordinary ETAS model assumes constant parameter values across the entire region analyzed, the hierarchical space-time ETAS model (HIST-ETAS, Ogata 2004) allows regional variation of the parameters for more accurate seismic prediction. As the HIST-ETAS model has been fit to regional data from Japan (Ogata 2010), our work applies it to describe global seismicity. Employing Akaike's Bayesian Information Criterion (ABIC) for assessment, we compare maximum likelihood results with zone divisions to those obtained from an overall global model. Location-dependent parameters of the model and Gutenberg-Richter b-values are optimized, and seismological interpretations are discussed.
Critical elements on fitting the Bayesian multivariate Poisson Lognormal model
NASA Astrophysics Data System (ADS)
Zamzuri, Zamira Hasanah binti
2015-10-01
Motivated by a problem of fitting multivariate models to traffic accident data, a detailed discussion of the Multivariate Poisson Lognormal (MPL) model is presented. This paper reveals three critical elements in fitting the MPL model: the setting of initial estimates, hyperparameters and tuning parameters. These issues have not been highlighted in the literature. Based on simulation studies, we show that when Univariate Poisson Model (UPM) estimates are used as starting values, at least 20,000 iterations are needed to obtain reliable final estimates. We also illustrate the sensitivity of a specific hyperparameter which, if not given extra attention, may affect the final estimates. The last issue concerns the tuning parameters, which depend on the acceptance rate. Finally, a heuristic algorithm for fitting the MPL model is presented. This acts as a guide to ensure that the model works satisfactorily for any given data set.
A Note on Recurring Misconceptions When Fitting Nonlinear Mixed Models.
Harring, Jeffrey R; Blozis, Shelley A
2016-01-01
Nonlinear mixed-effects (NLME) models are used when analyzing continuous repeated measures data taken on each of a number of individuals where the focus is on characteristics of complex, nonlinear individual change. Challenges with fitting NLME models and interpreting analytic results have been well documented in the statistical literature. However, parameter estimates as well as fitted functions from NLME analyses in recent articles have been misinterpreted, suggesting the need for clarification of these issues before the misconceptions become fact. These misconceptions arise from the choice of popular estimation algorithms, namely the first-order linearization (FO) and Gauss-Hermite quadrature (GHQ) methods, and how these choices necessarily lead to population-average (PA) or subject-specific (SS) interpretations of model parameters, respectively. These estimation approaches also affect the fitted function for the typical individual and the lack-of-fit of individuals' predicted trajectories.
Drake, Andrew W; Klakamp, Scott L
2007-01-10
A new 4-parameter nonlinear equation based on the standard multiple independent binding site (MIBS) model is presented for fitting cell-based ligand titration data in order to calculate the ligand/cell receptor equilibrium dissociation constant (K_D) and the number of receptors per cell. The most commonly used linear (Scatchard plot) or nonlinear 2-parameter model (a single binding site model found in commercial programs such as Prism) used for analysis of ligand/receptor binding data assumes that only the K_D influences the shape of the titration curve. We demonstrate using simulated data sets that, depending upon the cell surface receptor expression level, the number of cells titrated, and the magnitude of the K_D being measured, this assumption of always being under K_D-controlled conditions can be erroneous and can lead to unreliable estimates for the binding parameters. We also compare and contrast the fitting of simulated data sets to the commonly used cell-based binding equation versus our more rigorous 4-parameter nonlinear MIBS model. It is shown through these simulations that the new 4-parameter MIBS model, when used for cell-based titrations under optimal conditions, yields highly accurate estimates of all binding parameters and hence should be the preferred model for fitting cell-based experimental nonlinear titration data.
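The failure mode described above, titrations that are not K_D-controlled, comes from ligand depletion. A generic sketch of that effect (not the authors' 4-parameter MIBS equation) contrasts the common hyperbolic approximation with the exact mass-balance solution; the concentrations are arbitrary, in units of K_D.

```python
import math

# When total receptor is far below K_D, the hyperbola and the exact
# quadratic mass-balance solution agree; when receptor concentration
# approaches or exceeds K_D, they diverge.

def bound_exact(L_total, R_total, Kd):
    """Exact complex concentration [LR] from the quadratic mass balance."""
    b = L_total + R_total + Kd
    return (b - math.sqrt(b * b - 4.0 * L_total * R_total)) / 2.0

def bound_hyperbola(L_total, R_total, Kd):
    """Common approximation that ignores ligand depletion."""
    return R_total * L_total / (Kd + L_total)

low = (bound_exact(10.0, 0.01, 1.0), bound_hyperbola(10.0, 0.01, 1.0))
high = (bound_exact(1.0, 5.0, 1.0), bound_hyperbola(1.0, 5.0, 1.0))
# In the depleted regime the hyperbola can even predict more complex
# than the total ligand present; the exact solution cannot.
```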
Waller, Niels G; Feuerstahler, Leah
2017-01-01
In this study, we explored item and person parameter recovery of the four-parameter model (4PM) in over 24,000 real, realistic, and idealized data sets. In the first analyses, we fit the 4PM and three alternative models to data from three Minnesota Multiphasic Personality Inventory-Adolescent form factor scales using Bayesian modal estimation (BME). Our results indicated that the 4PM fits these scales better than simpler Item Response Theory (IRT) models. Next, using the parameter estimates from these real data analyses, we estimated 4PM item parameters in 6,000 realistic data sets to establish minimum sample size requirements for accurate item and person parameter recovery. Using a factorial design that crossed discrete levels of item parameters, sample size, and test length, we also fit the 4PM to an additional 18,000 idealized data sets to extend our parameter recovery findings. Our combined results demonstrated that 4PM item parameters and parameter functions (e.g., item response functions) can be accurately estimated using BME in moderate to large samples (N ≥ 5,000) and person parameters can be accurately estimated in smaller samples (N ≥ 1,000). In the supplemental files, we report annotated R code that shows how to estimate 4PM item and person parameters in mirt (Chalmers, 2012).
Convergence in parameters and predictions using computational experimental design.
Hagen, David R; White, Jacob K; Tidor, Bruce
2013-08-06
Typically, biological models fitted to experimental data suffer from significant parameter uncertainty, which can lead to inaccurate or uncertain predictions. One school of thought holds that accurate estimation of the true parameters of a biological system is inherently problematic. Recent work, however, suggests that optimal experimental design techniques can select sets of experiments whose members probe complementary aspects of a biochemical network that together can account for its full behaviour. Here, we implemented an experimental design approach for selecting sets of experiments that constrain parameter uncertainty. We demonstrated with a model of the epidermal growth factor-nerve growth factor pathway that, after synthetically performing a handful of optimal experiments, the uncertainty in all 48 parameters converged below 10 per cent. Furthermore, the fitted parameters converged to their true values with a small error consistent with the residual uncertainty. When untested experimental conditions were simulated with the fitted models, the predicted species concentrations converged to their true values with errors that were consistent with the residual uncertainty. This paper suggests that accurate parameter estimation is achievable with complementary experiments specifically designed for the task, and that the resulting parametrized models are capable of accurate predictions.
Sussman, Marshall S; Yang, Issac Y; Fok, Kai-Ho; Wintersperger, Bernd J
2016-06-01
The Modified Look-Locker Inversion Recovery (MOLLI) technique is used for T1 mapping in the heart. However, a drawback of this technique is that it requires lengthy rest periods between inversion groupings to allow for complete magnetization recovery. In this work, a new MOLLI fitting algorithm (inversion group [IG] fitting) is presented that allows for arbitrary combinations of inversion groupings and rest periods (including no rest period). Conventional MOLLI algorithms use a three-parameter fitting model. In IG fitting, the number of parameters is two plus the number of inversion groupings. This increased number of parameters permits any inversion grouping/rest period combination. Validation was performed through simulation, phantom, and in vivo experiments. IG fitting provided T1 values with less than 1% discrepancy across a range of inversion grouping/rest period combinations. By comparison, conventional three-parameter fits exhibited up to 30% discrepancy for some combinations. The one drawback of IG fitting was a loss of precision, approximately 30% worse than the three-parameter fits. In summary, IG fitting permits arbitrary inversion grouping/rest period combinations (including no rest period); the cost is a loss of precision relative to conventional three-parameter fits. Magn Reson Med 75:2332-2340, 2016. © 2015 Wiley Periodicals, Inc.
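For context, the conventional three-parameter model that IG fitting generalizes can be sketched directly, together with the Look-Locker correction that converts the apparent T1* to T1. The parameter values are synthetic; this is the standard three-parameter baseline, not the IG-fitting algorithm itself.

```python
import math

# Three-parameter MOLLI model: S(TI) = A - B * exp(-TI / T1*),
# with Look-Locker correction T1 = T1* * (B/A - 1).

def molli_signal(TI, A, B, T1_star):
    """Apparent inversion-recovery signal at inversion time TI (ms)."""
    return A - B * math.exp(-TI / T1_star)

def look_locker_T1(A, B, T1_star):
    """Correct the apparent T1* to the true T1."""
    return T1_star * (B / A - 1.0)

# Choose T1* so the corrected T1 is 1000 ms for A = 1, B = 2.
A, B = 1.0, 2.0
T1_star = 1000.0 / (B / A - 1.0)
T1 = look_locker_T1(A, B, T1_star)
```

IG fitting replaces this single (A, B, T1*) triple with two shared parameters plus one per inversion grouping, which is what frees it from the rest-period requirement.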
Geostatistical models are appropriate for spatially distributed data measured at irregularly spaced locations. We propose an efficient Markov chain Monte Carlo (MCMC) algorithm for fitting Bayesian geostatistical models with substantial numbers of unknown parameters to sizable...
flexsurv: A Platform for Parametric Survival Modeling in R
Jackson, Christopher H.
2018-01-01
flexsurv is an R package for fully parametric modeling of survival data. Any parametric time-to-event distribution may be fitted if the user supplies a probability density or hazard function, and ideally also their cumulative versions. Standard survival distributions are built in, including the three- and four-parameter generalized gamma and F distributions. Any parameter of any distribution can be modeled as a linear or log-linear function of covariates. The package also includes the spline model of Royston and Parmar (2002), in which both baseline survival and covariate effects can be arbitrarily flexible parametric functions of time. The main model-fitting function, flexsurvreg, uses the familiar syntax of survreg from the standard survival package (Therneau 2016). Censoring and left-truncation are specified in ‘Surv’ objects. The models are fitted by maximizing the full log-likelihood, and estimates and confidence intervals for any function of the model parameters can be printed or plotted. flexsurv also provides functions for fitting and predicting from fully parametric multi-state models, and connects with the mstate package (de Wreede, Fiocco, and Putter 2011). This article explains the methods and design principles of the package, giving several worked examples of its use. PMID:29593450
Bustamante, Carlos D.; Valero-Cuevas, Francisco J.
2010-01-01
The field of complex biomechanical modeling has begun to rely on Monte Carlo techniques to investigate the effects of parameter variability and measurement uncertainty on model outputs, search for optimal parameter combinations, and define model limitations. However, advanced stochastic methods to perform data-driven explorations, such as Markov chain Monte Carlo (MCMC), become necessary as the number of model parameters increases. Here, we demonstrate the feasibility of, and what is to our knowledge the first use of, an MCMC approach to improve the fitness of realistically large biomechanical models. We used a Metropolis-Hastings algorithm to search increasingly complex parameter landscapes (3, 8, 24, and 36 dimensions) to uncover underlying distributions of anatomical parameters of a "truth model" of the human thumb on the basis of simulated kinematic data (thumbnail location, orientation, and linear and angular velocities) polluted by zero-mean, uncorrelated multivariate Gaussian "measurement noise." Driven by these data, ten Markov chains searched each model parameter space for the subspace that best fit the data (posterior distribution). As expected, the convergence time increased, more local minima were found, and marginal distributions broadened as the parameter space complexity increased. In the 36-D scenario, some chains found local minima, but the majority converged to the true posterior distribution (confirmed using a cross-validation dataset), demonstrating the feasibility and utility of these methods for realistically large biomechanical problems. PMID:19272906
Dynamical modeling and multi-experiment fitting with PottersWheel
Maiwald, Thomas; Timmer, Jens
2008-01-01
Motivation: Modelers in Systems Biology need a flexible framework that allows them to easily create new dynamic models, investigate their properties and fit several experimental datasets simultaneously. Multi-experiment fitting is a powerful approach to estimate parameter values, to check the validity of a given model, and to discriminate competing model hypotheses. It requires high-performance integration of ordinary differential equations and robust optimization. Results: We here present the comprehensive modeling framework PottersWheel (PW), including novel functionalities to satisfy these requirements, with strong emphasis on the inverse problem, i.e. data-based modeling of partially observed and noisy systems like signal transduction pathways and metabolic networks. PW is designed as a MATLAB toolbox and includes numerous user interfaces. Deterministic and stochastic optimization routines are combined by fitting in logarithmic parameter space, allowing for robust parameter calibration. Model investigation includes statistical tests for model-data compliance, model discrimination, identifiability analysis and calculation of Hessian- and Monte Carlo-based parameter confidence limits. A rich application programming interface is available for customization within the user's own MATLAB code. In an extensive performance analysis, we identified and significantly improved an integrator-optimizer pair which decreases the fitting duration for a realistic benchmark model by a factor of over 3000 compared to MATLAB with the optimization toolbox. Availability: PottersWheel is freely available for academic usage at http://www.PottersWheel.de/. The website contains detailed documentation and introductory videos. The program has been used intensively since 2005 on Windows, Linux and Macintosh computers and does not require special MATLAB toolboxes. Contact: maiwald@fdm.uni-freiburg.de Supplementary information: Supplementary data are available at Bioinformatics online. PMID:18614583
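The fitting-in-logarithmic-parameter-space idea mentioned above can be illustrated with a deliberately simple stand-in optimizer: the search runs over u = log10(k), which keeps the rate constant positive and treats multiplicative changes evenly. The exponential-decay model, the coarse grid search and the data below are toy assumptions, not the PottersWheel API.

```python
import math

# Log-space fitting sketch: minimize least squares over log10(k)
# instead of k itself.

def sse(k, ts, ys):
    """Sum of squared residuals for the toy model y(t) = exp(-k t)."""
    return sum((y - math.exp(-k * t)) ** 2 for t, y in zip(ts, ys))

def fit_log_space(ts, ys, lo=-3.0, hi=1.0, steps=4001):
    """Grid search over u = log10(k) on [lo, hi]; returns the best k."""
    best_u, best_cost = lo, float("inf")
    for i in range(steps):
        u = lo + (hi - lo) * i / (steps - 1)
        cost = sse(10.0 ** u, ts, ys)
        if cost < best_cost:
            best_u, best_cost = u, cost
    return 10.0 ** best_u

ts = [0.0, 1.0, 2.0, 4.0, 8.0]
true_k = 0.5
ys = [math.exp(-true_k * t) for t in ts]
k_hat = fit_log_space(ts, ys)
```

A real framework would use a derivative-based optimizer in u rather than a grid, but the parameterization is the point: rate constants spanning orders of magnitude become comparable in scale.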
Yang, Hyun Mo; Boldrini, José Luiz; Fassoni, Artur César; Freitas, Luiz Fernando Souza; Gomez, Miller Ceron; de Lima, Karla Katerine Barboza; Andrade, Valmir Roberto; Freitas, André Ricardo Ribas
2016-01-01
Four time-dependent dengue transmission models are considered in order to fit the incidence data from the City of Campinas, Brazil, recorded from October 1st 1995 to September 30th 2012. The entomological parameters are allowed to depend on temperature and precipitation, while the carrying capacity and the hatching of eggs depend only on precipitation. The whole period of incidence of dengue is split into four periods, due to the fact that the model is formulated considering the circulation of only one serotype. Dengue transmission parameters from human to mosquito and mosquito to human are fitted for each one of the periods. The time varying partial and overall effective reproduction numbers are obtained to explain the incidence of dengue provided by the models. PMID:27010654
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Yu; Cao, Ruifen; Pei, Xi
2015-06-15
Purpose: The flat-panel detector response characteristics are investigated to optimize the scanning parameters, considering image quality and reduced radiation dose. A signal conversion model is also established to predict tumor shape and physical thickness changes. Methods: With the ELEKTA XVI system, planar images of a 10 cm water phantom were obtained under different image acquisition conditions, including tube voltage, electric current, exposure time and number of frames. The averaged responses of a square area in the center were analyzed using Origin 8.0. The response characteristics for each scanning parameter were depicted by different fitting types. The transmission measured for 10 cm of water was compared to Monte Carlo simulation. Using the quadratic calibration method, a series of images of variable-thickness water phantoms was acquired to derive the signal conversion model. A 20 cm wedge water phantom with 2 cm step thickness was used to verify the model. Finally, the stability and reproducibility of the model were explored over a four-week period. Results: The gray values of the image center all decreased with increasing values of the different image acquisition parameters. The fitting types adopted were linear, quadratic polynomial, Gauss and logarithmic fitting, with fitting R-squares of 0.992, 0.995, 0.997 and 0.996, respectively. For the 10 cm water phantom, the measured transmission showed better uniformity than the Monte Carlo simulation. The wedge phantom experiment shows that the prediction error of radiological thickness changes was in the range (-4 mm, 5 mm). The signal conversion model remained consistent over a period of four weeks. Conclusion: The flat-panel response decreases with increasing values of the scanning parameters. The preferred scanning parameter combination was 100 kV, 10 mA, 10 ms, 15 frames. It is suggested that the signal conversion model could effectively be used for prediction of tumor shape change and radiological thickness.
Supported by National Natural Science Foundation of China (81101132, 11305203) and Natural Science Foundation of Anhui Province (11040606Q55, 1308085QH138).
Improved parameter inference in catchment models: 1. Evaluating parameter uncertainty
NASA Astrophysics Data System (ADS)
Kuczera, George
1983-10-01
A Bayesian methodology is developed to evaluate parameter uncertainty in catchment models fitted to a hydrologic response such as runoff, the goal being to improve the chance of successful regionalization. The catchment model is posed as a nonlinear regression model with stochastic errors possibly being both autocorrelated and heteroscedastic. The end result of this methodology, which may use Box-Cox power transformations and ARMA error models, is the posterior distribution, which summarizes what is known about the catchment model parameters. This can be simplified to a multivariate normal provided a linearization in parameter space is acceptable; means of checking and improving this assumption are discussed. The posterior standard deviations give a direct measure of parameter uncertainty, and study of the posterior correlation matrix can indicate what kinds of data are required to improve the precision of poorly determined parameters. Finally, a case study involving a nine-parameter catchment model fitted to monthly runoff and soil moisture data is presented. It is shown that use of ordinary least squares when its underlying error assumptions are violated gives an erroneous description of parameter uncertainty.
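The Box-Cox power transformation used in the methodology above has a simple closed form, with the log transform as its λ → 0 limit. The data value and λ choices below are illustrative only.

```python
import math

# Box-Cox power transformation sketch, as used to stabilize
# heteroscedastic errors before fitting.

def box_cox(y, lam):
    """Box-Cox transform of y > 0 with power parameter lam."""
    if lam == 0.0:
        return math.log(y)
    return (y ** lam - 1.0) / lam

# The lam -> 0 limit recovers the log transform (continuity check).
near_log = box_cox(5.0, 1e-8)
exact_log = box_cox(5.0, 0.0)
```

In the catchment-model setting, λ would itself be inferred along with the other parameters so that the transformed residuals better satisfy the error assumptions.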
The Routine Fitting of Kinetic Data to Models
Berman, Mones; Shahn, Ezra; Weiss, Marjory F.
1962-01-01
A mathematical formalism is presented for use with digital computers to permit the routine fitting of data to physical and mathematical models. Given a set of data, the mathematical equations describing a model, initial conditions for an experiment, and initial estimates for the values of model parameters, the computer program automatically proceeds to obtain a least squares fit of the data by an iterative adjustment of the values of the parameters. When the experimental measures are linear combinations of functions, the linear coefficients for a least squares fit may also be calculated. The values of both the parameters of the model and the coefficients for the sum of functions may be unknown independent variables, unknown dependent variables, or known constants. In the case of dependence, only linear dependencies are provided for in routine use. The computer program includes a number of subroutines, each one of which performs a special task. This permits flexibility in choosing various types of solutions and procedures. One subroutine, for example, handles linear differential equations, another, special non-linear functions, etc. The use of analytic or numerical solutions of equations is possible. PMID:13867975
Correlated parameter fit of Arrhenius model for thermal denaturation of proteins and cells.
Qin, Zhenpeng; Balasubramanian, Saravana Kumar; Wolkers, Willem F; Pearce, John A; Bischof, John C
2014-12-01
Thermal denaturation of proteins is critical to cell injury, food science and other biomaterial processing. For example, protein denaturation correlates strongly with cell death by heating, and is increasingly of interest in focal thermal therapies of cancer and other diseases at temperatures which often exceed 50 °C. The Arrhenius model is a simple yet widely used model for both protein denaturation and cell injury. To establish the utility of the Arrhenius model for protein denaturation at 50 °C and above, its sensitivities to the kinetic parameters (activation energy Ea and frequency factor A) were carefully examined. We propose a simplified correlated parameter fit to the Arrhenius model by treating Ea as an independent fitting parameter and allowing A to follow dependently. The utility of the correlated parameter fit is demonstrated on thermal denaturation of proteins and cells from the literature as a validation, and on new experimental measurements in our lab using FTIR spectroscopy to demonstrate broad applicability of the method. Finally, we demonstrate that the end-temperature within which the denaturation is measured is important and changes the kinetics. Specifically, higher Ea and A parameters were found at a low end-temperature (50 °C) and decrease as the end-temperature increases to 70 °C. This trend is consistent with Arrhenius parameters for cell injury in the literature, which are significantly higher for clonogenic assays (45-50 °C) vs. membrane dye assays (60-70 °C). Future opportunities to monitor cell injury by spectroscopic measurement of protein denaturation are discussed.
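The correlated-parameter idea, Ea free with A following dependently, can be sketched by pinning the model to a reference condition (a fixed rate at a reference temperature), so that A becomes a function of Ea and only one free parameter remains. The reference rate and temperatures below are illustrative assumptions, not values from the paper.

```python
import math

R_GAS = 8.314  # J/(mol K)

# Correlated Arrhenius sketch: k(T) = A * exp(-Ea / (R T)), with A
# determined by requiring k(T_ref) = k_ref for any choice of Ea.

def A_from_Ea(Ea, k_ref, T_ref):
    """Frequency factor implied by Ea under the reference constraint."""
    return k_ref * math.exp(Ea / (R_GAS * T_ref))

def arrhenius_rate(T, Ea, k_ref=1.0, T_ref=333.15):
    A = A_from_Ea(Ea, k_ref, T_ref)
    return A * math.exp(-Ea / (R_GAS * T))

# Any Ea reproduces the reference point exactly; Ea only sets how
# steeply the rate changes with temperature around it.
r1 = arrhenius_rate(333.15, Ea=200e3)
r2 = arrhenius_rate(333.15, Ea=400e3)
steep = arrhenius_rate(343.15, Ea=400e3) / arrhenius_rate(343.15, Ea=200e3)
```

This captures the well-known strong Ea-ln(A) correlation: the data constrain the rate near the measured temperatures, so one parameter largely compensates for the other.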
Rice, Stephen B; Chan, Christopher; Brown, Scott C; Eschbach, Peter; Han, Li; Ensor, David S; Stefaniak, Aleksandr B; Bonevich, John; Vladár, András E; Hight Walker, Angela R; Zheng, Jiwen; Starnes, Catherine; Stromberg, Arnold; Ye, Jia; Grulke, Eric A
2015-01-01
This paper reports an interlaboratory comparison that evaluated a protocol for measuring and analysing the particle size distribution of discrete, metallic, spheroidal nanoparticles using transmission electron microscopy (TEM). The study was focused on automated image capture and automated particle analysis. NIST RM8012 gold nanoparticles (30 nm nominal diameter) were measured for area-equivalent diameter distributions by eight laboratories. Statistical analysis was used to (1) assess the data quality without using size distribution reference models, (2) determine reference model parameters for different size distribution reference models and non-linear regression fitting methods and (3) assess the measurement uncertainty of a size distribution parameter by using its coefficient of variation. The interlaboratory area-equivalent diameter mean, 27.6 nm ± 2.4 nm (computed based on a normal distribution), was quite similar to the area-equivalent diameter, 27.6 nm, assigned to NIST RM8012. The lognormal reference model was the preferred choice for these particle size distributions as, for all laboratories, its parameters had lower relative standard errors (RSEs) than the other size distribution reference models tested (normal, Weibull and Rosin–Rammler–Bennett). The RSEs for the fitted standard deviations were two orders of magnitude higher than those for the fitted means, suggesting that most of the parameter estimate errors were associated with estimating the breadth of the distributions. The coefficients of variation for the interlaboratory statistics also confirmed the lognormal reference model as the preferred choice. From quasi-linear plots, the typical range for good fits between the model and cumulative number-based distributions was 1.9 fitted standard deviations less than the mean to 2.3 fitted standard deviations above the mean. 
Automated image capture, automated particle analysis and statistical evaluation of the data and fitting coefficients provide a framework for assessing nanoparticle size distributions using TEM for image acquisition. PMID:26361398
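The reference-model comparison described above can be sketched in a few lines: fit normal and lognormal CDFs to an empirical cumulative number-based size distribution and compare the relative standard errors (RSEs) of the fitted parameters. This is a minimal illustration with synthetic diameters, not the study's protocol; the sample size and spread are assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

rng = np.random.default_rng(0)
# Synthetic area-equivalent diameters (nm), roughly on the RM8012 scale
diam = rng.lognormal(mean=np.log(27.6), sigma=0.09, size=500)

# Empirical cumulative number-based distribution
x = np.sort(diam)
ecdf = np.arange(1, x.size + 1) / x.size

def cdf_normal(d, mu, sd):
    return norm.cdf(d, loc=mu, scale=sd)

def cdf_lognormal(d, mu_log, sd_log):
    return norm.cdf(np.log(d), loc=mu_log, scale=sd_log)

results = {}
for name, model, p0 in [("normal", cdf_normal, (27.0, 3.0)),
                        ("lognormal", cdf_lognormal, (np.log(27.0), 0.1))]:
    popt, pcov = curve_fit(model, x, ecdf, p0=p0)
    rse = np.sqrt(np.diag(pcov)) / np.abs(popt)  # relative standard errors
    results[name] = (popt, rse)
```

As in the study, the RSE of the fitted spread parameter typically comes out much larger than that of the fitted mean, since the breadth of the distribution is harder to estimate than its center.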
Fitting the Rasch Model to Account for Variation in Item Discrimination
ERIC Educational Resources Information Center
Weitzman, R. A.
2009-01-01
Building on the Kelley and Gulliksen versions of classical test theory, this article shows that a logistic model having only a single item parameter can account for varying item discrimination, as well as difficulty, by using item-test correlations to adjust incorrect-correct (0-1) item responses prior to an initial model fit. The fit occurs…
RATES OF FITNESS DECLINE AND REBOUND SUGGEST PERVASIVE EPISTASIS
Perfeito, L; Sousa, A; Bataillon, T; Gordo, I
2014-01-01
Unraveling the factors that determine the rate of adaptation is a major question in evolutionary biology. One key parameter is the effect of a new mutation on fitness, which invariably depends on the environment and genetic background. The fate of a mutation also depends on population size, which determines the amount of drift it will experience. Here, we manipulate both population size and genotype composition and follow adaptation of 23 distinct Escherichia coli genotypes. These have previously accumulated mutations under intense genetic drift and encompass substantial fitness variation. A simple rule is uncovered: the net fitness change is negatively correlated with the fitness of the genotype in which new mutations appear, a signature of epistasis. We find that Fisher's geometrical model can account for the observed patterns of fitness change and infer the parameters of this model that best fit the data, using Approximate Bayesian Computation. We estimate a genomic mutation rate of 0.01 per generation for fitness-altering mutations, albeit with a large confidence interval, a mean fitness effect of mutations of −0.01, and an effective number of traits of nine in mutS− E. coli. This framework can be extended to confront a broader range of models with data and test different classes of fitness landscape models. PMID:24372601
Bayesian-MCMC-based parameter estimation of stealth aircraft RCS models
NASA Astrophysics Data System (ADS)
Xia, Wei; Dai, Xiao-Xia; Feng, Yuan
2015-12-01
When modeling a stealth aircraft with low RCS (Radar Cross Section), conventional parameter estimation methods may cause a deviation from the actual distribution, owing to the fact that the characteristic parameters are estimated by directly calculating the statistics of the RCS. The Bayesian-Markov Chain Monte Carlo (Bayesian-MCMC) method is introduced herein to estimate the parameters so as to improve the fitting accuracy of fluctuation models. The parameter estimations of the lognormal and the Legendre polynomial models are reformulated in the Bayesian framework. The MCMC algorithm is then adopted to calculate the parameter estimates. Numerical results show that the distribution curves obtained by the proposed method exhibit improved consistency with the actual ones, compared with those fitted by the conventional method. The fitting accuracy could be improved by no less than 25% for both fluctuation models, which implies that the Bayesian-MCMC method might be a good candidate among optimal parameter estimation methods for stealth aircraft RCS models. Project supported by the National Natural Science Foundation of China (Grant No. 61101173), the National Basic Research Program of China (Grant No. 613206), the National High Technology Research and Development Program of China (Grant No. 2012AA01A308), the State Scholarship Fund of the China Scholarship Council (CSC), and the Oversea Academic Training Funds of the University of Electronic Science and Technology of China (UESTC).
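A toy version of the Bayesian-MCMC idea: estimate the parameters of a lognormal fluctuation model with a random-walk Metropolis sampler rather than from direct sample statistics. The data, flat priors, proposal scale, and chain length are illustrative assumptions; the paper's Legendre-polynomial model is not treated here.

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic RCS samples drawn from a lognormal fluctuation model
true_mu, true_sigma = 0.0, 0.5
rcs = rng.lognormal(true_mu, true_sigma, size=400)

def log_posterior(mu, sigma):
    # Flat priors; lognormal log-likelihood (constants dropped)
    if sigma <= 0:
        return -np.inf
    z = (np.log(rcs) - mu) / sigma
    return -rcs.size * np.log(sigma) - 0.5 * np.sum(z * z)

# Random-walk Metropolis sampler
theta = np.array([0.5, 1.0])          # deliberately poor starting point
lp = log_posterior(*theta)
chain = []
for _ in range(5000):
    prop = theta + rng.normal(scale=0.05, size=2)
    lp_prop = log_posterior(*prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    chain.append(theta)
chain = np.array(chain[1000:])        # discard burn-in
mu_hat, sigma_hat = chain.mean(axis=0)
```

The posterior means recover the generating parameters; with real low-RCS data the gain over moment-based estimates comes from the likelihood weighting rather than from the sampler itself.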
Utilization of Expert Knowledge in a Multi-Objective Hydrologic Model Automatic Calibration Process
NASA Astrophysics Data System (ADS)
Quebbeman, J.; Park, G. H.; Carney, S.; Day, G. N.; Micheletty, P. D.
2016-12-01
Spatially distributed continuous simulation hydrologic models have a large number of parameters for potential adjustment during the calibration process. Traditional manual calibration of such a modeling system is extremely laborious, which has historically motivated the use of automatic calibration procedures. With a large selection of model parameters, high degrees of objective-space fitness - measured with typical metrics such as Nash-Sutcliffe, Kling-Gupta, and RMSE - can easily be achieved using a range of evolutionary algorithms. A concern with this approach is the high degree of compensatory calibration, with many similarly performing solutions and yet grossly varying parameter sets. To help alleviate this concern, and to mimic manual calibration processes, expert knowledge is proposed for inclusion within the multi-objective functions that evaluate the parameter decision space. As a result, Pareto solutions are identified with high degrees of fitness, but with parameter sets that maintain and utilize available expert knowledge, resulting in more realistic and consistent solutions. This process was tested using the joint SNOW-17 and Sacramento Soil Moisture Accounting (SAC-SMA) method within the Animas River basin in Colorado. Three different elevation zones, each with a range of parameters, resulted in over 35 model parameters being simultaneously calibrated. High degrees of fitness were achieved, in addition to the development of more realistic and consistent parameter sets such as those typically achieved during manual calibration procedures.
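The expert-knowledge idea can be made concrete as an extra objective alongside a conventional fitness metric: a minimal sketch (not the authors' implementation) in which parameters straying outside expert-judged plausible ranges incur a penalty that a multi-objective search must trade off against Nash-Sutcliffe efficiency.

```python
import numpy as np

def nash_sutcliffe(sim, obs):
    # Classical NSE: 1 means perfect fit, <= 0 means no better than the mean
    sim = np.asarray(sim, float)
    obs = np.asarray(obs, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def expert_penalty(params, expert_ranges):
    # Quadratic penalty for parameters outside expert-judged plausible ranges,
    # scaled by the range width so different parameters are comparable
    pen = 0.0
    for p, (lo, hi) in zip(params, expert_ranges):
        if p < lo:
            pen += ((lo - p) / (hi - lo)) ** 2
        elif p > hi:
            pen += ((p - hi) / (hi - lo)) ** 2
    return pen

def objectives(params, sim, obs, expert_ranges):
    # Two objectives to minimize: model misfit (1 - NSE) and the expert penalty;
    # a Pareto search then exposes the trade-off between the two
    return (1.0 - nash_sutcliffe(sim, obs), expert_penalty(params, expert_ranges))
```

A Pareto front over these two objectives keeps high-fitness solutions while discarding compensatory parameter sets that an expert would reject.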
DOE Office of Scientific and Technical Information (OSTI.GOV)
Peeler, C; Bronk, L; UT Graduate School of Biomedical Sciences at Houston, Houston, TX
2015-06-15
Purpose: High throughput in vitro experiments assessing cell survival following proton radiation indicate that both the alpha and the beta parameters of the linear quadratic model increase with increasing proton linear energy transfer (LET). We investigated the relative biological effectiveness (RBE) of double-strand break (DSB) induction as a means of explaining the experimental results. Methods: Experiments were performed with two lung cancer cell lines and a range of proton LET values (0.94-19.4 keV/µm) using an experimental apparatus designed to irradiate cells in a 96 well plate such that each column encounters protons of different dose-averaged LET (LETd). Traditional linear quadratic survival curve fitting was performed, and alpha, beta, and RBE values obtained. Survival curves were also fit with a model incorporating RBE of DSB induction as the sole fit parameter. Fitted values of the RBE of DSB induction were then compared to values obtained using Monte Carlo Damage Simulation (MCDS) software and energy spectra calculated with Geant4. Other parameters including alpha, beta, and number of DSBs were compared to those obtained from traditional fitting. Results: Survival curve fitting with RBE of DSB induction yielded alpha and beta parameters that increase with proton LETd, which follows from the standard method of fitting; however, relying on a single fit parameter provided more consistent trends. The fitted values of RBE of DSB induction increased beyond what is predicted from MCDS data above proton LETd of approximately 10 keV/µm. Conclusion: In order to accurately model in vitro proton irradiation experiments performed with high throughput methods, the RBE of DSB induction must increase more rapidly than predicted by MCDS above LETd of 10 keV/µm. This can be explained by considering the increased complexity of DSBs or the nature of intra-track pairwise DSB interactions in this range of LETd values. NIH Grant 2U19CA021239-35.
Mechanistic equivalent circuit modelling of a commercial polymer electrolyte membrane fuel cell
NASA Astrophysics Data System (ADS)
Giner-Sanz, J. J.; Ortega, E. M.; Pérez-Herranz, V.
2018-03-01
Electrochemical impedance spectroscopy (EIS) has been widely used in the fuel cell field, since it allows deconvolving the different physico-chemical processes that affect fuel cell performance. Typically, EIS spectra are modelled using electric equivalent circuits. In this work, EIS spectra of an individual cell of a commercial PEM fuel cell stack were obtained experimentally. The goal was to obtain a mechanistic electric equivalent circuit in order to model the experimental EIS spectra. A mechanistic electric equivalent circuit is a semiempirical modelling technique based on obtaining an equivalent circuit that not only correctly fits the experimental spectra, but whose elements also have a mechanistic physical meaning. To obtain such a circuit, 12 different models with defined physical meanings were proposed and fitted to the measured EIS spectra. A two-step selection process was then performed. In the first step, a group of four circuits was preselected out of the initial list of 12, based on general fitting indicators such as the determination coefficient and the fitted-parameter uncertainty. In the second step, one of the four preselected circuits was selected on account of the consistency of the fitted parameter values with the physical meaning of each parameter.
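Equivalent-circuit fitting of an EIS spectrum reduces to nonlinear least squares on the real and imaginary parts of the impedance. A minimal sketch for a simple Randles-type circuit (series resistance plus a charge-transfer resistance in parallel with a double-layer capacitance); the circuit topology, frequency range, and element values are illustrative assumptions, not the 12 candidate circuits of the study.

```python
import numpy as np
from scipy.optimize import least_squares

def randles_z(freq, r_s, r_ct, c_dl):
    # Randles-type circuit: series resistance + (R_ct parallel with C_dl)
    w = 2 * np.pi * freq
    return r_s + r_ct / (1 + 1j * w * r_ct * c_dl)

freq = np.logspace(-1, 4, 60)           # Hz
true = (0.01, 0.05, 0.2)                # ohm, ohm, farad (illustrative values)
z_meas = randles_z(freq, *true)         # stands in for a measured spectrum

def residuals(p):
    # Fit real and imaginary parts simultaneously
    z = randles_z(freq, *p)
    return np.concatenate([(z - z_meas).real, (z - z_meas).imag])

fit = least_squares(residuals, x0=(0.02, 0.1, 0.1))
```

With real spectra, the covariance derived from `fit.jac` gives the fitted-parameter uncertainties used to preselect circuits.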
Model error in covariance structure models: Some implications for power and Type I error
Coffman, Donna L.
2010-01-01
The present study investigated the degree to which violation of the parameter drift assumption affects the Type I error rate for the test of close fit and power analysis procedures proposed by MacCallum, Browne, and Sugawara (1996) for both the test of close fit and the test of exact fit. The parameter drift assumption states that as sample size increases both sampling error and model error (i.e. the degree to which the model is an approximation in the population) decrease. Model error was introduced using a procedure proposed by Cudeck and Browne (1992). The empirical power for both the test of close fit, in which the null hypothesis specifies that the Root Mean Square Error of Approximation (RMSEA) ≤ .05, and the test of exact fit, in which the null hypothesis specifies that RMSEA = 0, is compared with the theoretical power computed using the MacCallum et al. (1996) procedure. The empirical power and theoretical power for both the test of close fit and the test of exact fit are nearly identical under violations of the assumption. The results also indicated that the test of close fit maintains the nominal Type I error rate under violations of the assumption. PMID:21331302
van Baalen, Sophie; Leemans, Alexander; Dik, Pieter; Lilien, Marc R; Ten Haken, Bennie; Froeling, Martijn
2017-07-01
To evaluate if a three-component model correctly describes the diffusion signal in the kidney and whether it can provide complementary anatomical or physiological information about the underlying tissue. Ten healthy volunteers were examined at 3T, with T2-weighted imaging, diffusion tensor imaging (DTI), and intravoxel incoherent motion (IVIM). Diffusion tensor parameters (mean diffusivity [MD] and fractional anisotropy [FA]) were obtained by iterative weighted linear least squares fitting of the DTI data, and mono-, bi-, and triexponential fit parameters (D1, D2, D3, f_fast2, f_fast3, and f_interm) using a nonlinear fit of the IVIM data. Average parameters were calculated for three regions of interest (ROIs) (cortex, medulla, and rest) and from fiber tractography. Goodness of fit was assessed with adjusted R2 (R2adj), and the Shapiro-Wilk test was used to test residuals for normality. Maps of diffusion parameters were also visually compared. Fitting the diffusion signal was feasible for all models. The three-component model was best able to describe fast signal decay at low b values (b < 50), which was most apparent in R2adj of the ROI containing high diffusion signals (ROI_rest), which was 0.42 ± 0.14, 0.61 ± 0.11, 0.77 ± 0.09, and 0.81 ± 0.08 for DTI and the one-, two-, and three-component models, respectively, and in visual comparison of the fitted and measured S0. None of the models showed significant differences (P > 0.05) between the diffusion constants of the medulla and cortex, whereas the f_fast component of the two- and three-component models was significantly different (P < 0.001). Triexponential fitting is feasible for the diffusion signal in the kidney, and provides additional information. J. Magn. Reson. Imaging 2017;46:228-239.
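Multiexponential IVIM fitting and the adjusted-R2 comparison can be sketched as follows, simplified here to mono- versus biexponential models on a synthetic signal decay; the b-values, fractions, diffusivities, and noise level are illustrative assumptions, not the study's kidney data.

```python
import numpy as np
from scipy.optimize import curve_fit

# b-values (s/mm^2), dense at low b to capture fast pseudodiffusion
b = np.array([0, 10, 20, 30, 50, 75, 100, 150, 200, 400, 600, 800], float)

def mono(b, d):
    return np.exp(-b * d)

def biexp(b, f_fast, d_fast, d_slow):
    return f_fast * np.exp(-b * d_fast) + (1 - f_fast) * np.exp(-b * d_slow)

rng = np.random.default_rng(2)
s = biexp(b, 0.25, 0.05, 0.0015) + rng.normal(0, 0.005, b.size)

def r2_adj(y, yhat, n_par):
    # Adjusted R^2 penalizes extra free parameters
    ss_res = np.sum((y - yhat) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    r2 = 1 - ss_res / ss_tot
    return 1 - (1 - r2) * (y.size - 1) / (y.size - n_par - 1)

p1, _ = curve_fit(mono, b, s, p0=[0.002])
p2, _ = curve_fit(biexp, b, s, p0=[0.2, 0.03, 0.001],
                  bounds=([0, 0.005, 0], [1, 1, 0.005]))
r2_mono = r2_adj(s, mono(b, *p1), 1)
r2_bi = r2_adj(s, biexp(b, *p2), 3)
```

The multicomponent model wins on adjusted R2 because the monoexponential cannot follow the fast decay at low b, mirroring the study's ROI results; a third component would be added the same way.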
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dirian, Yves; Foffa, Stefano; Kunz, Martin
We study the cosmological predictions of two recently proposed non-local modifications of General Relativity. Both models have the same number of parameters as ΛCDM, with a mass parameter m replacing the cosmological constant. We implement the cosmological perturbations of the non-local models into a modification of the CLASS Boltzmann code, and we make a full comparison to CMB, BAO and supernova data. We find that the non-local models fit these datasets very well, at the same level as ΛCDM. Among the vast literature on modified gravity models, this is, to our knowledge, the only example which fits data as well as ΛCDM without requiring any additional parameter. For both non-local models, parameter estimation using Planck+JLA+BAO data gives a value of H0 slightly higher than in ΛCDM.
Impact of fitting algorithms on errors of parameter estimates in dynamic contrast-enhanced MRI
NASA Astrophysics Data System (ADS)
Debus, C.; Floca, R.; Nörenberg, D.; Abdollahi, A.; Ingrisch, M.
2017-12-01
Parameter estimation in dynamic contrast-enhanced MRI (DCE MRI) is usually performed by non-linear least square (NLLS) fitting of a pharmacokinetic model to a measured concentration-time curve. The two-compartment exchange model (2CXM) describes the compartments ‘plasma’ and ‘interstitial volume’ and their exchange in terms of plasma flow and capillary permeability. The model function can be defined by either a system of two coupled differential equations or a closed-form analytical solution. The aim of this study was to compare these two representations in terms of accuracy, robustness and computation speed, depending on parameter combination and temporal sampling. The impact on parameter estimation errors was investigated by fitting the 2CXM to simulated concentration-time curves. Parameter combinations representing five tissue types were used, together with two arterial input functions, a measured one and a theoretical population-based one, to generate 4D concentration images at three different temporal resolutions. Images were fitted by NLLS techniques, where the sum of squared residuals was calculated by either numeric integration with the Runge-Kutta method or convolution. Furthermore, two example cases, a prostate carcinoma and a glioblastoma multiforme patient, were analyzed in order to investigate the validity of our findings in real patient data. The convolution approach yields improved results in precision and robustness of determined parameters. Precision and stability are limited in curves with low blood flow. The model parameter v_e shows great instability and little reliability in all cases. Decreased temporal resolution results in significant errors for the differential equation approach in several curve types. The convolution excelled in computational speed by three orders of magnitude. Uncertainties in parameter estimation at low temporal resolution cannot be compensated for by use of the differential equations.
Fitting with the convolution approach is superior in computation time, while also providing better stability and accuracy.
Equal Area Logistic Estimation for Item Response Theory
NASA Astrophysics Data System (ADS)
Lo, Shih-Ching; Wang, Kuo-Chang; Chang, Hsin-Li
2009-08-01
Item response theory (IRT) models use logistic functions exclusively as item response functions (IRFs). Applications of IRT models require obtaining the set of values for logistic function parameters that best fit an empirical data set. However, success in obtaining such set of values does not guarantee that the constructs they represent actually exist, for the adequacy of a model is not sustained by the possibility of estimating parameters. In this study, an equal area based two-parameter logistic model estimation algorithm is proposed. Two theorems are given to prove that the results of the algorithm are equivalent to the results of fitting data by logistic model. Numerical results are presented to show the stability and accuracy of the algorithm.
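For context, a standard two-parameter logistic (2PL) fit by maximum likelihood, the baseline against which an equal-area estimator would be compared. This is a hedged sketch: abilities are taken as known and the item parameters are simulated, which are simplifying assumptions rather than the paper's algorithm.

```python
import numpy as np
from scipy.optimize import minimize

def irf_2pl(theta, a, b):
    # Two-parameter logistic item response function:
    # a = discrimination, b = difficulty
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

rng = np.random.default_rng(3)
theta = rng.normal(0, 1, 2000)                 # known abilities (illustrative)
a_true, b_true = 1.5, 0.3
y = rng.uniform(size=theta.size) < irf_2pl(theta, a_true, b_true)

def neg_log_lik(params):
    a, b = params
    p = np.clip(irf_2pl(theta, a, b), 1e-9, 1 - 1e-9)
    return -np.sum(np.where(y, np.log(p), np.log(1 - p)))

fit = minimize(neg_log_lik, x0=[1.0, 0.0], method="Nelder-Mead")
a_hat, b_hat = fit.x
```

An equal-area procedure would instead match areas under candidate IRFs to the empirical response curve; the ML fit above is the conventional benchmark for its stability and accuracy claims.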
Modeling dust emission in the Magellanic Clouds with Spitzer and Herschel
NASA Astrophysics Data System (ADS)
Chastenet, Jérémy; Bot, Caroline; Gordon, Karl D.; Bocchio, Marco; Roman-Duval, Julia; Jones, Anthony P.; Ysard, Nathalie
2017-05-01
Context. Dust modeling is crucial to infer dust properties and budget for galaxy studies. However, there are systematic disparities between dust grain models that result in corresponding systematic differences in the inferred dust properties of galaxies. Quantifying these systematics requires a consistent fitting analysis. Aims: We compare the output dust parameters and assess the differences between two dust grain models, the DustEM model and THEMIS. In this study, we use a single fitting method applied to all the models to extract a coherent and unique statistical analysis. Methods: We fit the models to the dust emission seen by Spitzer and Herschel in the Small and Large Magellanic Clouds (SMC and LMC). The observations cover the infrared (IR) spectrum from a few microns to the sub-millimeter range. For each fitted pixel, we calculate the full n-D likelihood based on a previously described method. The free parameters are both environmental (U, the interstellar radiation field strength; αISRF, power-law coefficient for a multi-U environment; Ω∗, the starlight strength) and intrinsic to the model (YI: abundances of the grain species I; αsCM20, coefficient in the small carbon grain size distribution). Results: Fractional residuals of five different sets of parameters show that fitting THEMIS yields a more accurate reproduction of the observations than the DustEM model. However, independent variations of the dust species show strong model dependencies. We find that the abundance of silicates can only be constrained to an upper limit and that the silicate/carbon ratio differs from that seen in our Galaxy. In the LMC, our fits result in dust masses slightly lower than those found in the literature, by a factor of less than 2. In the SMC, we find dust masses in agreement with previous studies.
Chaudhuri, Shomesh E; Merfeld, Daniel M
2013-03-01
Psychophysics generally relies on estimating a subject's ability to perform a specific task as a function of an observed stimulus. For threshold studies, the fitted functions are called psychometric functions. While fitting psychometric functions to data acquired using adaptive sampling procedures (e.g., "staircase" procedures), investigators have encountered a bias in the spread ("slope" or "threshold") parameter that has been attributed to the serial dependency of the adaptive data. Using simulations, we confirm this bias for cumulative Gaussian parametric maximum likelihood fits on data collected via adaptive sampling procedures, and then present a bias-reduced maximum likelihood fit that substantially reduces the bias without reducing the precision of the spread parameter estimate and without reducing the accuracy or precision of the other fit parameters. As a separate topic, we explain how to implement this bias reduction technique using generalized linear model fits as well as other numeric maximum likelihood techniques such as the Nelder-Mead simplex. We then provide a comparison of the iterative bootstrap and observed information matrix techniques for estimating parameter fit variance from adaptive sampling procedure data sets. The iterative bootstrap technique is shown to be slightly more accurate; however, the observed information technique executes in a small fraction (0.005 %) of the time required by the iterative bootstrap technique, which is an advantage when a real-time estimate of parameter fit variance is required.
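A minimal maximum-likelihood fit of a cumulative Gaussian psychometric function, here with method-of-constant-stimuli data rather than adaptive sampling (so the spread bias discussed above does not arise); the stimulus levels, trial counts, and true parameters are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(4)
# Stimulus levels (13 levels x 40 trials) and simulated binary responses
x = np.repeat(np.linspace(-3, 3, 13), 40)
mu_true, sigma_true = 0.2, 1.0
y = rng.uniform(size=x.size) < norm.cdf(x, loc=mu_true, scale=sigma_true)

def neg_log_lik(params):
    # mu = threshold, sigma = spread of the psychometric function
    mu, sigma = params
    if sigma <= 0:
        return np.inf
    p = np.clip(norm.cdf(x, loc=mu, scale=sigma), 1e-9, 1 - 1e-9)
    return -np.sum(np.where(y, np.log(p), np.log(1 - p)))

fit = minimize(neg_log_lik, x0=[0.0, 2.0], method="Nelder-Mead")
mu_hat, sigma_hat = fit.x
```

With staircase data, the serial dependence of the sampled levels biases sigma_hat; the bias-reduction technique in the paper modifies the likelihood stage above, and the same fit can be run through a generalized linear model or Nelder-Mead, as the authors note.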
The validation of a generalized Hooke's law for coronary arteries.
Wang, Chong; Zhang, Wei; Kassab, Ghassan S
2008-01-01
The exponential form of constitutive model is widely used in biomechanical studies of blood vessels. There are two main issues, however, with this model: 1) the curve fits of experimental data are not always satisfactory, and 2) the material parameters may be oversensitive. A new type of strain measure in a generalized Hooke's law for blood vessels was recently proposed by our group to address these issues. The new model has one nonlinear parameter and six linear parameters. In this study, the stress-strain equation is validated by fitting the model to experimental data of porcine coronary arteries. Material constants of left anterior descending artery and right coronary artery for the Hooke's law were computed with a separable nonlinear least-squares method with an excellent goodness of fit. A parameter sensitivity analysis shows that the stability of material constants is improved compared with the exponential model and a biphasic model. A boundary value problem was solved to demonstrate that the model prediction can match the measured arterial deformation under experimental loading conditions. The validated constitutive relation will serve as a basis for the solution of various boundary value problems of cardiovascular biomechanics.
METHODOLOGIES FOR CALIBRATION AND PREDICTIVE ANALYSIS OF A WATERSHED MODEL
The use of a fitted-parameter watershed model to address water quantity and quality management issues requires that it be calibrated under a wide range of hydrologic conditions. However, rarely does model calibration result in a unique parameter set. Parameter nonuniqueness can l...
Jastrzembski, Tiffany S.; Charness, Neil
2009-01-01
The authors estimate weighted mean values for nine information processing parameters for older adults using the Card, Moran, and Newell (1983) Model Human Processor model. The authors validate a subset of these parameters by modeling two mobile phone tasks using two different phones and comparing model predictions to a sample of younger (N = 20; Mage = 20) and older (N = 20; Mage = 69) adults. Older adult models fit keystroke-level performance at the aggregate grain of analysis extremely well (R = 0.99) and produced equivalent fits to previously validated younger adult models. Critical path analyses highlighted points of poor design as a function of cognitive workload, hardware/software design, and user characteristics. The findings demonstrate that estimated older adult information processing parameters are valid for modeling purposes, can help designers understand age-related performance using existing interfaces, and may support the development of age-sensitive technologies. PMID:18194048
Moran, Richard; Smith, Joshua H; García, José J
2014-11-28
The mechanical properties of human brain tissue are of interest because of their use in understanding brain trauma and in developing therapeutic treatments and procedures. To represent the behavior of the tissue, we have developed hyperelastic mechanical models whose parameters are fitted in accordance with experimental test results. However, most studies available in the literature have fitted parameters with data from a single type of loading, such as tension, compression, or shear. Recently, Jin et al. (Journal of Biomechanics 46:2795-2801, 2013) reported data from ex vivo tests of human brain tissue under tension, compression, and shear loading using four strain rates and four different brain regions. However, they do not report parameters of energy functions that can be readily used in finite element simulations. To represent the tissue behavior for the quasi-static loading conditions, we aimed to determine the best fit of the hyperelastic parameters of the hyperfoam, Ogden, and polynomial strain energy functions available in ABAQUS for the low strain rate data, while simultaneously considering all three loading modes. We used an optimization process conducted in MATLAB, iteratively calling three finite element models developed in ABAQUS that represent the three loadings. Results showed a relatively good fit to experimental data in all loading modes using two terms in the energy functions. Values for the shear modulus obtained in this analysis (897-1653 Pa) are in the range of those presented in other studies. These energy-function parameters can be used in brain tissue simulations using finite element models.
FracFit: A Robust Parameter Estimation Tool for Anomalous Transport Problems
NASA Astrophysics Data System (ADS)
Kelly, J. F.; Bolster, D.; Meerschaert, M. M.; Drummond, J. D.; Packman, A. I.
2016-12-01
Anomalous transport cannot be adequately described with classical Fickian advection-dispersion equations (ADE). Rather, fractional calculus models may be used, which capture non-Fickian behavior (e.g. skewness and power-law tails). FracFit is a robust parameter estimation tool based on space- and time-fractional models of anomalous transport. Currently, four fractional models are supported: 1) space fractional advection-dispersion equation (sFADE), 2) time-fractional dispersion equation with drift (TFDE), 3) fractional mobile-immobile equation (FMIE), and 4) tempered fractional mobile-immobile equation (TFMIE); additional models may be added in the future. Model solutions using pulse initial conditions and continuous injections are evaluated using stable distribution PDFs and CDFs or subordination integrals. Parameter estimates are extracted from measured breakthrough curves (BTCs) using a weighted nonlinear least squares (WNLS) algorithm. Optimal weights for BTCs for pulse initial conditions and continuous injections are presented, facilitating the estimation of power-law tails. Two sample applications are analyzed: 1) continuous injection laboratory experiments using natural organic matter and 2) pulse injection BTCs in the Selke river. Model parameters are compared across models and goodness-of-fit metrics are presented, assisting model evaluation. The sFADE and time-fractional models are compared using space-time duality (Baeumer et al., 2009), which links the two paradigms.
NASA Astrophysics Data System (ADS)
Ngo, N. H.; Hartmann, J.-M.
2017-12-01
We propose a strategy to generate parameters of the Hartmann-Tran profile (HTp) by simultaneously using first principle calculations and broadening coefficients deduced from Voigt/Lorentz fits of experimental spectra. We start from reference absorptions simulated, at pressures between 10 and 950 Torr, using the HTp with parameters recently obtained from high quality experiments for the P(1) and P(17) lines of the 3-0 band of CO in He, Ar and Kr. Using requantized Classical Molecular Dynamics Simulations (rCMDS), we calculate spectra under the same conditions. We then correct them using a single parameter deduced from Lorentzian fits of both reference and calculated absorptions at a single pressure. The corrected rCMDS spectra are then simultaneously fitted using the HTp, yielding the parameters of this model and associated spectra. Comparisons between the retrieved and input (reference) HTp parameters show a quite satisfactory agreement. Furthermore, differences between the reference spectra and those computed with the HT model fitted to the corrected-rCMDS predictions are much smaller than those obtained with a Voigt line shape. Their full amplitudes are in most cases smaller than 1%, and often below 0.5%, of the peak absorption. This opens the route to completing spectroscopic databases using calculations and the very numerous broadening coefficients available from Voigt fits of laboratory spectra.
Stochastic approach to data analysis in fluorescence correlation spectroscopy.
Rao, Ramachandra; Langoju, Rajesh; Gösch, Michael; Rigler, Per; Serov, Alexandre; Lasser, Theo
2006-09-21
Fluorescence correlation spectroscopy (FCS) has emerged as a powerful technique for measuring low concentrations of fluorescent molecules and their diffusion constants. In FCS, the experimental data is conventionally fit using standard local search techniques, for example, the Marquardt-Levenberg (ML) algorithm. A prerequisite for these categories of algorithms is sound knowledge of the behavior of fit parameters and, in most cases, good initial guesses for accurate fitting; otherwise they lead to fitting artifacts. For known fit models and with user experience about the behavior of fit parameters, these local search algorithms work extremely well. However, for heterogeneous systems, or where automated data analysis is a prerequisite, there is a need for a procedure that treats FCS data fitting as a black box and generates reliable fit parameters with accuracy for the chosen model in hand. We present a computational approach to analyze FCS data by means of a stochastic algorithm for global search called PGSL, an acronym for Probabilistic Global Search Lausanne. This algorithm does not require any initial guesses and does the fitting by searching for solutions through global sampling. It is flexible and at the same time computationally fast for multiparameter evaluations. We present a performance study of PGSL for fits of a two-component model with a triplet state. The statistical study and the goodness-of-fit criterion for PGSL are also presented. The robustness of PGSL on noisy experimental data for parameter estimation is also verified. We further extend the scope of PGSL by a hybrid analysis wherein the output of PGSL is fed as initial guesses to ML. Reliability studies show that PGSL, and the hybrid combination of both, perform better than ML for various thresholds of the mean-squared error (MSE).
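The global-then-local hybrid can be sketched with SciPy's differential evolution (a stochastic global search standing in for PGSL, which is not publicly packaged) followed by a local refinement. The two-component FCS model, parameter values, and bounds are illustrative assumptions; the triplet term is omitted for brevity.

```python
import numpy as np
from scipy.optimize import differential_evolution, minimize

def g_two_comp(tau, a1, t1, a2, t2):
    # Two-component FCS autocorrelation (2D form, triplet term omitted)
    return a1 / (1 + tau / t1) + a2 / (1 + tau / t2)

tau = np.logspace(-6, 0, 80)                      # lag times (s)
g_meas = g_two_comp(tau, 0.6, 1e-4, 0.4, 5e-3)    # stands in for measured data

def sse(p):
    # Diffusion times are searched in log10 space to cover decades evenly
    a1, t1, a2, t2 = p[0], 10 ** p[1], p[2], 10 ** p[3]
    return np.sum((g_two_comp(tau, a1, t1, a2, t2) - g_meas) ** 2)

bounds = [(0, 2), (-6, -1), (0, 2), (-6, -1)]
coarse = differential_evolution(sse, bounds, seed=0)   # global stage, no initial guess
fine = minimize(sse, coarse.x, method="Nelder-Mead")   # local refinement (hybrid step)
t_fast, t_slow = sorted(10 ** fine.x[[1, 3]])
```

The global stage removes the need for initial guesses; feeding its result into a local optimizer mirrors the PGSL-to-ML hybrid described above.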
Thermodynamic assessment of Ag–Cu–In
Muzzillo, Christopher P.; Anderson, Tim
2018-01-16
The Ag-Cu-In thermodynamic material system is of interest for brazing alloys and chalcopyrite thin-film photovoltaics. To advance these applications, Ag-Cu-In was assessed and a Calphad model was developed. Binary Ag-Cu and Cu-In parameters were taken from previous assessments, while Ag-In was re-assessed. Structure-based models were employed for β-bcc(A2)-Ag3In, γ-Ag9In4, and AgIn2 to obtain a good fit to enthalpy, phase boundary, and invariant reaction data for Ag-In. Ternary Ag-Cu-In parameters were optimized to achieve excellent fit to activity, enthalpy, and extensive phase equilibrium data. Relative to the previous Ag-Cu-In assessment, the fit was improved while fewer parameters were used.
Thermodynamic assessment of Ag–Cu–In
DOE Office of Scientific and Technical Information (OSTI.GOV)
Muzzillo, Christopher P.; Anderson, Tim
The Ag-Cu-In thermodynamic material system is of interest for brazing alloys and chalcopyrite thin-film photovoltaics. To advance these applications, Ag-Cu-In was assessed and a Calphad model was developed. Binary Ag-Cu and Cu-In parameters were taken from previous assessments, while Ag-In was re-assessed. Structure-based models were employed for ..beta..-bcc(A2)-Ag 3In, ..gamma..-Ag 9In 4, and AgIn 2 to obtain good fit to enthalpy, phase boundary, and invariant reaction data for Ag-In. Ternary Ag-Cu-In parameters were optimized to achieve excellent fit to activity, enthalpy, and extensive phase equilibrium data. Relative to the previous Ag-Cu-In assessment, fit was improved while fewer parameters were used.
Parameterizing sorption isotherms using a hybrid global-local fitting procedure.
Matott, L Shawn; Singh, Anshuman; Rabideau, Alan J
2017-05-01
Predictive modeling of the transport and remediation of groundwater contaminants requires an accurate description of the sorption process, which is usually provided by fitting an isotherm model to site-specific laboratory data. Commonly used calibration procedures, listed in order of increasing sophistication, include: trial-and-error, linearization, non-linear regression, global search, and hybrid global-local search. Given the considerable variability in fitting procedures applied in published isotherm studies, we investigated the importance of algorithm selection through a series of numerical experiments involving 13 previously published sorption datasets. These datasets, considered representative of the state of the art in isotherm experiments, had been previously analyzed using trial-and-error, linearization, or non-linear regression methods. The isotherm expressions were re-fit using a 3-stage hybrid global-local search procedure (i.e. global search using particle swarm optimization, followed by Powell's derivative-free local search method and Gauss-Marquardt-Levenberg non-linear regression). The re-fitted expressions were then compared to previously published fits in terms of the optimized weighted sum of squared residuals (WSSR) fitness function, the final estimated parameters, and the influence on contaminant transport predictions, where easily computed concentration-dependent contaminant retardation factors served as a surrogate measure of likely transport behavior. Results suggest that many of the previously published calibrated isotherm parameter sets were local minima. In some cases, the updated hybrid global-local search yielded order-of-magnitude reductions in the fitness function. In particular, of the candidate isotherms, the Polanyi-type models were most likely to benefit from the use of the hybrid fitting procedure.
In some cases, improvements in fitness function were associated with slight (<10%) changes in parameter values, but in other cases significant (>50%) changes in parameter values were noted. Despite these differences, the influence of isotherm misspecification on contaminant transport predictions was quite variable and difficult to predict from inspection of the isotherms. Copyright © 2017 Elsevier B.V. All rights reserved.
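A minimal sketch of the hybrid global-local strategy described above, using a Freundlich isotherm and synthetic data (the study's Polanyi-type models and exact 3-stage procedure are not reproduced here; differential evolution stands in for particle swarm optimization, and a single local least-squares pass stands in for the Powell and Gauss-Marquardt-Levenberg stages):

```python
import numpy as np
from scipy.optimize import differential_evolution, least_squares

rng = np.random.default_rng(1)

def freundlich(C, Kf, n):
    """Freundlich isotherm: sorbed concentration q as a function of aqueous C."""
    return Kf * C ** n

C = np.array([0.1, 0.5, 1.0, 5.0, 10.0, 50.0])          # aqueous concentrations
q_obs = freundlich(C, 12.0, 0.45) * rng.normal(1, 0.03, C.size)
w = 1.0 / q_obs                                          # observation weights

def wssr(p):
    """Weighted sum of squared residuals, the fitness function of the paper."""
    return np.sum((w * (freundlich(C, *p) - q_obs)) ** 2)

# Stage 1: global search over broad parameter bounds
g = differential_evolution(wssr, bounds=[(0.01, 100), (0.01, 1.0)], seed=1)
# Stage 2: local non-linear regression started from the global optimum
loc = least_squares(lambda p: w * (freundlich(C, *p) - q_obs), x0=g.x)
Kf, n = loc.x
```

Starting the local stage from the global optimum is what protects against the local minima the study found in previously published fits.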
Using geometry to improve model fitting and experiment design for glacial isostasy
NASA Astrophysics Data System (ADS)
Kachuck, S. B.; Cathles, L. M.
2017-12-01
As scientists we routinely deal with models, which are geometric objects at their core - the manifestation of a set of parameters as predictions for comparison with observations. When the number of observations exceeds the number of parameters, the model is a hypersurface (the model manifold) in the space of all possible predictions. The object of parameter fitting is to find the parameters corresponding to the point on the model manifold as close to the vector of observations as possible. But the geometry of the model manifold can make this difficult. Because the manifold can curve, end abruptly (where, for instance, parameters go to zero or infinity), and stretch and compress the parameters together in unexpected directions, it can be difficult to design algorithms that efficiently adjust the parameters. Even at the optimal point on the model manifold, parameters might not be individually resolved well enough to be applied to new contexts. In our context of glacial isostatic adjustment, models of sparse surface observations have a broad spread of sensitivity to mixtures of the earth's viscous structure and the surface distribution of ice over the last glacial cycle. This impedes precise statements about crucial geophysical processes, such as the planet's thermal history or the climates that controlled the ice age. We employ geometric methods developed in the field of systems biology to improve the efficiency of fitting (geodesic-accelerated Levenberg-Marquardt) and to identify the maximally informative sources of additional data to make better predictions of sea levels and ice configurations (optimal experiment design). We demonstrate this in particular in reconstructions of the Barents Sea Ice Sheet, where we show that only certain kinds of data from the central Barents have the power to distinguish between proposed models.
Shot model parameters for Cygnus X-1 through phase portrait fitting
NASA Technical Reports Server (NTRS)
Lochner, James C.; Swank, J. H.; Szymkowiak, A. E.
1991-01-01
Shot models for systems having about 1/f power density spectrum are developed by utilizing a distribution of shot durations. Parameters of the distribution are determined by fitting the power spectrum either with analytic forms for the spectrum of a shot model with a given shot profile, or with the spectrum derived from numerical realizations of trial shot models. The shot fraction is specified by fitting the phase portrait, which is a plot of intensity at a given time versus intensity at a delayed time and in principle is sensitive to different shot profiles. These techniques have been extensively applied to the X-ray variability of Cygnus X-1, using HEAO 1 A-2 and an Exosat ME observation. The power spectra suggest models having characteristic shot durations lasting from milliseconds to a few seconds, while the phase portrait fits give shot fractions of about 50 percent. Best fits to the portraits are obtained if the amplitude of the shot is a power-law function of the duration of the shot. These fits prefer shots having a symmetric exponential rise and decay. Results are interpreted in terms of a distribution of magnetic flares in the accretion disk.
Temperature-viscosity models reassessed.
Peleg, Micha
2017-05-04
The temperature effect on the viscosity of liquid and semi-liquid foods has traditionally been described by the Arrhenius equation, a few other mathematical models, and more recently by the WLF and VTF (or VFT) equations. The essence of the Arrhenius equation is that the logarithm of the viscosity is proportional to the reciprocal of the absolute temperature and governed by a single parameter, namely, the energy of activation. However, if the absolute temperature in K in the Arrhenius equation is replaced by T + b, where both T and the adjustable b are in °C, the result is a two-parameter model with superior fit to experimental viscosity-temperature data. This modified version of the Arrhenius equation is also mathematically equivalent to the WLF and VTF equations, which are known to be equivalent to each other. Thus, despite their dissimilar appearances, all three equations are essentially the same model, and when used to fit experimental temperature-viscosity data they render exactly the same very high regression coefficient. It is shown that three new hybrid two-parameter mathematical models, whose formulation bears little resemblance to any of the conventional models, can also have excellent fit with r² ≈ 1. This is demonstrated by comparing the various models' regression coefficients for published viscosity-temperature relationships of 40% sucrose solution, soybean oil, and 70°Bx pear juice concentrate at different temperature ranges. Also compared are reconstructed temperature-viscosity curves using parameters calculated directly from 2 or 3 data points and fitted curves obtained by nonlinear regression using a larger number of experimental viscosity measurements.
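A small sketch of the modified Arrhenius model described above, η = A·exp(B/(T + b)) with T in °C. The parameter values are illustrative, not taken from the paper, and the fit is done on log-viscosity for numerical stability:

```python
import numpy as np
from scipy.optimize import curve_fit

def log_modified_arrhenius(T, lnA, B, b):
    # T in deg C; replacing absolute T (K) by T + b makes this model
    # mathematically equivalent to the WLF and VTF forms
    return lnA + B / (T + b)

T = np.linspace(10, 80, 15)                       # temperatures (deg C)
eta = 1e-4 * np.exp(900.0 / (T + 140.0))          # illustrative viscosities (Pa*s)

p, _ = curve_fit(log_modified_arrhenius, T, np.log(eta), p0=[-7.0, 500.0, 100.0])
pred = log_modified_arrhenius(T, *p)
resid = np.log(eta) - pred
r2 = 1 - np.sum(resid ** 2) / np.sum((np.log(eta) - np.log(eta).mean()) ** 2)
```

With self-consistent data the regression coefficient is essentially 1, illustrating the paper's point that several dissimilar-looking two-parameter models can fit viscosity-temperature data equally well.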
Fitting ARMA Time Series by Structural Equation Models.
ERIC Educational Resources Information Center
van Buuren, Stef
1997-01-01
This paper outlines how the stationary ARMA (p,q) model (G. Box and G. Jenkins, 1976) can be specified as a structural equation model. Maximum likelihood estimates for the parameters in the ARMA model can be obtained by software for fitting structural equation models. The method is applied to three problem types. (SLD)
NASA Astrophysics Data System (ADS)
Mattei, G.; Ahluwalia, A.
2018-04-01
We introduce a new function, the apparent elastic modulus strain-rate spectrum E_app(ε̇), for the derivation of lumped parameter constants for Generalized Maxwell (GM) linear viscoelastic models from stress-strain data obtained at various compressive strain rates (ε̇). The E_app(ε̇) function was derived using the tangent modulus function obtained from the GM model stress-strain response to a constant-ε̇ input. Material viscoelastic parameters can be rapidly derived by fitting experimental E_app data obtained at different strain rates to the E_app(ε̇) function. This single-curve fitting returns viscoelastic constants similar to those of the original epsilon dot method, which is based on a multi-curve global fitting procedure with shared parameters. Its low computational cost permits quick and robust identification of viscoelastic constants even when a large number of strain rates or replicates per strain rate are considered. This method is particularly suited for the analysis of bulk compression and nano-indentation data of soft (bio)materials.
NASA Astrophysics Data System (ADS)
Domanskyi, Sergii; Schilling, Joshua E.; Gorshkov, Vyacheslav; Libert, Sergiy; Privman, Vladimir
2016-09-01
We develop a theoretical approach that uses physiochemical kinetics modelling to describe cell population dynamics upon progression of viral infection in cell culture, which results in cell apoptosis (programmed cell death) and necrosis (direct cell death). Several model parameters necessary for computer simulation were determined by reviewing and analyzing available published experimental data. By comparing experimental data to computer modelling results, we identify the parameters that are the most sensitive to the measured system properties and allow for the best data fitting. Our model allows extraction of parameters from experimental data and also has predictive power. Using the model we describe interesting time-dependent quantities that were not directly measured in the experiment and identify correlations among the fitted parameter values. Numerical simulation of viral infection progression is done by a rate-equation approach resulting in a system of "stiff" equations, which are solved by using a novel variant of the stochastic ensemble modelling approach. The latter was originally developed for coupled chemical reactions.
Fast and accurate fitting and filtering of noisy exponentials in Legendre space.
Bao, Guobin; Schild, Detlev
2014-01-01
The parameters of experimentally obtained exponentials are usually found by least-squares fitting methods. Essentially, this is done by minimizing the mean squares sum of the differences between the data, most often a function of time, and a parameter-defined model function. Here we delineate a novel method where the noisy data are represented and analyzed in the space of Legendre polynomials. This is advantageous in several respects. First, parameter retrieval in the Legendre domain is typically two orders of magnitude faster than direct fitting in the time domain. Second, data fitting in a low-dimensional Legendre space yields estimates for amplitudes and time constants which are, on the average, more precise compared to least-squares-fitting with equal weights in the time domain. Third, the Legendre analysis of two exponentials gives satisfactory estimates in parameter ranges where least-squares-fitting in the time domain typically fails. Finally, filtering exponentials in the domain of Legendre polynomials leads to marked noise removal without the phase shift characteristic for conventional lowpass filters.
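The Legendre-domain idea can be sketched as follows: the noisy trace and the candidate exponential are both projected onto a low-order Legendre basis, and the parameters are fitted by matching coefficients in that low-dimensional space. This is an illustrative reconstruction, not the authors' code; time is mapped onto [-1, 1] and the expansion order is chosen arbitrarily.

```python
import numpy as np
from numpy.polynomial import legendre as L
from scipy.optimize import least_squares

rng = np.random.default_rng(2)

t = np.linspace(0, 1, 500)
x = 2 * t - 1                                  # map time onto [-1, 1]
data = 3.0 * np.exp(-t / 0.2) + rng.normal(0, 0.1, t.size)

ORDER = 10                                     # low-dimensional Legendre space
c_data = L.legfit(x, data, ORDER)              # noisy data -> Legendre coefficients

def resid(p):
    """Compare model and data in coefficient space, not the time domain."""
    a, tau = p
    model = a * np.exp(-t / tau)
    return L.legfit(x, model, ORDER) - c_data

fit = least_squares(resid, x0=[1.0, 0.5], bounds=([0.0, 1e-3], [100.0, 10.0]))
a_fit, tau_fit = fit.x

# Filtering: truncating the expansion and transforming back suppresses noise
smoothed = L.legval(x, c_data)
```

Because only ORDER+1 coefficients are compared per iteration instead of 500 time samples, each residual evaluation is cheap, echoing the speed advantage the paper reports.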
Properties of Martian Hematite at Meridiani Planum by Simultaneous Fitting of Mars Mossbauer Spectra
NASA Technical Reports Server (NTRS)
Agresti, D. G.; Fleischer, I.; Klingelhoefer, G.; Morris, R. V.
2010-01-01
Mössbauer spectrometers [1] on the two Mars Exploration Rovers (MERs) have been making measurements of surface rocks and soils since January 2004, recording spectra in 10-K-wide temperature bins ranging from 180 K to 290 K. Initial analyses focused on modeling individual spectra directly as acquired or, to increase statistical quality, as sums of single-rock or soil spectra over temperature or as sums over similar rock or soil type [2, 3]. Recently, we have begun to apply simultaneous fitting procedures [4] to Mars Mössbauer data [5-7]. During simultaneous fitting (simfitting), many spectra are modeled similarly and fit together to a single convergence criterion. A satisfactory simfit with parameter values consistent among all spectra is more likely than many single-spectrum fits of the same data because fitting parameters are shared among multiple spectra in the simfit. Consequently, the number of variable parameters, as well as the correlations among them, is greatly reduced. Here we focus on applications of simfitting to interpret the hematite signature in Mössbauer spectra acquired at Meridiani Planum, results of which were reported in [7]. The Spectra. We simfit two sets of spectra with large hematite content [7]: 1) 60 rock outcrop spectra from Eagle Crater; and 2) 46 spectra of spherule-rich lag deposits (Table 1). Spectra of 10 different targets acquired at several distinct temperatures are included in each simfit set. In the table, each Sol (martian day) represents a different target, NS is the number of spectra for a given sol, and NT is the number of spectra for a given temperature. The spectra are indexed to facilitate definition of parameter relations and constraints. An example spectrum is shown in Figure 1, together with a typical fitting model. Results. We have shown that simultaneous fitting is effective in analyzing a large set of related MER Mössbauer spectra.
By using appropriate constraints, we derive target-specific quantities and the temperature dependence of certain parameters. By examining different fitting models, we demonstrate an improved fit for martian hematite modeled with two sextets rather than as a single sextet, and show that outcrop and spherule hematite are distinct. For outcrop, the weaker sextet indicates a Morin transition typical of well-crystallized and chemically pure hematite, while most of the outcrop hematite remains in a weakly ferromagnetic state at all temperatures. For spherule spectra, both sextets are consistent with weakly ferromagnetic hematite with no Morin transition. For both hematites, there is evidence for a range of particle sizes.
Cognitive Models of Risky Choice: Parameter Stability and Predictive Accuracy of Prospect Theory
ERIC Educational Resources Information Center
Glockner, Andreas; Pachur, Thorsten
2012-01-01
In the behavioral sciences, a popular approach to describe and predict behavior is cognitive modeling with adjustable parameters (i.e., which can be fitted to data). Modeling with adjustable parameters allows, among other things, measuring differences between people. At the same time, parameter estimation also bears the risk of overfitting. Are…
NASA Astrophysics Data System (ADS)
Kiamehr, Ramin
2016-04-01
A one-arc-second high-resolution version of the SRTM model was recently published for Iran in the US Geological Survey database. Digital Elevation Models (DEMs) are widely used by geoscientists in different disciplines and applications. A DEM is essential input in the geoid computation procedure, e.g., to determine the topographic, downward continuation (DWC) and atmospheric corrections. It can also be used in road location and design in civil engineering and in hydrological analysis. However, a DEM is only a model of the elevation surface and is subject to errors; the most significant errors can come from bias in the height datum. Moreover, the accuracy of a DEM is usually published only in a global sense, and it is important to estimate its accuracy in the area of interest before using it. One of the best ways to obtain a reasonable indication of the accuracy of a DEM is to compare its heights against precise national GPS/levelling data, by determining the root-mean-square (RMS) error of the fit between the DEM and levelling heights. The errors in the DEM can be approximated by different kinds of functions in order to fit the DEM to a set of GPS/levelling data using least-squares adjustment. In the current study, several models, ranging from a simple linear regression to a seven-parameter similarity transformation model, are used in the fitting procedure. The seven-parameter model gives the best fit, with the minimum standard deviation, for all selected DEMs in the study area. Based on 35 precise GPS/levelling points, we obtain an RMS of 5.5 m for the seven-parameter fit of the SRTM DEM. A corrective surface is generated from the transformation parameters and added to the original SRTM model. The fit of the combined model is then evaluated against independent GPS/levelling data. The result shows a great improvement in the absolute accuracy of the model, with a standard deviation of 3.4 m.
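The corrective-surface idea can be sketched with a classic 4-parameter datum-fit model (a reduced form of the seven-parameter similarity transformation used in the study). All control-point coordinates, coefficients, and height differences below are synthetic:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical GPS/levelling control points (lat, lon in radians) and
# DEM-minus-levelling height differences: a datum bias plus random noise
lat = np.radians(rng.uniform(30, 34, 35))
lon = np.radians(rng.uniform(48, 54, 35))
true = np.array([4.0, 1.5, -2.0, 0.8])           # datum-shift coefficients (m)

def design(lat, lon):
    """Design matrix of the classic 4-parameter datum-fit model."""
    return np.column_stack([np.ones_like(lat),
                            np.cos(lat) * np.cos(lon),
                            np.cos(lat) * np.sin(lon),
                            np.sin(lat)])

A = design(lat, lon)
dh = A @ true + rng.normal(0, 0.5, lat.size)      # observed differences (m)

# Least-squares adjustment; A @ coef is the corrective surface
coef, *_ = np.linalg.lstsq(A, dh, rcond=None)
resid = dh - A @ coef
rms_before = np.sqrt(np.mean(dh ** 2))
rms_after = np.sqrt(np.mean(resid ** 2))
```

As in the paper, adding the fitted corrective surface back to the DEM absorbs the datum bias, and the RMS against the control points drops accordingly.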
Liu, Feng; Tai, An; Lee, Percy; Biswas, Tithi; Ding, George X.; El Naqa, Isaam; Grimm, Jimm; Jackson, Andrew; Kong, Feng-Ming (Spring); LaCouture, Tamara; Loo, Billy; Miften, Moyed; Solberg, Timothy; Li, X Allen
2017-01-01
Purpose: To analyze pooled clinical data using different radiobiological models and to understand the relationship between biologically effective dose (BED) and tumor control probability (TCP) for stereotactic body radiotherapy (SBRT) of early-stage non-small cell lung cancer (NSCLC). Methods and Materials: The clinical data of 1-, 2-, 3-, and 5-year actuarial or Kaplan-Meier TCP from 46 selected studies were collected for SBRT of NSCLC in the literature. The TCP data were separated for Stage T1 and T2 tumors if possible, otherwise collected for combined stages. BED was calculated at isocenters using six radiobiological models. For each model, the independent model parameters were determined from a fit to the TCP data using the least chi-square (χ²) method, with either one set of parameters regardless of tumor stage or two sets for T1 and T2 tumors separately. Results: The fits to the clinical data yield consistent results, with large α/β ratios of about 20 Gy for all models investigated. The regrowth model, which accounts for tumor repopulation and heterogeneity, leads to a better fit to the data; the fits of the other five models were indistinguishable from one another. Based on the fitted parameters, the models predict that T2 tumors require about an additional 1 Gy of physical dose at the isocenter per fraction (≤5 fractions) to achieve the optimal TCP when compared to T1 tumors. Conclusion: This systematic analysis of a large set of published clinical data using different radiobiological models shows that local TCP for SBRT of early-stage NSCLC has a strong dependence on BED, with large α/β ratios of about 20 Gy. The six models predict that a BED (calculated with α/β of 20) of 90 Gy is sufficient to achieve TCP ≥ 95%. Among the models considered, the regrowth model leads to a better fit to the clinical data. PMID:27871671
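The BED figures discussed above follow from the standard linear-quadratic expression BED = n·d·(1 + d/(α/β)). A quick check with the paper's fitted α/β ≈ 20 Gy (the fraction schemes are illustrative, not from the study):

```python
def bed(n_fractions, dose_per_fraction, alpha_beta):
    """Biologically effective dose (Gy) in the linear-quadratic model."""
    return n_fractions * dose_per_fraction * (1 + dose_per_fraction / alpha_beta)

# A typical SBRT scheme clears the ~90 Gy BED threshold associated with TCP >= 95%...
bed_sbrt = bed(3, 18.0, 20.0)   # 3 x 18 Gy -> 102.6 Gy
# ...while a conventionally fractionated scheme does not
bed_conv = bed(30, 2.0, 20.0)   # 30 x 2 Gy -> 66.0 Gy
```

This illustrates why hypofractionated SBRT regimens, with their large dose per fraction, reach much higher BED than conventional fractionation for the same or lower total physical dose.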
Mathematical Models for the Apparent Mass of the Seated Human Body Exposed to Vertical Vibration
NASA Astrophysics Data System (ADS)
Wei, L.; Griffin, M. J.
1998-05-01
Alternative mathematical models of the vertical apparent mass of the seated human body are developed. The optimum parameters of four models (two single-degree-of-freedom models and two two-degree-of-freedom models) are derived from the mean measured apparent masses of 60 subjects (24 men, 24 women, 12 children) previously reported. The best fits were obtained by fitting the phase data with single-degree-of-freedom and two-degree-of-freedom models having rigid support structures. For these two models, curve fitting was performed on each of the 60 subjects (so as to obtain optimum model parameters for each subject), for the averages of each of the three groups of subjects, and for the entire group of subjects. The values obtained are tabulated. Use of a two-degree-of-freedom model provided a better fit to the phase of the apparent mass at frequencies greater than about 8 Hz and an improved fit to the modulus of the apparent mass at frequencies around 5 Hz. It is concluded that the two-degree-of-freedom model provides an apparent mass similar to that of the human body, but this does not imply that the body moves in the same manner as the masses in this optimized two-degree-of-freedom model.
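For intuition, the simpler of the model families above (a single-degree-of-freedom mass-spring-damper on a rigid support) has a closed-form apparent mass at the driving point, M(ω) = m(k + iωc)/(k − mω² + iωc). The sketch below evaluates its modulus and phase with plausible, illustrative seated-body values, not the optimized parameters tabulated in the paper:

```python
import numpy as np

def apparent_mass_sdof(f, m, fn, zeta):
    """Apparent mass of a single-degree-of-freedom model on a rigid support.

    f: frequency (Hz); m: moving mass (kg); fn: natural frequency (Hz);
    zeta: damping ratio. Returns the complex apparent mass F/a at the base.
    """
    w = 2 * np.pi * f
    wn = 2 * np.pi * fn
    k = m * wn ** 2                 # stiffness from the natural frequency
    c = 2 * zeta * m * wn           # viscous damping from the damping ratio
    return m * (k + 1j * w * c) / (k - m * w ** 2 + 1j * w * c)

f = np.linspace(0.5, 20, 400)
M = apparent_mass_sdof(f, m=55.0, fn=5.0, zeta=0.4)   # illustrative values
modulus, phase = np.abs(M), np.angle(M)
```

At low frequency the modulus approaches the static mass, and it peaks near the ~5 Hz resonance characteristic of seated-body apparent mass, the region where the paper reports the two-degree-of-freedom model improving the fit.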
NASA Astrophysics Data System (ADS)
Takahashi, Takuya; Sugiura, Junnnosuke; Nagayama, Kuniaki
2002-05-01
To investigate the role hydration plays in the electrostatic interactions of proteins, the time-averaged electrostatic potential of the B1 domain of protein G in an aqueous solution was calculated with full atomic molecular dynamics simulations that explicitly consider every atom (i.e., an all atom model). This all atom calculated potential was compared with the potential obtained from an electrostatic continuum model calculation. In both cases, the charge-screening effect was fairly well formulated with an effective relative dielectric constant which increased linearly with increasing charge-charge distance. This simulated linear dependence agrees with the experimentally determined linear relation proposed by Pickersgill. Cut-off approximations for Coulomb interactions failed to reproduce this linear relation. The correlation between the all atom model and the continuum model was better than the respective correlations obtained by linear fitting to the two models. This confirms that the continuum model is better at treating the complicated shapes of protein conformations than the simple linear fitting empirical model. We also tried a sigmoid fitting empirical model in addition to the linear one. When the weights of all data were treated equally, the sigmoid model, which requires two fitting parameters, fit the results of both the all atom and the continuum models less accurately than the linear model, which requires only one fitting parameter. When potential values were chosen as weighting factors, the fitting error of the sigmoid model became smaller, and the slope of both linear fitting curves became smaller. This suggests the screening effect of an aqueous medium within a short range, where potential values are relatively large, is smaller than that expected from the linear fitting curve, whose slope is almost 4.
To investigate the linear increase of the effective relative dielectric constant, the Poisson equation of a low-dielectric sphere in a high-dielectric medium was solved and charges distributed near the molecular surface were indicated as leading to the apparent linearity.
Cognitive models of risky choice: parameter stability and predictive accuracy of prospect theory.
Glöckner, Andreas; Pachur, Thorsten
2012-04-01
In the behavioral sciences, a popular approach to describe and predict behavior is cognitive modeling with adjustable parameters (i.e., which can be fitted to data). Modeling with adjustable parameters allows, among other things, measuring differences between people. At the same time, parameter estimation also bears the risk of overfitting. Are individual differences as measured by model parameters stable enough to improve the ability to predict behavior as compared to modeling without adjustable parameters? We examined this issue in cumulative prospect theory (CPT), arguably the most widely used framework to model decisions under risk. Specifically, we examined (a) the temporal stability of CPT's parameters; and (b) how well different implementations of CPT, varying in the number of adjustable parameters, predict individual choice relative to models with no adjustable parameters (such as CPT with fixed parameters, expected value theory, and various heuristics). We presented participants with risky choice problems and fitted CPT to each individual's choices in two separate sessions (which were 1 week apart). All parameters were correlated across time, in particular when using a simple implementation of CPT. CPT allowing for individual variability in parameter values predicted individual choice better than CPT with fixed parameters, expected value theory, and the heuristics. CPT's parameters thus seem to pick up stable individual differences that need to be considered when predicting risky choice. Copyright © 2011 Elsevier B.V. All rights reserved.
Analytical fitting model for rough-surface BRDF.
Renhorn, Ingmar G E; Boreman, Glenn D
2008-08-18
A physics-based model is developed for rough surface BRDF, taking into account angles of incidence and scattering, effective index, surface autocovariance, and correlation length. Shadowing is introduced on surface correlation length and reflectance. Separate terms are included for surface scatter, bulk scatter and retroreflection. Using the FindFit function in Mathematica, the functional form is fitted to BRDF measurements over a wide range of incident angles. The model has fourteen fitting parameters; once these are fixed, the model accurately describes scattering data over two orders of magnitude in BRDF without further adjustment. The resulting analytical model is convenient for numerical computations.
IPMP Global Fit - A one-step direct data analysis tool for predictive microbiology.
Huang, Lihan
2017-12-04
The objective of this work is to develop and validate a unified optimization algorithm for performing one-step global regression analysis of isothermal growth and survival curves for determination of kinetic parameters in predictive microbiology. The algorithm is incorporated with user-friendly graphical user interfaces (GUIs) to develop a data analysis tool, the USDA IPMP-Global Fit. The GUIs are designed to guide users to easily navigate through the data analysis process and properly select the initial parameters for different combinations of mathematical models. The software is developed for one-step kinetic analysis to directly construct tertiary models by minimizing the global error between the experimental observations and mathematical models. The current version of the software is specifically designed for constructing tertiary models with time and temperature as the independent variables. The software is tested with a total of 9 different combinations of primary and secondary models for growth and survival of various microorganisms. The results of data analysis show that this software provides accurate estimates of kinetic parameters. In addition, it can be used to improve the experimental design and data collection for more accurate estimation of kinetic parameters. IPMP-Global Fit can be used in combination with the regular USDA-IPMP for solving the inverse problems and developing tertiary models in predictive microbiology. Published by Elsevier B.V.
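A bare-bones sketch of one-step global regression (not the IPMP-Global Fit code): a log-linear primary survival model is nested inside a Bigelow-type secondary model, and a single residual vector spanning all isothermal curves is minimized at once, so the tertiary model is built directly. All data, temperatures, and parameter values are synthetic.

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(4)

def log_survival(t, T, logN0, logDref, z, Tref=60.0):
    """Primary log-linear survival model nested in a Bigelow secondary model."""
    logD = logDref - (T - Tref) / z          # log10 D-value vs temperature
    return logN0 - t / 10 ** logD

# Synthetic isothermal survival curves: sampling times (min) per temperature (deg C)
curves = {55: np.linspace(0, 40, 9), 60: np.linspace(0, 10, 9), 65: np.linspace(0, 3, 9)}
true = (8.0, 0.7, 5.5)                       # logN0, logDref (min), z (deg C)
obs = {T: log_survival(t, T, *true) + rng.normal(0, 0.15, t.size)
       for T, t in curves.items()}

def global_resid(p):
    # One-step analysis: one residual vector spanning every curve at once
    return np.concatenate([log_survival(t, T, *p) - obs[T]
                           for T, t in curves.items()])

fit = least_squares(global_resid, x0=[7.0, 1.0, 8.0])
logN0, logDref, z = fit.x
```

Fitting all curves in one step shares the kinetic parameters across temperatures, which is what distinguishes this approach from the traditional two-step (primary-then-secondary) workflow.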
Eaton, Jeffrey W.; Bao, Le
2017-01-01
Objectives: The aim of the study was to propose and demonstrate an approach to allow additional nonsampling uncertainty about HIV prevalence measured at antenatal clinic sentinel surveillance (ANC-SS) in model-based inferences about trends in HIV incidence and prevalence. Design: Mathematical model fitted to surveillance data with Bayesian inference. Methods: We introduce a variance inflation parameter σ²_infl that accounts for the uncertainty of nonsampling errors in ANC-SS prevalence. It is additive to the sampling error variance. Three approaches are tested for estimating σ²_infl using ANC-SS and household survey data from 40 subnational regions in nine countries in sub-Saharan Africa, as defined in UNAIDS 2016 estimates. Methods were compared using in-sample fit and out-of-sample prediction of ANC-SS data, fit to household survey prevalence data, and the computational implications. Results: Introducing the additional variance parameter σ²_infl increased the error variance around ANC-SS prevalence observations by a median of 2.7 times (interquartile range 1.9–3.8). Using only sampling error in ANC-SS prevalence (σ²_infl = 0), coverage of 95% prediction intervals was 69% in out-of-sample prediction tests. This increased to 90% after introducing the additional variance parameter σ²_infl. The revised probabilistic model improved model fit to household survey prevalence and increased epidemic uncertainty intervals most during the early epidemic period before 2005. Estimating σ²_infl did not increase the computational cost of model fitting. Conclusions: We recommend estimating nonsampling error in ANC-SS as an additional parameter in Bayesian inference using the Estimation and Projection Package model. This approach may prove useful for incorporating other data sources, such as routine prevalence from prevention of mother-to-child transmission testing, into future epidemic estimates. PMID:28296801
A Model Fit Statistic for Generalized Partial Credit Model
ERIC Educational Resources Information Center
Liang, Tie; Wells, Craig S.
2009-01-01
Investigating the fit of a parametric model is an important part of the measurement process when implementing item response theory (IRT), but research examining it is limited. A general nonparametric approach for detecting model misfit, introduced by J. Douglas and A. S. Cohen (2001), has exhibited promising results for the two-parameter logistic…
Greer, Amy L; Spence, Kelsey; Gardner, Emma
2017-01-05
The United States swine industry was first confronted with porcine epidemic diarrhea virus (PEDV) in 2013. In young pigs, the virus is highly pathogenic, and the associated morbidity and mortality has a significant negative impact on the swine industry. We have applied the IDEA model to better understand the 2014 PEDV outbreak in Ontario, Canada. Using our simple, 2-parameter IDEA model, we have evaluated the early epidemic dynamics of PEDV on Ontario swine farms. We estimated the best-fit R0 and control parameter (d) for the between-farm transmission component of the outbreak by fitting the model to publicly available cumulative incidence data. We used maximum likelihood to compare model fit estimates for different combinations of the R0 and d parameters. Using our initial findings from the iterative fitting procedure, we projected the time course of the epidemic using only a subset of the early epidemic data. The IDEA model projections showed excellent agreement with the observed data based on a 7-day generation time estimate. The best-fit estimate for R0 was 1.87 (95% CI: 1.52–2.34) and for the control parameter (d) was 0.059 (95% CI: 0.022–0.117). Using data from the first three generations of the outbreak, our iterative fitting procedure suggests that R0 and d had stabilized sufficiently to project the time course of the outbreak with reasonable accuracy. The emergence and spread of PEDV represents an important agricultural emergency. The virus presents a significant ongoing threat to the Canadian swine industry. Developing an understanding of the important epidemiological characteristics and disease transmission dynamics of a novel pathogen such as PEDV is critical for helping to guide the implementation of effective, efficient, and economically feasible disease control and prevention strategies that can decrease the impact of an outbreak.
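For reference, the two-parameter IDEA model expresses incident cases in generation t as I(t) = (R0/(1+d)^t)^t. The sketch below fits R0 and d to synthetic cumulative incidence by least squares (the study used maximum likelihood, and the data here are hypothetical, not the Ontario farm counts):

```python
import numpy as np
from scipy.optimize import curve_fit

def idea_incident(t, R0, d):
    """IDEA model: incident cases in generation t, damped by control parameter d."""
    return (R0 / (1 + d) ** t) ** t

def idea_cumulative(t, R0, d):
    """Cumulative cases through each (integer) generation in t."""
    gens = np.arange(1, int(t.max()) + 1)
    return np.cumsum(idea_incident(gens, R0, d))[t.astype(int) - 1]

# Hypothetical cumulative farm counts by 7-day generation, with 2% noise
gens = np.arange(1, 9, dtype=float)
cum_obs = idea_cumulative(gens, 1.87, 0.059) \
    * np.random.default_rng(5).normal(1, 0.02, gens.size)

p, _ = curve_fit(idea_cumulative, gens, cum_obs,
                 p0=[1.5, 0.05], bounds=([1.0, 0.0], [5.0, 1.0]))
R0_fit, d_fit = p
```

As in the study, only a handful of early generations is needed before the two parameters stabilize enough to project the outbreak's time course.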
A Person Fit Test for IRT Models for Polytomous Items
ERIC Educational Resources Information Center
Glas, C. A. W.; Dagohoy, Anna Villa T.
2007-01-01
A person fit test based on the Lagrange multiplier test is presented for three item response theory models for polytomous items: the generalized partial credit model, the sequential model, and the graded response model. The test can also be used in the framework of multidimensional ability parameters. It is shown that the Lagrange multiplier…
Charge Transport in Nonaqueous Liquid Electrolytes: A Paradigm Shift
2015-05-18
…that provide inadequate descriptions of experimental data, often using empirical equations whose fitting parameters have no physical significance… The hydrodynamic model, utilizing the Stokes equation, describes the isothermal conductivity, self-diffusion coefficient, and dielectric…
Predicting Performance on a Firefighter's Ability Test from Fitness Parameters
ERIC Educational Resources Information Center
Michaelides, Marcos A.; Parpa, Koulla M.; Thompson, Jerald; Brown, Barry
2008-01-01
The purpose of this project was to identify the relationships between various fitness parameters such as upper body muscular endurance, upper and lower body strength, flexibility, body composition and performance on an ability test (AT) that included simulated firefighting tasks. A second intent was to create a regression model that would predict…
NASA Astrophysics Data System (ADS)
Tsougos, Ioannis; Mavroidis, Panayiotis; Theodorou, Kyriaki; Rajala, J.; Pitkänen, M. A.; Holli, K.; Ojala, A. T.; Hyödynmaa, S.; Järvenpää, Ritva; Lind, Bengt K.; Kappas, Constantin
2006-02-01
The choice of the appropriate model and parameter set in determining the relation between the incidence of radiation pneumonitis and dose distribution in the lung is of great importance, especially in the case of breast radiotherapy, where the observed incidence is fairly low. From our previous study based on 150 breast cancer patients, in which the fits of dose-volume models to clinical data were estimated (Tsougos et al 2005 Evaluation of dose-response models and parameters predicting radiation induced pneumonitis using clinical data from breast cancer radiotherapy Phys. Med. Biol. 50 3535-54), one could get the impression that the relative seriality model is significantly better than the LKB NTCP model. However, the estimation of the different NTCP models was based on their goodness-of-fit to clinical data using various sets of parameters published by other groups, which may partly explain the results. Hence, we sought to investigate the LKB model further, by applying different published parameter sets to the very same group of patients, in order to compare the results. It was shown that, depending on the parameter set applied, the LKB model is able to predict the incidence of radiation pneumonitis with acceptable accuracy, especially when implemented on a sub-group of 120 patients receiving a mean dose (D̄/EUD) higher than 8 Gy. In conclusion, the goodness-of-fit of a given radiobiological model for a given clinical case is closely related to the selection of the proper scoring criteria and parameter set, as well as to the compatibility of the clinical case from which the data were derived.
Simultaneous fits in ISIS on the example of GRO J1008-57
NASA Astrophysics Data System (ADS)
Kühnel, Matthias; Müller, Sebastian; Kreykenbohm, Ingo; Schwarm, Fritz-Walter; Grossberger, Christoph; Dauser, Thomas; Pottschmidt, Katja; Ferrigno, Carlo; Rothschild, Richard E.; Klochkov, Dmitry; Staubert, Rüdiger; Wilms, Joern
2015-04-01
Parallel computing and steadily increasing computation speed have led to a new tool for analyzing multiple datasets and datatypes: fitting several datasets simultaneously. With this technique, physically connected parameters of individual datasets can be treated as a single parameter by implementing this connection directly into the fit. We discuss the terminology, implementation, and possible issues of simultaneous fits based on the X-ray data analysis tool Interactive Spectral Interpretation System (ISIS). While all data modeling tools in X-ray astronomy allow, in principle, fitting data from multiple datasets individually, the syntax used in these tools is often not well suited for this task. Applying simultaneous fits to the transient X-ray binary GRO J1008-57, we find that the spectral shape depends only on X-ray flux. We determine time-independent parameters, such as the folding energy E_fold, with unprecedented precision.
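The core idea of tying a physically connected parameter across datasets can be sketched outside ISIS by concatenating the residuals of all datasets into one least-squares problem. The cutoff power-law stand-in model, the synthetic spectra, and the parameter names below are assumptions for illustration:

```python
import numpy as np
from scipy.optimize import least_squares

e = np.linspace(1.0, 20.0, 50)  # energy grid (arbitrary units)

def cutoffpl(e, norm, efold):
    # Simple exponentially cut-off continuum: norm * exp(-E / E_fold)
    return norm * np.exp(-e / efold)

# Two synthetic "observations" sharing one physical parameter (efold)
# but having individual normalizations.
rng = np.random.default_rng(1)
y1 = cutoffpl(e, 2.0, 7.0) * (1 + 0.02 * rng.standard_normal(e.size))
y2 = cutoffpl(e, 5.0, 7.0) * (1 + 0.02 * rng.standard_normal(e.size))

def residuals(p):
    norm1, norm2, efold = p
    # The shared efold enters both datasets' residuals, so a single
    # least-squares problem constrains it with all data at once.
    return np.concatenate([cutoffpl(e, norm1, efold) - y1,
                           cutoffpl(e, norm2, efold) - y2])

fit = least_squares(residuals, x0=[1.0, 1.0, 5.0])
norm1, norm2, efold = fit.x
```

Because every dataset contributes to the shared parameter's residuals, its uncertainty shrinks relative to fitting each dataset individually, which is the precision gain the abstract describes.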
A Framework for Cloudy Model Optimization and Database Storage
NASA Astrophysics Data System (ADS)
Calvén, Emilia; Helton, Andrew; Sankrit, Ravi
2018-01-01
We present a framework for producing Cloudy photoionization models of the nebular emission from novae ejecta and storing a subset of the results in SQL database format for later usage. The database can be searched for the models best fitting observed spectral line ratios. Additionally, the framework includes an optimization feature that can be used in tandem with the database to search for and improve on models by creating new Cloudy models while varying the parameters. The database search and optimization can be used to explore the structures of nebulae by deriving their properties from the best-fit models. The goal is to provide the community with a large database of Cloudy photoionization models, generated from parameters reflecting conditions within novae ejecta, that can be easily fitted to observed spectral lines, either by directly accessing the database using the framework code or through a website made specifically for this purpose.
Koen, Joshua D; Barrett, Frederick S; Harlow, Iain M; Yonelinas, Andrew P
2017-08-01
Signal-detection theory, and the analysis of receiver-operating characteristics (ROCs), has played a critical role in the development of theories of episodic memory and perception. The purpose of the current paper is to present the ROC Toolbox, a set of functions written in the Matlab programming language that can be used to fit various common signal-detection models to ROC data obtained from confidence-rating experiments. The goals for developing the ROC Toolbox were to create a tool (1) that is easy to use and easy for researchers to apply to their own data, (2) that can flexibly define models based on varying study parameters, such as the number of response options (e.g., confidence ratings) and experimental conditions, and (3) that provides optimization routines (e.g., maximum likelihood estimation) to obtain parameter estimates and numerous goodness-of-fit measures. The ROC Toolbox allows for various confidence scales and currently includes the models commonly used in recognition memory and perception: (1) the unequal-variance signal detection (UVSD) model, (2) the dual-process signal detection (DPSD) model, and (3) the mixture signal detection (MSD) model. For each model fit to a given data set, the ROC Toolbox plots summary information about the best-fitting model parameters and various goodness-of-fit measures. Here, we present an overview of the ROC Toolbox, illustrate how it can be used to input and analyse real data, and finish with a brief discussion of features that could be added to the toolbox.
Ultra wideband (0.5-16 kHz) MR elastography for robust shear viscoelasticity model identification.
Liu, Yifei; Yasar, Temel K; Royston, Thomas J
2014-12-21
Changes in the viscoelastic parameters of soft biological tissues often correlate with the progression of disease, trauma, or injury, and with response to treatment. Identifying the most appropriate viscoelastic model, then estimating and monitoring the corresponding parameters of that model, can improve insight into the underlying tissue structural changes. MR Elastography (MRE) provides a quantitative method of measuring tissue viscoelasticity. In a previous study by the authors (Yasar et al 2013 Magn. Reson. Med. 70 479-89), a silicone-based phantom material was examined over the frequency range of 200 Hz-7.75 kHz using MRE, an unprecedented bandwidth at that time. Six viscoelastic models, including four integer-order models and two fractional-order models, were fit to the wideband viscoelastic data (measured storage and loss moduli as a function of frequency). The 'fractional Voigt' model (spring and springpot in parallel) exhibited the best fit and was even able to fit the entire frequency band well when it was identified based only on a small portion of the band. This paper is an extension of that study with a wider frequency range, from 500 Hz to 16 kHz. Furthermore, more fractional-order viscoelastic models are added to the comparison pool. It is found that the added complexity of these models provides only marginal improvement over the 'fractional Voigt' model. Again, the fractional-order models show significant improvement over integer-order viscoelastic models that have as many or more fitting parameters.
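The 'fractional Voigt' fit can be sketched from the standard spring-plus-springpot complex modulus, G*(ω) = μ + c·(iω)^α, whose real and imaginary parts give the storage and loss moduli. The moduli values, parameter values, and frequency grid below are synthetic assumptions, not data from the study:

```python
import numpy as np
from scipy.optimize import least_squares

# Angular frequencies spanning roughly the paper's 500 Hz - 16 kHz band
w = 2 * np.pi * np.logspace(np.log10(500), np.log10(16000), 30)

def moduli(p, w):
    # Fractional Voigt: G*(w) = mu + c * (i*w)**alpha, hence
    # G'(w)  = mu + c * w**alpha * cos(alpha*pi/2)   (storage modulus)
    # G''(w) =      c * w**alpha * sin(alpha*pi/2)   (loss modulus)
    mu, c, alpha = p
    gp = mu + c * w**alpha * np.cos(alpha * np.pi / 2)
    gpp = c * w**alpha * np.sin(alpha * np.pi / 2)
    return gp, gpp

# Synthetic noiseless "measurements" generated from known parameters
gp_obs, gpp_obs = moduli([5e3, 40.0, 0.6], w)

def residuals(p):
    gp, gpp = moduli(p, w)
    # Relative residuals so both moduli contribute comparably
    return np.concatenate([(gp - gp_obs) / gp_obs, (gpp - gpp_obs) / gpp_obs])

fit = least_squares(residuals, x0=[1e3, 10.0, 0.5],
                    bounds=([0, 0, 0], [np.inf, np.inf, 1.0]))
mu_hat, c_hat, alpha_hat = fit.x
```

The springpot exponent α interpolates between a spring (α = 0) and a dashpot (α = 1), which is why a three-parameter fractional model can cover a bandwidth that integer-order models need many more elements to match.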
Quantifying and Reducing Curve-Fitting Uncertainty in Isc
DOE Office of Scientific and Technical Information (OSTI.GOV)
Campanelli, Mark; Duck, Benjamin; Emery, Keith
2015-06-14
Current-voltage (I-V) curve measurements of photovoltaic (PV) devices are used to determine performance parameters and to establish traceable calibration chains. Measurement standards specify localized curve fitting methods, e.g., straight-line interpolation/extrapolation of the I-V curve points near short-circuit current, Isc. By considering such fits as statistical linear regressions, uncertainties in the performance parameters are readily quantified. However, the legitimacy of such a computed uncertainty requires that the model be a valid (local) representation of the I-V curve and that the noise be sufficiently well characterized. Using more data points often has the advantage of lowering the uncertainty. However, more data points can make the uncertainty in the fit arbitrarily small, and this fit uncertainty misses the dominant residual uncertainty due to so-called model discrepancy. Using objective Bayesian linear regression for straight-line fits for Isc, we investigate an evidence-based method to automatically choose data windows of I-V points with reduced model discrepancy. We also investigate noise effects. Uncertainties, aligned with the Guide to the Expression of Uncertainty in Measurement (GUM), are quantified throughout.
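The baseline approach the abstract builds on, treating the straight-line extrapolation to Isc as a statistical regression, can be sketched with ordinary least squares (not the paper's objective Bayesian, evidence-based window selection). The I-V points below are hypothetical:

```python
import numpy as np

# Hypothetical I-V points near short circuit (V in volts, I in amps)
v = np.array([-0.02, 0.00, 0.02, 0.04, 0.06])
i = np.array([5.013, 5.008, 5.001, 4.997, 4.990])

# Ordinary least-squares straight line I = a + b*V; Isc is the intercept a
A = np.vstack([np.ones_like(v), v]).T
coef, res, _, _ = np.linalg.lstsq(A, i, rcond=None)
isc = coef[0]

# Residual-based standard error of the intercept, i.e. u(Isc)
n, p = A.shape
sigma2 = res[0] / (n - p)          # unbiased residual variance
cov = sigma2 * np.linalg.inv(A.T @ A)
isc_se = np.sqrt(cov[0, 0])
```

As the abstract notes, this standard error shrinks as more points are added even when the straight-line model stops being locally valid, which is exactly the model-discrepancy problem the evidence-based window selection is meant to address.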
A comparison of methods of fitting several models to nutritional response data.
Vedenov, D; Pesti, G M
2008-02-01
A variety of models have been proposed to fit nutritional input-output response data. The models are typically nonlinear; therefore, fitting them usually requires sophisticated statistical software and training in its use. An alternative tool for fitting nutritional response models was developed using the widely available and easier-to-use Microsoft Excel software. The tool, implemented as an Excel workbook (NRM.xls), allows simultaneous fitting and side-by-side comparison of several popular models. This study compared the results produced by the tool we developed and PROC NLIN of SAS. The models compared were the broken-line (ascending linear and quadratic segments), saturation kinetics, four-parameter logistic, sigmoidal, and exponential models. The NRM.xls workbook provided results nearly identical to those of PROC NLIN. Furthermore, the workbook successfully fit several models that failed to converge in PROC NLIN. Two data sets were used as examples to compare fits by the different models. The results suggest that no particular nonlinear model is necessarily best for all nutritional response data.
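The ascending-linear broken-line model mentioned above can be sketched outside Excel or SAS; the dose-response data and starting values below are hypothetical, and scipy stands in for the paper's NRM.xls/PROC NLIN tools:

```python
import numpy as np
from scipy.optimize import curve_fit

def broken_line(x, y_plateau, slope, x_break):
    # Ascending linear segment up to the breakpoint, then a plateau:
    # the breakpoint is typically read as the nutrient requirement.
    return np.where(x < x_break, y_plateau + slope * (x - x_break), y_plateau)

# Hypothetical response data (e.g., weight gain vs. dietary nutrient level)
x = np.array([0.4, 0.6, 0.8, 1.0, 1.2, 1.4, 1.6])
y = np.array([310., 395., 480., 550., 562., 558., 561.])

popt, _ = curve_fit(broken_line, x, y, p0=[550.0, 400.0, 1.0])
y_plateau_hat, slope_hat, x_break_hat = popt
```

Because the model is non-smooth at the breakpoint, convergence depends on a sensible starting value for `x_break`, which is one reason such fits can fail in general-purpose nonlinear routines.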
The Impact of Individualizing Student Models on Necessary Practice Opportunities
ERIC Educational Resources Information Center
Lee, Jung In; Brunskill, Emma
2012-01-01
When modeling student learning, tutors that use the Knowledge Tracing framework often assume that all students have the same set of model parameters. We find that when fitting parameters to individual students, there is significant variation among the individual's parameters. We examine if this variation is important in terms of instructional…
NASA Astrophysics Data System (ADS)
Knani, S.; Aouaini, F.; Bahloul, N.; Khalfaoui, M.; Hachicha, M. A.; Ben Lamine, A.; Kechaou, N.
2014-04-01
An analytical expression for modeling water adsorption isotherms of food and agricultural products is developed using the statistical mechanics formalism. The model developed in this paper is then used to fit and interpret the isotherms of four varieties of Tunisian olive leaves, called “Chemlali”, “Chemchali”, “Chetoui” and “Zarrazi”. The parameters involved in the model, such as the number of adsorbed water molecules per site, n, the receptor site density, NM, and the energetic parameters, a1 and a2, were determined by fitting the experimental adsorption isotherms at temperatures ranging from 303 to 323 K. The fitted results are interpreted, and the model is then applied to calculate the thermodynamic functions that govern the adsorption mechanism, such as the entropy, the Gibbs free enthalpy and the internal energy.
Weiss, Michael
2017-06-01
Appropriate model selection is important in fitting oral concentration-time data due to the complex character of the absorption process. When IV reference data are available, the problem is the selection of an empirical input function (absorption model). In the present examples, a weighted sum of inverse Gaussian density functions (IG) was found to be most useful. It is shown that alternative models (gamma and Weibull density) are only valid if the input function is log-concave. Furthermore, it is demonstrated for the first time that the sum-of-IGs model can also be applied to fit oral data directly (without IV data). In the present examples, a weighted sum of two or three IGs was sufficient. From the parameters of this function, the model-independent measures AUC and mean residence time can be calculated. It turned out that a good fit of the data in the terminal phase is essential to avoid biased parameter estimates. The time course of the fractional elimination rate and the concept of log-concavity have proved to be useful tools in model selection.
NASA Technical Reports Server (NTRS)
Gross, Bernard
1996-01-01
Material characterization parameters obtained from naturally flawed specimens are necessary for reliability evaluation of non-deterministic advanced ceramic structural components. The least squares best fit method is applied to the three parameter uniaxial Weibull model to obtain the material parameters from experimental tests on volume or surface flawed specimens subjected to pure tension, pure bending, four point or three point loading. Several illustrative example problems are provided.
NASA Astrophysics Data System (ADS)
Bassiouni, Maoya; Higgins, Chad W.; Still, Christopher J.; Good, Stephen P.
2018-06-01
Vegetation controls on soil moisture dynamics are challenging to measure and translate into scale- and site-specific ecohydrological parameters for simple soil water balance models. We hypothesize that empirical probability density functions (pdfs) of relative soil moisture or soil saturation encode sufficient information to determine these ecohydrological parameters. Further, these parameters can be estimated through inverse modeling of the analytical equation for soil saturation pdfs, derived from the commonly used stochastic soil water balance framework. We developed a generalizable Bayesian inference framework to estimate ecohydrological parameters consistent with empirical soil saturation pdfs derived from observations at point, footprint, and satellite scales. We applied the inference method to four sites with different land cover and climate assuming (i) an annual rainfall pattern and (ii) a wet season rainfall pattern with a dry season of negligible rainfall. The Nash-Sutcliffe efficiencies of the analytical model's fit to soil observations ranged from 0.89 to 0.99. The coefficient of variation of posterior parameter distributions ranged from < 1 to 15 %. The parameter identifiability was not significantly improved in the more complex seasonal model; however, small differences in parameter values indicate that the annual model may have absorbed dry season dynamics. Parameter estimates were most constrained for scales and locations at which soil water dynamics are more sensitive to the fitted ecohydrological parameters of interest. In these cases, model inversion converged more slowly but ultimately provided better goodness of fit and lower uncertainty. Results were robust using as few as 100 daily observations randomly sampled from the full records, demonstrating the advantage of analyzing soil saturation pdfs instead of time series to estimate ecohydrological parameters from sparse records. 
Our work combines modeling and empirical approaches in ecohydrology and provides a simple framework to obtain scale- and site-specific analytical descriptions of soil moisture dynamics consistent with soil moisture observations.
Carbon dioxide stripping in aquaculture -- part III: model verification
Colt, John; Watten, Barnaby; Pfeiffer, Tim
2012-01-01
Based on conventional mass-transfer models developed for oxygen, the non-linear ASCE method, the 2-point method, and a one-parameter linear-regression method were evaluated for carbon dioxide stripping data. For values of KLaCO2 < approximately 1.5/h, the 2-point and ASCE methods fit the experimental data well, but the fit breaks down at higher values of KLaCO2. How to correct KLaCO2 for gas-phase enrichment remains to be determined. The one-parameter linear-regression model was used to vary C*CO2 over the test, but it did not result in a better fit to the experimental data when compared to the ASCE or fixed-C*CO2 assumptions.
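An ASCE-style nonlinear fit of the first-order gas-transfer model can be sketched as below. The concentration data, units, and starting values are hypothetical; the exponential form C(t) = C* + (C0 − C*)·exp(−KLa·t) is the conventional mass-transfer model the comparison is built on:

```python
import numpy as np
from scipy.optimize import curve_fit

def stripping_model(t, kla, c_star, c0):
    # First-order gas transfer toward the saturation concentration C*:
    # C(t) = C* + (C0 - C*) * exp(-KLa * t)
    return c_star + (c0 - c_star) * np.exp(-kla * t)

# Hypothetical dissolved CO2 concentrations (mg/L) over time (h)
t = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 3.0])
c = np.array([20.0, 13.5, 9.6, 7.2, 5.8, 4.4])

# Nonlinear fit of all three parameters simultaneously (ASCE-style),
# rather than the 2-point method's closed-form estimate from two samples
popt, _ = curve_fit(stripping_model, t, c, p0=[1.0, 4.0, 20.0])
kla_hat, c_star_hat, c0_hat = popt
```

By contrast, the 2-point method fixes C* and solves for KLa from just two concentration measurements, which is why it degrades when KLa is large and the approach to C* is fast.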
He I lines in B stars - Comparison of non-local thermodynamic equilibrium models with observations
NASA Technical Reports Server (NTRS)
Heasley, J. N.; Timothy, J. G.; Wolff, S. C.
1982-01-01
Profiles of He I λλ4026, 4387, 4471, 4713, 5876, and 6678 have been obtained in 17 stars of spectral type B0-B5. Parameters of the nonlocal thermodynamic equilibrium models appropriate to each star are determined from the Strömgren index and fits to H-alpha line profiles. These parameters yield generally good fits to the observed He I line profiles, with the best fits being found for the blue He I lines where departures from local thermodynamic equilibrium are relatively small. For the two red lines it is found that, in the early B stars and in stars with log g less than 3.5, both lines are systematically stronger than predicted by the nonlocal thermodynamic equilibrium models.
Applying a Hypoxia-Incorporating TCP Model to Experimental Data on Rat Sarcoma
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ruggieri, Ruggero, E-mail: ruggieri.ruggero@gmail.com; Stavreva, Nadejda; Naccarato, Stefania
2012-08-01
Purpose: To verify whether a tumor control probability (TCP) model which mechanistically incorporates acute and chronic hypoxia is able to describe animal in vivo dose-response data, exhibiting tumor reoxygenation. Methods and Materials: The investigated TCP model accounts for tumor repopulation, reoxygenation of chronic hypoxia, and fluctuating oxygenation of acute hypoxia. Using the maximum likelihood method, the model is fitted to Fischer-Moulder data on Wag/Rij rats, inoculated with rat rhabdomyosarcoma BA1112, and irradiated in vivo using different fractionation schemes. This data set is chosen because two of the experimental dose-response curves exhibit an inverse dose behavior, which is interpreted as due to reoxygenation. The tested TCP model is complex, and therefore, in vivo cell survival data on the same BA1112 cell line from Reinhold were added to Fischer-Moulder data and fitted simultaneously with a corresponding cell survival function. Results: The obtained fit to the combined Fischer-Moulder-Reinhold data was statistically acceptable. The best-fit values of the model parameters for which information exists were in the range of published values. The cell survival curves of well-oxygenated and hypoxic cells, computed using the best-fit values of the radiosensitivities and the initial number of clonogens, were in good agreement with the corresponding in vitro and in situ experiments of Reinhold. The best-fit values of most of the hypoxia-related parameters were used to recompute the TCP for non-small cell lung cancer patients as a function of the number of fractions, TCP(n). Conclusions: The investigated TCP model adequately describes animal in vivo data exhibiting tumor reoxygenation.
The TCP(n) curve computed for non-small cell lung cancer patients with the best-fit values of most of the hypoxia-related parameters confirms the previously obtained abrupt reduction in TCP for n < 10, thus warning against the adoption of severely hypofractionated schedules.
Syndromes of Self-Reported Psychopathology for Ages 18-59 in 29 Societies.
Ivanova, Masha Y; Achenbach, Thomas M; Rescorla, Leslie A; Tumer, Lori V; Ahmeti-Pronaj, Adelina; Au, Alma; Maese, Carmen Avila; Bellina, Monica; Caldas, J Carlos; Chen, Yi-Chuen; Csemy, Ladislav; da Rocha, Marina M; Decoster, Jeroen; Dobrean, Anca; Ezpeleta, Lourdes; Fontaine, Johnny R J; Funabiki, Yasuko; Guðmundsson, Halldór S; Harder, Valerie S; de la Cabada, Marie Leiner; Leung, Patrick; Liu, Jianghong; Mahr, Safia; Malykh, Sergey; Maras, Jelena Srdanovic; Markovic, Jasminka; Ndetei, David M; Oh, Kyung Ja; Petot, Jean-Michel; Riad, Geylan; Sakarya, Direnc; Samaniego, Virginia C; Sebre, Sandra; Shahini, Mimoza; Silvares, Edwiges; Simulioniene, Roma; Sokoli, Elvisa; Talcott, Joel B; Vazquez, Natalia; Zasepa, Ewa
2015-06-01
This study tested the multi-society generalizability of an eight-syndrome assessment model derived from factor analyses of American adults' self-ratings of 120 behavioral, emotional, and social problems. The Adult Self-Report (ASR; Achenbach and Rescorla 2003) was completed by 17,152 18-59-year-olds in 29 societies. Confirmatory factor analyses tested the fit of self-ratings in each sample to the eight-syndrome model. The primary model fit index (Root Mean Square Error of Approximation) showed good model fit for all samples, while secondary indices showed acceptable to good fit. Only 5 (0.06%) of the 8,598 estimated parameters were outside the admissible parameter space. Confidence intervals indicated that sampling fluctuations could account for the deviant parameters. Results thus supported the tested model in societies differing widely in social, political, and economic systems, languages, ethnicities, religions, and geographical regions. Although other items, societies, and analytic methods might yield different results, the findings indicate that adults in very diverse societies were willing and able to rate themselves on the same standardized set of 120 problem items. Moreover, their self-ratings fit an eight-syndrome model previously derived from self-ratings by American adults. The support for the statistically derived syndrome model is consistent with previous findings for parent, teacher, and self-ratings of 1½-18-year-olds in many societies. The ASR and its parallel collateral-report instrument, the Adult Behavior Checklist (ABCL), may offer mental health professionals practical tools for the multi-informant assessment of clinical constructs of adult psychopathology that appear to be meaningful across diverse societies.
Parameter Estimation as a Problem in Statistical Thermodynamics.
Earle, Keith A; Schneider, David J
2011-03-14
In this work, we explore the connections between parameter fitting and statistical thermodynamics using the maxent principle of Jaynes as a starting point. In particular, we show how signal averaging may be described by a suitable one-particle partition function, modified for the case of a variable number of particles. These modifications lead to an entropy that is extensive in the number of measurements in the average. Systematic error may be interpreted as a departure from ideal gas behavior. In addition, we show how to combine measurements from different experiments in an unbiased way in order to maximize the entropy of simultaneous parameter fitting. We suggest that fit parameters may be interpreted as generalized coordinates and the forces conjugate to them may be derived from the system partition function. From this perspective, the parameter fitting problem may be interpreted as a process where the system (spectrum) does work against internal stresses (non-optimum model parameters) to achieve a state of minimum free energy/maximum entropy. Finally, we show how the distribution function allows us to define a geometry on parameter space, building on previous work [1, 2]. This geometry has implications for error estimation, and we outline a program for incorporating these geometrical insights into an automated parameter fitting algorithm.
Estimation of fitness from energetics and life-history data: An example using mussels.
Sebens, Kenneth P; Sarà, Gianluca; Carrington, Emily
2018-06-01
Changing environments have the potential to alter the fitness of organisms through effects on components of fitness such as energy acquisition, metabolic cost, growth rate, survivorship, and reproductive output. Organisms, on the other hand, can alter aspects of their physiology and life histories through phenotypic plasticity as well as through genetic change in populations (selection). Researchers examining the effects of environmental variables frequently concentrate on individual components of fitness, although methods exist to combine these into a population level estimate of average fitness, as the per capita rate of population growth for a set of identical individuals with a particular set of traits. Recent advances in energetic modeling have provided excellent data on energy intake and costs leading to growth, reproduction, and other life-history parameters; these in turn have consequences for survivorship at all life-history stages, and thus for fitness. Components of fitness alone (performance measures) are useful in determining organism response to changing conditions, but are often not good predictors of fitness; they can differ in both form and magnitude, as demonstrated in our model. Here, we combine an energetics model for growth and allocation with a matrix model that calculates population growth rate for a group of individuals with a particular set of traits. We use intertidal mussels as an example, because data exist for some of the important energetic and life-history parameters, and because there is a hypothesized energetic trade-off between byssus production (affecting survivorship), and energy used for growth and reproduction. The model shows exactly how strong this trade-off is in terms of overall fitness, and it illustrates conditions where fitness components are good predictors of actual fitness, and cases where they are not. 
In addition, the model is used to examine the effects of environmental change on this trade-off and on both fitness and individual fitness components.
NASA Astrophysics Data System (ADS)
Han, Lu; Gao, Kun; Gong, Chen; Zhu, Zhenyu; Guo, Yue
2017-08-01
On-orbit Modulation Transfer Function (MTF) is an important indicator for evaluating the performance of optical remote sensors on a satellite. There are many methods to estimate the MTF, such as the pinhole method, the slit method, and so on. Among them, the knife-edge method is efficient, easy to use, and recommended in the ISO 12233 standard for acquiring the whole-frequency MTF curve. However, the accuracy of the algorithm is significantly affected by the Edge Spread Function (ESF) fitting accuracy, which limits its range of application. In this paper, an optimized knife-edge method using the Powell algorithm is therefore proposed to improve the ESF fitting precision. The Fermi function is the most popular ESF fitting model, yet it is vulnerable to the initial values of its parameters. Owing to its simplicity and fast convergence, the Powell algorithm is applied to fit the parameters adaptively, as it is insensitive to the initial parameter values. Numerical simulation results demonstrate the accuracy and robustness of the optimized algorithm under different SNR, edge direction, and leaning angle conditions. Experimental results using images from the camera on the ZY-3 satellite show that this method is more accurate than the standard knife-edge method of ISO 12233 in MTF estimation.
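The Fermi-function ESF fit with the Powell algorithm can be sketched as follows. The edge profile, parameter values, and the specific four-parameter Fermi form a/(1+exp((b−x)/c))+d are illustrative assumptions standing in for the paper's implementation:

```python
import numpy as np
from scipy.optimize import minimize

def fermi_esf(x, a, b, c, d):
    # Fermi-function model of the edge spread function:
    # a = edge amplitude, b = edge location, c = edge width, d = baseline
    return a / (1.0 + np.exp((b - x) / c)) + d

# Synthetic noisy knife-edge profile with known parameters
rng = np.random.default_rng(0)
x = np.linspace(-10, 10, 201)
y = fermi_esf(x, 1.0, 0.5, 1.2, 0.1) + 0.01 * rng.standard_normal(x.size)

def sse(params):
    # Sum of squared errors between model and measured profile
    return np.sum((fermi_esf(x, *params) - y) ** 2)

# Powell's method is derivative-free and tolerant of rough initial guesses
fit = minimize(sse, x0=[0.8, 0.0, 1.0, 0.0], method="Powell")
a, b, c, d = fit.x
```

In a full knife-edge pipeline, the fitted ESF would then be differentiated to obtain the line spread function, whose Fourier transform magnitude gives the MTF curve.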
The H,G_1,G_2 photometric system with scarce observational data
NASA Astrophysics Data System (ADS)
Penttilä, A.; Granvik, M.; Muinonen, K.; Wilkman, O.
2014-07-01
The H,G_1,G_2 photometric system was officially adopted at the IAU General Assembly in Beijing, 2012. The system replaced the H,G system from 1985. The 'photometric system' is a parametrized model V(α; params) for the magnitude-phase relation of small Solar System bodies, and its main purpose is to predict the magnitude at backscattering, H := V(0°), i.e., the (absolute) magnitude of the object. The original H,G system was designed using the best available data in 1985, but since then new observations have been made showing certain features, especially near backscattering, to which the H,G function has trouble adjusting. The H,G_1,G_2 system was developed especially to address these issues [1]. With a sufficient number of high-accuracy observations and with wide phase-angle coverage, the H,G_1,G_2 system performs well. However, with scarce low-accuracy data the system has trouble producing a reliable fit, as would any other three-parameter nonlinear function. Therefore, simultaneously with the H,G_1,G_2 system, a two-parameter version of the model, the H,G_{12} system, was introduced [1]. The two-parameter version ties the parameters G_1,G_2 into a single parameter G_{12} by a linear relation, and still uses the H,G_1,G_2 system in the background. This version dramatically improves the possibility of obtaining a reliable phase-curve fit to scarce data. The number of observed small bodies is increasing all the time, and so is the need to produce estimates for the absolute magnitude/diameter/albedo and other size/composition-related parameters. The lack of small-phase-angle observations is especially topical for near-Earth objects (NEOs). With these, even the two-parameter version faces problems. The previous procedure with the H,G system in such circumstances has been to fix the G parameter to some constant value, thus fitting only a single-parameter function.
In conclusion, there is a definite need for a reliable procedure to produce photometric fits to very scarce and low-accuracy data. There are a few details that should be considered with the H,G_1,G_2 or H,G_{12} systems with scarce data. The first point is the distribution of errors in the fit. The original H,G system allowed linear regression in flux space, thus making the estimation computationally easier. The same principle was repeated with the H,G_1,G_2 system. There is, however, a major hidden assumption in the transformation. With regression modeling, the residuals should be distributed symmetrically around zero; if they are normally distributed, even better. We have noticed that, at least with some NEO observations, the residuals in flux space are far from symmetric, and seem to be much more symmetric in magnitude space. The result is that the nonlinear fit in magnitude space is far more reliable than the linear fit in flux space. Since modern computers and nonlinear regression algorithms are efficient enough, we conclude that, in many cases with low-accuracy data, the nonlinear fit should be favored. In fact, there are statistical procedures that should be employed with the photometric fit. At the moment, the choice between the three-parameter and two-parameter versions is simply based on subjective decision-making. By checking parameter errors and model comparison statistics, the choice could be made objectively. Similarly, the choice between the linear fit in flux space and the nonlinear fit in magnitude space should be based on a statistical test of unbiased residuals. Furthermore, the so-called Box-Cox transform could be employed to find an optimal transformation somewhere between the magnitude and flux spaces. The H,G_1,G_2 system is based on cubic splines, and is therefore a bit more complicated to implement than a system with simpler basis functions.
The same applies to a complete program that would automatically choose the best transform for the data, test whether the two- or three-parameter version of the model should be fitted, and produce the fitted parameters with their error estimates. Our group has already made implementations of the H,G_1,G_2 system publicly available [2]. We plan to implement the abovementioned improvements to the system and make these tools public as well.
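A minimal sketch of the magnitude-space nonlinear fit in Python. The cubic-spline bases Φ1-Φ3 of the real H,G_1,G_2 system are replaced here by hypothetical stand-in functions; only the fitting structure (nonlinear least squares on magnitude residuals rather than linear regression in flux space) is illustrated:

```python
import numpy as np
from scipy.optimize import least_squares

# Hypothetical stand-ins for the cubic-spline basis functions of the
# H,G_1,G_2 system (alpha is the phase angle in degrees).
def phi1(a): return np.exp(-a / 10.0)
def phi2(a): return np.exp(-a / 50.0)
def phi3(a): return np.exp(-(a / 5.0) ** 2)

def model_mag(params, alpha):
    # V(alpha) = H - 2.5 log10 of a convex-style combination of bases
    H, g1, g2 = params
    flux = g1 * phi1(alpha) + g2 * phi2(alpha) + (1 - g1 - g2) * phi3(alpha)
    return H - 2.5 * np.log10(flux)

def fit_magnitude_space(alpha, v_obs, p0=(7.0, 0.3, 0.3)):
    """Nonlinear least squares directly on magnitude residuals."""
    res = least_squares(lambda p: model_mag(p, alpha) - v_obs, p0,
                        bounds=([0.0, 0.0, 0.0], [20.0, 1.0, 1.0]))
    return res.x

# Synthetic demonstration data with 0.02 mag noise.
rng = np.random.default_rng(0)
alpha = np.linspace(1.0, 60.0, 40)
v_obs = model_mag((7.2, 0.4, 0.35), alpha) + rng.normal(0.0, 0.02, alpha.size)
H_fit, g1_fit, g2_fit = fit_magnitude_space(alpha, v_obs)
```

Because the basis coefficients sum to one, the flux at zero phase is unity and the fitted H is directly the predicted absolute magnitude V(0°).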
2011-01-01
Background Electronic patient records are generally coded using extensive sets of codes but the significance of the utilisation of individual codes may be unclear. Item response theory (IRT) models are used to characterise the psychometric properties of items included in tests and questionnaires. This study asked whether the properties of medical codes in electronic patient records may be characterised through the application of item response theory models. Methods Data were provided by a cohort of 47,845 participants from 414 family practices in the UK General Practice Research Database (GPRD) with a first stroke between 1997 and 2006. Each eligible stroke code, out of a set of 202 OXMIS and Read codes, was coded as either recorded or not recorded for each participant. A two-parameter IRT model was fitted using marginal maximum likelihood estimation. Estimated parameters from the model were considered to characterise each code with respect to the latent trait of stroke diagnosis. The location parameter is referred to as a calibration parameter, while the slope parameter is referred to as a discrimination parameter. Results There were 79,874 stroke code occurrences available for analysis. Utilisation of codes varied between family practices, with intraclass correlation coefficients of up to 0.25 for the most frequently used codes. IRT analyses were restricted to 110 Read codes. Calibration and discrimination parameters were estimated for 77 (70%) codes that were endorsed for 1,942 stroke patients; parameters were not estimated for the remaining, more frequently used codes. Discrimination parameter values ranged from 0.67 to 2.78, while calibration parameter values ranged from 4.47 to 11.58. The two-parameter model gave a better fit to the data than either the one- or three-parameter models. However, high chi-square values for about a fifth of the stroke codes were suggestive of poor item fit.
Conclusion The application of item response theory models to coded electronic patient records might potentially contribute to identifying medical codes that offer poor discrimination or low calibration. This might indicate the need for improved coding sets or a requirement for improved clinical coding practice. However, in this study estimates were only obtained for a small proportion of participants and there was some evidence of poor model fit. There was also evidence of variation in the utilisation of codes between family practices raising the possibility that, in practice, properties of codes may vary for different coders. PMID:22176509
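The two-parameter logistic model at the core of such an analysis can be sketched as follows; the discrimination and calibration values below echo the ranges reported above but the trait levels are otherwise illustrative:

```python
import math

def irt_2pl(theta, a, b):
    """Two-parameter logistic item response function: the probability that
    a record at latent trait level theta contains a given code, for
    discrimination a and calibration (location) b."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

# A highly discriminating code switches on sharply near its calibration
# point; a weakly discriminating one changes slowly across trait levels.
p_sharp_low  = irt_2pl(4.0, 2.78, 5.0)
p_sharp_high = irt_2pl(6.0, 2.78, 5.0)
p_flat_low   = irt_2pl(4.0, 0.67, 5.0)
p_flat_high  = irt_2pl(6.0, 0.67, 5.0)
```

Marginal maximum likelihood estimation additionally integrates this probability over an assumed trait distribution, which is beyond this sketch.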
Curtis L. Vanderschaaf
2008-01-01
Mixed effects models can be used to obtain site-specific parameters through the use of model calibration that often produces better predictions of independent data. This study examined whether parameters of a mixed effect height-diameter model estimated using loblolly pine plantation data but calibrated using sweetgum plantation data would produce reasonable...
Hands-on parameter search for neural simulations by a MIDI-controller.
Eichner, Hubert; Borst, Alexander
2011-01-01
Computational neuroscientists frequently encounter the challenge of parameter fitting--exploring a usually high-dimensional variable space to find a parameter set that reproduces an experimental data set. One common approach is using automated search algorithms such as gradient descent or genetic algorithms. However, these approaches suffer from several shortcomings related to their lack of understanding of the underlying question, such as defining a suitable error function or getting stuck in local minima. Another widespread approach is manual parameter fitting using a keyboard or a mouse, evaluating different parameter sets following the user's intuition. However, this process is often cumbersome and time-intensive. Here, we present a new method for manual parameter fitting. A MIDI controller provides input to the simulation software, where model parameters are then tuned according to the knob and slider positions on the device. The model is immediately updated on every parameter change, continuously plotting the latest results. Given reasonably short simulation times of less than one second, we find this method to be highly efficient in quickly determining good parameter sets. Our approach bears a close resemblance to tuning the sound of an analog synthesizer, giving the user a very good intuition of the problem at hand, such as immediate feedback on whether and how results are affected by specific parameter changes. In addition to being used in research, our approach should be an ideal teaching tool, allowing students to interactively explore complex models such as Hodgkin-Huxley or dynamical systems.
PMID:22066027
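The core of such an approach is just a mapping from 7-bit control-change values onto parameter ranges. A minimal sketch, with hypothetical parameter names and a pre-recorded event stream standing in for a live MIDI device and simulation loop:

```python
def cc_to_param(cc_value, lo, hi, log_scale=False):
    """Map a 7-bit MIDI control-change value (0-127) onto a model
    parameter range [lo, hi], linearly or logarithmically."""
    frac = max(0, min(127, cc_value)) / 127.0
    if log_scale:
        return lo * (hi / lo) ** frac   # equal ratio per knob step
    return lo + frac * (hi - lo)        # equal increment per knob step

# Hypothetical knob assignments; a live loop would read these events
# from the MIDI device and re-run the simulation after each update.
knob_map = {1: ("g_Na", 0.0, 120.0, False),
            2: ("tau_m", 0.01, 100.0, True)}
params = {}
for ctrl, val in [(1, 64), (2, 127), (1, 0)]:
    name, lo, hi, logs = knob_map[ctrl]
    params[name] = cc_to_param(val, lo, hi, logs)
```

Logarithmic mapping is the natural choice for parameters such as time constants that span several orders of magnitude, mirroring how synthesizer knobs sweep frequency.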
Alcalá-Quintana, Rocío; García-Pérez, Miguel A
2013-12-01
Research on temporal-order perception uses temporal-order judgment (TOJ) tasks or synchrony judgment (SJ) tasks in their binary SJ2 or ternary SJ3 variants. In all cases, two stimuli are presented with some temporal delay, and observers judge the order of presentation. Arbitrary psychometric functions are typically fitted to obtain performance measures such as sensitivity or the point of subjective simultaneity, but the parameters of these functions are uninterpretable. We describe routines in MATLAB and R that fit model-based functions whose parameters are interpretable in terms of the processes underlying temporal-order and simultaneity judgments and responses. These functions arise from an independent-channels model assuming arrival latencies with exponential distributions and a trichotomous decision space. Different routines fit data separately for SJ2, SJ3, and TOJ tasks, jointly for any two tasks, or also jointly for the three tasks (for common cases in which two or even the three tasks were used with the same stimuli and participants). Additional routines provide bootstrap p-values and confidence intervals for estimated parameters. A further routine is included that obtains performance measures from the fitted functions. An R package for Windows and source code of the MATLAB and R routines are available as Supplementary Files.
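The independent-channels assumption with exponential arrival latencies yields closed-form order probabilities. A sketch that ignores the trichotomous decision space and response errors of the full model, using made-up rate parameters, with a Monte Carlo check of the closed form:

```python
import math, random

def p_first_analytic(lam_a, lam_b, delta):
    """P(stimulus A is registered first) when A is presented at t = 0 and
    B at t = delta >= 0, with independent exponential arrival latencies
    of rates lam_a and lam_b."""
    return 1.0 - math.exp(-lam_a * delta) * lam_b / (lam_a + lam_b)

def p_first_mc(lam_a, lam_b, delta, n=200_000, seed=1):
    """Monte Carlo estimate of the same probability."""
    rng = random.Random(seed)
    hits = sum(rng.expovariate(lam_a) < delta + rng.expovariate(lam_b)
               for _ in range(n))
    return hits / n
```

With zero delay and equal rates the probability is exactly one half, and it grows toward one as the onset asynchrony increases, which is the qualitative shape a TOJ psychometric function must have.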
Optimization-Based Model Fitting for Latent Class and Latent Profile Analyses
ERIC Educational Resources Information Center
Huang, Guan-Hua; Wang, Su-Mei; Hsu, Chung-Chu
2011-01-01
Statisticians typically estimate the parameters of latent class and latent profile models using the Expectation-Maximization algorithm. This paper proposes an alternative two-stage approach to model fitting. The first stage uses the modified k-means and hierarchical clustering algorithms to identify the latent classes that best satisfy the…
Fast and Accurate Fitting and Filtering of Noisy Exponentials in Legendre Space
Bao, Guobin; Schild, Detlev
2014-01-01
The parameters of experimentally obtained exponentials are usually found by least-squares fitting methods. Essentially, this is done by minimizing the mean squares sum of the differences between the data, most often a function of time, and a parameter-defined model function. Here we delineate a novel method where the noisy data are represented and analyzed in the space of Legendre polynomials. This is advantageous in several respects. First, parameter retrieval in the Legendre domain is typically two orders of magnitude faster than direct fitting in the time domain. Second, data fitting in a low-dimensional Legendre space yields estimates for amplitudes and time constants which are, on the average, more precise compared to least-squares-fitting with equal weights in the time domain. Third, the Legendre analysis of two exponentials gives satisfactory estimates in parameter ranges where least-squares-fitting in the time domain typically fails. Finally, filtering exponentials in the domain of Legendre polynomials leads to marked noise removal without the phase shift characteristic for conventional lowpass filters. PMID:24603904
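The filtering idea can be sketched with NumPy's built-in Legendre tools. This shows only the projection onto a low-dimensional Legendre space, not the paper's analytic parameter-retrieval step; amplitudes, time constant, and noise level are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)
t = np.linspace(0.0, 1.0, 400)
clean = 2.0 * np.exp(-t / 0.2)                  # amplitude 2, tau = 0.2
noisy = clean + rng.normal(0.0, 0.1, t.size)

# Project onto the first few Legendre polynomials; truncating to this
# low-dimensional space removes most of the noise without the phase
# shift a causal lowpass filter would introduce.
fit = np.polynomial.legendre.Legendre.fit(t, noisy, deg=8)
filtered = fit(t)

rms_before = float(np.sqrt(np.mean((noisy - clean) ** 2)))
rms_after = float(np.sqrt(np.mean((filtered - clean) ** 2)))
```

Because an exponential is well captured by a handful of Legendre coefficients, the truncation error is negligible and the residual noise shrinks roughly with the square root of the dimension reduction.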
The Anomalous Accretion Disk of the Cataclysmic Variable RW Sextantis
NASA Astrophysics Data System (ADS)
Linnell, Albert P.; Godon, P.; Hubeny, I.; Sion, E. M.; Szkody, P.
2011-01-01
The standard model for stable Cataclysmic Variable (CV) accretion disks (Frank, King and Raine 1992) derives an explicit analytic expression for the disk effective temperature as a function of radial distance from the white dwarf (WD). That model specifies that the effective temperature, Teff(R), varies as [f(R, Ṁ)]^0.25, where f(R, Ṁ) represents a combination of parameters including R, the mass transfer rate Ṁ, and others. It is well known that fits of standard-model synthetic spectra to observed CV spectra find almost no instances of agreement. We have derived a generalized expression for the radial temperature gradient, which preserves the total disk luminosity as a function of Ṁ but permits an exponent different from the theoretical value of 0.25, and have applied it to RW Sex (Linnell et al. 2010, ApJ, 719, 271). We find an excellent fit to observed FUSE and IUE spectra for an exponent of 0.125, curiously close to 1/2 the theoretical value. Our annulus synthetic spectra, combined to represent the accretion disk, were produced with the program TLUSTY, were non-LTE, and included H, He, C, Mg, Al, Si, and Fe as explicit ions. We illustrate our results with a plot showing the failure to fit RW Sex for a range of Ṁ values, our model fit to the observations, and a χ² plot showing the selection of the exponent 0.125 as the best fit for the Ṁ range shown. (For the final model parameters see the paper cited.)
NASA Astrophysics Data System (ADS)
Kelleher, Christa A.; Shaw, Stephen B.
2018-02-01
Recent research has found that hydrologic modeling over decadal time periods often requires time variant model parameters. Most prior work has focused on assessing time variance in model parameters conceptualizing watershed features and functions. In this paper, we assess whether adding a time variant scalar to potential evapotranspiration (PET) can be used in place of time variant parameters. Using the HBV hydrologic model and four different simple but common PET methods (Hamon, Priestley-Taylor, Oudin, and Hargreaves), we simulated 60+ years of daily discharge on four rivers in New York state. Allowing all ten model parameters to vary in time achieved good model fits in terms of daily NSE and long-term water balance. However, allowing single model parameters to vary in time - including a scalar on PET - achieved nearly equivalent model fits across PET methods. Overall, varying a PET scalar in time is likely more physically consistent with known biophysical controls on PET as compared to varying parameters conceptualizing innate watershed properties related to soil properties such as wilting point and field capacity. This work suggests that the seeming need for time variance in innate watershed parameters may be due to overly simple evapotranspiration formulations that do not account for all factors controlling evapotranspiration over long time periods.
Modeling of soil water retention from saturation to oven dryness
Rossi, Cinzia; Nimmo, John R.
1994-01-01
Most analytical formulas used to model moisture retention in unsaturated porous media have been developed for the wet range and are unsuitable for applications in which low water contents are important. We have developed two models that fit the entire range from saturation to oven dryness in a practical and physically realistic way with smooth, continuous functions that have few parameters. Both models incorporate a power law and a logarithmic dependence of water content on suction, differing in how these two components are combined. In one model, functions are added together (model “sum”); in the other they are joined smoothly together at a discrete point (model “junction”). Both models also incorporate recent developments that assure a continuous derivative and force the function to reach zero water content at a finite value of suction that corresponds to oven dryness. The models have been tested with seven sets of water retention data that each cover nearly the entire range. The three-parameter sum model fits all data well and is useful for extrapolation into the dry range when data for it are unavailable. The two-parameter junction model fits most data sets almost as well as the sum model and has the advantage of being analytically integrable for convenient use with capillary-bundle models to obtain the unsaturated hydraulic conductivity.
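An illustrative composite retention function in the spirit of the "sum" model, not the exact Rossi-Nimmo parameterization: a power-law term plus a logarithmic term, shifted so the water content reaches exactly zero at an assumed oven-dryness suction:

```python
import math

PSI_D = 10 ** 6.8   # assumed suction at oven dryness (cm of water)

def theta_sum(psi, theta_s, psi0, lam, c_log):
    """Illustrative 'sum'-style retention curve: power-law plus
    logarithmic dependence of water content on suction psi, with the
    power term shifted so theta(PSI_D) = 0 exactly, and the whole curve
    capped at saturation theta_s near the air-entry suction psi0."""
    power = (psi0 / psi) ** lam - (psi0 / PSI_D) ** lam
    logterm = c_log * math.log(PSI_D / psi)
    return theta_s * max(0.0, min(1.0, power + logterm))
```

The two ingredients mirror the models described above: the power law dominates at moderate suctions, while the logarithmic term carries the curve smoothly down to zero water content at finite suction rather than asymptotically.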
Selgrade, J F; Harris, L A; Pasteur, R D
2009-10-21
This study presents a 13-dimensional system of delayed differential equations which predicts serum concentrations of five hormones important for regulation of the menstrual cycle. Parameters for the system are fit to two different data sets for normally cycling women. For these best fit parameter sets, model simulations agree well with the two different data sets but one model also has an abnormal stable periodic solution, which may represent polycystic ovarian syndrome. This abnormal cycle occurs for the model in which the normal cycle has estradiol levels at the high end of the normal range. Differences in model behavior are explained by studying hysteresis curves in bifurcation diagrams with respect to sensitive model parameters. For instance, one sensitive parameter is indicative of the estradiol concentration that promotes pituitary synthesis of a large amount of luteinizing hormone, which is required for ovulation. Also, it is observed that models with greater early follicular growth rates may have a greater risk of cycling abnormally.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Domanskyi, Sergii; Schilling, Joshua E.; Privman, Vladimir, E-mail: privman@clarkson.edu
We develop a theoretical approach that uses physiochemical kinetics modelling to describe cell population dynamics upon progression of viral infection in cell culture, which results in cell apoptosis (programmed cell death) and necrosis (direct cell death). Several model parameters necessary for computer simulation were determined by reviewing and analyzing available published experimental data. By comparing experimental data to computer modelling results, we identify the parameters that are the most sensitive to the measured system properties and allow for the best data fitting. Our model allows extraction of parameters from experimental data and also has predictive power. Using the model we describe interesting time-dependent quantities that were not directly measured in the experiment and identify correlations among the fitted parameter values. Numerical simulation of viral infection progression is done by a rate-equation approach resulting in a system of "stiff" equations, which are solved by using a novel variant of the stochastic ensemble modelling approach. The latter was originally developed for coupled chemical reactions.
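A stiff rate-equation system of this kind is conveniently handled by an implicit solver. A sketch with hypothetical two-step infection kinetics, where SciPy's BDF method stands in for the stochastic ensemble variant described above:

```python
from scipy.integrate import solve_ivp

# Hypothetical two-step kinetics, healthy -> infected -> dead, with
# widely separated rate constants (the source of stiffness).
K1, K2 = 1.0, 1000.0

def rhs(t, y):
    h, i, d = y
    return [-K1 * h, K1 * h - K2 * i, K2 * i]

# Implicit BDF handles the 1000x separation in time scales without the
# tiny steps an explicit method would need for stability.
sol = solve_ivp(rhs, (0.0, 5.0), [1.0, 0.0, 0.0], method="BDF",
                rtol=1e-8, atol=1e-10)
h_end, i_end, d_end = sol.y[:, -1]
```

The healthy fraction follows the exact decay exp(-K1 t), and the three populations must always sum to one, which provides a convenient correctness check on the integration.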
Broadband spectral fitting of blazars using XSPEC
NASA Astrophysics Data System (ADS)
Sahayanathan, Sunder; Sinha, Atreyee; Misra, Ranjeev
2018-03-01
The broadband spectral energy distribution (SED) of blazars is generally interpreted as radiation arising from synchrotron and inverse Compton mechanisms. Traditionally, the underlying source parameters responsible for these emission processes, like particle energy density, magnetic field, etc., are obtained through simple visual reproduction of the observed fluxes. However, this procedure is incapable of providing confidence ranges for the estimated parameters. In this work, we propose an efficient algorithm to perform a statistical fit of the observed broadband spectrum of blazars using different emission models. Moreover, we use the observable quantities as the fit parameters, rather than the direct source parameters which govern the resultant SED. This significantly improves the convergence time and eliminates the uncertainty regarding initial guess parameters. This approach also has an added advantage of identifying the degenerate parameters, which can be removed by including more observable information and/or additional constraints. A computer code developed based on this algorithm is implemented as a user-defined routine in the standard X-ray spectral fitting package, XSPEC. Further, we demonstrate the efficacy of the algorithm by fitting the well sampled SED of blazar 3C 279 during its gamma ray flare in 2014.
Kumar, K Vasanth; Porkodi, K
2006-12-01
Equilibrium uptake of methylene blue onto lemon peel was fitted to two two-parameter isotherm models, namely Freundlich and Langmuir, and six three-parameter isotherm models, namely the Redlich-Peterson, Toth, Radke-Prausnitz, Fritz-Schluender, Vieth-Sladek and Sips isotherms, by a non-linear method. A comparison between the two-parameter and three-parameter isotherms is reported. The best-fitting isotherm was the Sips isotherm, followed by the Langmuir isotherm and the Redlich-Peterson isotherm equation. The Redlich-Peterson isotherm is a special case of the Langmuir isotherm when the Redlich-Peterson isotherm constant g is unity. The Radke-Prausnitz, Toth and Vieth-Sladek isotherms were likewise equivalent when the Toth isotherm constant n(T) and the Radke-Prausnitz isotherm constant m(RP) are equal to unity and the Vieth-Sladek isotherm constant K(VS) equals zero. The sorption capacity of lemon peel for methylene blue uptake was found to be 29 mg/g.
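A sketch of the non-linear fitting with two of these isotherms. One common algebraic form of the Sips isotherm is assumed, and the equilibrium data are synthetic, generated from a Langmuir isotherm with the reported 29 mg/g capacity:

```python
import numpy as np
from scipy.optimize import curve_fit

def langmuir(C, qm, KL):
    # monolayer capacity qm, affinity constant KL
    return qm * KL * C / (1.0 + KL * C)

def sips(C, qm, Ks, n):
    """One common algebraic form of the Sips isotherm; it reduces to the
    Langmuir isotherm when the heterogeneity exponent n = 1."""
    x = (Ks * C) ** (1.0 / n)
    return qm * x / (1.0 + x)

# Synthetic equilibrium concentrations (mg/L) and uptakes (mg/g).
Ce = np.array([1.0, 2.0, 5.0, 10.0, 20.0, 50.0, 100.0])
qe = langmuir(Ce, 29.0, 0.08)

(qm_l, KL_fit), _ = curve_fit(langmuir, Ce, qe, p0=[20.0, 0.05])
(qm_s, Ks_fit, n_fit), _ = curve_fit(sips, Ce, qe, p0=[20.0, 0.05, 1.2],
                                     bounds=([0.0, 1e-6, 0.2],
                                             [100.0, 10.0, 5.0]))
```

Fitting directly in the nonlinear form avoids the error-distortion that linearized isotherm plots introduce, which is the usual argument for the non-linear method used in the study.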
Model of Energy Spectrum Parameters of Ground Level Enhancement Events in Solar Cycle 23
NASA Astrophysics Data System (ADS)
Wu, S.-S.; Qin, G.
2018-01-01
Mewaldt et al. (2012) fitted the observations of the ground level enhancement (GLE) events during solar cycle 23 to a double power law equation to obtain four spectral parameters: the normalization constant C, the low-energy power law slope γ1, the high-energy power law slope γ2, and the break energy E0. There are 16 GLEs, from which we select 13 for study by excluding events with complicated conditions. We analyze the four parameters with the conditions of the corresponding solar events. According to solar event conditions, we divide the GLEs into two groups, one with strong acceleration by interplanetary shocks and one without strong acceleration. By fitting the four parameters against the solar event conditions we obtain models of the parameters for the two groups of GLEs separately. We thereby establish a model of the energy spectra of solar cycle 23 GLEs, which may be used for prediction in the future.
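A sketch of a double power law with the four parameters named above. A Band-type form is assumed here, with the two power laws matched for continuity of value and slope at the transition energy; this is one common convention for SEP fluence spectra, not necessarily the exact equation of the paper:

```python
import math

def band_spectrum(E, C, g1, g2, E0):
    """Double power law with exponential rollover: slope g1 with an
    exp(-E/E0) rolloff below the transition energy Eb = (g2 - g1) * E0,
    and a pure power law of slope g2 above it, with the amplitude chosen
    so both the value and the slope are continuous at Eb."""
    Eb = (g2 - g1) * E0
    if E <= Eb:
        return C * E ** (-g1) * math.exp(-E / E0)
    A = C * Eb ** (g2 - g1) * math.exp(-(g2 - g1))
    return A * E ** (-g2)
```

The matching condition removes one free amplitude, which is why the spectrum is fully specified by the four parameters C, γ1, γ2, and E0.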
Yang, Huan; Meijer, Hil G E; Buitenweg, Jan R; van Gils, Stephan A
2016-01-01
Healthy or pathological states of nociceptive subsystems determine different stimulus-response relations measured from quantitative sensory testing. In turn, stimulus-response measurements may be used to assess these states. In a recently developed computational model, six model parameters characterize activation of nerve endings and spinal neurons. However, both model nonlinearity and the limited information in yes-no detection responses to electrocutaneous stimuli make it challenging to estimate the model parameters. Here, we address the question of whether and how one can overcome these difficulties for reliable parameter estimation. First, we fit the computational model to experimental stimulus-response pairs by maximizing the likelihood. To evaluate the balance between model fit and complexity, i.e., the number of model parameters, we evaluate the Bayesian Information Criterion. We find that the computational model is better than a conventional logistic model regarding this balance. Second, our theoretical analysis suggests varying the pulse width among applied stimuli as a necessary condition to prevent structural non-identifiability. In addition, a numerically implemented profile likelihood approach reveals structural and practical non-identifiability. Our model-based approach with integration of psychophysical measurements can be useful for a reliable assessment of states of the nociceptive system.
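The likelihood-plus-BIC machinery can be sketched for a plain logistic psychometric function; toy yes/no data and a crude grid search stand in for the six-parameter computational model and a proper optimizer:

```python
import math

def neg_log_lik(params, data):
    """Bernoulli negative log-likelihood of yes/no detection data under a
    logistic psychometric function with threshold t and slope s."""
    t, s = params
    nll = 0.0
    for amplitude, detected in data:
        p = 1.0 / (1.0 + math.exp(-s * (amplitude - t)))
        p = min(max(p, 1e-12), 1.0 - 1e-12)   # guard the logs
        nll -= math.log(p) if detected else math.log(1.0 - p)
    return nll

def bic(nll, k, n):
    """Bayesian Information Criterion: fit quality penalized by the
    number of free parameters k for n observations."""
    return 2.0 * nll + k * math.log(n)

# Toy yes/no responses around a detection threshold near 0.85
# (arbitrary stimulus units).
data = [(0.2, 0), (0.4, 0), (0.6, 0), (0.8, 0), (0.9, 1),
        (1.1, 1), (1.2, 1), (1.4, 1), (1.6, 1), (1.8, 1)]
nll_best, t_best, s_best = min(
    (neg_log_lik((t, s), data), t, s)
    for t in [x / 20.0 for x in range(10, 31)]
    for s in range(1, 21))
```

Comparing `bic` values across candidate models with different numbers of parameters is exactly the model-complexity trade-off the study evaluates; profile likelihoods would additionally fix one parameter at a grid of values and re-maximize over the rest.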
Resonant structure, formation and stability of the planetary system HD155358
NASA Astrophysics Data System (ADS)
Silburt, Ari; Rein, Hanno
2017-08-01
Two Jovian-sized planets are orbiting the star HD155358 near exact mean motion resonance (MMR) commensurability. In this work, we re-analyse the radial velocity (RV) data previously collected by Robertson et al. Using a Bayesian framework, we construct two models - one that includes and the other that excludes gravitational planet-planet interactions (PPIs). We find that the orbital parameters from our PPI and no planet-planet interaction (noPPI) models differ by up to 2σ, with our noPPI model being statistically consistent with previous results. In addition, our new PPI model strongly favours the planets being in MMR, while our noPPI model strongly disfavours MMR. We conduct a stability analysis by drawing samples from our PPI model's posterior distribution and simulating them for 10^9 yr, finding that our best-fitting values land firmly in a stable region of parameter space. We explore a series of formation models that migrate the planets into their observed MMR. We then use these models to directly fit to the observed RV data, where each model is uniquely parametrized by only three constants describing its migration history. Using a Bayesian framework, we find that a number of migration models fit the RV data surprisingly well, with some migration parameters being ruled out. Our analysis shows that PPIs are important to take into account when modelling observations of multiplanetary systems. The additional information that one can gain from interacting models can help constrain planet migration parameters.
Galactic cosmic-ray model in the light of AMS-02 nuclei data
NASA Astrophysics Data System (ADS)
Niu, Jia-Shu; Li, Tianjun
2018-01-01
Cosmic ray (CR) physics has entered a precision-driven era. With the latest AMS-02 nuclei data (boron-to-carbon ratio, proton flux, helium flux, and antiproton-to-proton ratio), we perform a global fitting and constrain the primary source and propagation parameters of cosmic rays in the Milky Way by considering three schemes with different data sets (with and without p̄/p data) and different propagation models (diffusion-reacceleration and diffusion-reacceleration-convection models). We find that the data set with p̄/p data can remove the degeneracy between the propagation parameters effectively, and it favours the model with a very small value of convection (or disfavours the model with convection). Separate injection spectrum parameters are used for protons and the other nucleus species, which reveal the different breaks and slopes among them. Moreover, the helium abundance, antiproton production cross sections, and solar modulation are parametrized in our global fitting. Benefiting from the self-consistency of the new data set, the fitting results show some bias, and thus the disadvantages and limitations of the existing propagation models appear. Comparing the best-fit results for the local interstellar spectra (ϕ = 0) with the VOYAGER-1 data, we find that the primary sources or propagation mechanisms should be different between protons and helium (or other heavier nucleus species). Thus, how to explain these results properly is an interesting and challenging question.
Qin, Shanlin; Liu, Fawang; Turner, Ian W; Yu, Qiang; Yang, Qianqian; Vegh, Viktor
2017-04-01
To study the utility of fractional calculus in modeling gradient-recalled echo MRI signal decay in the normal human brain. We solved analytically the extended time-fractional Bloch equations resulting in five model parameters, namely, the amplitude, relaxation rate, order of the time-fractional derivative, frequency shift, and constant offset. Voxel-level temporal fitting of the MRI signal was performed using the classical monoexponential model, a previously developed anomalous relaxation model, and using our extended time-fractional relaxation model. Nine brain regions segmented from multiple echo gradient-recalled echo 7 Tesla MRI data acquired from five participants were then used to investigate the characteristics of the extended time-fractional model parameters. We found that the extended time-fractional model is able to fit the experimental data with smaller mean squared error than the classical monoexponential relaxation model and the anomalous relaxation model, which do not account for frequency shift. We were able to fit multiple echo time MRI data with high accuracy using the developed model. Parameters of the model likely capture information on microstructural and susceptibility-induced changes in the human brain. Magn Reson Med 77:1485-1494, 2017. © 2016 International Society for Magnetic Resonance in Medicine.
Analysis of the statistical thermodynamic model for nonlinear binary protein adsorption equilibria.
Zhou, Xiao-Peng; Su, Xue-Li; Sun, Yan
2007-01-01
The statistical thermodynamic (ST) model was used to study nonlinear binary protein adsorption equilibria on an anion exchanger. Single-component and binary protein adsorption isotherms of bovine hemoglobin (Hb) and bovine serum albumin (BSA) on DEAE Spherodex M were determined by batch adsorption experiments in 10 mM Tris-HCl buffer containing a specific NaCl concentration (0.05, 0.10, and 0.15 M) at pH 7.40. The ST model was found to depict the effect of ionic strength on the single-component equilibria well, with model parameters depending on ionic strength. Moreover, the ST model gave an acceptable fit to the binary adsorption data with the fitted single-component model parameters, leading to the estimation of the binary ST model parameter. The effects of ionic strength on the model parameters are reasonably interpreted by electrostatic and thermodynamic theories. The effective charge of protein in the adsorption phase can be separately calculated from the two categories of model parameters, and the values obtained from the two methods are consistent. The results demonstrate the utility of the ST model for describing nonlinear binary protein adsorption equilibria.
Inverse modeling and animation of growing single-stemmed trees at interactive rates
S. Rudnick; L. Linsen; E.G. McPherson
2007-01-01
For city planning purposes, animations of growing trees of several species can be used to deduce which species may best fit a particular environment. The models used for the animation must conform to real measured data. We present an approach for inverse modeling to fit global growth parameters. The model comprises local production rules, which are iteratively and...
An approximation method for improving dynamic network model fitting.
Carnegie, Nicole Bohme; Krivitsky, Pavel N; Hunter, David R; Goodreau, Steven M
There has been a great deal of interest recently in the modeling and simulation of dynamic networks, i.e., networks that change over time. One promising model is the separable temporal exponential-family random graph model (ERGM) of Krivitsky and Handcock, which treats the formation and dissolution of ties in parallel at each time step as independent ERGMs. However, the computational cost of fitting these models can be substantial, particularly for large, sparse networks. Fitting cross-sectional models for observations of a network at a single point in time, while still a non-negligible computational burden, is much easier. This paper examines model fitting when the available data consist of independent measures of cross-sectional network structure and the duration of relationships under the assumption of stationarity. We introduce a simple approximation to the dynamic parameters for sparse networks with relationships of moderate or long duration and show that the approximation method works best in precisely those cases where parameter estimation is most likely to fail: networks with very little change at each time step. We consider a variety of cases: Bernoulli formation and dissolution of ties, independent-tie formation and Bernoulli dissolution, independent-tie formation and dissolution, and dependent-tie formation models.
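For the simplest Bernoulli case, the steady-state relation between cross-sectional prevalence, mean duration, and the per-step dynamic rates can be written down directly. This is a back-of-envelope sketch of the idea, not the ERGM-scale approximation of the paper:

```python
def bernoulli_dynamic_params(prevalence, duration):
    """Steady-state Bernoulli sketch: given a target cross-sectional tie
    prevalence and a mean relationship duration (in time steps, with
    geometric durations assumed), recover per-step formation and
    dissolution probabilities.  At stationarity the expected number of
    ties formed, (1 - p) * q_form, must equal the number dissolved,
    p * q_diss."""
    q_diss = 1.0 / duration
    q_form = prevalence * q_diss / (1.0 - prevalence)
    return q_form, q_diss

# A sparse network with long-lived ties: 1% prevalence, 50-step duration.
q_form, q_diss = bernoulli_dynamic_params(0.01, 50.0)
```

The sparse/long-duration regime the paper targets is visible here: the formation probability is two orders of magnitude below the dissolution probability, so each cross-section looks nearly static.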
Fundamental Parameters Line Profile Fitting in Laboratory Diffractometers
Cheary, R. W.; Coelho, A. A.; Cline, J. P.
2004-01-01
The fundamental parameters approach to line profile fitting uses physically based models to generate the line profile shapes. Fundamental parameters profile fitting (FPPF) has been used to synthesize and fit data from both parallel beam and divergent beam diffractometers. The refined parameters are determined by the diffractometer configuration. In a divergent beam diffractometer these include the angular aperture of the divergence slit, the width and axial length of the receiving slit, the angular apertures of the axial Soller slits, the length and projected width of the x-ray source, and the absorption coefficient and axial length of the sample. In a parallel beam system the principal parameters are the angular aperture of the equatorial analyser/Soller slits and the angular apertures of the axial Soller slits. The presence of a monochromator in the beam path is normally accommodated by modifying the wavelength spectrum and/or by changing one or more of the axial divergence parameters. Flat analyzer crystals have been incorporated into FPPF as a Lorentzian shaped angular acceptance function. One of the intrinsic benefits of the fundamental parameters approach is its adaptability to any laboratory diffractometer. Good fits can normally be obtained over the whole 2θ range without refinement using the known properties of the diffractometer, such as the slit sizes, diffractometer radius, and emission profile. PMID:27366594
Uncertainty quantification for optical model parameters
Lovell, A. E.; Nunes, F. M.; Sarich, J.; ...
2017-02-21
Although uncertainty quantification has been making its way into nuclear theory, these methods have yet to be explored in the context of reaction theory. For example, it is well known that different parameterizations of the optical potential can result in different cross sections, but these differences have not been systematically studied and quantified. The purpose of our work is to investigate the uncertainties in nuclear reactions that result from fitting a given model to elastic-scattering data, as well as to study how these uncertainties propagate to the inelastic and transfer channels. We use statistical methods to determine a best fit and create corresponding 95% confidence bands. A simple model of the process is fit to elastic-scattering data and used to predict either inelastic or transfer cross sections. In this initial work, we assume that our model is correct, and the only uncertainties come from the variation of the fit parameters. Here, we study a number of reactions involving neutron and deuteron projectiles with energies in the range of 5–25 MeV/u, on targets with mass A=12–208. We investigate the correlations between the parameters in the fit. The case of deuterons on 12C is discussed in detail: the elastic-scattering fit and the prediction of 12C(d,p)13C transfer angular distributions, using both uncorrelated and correlated χ² minimization functions. The general features for all cases are compiled in a systematic manner to identify trends. This work shows that, in many cases, the correlated χ² functions (in comparison to the uncorrelated χ² functions) provide a more natural parameterization of the process. These correlated functions do, however, produce broader confidence bands. Further optimization may require improvement in the models themselves and/or more information included in the fit.
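A classical analogue of the fit-plus-confidence-band workflow can be sketched with a weighted linear least-squares fit whose parameter covariance yields an approximate 95% band. This is a generic sketch, not the paper's optical-model code; the toy model y = a + b·x stands in for the physics.

```python
import math

def weighted_linfit(x, y, sigma):
    """Weighted least squares for y = a + b*x; returns (a, b, cov),
    where cov is the 2x2 parameter covariance matrix."""
    w = [1.0 / (s * s) for s in sigma]
    S = sum(w)
    Sx = sum(wi * xi for wi, xi in zip(w, x))
    Sy = sum(wi * yi for wi, yi in zip(w, y))
    Sxx = sum(wi * xi * xi for wi, xi in zip(w, x))
    Sxy = sum(wi * xi * yi for wi, xi, yi in zip(w, x, y))
    det = S * Sxx - Sx * Sx
    a = (Sxx * Sy - Sx * Sxy) / det
    b = (S * Sxy - Sx * Sy) / det
    cov = [[Sxx / det, -Sx / det], [-Sx / det, S / det]]
    return a, b, cov

def band_95(xq, a, b, cov):
    """Approximate 95% confidence band for the fitted line at xq,
    propagating the parameter covariance (including the correlation term)."""
    var = cov[0][0] + 2.0 * xq * cov[0][1] + xq * xq * cov[1][1]
    mid = a + b * xq
    half = 1.96 * math.sqrt(var)
    return mid - half, mid + half

# Example: four points with unit uncertainties lying on y = 1 + 2x
a, b, cov = weighted_linfit([0.0, 1.0, 2.0, 3.0], [1.0, 3.0, 5.0, 7.0], [1.0] * 4)
```

Note that the off-diagonal covariance term is exactly what distinguishes a correlated from an uncorrelated treatment of the fitted parameters, mirroring the correlated vs. uncorrelated χ² discussion above.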
The curvature of sensitometric curves for Kodak XV-2 film irradiated with photon and electron beams.
van Battum, L J; Huizenga, H
2006-07-01
Sensitometric curves of Kodak XV-2 film, obtained over a period of ten years with various types of equipment, have been analyzed for both photon and electron beams. The sensitometric slope in the dataset varies by more than a factor of 2, which is attributed mainly to variations in developer conditions. In the literature, the single-hit equation has been proposed as a model for the sensitometric curve, with the sensitivity and the maximum optical density as parameters. In this work, the single-hit equation has been translated into a polynomial-like function with the sensitometric slope and curvature as parameters. The model has been applied to fit the sensitometric data. If each sensitometric curve in the dataset is fitted separately, a large variation is observed for both fit parameters. When sensitometric curves are fitted simultaneously, it appears that all curves can be fitted adequately with a sensitometric curvature that is related to the sensitometric slope. When each curve is fitted separately, measurement uncertainty apparently hides this relation. The relation appears to depend only on the type of densitometer used. No significant differences between beam energies or beam modalities are observed. Using the intrinsic relation between slope and curvature in fitting sensitometric data, e.g., for pretreatment verification of intensity-modulated radiotherapy, will increase the accuracy of the sensitometric curve. A calibration at a single dose point, together with a predetermined densitometer-dependent parameter ODmax, will be adequate to find the actual relation between optical density and dose.
Improved battery parameter estimation method considering operating scenarios for HEV/EV applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, Jufeng; Xia, Bing; Shang, Yunlong; ...
2016-12-22
This study presents an improved battery parameter estimation method based on typical operating scenarios in hybrid electric vehicles and pure electric vehicles. Compared with the conventional estimation methods, the proposed method takes both the constant-current charging and the dynamic driving scenarios into account, and two separate sets of model parameters are estimated through different parts of the pulse-rest test. The model parameters for the constant-charging scenario are estimated from the data in the pulse-charging periods, while the model parameters for the dynamic driving scenario are estimated from the data in the rest periods, and the length of the fitted dataset is determined by the spectrum analysis of the load current. In addition, the unsaturated phenomenon caused by the long-term resistor-capacitor (RC) network is analyzed, and the initial voltage expressions of the RC networks in the fitting functions are improved to ensure a higher model fidelity. Simulation and experiment results validated the feasibility of the developed estimation method.
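The rest-period fitting idea can be sketched for a single RC branch: during rest the terminal voltage relaxes as V(t) = V∞ − A·exp(−t/τ), which becomes linear after a log transform. This is a simplified one-branch illustration with made-up numbers, not the paper's two-branch method with spectrum-based window selection.

```python
import math

def fit_rc_relaxation(t, v, v_inf):
    """Fit V(t) = v_inf - A*exp(-t/tau) to rest-period data.

    Log-linearising gives log(v_inf - V) = log(A) - t/tau, so an ordinary
    least-squares line through (t, log(v_inf - V)) yields tau and A.
    """
    ys = [math.log(v_inf - vi) for vi in v]
    n = len(t)
    tbar = sum(t) / n
    ybar = sum(ys) / n
    slope = sum((ti - tbar) * (yi - ybar) for ti, yi in zip(t, ys)) / \
            sum((ti - tbar) ** 2 for ti in t)
    tau = -1.0 / slope
    amp = math.exp(ybar + tbar / tau)
    return amp, tau

# Synthetic rest-period relaxation: A = 0.05 V, tau = 30 s, v_inf = 3.7 V
ts = [float(i) for i in range(0, 120, 5)]
vs = [3.7 - 0.05 * math.exp(-ti / 30.0) for ti in ts]
A_hat, tau_hat = fit_rc_relaxation(ts, vs, 3.7)
```

In practice V∞ is itself unknown during a short rest (the "unsaturated" issue the abstract raises), which is why the paper improves the initial-voltage expressions rather than assuming full relaxation.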
Distributed Evaluation of Local Sensitivity Analysis (DELSA), with application to hydrologic models
Rakovec, O.; Hill, Mary C.; Clark, M.P.; Weerts, A. H.; Teuling, A. J.; Uijlenhoet, R.
2014-01-01
This paper presents a hybrid local-global sensitivity analysis method termed the Distributed Evaluation of Local Sensitivity Analysis (DELSA), which is used here to identify important and unimportant parameters and evaluate how model parameter importance changes as parameter values change. DELSA uses derivative-based “local” methods to obtain the distribution of parameter sensitivity across the parameter space, which promotes consideration of sensitivity analysis results in the context of simulated dynamics. This work presents DELSA, discusses how it relates to existing methods, and uses two hydrologic test cases to compare its performance with the popular global, variance-based Sobol' method. The first test case is a simple nonlinear reservoir model with two parameters. The second test case involves five alternative “bucket-style” hydrologic models with up to 14 parameters applied to a medium-sized catchment (200 km2) in the Belgian Ardennes. Results show that in both examples, Sobol' and DELSA identify similar important and unimportant parameters, with DELSA enabling more detailed insight at much lower computational cost. For example, in the real-world problem the time delay in runoff is the most important parameter in all models, but DELSA shows that for about 20% of parameter sets it is not important at all and alternative mechanisms and parameters dominate. Moreover, the time delay was identified as important in regions producing poor model fits, whereas other parameters were identified as more important in regions of the parameter space producing better model fits. The ability to understand how parameter importance varies through parameter space is critical to inform decisions about, for example, additional data collection and model development. The ability to perform such analyses with modest computational requirements provides exciting opportunities to evaluate complicated models as well as many alternative models.
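A minimal version of a DELSA-style first-order sensitivity index at one parameter set can be sketched with finite differences; the toy two-parameter model and prior variances below are hypothetical stand-ins for a hydrologic model. Repeating this at many sampled parameter sets gives the distribution of sensitivities across parameter space that the abstract describes.

```python
def delsa_first_order(model, theta, prior_var, h=1e-6):
    """First-order DELSA-style sensitivity indices at one parameter set:
    S_j = (dy/dtheta_j)^2 * Var(theta_j) / sum_k (dy/dtheta_k)^2 * Var(theta_k),
    with derivatives taken by forward finite differences."""
    y0 = model(theta)
    contrib = []
    for j, (tj, vj) in enumerate(zip(theta, prior_var)):
        perturbed = list(theta)
        perturbed[j] = tj + h
        deriv = (model(perturbed) - y0) / h
        contrib.append(deriv * deriv * vj)
    total = sum(contrib)
    return [c / total for c in contrib]

# Toy two-parameter model (hypothetical stand-in): y = a*x / (b + x) at x = 2
def toy_model(theta):
    a, b = theta
    return a * 2.0 / (b + 2.0)

S = delsa_first_order(toy_model, [1.0, 0.5], [0.1, 0.1])
```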
VizieR Online Data Catalog: Parameters and IR excesses of Gaia DR1 stars (McDonald+, 2017)
NASA Astrophysics Data System (ADS)
McDonald, I.; Zijlstra, A. A.; Watson, R. A.
2017-08-01
Spectral energy distribution fits are presented for stars from the Tycho-Gaia Astrometric Solution (TGAS) from Gaia Data Release 1. Hipparcos-Gaia stars are presented in a separate table. Effective temperatures, bolometric luminosities, and infrared excesses are presented (alongside other parameters pertinent to the model fits), plus the source photometry used. (3 data files).
Li, Michael; Dushoff, Jonathan; Bolker, Benjamin M
2018-07-01
Simple mechanistic epidemic models are widely used for forecasting and parameter estimation of infectious diseases based on noisy case reporting data. Despite the widespread application of models to emerging infectious diseases, we know little about the comparative performance of standard computational-statistical frameworks in these contexts. Here we build a simple stochastic, discrete-time, discrete-state epidemic model with both process and observation error and use it to characterize the effectiveness of different flavours of Bayesian Markov chain Monte Carlo (MCMC) techniques. We use fits to simulated data, where parameters (and future behaviour) are known, to explore the limitations of different platforms and quantify parameter estimation accuracy, forecasting accuracy, and computational efficiency across combinations of modeling decisions (e.g. discrete vs. continuous latent states, levels of stochasticity) and computational platforms (JAGS, NIMBLE, Stan).
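A discrete-time, discrete-state epidemic with process and observation error, of the general flavour described above, can be sketched as a chain-binomial (Reed-Frost-style) simulator. The parameter values are illustrative, and the binomial draws use plain Bernoulli loops to stay dependency-free.

```python
import random

def reed_frost(s0, i0, p_transmit, steps, report_prob, rng):
    """Discrete-time, discrete-state chain-binomial epidemic with
    binomial observation error on the case counts."""
    def binom(n, p):
        # Binomial draw as a sum of Bernoulli trials (fine for small n)
        return sum(1 for _ in range(n) if rng.random() < p)
    s, i = s0, i0
    observed = []
    for _ in range(steps):
        # Each susceptible escapes infection from every infective independently
        p_inf = 1.0 - (1.0 - p_transmit) ** i
        new_i = binom(s, p_inf)       # process error
        s -= new_i
        i = new_i
        observed.append(binom(new_i, report_prob))  # under-reported cases
    return observed

rng = random.Random(42)
cases = reed_frost(s0=200, i0=3, p_transmit=0.01, steps=20,
                   report_prob=0.6, rng=rng)
```

Fitting such a model in JAGS, NIMBLE or Stan then amounts to treating the unobserved new_i series as latent states, which is precisely where the platform comparisons in the abstract come in.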
NASA Astrophysics Data System (ADS)
Imvitthaya, Chomchid; Honda, Kiyoshi; Lertlum, Surat; Tangtham, Nipon
2011-01-01
In this paper, we present the results of net primary production (NPP) modeling of teak (Tectona grandis Lin F.), an important species in tropical deciduous forests. The biome-biogeochemical cycles (Biome-BGC) model was calibrated to estimate NPP through an inverse modeling approach. A genetic algorithm (GA) was linked with Biome-BGC to determine the optimal ecophysiological model parameters. Biome-BGC was calibrated by adjusting the ecophysiological model parameters to fit the simulated LAI to the satellite LAI (SPOT-Vegetation), and the best fitness confirmed the high accuracy of the ecophysiological parameters generated by the GA. The modeled NPP, using the optimized parameters from the GA as input data, was evaluated against daily NPP derived from the MODIS satellite and annual field data in northern Thailand. The results showed that NPP obtained using the optimized ecophysiological parameters was more accurate than that obtained using the default literature parameterization. This improvement occurred mainly because the optimized parameters reduced the model's bias by reducing systematic underestimation. These Biome-BGC results can be effectively applied to teak forests in tropical areas. The study proposes a more effective method of using a GA to determine ecophysiological parameters at the site level and represents a first step toward the analysis of the carbon budget of teak plantations at the regional scale.
A BRDF statistical model applying to space target materials modeling
NASA Astrophysics Data System (ADS)
Liu, Chenghao; Li, Zhi; Xu, Can; Tian, Qichen
2017-10-01
To address the poor performance of the five-parameter semi-empirical model in fitting high-density measured BRDF data, a refined statistical BRDF model suitable for modeling multiple classes of space-target materials is proposed. The refined model improves on the Torrance-Sparrow model while retaining the modeling advantages of the five-parameter model. Compared with the existing empirical model, it contains six simple parameters, which can approximate the roughness distribution of the material surface, the intensity of the Fresnel reflectance phenomenon, and the attenuation of the reflected light's brightness as the azimuth angle changes. The model achieves fast parameter inversion with no extra loss of accuracy. A genetic algorithm was used to invert the parameters of 11 different samples of materials commonly used on space targets, and the fitting errors for all materials were below 6%, much lower than those of the five-parameter model. The refined model is further verified by comparing the fitting results for three samples at different incident zenith angles at 0° azimuth angle. Finally, three-dimensional modeling visualizations of these samples over the upper hemisphere are given, in which the strength of the optical scattering of the different materials can be clearly seen. This demonstrates the refined model's ability to characterize the materials.
Quantum algorithm for linear regression
NASA Astrophysics Data System (ADS)
Wang, Guoming
2017-07-01
We present a quantum algorithm for fitting a linear regression model to a given data set using the least-squares approach. Unlike previous algorithms, which yield a quantum state encoding the optimal parameters, our algorithm outputs these numbers in classical form. So by running it once, one completely determines the fitted model and can then use it to make predictions on new data at little cost. Moreover, our algorithm works in the standard oracle model and can handle data sets with nonsparse design matrices. It runs in time poly(log₂(N), d, κ, 1/ε), where N is the size of the data set, d is the number of adjustable parameters, κ is the condition number of the design matrix, and ε is the desired precision in the output. We also show that the polynomial dependence on d and κ is necessary; thus, our algorithm cannot be significantly improved. Furthermore, we give a quantum algorithm that estimates the quality of the least-squares fit (without computing its parameters explicitly). This algorithm runs faster than the one for finding the fit and can be used to check whether the given data set qualifies for linear regression in the first place.
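For contrast with the quantum routine that judges fit quality without producing parameters, the classical quantity it targets can be sketched as the coefficient of determination of the least-squares line, which is computable without explicitly forming the fitted coefficients:

```python
def r_squared(x, y):
    """Coefficient of determination of the least-squares line through (x, y).

    Uses R^2 = Sxy^2 / (Sxx * Syy) on centered sums, so the slope and
    intercept themselves are never formed.
    """
    n = len(x)
    xbar = sum(x) / n
    ybar = sum(y) / n
    sxy = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
    sxx = sum((xi - xbar) ** 2 for xi in x)
    syy = sum((yi - ybar) ** 2 for yi in y)
    return (sxy * sxy) / (sxx * syy)
```

An R² near 1 indicates the data set "qualifies" for linear regression in the sense the abstract uses; this classical check costs O(N), which is the baseline any quantum speedup is measured against.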
Seven-parameter statistical model for BRDF in the UV band.
Bai, Lu; Wu, Zhensen; Zou, Xiren; Cao, Yunhua
2012-05-21
A new semi-empirical seven-parameter BRDF model is developed in the UV band using experimentally measured data. The model is based on the five-parameter model of Wu and the fourteen-parameter model of Renhorn and Boreman. Surface scatter, bulk scatter and retro-reflection scatter are considered. An optimizing modeling method, the artificial immune network genetic algorithm, is used to fit the BRDF measurement data over a wide range of incident angles. The calculation time and accuracy of the five- and seven-parameter models are compared. Once the seven parameters are fixed, the model describes scattering data in the UV band well.
Basha, Shaik; Jaiswar, Santlal; Jha, Bhavanath
2010-09-01
The biosorption equilibrium isotherms of Ni(II) onto the marine brown alga Lobophora variegata, chemically modified by CaCl₂, were studied and modeled. To predict the biosorption isotherms and to determine the characteristic parameters for process design, twenty-three one-, two-, three-, four- and five-parameter isotherm models were applied to the experimental data. The interaction among biosorbed molecules is attractive, and biosorption takes place on energetically different sites and is an endothermic process. The five-parameter Fritz-Schluender model gives the most accurate fit, with high regression coefficient R² (0.9911-0.9975) and F-ratio (118.03-179.96), and low standard error SE (0.0902-0.1556) and sum of squared errors SSE (0.0012-0.1789) for all experimental data in comparison to the other models. The biosorption isotherm models fitted the experimental data in the order: Fritz-Schluender (five-parameter) > Freundlich (two-parameter) > Langmuir (two-parameter) > Khan (three-parameter) > Fritz-Schluender (four-parameter). The thermodynamic parameters ΔG⁰, ΔH⁰ and ΔS⁰ were determined, indicating that the sorption of Ni(II) onto L. variegata was spontaneous and endothermic in nature.
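Isotherm model comparison of the kind reported here can be sketched by fitting two of the simpler models (Langmuir and Freundlich) to equilibrium data and ranking them by sum of squared errors; the crude grid search and synthetic data below are illustrative only, not the study's regression procedure.

```python
def langmuir(c, qm, b):
    """Langmuir isotherm: q = qm * b * C / (1 + b * C)."""
    return qm * b * c / (1.0 + b * c)

def freundlich(c, kf, n):
    """Freundlich isotherm: q = Kf * C^(1/n)."""
    return kf * c ** (1.0 / n)

def sse(model, params, C, q):
    """Sum of squared errors of a model against equilibrium data."""
    return sum((model(c, *params) - qi) ** 2 for c, qi in zip(C, q))

def grid_fit(model, grid, C, q):
    """Crude two-parameter grid search; returns (best_params, best_sse)."""
    best = None
    for p0 in grid:
        for p1 in grid:
            e = sse(model, (p0, p1), C, q)
            if best is None or e < best[1]:
                best = ((p0, p1), e)
    return best

# Synthetic Langmuir data (illustrative): qm = 2.0 mmol/g, b = 1.5 L/mmol
C = [0.1, 0.5, 1.0, 2.0, 4.0]
q = [langmuir(c, 2.0, 1.5) for c in C]
grid = [x / 10.0 for x in range(1, 41)]        # 0.1 .. 4.0 for both parameters
lang_fit = grid_fit(langmuir, grid, C, q)
freu_fit = grid_fit(freundlich, grid, C, q)
```

Since the data were generated from a Langmuir curve, the Langmuir SSE is (essentially) zero and ranks first, which is the same SSE/R²-style ranking logic the abstract applies across its twenty-three models.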
An analytic formula for the supercluster mass function
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lim, Seunghwan; Lee, Jounghun, E-mail: slim@astro.umass.edu, E-mail: jounghun@astro.snu.ac.kr
2014-03-01
We present an analytic formula for the supercluster mass function, which is constructed by modifying the extended Zel'dovich model for the halo mass function. The formula has two characteristic parameters whose best-fit values are determined by fitting to the numerical results from N-body simulations for the standard ΛCDM cosmology. The parameters are found to be independent of redshifts and robust against variation of the key cosmological parameters. Under the assumption that the same formula for the supercluster mass function is valid for non-standard cosmological models, we show that the relative abundance of the rich superclusters should be a powerful indicator of any deviation of the real universe from the prediction of the standard ΛCDM model.
A Simulated Annealing based Optimization Algorithm for Automatic Variogram Model Fitting
NASA Astrophysics Data System (ADS)
Soltani-Mohammadi, Saeed; Safa, Mohammad
2016-09-01
Fitting a theoretical model to an experimental variogram is an important issue in geostatistical studies because if the variogram model parameters are tainted with uncertainty, that uncertainty will spread into the results of estimations and simulations. Although the most popular fitting method is fitting by eye, in some cases automatic fitting, which combines geostatistical principles with optimization techniques, is used to: 1) provide a basic model to improve fitting by eye, 2) fit a model to a large number of experimental variograms in a short time, and 3) incorporate the variogram-related uncertainty into the model fitting. Effort has been made in this paper to improve the quality of the fitted model by improving the popular objective function (weighted least squares) in the automatic fitting. Also, since the variogram model function and the number of structures (m) also affect the model quality, a program has been provided in the MATLAB software that can present optimum nested variogram models using the simulated annealing method. Finally, to select the most desirable model from among the single/multi-structured fitted models, the cross-validation method is used, and the best model is presented to the user as the output. To check the capability of the proposed objective function and the procedure, 3 case studies are presented.
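The weighted-least-squares objective and the simulated-annealing search can be sketched as follows, using a spherical variogram and Cressie-style weights. This is a minimal Python stand-in for the paper's MATLAB program, with illustrative data, cooling schedule, and perturbation scheme.

```python
import math, random

def spherical(h, nugget, sill, rng_a):
    """Spherical variogram model with nugget, partial sill, and range."""
    if h >= rng_a:
        return nugget + sill
    r = h / rng_a
    return nugget + sill * (1.5 * r - 0.5 * r ** 3)

def wls(params, lags, gammas, npairs):
    """Weighted least squares with Cressie-style weights n / gamma_model^2."""
    tot = 0.0
    for h, g, n in zip(lags, gammas, npairs):
        m = spherical(h, *params)
        tot += n * (g - m) ** 2 / (m * m)
    return tot

def anneal(lags, gammas, npairs, start, steps=2000, seed=1):
    """Minimal simulated-annealing search over (nugget, sill, range)."""
    rng = random.Random(seed)
    cur, cur_e = list(start), wls(start, lags, gammas, npairs)
    best, best_e = list(cur), cur_e
    for k in range(steps):
        temp = 1.0 * (1.0 - k / steps) + 1e-3      # linear cooling
        cand = [max(1e-6, p * (1.0 + 0.1 * (rng.random() - 0.5))) for p in cur]
        e = wls(cand, lags, gammas, npairs)
        # Always accept improvements; accept worse moves with Boltzmann probability
        if e < cur_e or rng.random() < math.exp(-(e - cur_e) / temp):
            cur, cur_e = cand, e
            if e < best_e:
                best, best_e = list(cand), e
    return best, best_e

# Experimental variogram synthesized from a spherical model (0.2, 1.0, 8.0)
lags = list(range(1, 13))
gammas = [spherical(h, 0.2, 1.0, 8.0) for h in lags]
npairs = [50.0] * 12
best_params, best_obj = anneal(lags, gammas, npairs, start=(0.5, 1.0, 10.0))
```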
Prager, Jens; Najm, Habib N.; Sargsyan, Khachik; ...
2013-02-23
We study correlations among uncertain Arrhenius rate parameters in a chemical model for hydrocarbon fuel-air combustion. We consider correlations induced by the use of rate rules for modeling reaction rate constants, as well as those resulting from fitting rate expressions to empirical measurements arriving at a joint probability density for all Arrhenius parameters. We focus on homogeneous ignition in a fuel-air mixture at constant-pressure. We also outline a general methodology for this analysis using polynomial chaos and Bayesian inference methods. Finally, we examine the uncertainties in both the Arrhenius parameters and in predicted ignition time, outlining the role of correlations, and considering both accuracy and computational efficiency.
Using multiple group modeling to test moderators in meta-analysis.
Schoemann, Alexander M
2016-12-01
Meta-analysis is a popular and flexible analysis that can be fit in many modeling frameworks. Two methods of fitting meta-analyses that are growing in popularity are structural equation modeling (SEM) and multilevel modeling (MLM). By using SEM or MLM to fit a meta-analysis, researchers have access to powerful techniques associated with SEM and MLM. This paper details how to use one such technique, multiple group analysis, to test categorical moderators in meta-analysis. In a multiple group meta-analysis, a model is fit to each level of the moderator simultaneously. By constraining parameters across groups, any model parameter can be tested for equality. Using multiple groups to test for moderators is especially relevant in random-effects meta-analysis, where both the mean and the between-studies variance of the effect size may be compared across groups. A simulation study and the analysis of a real data set are used to illustrate multiple group modeling with both SEM and MLM. Issues related to multiple group meta-analysis and future directions for research are discussed. Copyright © 2016 John Wiley & Sons, Ltd.
Syndromes of Self-Reported Psychopathology for Ages 18–59 in 29 Societies
Achenbach, Thomas M.; Rescorla, Leslie A.; Turner, Lori V.; Ahmeti-Pronaj, Adelina; Au, Alma; Maese, Carmen Avila; Bellina, Monica; Caldas, J. Carlos; Chen, Yi-Chuen; Csemy, Ladislav; da Rocha, Marina M.; Decoster, Jeroen; Dobrean, Anca; Ezpeleta, Lourdes; Fontaine, Johnny R. J.; Funabiki, Yasuko; Guðmundsson, Halldór S.; Harder, Valerie S.; de la Cabada, Marie Leiner; Leung, Patrick; Liu, Jianghong; Mahr, Safia; Malykh, Sergey; Maras, Jelena Srdanovic; Markovic, Jasminka; Ndetei, David M.; Oh, Kyung Ja; Petot, Jean-Michel; Riad, Geylan; Sakarya, Direnc; Samaniego, Virginia C.; Sebre, Sandra; Shahini, Mimoza; Silvares, Edwiges; Simulioniene, Roma; Sokoli, Elvisa; Talcott, Joel B.; Vazquez, Natalia; Zasepa, Ewa
2017-01-01
This study tested the multi-society generalizability of an eight-syndrome assessment model derived from factor analyses of American adults’ self-ratings of 120 behavioral, emotional, and social problems. The Adult Self-Report (ASR; Achenbach and Rescorla 2003) was completed by 17,152 18–59-year-olds in 29 societies. Confirmatory factor analyses tested the fit of self-ratings in each sample to the eight-syndrome model. The primary model fit index (Root Mean Square Error of Approximation) showed good model fit for all samples, while secondary indices showed acceptable to good fit. Only 5 (0.06%) of the 8,598 estimated parameters were outside the admissible parameter space. Confidence intervals indicated that sampling fluctuations could account for the deviant parameters. Results thus supported the tested model in societies differing widely in social, political, and economic systems, languages, ethnicities, religions, and geographical regions. Although other items, societies, and analytic methods might yield different results, the findings indicate that adults in very diverse societies were willing and able to rate themselves on the same standardized set of 120 problem items. Moreover, their self-ratings fit an eight-syndrome model previously derived from self-ratings by American adults. The support for the statistically derived syndrome model is consistent with previous findings for parent, teacher, and self-ratings of 1½–18-year-olds in many societies. The ASR and its parallel collateral-report instrument, the Adult Behavior Checklist (ABCL), may offer mental health professionals practical tools for the multi-informant assessment of clinical constructs of adult psychopathology that appear to be meaningful across diverse societies. PMID:29805197
Garcia-Hermoso, A; Agostinis-Sobrinho, C; Mota, J; Santos, R M; Correa-Bautista, J E; Ramírez-Vélez, R
2017-06-01
Studies in the paediatric population have shown inconsistent associations between cardiorespiratory fitness and inflammation independently of adiposity. The purpose of this study was (i) to analyse the combined association of cardiorespiratory fitness and adiposity with high-sensitivity C-reactive protein (hs-CRP), and (ii) to determine whether adiposity acts as a mediator on the association between cardiorespiratory fitness and hs-CRP in children and adolescents. This cross-sectional study included 935 (54.7% girls) healthy children and adolescents from Bogotá, Colombia. The 20 m shuttle run test was used to estimate cardiorespiratory fitness. We assessed the following adiposity parameters: body mass index, waist circumference, and fat mass index and the sum of subscapular and triceps skinfold thickness. High sensitivity assays were used to obtain hs-CRP. Linear regression models were fitted for mediation analyses examined whether the association between cardiorespiratory fitness and hs-CRP was mediated by each of adiposity parameters according to Baron and Kenny procedures. Lower levels of hs-CRP were associated with the best schoolchildren profiles (high cardiorespiratory fitness + low adiposity) (p for trend <0.001 in the four adiposity parameters), compared with unfit and overweight (low cardiorespiratory fitness + high adiposity) counterparts. Linear regression models suggest a full mediation of adiposity on the association between cardiorespiratory fitness and hs-CRP levels. Our findings seem to emphasize the importance of obesity prevention in childhood, suggesting that having high levels of cardiorespiratory fitness may not counteract the negative consequences ascribed to adiposity on hs-CRP. Copyright © 2017 The Italian Society of Diabetology, the Italian Society for the Study of Atherosclerosis, the Italian Society of Human Nutrition, and the Department of Clinical Medicine and Surgery, Federico II University. Published by Elsevier B.V. 
All rights reserved.
Stepaniak, Pieter S; Soliman Hamad, Mohamed A; Dekker, Lukas R C; Koolen, Jacques J
2014-01-01
In this study, we sought to analyze the stochastic behavior of Catheterization Laboratory (Cath Lab) procedures in our institution. Statistical models may help to improve estimated case durations to support management in the cost-effective use of expensive surgical resources. We retrospectively analyzed all procedures performed in the Cath Labs in 2012. The duration of procedures is strictly positive (larger than zero) and mostly has a large minimum duration. Because of the strictly positive character of Cath Lab procedures, a fit of a lognormal model may be desirable. Having a minimum duration requires an estimate of the threshold (shift) parameter of the lognormal model; therefore, the 3-parameter lognormal model is of interest. To avoid heterogeneous groups of observations, we tested every group-cardiologist-procedure combination for the normal, 2- and 3-parameter lognormal distributions. The total number of elective and emergency procedures performed was 6,393 (8,186 h). The final analysis included 6,135 procedures (7,779 h). Electrophysiology (intervention) procedures fit the 3-parameter lognormal model in 86.1% (80.1%) of cases. Using Friedman test statistics, we conclude that the 3-parameter lognormal model is superior to the 2-parameter lognormal model, and the 2-parameter lognormal model is superior to the normal model. Cath Lab procedures are well modelled by lognormal models. This information helps to improve and refine Cath Lab schedules and hence their efficient use.
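Fitting a 3-parameter (shifted) lognormal by profiling the threshold can be sketched as below. The duration data are synthetic, and the threshold grid deliberately stops well short of the sample minimum because the 3-parameter lognormal likelihood is known to be ill-behaved as the threshold approaches the smallest observation.

```python
import math, random

def lognorm3_loglik(data, shift):
    """Profile log-likelihood of a 3-parameter lognormal at a given
    threshold: mu and sigma are replaced by their MLEs on log(x - shift)."""
    logs = [math.log(x - shift) for x in data]
    n = len(logs)
    mu = sum(logs) / n
    var = sum((v - mu) ** 2 for v in logs) / n
    ll = -0.5 * n * (math.log(2 * math.pi * var) + 1) - sum(logs)
    return ll, mu, math.sqrt(var)

def fit_lognorm3(data, n_grid=50):
    """Grid search on the threshold in [0, 0.9 * min(data)];
    returns (shift, mu, sigma) maximizing the profile likelihood."""
    lo, hi = 0.0, min(data) * 0.9
    best = None
    for i in range(n_grid):
        s = lo + (hi - lo) * i / (n_grid - 1)
        ll, mu, sigma = lognorm3_loglik(data, s)
        if best is None or ll > best[0]:
            best = (ll, s, mu, sigma)
    return best[1], best[2], best[3]

# Synthetic procedure durations: a 15-minute minimum plus a lognormal tail
rng = random.Random(7)
data = [15.0 + rng.lognormvariate(3.0, 0.5) for _ in range(1500)]
shift, mu, sigma = fit_lognorm3(data)
```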
NASA Astrophysics Data System (ADS)
Małoszewski, P.; Zuber, A.
1982-06-01
Three new lumped-parameter models have been developed for the interpretation of environmental radioisotope data in groundwater systems. Two of these models combine other, simpler models: the piston flow model is combined either with the exponential model (exponential distribution of transit times) or with the linear model (linear distribution of transit times). The third model is based on a new solution to the dispersion equation which represents real systems more adequately than the conventional solution generally applied so far. The applicability of the models was tested by reinterpreting several known case studies (Modry Dul, Cheju Island, Rasche Spring and Grafendorf). It has been shown that two of these models, the exponential-piston flow model and the dispersive model, give better fits than other, simpler models. Thus, the obtained values of turnover times are more reliable, while the additional fitting parameter gives some information about the structure of the system. In the examples considered, in spite of a lower number of fitting parameters, the new models gave practically the same fits as the multiparameter finite-state mixing-cell models. It has been shown that in the case of a constant tracer input, prior physical knowledge of the groundwater system is indispensable for determining the turnover time. The piston flow model commonly used for age determinations by the 14C method is an approximation applicable only in cases of low dispersion. In some cases the stable-isotope method aids in the interpretation of systems containing mixed waters of different ages. However, when the 14C method is used for mixed-water systems, a serious mistake may arise from neglecting the different bicarbonate contents of the particular water components.
Boore, David M.
2012-01-01
Stress parameters (Δσ) are determined for nine relatively well-recorded earthquakes in eastern North America for ten attenuation models. This is an update of a previous study by Boore et al. (2010). New to this paper are observations from the 2010 Val des Bois earthquake, additional observations for the 1988 Saguenay and 2005 Riviere du Loup earthquakes, and consideration of six attenuation models in addition to the four used in the previous study. As in that study, it is clear that Δσ depends strongly on the rate of geometrical spreading (as well as other model parameters). The observations necessary to determine conclusively which attenuation model best fits the data are still lacking. At this time, a simple 1/R model seems to give as good an overall fit to the data as more complex models.
Bayesian analysis of CCDM models
NASA Astrophysics Data System (ADS)
Jesus, J. F.; Valentim, R.; Andrade-Oliveira, F.
2017-09-01
Creation of Cold Dark Matter (CCDM), in the context of the Einstein field equations, produces a negative pressure term which can be used to explain the accelerated expansion of the Universe. In this work we tested six different spatially flat models for matter creation using statistical criteria, in light of SNe Ia data: the Akaike Information Criterion (AIC), the Bayesian Information Criterion (BIC) and the Bayesian Evidence (BE). These criteria allow models to be compared on both goodness of fit and number of free parameters, penalizing excess complexity. We find that the JO model is slightly favoured over the LJO/ΛCDM model; however, neither of these, nor the Γ = 3αH0 model, can be discarded from the current analysis. Three other scenarios are discarded either because of poor fit or because of an excess of free parameters. A method of increasing Bayesian evidence through reparameterization, in order to reduce parameter degeneracy, is also developed.
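The information criteria used above follow directly from a model's best-fit chi-square and its number of free parameters. A minimal sketch (the chi-square values and parameter counts below are invented for illustration, not taken from the paper):

```python
import math

def aic_bic(chi2, k, n):
    """Information criteria from a best-fit chi-square.
    chi2: minimum chi-square (equal to -2 ln L up to an additive constant)
    k: number of free parameters; n: number of data points."""
    return chi2 + 2 * k, chi2 + k * math.log(n)

# Hypothetical numbers: model B fits slightly better but carries two
# extra free parameters, so both criteria still prefer model A.
aic_a, bic_a = aic_bic(chi2=562.2, k=1, n=580)
aic_b, bic_b = aic_bic(chi2=560.1, k=3, n=580)
print(aic_a < aic_b, bic_a < bic_b)  # both criteria favour model A
```

Note that BIC penalizes extra parameters more heavily than AIC once n exceeds about eight data points, which is why it is the stricter criterion in analyses like this one.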
The Gibbs free energy of homogeneous nucleation: From atomistic nuclei to the planar limit.
Cheng, Bingqing; Tribello, Gareth A; Ceriotti, Michele
2017-09-14
In this paper we discuss how the information contained in atomistic simulations of homogeneous nucleation should be used when fitting the parameters in macroscopic nucleation models. We show how the number of solid and liquid atoms in such simulations can be determined unambiguously by using a Gibbs dividing surface and how the free energy as a function of the number of solid atoms in the nucleus can thus be extracted. We then show that the parameters (the chemical potential, the interfacial free energy, and a Tolman correction) of a model based on classical nucleation theory can be fitted using the information contained in these free-energy profiles but that the parameters in such models are highly correlated. This correlation is unfortunate as it ensures that small errors in the computed free energy surface can give rise to large errors in the extrapolated properties of the fitted model. To resolve this problem we thus propose a method for fitting macroscopic nucleation models that uses simulations of planar interfaces and simulations of three-dimensional nuclei in tandem. We show that when the chemical potentials and the interface energy are pinned to their planar-interface values, more precise estimates for the Tolman length are obtained. Extrapolating the free energy profile obtained from small simulation boxes to larger nuclei is thus more reliable.
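A classical-nucleation-theory free-energy profile of the kind described above is linear in the chemical potential and interfacial terms, so fitting it to simulation data reduces to linear least squares. A minimal sketch on synthetic data (the parameter values are illustrative, and the Tolman correction discussed in the paper is omitted for brevity):

```python
import numpy as np

# Synthetic free-energy profile G(n) generated from assumed CNT
# parameters; dmu (chemical potential difference) and gamma
# (effective interfacial term) are invented values.
n = np.arange(20.0, 200.0, 10.0)       # number of solid atoms in the nucleus
dmu_true, gamma_true = 0.08, 0.9
G = -dmu_true * n + gamma_true * n ** (2.0 / 3.0)

# CNT is linear in (dmu, gamma): G(n) = -dmu*n + gamma*n^(2/3),
# so fitting reduces to ordinary linear least squares.
A = np.column_stack([-n, n ** (2.0 / 3.0)])
(dmu_fit, gamma_fit), *_ = np.linalg.lstsq(A, G, rcond=None)
print(round(float(dmu_fit), 3), round(float(gamma_fit), 3))
```

Because the two columns of the design matrix are strongly correlated over a narrow range of nucleus sizes, small noise in G produces large joint uncertainty in the fitted pair, which is the correlation problem the paper resolves by pinning parameters to planar-interface values.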
A Method of Q-Matrix Validation for the Linear Logistic Test Model
Baghaei, Purya; Hohensinn, Christine
2017-01-01
The linear logistic test model (LLTM) is a well-recognized psychometric model for examining the components of difficulty in cognitive tests and validating construct theories. The plausibility of the construct model, summarized in a matrix of weights known as the Q-matrix or weight matrix, is tested by (1) comparing the fit of the LLTM with the fit of the Rasch model (RM) using the likelihood ratio (LR) test and (2) examining the correlation between the Rasch model item parameters and the LLTM-reconstructed item parameters. The problem with the LR test is that it is almost always significant and, consequently, the LLTM is rejected. The drawback of examining the correlation coefficient is that there is no cut-off value or lower bound for its magnitude. In this article we suggest a simulation method to set a minimum benchmark for the correlation between item parameters from the Rasch model and those reconstructed by the LLTM. If the cognitive model is valid, then the correlation coefficient between the RM-based item parameters and the LLTM-reconstructed item parameters derived from the theoretical weight matrix should be greater than those derived from the simulated matrices. PMID:28611721
Effects of vegetation canopy on the radar backscattering coefficient
NASA Technical Reports Server (NTRS)
Mo, T.; Blanchard, B. J.; Schmugge, T. J.
1983-01-01
Airborne L- and C-band scatterometer data, taken over both vegetation-covered and bare fields, were systematically analyzed and theoretically reproduced, using a recently developed model for calculating radar backscattering coefficients of rough soil surfaces. The results show that the model can reproduce the observed angular variations of radar backscattering coefficient quite well via a least-squares fit method. Best fits to the data provide estimates of the statistical properties of the surface roughness, which is characterized by two parameters: the standard deviation of surface height, and the surface correlation length. In addition, the processes of vegetation attenuation and volume scattering require two canopy parameters, the canopy optical thickness and a volume scattering factor. Canopy parameter values for individual vegetation types, including alfalfa, milo and corn, were also determined from the best-fit results. The uncertainties in the scatterometer data were also explored.
NLINEAR - NONLINEAR CURVE FITTING PROGRAM
NASA Technical Reports Server (NTRS)
Everhart, J. L.
1994-01-01
A common method for fitting data is a least-squares fit. In the least-squares method, a user-specified fitting function is utilized in such a way as to minimize the sum of the squares of distances between the data points and the fitting curve. The Nonlinear Curve Fitting Program, NLINEAR, is an interactive curve fitting routine based on a description of the quadratic expansion of the chi-squared statistic. NLINEAR utilizes a nonlinear optimization algorithm that calculates the best statistically weighted values of the parameters of the fitting function and the chi-square that is to be minimized. The inputs to the program are the mathematical form of the fitting function and the initial values of the parameters to be estimated. This approach provides the user with statistical information such as goodness of fit and estimated values of parameters that produce the highest degree of correlation between the experimental data and the mathematical model. In the mathematical formulation of the algorithm, the Taylor expansion of chi-square is first introduced, and justification for retaining only the first term is presented. From the expansion, a set of n simultaneous linear equations is derived, which is solved by matrix algebra. To achieve convergence, the algorithm requires meaningful initial estimates for the parameters of the fitting function. NLINEAR is written in Fortran 77 for execution on a CDC Cyber 750 under NOS 2.3. It has a central memory requirement of 5K 60 bit words. Optionally, graphical output of the fitting function can be plotted. Tektronix PLOT-10 routines are required for graphics. NLINEAR was developed in 1987.
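The weighted chi-square minimization that NLINEAR describes is what modern nonlinear least-squares routines implement. A minimal sketch using SciPy (the exponential-decay fitting function, data, and noise level are invented for illustration; like NLINEAR, the solver needs meaningful initial estimates):

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical fitting function: exponential decay with amplitude a
# and time constant tau.
def model(x, a, tau):
    return a * np.exp(-x / tau)

rng = np.random.default_rng(0)
x = np.linspace(0.0, 5.0, 50)
y = model(x, 2.0, 1.5) + rng.normal(0.0, 0.01, x.size)

# sigma supplies the statistical weights; pcov gives the parameter
# covariance from the local quadratic expansion of chi-square.
popt, pcov = curve_fit(model, x, y, p0=[1.0, 1.0],
                       sigma=np.full(x.size, 0.01), absolute_sigma=True)
perr = np.sqrt(np.diag(pcov))  # 1-sigma parameter uncertainties
print(popt, perr)
```

As with NLINEAR, a poor starting point p0 can stall convergence or land in a local minimum, so initial estimates should come from inspection of the data.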
Some Empirical Evidence for Latent Trait Model Selection.
ERIC Educational Resources Information Center
Hutten, Leah R.
The results of this study suggest that for purposes of estimating ability by latent trait methods, the Rasch model compares favorably with the three-parameter logistic model. Using estimated parameters to make predictions about 25 actual number-correct score distributions with samples of 1,000 cases each, those predicted by the Rasch model fit the…
A Comparison of the Fit of Empirical Data to Two Latent Trait Models. Report No. 92.
ERIC Educational Resources Information Center
Hutten, Leah R.
Goodness of fit of raw test score data were compared, using two latent trait models: the Rasch model and the Birnbaum three-parameter logistic model. Data were taken from various achievement tests and the Scholastic Aptitude Test (Verbal). A minimum sample size of 1,000 was required, and the minimum test length was 40 items. Results indicated that…
Babcock, Chad; Finley, Andrew O.; Bradford, John B.; Kolka, Randall K.; Birdsey, Richard A.; Ryan, Michael G.
2015-01-01
Many studies and production inventory systems have shown the utility of coupling covariates derived from Light Detection and Ranging (LiDAR) data with forest variables measured on georeferenced inventory plots through regression models. The objective of this study was to propose and assess the use of a Bayesian hierarchical modeling framework that accommodates both residual spatial dependence and non-stationarity of model covariates through the introduction of spatial random effects. We explored this objective using four forest inventory datasets that are part of the North American Carbon Program, each comprising point-referenced measures of above-ground forest biomass and discrete LiDAR. For each dataset, we considered at least five regression model specifications of varying complexity. Models were assessed based on goodness of fit criteria and predictive performance using a 10-fold cross-validation procedure. Results showed that the addition of spatial random effects to the regression model intercept improved fit and predictive performance in the presence of substantial residual spatial dependence. Additionally, in some cases, allowing either some or all regression slope parameters to vary spatially, via the addition of spatial random effects, further improved model fit and predictive performance. In other instances, models showed improved fit but decreased predictive performance—indicating over-fitting and underscoring the need for cross-validation to assess predictive ability. The proposed Bayesian modeling framework provided access to pixel-level posterior predictive distributions that were useful for uncertainty mapping, diagnosing spatial extrapolation issues, revealing missing model covariates, and discovering locally significant parameters.
Assessment and Selection of Competing Models for Zero-Inflated Microbiome Data
Xu, Lizhen; Paterson, Andrew D.; Turpin, Williams; Xu, Wei
2015-01-01
Typical data in a microbiome study consist of the operational taxonomic unit (OTU) counts that have the characteristic of excess zeros, which are often ignored by investigators. In this paper, we compare the performance of different competing methods to model data with zero inflated features through extensive simulations and application to a microbiome study. These methods include standard parametric and non-parametric models, hurdle models, and zero inflated models. We examine varying degrees of zero inflation, with or without dispersion in the count component, as well as different magnitudes and directions of the covariate effect on structural zeros and the count components. We focus on the assessment of type I error, power to detect the overall covariate effect, measures of model fit, and bias and effectiveness of parameter estimation. We also evaluate the abilities of model selection strategies using the Akaike information criterion (AIC) or the Vuong test to identify the correct model. The simulation studies show that hurdle and zero inflated models have well controlled type I errors, higher power, better goodness of fit measures, and are more accurate and efficient in the parameter estimation. In addition, the hurdle models have similar goodness of fit and parameter estimation for the count component as their corresponding zero inflated models. However, the estimation and interpretation of the parameters for the zero components differ, and hurdle models are more stable when structural zeros are absent. We then discuss the model selection strategy for zero inflated data and implement it in a gut microbiome study of > 400 independent subjects. PMID:26148172
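The zero-inflated Poisson model compared above mixes a point mass at zero with a count distribution, and its likelihood can be maximized directly. A minimal sketch on simulated OTU-like counts (the mixing probability and Poisson rate are invented; no covariates are included, unlike the paper's full models):

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln

rng = np.random.default_rng(1)
pi_true, lam_true, n = 0.4, 3.0, 2000
structural = rng.random(n) < pi_true           # structural zeros
y = np.where(structural, 0, rng.poisson(lam_true, n))

def zip_nll(theta):
    # Unconstrained parameterization: theta = (logit pi, log lam).
    pi = 1.0 / (1.0 + np.exp(-theta[0]))
    lam = np.exp(theta[1])
    # A zero can be structural or a Poisson zero; positives are Poisson.
    ll_zero = np.log(pi + (1.0 - pi) * np.exp(-lam))
    ll_pos = np.log(1.0 - pi) - lam + y * np.log(lam) - gammaln(y + 1)
    return -np.sum(np.where(y == 0, ll_zero, ll_pos))

res = minimize(zip_nll, x0=[0.0, 0.0], method="Nelder-Mead")
pi_hat = 1.0 / (1.0 + np.exp(-res.x[0]))
lam_hat = np.exp(res.x[1])
print(round(float(pi_hat), 2), round(float(lam_hat), 2))
```

A hurdle model would differ only in the zero term: all zeros would be attributed to the hurdle component and the count part would use a truncated Poisson, which is why the two models agree on the count component but interpret the zero component differently.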
NASA Astrophysics Data System (ADS)
Hajigeorgiou, Photos G.
2016-12-01
An analytical model for the diatomic potential energy function that was recently tested as a universal function (Hajigeorgiou, 2010) has been further modified and tested as a suitable model for direct-potential-fit analysis. Applications are presented for the ground electronic states of three diatomic molecules: oxygen, carbon monoxide, and hydrogen fluoride. The adjustable parameters of the extended Lennard-Jones potential model are determined through nonlinear regression by fits to calculated rovibrational energy term values or experimental spectroscopic line positions. The model is shown to lead to reliable, compact and simple representations for the potential energy functions of these systems and could therefore be classified as a suitable and attractive model for direct-potential-fit analysis.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Swaminathan-Gopalan, Krishnan; Stephani, Kelly A., E-mail: ksteph@illinois.edu
2016-02-15
A systematic approach for calibrating the direct simulation Monte Carlo (DSMC) collision model parameters to achieve consistency in the transport processes is presented. The DSMC collision cross section model parameters are calibrated for high temperature atmospheric conditions by matching the collision integrals from DSMC against ab initio based collision integrals that are currently employed in the Langley Aerothermodynamic Upwind Relaxation Algorithm (LAURA) and Data Parallel Line Relaxation (DPLR) high temperature computational fluid dynamics solvers. The DSMC parameter values are computed for the widely used Variable Hard Sphere (VHS) and the Variable Soft Sphere (VSS) models using the collision-specific pairing approach. The recommended best-fit VHS/VSS parameter values are provided over a temperature range of 1000-20 000 K for a thirteen-species ionized air mixture. Use of the VSS model is necessary to achieve consistency in transport processes of ionized gases. The agreement of the VSS model transport properties with the transport properties as determined by the ab initio collision integral fits was found to be within 6% in the entire temperature range, regardless of the composition of the mixture. The recommended model parameter values can be readily applied to any gas mixture involving binary collisional interactions between the chemical species presented for the specified temperature range.
Zhang, Yong; Green, Christopher T.; Baeumer, Boris
2014-01-01
Time-nonlocal transport models can describe non-Fickian diffusion observed in geological media, but the physical meaning of parameters can be ambiguous, and most applications are limited to curve-fitting. This study explores methods for predicting the parameters of a temporally tempered Lévy motion (TTLM) model for transient sub-diffusion in mobile–immobile like alluvial settings represented by high-resolution hydrofacies models. The TTLM model is a concise multi-rate mass transfer (MRMT) model that describes a linear mass transfer process where the transfer kinetics and late-time transport behavior are controlled by properties of the host medium, especially the immobile domain. The intrinsic connection between the MRMT and TTLM models helps to estimate the main time-nonlocal parameters in the TTLM model (which are the time scale index, the capacity coefficient, and the truncation parameter) either semi-analytically or empirically from the measurable aquifer properties. Further applications show that the TTLM model captures the observed solute snapshots, the breakthrough curves, and the spatial moments of plumes up to the fourth order. Most importantly, the a priori estimation of the time-nonlocal parameters outside of any breakthrough fitting procedure provides a reliable “blind” prediction of the late-time dynamics of subdiffusion observed in a spectrum of alluvial settings. Predictability of the time-nonlocal parameters may be due to the fact that the late-time subdiffusion is not affected by the exact location of each immobile zone, but rather is controlled by the time spent in immobile blocks surrounding the pathway of solute particles. Results also show that the effective dispersion coefficient has to be fitted due to the scale effect of transport, and the mean velocity can differ from local measurements or volume averages. 
The link between medium heterogeneity and time-nonlocal parameters will help to improve model predictability for non-Fickian transport in alluvial settings.
Bai, Yu; Katahira, Kentaro; Ohira, Hideki
2014-01-01
Humans are capable of correcting their actions based on actions performed in the past, and this ability enables them to adapt to a changing environment. The computational field of reinforcement learning (RL) has provided a powerful explanation for understanding such processes. Recently, the dual learning system, modeled as a hybrid model that incorporates value update based on reward-prediction error and learning rate modulation based on the surprise signal, has gained attention as a model for explaining various neural signals. However, the functional significance of the hybrid model has not been established. In the present study, we used computer simulation of a probabilistic reversal learning task to address the functional significance of the hybrid model. The hybrid model was found to perform better than the standard RL model across a wide range of parameter settings. These results suggest that the hybrid model is more robust against the mistuning of parameters compared with the standard RL model when decision-makers continue to learn stimulus-reward contingencies, which can change abruptly. The parameter fitting results also indicated that the hybrid model fit better than the standard RL model for more than 50% of the participants, which suggests that the hybrid model has more explanatory power for the behavioral data than the standard RL model. PMID:25161635
Statistical distributions of extreme dry spell in Peninsular Malaysia
NASA Astrophysics Data System (ADS)
Zin, Wan Zawiah Wan; Jemain, Abdul Aziz
2010-11-01
Statistical distributions of annual extreme (AE) series and partial duration (PD) series for dry-spell events are analyzed for a database of daily rainfall records of 50 rain-gauge stations in Peninsular Malaysia, with recording period extending from 1975 to 2004. The three-parameter generalized extreme value (GEV) and generalized Pareto (GP) distributions are considered to model both series. In both cases, the parameters of these two distributions are fitted by means of the L-moments method, which provides a robust estimation of these parameters. The goodness-of-fit (GOF) between empirical data and theoretical distributions is then evaluated by means of the L-moment ratio diagram and several goodness-of-fit tests for each of the 50 stations. It is found that for the majority of stations, the AE and PD series are well fitted by the GEV and GP models, respectively. Based on the models that have been identified, we can reasonably predict the risks associated with extreme dry spells for various return periods.
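Fitting a GEV to an annual-extreme series and reading off return levels can be sketched as follows on simulated data (the shape, location, and scale values are illustrative, not station values; note that SciPy fits by maximum likelihood rather than the L-moments method used in the paper, and its shape sign convention is c = -ξ):

```python
import numpy as np
from scipy.stats import genextreme

# Simulated annual-extreme dry-spell lengths in days, from an
# assumed GEV with scipy shape c=-0.1, location 30, scale 8.
sample = genextreme.rvs(c=-0.1, loc=30.0, scale=8.0, size=3000,
                        random_state=np.random.default_rng(2))

c_hat, loc_hat, scale_hat = genextreme.fit(sample)

# T-year return level: the (1 - 1/T) quantile of the fitted GEV.
T = 50
return_level = genextreme.ppf(1.0 - 1.0 / T, c_hat,
                              loc=loc_hat, scale=scale_hat)
print(round(float(return_level), 1))
```

For short station records the L-moments estimator used in the paper is typically preferred over maximum likelihood because it is less sensitive to outliers and small-sample bias.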
The critical role of uncertainty in projections of hydrological extremes
NASA Astrophysics Data System (ADS)
Meresa, Hadush K.; Romanowicz, Renata J.
2017-08-01
This paper aims to quantify the uncertainty in projections of future hydrological extremes in the Biala Tarnowska River at Koszyce gauging station, south Poland. The approach followed is based on several climate projections obtained from the EURO-CORDEX initiative, raw and bias-corrected realizations of catchment precipitation, and flow simulations derived using multiple hydrological model parameter sets. The projections cover the 21st century. Three sources of uncertainty are considered: the first related to climate projection ensemble spread, the second related to the uncertainty in hydrological model parameters and the third related to the error in fitting theoretical distribution models to annual extreme flow series. The uncertainty of projected extreme indices related to hydrological model parameters was conditioned on flow observations from the reference period using the generalized likelihood uncertainty estimation (GLUE) approach, with separate criteria for high- and low-flow extremes. Extreme (low and high) flow quantiles were estimated using the generalized extreme value (GEV) distribution at different return periods and were based on two different lengths of the flow time series. A sensitivity analysis based on the analysis of variance (ANOVA) shows that, for the low-flow extremes, the uncertainty introduced by the hydrological model parameters can be larger than the climate model variability and the distribution fit uncertainty, whilst for the high-flow extremes the climate models contribute more uncertainty than the hydrological parameters and the distribution fit. This implies that ignoring any one of the three uncertainty sources may pose a serious risk to future adaptations to hydrological extremes and to water resource planning and management.
Fitting direct covariance structures by the MSTRUCT modeling language of the CALIS procedure.
Yung, Yiu-Fai; Browne, Michael W; Zhang, Wei
2015-02-01
This paper demonstrates the usefulness and flexibility of the general structural equation modelling (SEM) approach to fitting direct covariance patterns or structures (as opposed to fitting implied covariance structures from functional relationships among variables). In particular, the MSTRUCT modelling language (or syntax) of the CALIS procedure (SAS/STAT version 9.22 or later: SAS Institute, 2010) is used to illustrate the SEM approach. The MSTRUCT modelling language supports a direct covariance pattern specification of each covariance element. It also supports the input of additional independent and dependent parameters. Model tests, fit statistics, estimates, and their standard errors are then produced under the general SEM framework. By using numerical and computational examples, the following tests of basic covariance patterns are illustrated: sphericity, compound symmetry, and multiple-group covariance patterns. Specification and testing of two complex correlation structures, the circumplex pattern and the composite direct product models with or without composite errors and scales, are also illustrated by the MSTRUCT syntax. It is concluded that the SEM approach offers a general and flexible modelling of direct covariance and correlation patterns. In conjunction with the use of SAS macros, the MSTRUCT syntax provides an easy-to-use interface for specifying and fitting complex covariance and correlation structures, even when the number of variables or parameters becomes large. © 2014 The British Psychological Society.
Henriksson, Mikael; Corino, Valentina D A; Sornmo, Leif; Sandberg, Frida
2016-09-01
The atrioventricular (AV) node plays a central role in atrial fibrillation (AF), as it influences the conduction of impulses from the atria into the ventricles. In this paper, the statistical dual pathway AV node model, previously introduced by us, is modified so that it accounts for atrial impulse pathway switching even if the preceding impulse did not cause a ventricular activation. The proposed change in model structure implies that the number of model parameters subjected to maximum likelihood estimation is reduced from five to four. The model is evaluated using the data acquired in the RATe control in Atrial Fibrillation (RATAF) study, involving 24-h ECG recordings from 60 patients with permanent AF. When fitting the models to the RATAF database, similar results were obtained for both the present and the previous model, with a median fit of 86%. The results show that the parameter estimates characterizing refractory period prolongation exhibit considerably lower variation when using the present model, a finding that may be ascribed to fewer model parameters. The new model maintains the capability to model RR intervals, while providing more reliable parameter estimates. The model parameters are expected to convey novel clinical information, and may be useful for predicting the effect of rate control drugs.
Development of a program to fit data to a new logistic model for microbial growth.
Fujikawa, Hiroshi; Kano, Yoshihiro
2009-06-01
Recently we developed a mathematical model for microbial growth in food. The model successfully predicted microbial growth under various patterns of temperature. In this study, we developed a program to fit data to the model with a spreadsheet program, Microsoft Excel. Users can instantly get curves fitted to the model by inputting growth data and choosing the slope portion of a curve. The program can also estimate growth parameters, including the rate constant of growth and the lag period. This program should be a useful tool for analyzing growth data and further predicting microbial growth.
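The curve-fitting workflow the program automates, i.e. fitting a sigmoidal growth curve to time-count data and extracting rate and lag parameters, can be sketched as follows. This uses a standard three-parameter logistic curve as an illustrative stand-in; the paper's "new logistic model" has additional terms and its exact form is not reproduced here:

```python
import numpy as np
from scipy.optimize import curve_fit

# Standard logistic curve in log-count units.
def logistic(t, nmax, mu, lam):
    # nmax: asymptotic level, mu: rate constant, lam: inflection time
    return nmax / (1.0 + np.exp(-mu * (t - lam)))

# Synthetic growth data (hours vs. log CFU/g); values are invented.
t = np.linspace(0.0, 24.0, 25)
rng = np.random.default_rng(3)
y = logistic(t, 9.0, 0.5, 8.0) + rng.normal(0.0, 0.05, t.size)

popt, _ = curve_fit(logistic, t, y, p0=[8.0, 0.3, 6.0])
print(np.round(popt, 1))  # estimated (nmax, mu, lam)
```

In a spreadsheet setting the same minimization can be done with Excel's Solver; the code above is simply the scripted equivalent.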
A flexible, interactive software tool for fitting the parameters of neuronal models.
Friedrich, Péter; Vella, Michael; Gulyás, Attila I; Freund, Tamás F; Káli, Szabolcs
2014-01-01
The construction of biologically relevant neuronal models as well as model-based analysis of experimental data often requires the simultaneous fitting of multiple model parameters, so that the behavior of the model in a certain paradigm matches (as closely as possible) the corresponding output of a real neuron according to some predefined criterion. Although the task of model optimization is often computationally hard, and the quality of the results depends heavily on technical issues such as the appropriate choice (and implementation) of cost functions and optimization algorithms, no existing program provides access to the best available methods while also guiding the user through the process effectively. Our software, called Optimizer, implements a modular and extensible framework for the optimization of neuronal models, and also features a graphical interface which makes it easy for even non-expert users to handle many commonly occurring scenarios. Meanwhile, educated users can extend the capabilities of the program and customize it according to their needs with relatively little effort. Optimizer has been developed in Python, takes advantage of open-source Python modules for nonlinear optimization, and interfaces directly with the NEURON simulator to run the models. Other simulators are supported through an external interface. We have tested the program on several different types of problems of varying complexity, using different model classes. As targets, we used simulated traces from the same or a more complex model class, as well as experimental data. We successfully used Optimizer to determine passive parameters and conductance densities in compartmental models, and to fit simple (adaptive exponential integrate-and-fire) neuronal models to complex biological data. Our detailed comparisons show that Optimizer can handle a wider range of problems, and delivers equally good or better performance than any other existing neuronal model fitting tool.
Trajectory fitting in function space with application to analytic modeling of surfaces
NASA Technical Reports Server (NTRS)
Barger, Raymond L.
1992-01-01
A theory for representing a parameter-dependent function as a function trajectory is described. Additionally, a theory for determining a piecewise analytic fit to the trajectory is described. An example is given that illustrates the application of the theory to generating a smooth surface through a discrete set of input cross-section shapes. A simple procedure for smoothing in the parameter direction is discussed, and a computed example is given. Application of the theory to aerodynamic surface modeling is demonstrated by applying it to a blended wing-fuselage surface.
NASA Astrophysics Data System (ADS)
Franzetti, Paolo; Scodeggio, Marco
2012-10-01
GOSSIP fits the electromagnetic emission of an object (the SED, Spectral Energy Distribution) against synthetic models to find the simulated one that best reproduces the observed data. It builds up the observed SED of an object (or a large sample of objects) by combining magnitudes in different bands and, where available, a spectrum; it then performs a chi-square minimization fitting procedure against a set of synthetic models. The fitting results are used to estimate a number of physical parameters, such as the star formation history, absolute magnitudes and stellar mass, together with their probability distribution functions.
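The chi-square template matching at the core of such SED fitting can be sketched in a few lines. The band magnitudes, uncertainties, and templates below are invented for illustration; a real run would use a large grid of synthetic spectra:

```python
import numpy as np

# Observed magnitudes in four hypothetical bands with 1-sigma errors.
obs = np.array([21.3, 20.8, 20.1, 19.7])
err = np.array([0.10, 0.08, 0.09, 0.12])

# Small grid of synthetic model magnitudes (one row per template).
templates = np.array([
    [21.5, 21.0, 20.5, 20.2],
    [21.2, 20.9, 20.0, 19.8],
    [22.0, 21.8, 21.5, 21.1],
])

# Chi-square of every template against the observed SED at once.
chi2 = np.sum(((obs - templates) / err) ** 2, axis=1)
best = int(np.argmin(chi2))
print(best, round(float(chi2[best]), 2))
```

The physical parameters of the best template (and, by weighting all templates by exp(-chi2/2), their probability distribution functions) are then read off from the model grid.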
Ramadan, Ahmed; Boss, Connor; Choi, Jongeun; Peter Reeves, N; Cholewicki, Jacek; Popovich, John M; Radcliffe, Clark J
2018-07-01
Estimating many parameters of biomechanical systems with limited data may achieve good fit but may also increase 95% confidence intervals in parameter estimates. This results in poor identifiability in the estimation problem. Therefore, we propose a novel method to select sensitive biomechanical model parameters that should be estimated, while fixing the remaining parameters to values obtained from preliminary estimation. Our method relies on identifying the parameters to which the measurement output is most sensitive. The proposed method is based on the Fisher information matrix (FIM). It was compared against the nonlinear least absolute shrinkage and selection operator (LASSO) method to guide modelers on the pros and cons of our FIM method. We present an application identifying a biomechanical parametric model of a head position-tracking task for ten human subjects. Using measured data, our method (1) reduced model complexity by only requiring five out of twelve parameters to be estimated, (2) significantly reduced parameter 95% confidence intervals by up to 89% of the original confidence interval, (3) maintained goodness of fit measured by variance accounted for (VAF) at 82%, (4) reduced computation time, where our FIM method was 164 times faster than the LASSO method, and (5) selected similar sensitive parameters to the LASSO method, where three out of five selected sensitive parameters were shared by FIM and LASSO methods.
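The FIM-based selection idea, ranking parameters by how strongly the measured output responds to them, can be sketched generically. The toy model and function names below are hypothetical stand-ins for the paper's biomechanical head-tracking model.

```python
import numpy as np

def fisher_information(model, theta, t, sigma=1.0, h=1e-6):
    """FIM for output y(t; theta) with iid Gaussian noise: F = J^T J / sigma^2.

    The output Jacobian J is approximated by forward differences.
    """
    y0 = model(theta, t)
    J = np.empty((y0.size, theta.size))
    for k in range(theta.size):
        tp = theta.copy()
        tp[k] += h
        J[:, k] = (model(tp, t) - y0) / h
    return J.T @ J / sigma**2

def rank_parameters(model, theta, t):
    """Indices of parameters, most output-sensitive first (diagonal FIM)."""
    return np.argsort(np.diag(fisher_information(model, theta, t)))[::-1]

# hypothetical toy model: strong dependence on th[0] and th[1], weak on th[2]
model = lambda th, t: th[0] * t + th[1] * t**2 + 1e-3 * th[2] * np.sin(t)
order = rank_parameters(model, np.array([1.0, 1.0, 1.0]), np.linspace(0, 1, 50))
```

The most sensitive parameters would then be estimated from data while the rest are fixed at their preliminary values, which is what shrinks the confidence intervals in the study.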
On Least Squares Fitting Nonlinear Submodels.
ERIC Educational Resources Information Center
Bechtel, Gordon G.
Three simplifying conditions are given for obtaining least squares (LS) estimates for a nonlinear submodel of a linear model. If these are satisfied, and if the subset of nonlinear parameters may be LS fit to the corresponding LS estimates of the linear model, then one attains the desired LS estimates for the entire submodel. Two illustrative…
Three-dimensional deformable-model-based localization and recognition of road vehicles.
Zhang, Zhaoxiang; Tan, Tieniu; Huang, Kaiqi; Wang, Yunhong
2012-01-01
We address the problem of model-based object recognition. Our aim is to localize and recognize road vehicles from monocular images or videos in calibrated traffic scenes. A 3-D deformable vehicle model with 12 shape parameters is set up as prior information, and its pose is determined by three parameters: its position on the ground plane and its orientation about the vertical axis, under ground-plane constraints. An efficient local gradient-based method is proposed to evaluate the fitness between the projection of the vehicle model and the image data, which is combined with a novel evolutionary computing framework to estimate the 12 shape parameters and three pose parameters by iterative evolution. The recovery of the pose parameters achieves vehicle localization, whereas the shape parameters are used for vehicle recognition. Numerous experiments are conducted to demonstrate the performance of our approach. It is shown that the local gradient-based method evaluates accurately and efficiently the fitness between the projection of the vehicle model and the image data. The evolutionary computing framework is effective for vehicles of different types and poses and is robust to various kinds of occlusion.
Estimating Dynamical Systems: Derivative Estimation Hints From Sir Ronald A. Fisher.
Deboeck, Pascal R
2010-08-06
The fitting of dynamical systems to psychological data offers the promise of addressing new and innovative questions about how people change over time. One method of fitting dynamical systems is to estimate the derivatives of a time series and then examine the relationships between derivatives using a differential equation model. One common approach for estimating derivatives, Local Linear Approximation (LLA), produces estimates with correlated errors. Depending on the specific differential equation model used, such correlated errors can lead to severely biased estimates of differential equation model parameters. This article shows that the fitting of dynamical systems can be improved by estimating derivatives in a manner similar to that used to fit orthogonal polynomials. Two applications using simulated data compare the proposed method and a generalized form of LLA when used to estimate derivatives and when used to estimate differential equation model parameters. A third application estimates the frequency of oscillation in observations of the monthly deaths from bronchitis, emphysema, and asthma in the United Kingdom. These data are publicly available in the statistical program R, and functions in R for the method presented are provided.
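Estimating derivatives by least-squares polynomial fits over sliding windows, in the spirit of fitting orthogonal polynomials rather than differencing, might look like the following numpy-only sketch. It is a generic illustration of the idea, not the article's exact estimator.

```python
import numpy as np

def window_derivatives(x, dt, width=5):
    """Level, first- and second-derivative estimates from least-squares
    quadratic fits in sliding windows of a uniformly sampled series.

    Each window is fit with x(tau) ~ x0 + x1*tau + 0.5*x2*tau^2, so the
    coefficients are direct estimates of the derivatives at the center.
    """
    half = width // 2
    tau = (np.arange(width) - half) * dt            # time within a window
    A = np.column_stack([np.ones(width), tau, 0.5 * tau**2])
    pinv = np.linalg.pinv(A)                        # one LS solve, reused
    est = [pinv @ x[i - half:i + half + 1] for i in range(half, len(x) - half)]
    return np.array(est)                            # columns: x, dx/dt, d2x/dt2

# exact quadratic signal: x = 3 + 2t + 2t^2, so dx/dt = 2 + 4t, d2x/dt2 = 4
dt = 0.1
tt = np.arange(0, 2, dt)
x = 3 + 2 * tt + 2 * tt**2
d = window_derivatives(x, dt)
```

The estimated derivative columns can then be entered into a differential equation model (e.g. a damped oscillator regression) in place of LLA estimates.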
Hepatic function imaging using dynamic Gd-EOB-DTPA enhanced MRI and pharmacokinetic modeling.
Ning, Jia; Yang, Zhiying; Xie, Sheng; Sun, Yongliang; Yuan, Chun; Chen, Huijun
2017-10-01
To determine whether pharmacokinetic modeling parameters with different output assumptions of dynamic contrast-enhanced MRI (DCE-MRI) using Gd-EOB-DTPA correlate with serum-based liver function tests, and compare the goodness of fit of the different output assumptions. A 6-min DCE-MRI protocol was performed in 38 patients. Four dual-input two-compartment models with different output assumptions and a published one-compartment model were used to calculate hepatic function parameters. The Akaike information criterion fitting error was used to evaluate the goodness of fit. Imaging-based hepatic function parameters were compared with blood chemistry using correlation with multiple comparison correction. The dual-input two-compartment model assuming venous flow equals arterial flow plus portal venous flow and no bile duct output better described the liver tissue enhancement with low fitting error and high correlation with blood chemistry. The relative uptake rate Kir derived from this model was found to be significantly correlated with direct bilirubin (r = -0.52, P = 0.015), prealbumin concentration (r = 0.58, P = 0.015), and prothrombin time (r = -0.51, P = 0.026). It is feasible to evaluate hepatic function by proper output assumptions. The relative uptake rate has the potential to serve as a biomarker of function. Magn Reson Med 78:1488-1495, 2017. © 2016 International Society for Magnetic Resonance in Medicine.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jesus, J.F.; Valentim, R.; Andrade-Oliveira, F., E-mail: jfjesus@itapeva.unesp.br, E-mail: valentim.rodolfo@unifesp.br, E-mail: felipe.oliveira@port.ac.uk
Creation of Cold Dark Matter (CCDM), in the context of the Einstein Field Equations, produces a negative pressure term which can be used to explain the accelerated expansion of the Universe. In this work we tested six different spatially flat models for matter creation using statistical criteria, in light of SNe Ia data: the Akaike Information Criterion (AIC), Bayesian Information Criterion (BIC) and Bayesian Evidence (BE). These criteria allow models to be compared on goodness of fit and number of free parameters, penalizing excess complexity. We find that the JO model is slightly favoured over the LJO/ΛCDM model; however, neither of these, nor the Γ = 3αH_0 model, can be discarded by the current analysis. Three other scenarios are discarded either because of poor fitting or because of an excess of free parameters. A method of increasing the Bayesian evidence through reparameterization, in order to reduce parameter degeneracy, is also developed.
Uncertainty Analysis in 3D Equilibrium Reconstruction
Cianciosa, Mark R.; Hanson, James D.; Maurer, David A.
2018-02-21
Reconstruction is an inverse process where a parameter space is searched to locate a set of parameters with the highest probability of describing experimental observations. Due to systematic errors and uncertainty in experimental measurements, this optimal set of parameters will contain some associated uncertainty. This uncertainty in the optimal parameters leads to uncertainty in models derived using those parameters. V3FIT is a three-dimensional (3D) equilibrium reconstruction code that propagates uncertainty from the input signals, to the reconstructed parameters, and to the final model. Here we describe the methods used to propagate uncertainty in V3FIT. Using the results of whole-shot 3D equilibrium reconstruction of the Compact Toroidal Hybrid, this propagated uncertainty is validated against the random variation in the resulting parameters. Two different model parameterizations demonstrate how the uncertainty propagation can indicate the quality of a reconstruction. As a proxy for random sampling, the whole-shot reconstruction results in a time interval are used to validate the propagated uncertainty from a single time slice.
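The propagation chain described in the abstract (signals → reconstructed parameters → derived model) can be illustrated with a standard linearized least-squares sketch; V3FIT's internals differ in detail, and the Jacobians and toy numbers below are illustrative.

```python
import numpy as np

def propagate_uncertainty(J_signal, sig_var, G_model):
    """Linearized uncertainty propagation for a reconstruction.

    J_signal = d(signal)/d(param), sig_var = signal variances,
    G_model  = d(model quantity)/d(param).
    C_param is the usual least-squares parameter covariance; C_model
    pushes it forward to any quantity derived from the parameters.
    """
    W = np.diag(1.0 / sig_var)                          # inverse signal covariance
    C_param = np.linalg.inv(J_signal.T @ W @ J_signal)  # parameter covariance
    C_model = G_model @ C_param @ G_model.T             # propagated to the model
    return C_param, C_model

# toy case: two parameters observed through three signals of variance 0.1
J = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
Cp, Cm = propagate_uncertainty(J, np.full(3, 0.1), np.eye(2))
```

Large diagonal entries of `C_param` flag poorly constrained parameterizations, which is how propagated uncertainty can indicate the quality of a reconstruction.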
TransFit: Finite element analysis data fitting software
NASA Technical Reports Server (NTRS)
Freeman, Mark
1993-01-01
The Advanced X-Ray Astrophysics Facility (AXAF) mission support team has made extensive use of geometric ray tracing to analyze the performance of AXAF developmental and flight optics. One important aspect of this performance modeling is the incorporation of finite element analysis (FEA) data into the surface deformations of the optical elements. TransFit is software designed for the fitting of FEA data of Wolter I optical surface distortions with a continuous surface description which can then be used by SAO's analytic ray tracing software, currently OSAC (Optical Surface Analysis Code). The improved capabilities of TransFit over previous methods include bicubic spline fitting of FEA data to accommodate higher spatial frequency distortions, fitted data visualization for assessing the quality of fit, the ability to accommodate input data from three FEA codes plus other standard formats, and options for alignment of the model coordinate system with the ray trace coordinate system. TransFit uses the AnswerGarden graphical user interface (GUI) to edit input parameters and then accesses routines written in PV-WAVE, C, and FORTRAN to allow the user to interactively create, evaluate, and modify the fit. The topics covered include an introduction to TransFit: requirements, design philosophy, and implementation; design specifics: modules, parameters, fitting algorithms, and data displays; a procedural example; verification of performance; future work; and appendices on online help and ray trace results of the verification section.
Vajuvalli, Nithin N; Nayak, Krupa N; Geethanath, Sairam
2014-01-01
Dynamic Contrast Enhanced Magnetic Resonance Imaging (DCE-MRI) is widely used in the diagnosis of cancer and is also a promising tool for monitoring tumor response to treatment. The Tofts model has become a standard for the analysis of DCE-MRI. The process of curve fitting employed with the Tofts equation to obtain the pharmacokinetic (PK) parameters is time-consuming for high-resolution scans. The current work demonstrates a frequency-domain approach applied to the standard Tofts equation to speed up the curve fitting used to obtain the pharmacokinetic parameters. The results show that, with the frequency-domain approach, curve fitting is computationally more efficient than with the time-domain approach.
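The source of the speed-up can be sketched with the standard Tofts forward model, Ct = Ktrans · (Cp ⊗ exp(-kep·t)): zero-padded multiplication in the frequency domain replaces the O(n²) time-domain convolution evaluated at every iteration of the fit. The surrounding fitting loop and the toy plasma input below are illustrative, not the paper's implementation.

```python
import numpy as np

def tofts_fft(cp, ktrans, kep, dt):
    """Standard Tofts tissue curve via FFT-based linear convolution.

    Zero-padding to 2n makes the circular convolution equal the linear
    one, so the first n samples are the model tissue curve.
    """
    n = len(cp)
    irf = np.exp(-kep * np.arange(n) * dt)          # impulse response
    conv = np.fft.irfft(np.fft.rfft(cp, 2 * n) * np.fft.rfft(irf, 2 * n), 2 * n)
    return ktrans * conv[:n] * dt

dt = 0.1
t = np.arange(100) * dt
cp = t * np.exp(-t)                                 # toy plasma input function
ct = tofts_fft(cp, ktrans=0.25, kep=0.5, dt=dt)
# same result as the direct time-domain convolution
direct = 0.25 * np.convolve(cp, np.exp(-0.5 * t))[:100] * dt
```

Inside a fitting routine this forward model is simply evaluated in the residual function; the FFT cost grows as n·log n rather than n².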
ERIC Educational Resources Information Center
Rakkapao, Suttida; Prasitpong, Singha; Arayathanitkul, Kwan
2016-01-01
This study investigated the multiple-choice test of understanding of vectors (TUV), by applying item response theory (IRT). The difficulty, discriminatory, and guessing parameters of the TUV items were fit with the three-parameter logistic model of IRT, using the parscale program. The TUV ability is an ability parameter, here estimated assuming…
Salari, Marjan; Salami Shahid, Esmaeel; Afzali, Seied Hosein; Ehteshami, Majid; Conti, Gea Oliveri; Derakhshan, Zahra; Sheibani, Solmaz Nikbakht
2018-04-22
Today, due to population growth, industrial development, and the variety of chemical compounds, the quality of drinking water has decreased. Five important river water quality properties, dissolved oxygen (DO), total dissolved solids (TDS), total hardness (TH), alkalinity (ALK) and turbidity (TU), were estimated from parameters that can be measured easily and at almost no cost: electric conductivity (EC), temperature (T), and pH. The water quality parameters were simulated with two modeling methods: mathematical models and Artificial Neural Networks (ANN). The mathematical method is based on polynomial fitting with least squares, and the ANN models are feed-forward networks. All conditions covered by neural network modeling were tested for all parameters in this study, except for alkalinity. All optimal ANN models developed to simulate water quality parameters had R-values close to 0.99; the ANN model developed to simulate alkalinity had an R-value of 0.82. Moreover, surface fitting techniques were used to refine the data sets. The presented models and equations are reliable, usable tools for studying water quality parameters at similar rivers, and a proper replacement for traditional water quality measuring equipment. Copyright © 2018 Elsevier Ltd. All rights reserved.
BioNetFit: a fitting tool compatible with BioNetGen, NFsim and distributed computing environments
Thomas, Brandon R.; Chylek, Lily A.; Colvin, Joshua; ...
2015-11-09
Rule-based models are analyzed with specialized simulators, such as those provided by the BioNetGen and NFsim open-source software packages. Here we present BioNetFit, a general-purpose fitting tool that is compatible with BioNetGen and NFsim. BioNetFit is designed to take advantage of distributed computing resources. This feature facilitates fitting (i.e., optimization of parameter values for consistency with data) when simulations are computationally expensive.
Bayesian investigation of isochrone consistency using the old open cluster NGC 188
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hills, Shane; Courteau, Stéphane; Von Hippel, Ted
2015-03-01
This paper provides a detailed comparison of the differences in parameters derived for a star cluster from its color–magnitude diagrams (CMDs) depending on the filters and models used. We examine the consistency and reliability of fitting three widely used stellar evolution models to 15 combinations of optical and near-IR photometry for the old open cluster NGC 188. The optical filter response curves match those of theoretical systems and are thus not the source of fit inconsistencies. NGC 188 is ideally suited to this study thanks to a wide variety of high-quality photometry and available proper motions and radial velocities that enable us to remove non-cluster members and many binaries. Our Bayesian fitting technique yields inferred values of age, metallicity, distance modulus, and absorption as a function of the photometric band combinations and stellar models. We show that the historically favored three-band combinations of UBV and VRI can be meaningfully inconsistent with each other and with longer baseline data sets such as UBVRIJHK_S. Differences among model sets can also be substantial. For instance, fitting Yi et al. (2001) and Dotter et al. (2008) models to UBVRIJHK_S photometry for NGC 188 yields the following cluster parameters: age = (5.78 ± 0.03, 6.45 ± 0.04) Gyr, [Fe/H] = (+0.125 ± 0.003, −0.077 ± 0.003) dex, (m−M)_V = (11.441 ± 0.007, 11.525 ± 0.005) mag, and A_V = (0.162 ± 0.003, 0.236 ± 0.003) mag, respectively. Within the formal fitting errors, these two fits are substantially and statistically different. Such differences among fits using different filters and models are a cautionary tale regarding our current ability to fit star cluster CMDs. Additional modeling of this kind, with more models and star clusters, and future Gaia parallaxes are critical for isolating and quantifying the most relevant uncertainties in stellar evolutionary models.
Shapes, rotation, and pole solutions of the selected Hilda and Trojan asteroids
NASA Astrophysics Data System (ADS)
Gritsevich, Maria; Sonnett, Sarah; Torppa, Johanna; Mainzer, Amy; Muinonen, Karri; Penttilä, Antti; Grav, Thomas; Masiero, Joseph; Bauer, James; Kramer, Emily
2017-04-01
Binary asteroid systems contain key information about the dynamical and chemical environments in which they formed. For example, determining the formation environments of Trojan and Hilda asteroids (in 1:1 and 3:2 mean-motion resonance with Jupiter, respectively) will provide critical constraints on how small bodies and the planets that drive their migration must have moved throughout Solar System history, see e.g. [1-3]. Therefore, identifying and characterizing binary asteroids within the Trojan and Hilda populations could offer a powerful means of discerning between Solar System evolution models. Dozens of possibly close or contact binary Trojans and Hildas were identified within the data obtained by NEOWISE [4]. Densely sampled light curves of these candidate binaries have been obtained in order to resolve rotational light curve features that are indicative of binarity (e.g., [5-7]). We present analysis of the shapes, rotation, and pole solutions of some of the follow-up targets observed with optical ground-based telescopes. To model the asteroid photometric properties, we use parameters describing the shape, surface light scattering properties, and spin state of the asteroid. Scattering properties of the asteroid surface are modeled using the two-parameter H-G12 magnitude system. Determination of the initial best-fit parameters is carried out by first using a triaxial ellipsoid shape model and scanning over the period values and spin axis orientations while fitting the other parameters, after which all parameters are fitted, taking the initial values for the spin properties from the spin scanning. In addition to the best-fit parameters, we also provide the distribution of possible solutions, which accounts for the inaccuracies caused by observational errors and the model.
The distribution of solutions is generated by Markov-Chain Monte Carlo sampling of the spin and shape model parameters, using both an ellipsoid shape model and a convex model, the Gaussian curvature of which is defined as a spherical harmonics series [8]. References: [1] Marzari F. and Scholl H. (1998), A&A, 339, 278. [2] Morbidelli A. et al. (2005), Nature, 435, 462. [3] Nesvorny D. et al. (2013), ApJ, 768, 45. [4] Sonnett S. et al. (2015), ApJ, 799, 191. [5] Behrend R. et al. (2006), A&A, 446, 1177. [6] Lacerda P. and Jewitt D. C. (2007), AJ, 133, 1393. [7] Oey J. (2016), MPB, 43, 45. [8] Muinonen et al., ACM 2017.
Jerome, Neil P; Orton, Matthew R; d'Arcy, James A; Collins, David J; Koh, Dow-Mu; Leach, Martin O
2014-01-01
To evaluate the effect on diffusion-weighted image-derived parameters in the apparent diffusion coefficient (ADC) and intra-voxel incoherent motion (IVIM) models from choice of either free-breathing or navigator-controlled acquisition. Imaging was performed with consent from healthy volunteers (n = 10) on a 1.5T Siemens Avanto scanner. Parameter-matched free-breathing and navigator-controlled diffusion-weighted images were acquired, without averaging in the console, for a total scan time of ∼10 minutes. Regions of interest were drawn for renal cortex, renal pyramid, whole kidney, liver, spleen, and paraspinal muscle. An ADC diffusion model for these regions was fitted for b-values ≥ 250 s/mm^2, using a Levenberg-Marquardt algorithm, and an IVIM model was fitted for all images using a Bayesian method. ADC and IVIM parameters from the two acquisition regimes show no significant differences for the cohort; individual cases show occasional discrepancies, with outliers in parameter estimates arising more commonly from navigator-controlled scans. The navigator-controlled acquisitions showed, on average, a smaller range of movement for the kidneys (6.0 ± 1.4 vs. 10.0 ± 1.7 mm, P = 0.03), but also a smaller number of averages collected (3.9 ± 0.1 vs. 5.5 ± 0.2, P < 0.01) in the allocated time. Navigator triggering offers no advantage in fitted diffusion parameters, whereas free-breathing appears to offer greater confidence in fitted diffusion parameters, with fewer outliers, for matched acquisition periods. Copyright © 2013 Wiley Periodicals, Inc.
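The IVIM signal model referenced here is the bi-exponential S(b)/S0 = f·exp(-b·D*) + (1-f)·exp(-b·D). The study fitted it with a Bayesian method; as a simpler numpy-only illustration of the same model, a common segmented recipe estimates the tissue diffusivity D from the mono-exponential tail (the same b ≥ 250 s/mm² cutoff used for the ADC fit) and the perfusion fraction f from its extrapolated intercept.

```python
import numpy as np

def ivim_segmented(b, s, b_thresh=250.0):
    """Segmented IVIM estimate from a diffusion signal decay curve.

    At high b the pseudo-diffusion term has decayed away, so a log-linear
    fit of the tail gives D, and the intercept gives f = 1 - exp(a).
    """
    hi = b >= b_thresh
    slope, intercept = np.polyfit(b[hi], np.log(s[hi] / s[0]), 1)
    return -slope, 1.0 - np.exp(intercept)          # D, f

# synthetic bi-exponential signal: f = 0.1, D* = 0.01, D = 0.001 (mm^2/s units)
b = np.array([0., 50., 100., 250., 400., 600., 800.])
s = 0.1 * np.exp(-b * 0.01) + 0.9 * np.exp(-b * 0.001)
D, f = ivim_segmented(b, s)
```

The pseudo-diffusion coefficient D* would require a second fitting stage at low b; it is the parameter most affected by the motion and outlier issues the abstract discusses.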
Andrew D. Richardson; David Y. Hollinger; David Y. Hollinger
2005-01-01
Whether the goal is to fill gaps in the flux record, or to extract physiological parameters from eddy covariance data, researchers are frequently interested in fitting simple models of ecosystem physiology to measured data. Presently, there is no consensus on the best models to use, or the ideal optimization criteria. We demonstrate that, given our estimates of the...
2014-01-01
The single parameter hyperbolic model has been frequently used to describe value discounting as a function of time and to differentiate substance abusers and non-clinical participants with the model's parameter k. However, k says little about the mechanisms underlying the observed differences. The present study evaluates several alternative models with the purpose of identifying whether group differences stem from differences in subjective valuation, and/or time perceptions. Using three two-parameter models, plus secondary data analyses of 14 studies with 471 indifference point curves, results demonstrated that adding a valuation, or a time perception function led to better model fits. However, the gain in fit due to the flexibility granted by a second parameter did not always lead to a better understanding of the data patterns and corresponding psychological processes. The k parameter consistently indexed group and context (magnitude) differences; it is thus a mixed measure of person and task level effects. This was similar for a parameter meant to index payoff devaluation. A time perception parameter, on the other hand, fluctuated with contexts in a non-predicted fashion and the interpretation of its values was inconsistent with prior findings that supported enlarged perceived delays for substance abusers compared to controls. Overall, the results provide mixed support for hyperbolic models of intertemporal choice in terms of the psychological meaning afforded by their parameters. PMID:25390941
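The single-parameter model under discussion is V = A / (1 + k·D), fit to indifference points so that k indexes discounting steepness. As a minimal sketch of how such a k is obtained, the fit below uses least squares over a log-spaced grid of k values; the reviewed studies use nonlinear optimizers, and the synthetic participant data are illustrative.

```python
import numpy as np

def fit_hyperbolic_k(delays, values, amount):
    """Least-squares fit of the hyperbolic discounting model V = A/(1 + k*D).

    k is searched on a log-spaced grid, which is robust for a single
    positive parameter spanning several orders of magnitude.
    """
    ks = np.logspace(-4, 1, 2000)
    preds = amount / (1.0 + ks[:, None] * delays[None, :])
    sse = np.sum((preds - values[None, :])**2, axis=1)
    return ks[np.argmin(sse)]

delays = np.array([1., 7., 30., 90., 180., 365.])   # days
values = 100.0 / (1 + 0.05 * delays)                # synthetic data, true k = 0.05
k = fit_hyperbolic_k(delays, values, 100.0)
```

The two-parameter alternatives discussed in the abstract add, for example, a nonlinear valuation or time-perception exponent to this same functional form.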
The Predicting Model of E-commerce Site Based on the Ideas of Curve Fitting
NASA Astrophysics Data System (ADS)
Tao, Zhang; Li, Zhang; Dingjun, Chen
On the basis of the idea of second-order (quadratic) curve fitting, the number and scale of Chinese E-commerce sites are analyzed. A preventing-increase model is introduced in this paper, and the model parameters are solved with the Matlab software. The validity of the preventing-increase model is confirmed through a numerical experiment. The experimental results show that the precision of the preventing-increase model is satisfactory.
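The curve-fitting idea can be sketched as a quadratic least-squares fit of site counts versus year, extrapolated forward. This is only an illustration of the fitting step, under the assumption that "second multiplication" refers to a second-order fit; the paper's preventing-increase model and its Matlab solution are not reproduced, and the toy counts are invented.

```python
import numpy as np

def quadratic_trend(years, counts, horizon):
    """Quadratic least-squares fit of counts vs. year, extrapolated ahead.

    Years are shifted to start at zero for numerical conditioning.
    """
    t = years - years[0]
    coeffs = np.polyfit(t, counts, 2)
    return np.polyval(coeffs, t[-1] + np.arange(1, horizon + 1))

years = np.arange(2001, 2009)
counts = 3.0 * (years - 2000)**2 + 10 * (years - 2000) + 50   # toy site counts
pred = quadratic_trend(years, counts, 2)                      # 2009 and 2010
```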
Helgesson, P; Sjöstrand, H
2017-11-01
Fitting a parametrized function to data is important for many researchers and scientists. If the model is non-linear and/or defect, it is not trivial to do correctly and to include an adequate uncertainty analysis. This work presents how the Levenberg-Marquardt algorithm for non-linear generalized least squares fitting can be used with a prior distribution for the parameters and how it can be combined with Gaussian processes to treat model defects. An example, where three peaks in a histogram are to be distinguished, is carefully studied. In particular, the probability r1 for a nuclear reaction to end up in one out of two overlapping peaks is studied. Synthetic data are used to investigate effects of linearizations and other assumptions. For perfect Gaussian peaks, it is seen that the estimated parameters are distributed close to the truth with good covariance estimates. This assumes that the method is applied correctly; for example, prior knowledge should be implemented using a prior distribution and not by assuming that some parameters are perfectly known (if they are not). It is also important to update the data covariance matrix using the fit if the uncertainties depend on the expected value of the data (e.g., for Poisson counting statistics or relative uncertainties). If a model defect is added to the peaks, such that their shape is unknown, a fit which assumes perfect Gaussian peaks becomes unable to reproduce the data, and the results for r1 become biased. It is, however, seen that it is possible to treat the model defect with a Gaussian process with a covariance function tailored for the situation, with hyper-parameters determined by leave-one-out cross validation. The resulting estimates for r1 are virtually unbiased, and the uncertainty estimates agree very well with the underlying uncertainty.
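The core mechanism, Levenberg-Marquardt generalized least squares with a Gaussian prior on the parameters, can be sketched by treating the prior as extra pseudo-observations θ ~ N(θ0, Cprior), in line with the abstract's advice to encode prior knowledge as a distribution rather than fixing parameters. This minimal version uses constant damping and omits the model-defect Gaussian process; the Gaussian-peak toy problem is illustrative.

```python
import numpy as np

def lm_with_prior(f, jac, y, Cy, theta0, Cprior, n_iter=50, lam=1e-3):
    """Damped Gauss-Newton / LM for generalized least squares with a prior.

    The prior contributes Wp = Cprior^-1 to the normal matrix and a pull
    toward theta0 in the gradient, exactly as extra data rows would.
    """
    Wy, Wp = np.linalg.inv(Cy), np.linalg.inv(Cprior)
    theta = theta0.copy()
    for _ in range(n_iter):
        r, J = y - f(theta), jac(theta)
        A = J.T @ Wy @ J + Wp                       # augmented normal matrix
        g = J.T @ Wy @ r + Wp @ (theta0 - theta)    # augmented gradient
        theta = theta + np.linalg.solve(A + lam * np.diag(np.diag(A)), g)
    return theta

# toy problem: recover amplitude and width of one Gaussian peak
x = np.linspace(-3, 3, 61)
def f(th): return th[0] * np.exp(-x**2 / (2 * th[1]**2))
def jac(th):
    e = np.exp(-x**2 / (2 * th[1]**2))
    return np.column_stack([e, th[0] * e * x**2 / th[1]**3])
y = f(np.array([2.0, 1.0]))                         # noiseless synthetic data
theta = lm_with_prior(f, jac, y, 0.01 * np.eye(61),
                      np.array([1.5, 1.2]), 100.0 * np.eye(2))
```

With a weak prior (large Cprior) the estimate is dominated by the data; tightening Cprior pulls the solution toward θ0, which is the behavior the paper contrasts with hard-fixing parameters.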
Financial model calibration using consistency hints.
Abu-Mostafa, Y S
2001-01-01
We introduce a technique for forcing the calibration of a financial model to produce valid parameters. The technique is based on learning from hints. It converts simple curve fitting into genuine calibration, where broad conclusions can be inferred from parameter values. The technique augments the error function of curve fitting with consistency hint error functions based on the Kullback-Leibler distance. We introduce an efficient EM-type optimization algorithm tailored to this technique. We also introduce other consistency hints, and balance their weights using canonical errors. We calibrate the correlated multifactor Vasicek model of interest rates, and apply it successfully to Japanese Yen swaps market and US dollar yield market.
Longhi, Daniel Angelo; Martins, Wiaslan Figueiredo; da Silva, Nathália Buss; Carciofi, Bruno Augusto Mattar; de Aragão, Gláucia Maria Falcão; Laurindo, João Borges
2017-01-02
In predictive microbiology, model parameters have been estimated using the sequential two-step modeling (TSM) approach, in which primary models are fitted to the microbial growth data, and then secondary models are fitted to the primary model parameters to represent their dependence on the environmental variables (e.g., temperature). The Optimal Experimental Design (OED) approach allows the experimental workload and costs to be reduced and model identifiability to be improved, because primary and secondary models are fitted simultaneously from non-isothermal data. Lactobacillus viridescens was selected for this study because it is a lactic acid bacterium of great interest for meat product preservation. The objectives of this study were to estimate the growth parameters of L. viridescens in culture medium with the TSM and OED approaches, and to evaluate, for each approach, the number of experimental data points, the time required, and the confidence intervals of the model parameters. Experimental data for estimating the model parameters with the TSM approach were obtained at six temperatures (total experimental time of 3540 h and 196 experimental data points of microbial growth). Data for the OED approach were obtained from four optimal non-isothermal profiles (total experimental time of 588 h and 60 experimental data points of microbial growth), two profiles with increasing temperatures (IT) and two with decreasing temperatures (DT). The Baranyi and Roberts primary model and the square-root secondary model were used to describe the microbial growth, in which the parameters b and T_min (±95% confidence interval) were estimated from the experimental data. The parameters obtained from the TSM approach were b = 0.0290 (±0.0020) [1/(h^0.5 °C)] and T_min = -1.33 (±1.26) [°C], with R^2 = 0.986 and RMSE = 0.581, and the parameters obtained with the OED approach were b = 0.0316 (±0.0013) [1/(h^0.5 °C)] and T_min = -0.24 (±0.55) [°C], with R^2 = 0.990 and RMSE = 0.436.
The parameters obtained from the OED approach presented smaller confidence intervals and better statistical indices than those from the TSM approach. Besides, fewer experimental data points and less time were needed to estimate the model parameters with OED than with TSM. Furthermore, the OED model parameters were validated against non-isothermal experimental data with great accuracy. In this way, the OED approach is feasible and a very useful tool for improving the prediction of microbial growth under non-isothermal conditions. Copyright © 2016 Elsevier B.V. All rights reserved.
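The second TSM step with the square-root secondary model, sqrt(mu_max) = b·(T - Tmin), reduces to a linear regression of sqrt(mu_max) on temperature. The sketch below illustrates that step with noiseless toy data built from the abstract's OED estimates (b = 0.0316, Tmin = -0.24 °C); the temperature design is invented.

```python
import numpy as np

def fit_sqrt_model(T, mu_max):
    """Fit the square-root secondary model sqrt(mu_max) = b * (T - Tmin).

    Linear regression of sqrt(mu_max) on T gives slope b and intercept a,
    from which Tmin = -a / b.
    """
    b, a = np.polyfit(T, np.sqrt(mu_max), 1)        # sqrt(mu) = b*T + a
    return b, -a / b                                # b, Tmin

T = np.array([4., 8., 12., 16., 20., 30.])          # isothermal design (°C)
mu = (0.0316 * (T + 0.24))**2                       # toy growth rates
b, Tmin = fit_sqrt_model(T, mu)
```

The OED approach instead fits b and Tmin directly to non-isothermal growth curves through the primary model, which is what shrinks the confidence intervals reported above.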
Macroscopic singlet oxygen model incorporating photobleaching as an input parameter
NASA Astrophysics Data System (ADS)
Kim, Michele M.; Finlay, Jarod C.; Zhu, Timothy C.
2015-03-01
A macroscopic singlet oxygen model for photodynamic therapy (PDT) has been used extensively to calculate the reacted singlet oxygen concentration for various photosensitizers. The four photophysical parameters (ξ, σ, β, δ) and threshold singlet oxygen dose ([1O2]r,sh) can be found for various drugs and drug-light intervals using a fitting algorithm. The input parameters for this model include the fluence, photosensitizer concentration, optical properties, and necrosis radius. An additional input variable of photobleaching was implemented in this study to optimize the results. Photobleaching was measured by using the pre-PDT and post-PDT sensitizer concentrations. Using the RIF model of murine fibrosarcoma, mice were treated with a linear source with fluence rates from 12 - 150 mW/cm and total fluences from 24 - 135 J/cm. The two main drugs investigated were benzoporphyrin derivative monoacid ring A (BPD) and 2-[1-hexyloxyethyl]-2-devinyl pyropheophorbide-a (HPPH). Previously published photophysical parameters were fine-tuned and verified using photobleaching as the additional fitting parameter. Furthermore, photobleaching can be used as an indicator of the robustness of the model for the particular mouse experiment by comparing the experimental and model-calculated photobleaching ratio.
Koeppe, R A; Holthoff, V A; Frey, K A; Kilbourn, M R; Kuhl, D E
1991-09-01
The in vivo kinetic behavior of [11C]flumazenil ([11C]FMZ), a non-subtype-specific central benzodiazepine antagonist, is characterized using compartmental analysis with the aim of producing an optimized data acquisition protocol and tracer kinetic model configuration for the assessment of [11C]FMZ binding to benzodiazepine receptors (BZRs) in human brain. The approach presented is simple, requiring only a single radioligand injection. Dynamic positron emission tomography data were acquired on 18 normal volunteers using a 60- to 90-min sequence of scans and were analyzed with model configurations that included a three-compartment, four-parameter model; a three-compartment, three-parameter model with a fixed value for free plus nonspecific binding; and a two-compartment, two-parameter model. Statistical analysis indicated that the four-parameter model did not yield significantly better fits than the three-parameter model. Goodness of fit was improved for three- versus two-parameter configurations in regions with low receptor density, but not in regions with moderate to high receptor density. Thus, a two-compartment, two-parameter configuration was found to adequately describe the kinetic behavior of [11C]FMZ in human brain, with stable estimates of the model parameters obtainable from as little as 20-30 min of data. Pixel-by-pixel analysis yields functional images of transport rate (K1) and ligand distribution volume (DV), and thus provides independent estimates of ligand delivery and BZR binding.
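The two-compartment, two-parameter configuration the study settles on can be sketched as follows. The plasma input function, parameter values, and noise level here are illustrative assumptions, not the paper's data:

```python
import numpy as np
from scipy.integrate import odeint
from scipy.optimize import curve_fit

# Two-compartment, two-parameter model: dCt/dt = K1*Cp(t) - k2*Ct(t)
# Cp(t) is a hypothetical metabolite-corrected plasma input curve.
def plasma_input(t):
    return 10.0 * np.exp(-0.3 * t) + 2.0 * np.exp(-0.02 * t)

def tissue_curve(t, K1, k2):
    def rhs(Ct, ti):
        return K1 * plasma_input(ti) - k2 * Ct
    return odeint(rhs, 0.0, t)[:, 0]

# Simulate a 60-min dynamic scan and fit K1 and k2
t = np.linspace(0.0, 60.0, 30)
rng = np.random.default_rng(0)
data = tissue_curve(t, 0.35, 0.08) + rng.normal(0.0, 0.05, t.size)
(K1_fit, k2_fit), _ = curve_fit(tissue_curve, t, data, p0=[0.2, 0.05])

# Distribution volume DV = K1/k2 indexes BZR binding, while K1 alone
# indexes ligand delivery -- the two independent estimates of the paper
DV = K1_fit / k2_fit
```

A pixel-by-pixel version would simply repeat this fit for each voxel time-activity curve.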
Linear model for fast background subtraction in oligonucleotide microarrays.
Kroll, K Myriam; Barkema, Gerard T; Carlon, Enrico
2009-11-16
One important preprocessing step in the analysis of microarray data is background subtraction. In high-density oligonucleotide arrays this is recognized as a crucial step for the global performance of the data analysis from raw intensities to expression values. We propose here an algorithm for background estimation based on a model in which the cost function is quadratic in a set of fitting parameters such that minimization can be performed through linear algebra. The model incorporates two effects: 1) Correlated intensities between neighboring features in the chip and 2) sequence-dependent affinities for non-specific hybridization fitted by an extended nearest-neighbor model. The algorithm has been tested on 360 GeneChips from publicly available data of recent expression experiments. The algorithm is fast and accurate. Strong correlations between the fitted values for different experiments as well as between the free-energy parameters and their counterparts in aqueous solution indicate that the model captures a significant part of the underlying physical chemistry.
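The computational appeal of the approach (a cost quadratic in the fitting parameters, minimized through linear algebra rather than iterative search) can be shown with a toy version. The design matrix and parameter values below are invented for illustration; in the actual model the columns encode neighbor intensities and nearest-neighbor sequence affinities:

```python
import numpy as np

# A cost ||y - X @ theta||^2 that is quadratic in theta is minimized in
# one shot by least squares -- no iterative optimizer is needed.
rng = np.random.default_rng(1)
n_probes, n_params = 200, 5

# Hypothetical design matrix; in the actual model its columns would
# encode neighboring-feature intensities and sequence-dependent terms.
X = rng.normal(size=(n_probes, n_params))
theta_true = np.array([1.5, -0.7, 0.3, 0.9, -1.2])
y = X @ theta_true + rng.normal(scale=0.1, size=n_probes)

# Closed-form minimization via linear algebra
theta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
background = X @ theta_hat  # fitted background intensity per probe
```

This one-shot solve is what makes the algorithm fast enough to run on hundreds of GeneChips.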
DOE Office of Scientific and Technical Information (OSTI.GOV)
Johnson, Raymond H.; Truax, Ryan A.; Lankford, David A.
2016-02-03
Solid-phase iron concentrations and generalized composite surface complexation models were used to evaluate procedures for determining uranium sorption on oxidized aquifer material at a proposed U in situ recovery (ISR) site. At the proposed Dewey Burdock ISR site in South Dakota, USA, oxidized aquifer material occurs downgradient of the U ore zones. Solid-phase Fe concentrations did not explain our batch sorption test results, though total extracted Fe appeared to be positively correlated with overall measured U sorption. Batch sorption test results were used to develop generalized composite surface complexation models that incorporated the full generic sorption potential of each sample, without detailed mineralogic characterization. The resultant models provide U sorption parameters (site densities and equilibrium constants) for reactive transport modeling. The generalized composite surface complexation sorption models were calibrated to batch sorption data from three oxidized core samples using inverse modeling, and gave larger sorption parameters than just U sorption on the measured solid-phase Fe. These larger sorption parameters can significantly influence reactive transport modeling, potentially increasing U attenuation. Because of the limited number of calibration points, inverse modeling required the reduction of estimated parameters by fixing two parameters. The best-fit models used fixed values for equilibrium constants, with the sorption site densities being estimated by the inversion process. While these inverse routines did provide best-fit sorption parameters, local minima and correlated parameters might require further evaluation. Despite our limited number of proxy samples, the procedures presented provide a valuable methodology to consider for sites where metal sorption parameters are required.
Furthermore, these sorption parameters can be used in reactive transport modeling to assess downgradient metal attenuation, especially when no other calibration data are available, such as at proposed U ISR sites.
Individual tree basal-area growth parameter estimates for four models
J.J. Colbert; Michael Schuckers; Desta Fekedulegn; James Rentch; Mairtin MacSiurtain; Kurt Gottschalk
2004-01-01
Four sigmoid growth models are fit to basal-area data derived from increment cores and disks taken at breast height from oak trees. Models are rated on their ability to fit growth data from five datasets that are obtained from 10 locations along a longitudinal gradient across the states of Delaware, Pennsylvania, West Virginia, and Ohio in the USA. We examine and...
A new simple local muscle recovery model and its theoretical and experimental validation.
Ma, Liang; Zhang, Wei; Wu, Su; Zhang, Zhanwu
2015-01-01
This study was conducted to provide theoretical and experimental validation of a local muscle recovery model. Muscle recovery has been modeled in different empirical and theoretical approaches to determine work-rest allowance for musculoskeletal disorder (MSD) prevention. However, time-related parameters and individual attributes have not been sufficiently considered in conventional approaches. A new muscle recovery model was proposed by integrating time-related task parameters and individual attributes. Theoretically, this muscle recovery model was compared to other theoretical models mathematically. Experimentally, a total of 20 subjects participated in the experimental validation. Hand grip force recovery and shoulder joint strength recovery were measured after a fatiguing operation. The recovery profile was fitted using the recovery model, and individual recovery rates were calculated after fitting. Good fitting values (r² > .8) were found for all the subjects. Significant differences in recovery rates were found among different muscle groups (p < .05). The theoretical muscle recovery model was primarily validated by characterization of the recovery process after a fatiguing operation. The determined recovery rate may be useful to represent an individual's recovery attribute.
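The fitting step described above can be sketched in a few lines. The exponential recovery form, parameter values, and noise level are illustrative stand-ins, since the abstract does not give the paper's exact model equation:

```python
import numpy as np
from scipy.optimize import curve_fit

# Illustrative stand-in for the recovery model: strength returns toward
# baseline exponentially at an individual recovery rate R. The paper's
# exact model form is not given in the abstract.
def recovery(t, R):
    return 1.0 - 0.4 * np.exp(-R * t)  # fraction of baseline strength

t = np.linspace(0.0, 10.0, 20)  # minutes after the fatiguing operation
rng = np.random.default_rng(2)
obs = recovery(t, 0.5) + rng.normal(0.0, 0.01, t.size)

(R_fit,), _ = curve_fit(recovery, t, obs, p0=[0.1])

# Goodness of fit r^2, the criterion reported in the study (r^2 > .8)
resid = obs - recovery(t, R_fit)
r2 = 1.0 - np.sum(resid**2) / np.sum((obs - obs.mean())**2)
```

Repeating the fit per subject and muscle group yields the individual recovery rates the study compares.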
A systematic study of multiple minerals precipitation modelling in wastewater treatment.
Kazadi Mbamba, Christian; Tait, Stephan; Flores-Alsina, Xavier; Batstone, Damien J
2015-11-15
Mineral solids precipitation is important in wastewater treatment. However, approaches to minerals precipitation modelling are varied, often empirical, and mostly focused on single precipitate classes. A common approach, applicable to multi-species precipitates, is needed to integrate into existing wastewater treatment models. The present study systematically tested a semi-mechanistic modelling approach, using various experimental platforms with multiple minerals precipitation. Experiments included dynamic titration with addition of sodium hydroxide to synthetic wastewater, and aeration to progressively increase pH and induce precipitation in real piggery digestate and sewage sludge digestate. The model approach consisted of an equilibrium part for aqueous phase reactions and a kinetic part for minerals precipitation. The model was fitted to dissolved calcium, magnesium, total inorganic carbon and phosphate. Results indicated that precipitation was dominated by the mineral struvite, forming together with varied and minor amounts of calcium phosphate and calcium carbonate. The model approach was noted to have the advantage of requiring a minimal number of fitted parameters, so the model was readily identifiable. Kinetic rate coefficients, which were statistically fitted, were generally in the range 0.35-11.6 h⁻¹ with relative confidence intervals of 10-80%. Confidence regions for the kinetic rate coefficients were often asymmetric, with model-data residuals increasing more gradually toward larger coefficient values. This suggests that a large kinetic coefficient could be used when actual measured data are lacking for a particular precipitate-matrix combination. Correlation between the kinetic rate coefficients of different minerals was low, indicating that parameter values for individual minerals could be independently fitted (keeping all other model parameters constant).
Implementation was therefore relatively flexible, and would be readily expandable to include other minerals. Copyright © 2015 Elsevier Ltd. All rights reserved.
Nasserie, Tahmina; Tuite, Ashleigh R; Whitmore, Lindsay; Hatchette, Todd; Drews, Steven J; Peci, Adriana; Kwong, Jeffrey C; Friedman, Dara; Garber, Gary; Gubbay, Jonathan; Fisman, David N
2017-01-01
Seasonal influenza epidemics occur frequently. Rapid characterization of seasonal dynamics and forecasting of epidemic peaks and final sizes could help support real-time decision-making related to vaccination and other control measures. Real-time forecasting remains challenging. We used the previously described "incidence decay with exponential adjustment" (IDEA) model, a 2-parameter phenomenological model, to evaluate the characteristics of the 2015-2016 influenza season in 4 Canadian jurisdictions: the Provinces of Alberta, Nova Scotia and Ontario, and the City of Ottawa. Model fits were updated weekly with receipt of incident virologically confirmed case counts. Best-fit models were used to project seasonal influenza peaks and epidemic final sizes. The 2015-2016 influenza season was mild and late-peaking. Parameter estimates generated through fitting were consistent in the 2 largest jurisdictions (Ontario and Alberta) and with pooled data including Nova Scotia counts (R0 approximately 1.4 for all fits). Lower R0 estimates were generated in Nova Scotia and Ottawa. Final size projections that made use of complete time series were accurate to within 6% of true final sizes, but final size projections were unreliable when based on pre-peak data. Projections of epidemic peaks stabilized before the true epidemic peak but were persistently early (~2 weeks) relative to it. A simple, 2-parameter influenza model provided reasonably accurate real-time projections of influenza seasonal dynamics in an atypically late, mild influenza season. Challenges are similar to those seen with more complex forecasting methodologies. Future work includes identification of seasonal characteristics associated with variability in model performance.
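The IDEA model is compact enough to sketch: incident cases in epidemic generation t follow I(t) = (R0 / (1 + d)^t)^t, with the basic reproduction number R0 and a discounting factor d as the only parameters. The case counts and parameter values below are invented for illustration of the weekly refitting idea:

```python
import numpy as np
from scipy.optimize import curve_fit

# IDEA model: I(t) = (R0 / (1 + d)**t)**t, t counted in serial intervals
def idea(t, R0, d):
    return (R0 / (1.0 + d) ** t) ** t

t = np.arange(1, 15)  # generations observed so far
rng = np.random.default_rng(3)
cases = idea(t, 1.4, 0.015) * rng.lognormal(0.0, 0.05, t.size)

(R0_fit, d_fit), _ = curve_fit(idea, t, cases, p0=[1.5, 0.01])

# Project the epidemic peak from the fitted curve
t_fine = np.linspace(1.0, 60.0, 600)
t_peak = t_fine[np.argmax(idea(t_fine, R0_fit, d_fit))]
```

Refitting as each new week of counts arrives, as the study did, updates R0_fit, d_fit, and the projected peak in real time.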
Multifractal model of magnetic susceptibility distributions in some igneous rocks
Gettings, Mark E.
2012-01-01
Measurements of in-situ magnetic susceptibility were compiled from mainly Precambrian crystalline basement rocks beneath the Colorado Plateau and ranges in Arizona, Colorado, and New Mexico. The susceptibility meter used samples about 30 cm³ of rock and measures variations in the modal distribution of magnetic minerals that form a minor component volumetrically in these coarsely crystalline granitic to granodioritic rocks. Recent surveys include 50–150 measurements on each outcrop, and show that the distribution of magnetic susceptibilities is highly variable, multimodal and strongly non-Gaussian. Although the distribution of magnetic susceptibility is well known to be multifractal, the small number of data points at an outcrop precludes calculation of the multifractal spectrum by conventional methods. Instead, a brute force approach was adopted using multiplicative cascade models to fit the outcrop-scale variability of magnetic minerals. Model segment proportion and length parameters resulted in 26 676 models to span parameter space. Distributions at each outcrop were normalized to unity magnetic susceptibility and added to compare all data for a rock body, accounting for variations in petrology and alteration. Once the best-fitting model was found, the equation relating the segment proportion and length parameters was solved numerically to yield the multifractal spectrum estimate. For the best fits, the relative density (the proportion divided by the segment length) of one segment tends to be dominant and the other two densities are smaller and nearly equal. No other consistent relationships between the best-fit parameters were identified. The multifractal spectrum estimates appear to distinguish between metamorphic gneiss sites and sites on plutons, even if the plutons have been metamorphosed. In particular, rocks that have undergone multiple tectonic events tend to have a larger range of scaling exponents.
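A multiplicative cascade of the kind fitted here is easy to state in code. The segment proportions below are illustrative, not the study's best-fit values:

```python
import numpy as np

# Minimal multiplicative cascade: each cell splits into three segments
# carrying fixed proportions of its "mass"; iterating produces a highly
# variable, non-Gaussian distribution like the outcrop susceptibilities.
# The proportions below are illustrative, not the best-fit model values.
def cascade(levels, props=(0.6, 0.25, 0.15)):
    masses = np.array([1.0])
    for _ in range(levels):
        masses = np.concatenate([m * np.asarray(props) for m in masses])
    return masses

m = cascade(8)               # 3**8 = 6561 cells
spread = m.max() / m.min()   # spans several orders of magnitude
```

A brute-force fit like the paper's would sweep the proportion and segment-length parameters over a grid of such models and keep the one best matching the outcrop histogram.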
Application of an OCT data-based mathematical model of the foveal pit in Parkinson disease.
Ding, Yin; Spund, Brian; Glazman, Sofya; Shrier, Eric M; Miri, Shahnaz; Selesnick, Ivan; Bodis-Wollner, Ivan
2014-11-01
Spectral-domain Optical coherence tomography (OCT) has shown remarkable utility in the study of retinal disease and has helped to characterize the fovea in Parkinson disease (PD) patients. We developed a detailed mathematical model based on raw OCT data to allow differentiation of foveae of PD patients from healthy controls. Of the various models we tested, a difference of a Gaussian and a polynomial was found to have "the best fit". Decision was based on mathematical evaluation of the fit of the model to the data of 45 control eyes versus 50 PD eyes. We compared the model parameters in the two groups using receiver-operating characteristics (ROC). A single parameter discriminated 70 % of PD eyes from controls, while using seven of the eight parameters of the model allowed 76 % to be discriminated. The future clinical utility of mathematical modeling in study of diffuse neurodegenerative conditions that also affect the fovea is discussed.
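A "difference of a Gaussian and a polynomial" fit of the kind described can be sketched as below. The exact parameterization used in the paper is not given in the abstract, so the functional form, thickness values, and noise level are an illustrative reconstruction:

```python
import numpy as np
from scipy.optimize import curve_fit

# Foveal-pit profile modeled as a polynomial baseline minus a Gaussian
# pit; parameter values are hypothetical retinal thicknesses in mm.
def fovea_model(x, a0, a1, a2, amp, mu, sigma):
    poly = a0 + a1 * x + a2 * x**2
    pit = amp * np.exp(-((x - mu) ** 2) / (2.0 * sigma**2))
    return poly - pit

x = np.linspace(-3.0, 3.0, 100)  # mm from the foveal center
rng = np.random.default_rng(4)
true = (0.30, 0.0, 0.01, 0.15, 0.0, 0.5)
y = fovea_model(x, *true) + rng.normal(0.0, 0.002, x.size)

params, _ = curve_fit(fovea_model, x, y,
                      p0=[0.3, 0.0, 0.0, 0.1, 0.0, 1.0])
```

ROC analysis as in the paper would then compare the fitted parameters (singly or jointly) between patient and control eyes.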
ERIC Educational Resources Information Center
Molenaar, Peter C. M.; Nesselroade, John R.
1998-01-01
Pseudo-Maximum Likelihood (p-ML) and Asymptotically Distribution Free (ADF) estimation methods for estimating dynamic factor model parameters within a covariance structure framework were compared through a Monte Carlo simulation. Both methods appear to give consistent model parameter estimates, but only ADF gives standard errors and chi-square…
Bayesian Estimation in the One-Parameter Latent Trait Model.
1980-03-01
Journal of Mathematical and Statistical Psychology, 1973, 26, 31-44. (a) Andersen, E. B. A goodness of fit test for the Rasch model. Psychometrika, 1973, 28... technique for estimating latent trait mental test parameters. Educational and Psychological Measurement, 1976, 36, 705-715. Lindley, D. V. The... Lord, F. M. An analysis of verbal Scholastic Aptitude Test using Birnbaum's three-parameter logistic model. Educational and Psychological
The AKARI IRC asteroid flux catalogue: updated diameters and albedos
NASA Astrophysics Data System (ADS)
Alí-Lagoa, V.; Müller, T. G.; Usui, F.; Hasegawa, S.
2018-05-01
The AKARI IRC all-sky survey provided more than twenty thousand thermal infrared observations of over five thousand asteroids. Diameters and albedos were obtained by fitting an empirically calibrated version of the standard thermal model to these data. After the publication of the flux catalogue in October 2016, our aim here is to present the AKARI IRC all-sky survey data and discuss valuable scientific applications in the field of small body physical properties studies. As an example, we update the catalogue of asteroid diameters and albedos based on AKARI using the near-Earth asteroid thermal model (NEATM). We fit the NEATM to derive asteroid diameters and, whenever possible, infrared beaming parameters. We fit groups of observations taken for the same object at different epochs of the survey separately, so we compute more than one diameter for approximately half of the catalogue. We obtained a total of 8097 diameters and albedos for 5170 asteroids, and we fitted the beaming parameter for almost two thousand of them. When it was not possible to fit the beaming parameter, we used a straight-line fit to our sample's beaming parameter-versus-phase angle plot to set the default value for each fit individually, instead of using a single average value. Our diameters agree with stellar-occultation-based diameters well within the accuracy expected for the model. They also match the previous AKARI-based catalogue at phase angles lower than 50°, but we find a systematic deviation at higher phase angles, at which near-Earth and Mars-crossing asteroids were observed. The AKARI IRC all-sky survey is an essential source of information about asteroids, especially the large ones, since it provides observations at different observation geometries, rotational coverages and aspect angles.
For example, by comparing in more detail a few asteroids for which dimensions were derived from occultations, we discuss how the multiple observations per object may already provide three-dimensional information about elongated objects even based on an idealised model like the NEATM. Finally, we enumerate additional expected applications for more complex models, especially in combination with other catalogues. Full Table 1 is only available at the CDS via anonymous ftp to cdsarc.u-strasbg.fr (130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/612/A85
Text vectorization based on character recognition and character stroke modeling
NASA Astrophysics Data System (ADS)
Fan, Zhigang; Zhou, Bingfeng; Tse, Francis; Mu, Yadong; He, Tao
2014-03-01
In this paper, a text vectorization method is proposed using OCR (Optical Character Recognition) and character stroke modeling. This is based on the observation that, for a particular character, font glyphs may have different shapes but often share the same stroke structures. Like many other methods, the proposed algorithm contains two procedures: dominant point determination and data fitting. The first partitions the outlines into segments and the second fits a curve to each segment. In the proposed method, dominant points are classified as "major" (specifying stroke structures) and "minor" (specifying serif shapes). A set of rules (parameters) is determined offline, specifying for each character the number of major and minor dominant points and, for each dominant point, the detection and fitting parameters (projection directions, boundary conditions and smoothness). For minor points, multiple sets of parameters can be used for different fonts. During operation, OCR is performed and the parameters associated with the recognized character are selected. Both major and minor dominant points are detected by a maximization process as specified by the parameter set. For minor points, an additional step can be performed to test competing hypotheses and detect degenerate cases.
Universal fitting formulae for baryon oscillation surveys
NASA Astrophysics Data System (ADS)
Blake, Chris; Parkinson, David; Bassett, Bruce; Glazebrook, Karl; Kunz, Martin; Nichol, Robert C.
2006-01-01
The next generation of galaxy surveys will attempt to measure the baryon oscillations in the clustering power spectrum with high accuracy. These oscillations encode a preferred scale which may be used as a standard ruler to constrain cosmological parameters and dark energy models. In this paper we present simple analytical fitting formulae for the accuracy with which the preferred scale may be determined in the tangential and radial directions by future spectroscopic and photometric galaxy redshift surveys. We express these accuracies as a function of survey parameters such as the central redshift, volume, galaxy number density and (where applicable) photometric redshift error. These fitting formulae should greatly increase the efficiency of optimizing future surveys, which requires analysis of a potentially vast number of survey configurations and cosmological models. The formulae are calibrated using a grid of Monte Carlo simulations, which are analysed by dividing out the overall shape of the power spectrum before fitting a simple decaying sinusoid to the oscillations. The fitting formulae reproduce the simulation results with a fractional scatter of 7 per cent (10 per cent) in the tangential (radial) directions over a wide range of input parameters. We also indicate how sparse-sampling strategies may enhance the effective survey area if the sampling scale is much smaller than the projected baryon oscillation scale.
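The calibration step described above (divide out the smooth power-spectrum shape, then fit a simple decaying sinusoid to the residual oscillations) can be sketched as follows. The functional form and numbers are an illustrative stand-in, not the paper's calibrated formulae:

```python
import numpy as np
from scipy.optimize import curve_fit

# Decaying sinusoid fitted to the shape-divided power spectrum; k_A
# plays the role of the preferred (standard-ruler) scale.
def osc(k, A, k_damp, k_A):
    return 1.0 + A * np.sin(2.0 * np.pi * k / k_A) * np.exp(-(k / k_damp) ** 2)

k = np.linspace(0.02, 0.3, 120)  # wavenumber, h/Mpc
rng = np.random.default_rng(5)
ratio = osc(k, 0.05, 0.15, 0.06) + rng.normal(0.0, 0.002, k.size)

(A_fit, kd_fit, kA_fit), _ = curve_fit(osc, k, ratio, p0=[0.05, 0.2, 0.055])
```

The scatter of kA_fit across many such Monte Carlo realizations is what the paper's fitting formulae summarize as a function of survey volume, number density, and redshift error.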
NASA Astrophysics Data System (ADS)
Fu, W.; Gu, L.; Hoffman, F. M.
2013-12-01
The photosynthesis model of Farquhar, von Caemmerer & Berry (1980) is an important tool for predicting the response of plants to climate change. So far, the critical parameters required by the model have been obtained from leaf-level measurements of gas exchange, namely the net assimilation of CO2 against intercellular CO2 concentration (A-Ci) curves, made at saturating light conditions. With such measurements, most points are likely in the Rubisco-limited state, for which the model is structurally overparameterized (the model is also overparameterized in the TPU-limited state). To reliably estimate photosynthetic parameters, there must be a sufficient number of points in the RuBP regeneration-limited state, which has no structural overparameterization. To improve the accuracy of A-Ci data analysis, we investigate the potential of using multiple A-Ci curves at subsaturating light intensities to generate some important parameter estimates more accurately. Using subsaturating light intensities allows more RuBP regeneration-limited points to be obtained. In this study, simulated examples are used to demonstrate how this method can eliminate the errors of conventional A-Ci curve fitting methods. Some fitted parameters, like the photocompensation point and day respiration, impose a significant limitation on modeling leaf CO2 exchange. The multiple A-Ci curves fitting can also improve on the so-called Laisk (1977) method, which recent publications have shown to produce incorrect estimates of the photocompensation point and day respiration. We also test the approach with actual measurements, along with suggested measurement conditions to constrain measured A-Ci points to maximize the occurrence of RuBP regeneration-limited photosynthesis. Finally, we use our measured gas exchange datasets to quantify the magnitude of resistance of the chloroplast and cell wall-plasmalemma and explore the effect of variable mesophyll conductance.
The variable mesophyll conductance takes into account the influence of CO2 from mitochondria, in contrast to the commonly used constant value of mesophyll conductance. We show that after considering this effect the other parameters of the photosynthesis model can be re-estimated. Our results indicate that variable mesophyll conductance has the greatest effect on the estimation of the maximum electron transport rate (Jmax), but has a negligible impact on the estimated day respiration (Rd) and photocompensation point (<2%).
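The RuBP-regeneration-limited leg that the proposed measurements target has the standard Farquhar-von Caemmerer-Berry form Aj = J(Ci − Γ*)/(4Ci + 8Γ*) − Rd, which can be fitted directly; the data and parameter values below are invented for illustration:

```python
import numpy as np
from scipy.optimize import curve_fit

# RuBP-regeneration-limited assimilation:
#   Aj = J*(Ci - Gamma*)/(4*Ci + 8*Gamma*) - Rd
# Fitting points in this state yields J, the photocompensation point
# Gamma*, and day respiration Rd without structural overparameterization.
def a_rubp(Ci, J, gamma_star, Rd):
    return J * (Ci - gamma_star) / (4.0 * Ci + 8.0 * gamma_star) - Rd

Ci = np.linspace(100.0, 800.0, 15)  # intercellular CO2, umol/mol
rng = np.random.default_rng(6)
A_obs = a_rubp(Ci, 120.0, 40.0, 1.5) + rng.normal(0.0, 0.05, Ci.size)

(J_fit, gs_fit, Rd_fit), _ = curve_fit(a_rubp, Ci, A_obs,
                                       p0=[100.0, 45.0, 1.0])
```

Pooling several such curves measured at different subsaturating irradiances, as the study proposes, further constrains Γ* and Rd, which are shared across curves.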
Molitor, John
2012-03-01
Bayesian methods have seen an increase in popularity in a wide variety of scientific fields, including epidemiology. One of the main reasons for their widespread application is the power of the Markov chain Monte Carlo (MCMC) techniques generally used to fit these models. As a result, researchers often implicitly associate Bayesian models with MCMC estimation procedures. However, Bayesian models do not always require Markov-chain-based methods for parameter estimation. This is important, as MCMC estimation methods, while generally quite powerful, are complex and computationally expensive and suffer from convergence problems related to the manner in which they generate correlated samples used to estimate probability distributions for parameters of interest. In this issue of the Journal, Cole et al. (Am J Epidemiol. 2012;175(5):368-375) present an interesting paper that discusses non-Markov-chain-based approaches to fitting Bayesian models. These methods, though limited, can overcome some of the problems associated with MCMC techniques and promise to provide simpler approaches to fitting Bayesian models. Applied researchers will find these estimation approaches intuitively appealing and will gain a deeper understanding of Bayesian models through their use. However, readers should be aware that other non-Markov-chain-based methods are currently in active development and have been widely published in other fields.
FIT: statistical modeling tool for transcriptome dynamics under fluctuating field conditions
Iwayama, Koji; Aisaka, Yuri; Kutsuna, Natsumaro
2017-01-01
Abstract Motivation: Considerable attention has been given to the quantification of environmental effects on organisms. In natural conditions, environmental factors are continuously changing in a complex manner. To reveal the effects of such environmental variations on organisms, transcriptome data in field environments have been collected and analyzed. Nagano et al. proposed a model that describes the relationship between transcriptomic variation and environmental conditions and demonstrated the capability to predict transcriptome variation in rice plants. However, the computational cost of parameter optimization has prevented its wide application. Results: We propose a new statistical model and efficient parameter optimization based on the previous study. We developed and released FIT, an R package that offers functions for parameter optimization and transcriptome prediction. The proposed method achieves comparable or better prediction performance within a shorter computational time than the previous method. The package will facilitate the study of the environmental effects on transcriptomic variation in field conditions. Availability and Implementation: Freely available from CRAN (https://cran.r-project.org/web/packages/FIT/). Contact: anagano@agr.ryukoku.ac.jp Supplementary information: Supplementary data are available at Bioinformatics online PMID:28158396
NASA Astrophysics Data System (ADS)
Ito, Shin-ichi; Yoshie, Naoki; Okunishi, Takeshi; Ono, Tsuneo; Okazaki, Yuji; Kuwata, Akira; Hashioka, Taketo; Rose, Kenneth A.; Megrey, Bernard A.; Kishi, Michio J.; Nakamachi, Miwa; Shimizu, Yugo; Kakehi, Shigeho; Saito, Hiroaki; Takahashi, Kazutaka; Tadokoro, Kazuaki; Kusaka, Akira; Kasai, Hiromi
2010-10-01
The Oyashio region in the western North Pacific supports high biological productivity and has been well monitored. We applied the NEMURO (North Pacific Ecosystem Model for Understanding Regional Oceanography) model to simulate the nutrients, phytoplankton, and zooplankton dynamics. Determination of parameter values is very important, yet ad hoc calibration methods are often used. We used the automatic calibration software PEST (model-independent Parameter ESTimation), which has been used previously with NEMURO but in a system without ontogenetic vertical migration of the large zooplankton functional group. Determining the performance of PEST with vertical migration, and obtaining a set of realistic parameter values for the Oyashio, will likely be useful in future applications of NEMURO. Five identical twin simulation experiments were performed with the one-box version of NEMURO. The experiments differed in whether monthly snapshot or averaged state variables were used, in whether state variables were model functional groups or were aggregated (total phytoplankton, small plus large zooplankton), and in whether vertical migration of large zooplankton was included or not. We then applied NEMURO to monthly climatological field data covering 1 year for the Oyashio, and compared model fits and parameter values between PEST-determined estimates and values used in previous applications to the Oyashio region that relied on ad hoc calibration. We substituted the PEST and ad hoc calibrated parameter values into a 3-D version of NEMURO for the western North Pacific, and compared the two sets of spatial maps of chlorophyll-a with satellite-derived data. The identical twin experiments demonstrated that PEST could recover the known model parameter values when vertical migration was included, and that over-fitting can occur as a result of slight differences in the values of the state variables.
PEST recovered known parameter values when using monthly snapshots of aggregated state variables, but estimated a different set of parameters with monthly averaged values. Both sets of parameters resulted in good fits of the model to the simulated data. Disaggregating the variables provided to PEST into functional groups did not solve the over-fitting problem, and including vertical migration seemed to amplify the problem. When we used the climatological field data, simulated values with PEST-estimated parameters were closer to these field data than with the previously determined ad hoc set of parameter values. When these same PEST and ad hoc sets of parameter values were substituted into 3-D-NEMURO (without vertical migration), the PEST-estimated parameter values generated spatial maps that were similar to the satellite data for the Kuroshio Extension during January and March and for the subarctic ocean from May to November. With non-linear problems, such as vertical migration, PEST should be used with caution because parameter estimates can be sensitive to how the data are prepared and to the values used for the searching parameters of PEST. We recommend the usage of PEST, or other parameter optimization methods, to generate first-order parameter estimates for simulating specific systems and for insertion into 2-D and 3-D models. The parameter estimates that are generated are useful, and the inconsistencies between simulated values and the available field data provide valuable information on model behavior and the dynamics of the ecosystem.
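An identical-twin experiment of the kind described reduces to a simple pattern: generate "observations" from a model with known parameters, then check whether the estimator recovers them. A toy logistic bloom stands in for NEMURO here (PEST itself is a model-independent external estimator), and all values are illustrative:

```python
import numpy as np
from scipy.optimize import curve_fit

# Toy identical-twin experiment: simulate data with known parameters,
# then refit and compare the estimates against the truth.
def bloom(t, r, K):
    P0 = 0.1  # initial phytoplankton biomass (arbitrary units)
    return K / (1.0 + (K / P0 - 1.0) * np.exp(-r * t))

t = np.linspace(0.0, 30.0, 31)  # days
truth = (0.4, 2.0)
obs = bloom(t, *truth)          # snapshot "data", no noise

(r_fit, K_fit), _ = curve_fit(bloom, t, obs, p0=[0.2, 1.0])
# With snapshot data the known parameters are recovered; time-averaging
# the observations first can instead yield a different parameter set
# that fits nearly as well -- the identifiability issue the study's
# identical-twin experiments expose.
```

The same diagnostic scales up: if the estimator cannot recover known parameters from model-generated data, it cannot be trusted on real field data.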
Muir, W M; Howard, R D
2001-07-01
Any release of transgenic organisms into nature is a concern because ecological relationships between genetically engineered organisms and other organisms (including their wild-type conspecifics) are unknown. To address this concern, we developed a method to evaluate risk in which we input estimates of fitness parameters from a founder population into a recurrence model to predict changes in transgene frequency after a simulated transgenic release. With this method, we grouped various aspects of an organism's life cycle into six net fitness components: juvenile viability, adult viability, age at sexual maturity, female fecundity, male fertility, and mating advantage. We estimated these components for wild-type and transgenic individuals using the fish Japanese medaka (Oryzias latipes). We generalized our model's predictions using various combinations of fitness component values in addition to our experimentally derived estimates. Our model predicted that, for a wide range of parameter values, transgenes could spread in populations despite high juvenile viability costs if transgenes also have sufficiently high positive effects on other fitness components. Sensitivity analyses indicated that transgene effects on age at sexual maturity should have the greatest impact on transgene frequency, followed by juvenile viability, mating advantage, female fecundity, and male fertility, with changes in adult viability having the least impact.
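The recurrence idea can be illustrated with a deliberately simplified haploid sketch: the six net fitness components of the actual (diploid) model are collapsed into a single multiplicative relative fitness per genotype, and a standard selection recurrence tracks transgene frequency. All numbers are hypothetical.

```python
# Simplified haploid sketch of the recurrence model: fitness components
# are multiplied into one relative fitness per type (the real model of
# Muir & Howard tracks six components in a diploid life cycle).
def net_fitness(viability, fecundity, mating_advantage):
    return viability * fecundity * mating_advantage

def transgene_frequency(p0, w_transgenic, w_wildtype, generations):
    p = p0
    for _ in range(generations):
        mean_w = p * w_transgenic + (1 - p) * w_wildtype
        p = p * w_transgenic / mean_w      # standard selection recurrence
    return p

# "Trojan gene"-style scenario: viability cost, large mating advantage.
w_t = net_fitness(viability=0.7, fecundity=1.0, mating_advantage=2.0)
w_w = net_fitness(viability=1.0, fecundity=1.0, mating_advantage=1.0)
p_final = transgene_frequency(0.01, w_t, w_w, generations=50)
print(p_final)   # frequency rises toward 1 despite the viability cost
```

Even this toy version reproduces the paper's headline result: a viability cost can be overwhelmed by a sufficiently strong advantage in another fitness component.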
Garabedian, Stephen P.
1986-01-01
A nonlinear, least-squares regression technique for the estimation of ground-water flow model parameters was applied to the regional aquifer underlying the eastern Snake River Plain, Idaho. The technique uses a computer program to simulate two-dimensional, steady-state ground-water flow. Hydrologic data for the 1980 water year were used to calculate recharge rates, boundary fluxes, and spring discharges. Ground-water use was estimated from irrigated land maps and crop consumptive-use figures. These estimates of ground-water withdrawal, recharge rates, and boundary flux, along with leakance, were used as known values in the model calibration of transmissivity. Leakance values were adjusted between regression solutions by comparing model-calculated to measured spring discharges. In other simulations, recharge and leakance also were calibrated as prior-information regression parameters, which limits the variation of these parameters using a normalized standard error of estimate. Results from a best-fit model indicate a wide areal range in transmissivity from about 0.05 to 44 feet squared per second and in leakance from about 2.2×10⁻⁹ to 6.0×10⁻⁸ feet per second per foot. Along with parameter values, model statistics also were calculated, including the coefficient of correlation between calculated and observed head (0.996), the standard error of the estimates for head (40 feet), and the parameter coefficients of variation (about 10-40 percent). Additional boundary flux was added in some areas during calibration to achieve proper fit to ground-water flow directions. Model fit improved significantly when areas that violated model assumptions were removed. It also improved slightly when y-direction (northwest-southeast) transmissivity values were larger than x-direction (northeast-southwest) transmissivity values. 
The model was most sensitive to changes in recharge, and in some areas, to changes in transmissivity, particularly near the spring discharge area from Milner Dam to King Hill.
Allen, Mark B; Brey, Richard R; Gesell, Thomas; Derryberry, Dewayne; Poudel, Deepesh
2016-01-01
The goal of this study was to evaluate the predictive capabilities of the National Council on Radiation Protection and Measurements (NCRP) wound model coupled to the International Commission on Radiological Protection (ICRP) systemic model for 90Sr-contaminated wounds using non-human primate data. Studies were conducted on 13 macaque (Macaca mulatta) monkeys, each receiving a one-time intramuscular injection of 90Sr solution. Urine and feces samples were collected up to 28 d post-injection and analyzed for 90Sr activity. Integrated Modules for Bioassay Analysis (IMBA) software was configured with default NCRP and ICRP model transfer coefficients to calculate predicted 90Sr intake via the wound based on the radioactivity measured in bioassay samples. The default parameters of the combined models produced adequate fits of the bioassay data, but maximum likelihood predictions of intake were overestimated by a factor of 1.0 to 2.9 when bioassay data were used as predictors. Skeletal retention was also over-predicted, suggesting an underestimation of the excretion fraction. Bayesian statistics and Monte Carlo sampling were applied using IMBA to vary the default parameters, producing updated transfer coefficients for individual monkeys that improved model fit and predicted intake and skeletal retention. The geometric means of the optimized transfer rates for the 11 cases were computed, and these optimized sample population parameters were tested on two independent monkey cases and on the 11 monkeys from which the optimized parameters were derived. The optimized model parameters did not improve the model fit in most cases, and the predicted skeletal activity produced improvements in three of the 11 cases. The optimized parameters improved the predicted intake in all cases but still over-predicted the intake by an average of 50%. The results suggest that the modified transfer rates were not always an improvement over the default NCRP and ICRP model values.
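The Bayesian refinement step can be sketched with a one-compartment toy: a Metropolis sampler adjusts a single retention rate to fit noisy "bioassay" data. The model, noise level, and rate value are illustrative assumptions, not the NCRP/ICRP biokinetic models used via IMBA.

```python
# Toy Metropolis sampler in the spirit of the Bayesian parameter-varying
# step: a one-compartment retention model exp(-lam*t) is fitted to noisy
# synthetic "bioassay" data. All values are illustrative.
import math, random
random.seed(0)

t_obs = [1, 3, 7, 14, 28]                      # days post-injection
lam_true = 0.12
data = [math.exp(-lam_true * t) * (1 + random.gauss(0, 0.02))
        for t in t_obs]

def log_likelihood(lam):
    if lam <= 0:
        return -math.inf                       # flat prior on lam > 0
    sigma = 0.02
    return -sum((d - math.exp(-lam * t)) ** 2
                for d, t in zip(data, t_obs)) / (2 * sigma ** 2)

lam, samples = 0.05, []
for step in range(20000):
    prop = lam + random.gauss(0, 0.01)         # random-walk proposal
    if math.log(random.random()) < log_likelihood(prop) - log_likelihood(lam):
        lam = prop                             # Metropolis accept
    if step > 5000:                            # discard burn-in
        samples.append(lam)

posterior_mean = sum(samples) / len(samples)
print(posterior_mean)                          # close to lam_true = 0.12
```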
A Generalized QMRA Beta-Poisson Dose-Response Model.
Xie, Gang; Roiko, Anne; Stratton, Helen; Lemckert, Charles; Dunn, Peter K; Mengersen, Kerrie
2016-10-01
Quantitative microbial risk assessment (QMRA) is widely accepted for characterizing the microbial risks associated with food, water, and wastewater. Single-hit dose-response models are the most commonly used dose-response models in QMRA. Denoting PI(d) as the probability of infection at a given mean dose d, a three-parameter generalized QMRA beta-Poisson dose-response model, PI(d|α,β,r*), is proposed in which the minimum number of organisms required to cause infection, Kmin, is not fixed but is a random variable following a geometric distribution with parameter 0
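For context, the classical two-parameter approximate beta-Poisson form that the generalized model extends can be written directly; the α and β values below are illustrative only, not fitted estimates from the paper.

```python
# Classical approximate beta-Poisson dose-response model:
#   P(infection) = 1 - (1 + d/beta)^(-alpha)
# alpha and beta here are illustrative, not fitted values.

def beta_poisson(dose, alpha, beta):
    return 1.0 - (1.0 + dose / beta) ** (-alpha)

# Monotone in dose, zero at zero dose, bounded in [0, 1):
print(beta_poisson(0.0, alpha=0.25, beta=42.0))    # 0.0
print(beta_poisson(1e2, alpha=0.25, beta=42.0))    # moderate risk
print(beta_poisson(1e6, alpha=0.25, beta=42.0))    # risk approaches 1
```

The generalized model of the abstract replaces the fixed single-hit assumption (minimum infective number of one organism) with a geometrically distributed Kmin, adding the third parameter r*.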
AN ONLINE CATALOG OF CATACLYSMIC VARIABLE SPECTRA FROM THE FAR-ULTRAVIOLET SPECTROSCOPIC EXPLORER
DOE Office of Scientific and Technical Information (OSTI.GOV)
Godon, Patrick; Sion, Edward M.; Levay, Karen
2012-12-15
We present an online catalog containing spectra and supporting information for cataclysmic variables that have been observed with the Far-Ultraviolet Spectroscopic Explorer (FUSE). For each object in the catalog we list some of the basic system parameters such as (R.A., decl.), period, inclination, and white dwarf mass, as well as information on the available FUSE spectra: data ID, observation date and time, and exposure time. In addition, we provide parameters needed for the analysis of the FUSE spectra such as the reddening E(B - V), distance, and state (high, low, intermediate) of the system at the time it was observed. For some of these spectra we have carried out model fits to the continuum with synthetic stellar and/or disk spectra using the codes TLUSTY and SYNSPEC. We provide the parameters obtained from these model fits; this includes the white dwarf temperature, gravity, projected rotational velocity, and elemental abundances of C, Si, S, and N, together with the disk mass accretion rate, the resulting inclination, and model-derived distance (when unknown). For each object one or more figures are provided (as gif files) with line identification and model fit(s) when available. The FUSE spectra and the synthetic spectra are directly available for download as ASCII tables. References are provided for each object, as well as for the model fits. In this article we present 36 objects, and additional ones will be added to the online catalog in the future. In addition to cataclysmic variables, we also include a few related objects, such as a wind-accreting white dwarf, a pre-cataclysmic variable, and some symbiotics.
Atmospheric and Fundamental Parameters of Stars in Hubble's Next Generation Spectral Library
NASA Technical Reports Server (NTRS)
Heap, Sally
2010-01-01
Hubble's Next Generation Spectral Library (NGSL) consists of R approximately 1000 spectra of 374 stars of assorted temperature, gravity, and metallicity. We are presently working to determine the atmospheric and fundamental parameters of the stars from the NGSL spectra themselves via full-spectrum fitting of model spectra to the observed (extinction-corrected) spectrum over the full wavelength range, 0.2-1.0 micron. We use two grids of model spectra for this purpose: the very low-resolution spectral grid from Castelli-Kurucz (2004), and the grid from MARCS (2008). Both the observed spectrum and the MARCS spectra are first degraded in resolution to match the very low resolution of the Castelli-Kurucz models, so that our fitting technique is the same for both model grids. We will present our preliminary results along with a comparison to those from the Sloan/SEGUE Stellar Parameter Pipeline, ELODIE, MILES, etc.
NASA Astrophysics Data System (ADS)
Gebresellasie, K.; Shirokoff, J.; Lewis, J. C.
2012-12-01
X-ray line spectra profile fitting using Pearson VII, pseudo-Voigt and generalized Fermi functions was performed on asphalt binders prior to the calculation of aromaticity and crystallite size parameters. The effects of these functions on the results are presented and discussed in terms of the peak profile fit parameters, the uncertainties in calculated values that can arise owing to peak shape, peak features in the pattern and crystallite size according to the asphalt models (Yen, modified Yen or Yen-Mullins) and theories. Interpretation of these results is important in terms of evaluating the performance of asphalt binders widely used in the application of transportation systems (roads, highways, airports).
Liu, S.; Anderson, P.; Zhou, G.; Kauffman, B.; Hughes, F.; Schimel, D.; Watson, Vicente; Tosi, Joseph
2008-01-01
Objectively assessing the performance of a model and deriving model parameter values from observations are critical and challenging in landscape to regional modeling. In this paper, we applied a nonlinear inversion technique to calibrate the ecosystem model CENTURY against carbon (C) and nitrogen (N) stock measurements collected from 39 mature tropical forest sites in seven life zones in Costa Rica. Net primary productivity from the Moderate-Resolution Imaging Spectroradiometer (MODIS), and C and N stocks in aboveground live biomass, litter, coarse woody debris (CWD), and soils were used to calibrate the model. To investigate how well the available observations could resolve the adjustable parameters, inversion was performed using nine setups of adjustable parameters. Statistics including observation sensitivity, parameter correlation coefficient, parameter sensitivity, and parameter confidence limits were used to evaluate the information content of observations, resolution of model parameters, and overall model performance. Results indicated that soil organic carbon content, soil nitrogen content, and total aboveground biomass carbon had the highest information contents, while measurements of carbon in litter and nitrogen in CWD contributed little to the parameter estimation processes. The available information could resolve the values of 2-4 parameters. Adjusting just one parameter resulted in under-fitting and unacceptable model performance, while adjusting five parameters simultaneously led to over-fitting. Results further indicated that the MODIS NPP values were compressed as compared with the spatial variability of net primary production (NPP) values inferred from inverse modeling. 
Using inverse modeling to infer NPP and other sensitive model parameters from C and N stock observations provides an opportunity to utilize data collected by national to regional forest inventory systems to reduce the uncertainties in the carbon cycle and generate valuable databases to validate and improve MODIS NPP algorithms.
NASA Astrophysics Data System (ADS)
Song, Huan; Hu, Yaogai; Jiang, Chunhua; Zhou, Chen; Zhao, Zhengyu; Zou, Xianjian
2016-12-01
Scaling oblique ionograms plays an important role in obtaining the ionospheric structure at the midpoint of an oblique sounding path. This paper proposes an automatic scaling method based on a hybrid genetic algorithm (HGA) to extract the trace and parameters of an oblique ionogram. The 10 extracted parameters come from the F2 layer and Es layer, and include the maximum observation frequency, critical frequency, and virtual height. The method adopts a quasi-parabolic (QP) model to describe the F2 layer's electron density profile, which is used to synthesize the trace. It utilizes the secant theorem, Martyn's equivalent path theorem, image processing technology, and the echoes' characteristics to determine the best-fit values of seven parameters and the initial values of the three remaining QP-model parameters, whose search spaces form the input data needed by the HGA. The HGA then searches for the best-fit values of these three parameters within their search spaces based on the fitness between the synthesized trace and the real trace. To verify the performance of the method, 240 oblique ionograms were scaled and the results compared with manual scaling results and with the inversion results of the corresponding vertical ionograms. The comparison shows that the scaling results are accurate, or at least adequate, 60-90% of the time.
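A minimal real-coded genetic algorithm in the spirit of the HGA search step might look like this: it recovers three parameters of a simple quadratic "trace" by minimizing the misfit to a reference trace. The operators, rates, and the quadratic stand-in for the QP-model trace are all illustrative assumptions.

```python
# Minimal real-coded GA sketch: fitness is the (negated) misfit between
# a synthesized trace and the reference trace, as in the HGA search.
# The quadratic trace model and GA settings are illustrative.
import random
random.seed(1)

xs = [x / 10 for x in range(50)]
true = (1.5, -0.8, 3.0)                       # hidden "trace" parameters
target = [true[0] * x**2 + true[1] * x + true[2] for x in xs]

def misfit(p):
    return sum((p[0] * x**2 + p[1] * x + p[2] - y) ** 2
               for x, y in zip(xs, target))

pop = [[random.uniform(-5, 5) for _ in range(3)] for _ in range(60)]
for _ in range(200):
    pop.sort(key=misfit)
    elite = pop[:20]                          # selection (elitist)
    children = []
    while len(children) < 40:
        a, b = random.sample(elite, 2)
        child = [(ai + bi) / 2 for ai, bi in zip(a, b)]  # crossover
        child[random.randrange(3)] += random.gauss(0, 0.1)  # mutation
        children.append(child)
    pop = elite + children

best = min(pop, key=misfit)
print(best, misfit(best))                     # best near (1.5, -0.8, 3.0)
```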
NASA Astrophysics Data System (ADS)
Antoshechkina, P. M.; Wolf, A. S.; Hamecher, E. A.; Asimow, P. D.; Ghiorso, M. S.
2013-12-01
Community databases such as EarthChem, LEPR, and AMCSD both increase demand for quantitative petrological tools, including thermodynamic models like the MELTS family of algorithms, and are invaluable in development of such tools. The need to extend existing solid solution models to include minor components such as Cr and Na has been evident for years but as the number of components increases it becomes impossible to completely separate derivation of end-member thermodynamic data from calibration of solution properties. In Hamecher et al. (2012; 2013) we developed a calibration scheme that directly interfaces with a MySQL database based on LEPR, with volume data from AMCSD and elsewhere. Here we combine that scheme with a Bayesian approach, where independent constraints on parameter values (e.g. existence of miscibility gaps) are combined with uncertainty propagation to give a more reliable best-fit along with associated model uncertainties. We illustrate the scheme with a new model of molar volume for (Ca,Fe,Mg,Mn,Na)3(Al,Cr,Fe3+,Fe2+,Mg,Mn,Si,Ti)2Si3O12 cubic garnets. For a garnet in this chemical system, the model molar volume is obtained by adding excess volume terms to a linear combination of nine independent end-member volumes. The model calibration is broken into three main stages: (1) estimation of individual end-member thermodynamic properties; (2) calibration of standard state volumes for all available independent and dependent end members; (3) fitting of binary and mixed composition data. For each calibration step, the goodness-of-fit includes weighted residuals as well as χ2-like penalty terms representing the (not necessarily Gaussian) prior constraints on parameter values. Using the Bayesian approach, uncertainties are correctly propagated forward to subsequent steps, allowing determination of final parameter values and correlated uncertainties that account for the entire calibration process. 
For the aluminosilicate garnets, optimal values of the bulk modulus and its pressure derivative are obtained by fitting published compression data using the Vinet equation of state, with the Mie-Grüneisen-Debye thermal pressure formalism to model thermal expansion. End-member thermal parameters are obtained by fitting volume data while ensuring that the heat capacity is consistent with the thermodynamic database of Berman and co-workers. For other end members, data for related compositions are used where such data exist; otherwise ultrasonic data or density functional theory results are taken or, for thermal parameters, systematics in cation radii are used. In stages (2) and (3) the remaining data at ambient conditions are fit. Using this step-wise calibration scheme, most parameters are modified little by subsequent calibration steps but some, such as the standard state volume of the Ti-bearing end member, can vary within calculated uncertainties. The final model satisfies desired criteria and fits almost all the data (more than 1000 points); only excess parameters that are justified by the data are activated. The scheme can be easily extended to calibration of end-member and solution properties from experimental phase equilibria. As a first step we obtain the internally consistent standard state entropy and enthalpy of formation for knorringite and discuss differences between our results and those of Klemme and co-workers.
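The Vinet equation of state used in the compression fits has the closed form P(V) = 3K0 (1 - x) x^-2 exp[1.5(K0' - 1)(1 - x)] with x = (V/V0)^(1/3); a direct transcription follows. The sample inputs are generic illustrative numbers, not the paper's fitted garnet values.

```python
# Vinet equation of state: pressure as a function of volume.
# V0 = zero-pressure volume, K0 = bulk modulus, K0_prime = dK/dP at P=0.
# Input values below are illustrative, not fitted garnet parameters.
import math

def vinet_pressure(V, V0, K0, K0_prime):
    x = (V / V0) ** (1.0 / 3.0)
    eta = 1.5 * (K0_prime - 1.0)
    return 3.0 * K0 * (1.0 - x) / x**2 * math.exp(eta * (1.0 - x))

# At V = V0 the pressure is zero; compression gives positive pressure.
print(vinet_pressure(110.0, 110.0, 171.0, 4.4))   # 0.0
print(vinet_pressure(100.0, 110.0, 171.0, 4.4))   # ~20 (same units as K0)
```

Fitting this form to published P-V compression data, as described above, yields the bulk modulus and its pressure derivative for each end member.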
NASA Astrophysics Data System (ADS)
Švajdlenková, H.; Ruff, A.; Lunkenheimer, P.; Loidl, A.; Bartoš, J.
2017-08-01
We report a broadband dielectric spectroscopic (BDS) study on the clustering fragile glass-former meta-toluidine (m-TOL) from 187 K up to 289 K over a wide frequency range of 10-3-109 Hz with focus on the primary α relaxation and the secondary β relaxation above the glass temperature Tg. The broadband dielectric spectra were fitted by using the Havriliak-Negami (HN) and Cole-Cole (CC) models. The β process disappearing at Tβ,disap = 1.12Tg exhibits non-Arrhenius dependence fitted by the Vogel-Fulcher-Tammann-Hesse equation with T0βVFTH in accord with the characteristic differential scanning calorimetry (DSC) limiting temperature of the glassy state. The essential feature of the α process consists in the distinct changes of its spectral shape parameter βHN marked by the characteristic BDS temperatures TB1βHN and TB2βHN. The primary α relaxation times were fitted over the entire temperature and frequency range by several current three-parameter up to six-parameter dynamic models. This analysis reveals that the crossover temperatures of the idealized mode coupling theory model (TcMCT), the extended free volume model (T0EFV), and the two-order parameter (TOP) model (Tmc) are close to TB1βHN, which provides a consistent physical rationalization for the first change of the shape parameter. In addition, the other two characteristic TOP temperatures T0TOP and TA are coinciding with the thermodynamic Kauzmann temperature TK and the second change of the shape parameter at around TB2βHN, respectively. These can be related to the onset of the liquid-like domains in the glassy state or the disappearance of the solid-like domains in the normal liquid state.
Using the Flipchem Photochemistry Model When Fitting Incoherent Scatter Radar Data
NASA Astrophysics Data System (ADS)
Reimer, A. S.; Varney, R. H.
2017-12-01
The north-face Resolute Bay Incoherent Scatter Radar (RISR-N) routinely images the dynamics of the polar ionosphere, providing measurements of the plasma density, electron temperature, ion temperature, and line-of-sight velocity with seconds-to-minutes time resolution. RISR-N does not directly measure ionospheric parameters, but backscattered signals, recording them as voltage samples. Using signal processing techniques, radar autocorrelation functions (ACFs) are estimated from the voltage samples. A model of the signal ACF is then fitted to the measured ACF using non-linear least-squares techniques to obtain the best-fit ionospheric parameters. The signal model, and therefore the fitted parameters, depend on the ionospheric ion composition that is used [e.g. Zettergren et al. (2010), Zou et al. (2017)]. The software used to process RISR-N ACF data includes the "flipchem" model, an ion photochemistry model developed by Richards [2011] that was adapted from the Field Line Interhemispheric Plasma (FLIP) model. Flipchem requires neutral densities, neutral temperatures, electron density, ion temperature, electron temperature, solar zenith angle, and F10.7 as inputs to compute ion densities, which are input to the signal model. A description of how the flipchem model is used in the RISR-N fitting software will be presented. Additionally, a statistical comparison of the fitted electron density, ion temperature, electron temperature, and velocity obtained using a flipchem ionosphere, a pure O+ ionosphere, and a Chapman O+ ionosphere will be presented. The comparison covers nearly two years of RISR-N data (April 2015 - December 2016). Richards, P. G. (2011), Reexamination of ionospheric photochemistry, J. Geophys. Res., 116, A08307, doi:10.1029/2011JA016613. Zettergren, M., Semeter, J., Burnett, B., Oliver, W., Heinselman, C., Blelly, P.-L., and Diaz, M.: Dynamic variability in F-region ionospheric composition at auroral arc boundaries, Ann. 
Geophys., 28, 651-664, https://doi.org/10.5194/angeo-28-651-2010, 2010. Zou, S., D. Ozturk, R. Varney, and A. Reimer (2017), Effects of sudden commencement on the ionosphere: PFISR observations and global MHD simulation, Geophys. Res. Lett., 44, 3047-3058, doi:10.1002/2017GL072678.
Mathieu, Amélie; Vidal, Tiphaine; Jullien, Alexandra; Wu, QiongLi; Chambon, Camille; Bayol, Benoit; Cournède, Paul-Henry
2018-06-19
Functional-structural plant models (FSPMs) describe explicitly the interactions between plants and their environment at organ to plant scale. However, the high level of description of the structure or model mechanisms makes this type of model very complex and hard to calibrate. A two-step methodology to facilitate the calibration process is proposed here. First, a global sensitivity analysis method was applied to the calibration loss function. It provided first-order and total-order sensitivity indexes that allow parameters to be ranked by importance in order to select the most influential ones. Second, the Akaike information criterion (AIC) was used to quantify the model's quality of fit after calibration with different combinations of selected parameters. The model with the lowest AIC gives the best combination of parameters to select. This methodology was validated by calibrating the model on an independent data set (same cultivar, another year) with the parameters selected in the second step. All the parameters were set to their nominal value; only the most influential ones were re-estimated. Sensitivity analysis applied to the calibration loss function is a relevant method to underline the most significant parameters in the estimation process. For the studied winter oilseed rape model, 11 out of 26 estimated parameters were selected. Then, the model could be recalibrated for a different data set by re-estimating only three parameters selected with the model selection method. Fitting only a small number of parameters dramatically increases the efficiency of recalibration, increases the robustness of the model and helps identify the principal sources of variation in varying environmental conditions. This innovative method still needs to be more widely validated but already gives interesting avenues to improve the calibration of FSPMs.
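The model-selection step described above can be sketched with the least-squares form of the Akaike information criterion, AIC = n ln(RSS/n) + 2k: the calibration that balances fit quality against the number of re-estimated parameters wins. The observation count and RSS values below are invented for illustration.

```python
# AIC comparison across calibrations that re-estimate different numbers
# of parameters (least-squares form). All numbers are illustrative.
import math

def aic(rss, n, k):
    return n * math.log(rss / n) + 2 * k

n = 120                      # hypothetical number of observations
fits = {
    "3 params":  aic(rss=45.0, n=n, k=3),    # under-parameterized
    "11 params": aic(rss=30.0, n=n, k=11),   # balanced
    "26 params": aic(rss=28.5, n=n, k=26),   # marginal gain, big penalty
}
best = min(fits, key=fits.get)
print(best)                  # the middle model wins: good fit, few params
```

This mirrors the paper's finding that 11 of 26 estimated parameters sufficed: the extra parameters of the full set buy too little RSS reduction to justify their AIC penalty.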
Induced subgraph searching for geometric model fitting
NASA Astrophysics Data System (ADS)
Xiao, Fan; Xiao, Guobao; Yan, Yan; Wang, Xing; Wang, Hanzi
2017-11-01
In this paper, we propose a novel model fitting method based on graphs to fit and segment multiple-structure data. In the graph constructed on the data, each model instance is represented as an induced subgraph. Following the idea of pursuing the maximum consensus, the multiple geometric model fitting problem is formulated as searching for a set of induced subgraphs including the maximum union set of vertices. After the generation and refinement of the induced subgraphs that represent the model hypotheses, the searching process is conducted on the "qualified" subgraphs. Multiple model instances can be simultaneously estimated by solving a converted problem. Then, we introduce an energy evaluation function to determine the number of model instances in the data. The proposed method is able to effectively estimate the number and the parameters of model instances in data severely corrupted by outliers and noise. Experimental results on synthetic data and real images validate the favorable performance of the proposed method compared with several state-of-the-art fitting methods.
Nichols, J.D.; Pollock, K.H.
1983-01-01
Capture-recapture models can be used to estimate parameters of interest from paleobiological data when encounter probabilities are unknown and variable over time. These models also permit estimation of sampling variances, and goodness-of-fit tests are available for assessing the fit of data to most models. The authors describe capture-recapture models which should be useful in paleobiological analyses and discuss the assumptions which underlie them. They illustrate these models with examples and discuss aspects of study design.
NASA Astrophysics Data System (ADS)
Varady, Mark; Mantooth, Brent; Pearl, Thomas; Willis, Matthew
2014-03-01
A continuum model of reactive decontamination in absorbing polymeric thin film substrates exposed to the chemical warfare agent O-ethyl S-[2-(diisopropylamino)ethyl] methylphosphonothioate (known as VX) was developed to assess the performance of various decontaminants. Experiments were performed in conjunction with an inverse analysis method to obtain the necessary model parameters. The experiments involved contaminating a substrate with a fixed VX exposure, applying a decontaminant, followed by a time-resolved, liquid phase extraction of the absorbing substrate to measure the residual contaminant by chromatography. Decontamination model parameters were uniquely determined using the Levenberg-Marquardt nonlinear least squares fitting technique to best fit the experimental time evolution of extracted mass. The model was implemented numerically in both a 2D axisymmetric finite element program and a 1D finite difference code, and it was found that the more computationally efficient 1D implementation was sufficiently accurate. The resulting decontamination model provides an accurate quantification of contaminant concentration profile in the material, which is necessary to assess exposure hazards.
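The fitting step can be approximated with scipy's curve_fit, which uses Levenberg-Marquardt for unbounded problems; a simple saturating extraction curve stands in here for the paper's full reaction-diffusion model, and all parameter values are illustrative.

```python
# Levenberg-Marquardt sketch: fit a saturating extraction curve
#   M(t) = M_inf * (1 - exp(-k*t))
# to noisy time-resolved extracted-mass data. This toy model stands in
# for the paper's continuum reaction-diffusion model; values illustrative.
import numpy as np
from scipy.optimize import curve_fit

def extracted_mass(t, m_inf, k):
    return m_inf * (1.0 - np.exp(-k * t))

t = np.array([1.0, 2.0, 5.0, 10.0, 20.0, 40.0])   # extraction times
rng = np.random.default_rng(0)
data = extracted_mass(t, 8.0, 0.15) + rng.normal(0, 0.05, t.size)

popt, pcov = curve_fit(extracted_mass, t, data, p0=[1.0, 1.0])
print(popt)                       # ~[8.0, 0.15]
print(np.sqrt(np.diag(pcov)))     # 1-sigma parameter uncertainties
```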
Light and compressed gluinos at the LHC via string theory.
AbdusSalam, S S
2017-01-01
In this article, we show that making global fits of string theory model parameters to data is an interesting mechanism for probing, mapping and forecasting connections of the theory to real world physics. We considered a large volume scenario (LVS) with D3-brane matter fields and supersymmetry breaking. A global fit of the parameters to low-energy data shows that the set of LVS models are associated with light gluinos which are quasi-degenerate with the neutralinos and charginos they can promptly decay into, and thus they are possibly hidden to current LHC gluino search strategies.
Conditional statistical inference with multistage testing designs.
Zwitser, Robert J; Maris, Gunter
2015-03-01
In this paper it is demonstrated how statistical inference from multistage test designs can be made based on the conditional likelihood. Special attention is given to parameter estimation, as well as the evaluation of model fit. Two reasons are provided why the fit of simple measurement models is expected to be better in adaptive designs, compared to linear designs: more parameters are available for the same number of observations; and undesirable response behavior, like slipping and guessing, might be avoided owing to a better match between item difficulty and examinee proficiency. The results are illustrated with simulated data, as well as with real data.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sun, Yu; Hou, Zhangshuan; Huang, Maoyi
2013-12-10
This study demonstrates the possibility of inverting hydrologic parameters using surface flux and runoff observations in version 4 of the Community Land Model (CLM4). Previous studies showed that surface flux and runoff calculations are sensitive to major hydrologic parameters in CLM4 over different watersheds, and illustrated the necessity and possibility of parameter calibration. Two inversion strategies, deterministic least-squares fitting and stochastic Markov chain Monte Carlo (MCMC) Bayesian inversion, are evaluated by applying them to CLM4 at selected sites. The unknowns to be estimated include surface and subsurface runoff generation parameters and vadose zone soil water parameters. We find that using model parameters calibrated by least-squares fitting provides little improvement in the model simulations, but the sampling-based stochastic inversion approaches are consistent - as more information comes in, the predictive intervals of the calibrated parameters become narrower and the misfits between the calculated and observed responses decrease. In general, parameters that are identified to be significant through sensitivity analyses and statistical tests are better calibrated than those with weak or nonlinear impacts on flux or runoff observations. Temporal resolution of observations has larger impacts on the results of inverse modeling using heat flux data than runoff data. Soil and vegetation cover have important impacts on parameter sensitivities, leading to different patterns of posterior distributions of parameters at different sites. Overall, the MCMC-Bayesian inversion approach effectively and reliably improves the simulation of CLM under different climates and environmental conditions. 
Bayesian model averaging of the posterior estimates with different reference acceptance probabilities can smooth the posterior distribution and provide more reliable parameter estimates, but at the expense of wider uncertainty bounds.
Glassy dynamics in three-dimensional embryonic tissues
Schötz, Eva-Maria; Lanio, Marcos; Talbot, Jared A.; Manning, M. Lisa
2013-01-01
Many biological tissues are viscoelastic, behaving as elastic solids on short timescales and fluids on long timescales. This collective mechanical behaviour enables and helps to guide pattern formation and tissue layering. Here, we investigate the mechanical properties of three-dimensional tissue explants from zebrafish embryos by analysing individual cell tracks and macroscopic mechanical response. We find that the cell dynamics inside the tissue exhibit features of supercooled fluids, including subdiffusive trajectories and signatures of caging behaviour. We develop a minimal, three-parameter mechanical model for these dynamics, which we calibrate using only information about cell tracks. This model generates predictions about the macroscopic bulk response of the tissue (with no fit parameters) that are verified experimentally, providing a strong validation of the model. The best-fit model parameters indicate that although the tissue is fluid-like, it is close to a glass transition, suggesting that small changes to single-cell parameters could generate a significant change in the viscoelastic properties of the tissue. These results provide a robust framework for quantifying and modelling mechanically driven pattern formation in tissues. PMID:24068179
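The cell-track analysis can be sketched by computing the mean-squared displacement (MSD) versus lag time and fitting the diffusive exponent on a log-log scale; a free random walk gives an exponent near 1, whereas the caged, subdiffusive trajectories reported above would give an exponent below 1. The synthetic tracks below are illustrative, not zebrafish data.

```python
# MSD analysis sketch for 3-D cell tracks: compute MSD(lag) averaged
# over cells and time origins, then fit the exponent on log-log axes.
# Synthetic free random walks here; caged cells would give exponent < 1.
import numpy as np
rng = np.random.default_rng(2)

# 200 "cells", 400 time points, 3 spatial dimensions (unit steps).
tracks = rng.normal(0, 1, size=(200, 400, 3)).cumsum(axis=1)

def msd(tracks, lags):
    out = []
    for lag in lags:
        disp = tracks[:, lag:, :] - tracks[:, :-lag, :]
        out.append((disp ** 2).sum(axis=2).mean())
    return np.array(out)

lags = np.arange(1, 30)
exponent = np.polyfit(np.log(lags), np.log(msd(tracks, lags)), 1)[0]
print(exponent)        # ~1 for free diffusion; < 1 signals caging
```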
The effect of noise-induced variance on parameter recovery from reaction times.
Vadillo, Miguel A; Garaizar, Pablo
2016-03-31
Technical noise can compromise the precision and accuracy of the reaction times collected in psychological experiments, especially in the case of Internet-based studies. Although this noise seems to have only a small impact on traditional statistical analyses, its effects on model fit to reaction-time distributions remain unexplored. Across four simulations we study the impact of technical noise on parameter recovery from data generated from an ex-Gaussian distribution and from a Ratcliff Diffusion Model. Our results suggest that the impact of noise-induced variance tends to be limited to specific parameters and conditions. Although we encourage researchers to adopt all measures to reduce the impact of noise on reaction-time experiments, we conclude that the typical amount of noise-induced variance found in these experiments does not pose substantial problems for statistical analyses based on model fitting.
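A lightweight stand-in for the ex-Gaussian fitting studied here is moment matching: the third central moment of an ex-Gaussian equals 2τ³, which lets all three parameters be recovered from simulated RTs. The μ, σ, τ values below are illustrative (seconds), and this is a sketch, not the paper's estimation procedure.

```python
# Moment-based ex-Gaussian parameter recovery from simulated RTs.
# An ex-Gaussian RT is Normal(mu, sigma) + Exponential(tau); its third
# central moment is 2*tau^3. Values are illustrative (in seconds).
import numpy as np
rng = np.random.default_rng(7)

mu, sigma, tau = 0.40, 0.05, 0.15
rts = rng.normal(mu, sigma, 100_000) + rng.exponential(tau, 100_000)

m3 = np.mean((rts - rts.mean()) ** 3)     # third central moment = 2*tau^3
tau_hat = (m3 / 2.0) ** (1.0 / 3.0)
sigma_hat = np.sqrt(rts.var() - tau_hat ** 2)   # var = sigma^2 + tau^2
mu_hat = rts.mean() - tau_hat                   # mean = mu + tau
print(mu_hat, sigma_hat, tau_hat)               # ~0.40, ~0.05, ~0.15
```

Adding simulated technical noise (e.g. uniform timer jitter) to `rts` and re-running the recovery is a quick way to see which parameter estimates absorb the extra variance.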
Tian, Lu; Wei, Wan-Zhi; Mao, You-An
2004-04-01
The adsorption of human serum albumin onto hydroxyapatite-modified silver electrodes has been investigated in situ with the piezoelectric quartz crystal impedance technique. Changes in the equivalent-circuit parameters were used to interpret the adsorption process. A kinetic model of two consecutive steps was derived to describe the process and compared with a first-order kinetic model by residual analysis. The experimental frequency-shift data were fitted to the model, and the kinetic parameters k1, k2, ψ1, ψ2 and qr were obtained. All fitted results were in reasonable agreement with the corresponding experimental results. Two adsorption activation energies (7.19 kJ mol⁻¹ and 22.89 kJ mol⁻¹) were calculated according to the Arrhenius formula.
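The Arrhenius formula, k = A exp(-Ea/RT), lets an activation energy be extracted from rate constants measured at two temperatures. A sketch of that calculation with invented rate constants and temperatures (not values from the study):

```python
import math

R = 8.314  # gas constant, J mol^-1 K^-1

def arrhenius_ea(k1, k2, t1, t2):
    """Activation energy (J/mol) from rate constants k1 at T1 and k2 at T2,
    using ln(k2/k1) = (Ea/R) * (1/T1 - 1/T2)."""
    return R * math.log(k2 / k1) / (1.0 / t1 - 1.0 / t2)

# Illustrative numbers only (not measured values from the paper):
ea = arrhenius_ea(k1=1.0e-3, k2=1.35e-3, t1=298.15, t2=308.15)
print(ea / 1000.0, "kJ/mol")   # on the order of tens of kJ/mol
```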
Genome-wide heterogeneity of nucleotide substitution model fit.
Arbiza, Leonardo; Patricio, Mateus; Dopazo, Hernán; Posada, David
2011-01-01
At a genomic scale, the patterns that have shaped molecular evolution are believed to be largely heterogeneous. Consequently, comparative analyses should use appropriate probabilistic substitution models that capture the main features under which different genomic regions have evolved. While efforts have concentrated on the development and understanding of model selection techniques, no descriptions of overall relative substitution model fit at the genome level have been reported. Here, we provide a characterization of best-fit substitution models across three genomic data sets including coding regions from mammals, vertebrates, and Drosophila (24,000 alignments). According to the Akaike Information Criterion (AIC), 82 of the 88 models considered were selected as best-fit models on at least one occasion, although with very different frequencies. Most parameter estimates also varied broadly among genes. Patterns found for vertebrates and Drosophila were quite similar and often more complex than those found in mammals. Phylogenetic trees derived from models in the 95% confidence interval set showed much less variance and were significantly closer to the tree estimated under the best-fit model than trees derived from models outside this interval. Although alternative criteria selected simpler models than the AIC, they suggested similar patterns. Altogether, our results show that at a genomic scale, different gene alignments for the same set of taxa are best explained by a large variety of substitution models, and that model choice has implications for different parameter estimates, including the inferred phylogenetic trees. After taking into account the differences related to sample size, our results suggest a noticeable diversity in the underlying evolutionary process. We conclude that the use of model selection techniques is important to obtain consistent phylogenetic estimates from real data at a genomic scale.
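AIC-based model selection, as used above, scores each candidate model by AIC = 2k - 2 ln L and picks the minimum; Akaike weights then express the relative support for each candidate. A schematic sketch with invented log-likelihoods and parameter counts for three standard substitution models (not values from the study):

```python
import math

def aic(log_likelihood, n_params):
    """Akaike Information Criterion: lower is better."""
    return 2 * n_params - 2 * log_likelihood

# Hypothetical fits of one alignment under three substitution models:
# (log-likelihood, number of free model parameters) -- values invented.
fits = {"JC69": (-5120.4, 0), "HKY85": (-5045.9, 4), "GTR+G": (-5032.1, 9)}

scores = {name: aic(lnL, k) for name, (lnL, k) in fits.items()}
best = min(scores, key=scores.get)

# Akaike weights: relative support for each model in the candidate set.
deltas = {m: scores[m] - scores[best] for m in scores}
raw = {m: math.exp(-deltas[m] / 2.0) for m in scores}
total = sum(raw.values())
weights = {m: raw[m] / total for m in scores}
print(best, round(weights[best], 4))
```

The extra parameters of the richer model are only kept if they buy enough log-likelihood, which is the trade-off the AIC formalizes.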
NASA Astrophysics Data System (ADS)
Mercier, Lény; Panfili, Jacques; Paillon, Christelle; N'diaye, Awa; Mouillot, David; Darnaude, Audrey M.
2011-05-01
Accurate knowledge of fish age and growth is crucial for species conservation and management of exploited marine stocks. In exploited species, age estimation based on otolith reading is routinely used to build the growth curves that feed fishery management models. However, the universal fit of the von Bertalanffy growth function (VBGF) to data from commercial landings can lead to uncertainty in growth parameter inference, preventing accurate comparison of growth-based life-history traits between fish populations. In the present paper, we used a comprehensive annual sample of wild gilthead seabream (Sparus aurata L.) in the Gulf of Lions (France, NW Mediterranean) to test a methodology for improving growth modelling in exploited fish populations. After validating the timing of otolith annual increment formation for all life stages, a comprehensive set of growth models (including the VBGF) was fitted to the resulting age-length data, used as a whole or subdivided between group-0 individuals and those from commercial landings (ages 1-6). Comparisons of growth-model accuracy based on the Akaike Information Criterion identified the best model for each dataset and, when no model correctly fitted the data, multi-model inference (MMI) based on model averaging was carried out. The results provided evidence that growth parameters inferred with the VBGF must be used with caution. The VBGF turned out to be among the least accurate models for growth prediction irrespective of the dataset, and its fits to the whole population, the juvenile dataset and the adult dataset yielded different growth parameters. The best models for growth prediction were the Tanaka model, for group-0 juveniles, and MMI, for the older fish, confirming that growth differs substantially between juveniles and adults. All asymptotic models failed to correctly describe the growth of adult S. aurata, probably because of the poor representation of old individuals in the dataset. 
Multi-model inference, combined with separate analyses of juvenile and adult fish, is therefore advised to obtain objective estimates of growth parameters when sampling cannot be corrected to better represent older fish.
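The VBGF is L(t) = L∞(1 - e^(-K(t - t0))), and multi-model inference averages predictions across candidate models using Akaike weights. A sketch with hypothetical fitted parameters and AIC scores for two asymptotic growth models (values are illustrative, not the paper's estimates):

```python
import math

def vbgf(t, l_inf, k, t0):
    """von Bertalanffy growth function: length at age t."""
    return l_inf * (1.0 - math.exp(-k * (t - t0)))

def gompertz(t, l_inf, k, ti):
    """Gompertz growth curve, another asymptotic model."""
    return l_inf * math.exp(-math.exp(-k * (t - ti)))

# Hypothetical fitted models as (prediction function, AIC score) -- illustrative:
models = [
    (lambda t: vbgf(t, 42.0, 0.30, -0.5), 512.8),
    (lambda t: gompertz(t, 40.0, 0.45, 1.2), 510.1),
]

# Multi-model inference: weight each model by its Akaike weight.
aic_min = min(score for _, score in models)
raw = [math.exp(-(score - aic_min) / 2.0) for _, score in models]
weights = [r / sum(raw) for r in raw]

age = 3.0
averaged = sum(w * f(age) for w, (f, _) in zip(weights, models))
print(weights, round(averaged, 2))
```

The model-averaged length at age 3 sits between the two single-model predictions, pulled toward the better-supported (lower-AIC) model.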
NASA Astrophysics Data System (ADS)
Planck Collaboration; Aghanim, N.; Akrami, Y.; Ashdown, M.; Aumont, J.; Baccigalupi, C.; Ballardini, M.; Banday, A. J.; Barreiro, R. B.; Bartolo, N.; Basak, S.; Benabed, K.; Bersanelli, M.; Bielewicz, P.; Bonaldi, A.; Bonavera, L.; Bond, J. R.; Borrill, J.; Bouchet, F. R.; Burigana, C.; Calabrese, E.; Cardoso, J.-F.; Challinor, A.; Chiang, H. C.; Colombo, L. P. L.; Combet, C.; Crill, B. P.; Curto, A.; Cuttaia, F.; de Bernardis, P.; de Rosa, A.; de Zotti, G.; Delabrouille, J.; Di Valentino, E.; Dickinson, C.; Diego, J. M.; Doré, O.; Ducout, A.; Dupac, X.; Dusini, S.; Efstathiou, G.; Elsner, F.; Enßlin, T. A.; Eriksen, H. K.; Fantaye, Y.; Finelli, F.; Forastieri, F.; Frailis, M.; Franceschi, E.; Frolov, A.; Galeotta, S.; Galli, S.; Ganga, K.; Génova-Santos, R. T.; Gerbino, M.; González-Nuevo, J.; Górski, K. M.; Gratton, S.; Gruppuso, A.; Gudmundsson, J. E.; Herranz, D.; Hivon, E.; Huang, Z.; Jaffe, A. H.; Jones, W. C.; Keihänen, E.; Keskitalo, R.; Kiiveri, K.; Kim, J.; Kisner, T. S.; Knox, L.; Krachmalnicoff, N.; Kunz, M.; Kurki-Suonio, H.; Lagache, G.; Lamarre, J.-M.; Lasenby, A.; Lattanzi, M.; Lawrence, C. R.; Le Jeune, M.; Levrier, F.; Lewis, A.; Liguori, M.; Lilje, P. B.; Lilley, M.; Lindholm, V.; López-Caniego, M.; Lubin, P. M.; Ma, Y.-Z.; Macías-Pérez, J. F.; Maggio, G.; Maino, D.; Mandolesi, N.; Mangilli, A.; Maris, M.; Martin, P. G.; Martínez-González, E.; Matarrese, S.; Mauri, N.; McEwen, J. D.; Meinhold, P. R.; Mennella, A.; Migliaccio, M.; Millea, M.; Miville-Deschênes, M.-A.; Molinari, D.; Moneti, A.; Montier, L.; Morgante, G.; Moss, A.; Narimani, A.; Natoli, P.; Oxborrow, C. A.; Pagano, L.; Paoletti, D.; Partridge, B.; Patanchon, G.; Patrizii, L.; Pettorino, V.; Piacentini, F.; Polastri, L.; Polenta, G.; Puget, J.-L.; Rachen, J. P.; Racine, B.; Reinecke, M.; Remazeilles, M.; Renzi, A.; Rocha, G.; Rossetti, M.; Roudier, G.; Rubiño-Martín, J. 
A.; Ruiz-Granados, B.; Salvati, L.; Sandri, M.; Savelainen, M.; Scott, D.; Sirignano, C.; Sirri, G.; Stanco, L.; Suur-Uski, A.-S.; Tauber, J. A.; Tavagnacco, D.; Tenti, M.; Toffolatti, L.; Tomasi, M.; Tristram, M.; Trombetti, T.; Valiviita, J.; Van Tent, F.; Vielva, P.; Villa, F.; Vittorio, N.; Wandelt, B. D.; Wehus, I. K.; White, M.; Zacchei, A.; Zonca, A.
2017-11-01
The six parameters of the standard ΛCDM model have best-fit values derived from the Planck temperature power spectrum that are shifted somewhat from the best-fit values derived from WMAP data. These shifts are driven by features in the Planck temperature power spectrum at angular scales that had never before been measured to cosmic-variance level precision. We have investigated these shifts to determine whether they are within the range of expectation and to understand their origin in the data. Taking our parameter set to be the optical depth of the reionized intergalactic medium τ, the baryon density ωb, the matter density ωm, the angular size of the sound horizon θ∗, the spectral index of the primordial power spectrum ns, and As e^(-2τ) (where As is the amplitude of the primordial power spectrum), we have examined the change in best-fit values between a WMAP-like large angular-scale data set (with multipole moment ℓ < 800 in the Planck temperature power spectrum) and an all angular-scale data set (ℓ < 2500 in the Planck temperature power spectrum), each with a prior on τ of 0.07 ± 0.02. We find that the shifts, in units of the 1σ expected dispersion for each parameter, are {Δτ, Δ[As e^(-2τ)], Δns, Δωm, Δωb, Δθ∗} = {-1.7, -2.2, 1.2, -2.0, 1.1, 0.9}, with a χ² value of 8.0. We find that this χ² value is exceeded in 15% of our simulated data sets, and that a parameter deviates by more than 2.2σ in 9% of simulated data sets, meaning that the shifts are not unusually large. Comparing ℓ < 800 instead to ℓ > 800, or splitting at a different multipole, yields similar results. We examined the ℓ < 800 model residuals in the ℓ > 800 power spectrum data and find that the features there that drive these shifts are a set of oscillations across a broad range of angular scales. 
Although they partly appear similar to the effects of enhanced gravitational lensing, the shifts in ΛCDM parameters that arise in response to these features correspond to model spectrum changes that are predominantly due to non-lensing effects; the only exception is τ, which, at fixed As e^(-2τ), affects the ℓ > 800 temperature power spectrum solely through the associated change in As and the impact of that on the lensing potential power spectrum. We also ask, "what is it about the power spectrum at ℓ < 800 that leads to somewhat different best-fit parameters than come from the full ℓ range?" We find that if we discard the data at ℓ < 30, where there is a roughly 2σ downward fluctuation in power relative to the model that best fits the full ℓ range, the ℓ < 800 best-fit parameters shift significantly towards the ℓ < 2500 best-fit parameters. In contrast, including ℓ < 30, this previously noted "low-ℓ deficit" drives ns up and impacts parameters correlated with ns, such as ωm and H0. As expected, the ℓ < 30 data have a much greater impact on the ℓ < 800 best fit than on the ℓ < 2500 best fit. So although the shifts are not very significant, we find that they can be understood through the combined effects of an oscillatory-like set of high-ℓ residuals and the deficit in low-ℓ power, excursions consistent with sample variance that happen to map onto changes in cosmological parameters. Finally, we examine agreement between Planck TT data and two other CMB data sets, namely the Planck lensing reconstruction and the TT power spectrum measured by the South Pole Telescope, again finding a lack of convincing evidence of any significant deviations in parameters, suggesting that current CMB data sets give an internally consistent picture of the ΛCDM model.
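Summing the squared shifts above gives roughly 15.2, not the quoted χ² of 8.0, because the parameters are correlated; the paper's χ² and its 15% exceedance rate come from simulations that include those correlations. A sketch contrasting the naive independent-parameter calculation with the survival probability of the quoted value (for six degrees of freedom, under an ideal χ² distribution):

```python
import math

# Parameter shifts in units of the 1-sigma expected dispersion (from the abstract):
shifts = {"tau": -1.7, "As*exp(-2tau)": -2.2, "ns": 1.2,
          "omega_m": -2.0, "omega_b": 1.1, "theta_*": 0.9}

# Naive chi-square, wrongly assuming the six parameters are independent:
chi2_naive = sum(s ** 2 for s in shifts.values())

def chi2_sf_even(x, dof):
    """Survival function of the chi-square distribution for even dof:
    P(X > x) = exp(-x/2) * sum_{i < dof/2} (x/2)^i / i!."""
    h = x / 2.0
    return math.exp(-h) * sum(h ** i / math.factorial(i) for i in range(dof // 2))

p_naive = chi2_sf_even(chi2_naive, 6)   # would look much less likely than it is
p_quoted = chi2_sf_even(8.0, 6)         # ~0.24 for the paper's correlated chi^2 = 8.0
print(round(chi2_naive, 2), round(p_naive, 3), round(p_quoted, 3))
```

The analytic survival probability of 8.0 (about 24%) is close to, but not identical to, the 15% exceedance found in the paper's simulations, which capture the full covariance structure of the data.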
Event generator tunes obtained from underlying event and multiparton scattering measurements.
Khachatryan, V; Sirunyan, A M; Tumasyan, A; Adam, W; Asilar, E; Bergauer, T; Brandstetter, J; Brondolin, E; Dragicevic, M; Erö, J; Friedl, M; Frühwirth, R; Ghete, V M; Hartl, C; Hörmann, N; Hrubec, J; Jeitler, M; Knünz, V; König, A; Krammer, M; Krätschmer, I; Liko, D; Matsushita, T; Mikulec, I; Rabady, D; Rahbaran, B; Rohringer, H; Schieck, J; Schöfbeck, R; Strauss, J; Treberer-Treberspurg, W; Waltenberger, W; Wulz, C-E; Mossolov, V; Shumeiko, N; Suarez Gonzalez, J; Alderweireldt, S; Cornelis, T; De Wolf, E A; Janssen, X; Knutsson, A; Lauwers, J; Luyckx, S; Van De Klundert, M; Van Haevermaet, H; Van Mechelen, P; Van Remortel, N; Van Spilbeeck, A; Abu Zeid, S; Blekman, F; D'Hondt, J; Daci, N; De Bruyn, I; Deroover, K; Heracleous, N; Keaveney, J; Lowette, S; Moreels, L; Olbrechts, A; Python, Q; Strom, D; Tavernier, S; Van Doninck, W; Van Mulders, P; Van Onsem, G P; Van Parijs, I; Barria, P; Brun, H; Caillol, C; Clerbaux, B; De Lentdecker, G; Fasanella, G; Favart, L; Grebenyuk, A; Karapostoli, G; Lenzi, T; Léonard, A; Maerschalk, T; Marinov, A; Perniè, L; Randle-Conde, A; Seva, T; Vander Velde, C; Yonamine, R; Vanlaer, P; Yonamine, R; Zenoni, F; Zhang, F; Adler, V; Beernaert, K; Benucci, L; Cimmino, A; Crucy, S; Dobur, D; Fagot, A; Garcia, G; Gul, M; Mccartin, J; Ocampo Rios, A A; Poyraz, D; Ryckbosch, D; Salva, S; Sigamani, M; Tytgat, M; Van Driessche, W; Yazgan, E; Zaganidis, N; Basegmez, S; Beluffi, C; Bondu, O; Brochet, S; Bruno, G; Caudron, A; Ceard, L; Da Silveira, G G; Delaere, C; Favart, D; Forthomme, L; Giammanco, A; Hollar, J; Jafari, A; Jez, P; Komm, M; Lemaitre, V; Mertens, A; Musich, M; Nuttens, C; Perrini, L; Pin, A; Piotrzkowski, K; Popov, A; Quertenmont, L; Selvaggi, M; Vidal Marono, M; Beliy, N; Hammad, G H; Júnior, W L Aldá; Alves, F L; Alves, G A; Brito, L; Correa Martins Junior, M; Hamer, M; Hensel, C; Moraes, A; Pol, M E; Rebello Teles, P; Belchior Batista Das Chagas, E; Carvalho, W; Chinellato, J; Custódio, A; Da Costa, E M; De Jesus Damiao, D; 
De Oliveira Martins, C; Fonseca De Souza, S; Huertas Guativa, L M; Malbouisson, H; Matos Figueiredo, D; Mora Herrera, C; Mundim, L; Nogima, H; Prado Da Silva, W L; Santoro, A; Sznajder, A; Tonelli Manganote, E J; Vilela Pereira, A; Ahuja, S; Bernardes, C A; De Souza Santos, A; Dogra, S; Fernandez Perez Tomei, T R; Gregores, E M; Mercadante, P G; Moon, C S; Novaes, S F; Padula, Sandra S; Romero Abad, D; Ruiz Vargas, J C; Aleksandrov, A; Hadjiiska, R; Iaydjiev, P; Rodozov, M; Stoykova, S; Sultanov, G; Vutova, M; Dimitrov, A; Glushkov, I; Litov, L; Pavlov, B; Petkov, P; Ahmad, M; Bian, J G; Chen, G M; Chen, H S; Chen, M; Cheng, T; Du, R; Jiang, C H; Plestina, R; Romeo, F; Shaheen, S M; Spiezia, A; Tao, J; Wang, C; Wang, Z; Zhang, H; Asawatangtrakuldee, C; Ban, Y; Li, Q; Liu, S; Mao, Y; Qian, S J; Wang, D; Xu, Z; Avila, C; Cabrera, A; Chaparro Sierra, L F; Florez, C; Gomez, J P; Gomez Moreno, B; Sanabria, J C; Godinovic, N; Lelas, D; Puljak, I; Ribeiro Cipriano, P M; Antunovic, Z; Kovac, M; Brigljevic, V; Kadija, K; Luetic, J; Micanovic, S; Sudic, L; Attikis, A; Mavromanolakis, G; Mousa, J; Nicolaou, C; Ptochos, F; Razis, P A; Rykaczewski, H; Bodlak, M; Finger, M; Finger, M; Abdelalim, A A; Awad, A; Mahrous, A; Mohammed, Y; Radi, A; Calpas, B; Kadastik, M; Murumaa, M; Raidal, M; Tiko, A; Veelken, C; Eerola, P; Pekkanen, J; Voutilainen, M; Härkönen, J; Karimäki, V; Kinnunen, R; Lampén, T; Lassila-Perini, K; Lehti, S; Lindén, T; Luukka, P; Mäenpää, T; Peltola, T; Tuominen, E; Tuominiemi, J; Tuovinen, E; Wendland, L; Talvitie, J; Tuuva, T; Besancon, M; Couderc, F; Dejardin, M; Denegri, D; Fabbro, B; Faure, J L; Favaro, C; Ferri, F; Ganjour, S; Givernaud, A; Gras, P; Hamel de Monchenault, G; Jarry, P; Locci, E; Machet, M; Malcles, J; Rander, J; Rosowsky, A; Titov, M; Zghiche, A; Antropov, I; Baffioni, S; Beaudette, F; Busson, P; Cadamuro, L; Chapon, E; Charlot, C; Dahms, T; Davignon, O; Filipovic, N; Granier de Cassagnac, R; Jo, M; Lisniak, S; Mastrolorenzo, L; Miné, P; 
Naranjo, I N; Nguyen, M; Ochando, C; Ortona, G; Paganini, P; Pigard, P; Regnard, S; Salerno, R; Sauvan, J B; Sirois, Y; Strebler, T; Yilmaz, Y; Zabi, A; Agram, J-L; Andrea, J; Aubin, A; Bloch, D; Brom, J-M; Buttignol, M; Chabert, E C; Chanon, N; Collard, C; Conte, E; Coubez, X; Fontaine, J-C; Gelé, D; Goerlach, U; Goetzmann, C; Le Bihan, A-C; Merlin, J A; Skovpen, K; Van Hove, P; Gadrat, S; Beauceron, S; Bernet, C; Boudoul, G; Bouvier, E; Carrillo Montoya, C A; Chierici, R; Contardo, D; Courbon, B; Depasse, P; El Mamouni, H; Fan, J; Fay, J; Gascon, S; Gouzevitch, M; Ille, B; Lagarde, F; Laktineh, I B; Lethuillier, M; Mirabito, L; Pequegnot, A L; Perries, S; Ruiz Alvarez, J D; Sabes, D; Sgandurra, L; Sordini, V; Vander Donckt, M; Verdier, P; Viret, S; Toriashvili, T; Lomidze, D; Autermann, C; Beranek, S; Edelhoff, M; Feld, L; Heister, A; Kiesel, M K; Klein, K; Lipinski, M; Ostapchuk, A; Preuten, M; Raupach, F; Schael, S; Schulte, J F; Verlage, T; Weber, H; Wittmer, B; Zhukov, V; Ata, M; Brodski, M; Dietz-Laursonn, E; Duchardt, D; Endres, M; Erdmann, M; Erdweg, S; Esch, T; Fischer, R; Güth, A; Hebbeker, T; Heidemann, C; Hoepfner, K; Knutzen, S; Kreuzer, P; Merschmeyer, M; Meyer, A; Millet, P; Olschewski, M; Padeken, K; Papacz, P; Pook, T; Radziej, M; Reithler, H; Rieger, M; Scheuch, F; Sonnenschein, L; Teyssier, D; Thüer, S; Cherepanov, V; Erdogan, Y; Flügge, G; Geenen, H; Geisler, M; Hoehle, F; Kargoll, B; Kress, T; Kuessel, Y; Künsken, A; Lingemann, J; Nehrkorn, A; Nowack, A; Nugent, I M; Pistone, C; Pooth, O; Stahl, A; Aldaya Martin, M; Asin, I; Bartosik, N; Behnke, O; Behrens, U; Bell, A J; Borras, K; Burgmeier, A; Campbell, A; Choudhury, S; Costanza, F; Diez Pardos, C; Dolinska, G; Dooling, S; Dorland, T; Eckerlin, G; Eckstein, D; Eichhorn, T; Flucke, G; Gallo, E; Garcia, J Garay; Geiser, A; Gizhko, A; Gunnellini, P; Hauk, J; Hempel, M; Jung, H; Kalogeropoulos, A; Karacheban, O; Kasemann, M; Katsas, P; Kieseler, J; Kleinwort, C; Korol, I; Lange, W; Leonard, J; 
Lipka, K; Lobanov, A; Lohmann, W; Mankel, R; Marfin, I; Melzer-Pellmann, I-A; Meyer, A B; Mittag, G; Mnich, J; Mussgiller, A; Naumann-Emme, S; Nayak, A; Ntomari, E; Perrey, H; Pitzl, D; Placakyte, R; Raspereza, A; Roland, B; Sahin, M Ö; Saxena, P; Schoerner-Sadenius, T; Schröder, M; Seitz, C; Spannagel, S; Trippkewitz, K D; Walsh, R; Wissing, C; Blobel, V; Centis Vignali, M; Draeger, A R; Erfle, J; Garutti, E; Goebel, K; Gonzalez, D; Görner, M; Haller, J; Hoffmann, M; Höing, R S; Junkes, A; Klanner, R; Kogler, R; Kovalchuk, N; Lapsien, T; Lenz, T; Marchesini, I; Marconi, D; Meyer, M; Nowatschin, D; Ott, J; Pantaleo, F; Peiffer, T; Perieanu, A; Pietsch, N; Poehlsen, J; Rathjens, D; Sander, C; Scharf, C; Schettler, H; Schleper, P; Schlieckau, E; Schmidt, A; Schwandt, J; Sola, V; Stadie, H; Steinbrück, G; Tholen, H; Troendle, D; Usai, E; Vanelderen, L; Vanhoefer, A; Vormwald, B; Barth, C; Baus, C; Berger, J; Böser, C; Butz, E; Chwalek, T; Colombo, F; De Boer, W; Descroix, A; Dierlamm, A; Fink, S; Frensch, F; Friese, R; Giffels, M; Gilbert, A; Haitz, D; Hartmann, F; Heindl, S M; Husemann, U; Katkov, I; Kornmayer, A; Lobelle Pardo, P; Maier, B; Mildner, H; Mozer, M U; Müller, T; Müller, Th; Plagge, M; Quast, G; Rabbertz, K; Röcker, S; Roscher, F; Sieber, G; Simonis, H J; Stober, F M; Ulrich, R; Wagner-Kuhr, J; Wayand, S; Weber, M; Weiler, T; Williamson, S; Wöhrmann, C; Wolf, R; Anagnostou, G; Daskalakis, G; Geralis, T; Giakoumopoulou, V A; Kyriakis, A; Loukas, D; Psallidas, A; Topsis-Giotis, I; Agapitos, A; Kesisoglou, S; Panagiotou, A; Saoulidou, N; Tziaferi, E; Evangelou, I; Flouris, G; Foudas, C; Kokkas, P; Loukas, N; Manthos, N; Papadopoulos, I; Paradas, E; Strologas, J; Bencze, G; Hajdu, C; Hazi, A; Hidas, P; Horvath, D; Sikler, F; Veszpremi, V; Vesztergombi, G; Zsigmond, A J; Beni, N; Czellar, S; Karancsi, J; Molnar, J; Szillasi, Z; Bartók, M; Makovec, A; Raics, P; Trocsanyi, Z L; Ujvari, B; Mal, P; Mandal, K; Sahoo, D K; Sahoo, N; Swain, S K; Bansal, S; Beri, S 
B; Bhatnagar, V; Chawla, R; Gupta, R; Bhawandeep, U; Kalsi, A K; Kaur, A; Kaur, M; Kumar, R; Mehta, A; Mittal, M; Singh, J B; Walia, G; Kumar, Ashok; Bhardwaj, A; Choudhary, B C; Garg, R B; Kumar, A; Malhotra, S; Naimuddin, M; Nishu, N; Ranjan, K; Sharma, R; Sharma, V; Bhattacharya, S; Chatterjee, K; Dey, S; Dutta, S; Jain, Sa; Majumdar, N; Modak, A; Mondal, K; Mukherjee, S; Mukhopadhyay, S; Roy, A; Roy, D; Roy Chowdhury, S; Sarkar, S; Sharan, M; Abdulsalam, A; Chudasama, R; Dutta, D; Jha, V; Kumar, V; Mohanty, A K; Pant, L M; Shukla, P; Topkar, A; Aziz, T; Banerjee, S; Bhowmik, S; Chatterjee, R M; Dewanjee, R K; Dugad, S; Ganguly, S; Ghosh, S; Guchait, M; Gurtu, A; Kole, G; Kumar, S; Mahakud, B; Maity, M; Majumder, G; Mazumdar, K; Mitra, S; Mohanty, G B; Parida, B; Sarkar, T; Sur, N; Sutar, B; Wickramage, N; Chauhan, S; Dube, S; Kapoor, A; Kothekar, K; Sharma, S; Bakhshiansohi, H; Behnamian, H; Etesami, S M; Fahim, A; Goldouzian, R; Khakzad, M; Mohammadi Najafabadi, M; Naseri, M; Paktinat Mehdiabadi, S; Rezaei Hosseinabadi, F; Safarzadeh, B; Zeinali, M; Felcini, M; Grunewald, M; Abbrescia, M; Calabria, C; Caputo, C; Colaleo, A; Creanza, D; Cristella, L; De Filippis, N; De Palma, M; Fiore, L; Iaselli, G; Maggi, G; Miniello, G; Maggi, M; My, S; Nuzzo, S; Pompili, A; Pugliese, G; Radogna, R; Ranieri, A; Selvaggi, G; Silvestris, L; Venditti, R; Verwilligen, P; Abbiendi, G; Battilana, C; Benvenuti, A C; Bonacorsi, D; Braibant-Giacomelli, S; Brigliadori, L; Campanini, R; Capiluppi, P; Castro, A; Cavallo, F R; Chhibra, S S; Codispoti, G; Cuffiani, M; Dallavalle, G M; Fabbri, F; Fanfani, A; Fasanella, D; Giacomelli, P; Grandi, C; Guiducci, L; Marcellini, S; Masetti, G; Montanari, A; Navarria, F L; Perrotta, A; Rossi, A M; Primavera, F; Rovelli, T; Siroli, G P; Tosi, N; Travaglini, R; Cappello, G; Chiorboli, M; Costa, S; Mattia, A Di; Giordano, F; Potenza, R; Tricomi, A; Tuve, C; Barbagli, G; Ciulli, V; Civinini, C; D'Alessandro, R; Focardi, E; Gonzi, S; Gori, V; Lenzi, P; 
Meschini, M; Paoletti, S; Sguazzoni, G; Tropiano, A; Viliani, L; Benussi, L; Bianco, S; Fabbri, F; Piccolo, D; Primavera, F; Calvelli, V; Ferro, F; Lo Vetere, M; Monge, M R; Robutti, E; Tosi, S; Brianza, L; Dinardo, M E; Fiorendi, S; Gennai, S; Gerosa, R; Ghezzi, A; Govoni, P; Malvezzi, S; Manzoni, R A; Marzocchi, B; Menasce, D; Moroni, L; Paganoni, M; Pedrini, D; Ragazzi, S; Redaelli, N; Tabarelli de Fatis, T; Buontempo, S; Cavallo, N; Di Guida, S; Esposito, M; Fabozzi, F; Iorio, A O M; Lanza, G; Lista, L; Meola, S; Merola, M; Paolucci, P; Sciacca, C; Thyssen, F; Azzi, P; Bacchetta, N; Benato, L; Bisello, D; Boletti, A; Branca, A; Carlin, R; Checchia, P; Dall'Osso, M; Dorigo, T; Dosselli, U; Fantinel, S; Fanzago, F; Gasparini, F; Gasparini, U; Gozzelino, A; Kanishchev, K; Lacaprara, S; Margoni, M; Meneguzzo, A T; Pazzini, J; Pozzobon, N; Ronchese, P; Simonetto, F; Torassa, E; Tosi, M; Zanetti, M; Zotto, P; Zucchetta, A; Braghieri, A; Magnani, A; Montagna, P; Ratti, S P; Re, V; Riccardi, C; Salvini, P; Vai, I; Vitulo, P; Alunni Solestizi, L; Bilei, G M; Ciangottini, D; Fanò, L; Lariccia, P; Mantovani, G; Menichelli, M; Saha, A; Santocchia, A; Androsov, K; Azzurri, P; Bagliesi, G; Bernardini, J; Boccali, T; Castaldi, R; Ciocci, M A; Dell'Orso, R; Donato, S; Fedi, G; Fiori, F; Foà, L; Giassi, A; Grippo, M T; Ligabue, F; Lomtadze, T; Martini, L; Messineo, A; Palla, F; Rizzi, A; Savoy-Navarro, A; Serban, A T; Spagnolo, P; Tenchini, R; Tonelli, G; Venturi, A; Verdini, P G; Barone, L; Cavallari, F; D'imperio, G; Del Re, D; Diemoz, M; Gelli, S; Jorda, C; Longo, E; Margaroli, F; Meridiani, P; Organtini, G; Paramatti, R; Preiato, F; Rahatlou, S; Rovelli, C; Santanastasio, F; Traczyk, P; Amapane, N; Arcidiacono, R; Argiro, S; Arneodo, M; Bellan, R; Biino, C; Cartiglia, N; Costa, M; Covarelli, R; Degano, A; Demaria, N; Finco, L; Kiani, B; Mariotti, C; Maselli, S; Migliore, E; Monaco, V; Monteil, E; Obertino, M M; Pacher, L; Pastrone, N; Pelliccioni, M; Pinna Angioni, G L; 
Ravera, F; Potenza, A; Romero, A; Ruspa, M; Sacchi, R; Solano, A; Staiano, A; Belforte, S; Candelise, V; Casarsa, M; Cossutti, F; Della Ricca, G; Gobbo, B; La Licata, C; Marone, M; Schizzi, A; Zanetti, A; Kropivnitskaya, T A; Nam, S K; Kim, D H; Kim, G N; Kim, M S; Kim, M S; Kong, D J; Lee, S; Oh, Y D; Sakharov, A; Son, D C; Brochero Cifuentes, J A; Kim, H; Kim, T J; Song, S; Choi, S; Go, Y; Gyun, D; Hong, B; Kim, H; Kim, Y; Lee, B; Lee, K; Lee, K S; Lee, S; Lee, S; Park, S K; Roh, Y; Yoo, H D; Choi, M; Kim, H; Kim, J H; Lee, J S H; Park, I C; Ryu, G; Ryu, M S; Choi, Y; Goh, J; Kim, D; Kwon, E; Lee, J; Yu, I; Dudenas, V; Juodagalvis, A; Vaitkus, J; Ahmed, I; Ibrahim, Z A; Komaragiri, J R; Md Ali, M A B; Mohamad Idris, F; Wan Abdullah, W A T; Yusli, M N; Wan Abdullah, W A T; Casimiro Linares, E; Castilla-Valdez, H; De La Cruz-Burelo, E; Heredia-De La Cruz, I; Hernandez-Almada, A; Lopez-Fernandez, R; Sanchez-Hernandez, A; Carrillo Moreno, S; Vazquez Valencia, F; Pedraza, I; Salazar Ibarguen, H A; Morelos Pineda, A; Krofcheck, D; Butler, P H; Ahmad, A; Ahmad, M; Hassan, Q; Hoorani, H R; Khan, W A; Khurshid, T; Shoaib, M; Bialkowska, H; Bluj, M; Boimska, B; Frueboes, T; Górski, M; Kazana, M; Nawrocki, K; Romanowska-Rybinska, K; Szleper, M; Zalewski, P; Brona, G; Bunkowski, K; Byszuk, A; Doroba, K; Kalinowski, A; Konecki, M; Krolikowski, J; Misiura, M; Olszewski, M; Walczak, M; Bargassa, P; Da Cruz E Silva, C Beir Ao; Di Francesco, A; Faccioli, P; Parracho, P G Ferreira; Gallinaro, M; Leonardo, N; Lloret Iglesias, L; Nguyen, F; Rodrigues Antunes, J; Seixas, J; Toldaiev, O; Vadruccio, D; Varela, J; Vischia, P; Afanasiev, S; Bunin, P; Gavrilenko, M; Golutvin, I; Gorbunov, I; Kamenev, A; Karjavin, V; Konoplyanikov, V; Lanev, A; Malakhov, A; Matveev, V; Moisenz, P; Palichik, V; Perelygin, V; Savina, M; Shmatov, S; Shulha, S; Smirnov, V; Zarubin, A; Golovtsov, V; Ivanov, Y; Kim, V; Kuznetsova, E; Levchenko, P; Murzin, V; Oreshkin, V; Smirnov, I; Sulimov, V; Uvarov, L; 
Vavilov, S; Vorobyev, A; Andreev, Yu; Dermenev, A; Gninenko, S; Golubev, N; Karneyeu, A; Kirsanov, M; Krasnikov, N; Pashenkov, A; Tlisov, D; Toropin, A; Epshteyn, V; Gavrilov, V; Lychkovskaya, N; Popov, V; Pozdnyakov, L; Safronov, G; Spiridonov, A; Vlasov, E; Zhokin, A; Bylinkin, A; Andreev, V; Azarkin, M; Dremin, I; Kirakosyan, M; Leonidov, A; Mesyats, G; Rusakov, S V; Baskakov, A; Belyaev, A; Boos, E; Dubinin, M; Dudko, L; Ershov, A; Gribushin, A; Klyukhin, V; Kodolova, O; Lokhtin, I; Myagkov, I; Obraztsov, S; Petrushanko, S; Savrin, V; Snigirev, A; Azhgirey, I; Bayshev, I; Bitioukov, S; Kachanov, V; Kalinin, A; Konstantinov, D; Krychkine, V; Petrov, V; Ryutin, R; Sobol, A; Tourtchanovitch, L; Troshin, S; Tyurin, N; Uzunian, A; Volkov, A; Adzic, P; Cirkovic, P; Milosevic, J; Rekovic, V; Alcaraz Maestre, J; Battilana, C; Calvo, E; Cerrada, M; Chamizo Llatas, M; Colino, N; De La Cruz, B; Delgado Peris, A; Escalante Del Valle, A; Fernandez Bedoya, C; Ramos, J P Fernández; Flix, J; Fouz, M C; Garcia-Abia, P; Gonzalez Lopez, O; Goy Lopez, S; Hernandez, J M; Josa, M I; Navarro De Martino, E; Yzquierdo, A Pérez-Calero; Puerta Pelayo, J; Quintario Olmeda, A; Redondo, I; Romero, L; Santaolalla, J; Soares, M S; Albajar, C; de Trocóniz, J F; Missiroli, M; Moran, D; Cuevas, J; Fernandez Menendez, J; Folgueras, S; Gonzalez Caballero, I; Palencia Cortezon, E; Vizan Garcia, J M; Cabrillo, I J; Calderon, A; Castiñeiras De Saa, J R; De Castro Manzano, P; Fernandez, M; Garcia-Ferrero, J; Gomez, G; Lopez Virto, A; Marco, J; Marco, R; Martinez Rivero, C; Matorras, F; Piedra Gomez, J; Rodrigo, T; Rodríguez-Marrero, A Y; Ruiz-Jimeno, A; Scodellaro, L; Trevisani, N; Vila, I; Vilar Cortabitarte, R; Abbaneo, D; Auffray, E; Auzinger, G; Bachtis, M; Baillon, P; Ball, A H; Barney, D; Benaglia, A; Bendavid, J; Benhabib, L; Benitez, J F; Berruti, G M; Bloch, P; Bocci, A; Bonato, A; Botta, C; Breuker, H; Camporesi, T; Castello, R; Cerminara, G; D'Alfonso, M; d'Enterria, D; Dabrowski, A; 
Daponte, V; David, A; De Gruttola, M; De Guio, F; De Roeck, A; De Visscher, S; Di Marco, E; Dobson, M; Dordevic, M; Dorney, B; du Pree, T; Duggan, D; Dünser, M; Dupont, N; Elliott-Peisert, A; Franzoni, G; Fulcher, J; Funk, W; Gigi, D; Gill, K; Giordano, D; Girone, M; Glege, F; Guida, R; Gundacker, S; Guthoff, M; Hammer, J; Harris, P; Hegeman, J; Innocente, V; Janot, P; Kirschenmann, H; Kortelainen, M J; Kousouris, K; Krajczar, K; Lecoq, P; Lourenço, C; Lucchini, M T; Magini, N; Malgeri, L; Mannelli, M; Martelli, A; Masetti, L; Meijers, F; Mersi, S; Meschi, E; Moortgat, F; Morovic, S; Mulders, M; Nemallapudi, M V; Neugebauer, H; Orfanelli, S; Orsini, L; Pape, L; Perez, E; Peruzzi, M; Petrilli, A; Petrucciani, G; Pfeiffer, A; Piparo, D; Racz, A; Reis, T; Rolandi, G; Rovere, M; Ruan, M; Sakulin, H; Schäfer, C; Schwick, C; Seidel, M; Sharma, A; Silva, P; Simon, M; Sphicas, P; Steggemann, J; Stieger, B; Stoye, M; Takahashi, Y; Treille, D; Triossi, A; Tsirou, A; Veres, G I; Wardle, N; Wöhri, H K; Zagozdzinska, A; Zeuner, W D; Bertl, W; Deiters, K; Erdmann, W; Horisberger, R; Ingram, Q; Kaestli, H C; Kotlinski, D; Langenegger, U; Renker, D; Rohe, T; Bachmair, F; Bäni, L; Bianchini, L; Casal, B; Dissertori, G; Dittmar, M; Donegà, M; Eller, P; Grab, C; Heidegger, C; Hits, D; Hoss, J; Kasieczka, G; Lustermann, W; Mangano, B; Marionneau, M; Martinez Ruiz Del Arbol, P; Masciovecchio, M; Meister, D; Micheli, F; Musella, P; Nessi-Tedaldi, F; Pandolfi, F; Pata, J; Pauss, F; Perrozzi, L; Quittnat, M; Rossini, M; Starodumov, A; Takahashi, M; Tavolaro, V R; Theofilatos, K; Wallny, R; Aarrestad, T K; Amsler, C; Caminada, L; Canelli, M F; Chiochia, V; De Cosa, A; Galloni, C; Hinzmann, A; Hreus, T; Kilminster, B; Lange, C; Ngadiuba, J; Pinna, D; Robmann, P; Ronga, F J; Salerno, D; Yang, Y; Cardaci, M; Chen, K H; Doan, T H; Jain, Sh; Khurana, R; Konyushikhin, M; Kuo, C M; Lin, W; Lu, Y J; Yu, S S; Kumar, Arun; Bartek, R; Chang, P; Chang, Y H; Chao, Y; Chen, K F; Chen, P H; Dietz, C; 
Fiori, F; Grundler, U; Hou, W-S; Hsiung, Y; Liu, Y F; Lu, R-S; Miñano Moya, M; Petrakou, E; Tsai, J F; Tzeng, Y M; Asavapibhop, B; Kovitanggoon, K; Singh, G; Srimanobhas, N; Suwonjandee, N; Adiguzel, A; Bakirci, M N; Cerci, S; Demiroglu, Z S; Dozen, C; Eskut, E; Gecit, F H; Girgis, S; Gokbulut, G; Guler, Y; Guler, Y; Gurpinar, E; Hos, I; Kangal, E E; Onengut, G; Ozcan, M; Ozdemir, K; Polatoz, A; Sunar Cerci, D; Topakli, H; Vergili, M; Zorbilmez, C; Akin, I V; Bilin, B; Bilmis, S; Isildak, B; Karapinar, G; Yalvac, M; Zeyrek, M; Gülmez, E; Kaya, M; Kaya, O; Yetkin, E A; Yetkin, T; Cakir, A; Cankocak, K; Sen, S; Vardarlı, F I; Grynyov, B; Levchuk, L; Sorokin, P; Aggleton, R; Ball, F; Beck, L; Brooke, J J; Clement, E; Cussans, D; Flacher, H; Goldstein, J; Grimes, M; Heath, G P; Heath, H F; Jacob, J; Kreczko, L; Lucas, C; Meng, Z; Newbold, D M; Paramesvaran, S; Poll, A; Sakuma, T; Seif El Nasr-Storey, S; Senkin, S; Smith, D; Smith, V J; Bell, K W; Belyaev, A; Brew, C; Brown, R M; Calligaris, L; Cieri, D; Cockerill, D J A; Coughlan, J A; Harder, K; Harper, S; Olaiya, E; Petyt, D; Shepherd-Themistocleous, C H; Thea, A; Tomalin, I R; Williams, T; Worm, S D; Baber, M; Bainbridge, R; Buchmuller, O; Bundock, A; Burton, D; Casasso, S; Citron, M; Colling, D; Corpe, L; Cripps, N; Dauncey, P; Davies, G; De Wit, A; Della Negra, M; Dunne, P; Elwood, A; Elwood, A; Ferguson, W; Futyan, D; Hall, G; Iles, G; Kenzie, M; Lane, R; Lucas, R; Lyons, L; Magnan, A-M; Malik, S; Nash, J; Nikitenko, A; Pela, J; Pesaresi, M; Petridis, K; Raymond, D M; Richards, A; Rose, A; Seez, C; Tapper, A; Uchida, K; Vazquez Acosta, M; Virdee, T; Zenz, S C; Cole, J E; Hobson, P R; Khan, A; Kyberd, P; Leggat, D; Leslie, D; Reid, I D; Symonds, P; Teodorescu, L; Turner, M; Borzou, A; Call, K; Dittmann, J; Hatakeyama, K; Liu, H; Pastika, N; Scarborough, T; Wu, Z; Charaf, O; Cooper, S I; Henderson, C; Rumerio, P; Arcaro, D; Avetisyan, A; Bose, T; Fantasia, C; Gastler, D; Lawson, P; Rankin, D; Richardson, C; Rohlf, 
J; St John, J; Sulak, L; Zou, D; Alimena, J; Berry, E; Bhattacharya, S; Cutts, D; Dhingra, N; Ferapontov, A; Garabedian, A; Hakala, J; Heintz, U; Laird, E; Landsberg, G; Mao, Z; Narain, M; Piperov, S; Sagir, S; Syarif, R; Breedon, R; Breto, G; De La Barca Sanchez, M Calderon; Chauhan, S; Chertok, M; Conway, J; Conway, R; Cox, P T; Erbacher, R; Funk, G; Gardner, M; Ko, W; Lander, R; Mulhearn, M; Pellett, D; Pilot, J; Ricci-Tam, F; Shalhout, S; Smith, J; Squires, M; Stolp, D; Tripathi, M; Wilbur, S; Yohay, R; Bravo, C; Cousins, R; Everaerts, P; Farrell, C; Florent, A; Hauser, J; Ignatenko, M; Saltzberg, D; Schnaible, C; Valuev, V; Weber, M; Burt, K; Clare, R; Ellison, J; Gary, J W; Hanson, G; Heilman, J; Ivova Paneva, M; Jandir, P; Kennedy, E; Lacroix, F; Long, O R; Luthra, A; Malberti, M; Negrete, M Olmedo; Shrinivas, A; Wei, H; Wimpenny, S; Yates, B R; Branson, J G; Cerati, G B; Cittolin, S; D'Agnolo, R T; Derdzinski, M; Holzner, A; Kelley, R; Klein, D; Letts, J; Macneill, I; Olivito, D; Padhi, S; Pieri, M; Sani, M; Sharma, V; Simon, S; Tadel, M; Tu, Y; Vartak, A; Wasserbaech, S; Welke, C; Würthwein, F; Yagil, A; Zevi Della Porta, G; Bradmiller-Feld, J; Campagnari, C; Dishaw, A; Dutta, V; Flowers, K; Franco Sevilla, M; Geffert, P; George, C; Golf, F; Gouskos, L; Gran, J; Incandela, J; Mccoll, N; Mullin, S D; Mullin, S D; Richman, J; Stuart, D; Suarez, I; West, C; Yoo, J; Anderson, D; Apresyan, A; Bornheim, A; Bunn, J; Chen, Y; Duarte, J; Mott, A; Newman, H B; Pena, C; Pierini, M; Spiropulu, M; Vlimant, J R; Xie, S; Zhu, R Y; Andrews, M B; Azzolini, V; Calamba, A; Carlson, B; Ferguson, T; Paulini, M; Russ, J; Sun, M; Vogel, H; Vorobiev, I; Cumalat, J P; Ford, W T; Gaz, A; Jensen, F; Johnson, A; Krohn, M; Mulholland, T; Nauenberg, U; Stenson, K; Wagner, S R; Alexander, J; Chatterjee, A; Chaves, J; Chu, J; Dittmer, S; Eggert, N; Mirman, N; Nicolas Kaufman, G; Patterson, J R; Rinkevicius, A; Ryd, A; Skinnari, L; Soffi, L; Sun, W; Tan, S M; Teo, W D; Thom, J; Thompson, 
J; Tucker, J; Weng, Y; Wittich, P; Abdullin, S; Albrow, M; Apollinari, G; Banerjee, S; Bauerdick, L A T; Beretvas, A; Berryhill, J; Bhat, P C; Bolla, G; Burkett, K; Butler, J N; Cheung, H W K; Chlebana, F; Cihangir, S; Elvira, V D; Fisk, I; Freeman, J; Gottschalk, E; Gray, L; Green, D; Grünendahl, S; Gutsche, O; Hanlon, J; Hare, D; Harris, R M; Hasegawa, S; Hirschauer, J; Hu, Z; Jayatilaka, B; Jindariani, S; Johnson, M; Joshi, U; Jung, A W; Klima, B; Kreis, B; Lammel, S; Linacre, J; Lincoln, D; Lipton, R; Liu, T; Lopes De Sá, R; Lykken, J; Maeshima, K; Marraffino, J M; Martinez Outschoorn, V I; Maruyama, S; Mason, D; McBride, P; Merkel, P; Mishra, K; Mrenna, S; Nahn, S; Newman-Holmes, C; O'Dell, V; Pedro, K; Prokofyev, O; Rakness, G; Sexton-Kennedy, E; Soha, A; Spalding, W J; Spiegel, L; Strobbe, N; Taylor, L; Tkaczyk, S; Tran, N V; Uplegger, L; Vaandering, E W; Vernieri, C; Verzocchi, M; Vidal, R; Weber, H A; Whitbeck, A; Acosta, D; Avery, P; Bortignon, P; Bourilkov, D; Carnes, A; Carver, M; Curry, D; Das, S; Field, R D; Furic, I K; Gleyzer, S V; Hugon, J; Konigsberg, J; Korytov, A; Kotov, K; Low, J F; Ma, P; Matchev, K; Mei, H; Milenovic, P; Mitselmakher, G; Rank, D; Rossin, R; Shchutska, L; Snowball, M; Sperka, D; Terentyev, N; Thomas, L; Wang, J; Wang, S; Yelton, J; Hewamanage, S; Linn, S; Markowitz, P; Martinez, G; Rodriguez, J L; Adams, J R; Ackert, A; Adams, T; Askew, A; Bein, S; Bochenek, J; Diamond, B; Haas, J; Hagopian, S; Hagopian, V; Johnson, K F; Khatiwada, A; Prosper, H; Weinberg, M; Baarmand, M M; Bhopatkar, V; Colafranceschi, S; Hohlmann, M; Kalakhety, H; Noonan, D; Roy, T; Yumiceva, F; Adams, M R; Apanasevich, L; Berry, D; Betts, R R; Bucinskaite, I; Cavanaugh, R; Evdokimov, O; Gauthier, L; Gerber, C E; Hofman, D J; Kurt, P; O'Brien, C; Sandoval Gonzalez, L D; Silkworth, C; Turner, P; Varelas, N; Wu, Z; Zakaria, M; Bilki, B; Clarida, W; Dilsiz, K; Durgut, S; Gandrajula, R P; Haytmyradov, M; Khristenko, V; Merlo, J-P; Mermerkaya, H; Mestvirishvili, 
A; Moeller, A; Nachtman, J; Ogul, H; Onel, Y; Ozok, F; Penzo, A; Snyder, C; Tiras, E; Wetzel, J; Yi, K; Anderson, I; Anderson, I; Barnett, B A; Blumenfeld, B; Eminizer, N; Fehling, D; Feng, L; Gritsan, A V; Maksimovic, P; Martin, C; Osherson, M; Roskes, J; Sady, A; Sarica, U; Swartz, M; Xiao, M; Xin, Y; You, C; Xiao, M; Baringer, P; Bean, A; Benelli, G; Bruner, C; Kenny, R P; Majumder, D; Majumder, D; Malek, M; Murray, M; Sanders, S; Stringer, R; Wang, Q; Ivanov, A; Kaadze, K; Khalil, S; Makouski, M; Maravin, Y; Mohammadi, A; Saini, L K; Skhirtladze, N; Toda, S; Lange, D; Rebassoo, F; Wright, D; Anelli, C; Baden, A; Baron, O; Belloni, A; Calvert, B; Eno, S C; Ferraioli, C; Gomez, J A; Hadley, N J; Jabeen, S; Jabeen, S; Kellogg, R G; Kolberg, T; Kunkle, J; Lu, Y; Mignerey, A C; Shin, Y H; Skuja, A; Tonjes, M B; Tonwar, S C; Apyan, A; Barbieri, R; Baty, A; Bierwagen, K; Brandt, S; Bierwagen, K; Busza, W; Cali, I A; Demiragli, Z; Di Matteo, L; Gomez Ceballos, G; Goncharov, M; Gulhan, D; Iiyama, Y; Innocenti, G M; Klute, M; Kovalskyi, D; Lai, Y S; Lee, Y-J; Levin, A; Luckey, P D; Marini, A C; Mcginn, C; Mironov, C; Narayanan, S; Niu, X; Paus, C; Ralph, D; Roland, C; Roland, G; Salfeld-Nebgen, J; Stephans, G S F; Sumorok, K; Varma, M; Velicanu, D; Veverka, J; Wang, J; Wang, T W; Wyslouch, B; Yang, M; Zhukova, V; Dahmes, B; Evans, A; Finkel, A; Gude, A; Hansen, P; Kalafut, S; Kao, S C; Klapoetke, K; Kubota, Y; Lesko, Z; Mans, J; Nourbakhsh, S; Ruckstuhl, N; Rusack, R; Tambe, N; Turkewitz, J; Acosta, J G; Oliveros, S; Avdeeva, E; Bloom, K; Bose, S; Claes, D R; Dominguez, A; Fangmeier, C; Gonzalez Suarez, R; Kamalieddin, R; Keller, J; Knowlton, D; Kravchenko, I; Meier, F; Monroy, J; Ratnikov, F; Siado, J E; Snow, G R; Alyari, M; Dolen, J; George, J; Godshalk, A; Harrington, C; Iashvili, I; Kaisen, J; Kharchilava, A; Kumar, A; Rappoccio, S; Roozbahani, B; Alverson, G; Barberis, E; Baumgartel, D; Chasco, M; Hortiangtham, A; Massironi, A; Morse, D M; Nash, D; Orimoto, T; 
Teixeira De Lima, R; Trocino, D; Wang, R-J; Wood, D; Zhang, J; Hahn, K A; Kubik, A; Mucia, N; Odell, N; Pollack, B; Pozdnyakov, A; Schmitt, M; Stoynev, S; Sung, K; Trovato, M; Velasco, M; Brinkerhoff, A; Dev, N; Hildreth, M; Jessop, C; Karmgard, D J; Kellams, N; Lannon, K; Marinelli, N; Meng, F; Mueller, C; Musienko, Y; Planer, M; Reinsvold, A; Ruchti, R; Smith, G; Taroni, S; Valls, N; Wayne, M; Wolf, M; Woodard, A; Antonelli, L; Brinson, J; Bylsma, B; Durkin, L S; Flowers, S; Hart, A; Hill, C; Hughes, R; Ji, W; Ling, T Y; Liu, B; Luo, W; Puigh, D; Rodenburg, M; Winer, B L; Wulsin, H W; Driga, O; Elmer, P; Hardenbrook, J; Hebda, P; Koay, S A; Lujan, P; Marlow, D; Medvedeva, T; Mooney, M; Olsen, J; Palmer, C; Piroué, P; Saka, H; Stickland, D; Tully, C; Zuranski, A; Malik, S; Barnes, V E; Benedetti, D; Bortoletto, D; Gutay, L; Jha, M K; Jones, M; Jung, K; Miller, D H; Neumeister, N; Primavera, F; Radburn-Smith, B C; Shi, X; Shipsey, I; Silvers, D; Sun, J; Svyatkovskiy, A; Wang, F; Xie, W; Xu, L; Parashar, N; Stupak, J; Adair, A; Akgun, B; Chen, Z; Ecklund, K M; Geurts, F J M; Guilbaud, M; Li, W; Michlin, B; Northup, M; Padley, B P; Redjimi, R; Roberts, J; Rorie, J; Tu, Z; Zabel, J; Betchart, B; Bodek, A; de Barbaro, P; Demina, R; Eshaq, Y; Ferbel, T; Galanti, M; Galanti, M; Garcia-Bellido, A; Han, J; Harel, A; Hindrichs, O; Hindrichs, O; Khukhunaishvili, A; Petrillo, G; Tan, P; Verzetti, M; Arora, S; Barker, A; Chou, J P; Contreras-Campana, C; Contreras-Campana, E; Ferencek, D; Gershtein, Y; Gray, R; Halkiadakis, E; Hidas, D; Hughes, E; Kaplan, S; Kunnawalkam Elayavalli, R; Lath, A; Nash, K; Panwalkar, S; Park, M; Salur, S; Schnetzer, S; Sheffield, D; Somalwar, S; Stone, R; Thomas, S; Thomassen, P; Walker, M; Foerster, M; Riley, G; Rose, K; Spanier, S; York, A; Bouhali, O; Castaneda Hernandez, A; Celik, A; Dalchenko, M; De Mattia, M; Delgado, A; Dildick, S; Dildick, S; Eusebi, R; Gilmore, J; Huang, T; Kamon, T; Krutelyov, V; Krutelyov, V; Mueller, R; Osipenkov, I; 
Pakhotin, Y; Patel, R; Patel, R; Perloff, A; Rose, A; Safonov, A; Tatarinov, A; Ulmer, K A; Akchurin, N; Cowden, C; Damgov, J; Dragoiu, C; Dudero, P R; Faulkner, J; Kunori, S; Lamichhane, K; Lee, S W; Libeiro, T; Undleeb, S; Volobouev, I; Appelt, E; Delannoy, A G; Greene, S; Gurrola, A; Janjam, R; Johns, W; Maguire, C; Mao, Y; Melo, A; Ni, H; Sheldon, P; Snook, B; Tuo, S; Velkovska, J; Xu, Q; Arenton, M W; Cox, B; Francis, B; Goodell, J; Hirosky, R; Ledovskoy, A; Li, H; Lin, C; Neu, C; Sinthuprasith, T; Sun, X; Wang, Y; Wolfe, E; Wood, J; Xia, F; Clarke, C; Harr, R; Karchin, P E; Kottachchi Kankanamge Don, C; Lamichhane, P; Sturdy, J; Belknap, D A; Carlsmith, D; Cepeda, M; Dasu, S; Dodd, L; Duric, S; Gomber, B; Grothe, M; Hall-Wilton, R; Herndon, M; Hervé, A; Klabbers, P; Lanaro, A; Levine, A; Long, K; Loveless, R; Mohapatra, A; Ojalvo, I; Perry, T; Pierro, G A; Polese, G; Ruggles, T; Sarangi, T; Savin, A; Sharma, A; Smith, N; Smith, W H; Taylor, D; Woods, N
New sets of parameters ("tunes") for the underlying-event (UE) modelling of the pythia8, pythia6 and herwig++ Monte Carlo event generators are constructed using different parton distribution functions. Combined fits to CMS UE proton-proton (pp) data at √s = 7 TeV and to UE proton-antiproton (pp̄) data from the CDF experiment at lower √s are used to study the UE models and constrain their parameters, thereby providing improved predictions for proton-proton collisions at 13 TeV. In addition, it is investigated whether the values of the parameters obtained from fits to UE observables are consistent with the values determined from fitting observables sensitive to double-parton scattering processes. Finally, comparisons are presented of the UE tunes to "minimum bias" (MB) events, multijet, and Drell-Yan (qq̄ → Z/γ* → lepton-antilepton) + jets observables at 7 and 8 TeV, as well as predictions for MB and UE observables at 13 TeV.
Adediran, S A; Ratkowsky, D A; Donaghy, D J; Malau-Aduli, A E O
2012-09-01
Fourteen lactation models were fitted to average and individual cow lactation data from pasture-based dairy systems in the Australian states of Victoria and Tasmania. The models included a new "log-quadratic" model, and a major objective was to evaluate and compare the performance of this model with the other models. Nine empirical and five mechanistic models were first fitted to average test-day milk yield of Holstein-Friesian dairy cows using the nonlinear procedure in SAS. Two additional semiparametric models were fitted using a linear model in ASReml. To investigate the influence of days to first test-day and the number of test-days, the 5 best-fitting models were then fitted to individual cow lactation data. Model goodness of fit was evaluated using criteria such as the residual mean square, the distribution of residuals, the correlation between actual and predicted values, and the Wald-Wolfowitz runs test. Goodness of fit in fitting the average lactation was similar for all but one of the models, but the models differed in their ability to predict individual lactations. In particular, the widely used incomplete gamma model displayed this failing most clearly. The new log-quadratic model was robust in fitting average and individual lactations, was less affected by the sampled data, and was more parsimonious in having only three parameters, each of which lends itself to biological interpretation. Copyright © 2012 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
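The incomplete gamma (Wood) model criticized above has the standard form y(t) = a·t^b·e^(−ct), which log-linearizes and can be fit by ordinary least squares. A minimal sketch on synthetic test-day data (the parameter values are illustrative, not from the study):

```python
import math

def fit_wood(days, yields):
    """Fit Wood's model y = a * t**b * exp(-c*t) by OLS on
    ln(y) = ln(a) + b*ln(t) - c*t, solving the 3x3 normal equations."""
    rows = [(1.0, math.log(t), -t) for t in days]
    ys = [math.log(y) for y in yields]
    # normal equations X^T X beta = X^T y
    xtx = [[sum(r[i] * r[j] for r in rows) for j in range(3)] for i in range(3)]
    xty = [sum(r[i] * v for r, v in zip(rows, ys)) for i in range(3)]
    # Gauss-Jordan elimination with partial pivoting
    m = [xtx[i] + [xty[i]] for i in range(3)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(3):
            if r != col:
                f = m[r][col] / m[col][col]
                m[r] = [u - f * w for u, w in zip(m[r], m[col])]
    beta = [m[i][3] / m[i][i] for i in range(3)]
    return math.exp(beta[0]), beta[1], beta[2]  # a, b, c

# synthetic lactation curve with illustrative parameters a=15, b=0.25, c=0.004
days = list(range(5, 306, 10))
yields = [15.0 * t ** 0.25 * math.exp(-0.004 * t) for t in days]
a, b, c = fit_wood(days, yields)
```

On noiseless data the fit recovers the generating parameters; with real test-day data the log transform also down-weights high-yield days, one reason nonlinear procedures such as SAS NLIN are preferred in practice.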
Non-linear Growth Models in Mplus and SAS
Grimm, Kevin J.; Ram, Nilam
2013-01-01
Non-linear growth curves or growth curves that follow a specified non-linear function in time enable researchers to model complex developmental patterns with parameters that are easily interpretable. In this paper we describe how a variety of sigmoid curves can be fit using the Mplus structural modeling program and the non-linear mixed-effects modeling procedure NLMIXED in SAS. Using longitudinal achievement data collected as part of a study examining the effects of preschool instruction on academic gain we illustrate the procedures for fitting growth models of logistic, Gompertz, and Richards functions. Brief notes regarding the practical benefits, limitations, and choices faced in the fitting and estimation of such models are included. PMID:23882134
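The three sigmoid families named above can be written as plain functions. This sketch uses one common parameterization of each (forms vary across the literature, so treat these as illustrative rather than the exact Mplus/NLMIXED specifications):

```python
import math

def logistic(t, alpha, gamma, tau):
    """Upper asymptote alpha, growth rate gamma, inflection time tau;
    the inflection value is alpha/2."""
    return alpha / (1.0 + math.exp(-gamma * (t - tau)))

def gompertz(t, alpha, gamma, tau):
    """Asymmetric sigmoid; the inflection value is alpha/e."""
    return alpha * math.exp(-math.exp(-gamma * (t - tau)))

def richards(t, alpha, gamma, tau, delta):
    """Richards generalization; delta -> 1 recovers the logistic shape."""
    return alpha * (1.0 + delta * math.exp(-gamma * (t - tau))) ** (-1.0 / delta)

y_log = logistic(5.0, alpha=100, gamma=1.2, tau=5.0)   # at inflection: alpha/2
y_gom = gompertz(5.0, alpha=100, gamma=1.2, tau=5.0)   # at inflection: alpha/e
```

The fixed inflection values (alpha/2 for the logistic, alpha/e for the Gompertz) are exactly the asymmetry property that motivates fitting a Richards curve, whose delta parameter lets the data choose the inflection point.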
Extracting harmonic signal from a chaotic background with local linear model
NASA Astrophysics Data System (ADS)
Li, Chenlong; Su, Liyun
2017-02-01
In this paper, the problems of blind detection and estimation of a harmonic signal in a strong chaotic background are analyzed, and new methods based on the local linear (LL) model are put forward. The LL model has been researched extensively and applied successfully to fitting and forecasting chaotic signals in many fields; here we substantially enlarge its modeling capacity. First, we predict the short-term chaotic signal and obtain the fitting error based on the LL model. We then detect frequencies in the fitting error by periodogram and establish a previously unreported property of the fitting error; this property ensures that the detected frequencies match those of the harmonic signal. Second, we establish a two-layer LL model to estimate the deterministic harmonic signal in a strong chaotic background. To make this estimation simple and effective, we develop an efficient backfitting algorithm to select and optimize the parameters, which are impractical to search exhaustively. In this method, exploiting the sensitivity of chaotic motion to initial values, the minimum fitting error criterion is used as the objective function to estimate the parameters of the two-layer LL model. Simulations show that the two-layer LL model and its estimation technique have appreciable flexibility in modeling the deterministic harmonic signal in different chaotic backgrounds (Lorenz, Hénon and Mackey-Glass (M-G) equations). In particular, the harmonic signal can be extracted well at low SNR, and the backfitting algorithm converges within 3-5 iterations.
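One-step local linear prediction — the first stage described above — can be sketched in a few lines. This is a toy stand-in for the paper's LL model (embedding dimension 1, a chaotic logistic map as the background; all settings are illustrative):

```python
import random

def local_linear_predict(series, query, k=10):
    """Predict x[t+1] from x[t] = query by least-squares fitting a line
    to the k nearest neighbors among past (x[t], x[t+1]) pairs."""
    pairs = sorted(zip(series[:-1], series[1:]),
                   key=lambda p: abs(p[0] - query))[:k]
    n = len(pairs)
    mx = sum(p[0] for p in pairs) / n
    my = sum(p[1] for p in pairs) / n
    sxx = sum((p[0] - mx) ** 2 for p in pairs)
    sxy = sum((p[0] - mx) * (p[1] - my) for p in pairs)
    slope = sxy / sxx if sxx > 0 else 0.0
    return my + slope * (query - mx)

random.seed(0)
x = [random.random()]
for _ in range(600):          # chaotic logistic map as the "background"
    x.append(3.9 * x[-1] * (1.0 - x[-1]))
train, test_pt = x[:500], x[500]
pred = local_linear_predict(train, test_pt)
err = abs(pred - 3.9 * test_pt * (1.0 - test_pt))
```

The fitting error of such a predictor is what the paper's periodogram stage would then examine for residual harmonic frequencies.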
NTCP Modeling of Subacute/Late Laryngeal Edema Scored by Fiberoptic Examination
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rancati, Tiziana; Fiorino, Claudio, E-mail: fiorino.claudio@hsr.i; Sanguineti, Giuseppe
2009-11-01
Purpose: To find best-fit parameters of normal tissue complication probability (NTCP) models for laryngeal edema after radiotherapy for head and neck cancer. Methods and Materials: Forty-eight patients who met the following criteria were considered for this study: (1) grossly uninvolved larynx, (2) no prior major surgery except for neck dissection and tonsillectomy, (3) at least one fiberoptic examination of the larynx within 2 years from radiotherapy, (4) minimum follow-up of 15 months. Larynx dose-volume histograms (DVHs) were corrected to their linear-quadratic equivalent at 2 Gy/fraction with alpha/beta = 3 Gy. Subacute/late edema was prospectively scored at each follow-up examination according to the Radiation Therapy Oncology Group scale. G2-G3 edema within 15 months from RT was taken as our endpoint. Two NTCP models were considered: (1) the Lyman model with the DVH reduced to the equivalent uniform dose (EUD; LEUD) and (2) the Logit model with the DVH reduced to the EUD (LOGEUD). The model parameters were fit to patient data using a maximum likelihood analysis. Results: All patients had a minimum of 15 months of follow-up (only 8/48 received concurrent chemotherapy): 25/48 (52.1%) experienced G2-G3 edema. Both NTCP models fit the clinical data well: with LOGEUD the relationship between EUD and NTCP can be described with TD50 = 46.7 ± 2.1 Gy, n = 1.41 ± 0.8 and a steepness parameter k = 7.2 ± 2.5 Gy. Best-fit parameters for LEUD are n = 1.17 ± 0.6, m = 0.23 ± 0.07 and TD50 = 47.3 ± 2.1 Gy. Conclusions: A clear volume effect was found for edema, consistent with a parallel architecture of the larynx for this endpoint. On the basis of our findings, an EUD <30-35 Gy should drastically reduce the risk of G2-G3 edema.
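Both fitted models reduce the DVH to an EUD and map it to a complication probability. A sketch using the best-fit values quoted above (the DVH bins below are illustrative, not patient data):

```python
import math

def eud(dvh, n):
    """Generalized EUD from (dose_Gy, fractional_volume) bins:
    EUD = (sum_i v_i * D_i**(1/n)) ** n."""
    return sum(v * d ** (1.0 / n) for d, v in dvh) ** n

def ntcp_lyman(eud_gy, td50=47.3, m=0.23):
    """Lyman model: standard normal CDF of t = (EUD - TD50)/(m*TD50)."""
    t = (eud_gy - td50) / (m * td50)
    return 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))

def ntcp_logit(eud_gy, td50=46.7, k=7.2):
    """Logit model: NTCP = 1 / (1 + (TD50/EUD)**k)."""
    return 1.0 / (1.0 + (td50 / eud_gy) ** k)

dvh = [(20.0, 0.3), (40.0, 0.5), (60.0, 0.2)]   # illustrative larynx DVH
risk = ntcp_lyman(eud(dvh, n=1.17))
```

Both sigmoids pass through 50% at EUD = TD50 by construction; the large volume parameter n ≈ 1.2-1.4 is what expresses the parallel-architecture conclusion, since mean-dose-like averaging dominates the EUD.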
Improving the Fit of a Land-Surface Model to Data Using its Adjoint
NASA Astrophysics Data System (ADS)
Raoult, Nina; Jupp, Tim; Cox, Peter; Luke, Catherine
2016-04-01
Land-surface models (LSMs) are crucial components of the Earth System Models (ESMs) used to make coupled climate-carbon cycle projections for the 21st century. The Joint UK Land Environment Simulator (JULES) is the land-surface model used in the climate and weather forecast models of the UK Met Office. In this study, JULES is automatically differentiated using commercial software from FastOpt, resulting in an analytical gradient, or adjoint, of the model. Using this adjoint, the adJULES parameter estimation system has been developed to search for locally optimum parameter sets by calibrating against observations. We present an introduction to the adJULES system and demonstrate its ability to improve the model-data fit using eddy covariance measurements of gross primary production (GPP) and latent heat (LE) fluxes. adJULES can also calibrate over multiple sites simultaneously. This feature is used to define new optimised parameter values for the five Plant Functional Types (PFTs) in JULES. The optimised PFT-specific parameters improve the performance of JULES at over 90% of the FLUXNET sites used in the study. These reductions in error are shown and compared to the reductions obtained from site-specific optimisations. Finally, we show that calculating the second derivative of JULES allows us to produce posterior probability density functions of the parameters, and we show how knowledge of parameter values is constrained by observations.
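The role of the adjoint is to supply exact cost-function gradients for calibration. A toy analogue, with a hand-derived gradient of a two-parameter exponential "model" standing in for the FastOpt-generated adjoint of JULES (all values illustrative):

```python
import math

def cost_and_grad(p, data):
    """Sum-of-squares misfit of y(t) = p0*exp(-p1*t) and its analytic
    gradient -- a hand-coded stand-in for an adjoint-derived gradient."""
    j, g0, g1 = 0.0, 0.0, 0.0
    for t, y in data:
        e = math.exp(-p[1] * t)
        r = p[0] * e - y
        j += r * r
        g0 += 2.0 * r * e
        g1 -= 2.0 * r * p[0] * t * e
    return j, (g0, g1)

def descend(p, data, iters=5000):
    """Gradient descent with backtracking line search on the gradient."""
    j, g = cost_and_grad(p, data)
    for _ in range(iters):
        step = 1.0
        while True:
            q = [p[0] - step * g[0], p[1] - step * g[1]]
            jq, gq = cost_and_grad(q, data)
            if jq < j or step < 1e-12:
                break
            step *= 0.5
        p, j, g = q, jq, gq
    return p

# synthetic "observations" generated from true parameters (2.0, 1.5)
data = [(0.05 * i, 2.0 * math.exp(-1.5 * 0.05 * i)) for i in range(20)]
p = descend([1.0, 1.0], data)
```

The point of the analytic (adjoint) gradient is that its cost is independent of the number of parameters, which is what makes gradient search feasible for a full LSM.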
The drift diffusion model as the choice rule in reinforcement learning.
Pedersen, Mads Lund; Frank, Michael J; Biele, Guido
2017-08-01
Current reinforcement-learning models often assume simplified decision processes that do not fully reflect the dynamic complexities of choice processes. Conversely, sequential-sampling models of decision making account for both choice accuracy and response time, but assume that decisions are based on static decision values. To combine these two computational models of decision making and learning, we implemented reinforcement-learning models in which the drift diffusion model describes the choice process, thereby capturing both within- and across-trial dynamics. To exemplify the utility of this approach, we quantitatively fit data from a common reinforcement-learning paradigm using hierarchical Bayesian parameter estimation, and compared model variants to determine whether they could capture the effects of stimulant medication in adult patients with attention-deficit hyperactivity disorder (ADHD). The model with the best relative fit provided a good description of the learning process, choices, and response times. A parameter recovery experiment showed that the hierarchical Bayesian modeling approach enabled accurate estimation of the model parameters. The model approach described here, using simultaneous estimation of reinforcement-learning and drift diffusion model parameters, shows promise for revealing new insights into the cognitive and neural mechanisms of learning and decision making, as well as the alteration of such processes in clinical groups.
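The combined model can be sketched schematically: a delta-rule learner whose value difference sets the trial-by-trial drift rate of a diffusion process. Parameter values below are illustrative, not those estimated in the paper:

```python
import random

def ddm_choice(drift, bound=1.0, dt=0.01, sigma=1.0, max_steps=5000):
    """Simulate one drift-diffusion trial; returns (choice, rt).
    Choice 0 is the upper bound, favored when drift > 0."""
    x, t = 0.0, 0.0
    for _ in range(max_steps):
        x += drift * dt + sigma * random.gauss(0.0, 1.0) * dt ** 0.5
        t += dt
        if x >= bound:
            return 0, t
        if x <= -bound:
            return 1, t
    return (0, t) if x > 0 else (1, t)

random.seed(1)
q = [0.5, 0.5]                 # action values
alpha, v_scale = 0.1, 3.0      # learning rate, drift-rate scaling
p_reward = [0.8, 0.2]          # two-armed bandit: action 0 is better
rts = []
for _ in range(300):
    choice, rt = ddm_choice(v_scale * (q[0] - q[1]))
    rts.append(rt)
    reward = 1.0 if random.random() < p_reward[choice] else 0.0
    q[choice] += alpha * (reward - q[choice])    # delta-rule update
```

As learning widens the value difference, the drift grows, so choices become both more accurate and faster — the within- and across-trial dynamics the authors exploit for joint fitting.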
Preliminary estimates of Gulf Stream characteristics from TOPEX data and a precise gravimetric geoid
NASA Technical Reports Server (NTRS)
Rapp, Richard H.; Smith, Dru A.
1994-01-01
TOPEX sea surface height data have been used, with a gravimetric geoid, to calculate the sea surface topography across the Gulf Stream. This topography was initially computed for nine tracks on cycles 21 to 29. Due to inaccurate geoid undulations on one track, results for eight tracks are reported. The sea surface topography estimates were used to calculate parameters that describe Gulf Stream characteristics from two models of the Gulf Stream. One model was based on a Gaussian representation of the velocity, while the other was a hyperbolic representation of the velocity or of the sea surface topography. The parameters of the Gaussian velocity model fit were a width parameter, a maximum velocity value, and the location of the maximum velocity. The parameters of the hyperbolic sea surface topography model were the width, the height jump, the position, and the sea surface topography at the center of the stream. Both models were used for the eight tracks and nine cycles studied. Comparisons were made between the width parameters, the maximum velocities, and the height jumps. Some of the parameter estimates were found to be highly (0.9) correlated when the hyperbolic sea surface topography fit was carried out, but such correlations were reduced for either the Gaussian velocity fit or the hyperbolic velocity model fit. A comparison of the parameters derived from 1 year of TOPEX data showed good agreement with values derived by Kelly (1991) using 2.5 years of Geosat data near 38 deg N, 66 deg W longitude. The accuracy of the geoid undulations used in the calculations was of the order of ±16 cm, with the accuracy of a geoid undulation difference equal to ±15 cm over a 100-km line in areas with good terrestrial data coverage. This paper demonstrates that our knowledge of geoid undulations and undulation differences, in a portion of the Gulf Stream region, is sufficiently accurate to determine characteristics of the jet when used with TOPEX altimeter data.
The method used here has not been shown to be more accurate than methods that average altimeter data to form a reference surface used in analysis to obtain the Gulf Stream characteristics. However, the results show the geoid approach may be used in areas where lack of current meandering reduces the accuracy of the average surface procedure.
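The hyperbolic sea surface topography model and its implied surface velocity can be written down directly, using the geostrophic relation v = (g/f)·dh/dx. The numbers below are illustrative mid-latitude values, not the paper's fitted parameters:

```python
import math

G, F = 9.81, 9.3e-5        # gravity (m/s^2); Coriolis parameter near 40 deg N

def ssh(x, h0, dh, x0, L):
    """Hyperbolic-tangent sea surface topography across the jet:
    height jump dh, center position x0, width parameter L."""
    return h0 + 0.5 * dh * math.tanh((x - x0) / L)

def surface_speed(x, dh, x0, L):
    """Geostrophic speed |v| = (g/f) * dh/dx for the tanh profile,
    which peaks at the stream center with value (g/f) * dh / (2L)."""
    sech2 = 1.0 / math.cosh((x - x0) / L) ** 2
    return (G / F) * (0.5 * dh / L) * sech2

# illustrative values: 1.1 m height jump over a 40 km width scale
dh, x0, L = 1.1, 0.0, 4.0e4
v_max = surface_speed(x0, dh, x0, L)     # peak speed at the stream center
```

The closed-form link v_max = (g/f)·Δh/(2L) is why the height-jump and width parameters of the topography fit are strongly correlated: many (Δh, L) pairs give nearly the same velocity profile.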
Xiang, Junfeng; Xie, Lijing; Gao, Feinong; Zhang, Yu; Yi, Jie; Wang, Tao; Pang, Siqin; Wang, Xibin
2018-01-01
Discrepancies in capturing the material behavior of some materials, such as particulate-reinforced metal matrix composites, with the conventional ad hoc fitting strategy call the applicability of the Johnson-Cook constitutive model into question. Despite efforts at extension, an extended formalism with more fitting parameters increases the difficulty of identifying the constitutive parameters. A weighted multi-objective strategy for identifying any constitutive formalism is therefore developed to predict mechanical behavior in static and dynamic loading conditions equally well. The varying weights are based on a Gaussian-distributed noise evaluation of the experimentally obtained stress-strain data in quasi-static or dynamic mode. This universal method can be used to determine quickly and directly whether a constitutive formalism is suitable for describing the material's constitutive behavior by measuring goodness of fit. A quantitative comparison of different fitting strategies for identifying Al6063/SiCp's material parameters is made in terms of performance evaluation, including noise elimination, correlation, and reliability. Finally, a three-dimensional (3D) FE model of small-hole drilling of Al6063/SiCp composites, using the multi-objective identified constitutive formalism, is developed. Comparison with experimental observations of thrust force, torque, and chip morphology provides valid evidence of the applicability of the developed multi-objective identification strategy for identifying constitutive parameters. PMID:29324688
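The Johnson-Cook flow stress and a weighted multi-objective misfit can be sketched as follows. The parameter values and weights are illustrative; the paper's noise-variance-based weighting is only mimicked by hand-picked weights here:

```python
import math

def jc_stress(strain, rate, t_star, A, B, n, C, m, rate0=1.0):
    """Johnson-Cook flow stress:
    (A + B*eps^n) * (1 + C*ln(rate/rate0)) * (1 - T*^m)."""
    return ((A + B * strain ** n)
            * (1.0 + C * math.log(rate / rate0))
            * (1.0 - t_star ** m))

def weighted_misfit(params, datasets):
    """Weighted multi-objective least squares. Each dataset is
    (weight, points), e.g. one quasi-static and one dynamic test series;
    weights would be set inversely to each dataset's noise variance."""
    A, B, n, C, m = params
    total = 0.0
    for w, points in datasets:
        for strain, rate, t_star, sigma_obs in points:
            r = jc_stress(strain, rate, t_star, A, B, n, C, m) - sigma_obs
            total += w * r * r
    return total

params = (200.0, 300.0, 0.3, 0.02, 1.0)       # illustrative MPa-scale values
static = (1.0, [(0.1, 1.0, 0.0, 350.0)])      # quasi-static set, weight 1.0
dynamic = (0.25, [(0.1, 1000.0, 0.0, 400.0)]) # noisier dynamic set, weight 0.25
loss = weighted_misfit(params, [static, dynamic])
```

Minimizing this single weighted scalar over (A, B, n, C, m) is the multi-objective identification step; equal weights recover the conventional ad hoc strategy the paper argues against.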
LEPTONIC AND LEPTO-HADRONIC MODELING OF THE 2010 NOVEMBER FLARE FROM 3C 454.3
DOE Office of Scientific and Technical Information (OSTI.GOV)
Diltz, C.; Böttcher, M.
In this study, we use a one-zone leptonic and a lepto-hadronic model to investigate the multi-wavelength emission and prominent flare of the flat spectrum radio quasar 3C 454.3 in 2010 November. We perform a parameter study with both models to obtain broadband fits to the spectral energy distribution (SED) of 3C 454.3. Starting with the baseline parameters obtained from the fits, we then investigate different flaring scenarios for both models to explain an extreme outburst and spectral hardening of 3C 454.3 that occurred in 2010 November. We find that the one-zone lepto-hadronic model can successfully explain both the broadband multi-wavelength SED and light curves in the optical R, Swift X-Ray Telescope, and Fermi γ-ray band passes for 3C 454.3 during quiescence and the peak of the 2010 November flare. We also find that the one-zone leptonic model produces poor fits to the broadband spectra in the X-ray and high-energy γ-ray band passes for the 2010 November flare.
Abbott, Lauren J; Stevens, Mark J
2015-12-28
A coarse-grained (CG) model is developed for the thermoresponsive polymer poly(N-isopropylacrylamide) (PNIPAM), using a hybrid top-down and bottom-up approach. Nonbonded parameters are fit to experimental thermodynamic data following the procedures of the SDK (Shinoda, DeVane, and Klein) CG force field, with minor adjustments to provide better agreement with radial distribution functions from atomistic simulations. Bonded parameters are fit to probability distributions from atomistic simulations using multi-centered Gaussian-based potentials. The temperature-dependent potentials derived for the PNIPAM CG model in this work properly capture the coil-globule transition of PNIPAM single chains and yield a chain-length dependence consistent with atomistic simulations.
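Fitting bonded CG potentials to atomistic probability distributions typically starts from Boltzmann inversion, U(x) = −kT·ln P(x); a Gaussian distribution then inverts exactly to a harmonic potential with spring constant kT/σ². A minimal sketch with illustrative numbers (not the PNIPAM parameters):

```python
import math

KT = 2.494  # kT in kJ/mol at ~300 K

def gaussian_pdf(x, mu, sigma):
    """Normalized Gaussian probability density."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def boltzmann_invert(p_x):
    """Potential (up to a constant) from a probability density value."""
    return -KT * math.log(p_x)

# A Gaussian bond-length distribution inverts to a harmonic well
# U(x) = const + 0.5 * k_spring * (x - mu)^2 with k_spring = KT / sigma^2.
mu, sigma = 0.47, 0.03          # nm, illustrative CG bond statistics
k_spring = KT / sigma ** 2

def u(x):
    return boltzmann_invert(gaussian_pdf(x, mu, sigma))

# numerical curvature of the inverted potential at the minimum
curvature = (u(mu + 1e-4) - 2 * u(mu) + u(mu - 1e-4)) / 1e-8
```

Multi-centered Gaussian-based potentials extend this idea to multimodal distributions: the inverted potential is represented as a sum of such wells rather than a single harmonic term.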
Power law versus exponential state transition dynamics: application to sleep-wake architecture.
Chu-Shore, Jesse; Westover, M Brandon; Bianchi, Matt T
2010-12-02
Despite the common experience that interrupted sleep has a negative impact on waking function, the features of human sleep-wake architecture that best distinguish sleep continuity versus fragmentation remain elusive. In this regard, there is growing interest in characterizing sleep architecture using models of the temporal dynamics of sleep-wake stage transitions. In humans and other mammals, the state transitions defining sleep and wake bout durations have been described with exponential and power law models, respectively. However, sleep-wake stage distributions are often complex, and distinguishing between exponential and power law processes is not always straightforward. Although mono-exponential distributions are distinct from power law distributions, multi-exponential distributions may in fact resemble power laws by appearing linear on a log-log plot. To characterize the parameters that may allow these distributions to mimic one another, we systematically fitted multi-exponential-generated distributions with a power law model, and power law-generated distributions with multi-exponential models. We used the Kolmogorov-Smirnov method to investigate goodness of fit for the "incorrect" model over a range of parameters. The "zone of mimicry" of parameters that increased the risk of mistakenly accepting power law fitting resembled empiric time constants obtained in human sleep and wake bout distributions. Recognizing this uncertainty in model distinction impacts interpretation of transition dynamics (self-organizing versus probabilistic), and the generation of predictive models for clinical classification of normal and pathological sleep architecture.
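The mimicry described above can be probed numerically: draw bout durations from a multi-exponential mixture, fit a continuous power law by maximum likelihood, and measure the Kolmogorov-Smirnov distance. This is a simplified sketch of that comparison, with illustrative time constants:

```python
import math
import random

def sample_exp_mixture(n, scales, weights, rng):
    """Draw bout durations from a mixture of exponentials."""
    out = []
    for _ in range(n):
        u, acc = rng.random(), 0.0
        for s, w in zip(scales, weights):
            acc += w
            if u <= acc:
                out.append(rng.expovariate(1.0 / s))
                break
    return out

def power_law_fit_ks(xs, xmin):
    """Continuous power-law MLE alpha = 1 + n / sum(ln(x/xmin)) and the
    KS distance between the tail sample and the fitted power-law CDF."""
    tail = sorted(x for x in xs if x >= xmin)
    alpha = 1.0 + len(tail) / sum(math.log(x / xmin) for x in tail)
    ks = 0.0
    for i, x in enumerate(tail):
        model_cdf = 1.0 - (x / xmin) ** (1.0 - alpha)
        ks = max(ks, abs((i + 1) / len(tail) - model_cdf),
                 abs(i / len(tail) - model_cdf))
    return alpha, ks

rng = random.Random(7)
xs = sample_exp_mixture(5000, scales=[1.0, 10.0, 100.0],
                        weights=[0.6, 0.3, 0.1], rng=rng)
alpha, ks = power_law_fit_ks(xs, xmin=1.0)
```

Sweeping the mixture's time constants and weights while monitoring this KS distance is one way to trace out the "zone of mimicry" the authors describe.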
NEUTRON STAR MASS–RADIUS CONSTRAINTS USING EVOLUTIONARY OPTIMIZATION
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stevens, A. L.; Morsink, S. M.; Fiege, J. D.
The equation of state of cold supra-nuclear-density matter, such as in neutron stars, is an open question in astrophysics. A promising method for constraining the neutron star equation of state is modeling pulse profiles of thermonuclear X-ray burst oscillations from hot spots on accreting neutron stars. The pulse profiles, constructed using spherical and oblate neutron star models, are comparable to what would be observed by a next-generation X-ray timing instrument like ASTROSAT, NICER, or a mission similar to LOFT. In this paper, we showcase the use of an evolutionary optimization algorithm to fit pulse profiles to determine the best-fit masses and radii. By fitting synthetic data, we assess how well the optimization algorithm can recover the input parameters. Multiple Poisson realizations of the synthetic pulse profiles, constructed with 1.6 million counts and no background, were fitted with the Ferret algorithm to analyze both statistical and degeneracy-related uncertainty and to explore how the goodness of fit depends on the input parameters. For the regions of parameter space sampled by our tests, the best-determined parameter is the projected velocity of the spot along the observer's line of sight, with an accuracy of ≤3% compared to the true value and with ≤5% statistical uncertainty. The next best determined are the mass and radius; for a neutron star with a spin frequency of 600 Hz, the best-fit mass and radius are accurate to ≤5%, with respective uncertainties of ≤7% and ≤10%. The accuracy and precision depend on the observer inclination and spot colatitude, with values of ∼1% achievable in mass and radius if both the inclination and colatitude are ≳60°.
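A toy analogue of the evolutionary fit: a simple (mu + lambda) evolution strategy recovering two parameters of a smooth synthetic objective, standing in for the Ferret algorithm and the pulse-profile likelihood (the optimum values and bounds are illustrative):

```python
import random

def evolve(cost, lo, hi, rng, gens=60, parents=10, children=40):
    """Minimal elitist (mu + lambda) evolution strategy with a
    geometrically decaying mutation scale."""
    pop = [[rng.uniform(a, b) for a, b in zip(lo, hi)] for _ in range(children)]
    for gen in range(gens):
        pop.sort(key=cost)
        best = pop[:parents]                  # elitist selection
        sigma = 0.5 * 0.93 ** gen             # shrinking mutation scale
        pop = best + [
            [min(max(x + rng.gauss(0.0, sigma), a), b)
             for x, a, b in zip(rng.choice(best), lo, hi)]
            for _ in range(children)
        ]
    return min(pop, key=cost)

# synthetic chi^2 bowl with optimum at "mass" 1.6, "radius" 11.8 (illustrative)
def chi2(p):
    return (p[0] - 1.6) ** 2 + 0.1 * (p[1] - 11.8) ** 2

rng = random.Random(3)
best = evolve(chi2, lo=(1.0, 8.0), hi=(3.0, 16.0), rng=rng)
```

Population-based searches like this are attractive for pulse-profile fitting because the real likelihood surface has the strong mass-radius degeneracies the abstract describes, where a single gradient descent can stall.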
NASA Astrophysics Data System (ADS)
Liu, Huaming; Qin, Xunpeng; Huang, Song; Hu, Zeqi; Ni, Mao
2018-01-01
This paper presents an investigation of the relationship between the process parameters and the geometrical characteristics of the sectional profile for single track cladding (STC) deposited by a High Power Diode Laser (HPDL) with a rectangular beam spot (RBS). To obtain the geometry parameters, namely the cladding width Wc and height Hc of the sectional profile, a full factorial design (FFD) of experiment was used to conduct a total of 27 experiments. The pre-placed powder technique was employed during laser cladding. The influence of the process parameters, including laser power, powder thickness and scanning speed, on Wc and Hc was analyzed in detail. A nonlinear fitting model was used to fit the relationship between the process parameters and the geometry parameters, and a circular arc was adopted to describe the geometry profile of the cross-section of the STC. These models were confirmed by all the experiments. The results indicated that the geometrical characteristics of the sectional profile of the STC can be described as a circular arc, and that the other geometry parameters of the sectional profile can be calculated using only Wc and Hc. Meanwhile, Wc and Hc can be predicted from the process parameters.
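If the clad cross-section is a circular arc of width Wc and height Hc, the remaining geometry indeed follows in closed form. A sketch of the relations (the symbol names and example dimensions are ours, not the paper's):

```python
import math

def arc_geometry(wc, hc):
    """Circle through the clad toe points (+-wc/2, 0) and apex (0, hc):
    radius R = (wc^2 + 4*hc^2) / (8*hc), center at (0, hc - R).
    Also returns the cross-sectional area of the circular segment,
    A = R^2 * asin(wc / (2R)) - (wc/2) * (R - hc)."""
    r = (wc ** 2 + 4.0 * hc ** 2) / (8.0 * hc)
    half_angle = math.asin((wc / 2.0) / r)
    area = r * r * half_angle - (wc / 2.0) * (r - hc)
    return r, area

wc, hc = 2.4, 0.6       # mm, illustrative single-track dimensions
r, area = arc_geometry(wc, hc)
```

This is the sense in which "the other geometry parameters can be calculated using only Wc and Hc": radius, wetting angle and segment area are all determined once the arc assumption is accepted.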
NASA Astrophysics Data System (ADS)
Holoien, Thomas W.-S.; Marshall, Philip J.; Wechsler, Risa H.
2017-06-01
We describe two new open-source tools written in Python for performing extreme deconvolution Gaussian mixture modeling (XDGMM) and using a conditioned model to re-sample observed supernova and host galaxy populations. XDGMM is a new program that uses Gaussian mixtures to perform density estimation of noisy data using extreme deconvolution (XD) algorithms. Additionally, it has functionality not available in other XD tools. It allows the user to select between the AstroML and Bovy et al. fitting methods and is compatible with scikit-learn machine learning algorithms. Most crucially, it allows the user to condition a model based on the known values of a subset of parameters. This gives the user the ability to produce a tool that can predict unknown parameters based on a model that is conditioned on known values of other parameters. EmpiriciSN is an exemplary application of this functionality, which can be used to fit an XDGMM model to observed supernova/host data sets and predict likely supernova parameters using a model conditioned on observed host properties. It is primarily intended to simulate realistic supernovae for LSST data simulations based on empirical galaxy properties.
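The conditioning step at the heart of that feature is the standard Gaussian conditional formula, applied per mixture component. A sketch for a single 2-D Gaussian (the full multi-component case additionally reweights components by their marginal likelihoods; the numbers are illustrative):

```python
def condition_gaussian(mu, cov, x2):
    """Condition a 2-D Gaussian over [x1, x2] on an observed x2:
    mean_{1|2} = mu1 + (s12/s22) * (x2 - mu2)
    var_{1|2}  = s11 - s12^2 / s22."""
    mu1, mu2 = mu
    s11, s12, s22 = cov[0][0], cov[0][1], cov[1][1]
    cond_mean = mu1 + (s12 / s22) * (x2 - mu2)
    cond_var = s11 - s12 * s12 / s22
    return cond_mean, cond_var

# e.g. predict a supernova property (x1) from a host galaxy property (x2)
mu = (0.0, 10.0)
cov = [[1.0, 0.8], [0.8, 2.0]]
m, v = condition_gaussian(mu, cov, x2=11.0)
```

The conditional variance is always smaller than the marginal variance whenever the two parameters are correlated, which is exactly why conditioning on host properties sharpens the predicted supernova parameters.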
Model of head-neck joint fast movements in the frontal plane.
Pedrocchi, A; Ferrigno, G
2004-06-01
The objective of this work is to develop a model representing the physiological systems driving fast head movements in the frontal plane. All the contributions occurring mechanically in the head movement are considered: damping, stiffness, the physiological limit of the range of motion, the gravitational field, and muscular torques due to voluntary activation as well as to the stretch reflex depending on fusal afferences. Model parameters are derived from the literature when possible, whereas the remaining undetermined parameters are identified by optimising the fit of the model output to real kinematic data acquired by a motion capture system in specific experimental set-ups. The optimisation for parameter identification is performed by genetic algorithms. Results show that the model represents fast head movements very well over the whole range of inclination in the frontal plane. Such a model could serve as a tool for transforming kinematic data on head movements into 'neural equivalent data', especially for assessing head control disease and properly planning the rehabilitation process. In addition, genetic algorithms fit the parameter identification problem well, allowing the use of a very simple experimental set-up and granting model robustness.
McKenna, James E.
2000-01-01
Although perceiving genetic differences and their effects on fish population dynamics is difficult, simulation models offer a means to explore and illustrate these effects. I partitioned the intrinsic rate of increase parameter of a simple logistic-competition model into three components, allowing specification of the effects of relative differences in fitness and mortality, as well as the finite rate of increase. This model was placed into an interactive, stochastic environment to allow easy manipulation of model parameters (FITPOP). Simulation results illustrated the effects of subtle differences in genetic and population parameters on total population size, overall fitness, and sensitivity of the system to variability. Several consequences of mixing genetically distinct populations were illustrated. For example, behaviors such as depression of population size after initial introgression and extirpation of native stocks due to continuous stocking of genetically inferior fish were reproduced. It was also shown that carrying capacity relative to the amount of stocking had an important influence on population dynamics. Uncertainty associated with parameter estimates reduced confidence in model projections. The FITPOP model provided a simple tool to explore population dynamics, which may assist in formulating management strategies and identifying research needs.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Holoien, Thomas W. -S.; Marshall, Philip J.; Wechsler, Risa H.
We describe two new open-source tools written in Python for performing extreme deconvolution Gaussian mixture modeling (XDGMM) and using a conditioned model to re-sample observed supernova and host galaxy populations. XDGMM is a new program that uses Gaussian mixtures to perform density estimation of noisy data using extreme deconvolution (XD) algorithms. Additionally, it has functionality not available in other XD tools. It allows the user to select between the AstroML and Bovy et al. fitting methods and is compatible with scikit-learn machine learning algorithms. Most crucially, it allows the user to condition a model based on the known values of a subset of parameters. This gives the user the ability to produce a tool that can predict unknown parameters based on a model that is conditioned on known values of other parameters. EmpiriciSN is an exemplary application of this functionality, which can be used to fit an XDGMM model to observed supernova/host data sets and predict likely supernova parameters using a model conditioned on observed host properties. It is primarily intended to simulate realistic supernovae for LSST data simulations based on empirical galaxy properties.
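The conditioning step described above, predicting unknown parameters from the known values of others, follows from the standard conditional-Gaussian formulas applied per mixture component, with the component weights rescaled by the marginal likelihood of the known values. A minimal sketch (this is not the XDGMM library's actual API; the function name and all values are illustrative):

```python
import numpy as np

def condition_gmm(weights, means, covs, x_known, known_idx, unknown_idx):
    """Condition a Gaussian mixture on known values of a subset of dimensions.

    Returns the weights, means and covariances of the conditional mixture
    over the unknown dimensions.
    """
    new_w, new_mu, new_cov = [], [], []
    for w, mu, S in zip(weights, means, covs):
        mu_k, mu_u = mu[known_idx], mu[unknown_idx]
        S_kk = S[np.ix_(known_idx, known_idx)]
        S_uk = S[np.ix_(unknown_idx, known_idx)]
        S_uu = S[np.ix_(unknown_idx, unknown_idx)]
        S_kk_inv = np.linalg.inv(S_kk)
        diff = x_known - mu_k
        # Conditional mean and covariance of the unknown block
        new_mu.append(mu_u + S_uk @ S_kk_inv @ diff)
        new_cov.append(S_uu - S_uk @ S_kk_inv @ S_uk.T)
        # Reweight by the marginal likelihood of the known values
        det = np.linalg.det(2 * np.pi * S_kk)
        lk = np.exp(-0.5 * diff @ S_kk_inv @ diff) / np.sqrt(det)
        new_w.append(w * lk)
    new_w = np.array(new_w)
    return new_w / new_w.sum(), np.array(new_mu), np.array(new_cov)

# Two-component 2D mixture; condition on the first dimension
weights = np.array([0.5, 0.5])
means = np.array([[0.0, 0.0], [3.0, 3.0]])
covs = np.array([np.eye(2), np.eye(2)])
w, mu, cov = condition_gmm(weights, means, covs,
                           np.array([3.0]), [0], [1])
# The component centred at (3, 3) should dominate the conditional mixture
```

The conditional mixture can then be sampled to "re-sample" the unobserved parameters, which is the mechanism the abstract describes for predicting supernova properties from host observations.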
Model Fit to Experimental Data for Foam-Assisted Deep Vadose Zone Remediation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Roostapour, A.; Lee, G.; Zhong, Lirong
2014-01-15
Foam has been regarded as a promising means of remedial amendment delivery to overcome subsurface heterogeneity in subsurface remediation processes. This study investigates how a foam model, developed by the Method of Characteristics and fractional flow analysis in the companion paper of Roostapour and Kam (2012), can be applied to fit a set of existing laboratory flow experiments (Zhong et al., 2009) in an application relevant to deep vadose zone remediation. This study reveals a few important insights regarding foam-assisted deep vadose zone remediation: (i) the mathematical framework established for foam modeling can fit typical flow experiments, matching wave velocities, saturation history, and pressure responses; (ii) the set of input parameters may not be unique for the fit, and therefore conducting experiments to measure basic model parameters related to relative permeability, initial and residual saturations, surfactant adsorption and so on should not be overlooked; and (iii) gas compressibility plays an important role in data analysis and thus should be handled carefully in laboratory flow experiments. Foam kinetics, causing foam texture to reach its steady-state value slowly, may impose additional complications.
NASA Astrophysics Data System (ADS)
Hasan, Husna; Salam, Norfatin; Kassim, Suraiya
2013-04-01
Extreme temperatures at several stations in Malaysia are modeled by fitting the annual maxima to the Generalized Extreme Value (GEV) distribution. The Augmented Dickey-Fuller (ADF) and Phillips-Perron (PP) tests are used to detect stochastic trends among the stations. The Mann-Kendall (MK) test suggests a non-stationary model. Three models are considered for stations with a trend, and the Likelihood Ratio test is used to determine the best-fitting model. The results show that the Subang and Bayan Lepas stations favour a model that is linear in the location parameter, while the Kota Kinabalu and Sibu stations are suited to a model linear in the logarithm of the scale parameter. The return level, i.e. the level of maximum temperature expected to be exceeded once, on average, in a given number of years, is also obtained.
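The stationary version of the GEV fit and return-level calculation described above can be sketched with SciPy's `genextreme` distribution; the "annual maximum" values below are synthetic, not the Malaysian station data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
# Synthetic annual maximum temperature series (illustrative values, degrees C)
annual_max = stats.genextreme.rvs(c=0.1, loc=36.0, scale=1.2,
                                  size=60, random_state=rng)

# Fit a stationary GEV to the block maxima by maximum likelihood
c, loc, scale = stats.genextreme.fit(annual_max)

# m-year return level: the quantile exceeded once, on average, in m years
m = 50
return_level = stats.genextreme.ppf(1 - 1.0 / m, c, loc=loc, scale=scale)
```

For the non-stationary models in the abstract, `loc` (or `log(scale)`) would instead be a linear function of time and the likelihood maximized over the trend coefficients as well.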
Bajzer, Željko; Gibbons, Simon J.; Coleman, Heidi D.; Linden, David R.
2015-01-01
Noninvasive breath tests for gastric emptying are important techniques for understanding the changes in gastric motility that occur in disease or in response to drugs. Mice are often used as an animal model; however, the gamma variate model currently used for data analysis does not always fit the data appropriately. The aim of this study was to determine appropriate mathematical models to better fit mouse gastric emptying data including when two peaks are present in the gastric emptying curve. We fitted 175 gastric emptying data sets with two standard models (gamma variate and power exponential), with a gamma variate model that includes stretched exponential and with a proposed two-component model. The appropriateness of the fit was assessed by the Akaike Information Criterion. We found that extension of the gamma variate model to include a stretched exponential improves the fit, which allows for a better estimation of T1/2 and Tlag. When two distinct peaks in gastric emptying are present, a two-component model is required for the most appropriate fit. We conclude that use of a stretched exponential gamma variate model and when appropriate a two-component model will result in a better estimate of physiologically relevant parameters when analyzing mouse gastric emptying data. PMID:26045615
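The comparison described above, extending a gamma variate with a stretched exponential and ranking models by the Akaike Information Criterion, can be sketched as follows. The functional forms and data here are illustrative assumptions, not the study's exact models or breath-test measurements:

```python
import numpy as np
from scipy.optimize import curve_fit

def gamma_variate(t, a, b, c):
    """Standard gamma variate: a * t^b * exp(-t/c)."""
    return a * t**b * np.exp(-t / c)

def stretched_gamma(t, a, b, c, d):
    """Gamma variate with a stretched-exponential tail (extra shape d)."""
    return a * t**b * np.exp(-(t / c)**d)

def aic(y, yhat, k):
    """AIC for least-squares fits: n*ln(RSS/n) + 2k."""
    n = len(y)
    rss = np.sum((y - yhat)**2)
    return n * np.log(rss / n) + 2 * k

rng = np.random.default_rng(0)
t = np.linspace(0.1, 180.0, 90)                 # minutes
truth = stretched_gamma(t, 1.0, 1.5, 40.0, 0.8)
y = truth + rng.normal(0.0, 0.005, t.size)

p1, _ = curve_fit(gamma_variate, t, y, p0=[1.0, 1.5, 40.0],
                  bounds=(0, np.inf), maxfev=10000)
p2, _ = curve_fit(stretched_gamma, t, y, p0=[1.0, 1.5, 40.0, 1.0],
                  bounds=(0, np.inf), maxfev=10000)

aic1 = aic(y, gamma_variate(t, *p1), 3)
aic2 = aic(y, stretched_gamma(t, *p2), 4)
# Lower AIC indicates the preferred model; here the data were generated
# with a stretched tail, so the extended model should win
```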
Lord, Dominique
2006-07-01
There has been considerable research conducted on the development of statistical models for predicting crashes on highway facilities. Despite numerous advancements made for improving the estimation tools of statistical models, the most common probabilistic structures used for modeling motor vehicle crashes remain the traditional Poisson and Poisson-gamma (or Negative Binomial) distributions; when crash data exhibit over-dispersion, the Poisson-gamma model is usually the model of choice most favored by transportation safety modelers. Crash data collected for safety studies often have the unusual attribute of being characterized by low sample mean values. Studies have shown that the goodness-of-fit of statistical models produced from such datasets can be significantly affected. This issue has been defined as the "low mean problem" (LMP). Despite recent developments on methods to circumvent the LMP and test the goodness-of-fit of models developed using such datasets, no work has so far examined how the LMP affects the fixed dispersion parameter of Poisson-gamma models used for modeling motor vehicle crashes. The dispersion parameter plays an important role in many types of safety studies and should, therefore, be reliably estimated. The primary objective of this research project was to verify whether the LMP affects the estimation of the dispersion parameter and, if so, to determine the magnitude of the problem. The secondary objective consisted of determining the effects of an unreliably estimated dispersion parameter on common analyses performed in highway safety studies. To accomplish the objectives of the study, a series of Poisson-gamma distributions were simulated using different values describing the mean, the dispersion parameter, and the sample size.
Three estimators commonly used by transportation safety modelers for estimating the dispersion parameter of Poisson-gamma models were evaluated: the method of moments, the weighted regression, and the maximum likelihood method. In an attempt to complement the outcome of the simulation study, Poisson-gamma models were fitted to crash data collected in Toronto, Ont., characterized by a low sample mean and small sample size. The study shows that a low sample mean combined with a small sample size can seriously affect the estimation of the dispersion parameter, no matter which estimator is used within the estimation process. The probability that the dispersion parameter is unreliably estimated increases significantly as the sample mean and sample size decrease. Consequently, the results show that an unreliably estimated dispersion parameter can significantly undermine empirical Bayes (EB) estimates as well as the estimation of confidence intervals for the gamma mean and predicted response. The paper ends with recommendations about minimizing the likelihood of producing Poisson-gamma models with an unreliable dispersion parameter for modeling motor vehicle crashes.
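The method of moments, one of the three dispersion estimators evaluated above, makes the low mean problem easy to demonstrate: with Var = μ + αμ², the estimator α̂ = (s² − m̄)/m̄² scatters widely when both the mean and the sample size are small. A simulation sketch (parameter values are illustrative, not those of the Toronto data):

```python
import numpy as np

def simulate_poisson_gamma(mean, alpha, n, rng):
    """Draw Poisson-gamma (negative binomial) counts with
    Var = mean + alpha * mean**2."""
    lam = rng.gamma(shape=1.0 / alpha, scale=alpha * mean, size=n)
    return rng.poisson(lam)

def dispersion_mom(counts):
    """Method-of-moments estimate of the dispersion parameter alpha."""
    m, s2 = counts.mean(), counts.var(ddof=1)
    return (s2 - m) / m**2

rng = np.random.default_rng(7)
alpha_true = 0.5

# Large sample, moderate mean: estimator lands close to the truth
big = simulate_poisson_gamma(mean=5.0, alpha=alpha_true, n=20000, rng=rng)
alpha_big = dispersion_mom(big)

# Small sample with low mean: estimates scatter widely (the "LMP")
small_estimates = np.array([
    dispersion_mom(simulate_poisson_gamma(0.5, alpha_true, 50, rng))
    for _ in range(500)
])
spread = small_estimates.std()
```

Note that with a low mean the numerator s² − m̄ can even turn negative, yielding inadmissible negative dispersion estimates, one symptom of the problem the paper quantifies.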
DOE Office of Scientific and Technical Information (OSTI.GOV)
Defraene, Gilles, E-mail: gilles.defraene@uzleuven.be; Van den Bergh, Laura; Al-Mamgani, Abrahim
2012-03-01
Purpose: To study the impact of clinical predisposing factors on rectal normal tissue complication probability modeling using the updated results of the Dutch prostate dose-escalation trial. Methods and Materials: Toxicity data of 512 patients (conformally treated to 68 Gy [n = 284] and 78 Gy [n = 228]) with complete follow-up at 3 years after radiotherapy were studied. Scored end points were rectal bleeding, high stool frequency, and fecal incontinence. Two traditional dose-based models (Lyman-Kutcher-Burman [LKB] and Relative Seriality [RS]) and a logistic model were fitted using a maximum likelihood approach. Furthermore, these model fits were improved by including the most significant clinical factors. The area under the receiver operating characteristic curve (AUC) was used to compare the discriminating ability of all fits. Results: Including clinical factors significantly increased the predictive power of the models for all end points. In the optimal LKB, RS, and logistic models for rectal bleeding and fecal incontinence, the first significant (p = 0.011-0.013) clinical factor was 'previous abdominal surgery.' As second significant (p = 0.012-0.016) factor, 'cardiac history' was included in all three rectal bleeding fits, whereas including 'diabetes' was significant (p = 0.039-0.048) in fecal incontinence modeling but only in the LKB and logistic models. High stool frequency fits only benefitted significantly (p = 0.003-0.006) from the inclusion of the baseline toxicity score. Of all models, the rectal bleeding fits had the highest AUC (0.77), whereas it was 0.63 and 0.68 for high stool frequency and fecal incontinence, respectively. LKB and logistic model fits resulted in similar values for the volume parameter. The steepness parameter was somewhat higher in the logistic model, also resulting in a slightly lower D50. Anal wall DVHs were used for fecal incontinence, whereas anorectal wall dose best described the other two end points.
Conclusions: Comparable prediction models were obtained with LKB, RS, and logistic NTCP models. Including clinical factors improved the predictive power of all models significantly.
Fitting Photometry of Blended Microlensing Events
NASA Astrophysics Data System (ADS)
Thomas, Christian L.; Griest, Kim
2006-03-01
We reexamine the usefulness of fitting blended light-curve models to microlensing photometric data. We find agreement with previous workers (e.g., Woźniak & Paczyński) that this is a difficult proposition because of the degeneracy of the blend fraction with other fit parameters. We show that follow-up observations at specific points along the light curve (peak region and wings) of high-magnification events are the most helpful in removing degeneracies. We also show that very small errors in the baseline magnitude can result in problems in measuring the blend fraction, and we study the importance of non-Gaussian errors in the fit results. The biases and skewness in the distribution of the recovered blend fraction are discussed. We also find a new approximation formula relating the blend fraction and the unblended fit parameters to the underlying event duration needed to estimate the microlensing optical depth.
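A blended light curve of the kind fitted above combines the standard Paczyński point-lens magnification with a blend fraction that dilutes the peak while leaving the baseline near unity, which is why the two are hard to disentangle. A sketch (the blend-fraction convention and all parameter values here are illustrative assumptions):

```python
import numpy as np

def paczynski_magnification(t, t0, tE, u0):
    """Point-source point-lens magnification A(u) with
    u(t) = sqrt(u0^2 + ((t - t0)/tE)^2)."""
    u = np.sqrt(u0**2 + ((t - t0) / tE)**2)
    return (u**2 + 2) / (u * np.sqrt(u**2 + 4))

def blended_flux(t, t0, tE, u0, f_blend):
    """Normalized flux where a fraction f_blend of the baseline light
    is un-lensed (blend) light."""
    A = paczynski_magnification(t, t0, tE, u0)
    return (1 - f_blend) * A + f_blend

t = np.linspace(-50.0, 50.0, 201)     # days
unblended = blended_flux(t, t0=0.0, tE=20.0, u0=0.1, f_blend=0.0)
blended = blended_flux(t, t0=0.0, tE=20.0, u0=0.1, f_blend=0.5)
peak_unblended = unblended.max()
peak_blended = blended.max()
```

With half the light blended, the apparent peak magnification is roughly halved while the wings are barely changed, illustrating why peak-region and wing photometry are the most informative follow-up observations.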
Cameron, Donnie; Bouhrara, Mustapha; Reiter, David A; Fishbein, Kenneth W; Choi, Seongjin; Bergeron, Christopher M; Ferrucci, Luigi; Spencer, Richard G
2017-07-01
This work characterizes the effect of lipid and noise signals on muscle diffusion parameter estimation in several conventional and non-Gaussian models, the ultimate objectives being to characterize popular fat suppression approaches for human muscle diffusion studies, to provide simulations to inform experimental work and to report normative non-Gaussian parameter values. The models investigated in this work were the Gaussian monoexponential and intravoxel incoherent motion (IVIM) models, and the non-Gaussian kurtosis and stretched exponential models. These were evaluated via simulations, and in vitro and in vivo experiments. Simulations were performed using literature input values, modeling fat contamination as an additive baseline to data, whereas phantom studies used a phantom containing aliphatic and olefinic fats and muscle-like gel. Human imaging was performed in the hamstring muscles of 10 volunteers. Diffusion-weighted imaging was applied with spectral attenuated inversion recovery (SPAIR), slice-select gradient reversal and water-specific excitation fat suppression, alone and in combination. Measurement bias (accuracy) and dispersion (precision) were evaluated, together with intra- and inter-scan repeatability. Simulations indicated that noise in magnitude images resulted in <6% bias in diffusion coefficients and non-Gaussian parameters (α, K), whereas baseline fitting minimized fat bias for all models, except IVIM. In vivo, popular SPAIR fat suppression proved inadequate for accurate parameter estimation, producing non-physiological parameter estimates without baseline fitting and large biases when it was used. Combining all three fat suppression techniques and fitting data with a baseline offset gave the best results of all the methods studied for both Gaussian diffusion and, overall, for non-Gaussian diffusion. 
It produced consistent parameter estimates for all models, except IVIM, and highlighted non-Gaussian behavior perpendicular to muscle fibers (α ~ 0.95, K ~ 3.1). These results show that effective fat suppression is crucial for accurate measurement of non-Gaussian diffusion parameters, and will be an essential component of quantitative studies of human muscle quality. Published 2017. This article is a U.S. Government work and is in the public domain in the USA.
Determination of Kinetic Parameters for the Thermal Decomposition of Parthenium hysterophorus
NASA Astrophysics Data System (ADS)
Dhaundiyal, Alok; Singh, Suraj B.; Hanon, Muammel M.; Rawat, Rekha
2018-02-01
A kinetic study of the pyrolysis process of Parthenium hysterophorus is carried out using thermogravimetric analysis (TGA) equipment. The present study investigates the thermal degradation and the determination of kinetic parameters, such as the activation energy E and the frequency factor A, using the model-free methods of Flynn-Wall-Ozawa (FWO), Kissinger-Akahira-Sunose (KAS) and Kissinger, and a model-fitting method (Coats-Redfern). The results of the thermal decomposition process separate the decomposition of Parthenium hysterophorus into three main stages: dehydration, active pyrolysis and passive pyrolysis. The DTG thermograms show that increasing the heating rate causes the temperature peaks at the maximum weight loss rate to shift towards a higher temperature regime. The results are compared with the Coats-Redfern (integral) method, and the experimental results show that the values of the kinetic parameters obtained from the model-free methods are in good agreement. While the results obtained with the Coats-Redfern model at different heating rates are not promising, the diffusion models provide a good fit to the experimental data.
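The KAS method mentioned above estimates the activation energy from the slope of ln(β/T²) against 1/T across heating rates β, since ln(β/T²) = const − E/(RT). A sketch on synthetic peak temperatures, generated here from an assumed activation energy rather than real TGA data:

```python
import numpy as np
from scipy.optimize import brentq

R = 8.314            # gas constant, J/(mol K)
E_true = 150e3       # assumed activation energy, J/mol (illustrative)
const = 25.0         # lumped KAS intercept (illustrative)
beta = np.array([5.0, 10.0, 20.0, 40.0])    # heating rates, K/min

# Peak temperatures implied by the KAS relation ln(beta/T^2) = const - E/(R*T)
def kas_residual(T, b):
    return np.log(b / T**2) - (const - E_true / (R * T))

T_peak = np.array([brentq(kas_residual, 300.0, 2000.0, args=(b,))
                   for b in beta])

# KAS estimate: the slope of ln(beta/T^2) vs 1/T equals -E/R
slope, intercept = np.polyfit(1.0 / T_peak, np.log(beta / T_peak**2), 1)
E_est = -slope * R
```

The same regression repeated at each conversion level, rather than only at the DTG peak, gives the conversion-dependent activation energies that model-free analyses report.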
Nasserie, Tahmina; Tuite, Ashleigh R; Whitmore, Lindsay; Hatchette, Todd; Drews, Steven J; Peci, Adriana; Kwong, Jeffrey C; Friedman, Dara; Garber, Gary; Gubbay, Jonathan
2017-01-01
Abstract Background Seasonal influenza epidemics occur frequently. Rapid characterization of seasonal dynamics and forecasting of epidemic peaks and final sizes could help support real-time decision-making related to vaccination and other control measures. Real-time forecasting remains challenging. Methods We used the previously described "incidence decay with exponential adjustment" (IDEA) model, a 2-parameter phenomenological model, to evaluate the characteristics of the 2015–2016 influenza season in 4 Canadian jurisdictions: the Provinces of Alberta, Nova Scotia and Ontario, and the City of Ottawa. Model fits were updated weekly with receipt of incident virologically confirmed case counts. Best-fit models were used to project seasonal influenza peaks and epidemic final sizes. Results The 2015–2016 influenza season was mild and late-peaking. Parameter estimates generated through fitting were consistent in the 2 largest jurisdictions (Ontario and Alberta) and with pooled data including Nova Scotia counts (R0 approximately 1.4 for all fits). Lower R0 estimates were generated in Nova Scotia and Ottawa. Final size projections that made use of complete time series were accurate to within 6% of true final sizes, but final size was projected less accurately using pre-peak data. Projections of epidemic peaks stabilized before the true epidemic peak, but these were persistently early (~2 weeks) relative to the true peak. Conclusions A simple, 2-parameter influenza model provided reasonably accurate real-time projections of influenza seasonal dynamics in an atypically late, mild influenza season. Challenges are similar to those seen with more complex forecasting methodologies. Future work includes identification of seasonal characteristics associated with variability in model performance. PMID:29497629
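The IDEA model's incidence in epidemic generation t is commonly written I(t) = [R0/(1+d)^t]^t, with R0 the basic reproduction number and d the control parameter. A fitting sketch on synthetic counts (parameter values and noise are illustrative, not the Canadian surveillance data):

```python
import numpy as np
from scipy.optimize import curve_fit

def idea_incidence(t, R0, d):
    """IDEA model: incident cases in epidemic generation t."""
    return (R0 / (1.0 + d)**t)**t

t = np.arange(1, 16)                  # epidemic generations
rng = np.random.default_rng(1)
true_R0, true_d = 1.4, 0.02           # illustrative values
counts = idea_incidence(t, true_R0, true_d) * rng.lognormal(0.0, 0.05, t.size)

# Two-parameter fit; bounds keep the optimizer in the physical region
(fit_R0, fit_d), _ = curve_fit(idea_incidence, t, counts,
                               p0=[1.5, 0.01],
                               bounds=([1.0, 0.0], [3.0, 0.5]))
```

Refitting as each new generation of counts arrives, as the study did weekly, shows how peak and final-size projections stabilize over the season.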
Zhai, Xuetong; Chakraborty, Dev P
2017-06-01
The objective was to design and implement a bivariate extension to the contaminated binormal model (CBM) to fit paired receiver operating characteristic (ROC) datasets (possibly degenerate) with proper ROC curves. Paired datasets yield two correlated ratings per case. Degenerate datasets have no interior operating points, and proper ROC curves do not inappropriately cross the chance diagonal. The existing method, developed more than three decades ago, utilizes a bivariate extension to the binormal model, implemented in the CORROC2 software, which yields improper ROC curves and cannot fit degenerate datasets. CBM can fit proper ROC curves to unpaired (i.e., yielding one rating per case) and degenerate datasets, and there is a clear scientific need to extend it to handle paired datasets. In CBM, nondiseased cases are modeled by a probability density function (pdf) consisting of a unit variance peak centered at zero. Diseased cases are modeled with a mixture distribution whose pdf consists of two unit variance peaks: one centered at positive μ with integrated probability α, the mixing fraction parameter, corresponding to the fraction of diseased cases where the disease was visible to the radiologist, and one centered at zero, with integrated probability (1-α), corresponding to disease that was not visible. It is shown that: (a) for nondiseased cases the bivariate extension is a unit-variance bivariate normal distribution centered at (0,0) with a specified correlation ρ1; (b) for diseased cases the bivariate extension is a mixture distribution with four peaks, corresponding to disease not visible in either condition, disease visible in only one condition (contributing two peaks), and disease visible in both conditions. An expression for the likelihood function is derived. A maximum likelihood estimation (MLE) algorithm, CORCBM, was implemented in the R programming language that yields parameter estimates, the covariance matrix of the parameters, and other statistics.
A limited simulation validation of the method was performed. CORCBM and CORROC2 were applied to two datasets containing nine readers each contributing paired interpretations. CORCBM successfully fitted the data for all readers, whereas CORROC2 failed to fit a degenerate dataset. All fits were visually reasonable. All CORCBM fits were proper, whereas all CORROC2 fits were improper. CORCBM and CORROC2 were in agreement (a) in declaring only one of the nine readers as having significantly different performances in the two modalities; (b) in estimating higher correlations for diseased cases than for nondiseased ones; and (c) in finding that the intermodality correlation estimates for nondiseased cases were consistent between the two methods. All CORCBM fits yielded higher area under curve (AUC) than the CORROC2 fits, consistent with the fact that a proper ROC model like CORCBM is based on a likelihood-ratio-equivalent decision variable, and consequently yields higher performance than the binormal model-based CORROC2. The method gave satisfactory fits to four simulated datasets. CORCBM is a robust method for fitting paired ROC datasets, always yielding proper ROC curves, and able to fit degenerate datasets. © 2017 American Association of Physicists in Medicine.
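The univariate CBM ROC curve implied by the mixture described above can be traced directly from the model parameters: sweeping a threshold ζ gives FPF = Φ(−ζ) and TPF = αΦ(μ−ζ) + (1−α)Φ(−ζ). A sketch with illustrative (not fitted) values of μ and α:

```python
import numpy as np
from scipy.stats import norm

def cbm_operating_points(mu, alpha, zeta):
    """CBM ROC: nondiseased ratings ~ N(0,1); diseased ratings are the
    mixture alpha*N(mu,1) + (1-alpha)*N(0,1)."""
    fpf = norm.sf(zeta)
    tpf = alpha * norm.sf(zeta - mu) + (1.0 - alpha) * norm.sf(zeta)
    return fpf, tpf

zeta = np.linspace(-5.0, 8.0, 400)
fpf, tpf = cbm_operating_points(mu=2.0, alpha=0.7, zeta=zeta)

# AUC by the trapezoid rule (fpf decreases with zeta, hence the sign flip);
# the analytic value is alpha*Phi(mu/sqrt(2)) + (1-alpha)/2
auc_numeric = np.sum(0.5 * (tpf[:-1] + tpf[1:]) * -np.diff(fpf))
auc_analytic = 0.7 * norm.cdf(2.0 / np.sqrt(2.0)) + 0.3 * 0.5
```

Because TPF ≥ FPF for every ζ, the curve never crosses the chance diagonal, which is the "proper ROC" property the abstract emphasizes.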
Johnson, Leigh F; Geffen, Nathan
2016-03-01
Different models of sexually transmitted infections (STIs) can yield substantially different conclusions about STI epidemiology, and it is important to understand how and why models differ. Frequency-dependent models make the simplifying assumption that STI incidence is proportional to STI prevalence in the population, whereas network models calculate STI incidence more realistically by classifying individuals according to their partners' STI status. We assessed a deterministic frequency-dependent model approximation to a microsimulation network model of STIs in South Africa. Sexual behavior and demographic parameters were identical in the 2 models. Six STIs were simulated using each model: HIV, herpes, syphilis, gonorrhea, chlamydia, and trichomoniasis. For all 6 STIs, the frequency-dependent model estimated a higher STI prevalence than the network model, with the difference between the 2 models being relatively large for the curable STIs. When the 2 models were fitted to the same STI prevalence data, the best-fitting parameters differed substantially between models, with the frequency-dependent model suggesting more immunity and lower transmission probabilities. The fitted frequency-dependent model estimated that the effects of a hypothetical elimination of concurrent partnerships and a reduction in commercial sex were both smaller than estimated by the fitted network model, whereas the latter model estimated a smaller impact of a reduction in unprotected sex in spousal relationships. The frequency-dependent assumption is problematic when modeling short-term STIs. Frequency-dependent models tend to underestimate the importance of high-risk groups in sustaining STI epidemics, while overestimating the importance of long-term partnerships and low-risk groups.
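The frequency-dependent assumption discussed above, incidence proportional to prevalence, can be illustrated with a minimal SIS-type model in which the force of infection on susceptibles is β times the infected fraction; the parameters are illustrative, not those of the South African models:

```python
def simulate_frequency_dependent(beta, gamma, i0, years, dt=0.01):
    """Frequency-dependent SIS model, forward Euler:
    di/dt = beta*i*(1 - i) - gamma*i.
    beta: transmission rate per year; gamma: recovery rate per year."""
    steps = int(years / dt)
    i = i0
    for _ in range(steps):
        di = beta * i * (1.0 - i) - gamma * i
        i += dt * di
    return i

# Endemic prevalence approaches 1 - gamma/beta when beta > gamma
prev = simulate_frequency_dependent(beta=2.0, gamma=1.0, i0=0.01, years=50)
```

A network model replaces the β·i(1−i) term with transmission along explicit partnerships, which is why, as the abstract notes, the two approaches can disagree markedly for short-duration curable STIs.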
Fong, Youyi; Yu, Xuesong
2016-01-01
Many modern serial dilution assays are based on fluorescence intensity (FI) readouts. We study optimal transformation model choice for fitting five parameter logistic curves (5PL) to FI-based serial dilution assay data. We first develop a generalized least squares-pseudolikelihood type algorithm for fitting heteroscedastic logistic models. Next we show that the 5PL and log 5PL functions can approximate each other well. We then compare four 5PL models with different choices of log transformation and variance modeling through a Monte Carlo study and real data. Our findings are that the optimal choice depends on the intended use of the fitted curves. PMID:27642502
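The 5PL function and a variance-weighted fit can be sketched as below. The parameterization, the use of `curve_fit` with a relative-error `sigma` in place of the paper's generalized least squares-pseudolikelihood algorithm, and all values are illustrative assumptions:

```python
import numpy as np
from scipy.optimize import curve_fit

def fivepl(x, a, b, c, d, g):
    """Five-parameter logistic: d + (a - d) / (1 + (x/c)**b)**g."""
    return d + (a - d) / (1.0 + (x / c)**b)**g

conc = np.logspace(-2, 3, 12)                    # serial dilution series
rng = np.random.default_rng(3)
true_fi = fivepl(conc, a=50.0, b=1.2, c=5.0, d=30000.0, g=0.8)
# FI-like readouts with multiplicative (heteroscedastic) noise
fi = true_fi * (1.0 + rng.normal(0.0, 0.02, conc.size))

# Weighting by a noise level proportional to the signal approximates the
# heteroscedastic variance model discussed in the abstract
popt, _ = curve_fit(fivepl, conc, fi,
                    p0=[100.0, 1.0, 1.0, 25000.0, 1.0],
                    sigma=0.02 * fi, absolute_sigma=True, maxfev=20000)
fitted = fivepl(conc, *popt)
```

Fitting the same data after a log transformation of the readouts, the alternative the paper evaluates, changes which concentrations dominate the loss and hence which model variant is optimal.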
A model of the endogenous glucose balance incorporating the characteristics of glucose transporters.
Arleth, T; Andreassen, S; Federici, M O; Benedetti, M M
2000-07-01
This paper describes the development and preliminary test of a model of the endogenous glucose balance that incorporates the characteristics of the glucose transporters GLUT1, GLUT3 and GLUT4. The model is parameterized with nine parameters that are estimated from data in the literature on the hepatic and endogenous balances at various combinations of blood glucose and insulin levels. The ability of the resulting endogenous balance model to fit blood glucose measurements was tested on 20 patients. The fit obtained with this model compared favorably with the fit obtained with the endogenous balance currently incorporated in the DIAS system.
Functional models for colloid retention in porous media at the triple line.
Dathe, Annette; Zevi, Yuniati; Richards, Brian K; Gao, Bin; Parlange, J-Yves; Steenhuis, Tammo S
2014-01-01
Spectral confocal microscope visualizations of microsphere movement in unsaturated porous media showed that attachment at the air-water-solid (AWS) interface was an important retention mechanism. These visualizations can aid in resolving the functional form of the retention rates of colloids at the AWS interface. In this study, soil adsorption isotherm equations were adapted by replacing the chemical concentration in the water, as the independent variable, with the cumulative number of colloids passing by. In order of increasing number of fitted parameters, the functions tested were the Langmuir adsorption isotherm, the Logistic distribution, and the Weibull distribution. The functions were fitted to colloid concentrations obtained from time series of images acquired with a spectral confocal microscope for three experiments in which either plain or carboxylated polystyrene latex microspheres were pulsed through a small flow chamber filled with cleaned quartz sand. Both moving and retained colloids were quantified over time. In fitting the models to the data, the agreement improved with increasing number of model parameters. The Weibull distribution gave the best fit overall. The Logistic distribution did not fit the initial retention of microspheres well, but otherwise the fit was good. The Langmuir isotherm only fitted the longest time series well. The results can be explained as follows: initially, when colloids are first introduced, the rate of retention is low. Once colloids are at the AWS interface, they act as anchor points for other colloids to attach, thereby increasing the retention rate as clusters form. Once the available attachment sites diminish, the retention rate decreases.
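Comparing retention functions of increasing flexibility, as above, can be sketched by fitting a Langmuir-type and a Weibull-type curve to synthetic retention data; the functional forms follow the families named in the abstract, but all values are illustrative, not the flow-chamber measurements:

```python
import numpy as np
from scipy.optimize import curve_fit

def langmuir(c, smax, k):
    """Langmuir-type retention vs. cumulative colloids passed (concave)."""
    return smax * c / (k + c)

def weibull(c, smax, lam, shape):
    """Weibull-type retention: slow onset, then accelerating attachment."""
    return smax * (1.0 - np.exp(-(c / lam)**shape))

c = np.linspace(1.0, 500.0, 60)        # cumulative colloids passing by
rng = np.random.default_rng(5)
truth = weibull(c, 100.0, 150.0, 2.0)  # S-shaped retention curve
data = truth + rng.normal(0.0, 2.0, c.size)

pL, _ = curve_fit(langmuir, c, data, p0=[100.0, 100.0], maxfev=10000)
pW, _ = curve_fit(weibull, c, data, p0=[90.0, 100.0, 1.5], maxfev=10000)

rss_L = np.sum((langmuir(c, *pL) - data)**2)
rss_W = np.sum((weibull(c, *pW) - data)**2)
# The three-parameter Weibull captures the sigmoidal onset that the
# concave Langmuir form cannot
```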
NASA Astrophysics Data System (ADS)
Koma, Zsófia; Székely, Balázs; Dorninger, Peter; Kovács, Gábor
2013-04-01
Due to the need for quantitative analysis of various geomorphological landforms, the importance of fast and effective automatic processing of different kinds of digital terrain models (DTMs) is increasing. The robust plane fitting (segmentation) method, developed at the Institute of Photogrammetry and Remote Sensing at Vienna University of Technology, allows the processing of large 3D point clouds (containing millions of points), performs automatic detection of the planar elements of the surface via parameter estimation, and provides a considerable data reduction for the modeled area. Its geoscientific application allows the modeling of different landforms with the fitted planes as planar facets. In our study we aim to analyze the resulting set of fitted planes in terms of accuracy, model reliability and dependence on the input parameters. To this end we used DTMs of different scales and accuracy: (1) an artificially generated 3D point cloud model with different magnitudes of error; (2) LiDAR data with 0.1 m error; (3) the SRTM (Shuttle Radar Topography Mission) DTM database with 5 m accuracy; (4) DTM data from the HRSC (High Resolution Stereo Camera) of the planet Mars with 10 m error. The analysis of the simulated 3D point cloud with normally distributed errors comprised different kinds of statistical tests (for example Chi-square and Kolmogorov-Smirnov tests) applied to the residual values, and evaluation of the dependence of the residual values on the input parameters. These tests were repeated on the real data, supplemented with a categorization of the segmentation results depending on the input parameters, model reliability and the geomorphological meaning of the fitted planes.
The simulation results show that for the artificially generated data with normally distributed errors the null hypothesis cannot be rejected, the residual value distribution also being normal, but in the tests on the real data the residual value distribution is often mixed or unknown. The residual values are found to depend mainly on two input parameters (standard deviation and maximum point-plane distance, both defining distance thresholds for assigning points to a segment), and the curvature of the surface affected the distributions most. The results of the analysis helped to decide which parameter set is best for further modelling and provides the highest accuracy. With these results in mind, quasi-automatic modelling of planar (for example plateau-like) features became more reliable and often more accurate. These studies were carried out partly in the framework of the TMIS.ascrea project (Nr. 2001978) financed by the Austrian Research Promotion Agency (FFG); the contribution of ZsK was partly funded by Campus Hungary Internship TÁMOP-424B1.
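The residual tests described above can be sketched on a synthetic planar point cloud: fit a plane by least squares, then apply a Kolmogorov-Smirnov test to the standardized residuals (the noise level and plane coefficients are illustrative, not the TMIS.ascrea data):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(9)
n = 2000
# Synthetic planar point cloud z = 0.3x + 0.5y + 2 with Gaussian noise
x = rng.uniform(0.0, 100.0, n)
y = rng.uniform(0.0, 100.0, n)
sigma = 0.1                          # LiDAR-like vertical error, metres
z = 0.3 * x + 0.5 * y + 2.0 + rng.normal(0.0, sigma, n)

# Least-squares plane fit z = a*x + b*y + c
A = np.column_stack([x, y, np.ones(n)])
coef, *_ = np.linalg.lstsq(A, z, rcond=None)
residuals = z - A @ coef

# Kolmogorov-Smirnov test of the standardized residuals against N(0,1)
ks_stat, p_value = stats.kstest(residuals / residuals.std(ddof=3), 'norm')
```

On real terrain the residuals pick up unmodeled surface curvature, which is why, as reported above, their distribution is often mixed or unknown rather than normal.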
18F-FLT uptake kinetics in head and neck squamous cell carcinoma: a PET imaging study.
Liu, Dan; Chalkidou, Anastasia; Landau, David B; Marsden, Paul K; Fenwick, John D
2014-04-01
To analyze the kinetics of 3'-deoxy-3'-[18F]-fluorothymidine (18F-FLT) uptake by head and neck squamous cell carcinomas and involved nodes imaged using positron emission tomography (PET). Two- and three-tissue compartment models were fitted to 12 tumor time-activity curves (TACs) obtained for 6 structures (tumors or involved nodes) imaged in ten dynamic PET studies of 1 h duration, carried out for five patients. The ability of the models to describe the data was assessed using a runs test, the Akaike information criterion (AIC) and leave-one-out cross-validation. To generate parametric maps the models were also fitted to TACs of individual voxels. Correlations between maps of different parameters were characterized using Pearson's r coefficient; in particular the phosphorylation rate-constants k3-2tiss and k5 of the two- and three-tissue models were studied alongside the flux parameters KFLT-2tiss and KFLT of these models, and standardized uptake values (SUV). A methodology based on expectation-maximization clustering and the Bayesian information criterion ("EM-BIC clustering") was used to distil the information from noisy parametric images. Fits of two-tissue models 2C3K and 2C4K and three-tissue models 3C5K and 3C6K, comprising three, four, five, and six rate-constants, respectively, pass the runs test for 4, 8, 10, and 11 of the 12 tumor TACs. The three-tissue models have lower AIC and cross-validation scores for nine of the 12 tumors. Overall the 3C6K model has the lowest AIC and cross-validation scores, and its fitted parameter values are of the same orders of magnitude as literature estimates. Maps of KFLT and KFLT-2tiss are strongly correlated (r = 0.85) and also correlate closely with SUV maps (r = 0.72 for KFLT-2tiss, 0.64 for KFLT). Phosphorylation rate-constant maps are moderately correlated with flux maps (r = 0.48 for k3-2tiss vs KFLT-2tiss and r = 0.68 for k5 vs KFLT); however, neither phosphorylation rate-constant correlates significantly with SUV.
EM-BIC clustering reduces the parametric maps to a small number of levels: on average 5.8, 3.5, 3.4, and 1.4 for KFLT-2tiss, KFLT, k3-2tiss, and k5. This large simplification is potentially useful for radiotherapy dose-painting, but demonstrates the high noise in some maps. Statistical simulations show that voxel-level noise degrades TACs generated from the 3C6K model sufficiently that the average AIC score, parameter bias, and total uncertainty of 2C4K model fits are similar to those of 3C6K fits, whereas at the whole-tumor level the scores are lower for 3C6K fits. For the patients studied here, whole-tumor FLT uptake time-courses are represented better overall by a three-tissue than by a two-tissue model. EM-BIC clustering simplifies noisy parametric maps, providing the best description of the underlying information they contain, and is potentially useful for radiotherapy dose-painting. However, the clustering highlights the large degree of noise present in maps of the phosphorylation rate-constants k5 and k3-2tiss, which are conceptually tightly linked to cellular proliferation. Methods must be found to make these maps more robust, either by constraining other model parameters or by modifying dynamic imaging protocols. © 2014 American Association of Physicists in Medicine.
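The model-selection criterion used above can be sketched in a few lines. A minimal illustration (synthetic residuals, not the paper's fits): for least-squares fits, AIC reduces to N·ln(RSS/N) + 2k up to an additive constant, so the six-rate-constant 3C6K model is preferred over 2C4K only when its extra parameters buy enough reduction in the residual sum of squares.

```python
import numpy as np

def aic(residuals, n_params):
    """AIC for a least-squares fit: N * ln(RSS / N) + 2k (constants dropped)."""
    r = np.asarray(residuals, dtype=float)
    n = r.size
    return n * np.log(np.sum(r ** 2) / n) + 2 * n_params

# Illustrative residuals for 2C4K (4 rate constants) and 3C6K (6): here the
# three-tissue fit leaves residuals three times smaller at every time point.
res_2c4k = np.linspace(-1.5, 1.5, 60)
res_3c6k = res_2c4k / 3.0
best = "3C6K" if aic(res_3c6k, 6) < aic(res_2c4k, 4) else "2C4K"
```

The 2·(6 − 4) = 4 penalty is easily outweighed by 60·ln(1/9) here, so the larger model wins; with only marginally smaller residuals the penalty would flip the decision.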
Liegl, Gregor; Wahl, Inka; Berghöfer, Anne; Nolte, Sandra; Pieh, Christoph; Rose, Matthias; Fischer, Felix
2016-03-01
To investigate the validity of a common depression metric in independent samples. We applied a common metrics approach based on item-response theory for measuring depression to four German-speaking samples that completed the Patient Health Questionnaire (PHQ-9). We compared the PHQ item parameters reported for this common metric to reestimated item parameters derived from fitting a generalized partial credit model solely to the PHQ-9 items. We calibrated the new model on the same scale as the common metric using two approaches (estimation with shifted prior and Stocking-Lord linking). By fitting a mixed-effects model and using Bland-Altman plots, we investigated the agreement between latent depression scores resulting from the different estimation models. We found different item parameters across samples and estimation methods. Although differences in latent depression scores between different estimation methods were statistically significant, these were clinically irrelevant. Our findings provide evidence that it is possible to estimate latent depression scores by using the item parameters from a common metric instead of reestimating and linking a model. The use of common metric parameters is simple, for example, using a Web application (http://www.common-metrics.org) and offers a long-term perspective to improve the comparability of patient-reported outcome measures. Copyright © 2016 Elsevier Inc. All rights reserved.
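A sketch of what "using the item parameters from a common metric" means operationally: latent scores are computed with the item parameters held fixed rather than reestimated. The generalized partial credit model (GPCM) category probabilities and an expected-a-posteriori (EAP) scorer can be written compactly; all item parameters below are hypothetical, not the calibrated PHQ-9 values.

```python
import numpy as np

def gpcm_probs(theta, a, b):
    """GPCM category probabilities for one item: discrimination `a`,
    step difficulties `b` (length m), category scores 0..m."""
    cum = np.concatenate(([0.0], np.cumsum(a * (theta - b))))
    e = np.exp(cum - cum.max())          # stabilized softmax
    return e / e.sum()

def eap_score(responses, a_params, b_params, grid=np.linspace(-4, 4, 81)):
    """EAP estimate of theta under a standard-normal prior, with the
    (hypothetical) item parameters held fixed, as when scoring on a
    common metric instead of recalibrating."""
    prior = np.exp(-grid ** 2 / 2.0)
    like = np.ones_like(grid)
    for resp, a, b in zip(responses, a_params, b_params):
        like *= np.array([gpcm_probs(th, a, b)[resp] for th in grid])
    post = prior * like
    return np.sum(grid * post) / np.sum(post)

# Hypothetical parameters for three 4-category items (scores 0-3).
a_params = [1.2, 0.9, 1.5]
b_params = [np.array([-1.0, 0.0, 1.0])] * 3
theta_low = eap_score([0, 0, 0], a_params, b_params)   # all-lowest responses
theta_high = eap_score([3, 3, 3], a_params, b_params)  # all-highest responses
```

With fixed parameters, scoring is a pure lookup-and-integrate step, which is why a web application suffices to place new samples on the common scale.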
Parameter Estimation and Model Selection in Computational Biology
Lillacci, Gabriele; Khammash, Mustafa
2010-01-01
A central challenge in computational modeling of biological systems is the determination of the model parameters. Typically, only a fraction of the parameters (such as kinetic rate constants) are experimentally measured, while the rest are often fitted. The fitting process is usually based on experimental time course measurements of observables, which are used to assign parameter values that minimize some measure of the error between these measurements and the corresponding model prediction. The measurements, which can come from immunoblotting assays, fluorescent markers, etc., tend to be very noisy and taken at a limited number of time points. In this work we present a new approach to the problem of parameter selection of biological models. We show how one can use a dynamic recursive estimator, known as extended Kalman filter, to arrive at estimates of the model parameters. The proposed method is as follows. First, we use a variation of the Kalman filter that is particularly well suited to biological applications to obtain a first guess for the unknown parameters. Second, we employ an a posteriori identifiability test to check the reliability of the estimates. Finally, we solve an optimization problem to refine the first guess in case it should not be accurate enough. The final estimates are guaranteed to be statistically consistent with the measurements. Furthermore, we show how the same tools can be used to discriminate among alternate models of the same biological process. We demonstrate these ideas by applying our methods to two examples, namely a model of the heat shock response in E. coli, and a model of a synthetic gene regulation system. The methods presented are quite general and may be applied to a wide class of biological systems where noisy measurements are used for parameter estimation or model selection. PMID:20221262
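A minimal sketch of the joint state/parameter EKF idea, using a toy scalar decay model with an invented rate constant (not the paper's heat-shock or gene-regulation systems): the unknown parameter is appended to the state vector and updated by the same predict/correct cycle as the state itself.

```python
import numpy as np

# Toy system dx/dt = -k*x with unknown rate constant k, observed with noise.
dt, k_true = 0.05, 0.8
rng = np.random.default_rng(1)
xs = 5.0 * np.exp(-k_true * dt * np.arange(200))
ys = xs + rng.normal(0.0, 0.05, xs.size)          # noisy measurements

z = np.array([ys[0], 0.3])                         # augmented state [x, k]; poor k guess
P = np.diag([0.1, 1.0])                            # state covariance
Q = np.diag([1e-6, 1e-6])                          # process noise
R = 0.05 ** 2                                      # measurement noise variance
H = np.array([[1.0, 0.0]])                         # we observe x only

for y in ys[1:]:
    x, k = z
    z = np.array([x - dt * k * x, k])              # predict (Euler step; k is constant)
    F = np.array([[1.0 - dt * k, -dt * x],         # Jacobian of the prediction
                  [0.0, 1.0]])
    P = F @ P @ F.T + Q
    S = H @ P @ H.T + R                            # innovation covariance
    K = (P @ H.T) / S                              # Kalman gain (2x1)
    z = z + (K * (y - z[0])).ravel()               # correct state and parameter
    P = (np.eye(2) - K @ H) @ P

k_hat = z[1]                                       # converges toward k_true
```

The cross-covariance built up by the Jacobian term −dt·x is what lets innovations in the observed state correct the unobserved parameter.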
Application of the Hartmann-Tran profile to analysis of H2O spectra
NASA Astrophysics Data System (ADS)
Lisak, D.; Cygan, A.; Bermejo, D.; Domenech, J. L.; Hodges, J. T.; Tran, H.
2015-10-01
The Hartmann-Tran profile (HTP), which has been recently recommended as a new standard in spectroscopic databases, is used to analyze spectra of several lines of H2O diluted in N2, SF6, and in pure H2O. This profile accounts for various mechanisms affecting the line-shape and can be easily computed in terms of combinations of the complex Voigt profile. A multi-spectrum fitting procedure is implemented to simultaneously analyze spectra of H2O transitions acquired at different pressures. Multi-spectrum fitting of the HTP to a theoretical model confirms that this profile provides an accurate description of H2O line-shapes in terms of residuals and accuracy of fitted parameters. This profile and its limiting cases are also fit to measured spectra for three H2O lines in different vibrational bands. The results show that it is possible to obtain accurate HTP line-shape parameters when measured spectra have a sufficiently high signal-to-noise ratio and span a broad range of collisional-to-Doppler line widths. Systematic errors in the line area and differences in retrieved line-shape parameters caused by the overly simplistic line-shape models are quantified. Limitations of the quadratic speed-dependence model used in the HTP are also demonstrated in the case of an SF6-broadened H2O line, which exhibits a strongly asymmetric line-shape.
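The building block mentioned above, the complex Voigt profile, can be evaluated through the Faddeeva function w(z); the HTP itself is then assembled from combinations of such terms. A minimal sketch of the Voigt limiting case only (the full HTP expression, with its speed-dependence and velocity-changing parameters, is omitted), using illustrative widths:

```python
import numpy as np
from scipy.special import wofz  # Faddeeva function w(z) = exp(-z^2) erfc(-iz)

def voigt_profile(nu, nu0, sigma, gamma):
    """Area-normalized Voigt profile: a Gaussian of standard deviation
    `sigma` (Doppler) convolved with a Lorentzian of HWHM `gamma`
    (collisional), evaluated as Re[w(z)] via the Faddeeva function."""
    z = ((nu - nu0) + 1j * gamma) / (sigma * np.sqrt(2.0))
    return np.real(wofz(z)) / (sigma * np.sqrt(2.0 * np.pi))

nu = np.linspace(-100.0, 100.0, 40001)   # detuning grid (arbitrary units)
line = voigt_profile(nu, 0.0, sigma=0.5, gamma=0.3)
dnu = nu[1] - nu[0]
area = line.sum() * dnu                  # ~1: the profile is area-normalized
```

The wide grid matters: the Lorentzian wings decay only as 1/Δν², so truncating the integration range too early produces exactly the kind of systematic line-area error the abstract quantifies.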
Dark matter and pulsar model constraints from Galactic center Fermi/LAT γ-ray observations
NASA Astrophysics Data System (ADS)
Gordon, Chris; Macias, Oscar
2014-05-01
Employing Fermi/LAT γ-ray observations, several independent groups have found excess extended γ-ray emission at the Galactic center (GC). Both annihilating dark matter (DM) and a population of ~10³ unresolved millisecond pulsars (MSPs) are regarded as well-motivated possible explanations. However, there are significant uncertainties in the diffuse Galactic background at the GC. We have performed a re-evaluation of these two models for the extended γ-ray source at the GC by accounting for the systematic uncertainties of the Galactic diffuse emission model. We also marginalize over point source and diffuse background parameters in the region of interest. We show that the excess emission is significantly more extended than a point source. We find that the DM (or pulsar population) signal is larger than the systematic errors and therefore proceed to determine the sectors of parameter space that provide an acceptable fit to the data. We found that a population of several thousand MSPs with parameters consistent with the average spectral shape of Fermi/LAT measured MSPs was able to fit the GC excess emission. For DM, we found that a pure τ+τ- annihilation channel is not a good fit to the data, but a mixture of τ+τ- and bb¯ provides an adequate fit.
Dark matter and pulsar model constraints from Galactic Center Fermi-LAT gamma-ray observations
NASA Astrophysics Data System (ADS)
Gordon, Chris; Macías, Oscar
2013-10-01
Employing Fermi-LAT gamma-ray observations, several independent groups have found excess extended gamma-ray emission at the Galactic Center (GC). Both annihilating dark matter (DM) and a population of ~10³ unresolved millisecond pulsars (MSPs) are regarded as well-motivated possible explanations. However, there are significant uncertainties in the diffuse Galactic background at the GC. We have performed a re-evaluation of these two models for the extended gamma-ray source at the GC by accounting for the systematic uncertainties of the Galactic diffuse emission model. We also marginalize over point-source and diffuse background parameters in the region of interest. We show that the excess emission is significantly more extended than a point source. We find that the DM (or pulsar-population) signal is larger than the systematic errors and therefore proceed to determine the sectors of parameter space that provide an acceptable fit to the data. We find that a population of 1000-2000 MSPs with parameters consistent with the average spectral shape of Fermi-LAT measured MSPs is able to fit the GC excess emission. For DM, we find that a pure τ+τ- annihilation channel is not a good fit to the data. But a mixture of τ+τ- and bb¯ with a ⟨σv⟩ of order the thermal relic value and a DM mass of around 20 to 60 GeV provides an adequate fit.
Discovery and characterization of 3000+ main-sequence binaries from APOGEE spectra
NASA Astrophysics Data System (ADS)
El-Badry, Kareem; Ting, Yuan-Sen; Rix, Hans-Walter; Quataert, Eliot; Weisz, Daniel R.; Cargile, Phillip; Conroy, Charlie; Hogg, David W.; Bergemann, Maria; Liu, Chao
2018-05-01
We develop a data-driven spectral model for identifying and characterizing spatially unresolved multiple-star systems and apply it to APOGEE DR13 spectra of main-sequence stars. Binaries and triples are identified as targets whose spectra can be significantly better fit by a superposition of two or three model spectra, drawn from the same isochrone, than any single-star model. From an initial sample of ˜20 000 main-sequence targets, we identify ˜2500 binaries in which both the primary and secondary stars contribute detectably to the spectrum, simultaneously fitting for the velocities and stellar parameters of both components. We additionally identify and fit ˜200 triple systems, as well as ˜700 velocity-variable systems in which the secondary does not contribute detectably to the spectrum. Our model simplifies the process of simultaneously fitting single- or multi-epoch spectra with composite models and does not depend on a velocity offset between the two components of a binary, making it sensitive to traditionally undetectable systems with periods of hundreds or thousands of years. In agreement with conventional expectations, almost all the spectrally identified binaries with measured parallaxes fall above the main sequence in the colour-magnitude diagram. We find excellent agreement between spectrally and dynamically inferred mass ratios for the ˜600 binaries in which a dynamical mass ratio can be measured from multi-epoch radial velocities. We obtain full orbital solutions for 64 systems, including 14 close binaries within hierarchical triples. We make available catalogues of stellar parameters, abundances, mass ratios, and orbital parameters.
2014-10-01
[Fragmented figure-caption text: best-fit linear regressions of the parameters (a) Hseg, (b) QL, (c) γ0, and (d) Γb0 at 1 h, with scatter in the data points due to variation in the other parameters; a white square marks the experimental data point used to fit the solute concentration x0 for the nanocrystalline Fe–Zr system.]
Dolejs, Josef; Marešová, Petra
2017-01-01
The answer to the question "At what age does aging begin?" is tightly related to the question "Where is the onset of mortality increase with age?" Age affects mortality rates from all diseases differently than it affects mortality rates from nonbiological causes. Mortality increase with age in adult populations has been modeled by many authors, and little attention has been given to mortality decrease with age after birth. Nonbiological causes are excluded, and the category "all diseases" is studied. It is analyzed in Denmark, Finland, Norway, and Sweden during the period 1994-2011, and all possible models are screened. Age trajectories of mortality are analyzed separately: before the age category where mortality reaches its minimal value and after the age category. Resulting age trajectories from all diseases showed a strong minimum, which was hidden in total mortality. The inverse proportion between mortality and age fitted in 54 of 58 cases before mortality minimum. The Gompertz model with two parameters fitted as mortality increased with age in 17 of 58 cases after mortality minimum, and the Gompertz model with a small positive quadratic term fitted data in the remaining 41 cases. The mean age where mortality reached minimal value was 8 (95% confidence interval 7.05-8.95) years. The figures depict an age where the human population has a minimal risk of death from biological causes. Inverse proportion and the Gompertz model fitted data on both sides of the mortality minimum, and three parameters determined the shape of the age-mortality trajectory. Life expectancy should be determined by the two standard Gompertz parameters and also by the single parameter in the model c/x. All-disease mortality represents an alternative tool to study the impact of age. All results are based on published data.
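Both branches of the fitted age-mortality trajectory are linear after a transformation, which is all a least-squares fit needs: the Gompertz model μ(x) = a·exp(b·x) is linear in log mortality, and the pre-minimum branch μ(x) = c/x is linear in 1/x. A sketch with noiseless synthetic rates (hypothetical parameter values, not the Nordic data):

```python
import numpy as np

# Gompertz branch (after the mortality minimum): fit by log-linear regression.
ages = np.arange(30, 91, 5, dtype=float)
a_true, b_true = 2e-5, 0.095                 # invented Gompertz parameters
rates = a_true * np.exp(b_true * ages)

b_fit, log_a_fit = np.polyfit(ages, np.log(rates), 1)   # slope, intercept
a_fit = np.exp(log_a_fit)

# Inverse-proportion branch (before the minimum): mu = c / x is linear in 1/x.
young = np.arange(1.0, 8.0)
c_true = 0.004                               # invented constant
c_fit = np.polyfit(1.0 / young, c_true / young, 1)[0]   # slope recovers c
```

With real data the two fits would be run on either side of the observed mortality minimum (around age 8 in the abstract), giving the three parameters that determine the trajectory's shape.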
ERIC Educational Resources Information Center
Holster, Trevor A.; Lake, J.
2016-01-01
Stewart questioned Beglar's use of Rasch analysis of the Vocabulary Size Test (VST) and advocated the use of 3-parameter logistic item response theory (3PLIRT) on the basis that it models a non-zero lower asymptote for items, often called a "guessing" parameter. In support of this theory, Stewart presented fit statistics derived from…
Item Vector Plots for the Multidimensional Three-Parameter Logistic Model
ERIC Educational Resources Information Center
Bryant, Damon; Davis, Larry
2011-01-01
This brief technical note describes how to construct item vector plots for dichotomously scored items fitting the multidimensional three-parameter logistic model (M3PLM). As multidimensional item response theory (MIRT) shows promise of being a very useful framework in the test development life cycle, graphical tools that facilitate understanding…
Optimizing the Determination of Roughness Parameters for Model Urban Canopies
NASA Astrophysics Data System (ADS)
Huq, Pablo; Rahman, Auvi
2018-05-01
We present an objective optimization procedure to determine the roughness parameters for very rough boundary-layer flow over model urban canopies. For neutral stratification the mean velocity profile above a model urban canopy is described by the logarithmic law together with the set of roughness parameters of displacement height d, roughness length z_0, and friction velocity u_*. Traditionally, values of these roughness parameters are obtained by fitting the logarithmic law through (all) the data points comprising the velocity profile. The new procedure generates unique velocity profiles from subsets or combinations of the data points of the original velocity profile, after which all possible profiles are examined. Each of the generated profiles is fitted to the logarithmic law for a sequence of values of d, with the representative value of d obtained from the minima of the summed least-squares errors for all the generated profiles. The representative values of z_0 and u_* are identified by the peak in the bivariate histogram of z_0 and u_*. The methodology has been verified against laboratory datasets of flow above model urban canopies.
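The fitting idea described above can be sketched as a grid search over the displacement height: for each trial d, the logarithmic law u(z) = (u_*/κ)·ln((z − d)/z_0) is linear in ln(z − d), so u_* and z_0 follow from a least-squares line, and the best d minimizes the residual sum of squares. This sketch uses one synthetic profile rather than the paper's ensemble of sub-profiles and bivariate histogram:

```python
import numpy as np

KAPPA = 0.4  # von Karman constant

# Synthetic log-law profile with invented roughness parameters.
u_star, z0, d_true = 0.3, 0.02, 0.5
z = np.linspace(1.0, 5.0, 20)                    # measurement heights (m)
u = (u_star / KAPPA) * np.log((z - d_true) / z0)

best = None
for d in np.arange(0.0, 0.95, 0.01):             # trial displacement heights
    x = np.log(z - d)
    slope, intercept = np.polyfit(x, u, 1)       # u = slope*ln(z-d) + intercept
    rss = np.sum((u - (slope * x + intercept)) ** 2)
    if best is None or rss < best[0]:
        # slope = u*/kappa; intercept = -(u*/kappa) * ln(z0)
        best = (rss, d, slope * KAPPA, np.exp(-intercept / slope))

_, d_fit, ustar_fit, z0_fit = best
```

The paper's refinement is to repeat this over all sub-profiles and take the representative d from the summed errors, which guards against a single noisy point dominating the fit.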
NASA Astrophysics Data System (ADS)
Stumpp, C.; Nützmann, G.; Maciejewski, S.; Maloszewski, P.
2009-09-01
In this paper, five model approaches with different physical and mathematical concepts varying in their model complexity and requirements were applied to identify the transport processes in the unsaturated zone. The applicability of these model approaches was compared and evaluated by investigating two tracer breakthrough curves (bromide, deuterium) in a cropped, free-draining lysimeter experiment under natural atmospheric boundary conditions. The data set consisted of time series of water balance, depth-resolved water contents, pressure heads and resident concentrations measured during 800 days. The tracer transport parameters were determined using a simple stochastic (stream tube model), three lumped parameter (constant water content model, multi-flow dispersion model, variable flow dispersion model) and a transient model approach. All of them were able to fit the tracer breakthrough curves. The identified transport parameters of each model approach were compared. Despite the differing physical and mathematical concepts the resulting parameters (mean water contents, mean water flux, dispersivities) of the five model approaches were all in the same range. The results indicate that the flow processes are also describable assuming steady state conditions. Homogeneous matrix flow is dominant, and a small pore volume with enhanced flow velocities near saturation was identified with the variable-saturation flow and transport approach. The multi-flow dispersion model also identified preferential flow and additionally suggested a third, less mobile flow component. Due to high fitting accuracy and parameter similarity, all model approaches indicated reliable results.
Automatic selection of arterial input function using tri-exponential models
NASA Astrophysics Data System (ADS)
Yao, Jianhua; Chen, Jeremy; Castro, Marcelo; Thomasson, David
2009-02-01
Dynamic Contrast Enhanced MRI (DCE-MRI) is one method for drug and tumor assessment. Selecting a consistent arterial input function (AIF) is necessary to calculate tissue and tumor pharmacokinetic parameters in DCE-MRI. This paper presents an automatic and robust method to select the AIF. The first stage is artery detection and segmentation, where knowledge about artery structure and dynamic signal intensity temporal properties of DCE-MRI is employed. The second stage is AIF model fitting and selection. A tri-exponential model is fitted for every candidate AIF using the Levenberg-Marquardt method, and the best fitted AIF is selected. Our method has been applied in DCE-MRIs of four different body parts: breast, brain, liver and prostate. The success rates in artery segmentation for 19 cases are 89.6% ± 15.9%. The pharmacokinetic parameters computed from the automatically selected AIFs are highly correlated with those from manually determined AIFs (R² = 0.946, P(T<=t) = 0.09). Our imaging-based tri-exponential AIF model demonstrated significant improvement over a previously proposed bi-exponential model.
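A sketch of the model-fitting stage under stated assumptions: the tri-exponential functional form, synthetic data, and initial guess below are illustrative, not the paper's exact parameterization; scipy's Levenberg-Marquardt backend stands in for the authors' implementation.

```python
import numpy as np
from scipy.optimize import curve_fit

def tri_exp(t, a1, a2, a3, m1, m2, m3):
    """Hypothetical tri-exponential AIF: sum of three decaying exponentials."""
    return a1 * np.exp(-m1 * t) + a2 * np.exp(-m2 * t) + a3 * np.exp(-m3 * t)

t = np.linspace(0.0, 5.0, 120)                      # time samples
true = (8.0, 3.0, 1.0, 4.0, 1.0, 0.1)               # invented "candidate AIF"
y = tri_exp(t, *true) + np.random.default_rng(2).normal(0.0, 0.02, t.size)

# Levenberg-Marquardt fit; the initial guess is an assumption (multi-
# exponential fits are sensitive to starting values).
p0 = (5.0, 2.0, 0.5, 3.0, 0.8, 0.05)
popt, _ = curve_fit(tri_exp, t, y, p0=p0, method="lm", maxfev=20000)
rss = float(np.sum((y - tri_exp(t, *popt)) ** 2))
```

In the paper's pipeline this fit is run per candidate AIF and the candidate with the best fit quality is selected; here `rss` plays that role for a single curve.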
A Smoluchowski model of crystallization dynamics of small colloidal clusters
NASA Astrophysics Data System (ADS)
Beltran-Villegas, Daniel J.; Sehgal, Ray M.; Maroudas, Dimitrios; Ford, David M.; Bevan, Michael A.
2011-10-01
We investigate the dynamics of colloidal crystallization in a 32-particle system at a fixed value of interparticle depletion attraction that produces coexisting fluid and solid phases. Free energy landscapes (FELs) and diffusivity landscapes (DLs) are obtained as coefficients of 1D Smoluchowski equations using as order parameters either the radius of gyration or the average crystallinity. FELs and DLs are estimated by fitting the Smoluchowski equations to Brownian dynamics (BD) simulations using either linear fits to locally initiated trajectories or global fits to unbiased trajectories using Bayesian inference. The resulting FELs are compared to Monte Carlo Umbrella Sampling results. The accuracy of the FELs and DLs for modeling colloidal crystallization dynamics is evaluated by comparing mean first-passage times from BD simulations with analytical predictions using the FEL and DL models. While the 1D models accurately capture dynamics near the free energy minimum fluid and crystal configurations, predictions near the transition region are not quantitatively accurate. A preliminary investigation of ensemble averaged 2D order parameter trajectories suggests that 2D models are required to capture crystallization dynamics in the transition region.
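The mean first-passage time predictions mentioned above follow from the 1D Smoluchowski coefficients by a standard double integral, T(x0→b) = ∫ from x0 to b of dy e^{F(y)}/D(y) · ∫ from a to y of dz e^{−F(z)}, with F in kT units and a reflecting boundary at a. A numerical sketch on an illustrative harmonic landscape with constant diffusivity, not a fitted colloidal FEL/DL:

```python
import numpy as np

def mfpt(F, D, x, x0_idx, b_idx):
    """Mean first-passage time on a 1D Smoluchowski landscape F (kT units)
    with diffusivity D, reflecting boundary at x[0], absorbing at x[b_idx]."""
    dx = x[1] - x[0]
    inner = np.cumsum(np.exp(-F)) * dx           # int_a^y exp(-F(z)) dz
    integrand = np.exp(F) / D * inner
    return np.sum(integrand[x0_idx:b_idx]) * dx  # int_x0^b ... dy

x = np.linspace(-2.0, 2.0, 4001)
F = 2.0 * x ** 2                                 # illustrative harmonic well
D = np.ones_like(x)                              # constant diffusivity
t_near = mfpt(F, D, x, x0_idx=2000, b_idx=3000)  # well center -> x = 1
t_far = mfpt(F, D, x, x0_idx=2000, b_idx=4000)   # well center -> x = 2
```

Comparing such analytical MFPTs against first-passage times harvested from Brownian dynamics trajectories is the consistency check the abstract describes; reaching a more distant (higher-barrier) target necessarily takes longer.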
FIT-MART: Quantum Magnetism with a Gentle Learning Curve
NASA Astrophysics Data System (ADS)
Engelhardt, Larry; Garland, Scott C.; Rainey, Cameron; Freeman, Ray A.
We present a new open-source software package, FIT-MART, that allows non-experts to quickly get started simulating quantum magnetism. FIT-MART can be downloaded as a platform-independent executable Java (JAR) file. It allows the user to define (Heisenberg) Hamiltonians by electronically drawing pictures that represent quantum spins and operators. Sliders are automatically generated to control the values of the parameters in the model, and when the values change, several plots are updated in real time to display both the resulting energy spectra and the equilibrium magnetic properties. Several experimental data sets for real magnetic molecules are included in FIT-MART to allow easy comparison between simulated and experimental data, and FIT-MART users can also import their own data for analysis and compare the goodness of fit for different models.
Nonlinear Acoustical Assessment of Precipitate Nucleation
NASA Technical Reports Server (NTRS)
Cantrell, John H.; Yost, William T.
2004-01-01
The purpose of the present work is to show that measurements of the acoustic nonlinearity parameter in heat treatable alloys as a function of heat treatment time can provide quantitative information about the kinetics of precipitate nucleation and growth in such alloys. Generally, information on the kinetics of phase transformations is obtained from time-sequenced electron microscopical examination and differential scanning microcalorimetry. The present nonlinear acoustical assessment of precipitation kinetics is based on the development of a multiparameter analytical model of the effects on the nonlinearity parameter of precipitate nucleation and growth in the alloy system. A nonlinear curve fit of the model equation to the experimental data is then used to extract the kinetic parameters related to the nucleation and growth of the targeted precipitate. The analytical model and curve fit are applied to the assessment of S' precipitation in aluminum alloy 2024 during artificial aging from the T4 to the T6 temper.
Emami, Fereshteh; Maeder, Marcel; Abdollahi, Hamid
2015-05-07
Thermodynamic studies of equilibrium chemical reactions linked with kinetic procedures are largely intractable with traditional approaches. In this work, the new concept of generalized kinetic study of thermodynamic parameters is introduced for dynamic data. The examples of equilibria intertwined with kinetic chemical mechanisms include molecular charge transfer complex formation reactions, pH-dependent degradation of chemical compounds and tautomerization kinetics in micellar solutions. Model-based global analysis with the possibility of calculating and embedding the equilibrium and kinetic parameters into the fitting algorithm has allowed the complete analysis of the complex reaction mechanisms. After the fitting process, the optimal equilibrium and kinetic parameters together with an estimate of their standard deviations have been obtained. This work opens up a promising new avenue for obtaining equilibrium constants through the kinetic data analysis for the kinetic reactions that involve equilibrium processes.
Comparing Exponential and Exponentiated Models of Drug Demand in Cocaine Users
Strickland, Justin C.; Lile, Joshua A.; Rush, Craig R.; Stoops, William W.
2016-01-01
Drug purchase tasks provide rapid and efficient measurement of drug demand. Zero values (i.e., prices with zero consumption) present a quantitative challenge when using exponential demand models that exponentiated models may resolve. We aimed to replicate and advance the utility of using an exponentiated model by demonstrating construct validity (i.e., association with real-world drug use) and generalizability across drug commodities. Participants (N = 40 cocaine-using adults) completed Cocaine, Alcohol, and Cigarette Purchase Tasks evaluating hypothetical consumption across changes in price. Exponentiated and exponential models were fit to these data using different treatments of zero consumption values, including retaining zeros or replacing them with 0.1, 0.01, or 0.001. Excellent model fits were observed with the exponentiated model. Means and precision fluctuated with different replacement values when using the exponential model, but were consistent for the exponentiated model. The exponentiated model provided the strongest correlation between derived demand intensity (Q0) and self-reported free consumption in all instances (Cocaine r = .88; Alcohol r = .97; Cigarette r = .91). Cocaine demand elasticity was positively correlated with alcohol and cigarette elasticity. Exponentiated parameters were associated with real-world drug use (e.g., weekly cocaine use), whereas these correlations were less consistent for exponential parameters. Our findings show that selection of zero replacement values impacts demand parameters and their association with drug-use outcomes when using the exponential model, but not the exponentiated model. This work supports the adoption of the exponentiated demand model by replicating improved fit and consistency, in addition to demonstrating construct validity and generalizability. PMID:27929347
Comparing exponential and exponentiated models of drug demand in cocaine users.
Strickland, Justin C; Lile, Joshua A; Rush, Craig R; Stoops, William W
2016-12-01
Drug purchase tasks provide rapid and efficient measurement of drug demand. Zero values (i.e., prices with zero consumption) present a quantitative challenge when using exponential demand models that exponentiated models may resolve. We aimed to replicate and advance the utility of using an exponentiated model by demonstrating construct validity (i.e., association with real-world drug use) and generalizability across drug commodities. Participants (N = 40 cocaine-using adults) completed Cocaine, Alcohol, and Cigarette Purchase Tasks evaluating hypothetical consumption across changes in price. Exponentiated and exponential models were fit to these data using different treatments of zero consumption values, including retaining zeros or replacing them with 0.1, 0.01, or 0.001. Excellent model fits were observed with the exponentiated model. Means and precision fluctuated with different replacement values when using the exponential model but were consistent for the exponentiated model. The exponentiated model provided the strongest correlation between derived demand intensity (Q0) and self-reported free consumption in all instances (Cocaine r = .88; Alcohol r = .97; Cigarette r = .91). Cocaine demand elasticity was positively correlated with alcohol and cigarette elasticity. Exponentiated parameters were associated with real-world drug use (e.g., weekly cocaine use) whereas these correlations were less consistent for exponential parameters. Our findings show that selection of zero replacement values affects demand parameters and their association with drug-use outcomes when using the exponential model but not the exponentiated model. This work supports the adoption of the exponentiated demand model by replicating improved fit and consistency and demonstrating construct validity and generalizability. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
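The exponentiated equation referred to above is the Koffarnus et al. reformulation of the exponential demand model, Q = Q0 · 10^(k·(exp(−α·Q0·C) − 1)), which tolerates zero-consumption prices because it models Q directly rather than log Q. A sketch with invented purchase-task data (the span constant k is fixed at 2 here as an assumption; the paper's value may differ):

```python
import numpy as np
from scipy.optimize import curve_fit

K = 2.0  # span constant (log10 range of consumption), fixed by convention here

def exponentiated_demand(price, q0, alpha):
    """Exponentiated demand equation (Koffarnus et al. form):
    Q = Q0 * 10 ** (K * (exp(-alpha * Q0 * C) - 1)).
    Unlike the log-form exponential model, it accepts Q = 0 directly."""
    return q0 * 10.0 ** (K * (np.exp(-alpha * q0 * price) - 1.0))

# Invented purchase-task responses; note the zero at the highest price is
# retained rather than replaced, which is the point of the exponentiated form.
prices = np.array([0.0, 0.5, 1.0, 2.0, 5.0, 10.0, 20.0])
consumption = np.array([10.0, 9.0, 8.0, 6.0, 2.0, 0.5, 0.0])

popt, _ = curve_fit(exponentiated_demand, prices, consumption,
                    p0=(10.0, 0.01), maxfev=10000)
q0_fit, alpha_fit = popt   # demand intensity and elasticity-rate parameter
```

Q0 is read off as consumption at zero price (demand intensity), while α governs how steeply consumption collapses with price, the quantities the abstract correlates with self-reported use.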
ERIC Educational Resources Information Center
Maydeu-Olivares, Alberto; Montano, Rosa
2013-01-01
We investigate the performance of three statistics, R1, R2 (Glas in "Psychometrika" 53:525-546, 1988), and M2 (Maydeu-Olivares & Joe in "J. Am. Stat. Assoc." 100:1009-1020, 2005, "Psychometrika" 71:713-732, 2006) to assess the overall fit of a one-parameter logistic model…
N'gattia, A K; Coulibaly, D; Nzussouo, N Talla; Kadjo, H A; Chérif, D; Traoré, Y; Kouakou, B K; Kouassi, P D; Ekra, K D; Dagnan, N S; Williams, T; Tiembré, I
2016-09-13
In temperate regions, influenza epidemics occur in the winter and correlate with certain climatological parameters. In African tropical regions, the effects of climatological parameters on influenza epidemics are not well defined. This study aims to identify and model the effects of climatological parameters on seasonal influenza activity in Abidjan, Côte d'Ivoire. We studied the effects of weekly rainfall, humidity, and temperature on laboratory-confirmed influenza cases in Abidjan from 2007 to 2010. We used the Box-Jenkins method with the autoregressive integrated moving average (ARIMA) process to create models using data from 2007-2010 and to assess the predictive value of the best model on data from 2011 to 2012. The weekly number of influenza cases showed significant cross-correlation with certain prior weeks for both rainfall and relative humidity. The best fitting multivariate model (ARIMAX(2,0,0)_RF) included the number of influenza cases during 1 week and 2 weeks prior, and the rainfall during the current week and 5 weeks prior. The performance of this model showed an increase of >3% for the Akaike Information Criterion (AIC) and 2.5% for the Bayesian Information Criterion (BIC) compared to the reference univariate ARIMA(2,0,0). The prediction of the weekly number of influenza cases during 2011-2012 with the best fitting multivariate model (ARIMAX(2,0,0)_RF) showed that the observed values were within the 95% confidence interval of the predicted values during 97 of 104 weeks. Including rainfall increases the performances of fitted and predicted models. The timing of influenza in Abidjan can be partially explained by rainfall influence, in a setting with little change in temperature throughout the year. These findings can help clinicians to anticipate influenza cases during the rainy season by implementing preventive measures.
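The autoregressive structure with a rainfall covariate can be illustrated without the surveillance data. A minimal sketch (synthetic series and invented coefficients, not the Abidjan data; plain conditional least squares on an ARX(2) stands in for the full Box-Jenkins ARIMAX machinery):

```python
import numpy as np

# Simulate weekly cases driven by two autoregressive lags plus rainfall.
rng = np.random.default_rng(3)
n = 300
rain = rng.gamma(2.0, 10.0, n)                     # synthetic weekly rainfall
cases = np.zeros(n)
for t in range(2, n):
    cases[t] = (5.0 + 0.5 * cases[t - 1] + 0.2 * cases[t - 2]
                + 0.05 * rain[t] + rng.normal(0.0, 1.0))

# Conditional least squares: regress cases on lag-1, lag-2, and rainfall.
X = np.column_stack([np.ones(n - 2), cases[1:-1], cases[:-2], rain[2:]])
beta, *_ = np.linalg.lstsq(X, cases[2:], rcond=None)
const, phi1, phi2, rain_coef = beta                # recovered coefficients
```

A positive, well-determined rainfall coefficient is the regression analogue of the abstract's finding that adding rainfall improves the fitted and predicted models.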
NASA Astrophysics Data System (ADS)
Dangelmayr, Martin A.; Reimus, Paul W.; Johnson, Raymond H.; Clay, James T.; Stone, James J.
2018-06-01
This research assesses the ability of a GC SCM to simulate uranium transport under variable geochemical conditions typically encountered at uranium in-situ recovery (ISR) sites. Sediment was taken from a monitoring well at the SRH site at depths 192 and 193 m below ground and characterized by XRD, XRF, TOC, and BET. Duplicate column studies on the different sediment depths were flushed with synthesized restoration waters at two different alkalinities (160 mg/l CaCO3 and 360 mg/l CaCO3) to study the effect of alkalinity on uranium mobility. Uranium breakthrough occurred 25-30% earlier in columns with 360 mg/l CaCO3 than in columns fed with 160 mg/l CaCO3 influent water. A parameter estimation program (PEST) was coupled to PHREEQC to derive site densities from experimental data. Significant parameter fittings were produced for all models, demonstrating that the GC SCM approach can model the impact of carbonate on uranium in flow systems. Derived site densities for the two sediment depths were between 141 and 178 μmol-sites/kg-soil, demonstrating similar sorption capacities despite heterogeneity in sediment mineralogy. Model sensitivity to alkalinity and pH was shown to be moderate compared to fitted site densities when calcite saturation was allowed to equilibrate. Calcite kinetics emerged as a potential source of error when fitting parameters in flow conditions. Fitted results were compared to data from previous batch and column studies completed on sediments from the Smith-Ranch Highland (SRH) site, to assess variability in derived parameters. Parameters from batch experiments were lower by a factor of 1.1 to 3.4 compared to column studies completed on the same sediments. The difference was attributed to errors in solid-solution ratios and the impact of calcite dissolution in batch experiments. 
Column studies conducted at two different laboratories showed almost an order of magnitude difference in fitted site densities suggesting that experimental methodology may play a bigger role in column sorption behavior than actual sediment heterogeneity. Our results demonstrate the necessity for ISR sites to remove residual pCO2 and equilibrate restoration water with background geochemistry to reduce uranium mobility. In addition, the observed variability between fitted parameters on the same sediments highlights the need to provide standardized guidelines and methodology for regulators and industry when the GC SCM approach is used for ISR risk assessments.
Electrical transport characteristics of single-layer organic devices from theory and experiment
NASA Astrophysics Data System (ADS)
Martin, S. J.; Walker, Alison B.; Campbell, A. J.; Bradley, D. D. C.
2005-09-01
An electrical model based on drift diffusion is described. We have explored systematically how the shape of the current density-voltage (J-V) curves is determined by the input parameters, information that is essential when deducing values of these parameters by fitting to experimental data for an ITO/PPV/Al organic light-emitting device (OLED), where ITO is shorthand for indium tin oxide and PPV is poly(phenylene vinylene). Our conclusion is that it is often possible to obtain a unique fit even with several parameters to fit. Our results allowing for a tunneling current show remarkable resemblance to experimental data before and after the contacts are conditioned. We have demonstrated our model on single-layer devices with ITO/PFO/Au and ITO/PEDOT/PFO/Au at room temperature and ITO/TPD/Al over temperatures from 130 to 290 K. PFO is shorthand for poly(9,9'-dialkylfluorene-2,7-diyl) and TPD is shorthand for N,N'-diphenyl-N,N'-bis(3-methylphenyl)-1,1'-biphenyl-4,4'-diamine. Good fits to experimental data have been obtained, but in the case of the TPD device, only if a larger value for the relative permittivity ɛs than would be expected is used. We infer that a layer of dipoles at the ITO/TPD interface could be responsible for the observed J-V characteristics by locally changing ɛs. The strong temperature dependence of the hole barrier height obtained from fitting the J-V characteristics to the experimental data may indicate that the temperature dependence of the thermionic emission model is incorrect.
Using constraints and their value for optimization of large ODE systems
Domijan, Mirela; Rand, David A.
2015-01-01
We provide analytical tools to facilitate a rigorous assessment of the quality and value of the fit of a complex model to data. We use this to provide approaches to model fitting, parameter estimation, the design of optimization functions and experimental optimization. This is in the context where multiple constraints are used to select or optimize a large model defined by differential equations. We illustrate the approach using models of circadian clocks and the NF-κB signalling system. PMID:25673300
Protonation of Different Goethite Surfaces - Unified Models for NaNO3 and NaCl Media.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lutzenkirchen, Johannes; Boily, Jean F.; Gunneriusson, Lars
2008-01-01
Acid-base titration data for two goethite samples in sodium nitrate and sodium chloride media are discussed. The data are modelled with various surface complexation models in the framework of the MUlti SIte Complexation (MUSIC) model. Various assumptions with respect to the goethite morphology are considered in determining the site density of the surface functional groups. The differences among the various model applications are not statistically significant in terms of goodness of fit. More importantly, various published assumptions with respect to the goethite morphology (i.e. the contributions of different crystal planes and their repercussions on the “overall” site densities of the various surface functional groups) do not significantly affect the final model parameters. The simultaneous fit of the chloride and nitrate data results in electrolyte binding constants that are applicable over a wide range of electrolyte concentrations, including mixtures of chloride and nitrate. Model parameters for the high surface area goethite sample are in excellent agreement with parameters that were independently obtained by another group on different goethite titration data sets.
NASA Astrophysics Data System (ADS)
Shan, Y.; Eric, W.; Gao, L.; Zhao, T.; Yin, Y.
2015-12-01
In this study, we have evaluated the performance of size distribution functions (SDFs) with 2 and 3 moments in fitting the observed size distributions of rain droplets at three different heights. The goal is to improve the microphysics schemes in meso-scale models such as the Weather Research and Forecasting (WRF) model. Rain droplets were observed during eight periods of different rain types at three stations on the Yellow Mountain in East China. The SDFs in this study were the M-P distribution, the Gamma SDF with a fixed shape parameter (FSP), Gamma SDFs with the shape parameter diagnosed following Milbrandt (2010; denoted DSPM10), Milbrandt (2005; denoted DSPM05) and Seifert (2008; denoted DSPS08), the Gamma SDF with the shape parameter solved for directly (SSP), and the Lognormal SDF. Based on the preliminary experiments, three ensemble methods for deciding the Gamma SDF were also developed and assessed. The magnitude of the average relative error caused by applying a FSP was 10^-2 for fitting the 0-order moment of the observed rain droplet distribution, and changed to 10^-1 and 10^0 for the 1-4 order and 5-6 order moments, respectively. To different extents, the DSPM10, DSPM05, DSPS08, SSP and ensemble methods could improve fitting accuracies for the 0-6 order moments, especially the ensemble method coupling the SSP and DSPS08 methods, which provided average relative errors of 6.46% for the 1-4 order moments and 11.90% for the 5-6 order moments. The relative error of fitting three moments using the Lognormal SDF was much larger than that of the Gamma SDF. The threshold value of the shape parameter ranged from 0 to 8, because values beyond this range could cause overflow in the calculation. When the average diameter of rain droplets was less than 2 mm, the possibility of an unavailable shape parameter value (USPV) increased with decreasing droplet size. Fitting accuracy was strongly sensitive to the choice of moment group: when the ensemble method coupling SSP and DSPS08 was used, a better fit to the 1-3-5 moment group of the SDF was possible compared to fitting the 0-3-6 moment group.
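As an illustration of the kind of moment-based Gamma SDF fitting evaluated here, the three parameters of N(D) = N0 D^μ exp(-λD) can be recovered in closed form from the 0th-2nd moments, using M_k = N0 Γ(μ+k+1)/λ^(μ+k+1). This sketch is a generic moment-matching solution, not any of the paper's DSPM/DSPS diagnosis methods, and the function name is an assumption:

```python
from math import gamma

def gamma_sdf_from_moments(m0, m1, m2):
    """Recover (n0, mu, lam) of a gamma drop-size distribution
    N(D) = n0 * D**mu * exp(-lam*D) from its 0th-2nd moments.
    Uses M_k = n0 * Gamma(mu+k+1) / lam**(mu+k+1), so the moment
    ratio m0*m2/m1**2 equals (mu+2)/(mu+1)."""
    r = m0 * m2 / m1 ** 2
    mu = (2.0 - r) / (r - 1.0)          # shape parameter
    lam = (mu + 1.0) * m0 / m1          # slope parameter
    n0 = m0 * lam ** (mu + 1.0) / gamma(mu + 1.0)
    return n0, mu, lam
```

The same algebra with a different moment group (e.g. 0-3-6) leads to a cubic rather than linear relation in μ, which is one reason the choice of moment group affects fitting accuracy.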
Chow, Steven Kwok Keung; Yeung, David Ka Wai; Ahuja, Anil T; King, Ann D
2012-01-01
Purpose: To quantitatively evaluate kinetic parameter estimation for head and neck (HN) dynamic contrast-enhanced (DCE) MRI with dual-flip-angle (DFA) T1 mapping. Materials and methods: Clinical DCE-MRI datasets of 23 patients with HN tumors were included in this study. T1 maps were generated based on the multiple-flip-angle (MFA) method and different DFA combinations. Tofts model parameter maps of kep, Ktrans and vp based on MFA and DFAs were calculated and compared. Fitted parameters from MFA and DFAs were quantitatively evaluated in primary tumor, salivary gland and muscle. Results: T1 mapping deviations from DFAs produced substantial deviations in kinetic parameter estimation in head and neck tissues. In particular, the DFA of [2º, 7º] significantly overestimated, while [7º, 12º] and [7º, 15º] significantly underestimated, Ktrans and vp (P<0.01). [2º, 15º] achieved the smallest, but still statistically significant, overestimation of Ktrans and vp in primary tumors: 32.1% and 16.2%, respectively. kep fitting results from DFAs were relatively close to the MFA reference compared to Ktrans and vp. Conclusions: T1 deviations induced by DFA can result in significant errors in kinetic parameter estimation through Tofts model fitting, particularly for Ktrans and vp. The MFA method should be more reliable and robust for accurate quantitative pharmacokinetic analysis in the head and neck. PMID:23289084
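The dual-flip-angle T1 estimate underlying this comparison can be sketched from the standard linearized spoiled gradient-echo (SPGR) signal equation, S/sin(α) = E1·S/tan(α) + M0(1-E1) with E1 = exp(-TR/T1). This is a generic textbook formulation, not the authors' processing pipeline, and the function name is an assumption:

```python
import math

def t1_from_dual_flip_angle(s1, s2, a1_deg, a2_deg, tr_ms):
    """Estimate T1 (ms) from two spoiled gradient-echo signals s1, s2
    acquired at flip angles a1_deg, a2_deg with repetition time tr_ms.
    The slope of S/sin(a) vs. S/tan(a) across the two points equals
    E1 = exp(-TR/T1)."""
    a1, a2 = math.radians(a1_deg), math.radians(a2_deg)
    x1, y1 = s1 / math.tan(a1), s1 / math.sin(a1)
    x2, y2 = s2 / math.tan(a2), s2 / math.sin(a2)
    e1 = (y2 - y1) / (x2 - x1)          # slope = exp(-TR/T1)
    return -tr_ms / math.log(e1)
```

With only two angles the slope is determined exactly, so any noise or B1 error in either measurement propagates directly into T1; the MFA method fits the same line to many angles, which is why it is the more robust reference.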
BGFit: management and automated fitting of biological growth curves.
Veríssimo, André; Paixão, Laura; Neves, Ana Rute; Vinga, Susana
2013-09-25
Existing tools to model cell growth curves do not offer a flexible integrative approach to manage large datasets and automatically estimate parameters. With the increase of experimental time-series from microbiology and oncology, software that allows researchers to easily organize experimental data and simultaneously extract relevant parameters in an efficient way is crucial. BGFit provides a web-based unified platform where a rich set of dynamic models can be fitted to experimental time-series data, further allowing users to manage the results efficiently in a structured and hierarchical way. The data management system organizes projects, experiments and measurement data, and supports teams with different editing and viewing permissions. Several dynamic and algebraic models are already implemented, such as polynomial regression, Gompertz, Baranyi, Logistic and Live Cell Fraction models, and users can easily add new models, expanding the current set. BGFit allows users to manage their data and models in an integrated way, even if they are not familiar with databases or existing computational tools for parameter estimation. BGFit is designed with a flexible architecture that focuses on extensibility and leverages free software and existing tools and methods, allowing different data modeling techniques to be compared and evaluated. The application is described in the context of fitting bacterial and tumor cell growth data, but it is applicable to any type of two-dimensional data, e.g. physical chemistry and macroeconomic time series, and is fully scalable to a high number of projects, data and model complexity.
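One of the model families mentioned, the Gompertz growth curve, can be fitted to a time-series in a few lines. This sketch uses SciPy's `curve_fit` with the common Zwietering parameterization (asymptote A, maximum growth rate mu, lag time lam); it is an independent illustration, not BGFit's own code:

```python
import numpy as np
from scipy.optimize import curve_fit

def gompertz(t, A, mu, lam):
    """Zwietering-parameterized Gompertz growth curve:
    A = asymptote, mu = maximum specific growth rate, lam = lag time."""
    return A * np.exp(-np.exp(mu * np.e / A * (lam - t) + 1.0))

def fit_gompertz(t, y, p0=(1.0, 0.5, 1.0)):
    """Nonlinear least-squares fit; p0 is an initial guess (A, mu, lam)."""
    params, _ = curve_fit(gompertz, t, y, p0=p0, maxfev=10000)
    return params
```

In a platform like the one described, the same `curve_fit` call would simply be dispatched over whichever registered model (Gompertz, Baranyi, Logistic, ...) the user selects.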
A back-fitting algorithm to improve real-time flood forecasting
NASA Astrophysics Data System (ADS)
Zhang, Xiaojing; Liu, Pan; Cheng, Lei; Liu, Zhangjun; Zhao, Yan
2018-07-01
Real-time flood forecasting is important for decision-making with regard to flood control and disaster reduction. The conventional approach involves a postprocessor calibration strategy that first calibrates the hydrological model and then estimates errors. This procedure can simulate streamflow consistent with observations, but the parameters obtained are not optimal. Joint calibration strategies address this issue by refining hydrological model parameters jointly with the autoregressive (AR) model. In this study, five alternative schemes are used to forecast floods. Scheme I uses only the hydrological model, while scheme II includes an AR model for error correction. In scheme III, differencing is used to remove non-stationarity in the error series. A joint inference strategy employed in scheme IV calibrates the hydrological and AR models simultaneously. The back-fitting algorithm, a basic approach for training an additive model, is adopted in scheme V to alternately recalibrate hydrological and AR model parameters. The performance of the five schemes is compared with a case study of 15 recorded flood events from China's Baiyunshan reservoir basin. Our results show that (1) schemes IV and V outperform scheme III during the calibration and validation periods and (2) scheme V is inferior to scheme IV in the calibration period, but provides better results in the validation period. Joint calibration strategies can therefore improve the accuracy of flood forecasting. Additionally, the back-fitting recalibration strategy produces weaker overcorrection and a more robust performance compared with the joint inference strategy.
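The alternating recalibration idea of scheme V can be illustrated on a deliberately simplified setup: a one-parameter linear "hydrological" model q ≈ a·x with an AR(1) error model. Both components here are assumptions for illustration, not the paper's Baiyunshan models:

```python
import numpy as np

def backfit(x, q, n_iter=20):
    """Back-fitting sketch: alternately recalibrate a toy hydrological
    model q ~ a*x and an AR(1) error model e_t = rho*e_{t-1} + w_t.
    Returns the estimated (a, rho)."""
    a, rho = 1.0, 0.0
    for _ in range(n_iter):
        # (1) refit a on AR(1)-whitened data:
        #     q_t - rho*q_{t-1} = a*(x_t - rho*x_{t-1}) + w_t
        qs = q[1:] - rho * q[:-1]
        xs = x[1:] - rho * x[:-1]
        a = float(xs @ qs / (xs @ xs))
        # (2) refit rho on the hydrological residuals e = q - a*x
        e = q - a * x
        rho = float(e[:-1] @ e[1:] / (e[:-1] @ e[:-1]))
    return a, rho
```

Each half-step solves an ordinary least-squares problem holding the other component fixed, which is exactly the additive-model training loop the abstract refers to; a real application would replace `a*x` with the calibrated rainfall-runoff model.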
Genotypic Complexity of Fisher’s Geometric Model
Hwang, Sungmin; Park, Su-Chan; Krug, Joachim
2017-01-01
Fisher’s geometric model was originally introduced to argue that complex adaptations must occur in small steps because of pleiotropic constraints. When supplemented with the assumption of additivity of mutational effects on phenotypic traits, it provides a simple mechanism for the emergence of genotypic epistasis from the nonlinear mapping of phenotypes to fitness. Of particular interest is the occurrence of reciprocal sign epistasis, which is a necessary condition for multipeaked genotypic fitness landscapes. Here we compute the probability that a pair of randomly chosen mutations interacts sign epistatically, which is found to decrease with increasing phenotypic dimension n, and varies nonmonotonically with the distance from the phenotypic optimum. We then derive expressions for the mean number of fitness maxima in genotypic landscapes comprised of all combinations of L random mutations. This number increases exponentially with L, and the corresponding growth rate is used as a measure of the complexity of the landscape. The dependence of the complexity on the model parameters is found to be surprisingly rich, and three distinct phases characterized by different landscape structures are identified. Our analysis shows that the phenotypic dimension, which is often referred to as phenotypic complexity, does not generally correlate with the complexity of fitness landscapes and that even organisms with a single phenotypic trait can have complex landscapes. Our results further inform the interpretation of experiments where the parameters of Fisher’s model have been inferred from data, and help to elucidate which features of empirical fitness landscapes can be described by this model. PMID:28450460
VizieR Online Data Catalog: Spectral evolution of 4U 1543-47 in 2002 (Lipunova+, 2017)
NASA Astrophysics Data System (ADS)
Lipunova, G. V.; Malanchev, K. L.
2017-08-01
Evolution of the spectral parameters obtained from fitting the spectral data taken with RXTE/PCA in the 2.9-25 keV energy band. Some spectral parameters are plotted in Figure 1 of the paper. The black hole mass is 9.4 solar masses, the Kerr parameter is 0.4, and the disc inclination is 20.7 degrees. The spectral fitting is done using XSPEC 12.9.0. The XSPEC spectral model consists of the following spectral components: TBabs((simpl*kerrbb+laor)smedge). A full description of the spectral parameters can be found in Table A1 and Appendix A of the paper. (1 data file).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, Yeon Soo; Jeong, G. Y.; Sohn, D. -S.
U-Mo/Al dispersion fuel is currently under development in the DOE's Material Management and Minimization program to convert HEU-fueled research reactors to LEU-fueled reactors. In some demanding conditions in high-power and high-performance reactors, large pores form in the interaction layers between the U-Mo fuel particles and the Al matrix, which pose a potential cause of fuel failure. In this study, the formation and growth of these pores were explored. As a product, a model to predict pore growth and porosity increase was developed. Well-characterized in-pile data from reduced-size plates were used to fit the model parameters. A data set of full-sized plates, independent of and distinctly different from those used to fit the model parameters, was used to examine the accuracy of the model.
Radar cross section models for limited aspect angle windows
NASA Astrophysics Data System (ADS)
Robinson, Mark C.
1992-12-01
This thesis presents a method for building Radar Cross Section (RCS) models of aircraft based on static data taken from limited aspect angle windows. These models statistically characterize static RCS. This is done to show that a limited number of samples can be used to effectively characterize static aircraft RCS. The optimum models are determined by performing both a Kolmogorov and a Chi-Square goodness-of-fit test comparing the static RCS data with a variety of probability density functions (pdf) that are known to be effective at approximating the static RCS of aircraft. The optimum parameter estimator is also determined by the goodness-of-fit tests if there is a difference in pdf parameters obtained by the Maximum Likelihood Estimator (MLE) and the Method of Moments (MoM) estimators.
A nonlinear SIR with stability
NASA Astrophysics Data System (ADS)
Trisilowati, Darti, I.; Fitri, S.
2014-02-01
The aim of this work is to develop a nonlinear susceptible-infectious-removed (SIR) epidemic model with vaccination. We analyze the stability of the model by linearizing it around the equilibrium point. Then, diphtheria data from East Java province are fitted to the model. From the estimated parameters, we investigate which parameters play an important role in the epidemic model. Some numerical simulations are given to illustrate the analytical results and the behavior of the model.
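A minimal version of the fitting step, integrating the SIR equations and grid-searching the transmission rate against an observed infectious fraction, might look like the following. The vaccination term and the diphtheria data are omitted, so this is a sketch under simplified assumptions, not the paper's model:

```python
import numpy as np

def simulate_sir(beta, gamma, s0, i0, days, dt=0.1):
    """Euler integration of the normalized SIR equations:
    ds/dt = -beta*s*i,  di/dt = beta*s*i - gamma*i.
    Returns the infectious fraction sampled once per day."""
    s, i = s0, i0
    out = []
    steps_per_day = int(round(1.0 / dt))
    for _ in range(days):
        for _ in range(steps_per_day):
            ds = -beta * s * i
            di = beta * s * i - gamma * i
            s += ds * dt
            i += di * dt
        out.append(i)
    return np.array(out)

def fit_beta(observed_i, gamma, s0, i0):
    """Crude 1-D grid search for the transmission rate beta
    minimizing the sum of squared errors against observed_i."""
    betas = np.linspace(0.05, 1.0, 96)
    errs = [np.sum((simulate_sir(b, gamma, s0, i0, len(observed_i))
                    - observed_i) ** 2) for b in betas]
    return betas[int(np.argmin(errs))]
```

In practice the grid search would be replaced by a proper optimizer fitting both beta and gamma (and a vaccination rate), but the structure, simulate then compare to data, is the same.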
Broadband distortion modeling in Lyman-α forest BAO fitting
Blomqvist, Michael; Kirkby, David; Bautista, Julian E.; ...
2015-11-23
Recently, the Lyman-α absorption observed in the spectra of high-redshift quasars has been used as a tracer of large-scale structure by means of the three-dimensional Lyman-α forest auto-correlation function at redshift z ≃ 2.3, but the need to fit the quasar continuum in every absorption spectrum introduces a broadband distortion that is difficult to correct and causes a systematic error for measuring any broadband properties. Here, we describe a k-space model for this broadband distortion based on a multiplicative correction to the power spectrum of the transmitted flux fraction that suppresses power on scales corresponding to the typical length of a Lyman-α forest spectrum. In implementing the distortion model in fits for the baryon acoustic oscillation (BAO) peak position in the Lyman-α forest auto-correlation, we find that the fitting method recovers the input values of the linear bias parameter b_F and the redshift-space distortion parameter β_F for mock data sets with a systematic error of less than 0.5%. Applied to the auto-correlation measured for BOSS Data Release 11, our method improves on the previous treatment of broadband distortions in BAO fitting by providing a better fit to the data using fewer parameters and reducing the statistical errors on β_F and the combination b_F(1+β_F) by more than a factor of seven. The measured values at redshift z=2.3 are β_F = 1.39 (+0.11/-0.10, +0.24/-0.19, +0.38/-0.28) and b_F(1+β_F) = -0.374 (+0.007/-0.007, +0.013/-0.014, +0.020/-0.022), where the error pairs are the 1σ, 2σ and 3σ statistical errors. Our fitting software and the input files needed to reproduce our main results are publicly available.
WMAP7 constraints on oscillations in the primordial power spectrum
NASA Astrophysics Data System (ADS)
Meerburg, P. Daniel; Wijers, Ralph A. M. J.; van der Schaar, Jan Pieter
2012-03-01
We use the 7-year Wilkinson Microwave Anisotropy Probe (WMAP7) data to place constraints on oscillations supplementing an almost scale-invariant primordial power spectrum. Such oscillations are predicted by a variety of models, some of which amount to assuming that there is some non-trivial choice of the vacuum state at the onset of inflation. In this paper, we will explore data-driven constraints on two distinct models of initial state modifications. In both models, the frequency, phase and amplitude are degrees of freedom of the theory for which the theoretical bounds are rather weak: both the amplitude and frequency have allowed values ranging over several orders of magnitude. This requires many computationally expensive evaluations of the model cosmic microwave background (CMB) spectra and their goodness of fit, even in a Markov chain Monte Carlo (MCMC), normally the most efficient fitting method for such a problem. To search more efficiently, we first run a densely-spaced grid, with only three varying parameters: the frequency, the amplitude and the baryon density. We obtain the optimal frequency and run an MCMC at the best-fitting frequency, randomly varying all other relevant parameters. To reduce the computational time of each power spectrum computation, we adjust both comoving momentum integration and spline interpolation (in l) as a function of frequency and amplitude of the primordial power spectrum. Applying this to the WMAP7 data allows us to improve existing constraints on the presence of oscillations. We confirm earlier findings that certain frequencies can improve the fitting over a model without oscillations. For those frequencies we compute the posterior probability, allowing us to put some constraints on the primordial parameter space of both models.
Autonomous Modelling of X-ray Spectra Using Robust Global Optimization Methods
NASA Astrophysics Data System (ADS)
Rogers, Adam; Safi-Harb, Samar; Fiege, Jason
2015-08-01
The standard approach to model fitting in X-ray astronomy is by means of local optimization methods. However, these local optimizers suffer from a number of problems, such as a tendency for the fit parameters to become trapped in local minima, and can require an involved process of detailed user intervention to guide them through the optimization process. In this work we introduce a general GUI-driven global optimization method for fitting models to X-ray data, written in MATLAB, which searches for optimal models with minimal user interaction. We directly interface with the commonly used XSPEC libraries to access the full complement of pre-existing spectral models that describe a wide range of physics appropriate for modelling astrophysical sources, including supernova remnants and compact objects. Our algorithm is powered by the Ferret genetic algorithm and Locust particle swarm optimizer from the Qubist Global Optimization Toolbox, which are robust at finding families of solutions and identifying degeneracies. This technique will be particularly instrumental for multi-parameter models and high-fidelity data. In this presentation, we provide details of the code and use our techniques to analyze X-ray data obtained from a variety of astrophysical sources.
NASA Astrophysics Data System (ADS)
Walker, David M.; Allingham, David; Lee, Heung Wing Joseph; Small, Michael
2010-02-01
Small world network models have been effective in capturing the variable behaviour of reported case data of the SARS coronavirus outbreak in Hong Kong during 2003. Simulations of these models have previously been realized using informed “guesses” of the proposed model parameters and tested for consistency with the reported data by surrogate analysis. In this paper we attempt to provide statistically rigorous parameter distributions using Approximate Bayesian Computation sampling methods. We find that such sampling schemes are a useful framework for fitting parameters of stochastic small world network models where simulation of the system is straightforward but expressing a likelihood is cumbersome.
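The ABC rejection idea can be illustrated on a toy stochastic model (a binomial infection count, not the SARS small-world network of the paper): draw parameters from the prior, simulate, and keep draws whose summary statistic falls within a tolerance of the observed data. The function name and all numbers are illustrative assumptions:

```python
import random

def abc_rejection(observed_total, n_trials, n_samples=20000, tol=2, seed=1):
    """Toy ABC rejection sampler: infer the per-contact infection
    probability p of a binomial model from one observed case count.
    Draws p ~ Uniform(0, 1), simulates n_trials Bernoulli contacts,
    and keeps draws whose simulated count is within tol of the data."""
    rng = random.Random(seed)
    accepted = []
    for _ in range(n_samples):
        p = rng.random()                                  # prior draw
        sim = sum(rng.random() < p for _ in range(n_trials))
        if abs(sim - observed_total) <= tol:
            accepted.append(p)
    return accepted
```

The accepted draws approximate the posterior over p without ever writing down a likelihood, which is exactly the property that makes ABC attractive for stochastic network models whose simulation is easy but whose likelihood is cumbersome.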
Using the MWC model to describe heterotropic interactions in hemoglobin
Rapp, Olga
2017-01-01
Hemoglobin is a classical model allosteric protein. Research on hemoglobin parallels the development of key cooperativity and allostery concepts, such as the ‘all-or-none’ Hill formalism, the stepwise Adair binding formulation and the concerted Monod-Wyman-Changeux (MWC) allosteric model. While it is clear that the MWC model adequately describes the cooperative binding of oxygen to hemoglobin, rationalizing the effects of H+, CO2 or organophosphate ligands on hemoglobin-oxygen saturation using the same model remains controversial. According to the MWC model, allosteric ligands exert their effect on protein function by modulating the quaternary conformational transition of the protein. However, data fitting analysis of hemoglobin oxygen saturation curves in the presence or absence of inhibitory ligands persistently revealed effects on both the relative oxygen affinity (c) and the conformational equilibrium (L), the elementary MWC parameters. The recent realization that data fitting analysis using the traditional MWC model equation may not provide reliable estimates for L and c thus calls for a re-examination of previous data using alternative fitting strategies. In the current manuscript, we present two simple strategies for obtaining reliable estimates of the MWC mechanistic parameters from hemoglobin steady-state saturation curves in cases of both evolutionary and physiological variation. Our results suggest that the simple MWC model provides a reasonable description that can also account for heterotropic interactions in hemoglobin. The results, moreover, offer a general roadmap for successful data fitting analysis using the MWC model. PMID:28793329
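For reference, the n-site MWC saturation function whose parameters L and c are at issue here can be written down directly. The parameter values in the test are illustrative, not fitted hemoglobin values:

```python
def mwc_saturation(p_o2, K_R, L, c, n=4):
    """Fractional O2 saturation of an n-site MWC protein.
    alpha = pO2/K_R is the reduced ligand concentration,
    L the T/R conformational equilibrium constant, c = K_R/K_T
    the relative oxygen affinity of the T state."""
    a = p_o2 / K_R
    num = a * (1 + a) ** (n - 1) + L * c * a * (1 + c * a) ** (n - 1)
    den = (1 + a) ** n + L * (1 + c * a) ** n
    return num / den
```

The fitting difficulty discussed in the abstract comes from the fact that along much of a measured saturation curve, changes in L and c trade off almost exactly, so least-squares fits of this single equation can return strongly correlated, unreliable estimates.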
Glass Transition Temperature- and Specific Volume- Composition Models for Tellurite Glasses
DOE Office of Scientific and Technical Information (OSTI.GOV)
Riley, Brian J.; Vienna, John D.
This report provides composition-property models for tellurite glasses, namely for specific gravity and glass transition temperature. Included are the partial specific coefficients for each model, the component validity ranges, and the model fit parameters.
Abbott, Lauren J.; Stevens, Mark J.
2015-12-22
In this study, a coarse-grained (CG) model is developed for the thermoresponsive polymer poly(N-isopropylacrylamide) (PNIPAM), using a hybrid top-down and bottom-up approach. Nonbonded parameters are fit to experimental thermodynamic data following the procedures of the SDK (Shinoda, DeVane, and Klein) CG force field, with minor adjustments to provide better agreement with radial distribution functions from atomistic simulations. Bonded parameters are fit to probability distributions from atomistic simulations using multi-centered Gaussian-based potentials. The temperature-dependent potentials derived for the PNIPAM CG model in this work properly capture the coil-globule transition of PNIPAM single chains and yield a chain-length dependence consistent with atomistic simulations.
Arce, Pedro; Lagares, Juan Ignacio
2018-01-25
We have verified the GAMOS/Geant4 simulation model of a 6 MV VARIAN Clinac 2100 C/D linear accelerator by adjusting the initial beam parameters to fit the percentage depth dose and cross-profile dose experimental data at different depths in a water phantom. Thanks to the use of a wide range of field sizes, from 2 × 2 cm² to 40 × 40 cm², a small phantom voxel size and high statistics, fine precision in the determination of the beam parameters has been achieved. This precision has allowed us to make a thorough study of the different physics models and parameters that Geant4 offers. The three Geant4 electromagnetic physics sets of models, i.e. Standard, Livermore and Penelope, have been compared to the experiment, testing the four different models of angular bremsstrahlung distributions as well as the three available multiple-scattering models, and optimizing the most relevant Geant4 electromagnetic physics parameters. Before the fitting, a comprehensive CPU time optimization was done, using several of the Geant4 efficiency improvement techniques plus a few more developed in GAMOS.
Calibrating White Dwarf Asteroseismic Fitting Techniques
NASA Astrophysics Data System (ADS)
Castanheira, B. G.; Romero, A. D.; Bischoff-Kim, A.
2017-03-01
The main goal of looking for intrinsic variability in stars is the unique opportunity to study their internal structure. Once we have extracted independent modes from the data, it appears to be a simple matter of comparing the period spectrum with those from theoretical model grids to learn the inner structure of the star. However, asteroseismology is much more complicated than this simple description. We must account not only for observational uncertainties in period determination, but most importantly for the limitations of the model grids, coming from uncertainties in the constitutive physics, and of the fitting techniques. In this work, we discuss results of numerical experiments in which we used different independently calculated model grids (white dwarf cooling models from WDEC and fully evolutionary LPCODE-PUL models) and fitting techniques to fit synthetic stars. The advantage of using synthetic stars is that we know the details of their interior structure, so we can assess how well our models and fitting techniques are able to recover the interior structure as well as the stellar parameters.
Craven, Stephen; Shirsat, Nishikant; Whelan, Jessica; Glennon, Brian
2013-01-01
A Monod kinetic model, logistic equation model, and statistical regression model were developed for a Chinese hamster ovary cell bioprocess operated under three different modes of operation (batch, bolus fed-batch, and continuous fed-batch) and grown on two different bioreactor scales (3 L bench-top and 15 L pilot-scale). The Monod kinetic model was developed for all modes of operation under study and predicted cell density and glucose, glutamine, lactate, and ammonia concentrations well for the bioprocess. However, it was computationally demanding due to the large number of parameters necessary to produce a good model fit. The transferability of the Monod kinetic model structure and parameter set across bioreactor scales and modes of operation was investigated and a parameter sensitivity analysis performed. The experimentally determined parameters had the greatest influence on model performance. They changed with scale and mode of operation, but were easily calculated. The remaining parameters, which were fitted using a differential evolutionary algorithm, were not as crucial. Logistic equation and statistical regression models were investigated as alternatives to the Monod kinetic model. They were less computationally intensive to develop due to the absence of a large parameter set. However, modeling of the nutrient and metabolite concentrations proved to be troublesome due to the logistic equation model structure and the inability of both models to incorporate a feed. The complexity, computational load, and effort required for model development have to be balanced against the necessary level of model sophistication when choosing which model type to develop for a particular application. Copyright © 2012 American Institute of Chemical Engineers (AIChE).
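The Monod fitting problem described above can be illustrated with a minimal single-substrate sketch; the actual CHO bioprocess model tracks several more species and far more parameters, and all names and values here are hypothetical:

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import curve_fit

# Minimal single-substrate Monod growth sketch; the paper's CHO model
# also tracks glutamine, lactate, and ammonia. Values are synthetic.
def monod_rhs(t, y, mu_max, Ks, Y):
    X, S = y                            # biomass, substrate
    mu = mu_max * S / (Ks + S)          # Monod specific growth rate
    return [mu * X, -mu * X / Y]

def simulate(t, mu_max, Ks, Y, X0=0.1, S0=10.0):
    sol = solve_ivp(monod_rhs, (t[0], t[-1]), [X0, S0], t_eval=t,
                    args=(mu_max, Ks, Y), rtol=1e-6)
    return sol.y[0]                     # biomass trajectory only

true_params = (0.5, 1.0, 0.4)           # mu_max [1/h], Ks [g/L], yield
t = np.linspace(0.0, 24.0, 25)
rng = np.random.default_rng(0)
data = simulate(t, *true_params) + rng.normal(0.0, 0.02, t.size)

popt, _ = curve_fit(simulate, t, data, p0=(0.3, 0.5, 0.3),
                    bounds=(1e-3, [2.0, 10.0, 1.0]))
```

Even this toy version shows the identifiability issue in miniature: the yield Y is pinned down by the biomass plateau, while Ks is only weakly constrained by biomass data alone.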
Verhulst, Sarah; Altoè, Alessandro; Vasilkov, Viacheslav
2018-03-01
Models of the human auditory periphery range from very basic functional descriptions of auditory filtering to detailed computational models of cochlear mechanics, inner-hair cell (IHC), auditory-nerve (AN) and brainstem signal processing. It is challenging to incorporate detailed physiological descriptions of cellular components into human auditory models because single-cell data stems from invasive animal recordings while human reference data only exists in the form of population responses (e.g., otoacoustic emissions, auditory evoked potentials). To embed physiological models within a comprehensive human auditory periphery framework, it is important to capitalize on the success of basic functional models of hearing and render their descriptions more biophysical where possible. At the same time, comprehensive models should capture a variety of key auditory features, rather than fitting their parameters to a single reference dataset. In this study, we review and improve existing models of the IHC-AN complex by updating their equations and expressing their fitting parameters in terms of biophysical quantities. The quality of the model framework for human auditory processing is evaluated using recorded auditory brainstem response (ABR) and envelope-following response (EFR) reference data from normal and hearing-impaired listeners. We present a model with 12 fitting parameters from the cochlea to the brainstem that can be rendered hearing impaired to simulate how cochlear gain loss and synaptopathy affect human population responses. The model description forms a compromise between capturing well-described single-unit IHC and AN properties and human population response features. Copyright © 2018 The Authors. Published by Elsevier B.V. All rights reserved.
A global fit of the MSSM with GAMBIT
NASA Astrophysics Data System (ADS)
Athron, Peter; Balázs, Csaba; Bringmann, Torsten; Buckley, Andy; Chrząszcz, Marcin; Conrad, Jan; Cornell, Jonathan M.; Dal, Lars A.; Edsjö, Joakim; Farmer, Ben; Jackson, Paul; Krislock, Abram; Kvellestad, Anders; Mahmoudi, Farvah; Martinez, Gregory D.; Putze, Antje; Raklev, Are; Rogan, Christopher; Saavedra, Aldo; Savage, Christopher; Scott, Pat; Serra, Nicola; Weniger, Christoph; White, Martin
2017-12-01
We study the seven-dimensional Minimal Supersymmetric Standard Model (MSSM7) with the new GAMBIT software framework, with all parameters defined at the weak scale. Our analysis significantly extends previous weak-scale, phenomenological MSSM fits, by adding more and newer experimental analyses, improving the accuracy and detail of theoretical predictions, including dominant uncertainties from the Standard Model, the Galactic dark matter halo and the quark content of the nucleon, and employing novel and highly-efficient statistical sampling methods to scan the parameter space. We find regions of the MSSM7 that exhibit co-annihilation of neutralinos with charginos, stops and sbottoms, as well as models that undergo resonant annihilation via both light and heavy Higgs funnels. We find high-likelihood models with light charginos, stops and sbottoms that have the potential to be within the future reach of the LHC. Large parts of our preferred parameter regions will also be accessible to the next generation of direct and indirect dark matter searches, making prospects for discovery in the near future rather good.
THE NuSTAR X-RAY SPECTRUM OF HERCULES X-1: A RADIATION-DOMINATED RADIATIVE SHOCK
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wolff, Michael T.; Wood, Kent S.; Becker, Peter A.
2016-11-10
We report on new spectral modeling of the accreting X-ray pulsar Hercules X-1. Our radiation-dominated radiative shock model is an implementation of the analytic work of Becker and Wolff on Comptonized accretion flows onto magnetic neutron stars. We obtain a good fit to the spin-phase-averaged 4–78 keV X-ray spectrum observed by the Nuclear Spectroscopic Telescope Array during a main-on phase of the Her X-1 35 day accretion disk precession period. This model allows us to estimate the accretion rate, the Comptonizing temperature of the radiating plasma, the radius of the magnetic polar cap, and the average scattering opacity parameters in the accretion column. This is in contrast to previous phenomenological models that characterized the shape of the X-ray spectrum, but could not determine the physical parameters of the accretion flow. We describe the spectral fitting details and discuss the interpretation of the accretion flow physical parameters.
Estimating thermal performance curves from repeated field observations
Childress, Evan; Letcher, Benjamin H.
2017-01-01
Estimating thermal performance of organisms is critical for understanding population distributions and dynamics and predicting responses to climate change. Typically, performance curves are estimated using laboratory studies to isolate temperature effects, but other abiotic and biotic factors influence temperature-performance relationships in nature, reducing these models' predictive ability. We present a model for estimating thermal performance curves from repeated field observations that includes environmental and individual variation. We fit the model in a Bayesian framework using MCMC sampling, which allowed for estimation of unobserved latent growth while propagating uncertainty. Fitting the model to simulated data varying in sampling design and parameter values demonstrated that the parameter estimates were accurate, precise, and unbiased. Fitting the model to individual growth data from wild trout revealed high out-of-sample predictive ability relative to laboratory-derived models, which produced more biased predictions for field performance. The field-based estimates of thermal maxima were lower than those based on laboratory studies. Under warming temperature scenarios, field-derived performance models predicted stronger declines in body size than laboratory-derived models, suggesting that laboratory-based models may underestimate climate change effects. The presented model estimates true, realized field performance, avoiding assumptions required for applying laboratory-based models to field performance, which should improve estimates of performance under climate change and advance thermal ecology.
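The shape of a thermal performance curve can be illustrated with a common Gaussian form fitted by least squares; the study's field model additionally carries individual and environmental effects in a Bayesian hierarchy, and all values below are synthetic:

```python
import numpy as np
from scipy.optimize import curve_fit

# Minimal stand-in for a thermal performance curve:
# P(T) = Pmax * exp(-((T - Topt) / width)^2). Synthetic "growth" data;
# Pmax, Topt, and width are illustrative, not the trout estimates.
def tpc(T, Pmax, Topt, width):
    return Pmax * np.exp(-((T - Topt) / width) ** 2)

rng = np.random.default_rng(11)
T = np.linspace(4.0, 24.0, 40)                  # water temperature, deg C
growth = tpc(T, 1.2, 14.0, 5.0) + rng.normal(0.0, 0.05, T.size)

popt, _ = curve_fit(tpc, T, growth, p0=(1.0, 12.0, 4.0))
Pmax, Topt, width = popt
```

In the Bayesian field setting the same curve parameters would get priors and subject-level random effects instead of a single point fit, which is what lets the model shift the estimated thermal maximum relative to laboratory curves.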
More Sophisticated Fits of the Orbits of Haumea's Interacting Moons
NASA Astrophysics Data System (ADS)
Oldroyd, William Jared; Ragozzine, Darin; Porter, Simon
2018-04-01
Since the discovery of Haumea's moons, it has been a challenge to model the orbits of its moons, Hi’iaka and Namaka. With many precision HST observations, Ragozzine & Brown 2009 succeeded in calculating a three-point-mass model, which was essential because Keplerian orbits were not a statistically acceptable fit. New data obtained in 2010 could be fit by adding a J2 and spin pole to Haumea, but new data from 2015 were far from the predicted locations, even after an extensive exploration using Bayesian Markov Chain Monte Carlo methods (using emcee). Here we report on continued investigations as to why our model cannot fit the full 10-year baseline of data. We note that ignoring Haumea and instead examining the relative motion of the two moons in the Hi’iaka-centered frame leads to adequate fits for the data. This suggests there are additional parameters connected to Haumea that will be required in a full model. These parameters are potentially related to photocenter-barycenter shifts which could be significant enough to affect the fitting process; these are unlikely to be caused by the newly discovered ring (Ortiz et al. 2017) or by unknown satellites (Burkhart et al. 2016). Additionally, we have developed a new SPIN+N-bodY integrator called SPINNY that self-consistently calculates the interactions between n quadrupoles and is designed to test the importance of other possible effects (Haumea C22, satellite torques on the spin pole, Sun, etc.) on our astrometric fits. By correctly determining the orbit of Haumea’s satellites, we develop a better understanding of the physical properties of each of the objects, with implications for the formation of Haumea, its moons, and its collisional family.
Ciecior, Willy; Röhlig, Klaus-Jürgen; Kirchner, Gerald
2018-10-01
In the present paper, deterministic as well as first- and second-order probabilistic biosphere modeling approaches are compared. Furthermore, the influence of the shape of the probability distribution function (empirical distribution functions versus fitted lognormal probability functions) representing the aleatory uncertainty (also called variability) of a radioecological model parameter is studied, as well as the role of interacting parameters. Differences in the shape of the output distributions for the biosphere dose conversion factor from first-order Monte Carlo uncertainty analysis using empirical and fitted lognormal distribution functions for input parameters suggest that a lognormal approximation is possibly not always an adequate representation of the aleatory uncertainty of a radioecological parameter. Concerning the comparison of the impact of aleatory and epistemic parameter uncertainty on the biosphere dose conversion factor, the latter is described here using uncertain moments (mean, variance), while the distribution itself represents the aleatory uncertainty of the parameter. From the results obtained, the solution space of second-order Monte Carlo simulation is much larger than that of first-order Monte Carlo simulation. Therefore, the influence of the epistemic uncertainty of a radioecological parameter on the output result is much larger than that caused by its aleatory uncertainty. Parameter interactions are of significant influence only in the upper percentiles of the distribution of results, and only in the region of the upper percentiles of the model parameters. Copyright © 2018 Elsevier Ltd. All rights reserved.
Modelling and analysis of creep deformation and fracture in a 1 Cr 1/2 Mo ferritic steel
NASA Astrophysics Data System (ADS)
Dyson, B. F.; Osgerby, D.
A quantitative model, based upon a proposed new mechanism of creep deformation in particle-hardened alloys, has been validated by analysis of creep data from a 13CrMo44 (1Cr 1/2 Mo) material tested under a range of stresses and temperatures. The methodology that has been used to extract the model parameters quantifies, as a first approximation, only the main degradation (damage) processes - in the case of the 1Cr 1/2 Mo steel, these are considered to be the parallel operation of particle-coarsening and a progressively increasing stress due to a constant-load boundary condition. These 'global' model parameters can then be modified (only slightly) as required to obtain a detailed description and 'fit' to the rupture lifetime and strain/time trajectory of any individual test. The global model parameter approach may be thought of as predicting average behavior and the detailed fits as taking account of uncertainties (scatter) due to variability in the material. Using the global parameter dataset, predictions have also been made of behavior under biaxial stressing; constant straining rate; constant total strain (stress relaxation) and the likely success or otherwise of metallographic and mechanical remanent lifetime procedures.
High-energy pp and pp-bar forward elastic scattering and total cross sections
DOE Office of Scientific and Technical Information (OSTI.GOV)
Block, M.M.; Cahn, R.N.
1985-04-01
The present status of elastic pp and pp-bar scattering in the high-energy domain is reviewed, with emphasis on the forward and near-forward regions. The experimental techniques for measuring σ_tot, ρ, and B are discussed, emphasizing the importance of the region in which the nuclear and Coulomb scattering interfere. The impact-parameter representation is exploited to give simple didactic demonstrations of important rigorous theorems based on analyticity, and to illuminate the significance of the slope parameter B and the curvature parameter C. Models of elastic scattering are discussed, and a criterion for the onset of "asymptopia" is given. A critique of dispersion relations is presented. Simple analytic functions are used to fit simultaneously the real and imaginary parts of forward scattering amplitudes for both pp and pp-bar, obtained from experimental data for σ_tot and ρ. It is found that a good fit can be obtained using only five parameters (with a cross section rising as ln²s), over the energy range 5 < √s < 62 GeV. The possibilities that (a) the cross section rises only as ln s, (b) the cross section rises only locally as ln²s and eventually goes to a constant value, and (c) the cross-section difference between pp and pp-bar does not vanish as s → ∞ are examined critically. The nuclear slope parameters B are also fitted in a model-independent fashion. Examination of the fits reveals a new regularity of the pp-bar and the pp systems.
Extreme value modelling of Ghana stock exchange index.
Nortey, Ezekiel N N; Asare, Kwabena; Mettle, Felix Okoe
2015-01-01
Modelling of extreme events has always been of interest in fields such as hydrology and meteorology. However, after the recent global financial crises, appropriate models for modelling of such rare events leading to these crises have become quite essential in the finance and risk management fields. This paper models the extreme values of the Ghana stock exchange all-shares index (2000-2010) by applying the extreme value theory (EVT) to fit a model to the tails of the daily stock returns data. A conditional approach of the EVT was preferred and hence an ARMA-GARCH model was fitted to the data to correct for the effects of autocorrelation and conditional heteroscedastic terms present in the returns series, before the EVT method was applied. The Peak Over Threshold approach of the EVT, which fits a Generalized Pareto Distribution (GPD) model to excesses above a certain selected threshold, was employed. Maximum likelihood estimates of the model parameters were obtained and the model's goodness of fit was assessed graphically using Q-Q, P-P and density plots. The findings indicate that the GPD provides an adequate fit to the data of excesses. The sizes of the extreme daily Ghanaian stock market movements were then computed using the value at risk and expected shortfall risk measures at some high quantiles, based on the fitted GPD model.
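The peaks-over-threshold step can be sketched with scipy's GPD maximum-likelihood fit; heavy-tailed synthetic Student-t "returns" stand in for the de-GARCHed index residuals, and the threshold choice is illustrative:

```python
import numpy as np
from scipy import stats

# Peaks-over-threshold sketch: fit a Generalized Pareto Distribution to
# excesses above a high threshold. Data are synthetic, not the GSE index.
rng = np.random.default_rng(42)
returns = stats.t.rvs(df=4, size=20000, random_state=rng)

u = np.quantile(returns, 0.95)              # threshold at 95th percentile
excesses = returns[returns > u] - u

# Location fixed at zero, the standard POT parameterization
xi, _, beta = stats.genpareto.fit(excesses, floc=0)

def var_pot(q, u, xi, beta, n, n_u):
    """Value-at-Risk at quantile q from the fitted GPD tail."""
    return u + beta / xi * ((n / n_u * (1.0 - q)) ** (-xi) - 1.0)

var99 = var_pot(0.99, u, xi, beta, returns.size, excesses.size)
```

A positive fitted shape parameter xi indicates a heavy tail, and the same fitted tail gives expected shortfall in closed form if needed.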
Transfer Function Identification Using Orthogonal Fourier Transform Modeling Functions
NASA Technical Reports Server (NTRS)
Morelli, Eugene A.
2013-01-01
A method for transfer function identification, including both model structure determination and parameter estimation, was developed and demonstrated. The approach uses orthogonal modeling functions generated from frequency domain data obtained by Fourier transformation of time series data. The method was applied to simulation data to identify continuous-time transfer function models and unsteady aerodynamic models. Model fit error, estimated model parameters, and the associated uncertainties were used to show the effectiveness of the method for identifying accurate transfer function models from noisy data.
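A frequency-domain identification step of this kind can be sketched as follows; this plain least-squares illustration is a simplified stand-in, not Morelli's orthogonal Fourier modeling-function method, and all signals and parameter values are synthetic:

```python
import numpy as np
from scipy import signal

# Estimate a first-order transfer function K/(1 + tau*s) from
# input/output time series via an FFT-based frequency response.
dt = 0.01
t = np.arange(0.0, 60.0, dt)
rng = np.random.default_rng(1)
u = rng.standard_normal(t.size)              # broadband input

K_true, tau_true = 2.0, 0.5
y = np.zeros_like(u)                         # forward-Euler simulation of
for k in range(1, t.size):                   # tau*dy/dt + y = K*u
    y[k] = y[k - 1] + dt * (K_true * u[k - 1] - y[k - 1]) / tau_true

# H1 frequency-response estimate: cross-spectrum over input spectrum
f, Puu = signal.welch(u, fs=1.0 / dt, nperseg=1024)
_, Puy = signal.csd(u, y, fs=1.0 / dt, nperseg=1024)
w = 2.0 * np.pi * f
keep = (w > 0) & (w < 20.0)                  # trust low frequencies only
H = (Puy / Puu)[keep]

# Fit H(jw) = K/(1 + jw*tau); rearranged K - jw*tau*H = H, linear in (K, tau)
A = np.column_stack([np.ones_like(H), -1j * w[keep] * H])
theta, *_ = np.linalg.lstsq(A, H, rcond=None)
K_est, tau_est = theta.real
```

Rearranging the model so the unknowns enter linearly avoids nonlinear iteration; the orthogonal-function approach goes further by also ranking candidate model structures.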
A wideband channel model for land mobile satellite systems
NASA Technical Reports Server (NTRS)
Jahn, Axel; Buonomo, Sergio; Sforza, Mario; Lutz, Erich
1995-01-01
A wideband channel model for Land Mobile Satellite (LMS) services is presented which characterizes the time-varying transmission channel between a satellite and a mobile user terminal. The channel model's statistical parameters result from fitting procedures applied to measured data. The data used for fitting have a time resolution of 33 ns corresponding to a bandwidth of 30 MHz. Thus, the model is capable of characterizing the channel behaviour for a wide range of services, e.g., voice transmission, digital audio broadcasting (DAB), and spread spectrum modulation schemes. The model is presented for different environments and scenarios. The model is derived for a quasi-mobile user with a hand-held terminal in two different environments: rural and urban. The parameters needed for the description are (a) the number of echoes, (b) the distribution of the echo power, and (c) the distribution of the echo delay. It is shown that the direct path follows a Rician distribution whereas the reflected paths are Rayleigh/lognormal distributed. The parameters are given for an elevation angle of 25 deg.
Computational Software for Fitting Seismic Data to Epidemic-Type Aftershock Sequence Models
NASA Astrophysics Data System (ADS)
Chu, A.
2014-12-01
Modern earthquake catalogs are often analyzed using spatial-temporal point process models such as the epidemic-type aftershock sequence (ETAS) models of Ogata (1998). My work introduces software implementing two of the ETAS models described in Ogata (1998). To find the Maximum-Likelihood Estimates (MLEs), my software provides estimates of the homogeneous background rate parameter and the temporal and spatial parameters that govern triggering effects by applying the Expectation-Maximization (EM) algorithm introduced in Veen and Schoenberg (2008). Although other computer programs exist for similar data modeling purposes, the EM algorithm has the benefits of stability and robustness (Veen and Schoenberg, 2008). Spatial shapes that are very long and narrow cause difficulties in optimization convergence, and flat or multi-modal log-likelihood functions raise similar issues. My program uses a robust method that presets a parameter to overcome this non-convergence issue. In addition to model fitting, the software is equipped with useful tools for examining model fitting results, for example, visualization of the estimated conditional intensity and estimation of the expected number of triggered aftershocks. A simulation generator is also provided, with flexible spatial shapes that may be defined by the user. This open-source software has a very simple user interface. The user may execute it on a local computer, and the program also has potential to be hosted online. The Java language is used for the software's core computing part, and an optional interface to the statistical package R is provided.
On 4-degree-of-freedom biodynamic models of seated occupants: Lumped-parameter modeling
NASA Astrophysics Data System (ADS)
Bai, Xian-Xu; Xu, Shi-Xu; Cheng, Wei; Qian, Li-Jun
2017-08-01
It is useful to develop an effective biodynamic model of seated human occupants to help understand human exposure to transportation vehicle vibrations and to help design and improve anti-vibration devices and/or test dummies. This study proposed and demonstrated a methodology for systematically identifying the best configuration or structure of a 4-degree-of-freedom (4DOF) human vibration model and for its parameter identification. First, an equivalent simplification expression for the models was made. Second, all 23 possible structural configurations of the models were identified. Third, each of them was calibrated using the frequency response functions recommended in a biodynamic standard. An improved version of the non-dominated sorting genetic algorithm (NSGA-II), based on the Pareto optimization principle, was used to determine the model parameters. Finally, a model evaluation criterion proposed in this study, based on both the goodness of the individual curve fits and the comprehensive goodness of fit, was used to assess the models and to identify the best one. The identified top configurations were better than those reported in the literature. This methodology may also be extended and used to develop models with other DOFs.
Compact continuum brain model for human electroencephalogram
NASA Astrophysics Data System (ADS)
Kim, J. W.; Shin, H.-B.; Robinson, P. A.
2007-12-01
A low-dimensional, compact brain model has recently been developed based on physiologically based mean-field continuum formulation of electric activity of the brain. The essential feature of the new compact model is a second order time-delayed differential equation that has physiologically plausible terms, such as rapid corticocortical feedback and delayed feedback via extracortical pathways. Due to its compact form, the model facilitates insight into complex brain dynamics via standard linear and nonlinear techniques. The model successfully reproduces many features of previous models and experiments. For example, experimentally observed typical rhythms of electroencephalogram (EEG) signals are reproduced in a physiologically plausible parameter region. In the nonlinear regime, onsets of seizures, which often develop into limit cycles, are illustrated by modulating model parameters. It is also shown that a hysteresis can occur when the system has multiple attractors. As a further illustration of this approach, power spectra of the model are fitted to those of sleep EEGs of two subjects (one with apnea, the other with narcolepsy). The model parameters obtained from the fittings show good matches with previous literature. Our results suggest that the compact model can provide a theoretical basis for analyzing complex EEG signals.
Predicting future protection of respirator users: Statistical approaches and practical implications.
Hu, Chengcheng; Harber, Philip; Su, Jing
2016-01-01
The purpose of this article is to describe a statistical approach for predicting a respirator user's fit factor in the future based upon results from initial tests. A statistical prediction model was developed based upon joint distribution of multiple fit factor measurements over time obtained from linear mixed effect models. The model accounts for within-subject correlation as well as short-term (within one day) and longer-term variability. As an example of applying this approach, model parameters were estimated from a research study in which volunteers were trained by three different modalities to use one of two types of respirators. They underwent two quantitative fit tests at the initial session and two on the same day approximately six months later. The fitted models demonstrated correlation and gave the estimated distribution of future fit test results conditional on past results for an individual worker. This approach can be applied to establishing a criterion value for passing an initial fit test to provide reasonable likelihood that a worker will be adequately protected in the future; and to optimizing the repeat fit factor test intervals individually for each user for cost-effective testing.
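The conditional-prediction idea can be sketched under a simple random-intercept model on log fit factors; the article's model additionally separates short- and longer-term variability, and all variance components and thresholds below are hypothetical, not the study's estimates:

```python
import numpy as np
from scipy.stats import norm

# Random-intercept sketch: log(FF_ij) = mu + b_i + e_ij, with
# b_i ~ N(0, s_b^2) between workers and e_ij ~ N(0, s_e^2) within.
mu, s_b, s_e = np.log(200.0), 0.6, 0.4   # hypothetical parameters
s2 = s_b**2 + s_e**2
icc = s_b**2 / s2                        # within-subject correlation

def future_pass_prob(ff_initial, ff_required=100.0):
    """P(future fit factor >= ff_required | one initial measurement)."""
    x = np.log(ff_initial)
    cond_mean = mu + icc * (x - mu)      # shrink toward population mean
    cond_sd = np.sqrt(s2 * (1.0 - icc**2))
    return 1.0 - norm.cdf(np.log(ff_required), cond_mean, cond_sd)
```

A criterion value for the initial test can then be chosen as the smallest initial fit factor for which this probability exceeds a target, say 0.95, which is the kind of decision rule the abstract describes.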
Modelling the firing pattern of bullfrog vestibular neurons responding to naturalistic stimuli
NASA Technical Reports Server (NTRS)
Paulin, M. G.; Hoffman, L. F.
1999-01-01
We have developed a neural system identification method for fitting models to stimulus-response data, where the response is a spike train. The method involves using a general nonlinear optimisation procedure to fit models in the time domain. We have applied the method to model bullfrog semicircular canal afferent neuron responses during naturalistic, broad-band head rotations. These neurons respond in diverse ways, but a simple four-parameter class of models elegantly accounts for the various types of responses observed. Copyright © 1999 Elsevier Science B.V. All rights reserved.
HDDM: Hierarchical Bayesian estimation of the Drift-Diffusion Model in Python.
Wiecki, Thomas V; Sofer, Imri; Frank, Michael J
2013-01-01
The diffusion model is a commonly used tool to infer latent psychological processes underlying decision-making, and to link them to neural mechanisms based on response times. Although efficient open source software has been made available to quantitatively fit the model to data, current estimation methods require an abundance of response time measurements to recover meaningful parameters, and only provide point estimates of each parameter. In contrast, hierarchical Bayesian parameter estimation methods are useful for enhancing statistical power, allowing for simultaneous estimation of individual subject parameters and the group distribution that they are drawn from, while also providing measures of uncertainty in these parameters in the posterior distribution. Here, we present a novel Python-based toolbox called HDDM (hierarchical drift diffusion model), which allows fast and flexible estimation of the drift-diffusion model and the related linear ballistic accumulator model. HDDM requires fewer data per subject/condition than non-hierarchical methods, allows for full Bayesian data analysis, and can handle outliers in the data. Finally, HDDM supports the estimation of how trial-by-trial measurements (e.g., fMRI) influence decision-making parameters. This paper will first describe the theoretical background of the drift-diffusion model and Bayesian inference. We then illustrate usage of the toolbox on a real-world data set from our lab. Finally, parameter recovery studies show that HDDM beats alternative fitting methods like the χ²-quantile method as well as maximum likelihood estimation. The software and documentation can be downloaded at: http://ski.clps.brown.edu/hddm_docs/
NASA Astrophysics Data System (ADS)
Kelleher, Christa; McGlynn, Brian; Wagener, Thorsten
2017-07-01
Distributed catchment models are widely used tools for predicting hydrologic behavior. While distributed models require many parameters to describe a system, they are expected to simulate behavior that is more consistent with observed processes. However, obtaining a single set of acceptable parameters can be problematic, as parameter equifinality often results in several behavioral sets that fit observations (typically streamflow). In this study, we investigate the extent to which equifinality impacts a typical distributed modeling application. We outline a hierarchical approach to reduce the number of behavioral sets based on regional, observation-driven, and expert-knowledge-based constraints. For our application, we explore how each of these constraint classes reduced the number of behavioral parameter sets and altered distributions of spatiotemporal simulations, simulating a well-studied headwater catchment, Stringer Creek, Montana, using the distributed hydrology-soil-vegetation model (DHSVM). As a demonstrative exercise, we investigated model performance across 10 000 parameter sets. Constraints on regional signatures, the hydrograph, and two internal measurements of snow water equivalent time series reduced the number of behavioral parameter sets but still left a small number with similar goodness of fit. This subset was ultimately further reduced by incorporating pattern expectations of groundwater table depth across the catchment. Our results suggest that utilizing a hierarchical approach based on regional datasets, observations, and expert knowledge to identify behavioral parameter sets can reduce equifinality and bolster more careful application and simulation of spatiotemporal processes via distributed modeling at the catchment scale.
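The hierarchical screening of Monte Carlo parameter sets can be sketched as successive boolean filters; the constraint names and thresholds below are illustrative stand-ins, not DHSVM parameters or the study's actual criteria:

```python
import numpy as np

# Toy hierarchical screening of Monte Carlo parameter sets: apply
# regional, observation-based, and expert-knowledge constraints in turn
# and watch the behavioral pool shrink. All quantities are synthetic.
rng = np.random.default_rng(7)
n = 10_000
params = {
    "runoff_ratio": rng.uniform(0.1, 0.9, n),
    "nse_streamflow": rng.uniform(-1.0, 1.0, n),   # stand-in model skill
    "swe_rmse_mm": rng.uniform(0.0, 200.0, n),
    "wt_pattern_ok": rng.random(n) < 0.3,          # expert-knowledge check
}

behavioral = np.ones(n, dtype=bool)
for name, keep in [
    ("regional", params["runoff_ratio"] < 0.5),    # regional signature
    ("hydrograph", params["nse_streamflow"] > 0.7),# streamflow fit
    ("snow", params["swe_rmse_mm"] < 50.0),        # internal SWE obs
    ("expert", params["wt_pattern_ok"]),           # water-table pattern
]:
    behavioral &= keep
    print(f"after {name:10s} constraint: {behavioral.sum():5d} sets remain")
```

Ordering the cheap, coarse constraints first mirrors the paper's hierarchy: each added information source prunes the equifinal pool further without re-running the model.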
Gaining insight into the T_2^*-T_2 relationship in surface NMR free-induction decay measurements
NASA Astrophysics Data System (ADS)
Grombacher, Denys; Auken, Esben
2018-05-01
One of the primary shortcomings of the surface nuclear magnetic resonance (NMR) free-induction decay (FID) measurement is the uncertainty surrounding which mechanism controls the signal's time dependence. Ideally, the FID-estimated relaxation time T_2^* that describes the signal's decay carries an intimate link to the geometry of the pore space. In this limit the parameter T_2^* is closely linked to a related parameter T_2, which is more closely linked to pore geometry. If T_2^* ≈ T_2, the FID can provide valuable insight into relative pore size and can be used to make quantitative permeability estimates. However, given only FID measurements it is difficult to determine whether T_2^* is linked to pore geometry or whether it has been strongly influenced by background magnetic field inhomogeneity. If the link between an observed T_2^* and the underlying T_2 could be further constrained, the utility of the standard surface NMR FID measurement would be greatly improved. We hypothesize that an approach employing an updated surface NMR forward model that solves the full Bloch equations with appropriately weighted relaxation terms can be used to help constrain the T_2^*-T_2 relationship. Weighting the relaxation terms requires estimating the poorly constrained parameters T_2 and T_1; to deal with this uncertainty we propose to conduct a parameter search involving multiple inversions that employ a suite of forward models, each describing a distinct but plausible T_2^*-T_2 relationship. We hypothesize that forward models given poor T_2 estimates will produce poor data fits when using the complex inversion, while forward models given reliable T_2 estimates will produce satisfactory data fits. By examining the data fits produced by the suite of plausible forward models, the likely T_2^*-T_2 relationship can be constrained by identifying the range of T_2 estimates that produce reliable data fits. Synthetic and field results are presented to investigate the feasibility of the proposed technique.
A Study of the Optimal Model of the Flotation Kinetics of Copper Slag from Copper Mine BOR
NASA Astrophysics Data System (ADS)
Stanojlović, Rodoljub D.; Sokolović, Jovica M.
2014-10-01
In this study the effect of mixtures of copper slag and flotation tailings from the copper mine Bor, Serbia on the flotation results of copper recovery and flotation kinetics parameters in a batch flotation cell has been investigated. By simultaneously adding old flotation tailings to the ball mill at a rate of 9%, it is possible to increase copper recovery by about 20%. These results are compared with the copper recovery obtained from pure copper slag. The results of the batch flotation tests were fitted with MATLAB software for modeling the first-order flotation kinetics in order to determine kinetics parameters and define an optimal model of the flotation kinetics. Six kinetic models were tested against the batch flotation copper recovery as a function of flotation time. All models showed good correlation; however, the modified Kelsall model provided the best fit.
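The simplest of the kinetic models typically compared, classical first-order kinetics R(t) = R_inf(1 - exp(-kt)), can be fitted in a few lines; the recovery data below are synthetic, not the Bor copper slag measurements:

```python
import numpy as np
from scipy.optimize import curve_fit

# Classical first-order flotation kinetics: cumulative recovery
# R(t) = R_inf * (1 - exp(-k*t)). Data below are illustrative only.
def first_order(t, R_inf, k):
    return R_inf * (1.0 - np.exp(-k * t))

t = np.array([0.5, 1.0, 2.0, 4.0, 8.0, 12.0, 16.0])       # time, min
R = np.array([18.0, 31.0, 48.0, 63.0, 72.0, 74.0, 75.0])  # recovery, %

popt, pcov = curve_fit(first_order, t, R, p0=(80.0, 0.5))
R_inf, k = popt
```

The modified Kelsall model that fit best in the study extends this form by splitting recovery into fast- and slow-floating fractions, each with its own rate constant, at the cost of two additional parameters.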
GROWTH AND INEQUALITY: MODEL EVALUATION BASED ON AN ESTIMATION-CALIBRATION STRATEGY
Jeong, Hyeok; Townsend, Robert
2010-01-01
This paper evaluates two well-known models of growth with inequality that have explicit micro underpinnings related to household choice. With incomplete markets or transactions costs, wealth can constrain investment in business and the choice of occupation and also constrain the timing of entry into the formal financial sector. Using the Thai Socio-Economic Survey (SES), we estimate the distribution of wealth and the key parameters that best fit cross-sectional data on household choices and wealth. We then simulate the model economies for two decades at the estimated initial wealth distribution and analyze whether the model economies at those micro-fit parameter estimates can explain the observed macro and sectoral aspects of income growth and inequality change. Both models capture important features of Thai reality. Anomalies and comparisons across the two distinct models yield specific suggestions for improved research on the micro foundations of growth and inequality. PMID:20448833
NASA Astrophysics Data System (ADS)
Mayotte, Jean-Marc; Grabs, Thomas; Sutliff-Johansson, Stacy; Bishop, Kevin
2017-06-01
This study examined how the inactivation of bacteriophage MS2 in water was affected by ionic strength (IS) and dissolved organic carbon (DOC) using static batch inactivation experiments at 4 °C conducted over a period of 2 months. Experimental conditions were characteristic of an operational managed aquifer recharge (MAR) scheme in Uppsala, Sweden. Experimental data were fit with constant and time-dependent inactivation models using two methods: (1) traditional linear and nonlinear least-squares techniques; and (2) a Monte-Carlo based parameter estimation technique called generalized likelihood uncertainty estimation (GLUE). The least-squares and GLUE methodologies gave very similar estimates of the model parameters and their uncertainty. This demonstrates that GLUE is a viable alternative to traditional least-squares parameter estimation techniques for fitting virus inactivation models. Results showed a slight increase in constant inactivation rates with increasing DOC concentration, suggesting that the presence of organic carbon enhanced the inactivation of MS2. The experiment with a high IS and a low DOC was the only one that suggested MS2 inactivation may have been time-dependent. However, results from the GLUE methodology indicated that models of constant inactivation were able to describe all of the experiments. This suggests that inactivation time series longer than 2 months are needed to draw firm conclusions regarding the time-dependence of MS2 inactivation at 4 °C under these experimental conditions.
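Schematically, GLUE draws Monte-Carlo samples of the parameters, scores each sample against the data with a likelihood measure, and keeps the "behavioral" sets above a threshold; the spread of the retained ensemble expresses parameter uncertainty. A minimal sketch with a fabricated first-order inactivation curve (all numbers invented, not the study's data):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical normalized virus concentrations for first-order inactivation
# C(t) = exp(-k*t) with true k = 0.05 per day, plus small measurement noise.
t_obs = np.array([0.0, 10.0, 20.0, 30.0, 40.0, 50.0, 60.0])
c_obs = np.exp(-0.05 * t_obs) + rng.normal(0.0, 0.01, t_obs.size)

# GLUE: sample candidate rate constants from a uniform prior, score each
# with a Nash-Sutcliffe-style efficiency, keep the "behavioral" sets.
k_samples = rng.uniform(0.0, 0.2, 5000)
sse = np.array([float(np.sum((c_obs - np.exp(-k * t_obs)) ** 2))
                for k in k_samples])
efficiency = 1.0 - sse / np.sum((c_obs - c_obs.mean()) ** 2)
behavioral = k_samples[efficiency > 0.95]

# The retained ensemble expresses parameter uncertainty.
k_median = float(np.median(behavioral))
k_lo, k_hi = np.percentile(behavioral, [2.5, 97.5])
```

The efficiency threshold (0.95 here) is a subjective choice in GLUE; the interval [k_lo, k_hi] plays the role that confidence intervals play in least-squares fitting.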
NASA Astrophysics Data System (ADS)
Yan, X. L.; Coetsee, E.; Wang, J. Y.; Swart, H. C.; Terblans, J. J.
2017-07-01
Polycrystalline Ni/Cu multilayer thin films consisting of eight alternating layers of Ni and Cu were deposited on a SiO2 substrate by means of electron beam evaporation in high vacuum. Concentration-depth profiles of the as-deposited multilayered Ni/Cu thin films were determined with Auger electron spectroscopy (AES) in combination with Ar+ ion sputtering, under various bombardment conditions with the samples either stationary or, in some cases, rotating. The Mixing-Roughness-Information depth (MRI) model used to fit the concentration-depth profiles accounts for the interface broadening of the experimental depth profiling. The interface broadening incorporates the effects of atomic mixing, surface roughness and the information depth of the Auger electrons. The roughness values extracted from the MRI model fit of the depth profiling data agree well with those measured by atomic force microscopy (AFM). The surface roughness induced by ion sputtering during depth profiling was accordingly evaluated quantitatively from the fitted MRI parameters under both rotating and stationary sample conditions. The depth resolutions of the AES depth profiles were derived directly from the values determined by the fitting parameters in the MRI model.
Tommasino, Paolo; Campolo, Domenico
2017-01-01
A major challenge in robotics and computational neuroscience concerns the posture/movement problem in the presence of kinematic redundancy. We recently addressed this issue using a principled approach which, in conjunction with nonlinear inverse optimization, allowed capturing postural strategies such as Donders' law. In this work, after presenting this general model and specifying it as an extension of the Passive Motion Paradigm, we show how, once fitted to capture experimental postural strategies, the model is also able to predict movements. More specifically, the Passive Motion Paradigm embeds two main intrinsic components: joint damping and joint stiffness. In previous work we showed that joint stiffness is responsible for static postures and, in this sense, its parameters are regressed to fit experimental postural strategies. Here, we show how joint damping, in particular its anisotropy, directly affects task-space movements. Rather than using damping parameters to fit task-space motions a posteriori, we make the a priori hypothesis that damping is proportional to stiffness. Remarkably, this allows a postural-fitted model to also capture dynamic features such as the curvature and hysteresis of task-space trajectories during wrist pointing tasks, confirming and extending previous findings in the literature. PMID:29249954
White, L J; Mandl, J N; Gomes, M G M; Bodley-Tickell, A T; Cane, P A; Perez-Brena, P; Aguilar, J C; Siqueira, M M; Portes, S A; Straliotto, S M; Waris, M; Nokes, D J; Medley, G F
2007-09-01
The nature and role of re-infection and partial immunity are likely to be important determinants of the transmission dynamics of human respiratory syncytial virus (hRSV). We propose a single model structure that captures four possible host responses to infection and subsequent reinfection: partial susceptibility, altered infection duration, reduced infectiousness and temporary immunity (which might be partial). The magnitude of these responses is determined by four homotopy parameters, and by setting some of these parameters to extreme values we generate a set of eight nested, deterministic transmission models. In order to investigate hRSV transmission dynamics, we applied these models to incidence data from eight international locations. Seasonality is included as cyclic variation in transmission. Parameters associated with the natural history of the infection were assumed to be independent of geographic location, while others, such as those associated with seasonality, were assumed location specific. Models incorporating either of the two extreme assumptions for immunity (none or solid and lifelong) were unable to reproduce the observed dynamics. Model fits with either waning or partial immunity to disease or both were visually comparable. The best fitting structure was a lifelong partial immunity to both disease and infection. Observed patterns were reproduced by stochastic simulations using the parameter values estimated from the deterministic models.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Event generator tunes obtained from underlying event and multiparton scattering measurements
Khachatryan, V.; Sirunyan, A. M.; Tumasyan, A.; ...
2016-03-17
Here, new sets of parameters (“tunes”) for the underlying-event (UE) modelling of the pythia8, pythia6 and herwig++ Monte Carlo event generators are constructed using different parton distribution functions. Combined fits to CMS UE proton–proton ($\mathrm{p}\mathrm{p}$) data at $\sqrt{s} = 7\,\text{TeV}$ and to UE proton–antiproton ($\mathrm{p}\overline{\mathrm{p}}$) data from the CDF experiment at lower $\sqrt{s}$ are used to study the UE models and constrain their parameters, thereby providing improved predictions for proton–proton collisions at 13 TeV. In addition, it is investigated whether the values of the parameters obtained from fits to UE observables are consistent with the values determined from fitting observables sensitive to double-parton scattering processes. Finally, comparisons are presented of the UE tunes to “minimum bias” (MB) events, multijet, and Drell–Yan ($\mathrm{q}\overline{\mathrm{q}} \rightarrow \mathrm{Z}/\gamma^* \rightarrow$ lepton-antilepton + jets) observables at 7 and 8 TeV, as well as predictions for MB and UE observables at 13 TeV.
Modeling multilayer x-ray reflectivity using genetic algorithms
NASA Astrophysics Data System (ADS)
Sánchez del Río, M.; Pareschi, G.; Michetschläger, C.
2000-06-01
The x-ray reflectivity of a multilayer is a non-linear function of many parameters (materials, layer thickness, density, roughness). Non-linear fitting of experimental data with simulations requires the use of initial values sufficiently close to the optimum. This is a difficult task when the topology of the variable space is highly structured. We apply global optimization methods to fit multilayer reflectivity. Genetic algorithms are stochastic methods based on the model of natural evolution: the improvement of a population along successive generations. A complete set of initial parameters constitutes an individual; the population is a collection of individuals. Each generation is built from the parent generation by applying operators (selection, crossover, mutation, etc.) to the members of the parent generation. The pressure of selection drives the population to include "good" individuals. After a large number of generations, the best individuals approximate the optimum parameters. Some results on fitting experimental hard x-ray reflectivity data for Ni/C and W/Si multilayers using genetic algorithms are presented. This method can also be applied to design multilayers optimized for a target application.
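The selection/crossover/mutation/elitism loop described above can be sketched in a few lines. This is a toy two-parameter exponential model standing in for a full multilayer parameter set — the objective, data, and GA settings are all invented for illustration, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy "experimental" curve with two unknown parameters: y = a * exp(-b*x).
x = np.linspace(0.0, 5.0, 30)
y_obs = 2.0 * np.exp(-0.7 * x)

def misfit(p):
    a, b = p
    return float(np.sum((y_obs - a * np.exp(-b * x)) ** 2))

def genetic_minimize(f, lo, hi, n_pop=60, n_gen=150, sigma=0.05):
    dim = lo.size
    pop = rng.uniform(lo, hi, (n_pop, dim))        # initial population
    for _ in range(n_gen):
        fit = np.array([f(ind) for ind in pop])
        order = np.argsort(fit)
        parents = pop[order[: n_pop // 2]]         # truncation selection
        children = [pop[order[0]]]                 # elitism: keep the best
        while len(children) < n_pop:
            pa, pb = parents[rng.integers(len(parents), size=2)]
            w = rng.random()
            child = w * pa + (1.0 - w) * pb        # blend crossover
            child += rng.normal(0.0, sigma, dim)   # Gaussian mutation
            children.append(np.clip(child, lo, hi))
        pop = np.array(children)
    fit = np.array([f(ind) for ind in pop])
    return pop[np.argmin(fit)], float(fit.min())

best, best_val = genetic_minimize(misfit, np.zeros(2), np.full(2, 5.0))
```

Because no gradient is used and the population explores the bounds globally, the method does not require an initial guess close to the optimum, which is exactly the property the abstract motivates.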
Jafari, Ramin; Chhabra, Shalini; Prince, Martin R; Wang, Yi; Spincemaille, Pascal
2018-04-01
We propose an efficient algorithm to perform dual-input compartment modeling for generating perfusion maps in the liver. We implemented whole field-of-view linear least squares (LLS) to fit a delay-compensated dual-input single-compartment model to very high temporal resolution (four frames per second) contrast-enhanced 3D liver data, to calculate kinetic parameter maps. Using simulated data and experimental data in healthy subjects and patients, whole-field LLS was compared with the conventional voxel-wise nonlinear least-squares (NLLS) approach in terms of accuracy, performance, and computation time. Simulations showed good agreement between LLS and NLLS for a range of kinetic parameters. The whole-field LLS method allowed generating liver perfusion maps approximately 160-fold faster than voxel-wise NLLS, while obtaining similar perfusion parameters. Delay-compensated dual-input liver perfusion analysis using whole-field LLS allows generating perfusion maps with a considerable speedup compared with conventional voxel-wise NLLS fitting. Magn Reson Med 79:2415-2421, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
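The trick that makes whole-field LLS possible is that integrating a compartment ODE turns it into a model that is linear in the kinetic parameters, so every voxel reduces to a tiny linear solve that can be batched. A sketch for a plain single-input one-compartment model (a simplification of the paper's delay-compensated dual-input model; all curves below are synthetic):

```python
import numpy as np

rng = np.random.default_rng(2)

def cumtrapz(y, dt):
    """Cumulative trapezoidal integral along the last axis, starting at 0."""
    out = np.zeros_like(y)
    out[..., 1:] = np.cumsum((y[..., 1:] + y[..., :-1]) * (dt / 2.0), axis=-1)
    return out

dt = 0.05
t = np.arange(0.0, 60.0, dt)
ca = t * np.exp(-t / 8.0)                    # synthetic arterial input

n_vox = 500
k1_true = rng.uniform(0.05, 0.30, n_vox)
k2_true = rng.uniform(0.02, 0.20, n_vox)

# Tissue curves from dC/dt = k1*Ca(t) - k2*C(t), Euler-integrated per voxel.
c = np.zeros((n_vox, t.size))
for i in range(1, t.size):
    c[:, i] = c[:, i - 1] + dt * (k1_true * ca[i - 1] - k2_true * c[:, i - 1])

# Integrating the ODE gives C(t) = k1*int(Ca) - k2*int(C): linear in (k1, k2),
# so all voxels reduce to 2x2 normal-equation solves, done in one batch.
ica = cumtrapz(ca, dt)                       # shared regressor, shape (T,)
ic = cumtrapz(c, dt)                         # per-voxel regressor, (n_vox, T)
a11 = ica @ ica
a12 = -(ic @ ica)
a22 = np.sum(ic * ic, axis=1)
b1 = c @ ica
b2 = -np.sum(ic * c, axis=1)
det = a11 * a22 - a12 ** 2
k1_est = (a22 * b1 - a12 * b2) / det
k2_est = (a11 * b2 - a12 * b1) / det
```

Because the whole fit is a handful of vectorized array operations instead of an iterative optimizer per voxel, the speedup over voxel-wise NLLS reported in the abstract is plausible even in this toy setting.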
Duarte, Adam; Adams, Michael J.; Peterson, James T.
2018-01-01
Monitoring animal populations is central to wildlife and fisheries management, and the use of N-mixture models toward these efforts has markedly increased in recent years. Nevertheless, relatively little work has evaluated estimator performance when basic assumptions are violated. Moreover, diagnostics to identify when bias in parameter estimates from N-mixture models is likely are largely unexplored. We simulated count data sets using 837 combinations of detection probability, number of sample units, number of survey occasions, and type and extent of heterogeneity in abundance or detectability. We fit Poisson N-mixture models to these data, quantified the bias associated with each combination, and evaluated whether the parametric bootstrap goodness-of-fit (GOF) test can be used to indicate bias in parameter estimates. We also explored whether assumption violations can be diagnosed prior to fitting N-mixture models. In doing so, we propose a new model diagnostic, which we term the quasi-coefficient of variation (QCV). N-mixture models performed well when assumptions were met and detection probabilities were moderate (i.e., ≥0.3), and the performance of the estimator improved with increasing survey occasions and sample units. However, the magnitude of bias in estimated mean abundance with even slight amounts of unmodeled heterogeneity was substantial. The parametric bootstrap GOF test did not perform well as a diagnostic for bias in parameter estimates when detectability and sample sizes were low. The results indicate the QCV is useful to diagnose potential bias and that potential bias associated with unidirectional trends in abundance or detectability can be diagnosed using Poisson regression. This study represents the most thorough assessment to date of assumption violations and diagnostics when fitting N-mixture models using the most commonly implemented error distribution.
Unbiased estimates of population state variables are needed to properly inform management decision making. Therefore, we also discuss alternative approaches to yield unbiased estimates of population state variables using similar data types, and we stress that there is no substitute for an effective sample design that is grounded upon well-defined management objectives.
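For context, the Poisson N-mixture data-generating process and a brute-force maximum-likelihood fit can be sketched as follows. This is a toy simulation with invented settings, not the authors' 837-scenario design, and the QCV diagnostic itself is defined only in the paper:

```python
import numpy as np
from math import lgamma

rng = np.random.default_rng(3)

# Simulate counts: latent abundance N_i ~ Poisson(lam); observed counts
# y_it ~ Binomial(N_i, p) over T repeat surveys at R sample units.
lam_true, p_true, R, T = 5.0, 0.4, 200, 4
N = rng.poisson(lam_true, R)
y = rng.binomial(N[:, None], p_true, (R, T))

# The marginal likelihood sums the latent N out up to a truncation n_max.
n_max = 40
n = np.arange(n_max + 1)
lfact = np.array([lgamma(k + 1.0) for k in range(n_max + 1)])
yy = y[:, :, None]                            # (R, T, 1)
nn = n[None, None, :]                         # (1, 1, K)
valid = nn >= yy
d = np.where(valid, nn - yy, 0)
log_comb = lfact[nn] - lfact[yy] - lfact[d]   # log C(n, y), safe everywhere

def loglik(lam, p):
    lpois = n * np.log(lam) - lam - lfact                    # (K,)
    lbin = log_comb + yy * np.log(p) + d * np.log(1.0 - p)   # (R, T, K)
    site = lpois[None, :] + np.where(valid, lbin, -np.inf).sum(axis=1)
    m = site.max(axis=1, keepdims=True)                      # log-sum-exp
    return float(np.sum(m[:, 0] + np.log(np.exp(site - m).sum(axis=1))))

# Brute-force grid MLE (real analyses use numerical optimizers instead).
lams = np.linspace(3.0, 8.0, 26)
ps = np.linspace(0.2, 0.6, 21)
ll = np.array([[loglik(l, q) for q in ps] for l in lams])
i, j = np.unravel_index(np.argmax(ll), ll.shape)
lam_hat, p_hat = lams[i], ps[j]
```

With moderate detectability (p = 0.4, as in the "well-behaved" regime the abstract describes) and no unmodeled heterogeneity, the joint MLE recovers abundance and detection reasonably well; the paper's point is how quickly this degrades once heterogeneity enters.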
de Monchy, Romain; Rouyer, Julien; Destrempes, François; Chayer, Boris; Cloutier, Guy; Franceschini, Emilie
2018-04-01
Quantitative ultrasound techniques based on the backscatter coefficient (BSC) have been commonly used to characterize red blood cell (RBC) aggregation. Specifically, a scattering model is fitted to measured BSC and estimated parameters can provide a meaningful description of the RBC aggregates' structure (i.e., aggregate size and compactness). In most cases, scattering models assumed monodisperse RBC aggregates. This study proposes the Effective Medium Theory combined with the polydisperse Structure Factor Model (EMTSFM) to incorporate the polydispersity of aggregate size. From the measured BSC, this model allows estimating three structural parameters: the mean radius of the aggregate size distribution, the width of the distribution, and the compactness of the aggregates. Two successive experiments were conducted: a first experiment on blood sheared in a Couette flow device coupled with an ultrasonic probe, and a second experiment, on the same blood sample, sheared in a plane-plane rheometer coupled to a light microscope. Results demonstrated that the polydisperse EMTSFM provided the best fit to the BSC data when compared to the classical monodisperse models for the higher levels of aggregation at hematocrits between 10% and 40%. Fitting the polydisperse model yielded aggregate size distributions that were consistent with direct light microscope observations at low hematocrits.
Hagell, Peter; Westergren, Albert
Sample size is a major factor in statistical null hypothesis testing, which is the basis for many approaches to testing Rasch model fit. Few sample size recommendations for testing fit to the Rasch model concern the Rasch Unidimensional Measurement Models (RUMM) software, which features chi-square and ANOVA/F-ratio based fit statistics, including Bonferroni and algebraic sample size adjustments. This paper explores the occurrence of Type I errors with RUMM fit statistics, and the effects of algebraic sample size adjustments. Simulated Rasch-model-fitting data for 25-item dichotomous scales with sample sizes ranging from N = 50 to N = 2500 were analysed with and without algebraically adjusted sample sizes. Results suggest the occurrence of Type I errors with N ≤ 500, and that Bonferroni correction as well as downward algebraic sample size adjustment are useful to avoid such errors, whereas upward adjustment of smaller samples falsely signals misfit. Our observations suggest that sample sizes around N = 250 to N = 500 may provide a good balance for the statistical interpretation of the RUMM fit statistics studied here with respect to Type I errors, under the assumption of Rasch model fit within the examined frame of reference (i.e., about 25 item parameters well targeted to the sample).
Advanced approach to the analysis of a series of in-situ nuclear forward scattering experiments
NASA Astrophysics Data System (ADS)
Vrba, Vlastimil; Procházka, Vít; Smrčka, David; Miglierini, Marcel
2017-03-01
This study introduces a sequential fitting procedure as a specific approach to nuclear forward scattering (NFS) data evaluation. The principles and usage of this advanced evaluation method are described in detail, and its utilization is demonstrated on NFS in-situ investigations of fast processes. Such experiments frequently consist of hundreds of time spectra which need to be evaluated. The introduced procedure allows the analysis of these experiments and significantly decreases the time needed for the data evaluation. The key contributions of the study are the sequential use of the output fitting parameters of a previous data set as the input parameters for the next data set, and the option to cross-check model suitability by applying the procedure in the ascending and descending directions of the data sets. The described fitting methodology is beneficial for checking model validity and the reliability of the obtained results.
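Stripped to a toy one-parameter decay fit (real NFS time spectra are far richer), the sequential idea looks like this: each data set's best-fit parameter seeds the local fit of the next data set, and the pass is repeated in the descending direction as a cross-check. All rates, grids, and the fitter below are invented for illustration:

```python
import numpy as np

def local_fit(t, y, k0, n_iter=200, lr=0.02):
    """Toy local fitter: gradient descent on the misfit of y = exp(-k*t).
    Like any local optimizer, it relies on a starting value near the optimum."""
    k = k0
    for _ in range(n_iter):
        r = np.exp(-k * t) - y
        k -= lr * np.sum(2.0 * r * (-t) * np.exp(-k * t))
    return k

# A "series" of spectra whose decay rate drifts slowly between data sets.
t = np.linspace(0.1, 5.0, 40)
true_rates = np.linspace(0.5, 1.5, 20)
spectra = [np.exp(-k * t) for k in true_rates]

# Ascending pass: each best-fit rate seeds the fit of the next data set.
fits_up, k = [], 0.6        # 0.6 is an initial guess for the first set only
for y in spectra:
    k = local_fit(t, y, k)
    fits_up.append(k)

# Descending pass, as a cross-check of model suitability.
fits_down, k = [], 1.6
for y in reversed(spectra):
    k = local_fit(t, y, k)
    fits_down.append(k)
fits_down = fits_down[::-1]
```

Because consecutive data sets differ only slightly, the previous solution is always a good starting point, which is what makes the sequential procedure both fast and robust; disagreement between the ascending and descending passes would flag an unsuitable model.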
Luxton, Gary; Keall, Paul J; King, Christopher R
2008-01-07
To facilitate the use of biological outcome modeling for treatment planning, an exponential function is introduced as a simpler equivalent to the Lyman formula for calculating normal tissue complication probability (NTCP). The single parameter of the exponential function is chosen to reproduce the Lyman calculation to within approximately 0.3%, and thus enable easy conversion of data contained in empirical fits of Lyman parameters for organs at risk (OARs). Organ parameters for the new formula are given in terms of Lyman model m and TD(50), and conversely m and TD(50) are expressed in terms of the parameters of the new equation. The role of the Lyman volume-effect parameter n is unchanged from its role in the Lyman model. For a non-homogeneously irradiated OAR, an equation relates d(ref), n, v(eff) and the Niemierko equivalent uniform dose (EUD), where d(ref) and v(eff) are the reference dose and effective fractional volume of the Kutcher-Burman reduction algorithm (i.e. the LKB model). It follows in the LKB model that uniform EUD irradiation of an OAR results in the same NTCP as the original non-homogeneous distribution. The NTCP equation is therefore represented as a function of EUD. The inverse equation expresses EUD as a function of NTCP and is used to generate a table of EUD versus normal tissue complication probability for the Emami-Burman parameter fits as well as for OAR parameter sets from more recent data.
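For reference, the Lyman NTCP that the paper's exponential formula approximates is a cumulative Gaussian in t = (EUD − TD50)/(m·TD50). A minimal sketch with illustrative parameter values (the paper's one-parameter exponential replacement is not reproduced here, and the TD50/m numbers below are not sourced from it):

```python
import math

def lyman_ntcp(eud, td50, m):
    """Lyman NTCP: cumulative Gaussian in t = (EUD - TD50) / (m * TD50)."""
    t = (eud - td50) / (m * td50)
    return 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))

# Illustrative organ parameters: TD50 = 68 Gy, m = 0.14.
# By construction NTCP = 0.5 at EUD = TD50, rising steeply above it.
ntcp_mid = lyman_ntcp(68.0, 68.0, 0.14)
ntcp_high = lyman_ntcp(80.0, 68.0, 0.14)
```

Expressing NTCP directly as a function of EUD, as done here, is exactly the reduction the abstract describes for the LKB model: uniform irradiation at the EUD reproduces the NTCP of the original non-homogeneous dose distribution.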
Whole Protein Native Fitness Potentials
NASA Astrophysics Data System (ADS)
Faraggi, Eshel; Kloczkowski, Andrzej
2013-03-01
Protein structure prediction can be separated into two tasks: sampling the configuration space of the protein chain, and assigning a fitness between these hypothetical models and the native structure of the protein. One of the more promising developments in this area is that of knowledge-based energy functions. However, standard approaches using pair-wise interactions have shown shortcomings, demonstrated by the superiority of multi-body potentials. These shortcomings arise because residue pair-wise interactions depend on other residues along the chain. We developed a method that uses whole-protein information, filtered through machine learners, to score protein models based on their likeness to native structures. For all models we calculated parameters associated with the distance to the solvent and with distances between residues. These parameters, in addition to energy estimates obtained by using a four-body potential, DFIRE, and RWPlus, were used as training for machine learners to predict the fitness of the models. Testing on CASP 9 targets showed that our method is superior to DFIRE, RWPlus, and the four-body potential, which are considered standards in the field.
CalFitter: a web server for analysis of protein thermal denaturation data.
Mazurenko, Stanislav; Stourac, Jan; Kunka, Antonin; Nedeljkovic, Sava; Bednar, David; Prokop, Zbynek; Damborsky, Jiri
2018-05-14
Despite significant advances in the understanding of protein structure-function relationships, revealing protein folding pathways still poses a challenge due to a limited number of relevant experimental tools. Widely used experimental techniques, such as calorimetry or spectroscopy, critically depend on proper data analysis. Currently, only separate data analysis tools are available for each type of experiment, with a limited model selection. To address this problem, we have developed the CalFitter web server to be a unified platform for comprehensive data fitting and analysis of protein thermal denaturation data. The server allows simultaneous global data fitting using any combination of input data types and offers 12 protein unfolding pathway models for selection, including irreversible transitions often missing from other tools. The data fitting produces optimal parameter values, their confidence intervals, and statistical information to define unfolding pathways. The server provides an interactive and easy-to-use interface that allows users to directly analyse input datasets and simulate modelled output based on the model parameters. The CalFitter web server is available free of charge at https://loschmidt.chemi.muni.cz/calfitter/.
Birdsall, Robert E.; Koshel, Brooke M.; Hua, Yimin; Ratnayaka, Saliya N.; Wirth, Mary J.
2013-01-01
Sieving of proteins in silica colloidal crystals of mm dimensions is characterized for particle diameters of nominally 350 and 500 nm, where the colloidal crystals are chemically modified with a brush layer of polyacrylamide. A model is developed that relates the reduced electrophoretic mobility to the experimentally measurable porosity. The model fits the data with no adjustable parameters for the case of silica colloidal crystals packed in capillaries, for which independent measurements of the pore radii were made from flow data. The model also fits the data for electrophoresis in a highly ordered colloidal crystal formed in a channel, where the unknown pore radius was used as a fitting parameter. Plate heights as small as 0.4 μm point to the potential for miniaturized separations. Band broadening increases as the pore radius approaches the protein radius, indicating that the main contribution to broadening is the spatial heterogeneity of the pore radius. The results quantitatively support the notion that sieving occurs for proteins in silica colloidal crystals, and facilitate design of new separations that would benefit from miniaturization. PMID:23229163
Numerical details and SAS programs for parameter recovery of the SB distribution
Bernard R. Parresol; Teresa Fidalgo Fonseca; Carlos Pacheco Marques
2010-01-01
The four-parameter SB distribution has seen widespread use in growth-and-yield modeling because it covers a broad spectrum of shapes, fitting both positively and negatively skewed data and bimodal configurations. Two recent parameter recovery schemes, an approach whereby characteristics of a statistical distribution are equated with attributes of...
Cho, C. I.; Alam, M.; Choi, T. J.; Choy, Y. H.; Choi, J. G.; Lee, S. S.; Cho, K. H.
2016-01-01
The objectives of the study were to estimate genetic parameters for milk production traits of Holstein cattle using random regression models (RRMs), and to compare the goodness of fit of various RRMs with homogeneous and heterogeneous residual variances. A total of 126,980 test-day milk production records of the first parity Holstein cows between 2007 and 2014 from the Dairy Cattle Improvement Center of National Agricultural Cooperative Federation in South Korea were used. These records included milk yield (MILK), fat yield (FAT), protein yield (PROT), and solids-not-fat yield (SNF). The statistical models included random effects of genetic and permanent environments using Legendre polynomials (LP) of the third to fifth order (L3–L5), fixed effects of herd-test day, year-season at calving, and a fixed regression for the test-day record (third to fifth order). The residual variances in the models were either homogeneous (HOM) or heterogeneous (15 classes, HET15; 60 classes, HET60). A total of nine models (3 orders of polynomials×3 types of residual variance) including L3-HOM, L3-HET15, L3-HET60, L4-HOM, L4-HET15, L4-HET60, L5-HOM, L5-HET15, and L5-HET60 were compared using Akaike information criteria (AIC) and/or Schwarz Bayesian information criteria (BIC) statistics to identify the model(s) of best fit for their respective traits. The lowest BIC value was observed for the models L5-HET15 (MILK; PROT; SNF) and L4-HET15 (FAT), which fit the best. In general, the BIC values of HET15 models for a particular polynomial order was lower than that of the HET60 model in most cases. This implies that the orders of LP and types of residual variances affect the goodness of models. Also, the heterogeneity of residual variances should be considered for the test-day analysis. 
The heritability estimates from the best-fitted models ranged from 0.08 to 0.15 for MILK, 0.06 to 0.14 for FAT, 0.08 to 0.12 for PROT, and 0.07 to 0.13 for SNF according to days in milk of first lactation. Genetic variances for the studied traits tended to decrease during the earlier stages of lactation, followed by increases in the middle and further decreases at the end of lactation. With regard to the fitness of the models and the differential genetic parameters across the lactation stages, we could estimate genetic parameters more accurately from RRMs than from lactation models. Therefore, we suggest using RRMs in place of lactation models for national dairy cattle genetic evaluations for milk production traits in Korea. PMID:26954184
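The Legendre-polynomial covariates used in such random regression models are straightforward to construct: days in milk is standardized to [−1, 1] and the first few normalized Legendre polynomials are evaluated at that point. A sketch assuming the common sqrt((2j+1)/2) normalization and an illustrative 5-305 day lactation window (both are assumptions for illustration, not values taken from the paper):

```python
import numpy as np
from numpy.polynomial import legendre

def legendre_covariates(dim, dim_min=5.0, dim_max=305.0, order=4):
    """Covariates for one test-day record: days in milk (DIM) standardized
    to [-1, 1], then the first order+1 Legendre polynomials evaluated there,
    scaled by the common sqrt((2j+1)/2) normalization."""
    x = 2.0 * (dim - dim_min) / (dim_max - dim_min) - 1.0
    phi = np.array([legendre.legval(x, [0.0] * j + [1.0])
                    for j in range(order + 1)])
    return np.sqrt((2.0 * np.arange(order + 1) + 1.0) / 2.0) * phi
```

A third-order model (L3) uses the first four covariates, a fifth-order model (L5) the first six; each animal's random regression coefficients multiply these covariates to give its genetic deviation at any point in lactation.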
Isochrones of M67 with an Expanded Set of Parameters
NASA Astrophysics Data System (ADS)
Viani, Lucas; Basu, Sarbani
2017-10-01
We create isochrones of M67 using the Yale Rotating Stellar Evolution Code. In addition to metallicity, we vary parameters that are traditionally held fixed, such as the mixing-length parameter and the initial helium abundance. The amount of convective overshoot is also changed in different sets of isochrones. Models are constructed both with and without diffusion. From the resulting isochrones that fit the cluster, the age range is between 3.6 and 4.8 Gyr and the distance is between 755 and 868 pc. We also confirm the claim of Michaud et al. (2004) that M67 can be fit without overshoot if diffusion is included.
Spectroscopy Made Easy: A New Tool for Fitting Observations with Synthetic Spectra
NASA Technical Reports Server (NTRS)
Valenti, J. A.; Piskunov, N.
1996-01-01
We describe a new software package that may be used to determine stellar and atomic parameters by matching observed spectra with synthetic spectra generated from parameterized atmospheres. A nonlinear least squares algorithm is used to solve for any subset of allowed parameters, which include atomic data (log gf and van der Waals damping constants), model atmosphere specifications (T_eff, log g), elemental abundances, and radial, turbulent, and rotational velocities. The LTE synthesis software handles discontiguous spectral intervals and complex atomic blends. As a demonstration, we fit 26 Fe I lines in the NSO Solar Atlas (Kurucz et al.), determining various solar and atomic parameters.
Brookings, Ted; Goeritz, Marie L; Marder, Eve
2014-11-01
We describe a new technique to fit conductance-based neuron models to intracellular voltage traces from isolated biological neurons. The biological neurons are recorded in current-clamp with pink (1/f) noise injected to perturb the activity of the neuron. The new algorithm finds a set of parameters that allows a multicompartmental model neuron to match the recorded voltage trace. Attempting to match a recorded voltage trace directly has a well-known problem: mismatch in the timing of action potentials between biological and model neuron is inevitable and results in poor phenomenological match between the model and data. Our approach avoids this by applying a weak control adjustment to the model to promote alignment during the fitting procedure. This approach is closely related to the control theoretic concept of a Luenberger observer. We tested this approach on synthetic data and on data recorded from an anterior gastric receptor neuron from the stomatogastric ganglion of the crab Cancer borealis. To test the flexibility of this approach, the synthetic data were constructed with conductance models that were different from the ones used in the fitting model. For both synthetic and biological data, the resultant models had good spike-timing accuracy. Copyright © 2014 the American Physiological Society.
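The observer idea can be illustrated on a toy linear "membrane" model: a weak correction term g·(v_data − v_model) keeps the simulated trace aligned with the recording, and the control effort required serves as the fitting cost, which is smallest when the candidate parameter matches the true one. This is a schematic sketch with an invented one-parameter model, not the authors' multicompartmental conductance-based implementation:

```python
import numpy as np

def simulate(a, v0, drive, dt):
    """Forward Euler for a toy leaky 'membrane' dv/dt = -a*v + drive(t)."""
    v = np.empty(drive.size)
    v[0] = v0
    for i in range(1, drive.size):
        v[i] = v[i - 1] + dt * (-a * v[i - 1] + drive[i - 1])
    return v

def observer_cost(a, v_data, drive, dt, gain=5.0):
    """Simulate with a weak Luenberger-style correction u = g*(data - model)
    and score the candidate parameter by the control effort it needs."""
    v = np.empty(v_data.size)
    v[0] = v_data[0]
    effort = 0.0
    for i in range(1, v_data.size):
        u = gain * (v_data[i - 1] - v[i - 1])
        v[i] = v[i - 1] + dt * (-a * v[i - 1] + drive[i - 1] + u)
        effort += u * u
    return effort / v_data.size

dt = 0.001
t = np.arange(0.0, 2.0, dt)
drive = np.sin(2 * np.pi * t) + 1.0
v_data = simulate(0.8, 0.0, drive, dt)   # "recording" with true a = 0.8

# Control effort is minimized at the correct candidate parameter.
costs = {a: observer_cost(a, v_data, drive, dt) for a in (0.4, 0.8, 1.2)}
```

Because the correction keeps the model trajectory locked to the data, small timing mismatches cannot snowball into the large misfit that plagues direct trace matching — the same motivation the abstract gives for applying the weak control adjustment during spike-timing-sensitive fits.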
Meirovitch, Eva; Shapiro, Yury E.; Polimeno, Antonino; Freed, Jack H.
2009-01-01
¹⁵N-¹H spin relaxation is a powerful method for deriving information on protein dynamics. The traditional method of data analysis is model-free (MF), where the global and local N-H motions are independent and the local geometry is simplified. The common MF analysis consists of fitting single-field data. The results are typically field-dependent, and multi-field data cannot be fit with standard fitting schemes. Cases where known functional dynamics has not been detected by MF were identified by us and others. Recently we applied to spin relaxation in proteins the Slowly Relaxing Local Structure (SRLS) approach, which accounts rigorously for mode-mixing and general features of local geometry. SRLS was shown to yield MF in appropriate asymptotic limits. We found that the experimental spectral density corresponds quite well to the SRLS spectral density. The MF formulae are often used outside of their validity ranges, allowing small data sets to be force-fitted with good statistics but inaccurate best-fit parameters. This paper focuses on the mechanism of force-fitting and its implications. It is shown that MF force-fits the experimental data because mode-mixing, the rhombic symmetry of the local ordering and general features of local geometry are not accounted for. Combined multi-field multi-temperature data analyzed by MF may lead to the detection of incorrect phenomena, while conformational entropy derived from MF order parameters may be highly inaccurate. On the other hand, fitting to more appropriate models can yield consistent physically insightful information. This requires that the complexity of the theoretical spectral densities matches the integrity of the experimental data. As shown herein, the SRLS densities comply with this requirement. PMID:16821820
Validation and uncertainty analysis of a pre-treatment 2D dose prediction model
NASA Astrophysics Data System (ADS)
Baeza, Jose A.; Wolfs, Cecile J. A.; Nijsten, Sebastiaan M. J. J. G.; Verhaegen, Frank
2018-02-01
Independent verification of complex treatment delivery with megavolt photon beam radiotherapy (RT) has been effectively used to detect and prevent errors. This work presents the validation and uncertainty analysis of a model that predicts 2D portal dose images (PDIs) without a patient or phantom in the beam. The prediction model is based on an exponential point dose model with separable primary and secondary photon fluence components. The model includes a scatter kernel, off-axis ratio map, transmission values and penumbra kernels for beam-delimiting components. These parameters were derived through a model fitting procedure supplied with point dose and dose profile measurements of radiation fields. The model was validated against a treatment planning system (TPS; Eclipse) and radiochromic film measurements for complex clinical scenarios, including volumetric modulated arc therapy (VMAT). Confidence limits on fitted model parameters were calculated based on simulated measurements. A sensitivity analysis was performed to evaluate the effect of the parameter uncertainties on the model output. For the maximum uncertainty, the maximum deviating measurement sets were propagated through the fitting procedure and the model. The overall uncertainty was assessed using all simulated measurements. The validation of the prediction model against the TPS and the film showed a good agreement, with on average 90.8% and 90.5% of pixels passing a (2%,2 mm) global gamma analysis respectively, with a low dose threshold of 10%. The maximum and overall uncertainty of the model is dependent on the type of clinical plan used as input. The results can be used to study the robustness of the model. A model for predicting accurate 2D pre-treatment PDIs in complex RT scenarios can be used clinically and its uncertainties can be taken into account.
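The (2%, 2 mm) global gamma criterion used for the validation above can be made concrete with a minimal 1D example (the paper evaluates 2D portal dose images; the profiles, criterion values, and 10% low-dose threshold below follow the text, but the dose data are synthetic).

```python
import numpy as np

x = np.arange(0.0, 100.0, 1.0)                            # positions in mm
ref = 100.0 * np.exp(-0.5 * ((x - 50.0) / 15.0) ** 2)     # reference dose profile
eval_ = ref * 1.01 + 0.3                                  # perturbed evaluation dose

dd = 0.02 * ref.max()          # 2% global dose-difference criterion
dta = 2.0                      # 2 mm distance-to-agreement criterion
thresh = 0.10 * ref.max()      # 10% low-dose threshold

def gamma_index(i):
    # gamma = minimum over all evaluated points of the combined metric
    return np.min(np.sqrt(((eval_ - ref[i]) / dd) ** 2 + ((x - x[i]) / dta) ** 2))

mask = ref >= thresh
gammas = np.array([gamma_index(i) for i in np.nonzero(mask)[0]])
pass_rate = 100.0 * np.mean(gammas <= 1.0)
```

A point passes when some nearby evaluated dose is within the combined dose/distance tolerance (gamma ≤ 1); the pass rate over the thresholded region is the figure-of-merit quoted in the abstract.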
An Algorithm and R Program for Fitting and Simulation of Pharmacokinetic and Pharmacodynamic Data.
Li, Jijie; Yan, Kewei; Hou, Lisha; Du, Xudong; Zhu, Ping; Zheng, Li; Zhu, Cairong
2017-06-01
Pharmacokinetic/pharmacodynamic link models are widely used in dose-finding studies. By applying such models, the results of initial pharmacokinetic/pharmacodynamic studies can be used to predict the potential therapeutic dose range. This knowledge can improve the design of later comparative large-scale clinical trials by reducing the number of participants and saving time and resources. However, the modeling process can be challenging, time consuming, and costly, even when using cutting-edge, powerful pharmacological software. Here, we provide a freely available R program for expediently analyzing pharmacokinetic/pharmacodynamic data, including data importation, parameter estimation, simulation, and model diagnostics. First, we explain the theory related to the establishment of the pharmacokinetic/pharmacodynamic link model. Subsequently, we present the algorithms used for parameter estimation and potential therapeutic dose computation. The implementation of the R program is illustrated by a clinical example. The software package is then validated by comparing the model parameters and the goodness-of-fit statistics generated by our R package with those generated by the widely used pharmacological software WinNonlin. The pharmacokinetic and pharmacodynamic parameters as well as the potential recommended therapeutic dose can be acquired with the R package. The validation process shows that the parameters estimated using our package are satisfactory. The R program developed and presented here provides pharmacokinetic researchers with a simple and easy-to-access tool for pharmacokinetic/pharmacodynamic analysis on personal computers.
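The kind of fit such a package automates can be sketched with a one-compartment oral-absorption (Bateman) pharmacokinetic model fitted to concentration-time data. This is a generic illustration in Python rather than the paper's R code, and all parameter values and data points are invented for the example.

```python
import numpy as np
from scipy.optimize import curve_fit

def conc(t, ka, ke, a):
    # Bateman equation: first-order absorption (ka) and elimination (ke)
    return a * ka / (ka - ke) * (np.exp(-ke * t) - np.exp(-ka * t))

t = np.array([0.25, 0.5, 1, 2, 4, 6, 8, 12, 24], dtype=float)   # hours
rng = np.random.default_rng(1)
y = conc(t, 1.5, 0.2, 10.0) * (1 + rng.normal(0, 0.03, t.size))  # noisy "data"

popt, _ = curve_fit(conc, t, y, p0=(1.0, 0.1, 5.0))
ka_hat, ke_hat, a_hat = popt
tmax = np.log(ka_hat / ke_hat) / (ka_hat - ke_hat)    # predicted time of peak
```

Note the classic flip-flop ambiguity of this model (swapping ka and ke with a rescaled amplitude reproduces the same curve), which is why derived quantities such as tmax are more robust diagnostics than the individual rate constants.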
Towards Personalized Cardiology: Multi-Scale Modeling of the Failing Heart
Amr, Ali; Neumann, Dominik; Georgescu, Bogdan; Seegerer, Philipp; Kamen, Ali; Haas, Jan; Frese, Karen S.; Irawati, Maria; Wirsz, Emil; King, Vanessa; Buss, Sebastian; Mereles, Derliz; Zitron, Edgar; Keller, Andreas; Katus, Hugo A.; Comaniciu, Dorin; Meder, Benjamin
2015-01-01
Background Despite modern pharmacotherapy and advanced implantable cardiac devices, overall prognosis and quality of life of heart failure (HF) patients remain poor. This is in part due to insufficient patient stratification and lack of individualized therapy planning, resulting in less effective treatments and a significant number of non-responders. Methods and Results State-of-the-art clinical phenotyping was acquired, including magnetic resonance imaging (MRI) and biomarker assessment. An individualized, multi-scale model of heart function covering cardiac anatomy, electrophysiology, biomechanics and hemodynamics was estimated using a robust framework. The model was computed on n=46 HF patients, showing for the first time that advanced multi-scale models can be fitted consistently on large cohorts. Novel multi-scale parameters derived from the model of all cases were analyzed and compared against clinical parameters, cardiac imaging, lab tests and survival scores to evaluate the explicative power of the model and its potential for better patient stratification. Model validation was pursued by comparing clinical parameters that were not used in the fitting process against model parameters. Conclusion This paper illustrates how advanced multi-scale models can complement cardiovascular imaging and how they could be applied in patient care. Based on the obtained results, it becomes conceivable that, after thorough validation, such heart failure models could be applied for patient management and therapy planning in the future, as we illustrate in one patient of our cohort who received CRT-D implantation. PMID:26230546
Švajdlenková, H; Ruff, A; Lunkenheimer, P; Loidl, A; Bartoš, J
2017-08-28
We report a broadband dielectric spectroscopy (BDS) study on the clustering fragile glass-former meta-toluidine (m-TOL) from 187 K up to 289 K over a wide frequency range of 10⁻³-10⁹ Hz, with focus on the primary α relaxation and the secondary β relaxation above the glass temperature T_g. The broadband dielectric spectra were fitted using the Havriliak-Negami (HN) and Cole-Cole (CC) models. The β process, disappearing at T_β,disap = 1.12 T_g, exhibits a non-Arrhenius dependence fitted by the Vogel-Fulcher-Tammann-Hesse (VFTH) equation, with T_0β^VFTH in accord with the characteristic differential scanning calorimetry (DSC) limiting temperature of the glassy state. The essential feature of the α process consists in the distinct changes of its spectral shape parameter β_HN, marked by the characteristic BDS temperatures T_B1^βHN and T_B2^βHN. The primary α relaxation times were fitted over the entire temperature and frequency range by several current three- to six-parameter dynamic models. This analysis reveals that the crossover temperatures of the idealized mode coupling theory model (T_c^MCT), the extended free volume model (T_0^EFV), and the two-order-parameter (TOP) model (T_mc) are close to T_B1^βHN, which provides a consistent physical rationalization for the first change of the shape parameter. In addition, the other two characteristic TOP temperatures, T_0^TOP and T_A, coincide with the thermodynamic Kauzmann temperature T_K and the second change of the shape parameter at around T_B2^βHN, respectively. These can be related to the onset of liquid-like domains in the glassy state and the disappearance of solid-like domains in the normal liquid state.
Transfer-function-parameter estimation from frequency response data: A FORTRAN program
NASA Technical Reports Server (NTRS)
Seidel, R. C.
1975-01-01
A FORTRAN computer program designed to fit a linear transfer function model to given frequency response magnitude and phase data is presented. A conjugate gradient search is used that minimizes the integral of the squared magnitude of the error between the model and the data. The search is constrained to ensure model stability. A scaling of the model parameters by their own magnitude aids search convergence. Efficient computer algorithms result in a small and fast program suitable for a minicomputer. A sample problem with different model structures and parameter estimates is reported.
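The fitting task described above can be sketched in modern terms: match a parameterized transfer function to measured frequency response data by least squares. The sketch below uses a first-order model K/(1 + jωτ) and scipy's trust-region solver in place of the original constrained conjugate-gradient search; the data are noiseless and synthetic.

```python
import numpy as np
from scipy.optimize import least_squares

w = np.logspace(-1, 2, 40)                       # frequencies, rad/s

def H(p, w):
    K, tau = p
    return K / (1.0 + 1j * w * tau)              # first-order lag model

data = H((2.0, 0.5), w)                          # "measured" complex response

def resid(p):
    e = H(p, w) - data
    return np.concatenate([e.real, e.imag])      # stack real/imag residuals

# Positivity bounds on K and tau play the role of the original
# program's stability constraint for this simple model structure.
fit = least_squares(resid, x0=(1.0, 1.0), bounds=([0, 0], [np.inf, np.inf]))
K_hat, tau_hat = fit.x
```

Stacking real and imaginary parts is equivalent to fitting magnitude and phase jointly, and avoids the phase-wrapping issues of fitting them separately.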
Kim, Sun Jung; Yoo, Il Young
2016-03-01
The purpose of this study was to explain the health promotion behavior of Chinese international students in Korea using a structural equation model including acculturation factors. A survey using self-administered questionnaires was employed. Data were collected from 272 Chinese students who had resided in Korea for longer than 6 months. The data were analyzed using structural equation modeling. The p value of the final model was .31. The fit indices of the final model, such as the goodness of fit index, adjusted goodness of fit index, normed fit index, non-normed fit index, and comparative fit index, were greater than .95. The root mean square residual and root mean square error of approximation also met the criteria. Self-esteem, perceived health status, acculturative stress and acculturation level had direct effects on the health promotion behavior of the participants, and the model explained 30.0% of the variance. The Chinese students in Korea with higher self-esteem, perceived health status, and acculturation level, and lower acculturative stress, reported more health promotion behavior. The findings can be applied to develop health promotion strategies for this population. Copyright © 2016. Published by Elsevier B.V.
NASA Astrophysics Data System (ADS)
Fan, Qiang; Huang, Zhenyu; Zhang, Bing; Chen, Dayue
2013-02-01
Properties of discontinuities, such as bolt joints and cracks in waveguide structures, are difficult to evaluate by either analytical or numerical methods due to the complexity and uncertainty of the discontinuities. In this paper, the discontinuity in a Timoshenko beam is modeled with high-order parameters and then these parameters are identified by using reflection coefficients at the discontinuity. The high-order model is composed of several one-order sub-models in series and each sub-model consists of inertia, stiffness and damping components in parallel. The order of the discontinuity model is determined based on the characteristics of the reflection coefficient curve and the accuracy requirement of the dynamic modeling. The model parameters are identified through the least-squares fitting iteration method, in which the undetermined model parameters are updated in each iteration to fit the dynamic reflection coefficient curve to the wave-based one. By using the spectral super-element method (SSEM), simulation cases, including one-order discontinuities on infinite and finite beams and a two-order discontinuity on an infinite beam, were employed to evaluate both the accuracy of the discontinuity model and the effectiveness of the identification method. For practical considerations, effects of measurement noise on the discontinuity parameter identification are investigated by adding different levels of noise to the simulated data. The simulation results were then validated by the corresponding experiments. Both the simulation and experimental results show that (1) the one-order discontinuities can be identified accurately with maximum errors of 6.8% and 8.7%, respectively; (2) the high-order discontinuities can be identified with maximum errors of 15.8% and 16.2%, respectively; and (3) the high-order model can predict the complex discontinuity much more accurately than the one-order discontinuity model.
Zahariev, Federico; De Silva, Nuwan; Gordon, Mark S; Windus, Theresa L; Dick-Perez, Marilu
2017-03-27
A newly created object-oriented program for automating the process of fitting molecular-mechanics parameters to ab initio data, termed ParFit, is presented. ParFit uses a hybrid of deterministic and stochastic genetic algorithms. ParFit can simultaneously handle several molecular-mechanics parameters in multiple molecules and can also apply symmetric and antisymmetric constraints on the optimized parameters. The simultaneous handling of several molecules enhances the transferability of the fitted parameters. ParFit is written in Python, uses a rich set of standard and nonstandard Python libraries, and can be run in parallel on multicore computer systems. As an example, a series of phosphine oxides, important for metal extraction chemistry, are parametrized using ParFit. ParFit is an open source program available for free on GitHub (https://github.com/fzahari/ParFit).
NASA Astrophysics Data System (ADS)
Botyánszki, János; Kasen, Daniel
2017-08-01
We present a radiative transfer code to model the nebular phase spectra of supernovae (SNe) in non-LTE (NLTE). We apply it to a systematic study of SNe Ia using parameterized 1D models and show how nebular spectral features depend on key physical parameters, such as the time since explosion, total ejecta mass, kinetic energy, radial density profile, and the masses of ⁵⁶Ni, intermediate-mass elements, and stable iron-group elements. We also quantify the impact of uncertainties in atomic data inputs. We find the following. (1) The main features of SN Ia nebular spectra are relatively insensitive to most physical parameters. Degeneracy among parameters precludes a unique determination of the ejecta properties from spectral fitting. In particular, features can be equally well fit with generic Chandrasekhar-mass (M_Ch), sub-M_Ch, and super-M_Ch models. (2) A sizable (≳0.1 M_⊙) central region of stable iron-group elements, often claimed as evidence for M_Ch models, is not essential to fit the optical spectra and may produce an unusual flat-top [Co III] profile. (3) The strength of [S III] emission near 9500 Å can provide a useful diagnostic of explosion nucleosynthesis. (4) Substantial amounts (≳0.1 M_⊙) of unburned C/O mixed throughout the ejecta produce [O III] emission not seen in observations. (5) Shifts in the wavelength of line peaks can arise from line-blending effects. (6) The steepness of the ejecta density profile affects the line shapes, offering a constraint on explosion models. (7) Uncertainties in atomic data limit the ability to infer physical parameters.
NASA Astrophysics Data System (ADS)
Suvanto, K.
1990-07-01
Statistical inversion theory is employed to estimate parameter uncertainties in incoherent scatter radar studies of non-Maxwellian ionospheric plasma. Measurement noise and the inexact nature of the plasma model are considered as potential sources of error. In most of the cases investigated here, it is not possible to determine electron density, line-of-sight ion and electron temperatures, ion composition, and two non-Maxwellian shape factors simultaneously. However, if the molecular ion velocity distribution is highly non-Maxwellian, all these quantities can sometimes be retrieved from the data. This theoretical result supports the validity of the only successful non-Maxwellian, mixed-species fit discussed in the literature. A priori information on one of the parameters, e.g., the electron density, often reduces the parameter uncertainties significantly and makes composition fits possible even if the six-parameter fit cannot be performed. However, small (less than 0.5) non-Maxwellian shape factors remain difficult to distinguish.
Zanzonico, Pat; Carrasquillo, Jorge A; Pandit-Taskar, Neeta; O'Donoghue, Joseph A; Humm, John L; Smith-Jones, Peter; Ruan, Shutian; Divgi, Chaitanya; Scott, Andrew M; Kemeny, Nancy E; Fong, Yuman; Wong, Douglas; Scheinberg, David; Ritter, Gerd; Jungbluth, Achem; Old, Lloyd J; Larson, Steven M
2015-10-01
The molecular specificity of monoclonal antibodies (mAbs) directed against tumor antigens has proven effective for targeted therapy of human cancers, as shown by a growing list of successful antibody-based drug products. We describe a novel, nonlinear compartmental model using PET-derived data to determine the "best-fit" parameters and model-derived quantities for optimizing biodistribution of intravenously injected ¹²⁴I-labeled antitumor antibodies. As an example of this paradigm, quantitative image and kinetic analyses of anti-A33 humanized mAb (also known as "A33") were performed in 11 colorectal cancer patients. Serial whole-body PET scans of ¹²⁴I-labeled A33 and blood samples were acquired and the resulting tissue time-activity data for each patient were fit to a nonlinear compartmental model using the SAAM II computer code. Excellent agreement was observed between fitted and measured parameters of tumor uptake, "off-target" uptake in bowel mucosa, blood clearance, tumor antigen levels, and percent antigen occupancy. This approach should be generally applicable to antibody-antigen systems in human tumors for which the masses of antigen-expressing tumor and of normal tissues can be estimated and for which antibody kinetics can be measured with PET. Ultimately, based on each patient's resulting "best-fit" nonlinear model, a patient-specific optimum mAb dose (in micromoles, for example) may be derived.
Global parameter estimation for thermodynamic models of transcriptional regulation.
Suleimenov, Yerzhan; Ay, Ahmet; Samee, Md Abul Hassan; Dresch, Jacqueline M; Sinha, Saurabh; Arnosti, David N
2013-07-15
Deciphering the mechanisms involved in gene regulation holds the key to understanding the control of central biological processes, including human disease, population variation, and the evolution of morphological innovations. New experimental techniques including whole genome sequencing and transcriptome analysis have enabled comprehensive modeling approaches to study gene regulation. In many cases, it is useful to be able to assign biological significance to the inferred model parameters, but such interpretation should take into account features that affect these parameters, including model construction and sensitivity, the type of fitness calculation, and the effectiveness of parameter estimation. This last point is often neglected, as estimation methods are often selected for historical reasons or for computational ease. Here, we compare the performance of two parameter estimation techniques broadly representative of local and global approaches, namely, a quasi-Newton/Nelder-Mead simplex (QN/NMS) method and a covariance matrix adaptation-evolutionary strategy (CMA-ES) method. The estimation methods were applied to a set of thermodynamic models of gene transcription applied to regulatory elements active in the Drosophila embryo. Measuring overall fit, the global CMA-ES method performed significantly better than the local QN/NMS method on high quality data sets, but this difference was negligible on lower quality data sets with increased noise or on data sets simplified by stringent thresholding. Our results suggest that the choice of parameter estimation technique for evaluation of gene expression models depends on the quality of the data, the nature of the models, and the aims of the modeling effort. Copyright © 2013 Elsevier Inc. All rights reserved.
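The local-versus-global contrast studied above can be demonstrated on a toy multimodal "fitness" surface. The sketch below uses Nelder-Mead as the local method and scipy's differential evolution as a stand-in for CMA-ES (which lives in a separate `cma` package); the Rastrigin function and starting point are illustrative, not the paper's transcription models.

```python
import numpy as np
from scipy.optimize import minimize, differential_evolution

def rastrigin(x):
    # Classic multimodal test function: global minimum 0 at the origin,
    # surrounded by a lattice of local minima.
    x = np.asarray(x)
    return 10.0 * x.size + np.sum(x ** 2 - 10.0 * np.cos(2 * np.pi * x))

# Local simplex search started away from the global optimum gets trapped.
local = minimize(rastrigin, x0=[3.3, -2.7], method="Nelder-Mead")

# Population-based global search explores the whole bounded domain.
glob = differential_evolution(rastrigin, bounds=[(-5.12, 5.12)] * 2,
                              seed=0, tol=1e-8)
```

On smooth, unimodal objectives the two approaches agree, mirroring the paper's finding that the advantage of the global method shrinks as data quality (and hence surface ruggedness) changes.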
Tube-Load Model Parameter Estimation for Monitoring Arterial Hemodynamics
Zhang, Guanqun; Hahn, Jin-Oh; Mukkamala, Ramakrishna
2011-01-01
A useful model of the arterial system is the uniform, lossless tube with parametric load. This tube-load model is able to account for wave propagation and reflection (unlike lumped-parameter models such as the Windkessel) while being defined by only a few parameters (unlike comprehensive distributed-parameter models). As a result, the parameters may be readily estimated by accurate fitting of the model to available arterial pressure and flow waveforms so as to permit improved monitoring of arterial hemodynamics. In this paper, we review tube-load model parameter estimation techniques that have appeared in the literature for monitoring wave reflection, large artery compliance, pulse transit time, and central aortic pressure. We begin by motivating the use of the tube-load model for parameter estimation. We then describe the tube-load model, its assumptions and validity, and approaches for estimating its parameters. We next summarize the various techniques and their experimental results while highlighting their advantages over conventional techniques. We conclude the review by suggesting future research directions and describing potential applications. PMID:22053157
Sánchez-Jiménez, Pedro E; Pérez-Maqueda, Luis A; Perejón, Antonio; Criado, José M
2013-02-05
This paper provides some clarifications regarding the use of model-fitting methods of kinetic analysis for estimating the activation energy of a process, in response to some results recently published in Chemistry Central Journal. The model-fitting methods of Arrhenius and Savata are used to determine the activation energy of a single simulated curve. It is shown that most kinetic models correctly fit the data, each providing a different value for the activation energy. Therefore it is not really possible to determine the correct activation energy from a single non-isothermal curve. On the other hand, when a set of curves recorded under different heating schedules is used, the correct kinetic parameters can be clearly discerned. Here, it is shown that the activation energy and the kinetic model cannot be unambiguously determined from a single experimental curve recorded under non-isothermal conditions. Thus, the use of a set of curves recorded under different heating schedules is mandatory if model-fitting methods are employed.
Nonlinear Viscoelastic Characterization of the Porcine Spinal Cord
Shetye, Snehal; Troyer, Kevin; Streijger, Femke; Lee, Jae H. T.; Kwon, Brian K.; Cripton, Peter; Puttlitz, Christian M.
2014-01-01
Although quasi-static and quasi-linear viscoelastic properties of the spinal cord have been reported previously, there are no published studies that have investigated the fully (strain-dependent) nonlinear viscoelastic properties of the spinal cord. In this study, stress relaxation experiments and dynamic cycling were performed on six fresh porcine lumbar cord specimens to examine their viscoelastic mechanical properties. The stress relaxation data were fitted to a modified superposition formulation, and a novel finite ramp time correction technique was applied. The parameters obtained from this fitting methodology were used to predict the average dynamic cyclic viscoelastic behavior of the porcine cord. The data indicate that the porcine spinal cord exhibited fully nonlinear viscoelastic behavior. The average weighted RMSE for a Heaviside ramp fit was 2.8 kPa, which was significantly greater (p < 0.001) than that of the nonlinear (comprehensive viscoelastic characterization (CVC) method) fit (0.365 kPa). Further, the nonlinear mechanical parameters obtained were able to accurately predict the dynamic behavior, thus exemplifying the reliability of the obtained nonlinear parameters. These parameters will be important for future studies investigating various damage mechanisms of the spinal cord and studies developing high-resolution finite element models of the spine. PMID:24211612
Zahariev, Federico; De Silva, Nuwan; Gordon, Mark S.; ...
2017-02-23
Here, a newly created object-oriented program for automating the process of fitting molecular-mechanics parameters to ab initio data, termed ParFit, is presented. ParFit uses a hybrid of deterministic and stochastic genetic algorithms. ParFit can simultaneously handle several molecular-mechanics parameters in multiple molecules and can also apply symmetric and antisymmetric constraints on the optimized parameters. The simultaneous handling of several molecules enhances the transferability of the fitted parameters. ParFit is written in Python, uses a rich set of standard and nonstandard Python libraries, and can be run in parallel on multicore computer systems. As an example, a series of phosphine oxides, important for metal extraction chemistry, are parametrized using ParFit.
Tong, Xuming; Chen, Jinghang; Miao, Hongyu; Li, Tingting; Zhang, Le
2015-01-01
Agent-based models (ABM) and differential equations (DE) are two commonly used methods for immune system simulation. However, it is difficult for ABM to estimate key parameters of the model by incorporating experimental data, whereas the differential equation model is incapable of describing the complicated immune system in detail. To overcome these problems, we developed an integrated ABM regression model (IABMR). It combines the advantages of ABM and DE by employing ABM to mimic the multi-scale immune system with various phenotypes and types of cells, and by using the input and output of ABM to build a Loess regression for key parameter estimation. Next, we employed the greedy algorithm to estimate the key parameters of the ABM with respect to the same experimental data set and used ABM to describe a 3D immune system similar to previous studies that employed the DE model. These results indicate that IABMR not only has the potential to simulate the immune system at various scales, phenotypes and cell types, but can also accurately infer key parameters as a DE model does. Therefore, this study developed a complex system modeling mechanism that can simulate the complicated immune system in detail like an ABM and validate the model's reliability and efficiency, as a DE model does, by fitting the experimental data. PMID:26535589
Quasi-Fermi level splitting and sub-bandgap absorptivity from semiconductor photoluminescence
DOE Office of Scientific and Technical Information (OSTI.GOV)
Katahara, John K.; Hillhouse, Hugh W., E-mail: h2@uw.edu
A unified model for the direct gap absorption coefficient (band-edge and sub-bandgap) is developed that encompasses the functional forms of the Urbach, Thomas-Fermi, screened Thomas-Fermi, and Franz-Keldysh models of sub-bandgap absorption as specific cases. We combine this model of absorption with an occupation-corrected non-equilibrium Planck law for the spontaneous emission of photons to yield a model of photoluminescence (PL) with broad applicability to band-band photoluminescence from intrinsic, heavily doped, and strongly compensated semiconductors. The utility of the model is that it is amenable to full-spectrum fitting of absolute intensity PL data and yields: (1) the quasi-Fermi level splitting, (2) the local lattice temperature, (3) the direct bandgap, (4) the functional form of the sub-bandgap absorption, and (5) the energy broadening parameter (Urbach energy, magnitude of potential fluctuations, etc.). The accuracy of the model is demonstrated by fitting the room temperature PL spectrum of GaAs. It is then applied to Cu(In,Ga)(S,Se)₂ (CIGSSe) and Cu₂ZnSn(S,Se)₄ (CZTSSe) to reveal the nature of their tail states. For GaAs, the model fit is excellent, and fitted parameters match literature values for the bandgap (1.42 eV), functional form of the sub-bandgap states (purely Urbach in nature), and energy broadening parameter (Urbach energy of 9.4 meV). For CIGSSe and CZTSSe, the model fits yield quasi-Fermi level splittings that match well with the open circuit voltages measured on devices made from the same materials and bandgaps that match well with those extracted from EQE measurements on the devices.
The power of the exponential decay of the absorption coefficient into the bandgap is found to be in the range of 1.2 to 1.6, suggesting that tunneling in the presence of local electrostatic potential fluctuations is a dominant factor contributing to the sub-bandgap absorption, by either a purely electrostatic (screened Thomas-Fermi) or a photon-assisted tunneling (Franz-Keldysh) mechanism. A Gaussian distribution of bandgaps (local E_g fluctuation) is found to be inconsistent with the data. The sub-bandgap absorption of the CZTSSe absorber is found to be larger than that of CIGSSe for materials that yield roughly equivalent photovoltaic devices (8% efficient). Further, it is shown that fitting only portions of the PL spectrum (e.g., low energy for the energy broadening parameter and high energy for the quasi-Fermi level splitting) may lead to significant errors for materials with substantial sub-bandgap absorption and emission.
Rodrigues, Philip; Wilkinson, Callum; McFarland, Kevin
2016-08-24
The longstanding discrepancy between bubble chamber measurements of ν_μ-induced single pion production channels has led to large uncertainties in pion production cross section parameters for many years. We extend the reanalysis of pion production data in deuterium bubble chambers, where this discrepancy is resolved, to include the ν_μ n → μ⁻ p π⁰ and ν_μ n → μ⁻ n π⁺ channels, and use the resulting data to fit the parameters of the GENIE pion production model. We find a set of parameters that describes the bubble chamber data better than the GENIE default parameters, and provide updated central values and reduced uncertainties for use in neutrino oscillation and cross section analyses that use the GENIE model. We find that GENIE's non-resonant background prediction has to be significantly reduced to fit the data, which may help to explain the recent discrepancies between simulation and data observed by the MINERνA coherent pion and NOνA oscillation analyses.
Nested Sampling for Bayesian Model Comparison in the Context of Salmonella Disease Dynamics
Dybowski, Richard; McKinley, Trevelyan J.; Mastroeni, Pietro; Restif, Olivier
2013-01-01
Understanding the mechanisms underlying the observed dynamics of complex biological systems requires the statistical assessment and comparison of multiple alternative models. Although this has traditionally been done using maximum likelihood-based methods such as Akaike's Information Criterion (AIC), Bayesian methods have gained in popularity because they provide more informative output in the form of posterior probability distributions. However, comparison between multiple models in a Bayesian framework is made difficult by the computational cost of numerical integration over large parameter spaces. A new, efficient method for the computation of posterior probabilities has recently been proposed and applied to complex problems from the physical sciences. Here we demonstrate how nested sampling can be used for inference and model comparison in biological sciences. We present a reanalysis of data from experimental infection of mice with Salmonella enterica showing the distribution of bacteria in liver cells. In addition to confirming the main finding of the original analysis, which relied on AIC, our approach provides: (a) integration across the parameter space, (b) estimation of the posterior parameter distributions (with visualisations of parameter correlations), and (c) estimation of the posterior predictive distributions for goodness-of-fit assessments of the models. The goodness-of-fit results suggest that alternative mechanistic models and a relaxation of the quasi-stationary assumption should be considered. PMID:24376528
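The evidence integral that nested sampling computes can be illustrated with a minimal, self-contained sketch (not the algorithm of the paper): a 1-D toy problem with a uniform prior on [−5, 5] and a standard normal likelihood, whose evidence is analytically ln(0.1) ≈ −2.30. New points are drawn by plain rejection sampling from the prior, which is only feasible in such low-dimensional toys:

```python
import numpy as np

rng = np.random.default_rng(0)

def loglike(x):
    """Standard normal log-likelihood."""
    return -0.5 * x * x - 0.5 * np.log(2 * np.pi)

def nested_sampling(n_live=100, n_iter=800):
    # live points drawn from the uniform prior U(-5, 5)
    live = rng.uniform(-5, 5, n_live)
    live_ll = loglike(live)
    log_z = -np.inf
    log_x_prev = 0.0                  # log prior volume, starts at log(1)
    for i in range(1, n_iter + 1):
        worst = np.argmin(live_ll)
        l_star = live_ll[worst]
        log_x = -i / n_live           # expected log volume after i shells
        log_dx = log_x_prev + np.log1p(-np.exp(log_x - log_x_prev))
        log_z = np.logaddexp(log_z, l_star + log_dx)
        log_x_prev = log_x
        # replace the worst point: rejection-sample the prior above L*
        while True:
            cand = rng.uniform(-5, 5)
            if loglike(cand) > l_star:
                live[worst] = cand
                live_ll[worst] = loglike(cand)
                break
    # add the contribution of the remaining live points
    log_z = np.logaddexp(log_z, np.log(np.mean(np.exp(live_ll))) + log_x_prev)
    return log_z

log_z = nested_sampling()
print(f"log evidence: {log_z:.2f} (analytic: {np.log(0.1):.2f})")
```

Practical implementations replace the rejection step with slice or ellipsoidal sampling so that the method scales to the multi-parameter mechanistic models discussed in the abstract.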
Parameterization of a mesoscopic model for the self-assembly of linear sodium alkyl sulfates
NASA Astrophysics Data System (ADS)
Mai, Zhaohuan; Couallier, Estelle; Rakib, Mohammed; Rousseau, Bernard
2014-05-01
A systematic approach to develop mesoscopic models for a series of linear anionic surfactants (CH₃(CH₂)ₙ₋₁OSO₃Na, n = 6, 9, 12, 15) by dissipative particle dynamics (DPD) simulations is presented in this work. The four surfactants are represented by coarse-grained models composed of the same head group and different numbers of identical tail beads. The transferability of the DPD model over different surfactant systems is carefully checked by adjusting the repulsive interaction parameters and the rigidity of surfactant molecules, in order to reproduce key equilibrium properties of the aqueous micellar solutions observed experimentally, including critical micelle concentration (CMC) and average micelle aggregation number (Nag). We find that the chain length is a good index to optimize the parameters and evaluate the transferability of the DPD model. Our models qualitatively reproduce the essential properties of these surfactant analogues with a set of best-fit parameters. It is observed that the logarithm of the CMC value decreases linearly with the surfactant chain length, in agreement with Klevens' rule. With the best-fit and transferable set of parameters, we have been able to calculate the free energy contribution to micelle formation per methylene unit of -1.7 kJ/mol, very close to the experimentally reported value.
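Klevens' rule and the per-methylene free energy quoted above are linked by a simple linear fit: the slope of ln(CMC) against chain length, multiplied by RT, gives the transfer free energy per CH2 group. A minimal sketch with hypothetical CMC values chosen to reproduce the quoted −1.7 kJ/mol (these are not the paper's data):

```python
import numpy as np

R, T = 8.314, 298.15  # gas constant (J/(mol K)) and temperature (K)

# Hypothetical CMC values (mol/L) for chain lengths n = 6, 9, 12, 15,
# constructed to follow Klevens' rule: ln(CMC) = A - B*n
n = np.array([6, 9, 12, 15])
cmc = np.array([0.42, 5.3e-2, 6.7e-3, 8.4e-4])

# Linear fit of ln(CMC) vs n; slope * RT is the transfer free energy
# per methylene group (negative = favours micellisation)
slope, intercept = np.polyfit(n, np.log(cmc), 1)
dG_ch2 = R * T * slope / 1000.0  # kJ/mol
print(f"free energy per CH2 group: {dG_ch2:.2f} kJ/mol")
```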
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sundararaman, Ravishankar; Gunceler, Deniz; Arias, T. A.
2014-10-07
Continuum solvation models enable efficient first principles calculations of chemical reactions in solution, but require extensive parametrization and fitting for each solvent and class of solute systems. Here, we examine the assumptions of continuum solvation models in detail and replace empirical terms with physical models in order to construct a minimally-empirical solvation model. Specifically, we derive solvent radii from the nonlocal dielectric response of the solvent from ab initio calculations, construct a closed-form and parameter-free weighted-density approximation for the free energy of the cavity formation, and employ a pair-potential approximation for the dispersion energy. We show that the resulting model with a single solvent-independent parameter: the electron density threshold (n_c), and a single solvent-dependent parameter: the dispersion scale factor (s_6), reproduces solvation energies of organic molecules in water, chloroform, and carbon tetrachloride with RMS errors of 1.1, 0.6 and 0.5 kcal/mol, respectively. We additionally show that fitting the solvent-dependent s_6 parameter to the solvation energy of a single non-polar molecule does not substantially increase these errors. Parametrization of this model for other solvents, therefore, requires minimal effort and is possible without extensive databases of experimental solvation free energies.
Computer simulation of storm runoff for three watersheds in Albuquerque, New Mexico
Knutilla, R.L.; Veenhuis, J.E.
1994-01-01
Rainfall-runoff data from three watersheds were selected for calibration and verification of the U.S. Geological Survey's Distributed Routing Rainfall-Runoff Model. The watersheds chosen are residentially developed. The conceptually based model uses an optimization process that adjusts selected parameters to achieve the best fit between measured and simulated runoff volumes and peak discharges. Three of these optimization parameters represent soil-moisture conditions, three represent infiltration, and one accounts for effective impervious area. Each watershed modeled was divided into overland-flow segments and channel segments. The overland-flow segments were further subdivided to reflect pervious and impervious areas. Each overland-flow and channel segment was assigned representative values of area, slope, percentage of imperviousness, and roughness coefficients. Rainfall-runoff data for each watershed were separated into two sets for use in calibration and verification. For model calibration, seven input parameters were optimized to attain a best fit of the data. For model verification, parameter values were set using values from model calibration. The standard error of estimate for calibration of runoff volumes ranged from 19 to 34 percent, and for peak discharge calibration ranged from 27 to 44 percent. The standard error of estimate for verification of runoff volumes ranged from 26 to 31 percent, and for peak discharge verification ranged from 31 to 43 percent.
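The standard error of estimate reported above can be defined in several ways; one common USGS convention computes it from base-10 log residuals between measured and simulated values and expresses it as a percentage. A sketch under that assumption, with made-up discharge values (not the Albuquerque data):

```python
import math

def se_of_estimate_percent(measured, simulated):
    """Standard error of estimate, in percent, from log10 residuals
    between measured and simulated values (a common USGS convention)."""
    logs = [math.log10(s / m) for m, s in zip(measured, simulated)]
    n = len(logs)
    mse = sum(e * e for e in logs) / (n - 1)
    # convert the log-space error into an equivalent percentage
    return 100.0 * (10.0 ** math.sqrt(mse) - 1.0)

# Hypothetical peak discharges (cfs): measured vs model-simulated
measured  = [120.0, 85.0, 240.0, 60.0, 150.0]
simulated = [100.0, 95.0, 210.0, 75.0, 170.0]
print(f"SE of estimate: {se_of_estimate_percent(measured, simulated):.0f}%")
```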
Wadehn, Federico; Carnal, David; Loeliger, Hans-Andrea
2015-08-01
Heart rate variability is one of the key parameters for assessing the health status of a subject's cardiovascular system. This paper presents a local model fitting algorithm used for finding single heart beats in photoplethysmogram recordings. The local fit of exponentially decaying cosines of frequencies within the physiological range is used to detect the presence of a heart beat. Using 42 subjects from the CapnoBase database, the average heart rate error was 0.16 BPM and the standard deviation of the absolute estimation error was 0.24 BPM.
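The local fit of exponentially decaying cosines can be sketched as a grid search over the physiological frequency band: for each candidate frequency, the amplitudes of a decaying cosine/sine pair are fitted by linear least squares, and the frequency with the smallest residual wins. This is an illustrative reconstruction (synthetic signal, fixed decay rate), not the authors' algorithm:

```python
import numpy as np

fs = 100.0                      # sampling rate (Hz), hypothetical
t = np.arange(0, 2.0, 1 / fs)   # 2-second analysis window
rng = np.random.default_rng(7)

# Synthetic PPG-like segment: decaying cosine at 1.25 Hz (75 BPM) + noise
f_true = 1.25
y = np.exp(-0.5 * t) * np.cos(2 * np.pi * f_true * t) \
    + 0.05 * rng.standard_normal(t.size)

def best_frequency(y, t, decay=0.5, freqs=np.arange(0.7, 3.5, 0.01)):
    """Grid search: fit decaying cosine/sine amplitudes per frequency,
    keep the frequency with the smallest squared residual."""
    best_f, best_res = None, np.inf
    for f in freqs:
        basis = np.column_stack([
            np.exp(-decay * t) * np.cos(2 * np.pi * f * t),
            np.exp(-decay * t) * np.sin(2 * np.pi * f * t),
        ])
        coef, *_ = np.linalg.lstsq(basis, y, rcond=None)
        r = np.sum((y - basis @ coef) ** 2)
        if r < best_res:
            best_f, best_res = f, r
    return best_f

f_hat = best_frequency(y, t)
print(f"estimated heart rate: {60 * f_hat:.1f} BPM")
```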
Leão, William L.; Abanto-Valle, Carlos A.; Chen, Ming-Hui
2017-01-01
A stochastic volatility-in-mean model with correlated errors using the generalized hyperbolic skew Student-t (GHST) distribution provides a robust alternative to the parameter estimation for daily stock returns in the absence of normality. An efficient Markov chain Monte Carlo (MCMC) sampling algorithm is developed for parameter estimation. The deviance information, the Bayesian predictive information and the log-predictive score criterion are used to assess the fit of the proposed model. The proposed method is applied to an analysis of the daily stock return data from the Standard & Poor’s 500 index (S&P 500). The empirical results reveal that the stochastic volatility-in-mean model with correlated errors and GHST distribution leads to a significant improvement in the goodness-of-fit for the S&P 500 index returns dataset over the usual normal model. PMID:29333210
Latest astronomical constraints on some non-linear parametric dark energy models
NASA Astrophysics Data System (ADS)
Yang, Weiqiang; Pan, Supriya; Paliathanasis, Andronikos
2018-04-01
We consider non-linear redshift-dependent equation of state parameters as dark energy models in a spatially flat Friedmann-Lemaître-Robertson-Walker universe. To depict the expansion history of the universe in such cosmological scenarios, we take into account the large-scale behaviour of such parametric models and fit them using a set of latest observational data with distinct origin that includes cosmic microwave background radiation, Type Ia Supernovae, baryon acoustic oscillations, redshift space distortion, weak gravitational lensing, Hubble parameter measurements from cosmic chronometers, and finally the local Hubble constant from the Hubble Space Telescope. The fitting uses the publicly available code Cosmological Monte Carlo (COSMOMC) to extract the cosmological information out of these parametric dark energy models. From our analysis, it follows that those models could describe the late time accelerating phase of the universe, while they are distinguished from the Λ-cosmology.
Uncertainty Quantification in High Throughput Screening ...
Using uncertainty quantification, we aim to improve the quality of modeling data from high throughput screening assays for use in risk assessment. ToxCast is a large-scale screening program that analyzes thousands of chemicals using over 800 assays representing hundreds of biochemical and cellular processes, including endocrine disruption, cytotoxicity, and zebrafish development. Over 2.6 million concentration response curves are fit to models to extract parameters related to potency and efficacy. Models built on ToxCast results are being used to rank and prioritize the toxicological risk of tested chemicals and to predict the toxicity of tens of thousands of chemicals not yet tested in vivo. However, the data size also presents challenges. When fitting the data, the choice of models, model selection strategy, and hit call criteria must reflect the need for computational efficiency and robustness, requiring hard and somewhat arbitrary cutoffs. When coupled with unavoidable noise in the experimental concentration response data, these hard cutoffs cause uncertainty in model parameters and the hit call itself. The uncertainty will then propagate through all of the models built on the data. Left unquantified, this uncertainty makes it difficult to fully interpret the data for risk assessment. We used bootstrap resampling methods to quantify the uncertainty in fitting models to the concentration response data. Bootstrap resampling determines confidence intervals for
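Bootstrap resampling of a concentration-response fit can be sketched as follows, using a simple one-site binding model y = top·c/(c + EC50) with a profile grid search over EC50 (for each candidate EC50 the optimal top is a closed-form linear fit, so the search is one-dimensional). All data and model choices here are hypothetical, not ToxCast's actual fitting pipeline:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical concentration-response data: y = top * c / (c + ec50) + noise
conc = np.array([0.03, 0.1, 0.3, 1.0, 3.0, 10.0, 30.0])
top_true, ec50_true = 100.0, 1.0
resp = top_true * conc / (conc + ec50_true) + 3.0 * rng.standard_normal(conc.size)

def fit_ec50(c, y, grid=np.logspace(-2, 2, 400)):
    """Profile fit: for each candidate EC50, the optimal 'top' is linear,
    so a 1-D grid search over EC50 suffices."""
    best = (np.inf, None)
    for ec50 in grid:
        h = c / (c + ec50)
        top = h @ y / (h @ h)            # closed-form optimal top
        sse = np.sum((y - top * h) ** 2)
        if sse < best[0]:
            best = (sse, ec50)
    return best[1]

# Bootstrap: refit on resampled (concentration, response) pairs
boots = []
idx = np.arange(conc.size)
for _ in range(500):
    i = rng.choice(idx, size=idx.size, replace=True)
    boots.append(fit_ec50(conc[i], resp[i]))
lo, hi = np.percentile(boots, [2.5, 97.5])
print(f"EC50 = {fit_ec50(conc, resp):.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")
```

The width of the bootstrap interval, and how often resampled fits cross the hit-call threshold, are the kinds of uncertainty the abstract proposes to propagate into downstream models.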
Parameter recovery, bias and standard errors in the linear ballistic accumulator model.
Visser, Ingmar; Poessé, Rens
2017-05-01
The linear ballistic accumulator (LBA) model (Brown & Heathcote, Cogn. Psychol., 57, 153) is increasingly popular in modelling response times from experimental data. An R package, glba, has been developed to fit the LBA model using maximum likelihood estimation, which is validated by means of a parameter recovery study. At sufficient sample sizes parameter recovery is good, whereas at smaller sample sizes there can be large bias in parameters. In a second simulation study, two methods for computing parameter standard errors are compared. The Hessian-based method is found to be adequate and is (much) faster than the alternative bootstrap method. The use of parameter standard errors in model selection and inference is illustrated in an example using data from an implicit learning experiment (Visser et al., Mem. Cogn., 35, 1502). It is shown that typical implicit learning effects are captured by different parameters of the LBA model. © 2017 The British Psychological Society.
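The Hessian-based method estimates standard errors as the square roots of the diagonal of the inverse Hessian of the negative log-likelihood at the MLE. A minimal sketch on a normal toy model (not the LBA likelihood), where the answer can be checked against the analytic σ/√n:

```python
import numpy as np

rng = np.random.default_rng(3)
data = rng.normal(loc=2.0, scale=1.5, size=400)

def nll(theta, x):
    """Negative log-likelihood of a normal model, theta = (mu, log_sigma)."""
    mu, log_sigma = theta
    sigma = np.exp(log_sigma)
    return (0.5 * np.sum(((x - mu) / sigma) ** 2)
            + x.size * (log_sigma + 0.5 * np.log(2 * np.pi)))

# The MLE is available in closed form for this toy model
theta_hat = np.array([data.mean(), np.log(data.std())])  # std() uses 1/n: the MLE

def numerical_hessian(f, theta, x, h=1e-4):
    """Central-difference Hessian of f at theta."""
    k = theta.size
    H = np.zeros((k, k))
    for i in range(k):
        for j in range(k):
            ei = np.eye(k)[i] * h
            ej = np.eye(k)[j] * h
            H[i, j] = (f(theta + ei + ej, x) - f(theta + ei - ej, x)
                       - f(theta - ei + ej, x) + f(theta - ei - ej, x)) / (4 * h * h)
    return H

# Standard errors: sqrt of the diagonal of the inverse Hessian
se = np.sqrt(np.diag(np.linalg.inv(numerical_hessian(nll, theta_hat, data))))
print(f"SE(mu) = {se[0]:.4f}, SE(log sigma) = {se[1]:.4f}")
```

For this model the analytic answers are SE(mu) = σ̂/√n and SE(log σ) = 1/√(2n), which is the kind of check a parameter recovery study automates at scale.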
Low-energy Electrons in Gamma-Ray Burst Afterglow Models
NASA Astrophysics Data System (ADS)
Jóhannesson, Guðlaugur; Björnsson, Gunnlaugur
2018-05-01
Observations of gamma-ray burst (GRB) afterglows have long provided the most detailed information about the origin of this spectacular phenomenon. The model that is most commonly used to extract physical properties of the event from the observations is the relativistic fireball model, where ejected material moving at relativistic speeds creates a shock wave when it interacts with the surrounding medium. Electrons are accelerated in the shock wave, generating the observed synchrotron emission through interactions with the magnetic field in the downstream medium. It is usually assumed that the accelerated electrons follow a simple power-law distribution in energy between specific energy boundaries, and that no electron exists outside these boundaries. This Letter explores the consequences of adding a low-energy power-law segment to the electron distribution with energy that contributes insignificantly to the total energy budget of the distribution. The low-energy electrons have a significant impact on the radio emission, providing synchrotron absorption and emission at these long wavelengths. Shorter wavelengths are affected through the normalization of the distribution. The new model is used to analyze the light curves of GRB 990510, and the resulting parameters are compared to a model without the extra electrons. The quality of the fit and the best-fit parameters are significantly affected by the additional model component. The new component is in one case found to strongly affect the X-ray light curves, showing how changes to the model at radio frequencies can affect light curves at other frequencies through changes in best-fit model parameters.
A two-state hysteresis model from high-dimensional friction
Biswas, Saurabh; Chatterjee, Anindya
2015-01-01
In prior work (Biswas & Chatterjee 2014 Proc. R. Soc. A 470, 20130817 (doi:10.1098/rspa.2013.0817)), we developed a six-state hysteresis model from a high-dimensional frictional system. Here, we use a more intuitively appealing frictional system that resembles one studied earlier by Iwan. The basis functions now have simple analytical description. The number of states required decreases further, from six to the theoretical minimum of two. The number of fitted parameters is reduced by an order of magnitude, to just six. An explicit and faster numerical solution method is developed. Parameter fitting to match different specified hysteresis loops is demonstrated. In summary, a new two-state model of hysteresis is presented that is ready for practical implementation. Essential Matlab code is provided. PMID:26587279
Time series modeling and forecasting using memetic algorithms for regime-switching models.
Bergmeir, Christoph; Triguero, Isaac; Molina, Daniel; Aznarte, José Luis; Benitez, José Manuel
2012-11-01
In this brief, we present a novel model fitting procedure for the neuro-coefficient smooth transition autoregressive model (NCSTAR), as presented by Medeiros and Veiga. The model is endowed with a statistically founded iterative building procedure and can be interpreted in terms of fuzzy rule-based systems. The interpretability of the generated models and a mathematically sound building procedure are two very important properties of forecasting models. The model fitting procedure employed by the original NCSTAR is a combination of initial parameter estimation by a grid search procedure with a traditional local search algorithm. We propose a different fitting procedure, using a memetic algorithm, in order to obtain more accurate models. An empirical evaluation of the method is performed, applying it to various real-world time series originating from three forecasting competitions. The results indicate that we can significantly enhance the accuracy of the models, making them competitive to models commonly used in the field.
Estimating order statistics of network degrees
NASA Astrophysics Data System (ADS)
Chu, J.; Nadarajah, S.
2018-01-01
We model the order statistics of network degrees of big data sets by a range of generalised beta distributions. A three parameter beta distribution due to Libby and Novick (1982) is shown to give the best overall fit for at least four big data sets. The fit of this distribution is significantly better than the fit suggested by Olhede and Wolfe (2012) across the whole range of order statistics for all four data sets.
ERIC Educational Resources Information Center
Boker, Steven M.; Nesselroade, John R.
2002-01-01
Examined two methods for fitting models of intrinsic dynamics to intraindividual variability data by testing these techniques' behavior in equations through simulation studies. Among the main results is the demonstration that a local linear approximation of derivatives can accurately recover the parameters of a simulated linear oscillator, with…
Inverse models: A necessary next step in ground-water modeling
Poeter, E.P.; Hill, M.C.
1997-01-01
Inverse models using, for example, nonlinear least-squares regression, provide capabilities that help modelers take full advantage of the insight available from ground-water models. However, lack of information about the requirements and benefits of inverse models is an obstacle to their widespread use. This paper presents a simple ground-water flow problem to illustrate the requirements and benefits of the nonlinear least-squares regression method of inverse modeling and discusses how these attributes apply to field problems. The benefits of inverse modeling include: (1) expedited determination of best fit parameter values; (2) quantification of the (a) quality of calibration, (b) data shortcomings and needs, and (c) confidence limits on parameter estimates and predictions; and (3) identification of issues that are easily overlooked during nonautomated calibration.
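The core of nonlinear least-squares inverse modeling — iterative parameter updates followed by linearized confidence limits from the covariance s²(JᵀJ)⁻¹ — can be sketched with a bare Gauss-Newton loop on a toy exponential-decay model (hypothetical data, not a real ground-water problem):

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical observations following y = a * exp(-b * x)
x = np.linspace(0, 5, 30)
a_true, b_true = 10.0, 0.7
y = a_true * np.exp(-b_true * x) + 0.1 * rng.standard_normal(x.size)

def model(p, x):
    return p[0] * np.exp(-p[1] * x)

def jacobian(p, x):
    # analytic derivatives of the model w.r.t. a and b
    return np.column_stack([np.exp(-p[1] * x),
                            -p[0] * x * np.exp(-p[1] * x)])

# Gauss-Newton iterations (a bare-bones stand-in for full regression codes)
p = np.array([8.0, 0.5])
for _ in range(50):
    r = y - model(p, x)
    J = jacobian(p, x)
    p = p + np.linalg.solve(J.T @ J, J.T @ r)

# Linearized 95% confidence limits from the parameter covariance matrix
r = y - model(p, x)
J = jacobian(p, x)
s2 = r @ r / (x.size - p.size)
cov = s2 * np.linalg.inv(J.T @ J)
half_width = 1.96 * np.sqrt(np.diag(cov))
print(f"a = {p[0]:.2f} ± {half_width[0]:.2f}, b = {p[1]:.2f} ± {half_width[1]:.2f}")
```

The confidence half-widths are exactly the "confidence limits on parameter estimates" benefit listed in the abstract; real codes add damping (Levenberg-Marquardt) and weighting for observation error.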
How well can we measure supermassive black hole spin?
NASA Astrophysics Data System (ADS)
Bonson, K.; Gallo, L. C.
2016-05-01
Being one of only two fundamental properties black holes possess, the spin of supermassive black holes (SMBHs) is of great interest for understanding accretion processes and galaxy evolution. However, in these early days of spin measurements, consistency and reproducibility of spin constraints have been a challenge. Here, we focus on X-ray spectral modelling of active galactic nuclei (AGN), examining how well we can truly return known reflection parameters such as spin under standard conditions. We have created and fit over 4000 simulated Seyfert 1 spectra each with 375±1k counts. We assess the fits with reflection fraction of R = 1 as well as reflection-dominated AGN with R = 5. We also examine the consequence of permitting fits to search for retrograde spin. In general, we discover that most parameters are overestimated when spectroscopy is restricted to the 2.5-10.0 keV regime and that models are insensitive to inner emissivity index and ionization. When the bandpass is extended out to 70 keV, parameters are more accurately estimated. Repeating the process for R = 5 reduces our ability to measure photon index (~3 to 8 per cent error and overestimated), but increases precision in all other parameters - most notably ionization, which becomes better constrained (±45 erg cm s^{-1}) for low-ionization parameters (ξ < 200 erg cm s^{-1}). In all cases, we find the spin parameter is only well measured for the most rapidly rotating SMBHs (i.e. a > 0.8, to about ±0.10) and that inner emissivity index is never well constrained. Allowing our model to search for retrograde spin did not improve the results.
Analytic modeling of aerosol size distributions
NASA Technical Reports Server (NTRS)
Deepack, A.; Box, G. P.
1979-01-01
Mathematical functions commonly used for representing aerosol size distributions are studied parametrically. Methods for obtaining best fit estimates of the parameters are described. A catalog of graphical plots depicting the parametric behavior of the functions is presented along with procedures for obtaining analytical representations of size distribution data by visual matching of the data with one of the plots. Examples of fitting the same data with equal accuracy by more than one analytic model are also given.
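For one commonly used analytic form, the lognormal, a best-fit estimate reduces to a polynomial fit: ln n(r) + ln r is exactly quadratic in ln r, so the geometric standard deviation and median radius fall out of the quadratic coefficients. A sketch with synthetic data (parameters hypothetical):

```python
import numpy as np

rng = np.random.default_rng(11)

# Synthetic lognormal size distribution: median radius 0.1 um, sigma_g = 2
r = np.logspace(-2, 1, 60)  # radii (um)
mu, s = np.log(0.1), np.log(2.0)
n = (1000 / (np.sqrt(2 * np.pi) * s * r)) \
    * np.exp(-(np.log(r) - mu) ** 2 / (2 * s * s))
n *= np.exp(0.02 * rng.standard_normal(r.size))  # multiplicative noise

# ln n(r) + ln r is quadratic in ln r for a lognormal, so a straight
# polynomial fit recovers the distribution parameters analytically:
#   coefficient of u^2 is -1/(2 s^2); coefficient of u is mu/s^2
u = np.log(r)
c2, c1, c0 = np.polyfit(u, np.log(n) + u, 2)
s_fit = np.sqrt(-1 / (2 * c2))
r_med = np.exp(c1 * s_fit ** 2)
print(f"sigma_g = {np.exp(s_fit):.2f}, median radius = {r_med:.3f} um")
```

The same data could be fitted equally well by, say, a modified gamma function, which is the ambiguity the abstract's catalog of parametric plots is meant to help resolve.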
Inclusion of TCAF model in XSPEC to study accretion flow dynamics around black hole candidates
NASA Astrophysics Data System (ADS)
Debnath, Dipak; Chakrabarti, Sandip Kumar; Mondal, Santanu
Spectral and Temporal properties of black hole candidates can be well understood with the Chakrabarti-Titarchuk solution of two component advective flow (TCAF). This model requires two accretion rates, namely, the Keplerian disk accretion rate and the sub-Keplerian halo accretion rate, the latter being composed of a low angular momentum flow which may or may not develop a shock. In this solution, the relevant parameter is the relative importance of the halo (which creates the Compton cloud region) rate with respect to the Keplerian disk rate (soft photon source). Though this model has been used earlier to manually fit data of several black hole candidates quite satisfactorily, for the first time we are able to create a user friendly version by implementing additive Table model FITS file into GSFC/NASA's spectral analysis software package XSPEC. This enables any user to extract physical parameters of accretion flows, such as two accretion rates, shock location, shock strength etc. for any black hole candidate. Most importantly, unlike any other theoretical model, we show that TCAF is capable of predicting timing properties from spectral fits, since in TCAF, a shock is responsible for deciding spectral slopes as well as QPO frequencies.
A model for spatial variations in life expectancy; mortality in Chinese regions in 2000.
Congdon, Peter
2007-05-02
Life expectancy in China has been improving markedly but health gains have been uneven and there is inequality in survival chances between regions and in rural as against urban areas. This paper applies a statistical modelling approach to mortality data collected in conjunction with the 2000 Census to formally assess spatial mortality contrasts in China. The modelling approach provides interpretable summary parameters (e.g. the relative mortality risk in rural as against urban areas) and is more parsimonious in terms of parameters than the conventional life table model. Predictive fit is assessed both globally and at the level of individual five year age groups. A proportional model (age and area effects independent) has a worse fit than one allowing age-area interactions following a bilinear form. The best fit is obtained by allowing for child and oldest age mortality rates to vary spatially. There is evidence that age (21 age groups) and area (31 Chinese administrative divisions) are not proportional (i.e. independent) mortality risk factors. In fact, spatial contrasts are greatest at young ages. There is a pronounced rural survival disadvantage, and large differences in life expectancy between provinces.
Tufto, Jarle
2010-01-01
Domesticated species frequently spread their genes into populations of wild relatives through interbreeding. The domestication process often involves artificial selection for economically desirable traits. This can lead to an indirect response in unknown correlated traits and a reduction in fitness of domesticated individuals in the wild. Previous models for the effect of gene flow from domesticated species to wild relatives have assumed that evolution occurs in one dimension. Here, I develop a quantitative genetic model for the balance between migration and multivariate stabilizing selection. Different forms of correlational selection consistent with a given observed ratio between the average fitness of domesticated and wild individuals offset the phenotypic means at migration-selection balance away from predictions based on simpler one-dimensional models. For almost all parameter values, correlational selection leads to a reduction in the migration load. For ridge selection, this reduction arises because the distance by which the immigrants deviate from the local optimum is, in effect, reduced. For realistic parameter values, however, the effect of correlational selection on the load is small, suggesting that simpler one-dimensional models may still be adequate in terms of predicting mean population fitness and viability.
Hydration and conformational equilibria of simple hydrophobic and amphiphilic solutes.
Ashbaugh, H S; Kaler, E W; Paulaitis, M E
1998-01-01
We consider whether the continuum model of hydration optimized to reproduce vacuum-to-water transfer free energies simultaneously describes the hydration free energy contributions to conformational equilibria of the same solutes in water. To this end, transfer and conformational free energies of idealized hydrophobic and amphiphilic solutes in water are calculated from explicit water simulations and compared to continuum model predictions. As benchmark hydrophobic solutes, we examine the hydration of linear alkanes from methane through hexane. Amphiphilic solutes were created by adding a charge of +/-1e to a terminal methyl group of butane. We find that phenomenological continuum parameters fit to transfer free energies are significantly different from those fit to conformational free energies of our model solutes. This difference is attributed to continuum model parameters that depend on solute conformation in water, and leads to effective values for the free energy/surface area coefficient and Born radii that best describe conformational equilibrium. In light of these results, we believe that continuum models of hydration optimized to fit transfer free energies do not accurately capture the balance between hydrophobic and electrostatic contributions that determines the solute conformational state in aqueous solution. PMID:9675177
Wang, Mi; Fan, Chengcheng; Yang, Bo; Jin, Shuying; Pan, Jun
2016-01-01
Satellite attitude accuracy is an important factor affecting the geometric processing accuracy of high-resolution optical satellite imagery. To address the problem that the accuracy of the Yaogan-24 remote sensing satellite’s on-board attitude data processing is not high enough to meet its image geometry processing requirements, we developed an approach involving on-ground attitude data processing and verification against the digital orthophoto map (DOM) and digital elevation model (DEM) of a geometric calibration field. The approach focuses on three modules: on-ground processing based on a bidirectional filter, overall weighted smoothing and fitting, and evaluation in the geometric calibration field. Our experimental results demonstrate that the proposed on-ground processing method is both robust and feasible, which ensures the reliability of the observation data quality, convergence and stability of the parameter estimation model. In addition, both the Euler angle and quaternion could be used to build a mathematical fitting model, while the orthogonal polynomial fitting model is more suitable for modeling the attitude parameter. Furthermore, compared to the image geometric processing results based on on-board attitude data, the image uncontrolled and relative geometric positioning accuracy can be increased by about 50%. PMID:27483287
NASA Astrophysics Data System (ADS)
Magri, Alphonso William
This study was undertaken to develop a nonsurgical breast biopsy from Gd-DTPA Contrast Enhanced Magnetic Resonance (CE-MR) images and F-18-FDG PET/CT dynamic image series. A five-step process was developed to accomplish this. (1) Dynamic PET series were nonrigidly registered to the initial frame using a finite element method (FEM) based registration that requires fiducial skin markers to sample the displacement field between image frames. A commercial FEM package (ANSYS) was used for meshing and FEM calculations. Dynamic PET image series registrations were evaluated using similarity measurements SAVD and NCC. (2) Dynamic CE-MR series were nonrigidly registered to the initial frame using two registration methods: a multi-resolution free-form deformation (FFD) registration driven by normalized mutual information, and a FEM-based registration method. Dynamic CE-MR image series registrations were evaluated using similarity measurements, localization measurements, and qualitative comparison of motion artifacts. FFD registration was found to be superior to FEM-based registration. (3) Nonlinear curve fitting was performed for each voxel of the PET/CT volume of activity versus time, based on a realistic two-compartmental Patlak model. Three parameters for this model were fitted; two of them describe the activity levels in the blood and in the cellular compartment, while the third characterizes the washout rate of F-18-FDG from the cellular compartment. (4) Nonlinear curve fitting was performed for each voxel of the MR volume of signal intensity versus time, based on a realistic two-compartment Brix model. Three parameters for this model were fitted: rate of Gd exiting the compartment, representing the extracellular space of a lesion; rate of Gd exiting a blood compartment; and a parameter that characterizes the strength of signal intensities. 
Curve fitting for the PET/CT and MR series was accomplished by application of the Levenberg-Marquardt nonlinear regression algorithm. The best-fit parameters were used to create 3D parametric images. Compartmental modeling evaluation was based on the ability of parameter values to differentiate between tissue types. This evaluation was applied to registered and unregistered image series and found that registration improved results. (5) PET and MR parametric images were registered through FEM- and FFD-based registration. Parametric image registration was evaluated using similarity measurements, target registration error, and qualitative comparison. Comparing FFD- and FEM-based registration results showed that the FEM method is superior. This five-step process constitutes a novel multifaceted approach to a nonsurgical breast biopsy that successfully executes each step. Comparison of this method to conventional biopsy still needs to be done with a larger set of subject data.
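The voxel-wise Levenberg-Marquardt fitting described above can be sketched as follows. The enhancement model, parameter names, and data here are illustrative stand-ins for the study's Brix-type compartmental model, not its actual code.

```python
import numpy as np
from scipy.optimize import curve_fit

def brix_like(t, A, k_ep, k_el):
    """Simplified two-compartment enhancement: uptake rate k_ep, washout rate k_el."""
    return A * k_ep * (np.exp(-k_el * t) - np.exp(-k_ep * t)) / (k_ep - k_el)

# synthetic enhancement curve for a single voxel
t = np.linspace(0.1, 10.0, 40)
true_params = (2.0, 1.5, 0.2)
rng = np.random.default_rng(0)
signal = brix_like(t, *true_params) + 0.01 * rng.normal(size=t.size)

# method="lm" selects Levenberg-Marquardt (unbounded nonlinear least squares)
popt, pcov = curve_fit(brix_like, t, signal, p0=(1.0, 1.0, 0.1), method="lm")
perr = np.sqrt(np.diag(pcov))   # 1-sigma uncertainties for parametric maps
```

In a full parametric-imaging pipeline, this fit would be repeated per voxel and the best-fit parameters assembled into 3D maps.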
Estimating Function Approaches for Spatial Point Processes
NASA Astrophysics Data System (ADS)
Deng, Chong
Spatial point pattern data consist of locations of events that are often of interest in biological and ecological studies. Such data are commonly viewed as a realization of a stochastic process called a spatial point process. To fit a parametric spatial point process model to such data, likelihood-based methods have been widely studied. However, while maximum likelihood estimation is often too computationally intensive for Cox and cluster processes, pairwise likelihood methods such as composite likelihood and Palm likelihood usually suffer from a loss of information because they ignore the correlation among pairs. For many types of correlated data other than spatial point processes, when likelihood-based approaches are not desirable, estimating functions have been widely used for model fitting. In this dissertation, we explore estimating function approaches for fitting spatial point process models. These approaches, which are based on asymptotically optimal estimating function theory, can be used to incorporate the correlation among data and yield more efficient estimators. We conducted a series of studies to demonstrate that these estimating function approaches are good alternatives that balance the trade-off between computational complexity and estimation efficiency. First, we propose a new estimating procedure that improves the efficiency of the pairwise composite likelihood method in estimating clustering parameters. Our approach combines estimating functions derived from pairwise composite likelihood estimation with estimating functions that account for correlations among the pairwise contributions. Our method can be used to fit a variety of parametric spatial point process models and can yield more efficient estimators for the clustering parameters than pairwise composite likelihood estimation. We demonstrate its efficacy through a simulation study and an application to the longleaf pine data.
Second, we further explore the quasi-likelihood approach for fitting the second-order intensity function of spatial point processes. The original second-order quasi-likelihood is barely feasible because of the intense computation and high memory requirement needed to solve a large linear system. Motivated by the existence of geometric regular patterns in stationary point processes, we find a lower-dimensional representation of the optimal weight function and propose a reduced second-order quasi-likelihood approach. Through a simulation study, we show that the proposed method not only demonstrates superior performance in fitting the clustering parameter but also relaxes the constraint on the tuning parameter, H. Third, we studied the quasi-likelihood-type estimating function that is optimal in a certain class of first-order estimating functions for estimating the regression parameter in spatial point process models. Then, by using a novel spectral representation, we construct an implementation that is computationally much more efficient and can be applied to a more general setup than the original quasi-likelihood method.
Evaluation of Statistical Methods for Modeling Historical Resource Production and Forecasting
NASA Astrophysics Data System (ADS)
Nanzad, Bolorchimeg
This master's thesis project consists of two parts. Part I of the project compares modeling of historical resource production and forecasting of future production trends using the logit/probit transform advocated by Rutledge (2011) with conventional Hubbert curve fitting, using global coal production as a case study. The conventional Hubbert/Gaussian method fits a curve to historical production data, whereas a logit/probit transform uses a linear fit to a subset of transformed production data. Within the errors and limitations inherent in this type of statistical modeling, these methods provide comparable results. That is, despite the apparent goodness-of-fit achievable using the logit/probit methodology, neither approach provides a significant advantage over the other in either explaining the observed data or in making future projections. For mature production regions, those that have already substantially passed peak production, results obtained by either method are closely comparable and reasonable, and estimates of ultimately recoverable resources obtained by either method are consistent with geologically estimated reserves. In contrast, for immature regions, estimates of ultimately recoverable resources generated by either of these alternative methods are unstable and thus need to be used with caution. Although the logit/probit transform generates a high quality of fit to historical production data, this approach provides no new information compared to conventional Gaussian or Hubbert-type models and may have the effect of masking the noise and/or instability in the data and the derived fits. In particular, production forecasts for immature or marginally mature production systems based on either method need to be regarded with considerable caution. Part II of the project investigates the utility of a novel alternative method for multicyclic Hubbert modeling, tentatively termed "cycle-jumping," wherein the overlap of multiple cycles is limited.
The model is designed so that each cycle is described by the same three parameters as in the conventional multicyclic Hubbert model, and every two adjacent cycles are connected by a transition. The transition describes the shift from one cycle to the next as a weighted coaddition of the two neighboring cycles and is determined by three parameters: transition year, transition width, and a gamma parameter for the weighting. The cycle-jumping method provides a superior model compared with the conventional multicyclic Hubbert approach and reflects historical production behavior more reasonably and practically: by explicitly considering the form of the transitions between production cycles, it better captures the effects of technological transitions and socioeconomic factors on historical resource production.
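The two approaches compared in Part I can be illustrated on synthetic data. The numbers below are hypothetical, and the linearization shown is the classic Hubbert P/Q-versus-Q variant, used here as a simple proxy for the logit transform discussed above.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import linregress

def hubbert(t, Qinf, b, tm):
    q = Qinf / (1.0 + np.exp(-b * (t - tm)))   # cumulative production (logistic)
    return b * q * (1.0 - q / Qinf)            # annual production rate

t = np.arange(1900.0, 2000.0)
p = hubbert(t, 1000.0, 0.1, 1960.0)            # noiseless synthetic production

# (1) conventional Hubbert fitting: nonlinear fit to the production curve
popt, _ = curve_fit(hubbert, t, p, p0=(800.0, 0.05, 1950.0))

# (2) linearization: P/Q is linear in cumulative Q (intercept b, slope -b/Qinf)
Q = np.cumsum(p)
mask = Q > 0.05 * Q[-1]                        # drop the unstable early points
line = linregress(Q[mask], (p / Q)[mask])
Qinf_linear = -line.intercept / line.slope     # ultimately recoverable resource
```

On clean, single-cycle data both routes recover a similar ultimate resource, which mirrors the abstract's finding that neither approach holds a decisive advantage; the instability for immature regions appears when only the pre-peak portion of the curve is supplied.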
Hsieh, Hong-Po; Ko, Fan-Hua; Sung, Kung-Bin
2018-04-20
An iterative curve fitting method has been applied in both simulation [J. Biomed. Opt. 17, 107003 (2012)] and phantom [J. Biomed. Opt. 19, 077002 (2014)] studies to accurately extract optical properties and the top layer thickness of a two-layered superficial tissue model from diffuse reflectance spectroscopy (DRS) data. This paper describes a hybrid two-step parameter estimation procedure to address two main issues of the previous method: (1) high computational intensity and (2) convergence to local minima. The parameter estimation procedure contains a novel initial estimation step to obtain an initial guess, which is used by a subsequent iterative fitting step to optimize the parameter estimation. A lookup table is used in both steps to quickly obtain reflectance spectra and reduce computational intensity. On simulated DRS data, the proposed parameter estimation procedure achieved high estimation accuracy and a 95% reduction of computational time compared to previous studies. Furthermore, the proposed initial estimation step led to better convergence of the subsequent fitting step. Strategies used in the proposed procedure could benefit both the modeling and experimental data processing of not only DRS but also related approaches such as near-infrared spectroscopy.
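The hybrid two-step idea above can be sketched generically: a coarse lookup table supplies the initial guess, and a local least-squares fit refines it. The "reflectance model" below is a toy stand-in, not a DRS forward model, and all parameter names are illustrative.

```python
import numpy as np
from scipy.optimize import least_squares

def reflectance(wl, mu_a, mu_s):
    """Toy stand-in for a layered-tissue DRS forward model."""
    return np.exp(-mu_a * wl) * mu_s / (mu_s + wl)

wl = np.linspace(0.5, 1.0, 50)   # "wavelength" axis, arbitrary units

# Step 1: initial estimation -- precompute a lookup table over a parameter grid
grid_a = np.linspace(0.1, 2.0, 20)
grid_s = np.linspace(0.5, 5.0, 20)
table = {(a, s): reflectance(wl, a, s) for a in grid_a for s in grid_s}

measured = reflectance(wl, 0.7, 2.2)
init = min(table, key=lambda k: np.sum((table[k] - measured) ** 2))

# Step 2: iterative fitting starting from the table-based initial guess
fit = least_squares(lambda p: reflectance(wl, *p) - measured, x0=init)
```

Starting the local optimizer from the best table entry is what guards against the local-minima problem: the grid search is global but coarse, the refinement is local but precise.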
Kumar, M Senthil; Schwartz, Russell
2010-12-09
Virus capsid assembly has been a key model system for studies of complex self-assembly but it does pose some significant challenges for modeling studies. One important limitation is the difficulty of determining accurate rate parameters. The large size and rapid assembly of typical viruses make it infeasible to directly measure coat protein binding rates or deduce them from the relatively indirect experimental measures available. In this work, we develop a computational strategy to deduce coat-coat binding rate parameters for viral capsid assembly systems by fitting stochastic simulation trajectories to experimental measures of assembly progress. Our method combines quadratic response surface and quasi-gradient descent approximations to deal with the high computational cost of simulations, stochastic noise in simulation trajectories and limitations of the available experimental data. The approach is demonstrated on a light scattering trajectory for a human papillomavirus (HPV) in vitro assembly system, showing that the method can provide rate parameters that produce accurate curve fits and are in good concordance with prior analysis of the data. These fits provide an insight into potential assembly mechanisms of the in vitro system and give a basis for exploring how these mechanisms might vary between in vitro and in vivo assembly conditions.
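The response-surface element of the strategy above can be sketched in one dimension: evaluate a noisy objective (standing in for the stochastic simulation-vs-data misfit) on a small local design, fit a quadratic surface, and step toward its minimizer within a shrinking trust region. This is a toy sketch of the general idea, not the authors' algorithm.

```python
import numpy as np

rng = np.random.default_rng(1)

def noisy_misfit(x):
    """Stand-in for a stochastic simulation-vs-data misfit (true optimum at 3.0)."""
    return (x - 3.0) ** 2 + 0.001 * rng.normal()

x, step = 0.0, 1.0
for _ in range(12):
    xs = x + step * np.array([-1.0, 0.0, 1.0])      # local design points
    ys = np.array([noisy_misfit(v) for v in xs])
    a, b, c = np.polyfit(xs, ys, 2)                 # local quadratic response surface
    if a > 0:                                       # step to the surface minimum,
        x = float(np.clip(-b / (2 * a), x - 2 * step, x + 2 * step))  # trust-region clipped
    step = max(0.7 * step, 1e-3)                    # shrink the design region
```

With stochastic simulators, averaging repeated trajectories at each design point further suppresses the noise that otherwise corrupts the fitted surface once the design region becomes small.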
Number of independent parameters in the potentiometric titration of humic substances.
Lenoir, Thomas; Manceau, Alain
2010-03-16
With the advent of high-precision automatic titrators operating in pH stat mode, measuring the mass balance of protons in solid-solution mixtures against the pH of natural and synthetic polyelectrolytes is now routine. However, titration curves of complex molecules typically lack obvious inflection points, which complicates their analysis despite the high-precision measurements. The calculation of site densities and median proton affinity constants (pK) from such data can lead to considerable covariance between fit parameters. Knowing the number of independent parameters that can be freely varied during the least-squares minimization of a model fit to titration data is necessary to improve the model's applicability. This number was calculated for natural organic matter by applying principal component analysis (PCA) to a reference data set of 47 independent titration curves from fulvic and humic acids measured at I = 0.1 M. The complete data set was reconstructed statistically from pH 3.5 to 9.8 with only six parameters, compared to the seven or eight generally adjusted in common semi-empirical speciation models for organic matter, which explains the correlations that occur with the higher number of parameters. Existing proton-binding models are not necessarily overparametrized; rather, titration data lack the sensitivity needed to quantify the full set of binding properties of humic materials. Model-independent conditional pK values can be obtained directly from the derivative of titration data, and this approach is the most conservative. The apparent proton-binding constants of the 23 fulvic acids (FA) and 24 humic acids (HA) derived from a high-quality polynomial parametrization of the data set are pK(H,COOH)(FA) = 4.18 +/- 0.21, pK(H,Ph-OH)(FA) = 9.29 +/- 0.33, pK(H,COOH)(HA) = 4.49 +/- 0.18, and pK(H,Ph-OH)(HA) = 9.29 +/- 0.38. Their values at other ionic strengths are more reliably calculated with the empirical Davies equation than with any existing model fit.
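The PCA argument above can be reproduced in miniature: the number of components needed to reconstruct a family of titration-like curves bounds the number of parameters the data can support. The curves below are synthetic two-site Henderson-Hasselbalch sums with hypothetical pK values, not the reference data set.

```python
import numpy as np

rng = np.random.default_rng(0)
pH = np.linspace(3.5, 9.8, 100)

def charge(pK1, pK2, q1, q2):
    """Two-site proton binding as a sum of Henderson-Hasselbalch terms."""
    return q1 / (1 + 10 ** (pK1 - pH)) + q2 / (1 + 10 ** (pK2 - pH))

# 47 synthetic titration curves generated by varying four underlying parameters
curves = np.array([charge(4.2 + 0.2 * rng.normal(), 9.3 + 0.3 * rng.normal(),
                          3.0 + rng.normal(), 2.0 + rng.normal())
                   for _ in range(47)])

# effective rank: number of components needed to capture 99% of the variance
X = curves - curves.mean(axis=0)
s = np.linalg.svd(X, compute_uv=False)
explained = np.cumsum(s ** 2) / np.sum(s ** 2)
n_components = int(np.searchsorted(explained, 0.99)) + 1
```

A small effective rank relative to the number of model parameters is exactly the situation the abstract describes: the fit can reproduce the data while individual parameters remain strongly covariant.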
Quantifying cell turnover using CFSE data.
Ganusov, Vitaly V; Pilyugin, Sergei S; de Boer, Rob J; Murali-Krishna, Kaja; Ahmed, Rafi; Antia, Rustom
2005-03-01
The CFSE dye dilution assay is widely used to determine the number of divisions a given CFSE labelled cell has undergone in vitro and in vivo. In this paper, we consider how the data obtained with the use of CFSE (CFSE data) can be used to estimate the parameters determining cell division and death. For a homogeneous cell population (i.e., a population with the parameters for cell division and death being independent of time and the number of divisions cells have undergone), we consider a specific biologically based "Smith-Martin" model of cell turnover and analyze three different techniques for estimation of its parameters: direct fitting, indirect fitting and rescaling method. We find that using only CFSE data, the duration of the division phase (i.e., approximately the S+G2+M phase of the cell cycle) can be estimated with the use of either technique. In some cases, the average division or cell cycle time can be estimated using the direct fitting of the model solution to the data or by using the Gett-Hodgkin method [Gett A. and Hodgkin, P. 2000. A cellular calculus for signal integration by T cells. Nat. Immunol. 1:239-244]. Estimation of the death rates during commitment to division (i.e., approximately the G1 phase of the cell cycle) and during the division phase may not be feasible with the use of only CFSE data. We propose that measuring an additional parameter, the fraction of cells in division, may allow estimation of all model parameters including the death rates during different stages of the cell cycle.
Uncertainty quantification for constitutive model calibration of brain tissue.
Brewick, Patrick T; Teferra, Kirubel
2018-05-31
The results of a study comparing model calibration techniques for Ogden's constitutive model that describes the hyperelastic behavior of brain tissue are presented. One- and two-term Ogden models are fit to two different sets of stress-strain experimental data for brain tissue using both least squares optimization and Bayesian estimation. For the Bayesian estimation, the joint posterior distribution of the constitutive parameters is calculated by employing Hamiltonian Monte Carlo (HMC) sampling, a type of Markov Chain Monte Carlo method. The HMC method is enriched in this work to intrinsically enforce the Drucker stability criterion by formulating a nonlinear parameter constraint function, which ensures the constitutive model produces physically meaningful results. Through application of the nested sampling technique, 95% confidence bounds on the constitutive model parameters are identified, and these bounds are then propagated through the constitutive model to produce the resultant bounds on the stress-strain response. The behavior of the model calibration procedures and the effect of the characteristics of the experimental data are extensively evaluated. It is demonstrated that increasing model complexity (i.e., adding an additional term in the Ogden model) improves the accuracy of the best-fit set of parameters while also increasing the uncertainty via the widening of the confidence bounds of the calibrated parameters. Despite some similarity between the two data sets, the resulting distributions are noticeably different, highlighting the sensitivity of the calibration procedures to the characteristics of the data. For example, the amount of uncertainty reported on the experimental data plays an essential role in how data points are weighted during the calibration, and this significantly affects how the parameters are calibrated when combining experimental data sets from disparate sources. Published by Elsevier Ltd.
glmnetLRC f/k/a lrc package: Logistic Regression Classification
DOE Office of Scientific and Technical Information (OSTI.GOV)
2016-06-09
Methods for fitting and predicting logistic regression classifiers (LRC) with an arbitrary loss function using elastic net or best subsets. This package adds additional model fitting features to the existing glmnet and bestglm R packages. This package was created to perform the analyses described in Amidan BG, Orton DJ, LaMarche BL, et al. 2014. Signatures for Mass Spectrometry Data Quality. Journal of Proteome Research. 13(4), 2215-2222. It makes the model fitting available in the glmnet and bestglm packages more general by identifying optimal model parameters via cross validation with a customizable loss function. It also identifies the optimal threshold for binary classification.
NASA Astrophysics Data System (ADS)
Horvath, Sarah; Myers, Sam; Ahlers, Johnathon; Barnes, Jason W.
2017-10-01
Stellar seismic activity produces variations in brightness that introduce oscillations into transit light curves, which can create challenges for traditional fitting models. These oscillations disrupt baseline stellar flux values and can potentially mask transits. We develop a model that removes these oscillations from transit light curves by minimizing the significance of each oscillation in frequency space. By removing stellar variability, we prepare each light curve for traditional fitting techniques. We apply our model to the δ Scuti star KOI-976 and demonstrate that our variability subtraction routine successfully allows for measuring bulk system characteristics using traditional light curve fitting. These results open a new window for characterizing bulk system parameters of planets orbiting seismically active stars.
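The frequency-space idea above can be illustrated minimally: find the dominant oscillation in the periodogram, suppress it, and reconstruct the cleaned light curve. A single sinusoid stands in for the δ Scuti pulsations and a box stands in for the transit; all numbers are illustrative.

```python
import numpy as np

# synthetic light curve: flat star + one pulsation mode + a toy box transit
t = np.arange(0.0, 10.0, 0.005)                      # time, e.g. days
transit = np.where((t > 4.9) & (t < 5.1), -0.002, 0.0)
flux = 1.0 + 0.001 * np.sin(2 * np.pi * 3.2 * t) + transit

# locate the strongest oscillation in frequency space
spec = np.fft.rfft(flux - flux.mean())
freqs = np.fft.rfftfreq(t.size, d=t[1] - t[0])
k = int(np.argmax(np.abs(spec[1:]))) + 1             # skip the DC bin
f_puls = freqs[k]                                    # recovered pulsation frequency

# zero that component and reconstruct the cleaned light curve
spec[k] = 0.0
clean = np.fft.irfft(spec, n=t.size) + flux.mean()
```

Real pulsation spectra contain many modes and are not bin-aligned, so in practice each significant peak is modeled and minimized iteratively rather than zeroed in a single pass; the sketch only shows the core subtraction step.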
Obtaining short-fiber orientation model parameters using non-lubricated squeeze flow
NASA Astrophysics Data System (ADS)
Lambert, Gregory; Wapperom, Peter; Baird, Donald
2017-12-01
Accurate models of fiber orientation dynamics during the processing of polymer-fiber composites are needed for the design work behind important automobile parts. All of the existing models utilize empirical parameters, but a standard method for obtaining them independent of processing does not exist. This study considers non-lubricated squeeze flow through a rectangular channel as a solution. A two-dimensional finite element method simulation of the kinematics and fiber orientation evolution along the centerline of a sample is developed as a first step toward a fully three-dimensional simulation. The model is used to fit to orientation data in a short-fiber-reinforced polymer composite after squeezing. Fiber orientation model parameters obtained in this study do not agree well with those obtained for the same material during startup of simple shear. This is attributed to the vastly different rates at which fibers orient during shearing and extensional flows. A stress model is also used to try to fit to experimental closure force data. Although the model can be tuned to the correct magnitude of the closure force, it does not fully recreate the transient behavior, which is attributed to the lack of any consideration for fiber-fiber interactions.
Data error and highly parameterized groundwater models
Hill, M.C.
2008-01-01
Strengths and weaknesses of highly parameterized models, in which the number of parameters exceeds the number of observations, are demonstrated using a synthetic test case. Results suggest that the approach can yield close matches to observations but also serious errors in system representation. It is proposed that avoiding the difficulties of highly parameterized models requires close evaluation of: (1) model fit, (2) performance of the regression, and (3) estimated parameter distributions. Comparisons to hydrogeologic information are expected to be critical to obtaining credible models. Copyright © 2008 IAHS Press.
Nowke, Christian; Diaz-Pier, Sandra; Weyers, Benjamin; Hentschel, Bernd; Morrison, Abigail; Kuhlen, Torsten W.; Peyser, Alexander
2018-01-01
Simulation models in many scientific fields can have non-unique solutions or unique solutions which can be difficult to find. Moreover, in evolving systems, unique final state solutions can be reached by multiple different trajectories. Neuroscience is no exception. Often, neural network models are subject to parameter fitting to obtain desirable output comparable to experimental data. Parameter fitting without sufficient constraints and a systematic exploration of the possible solution space can lead to conclusions valid only around local minima or around non-minima. To address this issue, we have developed an interactive tool for visualizing and steering parameters in neural network simulation models. In this work, we focus particularly on connectivity generation, since finding suitable connectivity configurations for neural network models constitutes a complex parameter search scenario. The development of the tool has been guided by several use cases: the tool allows researchers to steer the parameters of the connectivity generation during the simulation, thus quickly growing networks composed of multiple populations with a targeted mean activity. The flexibility of the software allows scientists to explore other connectivity and neuron variables apart from the ones presented as use cases. With this tool, we enable interactive exploration of parameter spaces, promote a better understanding of neural network models, and grapple with the crucial problem of non-unique network solutions and trajectories. In addition, we observe a reduction in turnaround times for the assessment of these models, due to interactive visualization while the simulation is computed. PMID:29937723
Modeling envelope statistics of blood and myocardium for segmentation of echocardiographic images.
Nillesen, Maartje M; Lopata, Richard G P; Gerrits, Inge H; Kapusta, Livia; Thijssen, Johan M; de Korte, Chris L
2008-04-01
The objective of this study was to investigate the use of speckle statistics as a preprocessing step for segmentation of the myocardium in echocardiographic images. Three-dimensional (3D) and biplane image sequences of the left ventricle of two healthy children and one dog (beagle) were acquired. Pixel-based speckle statistics of manually segmented blood and myocardial regions were investigated by fitting various probability density functions (pdf). The statistics of heart muscle and blood could both be optimally modeled by a K-pdf or Gamma-pdf (Kolmogorov-Smirnov goodness-of-fit test). Scale and shape parameters of both distributions could differentiate between blood and myocardium. Local estimation of these parameters was used to obtain parametric images, where window size was related to speckle size (5 x 2 speckles). Moment-based and maximum-likelihood estimators were used. Scale parameters were still able to differentiate blood from myocardium; however, smoothing of edges of anatomical structures occurred. Estimation of the shape parameter required a larger window size, leading to unacceptable blurring. Using these parameters as an input for segmentation resulted in unreliable segmentation. Adaptive mean squares filtering was then introduced using the moment-based scale parameter (sigma(2)/mu) of the Gamma-pdf to automatically steer the two-dimensional (2D) local filtering process. This method adequately preserved sharpness of the edges. In conclusion, a trade-off between preservation of sharpness of edges and goodness-of-fit when estimating local shape and scale parameters is evident for parametric images. For this reason, adaptive filtering outperforms parametric imaging for the segmentation of echocardiographic images.
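The moment-based parametric imaging described above can be sketched with local Gamma shape and scale estimates in a sliding window over an envelope image. The synthetic "speckle," window size, and distribution parameters below are illustrative, not the study's settings.

```python
import numpy as np
from scipy.ndimage import uniform_filter

rng = np.random.default_rng(3)

# synthetic envelope image: "blood" (left half) and "myocardium" (right half)
blood = rng.gamma(shape=1.5, scale=0.5, size=(64, 32))
muscle = rng.gamma(shape=4.0, scale=1.0, size=(64, 32))
img = np.hstack([blood, muscle])

def gamma_moment_maps(img, w=7):
    """Moment-based local Gamma estimates: shape = mu^2/var, scale = var/mu (= sigma^2/mu)."""
    mu = uniform_filter(img, w)
    var = uniform_filter(img ** 2, w) - mu ** 2
    return mu ** 2 / var, var / mu

shape_map, scale_map = gamma_moment_maps(img)
```

The `var / mu` map is the same sigma-squared-over-mu quantity the abstract uses to steer adaptive filtering; the sketch also shows why small windows favor the scale parameter, since the shape estimate is much noisier at this window size.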
Sfakiotakis, Stelios; Vamvuka, Despina
2015-12-01
The pyrolysis of six waste biomass samples was studied and the fuels were kinetically evaluated. A modified independent parallel reactions scheme (IPR) and a distributed activation energy model (DAEM) were developed, and their validity was assessed and compared by checking their accuracy in fitting the experimental results, as well as their prediction capability under different experimental conditions. The pyrolysis experiments were carried out in a thermogravimetric analyzer, and a fitting procedure based on least squares minimization was performed simultaneously at different experimental conditions. A modification of the IPR model, considering dependence of the pre-exponential factor on heating rate, was proved to give better fit results for the same number of tuned kinetic parameters compared to the standard IPR model, and very good prediction results for stepwise experiments. The fit of calculated data to the experimental data using the developed DAEM model was also very good. Copyright © 2015 Elsevier Ltd. All rights reserved.
Application of physical parameter identification to finite-element models
NASA Technical Reports Server (NTRS)
Bronowicki, Allen J.; Lukich, Michael S.; Kuritz, Steven P.
1987-01-01
The time domain parameter identification method described previously is applied to TRW's Large Space Structure Truss Experiment. Only control sensors and actuators are employed in the test procedure. The fit of the linear structural model to the test data is improved by more than an order of magnitude using a physically reasonable parameter set. The electro-magnetic control actuators are found to contribute significant damping due to a combination of eddy current and back electro-motive force (EMF) effects. Uncertainties in both estimated physical parameters and modal behavior variables are given.
NASA Astrophysics Data System (ADS)
De Geyter, Gert; Baes, Maarten; Camps, Peter; Fritz, Jacopo; De Looze, Ilse; Hughes, Thomas M.; Viaene, Sébastien; Gentile, Gianfranco
2014-06-01
We investigate the amount and spatial distribution of interstellar dust in edge-on spiral galaxies, using detailed radiative transfer modelling of a homogeneous sample of 12 galaxies selected from the Calar Alto Legacy Integral Field Area survey. Our automated fitting routine, FITSKIRT, was first validated against artificial data. This was done by simultaneously reproducing the Sloan Digital Sky Survey g-, r-, i- and z-band observations of a toy model in order to combine the information present in the different bands. We show that this combined, oligochromatic fitting has clear advantages over standard monochromatic fitting, especially regarding constraints on the dust properties. We model all galaxies in our sample using a three-component model, consisting of a double-exponential disc to describe the stellar and dust discs and a Sérsic profile to describe the central bulge. The full model contains 19 free parameters, and we are able to constrain all these parameters to a satisfactory level of accuracy without human intervention or strong boundary conditions. Apart from two galaxies, the entire sample can be accurately reproduced by our model. We find that the dust disc is about 75 per cent more extended but only half as high as the stellar disc. The average face-on optical depth in the V band is 0.76 and the spread of 0.60 within our sample is quite substantial, which indicates that some spiral galaxies are relatively opaque even when seen face-on.
Observational constraint on dynamical evolution of dark energy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gong, Yungui; Cai, Rong-Gen; Chen, Yun
2010-01-01
We use the Constitution supernova, baryon acoustic oscillation, cosmic microwave background, and Hubble parameter data to analyze the evolution property of dark energy. We obtain different results when we fit different baryon acoustic oscillation data combined with the Constitution supernova data to the Chevallier-Polarski-Linder model. We find that the difference stems from the different values of Ω_m0. We also fit the observational data to the model-independent piecewise constant parametrization. Four redshift bins with boundaries at z = 0.22, 0.53, 0.85 and 1.8 were chosen for the piecewise constant parametrization of the equation of state parameter w(z) of dark energy. We find no significant evidence for evolving w(z). With the addition of the Hubble parameter data, the constraint on the equation of state parameter at high redshift is improved by 70%. The marginalization of the nuisance parameter connected to the supernova distance modulus is discussed.
Ravald, L; Fornstedt, T
2001-01-26
The bi-Langmuir equation has recently been proven essential for describing chiral chromatographic surfaces, and we therefore investigated the accuracy of the elution by characteristic points (ECP) method for estimation of bi-Langmuir isotherm parameters. The ECP calculations were performed on elution profiles generated by the equilibrium-dispersive model of chromatography for five different sets of bi-Langmuir parameters. The ECP method generates two different errors: (i) the error of the ECP-calculated isotherm and (ii) the model error of the fitting to the ECP isotherm. Both errors decreased with increasing column efficiency. Moreover, the model error was strongly affected by the weight of the bi-Langmuir function fitted. For some bi-Langmuir compositions, the error of the ECP-calculated isotherm is too large even at high column efficiencies. Guidelines are given on surface types to be avoided and on the column efficiencies and loading factors required for adequate parameter estimation with ECP.
Simplified contaminant source depletion models as analogs of multiphase simulators
NASA Astrophysics Data System (ADS)
Basu, Nandita B.; Fure, Adrian D.; Jawitz, James W.
2008-04-01
Four simplified dense non-aqueous phase liquid (DNAPL) source depletion models recently introduced in the literature are evaluated for the prediction of long-term effects of source depletion under natural gradient flow. These models are simple in form (a power function equation is an example) but are shown here to serve as mathematical analogs to complex multiphase flow and transport simulators. The spill and subsequent dissolution of DNAPLs was simulated in domains having different hydrologic characteristics (variance of the log conductivity field = 0.2, 1 and 3) using the multiphase flow and transport simulator UTCHEM. The dissolution profiles were fitted using four analytical models: the equilibrium streamtube model (ESM), the advection dispersion model (ADM), the power law model (PLM) and the Damkohler number model (DaM). All four models, though very different in their conceptualization, include two basic parameters that describe the mean DNAPL mass and the joint variability in the velocity and DNAPL distributions. The variability parameter was observed to be strongly correlated with the variance of the log conductivity field in the ESM and ADM but weakly correlated in the PLM and DaM. The DaM also includes a third parameter that describes the effect of rate-limited dissolution, but here this parameter was held constant as the numerical simulations were found to be insensitive to local-scale mass transfer. All four models were able to emulate the characteristics of the dissolution profiles generated from the complex numerical simulator, but the one-parameter PLM fits were the poorest, especially for the low heterogeneity case.
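The one-parameter power law model (PLM) noted above is the simplest of the four analogs: relative source concentration as a power function of remaining mass fraction, C/C0 = (M/M0)^Γ. The synthetic dissolution profile and Γ value below are illustrative, not UTCHEM output.

```python
import numpy as np
from scipy.optimize import curve_fit

def plm(m_rel, gamma):
    """Power law model: relative source concentration vs relative DNAPL mass."""
    return m_rel ** gamma

# synthetic "simulator" dissolution profile with mild multiplicative noise
m_rel = np.linspace(1.0, 0.05, 30)
rng = np.random.default_rng(4)
c_rel = m_rel ** 1.4 * (1.0 + 0.02 * rng.normal(size=m_rel.size))

(gamma_fit,), _ = curve_fit(plm, m_rel, c_rel, p0=[1.0])
```

With only one shape parameter, this form has the least flexibility of the four models, which is consistent with the abstract's finding that the PLM fits were poorest in the low-heterogeneity case.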
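The one-parameter power law model (PLM) of source depletion can be sketched as follows; the specific form dM/dt = −k·M^Γ for the relative DNAPL mass and the parameter values are assumptions for illustration, not taken from the study:

```python
import numpy as np

def plm_mass(t, k, gamma):
    """Relative DNAPL mass under a power-law depletion model dM/dt = -k*M**gamma,
    with M(0) = 1. Closed form for gamma != 1; mass is floored at zero."""
    core = 1.0 - (1.0 - gamma) * k * t
    return np.clip(core, 0.0, None) ** (1.0 / (1.0 - gamma))

def plm_conc(t, k, gamma):
    # Effluent concentration taken proportional to M**gamma (normalized to 1)
    return plm_mass(t, k, gamma) ** gamma

t = np.linspace(0.0, 5.0, 200)
c = plm_conc(t, k=0.8, gamma=0.6)   # monotonically declining dissolution profile
```

For Γ < 1 the closed form predicts complete depletion at the finite time t = 1/((1−Γ)k), after which mass and concentration stay at zero.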
Comparison between two scalar field models using rotation curves of spiral galaxies
NASA Astrophysics Data System (ADS)
Fernández-Hernández, Lizbeth M.; Rodríguez-Meza, Mario A.; Matos, Tonatiuh
2018-04-01
Scalar fields have been used as candidates for dark matter in the universe, from axions with masses ∼10⁻⁵ eV down to ultra-light scalar fields with far smaller masses. Axions behave as cold dark matter, while in the ultra-light case galaxies are Bose-Einstein condensate drops; this ultra-light case is also called the scalar field dark matter model. In this work we study rotation curves of low surface brightness spiral galaxies using two scalar field models: the Gross-Pitaevskii Bose-Einstein condensate in the Thomas-Fermi approximation and a scalar field solution of the Klein-Gordon equation. We also use the zero-disk approximation galaxy model, in which photometric data are not considered and only the scalar field dark matter contribution to the rotation curve is taken into account. From the best-fitting analysis of the galaxy catalog we use, we found the range of values of the fitting parameters: the length scale and the central density. The worst fitting results (values of χ²_red much greater than 1, on average) were for the Thomas-Fermi model, i.e., the scalar field dark matter model fits the rotation curves of the analysed galaxies better than the Thomas-Fermi approximation. To complete our analysis we compute from the fitting parameters the mass of the scalar field models and two astrophysical quantities of interest: the dynamical dark matter mass within 300 pc and the characteristic central surface density of the dark matter models. We find that the central mass within 300 pc is ≈10⁷ M_⊙, in agreement with previously reported results and independent of the dark matter model. In contrast, the value of the characteristic central surface density does depend on the dark matter model.
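The χ²_red goodness-of-fit metric used to compare the two models can be computed as follows; the velocities and error bars below are made-up values for illustration:

```python
import numpy as np

def chi2_red(v_obs, v_err, v_model, n_params):
    """Reduced chi-square of a rotation-curve fit: error-weighted squared
    residuals divided by the degrees of freedom (points minus parameters)."""
    resid = (v_obs - v_model) / v_err
    return np.sum(resid**2) / (v_obs.size - n_params)

# Illustrative check with hypothetical velocities (km/s)
v_obs = np.array([60.0, 85.0, 100.0, 108.0, 112.0])
v_err = np.full(5, 5.0)
v_model = v_obs + 5.0    # model off by exactly one error bar at every point
x2 = chi2_red(v_obs, v_err, v_model, n_params=2)   # 5 residuals of 1, dof = 3
```

A value near 1 indicates residuals consistent with the error bars; values much greater than 1 (as reported for the Thomas-Fermi fits) indicate a poor model.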
Dron, Julien; Dodi, Alain
2011-06-15
The removal of chloride, nitrate and sulfate ions from aqueous solutions by a macroporous resin is studied through the ion exchange systems OH⁻/Cl⁻, OH⁻/NO₃⁻, OH⁻/SO₄²⁻, and HCO₃⁻/Cl⁻, Cl⁻/NO₃⁻, Cl⁻/SO₄²⁻. They are investigated by means of Langmuir, Freundlich, Dubinin-Radushkevitch (D-R) and Dubinin-Astakhov (D-A) single-component adsorption isotherms. The sorption parameters and the fitting of the models are determined by nonlinear regression and discussed. The Langmuir model provides a fair estimation of the sorption capacity whatever the system under study, contrary to the Freundlich and D-R models. The adsorption energies deduced from the Dubinin and Langmuir isotherms are in good agreement, and the surface parameter of the D-A isotherm appears consistent. All models agree on the order of affinity OH⁻
Zarrinabadi, Zarrin; Isfandyari-Moghaddam, Alireza; Erfani, Nasrolah; Tahour Soltani, Mohsen Ahmadi
2018-01-01
INTRODUCTION: According to the research mission of the librarianship and information sciences field, these students must be able to communicate constructively between information users and information, which is all the more important in medical librarianship and information sciences because clinicians need quick access to information. The role of spiritual intelligence in the capability to establish effective and balanced communication makes it important to study this variable in librarianship and information students. One of the main factors that can affect the results of any research is the conceptual model of the measured variables. Accordingly, the purpose of this study was codification of a spiritual intelligence measurement model. METHODS: This correlational study was conducted through structural equation modeling; 270 students were selected from library and medical information students of nationwide medical universities by simple random sampling and responded to the King spiritual intelligence questionnaire (2008). Initially, the model parameters were estimated from the data using the maximum likelihood method; then, the spiritual intelligence measurement model was tested by fit indices. Data analysis was performed with Smart-Partial Least Squares software. RESULTS: Preliminary results showed that, given the positive indicators of predictive association and t-test results for the spiritual intelligence parameters, the King measurement model has acceptable fit, and the internal correlation of the questionnaire items was significant. Composite reliability and Cronbach's alpha of the parameters indicated high reliability of the spiritual intelligence model. CONCLUSIONS: The spiritual intelligence measurement model was evaluated, and results showed that the model has a good fit, so it is recommended that domestic researchers use this questionnaire to assess spiritual intelligence. PMID:29922688
Modelling chemical depletion profiles in regolith
Brantley, S.L.; Bandstra, J.; Moore, J.; White, A.F.
2008-01-01
Chemical or mineralogical profiles in regolith display reaction fronts that document depletion of leachable elements or minerals. A generalized equation employing lumped parameters was derived to model such ubiquitously observed patterns: C = C₀ / {[(C₀ − C_{x=0}) / C_{x=0}] exp(λ_ini k̄ x) + 1}. Here C, C_{x=0}, and C₀ are the concentrations of an element at a given depth x, at the top of the reaction front, or in the parent, respectively. λ_ini is the roughness of the dissolving mineral in the parent and k̄ is a lumped kinetic parameter. This kinetic parameter is an inverse function of the pore-fluid advective velocity and a direct function of the dissolution rate constant times mineral surface area per unit volume of regolith. This model equation fits profiles of concentration versus depth for albite in seven weathering systems and is consistent with the interpretation that the surface area (m² mineral m⁻³ bulk regolith) varies linearly with the concentration of the dissolving mineral across the front. Dissolution rate constants can be calculated from the lumped fit parameters for these profiles using observed values of weathering advance rate, the proton driving force, the geometric surface area per unit volume of regolith and the parent concentration of albite. These calculated values of the dissolution rate constant compare favorably to literature values. The model equation, useful for reaction fronts in both steady-state erosional and quasi-stationary non-erosional systems, incorporates the variation of reaction affinity using pH as a master variable. Use of this model equation to fit depletion fronts for soils highlights the importance of buffering of pH in the soil system. Furthermore, the equation should allow better understanding of the effects of important environmental variables on weathering rates.
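The depletion-front equation can be evaluated numerically as a sketch; the sign convention (x increasing downward, so that C rises from C_{x=0} at the top of the front toward the parent value C₀ at depth) and all parameter values are assumptions for illustration:

```python
import numpy as np

def depletion_profile(x, c0, c_top, lam_k):
    """Element concentration across a regolith reaction front:
    C(x) = C0 / (((C0 - c_top)/c_top) * exp(-lam_k * x) + 1),
    where lam_k lumps the mineral roughness and kinetic parameter.
    At x = 0 this returns c_top; at depth it approaches the parent value c0."""
    return c0 / (((c0 - c_top) / c_top) * np.exp(-lam_k * x) + 1.0)

depth = np.linspace(0.0, 10.0, 50)    # m, illustrative
c = depletion_profile(depth, c0=8.0, c_top=0.5, lam_k=1.2)
```

Fitting lam_k to a measured concentration-depth profile would then yield the lumped kinetic parameter from which a dissolution rate constant can be backed out, as described in the abstract.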
Latent Trait Model Contributions to Criterion-Referenced Testing Technology.
1982-02-01
levels of ability (ranging from very low to very high). The steps in the research were as follows: 1. Specify the characteristics of a "typical" pool...conventional testing methodologies displayed good fit to both of the latent trait models. The one-parameter model compared favorably with the three-parameter... Methodological developments: New directions for testing and measurement (No. 4). San Francisco: Jossey-Bass, 1979. Hambleton, R. K. Advances in
Fuzzy Performance between Surface Fitting and Energy Distribution in Turbulence Runner
Liang, Zhongwei; Liu, Xiaochu; Ye, Bangyan; Brauwer, Richard Kars
2012-01-01
Because the application of surface fitting algorithms exerts a considerable fuzzy influence on the mathematical features of kinetic energy distribution, their relation mechanism under different external conditional parameters must be quantitatively analyzed. After determining the kinetic energy value at each selected representative coordinate point from the calculated kinetic energy parameters, several typical complicated-surface fitting algorithms are applied to construct micro kinetic energy distribution surface models of the objective turbulence runner from the obtained kinetic energy values. On the basis of newly proposed mathematical features, we construct a fuzzy evaluation data sequence and present a new three-dimensional fuzzy quantitative evaluation method; the change tendencies of the kinetic energy distribution surface features can then be clearly quantified, and the fuzzy performance mechanism linking the results of the surface fitting algorithms, the spatial features of the turbulence kinetic energy distribution surface, and their respective environmental parameter conditions can be analyzed quantitatively in detail, yielding conclusions concerning the inherent turbulence kinetic energy distribution mechanism and its mathematical relations. A further quantitative study of turbulence energy is thus enabled. PMID:23213287
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lundqvist, A.; Lindbergh, G.
1998-11-01
A potential-step method for determining the diffusion coefficient and phase-transfer parameter in metal hydrides by using microelectrodes was investigated. It was shown that a large potential step is not enough to ensure a completely diffusion-limited mass transfer if a surface phase-transfer reaction takes place at a finite rate. It was shown, using a kinetic expression for the surface phase-transfer reaction, that the slope of the logarithm of the current vs. time curve will be constant both when mass transfer is limited by diffusion alone and when it is limited by diffusion combined with a surface phase transfer. The diffusion coefficient and phase-transfer rate parameter were accurately determined for MmNi₃.₆Co₀.₈Mn₀.₄Al₀.₃ using a fit to the whole transient. The diffusion coefficient was found to be (1.3 ± 0.3) × 10⁻¹³ m²/s. The fit was good and showed that a pure diffusion model was not enough to explain the observed transient. The diffusion coefficient and phase-transfer rate parameter were also estimated for pure LaNi₅. A fit of the whole curve showed that neither a pure diffusion model nor a model including phase transfer could explain the whole transient.
A model of the instantaneous pressure-velocity relationships of the neonatal cerebral circulation.
Panerai, R B; Coughtrey, H; Rennie, J M; Evans, D H
1993-11-01
The instantaneous relationship between arterial blood pressure (BP) and cerebral blood flow velocity (CBFV), measured with Doppler ultrasound in the anterior cerebral artery, is represented by a vascular waterfall model comprising vascular resistance, compliance, and critical closing pressure. One min recordings obtained from 61 low birth weight newborns were fitted to the model using a least-squares procedures with correction for the time delay between the BP and CBFV signals. A sensitivity analysis was performed to study the effects of low-pass filtering (LPF), cutoff frequency, and noise on the estimated parameters of the model. Results indicate excellent fitting of the model (F-test, p < 0.0001) when the BP and CBFV signals are LPF at 7.5 Hz. Reconstructed CBFV waveforms using the BP signal and the model parameters have a mean correlation coefficient of 0.94 with the measured flow velocity tracing (N = 232 epochs). The model developed can be useful for interpreting clinical findings and as a framework for research into cerebral autoregulation.
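Because the waterfall model v(t) = (p(t) − CCP)/R + C·dp/dt is linear in 1/R, CCP/R and C, its parameters can be estimated by ordinary linear least squares. The sketch below assumes this model form and uses illustrative synthetic signals, not patient data:

```python
import numpy as np

def fit_waterfall(t, p, v):
    """Estimate resistance R, critical closing pressure CCP, and the
    compliance coefficient from pressure p(t) and velocity v(t) by
    linear least squares on v = a*p + b + c*dp/dt (a=1/R, b=-CCP/R)."""
    dpdt = np.gradient(p, t)
    A = np.column_stack([p, np.ones_like(p), dpdt])
    (a, b, c), *_ = np.linalg.lstsq(A, v, rcond=None)
    R = 1.0 / a
    return R, -b * R, c

# Synthetic pulsatile pressure (mmHg) and the exact model response
t = np.linspace(0.0, 2.0, 2001)                        # s
w = 2 * np.pi * 2.5                                    # 2.5 Hz "heart rate"
p = 60.0 + 10.0 * np.sin(w * t)
v = (p - 20.0) / 2.0 + 0.05 * (10.0 * w * np.cos(w * t))   # R=2, CCP=20, C=0.05
R, ccp, cc = fit_waterfall(t, p, v)
```

The time-delay correction and low-pass filtering described in the abstract are omitted here for brevity.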
Gonçalves, M A D; Bello, N M; Dritz, S S; Tokach, M D; DeRouchey, J M; Woodworth, J C; Goodband, R D
2016-05-01
Advanced methods for dose-response assessments are used to estimate the minimum concentrations of a nutrient that maximizes a given outcome of interest, thereby determining nutritional requirements for optimal performance. Contrary to standard modeling assumptions, experimental data often present a design structure that includes correlations between observations (i.e., blocking, nesting, etc.) as well as heterogeneity of error variances; either can mislead inference if disregarded. Our objective is to demonstrate practical implementation of linear and nonlinear mixed models for dose-response relationships accounting for correlated data structure and heterogeneous error variances. To illustrate, we modeled data from a randomized complete block design study to evaluate the standardized ileal digestible (SID) Trp:Lys ratio dose-response on G:F of nursery pigs. A base linear mixed model was fitted to explore the functional form of G:F relative to Trp:Lys ratios and assess model assumptions. Next, we fitted 3 competing dose-response mixed models to G:F, namely a quadratic polynomial (QP) model, a broken-line linear (BLL) ascending model, and a broken-line quadratic (BLQ) ascending model, all of which included heteroskedastic specifications, as dictated by the base model. The GLIMMIX procedure of SAS (version 9.4) was used to fit the base and QP models and the NLMIXED procedure was used to fit the BLL and BLQ models. We further illustrated the use of a grid search of initial parameter values to facilitate convergence and parameter estimation in nonlinear mixed models. Fit between competing dose-response models was compared using a maximum likelihood-based Bayesian information criterion (BIC). The QP, BLL, and BLQ models fitted on G:F of nursery pigs yielded BIC values of 353.7, 343.4, and 345.2, respectively, thus indicating a better fit of the BLL model. The BLL breakpoint estimate of the SID Trp:Lys ratio was 16.5% (95% confidence interval [16.1, 17.0]). 
Problems with the estimation process rendered results from the BLQ model questionable. Importantly, accounting for heterogeneous variance enhanced inferential precision as the breadth of the confidence interval for the mean breakpoint decreased by approximately 44%. In summary, the article illustrates the use of linear and nonlinear mixed models for dose-response relationships accounting for heterogeneous residual variances, discusses important diagnostics and their implications for inference, and provides practical recommendations for computational troubleshooting.
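A broken-line linear (BLL) ascending fit can be sketched with plain nonlinear least squares; this omits the mixed-model random effects and heteroskedastic variances central to the study, and the data below are noiseless synthetic values loosely echoing the reported 16.5% breakpoint:

```python
import numpy as np
from scipy.optimize import curve_fit

def bll(x, plateau, slope, bp):
    # Broken-line linear ascending model: linear rise up to breakpoint bp,
    # then a constant plateau beyond it
    return np.where(x < bp, plateau - slope * (bp - x), plateau)

ratio = np.linspace(14.0, 24.0, 60)     # hypothetical SID Trp:Lys ratios, %
gf = bll(ratio, 0.70, 0.02, 16.5)       # synthetic G:F response
popt, pcov = curve_fit(bll, ratio, gf, p0=[0.68, 0.015, 17.0], maxfev=10000)
```

In the study itself the breakpoint and its confidence interval came from NLMIXED-style nonlinear mixed models, which additionally propagate the block and variance structure into the interval width.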
Hsu, Shu-Hui; Kulasekere, Ravi; Roberson, Peter L
2010-08-05
Film calibration is time-consuming work when dose accuracy is essential while working in a range of photon scatter environments. This study uses the single-target single-hit model of film response to fit the calibration curves as a function of calibration method, processor condition, field size and depth. Kodak XV film was irradiated perpendicular to the beam axis in a solid water phantom. Standard calibration films (one dose point per film) were irradiated at 90 cm source-to-surface distance (SSD) for various doses (16-128 cGy), depths (0.2, 0.5, 1.5, 5, 10 cm) and field sizes (5 × 5, 10 × 10 and 20 × 20 cm²). The 8-field calibration method (eight dose points per film) was used as a reference for each experiment, taken at 95 cm SSD and 5 cm depth. The delivered doses were measured using an Attix parallel plate chamber for improved accuracy of dose estimation in the buildup region. Three fitting methods with one to three dose points per calibration curve were investigated for the field sizes of 5 × 5, 10 × 10 and 20 × 20 cm². The inter-day variation of model parameters (background, saturation and slope) were 1.8%, 5.7%, and 7.7% (1 σ) using the 8-field method. The saturation parameter ratio of standard to 8-field curves was 1.083 ± 0.005. The slope parameter ratio of standard to 8-field curves ranged from 0.99 to 1.05, depending on field size and depth. The slope parameter ratio decreases with increasing depth below 0.5 cm for the three field sizes. It increases with increasing depths above 0.5 cm. A calibration curve with one to three dose points fitted with the model is possible with 2% accuracy in film dosimetry for various irradiation conditions. The proposed fitting methods may reduce workload while providing energy dependence correction in radiographic film dosimetry. This study is limited to radiographic XV film with a Lumisys scanner.
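The single-target single-hit response can be written as OD(D) = background + saturation·(1 − exp(−slope·D)) and fitted to a handful of calibration points; this functional form is a common reading of the model named in the abstract, and the numbers below are illustrative, not measured film data:

```python
import numpy as np
from scipy.optimize import curve_fit

def single_hit(dose, background, saturation, slope):
    # Optical density saturates exponentially with dose: the three parameters
    # match the background/saturation/slope terminology of the abstract
    return background + saturation * (1.0 - np.exp(-slope * dose))

dose = np.array([0.0, 16.0, 32.0, 64.0, 96.0, 128.0])   # cGy, as in the study
od = single_hit(dose, 0.2, 2.5, 0.012)                  # synthetic calibration
popt, pcov = curve_fit(single_hit, dose, od, p0=[0.1, 2.0, 0.01])
```

With the model constraining the curve shape, even one to three measured dose points can pin down a usable calibration, which is the workload reduction the study reports.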
Nozaki, Sachiko; Yamaguchi, Masayuki; Lefèvre, Gilbert
2016-07-01
Rivastigmine is an inhibitor of acetylcholinesterases and butyrylcholinesterases for symptomatic treatment of Alzheimer disease and is available as oral and transdermal patch formulations. A dermal absorption pharmacokinetic (PK) model was developed to simulate the plasma concentration-time profile of rivastigmine to answer questions relative to the efficacy and safety risks after misuse of the patch (e.g., longer application than 24 h, multiple patches applied at the same time, and so forth). The model comprised 2 compartments which was a combination of mechanistic dermal absorption model and a basic 1-compartment model. The initial values for the model were determined based on the physicochemical characteristics of rivastigmine and PK parameters after intravenous administration. The model was fitted to the clinical PK profiles after single application of rivastigmine patch to obtain model parameters. The final model was validated by confirming that the simulated concentration-time curves and PK parameters (Cmax and area under the drug plasma concentration-time curve) conformed to the observed values and then was used to simulate the PK profiles of rivastigmine. This work demonstrated that the mechanistic dermal PK model fitted the clinical data well and was able to simulate the PK profile after patch misuse.
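The flavor of such a simulation can be illustrated with the classic one-compartment model with first-order input (the Bateman equation), a much simpler stand-in for the mechanistic dermal model described above; all parameter values are assumed for illustration and are not rivastigmine data:

```python
import numpy as np

def conc_profile(t, dose, ka, ke, v):
    """Plasma concentration for a one-compartment model with first-order
    absorption (Bateman equation): C(t) = D*ka/(V*(ka-ke)) * (e^-ke*t - e^-ka*t).
    Valid for ka != ke; ka < ke gives the 'flip-flop' kinetics typical of
    slow transdermal input."""
    return dose * ka / (v * (ka - ke)) * (np.exp(-ke * t) - np.exp(-ka * t))

t = np.linspace(0.0, 48.0, 2001)                          # h
c = conc_profile(t, dose=9.0, ka=0.15, ke=0.45, v=400.0)  # assumed values
tmax = float(t[np.argmax(c)])   # analytically ln(ka/ke)/(ka-ke)
```

Misuse scenarios such as overlapping patches could then be explored by superposing shifted copies of this profile, in the spirit of the simulations described in the abstract.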
Introduction of Shear-Based Transport Mechanisms in Radial-Axial Hybrid Hall Thruster Simulations
NASA Astrophysics Data System (ADS)
Scharfe, Michelle; Gascon, Nicolas; Scharfe, David; Cappelli, Mark; Fernandez, Eduardo
2007-11-01
Electron diffusion across magnetic field lines in Hall effect thrusters is experimentally observed to be higher than predicted by classical diffusion theory. Motivated by theoretical work for fusion applications and experimental measurements of Hall thrusters, numerical models for the electron transport are implemented in radial-axial hybrid simulations in order to compute the electron mobility using simulated plasma properties and fitting parameters. These models relate the cross-field transport to the imposed magnetic field distribution through shear suppression of turbulence-enhanced transport. While azimuthal waves likely enhance cross field mobility, axial shear in the electron fluid may reduce transport due to a reduction in turbulence amplitudes and modification of phase shifts between fluctuating properties. The sensitivity of the simulation results to the fitting parameters is evaluated and an examination is made of the transportability of these parameters to several Hall thruster devices.
Taking the mystery out of mathematical model applications to karst aquifers—A primer
Kuniansky, Eve L.
2014-01-01
Advances in mathematical model applications toward the understanding of the complex flow, characterization, and water-supply management issues for karst aquifers have occurred in recent years. Different types of mathematical models can be applied successfully if appropriate information is available and the problems are adequately identified. The mathematical approaches discussed in this paper are divided into three major categories: 1) distributed parameter models, 2) lumped parameter models, and 3) fitting models. The modeling approaches are described conceptually with examples (but without equations) to help non-mathematicians understand the applications.
SBML-PET-MPI: a parallel parameter estimation tool for Systems Biology Markup Language based models.
Zi, Zhike
2011-04-01
Parameter estimation is crucial for the modeling and dynamic analysis of biological systems. However, implementing parameter estimation is time consuming and computationally demanding. Here, we introduced a parallel parameter estimation tool for Systems Biology Markup Language (SBML)-based models (SBML-PET-MPI). SBML-PET-MPI allows the user to perform parameter estimation and parameter uncertainty analysis by collectively fitting multiple experimental datasets. The tool is developed and parallelized using the message passing interface (MPI) protocol, which provides good scalability with the number of processors. SBML-PET-MPI is freely available for non-commercial use at http://www.bioss.uni-freiburg.de/cms/sbml-pet-mpi.html or http://sites.google.com/site/sbmlpetmpi/.
NASA Astrophysics Data System (ADS)
Luke, Adam; Vrugt, Jasper A.; AghaKouchak, Amir; Matthew, Richard; Sanders, Brett F.
2017-07-01
Nonstationary extreme value analysis (NEVA) can improve the statistical representation of observed flood peak distributions compared to stationary (ST) analysis, but management of flood risk relies on predictions of out-of-sample distributions for which NEVA has not been comprehensively evaluated. In this study, we apply split-sample testing to 1250 annual maximum discharge records in the United States and compare the predictive capabilities of NEVA relative to ST extreme value analysis using a log-Pearson Type III (LPIII) distribution. The parameters of the LPIII distribution in the ST and nonstationary (NS) models are estimated from the first half of each record using Bayesian inference. The second half of each record is reserved to evaluate the predictions under the ST and NS models. The NS model is applied for prediction by (1) extrapolating the trend of the NS model parameters throughout the evaluation period and (2) using the NS model parameter values at the end of the fitting period to predict with an updated ST model (uST). Our analysis shows that the ST predictions are preferred, overall. NS model parameter extrapolation is rarely preferred. However, if fitting period discharges are influenced by physical changes in the watershed, for example from anthropogenic activity, the uST model is strongly preferred relative to ST and NS predictions. The uST model is therefore recommended for evaluation of current flood risk in watersheds that have undergone physical changes. Supporting information includes a MATLAB® program that estimates the (ST/NS/uST) LPIII parameters from annual peak discharge data through Bayesian inference.
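A stationary LPIII fit can be sketched by fitting a Pearson type III distribution to the log-discharges; this uses maximum likelihood rather than the Bayesian inference of the study, and the discharge record below is synthetic:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Hypothetical annual peak discharges (m^3/s); LPIII models Pearson III
# behavior of the log10-transformed flows
q = rng.lognormal(mean=5.0, sigma=0.5, size=100)
logq = np.log10(q)

# Stationary fit: skew (shape), location, and scale of the LPIII distribution
skew, loc, scale = stats.pearson3.fit(logq)

# 100-year flood estimate: the 0.99 quantile in log space, back-transformed
q100 = 10 ** stats.pearson3.ppf(0.99, skew, loc=loc, scale=scale)
```

A nonstationary variant would let loc (or scale) vary with time and, as the study notes, extrapolating such trends out of sample is rarely preferable to the stationary or updated-stationary predictions.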
Probabilistic Flexural Fatigue in Plain and Fiber-Reinforced Concrete
Ríos, José D.
2017-01-01
The objective of this work is two-fold. First, we attempt to fit the experimental data on the flexural fatigue of plain and fiber-reinforced concrete with a probabilistic model (Saucedo, Yu, Medeiros, Zhang and Ruiz, Int. J. Fatigue, 2013, 48, 308–318). This model was validated for compressive fatigue at various loading frequencies, but not for flexural fatigue. Since the model is probabilistic, it is not necessarily related to the specific mechanism of fatigue damage, but rather generically explains the fatigue distribution in concrete (plain or reinforced with fibers) for damage under compression, tension or flexion. In this work, more than 100 series of flexural fatigue tests in the literature are fit with excellent results. Since the distribution of monotonic tests was not available in the majority of cases, a two-step procedure is established to estimate the model parameters based solely on fatigue tests. The coefficient of regression was more than 0.90 except for particular cases where not all tests were strictly performed under the same loading conditions, which confirms the applicability of the model to flexural fatigue data analysis. Moreover, the model parameters are closely related to fatigue performance, which demonstrates the predictive capacity of the model. For instance, the scale parameter is related to flexural strength, which improves with the addition of fibers. Similarly, fiber increases the scattering of fatigue life, which is reflected by the decreasing shape parameter. PMID:28773123
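The scale and shape parameters discussed above belong to a Weibull-type fatigue-life distribution; a minimal sketch of recovering them from lives at a single stress level (synthetic data, and a plain two-parameter Weibull rather than the full Saucedo et al. model):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
# Hypothetical fatigue lives (cycles to failure) at one stress level
lives = stats.weibull_min.rvs(1.5, scale=2.0e5, size=60, random_state=rng)

# Two-parameter Weibull fit (location fixed at 0): the shape parameter
# reflects scatter in fatigue life, the scale parameter characteristic life
shape, loc, scale = stats.weibull_min.fit(lives, floc=0)
```

In the abstract's terms, fibers raise the scale parameter (higher flexural strength) while increased scatter in fatigue life shows up as a lower shape parameter.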